Q: Unable to run assembly program I have just started reading Introduction to 80x86 Assembly Language and Computer Architecture. I am trying to use NASM to run the first example shown in chapter 3, but unsuccessfully. Has anyone read the book and run the examples?
A: According to Google Books, you should be using MASM, not NASM. Try that.
For reference guys, page 47 here.
A: I prefer NASM over MASM. There are significant differences between them, especially when it comes to things like variables and procedures (they are not really part of the NASM approach). You'll have to make up your own mind about which to use.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Visual Studio support for new C / C++ standards? I keep reading about C99 and C++11 and all these totally sweet things that are getting added to the language standard that might be nice to use someday. However, we currently languish in the land of writing C++ in Visual Studio.
Will any of the new stuff in the standard ever get added to visual studio, or is Microsoft more interested in adding new C# variants to do that?
Edit: In addition to the accepted answer, I found the Visual C++ team blog:
http://blogs.msdn.com/vcblog/
And specifically, this post in it:
https://web.archive.org/web/20190109064523/https://blogs.msdn.microsoft.com/vcblog/2008/02/22/tr1-slide-decks/
Very useful. Thanks!
A: I've been involved in the ISO C++ work (2000-2005), and Microsoft made significant contributions to that language. There's no doubt they will work on C++0x, but they'll need a bit more time than, say, Intel. Microsoft has to deal with a larger codebase that often uses their proprietary extensions. This simply makes for a longer test phase. Yet, they will support most of C++0x eventually (export still isn't loved though, or so I understand).
When it comes to ISO C, the people working on the standard are not representative of Microsoft's market. Microsoft's customers can use C++98 if they're just looking for a better C. So why would Microsoft spend money on C99? Sure, Microsoft cherry-picked parts, but that's sane business. They'd need those for C++0x anyway, so why wait?
A: MSVC support for C is unfortunately very lacking. It only supports the portion of C99 that is a subset of C++... which means that, for example, it is physically impossible to compile ffmpeg or its libav* libraries in MSVC, because they use many C99 features such as named struct elements. This is made worse by the fact that libavcodec also requires a compiler that maintains stack alignment, which MSVC doesn't.
I work on x264, which unlike ffmpeg does make an effort to support MSVC, though doing so has often been a nightmare in and of itself. It doesn't maintain stack alignment even if you explicitly pass the highest function call through an assembly-based stack alignment function, so all functions that require an aligned stack have to be disabled. It's also been very annoying that I cannot use vararrays either; perhaps this is for the best, since apparently GCC massively pessimizes them performance-wise.
A: A more recent post about MSVC's C++11 feature compatibility for MSVC 2010 and 2011 is now online.
A: Microsoft has never expressed any real interest in keeping up-to-speed with the c99-standard (which is getting old by now). Sad for C-programmers, but I suspect that Microsoft cares more for the C++-community.
A: Visual C++ 2008 SP1 contains parts of TR1 at least, and from time to time, the Visual C++ team is blogging or talking about C++0x, so I guess they will support it at some time in the future. I didn't read anything official though.
A: Updated information on this:
There is now (10 Nov 2008) a "Community Tech Preview" (CTP) of VS2010 which contains a preview of VC10 that has some parts of C++0x implemented (note that VC10 will not have the full set of C++0x changes implemented even when VC10 is released):
http://www.microsoft.com/downloads/details.aspx?FamilyId=922B4655-93D0-4476-BDA4-94CF5F8D4814&displaylang=en
Some details on what's new in the VC10 CTP:
*Visual Studio 2010 CTP released
*Lambdas, auto, and static_assert: C++0x Features in VC10, Part 1
As noted in the above article, "The Visual C++ compiler in the Microsoft Visual Studio 2010 September Community Technology Preview (CTP) contains support for four C++0x language features, namely:"
*lambdas,
*auto,
*static_assert,
*rvalue references
A: Herb Sutter is both the chair and a very active member of the C++ standardisation committee, as well as a software architect on Visual Studio for Microsoft.
He is among the authors of the new C++ memory model standardised for C++0x. For example, the following papers:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2669.htm
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2197.pdf
have his name on them. So I guess the inclusion of C++0x on Windows is assured as long as H. Sutter remains at Microsoft.
As for C99 only partly included in Visual Studio, I guess this is a question of priorities.
*Most interesting C99 features are already present in C++ (inlining, variable declaration anywhere, // comments, etc.) and probably already usable in C in Visual Studio (If only doing C code within the C++ compiler). See my answer here for a more complete discussion about C99 features in C++.
*C99 increases the divergence between C and C++ by adding features already existing in C++, but in an incompatible way (sorry, but the boolean and complex implementations in C99 are laughable, at best... See http://david.tribble.com/text/cdiffs.htm for more information)
*The C community on Windows seems non-existent or not important enough to be acknowledged
*The C++ community on Windows seems too important to be ignored
*.NET is the way Microsoft wants people to program on Windows. This means C#, VB.NET, perhaps C++/CLI.
So, if I were Microsoft, why would I implement features few people will ever use when the same features are already offered in more community-active languages already used by most people?
Conclusion?
C++0x will be included, as an extension of VS 2008, or in the next generation (generations?) of Visual Studio.
The C99 features not already implemented won't be implemented in the next few years, unless something dramatic happens (a country full of C99 developers appears out of nowhere?)
Edit 2011-04-14
Apparently, the "country full of C99 developers" already exist: http://blogs.msdn.com/vcblog/archive/2007/11/05/iso-c-standard-update.aspx#6415401
^_^
Still, the last comment at: http://blogs.msdn.com/vcblog/archive/2007/11/05/iso-c-standard-update.aspx#6828778 is clear enough, I guess.
Edit 2012-05-03
Herb Sutter made it clear that:
*Our primary goal is to support "most of C99/C11 that is a subset of ISO C++98/C++11."
*We also for historical reasons ship a C90 compiler which accepts (only) C90 and not C++
*We do not plan to support ISO C features that are not part of either C90 or ISO C++.
The blog post adds links and further explanations for those decisions.
Source: http://herbsutter.com/2012/05/03/reader-qa-what-about-vc-and-c99/
A: Herb Sutter is the chairman of the ISO C++ standards body and also works for Microsoft. I don't know about the Visual Studio C standard - mainly because I never use plain C - but Microsoft is sure trying to push the new C++ standard forward. Evidence of this is - like OregonGhost mentioned - the TR1 that is included in the latest Visual Studio Service Release.
A: The Visual C++ team did put out a table of C++0x features that the 2010 release supports at http://blogs.msdn.com/b/vcblog/archive/2010/04/06/c-0x-core-language-features-in-vc10-the-table.aspx. Since there can be a lag time between the spec and the implementation, that seems pretty reasonable. Wikipedia has a nice article about the spec. It's not finished at the time I'm writing this.
A: Starting from VC2013 Preview 1, C99, a more diversified set of C++11 features and some newly introduced C++14 features are supported. Check out the official blog for more details: http://blogs.msdn.com/b/vcblog/archive/2013/06/27/what-s-new-for-visual-c-developers-in-vs2013-preview.aspx
Update:
From https://news.ycombinator.com/item?id=9434483 (Stephan T. Lavavej, aka STL, is the maintainer of the STL on the VC team):
Specifically, in 2015 our C99 Standard Library implementation is complete, except for tgmath.h (irrelevant in C++) and the CX_LIMITED_RANGE/FP_CONTRACT pragma macros.
Check this post out for details: http://blogs.msdn.com/b/vcblog/archive/2015/04/29/c-11-14-17-features-in-vs-2015-rc.aspx.
A: MS has a series of public replies to this, most of them blaming their users. Like this one:
https://devblogs.microsoft.com/cppblog/iso-c-standard-update/
Now, the Visual C++ compiler team receives the occasional question as to why we haven't implemented C99. It's really based on interest from our users. Where we've received many requests for certain C99 features, we've tried to implement them (or analogues). A couple of examples are variadic macros, long long, __pragma, __FUNCTION__, and __restrict. If there are other C99 features that you'd find useful in your work, let us know! We don't hear much from our C users, so speak up and make yourselves heard.
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=345360
Hi: unfortunately the overwhelming feedback we get from the majority of our users is that they would prefer that we focus on C++-0x instead of on C-99. We have "cherry-picked" certain popular C-99 features (variadic macros, long long) but beyond this we are unlikely to do much more in the C-99 space (at least in the short-term).
Jonathan Caves
Visual C++ Compiler Team.
This is a pretty sad state of affairs, but also makes sense if you suspect MS wants to lock users in: it makes it very hard to port modern gcc-based code into MSVC, which at least I find extremely painful.
A workaround exists, though: note that Intel is much more enlightened on this. The Intel C compiler can handle C99 code and even has the same flags as gcc, making it much easier to port code between platforms. Also, the Intel compiler works in Visual Studio. So by scrapping the MS compiler you can still use the MS IDE that you seem to think has some kind of value, and use C99 to your heart's content.
A more sensible approach is honestly to move over to Intel CC or gcc, and use Eclipse for your programming environment. Portability of code across Windows-Linux-Solaris-AIX-etc is usually important in my experience, and that is not at all supported by MS tools, unfortunately.
A: The Visual C++ Blog provides a lot of information on several interesting points regarding the support of C++11 in VC++11, including several tables:
*C++11 Core Language Features
*C++11 Core Language Features: Concurrency
*C++11 Core Language Features: C99
*x86 Container Sizes (Bytes)
*x64 Container Sizes (Bytes)
Visual C++ Team Blog, C++11 Features in Visual C++ 11
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "103"
} |
Q: Server cert and Client Truststore I am trying to call a webservice using ssl.
How do I get the relevant server cert so that I can import it into my truststore?
I know about the use of the property com.ibm.ssl.enableSignerExchangePrompt from a main method, but I would rather add the server cert to my truststore manually.
I don't want this property set in any of my servlets.
Any help is greatly appreciated
Thanks
Damien
A: You can do this programmatically in Java by implementing your own X509TrustManager.
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

public class dummyTrustManager implements X509TrustManager {
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        // do nothing: trust any client
    }
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        // do nothing: trust any server
    }
    public X509Certificate[] getAcceptedIssuers() {
        // just return an empty issuer list
        return new X509Certificate[0];
    }
}
Then you can use this trust manager to create an SSL socket:
SSLContext context = SSLContext.getInstance("SSL");
context.init(null, new TrustManager[] { new dummyTrustManager() },
new java.security.SecureRandom());
SSLSocketFactory factory = context.getSocketFactory();
InetAddress addr = InetAddress.getByName(host_);
SSLSocket sock = (SSLSocket)factory.createSocket(addr, port_);
Then with that socket you can just extract the server certificate (and import it into the trusted keystore):
SSLSession session = sock.getSession();
Certificate[] certchain = session.getPeerCertificates();
A: If you browse to the site in your web browser you can look at the security info by hitting the little padlock icon and in the dialog that pops up you can save the certificate.
Steps for Chrome
*Click the padlock(in the address bar)
*Click 'Certificate Information'
*Under the 'Details' tab you can select 'Copy to file...'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the best way to produce a random double on POSIX? I'd like to get a uniform distribution in the range [0.0, 1.0).
If possible, please let the implementation make use of random bytes from /dev/urandom.
It would also be nice if your solution was thread-safe. If you're not sure, please indicate that.
See some solution I thought about after reading other answers.
A: This seems to be a pretty good way:
unsigned short int r1, r2, r3;
// let r1, r2 and r3 hold random values
double result = ldexp(r1, -48) + ldexp(r2, -32) + ldexp(r3, -16);
This is based on NetBSD's drand48 implementation.
A: Simple: A double has 52 bits of precision assuming IEEE. So generate a 52 bit (or larger) unsigned random integer (for example by reading bytes from dev/urandom), convert it into a double and divide it by 2^(number of bits it was).
This gives a numerically uniform distribution (in that the probability of a value being in a given range is proportional to the range) down to the 52nd binary digit.
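A minimal C sketch of this approach, reading bits from /dev/urandom as suggested in the question (the function name is mine and error handling is trimmed for brevity):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch of the "simple" method: take 52 random bits and scale by 2^-52. */
double random_double_simple(void) {
    uint64_t bits = 0;
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL || fread(&bits, sizeof bits, 1, f) != 1) {
        if (f != NULL) fclose(f);
        return -1.0; /* real code should report the error properly */
    }
    fclose(f);
    bits &= (UINT64_C(1) << 52) - 1;   /* keep only 52 random bits */
    return ldexp((double)bits, -52);   /* bits / 2^52, in [0.0, 1.0) */
}
```

Each fopen()/fread() pair is independent, so this particular sketch is as thread-safe as the underlying stdio calls.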
Complicated: However, there are a lot of double values in the range [0,1) which the above cannot generate. To be specific, half the values in the range [0,0.5) (the ones that have their least significant bit set) can't occur. Three quarters of the values in the range [0,0.25) (the ones that have either of their least 2 bits set) can't occur, etc, all the way to only one positive value less than 2^-51 being possible, despite a double being capable of representing squillions of such values. So it can't be said to be truly uniform across the specified range to full precision.
Of course we don't want to choose one of those doubles with equal probability, because then the resulting number will on average be too small. We still need the probability of the result being in a given range to be proportional to the range, but with a higher precision on what ranges that works for.
I think the following works. I haven't particularly studied or tested this algorithm (as you can probably tell by the way there's no code), and personally I wouldn't use it without finding proper references indicating it's valid. But here goes:
*Start the exponent off at 52 and choose a 52-bit random unsigned integer (assuming 52 bits of mantissa).
*If the most significant bit of the integer is 0, increase the exponent by one, shift the integer left by one, and fill the least significant bit in with a new random bit.
*Repeat until either you hit a 1 in the most significant place, or else the exponent gets too big for your double (1023. Or possibly 1022).
*If you found a 1, divide your value by 2^exponent. If you got all zeroes, return 0 (I know, that's not actually a special case, but it bears emphasis how very unlikely a 0 return is). [Edit: actually it might be a special case - it depends on whether or not you want to generate denorms. If not, then once you have enough 0s in a row you discard anything left and return 0. But in practice this is so unlikely as to be negligible, unless the random source isn't random.]
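The steps above could be sketched in C roughly like this (my own untested rendering, in the same hedged spirit as the answer; the random bits are pulled from /dev/urandom and the function names are made up):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* One random bit at a time, buffered from /dev/urandom. */
static int random_bit(FILE *f) {
    static unsigned char byte;
    static int bits_left = 0;
    if (bits_left == 0) {
        if (fread(&byte, 1, 1, f) != 1) return 0; /* treat read failure as 0 */
        bits_left = 8;
    }
    bits_left--;
    return (byte >> bits_left) & 1;
}

/* Start with a 52-bit mantissa; while its top bit is 0 (and the exponent
   hasn't hit the double's limit), grow the exponent and pull in a fresh
   random bit, then divide by 2^exponent at the end. */
double random_double_full(void) {
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL) return -1.0;
    uint64_t m = 0;
    int i, exponent = 52;
    for (i = 0; i < 52; i++)
        m = (m << 1) | (uint64_t)random_bit(f);
    while (!(m & (UINT64_C(1) << 51)) && exponent < 1022) {
        exponent++;
        m = ((m << 1) | (uint64_t)random_bit(f)) & ((UINT64_C(1) << 52) - 1);
    }
    fclose(f);
    return ldexp((double)m, -exponent);  /* m / 2^exponent */
}
```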
I don't know whether there's actually any practical use for such a random double, mind you. Your definition of random should depend to an extent what it's for. But if you can benefit from all 52 of its significant bits being random, this might actually be helpful.
A: Reading from files is thread-safe AFAIK, so using fopen() to read from /dev/urandom will yield "truly random" bytes.
Although there might be potential gotchas, methinks any set of such bytes accessed as an integer, divided by the maximum integer of that size, will yield a floating point value between 0 and 1 with approximately that distribution.
Eg:
#include <limits.h>
#include <stdint.h>
#include <stdio.h>
...
FILE* f = fopen("/dev/urandom", "r");
uint32_t i;
fread(&i, sizeof(i), 1, f); // check return value in real world code!!
fclose(f);
double theRandomValue = i / (double) (UINT32_MAX);
A: #include <stdlib.h>
#include <stdio.h>

printf("%f\n", drand48());
/dev/random:
#include <fcntl.h>
#include <math.h>
#include <unistd.h>

double c;
unsigned int a, b;
int fd = open("/dev/random", O_RDONLY);
read(fd, &a, sizeof(a)); // check return values in real code
read(fd, &b, sizeof(b));
close(fd);
if (a > b)
    c = fabs((double)b / (double)a);
else
    c = fabs((double)a / (double)b);
A: The trick is you need a 54-bit randomizer that meets your requirements. A few lines of code with a union to stick those bits in the mantissa and you have your number. The trick is not the double float; the trick is your desired randomizer.
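One way to read this suggestion as code (a sketch only: it assumes IEEE 754 doubles with a 52-bit mantissa field rather than 54 bits, and it leaves the randomizer itself, the part the answer calls the real trick, up to you):

```c
#include <stdint.h>

/* Union trick: force the exponent field to that of 1.0 (0x3FF) and fill
   the 52 mantissa bits with random bits, giving a double in [1.0, 2.0);
   subtracting 1.0 maps it to [0.0, 1.0). `r` is assumed to come from
   whatever randomizer you trust. */
double bits_to_unit_double(uint64_t r) {
    union { uint64_t u; double d; } conv;
    conv.u = (UINT64_C(0x3FF) << 52) | (r & ((UINT64_C(1) << 52) - 1));
    return conv.d - 1.0;
}
```

Type-punning through a union is well defined in C (though not in C++), which is why this trick is usually written with a union rather than a pointer cast.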
A: /dev/urandom is not POSIX, and is not generally available.
The standard way of generating a double uniformly in [0,1) is to generate an integer in the range [0,2^N) and divide by 2^N. So pick your favorite random number generator and use it. For simulations, mine is the Mersenne Twister, as it is extremely quick, yet still not well correlated. Actually, it can do this for you, and even has a version that will give more precision for the smaller numbers. Typically you give it a seed to start with, which helps for repeatability for debugging or showing others your results. Of course, you can have your code grab a random number from /dev/urandom as the seed if one isn't specified.
For cryptographic purposes you should use one of the standard cryptographic libraries out there instead, such as OpenSSL, which will indeed use /dev/urandom when available.
As to thread safety, most won't be, at least with the standard interfaces, so you'll need to build a layer on top, or only use them in one thread. The ones that are thread safe have you supply a state that they modify, so that instead you are effectively running multiple non-interacting random number generators, which may not be what you are looking for.
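For what it's worth, POSIX's erand48() is an example of the state-supplying style described above: each thread keeps its own 48-bit state array, so calls on different states never interact (the seed values below are arbitrary placeholders):

```c
#include <stdlib.h>

/* erand48() takes its 48-bit state as an argument, so giving each thread
   its own state array makes it usable without locks. Returns a uniform
   double in [0.0, 1.0). */
double uniform_from_state(unsigned short state[3]) {
    return erand48(state);
}
```

As the answer notes, this effectively runs several independent generators rather than one shared stream, which may or may not be what you want.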
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is a good search engine for embedding in a web site I am thinking of changing my web site's homegrown search engine. Before I break out Visual Studio, I wondered if anyone can suggest an alternative that gives me what I need. This being:
*Works with an ASP.NET site (is a .NET project)
*Creates a file-based index
*Fast search across hundreds or thousands of pages
*Performs word-stemming to find variations upon words
*Gives full control over the output styles
*Is cheap (or better still, free!)
A: You can't really beat Google Site Search for this. It's fully customizable - and no need for embedding or maintaining.
EDIT: found this ASP.NET open-source search engine that you can take and run with. In response to your comment about knowing what Google does, this is well documented and they have tons of webmaster tools for you.
A: The .NET version of Lucene is what we've been using. It meets all of your criteria.
A: First I would agree with Google Site Search.
However, if you want to search on criteria that Google might not see (like stuff in the database, etc), then you might look at Lucene.net. It is a port of the Java Lucene project:
Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java (Lucene.Net is its .NET port). It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.
It is free under the Apache license.
A: Why not try out Google Custom Search Engine? If you want ultimate control over the indexing, you can create your own search engine using Lucene.Net.
A: From my question "In-house full-text search engine for source code and SQL scripts":
I use Hyper Estraier, but
Namazu is also well-known.
There are also ht://Dig, Lucene, Xapian, etc.. but I don't know too much about them.
A: DTSearch engine at http://www.dtsearch.com/ is a solid engine that is easy to develop against. Although it does cost money.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Java printf functionality for collections or arrays In python you can use a tuple in a formatted print statement and the tuple values are used at the indicated positions in the formatted string. For example:
>>> a = (1,"Hello",7.2)
>>> print "these are the values %d, %s, %f" % a
these are the values 1, Hello, 7.200000
Is there some way to use any array or collection in a java printf statement in a similar way?
I've looked at the documentation and it appears to have built in support for some types like Calendar, but I don't see anything for collections.
If this isn't provided in java, is there any java idiom that would be used in a case like this where you are populating collections and then printing the values from many collections using one format string (other than just nested looping)?
A: You might be interested in the MessageFormat class too.
A: printf will have a declaration along the lines of:
public PrintStream printf(String format, Object... args);
Object... means much the same as Object[]. The difference is that ... allows the caller to omit explicitly creating an array. So consider:
out.printf("%s:%s", a, b);
That is the equivalent of:
out.printf("%s:%s", new Object[] { a, b });
So, getting back to your question, for an array, you can just write:
out.printf("%s:%s", things);
For a collection:
out.printf("%s:%s", things.toArray());
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Annotations on Interfaces? I can't figure out a use case for being able to annotate interfaces in Java.
Maybe someone could give me an example?
A: A use case that I am working with is javax/hibernate bean validation, we are using interfaces to help us on avoiding to define validations on every specific class.
public interface IUser {
@NotNull Long getUserId();
...
}
public class WebUser implements IUser {
private Long userId;
@Override
public Long getUserId(){
return userId;
}
...
}
A: I use FindBugs extensively. I find the use of annotations to specify nullity constraints very useful. Even if you don't actually use FindBugs, it makes the intent of the code much clearer. Those annotations have their place on interfaces as much as classes. You could argue that it is a kind of programming by contract...
A: Even without examples, it should be clear to explain - interfaces describe behaviour, and so can annotations, so it's a logical match to put them together.
A: I've used it in Spring to annotate interfaces where the annotation should apply to all subclasses. For example, say you have a Service interface and you might have multiple implementations of the interface but you want a security annotation to apply regardless of the annotation. In that case, it makes the most sense to annotate the interface.
A: More an example, but Local and Remote annotations in EJB3. According to the java doc,
When used on an interface, designates
that interface as a local business
interface.
I guess the use case here is that the interface has a special function best denoted by an annotation.
A: I've seen a lot of research tools use method annotations to allow users to specify protocols, restrictions, etc. to facilitate automatic checking later.
Since annotations don't dictate what you can do with them, there is no good reason not to allow users to annotate interfaces.
A: You could use it for contract style programming - go one step further than just defining the interface (types and method names) and also define some semantics (contents of the types, preconditions, postconditions).
I'd have to check up on how annotations work in Java though, but this stuff could easily be done with Python annotations...
A: When deploying applications under JBoss you can use annotations on a interface to export a service in the jmx-console.
A: I have used it when defining a REST interface. RESTeasy allows you to create REST client that uses an annotated interface (RESTInterfaceV1 in this example):
final RESTInterfaceV1 client = ProxyFactory.create(RESTInterfaceV1.class, "http://localhost:8080/rest");
Although the annotations have to be duplicated on object that actually implements the REST interface, the interface itself can be distributed separately to those wanting to make a Java REST client.
A: I mark my interfaces with @Immutable to clearly indicate to developers of subclasses which Mutability contract a class implementing the interface should have. As gehel puts it, programming by contract.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How do I execute any command editing its file (argument) "in place" using bash? I have a file temp.txt, that I want to sort with the sort command in bash.
I want the sorted results to replace the original file.
This doesn't work for example (I get an empty file):
sort temp.txt > temp.txt
Can this be done in one line without resorting to copying to temporary files?
EDIT: The -o option is very cool for sort. I used sort in my question as an example. I run into the same problem with other commands:
uniq temp.txt > temp.txt
Is there a better general solution?
A: Here you go, one line:
sort temp.txt > temp.txt.sort && mv temp.txt.sort temp.txt
Technically there's no copying to a temporary file, and the 'mv' command should be instant.
A: I like the sort file -o file answer but don't want to type the same file name twice.
Using BASH history expansion:
$ sort file -o !#^
grabs the current line's first arg when you press enter.
A unique sort in-place:
$ sort -u -o file !#$
grabs the last arg in the current line.
A: Many have mentioned the -o option. Here is the man page part.
From the man page:
-o output-file
Write output to output-file instead of to the standard output.
If output-file is one of the input files, sort copies it to a
temporary file before sorting and writing the output to output-
file.
A: This would be highly memory-constrained for large files, but you could use awk to store the intermediate data in memory, and then write it back out.
uniq temp.txt | awk '{line[i++] = $0}END{for(j=0;j<i;j++){print line[j]}}' > temp.txt
A: An alternative to sponge with the more common sed:
sed -ni r<(command file) file
It works for any command (sort, uniq, tac, ...) and uses the very well known sed's -i option (edit files in-place).
Warning: Try command file first because editing files in-place is not safe by nature.
Explanation
Firstly, you're telling sed not to print the (original) lines (-n option), and with the help of the sed's r command and bash's Process Substitution, the generated content by <(command file) will be the output saved in place.
Making things even easier
You can wrap this solution into a function:
ip_cmd() { # in place command
CMD=${1:?You must specify a command}
FILE=${2:?You must specify a file}
sed -ni r<("$CMD" "$FILE") "$FILE"
}
Example
$ cat file
d
b
c
b
a
$ ip_cmd sort file
$ cat file
a
b
b
c
d
$ ip_cmd uniq file
$ cat file
a
b
c
d
$ ip_cmd tac file
$ cat file
d
c
b
a
$ ip_cmd
bash: 1: You must specify a command
$ ip_cmd uniq
bash: 2: You must specify a file
A: A sort needs to see all input before it can start to output. For this reason, the sort program can easily offer an option to modify a file in-place:
sort temp.txt -o temp.txt
Specifically, the documentation of GNU sort says:
Normally, sort reads all input before opening output-file, so you can safely sort a file in place by using commands like sort -o F F and cat F | sort -o F. However, sort with --merge (-m) can open the output file before reading all input, so a command like cat F | sort -m -o F - G is not safe as sort might start writing F before cat is done reading it.
While the documentation of BSD sort says:
If [the] output-file is one of the input files, sort copies it to a temporary file before sorting and writing the output to [the] output-file.
Commands such as uniq can start writing output before they finish reading the input. These commands typically do not support in-place editing (and it would be harder for them to support this feature).
You typically work around this with a temporary file, or if you absolutely want to avoid having an intermediate file, you could use a buffer to store the complete result before writing it out. For example, with perl:
uniq temp.txt | perl -e 'undef $/; $_ = <>; open(OUT,">temp.txt"); print OUT;'
Here, the perl part reads the complete output from uniq in variable $_ and then overwrites the original file with this data. You could do the same in the scripting language of your choice, perhaps even in Bash. But note that it will need enough memory to store the entire file, this is not advisable when working with large files.
A: Here's a more general approach, works with uniq, sort and whatnot.
{ rm file && uniq > file; } < file
A: Read up on the non-interactive editor, ex.
A: sort temp.txt -o temp.txt
A: Tobu's comment on sponge warrants being an answer in its own right.
To quote from the moreutils homepage:
Probably the most general purpose tool in moreutils so far is sponge(1), which lets you do things like this:
% sed "s/root/toor/" /etc/passwd | grep -v joey | sponge /etc/passwd
However, sponge suffers from the same problem Steve Jessop comments on here. If any of the commands in the pipeline before sponge fail, then the original file will be written over.
$ mistyped_command my-important-file | sponge my-important-file
mistyped-command: command not found
Uh-oh, my-important-file is gone.
A: Use the argument --output= or -o
Just tried on FreeBSD:
sort temp.txt -otemp.txt
A: To add the uniq capability, what are the downsides to:
sort inputfile | uniq | sort -o inputfile
A: If you insist on using the sort program, you have to use an intermediate file -- I don't think sort has an option for sorting in memory. Any other trick with stdin/stdout will fail unless you can guarantee that the buffer size for sort's stdin is big enough to fit the entire file.
Edit: shame on me. sort temp.txt -o temp.txt works excellent.
A: Another solution:
uniq file 1<> file
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "111"
} |
Q: Way to stop a program from starting up using C#? Here's the idea: I'd like to make a service that will look for a certain program starting up and disallow it unless certain conditions are met.
Let's say I have a game I want to stop myself from playing during the week. So if I start it up on any day other than Friday/Saturday/Sunday, it will intercept and cancel. Is this possible with C#?
Main thing I am looking for is how to catch a program starting up, rest should be easy.
A: Well, you can definitely determine which programs are running by looking for the process names you want (GetProcessesByName()) and killing them.
// Requires: using System.Diagnostics;
Process[] processes = Process.GetProcessesByName(processName);
foreach (Process process in processes)
{
    process.Kill();
}
You could just have a list of them you didn't want to run, do the time check (or whatever criteria was to be met) and then walk the list. I did something like this once for a test and it works well enough.
A: I don't know about C# in particular here, but one way you could accomplish this (albeit a dangerous one) is by using the Image File Execution Options (http://blogs.msdn.com/junfeng/archive/2004/04/28/121871.aspx) in the registry. For whatever executable you are interested in intercepting, you could set the Debugger option for it and then create a small application to be used as the debugger, which would essentially filter these calls. If you wanted to allow it to run, start up the process; otherwise do whatever you like. I've never attempted this, but it seems like it could do what you want.
Or if you wanted to react to the process starting up and then close it down, you could use a ProcessWatcher (http://weblogs.asp.net/whaggard/archive/2006/02/11/438006.aspx) and subscribe to the process-created event, then close the process down if needed. However, that is a reactive approach rather than a proactive one like the first.
A: I'm not sure if you can catch it starting up, but you could try to look for the program in the list of windows (was it ENUM_WINDOWS? I can never remember) and shut it down as soon as it shows up.
You could probably even do this in AutoIt!
Drag out the Petzold and have some fun with windows...
EDIT: Check out sysinternals for source on how to list the active processes - do this in a loop and you can catch your program when it starts. I don't think the official site still has the source though, but it's bound to be out there somewhere...
A: Hrm, rather than intercepting it's startup, maybe your service could somehow sabotage the executable (i.e. muck with the permissions, rename the EXE, replace the EXE with something that scolds you for your weak will, etc).
If that's not good enough, you can try one of these approaches:
http://www.codeproject.com/KB/system/win32processusingwmi.aspx
http://msdn.microsoft.com/en-us/magazine/cc188966.aspx
A: There's always hooks. Although there's no support for it in the .net library you can always wrap the win32 dll. Here's a nice article: http://msdn.microsoft.com/sv-se/magazine/cc188966(en-us).aspx
This is low-level programming, and if you want your code to be portable i wouldn't use hooks.
A: This app will do just that Kill Process from Orange Lamp
I'm not the author.
A: You don't necessarily need to intercept start up programs. You can write a "program start up manager" app that launches the programs for you (if they are white-listed). After you write the app, all you would need to do is modify your application shortcuts to point to your start up manager with the proper parameters.
A: If you just need to disable the application, you can edit the registry to try and attach a debugger to that application automatically, if the debugger doesn't exist, windows will complain and bail out.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options] is the key, look up MS article http://support.microsoft.com/default.aspx?kbid=238788 for more info.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What are POD types in C++? I've come across this term POD-type a few times.
What does it mean?
A: POD stands for Plain Old Data - that is, a class (whether defined with the keyword struct or the keyword class) without constructors, destructors and virtual member functions. Wikipedia's article on POD goes into a bit more detail and defines it as:
A Plain Old Data Structure in C++ is an aggregate class that contains only PODS as members, has no user-defined destructor, no user-defined copy assignment operator, and no nonstatic members of pointer-to-member type.
Greater detail can be found in this answer for C++98/03. C++11 changed the rules surrounding POD, relaxing them greatly, thus necessitating a follow-up answer here.
A: The concept of POD and the type trait std::is_pod will be deprecated in C++20. See this question for further information.
A: Very informally:
A POD is a type (including classes) where the C++ compiler guarantees that there will be no "magic" going on in the structure: for example hidden pointers to vtables, offsets that get applied to the address when it is cast to other types (at least if the target's POD too), constructors, or destructors. Roughly speaking, a type is a POD when the only things in it are built-in types and combinations of them. The result is something that "acts like" a C type.
Less informally:
*
*int, char, wchar_t, bool, float, double are PODs, as are long/short and signed/unsigned versions of them.
*pointers (including pointer-to-function and pointer-to-member) are PODs,
*enums are PODs
*a const or volatile POD is a POD.
*a class, struct or union of PODs is a POD provided that all non-static data members are public, and it has no base class and no constructors, destructors, or virtual methods. Static members don't stop something being a POD under this rule. This rule has changed in C++11 and certain private members are allowed: Can a class with all private members be a POD class?
*Wikipedia is wrong to say that a POD cannot have members of type pointer-to-member. Or rather, it's correct for the C++98 wording, but TC1 made explicit that pointers-to-member are POD.
Formally (C++03 Standard):
3.9(10): "Arithmetic types (3.9.1), enumeration types, pointer types, and pointer to member types (3.9.2) and cv-qualified versions of these types (3.9.3) are collectively called scalar types. Scalar types, POD-struct types, POD-union types (clause 9), arrays of such types and cv-qualified versions of these types (3.9.3) are collectively called POD types"
9(4): "A POD-struct is an aggregate class that has no non-static data members of type non-POD-struct, non-POD-union (or array of such types) or reference, and has no user-defined copy assignment operator and no user-defined destructor. Similarly, a POD-union is an aggregate union that has no non-static data members of type non-POD-struct, non-POD-union (or array of such types) or reference, and has no user-defined copy assignment operator and no user-defined destructor."
8.5.1(1): "An aggregate is an array or class (clause 9) with no user-declared constructors (12.1), no private or protected non-static data members (clause 11), no base classes (clause 10) and no virtual functions (10.3)."
A: Plain Old Data
In short, it is all built-in data types (e.g. int, char, float, long, unsigned char, double, etc.) and all aggregation of POD data. Yes, it's a recursive definition. ;)
To be more clear, a POD is what we call "a struct": a unit or a group of units that just store data.
A:
Why do we need to differentiate between POD's and non-POD's at all?
C++ started its life as an extension of C. While modern C++ is no longer a strict superset of C, people still expect a high level of compatibility between the two. The "C ABI" of a platform also frequently acts as a de-facto standard inter-language ABI for other languages on the platform.
Roughly speaking, a POD type is a type that is compatible with C and, perhaps equally importantly, is compatible with certain ABI optimisations.
To be compatible with C, we need to satisfy two constraints.
*
*The layout must be the same as the corresponding C type.
*The type must be passed to and returned from functions in the same way as the corresponding C type.
Certain C++ features are incompatible with this.
Virtual methods require the compiler to insert one or more pointers to virtual method tables, something that doesn't exist in C.
User-defined copy constructors, move constructors, copy assignments and destructors have implications for parameter passing and returning. Many C ABIs pass and return small parameters in registers, but the references passed to the user-defined constructor/assignment/destructor can only work with memory locations.
So there is a need to define what types can be expected to be "C compatible" and what types cannot. C++03 was somewhat over-strict in this regard: any user-defined constructor would disable the built-in constructors, and any attempt to add them back in would result in them being user-defined and hence the type being non-POD. C++11 opened things up quite a bit, by allowing the user to re-introduce the built-in constructors.
A: Examples of all non-POD cases with static_assert from C++11 to C++17 and POD effects
std::is_pod was added in C++11, so let's consider that standard onwards for now.
std::is_pod will be removed from C++20 as mentioned at https://stackoverflow.com/a/48435532/895245 , let's update this as support arrives for the replacements.
POD restrictions have become more and more relaxed as the standard evolved, I aim to cover all relaxations in the example through ifdefs.
libstdc++ has a tiny bit of testing at: https://github.com/gcc-mirror/gcc/blob/gcc-8_2_0-release/libstdc%2B%2B-v3/testsuite/20_util/is_pod/value.cc but it is just too little. Maintainers: please merge this if you read this post. I'm too lazy to check out all the C++ testsuite projects mentioned at: https://softwareengineering.stackexchange.com/questions/199708/is-there-a-compliance-test-for-c-compilers
#include <type_traits>
#include <array>
#include <cstdint>
#include <vector>
int main() {
#if __cplusplus >= 201103L
// # Not POD
//
// Non-POD examples. Let's just walk all non-recursive non-POD branches of cppreference.
{
// Non-trivial implies non-POD.
// https://en.cppreference.com/w/cpp/named_req/TrivialType
{
// Has one or more default constructors, all of which are either
// trivial or deleted, and at least one of which is not deleted.
{
// Not trivial because we removed the default constructor
// by using our own custom non-default constructor.
{
struct C {
C(int) {}
};
static_assert(std::is_trivially_copyable<C>(), "");
static_assert(!std::is_trivial<C>(), "");
static_assert(!std::is_pod<C>(), "");
}
// No, this is not a default trivial constructor either:
// https://en.cppreference.com/w/cpp/language/default_constructor
//
// The constructor is not user-provided (i.e., is implicitly-defined or
// defaulted on its first declaration)
{
struct C {
C() {}
};
static_assert(std::is_trivially_copyable<C>(), "");
static_assert(!std::is_trivial<C>(), "");
static_assert(!std::is_pod<C>(), "");
}
}
// Not trivial because not trivially copyable.
{
struct C {
C(C&) {}
};
static_assert(!std::is_trivially_copyable<C>(), "");
static_assert(!std::is_trivial<C>(), "");
static_assert(!std::is_pod<C>(), "");
}
}
// Non-standard layout implies non-POD.
// https://en.cppreference.com/w/cpp/named_req/StandardLayoutType
{
// Non static members with different access control.
{
// i is public and j is private.
{
struct C {
public:
int i;
private:
int j;
};
static_assert(!std::is_standard_layout<C>(), "");
static_assert(!std::is_pod<C>(), "");
}
// These have the same access control.
{
struct C {
private:
int i;
int j;
};
static_assert(std::is_standard_layout<C>(), "");
static_assert(std::is_pod<C>(), "");
struct D {
public:
int i;
int j;
};
static_assert(std::is_standard_layout<D>(), "");
static_assert(std::is_pod<D>(), "");
}
}
// Virtual function.
{
struct C {
virtual void f() = 0;
};
static_assert(!std::is_standard_layout<C>(), "");
static_assert(!std::is_pod<C>(), "");
}
// Non-static member that is reference.
{
struct C {
int &i;
};
static_assert(!std::is_standard_layout<C>(), "");
static_assert(!std::is_pod<C>(), "");
}
// Neither:
//
// - has no base classes with non-static data members, or
// - has no non-static data members in the most derived class
// and at most one base class with non-static data members
{
// Non POD because has two base classes with non-static data members.
{
struct Base1 {
int i;
};
struct Base2 {
int j;
};
struct C : Base1, Base2 {};
static_assert(!std::is_standard_layout<C>(), "");
static_assert(!std::is_pod<C>(), "");
}
// POD: has just one base class with non-static member.
{
struct Base1 {
int i;
};
struct C : Base1 {};
static_assert(std::is_standard_layout<C>(), "");
static_assert(std::is_pod<C>(), "");
}
// Just one base class with non-static member: Base1, Base2 has none.
{
struct Base1 {
int i;
};
struct Base2 {};
struct C : Base1, Base2 {};
static_assert(std::is_standard_layout<C>(), "");
static_assert(std::is_pod<C>(), "");
}
}
// Base classes of the same type as the first non-static data member.
// TODO failing on GCC 8.1 -std=c++11, 14 and 17.
{
struct C {};
struct D : C {
C c;
};
//static_assert(!std::is_standard_layout<C>(), "");
//static_assert(!std::is_pod<C>(), "");
};
// C++14 standard layout new rules, yay!
{
// Has two (possibly indirect) base class subobjects of the same type.
// Here C has two base classes which are indirectly "Base".
//
// TODO failing on GCC 8.1 -std=c++11, 14 and 17.
// even though the example was copy pasted from cppreference.
{
struct Q {};
struct S : Q { };
struct T : Q { };
struct U : S, T { }; // not a standard-layout class: two base class subobjects of type Q
//static_assert(!std::is_standard_layout<U>(), "");
//static_assert(!std::is_pod<U>(), "");
}
// Has all non-static data members and bit-fields declared in the same class
// (either all in the derived or all in some base).
{
struct Base { int i; };
struct Middle : Base {};
struct C : Middle { int j; };
static_assert(!std::is_standard_layout<C>(), "");
static_assert(!std::is_pod<C>(), "");
}
// None of the base class subobjects has the same type as
// for non-union types, as the first non-static data member
//
// TODO: similar to the C++11 for which we could not make a proper example,
// but with recursivity added.
// TODO come up with an example that is POD in C++14 but not in C++11.
}
}
}
// # POD
//
// POD examples. Everything that does not fall neatly in the non-POD examples.
{
// Can't get more POD than this.
{
struct C {};
static_assert(std::is_pod<C>(), "");
static_assert(std::is_pod<int>(), "");
}
// Array of POD is POD.
{
struct C {};
static_assert(std::is_pod<C>(), "");
static_assert(std::is_pod<C[]>(), "");
}
// Private member: became POD in C++11
// https://stackoverflow.com/questions/4762788/can-a-class-with-all-private-members-be-a-pod-class/4762944#4762944
{
struct C {
private:
int i;
};
#if __cplusplus >= 201103L
static_assert(std::is_pod<C>(), "");
#else
static_assert(!std::is_pod<C>(), "");
#endif
}
// Most standard library containers are not POD because they are not trivial,
// which can be seen directly from their interface definition in the standard.
// https://stackoverflow.com/questions/27165436/pod-implications-for-a-struct-which-holds-an-standard-library-container
{
static_assert(!std::is_pod<std::vector<int>>(), "");
static_assert(!std::is_trivially_copyable<std::vector<int>>(), "");
// Some might be though:
// https://stackoverflow.com/questions/3674247/is-stdarrayt-s-guaranteed-to-be-pod-if-t-is-pod
static_assert(std::is_pod<std::array<int, 1>>(), "");
}
}
// # POD effects
//
// Now let's verify what effects does PODness have.
//
// Note that this is not easy to do automatically, since many of the
// failures are undefined behaviour.
//
// A good initial list can be found at:
// https://stackoverflow.com/questions/4178175/what-are-aggregates-and-pods-and-how-why-are-they-special/4178176#4178176
{
struct Pod {
uint32_t i;
uint64_t j;
};
static_assert(std::is_pod<Pod>(), "");
struct NotPod {
NotPod(uint32_t i, uint64_t j) : i(i), j(j) {}
uint32_t i;
uint64_t j;
};
static_assert(!std::is_pod<NotPod>(), "");
// __attribute__((packed)) only works for POD, and is ignored for non-POD, and emits a warning
// https://stackoverflow.com/questions/35152877/ignoring-packed-attribute-because-of-unpacked-non-pod-field/52986680#52986680
{
struct C {
int i;
};
struct D : C {
int j;
};
struct E {
D d;
} /*__attribute__((packed))*/;
static_assert(std::is_pod<C>(), "");
static_assert(!std::is_pod<D>(), "");
static_assert(!std::is_pod<E>(), "");
}
}
#endif
}
GitHub upstream.
Tested with:
for std in 11 14 17; do echo $std; g++-8 -Wall -Werror -Wextra -pedantic -std=c++$std pod.cpp; done
on Ubuntu 18.04, GCC 8.2.0.
A: As I understand it, POD (PlainOldData) is just raw data - it does not need:
*
*to be constructed,
*to be destroyed,
*to have custom operators.
*Must not have virtual functions,
*and must not override operators.
How to check if something is a POD? Well, there is a struct for that called std::is_pod:
namespace std {
// Could use is_standard_layout && is_trivial instead of the builtin.
template<typename _Tp>
struct is_pod
: public integral_constant<bool, __is_pod(_Tp)>
{ };
}
(From header type_traits)
Reference:
*
*http://en.cppreference.com/w/cpp/types/is_pod
*http://en.wikipedia.org/wiki/Plain_old_data_structure
*http://en.wikipedia.org/wiki/Plain_Old_C++_Object
*File type_traits
A: A POD (plain old data) object has one of these data types--a fundamental type, pointer, union, struct, array, or class--with no constructor. Conversely, a non-POD object is one for which a constructor exists. A POD object begins its lifetime when it obtains storage with the proper size for its type and its lifetime ends when the storage for the object is either reused or deallocated.
PlainOldData types also must not have any of:
*
*Virtual functions (either their own, or inherited)
*Virtual base classes (direct or indirect).
A looser definition of PlainOldData includes objects with constructors, but excludes those with virtual anything. The important issue with PlainOldData types is that they are non-polymorphic. Inheritance can be done with POD types; however, it should only be done for ImplementationInheritance (code reuse) and not polymorphism/subtyping.
A common (though not strictly correct) definition is that a PlainOldData type is anything that doesn't have a VeeTable.
A: With C++, Plain Old Data doesn't just mean that things like int, char, etc. are the only types used. Plain Old Data really means in practice that you can take a struct, memcpy it from one location in memory to another, and things will work exactly like you would expect (i.e. not blow up). This breaks if your class, or any class your class contains, has a member that is a pointer or a reference, or a class that has a virtual function. Essentially, if pointers have to be involved somewhere, it's not Plain Old Data.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1195"
} |
Q: Tutorials for Wii programming I have a Nintendo Wii, and I've got devkitpro working to load some simple programs. The example source code that I've been able to find is very simplistic, such as drawing and rotating a simple shape.
I've been looking for more in depth tutorials, and I haven't been able to find much. Most of the applications available on wiibrew are open source, so I could look through those. However, I'd rather have something that's a little more geared towards teaching certain techniques, rather than have to browse through someone else's code to find the answer.
What are some good tutorials? Currently I'm stuck at just getting alpha (translucent) colours to work, but I'm also interested in getting lighting and other more advanced graphics techniques working.
A: What about this one? It doesn't go into more advanced stuff, but new tutorials are still being added.
A: I'm going to provide you with some nice links:
*
*Getting started (dead)
*Wii Programming (dead)
*Wii Wiki about Homebrew
*A Blogger who talks about Wii Programming
*A big resource on Wii Development (dead)
Another very good resource is WiiBrew, here are some development specific links:
*
*Tutorials
*Debugging
*Development Tools
*DevkitPro
*Getting started with Development
*Developer Tips
I know you've been to WiiBrew before but it's a very good resource, even if you want to go advanced.
Good luck and have fun programming for Wii! (But please, don't make another Fitness-game ;) )
A: Gamedev.net and DCEmu are 2 great homebrew game dev resources.
Gamedev.net has a lot of great articles, and forums.
DCEmu started as a Dreamcast Emulator website, but it's strong community is probably the current most active for all things game development.
I'm sure you will have a lot of fun reading these websites and their forums, and you will certainly find some tutorials there.
A: I know this doesn't answer your question, but thought it might be useful to someone else interested in Wii development:
http://unity3d.com/unity/features/wii-publishing
A: Some friends were programming a game using Wii with Flash. I don't know how, though :(
Have you checked this post here? Writing a game for the Nintendo Wii
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72"
} |
Q: How do I get the resolution of the main monitor in Mac OS X in C++? I have a graphical app that needs to test the resolution of the display it is starting up on in Mac OS X, to ensure its window is not larger than that resolution. This is done before the window itself is initialized.
If there is more than one display, it needs to be the primary display. This is the display that hardware accelerated (OpenGL) apps will start up on in Full Screen, and is typically the display that has the menu bar at the top.
In Windows, I can successfully use GetSystemMetrics(). How can I do this on OS X?
A: Using CoreGraphics:
CGRect mainMonitor = CGDisplayBounds(CGMainDisplayID());
CGFloat monitorHeight = CGRectGetHeight(mainMonitor);
CGFloat monitorWidth = CGRectGetWidth(mainMonitor);
More information at Apple's Quartz Display Services Reference.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is the windows API to change Screen refresh rate? Can someone specify the windows API one needs to use in order to be able to programmatically change the screen refresh rate?
A: you can use ChangeDisplaySettings as described before. But you should use EnumDisplaySettings to get a list of valid combinations of (color depth, width, height, mode and frequency).
Sample code (in Delphi but translation should be trivial)
Get valid display modes
i := 0;
while EnumDisplaySettings(nil, i, dm) do begin
Memo1.Lines.Add(Format('Color Depth: %d', [dm.dmBitsPerPel]));
Memo1.Lines.Add(Format('Resolution: %d, %d', [dm.dmPelsWidth, dm.dmPelsHeight]));
Memo1.Lines.Add(Format('Display mode: %d', [dm.dmDisplayFlags]));
Memo1.Lines.Add(Format('Frequency: %d', [dm.dmDisplayFrequency]));
Inc(i);
end;
Set display mode
// In this case i is an index in the list of valid display modes.
if EnumDisplaySettings(nil, i, dm) then begin
// Sanity check!
if ChangeDisplaySettings(dm, CDS_TEST) = 0 then
ChangeDisplaySettings(dm, 0); // Use CDS_UPDATEREGISTRY if this is the new default mode.
end;
It is very important to chose a valid combination!
A: I found this via a google search. Hope it helps some.
http://www.codeproject.com/KB/winsdk/changerefresh.aspx
http://msdn.microsoft.com/en-us/library/ms533260(VS.85).aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What are the quintessential tools and resources for hosting Rails applications on Amazon's EC2? I'm looking for:
*
*documentation
*blogs
*books
*ready-to-use pre-configured slice images
*services
*wrappers
*libraries
*tutorials
...anything that would make it easier to start using EC2 to host a Rails application.
A: Have you looked at the amazon getting started tutorial? It is sufficient to put an ec2 instance together.
I did use it to set up an Ubuntu server with ruby-enterprise, rails and passenger (this part was not any different from any other Ubuntu server I use).
A: ElasticFox is a must have utility for overseeing your instances
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609
Capazon is an awesome utility to bring instance management into Capistrano
http://soylentfoo.jnewland.com/articles/2007/03/27/capazon-capistrano-meets-amazon-ec2 (there is a newer version out somewhere)
I like these AMI's (I dig Ubuntu) http://alestic.com/
If you're using Heroku or EngineYard (the main cloud hosting solutions today - they build on top of Amazon EC2) you can also use git to manage your code and both Heroku and EngineYard have great instructions on how to use integrate git with them:
Heroku: http://devcenter.heroku.com/articles/git
Engine Yard: http://docs.engineyard.com/host-your-code-on-github.html
A: There is a Rails image for EC2 at http://ec2onrails.rubyforge.org/
A: I highly recommend Scott Chacon's Fuzed and EC2 demo. Others recommend the EC2 docs; I will as well. Be sure to also check out the fuzed code. The performance is amazing, but you better be bringing in some money to support it.
A: And don't forget SimpleDeployr, one click Ruby on Rails deployment to your EC2 account.
A: Here's a service you might want to try out to deploy your Rails app using EC2: Morph AppSpace
A: I have been configuring a rails app to run directly on EC2 using EC2onRails and its corresponding ami. I've documented my progress here, because I found the other documentation out there lacking:
http://www-cs-students.stanford.edu/~silver/ec2.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: JTable column spanning I am trying to make a JTable that has column spans available. Specifically, I am looking to nest a JTable inside another JTable, and when the user clicks to view the nested table, it should expand to push down the rows below and fill the empty space. This is similar to what you see in MS Access where you can nest tables, and clicking the expand button on a row will show you the corresponding entries in the nested table.
If someone knows of a way to perform a column span with JTable, can you please point me in the right direction? Or if you know of an alternative way to do this, I am open to suggestions. The application is being built with Swing. Elements in the table, both high level and low level, have to be editable in any solution. Using nested JTables this won't be a problem, and any other solution would have to take this into consideration as well.
A: As a pointer in the right direction, try this article at SwingWiki that explains the TableUI method of column spanning quite well. Before this, I also tried some alternatives such as overriding the TableCellRenderer paint methods without much success.
A: You need to write your own TableUI for the master table. It can also be helpful to use your own TableModel to save additional data, such as whether a row is expanded. But this is optional.
I wrote a similar TableUI that expands a row and shows a text editor. In the TableUI you need to change the row height dynamically with table.setRowHeight(height). It is also necessary to copy some stuff from BasicTableUI, because you cannot access the private parts.
A: Based on Code from Code-Guru:
/*
* (swing1.1beta3)
*
* |-----------------------------------------------------|
* | 1st | 2nd | 3rd |
* |-----------------------------------------------------|
* | | | | | | |
*/
//package jp.gr.java_conf.tame.swing.examples;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.table.*;
import jp.gr.java_conf.tame.swing.table.*;
/**
* @version 1.0 11/09/98
*/
public class MultiWidthHeaderExample extends JFrame {
MultiWidthHeaderExample() {
super( "Multi-Width Header Example" );
DefaultTableModel dm = new DefaultTableModel();
dm.setDataVector(new Object[][]{
{"a","b","c","d","e","f"},
{"A","B","C","D","E","F"}},
new Object[]{"1 st","","","","",""});
JTable table = new JTable( dm ) {
protected JTableHeader createDefaultTableHeader() {
return new GroupableTableHeader(columnModel);
}
};
TableColumnModel cm = table.getColumnModel();
ColumnGroup g_2nd = new ColumnGroup("2 nd");
g_2nd.add(cm.getColumn(1));
g_2nd.add(cm.getColumn(2));
ColumnGroup g_3rd = new ColumnGroup("3 rd");
g_3rd.add(cm.getColumn(3));
g_3rd.add(cm.getColumn(4));
g_3rd.add(cm.getColumn(5));
GroupableTableHeader header = (GroupableTableHeader)table.getTableHeader();
header.addColumnGroup(g_2nd);
header.addColumnGroup(g_3rd);
JScrollPane scroll = new JScrollPane( table );
getContentPane().add( scroll );
setSize( 400, 100 );
header.revalidate();
}
public static void main(String[] args) {
MultiWidthHeaderExample frame = new MultiWidthHeaderExample();
frame.addWindowListener( new WindowAdapter() {
public void windowClosing( WindowEvent e ) {
System.exit(0);
}
});
frame.setVisible(true);
}
}
Source: http://www.codeguru.com/java/articles/125.shtml (unavailable since 2012, see now in web archive)
Other resources:
*
*Java-6 updated sources: http://qoofast.blog76.fc2.com/blog-entry-2.html (translated)
*
*ColumnGroup.java
*GroupableTableHeader.java
*GroupableTableHeaderUI
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I conditionally display a field in Infopath? I'm trying to do the following in InfoPath: I have to choose between 2 options (with bullets), and depending on this, if we choose Option1, I need to display a text field to enter more details, but if Option2 is chosen, I don't need this additional field.
I'm not sure about how to enter a Rule to define this :-(
Anyone could help?
Thx!
A: In 2010, you do this with 'Rules' (changed from 2007). Open the Home tab in the ribbon, open Manage Rules from there. Then select your control, and add / edit rules in the rules pane for each control to achieve what you're trying to do.
A: In InfoPath 2007, you can:
*
*Right click on any control
*Select Conditional Formatting
*Enter a condition that maps to option 2 being chosen
*Click "Hide this control" as the formatting to apply.
You probably want to put this and any descriptive field text inside a section; and hide / show it using the steps above.
A: In InfoPath 2013, you can:
You can hide controls on the basis of a condition using formatting rules:
1. Click on the control
2. Add Rule
3. Select Formatting
4. Add Condition
5. Check Hide Control
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to programmatically simulate XP's "Windows Security" start menu option on Windows 2000 I'm trying to find a way to invoke the Ctrl+Alt+Delete dialog on a Windows 2000 computer that I'm connected to via Remote Desktop. Windows XP and 2003 include a new start menu command called "Windows Security" that does this, but Windows 2000 has no such option.
It appears that Ctrl+Alt+End will do this, but it only goes to the outermost RDP window, so if you're several connections deep, it doesn't help. In this scenario, I'm on computer A, connected to computer B. From computer B, I connect to computer C. Pressing Ctrl+Alt+End opens the Ctrl+Alt+Delete dialog on computer B, not computer C.
The goal here is to allow users connecting to computer C to change their own passwords. The users are not administrators on the computer, so they can't access the various tools that an admin might use to accomplish this.
[edit] I forgot to make this a programming question; my intent was to figure out how to do this from code (although a non-code method would be useful as well).
A: My coworker found the way to accomplish this directly: Start | Settings | Windows Security. If it's not present, it may have been disabled via Group Policy (Technet).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I successfully pass a function reference to Django’s reverse() function? I’ve got a brand new Django project. I’ve added one minimal view function to views.py, and one URL pattern to urls.py, passing the view by function reference instead of a string:
# urls.py
# -------
# coding=utf-8
from django.conf.urls.defaults import *
from myapp import views
urlpatterns = patterns('',
url(r'^myview/$', views.myview),
)
# views.py
----------
# coding=utf-8
from django.http import HttpResponse
def myview(request):
return HttpResponse('MYVIEW LOL', content_type="text/plain")
I’m trying to use reverse() to get the URL, by passing it a function reference. But I’m not getting a match, despite confirming that the view function I’m passing to reverse is the exact same view function I put in the URL pattern:
>>> from django.core.urlresolvers import reverse
>>> import urls
>>> from myapp import views
>>> urls.urlpatterns[0].callback is views.myview
True
>>> reverse(views.myview)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 254, in reverse
*args, **kwargs)))
File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 243, in reverse
"arguments '%s' not found." % (lookup_view, args, kwargs))
NoReverseMatch: Reverse for '<function myview at 0x6fe6b0>' with arguments '()' and keyword arguments '{}' not found.
As far as I can tell from the documentation, function references should be fine in both the URL pattern and reverse().
*
*URL patterns with function references
*reverse with function references
I’m using the Django trunk, revision 9092.
A: Got it!! The problem is that some of the imports are of myproject.myapp.views, and some are just of myapp.views. This is confusing the Python module system enough that it no longer detects the functions as the same object. This is because your main settings.py probably has a line like:
ROOT_URLCONF = 'myproject.urls'
To solve this, try using the full import in your shell session:
>>> from django.core.urlresolvers import reverse
>>> from myproject.myapp import views
>>> reverse(views.myview)
'/myview/'
Here's a log of the debugging session, for any interested future readers:
>>> from django.core import urlresolvers
>>> from myapp import myview
>>> urlresolvers.get_resolver (None).reverse_dict
{None: ([(u'myview/', [])], 'myview/$'), <function myview at 0x845d17c>: ([(u'myview/', [])], 'myview/$')}
>>> v1 = urlresolvers.get_resolver (None).reverse_dict.items ()[1][0]
>>> reverse(v1)
'/myview/'
>>> v1 is myview
False
>>> v1.__module__
'testproject.myapp.views'
>>> myview.__module__
'myapp.views'
What happens if you change the URL match to be r'^myview/$'?
Have you tried it with the view name? Something like reverse ('myapp.myview')?
Is urls.py the root URLconf, or in the myapp application? There needs to be a full path from the root to a view for it to be resolved. If that's myproject/myapp/urls.py, then in myproject/urls.py you'll need code like this:
from django.conf.urls.defaults import patterns, include
urlpatterns = patterns ('',
    (r'^', include('myapp.urls')),
)
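The root cause (one source file imported under two different module paths, so Python builds two distinct module objects) can be reproduced outside Django entirely. A minimal, self-contained sketch; the package and function names here are made up:

```python
import importlib
import os
import sys
import tempfile

# Build a tiny package on disk: pkg/mod.py defining one function.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "pkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "mod.py"), "w") as f:
    f.write("def view():\n    return 'ok'\n")

# Make the same file importable both as "pkg.mod" and as bare "mod".
sys.path.insert(0, root)
sys.path.insert(0, pkg)
importlib.invalidate_caches()

qualified = importlib.import_module("pkg.mod")
bare = importlib.import_module("mod")

# Two distinct module objects, so the "same" function compares unequal,
# which is exactly why reverse() found no match in the session above.
print(qualified.view is bare.view)                      # False
print(qualified.view.__module__, bare.view.__module__)  # pkg.mod mod
```

The moral matches the accepted answer: always import your views through one canonical path (the one ROOT_URLCONF uses), and the identity check, and therefore reverse(), works again.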
A: If your two code pastes are complete, then it doesn't look like the second one, which makes the actual call to reverse(), ever imports the urls module, and therefore the URL mapping is never actually loaded.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: What is the fastest way to get to documentation for Ruby? Say I'm writing some ruby code and I want to use the standard Date type to get the current date. Instead of using a search engine, is there a faster way to find the documentation for this class? I know I can get the methods for Date by typing Date.methods, but as far as I know this doesn't provide details about argument types or return value.
Editor-specific answers are welcomed. My editor of choice is Emacs.
A: Bookmark the ruby core docs
Use your web browser's find-text command.
Unexpected as it may seem, I find this is actually quicker than using ri, which for some reason seems to take ages to start up.
It is also much better than ri because the HTML page lists all the documentation for all the methods on a single page. Often methods are related to others, and switching between 2 ri's is painful
A: On your console use "ri"
ri Date
That works with all classes. (e.g. ri String)
To see documentation for a particular method you use this:
ri Date#yourMethod
A: The canonical source for Ruby documentation is Ruby-doc - the two links there which are of the most interest are core and standard library. You get a javadoc-style representation which usually covers argument types and return values. You can even make your own with RDoc.
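If what you miss from Date.methods is argument information specifically, Ruby's reflection can fill in a little of that gap without leaving the console. This won't replace the docs, but a small sketch:

```ruby
require 'date'

# Method/UnboundMethod objects carry a bit more than Date.methods shows --
# at least the argument count (arity). A negative arity means the method
# takes optional arguments.
m = Date.instance_method(:+)
puts m.arity                                 # 1  -> Date#+ takes one argument
puts Date.instance_method(:strftime).arity   # -1 -> optional format argument
```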
A: For those of you who want docs from within vanilla IRB
*
*Follow these instructions for setting up the core RI documentation. As of writing, the steps are
$ cd ~/.rvm/src
$ rvm docs generate-ri
*Now you can view docs for a specific method on the command line using the ri command. To invoke it from within IRB, use the help command:
$ irb
irb(main):001:0> help 'String#chomp'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Delete all but the 50 newest rows I have a SQL table with news stories and Unix timestamps. I'd like to only keep the 50 newest stories. How would I write an SQL statement to delete any amount of older stories?
A:
delete from table where id not in (
select id from table
order by id desc
limit 50
)
You select the ids of the data you don't want to delete, and then you delete everything NOT IN these values...
A: I ended up using two queries since MySQL 5 doesn't yet support LIMIT in subqueries
SELECT unixTime FROM entries ORDER BY unixTime DESC LIMIT 49, 1;
DELETE FROM entries WHERE unixTime < $sqlResult;
A: Well, it sort of looks like you can't do it in one query - someone correct me if I'm wrong. The only way I've ever been able to do this sort of thing is to first figure out the number of rows in the table. For Example:
select count(*) from table;
then using the result do
delete from table order by timestamp limit result - 50;
You have to do it this way for two reasons -
*
*MySQL 5 doesn't support limit in subqueries for delete
*MySQL 5 doesn't allow you to select in a subquery from the same table you are deleting from.
A: If you have a lot of rows, it might be better to put the 50 rows in a temporary table
then use TRUNCATE TABLE to empty the table out. Then put the 50 rows back in.
A: I've just done it like this:
DELETE FROM `table` WHERE `datetime` < (SELECT `datetime` FROM `table` ORDER BY `datetime` DESC LIMIT 49,1);
Where table is the table, datetime is a datetime field.
A: Maybe not the most efficient, but this should work:
DELETE FROM _table_
WHERE _date_ NOT IN (SELECT _date_ FROM _table_ ORDER BY _date_ DESC LIMIT 50)
A: Assuming this query selects the rows you want to keep:
SELECT timestampcol FROM table ORDER BY timestampcol DESC LIMIT 49,1;
Then you could use a subquery like so:
DELETE FROM table WHERE timestampcol < ( SELECT timestampcol FROM table ORDER BY timestampcol DESC LIMIT 49,1 )
Of course, make sure you have a backup before doing anything as potentially destructive. Note that compared to the other approaches mentioned, which use IN, this one will avoid doing 50 integer comparisons for every row to be deleted, making it (potentially) 50 times faster - assuming I got my SQL right.
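The approaches above can be tried end-to-end without a MySQL server. Here is a sketch using SQLite from Python (SQLite, unlike MySQL 5 at the time, accepts a LIMIT inside this kind of subquery; the table and column names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, unixTime INTEGER)")
conn.executemany("INSERT INTO entries (unixTime) VALUES (?)",
                 [(t,) for t in range(100)])  # 100 stories, timestamps 0..99

# Keep only the 50 newest: delete every row whose id is not among the
# 50 rows with the largest timestamps.
conn.execute("""
    DELETE FROM entries
    WHERE id NOT IN (SELECT id FROM entries
                     ORDER BY unixTime DESC LIMIT 50)
""")

count, oldest = conn.execute(
    "SELECT COUNT(*), MIN(unixTime) FROM entries").fetchone()
print(count, oldest)  # 50 50 -- only timestamps 50..99 survive
```

On MySQL 5 itself, the LIMIT restriction can reportedly be sidestepped by wrapping the subquery in a derived table (SELECT id FROM (SELECT id FROM entries ORDER BY unixTime DESC LIMIT 50) AS newest), though that variant is untested here.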
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is the best way to version control my SQL server stored procedures? What is the best way to version control my database objects? I'm using Visual studio 2005/2008 and SQL server 2005. I would prefer a solution which can be used with SVN.
A: I use SVN for all of my table/sproc/function source control.
I couldn't find anything that met my needs so I ended up writing a utility to allow me to dump the code out into a nice directory structure to use with SVN.
For those interested, the source is now available at svn://finsel.com/public/VS2005/GenerateSVNFilesForSQL2005.
A: Same as your other code, add a "Database project" to your application solution and keep the sql files used to build the database objects in there. Use the same version control for those code files as you do for the application.
A: We use Subversion and all we do is save the sql code in the directory for our subversion project and then commit the code to the repository when we are ready and update from the repository before we start working on something already in there.
The real trick is to convince developers to do that. Our dbas do that by deleting any stored proc (or other database object) that isn't in Subversion periodically. Lose stuff once and pretty much no one does it again.
A: Look at the tools offered by RedGate. They specifically deal with backup / restore / comparison cases for SQL Server objects, including SPs. Alternatively, I am not sure, but I think that Visual Studio allows you to check SPs into a repository. Haven't tried that myself. But I can recommend the RedGate tools. They have saved me a ton of trouble.
A: I don't know of a pre-packaged solution, sorry...
... but couldn't you just write a little script that connects to the database and saves all the stored procedures to disk as text files? Then the script would add all the text files to the SVN repository by making a system call to 'svn add'.
Then you'd probably want another script to connect to the DB, drop all stored procedures and load all the repository stored procedures from disk. This script would need to be run each time you ran "svn up" and had new/changed stored procedures.
I'm not sure if this can be accomplished with MS SQL, but I'm fairly confident that MySQL would accommodate this. If writing SVN extensions to do this is too complicated, Capistrano supports checkin/checkout scripts, IIRC.
A: Best way - one which works for you.
Easiest way - one that doesn't currently exist.
We use a semi-manual method (scripts under source control, small subset of people able to deploy stored procedures to the production server, changes to the schema should be reflected in changes to the underlying checked in files).
What we should do is implement some sort of source control vs. plaintext schema dump diff ... but it generally 'works for us', although it's a real faff most of the time.
A: I agree that if possible, you should use database projects to version your db along with your application source.
However, if you are in an enterprise scenario, you should also consider using a tool to track changes on the server, and version those changes. Just because the database project exists doesn't mean some admin or developer can't change those sprocs on the server.
A: We do dumps to plaintext and keep them in our VCS.
You'd be able to script a backup-and-commit to do something similar.
A: I'm using scriptdb.exe from http://scriptdb.codeplex.com/
And it might be useful to use the Rails way: http://code.google.com/p/migratordotnet/wiki/GettingStarted
A: Use versaplex for dumping your schema: http://code.google.com/p/versaplex/
Versaplex comes with Schemamatic, which reads database schema (tables, SPs, etc) and also data (data is dumped as CSV).
I use it, with SVN and git, and it's awesome :)
If you need help let me know, it's worth a try!
http://github.com/eduardok/versaplex
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Do you use the "global" statement in Python? I was reading a question about the Python global statement ( "Python scope" ) and I was remembering about how often I used this statement when I was a Python beginner (I used global a lot) and how, nowadays, years later, I don't use it at all, ever. I even consider it a bit "un-pythonic".
Do you use this statement in Python ? Has your usage of it changed with time ?
A: In my view, as soon as you feel the need to use global variables in Python code, it's a good time to stop for a bit and work on refactoring your code.
Putting the global in the code and delaying the refactoring process might sound promising if your deadline is close, but, believe me, you're not going to go back and fix it unless you really have to - like when your code stops working for some odd reason, you have to debug it, you encounter some of those global variables, and all they do is mess things up.
So, honestly, even though it's allowed, I would avoid using it as much as I can. Even if it means building a simple class around your piece of code.
A: I use 'global' in a context such as this:
_cached_result = None
def myComputationallyExpensiveFunction():
global _cached_result
if _cached_result:
return _cached_result
# ... figure out result
_cached_result = result
return result
I use 'global' because it makes sense and is clear to the reader of the function what is happening. I also know there is this pattern, which is equivalent, but places more cognitive load on the reader:
def myComputationallyExpensiveFunction():
if myComputationallyExpensiveFunction.cache:
return myComputationallyExpensiveFunction.cache
# ... figure out result
myComputationallyExpensiveFunction.cache = result
return result
myComputationallyExpensiveFunction.cache = None
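For completeness: on Python 3.2 and newer, the standard library can own the cache itself, avoiding both the global and the function attribute. A sketch (the 42 below is a stand-in for the real expensive computation):

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def my_computationally_expensive_function():
    # ... figure out result (stand-in value here)
    result = 42
    return result

print(my_computationally_expensive_function())  # computed on the first call
print(my_computationally_expensive_function())  # served from the cache
```

Unlike the `if _cached_result:` check above, this also behaves correctly when the computed result happens to be falsy.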
A: Objects are the preferred way of having non-local state, so global is rarely needed. I don't think the upcoming nonlocal modifier is going to be widely used either; I think it's mostly there to make Lispers stop complaining :-)
A: I use it for global options with command-line scripts and 'optparse':
my main() parses the arguments and passes those to whatever function does the work of the script... but writes the supplied options to a global 'opts' dictionary.
Shell script options often tweak 'leaf' behavior, and it's inconvenient (and unnecessary) to thread the 'opts' dictionary through every argument list.
A: I avoid it and we even have a pylint rule that forbids it in our production code. I actually believe it shouldn't even exist at all.
A: Rarely. I've yet to find a use for it at all.
A: It can be useful in threads for sharing state (with locking mechanisms around it).
However, I rarely if ever use it.
A: I've used it in quick & dirty, single-use scripts to automate some one-time task. Anything bigger than that, or that needs to be reused, and I'll find a more elegant way.
A: I've never had a legit use for the statement in any production code in my 3+ years of professional use of Python and over five years as a Python hobbyist. Any state I need to change resides in classes or, if there is some "global" state, it sits in some shared structure like a global cache.
A: I've used it in situations where a function creates or sets variables which will be used globally. Here are some examples:
discretes = 0
def use_discretes():
#this global statement is a message to the parser to refer
#to the globally defined identifier "discretes"
global discretes
if using_real_hardware():
discretes = 1
...
or
file1.py:
def setup():
global DISP1, DISP2, DISP3
DISP1 = grab_handle('display_1')
DISP2 = grab_handle('display_2')
DISP3 = grab_handle('display_3')
...
file2.py:
import file1
file1.setup()
#file1.DISP1 DOES NOT EXIST until after setup() is called.
file1.DISP1.resolution = 1024, 768
A: Once or twice. But it was always a good starting point to refactor.
A: If I can avoid it, no. And, to my knowledge, there is always a way to avoid it. But I'm not stating that it's totally useless either
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Crafting .webloc file I'm writing a program (for Mac OS X, using Objective-C) and I need to create a bunch of .webloc files programmatically.
The .webloc file is simply file which is created after you drag-n-drop an URL from Safari's location bar to some folder.
Generally speaking, I need an approach to create items in a filesystem which point to some location in the Web. As I understand .webloc files should be used for this on Mac OS X.
So, is it possible to craft a .webloc file having a valid url and some title for it?
A: .webloc files (more generically, Internet location files) are written in a format whose definition goes back to Mac OS 8.x. It is resource-based, derived from the clipping format you get when you create a file from dragged objects such as text or images.
The resources written are 'url ' 256 and 'TEXT' 256, which store the URL, and optionally 'urln' 256, containing the text associated with the URL. 'drag' 128 points to the other two (or three) resources.
NTWeblocFile, part of the Cocoatech Open Source framework CocoaTechFoundation (BSD licensed), supports writing these files from Objective-C. If you want to specify a title separately to the URL, you'll need to modify the class so it writes something other than the URL into the 'urln' resource.
In Mac OS X 10.3 and later, the URL is also written (may also be written) into a property list in the file's data fork. See the other answer for how this works...
A: A .webloc file doesn't have anything in its data fork; instead, it stores the URL it refers to as a resource in its resource fork. You can see this on the command line using the DeRez(1) tool
Here I've run it on a .webloc file that I dragged out of my Safari address bar for this question:
% DeRez "Desktop/Crafting .webloc file - Stack Overflow.webloc"
data 'drag' (128, "Crafting .webloc file -#1701953") {
$"0000 0001 0000 0000 0000 0000 0000 0003" /* ................ */
$"5445 5854 0000 0100 0000 0000 0000 0000" /* TEXT............ */
$"7572 6C20 0000 0100 0000 0000 0000 0000" /* url ............ */
$"7572 6C6E 0000 0100 0000 0000 0000 0000" /* urln............ */
};
data 'url ' (256, "Crafting .webloc file -#1701953") {
$"6874 7470 3A2F 2F73 7461 636B 6F76 6572" /* http://stackover */
$"666C 6F77 2E63 6F6D 2F71 7565 7374 696F" /* flow.com/questio */
$"6E73 2F31 3436 3537 352F 6372 6166 7469" /* ns/146575/crafti */
$"6E67 2D77 6562 6C6F 632D 6669 6C65" /* ng-webloc-file */
};
data 'TEXT' (256, "Crafting .webloc file -#1701953") {
$"6874 7470 3A2F 2F73 7461 636B 6F76 6572" /* http://stackover */
$"666C 6F77 2E63 6F6D 2F71 7565 7374 696F" /* flow.com/questio */
$"6E73 2F31 3436 3537 352F 6372 6166 7469" /* ns/146575/crafti */
$"6E67 2D77 6562 6C6F 632D 6669 6C65" /* ng-webloc-file */
};
data 'urln' (256, "Crafting .webloc file -#1701953") {
$"4372 6166 7469 6E67 202E 7765 626C 6F63" /* Crafting .webloc */
$"2066 696C 6520 2D20 5374 6163 6B20 4F76" /* file - Stack Ov */
$"6572 666C 6F77" /* erflow */
};
The only resources that probably need to be in there are the 'url ' and 'TEXT' resources of ID 256, and those probably don't need resource names either. The 'urln' resource might be handy if you want to include the title of the document the URL points to as well. The 'drag' resource tells the system that this is a clipping file, but I'm unsure of whether it needs to be there in this day and age.
To work with resources and the resource fork of a file, you use the Resource Manager — one of the underlying pieces of Carbon which goes back to the original Mac. There are, however, a couple of Cocoa wrappers for the Resource Manager, such as Nathan Day's NDResourceFork.
A: Another way to make a "web shortcut" is the .url file mentioned here already.
The contents look like this (much simpler than the XML-based plist):
[InternetShortcut]
URL=http://www.apple.com/
Note the file has 3 lines, the last line is empty.
More info on .url file format
A: It is little known, but there is also a simple plist-based file format for weblocs.
When creating .webloc files you DO NOT NEED to save them using the resource method the other three posters describe. You can also write a simple plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>URL</key>
<string>http://apple.com</string>
</dict>
</plist>
The binary resource format is still in active use and if you want to read a plist file - then sure you need to read both file formats. But when writing the file - use the plist based format - it is a lot easier.
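From a scripting language, the plist variant above is almost a one-liner. A sketch in Python using the standard library's plistlib (the file name and URL are examples; this writes only the data fork, no resource fork):

```python
import plistlib

# Write a minimal plist-based .webloc: a dict with a single URL key.
with open("example.webloc", "wb") as f:
    plistlib.dump({"URL": "http://apple.com"}, f)

# Reading it back shows the same structure as the XML above.
with open("example.webloc", "rb") as f:
    print(plistlib.load(f)["URL"])  # http://apple.com
```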
A: It uses a resource fork-based binary format.
Valid workarounds:
*
*Have the user drag a URL from your application (NSURLPboardType) to the Finder. The Finder will create a webloc for you.
*Create a Windows Web Shortcut (.URL file). These have an INI-like, data fork-based format and should be documented somewhere on the Internet; the OS supports them as it supports weblocs.
A: Here's how Google Chrome does it: WriteURLToNewWebLocFileResourceFork
A: This does the basic task, without needing any third party libraries. (Be warned: minimal error checking.)
// data for 'drag' resource (it's always the same)
#define DRAG_DATA_LENGTH 64
static const unsigned char _dragData[DRAG_DATA_LENGTH]={
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03,
0x54, 0x45, 0x58, 0x54, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x75, 0x72, 0x6C, 0x20, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x75, 0x72, 0x6C, 0x6E, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
static void _addData(NSData *data, ResType type, short resId, ResFileRefNum refNum)
{
Handle handle;
if (PtrToHand([data bytes], &handle, [data length])==noErr) {
ResFileRefNum previousRefNum=CurResFile();
UseResFile(refNum);
HLock(handle);
AddResource(handle, type, resId, "\p");
HUnlock(handle);
UseResFile(previousRefNum);
}
}
void WeblocCreateFile(NSString *location, NSString *name, NSURL *fileUrl)
{
NSString *contents=[NSString stringWithFormat:
@"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
@"<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n"
@"<plist version=\"1.0\">\n"
@"<dict>\n"
@"<key>URL</key>\n"
@"<string>%@</string>\n"
@"</dict>\n"
@"</plist>\n", location];
if ([[contents dataUsingEncoding:NSUTF8StringEncoding] writeToURL:fileUrl options:NSDataWritingAtomic error:nil])
{
// split into parent and filename parts
NSString *parentPath=[[fileUrl URLByDeletingLastPathComponent] path];
NSString *fileName=[fileUrl lastPathComponent];
FSRef parentRef;
if(FSPathMakeRef((const UInt8 *)[parentPath fileSystemRepresentation], &parentRef, NULL)==noErr)
{
unichar fileNameBuffer[[fileName length]];
[fileName getCharacters:fileNameBuffer];
FSCreateResFile(&parentRef, [fileName length], fileNameBuffer, 0, NULL, NULL, NULL);
if (ResError()==noErr)
{
FSRef fileRef;
if(FSPathMakeRef((const UInt8 *)[[fileUrl path] fileSystemRepresentation], &fileRef, NULL)==noErr)
{
ResFileRefNum resFileReference = FSOpenResFile(&fileRef, fsWrPerm);
if (resFileReference>0 && ResError()==noErr)
{
_addData([NSData dataWithBytes:_dragData length:DRAG_DATA_LENGTH], 'drag', 128, resFileReference);
_addData([location dataUsingEncoding:NSUTF8StringEncoding], 'url ', 256, resFileReference);
_addData([location dataUsingEncoding:NSUTF8StringEncoding], 'TEXT', 256, resFileReference);
_addData([name dataUsingEncoding:NSUTF8StringEncoding], 'urln', 256, resFileReference);
CloseResFile(resFileReference);
}
}
}
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Why is the Java main method static? The method signature of a Java main method is:
public static void main(String[] args) {
...
}
Is there a reason why this method must be static?
A: It's just a convention, but probably more convenient than the alternative. With a static main, all you need to know to invoke a Java program is the name and location of a class. If it weren't static, you'd also have to know how to instantiate that class, or require that the class have an empty constructor.
A: Applets, midlets, servlets and beans of various kinds are constructed and then have lifecycle methods called on them. Invoking main is all that is ever done to the main class, so there is no need for a state to be held in an object that is called multiple times. It's quite normal to pin main on another class (although not a great idea), which would get in the way of using the class to create the main object.
A: If the main method were not static, you would need to create an object of your main class from outside the program. How would you want to do that?
A: When you execute the Java Virtual Machine (JVM) with the java command:
java ClassName argument1 argument2 ...
you specify the name of the class to run as an argument to the java command, as above. The JVM then attempts to invoke the main method of the class you specify - and at this point, no objects of the class have been created.
Declaring main as static allows the JVM to invoke main without creating an instance of the class.
Let's go back to the command:
ClassName is a command-line argument to the JVM that tells it which class to execute. Following the ClassName, you can also specify a list of Strings (separated by spaces) as command-line arguments that the JVM will pass to your application. Such arguments might be used to specify options (e.g., a filename) to run the application - this is why there is a parameter called String[] args in main.
References:Java™ How To Program (Early Objects), Tenth Edition
A: This is just convention. In fact, even the name main(), and the arguments passed in are purely convention.
When you run java.exe (or javaw.exe on Windows), what is really happening is a couple of Java Native Interface (JNI) calls. These calls load the DLL that is really the JVM (that's right - java.exe is NOT the JVM). JNI is the tool that we use when we have to bridge the virtual machine world, and the world of C, C++, etc... The reverse is also true - it is not possible (at least to my knowledge) to actually get a JVM running without using JNI.
Basically, java.exe is a super simple C application that parses the command line, creates a new String array in the JVM to hold those arguments, parses out the class name that you specified as containing main(), uses JNI calls to find the main() method itself, then invokes the main() method, passing in the newly created string array as a parameter. This is very, very much like what you do when you use reflection from Java - it just uses confusingly named native function calls instead.
It would be perfectly legal for you to write your own version of java.exe (the source is distributed with the JDK) and have it do something entirely different. In fact, that's exactly what we do with all of our Java-based apps.
Each of our Java apps has its own launcher. We primarily do this so we get our own icon and process name, but it has come in handy in other situations where we want to do something besides the regular main() call to get things going (For example, in one case we are doing COM interoperability, and we actually pass a COM handle into main() instead of a string array).
So, long and short: the reason it is static is b/c that's convenient. The reason it's called 'main' is that it had to be something, and main() is what they did in the old days of C (and in those days, the name of the function was important). I suppose that java.exe could have allowed you to just specify a fully qualified main method name, instead of just the class (java com.mycompany.Foo.someSpecialMain) - but that just makes it harder on IDEs to auto-detect the 'launchable' classes in a project.
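The JNI sequence described above can be sketched in pure Java with reflection: locate a class by name, find its static main(String[]), and invoke it with no receiver object. The class and argument names below are illustrative, not the real launcher's:

```java
import java.lang.reflect.Method;

// A tiny "launcher" sketch that mimics, in pure Java, what java.exe does
// through JNI: look up a class, find its static main, invoke it with args.
public class MiniLauncher {
    public static class App {
        public static void main(String[] args) {
            System.out.println("App started with: " + args[0]);
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> cls = Class.forName("MiniLauncher$App");
        Method main = cls.getMethod("main", String[].class);
        // Static method: the receiver is null -- no instance is needed,
        // which is the whole point of main being static.
        main.invoke(null, (Object) new String[] { "hello" });
    }
}
```

Running MiniLauncher prints "App started with: hello", without ever constructing an App object.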
A: Why public static void main(String[] args) ?
This is how Java Language is designed and Java Virtual Machine is designed and written.
Oracle Java Language Specification
Check out Chapter 12 Execution - Section 12.1.4 Invoke Test.main:
Finally, after completion of the initialization for class Test (during which other consequential loading, linking, and initializing may have occurred), the method main of Test is invoked.
The method main must be declared public, static, and void. It must accept a single argument that is an array of strings. This method can be declared as either
public static void main(String[] args)
or
public static void main(String... args)
Oracle Java Virtual Machine Specification
Check out Chapter 2 Java Programming Language Concepts - Section 2.17 Execution:
The Java virtual machine starts execution by invoking the method main of some specified class and passing it a single argument, which is an array of strings. This causes the specified class to be loaded (§2.17.2), linked (§2.17.3) to other types that it uses, and initialized (§2.17.4). The method main must be declared public, static, and void.
Oracle OpenJDK Source
Download and extract the source jar and see how JVM is written, check out ../launcher/java.c, which contains native C code behind command java [-options] class [args...]:
/*
* Get the application's main class.
* ... ...
*/
if (jarfile != 0) {
mainClassName = GetMainClassName(env, jarfile);
... ...
mainClass = LoadClass(env, classname);
if(mainClass == NULL) { /* exception occured */
... ...
/* Get the application's main method */
mainID = (*env)->GetStaticMethodID(env, mainClass, "main",
"([Ljava/lang/String;)V");
... ...
{ /* Make sure the main method is public */
jint mods;
jmethodID mid;
jobject obj = (*env)->ToReflectedMethod(env, mainClass,
mainID, JNI_TRUE);
... ...
/* Build argument array */
mainArgs = NewPlatformStringArray(env, argv, argc);
if (mainArgs == NULL) {
ReportExceptionDescription(env);
goto leave;
}
/* Invoke main method. */
(*env)->CallStaticVoidMethod(env, mainClass, mainID, mainArgs);
... ...
A: Let's simply pretend that static were not required as the application entry point.
An application class would then look like this:
class MyApplication {
public MyApplication(){
// Some init code here
}
public void main(String[] args){
// real application code here
}
}
The distinction between constructor code and the main method is necessary because, in OO speak, a constructor shall only make sure that an instance is initialized properly. After initialization, the instance can be used for the intended "service". Putting the complete application code into the constructor would spoil that.
So this approach would force three different contracts upon the application:
*
*There must be a default constructor. Otherwise, the JVM would not know which constructor to call and what parameters should be provided.
*There must be a main method1. Ok, this is not surprising.
*The class must not be abstract. Otherwise, the JVM could not instantiate it.
The static approach on the other hand only requires one contract:
*
*There must be a main method1.
Here neither abstract nor multiple constructors matters.
Since Java was designed to be a simple language for the user it is not surprising that also the application entry point has been designed in a simple way using one contract and not in a complex way using three independent and brittle contracts.
Please note: This argument is not about simplicity inside the JVM or inside the JRE. This argument is about simplicity for the user.
1Here the complete signature counts as only one contract.
A: The method is static because otherwise there would be ambiguity: which constructor should be called? Especially if your class looks like this:
public class JavaClass{
protected JavaClass(int x){}
public void main(String[] args){
}
}
Should the JVM call new JavaClass(int)? What should it pass for x?
If not, should the JVM instantiate JavaClass without running any constructor method? I think it shouldn't, because that will special-case your entire class - sometimes you have an instance that hasn't been initialized, and you have to check for it in every method that could be called.
There are just too many edge cases and ambiguities for it to make sense for the JVM to have to instantiate a class before the entry point is called. That's why main is static.
I have no idea why main is always marked public though.
A: It is just a convention. The JVM could certainly deal with non-static main methods if that would have been the convention. After all, you can define a static initializer on your class, and instantiate a zillion objects before ever getting to your main() method.
A: I think the keyword 'static' makes the main method a class method; class methods have only one copy, can be shared by all, and do not require an object reference. So when the driver class is compiled, the main method can be invoked. (I'm just at the alphabet level of Java, sorry if I'm wrong.)
A: main() is static because, at that point in the application's lifecycle, the application stack is procedural in nature, since no objects have yet been instantiated.
It's a clean slate. Your application is running at this point, even without any objects being declared (remember, there's procedural AND OO coding patterns). You, as the developer, turn the application into an object-oriented solution by creating instances of your objects and depending upon the code compiled within.
Object-oriented is great for millions of obvious reasons. However, gone are the days when most VB developers regularly used keywords like "goto" in their code. "goto" is a procedural command in VB that is replaced by its OO counterpart: method invocation.
You could also look at the static entry point (main) as pure liberty. Had Java been different enough to instantiate an object and present only that instance to you on run, you would have no choice BUT to write a procedural app. As unimaginable as it might sound for Java, it's possible there are many scenarios which call for procedural approaches.
This is probably a very obscure reply. Remember, "class" is only a collection of inter-related code. "Instance" is an isolated, living and breathing autonomous generation of that class.
A: Recently, similar question has been posted at Programmers.SE
*
*Why a static main method in Java and C#, rather than a constructor?
Looking for a definitive answer from a primary or secondary source for why did (notably) Java and C# decide to have a static method as their entry point – rather than representing an application instance by an instance of an Application class, with the entry point being an appropriate constructor?
TL;DR part of the accepted answer is,
In Java, the reason of public static void main(String[] args) is that
*
*Gosling wanted
*the code written by someone experienced in C (not in Java)
*to be executed by someone used to running PostScript on NeWS
For C#, the reasoning is transitively similar so to speak. Language designers kept the program entry point syntax familiar for programmers coming from Java. As C# architect Anders Hejlsberg puts it,
...our approach with C# has simply been to offer an alternative... to Java programmers...
...
A: The public keyword is an access modifier, which allows the programmer to control
the visibility of class members. When a class member is preceded by public, then that
member may be accessed by code outside the class in which it is declared.
The opposite of public is private, which prevents a member from being used by code defined outside of its class.
In this case, main() must be declared as public, since it must be called
by code outside of its class when the program is started.
The keyword static allows
main() to be called without having to instantiate a particular instance of the class. This is necessary since main() is called by the Java interpreter before any objects are made.
The keyword void simply tells the compiler that main() does not return a value.
A: The true entry point to any application is a static method. If the Java language supported an instance method as the "entry point", then the runtime would need implement it internally as a static method which constructed an instance of the object followed by calling the instance method.
With that out of the way, I'll examine the rationale for choosing a specific one of the following three options:
*
*A static void main() as we see it today.
*An instance method void main() called on a freshly constructed object.
*Using the constructor of a type as the entry point (e.g., if the entry class was called Program, then the execution would effectively consist of new Program()).
Breakdown:
static void main()
*
*Calls the static constructor of the enclosing class.
*Calls the static method main().
void main()
*
*Calls the static constructor of the enclosing class.
*Constructs an instance of the enclosing class by effectively calling new ClassName().
*Calls the instance method main().
new ClassName()
*
*Calls the static constructor of the enclosing class.
*Constructs an instance of the class (then does nothing with it and simply returns).
Rationale:
I'll go in reverse order for this one.
Keep in mind that one of the design goals of Java was to emphasize (require when possible) good object-oriented programming practices. In this context, the constructor of an object initializes the object, but should not be responsible for the object's behavior. Therefore, a specification that gave an entry point of new ClassName() would confuse the situation for new Java developers by forcing an exception to the design of an "ideal" constructor on every application.
By making main() an instance method, the above problem is certainly solved. However, it creates complexity by requiring the specification to list the signature of the entry class's constructor as well as the signature of the main() method.
In summary, specifying a static void main() creates a specification with the least complexity while adhering to the principle of placing behavior into methods. Considering how straightforward it is to implement a main() method which itself constructs an instance of a class and calls an instance method, there is no real advantage to specifying main() as an instance method.
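The common idiom that follows from this reasoning is a static main() that does nothing but construct an instance and delegate to an instance method. A minimal sketch (class and method names here are illustrative, not prescribed by Java):

```java
public class Program {
    private final String[] args;

    public Program(String[] args) {
        this.args = args;          // the constructor only initializes state
    }

    public int run() {
        // real application behavior lives in instance methods
        return args.length;
    }

    public static void main(String[] args) {
        // the static entry point merely bootstraps an instance
        System.out.println("processed " + new Program(args).run() + " arguments");
    }
}
```

This keeps the constructor's job limited to initialization while still getting the behavioral benefits of instance methods.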
A: The prototype public static void main(String[]) is a convention defined in the JLS :
The method main must be declared public, static, and void. It must specify a formal parameter (§8.4.1) whose declared type is array of String.
In the JVM specification 5.2. Virtual Machine Start-up we can read:
The Java virtual machine starts up by creating an initial class, which is specified in an implementation-dependent manner, using the bootstrap class loader (§5.3.1). The Java virtual machine then links the initial class, initializes it, and invokes the public class method void main(String[]). The invocation of this method drives all further execution. Execution of the Java virtual machine instructions constituting the main method may cause linking (and consequently creation) of additional classes and interfaces, as well as invocation of additional methods.
Funny thing: in the JVM specification it is not mentioned that the main method has to be static.
But the spec also says that the Java virtual machine performs two steps beforehand:
*
*links the initial class (5.4. Linking)
*initializes it (5.5. Initialization)
Initialization of a class or interface consists of executing its class or interface initialization method.
In 2.9. Special Methods :
A class or interface initialization method is defined :
A class or interface has at most one class or interface initialization method and is initialized (§5.5) by invoking that method. The initialization method of a class or interface has the special name <clinit>, takes no arguments, and is void.
And a class or interface initialization method is different from an instance initialization method defined as follow :
At the level of the Java virtual machine, every constructor written in the Java programming language (JLS §8.8) appears as an instance initialization method that has the special name <init>.
So the JVM runs a class or interface initialization method, not an instance initialization method (which is what a constructor actually is).
So they don't need to mention in the JVM spec that the main method has to be static, because it's implied by the fact that no instances are created before the main method is called.
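The difference between <clinit> and <init> is easy to observe: a static initializer block runs during class initialization, before main is invoked, while a constructor never runs unless an instance is created. A small demonstration (class name is illustrative):

```java
public class InitOrder {
    // Compiled into <clinit>; runs during class initialization, before main.
    static {
        System.out.println("class initialization (<clinit>)");
    }

    // Compiled into <init>; runs only when 'new InitOrder()' is called.
    public InitOrder() {
        System.out.println("instance initialization (<init>)");
    }

    public static void main(String[] args) {
        // No <init> line appears before this: no instance was ever created.
        System.out.println("main");
    }
}
```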
A: The main method in C++, C# and Java are static.
This is because they can then be invoked by the runtime engine without having to instantiate any objects then the code in the body of main will do the rest.
A: If it wasn't, which constructor should be used if there are more than one?
There is more information on the initialization and execution of Java programs available in the Java Language Specification.
A: Before the main method is called, no objects are instantiated. Having the static keyword means the method can be called without creating any objects first.
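This can be observed directly with an instance counter (a small illustrative sketch; the class name is made up):

```java
public class NoInstancesYet {
    private static int instances = 0;

    public NoInstancesYet() {
        instances++;               // count every construction
    }

    public static void main(String[] args) {
        // main runs before any instance of this class exists
        System.out.println("instances before: " + instances);
        new NoInstancesYet();
        System.out.println("instances after: " + instances);
    }
}
```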
A: Because otherwise it would need an instance of the object in order to be executed. But it must be callable from scratch, without constructing an object first, since it is usually the task of the main() function (the bootstrap) to parse the arguments and construct the objects, usually by using these arguments/program parameters.
A: Let me explain these things in a much simpler way:
public static void main(String args[])
All Java applications, except applets, start their execution from main().
The keyword public is an access modifier which allows the member to be called from outside the class.
static is used because it allows main() to be called without having to instantiate a particular instance of that class.
void indicates that main() does not return any value.
A: What is the meaning of public static void main(String args[])?
*
*public is an access specifier, meaning anyone can access/invoke it, such as the JVM (Java Virtual Machine).
*static allows main() to be called before an object of the class has been created. This is necessary because main() is called by the JVM before any objects are made. Since it is static, it can be invoked directly via the class.
class demo {
private int length;
private static int breadth;
void output(){
length=5;
System.out.println(length);
}
static void staticOutput(){
breadth=10;
System.out.println(breadth);
}
public static void main(String args[]){
demo d1=new demo();
d1.output(); // output() is not static, so here
// we need to create an object
staticOutput(); // staticOutput() is static, so here
// we don't need to create an object; the same holds for main
/* Although:
demo.staticOutput(); Works fine
d1.staticOutput(); Works fine */
}
}
Similarly, we sometimes use static for user-defined methods so that we do not need to create objects.
*void indicates that the main() method being declared
does not return a value.
*String[] args specifies the only parameter in the main() method.
args - a parameter which contains an array of objects of class type String.
A:
The public static void keywords mean the Java virtual machine (JVM) interpreter can call the program's main method to start the program (public) without creating an instance of the class (static), and the program does not return data to the Java VM interpreter (void) when it ends.
Source:
Essentials, Part 1, Lesson 2: Building Applications
A: I don't know whether the JVM calls the main method before the objects are instantiated... but there is a far more powerful reason why the main() method is static: when the JVM calls the main method of a class (say, Person), it invokes it as "Person.main()". You see, the JVM invokes it by the class name. That is why the main() method is supposed to be static and public, so that it can be accessed by the JVM.
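That class-name-based invocation can be sketched with reflection: a static method is looked up on the class and invoked with a null receiver, since no instance is involved. (Names here are illustrative, and the real JVM start-up is implemented natively, not via this API.)

```java
import java.lang.reflect.Method;

public class Launcher {
    public static class Person {
        public static void main(String[] args) {
            System.out.println("Person.main invoked");
        }
    }

    public static void main(String[] args) throws Exception {
        // Look the method up on the class and invoke it with a null
        // receiver: legal precisely because the method is static.
        Method m = Person.class.getMethod("main", String[].class);
        m.invoke(null, (Object) new String[0]);
    }
}
```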
Hope it helped. If it did, let me know by commenting.
A: static - When the JVM makes a call to the main method there is no object that exists for the class being called therefore it has to have static method to allow invocation from class.
A: Static methods don't require any object. It runs directly so main runs directly.
A: The static keyword is used on the main method because no instantiation takes place before main runs, so an object cannot be constructed first and invoked; as a result we use the static keyword on the main method.
In the JVM, memory for a class is allocated when the class is loaded, and all static members live in that memory. If we make main static, it is already present there and accessible to the JVM (Class.main(...)), so the main method can be called without any object ever needing to be created on the heap.
A: It is just a convention as we can see here:
The method must be declared public and static, it must not return any
value, and it must accept a String array as a parameter. By default,
the first non-option argument is the name of the class to be invoked.
A fully-qualified class name should be used. If the -jar option is
specified, the first non-option argument is the name of a JAR archive
containing class and resource files for the application, with the
startup class indicated by the Main-Class manifest header.
http://docs.oracle.com/javase/1.4.2/docs/tooldocs/windows/java.html#description
A: From java.sun.com (there's more information on the site) :
The main method is static to give the Java VM interpreter a way to start the class without creating an instance of the control class first. Instances of the control class are created in the main method after the program starts.
My understanding has always been simply that the main method, like any static method, can be called without creating an instance of the associated class, allowing it to run before anything else in the program. If it weren't static, you would have to instantiate an object before calling it-- which creates a 'chicken and egg' problem, since the main method is generally what you use to instantiate objects at the beginning of the program.
A: Any method declared as static in Java belongs to the class itself.
Again, a static method of a particular class can be accessed by referring to the class, like Class_name.method_name();
so a class need not be instantiated before accessing a static method.
Hence the main() method is declared as static, so that it can be accessed without creating an object of that class.
We save the program with the name of the class in which the main method is present (or, for classes without a main() method, the class from which the program should begin its execution). So, in the above-mentioned way:
Class_name.method_name();
the main method can be accessed.
In brief, when the program is compiled it searches for a main() method with String arguments, i.e. main(String args[]), in the class mentioned (i.e. by the name of the program); and since at the beginning it has no way to instantiate that class, the main() method is declared static.
A: Basically, we make those data members and member functions static that do not perform any task related to a particular object. In the case of the main method, we make it static because it has nothing to do with any object: the main method always runs whether we create an object or not.
A: There is a simple reason behind it: an object is not required to call a static method. If main() were a non-static method, the Java virtual machine would have to create an object first and then call the main() method, which would lead to extra memory allocation.
A: The main method of the program has the reserved word static which means it is allowed to be used in the static context. A context relates to the use of computer memory during the running of the program. When the virtual machine loads a program, it creates the static context for it, allocating computer memory to store the program and its data, etc.. A dynamic context is certain kind of allocation of memory which is made later, during the running of the program. The program would not be able to start if the main method was not allowed to run in the static context.
A: The main method always needs to be static because at run time the JVM does not create any object in order to call the main method, and, as we know, in Java static methods are the only methods that can be called using just the class name; so the main method always needs to be static.
For more information, see this video: https://www.youtube.com/watch?v=Z7rPNwg-bfk&feature=youtu.be
A: Because static members are not tied to any specific instance, the main method does not require creating an object of its class, yet it can still refer to all other classes.
A: static indicates that this method is a class method and can be called without requiring any object of the class.
A: Execution of a program starts from main(), and Java is a purely object-oriented language in which objects are declared inside main(). That means main() is called before any object is created, so if main() were non-static an object would be needed in order to call it; static means no object is needed.
A: It's a frequently asked question why main() is static in Java.
Answer: We know that in Java execution starts from main(), invoked by the JVM. When the JVM executes main(), the class which contains main() has not been instantiated, and we can't call a non-static method without a reference to an object of its class. So to call it we make it static; the class loader loads all static methods into the JVM's memory space, from where the JVM can call them directly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "537"
} |
Q: Links with and without the .aspx extention Is is possible to configure a server to allow using links with and without using the .aspx extention.
If yes, how could I go about setting this up.
I'm working on a client site that is using Umbraco. I know it has built-in friendly-URL capability. Unfortunately the site is already live, so turning the feature on across the whole lot of links isn't an option.
The problem is they want to use promotional urls like www.sitename.com/promotion without having to append the .aspx extention. And we don't want to go through the trouble of enabling url rewriting site wide and having to track down all the broken links.
A: Scott Guthrie has a good post on this.
http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx
A: I've done this before by writing a simple HttpModule, a couple things to note:
*
*You need to point 404 errors in IIS to an .aspx page, otherwise IIS won't invoke the ASP.NET runtime and the HttpModule will never be hit.
*This works best to catch and redirect from vanity urls, not as a full featured urlrewrite.
public class UrlRewrite : IHttpModule
{
public void Init(HttpApplication application)
{
application.BeginRequest += (new EventHandler(this.Application_BeginRequest));
}
private void Application_BeginRequest(Object source, EventArgs e)
{
// The RawUrl will look like:
// http://domain.com/404.aspx;http://domain.com/Posts/SomePost/
if (HttpContext.Current.Request.RawUrl.Contains(";")
&& HttpContext.Current.Request.RawUrl.Contains("404.aspx"))
{
// This places the originally entered URL into url[1]
string[] url = HttpContext.Current.Request.RawUrl.ToString().Split(';');
// Now parse the URL and redirect to where you want to go,
// you can use a XML file to store mappings between short urls and real ourls.
string newUrl = parseYourUrl(url[1]);
HttpContext.Current.Response.Redirect(newUrl);
}
// If we get here, then the actual contents of 404.aspx will get loaded.
}
public void Dispose()
{
// Needed for implementing the interface IHttpModule.
}
}
A: Half way down under the section Completely Controlling the URI provides links and a number of methods of accomplishing this:
http://blogs.msdn.com/bags/archive/2008/08/22/rest-in-wcf-part-ix-controlling-the-uri.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Upgrading DOS Batch files for Windows Has anyone had any recent requirements for programming automated DOS Batch style tasks on a Windows box?
I've got some automation to do and I'd rather not sit and write a pile of .BAT files in Notepad if there is a better way of automating these tasks: mainly moving of files under certain date and time conditions, as well as triggering Windows applications before and after moving the files.
I am thinking along the lines of an IDE that has all the DOS commands 'available' to the editor with the correct parameter syntax checking. Is there anything like this out there, or should I be solving this problem with something other than .BAT files?
A: For simple Windows automation beyond BAT files, VBScript and Powershell might be worth a look. If you're wondering where to start first, VBScript+Windows Task Scheduler would be the first place I'd start. Copying a file with VBS can be as simple as:
Dim objFSO
Set objFSO = CreateObject ("Scripting.FileSystemObject")
If objFSO.FileExists("C:\source\your_file.txt") Then
objFSO.CopyFile "C:\source\your_file.txt", "C:\destination\your_file.txt"
EndIf
A: Try Python.
A: Definitely PowerShell. It's Microsofts new shell with lots of interesting possibilities and great extensibility.
A: Windows 98SE and up have Windows Script Host built in, which lets you use VBScript to automate tasks (see for example http://www.windowsdevcenter.com/pub/a/oreilly/windows/news/vbscriptpr_0201.html).
A: I'd recommend Python over Ruby as a Windows scripting language. Python's windows-support is much more mature than Ruby's.
A: I use AutoIt for this since PowerShell is not by default available at all machines. AutoIt offers a simple language with lots of default code you can re-use. AutoIt can compile to .exe.
I'm not sure what "moving files under a certain condition" entails, but I once wrote a Robocopy controller script (using AutoIt) via which you can set up a copy job that will be passed to Robocopy.exe ("robust file copy", a copy program which can be downloaded from Microsoft and is included by default with Windows Vista to replace xcopy). Maybe this free script of mine can be of assistance.
A: I personally use Python or PowerShell
for this kind of tasks.
A: There's an IDE for Powershell here:
PowerGUI
A: vbscript/WSH is actually what Microsoft wants you to use - unfortunately, I've written a few of those and it is not pleasant -
I totally agree with Mikael - if you know what systems will be running the scripts and you can install interpretters on them, go with a scripting language like Python or Ruby
Of course it depends on what type of automation you need to do - if you're messing with the OS or Active Directory settings, go with WSH - but for your average file housekeeping, use Python or Ruby
A: Although this isn't exactly what you're looking for, I'd opt for Perl. Lacking the GUI, Perl would allow you to complete your task quickly, and it's "glue" features would be helpful in future tasks you might have. Also, if one day you'll have to do similar things under another OS (which is not Windows), PowerShell might not be available, and your knowledge in Perl would come in handy.
Other than that - Perl's closest brother under Windows is definitely PowerShell.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Why isn't my NHibernate bag collection setting the 'parent id' of the children dynamically? I have a new object with a collection of new objects within it on some property as an IList. I see through sql profiler two insert queries being executed.. one for the parent, which has the new guid id, and one for the child, however, the foreign-key on the child that references the parent, is an empty guid. Here is my mapping on the parent:
<id name="BackerId">
<generator class="guid" />
</id>
<property name="Name" />
<property name="PostCardSizeId" />
<property name="ItemNumber" />
<bag name="BackerEntries" table="BackerEntry" cascade="all" lazy="false" order-by="Priority">
<key column="BackerId" />
<one-to-many class="BackerEntry" />
</bag>
On the Backer.cs class, I defined BackerEntries property as
IList<BackerEntry>
When I try to SaveOrUpdate the passed in entity I get the following results in sql profiler:
exec sp_executesql N'INSERT INTO Backer (Name, PostCardSizeId, ItemNumber, BackerId) VALUES (@p0, @p1, @p2, @p3)',N'@p0 nvarchar(3),@p1 uniqueidentifier,@p2 nvarchar(3),@p3
uniqueidentifier',@p0=N'qaa',@p1='BC95E7EB-5EE8-44B2-82FF30F5176684D',@p2=N'qaa',@p3='18FBF8CE-FD22-4D08-A3B1-63D6DFF426E5'
exec sp_executesql N'INSERT INTO BackerEntry (BackerId, BackerEntryTypeId, Name, Description, MaxLength, IsRequired, Priority, BackerEntryId) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7)',N'@p0 uniqueidentifier,@p1 uniqueidentifier,@p2 nvarchar(5),@p3 nvarchar(5),@p4 int,@p5 bit,@p6 int,@p7 uniqueidentifier',@p0='00000000-0000-0000-0000-000000000000',@p1='2C5BDD33-5DD3-42EC-AA0E-F1E548A5F6E4',@p2=N'qaadf',@p3=N'wasdf',@p4=0,@p5=1,@p6=0,@p7='FE9C4A35-6211-4E17-A75A-60CCB526F1CA'
As you can see, it is not replacing the empty GUID in BackerId on the child with the real GUID of the parent.
Finally, the exception throw is:
"NHibernate.Exceptions.GenericADOException: could not insert: [CB.ThePostcardCompany.MiddleTier.BackerEntry][SQL: INSERT INTO BackerEntry (BackerId, BackerEntryTypeId, Name, Description, MaxLength, IsRequired, Priority, BackerEntryId) VALUES (?, ?, ?, ?, ?, ?, ?, ?)] ---> System.Data.SqlClient.SqlException: The INSERT statement conflicted with the FOREIGN KEY constraint
EDIT: SOLVED! The first answer below pointed me in the correct direction. I needed to add the back reference in the child mapping and class. This allowed it to work in a purely .NET way; however, when accepting JSON there was a disconnect, so I had to come up with some quirky code to 're-attach' the children.
A: You may need to add NOT-NULL="true" to your mapping class:
<bag name="BackerEntries" table="BackerEntry" cascade="all" lazy="false" order-by="Priority">
<key column="BackerId" not-null="true"/>
<one-to-many class="BackerEntry" />
</bag>
as well as make sure that you have the reverse of the mapping defined for the child class:
<many-to-one name="parent" column="PARENT_ID" not-null="true"/>
I had similar issues with hibernate on my current project with parent-child relationships, and this was a part of the solution.
A: I had this problem and it took me forever to figure out.
The Child table has to allow nulls on it's parent foreign key.
NHibernate likes to save the children with NULL in the foreign key column and then go back and update with the correct ParentId.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: I'm using Python regexes in a criminally inefficient manner My goal here is to create a very simple template language. At the moment, I'm working on replacing a variable with a value, like this:
This input:
The Web
Should produce this output:
The Web This Is A Test Variable
I've got it working. But looking at my code, I'm running multiple identical regexes on the same strings -- that just offends my sense of efficiency. There's got to be a better, more Pythonic way. (It's the two "while" loops that really offend.)
This does pass the unit tests, so if this is silly premature optimization, tell me -- I'm willing to let this go. There may be dozens of these variable definitions and uses in a document, but not hundreds. But I suspect there's obvious (to other people) ways of improving this, and I'm curious what the StackOverflow crowd will come up with.
def stripMatchedQuotes(item):
MatchedSingleQuotes = re.compile(r"'(.*)'", re.LOCALE)
MatchedDoubleQuotes = re.compile(r'"(.*)"', re.LOCALE)
item = MatchedSingleQuotes.sub(r'\1', item, 1)
item = MatchedDoubleQuotes.sub(r'\1', item, 1)
return item
def processVariables(item):
VariableDefinition = re.compile(r'<%(.*?)=(.*?)%>', re.LOCALE)
VariableUse = re.compile(r'<%(.*?)%>', re.LOCALE)
Variables={}
while VariableDefinition.search(item):
VarName, VarDef = VariableDefinition.search(item).groups()
VarName = stripMatchedQuotes(VarName).upper().strip()
VarDef = stripMatchedQuotes(VarDef.strip())
Variables[VarName] = VarDef
item = VariableDefinition.sub('', item, 1)
while VariableUse.search(item):
VarName = stripMatchedQuotes(VariableUse.search(item).group(1).upper()).strip()
item = VariableUse.sub(Variables[VarName], item, 1)
return item
A: sub can take a callable as it's argument rather than a simple string. Using that, you can replace all variables with one function call:
>>> import re
>>> var_matcher = re.compile(r'<%(.*?)%>', re.LOCALE)
>>> string = '<%"TITLE"%> <%"SHMITLE"%>'
>>> values = {'"TITLE"': "I am a title.", '"SHMITLE"': "And I am a shmitle."}
>>> var_matcher.sub(lambda m: values[m.group(1)], string)
'I am a title. And I am a shmitle.'
Follow eduffy.myopenid.com's advice and keep the compiled regexes around.
The same recipe can be applied to the first loop, only there you need to store the value of the variable first, and always return "" as replacement.
A: Never create your own programming language. Ever. (I used to have an exception to this rule, but not any more.)
There is always an existing language you can use which suits your needs better. If you elaborated on your use-case, people may help you select a suitable language.
A: Creating a templating language is all well and good, but shouldn't one of the goals of the templating language be easy readability and efficient parsing? The example you gave seems to be neither.
As Jamie Zawinski famously said:
Some people, when confronted with a
problem, think "I know, I'll use
regular expressions!" Now they have
two problems.
If regular expressions are a solution to a problem you have created, the best bet is not to write a better regular expression, but to redesign your approach to eliminate their use entirely. Regular expressions are complicated, expensive, hugely difficult to maintain, and (ideally) should only be used for working around a problem someone else created.
A: The first thing that may improve things is to move the re.compile outside the function. The compilation is cached, but there is a speed hit in checking this to see if its compiled.
Another possibility is to use a single regex as below:
MatchedQuotes = re.compile(r"(['\"])(.*)\1", re.LOCALE)
item = MatchedQuotes.sub(r'\2', item, 1)
Finally, you can combine this into the regex in processVariables. Taking Torsten Marek's suggestion to use a function for re.sub, this improves and simplifies things dramatically.
VariableDefinition = re.compile(r'<%(["\']?)(.*?)\1=(["\']?)(.*?)\3%>', re.LOCALE)
VarRepl = re.compile(r'<%(["\']?)(.*?)\1%>', re.LOCALE)
def processVariables(item):
vars = {}
def findVars(m):
vars[m.group(2).upper()] = m.group(4)
return ""
item = VariableDefinition.sub(findVars, item)
return VarRepl.sub(lambda m: vars[m.group(2).upper()], item)
print processVariables('<%"TITLE"="This Is A Test Variable"%>The Web <%"TITLE"%>')
Here are my timings for 100000 runs:
Original : 13.637
Global regexes : 12.771
Single regex : 9.095
Final version : 1.846
[Edit] Add missing non-greedy specifier
[Edit2] Added .upper() calls so case insensitive like original version
A: You can match both kind of quotes in one go with r"(\"|')(.*?)\1" - the \1 refers to the first group, so it will only match matching quotes.
A: You're calling re.compile quite a bit. A global variable for these wouldn't hurt here.
A: If a regexp only contains one .* wildcard and literals, then you can use find and rfind to locate the opening and closing delimiters.
If it contains only a series of .*? wildcards, and literals, then you can just use a series of find's to do the work.
If the code is time-critical, this switch away from regexp's altogether might give a little more speed.
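For instance, a minimal sketch of the find/rfind idea (the sample string is mine, not from the question):

```python
# One .* wildcard between literal delimiters: no regex machinery needed.
text = 'prefix "payload here" suffix'
start = text.find('"')   # first delimiter
end = text.rfind('"')    # last delimiter
payload = text[start + 1:end]
print(payload)  # payload here
```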
Also, it looks to me like this is an LL-parsable language. You could look for a library that can already parse such things for you. You could also use recursive calls to do a one-pass parse -- for example, you could implement your processVariables function to only consume up to the first quote, and then call a quote-matching function to consume up to the next quote, etc.
A: Why not use Mako? Seriously. What feature do you require that Mako doesn't have? Perhaps you can adapt or extend something that already works.
A: Don't call search twice in a row (in the loop conditional, and the first statement in the loop). Call (and cache the result) once before the loop, and then in the final statement of the loop.
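In other words (a generic sketch of that loop shape; the pattern and input are illustrative, not from the question's code):

```python
import re

pattern = re.compile(r"\d+")
text = "a1 b22 c333"

numbers = []
match = pattern.search(text)  # one search before the loop
while match:
    numbers.append(match.group())
    match = pattern.search(text, match.end())  # one search at the end of the loop
print(numbers)  # ['1', '22', '333']
```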
A: Why not use XML and XSLT instead of creating your own template language? What you want to do is pretty easy in XSLT.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Sieve of Eratosthenes in Erlang I'm in the process of learning Erlang. As an exercise I picked up the Sieve of Eratosthenes algorithm for generating prime numbers. Here is my code:
-module(seed2).
-export([get/1]).
get(N) -> WorkList = lists:duplicate(N, empty),
get(2, N, WorkList, []).
get(thats_the_end, _N, _WorkList, ResultList) -> lists:reverse(ResultList);
get(CurrentPrime, N, WorkList, ResultList) -> ModWorkList = markAsPrime(CurrentPrime, N, WorkList),
NextPrime = findNextPrime(CurrentPrime + 1, N, WorkList),
get(NextPrime, N, ModWorkList, [CurrentPrime|ResultList]).
markAsPrime(CurrentPrime, N, WorkList) when CurrentPrime =< N -> WorkListMod = replace(CurrentPrime, WorkList, prime),
markAllMultiples(CurrentPrime, N, 2*CurrentPrime, WorkListMod).
markAllMultiples(_ThePrime, N, TheCurentMark, WorkList) when TheCurentMark > N -> WorkList;
markAllMultiples(ThePrime, N, TheCurrentMark, WorkList) -> WorkListMod = replace(TheCurrentMark, WorkList, marked),
markAllMultiples(ThePrime, N, TheCurrentMark + ThePrime, WorkListMod).
findNextPrime(Iterator, N, _WorkList) when Iterator > N -> thats_the_end;
findNextPrime(Iterator, N, WorkList) -> I = lists:nth(Iterator, WorkList),
if
I =:= empty -> Iterator;
true -> findNextPrime(Iterator + 1, N, WorkList)
end.
replace(N, L, New)-> {L1, [_H|L2]} = lists:split(N - 1, L),
lists:append(L1, [New|L2]).
This code actually works :). The problem is that I have a feeling that it is not the best possible implementation.
My question is: what would be the "erlangish" way of implementing the Sieve of Eratosthenes?
EDIT: OK, Andreas's solution is very good, but it is slow. Any ideas how to improve that?
A: Here's my sieve implementation which uses list comprehensions and tries to be tail recursive. I reverse the list at the end so the primes are sorted:
primes(Prime, Max, Primes,Integers) when Prime > Max ->
lists:reverse([Prime|Primes]) ++ Integers;
primes(Prime, Max, Primes, Integers) ->
[NewPrime|NewIntegers] = [ X || X <- Integers, X rem Prime =/= 0 ],
primes(NewPrime, Max, [Prime|Primes], NewIntegers).
primes(N) ->
primes(2, round(math:sqrt(N)), [], lists:seq(3,N,2)). % skip odds
Takes approx 2.8 ms to calculate primes up to 2 mil on my 2ghz mac.
A: I approached the problem by using concurrent processing.
Source
A: Here's a simple (but not terribly fast) sieve implementation:
-module(primes).
-export([sieve/1]).
-include_lib("eunit/include/eunit.hrl").
sieve([]) ->
[];
sieve([H|T]) ->
List = lists:filter(fun(N) -> N rem H /= 0 end, T),
[H|sieve(List)];
sieve(N) ->
sieve(lists:seq(2,N)).
A: I haven't studied these in detail, but I've tested my implementation below (that I wrote for a Project Euler challenge) and it's orders of magnitude faster than the above two implementations. It was excruciatingly slow until I eliminated some custom functions and instead looked for lists: functions that would do the same. It's good to learn the lesson to always see if there's a library implementation of something you need to do - it'll usually be faster! This calculates the sum of primes up to 2 million in 3.6 seconds on a 2.8GHz iMac...
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Sum of all primes below Max. Will use sieve of Eratosthenes
sum_primes(Max) ->
LastCheck = round(math:sqrt(Max)),
All = lists:seq(3, Max, 2), %note are creating odd-only array
Primes = sieve(All, Max, LastCheck),
%io:format("Primes: ~p~n", [Primes]),
lists:sum(Primes) + 2. %adding back the number 2 to the list
%sieve of Eratosthenes
sieve(All, Max, LastCheck) ->
sieve([], All, Max, LastCheck).
sieve(Primes, All, Max, LastCheck) ->
%swap the first element of All onto Primes
[Cur|All2] = All,
Primes2 = [Cur|Primes],
case Cur > LastCheck of
true ->
lists:append(Primes2, All2); %all known primes and all remaining from list (not sieved) are prime
false ->
All3 = lists:filter(fun(X) -> X rem Cur =/= 0 end, All2),
sieve(Primes2, All3, Max, LastCheck)
end.
A: I kind of like this subject, primes that is, so I started to modify BarryE's code a bit and I manged to make it about 70% faster by making my own lists_filter function and made it possible to utilize both of my CPUs. I also made it easy to swap between to two version. A test run shows:
61> timer:tc(test,sum_primes,[2000000]).
{2458537,142913970581}
Code:
-module(test).
%%-export([sum_primes/1]).
-compile(export_all).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%Sum of all primes below Max. Will use sieve of Eratosthenes
sum_primes(Max) ->
LastCheck = round(math:sqrt(Max)),
All = lists:seq(3, Max, 2), %note are creating odd-only array
%%Primes = sieve(noref,All, LastCheck),
Primes = spawn_sieve(All, LastCheck),
lists:sum(Primes) + 2. %adding back the number 2 to the list
%%sieve of Eratosthenes
sieve(Ref,All, LastCheck) ->
sieve(Ref,[], All, LastCheck).
sieve(noref,Primes, All = [Cur|_], LastCheck) when Cur > LastCheck ->
lists:reverse(Primes, All); %all known primes and all remaining from list (not sieved) are prime
sieve({Pid,Ref},Primes, All=[Cur|_], LastCheck) when Cur > LastCheck ->
Pid ! {Ref,lists:reverse(Primes, All)};
sieve(Ref,Primes, [Cur|All2], LastCheck) ->
%%All3 = lists:filter(fun(X) -> X rem Cur =/= 0 end, All2),
All3 = lists_filter(Cur,All2),
sieve(Ref,[Cur|Primes], All3, LastCheck).
lists_filter(Cur,All2) ->
lists_filter(Cur,All2,[]).
lists_filter(V,[H|T],L) ->
case H rem V of
0 ->
lists_filter(V,T,L);
_ ->
lists_filter(V,T,[H|L])
end;
lists_filter(_,[],L) ->
lists:reverse(L).
%% This is a sloppy implementation ;)
spawn_sieve(All,Last) ->
%% split the job
{L1,L2} = lists:split(round(length(All)/2),All),
Filters = filters(All,Last),
%%io:format("F:~p~n",[Filters]),
L3 = lists:append(Filters,L2),
%%io:format("L1:~w~n",[L1]),
%% io:format("L2:~w~n",[L3]),
%%lists_filter(Cur,All2,[]).
Pid = self(),
Ref1=make_ref(),
Ref2=make_ref(),
erlang:spawn(?MODULE,sieve,[{Pid,Ref1},L1,Last]),
erlang:spawn(?MODULE,sieve,[{Pid,Ref2},L3,Last]),
Res1=receive
{Ref1,R1} ->
{1,R1};
{Ref2,R1} ->
{2,R1}
end,
Res2= receive
{Ref1,R2} ->
{1,R2};
{Ref2,R2} ->
{2,R2}
end,
apnd(Filters,Res1,Res2).
filters([H|T],Last) when H < Last ->
[H|filters(T,Last)];
filters([H|_],_) ->
[H];
filters(_,_) ->
[].
apnd(Filters,{1,N1},{2,N2}) ->
lists:append(N1,subtract(N2,Filters));
apnd(Filters,{2,N2},{1,N1}) ->
lists:append(N1,subtract(N2,Filters)).
subtract([H|L],[H|T]) ->
subtract(L,T);
subtract(L=[A|_],[B|_]) when A > B ->
L;
subtract(L,[_|T]) ->
subtract(L,T);
subtract(L,[]) ->
L.
A: You could show your boss this: http://www.sics.se/~joe/apachevsyaws.html. And some other (classic?) Erlang arguments are:
-nonstop operation, new code can be loaded on the fly.
-easy to debug, no more core dumps to analyse.
-easy to utilize multi core/CPUs
-easy to utilize clusters maybe?
-who wants to deal with pointers and stuff? Is this not the 21st century? ;)
Some pitfalls:
- it might look easy and fast to write something, but the performance can suck. If I want to make something fast I usually end up writing 2-4 different versions of the same function. And often you need to take a hawk-eye approach to problems, which might be a little bit different from what one is used to.
*
*looking up things in lists of more than about 1000 elements is slow; try using ets tables.
*the string "abc" takes a lot more space than 3 bytes, so try to use binaries (which is a pain).
All in all I think the performance issue is something to keep in mind at all times when writing something in Erlang. The Erlang dudes need to work that out, and I think they will.
A: Have a look here to find 4 different implementations for finding prime numbers in Erlang (two of which are "real" sieves) and for performance measurement results:
http://caylespandon.blogspot.com/2009/01/in-euler-problem-10-we-are-asked-to.html
A: Simple enough, implements exactly the algorithm, and uses no library functions (only pattern matching and list comprehension).
Not very powerful, indeed. I only tried to make it as simple as possible.
-module(primes).
-export([primes/1, primes/2]).
primes(X) -> sieve(range(2, X)).
primes(X, Y) -> remove(primes(X), primes(Y)).
range(X, X) -> [X];
range(X, Y) -> [X | range(X + 1, Y)].
sieve([X]) -> [X];
sieve([H | T]) -> [H | sieve(remove([H * X || X <-[H | T]], T))].
remove(_, []) -> [];
remove([H | X], [H | Y]) -> remove(X, Y);
remove(X, [H | Y]) -> [H | remove(X, Y)].
A: Here is my Sieve of Eratosthenes implementation, C&C please:
-module(sieve).
-export([find/2,mark/2,primes/1]).
primes(N) -> [2|lists:reverse(primes(lists:seq(2,N),2,[]))].
primes(_,0,[_|T]) -> T;
primes(L,P,Primes) -> NewList = mark(L,P),
NewP = find(NewList,P),
primes(NewList,NewP,[NewP|Primes]).
find([],_) -> 0;
find([H|_],P) when H > P -> H;
find([_|T],P) -> find(T,P).
mark(L,P) -> lists:reverse(mark(L,P,2,[])).
mark([],_,_,NewList) -> NewList;
mark([_|T],P,Counter,NewList) when Counter rem P =:= 0 -> mark(T,P,Counter+1,[P|NewList]);
mark([H|T],P,Counter,NewList) -> mark(T,P,Counter+1,[H|NewList]).
A: Here is my sample
S = lists:seq(2,100),
lists:foldl(fun(A,X) -> X--[A] end,S,[Y||X<-S,Y<-S,X<math:sqrt(Y)+1,Y rem X==0]).
:-)
A: My fastest code so far (faster than Andreas's) uses the array module:
-module(seed4).
-export([get/1]).
get(N) -> WorkList = array:new([{size, N}, {default, empty}]),
get(2, N, WorkList, []).
get(thats_the_end, _N, _WorkList, ResultList) -> lists:reverse(ResultList);
get(CurrentPrime, N, WorkList, ResultList) -> ModWorkList = markAsPrime(CurrentPrime, N, WorkList),
NextPrime = findNextPrime(CurrentPrime + 1, N, WorkList),
get(NextPrime, N, ModWorkList, [CurrentPrime|ResultList]).
markAsPrime(CurrentPrime, N, WorkList) when CurrentPrime =< N -> WorkListMod = replace(CurrentPrime, WorkList, prime),
markAllMultiples(CurrentPrime, N, 2*CurrentPrime, WorkListMod).
markAllMultiples(_ThePrime, N, TheCurentMark, WorkList) when TheCurentMark > N -> WorkList;
markAllMultiples(ThePrime, N, TheCurrentMark, WorkList) -> WorkListMod = replace(TheCurrentMark, WorkList, marked),
markAllMultiples(ThePrime, N, TheCurrentMark + ThePrime, WorkListMod).
findNextPrime(Iterator, N, _WorkList) when Iterator > N -> thats_the_end;
findNextPrime(Iterator, N, WorkList) -> I = array:get(Iterator - 1, WorkList),
if
I =:= empty -> Iterator;
true -> findNextPrime(Iterator + 1, N, WorkList)
end.
replace(N, L, New) -> array:set(N - 1, New, L).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: C header file loops I have a couple of header files, which boil down to:
tree.h:
#include "element.h"
typedef struct tree_
{
struct tree_ *first_child;
struct tree_ *next_sibling;
int tag;
element *obj;
....
} tree;
and element.h:
#include "tree.h"
typedef struct element_
{
tree *tree_parent;
char *name;
...
} element;
The problem is that they both reference each other, so tree needs element to be included, and element needs tree to be included.
This doesn't work because to define the 'tree' structure, the element structure must be already known, but to define the element structure, the tree structure must be known.
How to resolve these types of loops (I think this may have something to do with 'forward declaration'?)?
A: The crucial observation here is that the element doesn't need to know the structure of tree, since it only holds a pointer to it. The same goes for the tree. All each needs to know is that there exists a type with the relevant name, not what's in it.
So in tree.h, instead of:
#include "element.h"
do:
typedef struct element_ element;
This "declares" the types "element" and "struct element_" (says they exist), but doesn't "define" them (say what they are). All you need to store a pointer-to-blah is that blah is declared, not that it is defined. Only if you want to dereference it (for example to read the members) do you need the definition. Code in your ".c" file needs to do that, but in this case your headers don't.
Some people create a single header file which forward-declares all the types in a cluster of headers, and then each header includes that, instead of working out which types it really needs. That's neither essential nor completely stupid.
The answers about include guards are wrong - they're a good idea in general, and you should read about them and get yourself some, but they don't solve your problem in particular.
A: The correct answer is to use include guards, and to use forward declarations.
Include Guards
/* begin foo.h */
#ifndef _FOO_H
#define _FOO_H
// Your code here
#endif
/* end foo.h */
Visual C++ also supports #pragma once. It is a non-standard preprocessor directive. In exchange for compiler portability, you reduce the possibility of preprocessor name collisions and increase readability.
Forward Declarations
Forward declare your structs. If the members of a struct or class are not explicitly needed, you can declare their existence at the beginning of a header file.
struct tree; /* element.h */
struct element; /* tree.h */
A: I think the problem here is not the missing include guard but the fact that the two structures need each other in their definition. So it's a typedef chicken-and-egg problem.
The way to solve these in C or C++ is to do forward declarations on the type. If you tell the compiler that element is a structure of some sort, the compiler is able to generate a pointer to it.
E.g.
Inside tree.h:
// tell the compiler that element is a structure typedef:
typedef struct element_ element;
typedef struct tree_ tree;
struct tree_
{
tree *first_child;
tree *next_sibling;
int tag;
// now you can declare pointers to the structure.
element *obj;
};
That way you don't have to include element.h inside tree.h anymore.
You should also put include-guards around your header-files as well.
A: Read about forward declarations.
ie.
// tree.h:
#ifndef TREE_H
#define TREE_H
struct element;
struct tree
{
struct element *obj;
....
};
#endif
// element.h:
#ifndef ELEMENT_H
#define ELEMENT_H
struct tree;
struct element
{
struct tree *tree_parent;
...
};
#endif
A: These are known as "once-only headers." See http://developer.apple.com/DOCUMENTATION/DeveloperTools/gcc-4.0.1/cpp/Once_002dOnly-Headers.html#Once_002dOnly-Headers
A: Include guards are useful, but don't address the poster's problem, which is the recursive dependency between two data structures.
The solution here is to declare tree and/or element as pointers to structs within the header file, so you don't need to include the .h
Something like:
struct element_;
typedef struct element_ element;
At the top of tree.h should be enough to remove the need to include element.h
With a partial declaration like this you can only do things with element pointers that don't require the compiler to know anything about the layout.
A: IMHO the best way is to avoid such loops, because they are a sign of physical coupling that should be avoided.
For example (as far as I remember) "Object-Oriented Design Heuristics" proposes avoiding include guards because they only mask the cyclic (physical) dependency.
Another approach is to predeclare the structs like this:
element.h:
struct tree_;
struct element_
{
struct tree_ *tree_parent;
char *name;
};
tree.h:
struct element_;
struct tree_
{
struct tree_* first_child;
struct tree_* next_sibling;
int tag;
struct element_ *obj;
};
A: Forward declaration is the way you can guarantee that there will be a type of structure which will be defined later on.
A: I don't like forward declarations because they are redundant and buggy. If you want all your declarations in the same place then you should use includes and header files with include guards.
You should think about includes as a copy-paste: when the C preprocessor finds an #include line, it just places the entire content of myheader.h in the location where the #include line was found.
Well, if you write include guards, myheader.h's code will be pasted only once, where the first #include was found.
If your program compiles from several object files and the problem persists, then you should use forward declarations between object files (it's like using extern) in order to keep only a type declaration for all object files (the compiler mixes all declarations in the same table and identifiers must be unique).
A: A simple solution is to just not have separate header files. After all, if they're dependent on each other you're never going to use one without the other, so why separate them? You can have separate .c files that both use the same header but provide the more focused functionality.
I know this doesn't answer the question of how to use all the fancy stuff correctly, but I found it helpful when I was looking for a quick fix to a similar problem.
A: So many answers here have mentioned "include guards" and "forward declaration" but none of them really intends to solve the issue the OP is currently facing. A third ".h" file is definitely not the answer. "Include guards", if used properly, can break the "#include loop" and eventually lead to cleaner project structure. Why even bother creating another header file just for the typedefs if you already have two?? Your header files should be like this:
/* a.h - dependency of b.h */
#ifndef _A_H
#define _A_H
#include "b.h"
typedef struct a_p {
b_t *b;
} a_t;
#endif // _A_H
/* b.h - dependency of a.h */
#ifndef _B_H
#define _B_H
typedef struct b_p b_t;
/**
* !!!
* to avoid recursion, only include "a.h"
* when "a.h" isn't included before
*/
#ifndef _A_H
#include "a.h"
typedef struct b_p {
a_t a;
} b_t;
#endif
#endif // _B_H
To use both of the header files you only need to include one, that is, the one that also unconditionally includes another (in this case, a.h). But if you wanna, you may also include "b.h" as well. But it's not going to make any difference (due to forward declaration) anyway.
#include "a.h"
int main() {
a_t aigh;
return 0;
}
Voila! This is it! No extra includes no nothing. We got em bois!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do I get a floating footer to stick to the bottom of the viewport in IE 6? I know this would be easy with position:fixed, but unfortunately I'm stuck with supporting IE 6. How can I do this? I would rather use CSS to be clean, but if I have to use Javascript, that's not the end of the world. In my current implementation I have a "floating footer" that floats above the main content area and is positioned with Javascript. The implementation I have right now is not particularly elegant even with the Javascript, so my questions are:
*
*Is there a way to do this without Javascript?
*If I have to use Javascript, are there any "nice" solutions to this floating footer problem? By "nice" I mean something that will work across browsers, doesn't overload the browser's resources (since it will have to recalculate often), and is elegant/easy to use (i.e. it would be nice to write something like new FloatingFooter("floatingDiv")).
I'm going to guess there is no super easy solution that has everything above, but something I can build off of would be great.
Finally, just a more general question. I know this problem is a big pain to solve, so what are other UI alternatives rather than having footer content at the bottom of every page? On my particular site, I use it to show transitions between steps. Are there other ways I could do this?
A: If you do not want to use the conditional comments, so that you can put the CSS in a separate file, use !important. Something like this:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<style>
html {
overflow-x: auto;
overflow-y: scroll !important;
overflow-y: hidden; /* ie6 value b/c !important ignored */
}
body {
padding:0;
margin:0;
height: 100%;
overflow-y: hidden !important;
overflow-y: scroll; /* ie6 value b/c !important ignored */
}
#bottom {
background-color:#ddd;
position: fixed !important;
position: absolute; /* ie6 value b/c !important ignored */
width: 100%;
bottom: 0px;
z-index: 5;
height:100px;
}
#content {
font-size: 50px;
}
</style>
</head>
<body>
<div id="bottom">
keep this text in the viewport at all times
</div>
<div id="content">
Let's create enough content to force scroll bar to appear.
Then we can ensure this works when content overflows.
One quick way to do this is to give this text a large font
and throw on some extra line breaks.
<br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
<br/><br/><br/><br/><br/><br/><br/><br/>
</div>
</body>
</html>
A: I have done this using CSS expressions in the past.
Try something like this:
.footer {
position: absolute;
top: expression((document.body.clientHeight - myFooterheight) + "px");
}
read more here
and here
A: This may work for you. It works on IE6 and Firefox 2.0.0.17 for me. Give it a shot. I made the footer's height very tall, just for effect. You would obviously change it to what you need. I hope this works for you.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Liquid Footer</title>
<style type="text/css">
.footer {
background-color: #cdcdcd;
height: 180px;
text-align: center;
font-size:10px;
color:#CC0000;
font-family:Verdana;
padding-top: 10px;
width: 100%;
position:fixed;
left: 0px;
bottom: 0px;
}
</style>
<!--[if lte IE 6]>
<style type="text/css">
body {height:100%; overflow-y:auto;}
html {overflow-x:auto; overflow-y:hidden;}
* html .footer {position:absolute;}
</style>
<![endif]-->
</head>
<body>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
This is to expand the content on the page<br>
<div class="footer">-- This is your liquid footer --</div>
</body>
</html>
A: $(function(){
positionFooter();
function positionFooter(){
if($(document).height() < $(window).height()){ // without this height check the footer would always stick to the bottom of the window regardless of the body height; $(document).height() could be a main container/wrapper like $("#main").height(), depending on your code
$("#footer").css({position: "absolute",top:($(window).scrollTop()+$(window).height()-$("#footer").height())+"px"})
}
}
$(window).scroll(positionFooter);
$(window).resize(positionFooter);
});
A: If you set the height to 100% and overflow: auto on the <html/> and <body/> tags, anything with absolute position will behave as fixed. In its most basic form it is pretty funky, with oddly placed scroll bars, but it can be tweaked to give decent results.
example
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<style>
html, body
{
height: 100%;
overflow: auto;
}
.fixed
{
position: absolute;
bottom: 0px;
height: 40px;
background: blue;
width: 100%;
}
</style>
</head>
<body>
<div class="fixed"></div>
overflow....<br />
overflow....<br />
overflow....<br />
overflow....<br />
overflow....<br />
overflow....<br />
overflow....<br />
overflow....<br />
overflow....<br /><!-- ... -->
</body>
</html>
A: If the footer has fixed height and you know it and can hard-code it in CSS, you can do it like this:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<style>
.content
{
position : absolute;
top : 0;
left : 0;
right : 0;
bottom : 50px; /* that's the height of the footer */
overflow : auto;
background-color : blue;
}
.footer
{
position : absolute;
left : 0;
right : 0;
bottom : 0px; /* that's the height of the footer */
height : 50px;
overflow : hidden;
background-color : green;
}
</style>
</head>
<body>
<div class="content">
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
everything from the page goes here
</div>
<div class="footer">
the footer
</div>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How can I build an Adobe Air project with Maven? Has anyone successfully built an Adobe Air application with Maven? If so, what are the steps to get it working?
I have been trying to use flex-mojos to build an Air applications. When I set the packaging type to "aswf", as suggested in the DashboardSamplePom, Maven complains that aswf is an unknown packaging type. I also found their air-super-pom, but could not figure out how to reference it as the parent of my POM.
A: When a plugin declares a new packaging type, like 'aswf', you need to declare it as an extension. In your top-level pom, add the extensions element to the plugin config.
<plugin>
<groupId>...</groupId>
<artifactId>...</artifactId>
<extensions>true</extensions>
...
</plugin>
A: There is an article called Building an AIR Application on the mojos website wiki. It should be able to help you.
A: I've been searching for an answer to this problem as well. There are a couple sites that have proved helpful, though I don't have a full solution yet.
Check these for possible leads:
*
*http://blogs.citytechinc.com/bgloff/?p=17
*http://jchristie.wordpress.com/2009/01/11/compiling-an-air-application-with-maven-and-flex-mojos/
As for the packaging type, most of the information I've found indicates that rather than using aswf as the package type, you'll need to use swf and then convert the compiled swf into your AIR executable by creating an exec task to invoke adt.jar. The links above will show you how to do that much.
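To make the exec/adt idea concrete, here is a rough, untested sketch of what such a step might look like in a POM. This is my own illustration, not taken from the linked articles: the SDK-home property, keystore name, descriptor path, and output names are placeholders you would need to adapt.

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <!-- run adt.jar from the AIR SDK to wrap the compiled swf -->
        <executable>java</executable>
        <arguments>
          <argument>-jar</argument>
          <argument>${air.sdk.home}/lib/adt.jar</argument>
          <argument>-package</argument>
          <argument>-storetype</argument>
          <argument>pkcs12</argument>
          <argument>-keystore</argument>
          <argument>cert.p12</argument>
          <argument>${project.build.finalName}.air</argument>
          <argument>src/main/resources/descriptor.xml</argument>
          <argument>${project.build.finalName}.swf</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```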
As for the air super pom you found, I think there are a few different ones... But to use any super pom, you need to have your flex maven project declare the super pom as the parent, with a block similar to this:
<parent>
<groupId>org.sonatype.flexmojos</groupId>
<artifactId>flexmojos-air-super-pom</artifactId>
<version>3.1-SNAPSHOT</version>
</parent>
However just extending the parent pom may not be enough to get your swf building - once again, see the links above for a more detailed treatment of this problem.
A: I created an AIR Maven template, you can find the details in this github project: https://github.com/branscha/tmplt-airapp. The question is quite old so the versions/frameworks in my solution might not be suitable for you any more. The Flex/Air situation has changed a lot during the last years.
Characteristics of my solution:
*
*I use a Mavenized Apache SDK (13.0), you should probably do the same
thing for your projects to be independent of an external Maven
repository. The procedure to mavenize the SDK is not too difficult.
*Flexmojos 6.0.1
*The hello-world app can be turned into an Android
app.
*The packaging is 'air' (instead of the aswf that was mentioned above)
A: In my case I simply created a new maven project using the
org.graniteds.archetypes graniteds-tide-seam-jpa-hibernate archetype and got this error. I don't know anything about flex, but simply wanted a sample project using Seam. This seemed like a good candidate. But I get
unknown packaging type:swf
even though
the <extensions>true</extensions> element is present in the plugin definition of the generated POM file.
I read somewhere else that this requires a beta version of Maven. Why the heck is Sonatype allowing archetypes that depend on beta versions of Maven in their repository?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I parse a number from a String that may have a leading zero? In ruby I am parsing a date in the following format: 24092008.
I want to convert each section (year, month, date) into a number.
I have split them up using a regex which produces three Strings which I am passing into the Integer constructor.
date =~ /^([\d]{2})([\d]{2})([\d]{4})/
year = Integer($3)
month = Integer($2)
day = Integer($1)
When it hits the month line it crashes as follows:
`Integer': invalid value for Integer: "09" (ArgumentError)
It took me a while to realise that it's interpreting the leading zero as Octal and 09 is not a valid Octal number (it works fine with "07").
Is there an elegant solution to this or should I just test for numbers less than 10 and remove the zero first?
Thanks.
A: Specify base 10
Tell Ruby explicitly that you want to interpret the string as a base 10 number.
Integer("09", 10) # => 9
This is better than .to_i if you want to be strict.
"123abc".to_i # => 123
Integer("123abc", 10) # => ArgumentError
How I figured this out
In irb, method(:Integer) returns #<Method: Object(Kernel)#Integer>. That told me that Kernel owns this method, and I looked up the documentation on Kernel. The method signature shows that it takes a base as the second argument.
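Putting this together with the regex from the original question, a complete sketch of the fix (the variable names are illustrative):

```ruby
date = "24092008"

# Capture day, month, year; passing an explicit base of 10 to Integer
# avoids the octal interpretation of strings with a leading zero.
if date =~ /^(\d{2})(\d{2})(\d{4})/
  day   = Integer($1, 10)
  month = Integer($2, 10)
  year  = Integer($3, 10)
end

puts [day, month, year].inspect  # => [24, 9, 2008]
```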
A: I'm not familiar with regexes, so forgive me if this answer's off-base. I've been assuming that $3, $2, and $1 are strings. Here's what I did in IRB to replicate the problem:
irb(main):003:0> Integer("04")
=> 4
irb(main):004:0> Integer("09")
ArgumentError: invalid value for Integer: "09"
from (irb):4:in `Integer'
from (irb):4
from :0
But it looks like .to_i doesn't have the same issues:
irb(main):005:0> "04".to_i
=> 4
irb(main):006:0> "09".to_i
=> 9
A: Perhaps (0([\d])|([1-9][\d])) in place of ([\d]{2})
You may have to use $2, $4, and $5 in place of $1, $2, $3.
Or if your regexp supports (?:...) then use (?:0([\d])|([1-9][\d]))
Since ruby takes its regexp from perl, this latter version should work.
A: Instead of checking any integer with leading 0 directly. Eg:
Integer("08016") #=> ArgumentError: invalid value for Integer(): "08016"
Create a method to check and rescue for leading 0:
def is_numeric(data)
_is_numeric = true if Integer(data) rescue false
# To deal with Integers with leading 0
if !_is_numeric
_is_numeric = data.split("").all?{|q| Integer(q.to_i).to_s == q }
end
_is_numeric
end
is_numeric("08016") #=> true
is_numeric("A8016") #=> false
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Glassfish: Deploying application in domain failed; Error loading deployment descriptors - invalid bit length repeat I'm trying to deploy a .war through the server's web interface. I get the same error when deploying it as an exploded directory with IDEA.
What is the solution to this problem?
A: The message points to an exception in java.util.zip thus implicating a corrupted packed file. Maybe you could try rebuilding the war file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is _VERSION the only global variable left in Lua 5.1? Puzzled by the Lua 5.0 documentation references to things like _LOADED, LUA_PATH, _ALERT and so on (that I could not use in Lua 5.1), I discovered all of those have been removed and the functionality put elsewhere. Am I right in thinking that the only one global variable left in Lua 5.1 is _VERSION?
A: The docs seem to think that's almost the case....
_G
A global variable (not a function) that holds the global environment
(that is, _G._G = _G). Lua itself does
not use this variable; changing its
value does not affect any environment,
nor vice-versa. (Use setfenv to change
environments.)
It looks like there's also _PROMPT and _PROMPT2, but only when using standalone lua interactively:
If the global variable _PROMPT
contains a string, then its value is
used as the prompt. Similarly, if the
global variable _PROMPT2 contains a
string, its value is used as the
secondary prompt (issued during
incomplete statements). Therefore,
both prompts can be changed directly
on the command line or in any Lua
programs by assigning to _PROMPT.
A: Assuming you don't open any libs, there's also _G, pairs, ipairs and newproxy.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Use the serialVersionUID or suppress warnings? Suppose I want to create a class that, for example, extends HttpServlet. My compiler warns me that my class should have a serialVersionUID. If I know that this object will never be serialized, should I define it or add an annotation to suppress those warnings?
What would you do and why?
A: Even if you know this object will be serialized, there is no need to generate a serialVersionUID, because Java will automatically generate it for you and keep track of changes, so your serialization will work just fine. You should generate it only if you know what you are doing (backward serialization compatibility, manual change tracking, etc.)
So I would say suppressing the warning is the best and safest solution in most cases.
A: It is good to generate an SVUID for every class implementing Serializable. The reason is simple: you never know when it will be serialized, by you or by some 3rd party, and many services can be configured to serialize servlets. Every IDE has a plugin that generates one, or you can just use a template and set svuid = 1L.
A: I don't know Java best practices, but it occurs to me that if you are claiming that serialization will never happen, you could add a writeObject method which throws. Then suppress the warning, safe in the knowledge that it cannot possibly apply to you.
Otherwise someone might in future serialize your object through the parent class, and end up with a default serialized form where:
*
*the form isn't compatible between different versions of your code.
*you've suppressed the warning that this is the case.
Adding an ID sounds like a bodge, since what you really want to do is not serialize. Expecting callers not to serialize your object means that you expect them to "know" when their HttpServlet is of your class. That breach of polymorphism is on your head for having a Serializable object which must not be serialized, and the least you can do is make sure unwary callers know about it.
A: That warning drives me crazy, because every time you subclass a Swing class, you know you're never going to serialize it, but there is that stupid warning. But yeah, I let Eclipse generate one.
A: If you do not plan to serialize instances, add a SuppressWarning.
A generated serial ID can be a bit dangerous. It suggests that you intentionally gave it a serial number and that it is save to serialize and deserialize. It's easy to forget to update the serial number in a newer version of your application where your class is changed. Deserialization will fail if the class fields have been changed. Having a SuppressWarning at least tells the reader of your code that you did not intend to serialize this class.
A: Let Eclipse generate an ID. Quick and easy. Warnings are not to be ignored. Also saves you lots of trouble should you ever come to the point where the object /has/ to be serialized.
A: If you leave out a serialVersionUID, Java will generate one for the class at compile time (and it may differ from one compilation to the next).
When deserializing objects the serialVersionUID of the deserialized object is compared to that of the class in the jvm. If they are different they are considered incompatible and an Exception is thrown. This can happen for instance after upgrading your program and deserializing old classes.
I always use 1L for serialversionUID. It doesn't hurt (compared to the default generated) and it still leaves the option of breaking compatibility later by incrementing the id.
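As a sketch of that convention (the class and field names here are hypothetical, not from the original question), a class declaring serialVersionUID = 1L that survives an in-memory round trip:

```java
import java.io.*;

public class SerialDemo {
    // A serializable class with an explicit serialVersionUID.
    // Keeping it at 1L, and bumping it only on incompatible changes,
    // is the convention described in the answer above.
    static class Message implements Serializable {
        private static final long serialVersionUID = 1L;
        final String text;
        Message(String text) { this.text = text; }
    }

    // Serialize and deserialize an object entirely in memory.
    static Message roundTrip(Message original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Message) in.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(new Message("hello")).text); // prints "hello"
    }
}
```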
A: It depends.
If you use different compilers to compile your source code multiple times, your compiled code could have different serializationIds that will break the serialization. Then you need to stick to a constant serializationId explicitly in your code. It must be static and final and per class (not inheritable).
However, if you always compile your code with a specific compiler and always deploy your code in one shot to all of your VMs, you probably want strict version checking to make sure that at any time there is only one version of your code running. In that case, you should just suppress the warning: if a VM is not deployed successfully and is running an old version of your code, you probably expect an exception during serialization rather than quirky deserialized objects. This happened to be my case; we used to have a very, very large cluster and we needed strict version checking to find out about any deployment issue.
Anyway, probably you should avoid serialization whenever possible since the default serialization is very slow compared to protocol buffers or thrift and does not support cross-language interoperability.
A: I refuse to be terrorized by Eclipse into adding clutter to my code!
I just configure Eclipse to not generate warnings on missing serialVersionUID.
A: Thanks @ Steve Jessop for his answer on this. It was 5 lines of code... hardly a hassle.
I added @SuppressWarnings("serial") just above the class in question.
I also added this method:
private void writeObject(ObjectOutputStream oos) throws IOException {
throw new IOException("This class is NOT serializable.");
}
Hopefully that's what Steve meant :)
A: Please follow this link to get detailed explanation: http://technologiquepanorama.wordpress.com/2009/02/13/what-is-use-of-serialversiouid/
A: If you know your application never serializes things, suppress the warning application-wide. This can be done using javac command-line arguments:
javac -Xlint -Xlint:-serial *******
This way you will have all warnings except "serial". IDE-s and build tools like Maven/SBT/Gradle work fine with that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
} |
Q: Development with a tablet, not a mouse Lifehacker had a post a couple days ago asking people about the best mouse you've ever had and it appears some people have traded their mouse for a tablet.
I'm curious if anyone here has traded their mouse in for a tablet? Does it work well for development? Looking for pros and cons from people who have tried it or are using it.
Thanks.
A: For development, I tend to switch between the keyboard for typing and shortcuts, and the mouse for pointing at things. Given that context:
I've heard no end of praise for this approach from everyone who's used it. They say it's easier on the wrist, it feels more natural, and once you're used to it, it's impossible to go back without missing it. Strong enough praise that I'm planning on switching within the next two paycheques.
A: I second what Dan Udey says. I had no trouble switching to a tablet and on the rare occasions that I use a pointer at all while programming, it is a blessing. One simple example: scrolling through a long list by dragging the scrollbar seems more natural with a tablet than a mouse.
And as an added benefit, when I am in "graphic artist" mode on little projects the tablet is an obvious win.
Spodi: I agree on the keyboard for shortcuts, so the only buttons I use on either the mouse or the tablet are the first and second, both of which are available on the pen; I am not sure what you mean by looking at the tablet to click buttons. As for the speed of getting to the mouse, I tuck the pen between two fingers on my right hand, so it is always right there.
A: I love them so much I have 2 on my desk. You simply can not go wrong, the only time I need/use a mouse is playing games, and I wish I could use the tablet.
A: I strongly dislike a tablet for programming. I even have a tablet laptop I "permanently borrowed" from my girlfriend, and even the idea of using that for any aspect of programming just makes me cringe.
My left hand is always on the keyboard, while my right is jumping to and from the mouse. Because the left is on the keyboard all the time, it is also the most logical hand to perform shortcuts with. Tablets lack the convenience of "quick-switches" due to the pen. Even if you did not have to use the pen, it is unlikely you will be able to achieve the same kind of speed of switching to your tablet, moving slightly, then switching back, as you can with a mouse.
Finally, even if you are as quick with a tablet as you are with a mouse, buttons are in a much less convenient location. Who looks at their mouse when clicking a button? I'd hope nobody. Now, who looks at their tablet? Hmm, I see a lot more hands in the air.
If a tablet suits you, then sure, go for it. But you have to be pretty bad with a mouse to be better with a tablet.
If the reason you want to switch or have switched is due to comfort, either get a new mouse, one of those (often gel-like) wrist rests, or physically reposition yourself. There is a solution to the discomfort, and it's not in the form of a tablet.
A: I used a small Wacom tablet as my primary "mouse" for a couple years. I found the tablet to be very natural, and even got used to typing while holding the stylus in between my fingers. (I don't type the "correct fingers on home row" way anyway, so it wasn't that big of an adjustment)
I did find that I got MUCH better at using the keyboard for most navigation because when I did put down the stylus, I didn't want to have to pick it back up.
Since I had a cheaper tablet, after a couple years of use the protective plastic had a gridwork of scratches that made it sometimes more difficult to make precise mouse movements (being a web developer, I've done my share of quick graphic design pieces). Eventually, it got to the point where I needed to replace it, and rather than spending the money on a new tablet, I bought a new optical mouse because I was heavily into Counter-Strike at the time and couldn't justify buying an input device that only benefited one of my obsessions.
Overall, I didn't really notice any great benefits or drawbacks.. it was simply just a different way to use the computer.
To the criticism about clicking buttons on a tablet: the stylus I used had a click/right-click built right into it. No searching for anything.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Should I go with SSIS or multithreaded C# application to load flat files in to database? Within SQL Server Integration Services (SSIS) there is the ability to set up a connection to a flat file that can hold millions of records and have that data pushed to a SQL DB. Furthermore, this process can be called from a C# app by referencing and using the Microsoft.SqlServer.Dts.Runtime namespace.
Would a flat file with millions of records best be run with SSIS, or would the collective "you" prefer a C# app with multiple worker threads (one to read and add each row to a variable, one to write from that variable to the DB) and a "mother" class that manages those threads? (The dev box has two CPUs.)
I have seen this data (sql team blog) stating that for a flat file with a million lines, SSIS is the fastest:
Process Duration (ms)
-------------------- -------------
SSIS - FastParse ON 7322 ms
SSIS - FastParse OFF 8387 ms
Bulk Insert 10534 ms
OpenRowset 10687 ms
BCP 14922 ms
What are your thoughts?
A: I can only speak for myself and my experience. I would go with SSIS, since this is one of those cases where you might be re-inventing the wheel unnecessarily. This is a repetitive task that has already been solved by SSIS.
I have about 57 jobs (combination of DTS and SSIS) that I manage on a daily basis. Four of those routinely handle exporting between 5 to 100 million records. The database I manage has about 2 billion rows. I made use of a script task to append the date, down to the millisecond, so that I can run jobs several times a day. Been doing that for about 22 months now. It's been great!
SSIS jobs can also be scheduled. So you can set it and forget it. I do monitor everything every day, but the file handling part has never broken down.
The only time I had to resort to a custom C# program, was when I needed to split the very large files into smaller chunks. SSIS is dog slow for that sort of stuff. A one gig text file took about one hour to split, using the script task. The C# custom program handled that in 12 minutes.
In the end, just use what you feel comfortable using.
A: SSIS is incredibly fast. In addition, if it's something that needs to occur repeatedly, you can setup an agent to fire it off on schedule. Writing it yourself is one thing, trying to make it multithreaded gets a lot more complicated than it appears at first.
I'd recommend SSIS 9 times out of ten.
A: I can't see how using multiple threads would help performance in this case. When transferring large volumes of data, the main bottleneck is usually disk I/O. Spawning multiple threads wouldn't solve this issue, and my guess would be that it would make things worse since it would introduce locking contention between the multiple processes hitting the database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Closures in PHP... what, precisely, are they and when would you need to use them? So I'm programming along in a nice, up to date, object oriented fashion. I regularly make use of the various aspects of OOP that PHP implements but I am wondering when might I need to use closures. Any experts out there that can shed some light on when it would be useful to implement closures?
A: PHP will support closures natively in 5.3. A closure is good when you want a local function that's only used for some small, specific purpose. The RFC for closures gives a good example:
function replace_spaces ($text) {
$replacement = function ($matches) {
return str_replace ($matches[1], '&nbsp;', ' ').' ';
};
return preg_replace_callback ('/( +) /', $replacement, $text);
}
This lets you define the replacement function locally inside replace_spaces(), so that it's not:
1) cluttering up the global namespace
2) making people three years down the line wonder why there's a function defined globally that's only used inside one other function
It keeps things organized. Notice how the function itself has no name, it's simply defined and assigned as a reference to $replacement.
But remember, you have to wait for PHP 5.3 :)
A: A closure is basically a function for which you write the definition in one context but run in another context. Javascript helped me a lot with understanding these, because they are used in JavaScript all over the place.
In PHP, they are less effective than in JavaScript, due to differences in the scope and accessibility of "global" (or "external") variables from within functions. Yet, starting with PHP 5.4, closures can access the $this object when run inside an object, this makes them a lot more effective.
This is what closures are about, and it should be enough to understand what is written above.
This means that it should be possible to write a function definition somewhere, and use the $this variable inside the function definition, then assign the function definition to a variable (others have given examples of the syntax), then pass this variable to an object and call it in the object context, the function can then access and manipulate the object through $this as if it was just another one of it's methods, when in fact it's not defined in the class definition of that object, but somewhere else.
If it's not very clear, then don't worry, it will become clear once you start using them.
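Since the answer above points to JavaScript as the place where closures are easiest to see, here is a minimal JavaScript illustration of the same idea (a hypothetical counter factory, not from the original answer):

```javascript
// A counter factory: each call to makeCounter creates a fresh
// lexical environment, and the returned function closes over it.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1;
    return count;
  };
}

const next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2 -- the closure remembers count between calls
```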
A: When you need a function in the future that performs a task you have decided upon now.
For example, if you read a config file and one of the parameters tells you that the hash_method for your algorithm is multiply rather than square, you can create a closure that will be used wherever you need to hash something.
The closure can be created in (for example) config_parser(); it creates a function called do_hash_method() using variables local to config_parser() (from the config file). Whenever do_hash_method() is called, it has access to variables in the local scope of config_parser(), even though it's not being called in that scope.
A hopefully good hypothetical example:
function config_parser()
{
// Do some code here
// $hash_method is in config_parser() local scope
$hash_method = 'multiply';
if ($hashing_enabled)
{
function do_hash_method($var)
{
// $hash_method is from the parent's local scope
if ($hash_method == 'multiply')
return $var * $var;
else
return $var ^ $var;
}
}
}
function hashme($val)
{
// do_hash_method still knows about $hash_method
// even though it's not in the local scope anymore
$val = do_hash_method($val)
}
A: Apart from the technical details, closures are a fundamental pre-requisite for a programming style known as function oriented programming. A closure is roughly used for the same thing as you use an object for in object oriented programming; It binds data (variables) together with some code (a function), that you can then pass around to somewhere else. As such, they impact on the way that you write programs or - if you don't change the way you write your programs - they don't have any impact at all.
In the context of PHP, they are a little odd, since PHP already is heavy on the class based, object oriented paradigm, as well as the older procedural one. Usually, languages that have closures, has full lexical scope. To maintain backwards compatibility, PHP is not going to get this, so that means that closures are going to be a little different here, than in other languages. I think we have yet to see exactly how they will be used.
A: I like the context provided by troelskn's post. When I want to do something like Dan Udey's example in PHP, i use the OO Strategy Pattern. In my opinion, this is much better than introducing a new global function whose behavior is determined at runtime.
http://en.wikipedia.org/wiki/Strategy_pattern
You can also call functions and methods using a variable holding the method name in PHP, which is great. so another take on Dan's example would be something like this:
class ConfigurableEncoder{
private $algorithm = 'multiply'; //default is multiply
public function encode($x){
return call_user_func(array($this,$this->algorithm),$x);
}
public function multiply($x){
return $x * 5;
}
public function add($x){
return $x + 5;
}
public function setAlgorithm($algName){
switch(strtolower($algName)){
case 'add':
$this->algorithm = 'add';
break;
case 'multiply': //fall through
default: //default is multiply
$this->algorithm = 'multiply';
break;
}
}
}
$raw = 5;
$encoder = new ConfigurableEncoder(); // set to multiply
echo "raw: $raw\n"; // 5
echo "multiply: " . $encoder->encode($raw) . "\n"; // 25
$encoder->setAlgorithm('add');
echo "add: " . $encoder->encode($raw) . "\n"; // 10
of course, if you want it to be available everywhere, you could just make everything static...
A: Here are examples for closures in php
// Author: HishamDalal@gamil.com
// Publish on: 2017-08-28
class users
{
private $users = null;
private $i = 5;
function __construct(){
// Get users from database
$this->users = array('a', 'b', 'c', 'd', 'e', 'f');
}
function displayUsers($callback){
for($n=0; $n<=$this->i; $n++){
echo $callback($this->users[$n], $n);
}
}
function showUsers($callback){
return $callback($this->users);
}
function getUserByID($id, $callback){
$user = isset($this->users[$id]) ? $this->users[$id] : null;
return $callback($user);
}
}
$u = new users();
$u->displayUsers(function($username, $userID){
echo "$userID -> $username<br>";
});
$u->showUsers(function($users){
foreach($users as $user){
echo strtoupper($user).' ';
}
});
$x = $u->getUserByID(2, function($user){
return "<h1>$user</h1>";
});
echo ($x);
Output:
0 -> a
1 -> b
2 -> c
3 -> d
4 -> e
5 -> f
A B C D E F
c
A: Basically, closures are inner functions that have access to the outer function's variables, and they are commonly used as callbacks in the form of anonymous functions (functions that do not have a name).
<?php
$param='ironman';
function sayhello(){
$param='captain';
$func=function () use ($param){
$param='spiderman';
};
$func();
echo $param;
}
sayhello();
?>
// output: captain
// if we had captured the variable by reference, with use (&$param), the output would be spiderman
A: Closures:
MDN has the best explanation IMO:
A closure is the combination of a function bundled together (enclosed)
with references to its surrounding state (the lexical environment). In
other words, a closure gives you access to an outer function’s scope
from an inner function.
i.e. A closure is a function with access to the variables which are in the parent scope. A closure allows us to conveniently create functions on the fly since in some situations a functions is only needed in one place (callbacks, callable arguments).
Example:
$arr = [1,2,3,3];
$outersScopeNr = 2;
// The second arg in array_filter is a closure
// It would be inconvenient to have this function in global namespace
// The use keyword lets us access a variable in an outer scope
$newArr = array_filter($arr, function ($el) use ($outersScopeNr) {
return $el === 3 || $el === $outersScopeNr;
});
var_dump($newArr);
// array (size=3)
// 1 => int 2
// 2 => int 3
// 3 => int 3
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "90"
} |
Q: How do I programmatically cycle a router? I have the need to be able to cycle my linksys router via a program, the reason I want to do this is complex, so let's just assume there is no better option.
Is there any way to do this? I think there is, because when I install a firmware update to the router via the admin console, it cycles the router.
A: If you can find the function to do that on the router's web interface, you can achieve the same via a suitable HTTP request. Also, you can often telnet to the router and reboot it through its command-line interface. Programmatically you can achieve the same, by using a tool like expect.
A: If you have access to the local web interface of the router you might be able to access the "reset" function of the router. You can try to call this via a http request, e.g. via wget. But this depends very much on the router. Some routers also support SNMP or a telnet interface.
Another possibility is to plug the router into an ethernet-connected power strip which can be remotely controlled. These power strips usually have a simple http server build in, so you can individually switch on and off the devices that are connected.
A: If it's a Linksys WRT54G, I highly recommend DD-WRT. It's an open source firmware replacement, and it has this feature built in for timed restarts. A programmatic restart with an HTTP request should be quite easy as well, as the restart function is available through the admin GUI.
A: https://github.com/danielrussellLA/restart-linksys-EA2700-router - you can try using a Nightwatch bot similar to this one to programmatically restart your router through its browser UI. This works well if your router does not have a command-line interface that you can access (through telnet, ssh, etc.). It's pretty hacky, but it gets the job done.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you store an Integer and Boolean key value pair in an un-ordered collection? I need to store a list of key value pairs of (integer, boolean) in .NET
When I use a dictionary it re-orders them. Is there a built-in collection that will handle this?
A: List<KeyValuePair<int, bool>> l =
new List<KeyValuePair<int, bool>>();
l.Add(new KeyValuePair<int, bool>(1, false));
A: If you want to preserve insertion order, why not use a Queue?
http://msdn.microsoft.com/en-us/library/6tc79sx1(VS.71).aspx
A Dictionary reorders the elements for faster lookup. Preserving insertion order would defeat that purpose...
A: You could just create a list of KeyValuePairs:
var myList = new List<KeyValuePair<int, bool>>();
A: Ordered dictionary allows retrieval by index or by key.
A: OrderedDictionary is the way to go. It provides O(1) retrieval and O(n) insert. For more detailed info see CodeProject.
A: What about an array?
KeyValuePair<int, bool>[] pairs
A list might be more useful when you want to add pairs after initialization of the collection.
List<KeyValuePair<int, bool>>
A: The dictionary is supposed to reorder them; a map by itself has no notion of order.
There is a class in .Net that supports that notion:
SortedDictionary<Tkey, Tvalue>
it requires that the TKey type implements the IComparable interface so it knows how to sort items. This way, when you return the keys or the values, they will be in the order the IComparable implementation specifies. For integers, of course, that is trivially:
a < b
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why are user-created namespaces not being recognized by Visual Studio 2008? For some reason, when I create a new namespace in Visual Studio 2008, it's not being recognized. I'm using ASP.NET MVC; I don't know if that has anything to do with it.
Has anyone come across this before? And how do you fix it?
Also is there a way to force Visual Studio to maybe re-examine new namespaces?
Answer:
I figured out the problem...check it.
A: Not sure how this happened but the build property for this class file was set to "content". The compiler didn't see the new namespace. As soon as I set it to "compile", it worked fine.
Weird!
A: Questions:
Is the namespace and class that's not being picked up in the same project, or is it referenced?
Is this a VB or a C# project? I can't remember which way round (I think it's C#), but one of them defaults the root namespace to the project name. You can run into problems or get a confusing namespace, e.g. MyProject.MyProject, if you create one at the same level as the root namespace.
Couple of things to try:
Press Ctrl+Alt+B to rebuild the project. Does it pick up the namespace?
Add a new method to an existing class that is working - is this picked up?
Try creating a completely different namespace. Is this picked up? e.g.
namespace StackOverflow.Test
{
}
Finally, I have seen Studio not pick up new namespaces and methods when it's not compiling properly. This can be due to NTFS permissions and read-only permissions. Check the build date of the DLL. You can also try clearing ASP.NET temporary files.
A: Please make sure your class is public: when you add a class to a project it is not public by default, and a non-public class is not accessible from other projects.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Any reason to have SQL Server 2005 and 2008 installed on same machine? I'm setting up a new development server and want to install the latest version of SQL Server 2008 Express.
Will our existing sql2005 databases work with 2008 without modification?
If so is there any reason to install both versions on the same server?
A: I haven't actually tried migrating a 2005 database to 2008, but generally SQL handles this cleanly and without difficulty. The simplest way to do it would be to make a backup of your database from SQL 2005 and then restore that backup with SQL 2008.
If you want to keep the SQL 2005 copy around and online until you know that the 2008 copy is working, you might need to move the data/log files for your database when restoring the backup onto 2008, since the old data files will be in use by 2005. You can do this using the with move option of restore database, for example:
RESTORE DATABASE mydb FROM disk = 'c:\backupfile.bak'
WITH MOVE 'maindatafile' to 'c:\newdatalocation.mdf',
MOVE 'mainlogfile' to 'c:\newloglocation.ldf'
As to having both installed at the same time, one reason you might decide to do this would be to simplify the job of testing code against both versions, if you were intending to have your software support talking to both versions.
A: You can detach a 2005 database and attach it to a 2008 server. I would recommend against installing both on the same machine unless you must (e.g. you're writing code for a third party and they only use 2005).
What I'd highly recommend is using windows server 2008 hyper-v to create 2 virtual machines one with the 2005 environment, the other with 2008. Hyper-v virtual machines are incredibly faster than Virtual server 2007.
A: The databases should (should!) work unmodified. However, for development it is preferable that you have sql2005 to test your scripts unless you assume all your clients would upgrade to 2008 as well, since 2008 has features that do not exist in SQL Server 2005.
A: In dev and test environments, having multiple database servers installed is not a problem and can reduce the number of test servers required.
In production, I wouldn't recommend it due to the fact that multiple buffer pools fight and kill your performance.
A: To me the important thing is: will you have prod instances that are 2005 databases? Will you have to support Reporting Services reports that are on a prod server that only has the 2005 version of Reporting Services, etc.?
If so, you should have both the 2005 and the 2008 versions on your development machines. I've seen a lot of code that had to be thrown out because developers worked on 2008 when prod was 2005. Always develop against the version of the software you will have in prod. If you are converting to 2008 but are not there in prod yet, you need both: one for maintenance changes and one for future stuff.
Personally I have SQL Server 2000, 2005 and 2008 on my machine because we haven't converted everything yet and I have some things which can only be done on the older version. We have found the key to maintaining multiple versions is to install them in the correct order. It seems to go badly if you put 2008 on first and then the older versions.
A: Sometimes you need to be able to test on multiple versions, or you may need 2005 for one thing and 2008 for another.
Sometimes you maintain several different apps, some of which are on one and some on the other, and you haven't updated everything yet. Sometimes you're upgrading, and need to test on both versions during the upgrade. Sometimes you support several different customers, some on one version and some on another. Sometimes you want to upgrade your internal apps, but you're using a software package that is only certified on an older version.
There's lots of reasons.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What SNMP library for .NET makes traps, sets or gets simple? What are the best SNMP libraries to use with .NET? Specifically for listening for traps or sending set or get requests.
A: I am using the Sharp SNMP Suite (#SNMP) : LGPL, Mono compatible, developed in C# 3.0, has very good API.
A: Hi, as the author of #SNMP, I'll try my best to be unbiased here :)
I have a blog post here which is a simple evaluation report.
http://www.lextm.com/index.php/2007/12/product-review-snmp-libraries-for-net-evaluation-report/
In my opinion, PowerSNMP is the leading one which has both complete feature set and simple/natural API. There are many commercial and open source products you can evaluate yourself to see which one meets your special needs.
Which one is the best? That depends on which subset of SNMP features you need and how big your budget is. :)
A: here's a library and a few examples http://www.c-sharpcorner.com/UploadFile/malcolmcrowe/SnmpLib11232005011613AM/SnmpLib.aspx
A: I've personally used Adventnet's .NET SNMP API for snmp work. It's now been renamed to WebNMS. I have code running based on this API several places, that has just worked and keeps on working 24/7.
Recommended for lots of examples and stability. Also it's fast. Seems though that there are several other .NET SNMP libraries that have come since I used SNMP last that might be worth checking out. ex: #SNMP, which has been referenced to in other replies.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: In what namespace should you put interfaces relative to their implementors? Specifically, when you create an interface/implementor pair, and there is no overriding organizational concern (such as that the interface should go in a different assembly, e.g. as recommended by the S# architecture), do you have a default way of organizing them in your namespace/naming scheme?
This is obviously a more opinion based question but I think some people have thought about this more and we can all benefit from their conclusions.
A: I usually keep the interface in the same namespace as the concrete types.
But, that's just my opinion, and namespace layout is highly subjective.
Animals
|
| - IAnimal
| - Dog
| - Cat
Plants
|
| - IPlant
| - Cactus
You don't really gain anything by moving one or two types out of the main namespace, but you do add the requirement for one extra using statement.
A: What I generally do is to create an Interfaces namespace at a high level in my hierarchy and put all interfaces in there (I do not bother to nest other namespaces in there as I would then end up with many namespaces containing only one interface).
Interfaces
|--IAnimal
|--IVegetable
|--IMineral
MineralImplementor
Organisms
|--AnimalImplementor
|--VegetableImplementor
This is just the way that I have done it in the past and I have not had many problems with it, though admittedly it might be confusing to others sitting down with my projects. I am very curious to see what other people do.
A: I prefer to keep my interfaces and implementation classes in the same namespace. When possible, I give the implementation classes internal visibility and provide a factory (usually in the form of a static factory method that delegates to a worker class, with an internal method that allows unit tests in a friend assembly to substitute a different worker that produces stubs). Of course, if the concrete class needs to be public (for instance, if it's an abstract base class), then that's fine; I don't see any reason to put an ABC in its own namespace.
On a side note, I strongly dislike the .NET convention of prefacing interface names with the letter 'I.' The thing the (I)Foo interface models is not an ifoo, it's simply a foo. So why can't I just call it Foo? I then name the implementation classes specifically, for example, AbstractFoo, MemoryOptimizedFoo, SimpleFoo, StubFoo etc.
A: The answer depends on your intentions.
*
*If you intend the consumer of your namespaces to use the interfaces over the concrete implementations, I would recommend having your interfaces in the top-level namespace with the implementations in a child namespace
*If the consumer is to use both, have them in the same namespace.
*If the interface is for predominantly specialized use, like creating new implementations, consider having them in a child namespace such as Design or ComponentModel.
I'm sure there are other options as well, but as with most namespace issues, it comes down to the use-cases of the project, and the classes and interfaces it contains.
A: (.Net) I tend to keep interfaces in a separate "common" assembly so I can use that interface in several applications and, more often, in the server components of my apps.
Regarding namespaces, I keep them in BusinessCommon.Interfaces.
I do this to ensure that neither I nor my developers are tempted to reference the implementations directly.
A: Separate the interfaces in some way (projects in Eclipse, etc) so that it's easy to deploy only the interfaces. This allows you to provide your external API without providing implementations. This allows dependent projects to build with a bare minimum of externals. Obviously this applies more to larger projects, but the concept is good in all cases.
A: I usually separate them into two separate assemblies. One of the usual reasons for an interface is to have a series of objects look the same to some subsystem of your software. For example, I have all my reports implementing the IReport interface. IReport is not only used for printing but also for previewing and for selecting individual options for each report. Finally, I have a collection of IReport to use in a dialog where the user selects which reports (and configuration options) they want to print.
The Reports reside in a separate assembly and the IReport, the Preview engine, print engine, report selections reside in their respective core assembly and/or UI assembly.
If you use a factory class to return a list of available reports in the report assembly, then updating the software with a new report becomes merely a matter of copying the new report assembly over the original. You can even use the Reflection API to just scan the list of assemblies for any report factories and build your list of reports that way.
You can apply this technique to files as well. My own software runs a metal-cutting machine, so we use this idea for the shape and fitting libraries we sell alongside our software.
Again the classes implementing a core interface should reside in a separate assembly so you can update that separately from the rest of the software.
A: I'll give my own experience, which goes against the other answers.
I tend to put all my interfaces in the package they belong to. This guarantees that, if I move a package to another project, I have everything the package needs to run without any changes.
For me, any helper functions and operator functions that are part of the functionality of a class should go into the same namespace as that of the class, because they form part of the public API of that namespace.
If you have common implementations that share the same interface in different packages you probably need to refactor your project.
Sometimes I see that there are plenty of interfaces in a project that could be converted into an abstract implementation rather than an interface.
So, ask yourself if you are really modeling a type or a structure.
A: A good example might be looking at what Microsoft does.
Assembly: System.Runtime.dll
System.Collections.Generic.IEnumerable<T>
Where are the concrete types?
Assembly: System.Collections.dll
System.Collections.Generic.List<T>
System.Collections.Generic.Queue<T>
System.Collections.Generic.Stack<T>
// etc
Assembly: EntityFramework.dll
System.Data.Entity.IDbSet<T>
Concrete Type?
Assembly: EntityFramework.dll
System.Data.Entity.DbSet<T>
Further examples
Microsoft.Extensions.Logging.ILogger<T>
- Microsoft.Extensions.Logging.Logger<T>
Microsoft.Extensions.Options.IOptions<T>
- Microsoft.Extensions.Options.OptionsManager<T>
- Microsoft.Extensions.Options.OptionsWrapper<T>
- Microsoft.Extensions.Caching.Memory.MemoryCacheOptions
- Microsoft.Extensions.Caching.SqlServer.SqlServerCacheOptions
- Microsoft.Extensions.Caching.Redis.RedisCacheOptions
Some very interesting tells here. When the namespace changes for an implementation, part of that namespace (e.g. Caching) is also prefixed to the derived type name, as in RedisCacheOptions. Additionally, the derived types live in an additional namespace naming the implementation.
Memory -> MemoryCacheOptions
SqlServer -> SqlServerCacheOptions
Redis -> RedisCacheOptions
This seems like a fairly easy pattern to follow most of the time. As an example (since no example was given), the following pattern might emerge:
CarDealership.Entities.Dll
CarDealership.Entities.IPerson
CarDealership.Entities.IVehicle
CarDealership.Entities.Person
CarDealership.Entities.Vehicle
Maybe a technology like Entity Framework prevents you from using the predefined classes. Thus we make our own.
CarDealership.Entities.EntityFramework.Dll
CarDealership.Entities.EntityFramework.Person
CarDealership.Entities.EntityFramework.Vehicle
CarDealership.Entities.EntityFramework.SalesPerson
CarDealership.Entities.EntityFramework.FinancePerson
CarDealership.Entities.EntityFramework.LotVehicle
CarDealership.Entities.EntityFramework.ShuttleVehicle
CarDealership.Entities.EntityFramework.BorrowVehicle
Not that it happens often, but maybe there's a decision to switch technologies for whatever reason, and now we have...
CarDealership.Entities.Dapper.Dll
CarDealership.Entities.Dapper.Person
CarDealership.Entities.Dapper.Vehicle
//etc
As long as we're programming to the interfaces we've defined in root Entities (following the Liskov Substitution Principle) down stream code doesn't care where how the Interface was implemented.
More importantly, in my opinion, creating derived types also means you don't have to consistently include a different namespace, because the parent namespace contains the interfaces. I'm not sure I've ever seen a Microsoft example of interfaces stored in child namespaces that are then implemented in the parent namespace (almost an anti-pattern if you ask me).
I definitely don't recommend segregating your code by type, eg:
MyNamespace.Interfaces
MyNamespace.Enums
MyNameSpace.Classes
MyNamespace.Structs
This adds no descriptive value. And it's akin to using Systems Hungarian notation, which is mostly, if not now exclusively, frowned upon.
A: I HATE when I find interfaces and implementations in the same namespace/assembly. Please don't do that, if the project evolves, it's a pain in the ass to refactor.
When I reference an interface, I want to implement it, not to get all its implementations.
What might be admissible is to put the interface with its dependent class (the class that references the interface).
EDIT: @Josh, I just read my last sentence; it's confusing! Of course, both the dependent class and the one that implements it reference the interface. In order to make myself clear I'll give examples:
Acceptable :
Interface + implementation :
namespace A;
interface IMyInterface
{
void MyMethod();
}
namespace A;
class MyDependentClass
{
private IMyInterface inject;
public MyDependentClass(IMyInterface inject)
{
this.inject = inject;
}
public void DoJob()
{
//Bla bla
inject.MyMethod();
}
}
Implementing class:
namespace B;
class MyImplementing : IMyInterface
{
public void MyMethod()
{
Console.WriteLine("hello world");
}
}
NOT ACCEPTABLE:
namespace A;
interface IMyInterface
{
void MyMethod();
}
namespace A;
class MyImplementing : IMyInterface
{
public void MyMethod()
{
Console.WriteLine("hello world");
}
}
And please DON'T CREATE a garbage project for your interfaces! Example: ShittyProject.Interfaces. You've missed the point!
Imagine you created a DLL reserved for your interfaces (200 MB). If you had to add a single interface with two lines of code, your users would have to update 200 MB just for two dumb signatures!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: In Django, where is the best place to put short snippets of HTML-formatted data? This question is related to (but perhaps not quite the same as):
Does Django have HTML helpers?
My problem is this: In Django, I am constantly reproducing the basic formatting for low-level database objects. Here's an example:
I have two classes, Person and Address. There are multiple Addresses for each Person, setup likeso (in their respective models.py)
class Person(models.Model):
...
class Address(models.Model):
contact = models.ForeignKey(Person)
Now, whenever I look at a Person, I want to see all their Addresses. So suppose Persons/views.py has something likeso:
def detail(request, person_id):
person = get_object_or_404( Person, pk=person_id )
return render_to_response('persons/details.html',
{ 'title' : unicode(person), 'addresses': person.address_set.all() } )
And, I have a template, persons/details.html, with code, for example, like-so:
{% extends "base.html" %}
{% for address in addresses %}
<b>{{ address.name }}</b>
{{ address.type }} <br>
{{ address.street_1 }}<br>
{{ address.street_2 }}<br>
{{ address.city }} {{ address.stateprov }} {{ address.postalcode }}<br>
{{ address.country }}
<hr>
{% endfor %}
I am repeating this code quite a bit, often with minor variations, such when it's in a table, and then < br > must be substituted by < /td >< td >. Other times, I don't want a street_2 to display (or the < br > after it). All to say, there is fundamental logic that I want to express, that I am even more loath to tote around with block-and-copy!
What I want is a persons/details.html with, for example, the following:
{% extends "base.html" %}
{% for address in addresses %}
{% address.as_html4 %}
{% endfor %}
And if I want inline table, something likeso (I guess!):
{% extends "base.html" %}
<table><tr>
{% for address in addresses %}
<tr><td> {% address.as_html4 </td><td> %} </td></tr>
{% endfor %}
</table>
The question is, then: Where is the best place to put the formatting? The logic?
Django seem to have the following (plausible) options:
*
*Put the formatting in models.py
*Put the logic/formatting in views.py
*Put the logic/formatting in some other sub-class of Person or Address (i.e. addresses/html4.py)
*Create custom tags
Help / insight much appreciated!
A: I would use a template tag outputting data using a template html-file a k a inclusion-tag
A: Sounds like an inclusion tag is what you're looking for. You could have a template and tag for each major variation and use the tag's arguments to customise the context for each template as required.
Basic tag definition:
@register.inclusion_tag('person/address.html')
def display_address(address):
return {'address': address}
Use in templates (assuming the templatetag module containing it has already been {% load %}-ed):
{% display_address address %}
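The template named in the @register.inclusion_tag decorator then holds the address markup in exactly one place. As a sketch (file name and field names taken from the question; the {% if %} handles the optional street_2 variation), person/address.html might contain:

```html
<b>{{ address.name }}</b>
{{ address.type }} <br>
{{ address.street_1 }}<br>
{% if address.street_2 %}{{ address.street_2 }}<br>{% endif %}
{{ address.city }} {{ address.stateprov }} {{ address.postalcode }}<br>
{{ address.country }}
```

A table-cell variant would simply be a second template (e.g. person/address_row.html) registered under its own tag.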
A: I think a template filter will be useful too. You can apply a filter to each value, for example:
{{ value|linebreaks }} # standard django filter
Will produce:
If value is Joel\nis a slug, the output will be <p>Joel<br>is a slug</p>.
See Django Built-in template tags and filters complete reference.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Any way to un-register a WPF dependency property? I'm running into an unusual problem in my unit tests. The class I'm testing creates a dependency property dynamically at runtime and the type of that dependency property can vary depending on the circumstances. While writing my unit tests, I need to create the dependency property with different types and that leads to errors because you can't redefine an existing dependency property.
So is there any way to either un-register a dependency property or to change the type of an existing dependency property?
Thanks!
OverrideMetadata() only lets you change a few things, like the default value, so it isn't helpful. The AppDomain approach is a good idea and might work, but it seems more complicated than I really wanted to delve into for the sake of unit testing.
I never did find a way to unregister a dependency property so I punted and carefully reorganized my unit tests to avoid the issue. I'm getting a bit less test coverage, but since this problem would never occur in a real application and only during unit testing I can live with it.
Thanks for the help!
A: I had similar issue just yesterday when trying to test my own DependencyProperty creating class. I came across this question, and noticed there was no real solution to unregister dependency properties. So I did some digging using Red Gate .NET Reflector to see what I could come up with.
Looking at the DependencyProperty.Register overloads, they all seemed to point to DependencyProperty.RegisterCommon. That method has two portions:
First to check if the property is already registered
FromNameKey key = new FromNameKey(name, ownerType);
lock (Synchronized)
{
if (PropertyFromName.Contains(key))
{
throw new ArgumentException(SR.Get("PropertyAlreadyRegistered",
new object[] { name, ownerType.Name }));
}
}
Second, Registering the DependencyProperty
DependencyProperty dp =
new DependencyProperty(name, propertyType, ownerType,
defaultMetadata, validateValueCallback);
defaultMetadata.Seal(dp, null);
//...Yada yada...
lock (Synchronized)
{
PropertyFromName[key] = dp;
}
Both pieces center around DependencyProperty.PropertyFromName, a Hashtable. I also noticed DependencyProperty.RegisteredPropertyList, an ItemStructList<DependencyProperty>, but I have not seen where it is used. However, for safety, I figured I'd try to remove from that as well if possible.
So I wound up with the following code that allowed me to "unregister" a dependency property.
private void RemoveDependency(DependencyProperty prop)
{
var registeredPropertyField = typeof(DependencyProperty).
GetField("RegisteredPropertyList", BindingFlags.NonPublic | BindingFlags.Static);
object list = registeredPropertyField.GetValue(null);
var genericMeth = list.GetType().GetMethod("Remove");
try
{
genericMeth.Invoke(list, new[] { prop });
}
catch (TargetInvocationException)
{
Console.WriteLine("Does not exist in list");
}
var propertyFromNameField = typeof(DependencyProperty).
GetField("PropertyFromName", BindingFlags.NonPublic | BindingFlags.Static);
var propertyFromName = (Hashtable)propertyFromNameField.GetValue(null);
object keyToRemove = null;
foreach (DictionaryEntry item in propertyFromName)
{
if (item.Value == prop)
keyToRemove = item.Key;
}
if (keyToRemove != null)
propertyFromName.Remove(keyToRemove);
}
It worked well enough for me to run my tests without getting an "AlreadyRegistered" exception. However, I strongly recommend that you do not use this in any sort of production code. There is likely a reason that MSFT chose not to have a formal way to unregister a dependency property, and attempting to go against it is just asking for trouble.
A: If everything else fails, you can create a new AppDomain for every Test.
A: I don't think you can un-register a dependency property but you can redefine it by overriding the metadata like this:
MyDependencyProperty.OverrideMetadata(typeof(MyNewType),
new PropertyMetadata());
A: If we register name for a Label like this :
Label myLabel = new Label();
this.RegisterName(myLabel.Name, myLabel);
We can easily unregister the name by using :
this.UnregisterName(myLabel.Name);
A: I was facing a scenario where I created a custom control that inherits from Selector which is meant to have two ItemsSource properties, HorizontalItemsSource and VerticalItemsSource.
I don't even use the ItemsControl property, and don't want the user to be able to access it.
So I read statenjason's great answer, and it gave me a huge POV on how to remove a DP.
However, my problem was that, since I declared the ItemsSourceProperty member and the ItemsSource as Private Shadows (private new in C#), I couldn't load it at design time, since using MyControlType.ItemsSourceProperty would refer to the shadowed variable.
Also, when using the loop mentioned in his answer above (foreach DictionaryEntry, etc.), I had an exception thrown saying that the collection was changed during iteration.
Therefore I came up with a slightly different approach where the DependencyProperty is referred to directly at runtime, and the collection is copied to an array so it is not modified during iteration (VB.NET, sorry):
Dim dpType = GetType(DependencyProperty)
Dim bFlags = BindingFlags.NonPublic Or BindingFlags.Static
Dim FromName =
Function(name As String, ownerType As Type) DirectCast(dpType.GetMethod("FromName",
bFlags).Invoke(Nothing, {name, ownerType}), DependencyProperty)
Dim PropertyFromName = DirectCast(dpType.GetField("PropertyFromName", bFlags).
GetValue(Nothing), Hashtable)
Dim dp = FromName.Invoke("ItemsSource", GetType(DimensionalGrid))
Dim entries(PropertyFromName.Count - 1) As DictionaryEntry
PropertyFromName.CopyTo(entries, 0)
Dim entry = entries.Single(Function(e) e.Value Is dp)
PropertyFromName.Remove(entry.Key)
Important note: the above code is all inside the shared constructor of the custom control, and I don't have to check whether it's registered, because I know that a subclass of Selector DOES provide that ItemsSource DP.
A: Had an issue with a ContentPresenter with different Datatemplates where one of them had a DependencyProperty with a PropertyChangedCallback
When changing ContentPresenters content to another DataTemplate the callback remained.
In the UserControls Unloaded event i called:
BindingOperations.ClearAllBindings(this);
Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Normal, new DispatcherOperationCallback(delegate { return null; }), null);
That worked for me
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to read config file entries from an INI file I can't use the Get*Profile functions because I'm using an older version of the Windows CE platform SDK which doesn't have those. It doesn't have to be too general.
[section]
name = some string
I just need to open the file, check for the existence of "section", and the value associated with "name". Standard C++ is preferred.
A: What I came up with:
std::wifstream file(L"\\Windows\\myini.ini");
if (file)
{
bool section=false;
WCHAR _line[256];
// ELEMENTS is assumed to be a count-of macro: sizeof(a)/sizeof(a[0])
while (file.getline(_line, ELEMENTS(_line)))
{
std::wstringstream lineStm(_line);
const std::wstring line=lineStm.str(); // copy: str() returns by value, so a non-const reference to it would not compile
if (line.empty()) continue;
switch (line[0])
{
// new header
case L'[':
{
std::wstring header;
for (size_t i=1; i<line.length(); i++)
{
if (line[i]!=L']')
header.push_back(line[i]);
else
break;
}
if (header==L"Section")
section=true;
else
section=false;
}
break;
// comments
case L';':
case L' ':
case L'#':
break;
// var=value
default:
{
if (!section) continue;
std::wstring name, dummy, value;
lineStm >> name >> dummy;
ws(lineStm);
WCHAR _value[256];
lineStm.getline(_value, ELEMENTS(_value));
value=_value;
}
}
}
}
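For comparison, here is a portable narrow-character sketch of the same lookup written against standard streams only (iniGet and trim are names invented for this example; it returns the value stored under [section] for name, or an empty string if absent):

```cpp
#include <istream>
#include <sstream>
#include <string>

// Strip leading/trailing whitespace.
static std::string trim(const std::string &s)
{
    const char *ws = " \t\r\n";
    std::string::size_type b = s.find_first_not_of(ws);
    if (b == std::string::npos) return "";
    return s.substr(b, s.find_last_not_of(ws) - b + 1);
}

// Return the value stored under [section] name, or "" if not found.
std::string iniGet(std::istream &in, const std::string &section,
                   const std::string &name)
{
    std::string line, current;
    while (std::getline(in, line))
    {
        line = trim(line);
        if (line.empty() || line[0] == ';' || line[0] == '#') continue;
        if (line[0] == '[')       // section header
        {
            std::string::size_type close = line.find(']');
            current = line.substr(1, close == std::string::npos
                                         ? std::string::npos : close - 1);
            continue;
        }
        std::string::size_type eq = line.find('=');
        if (eq == std::string::npos || current != section) continue;
        if (trim(line.substr(0, eq)) == name)
            return trim(line.substr(eq + 1));
    }
    return "";
}
```

Because it takes any std::istream, the same function works on a file stream or on an in-memory std::istringstream for testing.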
A: You should have a look at Boost.Program_options.
It has a parse_config_file function that fills a map of variables. Just what you need !
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I rename a SharePoint virtual machine I am using virtual machines for development,but each time I need a new VM, I copy the file and create a new server, but I need a new name for the server to add it to our network.
After renaming the server, the Sharepoint sites have many errors and do not run.
A: Actually there is a bunch of stuff you will need to do with regard to the unique computer SID, SQL Server, etc.
These links will get you started:
http://paulhorsfall.co.uk/archive/2007/04/10/How-to-Create-a-Cloneable-SharePoint-Virtual-Machine.aspx
http://msmvps.com/blogs/laflour/archive/2008/04/25/renaming-server-pc-with-sharepoint.aspx
http://dotnet.org.za/jpfouche/archive/2008/02/12/renaming-your-sharepoint-virtual-machine.aspx
I hope this helps, enjoy.
A: Here is a Technet article that might be helpful: http://technet.microsoft.com/en-us/library/cc261986.aspx
If you are going to uninstall SharePoint check this article for more details about SQL Server rename: http://technet.microsoft.com/en-us/library/ms143799.aspx
A: Here is another link that helped me a lot: http://www.sharepointblogs.com/mirjam/archive/2007/08/06/renaming-a-moss-server.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Googlebot + IFrames? How does googlebot treat iframes? Does it follow the src attribute like a link? Is the iframe content analyzed as if it was part of the page where it is included?
A:
IFrames are sometimes used to display content on web pages. Content displayed via iFrames may not be indexed and available to appear in Google's search results. We recommend that you avoid the use of iFrames to display content. If you do include iFrames, make sure to provide additional text-based links to the content they display, so that Googlebot can crawl and index this content.
From the Google Webmaster Guidelines
The iframe content isn't included in the original page. If you want the content indexed, try providing a text link to the src page. If you don't want it indexed, use the meta robots tag or robots.txt to restrict the file.
A: Anecdotal evidence suggests it treats it like a link (so you can end up with a page designed to be viewed inside the frame being loaded on its own via a link from a search engine).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How can I tell what type of computers are in a coffee shop? This is more a thought experiment than anything.
I'm wondering what it would take to detect everything I legally can about the laptops in a hotspot. My first thought was to grab every MAC address I can and extract the maker from the first 24bit.
The question is would this be illegal and what else could I legally scavenge, preferably passively?
P.S. This constitutes a pattern-recognition problem so it IS programming... I think.
A: nmap can do a reasonable job of guessing the operating system by the way the target system responds to various probes
A: For the brain dead answer: Quit typing and look around for a few minutes :-D
A: I guess the obvious one would be to sniff the user-agent out of their HTTP requests.
A: IIRC there are ways to detect what TCP/IP stack is being used by sniffing choices in port usage and sequence numbers.
A: Why does it matter, 90% of them will be apples ;-)
A: The people wearing black turtle necks will be using Macs, the rest will be using PC's and Windows (except for that one guy with long pony tail who is using Linux).
A: Packet sniffing is illegal without the network owner's consent. At least that's the generally followed precedent. There have been very few tests of this in court. But your idea of how to do it, using the MAC addresses, would work well. Wireshark is a good packet sniffer if you're looking.
A: GFI Languard gives a lot of info like this. Check it out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Struts2 Annotation-Validators for Invalid Chars While using Struts2, I am using several annotations to do my validations inside the Model class, in the set() methods, like:
@RequiredStringValidator(message = "Name is required")
@StringLengthFieldValidator(message = "Name must be between 5 and 60 characters",
minLength = "5", maxLength = "60")
public void setName(String name) {
this.name = name;
}
But I would like to know how to add a validation to avoid strange characters, like !, ?, #, $, etc.
Is there any existing validator I should use, like a @RestrictChars, or maybe there is some way to use @StringRegexValidator? Or should I write a new one? Any hint or documentation on how to do this would be great.
A: I found it finally:
Just adding the already existent @RegexFieldValidator gave me the Validator I needed:
@RegexFieldValidator(message = "Use only Letters or numbers",
expression = "^[a-zA-Z0-9]+$")
Thanks anyway guys!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: The exec family I have a project that requires the use of the exec family. My project consists of making an interactive shell. The shell will implement a few basic commands like cd, ls, echo, etc. I have been researching the use of exec, but have not found a useful site. Any suggested links would help.
int ret;
ret = execl ("/bin/ls", "ls", "-1", (char *)0);
How would i get the output of this operation to show on the screen?
A: doing
int fd = 1;
dup(fd);
close(fd);
gets the output to the screen.
A: The code you wrote works for me in a simple test program that does nothing else. Remember, when you call execl, the process retains all of the old file handles. So whatever stdout was when you call execl, it will be the same when the new binary is loaded. If you just want the output to go to the terminal, just make sure stdout goes to the terminal.
If you want to do I/O with another program, popen is good for this (as mgb mentioned). It will fork a new process, set up plumbing for you, call some variant of exec, and return a file handle you can use for communication.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Comprehensive server-side validation I currently have a fairly robust server-side validation system in place, but I'm looking for some feedback to make sure I've covered all angles. Here is a brief outline of what I'm doing at the moment:
*
*Ensure the input is not empty or too long
*Escape query strings to prevent SQL injection
*Using regular expressions to reject invalid characters (this depends on what's being submitted)
*Encoding certain html tags, like <script> (all tags are encoded when stored in a database, with some being decoded when queried to render in the page)
Is there anything I'm missing? Code samples or regular expressions welcome.
A: You shouldn't need to "Escape" query strings to prevent SQL injection - you should be using prepared statements instead.
Ideally your input filtering will happen before any other processing, so you know it will always be used. Because otherwise you only need to miss one spot to be vulnerable to a problem.
Don't forget to encode HTML entities on output - to prevent XSS attacks.
A: You should encode every HTML tag, not only 'invalid' ones. This is a hot debate, but basically it boils down to the fact that there will always be some invalid HTML combination that you will forget to handle correctly (nested tags, mismatched tags some browsers interpret 'correctly', and so on). So the safest option, in my opinion, is to store everything as HTML entities and then, on output, print a validated HTML-safe-subset tree (as entities) from the content.
A: Run all server-side validation in a library dedicated to the task so that improvements in one area affect all of your application.
Additionally include work against known attacks, such as directory traversal and attempts to access the shell.
A: This Question/Answer has some good responses that you're looking for
(PHP-oriented, but then again you didn't specify language/platform and some of it applies beyond the php world):
What's the best method for sanitizing user input with PHP?
A: You might check out the Filter Extension for data filtering. It won't guarantee that you're completely airtight, but personally I feel a lot better using it because that code has a whole lot of eyeballs looking over it.
Also, consider prepared statements seconded. Escaping data in your SQL queries is a thing of the past.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why does my "out of the box" SharePoint Navigation look like it is leaking memory My site has quite a deep navigation structure and quite often it looks like the out of the box navigation is leaking memory, especially the SPWeb objects.
The log message looks like
Potentially excessive number of SPRequest objects (14) currently unreleased on thread 5. Ensure that this object or its parent (such as an SPWeb or SPSite) is being properly disposed.
A: Stefan Goßner's blog post seems to answer the question. The issue is not that the SPWeb objects are not being closed, but that once a certain threshold (defaults to 8) of allocations are hit, the warning is created in the log.
Depending on your site structure the number that will be created will vary.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is D a credible alternative to Java and C++? Is the D language a credible alternative to Java and C++? What will it take to become a credible alternative? Should I bother learning it? Does it deserve evangelizing?
The main reason I ask is that with the new C++ standard (c++0x) almost here, it's clear to me that the language has gone well past the point of no return with respect to anyone ever understanding it. I know that C/C++ will never die but at some point we need to move on. Even COBOL had its day and Java has in many respects undone C++. So what's next? Does D fill the bill?
A: It really depends on what your needs are - large scale commercial applications written in D do exist on the server side, and for that D (typically using Tango/Mango) is a perfect fit, and you are likely to be able to serve more requests than with any other language/platform.
For more specialized solutions in terms of protocols and interactivity (which many are) you will have more problems finding the needed libraries, and the lack of tools is likely to affect you more.
A: D is pretty impressive, and Andrei's book about it is well-written. But as others have said, you need the tools and the platform support. GDC may, over time, be the answer to both.
Have you seen this?
"GNU Debugger adds D language support":
http://www.linux.com/news/enterprise/biz-enterprise/358956-gnu-debugger-adds-d-language-support
Also, the digitalmars site has pages discussing interfacing to C and C++ (for those libraries you just can't live without). I wonder if there are any tools that, given a C header file, will take a stab at writing the D prototypes.
Personally I wouldn't at this point push for doing a large project in D, but I would use D for in-house tools, getting experience with it and introducing others to it.
The original question was whether D is a credible alternative to Java and C++. I don't think D and Java are really going to compete much in practice; D competes with C++ and now Go. Other questions address the differences between D and Go, but Go is generally considered easier to use. So I suspect that the future of D depends a lot on how much room there is for it to breathe between C++, the current king of the hill, and Go, the much easier alternative that has Google's backing.
UPDATE: I just discovered that my favorite chapter of Andrei's book, the one on concurrency, is available for free online. Definitely worth a read!
And here's a loooong discussion about the relative merits/objectives/approaches of Go and D.
A: I agree that C++ is becoming a dead-end language - and it pains me to say that, after having used it for the last 17 years.
I think D is the rightful successor to C++. From a language perspective it "does all the right things" (even if I don't agree with every minute decision). I think with C and C++ on the decline there is no other systems level language that can really do what they do, while holding itself up in the world of modern languages - except D! Not only does D fill this role - it excels at it! A look at D1.x should be enough to convince you of that - but when you look at D2.0 it blows you away. It is my opinion that there is no other language around today that works as well as D2.0 in bridging the imperative and functional programming paradigms - which is only going to get more significant in coming years.
Lack of mainstream acceptance - or even visibility - as well as of large-scale, mature libraries, is an obstacle of course. However I don't think you can write it off because of this. I am convinced that D will grow to become one of the most important languages around within the next few years - and those that are taking it seriously now are going to be well placed to take the lead when that time comes.
I think the difference is going to come about due, in large part, to Andrei Alexandrescu's involvement. That's not to discredit Walter Bright in any way, who has done a momentous job in bring D to the world. But Alexandrescu is an important, and compelling, figure in certainly the C++ community - and there's where most of the potential D switchers are going to come from. And he has also made a significant and important contribution to D2.0 in its support for functional programming.
It may still turn out that D is doomed to be the Betamax of systems level languages - but my money is on it turning around within the next two years.
A: As a language, I always felt that D is closer to C# than C++. Not in features and libraries, but in "feeling". It's much cleaner,nicer ... fun (than C++).
IMHO the biggest obstacle in becoming a credible alternative is tools,IDE and debugger. If D overcomes some obstacles of widespread usage/adoption, more tools and libraries will manifest. (I myself will return to D, if there will be an usable IDE and debugger.)
A: It looks like the question has been answered. D is the better language compared with C++.
The question of whether for practical purposes D has better infrastructure around is secondary.
In short, if they were both brand new languages without any support around them, D would be the better language; ergo it is the better language.
A: Works great for my own pet projects. I'd use it for employers' projects but for not knowing how hard it would be for them to find someone to take over the source after i move on. There are no technical reasons to avoid it, at least on the supported platforms. (knock on wood)
A: It looks like a very well designed language; much better than C, C++, or Objective-C.
I can live without an IDE or a debugger for a while, but not without a good, documented library for D 2.0.
I'll check back in 6 months...
A: One approach is to search for jobs in your area. Find the jobs you would like to do and see what skills they are asking for. If they are asking for C++ or Ruby or Oracle or D, then that is the skill which is mostly to help you to get the job you want.
A: I like that D is the work of a genius, primarily one mind - Walter Bright, whose Zortech compiler was fantastic in its day.
In contrast C++ is too much design by committee, even if Bjarne is an influence. Too many add-on features and weird new syntax. This difference reflects in the ease of learning and ease of everyday use, fewer bugs.
The more coherent languages lead to better productivity and programmer joy - but this is subjective and arguable! (I guess I should vote my own answer down)
A: D is a good language and decently popular, but like all languages, it is just another tool. Which tool to use depends on the kind of person you are, how you think, the environment you are working in, what restrictions of the languages apply the the program, and most importantly, the program itself. If you have the time, I would definitely recommend learning D. Worst case scenario, you will never use it. More likely you will learn what aspects of it you like the most, and under what conditions it shines brightest, and take advantage of that for when making new programs.
I would recommend looking at the D comparison chart to see what the features are for the language and see if it sounds right for you.
A: What determines the success and popularity of a programming language for real-world software development is only partially related to the quality of the language itself. As a pure language, D arguably has many advantages over C++ and Java. At the very least it is a credible alternative as a pure language, all other things being equal.
However, other things matter for software development - almost more than the language itself: portability (how many platforms does it run on), debugger support, IDE support, standard library quality, dynamic library support, bindings for common APIs, documentation, the developer community, momentum, and commercial support, just to name a few. In every one of those regards, D is hopelessly behind Java, C++, and C#. In fact, I'd argue it's even behind so-called "scripting" languages like Python, Perl, PHP, Ruby, and even JavaScript in these regards.
To be blunt, you simply can't build a large-scale, cross-platform application using D. With an immature standard library, no support in any modern IDEs (there are plugins for both Visual Studio and Xamarin Studio/MonoDevelop), limited dynamic/shared library support, and few bindings to other languages, D is simply not an option today.
If you like what you see of D, by all means, learn it - it shouldn't take long if you already know Java and C++. I don't think evangelism would be helpful - at this point if D is going to succeed, what it really needs is more people quietly using it and addressing its major shortcomings like standard library and IDE support.
Finally, as for C++, while most agree the language is too complex, thousands of companies are successfully using C++ as part of a healthy mix of languages by allowing only a smaller, well-defined subset of the language. It's still hard to beat C++ when both raw performance and small memory usage are required.
A: Just to add my own experiences into the mix:
About a year ago I worked on a small scale game project (3 coders) lasting 3 months, where we used D as our primary language. We chose it partly as an experiment, partly because it already had bindings for SDL and some other tools we were using, and mostly for the benefits is appeared to have over C++.
We loved using it. It was quick to learn and easy to write. Many of it's features proved invaluable, and I miss them having gone back to C++.
However, the following points made our lives more difficult:
*
*There was no good IDE at the time which was a major issue. We ended up making our own by customising Scite. This worked ok, but was not ideal.
*There was no debugger at the time. We managed to get WINDBG to work on a hit-or-miss basis, but it was unreliable. Debugging code without a debugger made life hellish at times.
*There were 2 standard libraries to choose from at the time (Tango and Phobos). We started with one, switched to the other, and really needed a mixture of features from both (Tangobos!). This caused headaches and some code re-write.
*Bindings to other tools not available. In the end we had to switch to DirectX (for reasons I won't get into). There were no bindings for DirectX available so we had to write our own in C++, build it as a .dll and bind to that. This was fairly nasty work and took some time.
Overall, we loved to write D. It made actually writing code easy and was quick to learn. The issues I've mentioned echo the answer that has been accepted for this question - it's the "extra" bits that need addressing in D, the core of it is solid.
A: D language is modern. No language is perfect, and there is no doubt about this. But languages are born to make life easier. Where D language compared to C++ has lot of good features, in terms of complexity. Many other language combination specialty is involved, which helps coders to code faster, with TOP features introduced by other languages. Please see also:
for more details about D and other languages
*
*D vs C++, is the compatibility, where huge C++ languages are involved and requires compatibility with D. D allow already 100% compatibility with C, which is a good win still.
*D vs C++, C++ is my opinion very nice language, but its hard to code, and time consuming, requires more and more experience to get success, but D allow to do the same with simplicity
*D vs C++, i am not sure if C++ does, but D do allow none type restriction variable assignment using "auto", which is good to have a variable dynamic, when require you can make a strict type
*D vs C++, if you have other language experience, you can straight get started with it, it has easy learning road map, and its getting designed by a strong experienced team and company support
*D vs C++, the very nice thing i found of D, is the code style, it gives the look and feel exactly the same like C/C++, while coding it reminds i am doing really modern C/C++ which called D
There are lot of more good reason for D language, there is no reason to underestimate any language, its always the user choice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "165"
} |
Q: How does the UTF-8 support of TinyXML work? I'm using TinyXML to parse/build XML files. Now, according to the documentation this library supports multibyte character sets through UTF-8. So far so good I think. But, the only API that the library provides (for getting/setting element names, attribute names and values, ... everything where a string is used) is through std::string or const char*. This has me doubting my own understanding of multibyte character set support. How can a string that only supports 8-bit characters contain a 16 bit character (unless it uses a code page, which would negate the 'supports Unicode' claim)? I understand that you could theoretically take a 16-bit code point and split it over 2 chars in a std::string, but that wouldn't transform the std::string to a 'Unicode' string, it would make it invalid for most purposes and would maybe accidentally work when written to a file and read in by another program.
So, can somebody explain to me how a library can offer an '8-bit interface' (std::string or const char*) and still support 'Unicode' strings?
(I probably mixed up some Unicode terminology here; sorry about any confusion coming from that).
A: First, utf-8 is stored in const char * strings, as @quinmars said. And it's not only a superset of 7-bit ASCII (code points <= 127 are always encoded in a single byte as themselves); it's furthermore careful that bytes with those values are never used as part of the encoding of the multibyte values for code points >= 128. So if you see a byte == 60, it's a '<' character, etc. All of the metachars in XML are in 7-bit ASCII. So one can just parse the XML, breaking strings where the metachars say to, sticking the fragments (possibly including non-ASCII chars) into a char * or std::string, and the returned fragments remain valid UTF-8 strings even though the parser didn't specifically know UTF-8.
Further (not specific to XML, but rather clever), even more complex things generally just work (tm). For example, if you sort UTF-8 lexicographically by bytes, you get the same answer as sorting it lexicographically by code points, despite the variation in the number of bytes used, because the prefix bytes introducing the longer (and hence higher-valued) code points are numerically greater than those for lesser values.
A: UTF-8 is compatible with 7-bit ASCII code. If the value of a byte is larger than 127, it means a multibyte character starts. Depending on the value of the first byte you can see how many bytes the character will take; that can be 2-4 bytes including the first byte (technically 5 or 6 are also possible, but they are not valid UTF-8). Here is a good resource about UTF-8: UTF-8 and Unicode FAQ; also the wiki page for UTF-8 is very informative. Since UTF-8 is char based and 0-terminated, you can use the standard string functions for most things. The only important thing is that the character count can differ from the byte count. Functions like strlen() return the byte count but not necessarily the character count.
A: By using between 1 and 4 chars to encode one Unicode code point.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Tips for speeding up build time on Linux using ANT, Javacc, JUnit and compiling Java classes We have a large codebase that takes approx 12 minutes on the developer machines to auto-generate some Java 5 classes using JavaCC, then compile all the classes, as well as run the unit tests.
The project consists of multiple projects which can be built in groups, but we are aiming for a full build in under 10 minutes.
What tips are there for reducing this build time?
Thanks
A: One quick fix that might shave some time off is to ensure that you are running Ant using the server JVM (by default it uses the client VM). Set ANT_OPTS to include "-server".
A: *
*Profile the build process and see where the bottlenecks are. This can give you some ideas as to how to improve the process.
*Try building independent projects in parallel on multi-core/CPU machines. As an extension of this idea, you may want to look around for a Java equivalent of distcc (don't know whether it exists) to distribute your build over a number of machines.
*Get better machines.
A: some tips for reducing build time:
*
*do less work.
e.g. remove unnecessary logging/echoing to files and
console
*make your build 'incremental'.
Compile only changes classes.
*eliminate duplicated effort.
Easier said than done, but if you run
the build in debug mode ("ant -debug") you can sometimes see
redundant tasks or targets.
*avoid expensive operations.
copying of files, and packaging of jars into wars are necessary for release.Signing jars is expensive and should only be done, if possible' for milestone releases rather than every build
A: Try to be inspired by The Pragmatic Programmer. Compile only what is necessary, and have two or more test suites: one for quick tests, the other for full tests. Consider whether there is a real need to run each build step every time. If necessary, try using the jikes compiler instead of javac. Once a project spans several hundred classes, I switch to jikes to improve speed. But be aware of potential incompatibility issues. Don't forget to include one all-in-one target that performs every step with a full rebuild and full test of the project.
A: Now that you've explained the process in more detail, here are two more options:
*
*A dedicated machine/cluster where the build is performed much quicker than on a normal workstation. The developers would then, before a commit, run a script that builds their code on the dedicated machine/cluster.
*Change the partitioning into sub-projects so that it's harder to break one project by modifying another. This should then make it less important to do a full build before every commit. Only commits that are touching sensitive sub-projects, or those spanning multiple projects would then need to be "checked" by means of a full build.
A: This probably wouldn't help in the very near term, but figured I should throw it out there anyway.
If your project is breakable into smaller projects (a database subsystem, logging, as examples), you may be interested in using something like maven to handle the build. You can run each smaller bite as a separate project or module, and maven will be able to maintain what needs to be built if changes exist. In this the build can focus onthe main portion of your project and it won't take nearly as long.
A: What is the breakdown in time spent:
*
*generating the classes
*compiling the classes
*running the tests
Depending on your project, you may see significant improvements in build time by allocating a larger heap size to javac (memoryMaximumSize) and JUnit (maxmemory).
A: Is it very important that the entire build lasts less than 10 minutes? If you make the sub-projects independent from one another, you could work on one sub-project while having already compiled the other ones (think Maven or Ivy to manage the dependencies).
Another solution (and if your modules are reasonably stable) is to treat your sub-projects as standalone projects. Each project would then follow their own release cycle and be available from a local Maven/Ivy repository. This of course works well if at least parts of the project are reasonably stable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Anyone successfully using Commission Junction API? Is anyone successfully using the CJ web services? I just keep getting java.lang.NullPointerExceptions even though my app is .net (clearly their errors). CJ support doesn't even know what a web service is. I googled and found many people getting this or other errors. Question is: is it a temporary problem or am I doomed to parse manually downloaded reports for eternity?
The specific API I'm trying to use is the daily publisher commission service. Here is the WSDL.
Links:
*
*CJ web services home
*API Reference
A: After spending many days, this code is working for me:
$client = new SoapClient($cjCommissionUrl,
array('trace' => 1,
'soap_version' => SOAP_1_1,
'style' => SOAP_DOCUMENT,
'encoding' => SOAP_LITERAL
));
$date = '06/23/2010';
$results = $client->findPublisherCommissions(array(
"developerKey" => $cjDeveloperKey,
"date" => $date,
"dateType" => 'posting',
"countries" => 'all',
));
A: I have successfully used CJ's API with PHP, though not this particular WSDL. I am seriously troubled by the lack of documentation, and cannot even find any serious programmers using it (basically all amateurs trying to copy-paste). If you have some more experience, we may be able to help each other out.
A: I had written a Python library for retrieving commission info from CJ. Here is the code: https://github.com/sidchilling/commissionjunction-python-lib
Works for me.
A: create a page cjcall.php forexample
paste this code and do according to your requirement i.e. keyword , dev id , records per page
include('../../../../wp-load.php');
$stringkeyw=urlencode(get_option('cj_keyword'));
if(get_option('rm_num_products')==''){
$pperkeyword=50;
}else{
$pperkeyword= get_option('rm_num_products');
}//number of products against keyword
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://product-search.api.cj.com/v2/product-search?website-id=".get_option('cj_siteid')."&keywords=".$stringkeyw."&records-per-page=".$pperkeyword."&serviceable-area=US");
curl_setopt($ch, CURLOPT_HEADER,false);
curl_setopt($ch, CURLOPT_HTTPGET, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Accept: application/xml", "Authorization:".get_option('cj_devid').""));
$result = curl_exec($ch);
curl_close($ch);
echo $result; // output the XML so the second page can fetch it
Create another page and paste the following code to fetch the XML from that page:
$hurl= home_url();
$homepage = file_get_contents(''.$hurl.'/wp-content/plugins/rapid_monetizer/cronjob/cjcall.php');
$object = simplexml_load_string($homepage);
foreach($object->products->product as $cjres)
{
//do your code with products coming in $cjres
}
A: I can make a user interface for you to lift your curse !!!
To use Daily Publisher Commission Report Service !!
Let me know here if you still need help.
A: EDIT: First and foremost, you will not get any results back if there are no commissions to report.
I am working with these API's, I have no problem with any of the REST API's, the SOAP API for the daily publisher commission service does not appear to be working. The results from:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:api="https://pubcommission.api.cj.com">
<soapenv:Header/>
<soapenv:Body>
<api:findPublisherCommissions>
<api:developerKey>*DEVKEY*</api:developerKey>
<api:date>01/19/2007</api:date>
<api:dateType>event</api:dateType>
<api:advertiserIds></api:advertiserIds>
<api:websiteIds>123456</api:websiteIds>
<api:actionStatus>all</api:actionStatus>
<api:actionTypes></api:actionTypes>
<api:adIds></api:adIds>
<api:countries></api:countries>
<api:correctionStatus></api:correctionStatus>
<api:sortBy>commissionAmount</api:sortBy>
<api:sortOrder>desc</api:sortOrder>
</api:findPublisherCommissions>
</soapenv:Body>
</soapenv:Envelope>
This request, which is completely valid and correct, gives me an HTML page back. Your error is probably related to parsing that page as XML.
The results are:
<html>
<head>
<title>Web Services</title>
</head>
<body vlink="#333333" alink="#FFCC33" bgcolor="#FFFFFF" leftmargin="0" marginwidth="0" topmargin="0" marginheight="0">
<table cellpadding="0" cellspacing="0" border="0" width="100%">
<tr>
<td background="images/header_bg.gif">
<a href="http://webservices.cj.com"><img src="images/header.gif" width="600" height="63" border="0" alt="webservices.cj.com" /></a>
</td>
</tr>
</table>
<h3>Latest WSDLs</h3>
<table width=70%><tr><td>
<ul>
<li>ProductSearchServiceV2.0<a href="wsdl/version2/productSearchServiceV2.wsdl">[wsdl]</a><img src="images/new11.gif" width="40" height="15"/></li>
<li>LinkSearchServiceV2.0<a href="wsdl/version2/linkSearchServiceV2.wsdl">[wsdl]</a><img src="images/new11.gif" width="40" height="15"/></</li>
<li>PublisherCommissionService and ItemDetails V2.0<a href="wsdl/version2/publisherCommissionServiceV2.wsdl">[wsdl]</a><img src="images/new11.gif" width="40" height="15"/></</li>
<li>RealTimeCommissionServiceV2.0<a href="wsdl/version2/realtimeCommissionServiceV2.wsdl">[wsdl]</a><img src="images/new11.gif" width="40" height="15"/></</li>
<li>AdvertiserSearchService<a href="wsdl/version2/advertiserSearchServiceV2.wsdl">[wsdl]</a></li>
<li>FieldTypesSupportService<a href="wsdl/version2/supportServiceV2.wsdl">[wsdl]</a></li>
</ul>
</td></tr></table>
<h3>Previously Released WSDLs</h3>
For previous versions of the wsdls <a href="old_versions.jsp">click here.</a><p>
<h3>Sign Up</h3>
<ul>
<li><a href="sign_up.cj">Sign Up</a></li>
</ul>
</body>
</html>
I have sent them an email and expect a response today. I will confirm with you whether this API is still available; it may have been completely replaced by the Real Time publisher commission API.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Key concepts to learn in Assembly I am a firm believer in the idea that one of the most important things you get from learning a new language is not how to use a new language, but the knowledge of concepts that you get from it. I am not asking how important or useful you think Assembly is, nor do I care if I never use it in any of my real projects.
What I want to know is what concepts of Assembly do you think are most important for any general programmer to know? It doesn't have to be directly related to Assembly - it can also be something that you feel the typical programmer who spends all their time in higher-level languages would not understand or takes for granted, such as the CPU cache.
A: Register allocation and management
Assembly gives you a very good idea of how many variables (machine-word-sized integers) the CPU can juggle simultaneously. If you can break your loops down so that they involve only a few temporary variables, they'll all fit in registers. If not, your loop will run slowly as things get swapped out to memory.
This has really helped me with my C coding. I try to make all loops tight and simple, with as little spaghetti as possible.
x86 is dumb
Learning several assembly languages has made me realize how lame the x86 instruction set is. Variable-length instructions? Hard-to-predict timing? Non-orthogonal addressing modes? Ugh.
The world would be better if we all ran MIPS, I think, or even ARM or PowerPC :-) Or rather, if Intel/AMD took their semiconductor expertise and used it to make multi-core, ultra-fast, ultra-cheap MIPS processors instead of x86 processors with all of those redeeming qualities.
A: I think assembly language can teach you lots of little things, as well as a few big concepts.
I'll list a few things I can think of here, but there is no substitute for going and learning and using both x86 and a RISC instruction set.
You probably think that integer operations are fastest. If you want to find an integer square root of an integer (i.e. floor(sqrt(i))) it's best to use an integer-only approximation routine, right?
Nah. The math coprocessor (on x86 that is) has a fsqrt instruction. Converting to float, taking the square root, and converting to int again is faster than an all-integers algorithm.
Then there are things like accessing memory that you can follow, but not properly appreciate, until you've delved into assembly. Say you had a linked list, and the first element in the list contains a variable that you will need to access frequently. The list is reordered rarely. Well, each time you need to access that variable, you need to load the pointer to the first element in the list, then using that, load the variable (assuming you can't keep the address of the variable in a register between uses). If you instead stored the variable outside of the list, you only need a single load operation.
Of course saving a couple of cycles here and there is usually not important these days. But if you plan on writing code that needs to be fast, this kind of knowledge can be applied both with inline assembly and generally in other languages.
How about calling conventions? (Some assemblers take care of this for you - Real Programmers don't use those.) Does the caller or callee clean up the stack? Do you even use the stack? You can pass values in registers - but due to the funny x86 instruction set, it's better to pass certain things in certain registers. And which registers will be preserved? One thing C compilers can't really optimise by themselves is calls.
There are little tricks like PUSHing a return address and then JMPing into a procedure; when the procedure returns it will go to the PUSHed address. This departure from the usual way of thinking about function calls is another one of those "states of enlightenment". If you were ever to design a programming language with innovative features, you ought to know about funny things that the hardware is capable of.
A knowledge of assembly language teaches you architecture-specific things about computer security. How you might exploit buffer overflows, or break into kernel mode, and how to prevent such attacks.
Then there's the ubercoolness of self-modifying code, and as a related issue, mechanisms for things such as relocations and applying patches to code (this needs investigation of machine code as well).
But all these things need the right sort of mind. If you're the sort of person who can put
while(x--)
{
...
}
to good use once you learn what it does, but would find it difficult to work out what it does by yourself, then assembly language is probably a waste of your time.
A: It's good to know assembly language in order to gain a better appreciation for how the computer works "under the hood," and it helps when you are debugging something and all the debugger can give you is an assembly code listing, which at least gives you a fighting chance of figuring out what the problem might be. However, trying to apply low-level knowledge to high-level programming languages, such as trying to take advantage of how the CPU caches instructions and then writing wonky high-level code to force the compiler to produce super-efficient machine code, is probably a sign that you are trying to micro-optimize. In most cases, it's usually better not to try to outsmart the compiler, unless you need the performance gain, in which case, you might as well write those bits in assembly anyway.
So, it's good to know assembly for the sake of better understanding of how things work, but the knowledge gained is not necessarily directly applicable to how you write code in high-level languages. On that note, however, I found that learning how function calls work at the assembly-code level (learning about the stack and related registers, learning about how parameters are passed on the stack, learning how automatic storage works, etc.) made it a lot easier to understand problems I had in higher-level code, such as "out of stack space" errors and "invalid calling convention" errors.
A: The most important concept is SIMD, and creative use of it. Proper use of SIMD can give enormous performance benefits in a massive variety of applications ranging from everything from string processing to video manipulation to matrix math. This is where you can get over 10x performance boosts over pure C code--this is why assembly is still useful beyond mere debugging.
Some examples from the project I work on (all numbers are clock cycle counts on a Core 2):
Inverse 8x8 H.264 DCT (frequency transform):
c: 1332
mmx: 187
sse2: 127
8x8 Chroma motion compensation (bilinear interpolation filter):
c: 639
mmx: 144
sse2: 110
ssse3: 79
4 16x16 Sum of Absolute Difference operations (motion search):
c: 3948
mmx: 278
sse2: 231
ssse3: 215
(yes, that's right--over 18x faster than C!)
Mean squared error of a 16x16 block:
c: 1013
mmx: 193
sse2: 131
Variance of a 16x16 block:
c: 783
mmx: 171
sse2: 106
A: Memory, registers, jumps, loops, shifts and the various operations one can perform in assembler. I don't miss the days of debugging my assembly language class programs - they were painful! - but it certainly gave me a good foundation.
We forget (or never knew, perhaps) that all this fancy-pants stuff that we use today (and that I love!) boils down to all this stuff in the end.
Now, we can certainly have a productive and lucrative career without knowing assembler, but I think these concepts are good to know.
A: I would say that learning recursion and loops in assembly taught me a lot. It made me understand the underlying concept of how the compiler/interpreter of the language I'm using pushes things onto a stack, and pops them off as it needs them. I also learned how to exploit the infamous stack overflow (which is still surprisingly easy in C with unchecked input functions like gets()).
Other than that, I don't think I would use any of the concepts assembly taught me in everyday situations.
A: Nowadays, x86 asm is not a direct line to the guts of the CPU, but more of an API. The assembler opcodes you write are themselves compiled into a completely different instruction set, rearranged, rewritten, fixed up and generally mangled beyond recognition.
So it's not like learning assembler gives you a fundamental insight into what's going on inside the CPU. IMHO, more important than learning assembler is to get a good understanding of how the target CPU and the memory hierarchy works.
This series of articles covers the latter topic pretty thoroughly.
A: I would say that addressing modes are extremely important.
My alma mater took that to an extreme, and because x86 didn't have enough of them, we studied everything on a simulator of PDP11 that must have had at least 7 of them that I remember. In retrospect, that was a good choice.
A: Timing.
Fast execution:
*parallel processing
*simple instructions
*lookup tables
*branch prediction, pipelining
Fast to slow access to storage:
*registers
*cache, and various levels of cache
*memory heap and stack
*virtual memory
*external I/O
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How can I access UserId in ASP.NET Membership without using Membership.GetUser()? How can I access UserId in ASP.NET Membership without using Membership.GetUser(username) in ASP.NET Web Application Project?
Can UserId be included in Profile namespace next to UserName (System.Web.Profile.ProfileBase)?
A: I decided to write user authentication on my own (very simple, but it works), and I should have done this a long time ago.
My original question was about UserId and it is not available from:
System.Web.HttpContext.Current.User.Identity.Name
A: Try this:
MembershipUser CurrentUser = Membership.GetUser(User.Identity.Name);
Response.Write("CurrentUser ID :: " + CurrentUser.ProviderUserKey);
A: Try the following:
Membership.GetUser().ProviderUserKey
A: Is your reason for this to save a database call every time you need the UserId? If so, when I'm using the ASP.NET MembershipProvider, I usually either write a custom provider that allows me to cache that call, or a utility method whose result I can cache.
If you're thinking of putting it in the Profile, I don't see much reason for doing so, especially as it also will still require a database call and unless you are using a custom profile provider there, it has the added processing of parsing out the UserId.
If you're wondering why they did not implement a GetUserId method, it's simply because you're not always guaranteed that that user id will be a GUID as in the included provider.
EDIT:
See ScottGu's article on providers which provides a link to downloading the actual source code for i.e. SqlMembershipProvider.
But the simplest thing to do really is a GetUserId() method in your user object, or utility class, where you get the UserId from cache/session if there, otherwise hit the database, cache it by username (or store in session), and return it.
For something more to consider (but be very careful because of cookie size restrictions): Forms Auth: Membership, Roles and Profile with no Providers and no Session
A: public string GetUserID()
{
    MembershipUser user = Membership.GetUser();
    Guid userId = (Guid)user.ProviderUserKey;
    return userId.ToString();
}
A: You have two options here:
1) Use username as the primary key for your user data table
i.e:
select * from [dbo.User] where Username = 'andrew.myhre'
2) Add UserID to the profile.
There are pros and cons to each method. Personally I prefer the first, because it means I don't necessarily need to set up the out-of-the-box profile provider, and I prefer to enforce unique usernames in my systems anyway.
A: Andrew: I'd be careful of doing a query like what you've shown as by default, there's no index that matches with that so you run the risk of a full table scan. Moreover, if you're using your users database for more than one application, you haven't included the application id.
The closest index is aspnet_Users_Index which requires the ApplicationId and LoweredUserName.
EDIT:
Oops - reread Andrew's post and he's not doing a select * on the aspnet_Users table, but rather, a custom profile/user table using the username as the primary key.
A: Have you tried using System.Web.HttpContext.Current.User.Identity.Name? (Make sure to verify that User and Identity are non-null first.)
A: {
MembershipUser m = Membership.GetUser();
Response.Write("ID: " + m.ProviderUserKey.ToString());
}
Will give you the UserID (uniqueidentifier) for the current user from the aspnet_Membership table - providing the current has successfully logged in. If you try to <%= %> or assign that value before a successful authentication you will get the error "Object reference not set to an instance of an object".
http://www.tek-tips.com/viewthread.cfm?qid=1169200&page=1
A: I had this problem, the solution is in the web.config configuration, try configuring web.config with these:
<roleManager
enabled="true"
cacheRolesInCookie="true"
defaultProvider="QuickStartRoleManagerSqlProvider"
cookieName=".ASPXROLES"
cookiePath="/"
cookieTimeout="30"
cookieRequireSSL="false"
cookieSlidingExpiration="true"
createPersistentCookie="false"
cookieProtection="All">
<providers>
<add name="QuickStartRoleManagerSqlProvider"
type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
connectionStringName="ASPNETDB"
applicationName="SecurityQuickStart"/>
</providers>
</roleManager>
A: Internally, it executes a query like the following in SQL Server:
select * from vw_aspnet_MembershipUsers where USERNAME like '%username%'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Error with bindParam overwriting in PHP This is a bit of a weird one, and I could well be coding this completely wrong - hence why I've hit the same error twice in two days, in completely different parts of a script. The code I'm using is below:
public function findAll( $constraints = array() ) {
// Select all records
$SQL = 'SELECT * FROM ' . $this->tableName;
// See if there's any constraints
if( count( $constraints ) > 0 ) {
$SQL .= ' WHERE ';
foreach( $constraints as $field => $value ) {
$SQL .= $field . ' = :' . $field . ' AND ';
}
}
// Remove the final AND and prepare the statement
$SQL = substr( $SQL, 0, -5 );
$PDOStatement = $this->PDO->prepare( $SQL );
// Loop through constraints and bind parameters
foreach( $constraints as $field => $value ) {
print 'Binding ' . $field . ' to ' . $value . ' ';
$PDOStatement->bindParam( $field, $value );
}
$PDOStatement->execute();
var_dump($PDOStatement);
while ( $results = $PDOStatement->fetch( PDO::FETCH_ASSOC ) ) {
var_dump($results);
}
}
I'm pretty new to using PDO, but basically I'm attempting to pass an array of constraints e.g. array( 'active' => 1, 'name' => 'James' ) and return all rows from the table WHERE active = 1 AND name = 'James'
If I use this array, the SQL executed from the first var_dump( ) is SELECT * FROM {table} WHERE active = :active AND name = 'James' - exactly as I expect. The bound parameters print 'Binding active to 1' and 'Binding name to James' - exactly as expected. The rows exist in the database, and yet the second var_dump() call for $results outputs nothing - i.e. no rows are returned.
If I pass an array of a single constraint, e.g. array( 'active' => 1 ), this works perfectly fine. It appears to be whenever multiple constraints are passed that it stops working.
A: That's because bindParam works by binding to a variable, and you are re-using the variable ($value) for multiple values. Try with bindValue instead.
Or even better yet; Pass the values as an array to execute instead. This makes the statement stateless, which is generally a good thing in programming.
A: As mentioned, using bindValue instead of bindParam will certainly accomplish this. However, after spending a considerable amount of time troubleshooting this issue recently, I discovered an alternate solution. Here is how to accomplish PDO variable binding in a foreach loop using bindParam:
Replace the following line from the original post:
$PDOStatement->bindParam( $field, $value );
...with this:
$PDOStatement->bindParam( $field, $constraints[$field] );
Instead of binding $value, use $array_name[$array_key]. This works because you are now binding to a unique variable instead of one that gets reused on each pass of the loop.
The variable $field used as the placeholder apparently does not need to be a unique variable, however. I have not thoroughly researched this yet, but a variable used as a placeholder appears to be parsed immediately (instead of being assigned as a variable reference) even when bindParam is used.
Also, as you would no longer need to access $value directly, you could also replace this:
foreach( $constraints as $field => $value ) {
... with this:
foreach (array_keys($constraints) as $field) {
This is optional, as it will work fine without this change. It looks cleaner in my opinion though, since it might get confusing later as to why $value is assigned but never used.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to correctly write dynamic files to an FTP server? I'm using C# and I have written a locally installed application that dynamically generates files which need to be on an FTP server.
Do I generate them to disk and then upload them to the FTP server, or is there a way to open a stream to an FTP server and write the files directly?
A: Check the code sample I gave in this answer, doesn't rely on writing to files. It's not SQL specific and was just a suggestion on how to use SQL CLR integration assemblies to upload output from sql queries to an FTP server. The for loop in the method is just to demonstrate writing to the FTP stream. You should be able to rework to you needs:
How to write stored procedure output directly to a file on an FTP without using local or temp files?
A: You should look at the following class:
System.Net.FtpWebRequest
You will see that its arguments are streams and you can send data to them from any source.
When searching for .NET capabilities, you should be aware of the Object Browser for Visual Studio, accessible via:
View > Other Windows > Object Browser
It supplies a search over all known assembly .NET objects.
A: The better way is to save the file locally and upload it later, since there could be problems with the upload process.
A: Since you are using c# I'm thinking maybe you are in a Windows Env. Something I know little about :)
If you are dealing with a Unix env, you could just pipe your output through SSH, which would also take care of encryption.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Top & httpd - demystifying what is actually running I often use the "top" command to see what is taking up resources. Mostly it comes up with a long list of Apache httpd processes, which is not very useful. Is there any way to see a similar list, but such that I could see which PHP scripts etc. those httpd processes are actually running?
A: If you're concerned about long running processes (i.e. requests that take more than a second or two to execute), you'll be able to get an idea of them using Apache's mod_status. See the documentation, and an example of the output (from www.apache.org). This isn't unique to PHP, but applies to anything running inside an apache process.
Note that the www.apache.org status output is publicly available presumably for demonstration purposes -- you'd want to restrict access to yours so that not everyone can see it.
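As a sketch, a minimal mod_status configuration might look like the fragment below. The location path and the access rule are illustrative, and it assumes the module is already loaded.

```apache
# httpd.conf - minimal mod_status setup (assumes mod_status is loaded)
ExtendedStatus On

<Location "/server-status">
    SetHandler server-status
    # Apache 2.4 syntax; on 2.2 use "Order deny,allow" / "Allow from 127.0.0.1"
    Require local
</Location>
```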
A: There's a top-like ncurses-based utility called apachetop which provides realtime log analysis for Apache. Unfortunately, the project has been abandoned and the code suffers from some bugs, however it's actually very much usable. Just don't run it as root, run it as any user with access to the web server log files and you should be fine.
A: The PHP scripts happen so fast, top wouldn't show you very much. Or they would zip by quite quickly. Most web requests are quite quick.
I think your best bet would be some type of real-time log processor that kept an eye on your access logs and updated stats for you, such as average run time, memory usage and the like.
A: You could make your PHP pages time themselves and write their path and execution time to file or database. Note that would slow everything down while you were monitoring, but it would serve as a good measuring method.
It wouldn't be that interactive though. You'd be able to get daily or weekly results from it, but it'd be hard to see something meaningful within minutes or hours.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is MySQL mostly doing? Is there any way to see an overview of what kind of queries are spent the most time on every day on MySQL?
A: Yes, mysql can create a slow query log. You'll need to start mysqld with the --log-slow-queries flag:
mysqld --log-slow-queries=/path/to/your.log
Then you can parse the log using mysqldumpslow:
mysqldumpslow /path/to/your.log
More info is here (http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html).
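On newer MySQL versions (5.1 and later) the slow query log is usually configured in my.cnf rather than on the command line; a sketch, where the file path and threshold are illustrative:

```ini
[mysqld]
# Log statements that take longer than long_query_time seconds
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2
```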
A: You can always set up query logging as described here:
http://dev.mysql.com/doc/refman/5.0/en/query-log.html
A: It depends on what you mean by 'most time'. There may be thousands if not hundreds of thousands of queries which take very little time each, but consume 90% of CPU/IO bandwidth. Or there may be a few huge outliers.
There are tools for performance monitoring and analysis, such as the built-in PERFORMANCE_SCHEMA, the enterprise tools from the Oracle/MySQL team, and online services like newrelic which can track performance of an entire application stack.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Javascript Events: Getting notified of changes in an control value I have the following problem:
I have an HTML textbox (<input type="text">) whose contents are modified by a script I cannot touch (it is my page, but i'm using external components).
I want to be notified in my script every time the value of that textbox changes, so I can react to it.
I've tried this:
txtStartDate.observe('change', function() { alert('change' + txtStartDate.value) });
which (predictably) doesn't work. It only gets executed if I myself change the textbox value with the keyboard and then move the focus elsewhere, but it doesn't get executed if the script changes the value.
Is there another event I can listen to, that I'm not aware of?
I'm using the Prototype library, and in case it's relevant, the external component modifying the textbox value is Basic Date Picker (www.basicdatepicker.com)
A: As you've implied, change (and other events) only fire when the user takes some action. A script modifying things won't fire any events. Your only solution is to find some hook into the control that you can hook up to your listener.
Here is how I would do it:
basicDatePicker.selectDate = basicDatePicker.selectDate.wrap(function(orig,year,month,day,hide) {
myListener(year,month,day);
return orig(year,month,day,hide);
});
That's based on a cursory look with Firebug (I'm not familiar with the component). If there are other ways of selecting a date, then you'll need to wrap those methods as well.
A: addEventListener("DOMControlValueChanged", ...) will fire when a control's value changes, even if it's by a script.
addEventListener("input", ...) is a direct-user-initiated, filtered version of DOMControlValueChanged.
Unfortunately, DOMControlValueChanged is only supported by Opera currently and input event support is broken in webkit. The input event also has various bugs in Firefox and Opera.
This stuff will probably be cleared up in HTML5 pretty soon, fwiw.
Update:
As of 9/8/2012, DOMControlValueChanged support has been dropped from Opera (because it was removed from HTML5) and 'input' event support is much better in browsers (including less bugs) now.
A: IE has an onpropertychange event which could be used for this purpose.
For real web browsers (;)), there's a DOMAttrModified mutation event, but in a couple of minutes worth of experimentation in Firefox, I haven't been able to get it to fire on a text input when the value is changed programatically (or by regular keyboard input), yet it will fire if I change the input's name programatically. Curiouser and curiouser...
If you can't get that working reliably, you could always just poll the input's value regularly:
var value = someInput.value;
setInterval(function()
{
if (someInput.value != value)
{
alert("Changed from " + value + " to " + someInput.value);
value = someInput.value;
}
}, 250);
A: Depending on how the external javascript was written, you could always re-write the relevant parts of the external script in your script and have it overwrite the external definition so that the change event is triggered.
I've had to do that before with scripts that were out of my control.
You just need to find the external function, copy it in its entirety as a new function with the same name, and re-write the script to do what you want it to.
Of course if the script was written correctly using closures, you won't be able to change it too easily...
A: Aside from getting around the problem like how noah explained, you could also just create a timer that checks the value every few hundred milliseconds.
A: I had to modify the YUI datatable paginator control once in the manner advised by Dan. It's brute force, but it worked in solving my problem. That is, locate the method writing to the field, copy its code, add a statement firing the change event, and in your code just handle that change event. You just have to override the original function with that new version of it. Polling, while it works fine, seems to me a much more resource-consuming solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How can I tell if a given path is a directory or a file? (C/C++) I'm using C and sometimes I have to handle paths like
*C:\Whatever
*C:\Whatever\
*C:\Whatever\Somefile
Is there a way to check if a given path is a directory or a given path is a file?
A: Call GetFileAttributes, and check for the FILE_ATTRIBUTE_DIRECTORY attribute.
A: With C++14/C++17 you can use the platform independent is_directory() and is_regular_file() from the filesystem library.
#include <filesystem> // C++17
#include <iostream>
namespace fs = std::filesystem;
int main()
{
const std::string pathString = "/my/path";
const fs::path path(pathString); // Constructing the path from a string is possible.
std::error_code ec; // For using the non-throwing overloads of functions below.
if (fs::is_directory(path, ec))
{
// Process a directory.
}
if (ec) // Optional handling of possible errors.
{
std::cerr << "Error in is_directory: " << ec.message();
}
if (fs::is_regular_file(path, ec))
{
// Process a regular file.
}
if (ec) // Optional handling of possible errors. Usage of the same ec object works since fs functions are calling ec.clear() if no errors occur.
{
std::cerr << "Error in is_regular_file: " << ec.message();
}
}
In C++14 use std::experimental::filesystem.
#include <experimental/filesystem> // C++14
namespace fs = std::experimental::filesystem;
Additional implemented checks are listed in section "File types".
A: On Windows you can use GetFileAttributes on an open handle.
A: In Win32, I usually use PathIsDirectory and its sister functions. This works in Windows 98, which GetFileAttributes does not (according to the MSDN documentation.)
A: stat() will tell you this.
struct stat s;
if( stat(path,&s) == 0 )
{
if( S_ISDIR(s.st_mode) ) //S_ISDIR/S_ISREG avoid false positives from overlapping S_IF* bit patterns (e.g. sockets)
{
//it's a directory
}
else if( S_ISREG(s.st_mode) )
{
//it's a file
}
else
{
//something else
}
}
else
{
//error
}
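Wrapped into a complete helper using the S_ISDIR/S_ISREG macros (a POSIX-only sketch; the return codes are my own convention, not part of any API):

```c
#include <sys/stat.h>

/* Returns 1 for a directory, 2 for a regular file, 0 otherwise or on error
   (e.g. the path does not exist). */
int path_kind(const char *path)
{
    struct stat s;
    if (stat(path, &s) != 0)
        return 0;
    if (S_ISDIR(s.st_mode))
        return 1;
    if (S_ISREG(s.st_mode))
        return 2;
    return 0;
}
```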
A: This is a simple method using the GetFileAttributesW function to check if the path is a directory on Windows. If the received path must be a directory or a file path then if it is not a directory path you can assume that it is a file path.
bool IsDirectory(std::wstring path)
{
    DWORD attrib = GetFileAttributesW(path.c_str());
    // GetFileAttributes returns INVALID_FILE_ATTRIBUTES (all bits set) on failure,
    // e.g. when the path does not exist; without this check such paths would
    // incorrectly test as directories.
    if (attrib == INVALID_FILE_ATTRIBUTES)
        return false;
    return (attrib & FILE_ATTRIBUTE_DIRECTORY) != 0;
}
A: It's easier to use QFileInfo::isDir() in Qt.
A: If you're using CFile you can try
CFileStatus status;
// m_attribute can combine several flags, so test the CFile::directory bit
// rather than comparing the whole value against 0x10.
if (CFile::GetStatus(fileName, status) && (status.m_attribute & CFile::directory) != 0){
//it's a directory
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
} |
Q: Should I not subclass by type of object if there are many types? I am working with a log of events where there are about 60 different "types" of events. Each event shares about 10 properties, and then there are subcategories of events that share various extra properties.
How I work with these events does depend on their type or what categorical interfaces they implement.
But it seems to be leading to code bloat. I have a lot of redundancy in the subclass methods because they implement some of the same interfaces.
Is it more appropriate to use a single event class with a "type" property and write logic that checks type and maintain some organization of categories of types (e.g. a list of event types that are category a, a second list that are category b, etc)? Or is the subclass design more appropriate in this case?
First Approach:
public interface Category1 {}
public interface Category2 {}
public abstract class Event {
private base properties...;
}
public class EventType1 extends Event implements Category1, Category2 {
private extra properties ...;
}
public class EventType2 extends Event implements Category3, Category4 {
private extra properties ...;
}
Second Approach:
public enum EventType {TYPE1, TYPE2, TYPE3, ...}
public class Event {
private union of all possible properties;
private EventType type;
}
My personal opinion is that it seems like a single event object is what is appropriate, because, if I am thinking about it correctly, there is no need for using inheritance to represent the model because it is really only the behavior and my conditions that alter based on the type.
I need to have code that does stuff like:
if(event instanceof Category1) {
...
}
This works well in the first approach in that instead of instanceof I can just call the method on the event and implement "the same code" in each of the appropriate subclasses.
But the second approach is so much more concise. Then I write stuff like:
if(CATEGORY1_TYPES.contains(event.getEventType())) {
...
}
And all my "processing logic" can be organized into a single class and none of it is redundantly spread out among the subclasses. So is this a case where although OO appears more appropriate, it would be better not too?
A: I would go with the object per event type solution, but I would instead group commonly used combinations of interfaces under (probably abstract) classes providing their skeletal implementations. This greatly reduces the code bloat generated by having many interfaces, but, on the other hand, increases the number of classes. But, if used properly and reasonably, it leads to cleaner code.
A: Inheritance can be limiting if you decide to extend an abstract base class of a
particular Category interface, because you might need to implement another Category as well.
So, here is a suggested approach:
Assuming you need the same implementation for a particular Category interface method (regardless of the Event), you could write an implementation class for each Category interface.
So you would have:
public class Category1Impl implements Category1 {
...
}
public class Category2Impl implements Category2 {
...
}
Then for each of your Event classes, just specify the Category interfaces it implements, and keep a private member instance of the Category implementation class (so you use composition, rather than inheritance). For each of the Category interface methods, simply forward the method call to the Category implementation class.
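A hedged sketch of this forwarding pattern (the interface method and class names are invented for illustration):

```java
// One category interface with a behavior to share.
interface Category1 {
    boolean isUrgent();
}

// A single reusable implementation of that category's behavior.
class Category1Impl implements Category1 {
    @Override
    public boolean isUrgent() {
        return true;
    }
}

// The event declares the interface but keeps the shared implementation
// as a private member and forwards calls to it (composition, not
// inheritance), leaving the event free to extend any base class it needs.
class UrgentEvent implements Category1 {
    private final Category1 category1 = new Category1Impl();

    @Override
    public boolean isUrgent() {
        return category1.isUrgent();
    }
}
```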
A: Since I didn't really get the answers I was looking for I am providing my own best guess based on my less than desirable learning experience.
The events themselves actually don't have behaviors, it is the handlers of the events that have behaviors. The events just represent the data model.
I rewrote the code to just treat events as object arrays of properties so that I can use Java's new variable arguments and auto-boxing features.
With this change, I was able to delete around 100 gigantic classes of code and accomplish much of the same logic in about 10 lines of code in a single class.
Lesson(s) learned: It is not always wise to apply OO paradigms to the data model. Don't concentrate on providing a perfect data model via OO when working with a large, variable domain. OO design benefits the controller more than the model sometimes. Don't focus on optimization upfront as well, because usually a 10% performance loss is acceptable and can be regained via other means.
Basically, I was over-engineering the problem. It turns out this is a case where proper OO design is overkill and turns a one-night project into a 3 month project. Of course, I have to learn things the hard way!
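The rewrite described above might look roughly like this - a sketch only, since the original code isn't shown; the varargs constructor relies on autoboxing to accept primitives alongside objects:

```java
import java.util.Arrays;
import java.util.List;

// A single event class: just a type tag plus a bag of properties.
class Event {
    final String type;
    final List<Object> properties;

    // Varargs + autoboxing: primitives like 42 are boxed automatically.
    Event(String type, Object... properties) {
        this.type = type;
        this.properties = Arrays.asList(properties);
    }
}

class EventDemo {
    // Handler logic checks membership lists instead of subclass types.
    static final List<String> CATEGORY1_TYPES = Arrays.asList("TYPE1", "TYPE2");

    public static void main(String[] args) {
        Event e = new Event("TYPE1", "john", 42, true);
        System.out.println(CATEGORY1_TYPES.contains(e.type)); // true
    }
}
```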
A: Merely having a large number of .java files isn't necessarily bad. If you can meaningfully extract a small number (2-4 or so) of Interfaces that represent the contracts of the classes, and then package all of the implementations up, the API you present can be very clean, even with 60 implementations.
I might also suggest using some delegate or abstract classes to pull in common functionality. The delegates and/or abstract helpers should all be package-private or class-private, not available outside the API you expose.
A: If there is considerable mixing and matching of behavior, I would consider using composition of other objects, then have either the constructor of the specific event type object create those objects, or use a builder to create the object.
perhaps something like this?
class EventType {
    protected EventPropertyHandler handler;
    public EventType(EventPropertyHandler h) {
        handler = h;
    }
    void handleEvent(Map<String, String> properties) {
        handler.handle(properties);
    }
}
abstract class EventPropertyHandler {
    abstract void handle(Map<String, String> properties);
}
class SomeHandler extends EventPropertyHandler {
    void handle(Map<String, String> properties) {
        String value = properties.get("somekey");
        // do something with value..
    }
}
class EventBuilder {
    public static EventType buildSomeEventType() {
        return new EventType(new SomeHandler());
    }
}
There are probably some improvements that could be made, but that might get you started.
A: It depends on if each type of event inherently has different behavior that the event itself can execute.
Do your Event objects need methods that behave differently per type? If so, use inheritance.
If not, use an enum to classify the event type.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What can you do to a legacy codebase that will have the greatest impact on improving the quality? As you work in a legacy codebase what will have the greatest impact over time that will improve the quality of the codebase?
*
*Remove unused code
*Remove duplicated code
*Add unit tests to improve test coverage where coverage is low
*Create consistent formatting across files
*Update 3rd party software
*Reduce warnings generated by static analysis tools (i.e.Findbugs)
The codebase has been written by many developers with varying levels of expertise over many years, with a lot of areas untested and some untestable without spending a significant time on writing tests.
A: I'd say 'remove duplicated code' pretty much means you have to pull code out and abstract it so it can be used in multiple places - this, in theory, makes bugs easier to fix because you only have to fix one piece of code, as opposed to many pieces of code, to fix a bug in it.
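As a tiny illustrative example (not from any real codebase), here is duplicated logic pulled out into one helper so a bug fix happens in a single place:

```java
// The tax formula used to be repeated in Invoice and Quote;
// extracting it means one place to fix if the formula is wrong.
class Pricing {
    static double withTax(double net, double rate) {
        return net * (1.0 + rate);
    }
}

class Invoice {
    double total(double net) {
        return Pricing.withTax(net, 0.2); // delegates to the shared helper
    }
}

class Quote {
    double estimate(double net) {
        return Pricing.withTax(net, 0.2); // same logic, same single source
    }
}
```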
A: Add unit tests to improve test coverage. Having good test coverage will allow you to refactor and improve functionality without fear.
There is a good book on this written by the author of CPPUnit, Working Effectively with Legacy Code.
Adding tests to legacy code is certainly more challenging than creating them from scratch. The most useful concept I've taken away from the book is the notion of "seams", which Feathers defines as
"a place where you can alter behavior in your program without editing in that place."
Sometimes it's worth refactoring to create seams that will make future testing easier (or possible in the first place.) The Google testing blog has several interesting posts on the subject, mostly revolving around the process of Dependency Injection.
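A small, hypothetical example of creating a seam through constructor injection - the dependency is moved behind an interface so a test can alter behavior without editing the logic under test:

```java
// The seam: time is obtained through an interface instead of a
// hard-wired call, so tests can substitute a fixed clock.
interface Clock {
    long now();
}

class Billing {
    private final Clock clock;

    Billing(Clock clock) { // dependency injected at the seam
        this.clock = clock;
    }

    boolean isOverdue(long dueMillis) {
        return clock.now() > dueMillis;
    }
}

class BillingDemo {
    public static void main(String[] args) {
        Billing billing = new Billing(() -> 1_000L); // test double
        System.out.println(billing.isOverdue(500L));   // true
        System.out.println(billing.isOverdue(2_000L)); // false
    }
}
```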
A: I can relate to this question as I currently have in my lap one of 'those' old-school codebases. It's not really legacy, but it certainly hasn't followed the trends of the years.
I'll tell you the things I would love to fix in it as they bug me every day:
*
*Document the input and output variables
*Refactor the variable names so they actually mean something, rather than some Hungarian-notation prefix followed by a three-letter acronym with an obscure meaning. CamelCase is the way to go.
*I'm scared to death of changing any code as it will affect hundreds of clients that use the software and someone WILL notice even the most obscure side effect. Any repeatable regression tests would be a blessing since there are zero now.
The rest is really peanuts. These are the main problems with a legacy codebase, they really eat up tons of time.
A: *
*Read Michael Feather's book "Working effectively with Legacy Code"
This is a GREAT book.
If you don't like that answer, then the best advice I can give would be:
*
*First, stop making new legacy code[1]
[1]: Legacy code = code without unit tests and therefore an unknown
Changing legacy code without an automated test suite in place is dangerous and irresponsible. Without good unit test coverage, you can't possibly know what effect those changes will have. Feathers recommends a "stranglehold" approach where you isolate areas of code you need to change, write some basic tests to verify basic assumptions, make small changes backed by unit tests, and work out from there.
NOTE: I'm not saying you need to stop everything and spend weeks writing tests for everything. Quite the contrary, just test around the areas you need to test and work out from there.
Jimmy Bogard and Ray Houston did an interesting screen cast on a subject very similar to this:
http://www.lostechies.com/blogs/jimmy_bogard/archive/2008/05/06/pablotv-eliminating-static-dependencies-screencast.aspx
A: I work with a legacy 1M LOC application written and modified by about 50 programmers.
* Remove unused code
Almost useless... just ignore it. You won't get a big Return On Investment (ROI) from that one.
* Remove duplicated code
Actually, when I fix something I always search for duplicates. If I find some, I write a generic function or comment every occurrence of the duplication (sometimes the effort of writing a generic function isn't worth it). The main idea is that I hate doing the same action more than once. Another reason is that there's always someone (could be me) who forgets to check for other occurrences...
* Add unit tests to improve test coverage where coverage is low
Automated unit tests are wonderful... but if you have a big backlog, the task itself is hard to promote unless you have stability issues. Go with the part you are working on and hope that in a few years you have decent coverage.
* Create consistent formatting across files
IMO the differences in formatting are part of the legacy. They give you a hint about who wrote the code, or when. That can give you some clue about how to behave in that part of the code. Reformatting isn't fun, and it doesn't give any value to your customer.
* Update 3rd party software
Do it only if there are really nice new features, or if the version you have is not supported by the new operating system.
* Reduce warnings generated by static analysis tools
It can be worth it. Sometimes a warning can hide a potential bug.
A: I'd say it largely depends on what you want to do with the legacy code...
If it will indefinitely remain in maintenance mode and it's working fine, doing nothing at all is your best bet. "If it ain't broke, don't fix it."
If it's not working fine, removing the unused code and refactoring the duplicate code will make debugging a lot easier. However, I would only make these changes on the erring code.
If you plan on version 2.0, add unit tests and clean up the code you will bring forward
A: Good documentation. As someone who has to maintain and extend legacy code, that is the number one problem. It's difficult, if not downright dangerous to change code you don't understand. Even if you're lucky enough to be handed documented code, how sure are you that the documentation is right? That it covers all of the implicit knowledge of the original author? That it speaks to all of the "tricks" and edge cases?
Good documentation is what allows those other than the original author to understand, fix, and extend even bad code. I'll take hacked yet well-documented code that I can understand over perfect yet inscrutable code any day of the week.
A: The single biggest thing that I've done to the legacy code that I have to work with is to build a real API around it. It's a 1970's style COBOL API that I've built a .NET object model around, so that all the unsafe code is in one place, all of the translation between the API's native data types and .NET data types is in one place, the primary methods return and accept DataSets, and so on.
This was immensely difficult to do right, and there are still some defects in it that I know about. It's not terrifically efficient either, with all the marshalling that goes on. But on the other hand, I can build a DataGridView that round-trips data to a 15-year-old application which persists its data in Btrieve (!) in about half an hour, and it works. When customers come to me with projects, my estimates are in days and weeks rather than months and years.
A: As a parallel to what Josh Segall said, I would say comment the hell out of it. I've worked on several very large legacy systems that got dumped in my lap, and I found the biggest problem was keeping track of what I already learned about a particular section of code. Once I started placing notes as I go, including "To Do" notes, I stopped re-figuring out what I already figured out. Then I could focus on how those code segments flow and interact.
A: I would say just leave it alone for the most part. If it's not broken then don't fix it. If it is broken then go ahead and fix and improve the portion of the code that is broken and its immediately surrounding code. You can use the pain of the bug or sorely missing feature to justify the effort and expense of improving that part.
I would not recommend any wholesale kind of rewrite, refactor, reformat, or putting in of unit tests that is not guided by actual business or end-user need.
If you do get the opportunity to fix something, then do it right (the chance of doing it right the first time might have already passed, but since you are touching that part again might as well do it right time around) and this includes all the items you mentioned.
So in summary, there's no single or just a few things that you should do. You should do it all but in small portions and in an opportunistic manner.
A: Late to the party, but the following may be worth doing where a function/method is used or referenced often:
*
*Local variables often tend to be poorly named in legacy code (often owing to their scope expanding when a method is modified, and not being updated to reflect this). Renaming these in line with their actual purpose can help clarify legacy code.
*Even just laying out the method slightly differently can work wonders - for instance, putting all the clauses of an if on one line.
*There might be stale/confusing code comments there already. Remove them if they're not needed, or amend them if you absolutely have to. (Of course, I'm not advocating removal of useful comments, just those that are a hindrance.)
These might not have the massive headline impact you're looking for, but they are low risk, particularly if the code can't be unit tested.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: NHibernate + JSON/Ajax Parent/Child Relationships? Why this isn't working? I have a typical parent/child relationship. Writing news in .NET and adding children works fine, NHibernate plays nice and adds the correct relationships.
However, when passing a JSON object from the client to some method that serializes my JSON into the .NET representation, NHibernate seems to get confused. It comes up with the correct query to add the parent (and assigns a new guid for Id); however, it doesn't associate that parent id with the children objects in the SQL it tries to execute. I came up with a quick and dirty hack, which I list below - but I was wondering, is there something I'm missing here?
IList<BackerEntry> backersTemp = new List<BackerEntry>();
foreach (BackerEntry backerEntry in jsonBackerEntity.BackerEntries)
{
backersTemp.Add(backerEntry);
}
jsonBackerEntity.BackerEntries.Clear();
foreach (BackerEntry backerEntry in backersTemp)
{
jsonBackerEntity.AddChild(backerEntry);
}
Doing it that way is the only way I can seem to get NHibernate to see that these children really belong to this parent. Inside my AddChild method looks like this:
public virtual void AddChild(BackerEntry backerEntry)
{
if (backerEntry.Backer != null)
{
backerEntry.Backer.BackerEntries.Remove(backerEntry);
}
backerEntry.Backer = this;
this.BackerEntries.Add(backerEntry);
}
EDIT: I think I may have just realized why - Probably because I am not sending back the Parent property of the child in the JSON. I'm not even sure if that'd be possible, due to the circular nature of the two. Child has a parent (who in json has a child who is the original child who has a parent, etc)... Any ideas?
A: Before calling saveOrUpdate with Hibernate, you must ensure that all your child objects reference the same in-memory parent object. Review the mappings, and keep in mind that every row in the database is one and only one in-memory object; make sure that holds for you, and reset the parent references to fit.
Also, you could try the merge method, which is more flexible.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Help improve this INI parsing code This is something simple I came up with for this question. I'm not entirely happy with it and I saw it as a chance to help improve my use of STL and streams based programming.
std::wifstream file(L"\\Windows\\myini.ini");
if (file)
{
bool section=false;
while (!file.eof())
{
std::wstring line;
std::getline(file, line);
if (line.empty()) continue;
switch (line[0])
{
// new header
case L'[':
{
std::wstring header;
size_t pos=line.find(L']');
if (pos!=std::wstring::npos)
{
header=line.substr(1, pos-1);
if (header==L"Section")
section=true;
else
section=false;
}
}
break;
// comments
case L';':
case L' ':
case L'#':
break;
// var=value
default:
{
if (!section) continue;
// what if the name = value does not have white space?
// what if the value is enclosed in quotes?
std::wstring name, dummy, value;
std::wistringstream lineStm(line);
lineStm >> name >> dummy;
ws(lineStm);
WCHAR _value[256];
lineStm.getline(_value, ELEMENTS(_value));
value=_value;
}
}
}
}
How would you improve this? Please do not recommend alternative libraries - I just want a simple method for parsing out some config strings from an INI file.
A:
// what if the name = value does not have white space?
// what if the value is enclosed in quotes?
I would use boost::regex to match for every different type of element, something like:
boost::smatch matches;
boost::regex name_value("(\\S+)\\s*=\\s*(\\S+)");
if(boost::regex_match(line, matches, name_value))
{
name = matches[1];
value = matches[2];
}
the regular expressions might need some tweaking.
I would also replace the stream.getline with std::getline, getting rid of the static char array.
A: This:
for (size_t i=1; i<line.length(); i++)
{
if (line[i]!=L']')
header.push_back(line[i]);
else
break;
}
should be simplified by a call to wstrchr, wcschr, WSTRCHR, or something else, depending on what platform you are on.
A:
// how to get a line into a string in one go?
Use the (nonmember) getline function from the standard string header.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Should I be extending this class? (PHP) I'm creating an ORM in PHP, and I've got a class 'ORM' which basically creates an object corresponding to a database table (I'm aiming for similar to/same functionality as an ActiveRecord pattern.) ORM itself extends 'Database', which sets up the database connection.
So, I can call: $c = new Customer();
$c->name = 'John Smith';
$c->save();
The ORM class provides this functionality (sets up the class properties, provides save(), find(), findAll() etc. methods), and Customer extends ORM. However, in the future I may be wanting to add extra public methods to Customer (or any other model I create), so should this be extending ORM or not?
I know I haven't provided much information here, but hopefully this is understandable on a vague explanation, as opposed to posting up 300+ lines of code.
A: I agree with the other answers here - put the additional methods into a descendant class. I'd also add an asterisk to that though: each time you extend the class with extra methods, think about what you are trying to achieve with the extension, and think about whether or not it can be generalised and worked back into the parent class. For example:
// Customer.class.php
function getByName($name) {
// SELECT * FROM `customer` WHERE `name` = $name
}
// ** this could instead be written as: **
// ORM.class.php
function getByField($field, $value) {
// SELECT * FROM `$this->table` WHERE `$field` = $value
}
A: You're certainly thinking correctly to put your business logic in a new class outside your 'ORM'. For me, instead simply extending the ORM-class, I'd rather encapsulate it with a new, value object class to provide an additional degree of freedom from your database design to free you up to think of the class as a pure business object.
A: Nope. You should use composition instead of inheritance. See the following example:
class Customer {
public $name;
public function save() {
$orm = new ORM('customers', 'id'); // table name and primary key
$orm->name = $this->name;
$orm->save();
}
}
And ORM class should not extend Database. Composition again is best suited in this use case.
A: Yes, place your business logic in a descendant class. This is a very common pattern seen in most Data Access Layers generation frameworks.
A: You should absolutely extend the ORM class. Different things should be objects of different classes. Customers are very different from Products, and to support both in a single ORM class would be unneeded bloat and completely defeat the purpose of OOP.
Another nice thing to do is to add hooks for before save, after save, etc. These give you more flexibility as your ORM extending classes become more diverse.
A: Given my limited knowledge of PHP I'm not sure if this is related, but if you're trying to create many business objects this might be an incredibly time consuming process. Perhaps you should consider frameworks such as CakePHP and others like it. This is nice if you're still in the process of creating your business logic.
A: I have solved it like this in my Pork.dbObject. Make sure to check it out and snag some of the braincrunching I already did :P
class Poll extends dbObject // dbObject is my ORM. Poll can extend it so it gets all properties.
{
function __construct($ID=false)
{
$this->__setupDatabase('polls', // db table
array('ID_Poll' => 'ID', // db field => object property
'strPollQuestion' => 'strpollquestion',
'datPublished' => 'datpublished',
'datCloseDate' => 'datclosedate',
'enmClosed' => 'enmclosed',
'enmGoedgekeurd' => 'enmgoedgekeurd'),
'ID_Poll', // primary db key
$ID); // primary key value
$this->addRelation('Pollitem'); //Connect PollItem to Poll 1;1
$this->addRelation('Pollvote', 'PollUser'); // connect pollVote via PollUser (many:many)
}
function Display()
{
// do your displayíng for poll here:
$pollItems = $this->Find("PollItem"); // find all poll items
$alreadyvoted = $this->Find("PollVote", array("IP"=>$_SERVER['REMOTE_ADDR'])); // find all votes for current ip
}
Note that this way, any database or ORM functionality is abstracted away from the Poll object. It doesn't need to know. Just the setupdatabase to hook up the fields / mappings. and the addRelation to hook up the relations to other dbObjects.
Also, even the dbObject class doesn't know much about SQL. Select / join queries are built by a special QueryBuilder object.
A: You're definitely thinking along the right lines with inheritance here.
If you're building an ORM just for the sake of building one (or because you don't like the way others handle things) then go for it, otherwise you might look at a prebuilt ORM that can generate most of your code straight from your database schema. It'll save you boatloads of time. CoughPHP is currently my favorite.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can SQLExpress 2005 and 2008 be installed on same machine without issue? I would like to install SQLExpress2005 as an instance "SQLExpress"
and install SQLExpress2008 as an "SQLExpress2008" instance.
Is there any problem with doing this on the same machine?
A: As long as you give them distinct names, there shouldn't be any problems. The binaries are stored in directories based on version, and you can (and should) point their filegroups at different locations. This should also apply to the full versions.
A: I'm pretty sure you can with the full version of SQL, but you need to do it by installing named instances. This could be an issue with SQL Express where it doesn't give you that option.
A: Works on my machine :) Just avoid instance conflicts. Have at most one default instance and the named instance(s) should be unique.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Powershell script to download file, having trouble setting up a secure connection I'm making an automated script to read a list from a site posting the latest compiled code. That's the part I've already figured out. The next part of the script is to grab that compiled code from a server with an Untrusted Cert.
This is how I'm going about grabbing the file:
$web = new-object System.Net.WebClient
$web.DownloadFile("https://uri/file.msi", "installer.msi")
Then I get the following error:
Exception calling "DownloadFile" with "2" argument(s): "The underlying
connection was closed: Could not establish trust relationship for the
SSL/TLS secure channel."
I know I'm missing something, but I can't get the correct way to search for it.
A: Brad is correct, but notice that PowerShell V1 doesn't really have native support for delegates, which you'll need in this specific case. I believe this should get you around that limitation (in fact the scenario you describe is exactly one of the examples used).
A: You need to write a callback handler for ServicePointManager.ServerCertificateValidationCallback.
A: If you are using powershell and face this error. Use command:
[Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12
before downloading the package. It forces PS to use TLS 1.2.
The reason for the failure could be the website you are trying to download from has disabled the support for TLS 1.0 which PS uses by default.
A: The simplest PowerShell implementation of ServerCertificateValidationCallback is a script block that always returns true. This works for me in PowerShell version 5.1; haven't tested it on other versions.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
See Bhargav Shukla's blog
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Search Engines Inexact Counting (about xxx results) When you search in Google (I'm almost sure that AltaVista did the same thing) it says "Results 1-10 of about xxxx"...
This has always amazed me... What does it mean "about"?
How can they count roughly?
I do understand why they can't come up with a precise figure in a reasonable time, but how do they even reach this "approximate" one?
I'm sure there's a lot of theory behind this one that I missed...
A: Most likely it's similar to the sort of estimated row counts used by most SQL systems in their query planning; a number of rows in the table (known exactly as of the last time statistics were collected, but generally not up-to-date), multiplied by an estimated selectivity (usually based on a sort of statistical distribution model calculated by sampling some small subset of rows).
The PostgreSQL manual has a section on statistics used by the planner that is fairly informative, at least if you follow the links out to pg_stats and various other sections. I'm sure that doesn't really describe what google does, but it at least shows one model where you could get the first N rows and an estimate of how many more there might be.
A: Not relevant to your question, but it reminds me of a little joke a friend of mine made when doing a simple ego-search (and don't tell me you've never Googled your name). He said something like
"Wow, about 5,000 results in just 0.22 seconds! Now, imagine how many results this is in one minute, one hour, one day!"
A: I imagine the estimate is based on statistics. They aren't going to count all of the relevant page matches, so what they (I would) do is work out roughly what percentage of pages would match the query, based on some heuristic, and then use that as the basis for the count.
One heuristic might be to do a sample count - take a random sample of 1000 or so pages and see what percentage matched. It wouldn't take too many in the sample to get a statistically significant answer.
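A rough simulation of that sample-count heuristic (the numbers and the fixed match rate below are invented for illustration - a real engine would estimate from index statistics, not from a known rate):

```java
import java.util.Random;

class CountEstimator {
    // Match a random sample, then scale the sample's hit rate
    // up to the size of the whole corpus.
    static long estimate(long corpusSize, int sampleSize,
                         double trueMatchRate, long seed) {
        Random rnd = new Random(seed);
        int hits = 0;
        for (int i = 0; i < sampleSize; i++) {
            if (rnd.nextDouble() < trueMatchRate) {
                hits++; // simulated "this page matches the query"
            }
        }
        return Math.round(corpusSize * (double) hits / sampleSize);
    }

    public static void main(String[] args) {
        // With a 10% true match rate over 1,000,000 pages, a sample of
        // 1000 yields an estimate close to 100,000.
        System.out.println(estimate(1_000_000L, 1000, 0.10, 42L));
    }
}
```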
A: One thing that hasn't been mentioned yet is deduplication. Some search engines (I'm not sure exactly how Google in particular does it) will use heuristics to try and decide if two different URLs contain the same (or extremely similar) content, and are thus duplicate results.
If there are 156 unique URLs, but 9 of those have been marked as duplicates of other results, it is simpler to say "about 150 results" rather than something like "156 results which contains 147 unique results and 9 duplicates".
A: Returning an exact number of results is not worth the overhead to accurately calculate. Since there's not much of a value add from knowing there were 1,004,345 results rather than 'about 1,000,000', it's more important from an end-user experience perspective to return the results faster than to spend the additional time calculating the exact total.
From Google themselves:
"Google's calculation of the total number of search results is an estimate. We understand that a ballpark figure is valuable, and by providing an estimate rather than an exact account, we can return quality search results faster."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What #defines are set up by Xcode when compiling for iPhone I'm writing some semi-portable code and want to be able to detect when I'm compiling for iPhone. So I want something like #ifdef IPHONE_SDK....
Presumably Xcode defines something, but I can't see anything under project properties, and Google isn't much help.
A: To look at all the defined macros, add this to the "Other C Flags" of your build config:
-g3 -save-temps -dD
You will get some build errors, but the compiler will dump all the defines into .mi files in your project's root directory. You can use grep to look at them, for example:
grep define main.mi
When you're done, don't forget to remove these options from the build setting.
A: The answers to this question aren't quite correct. The question of the platform and hardware vs Simulator is orthogonal.
Do not use architecture as a shortcut for platform or simulator detection! This kind of sloppy thinking has caused many, many programmers a great deal of heartburn and headache over the years.
Here is an ASCII chart of the conditionals. The names don't necessarily make sense for historical reasons:
+--------------------------------------+
| TARGET_OS_MAC |
| +---+ +---------------------------+ |
| | | | TARGET_OS_IPHONE | |
| |OSX| | +-----+ +----+ +-------+ | |
| | | | | IOS | | TV | | WATCH | | |
| | | | +-----+ +----+ +-------+ | |
| +---+ +---------------------------+ |
+--------------------------------------+
Devices: TARGET_OS_EMBEDDED
Simulators: TARGET_OS_SIMULATOR
TARGET_OS_MAC is true for all Apple platforms.
TARGET_OS_OSX is true only for macOS
TARGET_OS_IPHONE is true for iOS, watchOS, and tvOS (devices & simulators)
TARGET_OS_IOS is true only for iOS (devices & simulators)
TARGET_OS_WATCH is true only for watchOS (devices & simulators)
TARGET_OS_TV is true only for tvOS (devices & simulators)
TARGET_OS_EMBEDDED is true only for iOS/watchOS/tvOS hardware
TARGET_OS_SIMULATOR is true only for the Simulator.
I'll also note that you can conditionalize settings in xcconfig files by platform:
//macOS only
SOME_SETTING[sdk=macosx] = ...
//iOS (device & simulator)
SOME_SETTING[sdk=iphone*] = ...
//iOS (device)
SOME_SETTING[sdk=iphoneos*] = ...
//iOS (simulator)
SOME_SETTING[sdk=iphonesimulator*] = ...
//watchOS (device & simulator)
SOME_SETTING[sdk=watch*] = ...
//watchOS (device)
SOME_SETTING[sdk=watchos*] = ...
//watchOS (simulator)
SOME_SETTING[sdk=watchsimulator*] = ...
//tvOS (device & simulator)
SOME_SETTING[sdk=appletv*] = ...
//tvOS (device)
SOME_SETTING[sdk=appletvos*] = ...
//tvOS (simulator)
SOME_SETTING[sdk=appletvsimulator*] = ...
// iOS, tvOS, or watchOS Simulator
SOME_SETTING[sdk=embeddedsimulator*] = ...
A: It's in the SDK docs under "Compiling source code conditionally"
The relevant definitions are TARGET_OS_IPHONE (and the deprecated TARGET_IPHONE_SIMULATOR), which are defined in /usr/include/TargetConditionals.h within the iOS framework. On earlier versions of the toolchain, you had to write:
#include "TargetConditionals.h"
but this is no longer necessary on the current (Xcode 6/iOS 8) toolchain.
So, for example, if you want to only compile a block of code if you are building for the device, then you should do
#if !(TARGET_OS_SIMULATOR)
...
#endif
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: Is writing "this." before instance variable and methods good or bad style? One of my nasty (?) programming habits in C++ and Java is to always precede calls or accesses to members with a this. For example: this.process(this.event).
A few of my students commented on this, and I'm wondering if I am teaching bad habits.
My rationale is:
*
*Makes code more readable — Easier to distinguish fields from local variables.
*Makes it easier to distinguish standard calls from static calls (especially in Java)
*Makes me remember that this call (unless the target is final) could end up on a different target, for example in an overriding version in a subclass.
Obviously, this has zero impact on the compiled program, it's just readability. So am I making it more or less readable?
Note: I turned it into a CW since there really isn't a correct answer.
A: Sometimes I do like writing classes like this:
class SomeClass {
    int x;
    int y;

    SomeClass(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
This makes it easier to tell what argument is setting what member.
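A runnable sketch of why the qualifier matters in that constructor: without this, the parameter shadows the field and the assignment is a silent no-op (the class and field names here are invented for illustration):

```java
public class Main {
    static class Unqualified {
        int x;
        Unqualified(int x) { x = x; }    // parameter shadows the field: the field stays 0
    }

    static class Qualified {
        int x;
        Qualified(int x) { this.x = x; } // "this.x" names the field explicitly
    }

    public static void main(String[] args) {
        System.out.println(new Unqualified(5).x); // 0
        System.out.println(new Qualified(5).x);   // 5
    }
}
```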
A: More readable, I think. I do it your way for exactly the same reasons.
A: I find that less is more. The more needlessly verbose junk you have in your code, the more problems people are going to have maintaining it. That said, having clear and consistent behavior is also important.
A: I think it's less readable, especially in environments where fields are highlighted differently from local variables. The only time I want to see "this" is when it is required, for example:
this.fieldName = fieldName
When assigning the field.
That said, if you need some way to differentiate fields for some reason, I prefer "this.fieldName" to other conventions, like "m_fieldName" or "_fieldName"
A: In my opinion you are making it more readable. It lets potential future troubleshooters know for a fact where the function you are calling is.
Second, it is not impossible for a global function, or one pulled in from some namespace by a using directive, to have exactly the same name and cause a conflict. So if there is a conflict, the original code author will know for certain which function they are calling.
Granted, if there are namespace conflicts, some other rule of clean coding is being broken, but nobody is perfect. So I feel that any rule that does not impede productivity, has the potential to reduce errors (however minuscule that potential), and could make a future troubleshooter's job easier is a good rule.
A: There is a good technical reason to prefer to use or avoid this - the two are not always equivalent.
Consider the following code:
int f();
template <typename T>
struct A
{
int f();
};
template <typename T>
struct B : A<T>
{
int g()
{
return f();
return this->f();
}
};
Now, there are two f() calls in B<T>::g(). One would expect it to call A<T>::f(), but only the second one will. The first will call ::f(). The reason behind this is that because A<T> is dependent on T, the lookup does not normally find it. this, by being a pointer to B<T>, is also dependent on T however, so if you use it, the lookup will be delayed until after B<T> is instantiated.
Note that this behavior may not be present on some compilers (specifically, MSVC) which do not implement two-phase name lookup, but nonetheless it is the correct behavior.
A: Python folks do it all the time and almost all of them prefer it. They spell it 'self' instead of 'this'. There are ways around putting an explicit 'self' in, but the consensus is that an explicit 'self' is essential to understanding the class method.
A: I have to join the 'include this' camp here; I don't do it consistently, but from a maintenance standpoint the benefits are obvious. If the maintainer doesn't use an IDE for whatever reason and therefore doesn't have member fields and methods specially highlighted, then they're in for a world of scrolling pain.
A: I use this for at least two reasons:
Fallacious reasons
I like to have consistent code styles when coding in C++, C, Java, C# or JavaScript. I keep myself using the same coding style, mostly inspired from java, but inspired by the other languages.
I also like to keep coherence inside my code in one language. I use typename for template type parameters, instead of class, and I never mix the two. This means I hate having to add this in one place while avoiding it everywhere else.
My code is rather verbose. My method names can be long (or not). But they always use full names, and never compacted names (i.e. getNumber(), not getNbr()).
These reasons are not good enough from a technical viewpoint, but still, this is my coding way, and even if they do no (much) good, they do no (much) evil. In fact, in the codebase I work on there are more than enough historical anti-patterns written by others to let them question my coding style.
By the time they learn to write "exception" or "class", I'll think about all this again...
Real reasons
While I appreciate the work of the compiler, there are some ambiguities I'd like to remove.
For example, I (almost) never use using namespace MyNamespace. I either use the full namespace or a three-letter alias. I don't like ambiguities, and don't like it when the compiler suddenly tells me there are two functions "print" colliding together.
This is the reason I prefix Win32 functions by the global namespace, i.e. always write ::GetLastError() instead of GetLastError().
This goes the same way for this. When I use this, I consciously restrict the freedom of the compiler to search for an alternative symbol if it did not find the real one. This means methods, as well as member variables.
This could apparently be used as an argument against method overloading, perhaps. But this would only be apparent. If I write overloaded methods, I want the compiler to resolve the ambiguity at compile time. If I do not write the this keyword, it's not because I want the compiler to use another symbol than the one I had in mind (like a function instead of a method, or whatever).
My Conclusion?
All in all, this problem is mostly one of style, with some genuine technical reasons. I wouldn't wish ill on someone for not writing this.
As for Bruce Eckel's quote from his "Thinking Java"... I was not really impressed by the biased comparisons Java/C++ he keeps doing in his book (and the absence of comparison with C#, strangely), so his personal viewpoint about this, done in a footnote... Well...
A: This is a very subjective thing. Microsoft StyleCop has a rule requiring the this. qualifier (though it's C# related). Some people use underscore, some use weird hungarian notations. I personally qualify members with this. even if it's not explicitly required to avoid confusion, because there are cases when it can make one's code a bit more readable.
You may also want to check out this question:
What kind of prefix do you use for member variables?
A: I'd never seen this style until I joined my current employer. The first time I saw it I thought "this idiot has no idea and Java/OO languages generally are not his strong suit", but it turns out that it's a regularly-occurring affliction here and is mandatory style on a couple of projects, although these projects also use the
if (0 == someValue)
{
....
}
approach to doing conditionals, i.e. placing the constant first in the test so that you don't run the risk of writing
if (someValue = 0)
by accident - a common problem for C coders who ignore their compiler warnings. Thing is, in Java the above is simply invalid code and will be chucked out by the compiler, so they're actually making their code less intuitive for no benefit whatsoever.
For me, therefore, far from showing "the author is coding with a dedicated thought process", these things strike me as more likely to come from the kind of person who just sticks to the rules someone else told them once without questioning them or knowing the reasons for the rules in the first place (and therefore where the rules shouldn't apply).
The reasons I've heard mainly boil down to "it's best practice" usually citing Josh Bloch's Effective Java which has a huge influence here. In fact, however, Bloch doesn't even use it where even I think he probably should have to aid readability! Once again, it seems to be more the kind of thing being done by people who are told to do it and don't know why!
Personally, I'm inclined to agree more with what Bruce Eckel says in Thinking in Java (3rd and 4th editions):
'Some people will obsessively put this in front of every method call and field reference, arguing that it makes it "clearer and more explicit." Don't do it. There's a reason that we use high-level languages: They do things for us. If you put this in when it's not necessary, you will confuse and annoy everyone who reads your code, since all the rest of the code they've read won't use this everywhere. People expect this to be used only when it is necessary. Following a consistent and straightforward coding style saves time and money.'
footnote, p169, Thinking in Java, 4th edition
Quite. Less is more, people.
A: 3 Reasons ( Nomex suit ON)
1) Standardization
2) Readability
3) IDE
1) The biggie: it's not part of the Sun Java code style.
(No need to have any other styles for Java.)
So don't do it ( in Java.)
This is part of the blue collar Java thing: it's always the same everywhere.
2) Readability
If you want this.to have this.this in front of every this.other this.word; do you really this.think it improves this.readability?
If there are too many methods or variable in a class for you to know if it's a member or not... refactor.
You only have member variables, and you don't have global variables or functions in Java. (In other languages you can have pointers, array overruns, unchecked exceptions and global variables too; enjoy.)
If you want to tell whether the method is in your class's parent class or not...
remember to put @Override on your declarations and let the compiler tell you if you don't override correctly. super.xxxx() is standard style in Java if you want to call a parent method, otherwise leave it out.
3) IDE
Anyone writing code without an IDE that understands the language and gives an outline in the sidebar can do so on their own nickel. If it ain't language-sensitive, you're trapped in the 1950s. Without a GUI: trapped in the '50s.
Any decent IDE or editor will tell you where a function/variable is from. Even the original VI (<64kb) will do this with CTags. There is just no excuse for using crappy tools. Good ones are given away for free!
A: Not a bad habit at all. I don't do it myself, but it's always a plus when I notice that someone else does it in a code review. It's a sign of quality and readability that shows the author is coding with a dedicated thought process, not just hacking away.
A: I would argue that what matters most is consistency. There are reasonable arguments for and against, so it's mostly a matter of taste when considering which approach to take.
A: "Readability"
I have found "this" useful, especially when not using an IDE (small quick programs).
When my class is large enough to delegate some methods to a new class, replacing "this" with "otherRef" is very easy with even the simplest text editor.
ie
// Before
this.calculateMass();
this.performDangerAction();
this.var = ...
this.other = ...
After the "refactor":
// After
this.calculateMass();
riskDouble.performDangerAction();
riskDouble.setVar(...);
this.other = ...
When I use an IDE I don't usually use it. But I think it makes you think in a more OO way than just calling the method.
class Employee {
void someMethod(){
// "this" shows something's odd here.
this.openConnection(); // uh? Why does an employee have a connection???
// After refactoring, time to delegate:
this.database.connect(); // mmhh an employee might have a DB.. well..
}
... etc....
}
Most important, as always, is that whatever the development team decides about using it, the decision is respected.
A: From a .Net perspective, some of the code analysis tools I used saw the "this" and immediately concluded the method could not be static. It may be something to test with Java but if it does the same there, you could be missing some performance enhancements.
A: I used to always use this... Then a coworker pointed out to me that in general we strive to reduce unnecessary code, so shouldn't that rule apply here as well?
A: If you are going to remove the need to add this. in front of member variables, static analysis tools such as Checkstyle can be invaluable in detecting cases where local variables or parameters hide fields. By removing such cases you can remove the need to use this in the first place. That being said, I prefer to ignore these warnings in the case of constructors and setters rather than having to come up with new names for the method parameters :).
With respect to static variables I find that most decent IDEs will highlight these so that you can tell them apart. It also pays to use a naming convention for things like static constants. Static analysis tools can help here by enforcing the naming conventions.
I find that there is seldom any confusion with static methods as the method signatures are often different enough to make any further differentiation unnecessary.
A: I prefer the local assignment mode described above, but not for local method calls. And I agree with the 'consistency is the most important aspect' sentiments. I find this.something more readable, but I find consistent coding even more readable.
public void setFoo(String foo) {
this.foo = foo; //member assignment
}
public doSomething() {
doThat(); //member method
}
I have colleagues who prefer:
public void setFoo(String foo) {
_foo = foo;
}
A: less readable unless of course your students are still on green screen terminals like the students here... the elite have syntax highighting.
i just heard a rumour also that they have refactoring tools too, which means you don't need "this." for search and replace, and they can remove those pesky redundant thisses with a single keypress. apparently these tools can even split up methods so they're nice and short like they should have been to begin with, most of the time, and then it's obvious even to a green-screener which vars are fields.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: Image processing server I'm looking for a free, preferably open source, http image processing server. I.e. I would send it a request like this:
http://myimageserver/rotate?url=http%3A%2F%2Fstackoverflow.com%2FContent%2FImg%2Fstackoverflow-logo-250.png&angle=90
and it would return that image rotated. Features wanted:
*
*Server-side caching
*Several operations/effects (like scaling, watermarking, etc). The more the merrier.
*POST support to supply the image (instead of the server GETting it).
*Different output formats (PNG, JPEG, etc).
*Batch operations
It would be something like this, but free and less SOAPy. Is there anything like this or am I asking too much?
A: The ImageResizing.Net library is both a .NET library and an IIS module. It's an image server or an image library, whichever you prefer.
It's open-source, under an MIT-style license, and is supported by plugins.
It has excellent performance, and supports 3 pipelines: GDI+, Windows Imaging Components, and FreeImage. WIC is the fastest, and can do some operations in under 15ms. It supports disk caching (for up to 1 million files), and is CDN compatible (Amazon CloudFront is ideal).
It has a very human-friendly URL syntax. Ex. image.jpg?width=100&height=100&mode=crop.
It supports resizing, cropping, padding, rotation, PNG/GIF/JPG output, borders, watermarking, remote URLs, Amazon S3, MS SQL, Amazon CloudFront, batch operations, image filters, disk caching, and lots of other cool stuff, like seam carving.
It doesn't support POST delivery of images, but that's easy to do with a plugin. And don't you typically want to store images that are delivered via POST instead of just replying to the POST command with the result?
[Disclosure: I'm the author of ImageResizer]
A: Apache::ImageMagick, you install that - and also Apache along with mod_perl. This is the standard setup, check docs, there are alternatives. This is probably as turn-key as it gets.
Sample conf:
<Location /img>
PerlFixupHandler Apache::ImageMagick
PerlSetVar AIMCacheDir /tmp/your/cache/directory
</Location>
Your requests could look like:
http://domain/img/test.gif/Frame?color=red
More docs are here!
A: You can use LibGD or ImageMagick to build a service like that fairly easily. They both have many language bindings.
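As a sketch of the core operation such a service would perform, here is a minimal 90-degree clockwise rotation using only the JDK's Java 2D API. A real server would wrap this in an HTTP handler and add caching, format negotiation, and the other requested features; the class and method names are invented for illustration:

```java
import java.awt.image.BufferedImage;

public class Main {
    // Rotate 90 degrees clockwise: pixel (x, y) moves to (h - 1 - y, x),
    // so a w-by-h image becomes h-by-w.
    static BufferedImage rotate90(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage dst = new BufferedImage(h, w, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst.setRGB(h - 1 - y, x, src.getRGB(x, y));
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(3, 2, BufferedImage.TYPE_INT_ARGB);
        img.setRGB(0, 0, 0xFFFF0000); // opaque red at the top-left corner
        BufferedImage out = rotate90(img);
        System.out.println(out.getWidth() + "x" + out.getHeight()); // 2x3
    }
}
```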
A: While not an out of the box solution, check out ImageMagick. There is a perl interface for it, so combine that with some fairly simple cgi scripts, or mod_perl and it should do the trick.
A: You could make this with Google App Engine -- they provide image processing routines and will host for free within some bounds.
Here are some examples of people doing things like this already
http://appgallery.appspot.com/results?q=image
A: I found this product, it seems to match my requirements
A: Try Nginx image processing server with OpenResty and Lua. It uses ImageMagick C API. Openresty comes with LuaJIT. It has amazing performance in terms of speed. Checkout some benchmarks for LuaJIT and Openresty.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Where to put common writable application files? I thought that CSIDL_COMMON_APPDATA\company\product should be the place to put files that are common for all users of the application and that the application can modify, however, on Vista this is a read-only location, unless modified by the installer (as per MSDN - http://msdn.microsoft.com/en-us/library/ms995853.aspx), so... what's best? Modify the location's security settings to allow writing or use CSIDL_COMMON_DOCUMENTS\company\product instead? Maybe there's a third option?
Also, is there an "official" Microsoft recommendation on this somewhere?
A: Here's a simple example showing how to create files and folders with Read/Write permission for all users in the Common App Data folder (CSIDL_COMMON_APPDATA). Any user can run this code to give all other users permission to write to the files & folders:
#include <windows.h>
#include <shlobj.h>
#pragma comment(lib, "shell32.lib")
// for PathAppend
#include <Shlwapi.h>
#pragma comment(lib, "Shlwapi.lib")
#include <stdio.h>
#include <aclapi.h>
#include <tchar.h>
#pragma comment(lib, "advapi32.lib")
#include <iostream>
#include <fstream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
DWORD dwRes, dwDisposition;
PSID pEveryoneSID = NULL;
PACL pACL = NULL;
PSECURITY_DESCRIPTOR pSD = NULL;
EXPLICIT_ACCESS ea;
SID_IDENTIFIER_AUTHORITY SIDAuthWorld = SECURITY_WORLD_SID_AUTHORITY;
SID_IDENTIFIER_AUTHORITY SIDAuthNT = SECURITY_NT_AUTHORITY;
SECURITY_ATTRIBUTES sa;
// Create a well-known SID for the Everyone group.
if (!AllocateAndInitializeSid(&SIDAuthWorld, 1,
SECURITY_WORLD_RID, 0, 0, 0, 0, 0, 0, 0,
&pEveryoneSID))
{
_tprintf(_T("AllocateAndInitializeSid Error %u\n"), GetLastError());
goto Cleanup;
}
// Initialize an EXPLICIT_ACCESS structure for an ACE.
// The ACE will allow Everyone access to files & folders you create.
ZeroMemory(&ea, sizeof(EXPLICIT_ACCESS));
ea.grfAccessPermissions = 0xFFFFFFFF;
ea.grfAccessMode = SET_ACCESS;
// both folders & files will inherit this ACE
ea.grfInheritance= CONTAINER_INHERIT_ACE|OBJECT_INHERIT_ACE;
ea.Trustee.TrusteeForm = TRUSTEE_IS_SID;
ea.Trustee.TrusteeType = TRUSTEE_IS_WELL_KNOWN_GROUP;
ea.Trustee.ptstrName = (LPTSTR) pEveryoneSID;
// Create a new ACL that contains the new ACEs.
dwRes = SetEntriesInAcl(1, &ea, NULL, &pACL);
if (ERROR_SUCCESS != dwRes)
{
_tprintf(_T("SetEntriesInAcl Error %u\n"), dwRes);
goto Cleanup;
}
// Initialize a security descriptor.
pSD = (PSECURITY_DESCRIPTOR) LocalAlloc(LPTR, SECURITY_DESCRIPTOR_MIN_LENGTH);
if (NULL == pSD)
{
_tprintf(_T("LocalAlloc Error %u\n"), GetLastError());
goto Cleanup;
}
if (!InitializeSecurityDescriptor(pSD, SECURITY_DESCRIPTOR_REVISION))
{
_tprintf(_T("InitializeSecurityDescriptor Error %u\n"), GetLastError());
goto Cleanup;
}
// Add the ACL to the security descriptor.
if (!SetSecurityDescriptorDacl(pSD,
TRUE, // bDaclPresent flag
pACL,
FALSE)) // not a default DACL
{
_tprintf(_T("SetSecurityDescriptorDacl Error %u\n"), GetLastError());
goto Cleanup;
}
// Initialize a security attributes structure.
sa.nLength = sizeof(SECURITY_ATTRIBUTES);
sa.lpSecurityDescriptor = pSD;
sa.bInheritHandle = FALSE;
TCHAR szPath[MAX_PATH];
if (SUCCEEDED(SHGetFolderPath(NULL, CSIDL_COMMON_APPDATA|CSIDL_FLAG_CREATE, NULL, 0, szPath)))
{
PathAppend(szPath, TEXT("Your Shared Folder"));
if (!CreateDirectory(szPath, &sa)
&& GetLastError() != ERROR_ALREADY_EXISTS)
{
goto Cleanup;
}
PathAppend(szPath, TEXT("textitup.txt"));
HANDLE hFile = CreateFile(szPath, GENERIC_READ | GENERIC_WRITE, 0, &sa, CREATE_ALWAYS, 0, 0);
if (hFile == INVALID_HANDLE_VALUE)
goto Cleanup;
else
CloseHandle(hFile);
// the file now exists with the permissive DACL; write to it with a standard stream
ofstream fsOut;
fsOut.exceptions(ios::eofbit | ios::failbit | ios::badbit);
fsOut.open(szPath, ios::out | ios::binary | ios::trunc);
fsOut << "Hello world!\n";
fsOut.close();
}
Cleanup:
if (pEveryoneSID)
FreeSid(pEveryoneSID);
if (pACL)
LocalFree(pACL);
if (pSD)
LocalFree(pSD);
return 0;
}
A: I think this post may answer some questions, but it seems a difficult problem for many.
Apparently, CSIDL_COMMON_DOCUMENTS provides a common workaround
A: Modify just the security on a specific sub-directory of the AppData directory (this is from the link you provided):
CSIDL_COMMON_APPDATA This folder should be used for application data that is not user specific. For example, an application may store a spell check dictionary, a database of clip-art or a log file in the CSIDL_COMMON_APPDATA folder. This information will not roam and is available to anyone using the computer. By default, this location is read-only for normal (non-admin, non-power) Users. If an application requires normal Users to have write access to an application specific subdirectory of CSIDL_COMMON_APPDATA, then the application must explicitly modify the security on that sub-directory during application setup. The modified security must be documented in the Vendor Questionnaire.
A: Guidelines for Vista/UAC can be found here. Search that page for "CSIDL" and you'll find some "official" answers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: When creating a web control should you override OnLoad or implement Page_Load When you create a new web user control in visual studio it by default adds the Page_Load event. What is the advantage to using this rather than overriding the base OnLoad event on the control? Is it just that the Page_Load event fires before OnLoad?
A: The OnLoad method should be the place where the Load event is raised. I personally always try to handle the event unless I need to do extra processing around raising the event.
I recommend handling the event itself under normal circumstances.
A: You may find this article on the page lifecycle from Microsoft useful.
A: As you can see above, it does mostly come down to personal choice IF that choice is made knowledgeably. The best quick but solid overview I've seen is at http://weblogs.asp.net/infinitiesloop/archive/2008/03/24/onload-vs-page-load-vs-load-event.aspx
A: It's really just a matter of choice. To me it seems weird for an object to attach an event to itself, especially when there is a method you can override.
I think the ASP.NET team used events because that was the model for Global.asa in ASP, and to lower the bar for developers who don't understand inheritance and overriding virtual methods.
Overriding the method does require more knowledge about the page lifecycle, but there is nothing "wrong" with it.
A: Read the section called: "Binding Page Events" on the MSDN page titled: "ASP.NET Web Server Control Event Model" (link to the page)
There are some useful statements like these:
One disadvantage of the AutoEventWireup attribute is that it requires that the page event handlers have specific, predictable names. This limits your flexibility in how you name event handlers. Another disadvantage is that performance is adversely affected, because ASP.NET searches for methods at run-time. For a Web site with high traffic volumes, the impact on performance could be significant.
(AutoEventWireup flag turns on such methods like Page_Load)
A: Even though you're inheriting from UserControl, I think you should stay away from overriding the protected methods if you don't have to. The Page_Load is there to make it easier for you to add the code that's specific to your UserControl.
Only override OnLoad if you need absolute control over when(/if) the Load event is fired (which should be rare, IMO).
A: I think it's the same.
IMHO, with events you have a bit more flexibility, because you can attach more than one listener to your event!
A: I think there is one potentially significant difference between the two methods.
What I am referring to is the ability to have control over the execution sequence.
If you are overriding, you know when the base class's Load will take place because you are calling it. This provides more control, but is probably a bad thing, as many will argue.
If you use the event, you have no guarantee in terms of order of calls. This forces you to write a Load handler that is agnostic about what superclasses are doing during the Load phase. I think this would be the preferred approach, and maybe that is why the VS auto-generated code is this way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Automatically adding specified text at beginning of files in VS 2008 Is there a way to have Visual Studio 2008 automatically add heading information to files? For example, "Copyright 2008" or something along those lines. I've been digging through the options, but nothing seems to be jumping out at me.
A: I assume you'd like to modify the class file templates. They're in:
%ProgramFiles%\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\Code\1033
More specific details here
A: I found a better solution than modifying the template file directly. This utility allows you to create and save header/footer templates and apply them to entire source trees.
C# Header Designer from MSDN Code Gallery
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is it correct to use inheritance instead of name aliasing in c#? In other words, is it correct to use:
public class CustomerList : System.Collections.Generic.List<Customer>
{
/// supposed to be empty
}
instead of:
using CustomerList = System.Collections.Generic.List<Customer>
I'd rather use the first approach because I'd just define CustomerList once, and every time I needed a customer list I'd always use the same type. On the other hand, using the name aliasing approach not only forces me to have to redefine it everywhere, but also a different alias could be given every time someone wanted to use it (think of a big team), and thus cause the code to be less readable.
Please note that the intention in this case would never be to extend the class, just to create an alias.
A: Don't do it. When people read:
List<Customer>
they immediately understand it. When they read:
CustomerList
they have to go and figure out what a CustomerList is, and that makes your code harder to read. Unless you are the only one working on your codebase, writing readable code is a good idea.
A: I'd agree with not using an alias in that manner. Nobody in your team should be using aliases in the manner presented; it's not the reason aliasing was provided. Additionally, from the way generics work, there is only ever one List class no matter how many places you use it.
In addition to just declaring and using List<Customer>, you're going to eventually want to pass that list to something else. Avoid passing the concrete List<Customer> and instead pass an IList<Customer> or ICollection<Customer> as this will make those methods more resilient and easier to program against.
One day in the future, if you really do need a CustomerList collection class, you can implement ICollection<Customer> or IList<Customer> on it and continue to pass it to those methods without them changing or even knowing better.
A: Actually you shouldn't use either. The correct approach according to the framework design guidelines is to either use or inherit from System.Collections.ObjectModel.Collection<T> in public APIs (List<T> should only be used for internal implementation).
But with regards to the specific issue of naming, the recommendation appears to be to use the generic type name directly without aliasing unless you need to add functionality to the collection:
Do return Collection<T> from object
models to provide standard plain
vanilla collection API.
Do return a subclass of Collection<T>
from object models to provide
high-level collection API.
A: Using inheritance to do aliasing/typedefing has the problem of requiring you redefine the relevant constructors.
Since it will quickly become unreasonable to do that everywhere, it's probably best to avoid it for consistency's sake.
A: Well, unless you are adding some functionality to the base class, there is no point in creating a wrapper object. I would go with number two if you really need to, but why not just create a variable?
List<Customer> customerList = new List<Customer>();
A: This is one of those 'It depends' questions.
If what you need is a new class that behaves as a List of Customers in addition to your other requirements then the inheritance is the way.
If you just want to use a list of customers then use the variable.
A: If you're just trying to save on typing, then use the latter. You're not going to run into any bizarre inheritance issues that way.
If you actually want to expose a logically distinct collection type, then use the former - you can go back and add stuff to it then.
Personally, i would just use List<Customer> and call it a day.
A: I essentially agree with Ed. If you don't need to actually extend the functionality of the generic List construct, just use a generic List:
List<Customer> customerList = new List<Customer>();
If you do need to extend the functionality then typically you would be looking at inheritance.
The third possibility is where you need significantly changed functionality from the generic list construct, in which case you may want to simply inherit from IEnumerable. Doing so makes the class usable in enumerable operations (such as "foreach") but allows you to completely define all class behaviour.
A: One programmer's saving on typing could very well be the next programmer's maintenance nightmare. I'd say just type out the generic correctly, as so many here have said. It's cleaner and a more accurate description of your code's intent, and it will help the maintenance programmer. (Who might be you, six months and four new projects down the road!)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: REGEX: Grabbing everything until a specific word ex: <a><strike>example data in here</strike></a>
I want everything inside the a tag, to the end
/<a>([^<]*)<\/a>/
It works when there are no additional tags within the <a> tag, but what if there are?
I want to know if you can tell it to grab everything up to [^</a>] instead of [^<] only.
Doing it with /<a>(.*)<\/a>/ doesn't work well. Sometimes I get everything in the <a> tag and other times I get tons of lines included in that call.
A: /<a>(.*?)<\/a>/
should work. The ? makes it lazy, so it grabs as little as possible before matching the </a> part, while . means it will match any character until it finds </a>. If you want to be able to match across lines, you can use the following with preg_match:
/<a>(.*?)<\/a>/s
The "s" at the end puts the regular expression in "single line" mode, which means the . character matches all characters including new lines. See other useful modifiers
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Outlook attachments.Add() is not showing in mail body I'm creating a new mail item, in C# VS-2008 outlook 2007, and attaching a file. The first issue is that I don't see an attachment area under the subject line showing the attachment. If I send the e-mail its properties show that there is an attachment and the e-mail size has grown by the attachment amount. I just cannot see it or extract the attachment.
Here is the code I'm using:
Outlook.MailItem mailItem = (Outlook.MailItem)this.Application.CreateItem(Outlook.OlItemType.olMailItem);
attachments.Add(ReleaseForm.ZipFile, Outlook.OlAttachmentType.olByValue, 0, "DisplayName");
I was expecting the "DisplayName" part to show as the attachment name, or perhaps I should be using the filename.
I don't call .Send() on the e-mail programmatically, I call mailItem.Display(true) to show the e-mail to the user for any final edits. At this point I can look at the properties and see that there is an attachment there.
If I press send (sending to myself) I see the same thing, the attachment appears to be there but not accessible.
A: I have found the issue. I change the code to use the following:
attachments.Add(ReleaseForm.ZipFile, Outlook.OlAttachmentType.olByValue, Type.Missing, Type.Missing);
It appears that the Position and DisplayName parameters control what happens with an olByValue. Using Type.Missing and now I see the attachments correctly in the e-mail.
A: By the way, if you will set Position to 0 your attachement will be hidden:
Attachment.Position Property
A: I had exactly the same problem as yours, but even after changing the code as described it still didn't work: the attachment seemed to be in the mail item but didn't show up in the displayed message.
It turns out you have to make sure the mailItem body is not null to display the attachment.
A: Bit of an old post, but as some others mentioned, using
attachments.Add(path, Outlook.OlAttachmentType.olByValue, Type.Missing, Type.Missing);
did not help me either, so I thought I would share an alternative approach. The solution to this problem ended up being to call mailItem.Save(); right before you call mailItem.Display(true);. What this will do is refresh the Outlook form to show your attachments. It is also worthwhile to point out that it will save the message to Drafts. Not an issue if you expect the user to send the email, but if they cancel, it will be left in their Drafts folder.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Unix path-searching C function I am programming a UNIX shell and I have to use the execv() system call to create a process.
One of the parameters for execv() is the filepath for the executable. So if somebody types in /bin/ls, it will run the ls executable. But what I need is a function such that when ls is typed, it will search for the filepath of ls (like the which command). Is there a function which allows me to do that?
Unfortunately, this is a school project and I am not allowed to use execvp(). I need to implement some path searching function and then add that filepath to the execv() parameter.
A: I think execvp() does what you need.
Edit: So you're actually asking how to do this manually? In that case...
*
*Find your PATH in envp (3rd argument to main())
*Split this into individual paths
*Check for your program's existence in each of these with stat()
*Execute the first one you find to exist
Or if you want a really solid implementation, use this. It might set off the plagiarism detectors though :)
A: You want execvp(), it will search the path specified in the PATH variable unless the filename contains a '/'.
A: If you can't use execvp, you could get the PATH variable from char** environ from <unistd.h> or char* getenv(const char* name) from <stdlib.h> then use int access(const char* filename, int mode) to see if the file exists and is executable.
I'll leave the implementation up to you as it's a school project.
A: Use execvp.
char *args[] = {"ls", (char *) NULL};
execvp("ls", args);
e.g. this example will exec /bin/echo (assuming /bin is on your PATH).
#include <unistd.h>
int main()
{
char *args[] = {"echo", "hello world", (char *) NULL};
execvp("echo", args);
return 0;
}
A: A few people have suggested that you call access() or stat() before attempting to execute the program with execv(). You don't need to do this. execv() will return an error if it could not execute the requested file.
A: Use PATH = getenv("PATH") to get the path string from the environment, then use successive calls to strtok(PATH,":") then strtok(NULL,":") to parse out the paths from the PATH string into an array of char **path, which you will need to allocate with malloc(). Place path[x] + '/' + argv[0] into a buffer, and use access(buffer, X_OK) to see if you can execute your file at that path location, if so, perform your execv(buffer,argv).
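To make the search concrete, here is a rough Python sketch of the same algorithm (the function name find_in_path is mine; in the actual assignment you would write this in C with getenv(), strtok() and access() as described above):

```python
import os

def find_in_path(name):
    """Mimic the shell's lookup: if the name contains a slash, use it
    directly; otherwise try each PATH directory until one holds an
    executable regular file (access(..., X_OK) in C terms)."""
    if "/" in name:
        return name if os.access(name, os.X_OK) else None
    for d in os.environ.get("PATH", "/bin:/usr/bin").split(":"):
        candidate = os.path.join(d or ".", name)  # an empty entry means "."
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

print(find_in_path("sh"))  # e.g. /bin/sh or /usr/bin/sh on most systems
```

The found path is then what you would hand to execv() along with your argument vector.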
A: execvp :-)
Edit: Okay. Here's a Perl version, which can serve as pseudocode for your problem.
use List::Util qw(first);
my @path = split /:/, $ENV{PATH};
my $dir = first {$_ ||= '.'; -x "$_/$name"} @path
or die "Can't find program $name\n";
exec "$dir/$name", @args;
split splits a string into an array of strings, using the given separator. first finds the first item that matches some criterion; here, that the concatenation of the directory and the name being sought after is executable (-x). It then runs that.
Hope it helps!
A: From the execvp man page:
The functions execlp() and execvp() will duplicate the actions of the shell in searching for an executable file if the specified filename does not contain a slash (/) character. The search path is the path specified in the environment by the PATH variable. If this variable isn't specified, the default path ":/bin:/usr/bin" is used. In addition, certain errors are treated specially.
Perhaps you are allowed to use execlp()? If you must use execv you'll need to get the value of the PATH environment variable, parse this using strtok() to get the individual paths, append your filename to each path and attempt to execute it with execv().
I'd provide code, but I'm not doing your homework for you.
A: Oh a school project...
Well if you want something "like which" why not just exec "which" itself (/usr/bin/which on linux, not the bash alias or tcsh builtin) to get the path to what you are looking for...
:-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Does iPhone support XML-RPC? Does the iPhone support XML-RPC? Is there any open source framework which I can use?
A: "Airsource Ltd" is really really incredibly wrong about that document. That document is specific to Mac OS X and NOT the iPhone. In fact (almost) all Apple iPhone documentation is hidden away behind a login page and a licence agreement. Most of the technologies that document refer to (e.g. AppleScript) do not even exist on the iPhone.
Amit, you'll have Zero luck if you follow Airsource's advice. You will however do ok if you do as "Lounges" says and go grab the wordpress source code. It looks like they rolled their own XMLRPC library for use on the iPhone.
As for SOAP - you're on your own. You might be able to find an opensource SOAP library built on top of libxml2 though. Good luck.
A: Yes, the iPhone supports XML-RPC, and the open-source WordPress application is the best example of it.
That said, from a performance standpoint JSON is better to use with an iPhone application;
you can download a JSON parser from https://github.com/stig/json-framework/.
A: Checkout the source for the wordpress app. They might be using XML-RPC. :)
http://iphone.wordpress.org/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can a service control its own state? I have a standard Windows service that inherits from the ServiceBase class.
On the OnStart method I want to check for certain conditions before I get to the main purpose of what my service does.
For example:
protected override void OnStart(string[] args)
{
if (condition == false)
{
EventLog.WriteEntry("Pre-condition not met, service was unable to start");
// TODO: Convert service state to "Stopped" because my precondition wasn't met
return;
}
InitializeService();
}
Anybody have a good example for how a service can control its own state?
A: Throw an Exception. This will cause the services MMC to get an error - and the exception message and stack will automatically be logged to the event log. I use ApplicationException in this case.
In addition, the service will return to the "not running" state.
If you need to stop later on, you can call the Stop method on your ServiceBase.
A: The error shown to the user in the "Computer Management" MMC app does not seem to get the exception text correctly in Vista. It shows the following:
The "your service name here" service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.
However it writes correctly to the Event Log just fine.
Very Cool. Thanks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How are permissions inherited on an Ubuntu Server? Sorry for the second newbie question, I'm a developer not a sysadmin so this is all quite new to me. I hope I can explain myself clearly! Here goes.
Here's what I did:
*
*Logged into the root account
*Created the accounts 'richard' and 'austin'
*Created the group 'developers', and added 'richard' and 'austin' to it
*Created the directory /server, and used "chown richard:developers /server" to change the owner
*Changed the permissions of /server to drwxrwxr-x (so the group can write to it)
*Logged out of the root account, and into the 'richard' account
*Created the directories /server/production and /server/staging
*Used "ls -l" inside /server to list the contents, and found permissions of drwxr-xr-x and ownership of "richard:richard" for both /server/production and /server/staging. Consequently, 'austin' can edit inside the /server directory, but not inside the directories 'richard' created.
What am I doing wrong? I want to ensure that any files or folders created inside the /server directory have group write permissions and belong to the developers group. How do I go about that?
Thanks for any help!
A: Looks like you want to use "chmod g+s" or "chmod 2775" to get the SetGID bit set on the directory; that should preserve the group if I remember my permission modes properly.
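As a quick sanity check of what the leading 2 in 2775 encodes, here is a small Python snippet using the standard stat constants (illustrative only; the actual fix is the chmod command above):

```python
import stat

mode = 0o2775  # what "chmod 2775" sets

# The leading 2 is the setgid bit: files created inside the directory
# inherit the directory's group instead of the creator's primary group.
assert mode & stat.S_ISGID

# 775 breaks down as: owner rwx, group rwx, others r-x
assert (mode & stat.S_IRWXU) == stat.S_IRWXU
assert (mode & stat.S_IRWXG) == stat.S_IRWXG
assert (mode & stat.S_IRWXO) == (stat.S_IROTH | stat.S_IXOTH)

# In an ls -l listing the setgid bit shows as 's' in the group execute slot
print(stat.filemode(mode | stat.S_IFDIR))  # drwxrwsr-x
```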
A: When you created the directories as richard, the system assumed that you were the owner and set you as the owner. You can either change the ownership and permissions manually:
sudo chown richard:developers /server/production /server/staging
sudo chmod 775 /server/production /server/staging
or
set the default permissions for creating files/folders (found this: http://wiki.slicehost.com/doku.php?id=setting_up_ubuntu_slice_with_django_postgresql_ledgersmb_and_openvpn)
or
use acl's (see: http://ubuntuforums.org/showpost.php?p=3718480&postcount=12) for details
A: How did you change the permissions of /server? Do it recursively, if you didn't.
Good luck!
A: You must have set a restrictive umask.
Edit ~/.bash_profile
and modify the umask setting for the specific user.
http://en.wikipedia.org/wiki/Umask
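Here is a small Python demonstration of how the umask shapes the permissions of newly created files (the mechanism is the kernel's, so the same applies to any program):

```python
import os
import stat
import tempfile

old = os.umask(0o002)  # group-friendly umask: only strip "other write"
try:
    path = os.path.join(tempfile.mkdtemp(), "example")
    # Ask for rw-rw-rw- (0666); the kernel clears the bits in the umask
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o664: the requested 0o666 minus the 0o002 mask
finally:
    os.umask(old)  # restore the previous umask
```

With a restrictive umask like 0o027 the same open() call would instead yield 0o640, which is why richard's new directories lost their group write bit.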
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Are there any free ways to turn an html page into an image with .net I want to take html, including the text and images and turn it into one image containing everything. Is there a free way to do it?
This is using .net 3.5.
See also:
Server Generated web screenshots?
What is the best way to create a web page thumbnail?
A: You might check out this project or this page.
Hope that helps.
A: Here's some code that I posted on my blog a few weeks ago that does this:
C#: Generate WebPage Thumbnail Screenshot Image
I'll also post the code for it below:
public Bitmap GenerateScreenshot(string url)
{
// This method gets a screenshot of the webpage
// rendered at its full size (height and width)
return GenerateScreenshot(url, -1, -1);
}
public Bitmap GenerateScreenshot(string url, int width, int height)
{
// Load the webpage into a WebBrowser control
WebBrowser wb = new WebBrowser();
wb.ScrollBarsEnabled = false;
wb.ScriptErrorsSuppressed = true;
wb.Navigate(url);
while (wb.ReadyState != WebBrowserReadyState.Complete) { Application.DoEvents(); }
// Set the size of the WebBrowser control
wb.Width = width;
wb.Height = height;
if (width == -1)
{
// Take Screenshot of the web pages full width
wb.Width = wb.Document.Body.ScrollRectangle.Width;
}
if (height == -1)
{
// Take Screenshot of the web pages full height
wb.Height = wb.Document.Body.ScrollRectangle.Height;
}
// Get a Bitmap representation of the webpage as it's rendered in the WebBrowser control
Bitmap bitmap = new Bitmap(wb.Width, wb.Height);
wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));
wb.Dispose();
return bitmap;
}
Here's some example usages:
// Generate thumbnail of a webpage at 1024x768 resolution
Bitmap thumbnail = GenerateScreenshot("http://pietschsoft.com", 1024, 768);
// Generate thumbnail of a webpage at the webpage's full size (height and width)
thumbnail = GenerateScreenshot("http://pietschsoft.com");
// Display Thumbnail in PictureBox control
pictureBox1.Image = thumbnail;
/*
// Save Thumbnail to a File
thumbnail.Save("thumbnail.png", System.Drawing.Imaging.ImageFormat.Png);
*/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What's a solid, full-featured open rich text representation usable on the Web? I'm looking for an internal representation format for text, which would support basic formatting (font face, size, weight, indentation, basic tables), also supporting the following features:
*
*Bidirectional input (Hebrew, Arabic, etc.)
*Multi-language input (i.e. UTF-8) in same text field
*Anchored footnotes (i.e. a superscript number that's a link to that numbered footnote)
I guess TEI or DocBook are rich enough, but here's the snag -- I want these text buffers to be Web-editable, so I need either an edit control that eats TEI or DocBook, or reliable and two-way conversion between one of them and whatever the edit control can eat.
UPDATE: The edit control I'm thinking of is something like TinyMCE, but AFAICT, TinyMCE lacks footnotes, and I'm not sure about its scalability (how about editing 1 or 2 megabytes of text?)
Any pointers much appreciated!
A: FCKeditor has a great API, supports several programming languages (considering it is javascript this isn't hard to achieve), can be loaded through HTML or instantiated in code; but most of all, allows easy access to the underlying form field, so having a jQuery or prototype ajax buffer shouldn't be terribly difficult to achieve.
The load time is very quick compared to previous versions. I'd give it a whirl.
A: In my experience a two-way conversion between HTML and XML formats like TEI or DocBook is very hard to make 100% reliable.
You could use Xopus (demo) to have your users directly edit TEI or DocBook XML. Xopus is a commercial browser based XML editor designed specifically for non-technical users. It supports bidi and UTF-8. The WYSIWYG view is rendered using XSLT, so that gives you sufficient control to render footnotes the way you describe.
As TEI and DocBook don't have means to store styling information, those formats will not allow your users to change font face, size and weight. But I think that is a good thing: users should insert headers and emphasis, designers should pick font face and size.
Xopus has a powerful table editor and indentation is handled by nesting sections or lists and XSLT reacting to that.
Unfortunately Xopus 3 will only scale to about 200KB of XML, but we're working on that.
A: I can't really decide on one of them. IMHO they are all not very good and complete. They all have their advantages and clear disadvantages. If TinyMCE is your favorite then afaik, it also does tables.
This list will probably come in handy: WysiwygEditorComparision.
A: I've also used FCKEditor and it performed well and was easy to integrate into my project. It's worth checking out.
A: Small correction to laurens' answer above: As of now (May 2012), Xopus supports UTF8, but not BiDi editing. Right-to-left text is displayed fine if it came from another source, but it cannot be edited correctly.
Source: I was recently asked to evaluate this, so have been testing it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Why do discussions of "swappiness" act like information can only be in one place at a time? I've been reading up on Linux's "swappiness" tuneable, which controls how aggressive the kernel is about swapping applications' memory to disk when they're not being used. If you Google the term, you get a lot of pages like this discussing the pros and cons. In a nutshell, the argument goes like this:
If your swappiness is too low, inactive applications will hog all the system memory that other programs might want to use.
If your swappiness is too high, when you wake up those inactive applications, there's going to be a big delay as their state is read back off the disk.
This argument doesn't make sense to me. If I have an inactive application that's using a ton of memory, why doesn't the kernel page its memory to disk AND leave another copy of that data in-memory? This seems to give the best of both worlds: if another application needs that memory, it can immediately claim the physical RAM and start writing to it, since another copy of it is on disk and can be swapped back in when the inactive application is woken up. And when the original app wakes up, any of its pages that are still in RAM can be used as-is, without having to pull them off the disk.
Or am I missing something?
A:
If I have an inactive application that's using a ton of memory, why doesn't the kernel page its memory to disk AND leave another copy of that data in-memory?
Lets say we did it. We wrote the page to disk, but left it in memory. A while later another process needs memory, so we want to kick out the page from the first process.
We need to know with absolute certainty whether the first process has modified the page since it was written out to disk. If it has, we have to write it out again. The way we would track this is to take away the process's write permission to the page back when we first wrote it out to disk. If the process tries to write to the page again there will be a page fault. The kernel can note that the process has dirtied the page (and will therefore need to be written out again) before restoring the write permission and allowing the application to continue.
Therein lies the problem. Taking away write permission from the page is actually somewhat expensive, particularly in multiprocessor machines. It is important that all CPUs purge their cache of page translations to make sure they take away the write permission.
If the process does write to the page, taking a page fault is even more expensive. I'd presume that a non-trivial number of these pages would end up taking that fault, which eats into the gains we were looking for by leaving it in memory.
So is it worth doing? I honestly don't know. I'm just trying to explain why leaving the page in memory isn't so obvious a win as it sounds.
(*) This whole thing is very similar to a mechanism called Copy-On-Write, which is used when a process fork()s. The child process is very likely going to execute just a few instructions and call exec(), so it would be silly to copy all of the parents pages. Instead the write permission is taken away and the child simply allowed to run. Copy-On-Write is a win because the page fault is almost never taken: the child almost always calls exec() immediately.
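The fork()-then-exec() pattern described above can be sketched like this (a Unix-only Python sketch; the point is that the child replaces itself almost immediately, so eagerly copying the parent's pages would have been wasted work):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: almost immediately replaces its address space with a new
    # program, so copy-on-write means the parent's pages never actually
    # get duplicated before they are thrown away.
    os.execv("/bin/echo", ["echo", "hello from the child"])
    os._exit(127)  # only reached if exec failed
else:
    # Parent: wait for the child and report how it exited
    _, status = os.waitpid(pid, 0)
    print("child exited with status", os.WEXITSTATUS(status))
```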
A: Even if you page the apps memory to disk and keep it in memory, you would still have to decide when should an application be considered "inactive" and that's what swapiness controls. Paging to disk is expensive in terms of IO and you don't want to do it too often. There is also another variable on this equation, and that is the fact that Linux uses of remaining memory as disk buffers/cache.
A: According to this, that is exactly what Linux does.
I'm still trying to make sense of a lot of this, so any authoritative links would be appreciated.
A: The first thing the VM does is clean pages and move them to the clean list.
When cleaning anonymous memory (things which do not have an actual file backing store; you can see the segments in /proc/<pid>/maps which are anonymous and have no filesystem vnode storage behind them), the first thing the VM is going to do is take the "dirty" pages and "clean" them by writing the contents of the page out to swap. Now when the VM has a shortage of completely free memory and is worried about its ability to grant new free pages to be used, it can go through the list of 'clean' pages and based on how recently they were used and what kind of memory they are it will move those pages to the free list.
Once the memory pages are placed on the free list, they are no longer associated with the contents they had before. If a program comes along and references the memory location the page was serving previously, the program will take a major fault and a (most likely completely different) page will be grabbed from the free list and the data will be read into the page from disk. Once this is done, the page is actually still 'clean' since it has not been modified. If the VM chooses to use that page on swap for a different page in RAM then the page would again be 'dirtied', or if the app wrote to that page it would be 'dirtied'. And then the process begins again.
Also, swappiness is pretty horrible for server applications in a business/transactional/online/latency-sensitive environment. When I've got 16GB RAM boxes where I'm not running a lot of browsers and GUIs, I typically want all my apps nearly pinned in memory. The bulk of my RAM tends to be 8-10GB java heaps that I NEVER want paged to disk, ever, and the cruft that is available are processes like mingetty (but even there the glibc pages in those apps are shared by other apps and actually used, so even the RSS size of those useless processes is mostly shared, used pages). I normally don't see more than a few 10MBs of the 16GB actually cleaned to swap. I would advise very, very low swappiness numbers or zero swappiness for servers -- the unused pages should be a small fraction of the overall RAM, and trying to reclaim that relatively tiny amount of RAM for buffer cache risks swapping application pages and taking latency hits in the running app.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to diagnose an app crash from an OS X error log? Short Q.: What does this exception mean? "EXC_BAD_ACCESS (0x0001)"
Full Q.: How can I use this error log info (and thread particulars that I omitted here) to diagnose this app crash? (NB: I have no expertise with crash logs or OS kernels.)
In this case, my email client (Eudora) crashes immediately on launch, every time, after no apparent system changes.
Host Name: [name of Mac]
Date/Time: 2008-09-28 14:46:54.177 -0400
OS Version: 10.4.11 (Build 8S165)
Report Version: 4
Command: Eudora
Path: /Applications/[...]/Eudora Application Folder/Eudora.app/Contents/MacOS/Eudora
Parent: WindowServer [59]
Version: 6.2.4 (6.2.4)
PID: 231
Thread: 0
Exception: EXC_BAD_ACCESS (0x0001)
Codes: KERN_PROTECTION_FAILURE (0x0002) at 0x00000001
A: To answer your short question: EXC_BAD_ACCESS means an illegal memory access. This means that the program tried to use a memory location outside its virtual address space (roughly speaking, the area of memory it has requested from the OS kernel). This is what Unix people typically call a "segmentation fault" (segfault), and what Windows people typically call an "access violation" (AV) or "general protection fault" (GPF). (Yes, you probably already knew that. But I'm just making sure...)
Does the crash report say what memory address was being referenced? Does the report include the call stack or a core dump? All three are very valuable, especially the core dump as you can load it into the debugger and restart the program right from the point of the exception. And if the memory address seems totally out of whack (in the higher half of the virtual address space, or very close to 0), then you have a good clue where the problem lies. In this log the faulting address is 0x00000001, which is essentially a null pointer with a tiny offset.
A: Just to complete Michael's answer - analyzing the crash log is not what you are supposed to do. The eudora programmers screwed up (or the people providing the tools the eudora programmers use to make eudora). The best thing you can do is copy and paste the entire contents of the crash log in to a mail client (try MAIL, as you can't seem to use eudora, right now, hahaha), and send it to eudora-support@eudora.com along with a short description of what's happening. It's their job to figure out what went wrong.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: C# downcasting when binding to an interface Is there a better way of binding a list of base class to a UI other than downcasting, e.g.:
static void Main(string[] args) {
List<Animal> list = new List<Animal>();
Pig p = new Pig(5);
Dog d = new Dog("/images/dog1.jpg");
list.Add(p);
list.Add(d);
foreach (Animal a in list)
{
DoPigStuff(a as Pig);
DoDogStuff(a as Dog);
}
}
static void DoPigStuff(Pig p)
{
if (p != null)
{
label1.Text = String.Format("The pigs tail is {0}", p.TailLength);
}
}
static void DoDogStuff(Dog d) {
if (d != null)
{
Image1.src = d.Image;
}
}
class Animal {
public String Name { get; set; }
}
class Pig : Animal{
public int TailLength { get; set; }
public Pig(int tailLength)
{
Name = "Mr Pig";
TailLength = tailLength;
}
}
class Dog : Animal {
public String Image { get; set; }
public Dog(String image)
{
Name = "Mr Dog";
Image = image;
}
}
A: Why not make Animal include an abstract method that Pig and Dog are forced to implement
public abstract class Animal
{
public abstract void DoStuff();
}
public class Dog : Animal
{
public override void DoStuff()
{
// Do dog specific stuff here
}
}
public class Pig : Animal
{
public override void DoStuff()
{
// Do pig specific stuff here
}
}
This way each specific class takes responsibility for its actions, making your code simpler. You also won't need to cast inside your foreach loop.
A: When faced with this type of problem, I follow the visitor pattern.
interface IVisitor
{
void DoPigStuff(Piggy p);
void DoDogStuff(Doggy d);
}
class GuiVisitor : IVisitor
{
public void DoPigStuff(Piggy p)
{
label1.Text = String.Format("The pigs tail is {0}", p.TailLength);
}
public void DoDogStuff(Doggy d)
{
Image1.src = d.Image;
}
}
abstract class Animal
{
public String Name { get; set; }
public abstract void Visit(IVisitor visitor);
}
class Piggy : Animal
{
public int TailLength { get; set; }
public Piggy(int tailLength)
{
Name = "Mr Pig";
TailLength = tailLength;
}
public override void Visit(IVisitor visitor)
{
visitor.DoPigStuff(this);
}
}
class Doggy : Animal
{
public String Image { get; set; }
public Doggy(String image)
{
Name = "Mr Dog";
Image = image;
}
public override void Visit(IVisitor visitor)
{
visitor.DoDogStuff(this);
}
}
public class AnimalProgram
{
static void Main(string[] args) {
List<Animal> list = new List<Animal>();
Piggy p = new Piggy(5);
Doggy d = new Doggy("/images/dog1.jpg");
list.Add(p);
list.Add(d);
IVisitor visitor = new GuiVisitor();
foreach (Animal a in list)
{
a.Visit(visitor);
}
}
}
Thus the visitor pattern simulates double dispatch in a conventional single-dispatch object-oriented language such as Java, Smalltalk, C#, and C++.
The only advantage of this code over jop's is that the IVisitor interface can be implemented on a different class later when you need to add a new type of visitor (like a XmlSerializeVisitor or a FeedAnimalVisitor).
A: Another way to do this is to perform a typecheck before calling the method:
if (animal is Pig) DoPigStuff();
if (animal is Dog) DoDogStuff();
What you are looking for is multiple-dispatch. NO - C# doesn't support multiple-dispatch. It only supports single-dispatch. C# can only dynamically invoke a method based on the type of the receiver (i.e. the object at the left hand side of the . in the method call)
This code uses double-dispatch. I'll let the code speak for itself:
class DoubleDispatchSample
{
static void Main(string[]args)
{
List<Animal> list = new List<Animal>();
Pig p = new Pig(5);
Dog d = new Dog(@"/images/dog1.jpg");
list.Add(p);
list.Add(d);
Binder binder = new Binder(); // the class that knows how databinding works
foreach (Animal a in list)
{
a.BindTo(binder); // initiate the binding
}
}
}
class Binder
{
public void DoPigStuff(Pig p)
{
label1.Text = String.Format("The pigs tail is {0}", p.TailLength);
}
public void DoDogStuff(Dog d)
{
Image1.src = d.Image;
}
}
internal abstract class Animal
{
public String Name
{
get;
set;
}
public abstract void BindTo(Binder binder);
}
internal class Pig : Animal
{
public int TailLength
{
get;
set;
}
public Pig(int tailLength)
{
Name = "Mr Pig";
TailLength = tailLength;
}
public override void BindTo(Binder binder)
{
// Pig knows that it's a pig - so call the appropriate method.
binder.DoPigStuff(this);
}
}
internal class Dog : Animal
{
public String Image
{
get;
set;
}
public Dog(String image)
{
Name = "Mr Dog";
Image = image;
}
public override void BindTo(Binder binder)
{
// Dog knows that it's a dog - so call the appropriate method.
binder.DoDogStuff(this);
}
}
NOTE: Your sample code is much more simpler than this. I think of double-dispatch as one of the heavy artilleries in C# programming - I only take it out as a last resort. But if there are a lot of types of objects and a lot different types of bindings that you need to do (e.g. you need to bind it to an HTML page but you also need to bind it to a WinForms or a report or a CSV), I would eventually refactor my code to use double-dispatch.
A: You're not taking full advantage of your base class. If you had a virtual function in your Animal class that Dog & Pig override, you wouldn't need to cast anything.
A: Unless you have a more specific example, just override ToString().
A: I think you want a view-class associated with a factory.
Dictionary<Func<Animal, bool>, Func<Animal, AnimalView>> factories;
factories.Add(item => item is Dog, item => new DogView(item as Dog));
factories.Add(item => item is Pig, item => new PigView(item as Pig));
Then your DogView and PigView will inherit AnimalView that looks something like:
abstract class AnimalView {
    public abstract void DoStuff();
}
You will end up doing something like:
foreach (animal in list)
foreach (entry in factories)
if (entry.Key(animal)) entry.Value(animal).DoStuff();
I guess you could also say that this is an implementation of the strategy pattern.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why doesn't C++ have a garbage collector? I'm not asking this question because of the merits of garbage collection first of all. My main reason for asking this is that I do know that Bjarne Stroustrup has said that C++ will have a garbage collector at some point in time.
With that said, why hasn't it been added? There are already some garbage collectors for C++. Is this just one of those "easier said than done" type things? Or are there other reasons it hasn't been added (and won't be added in C++11)?
Cross links:
*
*Garbage collectors for C++
Just to clarify, I understand the reasons why C++ didn't have a garbage collector when it was first created. I'm wondering why the collector can't be added in.
A: To answer most "why" questions about C++, read Design and Evolution of C++
A: What type? Should it be optimised for embedded washing machine controllers, cell phones, workstations or supercomputers?
Should it prioritise GUI responsiveness or server loading?
Should it use lots of memory or lots of CPU?
C/C++ is used in just too many different circumstances.
I suspect something like Boost smart pointers will be enough for most users
Edit - Automatic garbage collectors aren't so much a problem of performance (you can always buy more servers), it's a question of predictable performance.
Not knowing when the GC is going to kick in is like employing a narcoleptic airline pilot, most of the time they are great - but when you really need responsiveness!
A: One of the fundamental principles behind the original C language is that memory is composed of a sequence of bytes, and code need only care about what those bytes mean at the exact moment that they are being used. Modern C allows compilers to impose additional restrictions, but C includes--and C++ retains--the ability to decompose a pointer into a sequence of bytes, assemble any sequence of bytes containing the same values into a pointer, and then use that pointer to access the earlier object.
While that ability can be useful--or even indispensable--in some kinds of applications, a language that includes that ability will be very limited in its ability to support any kind of useful and reliable garbage collection. If a compiler doesn't know everything that has been done with the bits that made up a pointer, it will have no way of knowing whether information sufficient to reconstruct the pointer might exist somewhere in the universe. Since it would be possible for that information to be stored in ways that the computer wouldn't be able to access even if it knew about them (e.g. the bytes making up the pointer might have been shown on the screen long enough for someone to write them down on a piece of paper), it may be literally impossible for a computer to know whether a pointer could possibly be used in the future.
An interesting quirk of many garbage-collected frameworks is that an object reference not defined by the bit patterns contained therein, but by the relationship between the bits held in the object reference and other information held elsewhere. In C and C++, if the bit pattern stored in a pointer identifies an object, that bit pattern will identify that object until the object is explicitly destroyed. In a typical GC system, an object may be represented by a bit pattern 0x1234ABCD at one moment in time, but the next GC cycle might replace all references to 0x1234ABCD with references to 0x4321BABE, whereupon the object would be represented by the latter pattern. Even if one were to display the bit pattern associated with an object reference and then later read it back from the keyboard, there would be no expectation that the same bit pattern would be usable to identify the same object (or any object).
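As a small illustration of the point about decomposing and reassembling pointers (a sketch only; it uses a stack variable, so no collector is actually involved, but the byte round-trip is exactly the operation described):

```cpp
#include <cassert>
#include <cstring>

// Decompose a pointer into raw bytes, drop every pointer-typed copy,
// then rebuild the pointer from the bytes. A collector scanning only
// pointer-typed values would see no reference to `value` in between,
// yet the final dereference is still well-defined C++.
int roundtrip_demo() {
    int value = 42;
    int* p = &value;

    unsigned char bytes[sizeof p];
    std::memcpy(bytes, &p, sizeof p);   // pointer -> bytes
    p = nullptr;                        // no pointer-typed copy remains

    int* q = nullptr;
    std::memcpy(&q, bytes, sizeof q);   // bytes -> pointer
    return *q;                          // same object, valid access
}
```

Because such round-trips are legal, a collector cannot safely reclaim (let alone move) an object merely because no pointer-typed value currently refers to it.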
A: SHORT ANSWER:
We don't know how to do garbage collection efficiently (with minor time and space overhead) and correctly all the time (in all possible cases).
LONG ANSWER:
Just like C, C++ is a systems language; this means it is used when you are writing system code, e.g., an operating system. In other words, C++ is designed, just like C, with the best possible performance as the main target. The language's standard will not add any feature that might hinder the performance objective.
This poses the question: why does garbage collection hinder performance? The main reason is that, when it comes to implementation, we [computer scientists] do not know how to do garbage collection with minimal overhead, for all cases. Hence it's impossible for the C++ compiler and runtime system to perform garbage collection efficiently all the time. On the other hand, a C++ programmer should know his design/implementation, and he's the best person to decide how to best do the garbage collection.
Last, if control (hardware, details, etc.) and performance (time, space, power, etc.) are not the main constraints, then C++ is not the right tool. Other languages might serve better and offer more [hidden] runtime management, with the necessary overhead.
A: One of the biggest reasons that C++ doesn't have built-in garbage collection is that getting garbage collection to play nice with destructors is really, really hard. As far as I know, nobody really knows how to solve it completely yet. There are a lot of issues to deal with:
*
*deterministic lifetimes of objects (reference counting gives you this, but GC doesn't. Although it may not be that big of a deal).
*what happens if a destructor throws when the object is being garbage collected? Most languages ignore this exception, since there's really no catch block to be able to transport it to, but this is probably not an acceptable solution for C++.
*How to enable/disable it? Naturally it'd probably be a compile-time decision but code that is written for GC vs code that is written for NOT GC is going to be very different and probably incompatible. How do you reconcile this?
These are just a few of the problems faced.
A: All the technical talking is overcomplicating the concept.
If you put GC into C++ to manage all memory automatically, then consider something like a web browser. The web browser must load a full web document AND run web scripts. You can store web script variables in the document tree. In a BIG document in a browser with lots of tabs open, it means that every time the GC must do a full collection it must also scan all the document elements.
On most computers this means that PAGE FAULTS will occur. So the main reason, to answer the question is that PAGE FAULTS will occur. You will know this as when your PC starts making lots of disk access. This is because the GC must touch lots of memory in order to prove invalid pointers. When you have a bona fide application using lots of memory, having to scan all objects every collection is havoc because of the PAGE FAULTS. A page fault is when virtual memory needs to get read back into RAM from disk.
So the correct solution is to divide an application into the parts that need GC and the parts that do not. In the case of the web browser example above, if the document tree was allocated with malloc, but the javascript ran with GC, then every time the GC kicks in it only scans a small portion of memory and all PAGED OUT elements of the memory for the document tree does not need to get paged back in.
To further understand this problem, look up virtual memory and how it is implemented in computers. It is all about the fact that 2GB is available to the program when there is not really that much RAM. On modern computers with 2GB RAM for a 32-bit system it is not such a problem provided only one program is running.
As an additional example, consider a full collection that must trace all objects. First you must scan all objects reachable via roots. Second scan all the objects visible in step 1. Then scan waiting destructors. Then go to all the pages again and switch off all invisible objects. This means that many pages might get swapped out and back in multiple times.
So my answer to bring it short is that the number of PAGE FAULTS which occur as a result of touching all the memory causes full GC for all objects in a program to be unfeasible and so the programmer must view GC as an aid for things like scripts and database work, but do normal things with manual memory management.
And the other very important reason of course is global variables. In order for the collector to know that a global variable pointer is in the GC it would require specific keywords, and thus existing C++ code would not work.
A: When we compare C++ with Java, we see that C++ was not designed with implicit Garbage Collection in mind, while Java was.
Having things like arbitrary pointers in C-Style is not only bad for GC-implementations, but it would also destroy backward compatibility for a large amount of C++-legacy-code.
In addition to that, C++ is a language that is intended to run as a standalone executable instead of having a complex run-time environment.
All in all:
Yes it might be possible to add Garbage Collection to C++, but for the sake of continuity it is better not to do so.
A: Though this is an old question, there's still one problem that I don't see anybody having addressed at all: garbage collection is almost impossible to specify.
In particular, the C++ standard is quite careful to specify the language in terms of externally observable behavior, rather than how the implementation achieves that behavior. In the case of garbage collection, however, there is virtually no externally observable behavior.
The general idea of garbage collection is that it should make a reasonable attempt at assuring that a memory allocation will succeed. Unfortunately, it's essentially impossible to guarantee that any memory allocation will succeed, even if you do have a garbage collector in operation. This is true to some extent in any case, but particularly so in the case of C++, because it's (probably) not possible to use a copying collector (or anything similar) that moves objects in memory during a collection cycle.
If you can't move objects, you can't create a single, contiguous memory space from which to do your allocations -- and that means your heap (or free store, or whatever you prefer to call it) can, and probably will, become fragmented over time. This, in turn, can prevent an allocation from succeeding, even when there's more memory free than the amount being requested.
While it might be possible to come up with some guarantee that says (in essence) that if you repeat exactly the same pattern of allocation repeatedly, and it succeeded the first time, it will continue to succeed on subsequent iterations, provided that the allocated memory became inaccessible between iterations. That's such a weak guarantee it's essentially useless, but I can't see any reasonable hope of strengthening it.
Even so, it's stronger than what has been proposed for C++. The previous proposal [warning: PDF] (that got dropped) didn't guarantee anything at all. In 28 pages of proposal, what you got in the way of externally observable behavior was a single (non-normative) note saying:
[ Note: For garbage collected programs, a high quality hosted implementation should attempt to maximize the amount of unreachable memory it reclaims. —end note ]
At least for me, this raises a serious question about return on investment. We're going to break existing code (nobody's sure exactly how much, but definitely quite a bit), place new requirements on implementations and new restrictions on code, and what we get in return is quite possibly nothing at all?
Even at best, what we get are programs that, based on testing with Java, will probably require around six times as much memory to run at the same speed they do now. Worse, garbage collection was part of Java from the beginning -- C++ places enough more restrictions on the garbage collector that it will almost certainly have an even worse cost/benefit ratio (even if we go beyond what the proposal guaranteed and assume there would be some benefit).
I'd summarize the situation mathematically: this is a complex situation. As any mathematician knows, a complex number has two parts: real and imaginary. It appears to me that what we have here are costs that are real, but benefits that are (at least mostly) imaginary.
A: Implicit garbage collection could have been added in, but it just didn't make the cut. Probably due to not just implementation complications, but also due to people not being able to come to a general consensus fast enough.
A quote from Bjarne Stroustrup himself:
I had hoped that a garbage collector
which could be optionally enabled
would be part of C++0x, but there were
enough technical problems that I have
to make do with just a detailed
specification of how such a collector
integrates with the rest of the
language, if provided. As is the case
with essentially all C++0x features,
an experimental implementation exists.
There is a good discussion of the topic here.
General overview:
C++ is very powerful and allows you to do almost anything. For this reason it doesn't automatically push many things onto you that might impact performance. Garbage collection can be easily implemented with smart pointers (objects that wrap pointers with a reference count, which auto delete themselves when the reference count reaches 0).
C++ was built with competitors in mind that did not have garbage collection. Efficiency was the main front on which C++ had to fend off criticism in comparison to C and others.
There are 2 types of garbage collection...
Explicit garbage collection:
C++0x has garbage collection via pointers created with shared_ptr
If you want it you can use it, if you don't want it you aren't forced into using it.
For versions before C++0x, boost:shared_ptr exists and serves the same purpose.
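To make the "smart pointers with a reference count, which auto delete themselves when the count reaches 0" idea concrete, here is a deliberately minimal toy (not boost::shared_ptr itself -- the real one adds thread-safe counting, weak_ptr support, custom deleters, and more):

```cpp
#include <cassert>

// Toy reference-counted pointer: the last copy to be destroyed deletes
// the managed object. A sketch of the counting idea only.
template <typename T>
class RefPtr {
    T*   ptr_;
    int* count_;
public:
    explicit RefPtr(T* p) : ptr_(p), count_(new int(1)) {}
    RefPtr(const RefPtr& o) : ptr_(o.ptr_), count_(o.count_) { ++*count_; }
    RefPtr& operator=(const RefPtr&) = delete;   // kept minimal
    ~RefPtr() {
        if (--*count_ == 0) { delete ptr_; delete count_; }
    }
    T&  operator*() const { return *ptr_; }
    int use_count() const { return *count_; }
};

int g_live = 0;                         // counts constructed Tracked objects
struct Tracked {
    Tracked()  { ++g_live; }
    ~Tracked() { --g_live; }
};

int demo_refcount() {
    {
        RefPtr<Tracked> a(new Tracked);
        {
            RefPtr<Tracked> b(a);       // count goes to 2
            assert(b.use_count() == 2);
        }                               // b destroyed, count back to 1
        assert(a.use_count() == 1);
    }                                   // last owner gone -> object deleted
    return g_live;                      // 0 if cleanup happened
}
```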
Implicit garbage collection:
It does not have transparent garbage collection, though. It will be a focus point for future C++ specs.
Why Tr1 doesn't have implicit garbage collection?
There are a lot of things that tr1 of C++0x should have had, Bjarne Stroustrup in previous interviews stated that tr1 didn't have as much as he would have liked.
A:
If you want automatic garbage collection, there are good commercial
and public-domain garbage collectors for C++. For applications where
garbage collection is suitable, C++ is an excellent garbage collected
language with a performance that compares favorably with other garbage
collected languages. See The C++ Programming Language (4th
Edition) for a discussion of automatic garbage collection in C++.
See also, Hans-J. Boehm's site for C and C++ garbage collection (archive).
Also, C++ supports programming techniques that allow memory
management to be safe and implicit without a garbage collector. I consider garbage collection a last choice and an imperfect way of handling resource management. That does not mean that it is never useful, just that there are better approaches in many situations.
Source: http://www.stroustrup.com/bs_faq.html#garbage-collection
As for why it doesn't have it built in: if I remember correctly, the language was designed before GC went mainstream, and I don't believe it could have had GC for several reasons (e.g. backward compatibility with C).
Hope this helps.
A: tl;dr: Because modern C++ doesn't need garbage collection.
Bjarne Stroustrup's FAQ answer on this matter says:
I don't like garbage. I don't like littering. My ideal is to eliminate the need for a garbage collector by not producing any garbage. That is now possible.
The situation, for code written these days (C++17 and following the official Core Guidelines) is as follows:
*
*Most memory ownership-related code is in libraries (especially those providing containers).
*Most use of code involving memory ownership follows the CADRe or RAII pattern, so allocation is made on construction and deallocation on destruction, which happens when exiting the scope in which something was allocated.
*You do not explicitly allocate or deallocate memory directly.
*Raw pointers do not own memory (if you've followed the guidelines), so you can't leak by passing them around.
*If you're wondering how you're going to pass the starting addresses of sequences of values in memory - you can and should prefer spans, obviating the need for raw pointers. You can still use such pointers, they'll just be non-owning.
*If you really need an owning "pointer", you use C++' standard-library smart pointers - they can't leak, and are decently efficient (although the ABI can get in the way of that). Alternatively, you can pass ownership across scope boundaries with "owner pointers". These are uncommon and must be used explicitly; but when adopted - they allow for nice static checking against leaks.
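The style the list above describes can be sketched in a few lines (my own illustration, not from the Core Guidelines themselves): no explicit new or delete anywhere, containers and unique_ptr own, everything else merely observes.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Guideline-style ownership: the vector owns the unique_ptrs, the
// unique_ptrs own the Widgets, and all allocation/deallocation is
// implicit in construction and scope exit.
struct Widget { int id; };

std::unique_ptr<Widget> make_widget(int id) {
    return std::make_unique<Widget>(Widget{id});   // allocation hidden in factory
}

int total_ids() {
    std::vector<std::unique_ptr<Widget>> widgets;  // owning container
    widgets.push_back(make_widget(1));
    widgets.push_back(make_widget(2));
    int sum = 0;
    for (const auto& w : widgets)                  // non-owning access
        sum += w->id;
    return sum;                                    // everything freed here
}
```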
"Oh yeah? But what about...
... if I just write code the way we used to write C++ in the old days?"
Indeed, you could just disregard all of the guidelines and write leaky application code - and it will compile and run (and leak), same as always.
But it's not a "just don't do that" situation, where the developer is expected to be virtuous and exercise a lot of self control; it's just not simpler to write non-conforming code, nor is it faster to write, nor is it better-performing. Gradually it will also become more difficult to write, as you would face an increasing "impedance mismatch" with what conforming code provides and expects.
... if I reinterpret_cast? Or do complex pointer arithmetic? Or other such hacks?"
Indeed, if you put your mind to it, you can write code that messes things up despite playing nice with the guidelines. But:
*
*You would do this rarely (in terms of places in the code, not necessarily in terms of fraction of execution time)
*You would only do this intentionally, not accidentally.
*Doing so will stand out in a codebase conforming to the guidelines.
*It's the kind of code in which you would bypass the GC in another language anyway.
... library development?"
If you're a C++ library developer then you do write unsafe code involving raw pointers, and you are required to code carefully and responsibly - but these are self-contained pieces of code written by experts (and more importantly, reviewed by experts).
So, it's just like Bjarne said: There's really no motivation to collect garbage generally, as you all but make sure not to produce garbage. GC is becoming a non-problem with C++.
That is not to say GC isn't an interesting problem for certain specific applications, when you want to employ custom allocation and de-allocation strategies. For those you would want custom allocation and de-allocation, not a language-level GC.
A: To add to the debate here.
There are known issues with garbage collection, and understanding them helps understanding why there is none in C++.
1. Performance ?
The first complaint is often about performance, but most people don't really realize what they are talking about. As illustrated by Martin Beckett the problem may not be performance per se, but the predictability of performance.
There are currently 2 families of GC that are widely deployed:
*
*Mark-And-Sweep kind
*Reference-Counting kind
The Mark And Sweep is faster (less impact on overall performance) but it suffers from a "freeze the world" syndrome: i.e. when the GC kicks in, everything else is stopped until the GC has made its cleanup. If you wish to build a server that answers in a few milliseconds... some transactions will not live up to your expectations :)
The problem of Reference Counting is different: reference-counting adds overhead, especially in Multi-Threading environments because you need to have an atomic count. Furthermore, there is the problem of reference cycles, so you need a clever algorithm to detect those cycles and eliminate them (generally implemented by a "freeze the world" too, though less frequent). In general, as of today, this kind (even though normally more responsive or rather, freezing less often) is slower than the Mark And Sweep.
I have seen a paper by Eiffel implementers that were trying to implement a Reference Counting Garbage Collector that would have a similar global performance to Mark And Sweep without the "Freeze The World" aspect. It required a separate thread for the GC (typical). The algorithm was a bit frightening (at the end) but the paper made a good job of introducing the concepts one at a time and showing the evolution of the algorithm from the "simple" version to the full-fledged one. Recommended reading if only I could put my hands back on the PDF file...
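For readers who have never seen the Mark And Sweep family up close, here is a toy version (my sketch, not production GC code -- real collectors add generations, write barriers, incremental marking, etc.), just to show the two phases and why every live object must be touched:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy mark-and-sweep: phase 1 marks everything reachable from a root,
// phase 2 frees whatever was never marked. Touching every object is
// exactly what causes the paging behavior discussed above.
struct Node {
    bool marked = false;
    std::vector<Node*> children;
};

class Heap {
    std::vector<Node*> objects_;         // every allocated node
public:
    Node* alloc() {
        objects_.push_back(new Node);
        return objects_.back();
    }
    void mark(Node* n) {                 // phase 1: trace reachable graph
        if (n == nullptr || n->marked) return;
        n->marked = true;
        for (Node* c : n->children) mark(c);
    }
    void sweep() {                       // phase 2: free the unmarked rest
        std::vector<Node*> survivors;
        for (Node* n : objects_) {
            if (n->marked) { n->marked = false; survivors.push_back(n); }
            else delete n;
        }
        objects_ = survivors;
    }
    std::size_t size() const { return objects_.size(); }
    ~Heap() { for (Node* n : objects_) delete n; }
};

std::size_t demo_collect() {
    Heap heap;
    Node* root = heap.alloc();
    root->children.push_back(heap.alloc());  // reachable from root
    heap.alloc();                            // garbage: no path from root
    heap.mark(root);
    heap.sweep();
    return heap.size();                      // root + one child survive
}
```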
2. Resource Acquisition Is Initialization (RAII)
It's a common idiom in C++ that you will wrap the ownership of resources within an object to ensure that they are properly released. It's mostly used for memory since we don't have garbage collection, but it's also useful nonetheless for many other situations:
*
*locks (multi-thread, file handle, ...)
*connections (to a database, another server, ...)
The idea is to properly control the lifetime of the object:
*
*it should be alive as long as you need it
*it should be killed when you're done with it
The problem of GC is that it helps with the former and ultimately guarantees the latter... but this "ultimately" may not be sufficient. If you release a lock, you'd really like that it be released now, so that it does not block any further calls!
Languages with GC have two workarounds:
*
*don't use GC when stack allocation is sufficient: it's normally for performance issues, but in our case it really helps since the scope defines the lifetime
*the using construct... but it's explicit (weak) RAII, while in C++ RAII is implicit, so the user CANNOT unwittingly make the error (by omitting the using keyword)
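The lock case makes the "released now, not ultimately" point concrete. A sketch (my example; std::lock_guard is the standard RAII wrapper for this):

```cpp
#include <cassert>
#include <mutex>
#include <stdexcept>

// RAII for a lock: std::lock_guard releases the mutex when the scope is
// left -- by return OR by exception -- so the release happens *now*,
// not whenever a finalizer might eventually run.
std::mutex m;

void risky(bool fail) {
    std::lock_guard<std::mutex> hold(m);   // acquired here
    if (fail) throw std::runtime_error("boom");
}                                          // released here, even on throw

bool demo_lock_released() {
    try { risky(true); } catch (const std::runtime_error&) {}
    // If the throw had leaked the lock, this try_lock would fail.
    bool reacquired = m.try_lock();
    if (reacquired) m.unlock();
    return reacquired;
}
```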
3. Smart Pointers
Smart pointers often appear as a silver bullet to handle memory in C++. Often times I have heard: we don't need GC after all, since we have smart pointers.
One could not be more wrong.
Smart pointers do help: auto_ptr and unique_ptr use RAII concepts, extremely useful indeed. They are so simple that you can write them by yourself quite easily.
When one need to share ownership however it gets more difficult: you might share among multiple threads and there are a few subtle issues with the handling of the count. Therefore, one naturally goes toward shared_ptr.
It's great, that's what Boost is for, after all, but it's not a silver bullet. In fact, the main issue with shared_ptr is that it emulates a GC implemented by Reference Counting but you need to implement the cycle detection all by yourself... Urg
Of course there is this weak_ptr thingy, but I have unfortunately already seen memory leaks despite the use of shared_ptr because of those cycles... and when you are in a Multi Threaded environment, it's extremely difficult to detect!
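Here is the cycle problem and the weak_ptr fix in miniature (my sketch; the real pain is that in large multi-threaded code the owning edge is far less obvious than here):

```cpp
#include <cassert>
#include <memory>

// Two nodes holding shared_ptrs to each other keep each other's count
// above zero forever -- a leak reference counting cannot detect on its
// own. Making the back edge a weak_ptr breaks the cycle.
int g_nodes = 0;                     // counts live NodeW objects

struct NodeW {
    std::shared_ptr<NodeW> next;     // owning edge
    std::weak_ptr<NodeW>   prev;     // non-owning back edge
    NodeW()  { ++g_nodes; }
    ~NodeW() { --g_nodes; }
};

int demo_no_leak() {
    {
        auto a = std::make_shared<NodeW>();
        auto b = std::make_shared<NodeW>();
        a->next = b;                 // a owns b
        b->prev = a;                 // weak: does not keep a alive
    }                                // both counts reach 0 -> destroyed
    return g_nodes;                  // 0 means nothing leaked
}
```

Change `prev` to a `shared_ptr` in this sketch and `demo_no_leak()` would return 2: both nodes leak, exactly the failure mode described above.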
4. What's the solution ?
There is no silver bullet, but as always, it's definitely feasible. In the absence of GC one need to be clear on ownership:
*
*prefer having a single owner at one given time, if possible
*if not, make sure that your class diagram does not have any cycle pertaining to ownership and break them with subtle application of weak_ptr
So indeed, it would be great to have a GC... however it's no trivial issue. And in the mean time, we just need to roll up our sleeves.
A: Stroustrup made some good comments on this at the 2013 Going Native conference.
Just skip to about 25m50s in this video. (I'd recommend watching the whole video actually, but this skips to the stuff about garbage collection.)
When you have a really great language that makes it easy (and safe, and predictable, and easy-to-read, and easy-to-teach) to deal with objects and values in a direct way, avoiding (explicit) use of the heap, then you don't even want garbage collection.
With modern C++, and the stuff we have in C++11, garbage collection is no longer desirable except in limited circumstances. In fact, even if a good garbage collector is built into one of the major C++ compilers, I think that it won't be used very often. It will be easier, not harder, to avoid the GC.
He shows this example:
void f(int n, int x) {
    Gadget *p = new Gadget{n};
    if(x<100) throw SomeException{};
    if(x<200) return;
    delete p;
}
This is unsafe in C++. But it's also unsafe in Java! In C++, if the function returns early, the delete will never be called. But if you had full garbage collection, such as in Java, you merely get a suggestion that the object will be destructed "at some point in the future" (Update: it's even worse than this. Java does not promise to call the finalizer ever - it may never be called). This isn't good enough if Gadget holds an open file handle, or a connection to a database, or data which you have buffered for writing to a database at a later point. We want the Gadget to be destroyed as soon as it's finished, in order to free these resources as soon as possible. You don't want your database server struggling with thousands of database connections that are no longer needed - it doesn't know that your program is finished working.
So what's the solution? There are a few approaches. The obvious approach, which you'll use for the vast majority of your objects is:
void f(int n, int x) {
    Gadget p = {n}; // Just leave it on the stack (where it belongs!)
    if(x<100) throw SomeException{};
    if(x<200) return;
}
This takes fewer characters to type. It doesn't have new getting in the way. It doesn't require you to type Gadget twice. The object is destroyed at the end of the function. If this is what you want, this is very intuitive. Gadgets behave the same as int or double. Predictable, easy-to-read, easy-to-teach. Everything is a 'value'. Sometimes a big value, but values are easier to teach because you don't have this 'action at a distance' thing that you get with pointers (or references).
Most of the objects you make are for use only in the function that created them, and perhaps passed as inputs to child functions. The programmer shouldn't have to think about 'memory management' when returning objects, or otherwise sharing objects across widely separated parts of the software.
Scope and lifetime are important. Most of the time, it's easier if the lifetime is the same as the scope. It's easier to understand and easier to teach. When you want a different lifetime, it should be obvious reading the code that you're doing this, by use of shared_ptr for example. (Or returning (large) objects by value, leveraging move-semantics or unique_ptr.
This might seem like an efficiency problem. What if I want to return a Gadget from foo()? C++11's move semantics make it easier to return big objects. Just write Gadget foo() { ... } and it will just work, and work quickly. You don't need to mess with && yourself, just return things by value and the language will often be able to do the necessary optimizations. (Even before C++03, compilers did a remarkably good job at avoiding unnecessary copying.)
As Stroustrup said elsewhere in the video (paraphrasing): "Only a computer scientist would insist on copying an object, and then destroying the original. (audience laughs). Why not just move the object directly to the new location? This is what humans (not computer scientists) expect."
When you can guarantee only one copy of an object is needed, it's much easier to understand the lifetime of the object. You can pick what lifetime policy you want, and garbage collection is there if you want. But when you understand the benefits of the other approaches, you'll find that garbage collection is at the bottom of your list of preferences.
If that doesn't work for you, you can use unique_ptr, or failing that, shared_ptr. Well written C++11 is shorter, easier-to-read, and easier-to-teach than many other languages when it comes to memory management.
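For the cases where stack allocation genuinely won't do, the leaky f() above can be redone with unique_ptr (my illustration, not code from the talk): every exit path -- the throw, the early return, the fall-through -- destroys the Gadget immediately, with no delete and no collector.

```cpp
#include <cassert>
#include <memory>

int g_gadgets = 0;                   // counts live Gadgets
struct Gadget {
    int n;
    explicit Gadget(int n_) : n(n_) { ++g_gadgets; }
    ~Gadget() { --g_gadgets; }
};

struct SomeException {};

// Same control flow as Stroustrup's example, but ownership is held by
// a unique_ptr, so destruction happens on every path automatically.
void f(int n, int x) {
    auto p = std::make_unique<Gadget>(n);
    if (x < 100) throw SomeException{};
    if (x < 200) return;
}   // p's destructor runs here on the fall-through path too

int demo_paths() {
    try { f(7, 50); } catch (const SomeException&) {}  // throw path
    f(7, 150);                                         // early return
    f(7, 250);                                         // fall-through
    return g_gadgets;                                  // 0 if none leaked
}
```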
A: The idea behind C++ was that you would not pay any performance impact for features that you don't use. So adding garbage collection would have meant having some programs run straight on the hardware the way C does and some within some sort of runtime virtual machine.
Nothing prevents you from using some form of smart pointers that are bound to some third-party garbage collection mechanism. I seem to recall Microsoft doing something like that with COM and it didn't go too well.
A: Mainly for two reasons:
*
*Because it doesn't need one (IMHO)
*Because it's pretty much incompatible with RAII, which is the cornerstone of C++
C++ already offers manual memory management, stack allocation, RAII, containers, automatic pointers, smart pointers... That should be enough. Garbage collectors are for lazy programmers who don't want to spend 5 minutes thinking about who should own which objects or when should resources be freed. That's not how we do things in C++.
A: Imposing garbage collection is really a low-level to high-level paradigm shift.
If you look at the way strings are handled in a language with garbage collection, you will find they ONLY allow high level string manipulation functions and do not allow binary access to the strings. Simply put, all string functions first check the pointers to see where the string is, even if you are only drawing out a byte. So if you are doing a loop that processes each byte in a string in a language with garbage collection, it must compute the base location plus offset for each iteration, because it cannot know when the string has moved. Then you have to think about heaps, stacks, threads, etc etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "297"
} |
Q: Easy way to scroll overflow text on a button? Does anyone have any examples or resources where I might find information on scrolling text which is too long to display in a button control? I'm thinking something along these lines.
*
*Display as much text will fit within the current rect with a '...' at the end to signify overflow.
*Pause for say 1 second then slowly scroll the text to the right edge displaying the right part of the string.
*Display as much text will fit within the current rect with a '...' at the beginning to signify overflow.
*Start the whole thing over in reverse.
Is there an easy way to do this using the "core" or built in "animation" frameworks on a certain mobile device?
[edit]
I wanted to add some more details as I think people are more focused on whether or not what I'm trying to accomplish is appropriate. The button is for the answers on a trivia game. It does not perform any specific UI function but is for displaying the answer. Apple themselves do this in their iQuiz trivia game on the iPod Nano and I think it's a pretty elegant solution for answers that are longer than the width of my button.
In case it's the '...' that is the difficult part of this, let's say I removed this requirement. Could I have the label for the button be full sized but clipped to the client rect of the button and use some animation methods to scroll it within the clipping rect? This would give me almost the same effect minus the ellipses.
A: Without wishing to be obtuse, maybe you should rethink your problem. A button should have a clear and predictable function. It's not a place to store and display text. Perhaps you could have a description show on screen with a nice standard button below?
A: Update with source code example:
Here is a ready-to-use source code example (actually a full zipped Xcode project with image and nib files and some source code), not for the iPhone, not using Core Animation, just using a couple of simple NSImages and an NSImageView. It is just a cheap hack, it does not implement the full functionality you requested (sorry, but I don't feel like writing your source code for you :-P), horrible code layout (hey, I just hacked this together within a couple of minutes, so you can't expect any better ;-)) and it's just a demonstration of how this can be done. It can be done with Core Animation, too, but this approach is simpler. Composing the button animation into an NSImageView is not as nice as subclassing an NSView and directly painting to its context, but it's much simpler (I just wanted to hack together the simplest solution possible). It will also not scroll back once it scrolled all the way to the right. Therefore you just need another method to scroll back and start another NSTimer that fires 2 seconds after you drew the dots to the left.
Just open the project in Xcode and hit run, that's all there is to do. Then have a look at the source code. It's really not that complicated (however, you may have to reformat it first, the layout sucks).
Update because of comment to my answer:
If you don't use Apple UI elements at all, I fail to see the problem. In that case your button is not even a button, it's just a clickable View (NSView if you use Cocoa). You can just sub-class NSView as MyAnswerView and overwrite the paint method to paint into the view whatever you wish. Multiline text, scrolling text, 3D text animated, it's completely up to your imagination.
Here's an example, showing how someone subclassed NSView to create a complete custom control that does not exist by default. The control looks like this:
See the funny thing in the upper left corner? That is a control. Here's how it works:
I hate to say it, as it is no answer to your question, but "Don't do that!". Apple has guidelines for how to implement a user interface. While you are free to ignore them, Apple users are used to UIs that follow these guidelines, and not following them will create applications that Apple users find ugly and unappealing.
Here are Apple's Human Interface Guidelines
Let me quote from there
Push Button Contents and Labeling
A push button always contains text, it does not contain an image. If you need to display an icon or other image on a button, use instead a bevel button, described in “Bevel Buttons.”

The label on a push button should be a verb or verb phrase that describes the action it performs—Save, Close, Print, Delete, Change Password, and so on. If a push button acts on a single setting, label the button as specifically as possible; “Choose Picture…,” for example, is more helpful than “Choose…” Because buttons initiate an immediate action, it shouldn’t be necessary to use “now” (Scan Now, for example) in the label.

Push button labels should have title-style capitalization, as described in “Capitalization of Interface Element Labels and Text.” If the push button immediately opens another window, dialog, or application to perform its action, you can use an ellipsis in the label. For example, Mail preferences displays a push button that includes an ellipsis because it opens .Mac system preferences, as shown in Figure 15-8.
Buttons should contain a single verb or a verb phrase, not answers to a trivia game! If you have between 2 and 5 answers, you should use radio buttons to have the user select the answer, and an OK button to have the user accept the answer. For more than 5 answers, you should consider a pop-up selector instead, according to the guidelines, though I guess that would be rather ugly in this case.
You could consider using a table with just one column, one row per answer, and each cell being multiline if the answer is very long and needs to break. So the user selects a table row by clicking on it, which highlights the table cell, and then clicks on an OK button to finish. Alternatively, you can directly continue as soon as the user selects any table cell (but that way you deny the user any chance to correct an accidental click). On the other hand, tables with multiline cells are rather rare on Mac OS X. The iPhone uses some, but usually with very little text (at most two lines).
A: Here's an idea: instead of ellipses (...), use a gradient on each side, so the extra text fades away into the background color. Then you could do this with three CALayers: one for the text and two for fade effect.
The fade masks would just be rectangles with a gradient that goes from transparent to the background color. They should be positioned above the text layer. The text would be drawn on the text layer, and then you just animate it sliding back and forth in the manner you describe. You can create a CGPath object describing the path and add it to a CAKeyframeAnimation object which you add to the text layer.
As for whether you think this is "easy" depends on how well you know Core Animation, but I think once you learn the API you'll find this isn't too bad and would be worth the trouble.
A: Pretty sure you can't do that using the standard API, certainly not with UILineBreakMode. In addition, the style guide says that an ellipsis indicates that the button when pressed will ask you for more information -for example Open File... will ask for the name of a file. Your proposed use of ellipsis violates this guideline.
You'd need some custom logic to implement the behaviour you describe, but I don't think it's the way to go anyway.
A: This is not a very good UI practice, but if you still want to do it, your best bet is to do so via a clickable div styled to look like a button.
Set the width of the div to an explicit value, and its overflow to hidden, then use a script executing on an interval to adjust the scrollLeft property of this div.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SMS + Web app: Providers of SMS "Long codes" for use by U.S. carrier subscribers within U.S.? Q.: How to get a cellular phone SMS "Long code" for use by U.S. carrier subscribers within U.S.?
Background: I'm building a web app that receives queries from/sends answers to cell phones. The app design (and business model) expects to communicate with cell devices via SMS, addressing the web app via an SMS "Long code" (VMN or MSISDN). The mobile phone subscribers will be sending/receiving within the U.S. and using U.S. carriers. Long codes are not available within the U.S. cellular services.
A: This is not an easy task. First, you'll need to get a code. Then you have to negotiate with all the carriers to get them to recognize it.
Or you can use someone like Cell It (http://www.cellitmarketing.com/) which has handled all of these things and acts as an intermediary for you.
I have no relationship with them but we are exploring doing something similar and the expertise of negotiating with all of the carriers has us looking for someone to work with who does that.
A: A 'long code' is just a normal phone number (a full MSISDN in E.164 format). You can get one by purchasing a SIM card (in the case of GSM -- with CDMA you have to get the whole phone today, as they don't use SIM-like identity modules yet). Once you have that, you can get a GSM modem and use standard COM-port programming for the modem to send and receive SMS messages. Last I looked, the cheapest carrier for this in the US was T-Mobile with an unlimited messaging plan.
As Barry pointed out you are not supposed to use this method for commercial purposes, but my experience working at an SMS aggregator was that a lot of people are doing it this way. Check the fine print in the contract to make sure you know what 'unlimited' really means, also be mindful that the speed of a GSM modem is not so good for large scale operations. For large commercial applications you may need to look at connecting to an aggregator. But then you have to negotiate the solution with the aggregator and operator so you might not be able to use a long code.
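The COM-port programming mentioned above boils down to the standard GSM AT command set (AT+CMGF to select text mode, AT+CMGS to submit a message). Here is a minimal sketch of that command sequence in Java; the class name and phone number are my own illustration, and an OutputStream stands in for the serial port (a real program would also wait for the modem's "OK" and ">" prompts between commands):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class SmsModemSketch {

    // Writes the GSM text-mode AT command sequence for one outgoing SMS.
    static void sendSms(OutputStream modem, String number, String text) throws IOException {
        modem.write("AT+CMGF=1\r".getBytes(StandardCharsets.US_ASCII));                    // select text mode
        modem.write(("AT+CMGS=\"" + number + "\"\r").getBytes(StandardCharsets.US_ASCII)); // recipient
        modem.write((text + "\u001A").getBytes(StandardCharsets.US_ASCII));                // body; Ctrl-Z terminates
    }

    public static void main(String[] args) throws IOException {
        // A byte buffer stands in for the serial port here.
        ByteArrayOutputStream fakePort = new ByteArrayOutputStream();
        sendSms(fakePort, "+15551230000", "hello from the web app");
        System.out.println(fakePort.toString(StandardCharsets.US_ASCII.name()));
    }
}
```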
A: Group Texting is a new long code service (http://grouptexting.com). Unlike Twilio there are no charges for incoming messages and no 1 sms/second rules!
A: I believe that I have answered your question in a different post. You can see it here:
What kind of technologies are available for sending text messages
Hopefully, it is what you need.
A: The only legal way to get a "long code" in the US is to buy a Cell phone modem and a sim card.
Normally all traffic Server to Consumer in the US is done over short codes, which cost about $500/month.
And you are not supposed to use long codes for any commercial purposes.
A: If you're just going to send out text messages, get Kannel, hook up a GSM phone to it, and you're done.
A: You are going to need a short code to do any automated SMS marketing. This includes simply responding to queries about a product or event. If you do not do this, you can expect that carriers will eventually block your number. Even if you do get a short code, carriers can still block your number if you fail their audits. Audits usually include providing a one-response unsubscribe to your service and only sending messages to opted-in customers. There are a lot more guidelines, and they vary from carrier to carrier, but most info can be found at the MMA site.
A: broadtexter.com uses a 10-digit long code (646-662-3101) which is registered to T-Mobile; they are able to send and receive SMS in the United States.
I've also just recently posted this: BULK SMS, Long Codes (VMN MSIDN), T-mobile?
In my experience the users on stackoverflow are more than willing to help a fellow developer so check that link soon as I'm sure it might help you out as well.
A: http://www.twilio.com offers long codes for SMS.
Just two caveats:
*
*They enforce a 1 sms/second rule (although they automatically queue messages if you go over the rate)
*They charge 3 cents/message for both incoming and outgoing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What is the benefit of global resource URIs (i.e. addressability)? What is the benefit of referencing resources using globally-unique URIs (as REST does) versus using a proprietary id format?
For example:
*
*http://host.com/student/5
*http://host.com/student?id=5
In the first approach the entire URL is the ID. In the second approach only the 5 is the ID. What is the practical benefit of the first approach over the second?
Why does REST (seem to) go out of its way to advocate the first approach?
-- EDIT:
My question was confusing because it really asked two separate questions:
*
*What is the benefit of addressability?
*What is the difference between the two URI forms seen above.
I've answered both questions below using my own post.
A: The main thing when I see URIs like that is that a normal user would be able to remember the URI.
Us geeks are fine with question marks and GET variables, but if someone remembers http://www.host.com/users/john instead of http://www.host.com/?view=users&name=john, then that is a huge benefit.
A: I will answer my own question:
1) Why are URIs important?
I'll quote from RESTful Web Services by Leonard Richardson and Sam Ruby (ISBN: 978-0-596-52926-0):
Consider a real URI that names a resource in the genre “directory of resources about jellyfish”: http://www.google.com/search?q=jellyfish. That jellyfish search is just as much a real URI as http://www.google.com. If HTTP wasn’t addressable, or if the Google search engine wasn’t an addressable web application, I wouldn’t be able to publish that URI in a book. I’d have to tell you: “Open a web connection to google.com, type ‘jellyfish’ in the search box, and click the ‘Google Search’ button.”

This isn’t an academic worry. Until the mid-1990s, when ftp:// URIs became popular for describing files on FTP sites, people had to write things like: “Start an anonymous FTP session on ftp.example.com. Then change to directory pub/files/ and download file file.txt.” URIs made FTP as addressable as HTTP. Now people just write: “Download ftp://ftp.example.com/pub/files/file.txt.” The steps are the same, but now they can be carried out by machine.

[...]

Addressability is one of the best things about web applications. It makes it easy for clients to use web sites in ways the original designers never imagined.
2) What is the benefit of addressability?
It is far easier to follow server-provided URIs than construct them yourself. This is especially true as resource relationships become too complex to be expressed in simple rules. It's easier to code the logic once in the server than re-implement it in numerous clients.
The relationship between resources may change even though the individual resource URIs remain unchanged. For example, if Google Maps were to change the scale of their map tiles, clients that calculate relative tile positions would break.
3) What is the benefit of URIs over custom IDs?
Custom IDs identify a resource uniquely. URIs go a step further by telling you where to find it. This simplifies the client logic.
A: Search engine optimization mostly.
It also makes them easier to remember, and cleaner, more professional looking in my opinion.
A: The first is more aesthetically pleasing.
Technically there is no difference, but use the former when you can.
A: As Ólafur mentioned, The clarity of the former url is one benefit.
Another is implementation flexibility.
Let's say that student 5 changes infrequently. If you use the REST-style url you have the option of serving a static file instead of running code. In Rails it is common that the first request to students/5 would create a cached html file under your web root. That file is used to serve subsequent requests w/o touching the backend. Naturally, there's nothing rails specific about that approach.
The latter URL wouldn't allow for this. You can't have URL variables (?, =) in the names of static pages.
A: Both URIs are valid from a REST perspective, however just realize that web caches treat the querystring parameters very differently.
If you want to use caching to your advantage then I suggest that you do not use a query string parameter to identify your resource.
A: I think it comes down to how closely you want to adhere to the principles of feng shui.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: `testl` eax against eax? I am trying to understand some assembly.
The assembly as follows, I am interested in the testl line:
000319df 8b4508 movl 0x08(%ebp), %eax
000319e2 8b4004 movl 0x04(%eax), %eax
000319e5 85c0 testl %eax, %eax
000319e7 7407 je 0x000319f0
I am trying to understand the point of the testl between %eax and %eax. I think the specifics of what this code does aren't important; I am just trying to understand testing a register against itself - wouldn't the value always be true?
A: It tests whether eax is zero, negative, or positive. In this case, the jump is taken if eax is 0.
A: This snippet of code is from a subroutine that was given a pointer to something, probably some struct or object. The 2nd line dereferences that pointer, fetching a value from that thing - possibly itself a pointer or maybe just an int, stored as its 2nd member (offset +4). The 3rd and 4th lines test this value for zero (NULL if it's a pointer) and skip the following few operations (not shown) if it is zero.
The test for zero is sometimes coded as a compare against an immediate literal zero, but the compiler (or human?) who wrote this might have thought a testl op would run faster - taking into consideration all the modern CPU stuff like pipelining and register renaming. It's from the same bag of tricks that holds the idea of clearing a register with XOR EAX,EAX (which I saw on someone's license plate in Colorado!) rather than the obvious but maybe slower MOV EAX, #0 (I use an older notation).
In asm, like perl, TMTOWTDI.
A: The test instruction does a logical AND-operation between the operands but does not write the result back into a register. Only the flags are updated.
In your example the test eax, eax will set the zero flag if eax is zero, the sign-flag if the highest bit set and some other flags as well.
The Jump if Equal (je) instruction jumps if the zero flag is set.
You can translate the code to a more readable code like this:
cmp eax, 0
je somewhere
That has the same functionality but requires some bytes more code-space. That's the reason why the compiler emitted a test instead of a compare.
A: test is like and, except it only writes FLAGS, leaving both its inputs unmodified. With two different inputs, it's useful for testing if some bits are all zero, or if at least one is set. (e.g. test al, 3 sets ZF if EAX is a multiple of 4 (and thus has both of its low 2 bits zeroed).
test eax,eax sets all flags exactly the same way that cmp eax, 0 would:
*
*CF and OF cleared (AND/TEST always does that; subtracting zero never produces a carry)
*ZF, SF and PF according to the value in EAX. (a = a&a = a-0).
(PF as usual is only set according to the low 8 bits)
Except for the obsolete AF (auxiliary-carry flag, used by ASCII/BCD instructions). TEST leaves it undefined, but CMP sets it "according to the result". Since subtracting zero can't produce a carry from the 4th to 5th bit, CMP should always clear AF.
TEST is smaller (no immediate) and sometimes faster (can macro-fuse into a compare-and-branch uop on more CPUs in more cases than CMP). That makes test the preferred idiom for comparing a register against zero. It's a peephole optimization for cmp reg,0 that you can use regardless of the semantic meaning.
The only common reason for using CMP with an immediate 0 is when you want to compare against a memory operand. For example, cmpb $0, (%esi) to check for a terminating zero byte at the end of an implicit-length C-style string.
AVX512F adds kortestw k1, k2 and AVX512DQ/BW (Skylake-X but not KNL) add ktestb/w/d/q k1, k2, which operate on AVX512 mask registers (k0..k7) but still set regular FLAGS like test does, the same way that integer OR or AND instructions do. (Sort of like SSE4 ptest or SSE ucomiss: inputs in the SIMD domain and result in integer FLAGS.)
kortestw k1,k1 is the idiomatic way to branch / cmovcc / setcc based on an AVX512 compare result, replacing SSE/AVX2 (v)pmovmskb/ps/pd + test or cmp.
Use of jz vs. je can be confusing.
jz and je are literally the same instruction, i.e. the same opcode in the machine code. They do the same thing, but have different semantic meaning for humans. Disassemblers (and typically asm output from compilers) will only ever use one, so the semantic distinction is lost.
cmp and sub set ZF when their two inputs are equal (i.e. the subtraction result is 0). je (jump if equal) is the semantically relevant synonym.
test %eax,%eax / and %eax,%eax again sets ZF when the result is zero, but there's no "equality" test. ZF after test doesn't tell you whether the two operands were equal. So jz (jump if zero) is the semantically relevant synonym.
A: If eax is zero it will perform the conditional jump, otherwise it will continue execution at 319e9
A: The meaning of test is to AND the arguments together, and check the result for zero. So this code tests if EAX is zero or not. je will jump if zero.
BTW, this generates a smaller instruction than cmp eax, 0 which is the reason that compilers will generally do it this way.
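The ZF/SF behaviour described in these answers can be mimicked in a few lines of Java (a sketch only; real FLAGS also include CF, OF and PF, which test clears or sets as described above):

```java
public class TestInsnDemo {

    // testl %eax,%eax computes eax & eax (which is just eax), discards the
    // result, and sets flags from it. These two helpers model ZF and SF.
    static boolean zeroFlag(int eax) { return (eax & eax) == 0; } // ZF: result is zero
    static boolean signFlag(int eax) { return (eax & eax) < 0; }  // SF: high bit set

    public static void main(String[] args) {
        System.out.println(zeroFlag(0));   // true  -> `je 0x000319f0` is taken
        System.out.println(zeroFlag(42));  // false -> fall through to the next instruction
        System.out.println(signFlag(-1));  // true  -> a `js` would be taken
    }
}
```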
A: In some programs this can be used to check for a buffer overflow. At the very top of the allocated space, a 0 is placed. After inputting data into the stack, it looks for the 0 at the very beginning of the allocated space, to make sure the allocated space is not overflowed.
It was used in the stack0 exercise of exploits-exercises to check whether the buffer was overflowed; if it wasn't, and the zero was still there, it would display "Try again".
0x080483f4 <main+0>: push ebp
0x080483f5 <main+1>: mov ebp,esp
0x080483f7 <main+3>: and esp,0xfffffff0
0x080483fa <main+6>: sub esp,0x60
0x080483fd <main+9>: mov DWORD PTR [esp+0x5c],0x0 ;puts a zero on stack
0x08048405 <main+17>: lea eax,[esp+0x1c]
0x08048409 <main+21>: mov DWORD PTR [esp],eax
0x0804840c <main+24>: call 0x804830c <gets@plt>
0x08048411 <main+29>: mov eax,DWORD PTR [esp+0x5c]
0x08048415 <main+33>: test eax,eax ; checks if its zero
0x08048417 <main+35>: je 0x8048427 <main+51>
0x08048419 <main+37>: mov DWORD PTR [esp],0x8048500
0x08048420 <main+44>: call 0x804832c <puts@plt>
0x08048425 <main+49>: jmp 0x8048433 <main+63>
0x08048427 <main+51>: mov DWORD PTR [esp],0x8048529
0x0804842e <main+58>: call 0x804832c <puts@plt>
0x08048433 <main+63>: leave
0x08048434 <main+64>: ret
A: Consider the conditions behind jg and jle.
For testl %edx,%edx followed by jle .L3, note that jle jumps when (SF^OF)|ZF is set. If %edx is zero, then ZF=1. If %edx is non-zero but negative (for example -1), then after the testl, OF=0 and SF=1, so the condition is again true and the jump is taken.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "129"
} |
Q: What is a good, simple way to compute ISO 8601 week number? Suppose I have a date, i.e. year, month and day, as integers. What's a good (correct), concise and fairly readable algorithm for computing the ISO 8601 week number of the week the given date falls into? I have come across some truly horrendous code that makes me think surely there must be a better way.
I'm looking to do this in Java, but pseudocode for any kind of object-oriented language is fine.
A: The joda-time library has an ISO8601 calendar, and provides this functionality:
http://joda-time.sourceforge.net/cal_iso.html
yyyy-Www-dTHH:MM:SS.SSS This format of
ISO8601 has the following fields:
* four digit weekyear, see rules below
* two digit week of year, from 01 to 53
* one digit day of week, from 1 to 7 where 1 is Monday and 7 is Sunday
* two digit hour, from 00 to 23
* two digit minute, from 00 to 59
* two digit second, from 00 to 59
* three decimal places for milliseconds if required
Weeks are always complete, and the first week of a year is the one that includes the first Thursday of the year. This definition can mean that the first week of a year starts in the previous year, and the last week finishes in the next year. The weekyear field is defined to refer to the year that owns the week, which may differ from the actual year.
The upshot of all that is, that you create a DateTime object, and call the rather confusingly (but logically) named getWeekOfWeekyear(), where a weekyear is the particular week-based definition of a year used by ISO8601.
In general, joda-time is a fantastically useful API, I've stopped using java.util.Calendar and java.util.Date entirely, except for when I need to interface with an API that uses them.
A: Just the Java.util.Calendar can do the trick:
You can create a Calendar instance and set the First Day Of the Week
and the Minimal Days In First Week
Calendar calendar = Calendar.getInstance();
calendar.setMinimalDaysInFirstWeek(4);
calendar.setFirstDayOfWeek(Calendar.MONDAY);
calendar.setTime(date);
// Now you are ready to get the week of the year.
calendar.get(Calendar.WEEK_OF_YEAR);
This is provided by the javaDoc
The week determination is compatible with the ISO 8601 standard when getFirstDayOfWeek() is MONDAY and getMinimalDaysInFirstWeek() is 4, which values are used in locales where the standard is preferred. These values can explicitly be set by calling setFirstDayOfWeek() and setMinimalDaysInFirstWeek().
A: tl;dr
LocalDate.of( 2015 , 12 , 30 )
.get (
IsoFields.WEEK_OF_WEEK_BASED_YEAR
)
53
…or…
org.threeten.extra.YearWeek.from (
LocalDate.of( 2015 , 12 , 30 )
)
2015-W53
java.time
Support for the ISO 8601 week is now built into Java 8 and later, in the java.time framework. Avoid the old and notoriously troublesome java.util.Date/.Calendar classes as they have been supplanted by java.time.
These new java.time classes include LocalDate for date-only value without time-of-day or time zone. Note that you must specify a time zone to determine ‘today’ as the date is not simultaneously the same around the world.
ZoneId zoneId = ZoneId.of ( "America/Montreal" );
ZonedDateTime now = ZonedDateTime.now ( zoneId );
Or specify the year, month, and day-of-month as suggested in the Question.
LocalDate localDate = LocalDate.of( year , month , dayOfMonth );
The IsoFields class provides info according to the ISO 8601 standard including the week-of-year for a week-based year.
int calendarYear = now.getYear();
int weekNumber = now.get ( IsoFields.WEEK_OF_WEEK_BASED_YEAR );
int weekYear = now.get ( IsoFields.WEEK_BASED_YEAR );
Near the beginning/ending of a year, the week-based-year may be ±1 different than the calendar-year. For example, notice the difference between the Gregorian and ISO 8601 calendars for the end of 2015: Weeks 52 & 1 become 52 & 53.
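The ±1 effect at the year boundary is easy to verify; 2016-01-01 fell on a Friday, so ISO 8601 assigns it to week 53 of week-based year 2015:

```java
import java.time.LocalDate;
import java.time.temporal.IsoFields;

public class IsoWeekBoundary {

    static int isoWeek(LocalDate d)     { return d.get(IsoFields.WEEK_OF_WEEK_BASED_YEAR); }
    static int isoWeekYear(LocalDate d) { return d.get(IsoFields.WEEK_BASED_YEAR); }

    public static void main(String[] args) {
        LocalDate newYear = LocalDate.of(2016, 1, 1); // a Friday
        // Calendar year 2016, but ISO week 53 of week-based year 2015:
        System.out.println(isoWeek(newYear) + " / " + isoWeekYear(newYear)); // 53 / 2015

        LocalDate midYear = LocalDate.of(2016, 6, 15);
        // Away from the boundary, the calendar year and week-based year agree:
        System.out.println(isoWeek(midYear) + " / " + isoWeekYear(midYear));
    }
}
```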
ThreeTen-Extra — YearWeek
The YearWeek class represents both the ISO 8601 week-based year number and the week number together as a single object. This class is found in the ThreeTen-Extra project. The project adds functionality to the java.time classes built into Java.
ZoneId zoneId = ZoneId.of ( "America/Montreal" );
YearWeek yw = YearWeek.now( zoneId ) ;
Generate a YearWeek from a date.
YearWeek yw = YearWeek.from (
LocalDate.of( 2015 , 12 , 30 )
)
This class can generate and parse strings in standard ISO 8601 format.
String output = yw.toString() ;
2015-W53
YearWeek yw = YearWeek.parse( "2015-W53" ) ;
You can extract the week number or the week-based-year number.
int weekNumber = yw.getWeek() ;
int weekBasedYearNumber = yw.getYear() ;
You can generate a particular date (LocalDate) by specifying a desired day-of-week to be found within that week. To specify the day-of-week, use the DayOfWeek enum built into Java 8 and later.
LocalDate ld = yw.atDay( DayOfWeek.WEDNESDAY ) ;
About java.time
The java.time framework is built into Java 8 and later. These classes supplant the troublesome old legacy date-time classes such as java.util.Date, Calendar, & SimpleDateFormat.
To learn more, see the Oracle Tutorial. And search Stack Overflow for many examples and explanations. Specification is JSR 310.
The Joda-Time project, now in maintenance mode, advises migration to the java.time classes.
You may exchange java.time objects directly with your database. Use a JDBC driver compliant with JDBC 4.2 or later. No need for strings, no need for java.sql.* classes.
Where to obtain the java.time classes?
*
*Java SE 8, Java SE 9, Java SE 10, Java SE 11, and later - Part of the standard Java API with a bundled implementation.
*
*Java 9 adds some minor features and fixes.
*Java SE 6 and Java SE 7
*
*Most of the java.time functionality is back-ported to Java 6 & 7 in ThreeTen-Backport.
*Android
*
*Later versions of Android bundle implementations of the java.time classes.
*For earlier Android (<26), the ThreeTenABP project adapts ThreeTen-Backport (mentioned above). See How to use ThreeTenABP….
The ThreeTen-Extra project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as Interval, YearWeek, YearQuarter, and more.
A: The Calendar class almost works, but the ISO week-based year does not coincide with what an "Olson's Timezone package" compliant system reports. This example from a Linux box shows how a week-based year value (2009) can differ from the actual year (2010):
$ TZ=UTC /usr/bin/date --date="2010-01-01 12:34:56" "+%a %b %d %T %Z %%Y=%Y,%%G=%G %%W=%W,%%V=%V %s"
Fri Jan 01 12:34:56 UTC %Y=2010,%G=2009 %W=00,%V=53 1262349296
But Java's Calendar class still reports 2010, although the week of the year is correct.
The Joda-Time classes mentioned by skaffman do handle this correctly:
import java.util.Calendar;
import java.util.TimeZone;
import org.joda.time.DateTime;
Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
cal.setTimeInMillis(1262349296 * 1000L);
cal.setMinimalDaysInFirstWeek(4);
cal.setFirstDayOfWeek(Calendar.MONDAY);
System.out.println(cal.get(Calendar.WEEK_OF_YEAR)); // %V
System.out.println(cal.get(Calendar.YEAR)); // %G
DateTime dt = new DateTime(1262349296 * 1000L);
System.out.println(dt.getWeekOfWeekyear()); // %V
System.out.println(dt.getWeekyear()); // %G
Running that program shows:
53
2010
53
2009
So the ISO 8601 week number is correct from Calendar, but the week-based year is not.
The man page for strftime(3) reports:
%G The ISO 8601 week-based year (see NOTES) with century as a decimal number. The
4-digit year corresponding to the ISO week number (see %V). This has the same for‐
mat and value as %Y, except that if the ISO week number belongs to the previous or
next year, that year is used instead. (TZ)
A: I believe you can use the Calendar object (just set FirstDayOfWeek to Monday and MinimalDaysInFirstWeek to 4 to get it to comply with ISO 8601) and call get(Calendar.WEEK_OF_YEAR).
A: /* Build a calendar suitable to extract ISO8601 week numbers
* (see http://en.wikipedia.org/wiki/ISO_8601_week_number) */
Calendar calendar = Calendar.getInstance();
calendar.setMinimalDaysInFirstWeek(4);
calendar.setFirstDayOfWeek(Calendar.MONDAY);
/* Set date */
calendar.setTime(date);
/* Get ISO8601 week number */
calendar.get(Calendar.WEEK_OF_YEAR);
A: If you want to be on the bleeding edge, you can take the latest drop of the JSR-310 codebase (Date Time API) which is led by Stephen Colebourne (of Joda Fame). Its a fluent interface and is effectively a bottom up re-design of Joda.
A: this is the reverse: gives you the date of the monday of the week (in perl)
use POSIX qw(mktime);
use Time::localtime;
sub monday_of_week {
my $year=shift;
my $week=shift;
my $p_date=shift;
my $seconds_1_jan=mktime(0,0,0,1,0,$year-1900,0,0,0);
my $t1=localtime($seconds_1_jan);
my $seconds_for_week;
if (@$t1[6] < 5) {
#first of january is a thursday (or below)
$seconds_for_week=$seconds_1_jan+3600*24*(7*($week-1)-@$t1[6]+1);
} else {
$seconds_for_week=$seconds_1_jan+3600*24*(7*($week-1)-@$t1[6]+8);
}
my $wt=localtime($seconds_for_week);
$$p_date=sprintf("%02d/%02d/%04d",@$wt[3],@$wt[4]+1,@$wt[5]+1900);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How can I convert my Java program to an .exe file? If I have a Java source file (*.java) or a class file (*.class), how can I convert it to a .exe file?
I also need an installer for my program.
A: I would say Launch4j is the best tool for converting Java source code (.java) to an .exe file.
You can even bundle a jre with it for distribution and the exe can even be iconified.
Although the size of application increases, it makes sure that the application will work perfectly even if the user does not have a jre installed. It also makes sure that you are able to provide the specific jre required for your app without the user having to install it separately.
But unfortunately, Java loses its importance here. Its multi-platform support is totally ignored and the final app is only supported on Windows. That is not a big deal, though, if you are catering only to Windows users.
A: As of JDK 14, jpackage replaces the javapackager mentioned in @Jay's answer. The Windows version requires WiX 3.0, and it is fairly straightforward to take a Java application and build an installer which provides an EXE launcher.
It can also be used with jlink to build a cut-down Java runtime image which is bundled with the installer and only contains the set of modules needed to support your application. The jlink step will also be run implicitly by jpackage if no runtime is specified, but I prefer to make the JRE image separately as it will only change when you update JDK or add new module dependencies to your project.
Example main for Java class:
package exe;
public class Main {
public static void main(String[] args) {
for (int i = 0; i < args.length; i++) {
System.out.println("args["+i+"]="+args[i]);
}
}
}
Here are example steps to build on Windows - obviously you'd set up your local build environment (Maven / ant / etc) to re-produce this:
mkdir jpackage.input\jars tmp
javac -d tmp src\exe\Main.java
pushd tmp && jar cvf ..\jpackage.input\jars\myapp.jar . && popd
Check it runs:
java -cp jpackage.input\jars\myapp.jar exe.Main X Y Z
Create a runtime image with jlink for the set of modules use by your application:
set jlink.modules=java.base
jlink --add-modules %jlink.modules% --strip-debug --no-man-pages --no-header-files --compress=1 --output jpackage.jre
In case there are missing modules above, you should check the jlink JRE runtime image can run your app:
jpackage.jre\bin\java -cp jpackage.input\jars\myapp.jar exe.Main X Y Z
Use jpackage to generate installer, with app version based on date+hour (this saves on need to un-install every re-install) and to print out all system properties - remove the parameter "-XshowSettings:properties" after testing:
set appver=%date:~6,2%.%date:~3,2%.%date:~0,2%%time:~0,2%
jpackage --win-console --input jpackage.input --runtime-image jpackage.jre --app-version %appver% --type exe --name "MyApp" --dest jpackage.dest --java-options "-XshowSettings:properties" --main-jar jars\myapp.jar --main-class exe.Main
Run the installer:
jpackage.dest\MyApp-%appver%.exe
Test the application:
"C:\Program Files\MyApp\MyApp.exe" ONE 2 THREE
... Prints system properties ...
args[0]=ONE
args[1]=2
args[2]=THREE
A: You can use Janel. It works as an application launcher or a service launcher (the latter available from version 4.x).
A: Alternatively, you can use some java-to-c translator (e.g., JCGO) and compile the generated C files to a native binary (.exe) file for the target platform.
A: I can be forgiven for being against converting a Java program to an .exe application, and I have my reasons. The major one is that a Java program can be compiled to a jar file from a lot of IDEs. When the program is in .jar format, it can run on multiple platforms, as opposed to an .exe, which runs only in a very limited environment. I am of the opinion that Java programs should not be converted to .exe unless it is really necessary. One can always write .bat files that run the Java program while it remains a jar file.
If it is really necessary to convert it to an .exe, the Jar2Exe converter does that silently, and one can also attach libraries that are compiled together with the main application.
A: You can convert a jar to an exe using Jar2Exe. However, you need to purchase the software. If you need an open-source alternative, I would suggest JSmooth.
A:
UPDATE: GCJ is dead. It was officially removed from the GCC project in 2016. Even before that, it was practically abandoned for seven years, and in any case it was never sufficiently complete to serve as a viable alternative Java implementation.
Go find another Java AOT compiler.
GCJ: The GNU Compiler for Java can compile Java source code into native machine code, including Windows executables.
Although not everything in Java is supported under GCJ, especially the GUI components (see the "What Java APIs are supported? How complete is the support?" question from the FAQ). I haven't used GCJ much, but from the limited testing I've done with console applications, it seems fine.
One downside of using GCJ to create a standalone executable is that the size of the resulting EXE can be quite large. One time I compiled a trivial console application in GCJ and the result was an executable of about 1 MB. (There may be ways around this that I am not aware of. Another option would be executable compression programs.)
In terms of open-source installers, the Nullsoft Scriptable Install System is a scriptable installer. If you're curious, there are user contributed examples on how to detect the presence of a JRE and install it automatically if the required JRE is not installed. (Just to let you know, I haven't used NSIS before.)
For more information on using NSIS for installing Java applications, please take a look at my response for the question "What's the best way to distribute Java applications?"
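As a rough illustration of the JRE-detection idea mentioned above, an NSIS script can look the JRE up in the Windows registry. This is only a sketch — the registry key shown applies to Oracle JREs up to Java 8, and real installers need to handle 32/64-bit views and newer vendors:

```nsis
; Sketch: detect an installed JRE via the registry (NSIS)
Function DetectJRE
  ReadRegStr $0 HKLM "SOFTWARE\JavaSoft\Java Runtime Environment" "CurrentVersion"
  StrCmp $0 "" NoJRE
  ReadRegStr $1 HKLM "SOFTWARE\JavaSoft\Java Runtime Environment\$0" "JavaHome"
  StrCmp $1 "" NoJRE
  ; $1 now holds the JRE install directory, e.g. C:\Program Files\Java\jre1.8.0_202
  Goto Done
NoJRE:
  ; No JRE found - download and install one here, or abort with a message
Done:
FunctionEnd
```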
A: javapackager
The Java Packager tool compiles, packages, and prepares Java and JavaFX applications for distribution. The javapackager command is the command-line version.
– Oracle's documentation
The javapackager utility ships with the JDK. It can generate .exe files with the -native exe flag, among many other things.
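A minimal invocation to wrap a jar as a native Windows exe might look like the following sketch (the jar name and main class are placeholders, and the exact flags vary by JDK version; javapackager shipped roughly with JDK 8–10):

```shell
rem Sketch: wrap myapp.jar as a native Windows exe with javapackager
javapackager -deploy -native exe ^
  -srcfiles myapp.jar ^
  -appclass exe.Main ^
  -name MyApp ^
  -outdir out -outfile MyApp
```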
WinRun4J
WinRun4j is a java launcher for windows. It is an alternative to javaw.exe and provides the following benefits:
*
*Uses an INI file for specifying classpath, main class, vm args, program args.
*Custom executable name that appears in task manager.
*Additional JVM args for more flexible memory use.
*Built-in icon replacer for custom icon.
*[more bullet points follow]
– WinRun4J's webpage
WinRun4J is an open source utility. It has many features.
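A minimal INI file for the launcher might look like this sketch (key names follow WinRun4J's documented format; the jar path and main class are placeholders):

```ini
; myapp.ini - sits next to myapp.exe (the renamed WinRun4J launcher)
main.class=exe.Main
classpath.1=jars\myapp.jar
vm.version.min=1.6
vmarg.1=-Xmx512m
arg.1=--default-argument
```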
packr
Packages your JAR, assets and a JVM for distribution on Windows, Linux and Mac OS X, adding a native executable file to make it appear like a native app. Packr is most suitable for GUI applications.
– packr README
packr is another open source tool.
JSmooth
JSmooth is a Java Executable Wrapper. It creates native Windows launchers (standard .exe) for your java applications. It makes java deployment much smoother and user-friendly, as it is able to find any installed Java VM by itself.
– JSmooth's website
JSmooth is open source and has features, but it is very old. The last release was in 2007.
JexePack
JexePack is a command line tool (great for automated scripting) that allows you to package your Java application (class files), optionally along with its resources (like GIF/JPG/TXT/etc), into a single compressed 32-bit Windows EXE, which runs using Sun's Java Runtime Environment. Both console and windowed applications are supported.
– JexePack's website
JexePack is trialware. Payment is required for production use, and exe files created with this tool will display "reminders" without payment. Also, the last release was in 2013.
InstallAnywhere
InstallAnywhere makes it easy for developers to create professional installation software for any platform. With InstallAnywhere, you’ll adapt to industry changes quickly, get to market faster and deliver an engaging customer experience. And know the vulnerability of your project’s OSS components before you ship.
– InstallAnywhere's website
InstallAnywhere is a commercial/enterprise package that generates installers for Java-based programs. It's probably capable of creating .exe files.
Executable JAR files
As an alternative to .exe files, you can create a JAR file that automatically runs when double-clicked, by adding an entry point to the JAR manifest.
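For instance, the JDK's jar tool can set the entry point directly with the `e` option, so you don't need to write a manifest by hand (the class name and directory here are placeholders):

```shell
rem Build an executable jar whose manifest names exe.Main as the entry point
jar cfe MyApp.jar exe.Main -C classes .

rem Double-clicking MyApp.jar then launches it (if a JRE is associated), or run:
java -jar MyApp.jar
```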
For more information
An excellent source of information on this topic is Excelsior's article "Convert Java to EXE – Why, When, When Not and How".
See also the companion article "Best JAR to EXE Conversion Tools, Free and Commercial".
A: You could make a batch file with the following code:
start javaw -jar JarFile.jar
and convert the .bat to an .exe using any .bat to .exe converter.
A: We're using Install4J to build installers for windows or unix environments.
It's easily customizable up to the point where you want to write scripts for special actions that cannot be done with standard dialogues. But even though we're setting up windows services with it, we're only using standard components.
*
*installer + launcher
*windows or unix
*scriptable in Java
*ant task
*lots of customizable standard panels and actions
*optionally includes or downloads a JRE
*can also launch windows services
*multiple languages
I think Launch4J is from the same company (just the launcher - no installer).
PS: Sadly, I'm not getting paid for this endorsement. I just like the tool.
A: The latest Java Web Start has been enhanced to allow good offline operation as well as allowing "local installation". It is worth looking into.
EDIT 2018: Java Web Start is no longer bundled with the newest JDK's. Oracle is pushing towards a "deploy your app locally with an enclosed JRE" model instead.
A: Launch4j
Launch4j is a cross-platform tool for wrapping Java applications distributed as jars in lightweight Windows native executables. The executable can be configured to search for a certain JRE version or use a bundled one, and it's possible to set runtime options, like the initial/max heap size. The wrapper also provides better user experience through an application icon, a native pre-JRE splash screen, a custom process name, and a Java download page in case the appropriate JRE cannot be found.
– Launch4j's website
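A bare-bones Launch4j configuration might look like the following sketch (element names follow Launch4j's XML config format; the file names and version are placeholders):

```xml
<!-- launch4j.xml - wrap MyApp.jar as MyApp.exe -->
<launch4jConfig>
  <headerType>gui</headerType>
  <jar>MyApp.jar</jar>
  <outfile>MyApp.exe</outfile>
  <errTitle>MyApp</errTitle>
  <icon>myapp.ico</icon>
  <jre>
    <minVersion>1.8.0</minVersion>
  </jre>
</launch4jConfig>
```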
A: IMHO JSmooth seems to do a pretty good job.
A: If you need to convert your entire application to native code, i.e. an EXE plus DLLs, there is ExcelsiorJET. I found it works well and provided an alternative to bundling a JRE.
EDIT: This was posted in 2010 - the product is no longer available.
A: Cautionary note: Much has changed with packaging and deployment since this question was first asked. Most of the answers given are not even current with JIGSAW (Java 9+).
If the goal is to create OS specific packages, information is provided in Oracle Docs Java 17 Packaging Tool User Guide. This guide includes documentation for the jpackage tool, which allows one to create platform-specific packages for Linux, macOS and Windows. I assume the Windows-specific instructions should include arriving at an .exe file, since that remains the most familiar way for Windows users to install and run applications.
My own personal experience creating an exe (for sale on itch.io) was with the Java Platform, Standard Edition Deployment Guide, which included making use of the tool Inno Setup 5. This documentation is older, and is for Java 9. The section directly pertaining to .exe packaging is located there. As a first step, I used jlink to make a self-contained package. At the time I was first wrangling with this, I was unable to figure out how to get jpackage to work with my modular program. But now that Jigsaw has been around for several years, jpackage is likely much easier to use, and it would be my first choice for the next Java app I might publish for Windows users.
A: Java projects are exported as executable jars. When you want a .exe file from a Java project, what you can do is "convert" the jar to an EXE (I put "convert" in quotes because that isn't exactly what happens).
From IntelliJ you will only be able to generate the jar.
Try following this example: https://www.genuinecoder.com/convert-java-jar-to-exe/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/147181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "572"
} |