Q: Customized command line parsing in Python I'm writing a shell for a project of mine, which by design parses commands that look like this:
COMMAND_NAME ARG1="Long Value" ARG2=123 ARG3=me@me.com
My problem is that Python's command line parsing libraries (getopt and optparse) force me to use '-' or '--' in front of the arguments. This behavior doesn't match my requirements.
Any ideas how this can be solved? Any existing library for this?
A:
* Try to follow "Standards for Command Line Interfaces"
* Convert your arguments (as Thomas suggested) to OptionParser format:
parser.parse_args(["--"+p if "=" in p else p for p in sys.argv[1:]])
If the command-line arguments are not in sys.argv or a similar list but in a string, then (as ironfroggy suggested) use shlex.split():
parser.parse_args(["--"+p if "=" in p else p for p in shlex.split(argsline)])
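Combining both suggestions gives a small end-to-end sketch (the option names come from the question; the command string is illustrative):

```python
import shlex
import optparse  # argparse supersedes this, but optparse matches the answer above

parser = optparse.OptionParser()
parser.add_option("--ARG1")
parser.add_option("--ARG2")
parser.add_option("--ARG3")

line = 'COMMAND_NAME ARG1="Long Value" ARG2=123 ARG3=me@me.com'
tokens = shlex.split(line)            # shlex keeps "Long Value" as one token
command, rest = tokens[0], tokens[1:]

# Prefix each KEY=value token with "--" so OptionParser will accept it.
options, _ = parser.parse_args(["--" + p if "=" in p else p for p in rest])
print(command)       # COMMAND_NAME
print(options.ARG1)  # Long Value
```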
A: A small pythonic variation on ironfroggy's shlex answer:
args = dict( arg.split('=', 1) for arg in shlex.split(cmdln_args) )
oops... - corrected.
thanks, J.F. Sebastian
(got to remember those single argument generator expressions).
A: You could split them up with shlex.split(), which can handle the quoted values you have, and pretty easily parse this with a very simple regular expression. Or, you can just use regular expressions for both splitting and parsing. Or simply use split().
args = {}
for arg in shlex.split(cmdln_args):
    key, value = arg.split('=', 1)
    args[key] = value
A: What about optmatch (http://www.coderazzi.net/python/optmatch/index.htm)? It's not standard, but it takes a different approach to options parsing, and it supports any prefix:
OptionMatcher.setMode(optionPrefix='-')
A: Without fairly intensive surgery on optparse or getopt, I don't believe you can sensibly make them parse your format. You can easily parse your own format, though, or translate it into something optparse could handle:
parser = optparse.OptionParser()
parser.add_option("--ARG1", dest="arg1", help="....")
parser.add_option(...)
...
newargs = sys.argv[:1]
for idx, arg in enumerate(sys.argv[1:]):
    parts = arg.split('=', 1)
    if len(parts) < 2:
        # End of options, don't translate the rest.
        newargs.extend(sys.argv[idx+1:])
        break
    argname, argvalue = parts
    newargs.extend(["--%s" % argname, argvalue])
parser.parse_args(newargs)
A: Little late to the party... but PEP 389 allows for this and much more.
Here's a nice little library, should your version of Python need it: code.google.com/p/argparse
Enjoy.
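argparse doesn't accept bare KEY=value tokens either, but catching them as positionals makes the split trivial. A hedged sketch (the command name and keys are illustrative):

```python
import argparse
import shlex

parser = argparse.ArgumentParser()
parser.add_argument("command")                 # e.g. COMMAND_NAME
parser.add_argument("assignments", nargs="*")  # the KEY=value tokens

ns = parser.parse_args(shlex.split('SEND ARG1="Long Value" ARG2=123'))
args = dict(a.split("=", 1) for a in ns.assignments)
print(ns.command, args)  # SEND {'ARG1': 'Long Value', 'ARG2': '123'}
```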
A: You may be interested in a little Python module I wrote to make handling of command line arguments even easier (open source and free to use) - http://freshmeat.net/projects/commando
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Controlling the WCF XmlSerializer I have some REST web services implemented in WCF. I wish to make these services return "Bad Request" when the xml contains invalid elements.
The xml serialization is being handled by XmlSerializer. By default XmlSerializer ignores unknown elements. I know it is possible to hook XmlSerializer.UnknownElement and throw an exception from this handler, but because this is in WCF I have no control over serialization. Any ideas how I might implement this behavior?
A: "I know it is possible to hook XmlSerializer.UnknownElement and throw an exception from this handler, but because this is in WCF I have no control over serialization"
It's actually possible to do this...
In a WCF project that I worked on, we did something similar using the IDispatchMessageFormatter interface.
More information can be found here http://nayyeri.net/blog/use-idispatchmessageformatter-and-iclientmessageformatter-to-customize-messages-in-wcf/
It allows you to peek at the message headers, control serialization/deserialization, return status codes, etc.
A: Maybe you can return your own type implementing IXmlSerializable and throw the exception you want in the ReadXml and WriteXml methods...
A: This is from vague memory as I don't have all the code to hand, but you can create a custom Message type (inherit from the class "Message") to return in your REST services and override certain methods to create custom responses.
protected override void OnWriteMessage(XmlDictionaryWriter writer)
{
...
}
protected override void OnWriteStartEnvelope(XmlDictionaryWriter writer)
{
...
}
protected override void OnWriteStartBody(XmlDictionaryWriter writer)
{
...
}
protected override void OnWriteBodyContents(XmlDictionaryWriter writer)
{
...
}
Not a complete answer, but might push you down the right path.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: can I configure hibernate properties to connect without using an instance name to sql server 2005? Can I configure hibernate properties to connect without using an instance name to sql server 2005? I need to force it to use localhost as the hostname and not specify the instance (same as you can do with the sql server enterprise manager).
Ta!
T
A: Yes, you can. You set it up as you normally would in the connection string:
Server=(local);initial catalog=MyDataBase;Integrated Security=SSPI
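That is an ADO.NET-style connection string, so it applies if you are on NHibernate. For Java Hibernate the same idea, host only with no instance name, would look something like this in hibernate.properties (the driver class and port are assumptions based on the Microsoft JDBC driver; adjust to whatever driver you actually use):

```
hibernate.connection.driver_class=com.microsoft.sqlserver.jdbc.SQLServerDriver
# no instance name: connect to localhost on the default port instead
hibernate.connection.url=jdbc:sqlserver://localhost:1433;databaseName=MyDataBase
hibernate.dialect=org.hibernate.dialect.SQLServerDialect
```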
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How would you transform a pre-existing web app into a multilingual one? I am going to work on a project where a fairly large web app needs to be tweaked to handle several languages. The thing runs on hand-crafted PHP code, but it's pretty clean.
I was wondering what would be the best way to do that?
* Making something on my own, trying to fit the actual architecture.
* Rewriting a good part of it using a framework (e.g., Symfony) that will manage i18n for me?
For option 1, where should I store the i18n data? *.po, xliff, pure DB?
I thought about an alternative: using Symfony only for the translation, but setting the controller to load the website as it already is. Quick, but dirty. On the other hand, it allows us to make the next modification, moving slowly to full Symfony: this web site is really a good candidate for that.
But maybe there are some standalone translation engines that would do the job better than an entire web framework. It's a bit like using a bazooka to kill a fly...
A: It is important to notice that there are two steps involved before translating:
* Internationalization: that is, enabling your site to handle multiple languages
* Localization: this includes translating your texts (obtained in step 1) to each language you plan to support
See more on this in Wikipedia.
Step 1 would require you to take into account the fact that some languages are written right to left (RTL) and that some use non-European characters, such as Japanese or Chinese. If you are not planning to handle these languages and characters, it might be simpler.
For this type of situation I would prefer to have a language file (actually as many language files as languages I plan to support, naming each as langcode.php as in en.php or fr.php) with an associative array containing all the texts used in the site. The procedure would go as follows:
* Scan your site for every single text that should be localized
* For each page/section I would create a $lang['sectionname'][] array
* For each text I would create a $lang['sectionname']['textname'] entry
* I would create a Lang.php class that would receive a lang parameter upon instantiation but would have a default in case no lang is received (this method loads langcode.php depending on the parameter, or a default depending on your preferred language)
* The class would have a setPage() method that would receive the page/section you will be displaying
* The class would have a show() method that would receive the text to be displayed (show() would be called as many times as texts are shown in a given page... show() being a kind of wrapper for echo $lang['mypage']['mytext'])
This way you could have as many languages as you want in a very easy way. You could even have a language admin where you open your base language page (you actually just read recursively the arrays and display them in textareas) and can then "Save as..." some other language.
I use a similar approach in my site. It is only one page though but I have made multi-page sites with this idea.
If you have user-submitted content or some rather complicated CMS it would be a different story. You could look for i18n-friendly frameworks (Drupal comes to mind).
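The Lang class described above can be sketched in a few lines, shown in Python for brevity (the original would be PHP; all names and strings are illustrative):

```python
# en.py / fr.py would each hold one nested array like this:
LANG = {
    "home": {"title": "Welcome", "login": "Log in"},
    "about": {"title": "About us"},
}

class Lang:
    def __init__(self, strings, default_page="home"):
        self.strings = strings   # loaded from en.py, fr.py, ... by language code
        self.page = default_page

    def set_page(self, page):
        """Equivalent of the setPage() method described above."""
        self.page = page

    def show(self, name):
        """Thin wrapper around echo $lang['mypage']['mytext']."""
        return self.strings[self.page][name]

lang = Lang(LANG)
lang.set_page("about")
print(lang.show("title"))  # About us
```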
A: You could look at Zend_Translate, it's a pretty comprehensive, well documented and overall code quality is great. It also allows you to use a unified API for gettext, csv, db, ini file, array or whatever you end up saving your translated strings in.
Also, look at/watch this thread: What are good tools/frameworks for i18n of a php codebase?. It seems similar to your question.
A: Work with language files.
* Replace each text string by a variable.
* Create one language file per language and in it define each variable with its corresponding text (french.inc, dutch.inc, ...).
* Include the right file in each page.
That's for small sites.
If getting bigger, replace the files by a DB. :)
A: There are a number of ways of tackling this. None of them "the best way" and all of them with problems in the short term or the long term.
The very first thing to say is that multi lingual sites are not easy, translators and lovely people but hard to work with and most programmers see the problem as a technical one only. There is also another dimension, outside the scope of this answer, as to whether you are translating or localising. This involves looking at the target audiences cultural mores and then tailoring language, style, layout, colour, typeface etc., to that culture. Finally do not use MT, Machine Translation, for anything serious or if it needs to be accurate and when acquiring translators ensure that they are translating from a foreign language into their native language which means that they understand all the nuances of the target language.
Right. Solutions. On the basis that you do not want to rewrite the site then simply clone the site you have and translate the copies to the target language. Assuming the code base is stable you can use a VCS to manage any code changes. You can tweak individual parts of the site to fit the target language, for example French text is on average 30% larger than the equivalent English text so using one site to deliver this means you may (will) have formatting problems and need to swap a different css file in and out depending on the language. It might seem a clunky way to do it but then how long are the sites going to exist? The management overhead of doing it this way may well be less than other options.
Second way, without rebuilding: replace all content in the current site with tags, put the different languages in files or DB tables, sniff the user's desired language (do you have registered users who can set a preference, do you want to read the browser language tag, or is the choice made by the URL: dot-com, dot-fr, dot-de?), and then replace the tags with the target language. Then you need to address the sizing issues and the image issues separately. This solution is in effect what frameworks like Symfony and Zend do to implement l10n.
Then you could rebuild with a framework or with gettext and possibly have a cleaner solution, but remember that frameworks were designed to solve other problems, not translation, and the translation component has come into the framework as a partial solution, not a full one.
The big problem with all the solutions is ongoing maintenance, because you have not only a code base but also multiple language bases to maintain. Unless your all-in-one solution is really clever and effective, the ongoing task will be difficult.
A: If it's multi-byte character support then it might be worth checking out the multibyte string functions in PHP:
http://uk.php.net/manual/en/book.mbstring.php
These will better handle multi-byte characters.
A: I use the hl parameter and gettext, combining the engine translations that are already there with my own .po files, so that new translations and languages appear as soon as either the engine or my app adds them. A django/GAE example:
{% get_current_language as LANGUAGE_CODE %}{{ LANGUAGE_CODE }}
{% get_available_languages as LANGUAGES %}
{% for LANGUAGE in LANGUAGES %}{% ifnotequal LANGUAGE_CODE LANGUAGE.0 %}{{ LANGUAGE.0 }}{% endifnotequal %}{% endfor %}
This avoids duplication and fully reuses the translations that are already there, so missing strings (e.g. Arabic month names) appear directly when either the engine team or the app adds them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Sending a keyboard event from java to any application (on-screen-keyboard) I am working on developing an on-screen keyboard with java. This keyboard has a JComponent for every possible key. When a mouse down is detected on the button, I want to send a specific keyboard code to the application currently on focus. The keyboard itself is within a JFrame with no decorations and set to always-on-top.
I found that the Robot class can be used to simulate these keyboard events on the native queue. However, in this case, selecting the JComponent would mean that the key-press is received on the JFrame, and I wouldn't be able to receive it in the other application.
How can I keep my on-screen keyboard "Always-without-focus"? Is it maybe possible to use another approach to send the key-press?
A: I found jnativehook when I was trying to control a gamebot with actual keyboard and mouse commands (to be more "human-like").
A: The only solution I could find so far is to make every key a JComponent (so it cannot have focus), and set the following properties on the JFrame:
setUndecorated(true);
setFocusableWindowState(false);
setFocusable(false);
enableInputMethods(false);
Now when using the Robot class I can send events to any focused window by clicking on the keys. The only limitation is that it only seems to work for windows that belong to the same virtual machine, and it doesn't work at all with any other system window.
A: Apparently the only way to do this is to have a JNI layer that will make the conversion from Java to native. Java has no easy way to provide such functionality.
This could be an interesting concept for a small, third party library for someone who wants to learn JNI...
A: Have you tried calling the native setfocus() or setactivewindow() functions to move the focus before you use the Robot class?
A: I am not aware of any way of doing this in an OS-independent way. I don't know about Windows, but it would be possible to talk to the X server over the X protocol.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: File system 'overlays' Concrete use case:
In the Eclipse IDE, new 'plugins' can be added by copying a plugin's file(s) into the $ECLIPSE_HOME/plugins directory. However, I want to keep my original Eclipse installation 'clean' without additional plugins because I want to run this basic installation on its own at times.
What is a way of avoiding having to copy the files (and hence therefore not being able to run a clean version) and instead logically 'overlaying' the contents of another directory so that it appears to be in the directory at runtime?
e.g. something like:
gravelld@gravelld-laptop:~$ ls $ECLIPSE_HOME/plugins/
org.junit_3.8.2.v200706111738
org.junit4_4.3.1
org.junit.source_3.8.2.v200706111738
gravelld@gravelld-laptop:~$ ls myplugins/
org.dangravell.myplugin.jar
gravelld@gravelld-laptop:~$ overlay myplugins/ $ECLIPSE_HOME/plugins
gravelld@gravelld-laptop:~$ ls $ECLIPSE_HOME/plugins/
org.dangravell.myplugin.jar
org.junit_3.8.2.v200706111738
org.junit4_4.3.1
org.junit.source_3.8.2.v200706111738
Another use case may be around patching and so on...
Can something be done with symbolic links or mounts for this?
Thanks!
A: You could use an overlay filesystem for this. The three overlay filesystems that I know of in Linux are unionfs, aufs, and minifo.
Unionfs is included in recent Ubuntu kernels.
A: Have a look at the "Manage your Eclipse environment" article, especially Method 3:
Creating a links folder to manage product extensions
If you have product extensions sitting on your file system, like the one we made in Method 1, you can create a few simple files in your Eclipse program directory to notify Eclipse that it needs to check these directories for plug-ins.
First, create a directory inside your Eclipse installation folder (for example, /opt/eclipse) called links. Within this folder, you can create *.link files (for example, emfPlugins.link). Each link file points to a product extension location. Eclipse will scan this links folder on startup and find the plug-ins in each product extension pointed to by a link file.
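For reference, a *.link file holds a single path= line pointing at the product-extension directory (the path below is an example; the directory it names is expected to contain an eclipse/plugins layout):

```
path=/opt/emfPlugins
```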
This is still supported in Eclipse 3.4 even though the new p2 provisioning system is quite different.
Now that the "'links' directory mechanism" is known, the difference between a vanilla Eclipse and an Eclipse with custom common plugins is just the presence of that 'links' directory.
So, why not have a 'vanilla Eclipse distribution' with a symbolic link inside, 'links', pointing to ../links?
Any user getting that vanilla Eclipse would at first have no 'links' directory alongside it, so it will run as a vanilla distribution. But as soon as the user creates a links directory or makes another symbolic link to a common remote 'links' directory, that same distribution will pick up the common plugins from the remote directory...
/path/links -> /remote/links/commonPlugins
/eclipse/links -> ../links
Finally, if you create "/remote/links/commonPlugins" with a given group "aGroup" and protect it with a '750' mask, you have yourself one Eclipse setup which will be:
* a vanilla Eclipse for any user whose 'id -a' does not include 'aGroup'
* an Eclipse with plugins for any user who is part of "aGroup"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Case-insensitive Glob on zsh/bash I need to list all files whose names start with 'SomeLongString'. But the case of 'SomeLongString' can vary. How?
I am using zsh, but a bash solution is also welcome.
A: bash:
shopt -s nocaseglob
A: ZSH:
$ unsetopt CASE_GLOB
Or, if you don't want to enable case-insensitive globbing in general, you can activate it for only the varying part:
$ print -l (#i)(somelongstring)*
This will match any file that starts with "somelongstring" (in any combination of lower/upper case). The case-insensitive flag applies for everything between the parentheses and can be used multiple times. Read the manual zshexpn(1) for more information.
UPDATE
Almost forgot: you have to enable extended globbing for this to work:
setopt extendedglob
A:
$ function i () {
> shopt -s nocaseglob; $*; shopt -u nocaseglob
> }
$ ls *jtweet*
ls: cannot access *jtweet*: No such file or directory
$ i ls *jtweet*
JTweet.pm JTweet.pm~ JTweet2.pm JTweet2.pm~
A: Depending on how deep you want to have this listing, find offers quite a lot
in this regard:
find . -maxdepth 1 -iname 'SomeLongString*'
This will only give you the files in the current directory. Important here is
the -iname parameter instead of -name.
A: For completeness (and frankly surprised it's not mentioned yet, even though all the other answers are better and/or "more correct"), obviously one can also use (especially for grep aficionados):
$ ls | egrep -i '^SomeLongString'
One might also stick in a redundant ls -1 (that's option "one", not "ell"), but when passed to a pipe, each entry is already going to be one per line anyway. I'd typically use something like this (vs set) in shell scripts, e.g. in a for/while loop: for i in $(ls | grep -i ...). However, the other answer using find would be preferable and more flexible in that circumstance, because you can, for example, omit directories (or set other restrictions): for i in $(find . -maxdepth 1 -type f -iname 'SomeString*' -print)... or even forgo the loop altogether and just use the power of find all by itself, e.g.: find ... -exec do_stuff {} \; ... , but I do digress (again, for completeness).
A: For completeness, a long, full solution (creating thumbnails from a list of camera images):
_shopt="$( shopt -p )"
shopt -s nocaseglob
for f in *.jpg; do
convert "$f" -auto-orient -resize "1280x1280>" -sharpen 8 jpeg:"$( basename "$f" ".${f##*.}" ).shelf.jpg"
done
eval "$_shopt"
Since we don't know the exact extension case (.jpg or .JPG), we derive it from the name itself by stripping the prefix up to (and including) the last dot.
The -auto-orient option will take care of image orientation so that thumbnails would be viewed correctly on any device.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: Classic ASP Intranet and New ASP.NET Applications We have an existing classic ASP intranet consisting of hundreds of pages. Its directory structure looks like this...
/root
app_1
app_2
...
img
js
style
Obviously app_1 and so on have better names in the actual directory structure.
Even though the many applications have different behaviour, they are all part of the same intranet and therefore share a common look and feel by including stylesheets via /style, images via /img and client script via /js.
The trouble (for me at least) comes when I want to add an intranet application in ASP.NET.
Ultimately, I'd like this structure:
/root
app_1
app_2
dotnetapp_1
dotnetapp_2
...
img
js
style
It seems to me that ASP.NET "applications" like to think of themselves as separate from everything around them (this may just be my misunderstanding of how they work). You create a new "project" in Visual Studio and it's like you have a new "root" a level below the actual root I want to use. It's like this new application is a thing, standing alone, with its own images and style and whatnot. However, I want it to be a sub-part of the existing intranet.
Ultimately I want to be able to make my whole classic ASP intranet the "root" and have ASP.NET "sub-applications" that can still access /style and /img and, I guess for ASP.NET I'll have /masterpages.
I've tried this before, but I think VS choked on the couple of hundred classic ASP pages that it added to the "project" when I made my existing intranet root directory the ASP.NET project root (via File->Open->Web Site). It'd be nice to edit my existing classic ASP intranet using VS 2008 SP1 (I currently use the excellent Notepad++) because I'd like to get more hands-on with VS, but I guess this isn't absolutely necessary.
I also tried treating each new ASP.NET application as an application in its own right, effectively making the /dotnetapp_1 directory the "root" of the application (again, via File->Open->Web Site in VS2008). However, VS then complained when I tried to reference /masterpages because it "belonged to another application." I think I kludged it by adding a virtual directory inside each ASP.NET directory that "pointed" to the root /masterpages but I'm not sure VS was able to happily provide WYSIWYG editing when I did this, as opposed to making a copy of the masterpage in every ASP.NET application I add to the intranet.
I'm also quite likely to visit the .NET MVC framework, so please offer any answers with that framework in mind. I'm hoping "projects" aren't quite so important with MVC and that rather it's just a bunch of files that creates an application that contributes to the whole (that being the intranet).
So, the question is: how can I best add ASP.NET applications to an existing classic ASP intranet (I'm not concerned about the technicalities of session sharing between classic ASP and ASP.NET, only the structural layout of directories and projects) and be able to edit these separate applications in Visual Studio 2008 SP1, yet have these applications "related" to each other by a common intranet look and feel*?
*Please don't just post the answer "use MasterPages." I appreciate that MasterPages are .NET's method of sharing styles (and more, probably) between related pages in the same application. I get that. What I'm looking for is the best method of adding ASP.NET applications into the existing intranet as smoothly as I can, one that makes editing each application simple and where each application can share (if possible) an intranet-common style.
A: One solution would be to use IIS Manager to configure the website (created for your ASP.NET app by Visual Studio) and add a virtual directory for each of the common folders so that (by the 'virtual' nature of the virtual directory) they will 'appear' to be in the same root folder as your ASP.NET app.
/root
app_1
app_2
dotnetapp_1
<virtual>img
<virtual>js
...
img
js
style
You can probably script IIS or edit an XML file if you need to do this in bulk, and I'm sure there must be an even more elegant way to do something similar that doesn't require too much mouse work!
A: Since ASP.NET is now the Microsoft standard web development platform, you might be better off starting with a brand-new ASP.NET web site and importing your existing ASP pages and folders into it. Once you have that set up and running, adding new functionality in ASP.NET will be a snap.
Microsoft understood that some customers would be mixing classic ASP and ASP.NET for a while, and they accommodated this need by making classic ASP pages work within an ASP.NET site. What you're trying to do is the reverse of this (get ASP.NET to work within ASP, sorta), and you're already running into difficulties.
A: Can't you run the asp.net site as a Virtual Directory?
www.site.com/dotnetapp/
Where dotnetapp is a virtual directory completely separate?
A: Couple different ways to go about this:
* If your listed folder structure is the desired web folder structure, then any of your ASP.NET applications can simply reference /root/js/whatever.js or ../js/whatever.js and it'll get to the folders you already have at the root. This is an alternative to the virtual directories solution already mentioned. This is a rather messy solution, which some of the projects I've inherited at my current job use.
* At a previous job I managed a hybrid Classic ASP/ASP.NET application for a couple of years, and I did it by making an ASP.NET Web Application at the root and moving all my ASP pages into it. Any "sub-applications" you want to make should just be folders in this single project. [ASP pages don't have color-coding in VS2008 anymore though, which was really annoying.]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I expose iterators without exposing the container used? I have been using C# for a while now, and going back to C++ is a headache. I am trying to get some of my practices from C# with me to C++, but I am finding some resistance and I would be glad to accept your help.
I would like to expose an iterator for a class like this:
template <class T>
class MyContainer
{
public:
    // Here is the problem:
    // typedef for MyIterator without exposing std::vector publicly?
    MyIterator Begin() { return mHiddenContainerImpl.begin(); }
    MyIterator End() { return mHiddenContainerImpl.end(); }

private:
    std::vector<T> mHiddenContainerImpl;
};
Am I trying to solve something that isn't a problem? Should I just typedef std::vector<T>::iterator? I am hoping to depend only on the iterator, not on the implementing container...
A: You may find the following article interesting as it addresses exactly the problem you have posted: On the Tension Between Object-Oriented and Generic Programming in C++ and What Type Erasure Can Do About It
A: I have done the following before so that I got an iterator that was independent of the container. This may have been overkill since I could also have used an API where the caller passes in a vector<T*>& that should be populated with all the elements and then the caller can just iterate from the vector directly.
template <class T>
class IterImpl
{
public:
    virtual ~IterImpl() {} // needed: Iter deletes through this base pointer
    virtual T* next() = 0;
};

template <class T>
class Iter
{
public:
    Iter( IterImpl<T>* pImpl ):mpImpl(pImpl) {};
    Iter( Iter<T>& rIter ):mpImpl(rIter.mpImpl)
    {
        rIter.mpImpl = 0; // take ownership
    }
    ~Iter() {
        delete mpImpl; // does nothing if it is 0
    }
    T* next() {
        return mpImpl->next();
    }
private:
    IterImpl<T>* mpImpl;
};

template <class C, class T>
class IterImplStl : public IterImpl<T>
{
public:
    IterImplStl( C& rC )
        :mrC( rC ),
        curr( rC.begin() )
    {}
    virtual T* next()
    {
        if ( curr == mrC.end() ) return 0;
        T* pResult = &*curr;
        ++curr;
        return pResult;
    }
private:
    C& mrC;
    typename C::iterator curr;
};
class Widget;

// in the base class we do not need to include Widget
class TestBase
{
public:
    virtual Iter<Widget> getIter() = 0;
};

#include <vector>
#include <iostream>

class Widget
{
public:
    int px;
    int py;
};

class Test : public TestBase
{
public:
    typedef std::vector<Widget> WidgetVec;

    virtual Iter<Widget> getIter() {
        return Iter<Widget>( new IterImplStl<WidgetVec, Widget>( mVec ) );
    }

    void add( int px, int py )
    {
        mVec.push_back( Widget() );
        mVec.back().px = px;
        mVec.back().py = py;
    }
private:
    WidgetVec mVec;
};

void testFn()
{
    Test t;
    t.add( 3, 4 );
    t.add( 2, 5 );
    TestBase* tB = &t;
    Iter<Widget> iter = tB->getIter();
    Widget* pW;
    while ( (pW = iter.next()) )
    {
        std::cout << "px: " << pW->px << " py: " << pW->py << std::endl;
    }
}
A: This should do what you want:
typedef typename std::vector<T>::iterator MyIterator;
From Accelerated C++:
Whenever you have a type, such as vector<T>, that depends on a template parameter, and you want to use a member of that type, such as size_type, that is itself a type, you must precede the entire name by typename to let the implementation know to treat the name as a type.
A: I am unsure about what you mean by "not exposing std::vector publicly" but indeed, you can just define your typedef like that:
typedef typename std::vector<T>::iterator iterator;
typedef typename std::vector<T>::const_iterator const_iterator; // To work with constant references
You will be able to change these typedefs later without the user noticing anything ...
By the way, it is considered good practice to also expose a few other types if you want your class to behave as a container:
typedef typename std::vector<T>::size_type size_type;
typedef typename std::vector<T>::difference_type difference_type;
typedef typename std::vector<T>::pointer pointer;
typedef typename std::vector<T>::reference reference;
And if needed by your class:
typedef typename std::vector<T>::const_pointer const_pointer;
typedef typename std::vector<T>::const_reference const_reference;
You'll find the meaning of all these typedef's here: STL documentation on vectors
Edit: Added the typename as suggested in the comments
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Is the web hosting location important these days? I was recently looking at some web hosting solutions and some of the providers offered various hosting locations e.g. US or UK based servers.
My question is: does it really make a difference from the performance point of view?
Lets say that I am expecting most of the traffic coming from continental Europe?
Would the fact that the servers are based in the UK make a bigger difference if the traffic was coming from the UK?
Any pros and cons of having a website hosted in the same county as the most of the expected traffic?
A: Yes, distance = latency = slower. That's why Google, Amazon, and the other big sites have multiple datacenters in different regions and even continents.
A: Yes, obviously it does matter to some degree.
This degree depends on the level of your site optimization (size of the pages, usage of AJAX, Flash etc)
Example from my experience: round-trip from Russia to the USA is 200ms. It does not make any difference for a small web site optimized for performance, but it makes a huge usability difference for a SmartClient accessing the Web API of this site.
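As a back-of-the-envelope illustration of how a 200ms round-trip adds up for a chatty client (the numbers here are my own, purely for illustration): a rich client making many sequential API calls pays the full round-trip on every call.

```python
# Illustrative latency arithmetic: with a 200 ms round-trip time, a
# "chatty" client issuing sequential requests pays the full RTT each time.
rtt_ms = 200           # round-trip time per request (assumed)
sequential_calls = 20  # e.g. a SmartClient making 20 API calls in a row

total_ms = rtt_ms * sequential_calls
print(total_ms)  # 4000 ms -- four full seconds of pure network latency
```

This is why latency matters far more for request-heavy clients than for a single optimized page load.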
A: Performance is one consideration, support is the other.
After a few different experiences we chose a provider in our time zone. Although most providers claim 24/7 support it is a very different deal in the middle of their business day than the middle of their night.
If you can, I say go local.
A: Also check the details of the hosting plan for expenses.
Here in Hungary most providers give a bigger bandwidth to the national net than to foreign countries. Let's say you buy a plan and you have a 100 Mb/s connection within the country, but only a 10 Mb/s connection to outside the country. This is because internal bandwidth is cheaper for them than international bandwidth.
So there is a benefit to locating the server in the country that uses it the most.
A: It's a big deal for Iceland since the fiber connection to europe is much much larger than to the USA. So it depends on variables like that.
A: Another example: students in New Zealand universities must pay more to access "international" websites over domestic ones (University of Canterbury, for example).
Might not be relevant to you, but illustrates that location can be factor!
A: Yes, it definitely matters, as others have already said. You do actually lose eyeballs with every extra 100ms.
The corollary I'd add is that it really matters what datacenter your host is located in and who they're peering with -- the difference between a host with boxes at a major exchange point peered with several big telecoms vs. a host at a third-tier datacenter can be as big as U.S. vs. Europe.
Google doesn't just have boxes all over the place for geographical reasons, they're intentionally at almost every major Internet exchange point, and they're also peering with everybody so that their packets can route on whatever network is the fastest at any given moment.
You obviously can't do all that, but once you've got it narrowed to a few providers you can traceroute and look at hops and hop times at various times of day and figure out what's going to have the least latency to your users. (i.e., if all your users are in Germany, pick a spot in Frankfurt and traceroute to all the providers in your shortlist from there.)
A: Another thing, as mentioned, is latency. For a continuous stream of data it may not be huge depending on the data type, but for a site that gets hit multiple times to complete a transaction of some sort (an AJAX app calling multiple web services, for example) the round trips can start to add up with high latency (ping).
A: One other consideration unrelated to performance is Search Engine Optimisation: some SEO people believe that hosting sites on servers in other geographic locations can have some effect on placement in results. I'm not sure how accurate this is but it may be something to look into if strong SEO placement is important to you.
In regards to performance then I have used both Mosso and Media Temple and have found access here in the UK to be very fast, I can't say it had any real impact to users browsing my sites.
That said though, I currently keep all my sites in UK based data centres.
A: Physical distance, whilst a factor, does not always mean that latency automatically goes up. Another factor is whether there are direct peering agreements with transit carriers based in other countries. For example, you may find that the number of hops/ping time from UK -> US compares favourably even to UK -> UK connectivity.
A: Absolutely, take a look at http://www.speedtest.net/ and see the difference between hosting in Asia and hosting in the US.
A: For a small site it is more than acceptable. I host my own sites and projects in the states while myself and a lot of the site users are in the UK.
Another factor to be aware of is the laws within the jurisdiction you are choosing as your host. A prime example of this is The Pirate Bay hosting in Sweden on account of their favourable attitude to copyrighted content.
A: And the missing piece is that you have to make sure you follow European privacy laws if you have EU customers. That might mean no US or US-owned provider.
A: Yes. Performance depends on how far the data center is from users. Nearer means faster, farther means slower.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Capistrano + thin + nginx with user not allowed to sudo howto? I have a scenario like this which I want to use capistrano to deploy my ruby on rails application:
*
*The web application is on a thin cluster with the config file stored under /etc/thin. Also an init script is in /etc/init.d/thin, so it would start automatically whenever my server needs a reboot
*Also nginx is executed the same way (as an init script daemon)
*To make sure that if somebody hacks my webserver they can't do anything too horrible, the web user is not allowed to sudo
*Thin and nginx both run as the webuser to enforce such security
Now when I need to do the deployment, I would need the files to be installed under /home/webuser/railsapps/helloworld, and I need the cap script to restart my thin afterwards. I want to keep all files owned by the webuser, so the cap script's primary user runs as webuser. Now the problem arises when I want to restart the thin daemon, because webuser can't sudo.
I am thinking if its possible to invoke two separate sessions- webuser for file deployment, and then a special sudoer to restart the daemon. Can anyone give me a sample script on this?
A: This might not be what you want, but you can actually do something like this in your sudoers file:
someuser ALL=NOPASSWD: /etc/init.d/apache2
that lets someuser run /etc/init.d/apache2
If you try to do something else:
$ sudo ls
[sudo] password for someuser:
Sorry, user someuser is not allowed to execute '/bin/ls' as root on ...
A: why not use sudo for the deployment routine and then chown -R on the RAILS_ROOT? You could tell Capistrano to change the ownership prior to aliasing the release as current.
A: An alternative to this would be running nginx as a normal user, say on port 8080 then using IPTables to redirect requests from port 80 to port 8080, from memory
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
Will send all packets destined to port 80 to port 8080, which can be bound as a normal user.
A: If you are running Thin as the webuser then can the webuser not end the process? You could restart Thin again without the daemon, so long as you pass the server everything in /etc/thin it should be fine. The daemon, as far as I understand it, is just a convenient way to bypass having to manually launch a program at boot.
The only time you'll come unstuck is when you have to edit the contents of /etc/thin. Assuming you're using aliases to your webuser's thin.yml bits, this will only happen when you want to add / remove a program. When this happens, it might be worth just manually adding/deleting the alias.
This is all assuming the webuser can end the Thin process to start with. I don't know otherwise. Last time it was an issue for me was when I didn't have a way to run the app on my local machine because its implementation was pretty much tied to the server's layout. Every time I edited something, I had to send it to SVN, switch tabs in the terminal to an ssh shell, pull it from SVN, switch tabs to another ssh and restart the daemon and see whether or not I'd broken it. It got me down, so I installed Thin locally, got the app to read config files, and now I only have to upload once every few days.
A: Just noticed you don't allow the user to sudo :-) Well this answer will help others:
A little late to the party, but I've just done this:
namespace :deploy do
  desc "Start the Thin processes"
  task :start do
    run "cd #{current_path} && bundle exec sudo thin start -C /etc/thin/dankit.yml"
  end
  desc "Stop the Thin processes"
  task :stop do
    run "cd #{current_path} && bundle exec sudo thin stop -C /etc/thin/dankit.yml"
  end
  desc "Restart the Thin processes"
  task :restart do
    run "cd #{current_path} && bundle exec sudo thin restart -C /etc/thin/dankit.yml"
  end
end
Adding sudo to the bundle exec sudo thin start works.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: .order_by() isn't working how it should / how I expect it to In my Django project I am using Product.objects.all().order_by('order') in a view, but it doesn't seem to be working properly.
This is it's output:
Product Name    Sort
Evolution       2
Polarity        1
Jumbulaya       3
Kalidascope     4
It should look like this:
Product Name    Sort
Polarity        1
Evolution       2
Jumbulaya       3
Kalidascope     4
But it doesn't. Any ideas?
My view (for that output):
def debug(request):
    order = Product.objects.all().order_by('order')
    return render_to_response('cms/debug.html', {'order' : order, 'name' : name})
And the view responsible for saving the order field:
def manage_all(request):
    if request.method == 'POST':
        PostEntries = len(request.POST)
        x = 1
        while x < PostEntries:
            p = Product.objects.get(pk=x)
            p.order = int(request.POST.get(str(x),''))
            print "Itr: " + str(x)
            x = x + 1
        p.save()
        print "Product Order saved"
        return HttpResponse("Saved")
And the model (without the boring bits):
class Product(models.Model):
    name = models.CharField(max_length=100)
    order = models.IntegerField(blank = True, null = True)
Here is a 'live' example of the page http://massiveatom.com:8080/debug/ Please note that that is only running on the dev server, so it may not always be up.
I have asked in #django and they didn't seem to know what was going on. One thought was that the database/Django was being confused by the SQL command it is generating (select * from table where 1 order by 'order'), but I would prefer not to change the order field in the model.
And I know there should be back-ticks surrounding order in the above SQL command, but the syntax parsing thingy kinda hated on it...
Edit: Each object has the correct value, so I don't really know why it isn't sorting it properly.
Edit 2: I don't know what was going on, but it turns out putting p.save() in the loop fixed it all...
A: Your saving loop is wrong. You save Product outside of the loop. It should be:
if request.method == 'POST':
    PostEntries = len(request.POST)
    x = 1
    while x < PostEntries:
        p = Product.objects.get(pk=x)
        p.order = int(request.POST.get(str(x),''))
        print "Itr: " + str(x)
        x = x + 1
        p.save() # NOTE HERE <- saving in loop instead of outside
    print "Product Order saved"
    return HttpResponse("Saved")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Search for words in SQL Server index I need something in between a full text search and an index search:
I want to search for text in one column of my table (probably there will be an index on the column, too, if that matters).
Problem is, I want to search for words in the column, but I don't want to match parts.
For example, my column might contain business names:
Mighty Muck Miller and Partners Inc.
Boy & Butter Breakfast company
Now if I search for "Miller" I want to find the first line. But if I search for "iller" I don't want to find it, because there is no word starting with "iller". Searching for "Break" should find "Boy & Butter Breakfast company", though, since one word is starting with "Break".
So if I try and use
WHERE BusinessName LIKE '%Break%'
it will find too many hits.
Is there any way to Search for Words separated by whitespace or other delimiters?
(LINQ would be best, plain SQL would do, too)
Important: Spaces are by far not the only delimiters! Slashes, colons, dots, all non-alphanumerical characters should be considered for this to work!
A: Your word delimiters are going to be many: space, tab, beginning of line, parentheses, periods, commas, exclamation/question marks etc. So, a pretty simple solution is to use a regex in your WHERE clause. (And it's going to be a lot more efficient than just ORing every possible delimiter you can think of.)
Since you mentioned LINQ, here's an article that describes how to do efficient regex querying with SQL Server.
Complicated WHERE clauses like this always raise a red flag with me as far as performance is concerned, so I definitely suggest benchmarking whatever you end up with, you may decide to build a search index for the column after all.
EDIT: Saw you edited your question. When writing your regex, it's easy to just have it use any non-alphanum character as a delimiter, i.e. [^0-9a-zA-Z], or \W for any non-word character, \b for any word boundary and \B for any non-word boundary. Or, instead of matching delimiters, just match any word, i.e. \w+. Here's another example of someone doing regex searches with SQL Server (more complicated than what you'd need).
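For concreteness, here is what the word-boundary idea looks like in Python's re syntax (SQL Server's LIKE is not a full regex engine; the articles linked above describe wiring regex support into SQL Server externally). The sample data is taken from the question.

```python
import re

names = [
    "Mighty Muck Miller and Partners Inc.",
    "Boy & Butter Breakfast company",
]

def matches_word_prefix(term, text):
    # \b is a word boundary, so "Miller" matches but "iller" does not,
    # regardless of whether the delimiter is a space, slash, dot, etc.
    return re.search(r"\b" + re.escape(term), text) is not None

print([n for n in names if matches_word_prefix("Miller", n)])  # first name only
print([n for n in names if matches_word_prefix("iller", n)])   # []
print([n for n in names if matches_word_prefix("Break", n)])   # second name only
```

Because \b matches between any word character and any non-word character, all the delimiters the question worries about (spaces, slashes, colons, dots) are handled by one pattern.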
A: where BusinessName like 'Break%' -- to find if it is beginning with the word
or BusinessName like '% Break%' -- to find if it contains the word anywhere but the beginning
A: SQL Server 2000 or above.
SELECT *
FROM dbo.TblBusinessNames
WHERE BusinessName like '%[^A-z^0-9]Break%' -- In the middle of a sentence
OR BusinessName like 'Break%' -- At the beginning of a sentence
Keyword Reference for LIKE: http://msdn.microsoft.com/en-us/library/aa933232(SQL.80).aspx
A: WHERE BusinessName LIKE '% Break%'
A: You mentioned LINQ - you could do something like...
string myPattern = "% Break%";
var query =
from b in Business
where SqlMethods.Like(b.BusinessName, myPattern)
select b;
Note that this uses the System.Data.Linq.SqlClient namespace which translates directly to the LIKE operator with no additional processing.
A: Try this:
declare @vSearch nvarchar(100)
set @vSearch = 'About'
select * from btTab where ' ' + vText + ' ' LIKE '%[^A-z^0-9]' + @vSearch + '[^A-z^0-9]%'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to add a horizontal gap with a JLabel I have a JLabel (actually, it is a JXLabel).
I have put an icon and text on it.
<icon><text>
Now I wand to add some spacing on the left side of the component, like this:
<space><icon><text>
I DON'T accept suggestions to move the JLabel or to add spacing by modifying the image.
I just want to know how to do it with plain java code.
A: Your "like this" is not very clear, but you can add spacing by adding a transparent border of a certain width to the label.
A: If you're trying to push the label to one side of its container, you can add a glue. Something like this:
JPanel panel = new JPanel();
panel.setLayout(new BoxLayout(panel, BoxLayout.LINE_AXIS));
panel.add(new JLabel("this is your label with its image and text"));
panel.add(Box.createHorizontalGlue());
Though your question isn't very clear.
A: I have found the solution!
setBorder(new EmptyBorder(0,10,0,0));
Thanks everyone!
A: You don't need to modify the preferredSize of the JLabel; you can use the GridBagLayout manager to specify separations between components. You only have to use the GridBagLayout in the container and add the JXLabel to it with a GridBagConstraints object specifying the insets to the left:
JPanel panel = new JPanel(new GridBagLayout());
JLabel label = new JLabel("xxxxx");
GridBagConstraints constraints = new GridBagConstraints();
constraints.insets.left = X; // X = number of pixels of separation from the left component
panel.add(label, constraints);
Note that I have omitted a lot of configuration properties in the setup of the constraints; you'd better read the documentation of GridBagLayout.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: C# StringTemplate - how to set eol character I'm using the C# version of the StringTemplate library (http://www.stringtemplate.org/) to generate C++ code. My templates work fine, until I started using the
<attribute:template(argument-list)>
syntax to apply templates to multiple values in a 'list' ('multi-valued-argument' if I'm correct in the StringTemplate lingo). From that moment on, the EOL character switched from \n to \r\n, which causes Visual Studio and other editors to pop a 'convert end of line characters to \n' warning every time I open generated files.
So, my question is: how do I force StringTemplate to always output \n as EOL marker?
A: You can try to use an Output Filter (http://www.antlr.org/wiki/display/ST/Output+Filters) to replace all \r\n with \n
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What's the easiest way to import a new table into MySQL v5 from CSV? I'm running MySQL 5 on a linux server on my local network. Running Windows XP for my desktop. Had a look at the MySQL GUI Tools but I don't think they help. I cannot install apache on the remote server & use something like PHPmyAdmin.
A: Have a look at LOAD DATA INFILE: http://dev.mysql.com/doc/refman/5.0/en/load-data.html
A: From the MySQL shell or query browser...
If the CSV has no header:
LOAD DATA INFILE 'mycsvfile.csv' INTO TABLE mytable;
If the CSV has a header:
LOAD DATA INFILE 'mycsvfile.csv' INTO TABLE mytable IGNORE 1 LINES;
A: I use SQLyog on my Windows system which has a free Community Edition and has an option to import from CSV.
I've never used this option myself so I can't tell you how good it is. However, SQLyog has been great for all the other things I've used it for.
A: I would use a spreadsheet editor to make a set of SQL statements. Put a new column at the start containing insert into tablename values('. Add other columns to separate the data with snippets like ','. Finish with ');. Use the autofill feature to drag these cells down to as many rows as necessary. Copy the entire sheet to a plain text editor and remove the excess tabs, leaving you with a simple set of insert statements.
This is a solution that I can use for any database system and any spreadsheet file format. Plus it is easy to populate the spreadsheet from sources such as other databases, or copying and pasting from a webpage. It's also quite fast and available from any desktop machine, using Excel, OpenOffice or Google Docs.
See my example spreadsheet in Excel and OpenOffice versions.
A: I just did this using LOAD DATA INFILE but it's worth noting that it's not quite as simple as Gareth's example (Kai is quite right that you should look at the documentation). To correctly import comma-separated values, I used this:
LOAD DATA LOCAL INFILE 'mycsvfile.csv' INTO TABLE mytable
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
This assumes a CSV file with quotation marks around each field and a single header row at the top. (You probably don't need the LINES TERMINATED BY since it should be the default, but it's a good practice to be explicit.)
A: The Toad application does work wonders and is freeware. If you have a proper CSV file, it will create the table and import all data for you.
A: Write a simple Python script that parses the CSV and inserts it into the table.
Look at the csv and mysqldb module.
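A minimal sketch of that script. The answer suggests the MySQLdb module; sqlite3 is used here only so the example is self-contained (with MySQLdb the connect() call and the placeholder style change, %s instead of ?, but the csv-parsing pattern is identical). The table and column names are made up for illustration.

```python
import csv
import io
import sqlite3

# A tiny in-memory CSV with a header row and a quoted field.
csv_data = io.StringIO('name,email\n"Long Value",me@me.com\nshort,a@b.com\n')

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (name TEXT, email TEXT)")

reader = csv.reader(csv_data)   # handles quoting and embedded commas
next(reader)                    # skip the header row
conn.executemany("INSERT INTO mytable (name, email) VALUES (?, ?)", reader)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0])  # 2
```

For a real file you would pass open('mycsvfile.csv', newline='') to csv.reader instead of the StringIO, but the insert loop stays the same.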
A: If you don't mind using a bit of commercial software then Navicat (http://mysql.navicat.com/) is a very useful bit of software and is available for Mac, Windows, Linux. I use it regularly for importing a large CSV file into a database.
A: Toad for MySQL will do this nicely, with considerable control over the import (selectively matching columns for example) and most enduringly it's free.
I've also used SQLYog, but you have to have the commercial version for this as import from file isn't available in the community edition.
Toad is an excellent bit of software which comes in versions for all major databases and I've used both the MSSQL and Oracle versions in the past too. Recommended.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/156994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Conditionally set an attribute on an element with JSP Documents (JSPX) In HTML forms, buttons can be disabled by defining the "disabled" attribute on them, with any value:
<button name="btn1" disabled="disabled">Hello</button>
If a button is to be enabled, the attribute should not exist as there is no defined value that the disabled attribute can be set to that would leave the button enabled.
This is causing me problems when I want to enable / disable buttons when using JSP Documents (jspx). As JSP documents have to be well-formed XML documents, I can't see any way of conditionally including this attribute, as something like the following isn't legal:
<button name="btn1" <%= (isDisabled) ? "disabled" : "" %> >Hello</button>
While I could replicate the tag twice using a JSTL if tag to get the desired effect, in my specific case I have over 15 attributes declared on the button (lots of javascript event handler attributes for AJAX) so duplicating the tag is going to make the JSP very messy.
How can I solve this problem, without sacrificing the readability of the JSP? Are there any custom tags that can add attributes to the parent by manipulating the output DOM?
A: @alex
great solution to use the ternary operator. I'll add an example of my own; thanks to you, I just changed the result of the condition: if true, it writes the attribute, otherwise it writes nothing.
To populate the list and select the used value, avoiding c:if:
<select id="selectLang" name="selectLang" >
<c:forEach var="language" items="${alLanguages}" >
<option value="${language.id}" ${language.code == usedLanguage ? 'selected' : ''} >${language.description}</option>
</c:forEach>
</select>
To check a radio button at start, avoiding c:if:
<input type="radio" id="id0" value="0" name="radio" ${modelVar == 0 ? 'checked' : ''} />
<input type="radio" id="id1" value="1" name="radio" ${modelVar == 1 ? 'checked' : ''} />
<input type="radio" id="id2" value="2" name="radio" ${modelVar == 2 ? 'checked' : ''} />
A: You can use the <jsp:text> tag to solve this problem using valid XML:
<jsp:text><![CDATA[<button name="btn1"]]></jsp:text>
<c:if test="${isDisabled}"> disabled="disabled"</c:if>
>
Hello!
<jsp:text><![CDATA[</button>]]></jsp:text>
This is obviously more verbose than some other solutions. But it's completely self-contained: no custom tags required. Also, it scales easily to as many attributes as you need.
A: i guess some time has passed since the last post on this, but I came up against the exact same problem with <select><option selected="selected"> tags, i.e. dynamically declaring which option is selected. To solve that one I made a custom tagx; I posted the details over in another answer here
I came to the conclusion that there is no nice shortcut; EL and JSP expressions can only exist inside XML element attributes (and in body content). So you have to do the following;
<c:choose>
<c:when test="${isDisabled}"><button name="btn1" disabled="disabled">Hello</button></c:when>
<c:otherwise><button name="btn1">Hello</button></c:otherwise>
</c:choose>
Using the scriptlet notation won't work for JSP documents (.jspx)
A: Reading about an automatic jsp to jspx converter I came across the <jsp:element> and <jsp:attribute> tags. If I understand that correctly you should be able to do something like
<jsp:element name="button">
<jsp:attribute name="someAttribute">value</jsp:attribute>
</jsp:element>
and have the jsp engine output
<button someAttribute="value"/>
or something like that. The only problem, pointed out in the page above, is that this doesn't seem to work well with conditional constructs. The author of the converter worked around that creating some helper tags, which you can have a look at downloading the source code I guess. Hope that helps.
A: Make a tag library (.tagx) then use the scriptlet tag.
See http://code.google.com/p/jatl/wiki/JSPExample
<?xml version="1.0" encoding="UTF-8" ?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1">
<jsp:directive.page import="com.googlecode.jatl.Html"/>
<jsp:directive.page import="com.evocatus.product.data.AttributeValue"/>
<jsp:directive.page import="com.evocatus.domain.Product"/>
<jsp:scriptlet>
//<![CDATA[
final Product p = (Product) request.getAttribute("product");
new Html(out) {{
for (AttributeValue v : p.summaryAttributeValues()) {
p();
strong().text(v.getLabel()).end();
text(": " + v.getValue());
endAll();
}
}};
//]]>
</jsp:scriptlet>
</jsp:root>
Yeah this is cheating ... but it gets the job done. Plus can you do really nasty complicated recursion for tree structures this way.
I also posted another solution on my blog and gist.github that uses a bunch of tagx libraries: http://adamgent.com/post/8083703288/conditionally-set-an-attribute-on-an-element-with-jspx
A: I use a custom JSP tag with dynamic attributes. You use it like this:
<util:element elementName="button" name="btn1" disabled="${isDisabled ? 'disabled' : ''}"/>
Basically, what this tag does is generate an XML element with elementName and puts all attributes present in the tag, but skips the empty ones.
The tag itself is pretty easy to implement, my implementation is just 44 lines long.
A: Correct way to do it with pure JSP is this way:
<jsp:element name="button">
<jsp:attribute name="name">btn1</jsp:attribute>
<jsp:attribute name="disabled" omit="${not isDisabled}">disabled</jsp:attribute>
<jsp:body>Hello</jsp:body>
</jsp:element>
The key is to use the omit attribute on <jsp:attribute> - if the expression evaluates to true then the attribute won't be rendered at all.
A: The <%= blah %> syntax is not legal XML needed for JSP Documents. You have 2 options:
*
*Replace <%= (isDisabled) ? "disabled" : "" %> with <jsp:expression>(isDisabled) ? "disabled" : ""</jsp:expression>
*Use the Core taglib and EL (make sure isDisabled is put into page scope) like so:
<c:choose>
<c:when test="${isDisabled}">"disabled"</c:when>
<c:otherwise>""</c:otherwise>
</c:choose>
Hope that helps :)
A: I've just been struggling with the same problem. I tried using <jsp:attribute name="disabled"/> inside <c:if>, but the compiler tries to attach the disabled attribute to the c:if element which fails. But I found that this does work (stripes:submit is an element for creating a button of type submit in stripes):
<stripes:submit name="process" value="Hello">
<jsp:attribute name="disabled">
<c:if test="${x == 0}">disabled</c:if>
</jsp:attribute>
</stripes:submit>
It seems that jsp:attribute will not create an attribute at all if the body contains only whitespace, so you either get disabled="disabled" or nothing at all.
This will only work if you are using some sort of taglib to generate the button, and the tag element must support the disabled attribute (passing it through to the underlying HTML element). You can't use jsp:attribute to add an attribute to a raw HTML element.
A: I implemented it like https://stackoverflow.com/a/207882/242042 and encapsulated https://stackoverflow.com/a/775295/242042 so that it can be reused as a tag.
<%@ tag
display-name="element"
pageEncoding="utf-8"
description="similar to jsp:element with the capability of removing attributes that are blank, additional features depending on the key are documented in the tag."
trimDirectiveWhitespaces="true"
dynamic-attributes="attrs"
%>
<%@ attribute
name="tag"
description="Element tag name. Used in place of `name` which is a common attribute in HTML"
required="true"
%>
<%-- key ends with Key, use i18n --%>
<%-- key starts with x-bool- and value is true, add the key attribute, no value --%>
<%-- key starts with x-nc- for no check and value is empty, add the key attribute, no value --%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>
<jsp:text><![CDATA[<]]></jsp:text>
<c:out value="${tag} " />
<c:forEach var="attr" begin="0" items="${attrs}">
<c:choose>
<c:when test='${fn:endsWith(attr.key, "Key")}'>
${attr.key}=<fmt:message key="${attr.value}" />
</c:when>
<c:when test='${fn:startsWith(attr.key, "x-bool-") && attr.value == "true"}'>
<c:out value="${fn:substringAfter(attr.key, 'x-bool-')}" />
</c:when>
<c:when test='${fn:startsWith(attr.key, "x-bool-") && attr.value != "true"}'>
</c:when>
<c:when test='${fn:startsWith(attr.key, "x-nc-")}'>
<c:out value="${fn:substringAfter(attr.key, 'x-nc-')}" />="<c:out value='${attr.value}' />"
</c:when>
<c:when test='${not empty attr.value}'>
<c:out value="${attr.key}" />="<c:out value='${attr.value}' />"
</c:when>
</c:choose>
<c:out value=" " />
</c:forEach>
<jsp:doBody var="bodyText" />
<c:choose>
<c:when test="${not empty fn:trim(bodyText)}">
<jsp:text><![CDATA[>]]></jsp:text>
${bodyText}
<jsp:text><![CDATA[<]]></jsp:text>
<c:out value="/${tag}" />
<jsp:text><![CDATA[>]]></jsp:text>
</c:when>
<c:otherwise>
<jsp:text><![CDATA[/>]]></jsp:text>
</c:otherwise>
</c:choose>
To use it put it in a taglib tagdir.
<%@ taglib tagdir="/WEB-INF/tags" prefix="xyz"%>
...
<xyz:element tag="input"
type="date"
id="myDate"
name="myDate"
x-bool-required="true"
/>
The output would render as
<input
name="myDate"
id="myDate"
type="date"
required/>
A: I don't really use JSP (and I replied once, then deleted it when I understood the "must by valid XML" thing). The cleanest I can come up with is this:
<% if (isDisabled) { %>
<button name="btn1" disabled="disabled">Hello</button>
<% } else { %>
<button name="btn1">Hello</button>
<% } %>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: .NET Webservices development approach I'm planning to write a web service aka API for my application, which was developed with .NET and SQL Server. .NET provides a simple way to create webservices by creating asmx files, but I need to know how best it can be designed, or what the best practices are in creating a web service in .NET, or some pointers to good articles, as this is my first experience in web services programming. Even though SF has many references to best practices for APIs (which are very good), I don't find much information oriented toward .NET webservices.
Update: After dariom's reply I would like to mention that I'm doing it in .NET 2.0
A: It's not easy to remember all the best practices out there, but here is my advice:
*
*Avoid using data types which aren't easily serializable or aren't compatible with the WS standards (and thus you won't be able to consume the service using different languages). Most notably are RecordSets and other MS-only reference types.
*Notice there's a difference between a static web service (a static method which has [webmethod]) and a non-static one (member function). This might affect your performance and resource usage.
*There's a difference between designing a .NET webservice that runs inside your intranet, and a webservice designed for the internet. The second should be simpler in that the XML data that goes on the wire should be much smaller.
*I guess the webservice is going to deployed on an IIS server. Notice there's a difference in authentication when running inside the intranet and on the internet. You might need to use impersonation for the service to be able to do specific tasks.
I guess there's much more; you'd better look for some documentation discussing all of this.
A: We are developing web service starting from XSD schema first (contract first).
Also we are using Thinktecture’s WSCF - Web Services Contract First tool to generate web service interfaces. It worked for us pretty good for last 2 years.
They also have good walk through on how to use there tool.
http://www.thinktecture.com/resourcearchive/tools-and-software/wscf/wscf-walkthrough
A: If you're using .NET 3.0 or later I would encourage you to consider implementing your service using WCF.
The learning curve is steeper, but you'll benefit from the flexibility and features that WCF offers. With WCF you can make your service compatible with ASMX services (regular .NET web services) so other applications and other languages can consume the service as though it was a regular .NET web service.
There are lots of resources to help you get started. Try MSDN, Learning WCF and Programming WCF Services.
A: There's a book of design patterns for .NET ASMX web services here: http://msdn.microsoft.com/en-us/library/ms978729.aspx
It describes the following design patterns:
Entity Aggregation
Process Integration
Portal Integration
Data Integration
Function Integration
Service-Oriented Integration
Presentation Integration
Message Broker
Message Bus
Publish/Subscribe
Pipes and Filters
Gateway
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Emacs and Python I recently started learning Emacs. I went through the tutorial, read some introductory articles, so far so good.
Now I want to use it for Python development. From what I understand, there are two separate Python modes for Emacs: python-mode.el, which is part of the Python project; and python.el, which is part of Emacs 22.
I read all information I could find but most of it seems fairly outdated and I'm still confused.
The questions:
*
*What is their difference?
*Which mode should I install and use?
*Are there other Emacs add-ons that are essential for Python development?
Relevant links:
*
*EmacsEditor @ wiki.python.org
*PythonMode @ emacswiki.org
A: This site has a description of how to get Python code completion in Emacs.
Ropemacs is a way to get Rope to work in emacs. I haven't had extensive experience with either, but they're worth looking into.
A: Given the number of times I have several open buffers all called __init__.py, I consider the uniquify library essential for python development.
Pyflakes also aids productivity.
A: If you are using GNU Emacs 21 or before, or XEmacs, use python-mode.el. The GNU Emacs 22 python.el won't work on them. On GNU Emacs 22, python.el does work, and ties in better with GNU Emacs's own symbol parsing and completion, ElDoc, etc. I use XEmacs myself, so I don't use it, and I have heard people complain that it didn't work very nicely in the past, but there are updates available that fix some of the issues (for instance, on the emacswiki page you link), and you would hope some were integrated upstream by now. If I were the GNU Emacs kind, I would use python.el until I found specific reasons not to.
The python-mode.el's single biggest problem as far as I've seen is that it doesn't quite understand triple-quoted strings. It treats them as single-quoted, meaning that a single quote inside a triple-quoted string will throw off the syntax highlighting: it'll think the string has ended there. You may also need to change your auto-mode-alist to turn on python-mode for .py files; I don't remember if that's still the case but my init.el has been setting auto-mode-alist for many years now.
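For what it's worth, the auto-mode-alist change mentioned above is a one-liner in init.el; a minimal sketch (assuming the mode function is called python-mode, as with python-mode.el):

```lisp
;; Open .py files in python-mode (needed mainly for python-mode.el;
;; GNU Emacs 22's bundled python.el registers itself).
(add-to-list 'auto-mode-alist '("\\.py\\'" . python-mode))
```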
As for other addons, nothing I would consider 'essential'. XEmacs's func-menu is sometimes useful, it gives you a little function/class browser menu for the current file. I don't remember if GNU Emacs has anything similar. I have a rst-mode for reStructuredText editing, as that's used in some projects. Tying into whatever VC you use, if any, may be useful to you, but there is builtin support for most and easily downloaded .el files for the others.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: iSeries SQL Procedure - Check if already exists I have a script that falls over if any of the procedures it is trying to create already exist. How can I check for, and drop, a procedure that has already been created?
A: I would guess something along the lines of:
IF EXISTS
(
SELECT *
FROM SYSPROCS
WHERE SPECIFIC_SCHEMA = ???
AND SPECIFIC_NAME = ???
AND ROUTINE_SCHEMA = ???
AND ROUTINE_NAME = ???
)
DROP PROCEDURE ???
I don't know if you actually need the SPECIFIC_* information or not and I don't know how to handle cases where you have two procedures with the same name but different call signatures, but hopefully this gets you on the right track.
A: IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[Procedure_Name]') AND OBJECTPROPERTY(id,N'IsProcedure') = 1)
DROP PROCEDURE [dbo].[Procedure_Name]
I think this would help you
A: You might check for existence this way (note - make sure of case):
SELECT *
FROM QSYS2/PROCEDURES
WHERE PROCNAME LIKE 'your-procedure-name'
AND PROCSCHEMA = 'your-procedure-library'
A: DROP PROCEDURE xxx ;
CREATE PROCEDURE XXX
.
.
. ;
Include a DROP PROCEDURE as the first statement in the script. If you run with RUNSQLSTM, use ERRLVL(20) to allow the DROP to fail. If you run through 'Run SQL Scripts', use the 'Ignore "Object not found" on DROP' option.
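A sketch of what that RUNSQLSTM invocation might look like (the library, source file, and member names here are placeholders):

```
RUNSQLSTM SRCFILE(MYLIB/QSQLSRC) SRCMBR(MYPROCS) ERRLVL(20)
```

With ERRLVL(20), the "object not found" failure from the initial DROP is tolerated and the script continues on to the CREATE.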
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Where can I find .NET Framework class diagram? I just need a file (picture, PDF, or other file type for printing) of the framework structure.
It is very useful when learning the .NET Framework.
A: If you are bold and adventurous you can use a tool I found on CodeProject. Send the framework classes to it and, voila, after some crunching, grinding and groaning you should get a diagram from it.
A: Check this poster and see if it helps.
And here is the deep zoom version.
A: .NET Framework 3.5 Common Namespaces and Types Poster
November 2007 Edition - The .NET Framework 3.5 Common Namespaces and Types Poster
Overview
The .NET Framework 3.5 Common Namespaces and Types Poster is downloadable in XPS or PDF format. There is also an XPS format file which prints over 16 letter or A4 pages for easy printing. Some assembly is required if you choose this print method.
A: http://download.microsoft.com/download/4/a/3/4a3c7c55-84ab-4588-84a4-f96424a7d82d/NET_35_Namespaces_Poster_JAN08.pdf
A: I'm looking at one (and several others) right behind me at the moment, apparently it comes with Visual C#/Studio.
A: You didn't specify which version of the .NET Framework, and it's a little unclear if you mean a map of each class or a map of all classes. Anyhow, here's for .NET 3.5:
http://download.microsoft.com/download/4/a/3/4a3c7c55-84ab-4588-84a4-f96424a7d82d/NET35_Namespaces_Poster_LORES.pdf
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Parsing Text in MS Access I have a column that contains strings. The strings in that column look like this:
FirstString/SecondString/ThirdString
I need to parse this so I have two values:
Value 1: FirstString/SecondString
Value 2: ThirdString
I could actually have longer strings, but I always need them separated like [string1/string2/string3/...][stringN]
What I need to end up with is this:
Column1: [string1/string2/string3/etc....]
Column2: [stringN]
I can't find any way in Access to do this. Any suggestions? Do I need regular expressions? If so, is there a way to do this in the query designer?
Update: Both of the expressions give me this error: "The expression you entered contains invalid syntax, or you need to enclose your text data in quotes."
expr1: Left( [Property] , InStrRev( [Property] , "/") - 1), Mid( [Property] , InStrRev( [Property] , "/") + 1)
expr1: mid( [Property] , 1, instr( [Property] , "/", -1)) , mid( [Property] , instr( [Property] , "/", -1)+1, length( [Property] ))
A: In a query, use the following two expressions as columns:
Left(col, InStrRev(col, "/") - 1), Mid(col, InStrRev(col, "/") + 1)
col is your column.
If in VBA, use the following:
last_index= InStrRev(your_string, "/")
first_part= Left$(your_string, last_index - 1)
last_part= Mid$(your_string, last_index + 1)
A: mid(col, 1, instr(col, "/", -1)) , mid(col, instr(col, "/", -1)+1, length(col))
A: Is there any chance you can fix the underlying data structure to be properly normalized so that you can avoid the problem in the first place? Along with retrieving the data comes a whole host or problems with maintaining it accurately, and that would all be ameliorated if you weren't storing multiple values in a single field.
A: I know you're trying to do this inside a query so the SQL string functions are probably your best bet.
However, it's worth mentioning that there's a regular expression COM object accessible from VBA. Just add a reference to the Microsoft VBScript Regular Expressions library inside of your macro code.
Then you can do stuff like this
Dim szLine As String
Dim regex As New RegExp
Dim colregmatch As MatchCollection
With regex
    .MultiLine = False
    .Global = True
    .IgnoreCase = False
End With
szLine = "FirstString/SecondString/ThirdString"
regex.Pattern = "^(.*?\/.*?)/(.*?)$"
Set colregmatch = regex.Execute(szLine)
'FirstString/SecondString
Debug.Print colregmatch.Item(0).submatches.Item(0)
'ThirdString
Debug.Print colregmatch.Item(0).submatches.Item(1)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Most pythonic way of counting matching elements in something iterable I have an iterable of entries on which I would like to gather some simple statistics, say the count of all numbers divisible by two and the count of all numbers divisible by three.
My first alternative, while only iterating through the list once and avoiding the list expansion (and keeping the split loop refactoring in mind), looks rather bloated:
(alt 1)
r = xrange(1, 10)
twos = 0
threes = 0
for v in r:
    if v % 2 == 0:
        twos += 1
    if v % 3 == 0:
        threes += 1
print twos
print threes
This looks rather nice, but has the drawback of expanding the expression to a list:
(alt 2)
r = xrange(1, 10)
print len([1 for v in r if v % 2 == 0])
print len([1 for v in r if v % 3 == 0])
What I would really like is something like a function like this:
(alt 3)
def count(iterable):
    n = 0
    for i in iterable:
        n += 1
    return n
r = xrange(1, 10)
print count(1 for v in r if v % 2 == 0)
print count(1 for v in r if v % 3 == 0)
But this looks a lot like something that could be done without a function. The final variant is this:
(alt 4)
r = xrange(1, 10)
print sum(1 for v in r if v % 2 == 0)
print sum(1 for v in r if v % 3 == 0)
and while it is the smallest (and in my book probably the most elegant), it doesn't feel like it expresses the intent very well.
So, my question to you is:
Which alternative do you like best to gather these types of stats? Feel free to supply your own alternative if you have something better.
To clear up some confusion below:
*
*In reality my filter predicates are more complex than just this simple test.
*The objects I iterate over are larger and more complex than just numbers
*My filter functions are more different and hard to parameterize into one predicate
A: Alt 4! But maybe you should refactor the code into a function that takes an argument containing the divisor (two or three). Then you could also give it a better function name.
def methodName(divNumber, r):
    return sum(1 for v in r if v % divNumber == 0)
print methodName(2, xrange(1, 10))
print methodName(3, xrange(1, 10))
A: You could use the filter function.
It filters a list (or strictly an iterable) producing a new list containing only the items for which the specified function evaluates to true.
r = xrange(1, 10)
def is_div_two(n):
    return n % 2 == 0

def is_div_three(n):
    return n % 3 == 0
print len(filter(is_div_two,r))
print len(filter(is_div_three,r))
This is good as it allows you keep your statistics logic contained in a function and the intent of the filter should be pretty clear.
A: I would choose a small variant of your (alt 4):
def count(predicate, list):
    print sum(1 for x in list if predicate(x))
r = xrange(1, 10)
count(lambda x: x % 2 == 0, r)
count(lambda x: x % 3 == 0, r)
# ...
If you want to change what count does, change its implementation in one place.
Note: since your predicates are complex, you'll probably want to define them in functions instead of lambdas. And so you'll probably want to put all this in a class rather than the global namespace.
A: Having to iterate over the list multiple times isn't elegant IMHO.
I'd probably create a function that allows doing:
twos, threes = countmatching(xrange(1, 10),
                             lambda a: a % 2 == 0,
                             lambda a: a % 3 == 0)
A starting point would be something like this:
def countmatching(iterable, *predicates):
    v = [0] * len(predicates)
    for e in iterable:
        for i, p in enumerate(predicates):
            if p(e):
                v[i] += 1
    return tuple(v)
Btw, "itertools recipes" has a recipe for doing much like your alt4.
from itertools import imap  # the recipe assumes this import (Python 2)

def quantify(seq, pred=None):
    "Count how many times the predicate is true in the sequence"
    return sum(imap(pred, seq))
A: Well you could do one list comprehension/expression to get a set of tuples with that stat test in them and then reduce that down to get the sums.
r=xrange(10)
s=( (v % 2 == 0, v % 3 == 0) for v in r )
def add_tuples(t1, t2):
    return tuple(x + y for x, y in zip(t1, t2))
sums=reduce(add_tuples, s, (0,0)) # (0,0) is starting amount
print sums[0] # sum of numbers divisible by 2
print sums[1] # sum of numbers divisible by 3
Using generator expression etc should mean you'll only run through the iterator once (unless reduce does anything odd?). Basically you'd be doing map/reduce...
A: True booleans are coerced to unit integers, and false booleans to zero integers. So if you're happy to use scipy or numpy, make an array of integers for each element of your sequence, each array containing one element for each of your tests, and sum over the arrays. E.g.
>>> sum(scipy.array([c % 2 == 0, c % 3 == 0]) for c in xrange(10))
array([5, 4])
A: I would definitely be looking at a numpy array instead of an iterable list if you just have numbers. You will almost certainly be able to do what you want with some terse arithmetic on the array.
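As a sketch of that suggestion (assuming NumPy is installed; the variable names are mine, and this mirrors the xrange(1, 10) example from the question):

```python
import numpy as np

# Same values as xrange(1, 10) in the question.
r = np.arange(1, 10)

# A comparison on the array yields a boolean mask; summing it counts the Trues.
twos = int((r % 2 == 0).sum())
threes = int((r % 3 == 0).sum())
```

The comparisons broadcast over the whole array, so each count is a single vectorized operation rather than a Python-level loop.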
A: Not as terse as you are looking for, but more efficient. It actually works with any iterable, not just iterables you can loop over multiple times, and you can expand the set of things to check for without complicating it further:
r = xrange(1, 10)
counts = {
    2: 0,
    3: 0,
}
for v in r:
    for q in counts:
        if not v % q:
            counts[q] += 1
        # Or, more obscure:
        #counts[q] += not v % q
for q in counts:
    print "%s's: %s" % (q, counts[q])
A: from itertools import groupby
from collections import defaultdict
def multiples(v):
    return 2 if v % 2 == 0 else 3 if v % 3 == 0 else None

d = defaultdict(list)
for k, values in groupby(range(10), multiples):
    if k is not None:
        d[k].extend(values)
A: The idea here is to use reduction to avoid repeated iterations. Also, this does not create any extra data structures, if memory is an issue for you. You start with a dictionary with your counters ({'div2': 0, 'div3': 0}) and increment them along the iteration.
def increment_stats(stats, n):
    if n % 2 == 0: stats['div2'] += 1
    if n % 3 == 0: stats['div3'] += 1
    return stats

r = xrange(1, 10)
stats = reduce(increment_stats, r, {'div2': 0, 'div3': 0})
print stats
If you want to count anything more complicated than divisors, it would be appropriate to use a more object-oriented approach (with the same advantages), encapsulating the logic for stats extraction.
class Stats:
    def __init__(self, div2=0, div3=0):
        self.div2 = div2
        self.div3 = div3
    def increment(self, n):
        if n % 2 == 0: self.div2 += 1
        if n % 3 == 0: self.div3 += 1
        return self
    def __repr__(self):
        return 'Stats(%d, %d)' % (self.div2, self.div3)

r = xrange(1, 10)
stats = reduce(lambda stats, n: stats.increment(n), r, Stats())
print stats
Please point out any mistakes.
@Henrik: I think the first approach is less maintainable since you have to control initialization of the dictionary in one place and update in another, as well as having to use strings to refer to each stat (instead of having attributes). And I do not think OO is overkill in this case, for you said the predicates and objects will be complex in your application. In fact if the predicates were really simple, I wouldn't even bother to use a dictionary, a single fixed size list would be just fine. Cheers :)
A: Inspired by the OO-stab above, I had to try my hand at one as well (although this is way overkill for the problem I'm trying to solve :)
class Stat(object):
    def update(self, n):
        raise NotImplementedError
    def get(self):
        raise NotImplementedError

class TwoStat(Stat):
    def __init__(self):
        self._twos = 0
    def update(self, n):
        if n % 2 == 0: self._twos += 1
    def get(self):
        return self._twos

class ThreeStat(Stat):
    def __init__(self):
        self._threes = 0
    def update(self, n):
        if n % 3 == 0: self._threes += 1
    def get(self):
        return self._threes

class StatCalculator(object):
    def __init__(self, stats):
        self._stats = stats
    def calculate(self, r):
        for v in r:
            for stat in self._stats:
                stat.update(v)
        return tuple(stat.get() for stat in self._stats)

s = StatCalculator([TwoStat(), ThreeStat()])
r = xrange(1, 10)
print s.calculate(r)
A: Alt 3, for the reason that it doesn't use memory proportional to the number of "hits". Given a pathological case like xrange(one_trillion), many of the other offered solutions would fail badly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How do I resolve the error "Expression must evaluate to a node-set" when checking for the existence of a node? I'm attempting to check for the existence of a node using the following .NET code:
xmlDocument.SelectSingleNode(
String.Format("//ErrorTable/ProjectName/text()='{0}'", projectName));
This always raises:
XPathException: Expression must evaluate to a node-set.
Why am I getting this error and how can I resolve it? Thank you.
A: The expression given evaluates to a boolean, not a node-set. I assume you want to check whether the ProjectName equals the parametrized text. In this case you need to write
//ErrorTable/ProjectName[text()='{0}']
This gives you a list of all nodes (a nodeset) matching the given condition. This list may be empty, in which case the C#-Expression in your sample will return null.
As an afterthought: You can use the original xpath expression, but not with SelectSingleNode, but with Evaluate, like this:
(bool)xmlDocument.CreateNavigator().Evaluate(String.Format("//ErrorTable/ProjectName/text()='{0}'", projectName));
A: Try:
Node node = xmlDocument.SelectSingleNode(String.Format("//ErrorTable/ProjectName = '{0}'", projectName));
if (node != null) {
// and so on
}
Edit: silly error
A: The XPath expression contained a subtle error. It should have been:
xmlDocument.SelectSingleNode(String.Format("//ErrorTable/ProjectName[text()='{0}']", projectName));
The previous expression was evaluating to a boolean, which explains the exception error. Thanks for the help!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: .NET 3.5 was published in 11/07, .NET 3.0 in 11/06. Why are most people still using .NET 2.0? People have been developing their own solutions to the following problems:
*
*Consistent messaging frameworks for remote information exchange (webservices,rpc,...)
*SDKs for state management, for things such as Finite State Machines and Workflows
*Authentication Frameworks
*And much more.
For over two years now, Microsoft has offered .NET 3.0, which contains consistent and well-documented so-called Foundations for Workflows, Communication, Authentication, and a new way to build web apps.
However... people were still building their own frameworks with consistent object-relational mapping to address their databases, and their own techniques to dynamically extend classes and methods at runtime (e.g. so that customers can customize application behaviour).
For over one year now, Microsoft has offered .NET 3.5, which - amongst others - contains LINQ, and therefore a great ORM and wonderful means to extend your code and generally make it much easier to write.
But look around... it seems as if the majority still uses .NET 2.0. Websites are created in plain ASP.NET. Desktop experience is still achieved with a combination of CSS, JavaScript and HTML. Executables are using plain old WinForms, workflows are implemented with delegates, events, do/while and switch/case.
Without too much discussion, I would be glad to see concrete reasons for the following question:
In your opinion: Why is it that people don't jump onto the .NET 3.5 train?
A: *
*Requires learning new stuff, many 'just a job' LOB developers can't be bothered.
*Legacy code investments, the custom systems may not be needed, but they work, recoding stuff to use Framework based systems is a waste of time if the existing system isn't broken.
*Dev software cost. Coding effectively in .net 3.x really requires VS2008. Upgrading a whole team of developers to that from 2005 might cost.
*Stability. 2.0 was an evolution of the 1.1 Framework. 3.0 and 3.5 include new v1 technologies, (those you listed). Developers like to see the technologies prove themselves before they can justify to their managers that its worth the jump. As with Windows adoption, you'll likely see more people going from VS2005 to VS2010 and .net 2.0 to .net 4.0 since that will contain v2 of the 3.x technologies.
A: Because it is not well supported by web hosting companies.
I'm developing a new web application but my hosting company only provides ASP.NET 2.0 support.
A: If you want Windows 2000 support, you'll have to stay with .NET 2.0. It's not that bad anyway.
A: I can speak for myself and my company. We still don't use it.
Why? Well
1- Most of our users already have the .NET 2.0 framework installed. No need to install another framework.
2- We won't change just because it is the new thing. It has to add some value.
3- Really taking advantage of it would mean a huge amount of work. Again, it has to pay off.
4- It is still to early to tell if its worth it (one year is not nearly enough) in terms of bugs and new problems. It seems worth it, though.
A: .NET 3.0 contained things which were great for new projects, but which many existing projects wouldn't want to take for the sake of updating - it would require significant rework to integrate any of them.
As for this:
For over one year now, Microsoft offers .NET 3.5
In what way is November 2007 over a year ago? Yes, .NET 3.5 is great and I love LINQ (particularly LINQ to Objects) and the benefits of C# 3.0 - but there's always a cost involved with change. To start with, there's the cost of rolling out Visual Studio 2008. Then there's the cost of retesting everything against .NET 3.5 (and now .NET 3.5SP1). Then there's the cost of deploying .NET 3.5 to all the servers, or the cost of requiring .NET 3.5 on all the clients. Oh, not to mention the cost of actually learning to use all the new technologies in a productive way.
It'll happen, but you shouldn't expect it to be very quick.
A: To add one more note to what @SCdF said, any company with a significant IT staff, that isn't developer centric (i.e. The Majority), is usually heavily resistant to new technologies.
IT departments tend to lag behind from the developers standpoint because they don't see the business value in upgrading to new systems/hardware just to stay ahead of the curve. IT is too busy dealing with security and maintenance (real or just perceived) to deal with developers trying to get them to upgrade to .Net 3.5.
Developers are also notoriously bad at communicating with other departments about business value. When a developer tries to get IT, or the Business on board with upgrading to .Net 3.5 they start talking about automated workflows and XML, and Web Services instead of talking dollars and cents.
Brian Prince of Microsoft has a really good presentation on "Soft Skills" that goes over some of this. If you ask him nicely he just might come to your company and present :)
A: I asked a manager at a previous employer why he hadn't made the switch yet, the answer was "I don't want to have to support three frameworks," meaning 1.1, 2.0, 3.5.
I explained that upgrading to 3.5 was not the same as switching from 1.1 to 2.0 (3.5 is an extension of 2.0 using a different dll (Core.dll), not an updated dll like 1.1 to 2.0).
I can see a lot of managers using this line of thought, so its up to you to let them know!
A: For the same reason I know Java developers who still code in Java 1.4-- change is expensive and, to people doing work internal to companies where Getting It Done is so much more important than using new technologies, often pointless.
For a lot of internal work there is little justification for upgrading older applications to work in newer environments.
Also, large corporations are not fond of any sort of change, for various stability concerns; you will still find them making decisions to use older technology in new solutions simply because they believe the new technology is not stable enough, not enough is known about it, or not enough of the software they rely on supports it.
And, I've never used it so I couldn't say, but it may be the case that .NET 2 is Good Enough for people's needs, and that .NET 3.5 doesn't offer enough to warrant the learning/changing that is involved, even for companies that are OK with the more cutting edge.
A: 2.0 works, right? I think the newer stuff is great and cool and I like to stay up on the latest, but at the end of the day if it works, often the decision is to stick with what you have. I supported a shop that ran DOS 6.2x programs for years (up until 2004) and I used to scratch my head about it. But at the end of the day, there was no compelling business reason in that case to spend anything on moving forward.
I'm sure there are other reasons too.
A: For all new projects I try to use 3.5 (or whatever the newest framework is). But for all the old sites, why would the customers pay for upgrading when it works?
If you want specific technology in 3.5, then upgrade; otherwise, don't fix what isn't broken.
A: I'd be interested to see how long 2.0 was out before it got "majority market share" -- I mean, 1 or 2 years might sound like a long time to you and me, but you've got to remember the new language has to be released, then people have to train on it, then they have to know it well enough to "sell" it to management. Also, VS2005 was "good enough" for quite a few people, and there's cost involved in upgrading to VS2008 to support the newer language versions -- not just money, but time.
A: The same reason most people are still using Windows XP!
A: A few jobs ago, I was in an interesting situation. As the original author of the company's ASMX web services, I was asked about versioning them. There was particular concern that I had based the services on a hand-made XSD and that nobody else really understood XSD that well.
I recommended to use WCF, as they won't need custom XML formats, and won't have to play with XSD, either: just define a data contract in code.
Even after I explained that .NET 3.5 SP1 amounts to a couple of service packs for .NET 2.0, plus some new assemblies, they still looked at me like I was about to become violent. Obviously, upgrading .NET versions is a lengthy and expensive process. Lessons from the .NET 1.1 to .NET 2.0 migration.
One small step in the right direction, as far as migration towards WCF: see ASMX Web Services are a “Legacy Technology”.
A: For government processing, the 3.5 framework is not yet approved for some environments so you're forced to use 2.0 or 3.0.
A: The biggest challenge is justifying the upgrade to the business, especially now we're in a global downturn.
There would have to be a specific advantage or technology in 3.5 that your project (probably greenfield) absolutely cannot do without, to push for it.
I am trying to put together a case for it in a large corporation, so the business benefits must out-weigh the status quo. In the current climate, 'maintaining talent' just won't cut it either, especially with a growing pool of candidates at large.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Replace keys in a tuple in Erlang I have a list of tuples, e.g. [{1,40},{2,45},{3,54},...,{7,23}], where 1...7 are days of the week (calculated with calendar:day_of_the_week()). Now I want to change the list to [{Mon,40},{Tue,45},{Wed,54},...,{Sun,23}]. Is there an easier way to do it than lists:keyreplace?
A: Simple. Use map and a handy tool from the httpd module.
lists:map(fun({A,B}) -> {httpd_util:day(A),B} end, [{1,40},{2,45},{3,54},{7,23}]).
A: ... or using a different syntax:
[{httpd_util:day(A), B} || {A,B} <- L]
where:
L = [{1,40},{2,45},{3,54}....{7,23}]
The construct is called a list comprehension, and reads as:
"Build a list of {httpd_util:day(A),B} tuples, where {A,B} is taken from the list L"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: In javaDoc, what's the best way of representing attributes in XML? When you're adding javaDoc comments to your code and you're outlining the structure of an XML document that you're passing back, what's the best way to represent attributes? Is there a best practice for this?
My general structure for my javaDoc comments is like this:
/**
* ...
*
* @return XML document in the form:
*
* <pre>
* <ROOT_ELEMENT>
* <AN_ELEMENT>
* <MULTIPLE_ELEMENTS>*
* </ROOT_ELEMENT>
* </pre>
*/
A: Not sure I clearly understand your question.
My preferred solution would be to embed the XSD schema or DTD in the description of the return parameter. Your solution seems to lead to personal idioms for representing things like multiple elements. Using a standard like XSD or DTD gives you a well-known and recognized language for describing the structure of an XML document.
Regarding the JavaDoc representation: if you are using Eclipse, you can specify under Save Actions that your document be formatted. This way you can write normally with > and < and see them converted to the escaped HTML codes.
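To illustrate the suggestion above, a doc comment embedding a DTD fragment might look like this (the element and attribute names are placeholders, and in real Javadoc source the angle brackets would need HTML escaping):

```java
/**
 * @return XML document conforming to this DTD fragment:
 *
 * <pre>
 * <!ELEMENT ROOT_ELEMENT (AN_ELEMENT)>
 * <!ELEMENT AN_ELEMENT (MULTIPLE_ELEMENTS*)>
 * <!ATTLIST AN_ELEMENT attribute1 CDATA #REQUIRED
 *                      attribute2 CDATA #IMPLIED>
 * </pre>
 */
```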
A: In the end, I just went with:
/**
* ...
*
* @return XML document in the form:
*
* <pre>
* <ROOT_ELEMENT>
* <AN_ELEMENT attribute1 attribute2>
* <MULTIPLE_ELEMENTS>*
* </ROOT_ELEMENT>
* </pre>
*/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Where can I get comdef.h? I downloaded some example code from the internet, but when I compiled it I ran into some trouble. My compiler tells me: comdef.h: No such file or directory.
I searched a bit on the internet, but I couldn't find anyone else with the same problem and I have no clue where I can obtain this header file.
I use codeblocks with the GNU GCC compiler.
A: The file is available with Visual Studio (you may also need to install the Platform SDK). You can get comdef.h from the Web, but you will surely have some trouble getting it to compile with your sources.
A: As other posters have said, comdef.h comes with Visual C++. It supplements the VC-specific builtin COM support. Since you say you're using GCC, you will probably have to adapt your code to use "low-level COM", since GCC doesn't have the kind of builtin COM support that VC has (in specific, using #import for importing type libraries into convenient wrapper classes).
A: The file should be available after installing the Microsoft Platform SDK. Don't know how well it works with GNU GCC though.
http://www.microsoft.com/msdownload/platformsdk/sdkupdate/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to output a boolean in T-SQL based on the content of a column? I made a view to abstract columns of different tables and pre-filter and pre-sort them. There is one column whose content I don't care about but I need to know whether the content is null or not. So my view should pass an alias as "true" in case the value of this specified column isn't null and "false" in case the value is null.
How can I select such a boolean with T-SQL?
A: for the column in the view you can use something like
CASE WHEN ColumnName is not null THEN 'True' ELSE 'False' END
or in a statement
SELECT
s.ID,
s.[Name],
CASE WHEN s.AchievedDate is not null THEN 'True' ELSE 'False' END [IsAchieved]
FROM Schools s
or for further processing afterwards I would personally use
SELECT
s.ID,
s.[Name],
CASE WHEN s.AchievedDate is not null THEN 1 ELSE 0 END [IsAchieved]
FROM Schools s
A: You have to use a CASE statement for this:
SELECT CASE WHEN columnName IS NULL THEN 'false' ELSE 'true' END FROM tableName;
A: I had a similar issue where I wanted a view to return a boolean column type based on whether an actual column was null or not. I created a user-defined function like so:
CREATE FUNCTION IsDatePopulated(@DateColumn as datetime)
RETURNS bit
AS
BEGIN
DECLARE @ReturnBit bit;
SELECT @ReturnBit =
CASE WHEN @DateColumn IS NULL
THEN 0
ELSE 1
END
RETURN @ReturnBit
END
Then the view that I created returns a bit column, instead of an integer.
CREATE VIEW testView
AS
SELECT dbo.IsDatePopulated(DateDeleted) as [IsDeleted]
FROM Company
A: You asked for boolean, which we call bit in T-SQL.
Other answers have either given you a varchar 'true' and 'false' or 1 and 0. 'true' and 'false' are obviously varchar, not boolean. I believe 1 and 0 would be cast as an integer, but it's certainly not a bit. This may seem nit-picky, but types matter quite often.
To get an actual bit value, you need to cast your output explicitly as a bit like:
select case when tableName.columnName IS NULL then cast(0 as bit)
       else cast(1 as bit) END as ColumnLabel from tableName
A: Or you can do like this:
SELECT RealColumn, CAST(0 AS bit) AS FakeBitColumn FROM tblTable
A: If you need the output as a boolean (bit):
CAST(CASE WHEN colName IS NULL THEN 0 ELSE 1 END as BIT) aIsBooked
A: I think this is slightly simpler than the other solutions:
SELECT Cast(ISNULL([column name], 0) AS BIT) AS IsWhatever
A: Since SQL Server 2012 you can use IIF:
IIF(columnName IS NULL, 'false', 'true')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
} |
Q: Date change notification in a Tkinter app (win32) Does anyone know if it is possible (and if yes, how) to bind an event (Python + Tkinter on MS Windows) to a system date change?
I know I can have .after events checking once in a while; I'm asking if I can somehow have an event fired whenever the system date/time changes, either automatically (e.g. for daylight saving time) or manually.
MS Windows sends such events to applications and Tkinter does receive them; I know, because if I have an .after timer waiting and I set the date/time after the timer's expiration, the timer event fires instantly.
A:
I know, because if I have an .after timer waiting and I set the date/time after the timer's expiration, the timer event fires instantly.
That could just mean that Tkinter (or Tk) is polling the system clock as part of the event loop to figure out when to run timers.
If you're using Windows, Mark Hammond's book notes that you can use the win32evtlogutil module to respond to changes in the Windows event log. Basically it works like this:
import win32evtlogutil
def onEvent(record):
# Do something with the event log record
win32evtlogutil.FeedEventLogRecords(onEvent)
But you'll need to get docs on the structure of the event records (I don't feel like typing out the whole chapter, sorry :-) ). Also I don't know if date changes turn up in the event log anyway.
Really, though, is it so bad to just poll the system clock? It seems easiest and I don't think it would slow you down much.
(finally, a comment: I don't know about your country, but here in NZ, daylight savings doesn't involve a date change; only the time changes (from 2am-3am, or vice-versa))
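If polling via .after is acceptable, one way to detect a manual or DST clock change between ticks is to compare wall-clock drift against a monotonic clock. This is only a sketch: time.monotonic postdates this thread, and the function name and threshold are illustrative, not part of any Tkinter API.

```python
import time

def clock_jumped(last_wall, last_mono, threshold=2.0):
    """Return True if the wall clock moved more than `threshold` seconds
    relative to the monotonic clock since the last sample was taken."""
    wall_delta = time.time() - last_wall
    mono_delta = time.monotonic() - last_mono
    return abs(wall_delta - mono_delta) > threshold

# In a Tkinter app, a poll() callback scheduled with root.after(1000, poll)
# would store the two clock readings each tick and fire a handler
# whenever clock_jumped(...) returns True.
```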
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can't find stuff in session that I just put there (but only sometimes) We have a strange problem occurring once in a while on our servers. It usually happens when one or more of our web applications are upgraded. Debugging the problem has gotten me this far...
During the processing of a request:
*
*In the ASP.NET application we put an object in session
*In code running later (same request) we look up that same session value. It's empty!
So it looks like the session service isn't working, right? This code runs hundreds of times a day, and never fails in development environments or in production situation, only related to upgrading the web application(s) on the web server.
And the strange thing: we haven't really found a proper way of fixing the situation either. IIS resets, ASP.NET state server stop/start, web.config edits, and even server reboots have all been used - normally a combination is needed to fix it, plus a lot of swearing and pulling of hair. And in most cases it isn't fixed right away, but maybe two or three minutes after the third IIS reset or whatever. (So it might not be what fixed it after all.)
I'm going crazy here. Any ideas what might be the problem? Is it a Microsoft bug?
Some more info:
*
*We're running under .NET 2.0
*We are using the ASP.NET state service
*The code accessing the session variable and getting back null is in an assembly referenced by the ASP.NET app. It uses HttpContext.Current to get at the session
A: HttpContext.Items are only available during the lifetime of the request, not the entire session, so that needs to be considered.
To answer the original question,
*
*When you add the item to session, can you retrieve it successfully on the next line?
*Is it server specific? i.e. one server can see the value in session but not the other?
*The dll that is trying to access the variable: is there any platform target difference on it, e.g. .NET 1.1 vs .NET 2.0?
A: Can you consider not using a session variable?
If you need the data in the same session, you can use HttpContext.Items which is much more lightweight than using a session variable.
Answering to your question:
Maybe the session is not created yet when you access it for the first time. It will throw an exception if it does not exist.
Check for
If Not IsNothing(Context.Session) Then
'do something
end if
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can I Override with derived types? As far as I know, it is not possible to do the following in C# 2.0:
public class Father
{
public virtual Father SomePropertyName
{
get
{
return this;
}
}
}
public class Child : Father
{
public override Child SomePropertyName
{
get
{
return this;
}
}
}
I worked around the problem by creating the property in the derived class as "new", but of course that is not polymorphic.
public new Child SomePropertyName
Is there any solution in 2.0?
What about any features in 3.5 that address this matter?
A: Modern answer
As of C# 9, return type covariance is supported. Here's a basic example copied from that link:
class Compilation ...
{
public virtual Compilation WithOptions(Options options)...
}
class CSharpCompilation : Compilation
{
public override CSharpCompilation WithOptions(Options options)...
}
A: From Wikipedia:
In the C# programming language, support for both return-type covariance and parameter contravariance for delegates was added in version 2.0 of the language. Neither covariance nor contravariance are supported for method overriding.
It doesn't explicitly say anything about covariance of properties though.
A: You can re-declare (new), but you can't re-declare and override at the same time (with the same name).
One option is to use a protected method to hide the detail - this allows both polymorphism and hiding at the same time:
public class Father
{
public Father SomePropertyName
{
get {
return SomePropertyImpl();
}
}
protected virtual Father SomePropertyImpl()
{
// base-class version
}
}
public class Child : Father
{
public new Child SomePropertyName
{
get
{ // since we know our local SomePropertyImpl actually returns a Child
return (Child)SomePropertyImpl();
}
}
protected override Father SomePropertyImpl()
{
// do something different, might return a Child
// but typed as Father for the return
}
}
A: This is not possible in any .NET language because of type-safety concerns. In type-safe languages, you must provide covariance for return values, and contravariance for parameters. Take this code:
class B {
S Get();
Set(S);
}
class D : B {
T Get();
Set(T);
}
For the Get methods, covariance means that T must either be S or a type derived from S. Otherwise, if you had a reference to an object of type D stored in a variable typed B, when you called B.Get() you wouldn't get an object representable as an S back -- breaking the type system.
For the Set methods, contravariance means that T must either be S or a type that S derives from. Otherwise, if you had a reference to an object of type D stored in a variable typed B, when you called B.Set(X), where X was of type S but not of type T, D::Set(T) would get an object of a type it did not expect.
In C#, there was a conscious decision to disallow changing the type when overriding properties, even when they have only one of the getter/setter pair, because it would otherwise have very inconsistent behavior ("You mean, I can change the type on the one with a getter, but not one with both a getter and setter? Why not?!?" -- Anonymous Alternate Universe Newbie).
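To see the safe (covariant return) direction in action: Java, unlike the C# of this era, does allow covariant return types on overrides (since Java 5). The following is only an illustration of the concept, with class names mirroring the question, not a C# workaround:

```java
class Father {
    public Father self() { return this; }
}

class Child extends Father {
    // Covariant return type: the override narrows Father to Child.
    @Override
    public Child self() { return this; }
}

public class CovariantReturnDemo {
    public static void main(String[] args) {
        // The override runs even through a base-typed reference,
        // and no cast is needed through a Child-typed reference.
        Father viaBase = new Child();
        System.out.println(viaBase.self().getClass().getSimpleName()); // prints Child
        Child c = new Child().self();
        System.out.println(c.getClass().getSimpleName()); // prints Child
    }
}
```

This is precisely the shape the question asks for; type safety is preserved because a Child is always usable wherever a Father is expected.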
A: You can create a common interface for father and child and return a type of that interface.
A: This is the closest I could come (so far):
public sealed class JustFather : Father<JustFather> {}
public class Father<T> where T : Father<T>
{
public virtual T SomePropertyName
{ get { return (T) this; }
}
}
public class Child : Father<Child>
{
public override Child SomePropertyName
{ get { return this; }
}
}
Without the JustFather class, you couldn't instantiate a Father<T> unless it was some other derived type.
A: No, but you can use generics in 2 and above:
public class MyClass<T> where T: Person
{
public virtual T SomePropertyName
{
get
{
return ...;
}
}
}
Then Father and Child are generic versions of the same class
A: No. C# does not support this idea (it's called "return type covariance").
You can however do this:
public class FatherProp
{
}
public class ChildProp: FatherProp
{
}
public class Father
{
public virtual FatherProp SomePropertyName
{
get
{
return new FatherProp();
}
}
}
public class Child : Father
{
public override FatherProp SomePropertyName
{
get
{
// override to return a derived type instead
return new ChildProp();
}
}
}
i.e. use the contract defined by the base class, but return a derived type. I have made a more detailed sample to make this point clearer - returning "this" again wouldn't change anything.
It is possible (but messy) to test the returned object for its actual type (i.e. "if someObject is ChildProp"), but it is better to call a virtual method on it that does the right thing for its type.
The base class virtual method (in this case, virtual property) not only has an implementation, but also defines a contract: that a child class can supply a different implementation of SomePropertyName if it meets this contract (i.e. SomePropertyName returns an object of type "FatherProp"). Returning an object of type "ChildProp" derived from "FatherProp" meets this contract. But you can't change the contract in "Child" - this contract applies to all classes descended from "Father".
If you take a step back and look at your broader design, there are other language constructs in the C# toolkit that you may also want to think about instead - Generics, or interfaces.
A:
No. C# does not support this idea (it's called "return type covariance").
From Wikipedia:
In the C# programming language, support for both return-type covariance and parameter contravariance for delegates was added in version 2.0 of the language. Neither covariance nor contravariance are supported for method overriding.
You can re-declare (new), but you can't re-declare and override at the same time (with the same name). One option is to use a protected method to hide the detail - this allows both polymorphism and hiding at the same time.
The best solution would be to use generics:
public class MyClass<T> where T: Person
{
public virtual T SomePropertyNameA
{
get { return ...; }
}
}//Then the Father and Child are generic versions of the same class
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Aborting upload from a servlet to limit file size I'd like to limit the size of the file that can be uploaded to an application. To achieve this, I'd like to abort the upload process from the server side when the size of the file being uploaded exceeds a limit.
Is there a way to abort an upload process from the server side without waiting for the HTTP request to finish?
A: With JavaEE 6 / Servlet 3.0 the preferred way of doing that would be to use the @MultipartConfig annotation on your servlet like this:
@MultipartConfig(location="/tmp", fileSizeThreshold=1024*1024,
maxFileSize=1024*1024*5, maxRequestSize=1024*1024*5*5)
public class UploadFileServiceImpl extends HttpServlet ...
A: You can do something like this (using the Commons library):
public class UploadFileServiceImpl extends HttpServlet
{
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException
{
response.setContentType("text/plain");
try
{
FileItem uploadItem = getFileItem(request);
if (uploadItem == null)
{
// ERROR
}
// Add logic here
}
catch (Exception ex)
{
response.getWriter().write("Error: file upload failure: " + ex.getMessage());
}
}
private FileItem getFileItem(HttpServletRequest request) throws FileUploadException
{
DiskFileItemFactory factory = new DiskFileItemFactory();
// Add here your own limit
factory.setSizeThreshold(DiskFileItemFactory.DEFAULT_SIZE_THRESHOLD);
ServletFileUpload upload = new ServletFileUpload(factory);
// Add here your own limit
upload.setSizeMax(DiskFileItemFactory.DEFAULT_SIZE_THRESHOLD);
List<?> items = upload.parseRequest(request);
Iterator<?> it = items.iterator();
while (it.hasNext())
{
FileItem item = (FileItem) it.next();
// Search here for file item
// Check the field name to find the file item (the name here is an example)
if (!item.isFormField() && "uploadFile".equals(item.getFieldName()))
{
return item;
}
}
return null;
}
}
A: You might try doing this in the doPost() method of your servlet
multi = new MultipartRequest(request, dirName, FILE_SIZE_LIMIT);
if(submitButton.equals(multi.getParameter("Submit")))
{
out.println("Files:");
Enumeration files = multi.getFileNames();
while (files.hasMoreElements()) {
String name = (String)files.nextElement();
String filename = multi.getFilesystemName(name);
String type = multi.getContentType(name);
File f = multi.getFile(name);
if (f.length() > FILE_SIZE_LIMIT)
{
//show error message or
//return;
return;
}
}
}
This way you don't have to wait to completely process your HttpRequest and can return or show an error message back to the client side. HTH
A: You can use the Apache Commons FileUpload library; it also permits limiting the file size.
http://commons.apache.org/fileupload/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Customized DataGridView column does not accept the entered decimal separator under Windows Vista For a project I built a custom DataGridView column which contains NumericUpDown controls. It is implemented similarly to the suggestion from Microsoft.
The column works fine under Windows XP. It accepts the entered digits and decimal separator.
Under Windows Vista I have the odd problem that the control only accepts the decimal separator entered by the numeric keypad but not from the keyboard main block.
I have to add that I work with German (Switzerland) culture settings under Windows Vista and the German (Switzerland) keyboard layout is activated. The decimal separator in Switzerland is "." (a period).
Does anyone have an idea about the reason, and maybe a solution? Thank you very much!
Michael
Edit:
I found the solution to my problem.
*
*To clarify the situation a little bit more. The NumericUpDown control I use implements IDataGridViewEditingControl and inherits from NumericUpDown. Because of IDataGridViewEditingControl I implement the method EditingControlWantsInputKey. And in the implementation of this method I found my mistake or what went wrong.
*In the method I inspected the entered keys and decided if the control had to handle it. But for the decimal separator I only expected Keys.Decimal. In my special (wrong) case the key could not be matched. What was missing was to look for Keys.OemPeriod too. And that was the fix.
A: Can you please paste your OnKeyDown and/or OnKeyPress code? At least the relevant key-filtering code. It will make be easier to spot out any problems.
BTW, I normally use both a British English and Brazilian Portuguese keyboards, so I've had my share of these issues. That kind of forces you to become a localization expert :)
Edit: Oh, sorry, just re-read and understood you are using the stock NumericUpDown control. Can you point me out to the column code so I can try it here? Probably the locale is not getting set for the control, and you'll have to manually do it at some point.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Partial .csproj Files Is it possible to split the information in a .csproj across more than one file? A bit like a project version of the partial class feature.
A: Yes, you can split information across several files. You can use Import Element (MSBuild).
Note that Visual Studio will give you an annoying security warning if you try to open a project file that includes other project files.
Useful linky from MSDN:
How to: Use the Same Target in Multiple Project Files
Note that external files have a .targets extension by convention.
A: You cannot have more than one master csproj. But because the underlying wiring of the csproj is done using MSBuild, you can simply have multiple partial csproj files that import each other. The solution file would see the most derived csproj.
project1.csproj
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
....
</Project>
project2.csproj
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="project1.csproj" />
...
</Project>
project.csproj - this is the main project that is referred by the solution file.
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="project2.csproj" />
...
</Project>
Bottom line is that using msbuild Import feature you can have partial csproj files where each one would contain definitions that the main project (project.csproj in my example) would use.
Visual Studio will show a Security Warning dialog for the project when you open your changed solution or project file. Choose the option Load Project Normally and press OK. When opening the solution again later the warning will not be shown, because the choice to Load Project Normally is stored in the .suo file.
A: Well you can have multiple projects combined into one large solution, but I don't think that is quite what you had in mind as each project has to be a complete project in that case.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: What does the term "input-synchronized calls" mean? I found this in an article on Multithreaded Apartments, but can’t find a definition for “input-synchronized calls”.
(Article is at http://msdn.microsoft.com/en-us/library/ms693421(VS.85).aspx)
As used in the article
- Multithreaded apartments cannot make input-synchronized calls
What are “input-synchronized" calls?
Thanks
A: Have a look at this article here.
Input-synchronized calls - When making input-synchronized calls, the object called must complete the call before yielding control. This helps ensure that focus management works correctly and that data entered by the user is processed appropriately. These calls are made by COM through the Windows SendMessage function, without entering a modal loop. While processing an input-synchronized call, the object called must not call any function or method (including synchronous methods) that might yield control.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to do something with Bash when a text line appears in a file I want to run a command as soon as a certain text appears in a log file. How do I do that in Bash?
A: Using only tail:
tail -f file.log | while read line; do if [[ $line == *text* ]]; then
mycommand
fi; done
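A self-contained way to try this pattern without a live log (the temp file and the string "ERROR" are placeholders; -f is dropped so the demo terminates once the existing lines are read, and a portable case stands in for the bash-only [[ ]] test):

```shell
logfile=$(mktemp)
printf 'starting up\nERROR: disk full\n' > "$logfile"

# Same loop shape as above, but plain tail so the pipeline ends.
result=$(tail -n +1 "$logfile" | while read -r line; do
    case $line in
        *ERROR*) echo "matched: $line"; break ;;
    esac
done)

echo "$result"
rm -f "$logfile"
```

With tail -f instead, the loop keeps watching the file and runs the command each time a new matching line is appended.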
A: Use command
tail -f file.log | grep --line-buffered "my pattern" | while read line
do
echo $line
done
The --line-buffered is the key here; otherwise grep buffers its output and the read won't see matching lines promptly.
A: This should work even without GNU grep:
tail -f -n 0 logfile.out | nawk '/pattern/ {system("echo do something here")}'
edit: Added "-n 0" so that only new occurrences of the text will be matched.
A: Also you might look at inotail, a replacement for tail -f which uses the inotify framework to wake up only when the file you're interested in has changed. The usual tail -f just sleeps for short periods of time between polling, which is an effective but not very efficient solution.
A: I like matli's answer. Bruno De Fraine's answer is also good in that it uses only shell commands, not other programs (like awk). It suffers from the problem that the entire line must match the magic string. It's not clear from the question whether that's part of the requirement.
I would modify it a tiny bit to deal with the "as soon as" clause in the original question
logfile_generator | tee logfile.out | nawk '/pattern/ {system("echo do something here")}'
where logfile_generator is the program that is generating the log file in the first place. This modification executes the "something" as soon as the magic string is located.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Websphere with Classic ASP Is it practical (or possible) to create a Websphere Portlet for a Classic ASP website?
A: Websphere is Java based, but you can mix any web content using IFRAMEs or Ajax to inject HTML in your ASP page.
Practical? I don't think so.
A: Probably, but you would still need to have separate hosting environments for your Java portlets and your IIS ASP applications, even if called within an IFrame.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do you setup a shared Working Copy in Subversion I still very new using Subversion.
Is it possible to have a working copy on a network-available share (c:\svn\projects\website) that everyone (in this case 3 of us) can check out and commit files to? We don't need a build server because it is an ASP site and the designers are used to having immediate results when they save a file. I could try and show them how to set it up locally on their machines, but if we could just share the files on the development server and still have the ability to commit when someone is done, that would be ideal.
An easy solution would be for all of us to use the same subversion username and that would at least allow me to put files under version control.
But is it possible to check out a folder from the SVN repository but still require each person to log in with their user/pass to commit?
EDIT: I'm trying to take our current workflow, which is editing the LIVE version of a site using FrontPage Extensions or FTP, and move it to something BETTER. In this case, a copy of the live site on a development server that I set up to mirror the live server, with FrontPage Extensions access removed. Then the designers can still have the same effect of instant gratification, but I will not have to worry that they are editing the live files. Even using a shared user/pass in Subversion is still version control. It may not be ideal, and if the designers were actually programmers I would try to get them fully on board, but that's just not the case. This is the best I can do in this case while avoiding a huge learning curve and work stoppage.
A: In my experience it will work just fine out of the box. At my company we have had this setup for a number of years and not experienced any problems (outside the obvious ones of having a shared working copy).
You should however look into having separate working copies and a trigger (hook) that updates the shared location on commits if you need a "live" version of the site.
A: You can't check out a working copy; a working copy is the term used for code that has already been checked out. If you are asking multiple developers to work with the same set of working files at the same time, then you are seriously undermining one of the main uses of having a version control system, which is to allow your developers to make changes independently of one another without breaking things for anyone else.
That said, if you really want to do this you can. With a Linux server, the way to go is to have each of your users running a different SSH user agent (for Windows machines we use Pageant) with a different SSH identity for each user. Then have the SVN server recognize the SSH tunnels from different identities as being from different users. Unfortunately, I don't know how to set that up in Windows.
A: You need to use svnserve (light-weight SVN server that comes with SVN) or apache mod.
With it, you can configure permissions like this:
[general]
password-db = userfile
realm = example realm
# anonymous users can only read the repository
anon-access = read
# authenticated users can both read and write
auth-access = write
A: I know this is an old thread, but I found it because I'm trying to do the same thing.
I don't think this problem can be solved with subversion, though I've tried on multiple occasions and I think the need is completely legitimate.
The use case that comes up repeatedly for us is complicated configuration files that are maintained outside of an application.
A good example might be apache httpd.conf files. They're complicated and we want to track changes to the files. We don't want everyone to use "root" because then we can't track who did what.
Mercurial can do this:
Mercurial Multiple Committers
A: Working copies are meant for each user to have his own. And the repositories are shared. Only the person who checked out the WC can commit changes from it.
A: Maybe you could look at something other than Subversion if you don't want a server, like a distributed VCS (Bzr, Git, and Mercurial are popular these days), or you should look at Subversion hosting services.
Sharing a single working copy is not recommended. That really defeats the purpose of version control. Please don't do that.
A: You are meant to have your SVN repository, and each user (with their own username and password to access the SVN repository) should check out their own working copy.
It is possible to do this on a single PC (is that your problem, multiple developers sharing PCs?) by having different PC user accounts and having the people check out into their own account, or even by sharing a PC user account and having the people check out into different working folders. I don't think this is particularly neat or nice, and if a company can't afford a PC per developer these days then it is hardly worth working for!
I recommend:
*
*Each employee has their own working copy on their own PC.
*There should be an Ant or Maven or similar build script that will allow a developer to build and deploy from their working copy onto the development web server so they can see how it is. This could be as simple as "copy files to this shared location".
*As each employee has their own SVN username/password you can see who made which change, and lock people out when they leave the company.
*This might create a process that a designer has to follow rather than the anarchy you have currently, but if it takes any of them more than half a day to pick up SVN and how to run a build script to deploy to the development web server then you've got bigger problems.
A: I think you're selling your team short. Non-devs can easily learn to use SVN, especially with something like TortoiseSVN. If you're gonna go through the trouble of setting up SVN, then just give everyone a separate login and let them work on their own local working copy. Do a quick and dirty CruiseControl automated build to pull from SVN to create the staging site content. It's just a little more work and the result will be so much nicer.
A: I know this is an old question/wiki, but a solution that occurred to me is as follows.
The problem could be restated as follows: How can we use a real source control system (like SVN), and still allow non-technical designers to enjoy the same save-preview-save-preview cycle that they've come to know and love?
Points:
*
*In order to realize the full benefit of SVN, each user needs to have their own working copy. That's just the way it is.
*If you've still got classic ASP (NOT .NET) in your app, a lightweight local server like Cassini won't do the job.
*It'd be preferable for the user to not have to install or learn to use an SVN client like the (admittedly-simple) Tortoise.
My approach:
*
*Create a branch in SVN for each user
*On each client, provide a mapped drive to a user-specific working directory on your dev server. They will use FrontPage, Expression, SharePoint designer or whatever to make their changes here.
*In IIS on the dev server, create a user-specific website (for example, alice.www.mysite.com or bob.www.mysite.com) with a user-specific host header. They will browse the site through this URL to see their changes. This also allows them to show their changes to others before merging it into the trunk.
*Using CruiseControl.NET, provide tasks to check out, update, add and commit changes for each user's branch. Figure out how to make this so that each user can only see their own tasks.
*Using CruiseControl.NET, create a task that will merge their changes into the trunk
*Using CruiseControl.NET, create a task that will update the real dev site (dev.www.mysite.com) with the merged changes. This site will show everyone's work combined, and acts as a staging and debugging area. If you are using a WAP, you'll want this task to trigger a build as well.
Sounds like a lot of steps, but it's really pretty simple. Create branches, map drives, set up new IIS sites and let them go crazy.
Under the covers, this is exactly the same as giving them a local IIS install and letting them commit their changes with SVN whenever necessary, just as developers do. The difference is that their working copy is on a server, IIS/Cassini doesn't have to be on their box, and they'll use a web interface to perform SVN actions like commit, update, etc.
Good luck!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: PHP Development - lot of (newbie) questions I'm an Engineering student and I'm attending a Database and Information Systems class this semester. It's required that I produce a website/application that uses a database, using PHP/PGSQL. My questions are:
*
*which IDE would you recommend?
*does anyone have good tips and advices for a new developer?
*it would help me (a lot) to develop this project while attending to some more "academic" aspects of the subject, such as the Entity/Association Model, etc. Are there any good tools to help structure my work?
Thanks!
EDIT: A few notes:
*
*I forgot to ask one last thing, I tried installing BitNami's WAPP Stack. Does anyone know how good and/or reliable it is?
*I'm actually working under Windows Vista Business (new laptop :S ). Would you recommend developing under Linux for any specific reason?
A: *
*IDE: I recommend PSPad for its great FTP features and syntax highlighting for PHP
*Tip: Go through the PHP documentation for mysql or whatever database you are using, the PHP documentation is the best tool you have for learning it.
*Tip: Keep data simple; it's always mutable to something else. For example, store time as a Unix timestamp, since PHP's date() function can turn it into anything you want.
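The "store a Unix timestamp, format it later" tip is language-neutral; a quick, hedged Python sketch of the same round trip (Python's strftime plays the role PHP's date() plays in the tip; the sample date is illustrative):

```python
# Store a plain Unix timestamp; format it only at display time.
from datetime import datetime, timezone

dt0 = datetime(2008, 9, 30, 9, 42, tzinfo=timezone.utc)
stored = int(dt0.timestamp())                  # what goes in the database

# Later, turn the stored integer back into any display format you want.
pretty = datetime.fromtimestamp(stored, tz=timezone.utc).strftime("%Y-%m-%d %H:%M")
```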
EDIT to add linux vs windows tips
*
*I have developed on both Windows and Linux machines, have hosted PHP on both Linux and Windows, and for my type of development (CMSs and websites built on those CMSs) I prefer developing on Windows and hosting on Linux. This is due to the stability of Linux and the tools I can use reliably on Windows (Photoshop, mainly)
A: I would recommend a plain text editor rather than an IDE. You should use one with syntax highlighting such as Notepad++.
Tips:
*
*Use Firefox
*Play around with some test databases. The biggest mistake made when teaching or learning databases is to focus on theory without actual data.
A: A good IDE for PHP is PDT, an Eclipse plugin.
A: This is probably the only time in your career when you have the full freedom to chose what tools to use, so make the best use of it. Learn some of the classic tools that will go with you a long long way.
So instead of using an IDE, which you'll probably do for the rest of your professional life, get a taste of old-school editors like vim/emacs. One advantage here is that the editor will not hide all the details of getting your project to work; knowing the full technology stack is always a plus.
For any technology that you'll be using try and get a good broad perspective before diving in to the implementation details, so for PHP I would suggest getting a grasp of XHTML, CSS and Javascript including libraries like jQuery; Object Relational Mapping (Take a look at Ruby on Rails, CakePHP, Django and SQL Alchemy) and Model View Controller Frameworks on various platforms.
For PGSQL in addition to normalization try to get into the depths of information_schema and the transaction isolation levels and when they're useful.
Also important is understanding how the HTTP protocol works at a low level and how highly scalable websites can be built using HTTP.
Rather than relying on tools, I would say just create a reading list on the topics mentioned above; that will automatically structure your thought process to take these kinds of issues into account.
A: *
*which IDE would you recommend?
Anything that supports remote debugging. You will save yourselves hours and hours and learn so much quicker if you can actually step through your code. It always amazes me that more people don't use good debugging tools for PHP. The tools are there, not using them is crazy. FWIW I've always been a devotee of Activestate Komodo - fantastic product.
*
*does anyone have good tips and advices for a new developer?
*
*get test infected. It will stand you in good stead in the future, and will force you to think about design issues properly. In fact the benefits are many and the drawbacks few.
*learn to refactor, and make it part of your development "rhythm".
*related to this: think ahead, but don't program ahead. Be aware that something you are writing will probably need to be bubbled up the class hierarchy so it is available more generically, but don't actually do the bubbling up until you need it.
*it would help me (a lot) to develop this project while attending to some more "academic" aspects of the subject, such as the Entity/Association Model, etc. Are there any good tools to help structure my work?
Learn about design patterns and apply the lessons you have learned from them. Don't program the "PHP4" way.
*
*I forgot to ask one last thing, I tried installing BitNami's WAPP Stack. Does anyone know how good and/or reliable it is?
No idea, but if you have the time I'd avoid a prebuilt stack like WAMPP. It's important to understand how the pieces fit together. However, if you're running on Windows, you may not have time and your energy could be better focused on writing good code than working out how to install PHP, PostgreSQL and Apache.
*
*I'm actually working under Windows Vista Business (new laptop :S ). Would you recommend developing under Linux for any specific reason?
Yes I would. Assuming you are deploying on Linux (if you are deploying on Windows I'd be asking myself some serious questions!), then developing in the same environment is incredibly useful. I switched for that reason in 2005 and it was one of the most useful things I did development wise. However if you're a total *nix newbie and are under tight time constraints maybe stick with what you know. If you have time to try things out, you'll find it pretty easy to get up and running with a good modern Linux desktop distro and the development work will fly along.
A: My recommendations:
*
*No IDE - just a basic syntax-highlighting text editor (I use jEdit)
*Understand XSS and SQL injection
*There are lots of good frameworks under PHP that will help
A: I recommend NetBeans. It's free, available for all platforms, and good for editing PHP, JSP, Java, CSS, HTML, ...
It's good for SVN and Mercurial, plus you can integrate it easily with kenai.com...
It helps with IntelliSense-style pop-ups.
Believe me, I'm using it for PHP development and it's the best-suited IDE I can find...
A: *
*IDE: Quanta+
*tip: don't use a template library over a template language (PHP)
*tip: MVC is a design and mentality issue, not a library
A: The best editors you get on Windows are Notepad++ and Eclipse. Both are good, but can't hold a candle to Kate and Quanta+; for that alone, I'd ditch Windows. Also, it's nice to have both the development and a real test environment on the same system, and even if most OSS is available on Windows, it's always a square peg in a round hole.
A: ide: vim + (firefox+firebug)
using an ide with php, for the most part, is overkill
other tools: pgadmin3
design your tables so they are easy to query
If you have an extra box, I would put Linux on it if you want to try it out. Ubuntu is a good starter distro with a simple LAMP set-up process. I wouldn't do anything to that Vista laptop though, because it will allow you to test in IE and Firefox.
A: Have you looked at Delphi for PHP (<http://www.codegear.com/products/delphi/php>) ?
Joe Stagner of Microsoft really likes Delphi for PHP.
He says it here: "[Delphi for PHP] 2.0 is the REAL DEAL and I LOVE IT !"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Create a .eml (email) file in Java Does anybody know how to do this? I have all the information for the email (body, subject, from, to, cc, bcc) and need to generate an .eml file from it.
A: EML files are just plain text files. The headers are separated from the body by a blank line. Headers look like this:
From: "DR CLEMENT OKON" <drclement@nigerianspam.com>
To: "You" <you@yourdomain.com>
Subject: REQUEST FOR URGENT BUSINESS RELATIONSHIP
Date: Tue, 30 Sep 2008 09:42:47 -0400
For more info, the official spec is RFC 2822. It's actually not as hard to read as some RFCs.
Edit: When I said "plain text" I should have thought for a second. I really meant plain ASCII - and not the 8-bit "extended ASCII" either - just up to character 127. If you want more than seven bits, you need some kind of encoding and things get complicated.
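Since a .eml file is just text, the layout described above (headers, a blank line, then the body) can be sketched in any language; here is a minimal, hedged illustration using Python's stdlib email package, with the sample addresses taken from the answer (the question itself is about Java, where JavaMail plays the same role):

```python
# Build a message and serialize it to the raw text a .eml file contains.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = '"DR CLEMENT OKON" <drclement@nigerianspam.com>'
msg["To"] = '"You" <you@yourdomain.com>'
msg["Subject"] = "REQUEST FOR URGENT BUSINESS RELATIONSHIP"
msg.set_content("Urgent business proposal follows.")

eml_text = msg.as_string()                 # exactly what a .eml file holds
headers, body = eml_text.split("\n\n", 1)  # headers / blank line / body
```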
A: You can create .eml files with the following code. It works fine with Thunderbird and probably with other email clients:
public static void createMessage(String to, String from, String subject, String body, List<File> attachments) {
    try {
        Message message = new MimeMessage(Session.getInstance(System.getProperties()));
        message.setFrom(new InternetAddress(from));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(to));
        message.setSubject(subject);
        // create the message part
        MimeBodyPart content = new MimeBodyPart();
        // fill message
        content.setText(body);
        Multipart multipart = new MimeMultipart();
        multipart.addBodyPart(content);
        // add attachments
        for (File file : attachments) {
            MimeBodyPart attachment = new MimeBodyPart();
            DataSource source = new FileDataSource(file);
            attachment.setDataHandler(new DataHandler(source));
            attachment.setFileName(file.getName());
            multipart.addBodyPart(attachment);
        }
        // integration
        message.setContent(multipart);
        // store file
        message.writeTo(new FileOutputStream(new File("c:/mail.eml")));
    } catch (MessagingException ex) {
        Logger.getLogger(Mailkit.class.getName()).log(Level.SEVERE, null, ex);
    } catch (IOException ex) {
        Logger.getLogger(Mailkit.class.getName()).log(Level.SEVERE, null, ex);
    }
}
A: You can construct a javax.mail.Message object (or have it already constructed from the mail server) and then use the writeTo() method to save it to a file.
See JavaMail API for more information.
A: Looking at a typical EML file it looks like a raw dump of the text communication that went to the server. So it is a text file containing the mail headers and body. To get your attachments, different views, etc in the correct format inside the EML file you need to MIME-encode the body and its parts.
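The MIME encoding described above can be sketched briefly with Python's stdlib email package (a hedged, language-neutral illustration of the same structure the JavaMail answer builds; names and content are made up): the body and each attachment become separately encoded parts of one multipart message.

```python
# A multipart message: one text part plus one base64-encoded attachment.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"
msg["Subject"] = "Report attached"
msg.set_content("See the attached file.")   # plain-text body part
msg.add_attachment(b"fake binary data",     # attachment part; the library
                   maintype="application",  # base64-encodes the bytes and
                   subtype="octet-stream",  # sets Content-Disposition
                   filename="report.bin")

eml_text = msg.as_string()                  # raw text for the .eml file
```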
A: If you want to add HTML content, you have to add
content.setHeader("Content-Type", "text/html");
(as Marco Sulla said) but also change
message.setContent(multipart);
to
message.setContent(multipart,"text/html");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Double checked locking Article I was reading this article about "Double-Checked Locking" and, aside from the main topic of the article, I was wondering why at some point the author uses the following idiom:
Listing 7. Attempting to solve the out-of-order write problem
public static Singleton getInstance()
{
    if (instance == null)
    {
        synchronized(Singleton.class) {      //1
            Singleton inst = instance;       //2
            if (inst == null)
            {
                synchronized(Singleton.class) {  //3
                    inst = new Singleton();      //4
                }
                instance = inst;                 //5
            }
        }
    }
    return instance;
}
And my question is:
Is there any reason to synchronize some code twice with the same lock?
Does this serve any purpose?
Many thanks in advance.
A: Jon Skeet is right: read Bill Pugh's article. The idiom that Hans uses is the precise form that won't work, and should not be used.
This is unsafe:
private static Singleton instance;

public static Singleton getInstance() {
    if (instance == null) {
        synchronized(Singleton.class) {
            if (instance == null) {
                instance = new Singleton();
            }
        }
    }
    return instance;
}
This is also unsafe:
public static Singleton getInstance()
{
    if (instance == null)
    {
        synchronized(Singleton.class) {      //1
            Singleton inst = instance;       //2
            if (inst == null)
            {
                synchronized(Singleton.class) {  //3
                    inst = new Singleton();      //4
                }
                instance = inst;                 //5
            }
        }
    }
    return instance;
}
Don't do either of them, ever.
Instead, synchronise the whole method:
public static synchronized Singleton getInstance() {
    if (instance == null) {
        instance = new Singleton();
    }
    return instance;
}
Unless you're retrieving this object a zillion times a second the performance hit, in real terms, is negligible.
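The "synchronise the whole accessor" advice is easy to mirror in other languages; here is a hedged Python sketch using threading.Lock as an analogue of Java's synchronized keyword (an illustration only, not code from the article):

```python
# Every call to get_instance() takes the lock -- simple and safe, and
# the cost is negligible unless the accessor is extremely hot.
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        with cls._lock:                  # whole accessor is serialized
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

# Demonstrate that concurrent callers all see the same object.
results = []
threads = [threading.Thread(target=lambda: results.append(Singleton.get_instance()))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```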
A: I cover a bunch of this here:
http://tech.puredanger.com/2007/06/15/double-checked-locking/
A: The point of locking twice was to attempt to prevent out-of-order writes. The memory model specifies where reorderings can occur, partly in terms of locks. The lock ensures that no writes (including any within the singleton constructor) appear to happen after the "instance = inst;" line.
However, to go deeper into the subject I'd recommend Bill Pugh's article. And then never attempt it :)
A: The article refers to the pre-5.0 Java memory model (JMM). Under that model leaving a synchronised block forced writes out to main memory. So it appears to be an attempt to make sure that the Singleton object is pushed out before the reference to it. However, it doesn't quite work because the write to instance can be moved up into the block - the roach motel.
However, the pre-5.0 model was never correctly implemented. 1.4 should follow the 5.0 model. Classes are initialised lazily, so you might as well just write
public static final Singleton instance = new Singleton();
Or better, don't use singletons for they are evil.
A: Following Jon Skeet's recommendation:
However, to go deeper into the subject I'd recommend Bill Pugh's article. And then never attempt it :)
And here is the key for the second sync block:
This code puts construction of the Helper object inside an inner synchronized block. The intuitive idea here is that there should be a memory barrier at the point where synchronization is released, and that should prevent the reordering of the initialization of the Helper object and the assignment to the field helper.
So basically, with the inner sync block, we are trying to "cheat" the JMM by creating the instance inside the sync block, to force the JMM to perform that allocation before the sync block finishes. But the problem here is that the JMM is ahead of us and may move the assignment that sits outside the sync block into it, moving our problem back to the beginning.
This is what I understood from those articles; really interesting, and once more thanks for the replies.
A: All right, but the article said that
The code in Listing 7 doesn't work because of the current definition of the memory model. The Java Language Specification (JLS) demands that code within a synchronized block not be moved out of a synchronized block. However, it does not say that code not in a synchronized block cannot be moved into a synchronized block.
And it also seems that the JVM effectively translates it to the following pseudo-code:
public static Singleton getInstance()
{
    if (instance == null)
    {
        synchronized(Singleton.class) {      //1
            Singleton inst = instance;       //2
            if (inst == null)
            {
                synchronized(Singleton.class) {  //3
                    //inst = new Singleton();    //4
                    instance = new Singleton();
                }
                //instance = inst;               //5
            }
        }
    }
    return instance;
}
So, the goal of having no writes after "instance = inst" is not accomplished?
I will read the article now; thanks for the link.
A: Since Java 5, you can make double-checked locking work by declaring the field volatile.
See http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html for a full explanation.
A: Regarding this idiom there is a very advisable and clarifying article:
http://www.javaworld.com/javaworld/jw-02-2001/jw-0209-double.html?page=1
On the other hand, I think what dhighwayman.myopenid means is to ask why the writer has put one synchronized block referring to the class (synchronized(Singleton.class)) within another synchronized block referring to the same class. It may happen because a new reference (Singleton inst = instance;) is assigned within that block, and to guarantee it is thread-safe it's necessary to write another synchronized block.
Otherwise, I can't see any sense in it.
A: See the Google Tech Talk on the Java Memory Model for a really nice introduction to the finer points of the JMM. Since it is missing here, I would also like to point out Jeremy Manson's blog 'Java Concurrency', esp. the post on double-checked locking (anyone who is anything in the Java world seems to have an article on this :).
A: For Java 5 and later there is actually a double-checked variant that can be better than synchronizing the whole accessor. This is also mentioned in the "Double-Checked Locking is Broken" declaration:
class Foo {
    private volatile Helper helper = null;

    public Helper getHelper() {
        if (helper == null) {
            synchronized(this) {
                if (helper == null)
                    helper = new Helper();
            }
        }
        return helper;
    }
}
The key difference here is the use of volatile in the variable declaration - otherwise it does not work, and it does not work in Java 1.4 or less, anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: how to send email with graphic via php I would like to send HTML email with graphic elements included. I have no idea how to attach graphics to this email.
A: You probably don't want to do an inline attachment by hand, it's far easier, and less error prone to use a library, like PHPMailer.
It can attach the inline images, or if you give it some HTML code, it will attach the images by itself and modify the code to display them.
A: You can try Swift Mailer
A: I'm not going to bore you with a mediocre explanation here so instead let me link to this great tutorial over at Sitepoint which explained it to me in plain English! - advanced-email-php
A: The short version is that you are probably best off creating an HTML-formatted message and using the header parameter of the PHP mail function.
$headers = "From: sender@example.com\r\n" .
           "MIME-Version: 1.0\r\n" .
           "Content-type: text/html; charset=iso-8859-1";

mail('to@example.com', 'subject line', 'your message text <strong>with HTML in it</strong>', $headers);
The sitepoint.com article referenced by Jimmy provides an excellent and complete description of your options.
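For inline graphics specifically, the message needs a multipart/related structure in which the HTML part references each image by its Content-ID. A hedged sketch with Python's stdlib email package, just to illustrate the MIME structure that libraries like PHPMailer build for you (addresses are placeholders, and the "image" bytes here are fake, not a real GIF):

```python
# HTML body that references an inline image via cid:, plus the image
# itself as a related part carrying the matching Content-ID header.
from email.message import EmailMessage
from email.utils import make_msgid

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "subject line"

image_cid = make_msgid()                      # "<unique-id@host>"
msg.set_content(
    f'<p>Logo: <img src="cid:{image_cid[1:-1]}"></p>',
    subtype="html",
)
msg.add_related(b"GIF89a-placeholder-bytes",  # pretend image data
                maintype="image", subtype="gif",
                cid=image_cid)

raw = msg.as_string()                          # ready for the wire
```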
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Open-source fractal maps I'm interested in creating a game that uses fractal maps for more realistic geography. However, the only fractal map programs I have found are Windows-only, for example Fractal Mapper. Needless to say, they are also not open-sourced.
Are there any open-sourced fractal map creators available, preferably in Python or C/C++? Ideally I would like something that can be "plugged into" a program, rather than being standalone.
A: Fracplanet may be of use.
A: Basic terrain generation involves creating a height map (an image) and rendering it using the pixel colour as height. So you may find image or texture generation code useful. This is a good tutorial.
A: For the terrain aspect take a look at libnoise.
It's packaged for Debian, and has excellent documentation with a chapter on terrain generation with example C++ code.
Of course there's a lot more to "maps" than slapping some colours on a height field (for example Fracplanet adds rivers and lakes). And the sort of terrain you get from these methods isn't actually that realistic; continents don't generally ramp up from the coast into a rocky hinterland, so maybe simulating continental drift and mountain building and erosion processes would help (alternatively, fake it). And then if you want vegetation, or the artefacts of lifeforms (roads and towns, say) to populate your map you might want to look at cellular automata or other "artificial life" tools. Finally, the Virtual Terrain Project is well worth a browse for more links and ideas.
A: I'd highly recommend purchasing a copy of
Texturing & Modeling: A Procedural Approach
I see it's now in its third edition (I only have the second) but it's packed full of useful articles about the use of procedural texturing, including several chapters on their use in fractal terrains. It starts out with extensive discussion of noise algorithms too - so you have everything from the basics upwards. The authors include Musgrave, Perlin and Worley, so you really can't do better.
A: If you want truly realistic geography, you could use NASA's SRTM dataset, perhaps combined with OpenStreetMap features. :-)
A: A very simple implementation would be to use the midpoint displacement fractal, http://en.wikipedia.org/wiki/Diamond-square_algorithm, or the somewhat more complicated diamond-square algorithm.
http://www.gameprogrammer.com/fractal.html#diamond
These are similar algorithms to the "Difference cloud" in Photoshop.
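A hedged, minimal sketch of 1-D midpoint displacement (the diamond-square algorithm linked above generalizes the same idea to a 2-D grid); the function name and parameters here are my own illustration:

```python
# Repeatedly insert midpoints between neighbouring heights, nudging
# each by a random offset whose magnitude halves at every level --
# that halving is what gives the result its fractal character.
import random

def midpoint_displacement(levels, roughness=1.0, seed=42):
    random.seed(seed)
    heights = [0.0, 0.0]          # the two endpoints of the terrain
    spread = roughness
    for _ in range(levels):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + random.uniform(-spread, spread)
            refined += [a, mid]   # keep the left point, insert a midpoint
        refined.append(heights[-1])
        heights = refined
        spread /= 2               # less displacement at finer detail
    return heights

terrain = midpoint_displacement(4)   # 2 points -> 3 -> 5 -> 9 -> 17
```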
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How can I support the support department better? With the best will in the world, whatever software you (and I) write will have some kind of defect in it.
What can I do, as a developer, to make things easier for the support department (first line, through to third line, and development) to diagnose, workaround and fix problems that the user encounters.
Notes
*
*I'm expecting answers which are predominantly technical in nature, but I expect other answers to exist.
*"Don't release bugs in your software" is a good answer, but I know that already.
A: *
*Log as much detail about the environment in which you're executing as possible (probably on startup).
*Give exceptions meaningful names and messages. They may only appear in a stack trace, but that's still incredibly helpful.
*Allocate some time to writing tools for the support team. They will almost certainly have needs beyond either your users or the developers.
*Sit with the support team for half a day to see what kind of thing they're having to do. Watch any repetitive tasks - they may not even consciously notice the repetition any more.
*Meet up with the support team regularly - make sure they never resent you.
A: Technical features:
*
*In the error dialogue for a desktop app, include a clickable button that opens up an email and attaches the stack trace and log, including system properties.
*On an error screen in a webapp, report a timestamp including nanoseconds plus an error code, pid, etc., so server logs can be searched.
*Allow log levels to be dynamically changed at runtime. Having to restart your server to do this is a pain.
*Log as much detail about the environment in which you're executing as possible (probably on startup).
Non-technical:
*
*Provide a known issues section in your documentation. If this is a web page, then this correspond to a triaged bug list from your bug tracker.
*Depending on your audience, expose some kind of interface to your issue tracking.
*Again, depending on audience, provide some forum for the users to help each other.
*Usability solves problems before they are a problem. Sensible, non-scary error messages often allow a user to find the solution to their own problem.
Process:
*
*watch your logs. For a server side product, regular reviews of logs will be a good early warning sign for impending trouble. Make sure support knows when you think there is trouble ahead.
*allow time to write tools for the support department. These may start off as debugging tools for devs, become a window onto the internal state of the app for support, and even become power tools for future releases.
*allow some time for devs to spend with the support team: listening to customers on a support call, going out on site, etc. Make sure that the devs are not allowed to promise anything. Debrief the dev after doing this - there may be feature ideas there.
*where appropriate provide user training. An impedance mismatch can cause the user to perceive problems as being with the software, rather than with their mental model of the software.
A: If you have at least a part of your application running on your server, make sure you monitor logs for errors.
When we first implemented a daily script which greps for ERROR/Exception/FATAL and sends the results by email, I was surprised how many issues (mostly tiny) we hadn't noticed before.
This helps in that you notice some problems yourself before they are reported to the support team.
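The grep-the-logs idea sketched in Python (the original was a shell grep whose results were emailed daily; the patterns are the ones named above, while the sample lines are illustrative):

```python
# Pull out log lines matching the trouble patterns ERROR/Exception/FATAL.
import re

TROUBLE = re.compile(r"ERROR|Exception|FATAL")

def scan_log(lines):
    """Return only the log lines that match a trouble pattern."""
    return [line for line in lines if TROUBLE.search(line)]

sample_log = [
    "2008-09-30 09:42 INFO  service started",
    "2008-09-30 09:43 ERROR db connection refused",
    "2008-09-30 09:44 INFO  retrying",
    "2008-09-30 09:45 FATAL out of memory",
]
hits = scan_log(sample_log)
```

In the original setup the matching lines would then be emailed to the team rather than merely collected.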
A: Make sure your application can be deployed with automatic updates. One of the headaches of a support group is upgrading customers to the latest and greatest so that they can take advantage of bug fixes, new features, etc. If the upgrade process is seamless, stress can be relieved from the support group.
A: *
*Provide a mechanism for capturing what the user was doing when the problem happened, a logging or tracing capability that can help provide you and your colleagues with data (what exception was thrown, stack traces, program state, what the user had been doing, etc.) so that you can recreate the issue.
*If you don't already incorporate developer automated testing in your product development, consider doing so.
A: Similar to a combination of jamesh's answers, we do this for web apps
*
*Supply a "report a bug" link so that users can report bugs even when they don't generate error screens.
*That link opens up a small dialog which in turn submits via Ajax to a processor on the server.
*The processor associates the submission to the script being reported on and its PID, so that we can find the right log files (we organize ours by script/pid), and then sends e-mail to our bug tracking system.
A: Provide a known issues document
Give training on the application so they know how it should work
Provide simple, concise log lines that they will understand, or create error codes with a corresponding document that describes each error
A: Some thoughts:
*
*Do your best to validate user input immediately.
*Check for errors or exceptions as early and as often as possible. It's easier to trace and fix a problem just after it occurs, before it generates "ricochet" effects.
*Whenever possible, describe how to correct the problem in your error message. The user isn't interested in what went wrong, only how to continue working:
BAD: Floating-point exception in vogon.c, line 42
BETTER: Please enter a dollar amount greater than 0.
*If you can't suggest a correction for the problem, tell the user what to do (or not to do) before calling tech support, such as: "Click Help->About to find the version/license number," or "Please leave this error message on the screen."
*Talk to your support staff. Ask about common problems and pet peeves. Have them answer this question!
*If you have a web site with a support section, provide a hyperlink or URL in the error message.
*Indicate whether the error is due to a temporary or permanent condition, so the user will know whether to try again.
*Put your cell phone number in every error message, and identify yourself as the developer.
Ok, the last item probably isn't practical, but wouldn't it encourage better coding practices?
A: Have a mindset for improving things. Whenever you fix something, ask:
*
*How can I avoid a similar problem in the future?
Then try to find a way of solving that problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to log MethodName when wrapping Log4net? I have wrapped Log4net in a static wrapper and want to log
loggingEvent.LocationInformation.MethodName
loggingEvent.LocationInformation.ClassName
However all I get is the name of my wrapper.
How can I log that info using a forwardingappender and a static wrapper class like
Logger.Debug("Logging to Debug");
Logger.Info("Logging to Info");
Logger.Warn("Logging to Warn");
Logger.Error(ex);
Logger.Fatal(ex);
A: I would simply use something like %stacktrace{2} as a conversion pattern.
Example of output:
MyNamespace.ClassName.Method > Common.Log.Warning
where MyNamespace.ClassName.Method is a method that is calling my wrapper and Common.Log.Warning is a method of the wrapper class.
Conversion patterns can be found here.
A: Just declare your log variable like this...
private static readonly log4net.ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
Then you can use it normaly.
A: This post helped me work out how to write my own wrapper, so in return I thought you might like my complete class wrapping the logger, which seems to work quite nicely and actually takes just over half as much time as using an ILog directly!
All that's required is the appropriate xml to set up the logging in the config file and
[assembly: log4net.Config.XmlConfigurator(Watch = true)]
in your AssemblyInfo.cs and it should work easily.
One note: I'm using Log4NetDash with a seriously simple set-up, so I have cheated and put some information in the wrong fields (e.g. the stack trace in the Domain field). This still works for me as I don't care where the information is shown, but you might want to fix it if you're setting things up properly and have spare time!
using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Reflection;
using System.Threading;
using log4net;
using log4net.Core;
namespace Utility
{
public class Logger
{
static Logger()
{
LogManager.GetLogger(typeof(Logger));
}
public static void Debug(string message, params object[] parameters)
{
Log(message, Level.Debug, null, parameters);
}
public static void Info(string message, params object[] parameters)
{
Log(message, Level.Info, null, parameters);
}
public static void Warn(string message, params object[] parameters)
{
Log(message, Level.Warn, null, parameters);
}
public static void Error(string message, params object[] parameters)
{
Error(message, null, parameters);
}
public static void Error(Exception exception)
{
if (exception==null)
return;
Error(exception.Message, exception);
}
public static void Error(string message, Exception exception, params object[] parameters)
{
string exceptionStack = "";
if (exception != null)
{
exceptionStack = exception.GetType().Name + " : " + exception.Message + Environment.NewLine;
Exception loopException = exception;
while (loopException.InnerException != null)
{
loopException = loopException.InnerException;
exceptionStack += loopException.GetType().Name + " : " + loopException.Message + Environment.NewLine;
}
}
Log(message, Level.Error, exceptionStack, parameters);
}
private static void Log(string message, Level logLevel, string exceptionMessage, params object[] parameters)
{
BackgroundWorker worker = new BackgroundWorker();
worker.DoWork += LogEvent;
worker.RunWorkerAsync(new LogMessageSpec
{
ExceptionMessage = exceptionMessage,
LogLevel = logLevel,
Message = message,
Parameters = parameters,
Stack = new StackTrace(),
LogTime = DateTime.Now
});
}
private static void LogEvent(object sender, DoWorkEventArgs e)
{
try
{
LogMessageSpec messageSpec = (LogMessageSpec) e.Argument;
StackFrame frame = messageSpec.Stack.GetFrame(2);
MethodBase method = frame.GetMethod();
Type reflectedType = method.ReflectedType;
ILogger log = LoggerManager.GetLogger(reflectedType.Assembly, reflectedType);
Level currenLoggingLevel = ((log4net.Repository.Hierarchy.Logger) log).Parent.Level;
if (messageSpec.LogLevel<currenLoggingLevel)
return;
messageSpec.Message = string.Format(messageSpec.Message, messageSpec.Parameters);
string stackTrace = "";
StackFrame[] frames = messageSpec.Stack.GetFrames();
if (frames != null)
{
foreach (StackFrame tempFrame in frames)
{
MethodBase tempMethod = tempFrame.GetMethod();
stackTrace += tempMethod.Name + Environment.NewLine;
}
}
string userName = Thread.CurrentPrincipal.Identity.Name;
LoggingEventData evdat = new LoggingEventData
{
Domain = stackTrace,
Identity = userName,
Level = messageSpec.LogLevel,
LocationInfo = new LocationInfo(reflectedType.FullName,
method.Name,
frame.GetFileName(),
frame.GetFileLineNumber().ToString()),
LoggerName = reflectedType.Name,
Message = messageSpec.Message,
TimeStamp = messageSpec.LogTime,
UserName = userName,
ExceptionString = messageSpec.ExceptionMessage
};
log.Log(new LoggingEvent(evdat));
}
catch (Exception)
{ } // don't throw exceptions on a background thread, especially about logging!
}
private class LogMessageSpec
{
public StackTrace Stack { get; set; }
public string Message { get; set; }
public Level LogLevel { get; set; }
public string ExceptionMessage { get; set; }
public object[] Parameters { get; set; }
public DateTime LogTime { get; set; }
}
}
}
A: What about the %M and %C variables?
http://logging.apache.org/log4net/log4net-1.2.11/release/sdk/log4net.Layout.PatternLayout.html
Usage, something like:
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger [%M %C] - %message%newline" />
</layout>
Doesn't that do what you are after?
A: Well, the error was somewhere in my appender, but for completeness I'll include the answer to the best of my knowledge:
the Facade you need should wrap ILogger and NOT ILog
public static class Logger
{
private readonly static Type ThisDeclaringType = typeof(Logger);
private static readonly ILogger defaultLogger;
static Logger()
{
defaultLogger =
LoggerManager.GetLogger(Assembly.GetCallingAssembly(),"MyDefaultLoggger");
...
public static void Info(string message)
{
if (defaultLogger.IsEnabledFor(infoLevel))
{
defaultLogger.Log(typeof(Logger), infoLevel, message, null);
}
}
A: How about the caller info attributes introduced with C# 5.0 / .NET 4.5 - http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.callermembernameattribute.aspx
A: I implemented the following solution for this (.NET Framework 4.5+): the log4net wrapper methods (e.g. Info, Warn, Error) can use CallerMemberName and CallerFilePath to fetch the class and method name of the code from which the logs are being called. You can then aggregate these into a custom log4net property.
Feel free to use your own log4net wrapper implementation; the only important things here are:
the signature of the Info and Error methods, and the implementation of the GetLogger method.
The 'memberName' and 'sourceFilePath' args should never be specified when calling the Logger.Info or Logger.Error methods, they are auto-filled-in by .Net.
public static class Logger
{
private class LogSingletonWrapper
{
public ILog Log { get; set; }
public LogSingletonWrapper()
{
Log = LogManager.GetLogger(GetType());
}
}
private static readonly Lazy<LogSingletonWrapper> _logger = new Lazy<LogSingletonWrapper>();
public static void Info(string message, [CallerMemberName] string memberName = "", [CallerFilePath] string sourceFilePath = "")
=> GetLogger(memberName, sourceFilePath).Info(message);
public static void Error(string message,Exception ex, [CallerMemberName] string memberName = "", [CallerFilePath] string sourceFilePath = "")
=> GetLogger(memberName, sourceFilePath).Error(message, ex);
private static ILog GetLogger(string memberName, string sourceFilePath)
{
var classname = sourceFilePath.Split('\\').Last().Split('.').First();
log4net.ThreadContext.Properties["Source"] = $"{classname}.{memberName.Replace(".", "")}";
return _logger.Value.Log;
}
}
Then you could use a log conversion pattern like this in your .config file:
<conversionPattern value="[%level][%date][Thd%thread: %property{Source}][Message: %message]%newline" />
This would result in logs looking like this:
[INFO][2019-07-03 16:42:00,936][Thd1: Application.Start][Message: The application is starting up ...]
[ERROR][2019-07-03 16:42:44,145][Thd6: DataProcessor.ProcessDataBatch][Message: Attempted to divide by zero.]
The following methods were called in the above example: the Start method of the Application class, and the ProcessDataBatch method of the DataProcessor class.
A: The only thing I can think of doing (as I dont currently use log4net) is to request a stacktrace(new StackTrace), and go back a frame to get the info you need. However, I am unsure of the runtime performance impact of this.
A: I will just add more code to Claus's correct answer.
In the wrapper class
public static class Logger
{
private static readonly ILogger DefaultLogger;
static Logger()
{
DefaultLogger = LoggerManager.GetLogger(Assembly.GetCallingAssembly(), "MyDefaultLoggger"); // MyDefaultLoggger is the name of the logger
}
public static void LogError(object message)
{
Level errorLevel = Level.Error;
if (DefaultLogger.IsEnabledFor(errorLevel))
{
DefaultLogger.Log(typeof(Logger), errorLevel, message, null);
}
}
public static void LogError(object message, Exception exception)
{
Level errorLevel = Level.Error;
if (DefaultLogger.IsEnabledFor(errorLevel))
{
DefaultLogger.Log(typeof(Logger), errorLevel, message, exception);
}
}
And so on for the rest of the methods.
In web.config or app.config, in the log4net.Layout.PatternLayout section,
you can use some conversion patterns like:
%location %method %line
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date{dd/MM/yyyy hh:mm:ss.fff tt} [%thread] %level %logger [%location %method %line] [%C %M] - %newline%message%newline%exception"/>
</layout>
A: Click here to learn how to implement log4net in .NET Core 2.2
The following steps are taken from the above link, and break down how to add log4net to a .NET Core 2.2 project.
First, run the following command in the Package-Manager console:
Install-Package Log4Net_Logging -Version 1.0.0
Then add a log4net.config with the following information (please edit it to match your set up):
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
</configSections>
<log4net>
<appender name="FileAppender" type="log4net.Appender.FileAppender">
<file value="logfile.log" />
<appendToFile value="true" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%d [%t] %-5p - %m%n" />
</layout>
</appender>
<root>
<!--LogLevel: OFF, FATAL, ERROR, WARN, INFO, DEBUG, ALL -->
<level value="ALL" />
<appender-ref ref="FileAppender" />
</root>
</log4net>
</configuration>
Then, add the following code into a controller (this is an example, please edit it before adding it to your controller):
public ValuesController()
{
LogFourNet.SetUp(Assembly.GetEntryAssembly(), "log4net.config");
}
// GET api/values
[HttpGet]
public ActionResult<IEnumerable<string>> Get()
{
LogFourNet.Info(this, "This is Info logging");
LogFourNet.Debug(this, "This is Debug logging");
LogFourNet.Error(this, "This is Error logging");
return new string[] { "value1", "value2" };
}
Then call the relevant controller action (using the above example, call /Values/Get with an HTTP GET), and you will receive the output matching the following:
2019-06-05 19:58:45,103 [9] INFO-[Log4NetLogging_Project.Controllers.ValuesController.Get:23] - This is Info logging
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: Which CASE Tools do you use? Which Computer-aided Software Engineering tools do you use and why? In what ways do they increase your productivity or help you design your programs? Or, in case you do not use CASE tools, what are your reasons for this?
A: The best CASE tool I had to work with is the Enterprise Architect from Sparx.
It's lightweight compared to Rose (easier to buy and cheaper too) but extremely powerful. You can do great UML diagrams or database models or anything else you want, in a nice and organised way.
It greatly helps on the initial stages of the elaboration process as you could create domain model, do some preliminary use cases, map them to the requirements and present all of it in a nice way to the customer. It helps me thinking and I re-factor my design with it until I am satisfied enough to start proper documentation.
It is also very good for database models as it could reverse-engineer most databases very neatly.
The only (but quite serious) drawback it has in my eyes is that its documentation generator is, to put it mildly, crap. Getting a proper document from it is almost impossible unless you invest a significant amount of work in the templates and then it would be only OK.
A: I have used Rational Rose and a few other similar packages in the past. Mostly I have used them for the UML diagram elements and have not gone into the more detailed functionality such as code generation etc.
I mostly use them for aiding the design process and clarifying my own ideas. Often I find that, in trying to come up with a design for a component, I end up needing to write down / draw what I want to happen so I can get a clear overview in my mind of what needs to happen and why. I have found that in a lot of cases, what I end up trying to draw is essentially the same as a predefined kind of diagram in UML, such as a Use Case Diagram, and by adopting that style, it becomes easier to get my ideas on paper as I have some framework to work within.
So, I use CASE tools principally for their UML / design tools at a high-ish, semi-abstract level.
A: Oracle Designer
A: Not using any. No money for them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is it a good design to put event handling code into an own method? Imagine a Button on your Windows form that does something when clicked.
The click event that's raised is typically bound to a method such as
protected void Button1_Click(object sender, EventArgs e)
{
}
What I see sometimes in other peoples' code is that the implementation of the buttons' behaviour is not put into the Button1_Click method but into an own method that is called from here like so:
private void DoStuff() { }
protected void Button1_Click(object sender, EventArgs e)
{
    this.DoStuff();
}
Although I see the advantage here (for instance, if this piece of code is needed internally somewhere else, it can be easily reused), I am wondering if this is a generally good design decision.
So the question is:
Is it a generally good idea to put event handling code into an own method and if so what naming convention for those methods are proven to be best practice?
A: I put the event handling code into a separate method if:
* The code is to be called by multiple events or from anywhere else, or
* The code does not actually have anything to do with the GUI and is more like back-end work.
Everything small and only GUI-related always goes into the handler, sometimes even if it is being called from the same event (as long as the signature is the same). So it's more like: use a separate method if it is a general action, and don't if the method is closely related to the actual event.
A: A good rule of thumb is to see whether the method is doing something UI-specific, or actually implementing a generic action. For example, a button click method could either handle a click or submit a form. If it's the former kind, it's ok to leave it in the button_click handler, but the latter deserves a method of its own. All I'm saying is, keep the single responsibility principle in mind.
A: In this case, I'd keep the DoStuff() method but subscribe to it with:
button1.Click += delegate { DoStuff(); };
Admittedly that won't please those who do all their event wiring in the designer... but I find it simpler to inspect, personally. (I also hate the autogenerated event handler names...)
A: The straight answer is yes. It's best to encapsulate the task into its own method, not just for reuse elsewhere but because from an OOP perspective it makes sense to do so. In this particular case clicking the button starts a process called DoStuff. The button is just triggering the process. Button1_Click is a completely separate task from DoStuff. However DoStuff is not only more descriptive, it also decouples your button from the actual work being done.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What's the best way to loop through a set of elements in JavaScript? In the past and with most my current projects I tend to use a for loop like this:
var elements = document.getElementsByTagName('div');
for (var i=0; i<elements.length; i++) {
doSomething(elements[i]);
}
I've heard that using a "reverse while" loop is quicker but I have no real way to confirm this:
var elements = document.getElementsByTagName('div'),
length = elements.length;
while(length--) {
doSomething(elements[length]);
}
What is considered as best practice when it comes to looping though elements in JavaScript, or any array for that matter?
A: At the risk of getting yelled at, I would get a JavaScript helper library like jQuery or Prototype; they encapsulate the logic in nice methods - both have an .each method/iterator to do it - and they both strive to be cross-browser compatible.
EDIT: This answer was posted in 2008. Today much better constructs exist. This particular case could be solved with a .forEach.
A: I had a very similar problem earlier with document.getElementsByClassName(). I didn't know what a nodelist was at the time.
var elements = document.getElementsByTagName('div');
for (var i=0; i<elements.length; i++) {
doSomething(elements[i]);
}
My issue was that I expected that elements would be an array, but it isn't. The NodeList Document.getElementsByTagName() returns is iterable, but you can't call Array.prototype methods on it.
You can however populate an array with nodelist elements like this:
var myElements = [];
for (var i=0; i<myNodeList.length; i++) {
var element = myNodeList[i];
myElements.push(element);
};
After that you can feel free to call .innerHTML or .style or something on the elements of your array.
A: Here's a nice form of a loop I often use. You create the iterated variable from the for statement and you don't need to check the length property, which can be expensive, especially when iterating through a NodeList. However, you must be careful: you can't use it if any of the values in the array could be "falsy". In practice, I only use it when iterating over an array of objects that does not contain nulls (like a NodeList). But I love its syntactic sugar.
var list = [{a:1,b:2}, {a:3,b:5}, {a:8,b:2}, {a:4,b:1}, {a:0,b:8}];
for (var i=0, item; item = list[i]; i++) {
// Look no need to do list[i] in the body of the loop
console.log("Looping: index ", i, "item" + item);
}
Note that this can also be used to loop backwards.
var list = [{a:1,b:2}, {a:3,b:5}, {a:8,b:2}, {a:4,b:1}, {a:0,b:8}];
for (var i = list.length - 1, item; item = list[i]; i--) {
console.log("Looping: index ", i, "item", item);
}
ES6 Update
for...of gives you the item but not the index; it has been available since ES6
for (const item of list) {
console.log("Looping: index ", "Sorry!!!", "item" + item);
}
A: I think using the first form is probably the way to go, since it's probably by far the most common loop structure in the known universe, and since I don't believe the reverse loop saves you any time in reality (still doing an increment/decrement and a comparison on each iteration).
Code that is recognizable and readable to others is definitely a good thing.
A: I too advise to use the simple way (KISS !-)
-- but some optimization could be found, namely not to test the length of an array more than once:
var elements = document.getElementsByTagName('div');
for (var i=0, im=elements.length; im>i; i++) {
doSomething(elements[i]);
}
A: Also see my comment on Andrew Hedges' test ...
I just tried to run a test to compare a simple iteration, the optimization I introduced, and the reverse do/while, where the elements in an array were tested in every loop.
And alas, no surprise, the three browsers I tested had very different results, though the optimized simple iteration was fastest in all !-)
Test:
An array with 500,000 elements built outside the real test; for every iteration the value of the specific array element is revealed.
Test run 10 times.
IE6:
Results:
Simple: 984,922,937,984,891,907,906,891,906,906
Average: 923.40 ms.
Optimized: 766,766,844,797,750,750,765,765,766,766
Average: 773.50 ms.
Reverse do/while: 3375,1328,1516,1344,1375,1406,1688,1344,1297,1265
Average: 1593.80 ms. (Note one especially awkward result)
Opera 9.52:
Results:
Simple: 344,343,344,359,343,359,344,359,359,359
Average: 351.30 ms.
Optimized: 281,297,297,297,297,281,281,297,281,281
Average: 289.00 ms
Reverse do/while: 391,407,391,391,500,407,407,406,406,406
Average: 411.20 ms.
FireFox 3.0.1:
Results:
Simple: 278,251,259,245,243,242,259,246,247,256
Average: 252.60 ms.
Optimized: 267,222,223,226,223,230,221,231,224,230
Average: 229.70 ms.
Reverse do/while: 414,381,389,383,388,389,381,387,400,379
Average: 389.10 ms.
A: The form of loop provided by Juan Mendez is very useful and practical.
I changed it a little bit, so that it now works with false, null, zero and empty strings too.
var items = [
true,
false,
null,
0,
""
];
for(var i = 0, item; (item = items[i]) !== undefined; i++)
{
console.log("Index: " + i + "; Value: " + item);
}
A: Note that in some cases, you need to loop in reverse order (but then you can use i-- too).
For example somebody wanted to use the new getElementsByClassName function to loop on elements of a given class and change this class. He found that only one out of two elements was changed (in FF3).
That's because the function returns a live NodeList, which thus reflects the changes in the DOM tree. Walking the list in reverse order avoided this issue.
var menus = document.getElementsByClassName("style2");
for (var i = menus.length - 1; i >= 0; i--)
{
menus[i].className = "style1";
}
In increasing index progression, when we ask for index 1, FF inspects the DOM and skips the first item with style2, which is the 2nd of the original DOM, thus it returns the 3rd initial item!
A: I know that you don't want to hear that, but: I consider the best practice to be the most readable in this case. As long as the loop is not counting from here to the moon, the performance gain will not be huge enough to matter.
A: I know this question is old -- but here's another, extremely simple solution ...
var elements = Array.from(document.querySelectorAll("div"));
Then it can be used like any, standard array.
A: I like doing:
var menu = document.getElementsByTagName('div');
for (var i = 0; menu[i]; i++) {
...
}
There is no call to the length of the array on every iteration.
A: I prefer the for loop as it's more readable. Looping from length to 0 would be more efficient than looping from 0 to length. And using a reversed while loop is more efficient than a for loop, as you said. I don't have the link to the page with comparison results anymore, but I remember that the difference varied across browsers. For some browsers the reversed while loop was twice as fast. However it makes no difference if you're looping over "small" arrays. In your example case the length of elements will be "small".
A: I think you have two alternatives. For DOM elements, frameworks such as jQuery give you a good method of iteration. The second approach is the for loop.
A: I like to use a TreeWalker if the set of elements are children of a root node.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
} |
Q: What's the difference between rapidSSL and geotrust certificates? I want to buy a 128bit SSL certificate for a website selling services. I checked http://www.rapidssl.com/ssl-certificate-products/ssl-certificate.htm and http://www.geotrust.com/ssl/compare-ssl-certificates.html. Why are the prices for QuickSSL (Geotrust, $249) and RapidSSL (rapidSSL, $69) so different? Is there any particular reason for this or it's just marketing?
RapidSSL says the following:
However it is our opinion that sites conducting more than 50 transactions will require a Professional Level SSL certificate due to the increased likelihood that the website's customers will expect SSL from a highly credible and established SSL provider and well
known internationally accepted SSL brand.
(by "professional level SSL" they mean Geotrust certs)
P.S. will users really pay attention to the SSL issuing authority brand name?
A: They both do the same job; it's just brand perception, I guess.
Honestly, I don't think the end user would even notice. As long as they see the little padlock they will be happy.
PS: GoDaddy certs are cheaper.
A: The job of the SSL certificate authority (CA)/provider is to validate your organizational identity, so that when customers access your web site, they not only get the padlock for security, but they know that your identity and the fully qualified hostname are authentic and not some phishing scam.
True, most all users look no further than the padlock indicating secure connection to their bank web site, email, etc. However, if any CA were to become compromised, all browsers who trust that CA would be vulnerable, because an attacker could forge a certificate for any domain, including yours. Your choice of certificate provider has no bearing on this. I have yet to hear about this actually happening. MITM attacks are a big deal now with wireless hotspots becoming more and more prevalent.
One more thing is browser compatibility. You would expect that your newly purchased cert be compatible with every modern browser. This is because they are all loaded with a list of root CA certs that trust a select list of SSL certificate authorities. If you buy from a CA that is not on that list, all your client browsers will get a security warning that the site's cert is not trusted. Just doublecheck that RapidSSL, Geotrust, or whoever you go with is in the list of all the browsers you care about. (e.g. for Firefox, it's at Tools/Options/Advanced/Encryption/View Certificates/Authorities tab)
In the end, just get the cheapest one that gives you the level of encryption you want. It'll get the job done. Check with your web host provider. They may have discounts.
A: To clarify, both are owned by Geotrust(R) . One difference is that Geotrust certificates use "Geotrust" root, and RapidSSL certificates use "Equifax" root, which will be shown in the certificate info "Issued by".
A: I know this has an accepted answer already, but there is another aspect.
The more expensive SSL certificates usually have a better warranty when it comes to fraud. A lower cost SSL cert may cover $10,000 worth of fraud whereas a higher cost SSL cert may cover you for $100,000, for example.
A: This has a good overview of the RapidSSL faqs.
This will give you the same for the QuickSSL.
The main difference in these certificates is the amount of verification during purchase. The encryption is basically the same for both.
A: As for the warranty mentioned above, as far as I understand this is a warranty to the "end user" in case the certificate authority issues a certificate to a fraudulent person/domain. It is not a warranty to the website owner.
A: Pretty late to the game but there is one other detail worth noting here--RapidSSL is not on IE8's list of trusted authorities.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: SQL trigger for deleting old results We have a database that we are using to store test results for an embedded device. There's a table with columns for different types of failures (details not relevant), along with a primary key 'keynum' and a 'NUM_FAILURES' column that lists the number of failures. We store passes and failures, so a pass has a '0' in 'NUM_FAILURES'.
In order to keep the database from growing without bounds, we want to keep the last 1000 results, plus any of the last 50 failures that fall outside of the 1000. So, worst case, the table could have 1050 entries in it. I'm trying to find the most efficient SQL insert trigger to remove extra entries. I'll give what I have so far as an answer, but I'm looking to see if anyone can come up with something better, since SQL isn't something I do very often.
We are using SQLITE3 on a non-Windows platform, if it's relevant.
EDIT: To clarify, the part that I am having problems with is the DELETE, and specifically the part related to the last 50 failures.
A: The reason you want to remove these entries is to keep the database from growing too big, not to keep it in some special state. For that I would really not use triggers, and would instead set up a job to run at some interval cleaning up the table.
A: So far, I have ended up using a View combined with a Trigger, but I'm not sure it's going to work for other reasons.
CREATE VIEW tablename_view AS SELECT keynum FROM tablename WHERE NUM_FAILURES!='0'
ORDER BY keynum DESC LIMIT 50;
CREATE TRIGGER tablename_trig
AFTER INSERT ON tablename WHEN (((SELECT COUNT(*) FROM tablename) >= 1000) or
((SELECT COUNT(NUM_FAILURES) FROM tablename WHERE NUM_FAILURES!='0') >= 50))
BEGIN
DELETE FROM tablename WHERE ((((SELECT MAX(keynum) FROM tablename) - keynum) >= 1000)
AND
((NUM_FAILURES=='0') OR ((SELECT MIN(keynum) FROM tablename_view) > keynum)));
END;
A: I think you may be using the wrong data structure. Instead I'd create two tables and pre-populate one with a 1000 rows (successes) and the other with 50 (failures). Put a primary ID on each. The when you record a result instead of inserting a new row find the ID+1 value for the last timestamped record entered (looping back to 0 if > max(id) in table) and update it with your new values.
This has the advantage of pre-allocating your storage, not requiring a trigger, and internally consistent logic. You can also adjust the size of the log very simply by just pre-populating more records rather than to have to change program logic.
There's several variations you can use on this, but the idea of using a closed loop structure rather than an open list would appear to match the problem domain more closely.
A: How about this:
DELETE
FROM table
WHERE ( id <= ( SELECT max(id) - 1000 FROM table )
       AND num_failures = 0
      )
   OR id <= ( SELECT max(id) - 1050 FROM table )
If performance is a concern, it might be better to delete on a periodic basis, rather than on each insert.
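As a sanity check, here is a scaled-down sketch of that DELETE run against an in-memory SQLite database from Python, keeping the last 10 rows plus failures within the last 15 (10 + 5). The table and column names are illustrative, not the asker's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, num_failures INTEGER)")
# 100 rows; every 4th insertion is a failure (ids 1, 5, 9, ..., 97).
conn.executemany(
    "INSERT INTO results (num_failures) VALUES (?)",
    [(1,) if i % 4 == 0 else (0,) for i in range(100)],
)
# Delete passes older than the last 10 rows, and everything older than the last 15.
conn.execute("""
    DELETE FROM results
    WHERE (id <= (SELECT MAX(id) - 10 FROM results) AND num_failures = 0)
       OR id <= (SELECT MAX(id) - 15 FROM results)
""")
remaining = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
# Kept: ids 91..100 (newest 10) plus failure id 89 from the extra window -> 11 rows.
print(remaining)
```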
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to 3270 screen-scrape from a Linux-based web app I have a LAMP (PHP) web app which need to interface with programs on an IBM 3270 mainframe (via Microsoft SNA Server). One solution I'm looking at is screen-scraping via 3270. (I'm integrating the present with the past!)
Many years ago, I wrote C code which used HLLAPI as the basis for such a task.
* Is HLLAPI still the best way to approach this task?
* If so, would I be best off just writing a C app to undertake the work necessary and exec() this C app from PHP?
* Are there any open source HLLAPI providers for Linux? (In the past I used commercial solutions such as Cleo.)
A: I haven't used it, but maybe look at http://x3270.bgp.nu/ which says it has a version:
s3270 is a displayless version for
writing screen-scraping scripts
A: I'm currently trying to do a similar thing but with a command line Python script.
I open a pipe to the s3270 (on Windows the exe name is ws3270) to connect to the server and send all commands.
Read carefully those part of the documentation for scripting:
http://x3270.bgp.nu/wc3270-man.html#Actions
http://x3270.bgp.nu/x3270-script.html#Script-Specific-Actions
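To illustrate the pipe approach, a hedged Python sketch of driving s3270 over stdin/stdout. The host name, the log-on sequence, and the assumption that one Wait(InputField)/String/Enter round trip is enough are all placeholders; real screens need coordinate-specific actions, and s3270 must be installed for run_s3270 to work.

```python
import subprocess

def make_actions(host, user):
    # s3270 script actions, one per line on stdin (see the x3270 scripting docs).
    return [
        f"Connect({host})",
        "Wait(InputField)",  # wait until the host paints an input field
        f"String({user})",
        "Enter",
        "Ascii",             # dump the current screen as text
        "Quit",
    ]

def run_s3270(actions):
    # Feed the actions to a displayless s3270 process and capture its replies.
    proc = subprocess.Popen(
        ["s3270"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    out, _ = proc.communicate("\n".join(actions) + "\n")
    return out  # each action echoes data lines plus an ok/error status line
```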
A: While I have no experience with 3270, I would expect that finding and calling on an outside application or library is your best bet. PHP is not an all-purpose tool; hacking into a non-web communications protocol is best left to languages like C or Java that can handle that well.
A: Screen scraping 3270 applications is a perfectly valid way of getting at data. Many of these applications haven't changed for years, or decades in some cases. Sometimes there is simply no API or other programmatic way of getting at the necessary data.
A: Nighthawk: You could always learn CORBA, that monstrosity of a system was designed to let C programs talk to remote COBOL systems or random stuff written in PL/I or something.
But seriously, if the old app has no API, 3270 screen scraping is fine. There's a lot of similarities between 3270 screens and HTML forms (unlike character mode terminals).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Tool for edit CSS and preview in IE and Firefox I'm looking for a VS2008 Plugin or other tool that allows me to edit CSS and preview the changes in IE and Firefox.
I'm not full time web designer, so free or open source is a plus.
Visual Studio integration is a big plus
Reviewed so far:
CSSVista
Pros:
* Free
* Nice previews
Cons:
* Editor is not powerful
* No save to local file
* No intellisense
Firebug
Pros:
* Free
* Intellisense
Cons:
* No save to local file
* Only Firefox
Homesite
Pros:
* Intellisense
* Saves to local file
Cons:
* Not free
A: Skybound Stylizer seems to be the app for you. It allows for live editing, much like CSSVista, but with way more powerful editing tools.
A: What I use to get that functionality is through the Web Developer Addon for Firefox, you can edit CSS in real time. I don't know of a functionality like that for VS.
A: Not quite the answer you're looking for really, but Firebug for Firefox has CSS edit-and-preview.
A: For IE there is the IE Developer Toolbar.
For Firefox, Firebug is enough. Using these two, you can edit CSS and see the preview.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Reports in a .NET Winforms App I'm writing a Winforms application and I've been writing these awful HTML reports where I have templates set up and use String.Replace to get my variables into the templates, then output the results to a WebBrowser control. I really don't like this set up.
I'd love to be able to use ASP.NET for my reports, but my clients don't want to have to run IIS on all the machines this is getting installed on, so that's not really an option.
A: Have you tried the ReportViewer control? Your customers will be happy with the fancy new reports. It is template-based and can include data from a database or from code; you can use parameters and images, and the result can be exported to Excel or to PDF.
Plus the control has some basic functionality like paging, zooming, printing, finding...
Why do you need ASP.NET? I don't see what difference it can make. Maybe you can render your HTML more easily, but it's still not "real" reporting.
A: Like was said earlier, use the report viewer with client side reporting. You can create reports the same way as you do for sql reporting services, except you dont need sql server(nor asp.net). Plus you have complete control over them(how you present, how you collect data, what layer they are generated in, etc). You can also export as PDF and excel.
Here is a great book I recommend to everyone interested in client-side reports. It gives a lot of great info and covers many different scenarios and ways to use client-side reporting.
http://www.apress.com/book/view/9781590598542
A: While I haven't used it, I hear a lot of podcast ads for Telerik reporting. Might be worth looking at. Looks pretty sweet.
A: You can use the version of Crystal Reports included in Visual Studio and save the output to a .PDF file which wouldn't be too clumsy to read from a browser. (That's what I did on my last contract)
A: Why not use XSL to generate the HTML reports? Much nicer than doing string replacement.
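Purely as an illustrative sketch (the `<report>` input document and its fields are invented here; any XML your app can serialize would do), an XSLT stylesheet replaces the String.Replace templating with declarative transforms:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <!-- /report is a hypothetical XML document produced by the app -->
  <xsl:template match="/report">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <table>
          <!-- one table row per <row> element in the source data -->
          <xsl:for-each select="rows/row">
            <tr>
              <td><xsl:value-of select="@name"/></td>
              <td><xsl:value-of select="@amount"/></td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

In .NET the transform can be applied with System.Xml.Xsl.XslCompiledTransform and the resulting HTML fed to the same WebBrowser control.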
A: You might look into Cassini -- a free ASP.NET web server component that you can embed directly in your WinForms application. The UltiDev version (linked) is based on the code that Microsoft released back in .NET 1.0, which was also used for the Visual Studio 2005+ Development Web Server.
A: For an advanced reporting solution that goes beyond the dataset-only ReportViewer in VS, you should consider Data Dynamics Reports.
It offers all that is in SSRS and adds Master Reports, Themes, a Calendar data region, data visualization (databar, sparkline, icon set, color scale, ...), a complete object model for maximum programming flexibility, a royalty-free end-user report designer, a barcode report item, Excel template export and data merging, and much more. You can download a trial from Data Dynamics (now GrapeCity) and try it with a few reports; you will not be disappointed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Hibernate SessionFactoryBean for multiple locations of mapping files We have a project consisting of multiple subprojects. Each subproject potentially has some Hibernate mapping files, but in the end there is only one actual Hibernate session. These subprojects can be combined in several ways, and some depend on each other. My problem is that I want a SessionFactoryBean which would be able to collect those mappings/mapping locations from the application context(s) and configure itself.
Has somebody written something like this, or do I have to do it myself (I envision something a bit like the urlresolver or viewresolver functionality from SpringMVC)?
A: Another (and simpler) approach would be to gather all your model classes in one project. Make all your other projects depend on it and have your SessionFactory created there. That is how I managed to solve the same problem and it works pretty well.
A: LocalSessionFactoryBean has a configLocations property. You inject the list of config locations, and it will gather them together into a single session factory configuration.
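As an illustrative sketch (bean ids and resource paths are invented, and the exact class package depends on your Spring/Hibernate version), the wiring might look like:

```xml
<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <!-- One Hibernate config file contributed by each subproject;
         all of them are merged into a single SessionFactory -->
    <property name="configLocations">
        <list>
            <value>classpath:projecta/hibernate.cfg.xml</value>
            <value>classpath:projectb/hibernate.cfg.xml</value>
        </list>
    </property>
</bean>
```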
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Template Lib (Engine) in Python running with Jython I'm searching for a template library or template engine for generating HTML (XML) that runs under Jython (Jython 2.5 Alpha is OK).
A: Have you tried Cheetah? I don't have direct experience running it under Jython, but there seem to be some people that do.
A: Jinja is pretty cool and seems to work on Jython.
A: Use StringTemplate, see http://www.cs.usfca.edu/~parrt/papers/mvc.templates.pdf for details of why. There is nothing better, and it supports both Java and Python (and .NET, etc.).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the Best binary/ordinary XML implementation for Java ME? Following Izb's question about Best binary XML format for JavaME, I'm looking for an implementation of either binary XML-like formats or just plain XML. My criteria for such an implementation, most important first, are:
*
*Supported phones. A basic JTWI phone should be able to run it.
*It should be either verified or open source, so I could have it verified.
*Small memory footprint.
*Fast.
*Easy to use on server side.
A: Take a look at kXML - http://kxml.objectweb.org/. It supports WBXML and has a low memory footprint, but it's not very strict about errors.
A: The W3C conducted an extensive and open analysis of binary XML technologies, including 9 months of benchmarking to determine which were the smallest, fastest and best for their use cases and requirements. Based on this analysis, they selected one format to be the single standard for the web.
The technology they selected is called "Efficient XML", not FastInfoset as was reported by another commenter. The W3C found FastInfoset did not achieve sufficient levels of compactness and generality for their use cases. You can find a short description of this decision on the W3C's public mailing lists at http://lists.w3.org/Archives/Public/public-exi/2009Dec/0001.html. You can find low-footprint, fast implementations of the W3C EXI standard for mobile devices and server platforms at http://www.agiledelta.com/product_efxsdk.html.
A: You could use NanoXML for J2ME. It works well and I've never had any problems with it in a production environment. Please note that it is non-validating.
IanG
A: The W3C defined a binary encoding for XML, it's called Fast InfoSet, but it's optimised for high-volume web services, rather than embedded devices, and I don't think there are any light and fast implementations suitable for you.
If you want to stick with plain XML, and you need a DOM implementation, then go with XOM (http://xom.nu). It's easily the DOM with the smallest footprint, and it's very fast.
But if you really need to reduce your footprint, then you need to use a non-DOM API like SAX or STAX, but that's considerably more work.
A: Thanks to all those who answered, but this is what I eventually did -
There's a good document about Parsing XML in J2ME here. There's a table there with some alternatives. For now I opted to go for ASXMLP and use it for xml-push parsing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Resumable downloads when using PHP to send the file? We are using a PHP scripting for tunnelling file downloads, since we don't want to expose the absolute path of downloadable file:
header("Content-Type: $ctype");
header("Content-Length: " . filesize($file));
header("Content-Disposition: attachment; filename=\"$fileName\"");
readfile($file);
Unfortunately we noticed that downloads passed through this script can't be resumed by the end user.
Is there any way to support resumable downloads with such a PHP-based solution?
A: If you're willing to install a new PECL module, the easiest way to support resumable downloads with PHP is through http_send_file(), like this
<?php
http_send_content_disposition("document.pdf", true);
http_send_content_type("application/pdf");
http_throttle(0.1, 2048);
http_send_file("../report.pdf");
?>
source : http://www.php.net/manual/en/function.http-send-file.php
We use it to serve database-stored content and it works like a charm !
A: EDIT 2017/01 - I wrote a library to do this in PHP >=7.0 https://github.com/DaveRandom/Resume
EDIT 2016/02 - Code completely rewritten to a set of modular tools and an example usage, rather than a monolithic function. Corrections mentioned in comments below have been incorporated.
A tested, working solution (based heavily on Theo's answer above) which deals with resumable downloads, in a set of a few standalone tools. This code requires PHP 5.4 or later.
This solution can still only cope with one range per request, but under any circumstance with a standard browser that I can think of, this should not cause a problem.
<?php
/**
* Get the value of a header in the current request context
*
* @param string $name Name of the header
* @return string|null Returns null when the header was not sent or cannot be retrieved
*/
function get_request_header($name)
{
$name = strtoupper($name);
// IIS/Some Apache versions and configurations
if (isset($_SERVER['HTTP_' . $name])) {
return trim($_SERVER['HTTP_' . $name]);
}
// Various other SAPIs
foreach (apache_request_headers() as $header_name => $value) {
if (strtoupper($header_name) === $name) {
return trim($value);
}
}
return null;
}
class NonExistentFileException extends \RuntimeException {}
class UnreadableFileException extends \RuntimeException {}
class UnsatisfiableRangeException extends \RuntimeException {}
class InvalidRangeHeaderException extends \RuntimeException {}
class RangeHeader
{
/**
* The first byte in the file to send (0-indexed), a null value indicates the last
* $end bytes
*
* @var int|null
*/
private $firstByte;
/**
* The last byte in the file to send (0-indexed), a null value indicates $start to
* EOF
*
* @var int|null
*/
private $lastByte;
/**
* Create a new instance from a Range header string
*
* @param string $header
* @return RangeHeader
*/
public static function createFromHeaderString($header)
{
if ($header === null) {
return null;
}
if (!preg_match('/^\s*([^\s=]+)\s*=\s*(\d*)\s*-\s*(\d*)\s*(?:,|$)/', $header, $info)) {
throw new InvalidRangeHeaderException('Invalid header format');
} else if (strtolower($info[1]) !== 'bytes') {
throw new InvalidRangeHeaderException('Unknown range unit: ' . $info[1]);
}
return new self(
$info[2] === '' ? null : $info[2],
$info[3] === '' ? null : $info[3]
);
}
/**
* @param int|null $firstByte
* @param int|null $lastByte
* @throws InvalidRangeHeaderException
*/
public function __construct($firstByte, $lastByte)
{
$this->firstByte = $firstByte === null ? $firstByte : (int)$firstByte;
$this->lastByte = $lastByte === null ? $lastByte : (int)$lastByte;
if ($this->firstByte === null && $this->lastByte === null) {
throw new InvalidRangeHeaderException(
'Both start and end position specifiers empty'
);
} else if ($this->firstByte < 0 || $this->lastByte < 0) {
throw new InvalidRangeHeaderException(
'Position specifiers cannot be negative'
);
} else if ($this->lastByte !== null && $this->lastByte < $this->firstByte) {
throw new InvalidRangeHeaderException(
'Last byte cannot be less than first byte'
);
}
}
/**
* Get the start position when this range is applied to a file of the specified size
*
* @param int $fileSize
* @return int
* @throws UnsatisfiableRangeException
*/
public function getStartPosition($fileSize)
{
$size = (int)$fileSize;
if ($this->firstByte === null) {
return ($size - 1) - $this->lastByte;
}
if ($size <= $this->firstByte) {
throw new UnsatisfiableRangeException(
'Start position is after the end of the file'
);
}
return $this->firstByte;
}
/**
* Get the end position when this range is applied to a file of the specified size
*
* @param int $fileSize
* @return int
* @throws UnsatisfiableRangeException
*/
public function getEndPosition($fileSize)
{
$size = (int)$fileSize;
if ($this->lastByte === null) {
return $size - 1;
}
if ($size <= $this->lastByte) {
throw new UnsatisfiableRangeException(
'End position is after the end of the file'
);
}
return $this->lastByte;
}
/**
* Get the length when this range is applied to a file of the specified size
*
* @param int $fileSize
* @return int
* @throws UnsatisfiableRangeException
*/
public function getLength($fileSize)
{
$size = (int)$fileSize;
return $this->getEndPosition($size) - $this->getStartPosition($size) + 1;
}
/**
* Get a Content-Range header corresponding to this Range and the specified file
* size
*
* @param int $fileSize
* @return string
*/
public function getContentRangeHeader($fileSize)
{
return 'bytes ' . $this->getStartPosition($fileSize) . '-'
. $this->getEndPosition($fileSize) . '/' . $fileSize;
}
}
class PartialFileServlet
{
/**
* The range header on which the data transmission will be based
*
* @var RangeHeader|null
*/
private $range;
/**
* @param RangeHeader $range Range header on which the transmission will be based
*/
public function __construct(RangeHeader $range = null)
{
$this->range = $range;
}
/**
* Send part of the data in a seekable stream resource to the output buffer
*
* @param resource $fp Stream resource to read data from
* @param int $start Position in the stream to start reading
* @param int $length Number of bytes to read
* @param int $chunkSize Maximum bytes to read from the file in a single operation
*/
private function sendDataRange($fp, $start, $length, $chunkSize = 8192)
{
if ($start > 0) {
fseek($fp, $start, SEEK_SET);
}
while ($length) {
$read = ($length > $chunkSize) ? $chunkSize : $length;
$length -= $read;
echo fread($fp, $read);
}
}
/**
* Send the headers that are included regardless of whether a range was requested
*
* @param string $fileName
* @param int $contentLength
* @param string $contentType
*/
private function sendDownloadHeaders($fileName, $contentLength, $contentType)
{
header('Content-Type: ' . $contentType);
header('Content-Length: ' . $contentLength);
header('Content-Disposition: attachment; filename="' . $fileName . '"');
header('Accept-Ranges: bytes');
}
/**
* Send data from a file based on the current Range header
*
* @param string $path Local file system path to serve
* @param string $contentType MIME type of the data stream
*/
public function sendFile($path, $contentType = 'application/octet-stream')
{
// Make sure the file exists and is a file, otherwise we are wasting our time
$localPath = realpath($path);
if ($localPath === false || !is_file($localPath)) {
throw new NonExistentFileException(
$path . ' does not exist or is not a file'
);
}
// Make sure we can open the file for reading
if (!$fp = fopen($localPath, 'r')) {
throw new UnreadableFileException(
'Failed to open ' . $localPath . ' for reading'
);
}
$fileSize = filesize($localPath);
if ($this->range == null) {
// No range requested, just send the whole file
header('HTTP/1.1 200 OK');
$this->sendDownloadHeaders(basename($localPath), $fileSize, $contentType);
fpassthru($fp);
} else {
// Send the request range
header('HTTP/1.1 206 Partial Content');
header('Content-Range: ' . $this->range->getContentRangeHeader($fileSize));
$this->sendDownloadHeaders(
basename($localPath),
$this->range->getLength($fileSize),
$contentType
);
$this->sendDataRange(
$fp,
$this->range->getStartPosition($fileSize),
$this->range->getLength($fileSize)
);
}
fclose($fp);
}
}
Example usage:
<?php
$path = '/local/path/to/file.ext';
$contentType = 'application/octet-stream';
// Avoid sending unexpected errors to the client - we should be serving a file,
// we don't want to corrupt the data we send
ini_set('display_errors', '0');
try {
$rangeHeader = RangeHeader::createFromHeaderString(get_request_header('Range'));
(new PartialFileServlet($rangeHeader))->sendFile($path, $contentType);
} catch (InvalidRangeHeaderException $e) {
header("HTTP/1.1 400 Bad Request");
} catch (UnsatisfiableRangeException $e) {
header("HTTP/1.1 416 Range Not Satisfiable");
} catch (NonExistentFileException $e) {
header("HTTP/1.1 404 Not Found");
} catch (UnreadableFileException $e) {
header("HTTP/1.1 500 Internal Server Error");
}
// It's usually a good idea to explicitly exit after sending a file to avoid sending any
// extra data on the end that might corrupt the file
exit;
A: Yes, you can use the Range header for that. You need to give 3 more headers to the client for a full download:
header ("Accept-Ranges: bytes");
header ("Content-Length: " . $fileSize);
header ("Content-Range: bytes 0-" . ($fileSize - 1) . "/" . $fileSize);
Then for an interrupted download you need to check the Range request header by:
$headers = getAllHeaders ();
$range = substr ($headers['Range'], 6);
And in this case don't forget to serve the content with 206 status code:
header ("HTTP/1.1 206 Partial Content");
header ("Accept-Ranges: bytes");
header ("Content-Length: " . $remaining_length);
header ("Content-Range: bytes " . $start . "-" . $to . "/" . $fileSize);
You'll get the $start and $to variables from the request header, and use fseek() to seek to the correct position in the file.
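To make the offset arithmetic above concrete, here is a minimal sketch (in Python purely for compactness; the function names are invented, and only single ranges of the forms `x-y`, `x-`, and `-n` are handled):

```python
import re

def parse_range(header, file_size):
    """Parse a single 'bytes=x-y' Range header value (illustrative sketch).

    Returns (start, end) as inclusive byte offsets, or None when the
    header is absent or unsupported (then serve the whole file, 200 OK).
    """
    if not header:
        return None
    m = re.match(r"bytes=(\d*)-(\d*)$", header.strip())
    if not m:
        return None  # multi-range or malformed: fall back to full file
    first, last = m.group(1), m.group(2)
    if first == "" and last == "":
        return None
    if first == "":                       # suffix range: last N bytes
        start = max(file_size - int(last), 0)
        end = file_size - 1
    else:                                 # 'x-y' or open-ended 'x-'
        start = int(first)
        end = int(last) if last else file_size - 1
    if start >= file_size or end < start:
        return None                       # unsatisfiable: send 416 instead
    return start, min(end, file_size - 1)

def content_range(start, end, file_size):
    """Build the Content-Range value to send with the 206 response."""
    return f"bytes {start}-{end}/{file_size}"
```

For example, `parse_range("bytes=0-499", 1000)` gives `(0, 499)` and the matching header value is `bytes 0-499/1000`.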
A: The top answer has various bugs.
*
*The major bug: It doesn't handle Range header correctly. bytes a-b should mean [a, b] instead of [a, b), and bytes a- is not handled.
*The minor bug: It doesn't use buffer to handle output. This may consume too much memory and cause low speed for large files.
Here's my modified code:
// TODO: configurations here
$fileName = "File Name";
$file = "File Path";
$bufferSize = 2097152;
$filesize = filesize($file);
$offset = 0;
$length = $filesize;
if (isset($_SERVER['HTTP_RANGE'])) {
// if the HTTP_RANGE header is set we're dealing with partial content
// find the requested range
// this might be too simplistic, apparently the client can request
// multiple ranges, which can become pretty complex, so ignore it for now
preg_match('/bytes=(\d+)-(\d+)?/', $_SERVER['HTTP_RANGE'], $matches);
$offset = intval($matches[1]);
$end = isset($matches[2]) && $matches[2] !== '' ? intval($matches[2]) : $filesize - 1;
$length = $end + 1 - $offset;
// output the right headers for partial content
header('HTTP/1.1 206 Partial Content');
header("Content-Range: bytes $offset-$end/$filesize");
}
// output the regular HTTP headers
header('Content-Type: ' . mime_content_type($file));
header("Content-Length: $filesize");
header("Content-Disposition: attachment; filename=\"$fileName\"");
header('Accept-Ranges: bytes');
$file = fopen($file, 'r');
// seek to the requested offset, this is 0 if it's not a partial content request
fseek($file, $offset);
// don't forget to send the data too
ini_set('memory_limit', '-1');
while ($length >= $bufferSize)
{
print(fread($file, $bufferSize));
$length -= $bufferSize;
}
if ($length) print(fread($file, $length));
fclose($file);
A: This worked very well for me: https://github.com/pomle/php-serveFilePartial
A: Small composer enabled class which works the same way as pecl http_send_file. This means support for resumable downloads and throttle. https://github.com/diversen/http-send-file
A: You could use the below code for byte range request support across any browser
<?php
$file = 'YouTube360p.mp4';
$fileLoc = $file;
$filesize = filesize($file);
$offset = 0;
$fileLength = $filesize;
$length = $filesize - 1;
if ( isset($_SERVER['HTTP_RANGE']) ) {
// if the HTTP_RANGE header is set we're dealing with partial content
$partialContent = true;
preg_match('/bytes=(\d+)-(\d+)?/', $_SERVER['HTTP_RANGE'], $matches);
$offset = intval($matches[1]);
$tempLength = isset($matches[2]) ? intval($matches[2]) : 0;
if($tempLength != 0)
{
$length = $tempLength;
}
$fileLength = ($length - $offset) + 1;
} else {
$partialContent = false;
$offset = 0;
}
$file = fopen($file, 'r');
// seek to the requested offset, this is 0 if it's not a partial content request
fseek($file, $offset);
$data = fread($file, $fileLength);
fclose($file);
if ( $partialContent ) {
// output the right headers for partial content
header('HTTP/1.1 206 Partial Content');
}
// output the regular HTTP headers
header('Content-Type: ' . mime_content_type($fileLoc));
header('Content-Length: ' . $fileLength);
header('Content-Disposition: inline; filename="' . basename($fileLoc) . '"');
header('Accept-Ranges: bytes');
header('Content-Range: bytes ' . $offset . '-' . $length . '/' . $filesize);
// don't forget to send the data too
print($data);
?>
A: Yes. Support byteranges. See RFC 2616 section 14.35 .
It basically means that you should read the Range header, and start serving the file from the specified offset.
This means that you can't use readfile(), since that serves the whole file. Instead, use fopen() first, then fseek() to the correct position, and then use fpassthru() to serve the file.
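As a hedged sketch of that fopen()/fseek()/chunked-read serving pattern (shown in Python for brevity; `serve_range` is an invented name), the same idea looks like:

```python
def serve_range(path, start, length, chunk_size=8192):
    """Yield `length` bytes of `path` beginning at offset `start`,
    in chunks, mirroring the fopen()/fseek()/loop-of-fread() pattern."""
    with open(path, "rb") as fp:
        fp.seek(start)                 # the fseek() step
        remaining = length
        while remaining > 0:
            chunk = fp.read(min(chunk_size, remaining))
            if not chunk:              # EOF before the range was exhausted
                break
            remaining -= len(chunk)
            yield chunk                # the echo-fread()/fpassthru() step
```

Streaming in bounded chunks instead of one big read keeps memory flat regardless of file size, which matters for the large downloads that resuming exists for.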
A: This works 100% super check it
I am using it and no problems any more.
/* Function: download with resume/speed/stream options */
/* List of File Types */
function fileTypes($extension){
$fileTypes['swf'] = 'application/x-shockwave-flash';
$fileTypes['pdf'] = 'application/pdf';
$fileTypes['exe'] = 'application/octet-stream';
$fileTypes['zip'] = 'application/zip';
$fileTypes['doc'] = 'application/msword';
$fileTypes['xls'] = 'application/vnd.ms-excel';
$fileTypes['ppt'] = 'application/vnd.ms-powerpoint';
$fileTypes['gif'] = 'image/gif';
$fileTypes['png'] = 'image/png';
$fileTypes['jpeg'] = 'image/jpg';
$fileTypes['jpg'] = 'image/jpg';
$fileTypes['rar'] = 'application/rar';
$fileTypes['ra'] = 'audio/x-pn-realaudio';
$fileTypes['ram'] = 'audio/x-pn-realaudio';
$fileTypes['ogg'] = 'audio/x-pn-realaudio';
$fileTypes['wav'] = 'video/x-msvideo';
$fileTypes['wmv'] = 'video/x-msvideo';
$fileTypes['avi'] = 'video/x-msvideo';
$fileTypes['asf'] = 'video/x-msvideo';
$fileTypes['divx'] = 'video/x-msvideo';
$fileTypes['mp3'] = 'audio/mpeg';
$fileTypes['mp4'] = 'audio/mpeg';
$fileTypes['mpeg'] = 'video/mpeg';
$fileTypes['mpg'] = 'video/mpeg';
$fileTypes['mpe'] = 'video/mpeg';
$fileTypes['mov'] = 'video/quicktime';
$fileTypes['swf'] = 'video/quicktime';
$fileTypes['3gp'] = 'video/quicktime';
$fileTypes['m4a'] = 'video/quicktime';
$fileTypes['aac'] = 'video/quicktime';
$fileTypes['m3u'] = 'video/quicktime';
return $fileTypes[$extension];
};
/*
Parameters: downloadFile(File Location, File Name,
max speed, is streaming
If streaming - videos will show as videos, images as images
instead of download prompt
*/
function downloadFile($fileLocation, $fileName, $maxSpeed = 100, $doStream = false) {
if (connection_status() != 0)
return(false);
// in some old versions this can be pereferable to get extention
// $extension = strtolower(end(explode('.', $fileName)));
$extension = pathinfo($fileName, PATHINFO_EXTENSION);
$contentType = fileTypes($extension);
header("Cache-Control: public");
header("Content-Transfer-Encoding: binary\n");
header("Content-Type: $contentType");
$contentDisposition = 'attachment';
if ($doStream == true) {
/* extensions to stream */
$array_listen = array('mp3', 'm3u', 'm4a', 'mid', 'ogg', 'ra', 'ram', 'wm',
'wav', 'wma', 'aac', '3gp', 'avi', 'mov', 'mp4', 'mpeg', 'mpg', 'swf', 'wmv', 'divx', 'asf');
if (in_array($extension, $array_listen)) {
$contentDisposition = 'inline';
}
}
if (strstr($_SERVER['HTTP_USER_AGENT'], "MSIE")) {
$fileName = preg_replace('/\./', '%2e', $fileName, substr_count($fileName, '.') - 1);
header("Content-Disposition: $contentDisposition; filename=\"$fileName\"");
} else {
header("Content-Disposition: $contentDisposition; filename=\"$fileName\"");
}
header("Accept-Ranges: bytes");
$range = 0;
$size = filesize($fileLocation);
if (isset($_SERVER['HTTP_RANGE'])) {
list($a, $range) = explode("=", $_SERVER['HTTP_RANGE']);
$range = str_replace("-", "", $range);
$size2 = $size - 1;
$new_length = $size - $range;
header("HTTP/1.1 206 Partial Content");
header("Content-Length: $new_length");
header("Content-Range: bytes $range-$size2/$size");
} else {
$size2 = $size - 1;
header("Content-Range: bytes 0-$size2/$size");
header("Content-Length: " . $size);
}
if ($size == 0) {
die('Zero byte file! Aborting download');
}
set_magic_quotes_runtime(0);
$fp = fopen("$fileLocation", "rb");
fseek($fp, $range);
while (!feof($fp) and ( connection_status() == 0)) {
set_time_limit(0);
print(fread($fp, 1024 * $maxSpeed));
flush();
ob_flush();
sleep(1);
}
fclose($fp);
return((connection_status() == 0) and ! connection_aborted());
}
/* Implementation */
// downloadFile('path_to_file/1.mp3', '1.mp3', 1024, false);
A: A really nice way to solve this without having to "roll your own" PHP code is to use the mod_xsendfile Apache module. Then in PHP, you just set the appropriate headers. Apache gets to do its thing.
header("X-Sendfile: /path/to/file");
header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; file=\"filename\"");
A: The first thing you need to do is to send the Accept-Ranges: bytes header in all responses, to tell the client that you support partial content. Then, if request with a Range: bytes=x-y header is received (with x and y being numbers) you parse the range the client is requesting, open the file as usual, seek x bytes ahead and send the next y - x bytes. Also set the response to HTTP/1.0 206 Partial Content.
Without having tested anything, this could work, more or less:
$filesize = filesize($file);
$offset = 0;
$length = $filesize;
if ( isset($_SERVER['HTTP_RANGE']) ) {
// if the HTTP_RANGE header is set we're dealing with partial content
$partialContent = true;
// find the requested range
// this might be too simplistic, apparently the client can request
// multiple ranges, which can become pretty complex, so ignore it for now
preg_match('/bytes=(\d+)-(\d+)?/', $_SERVER['HTTP_RANGE'], $matches);
$offset = intval($matches[1]);
$length = intval($matches[2]) - $offset;
} else {
$partialContent = false;
}
$file = fopen($file, 'r');
// seek to the requested offset, this is 0 if it's not a partial content request
fseek($file, $offset);
$data = fread($file, $length);
fclose($file);
if ( $partialContent ) {
// output the right headers for partial content
header('HTTP/1.1 206 Partial Content');
header('Content-Range: bytes ' . $offset . '-' . ($offset + $length) . '/' . $filesize);
}
// output the regular HTTP headers
header('Content-Type: ' . $ctype);
header('Content-Length: ' . $filesize);
header('Content-Disposition: attachment; filename="' . $fileName . '"');
header('Accept-Ranges: bytes');
// don't forget to send the data too
print($data);
I may have missed something obvious, and I have most definitely ignored some potential sources of errors, but it should be a start.
There's a description of partial content here and I found some info on partial content on the documentation page for fread.
A: Resuming downloads in HTTP is done through the Range header. If the request contains a Range header, and if other indicators (e.g. If-Match, If-Unmodified-Since) indicate that the content hasn't changed since the download was started, you give a 206 response code (rather than 200), indicate the range of bytes you're returning in the Content-Range header, then provide that range in the response body.
I don't know how to do that in PHP, though.
A: Thanks Theo! Your method did not directly work for streaming DivX, because I found the DivX player was sending ranges like bytes=9932800-
but it showed me how to do it, so thanks :D
if(isset($_SERVER['HTTP_RANGE']))
{
file_put_contents('showrange.txt', $_SERVER['HTTP_RANGE']);
}
A: I've created a library for serving files with support for conditional (don't download file again unless it has changed) and ranged (pause and resume download) requests. It even works with virtual file systems, such as Flysystem.
Check it out here: FileWaiter
Example usage:
use Stadly\FileWaiter\Adapter\Local;
use Stadly\FileWaiter\File;
use Stadly\FileWaiter\Waiter;
$streamFactory = new \GuzzleHttp\Psr7\HttpFactory(); // Any PSR-17 compatible stream factory.
$file = new File(new Local('filename.txt', $streamFactory)); // Or another file adapter. See below.
$responseFactory = new \GuzzleHttp\Psr7\HttpFactory(); // Any PSR-17 compatible response factory.
$waiter = new Waiter($file, $responseFactory);
$request = \GuzzleHttp\Psr7\ServerRequest::fromGlobals(); // Any PSR-7 compatible server request.
$response = $waiter->handle($request); // The response is created by the response factory.
$emitter = new \Laminas\HttpHandlerRunner\Emitter\SapiEmitter(); // Any way of emitting PSR-7 responses.
$emitter->emit($response);
die();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "113"
} |
Q: bash script runs from shell but not from cron job Cron installation is vixie-cron
/etc/cron.daily/rmspam.cron
#!/bin/bash
/usr/bin/rm /home/user/Maildir/.SPAM/cur/*;
I have this simple bash script that I want to add to a cron job (the full script also includes spam-learning commands), but this part always fails with "File or directory not found". From what I can figure, the metacharacter isn't being interpreted correctly when run as a cron job. If I execute the script from the command line it works fine.
I'd like a why for this not working and of course a working solution :)
Thanks
edit #1
came back to this question when I got popular question badge for it. I first did this,
#!/bin/bash
find /home/user/Maildir/.SPAM/cur/ -type f | xargs rm
and just recently was reading through the xargs man page and changed it to this
#!/bin/bash
find /home/user/Maildir/.SPAM/cur/ -type f | xargs --no-run-if-empty rm
short xargs option is -r
A: If there are no files in the directory, then the wildcard will not be expanded and will be passed to the command directly. There is no file called "*", and then the command fails with "File or directory not found." Try this instead:
shopt -s nullglob
files=(/home/user/Maildir/.SPAM/cur/*)
if [ ${#files[@]} -gt 0 ]; then
    rm "${files[@]}"
fi
Or just use the "-f" flag to rm. The other problem with this command is what happens when there is too much spam for the maximum length of the command line. Something like this is probably better overall:
find /home/user/Maildir/.SPAM/cur -type f -exec rm '{}' +
If you have an old find that only execs rm one file at a time:
find /home/user/Maildir/.SPAM/cur -type f | xargs rm
That handles too many files as well as no files. Thanks to Charles Duffy for pointing out the + option to -exec in find.
A: Are you specifying the full path to the script in the cronjob?
00 3 * * * /home/me/myscript.sh
rather than
00 3 * * * myscript.sh
On another note, it's /bin/rm on all of the linux boxes I have access to. Have you double-checked that it really is /usr/bin/rm on your machine?
A: try adding
MAILTO=your@email.address
to the top of your cron file and you should get any input/errors mailed to you.
Also consider adding the command as a cronjob
30 0 * * * /usr/bin/rm /home/user/Maildir/.SPAM/cur/*
A: Try using a force option and forget about adding a path to rm command. I think it should not be needed...
rm -f
This will ensure that even if there are no files in the directory, the rm command will not fail. If this is part of a shell script, the * should work. It looks to me like you might have an empty dir...
I understand that the rest of the script is being executed, right?
A: Is rm really located in /usr/bin/ on your system? I have always thought that rm should reside in /bin/.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Is mathematics necessary for programming? I happened to debate with a friend during college days whether advanced mathematics is necessary for any veteran programmer. He used to argue fiercely against that. He said that programmers need only basic mathematical knowledge from high school or fresh year college math, no more no less, and that almost all of programming tasks can be achieved without even need for advanced math. He argued, however, that algorithms are fundamental & must-have asset for programmers.
My stance was that all computer science advances depended almost solely on mathematics advances, and therefore a thorough knowledge in mathematics would help programmers greatly when they're working with real-world challenging problems.
I still cannot settle on which side of the arguments is correct. Could you tell us your stance, from your own experience?
A: Yeah, there is no need for advanced mathematics - if you are programming commercial, off-the-shelf software.
However when dealing with hardcore stuff such as:
*
*Calculating trajectories to control
a robot
*Creating AI-like applications to
support uncertainty and automatic
reasoning
*Playing with 3-D motion and graphics
Some advanced mathematics knowledge might come in handy. And it's not like they are "out-of-this world" problems.
I had to create a software to try to "predict" the necessary amount of paper for an office (and it was hell just to find out the best way to approximate values).
You have to be careful, though, because it is easy to get lost when using advanced things - there is a friend of mine who resorted to using Turing to store the state of a dynamic menu just to display it correctly - hmm... perhaps he went too far in his imagination.
A: What type of programming?
In my commercial experience, I have needed no advanced mathematics, but this is heavily dependent on the field you are in.
Computer graphics require a large amount of advanced mathematics. A lot of academic computer programming requires advanced mathematics.
That said, there tends to be a correlation between people who are good at mathematics and people who are good at programming.
I hope this wishy-washy answer helps.
A: Mathematics is needed for developers in some fields but is almost useless in others.
If you are a game developer and have to work with physics a lot - understanding of math is crucial. If you are working with advanced visual controls - you could not do much without geometry. If you're planning to do some financial calculations - it would REALLY help to have solid knowledge of statistics.
On the other hand, over the last 5 years I had only 2 or 3 projects where ANY amount of math was required at all. Of these, there was only 1 occasion when a Google search did not help.
At the end of the day even financial calculations are very often something your clients do for you and give you formulas to implement.
So if you're in the 'applied software' business you are likely to never use your math degree. If you're in academic software, maths is crucial.
A: I agree with Chris. I would say "Yes", also. But this depends on your market as stated above. If you are simply creating some basic "off-the-shelf" applications or writing tools to help your everyday work...then math isn't nearly as important.
Engineering custom software solutions requires lots of problem solving and critical thinking. Skills that are most definitely enhanced when a mathematics background is present. I minored in Math with my Computer Engineering degree and I give credit to all of my math-oriented background as to why I'm where I am today.
That's my 2 cents, I can tell from reading above that many would not agree. I encourage all to consider that I'm not saying you can't have those skills without a math background, I'm simply stating that the skills are side-effects of having such a background and can impact software positively.
A: In my experience math is required in programming, you can't get away from it. The whole of programming is based on math.
The issue is not black and white, but more colorful. The question is not whether or not you need math, but how much. The higher levels of math will give you more tools and open up your mind to different paths of thought.
For example, you can program if you only known addition and subtraction. When multiplication is required, you will have to perform a lot of additions. Multiplication simplifies repetitive additions. Algebra allows one to simplify math before implementing it into programs. Linear Algebra provides tools for transforming images. Boolean Algebra provides mechanics for reducing all those if statements.
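The repeated-addition point above is easy to make concrete. A minimal Python sketch (the function name is mine, not the author's):

```python
def multiply(a, b):
    """Multiply a by a non-negative integer b using only addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

assert multiply(7, 6) == 42   # six additions stand in for one multiply
```

Built-in multiplication simplifies exactly this loop away, which is the answer's point: each layer of math collapses repetitive work at the layer below.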
And don't forget the sibling to mathematics, Logic and Philosophy. Logic will help you make efficient use of case or switch statements. Philosophy will help you understand the thinking of the guy who wrote that code you are modifying.
Yes, you don't need much math to write programs. Some programs may require more math than others. More math knowledge will give you an advantage over those who have lesser understanding. In these times, people need every advantage they can get to obtain those jobs.
A: I've been programming for 8 years professionally, and since I was 12 as a hobby.
Math is not necessary, logic is. Math is horribly helpful though, to say it's not necessary is like saying that to kill a man, a gun isn't necessary, you can use a knife. Well, it is true, but that gun makes it a lot easier.
There are a couple of bare minimums, which you should already meet. You need to know basic algebraic expressions and notation, and the common computer equivalents. For example, you need to know what an exponential is (3 to the 3rd is 27), and the common computer expression is 3^3. The common notations for algebra do change between languages, but many of them use a somewhat unified methodology. Others (looking at you, LISP) don't. You also need to know order of operations.
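One caveat worth making explicit, since notation does change between languages: in Python and the C family the caret is bitwise XOR, not a power operator. A quick illustration:

```python
assert 3 ** 3 == 27      # exponentiation in Python is **
assert pow(3, 3) == 27   # or the built-in pow()
assert 3 ^ 3 == 0        # the caret is bitwise XOR here, not a power
```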
You need to understand algorithmic thought. First this, then this, produces this which is used in this calculation. Chances are you understand this or you don't, and it's a fairly hard hurdle to jump if you don't understand it; I've found that this is something you 'get', and not really something you can learn. Conversely, some people don't 'get' art. They should not become painters. Also, there have been students in CS curriculum who cannot figure out why this does not work:
x = z + w;
z = 3;
y = 5;
It's not that they don't understand addition, it's that they aren't grasping the requirement of unambiguous expression. If they understand it, the computer should too, right? If you can't see what's wrong with the above three lines, then don't become a programmer.
Lastly, you need to know whatever math is under your domain of programming. Accounting software could stop at basic algebra. If you are programming physics, you'll need to know physics (loosely) and math in 3-dimensional geometry (Euclidean). If you're programming architecture software, you'll need to know trigonometry.
This goes farther than math, though; whatever domain you are programming for, you need to soundly understand the basics. If you are programming language analysis software, you'll need to know probability, statistics, grammar theory (multiple languages), etc.
Oftentimes, certain domains need, or can benefit from, knowledge you'd think is unrelated. For example, if you were programming audio software, you would actually need to know trigonometry to deal with waveforms.
Magnitude changes things also. If you are sorting a financial data set of 1000 items, it's no big thing. If it was 10 million records, however, you would benefit greatly from knowing vector math actually, and having a deep understanding of sorting at the binary level (how does a system sort alphabetically? How does it know 'a' is less than 'b'?)
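The "how does it know 'a' is less than 'b'" question has a concrete answer: string comparison bottoms out in the characters' numeric code points (ASCII/Unicode), compared left to right. A small Python illustration:

```python
assert ord('a') == 97 and ord('b') == 98
assert 'a' < 'b'                 # because 97 < 98
assert 'apple' < 'banana'        # first differing code point decides

# A wrinkle the numeric view exposes: all ASCII uppercase sorts before lowercase.
assert 'Z' < 'a'                 # ord('Z') == 90, ord('a') == 97
```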
You are going to find that as a programmer, your general knowledge base is going to explode, because each project will necessitate more learning outside of the direct sphere of programming. If you are squeamish or lazy about self-learning, and do not like the idea of spending 10+ hours a week doing essentially 'homework', do not become a programmer.
If you like thought exercises, if you like learning, if you can think about abstract things like math without a calculator or design without a sketchpad, if you have broad tastes in life and hobbies, if you are self-critical and can throw away 'favorited' ideas, if you like perfecting things, then become a programmer. Do not base this decision on math, but rather, the ability to think logically and learn. Those are what is important; math is just the by-product.
A: While advanced mathematics may not be required for programming (unless you are programming advanced mathematics capability) the thought process of programming and mathematics are very similar. You begin with a base of known things (axioms, previously proven theories) and try to get to someplace new. You cannot skip steps. If you do skip steps, then you are required to fill in the blanks. It's a critical thought process that makes the two incredibly similar.
Also, mathematicians and programmers both think critically in the abstract. Real world things are represented by objects and variables. The ability to translate from concrete to abstract also links the two fields.
There's a very good chance that if you're good at one, you will probably be good at the other.
A: Of course it depends on what kind of programmer you want to be, or better, what kind of programmer your employers want you to be. I think calculus and algebra are essential; statistics and linear programming are indeed good tools to have in your briefcase; maybe analysis (derivatives, integrals, functions...) could be done without. But if you want to know how things work skin-deep (electronics, for example, or some non-trivial algorithms), "advanced" math is something you'd better not go without anywhere.
A: Most of the programming I have done involved physics simulations for research including things like electromagnetism, quantum mechanics and structural mechanics. Since the problem domains have advanced mathematics associated with them I would be hard pressed to solve them without using advanced mathematics.
So the answer to your question is - it depends on what you are trying to do.
A: Advanced maths knowledge is vital if you're going to be writing a new programming language, or if you need to write your own algorithms.
However, for most day-to-day programming - from websites to insurance processing applications - only basic maths is necessary.
A: Someone with a solid mathematical (which is not merely arithmetic) or logic background will cope well with algorithms, variable use, conditional reasoning and data structures.
*
*Not everyone can design a UI.
*Not everyone can make efficient code.
*Not everyone can comment and document clearly.
*Not everyone can design a good algorithm
Mathematics will help you to a point, but only to a point.
A: I don't think advanced mathematics knowledge is a requirement for a good programmer, but based on personal experience I think that programmers who have a better grasp of advanced maths also make better programmers. This may simply be due to a more logical mind, or a more logical outlook shaped by their experience of solving mathematical problems.
A: The fundamental concern of maths is the devising, understanding, implementation, and use of algorithms. If you cannot do maths, it is because you cannot do these things, and if you cannot do these things, then you cannot be an effective programmer.
Common programming tasks might not need any specific mathematical knowledge (e.g. you probably won't need vector algebra and calculus unless you're doing tasks like 3D graphics or physics simulations, for example), but the underlying skillsets are identical, and lack of ability in one domain will be matched by a corresponding lack of ability in the other domain.
A: Math is a toolbox for creating programs. I recommend Cormen's Introduction to Algorithms. It touches on the more "mathy" stuff.
- Greatest lower bound (managing resources)
- Random variables (game programming)
- Topological sort (recalculating spreadsheets)
- Matrix operations (3D graphics)
- Number theory (encryption)
- Fast Fourier transforms (networks)
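As one concrete instance from the list, the spreadsheet use of topological sort can be sketched with Python's standard library (the cell names and dependencies here are hypothetical):

```python
# B1 depends on A1, and C1 depends on both A1 and B1;
# a topological order recalculates every cell after its inputs.
from graphlib import TopologicalSorter  # stdlib since Python 3.9

deps = {"C1": {"A1", "B1"}, "B1": {"A1"}}
order = list(TopologicalSorter(deps).static_order())

assert order.index("A1") < order.index("B1") < order.index("C1")
```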
A: I don't think that higher math is a requirement for being a good programmer - as always it depends on what you are coding.
Of course if you are in 3D graphics programming, you'll need matrices and stuff. As author of business software, you'll probably need statistics math.
But having been a professional programmer for almost 10 years (and an amateur for another 10 before that), "higher math" is not something I have needed regularly. In about 99.8% of all cases it's just addition, subtraction, division and multiplication in some intelligent combination - in most cases it's about algorithms, not math.
A: Learning higher math, for most programmers, is important simply because it bends your brain to think logically, in a step-by-step manner to get from one thing to another.
Very few programming jobs, though, require anything above high school math. I've used linear algebra once. I've never used calculus. I use algebra every day.
A: Mathematical knowledge is often useful to a programmer, as are graphic design skill, puzzle-solving ability, work ethic and a host of other skills and traits. Very few programmers are good at everything that a programmer can possibly be good at. I wouldn't agree with any statement of the form "you're not a real programmer unless you can {insert favorite programming ability here}".
But I would be wary of a programmer who couldn't do Math. More so than of one who couldn't draw.
A: I think it really depends on what you're trying to do, but IMHO, the CS and OS theory are more important than math here, and you really need only the math that they involve.
For example, there's a lot of CS background of scheduling theory and optimization that stands behind many schedulers in modern OSs. That is an example of something that would require some math, though not something super complicated.
But honestly, for most stuff, you don't need math. What you need is the ability to think in base 2 and 16, such as being able to mentally OR/AND. For example, if you have a byte containing two 3-bit fields and 2 wasted bits, knowing which bits in which fields are active when the byte value is something like 11 will make things slightly faster than having to use pen and paper.
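A sketch of the byte layout the answer describes, with the exact field placement being my assumption (field A in bits 0-2, field B in bits 3-5, top 2 bits wasted), extracted with masks and shifts:

```python
byte = 11                     # 0b00_001_011, the value from the answer

low  = byte & 0b111           # field A: mask off the low 3 bits
high = (byte >> 3) & 0b111    # field B: shift down, then mask

assert low == 3 and high == 1
```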
A: I began programming about the same time I entered my pre-algebra class.. So I wouldn't say math is all that important, though it can help in certain types of programming, especially functional.
I haven't taken Discrete Math yet, but I see a lot of theory stuff with programming written in a math-notation that is taught in this class.
Also, make sure you know how to calculate anything in any base, especially base 2, 8, and 16.
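In practice most languages will do the base conversion for you; the skill is being comfortable reading the results. In Python, for example:

```python
n = 26
assert format(n, 'b') == '11010'   # base 2
assert format(n, 'o') == '32'      # base 8
assert format(n, 'x') == '1a'      # base 16

# And parsing back from any base:
assert int('11010', 2) == int('32', 8) == int('1a', 16) == 26
```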
Also, one class that really brought home some concepts for me was a pre-programming class. We were taught unions, intersections, and all that happy stuff, and it almost exactly parallels bitwise math. And we covered boolean logic very heavily. What I considered most useful was learning how to reduce complex boolean statements. This was very handy:
(x|y) & (x|z) & (x|foo)
can be simplified to
x | (y & z & foo)
Which I previously did not quite grasp.
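The reduction is the distributive law of Boolean algebra, (x OR y) AND (x OR z) = x OR (y AND z), applied twice. It can be checked exhaustively in a few lines of Python:

```python
from itertools import product

def lhs(x, y, z, foo):
    return (x or y) and (x or z) and (x or foo)

def rhs(x, y, z, foo):
    return x or (y and z and foo)

# Check every one of the 16 truth assignments.
assert all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=4))
```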
A: Well, you generated a number of responses, and no, I did not read them all. I am in the middle on this: no, you certainly do not need math in order to be a programmer. Assembler and Linux device drivers are no more or less complicated than each other, and neither requires math.
In no way shape or form do you need to take or pass a math class for any of this.
I will agree that the problem-solving mindset for programming is quite similar to that for math solutions, and as a result math probably comes easily. Or, on the contrary, if math is hard then programming may be hard. A class, a degree, or any pieces of paper or trophies are not required; going off and learning stuff, sure.
Now if you cannot convert from hex to binary to decimal quickly, either in your head, on paper, or using a calculator, you are going to struggle, particularly if you want to get into networking and other things that involve timing, which kernel drivers often do but don't have to. I know of a very long list of people with math and/or computer science and/or engineering degrees who struggle with rate calculations: bits per second, bytes per second, how much memory you need to do something, etc. To some extent it may be considered some sort of knack that some have and others have to work toward.
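The rate calculations mentioned here are only arithmetic, but getting the units straight (bits versus bytes) is where people slip. A worked example in Python, with the 100 Mbit/s link speed being an assumed figure:

```python
link_bits_per_sec = 100 * 1_000_000      # 100 Mbit/s link (assumed)
bytes_per_sec = link_bits_per_sec / 8    # rates are quoted in bits, files in bytes
assert bytes_per_sec == 12_500_000       # 12.5 MB/s

file_bytes = 1 * 1024**3                 # a 1 GiB file
seconds = file_bytes / bytes_per_sec     # roughly 86 s, ignoring protocol overhead
assert 85 < seconds < 86
```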
My bottom line is I believe in willpower: if you want to learn this stuff you can and will, it is as simple as that. You don't need to take a class or spend a lot of money; Linux and QEMU, for example, can keep you busy for quite some time: different asm languages, crashable environments for kernel development, embedded work, etc. You are not limited to that, but I don't believe you have to run off and take any classes if you don't want to. If you want to, then sure, take some EE classes, some CS classes and some math classes.
A: You need math. Programming is nothing more than math. Findings of theoretical physics have no practical (applicable) implications unless they are explained in terms of mathematical solutions, and none of those can be solved computationally unless they can be expressed on computers, and more specifically in programming languages. Different languages are thus designed to solve specific problems. But for general-purpose and widespread programming languages like Java, C and C++, much of our programming work involves repetitive (continuous) solutions to the same problems: extracting values from databases or text files, putting them on windows (desktop, web), manipulating the same values, sometimes accessing data from similar devices (but given different brand names, a different port and a headache), etc., which involves no more than the unitary method, algebra (counters, some logic), geometry (graphics), etc. So it depends what you are trying to solve.
A: computer science != programming
OK, seriously, I know good and bad programmers who were English and Psychology majors, and some who were Computer Science majors. Some very famous developers that I admire didn't have a CS background. Larry Wall (Perl), for example, was a linguist.
On the other hand, it helps to know something about the domain you are working on because then you can at least see if your data makes sense and help your customer/users drill down to what they really want.
And yes, there's the issue of computational complexity and efficient data structures and program correctness. That's stuff you learn in Computer Science and that's useful to know in almost any domain, but it's neither necessary nor sufficient.
A: IMO, you probably need an aptitude for mathematics, without necessarily having much knowledge in the field. So the things you require to be good at maths are similar to the things you require to be good at programming.
But in general, I can't remember the last time I used any sort of advanced maths in day-to-day programming, so no.
A: Programming requires you to master, or at least learn, two subjects: programming itself and whatever domain your program is for. If you are writing accounting software, you need to learn accounting; if you are programming robot kinematics, you need to understand forward and reverse kinematics. Accounting might only take basic math skills; other domains take other types of math.
A: Programming is a tool of computer science.
In many areas of programming, math is in the back seat. If you don't know how to quicksort, download a module to do it for you. You don't understand elliptic curves or AES? No problem, buy an encryption module.
Now for computer science. Yes, you need higher level math. No doubt about it. Cryptography, operating systems, compiler construction, machine learning, programming languages, and so on all require some form of higher math (calculus, discrete, linear, complex) to fully understand.
A: I guess I am going to be the first person to say you do need math. As others have said, math is not all that important for certain aspects of development, but the fundamentals of critical thinking and structured analysis are very important.
More so, math is important in understanding a lot of the fundamentals that go into things like schedulers, optimization, sorting, protocol management, and a number of other aspects of computers. Though the math involved at the calculation level is not complex (it's mostly high school algebra), the theories and applications can be quite complex, and a solid understanding of math through calculus will be of great benefit.
Can you get by without it? Absolutely, and you shouldn't let a less than thorough knowledge of math hold you back, but if you have the chance or the inclination I would study as much math as you can: calculus, number theory, linear algebra, combinatorics, and their practical applications. All of it has both practical and theoretical uses in a wide range of computer science.
I have known people who were highly successful on both sides of the fence (those without a strong focus on math, and those who went to school for physics or math), but in both groups they enjoyed numerical problems and learning about algorithms and math theory.
A: I have a maths degree, but I can't remember requiring that maths a single time in my career. It was useful in terms of training my mind for logical thinking, but I've not written any code using fluid dynamics, quantum theory or Markov Chains. (The last is the most likely to come up, I suspect.)
Most line-of-business developers won't need advanced maths most of the time. Sometimes knowing trigonometry can help, and certainly being able to understand enough maths to implement algorithms described mathematically can be important - but beyond that? Nah.
Don't forget that most programmers aren't advancing computer science - they're building applications. I don't need to know advanced engineering to drive a modern car, even though that car has almost certainly been improved through advanced engineering.
A: Nope, don't need math. Haven't done any since I graduated, and probably forgotten what little calculus I mastered anyway.
Think of it like a car. How much math/physics do you think is behind things like traction control and ABS braking? Lots. How much math do you need to know to use those tools? None.
EDIT: One thing to add. Industry is probably important here. A programmer working at a research firm, or writing embedded traction control systems for that car, is probably far more likely to need math than your average business tool programmer.
A: It's important to keep perspective. Learning math, advanced math, calc, etc. is great for thought processes and many programming positions expect and may make use of math and math concepts. But many programming jobs use little to no math at all.
Computer science, being a math discipline, of course requires lots of math. But few programming jobs are derivatives of comp sci. CS is a very specific discipline. There is a reason why IT schools now have Software Engineering as a separate discipline from CS. They are very different fields.
Comp Sci, for example, does not prepare you well for the world of most web applications. And software engineering does not prepare you well for compiler design and kernel development.
A: I would argue that having advanced logic (discrete) math can really help. That along with set theory. When dealing with common computer programs, these disciplines can help a lot. However, a lot of the other math I took in university was calculus, which as far as I can see, had very limited usage. Since 90% (or something like that) of programming is doing business apps with very simple math, I would say that for the most part, you can get by with very little math knowledge. However, a good understanding of boolean algebra, logic, discrete math, and set theory can really put you up to that next level.
A: It depends on what you are doing. If you do a lot of 3D programming, knowledge of 3D geometry is certainly necessary, don't you agree? ;-) If you want to create a new image format like JPG or a new audio format like MP3, you are also pretty lost if you can't understand a cosine or fourier transformation, as these are the basics most lossy compression are based on. Many other problems can be resolved better if you know your math rather well.
There are also many other programming tasks you will find do not need much math.
A: I'll go against the grain here and say "Yes"
I switched from Civil Engineering to programming (concrete sucks!). My math background consists of the usual first-year stuff, second- and third-year calculus (diff EQs, volume integration, series, Fourier and Laplace transforms) and a numerical analysis course.
I find that my math is incredibly lacking for computer programming. There are entire areas of discrete math and logic that I am missing, and I only survive thanks to an extensive library of textbooks, Wikipedia and Wolfram. Most advanced algorithms are based on advanced math, and I am unable to develop advanced algorithms without doing extensive research (essentially the equivalent of a half course worth of work). I am certainly unable to come up with NEW algorithms, as I just don't have the mathematical foundations, the shoulders of giants upon which to stand.
A: If you find the subject fascinating enough to post this, just go ahead and start learning. The rest will come naturally.
A: To answer your question as it was posed I would have to say, "No, mathematics is not necessary for programming". However, as other people have suggested in this thread, I believe there is a correlation between understanding mathematics and being able to "think algorithmically". That is, to be able to think abstractly about quantity, processes, relationships and proof.
I started programming when I was about 9 years old, and it would be a stretch to say I had learnt much mathematics by that stage. However, with a bit of effort I was able to understand variables, for loops, goto statements (forgive me, it was VIC-20 BASIC and I hadn't read any Dijkstra yet) and enough basic coordinate geometry to put graphics on the screen.
I eventually went on to complete an honours degree in Pure Mathematics with a minor in Computer Science. Although I focused mainly on analysis, I also studied quite a bit of discrete maths, number theory, logic and computability theory. Apart from being able to apply a few ideas from statistics, probability theory, vector analysis and linear algebra to programming, there was little maths I studied that was directly applicable to my programming during my undergraduate degree and the commercial and research programming I did afterwards.
However, I strongly believe the formal methods of thinking that mathematics demands — careful reasoning, searching for counter-examples, building axiomatic foundations, spotting connections between concepts — has been a tremendous help when I have tackled large and complex programming projects.
Consider the way athletes train for their sport. For example, footballers no doubt spend much of their training time on basic football skills. However, to improve their general fitness they might also spend time at the gym on bicycle or rowing machines, doing weights, etc.
Studying mathematics can be likened to weight-training or cross-training to improve your mental strength and stamina for programming. It is absolutely essential that you practice your basic programming skills but studying mathematics is an incredible mental work-out that improves your core analytic ability.
A: It's not required by a long shot, but...
As a trivial example: without an understanding of geometry, you couldn't do a lot of stuff with squares and rectangles. (Every programmer has or picks up geometry, so it's just an example.)
Without trigonometry, there are certain things that are tough to do. Try to draw an analog clock with no understanding of trigonometry -- you can do it, but the process you have to go through is essentially re-inventing trigonometry.
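For the analog-clock example, the trigonometry amounts to converting a hand's position into an (x, y) endpoint. A minimal Python sketch (the helper name and the screen-style coordinate convention are my assumptions):

```python
import math

def hand_tip(minutes, length, cx=0.0, cy=0.0):
    """End point of a minute hand in screen coordinates (y grows downward),
    so 0 minutes points straight up and angles increase clockwise."""
    theta = math.radians(minutes * 6 - 90)  # 6 degrees per minute
    return cx + length * math.cos(theta), cy + length * math.sin(theta)

x, y = hand_tip(15, 1.0)  # quarter past: the hand points to the right
assert abs(x - 1.0) < 1e-9 and abs(y) < 1e-9
```

Without this, you are reduced to hand-tuning lookup tables of endpoints, which is the "re-inventing trigonometry" the answer describes.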
Calculus is interesting. You'll probably never need it unless you design games, but calculus teaches you how to model things that behave much more like the real world. For instance, if you try to model a tree falling, to get the speed right at every point along the arc you probably need a good deal of math.
On the other hand, it's just a matter of being exact. Anything you can do with calculus you can probably do with looping and approximations.
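That trade is easy to demonstrate: instead of the closed form v = g * t from calculus, loop over small time steps and accumulate. A Python sketch:

```python
# Euler-style loop standing in for the analytic solution v = g * t.
g, dt = 9.81, 0.001
steps = 2000                 # 2 seconds in 1 ms slices
v = 0.0
for _ in range(steps):
    v += g * dt              # each step adds a small slice of acceleration

assert abs(v - 19.62) < 1e-6   # matches the closed-form 9.81 * 2.0
```

For constant acceleration the loop and the formula agree exactly; for messier forces (drag, the swinging tree) the loop keeps working where the closed form gets hard, at the cost of approximation error.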
Beyond that, to make things even more life-like, you will probably need fractals and more advanced math.
If you are programming web sites and databases, you hardly need algebra 101.
A: I used a great deal of math when I was solving problems in solid mechanics and heat transfer using computers. Linear algebra, numerical methods, etc.
I never tap into any of that knowledge now that I'm writing business applications that deliver information from relational databases to web-based user interfaces.
I still would recommend a better math background to anyone.
Discrete math is very helpful to a developer; I have no formal training in it.
I think the techniques laid out in "Programming Collective Intelligence" are far from the stuff I did as an ME and could fall into the business apps that I'm doing now. Netflix has certainly made a nice business out of it. This group intelligence stuff appears to be on the rise.
A: To answer the question: no.
Mathematical talent and programming talent: strong correlation, little to no causality.
One is certainly not a prerequisite for the other, and beefing up your math skills isn't going to make you a better programmer unless you're programming in one of the specialized domains where math is pretty integral (3D graphics, statistics programming, etc.)
That said, of course a math background will certainly not hurt and will greatly help you in some cases. And as others have noted the thought processes involved in mathematics and programming are quite similar; if you have an talent for one you'll probably find you have a talent for the other.
If I was going to recommend a math requirement for programmers it'd be some basic statistics. Nearly all programming jobs require a little reporting of some sort.
The need for mathematics does increase a little as you start to do more of the advanced and/or fun stuff. Games are pretty math-heavy, so are performance-critical applications where you really need to understand the costs of different algorithms.
A: I'm going to sit right on the fence with you here... there are a lot of good arguments both for and against, and most of them are equally valid. So which is the right answer?
Both...depending on the situation. This isn't a case of "if you're not with us, you're against us".
There are many aspects of math that do make areas of programming much easier: geometry, algebra, trigonometry, linear equations, quadratic equations, derivatives etc. In fact a lot of the highest performance "algorithms" have mathematical principles at their heart.
As Jon pointed out, he's got a degree in maths, but in the programming world he barely uses that knowledge. I propose that he uses maths far more than he probably realises, albeit unconsciously... okay, maybe not quantum mechanics, but the more basic principles. Every time we lay out a GUI we use mathematical principles to design it in an aesthetically pleasing manner; we don't do that consciously, but we do do it.
In the business world, we rarely think about the maths we use in our software - and in a lot of aspects of the software we write, it's just standard algorithms to complete the same monotonous tasks to help the business world catch up with the technology that's available.
It would be quite easy to skip through a whole career without ever consciously using math in our software. However, having an understanding of maths helps make many aspects of programming simpler.
I think the question really boils down to: "Is advanced math necessary for programming?" and of course, to that question the answer is no... unless you're going to start getting into writing and/or cracking encryption algorithms (which is a fascinating subject), or working with hydraulic equations as Mil pointed out, or flow control systems (as I have in the past). But I would have to add that while advanced math may not be necessary, it will make your life a lot easier.
A: I have a degree in math, and I can't say it has helped me in any way. (I develop general web apps, nothing scientific). I enjoy working with other developers with non-math degrees because they seem to think outside my "math" box and force me to do the same.
A: Necessary != Sufficient
Come on guys! The title says "necessary". I would argue that maths is at best a sufficient condition for being able to program well. Just like there are many sufficient but not necessary conditions: 5 years' experience, a CS degree, or any scientific background.
Some could even argue that being a Poet or English major could make you a good API designer or that an Artist could be good at UI/Web programming.
But these are obviously no guarantees: knowing math may not make you a good programmer, but you could hack out some C++ or F# like the rest anyway...
A: My response is absolutely not. I was/am (now unemployed, thanks India) a computer programmer for 25+ years. And in my whole career I NEVER encountered program LOGIC that required more than basic math skills. Unless you work every day with math that exceeds basic math skills, the need for advanced math is nil. At the corporate level any complex math WILL be referred to a statistician or mathematician, who will provide the programmer with the necessary pseudocode, and both will collaborate in the thorough testing of the end product. Ultimately the ball is in the math nerd's court. At any level, unless you're a mathematician/statistician/senior programmer, the thought of having a programmer responsible for the expected results of a complex advanced math computer program is absurd, and very foolhardy.
A: In my experience the Math requirement for a Computer Science degree exists simply to weed out those who will fail. If you cannot pass Calculus I and II you will most definitely not pass an advanced course on compiler construction, database or operating systems theory.
A: Discrete math I found very helpful. I took Calculus, and there are some times it might have been very helpful too, but I don't remember any of it. For instance, the time I was trying to implement a DIS interface (which deals with things like rotational velocities and coordinate transformations). I spent a day driving all over town looking for a book to explain quaternions to me (this was pre-WWW). There was also a time I ended up needing to write a facility for some engineers to implement n-linear interpolation. If you have no clue what that means, believe me, I didn't either. Fortunately, that was post-WWW.
My advice is to not sweat it. You may be hamstrung on a project or two, but not all that badly these days.
A: As a self taught programmer who started working on games about 30 years ago I would definitely say you need to get as much math as you can. Things like matrices, quaternions, ray tracing, particle systems, physics engines and such do require a good level of math comprehension and I only wish that I had learned all those things much earlier.
A: I work as a game programmer, in a team with artists, game designers, level designers, etc.
Having someone on the team who knows some maths is a net plus, just as it is a plus to have someone who plays all kinds of games, someone who's a representative member of our target audience, someone who lived through some painful productions, etc.
Often, the ones who know the most maths will be programmers (sometimes game designers), because the domains are close enough. But, day to day, game programmers don't need much maths beyond 3D geometry and (sometimes) physics.
Among the maths I studied, I found statistics the most useful, though I sometimes find myself missing some concepts.
A: See also Is Programming == Math? from stackoverflow.
While I don't think it's required for programming, I can't tell you how many times I've been able to use linear algebra concepts to write a clear and short solution to replace a convoluted (and sometimes incorrect) one. When doing any graphics or geometry (and even some solver) work, knowledge of matrices and how to work with them has also been extremely useful.
A: There are plenty of programming tasks that can be done well without a background in advanced math. It is probably safe to say the majority of programming jobs available will rarely require anything more than high school level math. But you are not going to write the software that helps put the shuttle in space by hacking away with your freshman college algebra math level. So, while advanced math is usually not vital to many programming tasks the more difficult problems will absolutely require it. Studying math also teaches valuable problem solving skills that can be used almost anywhere. I guess you could say it's not necessary most of the time, but it's certainly going to help almost all of the time.
A: For your general GUI and Web applications only basic mathematics knowledge will ever be needed.
Once a lifetime you might have an odd project where you need calculus or linear algebra.
(If you do 3D game programming or some other specific field of programming, you might need it every day, though.)
A: There are some good points to this question in my opinion.
As David Nehme posted here, computer science and programming are two very different subjects.
I find it perfectly possible that a programmer with very basic high-school and early college math skills may be a competent programmer. Not so sure about the computer science graduate, though.
As you correctly pointed out, the algorithm creation process is very much related to how you crunch math, even if this is just a result of the type of mathematical and analytical process you must carry out to correctly design an algorithm.
I also think it very much depends on what you're doing, more than it depends on your job description or skills. For instance, if programming and math are both tools to produce some effect, then you surely have to be competent with both (i.e. you are making a modelling program for some purpose). However, if programming is the ultimate objective of your activity, then math is most probably not required (i.e. you are making a web application).
A: If you need advanced mathematics in your daily job as programmer really depends on your tasks. I need them. The reason is I have to work with hydraulic calculations for piping systems to evaluate in code the piping system before it gets built. You never want to stand near a collapsing piping system because of under or overpressure. ;)
I guess for many other kinds of 'simulations of the real world' you will need advanced mathematics too.
A: Statistical machine learning techniques are becoming increasingly important.
A: See this earlier post
A: I feel this question (which I get quite a bit) is best answered with an analogy.
Many of us lift weights. Why? Is it because we're preparing for that day when we become a professional weightlifter? Will we ever encounter the lifting of weights as a job requirement?
Of course not. We lift weights because it exercises our muscles. It keeps us fit and in shape. A fit person will perform better in other areas: hiking, construction, running, sleeping, etc.
Learning mathematics is like weightlifting for the brain. It exercises the mind and keeps it in shape. You may never use calculus in your career, but your brain will be in better shape because of it.
A: Business programming: arithmetic, some algebra
Engineering: numerical analysis
Scientific programming: the sky's the limit
A: About the only useful things you can learn at university are theoretical.
A: Maths is as much about a way of thinking as it is about the skills themselves. And even that lies on several levels. Someone else noted that the analytical and abstraction skills common to maths are valuable to programming and that is one level. I would also argue that there is another level which contains precise analogues that carry from one to the other - for example the set theory behind relational databases, which is all hidden by the SQL semantics.
Very often the "best" - by which I mean most performant and concise - solutions are those which have a bit of maths behind them. If you start to think about your data-oriented programming problems as matrix manipulation, which many are, you can often find novel solutions from the world of maths.
Obviously it is not necessary to be a maths expert in order to program, anyone can be taught, but it is one of the skills that is worth having - and looking for in recruits.
A: In some programming I imagine that math would be most helpful, but not to be a programmer. I'm lucky if I can add 2+2 without my handy dandy calculator.
A: Depends on the programming task. I would put 'take data from a database and display it on a website' style programming towards the not-so-much side and 'video games' on the other side (I work in games and I feel like I use some random different flavor of math every day, and would probably use more if I knew more).
A: It depends on what you do: Web developement, business software, etc. I think for this kind of stuff, you don't need math.
If you want to do computer graphics, audio/video processing, AI, cryptography, etc. then you need a math background, otherwise you can simply not do it.
A: Two things come to mind:
*
*Context is all-important. If you're a games programmer or in an engineering discipline, then math may be vital for your job. I do database and web development, therefore high school-level math is fine for me.
*You are very likely to be reusing someone else's pre-built math code rather than reinventing the wheel, especially in fields like encryption and compression. (This may also apply if you're in games development using a third party physics tool or a 3D engine.) Having a framework of tried and tested routines for use in your programs prevents errors and potential security weaknesses - definitely a good thing.
A: Certain kinds of math I think are indispensable. For instance, every software engineer should know and understand De Morgan's laws and O notation.
Other kinds are just very useful. In simulation we often have to do a lot of physics modeling. If you are doing graphics work, you will often find yourself needing to write coordinate transformation algorithms. I've had many other situations in my 20-year career where I needed to write up and solve simultaneous linear equations to figure out what constants to put into an algorithm.
A: I admit that I have never used any advanced math in programming except in some pet projects that are about math topics.
That said, I do enjoy working together with people who are bright enough to grok maths. Mastering complex and difficult stuff helps get your brain into shape to solve complex and difficult programming problems.
A: You don't need to learn math for programming.
But learning math trains you in thinking discipline. Therefore I would consider math to be good for the developers.
A: You don't need much math. Some combinatorial thinking can help you frame and reduce a problem for fast execution. Being able to multiply is good. You're an engineer, approximations are fine.
A: I think for the tasks you described not too much math is needed. But generally, I think for real advanced systems programming you:
*
*Don't need calculus at all
*Need a good understanding of computer internals
*Need a LOT of CS and OS theory
*Need discrete math (incl. algorithms and combinatorics)
A: No, you don't need to know any math (except maybe binary/oct/hex/dec representations) for system programming and stuff like that.
A: Systems programming is not rocket science :-) IMHO, any good programmer can approach systems programming. However, one needs to know
*
*Algorithms (these require a little math, but not enough to scare off a good programmer),
*Data structures, and
*Some (not all) domain knowledge e.g. OS, Architecture, Compilers.
I think the most sought-after qualities would be the ability to write precise code and to go in depth, if required, in any of the above items.
BTW, this is my personal theory, YMMV; I don't consider myself a good programmer yet! :-(
A: To do what you want, you do not have to know math, but you have to like it a lot.
A: At the university we read the book "Concrete Mathematics" by Knuth, Graham and Patashnik.
This is a math book with topics selected for computer science students. Several years later, I checked the book again and noticed that I had used every single topic in the book at least once (with the exception of Stirling numbers).
In most cases knowing some math helps to solve problems with less work, more elegant or to implement faster solutions. It also depends on the kind of work you are doing. I.e. math is more important when you concentrate on algorithms, than when you concentrate on engineering problems.
A: I have two math degrees. I wish I knew more about databases.
My point is: while being able to find the roots of a polynomial or to prove that sqrt(2) is irrational is useful in an abstract sense, it won't necessarily make you a better programmer.
A: This is a good answer: you don't need to know math (hell, I never go beyond basic order-of-operations PEMDAS math), yet I always arrive at a solution. Sure, back in the 1970s math would have been extremely important to programmers who tried to program for CPU-cycle efficiency, using very complex mathematical equations to avoid loops and such.
Now computers are powerful; a loop to 100 or so instead of complex math won't really hurt your program in the long run. And of course you will pick up mathematical skills as an observer, without even formally learning math, which will improve your efficient-programming abilities.
Let's face it: the more math you know, the more likely you are not only to optimize a program better, but also to understand what's possible to program and what's impossible, without having to read an article that states it's impossible because of some crazy mathematical equation.
Learning math can help you understand how things work without actual EXPERIENCE (I base this on my life).
Here is my example (some compression article): I kept trying and trying without understanding the math behind it. After at least 700 flawed/failed attempts I now know more about what's possible to do (which may fail again) and also know the 700 flawed ways of looking at it.
If I had known the math I probably wouldn't even have tried those 700 flawed attempts, due to knowing too much math. But the path I picked, without knowing that much math, I find much more fun and more educational.
But that's just me; I am always the hands-on person, not the bookworm ;)
Some attempts lead to new mathematical breakthroughs; others just lead to faster, better-optimized software.
Let this be a lesson to you: pick whichever path works best for you. Trust me, both are rewarding.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "117"
} |
Q: non-database field on ClearQuest form Is there a way to use form fields that does not correspond to database field for temporary processings?
I.e. I want to add:
*
*temp fields item1, item2
*database field sum
*button with record hook that sets sum = item1 + item2
A: As far as I know it's simply not possible with ClearQuest.
I've tried to do something similar and was told by our IBM consultant that the only way is to create a DB field for all variables.
A: You can't attach data to form fields really - those are representations of the underlying data, not something scripts interact with directly.
Adding temporary data to the underlying record (entity) itself sounds unlikely as well. Perhaps it's possible to abuse the perl API and dynamically attach data to entity objects but I personally wouldn't try it, you're liable to lose your data at the whim of CQ then ;-)
That does not however mean it's impossible to have temporary data.
The best way seems to me to be using the session object, which is explicitly intended for that purpose.
From the helpfile:
IBM Rational ClearQuest supports the
use of sessionwide variables for
storing information. After you create
sessionwide variables, you can access
them through the current Session
object using functions or subroutines,
including hooks, that have access to
the Session object. When the current
session ends, all of the variables
associated with that Session object
are deleted. The session ends when the
user logs out or the final reference
to the Session object ceases to exist.
There's some helpful documentation on this subject in file:///C:/Program%20Files/Rational/ClearQuest/doc/help/cq_api/c_session_vars.htm (Presuming a default installation on a windows machine, of course.)
Translating the code example in there into what you seem to be wanting, first you store the data you have calculated inside the session object:
$session->SetNameValue("item1", $value1);
$session->SetNameValue("item2", $value2);
Then in your calculation hook you retrieve the stored values and set the value of that totals field like this:
my $item1 = $session->GetNameValue("item1");
my $item2 = $session->GetNameValue("item2");
my $sum = $item1 + $item2;
$entity->SetFieldValue("some_totals_record", $sum);
Adjust to taste of course ;-)
A: ClearQuest schema designers often include 'temporary' fields in their record types. They do this so they perform operations on hooks to generate another value.
For example, for the Notes fields, there is a 'temporary' Notes_entry field that the user types the most recent note into, and when the record is saved, the value is added to the Notes_Log field. The next time the record is edited the Notes_entry field is cleared so the user can type a new Notes_entry.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Accurate timestamping in Python logging I've been building an error logging app recently and was after a way of accurately timestamping the incoming data. When I say accurately I mean each timestamp should be accurate relative to each other (no need to sync to an atomic clock or anything like that).
I've been using datetime.now() as a first stab, but this isn't perfect:
>>> for i in range(0,1000):
... datetime.datetime.now()
...
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
etc.
The changes between clocks for the first second of samples looks like this:
uSecs difference
562000
578000 16000
609000 31000
625000 16000
640000 15000
656000 16000
687000 31000
703000 16000
718000 15000
750000 32000
765000 15000
781000 16000
796000 15000
828000 32000
843000 15000
859000 16000
890000 31000
906000 16000
921000 15000
937000 16000
968000 31000
984000 16000
So it looks like the timer data is only updated every ~15-32ms on my machine. The problem comes when we come to analyse the data because sorting by something other than the timestamp and then sorting by timestamp again can leave the data in the wrong order (chronologically). It would be nice to have the time stamps accurate to the point that any call to the time stamp generator gives a unique timestamp.
I had been considering some methods involving using a time.clock() call added to a starting datetime, but would appreciate a solution that would work accurately across threads on the same machine. Any suggestions would be very gratefully received.
A: You're unlikely to get sufficiently fine-grained control that you can completely eliminate the possibility
of duplicate timestamps - you'd need resolution smaller than the time it takes to generate a datetime object. There are a couple of other approaches you might take to deal with it:
*
*Deal with it. Leave your timestamps non-unique as they are, but rely on python's sort being stable to deal with reordering problems. Sorting on timestamp first, then something else will retain the timestamp ordering - you just have to be careful to always start from the timestamp ordered list every time, rather than doing multiple sorts on the same list.
*Append your own value to enforce uniqueness. Eg. include an incrementing integer value as part of the key, or append such a value only if timestamps are different. Eg.
The following will guarantee unique timestamp values:
import threading
from datetime import datetime

class TimeStamper(object):
def __init__(self):
self.lock = threading.Lock()
self.prev = None
self.count = 0
def getTimestamp(self):
with self.lock:
ts = str(datetime.now())
if ts == self.prev:
ts +='.%04d' % self.count
self.count += 1
else:
self.prev = ts
self.count = 1
return ts
For multiple processes (rather than threads), it gets a bit trickier though.
A: Thank you all for your contributions - they've all been very useful. Brian's answer seems closest to what I eventually went with (i.e. deal with it but use a sort of unique identifier - see below) so I've accepted his answer. I managed to consolidate all the various data receivers into a single thread, which is where the timestamping is now done using my new AccurateTimeStamp class. What I've done works as long as the time stamp is the first thing to use the clock.
As S.Lott stipulates, without a realtime OS, they're never going to be absolutely perfect. I really only wanted something that would let me see relative to each incoming chunk of data, when things were being received so what I've got below will work well.
Thanks again everyone!
import time
class AccurateTimeStamp():
"""
A simple class to provide a very accurate means of time stamping some data
"""
# Do the class-wide initial time stamp to synchronise calls to
# time.clock() to a single time stamp
initialTimeStamp = time.time()+ time.clock()
def __init__(self):
"""
Constructor for the AccurateTimeStamp class.
This makes a stamp based on the current time which should be more
accurate than anything you can get out of time.time().
NOTE: This time stamp will only work if nothing has called clock() in
this instance of the Python interpreter.
"""
# Get the time since the first of call to time.clock()
offset = time.clock()
# Get the current (accurate) time
currentTime = AccurateTimeStamp.initialTimeStamp+offset
# Split the time into whole seconds and the portion after the fraction
self.accurateSeconds = int(currentTime)
self.accuratePastSecond = currentTime - self.accurateSeconds
def GetAccurateTimeStampString(timestamp):
"""
Function to produce a timestamp of the form "13:48:01.87123" representing
the time stamp 'timestamp'
"""
# Get a struct_time representing the number of whole seconds since the
# epoch that we can use to format the time stamp
wholeSecondsInTimeStamp = time.localtime(timestamp.accurateSeconds)
# Convert the whole seconds and whatever fraction of a second comes after
# into a couple of strings
wholeSecondsString = time.strftime("%H:%M:%S", wholeSecondsInTimeStamp)
fractionAfterSecondString = str(int(timestamp.accuratePastSecond*1000000))
# Return our shiny new accurate time stamp
return wholeSecondsString+"."+fractionAfterSecondString
if __name__ == '__main__':
for i in range(0,500):
timestamp = AccurateTimeStamp()
print GetAccurateTimeStampString(timestamp)
A: "timestamp should be accurate relative to each other "
Why time? Why not a sequence number? If it's a client in any kind of client-server application, network latency makes timestamps kind of random.
Are you matching some external source of information? Say a log on another application? Again, if there's a network, those times won't be too close.
If you must match things between separate apps, consider passing GUID's around so that both apps log the GUID value. Then you could be absolutely sure they match, irrespective of timing differences.
If you want the relative order to be exactly right, maybe it's enough for your logger to assign a sequence number to each message in the order they were received.
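A minimal sketch of that sequence-number approach (the helper names are invented for illustration): in CPython, next() on a shared itertools.count() is a single C-level operation, so it is commonly relied upon to hand out unique, ordered IDs even across threads.

```python
import itertools

_seq = itertools.count()  # shared counter; next() is one C-level step in CPython

def stamp(message):
    """Attach a unique, monotonically increasing sequence number to a message."""
    return (next(_seq), message)

log = [stamp(m) for m in ("connect", "request", "disconnect")]
# Sorting by any other field and then back by sequence number
# restores the exact arrival order.
assert sorted(log) == log
assert [n for n, _ in log] == [0, 1, 2]
```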
A: Here is a thread about Python timing accuracy:
Python - time.clock() vs. time.time() - accuracy?
A: A few years have passed since the question was asked and answered, and this has been dealt with, at least for CPython on Windows. Using the script below on both Win7 64-bit and Windows Server 2008 R2, I got the same results:
*
*datetime.now() gives a resolution of 1ms and a jitter smaller than 1ms
*time.clock() gives a resolution of better than 1us and a jitter much smaller than 1ms
The script:
import time
import datetime
t1_0 = time.clock()
t2_0 = datetime.datetime.now()
with open('output.csv', 'w') as f:
for i in xrange(100000):
t1 = time.clock()
t2 = datetime.datetime.now()
td1 = t1-t1_0
td2 = (t2-t2_0).total_seconds()
f.write('%.6f,%.6f\n' % (td1, td2))
The results visualized:
A: time.clock() only measures wallclock time on Windows. On other systems, time.clock() actually measures CPU-time. On those systems time.time() is more suitable for wallclock time, and it has as high a resolution as Python can manage -- which is as high as the OS can manage; usually using gettimeofday(3) (microsecond resolution) or ftime(3) (millisecond resolution.) Other OS restrictions actually make the real resolution a lot higher than that. datetime.datetime.now() uses time.time(), so time.time() directly won't be better.
For the record, if I use datetime.datetime.now() in a loop, I see about a 1/10000 second resolution. From looking at your data, you have much, much coarser resolution than that. I'm not sure if there's anything Python as such can do, although you may be able to convince the OS to do better through other means.
I seem to recall that on Windows, time.clock() is actually (slightly) more accurate than time.time(), but it measures wallclock since the first call to time.clock(), so you have to remember to 'initialize' it first.
A: I wanted to thank J. Cage for this last post.
For my work, "reasonable" timing of events across processes and platforms is essential. There are obviously lots of places where things can go askew (clock drift, context switching, etc.), however this accurate timing solution will, I think, help to ensure that the time stamps recorded are sufficiently accurate to see the other sources of error.
That said, there are a couple of details I wonder about that are explained in When MicroSeconds Matter. For example, I think time.clock() will eventually wrap. I think for this to work for a long running process, you might have to handle that.
A: If you want microsecond-resolution (NOT accuracy) timestamps in Python, in Windows, you can use Windows's QPC timer, as demonstrated in my answer here: How to get millisecond and microsecond-resolution timestamps in Python. I'm not sure how to do this in Linux yet, so if anyone knows, please comment or answer in the link above.
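On Python 3.3 and later, the same "anchor once, then use a fast clock" trick from the accepted answer can be written with time.perf_counter(), the cross-platform successor to the Windows-only time.clock() behaviour discussed above. A hedged sketch:

```python
import time

# Anchor a high-resolution monotonic clock to wall-clock time once at import,
# then derive every timestamp from the monotonic clock.
_WALL_ANCHOR = time.time()
_PERF_ANCHOR = time.perf_counter()

def accurate_timestamp():
    """Wall-clock-like timestamp with perf_counter's fine resolution."""
    return _WALL_ANCHOR + (time.perf_counter() - _PERF_ANCHOR)

samples = [accurate_timestamp() for _ in range(1000)]
# perf_counter is monotonic, so timestamps never run backwards...
assert all(b >= a for a, b in zip(samples, samples[1:]))
# ...and its resolution is fine enough that the samples are not all identical.
assert len(set(samples)) > 1
```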
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Borderless Taskbar items: Using a right click menu (VB6) Even when BorderStyle is set to 0, it is possible to force a window to show up on the taskbar either by turning on the ShowInTaskbar property or by using the windows api directly: SetWindowLong Me.hwnd, GWL_EXSTYLE, GetWindowLong(Me.hwnd, Win.GWL_EXSTYLE) Or Win.WS_EX_APPWINDOW. However, such taskbar entries lack a right-click menu in their taskbar entry. Right-clicking them does nothing instead of bringing up a context menu. Is there a way, to attach a standard or custom handler to it?
A: Without a hack, I think you're going to be stuck here, I'm sorry to say. When you set the VB6 borderless properties, you inherently disable the control menu. The control menu (typically activated by right-clicking the title bar of a window or left-clicking the icon in the upper left) is what's displayed when you right-click a window in the task bar.
Now, if you're in the mood to hack, you might be able to "simulate" the behavior in such a way that the user doesn't know the difference. I got the idea from this message thread on usenet.
Basically, it sounds like you may be able to hack it by using two forms. One form is minimized right away, and becomes your "stub" in the task bar. The other form is the one you're currently designing (which we'll call the "main" form). The stub form is what actually loads and displays your main form.
The stub form isn't borderless, and must not deactivate the control menu. It is positioned off screen and at the smallest possible size. You'll respond to its form-level events, and then use those to communicate the appropriate behaviors to the borderless form.
That's the general gist of the hack. If I wasn't at work right now, I'd whip up a simple VB6 project and see if I could get it to work for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I find out if a SQLite index is unique? (With SQL) I want to find out, with an SQL query, whether an index is UNIQUE or not. I'm using SQLite 3.
I have tried two approaches:
SELECT * FROM sqlite_master WHERE name = 'sqlite_autoindex_user_1'
This returns information about the index ("type", "name", "tbl_name", "rootpage" and "sql"). Note that the sql column is empty when the index is automatically created by SQLite.
PRAGMA index_info(sqlite_autoindex_user_1);
This returns the columns in the index ("seqno", "cid" and "name").
Any other suggestions?
Edit: The above example is for an auto-generated index, but my question is about indexes in general. For example, I can create an index with "CREATE UNIQUE INDEX index1 ON visit (user, date)". It seems no SQL command will show if my new index is UNIQUE or not.
A: Since noone's come up with a good answer, I think the best solution is this:
*
*If the index starts with "sqlite_autoindex", it is an auto-generated index for a single UNIQUE column
*Otherwise, look for the UNIQUE keyword in the sql column in the table sqlite_master, with something like this:
SELECT * FROM sqlite_master WHERE type = 'index' AND sql LIKE '%UNIQUE%'
A: PRAGMA INDEX_LIST('table_name');
Returns a table with 3 columns:
*
*seq Unique numeric ID of index
*name Name of the index
*unique Uniqueness flag (nonzero if UNIQUE index.)
Edit
Since SQLite 3.16.0 you can also use table-valued pragma functions which have the advantage that you can JOIN them to search for a specific table and column. See @mike-scotty's answer.
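Either form is easy to drive from code; here is a small illustrative check using Python's built-in sqlite3 module (the table and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visit (user TEXT, date TEXT)")
conn.execute("CREATE UNIQUE INDEX index1 ON visit (user, date)")
conn.execute("CREATE INDEX index2 ON visit (date)")

# PRAGMA index_list yields one row per index: (seq, name, unique, ...);
# the 'unique' column is 1 for UNIQUE indexes and 0 otherwise.
flags = {row[1]: row[2] for row in conn.execute("PRAGMA index_list('visit')")}
assert flags["index1"] == 1
assert flags["index2"] == 0
```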
A: You can programmatically build a SELECT statement to see if any tuples point to more than one row. If you get back three columns, foo, bar and baz, create the following query:
select count(*) from t
group by foo, bar, baz
having count(*) > 1
If that returns any rows, your index is not unique, since more than one row maps to the given tuple. If sqlite3 supports derived tables (I've yet to have the need, so I don't know off-hand), you can make this even more succinct:
select count(*) from (
select count(*) from t
group by foo, bar, baz
having count(*) > 1
)
This will return a single row result set, denoting the number of duplicate tuple sets. If positive, your index is not unique.
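SQLite does support derived tables, so the succinct form works there; a runnable sketch with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (foo, bar, baz)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 2, 3), (1, 2, 3), (4, 5, 6)])

# Count how many (foo, bar, baz) tuples occur more than once.
(dupes,) = conn.execute("""
    SELECT count(*) FROM (
        SELECT count(*) FROM t
        GROUP BY foo, bar, baz
        HAVING count(*) > 1
    )
""").fetchone()

# (1, 2, 3) appears twice, so exactly one duplicate group is reported,
# meaning an index on (foo, bar, baz) could not be UNIQUE for this data.
assert dupes == 1
```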
A: You are close:
1) If the index starts with "sqlite_autoindex", it is an auto-generated index for the primary key. However, this will be in the sqlite_master or sqlite_temp_master table, depending on whether the table being indexed is temporary.
2) You need to watch out for table names and columns that contain the substring unique, so you want to use:
SELECT * FROM sqlite_master WHERE type = 'index' AND sql LIKE 'CREATE UNIQUE INDEX%'
See the sqlite website documentation on Create Index
A: As of sqlite 3.16.0 you could also use pragma functions:
SELECT distinct il.name
FROM sqlite_master AS m,
pragma_index_list(m.name) AS il,
pragma_index_info(il.name) AS ii
WHERE m.type='table' AND il.[unique] = 1;
The above statement will list all names of unique indexes.
SELECT DISTINCT m.name as table_name, ii.name as column_name
FROM sqlite_master AS m,
pragma_index_list(m.name) AS il,
pragma_index_info(il.name) AS ii
WHERE m.type='table' AND il.[unique] = 1;
The above statement will return all tables and their columns if the column is part of a unique index.
From the docs:
The table-valued functions for PRAGMA feature was added in SQLite version 3.16.0 (2017-01-02). Prior versions of SQLite cannot use this feature.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Python 2.5 dictionary 2 key sort I have a dictionary of 200,000 items (the keys are strings and the values are integers).
What is the best/most pythonic way to print the items sorted by descending value then ascending key (i.e. a 2 key sort)?
a={ 'keyC':1, 'keyB':2, 'keyA':1 }
b = a.items()
b.sort( key=lambda a:a[0])
b.sort( key=lambda a:a[1], reverse=True )
print b
>>>[('keyB', 2), ('keyA', 1), ('keyC', 1)]
A: data = { 'keyC':1, 'keyB':2, 'keyA':1 }
for key, value in sorted(data.items(), key=lambda x: (-1*x[1], x[0])):
print key, value
A: You can't sort dictionaries. You have to sort the list of items.
Previous versions were wrong. When you have a numeric value, it's easy to sort in reverse order. These will do that. But this isn't general. This only works because the value is numeric.
a = { 'key':1, 'another':2, 'key2':1 }
b= a.items()
b.sort( key=lambda a:(-a[1],a[0]) )
print b
Here's an alternative, using an explicit function instead of a lambda and the cmp instead of the key option.
def valueKeyCmp( a, b ):
return cmp( (-a[1], a[0]), (-b[1], b[0] ) )
b.sort( cmp= valueKeyCmp )
print b
The more general solution is actually two separate sorts
b.sort( key=lambda a:a[0] )
b.sort( key=lambda a:a[1], reverse=True )
print b
A: The most pythonic way to do it would be to know a little more about the actual data -- specifically, the maximum value you can have -- and then do it like this:
def sortkey((k, v)):
return (maxval - v, k)
items = thedict.items()
items.sort(key=sortkey)
but unless you already know the maximum value, searching for the maximum value means looping through the dict an extra time (with max(thedict.itervalues())), which may be expensive. Alternatively, a keyfunc version of S.Lott's solution:
def sortkey((k, v)):
return (-v, k)
items = thedict.items()
items.sort(key=sortkey)
An alternative that doesn't care about the types would be a comparison function:
def sortcmp((ak, av), (bk, bv)):
# compare values 'in reverse'
r = cmp(bv, av)
if not r:
# and then keys normally
r = cmp(ak, bk)
return r
items = thedict.items()
items.sort(cmp=sortcmp)
and this solution actually works for any type of key and value that you want to mix ascending and descending sorting with in the same key. If you value brevity you can write sortcmp as:
def sortcmp((ak, av), (bk, bv)):
return cmp((bk, av), (ak, bv))
A: You can use something like this:
dic = {'aaa':1, 'aab':3, 'aaf':3, 'aac':2, 'aad':2, 'aae':4}
def sort_compare(a, b):
c = cmp(dic[b], dic[a])
if c != 0:
return c
return cmp(a, b)
for k in sorted(dic.keys(), cmp=sort_compare):
print k, dic[k]
Don't know how pythonic it is however :)
A: Building on Thomas Wouters's and Ricardo Reyes's solutions:
def combine(*cmps):
"""Sequence comparisons."""
def comparator(a, b):
for cmp in cmps:
result = cmp(a, b)
if result:
return result
return 0
return comparator
def reverse(cmp):
"""Invert a comparison."""
def comparator(a, b):
return cmp(b, a)
return comparator
def compare_nth(cmp, n):
"""Compare the n'th item from two sequences."""
def comparator(a, b):
return cmp(a[n], b[n])
return comparator
rev_val_key_cmp = combine(
# compare values, decreasing
reverse(compare_nth(cmp, 1)),
# compare keys, increasing
compare_nth(cmp, 0)
)
data = { 'keyC':1, 'keyB':2, 'keyA':1 }
for key, value in sorted(data.items(), cmp=rev_val_key_cmp):
print key, value
A: >>> keys = sorted(a, key=lambda k: (-a[k], k))
or
>>> keys = sorted(a)
>>> keys.sort(key=a.get, reverse=True)
then
print [(key, a[key]) for key in keys]
[('keyB', 2), ('keyA', 1), ('keyC', 1)]
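A note for later readers: in Python 3 the cmp= argument is gone and print is a function, but the single key-function approach above carries over unchanged; a quick sketch:

```python
a = {'keyC': 1, 'keyB': 2, 'keyA': 1}

# Negated value gives the descending primary sort; the key breaks ties ascending.
result = sorted(a.items(), key=lambda kv: (-kv[1], kv[0]))
print(result)  # [('keyB', 2), ('keyA', 1), ('keyC', 1)]
```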
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: What are the benefits of using Perforce instead of Subversion? My team has been using SVN for a few years. We now have the option of switching to Perforce.
What would be the benefits (and pitfalls) of making such a switch?
A: I use perforce at work, svn at home.
The perforce GUI is pretty nice, but only once you get used to it. It definitely has a learning curve, when non programmers start using perforce it usually takes some time until they get the concepts.
Tortoise is awesome, it's very easy to use. My lawyer wife subversions all her documents using it ;)
Branching is easy in perforce. In fact so easy, that people branch for not too much reason. Then you integrate because you branched. It can quite easily become the only thing you do.
Svn is integrated into more products - at least more of the products that I use. It's a great advantage, because if you have to use either of them outside your development environment, they both get clunky.
Every once in a while we have problems with perforce where it thinks that your local copies are up to date, but they are not. Then you have to force sync, and if it's still no good, delete your local files and resync. Never had such issues with svn. This is actually a huge problem, as you don't even know you are working on an old copy.
Another thing to think about is why you want to change. If you have a system that works and everyone's familiar with it and happy with it, why replace it?
A: On the Perforce website they have a paper comparing the two:
P4 vs SVN
Obviously, given the source, you have to realise that it stresses the advantages of Perforce over SVN, but it is still a useful read. You never know, one of the benefits may well be the killer item that your team could benefit from given your own unique circumstances.
I would certainly recommend Perforce for a number of reasons already covered in other answers, but I'm not in a position to offer a comparison with SVN having never really used it.
A: *
*P4 keeps track of your working copy on the server. This means that
*
*Large working copies are processed much faster. I used to have a large SVN project and a simple update took 15 minutes because it had to create a tree of the local working copy (thousands of folders). File access is slow. P4 stores the information about the working copy in the database, so any operations were always near-instantaneous.
*If you mess around with your files and do not tell the server, you are in trouble! You cannot just delete a file - you have to delete a file with the P4 client so the server knows. Note that if you locally delete a file, it will not be downloaded again on successive updates because the server thinks you already have it! When lots of this happened and I ended up wildly out of sync, I usually had to resort to cleaning out my local copy and downloading it again, which could be time-consuming. You must be careful about this.
*The Explorer shell extension client (think TortoiseSVN) sucks and is completely unusable.
*There are two GUI client applications which offer the best functionality: P4Win and P4V, of which P4V is newer and more easy to use but not as feature-rich.
*There are Visual Studio and Eclipse plug-ins, which work relatively well, although they do not have many advanced features.
*Generally speaking, P4 offers much less features than SVN and is sometimes downright confusing.
*Working copy definitions were nice and flexible. I believe P4 is superior to SVN here: you can define masks for working copy folders and create all sorts of bizarre trees, so you download only what you want to exactly where you want, without having to manually futz with multiple checkouts. This came in very handy when I had gigabytes of stuff on the server and only wanted a specific subset of it. I used SVN in a similar situation with much more hassle.
*Branching under P4 is... odd. Branchsets and different kinds of branches and confusing UI. I do not remember much details about this, unfortunately.
Other than that, it's pretty standard.
I recommend you keep SVN unless you deal with huge codebases or hate the .svn folders littering up your filesystem. SVN+TortoiseSVN is far more comfortable for most situations.
A: I currently use both on different projects.
*
*The perforce branching mechanism is superior.
*The perforce conflict resolve tool is better.
*I really like perforce's strong notion of a changelist.
*Perforce seems faster.
*It's easier to set up and get running.
*Some of our members really like the MS Office plugin for perforce, I'm on a Mac so I can't use it.
But
*
*The SVN clients are better, especially the eclipse plugin.
*Perforce is more expensive.
These are merely opinions, so perhaps this is a poor answer :)
If I was already using one or the other, I'd be very hard pressed to switch since neither seems to offer really significant benefits over the other, but the disruption in switching could be large.
Update: Since writing this, I've completely switched to using GIT for both personal and commercial purposes. I would pick it over either SVN or Perforce any day.
A: Proper branching and making branches part of the namespace are the biggest benefits that I see in Perforce. Merging is easy. I see no downside in moving away from Subversion.
A: Perforce allows the server to own the client.
A Perforce server can read and write arbitrary files on the client, and thus execute arbitrary code. The Perforce configuration is all server-side, so the server could simply treat the entire hard disc of the clients' computer as a repository, and do whatever it wanted to it.
Never run Perforce except in an SELinux sandbox.
Remember: The Perforce client is the server's puppet. You must use the operating system's security features to prevent it from doing something you don't want it to do. ALWAYS treat the Perforce client as hostile.
A: I used SVN, not a lot, just to try it out. I used Perforce for about three years. I thought it was great. The customer service was brilliant, very fast to resolve a problem that err turned out to be just me being daft, and they even implemented a feature I suggested.
Some other developers and especially non developers who had to use it found it a bit tricky to learn to use, especially when it came to defining a client-spec (a map of folders on the server to local folders).
I found it to be very quick to get files in and out of, and to be very reliable. I think most developers I worked with really liked it once we'd got used to it. We were using Visual Source Safe before we switched though, so, pretty much anything is better than that.
Downsides, it costs money. I believe SVN is very good system, as SVN is free, I would think you'd have to have a compelling reason to switch especially as Perforce does take a while to learn. If SVN is doing the job for you, and you haven't got any complaints about it, I would suggest you stay with it, and save the money for a rainy day!
A: I have used both, and in my experience Perforce makes a lot of sense if you have a big team and/or codebase; otherwise I would pick SVN - it is easier to set up and maintain.
A: You can edit things offline in Perforce if you want. Your workspace defines whether files are read-only or writeable, so you could make them all writeable, hack away, and then ask Perforce to figure out what needs to be checked in.
Better to have files readonly and check out what you need so others (and yourself) know what you have been/are doing.
The better system for you depends on what your requirements are, if you have no requirements then Perforce wins.
Who uses Subversion?
Small non-commercial teams
Cheap or small commercial teams
Who uses Perforce?
Google
Sony
Samsung
nVidia
Symantec
A: As of recent versions, Perforce has a new feature for shelving changes:
Shelving is the process of temporarily storing work in progress on a Perforce Server without submitting a changelist. Shelving is useful when you need to perform multiple development tasks (such as interruptions from higher-priority work, testing across multiple platforms) on the same set of files, or share files for code review before committing your work to the depot.
This is analogous to git's branching model, which lets you effortlessly switch from one local branch to another when you need to multitask.
AFAIK, Subversion has no similar feature.
More info on the Perforce blog.
A: Has your team evaluated Git? It has features analogous to those available in Perforce, but is free (FOSS).
Either is a great alternative to SVN when working with a large team.
A: Perforce licensing slides down in cost as number of seats goes up, as I recall. So it's not exactly $900 per seat. It's also a server-based license; you pay for total number of human developers using it, not per machine client using it. So, if you're a shop of 200 people, the 200 seat license lets them all use perforce, even from home.
A: In my opinion reason#1 for selecting between SVN and Perforce is cost.
Small repositories:
SVN does its job just fine and for free.
Big repositories: It is fatal to use SVN: http://yoawsconsult.blogspot.com/2009/05/whenwhy-you-cant-afford-to-use.html.
Perforce can do big repositories, but you have to pay for it and for getting to know it.
A: One disadvantage of Perforce compared to Subversion is svn's export command. It makes it easy to export or download the code at some version to anywhere, and you do not need to create a workspace for that. In Perforce you can only get the versioned code into your workspace.
A: From my practice:
*
*Perforce is designed to also store huge blob files (like software distributions); svn stores all its data as text, so it cannot store such binary data efficiently
*Perforce supports a useful feature called "shelve changes": a user asks Perforce to store a change as a "patch" on the server, and other users can then review the changes if the author asks them to. Svn doesn't support this
*Svn's command-line format is easier to understand and remember for daily usage
*Svn is free
*In git and in svn you make your changes directly by editing files in the local filesystem after receiving them from the repo. In Perforce, "the right" way to work with files is to mark the ones you are going to work on (p4 edit)... In theory other people can then see what you are editing; in practice it is not comfortable
*Preparing a Perforce client workspace on your local system takes more time than with svn, due to the extra configuration that has to be done
A: Being able to do everything right from Explorer via TortoiseSVN feels very comfortable!
There is a P4 extension installed as well, but it's really not that sophisticated!
On the other hand, the P4 client offers an accessible view onto the server repository, so one can work without a complete checkout. This always felt a little cumbersome in the SVN days with TSVN only.
Saying this, I cannot understand the top poster's comment:
*
*The Explorer shell extension client (think TortoiseSVN) sucks and is completely unusable.
For FOSS TortoiseSVN is just great! (Even though the icon thing worked a little bitchy and different on each machine..)
with TortoiseSVN you:
*
*can rearrange functions
*
*to bring 'get lock..' front for instance
*have actually access to ALL functions from explorer
*have icons in the shell menu
*are notified about updates
A: The main benefit of using subversion over Perforce is in my opinion ability to edit things off-line and simultaneously with your colleagues.
If the data infrastructure is loosely bonded (there is off-line time), svn rocks. You can do a lot even if the server would not be reachable. Perforce essentially requires an always-available server connection.
Disclaimer: my info on Perforce is old, used it for a while in 2005-06 before fully switching to svn
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
} |
Q: Server Error in '/' Application I have created a web application in ASP.NET 2.0 which is working fine on my local machine. However, when trying to deploy it on a server running Windows Server 2003, I get the error:
Server Error in '/' Application.
Parser Error
Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
Parser Error Message: The file '/MasterPage.master' does not exist.
Source Error:
Line 1: <%@ Page Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="LinkChecker Home " %>
Line 2: <asp:Content ID="Content1" ContentPlaceHolderID="MainContent" Runat="Server">
Line 3:
Source File: /LinkChecker/Default.aspx Line: 1
Any idea how this can be fixed?
A: Is the folder on the web server (IIS presumably) marked as an ASP.NET application? If not, ~/ will point to the next application up, or the site root.
It should have a cog icon in the IIS/MMC snap-in. Also ensure that it is running the right version of ASP.NET (v2.blah usually).
In the IIS/MMC view, find the folder that is your project; right-click; Properties.
Check it has an Application Name; if it doesn't, click Create. You might also want to tweak the app-pool if you want it to run in a different identity than default. Also check the ASP.NET tab - for example, it might be 2.0.50727.
A: There are other possible issues that could result in the error message stated above, like permission problems on the server for instance.
Look here for a thread in which this topic is also discussed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Problem joining on the highest value in mysql table I have a products table...
alt text http://img357.imageshack.us/img357/6393/productscx5.gif
and a revisions table, which is supposed to track changes to product info
alt text http://img124.imageshack.us/img124/1139/revisionslz5.gif
I try to query the database for all products, with their most recent revision...
select
*
from `products` as `p`
left join `revisions` as `r` on `r`.`product_id` = `p`.`product_id`
group by `p`.`product_id`
order by `r`.`modified` desc
but I always just get the first revision. I need to do this in one select (ie no sub queries). I can manage it in mssql, is this even possible in mysql?
A: Here's how I'd do it:
SELECT p.*, r.*
FROM products AS p
JOIN revisions AS r USING (product_id)
LEFT OUTER JOIN revisions AS r2
ON (r.product_id = r2.product_id AND r.modified < r2.modified)
WHERE r2.revision_id IS NULL;
In other words: find the revision for which no other revision exists with the same product_id and a greater modified value.
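The self-join pattern isn't MySQL-specific; as a sketch (table contents invented for illustration), here it is run against Python's bundled sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products  (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE revisions (revision_id INTEGER PRIMARY KEY,
                            product_id INTEGER, modified TEXT);
    INSERT INTO products  VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO revisions VALUES (10, 1, '2008-01-01'),
                                 (11, 1, '2008-06-01'),
                                 (20, 2, '2008-03-01');
""")

# Keep only revisions for which no newer revision of the same product exists.
rows = conn.execute("""
    SELECT p.name, r.revision_id
    FROM products p
    JOIN revisions r ON r.product_id = p.product_id
    LEFT OUTER JOIN revisions r2
      ON r.product_id = r2.product_id AND r.modified < r2.modified
    WHERE r2.revision_id IS NULL
""").fetchall()
print(sorted(rows))  # [('gadget', 20), ('widget', 11)]
```

Only the newest revision per product survives the IS NULL filter.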
A: Begin and end dates on your history table would make this possible.(leaving the most recent end date null and stamping end dates on the previous record as you insert a new one)
Otherwise you will have to use a sub-query.
A: That same query is parsable in MySQL.
Why are you using a Left JOIN instead of an INNER join or a RIGHT join?
Also if you want to go about this in a different way, you have the MAX function at your disposal.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Should I use an FTP server as a maven host? I would like to host a Maven repository for a framework we're working on and its dependencies. Can I just deploy my artifacts to my FTP host using mvn deploy, or should I manually deploy and/or setup some things before being able to deploy artifacts? I only have FTP access to server I want to host the Maven repo on.
The online repository I want to use is not hosted by myself. As I say, I only have FTP access, so if possible, I would like to use that FTP space as a Maven repository. The tools mentioned seem to work when you have full control over the host machine, or at least more than just FTP access since you need to configure the local directories where the repositories will be placed. Is this possible?
A: You can even use Dropbox. All that you need is a public address to access the files generated with mvn deploy, with any of the protocols in the accepted answer.
I guess there are more services that can work in the same way, but I'm not certain about the URL schemes that alternatives to Dropbox may use.
A: https://maven.apache.org/wagon/wagon-providers/wagon-ftp/ will tell you that you can use ftp to read from an existing repository, but not to create a new one. I don't think that it is impossible in principle, but no one has cared to write all the fiddly code to do the directory management via ftp.
A: You might want to have a look at Nexus, a Maven repository manager. We've replaced our local Maven repository with a Nexus-based one and find it tremendously useful.
A: I've successfully used Archiva as my repository for several years ... see http://archiva.apache.org/. It's easy to administer and allows you to configure as many repositories as you need (SNAPSHOT, internal, external, etc).
According to the book "Better Builds with Maven", the most common type of repository is HTTP, this paragraph describes what I think you need:
This chapter will assume the repositories are running from http://localhost:8081/ and that artifacts are deployed to the repositories using the file system. However, it is possible to use a repository on another server with any combination of supported protocols including http, ftp, scp, sftp and more. For more information, refer to Chapter 3.
A Maven 2 repository is simply a specific directory structure, so once you get the transport and server specifications right for the repository and deployment portion of your POMs, it should be completely transparent to your users.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: What types of sockets are available in VxWorks? VxWorks supports standard IPv4 and IPv6 sockets, but sockets are also used for other purposes.
What other types of sockets are available?
A: The socket types you can use depend on the communication domain you create your socket in.
The listed socket types are:
SOCK_DGRAM unreliable not sequenced possibly duplicated message
SOCK_STREAM reliable sequenced non-duplicated stream
SOCK_SEQPACKET reliable sequenced non-duplicated message
SOCK_RDM reliable not sequenced possibly duplicated message
SOCK_RAW protocol/interface dependent, access to internal protocol info
VxWorks also defines the following communication domains:
AF_INET IPv4
AF_INET6 IPv6
AF_ROUTE routing
AF_LOCAL local Inter-process Communications
AF_TIPC Transparent Inter-Process Communications
AF_MOBILITY Mobile IPv6
Here is the list of the various sockets supported for the various domains:
AF_INET SOCK_DGRAM, SOCK_STREAM, SOCK_RAW
AF_INET6 SOCK_DGRAM, SOCK_STREAM, SOCK_RAW
AF_ROUTE SOCK_RAW
AF_LOCAL SOCK_SEQPACKET
AF_TIPC SOCK_SEQPACKET, SOCK_RDM, SOCK_DGRAM, SOCK_STREAM
AF_MOBILITY don't know
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Dynamically instantiate a Ruby class similar to Java How can this line in Java be translated to Ruby:
String className = "java.util.Vector";
...
Object o = Class.forName(className).newInstance();
Thanks!
A: Object::const_get('String').new()
A: If you're using ActiveSupport (i.e. Rails), there is a method added to String that does this:
"String".constantize.new
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Is there a way to establish a HTTPS Connection with Java 1.3? I have to work on an old 1.3 JVM and I'm asked to create a secure connection to another server. Unfortunately HttpsURLConnection only appears since JVM 1.4.
Is there another way to create a secure connection? Is there a library that I could use to add this functionality?
A: You need to install the Java Secure Socket Extension (JSSE), which used to be a separate install because Sun wouldn't ship it with the JDK due to comedy export restrictions. I had a look on Sun's web site, but the JDK 1.3 instructions are proving elusive. Bear in mind that JDK 1.3 is now end-of-lifed by Sun, so they may not have any information any more.
http://hc.apache.org/httpclient-3.x/sslguide.html
A: Check out the BouncyCastle implementation. It works all the way down to Java 1.1 and J2ME.
A: If JSSE doesn't work out for you (from @skaffman's answer, it may be hard to find documentation), you may want to look into some sort of a proxy. You could set up a daemon running on the same local machine (or trusted network), which then forwards the requests over HTTPS to the final end point. You could write this proxy server using a more modern JVM. Your legacy system would then point to the proxy rather than the real service.
Of course, if, by chance, you also have control over the final end point, you could perhaps just put both servers on a VPN.
A: You might be able to use JSSE.
A: skaffman links to the SSL guide for jakarta commons HttpClient. HttpClient is a good library for dealing with http.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Open multiple documents from a single file How would I go about creating multiple documents when a single file is opened in an MFC application?
We have an aggregate file format which can contain information for multiple documents. When this file is opened, I would like multiple CDocuments created for each record in the file. We already have a extended CDocManager, so I'm guessing this could be implemented by some logic in OpenDocumentFile. The question is how to pass the information about "I am record x of y" back up from the CDocument class to the doc manager?
A: If you have several CDocument-derived types that store different information, you need a separate CMultiDocTemplate for each type, typically stored in your CApp-derived class. In your app's InitInstance you initialise each template using something like
m_MyTempate1 = new CAtlasMDITemplate(IDR_RES_TYPE1,
RUNTIME_CLASS(CDocumentType1),
RUNTIME_CLASS(CChildFrameSplitter),
RUNTIME_CLASS(CViewType1));
When you open your base document, you then create and retrieve your additional documents. There are a number of places you could do this in, Serialize probably being the easiest, e.g.
void CDocumentType1::Serialize(CArchive& ar)
{
//
// Do all the document type 1 serialisation
//
// Create second doc
CDocumentType2 *pDoc2 = theApp.m_MyTempate2->OpenDocumentFile(NULL);
pDoc2->Serialize(ar);
}
The more conventional way of doing this though would be to have a single document, with multiple views for accessing the different parts of the aggregate document.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Passing Objects via QueryString I have object A which in turn has a property of type Object B
Class A
property x as Object B
End Class
On my ASP.NET page when I select a gridview item which maps to an object of type A I serialize the object onto the QueryString and pass it to the next page.
However I run into problems if property x actually has some value as it looks like I exceed the QueryString capacity length of 4k (although I didn't think the objects were that large)
I have already considered the following approaches to do this
*
*Session Variables
Approach not used as I have read that this is bad practice.
*
*Using a unique key for the object and retrieving it on the next page.
Approach not used as the objects do not map to a single instance in a table; they are composed of data from different databases.
So I guess my question is two fold
*
*Is it worth using GZip to compress the querystring further (is this possible??)
*What other methods would people suggest to do this?
A: If displaying the url of the next page in the browser does not matter, you could use the context.items collection.
context.items.add("keyA", objectA)
server.transfer("nextPage.aspx")
Then on the next page:
public sub page_load(...)
dim objectA as A = ctype(context.items("keyA"), A)
dim objectB as B = objectA.B
end sub
One reason to use this is if you want the users to believe that the next page is really a part of the first page. To them, it only appears as if a PostBack has occurred.
Also, you don't really need a unique key using this approach if the only way to use "next page" is if you first came from "first page". The scope for the context items collections is specific to just this particular request.
I agree with the other posters who mentioned that serialized objects on the querystring is a much worse evil than using session state. If you do use session state, just remember to clear the key you use immediately after using it.
A: I don't understand why you wouldn't use session state but...
Option 1: Viewstate
Option 2: Form parameters instead of querystring
But also be aware that you do not get the same object back when you serialize/deserialize. You get a new object initialized with the values of the original that were serialized out. You're going to end up with two of the object.
EDIT: You can store values in viewstate using the same syntax as Session state
ViewState["key"] = val;
The value has to be serializeable though.
A: While storing objects in session might be considered bad practice, it's lightyears better than passing them via serialized querystrings.
Back in classic asp, storing objects in session was considered bad practice because you created thread-affinity, and you also limited your ability to scale the site by adding other web servers. This is no longer a problem with asp.net (as long as you use an external stateserver).
There are other reasons to avoid session variables, but in your case I think that's the way to go.
Another option is to combine the 2 pages that need access to this object into one page, using panels to hide and display the needed "sub-pages" and use viewstate to store the object.
A: I don't think passing it in the query string, or storing it in the session, is a good idea.
You need one of the following:
a) A caching layer. Something like Microsoft Velocity would work, but I doubt you need something on that scale.
b) Put the keys to each object in the databases that you need in the query string and retrieve them the next time around. (E.g. myurl.com/mypage.aspx?db1objectkey=123&db2objectkey=345&db3objectkey=456)
A: Using session state seems like the most practical way to do this, its exactly what its designed for.
A: Cache is probably not the answer here either. As Telos mentioned, I'm not sure why you're not considering session.
If you have a page that depends on this data being available, then you just throw a guard clause in the page load...
public void Page_Load()
{
if(!IsPostBack)
{
const string key = "FunkyObject";
if(Session[key] == null)
Response.Redirect("firstStep.aspx");
var obj = (FunkyObject)Session[key];
DoSomething(obj);
}
}
If session is absolutely out of the question, then you'll have to re-materialize this object on the other page. Just send the unique identifier in the querystring so you can pull it back again.
A: Here is what I do:
Page1.aspx - Add a public property of an instance of my object. Add a button (Button1) with the PostBackURL property set to ~/Page2.aspx
Private _RP as ReportParameters
Public ReadOnly Property ReportParams() as ReportParameters
Get
Return _RP
End Get
End Property
Protected Sub Button1_Click(ByVal sender As Object, ByVal e As EventArgs) Handles Button1.Click
_RP = New ReportParameters
_RP.Name = "Report 1"
_RP.Param = "42"
End Sub
Now, on the second page, Page2.aspx add the following to the Markup at the top of the page under the first directive:
<%@ PreviousPageType VirtualPath="~/Default.aspx" %>
Then for the Page_Load in the code behind for Page2.aspx, add the following
If Not Page.PreviousPage is Nothing Then
Response.write (PreviousPage.ReportParams.Name & " " & PreviousPage.ReportParams.Param)
End If
A: Session isn't always available. For instance when XSS (cross-site-scripting) security settings on IE prevent the storage of third-party cookies. If your site is being called within an IFrame from a site that's not your DNS domain, your cookies are going to be blocked by default. No cookies = no session.
Another example is where you have to pass control to another website that will make the callback to your site as a pure URL, not a post. In this case you have to store your session parameters in a querystring parameter, something that's tough to do given the 4k size constraint and URL encoding, not to mention encryption, etc.
The issue is that most of the built-in serialisation methods are pretty verbose, thus one has to resort to a roll-your-own method, probably using reflection.
Another reason for not using sessions is simply to give a better user experience; sessions get cleared after N minutes and when the server restarts. OK, in this case a viewstate is preferable, but sometimes it's not possible to use a form. OK, one could rely on JavaScript to do a postback, but again, that's not always possible.
These are the problems I'm currently coding around.
A: Faced with a similar situation, what I did was XML-serialize the object and pass it around as a query string parameter. The difficulty with this approach was that, despite encoding, the receiving form threw an exception saying "potentially dangerous request...". The way I got around it was to encrypt the serialized object and then encode it before passing it around as a query string parameter. Which in turn made the query string tamper-proof (a bonus wandering into HMAC territory)!
FormA XML-serializes an object > encrypts the serialized string > encodes it > passes it as a query string to FormB. FormB decrypts the query parameter value (as Request.QueryString also decodes) > deserializes the resulting XML string back to an object using XmlSerializer.
I can share my VB.NET code upon request to howIdidit-at-applecart-dot-net
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Using lock on the key of a Dictionary I have a Dictionary<string, someobject>.
EDIT: It was pointed out to me that my example was bad. My whole intention was not to update the references in a loop, but to update different values as different threads need to update/get the data. I changed the loop to a method.
I need to update items in my dictionary - one key at a time - and I was wondering if there are any problems in using the lock on the .Key value of my Dictionary object?
private static Dictionary<string, MatrixElement> matrixElements = new Dictionary<string, MatrixElement>();
//Pseudo-code
public static void UpdateValue(string key)
{
KeyValuePair<string, MatrixElement> keyValuePair = matrixElements[key];
lock (keyValuePair.Key)
{
keyValuePair.Value = SomeMeanMethod();
}
}
Would that hold up in court or fail? I just want each value in the dictionary to be locked independently, so locking (and updating) one value does not lock the others. Also, I'm aware the lock will be held for a long time - but the data will be invalid until updated fully.
A: Note: I assume exception when modifying collection during iteration is already fixed
Dictionary is not a thread-safe collection, which means it is not safe to modify and read the collection from different threads without external synchronization. Hashtable is (was?) thread-safe for one-writer-many-readers scenarios, but Dictionary has a different internal data structure and doesn't inherit this guarantee.
This means that you cannot modify your dictionary while you are accessing it for read or write from another thread; doing so can corrupt its internal data structures. Locking on the key doesn't protect the internal data structure, because while you modify that very key someone could be reading a different key of your dictionary in another thread. Even if you can guarantee that all your keys are the same objects (as mentioned elsewhere about string interning), this doesn't put you on the safe side. Example:
*
*You lock the key and begin to modify dictionary
*Another thread attempts to get the value for a key which happens to fall into the same bucket as the locked one. This is not only when the hashcodes of two objects are the same, but more frequently when hashcode % tableSize is the same.
*Both threads are accessing the same bucket (linked list of keys with same hashcode%tableSize value)
If there is no such key in the dictionary, the first thread will start modifying the list, and the second thread will likely read an incomplete state.
If such a key already exists, implementation details of the dictionary could still modify the data structure - for example, moving recently accessed keys to the head of the list for faster retrieval. You cannot rely on implementation details.
There are many cases like that where you will end up with a corrupted dictionary. So you have to have an external synchronization object (or use the Dictionary itself, if it is not exposed publicly) and lock on it during the entire operation. If you need more granular locks because an operation can take a long time, you can copy the keys you need to update, iterate over them, lock the entire dictionary during each single-key update (don't forget to verify the key is still there), and release it to let other threads run.
A: If I'm not mistaken, the original intention was to lock on a single element, rather than locking the whole dictionary (like table-level lock vs. row level lock in a DB)
You can't lock on the dictionary's key, as many here have explained.
What you can do, is to keep an internal dictionary of lock objects, that corresponds to the actual dictionary. So when you'd want to write to YourDictionary[Key1], you'll first lock on InternalLocksDictionary[Key1] - so only a single thread will write to YourDictionary.
A (not too clean) example can be found here.
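A minimal sketch of that idea, with illustrative names (YourDictionary and ComputeNewValue are placeholders, not from the original answer):

```csharp
private static readonly Dictionary<string, object> internalLocks =
    new Dictionary<string, object>();
private static readonly object internalLocksGuard = new object();

private static object GetLockFor(string key)
{
    // Briefly lock the lock-dictionary itself while fetching or creating
    // the lock object that corresponds to this key
    lock (internalLocksGuard)
    {
        object keyLock;
        if (!internalLocks.TryGetValue(key, out keyLock))
        {
            keyLock = new object();
            internalLocks.Add(key, keyLock);
        }
        return keyLock;
    }
}

// Usage: serializes writers of "Key1" only. Note that, as discussed
// elsewhere in this thread, writes to a plain Dictionary may still need
// a short global lock to protect its internal structure.
lock (GetLockFor("Key1"))
{
    YourDictionary["Key1"] = ComputeNewValue();
}
```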
A: Locking on an object that is accessible outside of the code locking it is a big risk. If any other code (anywhere) ever locks that object you could be in for some deadlocks that are hard to debug. Also note that you lock the object, not the reference, so if I gave you a dictionary, I may still hold references to the keys and lock on them - causing us to lock on the same object.
If you completely encapsulate the dictionary, and generate the keys yourself (they aren't ever passed in), then you may be safe.
However, try to stick to one rule - limit the visibility of the objects you lock on to the locking code itself whenever possible.
That's why you see this:
public class Something
{
private readonly object lockObj = new object();
public void SomethingReentrant()
{
lock(lockObj) // Line A
{
// ...
}
}
}
rather than seeing line A above replaced by
lock(this)
That way, a separate object is locked on, and the visibility is limited.
Edit Jon Skeet correctly observed that lockObj above should be readonly.
A: No, this would not work.
The reason is string interning. This means that:
string a = "Something";
string b = "Something";
are both the same object! Therefore, you should never lock on strings because if some other part of the program (e.g. another instance of this same object) also wants to lock on the same string, you could accidentally create lock contention where there is no need for it; possibly even a deadlock.
Feel free to do this with non-strings, though. For best clarity, I make it a personal habit to always create a separate lock object:
class Something
{
bool threadSafeBool = true;
object threadSafeBoolLock = new object(); // Always lock this to use threadSafeBool
}
I recommend you do the same. Create a Dictionary with the lock objects for every matrix cell. Then, lock these objects when needed.
PS. Changing the collection you are iterating over is not considered very nice. It will even throw an exception with most collection types. Try to refactor this - e.g. iterate over a list of keys, if it will always be constant, not the pairs.
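A hedged sketch of that refactoring, using the matrixElements dictionary and SomeMeanMethod from the question's pseudo-code:

```csharp
// Iterate over a snapshot of the keys rather than the dictionary itself,
// so replacing values during the loop doesn't invalidate the enumerator.
foreach (string key in new List<string>(matrixElements.Keys))
{
    matrixElements[key] = SomeMeanMethod();
}
```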
A: Just came across this and thought I'd share some code I wrote a few years ago, where I needed to lock a dictionary on a per-key basis
using (var lockObject = new Lock(hashedCacheID))
{
var lockedKey = lockObject.GetLock();
//now do something with the dictionary
}
the lock class
class Lock : IDisposable
{
private static readonly Dictionary<string, string> Lockedkeys = new Dictionary<string, string>();
private static readonly object CritialLock = new object();
private readonly string _key;
private bool _isLocked;
public Lock(string key)
{
_key = key;
lock (CritialLock)
{
//if the dictionary doesn't contain the key, add it
if (!Lockedkeys.ContainsKey(key))
{
Lockedkeys.Add(key, String.Copy(key)); //ensure that the two objects have different references
}
}
}
public string GetLock()
{
var key = Lockedkeys[_key];
if (!_isLocked)
{
Monitor.Enter(key);
}
_isLocked = true;
return key;
}
public void Dispose()
{
var key = Lockedkeys[_key];
if (_isLocked)
{
Monitor.Exit(key);
}
_isLocked = false;
}
}
A: In your example, you can not do what you want to do!
You will get a System.InvalidOperationException with a message of Collection was modified; enumeration operation may not execute.
Here is an example to prove:
using System.Collections.Generic;
using System;
public class Test
{
private Int32 age = 42;
static public void Main()
{
(new Test()).TestMethod();
}
public void TestMethod()
{
Dictionary<Int32, string> myDict = new Dictionary<Int32, string>();
myDict[age] = age.ToString();
foreach(KeyValuePair<Int32, string> pair in myDict)
{
Console.WriteLine("{0} : {1}", pair.Key, pair.Value);
++age;
Console.WriteLine("{0} : {1}", pair.Key, pair.Value);
myDict[pair.Key] = "new";
Console.WriteLine("Changed!");
}
}
}
The output would be:
42 : 42
42 : 42
Changed!
Unhandled Exception: System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
at System.Collections.Generic.Dictionary`2.Enumerator.MoveNext()
at Test.TestMethod()
at Test.Main()
A: I can see a few potential issues there:
*
*strings can be shared, so you don't necessarily know who else might be locking on that key object for what other reason
*strings might not be shared: you may be locking on one string key with the value "Key1" and some other piece of code may have a different string object that also contains the characters "Key1". To the dictionary they're the same key but as far as locking is concerned they are different objects.
*That locking won't prevent changes to the value objects themselves, i.e. matrixElements[someKey].ChangeAllYourContents()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Using Maven 1.x without extra plugins, how does someone build an executable jar? Using Maven 1.x with just the bundled/standard plugins, what configuration is necessary to build an executable Jar?
Answers should cover:
*
*including dependencies in target Jar
*proper classpath configuration to make dependency Jars accessible
A: Well the easiest way is to simply set the maven.jar.mainclass property to the main class you'd like to use.
As far as setting up the manifest classpath you can use maven.jar.manifest.classpath.add=true to have maven automatically update the classpath based on the dependencies described in the project.xml.
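Put together, the relevant project.properties entries would look something like this (the main class name is an illustrative placeholder):

```properties
# project.properties
maven.jar.mainclass=com.example.Main
maven.jar.manifest.classpath.add=true
```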
Disclaimer: It's been a long time since I've used Maven 1 and I did not test any of this out, but I'm pretty sure this will get you started in the right direction. For more information check out the jar plugin docs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to freeze GridView header? As in a title, does anyone know how to freeze GridView header in ASP.NET ?
A: You can do it in the css
Freeze Header:
1. Define class .Freezing in Stylesheet:
.Freezing
{
position:relative ;
top:expression(this.offsetParent.scrollTop);
z-index: 10;
}
2. Assign the DataGrid header's CssClass to Freezing
A: Option (a) buy into a UI package that includes a souped-up GridView with this functionality built-in.
Option (b) roll your own - it's not simple. Dino Esposito has one approach.
EDIT: Just noticed that the Dino article links to a subscriber-only area on the ASPnetPro magazine site.
Here's another approach using extenders.
A: Try this open-source project for ASP.NET. It extends GridView to provide fixed header, footer and pager and resizable column width. Works well in IE 6/7/8, Firefox 3.0/3.5, Chrome and Safari.
http://johnsobrepena.blogspot.com/2009/09/extending-aspnet-gridview-for-fixed.html
A: I too faced a similar issue while developing web applications in ASP.NET 2.0 / 3.5.
One fine day, I came across IdeaSparks ASP.NET CoolControls. It helps to display fixed column headers, footers and pagers.
I used them personally and I really loved it!
To check the control click here : IdeaSparks ASP.NET CoolControls
Hope this helps!
A: I think I have solution of this.
please see below javascript code
<script type="text/javascript" language="javascript">
var orgTop = 0;
$(document).scroll(function () {
var id = $("tr.header").get(0);
var offset = $(id).offset();
var elPosition = $(id).position();
var elWidth = $(id).width();
var elHeight = $(id).height();
if (orgTop == 0) {
orgTop = elPosition.top;
}
if ($(window).scrollTop() <= orgTop) {
id.style.position = 'relative';
id.style.top = 'auto';
id.style.width = 'auto';
id.style.height = 'auto';
}
else {
id.style.position = 'absolute';
id.style.top = $(window).scrollTop() + 'px';
id.style.width = elWidth + 'px';
id.style.height = elHeight + 'px';
}
});
</script>
where .header is the css class of your Grid header.
Just add this script on the page and replace header with the css class name you have used for your header.
A: Give this a try should solve the problem
http://www.codeproject.com/KB/webforms/FreezePaneDatagrid.aspx
A: You may try the following sample
Freeze GridView Columns
A: <script src="Scripts/jquery-1.7.1.js"></script>
<script language="javascript" >
$(document).ready(function () {
var gridHeader = $('#<%=GridView1.ClientID%>').clone(true); // Here Clone Copy of Gridview with style
$(gridHeader).find("tr:gt(0)").remove(); // Here remove all rows except first row (header row)
$('#<%=GridView1.ClientID%> tr th').each(function (i) {
// Here Set Width of each th from gridview to new table(clone table) th
$("th:nth-child(" + (i + 1) + ")", gridHeader).css('width', ($(this).width()).toString() + "px");
});
$("#GHead").append(gridHeader);
$('#GHead').css('position', 'absolute');
$('#GHead').css('top', $('#<%=GridView1.ClientID%>').offset().top);
});
</script>
<h3>Scrollable Gridview with fixed header in ASP.NET</h3>
<br />
<div style="width:550px;">
<div id="GHead"></div>
<%-- This GHead is added for Store Gridview Header --%>
<div style="height:300px; overflow:auto">
<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false"
CellPadding="5" HeaderStyle-BackColor="#f3f3f3">
<Columns>
<asp:BoundField HeaderText="ID" DataField="StateID" />
<asp:BoundField HeaderText="Country" DataField="Country" />
<asp:BoundField HeaderText="StateName" DataField="StateName" />
</Columns>
</asp:GridView>
</div>
</div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Singular/plural searches and stemming I'm looking for a simple solution for singular/plural keyword searches. I heard about stemming but I don't want to use all its features, only plural/singular transformation. The language is Dutch. I have looked at http://www.snowball.tartarus.org before. Does anyone know a simple solution for singular|plural relevant searches?
Thanks in advance.
A: Use a dictionary, a list of stopwords (those you don't want to singularize) plus the rules for the language. If you don't know Dutch then I cannot help you, but show you how it'd be done in Spanish, for instance:
*
*Plurals end with s; if the word doesn't, then it's done
*
*If it ends with s,
*
*check if it's a verb or conjugation ending with s if it is one, then it's done (verbs could be added to the stopwords list)
*if it's not a verb, remove s
*if the word exists in the dictionary, done
*if it doesn't, remove the previous letter, and check that in the dictionary.
*if it's still not there, it's an exception you'll need to handle manually in code (I cannot right now think of any, but they always exist :)
Of course this will not translate directly to Dutch.
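A rough sketch of that rule cascade - illustrative only; a real Dutch version would need its own suffix rules and exception list:

```csharp
// Singularize a word following the Spanish-style cascade described above.
// The dictionary and verb stopword collections must be supplied by the caller.
static string Singularize(string word,
                          ICollection<string> dictionary,
                          ICollection<string> verbStopwords)
{
    if (!word.EndsWith("s"))
        return word;                    // doesn't end with s: not a plural, done
    if (verbStopwords.Contains(word))
        return word;                    // verb/conjugation ending in s, done

    string candidate = word.Substring(0, word.Length - 1);    // drop the "s"
    if (dictionary.Contains(candidate))
        return candidate;

    candidate = candidate.Substring(0, candidate.Length - 1); // drop the previous letter
    if (dictionary.Contains(candidate))
        return candidate;

    return word;                        // an exception to handle manually
}
```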
In general stemmers are already done and provide most of what you need, why don't you want them?
A: Stemmers caused much user annoyance, so if I use one of them, all functionality except singular/plural should be disabled. So the requirement is to use only plural/singular transformations.
A: The answer is correct, but it's worth mentioning that the Dutch language has a large number of irregular verbs. This makes stemming more of a table lookup problem than a set of single rules.
You'll need access to a corpora, you can find one for the Dutch language here: http://corpus1.mpi.nl/ds/imdi_browser/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Where can i find information on creating plugins for SQL Server Management Studio? I have read that while plug-ins are not supported for SQL Server Management Studio, it can be done.
Does anyone have any resources or advice on how to go about it using C#?
A company that is currently offering plug-ins to Management Studio is Red Gate:
http://www.red-gate.com/products/SQL_Refactor/index.htm
A: Here is a very good guide to creating a plugin for SQL Server Management Studio:
http://blogs.microsoft.co.il/blogs/shair/archive/2008/07/28/how-to-create-sql-server-management-studio-addin.aspx
Basically, it consists of the following:
*
*Create a Visual Studio add-in with certain settings.
*Subscribe to SSMS specific events
*Code
The article includes a nice sample that you can use to skip some of the manual steps.
A: Here's a list of a lot of free tools for SQL Server. At the top there is a section that holds stuff about add-ins for SSMS.
You might also want to check out SSMS Tools Pack which is an add-in I made. It's free but not open sourced.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How do I convert an XmlNodeList into a NodeSet to use within XSLT? I've got a XmlNodeList which I need to have it in a format that I can then re-use within a XSLT stylesheet by calling it from a C# extension method.
Can anyone help? I have read that it might have something to do with using a XPathNavigator but I'm still a bit stuck.
A: I had to solve this issue myself a couple of years ago. The only way I managed it was to create an XML fragment containing the nodes in the node list and then passing in the children of the fragment.
XsltArgumentList arguments = new XsltArgumentList();
XmlNodeList nodeList;
XmlDocument nodesFragment = new XmlDocument();
XmlNode root = nodesFragment.CreateElement("root");
foreach (XmlNode node in nodeList)
{
// ImportNode is needed because the nodes originate from a different document
root.AppendChild(nodesFragment.ImportNode(node, true));
}
nodesFragment.AppendChild(root);
arguments.AddParam("argumentname", string.Empty, nodesFragment.CreateNavigator().SelectChildren(XPathNodeType.All));
Then you need to make sure you have the corresponding argument in your XSLT, of course.
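On the XSLT side that would look roughly like this; the parameter name must match the one passed to AddParam above:

```xml
<!-- Declare the parameter at the top level of the stylesheet -->
<xsl:param name="argumentname" />

<xsl:template match="/">
  <!-- The parameter behaves as a node-set; iterate over its children -->
  <xsl:for-each select="$argumentname/*">
    <xsl:value-of select="." />
  </xsl:for-each>
</xsl:template>
```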
Note that you probably don't need the additional XmlDocument. You could just call CreateNavigator() on the root XmlNode instance and use your existing XmlDocument for creating the element (I wrote this code some time ago and I've learned more since then - but I know the code above works, I haven't tried any alternatives).
A: The note at the end was the most useful; I had in fact transformed the XmlNodeList into an XmlDocument already, so I could just use the navigator on there and create it as an XPathNodeIterator.
Thanks for your help!
A: Didn't succeed with the answer provided before.
Using AppendChild() to add previously selected nodes of an XmlNodeList to a new document resulted in an exception on trying to append nodes originating from a different document context. Several attempts to fix this included adding the custom default namespace that the selected nodes in the list are bound to. Finally I dropped the approach completely and switched the way I select nodes into XmlNodeList instances.
Instead of
myXmlNode.SelectNodes( xpath, nsmgr )
I'm using
myXmlNode.CreateNavigator().Select( xpath, nsmgr )
to get an XPathNodeIterator instead of an XmlNodeList. The resulting XPathNodeIterator from the second snippet is now properly added as a parameter value to the XsltArgumentList.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: C# functions with static data In VB.Net, I can declare a variable in a function as Static, like this:
Function EncodeForXml(ByVal data As String) As String
Static badAmpersand As Regex = new Regex("&(?![a-zA-Z]{2,6};|#[0-9]{2,4};)")
data = badAmpersand.Replace(data, "&amp;")
' more processing
return data
End Function
Note that I need to use the keyword Static, rather than Shared, which is the normal way to express this in VB.Net. How can I do this in C#? I can't find its equivalent.
A: Personally I'm glad that C# doesn't have this. Logically, methods don't have state: types and instances do. C# makes that logical model clearer, IMO.
A: There is no equivalent in C# unfortunately.
You will need to use a class level variable.
This is one of the few things that VB has that I wish C# had.
A: Ha! In posting the question, I found the answer! Rather than googling for C# I should have been looking for details on how VB.Net implements it, and typing up the question made that apparent to me. After applying that insight, I found this:
http://weblogs.asp.net/psteele/articles/7717.aspx
That article explains that it's not really supported by the CLR, and the VB compiler creates a static (shared) variable "under the hood" in the method's class. To do the same in C#, I have to create the variable myself.
More than that, it uses the Monitor class to make sure the static member is thread-safe as well. Nice.
As a side note: I'd expect to see this in C# sometime soon. The general tactic I've observed from MS is that it doesn't like VB.Net and C# to get too far apart feature-wise. If one language has a feature not supported by the other it tends to become a priority for the language team for the next version.
A: You have to declare this on the class level:
private static readonly Regex badAmpersand = new Regex("...");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What is a private assembly in .Net? I understand access modifiers at the class level and below, but why would an entire assembly be private? I assume by default, assemblies are public?
A: Well, in the context of assemblies, private means the assembly cannot be referenced by another application outside the directory it's contained in. Therefore, private to the application it's being used for.
As for your question about the default being public... someone correct me if I'm wrong, but there is no public. It would be shared. A shared assembly can obviously be referenced by multiple applications and must reside in the GAC. All the rules of the GAC would then apply.
A: The private in private assembly is not an access modifier. This is just a term used for assembles that are specific to one application - i.e. in this application's directory, as opposed to shared assemblies in the Global Assembly Cache.
A: A private assembly is an assembly that can't be referenced outside the application it is deployed with, while a public (shared, or global) assembly can easily be referenced from outside that application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: SQL 2005 DB Partitioning for SharePoint Background
I have a massive db for a SharePoint site collection. It is 130GB and growing at 10GB per month. 100GB of the 130GB is in one site collection. 30GB is the version table. There is only one site collection - this is by design.
Question
Am I able to partition a database (SharePoint) using SQL 2005s data partitioning features (creating multiple data files)?
Is it possible to partition a database that is already created?
Has anyone partitioned a SharePoint DB? Will I encounter any issues?
A: You would have to create a partition function and scheme and rebuild the table on that partition scheme. SQL 2005 can only partition on a single column, so you would have to have a column in the DB that
*
*Behaves fairly predictably so you don't get a large skew in the amount of data in each partition
*IIRC the column has to be a numeric or datetime value
*In practice it's easiest if it's monotonically increasing - you can create a series of partitions (automatically or manually) and the system will fill them up as it gets to the range definitions.
A date (perhaps the date the document was entered) would be ideal. However, you may or may not have a useful column on the large table. M.S. tech support would be the best source of advice for this.
The partitioning should be transparent to the application (again, you need a column with appropriate behaviour to use as a partition key).
Unless you are lucky enough to have a partition key column that is also used as a search predicate in the most common queries, you may not get much query performance benefit from the partitioning. An example of a column that works well is a date column on a data warehouse. However, your SharePoint application may not make extensive use of this sort of query.
A: Mauro,
Is there no way you can segment the data on a Sharepoint level?
ie you may have multiple "sites" using a single (SQL) content database.
You could migrate site data to a new content database, which will allow you to reduce the data in that large content site and then shrink the datafiles.
it will also assist you in managing your obvious continued growth.
James.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Handling Long Running Reports I am working on a ASP.net application written in C# with Sql Server 2000 database. We have several PDF reports which clients use for their business needs. The problem is these reports take a while to generate (> 3 minutes). What usually ends up happening is when the user requests the report the request timeout kills the request before the web server has time to finish generating the report, so the user never gets a chance to download the file. Then the user will refresh the page and try again, which starts the entire report generation process over and still ends up timing out. (No we aren't caching reports right now; that is something I am pushing hard for...).
How do you handle these scenarios? I have an idea in my head which involves making an aysnchronous request to start the report generating and then have some javascript to periodically check the status. Once the status indicates the report is finished then make a separate request for the actual file.
Is there a simpler way that I am not seeing?
A: Using the filesystem here is probably a good bet. Have a request that immediately returns a url to the report pdf location. Your server can then either kick off an external process or send a request to itself to perform the reporting. The client can poll the server (using http HEAD) for the PDF at the supplied url. If you make the filename of the PDF derive from the report parameters, either by using a hash or directly putting the parameters into the name you will get instant server side caching too.
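A sketch of deriving the cacheable file name from the report parameters; the hash algorithm and path layout are illustrative choices, not from the original answer:

```csharp
// Same report name + parameters always map to the same PDF path, so a
// repeated request can be served from disk instead of being regenerated.
static string ReportPath(string reportName, string parameters)
{
    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
        byte[] hash = md5.ComputeHash(
            System.Text.Encoding.UTF8.GetBytes(reportName + "|" + parameters));
        return "~/reports/" + reportName + "-" +
               BitConverter.ToString(hash).Replace("-", "") + ".pdf";
    }
}
```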
A: I would consider making this report somehow a little bit more offline from the processing point of view.
Like creating a queue to put report requests into, processing the reports from there, and after finishing, sending a message to the user.
Maybe I would even create a separate Windows Service for the queue handling.
Update: sending to the user can be email or they can have a 'reports' page, where they can check their reports' status and download them if they are ready.
A: What about emailing the report to the user. All the ASP page should do is send the request to generate the report and return a message that the report will be emailed after it has finished running.
A: Your users may not accept this approach, but:
When they request a report (by clicking a button or a link or whatever), you could start the report generation process on a separate thread, and re-direct the user to a page that says "thank you, your report will be emailed to you in a few minutes".
When the thread is done generating the report, you could email the PDF directly (probably won't work because of size), or save the report on the server and email a link to the user.
Alternatively, you could go into IIS and raise the timeout to > 3 minutes.
A: Here is some of the things I would do if I would be presented this problem:
1- Stop those timeouts! They are a total waste of resources. (Raise the timeout value of the ASP pages.)
2- Centralize all the db access in one single point, then gather stats about what reports ran when, by whom, and how much time they took. Investigate why it takes so long: is it because of report complexity? Data range? Server load? (You could actually write all that to a .csv file on the server and import this file periodically into SQL Server to analyze later.)
Eventually, it's going to be easier for you to "cache" reports if you go through this single access point (for example, the same query on the same date will return the same previously generated PDF).
3- I know this really wasn't the question but have you tried diving into those queries to see why they are so long to run? Query tuning maybe?
4- An email/SMS/on-screen message when the report is ready seems great... if your users generally send a batch of reports to be generated, maybe a little dashboard indicating the progress of "their" queue could be built into the app. A little ajax control would periodically refresh the status.
Hint: If you use that central db access point and you have sufficient information about what runs when, why, and for how long, you will eventually be able to roughly estimate the time it will take for a report to run.
If the response time is mission critical, should certain users be limited in the data range (date range for example) during some hours of the day?
Good luck and please post more details about your scenario if you want to get more accurate hints...
A: Query tuning is probably your best place to start. Though I don't know how you are generating the report, that step shouldn't really take all that long. A poorly performing query, on the other hand, could absolutely kill your performance.
Depending on what you find in looking at the query, you may need to add some indexes, or possibly even set up a table to store the information for your report in a denormalized way, to make it available faster. This denormalized table could then be refreshed (through a SQL Server Job) every hour, or with whatever frequency your requirements dictate (within reason).
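Such a refresh step, run from a scheduled SQL Server job, could look roughly like this (table and column names are made up for illustration):

```sql
-- Rebuild the denormalized reporting table from the normalized source tables
TRUNCATE TABLE dbo.ReportSummary;

INSERT INTO dbo.ReportSummary (CustomerId, OrderCount, TotalAmount)
SELECT o.CustomerId, COUNT(*), SUM(o.Amount)
FROM dbo.Orders o
GROUP BY o.CustomerId;
```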
If it's a relatively static report, without varying user input parameters, then caching a report run earlier in the day would be a good idea as well, but it's hard to say any more about this without knowing your situation.
For a problem like this you really need to start at the database unless you have reason to suspect your report generating code of being the culprit. There are various band-aids you could use that might help for a while, but if your db is the root cause then those solutions will not scale well, and you'll likely run into similar problems (or worse) some time in the future.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: When using cvs2svn how can you rename symbols such that a branch and tag resolve to the same name? I am working on converting a CVS repository that has the following symbols (among others):
tcm-6.1.0-branch -- a branch
tcm-6.1.0 -- a tag
Using the standard transformations cvs2svn identifies them properly. However, I'd like to do some clean up during the conversion. Specifically I'd like to drop the redundant '-branch' portion of the branch symbol, since it will be in the 'branches' dir in svn. I added the following to the symbol_transforms of the project:
RegexpSymbolTransform(r'(.*)-branch', r'\1')
Now I end up with " ERROR: Multiple definitions of the symbol 'tcm-6.1.0' in ..." for every file because tcm-6.1.0 is both a branch and a tag. I have several CVS symbol pairs that result in this problem.
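The collision can be reproduced outside cvs2svn with the same pattern; a quick sketch in plain Python, using the symbol names from the question:

```python
import re

# The transform pattern from the question: strip a trailing "-branch".
pattern = re.compile(r'(.*)-branch')

def transform(symbol):
    return pattern.sub(r'\1', symbol)

# The branch name is rewritten, but afterwards it is identical to the
# tag name, which is exactly what triggers the "Multiple definitions"
# error inside cvs2svn.
print(transform('tcm-6.1.0-branch'))  # tcm-6.1.0 (was the branch)
print(transform('tcm-6.1.0'))         # tcm-6.1.0 (the tag, unchanged)
```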
It seems to me that since the source symbols are different and the destination directories are different this operation should be possible. Is there something I'm missing or is this simply a shortcoming of cvs2svn?
How can I rename these symbols such that they remain separate and result in a branch and a tag with the same name?
--
If there is no work around I will try to exclude the problem symbols from the conversion rules and move them by hand afterwards, though I'd rather do it at conversion time.
A: cvs2svn does a lot of magic between CVS and SVN. That's why you cannot "match" the names of branches and tags, as then cvs2svn will not know which version belongs in which dir.
My advice is to rename them afterwards in a single commit with a tool like svnmucc.
So you have a single commit and then everything is in place.
A: RegexpSymbolTransform operates at too low a level, during the parsing of the repository files. Therefore, if you use a SymbolTransform to give two symbols the same name, they will be treated as one and the same symbol.
It is possible to rename branches and tags after the conversion, but this would require an explicit SVN commit that will remain in your history forevermore, making history exploration a bit more complicated.
Instead, you should convert the branch with its original name, but then tell cvs2svn to store it to the SVN path /branches/tcm-6.1.0. That way the symbol will end up in the right place for an SVN branch with the desired name, but will still be treated by cvs2svn as distinct from the similarly-named tag.
This can be done using --symbol-hints=symbol-hints.txt command-line option or the SymbolHintsFileRule('symbol-hints.txt') symbol strategy rule, where symbol-hints.txt is a file containing a line like the following:
. tcm-6.1.0-branch branch /branches/tcm-6.1.0 .
The only disadvantage of this approach that I can think of is that some commit messages that are autogenerated by cvs2svn (for example, for the creation of the branch) will mention the original branch name.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Data generators for SQL server? I would like to receive suggestions on the data generators that are available for SQL Server. If posting a response, please provide any features that you think are important.
I have never used an application like this, so I am looking to be educated on the topic. Thank you.
(My goal is to fill a database with 10,000+ records in each table, to test an application.)
A: For generating sample data, I use simple Python applications.
Considerations:
*
*Simple to modify and configure.
*A repeatable set of data that you can use for performance testing and get consistent results.
*Follow all of the DB referential integrity rules and constraints.
*Realistic data.
The first two indicate that you want to produce script files that will load your data. The third is tougher. There are ways to discover the database metadata and constraints. Looking at 3 and 4 together, you don't want simple reverse engineering -- you want something you can control to produce realistic values.
Generally, you want to build an entity model of your own so that you can be sure you have ranges and key relationships correct.
You can do this three ways.
*
*Generate CSV files of data which you can load manually. Nice repeatable test data.
*Generate SQL scripts which you can run. Nice repeatable data, also.
*Use an ODBC connection to generate data directly into the database. I actually don't like this as much, but you might.
Here's a stripped-down one-table-only version of a data generator that writes a CSV file.
import csv
import random

class SomeEntity( list ):
    titles = ( 'attr1', 'attr2' )  # ... for all columns
    def __init__( self ):
        self.append( random.randrange( 1, 10 ) )
        self.append( random.randrange( 100, 1000 ) )
        # ... for all columns

myData = [ SomeEntity() for i in range(10000) ]
with open( 'tmp.csv', 'w', newline='' ) as aFile:
    dest = csv.writer( aFile )
    dest.writerow( SomeEntity.titles )
    dest.writerows( myData )
For multiple entities, you have to work out the cardinality. Instead of generating random keys, you want to make a random selection from the other entities. So you might have ChildEntity picking a random element from ParentEntity to assure that the FK-PK relationship was correct.
Use random.choice(someList) and random.shuffle(someList) to assure referential integrity.
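The random-selection idea can be sketched like this (hypothetical parent/child tables and column names; random.choice does the referential work):

```python
import csv
import random

# Hypothetical parent rows with primary keys 1..100.
parents = [{'parent_id': pk, 'weight': random.randrange(1, 10)}
           for pk in range(1, 101)]

# Each child picks a random existing parent, so the FK-PK
# relationship is correct by construction.
children = [{'child_id': ck,
             'parent_id': random.choice(parents)['parent_id']}
            for ck in range(1, 1001)]

with open('children.csv', 'w', newline='') as f:
    w = csv.DictWriter(f, fieldnames=['child_id', 'parent_id'])
    w.writeheader()
    w.writerows(children)
```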
A: I have used the data generator in the past. May be worth a look.
3rd party edit
If you do not register you can only generate 100 rows.
A: Visual Studio Team System Database Edition (aka Data Dude) does this.
I have not used it for data generation yet, but 2 features sound nice:
*
*Set your own seed value for the random data generator. This allows you to produce the same random data more than once.
*Point the wizard at a 'real' database and have it generate something that looks like real data.
Maybe these are standard features elsewhere?
A: I just found out about this one: Spawner
A: Something similar has been asked here : Creating test data in a database
Red Gate SQL Data Generator does a great job in that domain. You can customize every field of your database and using random data with seeds. And even create specific patterns using Regex expressions.
A: I've rolled my own data generator that generates random data conforming to regular expressions. It turned into a learning project (under development) and is available at github.
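A generator like that can start very small. Here is a toy sketch for a tiny regex subset (literals, \d, [a-z], [A-Z], and a {n} repeat count); anything beyond that subset is out of scope:

```python
import random
import re
import string

# Recognize one token at a time: \d, [a-z], [A-Z], or a literal
# character, optionally followed by a {n} repeat count.
TOKEN = re.compile(r'(\\d|\[a-z\]|\[A-Z\]|.)(\{(\d+)\})?')

CLASSES = {r'\d': string.digits,
           '[a-z]': string.ascii_lowercase,
           '[A-Z]': string.ascii_uppercase}

def generate(pattern):
    out = []
    for tok, _, count in TOKEN.findall(pattern):
        pool = CLASSES.get(tok, tok)  # literal chars are their own pool
        out.append(''.join(random.choice(pool)
                           for _ in range(int(count or 1))))
    return ''.join(out)

print(generate(r'[A-Z]\d{3}-[a-z]{4}'))  # e.g. a serial-number-ish string
```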
A: I've used a tool called Datatect for this.
Some of the things I like about this tool:
*
*Uses ODBC so you can generate data into any ODBC data source. I've used this for Oracle, SQL and MS Access databases, flat files, and Excel spreadsheets.
*Extensible via VBScript. You can write hooks at various parts of the data generation workflow to extend the abilities of the tool.
*Referentially aware. When populating foreign key columns, pulls valid keys from parent table.
A: This one is free: http://www.sqldog.com
It contains several functions: data generator, fulltext search, database documentation generation, and active database connections.
A: I have used this before
http://sqlmanager.net/en/products/mssql/datagenerator
It's not free though.
Referential integrity checking is quite important, or your tests will be no good without correlated related data (in most cases).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: Getting specific revision via http with VisualSVN Server I'm using VisualSVN Server to host an SVN repo, and for some automation work, I'd like to be able to get specific versions via the http[s] layer.
I can get the HEAD version simply via an http[s] request to the server (httpd?) - but is there any ability to specify the revision, perhaps as a query-string? I can't seem to find it...
I don't want to do a checkout unless I can help it, as there are a lot of files in the specific folder, and I don't want them all - just one or two.
A: Better late than never;
https://entire/Path/To/Folder/file/?p=REV
?p=Rev specifies the revision
A: Dunno if you've already found the answer to this question, but on a regular svn server on Apache you can get to a particular revision with:
http://host/svn-name/!svn/bc/REVISION_NUMBER/path/to/file.ext
*
*host & REVISION_NUMBER are obvious
*/path/to/file.ext is relative to repo root
I've never used visualsvn so your mileage may vary.
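A small helper for building that URL form (a sketch; the host and repository names are made up, only the !svn/bc path segment comes from the answer above):

```python
from urllib.parse import quote

def peg_url(host, repo, revision, path):
    """Build an Apache mod_dav_svn URL for a file at a given revision,
    using the !svn/bc/REVISION_NUMBER form described above."""
    return 'http://{0}/{1}/!svn/bc/{2}/{3}'.format(
        host, repo, revision, quote(path.lstrip('/')))

print(peg_url('host', 'svn-name', 1234, '/path/to/file.ext'))
# http://host/svn-name/!svn/bc/1234/path/to/file.ext
```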
A: Subversion does not publicly document the URIs it uses internally to access that information. (And where it is documented, it is explicitly stated that this can change in future versions.)
To access this information on the web you could use a web viewer (e.g. WebSVN, ViewVC).
If you want to access it from your own program you could also use a client binding like SharpSvn.
using (SvnClient client = new SvnClient())
using (FileStream fs = File.Create("c:\\temp\\file.txt"))
{
// Perform svn cat http://svn.collab.net/svn/repos/trunk/COMMITTERS -r 23456
// > file.txt
SvnCatArgs a = new SvnCatArgs();
a.Revision = 23456;
client.Cat(new Uri("http://svn.collab.net/svn/repos/trunk/COMMITTERS"), a, fs);
}
[Update 2008-12-31: One of the next few versions of Subversion will start documenting public urls you can use for retrieving old versions.]
A: This:
Use of WebDAV in Subversion
should help.
A: The help page for the VisualSVN Web Interface suggests using an address formatted like one of these:
link to r1484 commit in the serf's project repository:
https://demo-server.visualsvn.com/!/#serf/commit/r1484/
link to the current content of the trunk/context.c file in the serf's project repository:
https://demo-server.visualsvn.com/!/#serf/view/head/trunk/context.c
link to the content of trunk/context.c file at revision r2222 in the serf's project repository:
https://demo-server.visualsvn.com/!/#serf/view/r2222/trunk/context.c
The crucial thing seems to be the repo revision number prefixed by 'r'. None of the other answers here mention that, and using addresses formatted like this I was able to view a specific revision of a source file from our VisualSVN server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: How to dynamically create an Actionscript 2 MovieClip based class I have a helper method which allows a MovieClip-based class to be created in code and have the constructor called. Unfortunately the solution is not complete because the MovieClip callback onLoad() is never called.
(Link to the FlashDevelop thread which created the method.)
How can the following function be modified so both the constructor and onLoad() are properly called?
//------------------------------------------------------------------------
// - Helper to create a strongly typed class that subclasses MovieClip.
// - You do not use "new" when calling as it is done internally.
// - The syntax requires the caller to cast to the specific type since
// the return type is an object. (See example below).
//
// classRef, Class to create
// id, Instance name
// ..., (optional) Arguments to pass to MovieClip constructor
// RETURNS Reference to the created object
//
// e.g., var f:Foo = Foo( newClassMC(Foo, "foo1") );
//
public function newClassMC( classRef:Function, id:String ):Object
{
var mc:MovieClip = this.createEmptyMovieClip(id, this.getNextHighestDepth());
mc.__proto__ = classRef.prototype;
if (arguments.length > 2)
{
// Duplicate only the arguments to be passed to the constructor of
// the movie clip we are constructing.
var a:Array = new Array(arguments.length - 2);
for (var i:Number = 2; i < arguments.length; i++)
a[Number(i) - 2] = arguments[Number(i)];
classRef.apply(mc, a);
}
else
{
classRef.apply(mc);
}
return mc;
}
An example of a class that I may want to create:
class Foo extends MovieClip
And some examples of how I would currently create the class in code:
// The way I most commonly create one:
var f:Foo = Foo( newClassMC(Foo, "foo1") );
// Another example...
var obj:Object = newClassMC(Foo, "foo2");
var myFoo:Foo = Foo( obj );
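As an aside for Python readers: the mc.__proto__ reassignment above is the same trick as swapping an instance's __class__ after construction. A loose sketch of the analogy (not ActionScript semantics; the class names are stand-ins):

```python
class MovieClip:
    def __init__(self):
        self.depth = 0

class Foo(MovieClip):
    def __init__(self, name='foo'):
        self.name = name

def new_class_mc(class_ref, *args):
    # Create the "raw" clip first, then graft the subclass onto it
    # and run the subclass constructor by hand, as the AS2 helper does.
    mc = MovieClip()
    mc.__class__ = class_ref
    class_ref.__init__(mc, *args)
    return mc

f = new_class_mc(Foo, 'foo1')
print(isinstance(f, Foo), f.name)  # True foo1
```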
A: Do I understand correctly that you want to create an instance of an empty movie clip with class behavior attached and without having to define an empty clip symbol in the library?
If that's the case you need to use the packages trick. This is my base class (called View) that I've been using over the years and on hundreds of projects:
import mx.events.EventDispatcher;
class com.tequila.common.View extends MovieClip
{
private static var _symbolClass : Function = View;
private static var _symbolPackage : String = "__Packages.com.tequila.common.View";
public var dispatchEvent : Function;
public var addEventListener : Function;
public var removeEventListener : Function;
private function View()
{
super();
EventDispatcher.initialize( this );
onEnterFrame = __$_init;
}
private function onInitialize() : Void
{
// called on the first frame. Event dispatchers are
// ready and initialized at this point.
}
private function __$_init() : Void
{
delete onEnterFrame;
onInitialize();
}
private static function createInstance(symbolClass, parent : View, instance : String, depth : Number, init : Object) : MovieClip
{
if( symbolClass._symbolPackage.indexOf("__Packages") >= 0 )
{
Object.registerClass(symbolClass._symbolPackage, symbolClass);
}
if( depth == undefined )
{
depth = parent.getNextHighestDepth();
}
if( instance == undefined )
{
instance = "__$_" + depth;
}
return( parent.attachMovie(symbolClass._symbolPackage, instance, depth, init) );
}
public static function create(parent : View, instance : String, depth : Number, init : Object) : View
{
return( View( createInstance(_symbolClass, parent, instance, depth, init) ) );
}
}
So, all you have to do to use this class is to subclass it:
class Foo extends View
{
private static var _symbolClass : Function = Foo;
private static var _symbolPackage : String = "__Packages.Foo";
private function Foo()
{
// constructor private
}
private function onInitialize() : Void
{
// implement this to add listeners etc.
}
public static function create(parent : View, instance : String, depth : Number, init : Object) : Foo
{
return( Foo( createInstance(_symbolClass, parent, instance, depth, init) ) );
}
}
You can now create an instance of Foo like this;
var foo : Foo = Foo.create( this );
Assuming that 'this' is some type of MovieClip or View.
If you need to use this with a library symbol then just leave out the __Packages prefix on the _symbolPackage member.
Hope this helps...
A: If you want to create an instance of the Foo class with additional parameters, you can extend the create method. In my implementation, I am creating Nodes with objectIds:
var node : Node = Node.create(1,_root );
The Node class looks like this:
class Node extends View {
private static var _symbolClass : Function = Node;
private static var _symbolPackage : String = "Node";
private var objectId : Number;
private function Node() {
// constructor private
trace("node created ");
}
private function onInitialize() : Void {
//add listeners
}
public static function create(id_:Number, parent : MovieClip, instance : String, depth : Number, init : Object) : Node {
var node :Node = Node( createInstance(_symbolClass, parent, instance, depth, init) )
node.setObjectId(id_);
return(node);
}
//=========================== GETTERS / SETTERS
function setObjectId(id_:Number) : Void {
objectId = id_;
}
function getObjectId() : Number {
return objectId;
}}
Please note that the objectId is undefined in the private constructor Node() but defined in onInitialize().
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: MVC User Controls + ViewData Hi, I'm new to MVC and I've fished around with no luck for how to build MVC user controls that have ViewData returned to them. I was hoping someone would post a step-by-step solution on how to approach this problem. If you could make your solution very detailed, that would help out greatly.
Sorry for being so vague with my question. I would just like to clarify that what I'm ultimately trying to do is pass an id to a controller ActionResult method and render it to a user control directly from the controller itself. I'm unsure how to begin with this approach and wonder if it is even possible. It will essentially, in my mind, look like this:
public ActionResult RTest(int id){
RTestDataContext db = new RTestDataContext();
var table = db.GetTable<tRTest>();
var record = table.SingleOrDefault(m => m.id == id);
return View("RTest", record);
}
and in my user control I would like to render the fields of that record, and that's my issue.
A: If I understand your question, you are trying to pass ViewData into the user control. A user control is essentially a partial view, so you would do this:
<% Html.RenderPartial("someUserControl.ascx", viewData); %>
Now in your usercontrol, ViewData will be whatever you passed in...
A: OK here it goes --
We use Json data
In the aspx page we have an ajax call that calls the controller. Look up the available option parameters for ajax calls.
url: This calls the function in the class (obviously). Our class name is JobController, the function name is UpdateJob and it takes no parameters. The url drops the 'Controller' portion of the class name. For example, to call the UpdateJob function the url would be '/Job/UpdateJob/'.
var data = {x:1, y:2};
$.ajax({
data: data,
cache: false,
url: '/ClassName/functionName/parameter',
dataType: "json",
type: "post",
success: function(result) {
//do something
},
error: function(errorData) {
alert(errorData.responseText);
}
}
);
In the JobController Class:
public ActionResult UpdateJob(string id)
{
string x_Value_from_ajax = Request.Form["x"];
string y_Value_from_ajax = Request.Form["y"];
return Json(dataContextClass.UpdateJob(x_Value_from_ajax, y_Value_from_ajax));
}
We have a Global.asax.cs page that maps the ajax calls.
public class GlobalApplication : System.Web.HttpApplication
{
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute("Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
new { controller = "EnterTime", action = "Index", id = "" } // Parameter defaults (EnterTime is our default controller class, index is our default function and it takes no parameters.)
);
}
}
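The "{controller}/{action}/{id}" template is positional matching with fallbacks; a stripped-down sketch of what the route table does, using the defaults registered above:

```python
def match_route(url, defaults=('EnterTime', 'Index', '')):
    """Map a URL path onto (controller, action, id) the way the
    '{controller}/{action}/{id}' route above does, falling back to
    the registered defaults for missing segments."""
    parts = [p for p in url.strip('/').split('/') if p]
    return tuple(parts[i] if i < len(parts) else defaults[i]
                 for i in range(3))

print(match_route('/Job/UpdateJob/5'))  # ('Job', 'UpdateJob', '5')
print(match_route('/'))                 # ('EnterTime', 'Index', '')
```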
I hope this gets you off to a good start.
Good luck
A: I am pretty sure view data is accessible inside user controls so long as you extend System.Web.Mvc.ViewUserControl and pass it in. I have a snippet of code:
<%Html.RenderPartial("~/UserControls/CategoryChooser.ascx", ViewData);%>
and from within my CategoryChooser ViewData is accessible.
A: Not sure if I understand your problem completely, but here's my answer to "How to add a User Control to your ASP.NET MVC Project".
In Visual Studio 2008, you can choose Add Item. In the categories at the left side, you can choose Visual C# > Web > MVC. There's an option MVC View User Control. Select it, choose a name, select the desired master page and you're good to go.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Best way to encode text data for XML I was looking for a generic method in .Net to encode a string for use in an Xml element or attribute, and was surprised when I didn't immediately find one. So, before I go too much further, could I just be missing the built-in function?
Assuming for a moment that it really doesn't exist, I'm putting together my own generic EncodeForXml(string data) method, and I'm thinking about the best way to do this.
The data I'm using that prompted this whole thing could contain bad characters like &, <, ", etc. It could also contain on occasion the properly escaped entities: &amp;, &lt;, and &quot;, which means just using a CDATA section may not be the best idea. That seems kinda klunky anyway; I'd much rather end up with a nice string value that can be used directly in the xml.
I've used a regular expression in the past to just catch bad ampersands, and I'm thinking of using it to catch them in this case as well as the first step, and then doing a simple replace for other characters.
So, could this be optimized further without making it too complex, and is there anything I'm missing? :
Function EncodeForXml(ByVal data As String) As String
Static badAmpersand As New Regex("&(?![a-zA-Z]{2,6};|#[0-9]{2,4};)")
data = badAmpersand.Replace(data, "&amp;")
return data.Replace("<", "&lt;").Replace("""", "&quot;").Replace(">", "&gt;")
End Function
Sorry for all you C#-only folks -- I don't really care which language I use, but I wanted to make the Regex static and you can't do that in C# without declaring it outside the method, so this will be VB.Net.
Finally, we're still on .Net 2.0 where I work, but if someone could take the final product and turn it into an extension method for the string class, that'd be pretty cool too.
Update The first few responses indicate that .Net does indeed have built-in ways of doing this. But now that I've started, I kind of want to finish my EncodeForXml() method just for the fun of it, so I'm still looking for ideas for improvement. Notably: a more complete list of characters that should be encoded as entities (perhaps stored in a list/map), and something that gets better performance than doing a .Replace() on immutable strings in serial.
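On the "map of characters plus something faster than serial .Replace()" idea: the one-pass equivalent in Python is a translation table built from a dict of the five predefined XML entities. Note that unlike the badAmpersand regex above, this simple version re-encodes entities that are already escaped:

```python
# The five entities predefined by XML, kept in one map so the string
# is walked once instead of once per Replace call.
XML_ENTITIES = {'&': '&amp;', '<': '&lt;', '>': '&gt;',
                '"': '&quot;', "'": '&apos;'}

_TABLE = str.maketrans(XML_ENTITIES)

def encode_for_xml(data):
    return data.translate(_TABLE)

print(encode_for_xml('a < b & "c"'))  # a &lt; b &amp; &quot;c&quot;
```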
A: Depending on how much you know about the input, you may have to take into account that not all Unicode characters are valid XML characters.
Both Server.HtmlEncode and System.Security.SecurityElement.Escape seem to ignore illegal XML characters, while System.XML.XmlWriter.WriteString throws an ArgumentException when it encounters illegal characters (unless you disable that check in which case it ignores them). An overview of library functions is available here.
Edit 2011/8/14: seeing that at least a few people have consulted this answer in the last couple years, I decided to completely rewrite the original code, which had numerous issues, including horribly mishandling UTF-16.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
/// <summary>
/// Encodes data so that it can be safely embedded as text in XML documents.
/// </summary>
public class XmlTextEncoder : TextReader {
public static string Encode(string s) {
using (var stream = new StringReader(s))
using (var encoder = new XmlTextEncoder(stream)) {
return encoder.ReadToEnd();
}
}
/// <param name="source">The data to be encoded in UTF-16 format.</param>
/// <param name="filterIllegalChars">It is illegal to encode certain
/// characters in XML. If true, silently omit these characters from the
/// output; if false, throw an error when encountered.</param>
public XmlTextEncoder(TextReader source, bool filterIllegalChars=true) {
_source = source;
_filterIllegalChars = filterIllegalChars;
}
readonly Queue<char> _buf = new Queue<char>();
readonly bool _filterIllegalChars;
readonly TextReader _source;
public override int Peek() {
PopulateBuffer();
if (_buf.Count == 0) return -1;
return _buf.Peek();
}
public override int Read() {
PopulateBuffer();
if (_buf.Count == 0) return -1;
return _buf.Dequeue();
}
void PopulateBuffer() {
const int endSentinel = -1;
while (_buf.Count == 0 && _source.Peek() != endSentinel) {
// Strings in .NET are assumed to be UTF-16 encoded [1].
var c = (char) _source.Read();
if (Entities.ContainsKey(c)) {
// Encode all entities defined in the XML spec [2].
foreach (var i in Entities[c]) _buf.Enqueue(i);
} else if (!(0x0 <= c && c <= 0x8) &&
!new[] { 0xB, 0xC }.Contains(c) &&
!(0xE <= c && c <= 0x1F) &&
!(0x7F <= c && c <= 0x84) &&
!(0x86 <= c && c <= 0x9F) &&
!(0xD800 <= c && c <= 0xDFFF) &&
!new[] { 0xFFFE, 0xFFFF }.Contains(c)) {
// Allow if the Unicode codepoint is legal in XML [3].
_buf.Enqueue(c);
} else if (char.IsHighSurrogate(c) &&
_source.Peek() != endSentinel &&
char.IsLowSurrogate((char) _source.Peek())) {
// Allow well-formed surrogate pairs [1].
_buf.Enqueue(c);
_buf.Enqueue((char) _source.Read());
} else if (!_filterIllegalChars) {
// Note that we cannot encode illegal characters as entity
// references due to the "Legal Character" constraint of
// XML [4]. Nor are they allowed in CDATA sections [5].
throw new ArgumentException(
String.Format("Illegal character: '{0:X}'", (int) c));
}
}
}
static readonly Dictionary<char,string> Entities =
new Dictionary<char,string> {
{ '"', "&quot;" }, { '&', "&amp;"}, { '\'', "&apos;" },
{ '<', "&lt;" }, { '>', "&gt;" },
};
// References:
// [1] http://en.wikipedia.org/wiki/UTF-16/UCS-2
// [2] http://www.w3.org/TR/xml11/#sec-predefined-ent
// [3] http://www.w3.org/TR/xml11/#charsets
// [4] http://www.w3.org/TR/xml11/#sec-references
// [5] http://www.w3.org/TR/xml11/#sec-cdata-sect
}
Unit tests and full code can be found here.
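The legal-character checks in PopulateBuffer translate directly into a predicate. A Python mirror of the same exclusion list (note the C# code tests UTF-16 code units, so values above 0xFFFF go through its surrogate-pair branch instead):

```python
def is_xml11_char(cp):
    """True if the codepoint is legal in XML 1.1 text, mirroring the
    exclusion ranges used in PopulateBuffer above."""
    if 0x0 <= cp <= 0x8 or cp in (0xB, 0xC):
        return False
    if 0xE <= cp <= 0x1F or 0x7F <= cp <= 0x84 or 0x86 <= cp <= 0x9F:
        return False
    if 0xD800 <= cp <= 0xDFFF or cp in (0xFFFE, 0xFFFF):
        return False
    return cp <= 0x10FFFF

print(is_xml11_char(0x9), is_xml11_char(0x0))  # True False
```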
A: XmlTextWriter.WriteString() does the escaping.
A: System.XML handles the encoding for you, so you don't need a method like this.
A: SecurityElement.Escape
documented here
A: If this is an ASP.NET app why not use Server.HtmlEncode() ?
A: This might be the case where you could benefit from using the WriteCData method.
public override void WriteCData(string text)
Member of System.Xml.XmlTextWriter
Summary:
Writes out a <![CDATA[...]]> block containing the specified text.
Parameters:
text: Text to place inside the CDATA block.
A simple example would look like the following:
writer.WriteStartElement("name");
writer.WriteCData("<unsafe characters>");
writer.WriteFullEndElement();
The result looks like:
<name><![CDATA[<unsafe characters>]]></name>
When reading the node values the XMLReader automatically strips out the CData part of the innertext so you don't have to worry about it. The only catch is that you have to store the data as an innerText value to an XML node. In other words, you can't insert CData content into an attribute value.
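One caveat worth adding: a CDATA section cannot itself contain the sequence "]]>", so that sequence has to be split across two sections. A sketch of the usual trick, in Python for brevity:

```python
def cdata_wrap(text):
    # "]]>" may not appear inside a CDATA section, so break it into
    # "]]" + "]]><![CDATA[" + ">", ending one section and starting another.
    return '<![CDATA[' + text.replace(']]>', ']]]]><![CDATA[>') + ']]>'

print(cdata_wrap('<unsafe characters>'))
# <![CDATA[<unsafe characters>]]>
print(cdata_wrap('a]]>b'))
# <![CDATA[a]]]]><![CDATA[>b]]>
```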
A: If you're serious about handling all of the invalid characters (not just the few "html" ones), and you have access to System.Xml, here's the simplest way to do proper Xml encoding of value data:
string theTextToEscape = "Something \x1d else \x1D <script>alert('123');</script>";
var x = new XmlDocument();
x.LoadXml("<r/>"); // simple, empty root element
x.DocumentElement.InnerText = theTextToEscape; // put in raw string
string escapedText = x.DocumentElement.InnerXml; // Returns: Something &#x1D; else &#x1D; &lt;script&gt;alert('123');&lt;/script&gt;
// Repeat the last 2 lines to escape additional strings.
It's important to know that XmlConvert.EncodeName() is not appropriate, because that's for entity/tag names, not values. Using that would be like Url-encoding when you needed to Html-encode.
A: In the past I have used HttpUtility.HtmlEncode to encode text for xml. It performs the same task, really. I haven't run into any issues with it yet, but that's not to say I won't in the future. As the name implies, it was made for HTML, not XML.
You've probably already read it, but here is an article on xml encoding and decoding.
EDIT: Of course, if you use an XmlWriter or one of the new XElement classes, this encoding is done for you. In fact, you could just take the text, place it in a new XElement instance, then return the string (.ToString()) version of the element. I've heard that SecurityElement.Escape will perform the same task as your utility method as well, but haven't read much about it or used it.
EDIT2: Disregard my comment about XElement, since you're still on 2.0
A: Microsoft's AntiXss library AntiXssEncoder Class in System.Web.dll has methods for this:
AntiXss.XmlEncode(string s)
AntiXss.XmlAttributeEncode(string s)
it has HTML as well:
AntiXss.HtmlEncode(string s)
AntiXss.HtmlAttributeEncode(string s)
A: In .net 3.5+
new XText("I <want> to & encode this for XML").ToString();
Gives you:
I &lt;want&gt; to &amp; encode this for XML
Turns out that this method doesn't encode some things that it should (like quotes).
SecurityElement.Escape (workmad3's answer) seems to do a better job with this and it's included in earlier versions of .net.
If you don't mind 3rd party code and want to ensure no illegal characters make it into your XML, I would recommend Michael Kropat's answer.
A: Brilliant! That's all I can say.
Here is a VB variant of the updated code (not in a class, just a function) that will clean up and also sanitize the xml
Function cXML(ByVal _buf As String) As String
Dim textOut As New StringBuilder
Dim c As Char
If _buf Is Nothing OrElse _buf.Trim() = String.Empty Then Return String.Empty
For i As Integer = 0 To _buf.Length - 1
c = _buf(i)
If Entities.ContainsKey(c) Then
textOut.Append(Entities.Item(c))
ElseIf (AscW(c) = &H9 OrElse AscW(c) = &HA OrElse AscW(c) = &HD) OrElse ((AscW(c) >= &H20) AndAlso (AscW(c) <= &HD7FF)) _
OrElse ((AscW(c) >= &HE000) AndAlso (AscW(c) <= &HFFFD)) OrElse ((AscW(c) >= &H10000) AndAlso (AscW(c) <= &H10FFFF)) Then
textOut.Append(c)
End If
Next
Return textOut.ToString
End Function
Shared ReadOnly Entities As New Dictionary(Of Char, String)() From {{""""c, "&quot;"}, {"&"c, "&amp;"}, {"'"c, "&apos;"}, {"<"c, "&lt;"}, {">"c, "&gt;"}}
A: You can use the built-in class XAttribute, which handles the encoding automatically:
using System.Xml.Linq;
XDocument doc = new XDocument();
List<XAttribute> attributes = new List<XAttribute>();
attributes.Add(new XAttribute("key1", "val1&val11"));
attributes.Add(new XAttribute("key2", "val2"));
XElement elem = new XElement("test", attributes.ToArray());
doc.Add(elem);
string xmlStr = doc.ToString();
A: Here is a single-line solution using XElements. I use it in a very small tool. I don't need it a second time so I keep it this way. (It's dirty, though.)
StrVal = (<x a=<%= StrVal %>>END</x>).ToString().Replace("<x a=""", "").Replace(">END</x>", "")
Oh and it only works in VB not in C#
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72"
} |
Q: Sharepoint WebParts Say you have several webparts, one as a controller and several which take information from the controller and act on it. This is fairly easy to model using the Consumer/Producer interface introduced in ASP 2.0.
How would you be able to add interactions the other way around whilst still maintaining the above?
A simple example would be: the user enters information into webpart A which performs a search and the results would be displayed on webpart B. Webpart C allows you to filter the results which should trigger webpart A to re-submit the query and hence update the results in B.
It doesn't seem possible to do in WSS 3.0 because you are only allowed 1 interface to be used in all of the connections at any one time.
Does this even make sense ? :-)
A: A quick and dirty solution to enable arbitrary control communication is to use recursive find control and events. Have the controls search the control tree by control type for what they need and then subscribe to publicly exposed events on the publishing control.
I have previously used this trick to enable standard server controls to find each other when embedded in CMS systems from different vendors, to avoid a specific communication API entirely.
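A tree search like that is easy to sketch outside ASP.NET; here it is in Python (Control and Button are hypothetical stand-ins for real web controls):

```python
class Control:
    def __init__(self, *children):
        self.controls = list(children)

class Button(Control):
    pass

def find_control(root, wanted_type):
    """Depth-first search of the control tree for the first control
    of the requested type, mirroring a recursive find-by-type."""
    for child in root.controls:
        if isinstance(child, wanted_type):
            return child
        found = find_control(child, wanted_type)
        if found is not None:
            return found
    return None

page = Control(Control(), Control(Button()))
print(find_control(page, Button) is not None)  # True
```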
A: I don't see anything wrong with webpart A getting a reference to webpart B and calling public/internal methods/properties or subscribing handlers to public/internal events. One point of mention when doing this though: EnsureChildControls. I have witnessed with my own eyes one webpart being run clear to PreRender while another webpart hadn't even run CreateChildControls.
From webpart A, fetch your reference to webpart B (in this case webpart B is of type Calendar) like so:
private Calendar _calendarWP = null;
public Calendar CalendarWP
{
get
{
if (_calendarWP != null)
return _calendarWP;
else
foreach (System.Web.UI.WebControls.WebParts.WebPartZone zone in this.WebPartManager.Zones)
foreach (System.Web.UI.WebControls.WebParts.WebPart webpart in zone.WebParts)
if (webpart is Calendar)
{
_calendarWP = (Calendar)webpart;
_calendarWP.EnsureChildControls();
return _calendarWP;
}
return null;
}
}
Now you can do things like fetch some new data and update the Calendar like so:
IEnumerable newData = SomeDataProvider.GetNewData(args);
CalendarWP.someGridView.DataSource = newData;
CalendarWP.someGridView.DataBind();
Or perhaps let webpart A toss a reference to itself over to webpart B so it can use webpart A's public/internal properties to go fetch data for itself:
CalendarWP.UseWPAToFetchData(this);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What is the best way to sample/profile a PyObjC application? Sampling with Activity Monitor/Instruments/Shark will show stack traces full of C functions for the Python interpreter. It would be helpful to see the corresponding Python symbol names. Is there some DTrace magic that can do that? Python's cProfile module can be useful for profiling individual subtrees of Python calls, but not so much for getting a picture of what's going on with the whole application in response to user events.
A: The answer is "dtrace", but it won't work on sufficiently old macs.
http://tech.marshallfamily.com.au/archives/python-dtrace-on-os-x-leopard-part-1/
http://tech.marshallfamily.com.au/archives/python-dtrace-on-os-x-leopard-part-2/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: "Method Not Implemented" error - Firefox 3 Getting sporadic errors from users of a CMS; Ajax requests sometimes result in a "501 Method not implemented" response from the server. Not all the time; usually works.
Application has been stable for months. Users seem to be getting it with Firefox 3. I've seen a couple of references via Google to such problems being related to having "charset=UTF-8" in the Content-Type header, but these may be spurious.
Has anyone seen this error or have any ideas about what the cause could be?
Thanks
Ian
A: You may want to check the logs of the server to see what's causing the issue. For example, it might be that these requests are garbled, say, because of a flaw in the HTTP 1.1 persistent connection implementation.
A: Try this
*
*Try clearing your cookies and your cache
*Type about:config into the URL bar to open the list of configuration settings for Firefox
*Locate the setting for 'network.automatic-ntlm-auth.trusted-uris'
*Set its value to the names of the servers to use NTLM with.
*Locate the setting for 'network.negotiate-auth.trusted-uris'
*Set its value to the names of the servers to use NTLM with.
*network.automatic-ntlm-auth.allow-proxies = True
*Restart Firefox - Test URL to application
A: The problem occurs when your app is not running on the same domain as your service. You need to configure your server to accept those calls by adding the 'Access-Control-Allow-Origin' header.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: In MATLAB, how do I change the background color of a subplot? I'm trying to change the background color of a single subplot in a MATLAB figure.
It's clearly feasible since the UI allows it, but I cannot find the function to automate it.
I've looked into whitebg, but it changes the color scheme of the whole figure, not just the current subplot.
(I'm using MATLAB Version 6.1 by the way)
A: I know you mentioned that you are using MATLAB 6.1, but it bears mentioning that in the newer versions of MATLAB you can specify additional property-value pair arguments in the initial call to SUBPLOT, allowing for a more compact syntax. The following creates an axes with a red background in the top left corner of a 2-by-2 layout:
subplot(2,2,1,'Color','r');
I'm not certain in which version of MATLAB this syntax was introduced, since the release notes going back to Version 7 (R14) don't seem to mention it.
A: You can use the set command.
set(subplot(2,2,1),'Color','Red')
That will give you a red background in the subplot location 2,2,1.
A: I've not used Matlab in several years, but I think it might well be the whitebg method called after the subplot declaration, similar to the way in which you would set a title.
subplot(3, 2, 4), hist(rand(50)), whitebg('y');
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Using ofstream to write text to the end of a file How do I use the ofstream to write text to the end of a file without erasing its content inside?
A: You can pass the flag ios::app when opening the file:
ofstream ofs("filename", ios::app);
A: You want to append to the file. Use ios::app as the file mode when creating the ofstream.
Appending will automatically seek to the end of the file.
A: Use ios::app as the file mode.
A: The seekp() function allows you to arbitrarily set the position of the file pointer, for open files.
A: As people have mentioned above, opening the file in the following manner will do:
ofstream out("path_to_file",ios::app);
It will do the trick if you want to append data to the file by default.
But if you want to go to the end of the file in the middle of the program, when the default mode is not ios::app, you can use the following statement:
out.seekp(0, ios::end);
This will place the put pointer 0 bytes from the end of file. http://www.cplusplus.com/reference/ostream/ostream/seekp
Make sure you use the correct seekp(), as there are 2 overloads of seekp(). The one with 2 parameters is favored in this situation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Modifying Request/Response Streams in WebBrowser Control Using MSHTML I need to find a way to get at the request/response streams inside of the webbrowser winforms control and see that it's not real intuitive. For example, I need to be able to modify post data when a user clicks a submit button. It looks like you have to register for some MSHTML COM events to do so, but am unsure which I need to subscribe to (and how). Has anyone done this in the past? Examples?
A: Take a look at Asynchronous Pluggable Protocols (IInternetProtocol):
http://msdn.microsoft.com/en-us/library/aa767743(VS.85).aspx
And a solution in C# that uses some of it for its own protocols in IE:
http://www.codeproject.com/KB/aspnet/AspxProtocol.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Checking for a duplicate element in the OUTPUT I've got some XML, for example purposes it looks like this:
<root>
<field1>test</field1>
<f2>t2</f2>
<f2>t3</f2>
</root>
I want to transform it with XSLT, but I want to suppress the second f2 element in the output - how do I check inside my template to see if the f2 element already exists in the output when the second f2 element in the source is processed? My XSLT looks something like this at present:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="no" omit-xml-declaration="yes" standalone="no" />
<xsl:template match="/">
<xsl:for-each select="./root">
<output>
<xsl:apply-templates />
</output>
</xsl:for-each>
</xsl:template>
<xsl:template match="*" >
<xsl:element name="{name(.)}">
<xsl:value-of select="." />
</xsl:element>
</xsl:template>
</xsl:stylesheet>
I need to do some sort of check around the xsl:element in the template I think, but I'm not sure how to interrogate the output document to see if the element is already present.
Edit: Forgot the pre tags, code should be visible now!
A: It depends how system wide you want to be.
i.e. Are you only concerned with elements that are children of the same parent, or all elements at the same level ('cousins' if you like) or elements anywhere in the document...
In the first situation you could check the preceding-sibling axis to see if any other elements exist with the same name.
<xsl:if test="count(preceding-sibling::*[name()=name(current())])=0">
... do stuff in here.
</xsl:if>
A: To only check (and warn you of a duplicate), you may find an example here
Something along the lines of:
<xsl:for-each-group select="collection(...)//@id" group-by=".">
  <xsl:if test="count(current-group()) ne 1">
    <xsl:message>Id value <xsl:value-of select="current-grouping-key()"/> is duplicated in files <xsl:value-of select="current-group()/document-uri(/)" separator=" and "/></xsl:message>
  </xsl:if>
</xsl:for-each-group>
To be modified to select all nodes within 'root' element.
As to remove the duplicate lines, you have another example here
That would look like:
<xsl:stylesheet>
  <xsl:key name="xyz" match="record[x/y/z]" use="x/y/z" />
  <xsl:variable name="noxyzdups" select="/path/to/record[generate-id(.) = generate-id(key('xyz', x/y/z))]" />
  ...
  <xsl:template ... >
    <xsl:copy-of select="exslt:node-set($noxyzdups)" />
  </xsl:template>
</xsl:stylesheet>
x/y/z is the xpath expression that you want made unique. It can be concat(x,'-',@y,'-',z) or whatever you want.
Now I am not sure those two examples can easily be adapted to your case, but I just wanted to point out those two sources, in case it helps.
A: It's not possible to interrogate the output of your transform. It's also not possible to track the current state of your transform (i.e. keep track of what nodes you've emitted in a variable). Fundamentally, that's not how XSLT works. One of the costs of a side-effect-free programming environment is that you can't do things that have side effects. Oh well.
In your case, one way of accomplishing this would be to build a variable that contained a list of all of the source elements that could be transformed into the output element that you want to emit only once. Then check every node you're transforming against this list. If it's not in the list, emit it. If it's the first item in the list, emit it. Otherwise, don't.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What are some reasons why a sole developer should use TDD? I'm a contract programmer with lots of experience. I'm used to being hired by a client to go in and do a software project of one form or another on my own, usually from nothing. That means a clean slate, almost every time. I can bring in libraries I've developed to get a quick start, but they're always optional. (and depend on getting the right IP clauses in the contract) Many times I can specify or even design the hardware platform... so we're talking serious freedom here.
I can see uses for constructing automated tests for certain code: Libraries with more than trivial functionality, core functionality with a high number of references, etc. Basically, as the value of a piece of code goes up through heavy use, I can see it would be more and more valuable to automatically test that code so that I know I don't break it.
However, in my situation, I find it hard to rationalize anything more than that. I'll adopt things as they prove useful, but I'm not about to blindly follow anything.
I find many of the things I do in 'maintenance' are actually small design changes. In this case, the tests would not have saved me anything and now they'd have to change too. A highly iterative, stub-first design approach works very well for me. I can't see actually saving myself that much time with more extensive tests.
Hobby projects are even harder to justify... they're usually anything from weekenders up to a say month long. Edge-case bugs rarely matter, it's all about playing with something.
Reading questions such as this one, the most-voted response seems to say that in that poster's experience/opinion TDD actually wastes time if you've got fewer than 5 people (even assuming a certain level of competence/experience with TDD). However, that appears to cover initial development time, not maintenance. It's not clear how TDD stacks up over the entire life cycle of a project.
I think TDD could be a good step in the worthwhile goal of improving the quality of the products of our industry as a whole. Idealism on its own is no longer all that effective at motivating me, though.
I do think TDD would be a good approach in large teams, or any size team containing at least one unreliable programmer. That's not my question.
Why would a sole developer with a good track record adopt TDD?
I'd love to hear of any kind of metrics done (formally or not) on TDD... focusing on solo developers or very small teams.
Failing that, anecdotes of your personal experiences would be nice, too. :)
Please avoid stating opinion without experience to back it. Let's not make this an ideology war. Also, skip the "greater employment options" argument. This is simply an efficiency question.
A: I'm also a contract programmer. Here are my 12 Reasons Why I Love Unit Tests.
A: My best experience with TDD is centered around the pyftpdlib project. Most of the development is done by the original author, and I've made a few small contributions, but it's essentially a solo project. The test suite for the project is very thorough, and tests all the major features of the FTPd library. Before checking in changes or releasing a version, all tests are checked, and when a new feature is added, the test suite is always updated as well.
As a result of this approach, this is the only project I've ever worked on that didn't have showstopper bugs appear after a new release, have changes checked in that broke a major feature, etc. The code is very solid and I've been consistently impressed with how few bug reports have been opened during the life of the project. I (and the original author) attribute much of this success to the comprehensive test suite and the ability to test every major code path at will.
From a logical perspective, any code you write has to be tested, and without TDD then you'll be testing it yourself manually. On the flip side to pyftpdlib, the worst code by number of bugs and frequency of major issues, is code that is/was solely being tested by the developers and QA trying out new features manually. Things don't get tested because of time crunch or falling through the cracks. Old code paths are forgotten and even the oldest stable features end up breaking, major releases end up with important features non-functional. etc. Manual testing is critically important for verification and some randomization of testing, but based on my experiences I'd say that it's essential to have both manual testing and a carefully constructed unit test framework. Between the two approaches the gaps in coverage are smaller, and your likelihood of problems can only be reduced.
A: It does not matter whether you are the sole developer or not. You have to think of it from the application's point of view. All applications need to work properly, all applications need to be maintained, all applications need to be less buggy. There are of course certain scenarios where a TDD approach might not suit you, such as when the deadline is approaching fast and there is no time to perform unit testing.
Anyway, TDD does not depend on a solo or a team environment. It depends on the application as a whole.
A: I don't have an enormous amount of experience, but I have had the experience of seeing sharply-contrasted approaches to testing.
In one job, there was no automated testing. "Testing" consisted of poking around in the application, trying whatever popped in your head, to see if it broke. Needless to say, it was easy for flat-out-broken code to reach our production server.
In my current job, there is lots of automated testing, and a full CI-system. Now when code gets broken, it is immediately obvious. Not only that, but as I work, the tests really document what features are working in my code, and what haven't yet. It gives me great confidence to be able to add new features, knowing that if I break existing ones, it won't go unnoticed.
So, to me, it depends not so much on the size of the team, but the size of the application. Can you keep track of every part of the application? Every requirement? Every test you need to run to make sure the application is working? What does it even mean to say that the application is "working", if you don't have tests to prove it?
Just my $0.02.
A: Tests allow you to refactor with confidence that you are not breaking the system. Writing the tests first allows the tests to define what is working behavior for the system. Any behavior that isn't defined by the test is by definition a by-product and allowed to change when refactoring. Writing tests first also drive the design in good directions. To support testability you find that you need to decouple classes, use interfaces, and follow good pattern (Inversion of Control, for instance) to make your code easily testable. If you write tests afterwards, you can't be sure that you've covered all the behavior expected of your system in the tests. You also find that some things are hard to test because of the design -- since it was likely developed without testing in mind -- and are tempted to skimp on or omit tests.
I generally work solo and mostly do TDD -- the cases where I don't are simply where I fail to live up to my practices or haven't yet found a good way that works for me to do TDD, for example with web interfaces.
A:
I'm not about to blindly follow anything.
That's the right attitude. I use TDD all the time, but I don't adhere to it as strictly as some.
The best argument (in my mind) in favor of TDD is that you get a set of tests you can run when you finally get to the refactoring and maintenance phases of your project. If this is your only reason for using TDD, then you can write the tests any time you want, instead of blindly following the methodology.
The other reason I use TDD is that writing tests gets me thinking about my API up front. I'm forced to think about how I'm going to use a class before I write it. Getting my head into the project at this high level works for me. There are other ways to do this, and if you've found other methods (there are plenty) to do the same thing, then I'd say keep doing what works for you.
A: I find it even more useful when flying solo. With nobody around to bounce ideas off of and nobody around to perform peer reviews, you will need some assurance that your code is solid. TDD/BDD will provide that assurance for you. TDD is a bit controversial, though. Others may completely disagree with what I'm saying.
EDIT: Might I add that if done right, you can actually generate specifications for your software at the same time you write tests. This is a great side effect of BDD. You can make yourself look like a super developer if you're cranking out solid code along with specs, all on your own.
A: Ok my turn... I'd do TDD even on my own (for non-spike/experimental/prototype code) because
*
*Think before you leap: forces me to think what I want to get done before i start cranking out code. What am I trying to accomplish here.. 'If I assume I already had this piece.. how would I expect it to work?' Encourages interface-in design of objects.
*Easier to change: I can make modifications with confidence.. 'I didn't break anything in step1-10 when i changed step5.' Regression testing is instantaneous
*Better designs emerge: I've found better designs emerging without me investing effort in a design activity. test-first + Refactoring lead to loosely coupled, minimal classes with minimal methods.. no overengineering.. no YAGNI code. The classes have better public interfaces, small methods and are more readable. This is kind of a zen thing.. you only notice you got it when you 'get it'.
*The debugger is not my crutch anymore : I know what my program does.. without having to spend hours stepping thru my own code. Nowadays If I spend more than 10 mins with the debugger.. mental alarms start ringing.
*Helps me go home on time I have noticed a marked decrease in the number of bugs in my code since TDD.. even if the assert is like a Console trace and not a xUnit type AT.
*Productivity / Flow: it helps me to identify the next discrete baby-step that will take me towards done... keeps the snowball rolling. TDD helps me get into a rhythm (or what XPers call flow) quicker. I get a bigger chunk of quality work done per unit time than before. The red-green-refactor cycle turns into... a kind of perpetual motion machine.
*I can prove that my code works at the touch of a button
*Practice makes perfect I find myself learning & spotting dragons faster.. with more TDD time under my belt. Maybe dissonance.. but I feel that TDD has made me a better programmer even when I don't go test first. Spotting refactoring opportunities has become second nature...
I'll update if I think of any more.. this is what i came up with in the last 2 mins of reflection.
A: TDD is not about testing, it's about writing code. As such, it provides a lot of benefits even to a single developer. For many developers it is a mindshift to write more robust code. For example, how often do you think "Now how can this code fail?" after writing code without TDD? For many developers, the answer to that question is never. For TDD practitioners it shifts the mindset to doing things like checking if objects or strings are null before doing something with them, because you are writing tests specifically to do that (break the code).
Another major reason is change. Anytime you deal with a customer, they never seem to be able to make up their minds. The only constant is change. TDD helps as a "safety net" to find all the other areas that could break. Even on small projects this can keep you from burning up precious time in the debugger.
I could go on and on, but I think saying that TDD is more about writing code than anything else should be enough to justify its use as a sole developer.
A: I tend to agree with the validity of your point about the overhead of TDD for 'one developer' or 'hobby' projects not justifying the expenses.
You have to consider however that most best practices are relevant and useful if they are consistently applied for a long period of time.
For example TDD is saving you testing/bugfixing time in a long run, not within 5 minutes after you've created the first unit test.
You're a contract programmer, which means that you will leave your current project when it is finished and switch to something else, most likely in another company. Your current client will have to maintain and support your application. If you do not leave the support team a good framework to work with, they will be stuck. TDD will help the project be sustainable. It will increase the stability of the code base so other people with less experience will not be able to do too much damage trying to change it.
The same applies to hobby projects. You may get tired of one and want to pass it to someone else. You might become commercially successful (think Craigslist) and have 5 more people working besides you.
Investment in proper process always pays off, even if the only gain is experience. But most of the time you will be grateful that, when you started a new project, you decided to do it properly.
You have to consider OTHER people when doing something. You have to think ahead, plan for growth, plan for sustainability.
If you don't want to do that - stick to the cowboy coding, it's much simpler this way.
P.S. The same thing applies to other practices:
*
*If you don't comment your code and you have ideal memory you'll be fine but someone else reading your code will not.
*If you don't document your discussions with the customer somebody else will not know anything about a crucial decision you made
etc ad infinitum
A: I no longer refactor anything without a reasonable set of unit tests.
I don't do full-on TDD with unit tests first and code second. I do CALTAL -- Code A Little, Test A Little -- development. Generally, code goes first, but not always.
When I find that I've got to refactor, I make sure I've got enough tests and then I hack away at the structure with complete confidence that I don't have to keep the entire old-architecture-becomes-new-architecture plan in my head. I just have to get the tests to pass again.
I refactor the important bits. Get the existing suite of tests to pass.
Then I realize I forgot something, and I'm back to CALTAL development on the new stuff.
Then I see things I forgot to delete -- but are they really unused everywhere? Delete 'em and see what fails in the testing.
Just yesterday -- part way through a big refactoring -- I realized that I still didn't have the exact right design. But the tests still had to pass, so I was free to refactor my refactoring before I was even done with the first refactoring. (whew!) And it all worked nicely because I had a set of tests to validate the changes against.
For flying solo TDD is my copilot.
A: TDD lets me more clearly define the problem in my head. That helps me focus on implementing just the functionality that is required, and nothing more. It also helps me create a better API, because I'm writing a "client" before I write the code itself. I can also refactor without having to worry about breaking anything.
A: I'm going to answer this question quite quickly, and hopefully you will start to see some of the reasoning, even if you still disagree. :)
If you are lucky enough to be on a long-running project, then there will be times when you want to, for example, write your data tier first, then maybe the business tier, before moving on up the stack. If your client then makes a requirement change that requires re-work on your data layer, a set of unit tests on the data layer will ensure that your methods don't fail in undesirable ways (assuming you update the tests to reflect the new requirements). However, you are likely to be calling the data layer method from the business layer as well, and possibly in several places.
Let's assume you have 3 calls to a method in the business layer, but you only modify 2. In the third method, you may still be getting data back from your data layer that appears to be valid, but may break some of the assumptions you coded months before. Unit tests at this level (and above) should have been designed to spot broken assumptions, and in failing they should highlight to you that there is a section of code that needs to be revisited.
I'm hoping that this very simplistic example will be enough to get you thinking about TDD a little more, and that it might create a spark that makes you consider using it. Of course, if you still don't see the point, and you are confident in your own abilities to keep track of many thousands of lines of code, then I have no place to tell you you should start TDD.
A: The point about writing the tests first is that it enforces the requirements and design decisions you are making. When I mod the code, I want to make sure those are still enforced and it is easy enough to "break" something without getting a compiler or run-time error.
I have a test-first approach because I want to have a high degree of confidence in my code. Granted, the tests need to be good tests or they don't enforce anything.
I've got some pretty large code bases that I work on and there is a lot of non-trivial stuff going on. It is easy enough to make changes that ripple and suddenly X happens when X should never happen. My tests have saved me on several occasions from making a critical (but subtle) error that might have gone unnoticed by human testers.
When the tests do fail, they are opportunities to look at them and the production code and make sure that it is correct. Sometimes the design changes and the tests will need to be modified. Sometimes I'll write something that passes 99 out of 100 tests. That 1 test that didn't pass is like a co-worker reviewing my code (in a sense) to make sure I'm still building what I'm supposed to be building.
A: I feel that as a solo developer on a project, especially a larger one, you tend to be spread pretty thin.
You are in the middle of a large refactoring when all of a sudden a couple of critical bugs are detected that for some reason did not show up during pre-release testing. In this case you have to drop everything and fix them and after having spent two weeks tearing your hair out you can finally get back to whatever you were doing before.
A week later one of your largest customers realizes that they absolutely must have this cool new shiny feature or otherwise they won't place the order for those 1M units they should have already ordered a month ago.
Now, three months later you don't even remember why you started refactoring in the first place let alone what the code you are refactoring was supposed to do. Thank god you did a good job writing those unit tests because at least they tell you that your refactored code is still doing what it was supposed to do.
Lather, rinse, repeat.
..story of my life for the past 6 months. :-/
A: Sole developer should use TDD on his project (track record does not matter), since eventually this project could be passed to some other developer. Or more developers could be brought in.
New people will have an extremely hard time working with the code without the tests. They will break things.
A: Does your client own the source code when you deliver the product? If you can convince them that delivering the product with unit tests adds value, then you are up-selling your services and delivering a better product. From the client's perspective, test coverage not only ensures quality, it allows future maintainers to understand the code much more readily since the tests isolate functionality from the UI.
A: I think TDD as a methodology is not just about "having tests when making changes", and thus does not depend on team or project size. It's about noting one's expectations about what a piece of code/an application does BEFORE one starts to really think about HOW the noted behaviour is implemented. The main focus of TDD is not only having tests in place for written code but writing less code, because you just do what makes the test green (and refactor later).
If you're like me and find it quite hard to think about what a part/the whole application does WITHOUT thinking about how to implement it, I think it's fine to write your test after your code and thus let the code "drive" the tests.
If your question isn't so much about test-first (TDD) versus test-after (good coding?), I think testing should be standard practice for any developer, whether alone or in a big team, who creates code that stays in production longer than three months. In my experience that's the time span after which even the original author has to think hard about what those twenty lines of complex, super-optimized, but sparsely documented code really do. If you've got tests (which cover all paths through the code), there's less to think about -- and less to err about, even years later...
A: Here are a few memes and my responses:
"TDD made me think about how it would fail, which made me a better programmer"
Given enough experience, being highly concerned with failure modes should naturally become part of your process anyway.
"Applications need to work properly"
This assumes you are able to test absolutely everything. You're not going to be any better at covering all possible tests correctly than you were at writing the functional code correctly in the first place. "Applications need to work better" is a much better argument. I agree with that, but it's idealistic and not quite tangible enough to motivate as much as I wish it would. Metrics/anecdotes would be great here.
"Worked great for my <library component X>"
I said in the question I saw value in these cases, but thanks for the anecdote.
"Think of the next developer"
This is probably one of the best arguments to me. However, it is quite likely that the next developer wouldn't practice TDD either, and it would therefore be a waste or possibly even a burden in that case. Back-door evangelism is what it amounts to there. I'm quite sure a TDD developer would really appreciate it, though.
How much are you going to appreciate projects done in deprecated must-do methodologies when you inherit one? RUP, anyone? Think of what TDD means to next developer if TDD isn't as great as everyone thinks it is.
"Refactoring is a lot easier"
Refactoring is a skill like any other, and iterative development certainly requires this skill. I tend to throw away considerable amounts of code if I think the new design will save time in the long run, and it feels like there would be an awful number of tests thrown away too. Which is more efficient? I don't know.
...
I would probably recommend some level of TDD to anyone new... but I'm still having trouble with the benefits for anyone who's been around the block a few times already. I will probably start adding automated tests to libraries. It's possible that after doing that, I'll see more value in doing it generally.
A: Motivated self interest.
In my case, sole developer translates to small business owner. I've written a reasonable amount of library code to (ostensibly) make my life easier. A lot of these routines and classes aren't rocket science, so I can be pretty sure they work properly (at least in most cases) by reviewing the code, doing some spot testing, and debugging into the methods to make sure they behave the way I think they do. Brute force, if you will. Life is good.
Over time, this library grows and gets used in more projects for different customers. Testing gets more time consuming. Especially cases where I'm (hopefully) fixing bugs and (even more hopefully) not breaking something else. And this isn't just for bugs in my code. I have to be careful adding functionality (customers keep asking for more "stuff") or making sure code still works when moved to a new version of my compiler (Delphi!), third party code, runtime environment or operating system.
Taken to the extreme, I could spend more time reviewing old code than working on new (read: billable) projects. Think of it as the angle of repose of software (how high can you stack untested software before it falls over :).
Techniques like TDD give me methods and classes that are more thoughtfully designed, more thoroughly tested (before the customer gets them) and need less maintenance going forward.
Ultimately, it translates to less time doing maintenance and more time to spend doing things that are more profitable, more interesting (almost anything) and more important (like family).
A: We are all developers with a good track record. After all, we are all reading Stack Overflow. And many of us use TDD, and perhaps those people have a great track record. I get hired because people want someone who writes great test automation and can teach that to others. When working alone, I do TDD on my coding projects at home because I found that if I don't, I spend time doing manual testing or even debugging, and who needs that. (Perhaps those people have only good track records. I don't know.)
When it comes to being a good automobile driver, everyone believes they are a “good driver.” This is a cognitive bias all drivers have. Programmers have their own biases. The reasons developers such as the OP don’t do TDD are covered in this Agile Thoughts podcast series. The podcast archive also has content on test automation concepts such as the test pyramid, and an intro about what is TDD and why write tests first starting with episode 9 in the podcast archive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: Diagnosing Bad OutputPaths: "The OutputPath property is not set for this project" (in the wonderful world of web deployment projects) Starting with the error:
Error 81 The OutputPath property is
not set for this project. Please
check to make sure that you have
specified a valid
Configuration/Platform combination.
Configuration='Staging'
Platform='AnyCPU' C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets 490 9 crm_deploy
We have a VS 2005 built website with a web deployment project. In this WDP there are two configurations, Staging and Production.
In the .sln for the whole website, we have two configurations with the same name, which are designed to trigger the corresponding deployment projects.
Production builds fine, but Staging returns the error above. I tried updating the .wdproj and .sln so that Staging matched production; I tried copying all the settings from Production to a new configuration (StagingX) by updating these same two files.
In each case, Production still works, but any new configurations I create produce the error above.
I've done a find across the whole project for the word Production and tried searching Googlespace and haven't found anything that explains the problem. WDPs are huge migraine creation devices. Any ideas? Thanks!
(I'll add files as requested)
A: Apparently a separate library without a Staging build can break it, even if the solution's Staging configuration is set to use the library's Debug build?
Oh well...
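To spell that out: every project the Staging solution configuration pulls in needs its own Staging (or explicitly mapped) build configuration in its project file, or MSBuild can't resolve an OutputPath for it. A rough sketch of the kind of PropertyGroup the library's .csproj would need (the path and property values here are assumptions; mirror your working Debug/Production group rather than copying these):

```xml
<!-- Sketch only: give the library project a Staging configuration so MSBuild
     can resolve an OutputPath for it. Copy the values from the group that
     already builds (e.g. Production), then adjust the output folder. -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Staging|AnyCPU' ">
  <OutputPath>bin\Staging\</OutputPath>
  <DefineConstants>TRACE</DefineConstants>
  <Optimize>true</Optimize>
</PropertyGroup>
```

Alternatively, in Configuration Manager you can map the solution's Staging configuration to the library's existing Debug configuration, but as the answer above suggests, that mapping alone doesn't always satisfy the OutputPath check.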
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Which Ajax framework to build GUI of web applications that use struts? Which Ajax framework/toolkit can you recommend for building the GUI of web applications that are using struts?
A: I'd say that your AJAX/javascript library choice should depend less on how your backend is implemented and more on what your UI is going to be.
If your site is mostly going to be static web pages with some AJAX thrown in then it would be better to use a lighter javascript framework like jquery. But if you are creating a UI more like a web application, where the user stays at a single page for a long time (think gmail, google calendar, etc) then it probably is better to look at Dojo, ExtJs, or GWT.
A: I suggest JQuery's UI plugin.
jQuery, prototype, Yahoo! User Interface, MooTools, dojo, and ExtJS will have you working with very solid code.
Other possibilities that I can't vouch for myself: QooxDoo
A: Struts already comes with the Dojo framework. You can set your application theme to ajax and you will be able to use it.
Give a look at struts.ui.theme property at struts.properties file!
A good article for you to read is this one at JavaWorld
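For reference, a minimal sketch of what that setting looks like (the ajax theme name is what the Struts 2 Dojo integration ships with; verify it against your Struts version):

```properties
# struts.properties - sketch, assuming Struts 2 with the bundled Dojo tags.
# The default theme is xhtml; switching it enables the AJAX-flavored tags.
struts.ui.theme=ajax
```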
A: I'd go with ExtJS (http://extjs.com/).
It has a very good component and event model and very good support.
It's AJAX at its best ;)
You can use actions with a JSON response to provide data to the Ext frontend. You don't even need to mix your client frontend with the server frontend (via JSPX / tags).
Some see the fact that you have to develop the client frontend separated from the server frontend as a disadvantage of Ext. I think that it's not, as I have switched web applications built with Ext from a java backend to a .Net backend without changing a line of client frontend code, be it HTML or Javascript.
Have a look at the Ext examples and docs before you decide.
A: It's already been mentioned, but I will say it again: jQuery. The strength of jQuery is not just the ability to make a simple AJAX call or the great UI extension library. In my humble opinion, the best part of jQuery is how you can easily handle the return data. jQuery easily allows you to navigate XML just like you can HTML.
For instance, say you get back an xml request:
(: Borrowed this XML document from a MarkLogic training file :)
<author name="Boynton">
<book>
<title>Barnyard Dance!</title>
<year>1993</year>
</book>
<book>
<title>Hippos Go Berserk!</title>
<year>1996</year>
</book>
</author>
jQuery can retrieve all the year elements from that response (here, xml) with this simple command:
var years = $(xml).find("year");
// Ok, let's act on each element instead
$(xml).find("year").each(function(index, element){
    alert("Element " + index + " = " + $(element).text());
});
/* OUTPUT
Element 0 = 1993
Element 1 = 1996
*/
Try doing that in normal Javascript!
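(For contrast, a rough plain-JavaScript sketch, not from the original answer: without a selector engine you end up walking the markup yourself, here crudely with a regular expression over the raw XML text. In a real browser you would more likely reach for DOMParser, but the verbosity point stands either way.)

```javascript
// Rough sketch: extracting the <year> values without jQuery.
// A regex over raw XML is fragile (no namespaces, no nesting awareness),
// which is exactly why the selector syntax above is so pleasant.
var xml = '<author name="Boynton">' +
    '<book><title>Barnyard Dance!</title><year>1993</year></book>' +
    '<book><title>Hippos Go Berserk!</title><year>1996</year></book>' +
    '</author>';

var years = [];
var re = /<year>([^<]*)<\/year>/g;
var match;
while ((match = re.exec(xml)) !== null) {
    years.push(match[1]);
}
// years now holds the two year strings
```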
Furthermore, the way jQuery is designed is simply beautiful. The founders encourage extending the framework: the ability to create plugins is built into the core of the library (sure, you can always just edit the Javascript file, but what happens when there's a critical update to the framework?).
Another great reason to use jQuery is its compatibility with other Javascript frameworks. By default, both Prototype and jQuery use the "$" sign to refer to the main library object. Only jQuery adds the functionality (jQuery.noConflict()) to release this reference so it can coexist with Prototype.
jQuery makes JavaScript enjoyable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/157737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |