Q: Tree (directed acyclic graph) implementation I require a tree / directed acyclic graph implementation something like this:
public class TreeNode<K, V> {
    private K key;                          // 'key' for this node, always present
    private V value;                        // 'value' for this node, doesn't have to be set
    private TreeNode<K, V> parent;
    private Set<TreeNode<K, V>> children;
}
*
*There is no sorting of any kind.
*The TreeNode is just a wrapper around the key and a possible value (nodes don't have to have values set).
*I require links to both the parent and the children.
Is there anything out there in the standard APIs or Commons etc that will do this for me?
I don't mind writing it myself (and I'm certainly not asking you folks to) I just don't want to re-invent the wheel.
A: There's also http://www.jgrapht.org, which has software licensed under the LGPL. I have to warn you though, implementing your own is fraught with danger. If you plan on using recursion on your structure (which is a graph), you'll have to ensure that it's acyclic, or you'll run into infinite loop problems. Better to use third party code where they've already dealt with the issues.
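To make the infinite-loop risk concrete, here is a minimal cycle-tolerant depth-first traversal in Java (my sketch, not from the original answer; it assumes the TreeNode from the question plus a hypothetical getChildren() accessor, and java.util.Set / java.util.HashSet imports):
static <K, V> void traverse(TreeNode<K, V> node, Set<TreeNode<K, V>> visited) {
    // The visited set guarantees each node is processed at most once,
    // so a stray cycle or a shared child degrades into a no-op instead
    // of infinite recursion.
    if (!visited.add(node)) {
        return; // already seen
    }
    // ... process node here ...
    for (TreeNode<K, V> child : node.getChildren()) {
        traverse(child, visited);
    }
}
You would call it as traverse(root, new HashSet<TreeNode<K, V>>()).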
A: There doesn't seem to be anything of the kind. I asked a similar question last week and ended up implementing my own tree. My implementation was very similar to what you're proposing:
public class TreeNode<T>
{
    private LinkedList<TreeNode<T>> children = new LinkedList<TreeNode<T>>();

    public T value { get; set; }

    public TreeNode(T value)
    {
        this.value = value;
    }

    public LinkedList<TreeNode<T>> GetChildren()
    {
        return children;
    }
}
You will have to add a link back to the parent(s).
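For example, a minimal Java sketch of wiring both directions of the link (my illustration, not from the original answer), using the fields from the question:
import java.util.HashSet;
import java.util.Set;

public class TreeNode<K, V> {
    private final K key;
    private V value;                 // optional; may stay null
    private TreeNode<K, V> parent;   // null for the root
    private final Set<TreeNode<K, V>> children = new HashSet<TreeNode<K, V>>();

    public TreeNode(K key) {
        this.key = key;
    }

    // Wire up both directions of the link in one place, so that
    // parent and children can never get out of sync.
    public void addChild(TreeNode<K, V> child) {
        if (child.parent != null) {
            child.parent.children.remove(child); // detach from old parent
        }
        child.parent = this;
        children.add(child);
    }
}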
A: I'd say it's better to roll out your own implementation (besides, you've already got the interface nicely thought out). What are the operations you are planning to perform on this tree anyway? You'd probably want to design your API around the things you want... direct access to individual nodes by key/value? types of traversals? add/remove operations?
A: If you're looking for additional graph capabilities, JDigraph's Digraph class should fit the bill.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How to use apache's mod_rewrite rewriterule without changing relative paths I've the following rewrite rule in .htaccess:
RewriteRule ^groups/([^/\.]+)/?$ groupdetail.php?gname=$1 [L,NC]
This takes something like www.example.com/groups/groupname and calls www.example.com/groupdetail.php?gname=groupname. And it works just fine.
But all the relative links on groupdetail.php use groups/ as the relative path, and I don't want them to. How do I avoid this?
For example, when a user clicks on a link <a href="link.php"> on groupdetail.php?gname=groupname, he's taken to www.example.com/groups/link.php. I want to take the user to www.example.com/link.php.
Obviously, I want the URL shown to the user to look like "www.example.com/groups/groupname", so I don't want to use [R]/redirect.
A: If, like me, you have hundreds of relative links in the page, insert a <base href=""> in the <head> with an absolute path (you could use a relative one too). You'll also need to make the paths to .js files in the <head> absolute, because IE and Firefox deal with the base href differently. I agree it is an annoying issue.
A: Relative links are resolved by the browser, not the server, so there is nothing you can do with mod_rewrite.
Either use relative links that go up the hierarchy (../link.php) or use absolute links.
A: If you do not want to have absolute links or use <base> because you are going to move the page around, you can have the base generated by PHP, as follows:
echo '<base href="http://'.$_SERVER['SERVER_NAME'].str_replace("index.php","",$_SERVER['PHP_SELF']).'" />';
A: If you change the rewite rule to do a force redirect (add the [R] option), then the browser will be using the /groupdetail.php URL and the relative links will work fine. However, that adds one redirect and makes the URLs less pretty.
RewriteRule ^groups/([^/.]+)/?$ groupdetail.php?gname=$1 [L,NC,R]
A: You can use the BASE tag, if you don't want to use absolute paths:
http://www.w3schools.com/tags/tag_base.asp
A: Hop's answer is correct. The browser sees www.example.com/groups/groupname as the address, so considers that /groups is the current directory. So, any links like <a href=link.php> are assumed to be in the /groups folder.
When the user moves his mouse over the link, he'll see www.example.com/groups/link.php as the link address.
The solution is to use absolute links -- just add a slash before the href:
<a href=/link.php>
The user will then see www.example.com/link.php as the url.
That said, it seems from your question that you are using relative links on purpose... do you have a reason not to use absolute links?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Can I make Hibernate transparently avoid string duplication in the database? I have a Java program that uses Hibernate and MySQL to store a lot of tracing data about the use of the Eclipse IDE. This data contains a lot of strings such as method names, directories, perspective name, etc.
For example, an event object (which is then reflected in a record) can specify the source file and the current method, the user name, etc. Obviously, string data can repeat itself.
As long as it's in memory, much of it is interned so all repeated string instances point to the same object (I make sure of that). However, with @Basic (I use annotations), Hibernate maps it to a VARCHAR(255), which means a lot of wasted space.
If I was coding the SQL myself, I could have replaced the VARCHAR with an index to a manually-managed string lookup table and saved the space (at the cost of extra lookups).
Is there some way to get Hibernate to do this for me? I'm willing to pay the performance hit for the space.
A: Building on sblundy's answer, you could probably get away with something like:
class Foo {
    // client code uses this to get the value... ignored by Hibernate
    @Transient
    public String getString() {
        return getStringHolder().getString();
    }

    public StringHolder getStringHolder() {...}
}
At least then the client code wouldn't necessarily have to be aware of the change. I don't know if it'd be worth the trouble, though.
A: I suspect you'll need a string holder object and then make sure all these objects refer to that.
class StringHolder {
    private Long id;
    private String string;

    public StringHolder() { /* Hibernate requires a no-arg constructor */ }

    public StringHolder(String string) {
        this.string = string;
    }

    public String getString() {
        return this.string;
    }
}
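To complete the picture, here is one possible sketch (mine, not from the original answer) of how the tracing entity could reference the shared row with JPA annotations, so the event table stores an integer foreign key instead of a repeated VARCHAR; TraceEvent is a hypothetical entity name:
@Entity
public class StringHolder {
    @Id @GeneratedValue
    private Long id;

    @Column(unique = true) // one row per distinct string
    private String string;

    // constructors and getString() as above
}

@Entity
public class TraceEvent {
    @Id @GeneratedValue
    private Long id;

    // Stored as a foreign key to the shared string row,
    // not as a repeated VARCHAR(255).
    @ManyToOne
    private StringHolder methodName;
}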
A: I believe you want to look at custom value types.
This should allow you to store your strings as integer ID in the database. Of course, you will have to provide the mapping/lookup yourself.
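As a rough illustration of what such a type could look like against the Hibernate 3 UserType interface (my untested sketch, not the library's recommended recipe; StringLookup is a hypothetical helper that interns strings in a lookup table and caches the id/string mapping in both directions):
import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import org.hibernate.HibernateException;
import org.hibernate.usertype.UserType;

// Maps a String property to an INTEGER column holding an id
// into a manually managed string lookup table.
public class InternedStringType implements UserType {
    public int[] sqlTypes() { return new int[] { Types.INTEGER }; }
    public Class returnedClass() { return String.class; }

    public Object nullSafeGet(ResultSet rs, String[] names, Object owner)
            throws HibernateException, SQLException {
        int id = rs.getInt(names[0]);
        return rs.wasNull() ? null : StringLookup.stringFor(id); // id -> string
    }

    public void nullSafeSet(PreparedStatement st, Object value, int index)
            throws HibernateException, SQLException {
        if (value == null) {
            st.setNull(index, Types.INTEGER);
        } else {
            st.setInt(index, StringLookup.idFor((String) value)); // string -> id, inserting if new
        }
    }

    // Strings are immutable, so the remaining methods are trivial.
    public boolean isMutable() { return false; }
    public Object deepCopy(Object value) { return value; }
    public Serializable disassemble(Object value) { return (Serializable) value; }
    public Object assemble(Serializable cached, Object owner) { return cached; }
    public Object replace(Object original, Object target, Object owner) { return original; }
    public boolean equals(Object x, Object y) { return x == null ? y == null : x.equals(y); }
    public int hashCode(Object x) { return x.hashCode(); }
}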
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: jQuery jWYSIWYG inside jquery.ui.tab So I managed to get a page with an Ajax ui.tab, and in one of the tabs I put the jWYSIWYG textarea plugin. Unfortunately, I can only see a normal textarea.
However, accessing the page directly (ie. not using the ajax tab) works.
What happened?
P.S.: I'm new to jQuery / JavaScript / AJAX / CSS (if that even matters).
A: I expect the problem is that the new html is being inserted into the DOM when the ajax call completes, but isn't being hooked up to anything with jQuery.
Normally you attach all your jquery goodness in a document ready or onload event, when the page initially loads. However, your textarea is not on the page when the page first loads.
When your ajax call returns, the textarea is added to the page. At this point you need to call whatever javascript is needed to hook it up to a jWYSIWYG control.
There is a new(ish) feature in jQuery that means you can still set everything up in document ready (called live), but you may find it simpler just to call the hookup code in your ajax success handler.
A: Your answer would be best served by posting a link to the HTML file (and any custom JavaScript files of your own) in question. If the file isn't hosted, you can paste the source code at http://pastebin.com/, and post the link here.
A: My solution is fix dialog width and height blind with event open dialog.
Blind event close dialog by remove div.wysiwyg which auto-create by plugin.
$('#dialogContent').bind('dialogopen', function(event, ui) {
    $('textarea').wysiwyg({
        css: burl + 'public/css/text.css',
        controls: {
            separator00: { visible: false },
            separator01: { visible: false },
            separator02: { visible: false },
            separator03: { visible: false },
            separator04: { visible: false },
            separator05: { visible: false },
            separator06: { visible: false },
            separator07: { visible: false },
            separator08: { visible: false },
            separator09: { visible: false },
            insertOrderedList: { visible: true },
            insertUnorderedList: { visible: true },
            undo: { visible: true },
            redo: { visible: true },
            justifyLeft: { visible: true },
            justifyCenter: { visible: true },
            justifyFull: { visible: true },
            subscript: { visible: false },
            superscript: { visible: false },
            underline: { visible: true },
            increaseFontSize: { visible: false },
            decreaseFontSize: { visible: false },
            removeFormat: { visible: false },
            h1mozilla: { visible: false },
            h2mozilla: { visible: false },
            h3mozilla: { visible: false },
            h1: { visible: false },
            h2: { visible: false },
            h3: { visible: false }
        }
    });
    $('.wysiwyg').css({
        'width': '350px',
        'height': '180px'
    });
    $('.wysiwyg iframe').css({
        'width': '350px',
        'height': '150px'
    });
}).bind('dialogbeforeclose', function(event, ui) {
    $('.wysiwyg').remove();
});
A: My problem was that the formatting for the jWYSIWYG box would show up, but I couldn't get a cursor in it to edit/add text.
What worked for me was to load the jWYSIWYG textbox after the Ajax load, on the original page where the tab is defined, using a tab event.
$("#example").tabs();
$('#example').bind('tabsshow', function(event, ui) {
if (ui.tab.id == "alinkid") {
$('#textfield').wysiwyg();
}
});
Then, in the HTML, for the tabs:
<div id="example">
<ul>
<li><a href="target" id="alinkid">Target</a></li>
</ul>
</div>
On the target page, you'd have a normal textarea with id 'textfield'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Python vs. Ruby for metaprogramming I'm currently primarily a D programmer and am looking to add another language to my toolbox, preferably one that supports the metaprogramming hacks that just can't be done in a statically compiled language like D.
I've read up on Lisp a little and I would love to find a language that allows some of the cool stuff that Lisp does, but without the strange syntax, etc. of Lisp. I don't want to start a language flame war, and I'm sure both Ruby and Python have their tradeoffs, so I'll list what's important to me personally. Please tell me whether Ruby, Python, or some other language would be best for me.
Important:
*
*Good metaprogramming. Ability to create classes, methods, functions, etc. at runtime. Preferably, minimal distinction between code and data, Lisp style.
*Nice, clean, sane syntax and consistent, intuitive semantics. Basically a well thought-out, fun to use, modern language.
*Multiple paradigms. No one paradigm is right for every project, or even every small subproblem within a project.
*An interesting language that actually affects the way one thinks about programming.
Somewhat important:
*
*Performance. It would be nice if performance was decent, but when performance is a real priority, I'll use D instead.
*Well-documented.
Not important:
*
*Community size, library availability, etc. None of these are characteristics of the language itself, and all can change very quickly.
*Job availability. I am not a full-time, professional programmer. I am a grad student and programming is tangentially relevant to my research.
*Any features that are primarily designed with very large projects worked on by a million code monkeys in mind.
A: Compare code examples that do the same thing (join the non-empty descriptions of items from a list myList with newlines) in different languages (arranged in reverse alphabetical order):
Ruby:
myList.collect { |f| f.description }.select { |d| d != "" }.join("\n")
Or
myList.map(&:description).reject(&:empty?).join("\n")
Python:
descriptions = (f.description() for f in mylist)
"\n".join(filter(len, descriptions))
Or
"\n".join(f.description() for f in mylist if f.description())
Perl:
join "\n", grep { $_ } map { $_->description } @myList;
Or
join "\n", grep /./, map { $_->description } @myList;
Javascript:
myList.map(function(e) e.description())
.filter(function(e) e).join("\n")
Io:
myList collect(description) select(!="") join("\n")
Here's an Io guide.
A:
I've read up on Lisp a little and I would love to find a language that allows some of the cool stuff that Lisp does, but without the strange syntax, etc. of Lisp.
Wouldn't we all.
minimal distinction between code and data, Lisp style
Sadly, the minimal distinction between code and data and "strange" syntax are consequences of each other.
If you want easy-to-read syntax, you have Python. However, the code is not represented in any of the commonly-used built-in data structures. It fails—as most languages do—in item #1 of your 'important' list. That makes it difficult to provide useful help.
You can't have it all. Remember, you aren't the first to have this thought. If something like your ideal language existed, we'd all be using it. Since the real world falls short of your ideals, you'll have to re-prioritize your wish list. The "important" section has to be rearranged to identify what's really important to you.
A: Ruby would be better than Lisp in terms of being "mainstream" (whatever that really means; one realistic concern is how easy it would be to find answers to your Lisp programming questions if you were to go with it). In any case, I found Ruby very easy to pick up. In the same amount of time that I had spent first learning Python (or other languages for that matter), I was soon writing better code much more efficiently than I ever had before. That's just one person's opinion, though; take it with a grain of salt, I guess. I know much more about Ruby at this point than I do Python or Lisp, but you should know that I was a Python person for quite a while before I switched.
Lisp is definitely quite cool and worth looking into; as you said, the size of community, etc. can change quite quickly. That being said, the size itself isn't as important as the quality of the community. For example, the #ruby-lang channel is still filled with some incredibly smart people. Lisp seems to attract some really smart people too. I can't speak much about the Python community as I don't have a lot of firsthand experience, but it seems to be "too big" sometimes. (I remember people being quite rude on their IRC channel, and from what I've heard from friends that are really into Python, that seems to be the rule rather than the exception.)
Anyway, some resources that you might find useful are:
1) The Pragmatic Programmers Ruby Metaprogramming series (http://www.pragprog.com/screencasts/v-dtrubyom/the-ruby-object-model-and-metaprogramming) -- not free, but the later episodes are quite intriguing. (The code is free, if you want to download it and see what you'd be learning about.)
2) On Lisp by Paul Graham (http://www.paulgraham.com/onlisp.html). It's a little old, but it's a classic (and downloadable for free).
A: @Jason I respectfully disagree. There are differences that make Ruby superior to Python for metaprogramming - both philosophical and pragmatic. For starters, Ruby gets inheritance right with Single Inheritance and Mixins. And when it comes to metaprogramming you simply need to understand that it's all about the self. The canonical difference here is that in Ruby you have access to the self object at runtime - in Python you do not!
Unlike Python, in Ruby there is no separate compile or runtime phase. In Ruby, every line of code is executed against a particular self object. In Ruby every class inherits from both object and a hidden metaclass. This makes for some interesting dynamics:
class Ninja
  def rank
    puts "Orange Clan"
  end

  self.name #=> "Ninja"
end
Using self.name accesses the Ninja class's metaclass name method to return the class name of Ninja. Does metaprogramming flower so beautifully in Python? I sincerely doubt it!
A: I am using Python for many projects and I think Python does provide all the features you asked for.
important:
*
*Metaprogramming: Python supports metaclasses and runtime class/method generation, etc.
*Syntax: Well, that's somewhat subjective. I like Python's syntax for its simplicity, but some people complain that Python is whitespace-sensitive.
*Paradigms: Python supports procedural, object-oriented and basic functional programming.
*I think Python has a very practically oriented style; it was very inspiring for me.
Somewhat important:
*
*Performance: Well, it's a scripting language. But writing C extensions for Python is a common optimization practice.
*Documentation: I cannot complain. It's not as detailed as what you may know from Java, but it's good enough.
As you are a grad student, you may want to read this paper claiming that Python is all a scientist needs.
Unfortunately I cannot compare Python to Ruby, since I never used that language.
Regards,
Dennis
A: Well, if you don't like the lisp syntax perhaps assembler is the way to go. :-)
It certainly has minimal distinction between code and data, is multi-paradigm (or maybe that is no-paradigm) and it's a mind expanding (if tedious) experience both in terms of the learning and the tricks you can do.
A: Io satisfies all of your "Important" points. I don't think there's a better language out there for doing crazy meta hackery.
A:
one that supports the metaprogramming hacks that just can't be done in a statically compiled language
I would love to find a language that allows some of the cool stuff that Lisp does
Lisp can be compiled.
A: Did you try Rebol?
A: My answer would be neither. I know both languages, took a class on Ruby, and have been programming in Python for several years. Lisp is good at metaprogramming because its sole purpose is to transform lists; its own source code is just a list of tokens, so metaprogramming is natural. The three languages I like best for this type of thing are Rebol, Forth and Factor. Rebol is a very strong dialecting language which takes code from its input stream, runs an expression against it and transforms it using rules written in the language. Very expressive and extremely good at dialecting. Factor and Forth are more or less completely divorced from syntax and you program them by defining and calling words. They are generally mostly written in their own language. You don't write applications in the traditional sense; you extend the language by writing your own words to define your particular application. Factor can be especially nice as it has many features I have only seen in Smalltalk for evaluating and working with source code. A really nice workspace, interactive documents, etc.
A: There isn't really a lot to separate Python and Ruby. I'd say the Python community is larger and more mature than the Ruby community, and that's really important for me. Ruby is a more flexible language, which has positive and negative repercussions. However, I'm sure there will be plenty of people to go into detail on both these languages, so I'll throw a third option into the ring. How about JavaScript?
JavaScript was originally designed to be Scheme for the web, and it's prototype-based, which is an advantage over Python and Ruby as far as multi-paradigm and metaprogramming is concerned. The syntax isn't as nice as the other two, but it is probably the most widely deployed language in existence, and performance is getting better every day.
A: If you like the lisp-style code-is-data concept, but don't like the Lispy syntax, maybe Prolog would be a good choice.
Whether that qualifies as a "fun to use, modern language", I'll leave to others to judge. ;-)
A: I've used Python a little, but Ruby much more. However, I'd argue they both provide what you asked for.
Looking at all four of your points, you may at least want to check:
http://www.iolanguage.com/
And Mozart/Oz may be interesting for you also:
http://mozart.github.io/
Regards
Friedrich
A: Ruby is my choice after exploring Python, Smalltalk, and Ruby.
A: What about OCaml ?
OCaml features: a static type system, type inference, parametric polymorphism, tail recursion, pattern matching, first class lexical closures, functors (parametric modules), exception handling, and incremental generational automatic garbage collection.
I think that it satisfies the following:
Important:
*Nice, clean, sane syntax and consistent, intuitive semantics. Basically a well thought-out, fun to use, modern language.
*Multiple paradigms. No one paradigm is right for every project, or even every small subproblem within a project.
*An interesting language that actually affects the way one thinks about programming.
Somewhat important:
*
*Performance. It would be nice if performance was decent, but when performance is a real priority, I'll use D instead.
*Well-documented.
A: Honestly, as far as metaprogramming facilities go, Ruby and Python are a lot more similar than some of their adherents like to admit. This review of both languages offers a pretty good comparison/review:
*
*http://regebro.wordpress.com/2009/07/12/python-vs-ruby/
So, just pick one based on some criteria. Maybe you like Rails and want to study that code. Maybe SciPy is your thing. Look at the ecosystem of libraries, community, etc, and pick one. You certainly won't lose out on some metaprogramming nirvana based on your choice of either.
A: Disclaimer: I only dabble in either language, but I have at least written small working programs (not just quick scripts, for which I use Perl, bash or GNU make) in both.
Ruby can be really nice for the "multiple paradigms" point 3, because it works hard to make it easy to create domain-specific languages. For example, browse online and look at a couple of bits of Ruby on Rails code, and a couple of bits of Rake code. They're both Ruby, and you can see the similarities, but they don't look like what you'd normally think of as the same language.
Python seems to me to be a bit more predictable (possibly correlated to 'clean' and 'sane' point 2), but I don't really know whether that's because of the language itself or just that it's typically used by people with different values. I have never attempted deep magic in Python. I would certainly say that both languages are well thought out.
Both score well in 1 and 4. [Edit: actually 1 is pretty arguable - there is "eval" in both, as common in interpreted languages, but they're hardly conceptually pure. You can define closures, assign methods to objects, and whatnot. Not sure whether this goes as far as you want.]
Personally I find Ruby more fun, but in part that's because it's easier to get distracted thinking of cool ways to do things. I've actually used Python more. Sometimes you don't want cool, you want to get on with it so it's done before bedtime...
Neither of them is difficult to get into, so you could just decide to do your next minor task in one, and the one after that in the other. Or pick up an introductory book on each from the library, skim-read them both and see what grabs you.
A: There's not really a huge difference between python and ruby at least at an ideological level. For the most part, they're just different flavors of the same thing. Thus, I would recommend seeing which one matches your programming style more.
A: Have you considered Smalltalk? It offers a very simple, clear and extensible syntax with reflectivity and introspection capabilities and a fully integrated development environment that takes advantage of those capabilities. Have a look at some of the work being done in Squeak Smalltalk for instance. A lot of researchers using Squeak hang out on the Squeak mailing list and #squeak on freenode, so you can get help on complex issues very easily.
Other indicators of its current relevance: it runs on any platform you'd care to name (including the iPhone); Gilad Bracha is basing his Newspeak work on Squeak; the V8 team cut their teeth on Smalltalk VMs; and Dan Ingalls and Randal Schwartz have recently returned to Smalltalk work after years in the wilderness.
Best of luck with your search - let us know what you decide in the end.
A: Lisp satisfies all your criteria, including performance, and it is the only language that doesn't have (strange) syntax. If you eschew it on such an astoundingly ill-informed/wrong-headed basis and consequently miss out on the experience of using e.g. Emacs+SLIME+CL, you'll be doing yourself a great disservice.
A: Your 4 "important" points lead to Ruby exactly, while the 2 "somewhat important" points ruled by Python. So be it.
A: You are describing Ruby.
*
*Good metaprogramming. Ability to create classes, methods, functions,
etc. at runtime. Preferably, minimal
distinction between code and data,
Lisp style.
It's very easy to extend and modify existing primitives at runtime. In Ruby everything is an object: strings, integers, even functions.
You can also construct shortcuts for syntactic sugar, for example with class_eval.
*
*Nice, clean, sane syntax and consistent, intuitive semantics.
Basically a well thought-out, fun to
use, modern language.
Ruby follows the principle of least surprise, and when comparing Ruby code with the equivalent in other languages, many people consider it more "beautiful".
*
*Multiple paradigms. No one paradigm is right for every project,
or even every small subproblem within
a project.
You can follow imperative, object-oriented, functional and reflective styles.
*
*An interesting language that actually affects the way one thinks
about programming.
That's very subjective, but from my point of view the ability to use many paradigms at the same time allows for very interesting ideas.
I've tried Python and it doesn't fit your important points.
A: For Python-style syntax, Lisp-like macros (macros that are real code) and good DSL support, see Converge.
A: I'm not sure that Python would fulfill all the things you desire (especially the point about the minimal distinction between code and data), but there is one argument in favour of Python. There is a project out there which makes it easy for you to write extensions for Python in D, so you can have the best of both worlds. http://pyd.dsource.org/celerid.html
A: If you love the rose, you have to learn to live with the thorns :)
A: Do not confuse the Ruby programming language with particular Ruby implementations and conclude that POSIX threads are not possible in Ruby.
You can simply compile with pthread support, and this was already possible at the time this thread was created, if you pardon the pun.
The answer to this question is simple: if you like Lisp, you will probably prefer Ruby. Or whatever you like.
A: I would recommend you go with Ruby.
When I first started to learn it, I found it really easy to pick up.
A: I suggest that you try out both languages and pick the one that appeals to you. Both Python and Ruby can do what you want.
Also read this thread.
A: Go with JS. Just check out AJS (Alternative JavaScript Syntax) at my GitHub (http://github.com/visionmedia); it will give you some cleaner-looking closures, etc. :D
A: Concerning your main point (metaprogramming):
Version 1.6 of Groovy has AST (Abstract Syntax Tree) programming built-in as a standard and integrated feature.
Ruby has RubyParser, but it's an add-on.
A: I recommend Haskell. It isn't much of a contender for "important" criteria 1 (unless you include Template Haskell) or 3 (Haskell is a firmly functional language, though you'd be surprised how easy it actually is to code imperatively in Haskell if you want to). You'll certainly enjoy Haskell's first-class functions, though.
Haskell is a rock-solid choice for "important" criteria 2 and 4. Especially #4: "An interesting language that actually affects the way one thinks about programming." Anyone who has learned a significant amount of Haskell can attest to its ability to expand your mind.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "90"
} |
Q: How do I restore a deleted file in CVS? I've removed a checked in file from the CVS branch, i.e.:
cvs remove -f file.txt
cvs commit
How do I restore the file?
A: I believe that:
cvs add file.txt
cvs commit file.txt
... will resurrect it from the attic.
A: The cvs add did not work for me because my cvs version on the server was very old. I have confirmed that it works fine with CVS version 1.11.22.
A: Try:
cvs add file.txt
cvs update file.txt
cvs commit file.txt
A: Given Harry's lack of success, here's a transcript of what I did to demonstrate that the above answer works (apologies in advance for its length):
C:\foo>dir
Volume in drive C is Local Disk
Volume Serial Number is 344F-1517
Directory of C:\foo
28/09/2008 05:12 PM <DIR> .
28/09/2008 05:12 PM <DIR> ..
28/09/2008 05:12 PM <DIR> CVS
28/09/2008 05:11 PM 19 file.txt
1 File(s) 19 bytes
3 Dir(s) 22,686,416,896 bytes free
C:\foo>cvs status file.txt
===================================================================
File: file.txt Status: Up-to-date
Working revision: 1.2 Sun Sep 28 07:11:58 2008
Repository revision: 1.2 C:\jason\CVSROOT/foo/file.txt,v
Sticky Tag: (none)
Sticky Date: (none)
Sticky Options: (none)
C:\foo>cvs rm -f file.txt
cvs remove: scheduling `file.txt' for removal
cvs remove: use 'cvs commit' to remove this file permanently
C:\foo>cvs commit -m "" file.txt
Removing file.txt;
C:\jason\CVSROOT/foo/file.txt,v <-- file.txt
new revision: delete; previous revision: 1.2
done
C:\foo>cvs status file.txt
===================================================================
File: no file file.txt Status: Up-to-date
Working revision: No entry for file.txt
Repository revision: 1.3 C:\jason\CVSROOT/foo/Attic/file.txt,v
C:\foo>more file.txt
Cannot access file C:\foo\file.txt
C:\foo>dir
Volume in drive C is Local Disk
Volume Serial Number is 344F-1517
Directory of C:\foo
28/09/2008 05:12 PM <DIR> .
28/09/2008 05:12 PM <DIR> ..
28/09/2008 05:12 PM <DIR> CVS
0 File(s) 0 bytes
3 Dir(s) 22,686,400,512 bytes free
C:\foo>cvs add file.txt
cvs add: Resurrecting file `file.txt' from revision 1.2.
U file.txt
cvs add: Re-adding file `file.txt' (in place of dead revision 1.3).
cvs add: use 'cvs commit' to add this file permanently
C:\foo>cvs commit -m "" file.txt
Checking in file.txt;
C:\jason\CVSROOT/foo/file.txt,v <-- file.txt
new revision: 1.4; previous revision: 1.3
done
C:\foo>more file.txt
This is a test...
C:\jason\work\dev1\nrta\foo>dir
Volume in drive C is Local Disk
Volume Serial Number is 344F-1517
Directory of C:\jason\foo
28/09/2008 05:15 PM <DIR> .
28/09/2008 05:15 PM <DIR> ..
28/09/2008 05:13 PM <DIR> CVS
28/09/2008 05:13 PM 19 file.txt
1 File(s) 19 bytes
3 Dir(s) 22,686,375,936 bytes free
Clearly he's doing the right thing, but the behaviour he's observing is different. Perhaps there's a difference due to CVS version (I'm using 1.11.22 on Windows).
A: cd into the $CVSROOT directory, then into the relevant module directory, and then into the Attic.
Edit fileOfInterest,v, change the line that says dead; to Exp;, and then move fileOfInterest,v to the directory above.
An update in the checked-out module will now restore the file.
A: I've found that you cannot use cvs add to undo a cvs remove operation that has already been committed. So this works:
$ cvs remove -f file.txt
$ cvs add file.txt
but this doesn't work:
$ cvs remove -f file.txt
$ cvs commit
$ cvs add file.txt
The simplest method I've found so far is to run cvs status file.txt to find out the revision number. Then grab the contents of the revision and add it back in:
$ cvs update -p -r rev file.txt > file.txt
$ cvs add file.txt
$ cvs commit
A: The simplest, though possibly the least elegant, way to do this is to 'cd' into the CVS directory in the same place as the removed file.
Then edit the file called "Entries".
Find the line representing your removed file. Note that there is a '-' after the /
Remove the '-', save the file and voila!
Yuck, but it works.
A: Here's what I do. I just create an empty file of the same name, then add and commit it, then retrieve the older version and re-commit that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
} |
Q: Problem with update site categories in Eclipse 3.4 I am using Eclipse 3.4 (ganymede official, not the service pack).
I have an update site that organizes features into categories; everything looks great in the editor and in the XML.
Once the site is online, accessing it in the usual manner tells me that all the features are "uncategorized". I've tried from multiple computers running 3.4 and the same problem persists.
What is curious is that I used Eclipse 3.3, and it saw the categories well, though of course it wasn't able to install the plugins, which were built with 3.4.
Am I doing something wrong or is this a known problem?
A: It appears to be a known problem, due to the new 'p2' provisioning system.
See this discussion, and this bug. What it seems to say is... "stay put until 3.5M3, and then try it again".
A: This solution works for me:
*
*Use the PDE update site project to create the site.xml and build your plugins. Make sure you set the category here.
*Delete the artifacts.xml and content.xml created by the update site build.
*Use the P2 Metadata Generator to generate your artifacts and content files. I use the compress option so I'm getting jars.
*The update site should include: the site.xml, content & artifacts jars, features and plugins folders.
If you follow this procedure, it will work just fine in Eclipse 3.3 and 3.4. Naturally, you should automate this process with Ant.
Important notes:
*
*I never got the metadata generator Ant task to work, so I invoke it in its Java form (the second example in the link above).
*Make sure you clear the artifacts and content xmls before the generation
*Inputs: site.xml and built plugins/features folders
*Specify the metadataRepositoryName which is the update site title (shown to the user in some cases)
I'll do my best to blog about it soon...
Let me know if you have any questions.
A: What seems to work for me is to put the tag defining the category in site.xml before the tag that references that category. If you add the category with Eclipse's editor after adding the feature, it'll mess that order up...
A: A no-brainer for most... but it can be a problem for newbies on Eclipse update sites: be sure to add your feature as a child under the category:
See http://ekkescorner.wordpress.com/2010/04/18/who-eats-the-categories-from-update-sites/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to capture an anchor value from url in Ruby on Rails? After updating or creating a record, I use the URL helper to redirect to the part of the page where the record happens to be located:
if record.save
  ...
  redirect_to records_url, :anchor => "record_" + record.id.to_s
end
the resulting url will be something like
http://localhost:3000/records#record_343242
I want to highlight the record using jquery or prototype, and the anchor is the exact id that I am looking for. Can I capture it?
A: I presume you're trying to capture it in JavaScript?
var record_id = window.location.hash.split("_")[1];
In Prototype you could write:
var record_id = window.location.hash.split("_").last();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the point of using a Logging framework? I think I might be missing the point of having a logging framework for your application. In all the small apps I've always written a small "Logging" class and just pass log messages to a method in it which is written to a file.
What is the purpose of a 3rd party logging framework like log4net? Is it thread safety with logging write operations or am I missing something?
A: you might want to switch to log to a db for some message, or an alerting system that pages the poor person on support
More importantly though, most logging frameworks allow you to specify the loglevel of different classes so you dont need to cut a new binary each time you want more/less logging (that verbose logging for the bug you just spotted in production)
Log4Net's site has more detail
A: Something the other comments have omitted: if there's already a library that does what you want, it saves you having to write the code.
Possibly you're playing semantics here: to me, a "logging framework" typically is little more than a class that writes log messages to a file... so what you've done is write your own logging framework. Given that you have done so, there is obviously some point in "using a Logging framework"!
Ultimately you're going to need to make sure it handles concurrent logging correctly (locking the output stream), can log to a file, syslog, etc., can do log rolling, and so on. You can save yourself that effort by using someone else's well-tested code.
A: That's an excellent question.
The first reason is "why not?" If you are using a logging framework, then you'll reap the maintainability benefits of using something already packaged.
The second reason is that logging is subtle. Different threads, sessions, classes and object instances may all come into play in logging, and you don't want to have to figure this problem out on the fly.
The third reason is that you may find a performance bottleneck in your code. Figuring out that your code is slow because you're writing to a file without buffering or your hard drive has run out of disk space because the logger doesn't rollover and compress old files can be a pain in the neck.
The fourth reason is that you may want to append to syslog, or write to a database, or to a socket, or to different files. Frameworks have this functionality built in.
But really, the first answer is the best one; there's very little benefit to writing your own, and a whole bunch of drawbacks.
A: In a word: flexibility. Log4xxx gives you the ability to do different logging levels, to log different modules of code to different files, and you can depend on it to be dependable no matter what strange situation it hits (what will your logger do if the disk is out of space?)
A: Logging frameworks offer flexibility and prevent you from reinventing the wheel. I know it's brain-dead simple to append to a file in any modern language, but does your home grown logger have multiple targets? Can you turn logging on and off at run-time? Why risk a flat tire when these wheels are available for free?
A: Depending on how smart your own logging system implementation is.
In java, if you want to inherit log types, etc., it might be too much hassle and you'd prefer a third-party tool like Log4J. I assume there are similar things for C#. Similarly if you want to determine log level from the command line.
If you just want to route all your System.out and control whether they are printed whenever you compile, your own logger would do just fine.
A: Just for argument's sake, why not a mature home-grown solution? It doesn't come with any extra baggage, and you have complete control over it.
Most logging frameworks have too many features, most of which we will never use, and they come with their own baggage (it's not just a class, it's a framework). Why not implement simple home-grown logging where all your applications write their logs to a queue, and a simple offline service (maybe a Windows service) reads the queue and writes the messages to your desired location (file, database, etc.)? By using a queue you get asynchronous operation and no locking issues.
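To make that concrete, here is a minimal Java sketch of the queue idea (my illustration, in-process rather than a separate service; the framework-vs-home-grown tradeoffs above still apply):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Producers call log() from any thread; a single background thread
// drains the queue and does the actual I/O, so there is no lock
// contention around the output stream itself.
public class QueueLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    public QueueLogger(final java.io.PrintWriter out) {
        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        out.println(queue.take()); // blocks until a message arrives
                        out.flush();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // shut down quietly
                }
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    public void log(String message) {
        queue.offer(message); // cheap and non-blocking for the caller
    }
}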
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Eclipse start up menu What is the window which is displayed when you first install Eclipse called, and how can I find it?
A: Help -> Welcome
That thing?
A: Do you mean the splash screen?
You can change it if you need (for example, your own RCP or a version of Eclipse like IBM's WSAD). Read about branding your application
(e.g., http://www.eclipse.org/articles/Article-Branding/branding-your-application.html)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is exposing a session's CSRF-protection token safe? Django comes with CSRF protection middleware, which generates a unique per-session token for use in forms. It scans all incoming POST requests for the correct token, and rejects the request if the token is missing or invalid.
I'd like to use AJAX for some POST requests, but said requests don't have the CSRF token available. The pages have no <form> elements to hook into, and I'd rather not muddy up the markup by inserting the token as a hidden value. I figure a good way to do this is to expose a view like /get-csrf-token/ to return the user's token, relying on browsers' cross-site scripting rules to prevent hostile sites from requesting it.
Is this a good idea? Are there better ways to protect against CSRF attacks while still allowing AJAX requests?
A: UPDATE: The below was true, and should be true if all browsers and plugins were properly implemented. Unfortunately, we now know that they aren't, and that certain combinations of browser plugins and redirects can allow an attacker to provide arbitrary headers on a cross-domain request. Unfortunately, this means that even AJAX requests with the "X-Requested-With: XMLHttpRequest" header must now be CSRF-protected. As a result, Django no longer exempts Ajax requests from CSRF protection.
Original Answer
It's worth mentioning that protecting AJAX requests from CSRF is unnecessary, since browsers do not allow cross-site AJAX requests. In fact, the Django CSRF middleware now automatically exempts AJAX requests from CSRF token scanning.
This is only valid if you are actually checking the X-Requested-With header server-side for the "XMLHttpRequest" value (which Django does), and only exempting real AJAX requests from CSRF scanning.
A: If you know you're going to need the CSRF token for AJAX requests, you can always embed it in the HTML somewhere; then you can find it through Javascript by traversing the DOM. This way, you'll still have access to the token, but you're not exposing it via an API.
To put it another way: do it through Django's templates -- not through the URL dispatcher. It's much more secure this way.
A: Cancel that, I was wrong. (See comments.) You can prevent the exploit by ensuring your JSON follows the spec: Always make sure you return an object literal as the top-level object. (I can't guarantee there won't be further exploits. Imagine a browser providing access to the failed code in its window.onerror events!)
You can't rely on cross-site-scripting rules to keep AJAX responses private. For example, if you return the CSRF token as JSON, a malicious site could redefine the String or Array constructor and request the resource.
bigmattyh is correct: You need to embed the token somewhere in the markup. Alternatively, you could reject any POSTs whose referer doesn't match. That way, only people with overzealous software firewalls will be vulnerable to CSRF.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Teaching: Field, Class & Package Relationships In general I think I can convey most programming related concepts quite well.
Yet, I still find it hard to summarise the relationship between Fields, Classes and Packages.
How do you summarise "Fields", "Classes", "Packages", and their relationship?
A: I've faced a similar problem since I taught C, C++, and Java.
Here is what I do:
First, I keep packages separately and explain them in the end.
Ideally, in my opinion, students should first learn about ADTs, preferably in C. They have the struct, they have the separate operations on it. Fields are then simply the "slots" in the struct and you can even show the memory layout to demonstrate it. Functions are separate entities that operate on those structs.
You then make the transition to classes, methods, and fields and show that in essence (barring inheritance and some anecdotes) they are in many ways syntactic sugar for ADTs.
If you need, you can then teach object layouts, inheritance, and virtual tables (in my experience it helps students understand inheritance better to see the memory layout).
Finally you get to the topic of how to organize classes together. If you teach C++, you don't really have packages but you can explain namespaces and discuss organization and separate compilation.
If you are in Java, then you just explain that these are collections of classes in the same namespace, that have special access rules and show them. The package system in Java is kind of broken anyway so I usually go through patterns (e.g., separating a UI package from the C).
So in summary: Classes form the basis for objects that are a memory arrangement of several fields and associated methods that operate on them. Packages are collections of classes that have one more access restriction mechanism.
A: The way I describe it is:
*
*Objects are collections of slots, slots holding data are fields, slots holding code are methods. Public slots are on the outside of the object, private slots are on the inside. Methods should be mostly public because an object offers services to clients, fields should be private so clients don't know how the services work. Fields are therefore an implementation detail of objects.
*Class names need to be unique, so that you can combine your code with third party libraries. Simple/short class names are insufficient, since there are probably thousands of classes called 'List', 'Customer' etc... Hence classes are placed in packages to create longer, harder to duplicate names. Only a subset of the classes in the package need to be visible to clients, hence the two access levels of public and default. This allows a package to function as a library.
So fields are an implementation detail of objects, whose classes live in packages to guarantee unique names and provide library-like modularity.
A: Depending on the age of the person you're trying to explain it to, there's a simple analogy that can be used: tax forms. A tax form (such as the 1040EZ, for instance) is like a class, and each space to be filled in on the form is a field of the form. The tax form even contains instructions on what to be done with the information in the fields, just as a class includes member functions to be performed on the data in the fields. And just as a complete set of tax forms includes not just the main tax form, but others that may need to be filled out (additional schedules, for example) so a package contains not just the main classes but other classes it may need to interact with.
A: Fields are variables that belong to the class, or to object instances of the class. The difference between a local variable and a field is that fields have a broader scope.
Classes are templates for user-defined data types. Classes are more advanced than the primitive data types because they have both state and behavior.
Packages are used to group classes and to resolve potential naming conflicts. With multiple developers and publicly available code libraries it's very likely that some of us will name our classes the same (Math, LinkedList, FileUtils, etc.). Having a unique package name prefixing the class name allows the compiler (and other developers) to determine which class you intend to use.
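A small Java illustration (mine, not from the original answer) of that naming point, as two hypothetical files:
// File: com/example/collections/LinkedList.java
package com.example.collections;

public class LinkedList {      // class: a template for objects
    private Object first;      // field: per-instance state
    // ... operations on the list ...
}

// File: Demo.java
// The fully qualified names disambiguate two classes that share
// the simple name "LinkedList".
public class Demo {
    public static void main(String[] args) {
        java.util.LinkedList<String> a = new java.util.LinkedList<String>();
        com.example.collections.LinkedList b = new com.example.collections.LinkedList();
        System.out.println(a.getClass().getName()); // java.util.LinkedList
        System.out.println(b.getClass().getName()); // com.example.collections.LinkedList
    }
}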
A: Interestingly, you tackled OO programming without mentioning objects. I think that may be your problem.
Here's what I use.
Objects are things. They have attributes (measurements, states of being, etc.) Attributes can be called fields. [I often use things I find in the classroom -- cups, markers, hats, coats, etc., to illustrate this.]
Objects also engage in behaviors, called methods, method functions or operations.
The features (attributes and operations, fields and methods, whatever) of an object provide a way to classify objects.
The features that are common to a class of objects can -- well -- be collected into a class definition. A class definition describes the attributes and methods of the objects that are members of the class.
A package is a collection of class definitions. While -- ideally -- the classes in a package have something in common, that isn't a requirement and isn't a helpful distinction.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I convert a .NET console application to a Winforms or WPF application I frequently start with a simple console application to try out an idea, then create a new GUI based project and copy the code in. Is there a better way? Can I convert my existing console application easily?
A: Just add a new Winform, add the following code to your Main:
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
Also, be sure the [STAThread] attribute is declared above your Main function to indicate the COM threading model your Windows application will use (more about STAThread here).
Then right click your project and select properties and change the "Output type" to Windows application and you're done.
EDIT :
In VS2008 the property to change is Application type
A: For completeness - and for other newbs like me - you also need to add:
using System.Windows.Forms;
... to the top of Program.cs
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: FUSE bindings for PHP I am writing an application that thus far has been written in PHP, from the interface to the daemons. I have a need to use FUSE and would like to continue to use PHP just for consistency. However, there don't seem to be bindings for PHP. Python, Java, etc. have bindings, and I can code in those languages; I just don't want the additional dependencies in this project. I have seen a project on Google Code, but nothing complete. Anyone know if these have been written?
A: I wrote an extension for PHP that provides bindings to libfuse. I have read support working, but haven't quite finished write support. Eventually I'll finish it, but if you'd like to futz with it, I'd be happy to take patches.
http://pecl.php.net/fuse
A: Have a look at those other bindings, and write a PHP extension! :-)
A: There is another binding available: https://github.com/fujimoto/php-fuse
This blog post explains how to install it: http://blog.728.lu/?p=5
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Add a namespace to elements I have an XML document with un-namespaced elements, and I want to use XSLT to add namespaces to them. Most elements will be in namespace A; a few will be in namespace B. How do I do this?
A: You will need two main ingredients for this recipe.
The sauce stock will be the identity transform, and the main flavor will be given by the namespace attribute to xsl:element.
The following untested code should add the http://example.com/ namespace to all elements.
<xsl:template match="*">
<xsl:element name="xmpl:{local-name()}" namespace="http://example.com/">
<xsl:apply-templates select="@*|node()"/>
</xsl:element>
</xsl:template>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
Personal message: Hello, Jeni Tennison. I know you are reading this.
A: With foo.xml
<foo x="1">
<bar y="2">
<baz z="3"/>
</bar>
<a-special-element n="8"/>
</foo>
and foo.xsl
<xsl:template match="*">
<xsl:element name="{local-name()}" namespace="A" >
<xsl:copy-of select="attribute::*"/>
<xsl:apply-templates />
</xsl:element>
</xsl:template>
<xsl:template match="a-special-element">
<B:a-special-element xmlns:B="B">
<xsl:apply-templates match="children()"/>
</B:a-special-element>
</xsl:template>
</xsl:transform>
I get
<foo xmlns="A" x="1">
<bar y="2">
<baz z="3"/>
</bar>
<B:a-special-element xmlns:B="B"/>
</foo>
Is that what you’re looking for?
A: Here's what I have so far:
<xsl:template match="*">
<xsl:element name="{local-name()}" namespace="A" >
<xsl:apply-templates />
</xsl:element>
</xsl:template>
<xsl:template match="a-special-element">
<B:a-special-element xmlns:B="B">
<xsl:apply-templates />
</B:a-special-element>
</xsl:template>
This almost works; the problem is that it's not copying attributes. From what I've read thusfar, xsl:element doesn't have a way to copy all of the attributes from the element as-is (use-attribute-sets doesn't appear to cut it).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: With what GUI framework is the Mono .NET Windows.Forms implemented?
Example: KDE, Gnome, X11 itself?
A: Mono does not use Qt (KDE) or GTK+ (GNOME) widgets because they don't match up with the Winforms API. Mono implements Winforms on top of their System.Drawing implementation, which in turn uses Cairo. Cairo deals with the native graphics implementation, such as X11 in Linux or Quartz in OS X.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: ASP.NET MVC and...YUI? jQuery? Other After the last project I've done using WebForms, I've decided to pass on using that framework in the future. It's great for getting your basic features out there...not so great when you have more complex UI logic.
I'm looking at ASP.NET MVC, and like what I see so far. Of course the issue is that you lose the server side controls when you make that change. I've been looking for an Ajax Library that will a good replacement for the Web Forms widgets and like YUI right now.
Not having a lot of experience in this area, I'd like to ask someone who has more knowledge. Which Ajax toolkit offers the most complete widget library? Is it possible to combine two or more toolkits to supplement each other (e.g., YUI has a great Grid, Scriptaculous has a great Calendar; let's use the best of both worlds)? Or are you more or less tied to one once you choose?
Thanks for the answers and great sample. ExtJS definitely looks interesting, we spent more than that on the Infragistics WebForms suite and don't get the source. Flexigrid looks pretty good as well. Thanks again!
Update 2: Just found out MSFT will be shipping jQuery with ASP.NET MVC.
A: I have written an ASP.NET MVC application and I incorporated jQuery into it. I found that jQuery helped me manipulate things that would have overcomplicated my View... such as adding alternating styles to my grids, etc...
There are many plugins for jQuery that fill in a lot of the gaps that other libraries may have. For example, I used a great jQuery plugin called Flexigrid and I am very pleased with the look and features of the control. I wrote a blog entry about how to use c# 3.0 and LINQ to populate the grid with JSON.
A: Well, considering jQuery is going to start shipping with Visual Studio (first with MVC, then with Visual Studio as a whole) I would go with it. This news just came out today here.
So, with Microsoft fully backing jQuery and it being tightly integrated into the Visual Studio work enviornment I would highly suggest you go with that.
Microsoft is going to make jQuery part of the official dev platform. jQuery will come with Visual Studio in the long term, and in the short term it'll ship with ASP.NET MVC. We'll also ship a version that includes IntelliSense in Visual Studio.
The Announcement Blog Posts
*
*ScottGu on the jQuery/Microsoft goodness
*John Resig on the jQuery/Microsoft announcement
Visual Studio Intellisense w/ jQuery Beta Screenshot:
A: This site (stackoverflow) uses ASP.NET MVC and jQuery, if that's any influence.
Also, ASP.NET MVC is now shipping with jQuery
http://www.hanselman.com/blog/jQueryToShipWithASPNETMVCAndVisualStudio.aspx
A: IMO ExtJS has the most complete widgets, but you have to pay the price to use it commercially. If you don't want to pay, YUI is very nice too, it has grown a lot lately. Most of the time, though, I don't need the widgets, so I'm happy with jQuery and the occasional jQuery.UI datepicker.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Things to consider when writing for touch screen? I'm starting a new project which involves developing an interface for a machine that measures wedge and roundness of lenses and stores the information in a database and reports on it. There's a decent chance we're going to be putting a touch screen on this machine so that it doesn't need to have a mouse or keyboard...
I don't have any experience developing for full size touch screens, so I'm looking for advice/tips/info from you guys...
I can imagine you want to make the elements a little larger than normal... space buttons out a bit more.... things like that... anyone have anything else to add?
A: A few things to consider:
*
*You need to account for parallax error when touching controls. Basically, the user may touch the screen above or below your actual control and therefore miss the control. This is a combination of the size of the control (eg you can have the active area larger than visual control to allow the user to miss and still activate the control), the viewing angle of the user (which you may or may not be able to predict/control) and the type of touch screen you're using. If you know where the user will be placed relative to the screen when using it, you can usually accommodate this with appropriate calibration.
*Depending on the type of touch screen, you may need to ensure that your users aren't wearing gloves or using an implement other than their fingers (eg the end of a pen) to touch the screen. Some screens (eg those depending on conductance) don't respond well to anything other than flesh and blood.
*Avoid using double clicks because it can be very hard for users to reliably double click a control. This can be partly mitigated if you've got experienced/trained users working in a fairly controlled environment where they're used to the screens.
*Linked to the above, if you are using double clicks, you may find the double click activated when the user only wants to single click. This is because it's very easy for the user's finger to bounce slightly on touching the screen and, depending on how sensitive the double click settings are, trigger a double rather than a single click. For this and the previous reason, we always disable double clicks and only use single clicks (or similar single activation controls).
*However big you think you need to make the controls to allow for touch activation, they almost certainly need to be bigger still. Make sure you test the interface with real users in the real deployment environment (or as close to it as you can get). For example, we deployed some screens with nice big buttons you couldn't miss only to find that the control room was unheated and that the users were wearing thick gloves in the middle of winter, making their fingers way bigger than we had allowed for.
*Don't put any controls near the edges of the screen - it's very hard to get your finger into the edges (particularly if the screen has a deep bezel) and a slight calibration problem can easily shift the control too close to the edge to use. Standard menus and scroll bars are a good example of controls that can be very tricky to use on a touch screen and you should either avoid them (which is preferable - they're not good for touch screens) or replicate them with jumbo equivalents.
*Remember that the user's hand will be over the screen, obscuring some of the screen and controls (typically those below where the user is touching, but it depends on the position of the user relative to the screen). Don't put instructions or indicators where the user's hand or arm will obscure them when trying to use the control they relate to (eg typically put them above rather than below the control).
*Depending on the environment, make sure your touch screen is suitably proofed against dust, damp, grease etc and make sure it's easy to clean without damaging it. You wouldn't believe the slime that can quickly accumulate on a touch screen in an industrial or public setting.
A: The other obvious one is that there's no equivalent of pointer 'hover'. Not that that affects many apps though.
A: If you decide to put in analog controls (scrollbars, rotation widgets, etc) be sure to put in a digital control also. Some companies think that a touch screen means perfect control over something with your fingers. In real life, this translates to minutes of frustration trying to fix a number that's just a little off.
A: The most obvious thing is that everything on the GUI needs to be big enough for a fingertip to hit, which is sometimes bigger than you think.
As has been mentioned, there's really no way for a right-click action to happen. Also, double-clicking can be tricky with a fingertip on a touch screen.
The other major thing is that you'll want to create an on-screen keyboard that pops up for text entry, and an on-screen numpad for number-only fields.
A: I wrote my own set of controls for a POS application designed specifically to be touchscreen friendly.
Remember to allow enough real estate for stubby fingers and talons. In our application the users can have these manicures that necessitate them to use the pad of their finger instead of the tip. This means that you need to allow more space for activation areas than you would normally consider in any other type of application.
I would also recommend that you accommodate yourself as a programmer from a testing standpoint and from the point of view that things change and there may need to be a keyboard/mouse attached to a non-touch workstation. I cannot tell you how many times I went to touch my flat panel LCD expecting something to happen, before remembering that I had to use the mouse.
A: Make sure to read up on your basic UI principles, like Fitts's law (the time to acquire a target is a function of the distance to and size of the target).
Also consider whether or not the device is stationary when it is in use (e.g., handheld like a PalmPilot or iPhone); research shows that you must accommodate that in your design.
A: Larger GUI elements are the major thing. But it applies to all elements: scroll bars, tabs and even text fields.
The other major thing that I can think of is that it's hard for the user to right-click. So things that require a right-click should be avoided; context menus are the only thing that comes to mind at the moment.
A: The other responses are pretty good, but are you totally sure that a touch screen would actually be easier to use? There are a lot of devices where a touch screen actually makes them much harder to use, not easier. The main problem is that you can't use the device when you're not looking at it. If users are going to be doing a lot of repetitive actions, a keyboard could be a lot more efficient.
Also, a touch screen might be a lot harder to use by someone with a disability, if you think there's even a small chance that could happen.
A: Even though this is quite old now, I found it to still be useful, as a starting point for design considerations.
http://www.sapdesignguild.org/resources/tsdesigngl/index.htm
A: If you've not already done so, have a look at some of the documentation available for developers on mobile platforms, eg Windows Mobile, iPhone.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do you store request URL's in Javascript to satisfy the DRY principal? Are there any commonly used patterns in Javascript for storing the URL's of endpoints that will be requested in an AJAX application?
For example would you create a "Service" class to abstract the URL's away?
A: You could create a collection of ValuePairs where you'd store each URL value and an identifier:
function ControlValuePair(Id, Value)
{
this.Id = Id; // identifier used to look the URL up
this.Value = Value; // the URL itself
}
function ValuePairsCollection()
{
this.Items = [];
this.Add = function(pair)
{
this.Items.push(pair);
};
this.Find = function(id)
{
for (var i = 0; i < this.Items.length; i++)
{
if (this.Items[i].Id === id)
return this.Items[i];
}
return null;
};
}
Later you can iterate through the collection or look up the id.
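For instance (the ids and URLs here are made up):
var urls = new ValuePairsCollection();
urls.Add(new ControlValuePair("login", "/ajax/login.php"));
urls.Add(new ControlValuePair("search", "/ajax/search.php"));
var endpoint = urls.Find("login").Value; // "/ajax/login.php"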
A: Is your question similar to this already-answered question? If so, does the answer apply to your code also?
A: Where you store your globals is a matter of personal choice. It's best to put them inside an object to avoid conflicts in the global namespace, so yes, a global object named Service would be a good place to store URLs, and other strings that are used in multiple places.
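A minimal sketch of such an object (the names and URLs are illustrative):
var Service = {
userUrl: "/ajax/user.php",
saveUserUrl: "/ajax/user_save.php"
};
// later, e.g.: xhr.open("GET", Service.userUrl, true);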
A: I've used something like this (used in Rails):
var NAMESPACE = NAMESPACE || {}; // application namespace object
NAMESPACE.categories = NAMESPACE.categories || {};
NAMESPACE.categories.baseUri = '/categories';
NAMESPACE.categories.getUri = function(options)
{
options = options || {};
var uri = [NAMESPACE.categories.baseUri];
if(options.id)
{
uri.push(options.id);
}
if(options.action)
{
uri.push(options.action);
}
if(options.format)
{
uri.push('?format=' + options.format);
}
return uri.join('/');
}
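Given the join logic above, usage then looks like:
NAMESPACE.categories.getUri({ id: 5, action: 'edit' }); // "/categories/5/edit"
NAMESPACE.categories.getUri({ format: 'json' }); // "/categories/?format=json"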
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: When is it good (if ever) to scrap production code and start over? I was asked to do a code review and report on the feasibility of adding a new feature to one of our new products, one that I haven't personally worked on until now. I know it's easy to nitpick someone else's code, but I'd say it's in bad shape (while trying to be as objective as possible). Some highlights from my code review:
*
*Abuse of threads: QueueUserWorkItem and threads in general are used a lot, and Thread-pool delegates have uninformative names such as PoolStart and PoolStart2. There is also a lack of proper synchronization between threads, in particular accessing UI objects on threads other than the UI thread.
*Magic numbers and magic strings: Some Const's and Enum's are defined in the code, but much of the code relies on literal values.
*Global variables: Many variables are declared global and may or may not be initialized depending on what code paths get followed and what order things occur in. This gets very confusing when the code is also jumping around between threads.
*Compiler warnings: The main solution file contains 500+ warnings, and the total number is unknown to me. I got a warning from Visual Studio that it couldn't display any more warnings.
*Half-finished classes: The code was worked on and added to here and there, and I think this led to people forgetting what they had done before, so there are a few seemingly half-finished classes and empty stubs.
*Not Invented Here: The product duplicates functionality that already exists in common libraries used by other products, such as data access helpers, error logging helpers, and user interface helpers.
*Separation of concerns: I think someone was holding the book upside down when they read about the typical "UI -> business layer -> data access layer" 3-tier architecture. In this codebase, the UI layer directly accesses the database, because the business layer is partially implemented but mostly ignored due to not being fleshed out fully enough, and the data access layer controls the UI layer. Most of the low-level database and network methods operate on a global reference to the main form, and directly show, hide, and modify the form. Where the rather thin business layer is actually used, it also tends to control the UI directly. Most of this lower-level code also uses MessageBox.Show to display error messages when an exception occurs, and most swallow the original exception. This of course makes it a bit more complicated to start writing units tests to verify the functionality of the program before attempting to refactor it.
I'm just scratching the surface here, but my question is simple enough: Would it make more sense to take the time to refactor the existing codebase, focusing on one issue at a time, or would you consider rewriting the entire thing from scratch?
EDIT: To clarify a bit, we do have the original requirements for the project, which is why starting over could be an option. Another way to phrase my question is: Can code ever reach a point where the cost of maintaining it would become greater than the cost of dumping it and starting over?
A: I saw an application re-architected within 2 years of its introduction into production, and others rewritten in different technologies (one was C++, now Java). Both efforts were not, to my mind, successful.
I prefer a more evolutionary approach to bad software. If you can "componentize" your old app such that you can introduce your new requirements and interface with the old code, you can ease yourself into the new environment without having to "sell" the zero-value (from a biz perspective) investment in rewriting.
Suggested approach - write unit tests for the functionality with which you wish to interface to 1) ensure the code behaves as you expect and 2) provide a safety net for any refactoring that you may wish to do on the old base.
Bad code is the norm. I think IT gets a bad rap from business for favoring rewrites/rearchitecting/etc. They pay the money and "trust" us (as an industry) to deliver solid, extensible code. Sadly, business pressures frequently result in shortcuts that make the code unmaintainable. Sometimes it's bad programmers... sometimes bad situations.
To answer your rephrased question... can code maintenance costs ever exceed rewriting costs... the answer is clearly yes. I don't see anything in your examples, however, that lead me to believe this is your case. I think those issues can be addressed with tests and refactoring.
A: In terms of business value, I would think it's extremely rare that a real case can be made for a rewrite due solely to the internal state of the code. If the product's customer-facing and is currently live and bringing in money (i.e. is not a mothballed or unreleased product), then consider that:
*
*You already have customers using it. They're familiar with it, and might have built some of their own assets around it. (Other systems that interface to it; products based on it; processes they'd have to change; staff they'd maybe have to retrain). All of this costs the customer money.
*Re-writing it might cost less in the long term than making difficult changes and fixes. But you can't quantify that yet, unless your app is no more complex than Hello World. And a re-write means a re-test and a redeploy, and probably an upgrade path for your customers.
*Who says the re-write will be any better? Can you honestly say your firm is writing sparkly code now? Have the practices that turned the original code to spaghetti been corrected? (Even if the main culprit was a single developer, where were his peers and management, ensuring quality through reviews, testing, etc.?)
In terms of technical reasons, I'd suggest it could be time for a major rewrite if the original has some technical dependencies that have become problematic. e.g. a third party dependency that's now out of support, etc.
In general though, I think the most sensible move is to refactor piece by piece (very small pieces if it's really that bad), and improve the internal architecture incrementally rather than in one big drop.
A: Two threads of thought on this one: Do you have the original requirements? Do you have confidence that the original requirements are accurate? What about test plans or unit tests? If you have those things in place it might be easier.
Putting on my customer hat, does the system work or is it unstable? If you've got something that's unstable you've got an argument to change; otherwise you're best of refactoring it bit by bit.
A: I think the line in the sand is when basic maintenance is taking 25% - 50% longer than it should. There comes a time when maintaining legacy code becomes too costly. A number of factors contribute to the final decision. Time and cost being the most important factors I think.
A: If there are clean interfaces and you can cleanly delineate module boundaries, then it might be worth refactoring it module by module or layer by layer in order to allow you to migrate existing customers forward into cleaner more stable codebases, and over time, after you've refactored every module, you will have rewritten everything.
But, based on the codereview, doesn't sound like there would be any clean boundaries.
A: I wonder if the people who vote for scrapping and starting over have ever successfully refactored a large project, or at least seen a large project in poor condition that they think could use a refactoring?
If anything, I err on the opposite side: I've seen 4 large projects that were a mess, that I advocated refactoring as opposed to rewriting. On a couple, there was barely a single line of original code that remained, and major interfaces changed in significant ways, but the process never involved the entire project failing to function as well as it originally did, for any more than a week. (And top-of-trunk was never broken).
Perhaps a project exists that is so severely broken that to attempt to refactor it would be doomed to failure, or perhaps one of the previous projects I refactored would have been better served by a "clean re-write", but I'm not sure I'd know how to recognize it.
A: I agree with Martin. You really need to weigh the effort that will be involved in writing the app from scratch against the current state of the app and how many people use it, do they like it, etc. Often we may want to completely start from scratch, but the cost far outweighs the benefit. I come across bits of ugly looking code all the time, but I soon realize that some of these 'ugly' areas are really bug fixes and make the program work correctly.
A: I would try to consider the architecture of the system and see whether it is possible to scrap and rewrite specific well defined components without starting everything from scratch.
What would usually happen is that you can either do that (and then sell that to the customer/management), or that you find out that the code is such a horrible and tangled mess that you become even more convinced that you need a rewrite and have more convincing arguments for it (including: "if we engineer it right, we would never need to scrap the whole thing and do a third rewrite).
Slow maintenance would eventually cause that architectural drift that would make a rewrite more expensive later.
A: Scrap old code early and often. When in doubt, throw it out. The hard part is convincing non-technical folks of the cost-to-maintain.
So long as the value derived appears to be greater than the cost to operate and maintain, there's still positive value flowing from the software. The question surrounding a rewrite is this: "will we get even more value from a rewrite?" Or alternatively, "How much more value will we get from a rewrite?" How many person-hours of maintenance will you save?
Remember, the rewrite investment is made only once. The return on the rewrite investment lasts forever. Forever.
Focus the value question down to specific issues. You listed a bunch of them above. Stick with that.
*
*"Will we get more value by reducing cost through
dropping the junk that we don't use
but still have to wade through?"
*"Will we get more value from dropping the junk that's unreliable and breaks?"
*"Will we get more value if we understand it -- not by documenting, but by replacing with something we built as a team?"
Do your homework. You'll have to confront the following show-stoppers.
These will originate somewhere in your executive foodchain from someone who'll respond as follows:
*
*"Is it broken?" And when you say "It's not crashed as such," They'll say "It's not broke - don't fix it."
*"You've done the code analysis, you understand it, you no longer need to fix it."
What's your answer to them?
That's only the first hurdle. Here's the worst possible situation. This doesn't always happen, but it does happen with alarming frequency.
Someone in your executive foodchain will have this thought:
*
*"A rewrite doesn't create enough value. Rather than simply rewrite, let's expand it." The justification is that by creating enough value, users are more likely to buy in to the rewrite.
A project where scope is expanded -- artificially -- to add value is usually doomed.
Instead, do the smallest rewrite you can to replace the darn thing. Then expand to fit real needs and add value.
A: Without any offense intended, the decision to rewrite a codebase from scratch is a common, and serious management mistake newbie software developers make.
There are many disadvantages to be wary of.
*
*Rewrites stop new features from being developed cold for months/years. Few, if any companies can afford to stand-still for this long.
*Most development schedules are difficult to nail. This rewrite will be no exception. Amplify the previous point by, now, a delay in development.
*Bugs that were fixed in the existing codebase through painful experience will be re-introduced. Joel Spolsky has more examples in this article.
*Danger of falling victim to the Second-system effect -- in summary, "People who have designed something only once before try to do all the things they 'didn't get to do last time', loading the project up with all the things they put off while making version one, even if most of them should be put off in version two as well."
*Once this expensive, burdensome rewrite is completed, the very next team to inherit the new codebase is likely to use the same excuses for doing another rewrite. Programmers hate learning someone else's code. No one writes perfect code because perfection is so subjective. Find me any real-world application and I can give you a damning indictment and rationale for doing a from-scratch rewrite.
Whether you ultimately rewrite from scratch or not, beginning a refactoring phase now is a good way to both really sit down and understand the problem so that the rewrite will go more smoothly if truly called for, as well as giving the existing codebase an honest look to really see if a rewrite's needed.
A: To actually scrap and start over?
When the current code doesn't do what you would like it to do, and would be cost prohibitive to change.
I'm sure someone will now link Joel's article about Netscape throwing their code away and how it's oh-so-terrible and a huge mistake. I don't want to talk about it in detail, but if you do link that article, before you do so, consider this: the IE engine, the engine that allowed MS to release IE 4, 5, 5.5, and 6 in quick succession, the IE engine that totally destroyed Netscape... it was new. Trident was a new engine after they threw away the IE 3 engine because it didn't provide a suitable basis for their future development work. MS did that which Joel says you must never do, and it is because MS did so that they had a browser that allowed them to completely eclipse Netscape. So please... just meditate on that thought for a moment before you link Joel and say "oh you should never do it, it's a terrible idea".
A: You can only give a definite yes to rewriting if you know completely how your application works (and by completely I mean it, not just having a general idea of how it should work) and you know more or less exactly how to make it better. Any other case and it's a shot in the dark; it depends on too many things. Perhaps gradual refactoring would be safer, if it is possible.
A: If possible, I typically would prefer to rewrite smaller portions of the code over time when I need to refactor a baseline. There are typically many smaller issues such as magic number, poor commenting, etc. that tend to make the code look worse than it actually is. So, unless the baseline is just awful, keep the code and just make improvements at the same time you are maintaining the code.
If refactoring requires a lot of work, I recommend laying out a small re-design plan/todo list that gives you a list of things to work on in order so that you can bring the baseline to a better state. Starting from scratch is always a risky move and you are not guaranteed that the code will be better when you are finished. Using this technique, you will always have a working system that improves over time.
A: Code with excessively high cyclomatic complexity (like over 100 in a large number of modules) is a good clue. Also, how many bugs does it have per KLOC? How critical are the bugs? How often are bugs introduced when bug fixes are made? If your answer is "a lot" (I can't remember the norms right now), then a rewrite is warranted.
A: As early as possible. Whenever you get a premonition that your code is slowly turning into an ugly beast that is very likely to consume your soul and give you headaches, and you know the problem is in the underlying structure of the code (so any fix would be a hack, e.g. introduce a global variable), then it's time to start over.
For some reason people don't like throwing away precious code, but if you feel you're better off starting over, you are probably right. Trust your instinct and remember that it wasn't a waste of time; it taught you one more way of NOT approaching the problem. You could (should) always use a version control system so your baby is never really lost.
A: A rule of thumb I've found useful is that if given a code base, if I have to re-write more than 25% of the code to make it work or modify it based upon new requirements, you may as well re-write it from scratch.
The reasoning is that you can only patch a body of code so far; beyond a certain point, it's quicker to do over.
There's an underlying assumption that you have a mechanism (such as thorough unit and/or system tests) that will tell you whether your re-written version is functionally equivalent (where it needs to be) as the original.
A:
If it requires more time to read and understand the code (if that is even possible)
than it would to rewrite the entire application, I say scrap it and start over.
Be very careful with this:
*
*Are you sure you aren't just being lazy and not bothering to read the code
*Are you being arrogant about the great code you will write compared to the rubbish anyone else produced.
*Remember tested-working code is worth a lot more than imaginary yet-to-be-written code
In the words of our esteemed host and overlord, Joel, in "things you should never do": it's not always wrong to abandon working code, but you have to be sure about the reason.
A: I do not have any experience with using metrics for this myself, but the article "Software Maintainability Metrics Models in Practice" discusses more or less the same question asked here for two case studies they did. It starts with the following editor's note:
In the past, when a maintainer received new code to maintain, the rule-of-thumb was "If you have to change more than 40 percent of someone else's code, you throw it out and start over." The Maintainability Index [MI] addressed here gives a much more quantifiable method to determine when to "throw it out and start over." This work was sponsored by the U.S. Air Force Information Warfare Center and the U.S. Department of Energy [DOE], Idaho Field Office, DOE Contract No. DE-AC07-94ID13223.
A: I think the rule was...
*
*The first version is always a throwaway
So, if you learned your lesson(s), or his/her lessons, then you can go ahead and write it fresh now that you understand your problem domain better.
Not that there aren't parts that can/should be kept. Tested code is the most valuable code, so if it isn't deficient in any real way other than style, no reason to toss it all out.
A: When is it good (if ever) to scrap production code and start over?
Never had to do this, but logic would dictate (to me, anyway) that once you pass the inflection point where you're spending more time reworking and fixing bugs in the existing code base than you are adding new functionality, it's time to trash the old stuff and get a fresh start.
A: If it requires more time to read and understand the code (if that is even possible) than it would to rewrite the entire application, I say scrap it and start over.
A: I have never completely thrown out code, even when going from a FoxPro system to a C# system.
If the old system worked, then why just throw it out?
I have come across a few really bad systems. Threads being used where not needed. Horrible inheritance and abuse of interfaces.
It is best to understand what the old code is doing and why it is doing it. Then change it so that it is not confusing.
Of course, if the old code doesn't work (I mean can't even compile), then you might be justified in just starting over. But how often does that actually happen?
A: Yes, it totally can happen. I've seen money be saved by doing it.
This is not a tech decision, it's a business decision. Code rewrites are long-term gains, while "if it ain't totally broke..." is a short-term gain. If you are in a first-year startup that is focused on getting a product out the door, the answer is usually to just live with it. If you're in an established company, or the errors with the current system are causing more workload and therefore costing more company money, then they might go for it.
Present the problem as best as you can to your GM, use dollar values where you can. "I don't like dealing with it" means nothing. "It'll take twice the time to do everything until this is fixed" means a lot.
A: I think there are a number of issues here that depend largely on where you are at.
Is the software working well from a customer perspective? (If yes, be very careful about changes.) I would think there would be little point re-writing, if the system was working, unless you were expanding the feature set. And are you planning to expand the features and customer base of the software? If so then you have much more reason to change.
As much as anything just trying to understand some else's code even if well written can be difficult, when badly written I would imagine almost impossible. What you describe sounds like something that would be very difficult to expand.
A: I would take into consideration whether the application does what it is intended to do, whether you will ever be required to make modifications, and whether you are confident that the app has been thoroughly tested in all scenarios that it will be used in.
Do not invest the time if the app does not need alterations. However, if it doesn't function as you need and you have to control the hours and time invested to make corrections, scrap it and re-write to the standards that your team can support. There's nothing worse than terrible code that you have to support / decipher but still have to live with. Remember, Murphy's Law says it will be 10 at night when you'll have to make things work, and that is never productive.
A: Production code always has some value. The only case where I would truly throw it all out and start again is if we determine the intellectual property is irrevocably contaminated. For example if someone brought large amounts of code from a previous employer, or a large percentage of the code was ripped from a GPLd codebase.
A: I'm going to post this book every time I see a discussion on Refactoring. Everyone should read "Working Effectively with Legacy Code" by Michael Feathers. I found it to be an excellent book - if nothing else, it's a fun read, and motivational.
A: When the code has reached a point where it is not maintainable or extensible anymore. It is full of short-term hacky fixes. It has lots of coupling. It has long (100+ line) methods. It has database access in the UI. It generates a lot of random, impossible-to-debug errors.
Bottom line: When maintaining it is more expensive (i.e. takes longer) than rewriting it.
A: I used to believe in just re-write from scratch, but it is wrong.
http://www.joelonsoftware.com/articles/fog0000000069.html
Changed my mind.
What I would suggest is figuring out a way to properly refactor the code. Keep all existing functionality and test as you go. We have all seen horrible code bases, but it is important to keep the knowledge your application has accumulated over time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
} |
Q: Best way to get started with programming other things than your computer? What is the best way to get started with programming things outside of your computer?
I don't mean mainstream things like cell phones with APIs.
Please assume working knowledge of C/C++
A: It's not a microcontroller, but the Lego Mindstorms is a good place to start learning the ins and outs of embedded programming.
A: I recently heard about the BUG which calls itself "open source hardware development". Is this the sort of thing you're looking for?
A: Buy yourself an HP 10C Calculator, and then program all those "programming 101" math algorithms using its insanely small but practical calculator language. Reminds me of assembler, but it's not.
A: I'd strongly recommend finding an open-source project close to one of your leisure occupations.
First, open-source because the support is mostly very friendly; then open-source because other contributors will have at least one comparable hobby; and then a favorite pastime so you can see a need for tools etc.
Two projects I have been playing around with very successfully:
*
*Music: Rockbox, a firmware replacement for many mp3-players and portable media players.
*Photography: CHDK, a firmware addition to numerous Canon compact still cameras.
A: Give SparkFun a shot. For me, servos are what I love to hack around with.
A: You can try the BeagleBoard; though it's kind of mainstream, it nonetheless has very impressive performance to speak of, at just $149.
A: I vote for the Nintendo DS:
*
*Nice hardware : 2 CPUs, 2 screens, touchscreen, mic, speakers, wireless, 2D and 3D acceleration
*No OS to speak of
*Freedom to talk to the bare metal without restriction
*Well-documented
*Very active dev community
*Enthusiastic audience for anything cool you create
*Cheap (shockingly so if you go for 1st-gen units)
All-in-all it's really excellent fun to play with.
To get started:
*
*Get a DS
*Get a SLOT1 flash-cart (I've got a DS-X, but there are plenty of others)
*Get devkitpro
*Go here for help or advice
A: I'd look into stuff like (unofficial) GBA development or the like. Sure, there are "libraries", but you can go digging and just stick bits into specific addresses and make stuff happen. You can't get more "no API" than raw memory-mapped hardware access.
A: Brian, you might find the Arduino interesting. It is inexpensive and pretty popular. I started playing around with micro controller boards and such a few years back and that lead to an interest in robots. Kind of interesting, at least to me.
If one is interested in a .NET-flavored development environment, there is an analog to the Arduino called Netduino that is worth a look.
A: Embedded programming is fun.
You can start with things like the BASIC Stamp or a PIC, or since you know C/C++ you can use a real microcontroller like an Atmel AVR. Look at the Butterfly or the Arduino kit.
The Arduino has an amazing community of projects and info behind it.
A: Maybe start with small microcontroller projects.
This may be helpful: http://www.kmitl.ac.th/~kswichit%20/
A: What sort of things do you want to program?
Sounds like you might be interested in MAKE magazine, and some of their compilations, such as Making Things Talk. With a little bit of experience with basic electronics, you can follow their recipes to do all sorts of odd and interesting things. When you get more comfortable, you can start modding their designs.
Good luck, :)
A: I have personal experience and would recommend using these products to program PICs:
Programming board
GCBasic (Open Source Basic)
The PICs are cheap ($2 or so) and the board will cost you around $120.
Recently, I have been impressed with TI's wireless USB chips/programmers. You can get 2 chips and a programmer for $50. It also comes with a free C compiler. By default it comes with a sample remote temperature program.
TI wireless target board
A: I think it's fun to hack old iPods. You can get a fourth generation iPod (or any of a number of supported devices), run Rockbox on it, then get the source and help hack on it.
A: I would also recommend AVR (8-bit) and Butterfly or DB101 kit. The main advantage is that there is a GCC compiler available and that you can program them through the Serial Port, without the need of a tool. Inexpensive programming and debugging tools are also available. There is a very strong AVR community in AVRFreaks
Another alternative is ARM7 and ARM9 microcontrollers (32bit). If you are interested in using an OS (ucLinux/FreeRTOS for ARM7, Linux for ARM9), you should go that way. There is of course a free GCC compiler. You can buy kits and tools at Olimex
A: If you would like to create a cool gadget using a microcontroller as a learning experience, you can look at the starter kits from Rabbit (website). They have a variety of low-cost kits with 8-bit microcontrollers to get started with a particular technology.
A: There are a lot of programmable robots around. In fact, even some of the Roombas (automated vacuums) can be programmed. This is particularly good if you want to teach kids how to program.
A: If you have a Nintendo Wii, you can crack it using Twilight Princess. You don't even have to buy it; I just rented it for a couple days. Go to WiiBrew.org and check out some of the projects that are available there. Most if not all are open source, and should give you a good starting point. Lots of ports of existing stuff, along with some original programs written specifically for the Wii. You would of course do the programming on your computer, and transfer the compiled binaries to the Wii. I haven't looked into how hard it is to get a development environment set up and having it build for the Wii, but if you email the project maintainers from WiiBrew.org, they may be able to set you up.
[EDIT]
Just browsing around, I found DevkitPro, which seems to be the toolkit of choice for developing on many different console and handheld systems, including the Wii.
A: To ease yourself into embedded programming, you may want to try using XNA for either the Xbox or the Zune. You won't be doing memory management, but you'll get used to the constrained hardware if you do it on the Zune. Admittedly, it's using C#; but you could always do the programming itself using CIL.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: How to install/register more than one instances in SqlServer Is it possible to install/register another local server instance in any SqlServer version, besides the default local instance, where only one SqlServer version is installed?
A: Yes, it's possible. I have several combinations in my servers. I have one server that has both SQL Server 2000 and 2005 installed side by side. My desktop at work is a Windows 2003 Server, and I have SQL Server 2005 and 2008 installed side by side.
What you want is called a named instance. There will be a screen during the install, where you will be able to give it a name.
A: Yes. Usually the installer will detect that you have one or more existing instances and will prompt you for a instance name. We have setup three SQL Server 2000 standard editions on a development box to emulate the three production servers at one of our clients.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I install the D programming language into C:\Program Files? The prompt says that if I install the software into a directory with spaces:
the rebuild build tool used by the D Shared Source System will fail to build
and that I will be
forced to reinstall in a different location
However, I don't like random things in my C:\ drive. D, IMO, belongs in Program Files with PHP and MinGW and so on. How can I get it here?
If it matters, I'm using the Easy D installer package.
A: You can also use NTFS Link to create junction points (symlinks for all intents and purposes) and hard links on NTFS file systems. The functionality is built into the NTFS drivers, but an interface was never implemented for it, presumably to avoid things like recursive directory structures (endless virus scan loops anyone?). This package exposes an interface to this functionality.
I'd then create a symlink from C:\Program Files\ to something like C:\ProgramFiles\, hence disposing of the problematic space. This means that anything added to one directory will be added to the other, because both directories point to the same place on disk.
More info on NTFS Junction Points.
Info on NTFS symlinks (Vista only, but doesn't need NTFS Link to be installed.)
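On Vista you can also create the junction from an elevated command prompt with no extra tools (the paths follow the example above):
mklink /J C:\ProgramFiles "C:\Program Files"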
A: You could try using the old DOS 8.3 name for the Program Files directory, although this solution is implementation- and locale-dependent, and thus somewhat deprecated. On most US English systems, the 8.3 name of the C:\Program Files directory is C:\PROGRA~1. So, instead of installing to "C:\Program Files\dmd", you'd install to "C:\PROGRA~1\dmd". Hopefully, the configuration files for the misbehaving programs won't know the difference.
A: You could install it into C:\Program Files, and then use the subst command to make it appear as a new drive letter:
subst x: "c:\program files\d"
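Note that subst mappings don't survive a reboot, so you'd re-run this from a startup script; the mapping can be removed again with:
subst x: /d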
A: I actually use a "c:\Programs" for situations such as this - quite a few applications don't work well in directories with spaces in them.
It doesn't cause confusion since it's different enough from "c:\Program Files" - earlier attempts used "c:\ProgramFiles" (without the space) but this was too similar.
A: I have a C:\Dev folder on my machine for things like this. That way you only have one folder on the main directory and it stays unclutered.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to remove accents and tilde in a C++ std::string I have a problem with a string in C++ which has several words in Spanish. This means that I have a lot of words with accents and tildes. I want to replace them with their non-accented counterparts. Example: I want to replace the word "había" with habia. I tried to replace them directly with the replace method of the string class, but I could not get that to work.
I'm using this code:
for (it= dictionary.begin(); it != dictionary.end(); it++)
{
strMine=(it->first);
found=toReplace.find_first_of(strMine);
while (found!=std::string::npos)
{
strAux=(it->second);
toReplace.erase(found,strMine.length());
toReplace.insert(found,strAux);
found=toReplace.find_first_of(strMine,found+1);
}
}
Where dictionary is a map like this (with more entries):
dictionary.insert ( std::pair<std::string,std::string>("á","a") );
dictionary.insert ( std::pair<std::string,std::string>("é","e") );
dictionary.insert ( std::pair<std::string,std::string>("í","i") );
dictionary.insert ( std::pair<std::string,std::string>("ó","o") );
dictionary.insert ( std::pair<std::string,std::string>("ú","u") );
dictionary.insert ( std::pair<std::string,std::string>("ñ","n") );
and toReplace strings is:
std::string toReplace="á-é-í-ó-ú-ñ-á-é-í-ó-ú-ñ";
I obviously must be missing something. I can't figure it out.
Is there any library I can use?.
Thanks,
A: I disagree with the currently "approved" answer. The question makes perfect sense when you are indexing text. Like case-insensitive search, accent-insensitive search is a good idea. "naïve" matches "Naïve" matches "naive" matches "NAİVE" (you do know that an uppercase i is İ in Turkish? That's why you ignore accents)
Now, the best algorithm is hinted at in the approved answer: use NFD (decomposition) to decompose accented letters into the base letter and a separate accent, and then remove all accents.
There is little point in re-composition afterwards, though. You removed most sequences which would change, and the others are for all intents and purposes identical anyway. What's the difference between æ in NFC and æ in NFD?
A: First, this is a really bad idea: you’re mangling somebody’s language by removing letters. Although the extra dots in words like “naïve” seem superfluous to people who only speak English, there are literally thousands of writing systems in the world in which such distinctions are very important. Writing software to mutilate someone’s speech puts you squarely on the wrong side of the tension between using computers as means to broaden the realm of human expression vs. tools of oppression.
What is the reason you’re trying to do this? Is something further down the line choking on the accents? Many people would love to help you solve that.
That said, libicu can do this for you. Open the transform demo; copy and paste your Spanish text into the “Input” box; enter
NFD; [:M:] remove; NFC
as “Compound 1” and click transform.
(With help from slide 9 of Unicode Transforms in ICU. Slides 29-30 show how to use the API.)
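In code, a minimal sketch using ICU's C++ Transliterator API might look like this (error handling is elided; the rule string simply mirrors the demo above):
#include <unicode/translit.h>
#include <unicode/unistr.h>
#include <unicode/utrans.h>
#include <string>
std::string removeAccents(const std::string &utf8)
{
UErrorCode status = U_ZERO_ERROR;
// decompose, strip combining marks, recompose
icu::Transliterator *trans = icu::Transliterator::createInstance(
"NFD; [:M:] Remove; NFC", UTRANS_FORWARD, status);
icu::UnicodeString text = icu::UnicodeString::fromUTF8(utf8);
trans->transliterate(text); // "había" becomes "habia"
std::string result;
text.toUTF8String(result);
delete trans;
return result;
}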
A: I definitely think you should look into the root of the problem. That is, look for a solution that will allow you to support characters encoded in Unicode or for the user's locale.
That being said, your problem is that you're dealing with multi-byte characters in your strings. There is std::wstring, but I'm not sure I'd use that. For one thing, wide characters aren't meant to handle variable-width encodings. This hole goes deep, so I'll leave it at that.
Now, as for the rest of your code, it is error prone because you mix the looping logic with translation logic. Thus, at least two kinds of bugs can occur: translation bugs and looping bugs. Do use the STL, it can help you a lot with the looping part.
The following is a rough solution for replacing characters in a string.
main.cpp:
#include <iostream>
#include <string>
#include <iterator>
#include <algorithm>
#include "translate_characters.h"
using namespace std;
int main()
{
string text;
cin.unsetf(ios::skipws);
transform(istream_iterator<char>(cin), istream_iterator<char>(),
inserter(text, text.end()), translate_characters());
cout << text << endl;
return 0;
}
translate_characters.h:
#ifndef TRANSLATE_CHARACTERS_H
#define TRANSLATE_CHARACTERS_H
#include <functional>
#include <map>
class translate_characters : public std::unary_function<const char,char> {
public:
translate_characters();
char operator()(const char c);
private:
std::map<char, char> characters_map;
};
#endif // TRANSLATE_CHARACTERS_H
translate_characters.cpp:
#include "translate_characters.h"
using namespace std;
translate_characters::translate_characters()
{
// demo mapping only ('e' -> 'a'); the real accent-to-ASCII pairs would go here
characters_map.insert(make_pair('e', 'a'));
}
char translate_characters::operator()(const char c)
{
map<char, char>::const_iterator translation_pos(characters_map.find(c));
if( translation_pos == characters_map.end() )
return c;
return translation_pos->second;
}
A: You might want to check out the boost (http://www.boost.org/) library.
It has a regexp library, which you could use.
In addition it has a specific library that has some functions for string manipulation (link) including replace.
A: Try using std::wstring instead of std::string. UTF-16 should work (as opposed to ASCII).
A: I could not link the ICU libraries, but I still think it's the best solution. As I need this program to be functional as soon as possible, I made a little program (that I have to improve) and I'm going to use that. Thank you all for your suggestions and answers.
Here's the code I'm gonna use:
for (it= dictionary.begin(); it != dictionary.end(); it++)
{
strMine=(it->first); // the accented character, a 2-byte UTF-8 sequence
found=toReplace.find(strMine); // find() matches the whole sequence, unlike find_first_of()
while (found != std::string::npos)
{
strAux=(it->second);
toReplace.erase(found,2); // erase both bytes of the UTF-8 sequence
toReplace.insert(found,strAux);
found=toReplace.find(strMine,found+1);
}
}
I will change it next time I have to turn my program in for correction (in about 6 weeks).
A: If you can (if you're running Unix), I suggest using the tr facility for this: it's custom-built for this purpose. Remember, no code == no buggy code. :-)
Edit: Sorry, you're right, tr doesn't seem to work. How about sed? It's a pretty stupid script I've written, but it works for me.
#!/bin/sed -f
s/á/a/g;
s/é/e/g;
s/í/i/g;
s/ó/o/g;
s/ú/u/g;
s/ñ/n/g;
A: /// <summary>
///
/// Replace any accent and foreign character by their ASCII equivalent.
/// In other words, convert a string to an ASCII-complient string.
///
/// This also get rid of special hidden character, like EOF, NUL, TAB and other '\0', except \n\r
///
/// Tests with accents and foreign characters:
/// Before: "äæǽaeöœoeüueÄAeÜUeÖOeÀÁÂÃÄÅǺĀĂĄǍΑΆẢẠẦẪẨẬẰẮẴẲẶАAàáâãåǻāăąǎªαάảạầấẫẩậằắẵẳặаaБBбbÇĆĈĊČCçćĉċčcДDдdÐĎĐΔDjðďđδdjÈÉÊËĒĔĖĘĚΕΈẼẺẸỀẾỄỂỆЕЭEèéêëēĕėęěέεẽẻẹềếễểệеэeФFфfĜĞĠĢΓГҐGĝğġģγгґgĤĦHĥħhÌÍÎÏĨĪĬǏĮİΗΉΊΙΪỈỊИЫIìíîïĩīĭǐįıηήίιϊỉịиыїiĴJĵjĶΚКKķκкkĹĻĽĿŁΛЛLĺļľŀłλлlМMмmÑŃŅŇΝНNñńņňʼnνнnÒÓÔÕŌŎǑŐƠØǾΟΌΩΏỎỌỒỐỖỔỘỜỚỠỞỢОOòóôõōŏǒőơøǿºοόωώỏọồốỗổộờớỡởợоoПPпpŔŖŘΡРRŕŗřρрrŚŜŞȘŠΣСSśŝşșšſσςсsȚŢŤŦτТTțţťŧтtÙÚÛŨŪŬŮŰŲƯǓǕǗǙǛŨỦỤỪỨỮỬỰУUùúûũūŭůűųưǔǖǘǚǜυύϋủụừứữửựуuÝŸŶΥΎΫỲỸỶỴЙYýÿŷỳỹỷỵйyВVвvŴWŵwŹŻŽΖЗZźżžζзzÆǼAEßssIJIJijijŒOEƒf'ξksπpβvμmψpsЁYoёyoЄYeєyeЇYiЖZhжzhХKhхkhЦTsцtsЧChчchШShшshЩShchщshchЪъЬьЮYuюyuЯYaяya"
/// After: "aaeooeuueAAeUUeOOeAAAAAAAAAAAAAAAAAAAAAAAaaaaaaaaaaaaaaaaaaaaaaaBbCCCCCCccccccDdDDjddjEEEEEEEEEEEEEEEEEEeeeeeeeeeeeeeeeeeeFfGGGGGgggggHHhhIIIIIIIIIIIIIiiiiiiiiiiiiJJjjKKkkLLLLllllMmNNNNNnnnnnOOOOOOOOOOOOOOOOOOOOOOooooooooooooooooooooooPpRRRRrrrrSSSSSSssssssTTTTttttUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuYYYYYYYYyyyyyyyyVvWWwwZZZZzzzzAEssIJijOEf'kspvmpsYoyoYeyeYiZhzhKhkhTstsChchShshShchshchYuyuYaya"
///
/// Tests with invalid 'special hidden characters':
/// Before: "\0\0\000\0000Bj��rk�\'\"\\\0\a\b\f\n\r\t\v\u0020���oacu\'\\\'te�"
/// After: "00000Bjrk'\"\\\n\r oacu'\\'te"
///
/// </summary>
private string Normalize(string StringToClean)
{
string normalizedString = StringToClean.Normalize(NormalizationForm.FormD);
StringBuilder Buffer = new StringBuilder(StringToClean.Length);
for (int i = 0; i < normalizedString.Length; i++)
{
if (CharUnicodeInfo.GetUnicodeCategory(normalizedString[i]) != UnicodeCategory.NonSpacingMark)
{
Buffer.Append(normalizedString[i]);
}
}
string PreAsciiCompliant = Buffer.ToString().Normalize(NormalizationForm.FormC);
StringBuilder AsciiComplient = new StringBuilder(PreAsciiCompliant.Length);
foreach (char character in PreAsciiCompliant)
{
//Reject all special characters except \n\r (Carriage-Return and Line-Feed).
//Get rid of special hidden character, like EOF, NUL, TAB and other '\0'
if (((int)character >= 32 && (int)character < 127) || ((int)character == 10 || (int)character == 13))
{
AsciiComplient.Append(character);
}
}
return AsciiComplient.ToString().Trim(); // Remove spaces at start and end of string if any
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: What are essential topics to have in a Web Services (semester) course? I am in the process of designing a Web Services course for students in an Information Technology program. Some students stop after getting a two-year associates degree, but other students in the program go on to a four-year bachelor's degree. This course would be for students going on to the four-year degree.
My initial thoughts for the course would be that it would cover:
*
*Some simple database concepts, with enough command line practice to allow students to create simple relational database backends.
*Enough PHP so students can create a web-interface that allows user to enter new data into the database backend, edit data in the database, and display fixed views of the database.
*Basic security practices for PHP and web services in general.
*Writing a barebones content management system using PHP and a database backend.
*Learning about and using existing content management software such as Zope/Plone or Drupal.
*Discuss feasibility of using existing content management software to provide ADA section 508 compliance for web pages. Contrast this with coming up with a simple framework to make ADA compliant pages using PHP.
Our semesters are 16 weeks long. Are there other topics that you cover instead of the ones listed? If you had a chance to design such a course, what would be the most pragmatic things to cover?
Edit: Based on the initial response, it is clear that the title of my question is misleading. It should be web programming instead of web services. The students taking this course will have already taken at least one programming course. The students would have all taken a course in Python. The Python course they take includes writing an XML parser that produces HTML with CSS. This course would also cover HTML, CSS, and JavaScript. XML would also be used (parsing XML using PHP, and possibly using converting XML into PHP code). Some of the students will also have taken an introductory course in Java, but that course will not cover JSP.
A: First of all, what do you understand by "web service"? As far as I know, the standard definition of a web service is that it's a "software system to support machine-to-machine interaction over a network". If it really is what you had in mind, well then (1) those parts about CMS doesn't apply and (2) there should definitely be some previous knowledge of web programming or something like that. Actually very little of the course description seems applicable for web services, from the description it reads like a generic web-development course.
Anyway, as that is probably not what you had in mind, the thing is, you cannot create "a web-interface" in PHP - you need HTML, CSS, JavaScript, etc. for that - will that be included in the course?
Regarding the last section about 508 - to be honest, it is a relatively minor part of everyday work in web development and it actually has nothing at all to do with PHP or programming, or server-side web development and more with what the client side code is like and how content is prepared.
A: You're probably going to need to talk about Xml. May want to even talk about XSDs... but that depends on what you want to get into in the course. I don't know about web services with PHP, but if it were .Net you would want to talk about serialization/deserialization.
A: I would teach (even briefly) the layers model. If students don't fundamentally understand it, somewhere down the road it will come back to haunt them. And yes, I have met students who went through a 4 year CS degree without understanding the network layer model or the OS layer model.
A: Why exactly are you teaching PHP as a CS course? Especially considering the web services topic.
Once these students graduate, 98% of their webservice work is going to be either Java, or C#.
Or perhaps you mean something different than REST, XML-RPC or SOAP for webservices?
A: Talking about Sun's standard is a nice idea, and maybe cover the oldest way to use WS in Java, Apache Axis.
I believe that in the Java session you could have a talk on JAX-WS and JAXB.
Talk about the WS-* directions and how REST is changing our vision of service consumers and providers.
A: Some sort of programming language should be a pre-req to the course that way students can test what they are learning. It would be too much to have to teach a language. You, as the teacher, will be able to verify that they are in fact creating a service.
Maybe create some databases that the students can connect to and create services from.
Should probably get into REST vs. non-REST, and formats (XML, JSON, CSV...).
A: Love it or hate it, SOAP is here to stay and extends beyond specific languages like Java, PHP, etc. Web programming is no longer custom coded from the ground up. Teaching REST and SOAP is just like teaching people to use the standard template library in a C++ class. Reuse is paramount.
I would avoid writing a CMS; it's usually the wrong choice for most web projects, and if we have learned anything from Twitter it's that shoving a CMS where it doesn't belong is bad juju. Plus it's boring. Have them do a mashup competition instead. Foster creativity and entrepreneurship while also applying all the core concepts.
If there is time, delve into web tier architecture. I am always dismayed by the number of candidates I interview who don't understand how things fit together and how to scale. Understanding redundancy is impressive to any potential employer, especially if your students will move to corporate jobs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why does GetWindowRgn fail on Vista? I'm writing a program that uses SetWindowRgn to make transparent holes in a window that belongs to another process. (This is done only when the user explicitly requests it.)
The program has to assume that the target window may already have holes which need to be preserved, so before it calls SetWindowRgn, it calls GetWindowRgn to get the current region, then combines the current region with the new one and calls SetWindowRgn:
HRGN rgnOld = CreateRectRgn ( 0, 0, 0, 0 );
int regionType = GetWindowRgn ( hwnd, rgnOld );
This works fine in XP, but the call to GetWindowRgn fails in Vista. I've tried turning off Aero and elevating my thread's privilege to SE_DEBUG_NAME with AdjustTokenPrivileges, but neither helps.
GetLastError() doesn't seem to return a valid value for GetWindowRgn -- it returns 0 on one machine and 5 (Access denied) on another.
Can anyone tell me what I'm doing wrong or suggest a different approach?
A: Under Vista, in order for a process that does not run as administrator to target a window from another process, it must:
*
*Embed a manifest file with uiAccess="true" (example below)
*Digitally sign the application
*Install and execute it from a "safe" place, like "Program Files"
Here's a sample manifest:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<assemblyIdentity version="1.0.0.0" processorArchitecture="X86" name="yourAssemblyNameWithoutExtension" type="win32"/>
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<requestedExecutionLevel level="asInvoker" uiAccess="true" />
</requestedPrivileges>
</security>
</trustInfo>
</assembly>
A: Are you sure your window has a region? Most top-level windows in XP do, simply because the default theme uses them for round corners... but this is still a bad assumption to be making, and may very well not hold once you get to Vista.
If you haven't set a region yet, and the call fails, use a sensible default (the window rect) and don't let it ruin your life. Now, if SetWindowRgn() fails...
A: You mention that you're trying to get the region of the window of another process. Vista tightened up the security of a lot of cross-process Win32 calls. I can't find any documentation one way or the other for GetWindowRgn(), but you could test it simply enough. Make a simple project that sets its own region, and try to use your original app to get the simple app's region. If it works, then it's just going to be annoying and people can't use your app on just anything. If it doesn't work, there's a chance that your app won't work at all on Vista.
A:
My answer (based on my experience) regarding the Windows API function ::GetWindowRgn(...)
This function fails in Vista and in Windows 7, that is, it returns ERROR.
But this function works fine in Windows XP.
Therefore, I would advise the following simple solution:
If you use this function within an application expected to run
under different versions of Windows, guard the call like this:
HRGN rgn = ::CreateRectRgn(0, 0, 0, 0);
int nResultOfRgnOperation = ::GetWindowRgn(hwnd, rgn);
if (nResultOfRgnOperation != ERROR)
{
    // Use the entire window's region as determined by this function.
}
else
{
    // Fall back to the window's bounding rectangle: build a rectangular
    // region covering the whole window and use it in place of the region.
    RECT rc;
    ::GetWindowRect(hwnd, &rc);
    ::SetRectRgn(rgn, 0, 0, rc.right - rc.left, rc.bottom - rc.top);
}
Thank you for your enthusiasm.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: C# classes in separate files? Should each class in my C# project get its own file (in your opinion)?
A: While a one-public-class-per-file policy is strictly enforced in Java, it's not required by C#. However, it's generally a good idea.
I typically break this rule if I have a very small helper class that is only used by the main class, but I prefer to do that as a nested inner class for clarity's sake.
You can however, split a single class into multiple files using the partial keyword. This is useful for separating your code from wizard-generated code.
A: I personally believe that every class should be in its own file, this includes nested types as well. About the only exceptions to this rule for me are custom delegates.
Most answers have excluded private classes from this rule but I think those should be in their own file as well. Here is a pattern that I currently use for nested types:
Foo.cs: // Contains only Foo implementation
public partial class Foo
{
// Foo implementation
}
Foo.Bar.cs: // Contains only Foo.Bar implementation
public partial class Foo
{
private class Bar
{
// Bar implementation
}
}
A: It depends. Most of the time I would say yes, put them in separate files. But if I had a private helper class that would only be used by one other class (like a Linked List's Node or Element) I wouldn't recommend separating them.
A: As someone who has been coding in large files for years (limited to 1,000 lines), in fact, since I started programming as a child, I was surprised at the huge consensus in this "one class per source file" rule.
The "one class per source file" is not without its problems. If you are working on a lot of things at once, you will have many files open. Sure, you could close files once you're finished with them, but what if you needed to re-open them? There is usually a delay every time I open a file.
I am now going to address points others have made and explain what I think are bad reasons for the "one class per source file" rule. A lot of the problems with multiple classes in one source file are resolved with modern source-editing technology.
*
*"I hate having to scroll up and down" - Bad Reason - Modern IDEs now either have built-in functionality for getting quickly to the code you want or you can install extensions/plugins for that task. Visual Studio's Solution Explorer does this with its search function, but if that's not enough, buy VisualAssist. VisualAssist provides an outline of the items in your source file. No need to scroll, but double-click on what you want.
There is also code-folding. Too much code? Just collapse it into one line! Problem solved.
*"Things are easier to find because they're identified by file" - Bad Reason - Again, modern IDEs make it easy to find what you're looking for. Just use Solution Explorer or buy VisualAssist!! The technology is out there, use it!!
*"Easier to read/too much code" - Bad Reason - I am not blind. I can see. Again, with code-folding I can easily eliminate the clutter and collapse the code I don't need to see. This is not the Stone Age of programming.
*"You will forget where the classes are in large projects" - Bad Reason - Easy solution: Solution Explorer in Visual Studio and the VisualAssist extension.
*"You know what's in a project without opening anything" - Good Reason - no dispute with that one.
*Source Control/Merging - Good Reason - This is actually one good argument in favour of the "one class per source file" rule, especially in team projects. If multiple people are working on the same project. It allows people to see what has changed, at a glance. I can also see how it can complicate merging processes if you use large, multiple-class files.
Source control and merging processes are really the only compelling reason IMO that the "one class per source file" rule should apply. If I'm working on my own individual projects, no, it's not so important.
A: They should be in different files, even when it seems like overkill. It's a mistake I still frequently make.
There always comes a time when you've added enough code to a class that it deserves its own file. If you decide to create a new file for it at that point, then you lose your commit history, which always bites you when you least want it to.
A: Files are cheap, you aren't doing anyone a favor by consolidating many classes into single files.
In Visual Studio, renaming the file in Solution Explorer will rename the class and all references to that class in your project. Even if you rarely use that feature, the cheapness of files and the ease of managing them mean the benefit is infinitely valuable, when divided by its cost.
A: Public classes: yes
Private classes: (needless to say) no
A: I actually prefer pretty big .cs files; 5,000 lines is pretty reasonable IMO, although most of my files at the moment are only about 500-1,000 (in C++, however, I've had some scary files). The Object Browser/Class View, Go to Definition, and incremental search (Ctrl-I; thanks for that tip, Jeff Atwood!) all make finding any specific class or method pretty easy.
This is probably all because I am terrible about closing unneeded tabs.
This is of course highly dependent on how you work, but there are more than enough tools available that you don't need to fall back on horrible old '70s-style file-based source navigation (joking, if it wasn't obvious).
A: As others have said, one file per type in general - although where others have made the public/private distinction, I'd just say "one top-level file per type" (so even top-level internal types get their own files).
I have one exception to this, which is less relevant with the advent of the Func and Action delegate types in .NET 3.5: if I'm defining several delegate types in a project, I often bunch them together in a file called Delegates.cs.
There are other very occasional exceptions too - I recently used partial classes to make several autogenerated classes implement the same interface. They already defined the appropriate methods, so it was just a case of writing:
public partial class MessageDescriptor : IDescriptor<MessageDescriptorProto> {}
public partial class FileDescriptor : IDescriptor<FileDescriptorProto> {}
etc. Putting all of those into their own files would have been slightly silly.
One thing to bear in mind with all of this: using ReSharper makes it easier to get to your classes whether they're in sensibly named files or not. That's not to say that organising them properly isn't a good thing anyway; it's more to reinforce the notion that ReSharper rocks :)
A: Of course! Why wouldn't you? Other than private classes it is silly to have multiple classes in a single file.
A: I think the one-class-per-file approach makes sense. Certainly for different classes, but especially for base and derived classes, whose interactions and dependencies are often non-obvious and error-prone. Separate files makes it straightforward to view/edit base and derived classes side-by-side and scroll independently.
In the days of printed source code listings running to many hundreds of pages (think of a phone book), the "three finger rule" was a good working limit on complexity: if you needed more than three fingers (or paper clips or post-its) as placeholders to understand a module, that module's dependency set was probably too complex. Given that almost no one uses printed source code listings anymore, I'll suggest that this should be updated as the "three window rule" - if you have to open more than three additional windows to understand code displayed in another window, this code probably should be refactored.
A class hierarchy of more than four levels is a code smell, which is in evidence if you need more than four open windows to see the totality of its behavior. Keeping each class in its own file will improve understandability for depth less than four and will give an indirect warning otherwise.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Is there one API to handle the Task Scheduler in XP and Vista? I have a couple applications that I would like to be able to add scheduled tasks from within them. I've been Googling for how to add tasks in both XP and Vista. Apparently, Vista has a new Task Scheduler that is very different from the one in XP.
Does anybody know if there is a single API to tackle both of them, or do I have to code for both in my apps?
A: I think you could use the Task Scheduler COM interface.
Also check out this project.
A: If I recall correctly, the initial release of Vista used the same API as XP.
Server 2008 is supposed to have a much improved scheduler. That would seem to indicate that the API has changed.
I mention 2008 because SP1 for Vista brought much of the code in line with Server 2008.
Good luck and I'll be watching other answers.
A: Ideally, you could use the interface on the OS you're currently running on. You could do this by having an XP and Vista version of your app, for instance.
But Vista is from Microsoft, so the old API is still there for programs to use. The simplest solution is to use the XP API for this version of your app, and require Vista, Server 2K8 or better in the next version or perhaps 2 versions from now and transition to the Task Scheduler 2.0 API then.
A: Just as I suspected. I will have to code to two different APIs then. It will make maintenance harder, but not impossible. I just need to make sure that I put in integration tests that cover both cases.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Java Log Viewer Unfortunately, sometimes the only way to debug a program is by going through its long log files.
I searched for a decent log viewer for a while now, and haven't found a real solution. The only program that seemed to be most appropriate was Chainsaw with its Socket connector but after a few short uses the program proved to be buggy and unresponsive at best.
For my purposes, a log viewer should at least be able to mark log levels (for example with different colors) and perform easy filtering based on packages and free-text.
Is there any other (free) log viewer? I'm looking for anything that could work well with log4j.
A: You can try logFaces, it has fantastic real-time log viewer based on eclipse-like design.
Disclosure: I am the author of this product.
A: Consider using the Log4j viewer Eclipse plugin - it started as a fork of the Ganymede plugin, now has more features, its stability has improved significantly, and it is still in active development and free :)
A: Just wanted to say that I've finally found a tool that I can get along with just fine...
It's called LogExpert (see http://www.log-expert.de/) and is free. Besides the usual tail function, it also has a filter and a search function - two crucial things that are missing from BareTail. And if you happen to want to customize the way it parses columns further, it's dead simple. Just implement an interface in .NET and you're done (and I'm a Java/Flex programmer...)
A: I've always used 'tail -f | grep re' or occasionally 'awk'.
A: LogSaw is based on Eclipse and free. It is a log4j log file analyzer, simple to use with easy filtering. It supports several flavors of log4j log files: JBoss, Log4j pattern layout, Log4j XML layout, WebSphere. Works like a charm. After a couple of hours googling and trying several recommended free log4j viewers, this one was a pleasant surprise. I have tried Chainsaw, BareTail, Insight, LogExpert, logview4j. It was released only weeks ago, and I guess it is still building its way up on Google.
A: I'm using OtrosLogViewer. You can mark log events manually or using a string/regular expression. You can filter events based on level, time, thread, string or regular expression. Logs can be imported by listening on a socket or connecting to a Log4j SocketHubAppender.
You can take a look at Youtube video or screenshots:
Disclaimer: I am the author of OtrosLogViewer
A: I've rolled out Splunk (http://www.splunk.com/) for log viewing and searching with great success. The free version can be used locally and the paid version can collect all your logs into one location. We use it mostly for Log4J logs but with lots of other formats as well.
Beyond tail and grep support (without needing to know grep...) it automatically indexes logs and allows easy analysis (e.g. # of events in last xx timeframe) as well as basic charting, alerting, and event aggregation.
I won't say that the app is perfect or that the company has matured yet. But I don't hesitate at all to recommend that you try it.
A: I'll add that for Windows, WireShark makes for a handy syslog viewer, ironically enough. I've tried several other syslog tools, and really, Kiwi is the best for syslog out there, but the "free" version is a bit nerfed. Others I ran into were either poorly programmed (crashing on minor issues -- logview4net), had a poor interface (Star SysLog Daemon Lite), or didn't even run (nxlog)
You can use WireShark's filter language to drill down on log data. It's overkill, but until someone writes a free syslog viewer/collector for Windows and makes it decent, this is one field that will be a hard one for most people.
Example:
# Display level 6 alerts from 192.168.5.90 in WireShark
syslog.level == 6 && ip.addr == 192.168.5.90
A: LogMX is a cross-platform tool that parses any log format from any source, then displays log entries with many features. By default, it handles formats like Log4j, LogFactor, syslog,... and can read from a local file or SFTP, FTP, HTTP... but you can write your own plugins if your format is different or if your logs cannot be accessed through classical protocols.
You can monitor logs in realtime like 'tail' or load a whole log file and stop monitoring it.
www.logmx.com
A: You didn't mention an OS, so I'll mention this though it is only on Windows.
Bare Metal Software makes a product called BareTail that has a nice interface and works well. They have a free version with a startup nag screen, a licensed version with no nag, and a pro version with additional features. It has configurable highlighting based on matching lines against keywords.
They also have a BareGrep product too, which provides similar grep capabilities. Both are excellent and very stable and better than anything I've seen on Windows. I liked them so much I bought the bundle with both pro versions for $50.
A: I am using Notepad++ with my custom log file highlighting UDL. Looks like this:
A: Depending on what platform you are running on and what other log viewing tools you have available, you can just use the appropriate log4j appender (syslog, Windows Event Logger) and just use your platform log viewing tools.
Other than that I have usually seen custom solutions developed.
Something that will drive your solution is what your overall system is like. Are you trying to aggregate logs from several computers? Or just view the logs from a single remote process?
A: You may want to use a custom log viewer that just works on files. I like Kiwi Log Viewer or Ganymede (an Eclipse plugin), but it's not hard to put a simple Swing app together that reads from the socket.
A: Take a look at http://jlogviewer.sourceforge.net/ or http://sourceforge.net/projects/jlogviewer/
Java log viewer is lightweight GUI to easily view the java application
logs generated by the "java.util.logging" package.
It's open source!!
A: You can use MindTree Insight; it is open source, efficient, and specific to this use case: analyzing log4j files.
A: I have written a custom tool for that: https://plus.google.com/u/0/102275357970232913798/posts/Fsu6qftH2ja
Alfa is a GUI tool for analyzing log files. Usually you are forced to search for data in them using editors. You open a log, press Ctrl-F and the "Next" button again and again, then reload the file as it was modified, and repeat the search. Alfa maps a log file to a database allowing you to use standard SQL queries to get data without any superfluous actions.
UPD: Google killed Google+, so please use this other link instead: https://drive.google.com/drive/folders/0B-hYEtveqA0aN1E3Ul9NVlFlYWM
A: Another good log viewer is Lilith (http://sourceforge.net/projects/lilith/ and http://lilithapp.com/). It is open source and works well with Logback, log4j & java.util.logging.
A: I just published a Node module for color-highlighting log output: log-color-highlight.
echo "this string" | lch -red.bold this -blue string
Works well on Unix/Linux/Windows and supports a config file for complex logging scenarios.
For Windows, I use it in combination with file-tail.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80"
} |
Q: jQuery get textarea text Recently I have started playing with jQuery, and have been following a couple of tutorials. Now I feel slightly competent with using it (it's pretty easy), and I thought it would be cool if I were able to make a 'console' on my webpage (as in, you press the ` key like you do in FPS games, etc.), and then have it Ajax itself back to the server in-order to do stuff.
I originally thought the best way would be to just get the text inside the textarea and then split it. Alternatively, I could use the keyup event, convert the keycode returned to an ASCII character, append the character to a string, and send the string to the server (then empty the string).
I couldn't find any information on getting text from a textarea, all I got was keyup information. Also, how can I convert the keycode returned to an ASCII character?
A: I have figured out that I can convert the keyCode of the event to a character by using the following function:
var character = String.fromCharCode(v_code); // note: 'char' is a reserved word in JavaScript, so use another name
From there I would then append the character to a string and, when the enter key is pressed, send the string to the server. I'm sorry if my question seemed somewhat cryptic, with a title that means something almost completely off-topic - it's early in the morning and I haven't had breakfast yet ;).
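Putting it together, a rough sketch of the whole loop might look like this (the endpoint name is hypothetical, and keyCode-to-character mapping is only reliable for simple alphanumeric input):
var buffer = '';
$('#console').keyup(function (event) {
    if (event.which === 13) { // Enter: send the buffered command
        $.post('console.php', { cmd: buffer });
        buffer = '';
    } else {
        buffer += String.fromCharCode(event.which);
    }
});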
Thanks for all your help guys.
A: Why would you want to convert key strokes to text? Add a button that sends the text inside the textarea to the server when clicked. You can get the text using the value attribute as the poster before has pointed out, or using jQuery's API:
$('input#mybutton').click(function() {
var text = $('textarea#mytextarea').val();
//send to server and process response
});
A: Methinks the word "console" is causing the confusion.
If you want to emulate an old-style full/half duplex console, you'd use something like this:
$('#console').keyup(function(event){
$.get("url", { keyCode: event.which }, ... );
return true;
});
event.which has the key that was pressed. For backspace handling, event.which === 8.
A: you can get textarea data by name and id
// by name
<textarea name="comment"></textarea>
let text_area_data = $('textarea[name="comment"]').val();
// by id
<textarea id="comment" name="comment"></textarea>
let text_area_data = $('textarea#comment').val();
A: Whereas for many elements (e.g. divs) it is the text function you use, for a textarea it is val.
get:
$('#myTextBox').val();
set:
$('#myTextBox').val('new value');
A: The best way to get the value, trimmed of surrounding whitespace (jQuery objects have no .trim() method, so use the $.trim utility):
var text = $.trim($('#myTextBox').val());
A: You should have a div that just contains the console messages, that is, previous commands and their output. And underneath put an input or textarea that just holds the command you are typing.
-------------------------------
| consle output ... |
| more output |
| prevous commands and data |
-------------------------------
> This is an input box.
That way you just send the value of the input box to the server for processing, and append the result to the console messages div.
A: Normally, it's the value property
testArea.value
Or is there something I'm missing in what you need?
A: Read textarea value and code-char conversion:
function keys(e) {
msg.innerHTML = `last key: ${String.fromCharCode(e.keyCode)}`
if(e.key == 'Enter') {
console.log('send: ', mycon.value);
mycon.value='';
e.preventDefault();
}
}
Push enter to 'send'<br>
<textarea id='mycon' onkeydown="keys(event)"></textarea>
<div id="msg"></div>
And below nice Quake like console on div-s only :)
document.addEventListener('keyup', keys);
let conShow = false
function keys(e) {
if (e.code == 'Backquote') {
conShow = !conShow;
mycon.classList.toggle("showcon");
} else {
if (conShow) {
if (e.code == "Enter") {
conTextOld.innerHTML+= '<br>' + conText.innerHTML;
let command=conText.innerHTML.replace(/&nbsp;/g,' '); // turn non-breaking spaces back into plain spaces
conText.innerHTML='';
console.log('Send to server:', command);
}
else if (e.code == "Backspace") {
conText.innerHTML = conText.innerText.slice(0, -1);
} else if (e.code == "Space") {
conText.innerHTML = conText.innerText + '&nbsp;' // non-breaking space, so trailing spaces stay visible
} else {
conText.innerHTML = conText.innerText + e.key;
}
}
}
}
body {
margin: 0
}
.con {
display: flex;
flex-direction: column;
justify-content: flex-end;
align-items: flex-start;
width: 100%;
height: 90px;
background: rgba(255, 0, 0, 0.4);
position: fixed;
top: -90px;
transition: top 0.5s ease-out 0.2s;
font-family: monospace;
}
.showcon {
top: 0px;
}
.conTextOld {
color: white;
}
.line {
display: flex;
flex-direction: row;
}
.conText{ color: yellow; }
.carret {
height: 20px;
width: 10px;
background: red;
margin-left: 1px;
}
.start { color: red; margin-right: 2px}
Click here and Press tilde ` (and Enter for "send")
<div id="mycon" class="con">
<div id='conTextOld' class='conTextOld'>Hello!</div>
<div class="line">
<div class='start'> > </div>
<div id='conText' class="conText"></div>
<div class='carret'></div>
</div>
</div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "523"
} |
Q: is bash getopts function destructive to the command-line options? Can you use the bash "getopts" function twice in the same script?
I have a set of options that would mean different things depending on the value of a specific option. Since I can't guarantee that getopts will evaluate that specific option first, I would like to run getopts one time, using only that specific option, then run it a second time using the other options.
A: Yes, just reset OPTIND afterwards.
#!/bin/bash
set -- -1
while getopts 1 opt; do
case "${opt}" in
1) echo "Worked!";;
*) exit 1;
esac
done
OPTIND=1
set -- -2
while getopts 2 opt; do
case "${opt}" in
2) echo "Worked!";;
*) exit 1;
esac
done
A: getopts does not modify the original arguments, as opposed to the older getopt standalone executable. You can use the bash built-in getopts over and over without modifying your original input.
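A quick sketch that shows the arguments are left intact:
#!/bin/bash
while getopts "ab:" opt; do
  case "${opt}" in
    a) echo "flag -a seen";;
    b) echo "-b = ${OPTARG}";;
  esac
done
# The positional parameters are untouched:
echo "original arguments: $@"
Running it as ./demo.sh -a -b foo prints both options and then the original argument list unchanged.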
See the bash man page for more info.
HTH.
cheers,
Rob
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Most Useful Attributes I know that attributes are extremely useful. There are some predefined ones such as [Browsable(false)] which allows you to hide properties in the properties tab. Here is a good question explaining attributes: What are attributes in .NET?
What are the predefined attributes (and their namespace) you actually use in your projects?
A: I always use the DisplayName, Description and DefaultValue attributes over public properties of my user controls, custom controls or any class I'll edit through a property grid. These tags are used by the .NET PropertyGrid to format the name, fill the description panel, and bold any values that are not set to their default values.
[DisplayName("Error color")]
[Description("The color used on nodes containing errors.")]
[DefaultValue(Color.Red)]
public Color ErrorColor
{
...
}
I just wish Visual Studio's IntelliSense would take the Description attribute into account if no XML comments are found. It would avoid having to repeat the same sentence twice.
A: The attributes I use the most are the ones related to XML Serialization.
XmlRoot
XmlElement
XmlAttribute
etc...
Extremely useful when doing any quick and dirty XML parsing or serializing.
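For instance, a minimal sketch (the type and element names here are invented):
using System;
using System.Xml.Serialization;

[XmlRoot("product")]
public class Product
{
    [XmlAttribute("id")]
    public int Id;

    [XmlElement("name")]
    public string Name;
}

// new XmlSerializer(typeof(Product)).Serialize(Console.Out,
//     new Product { Id = 1, Name = "Widget" });
// emits something like: <product id="1"><name>Widget</name></product>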
A: Being a middle tier developer I like
System.ComponentModel.EditorBrowsableAttribute allows me to hide properties so that the UI developer is not overwhelmed with properties that they don't need to see.
System.ComponentModel.BindableAttribute - some things don't need to be databound. Again, it lessens the work the UI developers need to do.
I also like the DefaultValue that Lawrence Johnston mentioned.
System.ComponentModel.BrowsableAttribute and the Flags are used regularly.
I use
System.STAThreadAttribute
System.ThreadStaticAttribute
when needed.
By the way. I these are just as valuable for all the .Net framework developers.
A: [EditorBrowsable(EditorBrowsableState.Never)] allows you to hide properties and methods from IntelliSense if the project is not in your solution. Very helpful for hiding invalid flows for fluent interfaces. How often do you want to GetHashCode() or Equals()?
For MVC [ActionName("Name")] allows you to have a Get action and Post action with the same method signature, or to use dashes in the action name, which otherwise would not be possible without creating a route for it.
A: I consider it important to mention here that the following attributes are also very important:
STAThreadAttribute
Indicates that the COM threading model for an application is single-threaded apartment (STA).
For example this attribute is used in Windows Forms Applications:
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
}
}
And also ...
SuppressMessageAttribute
Suppresses reporting of a specific static analysis tool rule violation, allowing multiple suppressions on a single code artifact.
For example:
[SuppressMessage("Microsoft.Performance", "CA1801:ReviewUnusedParameters", MessageId = "isChecked")]
[SuppressMessage("Microsoft.Performance", "CA1804:RemoveUnusedLocals", MessageId = "fileIdentifier")]
static void FileNode(string name, bool isChecked)
{
string fileIdentifier = name;
string fileName = name;
string version = String.Empty;
}
A: Off the top of my head, here is a quick list, roughly sorted by frequency of use, of predefined attributes I actually use in a big project (~500k LoCs):
Flags, Serializable, WebMethod, COMVisible, TypeConverter, Conditional, ThreadStatic, Obsolete, InternalsVisibleTo, DebuggerStepThrough.
A: [Serializable] is used all the time for serializing and deserializing objects to and from external data sources such as xml or from a remote server. More about it here.
A: [DebuggerDisplay] can be really helpful to quickly see customized output of a Type when you mouse over the instance of the Type during debugging. example:
[DebuggerDisplay("FirstName={FirstName}, LastName={LastName}")]
class Customer
{
public string FirstName;
public string LastName;
}
This is how it should look in the debugger:
Also, it is worth mentioning that [WebMethod] attribute with CacheDuration property set can avoid unnecessary execution of the web service method.
A: [DeploymentItem("myFile1.txt")]
MSDN Doc on DeploymentItem
This is really useful if you are testing against a file or using the file as input to your test.
A: I generate data entity classes via CodeSmith, and I use attributes for some validation routines. Here is an example:
/// <summary>
/// Firm ID
/// </summary>
[ChineseDescription("送样单位编号")]
[ValidRequired()]
public string FirmGUID
{
get { return _firmGUID; }
set { _firmGUID = value; }
}
And I have a utility class to do the validation based on the attributes attached to the data entity class. Here is the code:
namespace Reform.Water.Business.Common
{
/// <summary>
/// Validation Utility
/// </summary>
public static class ValidationUtility
{
/// <summary>
/// Data entity validation
/// </summary>
/// <param name="data">Data entity object</param>
/// <returns>return true if the object is valid, otherwise return false</returns>
public static bool Validate(object data)
{
bool result = true;
PropertyInfo[] properties = data.GetType().GetProperties();
foreach (PropertyInfo p in properties)
{
//Length validatioin
Attribute attribute = Attribute.GetCustomAttribute(p,typeof(ValidLengthAttribute), false);
if (attribute != null)
{
ValidLengthAttribute validLengthAttribute = attribute as ValidLengthAttribute;
if (validLengthAttribute != null)
{
int maxLength = validLengthAttribute.MaxLength;
int minLength = validLengthAttribute.MinLength;
string stringValue = p.GetValue(data, null).ToString();
if (stringValue.Length < minLength || stringValue.Length > maxLength)
{
return false;
}
}
}
//Range validation
attribute = Attribute.GetCustomAttribute(p,typeof(ValidRangeAttribute), false);
if (attribute != null)
{
ValidRangeAttribute validRangeAttribute = attribute as ValidRangeAttribute;
if (validRangeAttribute != null)
{
decimal maxValue = decimal.MaxValue;
decimal minValue = decimal.MinValue;
decimal.TryParse(validRangeAttribute.MaxValueString, out maxValue);
decimal.TryParse(validRangeAttribute.MinValueString, out minValue);
decimal decimalValue = 0;
decimal.TryParse(p.GetValue(data, null).ToString(), out decimalValue);
if (decimalValue < minValue || decimalValue > maxValue)
{
return false;
}
}
}
//Regex validation
attribute = Attribute.GetCustomAttribute(p,typeof(ValidRegExAttribute), false);
if (attribute != null)
{
ValidRegExAttribute validRegExAttribute = attribute as ValidRegExAttribute;
if (validRegExAttribute != null)
{
string objectStringValue = p.GetValue(data, null).ToString();
string regExString = validRegExAttribute.RegExString;
Regex regEx = new Regex(regExString);
if (!regEx.IsMatch(objectStringValue)) // Regex.Match never returns null; test for a match instead
{
return false;
}
}
}
//Required field validation
attribute = Attribute.GetCustomAttribute(p,typeof(ValidRequiredAttribute), false);
if (attribute != null)
{
ValidRequiredAttribute validRequiredAttribute = attribute as ValidRequiredAttribute;
if (validRequiredAttribute != null)
{
object requiredPropertyValue = p.GetValue(data, null);
if (requiredPropertyValue == null || string.IsNullOrEmpty(requiredPropertyValue.ToString()))
{
return false;
}
}
}
}
return result;
}
}
}
A: In Hofstadtian spirit, the [Attribute] attribute is very useful, since it's how you create your own attributes. I've used attributes instead of interfaces to implement plugin systems, add descriptions to Enums, simulate multiple dispatch and other tricks.
A: [System.Security.Permissions.PermissionSetAttribute] allows security actions for a PermissionSet to be applied to code using declarative security.
// usage:
public class FullConditionUITypeEditor : UITypeEditor
{
// The immediate caller is required to have been granted the FullTrust permission.
[PermissionSetAttribute(SecurityAction.LinkDemand, Name = "FullTrust")]
public FullConditionUITypeEditor() { }
}
A: Here is a post about the interesting attribute InternalsVisibleTo. Basically, what it does is mimic C++'s friend access functionality. It comes in very handy for unit testing.
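Typical usage is a single assembly-level line (the friend assembly name here is made up). Note that if the assemblies are strong-named, the attribute must specify the friend assembly's full public key:
// In AssemblyInfo.cs of the assembly whose internals you want to expose:
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProduct.Tests")]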
A: I've found [DefaultValue] to be quite useful.
A: I'd suggest [TestFixture] and [Test] - from the nUnit library.
Unit tests in your code provide safety in refactoring and codified documentation.
A: System.Obsolete is one of the most useful attributes in the framework, in my opinion. The ability to raise a warning about code that should no longer be used is very useful. I love having a way to tell developers that something should no longer be used, as well as having a way to explain why and point to the better/new way of doing something.
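For example (the method names here are invented; passing true as a second argument to [Obsolete] turns the warning into a compile-time error):
[Obsolete("Use NewMethod instead - it handles the edge cases this one misses.")]
public void OldMethod()
{
    NewMethod();
}

public void NewMethod()
{
    // ...
}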
The Conditional attribute is pretty handy too for debug usage. It allows you to add methods in your code for debug purposes that won't get compiled when you build your solution for release.
Then there are a lot of attributes specific to Web Controls that I find useful, but those are more specific and don't have any uses outside of the development of server controls from what I've found.
A: [XmlIgnore]
as this allows you to ignore (in any xml serialisation) 'parent' objects that would otherwise cause exceptions when saving.
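For example, a minimal sketch of a type with a parent back-reference that would otherwise create a cycle during serialization:
public class Node
{
    public string Name;

    [System.Xml.Serialization.XmlIgnore] // skipped by the serializer, breaking the cycle
    public Node Parent;

    public System.Collections.Generic.List<Node> Children =
        new System.Collections.Generic.List<Node>();
}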
A: I like using the [ThreadStatic] attribute in combination with thread and stack based programming. For example, if I want a value that I want to share with the rest of a call sequence, but I want to do it out of band (i.e. outside of the call parameters), I might employ something like this.
class MyContextInformation : IDisposable {
[ThreadStatic] private static MyContextInformation current;
public static MyContextInformation Current {
get { return current; }
}
private MyContextInformation previous;
public MyContextInformation(Object myData) {
this.myData = myData;
previous = current;
current = this;
}
public void Dispose() {
current = previous;
}
}
Later in my code, I can use this to provide contextual information out of band to people downstream from my code. Example:
using(new MyContextInformation(someInfoInContext)) {
...
}
The ThreadStatic attribute allows me to scope the call only to the thread in question avoiding the messy problem of data access across threads.
A: It's not well-named, not well-supported in the framework, and shouldn't require a parameter, but this attribute is a useful marker for immutable classes:
[ImmutableObject(true)]
A: The DebuggerHiddenAttribute allows you to avoid stepping into code which should not be debugged.
public static class CustomDebug
{
[DebuggerHidden]
public static void Assert(Boolean condition, Func<Exception> exceptionCreator) { ... }
}
...
// The following assert fails, and because of the attribute the exception is shown at this line
// Isn't affecting the stack trace
CustomDebug.Assert(false, () => new Exception());
It also prevents methods from showing up in the stack trace, which is useful when you have a method that just wraps another method:
[DebuggerHidden]
public Element GetElementAt(Vector2 position)
{
return GetElementAt(position.X, position.Y);
}
public Element GetElementAt(Single x, Single y) { ... }
If you now call GetElementAt(new Vector2(10, 10)) and an error occurs in the wrapped method, the call stack does not show the hidden wrapper method sitting between the caller and the method that throws the error.
A: DesignerSerializationVisibilityAttribute is very useful. When you put a runtime property on a control or component, and you don't want the designer to serialize it, you use it like this:
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public Foo Bar {
get { return baz; }
set { baz = value; }
}
A: [Flags] is pretty handy. Syntactic sugar to be sure, but still rather nice.
[Flags]
enum SandwichStuff
{
Cheese = 1,
Pickles = 2,
Chips = 4,
Ham = 8,
Eggs = 16,
PeanutButter = 32,
Jam = 64
};
public Sandwich MakeSandwich(SandwichStuff stuff)
{
Console.WriteLine(stuff.ToString());
// ...
}
// ...
MakeSandwich(SandwichStuff.Cheese
| SandwichStuff.Ham
| SandwichStuff.PeanutButter);
// produces console output: "Cheese, Ham, PeanutButter"
Leppie points out something I hadn't realized, and which rather dampens my enthusiasm for this attribute: it does not instruct the compiler to allow bit combinations as valid values for enumeration variables, the compiler allows this for enumerations regardless. My C++ background showing through... sigh
A: // on configuration sections
[ConfigurationProperty]
// in asp.net
[NotifyParentProperty(true)]
A: I always use attributes,
[Serializable], [WebMethod], [DefaultValue], [Description("description here")].
but besides those, there are also global (assembly-level) attributes in C#:
[assembly: System.CLSCompliant(true)]
[assembly: AssemblyCulture("")]
[assembly: AssemblyDescription("")]
A: I like [DebuggerStepThrough] from System.Diagnostics.
It's very handy for avoiding stepping into those one-line do-nothing methods or properties (if you're forced to work in an early .Net without automatic properties). Put the attribute on a short method or the getter or setter of a property, and you'll fly right by even when hitting "step into" in the debugger.
A: Only a few attributes get compiler support, but one very interesting use of attributes is in AOP: PostSharp uses your bespoke attributes to inject IL into methods, allowing all manner of abilities... log/trace being trivial examples - but some other good examples are things like automatic INotifyPropertyChanged implementation (here).
Some that occur and impact the compiler or runtime directly:
*
*[Conditional("FOO")] - calls to this method (including argument evaluation) only occur if the "FOO" symbol is defined during build
*[MethodImpl(...)] - used to indicate a few thing like synchronization, inlining
*[PrincipalPermission(...)] - used to inject security checks into the code automatically
*[TypeForwardedTo(...)] - used to move types between assemblies without rebuilding the callers
For things that are checked manually via reflection - I'm a big fan of the System.ComponentModel attributes; things like [TypeDescriptionProvider(...)], [TypeConverter(...)], and [Editor(...)] which can completely change the behavior of types in data-binding scenarios (i.e. dynamic properties etc).
A: If I were to do a code coverage crawl, I think these two would be top:
[Serializable]
[WebMethod]
A: I have been using the [DataObjectMethod] lately. It describes the method so you can use your class with the ObjectDataSource ( or other controls).
[DataObjectMethod(DataObjectMethodType.Select)]
[DataObjectMethod(DataObjectMethodType.Delete)]
[DataObjectMethod(DataObjectMethodType.Update)]
[DataObjectMethod(DataObjectMethodType.Insert)]
More info
A: For what it's worth, here's a list of all .NET attributes. There are several hundred.
I don't know about anyone else but I have some serious RTFM to do!
A: My vote would be for Conditional
[Conditional("DEBUG")]
public void DebugOnlyFunction()
{
// your code here
}
You can use this to add a function with advanced debugging features; like Debug.Write, it is only called in debug builds, and so allows you to encapsulate complex debug logic outside the main flow of your program.
A: [TypeConverter(typeof(ExpandableObjectConverter))]
Tells the designer to expand the properties which are classes (of your control)
[Obfuscation]
Instructs obfuscation tools to take the specified actions for an assembly, type, or member. (Although typically you use an Assembly level [assembly:ObfuscateAssemblyAttribute(true)]
A: In our current project, we use
[ComVisible(false)]
It controls accessibility of an individual managed type or member, or of all types within an assembly, to COM.
More Info
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "797"
} |
Q: Silverlight RC0 Upgrade Issue Since upgrading to RC0 from SL2 B2 the following line of code in my Page.xaml gives a AG_E_PARSER_BAD_PROPERTY_VALUE...anyone have any ideas? I didn't see anything in the breaking changes document. It seems to be the binding mode in the snippet below.
ItemsSource="{Binding Mode=TwoWay, Source={StaticResource ProductListDS}}"
A: Are you trying to do this in a header template? If so, this appears to be a bug in RC0
http://silverlight.net/forums/p/30592/98478.aspx
Also it appears to be an issue with the new ComboBox control in RC0.
Bart Czernicki
www.silverlighthack.com
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Bluetooth APIs in Windows/.Net? I am in the process of writing a Bluetooth scanner that locates and identifies mobile devices in the local vicinity. Is this something that I can accomplish using C#, or do I need to drop down into the C/C++ APIs? My application is targeting Windows XP and Vista. Pointers are appreciated.
Thanks!
A: Mike Petrichenko has a nice BT framework. It works with BlueSoleil, Widcomm, Toshiba and Microsoft.
It is now called the Wireless Communications Library and works with Bluetooth 802.11 and Infrared. Mike named the company Soft Service Company and sells non-commercial and commercial licenses with and without source code in prices ranging between $100 and $2050.
A: One problem with Bluetooth on the PC is that there are several BT stacks in use and you can never quite know which one is available on a given machine. The most common ones are Widcomm (now Broadcom) and Microsoft (appeared in XP, maybe one of the service packs). However, some BT hardware vendors package BlueSoleil and some use Toshiba. Most dongles will work with the MS stack so the .NET libs I've seen tend to use that.
Each of the stacks has a totally different way of doing the discovery part, where you browse for nearby devices and query their services.
If I had to pick one approach today I'd probably do the discovery in C++ and add an interface for .NET.
The 32feet.net stuff worked pretty well when I tried it but didn't support the Widcomm stack.
A: There is also Peter Foot's 32feet.net
http://inthehand.com/content/32feet.aspx
I've played around with this back when it was v1.5 and it worked well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: JFreeChart XYPlot with labels I'm using an XYPlot in JFreeChart. All the lines on it are XYSeries objects. Both axes are NumberAxis objects. The Y-Axis range is from 0-1, with ticks every .1. Along with displaying the numbers though, I'd like to display text on the Y-Axis, like High/Medium/Low. High would cover .7-1, etc. What is the best way to go about doing this?
A: I have some JFreeChart experience, and after a little research I don't have an answer for adding the three labels to the axis.
However, as an alternative approach, you should be able to delineate these three areas on the plot with colors by setting a MarkerAxisBand for the NumberAxis (using this method).
You could then add interval markers to the MarkerAxisBand to highlight the three areas.
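An untested sketch of what that could look like (the gaps, font, and the Medium/Low boundaries are arbitrary placeholders):
NumberAxis yAxis = (NumberAxis) plot.getRangeAxis();
MarkerAxisBand band = new MarkerAxisBand(yAxis, 2.0, 2.0, 2.0, 2.0,
        new Font("SansSerif", Font.PLAIN, 9));
IntervalMarker high = new IntervalMarker(0.7, 1.0);
high.setLabel("High");
band.addMarker(high);
// ... add "Medium" and "Low" markers for the remaining ranges the same way ...
yAxis.setMarkerBand(band);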
A: try this ... it can give a similar result
JFreeChart Text Annotations not Working?
XYTextAnnotation textAnnotation = new XYTextAnnotation(description, xMid, yMid);
textAnnotation.setRotationAngle(Math.PI / 2); // 90 degrees - note the angle is in radians
plot.addAnnotation(textAnnotation);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to judge the strength of a directed acyclic graph? Curious what is recognized as a solid algorithm/approach for judging the strength of a directed acyclic graph - particularly the strength of certain nodes. The main question I have about this can be boiled down to the following two graphs:
(if the graph doesn't show up, click here or visit this link: http://www.flickr.com/photos/86396568@N00/2893003041/)
To my eyes, A is in a stronger position than a. I am judging strength by how strong a node can remain if a link is knocked out. I'd call the first a thin "stilt", and the second a thick "stalk".
Here are the approaches I've considered so far for judging the strength of a node:
1) Counting the number of nodes below, subtracting the number of nodes above.
*
*A=7, a=7, B=5, b=1
2) Counting the number of complete paths (to termination) for each node, summing their lengths.
*
*A=17 (1+5+5+5+1), B=12 (4+4+4), a=9 (3+3+3), b=2
*This makes the stilt stronger, rather than the stalk.
3) Counting every possible path, treating every node as a destination.
*
*A=9 (A->B, A->C, A->D, A->E, A->G, 2xA->F, 2xA->H), B=6, a=9, b=2
3 seems like the best option so far, but is there one that is better, generalized for DAGs? Is this something that has a known best approach? The principles are to use as much information in the graph as possible, and for the solution to be explainable in an intuitive way.
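For concreteness, approach 3 can be computed with a simple memoized traversal; a small sketch (Python, toy graph):
def path_count(graph, node, memo=None):
    """Count all distinct downward paths starting at `node`."""
    if memo is None:
        memo = {}
    if node not in memo:
        memo[node] = sum(1 + path_count(graph, child, memo)
                         for child in graph.get(node, []))
    return memo[node]

# A beat B and C; B and C each beat D:
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D']}
print(path_count(graph, 'A'))  # 4: A->B, A->B->D, A->C, A->C->D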
A: This really depends on what you mean by strength. Because of the versatility of the DAG in representing information, you could be discussing anything from a multiple-outcome control flow to argument clauses of non-adverbial discourse connectives or even the full set of dependencies between different words in a sentence.
All of these would view node strength in a different way. For instance, a control flow may consider the node with the most amount of outcomes (therefore the most outward arcs) as the strongest, because it has the most power over the eventual outcome of the diagram. In discourse, the strongest node is the discourse connective, but it is found in speech and text after the first connective and before the third. Selection of the lexical "head" of a sentence is not directly related to the amount of arcs directly interacting with it.
What I'm getting at is that there is no real panacea for computing "strength" in this data type because of the polysemous character of the term "strength" and the kind of data a DAG fits. I would say that in a machine learning problem, all three of these approaches would be very informative in selecting particular types of nodes for a classification or ranking problem, but in the end, the answer depends upon the data type's practical application.
A: I think you need to define "strength" more clearly. Is this related to the maximum flow problem?
A: Okay, the practical application is sports teams. Each node is a team, each link is a victory over another team. Assume there are no circular victory paths, like A->B->C->A. The objective is to get a power ranking that doesn't conflict with the graph, and ranks the teams in order of a team's strength. The site in question is my (somewhat tongue-in-cheek) football site, http://beatpaths.com/ , where you can see full graphs of the entire NFL season each week. (And other sports.) I'm basically looking for ranking algorithms other than the ones I listed above that might make even more sense and can be defended as using all the information in the graph. The goal isn't necessarily to be more accurate in terms of future picks (although a stronger algorithm probably would be), but instead to be as reasonably descriptive as possible of the season so far.
You can see Week 3 of the NFL Season up on the site. I've removed two "beatloops" (long story) that were ambiguous, relying on the rest of the graph to determine the pecking order.
A: If you're looking to make predictions, your best bet is probably going to be a maximum entropy ranking algorithm. The problem is going to be developing a large enough data set for your learner--the more, the better. It sounds like you can use the standings at each week of games as a single ranking instance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Reading call history in iPhone OS I'm doing some research on the feasibility of an iPhone application, and can't find any indication in Apple's documentation that an iPhone app can read the call history of the phone, specifically the number/address book entry called, when, and the duration.
Does anyone know if this is possible, and how?
Note: The purpose is to remove the need for the user to perform this data-entry themselves. The application is for recording interactions with customer service centers.
A: You can access call history on the Mac by sniffing around the iTunes directory. There are apps out there that do this.
A: AFAIK you can't access call history. The address book is a database of contacts, not call information.
You can read more about the address book in the SDK's "Address Book Programming Guide for iPhone OS."
A: Seems the only way is to read the log from the iTunes side but now from the phone:
http://arstechnica.com/apple/news/2007/11/iphonelogd-another-solution-for-viewing-your-iphone-call-log.ars
A: I did some reading which states that you can access the call history on the iPhone. It may be dated, but it's worth a shot. Apparently the history is/was held in just a SQLite db, in a table called call. The db is/was located at /private/var/mobile/library/CallHistory/call_history.db
If you use FMDB, you can simply do something like this.
FMResultSet *rs = [db executeQuery:@"Select * from Call"];
to get the call history
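Fleshed out a little, a sketch might look like this (the column names are assumptions about that table's schema):
FMDatabase *db = [FMDatabase databaseWithPath:
    @"/private/var/mobile/library/CallHistory/call_history.db"];
if ([db open]) {
    FMResultSet *rs = [db executeQuery:@"SELECT address, date, duration FROM call"];
    while ([rs next]) {
        NSLog(@"%@ at %d for %d seconds",
              [rs stringForColumn:@"address"],
              [rs intForColumn:@"date"],
              [rs intForColumn:@"duration"]);
    }
    [db close];
}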
A: Unfortunately you can't access the call history. The only User Data you have API access to is the address book. You can also access photos/pictures but only by starting an iPhone-controlled dialog that allows the user to choose a single image.
It's a bit sucky, hopefully this will be expanded in future versions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How to center a Window in Java? What's the easiest way to centre a java.awt.Window, such as a JFrame or a JDialog?
A: This should work in all versions of Java
public static void centreWindow(Window frame) {
Dimension dimension = Toolkit.getDefaultToolkit().getScreenSize();
int x = (int) ((dimension.getWidth() - frame.getWidth()) / 2);
int y = (int) ((dimension.getHeight() - frame.getHeight()) / 2);
frame.setLocation(x, y);
}
A: I finally got this bunch of code to work in NetBeans, using Swing GUI Forms, in order to center the main jFrame:
package my.SampleUIdemo;
import java.awt.*;
public class classSampleUIdemo extends javax.swing.JFrame {
///
public classSampleUIdemo() {
initComponents();
CenteredFrame(this); // <--- Here ya go.
}
// ...
// void main() and other public method declarations here...
/// modular approach
public void CenteredFrame(javax.swing.JFrame objFrame){
Dimension objDimension = Toolkit.getDefaultToolkit().getScreenSize();
int iCoordX = (objDimension.width - objFrame.getWidth()) / 2;
int iCoordY = (objDimension.height - objFrame.getHeight()) / 2;
objFrame.setLocation(iCoordX, iCoordY);
}
}
OR
package my.SampleUIdemo;
import java.awt.*;
public class classSampleUIdemo extends javax.swing.JFrame {
///
public classSampleUIdemo() {
initComponents();
//------>> Insert your code here to center main jFrame.
Dimension objDimension = Toolkit.getDefaultToolkit().getScreenSize();
int iCoordX = (objDimension.width - this.getWidth()) / 2;
int iCoordY = (objDimension.height - this.getHeight()) / 2;
this.setLocation(iCoordX, iCoordY);
//------>>
}
// ...
// void main() and other public method declarations here...
}
OR
package my.SampleUIdemo;
import java.awt.*;
public class classSampleUIdemo extends javax.swing.JFrame {
///
public classSampleUIdemo() {
initComponents();
this.setLocationRelativeTo(null); // <<--- plain and simple
}
// ...
// void main() and other public method declarations here...
}
A: setLocationRelativeTo(null) should be called after you either use setSize(x,y), or use pack().
A: below is code for displaying a frame at top-centre of existing window.
import java.awt.*;
import java.awt.event.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

// ScreenCaptureRectangle is a separate helper class, not shown here.
public class SwingContainerDemo {
private JFrame mainFrame;
private JPanel controlPanel;
private JLabel msglabel;
public SwingContainerDemo() {
mainFrame = new JFrame();
mainFrame.setLayout(new FlowLayout());
mainFrame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent windowEvent){
System.exit(0);
}
});
//headerLabel = new JLabel("", JLabel.CENTER);
/* statusLabel = new JLabel("",JLabel.CENTER);
statusLabel.setSize(350,100);
*/ msglabel = new JLabel("Welcome to TutorialsPoint SWING Tutorial.", JLabel.CENTER);
controlPanel = new JPanel();
controlPanel.setLayout(new FlowLayout());
//mainFrame.add(headerLabel);
mainFrame.add(controlPanel);
// mainFrame.add(statusLabel);
mainFrame.setUndecorated(true);
mainFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
mainFrame.getRootPane().setWindowDecorationStyle(JRootPane.NONE);
mainFrame.setVisible(true);
centreWindow(mainFrame);
}
public static void centreWindow(Window frame) {
Dimension dimension = Toolkit.getDefaultToolkit().getScreenSize();
int x = (int) ((dimension.getWidth() - frame.getWidth()) / 2);
int y = (int) ((dimension.getHeight() - frame.getHeight()) / 2);
frame.setLocation(x, 0);
}
public void showJFrameDemo(){
/* headerLabel.setText("Container in action: JFrame"); */
final JFrame frame = new JFrame();
frame.setSize(300, 300);
frame.setLayout(new FlowLayout());
frame.add(msglabel);
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent windowEvent){
frame.dispose();
}
});
JButton okButton = new JButton("Capture");
okButton.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
// statusLabel.setText("A Frame shown to the user.");
// frame.setVisible(true);
mainFrame.setState(Frame.ICONIFIED);
Robot robot = null;
try {
robot = new Robot();
} catch (AWTException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
final Dimension screenSize = Toolkit.getDefaultToolkit().
getScreenSize();
final BufferedImage screen = robot.createScreenCapture(
new Rectangle(screenSize));
SwingUtilities.invokeLater(new Runnable() {
public void run() {
new ScreenCaptureRectangle(screen);
}
});
mainFrame.setState(Frame.NORMAL);
}
});
controlPanel.add(okButton);
mainFrame.setVisible(true);
}
public static void main(String[] args) throws Exception {
new SwingContainerDemo().showJFrameDemo();
}
}
A: frame.setLocationRelativeTo(null);
Full example:
public class BorderLayoutPanel {
private JFrame mainFrame;
private JButton btnLeft, btnRight, btnTop, btnBottom, btnCenter;
public BorderLayoutPanel() {
mainFrame = new JFrame("Border Layout Example");
btnLeft = new JButton("LEFT");
btnRight = new JButton("RIGHT");
btnTop = new JButton("TOP");
btnBottom = new JButton("BOTTOM");
btnCenter = new JButton("CENTER");
}
public void SetLayout() {
mainFrame.add(btnTop, BorderLayout.NORTH);
mainFrame.add(btnBottom, BorderLayout.SOUTH);
mainFrame.add(btnLeft, BorderLayout.WEST);
mainFrame.add(btnRight, BorderLayout.EAST);
mainFrame.add(btnCenter, BorderLayout.CENTER);
// mainFrame.setSize(200, 200);
// or
mainFrame.pack();
mainFrame.setVisible(true);
//take up the default look and feel specified by windows themes
mainFrame.setDefaultLookAndFeelDecorated(true);
//make the window startup position be centered
mainFrame.setLocationRelativeTo(null);
mainFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
}
A: The following doesn't work for JDK 1.7.0.07:
frame.setLocationRelativeTo(null);
It puts the top left corner at the center - not the same as centering the window. The other one doesn't work either, involving frame.getSize() and dimension.getSize():
Dimension dimension = Toolkit.getDefaultToolkit().getScreenSize();
int x = (int) ((dimension.getWidth() - frame.getWidth()) / 2);
int y = (int) ((dimension.getHeight() - frame.getHeight()) / 2);
frame.setLocation(x, y);
The getSize() method is inherited from the Component class, and therefore frame.getSize() returns the size of the window as well. Subtracting half the window's dimensions from half the screen's dimensions should give the top-left coordinates that center the window, but if the frame has not been laid out yet, getSize() returns zero, so the computed "center" ends up placing the top-left corner at the center point of the screen, the same problem as above. However, the first line of the above code is useful, the "Dimension..." one. Just do this to center it:
Dimension dimension = Toolkit.getDefaultToolkit().getScreenSize();
JLabel emptyLabel = new JLabel("");
emptyLabel.setPreferredSize(new Dimension( (int)dimension.getWidth() / 2, (int)dimension.getHeight()/2 ));
frame.getContentPane().add(emptyLabel, BorderLayout.CENTER);
frame.setLocation((int)dimension.getWidth()/4, (int)dimension.getHeight()/4);
The JLabel sets the frame's size (the example comes from FrameDemo.java, available in the Java tutorials on the Oracle/Sun site). I set it to half the screen size's height/width. Then I centered the frame by placing the top left at 1/4 of the screen's width from the left, and 1/4 of the screen's height from the top. You can use a similar concept.
A: From this link
If you are using Java 1.4 or newer, you can use the simple method setLocationRelativeTo(null) on the dialog box, frame, or window to center it.
A: Note that both the setLocationRelativeTo(null) and Toolkit.getDefaultToolkit().getScreenSize() techniques work only for the primary monitor. If you are in a multi-monitor environment, you may need to get information about the specific monitor the window is on before doing this kind of calculation.
Sometimes important, sometimes not...
See GraphicsEnvironment javadocs for more info on how to get this.
A: On Linux the code
setLocationRelativeTo(null)
put my window at a random location each time I launched it in a multi-display environment.
And the code
setLocation((Toolkit.getDefaultToolkit().getScreenSize().width - getSize().width) / 2, (Toolkit.getDefaultToolkit().getScreenSize().height - getSize().height) / 2);
"cut" the window in half with placing it to the exact center, which is between my two displays.
I used the following method to center it:
private void setWindowPosition(JFrame window, int screen)
{
GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice[] allDevices = env.getScreenDevices();
int topLeftX, topLeftY, screenX, screenY, windowPosX, windowPosY;
if (screen < allDevices.length && screen > -1)
{
topLeftX = allDevices[screen].getDefaultConfiguration().getBounds().x;
topLeftY = allDevices[screen].getDefaultConfiguration().getBounds().y;
screenX = allDevices[screen].getDefaultConfiguration().getBounds().width;
screenY = allDevices[screen].getDefaultConfiguration().getBounds().height;
}
else
{
topLeftX = allDevices[0].getDefaultConfiguration().getBounds().x;
topLeftY = allDevices[0].getDefaultConfiguration().getBounds().y;
screenX = allDevices[0].getDefaultConfiguration().getBounds().width;
screenY = allDevices[0].getDefaultConfiguration().getBounds().height;
}
windowPosX = ((screenX - window.getWidth()) / 2) + topLeftX;
windowPosY = ((screenY - window.getHeight()) / 2) + topLeftY;
window.setLocation(windowPosX, windowPosY);
}
Makes the window appear right at the center of the first display.
This is probably not the easiest solution.
Works properly on Linux, Windows and Mac.
A: There's something really simple that you might be overlooking after trying to center the window using either setLocationRelativeTo(null) or setLocation(x,y) and it ends up being a little off center.
Make sure that you use either one of these methods after calling pack(), because you'll end up using the dimensions of the window itself to calculate where to place it on screen. Until pack() is called, the dimensions aren't what you'd think, which throws off the calculations to center the window. Hope this helps.
A: Example: Inside myWindow() on line 3 is the code you need to set the window in the center of the screen.
JFrame window;
public myWindow() {
window = new JFrame();
window.setSize(1200,800);
window.setLocationRelativeTo(null); // this line sets the window in the center of the screen
window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
window.getContentPane().setBackground(Color.BLACK);
window.setLayout(null); // disable the default layout to use custom one.
window.setVisible(true); // to show the window on the screen.
}
A: The following code centers the Window on the current monitor (i.e. where the mouse pointer is located).
public static final void centerWindow(final Window window) {
GraphicsDevice screen = MouseInfo.getPointerInfo().getDevice();
Rectangle r = screen.getDefaultConfiguration().getBounds();
int x = (r.width - window.getWidth()) / 2 + r.x;
int y = (r.height - window.getHeight()) / 2 + r.y;
window.setLocation(x, y);
}
A: You could try this also.
Frame frame = new Frame("Centered Frame");
Dimension dimension = Toolkit.getDefaultToolkit().getScreenSize();
frame.setLocation(dimension.width/2-frame.getSize().width/2, dimension.height/2-frame.getSize().height/2);
A: Actually, frame.getHeight() and getWidth() don't return meaningful values before the frame is sized; check it with System.out.println(frame.getHeight()). Put the values for width and height in directly and then the frame will be centered fine. E.g. as below:
Dimension dimension = Toolkit.getDefaultToolkit().getScreenSize();
int x=(int)((dimension.getWidth() - 450)/2);
int y=(int)((dimension.getHeight() - 450)/2);
jf.setLocation(x, y);
Both 450s are my frame's width and height.
A: public class SwingExample implements Runnable {
@Override
public void run() {
// Create the window
final JFrame f = new JFrame("Hello, World!");
SwingExample.centerWindow(f);
f.setPreferredSize(new Dimension(500, 250));
f.setMaximumSize(new Dimension(10000, 200));
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
public static void centerWindow(JFrame frame) {
Insets insets = frame.getInsets();
frame.setSize(new Dimension(insets.left + insets.right + 500, insets.top + insets.bottom + 250));
frame.setVisible(true);
frame.setResizable(false);
Dimension dimension = Toolkit.getDefaultToolkit().getScreenSize();
int x = (int) ((dimension.getWidth() - frame.getWidth()) / 2);
int y = (int) ((dimension.getHeight() - frame.getHeight()) / 2);
frame.setLocation(x, y);
}
public static void main(String[] args) {
javax.swing.SwingUtilities.invokeLater(new SwingExample());
}
}
A: The order of the calls is important:
first -
pack();
second -
setLocationRelativeTo(null);
A: In addition to Donal's answer, I would like to add a small calculation that makes sure that the Java window is perfectly at the center of the window. Not just the "TOP LEFT" of the window is at the center of the window.
public static void centreWindow(JFrame frame, int width, int height) {
Dimension dimension = Toolkit.getDefaultToolkit().getScreenSize();
int x = (int) ((dimension.getWidth() - frame.getWidth()) / 2);
int y = (int) ((dimension.getHeight() - frame.getHeight()) / 2);
// calculate perfect center
int perf_x = (int) x - width/2;
int perf_y = (int) y - height/2;
frame.setLocation(perf_x, perf_y);
}
A: If you want a simple answer for Java NetBeans:
Right click on the JFrame, go to properties, then go to code and select generate center option.
See this image for reference: https://i.stack.imgur.com/RFXbL.png
A: If you want to position your app window at the center of the screen, you can do it as follows.
int x = (Toolkit.getDefaultToolkit().getScreenSize().width - getSize().width) / 2;
int y = (Toolkit.getDefaultToolkit().getScreenSize().height - getSize().height) / 2;
setLocation(x,y);
The getSize() function returns your app frame's size, and getScreenSize() returns your PC's screen size.
A: I'd like to modify Dónal's answer to accommodate a multiple display setup:
public static void centerWindow(Window frame) {
Rectangle bounds = frame.getGraphicsConfiguration().getBounds();
Dimension dimension = bounds.getSize();
int x = (int) (((dimension.getWidth() - frame.getWidth()) / 2) + bounds.getMinX());
int y = (int) (((dimension.getHeight() - frame.getHeight()) / 2) + bounds.getMinY());
frame.setLocation(x, y);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "132"
} |
Q: How to detect design time in a VS.NET 2003 control library project Code below is not working as expected to detect if it is in design mode (VS.Net 2003 - Control Library):
if (this.Site != null && this.Site.DesignMode == true)
{
// Design Mode
}
else
{
// Run-time
}
It is used in a complex user control, deriving from another user control and including other user controls on it.
Is there another way to detect design time in a VS.NET 2003 or what is the problem with the code above?
A: DesignMode won't work from inside a constructor. Some alternatives (not sure if they work in 1.1) are
if (System.ComponentModel.LicenseManager.UsageMode == System.ComponentModel.LicenseUsageMode.Designtime)
or
call GetService(typeof(IDesignerHost)) and see if it returns something.
I've had better luck with the first option.
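For reference, here's a minimal sketch of how the two checks can be combined inside a control. The IsDesignTime helper name is my own invention, but LicenseManager, LicenseUsageMode and IDesignerHost are the actual framework types:
using System.ComponentModel;
using System.ComponentModel.Design;
using System.Windows.Forms;
public class MyControl : UserControl
{
// Hypothetical helper combining both suggestions above.
protected bool IsDesignTime()
{
// Works even inside the constructor, unlike Site.DesignMode.
if (LicenseManager.UsageMode == LicenseUsageMode.Designtime)
return true;
// Once the control is sited, the designer host service is available.
return GetService(typeof(IDesignerHost)) != null;
}
}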
A: I guess you could use HttpContext.Current == null as described at http://west-wind.com/weblog/posts/189.aspx
In .Net 2.0 there's Control.DesignMode (http://msdn.microsoft.com/en-us/library/system.web.ui.control.designmode.aspx). I guess you have a good reasons to stay on VS 2003 though, so upgrading might not be an option for you.
Update: If you are doing Winforms, Component.DesignMode (http://msdn.microsoft.com/en-us/library/system.componentmodel.component.designmode.aspx) is the right way to check. Though, if this.Site.DesignMode doesn't work properly, Component.DesignMode might not work either, as it does exactly the check you are doing (Site != null && Site.DesignMode).
This might be a long shot, but are you sure that your base control does not override the Site property?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: system wide keyboard hook on X under linux What would be the best approach to install a keyboard hook on Linux (X-windows) in order to trigger some application when some key-combo is pressed?? Is there a way to do this regardless of which window manager is running? The idea is to have an application being called ( or brought to foreground ) when some key is pressed in a way similar that Google Desktop does to Ctrl-Ctrl.
A: XGrabKey on the root window is how xbindkeys does it. Be careful to have some alternative method of killing the grab though; it's very annoying to have to ssh into your own box just to kill that process... And that's why, if it were me, xbindkeys + "echo 'moo' > /tmp/moo-fifo" would be the way to do it. That way, you could also control it in any number of other ways you haven't thought of yet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Speed of DataSet row/column lookups? Recently I had to do some very processing heavy stuff with data stored in a DataSet. It was heavy enough that I ended up using a tool to help identify some bottlenecks in my code. When I was analyzing the bottlenecks, I noticed that although DataSet lookups were not terribly slow (they weren't the bottleneck), it was slower than I expected. I always assumed that DataSets used some sort of HashTable style implementation which would make lookups O(1) (or at least thats what I think HashTables are). The speed of my lookups seemed to be significantly slower than this.
I was wondering if anyone who knows anything about the implementation of .NET's DataSet class would care to share what they know.
If I do something like this :
DataTable dt = new DataTable();
if(dt.Columns.Contains("SomeColumn"))
{
object o = dt.Rows[0]["SomeColumn"];
}
How fast would the lookup time be for the Contains(...) method, and for retrieving the value to store in Object o? I would have thought it be very fast like a HashTable (assuming what I understand about HashTables is correct) but it doesn't seem like it...
I wrote that code from memory so some things may not be "syntactically correct".
A: Actually it's advisable to use an integer index when referencing a column, which can improve performance a lot. To keep things manageable, you could declare a constant integer. So instead of what you did, you could do:
const int SomeTable_SomeColumn = 0;
DataTable dt = new DataTable();
if(dt.Columns.Count > SomeTable_SomeColumn) // Contains() only takes a column name, so check by index
{
object o = dt.Rows[0][SomeTable_SomeColumn];
}
A: Via Reflector the steps for DataRow["ColumnName"] are:
*
*Get the DataColumn from ColumnName. Uses the row's DataColumnCollection["ColumnName"]. Internally, DataColumnCollection stores its DataColumns in a Hastable. O(1)
*Get the DataRow's row index. The index is stored in an internal member. O(1)
*Get the DataColumn's value at the index using DataColumn[index]. DataColumn stores its data in a System.Data.Common.DataStorage (internal, abstract) member:
return dataColumnInstance._storage.Get(recordIndex);
A sample concrete implementation is System.Data.Common.StringStorage (internal, sealed). StringStorage (and the other concrete DataStorages I checked) store their values in an array. Get(recordIndex) simply grabs the object in the value array at the recordIndex. O(1)
So overall you're O(1) but that doesn't mean the hashing and function calling during the operation is without cost. It just means it doesn't cost more as the number of DataRows or DataColumns increases.
Interesting that DataStorage uses an array for values. Can't imagine that's easy to rebuild when you add or remove rows.
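If you want to see the constant factors for yourself, here is a rough micro-benchmark sketch (the table shape, names and iteration count are arbitrary). Both paths are O(1), but the by-name path repeats the column-name hashtable lookup on every access while the ordinal path goes straight to the column:
using System;
using System.Data;
using System.Diagnostics;
class RowLookupBenchmark
{
static void Main()
{
DataTable dt = new DataTable();
dt.Columns.Add("SomeColumn", typeof(int));
dt.Rows.Add(42);
DataRow row = dt.Rows[0];
const int iterations = 10000000; // adjust for your machine
object o;
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
o = row["SomeColumn"]; // name -> DataColumn via the hashtable, every time
Console.WriteLine("By name:    {0} ms", sw.ElapsedMilliseconds);
sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
o = row[0]; // straight to the column by ordinal
Console.WriteLine("By ordinal: {0} ms", sw.ElapsedMilliseconds);
}
}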
A: I imagine that any lookups would be O(n), as I don't think they would use any type of hashtable, but would actually use more of an array for finding rows and columns.
A: Actually, I believe the column names are stored in a Hashtable. Should be O(1) or constant lookup for case-sensitive lookups. If it had to look through each, then of course it would be O(n).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Can I display the global variables of a RTP in the shell? In VxWorks, I can display global variables in the shell like so:
-> my_global
my_global = 0x103c4110: value = 4 = 0x4
Is there a way to do the same with a RTP global variable?
A: You can display global variables in a specific RTP by using the command (cmd) interpreter and attaching to the RTP.
Here is an example with comments in parenthesis.
-> cmd (switch to command interpreter)
[vxWorks *]# rtp exec Hello_RTP.vxe &
Launching process 'Hello_RTP.vxe' ...
Process 'Hello_RTP.vxe' (process Id = 0x105e4d50) launched.
Attachment number for process 'Hello_RTP.vxe' is %1.
[vxWorks *]# echo $my_global (display my_global in the kernel context)
0x4
[vxWorks *]# %1 (attach to RTP - can also use rtp attach)
[Hello_RTP]# echo $my_global
0x6b7 (global variable from RTP context)
[Hello_RTP]# echo $my_global
0x16e1 (same global variable..it increments)
[Hello_RTP]# %0 (detach from RTP. Go to kernel)
[vxWorks *]# echo $my_global (back to kernel context)
0x4
Note that this is only available in VxWorks 6.x. Before the 6.x release, there were no RTPs in VxWorks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: When Using Dynamic Proxies, how do I access the underlying object's Annotations? When Using Dynamic Proxies, how do I access the underlying object's Annotations?
Specifically I'm annotating settings of a ORM object with @Column("client_id") and then making a Dynamic Proxy keep track of when the annotated setters are called, but...
It doesn't seem that the annotated proxy keeps any of the underlying annotations so short of performing reflection on every invocation, how do I make the proxy have the annotations of the class it's Proxying?
Thank you,
Allain
A: AFAIK, it depends on your bytecode injection lib. Also, remember that typically annotations are not inherited (imposed by the Java spec). If you want to access the original class, and are using CGLIB, you can use this snippet:
Class<?> currClass; // declared here for the snippet's sake
if (Enhancer.isEnhanced(getClass())) {
currClass = UnEnhancer.unenhance(getClass());
} else {
// else, let's get the original class directly
currClass = getClass();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Using Exception.Data How have you used the Exception.Data property in C# projects that you've worked on?
I'd like answers that suggest a pattern, rather than those that are very specific to your app.
A: The exception logger I use has been tweaked to write out all the items in the Data collection. Then for every exception we encounter that we cannot diagnose from the exception stack, we add in all the data in that function's scope, send out a new build, and wait for it to reoccur.
I guess we're optimists in that we don't put it in every function, but we are pessimists in that we don't take it out once we fix the issue.
A: Since none of the answers include any code, something that might be useful as an addition to this question is how to actually look at the .Data dictionary. Since it is not a generic dictionary and only returns IDictionary, with foreach (var kvp in exception.Data) the type of kvp will actually be object, unhelpfully. However, from the MSDN there's an easy way to iterate this dictionary:
foreach (DictionaryEntry de in e.Data)
Console.WriteLine(" Key: {0,-20} Value: {1}",
"'" + de.Key.ToString() + "'", de.Value);
The format argument ,-20 left-aligns the key within a 20-character field. Digressing... this code can be very helpful in a common error logger to unwind this data. A more complete usage would be similar to:
var messageBuilder = new StringBuilder();
do
{
foreach (DictionaryEntry kvp in exception.Data)
messageBuilder.AppendFormat("{0} : {1}\n", kvp.Key, kvp.Value);
messageBuilder.AppendLine(exception.Message);
} while ((exception = exception.InnerException) != null);
return messageBuilder.ToString();
A: I have used it when I knew the exception I was creating was going to need to be serialized. Using Reflector one day, I found that Excepion.Data gets stuck into and pulled from serialization streams.
So, basically, if I have properties on a custom exception class that are already serializable types, I implement them on the derived class and use the underlying data object as their storage mechanism rather than creating private fields to hold the data. If properties of my custom exception object require more advanced serialization, I generally implement them using backing private fields and handle their serialization in the derived class.
Bottom line, Exception.Data gives you serialization for free just by sticking your properties into it -- but just remember those items need to be serializable!
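To make that concrete, here's a minimal sketch of the pattern; the ImportException type and FileName property are invented for illustration. The property reads and writes Data instead of a private field, so the base class's serialization carries it along automatically:
using System;
using System.Runtime.Serialization;
[Serializable]
public class ImportException : Exception
{
public ImportException(string message, string fileName) : base(message)
{
FileName = fileName;
}
protected ImportException(SerializationInfo info, StreamingContext context)
: base(info, context) { }
// Backed by the Data dictionary rather than a private field,
// so it is serialized with the rest of the exception for free.
public string FileName
{
get { return (string)Data["FileName"]; }
set { Data["FileName"] = value; }
}
}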
A: I've used it to capture information about the state at the time of the Exception from the enclosing scope as the Exception travels up the stack. Items like the filename that caused the Exception, or the value of some ID that will help track down the problem.
At the top most level in a web application I also tend to add much of the Request information like the RawUrl, the cookies, the Referrer, ...
For more details here's my blog on the topic:
Rather than waiting for problems to happen I add this code in wherever an Exception can occur that's related to something external, e.g. a file name, or an URL that was being accessed, ... In other words, any data that will help repro the problem.
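As a sketch of that pattern (ImportFile and the locals are hypothetical), the enclosing scope tags the exception with its context and rethrows without disturbing the stack trace:
public void Import(string fileName, int customerId)
{
try
{
ImportFile(fileName); // hypothetical operation that may throw
}
catch (Exception ex)
{
// Attach whatever the enclosing scope knows before letting it bubble up.
ex.Data["FileName"] = fileName;
ex.Data["CustomerId"] = customerId;
throw; // rethrow, preserving the original stack trace
}
}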
A: I just tried to use it and found out that it is not very useful for my purpose - so I am not using it.
The most important part of the stack trace is to be able to tell what happened. The method name and line number are great, but you often need to see value of relevant variables in the context of the exception. I thought that was the whole point of the Data. But - you have to see it for it to be useful.
In my case, I control the code around the caught exception but not the code that logs it. So, for me Data is useless if it is not being automatically printed out in the stack trace. I might as well log the values myself rather than add them to the Data. Or, somehow modify the message to add the values in it, so that it gets logged but without losing the original stack trace with line numbers and method names.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: Ruby. How can I copy and paste in irb on Windows? How can I copy and paste in irb (Interactive Ruby Shell) on Windows?
A: To copy: Hit alt-space, choose Edit, choose Mark, drag-select the text, hit enter.
To paste: Hit alt-space, choose Edit, choose Paste.
A: To avoid having to open the drop-down menu and clicking, you need to change the command window settings. To do this, right-click the title bar, choose Properties, turn on "QuickEdit Mode" under the Properties tab (and keep "Insert Mode" on), then OK.
Now, to copy: drag to select, right-click to copy.
To paste: right-click with no selection.
A: For CLI copy/paste:
Copy : Ctrl+insert
Paste : Shift+insert
A: You might want to consider using Console, a replacement for Windows' terrible command-line chrome. It offers fully redefinable keyboard shortcuts plus tabs, so it's ideal for IRB.
A: check out console2--very nice and allows you to paste by using right click or what not.
Update: conemu is even better: http://conemu.github.io/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Can I get the current page sourcecode from a firefox extension? Can this be done? How?
I want to write my own extension. Can I get the current page's source code in my own extension?
A: As Rich says, adding view-source in front of the URL will give you the current page's source code. A keyboard shortcut for this is Ctrl+U.
I want to write my own extension.
There are a number of existing Firefox extensions that fetch a page's source code and apply some action to it (colour-coding, syntax-checking, etc). Downloading them and looking at how they handle it may be a good place to start!
*
*7 Firefox extensions to explore source code
*View Formatted Source extension
If you're new to Firefox extension development, this article at Lifehacker is an excellent primer in how to start, and will give you an idea of where to look in the above linked extensions for tasks that may be similar to your own.
A: Sure, just add view-source: in front of the URL.
view-source:http://stackoverflow.com/posts/edit/145419
Will show the source of this page for instance - try it in the address bar.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Cannot find the Create GUID tool in VS2005 I have Visual Studio 2005 Professional ENU installed and want to create GUIDs using its Create GUIDs utility. However, I cannot find it under the Tools menu. What should I do to get this utility? Thanks
A: <visual studio install>\Common7\Tools\guidgen.exe
A: I find it handier to use a GUID generator macro than the GUID generator tool. You can assign a shortcut key combination for this macro and insert new GUIDs instantly anywhere in the code.
Here is the code for the interested:
Public Module GUIDGenModule
Sub Create_GUID()
DTE.ActiveDocument.Selection.Text = System.Guid.NewGuid().ToString("D").ToUpper()
End Sub
End Module
A: File path and name below
C:\Program Files\Microsoft Visual Studio 8\Common7\Tools\guidgen.exe
A:
C:\Program Files\Microsoft Visual Studio 8\Common7\Tools\guidgen.exe
C:\Program Files\Microsoft Visual Studio 8\Common7\Tools\uuidgen.exe
The first one is the GUI tool. The second one is a handy console line tool.
A: I'm not sure why it wouldn't be on your Tools menu, but the file you're looking for is called guidgen.exe, and it should be in the Tools folder, i.e. \Microsoft Visual Studio 8\Common7\Tools\guidgen.exe.
Another thing you could try is to reset your IDE settings. Select "Tools --> Import and Export Settings --> Reset All Settings" to restore the defaults and see if that restores the GUID option to the Tools menu. If you've customized a lot of your settings, you might actually export them first so that you can roll back once you figure out the GUID issue.
A: I have actually switched over to using PowerShell for a lot of tasks that used to be done through spawned tools... try this:
[guid]::NewGuid().ToString()
A: A while ago I've blogged about some handy Visual Studio macro's that generate new GUIDs. You can bind those macro's to your keyboard and generate GUIDs on the fly in the source editor. Read about them here:
http://www.wirwar.com/blog/2007/11/03/generating-guids-in-the-visual-studio-ide/
A: The factor behind the problem is the product package selection. The tool ships with the C++ components, so if you have VS Professional or above (not only C#...), install the C++ part.
A: 9a005ff3-5dee-4667-b5b9-7663fee2b0f9
db031ebf-7ffa-4604-a6b6-7d60a38c60ca
96f1854c-3654-46a7-8f57-20eb23f62375
f43a4642-db72-4ed5-a9e7-32fc2c53d1f1
6fa5c074-d68c-4871-b26f-1e0b51374865
17cf6675-fce6-42ce-8501-f19dadbe0c6d
65c681ad-701e-4bc6-a373-2351d9fc1910
3eab6e3d-4040-4beb-9c79-57a0bd7c84c9
3aae1801-c595-4f0b-a36c-56f41e5858dd
310f9053-319e-457c-aedf-ba9a1cd6a1cb
Here are ten free guids, but for only $19.95, I can send you the missing portion to my amazing Guid Generator:
for (int i = 0; i < 10; i++)
{
Console.WriteLine([GET THE WHOLE SOURCE! ONLY $19.99!]);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Implementing multithreading in C# (code review) Greetings.
I'm trying to implement some multithreaded code in an application. The purpose of this code is to validate items that the database gives it. Validation can take quite a while (a few hundred ms to a few seconds), so this process needs to be forked off into its own thread for each item.
The database may give it 20 or 30 items a second in the beginning, but that begins to decline rapidly, eventually reaching about 65K items over 24 hours, at which point the application exits.
I'd like it if anyone more knowledgeable could take a peek at my code and see if there's any obvious problems. No one I work with knows multithreading, so I'm really just on my own, on this one.
Here's the code. It's kinda long but should be pretty clear. Let me know if you have any feedback or advice. Thanks!
public class ItemValidationService
{
/// <summary>
/// The object to lock on in this class, for multithreading purposes.
/// </summary>
private static object locker = new object();
/// <summary>Items that have been validated.</summary>
private HashSet<int> validatedItems = new HashSet<int>();
/// <summary>Items that are currently being validated.</summary>
private HashSet<int> validatingItems = new HashSet<int>();
/// <summary>Remove an item from the index if its links are bad.</summary>
/// <param name="id">The ID of the item.</param>
public void ValidateItem(int id)
{
lock (locker)
{
if (!this.validatedItems.Contains(id) &&
!this.validatingItems.Contains(id))
{
ThreadPool.QueueUserWorkItem(sender =>
{
this.Validate(id);
});
}
}
} // method
private void Validate(int itemId)
{
lock (locker)
{
this.validatingItems.Add(itemId);
}
// *********************************************
// Time-consuming routine to validate an item...
// *********************************************
lock (locker)
{
this.validatingItems.Remove(itemId);
this.validatedItems.Add(itemId);
}
} // method
} // class
A: The thread pool is a convenient choice if you have light weight sporadic processing that isn't time sensitive. However, I recall reading on MSDN that it's not appropriate for large scale processing of this nature.
I used it for something quite similar to this and regret it. I took a worker-thread approach in subsequent apps and am much happier with the level of control I have.
My favorite pattern in the worker-thread model is to create a master thread which holds a queue of tasks items. Then fork a bunch of workers that pop items off that queue to process. I use a blocking queue so that when there are no items the process, the workers just block until something is pushed onto the queue. In this model, the master thread produces work items from some source (db, etc.) and the worker threads consume them.
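As a rough sketch of that master/worker pattern (not the poster's original code; BlockingCollection<T> is available from .NET 4, earlier versions would need a hand-rolled blocking queue like the one linked below):
using System;
using System.Collections.Concurrent;
using System.Threading;
class ValidationWorkers
{
private readonly BlockingCollection<int> queue = new BlockingCollection<int>();
private readonly Thread[] workers;
public ValidationWorkers(int workerCount, Action<int> validate)
{
workers = new Thread[workerCount];
for (int i = 0; i < workerCount; i++)
{
workers[i] = new Thread(() =>
{
// Blocks while the queue is empty; ends once CompleteAdding() is called.
foreach (int id in queue.GetConsumingEnumerable())
validate(id);
});
workers[i].IsBackground = true;
workers[i].Start();
}
}
public void Enqueue(int id) { queue.Add(id); }
public void Shutdown()
{
queue.CompleteAdding();
foreach (Thread t in workers) t.Join();
}
}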
A: I second the idea of using a blocking queue and worker threads. Here is a blocking queue implementation that I've used in the past with good results:
https://www.codeproject.com/Articles/8018/Bounded-Blocking-Queue-One-Lock
What's involved in your validation logic? If its mainly CPU bound then I would create no more than 1 worker thread per processor/core on the box. This will tell you the number of processors:
Environment.ProcessorCount
If your validation involves I/O such as File Access or database access then you could use a few more threads than the number of processors.
A: Be careful, QueueUserWorkItem might fail
A: There is a possible logic error in the code posted with the question, depending on where the item id in ValidateItem(int id) comes from. Why? Because although you correctly lock your validatingItems and validatedItems queues before queing a work item, you do not add the item to the validatingItems queue until the new thread spins up. That means there could be a time gap where another thread calls ValidateItem(id) with the same id (unless this is running on a single main thread).
I would add the item to the validatingItems set just before queuing the work item, inside the lock.
Edit: also QueueUserWorkItem() returns a bool so you should use the return value to make sure the item was queued and THEN add it to the validatingItems queue.
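Putting both points together, ValidateItem might look like the following sketch; Validate() would then only remove the id from validatingItems and add it to validatedItems:
public void ValidateItem(int id)
{
lock (locker)
{
if (this.validatedItems.Contains(id) ||
this.validatingItems.Contains(id))
{
return;
}
// Mark the item as in-flight before queueing, inside the lock,
// so a concurrent call with the same id can't slip through.
this.validatingItems.Add(id);
if (!ThreadPool.QueueUserWorkItem(state => this.Validate(id)))
{
// Queueing failed: undo the bookkeeping and surface the error.
this.validatingItems.Remove(id);
throw new InvalidOperationException("Could not queue validation of item " + id);
}
}
}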
A: ThreadPool may not be optimal for jamming so much at once into it. You may want to research the upper limits of its capabilities and/or roll your own.
Also, there is a race condition that exists in your code, if you expect no duplicate validations. The call to
this.validatingItems.Add(itemId);
needs to happen in the main thread (ValidateItem), not in the thread pool thread (Validate method). This call should occur a line before the queueing of the work item to the pool.
A worse bug is found by not checking the return of QueueUserWorkItem. Queueing can fail, and why it doesn't throw an exception is a mystery to us all. If it returns false, you need to remove the item that was added to the validatingItems list, and handle the error (throw exeception probably).
A: I would be concerned about performance here. You indicated that the database may give it 20-30 items per second and an item could take up to a few seconds to be validated. That could be quite a large number of threads -- using your metrics, worst case 60-90 threads! I think you need to reconsider the design here. Michael mentioned a nice pattern. The use of the queue really helps keep things under control and organized. A semaphore could also be employed to control number of threads created -- i.e. you could have a maximum number of threads allowed, but under smaller loads, you wouldn't necessarily have to create the maximum number if fewer ended up getting the job done -- i.e. your own pool size could be dynamic with a cap.
When using the thread-pool, I also find it more difficult to monitor the execution of threads from the pool as they perform the work. So, unless it's fire and forget, I am in favor of more controlled execution. I know you mentioned that your app exits after the 65K items are all completed. How are you monitoring your threads to determine if they have completed their work -- i.e. all queued workers are done? Are you monitoring the status of all items in the HashSets? I think by queuing your items up and having your own worker threads consume off that queue, you can gain more control. Albeit, this can come at the cost of more overhead in terms of signaling between threads to indicate when all items have been queued, allowing them to exit.
A: You could also try using the CCR - Concurrency and Coordination Runtime. It's buried inside Microsoft Robotics Studio, but provides an excellent API for doing this sort of thing.
You'd just need to create a "Port" (essentially a queue), hook up a receiver (method that gets called when something is posted to it), and then post work items to it. The CCR handles the queue and the worker thread to run it on.
Here's a video on Channel9 about the CCR.
It's very high-performance and is even being used for non-Robotics stuff (Myspace.com uses it behind the scenes for their content-delivery network).
A: I would recommend looking into MSDN: Task Parallel Library - DataFlow, where you can find examples of implementing Producer-Consumer. In your case, the database producing items to validate would be the producer, and the validation routine becomes the consumer.
Also recommend using ConcurrentDictionary<TKey, TValue> as a "Concurrent" hash set where you just populate the keys with no values :). You can potentially make your code lock-free.
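As a rough sketch of what that could look like (the block options and names are illustrative, and the dataflow types live in the System.Threading.Tasks.Dataflow package):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
public class ItemValidationService
{
// ConcurrentDictionary used as a concurrent hash set: keys only, values ignored.
private readonly ConcurrentDictionary<int, byte> seen = new ConcurrentDictionary<int, byte>();
private readonly ActionBlock<int> validator;
public ItemValidationService()
{
validator = new ActionBlock<int>(
id => Validate(id),
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = Environment.ProcessorCount });
}
public void ValidateItem(int id)
{
// TryAdd is atomic, so the lock and the two HashSets disappear.
if (seen.TryAdd(id, 0))
validator.Post(id);
}
public Task Shutdown()
{
validator.Complete();
return validator.Completion; // completes when all posted items are done
}
private void Validate(int id)
{
// Time-consuming routine to validate an item...
}
}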
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I make Emacs start without so much fanfare? Every time I start Emacs I see a page of help text and a bunch of messages suggesting that I try the tutorial. How do I stop this from happening?
A: You can customize the message in the minibuffer and thereby remove the fanfare:
;; Hide advertisement from minibuffer
(defun display-startup-echo-area-message ()
(message ""))
A: Emacs has a couple of variables which inhibit these actions. If you edit your emacs control file (.emacs) and insert the following:
;; inhibit-startup-echo-area-message MUST be set to a hardcoded
;; string of your login name
(setq inhibit-startup-echo-area-message "USERNAME")
(setq inhibit-startup-message t)
that should solve your problem. They basically set the inhibit parameters to true to prevent the behavior you want to get rid of.
A: Put the following in your personal init file (ususally ~/.emacs.el):
(setq inhibit-startup-message t)
(Or (setq inhibit-startup-screen t) in with older Emacs versions.)
You can also turn off the message "For information about GNU Emacs and the GNU system, type C-h C-a." in the echo with the variable inhibit-startup-echo-area-message, but it is not enough to set it to t; you must set it to your username. See the documentation for inhibit-startup-echo-area-message.
A: Put the following in your .emacs:
(setq inhibit-startup-message t)
(setq inhibit-startup-echo-area-message "USERNAME") ; must be your login name; t has no effect
A: If your init file is byte-compiled, use the following form instead:
(eval '(setq inhibit-startup-echo-area-message "YOUR-USER-NAME"))
A: Add the below to your init file
(setq inhibit-startup-message t
inhibit-startup-echo-area-message "USERNAME") ; your login name, not t
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Programming on a Nintendo DS I was reading this answer previously and it got me interested in purchasing a Nintendo DS Lite for learning to program embedded devices. Before I go out and splurge on a DS I had a few questions:
*
*Are there any restrictions on what you can program? The post I indicated earlier seemed to say there weren't, but clarification would be nice.
*Would I be better off buying an arduino (or similar) and going that route? I like the DS because it already has a lot of hardware built in.
*I'm thinking of getting a CycloDS Evo card, is there a better option for homebrew?
*What are the best resources to learn about DS development?
Thanks for your time, If you have a DS and program on it, I'd love you hear your opinion, or alternatively if you have a better idea, I'd like to hear it too.
Thanks =]
A: I did a little programming on the DS Lite about a year ago. The major hardware limitation that I had was working with the WiFi hardware. I found that DS-DS communication was not possible with the homebrew libraries at the time. I am not sure if that has changed. I also found that you could not form an Ad-Hoc connection to another device. I had to connect to an 802.11b network in infrastructure mode, and the SSID had to be broadcast.
For developing I used
*
*PALib (helper libraries): http://www.palib.info/wiki/doku.php?id=day1
*DevKitPro (toolchain): http://www.devkitpro.org/
*no$gba (DS Emulator) - http://nocash.emubase.de/gba.htm
*Supercard Lite (hardware to run homebrew applications) - http://www.realhotstuff.com/-c-32_81.html
I don't recommend the Supercard Lite as it required use of the GBA and DS slot of the DS. At the time this was the only option. There are now DS slot only solutions such as the R4. I have a friend who is using the R4 and has pretty good success with it, though I have not used it myself.
A: I haven't done any programming on the DS, but I have done some development on the GBA (Game Boy Advanced). If what you're looking to do is learn how to program embedded devices, that might be a good option for you (and certainly a cheaper one). There's even a free book you can get online: Programming the Nintendo Gameboy Advanced. I suggest the GBA because, as I've seen, there are a lot more resources online for learning how to program for it. One drawback is that it doesn't have wifi, which means you won't be able to do as many cool things as you would for the DS, but it's certainly a start!
A: Can't say anything about 1,2, or 3. but the resource I use for GBA programming also has DS info:
http://nocash.emubase.de/gbatek.htm (and this is a deep down technical spec document, but I like it for that)
Also: http://www.devkitpro.org/ for the compilers and stuff.
A: *
*The restrictions are hardware restrictions - there's 4Mb of RAM, the 3D hardware can handle X polys per frame and so on. Aside from that, it's just a bunch of hardware that you can do what you want with. The toolchain supports C/C++ and assembler (ARM).
*The variety of hardware is why I like it too. Getting to grips with each piece of the puzzle is what makes the DS fun - each bit of hardware has its own set of tricks for getting the most out of it.
*Don't have one myself, so I guess just check here. Looks nice though. Edit: The only nit I would pick with it is that you'll be swapping the SD card between PC and NDS a lot, whereas a cart with an onboard USB socket would give you slightly faster turnaround.
*The best resources are the libnds examples, and then the gbadev forums.
A: *
*No, there really isn't much of a limitation beyond that of the hardware, and even that can be overcome with enough effort. Quake has been ported to DS, for example, and particle games that utilize both processors have been made. There has also been discussion on how to make higher quality 3D scenes using a double pass renderer. There are multiple resources on the Nintendo DS section of the GBADev forums.
*I would say that the DS is an excellent route to embedded systems development; there is a large and active community that is willing to answer questions and give support, and there is so much hardware built straight into the thing. It saves you the time of building a system to test on.
*The CycloDS Evolution is a good card and is fairly common, so it shouldn't be difficult - if necessary at all - to make your homebrew compatible with other cards. However, be aware that other popular choices are the M3 line and the R4 line, which are pretty much the same thing. I have a TTDS, and it works well, but not out of the box. I would recommend the other three mentioned.
*As for beginning DS devving, I would recommend looking at the basic examples found in the examples folder of devkitPro and reading the GBA tutorial TONC, which covers many of the concepts that are used in both GBA and DS development. A more DS oriented tutorial, Patater's Introduction to Nintendo DS Programming, will help beginners get on their way in the DS world. There is also a very comprehensive documentation spec for the GBA and DS known as GBATek.
A: I just got a CycloDS Evolution the other day, and I am loving it! DSOrganize is like a mini-OS which adds a bunch of stuff I was wishing the DS came with, like an actual calendar app!
To address Mike F's #3, there is actually an FTP server for DS, which you can use to transfer files to your DS wirelessly. I haven't tried it myself though, since my network uses WPA and the DS only seems to support WEP.
A: Honestly, I found the Nintendo DS and the homebrew community while I was attending an Embedded Systems course in college, and I realized the similarities between the ATmega32-based kit I was programming for the class and the hardware-level development of the Nintendo DS via libnds, and I was hooked.
Personally, I've come from a strong C++ background, but being able to walk around with something in my pocket that I've programmed has been a goal of mine since I first got my hands on a TI-83 Plus calculator... I'm now able to realize that goal due to the Nintendo DS.
Anyway, I hope you have as much fun getting into DS development as I have over the past months, and I wish you luck on your endeavors.
A: I have done both, more GBA than DS. I would recommend GBA first, then moving up to DS, because the DS doubles the complication. The EZ Flash V GBA-sized 3-in-1 is a good card. I have a bootloader for the GBA that I wrote to the card using an NDS and a program that I downloaded whose name I can't remember offhand. Once the bootloader was working, a serial cable let me debug programs as well as load them into RAM. That card also allows you to load into RAM on the card and run from there, taking advantage of the prefetch buffer and a bigger program. For the NDS I have tried many of the cards. The CycloDS is good for day-to-day use, but for development not so much. I think I liked the Acekard 2 better, or the R4. Think about the number of times you pull the card out, pull the SD card out, and load it into a computer: very painful. You want a card with an SD card slot you can get at without having to pull the slot0 card out, and the CycloDS is not it, though it is a very good card for the NDS otherwise. I don't think it works on the NDSi, where the Acekard 2 does.
Arduinos are fun and interesting; the LilyPad with the USB-to-serial adapter is the one I recommend: no soldering required, and you can start without a big investment. I like the Armmite Pro better, an Arduino-like footprint but ARM based (the only LPC I would buy; I'm not an LPC fan right now). And you don't need to buy the serial adapter, just a normal USB cable and a jumper (well, maybe a paper clip until you solder on a jumper). I just ordered two more, and so far my code that erased the as-shipped flash and allowed me to load whatever I want isn't working; gotta go figure that out. I continue to be very pleased with the Olimex SAM7-H64 and H256 (header board for the AT91SAM7S256); as with the AVR, Atmel is very developer friendly with good docs. SparkFun is a good place to find all of the above in the USA. SAM-BA now has a Linux version if you use Linux as I do (the Windows version had been there for a while), and it's fairly easy to erase and reprogram, much easier than a DS or GBA, on par with the Arduino or Armmite Pro or similar.
Formerly Luminary Micro, now TI Stellaris, has some good boards. Like the GBA/NDS, but unlike the other boards I mentioned, there are displays and other peripherals to play with, and USB is all you need to program them. Thumb mode only, though. The GBA prefers thumb mode for performance but can go either way; for the NDS I don't remember, as I never got so far as to understand the width of the busses and their timing. Knowing Nintendo and their cheapness, thumb is probably better/faster. The LM3S811 eval board was too easy to brick; the 1968 is not a bad one. I don't like that they were pushing developers away from the source and into pre-built libraries tailored to the RTOS and a specific compiler suite.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How much overhead is there in calling a function in C++? A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions?
A: Your question is one of the questions, that has no answer one could call the "absolute truth". The overhead of a normal function call depends on three factors:
*
*The CPU. The overhead of x86, PPC, and ARM CPUs varies a lot and even if you just stay with one architecture, the overhead also varies quite a bit between an Intel Pentium 4, Intel Core 2 Duo and an Intel Core i7. The overhead might even vary noticeably between an Intel and an AMD CPU, even if both run at the same clock speed, since factors like cache sizes, caching algorithms, memory access patterns and the actual hardware implementation of the call opcode itself can have a huge influence on the overhead.
*The ABI (Application Binary Interface). Even with the same CPU, there often exist different ABIs that specify how function calls pass parameters (via registers, via stack, or via a combination of both) and where and how stack frame initialization and clean-up takes place. All this has an influence on the overhead. Different operating systems may use different ABIs for the same CPU; e.g. Linux, Windows and Solaris may all three use a different ABI for the same CPU.
*The Compiler. Strictly following the ABI is only important if functions are called between independent code units, e.g. if an application calls a function of a system library or a user library calls a function of another user library. As long as functions are "private", not visible outside a certain library or binary, the compiler may "cheat". It may not strictly follow the ABI but instead use shortcuts that lead to faster function calls. E.g. it may pass parameters in register instead of using the stack or it may skip stack frame setup and clean-up completely if not really necessary.
If you want to know the overhead for a specific combination of the three factors above, e.g. for Intel Core i5 on Linux using GCC, your only way to get this information is benchmarking the difference between two implementations, one using function calls and one where you copy the code directly into the caller; this way you force inlining for sure, since the inline statement is only a hint and does not always lead to inlining.
However, the real question here is: Does the exact overhead really matter? One thing is for sure: A function call always has an overhead. It may be small, it may be big, but it is for sure existent. And no matter how small it is if a function is called often enough in a performance critical section, the overhead will matter to some degree. Inlining rarely makes your code slower, unless you terribly overdo it; it will make the code bigger though. Today's compilers are pretty good at deciding themselves when to inline and when not, so you hardly ever have to rack your brain about it.
Personally I ignore inlining during development completely, until I have a more or less usable product that I can profile and only if profiling tells me, that a certain function is called really often and also within a performance critical section of the application, then I will consider "force-inlining" of this function.
So far my answer is very generic, it applies to C as much as it applies to C++ and Objective-C. As a closing word let me say something about C++ in particular: Methods that are virtual are double indirect function calls, which means they have a higher function call overhead than normal function calls and also they cannot be inlined. Non-virtual methods might be inlined by the compiler or not, but even if they are not inlined, they are still significantly faster than virtual ones, so you should not make methods virtual unless you really plan to override them or have them overridden.
A: The amount of overhead will depend on the compiler, CPU, etc. The percentage overhead will depend on the code you're inlining. The only way to know is to take your code and profile it both ways - that's why there's no definitive answer.
A: On most architectures, the cost consists of saving all (or some, or none) of the registers to the stack, pushing the function arguments to the stack (or putting them in registers), incrementing the stack pointer and jumping to the beginning of the new code. Then when the function is done, you have to restore the registers from the stack. This webpage has a description of what's involved in the various calling conventions.
Most C++ compilers are smart enough now to inline functions for you. The inline keyword is just a hint to the compiler. Some will even do inlining across translation units where they decide it's helpful.
A: For very small functions inlining makes sense, because the (small) cost of the function call is significant relative to the (very small) cost of the function body. For most functions over a few lines it's not a big win.
A: It's worth pointing out that an inlined function increases the size of the calling function and anything that increases the size of a function may have a negative affect on caching. If you're right at a boundary, "just one more wafer thin mint" of inlined code might have a dramatically negative effect on performance.
If you're reading literature that's warning about "the cost of a function call," I'd suggest it may be older material that doesn't reflect modern processors. Unless you're in the embedded world, the era in which C is a "portable assembly language" has essentially passed. A large amount of the ingenuity of the chip designers in the past decade (say) has gone into all sorts of low-level complexities that can differ radically from the way things worked "back in the day."
A: I made a simple benchmark against a simple increment function:
inc.c:
typedef unsigned long ulong;
ulong inc(ulong x){
return x+1;
}
main.c
#include <stdio.h>
#include <stdlib.h>
typedef unsigned long ulong;
#ifdef EXTERN
ulong inc(ulong);
#else
static inline ulong inc(ulong x){
return x+1;
}
#endif
int main(int argc, char** argv){
if (argc < 1+1)
return 1;
ulong i, sum = 0, cnt;
cnt = atoi(argv[1]);
for(i=0;i<cnt;i++){
sum+=inc(i);
}
printf("%lu\n", sum);
return 0;
}
Running it with a billion iterations on my Intel(R) Core(TM) i5 CPU M 430 @ 2.27GHz gave me:
*
*1.4 seconds for the inlinining version
*4.4 seconds for the regularly linked version
(It appears to fluctuate by up to 0.2 but I'm too lazy to calculate proper standard deviations nor do I care for them)
This suggests that the overhead of function calls on this computer is about 3 nanoseconds
The fastest thing I measured was about 0.3 ns, so that would suggest a function call costs about 9 primitive ops, to put it very simplistically.
This overhead increases by about another 2ns per call (total call time about 6ns) for functions called through a PLT (functions in a shared library).
A: There is a great concept called 'register shadowing', which allows passing values (up to 6?) through registers (on the CPU) instead of the stack (memory). Also, depending on the function and the variables used within it, the compiler may just decide that frame management code is not required!!
Also, even a C++ compiler may do a 'tail recursion optimization', i.e. if A() calls B(), and after calling B(), A just returns, the compiler will reuse the stack frame!!
Of course, all this can be done only if the program sticks to the semantics of the standard (see pointer aliasing and its effect on optimizations).
A: Modern CPUs are very fast (obviously!). Almost every operation involved with calls and argument passing are full speed instructions (indirect calls might be slightly more expensive, mostly the first time through a loop).
Function call overhead is so small, only loops that call functions can make call overhead relevant.
Therefore, when we talk about (and measure) function call overhead today, we are usually really talking about the overhead of not being able to hoist common subexpressions out of loops. If a function has to do a bunch of (identical) work every time it is called, the compiler would be able to "hoist" it out of the loop and do it once if it was inlined. When not inlined, the code will probably just go ahead and repeat the work you told it to!
Inlined functions seem impossibly faster not because of call and argument overhead, but because of common subexpressions that can be hoisted out of the function.
Example:
Foo::result_type MakeMeFaster()
{
Foo t = 0;
for (auto i = 0; i < 1000; ++i)
t += CheckOverhead(SomethingUnpredictible());
return t.result();
}
Foo CheckOverhead(int i)
{
auto n = CalculatePi_1000_digits();
return i * n;
}
An optimizer can see through this foolishness and do:
Foo::result_type MakeMeFaster()
{
Foo t;
auto _hidden_optimizer_tmp = CalculatePi_1000_digits();
for (auto i = 0; i < 1000; ++i)
t += SomethingUnpredictible() * _hidden_optimizer_tmp;
return t.result();
}
It seems like call overhead is impossibly reduced because it really has hoisted a big chunk of the function out of the loop (the CalculatePi_1000_digits call). The compiler would need to be able to prove that CalculatePi_1000_digits always returns the same result, but good optimizers can do that.
A: There's the technical and the practical answer. The practical answer is it will never matter, and in the very rare case it does the only way you'll know is through actual profiled tests.
The technical answer, which your literature refers to, is generally not relevant due to compiler optimizations. But if you're still interested, it is well described by Josh.
As far as a "percentage" goes, you'd have to know how expensive the function itself was. Outside of the cost of the called function there is no percentage, because you are comparing to a zero cost operation. For inlined code there is no cost; the processor just moves to the next instruction. The downside to inlining is a larger code size, which manifests its costs in a different way than the stack construction/tear-down costs.
A: There are a few issues here.
*
*If you have a smart enough compiler, it will do some automatic inlining for you even if you did not specify inline. On the other hand, there are many things that cannot be inlined.
*If the function is virtual, then of course you are going to pay the price that it cannot be inlined, because the target is determined at runtime (see the sketch after this list). Similarly, in Java, you might be paying this price unless you indicate that the method is final.
*Depending on how your code is organized in memory, you may be paying a cost in cache misses and even page misses as the code is located elsewhere. That can end up having a huge impact in some applications.
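To make the virtual-call point above concrete, here is a minimal sketch (the class names are illustrative, not from the answer):

#include <cstddef>

class Shape {
public:
    virtual ~Shape() {}
    // The target of this call is looked up through the vtable at runtime,
    // so the compiler generally cannot inline it across this boundary.
    virtual double area() const = 0;
};

double totalArea(Shape* const* shapes, std::size_t n) {
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += shapes[i]->area(); // one indirect (virtual) call per iteration
    return sum;
}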
A: There is not much overhead at all, especially with small (inline-able) functions or even classes.
The following example has three different tests that are each run many, many times and timed. The results always agree to within a few thousandths of a second.
#include <boost/timer/timer.hpp>
#include <iostream>
#include <cmath>
double sum;
double a = 42, b = 53;
//#define ITERATIONS 1000000 // 1 million - for testing
//#define ITERATIONS 10000000000 // 10 billion ~ 10s per run
//#define WORK_UNIT sum += a + b
/* output
8.609619s wall, 8.611255s user + 0.000000s system = 8.611255s CPU(100.0%)
8.604478s wall, 8.611255s user + 0.000000s system = 8.611255s CPU(100.1%)
8.610679s wall, 8.595655s user + 0.000000s system = 8.595655s CPU(99.8%)
9.5e+011 9.5e+011 9.5e+011
*/
#define ITERATIONS 100000000 // 100 million ~ 10s per run
#define WORK_UNIT sum += std::sqrt(a*a + b*b + sum) + std::sin(sum) + std::cos(sum)
/* output
8.485689s wall, 8.486454s user + 0.000000s system = 8.486454s CPU (100.0%)
8.494153s wall, 8.486454s user + 0.000000s system = 8.486454s CPU (99.9%)
8.467291s wall, 8.470854s user + 0.000000s system = 8.470854s CPU (100.0%)
2.50001e+015 2.50001e+015 2.50001e+015
*/
// ------------------------------
double simple()
{
sum = 0;
boost::timer::auto_cpu_timer t;
for (unsigned long long i = 0; i < ITERATIONS; i++)
{
WORK_UNIT;
}
return sum;
}
// ------------------------------
void call6()
{
WORK_UNIT;
}
void call5(){ call6(); }
void call4(){ call5(); }
void call3(){ call4(); }
void call2(){ call3(); }
void call1(){ call2(); }
double calls()
{
sum = 0;
boost::timer::auto_cpu_timer t;
for (unsigned long long i = 0; i < ITERATIONS; i++)
{
call1();
}
return sum;
}
// ------------------------------
class Obj3{
public:
void runIt(){
WORK_UNIT;
}
};
class Obj2{
public:
Obj2(){it = new Obj3();}
~Obj2(){delete it;}
void runIt(){it->runIt();}
Obj3* it;
};
class Obj1{
public:
void runIt(){it.runIt();}
Obj2 it;
};
double objects()
{
sum = 0;
Obj1 obj;
boost::timer::auto_cpu_timer t;
for (unsigned long long i = 0; i < ITERATIONS; i++)
{
obj.runIt();
}
return sum;
}
// ------------------------------
int main(int argc, char** argv)
{
double ssum = 0;
double csum = 0;
double osum = 0;
ssum = simple();
csum = calls();
osum = objects();
std::cout << ssum << " " << csum << " " << osum << std::endl;
}
The output for running 100,000,000 iterations (of each type: simple, six function calls, three object calls) was, with this semi-convoluted work payload:
sum += std::sqrt(a*a + b*b + sum) + std::sin(sum) + std::cos(sum)
as follows:
8.485689s wall, 8.486454s user + 0.000000s system = 8.486454s CPU (100.0%)
8.494153s wall, 8.486454s user + 0.000000s system = 8.486454s CPU (99.9%)
8.467291s wall, 8.470854s user + 0.000000s system = 8.470854s CPU (100.0%)
2.50001e+015 2.50001e+015 2.50001e+015
Using a simple work payload of
sum += a + b
gives the same results, except a couple of orders of magnitude faster in each case.
A: As others have said, you really don't have to worry too much about overhead, unless you're going for ultimate performance or something akin to it. When you make a function call, the compiler has to write code to:
*
*Save function parameters to the stack
*Save the return address to the stack
*Jump to the starting address of the function
*Allocate space for the function's local variables (stack)
*Run the body of the function
*Save the return value (stack)
*Free the space used by the local variables (stack cleanup)
*Jump back to the saved return address
*Free up the space used for the parameters
etc...
However, you have to weigh that against lowering the readability of your code, as well as the impact on your testing strategies, maintenance plans, and the overall size of your source files.
A: Each function call requires a new stack frame to be set up. But the overhead of this would only be noticeable if you are calling a function on every iteration of a loop over a very large number of iterations.
A: For most functions, there is no additional overhead for calling them in C++ vs. C (unless you count the "this" pointer as an unnecessary argument to every function... you have to pass state to a function somehow, though)...
For virtual functions, there is an additional level of indirection (equivalent to calling a function through a pointer in C)... But really, on today's hardware this is trivial.
A: Depending on how you structure your code into units such as modules and libraries, it might, in some cases, matter profoundly.
*
*Using a dynamic library function with external linkage will most of the time impose full stack-frame processing.
That is why using qsort from the C standard library is one order of magnitude (10 times) slower than using STL code when the comparison operation is as simple as an integer comparison (see the sketch after this list).
*Passing function pointers between modules will also be affected.
*The same penalty will most likely affect usage of C++'s virtual functions, as well as other functions whose code is defined in separate modules.
*The good news is that whole-program optimization might resolve the issue for dependencies between static libraries and modules.
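To illustrate the qsort-versus-STL point above, a minimal sketch (the function names are mine; assuming a modern optimizing compiler):

#include <algorithm>
#include <cstddef>
#include <cstdlib>

static int compareInts(const void* a, const void* b) {
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void sortBothWays(int* data, std::size_t n) {
    // qsort reaches the comparison through a function pointer on every
    // compare, so it usually pays full call overhead each time.
    std::qsort(data, n, sizeof(int), compareInts);
    // std::sort is a template: the int comparison is visible to the
    // compiler and is typically inlined, removing the per-compare call.
    std::sort(data, data + n);
}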
| {
"language": "en",
"url": "https://stackoverflow.com/questions/144993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72"
} |
Q: What's the future of the web? XHTML 2, HTML 5, or something else? I'm confused by the discussion and advancement both of a new version of HTML and a new version of XHTML. Are they competitors? If so, what is likeliest to be the adopted future of the web? If not, what is the differing non-competing purpose for each?
Are we due to have a Blu-ray/HD DVD battle here? Is there ultimately any clear decision? I fear a future where browsers pick and choose among the easiest and/or flashiest features of each to implement, leaving web developers trying to sort out the lowest common denominator for any new web app.
A: HTML 5 is meant for web applications whereas XHTML2 is meant for documents. From the HTML 5 working draft:
XHTML2 defines a new HTML vocabulary with better features for hyperlinks, multimedia content, annotating document edits, rich metadata, declarative interactive forms, and describing the semantics of human literary works such as poems and scientific papers.
However, it lacks elements to express the semantics of many of the non-document types of content often seen on the Web. For instance, forum sites, auction sites, search engines, online shops, and the like, do not fit the document metaphor well, and are not covered by XHTML2.
[HTML5] aims to extend HTML so that it is also suitable in these contexts.
XHTML2 and [HTML5] use different namespaces and therefore can both be implemented in the same XML processor.
A: XHTML2 and HTML5 are competing standards, they both purport to be the next iteration of HTML.
It is pretty clear that HTML5 is going to win, since it has support by the browser vendors.
A: XHTML2 is effectively dead. Since w3c(HTMLWG) accepted WHATWG's proposal the work has stopped on XHTML2 (even before that, since the last working draft for xhtml2 is from 2006).
A: In my opinion HTML5 will be the next dominant format. XHTML is just too unforgiving to be used in a web environment (you can't have the page fail on every small error...).
HTML5 is shaping up to be quite the treat for web developers - a formal spec for the CANVAS element, native drag-and-drop API, an offline storage API, server notifications API (push model), a formal content editing API and much more. If they can deliver even half of what they are proposing to, it will be a major advancement for web applications.
A: From what I was able to find in a quick google search, I would suggest that these are indeed competing standards. Both are attempting to advance web technology but are following different paths to do so.
For a pretty thorough treatment of the matter you might look at these two links:
http://xhtml.com/en/future/x-html-5-versus-xhtml-2/
http://www.cmswire.com/cms/industry-news/setting-the-standards-html-5-vs-xhtml-2-002032.php
A: Ultimately it's whatever is supported by browser makers. HTML 5 is feature rich, but the final draft may be years off. There are inherent difficulties in implementing things like audio and video support in 4(+) major rendering engines, and having them all behave the same way. Even validation would be a chore. Most browsers besides IE support the canvas element and SVG, but they still only represent about 25% of the market. With IE still commanding 75-80% of the market share, users who don't use or are oblivious to alternatives will be unable to use more advanced features, giving designers a tough decision.
IE8 is only now implementing support which other browsers have had for years, meaning that the IE user base will always lag in compatibility. While HTML 5 is a nice idea, I think proprietary solutions such as Flash/AIR and Google Gears will continue to provide standardized support for the rich features HTML 5 provides. The biggest problem really is standardization - you have to design a website with as great a percentage of users in mind as possible. There is hope, however. A Mozilla developer made a canvas plugin for IE - we could potentially see an open-source IE add-on that brings it up to a certain standard, that users could install much like Flash.
To Microsoft's credit they are being very open with IE8 and Windows 7 development (see their project blogs), so there is the possibility that more proactive IE development will accelerate adoption of HTML 5.
A: The W3C allowed the xhtml2 working group's charter to expire in 2009. Their resources were rolled into the html5 working group. The html5 spec contains a section entitled The XHTML Syntax.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: End of FILE* pointer is not equal to size of written data Very simply put, I have the following code snippet:
FILE* test = fopen("C:\\core.u", "w");
printf("Filepointer at: %d\n", ftell(test));
fwrite(data, size, 1, test);
printf("Written: %d bytes.\n", size);
fseek(test, 0, SEEK_END);
printf("Filepointer is now at %d.\n", ftell(test));
fclose(test);
and it outputs:
Filepointer at: 0
Written: 73105 bytes.
Filepointer is now at 74160.
Why is that? Why does the number of bytes written not match the file pointer?
A: Since you're opening the file in text mode, it will convert end-of-line markers, such as LF, into CR/LF.
This is likely if you're running on Windows (and you probably are, given that your file name starts with "c:\").
If you open the file in "wb" mode, I suspect you'll find the numbers are identical:
FILE* test = fopen("C:\\core.u", "wb");
The C99 standard has this to say in 7.19.5.3 The fopen function:
The argument mode points to a string. If the string is one of the following, the file is opened in the indicated mode. Otherwise, the behaviour is undefined.
r open text file for reading
w truncate to zero length or create text file for writing
a append; open or create text file for writing at end-of-file
rb open binary file for reading
wb truncate to zero length or create binary file for writing
ab append; open or create binary file for writing at end-of-file
r+ open text file for update (reading and writing)
w+ truncate to zero length or create text file for update
a+ append; open or create text file for update, writing at end-of-file
r+b or rb+ open binary file for update (reading and writing)
w+b or wb+ truncate to zero length or create binary file for update
a+b or ab+ append; open or create binary file for update, writing at end-of-file
You can see they distinguish between w and wb. I don't believe an implementation is required to treat the two differently but it's usually safer to use binary mode for binary data.
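As a minimal sketch of the difference on Windows (the file name is illustrative):

#include <cstdio>

int main() {
    const char* modes[] = { "w", "wb" };
    for (int i = 0; i < 2; ++i) {
        FILE* f = std::fopen("demo.tmp", modes[i]);
        if (!f) return 1;
        std::fputs("a\nb\n", f);  // 4 characters, two of them '\n'
        long pos = std::ftell(f);
        std::fclose(f);
        // In "w" (text) mode on Windows each '\n' is expanded to CR/LF on
        // disk, so the position is typically 6; in "wb" it stays 4.
        std::printf("mode %-2s -> ftell = %ld\n", modes[i], pos);
    }
    return 0;
}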
A: What does fwrite return? Normally the return value is the number of complete items written (here, 1 on success), not the number of bytes.
Also, what does ftell() report right before the fseek?
It might help to know what operating system, C compiler version and C library.
A: A file position is a cookie; its numeric value means nothing by itself. The only thing you can use it for is to seek back to the same place in the file. I'm not even sure ISO C guarantees that ftell returns increasing values. If you don't believe this, look at the different fseek() modes. They exist precisely because the position is not necessarily a simple byte offset.
A: Windows doesn't actually write all data out to the file without a flush (and possibly an fsync). Maybe that's why the numbers don't match.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What web server should I use with NetBeans? I haven't been around Java development for 8 years, but am starting to build a NetBeans Web Application. When I walk through the Web Application wizard, it asks for the server I'm going to be using.
What would be the best and simplest server for me to start using with NetBeans?
A: Since the NetBeans IDE is a Sun product, I would assume that the Glassfish application server would be a natural fit.
That said, one of the pluses of developing a web application in Java is that the interface for working with the http is standardized (i.e. the Servlet specification), so that you can pick any servlet container you want: be it Glassfish, Tomcat, Jetty or Weblogic. Since it sounds to me that you're experimenting and you want to use something easy to administer, I might go with Glassfish. However, be open to revisit that decision when you need to actually deploy your web application in a production environment. Be sure to check out other options like Tomcat or Jetty.
A: Unless you are deploying to a full J2EE application server, I would recommend using Tomcat. Tomcat can run as a standalone web/servlet/jsp server and avoids some of the complexities of a full J2EE app server.
The web development bundle for Netbeans will include installers for and automated integration with Glassfish and Tomcat. You will get the "best" experience using Netbeans with those servers.
That said, the workflow in Netbeans can be easily integrated with other application servers. As of 6.1, this includes Sun Java System Application Server 8 and 9, GlassFish v1 and v2, Apache Tomcat 4, 5 and 6, JBoss 4, BEA WebLogic 10, IBM WebSphere 6.0 and 6.1, Sailfin V1. See the Netbeans J2EE Features site for more info.
A: Glassfish is actually an easy to use app server. I think it's easier for a beginner to use and it's integrated with Netbeans. Setting up database connection caches is easy, for example.
You administer the server through this web page:
http://localhost:4848
(login: admin, password: adminadmin)
Glassfish will run your apps on port 8080.
The Glassfish home page: http://glassfish.dev.java.net (don't really need to read)
For non-Netbeans users there's a QuickStart guide:
http://glassfish.dev.java.net/downloads/quickstart/index.html
Here's a screencast overview:
http://download.java.net/javaee5/screencasts/admin-console/index.html
At some point you will want to learn Tomcat too because it's so prevalent, but Glassfish is a much friendlier start. In fact, it's probably better as a production server too, if you can find an affordable host.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: WCF Service authorization patterns I'm implementing a secure WCF service. Authentication is done using username / password or Windows credentials. The service is hosted in a Windows Service process. Now, I'm trying to find out the best way to implement authorization for each service operation.
For example, consider the following method:
public EntityInfo GetEntityInfo(string entityId);
As you may know, in WCF, there is an OperationContext object from which you can retrieve the security credentials passed in by the caller/client. Now, authentication would have already finished by the time the first line in the method is called. However, how do we implement authorization if the decision depends on the input data itself? For example, in the above case, say 'admin' users (whose permissions etc. are stored in a database) are allowed to get entity info, and other users should not be allowed... where do we put the authorization checks?
Say we put it in the first line of the method like so:
CheckAccessPermission(PermissionType.GetEntity, user, entityId) //user is pulled from the current OperationContext
Now, there are a couple of questions:
*
*Do we validate the entityId (for example check null / empty value etc) BEFORE the authorization check or INSIDE the authorization check? In other words, if authorization checks should be included in every method, is that a good pattern? Which should happen first - argument validation or authorization?
*How do we unit test a WCF service when authorization checks are all over the place like this, and we don't have an OperationContext in the unit test!? (Assuming I'm tryin to test this service class implementation directly without any of the WCF setup).
Any ideas guys?
A: For question 1, it's best to perform authorization first. That way, you don't leak validation error messages back to unauthorized users.
BTW, instead of using a home-grown authentication method (which I assume is what your CheckAccessPermission is), you might be able to hook up to WCF's out-of-the-box support for ASP.NET role providers. Once this is done, you perform authorization via OperationContext.Current.ServiceSecurityContext.PrimaryIdentity.IsInRole(). The PrimaryIdentity is an IPrincipal.
A: About question #2, I would do this using Dependency Injection and set up your service implementation something like this:
class MyService : IMyService
{
public MyService() : this(new UserAuthorization()) { }
public MyService(IAuthorization auth) { _auth = auth; }
private IAuthorization _auth;
public EntityInfo GetEntityInfo(string entityId)
{
_auth.CheckAccessPermission(PermissionType.GetEntity,
user, entityId);
//Get the entity info
}
}
Note that IAuthorization is an interface that you would define.
Because you are going to be testing the service type directly (that is, without running it inside the WCF hosting framework) you simply set up your service to use a dummy IAuthorization type that allows all calls. However, an even BETTER test is to mock the IAuthorization and test that it is called when and with the parameters that you expect. This allows you to test that your calls to the authorization methods are valid, along with the method itself.
Separating the authorization into it's own type also allows you to more easily test that it is correct in isolation. In my (albeit limited) experience, using DI "patterns" give you vastly better separation of concerns and testability in your types as well as leading to a cleaner interface (this is obviously open to debate).
My preferred mocking framework is RhinoMocks, which is free and has a very nice fluent interface, but there are lots of others out there. If you'd like to know more about DI, here are some good primers and .Net frameworks:
*
*Martin Fowler on DI
*Jeremy Miller on DI
*Scott Hanselman's List of DI Containers
*My personal favorite DI container: The Castle Project Windsor Container
A: For question 1, absolutely do authorization first. No code (within your control) should execute before authorization to maintain the tightest security. Paul's example above is excellent.
For question 2, you could handle this by subclassing your concrete service implementation. Make the true business logic implementation an abstract class with an abstract "CheckPermissions" method as you mention above. Then create 2 subclasses, one for WCF use, and one (very isolated in a non deployed DLL) which returns true (or whatever you'd like it to do in your unit testing).
Example (note, these shouldn't be in the same file or even DLL though!):
public abstract class MyServiceImpl
{
public void MyMethod(string entityId)
{
CheckPermissions(entityId);
//move along...
}
protected abstract bool CheckPermissions(string entityId);
}
public class MyServiceUnitTest : MyServiceImpl
{
protected override bool CheckPermissions(string entityId)
{
return true;
}
}
public class MyServiceMyAuth : MyServiceImpl
{
protected override bool CheckPermissions(string entityId)
{
//do some custom authentication
return true;
}
}
Then your WCF deployment uses the class "MyServiceMyAuth", and you do your unit testing against the other.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Are there any considerations needed to be taken running your .net program on x64 vs x86? I believe the architecture type (x86 vs x64) is abstracted away for you when making .Net programs, but are there any other considerations that can cause problems?
A: This article has a lot of good issues to be aware of:
http://osnews.com/story/20330/Windows_x64_Watch_List
Personally, my boss has a 64-bit Vista computer, and I program in a 32-bit mode. We've run into the following issues:
*
*The registry for 32-bit apps gets hidden (sort of) under a Wow6432Node key. Not all apps you are used to finding a path for in the registry will be in that node (SQL Server won't be, for instance).
*SysWow64 in the C:\Windows folder can cause issues of DLLs not being where they are needed (we had this issue with a 3rd-party licensing component).
*Sometimes the files you need are in "C:\Program Files (x86)" rather than "C:\Program Files". Sucks too.
A: *
*Reading and writing to 64 bit values is not thread safe on a 32 bit platform. Reading a 64 bit value takes two operations which could be interrupted by a context switch. See the MSDN article on Threading.Interlocked.Read for more information.
*Also agree entirely with torial's answers! :-)
A: MSDN had put up a little paper regarding the issues of porting 32-bit applications over to a 64-bit execution environment.
http://msdn.microsoft.com/en-us/library/ms973190.aspx
Two other bloggers had previously written about 64-bit development when they were working in the CLR team
*
*Gaurav Seth
*SpankyJ
A: Beware of third-party COM libraries or third party .NET libraries that secretly make win32 calls. That's where we had our biggest headaches.
A: From the MSDN doco, among other considerations:
In many cases, assemblies will run the same on the 32-bit or 64-bit CLR. Some reasons for a program to behave differently when run by the 64-bit CLR include:
*
*Structs that contain members that change size depending on the platform, such as any pointer type.
*Pointer arithmetic that includes constant sizes.
*Incorrect platform invoke or COM declarations that use Int32 for handles instead of IntPtr.
*Casting IntPtr to Int32
Also, default file locations.
A: x64 will allow you to address more memory, but given the same code, it will use more memory than x86.
A: In my experience, porting an ASP.NET application was basically flawless. It runs on a 32-bit machine and on a 64-bit one with no problems, besides having more memory available. This is because a lot of the issues already mentioned (registry, threading and so on) have already been handled by ASP.NET, and you need to have fixed them properly anyway to run in the ASP.NET environment.
Client side (Windows Forms), the same held, but if you've used some "unsafe" APIs to get special folders or registry access, then some problems can happen, as already pointed out.
Regards
Massimo
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: 192.168.0.71... What is this special address used for? I have some accesses from 192.168.0.71 in my apache logs. I looked up this IP (because my server almost exclusively takes requests from 127.0.0.1), and I saw that it's reserved for "special purposes." What types of purposes might those be?
Edit:
I didn't tell you: typing 192.168.0.71 brings me straight to my site, just as 127.0.0.1 would. I just wonder how this is different, then, from 127.0.0.1.
A:
I didn't tell you, typing 192.168.0.71 brings me straight to my site, just as 127.0.0.1 would. I just wonder how this is different, then from 127.0.0.1.
That means that 192.168.0.71 is the internal IP assigned to your machine.
127.0.0.1 is just a local loopback redirect. 192.168.0.71 is actually directly connecting to your machine.
A: 192.168.???.??? is a special, reserved range of private IP addresses. So it's probably a computer from your local network.
Read: http://en.wikipedia.org/wiki/Classful_network
EDIT:
You've edited your post.
It seems it's your machine's address on the local network.
127.0.0.1 is the loopback address.
The difference between them is that if somebody else on your network types 192.168.0.71, they go to your site; 127.0.0.1 refers to their own computer.
A: RFC 1918 reserves addresses starting with 192.168 for private networks. This most likely means that some computer on your local network is accessing the server.
A: 192.168.0.71 (well, the entire range 192.168.0.0 – 192.168.255.255) is for private (read: not Internet-accessible) network IP addresses, so that is from something inside your private network.
A: I believe it is reserved for any private intranet, as per this document.
A: The 192.168.x.y block is typically used for non-Internet connected devices. It's most likely from one of your own machines. If you have a router of some sort, go into its configuration tool and see if you can find the block of addresses it uses to assign to internal machines. It should be 192.168.x.y.
A: Judging from your edit, it sounds like 192.168.0.71 is your computer's IP address on your internal network.
As to why it's showing up in your logs instead of 127.0.0.1... well, I can only assume that, for whatever reason, one of the programs on your computer is contacting the computer by its network IP rather than the localhost IP.
A: The 192.168.0.0 network is defined as being one of the "private" networks.
As Krzysiek Goj has said, check this link for further details.
There are 3 ranges that have been designated as private ip addresses.
- 10.0.0.0/8 (meaning 10.0.0.0 to 10.255.255.255)
- 172.16.0.0/12 (meaning 172.16.0.0 - 172.31.255.255)
- 192.168.0.0/16 (meaning 192.168.0.0 to 192.168.255.255)
Typically a DHCP server on your network (most home routers include one) is configured to dynamically hand out IP addresses in one of the private ranges. The 192.168.0.0/16 range is probably the most popular. Alternatively, you may have been statically allocated one of these addresses by your network administrator.
To check the address that you've been allocated you can use one of the following:
- (windows) ipconfig /all
- (unix) ifconfig
By default your machine will also have a loopback interface enabled using the address 127.0.0.1. This can be used to access your own machine.
A: There is not enough information here to completely answer the question. The most likely answer is:
The web server is also your desktop system. Your browser is running on that system as well, so the the 127.0.0.1 traffic is from your surfing of your own site.
The 192.168.0.71 is the actual IP address of your desktop, which is connected to some kind of NAT'ing device which connects you to the internet. Almost every broadband WiFi device uses this subnet by default.
The reason some traffic comes from that address is that on occasion, for various web reasons, some of the traffic is directly addressing your 192.168.0.71 address rather than the 127.0.0.1 address.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I stop Emacs from automatically editing my startup file? Emacs edits my .emacs file whenever I use the customization facility, or when I type a command that is disabled by default. Any automatic editing of my configuration makes me nervous. How can I stop Emacs from ever editing my .emacs. file?
A: The first thing to do is to stop that silly "disabled command" feature from ever doing anything. If you care this much about your .emacs file, you certainly don't need novice.el bossing you around.
(setq disabled-command-function nil)
The customization facility can be made to stuff all your customizations in a separate file with the following commands.
(setq custom-file "~/.emacs-custom.el")
(load custom-file 'noerror)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Simplest way to use "BeanUtils alike" replace Is there any libraries that would allow me to use the same known notation as we use in BeanUtils for extracting POJO parameters, but for easily replacing placeholders in a string?
I know it would be possible to roll my own, using BeanUtils itself or other libraries with similar features, but I didn't want to reinvent the wheel.
I would like to take a String as follows:
String s = "User ${user.name} just placed an order. Deliver is to be
made to ${user.address.street}, ${user.address.number} - ${user.address.city} /
${user.address.state}";
And passing one instance of the User class below:
public class User {
private String name;
private Address address;
// (...)
public String getName() { return name; }
public Address getAddress() { return address; }
}
public class Address {
private String street;
private int number;
private String city;
private String state;
public String getStreet() { return street; }
public int getNumber() { return number; }
// other getters...
}
To something like:
System.out.println(BeanUtilsReplacer.replaceString(s, user));
Would get each placeholder replaced with actual values.
Any ideas?
A: Rolling your own using BeanUtils wouldn't take too much wheel reinvention (assuming you want it to be as basic as asked for). This implementation takes a Map for replacement context, where the map key should correspond to the first portion of the variable lookup paths given for replacement.
import java.lang.reflect.InvocationTargetException;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.commons.beanutils.BeanUtils;
public class BeanUtilsReplacer
{
private static Pattern lookupPattern = Pattern.compile("\\$\\{([^\\}]+)\\}");
public static String replaceString(String input, Map<String, Object> context)
throws IllegalAccessException, InvocationTargetException, NoSuchMethodException
{
int position = 0;
StringBuffer result = new StringBuffer();
Matcher m = lookupPattern.matcher(input);
while (m.find())
{
result.append(input.substring(position, m.start()));
result.append(BeanUtils.getNestedProperty(context, m.group(1)));
position = m.end();
}
if (position == 0)
{
return input;
}
else
{
result.append(input.substring(position));
return result.toString();
}
}
}
Given the variables provided in your question:
Map<String, Object> context = new HashMap<String, Object>();
context.put("user", user);
System.out.println(BeanUtilsReplacer.replaceString(s, context));
A: Spring Framework should have a feature that does this (see Spring JDBC example below). If you can use groovy (just add the groovy.jar file) you can use Groovy's GString feature to do this quite nicely.
Groovy example
foxtype = 'quick'
foxcolor = ['b', 'r', 'o', 'w', 'n']
println "The $foxtype ${foxcolor.join()} fox"
Spring JDBC has a feature that I use to support named and nested named bind variables from beans like this:
public int countOfActors(Actor exampleActor) {
// notice how the named parameters match the properties of the above 'Actor' class
String sql = "select count(0) from T_ACTOR where first_name = :firstName and last_name = :lastName";
SqlParameterSource namedParameters = new BeanPropertySqlParameterSource(exampleActor);
return this.namedParameterJdbcTemplate.queryForInt(sql, namedParameters);
}
A: Your string example is a valid template in at least a few templating engines, like Velocity or Freemarker. These libraries offer a way to merge a template with a context containing some objects (like 'user' in your example).
See http://velocity.apache.org/ or http://www.freemarker.org/
Some example code (from the Freemarker site):
/* ------------------------------------------------------------------- */
/* You usually do it only once in the whole application life-cycle: */
/* Create and adjust the configuration */
Configuration cfg = new Configuration();
cfg.setDirectoryForTemplateLoading(
new File("/where/you/store/templates"));
cfg.setObjectWrapper(new DefaultObjectWrapper());
/* ------------------------------------------------------------------- */
/* You usually do these for many times in the application life-cycle: */
/* Get or create a template */
Template temp = cfg.getTemplate("test.ftl");
/* Create a data-model */
Map root = new HashMap();
root.put("user", "Big Joe");
Map latest = new HashMap();
root.put("latestProduct", latest);
latest.put("url", "products/greenmouse.html");
latest.put("name", "green mouse");
/* Merge data-model with template */
Writer out = new OutputStreamWriter(System.out);
temp.process(root, out);
out.flush();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Javascript as a functional language I am looking to get to grips with functional programming concepts.
I've used Javascript for many years for client side scripting in web applications and apart from using prototypes it was all simple DOM manipulation, input validation etc.
Of late, I have often read that Javascript is one of the languages that supports functional programming.
With my familiarity and experience with Javascript, my preference is to use it to learn functional programming. I expect I would be able to concentrate more on the main functional concepts and not get bogged down or distracted by a completely new syntax.
So in summary, is Javascript a good choice to learn functional programming concepts? What capabilities in Javascript are relevant/support functional programming?
A: Higher Order Javascript is a great way to get familiar with the functional aspects of javascript. It's also a relatively short read in case you want to get your feet wet without diving into a larger book.
A: Although javascript supports FP to some degree, it does not directly encourage it. That's why projects like Oliver Steele's Functional exist, to fill in the gaps. So I wouldn't recommend it for learning FP. Check out F# instead.
A: I would say that although you can quickly grasp some functional programming concepts with JavaScript, using JavaScript consistently like a functional programming language is not a common practice. At least not obviously common. Most people don't post tutorials that pinpoint how to do functional programming with JavaScript -- the one marxidad pointed out is actually a pretty decent example, but you won't find a lot of that. The functional aspects are not often apparent, just like when people use closures in JavaScript, but are unaware that they are doing it.
The idea that you would pass two functions through as arguments to a third function, and then have the return value be some execution related to the first two functions is an advanced technique that almost always appears only in the core of full-blown libraries like jQuery. Self executing anonymous functions and the like have gained ground, but are still not used consistently. The majority of tutorials often focus instead on JavaScript's OO capabilities, like how to create properties and methods, scope, access control and also how to use the prototype property of constructors. Honestly, if functional programming is what you want, then I would choose a language known strictly for this capability.
A: I don't remember who said it, but javascript has been called "Scheme with Algol syntax". So for learning Scheme/Lisp, Javascript isn't a bad start. Note though that functional languages like Lisp are quite different from pure functional languages, such as Haskell.
Apart from "first-class functions" (Meaning that functions are values, that can be assigned to variables), lexical scope is also an inherent part of what makes a functional language.
Higher Order Javascript and The Little Javascripter has been mentioned already. They are both excellent texts. In addition, Higher Order Programming in Javascript may be an easier start.
A: JavaScript supports first class functions. See Use functional programming techniques to write elegant JavaScript.
A: I would recommend reading The Little Schemer, which is a fairly slim book about recursion and is a good introduction to the functional style. Whilst it's focused on Scheme it can easily be applied to JavaScript, see http://javascript.crockford.com/little.html. I found it really helpful in my javascript development, although it gets quite tricky towards the end.
A: Javascript is a multi-paradigm language. If your goal is to learn functional language concepts, try starting with a pure functional language like OCaml or Haskell.
A: Also, Eloquent JavaScript: Functional Programming chapter.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: Is this a reasonable implementation of the quadratic Bézier function in OCaml? A friend came across a quadratic Bézier curve function in his codebase that used a gigantic rats nest of a switch table to perform the computation. He challenged me to find a single, short expression that would allow him to replace the gigantic block of code.
In attempting to satisfy two different curiosities, I thought I'd try implementing the function in OCaml. I'm a very novice OCaml programmer and I'm also unfamiliar with the function and this specific implementation is hard to come by via Google.
Critiques on both the function's performance/correctness as well as its implementation are very much appreciated.
Implementation of Quadratic Bézier Curve:
let rec b2 n =
let p1 = -10. in
let p2 = 10. in
let q = n*.n in
let rec b2i n i hd =
if i > n then
List.rev hd
else
let t = i /. n in
b2i n (i+.1.) ((((1.-.t)**2.)*.p1+.(2.*.t*.(1.-.t)*.q)+.(t**2.)*.p2) :: hd)
in
b2i n 0. []
;;
let floatprint lst =
List.iter (fun f -> Printf.printf "%f; " f) lst ;;
floatprint (b2 8.);;
A: b2 isn't recursive, so no need for [let rec b2 n =]. Since n never changes, no need to have it as an argument to b2i; just use n from the enclosing scope. Your inner function should depend on p0, p1 and p2, but I see it depending on -10., n**2 and 10. The function also has the form of a map from [0.0; 1.0; 2.0; ...; n.0] to the final values. Could you write it as:
let b i =
let t = i /. n in
let tminus = (1.-.t) in
(tminus *. tminus *. p0) +. (2. *. t *. tminus *. p1) +. (t *. t *. p2)
in
List.map b (count 0. n)
A function count to generate the list 0.0 ... n.0 could be (for small n):
let rec count m n = if m > n then [] else m :: (count (m+.1.) n)
A: I have two suggestions:
You should call List.rev after b2i returns so OCaml can exploit its tail-recursion optimizations. I am not sure how well OCaml will deal with the current implementation; List.rev is tail-recursive, though. You'll notice that in this post it is done like that.
Also, you can make the resolution of the iteration be an optional argument like ?(epsilon=0.1).
As an OCaml programmer I don't see much wrong here aside from that, as long as P1 and P2 are in fact constants. Compile it down and see what the difference in assembly is between moving List.rev inside or outside of the tail recursion.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: YUV Conversion via a fragment shader I have implemented a YUV to RGB conversion via a fragment shader written in Nvidia's shader language. (Y, U and V are stored in separate textures that are combined via multi-texturing in my fragment shader.) It works great under OpenGL, but under Direct3D I just can't get the output image to look right. I'm starting to suspect that Direct3D is somehow modifying the Y, U and V samples before I get a chance to do my YUV conversion thing. Does anyone know if Direct3D makes any modifications to the values stored in textures before the fragment shader is run, and how to disable them?
A: We figured it out. :) Basically the problem was that while our YUV to RGB equations were correct, we weren't properly sampling the V data! So no amount of futzing with the equations would have helped!
In the end, I would recommend the following strategy for anyone attempting to do this:
1) Set R, G, and B to the value from Y. You should get a grayscale image (as Y contains just luminance).
2) Next, set R, G, and B to U. You should get funny colors!
3) Finally set R, G, and B to V. Again, you should get funny colors.
Also, properly normalizing the values is critical. Check out fourcc.org for a discussion of proper YUV normalization.
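For reference, a minimal CPU-side sketch of the common full-range BT.601 conversion (treat the coefficients as an assumption to verify against fourcc.org for your particular format):

struct Rgb { float r, g, b; };

// y, u and v are in [0, 255]; u and v are centered at 128.
Rgb yuvToRgb(float y, float u, float v) {
    Rgb c;
    c.r = y + 1.402f * (v - 128.0f);
    c.g = y - 0.344f * (u - 128.0f) - 0.714f * (v - 128.0f);
    c.b = y + 1.772f * (u - 128.0f);
    return c; // clamp each channel to [0, 255] before use
}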
A: The only suggestion that comes to mind is that the textures are in an inappropriate format (low-precision or compressed).
Can you describe in what way the output looks wrong? Any chance of a right vs wrong screenshot?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Distributing iPhone applications 2.1 vs 2.0 Can iPhone applications compiled against 2.1 be successfully installed via iTunes on a 2.0 device?
I know iPhone applications compiled with 2.1 can run on a 2.0 device (assuming they're not using anything new from 2.1). But am not sure if iTunes will let the install take place.
Does anyone have concrete information on this?
I have not seen any apps on the AppStore that are 2.1+ only.
A: I believe apps that are compiled against 2.1 will be marked as "Requires iPhone 2.1 Software Update" when viewed through iTunes (but not when viewed from an iPhone - the iPhone's App Store app only displays a subset of an app's metadata).
One example: Caliper (it's under "Application Description->Requirements")
I don't know if this "requirement" is actually enforced, however.
A: No.
As has been mentioned, apps compiled with 2.1 will be marked as requiring 2.1 in iTunes.
If you attempt to download a 2.1 app from the iPhone, or sync a 2.1 app via iTunes, you will receive a message that states iPhone OS 2.1 is required.
A: Short answer, yes.
There have been some apps that don't work 100% in 2.1, but the two firmwares are pretty much backwards compatible.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is it true that there is no need to learn C because C++ contains everything? I am taking a class in C++ programming and the professor told us that there is no need to learn C because C++ contains everything in C plus object-oriented features. However, some others have told me that this is not necessarily true. Can anyone shed some light on this?
A: Overview:
It is almost true that C++ is a superset of C, and your professor is correct in that there is no need to learn C separately.
C++ adds the whole object-oriented aspect and the generic programming aspect, as well as having less strict rules (for example, variables no longer need to be declared at the top of each function). C++ does change the definition of some terms from C, such as structs, although still in a superset way.
Examples of why it is not a strict superset:
This Wikipedia article has a couple good examples of such a differences:
One commonly encountered difference is
that C allows implicit conversion from
void* to other pointer types, but C++
does not. So, the following is valid C
code:
int *i = malloc(sizeof(int) * 5);
... but to make it work in both C and
C++ one would need to use an explicit
cast:
int *i = (int *) malloc(sizeof(int) * 5);
Another common portability issue is
that C++ defines many new keywords,
such as new and class, that may be
used as identifiers (e.g. variable
names) in a C program.
This wikipedia article has further differences as well:
C++ compilers prohibit goto from crossing an initialization, as in the following C99 code:
void fn(void)
{
goto flack;
int i = 1;
flack:
;
}
What should you learn first?
You should learn C++ first, not because learning C first will hurt you, not because you will have to unlearn anything (you won't), but because there is no benefit in learning C first. You will eventually learn just about everything about C anyway because it is more or less contained in C++.
A: It might be true that you don't need to learn the syntax of C if you know the syntax of C++, but you certainly do need to learn how coding practices differ between C and C++.
So your professor wasn't 100% right.
In C you don't have the classes to arrange your code into logical modules and you don't have C++ polymorphism. Yet you still need to achieve these goals somehow.
Although the syntax of C is to some extent a subset of C++, programming in C is not a subset of programming in C++. It is completely different.
A: Yes and no.
As others have already answered, the language C++ is a superset of the language C, with some small exceptions, for example that sizeof('x') gives a different value.
But what I don't think has been very clearly stated is that when it comes to the use of these two languages, C++ is not a superset, but rather different. C++ contains new (it can be discussed if they are better) ways of doing the basic things, such as writing to the screen. The old C ways are still there, but you generally use the new ways. This means that a simple "hello world" program looks different in C and in C++. So it is not really true that the simple things are the same in C and C++, and then you just add more advanced stuff, such as support for object-oriented programming, in C++.
So if you have learnt C++, you will need to re-learn quite a lot before you can program in C. (Well, it is possible to teach C++ as an extension to C, still using printf and malloc instead of iostreams and new, and then adding classes and other C++ things, but that way of using C++ is generally frowned upon.)
A: No C++ isn't really a superset of C. You can check this article for a more extensive list of the differences if you're interested:
http://en.wikipedia.org/wiki/Compatibility_of_C_and_C%2B%2B
A: Not entirely true.
The biggest "gotcha" is typing -- C++ is much more strongly typed than C is, and the preferred methods for solving this in C++ are simply not available in C. Namely, you can silently cast between types in C (particularly pointer types), but not in C++. And C++ highly recommends using the static_cast/reinterpret_cast/const_cast methods for resolving these issues.
More importantly, if you learn C++ syntax and mannerisms, you'll probably find it difficult to deal with C (some may say this is good; and I prefer C++ myself, but sometimes it just isn't an option, or you have to deal with legacy code that's in C and not C++). Again, the most likely issues you'll encounter are dealing with pointers (particularly char*'s and general array usage; in C++ using std::string and std::vector or other collections is simply better).
It's certainly possible to learn C++, and then learn the differences between C and C++ and be capable of programming in both. But the differences are far more than just skin deep.
A: It is true that for most purposes, C++ contains everything that C does. Language lawyers will be quick to point out that there are some very special edge cases that are valid C but not valid C++.
One such example might be the C declaration
int virtual;
which declares an integer named "virtual". Since "virtual" is a keyword in C++, this is not valid C++.
A: There is a large common core of C (especially C89) and C++, but there are most certainly areas of difference between C and C++. Obviously, C++ has all the object-oriented features, plus the generic programming, plus exceptions, plus namespaces that C does not. However, there are also features of C that are not in C++, such as support for the (close to archaic) non-prototype notation for declaring and defining functions. In particular, the meaning of the following function declaration is different in C and C++:
extern void function();
In C++, that is a function that returns no value and takes no parameters (and, therefore, is called solely for its side-effects, whatever they are). In C, that is a function which returns no value but for which there is no information about the argument list. C still does not require a declaration in scope before a function is called (in general; you must have a declaration in scope if the function takes a variable list of arguments, so it is critical to #include <stdio.h> before using printf(), etc).
There are also differences:
sizeof('c')
In C++, the answer is 1; in C, the answer is normally 4 (32-bit systems with 8-bit characters) or even 8 (64-bit systems with 64-bit int).
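A small program you can compile under both languages to see the difference for yourself:

#include <stdio.h>

int main(void) {
    /* Prints 1 when compiled as C++, and sizeof(int) (commonly 4) as C. */
    printf("%u\n", (unsigned) sizeof('c'));
    return 0;
}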
In general, you can write code that will compile under both C and C++ compilers without much difficulty - the majority of my code does that all the time. The exceptions are either a result of carelessness on my part, or because I've consciously exploited the good features of C99 that are not in C++ 98, such as designated initializers, or long long.
A: Stroustrup himself advises against learning C first. But then again, he (and many others of his generation) managed to become a C++ guru starting from C.
A: I personally would disagree with your professor.
Generally speaking, C++ is based on C and in that "sense" contains it and extends it.
However, your professor's statement is incorrect in one respect: people have traditionally learned C and only then the extensions of C++, and to use C++ correctly you do need to master its C origins. It is possible that when teaching you something, your professor or textbook will not specifically mention what came from which language.
In addition, it is important to understand that despite the similarities, not every C program runs in the same way under C++. For example, C structs are interpreted differently (as classes with everything public) by the C++ compiler.
When I teach, I teach the C core first, and then go to C++.
A: While it's true that C++ was designed to maintain a large degree of compatibility with C and a subset of what you learn in C++ will apply to C the mindset is completely different. Programming C++ with Boost or STL is a very different experience than programming in C.
There was a term of art called using C++ as a better C. This meant using some C++ language features and tools to make C programming easier (e.g., declaring the index variable of a for loop within the for statement). But now, modern C++ development seems very different from C other than a great deal of the syntax and in those cases the C legacy often seems to be a burden rather than a benefit.
A: If any of the students in the class intend to become embedded software engineers, then they may have no choice but to program in C (see this question, and this one, among others).
Of course, having learnt C++, it may be less of a transition for them than starting from scratch - but it still makes your professor's statement untrue!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: What's the best way to benchmark programs in Windows? I need to do some performance benchmarks on .NET programs (C#) in Windows, but I haven't done benchmarking much in the Windows world. I've looked into using the Windows 2000/XP Performance monitor with custom counters for this, but I don't think this is quite what I want.
Are there any good system facilities for this in Windows XP, or do I need to just use System.Diagnostics.Stopwatch [edit] and write text logs for manual interpretation, or is there something else?
Edit: is there anything beyond System.Diagnostics.Stopwatch?
A: using System.Diagnostics;
....
Stopwatch sw = new Stopwatch();
sw.Start();
// Code you want to time...
// Note: for averaged accuracy (without other OS effects),
// run timed code multiple times in a loop
// and then divide by the number of runs.
sw.Stop();
Console.WriteLine("Took " + sw.ElapsedTicks + " Ticks");
A: For micro-benchmarking I really like MeasureIt (can be downloaded from http://msdn.microsoft.com/en-us/magazine/cc500596.aspx). It is a test project written by Vance Morrison, a performance architect on the CLR. It currently has a good set of benchmarks for a number of .Net/CLR core methods. The best part of it is that it is trivial to tweak and add new benchmarks for whatever you would like to test. Simply run "MeasureIt /edit" and it will launch VS with the project for itself so that you can view how those benchmarks are written and add new ones in a similar fashion if you like.
As already stated, Stopwatch is probably the easiest way to do this, and MeasureIt uses Stopwatch underneath for its timings, but it also does some other things, like running a block of code X times and then providing you with stats for the runs and whatnot.
A: There are quite a few profilers out there. Here are some of the ones I know of:
*
*Red Gate's ANTS
*Yourkit's .Net Profiler
*JetBrains dotTrace
If you go on with using System.Diagnostics.Stopwatch, you will be able to instrument and measure only particular points of your code, around which you explicitly put the Start/Stop calls. This is good enough for measuring specific pieces, like a tight loop or things like that, but it's not going to give you a complete picture of where your program spends most of its time.
A: If you want just some quick numbers, you can use Powershell to time overall execution. Use the measure-command cmdlet. It's roughly equivalent to "time" in Unix.
> measure-command { your.exe arg1 }
Days : 0
Hours : 0
Minutes : 0
Seconds : 4
Milliseconds : 996
Ticks : 49963029
TotalDays : 5.78275798611111E-05
TotalHours : 0.00138786191666667
TotalMinutes : 0.083271715
TotalSeconds : 4.9963029
TotalMilliseconds : 4996.3029
A: This may not be what you want, but dotTrace offers many useful diagnostics and is integrated into Visual Studio.
A: If your project is quite big and there are lots of modules making a large number of calls, you can use:
http://www.moduleanalyzer.com/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: C++ performance vs. Java/C# My understanding is that C/C++ produces native code to run on a particular machine architecture. Conversely, languages like Java and C# run on top of a virtual machine which abstracts away the native architecture. Logically it would seem impossible for Java or C# to match the speed of C++ because of this intermediate step, however I've been told that the latest compilers ("hot spot") can attain this speed or even exceed it.
Perhaps this is more of a compiler question than a language question, but can anyone explain in plain English how it is possible for one of these virtual machine languages to perform better than a native language?
A: I'm not sure how often you'll find that Java code will run faster than C++, even with Hotspot, but I'll take a swing at explaining how it could happen.
Think of compiled Java code as interpreted machine language for the JVM. When the HotSpot JIT notices that certain pieces of the compiled code are going to be used many times, it performs an optimization on the machine code. Since hand-tuned assembly is almost always faster than compiled C++ code, it's ok to figure that programmatically tuned machine code isn't going to be too bad.
So, for highly repetitious code, I could see where it'd be possible for Hotspot JVM to run the Java faster than C++... until garbage collection comes into play. :)
A: Generally, your program's algorithm will be much more important to the speed of your application than the language. You can implement a poor algorithm in any language, including C++. With that in mind, you'll generally be able to write code the runs faster in a language that helps you implement a more efficient algorithm.
Higher-level languages do very well at this by providing easier access to many efficient pre-built data structures and encouraging practices that will help you avoid inefficient code. Of course, they can at times also make it easy to write a bunch of really slow code, too, so you still have to know your platform.
Also, C++ is catching up with "new" (note the quotes) features like the STL containers, auto pointers, etc -- see the boost library, for example. And you might occasionally find that the fastest way to accomplish some task requires a technique like pointer arithmetic that's forbidden in a higher-level language -- though they typically allow you to call out to a library written in a language that can implement it as desired.
The main thing is to know the language you're using, its associated API, what it can do, and what its limitations are.
A: I don't know either...my Java programs are always slow. :-) I've never really noticed C# programs being particularly slow, though.
A: Whenever I talk managed vs. unmanaged performance, I like to point to the series Rico (and Raymond) did comparing C++ and C# versions of a Chinese/English dictionary. This google search will let you read for yourself, but I like Rico's summary.
So am I ashamed by my crushing defeat?
Hardly. The managed code got a very
good result for hardly any effort. To
defeat the managed Raymond had to:
*
*Write his own file I/O stuff
*Write his own string class
*Write his own allocator
*Write his own international mapping
Of course he used available lower
level libraries to do this, but that's
still a lot of work. Can you call
what's left an STL program? I don't
think so, I think he kept the
std::vector class which ultimately was
never a problem and he kept the find
function. Pretty much everything else
is gone.
So, yup, you can definitely beat the
CLR. Raymond can make his program go
even faster I think.
Interestingly, the time to parse the
file as reported by both programs
internal timers is about the same --
30ms for each. The difference is in
the overhead.
For me the bottom line is that it took 6 revisions for the unmanaged version to beat the managed version that was a simple port of the original unmanaged code. If you need every last bit of performance (and have the time and expertise to get it), you'll have to go unmanaged, but for me, I'll take the order of magnitude advantage I have on the first versions over the 33% I gain if I try 6 times.
A: Here is another interesting benchmark, which you can try yourself on your own computer.
It compares ASM, VC++, C#, Silverlight, Java applet, Javascript, Flash (AS3)
Roozz plugin speed demo
Please note that the speed of javascript varies a lot depending on which browser is executing it. The same is true for Flash and Silverlight, because these plugins run in the same process as the hosting browser. But the Roozz plugin runs standard .exe files, which run in their own process, so its speed is not influenced by the hosting browser.
A: You should define "perform better than..". Well, I know, you asked about speed, but its not everything that counts.
*
*Do virtual machines perform more runtime overhead? Yes!
*Do they eat more working memory? Yes!
*Do they have higher startup costs (runtime initialization and JIT compiler) ? Yes!
*Do they require a huge library installed? Yes!
And so on. It's biased, yes ;)
With C# and Java you pay a price for what you get (faster coding, automatic memory management, big library and so on). But you have not much room to haggle about the details: take the complete package or nothing.
Even if those languages can optimize some code to execute faster than compiled code, the whole approach is (IMHO) inefficient. Imagine driving 5 miles to your workplace every day, with a truck! It's comfortable, it feels good, you are safe (extreme crumple zone), and after you step on the gas for some time, it will even be as fast as a standard car! Why don't we all drive a truck to work? ;)
In C++ you get what you pay for, not more, not less.
Quoting Bjarne Stroustrup: "C++ is my favorite garbage collected language because it generates so little garbage"
link text
A: The executable code produced from a Java or C# compiler is not interpretted -- it is compiled to native code "just in time" (JIT). So, the first time code in a Java/C# program is encountered during execution, there is some overhead as the "runtime compiler" (aka JIT compiler) turns the byte code (Java) or IL code (C#) into native machine instructions. However, the next time that code is encountered while the application is still running, the native code is executed immediately. This explains how some Java/C# programs appear to be slow initially, but then perform better the longer they run. A good example is an ASP.Net web site. The very first time the web site is accessed, it may be a bit slower as the C# code is compiled to native code by the JIT compiler. Subsequent accesses result in a much faster web site -- server and client side caching aside.
A: The virtual machine languages are unlikely to outperform compiled languages but they can get close enough that it doesn't matter, for (at least) the following reasons (I'm speaking for Java here since I've never done C#).
1/ The Java Runtime Environment is usually able to detect pieces of code that are run frequently and perform just-in-time (JIT) compilation of those sections so that, in future, they run at the full compiled speed.
2/ Vast portions of the Java libraries are compiled so that, when you call a library function, you're executing compiled code, not interpreted. You can see the code (in C) by downloading the OpenJDK.
3/ Unless you're doing massive calculations, much of the time your program is running, it's waiting for input from a very slow (relatively speaking) human.
4/ Since a lot of the validation of Java bytecode is done at the time of loading the class, the normal overhead of runtime checks is greatly reduced.
5/ At the worst case, performance-intensive code can be extracted to a compiled module and called from Java (see JNI) so that it runs at full speed.
In summary, the Java bytecode will never outperform native machine language, but there are ways to mitigate this. The big advantage of Java (as I see it) is the HUGE standard library and the cross-platform nature.
A: Some good answers here about the specific question you asked. I'd like to step back and look at the bigger picture.
Keep in mind that your user's perception of the speed of the software you write is affected by many other factors than just how well the codegen optimizes. Here are some examples:
*
*Manual memory management is hard to do correctly (no leaks), and even harder to do effeciently (free memory soon after you're done with it). Using a GC is, in general, more likely to produce a program that manages memory well. Are you willing to work very hard, and delay delivering your software, in an attempt to out-do the GC?
*My C# is easier to read & understand than my C++. I also have more ways to convince myself that my C# code is working correctly. That means I can optimize my algorithms with less risk of introducing bugs (and users don't like software that crashes, even if it does it quickly!)
*I can create my software faster in C# than in C++. That frees up time to work on performance, and still deliver my software on time.
*It's easier to write good UI in C# than C++, so I'm more likely to be able to push work to the background while UI stays responsive, or to provide progress or hearbeat UI when the program has to block for a while. This doesn't make anything faster, but it makes users happier about waiting.
Everything I said about C# is probably true for Java, I just don't have the experience to say for sure.
A: If you're a Java/C# programmer learning C++, you'll be tempted to keep thinking in terms of Java/C# and translate verbatim to C++ syntax. In that case, you only get the earlier mentioned benefits of native code vs. interpreted/JIT. To get the biggest performance gain in C++ vs. Java/C#, you have to learn to think in C++ and design code specifically to exploit the strengths of C++.
To paraphrase Edsger Dijkstra: [your first language] mutilates the mind beyond recovery.
To paraphrase Jeff Atwood: you can write [your first language] in any new language.
A: One of the most significant JIT optimizations is method inlining. Java can even inline virtual methods if it can guarantee runtime correctness. This kind of optimization usually cannot be performed by standard static compilers because it needs whole-program analysis, which is hard because of separate compilation (in contrast, JIT has all the program available to it). Method inlining improves other optimizations, giving larger code blocks to optimize.
Standard memory allocation in Java/C# is also faster, and deallocation (GC) is not much slower, but only less deterministic.
A: Orion Adrian, let me invert your post to see how unfounded your remarks are, because a lot can be said about C++ as well. And saying that the Java/C# compiler optimizes away empty functions really makes you sound like you are not an expert in optimization, because a) why should a real program contain empty functions, except for really bad legacy code, and b) that is really not a black-art, bleeding-edge optimization.
Apart from that phrase, you ranted blatantly about pointers, but don't objects in Java and C# basically work like C++ pointers? May they not overlap? May they not be null? C (and most C++ implementations) has the restrict keyword, both have value types, C++ has reference-to-value with non-null guarantee. What do Java and C# offer?
>>>>>>>>>>
Generally, C and C++ can be just as fast or faster because the AOT compiler -- a compiler that compiles your code before deployment, once and for all, on your high memory many core build server -- can make optimizations that a C# compiled program cannot because it has a ton of time to do so. The compiler can determine if the machine is Intel or AMD; Pentium 4, Core Solo, or Core Duo; or if supports SSE4, etc, and if your compiler does not support runtime dispatch, you can solve for that yourself by deploying a handful of specialized binaries.
A C# program is commonly compiled upon running it, so that it runs decently well on all machines, but it is not optimized as much as it could be for a single configuration (i.e. processor, instruction set, other hardware), and it must spend some time compiling first. Features like loop fission, loop inversion, automatic vectorization, whole program optimization, template expansion, IPO, and many more are very hard to solve fully and completely in a way that does not annoy the end user.
Additionally certain language features allow the compiler in C++ or C to make assumptions about your code that allows it to optimize certain parts away that just aren't safe for the Java/C# compiler to do. When you don't have access to the full type id of generics or a guaranteed program flow there's a lot of optimizations that just aren't safe.
Also, C++ and C perform many stack allocations at once with just one register increment, which surely is more efficient than Java's and C#'s allocations, given the layer of abstraction between the garbage collector and your code.
Now I can't speak for Java on this next point, but I know that a C++ compiler, for example, will actually remove methods and method calls when it knows the body of the method is empty. It will eliminate common subexpressions, it may try and retry to find optimal register usage, it does not enforce bounds checking, it will autovectorize loops and inner loops and will invert inner to outer, it moves conditionals out of loops, it splits and unsplits loops. It will expand std::vector into native zero-overhead arrays as you'd do the C way. It will do interprocedural optimizations. It will construct return values directly at the caller site. It will fold and propagate expressions. It will reorder data in a cache-friendly manner. It will do jump threading. It lets you write compile-time ray tracers with zero runtime overhead. It will make very expensive graph-based optimizations. It will do strength reduction, where it replaces certain code with syntactically totally unequal but semantically equivalent code (the old "xor foo, foo" is just the simplest, though outdated, optimization of that kind). If you kindly ask it, you may omit IEEE floating point standards and enable even more optimizations, like floating point operand re-ordering. After it has massaged and massacred your code, it might repeat the whole process, because often, certain optimizations lay the foundation for even more optimizations. It might also just retry with shuffled parameters and see how the other variant scores in its internal ranking. And it will use this kind of logic throughout your code.
So as you can see, there are lots of reasons why certain C++ or C implementations will be faster.
Now this all said, many optimizations can be made in C++ that will blow away anything that you could do with C#, especially in the number crunching, realtime and close-to-metal realm, but not exclusively there. You don't even have to touch a single pointer to come a long way.
So depending on what you're writing I would go with one or the other. But if you're writing something that isn't hardware dependent (driver, video game, etc), I wouldn't worry about the performance of C# (again can't speak about Java). It'll do just fine.
<<<<<<<<<<
Generally, certain generalized arguments might sound cool in specific posts, but don't generally sound certainly credible.
Anyways, to make peace: AOT is great, as is JIT. The only correct answer can be: It depends. And the real smart people know that you can use the best of both worlds anyways.
A: The compile-for-specific-CPU optimizations are usually overrated. Just take a program in C++, compile it with optimization for Pentium Pro, and run it on a Pentium 4. Then recompile with optimization for Pentium 4. I spent long afternoons doing this with several programs. General results? Usually less than a 2-3% performance increase. So the theoretical JIT advantages are almost nil. Most differences in performance can only be observed when using scalar data processing features, something that will eventually need manual fine tuning to achieve maximum performance anyway. Optimizations of that sort are slow and costly to perform, making them sometimes unsuitable for JIT anyway.
In the real world, with real applications, C++ is still usually faster than Java, mainly because of a lighter memory footprint that results in better cache performance.
But to use all of C++'s capability, you, the developer, must work hard. You can achieve superior results, but you must use your brain for that. C++ is a language that decided to present you with more tools, charging the price that you must learn them to be able to use the language well.
A: JIT (Just In Time Compiling) can be incredibly fast because it optimizes for the target platform.
This means that it can take advantage of any compiler trick your CPU can support, regardless of what CPU the developer wrote the code on.
The basic concept of the .NET JIT works like this (heavily simplified):
Calling a method for the first time:
*
*Your program code calls a method Foo()
*The CLR looks at the type that implements Foo() and gets the metadata associated with it
*From the metadata, the CLR knows what memory address the IL (Intermediate byte code) is stored in.
*The CLR allocates a block of memory, and calls the JIT.
*The JIT compiles the IL into native code, places it into the allocated memory, and then changes the function pointer in Foo()'s type metadata to point to this native code.
*The native code is run.
Calling a method for the second time:
*
*Your program code calls a method Foo()
*The CLR looks at the type that implements Foo() and finds the function pointer in the metadata.
*The native code at this memory location is run.
As you can see, the second time around, it's virtually the same process as C++, except with the advantage of real-time optimizations.
That said, there are still other overhead issues that slow down a managed language, but the JIT helps a lot.
A: It would only happen if the Java interpreter is producing machine code that is actually better optimized than the machine code your compiler is generating for the C++ code you are writing, to the point where the C++ code is slower than the Java code plus its interpretation cost.
However, the odds of that actually happening are pretty low - unless perhaps Java has a very well-written library, and you have your own poorly written C++ library.
A: Actually, C# does not really run in a virtual machine like Java does. IL is compiled into assembly language, which is entirely native code and runs at the same speed as native code. You can pre-JIT a .NET application, which entirely removes the JIT cost, and then you are running entirely native code.
The slowdown with .NET will come not because .NET code is slower, but because it does a lot more behind the scenes to do things like garbage collect, check references, store complete stack frames, etc. This can be quite powerful and helpful when building applications, but also comes at a cost. Note that you could do all these things in a C++ program as well (much of the core .NET functionality is actually .NET code which you can view in ROTOR). However, if you hand wrote the same functionality you would probably end up with a much slower program since the .NET runtime has been optimized and finely tuned.
That said, one of the strengths of managed code is that it can be fully verifiable, ie. you can verify that the code will never access another process's memory or do unsafe things before you execute it. Microsoft has a research prototype of a fully managed operating system that has surprisingly shown that a 100% managed environment can actually perform significantly faster than any modern operating system by taking advantage of this verification to turn off security features that are no longer needed by managed programs (we are talking like 10x in some cases). SE Radio has a great episode talking about this project.
A: JIT vs. Static Compiler
As already said in the previous posts, JIT can compile IL/bytecode into native code at runtime. The cost of that was mentioned, but not to its conclusion:
JIT has one massive problem: it can't compile everything. JIT compiling takes time, so the JIT will compile only some parts of the code, whereas a static compiler will produce a full native binary. For some kinds of programs, the static compiler will simply and easily outperform the JIT.
Of course, C# (or Java, or VB) is usually faster at producing a viable and robust solution than C++ is (if only because C++ has complex semantics, and the C++ standard library, while interesting and powerful, is quite poor compared with the full scope of the standard library from .NET or Java), so usually, the difference between C++ and .NET or Java JIT won't be visible to most users, and for those binaries that are critical, well, you can still call C++ processing from C# or Java (even if this kind of native call can be quite costly in itself)...
C++ metaprogramming
Note that usually, you are comparing C++ runtime code with its equivalent in C# or Java. But C++ has one feature that can outperform Java/C# out of the box, and that is template metaprogramming: the code processing is done at compilation time (thus vastly increasing compilation time), resulting in zero (or almost zero) runtime cost.
I have yet to see a real-life effect of this (I played only with concepts, but by then, the difference was seconds of execution for JIT, and zero for C++), but it is worth mentioning, alongside the fact that template metaprogramming is not trivial...
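For illustration, here is the classic compile-time factorial, a minimal sketch of the technique: the computation happens entirely during template instantiation, so the runtime cost is zero.
#include <iostream>

template <unsigned N>
struct Factorial {
    // each instantiation recursively instantiates the next, at compile time
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    static const unsigned long value = 1; // base case stops the recursion
};

int main() {
    std::cout << Factorial<10>::value << std::endl; // the compiler folds this to 3628800
    return 0;
}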
Edit 2011-06-10: In C++, playing with types is done at compile time, meaning producing generic code which calls non-generic code (e.g. a generic parser from string to type T, calling standard library API for types T it recognizes, and making the parser easily extensible by its user) is very easy and very efficient, whereas the equivalent in Java or C# is painful at best to write, and will always be slower and resolved at runtime even when the types are known at compile time, meaning your only hope is for the JIT to inline the whole thing.
...
Edit 2011-09-20: The team behind Blitz++ (Homepage, Wikipedia) went that way, and apparently, their goal is to reach FORTRAN's performance on scientific calculations by moving as much as possible from runtime execution to compilation time, via C++ template metaprogramming. So the "I have yet to see a real-life effect of this" part I wrote above apparently does exist in real life.
Native C++ Memory Usage
C++ has a memory usage different from Java/C#, and thus, has different advantages/flaws.
No matter the JIT optimization, nothing will go as fast as direct pointer access to memory (let's ignore for a moment processor caches, etc.). So, if you have contiguous data in memory, accessing it through C++ pointers (i.e. C pointers... let's give Caesar his due) will go many times faster than in Java/C#. And C++ has RAII, which makes a lot of processing a lot easier than in C# or even in Java. C++ does not need using to scope the existence of its objects. And C++ does not have a finally clause. This is not an error.
:-)
And despite C# primitive-like structs, C++ "on the stack" objects will cost nothing at allocation and destruction, and will need no GC to work in an independent thread to do the cleaning.
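For readers less familiar with RAII, a minimal sketch (the File wrapper and file name are placeholders):
#include <cstdio>

// RAII: the resource is released deterministically when the object
// goes out of scope -- no GC, no finally, no using block.
class File {
    std::FILE* handle;
public:
    explicit File(const char* path) : handle(std::fopen(path, "r")) {}
    ~File() { if (handle) std::fclose(handle); }
    std::FILE* get() const { return handle; }
private:
    File(const File&);            // non-copyable (pre-C++11 idiom)
    File& operator=(const File&);
};

void readSomething() {
    File f("data.txt"); // a cheap stack object
    // ... use f.get() ...
}                        // fclose runs here, even if an exception is thrown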
As for memory fragmentation, memory allocators in 2008 are not the old memory allocators from 1980 that are usually compared with a GC: C++ allocation can't be moved in memory, true, but then, like on a Linux filesystem: who needs hard disk defragmenting when fragmentation does not happen? Using the right allocator for the right task should be part of the C++ developer toolkit. Now, writing allocators is not easy, and then, most of us have better things to do, and for most of us, RAII or GC is more than good enough.
Edit 2011-10-04: For examples of efficient allocators: on Windows platforms, since Vista, the Low Fragmentation Heap is enabled by default. For previous versions, the LFH can be activated by calling the WinAPI function HeapSetInformation. On other OSes, alternative allocators are provided (see https://secure.wikimedia.org/wikipedia/en/wiki/Malloc for a list).
Now, the memory model is becoming somewhat more complicated with the rise of multicore and multithreading technology. In this field, I guess .NET has the advantage, and Java, I was told, holds the upper ground. It's easy for some "on the bare metal" hacker to praise his "near the machine" code. But now, it is much more difficult to produce better assembly by hand than by letting the compiler do its job. For C++, the compiler has usually been better than the hacker for a decade. For C# and Java, this is even easier.
Still, the new standard C++0x will impose a simple memory model on C++ compilers, which will standardize (and thus simplify) effective multiprocessing/parallel/threading code in C++, and make optimizations easier and safer for compilers. But then, we'll see in a couple of years if its promises hold true.
C++/CLI vs. C#/VB.NET
Note: In this section, I am talking about C++/CLI, that is, the C++ hosted by .NET, not the native C++.
Last week, I had a training on .NET optimization, and discovered that the static compiler is very important anyway. As important as the JIT.
The very same code compiled in C++/CLI (or its ancestor, Managed C++) could be several times faster than the same code produced in C# (or VB.NET, whose compiler produces the same IL as C#).
Because the C++ static compiler was a lot better at producing already-optimized code than C#'s.
For example, function inlining in .NET is limited to functions whose bytecode is 32 bytes or less in length. So, some code in C# will produce a 40-byte accessor, which won't ever be inlined by the JIT. The same code in C++/CLI will produce a 20-byte accessor, which will be inlined by the JIT.
Another example is temporary variables, which are simply compiled away by the C++ compiler while still being mentioned in the IL produced by the C# compiler. C++'s static compilation optimization results in less code, which in turn allows more aggressive JIT optimization.
The reason for this was speculated to be the fact that the C++/CLI compiler profited from the vast optimization techniques of the native C++ compiler.
Conclusion
I love C++.
But as far as I see it, C# or Java are all in all a better bet. Not because they are faster than C++, but because when you add up their qualities, they end up being more productive, needing less training, and having more complete standard libraries than C++. And for most programs, the speed differences (one way or the other) will be negligible...
Edit (2011-06-06)
My experience on C#/.NET
I have now 5 months of almost exclusive professional C# coding (which adds up to my CV already full of C++ and Java, and a touch of C++/CLI).
I played with WinForms (Ahem...) and WCF (cool!), and WPF (Cool!!!! Both through XAML and raw C#. WPF is so easy I believe Swing just cannot compare to it), and C# 4.0.
The conclusion is that while it's easier/faster to produce code that works in C#/Java than in C++, it's a lot harder to produce strong, safe, and robust code in C# (and even harder in Java) than in C++. Reasons abound, but they can be summarized by:
*
*Generics are not as powerful as templates (try to write an efficient generic Parse method (from string to T), or an efficient equivalent of boost::lexical_cast in C#, to understand the problem; a minimal sketch of the straightforward approach follows this list)
*RAII remains unmatched (the GC can still leak (yes, I had to handle that problem) and only handles memory. Even C#'s using is not as easy and powerful, because writing a correct Dispose implementation is difficult)
*C#'s readonly and Java's final are nowhere near as useful as C++'s const (there's no way you can expose read-only complex data (a Tree of Nodes, for example) in C# without tremendous work, while it's a built-in feature of C++. Immutable data is an interesting solution, but not everything can be made immutable, so it's not even enough, by far).
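As a minimal sketch of the straightforward generic Parse mentioned in the first point (the Parser and Demo names are placeholders): the conversion is chosen at runtime via Convert.ChangeType, so unlike a C++ template it cannot be specialized per T at compile time, and value types pay boxing/unboxing on every call.
using System;
using System.Globalization;

static class Parser
{
    // The conversion is resolved at runtime for every call;
    // boxing/unboxing happens for value types such as int.
    public static T Parse<T>(string input)
    {
        return (T)Convert.ChangeType(input, typeof(T), CultureInfo.InvariantCulture);
    }
}

class Demo
{
    static void Main()
    {
        int n = Parser.Parse<int>("42");
        double d = Parser.Parse<double>("3.14");
        Console.WriteLine(n + " " + d);
    }
}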
So, C# remains a pleasant language as long as you want something that works, but a frustrating language the moment you want something that always and safely works.
Java is even more frustrating, as it has the same problems as C#, and more: lacking the equivalent of C#'s using keyword, a very skilled colleague of mine spent too much time making sure his resources were correctly freed, whereas the equivalent in C++ would have been easy (using destructors and smart pointers).
So I guess C#/Java's productivity gain is visible for most code... until the day you need the code to be as perfect as possible. That day, you'll know pain. (You won't believe what's asked of our server and GUI apps...)
About Server-side Java and C++
I kept contact with the server teams (I worked 2 years among them, before getting back to the GUI team), at the other side of the building, and I learned something interesting.
In recent years, the trend was to have the Java server apps replace the old C++ server apps, as Java has a lot of frameworks/tools and is easy to maintain, deploy, etc.
...Until the problem of low latency reared its ugly head in recent months. Then the Java server apps, no matter the optimizations attempted by our skilled Java team, simply and cleanly lost the race against the old, not-really-optimized C++ server.
Currently, the decision is to keep the Java servers for common use, where performance, while still important, is not subject to the low-latency target, and to aggressively optimize the already-faster C++ server applications for low-latency and ultra-low-latency needs.
Conclusion
Nothing is as simple as expected.
Java, and even more C#, are cool languages, with extensive standard libraries and frameworks, where you can code fast, and have result very soon.
But when you need raw power, powerful and systematic optimizations, strong compiler support, powerful language features and absolute safety, Java and C# make it difficult to win the last missing but critical percents of quality you need to remain above the competition.
It's as if you needed less time and less experienced developers in C#/Java than in C++ to produce average quality code, but in the other hand, the moment you needed excellent to perfect quality code, it was suddenly easier and faster to get the results right in C++.
Of course, this is my own perception, perhaps limited to our specific needs.
But still, it is what happens today, both in the GUI teams and the server-side teams.
Of course, I'll update this post if something new happens.
Edit (2011-06-22)
"We find that in regards to performance, C++ wins out by
a large margin. However, it also required the most extensive
tuning efforts, many of which were done at a level of sophistication
that would not be available to the average programmer.
[...] The Java version was probably the simplest to implement, but the hardest to analyze for performance. Specifically the effects around garbage collection were complicated and very hard to tune."
Sources:
*
*https://days2011.scala-lang.org/sites/days2011/files/ws3-1-Hundt.pdf
*http://www.computing.co.uk/ctg/news/2076322/-winner-google-language-tests
Edit (2011-09-20)
"The going word at Facebook is that 'reasonably written C++ code just runs fast,' which underscores the enormous effort spent at optimizing PHP and Java code. Paradoxically, C++ code is more difficult to write than in other languages, but efficient code is a lot easier [to write in C++ than in other languages]."
– Herb Sutter at //build/, quoting Andrei Alexandrescu
Sources:
*
*http://channel9.msdn.com/Events/BUILD/BUILD2011/TOOL-835T
*http://video.ch9.ms/build/2011/slides/TOOL-835T_Sutter.pptx
A: Generally, C# and Java can be just as fast or faster because the JIT compiler -- a compiler that compiles your IL the first time it's executed -- can make optimizations that a C++ compiled program cannot because it can query the machine. It can determine if the machine is Intel or AMD; Pentium 4, Core Solo, or Core Duo; or if supports SSE4, etc.
A C++ program has to be compiled beforehand usually with mixed optimizations so that it runs decently well on all machines, but is not optimized as much as it could be for a single configuration (i.e. processor, instruction set, other hardware).
Additionally certain language features allow the compiler in C# and Java to make assumptions about your code that allows it to optimize certain parts away that just aren't safe for the C/C++ compiler to do. When you have access to pointers there's a lot of optimizations that just aren't safe.
Also Java and C# can do heap allocations more efficiently than C++ because the layer of abstraction between the garbage collector and your code allows it to do all of its heap compression at once (a fairly expensive operation).
Now I can't speak for Java on this next point, but I know that C# for example will actually remove methods and method calls when it knows the body of the method is empty. And it will use this kind of logic throughout your code.
So as you can see, there are lots of reasons why certain C# or Java implementations will be faster.
Now this all said, specific optimizations can be made in C++ that will blow away anything that you could do with C#, especially in the graphics realm and anytime you're close to the hardware. Pointers do wonders here.
So depending on what you're writing I would go with one or the other. But if you're writing something that isn't hardware dependent (driver, video game, etc), I wouldn't worry about the performance of C# (again can't speak about Java). It'll do just fine.
On the Java side, @Swati points out a good article:
https://www.ibm.com/developerworks/library/j-jtp09275
A: I like Orion Adrian's answer, but there is another aspect to it.
The same question was posed decades ago about assembly language vs. "human" languages like FORTRAN. And part of the answer is similar.
Yes, a C++ program is capable of being faster than C# on any given (non-trivial?) algorithm, but the program in C# will often be as fast or faster than a "naive" implementation in C++, and an optimized version in C++ will take longer to develop, and might still beat the C# version by a very small margin. So, is it really worth it?
You'll have to answer that question on a one-by-one basis.
That said, I'm a long time fan of C++, and I think it's an incredibly expressive and powerful language -- sometimes underappreciated. But in many "real life" problems (to me personally, that means "the kind I get paid to solve"), C# will get the job done sooner and safer.
The biggest penalty you pay? Many .NET and Java programs are memory hogs. I have seen .NET and Java apps take "hundreds" of megabytes of memory, when C++ programs of similar complexity barely scratch the "tens" of MBs.
A: In some cases, managed code can actually be faster than native code. For instance, "mark-and-sweep" garbage collection algorithms allow environments like the JRE or CLR to free large numbers of short-lived (usually) objects in a single pass, where most C/C++ heap objects are freed one-at-a-time.
From wikipedia:
For many practical purposes, allocation/deallocation-intensive algorithms implemented in garbage collected languages can actually be faster than their equivalents using manual heap allocation. A major reason for this is that the garbage collector allows the runtime system to amortize allocation and deallocation operations in a potentially advantageous fashion.
That said, I've written a lot of C# and a lot of C++, and I've run a lot of benchmarks. In my experience, C++ is a lot faster than C#, in two ways: (1) if you take some code that you've written in C# and port it to C++, the native code tends to be faster. How much faster? Well, it varies a whole lot, but it's not uncommon to see a 100% speed improvement. (2) In some cases, garbage collection can massively slow down a managed application. The .NET CLR does a terrible job with large heaps (say, > 2GB), and can end up spending a lot of time in GC--even in applications that have few--or even no--objects of intermediate life spans.
Of course, in most cases that I've encountered, managed languages are fast enough, by a long shot, and the maintenance and coding tradeoff for the extra performance of C++ is simply not a good one.
A: Here's an interesting benchmark
http://zi.fi/shootout/
A: Actually Sun's HotSpot JVM uses "mixed-mode" execution. It interprets the method's bytecode until it determines (usually through a counter of some sort) that a particular block of code (method, loop, try-catch block, etc.) is going to be executed a lot, and then it JIT compiles it. For a seldom-run method, the time required to JIT compile it often exceeds the time it would take simply to interpret it. Performance is usually higher in "mixed-mode" because the JVM does not waste time JITing code that is rarely, if ever, run.
C# and .NET do not do this. .NET JITs everything which, often times, wastes time.
A: Here is answer from Cliff Click: http://www.azulsystems.com/blog/cliff/2009-09-06-java-vs-c-performanceagain
A: Go read about HP Labs' Dynamo, an interpreter for PA-8000 that runs on PA-8000, and often runs programs faster than they do natively. Then it won't seem at all surprising!
Don't think of it as an "intermediate step" -- running a program involves lots of other steps already, in any language.
It often comes down to:
*
*programs have hot-spots, so even if you're slower running 95% of the body of code you have to run, you can still be performance-competitive if you're faster at the hot 5%
*a HLL knows more about your intent than a LLL like C/C++, and so can generate more optimized code (OCaml has even more, and in practice is often even faster)
*a JIT compiler has a lot of information that a static compiler doesn't (like, the actual data you happen to have this time)
*a JIT compiler can do optimizations at run-time that traditional linkers aren't really allowed to do (like reordering branches so the common case is flat, or inlining library calls)
All in all, C/C++ are pretty lousy languages for performance: there's relatively little information about your data types, no information about your data, and no dynamic runtime to allow much in the way of run-time optimization.
A: You might get short bursts when Java or CLR is faster than C++, but overall the performance is worse for the life of the application:
see www.codeproject.com/KB/dotnet/RuntimePerformance.aspx for some results for that.
A:
My understanding is that C/C++ produces native code to run on a particular machine architecture. Conversely, languages like Java and C# run on top of a virtual machine which abstracts away the native architecture. Logically it would seem impossible for Java or C# to match the speed of C++ because of this intermediate step, however I've been told that the latest compilers ("hot spot") can attain this speed or even exceed it.
That is illogical. The use of an intermediate representation does not inherently degrade performance. For example, llvm-gcc compiles C and C++ via LLVM IR (which is a virtual infinite-register machine) to native code and it achieves excellent performance (often beating GCC).
Perhaps this is more of a compiler question than a language question, but can anyone explain in plain English how it is possible for one of these virtual machine languages to perform better than a native language?
Here are some examples:
*
*Virtual machines with JIT compilation facilitate run-time code generation (e.g. System.Reflection.Emit on .NET), so you can compile generated code on-the-fly in languages like C# and F#, but must resort to writing a comparatively slow interpreter in C or C++; for example, to implement regular expressions. (A minimal sketch of run-time code generation follows this list.)
*Parts of the virtual machine (e.g. the write barrier and allocator) are often written in hand-coded assembler because C and C++ do not generate fast enough code. If a program stresses these parts of a system then it could conceivably outperform anything that can be written in C or C++.
*Dynamic linking of native code requires conformance to an ABI that can impede performance and obviates whole-program optimization whereas linking is typically deferred on VMs and can benefit from whole-program optimizations (like .NET's reified generics).
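To make the first bullet concrete, here is a minimal sketch of run-time code generation with System.Reflection.Emit: it builds a tiny add function at runtime and lets the JIT compile it to native code (the names are placeholders).
using System;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // Emit IL for (a, b) => a + b at runtime.
        var dm = new DynamicMethod("Add", typeof(int),
                                   new[] { typeof(int), typeof(int) });
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldarg_1);
        il.Emit(OpCodes.Add);
        il.Emit(OpCodes.Ret);

        // CreateDelegate triggers JIT compilation to native code.
        var add = (Func<int, int, int>)dm.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(add(2, 3)); // prints 5
    }
}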
I'd also like to address some issues with paercebal's highly-upvoted answer above (because someone keeps deleting my comments on his answer) that presents a counter-productively polarized view:
The code processing will be done at compilation time...
Hence template metaprogramming only works if the program is available at compile time which is often not the case, e.g. it is impossible to write a competitively performant regular expression library in vanilla C++ because it is incapable of run-time code generation (an important aspect of metaprogramming).
...playing with types is done at compile time...the equivalent in Java or C# is painful at best to write, and will always be slower and resolved at runtime even when the types are known at compile time.
In C#, that is only true of reference types and is not true for value types.
No matter the JIT optimization, nothing will go as fast as direct pointer access to memory...if you have contiguous data in memory, accessing it through C++ pointers (i.e. C pointers... let's give Caesar his due) will go many times faster than in Java/C#.
People have observed Java beating C++ on the SOR test from the SciMark2 benchmark precisely because pointers impede aliasing-related optimizations.
Also worth noting that .NET does type specialization of generics across dynamically-linked libraries after linking whereas C++ cannot because templates must be resolved before linking. And obviously the big advantage generics have over templates is comprehensible error messages.
A: On top of what some others have said, from my understanding .NET and Java are better at memory allocation. E.g. they can compact memory as it gets fragmented while C++ cannot (natively, but it can if you're using a clever garbage collector).
A: For anything needing lots of speed, the JVM just calls a C++ implementation, so it's a question more of how good their libs are than how good the JVM is for most OS related things.
Garbage collection cuts your memory in half, but using some of the fancier STL and Boost features will have the same effect but with many times the bug potential.
If you are just using C++ libraries and lots of its high level features in a large project with many classes you will probably wind up slower than using a JVM. Except much more error prone.
However, the benefit of C++ is that it allows you to optimize yourself; otherwise you are stuck with what the compiler/JVM does. If you make your own containers, write your own aligned memory management, use SIMD, and drop to assembly here and there, you can speed up at least 2x-4x over what most C++ compilers will do on their own. For some operations, 16x-32x. That's using the same algorithms; if you use better algorithms and parallelize, the increases can be dramatic, sometimes thousands of times faster than commonly used methods.
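As a rough sketch of the kind of manual SIMD alluded to above, here is an SSE loop that adds two float arrays four lanes per instruction; it assumes n is a multiple of 4 and 16-byte-aligned pointers:
#include <xmmintrin.h> // SSE intrinsics

void addArrays(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);            // load 4 floats
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(va, vb)); // add and store 4 at once
    }
}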
A: I look at it from a few different points.
*
*Given infinite time and resources, will managed or unmanaged code be faster? Clearly, the answer is that unmanaged code can always at least tie managed code in this aspect - as in the worst case, you'd just hard-code the managed code solution.
*If you take a program in one language, and directly translate it to another, how much worse will it perform? Probably a lot, for any two languages. Most languages require different optimizations and have different gotchas. Micro-performance is often a lot about knowing these details.
*Given finite time and resources, which of two languages will produce a better result? This is the most interesting question, as while a managed language may produce slightly slower code (given a program reasonably written for that language), that version will likely be done sooner, allowing for more time spent on optimization.
A: A very short answer: given a fixed budget, you will achieve a better-performing Java application than a C++ application (ROI considerations). In addition, the Java platform has more decent profilers, which will help you pinpoint your hotspots more quickly.
A: I don't think performance needs to be considered in terms of server processing speed these days (after all, multi-core processors are on the market). It should be defined more in terms of memory usage, and in that respect Java is at a slight disadvantage.
Still, all in all, programming languages are suited to different purposes, and that is where they compete; each area has a different winner.
And I am sure Java will win in the long run, given its continued development and the competitiveness it shows in all the new features it produces.
I found a link here that supports my reasons for voting for Java.
Q: What's the best strategy for unit-testing database-driven applications? I work with a lot of web applications that are driven by databases of varying complexity on the backend. Typically, there's an ORM layer separate from the business and presentation logic. This makes unit-testing the business logic fairly straightforward; things can be implemented in discrete modules and any data needed for the test can be faked through object mocking.
But testing the ORM and database itself has always been fraught with problems and compromises.
Over the years, I have tried a few strategies, none of which completely satisfied me.
*
*Load a test database with known data. Run tests against the ORM and confirm that the right data comes back. The disadvantage here is that your test DB has to keep up with any schema changes in the application database, and might get out of sync. It also relies on artificial data, and may not expose bugs that occur due to stupid user input. Finally, if the test database is small, it won't reveal inefficiencies like a missing index. (OK, that last one isn't really what unit testing should be used for, but it doesn't hurt.)
*Load a copy of the production database and test against that. The problem here is that you may have no idea what's in the production DB at any given time; your tests may need to be rewritten if data changes over time.
Some people have pointed out that both of these strategies rely on specific data, and a unit test should test only functionality. To that end, I've seen suggested:
*
*Use a mock database server, and check only that the ORM is sending the correct queries in response to a given method call.
What strategies have you used for testing database-driven applications, if any? What has worked the best for you?
A: I'm always running tests against an in-memory DB (HSQLDB or Derby) for these reasons:
*
*It makes you think which data to keep in your test DB and why. Just hauling your production DB into a test system translates to "I have no idea what I'm doing or why and if something breaks, it wasn't me!!" ;)
*It makes sure the database can be recreated with little effort in a new place (for example when we need to replicate a bug from production)
*It helps enormously with the quality of the DDL files.
The in-memory DB is loaded with fresh data once the tests start and after most tests, I invoke ROLLBACK to keep it stable. ALWAYS keep the data in the test DB stable! If the data changes all the time, you can't test.
The data is loaded from SQL, a template DB or a dump/backup. I prefer dumps if they are in a readable format because I can put them in VCS. If that doesn't work, I use a CSV file or XML. If I have to load enormous amounts of data ... I don't. You never have to load enormous amounts of data :) Not for unit tests. Performance tests are another issue and different rules apply.
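As a minimal sketch of this setup in Java, using HSQLDB's in-memory mode (assuming the HSQLDB jar is on the classpath; the table and data are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDbTest {
    public static void main(String[] args) throws Exception {
        // "mem:" keeps the whole database in memory; it vanishes when the
        // process exits, so every run starts from a clean slate.
        Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");

        Statement s = c.createStatement();
        s.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");

        c.setAutoCommit(false);
        s.execute("INSERT INTO users VALUES (1, 'alice')");

        ResultSet rs = s.executeQuery("SELECT name FROM users WHERE id = 1");
        rs.next();
        System.out.println(rs.getString(1)); // prints: alice

        c.rollback(); // roll back the data changes to keep the fixture stable
        c.close();
    }
}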
A: I use the first (running the code against a test database). The only substantive issue I see you raising with this approach is the possibility of schemas getting out of sync, which I deal with by keeping a version number in my database and making all schema changes via a script which applies the changes for each version increment.
I also make all changes (including to the database schema) against my test environment first, so it ends up being the other way around: After all tests pass, apply the schema updates to the production host. I also keep a separate pair of testing vs. application databases on my development system so that I can verify there that the db upgrade works properly before touching the real production box(es).
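A minimal sketch of that versioning convention, with placeholder table and column names:
-- A single-row table records which schema version has been applied.
CREATE TABLE schema_version (version INT NOT NULL);
INSERT INTO schema_version VALUES (1);

-- Each migration script applies its changes, then bumps the version
-- as its last step, e.g. migration_002.sql:
ALTER TABLE users ADD COLUMN email VARCHAR(255);
UPDATE schema_version SET version = 2;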
A: I'm using the first approach but a bit different that allows to address the problems you mentioned.
Everything that is needed to run tests for DAOs is in source control. It includes the schema and scripts to create the DB (Docker is very good for this). If an embedded DB can be used, I use it for speed.
The important difference from the other described approaches is that the data required for a test is not loaded from SQL scripts or XML files. Everything (except some dictionary data that is effectively constant) is created by the application using utility functions/classes.
The main purpose is to make the data used by a test
*
*very close to the test
*explicit (using SQL files for data make it very problematic to see what piece of data is used by what test)
*isolated from unrelated changes.
It basically means that these utilities allow you to declaratively specify, in the test itself, only the things essential for the test, and omit irrelevant things.
To give some idea of what this means in practice, consider the test for some DAO which works with Comments on Posts written by Authors. In order to test CRUD operations for such a DAO, some data should be created in the DB. The test would look like:
@Test
public void savedCommentCanBeRead() {
// Builder is needed to declaratively specify the entity with all attributes relevant
// for this specific test
// Missing attributes are generated with reasonable values
// factory's responsibility is to create entity (and all entities required by it
// in our example Author) in the DB
Post post = factory.create(PostBuilder.post());
Comment comment = CommentBuilder.comment().forPost(post).build();
sut.save(comment);
Comment savedComment = sut.get(comment.getId());
// this checks fields that are directly stored
assertThat(saveComment, fieldwiseEqualTo(comment));
// if there are some fields that are generated during save check them separately
assertThat(saveComment.getGeneratedField(), equalTo(expectedValue));
}
This has several advantages over SQL scripts or XML files with test data:
*
*Maintaining the code is much easier (adding a mandatory column, for example, to some entity that is referenced in many tests, like Author, does not require changing lots of files/records, only a change in the builder and/or factory)
*The data required by a specific test is described in the test itself and not in some other file. This proximity is very important for test comprehensibility.
Rollback vs Commit
I find it more convenient for tests to commit when they are executed. Firstly, some effects (for example DEFERRED CONSTRAINTS) cannot be checked if a commit never happens. Secondly, when a test fails, the data can be examined in the DB, as it has not been reverted by a rollback.
Of course this has the downside that a test may produce broken data, and this will lead to failures in other tests. To deal with this I try to isolate the tests. In the example above, every test may create a new Author, and all other entities are created related to it, so collisions are rare. To deal with the remaining invariants that can potentially be broken but cannot be expressed as a DB-level constraint, I use some programmatic checks for erroneous conditions that may be run after every single test (they are run in CI, but usually switched off locally for performance reasons).
A: For a JDBC-based project (directly or indirectly, e.g. JPA, EJB, ...), you can mock up not the entire database (in that case it would be better to use a test DB on a real RDBMS), but only the JDBC level.
The advantage is the abstraction that comes with this approach, as JDBC data (result set, update count, warning, ...) is the same whatever the backend: your prod DB, a test DB, or just some mock data provided for each test case.
With the JDBC connection mocked up for each case, there is no need to manage a test DB (cleanup, only one test at a time, reloading fixtures, ...). Every mocked connection is isolated and there is no need to clean up. Only the minimal required fixtures are provided in each test case to mock up the JDBC exchange, which helps avoid the complexity of managing a whole test DB.
Acolyte is my framework which includes a JDBC driver and utility for this kind of mockup: http://acolyte.eu.org .
A: I've actually used your first approach with quite some success, but in a slightly different way that I think would solve some of your problems:
*
*Keep the entire schema and scripts for creating it in source control so that anyone can create the current database schema after a check out. In addition, keep sample data in data files that get loaded by part of the build process. As you discover data that causes errors, add it to your sample data to check that errors don't re-emerge.
*Use a continuous integration server to build the database schema, load the sample data, and run tests. This is how we keep our test database in sync (rebuilding it at every test run). Though this requires that the CI server have access to and ownership of its own dedicated database instance, I say that having our db schema built 3 times a day has dramatically helped find errors that probably would not have been found till just before delivery (if not later). I can't say that I rebuild the schema before every commit. Does anybody? With this approach you won't have to (well, maybe we should, but it's not a big deal if someone forgets).
*For my group, user input is done at the application level (not db) so this is tested via standard unit tests.
Loading Production Database Copy:
This was the approach used at my last job. It was a huge pain because of a couple of issues:
*
*The copy would get out of date from the production version
*Changes would be made to the copy's schema and wouldn't get propagated to the production systems. At this point we'd have diverging schemas. Not fun.
Mocking Database Server:
We also do this at my current job. After every commit we execute unit tests against the application code that have mock db accessors injected. Then three times a day we execute the full db build described above. I definitely recommend both approaches.
A: I have been asking this question for a long time, but I think there is no silver bullet for that.
What I currently do is mock the DAO objects and keep an in-memory representation of a good collection of objects that represent interesting cases of data that could live in the database.
The main problem I see with that approach is that you're covering only the code that interacts with your DAO layer, but never testing the DAO itself, and in my experience a lot of errors happen in that layer as well. I also keep a few unit tests that run against the database (for the sake of using TDD or quick testing locally), but those tests are never run on my continuous integration server, since we don't keep a database for that purpose and I think tests that run on a CI server should be self-contained.
Another approach I find very interesting, but not always worthwhile since it is a little time-consuming, is to create the same schema you use for production on an embedded database that runs within the unit tests.
Even though there's no question this approach improves your coverage, there are a few drawbacks, since you have to be as close as possible to ANSI SQL to make it work both with your current DBMS and the embedded replacement.
No matter what you think is more relevant for your code, there are a few projects out there that may make it easier, like DbUnit.
A: Even if there are tools that allow you to mock your database in one way or another (e.g. jOOQ's MockConnection, which can be seen in this answer - disclaimer, I work for jOOQ's vendor), I would advise not to mock larger databases with complex queries.
Even if you just want to integration-test your ORM, beware that an ORM issues a very complex series of queries to your database, which may vary in
*
*syntax
*complexity
*order (!)
Mocking all that to produce sensible dummy data is quite hard, unless you're actually building a little database inside your mock, which interprets the transmitted SQL statements. Having said that, use a well-known integration-test database that you can easily reset with well-known data, against which you can run your integration tests.
Q: What should my Objective-C singleton look like? My singleton accessor method is usually some variant of:
static MyClass *gInstance = NULL;
+ (MyClass *)instance
{
@synchronized(self)
{
if (gInstance == NULL)
gInstance = [[self alloc] init];
}
return(gInstance);
}
What could I be doing to improve this?
A: @interface MySingleton : NSObject
{
}
+ (MySingleton *)sharedSingleton;
@end
@implementation MySingleton
+ (MySingleton *)sharedSingleton
{
static MySingleton *sharedSingleton;
@synchronized(self)
{
if (!sharedSingleton)
sharedSingleton = [[MySingleton alloc] init];
return sharedSingleton;
}
}
@end
[Source]
A: I have an interesting variation on sharedInstance that is thread safe, but does not lock after the initialization. I am not yet sure enough of it to modify the top answer as requested, but I present it for further discussion:
// Volatile to make sure we are not foiled by CPU caches
static volatile MySingleton *sharedInstance;
// There's no need to call this directly, as method swizzling in sharedInstance
// means this will get called after the singleton is initialized.
+ (MySingleton *)simpleSharedInstance
{
return (MySingleton *)sharedInstance;
}
+ (MySingleton*)sharedInstance
{
@synchronized(self)
{
if (sharedInstance == nil)
{
sharedInstance = [[MySingleton alloc] init];
// Replace expensive thread-safe method
// with the simpler one that just returns the allocated instance.
SEL origSel = @selector(sharedInstance);
SEL newSel = @selector(simpleSharedInstance);
Method origMethod = class_getClassMethod(self, origSel);
Method newMethod = class_getClassMethod(self, newSel);
method_exchangeImplementations(origMethod, newMethod);
}
}
return (MySingleton *)sharedInstance;
}
A: Short answer: Fabulous.
Long answer: Something like....
static SomeSingleton *instance = NULL;
@implementation SomeSingleton
+ (id) instance {
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
if (instance == NULL){
instance = [[super allocWithZone:NULL] init];
}
});
return instance;
}
+ (id) allocWithZone:(NSZone *)paramZone {
return [[self instance] retain];
}
- (id) copyWithZone:(NSZone *)paramZone {
return self;
}
- (id) autorelease {
return self;
}
- (NSUInteger) retainCount {
return NSUIntegerMax;
}
- (id) retain {
return self;
}
@end
Be sure to read the dispatch/once.h header to understand what's going on. In this case the header comments are more applicable than the docs or man page.
A: Per my other answer below, I think you should be doing:
+ (id)sharedFoo
{
static dispatch_once_t once;
static MyFoo *sharedFoo;
dispatch_once(&once, ^ { sharedFoo = [[self alloc] init]; });
return sharedFoo;
}
A: Since Kendall posted a threadsafe singleton that attempts to avoid locking costs, I thought I would toss one up as well:
#import <libkern/OSAtomic.h>
static void * volatile sharedInstance = nil;
+ (className *) sharedInstance {
while (!sharedInstance) {
className *temp = [[self alloc] init];
if(!OSAtomicCompareAndSwapPtrBarrier(0x0, temp, &sharedInstance)) {
[temp release];
}
}
return sharedInstance;
}
Okay, let me explain how this works:
*
*Fast case: In normal execution sharedInstance has already been set, so the while loop is never executed and the function returns after simply testing for the variable's existence;
*Slow case: If sharedInstance doesn't exist, then an instance is allocated and copied into it using a Compare And Swap ('CAS');
*Contended case: If two threads both attempt to call sharedInstance at the same time AND sharedInstance doesn't exist at the same time then they will both initialize new instances of the singleton and attempt to CAS it into position. Whichever one wins the CAS returns immediately, whichever one loses releases the instance it just allocated and returns the (now set) sharedInstance. The single OSAtomicCompareAndSwapPtrBarrier acts as both a write barrier for the setting thread and a read barrier from the testing thread.
A: I've rolled the singleton into a class, so other classes can inherit singleton properties.
Singleton.h :
static id sharedInstance = nil;
#define DEFINE_SHARED_INSTANCE + (id) sharedInstance { return [self sharedInstance:&sharedInstance]; } \
+ (id) allocWithZone:(NSZone *)zone { return [self allocWithZone:zone forInstance:&sharedInstance]; }
@interface Singleton : NSObject {
}
+ (id) sharedInstance;
+ (id) sharedInstance:(id*)inst;
+ (id) allocWithZone:(NSZone *)zone forInstance:(id*)inst;
@end
Singleton.m :
#import "Singleton.h"
@implementation Singleton
+ (id) sharedInstance {
return [self sharedInstance:&sharedInstance];
}
+ (id) sharedInstance:(id*)inst {
@synchronized(self)
{
if (*inst == nil)
*inst = [[self alloc] init];
}
return *inst;
}
+ (id) allocWithZone:(NSZone *)zone forInstance:(id*)inst {
@synchronized(self) {
if (*inst == nil) {
*inst = [super allocWithZone:zone];
return *inst; // assignment and return on first allocation
}
}
return nil; // on subsequent allocation attempts return nil
}
- (id)copyWithZone:(NSZone *)zone {
return self;
}
- (id)retain {
return self;
}
- (unsigned)retainCount {
return UINT_MAX; // denotes an object that cannot be released
}
- (void)release {
//do nothing
}
- (id)autorelease {
return self;
}
@end
And here is an example of some class, that you want to become singleton.
#import "Singleton.h"
@interface SomeClass : Singleton {
}
@end
@implementation SomeClass
DEFINE_SHARED_INSTANCE;
@end
The only limitation of the Singleton class is that it is an NSObject subclass. But most of the time the singletons in my code are in fact NSObject subclasses, so this class really eases my life and makes the code cleaner.
A: Another option is to use the +(void)initialize method. From the documentation:
The runtime sends initialize to each class in a program exactly one time just before the class, or any class that inherits from it, is sent its first message from within the program. (Thus the method may never be invoked if the class is not used.) The runtime sends the initialize message to classes in a thread-safe manner. Superclasses receive this message before their subclasses.
So you could do something akin to this:
static MySingleton *sharedSingleton;
+ (void)initialize
{
static BOOL initialized = NO;
if(!initialized)
{
initialized = YES;
sharedSingleton = [[MySingleton alloc] init];
}
}
A: This works in a non-garbage collected environment also.
@interface MySingleton : NSObject {
}
+(MySingleton *)sharedManager;
@end
@implementation MySingleton
static MySingleton *sharedMySingleton = nil;
+(MySingleton*)sharedManager {
@synchronized(self) {
if (sharedMySingleton == nil) {
[[self alloc] init]; // assignment not done here
}
}
return sharedMySingleton;
}
+(id)allocWithZone:(NSZone *)zone {
@synchronized(self) {
if (sharedMySingleton == nil) {
sharedMySingleton = [super allocWithZone:zone];
return sharedMySingleton; // assignment and return on first allocation
}
}
return nil; //on subsequent allocation attempts return nil
}
-(void)dealloc {
[super dealloc];
}
-(id)copyWithZone:(NSZone *)zone {
return self;
}
-(id)retain {
return self;
}
-(unsigned)retainCount {
return UINT_MAX; //denotes an object that cannot be release
}
-(void)release {
//do nothing
}
-(id)autorelease {
return self;
}
-(id)init {
self = [super init];
sharedMySingleton = self;
//initialize here
return self;
}
@end
A: Here's a macro that I put together:
http://github.com/cjhanson/Objective-C-Optimized-Singleton
It is based on the work here by Matt Gallagher
But changing the implementation to use method swizzling as described here by Dave MacLachlan of Google.
I welcome comments / contributions.
A: Shouldn't this be thread-safe and avoid the expensive locking after the first call?
+ (MySingleton*)sharedInstance
{
if (sharedInstance == nil) {
@synchronized(self) {
if (sharedInstance == nil) {
sharedInstance = [[MySingleton alloc] init];
}
}
}
return (MySingleton *)sharedInstance;
}
A: For an in-depth discussion of the singleton pattern in Objective-C, look here:
Using the Singleton Pattern in Objective-C
A: How about
static MyClass *gInstance = NULL;
+ (MyClass *)instance
{
if (gInstance == NULL) {
@synchronized(self)
{
if (gInstance == NULL)
gInstance = [[self alloc] init];
}
}
return(gInstance);
}
So you avoid the synchronization cost after initialization?
A:
static MyClass *sharedInst = nil;
+ (id)sharedInstance
{
@synchronized( self ) {
if ( sharedInst == nil ) {
/* sharedInst set up in init */
[[self alloc] init];
}
}
return sharedInst;
}
- (id)init
{
if ( sharedInst != nil ) {
[NSException raise:NSInternalInconsistencyException
format:@"[%@ %@] cannot be called; use +[%@ %@] instead",
NSStringFromClass([self class]), NSStringFromSelector(_cmd),
NSStringFromClass([self class]),
NSStringFromSelector(@selector(sharedInstance))];
} else if ( self = [super init] ) {
sharedInst = self;
/* Whatever class specific here */
}
return sharedInst;
}
/* These probably do nothing in
a GC app. Keeps singleton
as an actual singleton in a
non GC app
*/
- (NSUInteger)retainCount
{
return NSUIntegerMax;
}
- (oneway void)release
{
}
- (id)retain
{
return sharedInst;
}
- (id)autorelease
{
return sharedInst;
}
A: Edit: This implementation obsoleted with ARC. Please have a look at How do I implement an Objective-C singleton that is compatible with ARC? for correct implementation.
All the implementations of initialize I've read in other answers share a common error.
+ (void) initialize {
_instance = [[MySingletonClass alloc] init] // <----- Wrong!
}
+ (void) initialize {
if (self == [MySingletonClass class]){ // <----- Correct!
_instance = [[MySingletonClass alloc] init]
}
}
The Apple documentation recommends that you check the class type in your initialize block, because subclasses call initialize by default. There exists a non-obvious case where subclasses may be created indirectly through KVO. If you add the following line in another class:
[[MySingletonClass getInstance] addObserver:self forKeyPath:@"foo" options:0 context:nil]
Objective-C will implicitly create a subclass of MySingletonClass resulting in a second triggering of +initialize.
You may think that you should implicitly check for duplicate initialization in your init block as such:
- (id) init { <----- Wrong!
if (_instance != nil) {
// Some hack
}
else {
// Do stuff
}
return self;
}
But you will shoot yourself in the foot; or worse give another developer the opportunity to shoot themselves in the foot.
- (id) init { <----- Correct!
NSAssert(_instance == nil, @"Duplication initialization of singleton");
self = [super init];
if (self){
// Do stuff
}
return self;
}
TL;DR, here's my implementation
@implementation MySingletonClass
static MySingletonClass * _instance;
+ (void) initialize {
if (self == [MySingletonClass class]){
_instance = [[MySingletonClass alloc] init];
}
}
- (id) init {
ZAssert (_instance == nil, @"Duplication initialization of singleton");
self = [super init];
if (self) {
// Initialization
}
return self;
}
+ (id) getInstance {
return _instance;
}
@end
(Replace ZAssert with your own assertion macro; or just use NSAssert.)
A: A thorough explanation of the Singleton macro code is on the blog Cocoa With Love
http://cocoawithlove.com/2008/11/singletons-appdelegates-and-top-level.html.
A:
KLSingleton is:
*
*Subclassible (to the n-th degree)
*ARC compatible
*Safe with alloc and init
*Loaded lazily
*Thread-safe
*Lock-free (uses +initialize, not @synchronize)
*Macro-free
*Swizzle-free
*Simple
KLSingleton
A: You don't want to synchronize on self... Since the self object doesn't exist yet! You end up locking on a temporary id value. You want to ensure that no one else can run class methods ( sharedInstance, alloc, allocWithZone:, etc ), so you need to synchronize on the class object instead:
@implementation MYSingleton
static MYSingleton * sharedInstance = nil;
+( id )sharedInstance {
@synchronized( [ MYSingleton class ] ) {
if( sharedInstance == nil )
sharedInstance = [ [ MYSingleton alloc ] init ];
}
return sharedInstance;
}
+( id )allocWithZone:( NSZone * )zone {
@synchronized( [ MYSingleton class ] ) {
if( sharedInstance == nil )
sharedInstance = [ super allocWithZone:zone ];
}
return sharedInstance;
}
-( id )init {
@synchronized( [ MYSingleton class ] ) {
self = [ super init ];
if( self != nil ) {
// Insert initialization code here
}
return self;
}
}
@end
A: static mySingleton *obj=nil;
@implementation mySingleton
-(id) init {
if(obj != nil){
[self release];
return obj;
} else if(self = [super init]) {
obj = self;
}
return obj;
}
+(mySingleton*) getSharedInstance {
@synchronized(self){
if(obj == nil) {
obj = [[mySingleton alloc] init];
}
}
return obj;
}
- (id)retain {
return self;
}
- (id)copy {
return self;
}
- (unsigned)retainCount {
return UINT_MAX; // denotes an object that cannot be released
}
- (void)release {
if(obj != self){
[super release];
}
//do nothing
}
- (id)autorelease {
return self;
}
-(void) dealloc {
[super dealloc];
}
@end
A: Just wanted to leave this here so I don't lose it. The advantage to this one is that it's usable in InterfaceBuilder, which is a HUGE advantage. This is taken from another question that I asked:
static Server *instance;
+ (Server *)instance { return instance; }
+ (id)hiddenAlloc
{
return [super alloc];
}
+ (id)alloc
{
return [[self instance] retain];
}
+ (void)initialize
{
static BOOL initialized = NO;
if(!initialized)
{
initialized = YES;
instance = [[Server hiddenAlloc] init];
}
}
- (id) init
{
if (instance)
return self;
self = [super init];
if (self != nil) {
// whatever
}
return self;
}
A: I know there are a lot of comments on this "question", but I don't see many people suggesting using a macro to define the singleton. It's such a common pattern and a macro greatly simplifies the singleton.
Here are the macros I wrote based on several Objc implementations I've seen.
Singeton.h
/**
@abstract Helps define the interface of a singleton.
@param TYPE The type of this singleton.
@param NAME The name of the singleton accessor. Must match the name used in the implementation.
@discussion
Typically the NAME is something like 'sharedThing' where 'Thing' is the prefix-removed type name of the class.
*/
#define SingletonInterface(TYPE, NAME) \
+ (TYPE *)NAME;
/**
@abstract Helps define the implementation of a singleton.
@param TYPE The type of this singleton.
@param NAME The name of the singleton accessor. Must match the name used in the interface.
@discussion
Typically the NAME is something like 'sharedThing' where 'Thing' is the prefix-removed type name of the class.
*/
#define SingletonImplementation(TYPE, NAME) \
static TYPE *__ ## NAME; \
\
\
+ (void)initialize \
{ \
static BOOL initialized = NO; \
if(!initialized) \
{ \
initialized = YES; \
__ ## NAME = [[TYPE alloc] init]; \
} \
} \
\
\
+ (TYPE *)NAME \
{ \
return __ ## NAME; \
}
Example of use:
MyManager.h
@interface MyManager
SingletonInterface(MyManager, sharedManager);
// ...
@end
MyManager.m
@implementation MyManager
- (id)init
{
self = [super init];
if (self) {
// Initialization code here.
}
return self;
}
SingletonImplementation(MyManager, sharedManager);
// ...
@end
Why an interface macro when it's nearly empty? Code consistency between the header and code files; maintainability in case you want to add more automatic methods or change it around.
I'm using the initialize method to create the singleton as is used in the most popular answer here (at time of writing).
A: With Objective C class methods, we can just avoid using the singleton pattern the usual way, from:
[[Librarian sharedInstance] openLibrary]
to:
[Librarian openLibrary]
by wrapping the class inside another class that just has Class Methods, that way there is no chance of accidentally creating duplicate instances, as we're not creating any instance!
I wrote a more detailed blog here :)
A: To extend the example from @robbie-hanson ...
static MySingleton* sharedSingleton = nil;
+ (void)initialize {
static BOOL initialized = NO;
if (!initialized) {
initialized = YES;
sharedSingleton = [[self alloc] init];
}
}
- (id)init {
self = [super init];
if (self) {
// Member initialization here.
}
return self;
}
A: My way is simple like this:
static id instanceOfXXX = nil;
+ (id) sharedXXX
{
static volatile BOOL initialized = NO;
if (!initialized)
{
@synchronized([XXX class])
{
if (!initialized)
{
instanceOfXXX = [[XXX alloc] init];
initialized = YES;
}
}
}
return instanceOfXXX;
}
If the singleton is initialized already, the LOCK block will not be entered. The second check if(!initialized) is to make sure it is not initialized yet when the current thread acquires the LOCK.
A: I've not read through all the solutions, so forgive me if this code is redundant.
This is the most thread-safe implementation in my opinion.
+(SingletonObject *) sharedManager
{
static SingletonObject * sharedResourcesObj = nil;
@synchronized(self)
{
if (!sharedResourcesObj)
{
sharedResourcesObj = [[SingletonObject alloc] init];
}
}
return sharedResourcesObj;
}
A: The accepted answer, although it compiles, is incorrect.
+ (MySingleton*)sharedInstance
{
@synchronized(self) <-------- self does not exist at class scope
{
if (sharedInstance == nil)
sharedInstance = [[MySingleton alloc] init];
}
return sharedInstance;
}
Per Apple documentation:
... You can take a similar approach to synchronize the class methods of the associated class, using the Class object instead of self.
Even if using self works, it shouldn't and this looks like a copy and paste mistake to me.
The correct implementation for a class factory method would be:
+ (MySingleton*)getInstance
{
@synchronized([MySingleton class])
{
if (sharedInstance == nil)
sharedInstance = [[MySingleton alloc] init];
}
return sharedInstance;
}
A: I usually use code roughly similar to that in Ben Hoffstein's answer (which I also got out of Wikipedia). I use it for the reasons stated by Chris Hanson in his comment.
However, sometimes I have a need to place a singleton into a NIB, and in that case I use the following:
@implementation Singleton
static Singleton *singleton = nil;
- (id)init {
static BOOL initialized = NO;
if (!initialized) {
self = [super init];
singleton = self;
initialized = YES;
}
return self;
}
+ (id)allocWithZone:(NSZone*)zone {
@synchronized (self) {
if (!singleton)
singleton = [super allocWithZone:zone];
}
return singleton;
}
+ (Singleton*)sharedSingleton {
if (!singleton)
[[Singleton alloc] init];
return singleton;
}
@end
I leave the implementation of -retain (etc.) to the reader, although the above code is all you need in a garbage collected environment.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "333"
} |
Q: Something like Explorer's icon grid view in a Python GUI I am making a Python GUI project that needs to duplicate the look of a Windows GUI environment (i.e. Explorer). I have my own custom icons to draw but they should be selectable by the same methods as usual: click, ctrl-click, drag box, etc. Are any of the GUI toolkits going to help with this, or will I have to implement it all myself? If there aren't any tools to help with this, advice would be greatly appreciated.
Edit: I am not trying to recreate Explorer; that would be madness. I simply want to be able to take icons and lay them out in a scrollable window. Any number of them may be selected at once. It would be great if there were something that could select/deselect them in the same way (in appearance at least) that Windows does. Then all I would need is a list of all the selected icons.
A: Python has extensions for accessing the Win32 API, but good luck trying to re-write explorer in that by yourself. Your best bet is to use a toolkit like Qt, but you'll still have to write the vast majority of the application from scratch.
Is there any way you can re-use explorer itself in your project?
Updated for edited question:
GTK+ has an icon grid widget that you could use. See a reference for PyGTK+: gtk.IconView
A: In wxPython there's a plethora of ready-made list and tree controls (CustomTreeCtrl, TreeListCtrl, and others), a mixture of which you can use to create a simple explorer in minutes. The wxPython demo even has a few relevant examples (see the demo of MVCTree).
A: I'll assume you're serious and suggest that you check out the many wonderful GUI libraries available for Python.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Nintendo DS homebrew with Ada? Note: I know very little about the GCC toolchain, so this question may not make much sense.
Since GCC includes an Ada front end, and it can emit ARM, and devKitPro is based on GCC, is it possible to use Ada instead of C/C++ for writing code on the DS?
Edit: It seems that the target that devKitARM uses is arm-eabi.
A: devkitPro is not a toolchain, compiler or indeed any software package. The toolchain used to target the DS is devkitARM, one of the toolchains provided by devkitPro.
It may be possible to build the Ada compiler but I doubt very much if you'll ever manage to get anything useful running on the DS itself. devkitPro will certainly never provide an Ada compiler as part of the packages we produce.
A: Yes it is possible, see my project https://github.com/Lucretia/tamp and build the cross compiler as per my script. You would then be able to target NDS using Ada. I have built a basic RTS as well, which will provide you with local exception handling.
And @Martin Beckett, why do you think Ada is aimed squarely at DoD stuff? They dropped the mandate years ago, and Ada is easily usable for any project - you do realise that Ada is a general purpose programming language, don't you?
A: (Disclaimer: I don't know Ada)
Possibly.
You might be able to build devKitPro to use Ada, however, the pre-provided binaries (at least for OS X) do not have Ada support compiled in.
However, you will probably find yourself writing tons of C "glue" code to interface with the various hardware registers and the like.
A: One thing to consider when porting a language to the Nintendo DS is the relatively small stack it has (16KB). There are possible workarounds, such as swapping the SRAM stack content into DRAM (4MB) when the stack gets full, or just having the whole stack in DRAM (assumed to be awfully slow).
And I second Dre on the fact that you'll have to provide the glue yourself between the Ada library functions you'd like to use and the existing libraries on the DS (which hopefully cover most of the hardware stuff).
A: On a practical plane, it is not possible.
On a theoretical plane, you could use a custom Ada parser (I found this one on the ANTLR site, but it is quite old) in order to translate Ada to C/C++, and then feed that to devkitPro.
However, the effort of building such a translator is probably going to be equal to (if not greater than) the effort of creating the game itself.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Inadvertent Session Hijacking Issue With Restful Authentication I'm using the current version of restful_authentication that is found on github and I'm having a bunch of strange session issues. The server seems to be somehow assigning sessions to users it shouldn't be. This only happens when crossing the logged out/logged in barrier.
Here's an example. With no sessions active on the server, I log in to an account with user A. On another machine, I log in with user B. Then when logging out of user B, sometime after the logout redirect happens, I will be logged in as user A. From this point, I can continue to navigate the site as if I had logged in as that user! Something I've observed via the logs is that when this hijack happens, the session IDs are not the same. User A is logged in in both sessions, but the session IDs are completely different. This is just one example of what might happen. I can't reproduce the issue reliably as it is seemingly random.
It doesn't seem to be a symptom of the environment or the server it's running on. I can reproduce the problem using both mongrel and passenger. I've also seen it in development and production. I am using db-based sessions in this application and it is running on Rails 2.1.1. I applied the stateful option when calling the generator. Otherwise no other modifications have been made to how sessions are handled.
Update
Here is the offending method which came directly from restful_authentication.
# Accesses the current user from the session.
# Future calls avoid the database because nil is not equal to false.
def current_user
@current_user ||= (login_from_session || login_from_basic_auth || login_from_cookie) unless @current_user == false
end
A: This can happen if you (or those who wrote restful_authentication) are caching the current user in a class variable. I've seen a bunch of articles advocating the use of "User.current_user", but since classes are cached across requests, this can cause session tainting.
A: I don't know if this is so much of an answer as it is a workaround. All I did was switch over to cookie-based sessions and everything is working smoothly.
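For reference, in Rails 2.x that switch is a change in config/environment.rb. The sketch below is illustrative only (the session key and secret are placeholders): removing the ActiveRecord session store line lets the default cookie store take over.
# config/environment.rb
# config.action_controller.session_store = :active_record_store  # remove or comment this out
config.action_controller.session = {
  :session_key => '_myapp_session',                        # placeholder name
  :secret      => 'a long random string of 30+ characters' # placeholder secret
}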
A: Is this site remote? Are you logging into it onto two separate computers on the same network?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to invoke an interactive elisp interpreter in Emacs? Right now I write expressions in the *scratch* buffer and test them by evaluating with C-x C-e. I would really appreciate having an interactive interpreter like SLIME or irb, in which I could test Emacs Lisp expressions.
A: Your best bet is the *scratch* buffer. You can make it more like a REPL by first turning on the debugger:
M-x set-variable debug-on-error t
Then use C-j instead of C-x C-e, which will insert the result of evaluating the expression into the buffer on the line after the expression. Instead of things like command history, * * * and so forth, you just move around the *scratch* buffer and edit.
If you want things like * * * to work, more like a usual REPL, try ielm.
M-x ielm
A: It's easy to evaluate Lisp expressions in Inferior Emacs-Lisp Mode:
M-x ielm
You can read more about this feature in the Emacs manual section on "Lisp Interaction"
A: Eshell is another option for an interactive Elisp interpreter.
M-x eshell
Not only is it a command shell like bash (or cmd.exe if on Windows) but you can also interactively write and execute Elisp code.
~ $ ls
foo.txt
bar.txt
~ $ (+ 1 1)
2
A: To run just one elisp expression you can use the M-: shortcut and enter the expression in the minibuffer, as illustrated below. For other cases you can use the *scratch* buffer.
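For instance (the expression here is just an illustration), press M-: and type:
(+ 1 2)
Hitting RET evaluates the form and shows the result, 3, in the echo area.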
A: In the *scratch* buffer, just type C-j to evaluate the expression before point.
A: Well, if you're really interested in a literal REPL for emacs it is possible to write one using the -batch mode of emacs:
(require 'cl)
(defun read-expression ()
(condition-case
err
(read-string "> ")
(error
(message "Error reading '%s'" form)
(message (format "%s" err)))))
(defun read-expression-from-string (str)
(condition-case
err
(read-from-string str)
(error
(message "Error parsing '%s'" str)
(message (format "%s" err))
nil)))
(defun repl ()
(loop for expr = (read-string "> ") then (read-expression)
do
(let ((form (car (read-expression-from-string expr))))
(condition-case
err
(message " => %s" (eval form))
(error
(message "Error evaluating '%s'" form)
(message (format "%s" err)))))))
(repl)
You can call this from the command line, or, as you seem to want, from within an emacs buffer running a shell:
kburton@hypothesis:~/projects/elisp$ emacs -batch -l test.el
Loading 00debian-vars...
> (defvar x '(lambda (y) (* y 100)))
=> x
> (funcall x 0.25)
=> 25.0
>
kburton@hypothesis:~/projects/elisp$
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Is HTML considered a programming language? I guess the question is self-explanatory, but I'm wondering whether HTML qualifies as a programming language (obviously the "L" stands for language).
The reason for asking is more pragmatic—I'm putting together a resume and don't want to look like a fool for listing things like HTML and XML under languages, but can't figure out how to classify them.
A: Well, L is for language, but it doesn't imply programming language. After all, English or French are (natural) languages too! ;-)
As said above, put them under a subsidiary section, Technology seems to be a good term.
(Looking at my own resume, not updated in a while) I have made a section just called "Languages", so I can't go wrong... :-D
I have put "(X)HTML and CSS, XML/DTD/Schema and SVG" at the end of the section, clearly separated.
In French, I have a section "Langages" (programming and markup) and another "Langues" (French/English). In the English version, I titled both as "Languages", which is clumsy now that I think of it, although context clarifies this. I should find a better formulation.
A: HTML is in no way a programming language.
Programming languages deal with processing functions, control flow, and so on. HTML just deals with the visual interface of a web page, where the actual programming handles the processing. PHP, for example.
For anyone who really knows programming, it's hard to see how HTML could be mistaken for an actual programming language.
A: YES, a declarative programming language.
You really want to list the most important things you know that are relative to the job you're applying for on your resume. If you list ASP.NET but don't list HTML, even though it's somewhat obvious, there are a lot of managers and/or HR types that will assume you don't know HTML since it's not listed. I've had it happen to me before.
Update - Some say no it isn't a programming language, and you may not agree with me on this, but regardless on a resume it IS a programming language. You get HR types looking at your resume before the hiring manager even sees it. If the manager says you need to know HTML, and it's not listed in the 'programming languages' section, then the HR person may disregard your resume, thinking you don't know it because it's not listed.
Update 6-8-2012: Any instruction that tells the computer to do something is a programming language. So even after all these years, I still stand by my answer. HTML is a programming language. Something that isn't a programming language would be XML.
A: In recruitment terms, having been on both sides of the fence, definitely put HTML under 'programming languages', or perhaps more safely under 'technologies'
Yes, we all know that it is a Markup Language and not a Programming Language. But a) Recruitment Agencies don't know and don't care, and b) employers don't know and don't care. Really.
And pointing out their ignorance will only serve you ill. And the techies who eventually see your CV will be grateful for a candidate who has heard of HTML, and won't worry about the taxonomy.
Honestly, it isn't an issue.
A: No, the clue is in the M - it's a Markup Language.
A: No, HTML is not a programming language. The "M" stands for "Markup". Generally, a programming language allows you to describe some sort of process of doing something, whereas HTML is a way of adding context and structure to text.
If you're looking to add more alphabet soup to your CV, don't classify them at all. Just put them in a big pile called "Technologies" or whatever you like. Remember, however, that anything you list is fair game for a question.
HTML is so common that I'd expect almost any technology person to already know it (although not stuff like CSS and so on), so you might consider not listing every initialism you've ever come across. I tend to regard CVs listing too many things as suspicious, so I ask more questions to weed out the stuff that shouldn't be listed. :)
However, if your HTML experience includes serious web design stuff including Ajax, JavaScript, and so on, you might talk about those in your "Experience" section.
A: On some level Chris Pietschmann is correct. SQL isn't Turing complete (at least without stored procedures) yet people will list that as a language, TeX is Turing complete but most people regard it as a markup language.
Having said that: if you are just applying for jobs, not arguing formal logic, I would just list them all as technologies. Things like .NET aren't languages but would probably be listed as well.
A: The 'M' stands for a 'Markup'. It's a 'Markup Language' not a programming language. Some people will disagree with this, but my opinion is that if it lacks logical constructs (conditional branching, iteration, etc) its not really a programming language.
As for the resume, I would suggest putting HTML and XML under a section like 'Technologies'. I usually have a section like this where I list things like version control software, OS's I've developed for, build systems, etc.
A: No, HTML is a not a programming language. It is called "markup" for that reason.
If you're going to say that HTML is a programming language, then you might as well include things such as word documents, as they too are based on ML, or 'Markup Language'.
Simply put--HTML defines content!
A: List it under technologies or something. I'd just leave it off if I were you as it's pretty much expected that you know HTML and XML at this point.
A: I think not exactly a programming language, but exactly what its name says: a markup language.
We cannot program using just pure HTML; we can only annotate how to present content.
But if you consider programming the act of telling the computer how to present content, it is a programming language.
A: In the advanced programming languages class I took in college, we had what I think is a pretty good definition of "programming language": a programming language is any (formal) language capable of expressing all computable functions, which the Church-Turing thesis implies is the set of all Turing-computable functions.
By that definition, no, HTML is not a programming language, even a declarative one. It is, as others have explained, a markup language.
But the people reviewing your resume may very well not care about such a formal distinction. I'd follow the good advice given by others and list it under a "Technologies" type of section.
A: I think that it definitely has its place on a resume. Knowledge of HTML is valuable, and there really is a lot to know, what with cross-browser compatibility issues and standards which should be followed.
I wouldn't list HTML under "programming languages" alongside C# or something, but it's worth noting your experience.
A: No - there's a big prejudice in IT against web design; but in this case the "real" programmers are on pretty firm ground.
If you've done a lot of web design work you've probably done some JavaScript, so you can put that down under 'programming languages'; if you want to list HTML as well, then I agree with the answer that suggests "Technologies".
But unless you're targeting agents who're trying to tick boxes rather than find you a good job, a bare list of things you've used doesn't really look all that good. You're better off listing the projects you've worked on and detailing the technologies you used on each; that demonstrates that you've got real experience of using them rather than just that you know some buzzwords.
A: I get around this problem by not having a "programming languages" section on my resume. Instead I label it simply as "languages", and I stick HTML and CSS at the end. I'd rather make life easier for the reviewer so that they can see whether mine checks-off all their requirements.
Only fools would disregard an applicant because he or she listed HTML under "languages" instead of some other label, especially since there is no industry standard. And who wants to work for fools?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "210"
} |
Q: Best way to store tags in a sql server table? What's the best way to store tags for a record? Just use a varchar field? What about when selecting rows that contains tag x? Use the like operator?
thanks!
A: Use a tags table with the smallest allowable primary key. If there are less than 255 tags use a byte (tinyint) or else a word (smallint). The smaller the key the smaller and faster the index on the foreign key in the main table.
A: No, it is generally a bad idea to put multiple pieces of data in a single field. Instead, use a separate Tags table (perhaps with just a TagID and TagName) and then, for each record, indicate the TagID associated with it. If a record is associated with multiple tags, you will have duplicate records with the only difference being TagID.
The advantage here is that you can easily query by tag, by record, and maintain the Tags table separately (i.e. what if a tag name changes?).
A: Depends on two things:
1) The amount of tags/tagged records
2) Whether or not you have a religious opinion on normalization :-)
Unless dealing with very large volumes of data, I'd suggest having a 'Tags' table mapping varchar values to integer identifiers, then a second table mapping tagged records to their tag ids. I'd suggest implementing this first, then checking whether it meets your performance needs. If it doesn't, keep a single table with an id for the tagged row and the actual text of the tag, but in that case I'd suggest you use a char column, as a full table scan by the optimizer against a large table with a varchar column will kill your query.
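As a sketch of that normalized two-table design (all table and column names here are placeholders, and the syntax is T-SQL since the question is about SQL Server):
CREATE TABLE Tags (
    TagID   smallint IDENTITY(1,1) PRIMARY KEY, -- a small key keeps the indexes compact
    TagName varchar(50) NOT NULL UNIQUE
);
CREATE TABLE RecordTags (
    RecordID int      NOT NULL, -- FK to whatever table is being tagged
    TagID    smallint NOT NULL REFERENCES Tags(TagID),
    PRIMARY KEY (RecordID, TagID)
);
-- Select rows that carry tag 'x' without resorting to LIKE:
SELECT rt.RecordID
FROM RecordTags rt
JOIN Tags t ON t.TagID = rt.TagID
WHERE t.TagName = 'x';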
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Algorithm to estimate number of English translation words from Japanese source I'm trying to come up with a way to estimate the number of English words a translation from Japanese will turn into. Japanese has three main scripts -- Kanji, Hiragana, and Katakana -- and each has a different average character-to-word ratio (Kanji being the lowest, Katakana the highest).
Examples:
*
*computer: コンピュータ (Katakana - 6 characters); 計算機 (Kanji: 3 characters)
*whale: くじら (Hiragana - 3 characters); 鯨 (Kanji: 1 character)
As data, I have a large glossary of Japanese words and their English translations, and a fairly large corpus of matched Japanese source documents and their English translations. I want to come up with a formula that will count numbers of Kanji, Hiragana, and Katakana characters in a source text, and estimate the number of English words this is likely to turn into.
A: Here's what Borland (now Embarcadero) thinks about English to non-English:
Length of English string (in characters) | Expected increase
1-5 | 100%
6-12 | 80%
13-20 | 60%
21-30 | 40%
31-50 | 20%
over 50 | 10%
I think you can sort of apply this (with some modification) for Japanese to non-Japanese.
Another element you might want to consider is the tone of the language. In English, instructions are phrased as an imperative, as in "Press OK." But in Japanese, imperatives are considered rude, and you must phrase instructions in honorific (or keigo), as in "OKボタンを押してください。"
Watch out for three-character kanji combos. Many of the big words translate into three- or four-character kanji combos, such as 国際化 (internationalization: 20 chars), 高可用性 (high availability: 17 chars).
A: Well, it's a little more complex than just the number of characters in a noun compared to English. For instance, Japanese also has a different grammatical structure compared to English, so certain sentences would use MORE words in Japanese, and others would use FEWER. I don't really know Japanese, so please forgive me for using Korean as an example.
In Korean, a sentence is often shorter than an English sentence, due mainly to the fact that they are cut short by using context to fill in the missing words. For instance, saying "I love you" could be as short as 사랑해 ("sarang hae", simply the verb "love"), or as long as the fully qualified sentence 저는 당신을 사랑해요 (I [topic] you [object] love [verb + polite modifier]). In a text, how it is written depends on context, which is usually set by earlier sentences in the paragraph.
Anyway, having an algorithm to actually KNOW this kind of thing would be very difficult, so you're probably much better off, just using statistics. What you should do is use random samples where the known Japanese texts, and English texts have the same meaning. The larger the sample (and the more random it is) the better... though if they are truly random, it won't make much difference how many you have past a few hundred.
Now, another thing is this ratio would change completely on the type of text being translated. For instance, highly technical document is quite likely to have a much higher Japanese/English length ratio than a soppy novel.
As for simply using your dictionary of word to word translations - that probably won't work too well (and is probably wrong). The same word does not translate to the same word every time in a different language (although this is much more likely to happen in technical discussions). For instance, the word beautiful. There is not only more than one word I could assign it to in Korean (i.e. there is a choice), but sometimes I lose that choice, as in the sentence (that food is beautiful), where I don't mean the food looks good. I mean it tastes good, and my choice of translations for that word changes. And this is a VERY common circumstance.
Another big problem is optimal translation. Something that human's are really bad at, and something that computers are much much worse at. Whenever I've proofread a document translated from another text to English, I can always see various ways to cut it much much shorter.
So although, with statistics, you would be able to work out a pretty good average ratio in length between translations, this will be far different than it would be were all translations to be optimal.
A: I would start with linear approximation: approx_english_words = a1*no_characters_in_script1 + a2 * no_chars_in_script2 + a3 * no_chars_in_script3, with the coefficients a1, a2, a3 fit from your data using linear least squares.
If this doesn't approximate very well, then look at the worst cases for the reasons they don't fit (specialized words, etc.).
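A minimal sketch of that fit with NumPy (the character counts and word counts below are invented; in practice each row would come from one aligned document pair in the corpus):
import numpy as np
# Each row: (kanji chars, hiragana chars, katakana chars) in a Japanese source;
# y holds the English word count of the matching translation. Data is made up.
X = np.array([[120,  80, 30],
              [ 90, 150, 10],
              [200,  60, 45],
              [ 50, 210, 25]], dtype=float)
y = np.array([260, 240, 330, 270], dtype=float)
# Solve X @ a ~= y for the coefficients a1, a2, a3 in the least-squares sense.
a, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
def estimate_english_words(kanji, hiragana, katakana):
    return float(a @ np.array([kanji, hiragana, katakana]))
print(a, estimate_english_words(100, 100, 20))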
A: In my experience as a translator and localization specialist, a good rule of thumb is 2 Japanese characters per English word.
A: As an experienced translator between Japanese and English, I can say that this is extremely difficult to quantify, but typically in my experience English text translated from Japanese is nearly 200% as many characters as the source text. In Japanese there are many culturally specific phrases and nouns that can't be translated literally and need to be explained in English.
When translating it is not unusual for me to take a single Japanese sentence and to make a single English paragraph out of it in order for the meaning to be communicated to the reader. Off the top of my head, here is an example:
「懐かしい」
This literally means nostalgic. However, in Japanese it can be used as a single phrase in an exclamation. Yet, in English in order to convey a feeling of nostalgia we require a lot more context. For instance, you may need to turn that single phrase into a sentence:
"As I walked by my old elementary school, I was flooded with memories of the past."
This is why machine translation between Japanese and English is impossible.
A: It seems simple enough - you just need to find out the ratios.
For each script, count the number of script characters and English words in your glossary and work out the ratio.
This can be augmented with the Japanese source documents assuming you can both detect which script a Japanese word is in and what the English equivalent phrase is in the translation. Otherwise you'll have to guesstimate the ratios or ignore this as source data,
Then, as you say, count the number of words in each script of your source text, do the multiplies, and you should have a rough estimate.
A: My (albeit tiny) experience seems to indicate that, no matter what the language, blocks of text take the same amount of printed space to convey equivalent information. So, for a large-ish block of text, you could assign a width count to each character in English (grab this from a common font like Times New Roman), and likewise use a common Japanese font at the same point size to calculate the number of characters that would be required.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Dynamic .NET language performance? I understand that IronPython is an implementation of Python on the .NET platform just like IronRuby is an implementation of Ruby and F# is more or less OCaml.
What I can't seem to grasp is whether these languages perform closer to their "ancestors" or closer to something like C# in terms of speed. For example, is IronPython somehow "compiled" down to the same bytecode used by C# and, therefore, will run just as fast?
A: IronPython and IronRuby are built on top of the DLR -- dynamic language runtime -- and are compiled to CIL (the bytecode used by .NET) on the fly. They're slower than C# but faaaaaaar faster than their non-.NET counterparts. There aren't any decent benchmarks out there, to my knowledge, but you'll see the difference.
A: IronPython is actually the fastest Python implementation out there. For some definition of "fastest", at least: the startup overhead of the CLR, for example, is huge compared to CPython. Also, the optimizing compiler IronPython has really only makes sense when code is executed multiple times.
IronRuby has the potential to be as fast as IronPython, since many of the interesting features that make IronPython fast have been extracted into the Dynamic Language Runtime, on which both IronPython and IronRuby (and Managed JavaScript, Dynamic VB, IronScheme, VistaSmalltalk and others) are built.
In general, the speed of a language implementation is pretty much independent of the actual language features, and more dependent on the number of engineering man-years that go into it. IOW: dynamic vs. static doesn't matter, money does.
E.g., Common Lisp is a language that is even more dynamic than Ruby or Python, and yet there are Common Lisp compilers out there that can even give C a run for its money. Good Smalltalk implementations run as fast as Java (which is no surprise, since both major JVMs, Sun HotSpot and IBM J9, are actually just slightly modified Smalltalk VMs) or C++. In just the past 6 months, the major JavaScript implementations (Mozilla TraceMonkey, Apple SquirrelFish Extreme and the new kid on the block, Google V8) have made ginormous performance improvements, 10x and more, to bring JavaScript head-to-head with un-optimized C.
A: Currently IronRuby is pretty slow in most regards. It's definitely slower than MRI (Matz' Ruby Implementation) overall, though in some places they're faster.
IronRuby does have the potential to be much faster, though I doubt they'll ever get near C# in terms of speed. In most cases it just doesn't matter. A database call will probably make up 90% of the overall duration of a web request, for example.
I suspect the team will go for language-completeness rather than performance first. This will allow you to run IronRuby & run most ruby programs when 1.0 ships, then they can improve perf as they go.
I suspect IronPython has a similar story.
A: You've got the right idea by assuming that the performance of a modern .NET implementation will be between that of the ancestor and of C#. The reason is that C# is very closely matched to .NET itself.
F# is a no-brainer because C# and OCaml have similar performance characteristics themselves.
IronPython is much harder because Python and C# have wildly different performance characteristics. In fact, the answer depends upon the IronPython implementation itself, which will strive to convert inefficient Python-style evaluation into efficient C#-style evaluation whenever possible. Expect IronPython to be generally a lot slower than C# with the occassional spikes into the same territory. You can see this effect here.
Cheers,
Jon Harrop.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Mounting NTFS filesystem on CentOS 5.2 I want to mount some internal and external NTFS drives in CentOS 5.2, preferably automatically upon boot-up. Doesn't matter if it's read/write or read-only, but read/write would be preferred, if it's safe.
Edit: Thanks for all answers, I summarized them below =)
A: first do a
fdisk -l
get the hard drive partition, i.e. /dev/sda2
then
mount /dev/sda2 /mnt/windows
if this fails, try a
yum install ntfs-3g
* Just noted this is not included by default, so you can check out NTFS-3g here, and find a suitable package for your system.
to auto mount this, add a line to /etc/fstab saying
/dev/sda2 /mnt/windows ntfs defaults 0 0
and this should auto mount on a reboot
A: To answer my own question: PostMan and mgb led me to the right path, but their answers did not contain complete solution.
Note: A short manual/wiki on this question is here: http://wiki.centos.org/TipsAndTricks/NTFSPartitions
So, I am using a fresh, bare install of CentOS 5.2 with latest updates. First of all, I ran the su command to avoid any permission issues.
I created mount points for a couple of external NTFS drives:
mkdir /mnt/iomega80
mkdir /mnt/iogear250
I had to use the fdisk command, but it wasn't in my system. Here's what installs it:
yum install util-linux
Then I ran /sbin/fdisk -l and found the device names:
Disk /dev/sdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 * 1 30401 244196001 7 HPFS/NTFS
Disk /dev/sdd: 82.3 GB, 82348278272 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 * 1 10011 80413326 7 HPFS/NTFS
For me, they are /dev/sdc1 and /dev/sdd1.
I had to install NTFS-3G, a package that enables NTFS support on CentOS. To install NTFS-3G, I first had to include RPMFORGE in the YUM repository list.
To include RPMFORGE in the YUM repository list, I used these instructions: http://rpmrepo.org/RPMforge/Using. For my system, the two commands I had to run were:
wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
rpm -Uhv rpmforge-release-0.3.6-1.el5.rf.i386.rpm
Finally, I installed NTFS-3G using this YUM command:
yum install fuse fuse-ntfs-3g dkms dkms-fuse
At last, I could use the mount command to mount the filesystems:
mount -t ntfs-3g /dev/sdc1 /mnt/iogear250
mount -t ntfs-3g /dev/sdd1 /mnt/iomega80
By adding these two lines to /etc/fstab, like previous answers suggested, I got the drives to mount upon boot-up:
/dev/sdc1 /mnt/iogear250 ntfs-3g rw,umask=0000,defaults 0 0
/dev/sdd1 /mnt/iomega80 ntfs-3g rw,umask=0000,defaults 0 0
A: You should already have ntfs available, read-write support is now pretty reliable.
You can test it with "mount -t ntfs /dev/sdX1 /mnt/tmp" you need to know what drive the external disk is identified as (check dmesg) and you need to make a mount point.
To mount automatically everytime put a line in /etc/fstab, use one of the existing lines as an example - you will have to be root to do this.
A: You forgot to mention that you need to do a reboot after installing fuse, etc.
A: First enable the repository Epel
yum install epel-release
Then install ntfs
yum install ntfs-3g
A: *
*Enable the EPEL repository
yum -y install epel-release
*
*Install ntfs-3g
yum -y install ntfs-3g
*
*Update Grub
grub2-mkconfig -o /boot/grub2/grub.cfg
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do small software patches correct big software? One thing I've always wondered about is how software patches work. A lot of software seems to just release new versions of their binaries that need to be installed over older versions, but some software (operating systems like Windows in particular) seems to be able to release very small patches that correct bugs or add functionality to existing software.
Most of the time the patches I see can't possibly replace entire applications, or even small files that are used within applications. To me it seems like the actual binary is being modified.
How are these kinds of patches actually implemented? Could anyone point me to any resources that explain how this works, or is it just as simple as replacing small components such as linked libraries in an application?
I'll probably never need to do a deployment in this manner, but I am curious to find out how it works. If I'm correct in my understanding that patches can really modify only portions of binary files, is this possible to do in .NET? If it is I'd like to learn it since that's the framework I'm most familiar with and I'd like to understand how it works.
A: If you are talking about patching Windows applications then what you want to look at are .MSP files. These are similar to an .MSI but just patch an application.
Take a look at Patching and Upgrading in the MSDN documents.
What an .MSP file does is load updated files into an application install. These are typically updated DLLs and resource files, but could include any file.
In addition to patching the installed application, the repair files located in C:\WINDOWS\Installer are updated as well. Then if the user selects "Repair" from Add/Remove Programs, the updated patch files are used as well.
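For reference, applying an .MSP from the command line usually looks like the line below (the patch file name is a placeholder; REINSTALL=ALL REINSTALLMODE=omus is the documented Windows Installer property combination for reinstalling the product with the patch applied):
msiexec /p MyPatch.msp REINSTALL=ALL REINSTALLMODE=omus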
I'm thinking that the binary diff method discussed by John Millikin must be used in other operating systems. Although you could make it work in windows it would be somewhat alien.
A: This is usually implemented using binary diff algorithms -- diff the most recently released version against the new code. If the user's running the most recent version, you only need to apply the diff. Works particularly well against software, because compiled code is usually pretty similar between versions. Of course, if the user's not running the most recent version you'll have to download the whole thing anyway.
There are a couple of implementations of generic binary diff algorithms: bsdiff and xdelta are good open-source implementations. I can't find any implementations for .NET, but since the algorithms in question are pretty platform-agnostic it shouldn't be too difficult to port them if you feel like a project.
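To see the idea in miniature, here is a toy byte-level diff/patch built on Python's standard difflib; real tools such as bsdiff add suffix sorting and compression on top, so treat this purely as an illustration of storing only the changed regions:
import difflib
def make_patch(old, new):
    # Record only the regions that differ, as (old start, old end, new bytes).
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
        if tag != 'equal':  # 'replace', 'delete' or 'insert'
            ops.append((i1, i2, new[j1:j2]))
    return ops
def apply_patch(old, ops):
    # Rebuild the new file from the old file plus the recorded changes.
    out, pos = [], 0
    for i1, i2, data in ops:
        out.append(old[pos:i1])  # unchanged run copied from the old file
        out.append(data)         # changed bytes taken from the patch
        pos = i2
    out.append(old[pos:])
    return b''.join(out)
old = b"The quick brown fox jumps over the lazy dog"
new = b"The quick brown cat jumps over the lazy dog!"
patch = make_patch(old, new)
assert apply_patch(old, patch) == new
print(patch)  # only 'cat' and '!' are stored, not the whole file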
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Change the value of a text box to its current order in a sortable tab Edit: I have solved this by myself. See my answer below
I have set up a nice sortable table with jQuery and it is quite nice. But now i want to extend it.
Each table row has a text box, and i want i am after is to, every time a row is dropped, the text boxes update to reflect the order of the text boxes. E.g. The text box up the top always has the value of '1', the second is always '2' and so on.
I am using jQuery and the Table Drag and Drop JQuery plugin
Code
Javascript:
<script type="text/javascript">
$(document).ready(function () {
$("#table-2").tableDnD({
onDrop: function (table, row) {
var rows = table.tBodies[0].rows;
var debugStr = "Order: ";
for (var i = 0; i < rows.length; i++) {
debugStr += rows[i].id + ", ";
}
console.log(debugStr);
document.forms['productform'].sort1.value = debugStr;
document.forms['productform'].sort2.value = debugStr;
document.forms['productform'].sort3.value = debugStr;
document.forms['productform'].sort4.value = debugStr;
}
});
});
</script>
HTML Table:
<form name="productform">
<table cellspacing="0" id="table-2" name="productform">
<thead>
<tr>
<td>Product</td>
<td>Order</td>
</tr>
</thead>
<tbody>
<tr class="row1" id="Pol">
<td><a href="1/">Pol</a></td>
<td><input type="textbox" name="sort1"/></td>
</tr>
<tr class="row2" id="Evo">
<td><a href="2/">Evo</a></td>
<td><input type="textbox" name="sort2"/></td>
</tr>
<tr class="row3" id="Kal">
<td><a href="3/">Kal</a></td>
<td><input type="textbox" name="sort3"/></td>
</tr>
<tr class="row4" id="Lok">
<td><a href="4/">Lok</a></td>
<td><input type="textbox" name="sort4"/></td>
</tr>
</tbody>
</table>
</form>
A: Hardnrg in #jquery ended up solving it for me.
It involved adding an id="" to each input:
<form name="productform">
<table cellspacing="0" id="table-2" name="productform">
<thead>
<tr><td>Product</td> <td>Order</td></tr>
</thead>
<tbody>
<tr class="row1" id="Pol"> <td><a href="1/">Pol</a></td> <td><input id="Pol_field" type="textbox" name="sort1"/></td> </tr>
<tr class="row2" id="Evo"> <td><a href="2/">Evo</a></td> <td><input id="Evo_field" type="textbox" name="sort2"/></td> </tr>
<tr class="row3" id="Kal"> <td><a href="3/">Kal</a></td> <td><input id="Kal_field" type="textbox" name="sort3"/></td> </tr>
<tr class="row4" id="Lok"> <td><a href="4/">Lok</a></td> <td><input id="Lok_field" type="textbox" name="sort4"/></td> </tr>
</tbody>
</table>
</form>
And add this js to the OnDrop event:
for (var i=0; i < rows.length; i++) {
$('#' + rows[i].id + "_field").val(i+1);
}
Easy peasy!
A: Hmmm..
I think you want to do something like this:
$("input:text", "#table-2").each( function(i){ this.value=i+1; });
The $().each() function's info is here: http://docs.jquery.com/Core/each
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Collaboration desktop sharing - Multiple Mouse I'm using Windows XP and I'm looking for software where I can see the screen of a remote desktop and be able to show a second mouse (or pointer of any sort) that I can move to just point at something.
I want to work with a peer over the net, pretty much like the XP programming method. I find it useful, but it's pretty hard over the internet to do such thing.
I don't want to control the computer, but it would be a plus. All I want is to see the remote desktop and have my pointer (or marker) to point to a line of code that need to be changed or something like.
Do you know any software like this ?
A: You're after something like Microsoft SharedView.
It allows sharing applications etc with multiple pointers.
A: I have already used Windows Messenger, but I don't remember it being able to show a second mouse pointer. Since I don't want to disrupt the person using the computer, it would only be for pointing things out.
I have already googled for VNC solutions, but the only one I found is Collaborative VNC, and it's Unix-only.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is there a free software for creating windows help files for your program? Is there a free software for creating windows help files for your program?
I would like something that allows an output of both CHM and HTML files.
A: HTML Help Workshop by Microsoft.
A: If you're developing for .NET and you're looking to generate XML documentation help files you should look into Microsoft's shared-source Sandcastle project, and the front-end GUI for it "Sandcastle Help File Builder."
It's pretty nice and highly configurable. You can make some really good help documentation using it.
It was a little slow the last time I used it (over 6 months ago) but it may have been optimized since then...
A: I'm not sure about 'free', but Dr. Explain is a little over $100 and worth every penny. We use it to produce both help for desktop apps with a single CHM and web apps using the HTML export. The best part is that it 'auto-magically' mines your webpage or app page and starts the basic construction of the help for you. The ROI for us was about 1 day.
A: HelpNDoc is free for personal use. It can generate CHM and HTML help files as requested, as well as Word DocX, PDF, ePub and Kindle eBooks from the same source.
A: Yes. HelpMaker from sourceforge (The original site www.vizacc.com is down). Best free help utility ever.
A: DocBook is a universal standard file format for writing software documentation.
DocBook is an XML file format and as such is already blessed by Microsoft. It is declarative, earning it further kudos.
DocBook allows you to identify to it what pictures are screenshots, what strings of words are actually commands, and so forth. Which yes, actually means it is a bonafide part of the semantic web.
Because of this, you can use an XPath expression to search for all the screenshots, all the commands, and so forth. Decent IDE's all support XPath searches, and so do lots of small, free utilities.
Once you have worked out which XPath search string returns the content you want, you can write a little XSLT stylesheet yourself or with someone else's help. The stylesheet can collect the information and generate an HTML bullet-list (UL LI), a definitions list (DL DT,DD), or a quick reference card. Whatever you like. XPath, XSLT, and the various *ML languages are very flexible.
Read From DocBook to Integrated Help Systems for information about how to automatically convert a DocBook standard file into a proprietary and very practical Microsoft Windows HTML Help file.
For more information about DocBook itself go visit http://www.docbook.org/ - and grab the free XSLT stylesheets for the latest version while you are there.
DocBook files can be automatically converted into many file formats besides the one used in Microsoft Windows WinHelp files. See the docbook.org web site for details. It is a long list of supported file formats, so brace yourself for a pleasant surprise!
If you already have a structured XML text editor, you might want to use that. If you are writing a really big online help file then consider oXygenXML and/or Open Office Writer. The former is a commercial product and the latter is free, open source software.
For more information about using Open Office Writer to create/edit DocBook files, read Getting Started with DocBook on Open Office.
A: Doxygen, while originally meant to produce code documentation can be made quite easily to produce any kind of help.
A: It's not "free" but the best software I've ever come across is HelpScribble from Just Great Software. I use it on Windows 7 without any problem. What I like most is its ability to integrate SHG files into the helpfile. A picture is worth a thousand words so I tend to use screen images where I capture buttons, text boxes or whatever. The user can see that screen and then just click of the display they want more information on an "BOOM!" ... the explanation pops right up! There's no need to thumb through pages of a manual or screens.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Any experience with compiling VBScript? I have a home-spun 2000-line VBScript script that has become progressively slower with each piece of code I add. It was created as a private debugging aid, and now that it has become really useful, I want to polish it and ship it along with our product.
I thought I could speed it up by compiling it and making it an EXE. Furthermore, I want to have a user interface for my tool, which might be possible with the extra libraries that the compiling platform might give me. I'm also considering extending the script by calling Win32 functions for whatever missing functionality I require.
I have VB 6.0, or I can buy an external compiler. But I also need the created program (not the compiler itself) to run fine on Windows Vista. What are my best options?
A: I would recommend downloading Visual Basic Express Edition (http://www.microsoft.com/express/vb/) and port your tool to VB.Net. However, that approach has one drawback - your program will be dependent on .Net. For the most part that shouldn't be a big problem, as by now most machines should have .Net 2.0, still it's better to keep it in mind.
I would stay away from VB6.0; however, aside from VB.Net I don't know any other good Basic compilers you could use.
A: There's probably more to the slowness than just the fact it's being interpreted. There are probably various optimisations you could make to it to make it faster. Try finding which parts of the code slow it down the most and try to speed them up.
Depending on what the code does VB6 could be fine. If it'll be dealing with natural text/filenames, then it would be better to use VB.net, 'cause VB6 doesn't support Unicode well.
But I get the feeling that even after compilation it could still be slow, because compilation only removes the interpretation overhead; it won't make the underlying algorithm any more efficient.
A: Well ... there are a number of "good" BASIC compilers out there:
*
*BCX
*xbLite
are the ones that come to mind immediately. Quite a few are listed on the mindteq site. (Jabaco is particularly interesting - VB6 re-expressed in Java. I've fiddled with it, and it looks very promising!)
But getting back to VBScript compilers, they do exist .. sort of. What they do is tokenise the code and put some kind of wrapper around them. Whether they run any faster is moot.
*
*VBS2EXE
*VBScript2Exe
*ExeScript
A: It's hard to say without knowing more of what the program is doing or how much data it's processing.
I agree with Franci - VB6 is no longer sold or supported so VB.Net would be the way to go for compiled code. (Express is free.) VBScript is not very much like VB.Net, so that might be a good bit of work to port unless it's all WMI or LDAP queries or something like that.
I would start out timing things to see where your bottlenecks are. Unless you're doing tons of looping and multi-level function calling you're probably stuck on external calls.
wscript.echo "Begin: " & Time
tStartTime = Timer
'... do stuff ...
tStopTime = Timer
wscript.echo "Elapsed time: " & tStopTime - tStartTime
Cheers
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is "Total Functional Programming"? Wikipedia has this to say:
Total functional programming (also
known as strong functional
programming, to be contrasted with
ordinary, or weak functional
programming) is a programming paradigm
which restricts the range of programs
to those which are provably
terminating.
and
These restrictions mean that total
functional programming is not
Turing-complete. However, the set of
algorithms which can be used is still
huge. For example, any algorithm which
has had an asymptotic upper bound
calculated for it can be trivially
transformed into a
provably-terminating function by using
the upper bound as an extra argument
which is decremented upon each
iteration or recursion.
There is also a Lambda The Ultimate Post about a paper on Total Functional Programming.
I hadn't come across that until last week on a mailing list.
Are there any more resources, references or any example implementations that you know of?
A: If I understood that correctly, Total Functional Programming means just that: Programming with Total Functions. If I remember my math courses correctly, a Total Function is a function which is defined over its entire domain, while a Partial Function is one which has "holes" in its definition.
Now, if you have a function which for some input value v goes into an infinite recursion or an infinite loop or in general doesn't terminate in some other fashion, then your function isn't defined for v, and thus partial, i.e. not total.
Total Functional Programming doesn't allow you to write such a function. All functions always return a result for all possible inputs; and the type checker ensures that this is the case.
My guess is that this vastly simplifies error handling: there aren't any.
The downside is already mentioned in your quote: it's not Turing-complete. E.g. an Operating System is essentially a giant infinite loop. Indeed, we do not want an Operating System to terminate, we call this behaviour a "crash" and yell at our computers about it!
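To make the transformation quoted in the question concrete, here is a minimal sketch in C# (my own illustration; a total functional language would have its type checker enforce this, rather than relying on convention):
using System;
class TotalityDemo
{
    // Without the bound, this Collatz-style iteration would be a partial
    // function: nobody has proven it terminates for every input.
    // The extra "fuel" argument is decremented on every iteration, so the
    // bounded version provably returns within 'fuel' steps.
    static long? StepsToOne(long n, int fuel)
    {
        long steps = 0;
        while (n != 1)
        {
            if (fuel-- == 0) return null; // bound exhausted, but we terminate
            n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
            steps++;
        }
        return steps;
    }
    static void Main()
    {
        Console.WriteLine(StepsToOne(27, 1000)); // prints 111
    }
}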
A: While this is an old question, I think that none of the answers so far mention the real motivation for total functional programming, which is this:
If programs are proofs, and proofs are programs, then programs which have 'holes' don't make any sense as proofs, and introduce logical inconsistency.
Basically, if a proof is a program, an infinite loop can be used to prove anything. This is really bad, and provides much of the motivation for why we might want to program totally. Other answers tend to not account for the flip side of the paper. While the languages are technically not Turing-complete, you can recover a lot of interesting programs by using co-inductive definitions and functions. We're very prone to think in terms of inductive data, but codata serves an important purpose in these languages: it lets you give a total definition of something infinite (and a real, terminating computation will potentially use only a finite piece of it; or maybe not, if you're writing an operating system!).
It is also of note that most proof assistants work based on this principle, Coq, for example.
A: Charity is another language that guarantees termination:
http://pll.cpsc.ucalgary.ca/charity1/www/home.html
Hume is a language with 4 levels. The outer level is Turing complete and the innermost layer guarantees termination:
http://www-fp.cs.st-andrews.ac.uk/hume/report/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: HTTP communication monitoring on OS X What application do you use to monitor HTTP communication on OS X?
A: Charles Proxy
Charles is an HTTP proxy / HTTP
monitor / Reverse Proxy that enables a
developer to view all of the HTTP
traffic between their machine and the
Internet. This includes requests,
responses and the HTTP headers (which
contain the cookies and caching
information).
Runs on JAVA. Available on OSX, Linux and Windows.
A: I like TcpCatcher. It is free and 100% Java-based, so it works fine on Mac OS X.
Not only will you be able to monitor HTTP communication, but you will also be able to change requests/responses on the fly, which opens up very interesting possibilities.
There is a dedicated tutorial on capturing iPhone's HTTP communication.
A: If you're looking to trace application traffic, Wireshark is the best tool I've found - it can log and decode HTTP and many other protocols, and the GUI's search tools make finding the messages you're interested in pretty quick and painless.
Other reasons I recommend this:
*
*It's quick to install
*It captures traffic straight from the network card, there is no need to change the application or set up proxies etc. It'll even read dumps captured from tcpdump and similar tools offline
*It's multi-platform (works on Windows/Mac/Linux and others)
*It's open source
A: HTTPTracer
http://simile.mit.edu/wiki/HTTPTracer
A: You could also use dTrace to monitor in even more detail, if that's what you need.
A: I second using Charles, it's a really excellent tool for HTTP examination. When used with the iPhone simulator (or any other OS X application) Charles automatically sets up the system settings to use itself as a proxy so you only have to launch and run. It also is very easy to examine the traffic in a few different ways, and has a very lenient free trial version that is fully featured (time limited to an hour with a few nag screens) so you can give it a good try.
A: Depends on what you mean by monitor...
If you simply want to know/stop when an installed application (or the OS) tries to "phone home", then I recommend LittleSnitch.
The peace of mind you gain is well worth the loss of weight from your bank account.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Calling C/C++ from Python? What would be the quickest way to construct a Python binding to a C or C++ library?
(I am using Windows if this matters.)
A: The question is how to call a C function from Python, if I understood correctly. Then your best bet is ctypes (which, BTW, is portable across all variants of Python).
>>> from ctypes import *
>>> libc = cdll.msvcrt
>>> print libc.time(None)
1438069008
>>> printf = libc.printf
>>> printf("Hello, %s\n", "World!")
Hello, World!
14
>>> printf("%d bottles of beer\n", 42)
42 bottles of beer
19
For a detailed guide you may want to refer to my blog article.
A: ctypes module is part of the standard library, and therefore is more stable and widely available than swig, which always tended to give me problems.
With ctypes, you don't need to satisfy any compile-time dependency on Python, and your binding will work on any Python that has ctypes, not just the one it was compiled against.
Suppose you have a simple C++ example class you want to talk to in a file called foo.cpp:
#include <iostream>
class Foo{
public:
void bar(){
std::cout << "Hello" << std::endl;
}
};
Since ctypes can only talk to C functions, you need to provide those declaring them as extern "C"
extern "C" {
Foo* Foo_new(){ return new Foo(); }
void Foo_bar(Foo* foo){ foo->bar(); }
}
Next you have to compile this to a shared library
g++ -c -fPIC foo.cpp -o foo.o
g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o
And finally you have to write your python wrapper (e.g. in fooWrapper.py)
from ctypes import cdll
lib = cdll.LoadLibrary('./libfoo.so')
class Foo(object):
def __init__(self):
self.obj = lib.Foo_new()
def bar(self):
lib.Foo_bar(self.obj)
Once you have that you can call it like
f = Foo()
f.bar() #and you will see "Hello" on the screen
A: Cython is definitely the way to go, unless you anticipate writing Java wrappers, in which case SWIG may be preferable.
I recommend using the runcython command-line utility; it makes the process of using Cython extremely easy. If you need to pass structured data to C++, take a look at Google's protobuf library; it's very convenient.
Here is a minimal examples I made that uses both tools:
https://github.com/nicodjimenez/python2cpp
Hope it can be a useful starting point.
A: There is also pybind11, which is like a lightweight version of Boost.Python and compatible with all modern C++ compilers:
https://pybind11.readthedocs.io/en/latest/
A: The quickest way to do this is using SWIG.
Example from SWIG tutorial:
/* File : example.c */
int fact(int n) {
if (n <= 1) return 1;
else return n*fact(n-1);
}
Interface file:
/* example.i */
%module example
%{
/* Put header files here or function declarations like below */
extern int fact(int n);
%}
extern int fact(int n);
Building a Python module on Unix:
swig -python example.i
gcc -fPIC -c example.c example_wrap.c -I/usr/local/include/python2.7
gcc -shared example.o example_wrap.o -o _example.so
Usage:
>>> import example
>>> example.fact(5)
120
Note that you have to have python-dev. Also in some systems python header files will be in /usr/include/python2.7 based on the way you have installed it.
From the tutorial:
SWIG is a fairly complete C++ compiler with support for nearly every language feature. This includes preprocessing, pointers, classes, inheritance, and even C++ templates. SWIG can also be used to package structures and classes into proxy classes in the target language — exposing the underlying functionality in a very natural manner.
A: First you should decide what is your particular purpose. The official Python documentation on extending and embedding the Python interpreter was mentioned above, I can add a good overview of binary extensions. The use cases can be divided into 3 categories:
*
*accelerator modules: to run faster than the equivalent pure Python code runs in CPython.
*wrapper modules: to expose existing C interfaces to Python code.
*low level system access: to access lower level features of the CPython runtime, the operating system, or the underlying hardware.
In order to give some broader perspective for other interested and since your initial question is a bit vague ("to a C or C++ library") I think this information might be interesting to you. On the link above you can read on disadvantages of using binary extensions and its alternatives.
Apart from the other answers suggested, if you want an accelerator module, you can try Numba. It works "by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime, or statically (using the included pycc tool)".
A: I started my journey in the Python <-> C++ binding from this page, with the objective of linking high level data types (multidimensional STL vectors with Python lists) :-)
Having tried the solutions based on both ctypes and boost.python (and not being a software engineer), I have found them complex when high-level datatype binding is required, while I have found SWIG much simpler for such cases.
This example uses therefore SWIG, and it has been tested in Linux (but SWIG is available and is widely used in Windows too).
The objective is to make a C++ function available to Python that takes a matrix in form of a 2D STL vector and returns an average of each row (as a 1D STL vector).
The code in C++ ("code.cpp") is as follow:
#include <vector>
#include "code.h"
using namespace std;
vector<double> average (vector< vector<double> > i_matrix) {
// Compute average of each row..
vector <double> averages;
for (int r = 0; r < i_matrix.size(); r++){
double rsum = 0.0;
double ncols= i_matrix[r].size();
for (int c = 0; c< i_matrix[r].size(); c++){
rsum += i_matrix[r][c];
}
averages.push_back(rsum/ncols);
}
return averages;
}
The equivalent header ("code.h") is:
#ifndef _code
#define _code
#include <vector>
std::vector<double> average (std::vector< std::vector<double> > i_matrix);
#endif
We first compile the C++ code to create an object file:
g++ -c -fPIC code.cpp
We then define a SWIG interface definition file ("code.i") for our C++ functions.
%module code
%{
#include "code.h"
%}
%include "std_vector.i"
namespace std {
/* On a side note, the names VecDouble and VecVecdouble can be changed, but the order of first the inner vector matters! */
%template(VecDouble) vector<double>;
%template(VecVecdouble) vector< vector<double> >;
}
%include "code.h"
Using SWIG, we generate a C++ interface source code from the SWIG interface definition file..
swig -c++ -python code.i
We finally compile the generated C++ interface source file and link everything together to generate a shared library that is directly importable by Python (the "_" matters):
g++ -c -fPIC code_wrap.cxx -I/usr/include/python2.7 -I/usr/lib/python2.7
g++ -shared -Wl,-soname,_code.so -o _code.so code.o code_wrap.o
We can now use the function in Python scripts:
#!/usr/bin/env python
import code
a= [[3,5,7],[8,10,12]]
print a
b = code.average(a)
print "Assignment done"
print a
print b
A: For modern C++, use cppyy:
http://cppyy.readthedocs.io/en/latest/
It's based on Cling, the C++ interpreter for Clang/LLVM. Bindings are at run-time and no additional intermediate language is necessary. Thanks to Clang, it supports C++17.
Install it using pip:
$ pip install cppyy
For small projects, simply load the relevant library and the headers that you are interested in. E.g. take the code from the ctypes example in this thread, but split into header and code sections:
$ cat foo.h
class Foo {
public:
void bar();
};
$ cat foo.cpp
#include "foo.h"
#include <iostream>
void Foo::bar() { std::cout << "Hello" << std::endl; }
Compile it:
$ g++ -c -fPIC foo.cpp -o foo.o
$ g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o
and use it:
$ python
>>> import cppyy
>>> cppyy.include("foo.h")
>>> cppyy.load_library("foo")
>>> from cppyy.gbl import Foo
>>> f = Foo()
>>> f.bar()
Hello
>>>
Large projects are supported with auto-loading of prepared reflection information and the cmake fragments to create them, so that users of installed packages can simply run:
$ python
>>> import cppyy
>>> f = cppyy.gbl.Foo()
>>> f.bar()
Hello
>>>
Thanks to LLVM, advanced features are possible, such as automatic template instantiation. To continue the example:
>>> v = cppyy.gbl.std.vector[cppyy.gbl.Foo]()
>>> v.push_back(f)
>>> len(v)
1
>>> v[0].bar()
Hello
>>>
Note: I'm the author of cppyy.
A: You should have a look at Boost.Python. Here is the short introduction taken from their website:
The Boost Python Library is a framework for interfacing Python and
C++. It allows you to quickly and seamlessly expose C++ classes
functions and objects to Python, and vice-versa, using no special
tools -- just your C++ compiler. It is designed to wrap C++ interfaces
non-intrusively, so that you should not have to change the C++ code at
all in order to wrap it, making Boost.Python ideal for exposing
3rd-party libraries to Python. The library's use of advanced
metaprogramming techniques simplifies its syntax for users, so that
wrapping code takes on the look of a kind of declarative interface
definition language (IDL).
A: pybind11 minimal runnable example
pybind11 was previously mentioned at https://stackoverflow.com/a/38542539/895245 but I would like to give here a concrete usage example and some further discussion about implementation.
All and all, I highly recommend pybind11 because it is really easy to use: you just include a header and then pybind11 uses template magic to inspect the C++ class you want to expose to Python and does that transparently.
The downside of this template magic is that it slows down compilation, immediately adding a few seconds to any file that uses pybind11; see for example the investigation done on this issue. PyTorch agrees. A proposal to remediate this problem has been made at: https://github.com/pybind/pybind11/pull/2445
Here is a minimal runnable example to give you a feel of how awesome pybind11 is:
class_test.cpp
#include <string>
#include <pybind11/pybind11.h>
struct ClassTest {
ClassTest(const std::string &name, int i) : name(name), i(i) { }
void setName(const std::string &name_) { name = name_; }
const std::string getName() const { return name + "z"; }
void setI(const int i) { this->i = i; }
const int getI() const { return i + 1; }
std::string name;
int i;
};
namespace py = pybind11;
PYBIND11_PLUGIN(class_test) {
py::module m("my_module", "pybind11 example plugin");
py::class_<ClassTest>(m, "ClassTest")
.def(py::init<const std::string &, int>())
.def("setName", &ClassTest::setName)
.def("getName", &ClassTest::getName)
.def_readwrite("name", &ClassTest::name)
.def("setI", &ClassTest::setI)
.def("getI", &ClassTest::getI)
.def_readwrite("i", &ClassTest::i);
return m.ptr();
}
class_test_main.py
#!/usr/bin/env python3
import class_test
my_class_test = class_test.ClassTest("abc", 1);
print(my_class_test.getName())
print(my_class_test.getI())
my_class_test.setName("012")
my_class_test.setI(2)
print(my_class_test.getName())
print(my_class_test.getI())
assert(my_class_test.getName() == "012z")
assert(my_class_test.getI() == 3)
Compile and run:
#!/usr/bin/env bash
set -eux
sudo apt install pybind11-dev
g++ `python3-config --cflags` -shared -std=c++11 -fPIC class_test.cpp \
-o class_test`python3-config --extension-suffix` `python3-config --libs`
./class_test_main.py
Stdout output:
abcz
2
012z
3
If we tried to use a wrong type as in:
my_class_test.setI("abc")
it blows up as expected:
Traceback (most recent call last):
File "/home/ciro/test/./class_test_main.py", line 9, in <module>
my_class_test.setI("abc")
TypeError: setI(): incompatible function arguments. The following argument types are supported:
1. (self: my_module.ClassTest, arg0: int) -> None
Invoked with: <my_module.ClassTest object at 0x7f2980254fb0>, 'abc'
This example shows how pybind11 allows you to effortlessly expose the ClassTest C++ class to Python!
Notably, Pybind11 automatically understands from the C++ code that name is an std::string, and therefore should be mapped to a Python str object.
Compilation produces a file named class_test.cpython-36m-x86_64-linux-gnu.so which class_test_main.py automatically picks up as the definition point for the class_test natively defined module.
Perhaps the realization of how awesome this is only sinks in if you try to do the same thing by hand with the native Python API, see for example this example of doing that, which has about 10x more code: https://github.com/cirosantilli/python-cheat/blob/4f676f62e87810582ad53b2fb426b74eae52aad5/py_from_c/pure.c On that example you can see how the C code has to painfully and explicitly define the Python class bit by bit with all the information it contains (members, methods, further metadata...). See also:
*
*Can python-C++ extension get a C++ object and call its member function?
*Exposing a C++ class instance to a python embedded interpreter
*A full and minimal example for a class (not method) with Python C Extension?
*Embedding Python in C++ and calling methods from the C++ code with Boost.Python
*Inheritance in Python C++ extension
pybind11 claims to be similar to Boost.Python which was mentioned at https://stackoverflow.com/a/145436/895245 but more minimal because it is freed from the bloat of being inside the Boost project:
pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code. Its goals and syntax are similar to the excellent Boost.Python library by David Abrahams: to minimize boilerplate code in traditional extension modules by inferring type information using compile-time introspection.
The main issue with Boost.Python—and the reason for creating such a similar project—is Boost. Boost is an enormously large and complex suite of utility libraries that works with almost every C++ compiler in existence. This compatibility has its cost: arcane template tricks and workarounds are necessary to support the oldest and buggiest of compiler specimens. Now that C++11-compatible compilers are widely available, this heavy machinery has become an excessively large and unnecessary dependency.
Think of this library as a tiny self-contained version of Boost.Python with everything stripped away that isn't relevant for binding generation. Without comments, the core header files only require ~4K lines of code and depend on Python (2.7 or 3.x, or PyPy2.7 >= 5.7) and the C++ standard library. This compact implementation was possible thanks to some of the new C++11 language features (specifically: tuples, lambda functions and variadic templates). Since its creation, this library has grown beyond Boost.Python in many ways, leading to dramatically simpler binding code in many common situations.
pybind11 is also the only non-native alternative highlighted by the current Microsoft Python C binding documentation at: https://learn.microsoft.com/en-us/visualstudio/python/working-with-c-cpp-python-in-visual-studio?view=vs-2019 (archive).
Tested on Ubuntu 18.04, pybind11 2.0.1, Python 3.6.8, GCC 7.4.0.
A: I think cffi for python can be an option.
The goal is to call C code from Python. You should be able to do so
without learning a 3rd language: every alternative requires you to
learn their own language (Cython, SWIG) or API (ctypes). So we tried
to assume that you know Python and C and minimize the extra bits of
API that you need to learn.
http://cffi.readthedocs.org/en/release-0.7/
A: I love cppyy; it makes it very easy to extend Python with C++ code, dramatically increasing performance when needed.
It is powerful and frankly very simple to use.
Here is an example of how you can create a NumPy array and pass it to a class member function in C++.
cppyy_test.py
import cppyy
import numpy as np
cppyy.include('Buffer.h')
s = cppyy.gbl.Buffer()
numpy_array = np.empty(32000, np.float64)
s.get_numpy_array(numpy_array.data, numpy_array.size)
print(numpy_array[:20])
Buffer.h
struct Buffer {
void get_numpy_array(double *ad, int size) {
for( long i=0; i < size; i++)
ad[i]=i;
}
};
You can also create a Python module very easily (with CMake); this way you will avoid recompiling the C++ code every time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "596"
} |
Q: Is Lambda Probe dead? Does anyone know where to get the source code for LambdaProbe?
Alternatively, does anyone know if the project could be moved to a community repository?
Besides the tool not being updated for over a year, the LambdaProbe website has been down since late September 2008.
Background: Lambda Probe is a useful tool for viewing stats on a running tomcat server. It used to be found at http://www.lambdaprobe.org.
A: The site still seems to be pretty dead. However, I've done some digging into available source code and release notes, and here's everything I've found regarding Lambda Probe. I hope it helps!
Google cache search:
site:www.lambdaprobe.org
Release notes:
http://209.85.173.104/search?q=cache:HB0527hDa5AJ:www.lambdaprobe.org/d/latest.shtml+release+site:www.lambdaprobe.org&hl=en&gl=us&strip=1
Latest Release Downloads:
http://209.85.173.104/search?q=cache:t5jXPCYgiCsJ:www.lambdaprobe.org/d/download.htm+download+1.7b+site:www.lambdaprobe.org&hl=en&gl=us&strip=1
http://www.lambdaprobe.org/downloads/1.7/probe.1.7b.zip
http://www.lambdaprobe.org/downloads/1.7/probe.1.7b.src.zip
http://www.lambdaprobe.org/downloads/1.7/probe.1.7b-jb.zip
http://www.lambdaprobe.org/downloads/1.6/probe.1.6.zip
http://www.lambdaprobe.org/downloads/1.6/probe.1.6.src.zip
http://www.lambdaprobe.org/downloads/1.6/probe.1.6-jb.zip
A: There seems to be a fork of the project over at http://code.google.com/p/psi-probe/
A: The project is currently hosted here: https://launchpad.net/lambdaprobe/+download
Also the files are alternatively hosted on Softpedia here: http://webscripts.softpedia.com/scriptDownload/Lambda-Probe-Download-69093.html
A: I have no idea about the status of the site, but it looks like you can grab a copy over at Softpedia.
A: Actually, only part of the site is not working: it doesn't open the home page at http://www.lambdaprobe.org/, but the Softpedia external link points to http://www.lambdaprobe.org/downloads/1.5/probe.1.5.0.1.zip and you can download it from there.
A: Just played with the link Zloster sent and got version 1.6. Here it is
http://www.lambdaprobe.org/downloads/1.6/probe.1.6.zip
Enjoy!
A: It has been forked and now is on GitHub here: https://github.com/psi-probe/psi-probe
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Smart home in Emacs Can you have smart behavior for the home key in Emacs? By smart I mean that instead of going to column 0, it should go to the first non-blank character, then to column 0 on a second press, back to the first non-blank on a third, and so on.
Having smart end would be nice as well.
A: (defun smart-beginning-of-line ()
"Move point to first non-whitespace character or beginning-of-line.
Move point to the first non-whitespace character on this line.
If point was already at that position, move point to beginning of line."
(interactive "^") ; Use (interactive) in Emacs 22 or older
(let ((oldpos (point)))
(back-to-indentation)
(and (= oldpos (point))
(beginning-of-line))))
(global-set-key [home] 'smart-beginning-of-line)
I'm not quite sure what smart end would do. Do you normally have a lot of trailing whitespace?
Note: The major difference between this function and Robert Vuković's is that his always moves to the first non-blank character on the first keypress, even if the cursor was already there. Mine would move to column 0 in that case.
Also, he used (beginning-of-line-text) where I used (back-to-indentation). Those are very similar, but there are some differences between them. (back-to-indentation) always moves to the first non-whitespace character on a line. (beginning-of-line-text) sometimes moves past non-whitespace characters that it considers insignificant. For instance, on a comment-only line, it moves to the first character of the comment's text, not the comment marker. But either function could be used in either of our answers, depending on which behavior you prefer.
A: Thanks for this handy function. I use it all the time now and love it. I've made just one small change:
(interactive)
becomes:
(interactive "^")
From emacs help:
If the string begins with `^' and `shift-select-mode' is non-nil, Emacs first calls the function `handle-shift-select'.
Basically this makes shift-home select from the current position to the start of the line if you use shift-select-mode. It's especially useful in the minibuffer.
A: Note that there is already a back-to-indentation function which does what you want the first smart-home function to do, i.e. go to the first non-whitespace character on the line. It is bound by default to M-m.
A: There is now a package that does just that, mwim (Move Where I Mean)
A: This works with GNU Emacs, I didn't tried it with XEmacs.
(defun My-smart-home () "Odd home to beginning of line, even home to beginning of text/code."
(interactive)
(if (and (eq last-command 'My-smart-home)
(/= (line-beginning-position) (point)))
(beginning-of-line)
(beginning-of-line-text))
)
(global-set-key [home] 'My-smart-home)
A: My version: move to beginning of visual line, first non-whitespace, or beginning of line.
(defun smart-beginning-of-line ()
"Move point to beginning-of-line or first non-whitespace character"
(interactive "^")
(let ((p (point)))
(beginning-of-visual-line)
(if (= p (point)) (back-to-indentation))
(if (= p (point)) (beginning-of-line))))
(global-set-key [home] 'smart-beginning-of-line)
(global-set-key "\C-a" 'smart-beginning-of-line)
The [home] and "\C-a" (control+a) keys:
*
*Move the cursor (point) to the beginning of the visual line.
*If it is already at the beginning of the visual line, then move it to the first non-whitespace character of the line.
*If it is already there, then move it to the beginning of the line.
*While moving, keep the region (interactive "^").
This is taken from @cjm and @thomas; I then added the visual-line handling. (Sorry for my broken English.)
A: I adapted @Vuković's code to jump to beginning-of-line first:
(defun my-smart-beginning-of-line ()
"Move point to beginning-of-line. If repeat command it cycle
position between `back-to-indentation' and `beginning-of-line'."
(interactive "^")
(if (and (eq last-command 'my-smart-beginning-of-line)
(= (line-beginning-position) (point)))
(back-to-indentation)
(beginning-of-line)))
(global-set-key [home] 'my-smart-beginning-of-line)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
} |
Q: When to use thread pool in C#? I have been trying to learn multi-threaded programming in C# and I am confused about when it is best to use a thread pool vs. create my own threads. One book recommends using a thread pool for small tasks only (whatever that means), but I can't seem to find any real guidelines.
What are some pros and cons of thread pools vs creating my own threads? And what are some example use cases for each?
A: The thread pool is designed to reduce context switching among your threads. Consider a process that has several components running. Each of those components could be creating worker threads. The more threads in your process, the more time is wasted on context switching.
Now, if each of those components were queuing items to the thread pool, you would have a lot less context switching overhead.
The thread pool is designed to maximize the work being done across your CPUs (or CPU cores). That is why, by default, the thread pool spins up multiple threads per processor.
There are some situations where you would not want to use the thread pool. If you are waiting on I/O, or waiting on an event, etc then you tie up that thread pool thread and it can't be used by anyone else. Same idea applies to long running tasks, though what constitutes a long running task is subjective.
Pax Diablo makes a good point as well. Spinning up threads is not free. It takes time and they consume additional memory for their stack space. The thread pool will re-use threads to amortize this cost.
Note: you asked about using a thread pool thread to download data or perform disk I/O. You should not use a thread pool thread for this (for the reasons I outlined above). Instead use asynchronous I/O (aka the BeginXX and EndXX methods). For a FileStream that would be BeginRead and EndRead. For an HttpWebRequest that would be BeginGetResponse and EndGetResponse. They are more complicated to use, but they are the proper way to perform multi-threaded I/O.
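For illustration, here is a minimal sketch of that Begin/End pattern for an HttpWebRequest (the URL is a placeholder):
using System;
using System.IO;
using System.Net;
class AsyncDownload
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
        // BeginGetResponse returns immediately; the callback runs when the
        // response arrives, so no pool thread sits blocked on the network.
        IAsyncResult ar = request.BeginGetResponse(result =>
        {
            using (var response = (HttpWebResponse)request.EndGetResponse(result))
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd().Length);
            }
        }, null);
        ar.AsyncWaitHandle.WaitOne(); // keep this demo process alive
    }
}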
A: Beware of the .NET thread pool for operations that may block for any significant, variable or unknown part of their processing, as it is prone to thread starvation. Consider using the .NET parallel extensions, which provide a good number of logical abstractions over threaded operations. They also include a new scheduler, which should be an improvement on ThreadPool. See here
A: I would suggest you use a thread pool in C# for the same reasons as any other language.
When you want to limit the number of threads running or don't want the overhead of creating and destroying them, use a thread pool.
By small tasks, the book you read means tasks with a short lifetime. If it takes ten seconds to create a thread which only runs for one second, that's one place where you should be using pools (ignore my actual figures, it's the ratio that counts).
Otherwise you spend the bulk of your time creating and destroying threads rather than simply doing the work they're intended to do.
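As a minimal sketch of how the pool amortizes that cost, queuing a short-lived task is a one-liner (the printed message is just for demonstration):
using System;
using System.Threading;
class PoolDemo
{
    static void Main()
    {
        // The pool reuses an existing thread instead of paying the
        // creation/destruction cost for this short-lived task.
        ThreadPool.QueueUserWorkItem(state =>
            Console.WriteLine("On a pool thread: " +
                Thread.CurrentThread.IsThreadPoolThread));
        Thread.Sleep(100); // crude wait so the process doesn't exit first
    }
}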
A: If you have lots of logical tasks that require constant processing and you want that to be done in parallel use the pool+scheduler.
If you need to make your IO related tasks concurrently such as downloading stuff from remote servers or disk access, but need to do this say once every few minutes, then make your own threads and kill them once you're finished.
Edit: Regarding some concrete uses, I use thread pools for database access, physics/simulation, AI (games), and for scripted tasks run on virtual machines that process lots of user-defined tasks.
Normally a pool consists of 2 threads per processor (so likely 4 nowadays); however, you can set the number of threads you want, if you know how many you need.
Edit: The reason to make your own threads is context switching (that's when threads need to be swapped in and out of the process, along with their memory). Useless context switches, say when your threads are just sitting around idle, can easily halve the performance of your program (say you have 3 sleeping threads and 2 active threads). Thus, if those downloading threads are just waiting, they're eating up tons of CPU and cooling down the cache for your real application.
A: One reason to use the thread pool for small tasks only is that there are a limited number of thread pool threads. If one is used for a long time then it stops that thread from being used by other code. If this happens many times then the thread pool can become used up.
Using up the thread pool can have subtle effects - some .NET timers use thread pool threads and will not fire, for example.
A: Here's a nice summary of the thread pool in .Net: http://blogs.msdn.com/pedram/archive/2007/08/05/dedicated-thread-or-a-threadpool-thread.aspx
The post also has some points on when you should not use the thread pool and start your own thread instead.
A: If you have a background task that will live for a long time, like for the entire lifetime of your application, then creating your own thread is a reasonable thing. If you have short jobs that need to be done in a thread, then use thread pooling.
In an application where you are creating many threads, the overhead of creating the threads becomes substantial. Using the thread pool creates the threads once and reuses them, thus avoiding the thread creation overhead.
In an application that I worked on, changing from creating threads to using the thread pool for the short-lived threads really helped the throughput of the application.
A: For the highest performance with concurrently executing units, write your own thread pool, where a pool of Thread objects is created at startup and the threads block (formerly "suspended"), waiting on a context to run (an object with a standard interface implemented by your code).
So many articles about Tasks vs. Threads vs. the .NET ThreadPool fail to really give you what you need to make a decision for performance. But when you compare them, Threads win out, and especially a pool of Threads. They are distributed best across CPUs and they start up faster.
What should be discussed is the fact that the main execution unit of Windows (including Windows 10) is a thread, and OS context-switching overhead is usually negligible. Simply put, I have not been able to find convincing evidence for many of these articles' claims, whether the claim is higher performance from saving context switches or better CPU usage.
Now for a bit of realism:
Most of us won’t need our application to be deterministic, and most of us do not have a hard-knocks background with threads, which for instance often comes with developing an operating system. What I wrote above is not for a beginner.
So what may be most important to discuss is what is easy to program.
If you create your own thread pool, you’ll have a bit of writing to do as you’ll need to be concerned with tracking execution status, how to simulate suspend and resume, and how to cancel execution – including in an application-wide shut down. You might also have to be concerned with whether you want to dynamically grow your pool and also what capacity limitation your pool will have. I can write such a framework in an hour but that is because I’ve done it so many times.
Perhaps the easiest way to write an execution unit is to use a Task. The beauty of a Task is that you can create one and kick it off in-line in your code (though caution may be warranted). You can pass a cancellation token to handle when you want to cancel the Task. Also, it uses the promise approach to chaining events, and you can have it return a specific type of value. Moreover, with async and await, more options exist and your code will be more portable.
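Here is a minimal sketch of that Task-plus-cancellation-token approach (the work inside the loop is a placeholder):
using System;
using System.Threading;
using System.Threading.Tasks;
class TaskDemo
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        // A Task kicked off in-line that returns a typed value and
        // observes the cancellation token while it works.
        Task<int> work = Task.Run(() =>
        {
            int sum = 0;
            for (int i = 0; i < 1000; i++)
            {
                cts.Token.ThrowIfCancellationRequested();
                sum += i;
            }
            return sum;
        }, cts.Token);
        Console.WriteLine(work.Result); // blocks until the task completes
    }
}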
In essence, it is important to understand the pros and cons with Tasks vs. Threads vs. the .NET ThreadPool. If I need high performance, I am going to use threads, and I prefer using my own pool.
An easy way to compare is to start up 512 Threads, 512 Tasks, and 512 ThreadPool threads. You'll find a delay in the beginning with Threads (hence why write a thread pool), but all 512 Threads will be running in a few seconds, while Tasks and .NET ThreadPool threads take up to a few minutes to all start.
Below are the results of such a test (i5 quad core with 16 GB of RAM), giving each 30 seconds to run. The code executed performs simple file I/O on an SSD drive.
[Image: test results]
A: I highly recommend reading the this free e-book:
Threading in C# by Joseph Albahari
At least read the "Getting Started" section. The e-book provides a great introduction and includes a wealth of advanced threading information as well.
Knowing whether or not to use the thread pool is just the beginning. Next you will need to determine which method of entering the thread pool best suits your needs:
*
*Task Parallel Library (.NET Framework
4.0)
*ThreadPool.QueueUserWorkItem
*Asynchronous Delegates
*BackgroundWorker
This e-book explains these all and advises when to use them vs. create your own thread.
A: Thread pools are great when you have more tasks to process than available threads.
You can add all the tasks to a thread pool and specify the maximum number of threads that can run at a certain time.
Check out this page on MSDN:
http://msdn.microsoft.com/en-us/library/3dasc8as(VS.80).aspx
A: Always use a thread pool if you can, work at the highest level of abstraction possible. Thread pools hide creating and destroying threads for you, this is usually a good thing!
A: Most of the time you can use the pool as you avoid the expensive process of creating the thread.
However in some scenarios you may want to create a thread. For example if you are not the only one using the thread pool and the thread you create is long-lived (to avoid consuming shared resources) or for example if you want to control the stacksize of the thread.
A: Don't forget to investigate the Background worker.
I find for a lot of situations, it gives me just what i want without the heavy lifting.
Cheers.
A: I usually use the Threadpool whenever I need to just do something on another thread and don't really care when it runs or ends. Something like logging or maybe even background downloading a file (though there are better ways to do that async-style). I use my own thread when I need more control. Also what I've found is using a Threadsafe queue (hack your own) to store "command objects" is nice when I have multiple commands that I need to work on in >1 thread. So you'd may split up an Xml file and put each element in a queue and then have multiple threads working on doing some processing on these elements. I wrote such a queue way back in uni (VB.net!) that I've converted to C#. I've included it below for no particular reason (this code might contain some errors).
using System.Collections.Generic;
using System.Threading;
namespace ThreadSafeQueue {
public class ThreadSafeQueue<T> {
private Queue<T> _queue;
public ThreadSafeQueue() {
_queue = new Queue<T>();
}
public void EnqueueSafe(T item) {
lock ( this ) {
_queue.Enqueue(item);
if ( _queue.Count >= 1 )
Monitor.Pulse(this);
}
}
public T DequeueSafe() {
lock ( this ) {
while ( _queue.Count <= 0 )
Monitor.Wait(this);
return this.DeEnqueueUnblock();
}
}
private T DeEnqueueUnblock() {
return _queue.Dequeue();
}
}
}
A: I wanted a thread pool to distribute work across cores with as little latency as possible, and that didn't have to play well with other applications. I found that the .NET thread pool performance wasn't as good as it could be. I knew I wanted one thread per core, so I wrote my own thread pool substitute class. The code is provided as an answer to another StackOverflow question over here.
As to the original question, the thread pool is useful for breaking repetitive computations up into parts that can be executed in parallel (assuming they can be executed in parallel without changing the outcome). Manual thread management is useful for tasks like UI and IO.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "128"
} |
Q: Maximum number of threads in a .NET app? What is the maximum number of threads you can create in a C# application? And what happens when you reach this limit? Is an exception of some kind thrown?
A: You should be using the thread pool (or async delegates, which in turn use the thread pool) so that the system can decide how many threads should run.
A: Jeff Richter in CLR via C#:
"With version 2.0 of the CLR, the maximum number
of worker threads default to 25 per CPU in the machine
and the maximum number of I/O
threads defaults to 1000. A limit of 1000 is effectively no limit at all."
Note this is based on .NET 2.0. This may have changed in .NET 3.5.
[Edit] As @Mitch pointed out, this is specific to the CLR ThreadPool. If you're creating threads directly see the @Mitch and others comments.
A: Mitch is right. It depends on resources (memory).
Although Raymond's article is dedicated to Windows threads, not to C# threads, the logic applies the same (C# threads are mapped to Windows threads).
However, as we are in C#, if we want to be completely precise, we need to distinguish between "started" and "non started" threads. Only started threads actually reserve stack space (as we could expect). Non started threads only allocate the information required by a thread object (you can use reflector if interested in the actual members).
You can actually test it for yourself, compare:
static void DummyCall()
{
Thread.Sleep(1000000000);
}
static void Main(string[] args)
{
int count = 0;
var threadList = new List<Thread>();
try
{
while (true)
{
Thread newThread = new Thread(new ThreadStart(DummyCall), 1024);
newThread.Start();
threadList.Add(newThread);
count++;
}
}
catch (Exception ex)
{
}
}
with:
static void DummyCall()
{
Thread.Sleep(1000000000);
}
static void Main(string[] args)
{
int count = 0;
var threadList = new List<Thread>();
try
{
while (true)
{
Thread newThread = new Thread(new ThreadStart(DummyCall), 1024);
threadList.Add(newThread);
count++;
}
}
catch (Exception ex)
{
}
}
Put a breakpoint in the exception (out of memory, of course) in VS to see the value of counter. There is a very significant difference, of course.
A: There is no inherent limit. The maximum number of threads is determined by the amount of physical resources available. See this article by Raymond Chen for specifics.
If you need to ask what the maximum number of threads is, you are probably doing something wrong.
[Update: Just out of interest: .NET Thread Pool default numbers of threads:
*
*1023 in Framework 4.0 (32-bit environment)
*32767 in Framework 4.0 (64-bit environment)
*250 per core in Framework 3.5
*25 per core in Framework 2.0
(These numbers may vary depending upon the hardware and OS)]
A: I did a test on a 64-bit system with a C# console app; the exception was an out-of-memory exception, thrown after 2949 threads.
I realize we should be using the thread pool, which I do, but this answer is in response to the main question ;)
A: I would recommend running the ThreadPool.GetMaxThreads method in a debug session:
ThreadPool.GetMaxThreads(out int workerThreadsCount, out int ioThreadsCount);
Docs and examples:
https://learn.microsoft.com/en-us/dotnet/api/system.threading.threadpool.getmaxthreads?view=netframework-4.8
A: You can test it by using this snipped code:
private static void Main(string[] args)
{
int threadCount = 0;
try
{
for (int i = 0; i < int.MaxValue; i ++)
{
new Thread(() => Thread.Sleep(Timeout.Infinite)).Start();
threadCount ++;
}
}
catch
{
Console.WriteLine(threadCount);
Console.ReadKey(true);
}
}
Beware of the difference between the 32-bit and 64-bit modes of your application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "165"
} |
Q: How to set OS X Terminal's default home? For some reason after I installed Boot Camp, my OS X terminal started to point to the Boot Camp drive instead of my OS X home directory by default! Once in the terminal I know how to switch back and forth and am able to do that, but I was wondering how to make my terminal default back to my OS X home folder?
I've checked my Home Directory under System Preferences->Accounts->Control-click on my account and it is pointing to the right place. I've also tried unmounting it, with no luck.
A: You probably want these instructions for how to use the command line:
https://superuser.com/questions/154193/setting-a-users-home-directory-on-mac-os-x-server-from-the-command-line
Basically, use the dscl command line tool to see what the system thinks your home directory is set to, or to try and reset it....
A: This is a bit of under-the-hood fiddling, but you could simply enter
cd <directory>
in /Users/<yourUserName>/.profile
and start up a new terminal. You should then be in the directory you want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: WebFocus - is there a free/open source alternative? What is a good free/open source alternative to WebFOCUS?
Is there an ASP.NET way of getting info from an OLAP cube?
Update: I chose Magnus Smith's answer as the correct one, but Alexmac's answer was also very good!
A: I am not aware of a free analytical suite. But what is it you are trying to accomplish?
You can query an OLAP cube by using MDX queries with ADO.net.
http://msdn.microsoft.com/en-us/library/ms144785.aspx
You can then bind the results to a datagrid for example. MDX is a little like SQL but be careful as it has several syntax differences. I think Excel has a query tool you can use to graphically construct your queries which can be helpful.
On a related note, look into SQL Reporting Services. With SQL Express you can use a cut-down version of SQL Reporting Services that may accomplish what you are looking to do.
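For example, a hedged sketch of an MDX query using the ADOMD.NET client (the connection string, cube, and measure names below are placeholders, not a real schema):
using System;
using Microsoft.AnalysisServices.AdomdClient;
class MdxDemo
{
    static void Main()
    {
        using (var conn = new AdomdConnection("Data Source=localhost;Catalog=SalesCube"))
        {
            conn.Open();
            var cmd = new AdomdCommand(
                "SELECT [Measures].[Sales] ON COLUMNS FROM [Sales]", conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetValue(0)); // one cell per row
            }
        }
    }
}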
A: Watch out, MDX is horrible - nothing like SQL even though it looks like it is.
You can get a .NET control to show OLAP data on a web page. Our company tested one from Dundas that was OK. I never got involved though so I don't know if it was brilliant or merely serviceable.
http://www.dundas.com/Gallery/Chart/NET/index.aspx?ImgGroup=OLAP
We gave up on SQL Reporting Services as it was not suitable for Internet usage (fine on Intranet though).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's a Windows text editor that matches this criteria My situation: I love the e text editor; however, I'm on a new computer and my license is being used on my old one. I can't exactly afford another license, so I'm looking for a free editor that meets the following criteria:
*
*Decent syntax highlighting
*Ability to view a directory and its contents on the side panel, without the need to create it as a 'project' (Very Important)
*Easily themable (I like dark themes)
*Tabs
Also would be nice:
*
*S/FTP support
*Code snippets/bundles
*Multi-line editing
And is not (Simply listed because they're common suggestions, but I've tried and not found them to meet my criteria):
*
*Vim/Emacs
*Notepad++
*Crimson/Emerald Editor
*Programmer's Notepad
*Wordpad/Notepad :P
Thanks. Oh, and as a reference, here's a picture of my current setup: Link
Edit: Thanks to all those who made suggestions. All three (JEdit, Cream, and PsPad) are solid candidates for anyone looking at this thread.
A: You may scoff at this, but Cream is a very un-Vimlike offshoot of Vim. Here is an article written specifically for Textmate (and e) fans who want to try it.
A: jedit. I believe that it does everything on your wishlist.
A: PsPad. Excellent free text code editor.
Also does FTP site based editing as if it was in a local folder. very handy.
A:
When you register e, the license is bound to you, and not to one specific computer. This means that you can use you license on as many computers as you like. There is also no limit on platforms, so it will also be valid on future versions for Linux or any other OS.
http://e-texteditor.com/blog/2007/licensing
There's no need to buy another license!
A: You're willing to spend all this time and effort asking about and evaluating other editors which will almost certainly not have all the features you want, yet you can't shell out $35 for another licence?
When I'm making decisions like this, I always value my time at $100/hour so if this were going to take me more than 20 minutes, I'd just buy another licence.
Time is the one commodity you can't recover; you can always make more money...
A: Pick your poison here: http://en.wikipedia.org/wiki/Comparison_of_text_editors
A: I would like to recommend Gedit. There is a Windows version.
I haven't ever used a Mac, but I think TextMate must be the best text editor. When I'm on Windows, I really like the e text editor. But later I had to move to Linux (Ubuntu) for a requirement, so I searched for a TextMate/e-like text editor for Linux.
I found that Gedit is not a bad one. Here is my Gedit...
By your requirements, Gedit already has decent syntax highlighting and tabs. It doesn't have a project pane, but the Document List and File Browser combination is not bad, I think.
Easily themable? Gedit doesn't have many ready-made themes, but you can easily create your own by writing XML color schemes based on the default themes (in Ubuntu 8.04, the default themes directory is /usr/share/gtksourceview-2.0/styles).
As for me, an embedded terminal, word completion and code snippets are also important. Gedit has plenty of useful plugins for those features.
There are many customization tutorials for Gedit. You may need to spend some time on customization. :)
A: Try Komodo Edit; it's free and can do the things you mentioned, AFAIK.
http://activestate.com/Products/komodo_ide/komodo_edit.mhtml
The sidebar folder feature is called "live folders", and it's on the right sidebar. It also has vi emulation, dark themes, tabs, SFTP support, etc.
A: You also might want to take a look at EditPlus.
A: The e text editor license is per user not per machine. So, you don't need to buy another license to use it on a different machine, just copy the license.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I brighten the colors in the rxvt family of terminals? I know how to lighten the colors for certain commands, however I'd like to lighten the standard ansi colors across all commands.
A: I found instructions for doing it for Xterm and aterm here:
http://gentoo-wiki.com/TIP_Linux_Colors_in_Aterm/rxvt
From those I was able to get brighter colors by adding:
rxvt*background: #000000
rxvt*foreground: #7f7f7f
rxvt*color0: #000000
rxvt*color1: #9e1828
rxvt*color2: #aece92
rxvt*color3: #968a38
rxvt*color4: #414171
rxvt*color5: #963c59
rxvt*color6: #418179
rxvt*color7: #bebebe
rxvt*color8: #666666
rxvt*color9: #cf6171
rxvt*color10: #c5f779
rxvt*color11: #fff796
rxvt*color12: #4186be
rxvt*color13: #cf9ebe
rxvt*color14: #71bebe
rxvt*color15: #ffffff
to the bottom of my ~/.Xdefaults file
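If the new colors don't show up in freshly started terminals, you may need to reload the X resource database first (an extra step, not part of the original answer):
xrdb -merge ~/.Xdefaults
then open a new rxvt.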
A: A simple solution would be to turn up your monitor brightness :)
More seriously, see the RESOURCES section of the manual.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Checking if array is multidimensional or not?
*
*What is the most efficient way to check if an array is a flat array of primitive values or if it is a multidimensional array?
*Is there any way to do this without actually looping through an array and running is_array() on each of its elements?
A: As of PHP 7.3 (which introduced array_key_first()) you could simply do this, though note it only checks the first element:
public function is_multi(array $array): bool
{
    return is_array($array[array_key_first($array)]);
}
A: You could check is_array() on the first element, under the assumption that if the first element of an array is an array, then the rest of them are too.
A: I think you will find that this function is the simplest, most efficient, and fastest way.
function isMultiArray($a){
    foreach($a as $v) if(is_array($v)) return TRUE;
    return FALSE;
}
You can test it like this:
$a = array(1 => 'a',2 => 'b',3 => array(1,2,3));
$b = array(1 => 'a',2 => 'b');
echo isMultiArray($a) ? 'is multi':'is not multi';
echo '<br />';
echo isMultiArray($b) ? 'is multi':'is not multi';
A: For PHP 4.2.0 or newer:
function is_multi($array) {
    return (count($array) != count($array, 1));
}
A: Use count() twice; one time in default mode and one time in recursive mode. If the values match, the array is not multidimensional, as a multidimensional array would have a higher recursive count.
if (count($array) == count($array, COUNT_RECURSIVE))
{
    echo 'array is not multidimensional';
}
else
{
    echo 'array is multidimensional';
}
The second parameter, mode, was added in PHP 4.2.0. From the PHP docs:
If the optional mode parameter is set to COUNT_RECURSIVE (or 1), count() will recursively count the array. This is particularly useful for counting all the elements of a multidimensional array. count() does not detect infinite recursion.
However this method does not detect array(array()).
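For example (an illustration, not from the original answer), both counts come out as 1, so the check reports a flat array:
var_dump(count(array(array())));                  // int(1)
var_dump(count(array(array()), COUNT_RECURSIVE)); // int(1)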
A: Don't use COUNT_RECURSIVE.
Click this site to see why.
Use rsort and then isset:
function is_multi_array( $arr ) {
    rsort( $arr );
    return isset( $arr[0] ) && is_array( $arr[0] );
}
//Usage
var_dump( is_multi_array( $some_array ) );
A: The short answer is no, you can't do it without at least looping implicitly if the 'second dimension' could be anywhere. If it has to be in the first item, you'd just do
is_array($arr[0]);
But the most efficient general way I could find is to use a foreach loop on the array, short-circuiting whenever a hit is found (at least the implicit loop is better than the straight for()):
$ more multi.php
<?php
$a = array(1 => 'a', 2 => 'b', 3 => array(1,2,3));
$b = array(1 => 'a', 2 => 'b');
$c = array(1 => 'a', 2 => 'b', 'foo' => array(1, array(2)));

function is_multi($a) {
    $rv = array_filter($a, 'is_array');
    if (count($rv) > 0) return true;
    return false;
}

function is_multi2($a) {
    foreach ($a as $v) {
        if (is_array($v)) return true;
    }
    return false;
}

function is_multi3($a) {
    // Note: assumes consecutive 0-based numeric keys; the test arrays
    // above start at key 1 (and $c has a string key), so this variant
    // misses elements and mainly illustrates the cost of a straight for().
    $c = count($a);
    for ($i = 0; $i < $c; $i++) {
        if (is_array($a[$i])) return true;
    }
    return false;
}

$iters = 500000;

$time = microtime(true);
for ($i = 0; $i < $iters; $i++) {
    is_multi($a);
    is_multi($b);
    is_multi($c);
}
$end = microtime(true);
echo "is_multi took ".($end-$time)." seconds in $iters times\n";

$time = microtime(true);
for ($i = 0; $i < $iters; $i++) {
    is_multi2($a);
    is_multi2($b);
    is_multi2($c);
}
$end = microtime(true);
echo "is_multi2 took ".($end-$time)." seconds in $iters times\n";

$time = microtime(true);
for ($i = 0; $i < $iters; $i++) {
    is_multi3($a);
    is_multi3($b);
    is_multi3($c);
}
$end = microtime(true);
echo "is_multi3 took ".($end-$time)." seconds in $iters times\n";
?>
$ php multi.php
is_multi took 7.53565130424 seconds in 500000 times
is_multi2 took 4.56964588165 seconds in 500000 times
is_multi3 took 9.01706600189 seconds in 500000 times
Implicit looping, but we can't short-circuit as soon as a match is found...
$ more multi.php
<?php
$a = array(1 => 'a', 2 => 'b', 3 => array(1,2,3));
$b = array(1 => 'a', 2 => 'b');

function is_multi($a) {
    $rv = array_filter($a, 'is_array');
    if (count($rv) > 0) return true;
    return false;
}

var_dump(is_multi($a));
var_dump(is_multi($b));
?>
$ php multi.php
bool(true)
bool(false)
A: I think this is the most straightforward way, and it's state-of-the-art:
function is_multidimensional(array $array) {
    return count($array) !== count($array, COUNT_RECURSIVE);
}
A: Even this works:
is_array(current($array));
If it returns false, it's a single-dimension array; if true, it's a multidimensional array.
current() gives you the first element of your array, and is_array() checks whether that first element is an array. Note that only the first element is inspected.
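For example (an illustration, not from the original answer), the check misses arrays whose first element is scalar:
$a = array('x', array(1, 2));
var_dump(is_array(current($a))); // bool(false), yet $a is multidimensional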
A: This function will return the number of array dimensions as an int (stolen from here).
function countdim($array)
{
    if (is_array(reset($array)))
        $return = countdim(reset($array)) + 1;
    else
        $return = 1;

    return $return;
}
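For example (an illustration, not from the original answer; note that only the first element is followed at each level):
var_dump(countdim(array(1, 2)));          // int(1)
var_dump(countdim(array(array(1), 2)));   // int(2)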
A: You can also do a simple check like this:
$array = array('yo'=>'dream', 'mydear'=> array('anotherYo'=>'dream'));
$array1 = array('yo'=>'dream', 'mydear'=> 'not_array');
function is_multi_dimensional($array){
    $flag = 0;
    // each() was deprecated in PHP 7.2 and removed in PHP 8;
    // foreach is the equivalent construct here.
    foreach ($array as $k => $value) {
        if (is_array($value))
            $flag = 1;
    }
    return $flag;
}
echo is_multi_dimensional($array); // returns 1
echo is_multi_dimensional($array1); // returns 0
A: I think this one is classy (props to another user; I don't know their username):
static public function isMulti($array)
{
    $result = array_unique(array_map("gettype", $array));
    return count($result) == 1 && array_shift($result) == "array";
}
A: In my case, I was stuck with a very strange condition:
1st case = array("data" => "name");
2nd case = array("data" => array("name" => "username", "fname" => "fname"));
But if "data" holds an array instead of a value, sizeof() and count() don't help with this condition, so I created a custom function to check.
If the first index of the array holds a plain value, it returns "only value";
but if the first index holds an array instead of a value, it returns "has array".
I use this:
function is_multi($a) {
    foreach ($a as $v) {
        if (is_array($v))
        {
            return "has array";
        }
        break; // only the first element is ever checked
    }
    return 'only value';
}
Special thanks to Vinko Vrsalovic
A: It's as simple as:
$isMulti = !empty(array_filter($array, function($e) {
    return is_array($e);
}));
A: Try the following:
if (count($arrayList) != count($arrayList, COUNT_RECURSIVE))
{
    echo 'arrayList is multidimensional';
} else {
    echo 'arrayList is not multidimensional';
}
A: $is_multi_array = array_reduce(array_keys($arr), function ($carry, $key) use ($arr) { return $carry && is_array($arr[$key]); }, true);
Here is a nice one-liner. It iterates over every key to check whether the value at that key is an array, and ensures true is returned only if every value in the array is itself an array.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "161"
} |
Q: Wide exec for C/C++ Is there a wchar_t version of exec[lv][pe] (i.e. an exec that uses wchar_t as path and wchar_t as arguments)?
In Windows, I can just do CreateProcessW(process, cmdline), but in *nix, I'm stuck (i.e. no pure POSIX equivalent).
I'm trying to add UTF-16 support to my program (an autorun).
A: There is not. In UNIX, it's customary to use UTF-8 when interacting with the environment.
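For illustration, a minimal sketch (not from the original answer) of that convention: convert the wide strings to the locale's multibyte encoding, which is UTF-8 under a *.UTF-8 locale, then call the ordinary narrow exec. The wrapper name is hypothetical:
#include <stdlib.h>
#include <locale.h>
#include <unistd.h>
#include <wchar.h>

/* Convert a wide string to a heap-allocated multibyte string. */
static char *to_mb(const wchar_t *ws) {
    size_t n = wcstombs(NULL, ws, 0);   /* size needed, excluding the NUL */
    if (n == (size_t)-1) return NULL;   /* unconvertible character */
    char *s = malloc(n + 1);
    if (s) wcstombs(s, ws, n + 1);
    return s;
}

/* Hypothetical wide wrapper around execlp(). */
int wexeclp_sketch(const wchar_t *wfile, const wchar_t *warg0) {
    setlocale(LC_ALL, "");              /* honor the user's (UTF-8) locale */
    char *file = to_mb(wfile);
    char *arg0 = to_mb(warg0);
    if (!file || !arg0) return -1;
    return execlp(file, arg0, (char *)NULL);
}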
A: There is a problem though: the file system on UNIX/Linux is encoding-agnostic. All file names are just "a bunch of bytes"
So if I do LANG=ja_JP.EUC-JP and create a file with a Japanese name, then do LANG=ja_JP.UTF8, when I look at my file the name will look like junk, and it will be an invalid UTF-8 string.
You might say: why do that? But imagine you have a system used by hundreds of international users, each of them using Russian/Chinese/Korean/Arabic files, and you have to write a backup application :-(
The "solution" is to ask everybody to set the locale to something.UTF8, but that is just a convention, the system itself does not enforce anything.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Adding custom log locations to the OS X console application After searching online, the best solution I've found so far is to just make a symbolic link in either "/Library/logs/" or "~/Library/logs/" to get it to show up in the Console application.
I'm wondering if it would be possible to add a new directory or log file to the "root" level directly under the "LOG FILES" section in the console.
Here's a quick screenshot:
A: My solution for macOS Sierra:
First and last step: create a hard link from your source (log) directory into, for example, one of the existing official log directories you can see in Console.app.
I use my ~/Library/Logs directory for that.
hln /usr/local/var/log /Users/dierk/Library/Logs/_usr_local_var_log
Cross-posting this great tool for creating hard links, originally posted by Sam.
Short intro:
To install Hardlink, ensure you've installed homebrew, then run:
brew install hardlink-osx
Once installed, create a hard link with:
hln [source] [destination]
A: There is one way to get your log files into the console.
You can add a symlink to the log file or log directory to one of the directories in the list. The directory ~/Library/Logs seems like the logical choice for adding your own log files.
For myself I wanted easy access to apache2 logs. I installed apache2 using macports and the default log file is located at /opt/local/apache2/logs.
Thus all I did was create the symlink to that directory.
# cd ~/Library/Logs
# ln -s /opt/local/apache2/logs/ apache2
Now I can easily use the console.app to get to the logs.
A: I actually just came across this option that worked perfectly for me:
Actually if you open terminal and...
$ cd /Library/Logs
then sym-link to your new log directory, e.g. I want my chroot'ed apache logs as 'www':
$ ln -s /chroot/apache/private/var/log www
then re-open Console.app
drill down into /Library/Logs and you will find your sym-linked directory.
;-)
Mohclips.
http://forums.macosxhints.com/showthread.php?t=35680
A: In Terminal run this command... append any log file directories you want to add
defaults write com.apple.Console LogFolderPaths -array '~/Library/Logs/' '/Library/Logs/' '/var/log/' '/opt/local/var/log/'
A: Since Mavericks, symlink behavior has changed, so "ln -s" doesn't work anymore.
Use hardlink-osx instead to create a hard link to your directory (it can be installed via Homebrew).
A: Very old post, I know, but this is the only way I could get it to work.
cd /Library/Logs
sudo mkdir log_files
sudo ln -s /Users/USERNAME/Sites/website/logs/* log_files
A: In Mac OS X 10.11, you may not be able to link to a folder of logs; instead, you need to link to each log file in the logs folder for it to show up inside Console.
ln -s /opt/local/apache2/logs/error_log ~/Library/Logs/Apache2/error_log
A: You can just open any text file with Console.app and it will add and keep it. Folders, though: no luck on that yet.
A: I was able to hardlink the files into ~/Library/logs by running:
ln /usr/local/var/logs/postgres.log ~/Library/logs
Notice the absence of -s.
No luck for directories though.
OSX Sierra 10.12.6
A: I don't believe it's possible.
If you're generating log files, you should generate them into one of the standard locations anyway, so this won't be an issue.
A: Just tried to do something similar.
I entered this in Terminal while Console.app was running:
sudo mkdir -p /usr/local/var/log/apache2
sudo mv /private/var/log/apache2 /usr/local/var/log/apache2/apache2-old
sudo ln -s /usr/local/var/log/apache2 /private/var/log/apache2
Now whenever I open the Console.app it crashes.
Really wish there was a way of adding log files in the file list. You CAN do it by dragging and dropping a folder onto Console.app (giving it a directory path as an argument), but the added folder only displays its immediate contents and doesn't allow for recursively descending into folders.
---------EDIT BELOW----------
Never mind: I stupidly did something like this, leading to infinite recursion in Console.app
sudo mkdir -p /usr/local/var/log/apache2
sudo ln -s /private/var/log/apache2/apache2 /usr/local/var/log/apache2
sudo mv /private/var/log/apache2 /usr/local/var/log/apache2/apache2-old
sudo ln -s /usr/local/var/log/apache2 /private/var/log/apache2
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Debugging SharePoint 2007 Code How do you debug your SharePoint 2007 code? Since SharePoint runs on a remote server, and I'm developing on a windows xp machine (with the necessary .dll files copied into my GAC), I haven't had much luck with finding easy ways to debug. Breakpoints don't work, etc.
The best way I've come up with is to enable page tracing in the web.config file, write trace messages throughout my code, and access trace.axd whenever I need to debug.
Does anyone have any better suggestions for debugging? Am I missing something?
A: From Andrew Connell's blog post on the subject:
Attaching the debugger to GAC'd assemblies: "Why aren't my breakpoints being hit?!?!" Ever been there? Me too... what a PITA that is! What's going on? Well, the assemblies are in the GAC and the Visual Studio debugger can't see the debugging symbols (aka: *.pdb). Unless you've gone through the trouble of setting up a symbol store where all your PDBs are going, you'll need to put the debugging symbols in the same location as the assembly. The trick is finding the folder that contains your DLL in the GAC.
The c:\windows\assembly folder is not a real folder, it's a virtual folder. To get to the REAL folder, do the following:
*
*Start » Run
*%systemroot%\assembly\gac [ENTER]
This will open the GAC folder. Now, poke around until you find a folder that looks like this (you might need to jump up one folder and dive into the MSIL folder): [assembly file name - .DLL extension][assembly version in format of #.#.#.#]__[assembly public key token]. When you find that folder, open it up and you'll see your assembly. Copy the PDB file to that folder and then attach the debugger for some debugging joy!
A: The best way (even the one endorsed by Microsoft) is to have a Windows 2003 Server with Sharepoint as your local Development machine.
See also this topic.
A: Don't put your assemblies into the GAC, put them in the bin directory - then you can use the VS remote debugger. Google creating .WSP files for distribution.
This also has the advantage that it's easier to copy your new builds onto the server after compilation (post-build step), and it's also the recommended way to increase security.
A: I recommend you develop on a Windows 2003 server with Sharepoint. It's a hassle to debug on a remote server.
You can do it in a virtual machine with VMWare or Virtual PC, if you have XP on your workstation.
A: Virtual machine is the only way to go. You don't want to dedicate a whole machine to dev (unless you have extras) and developing on your production server is just asking for trouble. I prefer VMWare, but there are others that work just as well.
Tracing works well as normal debugging isn't really an option.
What else I do is try to develop all the logic (the stuff that isn't SharePoint dependent) on just a regular asp.net site, then integrate it into SharePoint after it's tested.
Hope that makes sense.
Are you talking about developing web parts? Custom pages? Something else?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do you pass an object to a windsor container instead of a type? I have the following class:
public class ViewPage<TView, TPresenter> : Page
    where TView : IView
    where TPresenter : Presenter<TView>
{
    public ViewPage()
    {
        if (!(this is TView))
            throw new Exception(String.Format("The view must be of type {0}", typeof(TView)));

        IWindsorContainer container = new WindsorContainer();
        container.AddComponent("view", typeof(IView), typeof(TView));
        container.AddComponent("presenter", typeof(Presenter<TView>), typeof(TPresenter));
        TPresenter presenter = container[typeof(TPresenter)] as TPresenter;
    }
}
and this is the Presenter code:
public class Presenter<T> where T : IView
{
    public T View { get; private set; }

    public Presenter(T view)
    {
        this.View = view;
    }
}
I'd like to pass the current instance of ViewPage to TPresenter via Windsor instead of having it instantiate a new object.
How can I achieve this? Thanks.
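A: One possible approach (a minimal sketch, assuming a Windsor version that exposes Kernel.AddComponentInstance): register the already-created page instance as the TView service before resolving the presenter, instead of registering the type:
public ViewPage()
{
    if (!(this is TView))
        throw new Exception(String.Format("The view must be of type {0}", typeof(TView)));

    IWindsorContainer container = new WindsorContainer();
    // Register this existing page object as the view service, so Windsor
    // injects it rather than constructing a fresh TView.
    container.Kernel.AddComponentInstance("view", typeof(TView), this);
    container.AddComponent("presenter", typeof(Presenter<TView>), typeof(TPresenter));
    TPresenter presenter = container[typeof(TPresenter)] as TPresenter;
}
The presenter's TView constructor parameter is then satisfied with the registered instance instead of a newly built one.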
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What's the default intellisense shortcut in vs2008? I'd like to open the IntelliSense window without typing a character and then backspacing it. I can't seem to remember the shortcut for this. What is it?
A: Ctrl + Space for normal Intellisense, and Ctrl + Shift + Space for parameter Intellisense (e.g. to see what overloads are available in a method call which you've actually already filled in). I find the latter very handy :)
A: Ctrl + Space
A: If you have installed MSG Plus, then the problem is the messenger lock keys; try changing them in MSG Plus and IntelliSense will work again. Good luck!
A: Ctrl + Space?
Also, go to Tools -> Options -> Environment -> Keyboard or Default Keyboard Shortcuts in Visual Studio, you can then search for commands and see what is assigned to that (and remap).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: PHP's SPL: Do its interfaces involving arrays cover all array properties? Would it be possible to write a class that is virtually indistinguishable from an actual PHP array by implementing all the necessary SPL interfaces? Are they missing anything that would be critical?
I'd like to build a more advanced Array object, but I want to make sure I wouldn't break an existing app that uses arrays everywhere if I substituted them with a custom Array class.
A: The only problems i can think of are the gettype() and the is_array() functions.
Check your code for
gettype($FakeArray) == 'array'
is_array($FakeArray)
Because although you can use the object just like an array, it will still be identified as an object.
A: In addition to the points made above, you would not be able to make user-space array type hints work with instances of your class. For example:
<?php
function f(array $a) { /*...*/ }
$ao = new ArrayObject();
f($ao); //error
?>
Output:
Catchable fatal error: Argument 1 passed to f() must be an array, object given
A: Other differences include the '+' operator for arrays (merging) and the failure of the entire family of array_* functions, including the commonly used array_merge and array_shift.
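For illustration (a sketch, not from the original answer) of how those failures look:
$ao = new ArrayObject(array(1, 2, 3));
var_dump(is_array($ao));           // bool(false)
$m = array_merge($ao, array(4));   // PHP 5: warning, $m is NULL; PHP 8: TypeError
$s = $ao + array(4);               // fatal error: unsupported operand types
When needed, ArrayObject::getArrayCopy() gives you a real array to pass along: array_merge($ao->getArrayCopy(), array(4)).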
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |