Q: Vim auto-generate ctags

Right now I have the following in my .vimrc:

au BufWritePost *.c,*.cpp,*.h !ctags -R

There are a few problems with this:

* It's slow -- it regenerates tags for files that haven't changed since the last tag generation.
* I have to push the enter button again after writing the file because of an inevitable "press Enter or type command to continue".

When you combine these two issues I end up pushing the additional enter too soon (before ctags -R has finished), then see the annoying error message, and have to push enter again. I know it doesn't sound like a big deal, but with the amount of file writes I do on a given day it tends to get really annoying. There's gotta be a better way to do it!

A: I've noticed this is an old thread, however... Use incron in *nix-like environments supporting inotify. It will always launch commands whenever files in a directory change. i.e.,

/home/me/Code/c/that_program IN_DELETE,IN_CLOSE_WRITE ctags --sort=yes *.c

That's it.

A: au BufWritePost *.c,*.cpp,*.h silent! !ctags -R &

The downside is that you won't have a useful tags file until it completes. As long as you're on a *nix system it should be ok to do multiple writes before the previous ctags has completed, but you should test that. On a Windows system it won't put it in the background and it'll complain that the file is locked until the first ctags finishes (which shouldn't cause problems in vim, but you'll end up with a slightly outdated tags file). Note, you could use the --append option as tonylo suggests, but then you'll have to disable tagbsearch, which could mean that tag searches take a lot longer, depending on the size of your tag file.

A: Perhaps use the append argument to ctags as demonstrated by: http://vim.wikia.com/wiki/Autocmd_to_update_ctags_file I can't really vouch for this as I generally use source insight for code browsing, but use vim as an editor... go figure.

A: How about having ctags scheduled to run via crontab?
If your project tree is fairly stable in its structure, that should be doable?

A: To suppress the "press enter" prompt, use :silent.

A: On OSX this command will not work out of the box, at least not for me.

au BufWritePost *.c,*.cpp,*.h silent! !ctags -R &

I found a post which explains how to get the standard ctags version that contains the -R option. This alone did not work for me. I had to add /usr/local/bin to the PATH variable in .bash_profile in order to pick up the bin where Homebrew installs programs.

A: Edit: A solution very much along the lines of the following has been posted as the AutoTag vim script. Note that the script needs a vim with Python support, however. My solution shells out to awk instead, so it should work on many more systems.

au FileType {c,cpp} au BufWritePost <buffer> silent ! [ -e tags ] &&
  \ ( awk -F'\t' '$2\!="%:gs/'/'\''/"{print}' tags ; ctags -f- '%:gs/'/'\''/' )
  \ | sort -t$'\t' -k1,1 -o tags.new && mv tags.new tags

Note that you can only write it this way in a script, otherwise it has to go on a single line. There's a lot going on in there:

* This auto-command triggers when a file has been detected to be C or C++, and adds in turn a buffer-local auto-command that is triggered by the BufWritePost event.
* It uses the % placeholder, which is replaced by the buffer's filename at execution time, together with the :gs modifier used to shell-quote the filename (by turning any embedded single-quotes into quote-escape-quote-quote).
* That way it runs a shell command that checks if a tags file exists, in which case its content is printed except for the lines that refer to the just-saved file; meanwhile ctags is invoked on just the just-saved file, and the result is then sorted and put back into place.

Caveat implementor: this assumes everything is in the same directory and that that is also the buffer-local current directory. I have not given any thought to path mangling.
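For readers who want to see the grep-and-sort merge outside of vimscript, here is a hedged shell sketch of the same incremental update. The file names are invented sample data, and the real `ctags -f- file.c` call is stood in by a printf so the sketch runs even where ctags is not installed:

```shell
# Keep every tags entry except those for the just-saved file, regenerate
# entries for that file alone, and re-sort the merged result.
TAB=$(printf '\t')
printf 'old_func\tfile.c\t/^old_func/\nzebra\tlib.c\t/^zebra/\n' > tags
{
  grep -v "${TAB}file.c${TAB}" tags          # drop the stale entries for file.c
  printf 'new_func\tfile.c\t/^new_func/\n'   # stand-in for: ctags -f- file.c
} | sort -t "$TAB" -k1,1 > tags.new && mv tags.new tags
cat tags    # new_func and zebra remain; old_func is gone
```

The sort by the first (tag name) field is what keeps binary tag search (tagbsearch) usable after the merge.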
A: I wrote easytags.vim to do just this: automatically update and highlight tags. The plug-in can be configured to update just the file being edited or all files in the directory of the file being edited (recursively). It can use a global tags file, file type specific tags files and project specific tags files. A: The --append option is indeed the way to go. Used with a grep -v, we can update only one tagged file. For instance, here is an excerpt of an unpolished plugin that addresses this issue. (NB: It will require an "external" library plugin) " Options {{{1 let g:tags_options_cpp = '--c++-kinds=+p --fields=+imaS --extra=+q' function! s:CtagsExecutable() let tags_executable = lh#option#Get('tags_executable', s:tags_executable, 'bg') return tags_executable endfunction function! s:CtagsOptions() let ctags_options = lh#option#Get('tags_options_'.&ft, '') let ctags_options .= ' '.lh#option#Get('tags_options', '', 'wbg') return ctags_options endfunction function! s:CtagsDirname() let ctags_dirname = lh#option#Get('tags_dirname', '', 'b').'/' return ctags_dirname endfunction function! s:CtagsFilename() let ctags_filename = lh#option#Get('tags_filename', 'tags', 'bg') return ctags_filename endfunction function! s:CtagsCmdLine(ctags_pathname) let cmd_line = s:CtagsExecutable().' '.s:CtagsOptions().' 
-f '.a:ctags_pathname return cmd_line endfunction " ###################################################################### " Tag generating functions {{{1 " ====================================================================== " Interface {{{2 " ====================================================================== " Mappings {{{3 " inoremap <expr> ; <sid>Run('UpdateTags_for_ModifiedFile',';') nnoremap <silent> <Plug>CTagsUpdateCurrent :call <sid>UpdateCurrent()<cr> if !hasmapto('<Plug>CTagsUpdateCurrent', 'n') nmap <silent> <c-x>tc <Plug>CTagsUpdateCurrent endif nnoremap <silent> <Plug>CTagsUpdateAll :call <sid>UpdateAll()<cr> if !hasmapto('<Plug>CTagsUpdateAll', 'n') nmap <silent> <c-x>ta <Plug>CTagsUpdateAll endif " ====================================================================== " Auto command for automatically tagging a file when saved {{{3 augroup LH_TAGS au! autocmd BufWritePost,FileWritePost * if ! lh#option#Get('LHT_no_auto', 0) | call s:Run('UpdateTags_for_SavedFile') | endif aug END " ====================================================================== " Internal functions {{{2 " ====================================================================== " generate tags on-the-fly {{{3 function! UpdateTags_for_ModifiedFile(ctags_pathname) let source_name = expand('%') let temp_name = tempname() let temp_tags = tempname() " 1- purge old references to the source name if filereadable(a:ctags_pathname) " it exists => must be changed call system('grep -v " '.source_name.' " '.a:ctags_pathname.' > '.temp_tags. \ ' && mv -f '.temp_tags.' '.a:ctags_pathname) endif " 2- save the unsaved contents of the current file call writefile(getline(1, '$'), temp_name, 'b') " 3- call ctags, and replace references to the temporary source file to the " real source file let cmd_line = s:CtagsCmdLine(a:ctags_pathname).' '.source_name.' --append' let cmd_line .= ' && sed "s#\t'.temp_name.'\t#\t'.source_name.'\t#" > '.temp_tags let cmd_line .= ' && mv -f '.temp_tags.' 
'.a:ctags_pathname call system(cmd_line) call delete(temp_name) return ';' endfunction " ====================================================================== " generate tags for all files {{{3 function! s:UpdateTags_for_All(ctags_pathname) call delete(a:ctags_pathname) let cmd_line = 'cd '.s:CtagsDirname() " todo => use project directory " let cmd_line .= ' && '.s:CtagsCmdLine(a:ctags_pathname).' -R' echo cmd_line call system(cmd_line) endfunction " ====================================================================== " generate tags for the current saved file {{{3 function! s:UpdateTags_for_SavedFile(ctags_pathname) let source_name = expand('%') let temp_tags = tempname() if filereadable(a:ctags_pathname) " it exists => must be changed call system('grep -v " '.source_name.' " '.a:ctags_pathname.' > '.temp_tags.' && mv -f '.temp_tags.' '.a:ctags_pathname) endif let cmd_line = 'cd '.s:CtagsDirname() let cmd_line .= ' && ' . s:CtagsCmdLine(a:ctags_pathname).' --append '.source_name " echo cmd_line call system(cmd_line) endfunction " ====================================================================== " (public) Run a tag generating function {{{3 function! LHTagsRun(tag_function) call s:Run(a:tag_function) endfunction " ====================================================================== " (private) Run a tag generating function {{{3 " See this function as a /template method/. function! s:Run(tag_function) try let ctags_dirname = s:CtagsDirname() if strlen(ctags_dirname)==1 throw "tags-error: empty dirname" endif let ctags_filename = s:CtagsFilename() let ctags_pathname = ctags_dirname.ctags_filename if !filewritable(ctags_dirname) && !filewritable(ctags_pathname) throw "tags-error: ".ctags_pathname." cannot be modified" endif let Fn = function("s:".a:tag_function) call Fn(ctags_pathname) catch /tags-error:/ " call lh#common#ErrorMsg(v:exception) return 0 finally endtry echo ctags_pathname . ' updated.' return 1 endfunction function! 
s:Irun(tag_function, res) call s:Run(a:tag_function) return a:res endfunction " ====================================================================== " Main function for updating all tags {{{3 function! s:UpdateAll() let done = s:Run('UpdateTags_for_All') endfunction " Main function for updating the tags from one file {{{3 " @note the file may be saved or "modified". function! s:UpdateCurrent() if &modified let done = s:Run('UpdateTags_for_ModifiedFile') else let done = s:Run('UpdateTags_for_SavedFile') endif endfunction

This code defines:

* ^Xta to force the update of the tags base for all the files in the current project;
* ^Xtc to force the update of the tags base for the current (unsaved) file;
* an autocommand that updates the tags base every time a file is saved;

and it supports many options to disable the automatic update where we don't want it, to tune ctags calls depending on filetypes, ... It is not just a tip, but a small excerpt of a plugin. HTH,

A: There is a vim plugin called AutoTag for this that works really well. If you have taglist installed it will also update that for you.

A: In my opinion, the Indexer plugin is better: http://www.vim.org/scripts/script.php?script_id=3221 It can be: 1) an add-on for project.tar.gz 2) an independent plugin

* background tags generation (you don't have to wait while ctags works)
* multiple projects supported

A: Auto Tag is a vim plugin that updates existing tag files on save. I've been using it for years without problems, with the exception that it enforces a maximum size on the tags files. Unless you have a really large set of code all indexed in the same tags file, you shouldn't hit that limit, though. Note that Auto Tag requires Python support in vim.
{ "language": "en", "url": "https://stackoverflow.com/questions/155449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: C# object is not null but (myObject != null) still returns false

I need to do a comparison between an object and NULL. When the object is not NULL I fill it with some data. Here is the code:

if (region != null)
{
    ....
}

This is working, but when looping and looping sometimes the region object is NOT null (I can see data inside it in debug mode). In step-by-step debugging, it doesn't go inside the IF statement... When I do a Quick Watch with the following expressions, I see (region == null) return false, AND (region != null) return false too... why and how?

Update: someone pointed out that == and != were overloaded on the object:

public static bool operator ==(Region r1, Region r2)
{
    if (object.ReferenceEquals(r1, null))
    {
        return false;
    }
    if (object.ReferenceEquals(r2, null))
    {
        return false;
    }
    return (r1.Cmr.CompareTo(r2.Cmr) == 0 && r1.Id == r2.Id);
}

public static bool operator !=(Region r1, Region r2)
{
    if (object.ReferenceEquals(r1, null))
    {
        return false;
    }
    if (object.ReferenceEquals(r2, null))
    {
        return false;
    }
    return (r1.Cmr.CompareTo(r2.Cmr) != 0 || r1.Id != r2.Id);
}

A: Both of the overloads are incorrect.

public static bool operator ==(Region r1, Region r2)
{
    if (object.ReferenceEquals(r1, null))
    {
        return false;
    }
    if (object.ReferenceEquals(r2, null))
    {
        return false;
    }
    return (r1.Cmr.CompareTo(r2.Cmr) == 0 && r1.Id == r2.Id);
}

If r1 and r2 are both null, the first test (object.ReferenceEquals(r1, null)) will return false, even though r2 is also null. Try:

//ifs expanded a bit for readability
public static bool operator ==(Region r1, Region r2)
{
    if( (object)r1 == null && (object)r2 == null)
    {
        return true;
    }
    if( (object)r1 == null || (object)r2 == null)
    {
        return false;
    }
    //btw - a quick shortcut here is also object.ReferenceEquals(r1, r2)
    return (r1.Cmr.CompareTo(r2.Cmr) == 0 && r1.Id == r2.Id);
}

A: This can sometimes happen when you have multiple threads working with the same data.
If this is the case, you can use a lock to prevent them from messing with each other.

A: Is the == and/or != operator overloaded for the region object's class? Now that you've posted the code: the overloads should probably look like the following (code taken from postings made by Jon Skeet and Philip Rieck):

public static bool operator ==(Region r1, Region r2)
{
    if (object.ReferenceEquals(r1, r2))
    {
        // handles if both are null as well as object identity
        return true;
    }
    if ((object)r1 == null || (object)r2 == null)
    {
        return false;
    }
    return (r1.Cmr.CompareTo(r2.Cmr) == 0 && r1.Id == r2.Id);
}

public static bool operator !=(Region r1, Region r2)
{
    return !(r1 == r2);
}

A: For equality comparison of a type "T", overload these methods:

int GetHashCode()                  //Overrides Object.GetHashCode
bool Equals(object other)          //Overrides Object.Equals
bool Equals(T other)               //Implements IEquatable<T>; do this for each T you want to compare to
static bool operator ==(T x, T y)
static bool operator !=(T x, T y)

Your type-specific comparison code should be done in one place: the type-safe IEquatable<T> interface method Equals(T other). If you're comparing to another type (T2), implement IEquatable<T2> as well, and put the field comparison code for that type in Equals(T2 other). All overloaded methods and operators should forward the equality comparison task to the main type-safe Equals(T other) instance method, such that a clean dependency hierarchy is maintained and stricter guarantees are introduced at each level to eliminate redundancy and unessential complexity.

bool Equals(object other)
{
    if (other is T)  //replicate this for each IEquatable<T2>, IEquatable<T3>, etc. you may implement
        return Equals( (T)other );  //forward to IEquatable<T> implementation
    return false;  //other is null or cannot be compared to this instance; therefore it is not equal
}

bool Equals(T other)
{
    if ((object)other == null)  //cast to object for reference equality comparison, or use object.ReferenceEquals
        return false;
    //if ((object)other == this)  //possible performance boost, ONLY if object instance is frequently compared to itself! otherwise it's just an extra useless check
    //    return true;
    return field1.Equals( other.field1 )
        && field2.Equals( other.field2 );  //compare type fields to determine equality
}

public static bool operator ==( T x, T y )
{
    if ((object)x != null)  //cast to object for reference equality comparison, or use object.ReferenceEquals
        return x.Equals( y );  //forward to type-safe Equals on non-null instance x
    if ((object)y != null)
        return false;  //x was null, y is not null
    return true;  //both null
}

public static bool operator !=( T x, T y )
{
    if ((object)x != null)
        return !x.Equals( y );  //forward to type-safe Equals on non-null instance x
    if ((object)y != null)
        return true;  //x was null, y is not null
    return false;  //both null
}

Discussion: The preceding implementation centralizes the type-specific (i.e. field equality) comparison in the IEquatable<T> implementation for the type. The == and != operators have a parallel but opposite implementation. I prefer this over having one reference the other, such that there is an extra method call for the dependent one. If the != operator is simply going to call the == operator, rather than offer an equally performing operator, then you may as well just use !(obj1 == obj2) and avoid the extra method call. The comparison-to-self is left out of the equals operator and the IEquatable<T> implementations, because it can introduce 1. unnecessary overhead in some cases, and/or 2. inconsistent performance depending on how often an instance is compared to itself vs other instances.
An alternative I don't like, but should mention, is to reverse this setup, centralizing the type-specific equality code in the equality operator instead and having the Equals methods depend on that. One could then use the shortcut of ReferenceEquals(obj1, obj2) to check for reference equality and null equality simultaneously, as Philip mentioned in an earlier post, but that idea is misleading. It seems like you're killing two birds with one stone, but you're actually creating more work -- after determining that the objects are neither both null nor the same instance, you will then, in addition, still have to go on to check whether each instance is null. In my implementation, you check for any single instance being null exactly once. By the time the Equals instance method is called, it's already ruled out that the first object being compared is null, so all that's left to do is check whether the other is null. So after at most two comparisons, we jump directly into the field checking, no matter which method we use (Equals(object), Equals(T), ==, !=). Also, as I mentioned, if you really are comparing an object to itself the majority of the time, then you could add that check in the Equals method just before diving into the field comparisons. The point in adding it last is that you can still maintain the flow/dependency hierarchy without introducing a redundant/useless check at every level.

A: Those operator overloads are broken. Firstly, it makes life a lot easier if != is implemented by just calling == and inverting the result. Secondly, before the nullity checks in == there should be:

if (object.ReferenceEquals(r1, r2))
{
    return true;
}

A: So is it that these checks here are not right:

public static bool operator !=(Region r1, Region r2)
{
    if (object.ReferenceEquals(r1, null))
    {
        return false;
    }
    if (object.ReferenceEquals(r2, null))
    {
        return false;
    }
    ...

A: There's another possibility: you may need to click the refresh icon next to the parameter that you're watching.
VS tries to keep performance up by not evaluating every statement/parameter automatically. Take a look to make sure, before you start making changes in places that aren't relevant.

A:

bool comp = true;
if (object.ReferenceEquals(r1, null))
{
    comp = false;
}
if (object.ReferenceEquals(r2, null))
{
    comp = false;
}
return comp;
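The symptom is not specific to C#. As a hedged illustration, here is a small Python analog (a hypothetical Region class, not the poster's code) with the same broken logic, where a non-null object answers false to both == None and != None:

```python
class Region:
    """Reproduces the broken C# overloads: both == and != bail out with
    False whenever the other operand is None (null)."""

    def __init__(self, cmr, id):
        self.cmr, self.id = cmr, id

    def __eq__(self, other):
        if other is None:          # mirrors object.ReferenceEquals(r2, null)
            return False
        return (self.cmr, self.id) == (other.cmr, other.id)

    def __ne__(self, other):
        if other is None:          # bug: != should answer True here
            return False
        return (self.cmr, self.id) != (other.cmr, other.id)

region = Region("A", 1)
print(region == None)  # False -- looks right...
print(region != None)  # False -- ...but so is this, so `if region != None:` never fires
```

The fix, as in the accepted C# answer, is for the inequality operator to be the negation of equality rather than a second hand-written comparison.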
{ "language": "en", "url": "https://stackoverflow.com/questions/155458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: LINQ Submit Changes not submitting changes I'm using LINQ to SQL and C#. I have two LINQ classes: User and Network. User has UserID (primary key) and NetworkID Network has NetworkID (primary key) and an AdminID (a UserID) The following code works fine: user.Network.AdminID = 0; db.SubmitChanges(); However, if I access the AdminID before making the change, the change never happens to the DB. So the following doesn't work: if(user.Network.AdminID == user.UserID) { user.Network.AdminID = 0; db.SubmitChanges(); } It is making it into the if statement and calling submit changes. For some reason, the changes to AdminID never make it to the DB. No error thrown, the change just never 'takes'. Any idea what could be causing this? Thanks. A: I just ran a quick test and it works fine for me. I hate to ask this, but are you sure the if statement ever returns true? It could be you're just not hitting the code which changes the value. Other than that we might need more info. What are the properties of that member? Have you traced into the set statement to ensure the value is getting set before calling SubmitChanges? Does the Linq entity have the new value after SubmitChanges? Or do both the database AND the Linq entity fail to take the new value? In short, that code should work... so something else somewhere is probably wrong. A: Here's the original post. Here's a setter generated by the LinqToSql designer. Code Snippet { Contact previousValue = this._Contact.Entity; if (((previousValue != value) || (this._Contact.HasLoadedOrAssignedValue == false))) { this.SendPropertyChanging(); if ((previousValue != null)) { this._Contact.Entity = null; previousValue.ContactEvents.Remove(this); } this._Contact.Entity = value; if ((value != null)) { value.ContactEvents.Add(this); this._ContactID = value.ID; } else { this._ContactID = default(int); } this.SendPropertyChanged("Contact"); } } This line sets the child's property to the parent. 
this._Contact.Entity = value; This line adds the child to the parent's collection value.ContactEvents.Add(this); The setter for the ID does not have this second line. So, with the autogenerated entities... This code produces an unexpected behavior: myContactEvent.ContactID = myContact.ID; This code is good: myContactEvent.Contact = myContact; This code is also good: myContact.ContactEvents.Add(myContactEvent); A: I had this issue. The reason was one dumb line of code: DBDataContext db { get { return new DBDataContext(); } } obviously it should be: DBDataContext db = new DBDataContext();
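To see why assigning the relationship property behaves differently from assigning the raw ID, here is a hedged Python analog of the generated setter (hypothetical classes, not LINQ-to-SQL itself): the property setter keeps both sides of the relationship in sync, while writing the foreign-key field alone does not.

```python
class Contact:
    def __init__(self, id):
        self.id = id
        self.contact_events = []   # the parent's child collection

class ContactEvent:
    def __init__(self):
        self._contact = None
        self.contact_id = None     # the raw foreign-key column

    @property
    def contact(self):
        return self._contact

    @contact.setter
    def contact(self, value):
        # Mirrors the designer-generated setter: detach from the old
        # parent, attach to the new one, and sync the FK column.
        if self._contact is not None:
            self._contact.contact_events.remove(self)
        self._contact = value
        if value is not None:
            value.contact_events.append(self)
            self.contact_id = value.id

contact = Contact(7)
event = ContactEvent()

event.contact_id = contact.id   # FK column set, but no relationship tracked
assert event.contact is None and event not in contact.contact_events

event.contact = contact         # both sides now agree
assert event.contact_id == 7 and event in contact.contact_events
```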
{ "language": "en", "url": "https://stackoverflow.com/questions/155460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is "for" in Ruby

In Ruby:

for i in A do
  # some code
end

is the same as:

A.each do |i|
  # some code
end

for is not a kernel method:

* What exactly is "for" in ruby
* Is there a way to use other keywords to do similar things? Something like:

total = sum i in I {x[i]}

mapping to:

total = I.sum {|i| x[i]}

A: For is just syntactic sugar. From the pickaxe:

For ... In

Earlier we said that the only built-in Ruby looping primitives were while and until. What's this ``for'' thing, then? Well, for is almost a lump of syntactic sugar. When you write

for aSong in songList
  aSong.play
end

Ruby translates it into something like:

songList.each do |aSong|
  aSong.play
end

The only difference between the for loop and the each form is the scope of local variables that are defined in the body. This is discussed on page 87. You can use for to iterate over any object that responds to the method each, such as an Array or a Range.

for i in ['fee', 'fi', 'fo', 'fum']
  print i, " "
end
for i in 1..3
  print i, " "
end
for i in File.open("ordinal").find_all { |l| l =~ /d$/}
  print i.chomp, " "
end

produces:

fee fi fo fum 1 2 3 second third

As long as your class defines a sensible each method, you can use a for loop to traverse it.

class Periods
  def each
    yield "Classical"
    yield "Jazz"
    yield "Rock"
  end
end

periods = Periods.new
for genre in periods
  print genre, " "
end

produces:

Classical Jazz Rock

Ruby doesn't have other keywords for list comprehensions (like the sum example you made above). for isn't a terribly popular keyword, and the method syntax ( arr.each {} ) is generally preferred.

A: It's almost syntax sugar. One difference is that, while for would use the scope of the code around it, each creates a separate scope within its block. Compare the following:

for i in (1..3)
  x = i
end
p x # => 3

versus

(1..3).each do |i|
  x = i
end
p x # => undefined local variable or method `x' for main:Object

A: for is just syntax sugar for the each method.
This can be seen by running this code:

for i in 1 do
end

This results in the error:

NoMethodError: undefined method `each' for 1:Fixnum
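As for the `total = sum i in I {x[i]}` syntax the question wished for: Ruby has no extra comprehension keywords, but the block form does the job directly. A small sketch (x and the index list are made-up sample data):

```ruby
# The question's imagined `sum i in I {x[i]}` is spelled with an ordinary
# Enumerable call plus a block -- no new keyword needed.
x   = { 0 => 10, 1 => 20, 2 => 30 }
idx = [0, 2]
total = idx.inject(0) { |acc, i| acc + x[i] }
# total == 40  (Ruby >= 2.4 can also write: idx.sum { |i| x[i] })
```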
{ "language": "en", "url": "https://stackoverflow.com/questions/155462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Extending chart functionality in SSRS

The default chart object in the SQL Server (2005) Business Intelligence Development Studio doesn't seem to have some formatting options like:

* specifying the text direction of labels in the x and y axis
* adding a trendline to a bar chart
* arbitrarily resizing items in a chart - for example, if I resize the chart object, everything gets resized accordingly, but I can't keep the size of the chart the same while extending the area of the legend, for instance.
* multiline chart labels

So what I want to know is:

* is there any easy answer to the formatting problems mentioned above?
* what websites/books/resources/examples would you recommend I look into for extending the functionality of the chart object?

A: Some colleagues of mine gave up on the stock control and bought Dundas charts. The stock charts are cut-down versions of Dundas.

A: Yes, you can specify the text direction of labels in the x and y axis:

* Go to chart properties, and in the x and y axis tab enter the chart title; in the title align field use left/right/center alignment.
* You can change the legend: go to chart properties, click the legend tab, and inside it there is an option for "display legend inside plot area"; you can include the trendline there.
* You can use multiline text labels when the text exceeds its limits.

A: I don't see how you can resize the legend; putting it inside the plot area looks ugly for a pie chart.

A: I'd recommend going with the Dundas chart components gbn suggested. If that's not possible, at least this article should solve issue 1.
{ "language": "en", "url": "https://stackoverflow.com/questions/155484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: OS-independent API to monitor file system?

I would like to experiment with ideas about distributed file synchronization/replication. To make it efficient when the user is working, I would like to implement some kind of daemon to monitor changes in some directory (e.g. /home/user/dirToBeMonitored or c:\docs and setts\user\dirToBeMonitored). So I would be able to know which filename was added/changed/deleted at any time (or within a reasonable interval). Is this possible with any high- or medium-level language? Do you know of an API (and in which language?) to do this? Thanks.

A: A bona fide answer, albeit one that requires a largish library dependency (well worth it IMO)! Qt provides the QFileSystemWatcher class, which uses the native mechanism of the underlying platform. Even better, you can use the Qt language bindings for Python or Ruby. Here is a simple PyQt4 application which uses QFileSystemWatcher.

Notes:

* A good reference on creating deployable PyQt4 apps, especially on OSX but should work for Windows also.
* Same solution previously posted here.
* Other cross-platform toolkits may also do the trick (for example Gnome's GIO has GFileMonitor, although it is UNIX only and doesn't support OSX's FSEvents mechanism afaik).

A: The APIs are totally different for Windows, Linux, Mac OS X, and any other Unix you can name, it seems. I don't know of any cross-platform library that handles this in a consistent way.

A: In Linux it is called inotify.

A: And on OS X it's called fsevents. It's an OS-level API, so it's easiest to access from C or C++. It should be accessible from nearly any language, although bindings for your preferred language may not have been written yet.
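If none of the native APIs above has a binding for your language, one portable (if coarse) fallback is to poll and diff directory snapshots. A hedged Python sketch of that idea follows; it is not one of the OS mechanisms mentioned above, and it is much less efficient than them, but it runs anywhere the standard library does:

```python
import os

def snapshot(root):
    """Map every file path under root to its last-modified time."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file disappeared between walk() and stat()
    return state

def diff(before, after):
    """Return (added, changed, deleted) path sets between two snapshots."""
    added = set(after) - set(before)
    deleted = set(before) - set(after)
    changed = {p for p in set(before) & set(after) if before[p] != after[p]}
    return added, changed, deleted
```

A daemon would call `snapshot` on a timer and feed consecutive snapshots to `diff`; note that mtime-based change detection can miss sub-second rewrites on filesystems with coarse timestamp resolution.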
{ "language": "en", "url": "https://stackoverflow.com/questions/155490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: download mail attachment with Java I had a look in the reference doc, and Spring seems to have pretty good support for sending mail. However, I need to login to a mail account, read the messages, and download any attachments. Is downloading mail attachments supported by the Spring mail API? I know you can do this with the Java Mail API, but in the past I've found that very verbose and unpleasant to work with. EDIT: I've received several replies pointing towards tutorials that describe how to send mail with attachments, but what I'm asking about is how to read attachments from received mail. Cheers, Don A: Here is an error: else if ((disposition != null) && (disposition.equals(Part.ATTACHMENT) || disposition.equals(Part.INLINE) ) it should be: else if ((disposition.equalsIgnoreCase(Part.ATTACHMENT) || disposition.equalsIgnoreCase(Part.INLINE)) Thanks @Stevenmcherry for your answer A: Here's the class that I use for downloading e-mails (with attachment handling). You'll have to glance by some of the stuff it's doing (like ignore the logging classes and database writes). I've also re-named some of the packages for ease of reading. The general idea is that all attachments are saved as individual files in the filesystem, and each e-mail is saved as a record in the database with a set of child records that point to all of the attachment file paths. Focus on the doEMailDownload method. /** * Copyright (c) 2008 Steven M. Cherry * All rights reserved. 
*/ package utils.scheduled; import java.io.BufferedOutputStream; import java.io.File; import java.io.FileOutputStream; import java.io.InputStream; import java.sql.Timestamp; import java.util.Properties; import java.util.Vector; import javax.mail.Address; import javax.mail.Flags; import javax.mail.Folder; import javax.mail.Message; import javax.mail.Multipart; import javax.mail.Part; import javax.mail.Session; import javax.mail.Store; import javax.mail.internet.MimeBodyPart; import glob.ActionLogicImplementation; import glob.IOConn; import glob.log.Log; import logic.utils.sql.Settings; import logic.utils.sqldo.EMail; import logic.utils.sqldo.EMailAttach; /** * This will connect to our incoming e-mail server and download any e-mails * that are found on the server. The e-mails will be stored for further processing * in our internal database. Attachments will be written out to separate files * and then referred to by the database entries. This is intended to be run by * the scheduler every minute or so. * * @author Steven M. 
Cherry */ public class DownloadEMail implements ActionLogicImplementation { protected String receiving_host; protected String receiving_user; protected String receiving_pass; protected String receiving_protocol; protected boolean receiving_secure; protected String receiving_attachments; /** This will run our logic */ public void ExecuteRequest(IOConn ioc) throws Exception { Log.Trace("Enter"); Log.Debug("Executing DownloadEMail"); ioc.initializeResponseDocument("DownloadEMail"); // pick up our configuration from the server: receiving_host = Settings.getValue(ioc, "server.email.receiving.host"); receiving_user = Settings.getValue(ioc, "server.email.receiving.username"); receiving_pass = Settings.getValue(ioc, "server.email.receiving.password"); receiving_protocol = Settings.getValue(ioc, "server.email.receiving.protocol"); String tmp_secure = Settings.getValue(ioc, "server.email.receiving.secure"); receiving_attachments = Settings.getValue(ioc, "server.email.receiving.attachments"); // sanity check on the parameters: if(receiving_host == null || receiving_host.length() == 0){ ioc.SendReturn(); ioc.Close(); Log.Trace("Exit"); return; // no host defined. } if(receiving_user == null || receiving_user.length() == 0){ ioc.SendReturn(); ioc.Close(); Log.Trace("Exit"); return; // no user defined. } if(receiving_pass == null || receiving_pass.length() == 0){ ioc.SendReturn(); ioc.Close(); Log.Trace("Exit"); return; // no pass defined. 
} if(receiving_protocol == null || receiving_protocol.length() == 0){ Log.Debug("EMail receiving protocol not defined, defaulting to POP"); receiving_protocol = "POP"; } if(tmp_secure == null || tmp_secure.length() == 0 || tmp_secure.compareToIgnoreCase("false") == 0 || tmp_secure.compareToIgnoreCase("no") == 0 ){ receiving_secure = false; } else { receiving_secure = true; } if(receiving_attachments == null || receiving_attachments.length() == 0){ Log.Debug("EMail receiving attachments not defined, defaulting to ./email/attachments/"); receiving_attachments = "./email/attachments/"; } // now do the real work. doEMailDownload(ioc); ioc.SendReturn(); ioc.Close(); Log.Trace("Exit"); } protected void doEMailDownload(IOConn ioc) throws Exception { // Create empty properties Properties props = new Properties(); // Get the session Session session = Session.getInstance(props, null); // Get the store Store store = session.getStore(receiving_protocol); store.connect(receiving_host, receiving_user, receiving_pass); // Get folder Folder folder = store.getFolder("INBOX"); folder.open(Folder.READ_WRITE); try { // Get directory listing Message messages[] = folder.getMessages(); for (int i=0; i < messages.length; i++) { // get the details of the message: EMail email = new EMail(); email.fromaddr = messages[i].getFrom()[0].toString(); Address[] to = messages[i].getRecipients(Message.RecipientType.TO); email.toaddr = ""; for(int j = 0; j < to.length; j++){ email.toaddr += to[j].toString() + "; "; } Address[] cc; try { cc = messages[i].getRecipients(Message.RecipientType.CC); } catch (Exception e){ Log.Warn("Exception retrieving CC addrs: %s", e.getLocalizedMessage()); cc = null; } email.cc = ""; if(cc != null){ for(int j = 0; j < cc.length; j++){ email.cc += cc[j].toString() + "; "; } } email.subject = messages[i].getSubject(); if(messages[i].getReceivedDate() != null){ email.received_when = new Timestamp(messages[i].getReceivedDate().getTime()); } else { email.received_when = new 
Timestamp( (new java.util.Date()).getTime()); } email.body = ""; Vector<EMailAttach> vema = new Vector<EMailAttach>(); Object content = messages[i].getContent(); if(content instanceof java.lang.String){ email.body = (String)content; } else if(content instanceof Multipart){ Multipart mp = (Multipart)content; for (int j=0; j < mp.getCount(); j++) { Part part = mp.getBodyPart(j); String disposition = part.getDisposition(); if (disposition == null) { // Check if plain MimeBodyPart mbp = (MimeBodyPart)part; if (mbp.isMimeType("text/plain")) { Log.Debug("Mime type is plain"); email.body += (String)mbp.getContent(); } else { Log.Debug("Mime type is not plain"); // Special non-attachment cases here of // image/gif, text/html, ... EMailAttach ema = new EMailAttach(); ema.name = decodeName(part.getFileName()); File savedir = new File(receiving_attachments); savedir.mkdirs(); File savefile = File.createTempFile("emailattach", ".atch", savedir ); ema.path = savefile.getAbsolutePath(); ema.size = part.getSize(); vema.add(ema); ema.size = saveFile(savefile, part); } } else if ((disposition != null) && (disposition.equals(Part.ATTACHMENT) || disposition.equals(Part.INLINE) ) ){ // Check if plain MimeBodyPart mbp = (MimeBodyPart)part; if (mbp.isMimeType("text/plain")) { Log.Debug("Mime type is plain"); email.body += (String)mbp.getContent(); } else { Log.Debug("Save file (%s)", part.getFileName() ); EMailAttach ema = new EMailAttach(); ema.name = decodeName(part.getFileName()); File savedir = new File(receiving_attachments); savedir.mkdirs(); File savefile = File.createTempFile("emailattach", ".atch", savedir ); ema.path = savefile.getAbsolutePath(); ema.size = part.getSize(); vema.add(ema); ema.size = saveFile( savefile, part); } } } } // Insert everything into the database: logic.utils.sql.EMail.insertEMail(ioc, email); for(int j = 0; j < vema.size(); j++){ vema.get(j).emailid = email.id; logic.utils.sql.EMail.insertEMailAttach(ioc, vema.get(j) ); } // commit this message and 
all of its attachments ioc.getDBConnection().commit(); // Finally delete the message from the server. messages[i].setFlag(Flags.Flag.DELETED, true); } // Close connection folder.close(true); // true tells the mail server to expunge deleted messages. store.close(); } catch (Exception e){ folder.close(true); // true tells the mail server to expunge deleted messages. store.close(); throw e; } } protected int saveFile(File saveFile, Part part) throws Exception { BufferedOutputStream bos = new BufferedOutputStream( new FileOutputStream(saveFile) ); byte[] buff = new byte[2048]; InputStream is = part.getInputStream(); int ret = 0, count = 0; while( (ret = is.read(buff)) > 0 ){ bos.write(buff, 0, ret); count += ret; } bos.close(); is.close(); return count; } protected String decodeName( String name ) throws Exception { if(name == null || name.length() == 0){ return "unknown"; } String ret = java.net.URLDecoder.decode( name, "UTF-8" ); // also check for a few other things in the string: ret = ret.replaceAll("=\\?utf-8\\?q\\?", ""); ret = ret.replaceAll("\\?=", ""); ret = ret.replaceAll("=20", " "); return ret; } } A: I reworked Steven's example a little and removed the parts of the code specific to Steven. My code won't read the body of an email if it has attachments. That is fine for my case, but you may want to refine it further for yours.
package utils; import java.io.BufferedOutputStream; import java.io.File; import java.io.FileOutputStream; import java.io.InputStream; import java.util.ArrayList; import java.util.Date; import java.util.List; import java.util.Properties; import javax.mail.Address; import javax.mail.Flags; import javax.mail.Folder; import javax.mail.Message; import javax.mail.Multipart; import javax.mail.Part; import javax.mail.Session; import javax.mail.Store; import javax.mail.internet.MimeBodyPart; public class IncomingMail { public static List<Email> downloadPop3(String host, String user, String pass, String downloadDir) throws Exception { List<Email> emails = new ArrayList<Email>(); // Create empty properties Properties props = new Properties(); // Get the session Session session = Session.getInstance(props, null); // Get the store Store store = session.getStore("pop3"); store.connect(host, user, pass); // Get folder Folder folder = store.getFolder("INBOX"); folder.open(Folder.READ_WRITE); try { // Get directory listing Message messages[] = folder.getMessages(); for (int i = 0; i < messages.length; i++) { Email email = new Email(); // from email.from = messages[i].getFrom()[0].toString(); // to list Address[] toArray = messages[i] .getRecipients(Message.RecipientType.TO); for (Address to : toArray) { email.to.add(to.toString()); } // cc list Address[] ccArray = null; try { ccArray = messages[i] .getRecipients(Message.RecipientType.CC); } catch (Exception e) { ccArray = null; } if (ccArray != null) { for (Address c : ccArray) { email.cc.add(c.toString()); } } // subject email.subject = messages[i].getSubject(); // received date if (messages[i].getReceivedDate() != null) { email.received = messages[i].getReceivedDate(); } else { email.received = new Date(); } // body and attachments email.body = ""; Object content = messages[i].getContent(); if (content instanceof java.lang.String) { email.body = (String) content; } else if (content instanceof Multipart) { Multipart mp = 
(Multipart) content; for (int j = 0; j < mp.getCount(); j++) { Part part = mp.getBodyPart(j); String disposition = part.getDisposition(); if (disposition == null) { MimeBodyPart mbp = (MimeBodyPart) part; if (mbp.isMimeType("text/plain")) { // Plain email.body += (String) mbp.getContent(); } } else if ((disposition != null) && (disposition.equals(Part.ATTACHMENT) || disposition .equals(Part.INLINE))) { // Check if plain MimeBodyPart mbp = (MimeBodyPart) part; if (mbp.isMimeType("text/plain")) { email.body += (String) mbp.getContent(); } else { EmailAttachment attachment = new EmailAttachment(); attachment.name = decodeName(part.getFileName()); File savedir = new File(downloadDir); savedir.mkdirs(); // File savefile = File.createTempFile( "emailattach", ".atch", savedir); File savefile = new File(downloadDir,attachment.name); attachment.path = savefile.getAbsolutePath(); attachment.size = saveFile(savefile, part); email.attachments.add(attachment); } } } // end of multipart for loop } // end messages for loop emails.add(email); // Finally delete the message from the server. 
messages[i].setFlag(Flags.Flag.DELETED, true); } // Close connection folder.close(true); // true tells the mail server to expunge deleted messages store.close(); } catch (Exception e) { folder.close(true); // true tells the mail server to expunge deleted store.close(); throw e; } return emails; } private static String decodeName(String name) throws Exception { if (name == null || name.length() == 0) { return "unknown"; } String ret = java.net.URLDecoder.decode(name, "UTF-8"); // also check for a few other things in the string: ret = ret.replaceAll("=\\?utf-8\\?q\\?", ""); ret = ret.replaceAll("\\?=", ""); ret = ret.replaceAll("=20", " "); return ret; } private static int saveFile(File saveFile, Part part) throws Exception { BufferedOutputStream bos = new BufferedOutputStream( new FileOutputStream(saveFile)); byte[] buff = new byte[2048]; InputStream is = part.getInputStream(); int ret = 0, count = 0; while ((ret = is.read(buff)) > 0) { bos.write(buff, 0, ret); count += ret; } bos.close(); is.close(); return count; } } You also need these two helper classes package utils; import java.util.ArrayList; import java.util.Date; import java.util.List; public class Email { public Date received; public String from; public List<String> to = new ArrayList<String>(); public List<String> cc = new ArrayList<String>(); public String subject; public String body; public List<EmailAttachment> attachments = new ArrayList<EmailAttachment>(); } and package utils; public class EmailAttachment { public String name; public String path; public int size; } I used this to test the above classes package utils; import java.util.List; public class Test { public static void main(String[] args) { String host = "some host"; String user = "some user"; String pass = "some pass"; String downloadDir = "/Temp"; try { List<Email> emails = IncomingMail.downloadPop3(host, user, pass, downloadDir); for ( Email email : emails ) { System.out.println(email.from); System.out.println(email.subject); 
System.out.println(email.body); List<EmailAttachment> attachments = email.attachments; for ( EmailAttachment attachment : attachments ) { System.out.println(attachment.path+" "+attachment.name); } } } catch (Exception e) { e.printStackTrace(); } } } More info can be found at http://java.sun.com/developer/onlineTraining/JavaMail/contents.html A: I used Apache Commons Mail for this task: import java.util.List; import javax.activation.DataSource; import javax.mail.internet.MimeMessage; import org.apache.commons.mail.util.MimeMessageParser; public List<DataSource> getAttachmentList(MimeMessage message) throws Exception { msgParser = new MimeMessageParser(message); msgParser.parse(); return msgParser.getAttachmentList(); } From a DataSource object you can retrieve an InputStream (beside name and type) of the attachment (see API: http://docs.oracle.com/javase/6/docs/api/javax/activation/DataSource.html?is-external=true). A: Thus far, I've only used the JavaMail APIs (and I've been reasonably happy with them for my purposes). If the full JavaMail package is too heavy for you, the underlying transport engine can be used without the top layers of the package. The lower you go in the SMTP, POP3 and IMAP stacks, the more you have to be prepared to do for yourself. On the bright side, you'll also be able to ignore the parts that aren't required for your application. 
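The disposition-checking walk in the Java answers above is the heart of the solution, and it translates to most languages. As an illustration only (this uses Python's standard email package, not JavaMail, and the addresses and filenames are made up), here is the same logic: treat parts with no disposition as body text, and save parts marked attachment or inline:

```python
from email import message_from_bytes, policy
from email.message import EmailMessage

# Build a small multipart message standing in for one fetched from a server.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "report"
msg.set_content("See attached.")
msg.add_attachment(b"col1,col2\n1,2\n", maintype="application",
                   subtype="octet-stream", filename="report.csv")

# Re-parse from raw bytes, as a mail client would, then walk the parts:
parsed = message_from_bytes(msg.as_bytes(), policy=policy.default)
body, attachments = "", []
for part in parsed.walk():
    if part.is_multipart():
        continue                      # container part; its children hold the content
    disposition = part.get_content_disposition()  # None, "inline" or "attachment"
    if disposition in ("attachment", "inline") and part.get_filename():
        attachments.append((part.get_filename(), part.get_payload(decode=True)))
    elif part.get_content_type() == "text/plain":
        body += part.get_content()    # mirrors the "disposition == null" branch above

print(body.strip())                          # See attached.
print([name for name, _ in attachments])     # ['report.csv']
```

The same three cases appear in the Java code: plain text with no disposition goes into the body, and anything flagged as an attachment is decoded and saved.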
A: import java.io.IOException; import java.io.InputStream; import javax.mail.internet.MimeMessage; import javax.servlet.http.HttpServletRequest; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.core.io.InputStreamSource; import org.springframework.mail.javamail.JavaMailSender; import org.springframework.mail.javamail.MimeMessageHelper; import org.springframework.mail.javamail.MimeMessagePreparator; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.multipart.commons.CommonsMultipartFile; @Controller @RequestMapping("/sendEmail.do") public class SendEmailAttachController { @Autowired private JavaMailSender mailSender; @RequestMapping(method = RequestMethod.POST) public String sendEmail(HttpServletRequest request, final @RequestParam CommonsMultipartFile attachFile) { // Input here final String emailTo = request.getParameter("mailTo"); final String subject = request.getParameter("subject"); final String yourmailid = request.getParameter("yourmail"); final String message = request.getParameter("message"); // Logging System.out.println("emailTo: " + emailTo); System.out.println("subject: " + subject); System.out.println("Your mail id is: "+yourmailid); System.out.println("message: " + message); System.out.println("attachFile: " + attachFile.getOriginalFilename()); mailSender.send(new MimeMessagePreparator() { @Override public void prepare(MimeMessage mimeMessage) throws Exception { MimeMessageHelper messageHelper = new MimeMessageHelper( mimeMessage, true, "UTF-8"); messageHelper.setTo(emailTo); messageHelper.setSubject(subject); messageHelper.setReplyTo(yourmailid); messageHelper.setText(message); // Attachment with mail String attachName = attachFile.getOriginalFilename(); if (!attachFile.equals("")) { 
messageHelper.addAttachment(attachName, new InputStreamSource() { @Override public InputStream getInputStream() throws IOException { return attachFile.getInputStream(); } }); } } }); return "Result"; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/155504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I create a directory from within Emacs? How exactly can I create a new directory using Emacs? What commands do I use? (If possible, please provide an example) A: You can also run single shell commands using M-! You're basically sending a string to the command line so you don't get any nice auto-completion but it's useful if you know how to perform an action through the command line but don't know an Emacs equivalent way. M-! mkdir /path/to/new_dir A: I guess I did it the hard way earlier today. I did: M-x shell-command then mkdir -p topdir/subdir A: Ctrl+X D (C-x d) to open a directory in "dired" mode, then + to create a directory. A: You can use M-x make-directory inside of any buffer, not necessarily a dired buffer. It is a lisp function you can use as well. A: * *to create the directory dir/to/create, type: M-x make-directory RET dir/to/create RET *to create directories dir/parent1/node and dir/parent2/node, type: M-! mkdir -p dir/parent{1,2}/node RET It assumes that Emacs's inferior shell is bash/zsh or other compatible shell. *or in a Dired mode + It doesn't create nonexistent parent directories. Example: C-x d *.py RET ; shows python source files in the CWD in `Dired` mode + test RET ; create `test` directory in the CWD CWD stands for Current Working Directory. *or just create a new file with non-existing parent directories using C-x C-f and type: M-x make-directory RET RET Emacs asks to create the parent directories automatically while saving a new file in recent Emacs versions. For older version, see How to make Emacs create intermediate dirs - when saving a file? A: I came across this question while searching for how to automatically create directories in Emacs. The best answer I found was in another thread from a few years later. The answer from Victor Deryagin was exactly what I was looking for. Adding that code to your .emacs will make Emacs prompt you to create the directory when you go to save the file.
{ "language": "en", "url": "https://stackoverflow.com/questions/155507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "156" }
Q: How to convert a utf-8 string to a utf-16 string in PHP How do I convert a utf-8 string to a utf-16 string in PHP? A: You could also use iconv. It's native in PHP, but it requires that all your text is in one charset; otherwise it may discard characters. A: mbstring supports UTF-16, so you can use mb_convert_encoding.
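For anyone unsure what the conversion actually does to the bytes, here is the same transformation sketched in Python (illustration only; in PHP you would call mb_convert_encoding($s, 'UTF-16LE', 'UTF-8') as the answer above suggests):

```python
s = "héllo €"  # contains a 2-byte and a 3-byte character in UTF-8

utf8 = s.encode("utf-8")        # variable width: 1 to 3 bytes per character here
utf16 = s.encode("utf-16-le")   # 2 bytes per character for BMP text, no BOM

print(len(s), len(utf8), len(utf16))   # 7 characters, 10 UTF-8 bytes, 14 UTF-16 bytes
assert utf16.decode("utf-16-le") == s  # the conversion is lossless, round-trips cleanly
```

Note that "utf-16" without an explicit endianness usually prepends a byte-order mark; pick UTF-16LE or UTF-16BE explicitly if the consumer does not expect a BOM.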
{ "language": "en", "url": "https://stackoverflow.com/questions/155514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Cancelling a long running process in VB6.0 without DoEvents? Is it possible to cancel out of a long running process in VB6.0 without using DoEvents? For example: for i = 1 to someVeryHighNumber ' Do some work here ' ... if cancel then exit for end if next Sub btnCancel_Click() cancel = true End Sub I assume I need a "DoEvents" before the "if cancel then..." is there a better way? It's been awhile... A: No, you have to use DoEvents otherwise all UI, keyboard and Timer events will stay waiting in the queue. The only thing you can do is calling DoEvents once for every 1000 iterations or such. A: Is the "for" loop running in the GUI thread? If so, yes, you'll need a DoEvents. You may want to use a separate Thread, in which case a DoEvents would not be required. You can do this in VB6 (not simple). A: You could start it on a separate thread, but in VB6 it's a royal pain. DoEvents should work. It's a hack, but then so is VB6 (10 year VB veteran talking here, so don't down-mod me). A: Divide up the long-running task into quanta. Such tasks are often driven by a simple loop, so slice it into 10, 100, 1000, etc. iterations. Use a Timer control and each time it fires do part of the task and save its state as you go. To start, set up initial state and enable the Timer. When complete, disable the Timer and process the results. You can "tune" this by changing how much work is done per quantum. In the Timer event handler you can check for "cancel" and stop early as required. You can make it all neater by bundling the workload and Timer into a UserControl with a Completed event. A: This works well for me when I need it. It checks to see if the user has pressed the escape key to exit the loop. Note that it has a really big drawback: it will detect if the user hit the escape key on ANY application - not just yours. 
But it's a great trick in development when you want to give yourself a way to interrupt a long running loop, or a way to hold down the shift key to bypass a bit of code. Option Explicit Private Declare Function GetAsyncKeyState Lib "user32" (ByVal nVirtKey As Long) As Integer Private Sub Command1_Click() Do Label1.Caption = Now() Label1.Refresh If WasKeyPressed(vbKeyEscape) Then Exit Do Loop Label1.Caption = "Exited loop successfully" End Sub Function WasKeyPressed(ByVal plVirtualKey As Long) As Boolean If (GetAsyncKeyState(plVirtualKey) And &H8000) Then WasKeyPressed = True End Function Documentation for GetAsyncKeyState is here: http://msdn.microsoft.com/en-us/library/ms646301(VS.85).aspx A: EDIT it turns out the MSDN article is flawed and the technique DOESN'T WORK :( Here's an article on using the .NET BackgroundWorker component to run the task on another thread from within VB6. A: Here is a pretty standard scheme for asynchronous background processing in VB6. (For instance it's in Dan Appleman's book and Microsoft's VB6 samples.) You create a separate ActiveX EXE to do the work: that way the work is automatically on another thread, in a separate process (which means you don't have to worry about variables being trampled). * *The VB6 ActiveX EXE object should expose an event CheckQuitDoStuff(). This takes a ByRef Boolean called Quit. *The client calls StartDoStuff in the ActiveX EXE object. This routine starts a Timer on a hidden form and immediately returns. This unblocks the calling thread. The Timer interval is very short so the Timer event fires quickly. *The Timer event handler disables the Timer, and then calls back into the ActiveX object DoStuff method. This begins the lengthy processing. *Periodically the DoStuff method raises the CheckQuitDoStuff event. The client's event handler checks the special flag and sets Quit True if it's necessary to abort. Then DoStuff aborts the calculation and returns early if Quit is True. 
This scheme means that the client doesn't actually need to be multi-threaded, since the calling thread doesn't block while "DoStuff" is happening. The tricky part is making sure that DoStuff raises the events at appropriate intervals - too long, and you can't quit when you want to; too short, and you are slowing down DoStuff unnecessarily. Also, when DoStuff exits, it must unload the hidden form. If DoStuff does actually manage to get all the stuff done before being aborted, you can raise a different event to tell the client that the job is finished. A: Nope, you got it right, you definitely want DoEvents in your loop. If you put the DoEvents in your main loop and find that slows down processing too much, try calling the Windows API function GetQueueStatus (which is much faster than DoEvents) to quickly determine if it's even necessary to call DoEvents. GetQueueStatus tells you if there are any events to process. ' at the top: Declare Function GetQueueStatus Lib "user32" (ByVal qsFlags As Long) As Long ' then call this instead of DoEvents: Sub DoEventsIfNecessary() If GetQueueStatus(255) <> 0 Then DoEvents End Sub
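The "divide the work into quanta and poll a cancel flag between them" idea described above is not VB6-specific. A minimal Python sketch of the same cooperative-cancellation pattern (illustrative only; the function and chunk size are made up):

```python
import threading

cancel = threading.Event()  # set from a UI handler, another thread, etc.

def long_task(n, chunk=1000):
    # Do the work in small quanta, checking the cancel flag between chunks,
    # just like the Timer-sliced DoStuff / DoEvents approaches above.
    done = 0
    for start in range(0, n, chunk):
        if cancel.is_set():
            return done, True          # partial result, cancelled early
        done += min(start + chunk, n) - start  # one quantum of "work"
    return done, False

# An uncancelled run completes every iteration:
print(long_task(5000))   # (5000, False)

# Once the flag is set, the loop stops at the next quantum boundary:
cancel.set()
print(long_task(5000))   # (0, True)
```

Tuning the chunk size is the same trade-off as the Timer quantum: too large and cancellation is sluggish, too small and the bookkeeping dominates.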
{ "language": "en", "url": "https://stackoverflow.com/questions/155517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: XHTML 1.0 Strict (or Transitional) compliance in ASP.NET 2.0/3.5 Are there any good methods for getting ASP.NET 2.0 to validate under the XHTML 1.0 Strict (or Transitional) DTD? I'm interested to hear some ideas before I hack up the core of the HTTP response. One major problem is the form tag itself, this is the output I got from W3C when I tried to validate: Line 13, Column 11: there is no attribute "name". <form name="aspnetForm" method="post" action="Default.aspx" onsubmit="javascript That tag is very fundamental to ASP.NET, as you all know. Hmmmm. A: Have you considered the ASP.NET MVC Framework? It's likely to be a better bet if strict XHTML compliance is a requirement. You gain more control of your output, but you'll be treading unfamiliar territory if you're already comfortable with the traditional ASP.NET model. A: Its possible to change the output of ASP.NET controls using techniques like the CSS Adapters. Although I wouldn't personally recommend you use these out of the box, it might give you some hints on a good solution. I generally avoid using the ASP.NET controls where ever possible, except ones that don't generate markup on their own such as the Repeater control. I would look into the ASP.NET MVC framework (what StackOverflow is built on) as this gives you 100% control over markup. A: ASP.NET 2.0 and above can indeed output Strict (or Transitional) XHTML. This will resolve your 'there is no attribute "name"' validation error, amongst other things. To set this up, update your Web.config file with something like: <system.web> ... other configuration goes here ... <xhtmlConformance mode="Strict" /> </system.web> For Transitional XHTML, use mode="Transitional" instead. See How to: Configure XHTML Rendering in ASP.NET Web Sites on MSDN.
{ "language": "en", "url": "https://stackoverflow.com/questions/155532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can I tell when .Net System.Diagnostics.Process ran successfully or failed? I'm writing a scheduler or sorts. It's basically a table with a list of exes (like "C:\a.exe") and a console app that looks at the records in the table every minute or so and runs the tasks that haven't been run yet. I run the tasks like this: System.Diagnostics.Process p = new System.Diagnostics.Process(); p.StartInfo.UseShellExecute = false; p.StartInfo.FileName = someExe; // like "a.exe" p.Start(); How can I tell if a particular task failed? For example what if a.exe throws an unhandled exception? I'd like the above code to know when this happens and update the tasks table with something like "the particular task failed" etc. How can I do this? I'm not using the Sql Agent or the Windows Scheduler because someone else told me not to. He has more "experience" so I'm basically just following orders. Feel free to suggest alternatives. A: You can catch the Win32Exception to check if Process.Start() failed due to the file not existing or execute access is denied. But you can not catch exceptions thrown by the processes that you create using this class. In the first place, the application might not be written in .NET so there might not be a concept of exception at all. What you can do is check on the ExitCode of the application or read the StandardOutput and StandardError streams to check whether error messages are being posted. A: I think you're looking for Process.ExitCode, assuming the process returns one. You may need to use WaitForExit() though. There is also an ErrorDataReceived event which is triggered when an app sends to stderr. A: In addition to the ExitCode, you can also do something like this: string output = p.StandardOutput.ReadToEnd(); That will capture everything that would have been written to a command window. Then you can parse that string for known patterns for displaying errors, depending on the app. A: To expand on what @jop said. 
You will also need to wait for the process to close. Thus: p.Start(); p.WaitForExit(); int returnCode = p.ExitCode; Non-zero codes are typically errors. Some applications use negative ones as errors, and positive ones as status codes/warnings.
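The wait-then-inspect-exit-code pattern is language-neutral; for comparison, here is the same set of checks sketched in Python (illustration only, not .NET):

```python
import subprocess
import sys

# The analogue of Process.Start() + WaitForExit() + ExitCode:
ok = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"])
failing = subprocess.run([sys.executable, "-c", "raise SystemExit(3)"])

# An unhandled exception in the child surfaces as a non-zero exit code
# plus a traceback on stderr, which mirrors reading StandardError above:
crashed = subprocess.run([sys.executable, "-c", "1/0"],
                         capture_output=True, text=True)

print(ok.returncode, failing.returncode, crashed.returncode)  # 0 3 1
print("ZeroDivisionError" in crashed.stderr)                  # True
```

As with .NET, you can only rely on the exit code if the child process sets one sensibly; parsing captured stderr is the fallback.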
{ "language": "en", "url": "https://stackoverflow.com/questions/155540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: globally unique integer based ID (sequential) for a given location I need to create a unique ID for a given location, and the location's ID must be sequential. So its basically like a primary key, except that it is also tied to the locationID. So 3 different locations will all have ID's like 1,2,3,4,5,...,n What is the best way to do this? I also need a safe way of getting the nextID for a given location, I'm guessing I can just put a transaction on the stored procedure that gets the next ID? A: One of the ways I've seen this done is by creating a table mapping the location to the next ID. CREATE TABLE LocationID { Location varchar(32) PRIMARY KEY, NextID int DEFAULT(1) } Inside your stored procedure you can do an update and grab the current value while also incrementing the value: ... UPDATE LocationID SET @nextID = NextID, NextID = NextID + 1 WHERE Location = @Location ... The above may not be very portable and you may end up getting the incremented value instead of the current one. You can adjust the default for the column as desired. Another thing to be cautious of is how often you'll be hitting this and if you're going to hit it from another stored procedure, or from application code. If it's from another stored procedure, then one at a time is probably fine. If you're going to hit it from application code, you might be better off grabbing a range of values and then doling them out to your application one by one and then grabbing another range. This could leave gaps in your sequence if the application goes down while it still has a half allocated block. A: You'll want to wrap both the code to find the next ID and the code to save the row in the same transaction. You don't want (pseudocode): transaction { id = getId } ... other processing transaction { createRowWithNewId } Because another object with that id could be saved during "... 
other processing" A: If this doesn't need to be persisted, you could always do this in your query versus storing it in the table itself. select locationID ,row_number() over (partition by locationID order by (select null)) as LocationPK From YourTable
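The counter-table pattern from the first answer can be exercised end-to-end against an in-memory SQLite database. This Python sketch is illustrative only (table and location names are made up, and SQLite is standing in for SQL Server); the key point is that the increment and the read happen inside one transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE LocationID (Location TEXT PRIMARY KEY, NextID INTEGER NOT NULL DEFAULT 1)")

def next_id(conn, location):
    # Claim the next per-location ID inside a single transaction so the
    # read-and-increment cannot interleave with another writer.
    with conn:  # commits on success, rolls back on error
        conn.execute("INSERT OR IGNORE INTO LocationID (Location) VALUES (?)", (location,))
        conn.execute("UPDATE LocationID SET NextID = NextID + 1 WHERE Location = ?", (location,))
        row = conn.execute(
            "SELECT NextID - 1 FROM LocationID WHERE Location = ?", (location,)).fetchone()
        return row[0]

print([next_id(conn, "location-a") for _ in range(3)])  # [1, 2, 3]
print(next_id(conn, "location-b"))                      # 1, each location counts independently
```

This also demonstrates the gap caveat from the answer above: if a caller claims an ID and then fails before using it, that number is simply skipped.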
{ "language": "en", "url": "https://stackoverflow.com/questions/155544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Exception thrown in a referenced .dll how do I debug? I'm using watermark extenders on textboxes and an exception is being thrown from the AJAX Control Toolkit .dll. It's strange because this just started happening. I tried debugging from the Ajax solution and Ajax examples (but with my code), but no dice. Is there a way to step into the Ajax .dll from my solution to see where this is happening? A: Couldn't you just get the source for the Ajax Control Toolkit and include it as a project in your solution and then reference it? You'd then be able to step into the code and if you really needed to, you can just put the precompiled one out when you deploy out. A: The source code for the AJAX Control Toolkit is available from: http://www.codeplex.com/AjaxControlToolkit/Release/ProjectReleases.aspx?ReleaseId=16488 Just download and start debugging. A: You don't need to include the AjaxControlToolkit project. Just open the file you need (in the VS instance where your code that currently breaks is), and set a breakpoint where appropriate. A: Maybe a black box approach is more appropriate here. You said that this error just started - what has changed in your environment to start this? You may be headed down a rabbit hole by stepping through the code. Can you deploy to a clean environment and not get the error? Or, does it work in the dev but not the test environment?
{ "language": "en", "url": "https://stackoverflow.com/questions/155559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Get the TThread object for the currently executing thread? I want a function like GetCurrentThread which returns a TThread object of the current executing thread. I know there is a Win32 API call GetCurrentThread, but it returns the thread Id. If there is a possibility to get TThread object from that ID that's also fine. A: I'm using my own TThread descendent that registers itself in a global list, protected with a lock. That way, a method in this descendent can walk the list and get a TThread give an ID. A: From your own answer, it seems maybe you only want to "determine if running in the main thread or not", in which case you can just use if Windows.GetCurrentThreadId() = System.MainThreadID then // ... Although this won't work from a DLL created with Delphi if it was loaded by a worker thread. A: The latest version of Delphi, Delphi 2009, has a CurrentThread class property on the TThread class. This will return the proper Delphi thread object if it's a native thread. If the thread is an "alien" thread, i.e. created using some other mechanism or on a callback from a third party thread, then it will create a wrapper thread around the thread handle. A: Answering my own question. I guess it is not possible to get TThread object from ID. It is possible by using a global variable. Then comparing its handle and current thread id, one can determine if running in the main thread or not. A: Wouldn't the current executing thread be the one you're trying to run a function from? A: You could store the pointer of the TThread instance in the current thread's context via the TlsSetValue API call and then retrieve it using TlsGetValue. However, note that this will only work if you're trying to retrieve/store the TThread instance of the current thread.
{ "language": "en", "url": "https://stackoverflow.com/questions/155560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do you use git svn? Please provide tips for effectively using git with svn. What are your "best practices"? A: Here are some things I recently learned: * *always do git svn rebase before doing git svn dcommit *when you are doing dcommit, do it from a temporary staging branch - if you (or git) mess up, it's much easier to recover by just deleting the branch and starting over When git svn dcommit dies halfway through a large commit and you seem to have lost all of your history, do this: How To Recover: First, open .git/logs/HEAD Find the hash of the commit that's the head of your git repo. Hopefully you remember the commit message and can figure it out, but it should be pretty obvious. Back in your now f-ed up working-dir: git reset --hard <hash from log> This gets your working dir back to where it was before you did a git svn dcommit. Then: git-svn rebase git-svn dcommit A: When you create the clone, use --prefix=svn/. It creates nicer branch names. Also, don't neglect the --trunk, --tags, and --branches arguments when doing clone or init. Fetching is one of the more time-consuming steps, so set up a cron job to do git svn fetch in the background. This is safe because fetching doesn't affect any of your working branches. ( Background info on git svn fetch: This command is executed first whenever you do git svn rebase, so by doing this step ahead of time, your git svn rebase call will usually be faster. The fetch command downloads SVN commits and sticks them into special branches managed by git-svn. These branches are viewable by doing git branch -r, and if you did the above step, they start with "svn/". )
A: If you have a post-commit hook in your SVN repository that can reject commits, then git svn dcommit will stop processing commits the first time a commit is rejected, and you'll have to recover your remaining commits from the git reflog. Actually, I think the above problem was caused by my cow-orker not running git rebase -i correctly while trying to fix the rejected commits. But, thanks to the reflog we were able to recover everything! A: I've been blogging a bit about how to live with Subversion and Git in parallel, and I've also put up a couple of rudimentary screencasts. Gathered everything here: http://www.tfnico.com/presentations/git-and-subversion I'll try a summary: * *Just go for it! Doesn't hurt to try it out :) *Start off with a small Git project to learn first. *Stick to the command-line until you master it. GUI-tools might just confuse you. *If possible, do one-off migrations, leave SVN behind, one project at a time, starting with the small ones. *If you do have to live with a Git and SVN together, be aware that you have to give up many advantages you get from Git, like branches, but you get the "local" benefits (stash, index, speed). *If you are one Git user, just do git-svn rebasing and dcommitting by yourself. *If you are several Git collaborators, set up a central Git/SVN that only pulls from SVN, while each Git-user dcommits directly back to SVN. More info here.
{ "language": "en", "url": "https://stackoverflow.com/questions/155566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Understanding code metrics I recently installed the Eclipse Metrics Plugin and have exported the data for one of our projects. It's all very good having these nice graphs but I'd really like to understand more in depth what they all mean. The definitions of the metrics only go so far in telling you what it really means. Does anyone know of any good resources, books, websites, etc, that can help me better understand what all the data means and give an understanding of how to improve the code where necessary? I'm interested in things like Efferent Coupling, and Cyclomatic Complexity, etc, rather than lines of code or lines per method. A: I don't think that code metrics (sometimes referred to as software metrics) provide valuable data in terms of where you can improve. With code metrics it is sort of nice to see how much code you write in an hour etc., but beyond that they tell you nada about the quality of the code written, its documentation and code coverage. They are pretty much a weak attempt to measure where you cannot really measure. Code metrics also discriminate against the programmers who solve the harder problems because they obviously managed to code less. Yet they solved the hard issues, while a junior programmer whipping out lots of crap code looks good. Another example of using metrics is the very popular Ohloh. They employ metrics to put a price tag on an opensource project (using number of lines, etc.), which in itself is an attempt that is flawed as hell - as you can imagine. Having said all that, the Wikipedia entry provides some overall insight on the topic. Sorry to not answer your question in a more supportive way with a really great website or book, but I bet you got the drift that I am not a huge fan. :) Something to employ to help you improve would be continuous integration and adhering to some sort of standard when it comes to code, documentation and so on. That is how you can improve.
Metrics are just eye candy for meetings - "look we coded that much already". Update Ok, well my point being efferent coupling or even cyclomatic complexity can indicate something is wrong - it doesn't have to be wrong though. It can be an indicator to refactor a class, but there is no rule of thumb that tells you when. IMHO a rule of thumb such as "refactor at 500+ lines of code", or the DRY principle, is more applicable in most cases. Sometimes it's as simple as that. I'll give you that much: since cyclomatic complexity is graphed into a flow chart, it can be an eye opener. But again, use it carefully. A: In my opinion metrics are an excellent way to find pain points in your codebase. They are also very useful to show your manager why you should spend time improving it. This is a post I wrote about it: http://blog.jorgef.net/2011/12/metrics-in-brownfield-applications.html I hope it helps
{ "language": "en", "url": "https://stackoverflow.com/questions/155581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is there any clean CSS method to make each letter in a word a different color? I need a way to allow each letter of a word to rotate through 3 different colors. I know of some not so clean ways I can do this with ASP.NET, but I'm wondering if there might be a cleaner CSS/JavaScript solution that is more search engine friendly. The designer is including a file like this for each page. I'd rather not have to manually generate an image for every page as that makes it hard for the non-technical site editors to add pages and change page names. A: On the server-side you can do this easily enough without annoying search engines AFAIK. // This server-side code example is in JavaScript because that's // what I know best. var words = message.split(" "); var c = 1; for(var i = 0; i < words.length; i++) { print("<span class=\"color" + c + "\">" + words[i] + "</span> "); c = c + 1; if (c > 3) c = 1; } If you really want dead simple inline HTML code, write client-side JavaScript to retrieve the message out of a given P or DIV or whatever based on its ID, split it, recode it as above, and replace the P or DIV's 'innerHTML' attribute. A: There’s definitely no solution using just CSS, as CSS selectors give you no access to individual letters (apart from :first-letter). A: Here is some JavaScript. var message = "The quick brown fox."; var colors = new Array("#ff0000","#00ff00","#0000ff"); // red, green, blue for (var i = 0; i < message.length; i++){ document.write("<span style=\"color:" + colors[(i % colors.length)] + ";\">" + message[i] + "</span>"); } A: I'd rather not have to manually generate an image for every page Then generate the image automatically.
You don't specify which server-side technology you're using, but any good one will allow you to manipulate images (or at least call an external image utility). So the non-technical users would just need to do something like this: <img src="images/redblueyellow.cfm/programs.png" alt="programs"/> And the redblueyellow.cfm script would then either use an existing programs.png or generate a new image with the multiple colours as desired. I can provide sample CFML or pseudo-code to do this, if you'd like.
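Following up on the client-side suggestion above, here is a minimal sketch (plain JavaScript; the function and class names are made up for illustration) that wraps each letter in a span, cycling through three CSS class names:

```javascript
// Wrap each letter of `text` in a span, cycling through the given
// CSS class names (which the stylesheet maps to three colors).
function colorize(text, classNames) {
  return text
    .split("")
    .map(function (ch, i) {
      if (ch === " ") return ch; // leave spaces unwrapped
      return '<span class="' + classNames[i % classNames.length] + '">' + ch + '</span>';
    })
    .join("");
}
```

In the browser you would then assign the result to an element, e.g. document.getElementById("pageTitle").innerHTML = colorize("Programs", ["color1", "color2", "color3"]); since the page source still contains plain text before the script runs, search engines see the unstyled words.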
{ "language": "en", "url": "https://stackoverflow.com/questions/155584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there an existing way to turn source code back into a CodeCompileUnit? We use the DesignSurface and all that good IDesignerHost goodness in our own designer. The designed forms are then persisted in our own bespoke format and all that works great. We also want to export the forms to a text-based format (which we've done as it isn't that difficult). However, we also want to import that text back into a document for the designer, which involves getting the designer code back into a CodeCompileUnit. Unfortunately, the Parse method is not implemented (for, no doubt, good reasons). Is there an alternative? We don't want to use anything that wouldn't exist on a standard .NET installation (like .NET libraries installed with Visual Studio). My current idea is to compile the imported text and then instantiate the form and copy its properties and controls over to the design surface object, and just capture the new CodeCompileUnit, but I was hoping there was a better way. Thanks. UPDATE: I thought some might be interested in our progress. So far, not so good. A brief overview of what I've discovered is that the Parse method was not implemented because it was deemed too difficult; open-source parsers exist that do the work, but they're not complete and therefore aren't guaranteed to work in all cases (NRefactory is one of those from the SharpDevelop project, I believe); and the copying of controls across from an instance to the designer isn't working as yet. I believe this is because although the controls are getting added to the form instance that the designer surface wraps, the designer surface is not aware of their inclusion. Our next attempt is to mimic cut/paste to see if that solves it. Obviously, this is a huge nasty workaround, but we need it working so we'll take the hit and keep an eye out for alternatives.
A: It's not exactly what you asked for, but you could try to use the CodeDomComponentSerializationService class to generate the CodeDom graph based on the current state of the design surface. We use that class to handle copy/paste functionality in our built-in designer. A: You could always write your own C# parser. That way you can be sure of its completeness. In your case, because you don't need anything like intellisense, you could probably get away with just using a parser generator. Even if you wrote one by hand, however, it probably wouldn't take you more than about a month.
{ "language": "en", "url": "https://stackoverflow.com/questions/155586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I determine the number of users on an ASP.NET site (IIS)? And their info? Is there a way to determine the number of users that have active sessions in an ASP.NET application? I have an admin/tools page in a particular application, and I would like to display info regarding all open sessions, such as the number of sessions, and perhaps the requesting machines' addresses, or other credential information for each user. A: If you are using .NET Membership you could use Membership.GetNumberOfUsersOnline() More about it: http://msdn.microsoft.com/en-us/library/system.web.security.membership.getnumberofusersonline.aspx A: If you'd like to implement the same mechanism by yourself, you can define something like a CurrentUserManager class and implement the singleton pattern there. This singleton object of class CurrentUserManager would be unique in the AppDomain. In this class you create its single instance once, and you prohibit others from creating new instances of this class by hiding its constructor. Whenever a request comes to this object, that single instance will give the response. So you implement a list that keeps a record of every user (when a user comes in, you add him to the list; when he goes out, you remove him from the list). And lastly, if you want the current user count, you just ask this singleton object for the list count.
A: If you use SQL Server as the session state provider, you can use this code to count the number of online users: SELECT Count(*) As Onlines FROM ASPStateTempSessions WHERE Expires>getutcdate() A: In Global.asax void Application_Start(object sender, EventArgs e) { // Code that runs on application startup Application["OnlineUsers"] = 0; } void Session_Start(object sender, EventArgs e) { // Code that runs when a new session is started Application.Lock(); Application["OnlineUsers"] = (int)Application["OnlineUsers"] + 1; Application.UnLock(); } void Session_End(object sender, EventArgs e) { // Code that runs when a session ends. // Note: The Session_End event is raised only when the sessionstate // mode is set to InProc in the Web.config file. // If session mode is set to StateServer or SQLServer, // the event is not raised. Application.Lock(); Application["OnlineUsers"] = (int)Application["OnlineUsers"] - 1; Application.UnLock(); } Note: The Application.Lock and Application.UnLock methods are used to prevent multiple threads from changing this variable at the same time. In Web.config Verify that the SessionState is "InProc" for this to work <system.web> <sessionState mode="InProc" cookieless="false" timeout="20" /> </system.web> In your .aspx file Visitors online: <%= Application["OnlineUsers"].ToString() %> Note: Code was originally copied from http://www.aspdotnetfaq.com/Faq/How-to-show-number-of-online-users-visitors-for-ASP-NET-website.aspx (link no longer active) A: ASP.NET Performance Counters like State Server Sessions Active (the number of active user sessions) should help you out. Then you can just read and display the performance counters from your admin page. A: The way I've seen this done in the past is adding extra code to the Session_OnStart event in the Global.asax file to store the information in a session-agnostic way, e.g. a database or the HttpApplicationState object.
Depending upon your needs you could also use Session_OnEnd to remove this information. You may want to initialise and clean up some of this information using the Application_Start and Application_End events. The administration page can then read this information and display statistics etc. This is explained in more depth at http://msdn.microsoft.com/en-us/library/ms178594.aspx and http://msdn.microsoft.com/en-us/library/ms178581.aspx. A: You can use PerformanceCounter to get data from the System.Diagnostics namespace. It allows you to get "Sessions Active" and much more, from the local server as well as a remote one. Here is an example of how to do it on the local machine void Main() { var pc = new PerformanceCounter("ASP.NET Applications", "Sessions Active", "__Total__"); Console.WriteLine(pc.NextValue()); } or for a remote server you would do: void Main() { var pc = new PerformanceCounter("ASP.NET Applications", "Sessions Active", "__Total__", "ServerHostName.domain"); Console.WriteLine(pc.NextValue()); } Performance Counters for ASP.NET provides a full list of ASP.NET counters that you can monitor A: Google Analytics comes with an API that can be implemented on your ASP.NET MVC application. It has real-time functionality, so the current number of users on your website can be tracked and returned to your application. Here's some information
{ "language": "en", "url": "https://stackoverflow.com/questions/155593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How much is File I/O a performance factor in web development? I know the mantra is that the database is always the long pole in the tent anytime a page is being generated server-side. But there's also a good bit of file I/O going on on a web server. Scripted code is replete with include/require statements. Moreover, it's typically a practice to store templated HTML outside the application in files which are loaded and filled in accordingly. How much of a role does file I/O play where web development is concerned? Does it ever become an issue? When is it too much? Do web servers/languages cache anything? Has it ever really mattered in your experience? A: 10 years ago, disks were so much faster than processors that you didn't have to worry about it so much. You'd run out of CPU (or saturate your NIC) before disk became an issue. Nowadays, CPUs and gigabit NICs could make disk the bottleneck, BUT.... Most non-database disk usage is so easily parallelizable. If you haven't designed your hosting architecture to scale horizontally by adding more systems, that's more important than fine-tuning disk access. If you have designed to scale horizontally, usually just buying more servers is cheaper than trying to figure out how to optimize disk. Not to mention, things like SSDs or even RAM disks for your templates will make it a non-issue. It's very rare to have a serving architecture that scales horizontally, is popular enough to cause scalability problems, but is not profitable enough to afford another 1U in your rack. A: File I/O will only become a factor (for static content and static page includes) if your bandwidth to the outside world is similar to your disk bandwidth. This would imply either you have a really fast connection, are serving content on a fast LAN, or have really slow disks (or are having a lot of disk contention). So most likely the answer is no. Of course, this assumes that you are not loading a large file only for a small portion of the file.
A: File I/O is one of many factors, including bandwidth, network connectivity, memory etc, which may affect the performance of a web application. The most effective way to determine if file I/O is causing you any issues is to run some profiling on the server and see if it is the bounding factor on your performance. A lot of this will depend upon what types of files you are loading from disk; lots of small files will have very different properties to a few large files. Web servers can cache files, both internally in memory, and can indicate to a client that a file (e.g. an image) can be cached and so does not need to be requested every time. A: Do not prematurely optimize. It's evil, or something. However, I/O is about the slowest thing you can do on a computer. Try to keep it to a minimum, but don't let Knuth see what you're doing. A: I would say that file IO speed only becomes an issue if you are serving tons of static content. When you are processing data, and executing code to render the pages, the time to read the page itself from disk is negligible. File I/O is important in cases where the static files you are serving up are unable to fit into memory, such as when you are serving video or image files. It could also happen with HTML files, but since the size is so small with HTML files, this is less likely.
{ "language": "en", "url": "https://stackoverflow.com/questions/155599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to learn about RDF / OWL? What references offer a good summary/tutorial for using RDF/OWL? There seem to be enough tools (Protege, Topbraid, Jena, etc.) that knowing the syntax of the markup languages is not necessary, but knowing the concepts is, of course, still critical. I'm working through the w3c documents (particularly the RDF Primer) but I'd like to find other resources/techniques to use as well. A: I've found experimenting with SPARQL to be a very helpful way of getting a grip on RDF. Reading about it is great, but trying to model a few things and querying other people's models made it "click" for me. Some more resources: * *Planet RDF (rss aggregating several rdf/semweb blogs) is often informative *Arc (rdf/sparql library for PHP) is great and easy to get started with if you come from a scripting background *Semantic Web for the Working Ontologist (book) contains a good number of practical examples and motivates the need for RDF, RDFS, and OWL and is (in my opinion) very readable. *The tutorials with many of the libraries are good resources too A: If you want to learn about building ontologies with OWL, then the pizza ontology tutorial from this book is a good place to start. A: There is a Software Engineering Radio interview with Jim Hendler dating from early November, 2008, that discusses the state of the art in that area. His book, Semantic Web for the Working Ontologist: Effective Modeling in RDFS and OWL, has gotten high marks for its practical coverage of the area. Chasing links from that interview led me to Protege, an active open-source project at Stanford University. A: I posted a series of informative articles and tutorials a while back, which may be helpful. The series starts very basic concepts and builds progressively. Introduction to the Semantic Web Vision and Technologies - Part 1 - Overview This is the first of a series of articles written exclusively to help you understand the Semantic Web vision and technologies. 
In this part, we introduce the Semantic Web vision set forth by Tim Berners-Lee. We also took a look at the famous layer cake diagram illustrating key technologies that make it possible. Part 2 - Foundations In this part, we munch around the bottom of the layer cake with a few important points about Unicode, URI, and XML -- three foundational technologies that permeate the existing Web and that are especially relevant to the emerging Semantic Web. Part 3 - The Resource Description Framework We put Unicode, URI, and XML to use as we take our next step up the Semantic Web layer cake in a review of the Resource Description Framework (RDF). At the same time, we take the visual RDF/OWL editor, Altova SemanticWorks, for a test drive. Part 4 - Protégé 101 (screencast tutorial) We reach an important milestone in the series - crossing a great divide between familiar technologies such as XML, Unicode, URI, and RDF to the Web Ontology Language (OWL). This is where things really start to get interesting. (Sorry for the annoying click sounds.) Part 5 - Building OWL Ontologies Using Protege 4 (screencast tutorial) We're still using Protege, but this time working with the new ALPHA version and getting deeper into concepts. Apologies for not having completed the series to a good finale, but I got slammed. More recently, I wrote a couple of posts on the Linked Data side of things. Though not specifically about RDF/OWL, they are highly related and may also be of interest to those interested in RDF/OWL. In order from most recent to last: * *A look at SPARQL – SQL for the Semantic Web *Linked Data – Where the Enterprise is Going *Linked Data in the Software Development Lifecycle *W3C’s Linked Data Platform aims to reframe the Web *What’s Next for the Web as Told by the Father of the Web A: A very good introduction to the semantic web in comparison to object-oriented languages is this document from W3C: A Semantic Web Primer for Object-Oriented Software Developers.
It helped me clarify a lot of things from the beginning. A: For OWL, check out the OWL 2 specification, e.g. the following documents, which also provide a lot of examples. * *http://www.w3.org/TR/owl2-syntax/ *http://www.w3.org/TR/owl2-primer/ A: Bob DuCharme's blog post, Adding metadata value with Pellet, is a nice practical place to start with OWL: http://www.snee.com/bobdc.blog/2008/12/adding-metadata-value-with-pel.html A: * *For pragmatic use of RDF, Shelley Powers' book Practical RDF is a good start. *The ESW Wiki is also a good resource *There's also David Beckett's RDF Resource Guide *Tim Berners-Lee's notes are always a good read *There's a bunch of links from the semantic-web@w3.org mailing list archives A: This is a nice video about the semantic web: http://vimeo.com/1062481?pg=embed&sec=1062481 A: I've found the linkeddatatools tutorial makes it easy to understand the basics. http://www.linkeddatatools.com/semantic-web-basics
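Since the answers above suggest that experimenting with triples and SPARQL is what makes RDF "click", here is a toy sketch (JavaScript; not a real RDF library, and the prefixes are illustrative) of the subject-predicate-object model with a wildcard lookup in the spirit of a SPARQL pattern:

```javascript
// RDF models knowledge as subject-predicate-object triples.
// A toy in-memory triple store (NOT a real RDF library).
const triples = [];

function add(subject, predicate, object) {
  triples.push({ subject, predicate, object });
}

// A SPARQL-like lookup: null acts as a wildcard variable.
function match(subject, predicate, object) {
  return triples.filter(t =>
    (subject === null || t.subject === subject) &&
    (predicate === null || t.predicate === predicate) &&
    (object === null || t.object === object));
}

add("ex:Alice", "rdf:type", "foaf:Person");
add("ex:Alice", "foaf:knows", "ex:Bob");

console.log(match("ex:Alice", null, null).length); // 2
```

A real triple store adds URIs, literals, and inference on top, but the core mental model is just this: a growing set of triples plus pattern matching over them.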
{ "language": "en", "url": "https://stackoverflow.com/questions/155601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Is it possible to use Mac Mail with MS Exchange if IMAP is disabled Our network admins have disabled IMAP and POP for our Exchange server, but do have RPC over HTTP enabled. Does Mac Mail only use IMAP to communicate with Exchange servers, or does it also know how to use RPC over HTTP? A: No. Mac Mail uses IMAP for mail and uses HTTP to fetch calendar data, as far as I am aware. A: No. Mail.app supports IMAP and POP, but no RPC over HTTP, yet.
{ "language": "en", "url": "https://stackoverflow.com/questions/155603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the difference between a method and a function? Can someone provide a simple explanation of methods vs. functions in an OOP context? A: For me: the role of a method and a function is the same, if I agree that: * *a function may return a value *a function may expect parameters Just like with any piece of code, you may have objects you put in and you may have an object that comes as a result. While doing that, they might change the state of an object, but that would not change their basic functioning for me. There might be a definitional difference between calling functions of objects and calling other code. But isn't that a matter of verbal differentiation, and isn't that why people interchange them? The mentioned example of computation I would be careful with, because I hire employees to do my calculations: new Employer().calculateSum( 8, 8 ); By doing it that way I can rely on an employer being responsible for calculations. If he wants more money I free him and let the garbage collector's function of disposing of unused employees do the rest, and get a new employee. Even arguing that a method is an object's function and a function is unconnected computation will not help me. The function descriptor itself, and ideally the function's documentation, will tell me what it needs and what it may return. The rest, like manipulating some object's state, is not really transparent to me. I do expect both functions and methods to deliver and manipulate what they claim to without needing to know in detail how they do it. Even a pure computational function might change the console's state or append to a logfile. A: From my understanding a method is any operation which can be performed on a class. It is a general term used in programming. In many languages methods are represented by functions and subroutines. The main distinction that most languages use for these is that functions may return a value back to the caller and a subroutine may not.
However, many modern languages only have functions, but these can optionally not return any value. For example, let's say you want to describe a cat and you would like it to be able to yawn. You would create a Cat class with a Yawn method, which would most likely be a function without any return value. A: To a first-order approximation, a method (in C++ style OO) is another word for a member function, that is, a function that is part of a class. In languages like C/C++ you can have functions which are not members of a class; you don't call a function not associated with a class a method. A: IMHO people just wanted to invent a new word for easier communication between programmers when they wanted to refer to functions inside objects. If you are saying methods, you mean functions inside a class. If you are saying functions, you mean simply functions outside a class. The truth is that both words are used to describe functions. Even if you use them wrongly, nothing wrong happens. Both words describe well what you want to achieve in your code. A function is code that has to play a role (a function) of doing something. A method is a method to resolve the problem. It does the same thing. It is the same thing. If you want to be super precise and go along with the convention, you can call the functions inside objects "methods". A: Let's not overcomplicate what should be a very simple answer. Methods and functions are the same thing. You call a function a function when it is outside of a class, and you call a function a method when it is written inside a class. A: A very general definition of the main difference between a Function and a Method: Functions are defined outside of classes, while Methods are defined inside of, and as part of, classes.
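A short JavaScript sketch of that outside-vs-inside distinction (the names are just for illustration):

```javascript
// A function: defined on its own and called independently.
function area(width, height) {
  return width * height;
}

// A method: the same logic, but defined inside a class and
// invoked on an instance, reaching that instance's data via `this`.
class Rectangle {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  area() {
    return this.width * this.height;
  }
}

console.log(area(3, 4));                  // 12
console.log(new Rectangle(3, 4).area());  // 12
```

Same computation both times; the only difference is where the code lives and how it is invoked.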
A: Function is a concept mainly belonging to procedure-oriented programming, where a function is an entity which can process data and return you a value. Method is a concept of object-oriented programming, where a method is a member of a class which mostly does processing on the class members. A: I am not an expert, but this is what I know: *Function is a C language term; it refers to a piece of code, and the function name will be the identifier to use this function. *Method is the OO term; typically it has a this pointer in the function parameters. You cannot invoke this piece of code like in C; you need to use an object to invoke it. *The ways of invoking are also different. Here, invoking means finding the address of this piece of code. In C/C++, link time will use the function symbol to locate it. *Objective-C is different. Invoking means a C function uses a data structure to find the address. It means everything is known at run time. A: TL;DR A Function is a piece of code to run. A Method is a Function inside an Object. Example of a function: function sum(){ console.log("sum"); } Example of a Method: const obj = { a:1, b:2, sum(){ } } So that's why we say that a "this" keyword inside a Function is not very useful unless we use it with call, apply or bind... because call, apply and bind will call that function as a method inside an object ==> basically it converts the function to a method. A: The idea behind the Object Oriented paradigm is to "treat" the software as if it is composed of... well, "objects". Objects in the real world have properties; for instance, if you have an Employee, the employee has a name, an employee id, a position, he belongs to a department, etc. The object also knows how to deal with its attributes and perform some operations on them. Let's say we want to know what an employee is doing right now; we would ask him: employee whatAreYouDoing. That "whatAreYouDoing" is a "message" sent to the object.
The object knows how to answer that question; it is said to have a "method" to resolve the question. So, the ways objects have to expose their behavior are called methods. Methods thus are the artifacts objects have to "do" something. Other possible methods are employee whatsYourName, employee whatsYourDepartmentsName, etc. Functions, on the other hand, are ways a programming language has to compute some data; for instance, you might have the function addValues( 8, 8 ) that returns 16 // pseudo-code function addValues( int x, int y ) return x + y // call it result = addValues( 8,8 ) print result // output is 16... Since the first popular programming languages (such as Fortran, C, Pascal) didn't cover the OO paradigm, they simply called these artifacts "functions". For instance, the previous function in C would be: int addValues( int x, int y ) { return x + y; } It is not "natural" to say an object has a "function" to perform some action, because functions are more related to mathematical stuff while an Employee has little to do with mathematics, but you can have methods that do exactly the same as functions; for instance, in Java this would be the equivalent addValues function. public static int addValues( int x, int y ) { return x + y; } Looks familiar? That's because Java has its roots in C++, and C++ in C. In the end it is just a concept; in implementation they might look the same, but in the OO documentation these are called methods. Here's an example of the previous Employee object in Java. public class Employee { Department department; String name; public String whatsYourName(){ return this.name; } public String whatsYourDepartmentsName(){ return this.department.name(); } public String whatAreYouDoing(){ return "nothing"; } // Ignore the following, only set here for completeness public Employee( String name ) { this.name = name; } } // Usage sample.
Employee employee = new Employee( "John" ); // Creates an employee called John // If I want to display what this employee is doing I could use its methods // to find out. String name = employee.whatsYourName(); String doingWhat = employee.whatAreYouDoing(); // Print the info to the console. System.out.printf("Employee %s is doing: %s", name, doingWhat ); Output: Employee John is doing: nothing The difference, then, is in the "domain" where it is applied. AppleScript has the idea of a "natural language" metaphor, which at some point OO had; for instance, Smalltalk. I hope it may be reasonably easier for you to understand methods in objects after reading this. NOTE: The code is not to be compiled, just to serve as an example. Feel free to modify the post and add a Python example. A: 'method' is the object-oriented word for 'function'. That's pretty much all there is to it (i.e., no real difference). Unfortunately, I think a lot of the answers here are perpetuating or advancing the idea that there's some complex, meaningful difference. Really - there isn't all that much to it, just different words for the same thing. [late addition] In fact, as Brian Neal pointed out in a comment to this question, the C++ standard never uses the term 'method' when referring to member functions. Some people may take that as an indication that C++ isn't really an object-oriented language; however, I prefer to take it as an indication that a pretty smart group of people didn't think there was a particularly strong reason to use a different term.
A: A class is the collection of some data and function optionally with a constructor. While you creating an instance (copy,replication) of that particular class the constructor initialize the class and return an object. Now the class become object (without constructor) & Functions are known as method in the object context. So basically Class <==new==>Object Function <==new==>Method In java the it is generally told as that the constructor name same as class name but in real that constructor is like instance block and static block but with having a user define return type(i.e. Class type) While the class can have an static block,instance block,constructor, function The object generally have only data & method. A: Function - A function in an independent piece of code which includes some logic and must be called independently and are defined outside of class. Method - A method is an independent piece of code which is called in reference to some object and are be defined inside the class. A: General answer is: method has object context (this, or class instance reference), function has none context (null, or global, or static). But answer to question is dependent on terminology of language you use. * *In JavaScript (ES 6) you are free to customising function context (this) for any you desire, which is normally must be link to the (this) object instance context. *In Java world you always hear that "only OOP classes/objects, no functions", but if you watch in detailes to static methods in Java, they are really in global/null context (or context of classes, whithout instancing), so just functions whithout object. Java teachers could told you, that functions were rudiment of C in C++ and dropped in Java, but they told you it for simplification of history and avoiding unnecessary questions of newbies. 
If you look at Java after version 7, you can find many elements of pure functional programming (even not from C, but from the much older Lisp) for simplifying parallel computing, and that is not the OOP classes style. *In the C++ and D world things are stricter, and you have separated functions and objects with methods and fields. But in practice, you again see functions without this and methods with this (with object context). *In FreePascal/Lazarus and Borland Pascal/Delphi, the separation between functions and objects (variables and fields) is usually similar to C++. *Objective-C comes from the C world, so you must separate C functions from Objective-C objects with their method add-ons. *C# is very similar to Java, but has many C++ advantages. A: In the OO world, the two are commonly used to mean the same thing. From a pure Math and CS perspective, a function will always return the same result when called with the same arguments ( f(x,y) = (x + y) ). A method on the other hand, is typically associated with an instance of a class. Again though, most modern OO languages no longer use the term "function" for the most part. Many static methods can be quite like functions, as they typically have no state (not always true). A: A function is a piece of code that is called by name. It can be passed data to operate on (i.e. the parameters) and can optionally return data (the return value). All data that is passed to a function is explicitly passed. A method is a piece of code that is called by a name that is associated with an object. In most respects it is identical to a function except for two key differences: * *A method is implicitly passed the object on which it was called. *A method is able to operate on data that is contained within the class (remembering that an object is an instance of a class - the class is the definition, the object is an instance of that data). (this is a simplified explanation, ignoring issues of scope etc.)
A: Let's say a function is a block of code (usually with its own scope, and sometimes with its own closure) that may receive some arguments and may also return a result. A method is a function that is owned by an object (in some object oriented systems, it is more correct to say it is owned by a class). Being "owned" by an object/class means that you refer to the method through the object/class; for example, in Java if you want to invoke a method "open()" owned by an object "door" you need to write "door.open()". Usually methods also gain some extra attributes describing their behaviour within the object/class, for example: visibility (related to the object oriented concept of encapsulation) which defines from which objects (or classes) the method can be invoked. In many object oriented languages, all "functions" belong to some object (or class) and so in these languages there are no functions that are not methods. A: In C++, sometimes, method is used to reflect the notion of a member function of a class. However, recently I found a statement in the book «The C++ Programming Language 4th Edition», on page 586, "Derived Classes": A virtual function is sometimes called a method. This is a little bit confusing, but he said sometimes, so it roughly makes sense; the creator of C++ tends to see methods as functions that can be invoked on objects and can behave polymorphically. A: Here is some explanation for method vs. function using JavaScript examples: test(20, 50); is a function call; a function is defined and used to run some steps or return something back that can be stored/used somewhere. You can reuse code: Define the code once and use it many times. You can use the same code many times with different arguments, to produce different results.
var x = myFunction(4, 3); // Function is called, return value will end up in x function myFunction(a, b) { return a * b; // Function returns the product of a and b } var test = something.test(); here test() can be a method of some object or a custom-defined prototype method for built-in objects; here is more explanation: JavaScript methods are the actions that can be performed on objects. A JavaScript method is a property containing a function definition. Built-in property/method for strings in javascript: var message = "Hello world!"; var x = message.toUpperCase(); //Output: HELLO WORLD! Custom example: function person(firstName, lastName, age, eyeColor) { this.firstName = firstName; this.lastName = lastName; this.age = age; this.eyeColor = eyeColor; this.changeName = function (name) { this.lastName = name; }; } something.changeName("SomeName"); //This will change 'something' object's last name to 'SomeName' You can define properties for String, Array, etc. as well, for example String.prototype.distance = function (char) { var index = this.indexOf(char); if (index === -1) { console.log(char + " does not appear in " + this); } else { console.log(char + " is " + (this.length - index) + " characters from the end of the string!"); } }; var something = "ThisIsSomeString" // now use distance like this, run and check console log something.distance("m"); Some references: Javascript Object Method, Functions, More info on prototype A: Difference Between Methods and Functions From reading this doc on Microsoft: Members that contain executable code are collectively known as the function members of a class. The preceding section describes methods, which are the primary kind of function members. This section describes the other kinds of function members supported by C#: constructors, properties, indexers, events, operators, and finalizers. So methods are a subset of functions.
Every method is a function, but not every function is a method; for example, a constructor can't be called a method, but it is a function. A: In just 2 words: non-static ("instance") methods take a hidden pointer to "this" (as their 1st param) which is the object you call the method on. That's the only difference with a regular standalone function, dynamic dispatching notwithstanding. If you are interested, read the details below. I'll try to be short and will use C++ as an example although what I say can be applied to virtually every language. * *For your CPU, both functions and methods are just pieces of code. Period. *As such, when functions/methods are called, they can take parameters. Ok, I said there's no actual difference. Let's dig a bit deeper: * *There are 2 flavors of methods: static and non-static *Static methods are like regular functions but declared inside the class, which acts merely like a namespace *Non-static ("instance") methods take a hidden pointer to "this". That's the only difference with a regular standalone function. Dynamic dispatching aside, it means it's as simple as that: class User { public string name; // I made it public intentionally // Each instance method takes a hidden reference to "this" public void printName(/*User & this*/) { cout << this.name << endl; } }; is equivalent to public void printName(User & user) { // No syntactic sugar, passing a reference explicitly cout << user.name << endl; } So, essentially, user->printName() is just syntactic sugar for printName(user). If you don't use dynamic dispatch, that's all. If it is used, then it's a bit more involved, but the compiler will still emit what looks like a function taking this as a first parameter. A: There's a clear difference between method and function as: A function is an independent piece of code which you can invoke anywhere by just mentioning its name with given arguments, like in most of the procedural languages, e.g. C++ and Python.
While a method is specifically associated with an object, meaning you can only invoke a method by mentioning its object before it with dot (.) notation, like in purely Object Oriented languages such as C# and Java. A: Methods are functions of classes. In normal jargon, people interchange method and function all over. Basically you can think of them as the same thing (not sure if global functions are called methods). http://en.wikipedia.org/wiki/Method_(computer_science) A: A function is a mathematical concept. For example: f(x,y) = sin(x) + cos(y) says that function f() will return the sin of the first parameter added to the cosine of the second parameter. It's just math. As it happens sin() and cos() are also functions. A function has another property: all calls to a function with the same parameters, should return the same result. A method, on the other hand, is a function that is related to an object in an object-oriented language. It has one implicit parameter: the object being acted upon (and its state). So, if you have an object Z with a method g(x), you might see the following: Z.g(x) = sin(x) + cos(Z.y) In this case, the parameter x is passed in, the same as in the function example earlier. However, the parameter to cos() is a value that lives inside the object Z. Z and the data that lives inside it (Z.y) are implicit parameters to Z's g() method. A: Historically, there may have been a subtle difference with a "method" being something which does not return a value, and a "function" one which does. Each language has its own lexicon of terms with special meaning. In "C", the word "function" means a program routine. In Java, the term "function" does not have any special meaning. Whereas "method" means one of the routines that forms the implementation of a class.
In C# that would translate as: public void DoSomething() {} // method public int DoSomethingAndReturnMeANumber(){} // function But really, I re-iterate that there is really no difference in the 2 concepts. If you use the term "function" in informal discussions about Java, people will assume you meant "method" and carry on. Don't use it in proper documents or presentations about Java, or you will look silly. A: A function or a method is a named callable piece of code which performs some operations and optionally returns a value. In the C language the term function is used. Java & C# people would call it a method (and a function in this case is defined within a class/object). A C++ programmer might call it a function or sometimes a method (depending on whether they are writing procedural-style C++ code or are doing the object-oriented way of C++; also a C/C++-only programmer would likely call it a function because the term 'method' is less often used in C/C++ literature). You use a function by just calling its name like, result = mySum(num1, num2); You would call a method by referencing its object first like, result = MyCalc.mySum(num1,num2); A: A function is a set of logic that can be used to manipulate data. While a method is a function that is used to manipulate the data of the object where it belongs. So technically, if you have a function that is not completely related to your class but was declared in the class, it's not a method; it's called bad design. A: In general: methods are functions that belong to a class, while functions can be in any other scope of the code, so you could state that all methods are functions, but not all functions are methods: Take the following Python example: class Door: def open(self): print 'hello stranger' def knock_door(): a_door = Door() Door.open(a_door) knock_door() The example given shows you a class called "Door" which has a method or action called "open", it is called a method because it was declared inside a class.
There is another portion of code with "def" just below which defines a function; it is a function because it is not declared inside a class. This function calls the method we defined inside our class as you can see, and finally the function is called by itself. As you can see, you can call a function anywhere, but if you want to call a method either you have to pass a new object of the same type as the class in which the method is declared (Class.method(object)) or you have to invoke the method on the object (object.Method()), at least in Python. Think of methods as things only one entity can do, so if you have a Dog class it would make sense to have a bark function only inside that class and that would be a method; if you also have a Person class, it could make sense to write a function "feed" that doesn't belong to any class, since both humans and dogs can be fed, and you could call that a function since it does not belong to any class in particular. A: Simple way to remember: * *Function → Free (Free means it can be anywhere, no need to be in an object or class) *Method → Member (A member of an object or class) A: In OO languages such as Object Pascal or C++, a "method" is a function associated with an object. So, for example, a "Dog" object might have a "bark" function and this would be considered a "Method". In contrast, the "StrLen" function stands alone (it provides the length of a string provided as an argument). It is thus just a "function." Javascript is technically Object Oriented as well but faces many limitations compared to a full-blown language like C++, C# or Pascal. Nonetheless, the distinction should still hold. A couple of additional facts: C# is fully object oriented so you cannot create standalone "functions." In C# every function is bound to an object and is thus, technically, a "method." The kicker is that few people in C# refer to them as "methods" - they just use the term "functions" because there isn't any real distinction to be made.
Finally - just so any Pascal gurus don't jump on me here - Pascal also differentiates between "functions" (which return a value) and "procedures" which do not. C# does not make this distinction explicitly although you can, of course, choose to return a value or not. A: A method is on an object or is static in a class. A function is independent of any object (and outside of any class). For Java and C#, there are only methods. For C, there are only functions. For C++ and Python it would depend on whether or not you're in a class. But in basic English: * *Function: Standalone feature or functionality. *Method: One way of doing something, which has different approaches or methods, but is related to the same aspect (aka class). A: Methods on a class act on the instance of the class, called the object. class Example { public int data = 0; // Each instance of Example holds its internal data. This is a "field", or "member variable". public void UpdateData() // .. and manipulates it (This is a method by the way) { data = data + 1; } public void PrintData() // This is also a method { Console.WriteLine(data); } } class Program { public static void Main() { Example exampleObject1 = new Example(); Example exampleObject2 = new Example(); exampleObject1.UpdateData(); exampleObject1.UpdateData(); exampleObject2.UpdateData(); exampleObject1.PrintData(); // Prints "2" exampleObject2.PrintData(); // Prints "1" } } A: Since you mentioned Python, the following might be a useful illustration of the relationship between methods and objects in most modern object-oriented languages. In a nutshell what they call a "method" is just a function that gets passed an extra argument (as other answers have pointed out), but Python makes that more explicit than most languages.
# perfectly normal function def hello(greetee): print "Hello", greetee # generalise a bit (still a function though) def greet(greeting, greetee): print greeting, greetee # hide the greeting behind a layer of abstraction (still a function!) def greet_with_greeter(greeter, greetee): print greeter.greeting, greetee # very simple class we can pass to greet_with_greeter class Greeter(object): def __init__(self, greeting): self.greeting = greeting # while we're at it, here's a method that uses self.greeting... def greet(self, greetee): print self.greeting, greetee # save an object of class Greeter for later hello_greeter = Greeter("Hello") # now all of the following print the same message hello("World") greet("Hello", "World") greet_with_greeter(hello_greeter, "World") hello_greeter.greet("World") Now compare the function greet_with_greeter and the method greet: the only difference is the name of the first parameter (in the function I called it "greeter", in the method I called it "self"). So I can use the greet method in exactly the same way as I use the greet_with_greeter function (using the "dot" syntax to get at it, since I defined it inside a class): Greeter.greet(hello_greeter, "World") So I've effectively turned a method into a function. Can I turn a function into a method? Well, as Python lets you mess with classes after they're defined, let's try: Greeter.greet2 = greet_with_greeter hello_greeter.greet2("World") Yes, the function greet_with_greeter is now also known as the method greet2. This shows the only real difference between a method and a function: when you call a method "on" an object by calling object.method(args), the language magically turns it into method(object, args). (OO purists might argue a method is something different from a function, and if you get into advanced Python or Ruby - or Smalltalk! - you will start to see their point. Also some languages give methods special access to bits of an object. 
But the main conceptual difference is still the hidden extra parameter.) A: They're often interchangeable, but a method usually refers to a subroutine inside a class, and a function usually refers to a subroutine outside the class. For instance, in Ruby: # function def putSqr(a) puts a ** 2 end class Math2 # method def putSqr(a) puts a ** 2 end end In Java, where everything (except package and import statements) must be inside the class, people almost always refer to them as "methods". A: A function and a method look very similar. They both have inputs and return outputs. The difference is that a method is inside of a class whereas a function is outside of a class. A: A method is a member of an object or class. A function is independent. But in the case of Javascript, function and method are interchangeable. A: With the C# terminology, there’s a distinction between functions and methods. The term function member includes not only methods, but also other nondata members such as indexers, operators, constructors, destructors, and properties — all members that contain executable code. reference => Professional C# and .NET 2021 Edition - written by Christian Nagel A: What's the difference between a method and a function? Python's official documentation defines it like this (thank you to @Kelly Bundy here!): function A series of statements which returns some value to a caller. It can also be passed zero or more arguments which may be used in the execution of the body. See also parameter, method, and the Function definitions section method A function which is defined inside a class body. If called as an attribute of an instance of that class, the method will get the instance object as its first argument (which is usually called self). See function and nested scope. A square is a rectangle, but not all rectangles are squares. The way I interpret the world, a method is a function, but not all functions are methods.
What makes a method unique is that it is a special type of function which is also associated with a class and has access to class member variables. See also: * *[my ans] hasattr is called a method, but it looks like a function A: * *OOP is a design philosophy. In this context a "Method" is an "action", a "behaviour" of the object, an "operation", something an object does. A right click on a mouse object is an action. This action or behavior, in several languages that implement OOP design, is called a "method". *"Function" is related only to procedural languages such as C and Pascal and is not related to the OOP philosophy, even if technically the implementation is similar to a method. A "function" is a block of code in a procedural language like C, that has a defined purpose, an isolated and defined functionality, that can also return a result. A "procedure" is also a function that does not return a result, but it's just a technical difference. A: A function is a set of instructions or procedures to perform a specific task. It can be used to split code into easily understandable parts, which can be invoked or reused as well. Methods are actions that can be performed on objects. They are also known as functions stored as object properties. Major difference: A function doesn’t need any object and is independent, while a method is a function which is linked to an object. //firstName() is the function function firstName(){ console.log('John'); } firstName() //Invoked without any object const person = { firstName: "John", lastName: "Doe", id: 5566, }; //person.name is the method person.name = function() { return this.firstName + " " + this.lastName; }; document.getElementById("demo").innerHTML = "My father is " + person.name() //performs action on object;
{ "language": "en", "url": "https://stackoverflow.com/questions/155609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2111" }
Q: How do I specify the exit code of a console application in .NET? I have a trivial console application in .NET. It's just a test part of a larger application. I'd like to specify the "exit code" of my console application. How do I do this? A: The enumeration option is excellent. However, it can be improved upon by making the values powers of two, as in: enum ExitCodes : int { Success = 0, SignToolNotInPath = 1, AssemblyDirectoryBad = 2, PFXFilePathBad = 4, PasswordMissing = 8, SignFailed = 16, UnknownError = 32 } In the case of multiple errors, adding the specific error numbers together will give you a unique number that will represent the combination of detected errors. For example, an errorlevel of 6 can only consist of errors 4 and 2, 12 can only consist of errors 4 and 8, 14 can only consist of 2, 4 and 8, etc. A: As an update to Scott Munro's answer: * *In C# 6.0 and VB.NET 14.0 (Visual Studio 2015), either Environment.ExitCode or Environment.Exit(exitCode) is required to return a non-zero code from a console application. Changing the return type of Main has no effect. *In F# 4.0 (Visual Studio 2015), the return value of the main entry point is respected. A: Three options: * *You can return it from Main if you declare your Main method to return int. *You can call Environment.Exit(code). *You can set the exit code using properties: Environment.ExitCode = -1;. This will be used if nothing else sets the return code or uses one of the other options above. Depending on your application (console, service, web application, etc.), different methods can be used. A: There are three methods that you can use to return an exit code from a console application. * *Modify the Main method in your application so that it returns an int instead of void (a function that returns an Integer instead of Sub in VB.NET) and then return the exit code from that method. *Set the Environment.ExitCode property to the exit code. Note that method 1.
takes precedence - if the Main method returns anything other than void (is a Sub in VB.NET) then the value of this property will be ignored. *Pass the exit code to the Environment.Exit method. This will terminate the process immediately as opposed to the other two methods. An important standard that should be observed is that 0 represents 'Success'. On a related topic, consider using an enumeration to define the exit codes that your application is going to return. The FlagsAttribute will allow you to return a combination of codes. Also, ensure that your application is compiled as a 'Console Application'. A: If you are going to use the method suggested by David, you should also take a look at the [Flags] Attribute. This allows you to do bitwise operations on enums. [Flags] enum ExitCodes : int { Success = 0, SignToolNotInPath = 1, AssemblyDirectoryBad = 2, PFXFilePathBad = 4, PasswordMissing = 8, SignFailed = 16, UnknownError = 32 } Then (ExitCodes.SignFailed | ExitCodes.UnknownError) would be 16 + 32. :) A: In addition to the answers covering the return int's... a plea for sanity. Please, please define your exit codes in an enum, with Flags if appropriate. It makes debugging and maintenance so much easier (and, as a bonus, you can easily print out the exit codes on your help screen - you do have one of those, right?). enum ExitCode : int { Success = 0, InvalidLogin = 1, InvalidFilename = 2, UnknownError = 10 } int Main(string[] args) { return (int)ExitCode.Success; } A: System.Environment.ExitCode See Environment.ExitCode Property. A: You can find the system error codes on System Error Codes (0-499). You will find the typical codes, like 2 for "file not found" or 5 for "access denied". And when you stumble upon an unknown code, you can use this command to find out what it means: net helpmsg decimal_code For example, net helpmsg 1 returns Incorrect function A: int code = 2; Environment.Exit( code ); A: Just return the appropriate code from main.
int Main(string[] args) { return 0; // Or exit code of your choice } A: Use ExitCode if your main has a void return signature. Otherwise, you need to "set" it by the value you return. From Environment.ExitCode Property: If the Main method returns void, you can use this property to set the exit code that will be returned to the calling environment. If Main does not return void, this property is ignored. The initial value of this property is zero. A: Use this code Environment.Exit(0); use 0 as the int if you don't want to return anything. A: I'm doing it like this: int exitCode = 0; Environment.Exit(exitCode); Or you can throw an error (personal preference): throw new ArgumentException("Code 0, Environment Exit"); I've chosen ArgumentException, but you can use another type. It will work fine. A: Just another way: public static class ApplicationExitCodes { public static readonly int Failure = 1; public static readonly int Success = 0; }
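To see how the Flags-style codes from the answers above combine, here is a small sketch (in Python purely for illustration; the flag names mirror the hypothetical ExitCodes enum from the earlier answers) that decodes a combined exit code back into its component errors:

```python
# Decoding a combined flag-style exit code on the calling side.
# The flag values are powers of two, so each error occupies one bit.
FLAGS = {
    1: "SignToolNotInPath",
    2: "AssemblyDirectoryBad",
    4: "PFXFilePathBad",
    8: "PasswordMissing",
    16: "SignFailed",
    32: "UnknownError",
}

def decode_exit_code(code):
    """Return the list of flag names set in a power-of-two exit code."""
    if code == 0:
        return ["Success"]
    # A bit is "set" exactly when code & bit is non-zero.
    return [name for bit, name in FLAGS.items() if code & bit]

# An exit code of 6 can only be flags 2 and 4:
print(decode_exit_code(6))   # ['AssemblyDirectoryBad', 'PFXFilePathBad']
# And 12 can only be flags 4 and 8:
print(decode_exit_code(12))  # ['PFXFilePathBad', 'PasswordMissing']
```

This is why the powers-of-two scheme works: each combination of errors sums to a unique number, and the caller can recover exactly which errors occurred with bitwise AND.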
{ "language": "en", "url": "https://stackoverflow.com/questions/155610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "543" }
Q: Is it reasonable to assume my visitors have javascript enabled? I understand that server-side validation is an absolute must to prevent malicious users (or simply users who choose to disable javascript) from bypassing client-side validation. But that's mainly to protect your application, not to provide value for those who are running browsers with javascript disabled. Is it reasonable to assume visitors have javascript enabled and simply have an unusable site for those who don't? A: Duplicate There's at least one category where the answer is definitely "no". If you work for the government, you must make sure the site is accessible to those using screen readers. A: Is it reasonable to assume visitors have javascript enabled and simply have an unusable site for those who don't? There are actually two questions, and the answers are: Yes, it is reasonable to assume visitors have javascript enabled. And, No, this does not mean others should be left with an unusable site. Progressive enhancement is the way to go. Have your site usable without javascript and then add bells and whistles. As for client-side validation, it is no more than a convenience for the user to avoid unnecessary roundtrips to the server (where real validation should be performed). A: I browse with the NoScript plugin in Firefox and I'm surprised at the number of developers that haven't even considered making their site degradable. Never assume the user has JavaScript enabled - especially seeing as it may not always be their fault. Many enterprises have firewalls which block JavaScript/ActiveX etc. - in this instance the <noscript> element won't work so I would NOT recommend using that either! Unless you're creating a full-on web application which is going to be 90% Ajax, you must make sure to abide by standards and progressively enhance your site through various layers of interactivity. Also don't forget the importance of object detection, especially with the rise of mobile phone web browsing.
One of the most popular mobile web browsers (Opera Mini 4.0) doesn't allow all "background JavaScript" to work, and Ajax calls rarely execute correctly... Just something to be aware of. To be honest I am sick and tired of developers that think everyone will have JS enabled! What ignorance!! A: I browse with NoScript in Firefox, and it always annoys me when I get pages that don't work. That said - know your audience. If you're trying to cater to paranoid computer security professionals - assume they might not have JavaScript enabled. If you're going for a general audience, JavaScript is probably on. A: It is OK these days to assume your visitors have JS enabled. With that said, you should strive for the best possible degradation of your site with JS disabled. It is ideal if your site falls back to a state that is still usable without JS. A: Yes it is. But expose as much of it as possible through regular HTML and URLs, if for nothing else than for Google. A: Accessible, yes... functional? Not really. This is really a customer requirement question more than developer-answerable, but if your customer tries to enforce a requirement that non-JS browsers work, you should argue heavily against it and really hammer them on the "cool" factor they'll be missing. Given the heavy reliance by GWT, RichFaces, etc. on Javascript, it's just not feasible to make an app with any kind of user-friendly UI without it. You should certainly warn non-JS enabled users that the site they're trying to visit relies heavily on JS, though. No point in being rude about it. A: No! Some environments will have it disabled as a matter of policy, with nothing you can do to enable it. And even if it's enabled, it might be crippled. This question has been asked before. A: Totally depends on who you're aiming at. If your site or app is for an Intranet, you can make lots of assumptions. If your target audience is bleeding-edge social-networking types, you can assume JavaScript will work.
If you anticipate a lot of paranoid sysadmin types, you can assume a bunch of them will be trying to access your site in Lynx or have JS turned off for "security reasons." A good example of this is Amazon -- their approach is driven by their business goals. They are a mass-market site, but for them, locking out users in old/incapable browsers means potential lost sales, so they work hard on non-script fallbacks. So like lots of these kinds of questions, the answer is not just regurgitating what you've read somewhere about accessibility or progressive enhancement. The real answer is "it depends." A: I think there is another reason which pushes you to support at least some main functionality without JS - lots of us are now browsing from mobiles and PDAs, which don't have the same level of JavaScript support. A: http://www.w3schools.com/browsers/browsers_stats.asp They claim 95% of users have Javascript on. A: One interesting point to consider is that as a web developer you have a social responsibility to push technology forward - and by using things like AJAX, you increase exposure and potentially rate of adoption along with it. The only thing that should stop you from using the tech to its fullest extent is money - if you won't make the money that you need because people will have trouble viewing the material, you've got to reconsider. A: Never ever assume Javascript for form validation, as your question implies. Someone will eventually realise this and turn Javascript off. Instead, code the app in a fairly regular html manner and use Javascript for what it is: an optional perk for your users. Even for an entirely AJAX app like Gmail, the complete works of form validation is required on the server side. A: Yes it is, JavaScript is as old as CSS and no one tries to build around browsers that don't support CSS. Cross Site Scripting is the reason people are afraid of JavaScript, but believe me if a developer wants to screw you over he doesn't need JavaScript to do it.
As far as mobile browsers go, most of them now have JavaScript, and the others shouldn't be considered browsers. My advice is not to open yourself to hackers by making your site vulnerable to those who choose to turn off their JavaScript, but at the same time don't go out of the way to support those who are living in the stone age. You aren't going to support IE 4 or Netscape, right? Then why support those who sabotage their own browsers because of blatant fear or paranoia? A: I think it's fair to assume that the majority of visitors to your site will have JavaScript enabled. Some of the more trafficked sites out there have a dependency on JavaScript. For example, I was surprised to learn that you can't authenticate through a Passport-enabled site without a JS-enabled browser. A: No it's not, period, full-stop, end of story. It's just naive and wrong at an ethical level, not to mention you miss out on around 50% of Internet users worldwide (believe it or not 70% of web access worldwide is from mobile devices). Add extra nifty stuff that requires Javascript, that's fine. Don't make your site unusable without Javascript unless you have a really, really, really good reason to do so. Someone rightly pointed out that I don't have evidence to back up my claim of 70% mobile web users. Unfortunately I can't find the source I got it from but I remember it being authoritative so have no reason to doubt it. It does make sense though when you consider worldwide usage; many developing countries have more mobile phones than landlines and broadband. A statistic that was quoted in my not-to-be-found source was that one African country in particular has 300,000 landlines, but 1.5 million mobile phones! A: Nearly all (but not quite all!) users will have javascript enabled. (I believe the figure quoted above of about 5% is accurate.)
Given the vast improvement in usability you can make with the judicious use of javascript, my opinion is that most of the time it is reasonable to assume it is enabled. There will of course be some instances where that is not the case (i.e., a site designed for mobile devices, or with a high percentage of disabled users, etc.), and an effort should always be made to make your site as accessible as possible to as large a percentage of the population as possible. That said, if you only have a low traffic site, 5% of a small number is a very small number. It may not be worth bending over backwards to make your site accessible to these people when it may only gain you one or two extra users. I guess the short answer is (as always) that there is no correct answer - it will depend entirely on the target use, and target users, of the site in question. A: According to this little page Javascript is enabled in 95% of browsers and it keeps rising. A: The W3C Browser Statistics page (scroll down) has some information on this; they say that 95% of visitors have JavaScript on as of January 2008. A: It's reasonable to assume your visitors have javascript enabled !-) -- but of course it depends on who you're trying to reach ... Several times above w3schools has been mentioned and, as Dan stated, it's their own visitors, which makes it somewhat quirky to draw conclusions from. However, if you look at theCounter.com it seems that their audience has the same habits in general on this point ... A twist that hasn't been mentioned yet is the growing number of crawlers, mail harvesters and so on; they definitely do not have javascript turned on, and how good are counters at detecting them ?-) My guess would be that this sort of machine-browser fills up a lot of those 5-6% !o] -- that said, if it's at all possible, make your app degrade gracefully (as a wise man said) A: Your question seems to suggest form-based input for an application.
If it's an intranet application then you'd be guided by the in-house security experts. If it's a public app, then as other posters have suggested, fail gracefully. A: I will argue that it is more than reasonable to expect them to have javascript, so long as you provide suitable means to replace javascript should it not be enabled. One of the reasons that I like the Yahoo UI Library is that it degrades gracefully. A: I always try to code my sites as static ones first, THEN I add JS/Ajax functionality. This way I can be kinda sure it will work on non-JS browsers :) But javascript is like flash: all users have it, but developers still have to worry about WHAT IF.... ? :D A: This is totally an "it depends" question, as many people have pointed out. This is why metrics are valuable on sites, to help show if you can really run with the analogy that "major sites say that the majority of people have JS on" - you could have a site where it's 99%. I won't dig in to what's been said above, as it's been answered very well :) A: Not that everyone else hasn't chimed in, but I disagree with the "look at your audience" position to some extent. It really should be "look at your app": if you are just displaying some information, and your JS is for bell/whistle purposes, then certainly look at nice degradation, if you want to. However, if you're building something like Google Docs, it's really asinine that someone would think they could use your site without JS, so perhaps let them know that via a nice sarcastic message inside <noscript> tags. From a purely philosophical point of view, if users want to access your site, they will flip the JS switch, or upgrade to a decent browser, etc. And you should force them to do this, because evolution is important for the survival of the species. A: According to this site, 95% of browsers use JavaScript. That said, there are a LOT of bots that don't use JavaScript: scrapers, search bots, etc. I'd say closer to 100% of actual human users use JavaScript.
But your guess is just as good as mine.
{ "language": "en", "url": "https://stackoverflow.com/questions/155615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Circle coordinates to array in Javascript What's the best way to add the coordinates of a circle to an array in JavaScript? So far I've only been able to do a half circle, but I need a formula that returns the whole circle to two different arrays: xValues and yValues. (I'm trying to get the coordinates so I can animate an object along a path.) Here's what I have so far:

circle: function(radius, steps, centerX, centerY){
    var xValues = [centerX];
    var yValues = [centerY];
    for (var i = 1; i < steps; i++) {
        xValues[i] = (centerX + radius * Math.cos(Math.PI * i / steps - Math.PI/2));
        yValues[i] = (centerY + radius * Math.sin(Math.PI * i / steps - Math.PI/2));
    }
}

A: Bresenham's algorithm is way faster. You hear of it in relation to drawing straight lines, but there's a form of the algorithm for circles. Whether you use that or continue with the trig calculations (which are blazingly fast these days) - you only need to draw 1/8th of the circle. By swapping x,y you can get another 1/8th, and then the negative of x, of y, and of both - swapped and unswapped - gives you points for all the rest of the circle. A speedup of 8x! A: Change:

Math.PI * i / steps

to:

2 * Math.PI * i / steps

A full circle is 2pi radians, and you are only going to pi radians. A: Your loop should be set up like this instead:

for (var i = 0; i < steps; i++) {
    xValues[i] = (centerX + radius * Math.cos(2 * Math.PI * i / steps));
    yValues[i] = (centerY + radius * Math.sin(2 * Math.PI * i / steps));
}

* Start your loop at 0.
* Step through the entire 2 * PI range, not just PI.
* You shouldn't have the var xValues = [centerX]; var yValues = [centerY]; -- the center of the circle is not a part of it.

A: You need to use a partial function to input the radians into cos and sin; therefore take the values you're getting for a quarter or half of the circle, and reflect them over the center points' axis to get your full circle.
That said, JavaScript's sin and cos aren't quite as picky, so you must have halved your radian or something; I'd write it as:

function circle(radius, steps, centerX, centerY){
    var xValues = [centerX];
    var yValues = [centerY];
    var table = "<tr><th>Step</th><th>X</th><th>Y</th></tr>";
    var ctx = document.getElementById("canvas").getContext("2d");
    ctx.fillStyle = "red";
    ctx.beginPath();
    for (var i = 0; i <= steps; i++) {
        var radian = (2*Math.PI) * (i/steps);
        xValues[i+1] = centerX + radius * Math.cos(radian);
        yValues[i+1] = centerY + radius * Math.sin(radian);
        if (0 == i) { ctx.moveTo(xValues[i+1], yValues[i+1]); } else { ctx.lineTo(xValues[i+1], yValues[i+1]); }
        table += "<tr><td>" + i + "</td><td>" + xValues[i+1] + "</td><td>" + yValues[i+1] + "</td></tr>";
    }
    ctx.fill();
    return table;
}
document.body.innerHTML = "<canvas id=\"canvas\" width=\"300\" height=\"300\"></canvas><table id=\"table\"/>";
document.getElementById("table").innerHTML += circle(150, 15, 150, 150);

I assumed that for whatever reason you wanted xValues[0] and yValues[0] to be centerX and centerY. I can't figure out why you'd want that, as they're values passed into the function already. A: If you already have half a circle, just mirror the points to get the other half; make sure you do this in the right order. More specifically, for the other half you simply replace the "+ sin(...)" with a "- sin(...)". A: I was able to solve it on my own by multiplying the number of steps by 2:

circle: function(radius, steps, centerX, centerY){
    var xValues = [centerX];
    var yValues = [centerY];
    for (var i = 1; i < steps; i++) {
        xValues[i] = (centerX + radius * Math.cos(Math.PI * i / steps*2 - Math.PI/2));
        yValues[i] = (centerY + radius * Math.sin(Math.PI * i / steps*2 - Math.PI/2));
    }
}
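For reference, a minimal runnable version of the full-circle fix discussed above; the function name circlePoints and the returned object shape are illustrative, not from the thread. Sweeping 2*PI radians (rather than PI, as in the original code) is what closes the circle.

```javascript
// Generate `steps` evenly spaced points on a full circle.
// Subtracting PI/2 starts the sweep at the top, matching the original code.
function circlePoints(radius, steps, centerX, centerY) {
  var xValues = [];
  var yValues = [];
  for (var i = 0; i < steps; i++) {
    var angle = (2 * Math.PI * i) / steps - Math.PI / 2;
    xValues.push(centerX + radius * Math.cos(angle));
    yValues.push(centerY + radius * Math.sin(angle));
  }
  return { xValues: xValues, yValues: yValues };
}
```

With steps = 4 around the origin this yields the top, right, bottom and left points of the circle (in canvas coordinates, where y grows downward).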
{ "language": "en", "url": "https://stackoverflow.com/questions/155649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Database schema to track random settings in website for marketing I have a Flex-based consumer website where I would like to change various look-and-feel settings based on random and other criteria, and then track these through to what results in the most sales. For instance I might completely switch out the homepage, or show different things depending upon where people come from. I might show or hide certain features, or change certain text. The things I might change are as yet undefined and will likely become quite complicated. I want to design the most flexible database schema, but it must be efficient and easy to search. Currently I have a 'SiteVisit' table which contains information about each distinct visitor. I want to find the right balance between a single table with columns for each setting, and a table containing just key value pairs. Any suggestions? A: Ok, this is very tricky. The reason for me to say that is because you are asking for two things:

* a relaxed repository schema, so that you can store various data and change what gets saved dynamically later
* a fixed database schema, so that you can query data effectively

The solution will be a compromise. You have to understand that. Let's assume that you have a User table:

+----------------
| User
+----------------
| UserId (PK)
| ...
+----------------

1) XML blob approach You can save your data as a blob (big XML) in a table (actually a property bag), but querying (filtering) will be a nightmare.

+----------------
| CustomProperty
+----------------
| PropId (PK)
| UserId (FK)
| Data of type memo/binary/...
+----------------

The advantage is that you (business logic) own the schema. This is at the same time the disadvantage of this solution. Another HUGE disadvantage is that querying/filtering will be EXTREMELY difficult and SLOW! 2) Table per property Another solution is to make a special table per property (homepage, etc.). This table would contain a value per user (an FK-based relationship to the User table).

+----------------
| HomePage
+----------------
| HomePageId (PK)
| UserId (FK)
| Value of type string
+----------------

The advantage of this approach is that you can very quickly see all the values for that property. The disadvantage is that you will have too many tables (one per custom property) and that you will join tables often during query operations. 3) CustomProperty table In this solution you have one table holding all custom properties.

+----------------
| CustomPropertyEnum
+----------------
| PropertyId (PK)
| Name of type string
+----------------

+----------------
| CustomProperty
+----------------
| PropId (PK)
| PropertyId (FK)
| UserId (FK)
| Value of type string
+----------------

In this solution you store all custom properties in one table. You also have a special enum table that allows you to query data more efficiently. The disadvantage is that you will join tables often during query operations. Choose for yourself. I would decide between 2 and 3 depending on your workload (most probably 3 because it is easier). A: This is a classic case for using a NoSQL database. It could be used to store various key-value pairs with ease and without any predefined schema.
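Before committing to a schema, option 3 above can be prototyped in memory; the sketch below is illustrative only (the names propertyEnum, customProperties and propertyForUser are not from the thread), but it shows the shape of the join the CustomProperty design implies.

```javascript
// In-memory sketch of option 3: one row per (user, property) pair,
// plus an enum table mapping property names to ids.
var propertyEnum = [
  { propertyId: 1, name: "homepage" },
  { propertyId: 2, name: "theme" }
];
var customProperties = [
  { propId: 1, propertyId: 1, userId: 10, value: "variantA" },
  { propId: 2, propertyId: 2, userId: 10, value: "dark" },
  { propId: 3, propertyId: 1, userId: 11, value: "variantB" }
];

// Roughly: SELECT value FROM CustomProperty
//          JOIN CustomPropertyEnum USING (PropertyId)
//          WHERE UserId = ? AND Name = ?
function propertyForUser(userId, propertyName) {
  var prop = propertyEnum.filter(function (p) { return p.name === propertyName; })[0];
  if (!prop) return null;
  var row = customProperties.filter(function (r) {
    return r.userId === userId && r.propertyId === prop.propertyId;
  })[0];
  return row ? row.value : null;
}
```

The same lookup works no matter which settings you later invent, which is the flexibility the question asks for; the cost is the extra join on every read.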
{ "language": "en", "url": "https://stackoverflow.com/questions/155669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Invert 4x4 matrix - Numerically most stable solution needed I want to invert a 4x4 matrix. My numbers are stored in fixed-point format (1.15.16 to be exact). With floating-point arithmetic I usually just build the adjoint matrix and divide by the determinant (e.g. brute force the solution). That worked for me so far, but when dealing with fixed-point numbers I get an unacceptable precision loss due to all of the multiplications used. Note: In fixed-point arithmetic I always throw away some of the least significant bits of immediate results. So - what's the most numerically stable way to invert a matrix? I don't mind much about the performance, but simply going to floating-point would be too slow on my target architecture. A: I think the answer to this depends on the exact form of the matrix. A standard decomposition method (LU, QR, Cholesky etc.) with pivoting (an essential) is fairly good on fixed point, especially for a small 4x4 matrix. See the book 'Numerical Recipes' by Press et al. for a description of these methods. This paper gives some useful algorithms, but is unfortunately behind a paywall. They recommend a (pivoted) Cholesky decomposition with some additional features too complicated to list here. A: I'd like to second the question Jason S raised: are you certain that you need to invert your matrix? This is almost never necessary. Not only that, it is often a bad idea. If you need to solve Ax = b, it is more numerically stable to solve the system directly than to multiply b by A inverse. Even if you have to solve Ax = b over and over for many values of b, it's still not a good idea to invert A. You can factor A (say LU factorization or Cholesky factorization) and save the factors so you're not redoing that work every time, but you'd still solve the system each time using the factorization. A: Meta-answer: Is it really a general 4x4 matrix?
If your matrix has a special form, then there are direct formulas for inverting that would be fast and keep your operation count down. For example, if it's a standard homogenous coordinate transform from graphics, like:

[ux vx wx tx]
[uy vy wy ty]
[uz vz wz tz]
[ 0  0  0  1]

(assuming a composition of rotation, scale, translation matrices) then there's an easily-derivable direct formula, which is

[ux uy uz -dot(u,t)]
[vx vy vz -dot(v,t)]
[wx wy wz -dot(w,t)]
[ 0  0  0     1    ]

(ASCII matrices stolen from the linked page.) You probably can't beat that for loss of precision in fixed point. If your matrix comes from some domain where you know it has more structure, then there's likely to be an easy answer. A: You might consider doubling to 1.31 before doing your normal algorithm. It'll double the number of multiplications, but you're doing a matrix invert and anything you do is going to be pretty tied to the multiplier in your processor. For anyone interested in finding the equations for a 4x4 invert, you can use a symbolic math package to resolve them for you. The TI-89 will do it even, although it'll take several minutes. If you give us an idea of what the matrix invert does for you, and how it fits in with the rest of your processing, we might be able to suggest alternatives. -Adam A: Let me ask a different question: do you definitely need to invert the matrix (call it M), or do you need to use the matrix inverse to solve other equations? (e.g. Mx = b for known M, b) Often there are other ways to do this w/o explicitly needing to calculate the inverse. Or if the matrix M is a function of time & it changes slowly, then you could calculate the full inverse once, & there are iterative ways to update it. A: If the matrix represents an affine transformation (many times this is the case with 4x4 matrices so long as you don't introduce a scaling component) the inverse is simply the transpose of the upper 3x3 rotation part with the last column negated.
Obviously if you require a generalized solution then looking into Gaussian elimination is probably the easiest.
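As a runnable sketch of the direct formula quoted in the graphics answer above, valid for the no-scaling case (orthonormal rotation part plus translation); the function names are illustrative, and the matrix is given by its columns u, v, w (rotation) and t (translation).

```javascript
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Invert a rigid transform: the rows of the inverse are the original
// columns, and the translation column becomes the negated dot products,
// exactly as in the ASCII matrices above.
function invertRigid(u, v, w, t) {
  return [
    [u[0], u[1], u[2], -dot(u, t)],
    [v[0], v[1], v[2], -dot(v, t)],
    [w[0], w[1], w[2], -dot(w, t)],
    [0, 0, 0, 1]
  ];
}
```

Only three dot products and some sign flips are needed, so there is far less fixed-point rounding than in a general adjoint-and-determinant inverse.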
{ "language": "en", "url": "https://stackoverflow.com/questions/155670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Displaying whitespace in HTML when pulling from MySQL TEXT column I have saved input from a textarea element to a TEXT column in MySQL. I'm using PHP to pull that data out of the database and want to display it in a p element while still showing the whitespace that the user entered (e.g. multiple spaces and newlines). I've tried a pre tag but it doesn't obey the width set in the containing div element. Other than creating a PHP function to convert spaces to &nbsp; and new lines to br tags, what are my options? I'd prefer a clean HTML/CSS solution, but any input is welcome! Thanks! A: You could just use PHP's nl2br function. A: You've got two competing requirements. You either want the content to fit within a certain area (ie: width: 300px), or you want to preserve the whitespace and newlines as the user entered them. You can't do both since one - by definition - interferes with the other. Since HTML isn't whitespace aware, your only options are changing multiple spaces to "&nbsp;" and changing newlines to <br />, using a <pre> tag, or specifying the css style "white-space: pre". A: You can cause the text inside the pre to wrap by using the following CSS:

pre {
    white-space: pre-wrap;       /* css-3 */
    white-space: -moz-pre-wrap;  /* Mozilla, since 1999 */
    white-space: -pre-wrap;      /* Opera 4-6 */
    white-space: -o-pre-wrap;    /* Opera 7 */
    word-wrap: break-word;       /* Internet Explorer 5.5+ */
}

Taken from this site. It's currently defined in CSS3 (which is not yet a finished standard) but most browsers seem to support it as per the comments. A: Regarding the problem with the div, you can always make it scroll, or adjust the font down (this is even possible dynamically based on length of longest line in your server-side code).
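If the CSS route is not an option, the "convert in code" fallback mentioned in the answers can be sketched as below — shown in JavaScript rather than PHP for brevity, mirroring what an escape plus nl2br plus space-replacement pipeline would do. The function name is illustrative.

```javascript
// Escape user text for HTML, then turn newlines into <br /> and runs of
// spaces into &nbsp; so the browser doesn't collapse them.
// Escaping & < > first is important, so we don't mangle the entities we add.
function preserveWhitespace(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/\n/g, "<br />")
    .replace(/ {2}/g, " &nbsp;");
}
```

In PHP the equivalent would be htmlspecialchars() followed by nl2br() and a str_replace() for the double spaces.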
{ "language": "en", "url": "https://stackoverflow.com/questions/155681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: I need a LINQ expression to find an XElement where the element name and attributes match an input node I need to replace the contents of a node in an XElement hierarchy when the element name and all the attribute names and values match an input element. (If there is no match, the new element can be added.) For example, if my data looks like this:

<root>
  <thing1 a1="a" a2="b">one</thing1>
  <thing2 a1="a" a2="a">two</thing2>
  <thing2 a1="a" a3="b">three</thing2>
  <thing2 a1="a">four</thing2>
  <thing2 a1="a" a2="b">five</thing2>
</root>

I want to find the last element when I call a method with this input: <thing2 a1="a" a2="b">new value</thing2> The method should have no hard-coded element or attribute names - it simply matches the input to the data. A: This will match any given element with exact tag name and attribute name/value pairs:

public static void ReplaceOrAdd(this XElement source, XElement node)
{
    var q = from x in source.Elements()
            where x.Name == node.Name
               && x.Attributes().All(a => node.Attributes().Any(b => a.Name == b.Name && a.Value == b.Value))
            select x;
    var n = q.LastOrDefault();
    if (n == null)
        source.Add(node);
    else
        n.ReplaceWith(node);
}

var root = XElement.Parse(data);
var newElem = XElement.Parse("<thing2 a1=\"a\" a2=\"b\">new value</thing2>");
root.ReplaceOrAdd(newElem);

A: You can do an XPathSelectElement with the path (don't quote me, been to the bar; will clean up in the morn) /root/thing2[@a1='a' and @a2='b'] and then take .LastOrDefault() (XPathSelectElement is an extension method in System.Xml.XPath). That will get you the node you wish to change. I'm not sure how you want to change it, however. The result you get is the actual XElement, so changing it will change the tree element.
{ "language": "en", "url": "https://stackoverflow.com/questions/155685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ChangeServiceConfig problem setting logon credentials I've got this weird problem - I'm calling ChangeServiceConfig on a newly installed service (I CreateService it myself) to supply the logon credentials, but while the function succeeds (returns TRUE), if I try to start the service, it fails with a 1069 (logon failed). If I go into the service manager and modify credentials by hand (I can see the user name is correct, but of course can't see the password), then it's all ok and it starts ok. The call itself is trivial: ChangeServiceConfig(hService, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE, NULL, NULL, NULL, NULL, strUser, strPassword, NULL); Any ideas on where I should be looking and what could be wrong? Thanks in advance. A: The user account must explicitly have rights to log on as a service (SeServiceLogonRight). Many users, including computer admins, may not have this flag set, and you may need to set it manually. The windows services control panel actually does this silently behind the scenes when you use it to configure services. I also have some vague foggy memories about needing to fully qualify the username. It needs to be in DOMAIN\Username format - If it's a local account you need to specify .\Username or find out the machine name and use MACHINENAME\Username
{ "language": "en", "url": "https://stackoverflow.com/questions/155695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Ambiguous JavaScript error [nsSessionStore.js] I'm seeing an ambiguous error in Firebug. I don't think it's particularly related to the script I'm writing, however I don't have enough details to be able to determine that from this one error alone. Has anyone seen something similar and have a suggestion? error: [Exception... "Component is not available" nsresult: "0x80040111 (NS_ERROR_NOT_AVAILABLE)" location: "JS frame :: file:///Applications/Firefox.app/Contents/MacOS/components/nsSessionStore.js :: sss_saveState :: line 1896" data: no] [Break on this error] this._writeFile(this._sessionFile, oState.toSource()); A: I have run across the same error myself, and it is an internal FireFox issue, not an issue with your script at all. It is related to the saving of the FireFox state: According to: http://blogs.unbolt.net/index.php/brinley/2008/04/26/0x80040111_nssessionstore, it is caused by a corrupted session state. In short, I don't think there is anything you can do to avoid it (it is a bug in FireFox or perhaps a plugin). However, that link claims you can just clear your session (via closing FireFox) to get rid of the problem when it crops up. FYI, you may want to read the comments, as it seems closing FireFox won't necessarily eradicate the problem... but if all you care about is whether your script is at fault, then don't worry :-) A: Pasting this here so I can find it later :/ Modify nsSessionStore.js from: this._writeFile(this._sessionFile, oState.toSource()); to: this._writeFile(this._sessionFile, "(" + this._toJSONString(oState) + ")"); BTW, the error is caused by extensions creating browser elements without disabling the history (I don't know what that means either, see bug). The bug should be fixed in 3.1, see bug.
{ "language": "en", "url": "https://stackoverflow.com/questions/155697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What are good tools/frameworks for i18n of a php codebase? I have been looking at a few options for enabling localization and internationalization of a dynamic php application. There appears to be a variety of tools available such as gettext and Yahoo's R3, and I am interested in hearing from both developers and translators about which tools are good to use and what functionality is important in easing the task of implementation and translation. A: PHP's gettext implementation works very smoothly, and po files with Poedit and gettext are about as good a way as you can get to deal with localization, bearing in mind that no solution of this kind can completely handle the complexities of the various languages. For example, the gettext method is very good on plural forms, but nothing I've seen can handle things like conjugation. For more info see my post here: How do you build a multi-language web site? A: We've been tinkering with Zend_Translate, since we use the Zend Framework anyway. It's very well documented and so far extremely solid. In the past, I've pretty much used my own home-grown solution, which involves language files with constants or variables which hold all text parts and are just echoed in the view/template later on. As for gettext, in the past I've heard references about PHP's gettext implementation being faulty, but I can't really back that up nor do I have any references right now. A: There are a number of useful extensions in pecl: http://pecl.php.net/packages.php?catpid=28&catname=Internationalization In particular, you may want to check out php-intl, which provides most of the key i18n functions from International Components for Unicode (ICU). A: The database-driven solution to show the messages is not always the best one. I worked on a site with more than 15 languages and translations were an issue, so our design was:

* a translation app in php-mysql (translation access, etc.)
* translations are then written out as php arrays
* these arrays are also cached in APC to speed up the site.

So to localize different languages you only need to do an include like:

<?php
include('lang/en.php');
include('lang/en_us.php'); // this file overrides a few keys from the last one.
?>

A: Xataface can be used to quite easily internationalize an arbitrary PHP/MySQL application. It supports translation of both your static text and your database data. All you have to do is add a line or two of code to a couple of places in your application and it's good to go. http://xataface.com/documentation/tutorial/internationalization-with-dataface-0.6
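The base-plus-regional-override include pattern described in the database-driven answer is language-agnostic; a small sketch of the same idea (JavaScript for brevity, with illustrative table names):

```javascript
// A base language table plus a regional file that overrides only a few
// keys, like including lang/en.php and then lang/en_us.php.
var en = { greeting: "Hello", colour: "Colour", bye: "Goodbye" };
var en_us = { colour: "Color" }; // overrides just one key

function mergeLocale(base, overrides) {
  var merged = {};
  var key;
  for (key in base) { merged[key] = base[key]; }
  for (key in overrides) { merged[key] = overrides[key]; }
  return merged;
}
```

The regional file stays tiny because it carries only the keys that differ, which is exactly why the two-include approach scales well across many locales.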
{ "language": "en", "url": "https://stackoverflow.com/questions/155706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I visually "break out" of a Container in Flex? Here's my problem - I have some code like this: <mx:Canvas width="300" height="300"> <mx:Button x="800" /> </mx:Canvas> So the problem is that the Button inside the canvas has an x property way in excess of the Canvas's width - since it's a child of the Canvas, the Canvas masks it and creates some scrollbars for me to scroll over to the button. What I'd like is to display the button - 800 pixels to the left of the Canvas without the scrollbars while still leaving the button as a child of the Canvas. How do I do that? A: I figured it out - apparently the Container has a property called clipContent - here's the description from Adobe: Whether to apply a clip mask if the positions and/or sizes of this container's children extend outside the borders of this container. If false, the children of this container remain visible when they are moved or sized outside the borders of this container. If true, the children of this container are clipped. If clipContent is false, then scrolling is disabled for this container and scrollbars will not appear. If clipContent is true, then scrollbars will usually appear when the container's children extend outside the border of the container. For additional control over the appearance of scrollbars, see horizontalScrollPolicy and verticalScrollPolicy. The default value is true. So basically - to show the button outside of the bounds of the container I need to do the following: <mx:Canvas width="300" height="300" clipContent="false" > <mx:Button x="800" /> </mx:Canvas> That was easier than I thought it was going to be. :) Here's the official doc... A: You should be able to use the includeInLayout property also, which would allow you to apply it to each child component independently.
{ "language": "en", "url": "https://stackoverflow.com/questions/155712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Best practices for internationalizing web applications? Internationalizing web apps always seems to be a chore. No matter how much you plan ahead for pluggable languages, there's always issues with encoding, funky phrasing that doesn't fit your templates, and other problems. I think it would be useful to get the SO community's input for a set of things that programmers should look out for when deciding to internationalize their web apps. A: Internationalization is hard, here's a few things I've learned from working with 2 websites that were in over 20 different languages:

* Use UTF-8 everywhere. No exceptions. HTML, server-side language (watch out for PHP especially), database, etc.
* No text in images unless you want a ton of work. Use CSS to put text over images if necessary.
* Separate configuration from localization. That way localizers can translate the text and you can deal with different configurations per locale (features, layout, etc). You don't want localizers to have the ability to mess with your app.
* Make sure your layouts can deal with text that is 2-3 times longer than English. And also 50% less than English (Japanese and Chinese are often shorter).
* Some languages need larger font sizes (Japanese, Chinese).
* Colors are locale-specific also. Red and green don't mean the same thing everywhere!
* Add a classname that is the locale name to the body tag of your documents. That way you can specify a specific locale's layout in your CSS file easily.
* Watch out for variable substitution. Don't split your strings. Leave them whole like this: "You have X new messages" and replace the 'X' with the #.
* Different languages have different pluralization. 0, 1, 2-4, 5-7, 7-infinity. Hard to deal with.
* Context is difficult. Sometimes localizers need to know where/how a string is used to make sure it's translated correctly.
Resources:

* http://interglacial.com/~sburke/tpj/as_html/tpj13.html
* http://www.ryandoherty.net/2008/05/26/quick-tips-for-localizing-web-apps/
* http://ed.agadak.net/2007/12/one-potato-two-potato-three-potato-four

A: As an English person living abroad I have become frustrated by many web applications' approach to internationalization and have blogged about my frustrations. My tips would be:

* think about how you show an international version of a page
* using geolocation might work for many users, but as my examples show, for many it will not
* why not use the Accept-Language header for determining which language to serve
* if a user accesses a page via a search engine then don't redirect them somewhere else, e.g. to a homepage in a different language
* it's extremely annoying to change language and have a different page reload - either serve the same page or warn the user that the current content is not available in a different language before redirecting them
* English is a very common language, so perhaps default to that
* but make sure the change language option is clear on the GUI (I like what Google Maps are doing, as shown in the post)

All I see on the Web is companies getting internationalization wrong. Getting it right from a user's perspective is tricky indeed. A: In my company all our strings are stored in *.properties files. Our build tools build a "test language" copy of the properties files, which replaces a string like this: Click here with something like this: [~~ Çļïčк н∑ѓё ~~ タウ ~~] Now, when we set the language to "test" in our config files, these properties files are used. (And of course we don't ship the test language files). This allows us to:

* Make sure that Unicode characters are displayed correctly, including Japanese/Chinese/Korean.
* Make sure that the layout scales appropriately for languages with longer words (German in particular has longer words on average than English).
* Spot any hard-coded strings (as they will be in plain English).
As for the actual translation, this is done by professional translators, not developers. A: I have a couple of apps that are "bilingual". I used resource files in ASP.NET 1.1. There is also something called the String Resource Tool. Basically you put all your strings in a .RES file for both languages and then determine what file to read from based on Culture, or whether someone clicked a link for the language. The biggest gotcha is making sure the translations are done correctly.
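The pluralization point raised in the tips above ("0, 1, 2-4, 5-7, 7-infinity") is exactly what the standard Intl.PluralRules API encodes per locale; a tiny sketch, assuming a JavaScript environment with Intl available:

```javascript
// Map a count to the locale's plural category ("one", "few", "many",
// "other", ...). English only distinguishes one/other, while languages
// such as Polish also have a "few" category for the 2-4 range.
function pluralCategory(locale, n) {
  return new Intl.PluralRules(locale).select(n);
}
```

Keeping one translated message per category ("You have X new messages" stays whole, per the variable-substitution tip) then handles each language's rules without string splitting.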
{ "language": "en", "url": "https://stackoverflow.com/questions/155719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Fast way to search for particular string (or byte array) in another process memory in C#? Please post a working source code example (or link) of how to search string in another process memory and getting offset of match if found. The similar way its done in game cheating utils which search for values in game memory using ReadProcessMemory. A: You may want to look into Memory Mapped Files as a way to share memory between separate processes. You'll need to use Win32 P/Invokes to implement this in C#, see this Code Project link for an example that you may be able to adapt. A: String searching algorithm on Wikipedia.
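Independent of the ReadProcessMemory plumbing, the scan itself can be sketched as a naive byte-pattern search over one chunk of read memory (function name illustrative; a smarter string-searching algorithm such as Boyer-Moore would replace the inner loop for speed):

```javascript
// Return the offset of the first occurrence of `needle` (an array of
// byte values) inside `haystack`, or -1 if it is not present.
function findPattern(haystack, needle) {
  if (needle.length === 0) return 0;
  for (var i = 0; i + needle.length <= haystack.length; i++) {
    var j = 0;
    while (j < needle.length && haystack[i + j] === needle[j]) j++;
    if (j === needle.length) return i;
  }
  return -1;
}
```

In the cross-process case you would run this over each buffer returned by ReadProcessMemory and add the chunk's base address to the returned offset, taking care to overlap chunks by needle.length - 1 bytes so matches on a boundary are not missed.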
{ "language": "en", "url": "https://stackoverflow.com/questions/155721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: qt/wxwidgets third party components? I'm used to working in a Delphi and C# environment which seem to have a rich set of third party components available. I'm currently wanting to do cross-platform programming in C++ using either qt or wxwidgets. Is there a large market for third party components? I was looking at sourceforge and that doesn't seem to show much that is useful (how the hell do you find out what components or features are in a project without downloading the source?). I'm thinking carousel/coverflow components, rich datagrids (like the sort DevExpress provide). Or is this "write your own" territory? A: There are a number of good quality third party Qt libraries, though I don't know of a centralized resource for finding them. A few places to start looking: * *http://www.ics.com/products/qt/addons ICS provides the QicsTable, a high performance model-view-delegate table library, and resells various libraries by KDAB. (These are all available as a free download.) *http://www.qtcentre.org/contest-first-edition/finalists QtCentre has an annual programming contest which awards interesting Qt-based tools and libraries. This year's contest is still being judged, but the finalists from last year can be seen at the above link. Check out the Custom Widget and Helper Library categories. A: There is a third-party component for Qt, an advanced data grid: Qtitan DataGrid. It covers almost all the necessary capabilities.
* *Ultra-fast processing of large data sets *Use of QStyle for rendering objects ensures that the grid blends into the UI design of any application *Two modes of vertical scrolling *Customizable colors of rows and columns *Two integrated table views *Column banding and grouping *Automatic width and height adjustment *Fixed columns *Flexible sorting *Column summaries *Integrated high-performance caching mechanism *Advanced paint engine for faster rendering of UI elements *Cross-platform support *API for external editors Screenshots of this grid: http://www.devmachines.com/qtitan_screenshots.php A: For cross-platform GUI development, Qt is the tool you should be looking for. I have used both; here is what I feel about Qt. Building a rich GUI is a piece of cake if you use Qt. It has loads of GUI capabilities, starting with its Graphics View, OpenGL support, and stylesheets that support CSS, plus a mature painting system, rich-text formatting, integration with WebKit, and I am sure I am missing a lot more here... Qt has its own build system, qmake, which creates platform-dependent Makefiles, so no Makefile hassles. Moreover, you get a single pro file which is much easier to manage. For wxWidgets, you will need to create different Makefiles for the various compilers you intend to use. Other advantages of using Qt over wxWidgets: the API is very easy to learn and intuitive, with superb documentation and tons and tons of examples. This helps you get productive pretty soon, getting your product to market early. BTW, Qt is a RAD tool. Moreover, there is a huge user base, and there are forums like QtCentre.org to help you with your questions. If you are planning to buy a commercial license, you get support directly from Qt Software (Trolltech). You would obviously be using Qt's model-view pattern, allowing you to separate your business logic from the presentation tier.
I would suggest that you write to "support at trolltech dot com" or "sales at trolltech dot com" to get more information. You can explain your requirements and they would be able to explain how Qt fits your needs. You could also download the open-source version and have a look at the demos. Coverflow: http://labs.trolltech.com/blogs/2007/11/02/pictureflow-on-windows-mobile/ , http://ariya.blogspot.com/2008/03/introducing-photoflow.html As I said, if it's a rich GUI you are planning to develop, use Qt. A: In addition to the ones by ICS and at QtCentre, the Qt-apps website has some open source widgets/components for Qt. A: For wxWidgets you have wxCode, which has quite a few things, although not all the existing third party components (including a few very useful ones) are available from there. A: Good quality components for Qt can be found here - http://www.devmachines.com/ At the moment there are Microsoft Ribbon Control for Qt, DataGrid for Qt, Charting for Qt. All components are commercial and should be used in Qt Commercial or Qt LGPL.
{ "language": "en", "url": "https://stackoverflow.com/questions/155724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a difference between (local), '.' and localhost? I've used all three of these when making local programmatic connections to databases. Is there any real difference between them? A: The final result is the same. The difference is: * *'localhost' resolves at the TCP/IP level and is equivalent to the IP address 127.0.0.1 *Depending on the application, "(local)" could be just an alias for 'localhost'. In SQL Server, '(local)' and '.' mean that the connection will be made using the named pipes (shared memory) protocol within the same machine (it doesn't need to go through the TCP/IP stack). That's the theory. In practice, I don't think there is a substantial difference in performance or features if you use either one of them. A: They are generally synonyms. However, it depends on the application you are configuring. As long as the app understands what you mean, it shouldn't result in a performance loss. At least, not one you have to root out prematurely, if you get my drift. A: As far as I know, the dot "." and "(local)" are Windows application terms, not "standard" terms. localhost resolves to 127.0.0.1 at the TCP/IP level, so if you want to make sure you are "compatible" across platforms you should use either localhost or 127.0.0.1
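To make the equivalence concrete, here are three illustrative SQL Server connection strings (the database name is hypothetical) that typically reach the same local default instance; which transport is actually used depends on the client library's protocol selection, as described above:

```ini
Server=(local);Database=ExampleDb;Trusted_Connection=True;
Server=.;Database=ExampleDb;Trusted_Connection=True;
Server=localhost;Database=ExampleDb;Trusted_Connection=True;
```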
{ "language": "en", "url": "https://stackoverflow.com/questions/155733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Detecting Unsaved Changes I have a requirement to implement an "Unsaved Changes" prompt in an ASP .Net application. If a user modifies controls on a web form, and attempts to navigate away before saving, a prompt should appear warning them that they have unsaved changes, and give them the option to cancel and stay on the current page. The prompt should not display if the user hasn't touched any of the controls. Ideally I'd like to implement this in JavaScript, but before I go down the path of rolling my own code, are there any existing frameworks or recommended design patterns for achieving this? Ideally I'd like something that can easily be reused across multiple pages with minimal changes. A: The following uses the browser's onbeforeunload function and jQuery to capture any onchange event. It also looks for any submit or reset buttons to reset the flag indicating changes have occurred.
dataChanged = 0; // global variable flags unsaved changes
function bindForChange() {
  // 'input' already covers checkboxes and radio buttons
  $('input,textarea,select').bind('change', function(event) { dataChanged = 1; });
  $(':reset,:submit').bind('click', function(event) { dataChanged = 0; });
}
function askConfirm() {
  if (dataChanged) {
    return "You have some unsaved changes. Press OK to continue without saving.";
  }
}
window.onbeforeunload = askConfirm;
window.onload = bindForChange;
A: Thanks for the replies everyone. I ended up implementing a solution using jQuery and the Protect-Data plug-in. This allows me to automatically apply monitoring to all controls on a page. There are a few caveats however, especially when dealing with an ASP .Net application: * *When a user chooses the cancel option, the __doPostBack function will throw a JavaScript error. I had to manually put a try-catch around the .submit call within __doPostBack to suppress it. *On some pages, a user could perform an action that performs a postback to the same page, but isn't a save.
This results in any JavaScript logic resetting, so it thinks nothing has changed after the postback when something may have. I had to implement a hidden textbox that gets posted back with the page, and is used to hold a simple boolean value indicating whether the data is dirty. This gets persisted across postbacks. *You may want some postbacks on the page to not trigger the dialog, such as a Save button. In this case, you can use jQuery to add an OnClick function which sets window.onbeforeunload to null. Hopefully this is helpful for anyone else who has to implement something similar. A: General Solution Supporting multiple forms in a given page (Just copy and paste in your project) $(document).ready(function() { $('form :input').change(function() { $(this).closest('form').addClass('form-dirty'); }); $(window).bind('beforeunload', function() { if($('form:not(.ignore-changes).form-dirty').length > 0) { return 'You have unsaved changes, are you sure you want to discard them?'; } }); $('form').bind('submit',function() { $(this).closest('form').removeClass('form-dirty'); return true; }); }); Note: This solution is combined from others' solutions to create a general integrated solution. Features: * *Just copy and paste into your app. *Supports multiple forms. *You can style dirty forms or attach actions to them, since they have the class "form-dirty". *You can exclude some forms by adding the class 'ignore-changes'. A: The following solution works for prototype (tested in FF, IE 6 and Safari). It uses a generic form observer (which fires form:changed when any fields of the form have been modified), which you can use for other stuff as well. /* use this function to announce changes from your own scripts/event handlers.
* Example: onClick="makeDirty($(this).up('form'));" */ function makeDirty(form) { form.fire("form:changed"); } function handleChange(form, event) { makeDirty(form); } /* generic form observer, ensure that form:changed is being fired whenever * a field is being changed in that particular form */ function setupFormChangeObserver(form) { var handler = handleChange.curry(form); form.getElements().each(function (element) { element.observe("change", handler); }); } /* installs a form protector to a form marked with class 'protectForm' */ function setupProtectForm() { var form = $$("form.protectForm").first(); /* abort if no form */ if (!form) return; setupFormChangeObserver(form); var dirty = false; form.observe("form:changed", function(event) { dirty = true; }); /* submitting the form makes the form clean again */ form.observe("submit", function(event) { dirty = false; }); /* unfortunately a proper event handler doesn't appear to work with IE and Safari */ window.onbeforeunload = function(event) { if (dirty) { return "There are unsaved changes, they will be lost if you leave now."; } }; } document.observe("dom:loaded", setupProtectForm); A: Here's a javascript / jquery solution that is simple. It accounts for "undos" by the user, it is encapsulated within a function for ease of application, and it doesn't misfire on submit. Just call the function and pass the ID of your form. This function serializes the form once when the page is loaded, and again before the user leaves the page. If the two form states are different, the prompt is shown.
Try it out: http://jsfiddle.net/skibulk/Ydt7Y/
function formUnloadPrompt(formSelector) {
  var formA = $(formSelector).serialize(), formB, formSubmit = false;
  // Detect Form Submit
  $(formSelector).submit( function(){ formSubmit = true; });
  // Handle Form Unload
  window.onbeforeunload = function(){
    if (formSubmit) return;
    formB = $(formSelector).serialize();
    if (formA != formB) return "Your changes have not been saved.";
  };
}
$(function(){ formUnloadPrompt('form'); });
A: One piece of the puzzle:
/**
 * Determines if a form is dirty by comparing the current value of each element
 * with its default value.
 *
 * @param {Form} form the form to be checked.
 * @return {Boolean} <code>true</code> if the form is dirty, <code>false</code>
 * otherwise.
 */
function formIsDirty(form) {
  for (var i = 0; i < form.elements.length; i++) {
    var element = form.elements[i];
    var type = element.type;
    if (type == "checkbox" || type == "radio") {
      if (element.checked != element.defaultChecked) {
        return true;
      }
    } else if (type == "hidden" || type == "password" || type == "text" || type == "textarea") {
      if (element.value != element.defaultValue) {
        return true;
      }
    } else if (type == "select-one" || type == "select-multiple") {
      for (var j = 0; j < element.options.length; j++) {
        if (element.options[j].selected != element.options[j].defaultSelected) {
          return true;
        }
      }
    }
  }
  return false;
}
And another:
window.onbeforeunload = function(e) {
  e = e || window.event;
  if (formIsDirty(document.forms["someForm"])) {
    // For IE and Firefox
    if (e) {
      e.returnValue = "You have unsaved changes.";
    }
    // For Safari
    return "You have unsaved changes.";
  }
};
Wrap it all up, and what do you get?
var confirmExitIfModified = (function() {
  function formIsDirty(form) {
    // ...as above
  }
  return function(form, message) {
    window.onbeforeunload = function(e) {
      e = e || window.event;
      if (formIsDirty(document.forms[form])) {
        // For IE and Firefox
        if (e) {
          e.returnValue = message;
        }
        // For Safari
        return message;
      }
    };
  };
})();
confirmExitIfModified("someForm", "You have unsaved changes.");
You'll probably also want to change the registration of the beforeunload event handler to use LIBRARY_OF_CHOICE's event registration. A: I recently contributed to an open source jQuery plugin called dirtyForms. The plugin is designed to work with dynamically added HTML, supports multiple forms, can support virtually any dialog framework, falls back to the browser beforeunload dialog, has a pluggable helper framework to support getting dirty status from custom editors (a tinyMCE plugin is included), works within iFrames, and the dirty status can be set or reset at will. https://github.com/snikch/jquery.dirtyforms A: Detecting form changes using jQuery is very simple:
var formInitVal = $('#formId').serialize(); // detect form init value after form is displayed
// check for form changes
if ($('#formId').serialize() != formInitVal) {
  // show confirmation alert
}
A: I expanded on Slace's suggestion above, to include most editable elements and also excluding certain elements (with a CSS style called "srSearch" here) from causing the dirty flag to be set.
<script type="text/javascript">
var _isDirty = false;
$(document).ready(function () {
  // Set exclude CSS class on radio-button list elements
  $('table.srSearch input:radio').addClass("srSearch");
  $("input[type='text'],input[type='radio'],select,textarea").not(".srSearch").change(function () {
    _isDirty = true;
  });
});
$(window).bind('beforeunload', function () {
  if (_isDirty) {
    return 'You have unsaved changes.';
  }
});
A: var unsaved = false;
$(":input").change(function () { unsaved = true; });
function unloadPage() {
  if (unsaved) {
    alert("You have unsaved changes on this page. Do you want to leave this page and discard your changes or stay on this page?");
  }
}
window.onbeforeunload = unloadPage;
A: In the .aspx page, you need a Javascript function to tell whether or not the form info is "dirty"
<script language="javascript">
var isDirty = false;
function setDirty() { isDirty = true; }
function checkSave() {
  var sSave;
  if (isDirty == true) {
    sSave = window.confirm("You have some changes that have not been saved. Click OK to save now or CANCEL to continue without saving.");
    if (sSave == true) {
      document.getElementById('__EVENTTARGET').value = 'btnSubmit';
      document.getElementById('__EVENTARGUMENT').value = 'Click';
      window.document.formName.submit();
    } else {
      return true;
    }
  }
}
</script>
<body class="StandardBody" onunload="checkSave()">
and in the codebehind, add the triggers to the input fields as well as resets on the submission/cancel buttons....
btnSubmit.Attributes.Add("onclick", "isDirty = 0;");
btnCancel.Attributes.Add("onclick", "isDirty = 0;");
txtName.Attributes.Add("onchange", "setDirty();");
txtAddress.Attributes.Add("onchange", "setDirty();");
//etc..
A: Using jQuery:
var _isDirty = false;
$("input[type='text']").change(function(){
  _isDirty = true;
});
// replicate for other input types and selects
Combine with onunload/onbeforeunload methods as required.
From the comments, the following references all input fields, without duplicating code: $(':input').change(function () { Using $(":input") refers to all input, textarea, select, and button elements. A: This is exactly what the Fleegix.js plugin fleegix.form.diff (http://js.fleegix.org/plugins/form/diff) was created for. Serialize the initial state of the form on load using fleegix.form.toObject (http://js.fleegix.org/ref#fleegix.form.toObject) and save it in a variable, then compare with the current state using fleegix.form.diff on unload. Easy as pie. A: A lot of outdated answers, so here's something a little more modern. ES6:
let dirty = false
document.querySelectorAll('form').forEach(e => e.onchange = () => dirty = true)
A: One method, using arrays to hold the variables so changes can be tracked. Here's a very simple method to detect changes, but the rest isn't as elegant. Another method which is fairly simple and small, from Farfetched Blog:
<body onLoad="lookForChanges()" onBeforeUnload="return warnOfUnsavedChanges()">
<form>
<select name=a multiple>
<option value=1>1
<option value=2>2
<option value=3>3
</select>
<input name=b value=123>
<input type=submit>
</form>
<script>
var changed = 0;
function recordChange() { changed = 1; }
function recordChangeIfChangeKey(myevent) {
  if (myevent.which && !myevent.ctrlKey && !myevent.altKey) recordChange(myevent);
}
function ignoreChange() { changed = 0; }
function lookForChanges() {
  var origfunc;
  for (i = 0; i < document.forms.length; i++) {
    for (j = 0; j < document.forms[i].elements.length; j++) {
      var formField=document.forms[i].elements[j];
      var formFieldType=formField.type.toLowerCase();
      if (formFieldType == 'checkbox' || formFieldType == 'radio') {
        addHandler(formField, 'click', recordChange);
      } else if (formFieldType == 'text' || formFieldType == 'textarea') {
        if (formField.attachEvent) {
          addHandler(formField, 'keypress', recordChange);
        } else {
          addHandler(formField, 'keypress', recordChangeIfChangeKey);
        }
      } else if (formFieldType == 'select-multiple' || formFieldType == 'select-one') {
        addHandler(formField, 'change', recordChange);
      }
    }
    addHandler(document.forms[i], 'submit', ignoreChange);
  }
}
function warnOfUnsavedChanges() {
  if (changed) {
    if ("event" in window) //ie
      event.returnValue = 'You have unsaved changes on this page, which will be discarded if you leave now. Click "Cancel" in order to save them first.';
    else //netscape
      return false;
  }
}
function addHandler(target, eventName, handler) {
  if (target.attachEvent) {
    target.attachEvent('on'+eventName, handler);
  } else {
    target.addEventListener(eventName, handler, false);
  }
}
</script>
A: In IE, document.ready will not work properly; it will update the values of the inputs. So we need to bind a load event inside the document.ready function, which will handle the IE browser as well. Below is the code you should put inside the document.ready function.
$(document).ready(function () {
  $(window).bind("load", function () {
    $("input, select").change(function () {});
  });
});
A: I have found that this one works in Chrome with an exception... The messages being returned do not match those in the script:
dataChanged = 0; // global variable flags unsaved changes
function bindForChange() {
  $("input,checkbox,textarea,radio,select").bind("change", function (_event) { dataChanged = 1; });
  $(":reset,:submit").bind("click", function (_event) { dataChanged = 0; });
}
function askConfirm() {
  if (dataChanged) {
    var message = "You have some unsaved changes. Press OK to continue without saving.";
    return message;
  }
}
window.onbeforeunload = askConfirm;
window.onload = bindForChange;
The messages returned seem to be triggered by the specific type of action I'm performing. A reload displays the question "Reload site?" and a window close shows a "Leave site?" message.
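Most of the answers above boil down to the same pattern: take a snapshot of the form's state on load and compare it with a fresh snapshot before unload. A framework-free sketch of just the comparison step (the entry format matches what new FormData(form) yields; the order-insensitive sort is an extra convenience, not part of any answer above):

```javascript
// Serialize a list of [name, value] pairs (e.g. from new FormData(form))
// into a canonical string so two snapshots can be compared for equality.
function serializeEntries(entries) {
  return JSON.stringify([...entries].sort());
}

// True if the current snapshot differs from the one saved at load time.
function isDirty(savedEntries, currentEntries) {
  return serializeEntries(savedEntries) !== serializeEntries(currentEntries);
}

// Browser usage sketch:
// const saved = [...new FormData(form).entries()];
// window.onbeforeunload = () =>
//   isDirty(saved, [...new FormData(form).entries()])
//     ? 'You have unsaved changes.' : undefined;
```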
{ "language": "en", "url": "https://stackoverflow.com/questions/155739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "97" }
Q: Remote print module in Java I am working on an application that will sport a web-based point of sale interface. The point of sale PC (I am not sure as of now whether it will run on Linux or Windows) must have a fiscal printer attached to it, but like any web app, it is the server which processes all stuff. Both server and PoS machines are on the same LAN. I must send the sale data in real time, and via the fiscal printer which uses the serial port, so printing a PDF or even a web page is not an option. I've been told I could have a little app listening on web services on the client, which in turn talks to the printer instead of the server or the browser, but don't have a clue how to do it. Also, I'll most likely need to listen to any printer feedback (coupon number, for instance, which is generated by the printer) and hand it back to the server. Any ideas? A: I did something similar to this a couple of years ago. But in my case the server and the PC were on the same LAN. Is your PoS within the LAN? If so, I'll explain it to you. In the meantime, if you have the "little app" covered you can take a look at the following: http://java.sun.com/j2se/1.4.2/docs/api/javax/print/PrintService.html The print service has a method to discover the printers registered on the machine it is running on. So after you receive the message from the server in your app, you just have to do something similar to the code shown in the link above. Taken from http://java.sun.com/j2se/1.4.2/docs/api/javax/print/PrintService.html
DocFlavor flavor = DocFlavor.INPUT_STREAM.POSTSCRIPT;
PrintRequestAttributeSet aset = new HashPrintRequestAttributeSet();
aset.add(MediaSizeName.ISO_A4);
PrintService[] pservices = PrintServiceLookup.lookupPrintServices(flavor, aset);
if (pservices.length > 0) {
    DocPrintJob pj = pservices[0].createPrintJob();
    // InputStreamDoc is an implementation of the Doc interface
    Doc doc = new InputStreamDoc("test.ps", flavor);
    try {
        pj.print(doc, aset);
    } catch (PrintException e) {
        // handle the failed print job
    }
}
A: That's why you have applets. But applets run in a security sandbox. However, if the right kind of privileges are given to the applet running in a webapp, it can open sockets, write to files, write to serial ports, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/155740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Provider model in .net When the .net 2.0 framework first came out, the provider model was all the rage. 2.0 even shipped with a bunch of default providers (Membership, sitemap, role). Since the release of 2.0, the hype has died down, and whilst I still use providers day to day, it seems to get far less press. I was wondering if this is because people are using something other than providers and they've been superseded, or is it simply because the take up wasn't as big as other IoC methods? A: It actually hasn't died down. DI is still big. There are many DI frameworks out there to choose from. Yes, it's not hard-baked into every part of the framework like it should absolutely be, but it's still a very good practice to follow. For instance, I was using the P&P's custom application blocks to do DI, until they ditched them for Unity. Now I'm using Unity. A lightweight DI framework is a good idea for any large extensible application. A: I think that as these tools become more standard within .NET the hype around them becomes less, but their use does not. Certainly the Membership and role providers are very important to our new application that we are developing and will save us significant amounts of code. Microsoft Patterns and Practices is the birthplace of tools like the Enterprise Library, which is heavily involved with the provider patterns (particularly with membership) in regards to the Security Application block and the model appears to be used throughout the blocks.
{ "language": "en", "url": "https://stackoverflow.com/questions/155742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Checking for and not printing JavaScript in generated data? In my php web app, suppose I want to go the extra mile and in addition to going gang-busters and being anal-retentive about sanitizing my inputs, I also want to ensure that no JavaScript is being output in strings I am inserting into html templates. Is there a standard way to make sure I don't put JavaScript in the generated html content? A: If you aren't opposed to external dependencies, the HTML Purifier library is a pretty good filter for a majority of XSS attacks. A: not exactly a standard way; because what if you were doing: <img src="${path}">, and ${path} expanded to http://p0wned.com/jpg.jpg" /><script src="p0wned.com/js.js"/> Anyway I like this regular expression: #from http://www.perlmonks.org/?node_id=161281 sub untag { local $_ = $_[0] || $_; # ALGORITHM: # find < , # comment <!-- ... -->, # or comment <? ... ?> , # or one of the start tags which require correspond # end tag plus all to end tag # or if \s or =" # then skip to next " # else [^>] # > s{ < # open tag (?: # open group (A) (!--) | # comment (1) or (\?) | # another comment (2) or (?i: # open group (B) for /i ( TITLE | # one of start tags SCRIPT | # for which APPLET | # must be skipped OBJECT | # all content STYLE # to correspond ) # end tag (3) ) | # close group (B), or ([!/A-Za-z]) # one of these chars, remember in (4) ) # close group (A) (?(4) # if previous case is (4) (?: # open group (C) (?! # and next is not : (D) [\s=] # \s or "=" ["`'] # with open quotes ) # close (D) [^>] | # and not close tag or [\s=] # \s or "=" with `[^`]*` | # something in quotes ` or [\s=] # \s or "=" with '[^']*' | # something in quotes ' or [\s=] # \s or "=" with "[^"]*" # something in quotes " )* # repeat (C) 0 or more times | # else (if previous case is not (4)) .*? # minimum of any chars ) # end if previous char is (4) (?(1) # if comment (1) (?<=--) # wait for "--" ) # end if comment (1) (?(2) # if another comment (2) (?<=\?) # wait for "?" 
) # end if another comment (2) (?(3) # if one of tags-containers (3) </ # wait for end (?i:\3) # of this tag (?:\s[^>]*)? # skip junk to ">" ) # end if (3) > # tag closed }{}gsx; # STRIP THIS TAG return $_ ? $_ : ""; } A: In PHP, I'd start with strip_tags. Like so: $output = strip_tags($input); If I wanted to allow some tags in user input, I'd include them, like so: $output = strip_tags($input, '<code><em><strong>'); A: I don't think it's possible to find javascript code like that. You'd have to pass the data through an interpreter of some type to attempt to find valid js statements. This would be very processor intensive and probably generate many false positives depending on the nature of your text. Entity escaping meta characters is probably the best way to further protect your application from attacks your filter may have missed. Javascript can't be run if it's loaded as regular text.
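The entity escaping mentioned in the last answer is the key output-side defense: if every user-supplied string is escaped before it lands in the template, injected script arrives as inert text. A minimal sketch in JavaScript for illustration (PHP's htmlspecialchars with ENT_QUOTES does roughly the same thing):

```javascript
// Escape the five characters that allow text to break out into markup.
const entities = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };

function escapeHtml(s) {
  return s.replace(/[&<>"']/g, ch => entities[ch]);
}

console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```

Note that this only protects plain HTML text and quoted attribute values; URLs and inline scripts need context-specific escaping.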
{ "language": "en", "url": "https://stackoverflow.com/questions/155751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Safehandle in C# What is SafeHandle? How does it differ from IntPtr? When should I use one? What are its advantages? A: You should use a derivative of SafeHandle whenever possible where managed code is receiving an IntPtr from unmanaged code. While the name, general use, and even documentation of the SafeHandle class imply that it is only supposed to be used to contain Windows operating system handles, a few internal .NET framework classes such as Microsoft.Win32.SafeHandles.SafeLocalAllocHandle and those that derive from the publicly available abstract class System.Runtime.InteropServices.SafeBuffer also use it to guarantee that other unmanaged resources such as dynamically allocated structs and arrays are freed. In general, I believe that it is good practice to create a derivative of this class whenever an IntPtr is returned to managed code from unmanaged code even if it doesn't require cleanup. The established purpose of a SafeHandle is to guarantee that even if the world is ending (e.g. an AppDomain is being unloaded or a StackOverflowException occurs) the .NET framework should make absolutely sure that the finalizer for the SafeHandle is called to close or deallocate the unmanaged entity being referred to by the wrapped IntPtr. The SafeHandle class achieves this by inheriting from the CriticalFinalizerObject class. Inheriting from this class does, however, place upon the inheritor the obligation of not totally screwing up the state of the process when the finalizer is called, which is likely why it is not often used for entities other than Windows operating system handles. The .NET framework also provides some weak finalization ordering so that it is safe to interact with a SafeHandle object in the finalizer of any class that does not inherit from CriticalFinalizerObject, but circumstances in which that is necessary should be few and far between.
Ideally, a SafeHandle-derived class should also be used to more safely interact with an unmanaged entity reference by encapsulating expected functionality within the derived class. A well-written class that inherits from SafeHandle should have a specific purpose in mind and should provide methods that are sufficient to prevent any developer using it for that purpose from ever needing to interact directly with the IntPtr it contains. Adding such methods also provides other developers with a clear idea of what the result of an unmanaged method call is to be used for in a managed context. A class that inherits from SafeHandle can be used for this even if no cleanup is required on the pointer that the unmanaged method returns by calling base(false) in the constructor for the class. Two examples that use classes which derive from SafeHandle to safely clean up a reference to an unmanaged entity and encapsulate functionality related to the unmanaged entity are below. The first example is a more traditional scenario in which a user token returned by LogonUser is wrapped by an instance of the SafeTokenHandle class. This class will call CloseHandle on the token when the object is disposed or finalized. It also includes a method called GetWindowsIdentity that returns a WindowsIdentity object for the user represented by the user token. The second example uses Windows built-in function CommandLineToArgvW to parse a command line. This function returns a pointer to an array contained a contiguous block of memory that can be freed by a single call to LocalFree. The SafeLocalAllocWStrArray class (which inherits from class SafeLocalAllocArray which is also defined in this example) will call LocalFree on the array when object is disposed or finalized. It also includes a function that will copy the contents of the unmanaged array to a managed array. 
static class Examples { static void Example1_SafeUserToken() { const string user = "SomeLocalUser"; const string domain = null; const string password = "ExamplePassword"; NativeMethods.SafeTokenHandle userToken; WindowsIdentity identity; NativeMethods.LogonUser(user, domain, password, NativeMethods.LogonType.LOGON32_LOGON_INTERACTIVE, NativeMethods.LogonProvider.LOGON32_PROVIDER_DEFAULT, out userToken); using (userToken) { // get a WindowsIdentity object for the user // WindowsIdentity will duplicate the token, so it is safe to free the original token after this is called identity = userToken.GetWindowsIdentity(); } // impersonate the user using (identity) using (WindowsImpersonationContext impersonationContext = identity.Impersonate()) { Console.WriteLine("I'm running as {0}!", Thread.CurrentPrincipal.Identity.Name); } } static void Example2_SafeLocalAllocWStrArray() { const string commandLine = "/example /command"; int argc; string[] args; using (NativeMethods.SafeLocalAllocWStrArray argv = NativeMethods.CommandLineToArgvW(commandLine, out argc)) { // CommandLineToArgvW returns NULL on failure; since SafeLocalAllocWStrArray inherits from // SafeHandleZeroOrMinusOneIsInvalid, it will see this value as invalid // if that happens, throw an exception containing the last Win32 error that occurred if (argv.IsInvalid) { int lastError = Marshal.GetHRForLastWin32Error(); throw new Win32Exception(lastError, "An error occurred when calling CommandLineToArgvW."); } // the one unsafe aspect of this is that the developer calling this function must be trusted to // pass in an array of length argc or specify the length of the copy as the value of argc // if the developer does not do this, the array may end up containing some garbage or an // AccessViolationException could be thrown args = new string[argc]; argv.CopyTo(args); } for (int i = 0; i < args.Length; ++i) { Console.WriteLine("Argument {0}: {1}", i, args[i]); } } } /// <summary> /// P/Invoke methods and helper classes 
/// used by this example.
/// </summary>
internal static class NativeMethods
{
    // documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/aa378184(v=vs.85).aspx
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    public static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword,
        LogonType dwLogonType, LogonProvider dwLogonProvider, out SafeTokenHandle phToken);

    // documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/ms724211(v=vs.85).aspx
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern bool CloseHandle(IntPtr handle);

    // documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/bb776391(v=vs.85).aspx
    [DllImport("shell32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern SafeLocalAllocWStrArray CommandLineToArgvW(string lpCmdLine, out int pNumArgs);

    // documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366730(v=vs.85).aspx
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern IntPtr LocalFree(IntPtr hLocal);

    /// <summary>
    /// Wraps a handle to a user token.
    /// </summary>
    public class SafeTokenHandle : SafeHandleZeroOrMinusOneIsInvalid
    {
        /// <summary>
        /// Creates a new SafeTokenHandle. This constructor should only be called by P/Invoke.
        /// </summary>
        private SafeTokenHandle() : base(true)
        {
        }

        /// <summary>
        /// Creates a new SafeTokenHandle to wrap the specified user token.
        /// </summary>
        /// <param name="handle">The user token to wrap.</param>
        /// <param name="ownHandle"><c>true</c> to close the token when this object is disposed or finalized,
        /// <c>false</c> otherwise.</param>
        public SafeTokenHandle(IntPtr handle, bool ownHandle) : base(ownHandle)
        {
            this.SetHandle(handle);
        }

        /// <summary>
        /// Provides a <see cref="WindowsIdentity" /> object created from this user token. Depending
        /// on the type of token, this can be used to impersonate the user.
        /// The WindowsIdentity class will duplicate the token, so it is safe to use the WindowsIdentity
        /// object created by this method after disposing this object.
        /// </summary>
        /// <returns>a <see cref="WindowsIdentity" /> for the user that this token represents.</returns>
        /// <exception cref="InvalidOperationException">This object does not contain a valid handle.</exception>
        /// <exception cref="ObjectDisposedException">This object has been disposed and its token has
        /// been released.</exception>
        public WindowsIdentity GetWindowsIdentity()
        {
            if (this.IsClosed)
            {
                throw new ObjectDisposedException("The user token has been released.");
            }
            if (this.IsInvalid)
            {
                throw new InvalidOperationException("The user token is invalid.");
            }

            return new WindowsIdentity(this.handle);
        }

        /// <summary>
        /// Calls <see cref="NativeMethods.CloseHandle" /> to release this user token.
        /// </summary>
        /// <returns><c>true</c> if the function succeeds, <c>false</c> otherwise. To get extended
        /// error information, call <see cref="Marshal.GetLastWin32Error"/>.</returns>
        protected override bool ReleaseHandle()
        {
            return NativeMethods.CloseHandle(this.handle);
        }
    }

    /// <summary>
    /// A wrapper around a pointer to an array of Unicode strings (LPWSTR*) using a contiguous block of
    /// memory that can be freed by a single call to LocalFree.
    /// </summary>
    public sealed class SafeLocalAllocWStrArray : SafeLocalAllocArray<string>
    {
        /// <summary>
        /// Creates a new SafeLocalAllocWStrArray. This constructor should only be called by P/Invoke.
        /// </summary>
        private SafeLocalAllocWStrArray() : base(true)
        {
        }

        /// <summary>
        /// Creates a new SafeLocalAllocWStrArray to wrap the specified array.
        /// </summary>
        /// <param name="handle">The pointer to the unmanaged array to wrap.</param>
        /// <param name="ownHandle"><c>true</c> to release the array when this object
        /// is disposed or finalized, <c>false</c> otherwise.</param>
        public SafeLocalAllocWStrArray(IntPtr handle, bool ownHandle) : base(ownHandle)
        {
            this.SetHandle(handle);
        }

        /// <summary>
        /// Returns the Unicode string referred to by an unmanaged pointer in the wrapped array.
        /// </summary>
        /// <param name="index">The index of the value to retrieve.</param>
        /// <returns>the value at the position specified by <paramref name="index" /> as a string.</returns>
        protected override string GetArrayValue(int index)
        {
            return Marshal.PtrToStringUni(Marshal.ReadIntPtr(this.handle + IntPtr.Size * index));
        }
    }

    // This class is similar to the built-in SafeBuffer class. Major differences are:
    // 1. This class is less safe because it does not implicitly know the length of the array it wraps.
    // 2. The array is read-only.
    // 3. The type parameter is not limited to value types.

    /// <summary>
    /// Wraps a pointer to an unmanaged array of objects that can be freed by calling LocalFree.
    /// </summary>
    /// <typeparam name="T">The type of the objects in the array.</typeparam>
    public abstract class SafeLocalAllocArray<T> : SafeHandleZeroOrMinusOneIsInvalid
    {
        /// <summary>
        /// Creates a new SafeLocalAllocArray which specifies that the array should be freed when this
        /// object is disposed or finalized.
        /// </summary>
        /// <param name="ownsHandle"><c>true</c> to reliably release the handle during the finalization phase;
        /// <c>false</c> to prevent reliable release (not recommended).</param>
        protected SafeLocalAllocArray(bool ownsHandle) : base(ownsHandle)
        {
        }

        /// <summary>
        /// Converts the unmanaged object at the specified index in the wrapped array to a managed object
        /// of type T.
        /// </summary>
        /// <param name="index">The index of the value to retrieve.</param>
        /// <returns>the value at the position specified by <paramref name="index" /> as a managed object of
        /// type T.</returns>
        protected abstract T GetArrayValue(int index);

        /// <summary>
        /// Frees the wrapped array by calling LocalFree.
        /// </summary>
        /// <returns><c>true</c> if the call to LocalFree succeeds, <c>false</c> if the call fails.</returns>
        protected override bool ReleaseHandle()
        {
            return (NativeMethods.LocalFree(this.handle) == IntPtr.Zero);
        }

        /// <summary>
        /// Copies the unmanaged array to the specified managed array.
        ///
        /// It is important that the length of <paramref name="array"/> be less than or equal to the length of
        /// the unmanaged array wrapped by this object. If it is not, at best garbage will be read and at worst
        /// an exception of type <see cref="AccessViolationException" /> will be thrown.
        /// </summary>
        /// <param name="array">The managed array to copy the unmanaged values to.</param>
        /// <exception cref="ObjectDisposedException">The unmanaged array wrapped by this object has been
        /// freed.</exception>
        /// <exception cref="InvalidOperationException">The pointer to the unmanaged array wrapped by this object
        /// is invalid.</exception>
        /// <exception cref="ArgumentNullException"><paramref name="array"/> is null.</exception>
        public void CopyTo(T[] array)
        {
            if (array == null)
            {
                throw new ArgumentNullException("array");
            }

            this.CopyTo(array, 0, array.Length);
        }

        /// <summary>
        /// Copies the unmanaged array to the specified managed array.
        ///
        /// It is important that <paramref name="length" /> be less than or equal to the length of
        /// the array wrapped by this object. If it is not, at best garbage will be read and at worst
        /// an exception of type <see cref="AccessViolationException" /> will be thrown.
        /// </summary>
        /// <param name="array">The managed array to copy the unmanaged values to.</param>
        /// <param name="index">The index to start at when copying to <paramref name="array" />.</param>
        /// <param name="length">The number of items to copy to <paramref name="array" />.</param>
        /// <exception cref="ObjectDisposedException">The unmanaged array wrapped by this object has been
        /// freed.</exception>
        /// <exception cref="InvalidOperationException">The pointer to the unmanaged array wrapped by this object
        /// is invalid.</exception>
        /// <exception cref="ArgumentNullException"><paramref name="array"/> is null.</exception>
        /// <exception cref="ArgumentOutOfRangeException"><paramref name="index"/> is less than zero.-or-
        /// <paramref name="index" /> is greater than the length of <paramref name="array"/>.-or-
        /// <paramref name="length"/> is less than zero.</exception>
        /// <exception cref="ArgumentException">The sum of <paramref name="index" /> and <paramref name="length" />
        /// is greater than the length of <paramref name="array" />.</exception>
        public void CopyTo(T[] array, int index, int length)
        {
            if (this.IsClosed)
            {
                throw new ObjectDisposedException(this.ToString());
            }
            if (this.IsInvalid)
            {
                throw new InvalidOperationException("This object's buffer is invalid.");
            }
            if (array == null)
            {
                throw new ArgumentNullException("array");
            }
            if (index < 0 || array.Length < index)
            {
                throw new ArgumentOutOfRangeException("index",
                    "index must be a nonnegative integer that is less than array's length.");
            }
            if (length < 0)
            {
                throw new ArgumentOutOfRangeException("length", "length must be a nonnegative integer.");
            }
            if (array.Length < index + length)
            {
                throw new ArgumentException(
                    "length is greater than the number of elements from index to the end of array.", "length");
            }

            for (int i = 0; i < length; ++i)
            {
                array[index + i] = this.GetArrayValue(i);
            }
        }
    }

    /// <summary>
    /// The type of logon operation to perform.
    /// </summary>
    internal enum LogonType : uint
    {
        LOGON32_LOGON_BATCH = 1,
        LOGON32_LOGON_INTERACTIVE = 2,
        LOGON32_LOGON_NETWORK = 3,
        LOGON32_LOGON_NETWORK_CLEARTEXT = 4,
        LOGON32_LOGON_NEW_CREDENTIALS = 5,
        LOGON32_LOGON_SERVICE = 6,
        LOGON32_LOGON_UNLOCK = 7
    }

    /// <summary>
    /// The logon provider to use.
    /// </summary>
    internal enum LogonProvider : uint
    {
        LOGON32_PROVIDER_DEFAULT = 0,
        LOGON32_PROVIDER_WINNT50 = 1,
        LOGON32_PROVIDER_WINNT40 = 2
    }
}

A: Another way of looking at it: with SafeHandle, you should almost never need to write another finalizer.

A: I think MSDN is pretty clear in the definition:

The SafeHandle class provides critical finalization of handle resources, preventing handles from being reclaimed prematurely by garbage collection and from being recycled by Windows to reference unintended unmanaged objects. Before the .NET Framework version 2.0, all operating system handles could only be encapsulated in the IntPtr managed wrapper object. The SafeHandle class contains a finalizer that ensures that the handle is closed and is guaranteed to run, even during unexpected AppDomain unloads when a host may not trust the consistency of the state of the AppDomain. For more information about the benefits of using a SafeHandle, see Safe Handles and Critical Finalization.

This class is abstract because you cannot create a generic handle. To implement SafeHandle, you must create a derived class. To create SafeHandle derived classes, you must know how to create and free an operating system handle. This process is different for different handle types because some use CloseHandle, while others use more specific methods such as UnmapViewOfFile or FindClose. For this reason, you must create a derived class of SafeHandle for each operating system handle type; such as MySafeRegistryHandle, MySafeFileHandle, and MySpecialSafeFileHandle. Some of these derived classes are prewritten and provided for you in the Microsoft.Win32.SafeHandles namespace.
{ "language": "en", "url": "https://stackoverflow.com/questions/155780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Advice on C# Expression Trees

I'm working on a method that accepts an expression tree as a parameter, along with a type (or instance) of a class. The basic idea is that this method will add certain things to a collection that will be used for validation.

public interface ITestInterface
{
    // Specify stuff here.
}

private static void DoSomething<T>(Expression<Func<T, object>> expression, params IMyInterface[] rule)
{
    // Stuff is done here.
}

The method is called as follows:

class TestClass
{
    public int MyProperty { get; set; }
}

class OtherTestClass : ITestInterface
{
    // Blah Blah Blah.
}

static void Main(string[] args)
{
    DoSomething<TestClass>(t => t.MyProperty, new OtherTestClass());
}

I'm doing it this way because I'd like for the property names that are passed in to be strongly typed. A couple of things I'm struggling with:

* Within DoSomething, I'd like to get a PropertyInfo type (from the body passed in) of T and add it to a collection along with rule[]. Currently, I'm thinking about using expression.Body and removing [propertyname] from "Convert.([propertyname])" and using reflection to get what I need. This seems cumbersome and wrong. Is there a better way?
* Is this a specific pattern I'm using?
* Lastly, any suggestions or clarifications as to my misunderstanding of what I'm doing are appreciated, and/or resources or good info on C# expression trees are appreciated as well.

Thanks! Ian

Edit: An example of what expression.Body.ToString() returns within the DoSomething method is a string that contains "Convert(t.MyProperty)" if called from the example above. I do need it to be strongly typed, so it will not compile if I change a property name. Thanks for the suggestions!

A: I rely heavily on expression trees to push a lot of what I want to do with my current application to compile-time, i.e. static type checking. I traverse expression trees to translate them into something else which "makes sense".
One thing I've ended up doing a lot is that instead of URLs I rely on an MVC-like approach where I declare lambda functions, and translate (interpret, really) the compiler-generated expression tree into a URL. When this URL is invoked, I do the opposite. This way, I have what I call compile-time checks for broken links, and this works great with refactoring and overloads as well. I think it's cool to think about using expression trees in this way.

You might want to check out the visitor pattern. It's a pain to get started with because it doesn't make much sense in the beginning, but it ties everything together and it's a very formal way to solve type checking in compiler construction. You could do the same, but instead of type checking emit whatever you need.

Something which I'm currently pounding my head against is the ability to build a simple framework for translating (or actually I should say interpreting) expression trees and emitting JavaScript. The idea is that the compiler-generated expression trees will translate into valid JavaScript which interfaces with some object model. What's exciting about this is the way the compiler is always able to tell me when I go wrong, and sure, the end result is just a bunch of strings, but the important part is how these strings got created. They went through some verification and that means something. Once you get that going there is little you can't do with expression trees.

While working with the System.Reflection.Emit stuff I found myself using expression trees to create a light-weight framework for dynamic compilation, which at compile time could basically tell me if my dynamically created assemblies would compile as well, and this worked seamlessly with reflection and static type checking. It took this further and further and ended up with something which in the end saved a lot of time and proved to be very agile and robust.
So I love this kind of stuff, and this is what meta programming is all about, writing programs in your programs that do programs. I say keep it coming!

A: Collecting PropertyInfo objects from Expression.Body seems similar to my solution to another question.

A: I appreciate what you are trying to do with the property here. I have run into this conundrum. It always feels weird to write:

DoSomething("MyProperty", new OtherClass());

If the property ever changes name, or the text is mistyped in the call, then there will be a problem. What I have come to learn is that this is something you probably have to deal with via testing. Specifically, unit testing. I would write unit tests to enforce that the "DoSomething" calls work correctly.

The other thing you might try is to decorate your properties with attributes, and then reflect against your class when it is constructed looking for properties with the attribute, and load rules.

[DoSomething(typeof(OtherClass), typeof(OtherClass2))]
public int MyProperty { get; set; }

In this case the constructor (perhaps in a base class?) would dynamically create an OtherClass object and an OtherClass2 object, and load them into a collection along with the name of the property.
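For reference, the Convert-stripping the question describes can be done on the expression tree itself rather than on its string form. A minimal sketch — the GetPropertyInfo name is illustrative, not part of any framework:

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

static class PropertyHelper
{
    static PropertyInfo GetPropertyInfo<T>(Expression<Func<T, object>> expression)
    {
        Expression body = expression.Body;

        // value-typed properties are boxed to object, so the compiler
        // wraps the member access in a Convert(...) node
        if (body.NodeType == ExpressionType.Convert || body.NodeType == ExpressionType.ConvertChecked)
        {
            body = ((UnaryExpression)body).Operand;
        }

        MemberExpression member = body as MemberExpression;
        PropertyInfo property = (member == null) ? null : member.Member as PropertyInfo;
        if (property == null)
        {
            throw new ArgumentException("The expression must select a property.", "expression");
        }
        return property;
    }
}
```

Called with t => t.MyProperty from the question's example, this yields the PropertyInfo for MyProperty directly, with no parsing of "Convert(t.MyProperty)" as a string.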
{ "language": "en", "url": "https://stackoverflow.com/questions/155792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Rhino Mocks: How to mock ADO.NET's DataRow?

ADO.NET has the notorious DataRow class which you cannot instantiate using new. This is a problem now that I find a need to mock it using Rhino Mocks. Does anyone have any ideas how I could get around this problem?

A: I'm curious as to why you need to mock the DataRow. Sometimes you can get caught up doing mocking and forget that it can be just as prudent to use the real thing. If you are passing around data rows then you can simply instantiate one with a helper method and use that as a return value on your mock.

SetupResult.For(someMockClass.GetDataRow(input)).Return(GetReturnRow());

public DataRow GetReturnRow()
{
    DataTable table = new DataTable("FakeTable");
    DataRow row = table.NewRow();
    row["value1"] = "someValue";
    row["value2"] = 234;
    return row;
}

If this is not the situation you are in then I am going to need some example code to be able to figure out what you are trying to do.

A: I also use Typemock Isolator for this; it can mock things that other mocking frameworks are unable to.

A: Any time I can't mock something (I prefer MoQ over Rhino, but that's beside the point) I have to code around it. The way I see it you only have two choices. Pay for a superior framework such as TypeMock that can mock ANY class, or code a wrapper around classes that weren't written to be mocked. It's a sad state of affairs in the framework. TDD wasn't a big concern back in the 1.1 days.

A: this works for me

private DataRow GetReturnRow()
{
    DataTable table = new DataTable("FakeTable");
    table.Columns.Add("column_name");
    DataRow row = table.NewRow();
    row["column_name"] = your_value;
    return row;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/155797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I configure visual studio to use the code view as the default view for Webservices?

When you double click on a class (in 'solution explorer')... if that class happens to be an .asmx.cs webservice... then you get this...

To add components to your class, drag them from the Toolbox and use the Properties window to set their properties. To create methods and events for your class, click here to switch to code view.

...it's a 'visual design surface' for webservices. (Who actually uses that surface to write webservices?) So what I want to know, how do I configure visual studio to never show me that design view? Or at least, to show me the code view by default?

A: You can set the default editor for any given file type (.cs, .xml, .xsd, etc). To change the default editor for a given type:

* Right-click a file of that type in your project, and select "Open With..."
* Select your preferred editor. You may want "CSharp Editor".
* Click "Set as Default".

I don't see the behavior you see with web services, but this should work with all file types in Visual Studio.

A: Add the following attribute to your class:

[System.ComponentModel.DesignerCategory("Code")]

(Not sure why [System.ComponentModel.DesignerCategory("")] does not work.)

A: Add the following attribute to your class:

<System.ComponentModel.DesignerCategory("")>
{ "language": "en", "url": "https://stackoverflow.com/questions/155810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Programmatically focusing a hippo.CanvasEntry?

Consider this Python program which uses PyGtk and Hippo Canvas to display a clickable text label. Clicking the text label replaces it with a Hippo CanvasEntry widget which contains the text of the label.

import pygtk
pygtk.require('2.0')
import gtk, hippo

def textClicked(text, event, row):
    input = hippo.CanvasEntry()
    input.set_property('text', text.get_property('text'))
    parent = text.get_parent()
    parent.insert_after(input, text)
    parent.remove(text)

def main():
    canvas = hippo.Canvas()
    root = hippo.CanvasBox()
    canvas.set_root(root)

    text = hippo.CanvasText(text=u'Some text')
    text.connect('button-press-event', textClicked, text)
    root.append(text)

    window = gtk.Window()
    window.connect('destroy', lambda ignored: gtk.main_quit())
    window.add(canvas)
    canvas.show()
    window.show()

    gtk.main()

if __name__ == '__main__':
    main()

How can the CanvasEntry created when the text label is clicked be automatically focused at creation time?

A: Underneath the CanvasEntry, there's a regular old gtk.Entry which you need to request the focus as soon as it's made visible. Here's a modified version of your textClicked function which does just that:

def textClicked(text, event, row):
    input = hippo.CanvasEntry()
    input.set_property('text', text.get_property('text'))

    entry = input.get_property("widget")
    def grabit(widget):
        entry.grab_focus()
    entry.connect("realize", grabit)

    parent = text.get_parent()
    parent.insert_after(input, text)
    parent.remove(text)
{ "language": "en", "url": "https://stackoverflow.com/questions/155822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Getting an error when filling a datatable from a data adapter

I am getting this error but only very occasionally. 99.9% of the time it works fine:

Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.

Does anyone have any idea on what the cause could be? I only use that datatable for viewing and not updating, so is it possible to easily turn off all constraints somehow?

A: Set DataSet.EnforceConstraints to false before filling the DataTable.

A: This typically happens when the schema on your dataset is enforcing something that your database is not. Visual Studio will automatically read the schema and try to set up some primary keys on your dataset, but if you are using a view that can possibly return multiple rows it will fail. It is easy enough to remove these constraints from the DataSet itself by deleting the constraint in the designer. Check to ensure that your dataset is not enforcing a primary key in a situation where you could possibly have two rows with the same key, like in a view that joins two tables together and therefore duplicates the rows in the parent table. VS by default will try to create the primary key of the parent table as a unique constraint on the dataset, but the view itself enforces no such constraint.

A: This error can also appear if you're using an XSD DataSet to define your schema and the maximum length of a variable-length field (varchar, varbinary, etc.) is increased in the database but the XSD is not regenerated. In my case, I had a varchar(100) database field with a text value 60 characters in length. The XSD expected this field to have a maximum length of 50 (you can see this in the InitClass() method of the Designer.cs file). When this record was loaded I received the "Failed to enable constraints" error message. Updating the record to reduce the field below 50 removed the error.
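A sketch of the first suggestion — fill through a DataSet so EnforceConstraints can be switched off beforehand. The query, connection string, and table name here are placeholders, not from the question:

```csharp
using System.Data;
using System.Data.OleDb;

// placeholder query and connection string for illustration
var adapter = new OleDbDataAdapter("SELECT * FROM SomeView", connectionString);
var ds = new DataSet();

ds.EnforceConstraints = false;   // skip constraint validation while filling
adapter.Fill(ds, "SomeView");
DataTable table = ds.Tables["SomeView"];

// turning enforcement back on would throw if the loaded rows still violate a constraint:
// ds.EnforceConstraints = true;
```

Note that EnforceConstraints lives on the DataSet, not the DataTable, which is why a standalone typed DataTable needs its parent DataSet (or designer changes) to get the same effect.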
{ "language": "en", "url": "https://stackoverflow.com/questions/155829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you reference MS office interops in C#?

I am trying to access Outlook 2007 from C#. I have installed the PIA msi after following the directions found on msdn. After a successful install nothing shows up in Visual Studio's references under the .net tab.

A: Office interaction is available through COM objects found on the 'COM' tab of the 'Add Reference' dialog window.

A: After you downloaded the installer and ran it, did you run the MSI installer it extracted and placed in the folder it asked you to create?
{ "language": "en", "url": "https://stackoverflow.com/questions/155835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I create a Microsoft Jet (Access) database without an interop assembly?

I need to create an Access (mdb) database without using the ADOX interop assembly. How can this be done?

A: Before I throw away this code, it might as well live on stackoverflow. Something along these lines seems to do the trick:

if (!File.Exists(DB_FILENAME))
{
    var cnnStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + DB_FILENAME;

    // Use a late bound COM object to create a new catalog. This is so we avoid an interop assembly.
    var catType = Type.GetTypeFromProgID("ADOX.Catalog");
    object o = Activator.CreateInstance(catType);
    catType.InvokeMember("Create", BindingFlags.InvokeMethod, null, o, new object[] { cnnStr });

    OleDbConnection cnn = new OleDbConnection(cnnStr);
    cnn.Open();

    var cmd = cnn.CreateCommand();
    cmd.CommandText = "CREATE TABLE VideoPosition (filename TEXT, pos LONG)";
    cmd.ExecuteNonQuery();
}

This code illustrates that you can access the database using OleDbConnection once it's created with the ADOX.Catalog COM component.

A: I've done the same as Autsin: create an Access db, then include it into my project as a managed resource. Once there, it is included in the compiled code and you can copy it to hard disk as many times as you want. Empty databases are relatively small too, so there isn't much overhead. The added bonus is the ability to set up the database if you know how it will be used or what tables will be added every time; you can reduce the amount of coding and slow database queries.

A: You don't need Jet (major headache) installed if you use this connection string in .NET 3.5:

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\myFolder\myAccess2007file.accdb;Persist Security Info=False;

This should work on Access 2007 and below.

A: Interesting question -- I've never thought to create one on the fly like this. I've always included my baseline database as a resource in the project and made a copy when I needed a new one.

A: ACE is not in any framework (yet.
i.e. not in 1, 2, 3.5, 4, or 4.5). It's also not part of Windows Update. JET4 is in Framework 2 and above. If you're working with Access/MDB files etc., do not assume ACE is present.
{ "language": "en", "url": "https://stackoverflow.com/questions/155848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Design time serialization in C#

I have created a non-visual component in C# which is designed as a placeholder for meta-data on a form. The component has a property which is a collection of custom objects; this object is marked as Serializable and implements GetObjectData for serializing and a public constructor for deserializing. In the resx file for the form it will generate binary data for storing the collection; however, any time I make a change to the serialized class I get designer errors and need to delete the data manually out of the resx file and then recreate it. I have tried changing the constructor to have a try / catch block around each property in the class:

try
{
    _Name = info.GetString("Name");
}
catch (SerializationException)
{
    this._Name = string.Empty;
}

but it still crashes. The last error I got was that I had to implement IConvertible. I would prefer to use xml serialization because I can at least see it; is this possible for use by the designer? Is there a way to make the serialization more stable and less sensitive to changes?

Edit: More information...better description maybe. I have a class which inherits from Component; it has one property which is a collection of Rules. The RulesCollection seems to have to be marked as Serializable, otherwise it does not retain its members. The Rules class is also a Component with the attribute DesignTimeVisible(false) to stop it showing in the component tray; this class is not marked Serializable. Having the collection marked as Serializable generates binary data in the resx file (not ideal) and the IDE reports that the Rules class is not Serializable. I think this issue is getting beyond a simple question, so I will probably close it shortly. If anyone has any links to something similar that would help a lot.

A: You might want to try the alternate approach of getting everything to serialize as generated code. To do that is very easy. Just implement your non-visual class from Component.
Then expose your collection as you already are, but ensure each object placed into the collection is itself derived from Component. By doing that everything is code generated.

A: I have since discovered where I was going wrong. The component was implementing a custom collection (inherited from CollectionBase); I changed this to a List and added the DesignerSerializationVisibility(DesignerSerializationVisibility.Content) attribute to the List property, which is also read-only. This would then produce code to generate all the component's properties and all the entries in the List. The class stored in the list did not need any particular attributes or need to be serializable.

private List<Rule> _Rules;

[DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
public List<Rule> Rules
{
    get { return _Rules; }
}

A: Could you put more code up of the class that is having the serialization issue, maybe the constructor and the property, to give reference to the variables you're using. Just a note: I've had a lot of issues with the visual designer and code generation, so if I've got a property on a control then generally I put [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)] on the property and handle the initialization myself.
{ "language": "en", "url": "https://stackoverflow.com/questions/155852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: ASP.NET MVC - passing parameters to the controller

I have a controller with an action method as follows:

public class InventoryController : Controller
{
    public ActionResult ViewStockNext(int firstItem)
    {
        // Do some stuff
    }
}

And when I run it I get an error stating:

The parameters dictionary does not contain a valid value of type 'System.Int32' for parameter 'firstItem'. To make a parameter optional its type should either be a reference type or a Nullable type.

I had it working at one point and I decided to try the function without parameters. Finding out that the controller was not persistent, I put the parameter back in; now it refuses to recognise the parameter when I call the method. I'm using this url syntax to call the action:

http://localhost:2316/Inventory/ViewStockNext/11

Any ideas why I would get this error and what I need to do to fix it?

I've tried adding another method that takes an integer to the class; it also fails for the same reason. I've tried adding one that takes a string, and the string is set to null. I've tried adding one without parameters and that works fine, but of course it won't suit my needs.

A: * you can change firstItem to id and it will work
* you can change the routing on global.asax (i do not recommend that)
* and, can't believe no one mentioned this, you can call: http://localhost:2316/Inventory/ViewStockNext?firstItem=11

In a @Url.Action it would be:

@Url.Action("ViewStockNext", "Inventory", new {firstItem=11});

Depending on the type of what you are doing, the last will be more suitable. Also you should consider not doing a ViewStockNext action and instead a ViewStock action with an index. (my 2 cents)

A: Headspring created a nice library that allows you to add aliases to your parameters in attributes on the action.
This looks like this:

[ParameterAlias("firstItem", "id", Order = 3)]
public ActionResult ViewStockNext(int firstItem)
{
    // Do some stuff
}

With this you don't have to alter your routing just to handle a different parameter name. The library also supports applying it multiple times so you can map several parameter spellings (handy when refactoring without breaking your public interface). You can get it from Nuget and read Jeffrey Palermo's article on it here

A: Using the ASP.NET Core Tag Helper feature:

<a asp-controller="Home" asp-action="SetLanguage" asp-route-yourparam1="@item.Value">@item.Text</a>

A: public ActionResult ViewNextItem(int? id) makes the id integer a nullable type, no need for string<->int conversions.

A: To rephrase Jarret Meyer's answer, you need to change the parameter name to 'id' or add a route like this:

routes.MapRoute(
    "ViewStockNext",                                           // Route name
    "Inventory/ViewStockNext/{firstItem}",                     // URL with parameters
    new { controller = "Inventory", action = "ViewStockNext" } // Parameter defaults
);

The reason is the default route only looks for actions with no parameter or a parameter called 'id'.

Edit: Heh, nevermind Jarret added a route example after posting.

A: or do it with Route Attribute:

public class InventoryController : Controller
{
    [Route("whatever/{firstItem}")]
    public ActionResult ViewStockNext(int firstItem)
    {
        int yourNewVariable = firstItem;
        // ...
    }
}

A: Your routing needs to be set up along the lines of {controller}/{action}/{firstItem}. If you left the routing as the default {controller}/{action}/{id} in your global.asax.cs file, then you will need to pass in id.

routes.MapRoute(
    "Inventory",
    "Inventory/{action}/{firstItem}",
    new { controller = "Inventory", action = "ListAll", firstItem = "" }
);

... or something close to that.
A: There is another way to accomplish that (described in more detail in Stephen Walther's Pager example). Essentially, you create a link in the view: Html.ActionLink("Next page", "Index", routeData) In routeData you can specify name/value pairs (e.g., routeData["page"] = 5), and in the controller the Index function's corresponding parameters receive the value. That is, public ViewResult Index(int? page) will have page passed as 5. I have to admit, it's quite unusual that a string ("page") automagically becomes a variable - but that's how MVC works in other languages as well... A: Or, you could try changing the parameter type to string, then convert the string to an integer in the method. I am new to MVC, but I believe you need nullable objects in your parameter list; how else will the controller indicate that no such parameter was provided? So... public ActionResult ViewNextItem(string id)... A: The reason for the special treatment of "id" is that it is added to the default route mapping. To change this, go to Global.asax.cs, and you will find the following line: routes.MapRoute ("Default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" }); Change it to: routes.MapRoute ("Default", "{controller}/{action}", new { controller = "Home", action = "Index" });
{ "language": "en", "url": "https://stackoverflow.com/questions/155864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "112" }
Q: Anyone have issues going from ColdFusion's serializeJSON method to PHP's json_decode? The Interwebs are no help on this one. We're encoding data in ColdFusion using serializeJSON and trying to decode it in PHP using json_decode. Most of the time, this is working fine, but in some cases, json_decode returns NULL. We've looked for the obvious culprits, but serializeJSON seems to be formatting things as expected. What else could be the problem? UPDATE: A couple of people (wisely) asked me to post the output that is causing the problem. I would, except we just discovered that the result set is all of our data (listing information for 2300+ rental properties for a total of 565,135 ASCII characters)! That could be a problem, though I didn't see anything in the PHP docs about a max size for the string. What would be the limiting factor there? RAM? UPDATE II: It looks like the problem was that a couple of our users had copied and pasted Microsoft Word text with "smart" quotes. Those pesky users... A: You could try operating in UTF-8 and also letting PHP know that fact. I had an issue with PHP's json_decode not being able to decode a UTF-8 JSON string (with some "weird" characters other than the curly quotes that you have). My solution was to hint PHP that I was working in UTF-8 mode by inserting a Content-Type meta tag in the HTML page that was doing the submit to the PHP. That way the content type of the submitted data, which is the JSON string, would also be UTF-8: <meta http-equiv="Content-Type" content="text/html;charset=utf-8"/> After that, PHP's json_decode was able to properly decode the string. A: can you replicate this issue reliably? and if so can you post sample data that returns null? i'm sure you know this, but for informational sake for others stumbling on this who may not, RFC 4627 describes JSON, and it's a common mistake to assume valid javascript is valid JSON. it's better to think of JSON as a subset of javascript. 
In response to the edit: I'd suggest checking to make sure your information is being populated in your PHP script (before it's passed off to json_decode), and also validating that information (especially if you can reliably reproduce the error). You can try an online validator for convenience. Based on the very limited information, it sounds like perhaps it's timing out and not grabbing all the data? Is there a need for such a large dataset? A: I had this exact problem and it turns out it was due to ColdFusion putting non-printable characters into the JSON packets (these characters did actually exist in our data), but they can't go into JSON. Two questions on this site fixed this problem for me, although I went for the PHP solution rather than the ColdFusion solution as I felt it was the more elegant of the two. PHP solution: fix the string before you pass it to json_decode(): $string = preg_replace('/[\x00-\x1F\x80-\xFF]/', '', $string); ColdFusion solution: use the cleanXmlString() function in that SO question after using serializeJSON(). A: You could try parsing it with another parser, and looking for an error -- I know Python's JSON parsers are very high quality. If you have Python installed it's easy enough to run the text through demjson's syntax checker. If it's a very large dataset you can use my library jsonlib -- memory use will be higher than with demjson, but it will run faster because it's written in C.
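A quick way to see what the PHP fix above is doing is to reproduce it in another language. This is a minimal sketch (Python rather than PHP, purely for illustration; the payload is made up): a raw control character inside a JSON string makes a strict parser reject it, and stripping the control range first lets the same string decode cleanly, mirroring the preg_replace() call. Note the PHP regex also strips \x80-\xFF, which would mangle UTF-8 text; stripping only the control range is the safer variant shown here.

```python
import json
import re

# Hypothetical payload: a JSON string containing a raw control character
# (e.g. something pasted in from Word), which is invalid inside a JSON string.
raw = '{"listing": "2 bed\x0b1 bath"}'

try:
    json.loads(raw)
    parsed_without_cleaning = True
except json.JSONDecodeError:
    parsed_without_cleaning = False

# Strip non-printable ASCII before decoding, mirroring the PHP fix above.
cleaned = re.sub(r'[\x00-\x1f]', '', raw)
data = json.loads(cleaned)

print(parsed_without_cleaning)  # False -- the raw string is rejected
print(data["listing"])          # 2 bed1 bath
```

The same clean-then-decode order is what the accepted PHP answer relies on: the cleaning step must happen before the parser ever sees the string.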
{ "language": "en", "url": "https://stackoverflow.com/questions/155869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I declare an implicitly typed variable in VB inline in an ASP.Net page? I want to do the following: <%=var t = ViewData.Model%> but in VB A: <% Dim t = ViewData.Model %> VB doesn't use a special keyword for implicitly typed variables... just Dim.
{ "language": "en", "url": "https://stackoverflow.com/questions/155870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: MyClass in VB.Net What is a realistic use for VB.Net's MyClass keyword? I understand the technical usage of MyClass; I don't understand the practical usage of it in the real world. Using MyClass only makes sense if you have any virtual (overridable) members. But it also means that you want to ignore the overridden implementations in sub classes. It appears to be self-contradicting. I can think of some contrived examples, but they are simply bad design rather than practical usage. A: MyClass, from a compiler's perspective, is a way to omit a callvirt instruction in favor of a call instruction. Essentially when you call a method with the virtual semantics (callvirt), you're indicating that you want to use the most derived variation. In cases where you wish to omit the derived variations you utilize MyClass (call). While you've stated you understand the basic concept, I figured it might help to describe it from a functional viewpoint, rather than an implicit understanding. It's functionally identical to MyBase with the caveat of scope being base type with MyBase, instead of the active type with MyClass. Overriding virtual call semantics, at the current point in the hierarchy, is typically a bad design choice, the only times it is valid is when you must rely on a specific piece of functionality within your object's hierarchy, and can't rely on the inheritor to call your variation through a base invocation in their implementation. It could also rely on you as a designer deciding that it's the only alternative since you overrode the functionality further in the object hierarchy and you must ensure that in this corner case that this specific method, at the current level of the inheritance tree, must be called. It's all about design, understanding the overall design and corner cases. 
There's likely a reason C# doesn't include such functionality, since in those corner cases you could separate the method into a private variation that the current level in the hierarchy invokes, and just refer to that private implementation when necessary. It's my personal view that utilizing the segmentation approach is the ideal means to an end since it's explicit about your design choice, and is easier to follow (and it's also the only valid means in languages without a functional equivalent to MyClass). A: Polymorphism I'm sorry I don't have a clear code example here but you can follow the link below for that and I hate to copy the MSDN Library description but it's so good that it's really hard to rewrite it any clearer. "MyClass provides a way to refer to the current class instance members without them being replaced by any derived class overrides. The MyClass keyword behaves like an object variable referring to the current instance of a class as originally implemented." Also note that you can't use MyClass in a shared method. A good example of implementing Polymorphism via MyClass is at http://www.devarticles.com/c/a/VB.Net/Implementing-Polymorphism-in-VB.Net/ A: You need it if you want to call a chained constructor. Public Sub New(ByVal accountKey As Integer) MyClass.New(accountKey, Nothing) End Sub Public Sub New(ByVal accountKey As Integer, ByVal accountName As String) MyClass.New(accountKey, accountName, Nothing) End Sub Public Sub New(ByVal accountKey As Integer, ByVal accountName As String, ByVal accountNumber As String) m_AccountKey = accountKey m_AccountName = accountName m_AccountNumber = accountNumber End Sub A: I guess the only case I could see a use for it, would be if you want the base condition, and an inherited condition at the same time? I.E. where you want to be able to inherit a member, but you want the ability to access a value for that member that hasn't been changed by inheritance?
{ "language": "en", "url": "https://stackoverflow.com/questions/155884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you rename a Verity collection in ColdFusion? Can't seem to rename an existing Verity collection in ColdFusion without deleting, recreating, and rebuilding the collection. Problem is, I have some very large collections I'd rather not have to delete and rebuild from scratch. Any one have a handy trick for this conundrum? A: I don't believe that there is an easy way to rename a Verity collection. You can always use <cfcollection action="map" ...> to assign an alias to an existing collection, provided you do not need to re-use the original name. A: Looks like this is not possible. Deleting and re-creating the collection with the desired name appears to be the only approach available. A: For the Verity part (without considering ColdFusion), it's easy enough to detach a collection, rename it, and reattach it again: rcadmin> indexdetach Server Alias:YourDocserver Index Alias:CollectionName Index Type [(c)ollection,(t)ree,(p)arametric,(r)ecommendation]:c Save changes? [y|n]:y <<Return>> SUCCESS rcadmin> collpurge Collection alias:CollectionName Admin Alias:AdminServer Save changes? [y|n]:y <<Return>> SUCCESS rcadmin> adminsignal Admin Alias:AdminServer Type of signal (Shutdown=2,WSRefresh=3,RestartAllServers=4):4 Save changes? [y|n]:y <<Return>> SUCCESS Now you can rename the collection directory, and re-attach. (If you are unsure of any of these values, check them with collget before you take it offline). rcadmin> collset Admin Alias:AdminServer Collection Alias:NewCollectionName Modify Type (Update=0, Insert=1):1 Path: Gateway[(o)dbc|(n)otes|(e)xchange|(d)ocumentum|(f)ilesys|(w)eb|o(t)her]: Style Alias: Document Access (Public=0,Secure=1,Anonymous=2): Query Parser [(s)imple|(b)oolPlus|(f)reeText|(o)ldFreeText|O(l)dSimple|O(t)her]: Description: Max. Search Time(msecs): Save changes? 
[y|n]:y rcadmin> indexattach Index Alias:NewCollectionName Index Type [(c)ollection,(t)ree,(p)arametric,(r)ecommendation]:c Server Alias:YourDocserver Modify Type (Update=0, Insert=1):1 Index State (offline=0,hidden=1,online=2):2 Threads (default=3): Save changes? [y|n]:y <<Return>> SUCCESS It should now show up again in the 'hierarchyview'. You can also use the "merge" utility to copy content from one collection to another, with a new name.
{ "language": "en", "url": "https://stackoverflow.com/questions/155891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unicode URL decoding The usual method of URL-encoding a unicode character is to split it into 2 %HH codes. (\u4161 => %41%61) But, how is unicode distinguished when decoding? How do you know that %41%61 is \u4161 vs. \x41\x61 ("Aa")? Are 8-bit characters, that require encoding, preceded by %00? Or, is the point that unicode characters are supposed to be lost/split? A: According to Wikipedia: Current standard The generic URI syntax mandates that new URI schemes that provide for the representation of character data in a URI must, in effect, represent characters from the unreserved set without translation, and should convert all other characters to bytes according to UTF-8, and then percent-encode those values. This requirement was introduced in January 2005 with the publication of RFC 3986. URI schemes introduced before this date are not affected. Not addressed by the current specification is what to do with encoded character data. For example, in computers, character data manifests in encoded form, at some level, and thus could be treated as either binary data or as character data when being mapped to URI characters. Presumably, it is up to the URI scheme specifications to account for this possibility and require one or the other, but in practice, few, if any, actually do. Non-standard implementations There exists a non-standard encoding for Unicode characters: %uxxxx, where xxxx is a Unicode value represented as four hexadecimal digits. This behavior is not specified by any RFC and has been rejected by the W3C. The third edition of ECMA-262 still includes an escape(string) function that uses this syntax, but also an encodeURI(uri) function that converts to UTF-8 and percent-encodes each octet. So, it looks like its entirely up to the person writing the unencode method...Aren't standards fun? A: What I've always done is first UTF-8 encode a Unicode string to make it a series of 8-bit characters before escaping any of those with %HH. P.S. 
- I can only hope the non-standard implementations (%uxxxx) are few and far between. A: Since URIs were introduced before Unicode was around, or at least in wide use, I imagine this is a very implementation-specific question. UTF-8 encoding your text, then escaping that per normal sounds like the best idea, since that's completely backwards compatible with any ASCII/ANSI systems in place, though you might get the odd weird character or two. On the other end, to decode, you'd unescape your text and get a UTF-8 string. If someone using an older system tries to send yours some data in ASCII/ANSI, there's no harm done; that's (almost) UTF-8 encoded already.
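The current-standard behavior described above (UTF-8 encode the character, then percent-encode each byte) can be demonstrated with a short sketch; Python is used here purely for illustration. Note that U+4161 becomes three percent-encoded bytes, not %41%61, which is exactly how the ambiguity in the question is avoided:

```python
from urllib.parse import quote, unquote

s = "Aa\u4161"  # 'A', 'a', and U+4161

# quote() UTF-8 encodes each character and percent-encodes the resulting bytes,
# per RFC 3986; unreserved ASCII characters pass through untouched.
encoded = quote(s)
print(encoded)  # Aa%E4%85%A1 -- three bytes for U+4161, not %41%61

# unquote() reverses both steps, assuming UTF-8.
decoded = unquote(encoded)
print(decoded == s)  # True
```

Because "Aa" stays as literal characters while U+4161 becomes %E4%85%A1, a decoder never has to guess whether two percent-escapes are one Unicode character or two 8-bit ones.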
{ "language": "en", "url": "https://stackoverflow.com/questions/155892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to detect the number of context switches that occurred while running C# code? From C#, is it possible to detect the number of context switches that occurred while executing a block of code on a particular thread? Ideally, I'd like to know how many times and what CPU my thread code was scheduled on. I know I can use tools like Event Tracing for Windows and the associated viewers, but this seemed a bit complicated to get the data I wanted. Also, tools like Process Explorer make it too hard to tell how many switches occurred as a result of a specific block of code. Background: I'm trying to test the actual performance of a low level lock primitive in .NET (as a result of some comments on a recent blog post I made. A: It sounds like you may be looking for a programmatic solution, but if not, Microsoft's Process Explorer tool will tell you very easily the number of context switches for a particular thread. Once in the tool, double-click your process, select the Threads tab, and select your thread. The .NET tab has more specific .NET-related perf data. A: I've never done this, but here are a few leads that might help: * *The .NET profiler APIs might allow you to hook in? The ICorProfilerCallback interface has RuntimeThreadSuspended and RuntimeThreadResumed callbacks. But a comment on this blog post seems to indicate that they won't get you what you are looking for: "RuntimeThreadSuspended is issued when a thread is being suspended by the runtime, typically in preparation for doing a GC." *There is a "Context Switches/sec" perfmon counter that might help. I haven't looked at this counter specifically, but I'm guessing it operates on Win32 threads and not managed threads. You can use the profiling APIs to get the Win32 thread ID for any given managed thread ID. Good luck! ;) A: It looks like procexp might be using the kernel thread (KTHREAD) or executive thread (ETHREAD) data structures that have a ContextSwitches field on them. 
It might be possible to get this from managed code.
{ "language": "en", "url": "https://stackoverflow.com/questions/155903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Creating A Private Photo Gallery Using Asp.Net MVC I need to create a photo gallery service that is managed by users. I've done this a million times using just Asp.net but I was wondering if there are any special considerations that I need to make when using Asp.net MVC. Basically, I will be storing the actual images on the filesystem and storing the locations in a database linking the images to a specific user. The images in a user's gallery should NOT be accessible by anyone except registered users. Meaning, I need to somehow prevent users from sharing the URL of an image from a gallery with someone who is not a user of the site. In the past I did this using some generic handlers which authenticated that the request is allowed to access the image resource. Can I use the same pattern but using Controllers instead? I was thinking of perhaps creating a Photo Controller and just a simple Get action. Would this require that I have a View just for displaying an Image? Am I on the right track or are there better ways of doing this? (Besides storing images in the DB) A: This link explains how to create a custom ImageResult class. I was able to do exactly what I needed following it https://blog.maartenballiauw.be/post/2008/05/13/aspnet-mvc-custom-actionresult.html A: It's not a complete answer but I'd look at using a route that restricts access to the actual files themselves and then possibly use authentication of the action that gets an image.
{ "language": "en", "url": "https://stackoverflow.com/questions/155906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What is the easiest way to start tomcat in embedded mode from the cargo-maven2-plugin? I have defined tomcat:catalina:5.5.23 as a dependency to the cargo plugin, however I still get the following exception: java.lang.ClassNotFoundException: org.apache.catalina.Connector at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:316) at org.codehaus.classworlds.RealmClassLoader.loadClassDirect(RealmClassLoader.java:195) at org.codehaus.classworlds.DefaultClassRealm.loadClass(DefaultClassRealm.java:255) at org.codehaus.classworlds.DefaultClassRealm.loadClass(DefaultClassRealm.java:274) at org.codehaus.classworlds.RealmClassLoader.loadClass(RealmClassLoader.java:214) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:247) at org.codehaus.cargo.container.tomcat.internal.Tomcat5xEmbedded.preloadEmbedded(Tomcat5xEmbedded.java:232) It looks like the RealmClassLoader is not finding the class, possibly due to java.security.AccessController.doPrivileged denying access. Has anyone got tomcat to run in embedded mode from within maven? A: Side note: You can start jetty which is similar to tomcat. 
(Servlets will be available at http://localhost:8080/ artefact-name) mvn jetty6:run You would have to add to your pom: <project> <build> <plugins> <plugin> <groupId>org.mortbay.jetty</groupId> <artifactId>maven-jetty6-plugin</artifactId> <configuration> <scanIntervalSeconds>5</scanIntervalSeconds> <!-- <webXml>${basedir}/WEB-INF/web.xml</webXml> --> </configuration> </plugin> </plugins> </build> </project> A: There is also a tomcat maven plugin: http://mojo.codehaus.org/tomcat-maven-plugin/introduction.html <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>tomcat-maven-plugin</artifactId> </plugin> </plugins> On my machine this loads up tomcat 6. I'm not sure how to get it to work with tomcat 5.5
{ "language": "en", "url": "https://stackoverflow.com/questions/155908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you handle unit/regression tests which are expected to fail during development? During software development, there may be bugs in the codebase which are known issues. These bugs will cause the regression/unit tests to fail, if the tests have been written well. There is constant debate in our teams about how failing tests should be managed: * *Comment out failing test cases with a REVISIT or TODO comment. * *Advantage: We will always know when a new defect has been introduced, and not one we are already aware of. *Disadvantage: May forget to REVISIT the commented-out test case, meaning that the defect could slip through the cracks. *Leave the test cases failing. * *Advantage: Will not forget to fix the defects, as the script failures will constantly remind you that a defect is present. *Disadvantage: Difficult to detect when a new defect is introduced, due to failure noise. I'd like to explore what the best practices are in this regard. Personally, I think a tri-state solution is the best for determining whether a script is passing. For example when you run a script, you could see the following: * *Percentage passed: 75% *Percentage failed (expected): 20% *Percentage failed (unexpected): 5% You would basically mark any test cases which you expect to fail (due to some defect) with some metadata. This ensures you still see the failure result at the end of the test, but immediately know if there is a new failure which you weren't expecting. This appears to take the best parts of the 2 proposals above. Does anyone have any best practices for managing this? A: I would leave your test cases in. In my experience, commenting out code with something like // TODO: fix test case is akin to doing: // HAHA: you'll never revisit me In all seriousness, as you get closer to shipping, the desire to revisit TODOs in code tends to fade, especially with things like unit tests because you are concentrating on fixing other parts of the code. 
Leave the tests in, perhaps with your "tri-state" solution. However, I would strongly encourage fixing those cases ASAP. My problem with constant reminders is that after people see them, they tend to gloss over them and say "oh yeah, we get those errors all the time..." Case in point -- in some of our code, we have introduced the idea of "skippable asserts" -- asserts which are there to let you know there is a problem, but allow our testers to move past them on into the rest of the code. We've come to find out that QA started saying things like "oh yeah, we get that assert all the time and we were told it was skippable" and bugs didn't get reported. I guess what I'm suggesting is that there is another alternative, which is to fix the bugs that your test cases find immediately. There may be practical reasons not to do so, but getting into that habit now could be more beneficial in the long run. A: Fix the bug right away. If it's too complex to do right away, it's probably too large a unit for unit testing. Lose the unit test, and put the defect in your bug database. That way it has visibility, can be prioritized, etc. A: I generally work in Perl and Perl's Test::* modules allow you to insert TODO blocks: TODO: { local $TODO = "This has not been implemented yet."; # Tests expected to fail go here } In the detailed output of the test run, the message in $TODO is appended to the pass/fail report for each test in the TODO block, so as to explain why it was expected to fail. For the summary of test results, all TODO tests are treated as having succeeded, but, if any actually return a successful result, the summary will also count those up and report the number of tests which unexpectedly succeeded. My recommendation, then, would be to find a testing tool which has similar capabilities. (Or just use Perl for your testing, even if the code being tested is in another language...) 
A: I tend to leave these in, with an Ignore attribute (this is using NUnit) - the test is mentioned in the test run output, so it's visible, hopefully meaning we won't forget it. Consider adding the issue/ticket ID in the "ignore" message. That way it will be resolved when the underlying problem is considered to be ripe - it'd be nice to fix failing tests right away, but sometimes small bugs have to wait until the time is right. I've considered the Explicit attribute, which has the advantage of being able to be run without a recompile, but it doesn't take a "reason" argument, and in the version of NUnit we run, the test doesn't show up in the output as unrun. A: We did the following: Put a hierarchy on the tests. Example: You have to test 3 things. * *Test the login (login, retrieve the user name, get the "last login date" or something familiar etc.) *Test the database retrieval (search for a given "schnitzelmitkartoffelsalat" - tag, search the latest tags) *Test web services (connect, get the version number, retrieve simple data, retrieve detailed data, change data) Every testing point has subpoints, as stated in brackets. We split these hierarchically. Take the last example: 3. Connect to a web service ... 3.1. Get the version number ... 3.2. Data: 3.2.1. Get the version number 3.2.2. Retrieve simple data 3.2.3. Retrieve detailed data 3.2.4. Change data If a point fails (while developing) give one exact error message. I.e. 3.2.2 failed. Then the testing unit will not execute the tests for 3.2.3 and 3.2.4. This way you get one (exact) error message: "3.2.2 failed". Thus leaving the programmer to solve that problem (first) and not handle 3.2.3 and 3.2.4, because this would not work out. That helped a lot to clarify the problem and to make clear what has to be done at first. A: I think you need a TODO watcher that produces the "TODO" comments from the code base. The TODO is your test metadata. 
It's one line in front of the known failure message and very easy to correlate. TODOs are good. Use them. Actively manage them by actually putting them into the backlog on a regular basis. A: #5 on Joel's "12 Steps to Better Code" is fixing bugs before you write new code: When you have a bug in your code that you see the first time you try to run it, you will be able to fix it in no time at all, because all the code is still fresh in your mind. If you find a bug in some code that you wrote a few days ago, it will take you a while to hunt it down, but when you reread the code you wrote, you'll remember everything and you'll be able to fix the bug in a reasonable amount of time. But if you find a bug in code that you wrote a few months ago, you'll probably have forgotten a lot of things about that code, and it's much harder to fix. By that time you may be fixing somebody else's code, and they may be in Aruba on vacation, in which case, fixing the bug is like science: you have to be slow, methodical, and meticulous, and you can't be sure how long it will take to discover the cure. And if you find a bug in code that has already shipped, you're going to incur incredible expense getting it fixed. But if you really want to ignore failing tests, use the [Ignore] attribute or its equivalent in whatever test framework you use. In MbUnit's HTML output, ignored tests are displayed in yellow, compared to the red of failing tests. This lets you easily notice a newly-failing test, but you won't lose track of the known-failing tests.
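For what it's worth, the tri-state reporting the question asks for is built into some frameworks, much like Perl's TODO blocks above. As an illustrative sketch (Python's unittest here, not one of the frameworks discussed in the answers), marking a known defect as an expected failure keeps the test running and counted separately, so a new, unexpected failure still stands out:

```python
import io
import unittest

class KnownIssues(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_rounding_defect(self):
        # Hypothetical known defect: this assertion is wrong on purpose
        # (2.675 is stored as slightly less than 2.675, so it rounds to 2.67).
        self.assertEqual(round(2.675, 2), 2.68)

    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Run the suite and inspect the tri-state result.
suite = unittest.TestLoader().loadTestsFromTestCase(KnownIssues)
outcome = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)

print(len(outcome.expectedFailures))  # 1 -- known failure, reported separately
print(outcome.wasSuccessful())        # True -- no *unexpected* failures
```

If the marked test ever started passing, it would be reported as an unexpected success, which is the cue to remove the marker and close the bug.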
{ "language": "en", "url": "https://stackoverflow.com/questions/155911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to throw a XmlSchemaException on XML Schema validation errors? Calling Validate() on an XmlDocument requires passing in a ValidationEventHandler delegate. That event function gets a ValidationEventArgs parameter which in turn has an Exception property of the type XmlSchemaException. Whew! My current code looks like this: ValidationEventHandler onValidationError = delegate(object sender, ValidationEventArgs args) { throw(args.Exception); } doc.Validate(onValidationError); Is there some other method I'm overlooking which simply throws the XmlSchemaException if validation fails (warnings ignored entirely)? A: Because the Validate method takes the ValidationEventHandler delegate, it is left up to the developer to decide what to do with the excpetion. What you are doing is correct. A: Passing null for the validationEventHandler parameter will throw an exception if there are any errors. The MSDN documentation for the Extensions.Validate method describes the validationEventHandler parameter as: A ValidationEventHandler for an event that occurs when the reader encounters validation errors. If null, throws an exception upon validation errors.
{ "language": "en", "url": "https://stackoverflow.com/questions/155912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Business Entity Loading Pattern The project I'm working on is using n-tier architecture. Our layers are as follows: * *Data Access *Business Logic *Business Entities *Presentation The Business Logic calls down into the data access layer, and the Presentation layer calls down into the Business Logic layer, and the Business entities are referenced by all of them. Our business entities essentially match our data model 1-1. For every table, we have a class. Initially when the framework was designed, there was no consideration for managing master-detail or child-parent relationships. So all of the Business logic, data access, and business entities only referenced a single table in the database. Once we started developing the application it quickly became apparent that not having these relationships in our object model was severely hurting us. All of our layers (including the database) are generated from an in-house metadata-database which we use to drive our home-grown code generator. The question is what is the best way to load or lazy load the relationships in our entities. For instance, let's say we have a person class that has a master-child relationship to an address table. This shows up in the business entity as a collection property of Addresses on the Person object. If we have a one-to-one relationship then this would show up as a single entity property. What is the best approach for filling and saving the relationship objects? Our Business entities have no knowledge of the Business Logic layer, so it can't be done internally when the property gets called. I'm sure there is some sort of standard pattern out there for doing this. Any suggestions? Also, one caveat is that the DataAccess layer uses reflection to build our entities. The stored procedures return one result set based on one table, and using reflection we populate our business object by matching the names of the properties with the names of the columns. So doing joins would be difficult. 
A: I highly recommend looking at Fowler's Patterns of Enterprise Architecture book. There are a few different approaches to solving this sort of problem that he outlines nicely, including entity relationships. One of the more compelling items would be the Unit Of Work pattern, which is basically a collector, that observes the actions performed on your entities, and once your done with your action, it batches the appropriate database calls, and makes the request to the database. This pattern is one of the central concepts used by NHibernate, which uses an object which implements IDisposable to signal the end of the "work". This allows you to wrap your actions in a using, and have the unit of work deal with the actions for you. Edit: Additional Information This is a link to the basic class structure of the Unit of Work...not really the most exciting thing in the world. Fowler provides more details in his book, some of which you can see here. You can also look at the Session object from NHibernate as a possible implementation ( I was able to track down the ISession interface...not sure where the implementation lives) Hope this helps. A: An approach I've used in the past is to make the container type smart enough to fetch the required objects. eg: public class Relation<T> { private T _value; private void FetchData() { if( LoadData != null ) { LoadDataEventArgs args = new LoadDataEventArgs(typeof(T), /* magic to get correct object */); LoadData(this, args); _value = (T)args.Value; } } public event EventHandler<LoadDataEventArgs> LoadData; public T Value { get { if( _value == default(T) ) FetchData(); return _value; } set { /* Do magic here. */ } } } Then you declare it on your entity like: [RelationCriteria("ID", EqualsMyProperty="AddressID")] public Relation<Address> Address { get; set; } And it's up to the loader of the type that declares the Address property to add a handler to the LoadData event. 
A similar class implements IList to give you a one-to-many relationship. A: What language are you using? What you described is exactly what the Entity Framework does in .NET. But you didn't share what language you were using and I'm assuming you don't want to rewrite any of your data layer.
{ "language": "en", "url": "https://stackoverflow.com/questions/155913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: PHP Session data not being saved I have one of those "I swear I didn't touch the server" situations. I honestly didn't touch any of the PHP scripts. The problem I am having is that session data is not being saved across different pages or page refreshes. I know a new session is being created correctly because I can set a session variable (e.g. $_SESSION['foo'] = "foo") and print it back out on the same page just fine. But when I try to use that same variable on another page it is not set! Are there any PHP functions or information I can use on my host's server to see what is going on? Here is an example script that does not work on my host's server as of right now: <?php session_start(); if(isset($_SESSION['views'])) $_SESSION['views'] = $_SESSION['views']+ 1; else $_SESSION['views'] = 1; echo "views = ". $_SESSION['views']; echo '<p><a href="page1.php">Refresh</a></p>'; ?> The 'views' variable never gets incremented after doing a page refresh. I'm thinking this is a problem on their side, but I wanted to make sure I'm not a complete idiot first. Here is the phpinfo() for my host's server (PHP Version 4.4.7): A: Had same problem - what happened to me is our server admin changed the session.cookie_secure boolean to On, which means that cookies will only be sent over a secure connection. Since the cookie was not being found, php was creating a new session every time, thus session variables were not being seen. A: Use phpinfo() and check the session.* settings. Maybe the information is stored in cookies and your browser does not accept cookies, something like that. Check that first and come back with the results. You can also do a print_r($_SESSION); to have a dump of this variable and see the content.... Regarding your phpinfo(), is the session.save_path a valid one? Does your web server have write access to this directory? Hope this helps. A: I had following problem index.php <? session_start(); $_SESSION['a'] = 123; header('location:index2.php'); ?> index2.php <? 
session_start(); echo $_SESSION['a']; ?> The variable $_SESSION['a'] was not set correctly. Then I changed index.php accordingly <? session_start(); $_SESSION['a'] = 123; session_write_close(); header('location:index2.php'); ?> I don't know what this means internally; I just explain it to myself that the session variable change was not quick enough :) A: Check to see if the session save path is writable by the web server. Make sure you have cookies turned on. (I forget when I turn them off to test something.) Use Firefox with the Firebug extension to see if the cookie is being set and transmitted back. And on an unrelated note, start looking at PHP 5, because PHP 4.4.9 is the last of the PHP 4 series. A: Thanks for all the helpful info. It turns out that my host changed servers and started using a different session save path other than /var/php_sessions which didn't exist anymore. A solution would have been to declare ini_set('session.save_path','SOME WRITABLE PATH'); in all my script files but that would have been a pain. I talked with the host and they explicitly set the session path to a real path that did exist. Hope this helps anyone having session path troubles. A: Check who the group and owner are of the folder where the script runs. If the group id or user id are wrong, for example, set to root, it will cause sessions to not be saved properly. A: Check the value of "views" before you increment it. If, for some bizarre reason, it's getting set to a string, then when you add 1 to it, it'll always return 1. if (isset($_SESSION['views'])) { if (!is_numeric($_SESSION['views'])) { echo "CRAP!"; } ++$_SESSION['views']; } else { $_SESSION['views'] = 1; } A: Well, we can eliminate a code error because I tested the code on my own server (PHP 5). Here's what to check for: * *Are you calling session_unset() or session_destroy() anywhere? These functions will delete the session data immediately. 
If I put these at the end of my script, it begins behaving exactly like you describe. *Does it act the same in all browsers? If it works on one browser and not another, you may have a configuration problem on the nonfunctioning browser (i.e. you turned off cookies and forgot to turn them on, or are blocking cookies by mistake). *Is the session folder writable? You can't test this with is_writable(), so you'll need to go to the folder (from phpinfo() it looks like /var/php_sessions) and make sure sessions are actually getting created. A: I know one solution I found (OSX with Apache 1 and just switched to PHP5) when I had a similar problem was that unsetting 1 specific key (i.e. unset($_SESSION['key']);) was causing it not to save. As soon as I didn't unset that key any more it saved. I have never seen this again, except on that server on another site, but then it was a different variable. Neither were anything special. Thanks for this one Darryl. This helped me out. I was deleting a session variable, and for some reason it was keeping the session from committing. Now I'm just setting it to null instead (which is fine for my app), and it works. A: If you set a session in PHP 5, then try to read it on a PHP 4 page, it might not look in the correct place! Make the pages the same PHP version or set the session path. A: I spent ages looking for the answer for a similar problem. It wasn't an issue with the code or the setup, since very similar code worked perfectly in another .php file on the same server. It turned out the problem was caused by a very large amount of data being saved into the session in this page. In one place we had a line like this: $_SESSION['full_list'] = $full_list where $full_list was an array of data loaded from the database; each row was an array of about 150 elements. When the code was initially written a couple of years ago, the DB only contained about 1000 rows, so the $full_list contained about 100 elements, each being an array of about 20 elements. 
With time, the 20 elements turned into 150 and 1000 rows turned into 17000, so the code was storing close to 64 meg of data into the session. Apparently, with this amount of data being stored, it refused to store anything else. Once we changed the code to deal with data locally without saving it into the session, everything worked perfectly. A: Check to make sure you are not mixing https:// with http://. Session variables do not flow between secure and insecure sessions. A: Here is one common problem I haven't seen addressed in the other comments: is your host running a cache of some sort? If they are automatically caching results in some fashion you would get this sort of behavior. A: Just wanted to add a little note that this can also occur if you accidentally miss the session_start() statement on your pages. A: Check if you are using session_write_close(); anywhere, I was using this right after another session and then trying to write to the session again and it wasn't working.. so just comment that sh*t out A: I had session cookie path set to "//" instead of "/". Firebug is awesome. Hope it helps somebody. A: I had this problem when using secure pages where I was coming from www.domain.com/auth.php that redirected to domain.com/destpage.php. I removed the www from the auth.php link and it worked. This threw me because everything worked otherwise; the session was not set when I arrived at the destination though. A: A common issue often overlooked is also that there must be NO other code or extra spacing before the session_start() command. 
I've had this issue before where I had a blank line before session_start() which caused it not to work properly. A: Edit your php.ini. I think the value of session.gc_probability is 1, so set it to 0. session.gc_probability=0 A: Adding my solution: Check if you access the correct domain. I was using www.mysite.com to start the session, and tried to receive it from mysite.com (without the www). I have solved this by adding an htaccess rewrite of all domains to www to be on the safe side/site. Also check if you use http or https. A: Another few things I had to do (I had the same problem: no session retention after a PHP upgrade to 5.4). You may not need these, depending on what your server's php.ini contains (check phpinfo()); session.use_trans_sid=0 ; Do not add session id to URI (osc does this) session.use_cookies=0; ; ensure cookies are not used session.use_only_cookies=0 ; ensure sessions are OK to use IMPORTANT session.save_path=~/tmp/osc; ; Set to same as admin setting session.auto_start = off; Tell PHP not to start sessions, osc code will do this Basically, your php.ini should be set to no cookies, and session parameters must be consistent with what osc wants. You may also need to change a few session code snippets in application_top.php - creating objects where none exist in the tep_session_is_registered(...) calls (e.g. the navigation object), set $HTTP_ variables to the newer $_SERVER ones and a few other isset tests for empty objects (google for info). I ended up being able to use the original sessions.php files (includes/classes and includes/functions) with a slightly modified application_top.php to get things going again. The php.ini settings were the main problem, but this of course depends on what your server company has installed as the defaults.
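Several of the answers above come down to the same check: does the session save path exist, and is it writable? A minimal shell sketch of that check (the helper name is made up, and /var/php_sessions is just the path from the accepted answer; read your real path from phpinfo() or php.ini, and run the check as the web server user, e.g. via sudo -u www-data, since a path writable by root may still be unusable by PHP):

```shell
# Report whether a candidate session.save_path is usable.
check_session_path() {
  if [ -d "$1" ] && [ -w "$1" ]; then
    echo "writable"
  else
    echo "missing or not writable"
  fi
}

check_session_path /var/php_sessions
```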
{ "language": "en", "url": "https://stackoverflow.com/questions/155920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: Launch web page from my application in Linux I have an application that launches a webpage in the "current" browser when the user selects it. This part of my app works fine in the Windows version but I can't figure out how to do this in Linux build. Right now the Linux version is hardcoded for Firefox in a specific directory and runs a new instance of it each time and doesn't show the URL that I pass in. I would like it to NOT launch a new version each time but just open a new page in the current open one if it is already running. For windows I use: ShellExecute(NULL,"open",filename,NULL,NULL,SW_SHOWNORMAL); For Linux I currently use: pid_t pid; char *args[2]; char *prog=0; char firefox[]={"/usr/bin/firefox"}; if(strstri(filename,".html")) prog=firefox; if(prog) { args[0]=(char *)filename; args[1]=0; pid=fork(); if(!pid) execvp(prog,args); } A: If you're writing this for modern distros, you can use xdg-open: $ xdg-open http://google.com/ If you're on an older version you'll have to use a desktop-specific command like gnome-open or exo-open. A: xdg-open is the new standard, and you should use it when possible. However, if the distro is more than a few years old, it may not exist, and alternative mechanisms include $BROWSER (older attempted standard), gnome-open (Gnome), kfmclient exec (KDE), exo-open (Xfce), or parsing mailcap yourself (the text/html handler will be likely be a browser). That being said, most applications don't bother with that much work -- if they're built for a particular environment, they use that environment's launch mechanisms. For example, Gnome has gnome_url_show, KDE has KRun, most terminal programs (for example, mutt) parse mailcap, etc. Hardcoding a browser and allowing the distributor or user to override the default is common too. I don't suggest hardcoding this, but if you really want to open a new tab in Firefox, you can use "firefox -new-tab $URL". 
A: A note for xdg-open: check http://portland.freedesktop.org/wiki/ , section "Using Xdg-utils"; it states that you can include the xdg-open script in your own application and use that as a fallback in case the target system doesn't have xdg-open already installed. A: If you don't want to involve additional applications, just use the built-in remote control commands of firefox. E.g.: firefox -remote 'openurl(http://stackoverflow.com)' See detailed usage at http://www.mozilla.org/unix/remote.html
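The fallback chain described above (xdg-open first, then the desktop-specific openers) can be sketched as a small shell helper. The function name and the ordering of candidates are just one reasonable choice, not a standard:

```shell
# Print the first URL opener available on this system, if any.
pick_opener() {
  for cmd in xdg-open gnome-open exo-open kfmclient; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd"   # note: kfmclient needs an extra "exec" argument when invoked
      return 0
    fi
  done
  return 1          # no known opener installed
}

# Usage (from C you would run the result via fork/execvp as in the question):
# opener=$(pick_opener) && "$opener" "http://stackoverflow.com/"
```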
{ "language": "en", "url": "https://stackoverflow.com/questions/155930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you loop through each line in a text file using a windows batch file? I would like to know how to loop through each line in a text file using a Windows batch file and process each line of text in succession. A: If you have an NT-family Windows (one with cmd.exe as the shell), try the FOR /F command. A: The accepted answer using cmd.exe and for /F "tokens=*" %F in (file.txt) do whatever "%F" ... works only for "normal" files. It fails miserably with huge files. For big files, you may need to use PowerShell and something like this: [IO.File]::ReadLines("file.txt") | ForEach-Object { whatever "$_" } or if you have enough memory: foreach($line in [System.IO.File]::ReadLines("file.txt")) { whatever "$line" } This worked for me with a 250 MB file containing over 2 million lines, where the for /F ... command got stuck after a few thousand lines. For the differences between foreach and ForEach-Object, see Getting to Know ForEach and ForEach-Object. (credits: Read file line by line in PowerShell) A: From the Windows command line reference: To parse a file, ignoring commented lines, type: for /F "eol=; tokens=2,3* delims=," %i in (myfile.txt) do @echo %i %j %k This command parses each line in Myfile.txt, ignoring lines that begin with a semicolon and passing the second and third token from each line to the FOR body (tokens are delimited by commas or spaces). The body of the FOR statement references %i to get the second token, %j to get the third token, and %k to get all of the remaining tokens. If the file names that you supply contain spaces, use quotation marks around the text (for example, "File Name"). To use quotation marks, you must use usebackq. Otherwise, the quotation marks are interpreted as defining a literal string to parse. 
By the way, you can find the command-line help file on most Windows systems at: "C:\WINDOWS\Help\ntcmds.chm" A: In a Batch File you MUST use %% instead of % : (Type help for) for /F "tokens=1,2,3" %%i in (myfile.txt) do call :process %%i %%j %%k goto thenextstep :process set VAR1=%1 set VAR2=%2 set VAR3=%3 COMMANDS TO PROCESS INFORMATION goto :EOF What this does: The "do call :process %%i %%j %%k" at the end of the for command passes the information acquired in the for command from myfile.txt to the "process" 'subroutine'. When you're using the for command in a batch program, you need to use double % signs for the variables. The following lines pass those variables from the for command to the process 'subroutine' and allow you to process this information. set VAR1=%1 set VAR2=%2 set VAR3=%3 I have some pretty advanced uses of this exact setup that I would be willing to share if further examples are needed. Add in your EOL or Delims as needed of course. A: I needed to process the entire line as a whole. Here is what I found to work. for /F "tokens=*" %%A in (myfile.txt) do [process] %%A The tokens keyword with an asterisk (*) will pull all text for the entire line. If you don't put in the asterisk it will only pull the first word on the line. I assume it has to do with spaces. For Command on TechNet If there are spaces in your file path, you need to use usebackq. For example: for /F "usebackq tokens=*" %%A in ("my file.txt") do [process] %%A A: Improving the first "FOR /F.." answer: What I had to do was execute every script listed in MyList.txt, so this worked for me: for /F "tokens=*" %A in (MyList.txt) do CALL %A ARG1 --OR, if you wish to spread it across multiple lines: for /F "tokens=*" %A in (MyList.txt) do ( ECHO Processing %A.... 
CALL %A ARG1 ) Edit: The example given above is for executing a FOR loop from the command prompt; from a batch script, an extra % needs to be added, as shown below: ---START of MyScript.bat--- @echo off for /F "tokens=*" %%A in ( MyList.TXT) do ( ECHO Processing %%A.... CALL %%A ARG1 ) @echo on ;---END of MyScript.bat--- A: @MrKraus's answer is instructive. Further, let me add that if you want to load a file located in the same directory as the batch file, prefix the file name with %~dp0. Here is an example: cd /d %~dp0 for /F "tokens=*" %%A in (myfile.txt) do [process] %%A NB: If your file name or directory (e.g. myfile.txt in the above example) has a space (e.g. 'my file.txt' or 'c:\Program Files'), use: for /F "tokens=*" %%A in ('type "my file.txt"') do [process] %%A , with the type keyword calling the type program, which displays the contents of a text file. If you don't want to suffer the overhead of calling the type command you should change the directory to the text file's directory. Note that type is still required for file names with spaces. I hope this helps someone! A: The accepted answer is good, but has two limitations. It drops empty lines and lines beginning with ; To read lines of any content, you need the delayed expansion toggling technique. @echo off SETLOCAL DisableDelayedExpansion FOR /F "usebackq delims=" %%a in (`"findstr /n ^^ text.txt"`) do ( set "var=%%a" SETLOCAL EnableDelayedExpansion set "var=!var:*:=!" echo(!var! ENDLOCAL ) Findstr is used to prefix each line with the line number and a colon, so empty lines aren't empty anymore. Delayed expansion needs to be disabled when accessing the %%a parameter, else exclamation marks ! and carets ^ will be lost, as they have special meanings in that mode. But to remove the line number from the line, the delayed expansion needs to be enabled. set "var=!var:*:=!" removes all up to the first colon (using delims=: would also remove all colons at the beginning of a line, not only the one from findstr). 
The endlocal disables the delayed expansion again for the next line. The only limitation is now the line length limit of ~8191, but there seems to be no way to overcome this. A: Or, you may exclude the options in quotes: FOR /F %%i IN (myfile.txt) DO ECHO %%i A: Here's a bat file I wrote to execute all SQL scripts in a folder: REM ****************************************************************** REM Runs all *.sql scripts sorted by filename in the current folder. REM To use integrated auth change -U <user> -P <password> to -E REM ****************************************************************** dir /B /O:n *.sql > RunSqlScripts.tmp for /F %%A in (RunSqlScripts.tmp) do osql -S (local) -d DEFAULT_DATABASE_NAME -U USERNAME_GOES_HERE -P PASSWORD_GOES_HERE -i %%A del RunSqlScripts.tmp A: Modded examples here to list our Rails apps on Heroku - thanks! cmd /C "heroku list > heroku_apps.txt" find /v "=" heroku_apps.txt | find /v ".TXT" | findstr /r /v /c:"^$" > heroku_apps_list.txt for /F "tokens=1" %%i in (heroku_apps_list.txt) do heroku run bundle show rails --app %%i Full code here. A: To print all lines in a text file from the command line (with delayed expansion): set input="path/to/file.txt" for /f "tokens=* delims=[" %i in ('type "%input%" ^| find /v /n ""') do ( set a=%i set a=!a:*]=]! echo:!a:~1!) Works with leading whitespace, blank lines, whitespace lines. Tested on Win 10 CMD
{ "language": "en", "url": "https://stackoverflow.com/questions/155932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "286" }
Q: What GNU/Linux command-line tool would I use for performing a search and replace on a file? What GNU/Linux command-line tool would I use for performing a search and replace on a file? Can the search text, and replacement, be specified in a regex format? A: Consider Ruby as an alternative to Perl. It stole most of Perl's one-liner command-line args (-i, -p, -l, -e, -n) and auto-sets $_ for you like Perl does and has plenty of regex goodness. Additionally Ruby's syntax may be more comfortable and easier to read or write than Perl's or sed's. (Or not, depending on your tastes.) ruby -pi.bak -e '$_.gsub!(/foo|bar/){|x| x.upcase}' *.txt vs. perl -pi.bak -e 's/(foo|bar)/\U\1/g' *.txt In many cases when dealing with one-liners, performance isn't enough of an issue to care whether you use lightweight sed or heavyweight Perl or heavier-weight Ruby. Use whatever is easiest to write. A: sed 's/a.*b/xyz/g;' old_file > new_file GNU sed (which you probably have) is even more versatile: sed -r --in-place 's/a(.*)b/x\1y/g;' your_file Here is a brief explanation of those options: -i[SUFFIX], --in-place[=SUFFIX] edit files in place (makes backup if extension supplied) -r, --regexp-extended use extended regular expressions in the script. The FreeBSD, NetBSD and OpenBSD versions also support these options. If you want to learn more about sed, Cori has suggested this tutorial. A: sed, the stream editor, and yes, it uses regex. A: Perl was invented for this: perl -pi -e 's/foo/bar/g;' *.txt Any normal s/// pattern in those single quotes. You can keep a backup with something like this: perl -pi.bak -e 's/foo/bar/g;' *.txt Or pipeline: cat file.txt | perl -pe 's/foo/bar/g;' | less (note -p rather than -n, so each line is printed after the substitution). But that's really more sed's job.
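To make the sed answers concrete, here is a minimal self-contained run showing a backreference substitution (the file name and pattern are made up for the demo; -E enables extended regexes on both GNU and BSD sed):

```shell
printf 'error: disk full\n' > demo.txt           # throwaway input file for the demo
# Rewrite the prefix, keeping the captured message via \1
result=$(sed -E 's/error: (.*)/warning: \1/' demo.txt)
echo "$result"                                   # -> warning: disk full
rm -f demo.txt
```

To edit the file in place instead of printing to stdout, GNU sed accepts -i (BSD sed needs an explicit suffix argument, e.g. -i '').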
{ "language": "en", "url": "https://stackoverflow.com/questions/155934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Distributed file source Looking for a software solution to store large files (>50MB - 1.5GB), distributed across multiple servers. We have looked at MogileFS; however, given existing software demands, we need to have an NFS interface. Would prefer open source, however, open to all options. A: If you have only a small number of servers, you could try rsync. Couple it with SSH for some security. A: Did you look at Hadoop DFS? It's much more than a distributed file system, but it should be good for very big files.
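As a concrete starting point for the rsync-over-SSH suggestion, the invocation might look like the sketch below. The user, host name and paths are placeholders, and rsync plus SSH keys must be set up on both ends:

```shell
# Build the mirror command: archive mode (-a), compression (-z), SSH transport,
# and --delete so files removed from the source disappear from the replica too.
SRC="/data/files/"                               # trailing slash: copy contents, not the dir itself
DEST="backup@fileserver2:/data/files/"           # placeholder remote target
RSYNC_CMD="rsync -az -e ssh --delete $SRC $DEST"
echo "$RSYNC_CMD"
```

Note this gives one-way replication on a schedule (e.g. from cron), not a shared namespace; each server still serves its own copy, which you can then export over NFS as the question requires.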
{ "language": "en", "url": "https://stackoverflow.com/questions/155938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Do you design/sketch/draw a development solution first and then develop it? If so how? I work a lot with decision makers looking to use technology better in their businesses. I have found that a picture is worth a thousand words and prototyping a system in a diagram of some sort always lends a lot to a discussion. I have used Visio, UML (somewhat), Mind Maps, Flow Charts and Mocked Up WinForms to start the vision for these sponsors to ensure that everyone is on the same page. I always seem to be looking for that common process that can be used to knit the business vision to the development process so that we all end up at the same end, "Something functional that solves a problem". I am looking for suggestions or Cliff notes on how to approach the design process such that it works for applications that may only take a week to develop, but can also be used to encompass larger projects as well. I know that this delves into the area of UML, but I have found that I have a hard time finding a guide to appropriately use the various diagram types, let alone help business users understand the diagrams and relate to them. What do you use to capture a vision of a system/application to then present to the sponsors of a project? (all before you write a single line of code)... A: Paper or whiteboard! For the lone developer, I'd recommend paper. At least at first; eventually you may want to formalize it with UML, but I don't think it's necessary. For a group of developers that work together (physically), I'd recommend a whiteboard. That way it's visible to everyone and everyone can improve and contribute. Again, you may want to formalize at this point, but I still don't think it's necessary. When I first started doing OOP (or designing an algorithm), I'd do it all in my head while coding. But after doing some reasonably complex projects I definitely see the benefit in drawing out the interactions between different objects in a system. 
I do projects by myself, so I use lots of index cards for designing classes and paper for their interactions. The point is it needs to be easy to change. I've played with Dia, a diagram editor, for doing UML, but making changes is too difficult. I want to be able to make quick changes, so I can figure out what works best. It's worth mentioning that TDD and doing "spike"[1] projects can help when designing a system, too. [1] From Extreme Programming Adventures in C#, page 8: "Spike" is an Extreme Programming term meaning "experiment." We use the word because we think of a spike as a quick, almost brute-force experiment aimed at learning just one thing. Think of driving a big nail through a board. A: For small or very bounded tasks, I think developers almost universally agree that any sort of diagram is an unnecessary step. However, when dealing with a larger, more complicated system, especially when two or more processes have to interact or complex business logic is needed, Process Activity Diagrams can be extremely useful. We use fairly pure agile methods in development and find these are almost the only type of diagrams we use. It is amazing how much you can optimize a high level design just by having all the big pieces in front of you and connecting them with flow lines. I can't stress enough how important it is to tailor the diagram to your problem, not the other way around, so while the link gives a good starting point, simply add what makes sense to represent your system/problem. As for storage, a whiteboard can be great for brainstorming and while the idea is still being refined, but I'd argue that an electronic copy on a wiki is better once the idea is taking a fairly final shape (OmniGraffle is the king of diagramming if you are lucky enough to be able to use a Mac at work). Having an area where you dump all these diagrams can be extremely helpful for someone new to get a quick grasp on an overall piece of the system without having to dig through code. 
Also, because activity diagrams represent larger logic blocks, there is not the issue of always having to keep them up to date. When you make a large change to a system, then yes, they need updating, but hopefully you were using the existing diagram to plan the change anyway. A: From Conceptual Blockbusting: A Guide To Better Ideas by James L. Adams: Intellectual blocks result in an inefficient choice of mental tactics or a shortage of intellectual ammunition. . . . 1. Solving the problem using an incorrect language (verbal, mathematical, visual) -- as in trying to solve a problem mathematically when it can more easily be accomplished visually (pg. 71, 3rd Edition) Needless to say, if you choose to use diagrams to capture ideas that may be better captured with mathematics, it's equally bad. The trick, of course, is to find the right language to express both the problem and the solution too. And, of course, it may be appropriate to use more than one language to express both the problem and the solution. My point is that you're assuming that diagrams are the best way to go. I'm not sure that I would be willing to make that assumption. You may get a better answer (and the customer may be happier with the result) via some other method of framing requirements and proposed designs. By the way, Conceptual Blockbusting is highly recommended reading. A: When designing an application (I mainly create web applications, but this does apply to others), I typically create user stories to determine exactly what the end user really needs. These form the typical "business requirements". After the user stories are nailed down, I create flow charts to lay out the paths that people will take when using the app. After that step (which sometimes gets an approval process) I create interface sketches (pen/pencil & graph paper), and begin the layout of the databases. This, and the next step are usually the most time consuming process. The next step is taking the sketches and turn them into cleaned up wireframes. I use OmniGraffle for this step -- it's light years ahead of Visio. After this, you may want to do typical UML diagrams, or other object layouts / functionality organization, but the business people aren't going to care so much about that kind of detail :) A: When I'm putting together a design, I'm concerned with conveying the ideas cleanly and easily to the audience. That audience is made up of (typically) different folks with different backgrounds. What I don't want to do is get into "teaching mode" for a particular design model. 
*You can expand the activity diagrams or add sequence diagrams to show the processing model. This will start with end-user, non-technical depictions of processing. *You can iterate through the class and activity diagrams to move from analysis to design. At some point, you will have moved out of analysis and into engineering mode. Users may not want to see all of these pictures. *You can draw component diagrams for the development view and deployment diagrams for the physical view. These will also iterate as your conception of the system expands and refines. A: When designing an application (I mainly create web applications, but this does apply to others), I typically create user stories to determine exactly what the end user really needs. These form the typical "business requirements". After the user stories are nailed down, I create flow charts to lay out the paths that people will take when using the app. After that step (which sometimes gets an approval process) I create interface sketches (pen/pencil & graph paper), and begin the layout of the databases. This, and the next step are usually the most time consuming process. The next step is taking the sketches and turn them into cleaned up wireframes. I use OmniGraffle for this step -- it's light years ahead of Visio. After this, you may want to do typical UML diagrams, or other object layouts / functionality organization, but the business people aren't going to care so much about that kind of detail :) A: When I'm putting together a design, I'm concerned with conveying the ideas cleanly and easily to the audience. That audience is made up of (typically) different folks with different backgrounds. What I don't want to do is get into "teaching mode" for a particular design model. 
If I have to spend considerable time telling my customer what the arrow with the solid head means and how it is different from the one that is hollow or what a square means versus a circle, I'm not making progress - at least not the progress I want to. If it is reasonably informal, I'll sketch it out on a whiteboard or on some paper - block and simple arrows at most. The point of the rough design at this point is to make sure we're on the same page. It will vary by customer though. If it is more formal, I might pull out a UML tool and put together some diagrams, but mostly my customers don't write software and are probably only marginally interesting in the innards. We keep it at the "bubble and line" level and might put together some bulleted lists where clarification is needed. My customer don't want to see class diagrams or anything like that, typically. If we need to show some GUI interaction, I'll throw together some simple window prototypes in Visual Studio - it is quick and easy. I've found that the customer can relate to that pretty easily. In a nutshell, I produce simple drawings (in some format) that can communicate the design to the interested parties and stake holders. I make sure I know what I want it to do and more importantly - what THEY NEED it to do, and talk to that. It typically doesn't get into the weeds because folks get lost there and I don't find it time well spent to diagram everything to the nth degree. Ultimately, if the customer and I (and all other parties concerned) are on the same page after talking through the design, I'm a happy guy. A: I'm an Agile guy, so I tend to not put a lot of time into diagramming. There are certainly times when sketching something on a white board or a napkin will help ensure that you understand a particular problem or requirement, but nothing really beats getting working software in front of a customer so they can see how it works. 
If you are in a situation where your customers would accept frequent iterations and demos over up-front design, I say go for it. It's even better if they are okay with getting early feedback in the form of passing unit or integration tests (something like Fit works well here). I generally dislike prototypes, because far too often the prototype becomes the final product. I had the misfortune of working on a project which was basically extending a commercial offering which turned out to be a "proof of concept" that got packaged and sold. The project was canceled after over 1000 defects were logged against the core application (not counting any enhancements or customizations we were currently working on). I've tried using UML, and found that unless the person looking at the diagrams understands UML, they are of little help. Screen mock-ups are generally not a bad idea, but they only show the side of the application which directly affects the user, so you don't get much mileage for anything that isn't presentation. Oddly enough, tools like the Workflow designer in Visual Studio produce pretty clear diagrams that are easy for non-developers to understand, so it makes a good tool for generating core application workflow, if your application is complex enough to benefit from it. Still, of all the approaches I've used over the years, nothing beats a user getting their hands on something to let you know how well you understand the problem. A: I suggest reading Joel's articles on "Painless Functional Specifications". Part 1 is titled "Why Bother?". We use Mockup Screens at work ("Quick and Easy Screen Prototypes"). It's easy to alter screens and the sketches make clear that this is only a design. The mockups are then included in a Word document containing the spec. A: The UML advice works well if you're working on a large & risk-averse project with a lot of stakeholders, and with lots of contributors.
Even on those projects, it really helps to develop a prototype to show to the decision makers. Usually walking them through the UI and a typical user story is quite sufficient. That said, you must beware that focus upon the UI for decision makers will tend to make them neglect some significant backend issues such as validations, business rules and data integrity. They will tend to write these issues off as "technical" issues rather than business decisions. If, on the other hand, you're working on an Agile project where it's possible to make code changes quickly (and roll back mistakes quickly), you may be able to make an evolutionary prototype with all the works. Your application's architecture should be supple and flexible enough to support quick change (e.g. naked objects design pattern or Rails-style MVC). It helps to have a development culture that encourages experimentation, and acknowledges that BDUF is no predictor of working, successful software. A: 4+1 views are good only for technical people. And only if they are interested enough. Remember those last dozen times you struggled to discuss use-case diagrams with the customer? The only thing I found that works with everybody is in fact showing them screens of your application. You said yourself: a picture is worth a thousand words. Curiously, there are two approaches that worked for me: * *Present to users a complete user manual (before even development is started), OR *Use mockups that don't look at all like a finished app: Discuss main screens of your app first. When satisfied, proceed discussing mockups but one scenario at a time. For option (1) you can use whatever you want, it doesn't really matter. For option (2) it's completely fine to start with pen and paper.
But soon you are better off using a specialized mockup tool (as soon as you need to edit, maintain or organize your mockups). I ended up writing my own mockup tool back in 2005, and it became pretty popular: MockupScreens. And here is the most complete list of mockup tools I know of. Many of those are free: http://c2.com/cgi/wiki?GuiPrototypingTools
{ "language": "en", "url": "https://stackoverflow.com/questions/155948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: OleDbException System Resources Exceeded The following code executes a simple insert command. If it is called 2,000 times consecutively (to insert 2,000 rows) an OleDbException with message = "System Resources Exceeded" is thrown. Is there something else I should be doing to free up resources? using (OleDbConnection conn = new OleDbConnection(connectionString)) using (OleDbCommand cmd = new OleDbCommand(commandText, conn)) { conn.Open(); cmd.ExecuteNonQuery(); } A: The system resources exceeded error is not coming from the managed code, it's coming from you killing your database (JET?) You are opening way too many connections, way too fast... Some tips: * *Avoid round trips by not opening a new connection for every single command, and perform the inserts using a single connection. *Ensure that database connection pooling is working. (Not sure if that works with OLEDB connections.) *Consider using a more optimized way to insert the data. Have you tried this? using (OleDbConnection conn = new OleDbConnection(connectionString)) { conn.Open(); while (IHaveData) { using (OleDbCommand cmd = new OleDbCommand(commandText, conn)) { cmd.ExecuteNonQuery(); } } } A: I tested this code out with an Access 2007 database with no exceptions (I went as high as 13000 inserts). However, what I noticed is that it is terribly slow as you are creating a connection every time. If you put the "using(connection)" outside the loop, it goes much faster. A: In addition to the above (connecting to the database only once), I would also like to make sure you're closing and disposing of your connections. As most objects in C# are memory-managed, connections and streams don't always have this luxury, so if objects like this aren't disposed of, they are not guaranteed to be cleaned up. This has the added effect of leaving that connection open for the life of your program. Also, if possible, I'd look into using Transactions.
I can't tell what you're using this code for, but OleDbTransactions are useful when inserting and updating many rows in a database. A: I am not sure about the specifics but I have run across a similar problem. We utilize an Access database with IIS to serve our clients. We do not have very many clients but there are a lot of connections being opened and closed during a single session. After about a week of work, we receive the same error and all connection attempts fail. To correct the problem, all we had to do was restart the worker processes. After some research, I found (of course) that Access does not perform well in this environment. Resources do not get released correctly and over time the executable runs out of them. To solve this problem, we are going to move to an Oracle database. If this does not fix the problem, I will keep you updated on my findings. A: This could be occurring because you are not disposing of the Connection and Command objects you create. Always dispose of the objects at the end: cmd.Dispose();
{ "language": "en", "url": "https://stackoverflow.com/questions/155959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What are best practices that you use when writing Objective-C and Cocoa? I know about the HIG (which is quite handy!), but what programming practices do you use when writing Objective-C, and more specifically when using Cocoa (or CocoaTouch). A: Use the LLVM/Clang Static Analyzer NOTE: Under Xcode 4 this is now built into the IDE. You use the Clang Static Analyzer to -- unsurprisingly -- analyse your C and Objective-C code (no C++ yet) on Mac OS X 10.5. It's trivial to install and use: * *Download the latest version from this page. *From the command-line, cd to your project directory. *Execute scan-build -k -V xcodebuild. (There are some additional constraints etc., in particular you should analyze a project in its "Debug" configuration -- see http://clang.llvm.org/StaticAnalysisUsage.html for details -- but that's more-or-less what it boils down to.) The analyser then produces a set of web pages for you that shows likely memory management and other basic problems that the compiler is unable to detect. A: This is a subtle one, but a handy one. If you're passing yourself as a delegate to another object, reset that object's delegate before you dealloc. - (void)dealloc { self.someObject.delegate = NULL; self.someObject = NULL; [super dealloc]; } By doing this you're ensuring that no more delegate methods will get sent. As you're about to dealloc and disappear into the ether you want to make sure that nothing can send you any more messages by accident. Remember self.someObject could be retained by another object (it could be a singleton or on the autorelease pool or whatever) and until you tell it "stop sending me messages!", it thinks your just-about-to-be-dealloced object is fair game. Getting into this habit will save you from lots of weird crashes that are a pain to debug. The same principle applies to Key Value Observation, and NSNotifications too.
Edit: Even more defensive, change: self.someObject.delegate = NULL; into: if (self.someObject.delegate == self) self.someObject.delegate = NULL; A: Be more functional. Objective-C is an object-oriented language, but the Cocoa framework is aware of functional style, and is designed in a functional style in many cases. * *There is separation of mutability. Use immutable classes primarily, and mutable objects secondarily. For instance, use NSArray primarily, and use NSMutableArray only when you need it. *There are pure functions. Not so many, but many of the framework APIs are designed like pure functions. Look at functions such as CGRectMake() or CGAffineTransformMake(). Obviously the pointer form looks more efficient, but indirect arguments passed by pointer can't be side-effect-free. Design structures purely as much as possible. Separate even state objects. Use -copy instead of -retain when passing a value to another object, because shared state can silently propagate mutations to values in other objects, and so can't be side-effect-free. If you receive a value from an external object, copy it. It's also important to design shared state to be as minimal as possible. However, don't be afraid of using impure functions too. * *There is lazy evaluation. See something like the -[UIViewController view] property. The view won't be created when the object is created; it'll be created the first time the caller reads the view property. A UIImage will not be loaded until it is actually drawn. There are many implementations like this. These kinds of designs are very helpful for resource management, but if you don't know the concept of lazy evaluation, it's not easy to understand their behavior. *There is closure. Use C blocks as much as possible. This will simplify your life greatly. But read up on block memory management once more before using them. *There is semi-automatic GC: NSAutoreleasePool. Use -autorelease primarily. Use manual -retain/-release secondarily, when you really need it
(e.g. memory optimization, explicit resource deletion) A: @kendell Instead of: @interface MyClass (private) - (void) someMethod; - (void) someOtherMethod; @end Use: @interface MyClass () - (void) someMethod; - (void) someOtherMethod; @end New in Objective-C 2.0. Class extensions are described in Apple's Objective-C 2.0 Reference. "Class extensions allow you to declare additional required API for a class in locations other than within the primary class @interface block" So they're part of the actual class - and NOT a (private) category in addition to the class. Subtle but important difference. A: The Apple-provided samples I saw treated the App delegate as a global data store, a data manager of sorts. That's wrongheaded. Create a singleton and maybe instantiate it in the App delegate, but stay away from using the App delegate as anything more than application-level event handling. I heartily second the recommendations in this blog entry. This thread tipped me off. A: Avoid autorelease Since you typically(1) don't have direct control over their lifetime, autoreleased objects can persist for a comparatively long time and unnecessarily increase the memory footprint of your application. Whilst on the desktop this may be of little consequence, on more constrained platforms this can be a significant issue. On all platforms, therefore, and especially on more constrained platforms, it is considered best practice to avoid using methods that would lead to autoreleased objects and instead you are encouraged to use the alloc/init pattern. Thus, rather than: aVariable = [AClass convenienceMethod]; where possible, you should instead use: aVariable = [[AClass alloc] init]; // do things with aVariable [aVariable release]; When you're writing your own methods that return a newly-created object, you can take advantage of Cocoa's naming convention to flag to the receiver that it must be released by prepending the method name with "new".
Thus, instead of: - (MyClass *)convenienceMethod { MyClass *instance = [[[self alloc] init] autorelease]; // configure instance return instance; } you could write: - (MyClass *)newInstance { MyClass *instance = [[self alloc] init]; // configure instance return instance; } Since the method name begins with "new", consumers of your API know that they're responsible for releasing the received object (see, for example, NSObjectController's newObject method). (1) You can take control by using your own local autorelease pools. For more on this, see Autorelease Pools. A: Some of these have already been mentioned, but here's what I can think of off the top of my head: * *Follow KVO naming rules. Even if you don't use KVO now, in my experience often times it's still beneficial in the future. And if you are using KVO or bindings, you need to know things are going work the way they are supposed to. This covers not just accessor methods and instance variables, but to-many relationships, validation, auto-notifying dependent keys, and so on. *Put private methods in a category. Not just the interface, but the implementation as well. It's good to have some distance conceptually between private and non-private methods. I include everything in my .m file. *Put background thread methods in a category. Same as above. I've found it's good to keep a clear conceptual barrier when you're thinking about what's on the main thread and what's not. *Use #pragma mark [section]. Usually I group by my own methods, each subclass's overrides, and any information or formal protocols. This makes it a lot easier to jump to exactly what I'm looking for. On the same topic, group similar methods (like a table view's delegate methods) together, don't just stick them anywhere. *Prefix private methods & ivars with _. I like the way it looks, and I'm less likely to use an ivar when I mean a property by accident. *Don't use mutator methods / properties in init & dealloc. 
I've never had anything bad happen because of it, but I can see the logic if you change the method to do something that depends on the state of your object. *Put IBOutlets in properties. I actually just read this one here, but I'm going to start doing it. Regardless of any memory benefits, it seems better stylistically (at least to me). *Avoid writing code you don't absolutely need. This really covers a lot of things, like making ivars when a #define will do, or caching an array instead of sorting it each time the data is needed. There's a lot I could say about this, but the bottom line is don't write code until you need it, or the profiler tells you to. It makes things a lot easier to maintain in the long run. *Finish what you start. Having a lot of half-finished, buggy code is the fastest way to kill a project dead. If you need a stub method that's fine, just indicate it by putting NSLog( @"stub" ) inside, or however you want to keep track of things. A: Write unit tests. You can test a lot of things in Cocoa that might be harder in other frameworks. For example, with UI code, you can generally verify that things are connected as they should be and trust that they'll work when used. And you can set up state & invoke delegate methods easily to test them. You also don't have public vs. protected vs. private method visibility getting in the way of writing tests for your internals. A: Golden Rule: If you alloc then you release! UPDATE: Unless you are using ARC A: Don't write Objective-C as if it were Java/C#/C++/etc. I once saw a team used to writing Java EE web applications try to write a Cocoa desktop application. As if it was a Java EE web application. There was a lot of AbstractFooFactory and FooFactory and IFoo and Foo flying around when all they really needed was a Foo class and possibly a Fooable protocol. Part of ensuring you don't do this is truly understanding the differences in the language. 
For example, you don't need the abstract factory and factory classes above because Objective-C class methods are dispatched just as dynamically as instance methods, and can be overridden in subclasses. A: Make sure you bookmark the Debugging Magic page. This should be your first stop when banging your head against a wall while trying to find the source of a Cocoa bug. For example, it will tell you how to find the method where you first allocated memory that later is causing crashes (like during app termination). A: Only release a property in the dealloc method. If you want to release memory that the property is holding, just set it to nil: self.<property> = nil; A: There are a few things I have started to do that I do not think are standard: 1) With the advent of properties, I no longer use "_" to prefix "private" class variables. After all, if a variable can be accessed by other classes, shouldn't there be a property for it? I always disliked the "_" prefix for making code uglier, and now I can leave it out. 2) Speaking of private things, I prefer to place private method definitions within the .m file in a class extension like so: #import "MyClass.h" @interface MyClass () - (void) someMethod; - (void) someOtherMethod; @end @implementation MyClass Why clutter up the .h file with things outsiders should not care about? The empty () works for private categories in the .m file, and issues compile warnings if you do not implement the methods declared. 3) I have taken to putting dealloc at the top of the .m file, just below the @synthesize directives. Shouldn't what you dealloc be at the top of the list of things you want to think about in a class? That is especially true in an environment like the iPhone.
3.6) When using an NSURLConnection, as a rule you may well want to implement the delegate method: - (NSCachedURLResponse *)connection:(NSURLConnection *)connection willCacheResponse:(NSCachedURLResponse *)cachedResponse { return nil; } I find most web calls are very singular and it's more the exception than the rule that you'll be wanting responses cached, especially for web service calls. Implementing the method as shown disables caching of responses. Also of interest are some good iPhone-specific tips from Joseph Mattiello (received in an iPhone mailing list). There are more, but these were the most generally useful I thought (note that a few bits have now been slightly edited from the original to include details offered in responses): 4) Only use double precision if you have to, such as when working with CoreLocation. Make sure you end your constants in 'f' to make gcc store them as floats. float val = someFloat * 2.2f; This is mostly important when someFloat may actually be a double: you don't need the mixed-mode math, since you're losing precision in 'val' on storage anyway. While floating-point numbers are supported in hardware on iPhones, it may still take more time to do double-precision arithmetic as opposed to single precision. References: * *Double vs float on the iPhone *iPhone/iPad double precision math On the older phones supposedly calculations operate at the same speed but you can have more single precision components in registers than doubles, so for many calculations single precision will end up being faster. 5) Set your properties as nonatomic. They're atomic by default and upon synthesis, semaphore code will be created to prevent multi-threading problems. 99% of you probably don't need to worry about this and the code is much less bloated and more memory-efficient when set to nonatomic. 6) SQLite can be a very, very fast way to cache large data sets. A map application for instance can cache its tiles into SQLite files. The most expensive part is disk I/O.
Avoid many small writes by sending BEGIN; and COMMIT; between large blocks. We use a 2-second timer for instance that resets on each new submit. When it expires, we send COMMIT;, which causes all your writes to go in one large chunk. SQLite stores transaction data to disk and doing this Begin/End wrapping avoids creation of many transaction files, grouping all of the transactions into one file. Also, SQL will block your GUI if it's on your main thread. If you have a very long query, it's a good idea to store your queries as static objects, and run your SQL on a separate thread. Make sure to wrap anything that modifies the database for query strings in @synchronized() {} blocks. For short queries just leave things on the main thread for easier convenience. More SQLite optimization tips are here; though the document appears out of date, many of the points are probably still good: http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html A: Try to avoid what I have now decided to call Newbiecategoryaholism. When newcomers to Objective-C discover categories they often go hog wild, adding useful little categories to every class in existence ("What? I can add a method to convert a number to roman numerals to NSNumber rock on!"). Don't do this. Your code will be more portable and easier to understand without dozens of little category methods sprinkled on top of two dozen foundation classes. Most of the time when you really think you need a category method to help streamline some code you'll find you never end up reusing the method. There are other dangers too, unless you're namespacing your category methods (and who besides the utterly insane ddribin is?) there is a chance that Apple, or a plugin, or something else running in your address space will also define the same category method with the same name with a slightly different side effect.... OK. Now that you've been warned, ignore the "don't do this" part. But exercise extreme restraint.
A: Resist subclassing the world. In Cocoa a lot is done through delegation and use of the underlying runtime that in other frameworks is done through subclassing. For example, in Java you use instances of anonymous *Listener subclasses a lot and in .NET you use your EventArgs subclasses a lot. In Cocoa, you don't do either — target-action is used instead. A: Sort strings as the user wants When you sort strings to present to the user, you should not use the simple compare: method. Instead, you should always use localized comparison methods such as localizedCompare: or localizedCaseInsensitiveCompare:. For more details, see Searching, Comparing, and Sorting Strings. A: Declared Properties You should typically use the Objective-C 2.0 Declared Properties feature for all your properties. If they are not public, add them in a class extension. Using declared properties makes the memory management semantics immediately clear, and makes it easier for you to check your dealloc method -- if you group your property declarations together you can quickly scan them and compare with the implementation of your dealloc method. You should think hard before not marking properties as 'nonatomic'. As The Objective C Programming Language Guide notes, properties are atomic by default, and incur considerable overhead. Moreover, simply making all your properties atomic does not make your application thread-safe. Also note, of course, that if you don't specify 'nonatomic' and implement your own accessor methods (rather than synthesising them), you must implement them in an atomic fashion. A: Think about nil values As this question notes, messages to nil are valid in Objective-C. Whilst this is frequently an advantage -- leading to cleaner and more natural code -- the feature can occasionally lead to peculiar and difficult-to-track-down bugs if you get a nil value when you weren't expecting it. A: Use NSAssert and friends. I use nil as a valid object all the time ...
especially sending messages to nil is perfectly valid in Obj-C. However, if I really want to make sure about the state of a variable, I use NSAssert and NSParameterAssert, which helps to track down problems easily. A: Simple but oft-forgotten one. According to spec: In general, methods in different classes that have the same selector (the same name) must also share the same return and argument types. This constraint is imposed by the compiler to allow dynamic binding. In which case all same-named selectors, even if in different classes, will be regarded as having identical return/argument types. Here is a simple example. @interface FooInt:NSObject{} -(int) print; @end @implementation FooInt -(int) print{ return 5; } @end @interface FooFloat:NSObject{} -(float) print; @end @implementation FooFloat -(float) print{ return 3.3; } @end int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; id f1=[[FooFloat alloc]init]; //prints 0, runtime considers [f1 print] to return int, as f1's type is "id" and FooInt precedes FooFloat NSLog(@"%f",[f1 print]); FooFloat* f2=[[FooFloat alloc]init]; //prints 3.3 expectedly as the static type is FooFloat NSLog(@"%f",[f2 print]); [f1 release]; [f2 release]; [pool drain]; return 0; } A: If you're using Leopard (Mac OS X 10.5) or later, you can use the Instruments application to find and track memory leaks. After building your program in Xcode, select Run > Start with Performance Tool > Leaks. Even if your app doesn't show any leaks, you may be keeping objects around too long. In Instruments, you can use the ObjectAlloc instrument for this. Select the ObjectAlloc instrument in your Instruments document, and bring up the instrument's detail (if it isn't already showing) by choosing View > Detail (it should have a check mark next to it). Under "Allocation Lifespan" in the ObjectAlloc detail, make sure you choose the radio button next to "Created & Still Living".
Now whenever you stop recording your application, selecting the ObjectAlloc tool will show you how many references there are to each still-living object in your application in the "# Net" column. Make sure you not only look at your own classes, but also the classes of your NIB files' top-level objects. For example, if you have no windows on the screen, and you see references to a still-living NSWindow, you may not have released it in your code. A: Clean up in dealloc. This is one of the easiest things to forget - esp. when coding at 150mph. Always, always, always clean up your attributes/member variables in dealloc. I like to use Objc 2 attributes - with the new dot notation - so this makes the cleanup painless. Often as simple as: - (void)dealloc { self.someAttribute = NULL; [super dealloc]; } This will take care of the release for you and set the attribute to NULL (which I consider defensive programming - in case another method further down in dealloc accesses the member variable again - rare but could happen). With GC turned on in 10.5, this isn't needed so much any more - but you might still need to clean up other resources you create; you can do that in the finalize method instead. A: All these comments are great, but I'm really surprised nobody mentioned Google's Objective-C Style Guide that was published a while back. I think they have done a very thorough job. A: Also, semi-related topic (with room for more responses!): What are those little Xcode tips & tricks you wish you knew about 2 years ago?. A: Don't forget that NSWindowController and NSViewController will release the top-level objects of the NIB files they govern. If you manually load a NIB file, you are responsible for releasing that NIB's top-level objects when you are done with them. A: One rather obvious one for a beginner to use: utilize Xcode's auto-indentation feature for your code.
Even if you are copy/pasting from another source, once you have pasted the code, you can select the entire block of code, right click on it, and then choose the option to re-indent everything within that block. Xcode will actually parse through that section and indent it based on brackets, loops, etc. It's a lot more efficient than hitting the space bar or tab key for each and every line. A: Don't use unknown strings as format strings When methods or functions take a format string argument, you should make sure that you have control over the content of the format string. For example, when logging strings, it is tempting to pass the string variable as the sole argument to NSLog: NSString *aString = // get a string from somewhere; NSLog(aString); The problem with this is that the string may contain characters that are interpreted as format strings. This can lead to erroneous output, crashes, and security problems. Instead, you should substitute the string variable into a format string: NSLog(@"%@", aString); A: Use standard Cocoa naming and formatting conventions and terminology rather than whatever you're used to from another environment. There are lots of Cocoa developers out there, and when another one of them starts working with your code, it'll be much more approachable if it looks and feels similar to other Cocoa code. Examples of what to do and what not to do: * *Don't declare id m_something; in an object's interface and call it a member variable or field; use something or _something for its name and call it an instance variable. *Don't name a getter -getSomething; the proper Cocoa name is just -something. *Don't name a setter -something:; it should be -setSomething: *The method name is interspersed with the arguments and includes colons; it's -[NSObject performSelector:withObject:], not NSObject::performSelector. *Use inter-caps (CamelCase) in method names, parameters, variables, class names, etc. rather than underbars (underscores). 
*Class names start with an upper-case letter, variable and method names with lower-case. Whatever else you do, don't use Win16/Win32-style Hungarian notation. Even Microsoft gave up on that with the move to the .NET platform. A: IBOutlets Historically, memory management of outlets has been poor. Current best practice is to declare outlets as properties: @interface MyClass : NSObject { NSTextField *textField; } @property (nonatomic, retain) IBOutlet NSTextField *textField; @end Using properties makes the memory management semantics clear; it also provides a consistent pattern if you use instance variable synthesis. A: I know I overlooked this when first getting into Cocoa programming. Make sure you understand memory management responsibilities regarding NIB files. You are responsible for releasing the top-level objects in any NIB file you load. Read Apple's Documentation on the subject. A: Turn on all GCC warnings, then turn off those that are regularly caused by Apple's headers to reduce noise. Also run Clang static analysis frequently; you can enable it for all builds via the "Run Static Analyzer" build setting. Write unit tests and run them with each build. A: Variables and properties 1/ Keeping your headers clean, hiding implementation Don't include instance variables in your header. Put private variables into the class continuation as properties. Declare public variables as public properties in your header. If a property should only be read, declare it as readonly and override it as readwrite in the class continuation. Basically I am not using variables at all, only properties. 2/ Give your properties a non-default variable name, example: @synthesize property = property_; Reason 1: You will catch errors caused by forgetting "self." when assigning the property. Reason 2: From my experiments, the Leak Analyzer in Instruments has problems detecting leaking properties with the default name. 3/ Never use retain or release directly on properties (or only in very exceptional situations).
In your dealloc just assign them nil. Retain properties are meant to handle retain/release by themselves. You never know if a setter is not, for example, adding or removing observers. You should use the variable directly only inside its setter and getter. Views 1/ Put every view definition into a xib, if you can (the exceptions are usually dynamic content and layer settings). It saves time (it's easier than writing code), it's easy to change and it keeps your code clean. 2/ Don't try to optimize views by decreasing the number of views. Don't create a UIImageView in your code instead of in the xib just because you want to add subviews to it. Use the UIImageView as a background instead. The view framework can handle hundreds of views without problems. 3/ IBOutlets don't always have to be retained (or strong). Note that most of your IBOutlets are part of your view hierarchy and thus implicitly retained. 4/ Release all IBOutlets in viewDidUnload. 5/ Call viewDidUnload from your dealloc method. It is not implicitly called. Memory 1/ Autorelease objects when you create them. Many bugs are caused by moving your release call into one if-else branch or after a return statement. Release instead of autorelease should be used only in exceptional situations - e.g. when you are waiting for a runloop and you don't want your object to be autoreleased too early. 2/ Even if you are using Automatic Reference Counting, you have to understand perfectly how retain-release methods work. Using retain-release manually is not more complicated than ARC; in both cases you have to think about leaks and retain cycles. Consider using retain-release manually on big projects or complicated object hierarchies. Comments 1/ Make your code self-documenting. Every variable name and method name should say what it does. If code is written correctly (you need a lot of practice in this), you won't need any code comments (not the same as documentation comments).
Algorithms can be complicated but the code should always be simple. 2/ Sometimes, you'll need a comment. Usually to describe a non-apparent code behavior or hack. If you feel you have to write a comment, first try to rewrite the code to be simpler and without the need for comments. Indentation 1/ Don't increase indentation too much. Most of your method code should be indented at the method level. Nested blocks (if, for etc.) decrease readability. If you have three nested blocks, you should try to put the inner blocks into a separate method. Four or more nested blocks should never be used. If most of your method code is inside an if, negate the if condition, example: if (self) { //... long initialization code ... } return self; if (!self) { return nil; } //... long initialization code ... return self; Understand C code, mainly C structs Note that Obj-C is only a light OOP layer over the C language. You should understand how basic code structures in C work (enums, structs, arrays, pointers etc). Example: view.frame = CGRectMake(view.frame.origin.x, view.frame.origin.y, view.frame.size.width, view.frame.size.height + 20); is the same as: CGRect frame = view.frame; frame.size.height += 20; view.frame = frame; And many more Maintain your own coding standards document and update it often. Try to learn from your bugs. Understand why a bug was created and try to avoid it using coding standards. Our coding standards are currently about 20 pages, a mix of the Java Coding Standards, the Google Obj-C/C++ Standards and our own additions. Document your code, use standard indentation, white space and blank lines in the right places etc. A: #import "MyClass.h" @interface MyClass () - (void) someMethod; - (void) someOtherMethod; @end @implementation MyClass
{ "language": "en", "url": "https://stackoverflow.com/questions/155964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "346" }
Q: jQuery match multiple attributes I have the following markup, and I want to make the All radio button checked. <ul> <li><input type="radio" value="All" name="Foo"/>All</li> <li><input type="radio" value="New" name="Foo"/>New</li> <li><input type="radio" value="Removed" name="Foo"/>Removed</li> <li><input type="radio" value="Updated" name="Foo"/>Updated</li> </ul> I'd like to match via attribute, but I need to match on 2 attributes, @name='Foo' and @value='All'. Something like this: $("input[@name='Foo' @value='all']").attr('checked','checked'); Can someone show how this can be done? A: The following HTML file shows how you can do this: <html> <head> <script type="text/javascript" src="jquery-1.2.6.pack.js"></script> <script type="text/javascript"> $(document).ready(function(){ $("a").click(function(event){ $("input[name='Foo'][value='All']").attr('checked','checked'); event.preventDefault(); }); }); </script> </head> <body> <ul> <li><input type="radio" value="All" name="Foo" />All</li> <li><input type="radio" value="New" name="Foo" />New</li> <li><input type="radio" value="Removed" name="Foo" />Removed</li> <li><input type="radio" value="Updated" name="Foo" />Updated</li> </ul> <a href="" >Click here</a> </body> </html> When you click on the link, the desired radio button is selected. The important line is the one setting the checked attribute. A: I was beating my head against a wall similar to this and just want to point out that in jQuery 1.3 the syntax used in the accepted answer is the ONLY syntax that will work. The questioner uses the @ syntax for the expression which does not work at all in jQuery. Hopefully this helps the next guy to come across this question via Google =p To be clear, you have to use jQuery('input[name=field1][val=checked]') and not jQuery('input[@name=field1][@val=checked]')
{ "language": "en", "url": "https://stackoverflow.com/questions/155977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: VB.Net MessageBox.Show() moves my form to the back I have an MDI application. When I show a message box using MessageBox.Show(), the entire application disappears behind all of my open windows when I dismiss the message box. The code is not doing anything special. In fact, here is the line that invokes the message box from within an MDI Child form: MessageBox.Show(String.Format("{0} saved successfully.", Me.BusinessUnitTypeName), "Save Successful", MessageBoxButtons.OK, MessageBoxIcon.Information, MessageBoxDefaultButton.Button1, MessageBoxOptions.DefaultDesktopOnly) Me.BusinessUnitTypeName() is a read only property getter that returns a string, depending upon the value of a member variable. There are no side effects in this property. Any ideas? A: Remove the last parameter, MessageBoxOptions.DefaultDesktopOnly. From MSDN: DefaultDesktopOnly will cause the application that raised the MessageBox to lose focus. The MessageBox that is displayed will not use visual styles. For more information, see Rendering Controls with Visual Styles. The last parameter allows communication of a background Windows Service with the active desktop through means of csrss.exe! See Bart de Smet's blog post for details. A: Remove the MessageBoxOptions.DefaultDesktopOnly parameter and it will work correctly. DefaultDesktopOnly specifies that "The message box is displayed on the active desktop" which causes the focus loss. A: These answers are correct, but I wanted to add another point. I came across this question while working with someone else's code. A simple message box was causing the front most window to move to the back: MessageBox.Show("Hello"). Turns out, there was a BindingSource.Endedit command before the MessageBox. The BindingSource wasn't connected to any controls yet, but it caused the window to change z-positions. I am only including this note since my search brought me to this question and I thought it might be helpful to someone else.
{ "language": "en", "url": "https://stackoverflow.com/questions/155996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I assign an event to a timer at runtime in vb.net? Given this: Public Sub timReminder_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) If DateTime.Now() > g_RemindTime Then Reminders.ShowDialog() timReminder.Enabled = False End If End Sub I want to be able to say this (as I would in Delphi): timReminder.Tick = timReminder_Tick But I get errors when I try it. Does anyone know how I can assign a custom event to a timer's on-tick event at runtime in VB.NET? A: Use the 'AddHandler' and 'AddressOf' keywords to add a handler to the Tick event. AddHandler timReminder.Tick, AddressOf timReminder_Tick A: AddHandler is a very powerful tool. Try using it to add a handler to a series of controls within a collection. The handler can add validation or error checking to all types of controls and will work with whatever you add to the form.
{ "language": "en", "url": "https://stackoverflow.com/questions/155998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Protecting internal view layer template pages in servlet applications I have a very basic question about MVC web applications in Java. Since the olden days of raw JSP up until current technologies like Seam, a very basic pattern has always been the internal dispatch from the controller that initially accepted the request to the view layer that creates the output to be sent to the client. This internal dispatch is generally done (although the mechanism may be hidden through an extra layer of configuration) by asking the servlet container for a new resource using a URL. The mapping of these URLs is done by the same web.xml that also defines the "real" URL to the outside. Unless special measures are taken, it is often possible to access the view layer directly. Witness the Seam "registration" demo, where you can bypass "register.seam" and directly go to "registered.xhtml". This is a potential security problem. At the very least, it leaks view template source code. I am aware that this is only a basic sample application, but it is also strange that any extra measures should need to be taken to declare these internal resources invisible to the outside. What is the easiest way to restrict URL entry points? Is there maybe something like the "WEB-INF" directory, a magic URL path component that can only be accessed by internal requests? A: You can prevent access to internal resources by using a security-constraint in your web.xml deployment descriptor. For example, I use the following configuration to prevent direct access to JSPs: <!-- Prevent direct access to JSPs. --> <security-constraint> <web-resource-collection> <web-resource-name>JSP templates</web-resource-name> <url-pattern>*.jsp</url-pattern> </web-resource-collection> <auth-constraint/> <!-- i.e. nobody --> </security-constraint> A: I would not recommend allowing Internet requests to directly access your appserver. I'd throw a webserver in front, then in it, allow requests for only certain kinds of URLs.
Don't want people to go to foo.com/jsps? Restrict it once and for all there. There's a bit of a conversation on the topic here: hiding pages behind WEB-INF? A: One way to handle this would be to construct a Servlet Filter which examines the request path of every request and handles each request accordingly. Here is a link that could help get you started, JavaServer Pages (JSP) and JSTL - Access control with JSP A: I have now seen a couple of applications that put their internal JSP into WEB-INF/jsp. That seems to do the trick, at least for JSP, and also for Velocity. It does not seem to work for JSF, though.
{ "language": "en", "url": "https://stackoverflow.com/questions/156002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is it possible to rename an SQL Server 2005 instance I would like to change the name of my SQL Server instance. Is there a simple way of doing this or is a significant effort required? Note, this is a named instance - not the default instance. A: The only way is a reinstall. See this similar thread for more info: SQL Server, convert a named instance to default instance? A: Or you could try this method: http://groups.google.com/group/microsoft.public.sqlserver.server/browse_thread/thread/544c4eaf43ddfaf3/f1bdcd1ec9cab158#f1bdcd1ec9cab158 A: I have seen a few makeshift ways of doing this, but I don't have confidence in any of them. I think I will simply install a new instance and transfer my information over. A: Renaming doesn't work well with the registry. Install a new instance. A: Although there is no simple way of renaming a SQL Server instance, one can create SQL aliases.
{ "language": "en", "url": "https://stackoverflow.com/questions/156004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Perl: Grabbing the nth and mth delimited words from each line in a file Because of the more tedious way of adding hosts to be monitored in Nagios (it requires defining a host object, as opposed to the previous program which only required the IP and hostname), I figured it'd be best to automate this, and it'd be a great time to learn Perl, because all I know at the moment is C/C++ and Java. The file I read from looks like this: xxx.xxx.xxx.xxx hostname #comments. i.dont. care. about All I want are the first 2 bunches of characters. These are obviously space delimited, but for the sake of generality, it might as well be anything. To make it more general, why not the first and third, or fourth and tenth? Surely there must be some regex action involved, but I'll leave that tag off for the moment, just in case. A: The one-liner is great, if you're not writing more Perl to handle the result. More generally though, in the context of a larger Perl program, you would either write a custom regular expression, for example: if($line =~ m/(\S+)\s+(\S+)/) { $ip = $1; $hostname = $2; } ... or you would use the split operator. my @arr = split(/ /, $line); $ip = $arr[0]; $hostname = $arr[1]; Either way, add logic to check for invalid input. A: Let's turn this into code golf! Based on David's excellent answer, here's mine: perl -ane 'print "@F[0,1]\n";' Edit: A real golf submission would look more like this (shaving off five strokes): perl -ape '$_="@F[0,1] "' but that's less readable for this question's purposes. :-P A: Here's a general solution (if we step away from code-golfing a bit). #!/usr/bin/perl -n chop; # strip newline (in case next line doesn't strip it) s/#.*//; # strip comments next unless /\S/; # don't process line if it has nothing (left) @fields = (split)[0,1]; # split line, and get wanted fields print join(' ', @fields), "\n"; Normally split splits by whitespace. 
If that's not what you want (e.g., parsing /etc/passwd), you can pass a delimiter as a regex: @fields = (split /:/)[0,2,4..6]; Of course, if you're parsing colon-delimited files, chances are also good that such files don't have comments and you don't have to strip them. A: A simple one-liner is perl -nae 'print "$F[0] $F[1]\n";' you can change the delimiter with -F A: David Nehme said: perl -nae 'print "$F[0] $F[1]\n";' which uses the -a switch. I had to look that one up: -a turns on autosplit mode when used with a -n or -p. An implicit split command to the @F array is done as the first thing inside the implicit while loop produced by the -n or -p. You learn something every day. -n causes each line to be passed to LINE: while (<>) { ... # your program goes here } And finally -e is a way to directly enter a single line of a program. You can have more than -e. Most of this was a rip of the perlrun(1) manpage. A: Since ray asked, I thought I'd rewrite my whole program without using Perl's implicitness (except the use of <ARGV>; that's hard to write out by hand). This will probably make Python people happier (braces notwithstanding :-P): while (my $line = <ARGV>) { chop $line; $line =~ s/#.*//; next unless $line =~ /\S/; @fields = (split ' ', $line)[0,1]; print join(' ', @fields), "\n"; } Is there anything I missed? Hopefully not. The ARGV filehandle is special. It causes each named file on the command line to be read, unless none are specified, in which case it reads standard input. Edit: Oh, I forgot. split ' ' is magical too, unlike split / /. The latter just matches a space. The former matches any amount of any whitespace. This magical behaviour is used by default if no pattern is specified for split. (Some would say, but what about /\s+/? ' ' and /\s+/ are similar, except for how whitespace at the beginning of a line is treated. So ' ' really is magical.) The moral of the story is, Perl is great if you like lots of magical behaviour.
If you don't have a bar of it, use Python. :-P A: To Find Nth to Mth Character In Line No. L --- Example For Finding Label @echo off REM Next line = Set command value to a file OR Just Choose Your File By Skipping The Line vol E: > %temp%\justtmp.txt REM Vol E: = Find Volume Lable Of Drive E REM Next Line to choose line line no. +0 = line no. 1 for /f "usebackq delims=" %%a in (`more +0 %temp%\justtmp.txt`) DO (set findstringline=%%a& goto :nextstep) :nextstep REM Next line to read nth to mth Character here 22th Character to 40th Character set result=%findstringline:~22,40% echo %result% pause exit /b Save as find label.cmd The Result Will Be Your Drive E Label Enjoy
{ "language": "en", "url": "https://stackoverflow.com/questions/156009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Haskell syntax for a case expression in a do block I can't quite figure out this syntax problem with a case expression in a do block. What is the correct syntax? If you could correct my example and explain it that would be the best. module Main where main = do putStrLn "This is a test" s <- foo putStrLn s foo = do args <- getArgs return case args of [] -> "No Args" [s]-> "Some Args" A little update. My source file was a mix of spaces and tabs and it was causing all kinds of problems. Just a tip for any one else starting in Haskell. If you are having problems check for tabs and spaces in your source code. A: return is an (overloaded) function, and it's not expecting its first argument to be a keyword. You can either parenthesize: module Main where import System(getArgs) main = do putStrLn "This is a test" s <- foo putStrLn s foo = do args <- getArgs return (case args of [] -> "No Args" [s]-> "Some Args") or use the handy application operator ($): foo = do args <- getArgs return $ case args of [] -> "No Args" [s]-> "Some Args" Stylewise, I'd break it out into another function: foo = do args <- getArgs return (has_args args) has_args [] = "No Args" has_args _ = "Some Args" but you still need to parenthesize or use ($), because return takes one argument, and function application is the highest precedence. A: Equivalently: foo = do args <- getArgs case args of [] -> return "No Args" [s]-> return "Some Args" It's probably preferable to do as wnoise suggests, but this might help someone understand a bit better.
{ "language": "en", "url": "https://stackoverflow.com/questions/156013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How do you store Date ranges, which are actually timestamps Java & Oracle both have a timestamp type called Date. Developers tend to manipulate these as if they were calendar dates, which I've seen cause nasty one-off bugs. * *For a basic date quantity you can simply chop off the time portion upon input, i.e., reduce the precision. But if you do that with a date range, (e.g.: 9/29-9/30), the difference between these two values is 1 day, rather than 2. Also, range comparisons require either 1) a truncate operation: start < trunc(now) <= end, or 2) arithmetic: start < now < (end + 24hrs). Not horrible, but not DRY. *An alternative is to use true timestamps: 9/29 00:00:00 - 10/1 00:00:00. (midnight-to-midnight, so does not include any part of Oct). Now durations are intrinsically correct, and range comparisons are simpler: start <= now < end. Certainly cleaner for internal processing, however end dates do need to be converted upon initial input (+1), and for output (-1), presuming a calendar date metaphor at the user level. How do you handle date ranges on your project? Are there other alternatives? I am particularly interested in how you handle this on both the Java and the Oracle sides of the equation. A: Here's how we do it. * *Use timestamps. *Use Half-open intervals for comparison: start <= now < end. Ignore the whiners who insist that BETWEEN is somehow essential to successful SQL. With this a series of date ranges is really easy to audit. The database value for 9/30 to 10/1 encompass one day (9/30). The next interval's start must equal the previous interval's end. That interval[n-1].end == interval[n].start rule is handy for audit. When you display, if you want, you can display the formatted start and end-1. Turns out, you can educate people to understand that the "end" is actually the first day the rule is no longer true. So "9/30 to 10/1" means "valid starting 9/30, no longer valid starting 10/1". A: Oracle has the TIMESTAMP datatype. 
It stores the year, month, and day of the DATE datatype, plus hour, minute, second and fractional second values. Here is a thread on asktom.oracle.com about date arithmetic. A: I second what S.Lott explained. We have a product suite which makes extensive use of date time ranges and it has been one of our lessons-learned to work with ranges like that. By the way, we call the end date exclusive end date if it is not part of the range anymore (IOW, a half open interval). In contrast, it is an inclusive end date if it counts as part of the range which only makes sense if there is no time portion. Users typically expect input/output of inclusive date ranges. At any rate, convert user input as soon as possible to exclusive end date ranges, and convert any date range as late as possible when it has to be shown to the user. On the database, always store exclusive end date ranges. If there is legacy data with inclusive end date ranges, either migrate them on the DB if possible or convert to exclusive end date range as soon as possible when the data is read. A: I use Oracle's date data type and educate developers on the issue of time components affecting boundary conditions. A database constraint will also prevent the accidental specification of a time component in a column that should have none and also tells the optimizer that none of the values have a time component. For example, the constraint CHECK (MY_DATE=TRUNC(MY_DATE)) prevents a value with a time other than 00:00:00 being placed into the my_date column, and also allows Oracle to infer that a predicate such as MY_DATE = TO_DATE('2008-09-12 15:00:00') will never be true, and hence no rows will be returned from the table because it can be expanded to: MY_DATE = TO_DATE('2008-09-12 15:00:00') AND TO_DATE('2008-09-12 15:00:00') = TRUNC(TO_DATE('2008-09-12 15:00:00')) This is automatically false of course. 
Although it is sometimes tempting to store dates as numbers such as 20080915 this can cause query optimization problems. For example, how many legal values are there between 20,071,231 and 20,070,101? How about between the dates 31-Dec-2007 abnd 01-Jan-2008? It also allows illegal values to be entered, such as 20070100. So, if you have dates without time components then defining a range becomes easy: select ... from ... where my_date Between date '2008-01-01' and date '2008-01-05' When there is a time component you can do one of the following: select ... from ... where my_date >= date '2008-01-01' and my_date < date '2008-01-06' or select ... from ... where my_date Between date '2008-01-01' and date '2008-01-05'-(1/24/60/60) Note the use of (1/24/60/60) instead of a magic number. It's pretty common in Oracle to perform date arithmetic by adding defined fractions of a day ... 3/24 for three hours, 27/24/60 for 27 minutes. Oracle math of this type is exact and doesn't suffer rounding errors, so: select 27/24/60 from dual; ... gives 0.01875, not 0.01874999999999 or whatever. A: I don't see the Interval datatypes posted yet. Oracle also has datatypes for your exact scenario. There are INTERVAL YEAR TO MONTH and INTERVAL DAY TO SECOND datatypes in Oracle as well. From the 10gR2 docs. INTERVAL YEAR TO MONTH stores a period of time using the YEAR and MONTH datetime fields. This datatype is useful for representing the difference between two datetime values when only the year and month values are significant. INTERVAL YEAR [(year_precision)] TO MONTH where year_precision is the number of digits in the YEAR datetime field. The default value of year_precision is 2. INTERVAL DAY TO SECOND Datatype INTERVAL DAY TO SECOND stores a period of time in terms of days, hours, minutes, and seconds. This datatype is useful for representing the precise difference between two datetime values. 
Specify this datatype as follows: INTERVAL DAY [(day_precision)] TO SECOND [(fractional_seconds_precision)] where day_precision is the number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2. fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6. You have a great deal of flexibility when specifying interval values as literals. Please refer to "Interval Literals" for detailed information on specify interval values as literals. Also see "Datetime and Interval Examples" for an example using intervals. A: Based upon your first sentence, you're stumbling upon one of the hidden "features" (i.e. bugs) of Java: java.util.Date should have been immutable but it ain't. (Java 7 promises to fix this with a new date/time API.) Almost every enterprise app counts on various temporal patterns, and at some point you will need to do arithmetic on date and time. Ideally, you could use Joda time, which is used by Google Calendar. If you can't do this, I guess an API that consists of a wrapper around java.util.Date with computational methods similar to Grails/Rails, and of a range of your wrapper (i.e. an ordered pair indicating the start and end of a time period) will be sufficient. On my current project (an HR timekeeping application) we try to normalize all our Dates to the same timezone for both Oracle and Java. Fortunately, our localization requirements are lightweight (= 1 timezone is enough). When a persistent object doesn't need finer precision than a day, we use the timestamp as of midnight. I would go further and insist upon throwing away the extra milli-seconds to the coarsest granularity that a persistent object can tolerate (it will make your processing simpler). A: Based on my experiences, there are four main ways to do it: 1) Convert the date to an epoch integer (seconds since 1st Jan 1970) and store it in the database as an integer. 
2) Convert the date to a YYYYMMDDHHMMSS integer and store it in the database as an integer. 3) Store it as a date 4) Store it as a string I've always stuck with 1 and 2, because it enables you to perform quick and simple arithmetic with the date and not rely on the underlying database functionality. A: All dates can be unambiguously stored as GMT timestamps (i.e. no timezone or daylight saving headaches) by storing the result of getTime() as a long integer. In cases where day, week, month, etc. manipulations are needed in database queries, and when query performance is paramount, the timestamps (normalized to a higher granularity than milliseconds) can be linked to a date breakdown table that has columns for the day, week, month, etc. values so that costly date/time functions don't have to be used in queries. A: Alan is right - Joda Time is great. java.util.Date and Calendar are just a shame. If you need timestamps, use the Oracle date type with the time, and name the column with some kind of suffix like _tmst. When you read the data into Java, get it into a Joda-Time DateTime object. To make sure the timezone is right, consider that there are specific data types in Oracle that will store the timestamps with the timezone. Or you can create another column in the table to store the timezone ID. Values for the timezone ID should be standard full-name IDs for timezones; see http://java.sun.com/j2se/1.4.2/docs/api/java/util/TimeZone.html#getTimeZone%28java.lang.String%29 . If you use another column for the TZ data, then when you read the data into Java, use a DateTime object but set the timezone on it using .withZoneRetainFields. If you only need the date data (no timestamp), then use the date type in the database with no time. Again, name it well. In this case use the DateMidnight object from Joda-Time. Bottom line: leverage the type system of the database and the language you are using.
Learn them and reap the benefits of having expressive api and language syntax to deal with your problem. A: UPDATE: The Joda-Time project is now in maintenance mode. Its team advises migration to the java.time classes built into Java. Joda-Time Joda-Time offers 3 classes for representing a span of time: Interval, Duration, and Period. The ISO 8601 standard specifies how to format strings representing a Duration and an Interval. Joda-Time both parses and generates such strings. Time zone is a crucial consideration. Your database should be storing its date-time values in UTC. But your business logic may need to consider time zones. The beginning of a "day" depends on time zone. By the way, use proper time zone names rather than 3 or 4 letter codes. The correct answer by S.Lott wisely advises to use Half-Open logic, as that usually works best for date-time work. The beginning of a span of time is inclusive while the ending is exclusive. Joda-Time uses half-open logic in its methods. DateTimeZone timeZone_NewYork = DateTimeZone.forID( "America/New_York" ); DateTime start = new DateTime( 2014, 9, 29, 15, 16, 17, timeZone_NewYork ); DateTime stop = new DateTime( 2014, 9, 30, 1, 2, 3, timeZone_NewYork ); int daysBetween = Days.daysBetween( start, stop ).getDays(); Period period = new Period( start, stop ); Interval interval = new Interval( start, stop ); Interval intervalWholeDays = new Interval( start.withTimeAtStartOfDay(), stop.plusDays( 1 ).withTimeAtStartOfDay() ); DateTime lateNight29th = new DateTime( 2014, 9, 29, 23, 0, 0, timeZone_NewYork ); boolean containsLateNight29th = interval.contains( lateNight29th ); Dump to console… System.out.println( "start: " + start ); System.out.println( "stop: " + stop ); System.out.println( "daysBetween: " + daysBetween ); System.out.println( "period: " + period ); // Uses format: PnYnMnDTnHnMnS System.out.println( "interval: " + interval ); System.out.println( "intervalWholeDays: " + intervalWholeDays ); System.out.println( 
"lateNight29th: " + lateNight29th ); System.out.println( "containsLateNight29th: " + containsLateNight29th ); When run… start: 2014-09-29T15:16:17.000-04:00 stop: 2014-09-30T01:02:03.000-04:00 daysBetween: 0 period: PT9H45M46S interval: 2014-09-29T15:16:17.000-04:00/2014-09-30T01:02:03.000-04:00 intervalWholeDays: 2014-09-29T00:00:00.000-04:00/2014-10-01T00:00:00.000-04:00 lateNight29th: 2014-09-29T23:00:00.000-04:00 containsLateNight29th: true A: Im storing all dates in milliseconds. I do not use timestamps/datetime fields at all. So, i have to manipulate it as longs. It means i do not use 'before', 'after', 'now' keywords in my sql queries.
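Several of the answers converge on the same half-open convention: store true timestamps and compare with start <= t < end. As a small, language-neutral illustration of that logic (this sketch is mine, not from the answers), using Python's datetime:

```python
# Half-open date-range sketch (start <= t < end), as several answers recommend.
# Illustrative only; names and values here are invented for the example.
from datetime import datetime

def contains(start, end, t):
    """True if t falls inside the half-open interval [start, end)."""
    return start <= t < end

# A "9/29-9/30" calendar range stored midnight-to-midnight: the exclusive
# end is the first moment that is no longer part of the range.
start = datetime(2008, 9, 29)   # 9/29 00:00:00, inclusive
end = datetime(2008, 10, 1)     # 10/1 00:00:00, exclusive

# The duration comes out intrinsically correct: 2 days, not 1.
print((end - start).days)                                       # 2

# Range checks need no TRUNC and no +24h arithmetic.
print(contains(start, end, datetime(2008, 9, 30, 23, 59, 59)))  # True
print(contains(start, end, datetime(2008, 10, 1)))              # False

# Adjacent ranges audit cleanly: previous.end == next.start.
next_start = end
print(next_start == end)                                        # True
```

Only the user-facing input/output layer converts between this representation and the inclusive "9/29 to 9/30" calendar metaphor.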
{ "language": "en", "url": "https://stackoverflow.com/questions/156032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: In Facebook app, is there a way to link directly to "join a group" I'd like to have a link for a user to join a Facebook group from within my Facebook application. Here is the link on Facebook's "display a group" page (minus a longer referrer part), but the group id is encrypted: http://www.new.facebook.com/group.php?sid=c431b3cfc02765def331081f8b71ffbd Anyone know how to either encrypt a group id the Facebook way or otherwise build a link that adds a user to a group? Thanks! A: Can you not just use the group id? This works for me: http://www.new.facebook.com/group.php?gid={ID}
{ "language": "en", "url": "https://stackoverflow.com/questions/156033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can you limit TFS Check-In Notes to a custom path? You can limit "Check-In Policy" rules via the "Custom Paths" policy. But the "Check-in Notes" tab doesn't seem to fit into the same system. Why isn't "Check-In Notes" just another "Check-In Policy"? I'm using Team Foundation Server 2008 SP1. A: We had a similar problem some time ago. For some sub tree we wanted to require entering a code reviewer. I ended up implementing a custom policy and used the Custom Path Policy to restrict it to certain folders. That works well, except that you have to deploy your policy assembly and TFS has no built-in mechanism for that, yet. A: That's an interesting question - the short answer is you cannot. I have run into the issue myself a lot where people get check-in notes and check-in policies confused because, while very different in implementation on the server, they often serve similar purposes. Check-in notes are bits of structured meta-data that you want to collect on every check-in to a Team Project. They can be things like who was the code reviewer or a reference to a ticket in an external CRM system or something. You can make them required, or just have them defined for people to optionally fill out. Check-in policies are bits of code that run on the client at the point of check-in that get a say in whether the check-in should be allowed or not. These are useful for checking things like whether you have associated a check-in with a work item, given it a comment, or whether the code you are checking in passes certain key static code analysis rules (such as basic checking for SQL injection attacks etc). If a check-in policy fails in the evaluation of the check-in then the user gets alerted and they get the ability to fix the problem or check in anyway with a check-in policy override that can easily be reported on or alerted for by the TFS administrator. Both check-in notes and check-in policies are defined and scoped at the Team Project level. 
However, Microsoft got feedback that some people would like check-in policies to be applied to specific paths in version control rather than just the Team Project, and so the Custom Path Policy was invented. The Custom Path Policy is a bit of a hack that allows you to wrap check-in policies inside the custom path policy. The custom path gets evaluated on every check-in, and if it contains files inside the defined path then the wrapped check-in policies are evaluated for those files. The Custom Path Policy ships in the TFS Power Tools and is not part of the "Out The Box" TFS experience. So, to answer your question a different way - I suspect the answer is "because that's the way it was designed and not enough people have asked for it to be changed". If you want to leave feedback at http://connect.microsoft.com/VisualStudio, I know they take customer feedback very seriously.
{ "language": "en", "url": "https://stackoverflow.com/questions/156035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you manage database revisions on a medium sized project with branches? At work we have 4 people working together on a few different projects. For each project we each have a local copy we work on and then there is a development, staging, and live deployment, along with any branches we have (we use subversion). Our database is MySQL. So my question is, what is a good way to manage which revisions to the database have been made to each deployment (and for the developers their local copies). Right now each change goes into a text file that is timestamped in the name and put into a folder under the project. This isn't working very well to be honest.. I need a solution that will help keep track of what has been applied where. A: http://odetocode.com/Blogs/scott/archive/2008/01/30/11702.aspx The above blog brought us to our current database version control system. Simply put, no DB changes are made without an update script and all update scripts are in our source control repository. We only manage schema changes but you may also be able/willing to consider keeping dumps of your data available in version control as well; creating such files is a pretty trivial exercise using mysqldump. Our solution differs from the solution presented in the blog in one key manner: it's not automated. We have to hand apply database updates, etc. Though this can be slightly time consuming, it postponed some of the effort a fully automated system would have required. One thing we did automate however, was the db version tracking in the software: this was pretty simple and it ensures that our software is aware of the database it's running against and will ONLY run if it knows the schema it's working with. The hardest part of our solution was how to merge updates from our branches into our trunk. We spent some time to develop a workflow to address the possibility of two developers trying to merge branches with DB updates at the same time and how to handle it. 
We eventually settled on locking a file in version control (the file in question for us is actually a table mapping software version to db version which assists in our manual management strategy), much like you would a thread's critical section, and the developer who gets the lock goes about their update of the trunk. When completed, the other developer would be able to lock and it is their responsibility to make any changes necessary to their scripts to ensure that expected version collisions and other bad juju are avoided. A: We keep all of our database scripts (data and schema/ddl) in version control. We also keep a central catalog of the changes. When a developer makes a change to a schema/DDL file or adds a script that changes the data in some way, those files are added to the catalog, along with the SVN commit number. We have put together a small utility in-house that reads the catalog changes and builds a large update script based on the contents of the catalog by grabbing the contents from each revision in the catalog and applying them. The concept is pretty similar to the DBDeploy tool, which I believe originally came from Thoughtworks, so you may be able to utilize it. It will at least give you a good place to start, from which point you can customize a solution more directly suited to your needs. Best of luck! A: If your database maps nicely to a set of data access objects, consider using 'migrations'. The idea is to store your data model as application code with steps for moving forward and backward through each database version. I believe Rails did it first. Java has at least one project. And here's a .NET migration library. To change versions, you run a simple script that steps through all of the up or down versions to get you to the version you want. The beauty of it is, you check your migrations into the same source repository as your app code - it's all in one place. Maybe others can suggest other migration libraries. Cheers. 
Edit: See also https://stackoverflow.com/questions/313/net-migrations-engine and .NET database migration tool roundup (from above post).
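To make the "migrations" idea above concrete, here is a minimal Python sketch of a migration runner, using an in-memory SQLite database for illustration. The table names and migration contents are invented for the example, and real tools (Rails migrations, DBDeploy, and the like) do considerably more, but the core mechanism is just this: record the applied version in the database and apply anything newer, in order.

```python
import sqlite3

# Ordered (version, SQL) pairs -- in practice each script would live in its
# own version-controlled file; the inline SQL here is purely illustrative.
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def current_version(conn):
    """Return the highest applied migration version, or 0 for a fresh database."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0

def migrate(conn):
    """Apply, in order, every migration newer than the recorded version."""
    for version, sql in MIGRATIONS:
        if version > current_version(conn):
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
print(current_version(conn))  # 2
migrate(conn)                 # idempotent: re-running applies nothing new
print(current_version(conn))  # still 2
```

Because the version lives in the database itself, each developer's local copy, the staging box, and the live deployment each track their own state, which is exactly the bookkeeping the timestamped-text-file approach was struggling to do by hand.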
{ "language": "en", "url": "https://stackoverflow.com/questions/156044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Show a Form without stealing focus? I'm using a Form to show notifications (it appears at the bottom right of the screen), but when I show this form it steals the focus from the main Form. Is there a way to show this "notification" form without stealing focus? A: Doing this seems like a hack, but it seems to work: this.TopMost = true; // as a result the form gets thrown to the front this.TopMost = false; // but we don't actually want our form to always be on top Edit: Note, this merely raises an already created form without stealing focus. A: The sample code from pinvoke.net in Alex Lyman/TheSoftwareJedi's answers will make the window a "topmost" window, meaning that you can't put it behind normal windows after it's popped up. Given Matias's description of what he wants to use this for, that could be what he wants. But if you want the user to be able to put your window behind other windows after you've popped it up, just use HWND_TOP (0) instead of HWND_TOPMOST (-1) in the sample. A: Stolen from PInvoke.net's ShowWindow method: private const int SW_SHOWNOACTIVATE = 4; private const int HWND_TOPMOST = -1; private const uint SWP_NOACTIVATE = 0x0010; [DllImport("user32.dll", EntryPoint = "SetWindowPos")] static extern bool SetWindowPos( int hWnd, // Window handle int hWndInsertAfter, // Placement-order handle int X, // Horizontal position int Y, // Vertical position int cx, // Width int cy, // Height uint uFlags); // Window positioning flags [DllImport("user32.dll")] static extern bool ShowWindow(IntPtr hWnd, int nCmdShow); static void ShowInactiveTopmost(Form frm) { ShowWindow(frm.Handle, SW_SHOWNOACTIVATE); SetWindowPos(frm.Handle.ToInt32(), HWND_TOPMOST, frm.Left, frm.Top, frm.Width, frm.Height, SWP_NOACTIVATE); } (Alex Lyman answered this, I'm just expanding it by directly pasting the code. 
Someone with edit rights can copy it over there and delete this for all I care ;) ) A: In WPF you can solve it like this: In the window put these attributes: <Window x:Class="myApplication.winNotification" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Notification Popup" Width="300" SizeToContent="Height" WindowStyle="None" AllowsTransparency="True" Background="Transparent" ShowInTaskbar="False" Topmost="True" Focusable="False" ShowActivated="False" > </Window> The last attribute is the one you need: ShowActivated="False". A: I have something similar, and I simply show the notification form and then do this.Focus(); to bring the focus back on the main form. A: You might want to consider what kind of notification you would like to display. If it's absolutely critical to let the user know about some event, using MessageBox.Show would be the recommended way, due to its nature of blocking any other events to the main window until the user confirms it. Be aware of pop-up blindness, though. If it's less than critical, you might want to use an alternative way to display notifications, such as a toolbar on the bottom of the window. You wrote that you display notifications on the bottom-right of the screen - the standard way to do this would be using a balloon tip in combination with a system tray icon. A: Create and start the notification Form in a separate thread and reset the focus back to your main form after the Form opens. Have the notification Form provide an OnFormOpened event that is fired from the Form.Shown event. 
Something like this: private void StartNotification() { Thread th = new Thread(new ThreadStart(delegate { NotificationForm frm = new NotificationForm(); frm.OnFormOpen += NotificationOpened; frm.ShowDialog(); })); th.Name = "NotificationForm"; th.Start(); } private void NotificationOpened() { this.Focus(); // Put focus back on the original calling Form } You can also keep a handle to your NotificationForm object around so that it can be programmatically closed by the main Form (frm.Close()). Some details are missing, but hopefully this will get you going in the right direction. A: This works well. See: OpenIcon - MSDN and SetForegroundWindow - MSDN using System.Runtime.InteropServices; [DllImport("user32.dll")] static extern bool OpenIcon(IntPtr hWnd); [DllImport("user32.dll")] static extern bool SetForegroundWindow(IntPtr hWnd); public static void ActivateInstance() { IntPtr hWnd = Process.GetCurrentProcess().MainWindowHandle; // Restore the program. bool result = OpenIcon(hWnd); // Activate the application. result = SetForegroundWindow(hWnd); // End the current instance of the application. //System.Environment.Exit(0); } A: Hmmm, isn't simply overriding Form.ShowWithoutActivation enough? protected override bool ShowWithoutActivation { get { return true; } } And if you don't want the user to click this notification window either, you can override CreateParams: protected override CreateParams CreateParams { get { CreateParams baseParams = base.CreateParams; const int WS_EX_NOACTIVATE = 0x08000000; const int WS_EX_TOOLWINDOW = 0x00000080; baseParams.ExStyle |= ( int )( WS_EX_NOACTIVATE | WS_EX_TOOLWINDOW ); return baseParams; } } A: This is what worked for me. It provides TopMost but without focus-stealing. 
protected override bool ShowWithoutActivation { get { return true; } } private const int WS_EX_TOPMOST = 0x00000008; protected override CreateParams CreateParams { get { CreateParams createParams = base.CreateParams; createParams.ExStyle |= WS_EX_TOPMOST; return createParams; } } Remember to omit setting TopMost in Visual Studio designer, or elsewhere. This is stolen, err, borrowed, from here (click on Workarounds): https://connect.microsoft.com/VisualStudio/feedback/details/401311/showwithoutactivation-is-not-supported-with-topmost A: If you're willing to use Win32 P/Invoke, then you can use the ShowWindow method (the first code sample does exactly what you want). A: You can handle it by logic alone too, although I have to admit that the suggestions above where you end up with a BringToFront method without actually stealing focus is the most elegant one. Anyhow, I ran into this and solved it by using a DateTime property to not allow further BringToFront calls if calls were made already recently. Assume a core class, 'Core', which handles for example three forms, 'Form1, 2, and 3'. Each form needs a DateTime property and an Activate event that call Core to bring windows to front: internal static DateTime LastBringToFrontTime { get; set; } private void Form1_Activated(object sender, EventArgs e) { var eventTime = DateTime.Now; if ((eventTime - LastBringToFrontTime).TotalMilliseconds > 500) Core.BringAllToFront(this); LastBringToFrontTime = eventTime; } And then create the work in the Core Class: internal static void BringAllToFront(Form inForm) { Form1.BringToFront(); Form2.BringToFront(); Form3.BringToFront(); inForm.Focus(); } On a side note, if you want to restore a minimized window to its original state (not maximized), use: inForm.WindowState = FormWindowState.Normal; Again, I know this is just a patch solution in the lack of a BringToFrontWithoutFocus. It is meant as a suggestion if you want to avoid the DLL file. 
A: I don't know if this is considered necro-posting, but this is what I did since I couldn't get it working with user32's "ShowWindow" and "SetWindowPos" methods. And no, overriding "ShowWithoutActivation" doesn't work in this case since the new window should be always-on-top. Anyway, I created a helper method that takes a form as a parameter; when called, it shows the form, brings it to the front and makes it TopMost without stealing the focus of the current window (apparently it does, but the user won't notice). [DllImport("user32.dll")] static extern IntPtr GetForegroundWindow(); [DllImport("user32.dll")] static extern IntPtr SetForegroundWindow(IntPtr hWnd); public static void ShowTopmostNoFocus(Form f) { IntPtr activeWin = GetForegroundWindow(); f.Show(); f.BringToFront(); f.TopMost = true; if (activeWin.ToInt32() > 0) { SetForegroundWindow(activeWin); } } A: I know it may sound stupid, but this worked: this.TopMost = true; this.TopMost = false; this.TopMost = true; this.SendToBack(); A: I needed to do this with my window TopMost. I implemented the PInvoke method above but found that my Load event wasn't getting called like Talha above. I finally succeeded. Maybe this will help someone. Here is my solution: form.Visible = false; form.TopMost = false; ShowWindow(form.Handle, ShowNoActivate); SetWindowPos(form.Handle, HWND_TOPMOST, form.Left, form.Top, form.Width, form.Height, NoActivate); form.Visible = true; //So that Load event happens A: You don't need to make it anywhere near as complicated. a = new Assign_Stock(); a.MdiParent = this.ParentForm; a.Visible = false; //hide for a bit. a.Show(); //show the form. Invisible form now at the top. this.Focus(); //focus on this form. make old form come to the top. a.Visible = true; //make other form visible now. Behind the main form. 
A: Github Sample Form.ShowWithoutActivation Property Add this in your child form class protected override bool ShowWithoutActivation { get { return true; } } Working Code Form2 public partial class Form2 : Form { Form3 c; public Form2() { InitializeComponent(); c = new Form3(); } private void textchanged(object sender, EventArgs e) { c.ResetText(textBox1.Text.ToString()); c.Location = new Point(this.Location.X+150, this.Location.Y); c .Show(); //removethis //if mdiparent 2 add this.focus() after show form c.MdiParent = this.MdiParent; c.ResetText(textBox1.Text.ToString()); c.Location = new Point(this.Location.X+150, this.Location.Y); c .Show(); this.Focus(); ////----------------- } } Form3 public partial class Form3 : Form { public Form3() { InitializeComponent(); //ShowWithoutActivation = false; } protected override bool ShowWithoutActivation { get { return true; } } internal void ResetText(string toString) { label2.Text = toString; } } A: Figured it out: window.WindowState = WindowState.Minimized;. A: When you create a new form using Form f = new Form(); f.ShowDialog(); it steals focus because your code can't continue executing on the main form until this form is closed. The exception is by using threading to create a new form then Form.Show(). Make sure the thread is globally visible though, because if you declare it within a function, as soon as your function exits, your thread will end and the form will disappear.
{ "language": "en", "url": "https://stackoverflow.com/questions/156046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "154" }
Q: Update viewstate after populating a list with ASP.NET AJAX I've got a dropdown list that is being populated via a webservice using ASP.NET AJAX. On the success callback of the method in javascript, I'm populating the dropdown via a loop: function populateDropDown(dropdownId, list, enable, showCount) { var dropdown = $get(dropdownId); dropdown.options.length = 1; for (var i = 0; i < list.length; i++) { var opt = document.createElement("option"); if (showCount) { opt.text = list[i].Name + ' (' + list[i].ChildCount + ')'; } else { opt.text = list[i].Name; } opt.value = list[i].Name; dropdown.options.add(opt); } dropdown.disabled = !enable; } However when I submit the form that this control is on, the control's list is always empty on postback. How do I get the populated list's data to persist over postback? Edit: Maybe I'm coming at this backwards. A better question would probably be: how do I populate a dropdown list from a webservice without having to use an UpdatePanel, due to the full page lifecycle it has to run through? A: You need to use Request.Form for this - you can't encrypt ViewState on the fly from the client - it would defeat the whole point of it :). Edit: Responding to your Edit :) the Page Lifecycle is the thing that allows you to use the ViewState persistence in the first place. The control tree is handled there and, well, there's just no getting around it. Request.Form is a perfectly viable way to do this - it will tell you the value of the selection. If you want to know all of the values, you could do some type of serialization to a hidden control. Ugly, yes. But that's why god (some call him ScottGu) invented ASP.NET MVC :). A: Although I'm not really sure how it does it, the CascadingDropDown in the AJAX Control Toolkit does support this. 
This is the line that appears to do it: AjaxControlToolkit.CascadingDropDownBehavior.callBaseMethod(this, 'set_ClientState', [ this._selectedValue+':::'+text ]); But the simplest idea would be to put the selected value into a hidden input field for the postback event.
{ "language": "en", "url": "https://stackoverflow.com/questions/156051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do screen scrapers work? I hear people writing these programs all the time and I know what they do, but how do they actually do it? I'm looking for general concepts. A: In general a screen scraper is a program that captures output from a server program by mimicking the actions of a person sitting in front of the workstation using a browser or terminal access program. At certain key points the program would interpret the output and then take an action or extract certain amounts of information from the output. Originally this was done with character/terminal outputs from mainframes for extracting data or updating systems that were archaic or not directly accessible to the end user. In modern terms it usually means parsing the output from an HTTP request to extract data or to take some other action. With the advent of web services this sort of thing should have died away, but not all apps provide a nice API to interact with. A: Technically, screen scraping is any program that grabs the display data of another program and ingests it for its own use. Quite often, screen scraping refers to a web client that parses the HTML pages of a targeted website to extract formatted data. This is done when a website does not offer an RSS feed or a REST API for accessing the data in a programmatic way. One example of a library used for this purpose is Hpricot for Ruby, which is one of the better-architected HTML parsers used for screen scraping. A: A screen scraper downloads the HTML page, and pulls out the data of interest either by searching for known tokens or parsing it as XML or some such. A: You have an HTML page that contains some data you want. What you do is you write a program that will fetch that web page and attempt to extract that data. This can be done with XML parsers, but for simple applications I prefer to use regular expressions to match a specific spot in the HTML and extract the necessary data. 
Sometimes it can be tricky to create a good regular expression, though, because the surrounding HTML appears multiple times in the document. You always want to match a unique item as close as you can to the data you need. A: In the early days of PC's, screen scrapers would emulate a terminal (e.g. IBM 3270) and pretend to be a user in order to interactively extract, update information on the mainframe. In more recent times, the concept is applied to any application that provides an interface via web pages. With emergence of SOA, screenscraping is a convenient way in which to services enable applications that aren't. In those cases, the web page scraping is the more common approach taken. A: Here's a tiny bit of screen scraping implemented in Javascript, using jQuery (not a common choice, mind you, since scraping is usually a client-server activity): //Show My SO Reputation Score var repval = $('span.reputation-score:first'); alert('StackOverflow User "' + repval.prev().attr('href').split('/').pop() + '" has (' + repval.html() + ') Reputation Points.'); If you run Firebug, copy the above code and paste it into the Console and see it in action right here on this Question page. If SO changes the DOM structure / element class names / URI path conventions, all bets are off and it may not work any longer - that's the usual risk in screen scraping endeavors where there is no contract/understanding between parties (the scraper and the scrapee [yes I just invented a word]). A: Lots of accurate answers here. What nobody's said is don't do it! Screen scraping is what you do when nobody's provided you with a reasonable machine-readable interface. It's hard to write, and brittle. As an example, consider an RSS aggregator, then consider code that gets the same information by working through a normal human-oriented blog interface. Which one breaks when the blogger decides to change their layout? 
Of course, sometimes you have no choice :( A: Screen scraping is what you do when nobody's provided you with a reasonable machine-readable interface. It's hard to write, and brittle. Not quite true. I don't think I'm exaggerating when I say that most developers do not have enough experience to write decent APIs. I've worked with screen scraping companies and often the APIs are so problematic (ranging from cryptic errors to bad results), and so often don't give the full functionality that the website provides, that it can be better to screen scrape (web scrape if you will). The extranet/website portals are used by more customers/brokers than API clients and thus are better supported. In big companies changes to extranet portals etc. are infrequent, usually because it was originally outsourced and now it's just maintained. I refer more to screen scraping where the output is tailored, e.g. a flight on a particular route and time, an insurance quote, a shipping quote etc. In terms of doing it, it can be as simple as using a web client to pull the page contents into a string and using a series of regular expressions to extract the information you want. string pageContents = new WebClient().DownloadString("http://www.stackoverflow.com"); int numberOfPosts = // regex match Obviously in a large scale environment you'd be writing more robust code than the above. A screen scraper downloads the HTML page, and pulls out the data of interest either by searching for known tokens or parsing it as XML or some such. That is a cleaner approach than regex... in theory... however in practice it's not quite as easy, given that most documents will need to be normalized to XHTML before you can XPath through it; in the end we found the fine-tuned regular expressions were more practical.
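To make the regex-based approach described above concrete, here is a minimal Python sketch using only the standard library. The HTML snippet is invented for the example, and the pattern follows the "match a unique item as close as you can to the data you need" advice; in practice a dedicated HTML parser (like the Hpricot library mentioned above) is more robust than regular expressions.

```python
import re

# A snapshot of the page we want to scrape -- in a real scraper this string
# would come from an HTTP fetch, e.g. urllib.request.urlopen(url).read().
html = """
<table id="prices">
  <tr><td class="item">Widget</td><td class="price">4.50</td></tr>
  <tr><td class="item">Gadget</td><td class="price">12.00</td></tr>
</table>
"""

# Anchor the pattern on the distinctive class names so it doesn't
# accidentally match other markup elsewhere on the page.
pattern = re.compile(
    r'<td class="item">(?P<item>[^<]+)</td><td class="price">(?P<price>[^<]+)</td>')

prices = {m.group("item"): float(m.group("price")) for m in pattern.finditer(html)}
print(prices)  # {'Widget': 4.5, 'Gadget': 12.0}
```

This also demonstrates the brittleness the "don't do it" answer warns about: if the site renames the price class or reorders the cells, the pattern silently stops matching.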
{ "language": "en", "url": "https://stackoverflow.com/questions/156083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Sorting ADO recordset text field as numeric Using VBA I have a set of functions that return an ADODB.Recordset where all the columns are adVarChar. Unfortunately this means numerics get sorted as text, so 1,7,16,22 becomes 1,16,22,7. Are there any methods that can sort numerics stored as text columns without resorting to changing the type of the column? Sub TestSortVarChar() Dim strBefore, strAfter As String Dim r As ADODB.RecordSet Set r = New ADODB.RecordSet r.Fields.Append "ID", adVarChar, 100 r.Fields.Append "Field1", adVarChar, 100 r.Open r.AddNew r.Fields("ID") = "1" r.Fields("Field1") = "A" r.AddNew r.Fields("ID") = "7" r.Fields("Field1") = "B" r.AddNew r.Fields("ID") = "16" r.Fields("Field1") = "C" r.AddNew r.Fields("ID") = "22" r.Fields("Field1") = "D" r.MoveFirst Do Until r.EOF strBefore = strBefore & r.Fields("ID") & " " & r.Fields("Field1") & vbCrLf r.MoveNext Loop r.Sort = "[ID] ASC" r.MoveFirst Do Until r.EOF strAfter = strAfter & r.Fields("ID") & " " & r.Fields("Field1") & vbCrLf r.MoveNext Loop MsgBox strBefore & vbCrLf & vbCrLf & strAfter End Sub NB: I am using Project 2003 and Excel 2003 and referencing the Microsoft ActiveX Data Objects 2.8 Library. A: Left-pad with zeros, using at least as many characters as the maximum number of digits. e.g. 0001 0010 0022 1000 You can use Right$() to accomplish this. A: Use the Val() function to sort numerically on a text column. Example: SELECT ID, Field1 FROM tablename ORDER BY Val(ID);
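The zero-padding trick from the first answer is easy to demonstrate outside VBA. Here is a minimal Python sketch (Python stands in for the VBA Right$() approach; the widths and values are just the ones from the question) showing why a plain text sort misorders numbers and how fixed-width padding makes the text sort agree with numeric order:

```python
ids = ["1", "7", "16", "22"]

print(sorted(ids))  # ['1', '16', '22', '7'] -- plain text sort misorders

# Left-pad with zeros to a fixed width; the VBA equivalent would be
# something like Right$(String(10, "0") & ID, 10). Equal-width digit
# strings compare character by character, which matches numeric order.
padded = [s.zfill(10) for s in ids]
print(sorted(padded))  # ['0000000001', '0000000007', '0000000016', '0000000022']

# Recover the original values (int() is safer than lstrip for a value of "0"):
print([str(int(s)) for s in sorted(padded)])  # ['1', '7', '16', '22']
```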
{ "language": "en", "url": "https://stackoverflow.com/questions/156084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Silverlight 2.0 DataGrid How to remove mouseover effect I am just starting with SL and WPF. I am using the DataGrid control and I need to remove the mouseover effect (I actually will need to do more customizations than that). How do I do this. I think I need to do it with a control template but not sure how. I'm researching and reading right now. Any help would be appreciated. A: The short answer is to use styles. The long answer is the following: There are two style properties in the Silverlight 2.0 datagrid that should solve your problem. The first is CellStyle and the second is RowStyle. The CellStyle property is the one which will remove the light blue highlight around the currently selected cell. The RowStyle property is the one where you will be able to remove the light blue shade indicating the selected row. The CellStyle that I used is as follows: <Style x:Key="CellStyle" TargetType="local:DataGridCell"> <Setter Property="Background" Value="Transparent" /> <Setter Property="HorizontalContentAlignment" Value="Stretch" /> <Setter Property="VerticalContentAlignment" Value="Stretch" /> <Setter Property="Cursor" Value="Arrow" /> <Setter Property="IsTabStop" Value="False" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="local:DataGridCell"> <Grid Name="Root" Background="Transparent"> <vsm:VisualStateManager.VisualStateGroups> <vsm:VisualStateGroup x:Name="CurrentStates" > <vsm:VisualStateGroup.Transitions> <vsm:VisualTransition GeneratedDuration="0" /> </vsm:VisualStateGroup.Transitions> <vsm:VisualState x:Name="Regular" /> <vsm:VisualState x:Name="Current" /> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="FocusVisual" Storyboard.TargetProperty="Opacity" To="1" Duration="0" /> </Storyboard> </vsm:VisualState>--> </vsm:VisualStateGroup> </vsm:VisualStateManager.VisualStateGroups> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <Rectangle Name="FocusVisual" 
Stroke="#FF6DBDD1" StrokeThickness="1" Fill="#66FFFFFF" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" IsHitTestVisible="false" Opacity="0" /> <ContentPresenter Content="{TemplateBinding Content}" ContentTemplate="{TemplateBinding ContentTemplate}" Cursor="{TemplateBinding Cursor}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" Margin="{TemplateBinding Padding}" /> <Rectangle Name="RightGridLine" Grid.Column="1" VerticalAlignment="Stretch" Width="1" /> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> If you will notice, I commented out the storyboard that changed the FocusVisual rectangle's opacity value. What this was doing was to set the FocusVisual rectangle to be shown on cell selection. (Please Note: You cannot remove the FocusVisual Element as the CellPresenter is expecting this element, and not finding the element will cause an error.) The RowStyle I used is as follows: <Style TargetType="local:DataGridRow" x:Key="MyCustomRow"> <Setter Property="IsTabStop" Value="False" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="local:DataGridRow"> <localprimitives:DataGridFrozenGrid x:Name="Root"> <localprimitives:DataGridFrozenGrid.Resources> <Storyboard x:Key="DetailsVisibleTransition" > <DoubleAnimation Storyboard.TargetName="DetailsPresenter" Storyboard.TargetProperty="ContentHeight" Duration="00:00:0.1" /> </Storyboard> </localprimitives:DataGridFrozenGrid.Resources> <vsm:VisualStateManager.VisualStateGroups> <vsm:VisualStateGroup x:Name="CommonStates" > <vsm:VisualState x:Name="Normal" /> <vsm:VisualState x:Name="Normal AlternatingRow"> <Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To="0" /> </Storyboard> </vsm:VisualState> <vsm:VisualState x:Name="MouseOver" /> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" 
Storyboard.TargetProperty="Opacity" Duration="0" To=".5" /> </Storyboard> </vsm:VisualState>--> <vsm:VisualState x:Name="Normal Selected"/> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To="1" /> </Storyboard> </vsm:VisualState>--> <vsm:VisualState x:Name="MouseOver Selected"/> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To="1" /> </Storyboard> </vsm:VisualState>--> <vsm:VisualState x:Name="Unfocused Selected"/> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To="1" /> <ColorAnimationUsingKeyFrames BeginTime="0" Duration="0" Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="0" Value="#FFE1E7EC" /> </ColorAnimationUsingKeyFrames> </Storyboard> </vsm:VisualState>--> </vsm:VisualStateGroup> </vsm:VisualStateManager.VisualStateGroups> <localprimitives:DataGridFrozenGrid.RowDefinitions> <RowDefinition Height="*" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </localprimitives:DataGridFrozenGrid.RowDefinitions> <localprimitives:DataGridFrozenGrid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> </localprimitives:DataGridFrozenGrid.ColumnDefinitions> <Rectangle x:Name="BackgroundRectangle" Grid.RowSpan="2" Grid.ColumnSpan="2" Opacity="0" Fill="#FFBADDE9" /> <localprimitives:DataGridRowHeader Grid.RowSpan="3" x:Name="RowHeader" localprimitives:DataGridFrozenGrid.IsFrozen="True" /> <localprimitives:DataGridCellsPresenter x:Name="CellsPresenter" localprimitives:DataGridFrozenGrid.IsFrozen="True"/> <localprimitives:DataGridDetailsPresenter Grid.Row="1" Grid.Column="1" x:Name="DetailsPresenter" /> <Rectangle Grid.Row="2" Grid.Column="1" x:Name="BottomGridLine" HorizontalAlignment="Stretch" 
Height="1" /> </localprimitives:DataGridFrozenGrid> </ControlTemplate> </Setter.Value> </Setter> </Style> As you can see, I commented out some more visual states. You will want to comment out the MouseOver VisualState storyboard, the Normal Selected storyboard, the MouseOver Selected storyboard, and the Unfocused Selected storyboard. (Please Note: I did not remove these visual states, I only commented out what they used to do.) This is my code in its entirety for reference: (XAML first, then VB) XAML: <UserControl x:Class="DataGrid_Mouseover.Page" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data" xmlns:local="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data" xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data" xmlns:primitives="clr-namespace:System.Windows.Controls.Primitives;assembly=System.Windows" xmlns:localprimitives="clr-namespace:System.Windows.Controls.Primitives;assembly=System.Windows.Controls.Data" xmlns:vsm="clr-namespace:System.Windows;assembly=System.Windows"> <UserControl.Resources> <Style x:Key="CellStyle" TargetType="local:DataGridCell"> <!-- TODO: Remove this workaround to force MouseLeftButtonDown event to be raised when root element is clicked. 
--> <Setter Property="Background" Value="Transparent" /> <Setter Property="HorizontalContentAlignment" Value="Stretch" /> <Setter Property="VerticalContentAlignment" Value="Stretch" /> <Setter Property="Cursor" Value="Arrow" /> <Setter Property="IsTabStop" Value="False" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="local:DataGridCell"> <Grid Name="Root" Background="Transparent"> <vsm:VisualStateManager.VisualStateGroups> <vsm:VisualStateGroup x:Name="CurrentStates" > <vsm:VisualStateGroup.Transitions> <vsm:VisualTransition GeneratedDuration="0" /> </vsm:VisualStateGroup.Transitions> <vsm:VisualState x:Name="Regular" /> <vsm:VisualState x:Name="Current" /> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="FocusVisual" Storyboard.TargetProperty="Opacity" To="1" Duration="0" /> </Storyboard> </vsm:VisualState>--> </vsm:VisualStateGroup> </vsm:VisualStateManager.VisualStateGroups> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <!-- TODO Refactor this if SL ever gets a FocusVisualStyle on FrameworkElement --> <Rectangle Name="FocusVisual" Stroke="#FF6DBDD1" StrokeThickness="1" Fill="#66FFFFFF" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" IsHitTestVisible="false" Opacity="0" /> <ContentPresenter Content="{TemplateBinding Content}" ContentTemplate="{TemplateBinding ContentTemplate}" Cursor="{TemplateBinding Cursor}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" Margin="{TemplateBinding Padding}" /> <Rectangle Name="RightGridLine" Grid.Column="1" VerticalAlignment="Stretch" Width="1" /> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style TargetType="local:DataGridRow" x:Key="MyCustomRow"> <Setter Property="IsTabStop" Value="False" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="local:DataGridRow"> 
<localprimitives:DataGridFrozenGrid x:Name="Root"> <localprimitives:DataGridFrozenGrid.Resources> <Storyboard x:Key="DetailsVisibleTransition" > <DoubleAnimation Storyboard.TargetName="DetailsPresenter" Storyboard.TargetProperty="ContentHeight" Duration="00:00:0.1" /> </Storyboard> </localprimitives:DataGridFrozenGrid.Resources> <vsm:VisualStateManager.VisualStateGroups> <vsm:VisualStateGroup x:Name="CommonStates" > <vsm:VisualState x:Name="Normal" /> <vsm:VisualState x:Name="Normal AlternatingRow"> <Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To="0" /> </Storyboard> </vsm:VisualState> <vsm:VisualState x:Name="MouseOver" /> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To=".5" /> </Storyboard> </vsm:VisualState>--> <vsm:VisualState x:Name="Normal Selected"/> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To="1" /> </Storyboard> </vsm:VisualState>--> <vsm:VisualState x:Name="MouseOver Selected"/> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To="1" /> </Storyboard> </vsm:VisualState>--> <vsm:VisualState x:Name="Unfocused Selected"/> <!--<Storyboard> <DoubleAnimation Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="Opacity" Duration="0" To="1" /> <ColorAnimationUsingKeyFrames BeginTime="0" Duration="0" Storyboard.TargetName="BackgroundRectangle" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="0" Value="#FFE1E7EC" /> </ColorAnimationUsingKeyFrames> </Storyboard> </vsm:VisualState>--> </vsm:VisualStateGroup> </vsm:VisualStateManager.VisualStateGroups> <localprimitives:DataGridFrozenGrid.RowDefinitions> <RowDefinition Height="*" /> <RowDefinition Height="Auto" /> <RowDefinition 
Height="Auto" /> </localprimitives:DataGridFrozenGrid.RowDefinitions> <localprimitives:DataGridFrozenGrid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> </localprimitives:DataGridFrozenGrid.ColumnDefinitions> <Rectangle x:Name="BackgroundRectangle" Grid.RowSpan="2" Grid.ColumnSpan="2" Opacity="0" Fill="#FFBADDE9" /> <localprimitives:DataGridRowHeader Grid.RowSpan="3" x:Name="RowHeader" localprimitives:DataGridFrozenGrid.IsFrozen="True" /> <localprimitives:DataGridCellsPresenter x:Name="CellsPresenter" localprimitives:DataGridFrozenGrid.IsFrozen="True"/> <localprimitives:DataGridDetailsPresenter Grid.Row="1" Grid.Column="1" x:Name="DetailsPresenter" /> <Rectangle Grid.Row="2" Grid.Column="1" x:Name="BottomGridLine" HorizontalAlignment="Stretch" Height="1" /> </localprimitives:DataGridFrozenGrid> </ControlTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> <Grid x:Name="LayoutRoot" Background="White"> <local:DataGrid x:Name="TestGrid" HorizontalAlignment="Left" VerticalAlignment="Bottom" AutoGenerateColumns="False" HeadersVisibility="None" RowHeight="55" Background="Transparent" AlternatingRowBackground="Transparent" RowBackground="Transparent" BorderBrush="Transparent" Foreground="Transparent" GridLinesVisibility="None" SelectionMode="Single" CellStyle="{StaticResource CellStyle}" RowStyle="{StaticResource MyCustomRow}"> <local:DataGrid.Columns> <local:DataGridTemplateColumn Header="Clinic"> <local:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Button x:Name="btnClinic" Height="46" Width="580" Content="{Binding Path=Description}" Click="btnClinic_Click" FontSize="24" FontFamily="Tahoma" FontWeight="Bold"> <Button.Background> <LinearGradientBrush EndPoint="0.528,1.144" StartPoint="1.066,1.221"> <GradientStop Color="#FF000000"/> <GradientStop Color="#FFEDC88F" Offset="1"/> </LinearGradientBrush> </Button.Background> </Button> </DataTemplate> </local:DataGridTemplateColumn.CellTemplate> 
</local:DataGridTemplateColumn> </local:DataGrid.Columns> </local:DataGrid> </Grid> </UserControl> VB: Partial Public Class Page Inherits UserControl Public Sub New() InitializeComponent() Dim test As IList(Of String) = New List(Of String) test.Add("test1") test.Add("test1") test.Add("test1") test.Add("test1") test.Add("test1") test.Add("test1") test.Add("test1") test.Add("test1") test.Add("test1") test.Add("test1") TestGrid.ItemsSource = test End Sub Private Sub btnClinic_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) End Sub End Class Hope this helps. Thanks, Scott A: Yeah, you have to change the Style and ControlTemplate, but are you using Expression Blend to edit the XAML? Blend is the easiest tool to do this. Try changing the ControlTemplate for a standard Button or ListBox, and once you are comfortable, then go to DataGrid. The reason I suggest this is that DataGrid is a complex combination of different UIElements, so its control template will be hard to understand for a beginner. Specific to MouseOver effect removal: there will be a VSM tag in the control template which has some storyboards; just remove the one with <vsm:VisualState x:Name="MouseOver"> and you are good to go.
Q: LinqToSql and abstract base classes I have some linq entities that inherit something like this: public abstract class EntityBase { public int Identifier { get; } } public interface IDeviceEntity { int DeviceId { get; set; } } public abstract class DeviceEntityBase : EntityBase, IDeviceEntity { public abstract int DeviceId { get; set; } } public partial class ActualLinqGeneratedEntity : DeviceEntityBase { } In a generic method I am querying DeviceEntityBase-derived entities with: return unitOfWork.GetRepository<TEntity>().FindOne(x => x.DeviceId == evt.DeviceId); where TEntity has a constraint that it is a DeviceEntityBase. This query is always failing with an InvalidOperationException with the message "Class member DeviceEntityBase.DeviceId is unmapped". Even if I add some mapping info in the abstract base class with [Column(Storage = "_DeviceId", DbType = "Int", Name = "DeviceId", IsDbGenerated = false, UpdateCheck = UpdateCheck.Never)] A: Wow, looks like for once I may be able to one-up @MarcGravell! I had the same problem, then I discovered this answer, which solved the problem for me! In your case, you would say: return unitOfWork.GetRepository<TEntity>().Select(x => x).FindOne(x => x.DeviceId == evt.DeviceId); and Bob's your uncle! A: LINQ-to-SQL has some support for inheritance via a discriminator (here, here), but you can only query on classes that are defined in the LINQ model - i.e. data classes themselves, and (more perhaps importantly for this example) the query itself must be phrased in terms of data classes: although TEntity is a data class, it knows that the property here is declared on the entity base. One option might be dynamic expressions, if the classes themselves declared the property (i.e. lose the base class, but keep the interface) - but this isn't trivial.
The Expression work would be something like below, noting that you might want to either pass in the string as an argument, or obtain the primary key via reflection (if it is attributed): static Expression<Func<T, bool>> BuildWhere<T>(int deviceId) { var id = Expression.Constant(deviceId, typeof(int)); var arg = Expression.Parameter(typeof(T), "x"); var prop = Expression.Property(arg, "DeviceId"); return Expression.Lambda<Func<T, bool>>( Expression.Equal(prop, id), arg); } A: This kind of hierarchical mapping is not possible with LinqToSql. The way the mapping is set up, it cannot map to properties in base classes. I went around on this for a couple of months when it first came out. The best solution is to use the entity framework. It gives you much more flexibility with creating your object model. It will allow you to do exactly what you're trying to do here. Here is some information on the entity framework: MSDN Article A: Try .OfType<>() as posted here https://stackoverflow.com/a/17734469/3936440, it works for me having the exact same issue.
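Marc's BuildWhere approach—constructing the predicate from a property name resolved at runtime—is a general technique, not specific to C# expression trees. As a rough illustration of the same idea (all names below are invented for the example, not taken from the answers above), a dynamically built predicate might look like this in Python:

```python
def build_where(attr_name, value):
    """Build a predicate comparing a named attribute to a value,
    mirroring Expression.Property + Expression.Equal in the C# answer."""
    def predicate(entity):
        return getattr(entity, attr_name) == value
    return predicate

class Device:
    """Hypothetical entity standing in for a DeviceEntityBase subclass."""
    def __init__(self, device_id):
        self.device_id = device_id

def find_one(items, predicate):
    """Return the first item matching the predicate, or None."""
    return next((x for x in items if predicate(x)), None)
```

The point is the same as with Expression.Property: the filter is phrased against a name resolved at runtime, so the compile-time type of the entity no longer matters.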
Q: Best way to get result count before LIMIT was applied When paging through data that comes from a DB, you need to know how many pages there will be to render the page jump controls. Currently I do that by running the query twice, once wrapped in a count() to determine the total results, and a second time with a limit applied to get back just the results I need for the current page. This seems inefficient. Is there a better way to determine how many results would have been returned before LIMIT was applied? I am using PHP and Postgres. A: As I describe on my blog, MySQL has a feature called SQL_CALC_FOUND_ROWS. This removes the need to do the query twice, but it still needs to do the query in its entirety, even if the limit clause would have allowed it to stop early. As far as I know, there is no similar feature for PostgreSQL. One thing to watch out for when doing pagination (the most common thing for which LIMIT is used IMHO): doing an "OFFSET 1000 LIMIT 10" means that the DB has to fetch at least 1010 rows, even if it only gives you 10. A more performant way to do this is to remember the value of the row you are ordering by for the previous row (the 1000th in this case) and rewrite the query like this: "... WHERE order_row > value_of_1000_th LIMIT 10". The advantage is that "order_row" is most probably indexed (if not, you've got a problem). The disadvantage being that if new elements are added between page views, this can get a little out of synch (but then again, it may not be observable by visitors and can be a big performance gain). A: You could mitigate the performance penalty by not running the COUNT() query every time. Cache the number of pages for, say, 5 minutes before the query is run again. Unless you're seeing a huge number of INSERTs, that should work just fine. A: Pure SQL Things have changed since 2008. You can use a window function to get the full count and the limited result in one query. Introduced with PostgreSQL 8.4 in 2009.
SELECT foo , count(*) OVER() AS full_count FROM bar WHERE <some condition> ORDER BY <some col> LIMIT <pagesize> OFFSET <offset>; Note that this can be considerably more expensive than without the total count. All rows have to be counted, and a possible shortcut taking just the top rows from a matching index may not be helpful any more. Doesn't matter much with small tables or full_count <= OFFSET + LIMIT. Matters for a substantially bigger full_count. Corner case: when OFFSET is at least as great as the number of rows from the base query, no row is returned. So you also get no full_count. Possible alternative: * *Run a query with a LIMIT/OFFSET and also get the total number of rows Sequence of events in a SELECT query ( 0. CTEs are evaluated and materialized separately. In Postgres 12 or later the planner may inline those like subqueries before going to work.) Not here. * *WHERE clause (and JOIN conditions, though none in your example) filter qualifying rows from the base table(s). The rest is based on the filtered subset. ( 2. GROUP BY and aggregate functions would go here.) Not here. ( 3. Other SELECT list expressions are evaluated, based on grouped / aggregated columns.) Not here. *Window functions are applied depending on the OVER clause and the frame specification of the function. The simple count(*) OVER() is based on all qualifying rows. *ORDER BY ( 6. DISTINCT or DISTINCT ON would go here.) Not here. *LIMIT / OFFSET are applied based on the established order to select rows to return. LIMIT / OFFSET becomes increasingly inefficient with a growing number of rows in the table. Consider alternative approaches if you need better performance: * *Optimize query with OFFSET on large table Alternatives to get final count There are completely different approaches to get the count of affected rows (not the full count before OFFSET & LIMIT were applied). Postgres has internal bookkeeping how many rows where affected by the last SQL command. 
Some clients can access that information or count rows themselves (like psql). For instance, you can retrieve the number of affected rows in plpgsql immediately after executing an SQL command with: GET DIAGNOSTICS integer_var = ROW_COUNT; Details in the manual. Or you can use pg_num_rows in PHP. Or similar functions in other clients. Related: * *Calculate number of rows affected by batch query in PostgreSQL A: Since Postgres already does a certain amount of caching things, this type of method isn't as inefficient as it seems. It's definitely not doubling execution time. We have timers built into our DB layer, so I have seen the evidence. A: Seeing as you need to know for the purpose of paging, I'd suggest running the full query once, writing the data to disk as a server-side cache, then feeding that through your paging mechanism. If you're running the COUNT query for the purpose of deciding whether to provide the data to the user or not (i.e. if there are > X records, give back an error), you need to stick with the COUNT approach.
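To make the two-query approach and the page-count arithmetic concrete, here is a small self-contained sketch. It uses SQLite purely so the example runs anywhere; the SQL shape (a COUNT query plus a LIMIT/OFFSET query) is the same one the question describes against Postgres:

```python
import sqlite3
from math import ceil

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bar (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO bar (payload) VALUES (?)",
                 [("row %d" % i,) for i in range(1, 48)])  # 47 rows

page_size = 10
page = 3  # 1-based page number

# Query 1: total count (the natural thing to cache, per the answer above).
(total,) = conn.execute("SELECT count(*) FROM bar").fetchone()

# Query 2: just the rows for the requested page.
rows = conn.execute(
    "SELECT id, payload FROM bar ORDER BY id LIMIT ? OFFSET ?",
    (page_size, (page - 1) * page_size),
).fetchall()

# Number of page-jump controls to render.
num_pages = ceil(total / page_size)
```

The COUNT query is the one worth caching; the LIMIT/OFFSET query runs on every page view, and for deep pages the keyset (`WHERE order_row > ...`) rewrite above avoids scanning the skipped rows.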
Q: What's the CSS Filter alternative for Firefox? I'm using CSS Filters to modify images on the fly within the browser. These work perfectly in Internet Explorer, but aren't supported in Firefox. Does anyone know what the CSS Filter equivalent for these is for Firefox? An answer that would work cross browser (Safari, WebKit, Firefox, etc.) would be preferred. <style type="text/css"> .CSSClassName {filter:Invert;} .CSSClassName {filter:Xray;} .CSSClassName {filter:Gray;} .CSSClassName {filter:FlipV;} </style> Update: I know Filter is an IE specific feature. Is there any kind of equivalent for any of these that is supported by Firefox? A: Please check the Nihilogic Javascript Image Effect Library: * *supports IE and Fx pretty well *has a lot of effects You can find many other effects in the CVI Projects: * *they are also JS based *there's a Lab to experiment Good Luck A: Could you give us a concrete example of what exactly you're trying to do? You'd probably get fewer "Your brower sux" responses and more "How about trying this different approach?" ones. Normally CSS is used to control the look and feel of HTML content, not add effects or edit images in clever ways. What you're trying to do might be possible using javascript, but a behavior-oriented script still probably isn't very well suited for the kind of tweaking you want to do (although something like this is a fun and very inefficient adventure in CSS / JS tomfoolery). I can't imagine a scenario when you would need the client to perform image tweaking in real-time. You could modify images server-side and simply reference these modified versions with your CSS or possibly Javascript, depending on what you're doing exactly. ImageMagick is a great little command-line tool for all the image effects you would ever need, and is pretty simple to use by itself or within the server-side language of your choice. Or if you're using PHP, I believe PHP's GD library is pretty popular. 
A: There are no equivalents in other browsers. The closest you could get is using a graphics library like Canvas and manipulating the images in it, but you'd have to write the manipulations yourself and they'd require JavaScript. filter is an IE-only feature -- it is not available in any other browser. A: SVG filters applied to HTML content. Only works in Firefox 3.1 and above, though I think Safari is heading in the same direction. A: None that I know of. Filter was an IE-only thing and I don't think any other browser has followed with similar functionality. Is there a specific use case you need it for? A: I'm afraid that you are pretty much out of luck with most of the cross-browser filter-type functionality. CSS alone will not allow you to do most of these things. For example, there is no way to invert an image cross-browser just using CSS. You will have to have two different copies of the image (one inverted) or you could try using Javascript or maybe go about it a completely different way, but there is no simple solution solely in CSS. A: There are filters, such as Gaussian Blur et al in SVG, which is supported natively by most browsers except IE. Pure thought experiment here: you could wrap your images in an SVG object on the fly with javascript and attempt to apply filters to them. I doubt this would work for background images, though perhaps with a lot of clever positioning it could work. It's unlikely to be a realistic solution. If you don't want to permanently modify your source images, Rudi has the best answer: using server-side tools to apply transformations on the fly (or cached for performance) will be the best cross-browser solution. A: This is a very, very old question, but CSS has since been updated to support filters.
Read more about it at https://developer.mozilla.org/en-US/docs/Web/CSS/filter Syntax With a function, use the following: filter: <filter-function> [<filter-function>]* | none For a reference to an SVG element, use the following: filter: url(svg-url#element-id) A: Not really, and hopefully there never will be. It's not a web standard CSS feature for the reason that you're using CSS to format the webpage, not the browser itself. The day that other web designers and developers think they should style my browser how they wish and then do so is the day I stop visiting their pages (and I say this as a front end web guy).
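Following Rudi's server-side suggestion: the IE filters named in the question (Invert, Gray, FlipV) are all trivial pixel operations, so they can be precomputed on the server and referenced as ordinary images. A minimal pure-Python sketch of the three effects, operating on a row-major grid of RGB tuples (a real implementation would use ImageMagick or GD instead, as the answer suggests):

```python
def invert(pixels):
    """Invert: each channel becomes 255 - value."""
    return [[(255 - r, 255 - g, 255 - b) for (r, g, b) in row] for row in pixels]

def gray(pixels):
    """Gray: replace each pixel with its luminance (ITU-R BT.601 weights)."""
    return [[(int(0.299 * r + 0.587 * g + 0.114 * b),) * 3 for (r, g, b) in row]
            for row in pixels]

def flip_v(pixels):
    """FlipV: mirror the image vertically (reverse the row order)."""
    return pixels[::-1]
```

Generating an inverted copy alongside the original, then swapping the image URL in CSS or JavaScript, gives the cross-browser behavior that the filter property could not at the time.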
Q: Source code management strategies - branching, tagging, forking, etc. - for web apps This posting here (How do you manage database revisions on a medium sized project with branches?) got me wondering how best to work on a web project using branching and deploying to dev, staging, and production (along with local copies). We don't have "releases" per se: if a feature is big enough to be noticeable, we push it live (after requisite testing/etc.), otherwise we batch a few up and, when it feels "comfortable", push those live. The goal is to never have a deploy more than once or twice a month or so because a constantly shifting site tends to make the users a bit uneasy. Here's how we do it, and it feels sort of brittle (currently using svn but considering a switch to git): * *Two "branches" - DEV and STAGE with a given release of STAGE marked as TRUNK * *Developer checks out a copy of TRUNK for every change and creates a branch for it *Developer works locally, checking in code frequently (just like voting: early and often) *When Developer is comfortable it isn't totally broken, merge the branch with DEV and deploy to the development site. *Repeat 3-4 as necessary until the change is "finished" *Merge change branch with STAGING, deploy to stage site. Do expected final testing. *After some period of time, mark a given revision of STAGE as the TRUNK, and push trunk live *Merge TRUNK changes back down to DEV to keep it in sync Now, some of these steps have significant complexity hand-waved away and in practice are very difficult to do (TRUNK -> DEV always breaks) so I have to imagine there's a better way. Thoughts? A: Branching is handy if you expect the work to NOT be completed on time, and you do not have a sufficient body of tests to make continuous integration work. 
I tend to see branch-crazy development in shops where the programming tasks are far too big to complete predictably and so management wants to wait until just before a release to determine what features should ship. If you are doing that kind of work, then you might consider using distributed version control, where EVERY working directory is a branch naturally and you get all the local check-in and local history you can eat without hurting anyone. You can even cross-merge with other developers outside of the trunk. My preference is to work in an unstable trunk with branches for release candidates, which are then tagged for release, and which then become the stream for emergency patches. In such a system, you very seldom have more than three branches (last release, current release candidate, unstable trunk). This works if you are doing TDD and have CI on the unstable trunk, and if you require all tasks to be broken down so you can deliver code as often as you desire (usually a task should be only one to two days, and releasable without all the other tasks that compose its feature). So programmers take work, check out the trunk, do the work, sync up, and check in any time all the tests pass. The unstable trunk is always available to branch as a release candidate (if all tests pass) and therefore release becomes a non-event. Overall, better means: fewer branches, shorter tasks, shorter time to release, more tests. A: An obvious thought would be more "rebase" (merges back more often from the "parent" environment STAGE to the "child" environment DEV and on to the developer branch) in order to minimize the final impact of TRUNK->DEV, which would then no longer be needed. I.e., anything done in STAGE that is bound to go into production at one time (TRUNK) should be merged back as early as possible into DEV and private dev branches; otherwise those late retrofitting merges are always a pain.
BUT, if the above merge workflow is too inconvenient, I would suggest a REBASE branch, based on the latest DEV just after a release (new TRUNK). The rebase TRUNK->DEV would become TRUNK->REBASE, where all problems are solved, then a final merge DEV->REBASE to check that any current dev is compatible with the new updated system. A final trivial merge back from REBASE to DEV (and to private dev branches) would complete the process. The point of a branch is to isolate a development effort that cannot be conducted together with other current development efforts. If TRUNK->DEV is too complicated to go along with current DEVs, it needs to be isolated. Hence the 'REBASE' branch proposition. A: Read this: http://oreilly.com/catalog/practicalperforce/chapter/ch07.pdf A:
{ "language": "en", "url": "https://stackoverflow.com/questions/156120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Loading XHTML fragments over AJAX with jQuery I'm trying to load fragments of XHTML markup using jQuery's $.fn.load function, but it raises an error trying to add the new markup into the DOM. I've narrowed this down to the XML declaration (<?xml...?>) -- the view works if I return static text without the declaration. I don't understand why this would cause failure, or if the blame lies in jQuery, Firefox, or my code. How should I insert XHTML fragments into the DOM using jQuery? Using $.get does not work -- the callback receives a Document object, and when I try to insert it into the DOM, I receive the following error: uncaught exception: Node cannot be inserted at the specified point in the hierarchy (NS_ERROR_DOM_HIERARCHY_REQUEST_ERR) http://localhost:8000/static/1222832186/pixra/script/jquery-1.2.6.js Line 257 This is my code: $body = $("#div-to-fill"); $.get ("/testfile.xhtml", undefined, function (data) { console.debug ("data = %o", data); // data = Document $body.children ().replaceWith (data); // error }, 'xml'); A sample response: <?xml version="1.0" encoding="UTF-8"?> <div xmlns="http://www.w3.org/1999/xhtml"> <form action="/gallery/image/edit-description/11529/" method="post"> <div>content here</div> </form> </div> A: Try this instead (I just did a quick test and it seems to work): $body = $("#div-to-fill"); $.get ("/testfile.xhtml", function (data) { $body.html($(data).children()); }, 'xml'); Basically, .children() will get you the root node and replace the content of your div with it. I guess you can't exactly insert an xml document with the <?xml declaration into the DOM...
{ "language": "en", "url": "https://stackoverflow.com/questions/156133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: CoreImage for Win32 For those not familiar with Core Image, here's a good description of it: http://developer.apple.com/macosx/coreimage.html Is there something equivalent to Apple's CoreImage/CoreVideo for Windows? I looked around and found the DirectX/Direct3D stuff, which has all the underlying pieces, but there doesn't appear to be any high level API to work with, unless you're willing to use .NET AND use WPF, neither of which really interest me. The basic idea would be create/load an image, attach any number of filters that can be chained together, forming a graph, and then render the image to an HDC, using the GPU to do most of the hard work. DirectX/Direct3D has these pieces, but you have to jump through a lot of hoops (or so it appears) to use it. A: There are a variety of tools for working with shaders (such as RenderMonkey and FX-Composer), but no direct equivalent to CoreImage. But stacking up fragment shaders on top of each other is not very hard, so if you don't mind learning OpenGL it would be quite doable to build a framework that applies shaders to an input image and draws the result to an HDC. A: Adobe's new Pixel Blender is the closest technology out there. It is cross-platform -- it's part of the Flash 10 runtime, as well as the key pixel-oriented CS4 apps, namely After Effects and (soon) Photoshop. It's unclear, however, how much is currently exposed for embedding in other applications at this point. In the most extreme case it should be possible to embed by embedding a Flash view, but that is more overhead than would obviously be idea. There is also at least one smaller-scale 3rd party offering: Conduit Pixel Engine. It is commercial, with no licensing price clearly listed, however. A: I've now got a solution to this. I've implemented an ImageContext class, a special Image class, and a Filter class that allows similar functionality to Apple's CoreImage. 
All three use OpenGL (I gave up trying to get this to work on DirectX due to image quality issues; if someone knows DirectX well, contact me, because I'd love to have a Dx version) to render image(s) to a context and use the filters to apply their effects (as GLSL frag shaders). There's a brief write-up here: ImageKit, with a screenshot of an example filter and some sample source code.
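As a rough illustration of the design the last answer describes, here is a CPU-side sketch of a filter graph in JavaScript. The class names mirror the ones mentioned, but this is not the actual ImageKit API; a real implementation would run each filter as a fragment-shader pass on the GPU rather than mapping over pixels in script.

```javascript
// Hypothetical sketch, not the actual ImageKit classes. Each Filter wraps a
// per-pixel function; the context applies the attached filters in order --
// the same chaining a GPU version would do with one frag-shader pass each.
class Filter {
  constructor(name, fn) { this.name = name; this.fn = fn; }
}

class ImageContext {
  constructor() { this.filters = []; }
  attach(filter) { this.filters.push(filter); return this; }
  // pixels: flat array of grayscale values in [0, 255]
  render(pixels) {
    return this.filters.reduce((data, f) => data.map(f.fn), pixels.slice());
  }
}

const ctx = new ImageContext();
ctx.attach(new Filter("invert", v => 255 - v))
   .attach(new Filter("darken", v => Math.floor(v * 0.5)));

console.log(ctx.render([0, 128, 255])); // [127, 63, 0]
```

The point of the structure is that filters compose: any number can be attached, and each sees the previous filter's output, just as CoreImage chains CIFilters into a graph.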
{ "language": "en", "url": "https://stackoverflow.com/questions/156136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best way to render hand-drawn figures I guess I'll illustrate with an example: In this game you are able to draw 2D shapes using the mouse, and what you draw is rendered to the screen in real-time. I want to know the best ways to render this type of drawing using hardware acceleration (OpenGL). I had two ideas: * *Create a screen-size texture when drawing is started, update this while drawing, and blit this to the screen *Create a series of line segments to represent the drawing, and render these using either lines or thin polygons Are there any other ideas? Which of these methods is likely to be best/most efficient/easiest? Any suggestions are welcome. A: I love Crayon Physics (the music gets me every time). Great game! But back to the point... He has created brush sprites that follow your mouse position. He's created a few brushes that account for a little variation. Once the mouse goes down, I imagine he adds these sprites to a data structure and sends that structure through his drawing and collision functions each frame, producing the real-time effect. He is using the Simple DirectMedia Layer (SDL) library, which I give two thumbs up. A: I'm pretty sure the second idea is the way to go. A: The first option if the player draws purely freehand (rather than lines) and what they draw doesn't need to be animated. The second option if it is animated or is primarily lines. If you do choose this, it seems like you'd need to draw thin polygons rather than regular lines to get any kind of interesting look (as in the crayon example).
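For the second option, each stroke segment can be expanded into a thin quad by offsetting both endpoints along the segment's normal. A minimal sketch of that math (illustrative, not from any particular engine):

```javascript
// Turn one mouse-stroke segment into a quad of the desired thickness by
// offsetting both endpoints along the segment's unit normal. Rendering the
// quads (e.g. as two triangles each) gives strokes that can be restyled and
// animated later, unlike pixels baked into a screen-size texture.
function segmentToQuad(x1, y1, x2, y2, thickness) {
  const dx = x2 - x1, dy = y2 - y1;
  const len = Math.hypot(dx, dy);
  // unit normal (-dy, dx)/len, scaled to half the stroke thickness
  const nx = (-dy / len) * (thickness / 2);
  const ny = ( dx / len) * (thickness / 2);
  return [
    [x1 + nx, y1 + ny],
    [x2 + nx, y2 + ny],
    [x2 - nx, y2 - ny],
    [x1 - nx, y1 - ny],
  ];
}

// horizontal segment of length 10, stroke 2 pixels thick
console.log(segmentToQuad(0, 0, 10, 0, 2));
// [[0, 1], [10, 1], [10, -1], [0, -1]]
```

A fuller implementation would also miter or round the joints between consecutive segments so the stroke doesn't show gaps on sharp turns.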
{ "language": "en", "url": "https://stackoverflow.com/questions/156165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you measure if an interface change improved or reduced usability? For an ecommerce website, how do you measure whether a change to your site actually improved usability? What kinds of measurements should you gather, and how would you set up a framework for making this testing part of development? A: Multivariate testing and reporting is a great way to actually measure these kinds of things. It allows you to test which combination of page elements has the greatest conversion rate, providing continual improvement on your site design and usability. Google Website Optimizer has support for this. A: Similar methods to those you used to identify the usability problems to begin with -- usability testing. Typically you identify your use cases and then run a lab study evaluating how users go about accomplishing certain goals. Lab testing is typically good with 8-10 people. A more informative methodology we have adopted to understand our users is anonymous data collection (you may need user permission; make your privacy policies clear, etc.). This simply records which buttons/navigation menus users click on and how users delete something (i.e. when changing quantity, are more users entering 0 and updating the quantity, or hitting X?). This is a bit more complex to set up; you have to develop an infrastructure to hold this data (which is actually just counters, i.e. "Times clicked x: 138838383, Times entered 0: 390393") and allow data points to be created as needed to plug into the design. A: To push the measurement of an improvement of a UI change up the stream from the end-user (where the data gathering could take a while) to design or implementation, some simple heuristics can be used: * *Is the number of actions it takes to perform a scenario less? (If yes, then it has improved.) Measurement: # of steps reduced/added. *Does the change reduce the number of kinds of input devices to use (even if the # of steps is the same)? 
By this, I mean if you take something that relied on both the mouse and keyboard and changed it to rely only on the mouse or only on the keyboard, then you have improved usability. Measurement: Change in # of devices used. *Does the change make different parts of the website consistent? E.g. if one part of the e-commerce site loses changes made while you are not logged on and another part does not, this is inconsistent. Changing it so that they have the same behavior improves usability (preferably toward the more fault-tolerant behavior, please!). Measurement: Make a graph (a flow chart, really) mapping the ways a particular action could be done. Improvement is a reduction in the # of edges on the graph. *And so on... find some general UI tips, figure out some metrics like the above, and you can approximate usability improvement. Once you have these design approximations of usability improvement and then gather longer-term data, you can see whether the design-level usability improvements have any predictive ability for the end-user reaction (like: over the last 10 projects, we've seen an average of 1% quicker scenarios for each action removed, with a range of 0.25% and a standard dev of 0.32%). A: The first way can be fully subjective or partly quantified: user complaints and positive feedback. The problem with this is that you may have some strong biases when it comes to filtering that feedback, so you had better make it as quantitative as possible. Having a ticketing system to file every report from the users and gathering statistics about each version of the interface might be useful. Just get your statistics right. The second way is to measure the difference in a questionnaire about the interface taken by end-users. Answers to each question should be a set of discrete values, and then again you can gather statistics for each version of the interface. 
The latter way may be much harder to set up (designing a questionnaire, and possibly the controlled environment for it, as well as the guidelines to interpret the results, is a craft by itself), but the former makes it unpleasantly easy to mess up the measurements. For example, you have to consider the fact that the number of tickets you get for each version depends on how long it has been used, and that all time ranges are not equal (e.g. a whole class of critical issues may never be discovered before the third or fourth week of usage, or users might tend not to file tickets in the first days of use even if they find issues, etc.). A: Torial stole my answer. Still, if there is a way to measure how long it takes to do a certain task: if the time is reduced and the task is still completed, that's a good thing. Also, if there is a way to record the number of cancels, that would work too.
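To make the quantitative side of the answers above concrete, here is a minimal sketch of comparing task-completion times across two interface versions. The version names and timing numbers are made up for illustration:

```javascript
// Sketch: per-version task-timing statistics. A lower mean with similar
// spread suggests the UI change helped; the numbers below are invented.
function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }
function stdev(xs) {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map(x => (x - m) ** 2)));
}

const checkoutTimes = {
  v1: [48, 52, 61, 45, 58],  // seconds to complete checkout, old UI
  v2: [39, 41, 44, 37, 43],  // same task, redesigned UI
};

for (const [version, times] of Object.entries(checkoutTimes)) {
  console.log(version, "mean:", mean(times).toFixed(1),
              "stdev:", stdev(times).toFixed(1));
}
```

With only a handful of samples like this, the comparison is suggestive at best; a proper significance test on a larger sample is what actually justifies calling the change an improvement.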
{ "language": "en", "url": "https://stackoverflow.com/questions/156176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you update an object with Linq 2 SQL without rowversion or timestamp? I'm trying to take a POCO object and update it with Linq2SQL using an XML mapping file... This is what I have: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Business.Objects { public class AchievementType { public int Id { get; set; } public string Name { get; set; } } } <?xml version="1.0" encoding="utf-8" ?> <Database Name="Name" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007"> <Table Name="dbo.AchievementTypes" Member="Business.Objects.AchievementType"> <Type Name="Business.Objects.AchievementType"> <Column Name="Id" Member="Id" IsDbGenerated="true" IsPrimaryKey="true" /> <Column Name="Name" Member="Name" /> </Type> </Table> </Database> CREATE TABLE AchievementTypes ( Id INTEGER IDENTITY NOT NULL, Name NVARCHAR(255) NOT NULL, CONSTRAINT PK_AchievementTypes PRIMARY KEY (Id), CONSTRAINT UQ_AchievementTypes UNIQUE (Name) ) and I'm doing the following to update it: var type_repo = new BaseRepository<AchievementType>(); var t1 = new AchievementType { Name = "Foo" }; type_repo.Insert(t1); t1.Name = "Baz"; type_repo.Save(t1, "Id"); and my repository Save is just doing: public void Update(TData entity) { using (var ctx = new MyDataContext()) { ctx.GetTable<TData>().Attach(entity); ctx.SubmitChanges(); } } The update doesn't fail or anything, but the data in the database has not changed. A: Bah, right after I asked I found some documentation on it. Since the context isn't tracking the changes, you need to refresh the attached entity before submitting: ctx.GetTable<TData>().Attach(entity); ctx.Refresh(RefreshMode.KeepCurrentValues, entity); ctx.SubmitChanges();
{ "language": "en", "url": "https://stackoverflow.com/questions/156177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Retrieve only a portion of a matched string using regex in Javascript I've got a string like foo (123) bar I want to retrieve all numbers surrounded with the delimiters ( and ). If I use varname.match(/\([0-9]+\)/), my delimiters are included in the response, and I get "(123)" when what I really want is "123". How can I retrieve only a portion of the matched string without following it up with varname.replace()? A: Yes, use capturing (non-escaped) parens: varname.match(/\(([0-9]+)\)/)[1]
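To illustrate the accepted answer: index 0 of the match result is the whole match, while index 1 is the first capture group. (The matchAll variant at the end requires modern JavaScript, well after this question was asked.)

```javascript
// The capturing parentheses create group 1, which holds just the digits
// without the literal "(" and ")".
const varname = "foo (123) bar";
const m = varname.match(/\(([0-9]+)\)/);
console.log(m[0]); // "(123)"  -- whole match, delimiters included
console.log(m[1]); // "123"    -- capture group, delimiters stripped

// For every delimited number in a string, matchAll exposes the groups too:
const all = [..."a (1) b (22) c".matchAll(/\(([0-9]+)\)/g)].map(x => x[1]);
console.log(all); // ["1", "22"]
```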
{ "language": "en", "url": "https://stackoverflow.com/questions/156181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Reorder PDF Page Order Is it possible to reorder the pages of an already generated PDF file programmatically, using as few resources as possible? This will need to be run on ~8000 PDFs every month or so. We are currently using iTextSharp to merge the PDFs into larger PDFs, but iTextSharp's documentation does not really explain much. A: The Merger product from DynamicPDF will do this (http://www.dynamicpdf.com/). I cannot speak to what kind of performance you'll see with 8k documents, but I can say that it is one of the fastest PDF processing tools I have found. There is a .NET version of both the Merger tool and the Generator tool. A: I've used iTextSharp -- check out this code sample; it's what I used to write a (simpler) splitting utility. I've used this on over 10,000 PDFs in one go, and I can't remember the exact performance, but it was certainly acceptable for a batch job.
{ "language": "en", "url": "https://stackoverflow.com/questions/156212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is a decent beginner graph puzzle? I'm trying to get more acquainted with problems that require graphs to be solved (or are best solved by graphs). If someone has an old ACM Programming Competition problem that utilized graphs, or another problem that they found particularly enlightening as they worked it out, I would appreciate it. I want to familiarize myself with graphs, identify graph-type problems easily, and be able to utilize basic graph traversal algorithms. Anyone have a sweet problem they can send my way? A: To get a better grasp of operations on graphs, you might want to just implement some known graph algorithms. Try implementing a Nurikabe solver or generator. It would need quite a few classical graph operations. A: You should be familiar with the Konigsberg Bridge Problem. You should also get really familiar with the types of data structures that often come up in graph theory problems. A: Graphs can literally be used to model almost any problem. Topcoder.com Marathon Matches often lend themselves to graph-based solutions. * *The ProcessorScheduling problem could be solved with graphs. *The GraphBuilder problem. *The Distance problem. *The MapMaker problem (a weighted k-coloring problem, which is a classic computer science problem). You might check out some of these problems -- and there are more where they came from. A: I found this book to be extremely useful (Amazon Link): Programming Challenges Not only does it give a pretty in-depth explanation of graphs, trees, and basic data structures, it gives a handful of programming challenges involving each type! This document is more useful to me than my textbook! Here are some of the graph problems in it: Problems involving Graph Traversal: * *Bicoloring : pg 203 *Playing With Wheels : pg 204 *The Tourist Guide : pg 206 *Slash Maze : pg 208 *Edit Step Ladders : pg 210 *Tower of Cubes : pg 211 *From Dusk Till Dawn : pg 213 *Hanoi Tower Troubles (Again!) 
: pg 215 Problems involving Graph Algorithms (Dijkstra's, Min Spanning Tree, etc.): * *Freckles : pg 231 *The Necklace : pg 231 *Fire Station : pg 234 *Railroads : pg 235 *War : pg 237 *The Grand Dinner : pg 241 A: You don't say what language you are using (or thinking of using). If I may, I'd suggest Lisp or Python. They're both good for easy graph manipulation. If you want to be really fancy, you might want to create pretty output using PyGame. As for a problem, have a look at a simple program and convert it into a graph. Tip: each token is a node. Assuming that you have some loops and equations, you could traverse the graph and identify what could be moved outside of the loops. Equations could be rearranged to be more "efficient". My rationale for this problem is that it will help you as a programmer by showing you the sort of processes that might be going on inside the optimisation phases of a compiler. BTW, if you give the above a go, have a look at Plex; it will save you a lot of time with parsers. A: http://codekata.pragprog.com/2007/01/kata_nineteen_w.html Hint: a DAWG is a pretty good method.
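As a starting point for basic graph traversal, here is a minimal breadth-first search sketch -- the fewest-moves count that many of the puzzles above reduce to. The graph is a small made-up example:

```javascript
// Breadth-first search over an adjacency-list graph: returns the minimum
// number of edges from start to goal in an unweighted graph, or -1 if the
// goal is unreachable.
function bfsDistance(graph, start, goal) {
  const queue = [[start, 0]];
  const seen = new Set([start]);
  while (queue.length > 0) {
    const [node, dist] = queue.shift();
    if (node === goal) return dist;
    for (const next of graph[node] || []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push([next, dist + 1]);
      }
    }
  }
  return -1; // goal unreachable
}

const graph = {
  A: ["B", "C"],
  B: ["D"],
  C: ["D"],
  D: ["E"],
};
console.log(bfsDistance(graph, "A", "E")); // 3
```

For puzzle problems, the "graph" is usually implicit: each node is a game state and each edge a legal move, so the adjacency list above becomes a function that generates successor states on the fly.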
{ "language": "en", "url": "https://stackoverflow.com/questions/156213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }