I need to empty the data on a socket (making sure that there is nothing left to receive). Unfortunately, there is no function for this in the Python socket module. I've implemented it this way: ``` import select def empty_socket(sock): """remove the data present on the socket""" input = [sock] while 1: inputready, o, e = select.select(input,[],[], 0.0) if len(inputready)==0: break for s in inputready: s.recv(1) ``` What do you think? Is there a better way to do that? --- Update: I don't want to change the socket timeout. That's why I prefer a select to a read. --- Update: The original question used the term 'flush'. It seems that 'empty' is a better term. --- Update - 2010-02-27: I've noticed a bug when the peer has closed: inputready is always filled with the sockets. I fixed that by adding a maximum number of loops. Is there a better fix?
If by "flush" you mean throw away any pending incoming data then you can either use select() like you do, or set the socket to nonblocking and read in a loop until you're out of data. Also note that (from the Linux manpage): > Under Linux, select() may report a socket file descriptor as "ready > for reading", while nevertheless a subsequent read blocks. This > could for example happen when data has arrived but upon > examination has wrong checksum and is discarded. There may be other > circumstances in which a file descriptor is spuriously reported as > ready. Thus it may be safer to use O\_NONBLOCK on sockets that should > not block. [Spurious readiness notification for Select System call](https://stackoverflow.com/questions/858282/spurious-readiness-notification-for-select-system-call) And as has been noted by others, "flush" usually refers to output.
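To illustrate the non-blocking alternative described above, here is a minimal sketch (the function name and buffer size are my own choices, not from the original post):

```python
import socket

def drain_socket(sock):
    """Discard any data already queued on the socket, without
    touching the socket's timeout setting."""
    sock.setblocking(False)
    try:
        while True:
            try:
                chunk = sock.recv(4096)
            except BlockingIOError:
                # No more data is pending.
                break
            if not chunk:
                # The peer has closed the connection.
                break
    finally:
        sock.setblocking(True)
```

Because a closed peer makes `recv` return `b''` rather than block or raise, this also avoids the endless-loop problem the questioner hit when the other end disconnects.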
Using `select.select` is good practice, as indicated in the [Socket Programming HOWTO](https://docs.python.org/3/howto/sockets.html#non-blocking-sockets). You'll need to set the socket as non-blocking, using `sock.setblocking(0)`. Just a comment about nomenclature: `flush` is normally associated with *output* operations.
How to empty a socket in python?
[ "python", "sockets", "select" ]
I'm having some problems getting `Runtime.getRuntime().exec()` to work properly. Here is the code dealing with that part: ``` while (line != null) { String name = line; String commandFull = commandFirst + name + commandLast; String[] fullCommand = new String[] {commandFirst, name, commandLast}; for(int i=0;i<3;i++) { System.out.print(fullCommand[i]); } Runtime runner = Runtime.getRuntime(); Process p = runner.exec(fullCommand); outFile.println(fullCommand); line = inFile.readLine(); } ``` It prints out the command as it should look. When I run the program here is the output: ``` adfind -b dc=stuff,dc=com -f "cn=user" |find "displayName" >> fullList.txt Exception in thread "main" java.lang.IllegalArgumentException at java.lang.ProcessImpl.<init>(Unknown Source) at java.lang.ProcessImpl.start(Unknown Source) at java.lang.ProcessBuilder.start(Unknown Source) at java.lang.Runtime.exec(Unknown Source) at java.lang.Runtime.exec(Unknown Source) at lookup.main(lookup.java:41) ```
You are trying to execute a shell command without the shell. That is, you are trying to execute something that only a shell would interpret (specifically the pipe `'|'` and append `'>>'`). To solve this, have Java execute a shell instance and pass the entire command to the shell. How this works is platform dependent. For instance, on Linux: ``` String[] fullCommand = {"/bin/sh", "-c", "adfind -b dc=stuff,dc=com -f \"cn=user\" |find \"displayName\" >> fullList.txt"}; ``` Or on Windows: ``` String[] fullCommand = {"cmd.exe", "/c", "adfind -b dc=stuff,dc=com -f \"cn=user\" |find \"displayName\" >> fullList.txt"}; ```
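As an illustration that the pipe really is interpreted by the shell and not by `Runtime.exec`, here is a small self-contained sketch. It runs a trivial `echo | tr` pipeline instead of the Windows-only adfind command; the class and method names are illustrative:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ShellExec {
    public static String run(String shellCommand) throws Exception {
        // The pipeline is passed as a single argument to the shell;
        // the shell, not Runtime.exec, interprets '|' and '>>'.
        Process p = new ProcessBuilder("/bin/sh", "-c", shellCommand)
                .redirectErrorStream(true)
                .start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line);
            }
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run("echo hello | tr a-z A-Z"));
    }
}
```

The same pattern applies on Windows with `cmd.exe /c`; only the shell executable changes.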
The redirections and pipes do not work - they are not part of the command, but rely on the underlying shell to evaluate them when you run this line from e.g. a cmd.exe window. You will need to use the input and output streams of the Process object you get from the exec() method. If your main concern is to query Active Directory, rather than using the specific query tool at hand, you should consider going for an all-Java solution. There are several ways to access LDAP directories (which Active Directory also is), for example here: <http://developer.novell.com/wiki/index.php/Jldap> I have not used it myself yet, however it does not seem abandoned as there are releases from 2008.
Java Runtime.getRuntime() Illegal Argument Exception
[ "java", "runtime-error" ]
I have the following code: ``` public static void Serialize() { List<string> dirs = FileHelper.GetFilesRecursive(fileDirectoryPath); List<string> dirFiles = new List<string>(); foreach (string p in dirs) { string path = p; string lastAccessTime = File.GetLastAccessTime(path).ToString(); bool DirFile = File.Exists(path); FileInfo fInf = new FileInfo(path); long lengthInk = fInf.Length / 1024; DateTime lastWriteTime = File.GetLastWriteTime(p); dirFiles.Add(p + "|" + lastAccessTime.ToString() + "|" + DirFile.ToString() + "|" + lastWriteTime.ToString() + "|" + lengthInk.ToString() + " kb"); } } ``` I keep hitting a PathTooLongException with the following line: ``` string lastAccessTime = File.GetLastAccessTime(path).ToString(); ``` The application drills into a drive and finds all files/folders within the drive. I cannot change this path, but since it is above 260 characters, how do I work around this?
The `GetLastAccessTime()` call, with a full path can [exceed the internal limit](http://msdn.microsoft.com/en-us/library/system.io.pathtoolongexception.aspx) (which is OS-version specific, but typically 260 characters) on the maximum length for a fully qualified file path. One way to avoid this, is to use [`Directory.SetCurrentDirectory()`](http://msdn.microsoft.com/en-us/library/system.io.directory.setcurrentdirectory.aspx) to change the current system directory and then call `GetLastAccessTime()` with only a relative path. Just make sure you change your current directory back to what you started from to avoid unexpected issues.
Something like the `.LastAccessTime` property of Delimon.Win32.IO.FileInfo might do the trick. Delimon is a library on Microsoft TechNet for overcoming the long filenames problem. It's called **[Delimon.Win32.IO Library (V4.0)](http://gallery.technet.microsoft.com/DelimonWin32IO-Library-V40-7ff6b16c)** and it has its own versions of key classes from System.IO. For example, you would replace: ``` System.IO.Directory.GetFiles ``` with ``` Delimon.Win32.IO.Directory.GetFiles ``` which will let you handle long files and folders. From the website: > Delimon.Win32.IO replaces basic file functions of System.IO and > supports File & Folder names up to 32,767 Characters. > > This Library is written on .NET Framework 4.0 and can be used either > on x86 & x64 systems. The File & Folder limitations of the standard > System.IO namespace can work with files that have 260 characters in a > filename and 240 characters in a folder name (MAX\_PATH is usually > configured as 260 characters). Typically you run into the > System.IO.PathTooLongException Error with the Standard .NET Library.
PathTooLongException in C# code
[ "c#", "pathtoolongexception" ]
Given a string like so: ``` Hello {FIRST_NAME}, this is a personalized message for you. ``` Where FIRST\_NAME is an arbitrary token (a key in a map passed to the method), I'd like to write a routine which would turn that string into: ``` Hello Jim, this is a personalized message for you. ``` given a map with an entry FIRST\_NAME -> Jim. It would seem that StringTokenizer is the most straightforward approach, but the Javadocs really say you should prefer the regex approach. How would you do that in a regex-based solution?
Try this: **Note:** The [author's final solution](https://stackoverflow.com/questions/1138830/how-to-replace-tokens-in-a-string-without-stringtokenizer/1139250#1139250) builds upon this sample and is much more concise. ``` public class TokenReplacer { private Pattern tokenPattern; public TokenReplacer() { tokenPattern = Pattern.compile("\\{([^}]+)\\}"); } public String replaceTokens(String text, Map<String, String> valuesByKey) { StringBuilder output = new StringBuilder(); Matcher tokenMatcher = tokenPattern.matcher(text); int cursor = 0; while (tokenMatcher.find()) { // A token is defined as a sequence of the format "{...}". // A key is defined as the content between the brackets. int tokenStart = tokenMatcher.start(); int tokenEnd = tokenMatcher.end(); int keyStart = tokenMatcher.start(1); int keyEnd = tokenMatcher.end(1); output.append(text.substring(cursor, tokenStart)); String token = text.substring(tokenStart, tokenEnd); String key = text.substring(keyStart, keyEnd); if (valuesByKey.containsKey(key)) { String value = valuesByKey.get(key); output.append(value); } else { output.append(token); } cursor = tokenEnd; } output.append(text.substring(cursor)); return output.toString(); } } ```
Thanks everyone for the answers! Gizmo's answer was definitely out of the box, and a great solution, but unfortunately not appropriate as the format can't be limited to what the Formatter class does in this case. Adam Paynter really got to the heart of the matter, with the right pattern. Peter Nix and Sean Bright had a great workaround to avoid all of the complexities of the regex, but I needed to raise some errors if there were bad tokens, which that didn't do. But in terms of both doing a regex and a reasonable replace loop, this is the answer I came up with (with a little help from Google and the existing answer, including Sean Bright's comment about how to use group(1) vs group()): ``` private static Pattern tokenPattern = Pattern.compile("\\{([^}]*)\\}"); public static String process(String template, Map<String, Object> params) { StringBuffer sb = new StringBuffer(); Matcher myMatcher = tokenPattern.matcher(template); while (myMatcher.find()) { String field = myMatcher.group(1); myMatcher.appendReplacement(sb, ""); sb.append(doParameter(field, params)); } myMatcher.appendTail(sb); return sb.toString(); } ``` Where doParameter gets the value out of the map and converts it to a string and throws an exception if it isn't there. Note also I changed the pattern to find empty braces (i.e. {}), as that is an error condition explicitly checked for. EDIT: Note that appendReplacement is not agnostic about the content of the string. Per the javadocs, it recognizes $ and backslash as a special character, so I added some escaping to handle that to the sample above. Not done in the most performance conscious way, but in my case it isn't a big enough deal to be worth attempting to micro-optimize the string creations. Thanks to the comment from Alan M, this can be made even simpler to avoid the special character issues of appendReplacement.
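For reference, the `$`/backslash escaping concern mentioned in the edit can be handled with `Matcher.quoteReplacement`. A simplified sketch follows (the `doParameter` lookup is replaced with a plain map lookup that substitutes the literal text "null" for unknown keys, purely to keep the example self-contained):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenDemo {
    private static final Pattern TOKEN = Pattern.compile("\\{([^}]*)\\}");

    public static String process(String template, Map<String, Object> params) {
        StringBuffer sb = new StringBuffer();
        Matcher m = TOKEN.matcher(template);
        while (m.find()) {
            Object value = params.get(m.group(1));
            // quoteReplacement escapes '$' and '\' so the replacement
            // text is treated literally by appendReplacement.
            m.appendReplacement(sb, Matcher.quoteReplacement(String.valueOf(value)));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

`Matcher.quoteReplacement` returns a literal replacement string, so token values containing `$` or `\` no longer corrupt the output.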
How to replace tokens in a string without StringTokenizer
[ "java", "regex", "stringtokenizer" ]
**EDIT:** I've written the results up as a [blog post](http://codeblog.jonskeet.uk/2009/07/07/faking-com-to-fool-the-c-compiler.aspx). --- The C# compiler treats COM types somewhat magically. For instance, this statement looks normal... ``` Word.Application app = new Word.Application(); ``` ... until you realise that `Application` is an interface. Calling a constructor on an interface? Yoiks! This actually gets translated into a call to [`Type.GetTypeFromCLSID()`](http://msdn.microsoft.com/en-us/library/system.type.gettypefromclsid.aspx) and another to [`Activator.CreateInstance`](http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx). Additionally, in C# 4, you can use non-ref arguments for `ref` parameters, and the compiler just adds a local variable to pass by reference, discarding the results: ``` // FileName parameter is *really* a ref parameter app.ActiveDocument.SaveAs(FileName: "test.doc"); ``` (Yeah, there are a bunch of arguments missing. Aren't optional parameters nice? :) I'm trying to investigate the compiler behaviour, and I'm failing to fake the first part. I can do the second part with no problem: ``` using System; using System.Runtime.InteropServices; using System.Runtime.CompilerServices; [ComImport, GuidAttribute("00012345-0000-0000-0000-000000000011")] public interface Dummy { void Foo(ref int x); } class Test { static void Main() { Dummy dummy = null; dummy.Foo(10); } } ``` I'd like to be able to write: ``` Dummy dummy = new Dummy(); ``` though. Obviously it'll go bang at execution time, but that's okay. I'm just experimenting. The other attributes added by the compiler for linked COM PIAs (`CompilerGenerated` and `TypeIdentifier`) don't seem to do the trick... what's the magic sauce?
By no means am I an expert in this, but I stumbled recently on what I think you want: the [CoClass](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.coclassattribute.aspx) attribute class. ``` [System.Runtime.InteropServices.CoClass(typeof(Test))] public interface Dummy { } ``` > A coclass supplies concrete > implementation(s) of one or more > interfaces. In COM, such concrete > implementations can be written in any > programming language that supports COM > component development, e.g. Delphi, > C++, Visual Basic, etc. See [my answer to a similar question about the Microsoft Speech API](https://stackoverflow.com/questions/958932/it-looks-like-im-instantiating-this-speechapi-interface-how-is-that-possible/958952#958952), where you're able to "instantiate" the interface `SpVoice` (but really, you're instantiating `SPVoiceClass`). ``` [CoClass(typeof(SpVoiceClass))] public interface SpVoice : ISpeechVoice, _ISpeechVoiceEvents_Event { } ```
Between you and Michael you've almost got the pieces put together. I think this is how it works. (I didn't write the code, so I might be slightly mis-stating it, but I'm pretty sure this is how it goes.) If: * you are "new"ing an interface type, and * the interface type has a known coclass, and * you ARE using the "no pia" feature for this interface then the code is generated as (IPIAINTERFACE)Activator.CreateInstance(Type.GetTypeFromClsid(GUID OF COCLASSTYPE)) If: * you are "new"ing an interface type, and * the interface type has a known coclass, and * you ARE NOT using the "no pia" feature for this interface then the code is generated as if you'd said "new COCLASSTYPE()". Jon, feel free to bug me or Sam directly if you have questions about this stuff. FYI, Sam is the expert on this feature.
How does the C# compiler detect COM types?
[ "c#", "com", "compiler-construction", "c#-4.0" ]
I have a URL as a string. How do I match the numbers after the VideoID. Also VideoID may occur at different points in the URL. But I will worry about that afterwards, as I can't even do this. ``` $string = 'http://example.com/index.php?action=vids.individual&VideoID=60085484'; preg_match('/(?<VideoID>)=/', $string, $matches); print_r($matches); ``` ...Spare some change for a noob. :)
Just use the built-in [`parse_url`](http://php.net/manual/en/function.parse-url.php)/[`parse_str`](http://php.net/manual/en/function.parse-str.php) ``` $string = 'http://example.com/index.php?action=vids.individual&VideoID=60085484'; $URL = parse_url($string); parse_str($URL['query'],$Q); print_r($Q); ``` returns > ``` > Array ( > [action] => vids.individual > [VideoID] => 60085484 > ) > ```
``` /(?:\?|&)VideoID=([0-9]+)/ # get just the ID, stored in \\1 /(?:\?|&)(VideoID=[0-9]+)/ # get VideoId=ID, stored in \\1 ``` Under the assumption that your URL is properly formed, the parameter will always be preceded by either `?` or `&`, and in your example the ID is strictly numerical, so it will match a valid ID up to the next segment of the URL.
Preg_Match a string in the form of a URL
[ "php", "regex", "string" ]
How do I write a LINQ statement that returns the ROWNUMBER of the element with id == something?
There is no direct way to do this that I'm aware of. You'd have to pull the whole query down to the client, and from there you could project in the row numbers. As an alternative, you could write a stored procedure that uses ROW\_NUMBER, and then hit that proc from Linq to SQL. In your case, the only way you're going to be able to do this would be client side. Keep in mind that the following statement is NOT going to do this at the server, but will pull down your whole table and get the index at the client... ``` using (var dc = new DataClasses1DataContext()) { var result = dc.Users .AsEnumerable() // select all users from the database and bring them back to the client .Select((user, index) => new // project in the index { user.Username, index }) .Where(user => user.Username == "sivey"); // filter for your specific record foreach (var item in result) { Console.WriteLine(string.Format("{0}:{1}", item.index, item.Username)); } } ```
You should be able to use the Skip and Take extension methods to accomplish this. For example, if you want row 10: ``` (from c in customers where c.Region == "somewhere" orderby c.CustomerName select new {c.CustomerID, c.CustomerName}) .Skip(9).Take(1); ``` Note that the query expression has to be parenthesized before you can call Skip and Take on it.
LINQ statement that returns rownumber of element with id == something?
[ "c#", "linq-to-sql" ]
I have a postgres table like this: ``` CREATE SEQUENCE seq; CREATE TABLE tbl (id INTEGER DEFAULT nextval('seq'), data VARCHAR); ``` When I insert into the table: ``` INSERT INTO tbl (data) VALUES ('something'); ``` How can I get back the value of the id field for the record I just created? (Note, I may have got some of the SQL syntax a bit off; the gist should be clear, though) Suppose for the sake of argument that I'm not able to call currval on the same session because I don't control the transaction boundaries. I *might* be working in the same session with my next call, but I might not be, too.
You're looking for `INSERT ... RETURNING`. From [The Manual's section on `INSERT`](http://www.postgresql.org/docs/8.4/interactive/sql-insert.html): > The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted. This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number. However, any expression using the table's columns is allowed. The syntax of the RETURNING list is identical to that of the output list of SELECT.
In the same session: ``` SELECT currval('seq'); ``` EDIT: But if you can't use currval/nextval because you don't know if the inserting and selecting of the sequence value will occur in the same session, and if you're on postgresql 8.2 or later, you could probably write your insert statement like this. ``` INSERT INTO tbl (data) VALUES ('something') RETURNING id; ``` which should also return the last inserted id.
How do I get the value of the autogenerated fields using Postgresql?
[ "sql", "postgresql" ]
I'm new to Crystal Reports and was interested in finding out what books would be most helpful. I'm planning on using the Designer as well as integrating Crystal Reports with a .NET C# application. Anybody have any idea which are the best books for this purpose?
Some suggestions: Crystal Reports 2008: The Complete Reference (Osborne Complete Reference Series): [http://www.amazon.com/Crystal-Reports-2008-Complete-Reference/dp/0071590986/ref=sr\_1\_5?ie=UTF8&s=books&qid=1246629337&sr=8-5](https://rads.stackoverflow.com/amzn/click/com/0071590986) Crystal Reports 10: The Complete Reference (Complete Reference Series): [http://www.amazon.com/Crystal-Reports-10-Complete-Reference/dp/0072231661/ref=sr\_1\_26?ie=UTF8&s=books&qid=1246629400&sr=8-26](https://rads.stackoverflow.com/amzn/click/com/0072231661) Professional Crystal Reports for Visual Studio .NET, 2nd Edition: [http://www.amazon.com/Professional-Crystal-Reports-Visual-Studio/dp/0764557300/ref=sr\_1\_34?ie=UTF8&s=books&qid=1246629455&sr=8-34](https://rads.stackoverflow.com/amzn/click/com/0764557300) All come highly recommended.
**Crystal Reports .NET Programming** *by **Brian Bischof*** I think this is a nice one.
Good book for Crystal Reports?
[ "c#", ".net", "crystal-reports" ]
I need to split my string input into an array at the commas. Is there a way to explode a comma-separated string into a flat, indexed array? Input: ``` 9,admin@example.com,8 ``` Output: ``` ['9', 'admin@example.com', '8'] ```
Try [explode](https://www.php.net/manual/en/function.explode.php): ``` $myString = "9,admin@example.com,8"; $myArray = explode(',', $myString); print_r($myArray); ``` Output : ``` Array ( [0] => 9 [1] => admin@example.com [2] => 8 ) ```
``` $string = '9,admin@google.com,8'; $array = explode(',', $string); ``` For more complicated situations, you may need to use [`preg_split`](http://php.net/preg_split).
Split a comma-delimited string into an array?
[ "php", "arrays", "string", "csv", "explode" ]
> **Possible Duplicate:** > [When to choose checked and unchecked exceptions](https://stackoverflow.com/questions/27578/when-to-choose-checked-and-unchecked-exceptions) Why does **Java** as a language have *both* checked and unchecked exceptions? What purpose do they serve? **Note**: I'm not asking when I should use them, or how to code them, but what they add to the language.
The theory for checked exceptions is simple. When designing an interface, think of exceptional cases that can occur, and will occur, with the normal state of a method call. Declare these exceptions in your interface, as the programmer will have to handle them directly. For example, a bank account withdraw method may declare an OverdraftException, which is an expected exception - a withdrawal may fail due to overdraft, but this type of failure may be handled differently by the client code (one may decide to completely deny the withdrawal, another may decide to apply a huge penalty and allow for a negative balance to be recorded, another may decide that their client is allowed to draw from a different account). However, runtime exceptions were supposed to be programming errors that weren't supposed to be handled directly - such as NullPointerExceptions, which only occur if methods take invalid arguments or don't check for such cases directly. This is a good theory. However, Java messed up with its implementation of Exceptions, and this threw the book of this theory out the window. There are two cases that I will illustrate where Java messed up with its implementation of Exceptions. These are IOException and SQLException. An IOException occurs anytime, anywhere a stream in the IO libraries of Java messes up. This is a checked exception, however. But, generally you cannot do anything but log that an error occurred - if you're simply writing to the console, what can you reasonably be expected to do if you suddenly get an IOException when you're writing to it? But there's more. IOException also hides stuff like file exceptions and network exceptions. There may be subclasses of IOException floating around for that, but it is still a checked exception. If your write to an external file fails, you can't really do much about it - if your network connection is severed, ditto. SQLException is the same way. Exception names should show what happened when they are called. 
SQLException does not. SQLException is thrown any single time any possible number of errors are encountered when dealing with a database - MOST OF WHICH HAVE NOTHING TO DO WITH SQL. Therefore, programmers typically get annoyed with handling exceptions, and let Eclipse (or whatever IDE they're using) generate blocks like this: ``` try { thisMethodThrowsACheckedExceptionButIDontCare(); } catch(Exception e) { e.printStackTrace(); } ``` However, with RuntimeExceptions, these intentionally bubble up and eventually get handled at the JVM or container level. This is a good thing - it forces errors to show up and then you must fix the code directly instead of ignoring the exception - you may still end up just printing the stack trace (hopefully logging it instead of printing to the console directly), but then there will be an exception handler that you were forced to write because of a real problem - not because a method said that it *might* possibly throw an Exception, but that it did. Spring uses a DataAccessException to wrap SQLExceptions so that you don't have to handle them as a checked exception. It makes code much cleaner as a result - if you expect a DataAccessException, you can handle it - but most of the time you let it propagate and be logged as an error, because your SQL should be debugged by the time you release your application, meaning a DataAccessException is probably a hardware issue that you could not resolve - DataAccessException is a much more meaningful name than SQLException, because it shows that access to data failed - not that your SQL query was necessarily at fault.
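The bank-account scenario from the accepted answer can be made concrete with a short sketch (`OverdraftException` and `Account` are hypothetical names used for illustration, not a real API):

```java
public class BankDemo {
    // A checked exception: callers must catch it or declare it.
    static class OverdraftException extends Exception {
        OverdraftException(String msg) { super(msg); }
    }

    static class Account {
        private long balance;
        Account(long balance) { this.balance = balance; }

        // "throws OverdraftException" makes the failure mode part of the contract.
        long withdraw(long amount) throws OverdraftException {
            if (amount > balance) {
                throw new OverdraftException("insufficient funds");
            }
            balance -= amount;
            return balance;
        }
    }

    public static void main(String[] args) {
        Account acct = new Account(100);
        try {
            acct.withdraw(500);
        } catch (OverdraftException e) {
            // The compiler forces this handling; each client can react differently.
            System.out.println("denied: " + e.getMessage());
        }
    }
}
```

Because `OverdraftException` extends `Exception` rather than `RuntimeException`, deleting the `try`/`catch` in `main` is a compile error - which is exactly the contract checked exceptions are meant to enforce.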
Personally, I think checked exceptions were a mistake in Java. That aside, designating both checked and unchecked exceptions allows a library to differentiate between recoverable and unrecoverable errors. By making all recoverable errors throw checked exceptions, a library/language can force a developer to handle the edge cases they might otherwise paper over. The big problem with this: ``` try{ myCode(); }catch(Exception e){ //Do nothing } ``` Additionally, in most cases it really is best to just throw up your hands and pass an exception up when one occurs. By forcing checked exceptions to be declared, a method that really doesn't care if an error occurs ends up having dependencies (in terms of compatibility, but also code-smell and others) it really shouldn't.
Why does Java have both checked and unchecked exceptions?
[ "java", "programming-languages", "theory" ]
The function associated with the selector stops working when I replace its contents using .html(). Since I cannot post my original code, I've created an example to show what I mean... > **Jquery** ``` $(document).ready(function () { $("#pg_display span").click(function () { var pageno = $(this).attr("id"); alert(pageno); var data = "<span id='page1'>1</span><span id='page2'> 2</span><span id='page3'> 3</span>"; $("#pg_display").html(data); }); }); ``` > **HTML** ``` <div id="pg_display"> <span id="page1">1</span> <span id="page2">2</span> <span id="page3">3</span> </div> ``` Is there any way to fix this?...Thanks
Not sure I understand you completely, but if you're asking why .click() functions aren't working on spans that are added later, you'll need to use [.live()](http://docs.jquery.com/Events/live), ``` $("#someSelector span").live("click", function(){ // do stuff to spans currently existing // and those that will exist in the future }); ``` This will add functionality to any element currently on the page, and any element that is later created. It keeps you from having to re-attach handlers when new elements are created.
You have to re-bind the event after you replace the HTML, because the original DOM element will have disappeared. To allow this, you have to create a named function instead of an anonymous function: ``` function pgClick() { var pageno = $(this).attr("id"); alert(pageno); var data="<span id='page1'>1</span><span id='page2'> 2</span><span id='page3'> 3</span>"; $("#pg_display").html(data); $("#pg_display span").click(pgClick); } $(document).ready(function(){ $("#pg_display span").click(pgClick); }); ```
Issue with selectors & .html() in jquery?
[ "javascript", "jquery", "html" ]
I have a Rails app with Users, and each user HABTM Roles. I want to select Users without a specific role. I have searchlogic at my disposal, and I'm lost. I've tried using a combination of conditions and joins and includes and what not, but I can't seem to nail it. This works: ``` User.find(:all, :conditions => ['role_id != ?', Role[:admin].id], :joins => :roles) ``` to find users that are not admins, but does not find users with no roles (which I want to find as well). What simple thing am I missing in my tired state?
How about this: ``` User.find :all, :conditions => [ 'roles.id is ? or roles.id != ?', nil, Role[:admin].id ], :include => :roles ``` This works for `has_many :through`, seems like it should be the same for HABTM.
Use a sub-query and the NOT IN operator: ``` User.find(:all, :conditions => ["id NOT IN (select user_id from roles_users where role_id = ?)", Role[:admin].id]) ```
Rails/AR find where habtm does not include
[ "sql", "ruby-on-rails", "has-and-belongs-to-many", "searchlogic" ]
I have seen some instances where people will say you have to use JS .style.display = 'none'; and that .NET .Visible = false will not work. What is the difference between the two and why would something work with the JS and not with the .NET? An example is <http://www.componentart.com/forums/ShowPost.aspx?PostID=27586> (see the bottom post) Thanks
* `display: none` completely hides the element, 0px \* 0px, but the HTML element is still there in the source * `Visible = false` removes the HTML element from the HTML output * A third option, `visibility: hidden` hides the element but reserves the space for it in the layout
That really depends on the component. By default in ASP.NET, setting a control's `Visible` property to false will prevent it from being rendered to the output stream at all. However, with some custom components it may just set a style attribute to hide the rendered elements. I believe in the case pointed out, the ComponentArt control in question is dependent on a built-in control, and if you set the built-in control's visibility to false it might break the functionality of the ComponentArt control.
Difference between JS style.display and .NET visible
[ ".net", "javascript", "visibility" ]
I have a TextBlock inside a limited-size control. If the text is too long to fit into the control, I'd like to show a tooltip with full text. This is a classic behavior you surely know from many apps. I tried using a Converter to convert TextBlock width into Tooltip's Visibility. ``` <GridViewColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding Text}"> <TextBlock.ToolTip> <ToolTip DataContext="{TemplateBinding Content}" Visibility="{Binding Converter={StaticResource visConvert}}"> <TextBlock Text="{Binding Text}"></TextBlock> </ToolTip> </TextBlock.ToolTip> </TextBlock> </DataTemplate> </GridViewColumn.CellTemplate> ``` The problem is that in the Converter: ``` public object Convert(object value, ... ``` 'value' is the DataBound item. I'd like the 'value' to be the TextBlock, to observe its Width, and compare it to the GridViewColumn.Width.
I figured it out, the Tooltip has **PlacementTarget** property that specifies the UI element that has the Tooltip. In case anyone needs it: ``` <TextBlock Text="{Binding Text}"> <TextBlock.ToolTip> <ToolTip DataContext="{Binding Path=PlacementTarget, RelativeSource={x:Static RelativeSource.Self}}" Visibility="{Binding Converter={StaticResource toolVisConverter}}"> <TextBlock Text="{Binding Text}"/> <!-- tooltip content --> </ToolTip> </TextBlock.ToolTip> </TextBlock> ``` And then write a Converter that converts TextBlock to Visibility (based on TextBlock width).
Ok, so why do it the hard XAML-only way? This works: ``` <TextBlock Text="{Binding Text}" IsMouseDirectlyOverChanged="TextBlock_IsMouseDirectlyOverChanged" > <TextBlock.ToolTip> <ToolTip Visibility="Collapsed"> <TextBlock Text="{Binding Text}"></TextBlock> </ToolTip> </TextBlock.ToolTip> </TextBlock> ``` in Control.xaml.cs: ``` private void TextBlock_IsMouseDirectlyOverChanged(object sender, DependencyPropertyChangedEventArgs e) { bool isMouseOver = (bool)e.NewValue; if (!isMouseOver) return; TextBlock textBlock = (TextBlock)sender; bool needed = textBlock.ActualWidth > (this.listView.View as GridView).Columns[2].ActualWidth; ((ToolTip)textBlock.ToolTip).Visibility = needed ? Visibility.Visible : Visibility.Collapsed; } ```
Show WPF Tooltip if needed
[ "", "c#", "wpf", "conditional-statements", "tooltip", "" ]
I'm trying to use a JavaScript object as an associative array and everything was fine until I needed to get the number of entries stored in it. What is the easiest and most elegant way to do that? All I can think of is to run a `for...in` loop or jQuery's `$.each` function and count the iterations, but that seems like an awful thing to do.
Old Firefox supports the `__count__` property. Newer environments support ES5's `Object.keys`. For older environments we have to fallback to just iterating over the object and counting manually (ugh!): ``` function count(obj) { if (obj.__count__ !== undefined) { // Old FF return obj.__count__; } if (Object.keys) { // ES5 return Object.keys(obj).length; } // Everything else: var c = 0, p; for (p in obj) { if (obj.hasOwnProperty(p)) { c += 1; } } return c; } ```
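The fallback chain above can be collapsed into a single helper in modern JavaScript. `Object.keys` is the standard route in any ES5+ environment, with the manual `hasOwnProperty` loop kept only as the portable last resort (a sketch; the `__count__` branch is dropped since it only ever existed in old Firefox):

```javascript
// Count an object's own enumerable properties.
// Object.keys exists in every ES5+ environment; the manual loop
// is the fallback for anything older.
function countKeys(obj) {
  if (typeof Object.keys === "function") {
    return Object.keys(obj).length;
  }
  var count = 0;
  for (var prop in obj) {
    if (Object.prototype.hasOwnProperty.call(obj, prop)) {
      count += 1;
    }
  }
  return count;
}
```

Using `Object.prototype.hasOwnProperty.call` instead of `obj.hasOwnProperty` keeps the fallback working even when the object shadows `hasOwnProperty` with a property of its own.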
This worked for me: ``` Object.keys(obj).length ```
How do I get the number of fields in a JavaScript object?
[ "", "javascript", "" ]
I have a DataGridView with multiple columns and rows. The first column contains a checkbox. I want the user to be able to select multiple checkboxes and then perform an action. For example, if they check the boxes in rows 1 and 2, the data from the other columns in rows 1 and 2 should be collected and passed into a message box. I know I need to use the checkbox-changed event to do this; however, I am having trouble working out how to do this for multiple rows.
In the button's Click event handler do:

```
static int SelectColumnIndex = 0;

private void PerformAction_Click(object sender, System.EventArgs e)
{
    string data = string.Empty;
    foreach (DataGridViewRow row in MyDataGridView.Rows)
    {
        // Only rows whose checkbox cell is checked contribute data.
        if (row.Cells[SelectColumnIndex].Value != null
            && Convert.ToBoolean(row.Cells[SelectColumnIndex].Value))
        {
            foreach (DataGridViewCell cell in row.Cells)
            {
                if (cell.OwningColumn.Index != SelectColumnIndex)
                {
                    data += cell.Value + " "; // collect this cell's value
                }
            }
            data += "\n";
        }
    }
    MessageBox.Show(data, "Data");
}
```
If you want the user to click on a button to perform the action, then what you need to handle is the Click event of the button, not the CheckBox Changed event... When the button is clicked, just go through all rows of your DataGridView and perform an action on rows with a checked checkbox.
C# Datagridview Checkbox Checked Event - multiple rows?
[ "", "c#", "datagridview", "checkbox", "checked", "" ]
I am trying to use C# to access the content of a webpage. For example, I want to grab the text of the body of google homepage. I know this is doable in C# with its web browser control. But I couldn't find a good, simple example of doing it. All the resources I found online involve creating Forms and GUI, which I don't need, I just need a good old Console Application. If anyone can provide a simple console-based code snippet that accomplishes the above, it'll be greatly appreciated.
Actually the WebBrowser is a GUI control used in case you want to visualize a web page (embed and manage Internet Explorer in your windows application). If you just need to get the contents of a web page you could use the [WebClient](http://msdn.microsoft.com/en-us/library/system.net.webclient%28VS.80%29.aspx) class: ``` class Program { static void Main(string[] args) { using (var client = new WebClient()) { var contents = client.DownloadString("http://www.google.com"); Console.WriteLine(contents); } } } ```
You can also use the WatiN library to load and manipulate web pages easily. It was designed as a testing library for web UIs. To use it, get the latest version from the official site: <http://watin.sourceforge.net/>. For C#, the following code in a console application will print the text of the Google home page (modified from the getting-started example on the WatiN site). The library also contains many more useful methods for getting and setting various parts of the page, taking actions and checking for results.

```
using System;
using WatiN.Core;

namespace Test
{
    class WatiNConsoleExample
    {
        [STAThread]
        static void Main(string[] args)
        {
            // Open a new Internet Explorer window and
            // go to the Google website.
            IE ie = new IE("http://www.google.com");

            // Write out the text of the body.
            Console.WriteLine(ie.Text);

            // Close Internet Explorer, then wait for a key press
            // before the console window closes.
            ie.Close();
            Console.ReadKey();
        }
    }
}
```
Access the Contents of a Web Page with C#
[ "", "c#", ".net", "dom", "" ]
I am currently getting the first day of this week and of last week with a VBScript function, in 2/12/2009 format. I was wondering if this is possible with a SQL query.
These statements should do what you want in TSQL. Note, the statements are based on the current date. You can replace getdate() for whatever date you wish: ``` Select dateadd(wk, datediff(wk, 0, getdate()) - 1, 0) as LastWeekStart Select dateadd(wk, datediff(wk, 0, getdate()), 0) as ThisWeekStart Select dateadd(wk, datediff(wk, 0, getdate()) + 1, 0) as NextWeekStart ``` There are lots of other [date routines here](https://stackoverflow.com/questions/1114592/getdate-last-month/1114784#1114784).
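The week-truncation idea above is language-neutral. As a hedged illustration (in JavaScript rather than T-SQL), the sketch below takes Monday as the first day of the week, works in UTC to avoid timezone surprises, and keeps a Sunday input inside the current week rather than pushing it into the next one:

```javascript
// Start of the week (Monday-based) for a given date, in UTC.
// Mirrors the T-SQL trick: find how far into the week the date is,
// then step back to the week's first day.
function startOfWeek(date) {
  var d = new Date(Date.UTC(
    date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate()));
  // getUTCDay(): 0 = Sunday ... 6 = Saturday; shift so Monday = 0.
  var shift = (d.getUTCDay() + 6) % 7;
  d.setUTCDate(d.getUTCDate() - shift);
  return d;
}
```

Subtracting or adding 7 days to the result gives last week's and next week's start, matching the three T-SQL statements.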
Ben's answer gives the wrong result if today is Sunday. Regardless of whether the first day of the week is Sunday or Monday, the current date should be included in the current week. When I run his code for 3/24/2013, it gives me a ThisWeekStart of 3/25/2013. I used the following code instead:

```
SELECT DATEADD(DAY, 1 - DATEPART(DW, '3/24/2013'), '3/24/2013')
```
First day of this week and last week
[ "", "sql", "sql-server", "" ]
I need to send a UDP message to a specific IP and port. Since there are 3 network cards,

```
10.1.x.x
10.2.x.x
10.4.x.x
```

when I send a UDP message, it is received on only one network adapter; the others do not receive it. I want to select which network adapter is used when sending the message. How can I do that?

---

Currently I am using the following:

```
IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Parse(LocalIP), 0);
IPEndPoint targetEndPoint = new IPEndPoint(TargetIP, iTargetPort);
UdpClient sendUdpClient = new UdpClient(localEndPoint);
int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint);
```
This is actually trickier than it sounds because if you have more than one interface the broadcasts will not always go out to all the interfaces. To get around this I created this class. ``` public class MyUdpClient : UdpClient { public MyUdpClient() : base() { //Calls the protected Client property belonging to the UdpClient base class. Socket s = this.Client; //Uses the Socket returned by Client to set an option that is not available using UdpClient. s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.DontRoute, 1); } public MyUdpClient(IPEndPoint ipLocalEndPoint) : base(ipLocalEndPoint) { //Calls the protected Client property belonging to the UdpClient base class. Socket s = this.Client; //Uses the Socket returned by Client to set an option that is not available using UdpClient. s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.DontRoute, 1); } } ``` Then to send the UDP packet via broadcast, I use something like the following. I am using `IPAddress.Broadcast` and `MyUdpClient`, which is different from your code. ``` IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Parse(LocalIP), 0); IPEndPoint targetEndPoint = new IPEndPoint(IPAddress.Broadcast, iTargetPort); MyUdpClient sendUdpClient = new MyUdpClient(localEndPoint); int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint); ``` Also, you should note that when you use a specific `ipaddress` instead of broadcast the route table only sends it out the interface that matches the address. So in your example, unicast is used. You need to set `LocalIP` to the IP of the local interface you want to send out to. With three interfaces, you would have three local IP's and you need to pick the correct one to use. 
``` IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Parse(LocalIP), 0); IPEndPoint targetEndPoint = new IPEndPoint(TargetIP, iTargetPort); MyUdpClient sendUdpClient = new MyUdpClient(localEndPoint); int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint); ``` Because route is turned off you might see it on all interfaces but you will need to test this for the unicast case. If you don't care about the send IP or port you can use the following code. ``` IPEndPoint targetEndPoint = new IPEndPoint(TargetIP, iTargetPort); MyUdpClient sendUdpClient = new MyUdpClient(); int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint); ``` or for broadcast ``` IPEndPoint targetEndPoint = new IPEndPoint(IPAddress.Broadcast, iTargetPort); MyUdpClient sendUdpClient = new MyUdpClient(); int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint); ``` The problem with `IPAddress.Broadcast` is that they will not route through any gateways. To get around this you can create a list of `IPAddresses` and then loop through and send. Also since Send can fail for network issues that you cannot control you should also have a try/catch block. ``` ArrayList ip_addr_acq = new ArrayList(); ip_addr_acq.Add(IPAddress.Parse("10.1.1.1")); // add to list of address to send to try { foreach (IPAddress curAdd in ip_addr_acq) { IPEndPoint targetEndPoint = new IPEndPoint(curAdd , iTargetPort); MyUdpClient sendUdpClient = new MyUdpClient(); int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint); Thread.Sleep(40); //small delay between each message } } catch { // handle any exceptions } ``` Edit: see above change to unicast with multiple interfaces and also [Problem Trying to unicast packets to available networks.](https://stackoverflow.com/questions/1096497/problem-trying-to-unicast-packets-to-available-networks/1101383#1101383)
Expansion of Rex's Answer. This allows you to not have to hard code the ip addresses that you want to broadcast on. Loops through all interfaces, checks if they are up, makes sure it has IPv4 information, and an IPv4 address is associated with it. Just change the "data" variable to whatever data you want to broadcast, and the "target" port to the one you want. Small drawback is that if an interface has multiple ip addresses associated with it, it will broadcast out of each address. Note: this will ALSO try to send broadcasts through any VPN adapter (via Network and Sharing Center/Network Connections, Win 7+ verified), and if you want to receive responses, you will have to save all the clients. You also will not need a secondary class. ``` foreach( NetworkInterface ni in NetworkInterface.GetAllNetworkInterfaces() ) { if( ni.OperationalStatus == OperationalStatus.Up && ni.SupportsMulticast && ni.GetIPProperties().GetIPv4Properties() != null ) { int id = ni.GetIPProperties().GetIPv4Properties().Index; if( NetworkInterface.LoopbackInterfaceIndex != id ) { foreach(UnicastIPAddressInformation uip in ni.GetIPProperties().UnicastAddresses ) { if( uip.Address.AddressFamily == AddressFamily.InterNetwork ) { IPEndPoint local = new IPEndPoint(uip.Address.Address, 0); UdpClient udpc = new UdpClient(local); udpc.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); udpc.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.DontRoute, 1); byte[] data = new byte[10]{1,2,3,4,5,6,7,8,9,10}; IPEndPoint target = new IPEndPoint(IPAddress.Broadcast, 48888); udpc.Send(data,data.Length, target); } } } } } ```
Broadcasting UDP message to all the available network cards
[ "", "c#", "udp", "udpclient", "multihomed", "" ]
I am building an XML RSS feed for my page, and I am running into this error:

```
error on line 39 at column 46: xmlParseEntityRef: no name
```

Apparently this is because I can't have a raw & in XML, which I do have in my last field. What is the best way to clean all my `$row['field']`s in PHP so that &'s turn into `&amp;`?
Use [`htmlspecialchars`](http://docs.php.net/htmlspecialchars) to encode just the HTML special characters `&`, `<`, `>`, `"` and optionally `'` (see second parameter `$quote_style`).
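The escaping itself is language-neutral. Here is a hedged one-function sketch of it in JavaScript, covering the same five characters that `htmlspecialchars` handles with `ENT_QUOTES`, mainly to show why the ampersand must be replaced first (note PHP emits `&#039;` for the single quote, while `&apos;` is used here since it is XML's predefined entity):

```javascript
// Escape the characters that are special in XML text content.
// The ampersand must be replaced first, or the entities produced
// by the other replacements would themselves be double-escaped.
function escapeXml(text) {
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}
```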
It's called [htmlentities()](http://is.php.net/manual/en/function.htmlentities.php) and [html\_entity\_decode()](http://is.php.net/manual/en/function.html-entity-decode.php)
Converting & to &amp; for XML in PHP
[ "", "php", "xml", "" ]
> **Possible Duplicate:**
> [Tool to read and display Java .class versions](https://stackoverflow.com/questions/27065/tool-to-read-and-display-java-class-versions)

I'm trying to debug a "Bad version number in .class file" error in Java. Is there a way for me to check which version the `.class` files are? I'm using JRE 1.5.0_6, but my JDK is version 1.6.0_13. I'm compiling with compatibility mode set to 1.5 in Eclipse, which I thought would work...
You're looking for this on the command line (for a class called MyClass): On Unix/Linux: ``` javap -verbose MyClass | grep "major" ``` On Windows: ``` javap -verbose MyClass | findstr "major" ``` You want the **major** version from the results. Here are some example values: | Java Version | Major Version | | --- | --- | | 1.2 | 46 | | 1.3 | 47 | | 1.4 | 48 | | 5 | 49 | | 6 | 50 | | 7 | 51 | | 8 | 52 | | 9 | 53 | | 10 | 54 | | 11 | 55 | | 12 | 56 | | 13 | 57 | | 14 | 58 | | 15 | 59 | | 16 | 60 | | 17 | 61 | | 18 | 62 | | 19 | 63 | | 20 | 64 | | 21 | 65 |
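If you'd rather inspect the bytes than shell out to `javap`, the class-file header makes this easy: four magic bytes (`0xCAFEBABE`), then a two-byte minor and a two-byte major version, all big-endian. A hedged sketch of reading and mapping the major version (JavaScript used here purely for illustration):

```javascript
// Read the class-file major version from the first bytes of a .class file.
// Layout: magic (4 bytes, 0xCAFEBABE), minor (2 bytes), major (2 bytes),
// all big-endian.
function classFileMajorVersion(bytes) {
  if (bytes[0] !== 0xca || bytes[1] !== 0xfe ||
      bytes[2] !== 0xba || bytes[3] !== 0xbe) {
    throw new Error("not a class file");
  }
  return (bytes[6] << 8) | bytes[7];
}

// Major 49 corresponds to Java 5, and each release after that adds one;
// majors below 49 map to the old 1.x naming.
function javaVersionFromMajor(major) {
  return major >= 49 ? String(major - 44) : "1." + (major - 44);
}
```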
By the way, the reason you're having trouble is that the Java compiler recognizes two version flags. There is `-source 1.5`, which assumes Java 1.5 level source code, and `-target 1.5`, which will emit Java 1.5 compatible class files. You'll probably want to use both of these switches, but you definitely need `-target 1.5`; try double-checking that Eclipse is doing the right thing.
How to check the JDK version used to compile a .class file
[ "", "java", "" ]
"We should never trust the user's input" is what I read somewhere on the web. Currently I have a web form where users enter their username and email. For the username field, how do I check the input and prevent MySQL injection? Currently, in my PHP script I only do this:

```
$username = mysql_real_escape_string($_POST['username']); // prevent mysql injection
```

I saw some tutorials that call other functions, like `htmlentities`, before the `mysql_real_escape_string` call (I can't remember exactly which, and I can't find them now, sigh). Is it a must to include the so-called `htmlentities` function before `mysql_real_escape_string`? What method do you usually use for checking a user's input data? Oh, and some other functions:

```
stripslashes();
serialize();
urlencode();
```

Must I include those?
You're doing it right, as far as putting your data into the database is concerned. You're protected against SQL injection attacks. `htmlentities()` and `htmlspecialchars()` aren't relevant to SQL injection attacks; they're relevant to [XSS](http://en.wikipedia.org/wiki/Cross-site_scripting) attacks, which is a whole other topic you should look into, and is a relevant issue if you're displaying user input back out to the web.
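To make the escaping step concrete, here is a deliberately simplified sketch (in JavaScript, for illustration only; the real `mysql_real_escape_string` is connection-charset-aware and covers a couple more characters, and prepared statements remain the safer choice) of how the quoting metacharacters in user input are neutralized:

```javascript
// Simplified sketch of SQL string escaping: backslashes, quotes,
// NULs and newlines are escaped so user input cannot terminate the
// quoted literal it is embedded in.
function escapeSqlString(value) {
  return String(value).replace(/[\0\n\r\\'"]/g, function (ch) {
    switch (ch) {
      case "\0": return "\\0";
      case "\n": return "\\n";
      case "\r": return "\\r";
      case "\\": return "\\\\";
      case "'":  return "\\'";
      case '"':  return '\\"';
    }
  });
}
```

With the quote escaped, a classic payload like `' OR 1=1 --` stays inside the string literal instead of rewriting the query.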
You could also look at using [prepared statements](http://dev.mysql.com/doc/refman/6.0/en/sql-syntax-prepared-statements.html) (I think equivalent to parameterized queries for SQL Server), which further reduces the attack surface.
What are the methods to check user input and prevent mysql injection?
[ "", "php", "mysql", "forms", "input", "" ]
We're doing development for both .NET (using VS 2008) and Java (using Eclipse). Currently we're using CVS, but there isn't really a good plugin for Visual Studio 2008, so I'm looking at changing to something that has better support for VS 2008 and Eclipse. My initial thought was SVN, as it is really close to CVS, but I'm a bit tempted to use something like Mercurial. I'm looking for something that is simple to use and has good plugin support for both platforms.
I can tell you that there are really nice Subversion plugins for Eclipse and Visual Studio 2008 ([AnkhSVN](http://ankhsvn.open.collab.net/) for Visual Studio). You have to make sure to download the daily builds of AnkhSVN if you plan to use it with the most recent Subversion version. Additionally, there are tools (<http://cvs2svn.tigris.org/>) to migrate your data from CVS to SVN. For Mercurial or Git, I don't have any experience with those. I think SVN will give you the smoothest transition, but it won't give you the "big revolution" (if that is what you are after).
We're a .NET, Java, and Rails shop. We used Subversion for years and it's a fantastic system; it did everything we thought we needed from an SCM. About 9 months ago we started playing around with Github.com whilst developing a Rails application (unavoidable in the Rails community). Since then we've shifted over to Github.com completely, using private repos for our closed-source commercial software development. With Git we haven't broken a build or clobbered code in months, something that used to happen from time to time and would cost us a day's work trying to rectify the problem. Subversion doesn't provide the flexibility in your working methods that Git does. If you're in trouble (a broken build or a hotfix), Subversion won't help you; it will even work against you. Its branch/merge mechanism is very difficult to use because it doesn't keep track of the origin of a branch. Also, when you merge back, your change history is modified such that all changes made by the team in a particular branch are attributed to the user performing the merge. Git is also lightning fast, as the whole repo you work on is local, something that is very noticeable when you're working from remote locations. That said, Subversion will take you a week or two to get proficient in; Git takes at least a month, especially if you're coming from Subversion or CVS. If you pretend it's just a more modern SVN or CVS, you'll get frustrated by the lack of improvement in your coding workflow and you'll become annoyed by the multitude of commands. We have a 3-branch setup: hotfix<->master<->development. In normal conditions the dev team works in the development branch. For each user story the developer creates a branch off development: development<->user story. When the story is finished, the user story branch is merged into development and may then be deleted.
This goes on and on, and master stays stable and unaffected until the build manager decides it's safe to merge all the changes in development back into master. If in the meantime a customer phones and requires a hotfix, that too is done in isolation from master and can be merged into the rest of the codebase (master and development) at a suitable point in the future. Now, regarding GUIs and SCMs: we avoid them like the plague. GUIs are bad for working with SCMs. I know, controversial, but hear me out. The command line will slow you down more than a GUI does, and when you're working with an SCM, where there's a high chance of doing something bad or destructive to your central repo, slow is a good thing. Slow makes you think about your actions. All the typical GUIs that I've seen (Eclipse SVN, TortoiseGit/SVN) preselect your recent changes as being part of the commit you're about to make, whether those changes are ready to be committed or not. BAD!!!! You need to think about your commits and how lumpy or granular they need to be; command lines do a better job than GUIs in this regard. All our .NET coders, who are naturally drawn to performing tasks via GUIs, use command-line Git, and used command-line SVN before that, for just the reasons outlined above. It gives them a greater sense of control.
Cross-platform source control?
[ "", "java", ".net", "visual-studio-2008", "eclipse", "version-control", "" ]
I am trying to create a class derived from System.Web.UI.Page, and in the Render override I have this code:

```
writer.WriteLine("<![CDATA[");
base.Render(writer);
writer.WriteLine("\r\n]]>");
```

My problem appears when I look at the generated code:

```
<![CDATA[
><form name="form1" method="post" action="Default.aspx" id="form1">
...
</form>
]]>
```

The first > after CDATA is wrong; I don't want it generated. Any suggestions?
It seems like the renderer tries to validate the HTML or something similar. Perhaps try rendering into a separate stream first, and then insert and append the CDATA markers around the result?
Can you try rendering an empty DIV tag before calling base.Render? I suspect that there might be a control or page adapter involved in this, and seeing what happens to the '>' as a result would help.
Incorrect page Render
[ "", "c#", "asp.net", "" ]
Unless you're programming parts of an OS or an embedded system, are there any reasons to do so? I can imagine that for some particular classes that are created and destroyed frequently, overloading memory management functions or introducing a pool of objects might lower the overhead, but doing these things globally? **Addition** I've just found a bug in an overloaded delete function: memory wasn't always freed. And that was in a not-so-memory-critical application. Also, disabling these overloads decreases performance by only ~0.5%.
We overload the global new and delete operators where I work for many reasons: * **pooling** all small allocations -- decreases overhead, decreases fragmentation, can increase performance for small-alloc-heavy apps * **framing** allocations with a known lifetime -- ignore all the frees until the very end of this period, then free all of them together (admittedly we do this more with local operator overloads than global) * **alignment** adjustment -- to cacheline boundaries, etc * **alloc fill** -- helping to expose usage of uninitialized variables * **free fill** -- helping to expose usage of previously deleted memory * **delayed free** -- increasing the effectiveness of free fill, occasionally increasing performance * **sentinels** or **fenceposts** -- helping to expose buffer overruns, underruns, and the occasional wild pointer * **redirecting** allocations -- to account for NUMA, special memory areas, or even to keep separate systems separate in memory (for e.g. embedded scripting languages or DSLs) * **garbage collection** or cleanup -- again useful for those embedded scripting languages * **heap verification** -- you can walk through the heap data structure every N allocs/frees to make sure everything looks ok * **accounting**, including **leak tracking** and **usage snapshots/statistics** (stacks, allocation ages, etc) The idea of new/delete accounting is really flexible and powerful: you can, for example, record the entire callstack for the active thread whenever an alloc occurs, and aggregate statistics about that. You could ship the stack info over the network if you don't have space to keep it locally for whatever reason. The types of info you can gather here are only limited by your imagination (and performance, of course). We use global overloads because it's convenient to hang lots of common debugging functionality there, as well as make sweeping improvements across the entire app, based on the statistics we gather from those same overloads. 
We still do use custom allocators for individual types too; in many cases the speedup or capabilities you can get by providing custom allocators for e.g. a single point-of-use of an STL data structure far exceeds the general speedup you can get from the global overloads. Take a look at some of the allocators and debugging systems that are out there for C/C++ and you'll rapidly come up with these and other ideas: * [valgrind](http://valgrind.org/) * [electricfence](http://directory.fsf.org/project/ElectricFence/) * [dmalloc](http://dmalloc.com/) * [dlmalloc](http://g.oswego.edu/dl/html/malloc.html) * [Application Verifier](http://msdn.microsoft.com/en-us/library/ms220948.aspx) * [Insure++](http://www.parasoft.com/jsp/products/home.jsp?product=Insure) * [BoundsChecker](http://en.wikipedia.org/wiki/BoundsChecker) * ...and many others... (the gamedev industry is a great place to look) (One old but seminal book is [Writing Solid Code](http://www.microsoft.com/mspress/books/1024.aspx), which discusses many of the reasons you might want to provide custom allocators in C, most of which are still very relevant.) Obviously if you can use any of these fine tools you will want to do so rather than rolling your own. There are situations in which it is faster, easier, less of a business/legal hassle, nothing's available for your platform yet, or just more instructive: dig in and write a global overload.
The most common reason to overload new and delete are simply to check for [memory leaks](http://en.wikipedia.org/wiki/Memory_leak), and memory usage stats. Note that "memory leak" is usually generalized to memory errors. You can check for things such as double deletes and buffer overruns. The uses after that are usually memory-allocation schemes, such as [garbage collection](http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29), and [pooling](http://en.wikipedia.org/wiki/Memory_pool). All other cases are just specific things, mentioned in other answers (logging to disk, kernel use).
Any reason to overload global new and delete?
[ "", "c++", "memory-management", "overloading", "" ]
I'm working on a few **closed**-source projects. (Small ones.) I *don't* want to share the code of these projects, but I do want an easy-to-install version control system that works on my web host. I don't have administration rights on this host, so I can't just run a setup.exe on the remote system. Basically, the only way to install something is by creating a database and copying files to the server, and maybe a bit more by using a Plesk control panel. I can choose between two hosts: one using Linux with MySQL and the other Windows 2008/IIS7 with SQL Server 2005. The Windows host has my preference, since it provides the most disk space. So, what options do I have? (It doesn't have to be free, although I do have a limited budget.) --- I can have physical access to both hosts to install some additional software, but that means traveling about 5,000 km to the server room. I could also tell someone at the location of the server to install it for me, but that's quite expensive. I could just find another company to host this for me, but I like my current servers/hosts. So that's why I'm looking for a solution that I can simply upload to the server.
I run Subversion on [Dreamhost](http://www.dreamhost.com) for less than $10/month. Most of my projects are pretty small so far--I know they don't always appreciate large amounts data as non-web-accessible storage. It is first a web host, not a file host. However, its pretty easy to set up SVN repositories in their admin interface and then you have an easy svn+ssh connection. Of course, since it is a web host it's easy to install [Trac](http://trac.edgewall.org/) (they even have a 1-click install for it) or some other piece of software as well.
I would highly recommend both [Assembla](http://www.assembla.com) and [Unfuddle](http://unfuddle.com/). They offer different types of version control (SVN, GIT, etc.), as well as ticket management, messaging between teams, wikis, and many many other tools that I haven't gotten into yet. I have been using Assembla and have found it to be very intuitive and offer great features, such as automatic integration between the ticketing system and the version control system. I use the free version and just hope that no one browses/steals my code until that particular project gets to the point where I should consider paying their small fee to guard it :) Do comment on what you choose to do!
Hosted Source Control?
[ "", "c#", ".net", "version-control", "" ]
I have a `FileSystemXmlApplicationContext` and I would like the beans defined in the XML to take as a constructor argument a bean which is not declared in Spring. For example, I would like to do:

```
<bean class="some.MyClass">
    <constructor-arg ref="myBean" />
</bean>
```

So I could imagine doing this via something like:

```
Object myBean = ...
context = new FileSystemXmlApplicationContext(xmlFile);
context.addBean("myBean", myBean); //add myBean before processing
context.refresh();
```

Except that there is no such method :-( Does anyone know how I can achieve this?
How about programmatically creating an empty parent context first, registering your object as a singleton with that context's `BeanFactory`, using the fact that `getBeanFactory` returns an implementation of `SingletonBeanRegistry`?

```
parentContext = new ClassPathXmlApplicationContext();
parentContext.refresh(); // THIS IS REQUIRED
parentContext.getBeanFactory().registerSingleton("myBean", myBean);
```

Then specify this context as the parent of your "real" context. The beans in the child context will then be able to refer to the bean in the parent.

```
String[] fs = new String[] { "/path/to/myfile.xml" };
appContext = new FileSystemXmlApplicationContext(fs, parentContext);
```
As I had trouble solving this with an AnnotationConfigApplicationContext, I found the following alternative: ``` DefaultListableBeanFactory beanFactory = new DefaultListableBeanFactory(); beanFactory.registerSingleton("customBean", new CustomBean()); context = new AnnotationConfigApplicationContext(beanFactory); context.register(ContextConfiguration.class); context.refresh(); ```
How can I inject a bean into an ApplicationContext before it loads from a file?
[ "", "java", "spring", "" ]
I have 2 divs: one positioned absolutely with `right: 0`, and the other relatively positioned at the center of the screen. When the window's width is too small, they overlap. How can I invoke a JavaScript function when this happens? Thanks, Mike. (Edited to make the question clearer.)
To check for overlapping divs you might want to run a check once the page is loaded, and again whenever the window is resized:

```
window.onload = checkOverlap;
window.onresize = checkOverlap;
```

And then use some offset checking:

```
function checkOverlap() {
    var centerBox = document.getElementById('centerDiv');
    var rightBox = document.getElementById('rightDiv');
    console.log("centerbox offset left: " + centerBox.offsetLeft);
    console.log("centerbox width: " + centerBox.offsetWidth);
    console.log("rightbox offset left: " + rightBox.offsetLeft);
    if ((centerBox.offsetLeft + centerBox.offsetWidth) >= rightBox.offsetLeft) {
        centerBox.style.display = "inline-block";
    } else {
        centerBox.style.display = "block";
    }
}
```

You might want to do some more checks in the function, e.g. to see if the box is already displayed inline, and such. But that should give you a good place to start. **edit:** added some diagnostics and fixed an error
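One refinement worth considering: pull the geometric test out into a pure function so it can be unit-tested without a DOM. The version below checks overlap in both directions rather than only the left edge (a sketch; feed it the `offsetLeft`/`offsetWidth` values gathered in the resize handler):

```javascript
// Two horizontal spans overlap when each one starts before the
// other one ends. left/width are pixel values, as offsetLeft and
// offsetWidth would report them.
function spansOverlap(leftA, widthA, leftB, widthB) {
  return leftA < leftB + widthB && leftB < leftA + widthA;
}
```

Keeping the DOM reads (offsets) separate from the decision logic also makes it cheap to call on every resize event.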
## Part 1: Do it like this: ``` <script type="text/javascript"> document.getElementById('example').style.display = "inline"; </script> ... <div id="example"> ... </div> ```
How can I invoke a JavaScript function upon divs overlapping
[ "", "javascript", "html", "css", "" ]
I want to write a structure which has a list of integer ids. The list can be of varying length.

```
typedef struct ss_iidx_node
{
    int totalFreq;
    vector<int> docIDList;
} s_iidx_node;
```

Now, I wish to write this structure to a file and read it back. How can I do it? Writing is done like this:

```
fwrite(&obj, sizeof(s_iidx_node), 1, dat_fd2);
```

When I read it back, I get garbage values. It seems it stores only the internal start and end pointers of the STL vector, which are garbage when read back. Any ideas on how to do this? Thanks
Though I'd rather see an approach based on an explicit serialisation, you could try: ``` fwrite(&obj.totalFreq,sizeof(int),1,dat_fd2); fwrite(&obj.docIDList[0],sizeof(int),obj.totalFreq,dat_fd2); ``` Assuming totalFreq == docIDList.size(), it's a spurious variable, so a better implementation would be: ``` size_t size=obj.docIDList.size(); fwrite(&size,sizeof(size_t),1,dat_fd2); fwrite(&obj.docIDList[0],sizeof(int),size,dat_fd2); ``` My preferred implementation would be: ``` size_t size=obj.docIDList.size(); fwrite(&size,sizeof(size_t),1,dat_fd2); for (size_t i=0;i<size;i++) { int id=obj.docIDList[i]; fwrite(&id,sizeof(int),1,dat_fd2); } ```
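The count-then-elements layout above is a general serialization pattern, not a C++-specific one. As a hedged cross-language illustration, here is the same round trip in JavaScript with a `DataView`, assuming 32-bit little-endian integers (which is what `fwrite` of an `int` produces on a typical x86 build):

```javascript
// Write [count][id0][id1]... as 32-bit little-endian integers,
// the same byte layout the fwrite calls above produce on a
// 32-bit-int, little-endian platform.
function encodeIdList(ids) {
  var buf = new ArrayBuffer(4 + 4 * ids.length);
  var view = new DataView(buf);
  view.setInt32(0, ids.length, true); // length prefix first
  ids.forEach(function (id, i) {
    view.setInt32(4 + 4 * i, id, true);
  });
  return buf;
}

function decodeIdList(buf) {
  var view = new DataView(buf);
  var count = view.getInt32(0, true);
  var ids = [];
  for (var i = 0; i < count; i++) {
    ids.push(view.getInt32(4 + 4 * i, true));
  }
  return ids;
}
```

The key point carried over from the C++ answer: serialize the element count and the element values, never the container object itself.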
Your code is simply non-portable. It tries to treat object as a raw sequence of bytes, which is plainly undefined for non-POD objects in the C++ standard (and your struct is non-POD because it contains a member of a non-POD type `std::vector`). What happens in practice is that vector class typically consists of 3 fields: pointer to beginning of data, size, and capacity. What you see are bytes constituting those values written into the file. You should consider avoiding C-style file I/O entirely, and using C++ streams and [Boost Serialization library](http://www.boost.org/doc/libs/1_39_0/libs/serialization/doc/index.html) instead - it supports STL collections out of the box.
writing list of dynamic array to file in binary form
[ "c++", "stl", "file-io" ]
As you may know, Silverlight 3 doesn't support IMultiValueConverter and... I badly need it. I have a Web Service proxy which defines a class structure that I need to display in my UI. The object definition class has a few array properties such as string[], int[], etc. When I bind these properties to a TextBlock, the Text property of the TextBlock becomes System.String[] or System.Int[]. Instead, I would like to see a list of strings or numbers separated by commas. I thought about using an IMultiValueConverter but Silverlight 3 doesn't support it. How do I work around this? Thanks
The purpose of `IMultiValueConverter` is to implement converters that support *multiple bindings* (i.e. `MultiBinding` objects). In your case, this doesn't actually seem to be what you need. If you want to convert an array (`string[]` for example) into a text value, then simply define a normal `IValueConverter` that does that. Don't let the fact that an array contains *multiple values* confuse you. Here's an example converter: ``` [ValueConversion(typeof(string[]), typeof(string))] public class StringArrayConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { return string.Join(", ", (string[])value); } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } } ``` Hope that helps.
I don't see the use of a multi-value converter in your scenario. You can create an IValueConverter which takes the array and returns the values as a comma-separated string ``` <TextBlock Text="{Binding ArrayProperty,Converter={StaticResource stringArrayToString}}" ... ```
Silverlight 3 and IMultiValueConverter
[ "c#", "data-binding", "silverlight-3.0", "c#-3.0" ]
While attempting to execute SQL insert statements using [Oracle SQL Developer](http://www.oracle.com/technology/products/database/sql_developer/index.html) I keep generating an "Enter substitution value" prompt: ``` insert into agregadores_agregadores ( idagregador, nombre, url ) values ( 2, 'Netvibes', 'http://www.netvibes.com/subscribe.php?type=rss\&amp;url=' ); ``` I've tried [escaping the special character in the query](http://download.oracle.com/docs/cd/B10501_01/text.920/a96518/cqspcl.htm) using the '\' above but I still can't avoid the ampersand, '&', causing a string substitution.
the `&` is the default value for `DEFINE`, which allows you to use substitution variables. I like to turn it off using ``` SET DEFINE OFF ``` then you won't have to worry about escaping or CHR(38).
`|| chr(38) ||` This solution is perfect.
Oracle SQL escape character (for a '&')
[ "sql", "oracle", "escaping", "oracle-sqldeveloper" ]
I'm using Visual Studio 2008 (C++) and would like to produce a list of all classes that are defined in that project. Does anyone know tools that extract those easily? A simple 'Find in files' will not be sufficient, of course. Edit: The list of classes should be created automatically and the result should be a simple file of class names (one class each line).
[Doxygen](http://www.doxygen.nl/) will do that and loads more. It's a really good tool for producing all sorts of documentation
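If running Doxygen is more than you need, a rough script can produce exactly the requested output (one class name per line). This is a naive sketch in Python — a regex is easily fooled by class names in comments or strings, so treat it as a starting point rather than a parser:

```python
import re

# Deliberately naive: matches 'class Name' or 'struct Name' followed by
# a base-class list or body (so plain forward declarations are skipped).
CLASS_RE = re.compile(r'\b(?:class|struct)\s+([A-Za-z_]\w*)\s*[:{]')

def list_classes(cpp_source):
    """Return the sorted, de-duplicated class/struct names in a source string."""
    return sorted(set(CLASS_RE.findall(cpp_source)))

sample = '''
class Widget { };
struct Point { int x, y; };
class Gadget : public Widget { };
class Forward;  // skipped: no body here
'''
print('\n'.join(list_classes(sample)))
```

Point it at each file in the project (e.g. via `glob`) and redirect the output to get the flat one-name-per-line file described in the question.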
You can browse all classes etc. in your project in the Class View window (`View` > `Class View`). You can even create your own folders and organize the classes to create your own structure. E.g. you could create folders named Refactor, Unused, Suspect etc. You cannot print the class view, but the browser might still be helpful to you.
Producing a list of all classes in a C++ project
[ "c++", "visual-studio", "visual-studio-2008", "winapi", "class" ]
If I have a namespace like: ``` namespace MyApp.Providers { using System; using System.Collections.Generic; using System.Configuration; using System.Globalization; } ``` Does this mean that if I create other files and classes with the same namespace, the using statements are shared, and I don't need to include them again? If yes, isn't this a bit of a management headache?
No, it's only good for the namespace section inside that file, not for all files inside the namespace. If you put the using directive outside the namespace, then it applies to the entire file regardless of namespace. Name lookup will also search the usings inside the namespace first, before going to the outer scope.
You need to specify the using directive for any classes that you want to reference without qualification in each file where you want to use them. [Reference](http://msdn.microsoft.com/en-us/library/sf0df423.aspx): > The scope of a using directive is > limited to the file in which it > appears.
Namespaces and Using Directives
[ "c#", "namespaces", "using-directives" ]
I'm using Java and OpenXLS to write out an Excel spreadsheet. I want to set a formula for a cell but I haven't got a clue how to do it. Can anybody help me, please? :) (Can't tag this with "openxls" because I'm a new user...)
I don't know about OpenXLS, but it's easy to do with Andy Khan's [JExcel](http://jexcelapi.sourceforge.net/). I'd recommend trying it. I think it's far superior to POI; I'm betting that it's better than OpenXLS as well.
OpenXLS supports formulas very well. Look at this example. I put a value in columns A and B of a sheet named "testSheet". In column C of the same sheet I put the result of SUM(A+B). Don't forget to initialise column C, or else you will get a CellNotFoundException. ``` WorkBookHandle workbook = new WorkBookHandle(); workbook.createWorkSheet("testSheet"); WorkSheetHandle sheet = workbook.getWorkSheet("testSheet"); for (int i=1; i<=10; i++) { sheet.add(10*i, "A"+i); sheet.add(15*i, "B"+i); CellHandle cx = sheet.add(0,"C"+i); cx.setFormula("=SUM(A"+i+":B"+i+")"); } ``` I hope that this example will help other people.
Writing a formula to a cell with OpenXLS
[ "java", "excel", "openxls" ]
I'm working on my first object oriented bit of python and I have the following: ``` #!/usr/bin/python import random class triangle: # Angle A To Angle C Connects Side F # Angle C to Angle B Connects Side D # Angle B to Angle A Connects Side E def __init__(self, a, b, c, d, e, f): self.a = a self.b = b self.c = c self.d = d self.e = e self.f = f #def solver: #pass #initialize Triangle myTri = triangle(0,0,0,0,0,0) #Pick Three Random Angles or Sides to Generate Values For sample = random.sample([myTri.a, myTri.b, myTri.c, myTri.d, myTri.e, myTri.f], 3) #Sets the three randomly picked variables to a Random Number sample[0] = random.randint(1, 100) sample[1] = random.randint(1, 100) sample[2] = random.randint(1, 100) ``` How do I pass myTri.a, for example to random.randint. It is passing the value of '0' which it initialized. I want to be able to assign a random value to three of the .a-.f attributes of myTri. What am I missing?
When you say `[myTri.a, myTri.b, ...]` you are not getting a list of the variables themselves, or references to them. Instead you are getting just their values. Since you know they were initialized to `0`, it is as if you had written `[0, 0, 0, 0, 0, 0]`. There's no difference. Then later when you try to assign to `sample[0]`, you are actually just overwriting the 0 that is stored in that array with a random value. Python knows nothing at all about `myTri` at that point; the connection is lost. Here's what you can do to get the effect you're aiming for. First, pass a list of variable names we want to assign to later to `random.sample`: ``` sample = random.sample(["a", "b", "c", "d", "e", "f"], 3) ``` That'll give us back 3 random variable names. Now we want to assign to the variables with those same names. We can do that by using the special `setattr` function, which takes an object and a variable name and sets its value. For instance, `setattr(myTri, "b", 72)` does the same thing as `myTri.b = 72`. So rewritten we have: ``` setattr(myTri, sample[0], random.randint(1, 100)) setattr(myTri, sample[1], random.randint(1, 100)) setattr(myTri, sample[2], random.randint(1, 100)) ``` The major concept here is that you're doing a bit of reflection, also known as introspection. You've got dynamic variable names--you don't know exactly who you're messing with--so you've got to consult with some more exotic, out of the way language constructs. Normally I'd actually caution against such tomfoolery, but this is a rare instance where introspection is a reasonable solution.
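The setattr approach can be checked end to end with a seeded run; the Triangle class here is a pared-down stand-in for the one in the question:

```python
import random

class Triangle(object):
    # a, b, c are angles; d, e, f are sides -- all start unset at 0,
    # mirroring the triangle class in the question
    def __init__(self):
        self.a = self.b = self.c = self.d = self.e = self.f = 0

random.seed(1)  # fixed seed so the demo is repeatable
tri = Triangle()

# Pick three attribute *names*, not values, then assign through setattr.
chosen = random.sample(["a", "b", "c", "d", "e", "f"], 3)
for name in chosen:
    setattr(tri, name, random.randint(1, 100))

# Exactly three of the six attributes are now non-zero.
print(sum(1 for n in "abcdef" if getattr(tri, n) != 0))
```

Because `random.sample` returns distinct names and `randint(1, 100)` never yields 0, the count printed is always 3.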
To assign to `a`, `b`, and `c`: ``` myTri.a = random.randint(1, 100) myTri.b = random.randint(1, 100) myTri.c = random.randint(1, 100) ``` To assign to one random attribute from `a`-`f`: ``` attrs = ['a', 'b', 'c', 'd', 'e', 'f'] setattr(myTri, random.choice(attrs), random.randint(1, 100)) ``` To assign to three random attributes from `a`-`f`: ``` attrs = ['a', 'b', 'c', 'd', 'e', 'f'] for attr in random.sample(attrs, 3): setattr(myTri, attr, random.randint(1, 100)) ```
Object Attribute in Random List Not Accessible in Python
[ "python", "random", "object", "oop" ]
I have installed XAMPP on a Windows XP machine. PHP scripts are working fine through the browser. Initially, PHP.exe (CLI) was failing with the error - can't load php\_pgsql.dll. I disabled extension=php\_pgsql.dll in the php.ini file and PHP.exe stopped failing. However, even now I am not able to run a PHP script from the CLI, even though the same script runs fine through the browser. The script fails at mysqli\_connect() with the error - Can't connect to MySQL server on '127.0.0.1' Has somebody else faced the same problem? Kshitij
Did you start the MySQL service via the XAMPP Control panel, and did it stay marked as "Running" in both test cases? Did you test the webserver version by requesting something like <http://localhost/>... or <http://127.0.0.1/>... and did you run the php.exe -f something.php test on the same machine?
How many php.ini files have you got? It's likely the CLI version uses a different php.ini than the web version.
PHP.exe (CLI) can't connect to MySQL while Firefox can!
[ "php", "mysql", "xampp", "command-line-interface" ]
Consider the following C++ code: ``` class A { public: virtual void f()=0; }; int main() { void (A::*f)()=&A::f; } ``` If I'd have to guess, I'd say that &A::f in this context would mean "the address of A's implementation of f()", since there is no explicit seperation between pointers to regular member functions and virtual member functions. And since A doesn't implement f(), that would be a compile error. However, it isn't. And not only that. The following code: ``` void (A::*f)()=&A::f; A *a=new B; // B is a subclass of A, which implements f() (a->*f)(); ``` will actually call B::f. How does it happen?
Here is way too much information about member function pointers. There's some stuff about virtual functions under "The Well-Behaved Compilers", although IIRC when I read the article I was skimming that part, since the article is actually about implementing delegates in C++. <http://www.codeproject.com/KB/cpp/FastDelegate.aspx> The short answer is that it depends on the compiler, but one possibility is that the member function pointer is implemented as a struct containing a pointer to a "thunk" function which makes the virtual call.
It works because the Standard says that's how it should happen. I did some tests with GCC, and it turns out for virtual functions, GCC stores the virtual table offset of the function in question, in bytes. ``` struct A { virtual void f() { } virtual void g() { } }; int main() { union insp { void (A::*pf)(); ptrdiff_t pd[2]; }; insp p[] = { { &A::f }, { &A::g } }; std::cout << p[0].pd[0] << " " << p[1].pd[0] << std::endl; } ``` That program outputs `1 5` - the byte offsets of the virtual table entries of those two functions. It follows the *Itanium C++ ABI*, [which specifies that](http://www.codesourcery.com/public/cxx-abi/abi.html#member-pointers).
Pointers to virtual member functions. How does it work?
[ "c++", "virtual", "pointer-to-member" ]
Could anyone create a **short sample** that breaks, unless the `[ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]` is applied? I just ran through this [sample on MSDN](http://msdn.microsoft.com/en-us/library/3t1y35sz.aspx) and am unable to get it to break, even if I comment out the ReliabilityContract attribute. Finally seems to always get called.
``` using System; using System.Runtime.CompilerServices; using System.Runtime.ConstrainedExecution; class Program { static bool cerWorked; static void Main( string[] args ) { try { cerWorked = true; MyFn(); } catch( OutOfMemoryException ) { Console.WriteLine( cerWorked ); } Console.ReadLine(); } unsafe struct Big { public fixed byte Bytes[int.MaxValue]; } //results depends on the existance of this attribute [ReliabilityContract( Consistency.WillNotCorruptState, Cer.Success )] unsafe static void StackOverflow() { Big big; big.Bytes[ int.MaxValue - 1 ] = 1; } static void MyFn() { RuntimeHelpers.PrepareConstrainedRegions(); try { cerWorked = false; } finally { StackOverflow(); } } } ``` When `MyFn` is jitted, it tries to create a `ConstrainedRegion` from the `finally` block. * In the case without the `ReliabilityContract,` no proper `ConstrainedRegion` could be formed, so a regular code is emitted. The stack overflow exception is thrown on the call to `Stackoverflow` (after the try block is executed). * In the case with the `ReliabilityContract`, a `ConstrainedRegion` could be formed and the stack requirements of methods in the `finally` block could be lifted into `MyFn`. The stack overflow exception is now thrown on the call to `MyFn` (before the try block is ever executed).
The primary driver for this functionality was to support SQL Server's stringent requirements for integrating the CLR into SQL Server 2005. Probably so that others could use it, and likely for legal reasons, this deep integration was published as a hosting API, but the technical requirements were SQL Server's. Remember that in SQL Server, MTBF is measured in months not hours, and the process restarting because an unhandled exception happened is completely unacceptable. This [MSDN Magazine article](https://web.archive.org/web/20150423173148/https://msdn.microsoft.com/en-us/magazine/cc163716.aspx) is probably the best one that I've seen describing the technical requirements the constrained execution environment was built for. The ReliabilityContract is used to decorate your methods to indicate how they operate in terms of potentially asynchronous exceptions (ThreadAbortException, OutOfMemoryException, StackOverflowException). A constrained execution region is defined as a catch or finally (or fault) section of a try block which is immediately preceded by a call to System.Runtime.CompilerServices.RuntimeHelpers.PrepareConstrainedRegions(). ``` System.Runtime.CompilerServices.RuntimeHelpers.PrepareConstrainedRegions(); try { // this is not constrained } catch (Exception e) { // this IS a CER } finally { // this IS ALSO a CER } ``` When a ReliabilityContract method is used from within a CER, there are 2 things that happen to it. The method will be pre-prepared by the JIT so that it won't invoke the JIT compiler the first time it's executed, which could otherwise try to allocate memory itself and cause its own exceptions. Also, while inside of a CER the runtime promises not to throw a ThreadAbort exception and will wait to throw the exception until after the CER has completed. So back to your question; I'm still trying to come up with a simple code sample that will directly answer your question. As you may have already guessed though, the simplest sample is going to require quite a lot of code given the asynchronous nature of the problem, and will likely be SQLCLR code because that is the environment which will use CERs for the most benefit.
Code demonstrating the importance of a Constrained Execution Region
[ "c#", ".net", "concurrency", "cer" ]
## Original Question (see Update below) I have a WinForms program that needs a decent scrollable icon control with large icons (128x128 or larger thumbnails, really) that can be clicked to highlight or double-clicked to perform some action. Preferably there would be minimal wasted space (short filename captions might be needed below each icon; if the filename is too long I can add an ellipsis). [![finished version of listview with proper colors, spacing, etc.](https://i.stack.imgur.com/YHOug.png)](https://i.stack.imgur.com/YHOug.png) (source: [updike.org](http://www.updike.org/images/listview-great.png)) I tried using a ListView with LargeIcon (default .View) and the results are disappointing: [![screenshot showing tiny icons in LargeIcon view](https://i.stack.imgur.com/j7wIp.png)](https://i.stack.imgur.com/j7wIp.png) (source: [updike.org](http://www.updike.org/images/listview-poor.png)) Perhaps I am populating the control incorrectly? Code: ``` ImageList ilist = new ImageList(); this.listView.LargeImageList = ilist; int i = 0; foreach (GradorCacheFile gcf in gc.files) { Bitmap b = gcf.image128; ilist.Images.Add(b); ListViewItem lvi = new ListViewItem("text"); lvi.ImageIndex = i; this.listView.Items.Add(lvi); i++; } ``` I need large icons with little empty space, not large empty space with embarrassingly small icons. 1. Is there a .NET control that does what I need? * Is there a favorite third party control that does this? * If not, which control would be best to inherit and tweak to make it work? * Should I break down and make a custom Control (which I have plenty of experience with... just don't want to go to that extreme since that is somewhat involved). I found [this tutorial about OwnerDraw](http://msdn.microsoft.com/en-us/library/system.windows.forms.listview.ownerdraw.aspx) but work from that basically amounts to number 3 or 4 above since that demo just shows how to spice up the rows in the details view. ## Update Adding the line ``` ilist.ImageSize = new Size(128, 128); ``` before the for loop fixed the size problem but now the images are palette-ized to 8-bit (looks like system colors?) even though the debugger shows that the images are inserted into the ImageList as 24bpp System.Drawing.Bitmap's: [![large icons, finally](https://i.stack.imgur.com/X7p25.png)](https://i.stack.imgur.com/X7p25.png) (source: [updike.org](http://www.updike.org/images/listview-poor2.png)) 1. How do I (can I?) make the images show in full 24 bit color? * The spacing around the icons is still rather wasteful... how do I fix that? Can I? ## Update 2 Along with adding the line ``` ilist.ColorDepth = ColorDepth.Depth24Bit; ``` next after setting ilist.ImageSize, I followed arbiter's advice and changed the spacing: ``` [DllImport("user32.dll")] public static extern IntPtr SendMessage(IntPtr hWnd, uint Msg, IntPtr wParam, IntPtr lParam); public int MakeLong(short lowPart, short highPart) { return (int)(((ushort)lowPart) | (uint)(highPart << 16)); } public void ListView_SetSpacing(ListView listview, short cx, short cy) { const int LVM_FIRST = 0x1000; const int LVM_SETICONSPACING = LVM_FIRST + 53; // http://msdn.microsoft.com/en-us/library/bb761176(VS.85).aspx // minimum spacing = 4 SendMessage(listview.Handle, LVM_SETICONSPACING, IntPtr.Zero, (IntPtr)MakeLong(cx, cy)); // http://msdn.microsoft.com/en-us/library/bb775085(VS.85).aspx // DOESN'T WORK! // can't find ListView_SetIconSpacing in dll comctl32.dll //ListView_SetIconSpacing(listView.Handle, 5, 5); } /////////////////////////////////////////////////////////// ListView_SetSpacing(this.listView, 128 + 12, 128 + 4 + 20); ``` The ListView control may not be perfect or have the defaults I'd expect (like a Spacing property) but I'm glad I could tame it, in the end: [![alt text](https://i.stack.imgur.com/YHOug.png)](https://i.stack.imgur.com/YHOug.png) (source: [updike.org](http://www.updike.org/images/listview-great.png)) By the way, to maintain the proper aspect ratio for the thumbnails, I had to make my own 128x128 bitmaps, clear the background to match the control, and center those images: ``` public void CenterDrawImage(Bitmap target, Color background, Bitmap centerme) { Graphics g = Graphics.FromImage(target); g.Clear(background); int x = (target.Width - centerme.Width) / 2; int y = (target.Height - centerme.Height) / 2; g.DrawImage(centerme, x, y); g.Dispose(); } ```
For the update: 1. Set the image list color depth in addition to the image size (ilist.ColorDepth = ColorDepth.Depth24Bit) 2. The WinForms ListView does not offer a way to change icon spacing, but it can easily be done using Win32. You need to send [LVM\_SETICONSPACING](http://msdn.microsoft.com/en-us/library/bb761176%28VS.85%29.aspx) to your ListView (there are a lot of tutorials on how to use the SendMessage Win32 function in .NET, so I think this direction should be enough for you).
You could use the FlowLayoutPanel and drop pictureboxes in it. Set the picturebox to a size of 128x128 and the SizeMode to 'Zoom' (this takes care of resizing your image without loss of aspect ratio). You can even programmatically add the pictureboxes. ``` PictureBox pb = new PictureBox(); pb.Size = new Size(128, 128); pb.SizeMode = PictureBoxSizeMode.Zoom; pb.Image = gcf.image128; FlowLayoutPanel1.Controls.Add(pb); ``` Since you need to have a label under the picturebox, you could create a UserControl, like Pastor said, that has just a picturebox and a label under it. Then that would be the control instance you would add to your flowlayoutpanel.
Displaying thumbnail icons 128x128 pixels or larger in a grid in ListView
[ "c#", "winforms", "listview", ".net-2.0", "thumbnails" ]
I'm toying with an application that is, roughly speaking, a sort of modeler application for the building industry. In the future I'd like it to be possible for the user to use both SI units and imperial. From what I understand, it's customary in the US building industry to use fractions of inches when specifying measurements, e.g. 3 1/2" - whereas in SI we'd write 3.5, not 3 1/2. I'm looking for a way to work with these different systems in my software - storing them, doing calculations on them etc, not only parsing what a user enters. It should be able to show the user a measurement in the way he entered it, yet be able to calculate with other measurements - for example add 3 cm to 1 1/2 inch. So if a user draws a length of wall of 5 feet and another one of 3 meters, the total measurement should be shown in the default unit system the user selected. I'm still undecided on how much flexibility I should add for entering data for the user; e.g. if he enters 1 foot 14 inches, should it show 2 feet 2 inches the next time the measurement is shown? However, before I decide things like that, I'm looking for a way to store measurements in an exact form, which is what my question is about. I'm using C++ and I've looked at Boost.Units, but that doesn't seem to offer a way to deal with fractions. The simple option is to convert everything to millimeters, but rounding errors would make it impossible to go back to the exact measurement a user entered (if he entered it in imperial measurements). So I'll need something more complex. For now I'm using a class that is tentatively named 'Distance' and looks conceptually like this: ``` class Distance { public: Distance(double value); // operators +, -, *, / Distance operator+(const Distance& that); ...etc... std::string StringForm(); // Returns a textual form of the value Distance operator=(double value); private: <question: what should go here?> } ``` This clearly shows where my problems are. The most obvious thing to do would be to have an enum that says whether this Distance is storing SI or imperial units, and have fields (doubles, presumably) that store the meters, centimeters and millimeters if it's in SI units and feet and inches if it's imperial. However, this will make the implementation of the class littered with if(SI) else ..., and is very wasteful in memory. Plus I'd have to store a numerator and denominator for the feet and inches to be able to exactly store 1/3", for example. So I'm looking for general design advice on how I should solve these problems, given my design requirements. Of course if there's a C++ library out there that already does these things, or a library in another language I could look at to copy concepts from, that would be great.
Take a look at Martin Fowler's [Money](http://www.martinfowler.com/eaaCatalog/money.html) pattern from [Patterns of Enterprise Application Architecture](https://rads.stackoverflow.com/amzn/click/com/0321127420) - it is directly applicable to this situation. Recommended reading. Fowler has also posted a short writeup on his site of the [Quantity](https://web.archive.org/web/20160816121532/http://martinfowler.com/eaaDev/quantity.html) pattern, a more generic version of Money.
I would definitely consider adding a Units property to the Distance class. You could then overload the +, -, \*, / (and related) operators so that arithmetic operations on distances are only possible when the units are of the same type. Personally, I would normalize all measurements into the lowest unit of measurement you will support in each system (e.g. millimeters for SI, inches for imperial) but also store the user's entered representation. Perform all calculations in normalized form, but convert back to a more readable form when presenting to users. You should also consider making instances of Distance immutable - and creating a new Distance whenever an arithmetic operation is performed. Finally, you can create helper methods to convert between different units - and potentially even call these internally when performing arithmetic on distances with different units. Just convert everything to a common unit and then perform the calculation. Personally, I would not go the route of creating multiple types for measurements in each system - I think you are better off consolidating the logic and allowing your system to treat measurements polymorphically.
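The "normalize exactly, but remember what the user typed" idea can be sketched with rational arithmetic so no precision is ever lost (1 inch is exactly 25.4 mm). Python's fractions module is used here for brevity; the same design ports to C++ with any rational-number type. All class and method names below are made up for the sketch:

```python
from fractions import Fraction

MM_PER_INCH = Fraction(254, 10)   # exactly 25.4 mm per inch

class Distance(object):
    """Stores an exact length in millimetres plus the original entry form."""
    def __init__(self, mm, entered_as):
        self.mm = Fraction(mm)          # exact rational millimetres
        self.entered_as = entered_as    # what the user actually typed

    @classmethod
    def from_inches(cls, whole, num=0, den=1):
        frac = Fraction(whole) + Fraction(num, den)
        return cls(frac * MM_PER_INCH, '%s %s/%s"' % (whole, num, den))

    @classmethod
    def from_cm(cls, cm):
        return cls(Fraction(cm) * 10, "%s cm" % cm)

    def __add__(self, other):
        # a sum has no single "entered" form, so only the exact value survives
        return Distance(self.mm + other.mm, None)

total = Distance.from_cm(3) + Distance.from_inches(1, 1, 2)
print(total.mm)   # 3 cm + 1 1/2" = 30 mm + 38.1 mm, held exactly as 681/10
```

Because every value is a ratio of integers, 1/3" stays exactly 127/15 mm, and converting back to the entry unit reproduces the user's input without rounding drift.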
Working with imperial units
[ "c++", "units-of-measurement" ]
I'm facing a problem here, with HttpListener. When a request of the form ``` http://user:password@example.com/ ``` is made, how can I get the user and password? HttpWebRequest has a Credentials property, but HttpListenerRequest doesn't have one, and I didn't find the username in any of its properties. Thanks for the help.
What you're attempting to do is pass credentials via HTTP basic auth. I'm not sure if the username:password syntax is supported in HttpListener, but if it is, you'll need to specify that you accept basic auth first. ``` HttpListener listener = new HttpListener(); listener.Prefixes.Add(uriPrefix); listener.AuthenticationSchemes = AuthenticationSchemes.Basic; listener.Start(); ``` Once you receive a request, you can then extract the username and password with: ``` HttpListenerBasicIdentity identity = (HttpListenerBasicIdentity)context.User.Identity; Console.WriteLine(identity.Name); Console.WriteLine(identity.Password); ``` [Here's a full explanation](http://www.webdavsystem.com/server/prev/v2/documentation/authentication/httplistener_auth) of all supported authentication methods that can be used with HttpListener.
Get the `Authorization` header. Its format is as follows ``` Authorization: <Type> <Base64-encoded-Username/Password-Pair> ``` Example: ``` Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== ``` The username and password are colon-separated (in this example, `Aladdin:open sesame`), then Base64-encoded.
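Decoding such a header by hand takes only a few lines; this sketch (in Python, to keep it compact) assumes a well-formed header and does no real error handling:

```python
import base64

def parse_basic_auth(header_value):
    """Split a 'Basic <base64>' Authorization header value into (user, password)."""
    scheme, _, encoded = header_value.partition(' ')
    if scheme != 'Basic':
        raise ValueError('not HTTP Basic auth')
    decoded = base64.b64decode(encoded).decode('utf-8')
    # The username may not contain ':', so split at the first colon;
    # everything after it (further colons included) is the password.
    user, _, password = decoded.partition(':')
    return user, password

print(parse_basic_auth('Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=='))
```

Running it on the example header from the answer yields `('Aladdin', 'open sesame')`.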
HttpListener: how to get http user and password?
[ "c#", "passwords", "httplistener" ]
The essence of the problem is, given a class hierarchy like this: ``` class A { protected void MethodToExpose() {} protected void MethodToHide(object param) {} } class B : A { new private void MethodToHide(object param) {} protected void NewMethodInB() {} } class C : B { public void DoSomething() { base.MethodToHide("the parameter"); // This still calls A.MethodToHide() base.MethodToExpose(); // This calls A.MethodToExpose(), but that's ok base.NewMethodInB(); } } ``` How can I prevent any classes that inherit from class "B" from seeing the method `A.MethodToHide()`? In C++, this was easy enough by using a declaration such as `class B : private A`, but this syntax is not valid in C#. For those interested (or wondering what I'm *really* trying to do), what we're trying to do is create a wrapper for for Rhino.Commons.NHRepository that hides the methods we don't want to expose to our group of developers, so we can have a cookie-cutter way of developing our app that new developers can easily follow. So yes, I believe the "Is-A" test is valid for the whole chain (WidgetRepository Is-A BaseRepository Is-A NHRepository). **Edit**: I should have mentioned, for the sake of argument, that class A is an API class outside of our control. Otherwise the problem gets considerably easier.
You can't do it and preserve the hierarchy. If possible, you should create interfaces that define your ideal, then subclass the bases classes and implement the interfaces. reference the interfaces only (not the base class types) in your code. The [Adapter pattern](http://www.dofactory.com/Patterns/PatternAdapter.aspx) was created specifically to solve the problem of how to use a framework when its API doesn't line up exactly with your needs.
**Obsolete It** In class B, hide MethodToHide with a new declaration and add the Obsolete attribute ``` [Obsolete("Reason", true)] // true will cause a compile-time error ``` **Set EditorBrowsable** (As mentioned previously) In class B, hide MethodToHide and add the EditorBrowsable attribute ``` [System.ComponentModel.EditorBrowsable(EditorBrowsableState.Never)] ``` **Throw exception** (As mentioned previously) In class B, hide MethodToHide and throw an exception. **Create Wrapper** I think Michael Meadows is right. Use the [Adapter pattern](http://www.dofactory.com/Patterns/PatternAdapter.aspx). This pattern also allows easier mocking of code when unit testing. ``` class B: IInterface { protected void MethodToExpose() { A a = new A(); a.MethodToExpose(); } protected void NewMethodInB() { } } ```
How to hide (remove) a base class's methods in C#?
[ "c#", "inheritance", "language-features", "inheritance-prevention" ]
Suppose I have a free function called `InitFoo`. I'd like to protect this function from being called multiple times by accident. Without much thought I wrote the following: ``` void InitFoo() { { static bool flag = false; if(flag) return; flag = true; } //Actual code goes here. } ``` This looks like a big wart, though. `InitFoo` does *not* need to preserve any other state information. Can someone suggest a way to accomplish the same goal without the ugliness? Macros don't count, of course.
You can do it with some different ugliness: ``` struct InitFoo { InitFoo() { // one-time code goes here } }; void Foo() { static InitFoo i; } ``` You're still using `static`, but now you don't need to do your own flag checking - `static` already puts in a flag and a check for it, so it only constructs `i` once.
Well, a constructor is only automatically called once. If you create a single instance of this class: ``` class Foo { public: Foo(void) { // do stuff } } ``` Then `//do stuff` will only execute once. The only way to execute it twice is to create another instance of the class. You can prevent this by using a [Singleton](http://en.wikipedia.org/wiki/Singleton_pattern). In effect, `//do stuff` can only possibly be called once.
Function Call Guard
[ "c++", "function", "call", "guard" ]
You may know this recommendation from Microsoft about the use of exceptions in .NET: > Performance Considerations > > ... > > Throw exceptions only for > extraordinary conditions, ... > > In addition, do not throw an exception > when a return code is sufficient... (See the whole text at <http://msdn.microsoft.com/en-us/library/system.exception.aspx>.) As a point of comparison, would you recommend the same for Python code?
The pythonic thing to do is to raise and handle exceptions. The excellent book "Python in a nutshell" discusses this in 'Error-Checking Strategies' in Chapter 6. The book discusses EAFP ("it's easier to ask forgiveness than permission") vs. LBYL ("look before you leap"). So to answer your question: No, I would not recommend the same for python code. I suggest you read chapter 6 of [Python in a nutshell](http://oreilly.com/catalog/9780596100469/).
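A minimal illustration of the two strategies the book contrasts (the dictionary and key are invented for the example):

```python
# LBYL ("look before you leap"): check before acting.
d = {"a": 1}
key = "b"

if key in d:
    lbyl_result = d[key]
else:
    lbyl_result = 0

# EAFP ("easier to ask forgiveness than permission"): just act,
# and handle the failure if it happens. This is the pythonic style.
try:
    eafp_result = d[key]
except KeyError:
    eafp_result = 0
```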
The best way to understand exceptions is "[if your method can't do what its name says it does, throw](http://www.hanselman.com/blog/IfYourMethodCantDoWhatItsNamePromisesItCanThrow.aspx)." My personal opinion is that this advice should be applied equally to both .NET and Python. The key difference is where you have methods that frequently can't do what their name says they should do, for instance, parsing strings as integers or retrieving a record from a database. The C# style is to avoid an exception being thrown in the first place: ``` int i; if (Int32.TryParse(myString, out i)) { doWhatever(i); } else { doWhatever(0); } ``` whereas Python is much more at ease with this kind of thing: ``` try: i = int(myString) except ValueError: i = 0 doWhatever(i); ```
Is it better to use an exception or a return code in Python?
[ "python", "performance", "exception" ]
What is the general thinking on the use of extension methods that serve no purpose other than enhancing readability? Without using extension methods we might have the method ``` IEnumerable<DependencyObject> GetDescendents(DependencyObject root) {} ``` that could be called with ``` var descendents = GetDescendents(someControl); ``` or ``` foreach (var descendent in GetDescendents(someControl)) {} ``` Although there's nothing wrong with this I find the instance.method() notation to be more readable so I might consider making this an extension method with this signature ``` public IEnumerable<DependencyObject> GetDescendents(this DependencyObject root) {} ``` allowing it to be called with ``` var descendents = someControl.GetDescendents(); ``` or ``` foreach (var descendent in someControl.GetDescendents()) {} ``` So my question is whether you think this is reasonable or an abuse of extension methods. If it was simply a matter of declaring the function differently I wouldn't hesitate; but the fact that using an extension method requires it be coded in a different, static class makes me wonder if it's worth the effort or not. The example I'm using above is a fairly generic one and might have merit as an extension method that will be used in several places but often this is not the case and I would be coding the static class containing the extension in the same file as the single class that uses it.
I think the big advantage of extension methods is discoverability. If someone is unaware that one of their team members created a GetDescendents method in a utility class somewhere, they'll never use it. However, if that method starts to show up in Intellisense or in the Object Browser, there's a decent chance they will stumble across it. If you start to make more extensive use of extension methods, people will start to use the tools I mentioned to look for extensions that add value.
Most if not all extension methods fall into this category to some degree, since they can't operate on the internals of a class anymore than your static function. At any rate, any extension method can be rewritten in a static class with an extra parameter representing the object (arguably, that's exactly what an extension method is anyway). To me, it's entirely a question of style: in the example you provided, I'd probably jump for the extension method. I think the important question here is, *Is this function something I'd write as part of the class if I were to reimplement the class, and does it make sense as such?* If yes, then go for it, if no, then consider a different solution.
Use of extension methods to enhance readability
[ "c#", "coding-style", "extension-methods" ]
I have a 19 x 7 table with a textBox in each cell. Some textBoxes need to be read only, depending on the data that is loaded into them. On saving I have to examine each textBox and see if the value needs to be saved. Having to list 133 textBoxes by hand takes a long while. I would be ecstatic to get it down to the row level, so I would only need to deal with 7 textBoxes and let .Net duplicate my effort 19 times. Is there a better way to leverage .Net? The repeater looks promising, but I don't know how to reference a control that has been repeated, let alone a group of them.
Have you considered a [DataGrid](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.datagrid.aspx) with DataBinding?
The [DataGridView](http://msdn.microsoft.com/en-us/library/e0ywh3cz.aspx) has been made for this
Leverage .Net to deal with 133 textBoxes?
[ "c#", "asp.net" ]
I've been trying out IntelliJ IDEA for JavaScript editing, and I like it so far, but I'm having a small problem with a new project. I can't seem to be able to get IDEA to display the directories in the project directory in the Project view. Even if I manually add a directory, it refuses to display it. I think this probably has something to do with the fact that it tries to apply Java conventions, but when I imported an old Eclipse project, it showed all directories just fine. Do I have to use Eclipse to create projects and import in IDEA to get the directories visible, or is there some other trick? I am using IDEA version 8.1.3, and the code is just a plain bunch of HTML and JavaScript files, not in any kind of a Java environment.
It appears I need to manually create a Java module (File->New Module) inside the project to actually see the "proper" directory view. I do wonder why it didn't show up when I created it with the project.
I've been struggling with this same problem and found another reason why directories may not show up correctly. Make sure the "Content Root" is correct. * Click on the project * Select "File"->"Project Structure" * Select "modules" from the left column, and select a module. On the sources tab you will see the current "Content Root" along with a button to add a new content root. Make sure that content root is correct. When in the project structure view you will only see files below the "Content Root". I'm fairly new to IntelliJ but I think of the content root as the basedir in ant terms. These instructions are for IntelliJ 9.x Hope this helps someone.
How do I get IntelliJ IDEA to display directories?
[ "javascript", "intellij-idea" ]
Consider: ``` cd C:\BORLAND\BCC55\BIN bcc32 hello.cpp ``` Output: ``` Borland C++ 5.5.1 for Win32 Copyright (c) 1993, 2000 Borland hello.cpp: Error E2209 hello.cpp 2: Unable to open include file 'iostream' Error E2090 hello.cpp 6: Qualifier 'std' is not a class or namespace name in fun ction main() Error E2379 hello.cpp 6: Statement missing ; in function main() *** 3 errors in Compile *** ``` I read [the instructions at Embarcadero](http://edn.embarcadero.com/article/21205). Now, it says... ``` #include <iostream.h> int main(void) { cout << "Hello." << endl; return 0; } ``` Compile: ``` cd C:\Borland\BCC55\Bin\MySource bcc32 hello.cpp ``` Output: ``` Borland C++ 5.5.1 for Win32 Copyright (c) 1993, 2000 Borland hello.cpp: Error E2209 hello.cpp 1: Unable to open include file 'iostream.h' Error E2451 hello.cpp 4: Undefined symbol 'cout' in function main() Error E2451 hello.cpp 4: Undefined symbol 'end' in function main() Error E2379 hello.cpp 4: Statement missing ; in function main() *** 4 errors in Compile *** ```
Seriously, you're going to keep having troubles if you continue to use Borland's compiler. It's free from their computer *museum* for a reason - it's ancient. The copyright line itself should be proof enough of that: > Borland C++ 5.5.1 for Win32 Copyright (c) 1993, 2000 Borland Do you *really* want a compiler that hasn't been updated in decades, one that the standard has long since left behind? It's the compiler you get if you're interested in vintage computing stuff, in the same league as people with TRS-80 and Apple II emulators :-) Download Microsoft Visual C++ Express and install it. It's as free (as in cost) as the Borland one and substantially more up to date. See [here](http://www.microsoft.com/express/product/) for the product page. Or there are many other more up-to-date development tools you can get for free as well, such as `gcc`, `Code::Blocks` and so forth.
The first error has a resolution in the [installation instructions](http://edn.embarcadero.com/article/21205).
Borland can't compile. What is going on? I can't even get started
[ "c++", "compilation", "borland-c++" ]
Sometime when looking through code, I see many methods specify an annotation: ``` @SuppressWarnings("unchecked") ``` What does this mean?
Sometimes Java generics just doesn't let you do what you want to, and you need to effectively tell the compiler that what you're doing really *will* be legal at execution time. I usually find this a pain when I'm mocking a generic interface, but there are other examples too. It's usually worth trying to work out a way of avoiding the warning rather than suppressing it (the [Java Generics FAQ](http://www.angelikalanger.com/GenericsFAQ/JavaGenericsFAQ.html) helps here) but sometimes even if it *is* possible, it bends the code out of shape so much that suppressing the warning is neater. Always add an explanatory comment in that case! The same generics FAQ has several sections on this topic, starting with ["What is an "unchecked" warning?"](http://www.angelikalanger.com/GenericsFAQ/FAQSections/TechnicalDetails.html#FAQ001) - it's well worth a read.
It is an annotation to suppress compile warnings about unchecked generic operations (not exceptions), such as casts. It essentially tells the compiler that the programmer does not wish to be notified about warnings they are already aware of in a particular bit of code. You can read more on this specific annotation here: > [SuppressWarnings](https://docs.oracle.com/javase/8/docs/api/java/lang/SuppressWarnings.html) Additionally, Oracle provides some tutorial documentation on the usage of annotations here: > [Annotations](https://docs.oracle.com/javase/tutorial/java/annotations/predefined.html) As they put it, > "The 'unchecked' warning can occur when interfacing with legacy code written before the advent of generics (discussed in the lesson titled Generics)."
What is SuppressWarnings ("unchecked") in Java?
[ "java", "generics", "unchecked", "suppress-warnings" ]
I want users on the site to be able to download files whose paths are obscured so they cannot be directly downloaded. For instance, I'd like the URL to be something like this: `http://example.com/download/?f=somefile.txt` And on the server, I know that all downloadable files reside in the folder `/home/user/files/`. Is there a way to make Django serve that file for download as opposed to trying to find a URL and View to display it?
For the "best of both worlds" you could combine S.Lott's solution with the [xsendfile module](http://www.google.com.ng/search?q=mod_xsendfile&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a): django generates the path to the file (or the file itself), but the actual file serving is handled by Apache/Lighttpd. Once you've set up mod\_xsendfile, integrating with your view takes a few lines of code: ``` from django.utils.encoding import smart_str response = HttpResponse(mimetype='application/force-download') # mimetype is replaced by content_type for django 1.7 response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(file_name) response['X-Sendfile'] = smart_str(path_to_file) # It's usually a good idea to set the 'Content-Length' header too. # You can also set any other required headers: Cache-Control, etc. return response ``` Of course, this will only work if you have control over your server, or your hosting company has mod\_xsendfile already set up. **EDIT:** > mimetype is replaced by content\_type for django 1.7 ``` response = HttpResponse(content_type='application/force-download') ``` **EDIT:** For `nginx` check [this](https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/), it uses `X-Accel-Redirect` instead of `apache` X-Sendfile header.
A "download" is simply an HTTP header change. See <http://docs.djangoproject.com/en/dev/ref/request-response/#telling-the-browser-to-treat-the-response-as-a-file-attachment> for how to respond with a download. You only need one URL definition for `"/download"`. The request's `GET` or `POST` dictionary will have the `"f=somefile.txt"` information. Your view function will simply merge the base path with the "`f`" value, open the file, create and return a response object. It should be less than 12 lines of code.
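The "merge the base path with the `f` value" step deserves care, since a crafted filename like `../../etc/passwd` could otherwise escape the files directory. Below is a hedged, standard-library-only sketch of that step; the directory and function names are illustrative, not from the answer, and a real Django view would wrap the result in a response object:

```python
import os

# Illustrative base directory, matching the question's example.
FILES_ROOT = "/home/user/files"

def resolve_download_path(requested_name):
    """Join the base directory with the requested filename, returning None
    for any request that would resolve outside the base directory."""
    candidate = os.path.normpath(os.path.join(FILES_ROOT, requested_name))
    if not candidate.startswith(FILES_ROOT + os.sep):
        return None  # directory-traversal attempt, refuse it
    return candidate
```

The view would then open the returned path and stream it back with a `Content-Disposition: attachment` header, as the linked documentation describes.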
Having Django serve downloadable files
[ "python", "django", "django-authentication" ]
I'd like to use the C99 header file inttypes.h in a Visual Studio project (I'd like to printf 64 bit numbers). However, this file does not seem to exist in my install. Is this just not part of VS2005? Are there any alternatives?
It's at [google](http://code.google.com/p/msinttypes/). VS doesn't come with `<inttypes.h>`
No, it is not included in VS 2005. An alternative is Boost's implementation in the [Boost::Integer](http://www.boost.org/doc/libs/1_39_0/libs/integer/index.html) library, specifically [boost/cstdint.hpp](http://www.boost.org/doc/libs/1_39_0/libs/integer/cstdint.htm)
Where is <inttypes.h> in Visual Studio 2005?
[ "c++", "visual-studio-2005", "c99" ]
I'm using .NET 3.5. Why am I still getting: > does not contain a definition for 'Distinct' with this code: ``` using System.Collections.Generic; //.. . . . . code List<string> Words = new List<string>(); // many strings added here . . . Words = Words.Distinct().ToList(); ```
Are you ``` using System.Linq; ``` ? `Distinct` is an extension method defined in `System.Linq.Enumerable` so you need to add that using statement. And don't forget to add a reference to `System.Core.dll` (if you're using VS2008, this has already been done for you).
You forgot to add ``` using System.Linq; ``` `Distinct` is an [extension method](http://msdn.microsoft.com/en-us/library/bb383977.aspx) that is defined in [`System.Linq.Enumerable`](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.aspx), so you can only call it if you import that namespace. You'll also need to add a reference to `System.Core.dll`. If you created the project as a .Net 3.5 project, it will already be referenced; if you upgraded it from .Net 2 or 3, you'll have to add the reference yourself.
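For comparison, the de-duplication that `Distinct().ToList()` performs can be sketched in Python. This is an analogue, not part of the C# answer; `dict.fromkeys` keeps the first occurrence of each item and preserves insertion order:

```python
# Equivalent of: Words = Words.Distinct().ToList();
words = ["apple", "pear", "apple", "plum", "pear"]

# dict keys are unique and (in modern Python) ordered by first insertion,
# so this drops duplicates while keeping the original ordering.
distinct_words = list(dict.fromkeys(words))
```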
.NET List.Distinct
[ "c#", "linq", ".net-3.5", "extension-methods" ]
In C#, does setting a field as readonly reduce memory usage? i.e. ``` DBRepository _db = new DBRepository(); ``` vs ``` readonly DBRepository _db = new DBRepository(); ``` Just curious. Thanks.
No. It means you can't assign to it except at the declaration point or in a constructor. You also can't pass it as a `ref` or `out` parameter to a method. Edit: based on the comment below. The following field is `readonly` because you want callers to have direct read access to `string.Empty`, but you don't want them to set it to something else. ``` public sealed class String { public static readonly string Empty = ""; } ``` Also, sometimes when I have a `List<>` or `Dictionary<>` as a field in a class, I'll declare it `readonly` to indicate that I want to work with its members (even Add and Remove items, etc.) but I never want to actually assign a different List or Dictionary object to it. One more edit: Make sure you read about the deprecated [Path.InvalidPathChars](http://msdn.microsoft.com/en-us/library/system.io.path.invalidpathchars.aspx) field (in the remarks section) to show a serious problem that can happen when you don't understand what `readonly` is and is not.
YES! Yes? Yes! How? Simple. A read-only field cannot get a new value so you can't create a second DBRepository and assign it to this field. If it's not read-only then you could write code which will re-assign a new value to this field. And during the time between the reassignment and the time for the garbage collector to clear your old value, you will have a bit more memory in use. Furthermore, if cleanup of the DBRepository has a memory leak, then reassigning new values to this field will cause multiple memory leaks. A situation where such a memory leak could occur is this: 1) You assign a value to \_db. 2) You assign the value of \_db to another field. 3) You reassign a new value to \_db. At this moment you will have two DBRepository objects in memory because the old object is still referenced to from another object. Thus the old object won't be freed until the second field releases it. But this is very nitpicky and quite rare. The answer "No" is more appropriate. Don't use "ReadOnly" to save memory because you'd be using it for the wrong reason. Use "ReadOnly" to make sure others won't reassign a value to this field. (Basically, I'm saying that it *can* reduce memory usage...)
Does "readonly" (C#) reduce memory usage?
[ "c#", "performance", "memory" ]
I have found the following expression which is intended to modify the id of a cloned html element e.g. change `contactDetails[0]` to `contactDetails[1]`: ``` var nel = 1; var s = $(this).attr(attribute); s.replace(/([^\[]+)\[0\]/, "$1["+nel+"]"); $(this).attr(attribute, s); ``` I am not terribly familiar with regex, but have tried to interpret it with the help of [The Regex Coach](http://weitz.de/regex-coach/); however I am still struggling. It appears that `([^\[]+)` matches one or more characters which are not '`[`' and `\[0\]` matches `[0]`. The `/` in the middle I interpret as an 'include both', so I don't understand why the author has even included the first expression. I don't understand what the $1 in the replace string is, and if I use the Regex Coach replace functionality, simply using `[0]` as the search and `[1]` as the replace gives the correct result; however if I change the javascript to `s.replace(/\[0\]/, "["+nel+"]");` the string s remains unchanged. I would be grateful for any advice as to what the original author intended and help in finding a solution which will successfully replace a number in square brackets anywhere within a search string.
**Find** ``` / # Signifies the start of a regex expression like " for a string ([^\[]+) # Capture the character that isn't [ 1 or more times into $1 \[0\] # Find [0] / # Signifies the end of a regex expression ``` **Replace** ``` "$1[" # Insert the item captured above And [ +nel+ # New index "]" # Close with ] ``` To create an expression that captures any digit, you can replace the `0` with `\d+` which will match a digit 1 or more times. ``` s.replace(/([^\[]+)\[\d+\]/, "$1["+nel+"]"); ```
The `$1` is a backreference to the first group in the regex. Groups are the pieces inside `()`. So, in this case `$1` will be replaced by whatever the `([^\[]+)` part matched. If the string was `contactDetails[0]` the resulting string would be `contactDetails[1]`. Note that this regex only replaces 0s inside square brackets. If you want to replace any number you will need something like: ``` ([^\[]+)\[\d+\] ``` The `\d` matches any digit character. `\d+` then becomes any sequence of at least one digit. But your code will still not work, because Javascript strings are immutable. That means they can't be changed once created. The `replace` method returns a new string, instead of changing the original one. You should use: ``` s = s.replace(...) ```
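The two points above, group backreferences and string immutability, behave the same way in Python. An illustrative analogue (Python writes the backreference as `\g<1>` where JavaScript uses `$1`):

```python
import re

nel = 1
s = "contactDetails[0]"

# The group ([^\[]+) captures "contactDetails"; \g<1> re-inserts it,
# and \d+ matches any index, not just 0.
# Like JavaScript, Python strings are immutable: re.sub returns a NEW
# string, so the result must be assigned back to s.
s = re.sub(r"([^\[]+)\[\d+\]", r"\g<1>[%d]" % nel, s)
```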
Help interpreting a javascript Regex
[ "javascript", "regex" ]
I'm making an app that will be installed and run on multiple computers. My goal is to ship an empty local database file with the app; as the user works with the app, the database gets filled with data. Can you provide me with examples of the following: 1. what do I need to do so my app can connect to its local database 2. how to execute a query with variables from the app, for example how would you add the following to the database ``` String abc = "ABC"; String BBB = "Something longer than abc"; ``` and so on. Edit :: **I am using a "local database" created from "Add > New Item > Local Database", so how would I connect to that? Sorry for the dumb question... I have never used databases in .NET**
Depending on your needs you could also consider SQL CE. I'm sure that if you specified the database you're thinking of using, or your requirements if you're unsure, you would get proper and real examples of connection strings etc. Edit: Here's code for SqlCe / Sql Compact ``` public void ConnectListAndSaveSQLCompactExample() { // Create a connection to the file datafile.sdf in the program folder string dbfile = new System.IO.FileInfo(System.Reflection.Assembly.GetExecutingAssembly().Location).DirectoryName + "\\datafile.sdf"; SqlCeConnection connection = new SqlCeConnection("datasource=" + dbfile); // Read all rows from the table test_table into a dataset (note, the adapter automatically opens the connection) SqlCeDataAdapter adapter = new SqlCeDataAdapter("select * from test_table", connection); DataSet data = new DataSet(); adapter.Fill(data); // Add a row to the test_table (assume that table consists of a text column) data.Tables[0].Rows.Add(new object[] { "New row added by code" }); // Save data back to the databasefile adapter.Update(data); // Close connection.Close(); } ``` Remember to add a reference to System.Data.SqlServerCe
I'm not seeing anybody suggesting SQL Compact; it's similar to SQLite in that it doesn't require installation and caters to the low-end database niche. It grew out of SQL Mobile and as such has a small footprint and a limited feature set, but if you're familiar with Microsoft's SQL offerings it should have some familiarity. SQL Express is another option, but be aware that it requires a standalone installation and is a bit beefier than you might need for an application's local cache. That said, it's also quite a bit more powerful than SQL Compact or SQLite.
Local database, I need some examples
[ "c#", "database" ]
I have a class in my domain model root that looks like this: ``` namespace Domain { public class Foo { ... } } ``` I also have another class with the same name in a different namespace: ``` namespace Domain.SubDomain { public class Foo { ... } } ``` For my mappings, I have a `Mapping` directory with a subdirectory called `SubDomain` that contains mappings for the domain classes found in the `Domain.SubDomain` namespace. They are all in the same assembly. However, when I try to load them with NHibernate, I keep getting a `DuplicateMappingException`... even though the two Foos are in different namespaces. The code I am using to load my NHibernate configuration is this: ``` var cfg = new Configuration() .Configure() .AddAssembly("Domain"); ``` How can I tell NHibernate to let me use two entities with the same name (but different namespaces)?
I found the [answer](https://web.archive.org/web/20090616013415/http://docs.jboss.org/hibernate/stable/core/reference/en/html/mapping.html#mapping-declaration-mapping) on the Hibernate website: > If you have two persistent classes > with the same unqualified name, you > should set `auto-import="false"`. An > exception will result if you attempt > to assign two classes to the same > "imported" name. I used that as an attribute for the `<hibernate-mapping>` tag and it worked.
I have had the same problem. I solved it like this: ``` Fluently.Configure() .Database(MsSqlConfiguration.MsSql2008 .ConnectionString(...) .AdoNetBatchSize(500)) .Mappings(m => m.FluentMappings .Conventions.Setup(x => x.Add(AutoImport.Never())) .AddFromAssembly(...) .AddFromAssembly(...) .AddFromAssembly(...) .AddFromAssembly(...)) ; ``` The important part is: `.Conventions.Setup(x => x.Add(AutoImport.Never()))`. Everything seems to be working fine with this configuration.
NHibernate DuplicateMappingException when two classes have the same name but different namespaces
[ "c#", ".net", "nhibernate", "orm", "nhibernate-mapping" ]
Sometimes in an application, one might compare the Message text of an exception. For instance, if ``` ex.Message.Contains("String or binary data would be truncated") ``` then a MessageBox will be displayed for the user. This works when testing on an English-language Windows system. However, when the program is run on a system with a different language set, then this won't work. How to ensure that only English exception messages are used?
As orsogufo noted, you should check the exception type or error code, and never try to parse an exception message (the message is for the user, not for the program). In your specific example, you could do something like ``` try { ... } catch (SqlException ex) { if (ex.Number == 8152) MessageBox.Show(ex.Message); } ``` (You'll have to determine the exact error number(s) to check for.)
You cannot ensure that the exception message will be in English; it depends upon system settings behind your control. In general, you should not parse an exception message, but rather rely on exception **types** and, if present, **error codes** (which are language independent). As an example, instead of catching only one exception type and parsing the message... ``` try { do_something(); } catch (Exception exc) { if (exc.Message.Contains("String or binary data would be truncated"){ MessageBox.Show("An error occurred..."); } } ``` ...you might use multiple exception handlers: ``` try { do_something(); } catch (SqlException sql) { MessageBox.Show("An error occurred..."); } catch (SomeOtherException someExc){ // exception-specific code here... } catch (Exception exc) { // most generic error... } ```
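The same principle carries over to other languages. Here is an illustrative Python sketch (the function name is invented for the example): dispatch on the exception *type*, never on a message string that may be localized or reworded between versions:

```python
def parse_port(text):
    """Return the port number in text, or None if it isn't a valid integer."""
    try:
        return int(text)
    except ValueError:  # robust: the TYPE is stable across locales and versions
        return None
    # Fragile alternative (don't do this): catching Exception and checking
    # whether "invalid literal" appears in str(e) breaks as soon as the
    # message text changes.
```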
Exceptions: Compare Message Property to Know what it Means?
[ "c#", "exception", "multilingual" ]
I am at a loss with the following query, which is peanuts in plain T-SQL. We have three physical tables: * Band (PK=BandId) * MusicStyle (PK=MusicStyleId) * BandMusicStyle (PK=BandId+MusicStyleId, FK=BandId, MusicStyleId) Now what I'm trying to do is get a list of MusicStyles that are linked to a Band whose name contains a certain search string. The band name should be in the result as well. The T-SQL would be something like this: ``` SELECT b.Name, m.ID, m.Name, m.Description FROM Band b INNER JOIN BandMusicStyle bm on b.BandId = bm.BandId INNER JOIN MusicStyle m on bm.MusicStyleId = m.MusicStyleId WHERE b.Name like '%@searchstring%' ``` How would I write this in Linq To Entities? PS: StackOverflow does not allow a search on the string 'many to many' for some bizarre reason...
This proved to be much simpler than it seemed. I've solved the problem using the following blogpost: <http://weblogs.asp.net/salimfayad/archive/2008/07/09/linq-to-entities-join-queries.aspx> The key to this solution is to apply the filter of the bandname on a subset of Bands of the musicstyle collection. ``` var result=(from m in _entities.MusicStyle from b in m.Band where b.Name.Contains(search) select new { BandName = b.Name, m.ID, m.Name, m.Description }); ``` notice the line ``` from b IN m.Band ``` This makes sure you are only filtering on bands that have a musicstyle. Thanks for your answers but none of them actually solved my problem.
In Linq, actually you don't need to write anything, if you define the relation in the diagram in SQL database, and generated using the utility, the object hierarchy is built automatically. That means, if you do: ``` var bands = from ms in db.MusicStyle let b = ms.Bands where b.Name.Contains(SEARCHSTRING) select new { b.Name, ms.Name, ms.ID, ms.Description}; ``` If you look into the generated classes of entities, the BandMusicStyle should not appear as LINQ to Entities consider that Band and MusicStyle are many to many and that table is not necessary. See if it works?
Linq to Entities many to many select query
[ "c#", "t-sql", "linq-to-entities" ]
How do I use jQuery to decode HTML entities in a string?
> **Security note:** using this answer (preserved in its original form below) may introduce an [XSS vulnerability](https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)) into your application. **You should not use this answer.** Read [lucascaro's answer](https://stackoverflow.com/a/1395954/1709587) for an explanation of the vulnerabilities in this answer, and use the approach from either that answer or [Mark Amery's answer](https://stackoverflow.com/a/23596964/1709587) instead. Actually, try ``` var encodedStr = "This is fun &amp; stuff"; var decoded = $("<div/>").html(encodedStr).text(); console.log(decoded); ``` ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div/> ```
Without any jQuery: ``` function decodeEntities(encodedString) { var textArea = document.createElement('textarea'); textArea.innerHTML = encodedString; return textArea.value; } console.log(decodeEntities('1 &amp; 2')); // '1 & 2' ``` This works similarly to the [accepted answer](https://stackoverflow.com/a/2419664/1709587), but is safe to use with untrusted user input. --- ## Security issues in similar approaches As [noted by Mike Samuel](https://stackoverflow.com/questions/1147359/how-to-decode-html-entities-using-jquery/1395954#comment6018122_2419664), doing this with a `<div>` instead of a `<textarea>` with untrusted user input is an XSS vulnerability, even if the `<div>` is never added to the DOM: ``` function decodeEntities(encodedString) { var div = document.createElement('div'); div.innerHTML = encodedString; return div.textContent; } // Shows an alert decodeEntities('<img src="nonexistent_image" onerror="alert(1337)">') ``` However, this attack is not possible against a `<textarea>` because there are no HTML elements that are permitted content of a [`<textarea>`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/textarea). Consequently, any HTML tags still present in the 'encoded' string will be automatically entity-encoded by the browser. ``` function decodeEntities(encodedString) { var textArea = document.createElement('textarea'); textArea.innerHTML = encodedString; return textArea.value; } // Safe, and returns the correct answer console.log(decodeEntities('<img src="nonexistent_image" onerror="alert(1337)">')) ``` > **Warning**: Doing this using jQuery's [`.html()`](http://api.jquery.com/html/) and [`.val()`](http://api.jquery.com/val/) methods instead of using [`.innerHTML`](https://developer.mozilla.org/en-US/docs/Web/API/Element/innerHTML) and `.value` is also insecure\* for some versions of jQuery, **even when using a `textarea`**. 
This is because older versions of jQuery would [deliberately and explicitly evaluate scripts](https://github.com/jquery/jquery/blob/1.7/jquery.js#L6049) contained in the string passed to `.html()`. Hence code like this shows an alert in jQuery 1.8: ``` //<!-- CDATA // Shows alert $("<textarea>") .html("<script>alert(1337);</script>") .text(); //--> ``` ``` <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.2.3/jquery.min.js"></script> ``` \* Thanks to [Eru Penkman](https://stackoverflow.com/users/2377920/eru-penkman) for catching this vulnerability.
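As an aside, Python's standard library performs the same decoding without any DOM tricks at all, so none of the script-evaluation pitfalls above apply. Shown for comparison only, not as a jQuery solution:

```python
import html

# html.unescape converts named and numeric character references
# back to the characters they represent.
decoded = html.unescape("1 &amp; 2 &lt; 3")
```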
How to decode HTML entities using jQuery?
[ "javascript", "jquery", "html" ]
When dealing with custom exceptions, I usually inherit from Exception and then add some fields/properties to my exception class to store some additional info: ``` public class MyException : Exception { public int ErrorCode{get;set;} public MyException() {} } ``` In the above example, the `ErrorCode` value is stored in the exception, meaning that I have to add it to, and retrieve it from, the `SerializationInfo` object in the protected constructor and the overridden `GetObjectData` method. The [Exception](http://msdn.microsoft.com/en-us/library/system.exception.aspx) class has a [Data](http://msdn.microsoft.com/en-us/library/system.exception.data.aspx) property, which according to MSDN: > Gets a collection of key/value pairs that provide additional user-defined information about the exception. If I store the error code inside the `Data`, it will get serialised for me by the Exception class (according to Reflector), meaning that my exception class now looks like: ``` public class MyException : Exception { public int ErrorCode { get {return (int) Data["ErrorCode"];} set {Data["ErrorCode"] = value;} } public MyException() {} } ``` This means that whilst there is a bit more work to do in dealing with the get/set of the error code (like dealing with casting errors and situations where the error code might not be in the dictionary), I don't have to worry about serialising/deserialising it. Is this just two different ways of achieving the same thing, or does one way have any clear advantage(s) over the other (apart from those I've already mentioned)?
I would avoid using Data as it is not under your control, e.g. some code somewhere might decide to overwrite the "ErrorCode" value. Instead use the property and implement serialization. I use the following code to test all my custom exceptions to make sure I've implemented them properly. ``` public static void TestCustomException<T>() where T : Exception { var t = typeof(T); //Custom exceptions should have the following 3 constructors var e1 = (T)Activator.CreateInstance(t, null); const string message = "message"; var e2 = (T)Activator.CreateInstance(t, message); Assert.AreEqual(message, e2.Message); var innerEx = new Exception("inner Exception"); var e3 = (T)Activator.CreateInstance(t, message, innerEx); Assert.AreEqual(message, e3.Message); Assert.AreEqual(innerEx, e3.InnerException); //They should also be serializable var stream = new MemoryStream(); var formatter = new BinaryFormatter(); formatter.Serialize(stream, e3); stream.Flush(); stream.Position = 0; var e4 = (T)formatter.Deserialize(stream); Assert.AreEqual(message, e4.Message); Assert.AreEqual(innerEx.ToString(), e4.InnerException.ToString()); } ```
If you are bothering to create your own exception, you don't need the Data property. Data comes in useful when you want to store a bit of extra information in an existing exception class, but don't want to create your own custom exception class.
How should I store data inside custom exceptions?
[ "", "c#", "exception", "" ]
I'm currently trying to optimize a MySQL statement that is taking quite some time. The table this is running on has 600k+ rows and the query is taking over 10 seconds. ``` SELECT DATE_FORMAT( timestamp, '%Y-%m-%d' ) AS date, COUNT( DISTINCT ( email ) ) AS count FROM log WHERE timestamp > '2009-02-23' AND timestamp < '2020-01-01' AND TYPE = 'play' GROUP BY date ORDER BY date DESC ``` I have indexes on timestamp and type, and also one on timestamp\_type (type\_2). Here are the explain results; the problem seems to be a filesort but I don't know how to get around this... ``` id: 1 select_type: SIMPLE table: log type: ref possible_keys: type,timestamp,type_2 key: type_2 key_len: 1 ref: const rows: 226403 Extra: Using where; Using filesort ``` Thanks
Things to try: * Have a separate `date` column (indexed) and use that instead of your `timestamp` column * Add an index across `type` and `date` * Use BETWEEN (don't think it will affect the speed but it's easier to read) So ideally you would 1. Create a `date` column and fill it using `UPDATE table SET date = DATE(timestamp)` 2. Index across `type` and `date` 3. Change your select to ... type = ? AND date BETWEEN ? AND ?
Try rewriting to filter on TYPE alone first. Then apply your date range and aggregates. Basically create an inline view that filters type down. I know it's likely that the optimizer is doing this already, but when trying to improve performance I find it's helpful to be very certain of what things are happening first.
Mysql optimization
[ "", "sql", "mysql", "optimization", "" ]
I've got a PHP script that checks a directory and deletes any files not modified within 15 seconds (It's for a game). My problem is how to get this script to run all the time. I set up a cron job to run every 10 minutes and then in the PHP script I have an infinite loop with a sleep(10). My thought was that it would run the code every 10 seconds, and in the case the script stopped, the cron job would restart it eventually. However, after the script is started, it runs for about 3 loops (30 secs) and then stops. I've heard PHP only gets so much memory per file load. How can I make this PHP script loop indefinitely? Maybe there is some way to call itself
You could run a parent php process that forks a client at an interval. If you're curious about exploring it as an option here is a good starting point: <https://www.php.net/pcntl> Nice thing about doing it this way is that the parent process can kill client pids that do not end within a reasonable amount of time. If you're looking for something quick and dirty you could write a bash script to invoke the php quite easily (if you're on linux): ``` #!/bin/bash while [ "true" ]; do /path/to/script.php sleep 15 done ``` *EDIT* You don't really even need the script, bash will do it all on one line: ``` while [ "true" ]; do /path/to/script.php; sleep 15; done ```
You might want to check your max\_execution\_time parameter in the php.ini file. I believe the default is 30 seconds. The way you have it setup with cron, you will probably have multiple instances of the script running after 10 minutes unless you add some logic in the script to check that an instance of itself is not already running
Looping a PHP Script
[ "", "php", "cron", "infinite-loop", "" ]
I've consistently had an issue with parsing XML with PHP and not really found "the right way" or at least a standardised way of parsing XML files. Firstly I'm trying to parse this: ``` <item> <title>2884400</title> <description><![CDATA[ ><img width="126" alt="" src="http://userserve-ak.last.fm/serve/126/27319921.jpg" /> ]]></description> <link>http://www.last.fm/music/+noredirect/Beatles/+images/27319921</link> <author>anne710</author> <pubDate>Tue, 21 Apr 2009 16:12:31 +0000</pubDate> <guid>http://www.last.fm/music/+noredirect/Beatles/+images/27319921</guid> <media:content url="http://userserve-ak.last.fm/serve/_/27319921/Beatles+2884400.jpg" fileSize="13065" type="image/jpeg" expression="full" width="126" height="126" /> <media:thumbnail url="http://userserve-ak.last.fm/serve/126/27319921.jpg" type="image/jpeg" width="126" height="126" /> </item> ``` I'm using this code: ``` $doc = new DOMDocument(); $doc->load('http://ws.audioscrobbler.com/2.0/artist/beatles/images.rss'); $arrFeeds = array(); foreach ($doc->getElementsByTagName('item') as $node) { $itemRSS = array ( 'title' => $node->getElementsByTagName('title')->item(0)->nodeValue, 'desc' => $node->getElementsByTagName('description')->item(0)->nodeValue, 'link' => $node->getElementsByTagName('link')->item(0)->nodeValue, 'date' => $node->getElementsByTagName('pubDate')->item(0)->nodeValue ); array_push($arrFeeds, $itemRSS); } ``` Now I want to get the "media:content" and "media:thumbnail" url attributes, how would I do that? Now I think I should be using DOMElement::getAttribute but I haven't managed to get it to work :/ Can anyone shed some light on this, and also let me know if this is a good way to parse XML? Regards, Shadi
This was how i have eventually done it using XMLReader: ``` <?php define ('XMLFILE', 'http://ws.audioscrobbler.com/2.0/artist/vasco%20rossi/images.rss'); echo "<pre>"; $items = array (); $i = 0; $xmlReader = new XMLReader(); $xmlReader->open(XMLFILE, null, LIBXML_NOBLANKS); $isParserActive = false; $simpleNodeTypes = array ("title", "description", "media:title", "link", "author", "pubDate", "guid"); while ($xmlReader->read ()) { $nodeType = $xmlReader->nodeType; // Only deal with Beginning/Ending Tags if ($nodeType != XMLReader::ELEMENT && $nodeType != XMLReader::END_ELEMENT) { continue; } else if ($xmlReader->name == "item") { if (($nodeType == XMLReader::END_ELEMENT) && $isParserActive) { $i++; } $isParserActive = ($nodeType != XMLReader::END_ELEMENT); } if (!$isParserActive || $nodeType == XMLReader::END_ELEMENT) { continue; } $name = $xmlReader->name; if (in_array ($name, $simpleNodeTypes)) { // Skip to the text node $xmlReader->read (); $items[$i][$name] = $xmlReader->value; } else if ($name == "media:thumbnail") { $items[$i]['media:thumbnail'] = array ( "url" => $xmlReader->getAttribute("url"), "width" => $xmlReader->getAttribute("width"), "height" => $xmlReader->getAttribute("height"), "type" => $xmlReader->getAttribute("type") ); } else if ($name == "media:content") { $items[$i]['media:content'] = array ( "url" => $xmlReader->getAttribute("url"), "width" => $xmlReader->getAttribute("width"), "height" => $xmlReader->getAttribute("height"), "filesize" => $xmlReader->getAttribute("fileSize"), "expression" => $xmlReader->getAttribute("expression") ); } } print_r($items); echo "</pre>"; ?> ```
You can use [SimpleXML](http://nl.php.net/SimpleXML) as suggested by the other posters, but you need to use the children() and attributes() functions so you can [deal with the different namespaces](http://www.sitepoint.com/blogs/2005/10/20/simplexml-and-namespaces/) Example (untested): ``` $feed = file_get_contents('http://ws.audioscrobbler.com/2.0/artist/beatles/images.rss'); $xml = new SimpleXMLElement($feed); foreach ($xml->channel->item as $item) { foreach ($item->children('http://search.yahoo.com/mrss/') as $media_element) { var_dump($media_element); } } ``` Alternatively, you can use XPath (again, untested): ``` $feed = file_get_contents('http://ws.audioscrobbler.com/2.0/artist/beatles/images.rss'); $xml = new SimpleXMLElement($feed); $xml->registerXPathNamespace('media', 'http://search.yahoo.com/mrss/'); $images = $xml->xpath('/rss/channel/item/media:content/@url'); var_dump($images); ```
Parsing XML using PHP
[ "", "php", "xml", "parsing", "domdocument", "" ]
I have the following code: ``` $LastModified = filemtime($somefile) ; ``` I want to add ten minutes to the last modified time and compare it with the current time; then, if $LastModified + 10 minutes is equal to the current time, delete the file. How can I do that? I'm a little confused with Unix timestamps.
Since the UNIX timestamp is expressed in "seconds since 1970", you just add five minutes in seconds: ``` $LastModPlusFiveMinutes = $lastModified + (60 * 5); ``` Or, maybe more readable: ``` $LastModPlusFiveMinutes = strtotime("+5 minutes", $lastModified); ```
The unix timestamp is the number of seconds that have passed since Jan 1st, 1970. Therefore to add 10 minutes you need to add 600 (seconds). To get the current time call `time()`. e.g. ``` $LastModified = filemtime($somefile); if ($LastModified+600 <= time()) { // delete the file } ``` (Note that you said "if $LastModified+ 10 minute is equal to current time delete the file" - I presume you actually meant equal to or less than, otherwise replace <= with == above).
add five minute to filemtime function (php)!
[ "", "php", "" ]
I have a piece of code that loads a very big image in memory. So it seemed like a reasonable thing to call ``` System.gc(); ``` before loading the image. From what I can tell it works with no issues. Yesterday I decided to use a pretty useful piece of software called [FindBugs](http://findbugs.sourceforge.net/) that scans your code and reports back issues that might cause bugs or generally not advised strategies. The problem is that this piece of code I mentioned gets reported. The description is this: > ... forces garbage collection; > extremely dubious except in > benchmarking code And it goes on to elaborate: > Code explicitly invokes garbage > collection. Except for specific use in > benchmarking, this is very dubious. > > In the past, situations where people > have explicitly invoked the garbage > collector in routines such as close or > finalize methods has led to huge > performance black holes. Garbage > collection can be expensive. Any > situation that forces hundreds or > thousands of garbage collections will > bring the machine to a crawl. So my question is: **Is it NOT OK to programmatically call the garbage collector in such a case? My code only calls it once and the method that it is in gets used rarely. And if it is not OK to call it then what should you do in a case where you need as much memory as possible before doing a very memory intensive operation and you need to free as much memory as possible prior to it?**
Did you get any performance improvements with the System.gc()? I don't think so, since you probably don't have a lot of objects that need to be collected before you load the image. Usually modern garbage collectors know best when to run, so you shouldn't force a collection unless you have a really, really good reason to (for example a benchmarking application, as suggested by that plugin). btw: Calling System.gc() recommends the VM to perform a "full" or "large" collection, which means that all threads are stopped briefly. Otherwise it will probably only make "small" garbage collections, which don't stop all threads. Run your program with -verbose:gc to see how many bytes are collected. There is also lots of technical information on garbage collection here: <http://java.sun.com/developer/technicalArticles/Programming/GCPortal/>
Typically the GC is smarter than you, so it's better to let it run whenever the runtime decides. If the runtime needs memory, it'll run the GC itself
When to Garbage Collect
[ "", "java", "garbage-collection", "findbugs", "" ]
Other than that I don't know if I can reproduce it now that it's happened (I've been using this particular application for a week or two now without issue), assuming that I'm running my application in the VS debugger, how should I go about debugging a deadlock after it's happened? I thought I might be able to get at call stacks if I paused the program and hence see where the different threads were when it happened, but clicking pause just threw Visual Studio into a deadlock too till I killed my application. Is there some way other than browsing through my source tree to find potential problems? Is there a way to get at the call stacks once the problem has occured to see where the problem is? Any other tools/tips/tricks that might help?
What you did was the correct way. If Visual Studio also deadlocks, that happens now and then. It's just bad luck, unless there's some other issue. You don't have to run the application in the debugger in order to debug it. Run the application normally, and if the deadlock happens, you can attach VS later. **Ctrl+Alt+P**, select the process, choose debugger type and click **attach**. Using a different set of debugger types might reduce the risk of VS crashing (especially if you don't debug native code) A deadlock involves 2 or more threads. You probably know the first one (probably your UI thread) since you noticed the deadlock in your application. Now you only need to find the other one. With knowledge of the architecture, it should be easy to find (e.g. what other threads use the same locks, interact with the UI etc) If VS doesn't work *at all*, you can always use **windbg**. Download here: <http://www.microsoft.com/whdc/devtools/debugging/default.mspx>
I'd try different approaches in the following order: * First, inspect the code to look for thread-safety violations, making sure that your critical regions don't call other functions that will in turn try to lock a critical region. * Use whatever tool you can get your hands on to visualize thread activity, I use an in-house perl script that parses an OS log we made and graphs all the context switches and shows when a thread gets pre-empted. * If you can't find a good tool, do some logging to see the last threads that were running before the deadlock occurred. This will give you a clue as to where the issue might be caused, it helps if the locking mechanisms have unique names, like if an object has it's own thread, create a dedicated semaphore or mutex just to manage that thread. I hope this helps. Good luck!
How to debug a deadlock?
[ "", "c#", "multithreading", "deadlock", "" ]
I am using extension methods OrderBy and ThenBy to sort my custom collection on multiple fields. This sort does not effect the collection but instead returns and IEnumberable. I am unable to cast the IEnumerable result to my custom collection. Is there anyway to change the order of my collection or convert the IEnumerable result to my custom collection?
If your collection type implements `IList<T>` (to be able to `Add()` to it) you could write an extension method: ``` public static class Extensions { public static TColl ToTypedCollection<TColl, T>(this IEnumerable ien) where TColl : IList<T>, new() { TColl collection = new TColl(); foreach (var item in ien) { collection.Add((T) item); } return collection; } } ```
No there isn't. When you use the query operators, it doesn't use instances of the original collection to generate the enumeration. Rather, it uses private implementations (possibly anonymous, possibly not) to provide this functionality. If you want it in your original collection, you should have a constructor on the type which takes an `IEnumerable<T>` (or whatever your collection stores, if it is specific) and then pass the query to the constructor. You can then use this to create an extension method for `IEnumerable<T>` called `To<YourCollectionType>` which would take the `IEnumerable<T>` and then pass it to the constructor of your type and return that.
How do I convert IEnumerable to a custom type in C#?
[ "", "c#", "extension-methods", "ienumerable", "casting", "" ]
I'm using VS 2008, and in my property pages for the project I see that I'm targeting .Net 3.5. Here is the error I'm getting when trying to compile: > AMSDataModels.Vehicle.VIN.get' must declare a body because it is not marked abstract, extern, or partial And here is the code: ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace AMSDataModels { public class Vehicle { //NodeID for datastore persistance public Guid NodeID { get; set; } public string VIN { get; set { if (value.Length != 17) throw new ArgumentOutOfRangeException("VIN", "VIN must be 17 characters"); } } public string Make { get; set; } public string Model { get; set; } } } ``` If I strip the body from set so that its just: ``` public string VIN { get; set; } ``` All works, but I lose my ability to check the VIN as it is set. Does anyone have a suggestion of how to fix this or a better way to approach the problem at hand? I really like the shorthand notation - but verifying the legitimacy of input is important too!
If you're going to add logic in the set, you need to add it into the get as well. Notice in your set you're not actually setting a value to anything? Add a backing field, ``` private string _vin; ``` and return that in the get. ``` public string VIN { get { return _vin; } set { if (value.Length != 17) throw new ArgumentOutOfRangeException("VIN", "VIN must be 17 characters"); else _vin = value; } } ```
When automatic properties are used, the compiler automatically generates a backer field. When you declare your own, there's no way for it to know what field to use for the get method. So you have to declare both or none. Incidentally, your current set method only checks for the value - it never actually assigns it to anything.
Automatic Properties in C# 3 - Must declare a body for get if I declare one for set?
[ "", "c#", ".net-3.5", "" ]
I am writing a framework for writing out collections into different formats for a project at my employer. One of the output formats is delimited text files (commonly known as CSV -- even though CSVs aren't always delimited by a comma). I am using the Microsoft.Jet.OLEDB.4.0 provider via OleDbConnection in ADO.net. For reading these files, it's very quick. However, for writing, it's extremely slow. In one case, I have a file with 160 records, with each record having about 250 fields. It takes approximately 30 seconds to create this file, seemingly CPU bound. I have done the following, which provided significant performance boosts, but I can't think of anything else: 1. Preparing the statement once 2. Using unnamed parameters Any other suggestions to speed this up some?
How about "don't use OleDbConnection"... writing delimited files with `TextWriter` is pretty simple (escaping aside). For reading, [CsvReader](http://www.codeproject.com/KB/database/CsvReader.aspx).
I have written a small and simple set of classes at my employer to do just that (write and read CSV files or other flat files with a fixed field length). I have just used the StreamWriter & StreamReader classes, and it is quite fast actually.
.NET writing a delimited text file
[ "", "c#", "csv", "" ]
I'm making a hit counter. I have a database and I store the IP and `$_SERVER['HTTP_USER_AGENT'];` of the visitors. Now I need to add a filter so I can discard the hits that are made by bots. I found out that many bots usually have some common words in `$_SERVER['HTTP_USER_AGENT'];`, so I'd like to make an array of words that would keep a bot from showing up in the results. Here is what I have now: ``` while($row = mysql_fetch_array($yesterday, MYSQL_ASSOC)) { ``` <-- Here I need code that would run through the array and check if the user agent contains the keywords, and if it doesn't ... just count++; --> ``` } ``` Also, if you know any other way of detecting and removing the bots from the results, I'd be very thankful. Cheers
Loop through the array of words with `foreach` and check if the current word exists in the UA string using [strpos()](http://php.net/strpos): ``` foreach ($words as $word) { if (strpos($row['user_agent'], $word) !== FALSE) { // word exists in string } } ```
Using Dimitar Christoff's list ended up with this script: ``` function isBot($user_agent){ $bots = array('bingbot', 'msn', 'abacho', 'abcdatos', 'abcsearch', 'acoon', 'adsarobot', 'aesop', 'ah-ha', 'alkalinebot', 'almaden', 'altavista', 'antibot', 'anzwerscrawl', 'aol', 'search', 'appie', 'arachnoidea', 'araneo', 'architext', 'ariadne', 'arianna', 'ask', 'jeeves', 'aspseek', 'asterias', 'astraspider', 'atomz', 'augurfind', 'backrub', 'baiduspider', 'bannana_bot', 'bbot', 'bdcindexer', 'blindekuh', 'boitho', 'boito', 'borg-bot', 'bsdseek', 'christcrawler', 'computer_and_automation_research_institute_crawler', 'coolbot', 'cosmos', 'crawler', 'crawler@fast', 'crawlerboy', 'cruiser', 'cusco', 'cyveillance', 'deepindex', 'denmex', 'dittospyder', 'docomo', 'dogpile', 'dtsearch', 'elfinbot', 'entire', 'web', 'esismartspider', 'exalead', 'excite', 'ezresult', 'fast', 'fast-webcrawler', 'fdse', 'felix', 'fido', 'findwhat', 'finnish', 'firefly', 'firstgov', 'fluffy', 'freecrawl', 'frooglebot', 'galaxy', 'gaisbot', 'geckobot', 'gencrawler', 'geobot', 'gigabot', 'girafa', 'goclick', 'goliat', 'googlebot', 'griffon', 'gromit', 'grub-client', 'gulliver', 'gulper', 'henrythemiragorobot', 'hometown', 'hotbot', 'htdig', 'hubater', 'ia_archiver', 'ibm_planetwide', 'iitrovatore-setaccio', 'incywincy', 'incrawler', 'indy', 'infonavirobot', 'infoseek', 'ingrid', 'inspectorwww', 'intelliseek', 'internetseer', 'ip3000.com-crawler', 'iron33', 'jcrawler', 'jeeves', 'jubii', 'kanoodle', 'kapito', 'kit_fireball', 'kit-fireball', 'ko_yappo_robot', 'kototoi', 'lachesis', 'larbin', 'legs', 'linkwalker', 'lnspiderguy', 'look.com', 'lycos', 'mantraagent', 'markwatch', 'maxbot', 'mercator', 'merzscope', 'meshexplorer', 'metacrawler', 'mirago', 'mnogosearch', 'moget', 'motor', 'muscatferret', 'nameprotect', 'nationaldirectory', 'naverrobot', 'nazilla', 'ncsa', 'beta', 'netnose', 'netresearchserver', 'ng/1.0', 'northerlights', 'npbot', 'nttdirectory_robot', 'nutchorg', 'nzexplorer', 'odp', 
'openbot', 'openfind', 'osis-project', 'overture', 'perlcrawler', 'phpdig', 'pjspide', 'polybot', 'pompos', 'poppi', 'portalb', 'psbot', 'quepasacreep', 'rabot', 'raven', 'rhcs', 'robi', 'robocrawl', 'robozilla', 'roverbot', 'scooter', 'scrubby', 'search.ch', 'search.com.ua', 'searchfeed', 'searchspider', 'searchuk', 'seventwentyfour', 'sidewinder', 'sightquestbot', 'skymob', 'sleek', 'slider_search', 'slurp', 'solbot', 'speedfind', 'speedy', 'spida', 'spider_monkey', 'spiderku', 'stackrambler', 'steeler', 'suchbot', 'suchknecht.at-robot', 'suntek', 'szukacz', 'surferf3', 'surfnomore', 'surveybot', 'suzuran', 'synobot', 'tarantula', 'teomaagent', 'teradex', 't-h-u-n-d-e-r-s-t-o-n-e', 'tigersuche', 'topiclink', 'toutatis', 'tracerlock', 'turnitinbot', 'tutorgig', 'uaportal', 'uasearch.kiev.ua', 'uksearcher', 'ultraseek', 'unitek', 'vagabondo', 'verygoodsearch', 'vivisimo', 'voilabot', 'voyager', 'vscooter', 'w3index', 'w3c_validator', 'wapspider', 'wdg_validator', 'webcrawler', 'webmasterresourcesdirectory', 'webmoose', 'websearchbench', 'webspinne', 'whatuseek', 'whizbanglab', 'winona', 'wire', 'wotbox', 'wscbot', 'www.webwombat.com.au', 'xenu', 'link', 'sleuth', 'xyro', 'yahoobot', 'yahoo!', 'slurp', 'yandex', 'yellopet-spider', 'zao/0', 'zealbot', 'zippy', 'zyborg', 'mediapartners-google' ); $user_agent = strtolower($user_agent); foreach($bots as $bot){ if(strpos($user_agent, $bot) !== false){ return true; } } return false; } ```
PHP/MySQL - an array filter for bots
[ "", "php", "mysql", "arrays", "function", "" ]
I have an array that looks like ``` $numbers = array('first', 'second', 'third'); ``` I want to have a function that will take this array as input and return an array that would look like: ``` array( 'first' => 'first', 'second' => 'second', 'third' => 'third' ) ``` I wonder if it is possible to use `array_walk_recursive` or something similar...
You can use the [`array_combine`](https://www.php.net/manual/en/function.array-combine.php) function, like so: ``` $numbers = array('first', 'second', 'third'); $result = array_combine($numbers, $numbers); ```
This simple approach should work: ``` $new_array = array(); foreach($numbers as $n){ $new_array[$n] = $n; } ``` You can also do something like: `array_combine(array_values($numbers), array_values($numbers))`
Create an assoc array with equal keys and values from a regular array
[ "", "php", "arrays", "" ]
Can anyone suggest a good solution to remove duplicates from nested lists if wanting to evaluate duplicates based on first element of each nested list? The main list looks like this: ``` L = [['14', '65', 76], ['2', '5', 6], ['7', '12', 33], ['14', '22', 46]] ``` If there is another list with the same element at first position `[k][0]` that had already occurred, then I'd like to remove that list and get this result: ``` L = [['14', '65', 76], ['2', '5', 6], ['7', '12', 33]] ``` Can you suggest an algorithm to achieve this goal?
Do you care about preserving order / which duplicate is removed? If not, then: ``` dict((x[0], x) for x in L).values() ``` will do it. If you want to preserve order, and want to keep the first one you find then: ``` def unique_items(L): found = set() for item in L: if item[0] not in found: yield item found.add(item[0]) print list(unique_items(L)) ```
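For completeness: on Python 3.7+ (where plain dicts are guaranteed to preserve insertion order), the two goals above, keeping the first occurrence and preserving order, can also be combined with `dict.setdefault`. A minimal sketch:

```python
L = [['14', '65', 76], ['2', '5', 6], ['7', '12', 33], ['14', '22', 46]]

seen = {}
for item in L:
    # setdefault only stores a value the first time a key appears,
    # so later lists with a duplicate first element are ignored
    seen.setdefault(item[0], item)

result = list(seen.values())
print(result)  # [['14', '65', 76], ['2', '5', 6], ['7', '12', 33]]
```

On older Pythons (current when this question was asked) `collections.OrderedDict` would be needed for the same effect.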
use a dict instead like so: ``` L = {'14': ['65', 76], '2': ['5', 6], '7': ['12', 33]} L['14'] = ['22', 46] ``` if you are receiving the first list from some external source, convert it like so: ``` L = [['14', '65', 76], ['2', '5', 6], ['7', '12', 33], ['14', '22', 46]] L_dict = dict((x[0], x[1:]) for x in L) ```
Removing duplicates from list of lists in Python
[ "", "python", "list", "" ]
I have an application I need to analyze. I have the source code here. I already found a way to log the method calls using this code: Inside Method: **logger.MethodTraceLog();** ``` public void MethodTraceLog() { var msg = "MethodTraceLog: " + this.log.Logger.Name + " ### " + new StackFrame(1).GetMethod().Name ; this.log.Debug(msg); } ``` Is there any way to "inject" the logging call into every method of a class, without having to rewrite the whole source code? As I said, I could, but it's just a lot of work. Some "post function call via reflection" in the constructor, or anything similar? Thanks for any tips... If anybody has some additional links for analyzing the "behaviour" of an application, I would not say no to it :-) Any framework out there? (Except breakpoints and a lot of F-Key hitting)
Actually, there is a concept called Aspect Oriented Programming (AOP) and an implementation in C# called PostSharp (<http://www.postsharp.org/>) that allows you to inject code post-compilation.
I recommend you do two things: one, get [NDepend](http://www.ndepend.com/). Two, get the Visual Studio 2010 beta 1. Run it in a VM if necessary. It will generate sequence diagrams from code, I believe, and has other features to help comprehend a code base at a high level. You don't have to use it for anything other than understanding. The downside is that I hereby pass along to you the moral obligation to report bugs you find, on <http://connect.microsoft.com/visualstudio>. :-)
Log all MethodCalls with minimal code changes (C# 3.0, .NET 3.5)
[ "", "c#", "reflection", "logging", "" ]
Python 3.x renamed the low-level module 'thread' to '\_thread' -- I don't see why in the documentation. Does anyone know?
It looks like the thread module became obsolete in 3.x in favor of the threading module. See [PEP 3108](http://www.python.org/dev/peps/pep-3108/#obsolete).
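To illustrate why the higher-level module was kept: what the low-level `_thread.start_new_thread` API leaves you to manage by hand (waiting for completion, typically by polling a lock) is a couple of calls with `threading.Thread`. A minimal sketch:

```python
import threading

results = []

def worker(n):
    results.append(n * n)

t = threading.Thread(target=worker, args=(4,))
t.start()
t.join()  # _thread offers no join(); you would have to signal a lock instead
print(results)  # [16]
```

`threading` also adds thread names, daemon flags, and synchronization primitives on top, which is why `_thread` is now an underscore-prefixed implementation detail.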
It's been quite a long time since the low-level `thread` module was informally deprecated, with all users heartily encouraged to use the higher-level `threading` module instead; now with the ability to introduce backwards incompatibilities in Python 3, we've made that deprecation rather more than just "informal", that's all!-)
Why was the 'thread' module renamed to '_thread' in Python 3.x?
[ "", "python", "multithreading", "python-3.x", "" ]
I need to **derive an important value given 7 potential inputs**. [Uncle Bob](http://en.wikipedia.org/wiki/Robert_Cecil_Martin) urges me to avoid functions with that many parameters, so I've [extracted the class](http://www.refactoring.com/catalog/extractClass.html). All parameters now being properties, I'm left with a calculation method with no arguments. “That”, I think, “could be a property, but I'm not sure if that's idiomatic C#.” Should I **expose the final result as a property, or as a method with no arguments**? Would the average C# programmer find properties confusing or offensive? What about the Alt.Net crowd? ``` decimal consumption = calculator.GetConsumption(); // obviously derived decimal consumption = calculator.Consumption; // not so obvious ``` If the latter: should I **declare interim results as [private] properties**, also? Thanks to heavy [method extraction](http://www.refactoring.com/catalog/extractMethod.html), I have several interim results. Many of these shouldn't be part of the public API. Some of them could be interesting, though, and my expressions would look cleaner if I could access them as properties: ``` decimal interim2 = this.ImportantInterimValue * otherval; ``` **Happy Experiment Dept.:** While debugging my code in VS2008, I noticed that I kept hovering my mouse over the method calls that compute interim results, expecting a hover-over with their return value. After turning all methods into properties, I found that **exposing interim results as properties greatly assisted debugging.** I'm well pleased with that, but have lingering concerns about readability. The interim value declarations look messier. The expressions, however, are easier to read without the brackets. I no longer feel compelled to start the method name with a verb. To contrast: ``` // Clean method declaration; compulsive verby name; callers need // parenthesis despite lack of any arguments. 
decimal DetermineImportantInterimValue() { return this.DetermineOtherInterimValue() * this.SomeProperty; } // Messier property declaration; clean name; clean access syntax decimal ImportantInterimValue { get { return this.OtherInterimValue * this.SomeProperty; } } ``` I should perhaps explain that I've been coding in Python for a decade. I've been left with a tendency to spend extra time making my code easier to call than to write. I'm not sure the Python community would regard this property-oriented style as acceptably “Pythonic”, however: ``` def determineImportantInterimValue(self): "The usual way of doing it." return self.determineOtherInterimValue() * self.someAttribute importantInterimValue = property( lambda self: self.otherInterimValue * self.someAttribute, doc = "I'm not sure if this is Pythonic...") ```
The important question here seems to be this: *Which one produces more legible, maintainable code for you in the long run?* In my personal opinion, isolating the individual calculations as properties has a couple of distinct advantages over a single monolothic method call: * You can see the calculations as they're performed in the debugger, regardless of the class method you're in. This is a boon to productivity while you're debugging the *class*. * If the calculations are discrete, the properties will execute very quickly, which means (in my opinion), they observe the rules for property design. It's absurd to think that a guideline for design should be treated as a straightjacket. Remember: *There is no silver bullet.* * If the calculations are marked private or internal, they do not add unnecessary complexity to consumers of the class. * If all of the properties are discrete enough, compiler inlining may resolve the performance issues for you. * Finally, if the final method that returns your final calculation is far and away easier to maintain and understand because you can read it, that is an utterly compelling argument in and of itself. One of the best things you can do is think for yourself and dare to challenge the preconceived One Size Fits All notions of our peers and predecessors. There are exceptions to every rule. This case may very well be one of them. **Postscript:** I do not believe that we should abandon standard property design in the vast majority of cases. But there are cases where deviating from The Standard(TM) is called for, because it makes sense to do so.
Personally, I would prefer it if you expose your public API as a method instead of a property. Properties are supposed to be as 'fast' as possible in C#. More details in this discussion: [Properties vs Methods](https://stackoverflow.com/questions/601621/properties-vs-methods) Internally, GetConsumption can use any number of private properties to arrive at the result; the choice is yours.
Is it good form to expose derived values as properties?
[ "", "c#", ".net", "idioms", "" ]
My cell phone provider offers a limited number of free text messages on their website. I frequently use the service although I hate constantly having a tab open in my browser. Does anyone know/point me in the right direction of how I could create a jar file/command line utility so I can fill out the appropriate forms on the site. I've always wanted to code up a project like this in Java, just in case anyone asks why I'm not using something else. Kind Regards, Lar
Use Watij with the Eclipse IDE. When you're done, compile as an .exe or run with a batch file. Here is some sample code I wrote for filling in fields for a Google search, which can be adjusted for the web form you want to control: ``` package goog; import junit.framework.TestCase; import watij.runtime.ie.IE; import static watij.finders.SymbolFactory.*; public class GTestCases extends TestCase { private static watij.runtime.ie.IE activeIE_m; public static IE attachToIE(String url) throws Exception { if (activeIE_m==null) { activeIE_m = new IE(); activeIE_m.start(url); } else { activeIE_m.goTo(url); } activeIE_m.bringToFront(); return (activeIE_m); } public static String getActiveUrl () throws Exception { String currUrl = activeIE_m.url().toString(); return currUrl; } public void testGoogleLogin() throws Exception { IE ie = attachToIE("http://google.com"); if ( ie.containsText("/Sign in/") ) { ie.div(id,"guser").link(0).click(); if ( ie.containsText("Sign in with your") || ie.containsText("Sign in to iGoogle with your")) { ie.textField(name,"Email").set("test@gmail.com"); ie.textField(name,"Passwd").set("test"); if ( ie.checkbox(name,"PersistentCookie").checked() ){ ie.checkbox(name,"PersistentCookie").click(); } ie.button(name,"signIn").click(); } } System.out.println("Login finished."); } public void testGoogleSearch() throws Exception { //IE ie = attachToIE( getActiveUrl() ); IE ie = attachToIE( "http://www.google.com/advanced_search?hl=en" ); ie.div(id,"opt-handle").click(); ie.textField(name,"as_q").set("Watij"); ie.selectList(name,"lr").select("English"); ie.button(value,"Advanced Search").click(); System.out.println("Search finished."); } public void testGoogleResult() throws Exception { IE ie = attachToIE( getActiveUrl() ); ie.link(href,"http://groups.google.com/group/watij").click(); System.out.println("Followed link."); } } ```
Try with [Webdriver](http://code.google.com/p/webdriver/) from Google or [Selenium](http://seleniumhq.org/).
How can I fill out an online form with Java?
[ "", "java", "sms", "" ]
I'm a newbie C++ developer and I'm working on an application which needs to write out a log file every so often, and we've noticed that the log file has been corrupted a few times when running the app. The main scenarios seem to be when the program is shutting down, or crashes, but I'm concerned that this isn't the only time that something may go wrong, as the application was born out of a fairly "quick and dirty" project. It's not critical to have the most absolute up-to-date data saved, so one idea that someone mentioned was to alternately write to two log files, and then if the program crashes at least one will still have proper integrity. But this doesn't smell right to me as I haven't really seen any other application use this method. Are there any "best practices" or standard "patterns" or frameworks to deal with this problem? At the moment I'm thinking of doing something like this - 1. Write data to a temp file 2. Check the data was written correctly with a hash 3. Rename the original file, and put the temp file in place. 4. Delete the original Then if anything fails I can just roll back by deleting the temp file, and the original is untouched.
You must find the reason why the file gets corrupted. If the app crashes unexpectedly, it can't corrupt the file. The only thing that can happen is that the file is truncated (i.e. the last log messages are missing). But the app can't really jump around in the file and modify something elsewhere (unless you call `seek` in the logging code, which would surprise me). My guess is that the app is multithreaded and the logging code is being called from several threads, which can easily lead to the data being corrupted *before* it is written to the log.
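The language-agnostic fix for that failure mode is to funnel every log record through a single lock and flush after each write. Sketched here in Python for brevity — in the original C++ app a `std::mutex` and `std::ofstream::flush` would play the same roles, and the class name is invented for the example:

```python
import threading

class SafeLogger:
    """Serializes writes from many threads through one lock."""

    def __init__(self, path):
        self._lock = threading.Lock()
        self._fh = open(path, "a", encoding="utf-8")

    def log(self, msg):
        with self._lock:           # only one thread writes at a time
            self._fh.write(msg + "\n")
            self._fh.flush()       # lose at most the current record on a crash

def worker(logger, tid, count):
    # Hammer the logger from one thread; records stay whole lines.
    for i in range(count):
        logger.log("thread %d message %d" % (tid, i))
```

With the lock in place, records from different threads can interleave but never tear each other apart mid-line.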
You probably forgot to call `fsync()` every so often, or the data comes in from different threads without proper synchronization among them. Hard to tell without more information (platform, form of corruption you see). A workaround would be to use logfile rollover, i.e. starting a new file every so often.
File corruption detection and error handling
[ "", "c++", "file-io", "" ]
I have a problem reading and using the content from Unicode files. I am working on a Unicode release build, and I am trying to read the content from a Unicode file, but the data has strange characters and I can't seem to find a way to convert the data to ASCII. I'm using `fgets`. I tried `fgetws`, `WideCharToMultiByte`, and a lot of functions which I found in other articles and posts, but nothing worked.
Because you mention WideCharToMultiByte I will assume you are dealing with Windows. > "read the content from an unicode file ... find a way to convert data to ASCII" This might be a problem. If you convert Unicode to ASCII (or another legacy code page) you run the risk of corrupting/losing data. Since you are "working on a unicode release build" you will want to read Unicode **and stay** Unicode. So your final buffer will have to be `wchar_t` (or `WCHAR`, or `CStringW`, same thing). So your file might be utf-16, or utf-8 (utf-32 is quite rare). For utf-16 the endianness might also matter. If there is a BOM that will help a lot. Quick steps: * open the file with `_wopen` or `_wfopen` as binary * read the first bytes to identify the encoding using the BOM * if the encoding is utf-8, read into a byte array and convert to `wchar_t` with `MultiByteToWideChar` and `CP_UTF8` * if the encoding is utf-16be (big endian) read into a `wchar_t` array and `_swab` * if the encoding is utf-16le (little endian) read into a `wchar_t` array and you are done Also (if you use a newer Visual Studio), you might take advantage of an MS extension to `_wfopen`. It can take an encoding as part of the mode (something like `_wfopen(L"newfile.txt", L"rw, ccs=<encoding>");` with the encoding being UTF-8 or UTF-16LE). It can also detect the encoding based on the BOM. Warning: to be cross-platform is problematic, `wchar_t` can be 2 or 4 bytes, the conversion routines are not portable... Useful links: * [BOM (http://unicode.org/faq/utf\_bom.html)](http://unicode.org/faq/utf_bom.html) * [wfopen (http://msdn.microsoft.com/en-us/library/yeby3zcb.aspx)](http://msdn.microsoft.com/en-us/library/yeby3zcb.aspx)
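The BOM check in step two boils down to comparing the first few bytes of the file against known signatures. Here is that lookup sketched in Python just to make the byte patterns concrete — a C++ version would compare the same bytes after reading the file in binary mode:

```python
def detect_bom(first_bytes):
    """Return the encoding implied by a byte-order mark, or None."""
    # Order matters: the UTF-32 LE BOM starts with the UTF-16 LE BOM bytes,
    # so the longer signatures must be checked first.
    signatures = [
        (b"\xef\xbb\xbf", "utf-8"),
        (b"\xff\xfe\x00\x00", "utf-32-le"),
        (b"\x00\x00\xfe\xff", "utf-32-be"),
        (b"\xff\xfe", "utf-16-le"),
        (b"\xfe\xff", "utf-16-be"),
    ]
    for sig, name in signatures:
        if first_bytes.startswith(sig):
            return name
    return None
```

With no BOM present you are left guessing; utf-8 is the usual default.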
We'll need more information to answer the question (for example, are you trying to read the Unicode file into a `char` buffer or a `wchar_t` buffer? What encoding does the file use?), but for now you might want to make sure you're not running into [this issue](http://msdn.microsoft.com/en-us/library/c4cy2b8e(VS.71).aspx) if your file is Unicode and you're using `fgetws` in text mode. > When a Unicode stream-I/O > function operates in text mode, the > source or destination stream is > assumed to be a sequence of multibyte > characters. Therefore, the Unicode > stream-input functions convert > multibyte characters to wide > characters (as if by a call to the > mbtowc function). For the same reason, > the Unicode stream-output functions > convert wide characters to multibyte > characters (as if by a call to the > wctomb function).
Read Unicode Files
[ "", "c++", "file", "unicode", "text", "" ]
I have a list of events, these events are each of a specific type, and start in a specific month. I have a checkbox group for types and one for months. What I'm trying to do is use the checkboxes to filter the list. I've got it working with one group, but can't seem to get it working with two. Basically I'm trying to set a class when I hide the list item, so I know which group hid it, but it seems to get confused. The class names are correct but sometimes items do not get shown again. If anyone can see what I'm doing wrong, or can think of a better solution, that would be great! Thanks! Darren. My JavaScript: ``` $("#options input.type_check").change(function() { if($(this).is(':checked')) { $("#events li."+$(this).attr('id')).removeClass('type_hidden'); if(!$("#events li."+$(this).attr('id')).hasClass('start_hidden')) { $("#events li."+$(this).attr('id')).slideDown(); } } else { $("#events li."+$(this).attr('id')).addClass('type_hidden'); $("#events li."+$(this).attr('id')).slideUp(); } return false; }); $("#options input.start_check").change(function() { if($(this).is(':checked')) { $("#events li."+$(this).attr('id')).removeClass('start_hidden'); if(!$("#events li."+$(this).attr('id')).hasClass('type_hidden')) { $("#events li."+$(this).attr('id')).slideDown(); } } else { $("#events li."+$(this).attr('id')).addClass('start_hidden'); $("#events li."+$(this).attr('id')).slideUp(); } return false; }); ``` My HTML: ``` <p>Types:</p> <div><input name="type[]" type="checkbox" id="type_0" value="0" class="type_check" checked="checked" /><label for="type_0">Type 0</label></div> <div><input name="type[]" type="checkbox" id="type_1" value="1" class="type_check" checked="checked" /><label for="type_1">Type 1</label></div> <div><input name="type[]" type="checkbox" id="type_2" value="2" class="type_check" checked="checked" /><label for="type_2">Type 2</label></div> <div><input name="type[]" type="checkbox" id="type_3" value="3" class="type_check" checked="checked" /><label for="type_3">Type 3</label></div> <div><input name="type[]" type="checkbox" id="type_4" value="4" class="type_check" checked="checked" /><label for="type_4">Type 4</label></div> <p>Starts:</p> <div><input name="start[]" type="checkbox" id="start_072009" value="072009" class="start_check" checked="checked" /><label for="start_072009">July 2009</label></div> <div><input name="start[]" type="checkbox" id="start_082009" value="082009" class="start_check" checked="checked" /><label for="start_082009">August 2009</label></div> <div><input name="start[]" type="checkbox" id="start_092009" value="092009" class="start_check" checked="checked" /><label for="start_092009">September 2009</label></div> <div><input name="start[]" type="checkbox" id="start_102009" value="102009" class="start_check" checked="checked" /><label for="start_102009">October 2009</label></div> <p>Events</p> <ul id="events"> <li id="1768" class="type_0 start_072009">Event 1</li> <li id="2190" class="type_1 start_072009">Event 2</li> <li id="2191" class="type_2 start_072009">Event 3</li> <li id="1864" class="type_2 start_082009">Event 4</li> <li id="1679" class="type_3 start_082009">Event 5</li> <li id="2042" class="type_0 start_092009">Event 6</li> <li id="1717" class="type_4 start_092009">Event 7</li> <li id="1917" class="type_4 start_092009">Event 8</li> <li id="1767" class="type_4 start_092009">Event 9</li> <li id="1866" class="type_2 start_102009">Event 10</li> </ul> ```
The ID attributes on your LIs are invalid - they can't be just numbers. Javascript will probably choke when trying to make assignments on them. See the standard: <http://www.w3.org/TR/REC-html40/types.html#type-name> > **ID** and **NAME** tokens must begin with a > letter ([A-Za-z]) and may be followed > by any number of letters, digits > ([0-9]), hyphens ("-"), underscores > ("\_"), colons (":"), and periods > (".").
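That grammar is easy to enforce mechanically. A quick validity check matching the quoted rule — Python regex for illustration; in the page itself you would simply prefix the numeric IDs, e.g. `event_1768`:

```python
import re

# HTML 4 ID/NAME tokens: a letter, then letters, digits, '-', '_', ':', '.'
_ID_TOKEN = re.compile(r"^[A-Za-z][A-Za-z0-9\-_:.]*$")

def is_valid_html4_id(value):
    """True if `value` is a legal HTML 4 ID/NAME token."""
    return bool(_ID_TOKEN.match(value))
```

Running the question's `<li>` IDs through this check flags every one of them, while the prefixed forms pass.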
OK, here's the fix. Change: ``` if(!$("#events li."+$(this).attr('id')).hasClass('start_hidden')) { $("#events li."+$(this).attr('id')).slideDown(); } ``` to: ``` $("#events li").not(".type_hidden, .start_hidden").slideDown(); ``` in both places.
jQuery multiple checkbox filters
[ "", "javascript", "jquery", "checkbox", "filter", "" ]
Should you do validation on the server side, or is it OK to just do the validation on the client side? @TheTXI I am happy you're amazed; I just did not want to leave anything out that could potentially change someone's answer and give me false information. --- It seems a lot of people touched on what I was going after, but the part about Ajax should have been kept in my question as this was the most important part. However, reading the posts I now see that a bad person could just easily load up Firebug and change my button to a submit button and do a post. Or some other way.
Browser/client-side validation is a convenience. You cannot rely on it. You absolutely need to duplicate any client-level validation with server-side validation.
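One way to keep the duplicated checks from drifting apart is to express each rule as data that both the server and a client-side generator consume. A toy sketch of the server-side half — Python for illustration, and the field names and patterns are invented for the example:

```python
import re

# Declarative rules; the same table could drive generated client-side checks,
# so the two layers never disagree about what "valid" means.
RULES = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "zip": re.compile(r"^\d{5}$"),
}

def validate(form):
    """Return a dict of field -> error message for every rule that fails."""
    errors = {}
    for field, pattern in RULES.items():
        value = form.get(field, "")
        if not pattern.match(value):
            errors[field] = "invalid or missing"
    return errors
```

The server runs `validate()` on every submission regardless of what the browser claimed; the client-side copy only exists to give the user faster feedback.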
Well, fine, all YOUR code is correct. What happens when a hacker replaces your javascript with one of their liking, or just plain submits POSTs and GETs as if it were your code? Validating at the client is a usability issue. Validating at the point of USAGE is a security issue. That last point is important, because if you do not validate at the point of usage, you are making your code highly coupled. If you change a module, it breaks things elsewhere because you validated at the wrong point. For instance, you validate data against SQL injection before storing it in a SQL database -- the library will do that for you if you choose a good one. You validate data against XSS when you display it as HTML. But if you expose the data as XML, RSS or JSON, then the validation is different -- if you validated it just at input, you wouldn't prevent exploits for the other formats, AND your input routine would be tied to the output formats you choose.
Should you do validation on the server side?
[ "", "javascript", "ajax", "validation", "" ]
I want to batch multiple select statements to reduce round trips to the database. The code looks something like the pseudo code below. It works perfectly on SQL Server, but does not work on Oracle - Oracle complains about the sql syntax. I have had a look around and the only examples I can find of returning multiple result sets from Oracle are using Stored Procedures. Is it possible to do this in Oracle without using Stored Procedures? I am using the MS Oracle data provider, but could use the ODP.Net one if needed. ``` var sql = @" select * from table1 select * from table2 select * from table3"; DbCommand cmd = GetCommand(sql); using(var reader = cmd.ExecuteReader()) { dt1.Load(reader); reader.NextResult(); dt2.Load(reader); reader.NextResult(); dt3.Load(reader); } ```
You should write an anonymous pl/sql block that returns 3 ref cursors. **edit1:** Here it is done in an anonymous pl/sql block with one cursor. It should work with three too. Oracle ref cursors don't lock data and they are the fastest way to return a result set from a pl/sql procedure or an anonymous pl/sql block. <http://www.oracle.com/technetwork/issue-archive/2006/06-jan/o16odpnet-087852.html>
An example in C# with multiple cursors and an input parameter: ``` string ConnectionString = "connectionString"; OracleConnection conn = new OracleConnection(ConnectionString); StringBuilder sql = new StringBuilder(); sql.Append("begin "); sql.Append("open :1 for select * from table_1 where id = :id; "); sql.Append("open :2 for select * from table_2; "); sql.Append("open :3 for select * from table_3; "); sql.Append("end;"); OracleCommand comm = new OracleCommand(sql.ToString(), conn); comm.Parameters.Add("p_cursor_1", OracleDbType.RefCursor, DBNull.Value, ParameterDirection.Output); comm.Parameters.Add("p_id", OracleDbType.Int32, Id, ParameterDirection.Input); comm.Parameters.Add("p_cursor_2", OracleDbType.RefCursor, DBNull.Value, ParameterDirection.Output); comm.Parameters.Add("p_cursor_3", OracleDbType.RefCursor, DBNull.Value, ParameterDirection.Output); conn.Open(); OracleDataReader dr = comm.ExecuteReader(); ```
Batch multiple select statements when calling Oracle from ADO.NET
[ "", "c#", "oracle", "ado.net", "resultset", "" ]
In this C# code snippet, `DateTime.Now.Month.ToString()` returns `7` as output. I would like to get `07` as a return value. What can I do to add the leading zero when the month has only 1 digit?
``` DateTime.Now.Month.ToString("d2") ```
Either format the integer with two digits as suggested by Mehrdad, or format the `DateTime` itself to give you a two-digit month: ``` DateTime.Now.ToString("MM") ```
C#:DateTime.Now Month output format
[ "", "c#", "" ]
I'm trying the Hibernate tutorials from their main site and wanted to change things a bit to learn how a many-to-many relationship works with the java.util.Set interface. My mappings are correct and I can insert into and select from the tables EVENT, PERSON and the mapping table PERSON\_EVENT. Now I've inserted some dummy values in the tables and added their mappings in the mapping table. I wanted to display all the events of every person who is registered to one or more events, with this code: ``` public void ShowPersonEvents() { Person aperson; Event anEvent; Session session = HibernateUtil.getSessionFactory().getCurrentSession(); session.beginTransaction(); List<Person> persons = session.createQuery("from Person").list(); for(int i =0; i< persons.size(); i++) { aperson = (Person) persons.get(i); Set a = aperson.getEvents(); // String[] events = (String[])a.toArray(new String[a.size()]); // for (String e : events) // { // System.out.println(aperson.getLastname()+" is registerd to the" + e); // // } Iterator it = a.iterator(); while(it.hasNext()) { System.out.println(aperson.getLastname()+" is registerd to the" +(String) it.next().toString()); } // System.out.println(); } session.getTransaction().commit(); } } ``` When I run it, it shows the correct number of rows, but instead of showing, for example, rows like: > Joseph is registered to the opensource event it's rather showing something like: > Joseph is registered to the domain.Event@18a8ce2 This is the format mypackagename.myclassname@something. When I comment out the iterator part and uncomment the cast to a String array, I get an ArrayStoreException. I'm a bit lost. I can't see what is wrong here. Please can you have a look and tell me what I did wrong? Thanks for reading.
This doesn't really have anything to do with Hibernate. You're calling toString() on the Event object: ``` (String) it.next().toString() ``` You haven't overridden the Event.toString() method, so you're getting the default implementation. Instead try something like: ``` while(it.hasNext()) { Event event = (Event) it.next(); System.out.println(aperson.getLastname()+" is registerd to the" + event.getName()); } ``` You can also improve your Hibernate HQL query by pre-fetching the events. As it stands now they will be lazy loaded so you'll get an extra query for each person (assuming you haven't set the fetching strategy in the mapping file). Try something like: ``` List<Person> persons = session.createQuery("from Person p left join fetch p.events").list(); ```
The reason you are seeing `domain.Event@18a8ce2` is that this is the output from calling `Object.toString()` (i.e. the default `toString()` implementation). This implementation returns a String in the format *ClassName@hexHashCode*. If you wish to see the event's internal state you should override the toString() method in your Event class definition: ``` public String toString() { return String.format("Event{ID: %d, Title: %s, Date: %s}", id, title, date); } ``` The reason for the `ArrayStoreException` is that you're trying to create a `String[]` but are passing in objects that are not `String`s (they're `Event`s). From the `ArrayStoreException` Javadoc: "Thrown to indicate that an attempt has been made to store the wrong type of object into an array of objects." So you need to create your array by calling `toArray(new Event[a.size()])`.
hibernate and Java.util.set problem
[ "", "java", "hibernate", "jakarta-ee", "" ]
I would like to build an Android application that, via an OCR library, scans a picture and extracts text from it. What Java library should I use?
Don't know how good it is (it definitely needs to be trained first), but there is [Ron Cemer's Java OCR library](http://www.roncemer.com/software-development/java-ocr).
If you are looking for a very extensible option or have a specific problem domain you could consider rolling your own using the [Java Object Oriented Neural Engine](http://www.jooneworld.com/). I used it successfully in a personal project to identify the letter from an image such as [this](http://www.gamerbytes.com/xnawordsoup.jpg), you can find all the source for the OCR component of my application on github, [here](http://github.com/dukedave/wordsolver/tree/2e03f0dcf7c9fcfef60d6df0dfd1036c3660a30d/src/ocr).
What kind of OCR Java library should I use in Android?
[ "", "java", "android", "ocr", "" ]
When I define columns in MySQL I generally use int, varchar(255), text and the occasional enum for boolean. What are the benefits of accurately specifying column types rather than just leaving them at maximum? For example a password field encoded as MD5 will never exceed 32 characters, so is there a tangible performance gain from using varchar(32) over varchar(255)?
Per [the manual](http://dev.mysql.com/doc/refman/5.1/en/storage-requirements.html), a `VARCHAR(N)` of a given actual length takes the same amount of space for any N up to 255, so using 32 saves no space. But there's an advantage of clarity and readability of your schema in using a type that best represents what the actual data must be like.
I would agree with what Clyde says, but your example of a password is not a particularly good one. Since an MD5 sum will ALWAYS be 32 characters, you could use a CHAR(32) instead of a VARCHAR(32) which would be faster/more efficient in many cases.
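The fixed width is easy to confirm: an MD5 digest is always 128 bits, i.e. exactly 32 hex characters regardless of input length. Python here just to demonstrate the point, since the claim is language-independent:

```python
import hashlib

def md5_hex(data):
    """Hex-encoded MD5 digest -- always 32 characters, whatever the input."""
    return hashlib.md5(data).hexdigest()
```

Because the length never varies, `CHAR(32)` also avoids the per-value length byte that `VARCHAR` stores.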
Should you accurately specify column types in MySQL?
[ "", "php", "mysql", "performance", "optimization", "" ]
We have a very large project mostly written in C# that has some small, but important, components written in C++. We target the RTM of .NET 2.0 as the minimum required version. So far, in order to meet this requirement we've made sure to have only the RTM of .NET 2.0 on our build box so that the C++ pieces link against that version. **Update:** The C++ assembly that is causing the issue is a mixed-mode C++ assembly being loaded into a managed process. Unfortunately, when the Conficker worm was set to do something on April 1st, our corporate IT made a huge push to get everything patched and up to date, and as a result everything up through 3.5 SP1 got installed on the build box. We've tried uninstalling everything, which has happened before, but now we are unable to meet our minimum requirements as anything built on that particular box requires .NET 2.0 SP1. Since the box seems to be hosed in that we can't just uninstall the offending versions, is there any way to build the assemblies and explicitly tell them to use the RTM of .NET 2.0 (which is v2.0.50727.42)? I've seen pages that refer to using a manifest, but I can't figure out how to actually implement a proper manifest and get it into the assemblies. My expertise is in the managed world, so I'm at a bit of a loss on this. Can anyone explain how I can make these assemblies target the .NET 2.0 RTM SxS assemblies? Thanks!
While I'm pretty sure that Christopher's answer and code sample (thank you, Christopher!) is part of a more elegant solution, we were under the gun to get this out the door and found a very similar, but different, solution. The first step is to create a manifest for the assembly: ``` <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'> <dependency> <dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.DebugCRT' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> </assembly> ``` Next you have to set the 'Generate Manifest' option to 'No' under Configuration Properties -> Linker -> Manifest File, and set the 'Embed Manifest' option to 'No' under Configuration Properties -> Manifest Tool -> Input and Output. Finally, to get your new manifest into the assembly add the following command to the project's post-build step: ``` mt.exe /manifest "$(ProjectDir)cppassembly.dll.manifest" /outputresource:"$(TargetDir)\cppassembly.dll";#2 -out:"$(TargetDir)\cppassembly.dll.manifest" ``` Once built we can open the dll in Visual Studio to view the manifest under RT\_MANIFEST and confirm that it has our manifest! When I put Christopher's code in the stdafx.h it ended up adding it as an *additional* dependency...the manifest was still looking for v8.0.50727.762. 
The manifest it generated looked like this: ``` <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC80.DebugCRT" version="8.0.50608.0" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC80.DebugCRT" version="8.0.50727.762" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC80.CRT" version="8.0.50727.762" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> </assembly> ``` I could not track down another switch that would remove or clear existing dependencies. I like Christopher's approach better than a post-build step, but for now this works. If anyone has any additional input on how to clear out any existing dependencies that'd be great. Thanks!
Yes. In your project properties, there is a page that indicates the runtime. There is a drop down that lists all of the runtimes available. Choose the one that is appropriate for you. (For VS 2008: Right click on the project -> properties, Compile tab, Advanced Compiler Settings button -> Target framework) We do this right now. We would like to move to VS 2008, but we are doing it incrementally. So right now we have a VS 2008 solution, but all the projects still target .Net 2.0. Thus, when we compile and deploy, we don't need the .Net 3.5 stuff installed on our test boxes. **UPDATE:** To force a native program to link to specific versions of .dlls, you probably want to use something like this: ``` #pragma message ("Explicit link to generate a manifest entry for MFC.") #if defined (_DEBUG) #pragma comment(linker, "\"/manifestdependency:type='win32' name='Microsoft.VC80.DebugMFC' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b'\"") #else #pragma comment(linker, "\"/manifestdependency:type='win32' name='Microsoft.VC80.MFC' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b'\"") #endif ``` Except, that instead of MFC, you should find the correct values for the .Net .DLLs. It is reasonable to believe that you cannot have .Net 2.0 SP1 and .Net 2.0 on the same box. So getting this to work on that box is likely going to be really, really painful. It is probably better to spin up a new build VM that you can install the old, unpatched .Net framework on (if you can even get a hold of it anymore.) Otherwise you will need to copy all the build-time files over to your current box, and then make adjustments to the include and library paths based on your build type. Probably this is a much bigger headache than it's worth.
How can I target a specific version of the C++ runtime?
[ "", ".net", "c++", "side-by-side", "" ]
I am looking for a way that I can query a database with multiple SQL Queries and then, once I have the results of the queries (in XML), merge the separate XML together into one XML document, using reverse paths. My reason for doing this is related to the concept of Parallel Details (same data, with different manipulations applied). For example, 3 Queries: ``` SELECT * FROM Customer; SELECT * FROM Purchase_order; SELECT * FROM Line_Item; ``` Query 1 Results: ``` <customer> <cust_id>2</cust_id> <fname>John</fname> <lname>Doe</lname> </customer> ``` Query 2 Results: ``` <purchase_order> <order_id>2</order_id> <cust_id>2</cust_id> <shipped>7/7/2009</shipped> </purchase_order> ``` Query 3 Results: ``` <line_item> <line_id>2</line_id> <order_id>2</order_id> <quantity>7</quantity> </line_item> ``` Desired Output: ``` <collection> <customer> <cust_id>2</cust_id> <fname>John</fname> <lname>Doe</lname> </customer> <purchase_order> <order_id>2</order_id> <cust_id>2</cust_id> <shipped>7/7/2009</shipped> </purchase_order> <line_item> <line_id>2</line_id> <order_id>2</order_id> <quantity>7</quantity> </line_item> </collection> ``` This looks like it would be easy, but my SQL queries can return a lot of customers, and a lot of purchase orders and line items, and I need to be able to match them all up. I cannot use one SQL Query with Joins that will collect all of this information at once. Has anyone ever seen anything like this done? Any ideas? Thanks.
This will give you EXACTLY what you are looking for: ``` SELECT CONVERT(XML, (SELECT cust_id, fname, lname FROM Customer FOR XML PATH('customer'))), CONVERT(XML, (SELECT order_id, cust_id, shipped FROM Purchase_order FOR XML PATH('purchase_order'))), CONVERT(XML, (SELECT line_id, order_id, quantity FROM Line_Item FOR XML PATH('line_item'))) FOR XML PATH('collection') ```
XML formatting aside, it looks like what you want is to query the line item table and join the purchase order and customer tables on that. ``` SELECT * FROM Line_Item li join Purchase_Order po on li.order_id = po.order_id join Customer c on c.cust_id = po.cust_id ``` This will give you a result set of line items that has the purchase order and customer information inline. If you're using a server feature to render it to XML, you'll probably get something along the lines of ``` <line_item> <line_id>2</line_id> <quantity>7</quantity> <order_id>2</order_id> <shipped>7/7/2009</shipped> <fname>John</fname> <lname>Doe</lname> </line_item> ```
Combining XML with Reverse Paths
[ "", "sql", "xml", "postgresql", "" ]
What is the best way to determine if a form on an ASPX page is valid in JavaScript? I am trying to check the validation of a user control that was opened using the JavaScript `window.showModalDialog()` and checking the 'Page.IsValid' property on the server side does not work. I am using ASP.NET validation controls for page validation.
If I have a page that is using a bunch of ASP.NET validation controls I will use code similar to the following to validate the page. Make the call on an input submit. Hopefully this code sample will get you started! ``` <input type="submit" value="Submit" onclick="ValidatePage();" /> <script type="text/javascript"> function ValidatePage() { if (typeof (Page_ClientValidate) == 'function') { Page_ClientValidate(); } if (Page_IsValid) { // do something alert('Page is valid!'); } else { // do something else alert('Page is not valid!'); } } </script> ```
You are checking for `Page.IsValid` where you should be checking for `Page_IsValid` (it's a variable exposed by the .NET validators) :)
Determine if page is valid in JavaScript - ASP.NET
[ "", "asp.net", "javascript", "validation", "" ]
I have a `href` link and I would like it to be clicked when the page is loaded.
``` $(document).ready(function() { $('#someLinkId').click(); }); ``` Or ``` $(document).ready(function() { $('#someLinkId').trigger("click"); }); ```
``` $("#whateverid").trigger("click"); ``` Where "whateverid" is the ID of the anchor tag, or whatever other selector you want to use.
How to make onclick automatically through onload function
[ "", "javascript", "jquery", "html", "onclick", "" ]
Does performance of a database (SQL Server 2005) decrease if I shrink it? What exactly happen to the mdf and ldf files when shrink is applied (Internals???)
Shrinking a database will consume resources to do the shrink itself. Where it runs into issues is when the DB needs to grow again: assuming you have auto-grow set, it will consume more resources to auto-grow. Constant auto-shrink (or shrink as part of a maintenance plan) will cause physical disk fragmentation. If you have auto-grow enabled and it is set to the default of 1MB, then constant auto-grows will consume a lot of resources.

It is best practice to size your database to a suitable size up front: expected initial size plus expected growth over a period (month, year, whatever period you see fit). You should not use auto-shrink, or use shrink as part of a maintenance plan. You should also set your auto-grow in MB, not as a % of the database (when auto-growing it needs to calculate the % first, then grow the database), and set the auto-grow to a reasonable amount to ensure that it isn't growing every 10 minutes; try to aim for one or two growths a day.

You should also look at setting up instant file initialization for your SQL Server.

Good luck, Matt
It's important to understand that when you shrink a database, the pages are re-arranged. Pages on the end of the data file are moved to open space in the beginning of the file, with no regard to fragmentation. A clustered index determines the physical order of data in a table. So, imagine that you just created a clustered index, which would have re-ordered the data in that table, physically. Well, then when you execute a shrink command, the data that had just been neatly ordered during the creation of the clustered index will now potentially be out of order, which will affect SQL's ability to make efficient use of it. So, any time you do a shrink operation you have the potential of impacting performance for all subsequent queries. However, if you re-do your clustered indexes / primary keys after the shrink, you are helping to defragment much of the fragmentation that you may have introduced during the shrink operation. If performance is critical but you are also forced to do shrinks regularly, then in an ideal world you'd want to re-do your indexes after each shrink operation.
Does performance of a database (SQL Server 2005) decrease if I shrink it?
[ "", "sql", "sql-server", "sql-server-2005", "t-sql", "shrink", "" ]
Is there a simple layout manager I can use in a `JPanel` to create something akin to a bar chart? `FlowLayout` almost meets this need. The added-component orientation needs to be left to right (the default for `FlowLayout`), but the components need to "rest" on the bottom of the panel with excess space at the top (not available in `FlowLayout`). Also, the components will all be the same height and width. Thanks.
A BoxLayout will do the trick as demonstrated in this [posting](http://forums.sun.com/thread.jspa?threadID=5395200)
If you are going to do something like a bar chart, you might want to consider not using `Component`s at all. Just have a single `JComponent` that overrides (IIRC) `paintComponent`. It'll be easier to do the calculations in a manner appropriate to a bar chart rather than trying to use an inappropriate layout manager abstraction. FWIW, I default to `GridBagLayout`, even if a simpler layout manager will do, on this basis that the code can be more consistent.
Java - Layout Manager Selection
[ "", "java", "layout", "" ]
From a security perspective, I can see simply doing an 'eval' on incoming JSON data as a critical mistake. If you got data like the below, you'd have some problems.

```
{ someData:((function() { alert("i'm in ur code hackin' ur page"); })()) }
```

I wondered what most popular JavaScript libraries do. Is it a manual parse or simply an eval?

**[Edit]** I'm not asking if *I* should eval/parse - I was asking what methods some of the popular JavaScript libraries use (jQuery, Prototype, etc...)
Here's what the [official JavaScript parser](https://github.com/douglascrockford/JSON-js/blob/master/json2.js) does: ``` // In the second stage, we run the text against regular expressions that look // for non-JSON patterns. We are especially concerned with '()' and 'new' // because they can cause invocation, and '=' because it can cause mutation. // But just to be safe, we want to reject all unexpected forms. // We split the second stage into 4 regexp operations in order to work around // crippling inefficiencies in IE's and Safari's regexp engines. First we // replace the JSON backslash pairs with '@' (a non-JSON character). Second, we // replace all simple value tokens with ']' characters. Third, we delete all // open brackets that follow a colon or comma or that begin the text. Finally, // we look to see that the remaining characters are only whitespace or ']' or // ',' or ':' or '{' or '}'. If that is so, then the text is safe for eval. if (/^[\],:{}\s]*$/. test(text.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, '@'). replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']'). replace(/(?:^|:|,)(?:\s*\[)+/g, ''))) { // In the third stage we use the eval function to compile the text into a // JavaScript structure. The '{' operator is subject to a syntactic ambiguity // in JavaScript: it can begin a block or an object literal. We wrap the text // in parens to eliminate the ambiguity. j = eval('(' + text + ')'); ... ``` With the exception of the built-in [JSON parsing support](http://caniuse.com/json) that is in modern browsers, this is what all (library-based) secure JSON parsers do (ie, a regex test before `eval`). **Secure libraries** (in addition to the official json2 implementation) Prototype's [`isJSON`](http://www.prototypejs.org/assets/2009/6/16/prototype.js) function. 
Mootools' [`JSON.decode`](http://mootools.net/docs/core/Utilities/JSON#JSON:decode) function (again, via a [regex test before `eval`](http://github.com/mootools/mootools-core/blob/master/Source/Utilities/JSON.js#L42)). **Unsecure libraries**: dojo's [`fromJson`](http://api.dojotoolkit.org/jsdoc/1.3.2/dojo.fromJson) does *not* provide secure `eval`ing. [Here is their entire implementation (minus comments)](http://svn.dojotoolkit.org/src/tags/release-1.3.2/dojo/_base/json.js): ``` dojo.fromJson = function(json) { return eval("(" + json + ")"); } ``` jQuery does not provide secure JSON `eval`'ing, but see the official plugin's [`secureEvalJSON`](http://code.google.com/p/jquery-json/source/browse/trunk/jquery.json.js) function (line 143).
You should absolutely parse it! JSON is just a subset of JavaScript. But `eval` would evaluate any JavaScript code and not that specific subset like a JSON parser would.
JSON Data - Parsed Or 'Eval'ed
[ "", "javascript", "security", "json", "" ]
I have a static timer class which will be called by ANY webpage to calculate how long each page has taken to be constructed. My question is: are static classes thread safe? In my example, will concurrent users cause a problem with my start and stop times, e.g. different threads overwriting my start and stop values?

```
public static class Timer
{
    private static DateTime _startTime;
    private static DateTime _stopTime;

    /// <summary>
    /// Gets the amount of time taken in milliseconds
    /// </summary>
    /// <returns></returns>
    public static decimal Duration()
    {
        TimeSpan duration = _stopTime - _startTime;
        return duration.Milliseconds;
    }

    public static void Start()
    {
        _startTime = DateTime.Now;
    }

    public static void Stop()
    {
        _stopTime = DateTime.Now;
    }
}
```

Should this class be a non-static class? (This class will be called from the asp.net masterpage.)
Static methods aren't *inherently* thread-safe. They're treated no differently by the CLR than instance methods. The difference is that one should generally try to *make* them thread-safe. (I can't think of any .NET BCL static methods which aren't thread-safe.) Instance methods are often not thread-safe because the typical pattern is to create an object and use it repeatedly from one thread, and if it *does* have to be used from multiple threads, the co-ordination involved includes making sure that the object is used safely. In very many cases that's more appropriate to do in the co-ordinating code than in the object itself. (Usually you want to make whole sequences of operations effectively atomic - something which can't be done within the object.) Your `Timer` class is most definitely not thread-safe: two threads can stomp on each other's data with ease, and there's nothing to stop a thread from using "stale" data when calculating the duration. Use the [`Stopwatch`](http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx) class instead - that's what it's there for. Admittedly if you want to use one instance from multiple threads you'll need to take the normal steps to ensure safety, but you'll be in a much better position in general. Admittedly `Stopwatch` is far from perfect too - see [this question](https://stackoverflow.com/a/7919483/216440) and the comment below for more details - but it is at least what the type is designed for. (Who knows, it may be fixed some time...)
There is a good discussion [here](http://www.velocityreviews.com/forums/t122276-static-functions-and-thread-safety-how-does-it-work.html) that focuses more on the mechanisms and reasons for why your example is not thread-safe. To summarize, first, your static variables will be shared. If you could make them local variables, even though they are local to a static method, they would still get their own stack frame, and thus, be thread-safe. Also, if you otherwise protect your static variables (ie, locks and/or other multi-threaded programming techniques mentioned by others in this thread) you could also make your sample static class thread-safe. Second, because your example does not take in external variable instances which you modify or whose state might get acted upon by another thread, your example is thread-safe in that regard as well.
Are static methods thread safe
[ "", "c#", "asp.net", "static", "" ]
As the title says, I'm using Linux, and the folder could contain more than one file. I want to get the ones whose names match `*tmp*.log` (`*` means anything, of course!), just like I would on the Linux command line.
Use the [`glob`](http://docs.python.org/library/glob.html) module. ``` >>> import glob >>> glob.glob('./[0-9].*') ['./1.gif', './2.txt'] >>> glob.glob('*.gif') ['1.gif', 'card.gif'] >>> glob.glob('?.gif') ['1.gif'] ```
The glob answer is easier, but for the sake of completeness: you could also use os.listdir and a regular expression check:

```
import os
import re

dirEntries = os.listdir("path/to/dir")
for entry in dirEntries:
    if re.match(r".*tmp.*\.log", entry):
        print entry
```
Search a folder for files like "/*tmp*.log" in Python
[ "", "python", "linux", "file", "directory", "" ]
I'm making my first steps with unit testing and am unsure about two paradigms which seem to contradict each other when it comes to unit tests:

* Every single unit test should be self-contained and not depend on others.
* Don't repeat yourself.

To be more concrete, I've got an importer which I want to test. The importer has an "Import" function, taking raw data (e.g. out of a CSV) and returning an object of a certain kind which will also be stored into a database through ORM (LinqToSQL in this case).

Now I want to test several things, e.g. that the returned object is not null, that its mandatory fields are not null or empty, and that its attributes have the correct values. I wrote 3 unit tests for this. Should each test import and get the job, or does this belong in general setup logic? On the other hand, [believing this blog post](http://jamesnewkirk.typepad.com/posts/2007/09/why-you-should-.html), the latter would be a bad idea as far as my understanding goes. Also, wouldn't this violate the self-containment?
My class looks like this: ``` [TestFixture] public class ImportJob { private TransactionScope scope; private CsvImporter csvImporter; private readonly string[] row = { "" }; public ImportJob() { CsvReader reader = new CsvReader(new StreamReader( @"C:\SomePath\unit_test.csv", Encoding.Default), false, ';'); reader.MissingFieldAction = MissingFieldAction.ReplaceByEmpty; int fieldCount = reader.FieldCount; row = new string[fieldCount]; reader.ReadNextRecord(); reader.CopyCurrentRecordTo(row); } [SetUp] public void SetUp() { scope = new TransactionScope(); csvImporter = new CsvImporter(); } [TearDown] public void TearDown() { scope.Dispose(); } [Test] public void ImportJob_IsNotNull() { Job j = csvImporter.ImportJob(row); Assert.IsNotNull(j); } [Test] public void ImportJob_MandatoryFields_AreNotNull() { Job j = csvImporter.ImportJob(row); Assert.IsNotNull(j.Customer); Assert.IsNotNull(j.DateCreated); Assert.IsNotNull(j.OrderNo); } [Test] public void ImportJob_MandatoryFields_AreValid() { Job j = csvImporter.ImportJob(row); Customer c = csvImporter.GetCustomer("01-01234567"); Assert.AreEqual(j.Customer, c); Assert.That(j.DateCreated.Date == DateTime.Now.Date); Assert.That(j.OrderNo == row[(int)Csv.RechNmrPruef]); } // etc. ... } ``` As can be seen, I'm doing the line `Job j = csvImporter.ImportJob(row);` in every unit test, as they should be self-contained. But this does violate the DRY principle and may possibly cause performance issues some day. What's the best practice in this case?
That depends on how much of your scenario is common to your tests. In the blog post you referred to, the main complaint was that the SetUp method did different setup for the three tests, and that can't be considered best practice. In your case you've got the same setup for each test/scenario, so you should use a shared SetUp instead of duplicating the code in each test. If you later find that there are tests that do not share this setup, or that require a different setup shared between a set of tests, then refactor those tests into a new test case class.

You could also have shared setup methods that aren't marked with [SetUp] but get called at the beginning of each test that needs them:

```
[Test]
public void SomeTest() {
   setupSomeSharedState();
   ...
}
```

A way of finding the right mix could be to start off without a SetUp method and, when you find that you're duplicating code for test setup, refactor it into a shared method.
Your test classes are no different from usual classes, and should be treated as such: all good practices (DRY, code reuse, etc.) should apply there as well.
Unit Testing: Self-contained tests vs code duplication (DRY)
[ "", "c#", "unit-testing", "nunit", "" ]
Say I have this code - ``` public interface ParentInterface1 { public List<? extends ChildInterface1> getChildren(); public void setChildren(List<? extends ChildInterface1> children); } public interface ParentInterface2 { public List<? extends ChildInterface2> getChildren(); public void setChildren(List<? extends ChildInterface2> children); } public interface ChildInterface1 { public String getField(); public void setField(String field); } public interface ChildInterface2 { public String getField(); public void setField(String field); } public class LParentImpl implements ParentInterface1, ParentInterface2 { private List<ChildImpl> list; public List<ChildImpl> getChildren() { return list; } public void setChildren(List<... wants to accept ChildImpl, which implements ChildInterface1 & ChildInterface2> children) { throw new UnsupportedOperationException("Not supported yet."); } } public class ChildImpl implements ChildInterface1, ChildInterface2 { private String field; public String getField() { return field; } public void setField(String field) { this.field = field; } } ``` Is there a way to make the setChildren() in the ParentImpl class work, without removing the Generic typing completely from the interfaces and implementation? I'd like to do something like - ``` public void setChildren(List<? extends ChildInterface1 & ChildInterface2> children) ``` This sort of interface/implementation structure is valid for non Generic types, but it seems some aspect of the run-time erasure of Generics might make this impossible? Or am I missing some magic syntax? Edit: Using the List<? extends ChildInterface1 & ChildInterface2> yields this compile error - ``` ...\ParentImpl.java:20: > expected public void setChildren(List<? extends ChildInterface1 & ChildInterface2> children) { ```
Your problem doesn't make sense. `ParentInterface1.setChildren` accepts a `List<? extends ChildInterface1>`; therefore so must `LParentImpl.setChildren`, but you are trying to constrain it so that it does not.

You might want to, say, parameterise `ParentInterface1`/`2`, but I'd suggest avoiding multiple inheritance of interfaces wherever possible (not just where generics are involved).
You can specify a method that takes an object that implements those two interfaces like this: ``` public <T extends IFirst & ISecond> void doSomething(T obj) {} ``` However, it won't matter much in your example, since both your child interfaces specify the same methods.
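A minimal, self-contained sketch of that signature (the interface and class names here are made up purely for illustration):

```java
interface IFirst { String first(); }
interface ISecond { String second(); }

// A type that satisfies both bounds.
class Both implements IFirst, ISecond {
    public String first()  { return "a"; }
    public String second() { return "b"; }
}

public class BoundsDemo {
    // T must implement both IFirst and ISecond,
    // so methods from either interface can be called on obj.
    static <T extends IFirst & ISecond> String describe(T obj) {
        return obj.first() + obj.second();
    }

    public static void main(String[] args) {
        System.out.println(describe(new Both()));
    }
}
```

Note that a class implementing only one of the two interfaces will fail to compile as an argument to `describe`, which is the whole point of the intersection bound.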
Is there a way with Java Generics to take Generic parameter that requires implementation of 2 interfaces?
[ "", "java", "generics", "interface", "" ]
Another victim of the recession here. I got tired of reading up on C# stuff and I want to work on a project now. I'm looking for projects I can work on where we can keep in touch online. I have a couple of years of C# experience. SourceForge seemed cool but can't be browsed by programming language :/ So if you are working on something and need some .NET C# help (for free), let me know... or a website where geeks like me can get together.
[Sourceforge](http://sourceforge.net) can be still browsed by programming language, although it is not as easy as it was in the old interface. To do so, from the main page click on Find Software, then a general topic or Display All, and then Display: Filters. In the Filter area, use the comboboxes to add a filter to Require Programming Language C# and click Apply.
Here's an open source C# project: [Mono](http://www.mono-project.com/Main_Page). You might also check here: [Codeplex](http://www.codeplex.com/)
free open online projects?
[ "", "c#", "asp.net", "open-source", "" ]
I'm implementing an interface that has functionality similar to a table that can contain any type of object. The interface specifies the following function:

```
double[] getDoubles(int columnIndex);
```

Where I'm stumped is that in my implementation, I'm storing the table data in a 2D `Object` array (`Object[][] data`). When I need to return the values, I want to do the following (it is assumed that `getDoubles()` will only be called on a column that contains doubles, so there will be no `ClassCastExceptions`):

```
double[] getDoubles(int columnIndex) {
    return (double[]) data[columnIndex];
}
```

But Java doesn't allow `Object[]` to be cast to `double[]`. Casting it to `Double[]` is OK because `Double` is an object and not a primitive, but my interface specifies that data will be returned as a `double[]`. So I have two questions:

1. Is there any way I can get the column data out of the `Object[][]` table and return an array of primitives?
2. If I do change the interface to return `Double[]`, will there be any performance impact?
Unfortunately you will need to loop through the entire list and unbox the `Double` if you want to convert it to a `double[]`. As far as performance goes, there is some time associated with boxing and unboxing primitives in Java. If the set is small enough, you won't see any performance issues.
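The unboxing loop described above can be sketched like this (the helper name `unbox` is hypothetical, not a library call):

```java
import java.util.Arrays;

public class UnboxDemo {
    // Hypothetical helper: copies a Double[] into a new double[] element by element.
    static double[] unbox(Double[] boxed) {
        double[] result = new double[boxed.length];
        for (int i = 0; i < boxed.length; i++) {
            // Auto-unboxing; a null element here would throw NullPointerException.
            result[i] = boxed[i];
        }
        return result;
    }

    public static void main(String[] args) {
        Double[] boxed = { 1.5, 2.5, 3.0 };
        System.out.println(Arrays.toString(unbox(boxed)));
    }
}
```

One design point worth deciding up front is what to do with null elements: the straight loop above throws, which may be exactly what you want for a column that is supposed to contain only doubles.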
If you don't mind using a 3rd party library, commons-lang has the ArrayUtils type with various methods for manipulation. ``` Double[] doubles; ... double[] d = ArrayUtils.toPrimitive(doubles); ``` There is also the complementary method ``` doubles = ArrayUtils.toObject(d); ``` Edit: To answer the rest of the question. There will be some overhead to doing this, but unless the array is really big you shouldn't worry about it. Test it first to see if it is a problem before refactoring. Implementing the method you had actually asked about would give something like this. ``` double[] getDoubles(int columnIndex) { return ArrayUtils.toPrimitive(data[columnIndex]); } ```
How do I convert Double[] to double[]?
[ "", "java", "arrays", "autoboxing", "" ]
It should turn this

```
int Yada (int yada)
{
   return yada;
}
```

into this

```
int Yada (int yada)
{
   SOME_HEIDEGGER_QUOTE;
   return yada;
}
```

but for all (or at least a big bunch of) syntactically legal C/C++ function and method constructs.

Maybe you've heard of some Perl library that will allow me to perform these kinds of operations in a few lines of code. My goal is to add a tracer to an old but big C++ project in order to be able to debug it without a debugger.
Try Aspect C++ ([www.aspectc.org](http://www.aspectc.org/)). You can define an Aspect that will pick up every method execution. In fact, the quickstart has pretty much exactly what you are after defined as an example: <http://www.aspectc.org/fileadmin/documentation/ac-quickref.pdf>
If you build using GCC and the -pg flag, GCC will automatically issue a call to the mcount() function at the start of every function. In this function you can then inspect the return address to figure out where you were called from. This approach is used by the linux kernel function tracer (CONFIG\_FUNCTION\_TRACER). Note that this function should be written in assembler, and be careful to preserve all registers! Also, note that this should be passed only in the build phase, not link, or GCC will add in the profiling libraries that normally implement mcount.
Is there a tool that enables me to insert one line of code into all functions and methods in a C++-source file?
[ "", "c++", "c", "regex", "aop", "trace", "" ]