What are some good resources to learn best practices for Javascript? I'm mainly concerned about when something should be an object vs. when it should just be tracked in the DOM. Also I would like to better learn how to organize my code so it's easy to unit test.
Seconding [Javascript: The Good Parts](http://oreilly.com/catalog/9780596517748/) and Resig's book [Secrets of the Javascript Ninja](http://jsninja.com/). Here are some tips for Javascript:

* Don't pollute the global namespace (put all functions into objects/closures)
  + Take a look at [YUI](http://developer.yahoo.com/yui/), it's a huge codebase with only 2 global objects: YAHOO and YAHOO\_config
* Use the Module pattern for singletons (<http://yuiblog.com/blog/2007/06/12/module-pattern/>)
* Make your JS as reusable as possible (jQuery plugins, YUI modules, basic JS objects). Don't write tons of global functions.
* Don't forget to `var` your variables
* Use JSLint: <http://www.jslint.com/>
* If you need to save state, it's probably best to use objects instead of the DOM.
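The Module pattern linked above is worth a quick illustration. A minimal sketch (`counterModule` and its methods are invented names for the example):

```javascript
// Module pattern: an immediately-invoked function expression returns the
// public API, while everything else stays private inside the closure.
var counterModule = (function () {
    var count = 0; // private; invisible outside the closure

    return {
        increment: function () {
            count += 1;
            return count;
        },
        reset: function () {
            count = 0;
        }
    };
})();

counterModule.increment(); // 1
counterModule.increment(); // 2
```

Only one global name (`counterModule`) is created, and `count` can only be changed through the public methods — which also keeps the state in an object rather than in the DOM.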
I disagree with the "use a framework" advice to some degree. Too many people use frameworks blindly, with little or no understanding of what's going on behind the curtain.
Javascript Best Practices
[ "", "javascript", "unit-testing", "" ]
I want to open a file for reading, the C++ way. I need to be able to do it for: * text files, which would involve some sort of read line function. * binary files, which would provide a way to read raw data into a `char*` buffer.
There are three ways to do this, depending on your needs. You could use the old-school C way and call `fopen`/`fread`/`fclose`, you could use the C++ fstream facilities (`ifstream`/`ofstream`), or, if you're using MFC, the `CFile` class, which provides functions to accomplish actual file operations. All of these are suitable for both text and binary, though none have a specific readline functionality. What you'd most likely do instead in that case is use the fstream classes (`<fstream>`) and the stream operators (`<<` and `>>`) or the `read` function to read/write blocks of text:

```
#include <fstream>
#include <vector>

int nsize = 10;
std::vector<char> somedata(nsize);
std::ifstream myfile;
myfile.open("<path to file>");
myfile.read(somedata.data(), nsize);
myfile.close();
```

Note that, if you're using Visual Studio 2005 or higher, the traditional `<fstream.h>` header may not be available (there's a new Microsoft implementation, which is slightly different, but accomplishes the same thing).
You need to use an [`ifstream`](http://en.cppreference.com/w/cpp/io/basic_ifstream) if you just want to read (use an `ofstream` to write, or an `fstream` for both). To open a file in text mode, do the following:

```
ifstream in("filename.ext", ios_base::in); // the in flag is optional
```

To open a file in binary mode, you just need to add the "binary" flag.

```
ifstream in2("filename2.ext", ios_base::in | ios_base::binary );
```

Use the [`ifstream.read()`](http://en.cppreference.com/w/cpp/io/basic_istream/read) function to read a block of characters (in binary or text mode). Use the [`getline()`](http://en.cppreference.com/w/cpp/string/basic_string/getline) function (it's global) to read an entire line.
How do you open a file in C++?
[ "", "c++", "file", "io", "" ]
If so how? Did you roll your own with jQuery or use the Microsoft AJAX toolkit? Did you create a webservice or call an action? --- **EDIT** : Please note that this question was asked before Microsoft announced that they were going to bundle jQuery in with VS/ASP.NET MVC. I think there is less of a discussion around this topic now and the answer would now almost always be 'use jQuery'. IainMH 11th Feb 2009
jQuery, action, return JSON. <http://devlicio.us/blogs/mike_nichols/archive/2008/05/25/jquery-cascade-cascading-values-from-forms.aspx>
I've spent the past day or two getting @[Matt Hinze's answer](https://stackoverflow.com/questions/39735/have-you-got-a-cascadingdropdown-working-with-aspnet-mvc#40120) to work. It works well. jQuery is the preferred method of doing AJAX in the forthcoming [ASP.NET MVC In Action](http://www.manning.com/palermo/) book from Manning. You can get a pdf of a free preview chapter on AJAX in MVC [here](http://www.manning.com/palermo/palermo_meapch9_sample.pdf). However, [Stephen Walther](http://www.aspworkshops.com/) in his excellent [ASP.NET Tip series](http://weblogs.asp.net/stephenwalther/archive/tags/Tips/default.aspx) has just blogged about creating cascading dropdowns in [Tip #41](http://weblogs.asp.net/stephenwalther/archive/2008/09/06/asp-net-mvc-tip-41-creating-cascading-dropdown-lists-with-ajax.aspx).
Have you got a CascadingDropDown working with ASP.NET MVC?
[ "", "c#", "asp.net-mvc", "ajax", "controls", "" ]
I've got a situation where I have a main form that pops up an advanced configuration form that just has half a dozen matched check boxes and combo boxes to select some advanced options (the check boxes to enable/disable, the combo to select a media if enabled). If I just pass the individual settings for the check and combo boxes in to the constructor for the dialog that's obviously a dozen arguments, which seems a bit excessive. My other obvious option would be since in the main form these settings are stored in a large IDictionary with all the other main form settings I could just pass this dictionary in and fetch it back afterward with the updated values, but my understanding is that this wouldn't really be very good coding practice. Am I missing a good way to do this that is both efficient and good coding practice? (this particular code is in C#, although I have a feeling a general solution would apply to other languages as well)
I personally would create a carrier object to store the values. You then get the nice intellisense for it, and changes to it would be quite straightforward. It would also be faster than dictionary lookups for parameter values. And of course, you get type safety. :)
You could go with Rob's solution; that's the prettiest for development. Your "carrier object" could contain the entire IDictionary and have typed properties to help intellisense. The properties could update the IDictionary. When you're done, you can pass the carrier object back and fetch the IDictionary directly from it. For example, if your dictionary had the key/value pair "FirstEnabled"/boolean, you could do this:

```
class ContainerObject
{
    public IDictionary<object, object> _dict;

    public ContainerObject(IDictionary<object, object> dict)
    {
        _dict = dict;
    }

    public bool FirstEnabled
    {
        get { return (bool) _dict["FirstEnabled"]; }
        set { _dict["FirstEnabled"] = value; }
    }
}
```

You can change the member "\_dict" to private or protected and add an accessor function if you want.
Best way to pass a large number of arguments into a configuration dialog
[ "", "c#", "" ]
For certain types of sql queries, an auxiliary table of numbers can be very useful. It may be created as a table with as many rows as you need for a particular task or as a user defined function that returns the number of rows required in each query. What is the optimal way to create such a function?
Heh... sorry I'm so late responding to an old post. And, yeah, I had to respond because the most popular answer (at the time, the Recursive CTE answer with the link to 14 different methods) on this thread is, ummm... performance challenged at best.

First, the article with the 14 different solutions is fine for seeing the different methods of creating a Numbers/Tally table on the fly, but as pointed out in the article and in the cited thread, there's a *very* important quote...

> "suggestions regarding efficiency and performance are often subjective. Regardless of how a query is being used, the physical implementation determines the efficiency of a query. Therefore, rather than relying on biased guidelines, it is imperative that you test the query and determine which one performs better."

Ironically, the article itself contains many subjective statements and "biased guidelines" such as *"a recursive CTE can generate a number listing **pretty efficiently**"* and *"This is **an efficient method** of using WHILE loop from a newsgroup posting by Itzik Ben-Gen"* (which I'm sure he posted just for comparison purposes). C'mon folks... just mentioning Itzik's good name may lead some poor slob into actually using that horrible method. The author should practice what (s)he preaches and should do a little performance testing before making such ridiculously incorrect statements, especially in the face of any scalability concerns.

With the thought of actually doing some testing before making any subjective claims about what any code does or what someone "likes", here's some code you can do your own testing with. Set up Profiler for the SPID you're running the test from and check it out for yourself... just do a "Search'n'Replace" of the number 1000000 for your "favorite" number and see...
```
--===== Test for 1000000 rows ==================================
GO
--===== Traditional RECURSIVE CTE method
   WITH Tally (N) AS
        (
         SELECT 1 UNION ALL
         SELECT 1 + N FROM Tally WHERE N < 1000000
        )
 SELECT N
   INTO #Tally1
   FROM Tally
 OPTION (MAXRECURSION 0);
GO
--===== Traditional WHILE LOOP method
 CREATE TABLE #Tally2 (N INT);
    SET NOCOUNT ON;
DECLARE @Index INT;
    SET @Index = 1;
  WHILE @Index <= 1000000
  BEGIN
        INSERT #Tally2 (N) VALUES (@Index);
        SET @Index = @Index + 1;
  END;
GO
--===== Traditional CROSS JOIN table method
 SELECT TOP (1000000)
        ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS N
   INTO #Tally3
   FROM Master.sys.All_Columns ac1
  CROSS JOIN Master.sys.ALL_Columns ac2;
GO
--===== Itzik's CROSS JOINED CTE method
   WITH E00(N) AS (SELECT 1 UNION ALL SELECT 1),
        E02(N) AS (SELECT 1 FROM E00 a, E00 b),
        E04(N) AS (SELECT 1 FROM E02 a, E02 b),
        E08(N) AS (SELECT 1 FROM E04 a, E04 b),
        E16(N) AS (SELECT 1 FROM E08 a, E08 b),
        E32(N) AS (SELECT 1 FROM E16 a, E16 b),
   cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY N) FROM E32)
 SELECT N
   INTO #Tally4
   FROM cteTally
  WHERE N <= 1000000;
GO
--===== Housekeeping
   DROP TABLE #Tally1, #Tally2, #Tally3, #Tally4;
GO
```

While we're at it, here's the numbers I get from SQL Profiler for the values of 100, 1000, 10000, 100000, and 1000000...
```
SPID TextData                                 Dur(ms) CPU   Reads   Writes
---- ---------------------------------------- ------- ----- ------- ------
  51 --===== Test for 100 rows ==============       8     0       0      0
  51 --===== Traditional RECURSIVE CTE method      16     0     868      0
  51 --===== Traditional WHILE LOOP method CR      73    16     175      2
  51 --===== Traditional CROSS JOIN table met      11     0      80      0
  51 --===== Itzik's CROSS JOINED CTE method        6     0      63      0
  51 --===== Housekeeping   DROP TABLE #Tally      35    31     401      0
  51 --===== Test for 1000 rows =============       0     0       0      0
  51 --===== Traditional RECURSIVE CTE method      47    47    8074      0
  51 --===== Traditional WHILE LOOP method CR      80    78    1085      0
  51 --===== Traditional CROSS JOIN table met       5     0      98      0
  51 --===== Itzik's CROSS JOINED CTE method        2     0      83      0
  51 --===== Housekeeping   DROP TABLE #Tally       6    15     426      0
  51 --===== Test for 10000 rows ============       0     0       0      0
  51 --===== Traditional RECURSIVE CTE method     434   344   80230     10
  51 --===== Traditional WHILE LOOP method CR     671   563   10240      9
  51 --===== Traditional CROSS JOIN table met      25    31     302     15
  51 --===== Itzik's CROSS JOINED CTE method       24     0     192     15
  51 --===== Housekeeping   DROP TABLE #Tally       7    15     531      0
  51 --===== Test for 100000 rows ===========       0     0       0      0
  51 --===== Traditional RECURSIVE CTE method    4143  3813  800260    154
  51 --===== Traditional WHILE LOOP method CR    5820  5547  101380    161
  51 --===== Traditional CROSS JOIN table met     160   140     479    211
  51 --===== Itzik's CROSS JOINED CTE method      153   141     276    204
  51 --===== Housekeeping   DROP TABLE #Tally      10    15     761      0
  51 --===== Test for 1000000 rows ==========       0     0       0      0
  51 --===== Traditional RECURSIVE CTE method   41349 37437 8001048   1601
  51 --===== Traditional WHILE LOOP method CR   59138 56141 1012785   1682
  51 --===== Traditional CROSS JOIN table met    1224  1219    2429   2101
  51 --===== Itzik's CROSS JOINED CTE method     1448  1328    1217   2095
  51 --===== Housekeeping   DROP TABLE #Tally       8     0     415      0
```

As you can see, **the Recursive CTE method is the second worst only to the While Loop for Duration and CPU and has 8 times the memory pressure in the form of logical
reads than the While Loop**. It's RBAR on steroids and should be avoided, at all cost, for any single-row calculations, just as a While Loop should be avoided. **There are places where recursion is quite valuable, but this ISN'T one of them**.

As a sidebar, Mr. Denny is absolutely spot on... a correctly sized permanent Numbers or Tally table is the way to go for most things. What does correctly sized mean? Well, most people use a Tally table to generate dates or to do splits on VARCHAR(8000). If you create an 11,000-row Tally table with the correct clustered index on "N", you'll have enough rows to create more than 30 years' worth of dates (I work with mortgages a fair bit, so 30 years is a key number for me) and certainly enough to handle a VARCHAR(8000) split. Why is "right sizing" so important? If the Tally table is used a lot, it easily fits in cache, which makes it blazingly fast without much pressure on memory at all.

Last but not least, everyone knows that if you create a permanent Tally table, it doesn't much matter which method you use to build it, because 1) it's only going to be made once and 2) if it's something like an 11,000-row table, all of the methods are going to run "good enough". **So why all the indignation on my part about which method to use???** The answer is that some poor guy/gal who doesn't know any better, and who just needs to get his or her job done, might see something like the Recursive CTE method and decide to use it for something much larger and much more frequently used than building a permanent Tally table, and I'm trying to **protect those people, the servers their code runs on, and the company that owns the data on those servers**. Yeah... it's that big a deal. It should be for everyone else, as well. Teach the right way to do things instead of "good enough". Do some testing before posting or using something from a post or book...
the life you save may, in fact, be your own, especially if you think a recursive CTE is the way to go for something like this. ;-) Thanks for listening...
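For completeness, a permanent, "right sized" Tally table of the kind recommended above can be built once with the cross-joined method benchmarked earlier. This is a sketch only; the table name, row count, and constraint name are illustrative:

```sql
--===== Build an 11,000 row permanent Tally table once, using the
     -- cross joined method, then cluster it on N for fast range scans.
 SELECT TOP (11000)
        N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
   INTO dbo.Tally
   FROM Master.sys.All_Columns ac1
  CROSS JOIN Master.sys.All_Columns ac2;

  ALTER TABLE dbo.Tally
    ADD CONSTRAINT PK_Tally_N PRIMARY KEY CLUSTERED (N);
```

After that, queries simply join to `dbo.Tally` instead of generating numbers on the fly.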
The optimal approach is to use a table instead of a function. Using a function causes extra CPU load to create the values for the data being returned, especially if the values being returned cover a very large range.
SQL, Auxiliary table of numbers
[ "", "sql", "sql-server", "" ]
I'm trying to perform a LINQ query on a DataTable object and bizarrely I am finding that performing such queries on DataTables is not straightforward. For example:

```
var results = from myRow in myDataTable
              where results.Field("RowNo") == 1
              select results;
```

This is not allowed. How do I get something like this working? I'm amazed that LINQ queries are not allowed on DataTables!
You can't query against the `DataTable`'s *Rows* collection, since `DataRowCollection` doesn't implement `IEnumerable<T>`. You need to use the `AsEnumerable()` extension for `DataTable`. Like so:

```
var results = from myRow in myDataTable.AsEnumerable()
              where myRow.Field<int>("RowNo") == 1
              select myRow;
```

And as [@Keith](https://stackoverflow.com/a/10893/5519709) says, you'll need to add a reference to [System.Data.DataSetExtensions](http://msdn.microsoft.com/en-us/library/system.data.datarowextensions.aspx).

`AsEnumerable()` returns `IEnumerable<DataRow>`. If you need to convert `IEnumerable<DataRow>` to a `DataTable`, use the `CopyToDataTable()` extension.

Below is the same query with a lambda expression:

```
var result = myDataTable
    .AsEnumerable()
    .Where(myRow => myRow.Field<int>("RowNo") == 1);
```
```
var results = from DataRow myRow in myDataTable.Rows
              where (int)myRow["RowNo"] == 1
              select myRow;
```
LINQ query on a DataTable
[ "", "c#", ".net", "linq", "datatable", ".net-3.5", "" ]
I'm looking for a good article on using emacs as C/C++ IDE. Something like Steve Yegge's ["Effective emacs"](http://steve.yegge.googlepages.com/effective-emacs).
No specific article, really, but I've found [EmacsWiki](http://www.emacswiki.org/cgi-bin/wiki) to be full of useful information. Consider checking out these entries:

* [CPlusPlus](http://www.emacswiki.org/cgi-bin/wiki/CPlusPlus) as a starting point for many C++-related articles, and
* [CppTemplate](http://www.emacswiki.org/cgi-bin/wiki/CppTemplate) to define a template that can give you a good skeleton when you start new files
I've recently stumbled upon this [article](http://www.physics.ucsb.edu/~taro/comp/tips/emacs/emacs-c-dev.html) which is quite good. EDIT: Yep the link is no longer valid. It seems like they've changed their url recently and it doesn't redirect properly. Hopefully it will be back soon. Anyway the article was called "Benjamin Rutt's Emacs C development tips". I managed to find a copy [here](http://xugx2007.blogspot.com/2007/06/benjamin-rutts-emacs-c-development-tips.html).
Any good advice on using emacs for C++ project?
[ "", "c++", "emacs", "" ]
I'm trying to write a regex function that will identify and replace a single instance of a match within a string without affecting the other instances. For example, I have this string: ``` 12||34||56 ``` I want to replace the second set of pipes with ampersands to get this string: ``` 12||34&&56 ``` The regex function needs to be able to handle x amount of pipes and allow me to replace the nth set of pipes, so I could use the same function to make these replacements: ``` 23||45||45||56||67 -> 23&&45||45||56||67 23||34||98||87 -> 23||34||98&&87 ``` I know that I could just split/replace/concat the string at the pipes, and I also know that I can match on `/\|\|/` and iterate through the resulting array, but I'm interested to know if it's possible to write a single expression that can do this. Note that this would be for Javascript, so it's possible to generate a regex at runtime using `eval()`, but it's not possible to use any Perl-specific regex instructions.
here's something that works:

```
"23||45||45||56||67".replace(/^((?:[0-9]+\|\|){n})([0-9]+)\|\|/, "$1$2&&")
```

where `n` is one less than the ordinal of the pipe pair you want to replace (of course you don't need that first subexpression at all if `n` is 0).

And if you'd like a function to do this:

```
function pipe_replace(str, n) {
  var RE = new RegExp("^((?:[0-9]+\\|\\|){" + (n - 1) + "})([0-9]+)\\|\\|");
  return str.replace(RE, "$1$2&&");
}
```

Note that the backslashes have to be doubled (`\\|`) inside the string passed to `new RegExp`, because `"\|"` in a string literal is just `"|"`, which would change the meaning of the pattern.
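A quick sanity check of the function against the examples from the question (the function is restated here so the snippet runs standalone):

```javascript
// Replace the nth "||" separator (1-based) in str with "&&".
function pipe_replace(str, n) {
  // Skip the first n-1 number+pipe groups, then capture the next
  // number and consume its trailing "||".
  var RE = new RegExp("^((?:[0-9]+\\|\\|){" + (n - 1) + "})([0-9]+)\\|\\|");
  return str.replace(RE, "$1$2&&");
}

console.log(pipe_replace("12||34||56", 2));          // 12||34&&56
console.log(pipe_replace("23||45||45||56||67", 1));  // 23&&45||45||56||67
console.log(pipe_replace("23||34||98||87", 3));      // 23||34||98&&87
```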
# A more general-purpose function I came across this question and, although the title is very general, the accepted answer handles only the question's specific use case. I needed a more general-purpose solution, so I wrote one and thought I'd share it here. ## Usage This function requires that you pass it the following arguments: * `original`: the string you're searching in * `pattern`: either a string to search for, or a RegExp **with a capture group**. Without a capture group, it will throw an error. This is because the function calls `split` on the original string, and [only if the supplied RegExp contains a capture group will the resulting array contain the matches](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/String/split#Example:_Capturing_parentheses). * `n`: the ordinal occurrence to find; eg, if you want the 2nd match, pass in `2` * `replace`: Either a string to replace the match with, or a function which will take in the match and return a replacement string. 
## Examples ``` // Pipe examples like the OP's replaceNthMatch("12||34||56", /(\|\|)/, 2, '&&') // "12||34&&56" replaceNthMatch("23||45||45||56||67", /(\|\|)/, 1, '&&') // "23&&45||45||56||67" // Replace groups of digits replaceNthMatch("foo-1-bar-23-stuff-45", /(\d+)/, 3, 'NEW') // "foo-1-bar-23-stuff-NEW" // Search value can be a string replaceNthMatch("foo-stuff-foo-stuff-foo", "foo", 2, 'bar') // "foo-stuff-bar-stuff-foo" // No change if there is no match for the search replaceNthMatch("hello-world", "goodbye", 2, "adios") // "hello-world" // No change if there is no Nth match for the search replaceNthMatch("foo-1-bar-23-stuff-45", /(\d+)/, 6, 'NEW') // "foo-1-bar-23-stuff-45" // Passing in a function to make the replacement replaceNthMatch("foo-1-bar-23-stuff-45", /(\d+)/, 2, function(val){ //increment the given value return parseInt(val, 10) + 1; }); // "foo-1-bar-24-stuff-45" ``` ## The Code ``` var replaceNthMatch = function (original, pattern, n, replace) { var parts, tempParts; if (pattern.constructor === RegExp) { // If there's no match, bail if (original.search(pattern) === -1) { return original; } // Every other item should be a matched capture group; // between will be non-matching portions of the substring parts = original.split(pattern); // If there was a capture group, index 1 will be // an item that matches the RegExp if (parts[1].search(pattern) !== 0) { throw {name: "ArgumentError", message: "RegExp must have a capture group"}; } } else if (pattern.constructor === String) { parts = original.split(pattern); // Need every other item to be the matched string tempParts = []; for (var i=0; i < parts.length; i++) { tempParts.push(parts[i]); // Insert between, but don't tack one onto the end if (i < parts.length - 1) { tempParts.push(pattern); } } parts = tempParts; } else { throw {name: "ArgumentError", message: "Must provide either a RegExp or String"}; } // Parens are unnecessary, but explicit. 
:) indexOfNthMatch = (n * 2) - 1; if (parts[indexOfNthMatch] === undefined) { // There IS no Nth match return original; } if (typeof(replace) === "function") { // Call it. After this, we don't need it anymore. replace = replace(parts[indexOfNthMatch]); } // Update our parts array with the new value parts[indexOfNthMatch] = replace; // Put it back together and return return parts.join(''); } ``` ## An Alternate Way To Define It The least appealing part of this function is that it takes 4 arguments. It could be simplified to need only 3 arguments by adding it as a method to the String prototype, like this: ``` String.prototype.replaceNthMatch = function(pattern, n, replace) { // Same code as above, replacing "original" with "this" }; ``` If you do that, you can call the method on any string, like this: ``` "foo-bar-foo".replaceNthMatch("foo", 2, "baz"); // "foo-bar-baz" ``` ## Passing Tests The following are the Jasmine tests that this function passes. ``` describe("replaceNthMatch", function() { describe("when there is no match", function() { it("should return the unmodified original string", function() { var str = replaceNthMatch("hello-there", /(\d+)/, 3, 'NEW'); expect(str).toEqual("hello-there"); }); }); describe("when there is no Nth match", function() { it("should return the unmodified original string", function() { var str = replaceNthMatch("blah45stuff68hey", /(\d+)/, 3, 'NEW'); expect(str).toEqual("blah45stuff68hey"); }); }); describe("when the search argument is a RegExp", function() { describe("when it has a capture group", function () { it("should replace correctly when the match is in the middle", function(){ var str = replaceNthMatch("this_937_thing_38_has_21_numbers", /(\d+)/, 2, 'NEW'); expect(str).toEqual("this_937_thing_NEW_has_21_numbers"); }); it("should replace correctly when the match is at the beginning", function(){ var str = replaceNthMatch("123_this_937_thing_38_has_21_numbers", /(\d+)/, 2, 'NEW'); 
expect(str).toEqual("123_this_NEW_thing_38_has_21_numbers"); }); }); describe("when it has no capture group", function() { it("should throw an error", function(){ expect(function(){ replaceNthMatch("one_1_two_2", /\d+/, 2, 'NEW'); }).toThrow('RegExp must have a capture group'); }); }); }); describe("when the search argument is a string", function() { it("should should match and replace correctly", function(){ var str = replaceNthMatch("blah45stuff68hey", 'stuff', 1, 'NEW'); expect(str).toEqual("blah45NEW68hey"); }); }); describe("when the replacement argument is a function", function() { it("should call it on the Nth match and replace with the return value", function(){ // Look for the second number surrounded by brackets var str = replaceNthMatch("foo[1][2]", /(\[\d+\])/, 2, function(val) { // Get the number without the [ and ] var number = val.slice(1,-1); // Add 1 number = parseInt(number,10) + 1; // Re-format and return return '[' + number + ']'; }); expect(str).toEqual("foo[1][3]"); }); }); }); ``` ## May not work in IE7 This code may fail in IE7 because that browser incorrectly splits strings using a regex, as discussed [here](https://stackoverflow.com/questions/4417931/javascript-split-regex-bug-in-ie7). [shakes fist at IE7]. I believe that [this](http://blog.stevenlevithan.com/archives/cross-browser-split) is the solution; if you need to support IE7, good luck. :)
Replacing the nth instance of a regex match in Javascript
[ "", "javascript", "regex", "" ]
I remember first learning about vectors in the STL and after some time, I wanted to use a vector of bools for one of my projects. After seeing some strange behavior and doing some research, I learned that [a vector of bools is not really a vector of bools](http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=98). Are there any other common pitfalls to avoid in C++?
A short list might be:

* Avoid memory leaks through the use of shared pointers to manage memory allocation and cleanup
* Use the [Resource Acquisition Is Initialization](https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization) (RAII) idiom to manage resource cleanup - especially in the presence of exceptions
* Avoid calling virtual functions in constructors
* Employ minimalist coding techniques where possible - for example, declaring variables only when needed, scoping variables, and early-out design where possible.
* Truly understand the exception handling in your code - both with regard to exceptions you throw, as well as ones thrown by classes you may be using indirectly. This is especially important in the presence of templates.

RAII, shared pointers and minimalist coding are of course not specific to C++, but they help avoid problems that do frequently crop up when developing in the language.

Some excellent books on this subject are:

* Effective C++ - Scott Meyers
* More Effective C++ - Scott Meyers
* C++ Coding Standards - Sutter & Alexandrescu
* C++ FAQs - Cline

Reading these books has helped me more than anything else to avoid the kind of pitfalls you are asking about.
## Pitfalls in decreasing order of their importance First of all, you should visit the award winning [C++ FAQ](https://isocpp.org/faq). It has many good answers to pitfalls. If you have further questions, visit `##c++` on `irc.freenode.org` in [IRC](http://en.wikipedia.org/wiki/Internet_Relay_Chat). We are glad to help you, if we can. Note all the following pitfalls are originally written. They are not just copied from random sources. --- > `delete[]` on `new`, `delete` on `new[]` **Solution**: Doing the above yields to undefined behavior: Everything could happen. Understand your code and what it does, and always `delete[]` what you `new[]`, and `delete` what you `new`, then that won't happen. **Exception**: ``` typedef T type[N]; T * pT = new type; delete[] pT; ``` You need to `delete[]` even though you `new`, since you new'ed an array. So if you are working with `typedef`, take special care. --- > Calling a virtual function in a constructor or destructor **Solution**: Calling a virtual function won't call the overriding functions in the derived classes. Calling a *pure virtual function* in a constructor or desctructor is undefined behavior. --- > Calling `delete` or `delete[]` on an already deleted pointer **Solution**: Assign 0 to every pointer you delete. Calling `delete` or `delete[]` on a null-pointer does nothing. --- > Taking the sizeof of a pointer, when the number of elements of an 'array' is to be calculated. **Solution**: Pass the number of elements alongside the pointer when you need to pass an array as a pointer into a function. Use the function proposed [here](https://stackoverflow.com/questions/275994/whats-the-best-way-to-do-a-backwards-loop-in-ccc#276053) if you take the sizeof of an array that is supposed to be really an array. --- > Using an array as if it were a pointer. Thus, using `T **` for a two dimentional array. 
**Solution**: See [here](https://stackoverflow.com/questions/274865/pointer-question-in-c#274943) for why they are different and how you handle them. --- > Writing to a string literal: `char * c = "hello"; *c = 'B';` **Solution**: Allocate an array that is initialized from the data of the string literal, then you can write to it: ``` char c[] = "hello"; *c = 'B'; ``` Writing to a string literal is undefined behavior. Anyway, the above conversion from a string literal to `char *` is deprecated. So compilers will probably warn if you increase the warning level. --- > Creating resources, then forgetting to free them when something throws. **Solution**: Use smart pointers like [`std::unique_ptr`](http://en.cppreference.com/w/cpp/memory/unique_ptr) or [`std::shared_ptr`](http://en.cppreference.com/w/cpp/memory/shared_ptr) as pointed out by other answers. --- > Modifying an object twice like in this example: `i = ++i;` **Solution**: The above was supposed to assign to `i` the value of `i+1`. But what it does is not defined. Instead of incrementing `i` and assigning the result, it changes `i` on the right side as well. Changing an object between two sequence points is undefined behavior. Sequence points include `||`, `&&`, `comma-operator`, `semicolon` and `entering a function` (non exhaustive list!). Change the code to the following to make it behave correctly: `i = i + 1;` --- ## Misc Issues > Forgetting to flush streams before calling a blocking function like `sleep`. **Solution**: Flush the stream by streaming either `std::endl` instead of `\n` or by calling `stream.flush();`. --- > Declaring a function instead of a variable. **Solution**: The issue arises because the compiler interprets for example ``` Type t(other_type(value)); ``` as a function declaration of a function `t` returning `Type` and having a parameter of type `other_type` which is called `value`. You solve it by putting parentheses around the first argument. 
Now you get a variable `t` of type `Type`: ``` Type t((other_type(value))); ``` --- > Calling the function of a free object that is only declared in the current translation unit (`.cpp` file). **Solution**: The standard doesn't define the order of creation of free objects (at namespace scope) defined across different translation units. Calling a member function on an object not yet constructed is undefined behavior. You can define the following function in the object's translation unit instead and call it from other ones: ``` House & getTheHouse() { static House h; return h; } ``` That would create the object on demand and leave you with a fully constructed object at the time you call functions on it. --- > Defining a template in a `.cpp` file, while it's used in a different `.cpp` file. **Solution**: Almost always you will get errors like `undefined reference to ...`. Put all the template definitions in a header, so that when the compiler is using them, it can already produce the code needed. --- > `static_cast<Derived*>(base);` if base is a pointer to a virtual base class of `Derived`. **Solution**: A virtual base class is a base which occurs only once, even if it is inherited more than once by different classes indirectly in an inheritance tree. Doing the above is not allowed by the Standard. Use dynamic\_cast to do that, and make sure your base class is polymorphic. --- > `dynamic_cast<Derived*>(ptr_to_base);` if base is non-polymorphic **Solution**: The standard doesn't allow a downcast of a pointer or reference when the object passed is not polymorphic. It or one of its base classes has to have a virtual function. --- > Making your function accept `T const **` **Solution**: You might think that's safer than using `T **`, but actually it will cause headache to people that want to pass `T**`: The standard doesn't allow it. 
It gives a neat example of why it is disallowed:

```
int main()
{
    char const c = 'c';
    char* pc;
    char const** pcc = &pc; //1: not allowed
    *pcc = &c;
    *pc = 'C';              //2: modifies a const object
}
```

Always accept `T const* const*` instead.

## Another (closed) pitfalls thread about C++, so people looking for them will find them, is Stack Overflow question *[C++ pitfalls](https://stackoverflow.com/questions/280531)*.
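A minimal, compilable sketch of the "declaring a function instead of a variable" pitfall above; the two struct definitions are stand-ins invented purely for illustration:

```cpp
#include <cassert>

// Stand-in types, invented for illustration only.
struct other_type {
    explicit other_type(int v) : v(v) {}
    int v;
};

struct Type {
    explicit Type(other_type o) : v(o.v) {}
    int v;
};

int demo() {
    int value = 7;
    // Type t1(other_type(value)); // parsed as a declaration of a
    //                             // function t1, not an object!
    Type t2((other_type(value)));  // extra parentheses force an object
    Type t3{other_type(value)};    // C++11 brace-init also avoids the parse
    return t2.v + t3.v;
}
```

Uncommenting the `t1` line and then trying to use `t1.v` makes the compiler complain about a function being used where an object is expected, which is usually the first hint of this pitfall.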
What C++ pitfalls should I avoid?
[ "", "c++", "stl", "" ]
You should be able to create a generic form:

```
public partial class MyGenericForm<T> : Form where T : class
{
    /* form code */
    public List<T> TypedList { get; set; }
}
```

This is valid C# and compiles. However, the designer won't work, and if you have any images the form will throw a runtime exception stating that it cannot find the resource. I think this is because the Windows Forms designer assumes that the resources will be stored under the simple type's name.
Yes you can! Here's a blog post I made a while ago with the trick: [Designing Generic Forms](http://www.madprops.org/blog/designing-generic-forms/) Edit: Looks like you're already doing it this way. This method works fine so I wouldn't consider it too hacky.
I have a hack to work around this, which works but isn't ideal: Add a new class to the project that inherits the form with its simple name.

```
internal class MyGenericForm : MyGenericForm<object> { }
```

This means that although the designer is still wrong, the expected simple type (i.e. without `<>`) is still found.
Can you use generic forms in C#?
[ "", "c#", ".net", "winforms", "" ]
Warning - I am very new to NHibernate. I know this question seems simple - and I'm sure there's a simple answer, but I've been spinning my wheels for some time on this one. I am dealing with a legacy db which really can't be altered structurally. I have a details table which lists payment plans that have been accepted by a customer. Each payment plan has an ID which links back to a reference table to get the plan's terms, conditions, etc. In my object model, I have an AcceptedPlan class, and a Plan class. Originally, I used a many-to-one relationship from the detail table back to the ref table to model this relationship in NHibernate. I also created a one-to-many relationship going in the opposite direction from the Plan class over to the AcceptedPlan class. This was fine while I was simply reading data. I could go to my Plan object, which was a property of my AcceptedPlan class to read the plan's details. My problem arose when I had to start inserting new rows to the details table. From my reading, it seems the only way to create a new child object is to add it to the parent object and then save the session. But I don't want to have to create a new parent Plan object every time I want to create a new detail record. This seems like unnecessary overhead. Does anyone know if I am going about this in the wrong way?
I'd steer away from having child objects contain their logical parent; it can get very messy and very recursive pretty quickly when you do that. I'd take a look at how you're intending to use the domain model before you do that sort of thing. You can easily still have the ID references in the tables and just leave them unmapped. Here are two example mappings that might nudge you in the right direction. I've had to ad-lib table names etc., but it could possibly help. I'd probably also suggest mapping the StatusId to an enumeration. Pay attention to the way the bag effectively maps the details table into a collection.

```
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2">
  <class lazy="false" name="Namespace.Customer, Namespace" table="Customer">
    <id name="Id" type="Int32" unsaved-value="0">
      <column name="CustomerAccountId" length="4" sql-type="int" not-null="true" unique="true" index="CustomerPK"/>
      <generator class="native" />
    </id>
    <bag name="AcceptedOffers" inverse="false" lazy="false" cascade="all-delete-orphan" table="details">
      <key column="CustomerAccountId" foreign-key="AcceptedOfferFK"/>
      <many-to-many class="Namespace.AcceptedOffer, Namespace" column="AcceptedOfferFK" foreign-key="AcceptedOfferID" lazy="false" />
    </bag>
  </class>
</hibernate-mapping>

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2">
  <class lazy="false" name="Namespace.AcceptedOffer, Namespace" table="AcceptedOffer">
    <id name="Id" type="Int32" unsaved-value="0">
      <column name="AcceptedOfferId" length="4" sql-type="int" not-null="true" unique="true" index="AcceptedOfferPK"/>
      <generator class="native" />
    </id>
    <many-to-one name="Plan" class="Namespace.Plan, Namespace" lazy="false" cascade="save-update">
      <column name="PlanFK" length="4" sql-type="int" not-null="false"/>
    </many-to-one>
    <property name="StatusId" type="Int32">
      <column name="StatusId" length="4" sql-type="int" not-null="true"/>
    </property>
  </class>
</hibernate-mapping>
```
Didn't see your database diagram whilst I was writing. ``` <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.Customer, Namespace" table="Customer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="customer_id" length="4" sql-type="int" not-null="true" unique="true" index="CustomerPK"/> <generator class="native" /> </id> <bag name="AcceptedOffers" inverse="false" lazy="false" cascade="all-delete-orphan"> <key column="accepted_offer_id"/> <one-to-many class="Namespace.AcceptedOffer, Namespace"/> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.AcceptedOffer, Namespace" table="Accepted_Offer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="accepted_offer_id" length="4" sql-type="int" not-null="true" unique="true" /> <generator class="native" /> </id> <many-to-one name="Plan" class="Namespace.Plan, Namespace" lazy="false" cascade="save-update"> <column name="plan_id" length="4" sql-type="int" not-null="false"/> </many-to-one> </class> </hibernate-mapping> ``` Should probably do the trick (I've only done example mappings for the collections, you'll have to add other properties).
Best way to model Many-To-One Relationships in NHibernate When Dealing With a Legacy DB?
[ "", "c#", "nhibernate", "" ]
I have been looking into IKVMing Apache's FOP project to use with our .NET app. It's a commercial product, and looking into licensing, IKVM runs into some sticky areas because of its use of GNU Classpath. From what I've seen, no one can say for sure if this stuff can be used in a commercial product. Has anyone used IKVM, or an IKVM'd product, in a commercial product? Here's what I've found so far: [IKVM license page](https://web.archive.org/web/20040202014857/http://weblog.ikvm.net:80/story.aspx/license), which notes that one dll contains code from other projects, their license [GPLv2 + Classpath Exception](https://openjdk.java.net/legal/gplv2+ce.html) [Saxon for .NET](http://saxon.sourceforge.net/) is generated with IKVM, but released under the Apache license... Anyone have experience with this?
There are multiple issues here, as IKVM is currently being transitioned away from the GNU Classpath system to Sun's OpenJDK. Both are licensed as GPL+Exceptions to state explicitly that applications which merely use the OpenJDK libraries will not be considered derived works. Generally speaking, applications which rely upon components with defined specs such as this do not fall under the GPL anyway. For example, linking against public POSIX APIs does not trigger GPL reliance in a Linux application, despite the kernel being GPL. A similar principle will usually (the details can be tricky) apply to replacing Sun's Java with a FOSS/GPL implementation.
Just a quick update on this after noticing the question, for anyone browsing by. IKVM seems to have updated to use the OpenJDK and not the GNU Classpath; in fact, IKVM.net has removed the comment from their license page.
IKVM and Licensing
[ "", "java", ".net", "ikvm", "gnu-classpath", "" ]
Does anyone know how I can, in platform-independent C++ code prevent an object from being created on the heap? That is, for a class "Foo", I want to prevent users from doing this: ``` Foo *ptr = new Foo; ``` and only allow them to do this: ``` Foo myfooObject; ``` Does anyone have any ideas? Cheers,
[Nick's answer](https://stackoverflow.com/questions/10985/how-to-prevent-an-object-being-created-on-the-heap#11003) is a good starting point, but incomplete, as you actually need to overload: ``` private: void* operator new(size_t); // standard new void* operator new(size_t, void*); // placement new void* operator new[](size_t); // array new void* operator new[](size_t, void*); // placement array new ``` (Good coding practice would suggest you should also overload the delete and delete[] operators -- I would, but since they're not going to get called it isn't *really* necessary.) [Pauldoo](https://stackoverflow.com/questions/10985/how-to-prevent-an-object-being-created-on-the-heap#11018) is also correct that this doesn't survive aggregating on Foo, although it does survive inheriting from Foo. You could do some template meta-programming magic to HELP prevent this, but it would not be immune to "evil users" and thus is probably not worth the complication. Documentation of how it should be used, and code review to ensure it is used properly, are the only ~100% way.
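For what it's worth, since C++11 the same intent can be expressed more directly by deleting the allocation functions instead of declaring them private; a minimal sketch (the class contents are invented for illustration):

```cpp
#include <cassert>
#include <cstddef>

// C++11 take on the same idea: delete the allocation functions.
// Any attempt to heap-allocate a Foo fails to compile with a
// "use of deleted function" diagnostic.
class Foo {
public:
    explicit Foo(int v) : value_(v) {}
    int value() const { return value_; }

    static void* operator new(std::size_t) = delete;
    static void* operator new[](std::size_t) = delete;
    static void operator delete(void*) = delete;
    static void operator delete[](void*) = delete;

private:
    int value_;
};

int useOnStack() {
    Foo f(42);                // fine: automatic storage
    // Foo* p = new Foo(1);   // would not compile: deleted operator new
    return f.value();
}
```

The caveats from the answers above still apply: a class that aggregates `Foo`, or a derived class that declares its own `operator new`, can still end up on the heap.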
You could overload new for Foo and make it private. This would mean that the compiler would moan... unless you're creating an instance of Foo on the heap from within Foo. To catch this case, you could simply not write Foo's new method and then the linker would moan about undefined symbols. ``` class Foo { private: void* operator new(size_t size); }; ``` PS. Yes, I know this can be circumvented easily. I'm really not recommending it - I think it's a bad idea - I was just answering the question! ;-)
How to prevent an object being created on the heap?
[ "", "c++", "stack", "heap-memory", "" ]
Sorry for the basic question - I'm a .NET developer and don't have much experience with LAMP setups. I have a PHP site that will allow uploads to a specific folder. I have been told that this folder needs to be owned by the webserver user for the upload process to work, so I created the folder and then set permissions as such: ``` chown apache:apache -R uploads/ chmod 755 -R uploads/ ``` The only problem now is that the FTP user can not modify the uploaded files at all. Is there a permission setting that will allow me to still upload files and then modify them later as a user other than the webserver user?
You can create a new group with both the apache user and FTP user as members and then make the permission on the upload folder 775. This should give both the apache and FTP users the ability to write to the files in the folder but keep everyone else from modifying them.
I would go with Ryan's answer if you really want to do this. In general on a \*nix environment, you always want to err on the side of giving away as few permissions as possible. 9 times out of 10, 755 is the ideal permission for this - as the only user with the ability to modify the files will be the webserver. Change this to 775 with your FTP user in a group if you REALLY need to change this. Since you're new to PHP by your own admission, here's a helpful link for improving the security of your upload service: [`move_uploaded_file`](http://www.php.net/move_uploaded_file)
What are the proper permissions for an upload folder with PHP/Apache?
[ "", "php", "apache", "upload", "" ]
Following on from my recent question on [Large, Complex Objects as a Web Service Result](https://stackoverflow.com/questions/17725/large-complex-objects-as-a-web-service-result), I have been thinking about how I can ensure all future child classes are serializable to XML. Now, obviously I could implement the [IXmlSerializable](http://msdn.microsoft.com/en-us/library/system.xml.serialization.ixmlserializable.aspx) interface and then chuck a reader/writer to it, but I would like to avoid that since it then means I need to instantiate a reader/writer whenever I want to do it, and 99.99% of the time I am going to be working with a *string* so I may just write my own. However, to serialize to XML, I am simply decorating the class and its members with the *Xml???* attributes (*XmlRoot*, *XmlElement*, etc.) and then passing it to the *XmlSerializer* and a *StringWriter* to get the string. Which is all good. I intend to put the method to return the string into a generic utility method so I don't need to worry about type etc. The thing that concerns me is this: If I do not decorate the class(es) with the required attributes, an error is not thrown until run time. **Is there any way to enforce attribute decoration? Can this be done with FxCop?** (I have not used FxCop yet)

### UPDATE:

Sorry for the delay in getting this closed off guys, lots to do! Definitely like the idea of using reflection to do it in a test case rather than resorting to FxCop (like to keep everything together). [Fredrik Kalseth's answer](https://stackoverflow.com/questions/19454/enforce-attribute-decoration-of-classesmethods#19455) was fantastic, thanks for including the code as it probably would have taken me a bit of digging to figure out how to do it myself! +1 to the other guys for similar suggestions :)
I'd write a unit/integration test that verifies that any class matching some given criteria (i.e. subclassing X) is decorated appropriately. If you set up your build to run with tests, you can have the build fail when this test fails.

UPDATE: You said, "Looks like I will just have to roll my sleeves up and make sure that the unit tests are collectively maintained" - you don't have to. Just write a general test class that uses reflection to find all classes that need to be asserted. Something like this:

```
[TestClass]
public class When_type_inherits_MyObject
{
    private readonly List<Type> _types = new List<Type>();

    public When_type_inherits_MyObject()
    {
        // let's find all types that inherit from MyObject, directly or indirectly
        foreach(Type type in typeof(MyObject).Assembly.GetTypes())
        {
            if(type.IsClass && typeof(MyObject).IsAssignableFrom(type))
            {
                _types.Add(type);
            }
        }
    }

    [TestMethod]
    public void Properties_have_XmlElement_attribute()
    {
        foreach(Type type in _types)
        {
            foreach(PropertyInfo property in type.GetProperties())
            {
                object[] attribs = property.GetCustomAttributes(typeof(XmlElementAttribute), false);
                Assert.IsTrue(attribs.Length > 0, "Missing XmlElementAttribute on property " + property.Name + " in type " + type.FullName);
            }
        }
    }
}
```
You can write unit tests to check for this kind of thing - it basically uses reflection. Given the fact this is possible I guess it would also be possible to write a FxCop rule, but I've never done such a thing.
Enforce Attribute Decoration of Classes/Methods
[ "", "c#", "xml", "serialization", "coding-style", ".net-attributes", "" ]
Is there an existing application or library in *Java* which will allow me to convert a `CSV` data file to `XML` file? The `XML` tags would be provided through possibly the first row containing column headings.
Maybe this might help: [JSefa](http://jsefa.sourceforge.net/quick-tutorial.html) You can read CSV file with this tool and serialize it to XML.
As the others above, I don't know any one-step way to do that, but if you are ready to use very simple external libraries, I would suggest: [OpenCsv](http://opencsv.sourceforge.net/) for parsing CSV (small, simple, reliable and easy to use) **Xstream** to parse/serialize XML (very very easy to use, and creating fully human readable xml) Using the same sample data as above, code would look like: ``` package fr.megiste.test; import java.io.FileReader; import java.io.FileWriter; import java.util.ArrayList; import java.util.List; import au.com.bytecode.opencsv.CSVReader; import com.thoughtworks.xstream.XStream; public class CsvToXml { public static void main(String[] args) { String startFile = "./startData.csv"; String outFile = "./outData.xml"; try { CSVReader reader = new CSVReader(new FileReader(startFile)); String[] line = null; String[] header = reader.readNext(); List out = new ArrayList(); while((line = reader.readNext())!=null){ List<String[]> item = new ArrayList<String[]>(); for (int i = 0; i < header.length; i++) { String[] keyVal = new String[2]; String string = header[i]; String val = line[i]; keyVal[0] = string; keyVal[1] = val; item.add(keyVal); } out.add(item); } XStream xstream = new XStream(); xstream.toXML(out, new FileWriter(outFile,false)); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } } } ``` Producing the following result: (Xstream allows very fine tuning of the result...) 
``` <list> <list> <string-array> <string>string</string> <string>hello world</string> </string-array> <string-array> <string>float1</string> <string>1.0</string> </string-array> <string-array> <string>float2</string> <string>3.3</string> </string-array> <string-array> <string>integer</string> <string>4</string> </string-array> </list> <list> <string-array> <string>string</string> <string>goodbye world</string> </string-array> <string-array> <string>float1</string> <string>1e9</string> </string-array> <string-array> <string>float2</string> <string>-3.3</string> </string-array> <string-array> <string>integer</string> <string>45</string> </string-array> </list> <list> <string-array> <string>string</string> <string>hello again</string> </string-array> <string-array> <string>float1</string> <string>-1</string> </string-array> <string-array> <string>float2</string> <string>23.33</string> </string-array> <string-array> <string>integer</string> <string>456</string> </string-array> </list> <list> <string-array> <string>string</string> <string>hello world 3</string> </string-array> <string-array> <string>float1</string> <string>1.40</string> </string-array> <string-array> <string>float2</string> <string>34.83</string> </string-array> <string-array> <string>integer</string> <string>4999</string> </string-array> </list> <list> <string-array> <string>string</string> <string>hello 2 world</string> </string-array> <string-array> <string>float1</string> <string>9981.05</string> </string-array> <string-array> <string>float2</string> <string>43.33</string> </string-array> <string-array> <string>integer</string> <string>444</string> </string-array> </list> </list> ```
Java lib or app to convert CSV to XML file?
[ "", "java", "xml", "csv", "data-conversion", "" ]
Let's say that you want to output or concat strings. Which of the following styles do you prefer?

* `var p = new { FirstName = "Bill", LastName = "Gates" };`
* `Console.WriteLine("{0} {1}", p.FirstName, p.LastName);`
* `Console.WriteLine(p.FirstName + " " + p.LastName);`

Would you rather use format or do you simply concat strings? What is your favorite? Is one of these hurting your eyes? Do you have any rational arguments to use one and not the other? I'd go for the second one.
Try this code. It's a slightly modified version of your code. 1. I removed Console.WriteLine as it's probably a few orders of magnitude slower than what I'm trying to measure. 2. I'm starting the Stopwatch before the loop and stopping it right after, this way I'm not losing precision if the function takes for example 26.4 ticks to execute. 3. The way you divided the result by some iterations was wrong. See what happens if you have 1,000 milliseconds and 100 milliseconds. In both situations, you will get 0 ms after dividing it by 1,000,000. Code: ``` Stopwatch s = new Stopwatch(); var p = new { FirstName = "Bill", LastName = "Gates" }; int n = 1000000; long fElapsedMilliseconds = 0, fElapsedTicks = 0, cElapsedMilliseconds = 0, cElapsedTicks = 0; string result; s.Start(); for (var i = 0; i < n; i++) result = (p.FirstName + " " + p.LastName); s.Stop(); cElapsedMilliseconds = s.ElapsedMilliseconds; cElapsedTicks = s.ElapsedTicks; s.Reset(); s.Start(); for (var i = 0; i < n; i++) result = string.Format("{0} {1}", p.FirstName, p.LastName); s.Stop(); fElapsedMilliseconds = s.ElapsedMilliseconds; fElapsedTicks = s.ElapsedTicks; s.Reset(); Console.Clear(); Console.WriteLine(n.ToString()+" x result = string.Format(\"{0} {1}\", p.FirstName, p.LastName); took: " + (fElapsedMilliseconds) + "ms - " + (fElapsedTicks) + " ticks"); Console.WriteLine(n.ToString() + " x result = (p.FirstName + \" \" + p.LastName); took: " + (cElapsedMilliseconds) + "ms - " + (cElapsedTicks) + " ticks"); Thread.Sleep(4000); ``` Those are my results: > 1000000 x result = string.Format("{0} {1}", p.FirstName, p.LastName); took: 618ms - 2213706 ticks
I'm amazed that so many people immediately want to find the code that executes the fastest. *If ONE MILLION iterations STILL take less than a second to process, is this going to be in ANY WAY noticeable to the end user? Not very likely.* > Premature optimization = FAIL. I'd go with the `String.Format` option, only because it makes the most sense from an architectural standpoint. I don't care about the performance until it becomes an issue (and if it did, I'd ask myself: Do I need to concatenate a million names at once? Surely they won't all fit on the screen...) Consider if your customer later wants to change it so that they can configure whether to display `"Firstname Lastname"` or `"Lastname, Firstname."` With the Format option, this is easy - just swap out the format string. With the concat, you'll need extra code. Sure that doesn't sound like a big deal in this particular example but extrapolate.
String output: format or concat in C#?
[ "", "c#", "string", "coding-style", "string.format", "" ]
I'm currently using VS2005 Profesional and .NET 2.0, and since our project is rather large (25 projects in the solution), I'd like to try VS 2008, since its theoretically faster with larger projects. Before doing such thing, i'd like to know if what I've read is true: can I use VS2008 in ".net 2.0" mode? I don't want my customers to install .net 3.0 or .3.5, I just want to install VS2008, open my solution and start working from there. Is this possible? P.D.: the solution is a c# Window Forms project.
Yes, VS2008 can "[target](http://weblogs.asp.net/scottgu/archive/2007/06/20/vs-2008-multi-targeting-support.aspx)" a framework, but I think by default, when converting from VS2005 to VS2008, it just keeps it at framework 2.0.
Yes, it's possible. In the project properties you can target different versions of the .NET Framework, going back to .NET 2.0. Upgrading to VS 2008 will upgrade your solution file, and you won't be able to go back to VS 2005 unless you have backed up your solution.
Moving from Visual Studio 2005 to 2008 and .NET 2.0
[ "", "c#", ".net", "visual-studio", ".net-3.5", ".net-2.0", "" ]
I am trying to generate a report by querying 2 databases (Sybase) in classic ASP. I have created 2 connection strings:

> connA for databaseA
> connB for databaseB

Both databases are present on the same server (don't know if this matters)

Queries:

`q1 = SELECT column1 INTO #temp FROM databaseA..table1 WHERE xyz="A"`

`q2 = SELECT columnA,columnB,...,columnZ FROM table2 a #temp b WHERE b.column1=a.columnB`

followed by:

```
response.Write(rstsql)
set rstSQL = CreateObject("ADODB.Recordset")
rstSQL.Open q1, connA
rstSQL.Open q2, connB
```

When I try to open up this page in a browser, I get an error message:

> Microsoft OLE DB Provider for ODBC Drivers error '80040e37'
>
> [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]#temp not found. Specify owner.objectname or use sp\_help to check whether the object exists (sp\_help may produce lots of output).

Could anyone please help me understand what the problem is and help me fix it?

Thanks.
With both queries, it looks like you are trying to insert into #temp. #temp is located on one of the databases (for argument's sake, databaseA). So when you try to insert into #temp from databaseB, it reports that it does not exist.

Try changing it from *Into **#temp** From* to *Into **databaseA.dbo.#temp** From* in both statements. Also, make sure that the connection strings have permissions on the other DB, otherwise this will not work.

Update: relating to the temp table going out of scope - if you have one connection string that has permissions on both databases, then you could use this for both queries (while keeping the connection alive). While querying the table in the other DB, be sure to use [DBName].[Owner].[TableName] format when referring to the table.
Your temp table is out of scope; it is only 'alive' during the first connection and will not be available in the second connection. Just move all of it into one block of code and execute it inside one connection.
Select Query on 2 tables, on different database servers
[ "", "sql", "asp-classic", "vbscript", "sybase", "" ]
I need to call a web service written in .NET from Java. The web service implements the WS-Security stack (either WSE 2 or WSE 3, it's not clear from the information I have). The information that I received from the service provider included WSDL, a policyCache.config file, some sample C# code, and a sample application that can successfully call the service. This isn't as useful as it sounds because it's not clear how I'm supposed to use this information to write a Java client. If the web service request isn't signed according to the policy then it is rejected by the service. I'm trying to use Apache Axis2 and I can't find any instructions on how I'm supposed to use the policyCache.config file and the WSDL to generate a client. There are several examples that I have found on the Web but in all cases the authors of the examples had control of both the service and the client and so were able to make tweaks on both sides in order to get it to work. I'm not in that position. Has anyone done this successfully?
This seems to be a popular question so I'll provide an overview of what we did in our situation. It seems that services built in .NET are following an older ws-addressing standard (<http://schemas.xmlsoap.org/ws/2004/03/addressing/>) and axis2 only understands the newer standard (<http://schemas.xmlsoap.org/ws/2004/08/addressing/>). In addition, the policyCache.config file provided is in a form that the axis2 rampart module can't understand. So the steps we had to do, in a nutshell: * Read the policyCache.config and try to understand it. Then rewrite it into a policy that rampart could understand. (Some [updated docs](http://www.scribd.com/doc/466238/Official-Documentation-ws-apache-org-Axis2-Part-2) helped.) * Configure rampart with this policy. * Take the keys that were provided in the .pfx file and convert them to a java key store. There is a utility that comes with Jetty that can do that. * Configure rampart with that key store. * Write a custom axis2 handler that backward-converts the newer ws-addressing stuff that comes out of axis2 into the older stuff expected by the service. * Configure axis2 to use the handler on outgoing messages. In the end it was a lot of configuration and code for something that is supposed to be an open standard supported by the vendors. Although I'm not sure what the alternative is...can you wait for the vendors (or in this case, the one vendor) to make sure that everything will inter-op? As a postscript I'll add that I didn't end up doing the work, it was someone else on my team, but I think I got the salient details correct. The other option that I was considering (before my teammate took over) was to call the WSS4J API directly to construct the SOAP envelope as the .NET service expected it. I think that would have worked too.
WS-Security specifications are not typically contained in a WSDL (never in a WSE WSDL). So wsdl2java does not know that WS-Security is even required for this service. The fact that security constraints are not present in a WSE WSDL is a big disappointment to me (WCF will include WS-Trust information in a WSDL).

On the client end, you'll need to use [Rampart](http://ws.apache.org/rampart/) to add the necessary WS-Security headers to your outgoing client message. Since the WSDL does not report what WS-Security settings are necessary, you're best off asking the service provider what is required. WS-Security requirements may be a simple plaintext password, or might be X509 certificates, or might be an encrypted message... Rampart should be able to handle most of these scenarios.

Apache Rampart is "turned on" by engaging the module in your axis2.xml file. You'll need to download the Rampart module and put it in a specific place in your axis2 directory, then modify the XML file. You can also engage Rampart programmatically (please edit your original question if this is a requirement and I'll edit this response). Depending on how you configure Rampart (through other XML files or programmatically), it will intercept any outgoing messages and add the necessary WS-Security information to them.

I've personally used Axis2 with Rampart to call a WSE3 service that is secured with UsernameToken in plaintext and it worked great. Similar, but more advanced scenarios should also work. There are more details on how to set up and get started with Rampart on the site linked above. If you have problems with the specifics of Rampart or how to use Rampart with your particular WSE setup, then edit your question and I'll try my best to answer.
Calling .NET Web Service (WSE 2/3, WS-Security) from Java
[ "", "java", ".net", "apache-axis", "ws-security", "wse", "" ]
In an application that I am currently working on, a requirement is to bring a window of an external application to the foreground. Win32 API calls such as BringWindowToTop and SetForegroundWindow do not work all the time. This is due to some restrictions within Windows XP. What I would like to do instead is simulate a mouse click on the window's button on the taskbar, which I am hoping will bring the window to the front. Does anyone know how this is possible?
Check out the section "How to steal focus on 2K/XP" at <http://www.codeproject.com/KB/dialog/dlgboxtricks.aspx>, as this is exactly what you need. I wouldn't go the taskbar route as the taskbar could be hidden or simply not there.
It's possible. But it's extremely sketchy. Your application may also break with the next version of Windows, since it's undocumented. What you need to do is find the window handle of the taskbar, then find the window handle of the child window representing the button, then send it a WM\_MOUSEDOWN (I think) message. Here's a bit on finding the window handle of the taskbar: [http://www.codeproject.com/](http://www.codeproject.com/KB/miscctrl/hide_vista_start_orb.aspx) FWIW, the restrictions on BringWindowToTop/SetForeground are there because it's irritating when a window steals focus. That may not matter if you're working on a corporate environment. Just keep it in mind. :)
Sending a mouse click to a button in the taskbar using C#
[ "", "c#", ".net", "windows", "winapi", "" ]
I have a list of integers, `List<Integer>` and I'd like to convert all the integer objects into Strings, thus finishing up with a new `List<String>`. Naturally, I could create a new `List<String>` and loop through the list calling `String.valueOf()` for each integer, but I was wondering if there was a better (read: *more automatic*) way of doing it?
As far as I know, iterate and instantiate is the only way to do this. Something like (for others' potential help, since I'm sure you know how to do this):

```
List<Integer> oldList = ...
/* Specify the size of the list up front to prevent resizing. */
List<String> newList = new ArrayList<>(oldList.size());
for (Integer myInt : oldList) {
    newList.add(String.valueOf(myInt));
}
```
Using [Google Collections from Guava-Project](https://github.com/google/guava/), you could use the `transform` method in the [Lists](https://google.github.io/guava/releases/23.0/api/docs/com/google/common/collect/Lists.html) class ``` import com.google.common.collect.Lists; import com.google.common.base.Functions List<Integer> integers = Arrays.asList(1, 2, 3, 4); List<String> strings = Lists.transform(integers, Functions.toStringFunction()); ``` The `List` returned by `transform` is a *view* on the backing list - the transformation will be applied on each access to the transformed list. Be aware that `Functions.toStringFunction()` will throw a `NullPointerException` when applied to null, so only use it if you are sure your list will not contain null.
Converting List<Integer> to List<String>
[ "", "java", "string", "collections", "integer", "" ]
I have a "showall" query string parameter in the URL; the parameter is added dynamically when the "Show All/Show Pages" button is clicked. I want the ability to toggle the "showall" query string parameter value depending on the user clicking the "Show All/Show Pages" button. I'm doing some nested "if's" and `string.Replace()` on the URL; is there a better way? All manipulations are done on the server. **p.s.** Toran, good suggestion, however I HAVE TO USE URL PARAMETER due to some other issues.
Just to elaborate on Toran's answer: Use: `<asp:HiddenField ID="ShowAll" Value="False" runat="server" />` To toggle your state: ``` protected void ToggleState(object sender, EventArgs e) { //parse string as boolean, invert, and convert back to string ShowAll.Value = (!Boolean.Parse(ShowAll.Value)).ToString(); } ```
Another dirty alternative could be just to use a hidden input and set that on/off instead of manipulating the URL.
reassign value to query string parameter
[ "", "c#", "query-string", "" ]
While I've seen rare cases where *private* inheritance was needed, I've never encountered a case where *protected* inheritance is needed. Does someone have an example?
People here seem to confuse protected class inheritance with protected methods. FWIW, I've never seen anyone use protected class inheritance, and if I remember correctly Stroustrup even considered the "protected" level to be a mistake in C++. There's precious little you cannot do if you remove that protection level and only rely on public and private.
There is a very rare use case of protected inheritance. It is where you want to make use of [covariance](http://en.wikipedia.org/wiki/Covariance_and_contravariance_%28computer_science%29 "Covariance"): ``` struct base { virtual ~base() {} virtual base & getBase() = 0; }; struct d1 : private /* protected */ base { virtual base & getBase() { return *this; } }; struct d2 : private /* protected */ d1 { virtual d1 & getBase () { return *this; } }; ``` The previous snippet tries to hide its base class, and provide controlled visibility of bases and their functions, for whatever reason, by providing a "getBase" function. However, it will fail in struct `d2`, since `d2` does not know that `d1` is derived from `base`. Thus, `covariance` will not work. A way out of this is deriving them protected, so that the inheritance is visible in d2. A similar example of using this is when you derive from `std::ostream`, but don't want random people to write into your stream. You can provide a virtual `getStream` function that returns `std::ostream&`. That function could do some preparing of the stream for the next operation. For example, putting certain manipulators in. ``` std::ostream& d2::getStream() { this->width(10); return *this; } logger.getStream() << "we are padded"; ```
Are there any examples where we *need* protected inheritance in C++?
[ "", "c++", "oop", "inheritance", "" ]
Ok, I have a strange exception thrown from my code that's been bothering me for ages. ``` System.Net.Sockets.SocketException: A blocking operation was interrupted by a call to WSACancelBlockingCall at System.Net.Sockets.Socket.Accept() at System.Net.Sockets.TcpListener.AcceptTcpClient() ``` MSDN isn't terribly helpful on this: <http://msdn.microsoft.com/en-us/library/ms741547(VS.85).aspx> and I don't even know how to begin troubleshooting this one. It's only thrown 4 or 5 times a day, and never in our test environment. Only in production sites, and on ALL production sites. I've found plenty of posts asking about this exception, but no actual definitive answers on what is causing it, and how to handle or prevent it. The code runs in a separate background thread; the method starts: ``` public virtual void Startup() { TcpListener serverSocket = new TcpListener(new IPEndPoint(bindAddress, port)); serverSocket.Start(); ``` then I run a loop putting all new connections as jobs in a separate thread pool. It gets more complicated because of the app architecture, but basically: ``` while (( socket = serverSocket.AcceptTcpClient()) !=null) //Funny exception here { connectionHandler = new ConnectionHandler(socket, mappingStrategy); pool.AddJob(connectionHandler); } } ``` From there, the `pool` has its own threads that take care of each job in its own thread, separately. My understanding is that AcceptTcpClient() is a blocking call, and that somehow winsock is telling the thread to stop blocking and continue execution... but why? And what am I supposed to do? Just catch the exception and ignore it? --- Well, I do think some other thread is closing the socket, but it's certainly not from my code. What I would like to know is: is this socket closed by the connecting client (on the other side of the socket) or is it closed by my server? Because as it is at this moment, whenever this exception occurs, it shuts down my listening port, effectively closing my service.
If this is done from a remote location, then it's a major problem. Alternatively, could this be simply the IIS server shutting down my application, and thus cancelling all my background threads and blocking methods?
Is it possible that the serverSocket is being closed from another thread? That will cause this exception.
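That diagnosis is easy to confirm in isolation. The question's code is C#, but the socket semantics are the same across languages; as a quick illustrative sketch (not the original code), here is a Python stdlib demonstration that once something else has closed the listening socket, the accept call surfaces an exception instead of a connection:

```python
import socket

# Minimal illustration (Python stdlib, not the question's C# code):
# once something else closes the listening socket, any accept() call
# fails with an exception -- the analogue of the SocketException above.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)

listener.close()  # simulate "some other thread closed the socket"

try:
    listener.accept()
    interrupted = False
except OSError:
    # In the C# case this surfaces as a SocketException mentioning
    # WSACancelBlockingCall; in Python it is an OSError.
    interrupted = True

print(interrupted)
```

The practical takeaway is the same in both languages: treat an exception from the blocking accept as a signal to check whether your own shutdown path (or the host process) closed the listener.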
This is my example solution to avoid WSAcancelblablabla: Define your thread as global; then you can use the invoke method like this: ``` private void closinginvoker(string dummy) { if (InvokeRequired) { this.Invoke(new Action<string>(closinginvoker), new object[] { dummy }); return; } t_listen.Abort(); client_flag = true; c_idle.Close(); listener1.Stop(); } ``` After you invoke it, abort the thread first, then set the forever-loop flag so it blocks further waiting (if you have one), then close the TcpClient and stop the listener.
WSACancelBlockingCall exception
[ "", "c#", "multithreading", "sockets", "socketexception", "" ]
There seem to be many ways to define [singletons](http://en.wikipedia.org/wiki/Singleton_pattern) in Python. Is there a consensus opinion on Stack Overflow?
I don't really see the need, as a module with functions (and not a class) would serve well as a singleton. All its variables would be bound to the module, which could not be instantiated repeatedly anyway. If you do wish to use a class, there is no way of creating private classes or private constructors in Python, so you can't protect against multiple instantiations, other than just via convention in use of your API. I would still just put methods in a module, and consider the module as the singleton.
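To make the module-as-singleton point concrete, here is a small runnable sketch. Python caches modules in `sys.modules`, so every import returns the same object, and module-level state behaves like singleton state. (Attaching an attribute to the stdlib `math` module below is purely for demonstration; in real code this would be your own `config.py`.)

```python
import sys
import math

# Modules are imported once and cached in sys.modules, so every
# "import math" anywhere in the process yields the same object.
import math as math_again
assert math is math_again
assert sys.modules["math"] is math

# Module-level state therefore behaves like singleton state.
math.app_setting = "configured"
import math as elsewhere  # "importing it elsewhere" gives the same object
print(elsewhere.app_setting)
```

This is why a plain module often suffices: there is exactly one instance per process, with no class machinery needed.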
Here's my own implementation of singletons. All you have to do is decorate the class; to get the singleton, you then have to use the `instance` method. Here's an example: ``` @Singleton class Foo: def __init__(self): print 'Foo created' f = Foo() # Error, this isn't how you get the instance of a singleton f = Foo.instance() # Good. Being explicit is in line with the Python Zen g = Foo.instance() # Returns already created instance print f is g # True ``` And here's the code: ``` class Singleton: """ A non-thread-safe helper class to ease implementing singletons. This should be used as a decorator -- not a metaclass -- to the class that should be a singleton. The decorated class can define one `__init__` function that takes only the `self` argument. Also, the decorated class cannot be inherited from. Other than that, there are no restrictions that apply to the decorated class. To get the singleton instance, use the `instance` method. Trying to use `__call__` will result in a `TypeError` being raised. """ def __init__(self, decorated): self._decorated = decorated def instance(self): """ Returns the singleton instance. Upon its first call, it creates a new instance of the decorated class and calls its `__init__` method. On all subsequent calls, the already created instance is returned. """ try: return self._instance except AttributeError: self._instance = self._decorated() return self._instance def __call__(self): raise TypeError('Singletons must be accessed through `instance()`.') def __instancecheck__(self, inst): return isinstance(inst, self._decorated) ```
Is there a simple, elegant way to define singletons?
[ "", "python", "design-patterns", "singleton", "" ]
Does anyone know how to transform an enum value to a human readable value? For example: > ThisIsValueA should be "This is Value A".
Converting this from a VB code snippet that a certain Ian Horwill left at a [blog post long ago](http://secretgeek.net/progr_purga.asp)... I've since used this in production successfully. ``` /// <summary> /// Add spaces to separate the capitalized words in the string, /// i.e. insert a space before each uppercase letter that is /// either preceded by a lowercase letter or followed by a /// lowercase letter (but not for the first char in string). /// This keeps groups of uppercase letters - e.g. acronyms - together. /// </summary> /// <param name="pascalCaseString">A string in PascalCase</param> /// <returns></returns> public static string Wordify(string pascalCaseString) { Regex r = new Regex("(?<=[a-z])(?<x>[A-Z])|(?<=.)(?<x>[A-Z])(?=[a-z])"); return r.Replace(pascalCaseString, " ${x}"); } ``` (requires `using System.Text.RegularExpressions;`) Thus: ``` Console.WriteLine(Wordify(ThisIsValueA.ToString())); ``` Would return: ``` "This Is Value A". ``` It's much simpler and less redundant than providing Description attributes. Attributes are useful here only if you need to provide a layer of indirection (which the question didn't ask for).
The .ToString on Enums is relatively slow in C#, comparable with GetType().Name (it might even use that under the covers). If your solution needs to be very quick or highly efficient, you may be best off caching your conversions in a static dictionary and looking them up from there. --- A small adaptation of @Leon's code to take advantage of C#3. This does make sense as an extension of enums - you could limit this to the specific type if you didn't want to clutter up all of them. ``` public static string Wordify(this Enum input) { Regex r = new Regex("(?<=[a-z])(?<x>[A-Z])|(?<=.)(?<x>[A-Z])(?=[a-z])"); return r.Replace( input.ToString() , " ${x}"); } //then your calling syntax is down to: MyEnum.ThisIsA.Wordify(); ```
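The regex trick in these answers is language-agnostic. As an aside (not part of the original answers), here is the same idea sketched in Python for illustration; note that Python's `re` module does not allow a group name to be reused within one pattern, so the port uses two numbered groups instead of the repeated `(?<x>...)` named group:

```python
import re

# Same idea as the C# Wordify above, ported to Python for illustration.
# Python's re module forbids reusing a named group, so two numbered
# groups are used and whichever one matched supplies the letter.
_PATTERN = re.compile(r"(?<=[a-z])([A-Z])|(?<=.)([A-Z])(?=[a-z])")

def wordify(pascal_case_string):
    """Insert spaces before capitals, keeping acronym runs together."""
    return _PATTERN.sub(
        lambda m: " " + (m.group(1) or m.group(2)), pascal_case_string
    )

print(wordify("ThisIsValueA"))      # This Is Value A
print(wordify("ParseHTMLString"))   # Parse HTML String
```

As in the C# version, runs of uppercase letters such as acronyms stay grouped because an uppercase letter surrounded by other uppercase letters matches neither alternative.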
Convert enums to human readable values
[ "", "c#", "enums", "human-readable", "" ]
Could someone recommend any good resources for creating Graphics User Interfaces, preferably in C/C++? Currently my biggest influence is [3DBuzz.com](http://www.3dbuzz.com)'s [C++/OpenGL VTMs](http://www.3dbuzz.com/xcart/product.php?productid=30&cat=12&page=1) (Video Training Modules). While they are very good, they cover a large area of graphics programming, so only skim the surface of GUI programming. This question does relate to "[How do I make a GUI?](https://stackoverflow.com/questions/7489/how-do-i-make-a-gui)", where there is also a rough outline of my current structure. Any response would be appreciated. **Edit:** I've just read some of the answers and spotted that I've missed some points. I had to type the question twice as I caught the wrong button and lost it. I missed two important points. First: this will be used cross-platform, including homebrew on a Sony PSP. Second: I want to create a GUI system, not use an existing one. **Edit 2:** I think some of you are missing the point: I don't want to **use** an existing GUI system, I want to build one. Qt in its current form is not portable to the PSP, never mind the overkill of such a task. That said, I've decided to create an IM-GUI, and have started to prototype the code.
I wouldn't use OpenGL for the GUI unless you are planning for hardware accelerated and/or 3D effects that you don't think you would get with a more traditional GUI toolkit (e.g. [Qt](http://trolltech.com/), [wxWidgets](http://www.wxwidgets.org/), [GTK](http://www.gtk.org/), etc.). If you just need a quick simple GUI for hosting your OpenGL graphics then [FLTK](http://www.fltk.org/) is a nice choice. Otherwise, for rendering the GUI directly in OpenGL there are libraries like [Crazy Eddie's GUI](http://www.cegui.org.uk/wiki/index.php/Main_Page) that do just that and provide lots of [skinnable](http://en.wikipedia.org/wiki/Skinnable) [widgets](http://en.wikipedia.org/wiki/GUI_widget) that you won't have to reinvent. The window and OpenGL context could then be provided by a portable library like [SDL](http://www.libsdl.org/). **EDIT**: Now that I've gone back and taken a look at your [other post](https://stackoverflow.com/questions/7489/how-do-i-make-a-gui), I think I have a better understanding of what you are asking. For a GUI on an embedded system like the Nintendo DS, I would consider using an "immediate mode" GUI. [Jari Komppa](http://sol.gfxile.net/who.html) has a [good tutorial about them](http://sol.gfxile.net/imgui/), but you could use a more object-oriented approach with C++ than the C code he presents.
Have a look at [Qt](https://www.qt.io/download-open-source/). It is an open source library for making GUI's. Unlike Swing in Java, it assumes a lot of stuff, so it is really easy to make functional GUI's. For example, a textarea assumes that you want a context menu when you right click it with copy, paste, select all, etc. The [documentation](https://doc.qt.io/qt-5/) is also very good.
GUI system development resources?
[ "", "c++", "user-interface", "playstation-portable", "" ]
I need to validate an integer to know if is a valid enum value. What is the best way to do this in C#?
You got to love these folk who assume that data not only always comes from a UI, but a UI within your control! `IsDefined` is fine for most scenarios, you could start with: ``` public static bool TryParseEnum<TEnum>(this int enumValue, out TEnum retVal) { retVal = default(TEnum); bool success = Enum.IsDefined(typeof(TEnum), enumValue); if (success) { retVal = (TEnum)Enum.ToObject(typeof(TEnum), enumValue); } return success; } ``` (Obviously just drop the ‘this’ if you don’t think it’s a suitable int extension)
IMHO the post marked as the answer is incorrect. Parameter and data validation is one of the things that was drilled into me decades ago. **WHY** Validation is required because essentially any integer value can be assigned to an enum without throwing an error. I spent many days researching C# enum validation because it is a necessary function in many cases. **WHERE** The main purpose in enum validation for me is in validating data read from a file: you never know if the file has been corrupted, or was modified externally, or was hacked on purpose. And with enum validation of application data pasted from the clipboard: you never know if the user has edited the clipboard contents. That said, I spent days researching and testing many methods including profiling the performance of every method I could find or design. Making calls into anything in System.Enum is so slow that it was a noticeable performance penalty on functions that contained hundreds or thousands of objects that had one or more enums in their properties that had to be validated for bounds. Bottom line, stay away from *everything* in the System.Enum class when validating enum values, it is dreadfully slow. **RESULT** The method that I currently use for enum validation will probably draw rolling eyes from many programmers here, but it is imho the least evil for my specific application design. I define one or two constants that are the upper and (optionally) lower bounds of the enum, and use them in a pair of if() statements for validation. One downside is that you must be sure to update the constants if you change the enum. This method also only works if the enum is an "auto" style where each enum element is an incremental integer value such as 0,1,2,3,4,.... It won't work properly with Flags or enums that have values that are not incremental. Also note that this method is almost as fast as regular if "<" ">" on regular int32s (which scored 38,000 ticks on my tests). 
For example: ``` public const MyEnum MYENUM_MINIMUM = MyEnum.One; public const MyEnum MYENUM_MAXIMUM = MyEnum.Four; public enum MyEnum { One, Two, Three, Four }; public static MyEnum Validate(MyEnum value) { if (value < MYENUM_MINIMUM) { return MYENUM_MINIMUM; } if (value > MYENUM_MAXIMUM) { return MYENUM_MAXIMUM; } return value; } ``` **PERFORMANCE** For those who are interested, I profiled the following variations on an enum validation, and here are the results. The profiling was performed on release compile in a loop of one million times on each method with a random integer input value. Each test was ran more than 10 times and averaged. The tick results include the total time to execute which will include the random number generation etc. but those will be constant across the tests. 1 tick = 10ns. Note that the code here isn't the complete test code, it is only the basic enum validation method. There were also a lot of additional variations on these that were tested, and all of them with results similar to those shown here that benched 1,800,000 ticks. Listed slowest to fastest with rounded results, hopefully no typos. **Bounds determined in Method** = 13,600,000 ticks ``` public static T Clamp<T>(T value) { int minimum = Enum.GetValues(typeof(T)).GetLowerBound(0); int maximum = Enum.GetValues(typeof(T)).GetUpperBound(0); if (Convert.ToInt32(value) < minimum) { return (T)Enum.ToObject(typeof(T), minimum); } if (Convert.ToInt32(value) > maximum) { return (T)Enum.ToObject(typeof(T), maximum); } return value; } ``` **Enum.IsDefined** = 1,800,000 ticks Note: this code version doesn't clamp to Min/Max but returns Default if out of bounds. 
``` public static T ValidateItem<T>(T eEnumItem) { if (Enum.IsDefined(typeof(T), eEnumItem) == true) return eEnumItem; else return default(T); } ``` **System.Enum Convert Int32 with casts** = 1,800,000 ticks ``` public static Enum Clamp(this Enum value, Enum minimum, Enum maximum) { if (Convert.ToInt32(value) < Convert.ToInt32(minimum)) { return minimum; } if (Convert.ToInt32(value) > Convert.ToInt32(maximum)) { return maximum; } return value; } ``` **if() Min/Max Constants** = 43,000 ticks = the winner by 42x and 316x faster. ``` public static MyEnum Clamp(MyEnum value) { if (value < MYENUM_MINIMUM) { return MYENUM_MINIMUM; } if (value > MYENUM_MAXIMUM) { return MYENUM_MAXIMUM; } return value; } ``` -eol-
Validate Enum Values
[ "", "c#", "validation", "enums", "" ]
OK. This is a bit of a vanity app, but I had a situation today at work where I was in a training class and the machine was set to lock every 10 minutes. Well, if the trainers got excited about talking - as opposed to changing slides - the machine would lock up. I'd like to write a teeny app that has nothing but a taskbar icon that does nothing but move the mouse by 1 pixel every 4 minutes. I can do that in 3 ways with Delphi (my strong language) but I'm moving to C# for work and I'd like to know the path of least resistance there.
**For C# 3.5**, without a NotifyIcon, so you will need to terminate this application manually via Task Manager: ``` using System; using System.Drawing; using System.Windows.Forms; static class Program { static void Main() { Timer timer = new Timer(); // timer.Interval = 4 minutes timer.Interval = (int)(TimeSpan.TicksPerMinute * 4 / TimeSpan.TicksPerMillisecond); timer.Tick += (sender, args) => { Cursor.Position = new Point(Cursor.Position.X + 1, Cursor.Position.Y + 1); }; timer.Start(); Application.Run(); } } ```
The "correct" way to do this is to respond to the WM\_SYSCOMMAND message. In C# this looks something like this: ``` protected override void WndProc(ref Message m) { // Abort screensaver and monitor power-down const int WM_SYSCOMMAND = 0x0112; const int SC_MONITOR_POWER = 0xF170; const int SC_SCREENSAVE = 0xF140; int WParam = (m.WParam.ToInt32() & 0xFFF0); if (m.Msg == WM_SYSCOMMAND && (WParam == SC_MONITOR_POWER || WParam == SC_SCREENSAVE)) return; base.WndProc(ref m); } ``` According to [MSDN](http://msdn.microsoft.com/en-us/library/ms646360(VS.85).aspx), if the screensaver password is enabled by policy on Vista or above, this won't work. Presumably programmatically moving the mouse is also ignored, though I have not tested this.
Wiggling the mouse
[ "", "c#", "winapi", "mouse", "" ]
I am currently working on a project and my goal is to locate text in an image. OCR'ing the text is not my intention as of yet. I want to basically obtain the bounds of text within an image. I am using the AForge.Net imaging component for manipulation. Any assistance in some sense or another? Update 2/5/09: I've since gone along another route in my project. However, I did attempt to obtain text using MODI (Microsoft Office Document Imaging). It allows you to OCR an image and pull text from it with some ease.
This is an active area of research. There are literally oodles of academic papers on the subject. It's going to be difficult to give you assistance, especially without more details. Are you looking for specific types of text? Fonts? English-only? Are you familiar with the academic literature? "Text detection" is a standard problem in any OCR (optical character recognition) system, and consequently there are lots of bits of code on the interwebs that deal with it. I could start listing piles of links from Google, but I suggest you just do a search for "text detection" and start reading :). There is ample example code available as well.
Recognizing text inside an image is indeed a hot topic for researchers in that field, but it only began to grow out of control when [captchas](http://en.wikipedia.org/wiki/Captcha) became the "norm" in terms of defense against spam bots. Why use captchas as protection? Well, because it is/was very hard to locate (and read) text inside an image! The reason I mention captchas is because most of the advancement\* is made within that tiny area, and I think that your solution could be best found there, especially because captchas are indeed about locating text (or something that resembles text) inside a cluttered image and afterwards trying to read the letters correctly. So if you can find yourself [a good open source captcha breaking tool](http://libcaca.zoy.org/wiki/PWNtcha) you probably have all you need to continue your quest... You could probably even throw away the most difficult code that handles the character recognition itself, because those OCRs are used to read distorted text, something you don't have to do. \*: advancement in terms of visible, usable, and **practical** information for a "non-researcher"
Locating Text within image
[ "", "c#", "image", "image-processing", "artificial-intelligence", "" ]
What do I need to look at to see whether I'm on Windows or Unix, etc.?
``` >>> import os >>> os.name 'posix' >>> import platform >>> platform.system() 'Linux' >>> platform.release() '2.6.22-15-generic' ``` The output of [`platform.system()`](https://docs.python.org/library/platform.html#platform.system) is as follows: * Linux: `Linux` * Mac: `Darwin` * Windows: `Windows` See: [`platform` — Access to underlying platform’s identifying data](https://docs.python.org/library/platform.html)
Here are the system results for [Windows Vista](https://en.wikipedia.org/wiki/Windows_Vista)! ``` >>> import os >>> os.name 'nt' >>> import platform >>> platform.system() 'Windows' >>> platform.release() 'Vista' ``` And for Windows 10: ``` >>> import os >>> os.name 'nt' >>> import platform >>> platform.system() 'Windows' >>> platform.release() '10' ```
How to identify which OS Python is running on
[ "", "python", "operating-system", "cross-platform", "platform-agnostic", "" ]
How do I delimit a Javascript data-bound string parameter in an anchor `OnClick` event? * I have an anchor tag in an ASP.NET Repeater control. * The `OnClick` event of the anchor contains a call to a Javascript function. * The Javascript function takes a string for its input parameter. * The string parameter is populated with a data-bound value from the Repeater. I need the "double quotes" for the `Container.DataItem`. I need the 'single quotes' for the `OnClick`. And I still need *one more delimiter* (triple quotes?) for the input string parameter of the Javascript function call. Since I can't use 'single quotes' again, how do I ensure the Javascript function knows the input parameter is a string and not an integer? Without the extra quotes around the input string parameter, the Javascript function thinks I'm passing in an integer. The anchor: ``` <a id="aShowHide" onclick='ToggleDisplay(<%# DataBinder.Eval(Container.DataItem, "JobCode") %>);' >Show/Hide</a> ``` And there is my Javascript: ``` <script language="JavaScript" type="text/javascript"> /* Shows/Hides the Jobs Div */ function ToggleDisplay(jobCode) { /* Each div has its ID set dynamically ('d' plus the JobCode) */ var elem = document.getElementById('d' + jobCode); if (elem) { if (elem.style.display != 'block') { elem.style.display = 'block'; elem.style.visibility = 'visible'; } else { elem.style.display = 'none'; elem.style.visibility = 'hidden'; } } } </script> ```
I recently had a similar problem, and the only way to solve it was to use plain old HTML codes for single (`&#39;`) and double quotes (`&#34;`). The source code was a total mess, of course, but it worked. Try ``` <a id="aShowHide" onclick='ToggleDisplay(&#34;<%# DataBinder.Eval(Container.DataItem, "JobCode") %>&#34;);'>Show/Hide</a> ``` or ``` <a id="aShowHide" onclick='ToggleDisplay(&#39;<%# DataBinder.Eval(Container.DataItem, "JobCode") %>&#39;);'>Show/Hide</a> ```
``` onclick='javascript:ToggleDisplay("<%# DataBinder.Eval(Container.DataItem, "JobCode")%> "); ' ``` Use like above.
Triple Quotes? How do I delimit a databound Javascript string parameter in ASP.NET?
[ "", "asp.net", "javascript", "anchor", "quotes", "" ]
How can I do this fast? Sure I can do this: ``` static bool ByteArrayCompare(byte[] a1, byte[] a2) { if (a1.Length != a2.Length) return false; for (int i=0; i<a1.Length; i++) if (a1[i]!=a2[i]) return false; return true; } ``` But I'm looking for either a [BCL](https://en.wikipedia.org/wiki/Standard_Libraries_%28CLI%29#Base_Class_Library) function or some highly optimized proven way to do this. ``` java.util.Arrays.equals((sbyte[])(Array)a1, (sbyte[])(Array)a2); ``` works nicely, but it doesn't look like that would work for x64. Note my super-fast answer [here](https://stackoverflow.com/a/8808245/4489).
Edit: the modern fast way is to use `a1.SequenceEqual(a2)`. User *gil* suggested unsafe code which spawned this solution: ``` // Copyright (c) 2008-2013 Hafthor Stefansson // Distributed under the MIT/X11 software license // Ref: http://www.opensource.org/licenses/mit-license.php. static unsafe bool UnsafeCompare(byte[] a1, byte[] a2) { unchecked { if(a1==a2) return true; if(a1==null || a2==null || a1.Length!=a2.Length) return false; fixed (byte* p1=a1, p2=a2) { byte* x1=p1, x2=p2; int l = a1.Length; for (int i=0; i < l/8; i++, x1+=8, x2+=8) if (*((long*)x1) != *((long*)x2)) return false; if ((l & 4)!=0) { if (*((int*)x1)!=*((int*)x2)) return false; x1+=4; x2+=4; } if ((l & 2)!=0) { if (*((short*)x1)!=*((short*)x2)) return false; x1+=2; x2+=2; } if ((l & 1)!=0) if (*((byte*)x1) != *((byte*)x2)) return false; return true; } } } ``` which does 64-bit based comparison for as much of the array as possible. This kind of counts on the fact that the arrays start qword aligned. It'll work if not qword aligned, just not as fast as if it were. It performs about seven times faster than the simple `for` loop. Using the J# library performed equivalently to the original `for` loop. Using .SequenceEqual runs around seven times slower; I think just because it is using IEnumerator.MoveNext. I imagine LINQ-based solutions being at least that slow or worse.
You can use the [Enumerable.SequenceEqual](http://msdn.microsoft.com/en-us/library/bb348567.aspx) method. ``` using System; using System.Linq; ... var a1 = new int[] { 1, 2, 3}; var a2 = new int[] { 1, 2, 3}; var a3 = new int[] { 1, 2, 4}; var x = a1.SequenceEqual(a2); // true var y = a1.SequenceEqual(a3); // false ``` If you can't use .NET 3.5 for some reason, your method is OK. The compiler/run-time environment will optimize your loop so you don't need to worry about performance.
Comparing two byte arrays in .NET
[ "", "c#", ".net", "arrays", "performance", "j#", "" ]
In Django's template language, you can use `{% url [viewname] [args] %}` to generate a URL to a specific view with parameters. How can you programmatically do the same in Python code? What I need is to create a list of menu items where each item has a name, URL, and an active flag (whether it's the current page or not). This is because it will be a lot cleaner to do this in Python than in the template language.
If you need to use something similar to the `{% url %}` template tag in your code, Django provides `django.core.urlresolvers.reverse()`. The `reverse` function has the following signature: ``` reverse(viewname, urlconf=None, args=None, kwargs=None) ``` <https://docs.djangoproject.com/en/dev/ref/urlresolvers/> At the time of this edit, the import is `from django.urls import reverse`.
I'm using two different approaches in my `models.py`. The first is the `permalink` decorator: ``` from django.db.models import permalink def get_absolute_url(self): """Construct the absolute URL for this Item.""" return ('project.app.views.view_name', [str(self.id)]) get_absolute_url = permalink(get_absolute_url) ``` You can also call `reverse` directly: ``` from django.core.urlresolvers import reverse def get_absolute_url(self): """Construct the absolute URL for this Item.""" return reverse('project.app.views.view_name', None, [str(self.id)]) ```
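Conceptually, `reverse()` is just a lookup from a view name plus arguments to a URL pattern. A stdlib-only sketch of that idea follows; this is illustrative only, not Django's actual implementation, and the pattern table and view names are invented for the example:

```python
# Hypothetical URL patterns; in Django these live in your URLconf.
URL_PATTERNS = {
    "item_detail": "/items/{}/",
    "archive": "/archive/{}/{}/",
}

def toy_reverse(viewname, args=()):
    """Build a URL from a view name and positional args, like reverse()."""
    try:
        template = URL_PATTERNS[viewname]
    except KeyError:
        raise ValueError("no URL pattern for view %r" % viewname)
    return template.format(*args)

# Building a menu list like the one the question describes:
menu = [
    {"name": "Item 42", "url": toy_reverse("item_detail", ["42"])},
    {"name": "Archive", "url": toy_reverse("archive", ["2008", "09"])},
]
print(menu[0]["url"])  # /items/42/
```

The real `reverse()` additionally handles regex-based patterns, namespaces, and keyword arguments, but the menu-building shape above is the same.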
How to generate urls in django
[ "", "python", "django", "url", "django-urls", "" ]
When I'm joining three or more tables together by a common column, I'd write my query like this: ``` SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id ``` A colleague recently asked me why I didn't do explicit *Join Transitive Closure* in my queries, like this: ``` SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id AND c.id = a.id ``` Are there really any advantages to this? Surely the optimiser can infer this for itself? *edit: I know it's evil syntax, but it's a quick and dirty example of legitimate legacy code. +1 @[Stu](https://stackoverflow.com/questions/11028/what-are-the-advantages-of-explicit-join-transitive-closure-in-sql#11114) for cleaning it up*
You don't need to do this in today's database engines, but there was a time when things like that would give the query optimizer more hints as to possible index paths and thus speedier results. These days that entire syntax is going out anyway.
This is filthy, evil legacy syntax. You write this as > ``` > Select > * -- Oh, and don't ever use *, either > From > A > Inner Join B On A.ID = B.ID > Inner Join C On B.ID = C.ID > ```
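Both claims are easy to check against a real engine. A small Python `sqlite3` sketch (the schema and rows are invented for the illustration, matching the question's hypothetical a/b/c tables) showing that the two-predicate, three-predicate, and explicit-JOIN forms all return the same rows:

```python
import sqlite3

# Tiny invented schema matching the question's a/b/c tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    CREATE TABLE c (id INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (2), (3), (4);
    INSERT INTO c VALUES (3), (4), (5);
""")

# Original two-predicate form.
two = conn.execute(
    "SELECT a.id FROM a, b, c WHERE a.id = b.id AND b.id = c.id"
).fetchall()

# "Transitive closure" form with the redundant third predicate.
three = conn.execute(
    "SELECT a.id FROM a, b, c "
    "WHERE a.id = b.id AND b.id = c.id AND c.id = a.id"
).fetchall()

# Modern explicit-JOIN syntax.
modern = conn.execute(
    "SELECT a.id FROM a "
    "INNER JOIN b ON a.id = b.id "
    "INNER JOIN c ON b.id = c.id"
).fetchall()

print(two == three == modern)  # True; only id 3 appears in all three tables
```

The extra `c.id = a.id` predicate is logically implied by the other two, so every form produces the same result set; whether it ever helped an optimizer is the historical point made above.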
What are the advantages of explicit Join Transitive Closure in SQL?
[ "", "sql", "" ]
Should the folders in a solution match the namespace? In one of my teams projects, we have a class library that has many sub-folders in the project. Project Name and Namespace: `MyCompany.Project.Section`. Within this project, there are several folders that match the namespace section: * Folder `Vehicles` has classes in the `MyCompany.Project.Section.Vehicles` namespace * Folder `Clothing` has classes in the`MyCompany.Project.Section.Clothing` namespace * etc. Inside this same project, is another rogue folder * Folder `BusinessObjects` has classes in the `MyCompany.Project.Section` namespace There are a few cases like this where folders are made for "organizational convenience". My question is: What's the standard? In class libraries do the folders usually match the namespace structure or is it a mixed bag?
Also, note that if you use the built-in templates to add classes to a folder, it will by default be put in a namespace that reflects the folder hierarchy. The classes will be easier to find, and that alone should be reason good enough. The rules we follow are: * Project/assembly name is the same as the root namespace, except for the .dll ending * Only exception to the above rule is a project with a .Core ending, the .Core is stripped off * Folders equal namespaces * One type per file (class, struct, enum, delegate, etc.) makes it easy to find the right file
No. I've tried both methods on small and large projects, both solo and with a team of developers. I found the simplest and most productive route was to have a single namespace per project and all classes go into that namespace. You are then free to put the class files into whatever project folders you want. There is no messing about adding using statements at the top of files all the time as there is just a single namespace. It is important to organize source files into folders and in my opinion that's all folders should be used for. Requiring that these folders also map to namespaces is unnecessary, creates more work, and I found was actually harmful to organization because the added burden encourages disorganization. Take this FxCop warning for example: > **CA1020: Avoid namespaces with few types** > cause: A namespace other than the global namespace contains fewer than five types > <https://msdn.microsoft.com/en-gb/library/ms182130.aspx> This warning encourages the dumping of new files into a generic Project.General folder, or even the project root, until you have four similar classes to justify creating a new folder. Will that ever happen? **Finding Files** The accepted answer says "The classes will be easier to find and that alone should be reasons good enough." I suspect the answer is referring to having multiple namespaces in a project which don't map to the folder structure, rather than what I am suggesting, which is a project with a single namespace. In any case, while you can't determine which folder a class file is in from the namespace, you can find it by using Go To Definition or the Solution Explorer search box in Visual Studio. Also this isn't really a big issue in my opinion. I don't expend even 0.1% of my development time on the problem of finding files to justify optimizing it. **Name clashes** Sure, creating multiple namespaces allows a project to have two classes with the same name. But is that really a good thing? 
Is it perhaps easier to just disallow that from being possible? Allowing two classes with the same name creates a more complex situation where 90% of the time things work a certain way and then suddenly you find you have a special case. Say you have two Rectangle classes defined in separate namespaces: * class Project1.Image.Rectangle * class Project1.Window.Rectangle It's possible to hit an issue that a source file needs to include both namespaces. Now you have to write out the full namespace everywhere in that file: ``` var rectangle = new Project1.Window.Rectangle(); ``` Or mess about with some nasty using statement: ``` using Rectangle = Project1.Window.Rectangle; ``` With a single namespace in your project you are forced to come up with different, and I'd argue more descriptive, names like this: * class Project1.ImageRectangle * class Project1.WindowRectangle And usage is the same everywhere, you don't have to deal with a special case when a file uses both types. **using statements** ``` using Project1.General; using Project1.Image; using Project1.Window; using Project1.Window.Controls; using Project1.Shapes; using Project1.Input; using Project1.Data; ``` vs ``` using Project1; ``` The ease of not having to add namespaces all the time while writing code. It's not the time it takes really, it's the break in flow of having to do it and just filling up files with lots of using statements - for what? Is it worth it? **Changing project folder structure** If folders are mapped to namespaces then the project folder path is effectively hard-coded into each source file. This means any rename or move of a file or folder in the project requires actual file contents to change. Both the namespace declaration of files in that folder and using statements in a whole bunch of other files that reference classes in that folder. While the changes themselves are trivial with tooling, it usually results in a large commit consisting of many files whose classes haven't even changed. 
With a single namespace in the project you can change project folder structure however you want without any source files themselves being modified. **Visual Studio automatically maps the namespace of a new file to the project folder it's created in** Unfortunate, but I find the hassle of correcting the namespace is less than the hassle of dealing with them. Also I've got into the habit of copy pasting an existing file rather than using Add->New. **Intellisense and Object Browser** The biggest benefit in my opinion of using multiple namespaces in large projects is having extra organization when viewing classes in any tooling that displays classes in a namespaces hierarchy. Even documentation. Obviously having just one namespace in the project results in all classes being displayed in a single list rather than broken into categories. However personally I've never been stumped or delayed because of a lack of this so I don't find it a big enough benefit to justify multiple namespaces. Although if I were writing a large public class library then I *would* probably use multiple namespaces in the project so that the assembly looked neat in the tooling and documentation.
Should the folders in a solution match the namespace?
[ "", "c#", ".net", "namespaces", "" ]
What are the pros and cons of using table aliases in SQL? I personally try to avoid them, as I think they make the code less readable (especially when reading through large where/and statements), but I'd be interested in hearing any counter-points to this. When is it generally a good idea to use table aliases, and do you have any preferred formats?
Table aliases are a necessary evil when dealing with highly normalized schemas. For example, and I'm not the architect on this DB so bear with me, it can take 7 joins in order to get a clean and complete record back which includes a person's name, address, phone number and company affiliation. Rather than the somewhat standard single character aliases, I tend to favor short word aliases so the above example's SQL ends up looking like: ``` select person.FirstName ,person.LastName ,addr.StreetAddress ,addr.City ,addr.State ,addr.Zip ,phone.PhoneNumber ,company.CompanyName from tblPeople person left outer join tblAffiliations affl on affl.personID = person.personID left outer join tblCompany company on company.companyID = affl.companyID ``` ... etc
Well, there are some cases you *must* use them, like when you need to join to the same table twice in one query. It also depends on whether you have unique column names across tables. In our legacy database we have 3-letter prefixes for all columns, stemming from an abbreviated form of the table name, simply because one ancient database system we were once compatible with didn't support table aliases all that well. If you have column names that occur in more than one table, specifying the table name as part of the column reference is a must, and thus a table alias will allow for a shorter syntax.
SQL Table Aliases - Good or Bad?
[ "", "sql", "alias", "" ]
I had used Server Explorer and related tools for graphical database development with Microsoft SQL Server in some of my learning projects - and it was a great experience. However, in my work I deal with Oracle DB and SQLite and my hobby projects use MySQL (because they are hosted on Linux). Is there a way to leverage the database-related tools in Visual Studio with other database providers?
Here is instructions on how to connect to your MySQL database from Visual Studio: > To make the connection in server > explorer you need to do the following: > > * first of all you need to install the MyODBC connector 3.51 (or latest) on > the development machine (NB. you can > find this at > <http://www.mysql.com/products/connector/odbc/> > ) > * Create a datasource in Control Panel/Administrative Tools with a > connection to your database. This data > source is going to be used purely for > Server Manager and you dont need to > worry about creating the same data > source on your clients PC when you > have made your VS.NET application > (Unless you want to) - I dont want to > cover this in this answer, too long. > For the purpose of this explanation I > will pretend that you created a MyODBC > data source called 'AADSN' to database > 'noddy' on mysqlserver 'SERVER01' and > have a root password of 'fred'. The > server can be either the Computer Name > (found in Control > Panel/System/Computer Name), or > alternatively it can be the IP > Address. NB. Make sure that you test > this connection before continuing with > this explanation. > * open your VS.NET project > * go to server explorer > * right-click on 'Data Connections' > * select 'Add Connection' > * In DataLink Properties, go to the provider tab and select "Microsoft OLE > DB Provider For ODBC drivers" > * Click Next > * If you previously created an ODBC data source then you could just select > that. The disadvantage of this is that > when you install your project > application on the client machine, the > same data source needs to be there. I > prefer to use a connection string. > This should look something like: > > DSN=AADSN;DESC=MySQL ODBC 3.51 Driver > DSN;DATABASE=noddy;SERVER=SERVER01;UID=root;PASSWORD=fred;PORT=3306;SOCKET=;OPTION=11;STMT=; > > If you omit the password from the > connection string then you must make > sure that the datasource you created > (AADSN) contains a password. 
I am not > going to describe what these mean, you > can look in the documentation for > myodbc for that, just ensure that you > get a "Connection Succeeded" message > when you test the datasource.
I found this during my research on Sqlite. I haven't had the chance to use it though. Let us know if this works for you. <http://sqlite.phxsoftware.com/> > **System.Data.SQLite** System.Data.SQLite is the original > SQLite database engine and a complete > ADO.NET 2.0 provider all rolled into a > single mixed mode assembly. > > ... > > **Visual Studio 2005/2008 Design-Time > Support** > > You can add a SQLite connection to the > Server Explorer, create queries with > the query designer, drag-and-drop > tables onto a Typed DataSet and more! > > SQLite's designer works on full > editions of Visual Studio 2005/2008, > including VS2005 Express Editions. > > NEW You can create/edit views, tables, indexes, foreign keys, > constraints and triggers interactively > within the Visual Studio Server > Explorer!
Does Visual Studio Server Explorer support custom database providers?
[ "", "c#", "mysql", "visual-studio", "oracle", "sqlite", "" ]
I want to open a folder window, in the appropriate file manager, from within a cross-platform (windows/mac/linux) Python application. On OSX, I can open a window in the finder with ``` os.system('open "%s"' % foldername) ``` and on Windows with ``` os.startfile(foldername) ``` What about unix/linux? Is there a standard way to do this or do I have to special case gnome/kde/etc and manually run the appropriate application (nautilus/konqueror/etc)? This looks like something that could be specified by the [freedesktop.org](http://freedesktop.org) folks (a python module, similar to `webbrowser`, would also be nice!).
``` os.system('xdg-open "%s"' % foldername) ``` `xdg-open` can be used for files/urls also
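Combining this answer with the macOS and Windows cases from the question, a small cross-platform helper can be sketched in Python; the `folder_open_command` and `open_folder` names here are hypothetical, not part of any standard module:

```python
import subprocess
import sys

def folder_open_command(folder, platform=None):
    """Return the argv list that opens *folder* in the native file manager.

    Uses 'open' on macOS, 'explorer' on Windows, and freedesktop.org's
    'xdg-open' everywhere else, as discussed above.
    """
    platform = platform or sys.platform
    if platform == "darwin":
        return ["open", folder]
    if platform.startswith("win"):
        return ["explorer", folder]
    return ["xdg-open", folder]

def open_folder(folder):
    """Launch the file manager without blocking the caller."""
    subprocess.Popen(folder_open_command(folder))
```

Passing an argv list to `subprocess.Popen` also sidesteps the shell-quoting problems that `os.system('open "%s"' % foldername)` has with unusual folder names.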
This would probably have to be done manually, or be a config item, since there are many file managers that users may want to use; you would also want to provide a way to pass command options. There might be a function that launches the defaults for KDE or GNOME in their respective toolkits, but I haven't had reason to look for them.
Standard way to open a folder window in linux?
[ "", "python", "linux", "cross-platform", "desktop", "" ]
I have a small utility that I use to download an MP3 file from a website on a schedule and then builds/updates a podcast XML file which I've added to iTunes. The text processing that creates/updates the XML file is written in Python. However, I use wget inside a Windows `.bat` file to download the actual MP3 file. I would prefer to have the entire utility written in Python. I struggled to find a way to actually download the file in Python, thus why I resorted to using `wget`. So, how do I download the file using Python?
Use [`urllib.request.urlopen()`](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlopen): ``` import urllib.request with urllib.request.urlopen('http://www.example.com/') as f: html = f.read().decode('utf-8') ``` This is the most basic way to use the library, minus any error handling. You can also do more complex stuff such as changing headers. On Python 2, the method is in [`urllib2`](http://docs.python.org/2/library/urllib2.html): ``` import urllib2 response = urllib2.urlopen('http://www.example.com/') html = response.read() ```
One more, using [`urlretrieve`](https://docs.python.org/3/library/urllib.request.html#module-urllib.request): ``` import urllib.request urllib.request.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3") ``` (for Python 2 use `import urllib` and `urllib.urlretrieve`)
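For a large MP3, it can be worth streaming the response to disk rather than reading it fully into memory first; a minimal Python 3 sketch along the lines of the answers above (the `download` helper name is made up):

```python
import shutil
import urllib.request

def download(url, dest_path, chunk_size=64 * 1024):
    """Stream *url* to *dest_path* without holding the whole file in memory."""
    with urllib.request.urlopen(url) as response, open(dest_path, "wb") as out:
        # copyfileobj reads and writes in chunk_size pieces
        shutil.copyfileobj(response, out, chunk_size)
```

Usage for the podcast case would then be `download("http://www.example.com/songs/mp3.mp3", "mp3.mp3")`.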
How to download a file over HTTP?
[ "", "python", "http", "urllib", "" ]
How can you reliably and dynamically load a JavaScript file? This can be used to implement a module or component that, when 'initialized', dynamically loads all needed JavaScript library scripts on demand. The client that uses the component isn't required to load all the library script files (and manually insert `<script>` tags into their web page) that implement this component - just the 'main' component script file. **How do mainstream JavaScript libraries accomplish this (Prototype, jQuery, etc)?** Do these tools merge multiple JavaScript files into a single redistributable 'build' version of a script file? Or do they do any dynamic loading of ancillary 'library' scripts? An addition to this question: **is there a way to handle the event after a dynamically included JavaScript file is loaded?** Prototype has `document.observe` for document-wide events. Example: ``` document.observe("dom:loaded", function() { // initially hide all containers for tab content $$('div.tabcontent').invoke('hide'); }); ``` **What are the available events for a script element?**
You may create a script element dynamically, using [Prototype](http://www.prototypejs.org/): ``` new Element("script", {src: "myBigCodeLibrary.js", type: "text/javascript"}); ``` The problem here is that we do not know *when* the external script file is fully loaded. We often want our dependent code on the very next line and like to write something like: ``` if (iNeedSomeMore) { Script.load("myBigCodeLibrary.js"); // includes code for myFancyMethod(); myFancyMethod(); // cool, no need for callbacks! } ``` There is a smart way to inject script dependencies without the need of callbacks. You simply have to pull the script via a *synchronous AJAX request* and eval the script at the global level. If you use Prototype the Script.load method looks like this: ``` var Script = { _loadedScripts: [], include: function(script) { // include script only once if (this._loadedScripts.include(script)) { return false; } // request file synchronous var code = new Ajax.Request(script, { asynchronous: false, method: "GET", evalJS: false, evalJSON: false }).transport.responseText; // eval code on global level if (Prototype.Browser.IE) { window.execScript(code); } else if (Prototype.Browser.WebKit) { $$("head").first().insert(Object.extend( new Element("script", { type: "text/javascript" }), { text: code } )); } else { window.eval(code); } // remember included script this._loadedScripts.push(script); } }; ```
There is no import / include / require in JavaScript, but there are two main ways to achieve what you want: 1 - You can load it with an AJAX call then use eval. This is the most straightforward way but it's limited to your domain because of the JavaScript safety settings, and using eval is opening the door to bugs and hacks. 2 - Add a script element with the script URL in the HTML. Definitely the best way to go. You can load the script even from a foreign server, and it's clean as you use the browser parser to evaluate the code. You can put the `script` element in the `head` element of the web page, or at the bottom of the `body`. Both of these solutions are discussed and illustrated here. Now, there is a big issue you must know about. Doing that implies that you remotely load the code. Modern web browsers will load the file and keep executing your current script because they load everything asynchronously to improve performance. It means that if you use these tricks directly, you won't be able to use your newly loaded code the next line after you asked it to be loaded, because it will still be loading. E.g.: my\_lovely\_script.js contains MySuperObject ``` var js = document.createElement("script"); js.type = "text/javascript"; js.src = jsFilePath; document.body.appendChild(js); var s = new MySuperObject(); Error : MySuperObject is undefined ``` Then you reload the page hitting F5. And it works! Confusing... So what to do about it? Well, you can use the hack the author suggests in the link I gave you. In summary, for people in a hurry, he uses an event to run a callback function when the script is loaded. So you can put all the code using the remote library in the callback function. 
E.G : ``` function loadScript(url, callback) { // adding the script element to the head as suggested before var head = document.getElementsByTagName('head')[0]; var script = document.createElement('script'); script.type = 'text/javascript'; script.src = url; // then bind the event to the callback function // there are several events for cross browser compatibility script.onreadystatechange = callback; script.onload = callback; // fire the loading head.appendChild(script); } ``` Then you write the code you want to use AFTER the script is loaded in a lambda function : ``` var myPrettyCode = function() { // here, do what ever you want }; ``` Then you run all that : ``` loadScript("my_lovely_script.js", myPrettyCode); ``` Ok, I got it. But it's a pain to write all this stuff. Well, in that case, you can use as always the fantastic free jQuery framework, which let you do the very same thing in one line : ``` $.getScript("my_lovely_script.js", function() { alert("Script loaded and executed."); // here you can use anything you defined in the loaded script }); ```
Dynamically load a JavaScript file
[ "", "javascript", "file", "import", "include", "" ]
VC++ makes functions which are implemented within the class declaration inline functions. If I declare a class `Foo` as follows, then are the CONSTRUCTOR and DESTRUCTOR inline functions? ``` class Foo { char* p; public: Foo() { p = new char[0x00100000]; } ~Foo() { delete [] p; } }; { Foo f; (f); } ```
Defining the body of the constructor INSIDE the class has the same effect as placing the function OUTSIDE the class with the "inline" keyword. In both cases it's a hint to the compiler. An "inline" function doesn't necessarily mean the function will be inlined. That depends on the complexity of the function and other rules.
The short answer is yes. Any function can be declared inline, and putting the function body in the class definition is one way of doing that. You could also have done: ``` class Foo { char* p; public: Foo(); ~Foo(); }; inline Foo::Foo() { p = new char[0x00100000]; } inline Foo::~Foo() { delete [] p; } ``` However, it's up to the compiler if it actually does inline the function. VC++ pretty much ignores your requests for inlining. It will only inline a function if it thinks it's a good idea. Recent versions of the compiler will also inline things that are in separate .obj files and not declared inline (e.g. from code in different .cpp files) if you use [link time code generation](http://msdn.microsoft.com/en-us/magazine/cc301698.aspx). You could use the [\_\_forceinline](http://msdn.microsoft.com/en-us/library/z8y1yy88.aspx) keyword to tell the compiler that you really really mean it when you say "inline this function", but it's usually not worth it. In many cases, the compiler really does know best.
In C++ can constructor and destructor be inline functions?
[ "", "c++", "constructor", "destructor", "" ]
If I have managed to locate and verify the existence of a file using Server.MapPath and I now want to send the user directly to that file, what is the **fastest** way to convert that absolute path back into a relative web path?
Perhaps this might work: ``` String RelativePath = AbsolutePath.Replace(Request.ServerVariables["APPL_PHYSICAL_PATH"], String.Empty); ``` *I'm using c# but could be adapted to vb.*
Wouldn't it be nice to have **Server.RelativePath(path)**? Well, you just need to extend it ;-) ``` public static class ExtensionMethods { public static string RelativePath(this HttpServerUtility srv, string path, HttpRequest context) { return path.Replace(context.ServerVariables["APPL_PHYSICAL_PATH"], "~/").Replace(@"\", "/"); } } ``` With this you can simply call ``` Server.RelativePath(path, Request); ```
Absolute path back to web-relative path
[ "", "c#", "asp.net", ".net", "path", "mappath", "" ]
If I have data like this: | Key | Name | | --- | --- | | 1 | Dan | | 2 | Tom | | 3 | Jon | | 4 | Tom | | 5 | Sam | | 6 | Dan | What is the SQL query to bring back the records where `Name` is repeated 2 or more times? So the result I would want is | Tom | | --- | | Dan |
Couldn't be simpler... ``` Select Name, Count(Name) As Count From Table Group By Name Having Count(Name) > 1 Order By Count(Name) Desc ``` This could also be extended to delete duplicates: ``` Delete From Table Where Key In ( Select Max(Key) From Table Group By Name Having Count(Name) > 1 ) ```
``` select name from table group by name having count(name) > 1 ```
SQL query, count and group by
[ "", "sql", "" ]
I'd like some shorthand for this: ``` Map rowToMap(row) { def rowMap = [:]; row.columns.each{ rowMap[it.name] = it.val } return rowMap; } ``` given the way the GDK stuff is, I'd expect to be able to do something like: ``` Map rowToMap(row) { row.columns.collectMap{ [it.name,it.val] } } ``` but I haven't seen anything in the docs... am I missing something? or am I just way too lazy?
I recently came across the need to do exactly that: converting a list into a map. This question was posted before Groovy version 1.7.9 came out, so the method [`collectEntries`](http://docs.groovy-lang.org/latest/html/api/org/codehaus/groovy/runtime/DefaultGroovyMethods.html#collectEntries(java.lang.Iterable,%20groovy.lang.Closure)) didn't exist yet. It works exactly as the `collectMap` method [that was proposed](https://stackoverflow.com/questions/18538/shortcut-for-creating-a-map-from-a-list-in-groovy/19077#19077): ``` Map rowToMap(row) { row.columns.collectEntries{[it.name, it.val]} } ``` If for some reason you are stuck with an older Groovy version, the [`inject`](http://docs.groovy-lang.org/latest/html/api/org/codehaus/groovy/runtime/DefaultGroovyMethods.html#inject(java.lang.Object,%20groovy.lang.Closure)) method can also be used (as proposed [here](https://stackoverflow.com/questions/18538/shortcut-for-creating-a-map-from-a-list-in-groovy/198614#198614)). This is a slightly modified version that takes only one expression inside the closure (just for the sake of character saving!): ``` Map rowToMap(row) { row.columns.inject([:]) {map, col -> map << [(col.name): col.val]} } ``` The `+` operator can also be used instead of the `<<`.
Check out "inject". Real functional programming wonks call it "fold". ``` columns.inject([:]) { memo, entry -> memo[entry.name] = entry.val return memo } ``` And, while you're at it, you probably want to define methods as Categories instead of right on the metaClass. That way, you can define it once for all Collections: ``` class PropertyMapCategory { static Map mapProperty(Collection c, String keyParam, String valParam) { return c.inject([:]) { memo, entry -> memo[entry[keyParam]] = entry[valParam] return memo } } } ``` Example usage: ``` use(PropertyMapCategory) { println columns.mapProperty('name', 'val') } ```
shortcut for creating a Map from a List in groovy?
[ "", "java", "collections", "groovy", "expandometaclass", "" ]
(assume php5) consider ``` <?php $foo = 'some words'; //case 1 print "these are $foo"; //case 2 print "these are {$foo}"; //case 3 print 'these are ' . $foo; ?> ``` Is there much of a difference between 1 and 2? If not, what about between 1/2 and 3?
Well, as with all "What might be faster in real life" questions, you can't beat a real life test. ``` function timeFunc($function, $runs) { $times = array(); for ($i = 0; $i < $runs; $i++) { $time = microtime(); call_user_func($function); $times[$i] = microtime() - $time; } return array_sum($times) / $runs; } function Method1() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are $foo"; } function Method2() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are {$foo}"; } function Method3() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are " . $foo; } print timeFunc('Method1', 10) . "\n"; print timeFunc('Method2', 10) . "\n"; print timeFunc('Method3', 10) . "\n"; ``` Give it a few runs to page everything in, then... 0.0035568 0.0035388 0.0025394 So, as expected, the interpolation are virtually identical (noise level differences, probably due to the extra characters the interpolation engine needs to handle). Straight up concatenation is about 66% of the speed, which is no great shock. The interpolation parser will look, find nothing to do, then finish with a simple internal string concat. Even if the concat were expensive, the interpolator will still have to do it, **after** all the work to parse out the variable and trim/copy up the original string. **Updates By Somnath:** I added Method4() to above real time logic. ``` function Method4() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = 'these are ' . $foo; } print timeFunc('Method4', 10) . "\n"; Results were: 0.0014739 0.0015574 0.0011955 0.001169 ``` When you are just declaring a string only and no need to parse that string too, then why to confuse PHP debugger to parse. I hope you got my point.
The performance difference has been [irrelevant](http://nikic.github.com/2012/01/09/Disproving-the-Single-Quotes-Performance-Myth.html) since at least January 2012, and likely earlier: ``` Single quotes: 0.061846971511841 seconds Double quotes: 0.061599016189575 seconds ``` Earlier versions of PHP may have had a difference - I personally prefer single quotes to double quotes, so it was a convenient difference. The conclusion of the article makes an excellent point: > Never trust a statistic you didn’t forge yourself. (Although the article quotes the phrase, the original quip was likely falsely [attributed](http://www.statistik.baden-wuerttemberg.de/Service/Veroeff/Monatshefte/20041111.mha) to Winston Churchill, invented by Joseph Goebbels' propaganda ministry to portray Churchill as a liar: > Ich traue keiner Statistik, die ich nicht selbst gefälscht habe. This loosely translates to, "I do not trust a statistic that I did not fake myself.")
Speed difference in using inline strings vs concatenation in php5?
[ "", "php", "optimization", "performance", "" ]
I've seen it mentioned in many blogs around the net, but I believe it should be discussed here. What can we do when we have an MVC framework (I am interested in Zend) in PHP but our host does not provide mod\_rewrite? Are there any "short-cuts"? Can we transfer control in any way (so that a mapping may occur between pages)? Any ideas? Thank you :-)
Zend Framework should work without `mod_rewrite`, if you can live with your URLs looking more like "/path/to/app/index.php/controller/action". If you had mod\_rewrite you could do away with the "index.php" bit, but it should work with it too. It's all a matter of setting up the routes to accept the index.php part.
OK, my verdict :-): I have successfully used Zend without mod\_rewrite and it's, as you've all said, site/index.php/controller/action. I knew that before posting this. I've also found around the net a technique that "pushes" 404 pages to index.php, therefore anything that is not a resource (e.g. CSS, image, etc.) gets there, with one exception: POST values. So I decided that the next time an application has to be made on that specific server, to ask politely for mod\_rewrite. If the administrator can not provide it, talk with my boss or, if it is for me, switch provider. Generally, it is a shame sometimes that the PHP market is so fragmented (php4, php5, php6, mod\_rewrite, mod\_auth, mod\_whatever), but this is another story...
php Zend / MVC without mod_rewrite
[ "", "php", "model-view-controller", "zend-framework", "" ]
There are a few ways to get class-like behavior in javascript, the most common seem to be prototype based like this: ``` function Vector(x, y, z) { this.x = x; this.y = y; this.z = z; return this; } Vector.prototype.length = function () { return Math.sqrt(this.x * this.x ... ); } ``` and closure based approaches similar to ``` function Vector(x, y, z) { this.length = function() { return Math.sqrt(x * x + ...); } } ``` For various reasons the latter is faster, but I've seen (and I frequently do write) the prototype version and was curious as to what other people do.
Assigning functions to the prototype is better (for public methods) because all instances of the class will share the same copy of the method. If you assign the function inside the constructor as in the second example, every time you create a new instance, the constructor creates a new copy of the length function and assigns it to just that one instance. However this latter technique is useful if you *want* each instance to have its own copy of the function, the main use of that being private/privileged methods which have access to private variables declared inside the constructor and inherited via the closure mechanism. Douglas Crockford has a good [summary](http://javascript.crockford.com/private.html).
There is also the object literal approach to the prototype: ``` var Vector = function(){}; Vector.prototype = { init:function(x,y,z) { this.x = x; this.y = y; this.z = z; }, length:function() { return Math.sqrt(this.x * this.x + ...); } }; var v1 = new Vector(); v1.init(1,2,3); ```
What style do you use for creating a "class"?
[ "", "javascript", "" ]
Does C# have built-in support for parsing strings of page numbers? By page numbers, I mean the format you might enter into a print dialog that's a mixture of comma and dash-delimited. Something like this: ``` 1,3,5-10,12 ``` What would be really nice is a solution that gave me back some kind of list of all page numbers represented by the string. In the above example, getting a list back like this would be nice: ``` 1,3,5,6,7,8,9,10,12 ``` I just want to avoid rolling my own if there's an easy way to do it.
Should be simple (here wrapped in an iterator method so the `yield return` statements compile): ``` static IEnumerable<int> ParsePages(string input) { foreach( string s in input.Split(',') ) { // try and get the number int num; if( int.TryParse( s, out num ) ) { yield return num; continue; // skip the rest } // otherwise we might have a range // split on the range delimiter string[] subs = s.Split('-'); int start, end; // now see if we can parse a start and end if( subs.Length > 1 && int.TryParse(subs[0], out start) && int.TryParse(subs[1], out end) && end >= start ) { // create a range between the two values int rangeLength = end - start + 1; foreach(int i in Enumerable.Range(start, rangeLength)) { yield return i; } } } } ``` **Edit:** thanks for the fix ;-)
It doesn't have a built-in way to do this, but it would be trivial to do using String.Split. Simply split on ',' then you have a series of strings that represent either page numbers or ranges. Iterate over that series and do a String.Split of '-'. If there isn't a result, it's a plain page number, so stick it in your list of pages. If there is a result, take the left and right of the '-' as the bounds and use a simple for loop to add each page number to your final list over that range. Can't take but 5 minutes to do, then maybe another 10 to add in some sanity checks to throw errors when the user tries to input invalid data (like "1-2-3" or something.)
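The split-on-comma-then-split-on-dash algorithm described above is language-agnostic; here is a hypothetical Python sketch of it (as the answer notes, sanity checks for malformed input such as "1-2-3" belong on top - this version simply raises `ValueError` for them):

```python
def parse_pages(spec):
    """Expand a print-dialog page spec like '1,3,5-10,12' into a flat list."""
    pages = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            # a range such as '5-10': take the bounds and expand inclusively
            start, end = part.split("-", 1)
            pages.extend(range(int(start), int(end) + 1))
        else:
            pages.append(int(part))
    return pages
```

For the example in the question, `parse_pages("1,3,5-10,12")` yields `[1, 3, 5, 6, 7, 8, 9, 10, 12]`.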
Does C# have built-in support for parsing page-number strings?
[ "", "c#", "parsing", "" ]
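The split-and-expand approach described in both answers above is language-agnostic. A minimal Python sketch of the same idea (the function name and the absence of input validation are my own simplifications, not from either answer):

```python
def parse_page_ranges(spec):
    """Expand a print-dialog page spec like '1,3,5-10,12' into a flat list."""
    pages = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            # A dash-delimited range: expand it inclusively.
            start, end = part.split("-", 1)
            pages.extend(range(int(start), int(end) + 1))
        else:
            # A plain page number.
            pages.append(int(part))
    return pages
```

As the second answer points out, a production version would also need sanity checks for malformed input such as `1-2-3`.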
I've used JDOM in the past, and have looked briefly at XOM and `DOM4j`. Each seems to provide essentially the same thing: a simplified wrapper over the (very obtuse) standard W3C DOM APIs. I know that JDOM went through some effort to develop a JSR and standardization process at one point, but as far as I know that effort never went anywhere. All of the projects appear to be in stasis, with little new development in the past several years. Is there a consensus as to which is the best? Also, what are the pros and cons of each?
I like [XOM](http://www.xom.nu/), because I like the way Elliotte Rusty Harold thinks. Of the ones you mention I believe it's the one that strays away from the standard DOM APIs the most, but I consider that a benefit. I once implemented a DOM library for Cocoa, and XOM was my inspiration. I've worked with JDOM as well, and there's absolutely nothing wrong with it, although I do prefer XOM.
While [dom4j](http://www.dom4j.org/dom4j-1.6.1/guide.html) is an old player, we have been using it for a while and haven't regretted it yet. Strong features: simplicity, XPath support and others. Weak sides: it has yet to support Java 5.0, but version 2.0 has finally been announced.
Which Java DOM Wrapper is the Best or Most Popular?
[ "", "java", "xml", "dom", "" ]
I infrequently (monthly/quarterly) generate hundreds of Crystal Reports reports using Microsoft SQL Server 2005 database views. Are those views wasting CPU cycles and RAM during all the time that I am not reading from them? Should I instead use stored procedures, temporary tables, or short-lived normal tables since I rarely read from my views? I'm not a DBA so I don't know what's going on behind the scenes inside the database server. Is it possible to have too many database views? What's considered best practice?
For the most part, it doesn't matter. Yes, SQL Server will have more choices when it parses SELECT \* FROM table (it'll have to look in the system catalogs for 'table') but it's highly optimized for that, and provided you have sufficient RAM (most servers nowadays do), you won't notice a difference between 0 and 1,000 views. However, from a people-perspective, trying to manage and figure out what "hundreds" of views are doing is probably impossible, so you likely have a lot of duplicated code in there. What happens if some business rules change that are embedded in these redundant views? The main point of views is to encapsulate business logic into a pseudo table (so you may have a person table, but then a view called "active\_persons" which does some magic). Creating a view for each report is kind of silly unless each report is so isolated and unique that there is no ability to re-use.
A view is a query that you run often with preset parameters. If you know you will be looking at the same data all the time, you can create a view for ease of use and for data binding. That being said, when you select from a view, the view's defining query is run along with the query you are running. For example, if vwCustomersWhoHavePaid is: ``` Select * from customers where paid = 1 ``` and the query you run to return the customers who paid after August first looks like this: ``` Select * from vwCustomersWhoHavePaid where datepaid > '08/01/08' ``` then the query you are actually running is: ``` Select * from (Select * from customers where paid = 1) where datepaid > '08/01/08' ``` This is something you should keep in mind when creating views: they are a way of storing data that you look at often. It's just a way of organizing data so it's easier to access.
Is it okay to have a lot of database views?
[ "", "sql", "sql-server", "database", "database-design", "crystal-reports", "" ]
Is there a way to shutdown a computer using a built-in Java method?
You can create your own function to execute an OS command through the [command line](http://www.computerhope.com/shutdown.htm). For the sake of an example: ``` public static void main(String arg[]) throws IOException{ Runtime runtime = Runtime.getRuntime(); Process proc = runtime.exec("shutdown -s -t 0"); System.exit(0); } ``` But know where and why you'd want to use this, as others note.
Here's another example that could work cross-platform: ``` public static void shutdown() throws RuntimeException, IOException { String shutdownCommand; String operatingSystem = System.getProperty("os.name"); if ("Linux".equals(operatingSystem) || "Mac OS X".equals(operatingSystem)) { shutdownCommand = "shutdown -h now"; } // This will work on any version of windows including version 11 else if (operatingSystem.contains("Windows")) { shutdownCommand = "shutdown.exe -s -t 0"; } else { throw new RuntimeException("Unsupported operating system."); } Runtime.getRuntime().exec(shutdownCommand); System.exit(0); } ``` The specific shutdown commands may require different paths or administrative privileges.
Shutting down a computer
[ "", "java", "cross-platform", "shutdown", "" ]
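The OS-detection logic from the cross-platform answer above can be separated from the side effect of actually running the command. A hedged Python sketch (the helper name is mine; it only builds the command string, which a caller would hand to `subprocess`):

```python
import platform

def shutdown_command(os_name=None):
    """Map an OS name to its shutdown command, mirroring the Java answer above."""
    os_name = os_name or platform.system()
    if os_name in ("Linux", "Darwin", "Mac OS X"):
        # platform.system() reports "Darwin" on macOS; Java's os.name says "Mac OS X".
        return "shutdown -h now"
    if "Windows" in os_name:
        return "shutdown.exe -s -t 0"
    raise RuntimeError("Unsupported operating system.")
```

Note the same caveats apply as in the answers: the specific commands may require different paths or administrative privileges.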
What's the best framework for creating mock objects in Java? Why? What are the pros and cons of each framework?
I've had good success using [Mockito](https://github.com/mockito/mockito). When I tried learning about JMock and EasyMock, I found the learning curve to be a bit steep (though maybe that's just me). I like Mockito because of its simple and clean syntax that I was able to grasp pretty quickly. The minimal syntax is designed to support the common cases very well, although the few times I needed to do something more complicated I found what I wanted was supported and easy to grasp. Here's an (abridged) example from the Mockito homepage: ``` import static org.mockito.Mockito.*; List mockedList = mock(List.class); mockedList.clear(); verify(mockedList).clear(); ``` It doesn't get much simpler than that. The only major downside I can think of is that it won't mock static methods.
I am the creator of PowerMock so obviously I must recommend that! :-) [PowerMock](http://powermock.org) extends both EasyMock and Mockito with the ability to [mock static methods](http://code.google.com/p/powermock/wiki/MockStatic), final and even private methods. The EasyMock support is complete, but the Mockito plugin needs some more work. We are planning to add JMock support as well. PowerMock is not intended to replace other frameworks, rather it can be used in the tricky situations when other frameworks don't allow mocking. PowerMock also contains other useful features such as [suppressing static initializers](http://code.google.com/p/powermock/wiki/SuppressUnwantedBehavior) and constructors.
What's the best mock framework for Java?
[ "", "java", "unit-testing", "mocking", "" ]
I have a listening port on my server that I'm connecting to using a Java class and the `Socket` interface, i.e. ``` Socket mySocket = new Socket(host,port); ``` I then grab an `OutputStream`, decorate with a `PrintWriter` in autoflush mode and I'm laughing - except if the listening port closes. Then I get ``` tcp4 0 0 *.9999 *.* LISTEN tcp 0 0 127.0.0.1.45737 127.0.0.1.9999 CLOSE_WAIT ``` and I can't seem to detect the problem in the program - I've tried using the `isConnected()` method on the socket but it doesn't seem to know that the connection is closed. I want to be aware of the problem the next time I try and write to the Socket so that I can try and reconnect and report the issue. Any advice please? Thanks all
Set a short timeout? Does `isOutputShutdown()` not get you what you want? You could always build a `SocketWatcher` class that spins up in its own `Thread` and repeatedly tries to write empty strings to the `Socket` until that raises a `SocketClosedException`.
The only reliable way to detect a broken connection in TCP is to write to it, which will eventually cause a 'connection reset' IOException. However, due to buffering, it won't happen on the first write after the disconnection, but on a subsequent write. You can't do anything about this.
Configure a Java Socket to fail-fast on disconnect?
[ "", "java", "exception", "sockets", "networking", "" ]
I'm a PHP developer, and I use the MVC pattern and object-oriented code. I really want to write applications for the iPhone, but to do that I need to know Cocoa, but to do that I need to know Objective-C 2.0, but to do that I need to know C, and to do that I need to know about compiled languages (versus interpreted). Where should I begin? Do I really need to begin with plain old "C", as Joel would recommend? Caveat: I like to produce working widgets, not elegant theories.
Yes, you're really best off learning C and then Objective-C. There are some resources that will get you over the C and Objective-C language learning curve: * Uli Kusterer's online book [Masters of the Void](http://www.zathras.de/angelweb/masters-of-the-void.htm) * Stephen Kochan's book [Programming in Objective-C](https://rads.stackoverflow.com/amzn/click/com/0672325861) And there are some resources that will get you over the framework learning curve: * CocoaLab's online book [Become an Xcoder](http://www.cocoalab.com/) * Aaron Hillegass' book [Cocoa Programming for Mac OS X](https://rads.stackoverflow.com/amzn/click/com/0321503619) Despite what Jeff might say, learning C is important for professional software developers for just this reason. It's sort of a baseline low-level *lingua franca* that other innovation happens atop. The reason Jeff has been able to get away with not learning C is not because you don't need to know C, but because he learned Pascal which is in many ways isomorphic to C. (It has all the same concepts, including pointers and manual memory management.)
Get [Cocoa Programming For Mac OS X](https://rads.stackoverflow.com/amzn/click/com/0321503619) by Aaron Hillegass. This should get you on your way to Cocoa programming. You can look up C-related programming as things come up. [K&R C Programming Language](https://rads.stackoverflow.com/amzn/click/com/0131103628) is the definitive reference that is still applicable today to C programming. Get the [Cocoa book](https://rads.stackoverflow.com/amzn/click/com/0321503619), work though it and if you encounter any snags, just ask your C questions here :)
Cocoa tips for PHP developers?
[ "", "php", "c", "objective-c", "cocoa", "" ]
Is there an efficient method of converting an integer into the written numbers, for example: ``` string Written = IntegerToWritten(21); ``` would return "Twenty One". Is there any way of doing this that doesn't involve a massive look-up table?
This should work reasonably well: ``` public static class HumanFriendlyInteger { static string[] ones = new string[] { "", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine" }; static string[] teens = new string[] { "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen" }; static string[] tens = new string[] { "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety" }; static string[] thousandsGroups = { "", " Thousand", " Million", " Billion" }; private static string FriendlyInteger(int n, string leftDigits, int thousands) { if (n == 0) { return leftDigits; } string friendlyInt = leftDigits; if (friendlyInt.Length > 0) { friendlyInt += " "; } if (n < 10) { friendlyInt += ones[n]; } else if (n < 20) { friendlyInt += teens[n - 10]; } else if (n < 100) { friendlyInt += FriendlyInteger(n % 10, tens[n / 10 - 2], 0); } else if (n < 1000) { friendlyInt += FriendlyInteger(n % 100, (ones[n / 100] + " Hundred"), 0); } else { friendlyInt += FriendlyInteger(n % 1000, FriendlyInteger(n / 1000, "", thousands+1), 0); if (n % 1000 == 0) { return friendlyInt; } } return friendlyInt + thousandsGroups[thousands]; } public static string IntegerToWritten(int n) { if (n == 0) { return "Zero"; } else if (n < 0) { return "Negative " + IntegerToWritten(-n); } return FriendlyInteger(n, "", 0); } } ``` (Edited to fix a bug w/ million, billion, etc.)
I use this handy library called Humanizer. <https://github.com/Humanizr/Humanizer> It supports several cultures and converts not only numbers but also dates to words, and it's very simple to use. Here's how I use it: ``` int someNumber = 543; var culture = new System.Globalization.CultureInfo("en-US"); var result = someNumber.ToWords(culture); // 543 -> five hundred forty-three ``` And voilà!
Convert integers to written numbers
[ "", "c#", "integer", "" ]
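The grouping idea in the accepted answer (split the number into thousands groups, spell each group under 1000, append the group name) translates directly to other languages. A compact Python sketch of the same algorithm, with no lookup table beyond the word lists (all names are mine):

```python
ONES = ["", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
        "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen",
        "Seventeen", "Eighteen", "Nineteen"]
TENS = ["", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety"]
GROUPS = ["", " Thousand", " Million", " Billion"]

def under_thousand(n):
    """Spell out 1..999."""
    words = []
    if n >= 100:
        words.append(ONES[n // 100] + " Hundred")
        n %= 100
    if n >= 20:
        words.append(TENS[n // 10])
        n %= 10
    if n:
        words.append(ONES[n])
    return " ".join(words)

def int_to_written(n):
    """Spell out any int, three digits at a time."""
    if n == 0:
        return "Zero"
    if n < 0:
        return "Negative " + int_to_written(-n)
    parts = []
    group = 0
    while n:
        n, chunk = divmod(n, 1000)
        if chunk:
            parts.append(under_thousand(chunk) + GROUPS[group])
        group += 1
    return " ".join(reversed(parts))
```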
> **Possible Duplicate:** > [How do I calculate someone's age in C#?](https://stackoverflow.com/questions/9/how-do-i-calculate-someones-age-in-c) Maybe this is silly, and I don't have issues with my age, but sometimes it is good to calculate someone's exact age. I entered my birthdate in my profile (01/12/1975, "dd/mm/yyyy") and it calculated 33, but I'm actually still 32; isn't it better to calculate the exact age? Maybe: ``` DateTime dt1 = DateTime.Now; TimeSpan dt2; dt2 = dt1.Subtract(new DateTime(1975, 12, 01)); double year = dt2.TotalDays / 365; ``` The result of year is 32.77405678074. Could this code be OK?
If you were born on January 12th 1975, you would be 33 years old today. If you were born on December 1st 1975, you would be 32 years old today. If you read the note by the birthday field when editing your profile you'll see it says "YYYY/MM/DD", I'm sure it will try to interpret dates of other formats but it looks like it interprets MM/DD/YYYY (US standard dates) in preference to DD/MM/YYYY (European standard dates). The easy fix is to enter the date of your birthday according to the suggested input style.
> Maybe this is silly, and I don't have issues with my age, but sometimes it is good to calculate someone's exact age. I entered my birthdate in my profile (01/12/1975, "dd/mm/yyyy") and it calculated 33, but I'm actually still 32; isn't it better to calculate the exact age? My guess would be that this is a localization issue, though I don't know how it would happen, since (at least for me) the profile has you fill out your age in the format "YYYY/MM/DD". But your birthday is one that reads as a valid date (January 12th) in traditional U.S. settings, so this is the area I'd look into. I was born in 1975, also, and my birthday is next month, and it's got my age right.
Exact age calculation
[ "", "c#", "" ]
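For what the question is actually after, completed whole years are usually computed by comparing the month/day pair rather than dividing `TotalDays` by 365, which drifts across leap years. A minimal Python sketch of that idea (helper name is mine):

```python
from datetime import date

def age_in_years(born, today=None):
    """Completed years: subtract one if this year's birthday hasn't happened yet."""
    today = today or date.today()
    before_birthday = (today.month, today.day) < (born.month, born.day)
    return today.year - born.year - before_birthday
```

With the two readings of the questioner's birthdate, this gives 32 for 1 December 1975 and 33 for 12 January 1975 as of September 2008, matching the answers above.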
### Update: Solved, with code [I got it working, see my answer below for the code...](https://stackoverflow.com/questions/18585/why-cant-you-bind-the-size-of-a-windows-form-to-applicationsettings#19056) ### Original Post As Tundey pointed out in [his answer](https://stackoverflow.com/questions/18421/best-way-to-bind-windows-forms-properties-to-applicationsettings-in-c#18456) to my [last question](https://stackoverflow.com/questions/18421/best-way-to-bind-windows-forms-properties-to-applicationsettings-in-c), you can bind nearly everything about a windows forms control to ApplicationSettings pretty effortlessly. So is there really no way to do this with form Size? [This tutorial](http://www.codeproject.com/KB/cs/UserSettings.aspx) says you need to handle Size explicitly so you can save RestoreBounds instead of size if the window is maximized or minimized. However, I hoped I could just use a property like: ``` public Size RestoreSize { get { if (this.WindowState == FormWindowState.Normal) { return this.Size; } else { return this.RestoreBounds.Size; } } set { ... } } ``` But I can't see a way to bind this in the designer (Size is notably missing from the PropertyBinding list).
I finally came up with a Form subclass that solves this, once and for all. To use it: 1. Inherit from RestorableForm instead of Form. 2. Add a binding in (ApplicationSettings) -> (PropertyBinding) to WindowRestoreState. 3. Call Properties.Settings.Default.Save() when the window is about to close. Now window position and state will be remembered between sessions. Following the suggestions from other posters below, I included a function ConstrainToScreen that makes sure the window fits nicely on the available displays when restoring itself. ### Code ``` // Consider this code public domain. If you want, you can even tell // your boss, attractive women, or the other guy in your cube that // you wrote it. Enjoy! using System; using System.Windows.Forms; using System.ComponentModel; using System.Drawing; namespace Utilities { public class RestorableForm : Form, INotifyPropertyChanged { // We invoke this event when the binding needs to be updated. public event PropertyChangedEventHandler PropertyChanged; // This stores the last window position and state private WindowRestoreStateInfo windowRestoreState; // Now we define the property that we will bind to our settings. [Browsable(false)] // Don't show it in the Properties list [SettingsBindable(true)] // But do enable binding to settings public WindowRestoreStateInfo WindowRestoreState { get { return windowRestoreState; } set { windowRestoreState = value; if (PropertyChanged != null) { // If anybody's listening, let them know the // binding needs to be updated: PropertyChanged(this, new PropertyChangedEventArgs("WindowRestoreState")); } } } protected override void OnClosing(CancelEventArgs e) { WindowRestoreState = new WindowRestoreStateInfo(); WindowRestoreState.Bounds = WindowState == FormWindowState.Normal ? 
Bounds : RestoreBounds; WindowRestoreState.WindowState = WindowState; base.OnClosing(e); } protected override void OnLoad(EventArgs e) { base.OnLoad(e); if (WindowRestoreState != null) { Bounds = ConstrainToScreen(WindowRestoreState.Bounds); WindowState = WindowRestoreState.WindowState; } } // This helper class stores both position and state. // That way, we only have to set one binding. public class WindowRestoreStateInfo { Rectangle bounds; public Rectangle Bounds { get { return bounds; } set { bounds = value; } } FormWindowState windowState; public FormWindowState WindowState { get { return windowState; } set { windowState = value; } } } private Rectangle ConstrainToScreen(Rectangle bounds) { Screen screen = Screen.FromRectangle(WindowRestoreState.Bounds); Rectangle workingArea = screen.WorkingArea; int width = Math.Min(bounds.Width, workingArea.Width); int height = Math.Min(bounds.Height, workingArea.Height); // mmm....minimax int left = Math.Min(workingArea.Right - width, Math.Max(bounds.Left, workingArea.Left)); int top = Math.Min(workingArea.Bottom - height, Math.Max(bounds.Top, workingArea.Top)); return new Rectangle(left, top, width, height); } } } ``` ### Settings Bindings References * [SettingsBindableAttribute](http://msdn.microsoft.com/en-us/library/system.componentmodel.settingsbindableattribute.aspx) * [INotifyPropertyChanged](http://msdn.microsoft.com/en-us/library/system.componentmodel.inotifypropertychanged.aspx)
The reason why the Form.Size property is not available in the settings binding UI is because this property is marked **DesignerSerializationVisibility.Hidden**. This means that the designer doesn't know how to serialise it, let alone generate a data binding for it. Instead the **Form.ClientSize** property is the one that gets serialised. If you try and get clever by binding **Location** and **ClientSize**, you'll see another problem. When you try to resize your form from the left or top edge, you'll see weird behaviour. This is apparently related to the way that two-way data binding works in the context of property sets that mutually affect each other. Both **Location** and **ClientSize** eventually call into a common method, **SetBoundsCore()**. Also, data binding to properties like **Location** and **Size** is just not efficient. Each time the user moves or resizes the form, Windows sends hundreds of messages to the form, causing the data binding logic to do a lot of processing, when all you really want is to store the last position and size before the form is closed. This is a very simplified version of what I do: ``` private void MyForm_FormClosing(object sender, FormClosingEventArgs e) { Properties.Settings.Default.MyState = this.WindowState; if (this.WindowState == FormWindowState.Normal) { Properties.Settings.Default.MySize = this.Size; Properties.Settings.Default.MyLoc = this.Location; } else { Properties.Settings.Default.MySize = this.RestoreBounds.Size; Properties.Settings.Default.MyLoc = this.RestoreBounds.Location; } Properties.Settings.Default.Save(); } private void MyForm_Load(object sender, EventArgs e) { this.Size = Properties.Settings.Default.MySize; this.Location = Properties.Settings.Default.MyLoc; this.WindowState = Properties.Settings.Default.MyState; } ``` Why is this a very simplified version? Because doing this properly is [a lot trickier](http://miksovsky.blogs.com/flowstate/2005/10/the_fractal_nat.html) than it looks :-)
Why can't you bind the Size of a windows form to ApplicationSettings?
[ "", "c#", "visual-studio", "data-binding", ".net-2.0", "" ]
CSS and Javascript files don't change very often, so I want them to be cached by the web browser. But I also want the web browser to see changes made to these files without requiring the user to clear their browser cache. I also want a solution that works well with a version control system such as Subversion. --- > Some solutions I have seen involve adding a version number to the end of the file in the form of a query string. > > Could use the SVN revision number to automate this for you: [ASP.NET Display SVN Revision Number](https://stackoverflow.com/questions/2308/aspnet-display-svn-revision-number) Can you specify how you include the [Revision](http://svnbook.red-bean.com/en/1.4/svn.advanced.props.special.keywords.html) variable of another file? That is, in the HTML file, I can include the Revision number in the URL to the CSS or Javascript file. In the [Subversion book](http://svnbook.red-bean.com) it says about Revision: "This keyword describes the last known revision in which this file changed in the repository". > Firefox also allows pressing `CTRL`+`R` to reload everything on a particular page. To clarify, I am looking for solutions that don't require the user to do anything on their part.
I found that if you append the last modified timestamp of the file onto the end of the URL the browser will request the files when it is modified. For example in PHP: ``` function urlmtime($url) { $parsed_url = parse_url($url); $path = $parsed_url['path']; if ($path[0] == "/") { $filename = $_SERVER['DOCUMENT_ROOT'] . "/" . $path; } else { $filename = $path; } if (!file_exists($filename)) { // If not a file then use the current time $lastModified = date('YmdHis'); } else { $lastModified = date('YmdHis', filemtime($filename)); } if (strpos($url, '?') === false) { $url .= '?ts=' . $lastModified; } else { $url .= '&ts=' . $lastModified; } return $url; } function include_css($css_url, $media='all') { // According to Yahoo, using link allows for progressive // rendering in IE where as @import url($css_url) does not echo '<link rel="stylesheet" type="text/css" media="' . $media . '" href="' . urlmtime($css_url) . '">'."\n"; } function include_javascript($javascript_url) { echo '<script type="text/javascript" src="' . urlmtime($javascript_url) . '"></script>'."\n"; } ```
Some solutions I have seen involve adding a version number to the end of the file in the form of a query string. ``` <script type="text/javascript" src="funkycode.js?v1"> ``` You could use the SVN revision number to automate this [for you](https://stackoverflow.com/questions/2308/aspnet-display-svn-revision-number) by including the word **LastChangedRevision** in your html file after where v1 appears above. You must also setup your repository to do this. I hope this further clarifies my answer? Firefox also allows pressing `CTRL` + `R` to reload everything on a particular page.
How can I make the browser see CSS and Javascript changes?
[ "", "javascript", "css", "http", "caching", "" ]
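The accepted answer's trick (append the file's last-modified timestamp as a query string) is portable beyond PHP. A hedged Python sketch of just the URL-rewriting step (the function name and the fall-back-to-zero behavior are my own choices, not from the answer):

```python
import os

def cache_busted(url, doc_root="."):
    """Append ?ts=<mtime> so the URL changes whenever the file changes."""
    path = url.split("?", 1)[0].lstrip("/")
    full = os.path.join(doc_root, path)
    # Fall back to 0 when the file can't be found, rather than the current time.
    ts = int(os.path.getmtime(full)) if os.path.exists(full) else 0
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}ts={ts}"
```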
Given a string of JSON data, how can I safely turn that string into a JavaScript object? Obviously I can do this unsafely with something like: ``` var obj = eval("(" + json + ')'); ``` but that leaves me vulnerable to the JSON string containing other code, which it seems very dangerous to simply eval.
[`JSON.parse(jsonString)`](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse) is a pure JavaScript approach so long as you can guarantee a reasonably modern browser.
The jQuery method is now deprecated. Use this method instead: ``` let jsonObject = JSON.parse(jsonString); ``` --- *Original answer using deprecated jQuery functionality*: If you're using jQuery just use: ``` jQuery.parseJSON( jsonString ); ``` It's exactly what you're looking for (see the jQuery [documentation](http://api.jquery.com/jQuery.parseJSON/)).
Safely turning a JSON string into an object
[ "", "javascript", "json", "parsing", "" ]
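The same eval-vs-parser distinction exists in most languages: a real JSON parser accepts only the JSON grammar and rejects executable code outright. A small Python illustration of the principle, using only the standard library:

```python
import json

def safe_parse(text):
    """Parse JSON with a real parser; anything that isn't JSON raises an error."""
    return json.loads(text)
```

Script-like input such as `alert('x')` makes `json.loads` raise `json.JSONDecodeError` (a `ValueError` subclass) instead of executing anything, which is exactly the safety property the question is after.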
I have a collection of objects in a database. Images in a photo gallery, products in a catalog, chapters in a book, etc. Each object is represented as a row. I want to be able to arbitrarily order these images, storing that ordering in the database so when I display the objects, they will be in the right order. For example, let's say I'm writing a book, and each chapter is an object. I write my book, and put the chapters in the following order: > Introduction, Accessibility, Form vs. Function, Errors, Consistency, Conclusion, Index It goes to the editor, and comes back with the following suggested order: > Introduction, Form, Function, Accessibility, Consistency, Errors, Conclusion, Index How can I store this ordering in the database in a robust, efficient way? I've had the following ideas, but I'm not thrilled with any of them: 1. Array. Each row has an ordering ID, when order is changed (via a removal followed by an insertion), the order IDs are updated. This makes retrieval easy, since it's just `ORDER BY`, but it seems easy to break. > `// REMOVAL` > `UPDATE ... SET orderingID=NULL WHERE orderingID=removedID` > `UPDATE ... SET orderingID=orderingID-1 WHERE orderingID > removedID` > `// INSERTION` > `UPDATE ... SET orderingID=orderingID+1 WHERE orderingID > insertionID` > `UPDATE ... SET orderID=insertionID WHERE ID=addedID` 2. Linked list. Each row has a column for the id of the next row in the ordering. Traversal seems costly here, though there may by some way to use `ORDER BY` that I'm not thinking of. 3. Spaced array. Set the orderingID (as used in #1) to be large, so the first object is 100, the second is 200, etc. Then when an insertion happens, you just place it at `(objectBefore + objectAfter)/2`. Of course, this would need to be rebalanced occasionally, so you don't have things too close together (even with floats, you'd eventually run into rounding errors). None of these seem particularly elegant to me. Does anyone have a better way to do it?
Since I've mostly run into this with Django, I've found [this solution](http://www.djangosnippets.org/snippets/1053/) to be the most workable. It seems that there isn't any "right way" to do this in a relational database.
Another alternative would be (if your RDBMS supports it) to use columns of type array. While this breaks the normalization rules, it can be useful in situations like this. One database that I know has arrays is PostgreSQL.
Represent Ordering in a Relational Database
[ "", "sql", "database", "django", "django-models", "" ]
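Option #3 from the question (the spaced array) is easy to prototype outside the database. A sketch of the two operations it needs, midpoint insertion and occasional rebalancing (the constant and function names are mine, not from the question):

```python
GAP = 100  # spacing between consecutive ordering keys after a rebalance

def insert_between(before, after):
    """Midpoint key for a new row; None signals that a rebalance is required."""
    if after - before < 2:
        return None  # no integer fits strictly between the neighbours
    return (before + after) // 2

def rebalance(count):
    """Fresh, evenly spaced keys: 100, 200, 300, ..."""
    return [GAP * (i + 1) for i in range(count)]
```

Using integer midpoints sidesteps the float rounding errors the question worries about; the price is that a cluster of nearby inserts eventually returns `None` and forces a rebalance pass.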
I need to find a bottleneck and need to accurately as possible measure time. Is the following code snippet the best way to measure the performance? ``` DateTime startTime = DateTime.Now; // Some execution process DateTime endTime = DateTime.Now; TimeSpan totalTimeTaken = endTime.Subtract(startTime); ```
No, it's not. Use the [Stopwatch](http://msdn2.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx) (in `System.Diagnostics`) ``` Stopwatch sw = Stopwatch.StartNew(); PerformWork(); sw.Stop(); Console.WriteLine("Time taken: {0}ms", sw.Elapsed.TotalMilliseconds); ``` Stopwatch automatically checks for the existence of high-precision timers. It is worth mentioning that `DateTime.Now` often is quite a bit slower than `DateTime.UtcNow` due to the work that has to be done with timezones, [DST](http://en.wikipedia.org/wiki/Daylight_saving_time) and such. DateTime.UtcNow typically has a resolution of 15 ms. See [John Chapman's blog post](http://jaychapman.blogspot.com/2007/12/datetimenow-precision-issues-enter.html) about `DateTime.Now` precision for a great summary. Interesting trivia: The stopwatch falls back on `DateTime.UtcNow` if your hardware doesn't support a high frequency counter. You can check to see if Stopwatch uses hardware to achieve high precision by looking at the static field [Stopwatch.IsHighResolution](http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.ishighresolution.aspx).
If you want something quick and dirty I would suggest using Stopwatch instead for a greater degree of precision. ``` Stopwatch sw = new Stopwatch(); sw.Start(); // Do Work sw.Stop(); Console.WriteLine("Elapsed time: {0}", sw.Elapsed.TotalMilliseconds); ``` Alternatively, if you need something a little more sophisticated you should probably consider using a 3rd party profiler such as [ANTS](http://www.red-gate.com/products/ants_profiler/index.htm).
Is DateTime.Now the best way to measure a function's performance?
[ "", "c#", ".net", "performance", "datetime", "timer", "" ]
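The same advice applies outside .NET: time code with a monotonic, high-resolution clock rather than the wall clock. In Python that is `time.perf_counter()`, which plays the role of `Stopwatch` in this sketch (helper name is mine):

```python
import time

def time_it(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) using a monotonic clock."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start
```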
I am using SourceForge for some Open Source projects and I want to automate the deployment of releases to the SourceForge File Release System. I use Maven for my builds and the standard SFTP deployment mechanism doesn't seem to work unless you do some manual preparation work. I have come across some old postings on other forums suggesting that the only approach is to write a Wagon specifically for SourceForge. Has anybody had any recent experience with this?
I'm not able to test this to confirm, but I believe it is possible without writing any plugins. You can [deploy to SourceForge using SCP](http://sourceforge.net/apps/trac/sourceforge/wiki/SCP), and the maven-deploy-plugin can be configured to [use SCP](http://maven.apache.org/plugins/maven-deploy-plugin/examples/deploy-ssh-external.html) so it should work. You can also deploy your [site to SourceForge](http://maven.apache.org/plugins/maven-site-plugin/examples/site-deploy-to-sourceforge.net.html) via SCP. You would configure the SourceForge server in your settings.xml to use a "combined" username with a comma separator. With these credentials: ``` SourceForge username: foo SourceForge user password: secret SourceForge project name: bar Path: /home/frs/project/P/PR/PROJECT_UNIX_NAME/ - Substitute your project UNIX name data for /P/PR/PROJECT_UNIX_NAME ``` The server element would look like this: ``` <server> <id>sourceforge</id> <username>foo,bar</username> <password>secret</password> </server> ``` And the distributionManagement section in your POM would look like this: ``` <!-- Enabling the use of FTP --> <distributionManagement> <repository> <id>ssh-repository</id> <url> scpexe://frs.sourceforge.net:/home/frs/project/P/PR/PROJECT_UNIX_NAME</url> </repository> </distributionManagement> ``` Finally declare that ssh-external is to be used: ``` <build> <extensions> <extension> <groupId>org.apache.maven.wagon</groupId> <artifactId>wagon-ssh-external</artifactId> <version>1.0-alpha-5</version> </extension> </extensions> </build> ``` --- If this doesn't work, you may be able to use the recommended approach in the site reference above, i.e. 
create a shell on shell.sourceforge.net with your username and project group: ``` ssh -t <username>,<project name>@shell.sf.net create ``` Then use shell.sourceforge.net (instead of web.sourceforge.net) in your site URL in the distributionManagement section: ``` <url>scp://shell.sourceforge.net/home/frs/project/P/PR/PROJECT_UNIX_NAME/</url> ```
I have uploaded an example to sourceforge.net at: <http://sf-mvn-plugins.sourceforge.net/example-1jar-thinlet/> You can check it out via svn, so you can see how to use plugins to upload to and download from the sourceforge.net file release area and web site. The main point for uploading is to use sftp. Add code similar to this to your pom.xml ``` <distributionManagement> <!-- use the following if you're not using a snapshot version. --> <repository> <id>sourceforge-sf-mvn-plugins</id> <name>FRS Area</name> <uniqueVersion>false</uniqueVersion> <url>sftp://web.sourceforge.net/home/frs/project/s/sf/sf-mvn-plugins/m2-repo</url> </repository> <site> <id>sourceforge-sf-mvn-plugins</id> <name>Web Area</name> <url> sftp://web.sourceforge.net/home/groups/s/sf/sf-mvn-plugins/htdocs/${artifactId} </url> </site> </distributionManagement> ``` Add similar code to settings.xml ``` <server> <id>sourceforge-sf-mvn-plugins-svn</id> <username>tmichel,sf-mvn-plugins</username> <password>secret</password> </server> <server> <id>sourceforge-sf-mvn-plugins</id> <username>user,project</username> <password>secret</password> </server> ``` The main point for download is to use the wagon-http-sourceforge maven plugin - please see: sf-mvn-plugins.sourceforge.net/wagon-http-sourceforge/FAQ.html Please add the following code to your pom.xml ``` <repositories> <repository> <id>sourceforge-svn</id> <name>SF Maven Plugin SVN Repository</name> <url>http://sf-mvn-plugins.svn.sourceforge.net/svnroot/sf-mvn-plugins/_m2-repo/trunk</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>sourceforge-frs</id> <name>SF Maven Plugin Repository</name> <url>http://sourceforge.net/projects/sf-mvn-plugins/files/m2-repo</url> </pluginRepository> </pluginRepositories> <build> <extensions> <extension> <groupId>net.sf.maven.plugins</groupId> <artifactId>wagon-http-sourceforge</artifactId> <version>0.4</version> </extension> </extensions> : </build> ```
How can I deploy artifacts from a Maven build to the SourceForge File Release System?
[ "", "java", "maven-2", "sourceforge", "" ]
How can I check `file permissions`, without having to run operating system specific command via `passthru()` or `exec()`?
Use the [fileperms()](http://php.net/fileperms) function: ``` clearstatcache(); echo substr(sprintf('%o', fileperms('/etc/passwd')), -4); ```
You can use the [is\_readable()](http://www.php.net/manual/en/function.is-readable.php), [is\_executable()](http://www.php.net/manual/en/function.is-executable.php), etc. functions.
Check file permissions
[ "", "php", "file-permissions", "" ]
I have done Java and JSP programming in the past, but I am new to Java Server Faces and want to know if there's a set of best practices for JSF development.
Some tips: Understand the JSF request [lifecycle](http://www.java-samples.com/showtutorial.php?tutorialid=470) and where your various pieces of code fit in it. Especially find out why your model values will not be updated if there are validation errors. Choose a tag library and then stick with it. Take your time to determine your needs and prototype different libraries. Mixing different taglibs may cause severe harm to your mental health.
Consider using [facelets](https://facelets.dev.java.net/)- it greatly simplifies the worst parts of JSF development. I'm doing a CMS-based JSF project now without facelets (after doing a project with it) and it feels like my left arm is missing....
What are the best practices for JSF?
[ "", "java", "jsf", "" ]
I've developed my own delivery extension for Reporting Services 2005, to integrate this with our SaaS marketing solution. It takes the subscription, takes a snapshot of the report with a custom set of parameters, and then renders the report and sends an e-mail with a link and the report attached as XLS. Everything works fine until mail delivery... Here's my code for sending e-mail: ``` public static List<string> SendMail(SubscriptionData data, Stream reportStream, string reportName, string smptServerHostname, int smtpServerPort) { List<string> failedRecipients = new List<string>(); MailMessage emailMessage = new MailMessage(data.ReplyTo, data.To); emailMessage.Priority = data.Priority; emailMessage.Subject = data.Subject; emailMessage.IsBodyHtml = false; emailMessage.Body = data.Comment; if (reportStream != null) { Attachment reportAttachment = new Attachment(reportStream, reportName); emailMessage.Attachments.Add(reportAttachment); reportStream.Dispose(); } try { SmtpClient smtp = new SmtpClient(smptServerHostname, smtpServerPort); // Send the MailMessage smtp.Send(emailMessage); } catch (SmtpFailedRecipientsException ex) { // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List failedRecipients.Add(ex.FailedRecipient); } catch (SmtpFailedRecipientException ex) { // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List failedRecipients.Add(ex.FailedRecipient); } catch (SmtpException ex) { throw ex; } catch (Exception ex) { throw ex; } // Return the List of failed recipient e-mail addresses, so the client can maintain its list. return failedRecipients; } ``` The value for smptServerHostname is localhost, and the port is 25. I verified that I can actually send mail by using Telnet, and it works. **Here's the error message I get from SSRS:** ReportingServicesService!notification!4!08/28/2008-11:26:17:: Notification 6ab32b8d-296e-47a2-8d96-09e81222985c completed.
Success: False, Status: Exception Message: Failure sending mail. Stacktrace: at MyDeliveryExtension.MailDelivery.SendMail(SubscriptionData data, Stream reportStream, String reportName, String smptServerHostname, Int32 smtpServerPort) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MailDelivery.cs:line 48 at MyDeliveryExtension.MyDelivery.Deliver(Notification notification) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MyDelivery.cs:line 153, DeliveryExtension: My Delivery, Report: Clicks Development, Attempt 1 ReportingServicesService!dbpolling!4!08/28/2008-11:26:17:: NotificationPolling finished processing item 6ab32b8d-296e-47a2-8d96-09e81222985c **Could this have something to do with Trust/Code Access Security?** My delivery extension is granted full trust in rssrvpolicy.config: ``` <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="FullTrust" Name="MyDelivery_CodeGroup" Description="Code group for MyDelivery extension"> <IMembershipCondition class="UrlMembershipCondition" version="1" Url="C:\Program Files\Microsoft SQL Server\MSSQL.2\Reporting Services\ReportServer\bin\MyDeliveryExtension.dll" /> </CodeGroup> ``` Could trust be an issue here? Another theory: SQL Server and SSRS were installed in the security context of Local System. Am I right, or is this service account restricted from accessing any network resource? Even its own SMTP server? I tried changing all SQL Server Services logons to Administrator - but still without any success. I also tried logging onto the SMTP server in my code, by providing: NetworkCredential("Administrator", "password") and also NetworkCredential("Administrator", "password", "MyRepServer") Can anyone help here, please?
What's at: ``` at MyDeliveryExtension.MailDelivery.SendMail(SubscriptionData data, Stream reportStream, String reportName, String smptServerHostname, Int32 smtpServerPort) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MailDelivery.cs:line 48 at MyDeliveryExtension.MyDelivery.Deliver(Notification notification) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MyDelivery.cs:line 153 ``` Also you seem to be disposing the report stream, but that should be done by whatever opened that stream, not your method (it won't be obvious that attaching a stream disposes it). You're losing part of your stack trace due to how you re-throw exceptions. Don't throw the ex variable, just throw is enough. Try this tweak: ``` public static List<string> SendMail(SubscriptionData data, Stream reportStream, string reportName, string smptServerHostname, int smtpServerPort) { List<string> failedRecipients = new List<string>(); MailMessage emailMessage = new MailMessage(data.ReplyTo, data.To) { Priority = data.Priority, Subject = data.Subject, IsBodyHtml = false, Body = data.Comment }; if (reportStream != null) emailMessage.Attachments.Add(new Attachment(reportStream, reportName)); try { SmtpClient smtp = new SmtpClient(smptServerHostname, smtpServerPort); // Send the MailMessage smtp.Send(emailMessage); } catch (SmtpFailedRecipientsException ex) { // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List failedRecipients.Add(ex.FailedRecipient); //are you missing a loop here? only one failed address will ever be returned } catch (SmtpFailedRecipientException ex) { // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List failedRecipients.Add(ex.FailedRecipient); } // Return the List of failed recipient e-mail addresses, so the client can maintain its list. return failedRecipients; } ```
That's why you have to avoid: ``` catch (Exception ex) { throw ex; } ``` As that basically cloaks your exception in a new one. If you use: ``` catch (Exception ex) { throw; //note: no ex } ``` It keeps the original exception and stack trace.
Sending e-mail from a Custom SQL Server Reporting Services Delivery Extension
[ "", "c#", "reporting-services", "" ]
I need to decrement a Javascript date by 1 day, so that it rolls back across months/years correctly. That is, if I have a date of 'Today', I want to get the date for 'Yesterday'. It always seems to take more code than necessary when I do this, so I'm wondering if there's any simpler way. What's the simplest way of doing this? [Edit: Just to avoid confusion in an answer below, this is a **JavaScript** question, not a Java one.]
``` var d = new Date(); d.setDate(d.getDate() - 1); console.log(d); ```
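A quick self-contained check (my own addition, not from the original answer — the helper name `yesterday` is hypothetical) showing that `setDate()` really does roll back across month and year boundaries:

```javascript
// Hypothetical helper: returns a new Date one day before the given one.
// The input is copied first so the caller's Date object is not mutated.
function yesterday(date) {
  var d = new Date(date.getTime());
  d.setDate(d.getDate() - 1); // setDate() handles month/year rollover
  return d;
}

// Jan 1, 2020 rolls back across the year boundary to Dec 31, 2019.
// (Months are 0-based: 0 = January, 11 = December.)
var d = yesterday(new Date(2020, 0, 1));
console.log(d.getFullYear(), d.getMonth(), d.getDate()); // 2019 11 31

// Leap years work too: Mar 1, 2020 rolls back to Feb 29, 2020.
var leap = yesterday(new Date(2020, 2, 1));
console.log(leap.getMonth(), leap.getDate()); // 1 29
```

Because `setDate()` works in calendar terms rather than fixed milliseconds, it also avoids the off-by-one-day issues that subtracting 86400000 ms can produce when a DST transition makes a day 23 or 25 hours long.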
``` day.setDate(day.getDate() -1); //will be wrong ``` this will return wrong day. under UTC -03:00, check for ``` var d = new Date(2014,9,19); d.setDate(d.getDate()-1);// will return Oct 17 ``` **Better use:** ``` var n = day.getTime(); n -= 86400000; day = new Date(n); //works fine for everything ```
What's the simplest way to decrement a date in Javascript by 1 day?
[ "", "javascript", "browser", "date", "" ]
I'm sure this is easy but I can't figure it out: I have an ASP.NET page with some UpdatePanels on it. I want the page to *completely* load with some 'Please wait' text in the UpdatePanels. Then once the page is *completely loaded* I want to call a code-behind function to update the UpdatePanel. Any ideas as to what combination of Javascript and code-behind I need to implement this idea? SAL PS: I've tried putting my function call in the Page\_Load but then code is run *before* the page is delivered and, as the function I want to run takes some time, the page simply takes too long to load up.
Use a timer control that will be fired after a certain number of milliseconds (for page to load). In the timer tick event refresh the update panel.
I fiddled around with the ScriptManager suggestions - which I reckon I would have eventually got working but it seems to me that the Timer idea is easier to implement and not really(!) that much of a hack?! Here's how I got my panel updated *after* the initial page render was complete... default.aspx ``` <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="AJAXPostLoadCall._Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <h2>And now for a magic trick...</h2> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="True"> </asp:ScriptManager> <div> <asp:UpdatePanel ID="UpdatePanel1" runat="server"> <ContentTemplate> <asp:Timer ID="Timer1" runat="server" Interval="2000" ontick="Timer1_Tick" /> <asp:Label ID="Label1" runat="server">Something magic is about to happen...</asp:Label> </ContentTemplate> </asp:UpdatePanel> </div> </form> </body> </html> ``` and the code behind default.aspx.cs reads ``` using System; using System.Collections; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; namespace AJAXPostLoadCall { public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } public void DoMagic() { Label1.Text = "Abracadabra"; } protected void Timer1_Tick(object sender, EventArgs e) { // Do the magic, then disable the timer DoMagic(); Timer1.Enabled = false; } } } ``` So, the page loads up and the Timer (contained within the UpdatePanel) fires 2 secs after the page has loaded up (I think - I'm not sure when the Timer actually 
starts?). The label text is rewritten and then the Timer is disabled to stop any more updates. Simple enough - but can you purists out there tell me if this is a Horrible Hack?
ASP.NET AJAX: Firing an UpdatePanel after the page load is complete
[ "", "asp.net", "javascript", "asp.net-ajax", "" ]
I'd like to check if the user input is an email address in JavaScript, before sending it to a server or attempting to send an email to it, to prevent the most basic mistyping. How could I achieve this?
Using [regular expressions](http://en.wikipedia.org/wiki/Regular_expression) is probably the best way of validating an email address in JavaScript. [View a bunch of tests on JSFiddle](http://jsfiddle.net/ghvj4gy9/embedded/result.js) taken from [Chromium](https://cs.chromium.org/chromium/src/third_party/blink/web_tests/fast/forms/resources/ValidityState-typeMismatch-email.js?q=ValidityState-typeMismatch-email.js&sq=package:chromium&dr). ``` const validateEmail = (email) => { return String(email) .toLowerCase() .match( /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|.(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/ ); }; ``` The following is an example of a regular expression that accepts unicode. ``` const re = /^(([^<>()[\]\.,;:\s@\"]+(\.[^<>()[\]\.,;:\s@\"]+)*)|(\".+\"))@(([^<>()[\]\.,;:\s@\"]+\.)+[^<>()[\]\.,;:\s@\"]{2,})$/i; ``` Keep in mind that one should not rely on JavaScript validation alone, as JavaScript can be easily disabled by the client. Furthermore, it is important to validate on the server side. The following snippet of code is an example of JavaScript validating an email address on the client side. ``` const validateEmail = (email) => { return email.match( /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/ ); }; const validate = () => { const $result = $('#result'); const email = $('#email').val(); $result.text(''); if(validateEmail(email)){ $result.text(email + ' is valid.'); $result.css('color', 'green'); } else{ $result.text(email + ' is invalid.'); $result.css('color', 'red'); } return false; } $('#email').on('input', validate); ``` ``` <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <label for="email">Enter email address</label> <input id="email" type="email"> <p id="result"></p> ```
I've slightly modified [Jaymon's answer](https://stackoverflow.com/a/48800/4832311) for people who want really simple validation in the form of: ``` anystring@anystring.anystring ``` The regular expression: ``` /^\S+@\S+\.\S+$/ ``` To prevent matching multiple @ signs: ``` /^[^\s@]+@[^\s@]+\.[^\s@]+$/ ``` The above regexes match the whole string; remove the leading `^` and trailing `$` if you want to match anywhere in the string. The example below matches *anywhere* in the string. If you do want to match the whole string, you may want to `trim()` the string first. Example JavaScript function: ``` function validateEmail(email) { var re = /\S+@\S+\.\S+/; return re.test(email); } console.log(validateEmail('my email is anystring@anystring.any')); // true console.log(validateEmail('my email is anystring@anystring .any')); // false ```
How can I validate an email address in JavaScript?
[ "", "javascript", "html", "regex", "email-validation", "" ]
I'm wanting to parse a string into a nullable int in C#. ie. I want to get back either the int value of the string or null if it can't be parsed. I was kind of hoping that this would work ``` int? val = stringVal as int?; ``` But that won't work, so the way I'm doing it now is I've written this extension method ``` public static int? ParseNullableInt(this string value) { if (value == null || value.Trim() == string.Empty) { return null; } else { try { return int.Parse(value); } catch { return null; } } } ``` Is there a better way of doing this? **EDIT:** Thanks for the TryParse suggestions, I did know about that, but it worked out about the same. I'm more interested in knowing if there is a built-in framework method that will parse directly into a nullable int?
`int.TryParse` is probably a tad easier: ``` public static int? ToNullableInt(this string s) { int i; if (int.TryParse(s, out i)) return i; return null; } ``` **Edit** @Glenn `int.TryParse` is "built into the framework". It and `int.Parse` are *the* way to parse strings to ints.
You can do this in one line, using the conditional operator and the fact that you can cast `null` to a nullable type (two lines, if you don't have a pre-existing int you can reuse for the output of `TryParse`): Pre C#7: ``` int tempVal; int? val = Int32.TryParse(stringVal, out tempVal) ? tempVal : (int?)null; ``` With C#7's updated syntax that allows you to declare an output variable in the method call, this gets even simpler. ``` int? val = Int32.TryParse(stringVal, out var tempVal) ? tempVal : null; ```
How to parse a string into a nullable int
[ "", "c#", ".net", "string", ".net-3.5", "nullable", "" ]
In Java, there is no such thing as an unsigned byte. Working with some low level code, occasionally you need to work with bytes that have unsigned values greater than 128, which causes Java to interpret them as a negative number due to the MSB being used for sign. What's a good way to work around this? (Saying don't use Java is not an option)
When reading any single value from the array, copy it into something like a short or an int and manually convert the negative number into the positive value it should be. ``` byte[] foobar = ..; int value = foobar[10]; if (value < 0) value += 256; // Patch up the 'falsely' negative value ``` You can do a similar conversion when writing into the array.
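To make the patch-up concrete, here is a small self-contained sketch (my own illustration — the class and method names are made up, not part of the answer above):

```java
// Demonstrates recovering unsigned byte values in Java. A byte storing the
// unsigned value 200 reads back as -56, because Java bytes are signed.
public class UnsignedByteDemo {

    // Copy the byte into an int and patch up the 'falsely' negative value.
    static int toUnsigned(byte b) {
        int value = b; // widening byte -> int sign-extends
        if (value < 0) {
            value += 256;
        }
        return value;
    }

    public static void main(String[] args) {
        byte[] foobar = new byte[] { (byte) 200, 127, (byte) 255, 0 };
        for (byte b : foobar) {
            System.out.println(b + " -> " + toUnsigned(b));
        }
        // prints:
        // -56 -> 200
        // 127 -> 127
        // -1 -> 255
        // 0 -> 0
    }
}
```

The single-expression form `(b & 0xff)`, mentioned elsewhere on this page, produces the same result, since the mask discards the sign-extended high bits.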
It is actually possible to get rid of the if statement and the addition if you do it like this. ``` byte[] foobar = ..; int value = (foobar[10] & 0xff); ``` This way Java doesn't interpret the byte as a negative number and flip the sign bit on the integer also.
What is the best way to work around the fact that ALL Java bytes are signed?
[ "", "java", "byte", "unsigned", "signed", "" ]
Having a heckuva time with this one, though I feel I'm missing something obvious. I have a control that inherits from `System.Web.UI.WebControls.Button`, and then implements an interface that I have set up. So think... ``` public class Button : System.Web.UI.WebControls.Button, IMyButtonInterface { ... } ``` In the codebehind of a page, I'd like to find all instances of this button from the ASPX. Because I don't really know what the *type* is going to be, just the *interface* it implements, that's all I have to go on when looping through the control tree. Thing is, I've never had to determine if an object uses an interface versus just testing its type. **How can I loop through the control tree and yank anything that implements `IMyButtonInterface` in a clean way** (Linq would be fine)? Again, know it's something obvious, but just now started using interfaces heavily and I can't seem to focus my Google results enough to figure it out :) **Edit:** `GetType()` returns the actual class, but doesn't return the interface, so I can't test on that (e.g., it'd return "`MyNamespace.Button`" instead of "`IMyButtonInterface`"). In trying to use "`as`" or "`is`" in a recursive function, the *`type`* parameter doesn't even get recognized within the function! It's rather bizarre. So ``` if(ctrl.GetType() == typeToFind) //ok if(ctrl is typeToFind) //typeToFind isn't recognized! eh? ``` Definitely scratching my head over this one.
Longhorn213 almost has the right answer, but as Sean Chambers and bdukes say, you should use ``` ctrl is IInterfaceToFind ``` instead of ``` ctrl.GetType() == aTypeVariable ``` The reason why is that if you use `.GetType()` you will get the true type of an object, not necessarily what it can also be cast to in its inheritance/Interface implementation chain. Also, `.GetType()` will never return an abstract type/interface since you can't new up an abstract type or interface. `GetType()` returns concrete types only. The reason this doesn't work ``` if(ctrl is typeToFind) ``` is because the type of the variable `typeToFind` is actually `System.RuntimeType`, not the type you've set its value to. For example, if you set a string's value to "`foo`", its type is still string, not "`foo`". I hope that makes sense. It's very easy to get confused when working with types. I'm chronically confused when working with them. The most important thing to note about longhorn213's answer is that **you have to use recursion** or you may miss some of the controls on the page. Although we have a working solution here, I too would love to see if there is a more succinct way to do this with LINQ.
You can just search on the Interface. This also uses recursion if the control has child controls, i.e. the button is in a panel. ``` private List<Control> FindControlsByType(ControlCollection controls, Type typeToFind) { List<Control> foundList = new List<Control>(); foreach (Control ctrl in controls) { if (ctrl.GetType() == typeToFind) { // Do whatever with interface foundList.Add(ctrl); } // Check if the Control has Child Controls and use Recursion // to keep checking them if (ctrl.HasControls()) { // Recurse into the child controls List<Control> childList = FindControlsByType(ctrl.Controls, typeToFind); foundList.AddRange(childList); } } return foundList; } // Pass it this way FindControlsByType(Page.Controls, typeof(IYourInterface)); ```
Finding controls that use a certain interface in ASP.NET
[ "", "c#", "asp.net", "" ]
``` #if SYMBOL //code #endif ``` what values does C# predefine for use?
To add to what Nick said, the MSDN documentation does not list any pre-defined names. It would seem that all need to come from `#define` and `/define`. [#if on MSDN](http://msdn.microsoft.com/en-us/library/4y6tbswk.aspx)
Depends on what /define compiler options you use. `Visual Studio` puts the `DEBUG` symbol in there for you via the project settings, but you could create any ones that you want.
What predefined #if symbols does C# have?
[ "", "c#", "" ]
SQL Experts, Is there an efficient way to group runs of data together using SQL? Or is it going to be more efficient to process the data in code? For example if I have the following data: ``` ID|Name 01|Harry Johns 02|Adam Taylor 03|John Smith 04|John Smith 05|Bill Manning 06|John Smith ``` I need to display this: ``` Harry Johns Adam Taylor John Smith (2) Bill Manning John Smith ``` @Matt: Sorry, I had trouble formatting the data using an embedded HTML table; it worked in the preview but not in the final display.
Try this: ``` select n.name, (select count(*) from myTable n1 where n1.name = n.name and n1.id >= n.id and (n1.id <= ( select isnull(min(nn.id), (select max(id) + 1 from myTable)) from myTable nn where nn.id > n.id and nn.name <> n.name ) )) from myTable n where not exists ( select 1 from myTable n3 where n3.name = n.name and n3.id < n.id and n3.id > ( select isnull(max(n4.id), (select min(id) - 1 from myTable)) from myTable n4 where n4.id < n.id and n4.name <> n.name ) ) ``` I think that'll do what you want. Bit of a kludge though. Phew! After a few edits I think I have all the edge cases sorted out.
I hate cursors with a passion... but here's a dodgy cursor version... ``` Declare @NewName Varchar(50) Declare @OldName Varchar(50) Declare @CountNum int Set @CountNum = 0 DECLARE nameCursor CURSOR FOR SELECT Name FROM NameTest OPEN nameCursor FETCH NEXT FROM nameCursor INTO @NewName WHILE @@FETCH_STATUS = 0 BEGIN if @OldName <> @NewName BEGIN Print @OldName + ' (' + Cast(@CountNum as Varchar(50)) + ')' Set @CountNum = 0 END SELECT @OldName = @NewName FETCH NEXT FROM nameCursor INTO @NewName Set @CountNum = @CountNum + 1 END Print @OldName + ' (' + Cast(@CountNum as Varchar(50)) + ')' CLOSE nameCursor DEALLOCATE nameCursor ```
Grouping runs of data
[ "", "sql", "" ]
I have a simple webform that will allow unauthenticated users to input their information, including name. I gave the name field a limit of 50 characters to coincide with my database table where the field is `varchar(50)`, but then I started to wonder. Is it more appropriate to use something like the `Text` column type or should I limit the length of the name to something reasonable? I'm using SQL Server 2005, in case that matters in your response. **EDIT:** I did not see [this broader question](https://stackoverflow.com/questions/20958/list-of-standard-lengths-for-database-fields) regarding similar issues.
[UK Government Data Standards Catalogue](http://webarchive.nationalarchives.gov.uk/20100407120701/http://cabinetoffice.gov.uk/govtalk/schemasstandards/e-gif/datastandards.aspx) suggests 35 characters for each of Given Name and Family Name, or 70 characters for a single field to hold the Full Name.
I know I'm late on this one, but I'll add this comment anyway, as others may well come here in the future with similar questions. Beware of tweaking column sizes dependent on locale. For a start, it sets you up for a maintenance nightmare, leaving aside the fact that people migrate, and take their names with them. For example, Spanish people with those extra surnames can move to and live in an English-speaking country, and can reasonably expect their full name to be used. Russians have patronymics in addition to their surnames, some African names can be considerably longer than most European names. Go with making each column as wide as you can reasonably do, taking into account the potential row count. I use 40 characters each for first name, other given names and surname and have never found any problems.
What is a reasonable length limit on person "Name" fields?
[ "", "html", "sql", "sql-server", "textbox", "limit", "" ]
Within C#, I need to be able to * Connect to a remote system, specifying username/password as appropriate * List the members of a localgroup on that system * Fetch the results back to the executing computer So for example I would connect to \SOMESYSTEM with appropriate creds, and fetch back a list of local administrators including SOMESYSTEM\Administrator, SOMESYSTEM\Bob, DOMAIN\AlanH, "DOMAIN\Domain Administrators". I've tried this with System.DirectoryServices.AccountManagement but am running into problems with authentication. Sometimes I get: *Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again. (Exception from HRESULT: 0x800704C3)* The above is a problem because there will be situations where I simply cannot unmap existing drives or UNC connections. Other times my program gets UNKNOWN ERROR and the security log on the remote system reports an error 675, code 0x19, which is KDC\_ERR\_PREAUTH\_REQUIRED. I need a simpler and less error-prone way to do this!
This should be easy to do using WMI. Here you have a pointer to some docs: [WMI Documentation for Win32\_UserAccount](http://msdn.microsoft.com/en-us/library/aa394507.aspx) Even if you have no previous experience with WMI, it should be quite easy to turn that VB Script code at the bottom of the page into some .NET code. Hope this helped!
davidg was on the right track, and I am crediting him with the answer. But the WMI query necessary was a little less than straightforward, since I needed not just a list of users for the whole machine, but the subset of users *and groups*, whether local or domain, that were members of the local Administrators group. For the record, that WMI query was: ``` SELECT PartComponent FROM Win32_GroupUser WHERE GroupComponent = "Win32_Group.Domain='thehostname',Name='thegroupname'" ``` Here's the full code snippet: ``` public string GroupMembers(string targethost, string groupname, string targetusername, string targetpassword) { StringBuilder result = new StringBuilder(); try { ConnectionOptions Conn = new ConnectionOptions(); if (targethost != Environment.MachineName) //WMI errors if creds given for localhost { Conn.Username = targetusername; //can be null Conn.Password = targetpassword; //can be null } Conn.Timeout = TimeSpan.FromSeconds(2); ManagementScope scope = new ManagementScope("\\\\" + targethost + "\\root\\cimv2", Conn); scope.Connect(); StringBuilder qs = new StringBuilder(); qs.Append("SELECT PartComponent FROM Win32_GroupUser WHERE GroupComponent = \"Win32_Group.Domain='"); qs.Append(targethost); qs.Append("',Name='"); qs.Append(groupname); qs.AppendLine("'\""); ObjectQuery query = new ObjectQuery(qs.ToString()); ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query); ManagementObjectCollection queryCollection = searcher.Get(); foreach (ManagementObject m in queryCollection) { ManagementPath path = new ManagementPath(m["PartComponent"].ToString()); { String[] names = path.RelativePath.Split(','); result.Append(names[0].Substring(names[0].IndexOf("=") + 1).Replace("\"", " ").Trim() + "\\"); result.AppendLine(names[1].Substring(names[1].IndexOf("=") + 1).Replace("\"", " ").Trim()); } } return result.ToString(); } catch (Exception e) { Console.WriteLine("Error. Message: " + e.Message); return "fail"; } } ``` So, if I invoke GroupMembers("Server1", "Administrators", "myusername", "mypassword"); I get a single string returned with: SERVER1\Administrator MYDOMAIN\Domain Admins The actual WMI return is more like this: \\SERVER1\root\cimv2:Win32\_UserAccount.Domain="SERVER1",Name="Administrator" ... so as you can see, I had to do a little string manipulation to pretty it up.
Enumerate Windows user group members on remote system using c#
[ "", "c#", "windows", "user-management", "usergroups", "" ]
Here is my code, which takes two version identifiers in the form "1, 5, 0, 4" or "1.5.0.4" and determines which is the newer version. Suggestions or improvements, please! ``` /// <summary> /// Compares two specified version strings and returns an integer that /// indicates their relationship to one another in the sort order. /// </summary> /// <param name="strA">the first version</param> /// <param name="strB">the second version</param> /// <returns>less than zero if strA is less than strB, equal to zero if /// strA equals strB, and greater than zero if strA is greater than strB</returns> public static int CompareVersions(string strA, string strB) { char[] splitTokens = new char[] {'.', ','}; string[] strAsplit = strA.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries); string[] strBsplit = strB.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries); int[] versionA = new int[4]; int[] versionB = new int[4]; for (int i = 0; i < 4; i++) { versionA[i] = Convert.ToInt32(strAsplit[i]); versionB[i] = Convert.ToInt32(strBsplit[i]); } // now that we have parsed the input strings, compare them return RecursiveCompareArrays(versionA, versionB, 0); } /// <summary> /// Recursive function for comparing arrays, 0-index is highest priority /// </summary> private static int RecursiveCompareArrays(int[] versionA, int[] versionB, int idx) { if (versionA[idx] < versionB[idx]) return -1; else if (versionA[idx] > versionB[idx]) return 1; else { Debug.Assert(versionA[idx] == versionB[idx]); if (idx == versionA.Length - 1) return 0; else return RecursiveCompareArrays(versionA, versionB, idx + 1); } } ``` --- @ [Darren Kopp](https://stackoverflow.com/questions/30494/compare-version-identifiers#30510): The version class does not handle versions of the format 1.0.0.5.
The [System.Version](http://msdn.microsoft.com/en-us/library/system.version.aspx) class does not support versions with commas in it, so the solution presented by [Darren Kopp](https://stackoverflow.com/questions/30494#30510) is not sufficient. Here is a version that is as simple as possible (but no simpler). It uses [System.Version](http://msdn.microsoft.com/en-us/library/system.version.aspx) but achieves compatibility with version numbers like "1, 2, 3, 4" by doing a search-replace before comparing. ``` /// <summary> /// Compare versions of form "1,2,3,4" or "1.2.3.4". Throws FormatException /// in case of invalid version. /// </summary> /// <param name="strA">the first version</param> /// <param name="strB">the second version</param> /// <returns>less than zero if strA is less than strB, equal to zero if /// strA equals strB, and greater than zero if strA is greater than strB</returns> public static int CompareVersions(String strA, String strB) { Version vA = new Version(strA.Replace(",", ".")); Version vB = new Version(strB.Replace(",", ".")); return vA.CompareTo(vB); } ``` The code has been tested with: ``` static void Main(string[] args) { Test("1.0.0.0", "1.0.0.1", -1); Test("1.0.0.1", "1.0.0.0", 1); Test("1.0.0.0", "1.0.0.0", 0); Test("1, 0.0.0", "1.0.0.0", 0); Test("9, 5, 1, 44", "3.4.5.6", 1); Test("1, 5, 1, 44", "3.4.5.6", -1); Test("6,5,4,3", "6.5.4.3", 0); try { CompareVersions("2, 3, 4 - 4", "1,2,3,4"); Console.WriteLine("Exception should have been thrown"); } catch (FormatException e) { Console.WriteLine("Got exception as expected."); } Console.ReadLine(); } private static void Test(string lhs, string rhs, int expected) { int result = CompareVersions(lhs, rhs); Console.WriteLine("Test(\"" + lhs + "\", \"" + rhs + "\", " + expected + (result.Equals(expected) ? " succeeded." : " failed.")); } ```
Use the [Version](http://msdn.microsoft.com/en-us/library/system.version.aspx) class. ``` Version a = new Version("1.0.0.0"); Version b = new Version("2.0.0.0"); Console.WriteLine(string.Format("Newer: {0}", (a > b) ? "a" : "b")); // prints b ```
Compare Version Identifiers
[ "", "c#", ".net", "compare", "versions", "" ]
Has anyone got a working solution without a Java/COM bridge? E.g., processing the email as a file (.msg) rather than locating the data that is referenced in the clipboard?
Maybe this is a solution for your problem: <http://sourceforge.net/projects/javaoutlookdd/> It allows you to handle Outlook items as File objects during drag & drop.
I did make some headway on this sort of thing a few years back using [Apache POI](http://poi.apache.org) to extract the contents of an email from .msg files. I'm pretty sure they have a simple Swing explorer/viewer that you can use to examine the structure within the compound document format, but I can't find it right now. I was able to extract most of the information I was interested in, but I ultimately wanted to create a MIME-format version of the message and couldn't extract all the information I needed in a format I could use.
Drag and Drop an Email from MS Outlook to Eclipse RCP
[ "", "java", "eclipse", "ms-office", "integration", "desktop", "" ]
What is the fastest method of parsing an XML file in C#? I'm using .Net 2.0
If you're using .Net 2 then the XmlReader and XmlDocument are about it. If you can use .Net 3.5 then the new Linq to Xml methods are a big improvement.
I haven't benchmarked it myself, but when I've asked about it in the past I've been told that XmlDocument is supposed to be faster. I have my doubts, though, since XmlDocument would need to create a DOM while XmlReader does not.
Parsing an XML file in C#
[ "", "c#", "xml", "" ]
It's about PHP but I've no doubt many of the same comments will apply to other languages. Simply put, what are the differences in the different types of loop for PHP? Is one faster/better than the others or should I simply put in the most readable loop? ``` for ($i = 0; $i < 10; $i++) { # code... } foreach ($array as $index => $value) { # code... } do { # code... } while ($flag == false); ```
`for` and `while` loops are entry-condition loops. They evaluate the condition first, so the statement block associated with the loop won't run even once if the condition is not met. The statements inside this for loop block will run 10 times, with the value of $i going from 0 to 9: ``` for ($i = 0; $i < 10; $i++) { # code... } ``` Same thing done with a while loop: ``` $i = 0; while ($i < 10) { # code... $i++; } ``` A do-while loop is an exit-condition loop. It's guaranteed to execute once, then it will evaluate the condition before repeating the block: ``` do { # code... } while ($flag == false); ``` foreach is used to access array elements from start to end. At the beginning of the foreach loop, the internal pointer of the array is set to the first element of the array; in the next step it is set to the 2nd element of the array, and so on until the array ends. In the loop block, the value of the current array item is available as $value and the key of the current item is available as $index. ``` foreach ($array as $index => $value) { # code... } ``` You could do the same thing with a while loop. Note that testing `current($array)` directly would stop early on falsy elements such as 0 or "", so the key is tested instead: ``` while (key($array) !== null) { $index = key($array); // to get key of the current element $value = $array[$index]; // to get value of current element # code ... next($array); // advance the internal array pointer of $array } ``` And lastly: [The PHP Manual](http://www.php.net/download-docs.php) is your friend :)
@brendan: The article you cited is seriously outdated and the information is just plain wrong. Especially the last point (use `for` instead of `foreach`) is misleading and the justification offered in the article no longer applies to modern versions of .NET. While it's true that the `IEnumerator` uses virtual calls, these *can* actually be inlined by a modern compiler. Furthermore, .NET now knows generics and strongly typed enumerators. There are a lot of performance tests out there that prove conclusively that `for` is generally no faster than `foreach`. [Here's an example](http://diditwith.net/2006/10/05/PerformanceOfForeachVsListForEach.aspx).
The difference between loops
[ "", "php", "loops", "" ]
Using [PyObjC](http://pyobjc.sourceforge.net/), you can use Python to write Cocoa applications for OS X. Can I write native iPhone apps using Python and if so, how?
Not currently. At the moment, the only languages available to access the iPhone SDK are C/C++, Objective-C and Swift. There is no technical reason why this could not change in the future, but I wouldn't hold your breath for it happening in the short term. That said, Objective-C and Swift really are not too scary... > # 2016 edit > > JavaScript with the NativeScript framework is available to use now.
You can use PyObjC on the iPhone as well, due to the excellent work by Jay Freeman (saurik). See [iPhone Applications in Python](http://www.saurik.com/id/5). Note that this requires a jailbroken iPhone at the moment.
Can I write native iPhone apps using Python?
[ "", "iphone", "python", "cocoa-touch", "" ]
I would like to filter an array of items by using the `map()` function. Here is a code snippet: ``` var filteredItems = items.map(function(item) { if( ...some condition... ) { return item; } }); ``` The problem is that the filtered-out items still use space in the array and I would like to completely wipe them out. My specific code was not intended to run in a browser, so I'm not worried about browser-specific support for either function. Any idea?
You should use the `filter` method rather than map unless you want to mutate the items in the array, in addition to filtering. E.g.: ``` var filteredItems = items.filter(function(item) { return ...some condition...; }); ``` [Edit: Of course you could always do `sourceArray.filter(...).map(...)` to both filter and mutate]
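To make the chained `filter(...).map(...)` form concrete, here is a small runnable sketch. The array contents and the "keep even numbers" condition are made up purely for illustration; substitute your own condition and transform.

```javascript
// Illustrative data and condition (not from the original question).
var items = [1, 2, 3, 4, 5, 6];

// filter() drops the unwanted items entirely -- no holes or
// undefined slots are left behind, unlike map() with a conditional.
var filteredItems = items.filter(function (item) {
  return item % 2 === 0;
});

// Chaining map() after filter() transforms only the survivors.
var filteredAndMapped = items
  .filter(function (item) { return item % 2 === 0; })
  .map(function (item) { return item * 2; });

console.log(filteredItems);     // [2, 4, 6]
console.log(filteredAndMapped); // [4, 8, 12]
```

The chained form does make two passes over the data, but for most array sizes that cost is negligible next to the readability gain.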
Inspired by writing this answer, I ended up later expanding and writing a blog post going over this in careful detail. I recommend [checking that out](http://code.kylebaker.io/2018/03/16/stack-overflow/) if you want to develop a deeper understanding of how to think about this problem--I try to explain it piece by piece, and also give a JSperf comparison at the end, going over speed considerations. That said, **the tl;dr is this:** ## To accomplish what you're asking for (filtering and mapping within one function call), you would use `Array.reduce()` ## However, the *more readable* **and** (less importantly) *usually significantly faster*[2](http://code.kylebaker.io/2018/03/16/stack-overflow/#What-about-performance) approach is to just use filter and map chained together: `[1,2,3].filter(num => num > 2).map(num => num * 2)` What follows is a description of how `Array.reduce()` works, and how it can be used to accomplish filter and map in one iteration. Again, if this is too condensed, I highly recommend seeing the blog post linked above, which is a much more friendly intro with clear examples and progression. --- You give reduce an argument that is a (usually anonymous) function. *That anonymous function* takes two parameters--one (like the anonymous functions passed in to map/filter/forEach) is the iteratee to be operated on. There is another argument for the anonymous function passed to reduce, however, that those functions do not accept, and that is **the value that will be passed along between function calls, often referred to as the *memo***. Note that while Array.filter() takes only one argument (a function), Array.reduce() also takes an important (though optional) second argument: an initial value for 'memo' that will be passed into that anonymous function as its first argument, and subsequently can be mutated and passed along between function calls.
(If it is not supplied, then 'memo' in the first anonymous function call will by default be the first iteratee, and the 'iteratee' argument will actually be the second value in the array) In our case, we'll pass in an empty array to start, and then choose whether to inject our iteratee into our array or not based on our function--this is the filtering process. Finally, we'll return our 'array in progress' on each anonymous function call, and reduce will take that return value and pass it as an argument (called memo) to its next function call. This allows filter and map to happen in one iteration, cutting down our number of required iterations in half--just doing twice as much work each iteration, though, so nothing is really saved other than function calls, which are not so expensive in javascript. For a more complete explanation, refer to [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce) docs (or to my post referenced at the beginning of this answer). Basic example of a Reduce call: ``` let array = [1,2,3]; const initialMemo = []; array = array.reduce((memo, iteratee) => { // if condition is our filter if (iteratee > 1) { // what happens inside the filter is the map memo.push(iteratee * 2); } // this return value will be passed in as the 'memo' argument // to the next call of this function, and this function will have // every element passed into it at some point. return memo; }, initialMemo) console.log(array) // [4,6], equivalent to [(2 * 2), (3 * 2)] ``` more succinct version: ``` [1,2,3].reduce((memo, value) => value > 1 ? memo.concat(value * 2) : memo, []) ``` Notice that the first iteratee was not greater than one, and so was filtered. Also note the initialMemo, named just to make its existence clear and draw attention to it. 
Once again, it is passed in as 'memo' to the first anonymous function call, and then the returned value of the anonymous function is passed in as the 'memo' argument to the next function. Another example of the classic use case for memo would be returning the smallest or largest number in an array. Example: ``` [7,4,1,99,57,2,1,100].reduce((memo, val) => memo > val ? memo : val) // ^this would return the largest number in the list. ``` An example of how to write your own reduce function (this often helps understanding functions like these, I find): ``` test_arr = []; // we accept an anonymous function, and an optional 'initial memo' value. test_arr.my_reducer = function(reduceFunc, initialMemo) { // if we did not pass in a second argument, then our first memo value // will be whatever is in index zero. (Otherwise, it will // be that second argument.) const initialMemoIsIndexZero = arguments.length < 2; // here we use that logic to set the memo value accordingly. let memo = initialMemoIsIndexZero ? this[0] : initialMemo; // here we use that same boolean to decide whether the first // value we pass in as iteratee is either the first or second // element const initialIteratee = initialMemoIsIndexZero ? 1 : 0; for (var i = initialIteratee; i < this.length; i++) { // memo is either the argument passed in above, or the // first item in the list. initialIteratee is either the // first item in the list, or the second item in the list. memo = reduceFunc(memo, this[i]); // or, more technically complete, give access to base array // and index to the reducer as well: // memo = reduceFunc(memo, this[i], i, this); } // after we've compressed the array into a single value, // we return it. return memo; } ``` The real implementation allows access to things like the index, for example, but I hope this helps you get an uncomplicated feel for the gist of it.
Removing elements with Array.map in JavaScript
[ "", "javascript", "functional-programming", "data-manipulation", "client-side", "" ]
I want to know what a "*virtual base class*" is and what it means. Let me show an example: ``` class Foo { public: void DoSomething() { /* ... */ } }; class Bar : public virtual Foo { public: void DoSpecific() { /* ... */ } }; ```
Virtual base classes, used in virtual inheritance, are a way of preventing multiple "instances" of a given class appearing in an inheritance hierarchy when using multiple inheritance. Consider the following scenario: ``` class A { public: void Foo() {} }; class B : public A {}; class C : public A {}; class D : public B, public C {}; ``` The above class hierarchy results in the "dreaded diamond" which looks like this: ``` A / \ B C \ / D ``` An instance of D will be made up of B, which includes A, and C which also includes A. So you have two "instances" (for want of a better expression) of A. When you have this scenario, you have the possibility of ambiguity. What happens when you do this: ``` D d; d.Foo(); // is this B's Foo() or C's Foo() ?? ``` Virtual inheritance is there to solve this problem. When you specify virtual when inheriting your classes, you're telling the compiler that you only want a single instance. ``` class A { public: void Foo() {} }; class B : public virtual A {}; class C : public virtual A {}; class D : public B, public C {}; ``` This means that there is only one "instance" of A included in the hierarchy. Hence ``` D d; d.Foo(); // no longer ambiguous ``` This is a mini summary. For more information, have a read of [this](http://en.wikipedia.org/wiki/Virtual_inheritance) and [this](https://isocpp.org/wiki/faq/multiple-inheritance). A good example is also available [here](http://www.learncpp.com/cpp-tutorial/118-virtual-base-classes/).
## About the memory layout As a side note, the problem with the Dreaded Diamond is that the base class is present multiple times. So with regular inheritance, you believe you have: ``` A / \ B C \ / D ``` But in the memory layout, you have: ``` A A | | B C \ / D ``` This explains why, when you call `D::foo()`, you have an ambiguity problem. But the **real** problem comes when you want to use a data member of `A`. For example, let's say we have: ``` class A { public : void foo() ; int m_iValue ; } ; ``` When you try to access `m_iValue` from `D`, the compiler will protest, because in the hierarchy it'll see two `m_iValue`, not one. And if you modify one, say, `B::m_iValue` (that is the `A::m_iValue` parent of `B`), `C::m_iValue` won't be modified (that is the `A::m_iValue` parent of `C`). This is where virtual inheritance comes in handy, as with it you'll get back to a true diamond layout, with not only one `foo()` method, but also one and only one `m_iValue`. ## What could go wrong? Imagine: * `A` has some basic feature. * `B` adds to it some kind of cool array of data (for example) * `C` adds to it some cool feature like an observer pattern (for example, on `m_iValue`). * `D` inherits from `B` and `C`, and thus from `A`. With normal inheritance, modifying `m_iValue` from `D` is ambiguous and this must be resolved. Even if it is, there are two `m_iValue`s inside `D`, so you'd better remember that and update the two at the same time. With virtual inheritance, modifying `m_iValue` from `D` is ok... But... Let's say that you have `D`. Through its `C` interface, you attached an observer. And through its `B` interface, you update the cool array, which has the side effect of directly changing `m_iValue`... As the change of `m_iValue` is done directly (without using a virtual accessor method), the observer "listening" through `C` won't be called, because the code implementing the listening is in `C`, and `B` doesn't know about it...
## Conclusion If you're having a diamond in your hierarchy, it means that you have a 95% probability of having done something wrong with said hierarchy.
In C++, what is a virtual base class?
[ "", "c++", "virtual-inheritance", "" ]
I'm wondering how to make a release build that includes all necessary DLL files into the .exe so the program can be run on a non-development machine without having to install the Microsoft redistributable on the target machine. Without doing this you get the error message that the application configuration is not correct and to reinstall.
1. Choose Project -> Properties 2. Select Configuration -> General 3. In the box for how you should link MFC, choose to statically link it. 4. Choose Linker -> Input. Under **Additional Dependencies**, add any libraries you need your app to statically link in.
You need to set the run-time library (under C/C++ -> Code Generation) for ALL projects to static linkage, which corresponds to the following default build configurations: * Multithreaded Debug/Release * Singlethreaded Debug/Release as opposed to the "DLL" versions of those libraries. Even if you do that, depending on the libraries you're using, you might have to install a Merge Module/framework/etc. It depends on whether static LIB versions of your dependencies are available.
How do you pack a visual studio c++ project for release?
[ "", "c++", "visual-studio", "build", "" ]
I want to write C/C++ programs that take input from a MIDI device. The MIDI device connects to my PC using a USB connector. I'm looking for a (C/C++ implemented) library that I can use to read the MIDI signals from the MIDI device through the USB port. I'm happy manipulating the MIDI data once I get it, I just don't want to have to implement the code for its capture. I'm planning on writing my code using the Bloodshed Dev-C++ IDE on Windows XP.
[PortMidi](http://portmedia.sourceforge.net/portmidi/) is another open source cross-platform MIDI I/O library worth checking out. On the other hand, if you are working on a sysex type of app, then direct Win32 works easily enough. Just came across another open source cross-platform framework that includes MIDI support: [Juce](http://www.rawmaterialsoftware.com/juce/). Also, I should note that there isn't anything special about a USB connected MIDI device. It will still be presented as a MIDI device in Windows and you will use standard MIDI APIs (mmsystem) to communicate with it. [July 2014] I just came across [RtMidi](http://www.music.mcgill.ca/~gary/rtmidi/) that looks to be a nice, compact, open source cross-platform C++ library.
Check out the open source project [LMMS](http://lmms.sourceforge.net/). It's a music studio for Linux that includes the ability to use MIDI keyboards with software instruments. If you dig around in [source files](http://lmms.sourceforge.net/wiki/index.php?title=Accessing_SVN) with 'midi' in the name, you'll probably find what you're looking for.
C/C++ library for reading MIDI signals from a USB MIDI device
[ "", "c++", "c", "signal-processing", "midi", "dev-c++", "" ]
How would one display any ad content from a "dynamic" aspx page? Currently I am working on using the System.Web.HttpResponse "Page.Response" to write a file that is stored on a web server to a web request. This would allow people to hit a URL of the type <http://www.foo.com?Image=test.jpg> and have the image display in their browser. So as you may know this revolves around the use of Response.ContentType. By using ``` Response.ContentType = "application/octet-stream"; ``` I am able to display images of type gif/jpeg/png (all I have tested so far), but trying to display .swf or .ico files gives me a nice little error. Using ``` Response.ContentType = "application/x-shockwave-flash"; ``` I can get flash files to play, but then the images are messed up. So how do I **easily** choose the correct content type?
This is ugly, but the best way is to look at the file extension and set the content type as appropriate: ``` switch ( fileExtension ) { case "pdf": Response.ContentType = "application/pdf"; break; case "swf": Response.ContentType = "application/x-shockwave-flash"; break; case "gif": Response.ContentType = "image/gif"; break; case "jpeg": Response.ContentType = "image/jpeg"; break; case "jpg": Response.ContentType = "image/jpeg"; break; case "png": Response.ContentType = "image/png"; break; case "mp4": Response.ContentType = "video/mp4"; break; case "mpeg": Response.ContentType = "video/mpeg"; break; case "mov": Response.ContentType = "video/quicktime"; break; case "wmv": Response.ContentType = "video/x-ms-wmv"; break; case "avi": Response.ContentType = "video/x-msvideo"; break; //and so on default: Response.ContentType = "application/octet-stream"; break; } ```
This is part of a solution I use on a local intranet. Some of the variables you will have to collect yourself, as I pull them from a database, but you may pull them from somewhere else. The only extra bit I've got in there is a function called **getMimeType**, which connects to the database and pulls back the correct MIME type based on the file extension. This defaults to application/octet-stream if none is found. ``` // Clear the response buffer in case there is anything already in it. Response.Clear(); Response.Buffer = true; // Read the original file from disk FileStream myFileStream = new FileStream(sPath, FileMode.Open); long FileSize = myFileStream.Length; byte[] Buffer = new byte[(int)FileSize]; myFileStream.Read(Buffer, 0, (int)FileSize); myFileStream.Close(); // Tell the browser about the file Response.AddHeader("Content-Length", FileSize.ToString()); Response.AddHeader("Content-Disposition", "inline; filename=" + sFilename.Replace(" ","_")); Response.ContentType = getMimeType(sExtention, oConnection); // Send the data to the browser Response.BinaryWrite(Buffer); Response.End(); ```
Displaying ad content from Response.WriteFile() / Response.ContentType
[ "", "c#", "" ]
Is there any reason not to use the bitwise operators &, |, and ^ for "bool" values in C++? I sometimes run into situations where I want exactly one of two conditions to be true (XOR), so I just throw the ^ operator into a conditional expression. I also sometimes want all parts of a condition to be evaluated whether the result is true or not (rather than short-circuiting), so I use & and |. I also need to accumulate Boolean values sometimes, and &= and |= can be quite useful. I've gotten a few raised eyebrows when doing this, but the code is still meaningful and cleaner than it would be otherwise. Is there any reason NOT to use these for bools? Are there any modern compilers that give bad results for this?
`||` and `&&` are boolean operators and the built-in ones are guaranteed to return either `true` or `false`. Nothing else. `|`, `&` and `^` are bitwise operators. When the domain of numbers you operate on is just 1 and 0, then they are exactly the same, but in cases where your booleans are not strictly 1 and 0 – as is the case with the C language – you may end up with some behavior you didn't want. For instance: ``` BOOL two = 2; BOOL one = 1; BOOL and = two & one; //and = 0 BOOL cand = two && one; //cand = 1 ``` In C++, however, the `bool` type is guaranteed to be only either a `true` or a `false` (which convert implicitly to respectively `1` and `0`), so it's less of a worry from this stance, but the fact that people aren't used to seeing such things in code makes a good argument for not doing it. Just say `b = b && x` and be done with it.
Two main reasons. In short, consider carefully; there could be a good reason for it, but if there is be VERY explicit in your comments because it can be brittle and, as you say yourself, people aren't generally used to seeing code like this. ## Bitwise xor != Logical xor (except for 0 and 1) Firstly, if you are operating on values other than `false` and `true` (or `0` and `1`, as integers), the `^` operator can introduce behavior not equivalent to a logical xor. For example: ``` int one = 1; int two = 2; // bitwise xor if (one ^ two) { // executes because expression = 3 and any non-zero integer evaluates to true } // logical xor; more correctly would be coded as // if (bool(one) != bool(two)) // but spelled out to be explicit in the context of the problem if ((one && !two) || (!one && two)) { // does not execute b/c expression = ((true && false) || (false && true)) // which evaluates to false } ``` Credit to user @Patrick for expressing this first. ## Order of operations Second, `|`, `&`, and `^`, as bitwise operators, do not short-circuit. In addition, multiple bitwise operators chained together in a single statement -- even with explicit parentheses -- can be reordered by optimizing compilers, because all 3 operations are normally commutative. This is important if the order of the operations matters. In other words ``` bool result = true; result = result && a() && b(); // will not call a() if result false, will not call b() if result or a() false ``` will not always give the same result (or end state) as ``` bool result = true; result &= (a() & b()); // a() and b() both will be called, but not necessarily in that order in an // optimizing compiler ``` This is especially important because you may not control methods `a()` and `b()`, or somebody else may come along and change them later not understanding the dependency, and cause a nasty (and often release-build only) bug.
Using bitwise operators for Booleans in C++
[ "", "c++", "boolean", "bitwise-operators", "" ]
I've been programming in C# and Java recently and I am curious where the best place is to initialize my class fields. Should I do it at declaration?: ``` public class Dice { private int topFace = 1; private Random myRand = new Random(); public void Roll() { // ...... } } ``` or in a constructor?: ``` public class Dice { private int topFace; private Random myRand; public Dice() { topFace = 1; myRand = new Random(); } public void Roll() { // ..... } } ``` I'm really curious what some of you veterans think is the best practice. I want to be consistent and stick to one approach.
My rules: 1. Don't initialize with the default values in declaration (`null`, `false`, `0`, `0.0`…). 2. Prefer initialization in declaration if you don't have a constructor parameter that changes the value of the field. 3. If the value of the field changes because of a constructor parameter put the initialization in the constructors. 4. Be consistent in your practice (the most important rule).
In C# it doesn't matter. The two code samples you give are utterly equivalent. In the first example the C# compiler (or is it the CLR?) will construct an empty constructor and initialise the variables as if they were in the constructor (there's a slight nuance to this that Jon Skeet explains in the comments below). If there is already a constructor then any initialisation "above" will be moved into the top of it. In terms of best practice the former is less error prone than the latter as someone could easily add another constructor and forget to chain it.
Initialize class fields in constructor or at declaration?
[ "", "java", "" ]
Is it a deliberate design decision or a problem with our current-day browsers which will be rectified in the coming versions?
JavaScript does not support multi-threading because the JavaScript interpreter in the browser is a single thread (AFAIK). Even Google Chrome will not let a single web page’s JavaScript run concurrently because this would cause massive concurrency issues in existing web pages. All Chrome does is separate multiple components (different tabs, plug-ins, etcetera) into separate processes, but I can’t imagine a single page having more than one JavaScript thread. You can however use, as was suggested, `setTimeout` to allow some sort of scheduling and “fake” concurrency. This causes the browser to regain control of the rendering thread, and start the JavaScript code supplied to `setTimeout` after the given number of milliseconds. This is very useful if you want to allow the viewport (what you see) to refresh while performing operations on it. Just looping through e.g. coordinates and updating an element accordingly will just let you see the start and end positions, and nothing in between. We use an abstraction library in JavaScript that allows us to create processes and threads which are all managed by the same JavaScript interpreter. This allows us to run actions in the following manner: * Process A, Thread 1 * Process A, Thread 2 * Process B, Thread 1 * Process A, Thread 3 * Process A, Thread 4 * Process B, Thread 2 * Pause Process A * Process B, Thread 3 * Process B, Thread 4 * Process B, Thread 5 * Start Process A * Process A, Thread 5 This allows some form of scheduling and fakes parallelism, starting and stopping of threads, etcetera, but it will not be true multi-threading. I don’t think it will ever be implemented in the language itself, since true multi-threading is only useful if the browser can run a single page multi-threaded (or even more than one core), and the difficulties there are way larger than the extra possibilities. 
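The `setTimeout`-based "fake concurrency" described above can be sketched roughly like this. Note that `processInChunks` and `CHUNK_SIZE` are made-up names for illustration, not part of any standard API:

```javascript
// A sketch of cooperative scheduling: do a long computation in small
// chunks, handing control back to the event loop between chunks via
// setTimeout so the page can repaint and handle user events.
var CHUNK_SIZE = 3; // how many items to process per time slice

function processInChunks(items, workFn, onDone) {
  var index = 0;
  function nextChunk() {
    var end = Math.min(index + CHUNK_SIZE, items.length);
    // Do a small, bounded slice of the work synchronously...
    for (; index < end; index++) {
      workFn(items[index]);
    }
    if (index < items.length) {
      // ...then yield; the next slice runs after the browser has had
      // a chance to regain control of the rendering thread.
      setTimeout(nextChunk, 0);
    } else if (onDone) {
      onDone();
    }
  }
  nextChunk();
}

// Example: double every number without blocking for the whole run.
var results = [];
processInChunks([1, 2, 3, 4, 5, 6, 7], function (n) {
  results.push(n * 2);
}, function () {
  console.log(results); // [2, 4, 6, 8, 10, 12, 14]
});
```

This is still single-threaded -- each chunk blocks while it runs -- so the chunks must be kept small enough that the page stays responsive between them.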
For the future of JavaScript, check this out: [https://developer.mozilla.org/presentations/xtech2006/javascript/](https://web.archive.org/web/20170920070053/https://developer.mozilla.org/presentations/xtech2006/javascript/)
JavaScript multi-threading (with some limitations) is here. Google implemented workers for Gears, and workers are being included with HTML5. Most browsers have already added support for this feature. Thread-safety of data is guaranteed because all data communicated to/from the worker is serialized/copied. For more info, read: <http://www.whatwg.org/specs/web-workers/current-work/> <http://ejohn.org/blog/web-workers/>
Why doesn't JavaScript support multithreading?
[ "", "javascript", "multithreading", "browser", "" ]
How do you retrieve selected text using Regex in C#? I am looking for C# code that is equivalent to this Perl code: ``` $indexVal = 0; if($string =~ /Index: (\d*)/){$indexVal = $1;} ```
``` int indexVal = 0; Regex re = new Regex(@"Index: (\d*)"); Match m = re.Match(s); if (m.Success) int.TryParse(m.Groups[1].Value, out indexVal); ``` I might have the group number wrong, but you should be able to figure it out from here.
I think Patrick nailed this one -- my only suggestion is to remember that named regex groups exist, too, so you don't *have* to use array index numbers. ``` Regex.Match(s, @"Index: (?<num>\d*)").Groups["num"].Value ``` I find the regex is a bit more readable this way as well, though [opinions vary](http://www.codinghorror.com/blog/archives/001016.html)...
How do you retrieve selected text using Regex in C#?
[ "", "c#", "regex", "perl", "" ]
*Note: This was posted when I was starting out in C#. With 2014 knowledge, I can truly say that auto-properties are among the best things that ever happened to the C# language.* I am used to creating my properties in C# using a private and a public field: ``` private string title; public string Title { get { return title; } set { title = value; } } ``` Now, with [.NET](http://en.wikipedia.org/wiki/.NET_Framework) 3.0, we got auto-properties: ``` public string Title { get; set; } ``` I know this is more a philosophical/subjective question, but is there any reason to use these auto-properties except for saving five lines of code for each field? My personal gripe is that those properties are hiding stuff from me, and I am not a big fan of black magic. In fact, the hidden private field does not even show up in the debugger, which is OK given the fact that the get/set functions do nothing. But when I want to actually implement some getter/setter logic, I have to use the private/public pair anyway. I see the benefit that I save a lot of code (one vs six lines) without losing the ability to change the getter/setter logic later, but then again I can already do that by simply declaring a public field `public string Title` without the need of the { get; set; } block, thus even saving more code. So, what am I missing here? Why would anyone actually want to use auto-properties?
We use them all the time in Stack Overflow. You may also be interested in a discussion of [Properties vs. Public Variables](http://blog.codinghorror.com/properties-vs-public-variables/). IMHO that's really what this is a reaction to, and for that purpose, it's great.
Yes, it does *just* save code. It's miles easier to read when you have loads of them. They're quicker to write and easier to maintain. Saving code is always a good goal. You can set different scopes: ``` public string PropertyName { get; private set; } ``` So that the property can only be changed inside the class. This isn't really immutable as you can still access the private setter through reflection. As of C#6 you can also create true `readonly` properties - i.e. immutable properties that cannot be changed outside of the constructor: ``` public string PropertyName { get; } public MyClass() { this.PropertyName = "whatever"; } ``` At compile time that will become: ``` readonly string pName; public string PropertyName { get { return this.pName; } } public MyClass() { this.pName = "whatever"; } ``` In immutable classes with a lot of members this saves a lot of excess code.
C# 3.0 auto-properties — useful or not?
[ "", "c#", ".net", "automatic-properties", "" ]
Plug-in systems in C++ are hard because the ABI is not properly defined, and each compiler (or version thereof) follows its own rules. However, COM on Windows shows that it's possible to create a minimal plug-in system that allows programmers with different compilers to create plug-ins for a host application using a simple interface. Let's be practical, and leave the C++ standard, which is not very helpful in this respect, aside for a minute. If I want to write an app for Windows and Mac (and optionally Linux) that supports C++ plug-ins, and if I want to give plug-in authors a reasonably large choice of compilers (say, versions of Visual C++, GCC or Intel's C++ compiler less than two years old), what features of C++ could I count on? Of course, I assume that plug-ins would be written for a specific platform. Off the top of my head, here are some C++ features I can think of, with what I think is the answer: * vtable layout, to use objects through abstract classes? (yes) * built-in types, pointers? (yes) * structs, unions? (yes) * exceptions? (no) * extern "C" functions? (yes) * stdcall non-extern "C" functions with built-in parameter types? (yes) * non-stdcall non-extern "C" functions with user-defined parameter types? (no) I would appreciate any experience you have in that area that you could share. If you know of any moderately successful app that has a C++ plug-in system, that's cool too. Carl
Dr Dobb's Journal has an article [Building Your Own Plugin Framework: Part 1](http://www.ddj.com/cpp/204202899?cid=RSSfeed_DDJ_Cpp) which is pretty good reading on the subject. It is the start of a series of articles which covers the architecture, development, and deployment of a C/C++ cross-platform plugin framework.
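As a rough illustration of the kind of boundary such a cross-platform plugin framework standardizes on, here is a minimal sketch. All names are hypothetical, and a real host would load the factory with `LoadLibrary`/`GetProcAddress` or `dlopen`/`dlsym` instead of linking it directly; the "plug-in" half lives in the same file only so the sketch is self-contained:

```cpp
#include <cstdint>

// Hypothetical plug-in boundary: an abstract class with only pure
// virtual functions, handed across the module boundary by an
// extern "C" factory so no mangled names need to be exported.
class IPlugin {
public:
    virtual const char* name() = 0;
    virtual std::int32_t run(std::int32_t input) = 0;
    // The plug-in destroys its own objects so host and plug-in
    // never mix heap allocators.
    virtual void destroy() = 0;  // call this instead of delete
protected:
    ~IPlugin() {}                // non-public: host cannot delete directly
};

// Everything below would normally live inside the plug-in module.
class EchoPlugin : public IPlugin {
public:
    const char* name() override { return "echo"; }
    std::int32_t run(std::int32_t input) override { return input * 2; }
    void destroy() override { delete this; }
};

// The single exported symbol the host looks up by name.
extern "C" IPlugin* create_plugin() { return new EchoPlugin(); }
```

The host then programs purely against `IPlugin`; note the sketch leans only on vtable dispatch through an abstract class, built-in parameter types, and an `extern "C"` entry point — and deliberately avoids exceptions across the boundary.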
You might also want to consider replacing the conventional plugin interface by a scripting interface. There are some very good bindings for several scripting languages in C/C++ that have already solved your problem. It might not be a bad idea to build on top of them. For example, have a look at [Boost.Python](http://www.boost.org/doc/libs/1_36_0/libs/python/doc/index.html).
What's safe for a C++ plug-in system?
[ "", "c++", "api", "plugins", "compiler-construction", "" ]
I'm using the Yahoo Uploader, part of the Yahoo UI Library, on my ASP.Net website to allow users to upload files. For those unfamiliar, the uploader works by using a Flash applet to give me more control over the FileOpen dialog. I can specify a filter for file types, allow multiple files to be selected, etc. It's great, but it has the following documented limitation: > Because of a known Flash bug, the Uploader running in Firefox in Windows does not send the correct cookies with the upload; instead of sending Firefox cookies, it sends Internet Explorer’s cookies for the respective domain. As a workaround, we suggest either using a cookieless upload method or appending document.cookie to the upload request. So, if a user is using Firefox, I can't rely on cookies to persist their session when they upload a file. I need their session because I need to know who they are! As a workaround, I'm using the Application object thusly: ``` Guid UploadID = Guid.NewGuid(); Application.Add(UploadID.ToString(), User); ``` So, I'm creating a unique ID and using it as a key to store the `Page.User` object in the Application scope. I include that ID as a variable in the POST when the file is uploaded. Then, in the handler that accepts the file upload, I grab the User object thusly: ``` IPrincipal User = (IPrincipal)Application[Request.Form["uploadid"]]; ``` This actually works, but it has two glaring drawbacks: * If IIS, the app pool, or even just the application is restarted between the time the user visits the upload page, and actually uploads a file, their "uploadid" is deleted from application scope and the upload fails because I can't authenticate them. * If I ever scale to a web farm (possibly even a web garden) scenario, this will completely break. I might not be worried, except I do plan on scaling this app in the future. Does anyone have a better way?
Is there a way for me to pass the actual ASP.Net session ID in a POST variable, then use that ID at the other end to retrieve the session? I know I can get the session ID through `Session.SessionID`, and I know how to use YUI to post it to the next page. What I don't know is how to use that `SessionID` to grab the session from the state server. Yes, I'm using a state server to store the sessions, so they persist application/IIS restarts, and will work in a web farm scenario.
[Here](http://swfupload.org/forum/generaldiscussion/98) is a post from the maintainer of [SWFUpload](http://swfupload.org) which explains how to load the session from an ID stored in Request.Form. I imagine the same thing would work for the Yahoo component. Note the security disclaimers at the bottom of the post. --- > By including a Global.asax file and the following code you can override the missing Session ID cookie: ``` using System; using System.Web; public class Global_asax : System.Web.HttpApplication { private void Application_BeginRequest(object sender, EventArgs e) { /* Fix for the Flash Player Cookie bug in Non-IE browsers. Since Flash Player always sends the IE cookies even in FireFox we have to bypass the cookies by sending the values as part of the POST or GET and overwrite the cookies with the passed in values. The theory is that at this point (BeginRequest) the cookies have not been ready by the Session and Authentication logic and if we update the cookies here we'll get our Session and Authentication restored correctly */ HttpRequest request = HttpContext.Current.Request; try { string sessionParamName = "ASPSESSID"; string sessionCookieName = "ASP.NET_SESSIONID"; string sessionValue = request.Form[sessionParamName] ?? request.QueryString[sessionParamName]; if (sessionValue != null) { UpdateCookie(sessionCookieName, sessionValue); } } catch (Exception ex) { // TODO: Add logging here. } try { string authParamName = "AUTHID"; string authCookieName = FormsAuthentication.FormsCookieName; string authValue = request.Form[authParamName] ?? request.QueryString[authParamName]; if (authValue != null) { UpdateCookie(authCookieName, authValue); } } catch (Exception ex) { // TODO: Add logging here. 
} } private void UpdateCookie(string cookieName, string cookieValue) { HttpCookie cookie = HttpContext.Current.Request.Cookies.Get(cookieName); if (cookie == null) { HttpCookie newCookie = new HttpCookie(cookieName, cookieValue); Response.Cookies.Add(newCookie); } else { cookie.Value = cookieValue; HttpContext.Current.Request.Cookies.Set(cookie); } } } ``` > **Security Warning:** Don't just copy and paste this code in to your ASP.Net application without knowing what you are doing. It introduces security issues and possibilities of Cross-site Scripting.
Relying on [this blog post](http://konste.wordpress.com/2013/01/22/asp-net-accessing-any-session-data-by-session-id/), here's a function that should get you the session for any user based on the session ID, though it's not pretty: ``` public SessionStateStoreData GetSessionById(string sessionId) { HttpApplication httpApplication = HttpContext.Current.ApplicationInstance; // Black magic #1: getting to SessionStateModule HttpModuleCollection httpModuleCollection = httpApplication.Modules; SessionStateModule sessionHttpModule = httpModuleCollection["Session"] as SessionStateModule; if (sessionHttpModule == null) { // Couldn't find Session module return null; } // Black magic #2: getting to SessionStateStoreProviderBase through reflection FieldInfo fieldInfo = typeof(SessionStateModule).GetField("_store", BindingFlags.NonPublic | BindingFlags.Instance); SessionStateStoreProviderBase sessionStateStoreProviderBase = fieldInfo.GetValue(sessionHttpModule) as SessionStateStoreProviderBase; if (sessionStateStoreProviderBase == null) { // Couldn't find sessionStateStoreProviderBase return null; } // Black magic #3: generating a dummy HttpContext out of thin air. sessionStateStoreProviderBase.GetItem in #4 needs it. SimpleWorkerRequest request = new SimpleWorkerRequest("dummy.html", null, new StringWriter()); HttpContext context = new HttpContext(request); // Black magic #4: using sessionStateStoreProviderBase.GetItem to fetch the data from the session with the given ID. bool locked; TimeSpan lockAge; object lockId; SessionStateActions actions; SessionStateStoreData sessionStateStoreData = sessionStateStoreProviderBase.GetItem( context, sessionId, out locked, out lockAge, out lockId, out actions); return sessionStateStoreData; } ```
Can I put an ASP.Net session ID in a hidden form field?
[ "", "c#", "asp.net", "session", "yui", "" ]
Is there a [Box Plot](http://en.wikipedia.org/wiki/Box_plot) graph, or box and whisker graph, available for Reporting Services 2005? From the looks of the documentation there doesn't seem to be one out of the box, so I am wondering if there is a third party that has the graph, or a way to build my own?
There definitely isn't a Box Plot built into SSRS 2005, though it's possible that 2008 has one. SSRS 2005 does have a robust extension model. If you can implement a chart in System.Drawing/GDI+, you can make it into a [custom report item](http://msdn.microsoft.com/en-us/magazine/cc188686.aspx) for SSRS. There are a few third-party vendors with fairly feature-rich products, but the only one I've ever evaluated was [Dundas Chart](http://www.dundas.com/Products/Chart/RS/index.aspx), which isn't cheap, but gives you about 100x more charting capability than SSRS 2005 built in (for SSRS 2008, Microsoft incorporated a great deal of Dundas's charting technology). I can't say from experience that I know Dundas Chart supports the Box Plot, but this [support forum post](http://support.dundas.com/forum/printable.aspx?m=3579) says so.
[ZedGraph](http://sourceforge.net/project/showfiles.php?group_id=114675) is a good open source alternative.
Is there a Box Plot graph available for Reporting Services 2005?
[ "", "sql", "reporting-services", "graph", "" ]
I know the following libraries for drawing charts in an SWT/Eclipse RCP application: * [Eclipse BIRT Chart Engine](http://www.eclipse.org/articles/article.php?file=Article-BIRTChartEngine/index.html) (Links to an article on how to use it) * [JFreeChart](http://www.jfree.org/jfreechart/) Which other libraries are there for drawing pretty charts with SWT? Or charts in Java generally? After all, you can always display an image...
I have not used BIRT or JGraph; however, I use JFreeChart in my SWT application. I have found the best way to use JFreeChart in SWT is to embed an AWT frame in a composite and use JFreeChart's AWT functionality. The way to do this is to create a composite: ``` Composite comp = new Composite(parent, SWT.NONE | SWT.EMBEDDED); Frame frame = SWT_AWT.new_Frame(comp); JFreeChart chart = createChart(); ChartPanel chartPanel = new ChartPanel(chart); frame.add(chartPanel); ``` There are several problems regarding implementations across different platforms, and the SWT code in JFreeChart is very poor (in its defense, Mr. Gilbert does not know SWT well, and it is made for AWT). My two biggest problems: as AWT events bubble up through SWT, some erroneous events are fired, and wrapping the AWT frame makes JFreeChart substantially slower. @zvikico The idea of putting the chart into a web page is probably not a great way to go. There are a few problems, the first being that how Eclipse integrates the web browser is inconsistent across platforms. Also, from my understanding, a few of the graphing packages for the web are server-side and require that setup; many companies, including mine, also use proxy servers, and sometimes this creates issues with the Eclipse web browser.
SWTChart gives good results for line, scatter, bar, and area charts. The API is straight forward and there are numerous examples on the website. I went from finding it on google to viewing my data in less than an hour. [SWTChart](http://www.swtchart.org/index.html)
Libraries for pretty charts in SWT?
[ "", "java", "eclipse", "charts", "swt", "" ]
I have a .NET 2.0 Windows Forms app, which makes heavy use of the `ListView` control. I've subclassed the `ListView` class into a templated `SortableListView<T>` class, so it can be a bit smarter about how it displays things, and sort itself. Unfortunately this seems to break the Visual Studio Forms Designer, in both VS2005 and 2008. The program compiles and runs fine, but when I try to view the owning form in the designer, I get these errors: * Could not find type 'MyApp.Controls.SortableListView'. Please make sure that the assembly that contains this type is referenced. If this type is a part of your development project, make sure that the project has been successfully built. There is no stack trace or error line information available for this error * The variable 'listViewImages' is either undeclared or was never assigned. At MyApp.Main.Designer.cs Line:XYZ Column:1 ``` Call stack: at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.Error(IDesignerSerializationManager manager, String exceptionText, String helpLink) at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeExpression(IDesignerSerializationManager manager, String name, CodeExpression expression) at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeExpression(IDesignerSerializationManager manager, String name, CodeExpression expression) at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeStatement(IDesignerSerializationManager manager, CodeStatement statement) ``` The line of code in question is where it is actually added to the form, and is ``` this.imagesTab.Controls.Add( this.listViewImages ); ``` listViewImages is declared as ``` private MyApp.Controls.SortableListView<Image> listViewImages; ``` and is instantiated in the `InitializeComponent` method as follows: ``` this.listViewImages = new MyApp.Controls.SortableListView<Image>(); ``` As mentioned earlier, the program compiles and runs perfectly, and I've
tried shifting the `SortableListView` class out to a seperate assembly so it can be compiled seperately, but this makes no difference. I have no idea where to go from here. Any help would be appreciated!
> when you added the listview, did you add it to the toolbox and then add it to the form? No, I just edited `Main.Designer.cs` and changed it from `System.Windows.Forms.ListView` to `MyApp.Controls.SortableListView<Image>` Suspecting it might have been due to the generics led me to actually finding a solution. For each class that I need to make a SortableListView for, I defined a 'stub class' like this ``` class ImagesListView : SortableListView<Image> { } ``` Then made the `Main.Designer.cs` file refer to these stub classes instead of the `SortableListView`. It now works, hooray! Thankfully I am able to do this because all my types are known up front, and I'm only using the `SortableListView` as a method of reducing duplicate code.
It happened to me because of the x86/x64 architecture. Since Visual Studio (the development tool itself) has no x64 version, it's not possible to load an x64 control into the GUI designer. The best approach might be to design the GUI under x86, and compile it for x64 when necessary.
"Could not find type" error loading a form in the Windows Forms Designer
[ "", "c#", ".net", "winforms", "visual-studio-2008", "visual-studio-2005", "" ]
How do you create a static class in C++? I should be able to do something like: ``` cout << "bit 5 is " << BitParser::getBitAt(buffer, 5) << endl; ``` Assuming I created the `BitParser` class. What would the `BitParser` class definition look like?
If you're looking for a way of applying the `static` keyword to a class, like you can in C# for example, then you won't be able to without using Managed C++. But by the looks of your sample, you just need to create a public static method on your `BitParser` object. Like so: **BitParser.h** ``` class BitParser { public: static bool getBitAt(int buffer, int bitIndex); // ... // Disallow creating an instance of this object // (Making all constructors private also works but is not ideal and does not // convey your intent as well) BitParser() = delete; }; ``` **BitParser.cpp** ``` bool BitParser::getBitAt(int buffer, int bitIndex) { bool isBitSet = false; // .. determine if bit is set return isBitSet; } ``` You can use this code to call the method in the same way as your example code.
Consider [Matt Price's solution](https://stackoverflow.com/questions/9321/how-do-you-create-a-static-class-in-c/9348#9348). 1. In C++, a "static class" has no meaning. The nearest thing is a class with only static methods and members. 2. Using static methods will only limit you. What you want is, expressed in C++ semantics, to put your function (for it **is** a function) in a namespace. ## Edit 2011-11-11 There is no "static class" in C++. The nearest concept would be a class with only static methods. For example: ``` // header class MyClass { public : static void myMethod() ; } ; // source void MyClass::myMethod() { // etc. } ``` But you must remember that "static classes" are hacks in the Java-like kind of languages (e.g. C#) that are unable to have non-member functions, so they instead have to move them inside classes as static methods. In C++, what you really want is a non-member function that you'll declare in a namespace: ``` // header namespace MyNamespace { void myMethod() ; } // source namespace MyNamespace { void myMethod() { // etc. } } ``` ### Why is that? In C++, the namespace is more powerful than classes for the "Java static method" pattern, because: * static methods have access to the class's private symbols * private static methods are still visible (if inaccessible) to everyone, which somewhat breaches encapsulation * static methods cannot be forward-declared * static methods cannot be overloaded by the class user without modifying the library header * there is nothing that can be done by a static method that can't be done better by a (possibly friend) non-member function in the same namespace * namespaces have their own semantics (they can be combined, they can be anonymous, etc.) * etc. Conclusion: Do not copy/paste that Java/C# pattern into C++. In Java/C#, the pattern is mandatory. But in C++, it is bad style. 
## Edit 2010-06-10 There was an argument in favor of the static method because sometimes, one needs to use a static private data member. I disagree somewhat, as shown below: ### The "Static private member" solution ``` // HPP class Foo { public : void barA() ; private : void barB() ; static std::string myGlobal ; } ; ``` First, myGlobal is called myGlobal because it is still a global private variable. A look at the CPP source will clarify that: ``` // CPP std::string Foo::myGlobal ; // You MUST declare it in a CPP void Foo::barA() { // I can access Foo::myGlobal } void Foo::barB() { // I can access Foo::myGlobal, too } void barC() { // I CAN'T access Foo::myGlobal !!! } ``` At first sight, the fact that the free function barC can't access Foo::myGlobal seems a good thing from an encapsulation viewpoint... It's cool because someone looking at the HPP won't be able (unless resorting to sabotage) to access Foo::myGlobal. But if you look at it closely, you'll find that it is a colossal mistake: Not only must your private variable still be declared in the HPP (and so, visible to all the world, despite being private), but you must declare in the same HPP all (as in ALL) functions that will be authorized to access it!!! So **using a private static member is like walking outside in the nude with the list of your lovers tattooed on your skin: No one is authorized to touch, but everyone is able to peek at. And the bonus: Everyone can have the names of those authorized to play with your privies.** `private` indeed... :-D ### The "Anonymous namespaces" solution Anonymous namespaces will have the advantage of making things private really private. First, the HPP header ``` // HPP namespace Foo { void barA() ; } ``` Just to be sure you noticed: There is no useless declaration of barB nor myGlobal. Which means that no one reading the header knows what's hidden behind barA. 
Then, the CPP: ``` // CPP namespace Foo { namespace { std::string myGlobal ; void barB() { // I can access myGlobal } } void barA() { // I can access myGlobal, too } } void barC() { // I STILL CAN'T access myGlobal !!! } ``` As you can see, like the so-called "static class" declaration, barA and barB are still able to access myGlobal. But no one else can. And no one else outside this CPP knows barB and myGlobal even exist! **Unlike the "static class" walking in the nude with her address book tattooed on her skin, the "anonymous" namespace is fully clothed**, which seems quite better encapsulated AFAIK. ### Does it really matter? Unless the users of your code are saboteurs (I'll let you, as an exercise, find how one can access the private part of a public class using a dirty behaviour-undefined hack...), what's `private` is `private`, even if it is visible in the `private` section of a class declared in a header. Still, if you need to add another "private function" with access to the private member, you still must declare it to all the world by modifying the header, which is a paradox as far as I am concerned: **If I change the implementation of my code (the CPP part), then the interface (the HPP part) should NOT change.** Quoting Leonidas: "**This is ENCAPSULATION!**" ## Edit 2014-09-20 When are class static methods actually better than namespaces with non-member functions? When you need to group together functions and feed that group to a template: ``` namespace alpha { void foo() ; void bar() ; } struct Beta { static void foo() ; static void bar() ; }; template <typename T> struct Gamma { void foobar() { T::foo() ; T::bar() ; } }; Gamma<alpha> ga ; // compilation error Gamma<Beta> gb ; // ok gb.foobar() ; // ok !!! ``` Because, if a class can be a template parameter, a namespace cannot.
How do you create a static class?
[ "", "c++", "class", "oop", "static-classes", "" ]
I've got a group of inter-related classes that are all overridden together to create a particular implementation. I'm wondering if it is a good idea to enclose the interrelated subclasses in a namespace. For example purposes, consider the following namespaces and classes: ``` namespace Protocol { public abstract class Message { } public abstract class Driver { } } namespace Protocol.Tcp { public class TcpMessage : Message { } public class TcpDriver : Driver { } } namespace Protocol.Ftp { public class FtpMessage : Message { } public class FtpDriver : Driver { } } ``` What is the best way to structure the namespaces? It seems unavoidable to expose the inheritance in the namespace since the base classes don't really belong in either the Protocol.Tcp namespace or the Protocol.Ftp namespace.
I think you are perhaps worrying too much! Does it make sense logically? Do you know where to find your code within the namespaces? I would much rather see a codebase like the above, with a small number of classes relevant to the namespace hierarchy, than one large namespace where everything is interrelated. Remember, namespacing is there for precisely this: to **organise your codebase logically**. What you have seems logical :) EDIT: As an example: ``` using System.Data; using System.Data.Sql; ``` ;)
The original tags show that this post is about C# - therefore multiple inheritance is an irrelevancy - you can't multiply inherit in C#. Maybe you should consider defining some interfaces that define what the basic contracts of a `Message` and a `Driver` are, and then you may feel a little freer to use the namespace structure to mimic the technology differences.
Is it a bad idea to expose inheritance hierarchy in namespace structure?
[ "", "c#", "oop", "inheritance", "naming-conventions", "" ]
In a C++ Windows (XP and NT, if it makes a difference) application I'm working on, I need to get the network name associated with the computer the code is executing on, so that I can convert local filenames from C:\filename.ext to \\network\_name\C$\filename.ext. How would I do this? Alternatively, if there's a function that will just do the conversion I described, that would be even better. I looked into WNetGetUniversalName, but that doesn't seem to work with local (C drive) files.
You'll want Win32's GetComputerName: <http://msdn.microsoft.com/en-us/library/ms724295(VS.85).aspx>
There is more than one alternative: a. Use Win32's GetComputerName() as suggested by Stu. Example: <http://www.techbytes.ca/techbyte97.html> OR b. Use the function gethostname() under Winsock. This function is cross-platform and may help if your app is going to be run on other platforms besides Windows. MSDN Reference: <http://msdn.microsoft.com/en-us/library/ms738527(VS.85).aspx> OR c. Use the function getaddrinfo(). MSDN reference: <http://msdn.microsoft.com/en-us/library/ms738520(VS.85).aspx>
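To tie option (b) back to the original path-conversion goal, here is a sketch. The `to_admin_share` helper name and the buffer size are my own illustrations, not part of any Windows API; on Windows you would call Winsock's `gethostname()` after `WSAStartup()`, or `GetComputerName()` as in option (a) — this sketch uses the POSIX declaration so it stands alone:

```cpp
#include <string>
#include <unistd.h>  // POSIX gethostname(); on Windows include <winsock2.h> instead

// Hypothetical helper: turn "C:\filename.ext" into
// "\\host\C$\filename.ext" via the administrative share.
std::string to_admin_share(const std::string& host, const std::string& local) {
    if (local.size() < 2 || local[1] != ':')
        return local;  // not a drive-letter path; leave it unchanged
    // "\\" + host + "\" + drive letter + "$" + rest of the path
    return "\\\\" + host + "\\" + local[0] + "$" + local.substr(2);
}

std::string local_host_name() {
    char buf[256] = {0};  // 256 is an arbitrary but ample size
    return (gethostname(buf, sizeof(buf) - 1) == 0) ? std::string(buf)
                                                    : std::string();
}
```

For example, `to_admin_share(local_host_name(), "C:\\filename.ext")` would yield `\\yourmachine\C$\filename.ext`.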
In C++/Windows how do I get the network name of the computer I'm on?
[ "", "c++", "windows-xp", "networking", "windows-nt", "" ]