Does javascript use immutable or mutable strings? Do I need a "string builder"?
They are immutable. You cannot change a character within a string with something like `var myString = "abbdef"; myString[2] = 'c'`. String manipulation methods such as [`trim`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/trim) and [`slice`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/slice) return new strings. In the same way, if you have two references to the same string, reassigning one doesn't affect the other: ``` let a = b = "hello"; a = a + " world"; // b is not affected ``` ## Myth Debunking - String concatenation is ***NOT*** slow I've always heard what Ash mentioned in his answer (that using Array.join is faster for concatenation), so I wanted to test the different methods of concatenating strings and abstract the fastest way into a StringBuilder. I wrote some tests to see if this is true (it isn't!). This was what I believed would be the fastest way: avoiding push and using an array to store the strings, then joining them at the end.
``` class StringBuilderArrayIndex { array = []; index = 0; append(str) { this.array[this.index++] = str } toString() { return this.array.join('') } } ``` ## Some benchmarks * Read the test cases in the snippet below * Run the snippet * Press the benchmark button to run the tests and see results I've created two types of tests * Using Array indexing to avoid `Array.push`, then using `Array.join` * Straight string concatenation For each of those tests, I looped appending a constant value and a random string; ``` <script benchmark> // Number of times to loop through, appending random chars const APPEND_COUNT = 1000; const STR = 'Hot diggity dizzle'; function generateRandomString() { const characters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'; const length = Math.floor(Math.random() * 10) + 1; // Random length between 1 and 10 let result = ''; for (let i = 0; i < length; i++) { const randomIndex = Math.floor(Math.random() * characters.length); result += characters.charAt(randomIndex); } return result; } const randomStrings = Array.from({length: APPEND_COUNT}, generateRandomString); class StringBuilderStringAppend { str = ''; append(str) { this.str += str; } toString() { return this.str; } } class StringBuilderArrayIndex { array = []; index = 0; append(str) { this.array[this.index] = str; this.index++; } toString() { return this.array.join(''); } } // @group Same string 'Hot diggity dizzle' // @benchmark array push & join { const sb = new StringBuilderArrayIndex(); for (let i = 0; i < APPEND_COUNT; i++) { sb.append(STR) } sb.toString(); } // @benchmark string concatenation { const sb = new StringBuilderStringAppend(); for (let i = 0; i < APPEND_COUNT; i++) { sb.append(STR) } sb.toString(); } // @group Random strings // @benchmark array push & join { const sb = new StringBuilderArrayIndex(); for (let i = 0; i < APPEND_COUNT; i++) { sb.append(randomStrings[i]) } sb.toString(); } // @benchmark string concatenation { const sb = new 
StringBuilderStringAppend(); for (let i = 0; i < APPEND_COUNT; i++) { sb.append(randomStrings[i]) } sb.toString(); } </script> <script src="https://cdn.jsdelivr.net/gh/silentmantra/benchmark/loader.js"></script> ``` ### Findings Nowadays, all evergreen browsers handle plain string concatenation faster than the array-and-join approach, at least twice as fast in these tests. ### i-12600k (added by Alexander Nenashev) #### Chrome/117 ``` -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation  1.0x  |  x100000  224  232  254  266  275 array push & join  3.2x  |  x100000  722  753  757  762  763 Random strings string concatenation  1.0x  |  x100000  261  268  270  273  279 array push & join  5.4x  |  x10000  142  147  148  155  166 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ``` #### Firefox/118 ``` -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation  1.0x  |  x100000  304  335  353  358  370 array push & join  9.5x  |  x10000  289  300  301  306  309 Random strings string concatenation  1.0x  |  x100000  334  337  345  349  377 array push & join  5.1x  |  x10000  169  176  176  176  180 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ``` ### Results below on a 2.4 GHz 8-Core i9 Mac on Oct 2023 #### Chrome ``` -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation  1.0x  |  x100000  574  592  594  607  613 array push & join  2.7x  |  x10000  156  157  159  164  165 Random strings string concatenation  1.0x  |  x100000  657  663  669  675  680 array push & join  4.3x  |  x10000  283  285  295  298  311 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ``` #### Firefox ``` 
-------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation  1.0x  |  x100000  546  648  659  663  677 array push & join  5.8x  |  x10000  314  320  326  331  335 Random strings string concatenation  1.0x  |  x100000  647  739  764  765  804 array push & join  2.9x  |  x10000  187  188  199  219  231 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ``` #### Brave ``` -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation  1.0x  |  x100000  566  571  572  579  600 array push & join  2.5x  |  x10000  144  145  159  162  166 Random strings string concatenation  1.0x  |  x100000  649  658  659  663  669 array push & join  4.4x  |  x10000  285  285  290  292  300 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ``` #### Safari ``` -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation  1.0x  |  x10000  76  77  77  79  82 array push & join  2.2x  |  x10000  168  168  174  178  186 Random strings string concatenation  1.0x  |  x100000  878  884  889  892  903 array push & join  2.3x  |  x10000  199  200  202  202  204 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ``` #### Opera ``` -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation  1.0x  |  x100000  577  579  581  584  608 array push & join  2.7x  |  x10000  157  162  165  166  171 Random strings string concatenation  1.0x  |  x100000  688  694  740  750  781 array push & join  4.2x  |  x10000  291  315  316  317  379 -------------------------------------------------------------------- 
https://github.com/silentmantra/benchmark ```
from the [rhino book](http://oreilly.com/catalog/9780596000486/): > In JavaScript, strings are immutable objects, which means that the > characters within them may not be changed and that any operations on > strings actually create new strings. Strings are assigned by > reference, not by value. In general, when an object is assigned by > reference, a change made to the object through one reference will be > visible through all other references to the object. Because strings > cannot be changed, however, you can have multiple references to a > string object and not worry that the string value will change without > your knowing it
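The immutability described above is easy to demonstrate directly. A minimal sketch (in strict mode the write throws a `TypeError`; in sloppy mode it is silently ignored, so the string is unchanged either way):

```javascript
'use strict';

const s = "abbdef";
let threw = false;
try {
  s[2] = 'c'; // strings are immutable: in strict mode this throws a TypeError
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true (under strict mode)
console.log(s);     // "abbdef" - unchanged either way

// To "change" a character, build a new string instead:
const t = s.slice(0, 2) + 'c' + s.slice(3);
console.log(t);     // "abcdef"
```

This is also why every string method returns a new string rather than mutating in place.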
Are JavaScript strings immutable? Do I need a "string builder" in JavaScript?
[ "javascript", "string" ]
I have some template code that I would prefer to store in a CPP file instead of inline in the header. I know this can be done as long as you know which template types will be used. For example: **.h file** ``` class foo { public: template <typename T> void doWork(const T& t); }; ``` **.cpp file** ``` template <typename T> void foo::doWork(const T& t) { // Do something with t } template void foo::doWork<int>(const int&); template void foo::doWork<std::string>(const std::string&); ``` Note the last two lines - the foo::doWork template function is only used with ints and std::strings, so those definitions mean the app will link. My question is - is this a nasty hack, or will this work with other compilers/linkers? I am only using this code with VS2008 at the moment but will want to port to other environments.
The problem you describe can be solved by defining the template in the header, or via the approach you describe above. I recommend reading the following points from the [C++ FAQ Lite](https://isocpp.org/wiki/faq/templates): * [Why can’t I separate the definition of my templates class from its declaration and put it inside a .cpp file?](https://isocpp.org/wiki/faq/templates#templates-defn-vs-decl) * [How can I avoid linker errors with my template functions?](https://isocpp.org/wiki/faq/templates#separate-template-fn-defn-from-decl) * [How does the C++ keyword export help with template linker errors?](https://isocpp.org/wiki/faq/templates#separate-template-fn-defn-from-decl-export-keyword) They go into a lot of detail about these (and other) template issues.
For others on this page wondering what the correct syntax is (as did I) for explicit template instantiation (or at least in VS2008), its the following... In your .h file... ``` template<typename T> class foo { public: void bar(const T &t); }; ``` And in your .cpp file ``` template <class T> void foo<T>::bar(const T &t) { } // Explicit template instantiation template class foo<int>; ```
Storing C++ template function definitions in a .CPP file
[ "c++", "templates" ]
Is there a tutorial or help file suitable for a beginner C# programmer to use?
The primary documentation for the Farseer Physics engine is on the homepage. <http://www.codeplex.com/FarseerPhysics/Wiki/View.aspx?title=Documentation&referringTitle=Home> You can also check out the source code; they have a demos folder in there. Though it only has one example, it can show you how to implement the engine. <http://www.codeplex.com/FarseerPhysics/SourceControl/DirectoryView.aspx?SourcePath=%24%2fFarseerPhysics%2fDemos%2fXNA3%2fGettingStarted&changeSetId=40048> As a last resort, check out their forums and ask some questions. They seem nice enough that they should be able to help you out with any questions. <http://www.codeplex.com/FarseerPhysics/Thread/List.aspx>
I realize this is an old question, but for future searchers I will post a few links: **Farseer Physics Helper** Physics helper for Blend makes it very easy to create realistic looking games or demos using practically no code :) <http://physicshelper.codeplex.com/> **Farseer Physics Engine Simple Samples** Very simple and easy to understand samples (compared to the original Farseer ones) <http://farseersimplesamples.codeplex.com/>
Farseer Physics Tutorials, Help files
[ "c#", "silverlight", ".net-3.5", "farseer" ]
What kind of performance implications are there to consider when using try-catch statements in php 5? I've read some old and seemingly conflicting information on this subject on the web before. A lot of the framework I currently have to work with was created on php 4 and lacks many of the niceties of php 5. So, I don't have much experience myself in using try-catchs with php.
One thing to consider is that the cost of a try block where no exception is thrown is a different question from the cost of actually throwing and catching an exception. If exceptions are only thrown in failure cases, you almost certainly don't care about performance, since you won't fail very many times per execution of your program. If you're failing in a tight loop (a.k.a: banging your head against a brick wall), your application likely has worse problems than being slow. So don't worry about the cost of throwing an exception unless you're somehow forced to use them for regular control flow. Someone posted an answer talking about profiling code which throws an exception. I've never tested it myself, but I confidently predict that this will show a much bigger performance hit than just going in and out of a try block without throwing anything. Another thing to consider is that where you nest calls a lot of levels deep, it can even be faster to have a single try...catch right at the top than it is to check return values and propagate errors on every call. In the opposite of that situation, where you find that you're wrapping every call in its own try...catch block, your code will be slower. And uglier.
I was bored and profiled the following (I left the timing code out): ``` function no_except($a, $b) { $a += $b; return $a; } function except($a, $b) { try { $a += $b; } catch (Exception $e) {} return $a; } ``` using two different loops: ``` echo 'no except with no surrounding try'; for ($i = 0; $i < NUM_TESTS; ++$i) { no_except(5, 7); } echo 'no except with surrounding try'; for ($i = 0; $i < NUM_TESTS; ++$i) { try { no_except(5, 7); } catch (Exception $e) {} } echo 'except with no surrounding try'; for ($i = 0; $i < NUM_TESTS; ++$i) { except(5, 7); } echo 'except with surrounding try'; for ($i = 0; $i < NUM_TESTS; ++$i) { try { except(5, 7); } catch (Exception $e) {} } ``` With 1000000 runs on my WinXP box running Apache and PHP 5.2.6: ``` no except with no surrounding try = 3.3296 no except with surrounding try = 3.4246 except with no surrounding try = 3.2548 except with surrounding try = 3.2913 ``` These results were consistent and remained in similar proportion no matter which order the tests ran in. Conclusion: Adding code to handle rare exceptions is no slower than code that ignores exceptions.
Performance of try-catch in php
[ "php", "performance", "exception", "try-catch" ]
I have a table with a "Date" column, and I would like to do a query that does the following: If the date is a **Monday**, **Tuesday**, **Wednesday**, or **Thursday**, the displayed date should be shifted up by 1 day, as in ``` DATEADD(day, 1, [Date]) ``` On the other hand, if it is a **Friday**, the displayed date should be incremented by 3 days (i.e. so it becomes the following *Monday*). How do I do this in my SELECT statement? As in, ``` SELECT somewayofdoingthis([Date]) FROM myTable ``` (This is SQL Server 2000.)
Here is how I would do it. I do recommend a function like the one in the other answer if you will be using this in other places. ``` CASE WHEN DATEPART(dw, [Date]) IN (2,3,4,5) THEN DATEADD(d, 1, [Date]) WHEN DATEPART(dw, [Date]) = 6 THEN DATEADD(d, 3, [Date]) ELSE [Date] END AS [ConvertedDate] ```
``` CREATE FUNCTION dbo.GetNextWDay(@Day datetime) RETURNS DATETIME AS BEGIN DECLARE @ReturnDate DateTime set @ReturnDate = dateadd(dd, 1, @Day) if datename(dw, @ReturnDate) = 'Saturday' set @ReturnDate = dateadd(dd, 2, @ReturnDate) if datename(dw, @ReturnDate) = 'Sunday' set @ReturnDate = dateadd(dd, 1, @ReturnDate) RETURN @ReturnDate END ```
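Since the weekday branching is easy to get wrong, here is a minimal Python sketch (not SQL, just a cross-check of the same logic as the CASE expression): Monday through Thursday advance one day, Friday advances three, weekends are left unchanged:

```python
from datetime import date, timedelta

def shift_display_date(d):
    # weekday(): Monday=0 ... Sunday=6
    wd = d.weekday()
    if wd <= 3:                  # Mon-Thu -> next day
        return d + timedelta(days=1)
    if wd == 4:                  # Fri -> the following Monday
        return d + timedelta(days=3)
    return d                     # Sat/Sun unchanged

print(shift_display_date(date(2008, 1, 11)))  # a Friday -> 2008-01-14 (Monday)
print(shift_display_date(date(2008, 1, 9)))   # a Wednesday -> 2008-01-10
```

Note the SQL `DATEPART(dw, ...)` numbering depends on the server's `DATEFIRST` setting, which is why the `DATENAME`-based function is more portable.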
SQL Query Help: Transforming Dates In A Non-Trivial Way
[ "sql", "sql-server", "date", "sql-server-2000", "dateadd" ]
I know next to nothing when it comes to the how and why of https connections. Obviously, when I'm transmitting secure data like passwords or especially credit card information, https is a critical tool. What do I need to know about it, though? What are the most common mistakes you see developers making when they implement it in their projects? Are there times when https is just a bad idea? Thanks!
An HTTPS, or Secure Sockets Layer (SSL) certificate is served for a site, and is typically signed by a Certificate Authority (CA), which is effectively a trusted 3rd party that verifies some basic details about your site, and certifies it for use in browsers. If your browser trusts the CA, then it trusts any certificates signed by that CA (this is known as the trust chain). Each HTTP (or HTTPS) exchange consists of two parts: a request and a response. When you request something through HTTPS, there are actually a few things happening in the background: * The client (browser) does a "handshake", where it requests the server's public key and identification. + At this point, the browser can check for validity (does the site name match? is the date range current? is it signed by a CA it trusts?). It can even contact the CA and make sure the certificate is valid. * The client creates a new pre-master secret, which is encrypted using the server's public key (so only the server can decrypt it) and sent to the server * The server and client both use this pre-master secret to generate the master secret, which is then used to create a symmetric session key for the actual data exchange * Both sides send a message saying they're done with the handshake * The server then processes the request normally, and then encrypts the response using the session key If the connection is kept open, the same symmetric key will be used for each request. If a new connection is established, and both sides still have the master secret, new session keys can be generated in an 'abbreviated handshake'. Typically a browser will store a master secret until it's closed, while a server will store it for a few minutes or several hours (depending on configuration). For more on the length of sessions see [How long does an HTTPS symmetric key last?](https://security.stackexchange.com/a/55477/27894) **Certificates and Hostnames** Certificates are assigned a Common Name (CN), which for HTTPS is the domain name. 
The CN has to match exactly; e.g., a certificate with a CN of "example.com" will NOT match the domain "www.example.com", and users will get a warning in their browser. Before [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication), it was not possible to host multiple HTTPS domain names with separate certificates on one IP. Because the certificate is fetched before the client even sends the actual HTTP request, and the HTTP request contains the Host: header line that tells the server what URL to use, there is no way for the server to know which certificate to serve for a given request. SNI adds the hostname to part of the TLS handshake, so as long as it's supported on both client and server (and in 2015, it is widely supported) the server can choose the correct certificate. Even without SNI, one way to serve multiple hostnames is with certificates that include Subject Alternative Names (SANs), which are essentially additional domains the certificate is valid for. Google uses a single certificate to secure many of its sites, for example. [![Google SSL certificate](https://i.stack.imgur.com/3Tniz.png)](https://i.stack.imgur.com/3Tniz.png) Another way is to use wildcard certificates. It is possible to get a certificate like "*.example.com", in which case "www.example.com" and "foo.example.com" will both be valid for that certificate. However, note that "example.com" does not match "*.example.com", and neither does "foo.bar.example.com". If you use "www.example.com" for your certificate, you should redirect anyone at "example.com" to the "www." site. If they request <https://example.com>, unless you host it on a separate IP and have two certificates, they will get a certificate error. Of course, you can mix both wildcard and SANs (as long as your CA lets you do this) and get a certificate for both "example.com" and with SANs "*.example.com", "example.net", and "*.example.net", for example. 
**Forms** Strictly speaking, if you are submitting a form, it doesn't matter if the form page itself is not encrypted, as long as the submit URL goes to an https:// URL. In reality, users have been trained (at least in theory) not to submit pages unless they see the little "lock icon", so even the form itself should be served via HTTPS to get the lock icon. **Traffic and Server Load** HTTPS traffic is much bigger than its equivalent HTTP traffic (due to encryption and certificate overhead), and it also puts a bigger strain on the server (encrypting and decrypting). If you have a heavily-loaded server, it may be desirable to be very selective about what content is served using HTTPS. **Best Practices** * If you're not just using HTTPS for the entire site, it should automatically redirect to HTTPS as required. Whenever a user is logged in, they should be using HTTPS, and if you're using session cookies, the cookie should have the [secure flag set](http://php.net/manual/en/function.session-set-cookie-params.php). This prevents interception of the session cookie, which is especially important given the popularity of open (unencrypted) wifi networks. * Any resources on the page should come from the same scheme being used for the page. If you try to fetch images over http:// when the page is loaded with HTTPS, the user will get security warnings. You should either use fully-qualified URLs, or root-relative URLs that omit the hostname (e.g., src="/images/foo.png"), because the latter work for both schemes. + This includes external resources (e.g. Google Analytics) * Don't do POSTs (form submits) when changing from HTTPS to HTTP. Most browsers will flag this as a security warning.
I'm not going to go in depth on SSL in general; gregmac did a great job on that, see below ;-). However, here are some of the most common (and critical) mistakes made (not specifically PHP) with regard to use of SSL/TLS: 1. Allowing HTTP when you should be enforcing HTTPS 2. Retrieving some resources over HTTP from an HTTPS page (e.g. images, IFRAMEs, etc) 3. Directing to an HTTP page from an HTTPS page unintentionally - note that this includes "fake" pages, such as "about:blank" (I've seen this used as IFRAME placeholders); this will needlessly and unpleasantly pop up a warning. 4. Web server configured to support old, insecure versions of SSL (e.g. SSL v2 is common, yet horribly broken) (okay, this isn't exactly the programmer's issue, but sometimes no one else will handle it...) 5. Web server configured to support insecure cipher suites (I've seen NULL ciphers only in use, which basically provides absolutely NO encryption) (ditto) 6. Self-signed certificates - prevents users from verifying the site's identity. 7. Requesting the user's credentials from an HTTP page, even if submitting to an HTTPS page. Again, this prevents a user from validating the server's identity BEFORE giving it his password... Even if the password is transmitted encrypted, the user has no way of knowing if he's on a bogus site - or even if it WILL be encrypted. 8. Non-secure cookie - security-related cookies (such as sessionId, authentication token, access token, etc.) **MUST** be set with the "secure" attribute set. This is important! If it's not set to secure, the security cookie, e.g. SessionId, can be transmitted over HTTP (!) - and attackers can ensure this will happen - thus allowing session hijacking etc. While you're at it (though this is not directly related), set the HttpOnly attribute on your cookies, too (helps mitigate some XSS). 9. Overly permissive certificates - say you have several subdomains, but not all of them are at the same trust level. 
For instance, you have www.yourdomain.com, download.yourdomain.com, and publicaccess.yourdomain.com. So you might think about going with a wildcard certificate.... BUT you also have secure.yourdomain.com, or finance.yourdomain.com - even on a different server. publicaccess.yourdomain.com will then be able to impersonate secure.yourdomain.com.... While there may be instances where this is okay, usually you'd want some separation of privileges... That's all I can remember right now, might re-edit it later... As far as when it is a BAD idea to use SSL/TLS - if you have public information which is NOT intended for a specific audience (either a single user or registered members), AND you're not particular about them retrieving it specifically from the proper source (e.g. stock ticker values MUST come from an authenticated source...) - then there is no real reason to incur the overhead (and not just performance... dev/test/cert/etc). However, if you have shared resources (e.g. same server) between your site and another MORE SENSITIVE site, then the more sensitive site should be setting the rules here. Also, passwords (and other credentials), credit card info, etc. should ALWAYS go over SSL/TLS.
What does a PHP developer need to know about https / secure socket layer connections?
[ "php", "security", "ssl", "https" ]
I have an exe file generated with py2exe. In the setup.py I specify an icon to be embedded in the exe: ``` windows=[{'script': 'my_script.py','icon_resources': [(0, 'my_icon.ico')], ... ``` I tried loading the icon using: ``` hinst = win32api.GetModuleHandle(None) hicon = win32gui.LoadImage(hinst, 0, win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE) ``` But this produces an (very unspecific) error: **pywintypes.error: (0, 'LoadImage', 'No error message is available')** If I try specifying 0 as a string ``` hicon = win32gui.LoadImage(hinst, '0', win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE) ``` then I get the error: **pywintypes.error: (1813, 'LoadImage', 'The specified resource type cannot be found in the image file.')** So, what's the correct method/syntax to load the icon? *Also please notice that I don't use any GUI toolkit - just the Windows API via PyWin32.*
@efotinis: You're right. Here is a workaround until py2exe gets fixed, for when you don't want to include the same icon twice: ``` hicon = win32gui.CreateIconFromResource(win32api.LoadResource(None, win32con.RT_ICON, 1), True) ``` Be aware that **1** is not the ID you gave the icon in setup.py (which is the icon group ID), but the resource ID *automatically* assigned by py2exe to each icon in each icon group. At least that's how I understand it. If you want to create an icon with a specified size (as CreateIconFromResource uses the system default icon size), you need to use CreateIconFromResourceEx, which isn't available via PyWin32: ``` icon_res = win32api.LoadResource(None, win32con.RT_ICON, 1) hicon = ctypes.windll.user32.CreateIconFromResourceEx(icon_res, len(icon_res), True, 0x00030000, 16, 16, win32con.LR_DEFAULTCOLOR) ```
Well, well... I installed py2exe and I think it's a bug. In py2exe\_util.c they should init `rt_icon_id` to 1 instead of 0. The way it is now, it's impossible to load the first format of the first icon using LoadIcon/LoadImage. I'll notify the developers about this if it's not already a known issue. A workaround, in the meantime, would be to include the same icon twice in your setup.py: ``` 'icon_resources': [(1, 'my_icon.ico'), (2, 'my_icon.ico')] ``` You can load the second one, while Windows will use the first one as the shell icon. Remember to use non-zero IDs though. :)
How do you load an embedded icon from an exe file with PyWin32?
[ "python", "icons", "exe", "pywin32" ]
We have an application which needs to use Direct3D. Specifically, it needs at least DirectX 9.0c version 4.09.0000.0904. While this should be present on all newer XP machines it might not be installed on older XP machines. How can I programmatically (using C++) determine if it is installed? I want to be able to give an information message to the user that Direct3D will not be available.
Call DirectXSetupGetVersion: <http://msdn.microsoft.com/en-us/library/microsoft.directx_sdk.directsetup.directxsetupgetversion> You'll need to include dsetup.h Here's the sample code from the site: ``` DWORD dwVersion; DWORD dwRevision; if (DirectXSetupGetVersion(&dwVersion, &dwRevision)) { printf("DirectX version is %d.%d.%d.%d\n", HIWORD(dwVersion), LOWORD(dwVersion), HIWORD(dwRevision), LOWORD(dwRevision)); } ```
According to the DirectX 9.0 SDK (summer 2004) documentation, see the GetDXVer SDK sample at \Samples\Multimedia\DXMisc\GetDXVer.
How to determine which version of Direct3D is installed?
[ "c++", "windows", "direct3d" ]
No matter what I do sys.exit() is called by unittest, even the most trivial examples. I can't tell if my install is messed up or what is going on. ``` IDLE 1.2.2 ==== No Subprocess ==== >>> import unittest >>> >>> class Test(unittest.TestCase): def testA(self): a = 1 self.assertEqual(a,1) >>> unittest.main() option -n not recognized Usage: idle.pyw [options] [test] [...] Options: -h, --help Show this message -v, --verbose Verbose output -q, --quiet Minimal output Examples: idle.pyw - run default set of tests idle.pyw MyTestSuite - run suite 'MyTestSuite' idle.pyw MyTestCase.testSomething - run MyTestCase.testSomething idle.pyw MyTestCase - run all 'test*' test methods in MyTestCase Traceback (most recent call last): File "<pyshell#7>", line 1, in <module> unittest.main() File "E:\Python25\lib\unittest.py", line 767, in __init__ self.parseArgs(argv) File "E:\Python25\lib\unittest.py", line 796, in parseArgs self.usageExit(msg) File "E:\Python25\lib\unittest.py", line 773, in usageExit sys.exit(2) SystemExit: 2 >>> ```
Your example is exiting on my install too. I can make it execute the tests and stay within Python by changing ``` unittest.main() ``` to ``` unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(Test)) ``` More information is available [here](http://docs.python.org/library/unittest.html#basic-example) in the Python Library Reference.
In the Python 2.7 release, [unittest.main()](http://docs.python.org/library/unittest.html#unittest.main) gained a new argument: if `exit` is set to `False`, `sys.exit()` is not called during the execution of `unittest.main()`.
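A minimal sketch of that `exit=False` usage (Python 2.7+/3.x). Passing an explicit `argv` also keeps unittest from parsing the host program's command line, which is exactly what tripped up IDLE in the question:

```python
import unittest

class Test(unittest.TestCase):
    def test_a(self):
        self.assertEqual(1, 1)

# exit=False keeps unittest.main() from calling sys.exit();
# argv=['prog'] stops it from picking up the host process's arguments
program = unittest.main(argv=['prog'], exit=False)
print(program.result.wasSuccessful())  # True
```

`unittest.main()` returns a `TestProgram` object whose `result` attribute holds the `TestResult`, so you can inspect failures programmatically instead of exiting.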
Unittest causing sys.exit()
[ "python", "unit-testing" ]
I have a table called OffDays, where weekends and holiday dates are kept. I have a table called LeadTime where amount of time (in days) for a product to be manufactured is stored. Finally I have a table called Order where a product and the order date is kept. Is it possible to query when a product will be finished manufacturing without using stored procedures or loops? For example: * OffDays has 2008-01-10, 2008-01-11, 2008-01-14. * LeadTime has 5 for product 9. * Order has 2008-01-09 for product 9. The calculation I'm looking for is this: * 2008-01-09 1 * 2008-01-10 x * 2008-01-11 x * 2008-01-12 2 * 2008-01-13 3 * 2008-01-14 x * 2008-01-15 4 * 2008-01-16 5 I'm wondering if it's possible to have a query return 2008-01-16 without having to use a stored procedure, or calculate it in my application code. **Edit (why no stored procs / loops):** The reason I can't use stored procedures is that they are not supported by the database. I can only add extra tables / data. The application is a third party reporting tool where I can only control the SQL query. **Edit (how i'm doing it now):** My current method is that I have an extra column in the order table to hold the calculated date, then a scheduled task / cron job runs the calculation on all the orders every hour. This is less than ideal for several reasons.
You can generate a table of working days in advance. ``` WDId | WDDate -----+----------- 4200 | 2008-01-08 4201 | 2008-01-09 4202 | 2008-01-12 4203 | 2008-01-13 4204 | 2008-01-16 4205 | 2008-01-17 ``` Then do a query such as ``` SELECT DeliveryDay.WDDate FROM WorkingDay OrderDay, WorkingDay DeliveryDay, LeadTime, Order where DeliveryDay.WDId = OrderDay.WDId + LeadTime.LTDays AND OrderDay.WDDate = '' AND LeadTime.ProductId = Order.ProductId AND Order.OrderId = 1234 ``` You would need a stored procedure with a loop to generate the WorkingDays table, but not for regular queries. It's also fewer round trips to the server than if you use application code to count the days.
The best approach is to use a Calendar table. See <http://web.archive.org/web/20070611150639/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-calendar-table.html>. Then your query could look something like: ``` SELECT c.dt, l.*, o.*, c.* FROM [statistics].dbo.[calendar] c, [order] o JOIN lead l ON l.leadId = o.leadId WHERE c.isWeekday = 1 AND c.isHoliday =0 AND o.orderId = 1 AND l.leadDays = ( SELECT COUNT(*) FROM [statistics].dbo.Calendar c2 WHERE c2.dt >= o.startDate AND c2.dt <= c.dt AND c2.isWeekday=1 AND c2.isHoliday=0 ) ``` Hope that helps, RB.
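To make the day-counting rule concrete (day 1 is the order date itself, and off-days don't count), here is a small Python sketch of the same calculation the calendar-table queries perform; the data matches the example in the question:

```python
from datetime import date, timedelta

def finish_date(order_date, lead_days, off_days):
    """Advance lead_days working days from order_date, skipping off_days.
    Day 1 is the order date itself, matching the example in the question."""
    d = order_date
    counted = 0
    while True:
        if d not in off_days:
            counted += 1
            if counted == lead_days:
                return d
        d += timedelta(days=1)

off = {date(2008, 1, 10), date(2008, 1, 11), date(2008, 1, 14)}
print(finish_date(date(2008, 1, 9), 5, off))  # 2008-01-16
```

The SQL answers avoid this loop by precomputing the working-day sequence, so the lead time becomes a simple offset into that table.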
Non-iterative / Non-looping Way To Calculate Effective Date?
[ "sql", "date" ]
I want to link to a bookmark on a page (mysite.com/mypage.htm#bookmark) AND visually highlight the item that was bookmarked (maybe with a red border). Naturally, there would be multiple items bookmarked, so that if someone clicked on #bookmark2 then *that* other area would be highlighted. I can see how to do that with .asp or .aspx but I'd like to do it more simply than that. I thought maybe there was a clever way to do it with CSS. WHY I'm interested: I want to have our programs link to a shopping page that lists all the programs on it. I'm using a bookmark so they're jumping to the particular program area (site.com/shoppingpage#Programx) but just to make it *obvious* I'd like to actually highlight the page being linked to.
In your CSS you need to define

```
.highlight {border:1px solid red;}
```

or something similar. Then, using jQuery,

```
$(document).ready ( function () { //Work as soon as the DOM is ready for parsing
    var id = location.hash.substr(1); //Get the word after the hash from the url
    if (id) $('#'+id).addClass('highlight'); // add class highlight to element whose id is the word after the hash
});
```

To highlight the targets on mouse over also add:

```
$("a[href^='#']")
    .mouseover(function() {
        var id = $(this).attr('href').substr(1);
        $('#'+id).addClass('highlight');
    })
    .mouseout(function() {
        var id = $(this).attr('href').substr(1);
        $('#'+id).removeClass('highlight');
    });
```
You can also use the `target` pseudo-class in CSS: ``` <html> <head> <style type="text/css"> div#test:target { background-color: yellow; } </style> </head> <body> <p><b><a href="#test">Link</a></b></p> <div id="test"> Target </div> </body> </html> ``` Unfortunately the target pseudo class isn't supported by IE or Opera, so if you're looking for a universal solution here this might not be sufficient.
Is there a way to highlight the target of a bookmark? (www.site.com/page.htm#bookmark)?
[ "javascript", "jquery", "css" ]
I have heard there is a way to cheat single inheritance and implement multiple inheritance in Java. Does anyone know how to implement this (without using interfaces)? Just out of curiosity ;-)
Sure you can, but it's tricky, and you should really consider whether that's the way you want to go. The idea is to use scope-based inheritance coupled with the type-based kind. In type-talk that means: for internal purposes, inner classes "inherit" the methods and fields of the outer class. It's a bit like mixins, where the outer class is mixed in to the inner class, but not as safe, since you can change the state of the outer class as well as use its methods. Gilad Bracha (one of the main Java language designers) wrote a [paper](http://bracha.org/dyla.pdf) discussing this. So, suppose you want to share some methods for internal use between some unrelated classes (e.g., for string manipulation): you can create subclasses of them as inner classes of a class that has all the needed methods, and the subclasses can use methods both from their superclasses and from the outer class. Anyway, it's tricky for complex classes, and you could get most of the functionality using static imports (from Java 5 on). Great question for job interviews and pub quizzes, though ;-)
Multiple inheritance is not supported by Java; instead it has interfaces to serve the same purpose. If you are adamant about using multiple inheritance, it should be done in C++.
Cheat single inheritance in Java?
[ "java", "oop", "inheritance" ]
We have 18 databases that should have identical schemas, but don't. In certain scenarios, a table was added to one, but not the rest. Or, certain stored procedures were required in a handful of databases, but not the others. Or, our DBA forgot to run a script to add views on all of the databases. What is the best way to keep database schemas in sync?
For legacy fixes/cleanup, there are tools, like [SQLCompare](http://www.red-gate.com/products/SQL_Compare/index.htm?gclid=CMvQrZjZ5ZUCFQZqswodnQ8Tfg), that can generate scripts to sync databases. For .NET shops running SQL Server, there is also the [Visual Studio Database Edition](http://msdn.microsoft.com/en-us/vsts2008/db/default.aspx), which can create change scripts for schema changes that can be checked into source control, and automatically built using your CI/build process.
[SQL Compare](http://www.red-gate.com/products/SQL_Compare/index.htm) by Red Gate is a great tool for this.
How can I maintain a consistent DB schema across 18 databases (sql server)?
[ "sql", "sql-server", "database", "schema" ]
I got this singleton cache object and it exposes an IEnumerable property which just returns a private IEnumerable variable. I have a static method on my singleton object that updates this member variable (that exists on the single 'Instance' instance of this cache object). Let's say some thread is currently iterating over this IEnumerable variable/property while my cache is updating. I made it so the cache is updating on a new local variable and finally setting the exposed private variable to point to this new local variable. I know I'm just updating a reference, leaving the other (old) object in memory waiting to be picked up by the GC, but my problem is: I'm not 100% sure what happens once I set the new reference. Would the other thread suddenly be iterating over the new object or the old one it got passed through the IEnumerable interface? If it had been a normal reference I'd say 'no' - the calling thread would be operating on the old object - but I'm not sure if this is the case for IEnumerable as well. Here is the class stripped down:

```
internal sealed class SektionCache : CacheBase
{
    public static readonly SektionCache Instance = new SektionCache();
    private static readonly object lockObject = new object();
    private static bool isUpdating;
    private IEnumerable<Sektion> sektioner;

    static SektionCache()
    {
        UpdateCache();
    }

    public IEnumerable<Sektion> Sektioner
    {
        get { return sektioner; }
    }

    public static void UpdateCache()
    {
        // SNIP - getting data, locking etc.
        Instance.sektioner = newSektioner;
        // SNIP
    }
}
```
The thread which is currently enumerating sektioner will continue to enumerate it even when you update the reference within the singleton. There is nothing special about objects which implement IEnumerable. You should perhaps add the [volatile](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/volatile) keyword to the sektioner field as you are not providing read-locking and multiple threads are reading/writing it.
Since the getter `{ return sektioner; }` is called before the new value is put in the field, the old value is returned. Then, the loop `foreach (Sektion s in cache.Sektioner)` uses the value that was received when the getter was called, i.e. the old value. That value will be used throughout the foreach loop.
Updating reference to a member variable in use
[ "c#", ".net", "singleton" ]
Suppose we have:

```
interface Foo {
    bool Func(int x);
}

class Bar : Foo {
    public bool Func(int x) {
        return (x > 0);
    }
}

class Baz : Foo {
    public bool Func(int x) {
        return (x < 0);
    }
}
```

Now we can toss around Bar and Baz as Foos and call their Func methods. Delegates simplify this a little bit:

```
delegate bool Foo(int x);

bool Bar(int x) {
    return (x < 0);
}

bool Baz(int x) {
    return (x > 0);
}
```

Now we can toss around Bar and Baz as Foo delegates. What is the real benefit of delegates, except for getting shorter code?
There is a slight difference: delegates can access the member variables of the classes in which they are defined. In C# (unlike Java) all inner classes are considered static. Therefore, if you are using an interface to manage a callback, e.g. an ActionListener for a button, the implementing inner class needs to be passed (via the constructor) references to the parts of the containing class that it may need to interact with during the callback. Delegates do not have this restriction and therefore reduce the amount of code required to implement the callback. Shorter, more concise code is also a worthy benefit.
From a Software Engineering perspective you are right: delegates are much like function interfaces in that they prototype a function interface. They can also be used in much the same way: instead of passing in a whole class that contains the method you need, you can pass in just a delegate. This saves a whole lot of code and creates much more readable code. Moreover, with the advent of lambda expressions they can now also be defined easily on the fly, which is a huge bonus. While it is POSSIBLE to build classes on the fly in C#, it's really a huge pain in the butt. Comparing the two is an interesting concept. I hadn't previously considered how much alike the ideas are from a use case and code structuring standpoint.
Are delegates not just shorthand interfaces?
[ "c#", "delegates", "interface" ]
Using C++Builder 2007, the FindFirstFile and FindNextFile functions don't seem to be able to find some files on 64-bit versions of Vista and XP. My test application is 32-bit. If I use them to iterate through the folder C:\Windows\System32\Drivers they only find a handful of files although there are 185 when I issue a dir command in a command prompt. Using the same example code lists all files fine on a 32-bit version of XP. Here is a small example program:

```
int main(int argc, char* argv[])
{
    HANDLE hFind;
    WIN32_FIND_DATA FindData;
    int ErrorCode;
    bool cont = true;

    cout << "FindFirst/Next demo." << endl << endl;

    hFind = FindFirstFile("*.*", &FindData);
    if (hFind == INVALID_HANDLE_VALUE)
    {
        ErrorCode = GetLastError();
        if (ErrorCode == ERROR_FILE_NOT_FOUND)
        {
            cout << "There are no files matching that path/mask" << endl;
        }
        else
        {
            cout << "FindFirstFile() returned error code " << ErrorCode << endl;
        }
        cont = false;
    }
    else
    {
        cout << FindData.cFileName << endl;
    }

    if (cont)
    {
        while (FindNextFile(hFind, &FindData))
        {
            cout << FindData.cFileName << endl;
        }

        ErrorCode = GetLastError();
        if (ErrorCode == ERROR_NO_MORE_FILES)
        {
            cout << endl << "All files logged." << endl;
        }
        else
        {
            cout << "FindNextFile() returned error code " << ErrorCode << endl;
        }

        if (!FindClose(hFind))
        {
            ErrorCode = GetLastError();
            cout << "FindClose() returned error code " << ErrorCode << endl;
        }
    }
    return 0;
}
```

Running it in the C:\Windows\System32\Drivers folder on 64-bit XP returns this:

```
C:\WINDOWS\system32\drivers>t:\Project1.exe
FindFirst/Next demo.

.
..
AsIO.sys
ASUSHWIO.SYS
hfile.txt
raspti.zip
stcp2v30.sys
truecrypt.sys

All files logged.
```

A dir command on the same system returns this:

```
C:\WINDOWS\system32\drivers>dir/p
 Volume in drive C has no label.
 Volume Serial Number is E8E1-0F1E

 Directory of C:\WINDOWS\system32\drivers

16-09-2008  23:12    <DIR>          .
16-09-2008  23:12    <DIR>          ..
17-02-2007 00:02 80.384 1394bus.sys 16-09-2008 23:12 9.453 a.txt 17-02-2007 00:02 322.560 acpi.sys 29-03-2006 14:00 18.432 acpiec.sys 24-03-2005 17:11 188.928 aec.sys 21-06-2008 15:07 291.840 afd.sys 29-03-2006 14:00 51.712 amdk8.sys 17-02-2007 00:03 111.104 arp1394.sys 08-05-2006 20:19 8.192 ASACPI.sys 29-03-2006 14:00 25.088 asyncmac.sys 17-02-2007 00:03 150.016 atapi.sys 17-02-2007 00:03 106.496 atmarpc.sys 29-03-2006 14:00 57.344 atmepvc.sys 17-02-2007 00:03 91.648 atmlane.sys 17-02-2007 00:03 569.856 atmuni.sys 24-03-2005 19:12 5.632 audstub.sys 29-03-2006 14:00 6.144 beep.sys Press any key to continue . . . etc. ``` I'm puzzled. What is the reason for this? Brian
Is there redirection going on? See the remarks on Wow64DisableWow64FsRedirection <http://msdn.microsoft.com/en-gb/library/aa365743.aspx>
I found this on MSDN:

*If you are writing a 32-bit application to list all the files in a directory and the application may be run on a 64-bit computer, you should call the Wow64DisableWow64FsRedirection function before calling FindFirstFile and call Wow64RevertWow64FsRedirection after the last call to FindNextFile. For more information, see File System Redirector.*

[Here's the link](http://msdn.microsoft.com/en-us/library/aa364418(VS.85).aspx)

I'll have to update my code because of this :-)
FindNextFile fails on 64-bit Windows?
[ "c++", "64-bit", "c++builder" ]
I have a web page that includes a bunch of images. Sometimes the image isn't available, so a broken image is displayed in the client's browser. How do I use jQuery to get the set of images, filter it to broken images then replace the src?

---

**Edit:** I thought it would be easier to do this with jQuery, but it turned out much easier to just use a pure JavaScript solution, that is, the one provided by Prestaul.
Handle the `onError` event for the image to reassign its source using JavaScript: ``` function imgError(image) { image.onerror = ""; image.src = "/images/noimage.gif"; return true; } ``` ``` <img src="image.png" onerror="imgError(this);"/> ``` Or without a JavaScript function: ``` <img src="image.png" onError="this.onerror=null;this.src='/images/noimage.gif';" /> ``` The following compatibility table lists the browsers that support the error facility: <http://www.quirksmode.org/dom/events/error.html>
I use the built in `error` handler: ``` $("img").error(function () { $(this).unbind("error").attr("src", "broken.gif"); }); ``` **Edit:** The `error()` method is deprecated in [jquery 1.8](https://api.jquery.com/error/) and higher. Instead, you should use `.on("error")` instead: ``` $("img").on("error", function () { $(this).attr("src", "broken.gif"); }); ```
jQuery/JavaScript to replace broken images
[ "javascript", "jquery", "html", "broken-image" ]
I have a list of timesheet entries that show a start and stop time. This is sitting in a MySQL database. I need to create bar charts based on this data with the 24 hours of the day along the bottom and the amount of man-hours worked for each hour of the day. For example, if Alice worked a job from 15:30 to 19:30 and Bob worked from 12:15 to 17:00, the chart would look like this: [![Example Chart](https://i.stack.imgur.com/HHrs0.png)](https://i.stack.imgur.com/HHrs0.png) I have a WTFey solution right now that involves a spreadsheet going out to column DY or something like that. The needed resolution is 15-minute intervals. I'm assuming this is something best done in the database then exported for chart creation. Let me know if I'm missing any details. Thanks.
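Before writing the SQL, it helps to lock down the expected numbers with a small reference calculation. This Python sketch (function name mine, data from the Alice/Bob example) computes man-hours per clock hour directly from the intervals:

```python
def man_hours_per_hour(shifts):
    """shifts: list of (start, end) tuples in minutes since midnight.
    Returns a dict mapping clock hour -> man-hours worked in that hour."""
    totals = {}
    for start, end in shifts:
        for hour in range(start // 60, (end + 59) // 60):
            lo = max(start, hour * 60)
            hi = min(end, (hour + 1) * 60)
            if hi > lo:
                totals[hour] = totals.get(hour, 0) + (hi - lo) / 60
    return totals

# Alice 15:30-19:30, Bob 12:15-17:00
chart = man_hours_per_hour([(15 * 60 + 30, 19 * 60 + 30), (12 * 60 + 15, 17 * 60)])
print(chart[12], chart[15])  # 0.75 1.5
```

Hour 12 comes out as 0.75 and hour 15 as 1.5, matching the chart above; the database solution just has to reproduce this aggregation at 15-minute resolution.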
Create a table with just time in it from midnight to midnight containing each minute of the day. In the data warehouse world we would call this a time dimension. Here's an example: ``` TIME_DIM -id -time_of_day -interval_15 -interval_30 ``` an example of the data in the table would be ``` id time_of_day interval_15 interval_30 1 00:00 00:00 00:00 ... 30 00:23 00:15 00:00 ... 100 05:44 05:30 05:30 ``` Then all you have to do is join your table to the time dimension and then group by interval\_15. For example: ``` SELECT b.interval_15, count(*) FROM my_data_table a INNER JOIN time_dim b ON a.time_field = b.time WHERE a.date_field = now() GROUP BY b.interval_15 ```
How about this: Use that "times" table, but with two columns, containing the 15-minute intervals. The from\_times are the 15-minutely times, the to\_times are a second before the next from\_times. For example 12:30:00 to 12:44:59. Now get your person work table, which I've called "activity" here, with start\_time and end\_time columns. I added values for Alice and Bob as per the original question. Here's the query from MySQL: ``` SELECT HOUR(times.from_time) AS 'TIME', count(*) / 4 AS 'HOURS' FROM times JOIN activity ON times.from_time >= activity.start_time AND times.to_time <= activity.end_time GROUP BY HOUR(times.from_time) ORDER BY HOUR(times.from_time) ``` which gives me this: ``` TIME HOURS 12 0.7500 13 1.0000 14 1.0000 15 1.5000 16 2.0000 17 1.0000 18 1.0000 19 0.7500 ``` Looks about right...
Elegant method for drawing hourly bar chart from time-interval data?
[ "sql", "excel", "charts", "group-by" ]
Simple question, how do you list the primary key of a table with T-SQL? I know how to get indexes on a table, but can't remember how to get the PK.
``` SELECT Col.Column_Name from INFORMATION_SCHEMA.TABLE_CONSTRAINTS Tab, INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE Col WHERE Col.Constraint_Name = Tab.Constraint_Name AND Col.Table_Name = Tab.Table_Name AND Tab.Constraint_Type = 'PRIMARY KEY' AND Col.Table_Name = '<your table name>' ```
It's generally recommended practice now to use the `sys.*` views over `INFORMATION_SCHEMA` in SQL Server, so unless you're planning on migrating databases I would use those. Here's how you would do it with the `sys.*` views: ``` SELECT c.name AS column_name, i.name AS index_name, c.is_identity FROM sys.indexes i inner join sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id inner join sys.columns c ON ic.object_id = c.object_id AND c.column_id = ic.column_id WHERE i.is_primary_key = 1 and i.object_ID = OBJECT_ID('<schema>.<tablename>'); ```
How do you list the primary key of a SQL Server table?
[ "sql", "sql-server", "t-sql" ]
I'm inserting multiple records into a table A from another table B. Is there a way to get the identity value of table A record and update table b record with out doing a cursor? ``` Create Table A (id int identity, Fname nvarchar(50), Lname nvarchar(50)) Create Table B (Fname nvarchar(50), Lname nvarchar(50), NewId int) Insert into A(fname, lname) SELECT fname, lname FROM B ``` I'm using MS SQL Server 2005.
MBelly is right on the money - but then the trigger will always try to update table B even if that's not required (because you're also inserting from table C?). Darren is also correct here: you can't get multiple identities back as a result set. Your options are using a cursor and taking the identity for each row you insert, or using Darren's approach of storing the identity before and after. As long as you know the increment of the identity this should work, provided you make sure the table is locked for all three events. If it were me, and it wasn't time critical, I'd go with a cursor.
Use the output clause from 2005:

```
DECLARE @output TABLE (id int)

Insert into A (fname, lname)
OUTPUT inserted.ID INTO @output
SELECT fname, lname FROM B

select * from @output
```

Now your table variable has the identity values of all the rows you insert.
How to insert multiple records and get the identity value?
[ "sql", "sql-server", "sql-server-2005" ]
How do I convert a datetime *string in local time* to a *string in UTC time*? I'm sure I've done this before, but can't find it and SO will hopefully help me (and others) do that in future. **Clarification**: For example, if I have `2008-09-17 14:02:00` in my local timezone (`+10`), I'd like to generate a string with the equivalent `UTC` time: `2008-09-17 04:02:00`. Also, from <http://lucumr.pocoo.org/2011/7/15/eppur-si-muove/>, note that in general this isn't possible as with DST and other issues there is no unique conversion from local time to UTC time.
Thanks @rofly, the full conversion from string to string is as follows:

```
import time
time.strftime("%Y-%m-%d %H:%M:%S",
              time.gmtime(time.mktime(time.strptime("2008-09-17 14:04:00",
                                                    "%Y-%m-%d %H:%M:%S"))))
```

My summary of the `time`/`calendar` functions:

* `time.strptime`: string --> tuple (no timezone applied, so matches string)
* `time.mktime`: local time tuple --> seconds since epoch (always local time)
* `time.gmtime`: seconds since epoch --> tuple in UTC
* `calendar.timegm`: tuple in UTC --> seconds since epoch
* `time.localtime`: seconds since epoch --> tuple in local timezone
First, parse the string into a naive datetime object. This is an instance of `datetime.datetime` with no attached timezone information. See its [documentation](https://docs.python.org/3/library/datetime.html#available-types). Use the [`pytz`](http://pytz.sourceforge.net/) module, which comes with a full list of time zones + UTC. Figure out what the local timezone is, construct a timezone object from it, and manipulate and attach it to the naive datetime. Finally, use `datetime.astimezone()` method to convert the datetime to UTC. Source code, using local timezone "America/Los\_Angeles", for the string "2001-2-3 10:11:12": ``` from datetime import datetime import pytz local = pytz.timezone("America/Los_Angeles") naive = datetime.strptime("2001-2-3 10:11:12", "%Y-%m-%d %H:%M:%S") local_dt = local.localize(naive, is_dst=None) utc_dt = local_dt.astimezone(pytz.utc) ``` From there, you can use the `strftime()` method to format the UTC datetime as needed: ``` utc_dt.strftime("%Y-%m-%d %H:%M:%S") ```
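On Python 3.9 and later, the standard library's `zoneinfo` module can stand in for `pytz` in the approach above; a sketch of the same conversion (timezone name kept from the example):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+, needs system tz data (or the tzdata package)

local = ZoneInfo("America/Los_Angeles")
naive = datetime.strptime("2001-2-3 10:11:12", "%Y-%m-%d %H:%M:%S")
utc_dt = naive.replace(tzinfo=local).astimezone(timezone.utc)
print(utc_dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2001-02-03 18:11:12
```

Unlike `pytz`, a `ZoneInfo` instance can be attached directly with `replace()`; no `localize()` step is needed.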
How to convert local time string to UTC?
[ "python", "datetime", "utc", "localtime" ]
For example, given a type param method, I'm looking for something like the `typeof(T).Implements(IMyInterface)` check below:

```
void MyMethod<T>() {
    if (typeof(T).Implements(IMyInterface)) {
        // Do something
    } else {
        // Do something else
    }
}
```

Answers using C# 3.0 are also welcome, but first drop the .NET 2.0 ones please ;)
[Type.IsAssignableFrom](http://msdn.microsoft.com/en-us/library/system.type.isassignablefrom.aspx) ``` if(typeof(IMyInterface).IsAssignableFrom(typeof(T))) { // something } else { // something else } ```
I think

```
if (typeof(IMyInterface).IsAssignableFrom(typeof(T)))
```

should also work, but I don't see an advantage...
What is the easiest way to know if a type param implements an interface in c# 2.0?
[ "c#", "reflection", "types" ]
In C#.NET I am trying to programmatically change the color of the border in a group box. Update: This question was asked when I was working on a winforms system before we switched to .NET.
Building on the previous answer, a better solution that includes the label for the group box: ``` groupBox1.Paint += PaintBorderlessGroupBox; private void PaintBorderlessGroupBox(object sender, PaintEventArgs p) { GroupBox box = (GroupBox)sender; p.Graphics.Clear(SystemColors.Control); p.Graphics.DrawString(box.Text, box.Font, Brushes.Black, 0, 0); } ``` You might want to adjust the x/y for the text, but for my use this is just right.
Just add paint event. ``` private void groupBox1_Paint(object sender, PaintEventArgs e) { GroupBox box = sender as GroupBox; DrawGroupBox(box, e.Graphics, Color.Red, Color.Blue); } private void DrawGroupBox(GroupBox box, Graphics g, Color textColor, Color borderColor) { if (box != null) { Brush textBrush = new SolidBrush(textColor); Brush borderBrush = new SolidBrush(borderColor); Pen borderPen = new Pen(borderBrush); SizeF strSize = g.MeasureString(box.Text, box.Font); Rectangle rect = new Rectangle(box.ClientRectangle.X, box.ClientRectangle.Y + (int)(strSize.Height / 2), box.ClientRectangle.Width - 1, box.ClientRectangle.Height - (int)(strSize.Height / 2) - 1); // Clear text and border g.Clear(this.BackColor); // Draw text g.DrawString(box.Text, box.Font, textBrush, box.Padding.Left, 0); // Drawing Border //Left g.DrawLine(borderPen, rect.Location, new Point(rect.X, rect.Y + rect.Height)); //Right g.DrawLine(borderPen, new Point(rect.X + rect.Width, rect.Y), new Point(rect.X + rect.Width, rect.Y + rect.Height)); //Bottom g.DrawLine(borderPen, new Point(rect.X, rect.Y + rect.Height), new Point(rect.X + rect.Width, rect.Y + rect.Height)); //Top1 g.DrawLine(borderPen, new Point(rect.X, rect.Y), new Point(rect.X + box.Padding.Left, rect.Y)); //Top2 g.DrawLine(borderPen, new Point(rect.X + box.Padding.Left + (int)(strSize.Width), rect.Y), new Point(rect.X + rect.Width, rect.Y)); } } ```
How do you change the color of the border on a group box?
[ "c#", "winforms", "user-interface", "controls", "groupbox" ]
Someone please correct me if I'm wrong, but parsing a yyyy/MM/dd (or other specific format) date in C# **should** be as easy as

```
DateTime.ParseExact(theDate, "yyyy/MM/dd");
```

but no, C# forces you to create an IFormatProvider. Is there an app.config friendly way of setting this so I don't need to do this each time?

```
DateTime.ParseExact(theDate, "yyyy/MM/dd", new CultureInfo("en-CA", true));
```
The IFormatProvider argument can be null.
ParseExact needs a culture: consider "yyyy MMM dd". MMM will be a localized month name that uses the current culture.
Parsing exact dates in C# shouldn't force you to create an IFormatProvider
[ "c#", ".net", "datetime", "parsing", "globalization" ]
How can I get a full call stack for a c++ application developed with Visual Studio 2005? I would like to have a full call stack including the code in the system libraries. Do I have to change some settings in Visual Studio, or do I have to install additional software?
1. Get debug information for all project dependencies. This is specified under the "Configuration Properties -> C/C++ -> General" section of the project properties.
2. On the menu, go to "Tools -> Options" then select "Debugging -> Symbols".
3. Add a new symbol location (the folder icon) that points to Microsoft's free symbol server: `symsrv*symsrv.dll*c:\symbols*http://msdl.microsoft.com/download/symbols`
4. Fill out the "cache symbols" field with some place locally so you don't go to the internet all the time.
Agree with Clay, but for Symbols Server you should get the latest symsrv.DLL from "Debugging Tools For Windows", a free Microsoft download. (Since you explicitly asked what you need to download, I presume you don't have it yet)
How to get a full call stack in Visual Studio 2005?
[ "c++", "visual-studio", "debugging", "callstack", "visual-c++-2005" ]
I'm using python and CherryPy to create a simple internal website that about 2 people use. I use the built in webserver with CherryPy.quickstart and never messed with the config files. I recently changed machines so I installed the latest Python and cherrypy and when I run the site I can access it from localhost:8080 but not through the IP or the windows machine name. It could be a machine configuration difference or a newer version of CherryPy or Python. Any ideas how I can bind to the correct IP address? Edit: to make it clear, I currently don't have a config file at all.
That depends on how you are running the cherrypy init. If using cherrypy 3.1 syntax, this would do it:

```
cherrypy.server.socket_host = 'www.machinename.com'
cherrypy.engine.start()
cherrypy.engine.block()
```

Of course you can have something more fancy, like subclassing the server class, or using config files. Those uses are covered in [the documentation](https://docs.cherrypy.dev/en/latest/ "Cherrypy documentation"). But that should be enough. If not, just tell us what you are doing and your cherrypy version, and I will edit this answer.
``` server.socket_host: '0.0.0.0' ``` ...would also work. That's IPv4 INADDR\_ANY, which means, "listen on all interfaces". In a config file, the syntax is: ``` [global] server.socket_host: '0.0.0.0' ``` In code: ``` cherrypy.server.socket_host = '0.0.0.0' ```
How do I configure the ip address with CherryPy?
[ "python", "cherrypy" ]
I'd like the smallest possible JavaScript routine such that when a mousedown occurs on a button, it first responds just like a mouseclick; then, if the user keeps the button pressed, it responds as if the user were continuously sending mouseclicks; and after a while with the button held down, it acts as if the user were accelerating their mouseclicks. Basically, think of it like a keypress repeat with acceleration over time.

i.e. user holds down mouse button (x = call function) - x\_\_\_x\_\_\_x\_\_\_x\_\_x\_\_x\_x\_x\_x\_xxxxxxx
```
function holdit(btn, action, start, speedup) {
    var t;

    var repeat = function () {
        action();
        t = setTimeout(repeat, start);
        start = start / speedup;
    };

    btn.onmousedown = function () {
        repeat();
    };

    btn.onmouseup = function () {
        clearTimeout(t);
    };
}

/* to use */
holdit(btn, function () { }, 1000, 2); /* x..1000ms..x..500ms..x..250ms..x */
```
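The accelerating delay schedule inside `holdit` can be pulled out as a pure function, which makes the timing rule easy to unit test on its own (the function name here is mine, not part of the answer above):

```javascript
// First `count` delays produced by holdit's halving rule:
// start, start/speedup, start/speedup^2, ...
function delaySchedule(start, speedup, count) {
  const delays = [];
  for (let i = 0; i < count; i++) {
    delays.push(start);
    start = start / speedup;
  }
  return delays;
}

console.log(delaySchedule(1000, 2, 4)); // [ 1000, 500, 250, 125 ]
```

Wiring it back into `holdit` would just mean reading successive delays from this sequence instead of mutating `start` in place.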
When the button is pressed, call `window.setTimeout` with your intended time and the function `x`, and set the timer again at the end of `x` but this time with a smaller interval. Clear the timeout using `window.clearTimeout` upon release of the mouse button.
Need javascript code for button press and hold
[ "javascript", "mouse" ]
I have huge 3D arrays of numbers in my .NET application. I need to convert them to a 1D array to pass them to a COM library. Is there a way to convert the array without making a copy of all the data?

I can do the conversion like this, but then I use twice the amount of memory, which is an issue in my application:

```
double[] result = new double[input.GetLength(0) * input.GetLength(1) * input.GetLength(2)];

for (i = 0; i < input.GetLength(0); i++)
    for (j = 0; j < input.GetLength(1); j++)
        for (k = 0; k < input.GetLength(2); k++)
            result[i * input.GetLength(1) * input.GetLength(2) + j * input.GetLength(2) + k] = input[i, j, k];

return result;
```
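The flat index used in that copy loop is the standard row-major mapping; a quick sketch (Python used purely for illustration) confirms that `(i, j, k)` and `i*d1*d2 + j*d2 + k` address the same element:

```python
def flat_index(i, j, k, d1, d2):
    """Row-major mapping from (i, j, k) in a d0 x d1 x d2 array
    to a 1D position; d0 is not needed for the arithmetic."""
    return i * d1 * d2 + j * d2 + k

d0, d1, d2 = 2, 3, 4
cube = [[[100 * i + 10 * j + k for k in range(d2)]
         for j in range(d1)] for i in range(d0)]
flat = [cube[i][j][k] for i in range(d0) for j in range(d1) for k in range(d2)]

assert flat[flat_index(1, 2, 3, d1, d2)] == cube[1][2][3]  # both are 123
```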
I don't believe C# stores that data in memory in a way that would make this feasible the same way a simple cast in C would. Why not use a 1D array to begin with, and perhaps make a class for the type so you can access it in your program as if it were a 3D array?
Unfortunately, C# arrays aren't guaranteed to be in contiguous memory like they are in closer-to-the-metal languages like C. So, no. There's no way to convert double[,,] to double[] without an element-by-element copy.
Is it possible to convert C# double[,,] array to double[] without making a copy
[ "c#", ".net", "arrays" ]
I've got a custom handler applied to a class (using the Policy Injection Application Block in entlib 4) and I would like to know whether the input method is a property when Invoke is called. Following is what my handler looks like. ``` [ConfigurationElementType(typeof(MyCustomHandlerData))] public class MyCustomHandler : ICallHandler { public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext) { if (input.MethodBase.IsPublic && (input.MethodBase.Name.Contains("get_") || input.MethodBase.Name.Contains("set_"))) { Console.WriteLine("MyCustomHandler Invoke called with input of {0}", input.MethodBase.Name); } return getNext().Invoke(input, getNext); } public int Order { get; set; } } ``` As you can see from my code sample, the best way I've thought of so far is by parsing the method name. Isn't there a better way to do this?
You can also check that IsSpecialName is true; this will be true for a property accessor (amongst other things).

At the IL level the accessors are exposed as follows (using Environment.ExitCode as an example):

```
.method public hidebysig specialname static int32 get_ExitCode() cil managed
.method public hidebysig specialname static void set_ExitCode(int32 'value') cil managed
```

If you wanted to get fancy you could verify, after extracting the name, that said property exists, but to be honest

```
if (m.IsSpecialName && (m.Attributes & MethodAttributes.HideBySig) != 0)
```

combined with a check that the name starts with get\_ or set\_ should be good even for people using nasty names (faking the hidebysig is easy enough; faking the IsSpecialName would be very tricky).

Nothing is guaranteed though. Someone could emit a class with a set\_Foo method that looked just like a real property setter but wasn't, unless you check whether the property CanRead/CanWrite as well. This strikes me as madness though, since you aren't expecting deliberate circumvention. A simple utility/extension method on MethodInfo which did this logic wouldn't be too hard, and including IsSpecialName would almost certainly cover all your needs.
A couple of you mentioned using the "IsSpecialName" property of the MethodBase type. While it is true that this will return true for property "gets" or "sets", it will also return true for other special members such as event accessors (add\_EventName, remove\_EventName) and operator overloads. So you will need to examine other attributes of the MethodBase instance to determine if it's a property accessor.

Unfortunately, if all you have is a reference to a MethodBase instance (which I believe is the case with intercepting behaviors in the Unity framework), there is no real "clean" way to determine if it's a property setter or getter. The best way I've found is as follows:

C#:

```
bool IsPropertySetter(MethodBase methodBase) {
    return methodBase.IsSpecialName && methodBase.Name.StartsWith("set_");
}

bool IsPropertyGetter(MethodBase methodBase) {
    return methodBase.IsSpecialName && methodBase.Name.StartsWith("get_");
}
```

VB:

```
Private Function IsPropertySetter(methodBase As MethodBase) As Boolean
    Return methodBase.IsSpecialName AndAlso methodBase.Name.StartsWith("set_")
End Function

Private Function IsPropertyGetter(methodBase As MethodBase) As Boolean
    Return methodBase.IsSpecialName AndAlso methodBase.Name.StartsWith("get_")
End Function
```
What's the best way to tell if a method is a property from within Policy Injection?
[ "", "c#", "enterprise-library", "policy-injection", "" ]
I'm trying to incorporate some JavaScript unit testing into my automated build process. Currently JSUnit works well with JUnit, but it seems to be abandonware and lacks good support for Ajax, debugging, and timeouts. Has anyone had any luck automating (with [Ant](https://en.wikipedia.org/wiki/Apache_Ant)) a unit testing library such as [YUI](https://en.wikipedia.org/wiki/Yahoo!_UI_Library) test, jQuery's [QUnit](https://code.jquery.com/qunit/), or [jQUnit](http://code.google.com/p/jqunit/)? Note: I use a custom built Ajax library, so the problem with Dojo's DOH is that it requires you to use their own Ajax function calls and event handlers to work with any Ajax unit testing.
There are many JavaScript unit test frameworks out there (JSUnit, Scriptaculous, ...), but JSUnit is the only one I know of that can be used with an automated build. If you are doing 'true' unit tests you should not need AJAX support. For example, if you are using an [RPC](https://en.wikipedia.org/wiki/Remote_procedure_call) Ajax framework such as DWR, you can easily write a mock function: ``` function mockFunction(someArg, callback) { var result = ...; // Some processing setTimeout( function() { callback(result); }, 300 // Some fake latency ); } ``` And yes, JSUnit does handle timeouts: *[Simulating Time in JSUnit Tests](http://googletesting.blogspot.com/2007/03/javascript-simulating-time-in-jsunit.html)*
I'm just about to start doing JavaScript [TDD](https://en.wikipedia.org/wiki/Test-driven_development) on a new project I am working on. My current plan is to use [QUnit](http://docs.jquery.com/QUnit) to do the unit testing. While developing the tests can be run by simply refreshing the test page in a browser. For continuous integration (and ensuring the tests run in all browsers), I will use [Selenium](http://selenium.openqa.org/) to automatically load the test harness in each browser, and read the result. These tests will be run on every checkin to source control. I am also going to use [JSCoverage](http://siliconforks.com/jscoverage/) to get code coverage analysis of the tests. This will also be automated with Selenium. I'm currently in the middle of setting this up. I'll update this answer with more exact details once I have the setup hammered out. --- Testing tools: * [qunit](http://docs.jquery.com/QUnit) * [JSCoverage](http://siliconforks.com/jscoverage/) * [Selenium](http://selenium.openqa.org/)
Automated unit testing with JavaScript
[ "", "javascript", "jquery", "unit-testing", "ant", "automation", "" ]
It's my understanding that nulls are not indexable in DB2, so assuming we have a huge table (Sales) with a date column (sold\_on) which is normally a date, but is occasionally (10% of the time) null. Furthermore, let's assume that it's a legacy application that we can't change, so those nulls are staying there and mean something (let's say sales that were returned). We can make the following query fast by putting an index on the sold\_on and total columns ``` Select * from Sales where Sales.sold_on between date1 and date2 and Sales.total = 9.99 ``` But an index won't make this query any faster: ``` Select * from Sales where Sales.sold_on is null and Sales.total = 9.99 ``` Because the indexing is done on the value. Can I index nulls? Maybe by changing the index type? Indexing the indicator column?
I'm no DB2 expert, but if 10% of your values are null, I don't think an index on that column alone will ever help your query. 10% is too many to bother using an index for -- it'll just do a table scan. If you were talking about 2-3%, I think it would actually use your index. Think about how many records are on a page/block -- say 20. The reason to use an index is to avoid fetching pages you don't need. The odds that a given page will contain 0 records that are null is (90%)^20, or 12%. Those aren't good odds -- you're going to need 88% of your pages to be fetched anyway, using the index isn't very helpful. If, however, your select clause only included a few columns (and not \*) -- say just salesid, you could probably get it to use an index on (sold\_on,salesid), as the read of the data page wouldn't be needed -- all the data would be in the index.
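The page-probability arithmetic in the answer above is easy to verify; here is a quick sketch using the answer's illustrative numbers (20 rows per page, 10% vs. 2% null fraction).

```python
# Probability that a page of `rows_per_page` rows contains no NULL values,
# given that `null_fraction` of all rows are NULL -- i.e. the fraction of
# pages an index on the NULL rows would let the database skip entirely.
def prob_page_has_no_nulls(null_fraction, rows_per_page=20):
    return (1.0 - null_fraction) ** rows_per_page

skippable_at_10pct = prob_page_has_no_nulls(0.10)  # ~0.12: must still fetch ~88% of pages
skippable_at_2pct = prob_page_has_no_nulls(0.02)   # ~0.67: the index now pays off
```

At 10% nulls the scan must fetch nearly every page anyway, which is the answer's point about the optimizer preferring a table scan.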
From where did you get the impression that DB2 doesn't index NULLs? I can't find anything in documentation or articles supporting the claim. And I just performed a query in a large table using an IS NULL restriction involving an indexed column containing a small fraction of NULLs; in this case, DB2 certainly used the index (verified by an EXPLAIN, and by observing that the database responded instantly instead of spending time to perform a table scan). So: I claim that DB2 has no problem with NULLs in non-primary-key indexes. But as others have written: your data may be composed in a way where DB2 thinks that using an index will not be quicker. Or the database's statistics aren't up to date for the involved table(s).
Indexing nulls for fast searching on DB2
[ "", "sql", "performance", "indexing", "null", "db2", "" ]
I've had a lot of trouble trying to come up with the best way to properly follow TDD principles while developing UI in JavaScript. What's the best way to go about this? Is it best to separate the visual from the functional? Do you develop the visual elements first, and then write tests and then code for functionality?
I've done some TDD with JavaScript in the past, and what I had to do was make the distinction between unit and integration tests. Selenium will test your overall site, with the output from the server, its postbacks, Ajax calls, all of that. But for unit testing, none of that is important. What you want is just the UI you are going to be interacting with, and your script. The tool you'll use for this is basically [JsUnit](https://github.com/pivotal/jsunit), which takes an HTML document with some JavaScript functions on the page and executes them in the context of the page. So what you'll be doing is including the stubbed-out HTML on the page with your functions. From there, you can test the interaction of your script with the UI components in the isolated unit of the mocked HTML, your script, and your tests. That may be a bit confusing, so let's see if we can do a little test. Let's do some TDD, assuming that after a component is loaded, a list of elements is colored based on the content of each LI. **tests.html** ``` <html> <head> <script src="jsunit.js"></script> <script src="mootools.js"></script> <script src="yourcontrol.js"></script> </head> <body> <ul id="mockList"> <li>red</li> <li>green</li> </ul> </body> <script> function testListColor() { assertNotEqual( "red", $$("#mockList li")[0].getStyle("background-color") ); var colorInst = new ColorCtrl( "mockList" ); assertEqual( "red", $$("#mockList li")[0].getStyle("background-color") ); } </script> </html> ``` Obviously TDD is a multi-step process, so for our control, we'll need multiple examples. **yourcontrol.js (step1)** ``` function ColorCtrl( id ) { /* Fail! */ } ``` **yourcontrol.js (step2)** ``` function ColorCtrl( id ) { $$("#" + id + " li").forEach(function(item, index) { item.setStyle("background-color", item.getText()); }); /* Success! */ } ``` You can probably see the pain point here: you have to keep your mock HTML on the page in sync with the structure of what your server controls will be. But it does get you a nice system for TDD'ing with JavaScript.
I've never successfully TDDed UI code. The closest we came was indeed to separate UI code as much as possible from the application logic. This is one reason why the model-view-controller pattern is useful - the model and controller can be TDDed without much trouble and without getting too complicated. In my experience, the view was always left for our user-acceptance tests (we wrote web applications and our UATs used Java's HttpUnit). However, at this level it's really an integration test, without the test-in-isolation property we desire with TDD. Due to this setup, we had to write our controller/model tests/code first, then the UI and corresponding UAT. However, in the Swing GUI code I've been writing lately, I've been writing the GUI code first with stubs to explore my design of the front end, before adding to the controller/model/API. YMMV here though. So to reiterate, the only advice I can give is what you already seem to suspect - separate your UI code from your logic as much as possible and TDD them.
Developing UI in JavaScript using TDD Principles
[ "", "javascript", "user-interface", "tdd", "" ]
What is the preferred way to open a URL from a thick client application on Windows using C# and the .NET framework? I want it to use the default browser.
The following code surely works: ``` Process.Start("http://www.yoururl.com/Blah.aspx"); ``` It opens the default browser (technically, the default program that handles HTTP URIs).
I'd use the [Process.Start method](https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.process.start#System_Diagnostics_Process_Start_System_String_).
How does one load a URL from a .NET client application
[ "", "c#", ".net", "windows", "client", "" ]
Basically I would like to find a way to do something like: ``` <asp:Label ID="lID" runat="server" AssociatedControlID="txtId" Text="<%# MyProperty %>"></asp:Label> ``` I know I could set it from code behind (writing lId.Text = MyProperty), but I'd prefer doing it in the markup and I just can't seem to find the solution. (MyProperty is a string property) cheers
Code expressions are an option as well. These can be used inside of quotes in ASP tags, unlike standard <%= %> tags. The general syntax is: ``` <%$ expressionPrefix: expressionValue %> ``` For example, there is a built-in expression for resources: ``` <%$ resources: ResourceKey %> ``` and one for appSettings: ``` <%$ appSettings: AppSettingsKey %> ``` More info on this here: <http://weblogs.asp.net/infinitiesloop/archive/2006/08/09/The-CodeExpressionBuilder.aspx>
You can do ``` <asp:Label runat="server" Text='<%# MyProperty %>' /> ``` And then a Page.DataBind() in the codebehind.
How to 'bind' Text property of a label in markup
[ "", "c#", "asp.net", "" ]
The **C** preprocessor is justifiably feared and shunned by the C++ community. Inlined functions, consts and templates are usually a safer and superior alternative to a `#define`. The following macro: ``` #define SUCCEEDED(hr) ((HRESULT)(hr) >= 0) ``` is in no way superior to the type safe: ``` inline bool succeeded(int hr) { return hr >= 0; } ``` But macros do have their place; please list the uses you find for macros that you *can't* do without the preprocessor. Please put each use-case in a separate answer so it can be voted up, and if you know how to achieve one of the answers without the preprocessor, point that out in that answer's comments.
As wrappers for debug functions, to automatically pass things like `__FILE__`, `__LINE__`, etc: ``` #ifdef DEBUG #define M_DebugLog( msg ) std::cout << __FILE__ << ":" << __LINE__ << ": " << msg #else #define M_DebugLog( msg ) #endif ``` Since C++20 the magic type [`std::source_location`](https://en.cppreference.com/w/cpp/utility/source_location) can however be used instead of `__LINE__` and `__FILE__` to implement an analogue as a normal function (template).
Methods must always be complete, compilable code; macros may be code fragments. Thus you can define a foreach macro: ``` #define foreach(list, index) for(index = 0; index < list.size(); index++) ``` And use it as thus: ``` foreach(cookies, i) printf("Cookie: %s", cookies[i]); ``` Since C++11, this is superseded by the [range-based for loop](http://en.cppreference.com/w/cpp/language/range-for).
When are C++ macros beneficial?
[ "", "c++", "c-preprocessor", "" ]
The situation is this: * You have a Hibernate context with an object graph that has some lazy loading defined. * You want to use the Hibernate objects in your UI as is without having to copy the data somewhere. * There are different UI contexts that require different amounts of data. * The data is too big to just eager load the whole graph each time. What is the best means to load all the appropriate objects in the object graph in a configurable way so that they can be accessed without having to go back to the database to load more data? Any help would be appreciated.
Let's say you have the Client, and at some point you have to do something with his Orders and maybe he has a Bonus for his Orders. Then I would define a Repository with a fluent interface that allows me to say something like: ``` new ClientRepo().LoadClientBy(id) .WithOrders() .WithBonus() .OrderByName(); ``` And there you have the client with everything you need. It's preferable to know in advance what you will need for the current operation. This way you can avoid unwanted trips to the database. (New devs on your team will usually do this - call a property and not be aware of the fact that it's actually a call to the DB.)
An approach we use in our projects is to create a service for each view you have. The service then fetches the sub-graph you need for this specific view, always trying to reduce the number of SQL statements sent to the database. Therefore we use a lot of joins to get the n:1 associated objects. If you are using a 2-tier desktop app directly connected to the DB, you can just leave the objects attached and load additional data any time automatically. Otherwise you have to reattach the object to the session and initialize the associations you need with `Hibernate.initialize(Object entity, String propertyName)` (from memory, maybe not 100% correct).
What is the best way to load a Hibernate object graph before using it in a UI?
[ "", "java", "hibernate", "" ]
From [PyPubSub](http://pypi.python.org/pypi/PyPubSub/): > Pypubsub provides a simple way for > your Python application to decouple > its components: parts of your > application can publish messages (with > or without data) and other parts can > subscribe/receive them. This allows > message "senders" and message > "listeners" to be unaware of each > other: > > * one doesn't need to import the other > * a sender doesn't need to know > + "who" gets the messages, > + what the listeners will do with the data, > + or even if any listener will get the message data. > * similarly, listeners don't need to worry about where messages come from. > > This is a great tool for implementing > a Model-View-Controller architecture > or any similar architecture that > promotes decoupling of its components. There seem to be quite a few Python modules for publishing/subscribing floating around the web, from PyPubSub, to [PyDispatcher](http://pydispatcher.sourceforge.net/) to simple "home-cooked" classes. Are there specific advantages and disadvantages when comparing different different modules? Which sets of modules have been benchmarked and compared? Thanks in advance
[PyDispatcher](http://pydispatcher.sourceforge.net/) is used heavily in [Django](https://www.djangoproject.com/) and it's working perfectly for me (and for the whole Django community, I guess). As I remember, there are some performance issues: * Argument checking made by PyDispatcher is slow. * Unused connections have unnecessary overhead. AFAIK it's very unlikely you will run into these issues in a small-to-medium sized application. So these issues may not concern you. If you think you need every pound of performance (premature optimization is the root of all evil!), you can look at the modifications done to PyDispatcher in Django. Hope this helps.
The best dispatch package for python seems to be the [dispatch module](http://github.com/django/django/tree/master/django/dispatch/) inside django (called [signals](http://docs.djangoproject.com/en/dev/topics/signals/) in the documentation). It is independent of the rest of django, and is short, documented, tested and very well written. **Edit:** I forked this project into an [independent signal project](https://github.com/olivierverdier/dispatch) for Python.
Recommended Python publish/subscribe/dispatch module?
[ "", "python", "publish-subscribe", "pypubsub", "" ]
Referenced [here](https://stackoverflow.com/questions/21265/comparing-ieee-floats-and-doubles-for-equality) and [here](https://stackoverflow.com/questions/17333/most-effective-way-for-float-and-double-comparison)...Why would I use two's complement over an epsilon method? It seems like the epsilon method would be good enough for most cases. --- **Update:** I'm purely looking for a theoretical reason why you'd use one over the other. I've always used the epsilon method. Has anyone used the 2's complement comparison successfully? Why? Why Not?
The second link you reference mentions an article that has quite a long description of the issue: <http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm> But unless you are tweaking performance, I would stick with epsilon so people can debug your code.
In short, when comparing two floats with unknown origins, picking an epsilon that is valid is almost impossible. For example: What is a good epsilon when comparing distance in miles between Atlanta GA, Dallas TX and some place in Ohio? What is a good epsilon when comparing distance in miles between my left foot, my right foot and the computer under my desk? **EDIT:** Ok, I'm getting a fair number of people not understanding why you wouldn't know what your epsilon is. Back in the old days of lore, I wrote two programs that worked with Neverwinter Nights (a game made by BioWare). One of the programs took a binary model and converted it to ASCII. The other program took an ASCII model and compiled it into binary. One of the tests I wrote was to take all of BioWare's binary models, decompile them to ASCII and then back to binary. Then I compared my binary version with the original one from BioWare. One of the problems during the comparison was dealing with some of the slight variances in floating point values. So instead of coming up with a bunch of different EPSILONS for each type of floating point number (vertex, normal, etc), I wanted to use something such as this two's complement compare. Thus avoiding the whole multiple EPSILON issue. The same type of issue can apply to any type of software that processes 3rd party data and then needs to validate their results with the original. In these cases you might not even know what the floating point values represent, you just have to compare them. We ran into this issue with our industrial automation software.
Sure, you can do some dynamic epsilon tricks, but that is the whole point of the integer compare (which does NOT require the integers be exact). The integer compare is able to compare two floats using an epsilon relative to the magnitude of the numbers. **EDIT** Steve, let's look at what you said in the comments: "But you know what equality means to you... Hence, you should be able to find an appropriate epsilon". Turn this statement around to say: "If you know what equality means to you, then you should be able to find an appropriate epsilon." The whole point of what I am trying to say is that there are applications where we don't know what equality means in the absolute sense, thus we have to resort to a relative compare, which is what the integer version is trying to do.
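The 1e23 / 1e-23 example above can be demonstrated directly. This sketch is not the integer (two's complement / ULP) trick itself, which needs bit-level access to the float representation; it shows the relative-tolerance behaviour that trick approximates, next to a fixed absolute epsilon (the 1e-6 and 1e-3 values are arbitrary illustrations).

```python
import math

def absolute_close(a, b, eps=1e-6):
    # one-size-fits-all absolute epsilon
    return abs(a - b) <= eps

def relative_close(a, b, rel=1e-3):
    # tolerance scaled by the magnitude of the operands
    return abs(a - b) <= rel * max(abs(a), abs(b))

big_abs = absolute_close(1e23, 1.0001e23)    # False: they differ by 1e19, though equal to ~4 digits
big_rel = relative_close(1e23, 1.0001e23)    # True
small_abs = absolute_close(1e-23, 5.2e-23)   # True: but the values differ by a factor of five!
small_rel = relative_close(1e-23, 5.2e-23)   # False
stdlib_rel = math.isclose(1e23, 1.0001e23, rel_tol=1e-3)  # the stdlib encodes the same idea
```

The fixed epsilon gets both extreme cases wrong; the relative compare gets both right, which is what the magnitude-aware integer comparison buys you.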
Why would I use 2's complement to compare two doubles instead of comparing their differences against an epsilon value?
[ "", "c++", "floating-point", "double", "" ]
I create a web application (WAR) and deploy it on Tomcat. In the *webapp* there is a page with a form where an administrator can enter some configuration data. I don't want to store this data in an DBMS, but just in an XML file on the file system. Where to put it? I would like to put the file somewhere in the directory tree where the application itself is deployed. Should my configuration file be in the *WEB-INF* directory? Or put it somewhere else? And what is the Java code to use in a servlet to find the absolute path of the directory? Or can it be accessed with a relative path?
What we do is to put it in a separate directory on the server (you could use something like /config, /opt/config, /root/config, /home/username/config, or anything you want). When our servlets start up, they read the XML file, get a few things out of it (most importantly DB connection information), and that's it. I asked about why we did this once. It would be nice to store everything in the DB, but obviously you can't store DB connection information in the DB. You could hardcode things in the code, but that's ugly for many reasons. If the info ever has to change you have to rebuild the code and redeploy. If someone gets a copy of your code or your WAR file they would then get that information. Putting things in the WAR file seems nice, but if you want to change things much it could be a bad idea. The problem is that if you have to change the information, then the next time you redeploy it will overwrite the file, so anything you didn't remember to change in the version getting built into the WAR gets forgotten. The file in a special place on the file system works quite well for us. It doesn't have any big downsides. You know where it is, it's stored separately, and it makes deploying to multiple machines easy if they all need different config values (since it's not part of the WAR). The only other solution I can think of that would work well would be keeping everything in the DB except the DB login info. That would come from Java system properties that are retrieved through the JVM. This is the Preferences API thing mentioned by Hans Doggen above. I don't think it was around when our application was first developed; if it was, it wasn't used. As for the path for accessing the configuration file, it's just a file on the filesystem. You don't need to worry about the web path. So when your servlet starts up it just opens the file at "/config/myapp/config.xml" (or whatever) and it will find the right thing. Just hardcoding the path for this one seems pretty harmless to me.
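The startup pattern described above (read a config file from a fixed external path once, fail fast if it is missing) is language-agnostic; here it is sketched in Python rather than servlet Java for brevity. The path, element names and attribute keys are hypothetical stand-ins for whatever your deployment uses.

```python
import os
import xml.etree.ElementTree as ET

CONFIG_PATH = "/config/myapp/config.xml"  # hypothetical fixed deployment location

def load_config(path=CONFIG_PATH):
    """Read the external config once at startup; fail fast if it is missing."""
    if not os.path.isfile(path):
        raise FileNotFoundError("expected deployment config at " + path)
    root = ET.parse(path).getroot()
    # assumed layout: <config><db url="..." user="..."/></config>
    db = root.find("db")
    return {"db_url": db.get("url"), "db_user": db.get("user")}
```

Because the file lives outside the deployed artifact, redeploying never overwrites it, which is the main advantage the answer describes.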
WEB-INF is a good place to put your config file. Here's some code to get the absolute path of the directory from a servlet. ``` public void init(ServletConfig servletConfig) throws ServletException{ super.init(servletConfig); String path = servletConfig.getServletContext().getRealPath("/WEB-INF"); } ```
What is the best place to store a configuration file in a Java web application (WAR)?
[ "", "java", "web-applications", "tomcat", "jakarta-ee", "" ]
Since the Google App Engine Datastore is based on [Bigtable](http://research.google.com/archive/bigtable.html) and we know that's not a relational database, how do you design a ***database schema*/*data model*** for applications that use this type of database system?
Designing a bigtable schema is an open process, and basically requires you to think about: * The access patterns you will be using and how often each will be used * The relationships between your types * What indices you are going to need * The write patterns you will be using (in order to effectively spread load) GAE's datastore automatically denormalizes your data. That is, each index contains a (mostly) complete copy of the data, and thus every index adds significantly to time taken to perform a write, and the storage space used. If this were not the case, designing a Datastore schema would be a lot more work: You would have to think carefully about the primary key for each type, and consider the effect of your decision on the locality of data. For example, when rendering a blog post you would probably need to display the comments to go along with it, so each comment's key would probably begin with the associated post's key. With Datastore, this is not such a big deal: The query you use will look something like "Select \* FROM Comment WHERE post\_id = N." (If you want to page the comments, you would also have a limit clause, and a possible suffix of " AND comment\_id > last\_comment\_id".) Once you add such a query, Datastore will build the index for you, and your reads will be magically fast. Something to keep in mind is that each additional index creates some additional cost: it is best if you can use as few access patterns as possible, since it will reduce the number of indices GAE will construct, and thus the total storage required by your data. Reading over this answer, I find it a little vague. Maybe a hands-on design question would help to scope this down? :-)
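The comment-paging query in the answer above ("SELECT * FROM Comment WHERE post_id = N ... AND comment_id > last_comment_id") relies on keys being sorted so that a post's comments sit contiguously. This is a rough illustration of that access pattern with an ordered list of composite keys, not Datastore's actual API:

```python
import bisect

# Rows kept sorted by composite key (post_id, comment_id): all comments of a
# post are contiguous, so one page is one contiguous scan -- the locality
# property the answer describes.
comments = sorted([
    (1, 10, "first!"), (1, 11, "nice"), (1, 12, "+1"),
    (2, 10, "other post"),
])
keys = [(p, c) for p, c, _ in comments]

def page_of_comments(post_id, after_comment_id=-1, limit=2):
    # "SELECT * FROM Comment WHERE post_id = :p AND comment_id > :c LIMIT :n"
    start = bisect.bisect_right(keys, (post_id, after_comment_id))
    return [r for r in comments[start:start + limit] if r[0] == post_id]
```

Each page is answered by one binary search plus a short scan, which is why the composite index makes reads "magically fast" once it exists.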
You can use www.web2py.com. You build the model and the application once and it works on GAE, but also with SQLite, MySQL, Postgres, Oracle, MSSQL, and FireBird.
How do you design data models for Bigtable/Datastore (GAE)?
[ "", "python", "database", "google-app-engine", "bigtable", "" ]
A `.container` can contain many `.components`, and `.components` themselves can contain `.containers` (which in turn can contain .components etc. etc.) Given code like this: ``` $(".container .component").each(function(){ $(".container", this).css('border', '1px solid #f00'); }); ``` What do I need to add to the line within the braces to select only the nested `.containers` that have their `width` in CSS set to `auto`? I'm sure it's something simple, but I haven't really used jQuery all that much.
``` $(".container .component").each(function() { $(".container", this).each(function() { if($(this).css('width') == 'auto') { $(this).css('border', '1px solid #f00'); } }); }); ``` Similar to the other answer but since components can also have multiple containers, also needs the .each() check in here too for the width.
You may want to look into `.filter()`. Something like: ``` $('.container .component .container') .filter(function() {return $(this).css('width') == 'auto';}) .css({border: '1px solid #f00'}); ```
Can jQuery select by CSS rule, not class?
[ "", "javascript", "jquery", "css-selectors", "" ]
I am having a bit of a problem. I get a RAW char\* buffer from a camera and I need to add EXIF tags before I can save it to disk. Writing the file to disk and reading it back again is not an option, as this will happen thousands of times. The buffer data I receive from the camera does not contain any EXIF information, apart from the Width, Height and Pixels per Inch. Any ideas? (C++)
Look at this [PDF](http://www.exif.org/Exif2-1.PDF); on page 20 you have a diagram showing where to place or modify your EXIF information. What is the difference with a file on disk? Does the JPEG buffer of your camera contain an EXIF section already?
What's the difference? Why would doing it to a file on the disk be any different from doing it in memory? Just do whatever it is you do after you read the file from the disk..
How do you place EXIF tags into a JPG, having the raw jpeg buffer in C++?
[ "", "c++", "jpeg", "exif", "" ]
I am looking for a PHP blog engine which needs to be easy to redesign (CSS, HTML). It also needs to be free and have a simple user interface so that the client doesn't struggle to add posts. Any suggestions?
[Wordpress](http://www.wordpress.org) - I keep trying other blogs and I keep going back to wordpress. It's definitely the easiest I've used for customizing templates, and the admin UI is very nice.
I kinda like [b2evo](http://b2evolution.net/). We used it on [our site](http://blogs.lessthandot.com/) and modded it to great effect.
Simple, free PHP blog engine easy to redesign?
[ "", "php", "blog-engine", "" ]
I want to grab the value of a hidden input field in HTML. ``` <input type="hidden" name="fooId" value="12-3456789-1111111111" /> ``` I want to write a regular expression in Python that will return the value of `fooId`, given that I know the line in the HTML follows the format ``` <input type="hidden" name="fooId" value="**[id is here]**" /> ``` Can someone provide an example in Python to parse the HTML for the value?
For this particular case, BeautifulSoup is harder to write than a regex, but it is much more robust... I'm just contributing with the BeautifulSoup example, given that you already know which regexp to use :-) ``` from BeautifulSoup import BeautifulSoup #Or retrieve it from the web, etc. html_data = open('/yourwebsite/page.html','r').read() #Create the soup object from the HTML data soup = BeautifulSoup(html_data) fooId = soup.find('input', attrs={'name': 'fooId', 'type': 'hidden'}) #Find the proper tag; 'name' goes in attrs because find()'s first argument is the tag name value = fooId.attrs[2][1] #The value of the third attribute of the desired tag #or index it directly via fooId['value'] ```
I agree with Vinko: [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) is the way to go. However I suggest using `fooId['value']` to [get the attribute](http://www.crummy.com/software/BeautifulSoup/documentation.html#The%20attributes%20of%20Tags) rather than relying on value being the third attribute. ``` from BeautifulSoup import BeautifulSoup #Or retrieve it from the web, etc. html_data = open('/yourwebsite/page.html','r').read() #Create the soup object from the HTML data soup = BeautifulSoup(html_data) fooId = soup.find('input', attrs={'name': 'fooId', 'type': 'hidden'}) #Find the proper tag; 'name' goes in attrs because find()'s first argument is the tag name value = fooId['value'] #The value attribute ```
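Since the question explicitly asked for a regular expression, here is one that handles the exact single-line format given. It is deliberately narrow (fixed attribute order, double quotes); if the markup can vary at all, the BeautifulSoup approach in the answers above is the safer bet.

```python
import re

html = '<input type="hidden" name="fooId" value="12-3456789-1111111111" />'

# Capture whatever sits between the quotes of the value attribute.
match = re.search(r'<input\s+type="hidden"\s+name="fooId"\s+value="([^"]*)"', html)
foo_id = match.group(1) if match else None
```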
Python regular expression for HTML parsing
[ "", "python", "regex", "web-scraping", "beautifulsoup", "" ]
If you have binary strings (literally String objects that contain only 1's and 0's), how would you output them as bits into a file? This is for a text compressor I was working on; it's still bugging me, and it'd be nice to finally get it working. Thanks!
Easiest is to simply take 8 consecutive characters, turn them into a byte and output that byte. Pad with zeros at the end if you can recognize the end-of-stream, or add a header with length (in bits) at the beginning of the file. The inner loop would look something like: ``` byte[] buffer = new byte[ ( string.length() + 7 ) / 8 ]; for ( int i = 0; i < buffer.length; ++i ) { byte current = 0; for ( int j = 7; j >= 0; --j ) { int k = i * 8 + j; if ( k < string.length() && string.charAt( k ) == '1' ) current |= 1 << j; } buffer[ i ] = current; // or write the byte straight to the stream } ``` You'll need to make some adjustments (bit order, actually writing the bytes out), but that's the general idea.
If you're lucky, java.math.BigInteger may do everything for you. ``` String s = "11001010001010101110101001001110"; byte[] bytes = (new java.math.BigInteger(s, 2)).toByteArray(); ``` This does depend on the byte order (big-endian) and right-aligning (if the number of bits is not a multiple of 8) being what you want but it may be simpler to modify the array afterwards than to do the character conversion yourself.
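For comparison, Python's arbitrary-precision int offers the same shortcut as Java's BigInteger, with the byte count made explicit:

```python
s = "11001010001010101110101001001110"

# int(s, 2) parses the bit string; to_bytes() packs it big-endian.
# (len(s) + 7) // 8 rounds up, so a length that is not a multiple of 8
# gets zero bits padded on the left (i.e. right-aligned), the same
# alignment caveat the BigInteger answer mentions.
packed = int(s, 2).to_bytes((len(s) + 7) // 8, byteorder="big")
```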
Bit manipulation and output in Java
[ "", "java", "bit-manipulation", "" ]
What would be better practice when giving a function the original variable to work with: ``` unsigned long x = 4; void func1(unsigned long& val) { val = 5; } func1(x); ``` or: ``` void func2(unsigned long* val) { *val = 5; } func2(&x); ``` IOW: Is there any reason to pick one over another?
My rule of thumb is: Use pointers if you want to do pointer arithmetic with them (e.g. incrementing the pointer address to step through an array) or if you ever have to pass a NULL-pointer. Use references otherwise.
I really think you will benefit from establishing the following function calling coding guidelines: 1. As in all other places, always be `const`-correct. * Note: This means, among other things, that only out-values (see item 3) and values passed by value (see item 4) can lack the `const` specifier. 2. Only pass a value by pointer if the value 0/NULL is a valid input in the current context. * Rationale 1: As **a caller**, you see that whatever you pass in *must be* in a usable state. * Rationale 2: As **called**, you know that whatever comes in *is* in a usable state. Hence, no NULL-check or error handling needs to be done for that value. * Rationale 3: Rationales 1 and 2 will be *compiler enforced*. Always catch errors at compile time if you can. 3. If a function argument is an out-value, then pass it by reference. * Rationale: We don't want to break item 2... 4. Choose "pass by value" over "pass by const reference" only if the value is a POD ([Plain old Datastructure](https://stackoverflow.com/questions/146452/what-are-pod-types-in-c)) or small enough (memory-wise) or in other ways cheap enough (time-wise) to copy. * Rationale: Avoid unnecessary copies. * Note: *small enough* and *cheap enough* are not absolute measurables.
Should I use a pointer or a reference to remotely assign a variable?
[ "", "c++", "variables", "pointers", "reference", "" ]
I'm looking to have text display vertically, first letter at the bottom, last letter at the top, within a JLabel. Is this possible?
I found this page: <http://www.java2s.com/Tutorial/Java/0240__Swing/VerticalLabelUI.htm> when I needed to do that. I don't know if you want the letters 'standing' on each other or all rotated on their side. ``` /* * The contents of this file are subject to the Sapient Public License * Version 1.0 (the "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * http://carbon.sf.net/License.html. * * Software distributed under the License is distributed on an "AS IS" basis, * WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for * the specific language governing rights and limitations under the License. * * The Original Code is The Carbon Component Framework. * * The Initial Developer of the Original Code is Sapient Corporation * * Copyright (C) 2003 Sapient Corporation. All Rights Reserved. */ import java.awt.Dimension; import java.awt.FontMetrics; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Insets; import java.awt.Rectangle; import java.awt.geom.AffineTransform; import javax.swing.Icon; import javax.swing.JComponent; import javax.swing.JLabel; import javax.swing.plaf.basic.BasicLabelUI; /** * This is the template for Classes. 
* * * @since carbon 1.0 * @author Greg Hinkle, January 2002 * @version $Revision: 1.4 $($Author: dvoet $ / $Date: 2003/05/05 21:21:27 $) * @copyright 2002 Sapient */ public class VerticalLabelUI extends BasicLabelUI { static { labelUI = new VerticalLabelUI(false); } protected boolean clockwise; public VerticalLabelUI(boolean clockwise) { super(); this.clockwise = clockwise; } public Dimension getPreferredSize(JComponent c) { Dimension dim = super.getPreferredSize(c); return new Dimension( dim.height, dim.width ); } private static Rectangle paintIconR = new Rectangle(); private static Rectangle paintTextR = new Rectangle(); private static Rectangle paintViewR = new Rectangle(); private static Insets paintViewInsets = new Insets(0, 0, 0, 0); public void paint(Graphics g, JComponent c) { JLabel label = (JLabel)c; String text = label.getText(); Icon icon = (label.isEnabled()) ? label.getIcon() : label.getDisabledIcon(); if ((icon == null) && (text == null)) { return; } FontMetrics fm = g.getFontMetrics(); paintViewInsets = c.getInsets(paintViewInsets); paintViewR.x = paintViewInsets.left; paintViewR.y = paintViewInsets.top; // Use inverted height & width paintViewR.height = c.getWidth() - (paintViewInsets.left + paintViewInsets.right); paintViewR.width = c.getHeight() - (paintViewInsets.top + paintViewInsets.bottom); paintIconR.x = paintIconR.y = paintIconR.width = paintIconR.height = 0; paintTextR.x = paintTextR.y = paintTextR.width = paintTextR.height = 0; String clippedText = layoutCL(label, fm, text, icon, paintViewR, paintIconR, paintTextR); Graphics2D g2 = (Graphics2D) g; AffineTransform tr = g2.getTransform(); if (clockwise) { g2.rotate( Math.PI / 2 ); g2.translate( 0, - c.getWidth() ); } else { g2.rotate( - Math.PI / 2 ); g2.translate( - c.getHeight(), 0 ); } if (icon != null) { icon.paintIcon(c, g, paintIconR.x, paintIconR.y); } if (text != null) { int textX = paintTextR.x; int textY = paintTextR.y + fm.getAscent(); if (label.isEnabled()) { 
paintEnabledText(label, g, clippedText, textX, textY); } else { paintDisabledText(label, g, clippedText, textX, textY); } } g2.setTransform( tr ); } } ```
You can do it by messing with the paint command, sort of like this: ``` public class JVertLabel extends JComponent{ private String text; public JVertLabel(String s){ text = s; } public void paintComponent(Graphics g){ super.paintComponent(g); Graphics2D g2d = (Graphics2D)g; // Move the origin to the bottom-left corner first; otherwise the rotated text is drawn outside the visible area of the component. g2d.translate(0, getHeight()); g2d.rotate(Math.toRadians(270.0)); g2d.drawString(text, 0, g2d.getFontMetrics().getAscent()); } } ```
How do I present text vertically in a JLabel ? (Java 1.6)
[ "", "java", "jlabel", "" ]
I'd like to know how to grab the Window title of the current active window (i.e. the one that has focus) using C#.
See example on how you can do this with full source code here: <http://www.csharphelp.com/2006/08/get-current-window-handle-and-caption-with-windows-api-in-c/> ``` [DllImport("user32.dll")] static extern IntPtr GetForegroundWindow(); [DllImport("user32.dll")] static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int count); private string GetActiveWindowTitle() { const int nChars = 256; StringBuilder Buff = new StringBuilder(nChars); IntPtr handle = GetForegroundWindow(); if (GetWindowText(handle, Buff, nChars) > 0) { return Buff.ToString(); } return null; } ``` --- **Edited** with @Doug McClean comments for better correctness.
If you were talking about WPF then use: ``` Application.Current.Windows.OfType<Window>().SingleOrDefault(w => w.IsActive); ```
How do I get the title of the current active window using c#?
[ "", "c#", ".net", "windows", "winforms", "" ]
Running into a problem where on certain servers we get an error that the directory name is invalid when using Path.GetTempFileName. Further investigation shows that it is trying to write a file to C:\Documents and Settings\computername\aspnet\local settings\temp (found by using Path.GetTempPath). This folder exists so I'm assuming this must be a permissions issue with respect to the asp.net account. I've been told by some that Path.GetTempFileName should be pointing to C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files. I've also been told that this problem may be due to the order in which IIS and .NET were installed on the server. I've done the typical 'aspnet\_regiis -i' and checked security on the folders etc. At this point I'm stuck. Can anyone shed some light on this? **Update:** Turns out that providing 'IUSR\_ComputerName' access to the folder does the trick. Is that the correct procedure? I don't seem to recall doing that in the past, and obviously, want to follow best practices to maintain security. This is, after all, part of a file upload process.
This is probably a combination of impersonation and a mismatch of different authentication methods occurring. There are many pieces; I'll try to go over them one by one. **Impersonation** is a technique to "temporarily" switch the user account under which a thread is running. Essentially, the thread briefly gains the same rights and access -- no more, no less -- as the account that is being impersonated. As soon as the thread is done creating the web page, it "reverts" back to the original account and gets ready for the next call. This technique is used to access resources that only the user logged into your web site has access to. Hold onto the concept for a minute. Now, by default ASP.NET runs a web site under a local account called **ASPNET**. Again, by default, only the ASPNET account and members of the Administrators group can write to that folder. Your temporary folder is under that account's purview. This is the second piece of the puzzle. Impersonation doesn't happen on its own. It needs to be turned on intentionally in your web.config. ``` <identity impersonate="true" /> ``` If the setting is missing or set to false, your code will execute purely and simply under the ASPNET account mentioned above. Given your error message, I'm positive that you have impersonation=true. There is nothing wrong with that! Impersonation has advantages and disadvantages that go beyond this discussion. There is one question left: when you use impersonation, *which account gets impersonated*? Unless you specify the account in the web.config ([full syntax of the identity element here](http://msdn.microsoft.com/en-us/library/72wdk8cc.aspx)), the account impersonated is the one that IIS handed over to ASP.NET. And that depends on how the user has authenticated (or not) into the site. That is your third and final piece. The IUSR\_ComputerName account is a low-rights account created by IIS.
By default, this account is the account under which a web call runs **if the user could not be authenticated**. That is, the user comes in as an "anonymous". In summary, this is what is happening to you: Your user is trying to access the web site, and IIS could not authenticate the person for some reason. Because Anonymous access is ON (or you would not see IUSR\_ComputerName accessing the temp folder), IIS allows the user in anyway, but as a generic user. Your ASP.NET code runs and impersonates this generic IUSR\_ComputerName "guest" account; only now the code doesn't have access to the things that the ASPNET account had access to, including its own temporary folder. Granting IUSR\_ComputerName WRITE access to the folder makes your symptoms go away. But that's just the symptoms. You need to review **why is the person coming in as "Anonymous/Guest"?** There are two likely scenarios: a) You intended to use IIS for authentication, but the authentication settings in IIS for some of your servers are wrong. In that case, you need to disable Anonymous access on those servers so that the usual authentication mechanisms take place. Note that you might still need to grant your users access to that temporary folder, or use another folder instead, one to which your users already have access. I have worked with this scenario many times, and quite frankly it gives you fewer headaches to forgo the Temp folder; create a dedicated folder in the server, set the proper permissions, and set its location in web.config. b) You didn't want to authenticate people anyway, or you wanted to use ASP.NET Forms Authentication (which uses IIS's Anonymous access to bypass checks in IIS and lets ASP.NET handle the authentication directly) This case is a bit more complicated. You should go to IIS and disable all forms of authentication other than "Anonymous Access". Note that you can't do that in the developer's box, because the debugger needs Integrated Authentication to be enabled.
So your debugging box will behave a bit differently from the real server; just be aware of that. Then, you need to decide whether you should turn impersonation OFF, or, conversely, specify the account to impersonate in the web.config. Do the former if your web server doesn't need outside resources (like a database). Do the latter if your web site does need to run under an account that has access to a database (or some other outside resource). You have two more alternatives to specify the account to impersonate. One, you could go to IIS and change the "anonymous" account to be one with access to the resource instead of the one IIS manages for you. The second alternative is to stash the account and password, encrypted, in the registry. That step is a bit complicated and also goes beyond the scope of this discussion. Good luck!
I encountered this error while diagnosing a console app that was writing temp files. In one of my test iterations I purged all the files/directories in temp for a 'clean-slate' run. I resolved this self-inflicted issue by logging out and back in again.
Path.GetTempFileName -- Directory name is invalid
[ "", "c#", "asp.net", "iis", "" ]
If you are relying on an implementation of Python that has a Global Interpreter Lock (i.e. CPython) and writing multithreaded code, do you really need locks at all? If the GIL doesn't allow multiple instructions to be executed in parallel, wouldn't shared data be unnecessary to protect? Sorry if this is a dumb question, but it is something I have always wondered about Python on multi-processor/core machines. The same thing would apply to any other language implementation that has a GIL.
You will still need locks if you share state between threads. The GIL only protects the interpreter internally. You can still have inconsistent updates in your own code. For example: ``` #!/usr/bin/env python import threading shared_balance = 0 class Deposit(threading.Thread): def run(self): for _ in xrange(1000000): global shared_balance balance = shared_balance balance += 100 shared_balance = balance class Withdraw(threading.Thread): def run(self): for _ in xrange(1000000): global shared_balance balance = shared_balance balance -= 100 shared_balance = balance threads = [Deposit(), Withdraw()] for thread in threads: thread.start() for thread in threads: thread.join() print shared_balance ``` Here, your code can be interrupted between reading the shared state (`balance = shared_balance`) and writing the changed result back (`shared_balance = balance`), causing a lost update. The result is a random value for the shared state. To make the updates consistent, run methods would need to lock the shared state around the read-modify-write sections (inside the loops) or have [some way to detect when the shared state had changed since it was read](http://en.wikipedia.org/wiki/Software_transactional_memory).
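A sketch of the same program made consistent with an explicit `threading.Lock` (the iteration count is reduced for brevity; written to run under both Python 2 and 3):

```python
import threading

shared_balance = 0
lock = threading.Lock()

def deposit(iterations):
    global shared_balance
    for _ in range(iterations):
        with lock:  # the read-modify-write below is now atomic with respect to other threads
            shared_balance += 100

def withdraw(iterations):
    global shared_balance
    for _ in range(iterations):
        with lock:
            shared_balance -= 100

threads = [threading.Thread(target=deposit, args=(100000,)),
           threading.Thread(target=withdraw, args=(100000,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_balance)  # always 0: no lost updates
```

Because the lock spans the whole read-modify-write, the interleaving that loses updates in the unlocked version can no longer occur.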
No - the GIL just protects python internals from multiple threads altering their state. This is a very low-level of locking, sufficient only to keep python's own structures in a consistent state. It doesn't cover the *application* level locking you'll need to do to cover thread safety in your own code. The essence of locking is to ensure that a particular *block* of code is only executed by one thread. The GIL enforces this for blocks the size of a single bytecode, but usually you want the lock to span a larger block of code than this.
Are locks unnecessary in multi-threaded Python code because of the GIL?
[ "", "python", "multithreading", "locking", "" ]
This question is the other side of the question asking, "[How do I calculate relative time?](https://stackoverflow.com/questions/11/how-do-i-calculate-relative-time)". Given some human input for a relative time, how can you parse it? By default you would offset from `DateTime.Now()`, but could optionally offset from another `DateTime`. (Prefer answers in C#) Example input: * "in 20 minutes" * "5 hours ago" * "3h 2m" * "next week" **Edit:** Let's suppose we can define some limits on the input. This sort of code would be a useful thing to have out on the web.
That's building a DSL (Domain Specific Language) for date handling. I don't know if somebody has done one for .NET, but the construction of a DSL is fairly straightforward: 1. Define the language precisely: which input forms you will accept and what you will do with ambiguities 2. Construct the grammar for the language 3. Build the finite state machine that parses your language into an actionable AST You can do all that by yourself (with the help of [the Dragon Book](http://en.wikipedia.org/wiki/Compilers:_Principles,_Techniques,_and_Tools), for instance) or with the help of tools to that effect, as shown in this [link](http://www.codeproject.com/KB/recipes/YourFirstDSL.aspx). Just by thinking hard about the possibilities, you have a good chance, with the help of good UI examples, of covering more than half of the actual inputs your application will receive. If you aim to accept everything a human could possibly type, you can record the inputs flagged as ambiguous and then add them to the grammar whenever they can be interpreted, as some things will be inherently ambiguous.
A Google search turns up the [parsedatetime](http://code.google.com/p/parsedatetime/) library (associated with the [Chandler project](http://chandlerproject.org/)), which is designed to do exactly this. It's open source (Apache License) and written in Python. It seems to be quite sophisticated -- from the homepage: > parsedatetime is able to parse, for > example, the following: > > ``` > * Aug 25 5pm > * 5pm August 25 > * next saturday > ... > * tomorrow > * next thursday at 4pm > * at 4pm > * eod > * in 5 minutes > * 5 minutes from now > * 5 hours before now > * 2 days from tomorrow > ``` Since it's implemented in pure Python and doesn't use anything fancy, there's a good chance it's compatible with [IronPython](http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython), so you could use it with .net. If you want specifically a C# solution, you could write something based on the algorithms they use... It also comes with a whole bunch of unit tests.
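Since the recommended library is Python anyway, here is a toy, regex-based sketch of the kind of parsing involved. It is purely illustrative (the `parse_relative` function and its grammar are made up, nowhere near parsedatetime's coverage):

```python
import re
from datetime import datetime, timedelta

# Map accepted unit words to timedelta keyword names.
UNITS = {'minute': 'minutes', 'minutes': 'minutes', 'm': 'minutes',
         'hour': 'hours', 'hours': 'hours', 'h': 'hours',
         'day': 'days', 'days': 'days', 'd': 'days'}

def parse_relative(text, base=None):
    """Parse strings like 'in 20 minutes' or '5 hours ago' relative to base."""
    base = base or datetime.now()
    m = re.match(r'in (\d+) (\w+)$', text)
    if m:
        return base + timedelta(**{UNITS[m.group(2)]: int(m.group(1))})
    m = re.match(r'(\d+) (\w+) ago$', text)
    if m:
        return base - timedelta(**{UNITS[m.group(2)]: int(m.group(1))})
    raise ValueError('unrecognized: %r' % text)

base = datetime(2008, 9, 1, 12, 0)
print(parse_relative('in 20 minutes', base))  # 2008-09-01 12:20:00
print(parse_relative('5 hours ago', base))    # 2008-09-01 07:00:00
```

Inputs like "next week" need a calendar-aware grammar rather than simple offsets, which is where a real library earns its keep.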
How to parse relative time?
[ "", "c#", "parsing", "time", "language-agnostic", "" ]
I am using CodeDom to generate dynamic code based on user values. One of those values controls what the name of the class I'm generating is. I know I could sanitize the name based on language rules about valid class names using regular expressions, but I'd like to know if there is a specific method built into the framework to validate and/or sanitize a class name.
An easy way to determine if a string is a valid identifier for a class or variable is to call the static method ``` System.CodeDom.Compiler.CodeGenerator.IsValidLanguageIndependentIdentifier(string value) ```
Use the `CreateValidIdentifier` method on the `CSharpCodeProvider` class. ``` CSharpCodeProvider codeProvider = new CSharpCodeProvider(); string sFixedName = codeProvider.CreateValidIdentifier("somePossiblyInvalidName"); CodeTypeDeclaration codeType = new CodeTypeDeclaration(sFixedName); ``` It returns a valid name given some input. If you just want to validate the name and not fix it, compare the input and output. It won't alter valid input so the output will be equivalent.
Is there a .NET function to validate a class name?
[ "", "c#", ".net", "validation", "class", "naming", "" ]
I am developing a Win32 application and I would like to use an RSA encryption library. **Which library would you recommend?**
I use the following library: <http://www.efgh.com/software/rsa.htm> It's public domain, compact, self contained, and does the work well.
If you're using Win32, why don't you simply use the built-in win32 crypto-API? Here's a little example how it works in practice: <http://www.codeproject.com/KB/security/EncryptionCryptoAPI.aspx>
RSA encryption library for C++
[ "", "c++", "winapi", "encryption", "rsa", "" ]
I've been using a small class to emulate Enums in some Python projects. Is there a better way or does this make the most sense for some situations? Class code here: ``` class Enum(object): '''Simple Enum Class Example Usage: >>> codes = Enum('FOO BAR BAZ') # codes.BAZ will be 2 and so on ...''' def __init__(self, names): for number, name in enumerate(names.split()): setattr(self, name, number) ```
There's a lot of good discussion [here](https://stackoverflow.com/questions/36932/whats-the-best-way-to-implement-an-enum-in-python).
Enums have been proposed for inclusion into the language before, but were rejected (see [http://www.python.org/dev/peps/pep-0354/](http://www.python.org/dev/peps/pep-0354/ "PEP 354")), though there are existing packages you could use instead of writing your own implementation: * enum: [http://pypi.python.org/pypi/enum](http://pypi.python.org/pypi/enum "enum") * SymbolType (not quite the same as enums, but still useful): [http://pypi.python.org/pypi/SymbolType](http://pypi.python.org/pypi/SymbolType "SymbolType") * [Or just do a search](http://pypi.python.org/pypi?:action=search&term=enum&submit=search)
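For reference, a slightly extended sketch of the pattern from the question, adding reverse lookup of names (the `name_of` helper is my own addition, not part of any library):

```python
class Enum(object):
    """Simple enum: Enum('FOO BAR BAZ').BAZ == 2, and name_of(2) == 'BAZ'."""
    def __init__(self, names):
        self.names = names.split()
        for number, name in enumerate(self.names):
            setattr(self, name, number)

    def name_of(self, number):
        """Return the symbolic name for a numeric value."""
        return self.names[number]

codes = Enum('FOO BAR BAZ')
print(codes.BAZ)         # 2
print(codes.name_of(2))  # BAZ
```

Reverse lookup is one of the first things the third-party packages above provide that the bare class lacks.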
How should I best emulate and/or avoid enum's in Python?
[ "", "python", "enums", "" ]
How do I generate all the permutations of a list? For example: ``` permutations([]) [] permutations([1]) [1] permutations([1, 2]) [1, 2] [2, 1] permutations([1, 2, 3]) [1, 2, 3] [1, 3, 2] [2, 1, 3] [2, 3, 1] [3, 1, 2] [3, 2, 1] ```
Use [`itertools.permutations`](https://docs.python.org/3/library/itertools.html#itertools.permutations) from the **standard library**: ``` import itertools list(itertools.permutations([1, 2, 3])) ``` --- Adapted from [here](http://code.activestate.com/recipes/252178/) is a demonstration of how `itertools.permutations` might be implemented: ``` def permutations(elements): if len(elements) <= 1: yield elements return for perm in permutations(elements[1:]): for i in range(len(elements)): # nb elements[0:1] works in both string and list contexts yield perm[:i] + elements[0:1] + perm[i:] ``` A couple of alternative approaches are listed in the documentation of `itertools.permutations`. Here's one: ``` def permutations(iterable, r=None): # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC # permutations(range(3)) --> 012 021 102 120 201 210 pool = tuple(iterable) n = len(pool) r = n if r is None else r if r > n: return indices = range(n) cycles = range(n, n-r, -1) yield tuple(pool[i] for i in indices[:r]) while n: for i in reversed(range(r)): cycles[i] -= 1 if cycles[i] == 0: indices[i:] = indices[i+1:] + indices[i:i+1] cycles[i] = n - i else: j = cycles[i] indices[i], indices[-j] = indices[-j], indices[i] yield tuple(pool[i] for i in indices[:r]) break else: return ``` And another, based on `itertools.product`: ``` def permutations(iterable, r=None): pool = tuple(iterable) n = len(pool) r = n if r is None else r for indices in product(range(n), repeat=r): if len(set(indices)) == r: yield tuple(pool[i] for i in indices) ```
For [Python 2.6](http://docs.python.org/dev/whatsnew/2.6.html) onwards: ``` import itertools itertools.permutations([1, 2, 3]) ``` This returns as a generator. Use `list(permutations(xs))` to return as a list.
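As a quick sanity check on any of these implementations, the number of permutations of n distinct elements should be n!:

```python
import itertools
import math

for n in range(6):
    perms = list(itertools.permutations(range(n)))
    # all results are distinct, and there are exactly n! of them
    assert len(perms) == len(set(perms)) == math.factorial(n)

print(len(list(itertools.permutations([1, 2, 3]))))  # 6
```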
How do I generate all permutations of a list?
[ "", "python", "algorithm", "permutation", "combinatorics", "" ]
I'm building a PHP page with data sent from MySQL. Is it better to have * 1 `SELECT` query with 4 table joins, or * 4 small `SELECT` queries with no table join; I do select from an ID Which is faster and what is the pro/con of each method? I only need one row from each tables.
You should run a profiling tool if you're truly worried, because it depends on many things and can vary, but as a rule it's better to have fewer queries being compiled and fewer round trips to the database. Make sure you filter things as well as you can using your WHERE and JOIN ON clauses. But honestly, it usually doesn't matter, since you're probably not going to be hit all that hard compared to what the database can do; so unless optimization is in your spec you should not do it prematurely, and should do what's simplest.
Generally, it's better to have one SELECT statement. One of the main reasons to have databases is that they are fast at processing information, particularly if it is in the format of query. If there is any drawback to this approach, it's that there are some kinds of analysis that you can't do with one big SELECT statement. RDBMS purists will insist that this is a database design problem, in which case you are back to my original suggestion.
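For concreteness, the two approaches might look like this (all table and column names are hypothetical):

```sql
-- One statement, one round trip:
SELECT u.name, a.city, o.total, s.status
FROM users u
JOIN addresses a ON a.user_id = u.id
JOIN orders o    ON o.user_id = u.id
JOIN shipments s ON s.order_id = o.id
WHERE u.id = 42;

-- versus four statements, four round trips:
SELECT name   FROM users     WHERE id = 42;
SELECT city   FROM addresses WHERE user_id = 42;
SELECT total  FROM orders    WHERE user_id = 42;
SELECT status FROM shipments WHERE order_id = 7;
```

The joined form does more work per statement, but saves three client-server round trips, which is usually the dominant cost from PHP.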
Should I use one big SQL Select statement or several small ones?
[ "", "php", "mysql", "performance", "optimization", "" ]
1. You have multiple network adapters. 2. Bind a UDP socket to an local port, without specifying an address. 3. Receive packets on one of the adapters. How do you get the local ip address of the adapter which received the packet? The question is, "What is the ip address from the receiver adapter?" not the address from the sender which we get in the ``` receive_from( ..., &senderAddr, ... ); ``` call.
You could enumerate all the network adapters, get their IP addresses and compare the part covered by the subnet mask with the sender's address. Like: ``` IPAddress FindLocalIPAddressOfIncomingPacket( senderAddr ) { foreach( adapter in EnumAllNetworkAdapters() ) { adapterSubnet = adapter.subnetmask & adapter.ipaddress; senderSubnet = adapter.subnetmask & senderAddr; if( adapterSubnet == senderSubnet ) { return adapter.ipaddress; } } } ```
The solution provided by [timbo](https://stackoverflow.com/users/1810/timbo) assumes that the address ranges are unique and not overlapping. While this is usually the case, it isn't a generic solution. There is an excellent implementation of a function that does exactly what you're after provided in Stevens' book "Unix Network Programming" (section 20.2). This is a function based on recvmsg(), rather than recvfrom(). If your socket has the IP\_RECVIF option enabled then recvmsg() will return the index of the interface on which the packet was received. This can then be used to look up the destination address. The source code is available [here](http://www.kohala.com/start/unp.tar.Z). The function in question is 'recvfrom\_flags()'
How to get your own (local) IP-Address from an udp-socket (C/C++)
[ "", "c++", "sockets", "udp", "" ]
Been scouring the net for something like firewatir but for python. I'm trying to automate firefox on linux. Any suggestions?
The [PyXPCOM extension](https://developer.mozilla.org/en/PyXPCOM) is one possibility. But looking at what firewatir provides, I have to second the suggestion for twill. It's based on [mechanize](http://wwwsearch.sourceforge.net/mechanize/), which might also be useful in this context.
You could try [selenium](http://selenium.openqa.org/).
Automate firefox with python?
[ "", "python", "linux", "firefox", "ubuntu", "automation", "" ]
An array of ints in java is stored as a block of 32-bit values in memory. How is an array of Integer objects stored? i.e. ``` int[] vs. Integer[] ``` I'd imagine that each element in the Integer array is a reference to an Integer object, and that the Integer object has object storage overheads, just like any other object. I'm hoping however that the JVM does some magical cleverness under the hood given that Integers are immutable and stores it just like an array of ints. Is my hope woefully naive? Is an Integer array much slower than an int array in an application where every last ounce of performance matters?
No VM I know of will store an Integer[] array like an int[] array, for the following reasons: 1. There can be **null** Integer objects in the array and you have no bits left for indicating this in an int array. The VM could store this 1-bit information per array slot in a hidden bit-array though. 2. You can synchronize on the elements of an Integer array. This is much harder to overcome than the first point, since you would have to store a monitor object for each array slot. 3. The elements of an Integer[] can be compared for identity. You could for example create two Integer objects with the value 1 via **new** and store them in different array slots and later retrieve them and compare them via ==. This must lead to false, so you would have to store this information somewhere. Or you keep a reference to one of the Integer objects somewhere and use this for comparison, and you have to make sure one of the == comparisons is false and one true. This means the whole concept of object identity is quite hard to handle for the *optimized* Integer array. 4. You can cast an Integer[] to e.g. Object[] and pass it to methods expecting just an Object[]. This means all the code which handles Object[] must now be able to handle the special Integer[] object too, making it slower and larger. Taking all this into account, it would probably be possible to make a special Integer[] which saves some space in comparison to a *naive* implementation, but the additional complexity would likely affect a lot of other code, making it slower in the end. The overhead of using Integer[] instead of int[] can be quite large in both space and time. On a typical 32-bit VM an Integer object will consume 16 bytes (8 bytes for the object header, 4 for the payload and 4 additional bytes for alignment) while the Integer[] uses as much space as int[]. In 64-bit VMs (using 64-bit pointers, which is not always the case) an Integer object will consume 24 bytes (16 for the header, 4 for the payload and 4 for alignment).
In addition, a slot in the Integer[] will use 8 bytes instead of 4 as in the int[]. This means you can expect an overhead of **16 to 28** bytes per slot, which is a **factor of 4 to 7** compared to plain int arrays. The performance overhead can be significant too, for mainly two reasons: 1. Since you use more memory, you put much more pressure on the memory subsystem, making it more likely to have cache misses in the case of Integer[]. For example if you traverse the contents of the int[] in a linear manner, the cache will have most of the entries already fetched when you need them (since the layout is linear too). But in the case of the Integer array, the Integer objects themselves might be scattered randomly in the heap, making it hard for the cache to guess where the next memory reference will point. 2. The garbage collector has to do much more work because of the additional memory used and because it has to scan and move each Integer object separately, while in the case of int[] it is just one object and the contents of the object don't have to be scanned (they contain no references to other objects). To sum it up, using an int[] in performance-critical work will be both much faster and more memory-efficient than using an Integer array in current VMs, and it is unlikely this will change much in the near future.
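A small demonstration of the identity point above: two distinct Integer objects holding the same value compare unequal under ==, which an int[] has no way to represent (the class name is made up; explicit `new Integer(...)` is used deliberately to bypass the boxing cache):

```java
public class IdentityDemo {
    public static void main(String[] args) {
        Integer[] boxed = new Integer[2];
        boxed[0] = new Integer(1);   // two distinct objects...
        boxed[1] = new Integer(1);   // ...with the same value

        System.out.println(boxed[0] == boxed[1]);      // false: different identities
        System.out.println(boxed[0].equals(boxed[1])); // true: equal values

        int[] plain = {1, 1};
        System.out.println(plain[0] == plain[1]);      // true: ints have only values
    }
}
```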
John Rose is working on [fixnums](http://blogs.oracle.com/jrose/entry/fixnums_in_the_vm) in the JVM to fix this problem.
How are Integer arrays stored internally, in the JVM?
[ "", "java", "jvm", "" ]
In our application, we receive text files (`.txt`, `.csv`, etc.) from diverse sources. When reading, these files sometimes contain garbage, because the files were created in a different/unknown codepage. Is there a way to (automatically) detect the codepage of a text file? The `detectEncodingFromByteOrderMarks`, on the `StreamReader` constructor, works for `UTF8` and other unicode marked files, but I'm looking for a way to detect code pages, like `ibm850`, `windows1252`. --- Thanks for your answers, this is what I've done. The files we receive are from end-users; they do not have a clue about codepages. The receivers are also end-users, and by now this is what they know about codepages: codepages exist, and are annoying. **Solution:** * Open the received file in Notepad, look at a garbled piece of text. If somebody is called François or something, with your human intelligence you can guess this. * I've created a small app that the user can use to open the file with, and enter a piece of text the user knows will appear in the file when the correct codepage is used. * Loop through all codepages, and display the ones that give a solution with the user-provided text. * If more than one codepage pops up, ask the user to specify more text.
You can't detect the codepage, you need to be told it. You can analyse the bytes and guess it, but that can give some bizarre (sometimes amusing) results. I can't find it now, but I'm sure Notepad can be tricked into displaying English text in Chinese. Anyway, this is what you need to read: [The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)](http://www.joelonsoftware.com/articles/Unicode.html). Specifically Joel says: > **The Single Most Important Fact About Encodings** > > If you completely forget everything I just explained, please remember one extremely important fact. It does not make sense to have a string without knowing what encoding it uses. You can no longer stick your head in the sand and pretend that "plain" text is ASCII. > There Ain't No Such Thing As Plain Text. > > If you have a string, in memory, in a file, or in an email message, you have to know what encoding it is in or you cannot interpret it or display it to users correctly.
If you're looking to detect non-UTF encodings (i.e. no BOM), you're basically down to heuristics and statistical analysis of the text. You might want to take a look at the [Mozilla paper on universal charset detection](http://www-archive.mozilla.org/projects/intl/UniversalCharsetDetection.html) ([same link, with better formatting via Wayback Machine](https://web.archive.org/web/20110602164601/http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html)).
How can I detect the encoding/codepage of a text file?
[ "", "c#", ".net", "text", "encoding", "globalization", "" ]
While we try to set up as many unit tests as time allows for our applications, I always find the amount of UI-level tests lacking. There are many options out there, but I'm not sure what would be a good place to start. What is your preferred unit testing tool for testing Swing applications? Why do you like it?
On our side, we test Swing GUIs with [FEST](http://code.google.com/p/fest/). This is an adapter on the classical Swing robot, but it dramatically eases its use. Combined with TestNG, we found it an easy way to simulate "human" actions through the GUI.
If your target application has **custom components**, I would definitely recommend [Marathon](http://www.marathontesting.com/) to automate your tests. I was given the task of automating an application with several *extremely* complicated custom components, written in-house from the ground up. I went through a review process that lasted two months, in which I made the decision on which test tool to use, from a list of close to 30 test tools that were available, both commercial and FOSS. It was the *only* test tool that was able to successfully automate our particular custom components; where IBM's Rational Functional Tester, Microfocus' TestPartner, QF-Test, Abbot & FEST **failed**. I have since been able to successfully integrate the tests with Cruise Control such that they run upon completing each build of the application. A word of warning though: 1) it is rather rough around the edges in the way it handles JTables. I got around this by writing my own proxy class for them. 2) Does not support record/replay of drag-and-drop actions yet.
What is the best testing tool for Swing-based applications?
[ "", "java", "swing", "testing", "" ]
In C# there are `String` objects and `string` objects. What is the difference between the two? What are the best practices regarding which to use?
There is no difference. string (lower case) is just an alias for System.String.
No difference. `System.String` is strictly identical to `string`. Common C# coding guidelines indicate that you should use the keyword `string`.
String vs string
[ "", "c#", ".net", "string", "declaration", "" ]
My team is developing a new service oriented product with a web front-end. In discussions about what technologies we will use we have settled on running a JBoss application server, and Flex frontend (with possible desktop deployment using Adobe AIR), and web services to interface the client and server. We've reached an impasse when it comes to which server technology to use for our business logic. The big argument is between EJB3 and Spring, with our biggest concerns being scalability and performance, and also maintainability of the code base. Here are my questions: 1. What are the arguments for or against EJB3 vs Spring? 2. What pitfalls can I expect with each? 3. Where can I find good benchmark information?
There won't be much difference between EJB3 and Spring based on Performance. We chose Spring for the following reasons (not mentioned in the question): * Spring drives the architecture in a direction that more readily supports unit testing. For example, inject a mock DAO object to unit test your business layer, or utilize Spring's MockHttpRequest object to unit test a servlet. We maintain a separate Spring config for unit tests that allows us to isolate tests to the specific layers. * An overriding driver was compatibility. If you need to support more than one App Server (or eventually want the option to move from JBoss to Glassfish, etc.), you will essentially be carrying your container (Spring) with you, rather than relying on compatibility between different implementations of the EJB3 specification. * Spring allows for technology choices for Persistence, object remoting, etc. For example, we are also using a Flex front end, and are using the Hessian protocol for communications between Flex and Spring.
The gap between EJB3 and Spring is much smaller than it was, clearly. That said, one of the downsides to EJB3 now is that you can only inject into a bean, so you can end up turning components into beans that don't need to be. The argument about unit testing is fairly irrelevant now - EJB3 is clearly designed to be more easily unit testable. The compatibility argument above is also kind of irrelevant: whether you use EJB3 or Spring, you're still reliant on 3rd party-provided implementations of transaction managers, JMS, etc. What would swing it for me, however, is support by the community. Working on an EJB3 project last year, there just weren't a lot of people out there using it and talking about their problems. Spring, rightly or wrongly, is extremely pervasive, particularly in the enterprise, and that makes it easier to find someone who's got the same problem you're trying to solve.
Should I use EJB3 or Spring for my business layer?
[ "", "java", "performance", "spring", "ejb-3.0", "scalability", "" ]
A rather comprehensive site explaining the difficulties and solutions involved in using a dll written in c/c++ and the conversion of the .h header file to delphi/pascal was posted to a mailing list I was on recently, so I thought I'd share it, and invite others to post other useful resources for this, whether they be links, conversion tools, or book/paper titles. One resource per answer please, so we'll end up with the most popular/best resources bubbling to the top.
Over at [Rudy's Delphi Corner](http://rvelthuis.de/index.html), he has an [excellent article about the pitfalls of converting C/C++ to Delphi](http://rvelthuis.de/articles/articles-convert.html). In my opinion, this is essential information when attempting this task. Here is the description: > This article is meant for everyone who > needs to translate C/C++ headers to > Delphi. I want to share some of the > pitfalls you can encounter when > converting from C or C++. This article > is not a tutorial, just a discussion > of frequently encountered problem > cases. It is meant for the beginner as > well as for the more experienced > translator of C and C++. He also wrote a "[Conversion Helper Package](http://rvelthuis.de/programs/convertpack.html)" that installs into the Delphi IDE which aids in converting C/C++ code to Delphi: [![alt text](https://i.stack.imgur.com/nwutf.png)](https://i.stack.imgur.com/nwutf.png) (source: [rvelthuis.de](http://rvelthuis.de/images/convertpackshaded.png)) His other relevant articles on this topic include: * [Using C++ Objects in Delphi](http://rvelthuis.de/articles/articles-cppobjs.html) * [Using C object files in Delphi](http://rvelthuis.de/articles/articles-cobjs.html)
[Article](http://rvelthuis.de/articles/articles-convert.html#main) at Rudy's Delphi Corner
Best resources for converting C/C++ dll headers to Delphi?
[ "", "c++", "c", "delphi", "dll", "header", "" ]
I'm able to successfully uninstall a third-party application via the command line and via a custom Inno Setup installer. Command line Execution: ``` MSIEXEC.exe /x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn ``` Inno Setup Command: ``` [Run] Filename: msiexec.exe; Flags: runhidden waituntilterminated; Parameters: "/x {{14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn"; StatusMsg: "Uninstalling Service..."; ``` I am also able to uninstall the application programmatically when executing the following C# code in debug mode. C# Code: ``` string fileName = "MSIEXEC.exe"; string arguments = "/x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn"; ProcessStartInfo psi = new ProcessStartInfo(fileName, arguments) { CreateNoWindow = true, UseShellExecute = false, RedirectStandardOutput = true }; Process process = Process.Start(psi); string errorMsg = process.StandardOutput.ReadToEnd(); process.WaitForExit(); ``` The same C# code, however, produces the following failure output when run as a compiled, deployed Windows Service: ``` "This action is only valid for products that are currently installed." ``` Additional Comments: * The Windows Service which is issuing the uninstall command is running on the same machine as the code being tested in Debug Mode. The Windows Service is running/logged on as the Local system account. * I have consulted my application logs and I have validated that the executed command arguments are the same in both debug and release mode. * I have consulted the Event Viewer but it doesn't offer any clues. Thoughts? Any help would be greatly appreciated. Thanks.
Thanks to those offering help. This appears to be a permissions issue. I have updated my service to run under an Administrator account and it was able to successfully uninstall the third-party application. To Orion's point, though the Local System account is a powerful account that has full access to the system -- <http://technet.microsoft.com/en-us/library/cc782435.aspx> -- it doesn't seem to have the necessary rights to perform the uninstall. [See additional comments for full story regarding the LocalSystem being able to uninstall application for which it installed.]
I've come across similar problems in the past with installation, a customer was using the SYSTEM account to install and this was causing all sorts of permission problems for non-administrative users. MSI log files aren't really going to help if the application doesn't appear "installed", I'd suggest starting with capturing the output of `MSIINV.EXE` under the system account, that will get you an "Inventory" of the currently installed programs (or what that user sees installed) <http://blogs.msdn.com/brada/archive/2005/06/24/432209.aspx> I think you probably need to go back to the drawing board and see if you really need the windows service to do the uninstall. You'll probably come across all sorts of Vista UAC issues if you haven't already...
Uninstall Command Fails Only in Release Mode
[ "", "c#", "installation", "service", "" ]
I just want my Apache to register some of my predefined environment variables so that I can retrieve them using the `getenv` function in PHP. How can I do this? I tried adding `/etc/profile.d/foo.sh` with `export FOO=/bar/baz` as root and restarted Apache.
Environment variables are inherited by processes in Unix. The files in /etc/profile.d are only executed (in the current shell, not in a subshell) when you log in. Just changing the value there and then restarting a process will not update the environment. Possible Fixes: * log out/log in, then start apache * source the file: `# . /etc/profile.d/foo.sh`, then restart apache * source the file in the apache init script You also need to make sure that `/etc/profile.d/` is sourced when Apache is started by `init` rather than yourself. The best fix might also depend on the distribution you are using, because they use different schemes for configuration.
You can use [SetEnv](http://httpd.apache.org/docs/2.2/mod/mod_env.html#setenv) in your config files (/etc/httpd/conf.d/\*.conf, .htaccess ...). Additionally you should be able to define them in /etc/sysconfig/httpd (on RPM-based distribs) and *export* them (note: not tested). Note: it wouldn't surprise me if some distributions tried quite hard to hide as much as possible, as far as system config is concerned, from a publicly accessible service such as Apache. And if they don't, they might start doing this in a future version. Hence I advise you to do this explicitly. If you need to share such a setting between Apache and your shells, you could try sourcing */etc/profile.d/yourprofile.sh* from */etc/sysconfig/httpd*
reinitialize system wide environment variable in linux
[ "", "php", "linux", "variables", "environment", "" ]
I'd like to store a properties file as XML. Is there a way to sort the keys when doing this so that the generated XML file will be in alphabetical order? ``` String propFile = "/path/to/file"; Properties props = new Properties(); /*set some properties here*/ try { FileOutputStream xmlStream = new FileOutputStream(propFile); /*this comes out unsorted*/ props.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); } ```
Here's a quick and dirty way to do it: ``` String propFile = "/path/to/file"; Properties props = new Properties(); /* Set some properties here */ Properties tmp = new Properties() { @Override public Set<Object> keySet() { return Collections.unmodifiableSet(new TreeSet<Object>(super.keySet())); } }; tmp.putAll(props); try { FileOutputStream xmlStream = new FileOutputStream(propFile); /* This comes out SORTED! */ tmp.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); } ``` Here are the caveats: * The tmp Properties (an anonymous subclass) doesn't fulfill the contract of Properties. For example, if you got its `keySet` and tried to remove an element from it, an exception would be raised. So, don't allow instances of this subclass to escape! In the snippet above, you are never passing it to another object or returning it to a caller who has a legitimate expectation that it fulfills the contract of Properties, so it is safe. * The implementation of Properties.storeToXML could change, causing it to ignore the keySet method. For example, a future release, or OpenJDK, could use the `keys()` method of `Hashtable` instead of `keySet`. This is one of the reasons why classes should always document their "self-use" (Effective Java Item 15). However, in this case, the worst that would happen is that your output would revert to unsorted. * Remember that the Properties storage methods ignore any "default" entries.
Here's a way to produce sorted output for both store `Properties.store(OutputStream out, String comments)` and `Properties.storeToXML(OutputStream os, String comment)`: ``` Properties props = new Properties() { @Override public Set<Object> keySet(){ return Collections.unmodifiableSet(new TreeSet<Object>(super.keySet())); } @Override public synchronized Enumeration<Object> keys() { return Collections.enumeration(new TreeSet<Object>(super.keySet())); } }; props.put("B", "Should come second"); props.put("A", "Should come first"); props.storeToXML(new FileOutputStream(new File("sortedProps.xml")), null); props.store(new FileOutputStream(new File("sortedProps.properties")), null); ```
How to write java.util.Properties to XML with sorted keys?
[ "", "java", "xml", "" ]
I have an object in a multi-threaded environment that maintains a collection of information, e.g.: ``` public IList<string> Data { get { return data; } } ``` I currently have `return data;` wrapped by a `ReaderWriterLockSlim` to protect the collection from sharing violations. However, to be doubly sure, I'd like to return the collection as read-only, so that the calling code is unable to make changes to the collection, only view what's already there. Is this at all possible?
If your underlying data is stored as list you can use [List(T).AsReadOnly](http://msdn.microsoft.com/en-us/library/e78dcd75.aspx) method. If your data can be enumerated, you can use [Enumerable.ToList](http://msdn.microsoft.com/en-us/library/bb342261.aspx) method to cast your collection to List and call AsReadOnly on it.
I voted for your accepted answer and agree with it--however might I give you something to consider? Don't return a collection directly. Make an accurately named business logic class that reflects the purpose of the collection. The main advantage of this comes in the fact that you can't add code to collections so whenever you have a native "collection" in your object model, you ALWAYS have non-OO support code spread throughout your project to access it. For instance, if your collection was invoices, you'd probably have 3 or 4 places in your code where you iterated over unpaid invoices. You could have a getUnpaidInvoices method. However, the real power comes in when you start to think of methods like "payUnpaidInvoices(payer, account);". When you pass around collections instead of writing an object model, entire classes of refactorings will never occur to you. Note also that this makes your problem particularly nice. If you don't want people changing the collections, your container need contain no mutators. If you decide later that in just one case you actually HAVE to modify it, you can create a safe mechanism to do so. How do you solve that problem when you are passing around a native collection? Also, native collections can't be enhanced with extra data. You'll recognize this next time you find that you pass in (Collection, Extra) to more than one or two methods. It indicates that "Extra" belongs with the object containing your collection.
Return collection as read-only
[ "", "c#", ".net", "multithreading", "collections", "concurrency", "" ]
I wrote a small `PHP` application several months ago that uses the `WordPress XMLRPC library` to synchronize two separate WordPress blogs. I have a general "RPCRequest" function that packages the request, sends it, and returns the server response, and I have several more specific functions that customize the type of request that is sent. In this particular case, I am calling "getPostIDs" to retrieve the number of posts on the remote server and their respective postids. Here is the code: ``` $rpc = new WordRPC('http://mywordpressurl.com/xmlrpc.php', 'username', 'password'); $rpc->getPostIDs(); ``` I'm receiving the following error message: ``` expat reports error code 5 description: Invalid document end line: 1 column: 1 byte index: 0 total bytes: 0 data beginning 0 before byte index: ``` Kind of a cliffhanger ending, which is also strange. But since the error message isn't formatted in XML, my intuition is that it's the local XMLRPC library that is generating the error, not the remote server. Even stranger, if I change the "getPostIDs()" call to "getPostIDs(1)" or any other integer, it works just fine. Here is the code for the WordRPC class: ``` public function __construct($url, $user, $pass) { $this->url = $url; $this->username = $user; $this->password = $pass; $id = $this->RPCRequest("blogger.getUserInfo", array("null", $this->username, $this->password)); $this->blogID = $id['userid']; } public function RPCRequest($method, $params) { $request = xmlrpc_encode_request($method, $params); $context = stream_context_create(array('http' => array( 'method' => "POST", 'header' => "Content-Type: text/xml", 'content' => $request ))); $file = file_get_contents($this->url, false, $context); return xmlrpc_decode($file); } public function getPostIDs($num_posts = 0) { return $this->RPCRequest("mt.getRecentPostTitles", array($this->blogID, $this->username, $this->password, $num_posts)); } ``` As I mentioned, it works fine if "getPostIDs" is given a positive integer argument. 
Furthermore, this used to work perfectly well as is; the default parameter of 0 simply indicates to the RPC server that it should retrieve *all* posts, not just the most recent `$num_posts` posts. Only recently has this error started showing up. I've tried googling the error without much luck. My question, then, is **what exactly does "expat reports error code 5" mean, and who is generating the error?** Any details/suggestions/insights beyond that are welcome, too!
@Novak: Thanks for your suggestion. The problem turned out to be a memory issue; by retrieving all the posts from the remote location, the response exceeded the amount of memory PHP was allowed to utilize, hence the unclosed token error. The problem with the cryptic and incomplete error message was due to an outdated version of the XML-RPC library being used. Once I'd upgraded the version of WordPress, it provided me with the complete error output, including the memory error.
Expat is the XML parser in PHP. Error code 5 is one of many expat error constants, in this case: `XML_ERROR_UNCLOSED_TOKEN`. Sounds to me like there's an error in the result returned from the RPC call. You might want to do some error checking in RPCRequest after `file_get_contents` and before `xmlrpc_decode`.
WordPress XMLRPC: Expat reports error code 5
[ "", "php", "blogs", "xml-rpc", "" ]
I am trying to write a servlet that will send a XML file (xml formatted string) to another servlet via a POST. (Non essential xml generating code replaced with "Hello there") ``` StringBuilder sb= new StringBuilder(); sb.append("Hello there"); URL url = new URL("theservlet's URL"); HttpURLConnection connection = (HttpURLConnection)url.openConnection(); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Length", "" + sb.length()); OutputStreamWriter outputWriter = new OutputStreamWriter(connection.getOutputStream()); outputWriter.write(sb.toString()); outputWriter.flush(); outputWriter.close(); ``` This is causing a server error, and the second servlet is never invoked.
This kind of thing is much easier using a library like [HttpClient](http://hc.apache.org/httpclient-3.x/). There's even a [post XML code example](http://svn.apache.org/viewvc/httpcomponents/oac.hc3x/trunk/src/examples/PostXML.java?view=markup): ``` PostMethod post = new PostMethod(url); RequestEntity entity = new FileRequestEntity(inputFile, "text/xml; charset=ISO-8859-1"); post.setRequestEntity(entity); HttpClient httpclient = new HttpClient(); int result = httpclient.executeMethod(post); ```
I recommend using Apache [HTTPClient](http://hc.apache.org/) instead, because it's a nicer API. But to solve this current problem: try calling `connection.setDoOutput(true);` after you open the connection. ``` StringBuilder sb= new StringBuilder(); sb.append("Hello there"); URL url = new URL("theservlet's URL"); HttpURLConnection connection = (HttpURLConnection)url.openConnection(); connection.setDoOutput(true); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Length", "" + sb.length()); OutputStreamWriter outputWriter = new OutputStreamWriter(connection.getOutputStream()); outputWriter.write(sb.toString()); outputWriter.flush(); outputWriter.close(); ```
Writing post data from one java servlet to another
[ "", "java", "servlets", "" ]
I'm quite confident that globally declared variables get allocated (and initialized, if applicable) at program start time. ``` int globalgarbage; unsigned int anumber = 42; ``` But what about static ones defined within a function? ``` void doSomething() { static bool globalish = true; // ... } ``` When is the space for `globalish` allocated? I'm guessing when the program starts. But does it get initialized then too? Or is it initialized when `doSomething()` is first called?
I was curious about this so I wrote the following test program and compiled it with g++ version 4.1.2. ``` #include <iostream> #include <string> using namespace std; class test { public: test(const char *name) : _name(name) { cout << _name << " created" << endl; } ~test() { cout << _name << " destroyed" << endl; } string _name; }; test t("global variable"); void f() { static test t("static variable"); test t2("Local variable"); cout << "Function executed" << endl; } int main() { test t("local to main"); cout << "Program start" << endl; f(); cout << "Program end" << endl; return 0; } ``` The results were not what I expected. The constructor for the static object was not called until the first time the function was called. Here is the output: ``` global variable created local to main created Program start static variable created Local variable created Function executed Local variable destroyed Program end local to main destroyed static variable destroyed global variable destroyed ```
Some relevant verbiage from C++ Standard: > ## 3.6.2 Initialization of non-local objects [basic.start.init] > > ### 1 > > The storage for objects with static storage > duration (*basic.stc.static*) shall be zero-initialized (*dcl.init*) > before any other initialization takes place. Objects of > POD types (*basic.types*) with static storage duration > initialized with constant expressions (*expr.const*) shall be > initialized before any dynamic initialization takes place. > Objects of namespace scope with static storage duration defined in > the same translation unit and dynamically initialized shall be > initialized in the order in which their definition appears in > the translation unit. [Note: *dcl.init.aggr* describes the > order in which aggregate members are initialized. The > initialization of local static objects is described in *stmt.dcl*. ] > > [more text below adding more liberties for compiler writers] > > ## 6.7 Declaration statement [stmt.dcl] > > ... > > ### 4 > > The zero-initialization (*dcl.init*) of all local objects with > static storage duration (*basic.stc.static*) is performed before > any other initialization takes place. A local object of > POD type (*basic.types*) with static storage duration > initialized with constant-expressions is initialized before its > block is first entered. An implementation is permitted to perform > early initialization of other local objects with static storage > duration under the same conditions that an implementation is > permitted to statically initialize an object with static storage > duration in namespace scope (*basic.start.init*). Otherwise such > an object is initialized the first time control passes through its > declaration; such an object is considered initialized upon the > completion of its initialization. If the initialization exits by > throwing an exception, the initialization is not complete, so it will > be tried again the next time control enters the declaration. 
If control re-enters the declaration (recursively) while the object is being > initialized, the behavior is undefined. [*Example:* > > ``` > int foo(int i) > { > static int s = foo(2*i); // recursive call - undefined > return i+1; > } > ``` > > --*end example*] > > ### 5 > > The destructor for a local object with static storage duration will > be executed if and only if the variable was constructed. > [Note: *basic.start.term* describes the order in which local > objects with static storage duration are destroyed. ]
When do function-level static variables get allocated/initialized?
[ "", "c++", "variables", "" ]
Let's say I have a container (std::vector) of pointers used by a multi-threaded application. When adding new pointers to the container, the code is protected using a critical section (boost::mutex). All well and good. The code should be able to return one of these pointers to a thread for processing, but another separate thread could choose to delete one of these pointers, which might still be in use. e.g.: ``` thread1() { foo* p = get_pointer(); ... p->do_something(); } thread2() { foo* p = get_pointer(); ... delete p; } ``` So thread2 could delete the pointer whilst thread1 is using it. Nasty. So instead I want to use a container of Boost shared ptrs. IIRC these pointers will be reference counted, so as long as I return shared ptrs instead of raw pointers, removing one from the container WON'T actually free it until the last use of it goes out of scope. i.e. ``` std::vector<boost::shared_ptr<foo> > my_vec; thread1() { boost::shared_ptr<foo> sp = get_ptr[0]; ... sp->do_something(); } thread2() { boost::shared_ptr<foo> sp = get_ptr[0]; ... my_vec.erase(my_vec.begin()); } boost::shared_ptr<foo> get_ptr(int index) { lock_my_vec(); return my_vec[index]; } ``` In the above example, if thread1 gets the pointer before thread2 calls erase, will the object pointed to still be valid? It won't actually be deleted when thread1 completes? **Note that access to the global vector will be via a critical section.** I think this is how shared\_ptrs work but I need to be sure.
For the threading safety of boost::shared\_ptr you should check [this link](http://www.boost.org/doc/libs/1_36_0/libs/smart_ptr/shared_ptr.htm#ThreadSafety). It's not guarantied to be safe, but on many platforms it works. Modifying the std::vector is not safe AFAIK.
> In the above example, if thread1 gets the pointer before thread2 calls erase, will the object pointed to still be valid? It won't actually be deleted when thread1 completes? In your example, if thread1 gets the pointer before thread2, then thread2 will have to wait at the beginning of the function (because of the lock). So, yes, the object pointed to will still be valid. However, you might want to make sure that my\_vec is not empty before accessing its first element.
Boost shared_ptr container question
[ "", "c++", "boost", "smart-pointers", "" ]
We have a whole bunch of queries that "search" for clients, customers, etc. You can search by first name, email, etc. We're using LIKE statements in the following manner: ``` SELECT * FROM customer WHERE fname LIKE '%someName%' ``` Does full-text indexing help in the scenario? We're using SQL Server 2005.
It will depend upon your DBMS. I believe that most systems will not take advantage of the full-text index unless you use the full-text functions. (e.g. [MATCH/AGAINST](http://dev.mysql.com/doc/refman/5.0/en/fulltext-natural-language.html) in mySQL or FREETEXT/CONTAINS in MS SQL) Here is two good articles on when, why, and how to use full-text indexing in SQL Server: 1. [How To Use SQL Server Full-Text Searching](https://www.developer.com/database/sql-server-full-text-searching/) 2. [Solving Complex SQL Problems with Full-Text Indexing](https://www.developer.com/guides/solving-complex-sql-problems-with-full-text-indexing/)
FTS *can* help in this scenario, the question is whether it is worth it or not. To begin with, let's look at why `LIKE` may not be the most effective search. When you use `LIKE`, especially when you are searching with a `%` at the beginning of your comparison, SQL Server needs to perform both a table scan of every single row *and* a byte by byte check of the column you are checking. FTS has some better algorithms for matching data as does some better statistics on variations of names. Therefore FTS can provide better performance for matching Smith, Smythe, Smithers, etc when you look for Smith. It is, however, a bit more complex to use FTS, as you'll need to master `CONTAINS` vs `FREETEXT` and the arcane format of the search. However, if you want to do a search where either FName or LName match, you can do that with one statement instead of an OR. To determine if FTS is going to be effective, determine how much data you have. I use FTS on a database of several hundred million rows and that's a real benefit over searching with `LIKE`, but I don't use it on every table. If your table size is more reasonable, less than a few million, you can get similar speed by creating an index for each column that you're going to be searching on and SQL Server should perform an index scan rather than a table scan.
When should you use full-text indexing?
[ "", "sql", "sql-server", "t-sql", "indexing", "full-text-search", "" ]
I've been writing C / C++ code for almost twenty years, and I know Perl, Python, PHP, and some Java as well, and I'm teaching myself JavaScript. But I've never done any .NET, VB, or C# stuff. What exactly does **managed** code mean? Wikipedia [describes it](http://en.wikipedia.org/wiki/Managed_code) simply as > Code that executes under the management of a virtual machine and it specifically says that Java is (usually) managed code, so * **why does the term only seem to apply to C# / .NET?** * **Can you compile C# into a .exe that contains the VM as well, or do you have to package it up and give it to another .exe (a la java)?** In a similar vein, * **is .NET a *language* or a *framework*, and what exactly does "framework" mean here?** OK, so that's more than one question, but for someone who's been in the industry as long as I have, I'm feeling rather N00B-ish right now...
When you compile C# code to a .exe, it is compiled to Common Intermediate Language (CIL) bytecode. Whenever you run a CIL executable it is executed on Microsoft's Common Language Runtime (CLR) virtual machine. So no, it is not possible to include the VM within your .NET executable file. You must have the .NET runtime installed on any client machines where your program will be running. To answer your second question, .NET is a framework, in that it is a set of libraries, compilers and VM that is not language specific. So you can code on the .NET framework in C#, VB, C++ and any other languages which have a .NET compiler. <https://bitbucket.org/brianritchie/wiki/wiki/.NET%20Languages> The above page has a listing of languages which have .NET versions, as well as links to their pages.
I don't think you are alone in being confused about what .Net is. There are already other answers that should have you covered but I'll throw out this tidbit of info for others. To see what .Net "really" is simply go to c:\Windows\Microsoft.Net\Framework In there you'll see folders that are specific to the version(s) you have installed. Go into the v2.0.xxxxx folder if you have it installed for example. In that folder is the framework. You will basically see a bunch of .exe files and .dll files. All the DLL files that start with System.\*.dll are essentially the .Net framework. The .exe files you'll see in that folder are utilities for developers as well as compilers. You mentioned C#. Find the csc.exe file. That's your C# compiler. Building a program is really simple. Throw the following code into a hello.cs file. ``` using System; class Program { static void Main(string[] args) { Console.WriteLine("hello world"); } } ``` Then on the command line type> csc hello.cs That will generate you a .exe file. Run it and it will spit out 'hello world' obviously. The line that says Console.WriteLine() is calling into the Framework. Console is a class that lives within the System namespace and WriteLine() is a static method. This is the disassembled code for that Console.WriteLine() method: ``` [HostProtection(SecurityAction.LinkDemand, UI=true)] public static void WriteLine(string value) { Out.WriteLine(value); } ``` When people say things like, "Should I use PHP or .Net?", or "Should I use Python or .Net" you start to see how that's the wrong thing to be discussing. They are obviously comparing a language to a Framework. C# is a language and it is just one of the many languages that can be used to write code on top of the .Net platform. That same method of Console.WriteLine() can be invoked from C#, VB.Net, Pascal, C++, Ruby, Python, F# and any other language that has been made to work on top of the .Net platform. I hope that helps. -Keith
What exactly is "managed" code?
[ "", "c#", ".net", "vb.net", "managed-code", "" ]
This question was [already asked in the context of C#/.Net](https://stackoverflow.com/questions/13049).

Now I'd like to learn the differences between a struct and a class in C++. Please discuss the technical differences as well as reasons for choosing one or the other in OO design.

I'll start with an obvious difference:

* If you don't specify `public:` or `private:`, members of a struct are public by default; members of a class are private by default.

I'm sure there are other differences to be found in the obscure corners of the C++ specification.
You forget the tricky 2nd difference between classes and structs.

Quoth the standard (§11.2.2 in C++98 through C++11):

> In absence of an *access-specifier* for a base class, public is assumed when the derived class is declared *struct* and private is assumed when the class is declared *class*.

And just for completeness' sake, the more widely known difference between class and struct is defined in (11.2):

> Members of a class defined with the keyword *class* are *private* by default. Members of a class defined with the keywords *struct* or *union* are *public* by default.

Additional difference: the keyword `class` can be used to declare template parameters, while the `struct` keyword cannot be so used.
Quoting [The C++ FAQ](https://isocpp.org/wiki/faq/classes-and-objects#struct-vs-class),

> [7.8] What's the difference between the keywords struct and class?
>
> The members and base classes of a struct are public by default, while in class, they default to private. Note: you should make your base classes explicitly public, private, or protected, rather than relying on the defaults.
>
> Struct and class are otherwise functionally equivalent.
>
> OK, enough of that squeaky clean techno talk. Emotionally, most developers make a strong distinction between a class and a struct. A struct simply feels like an open pile of bits with very little in the way of encapsulation or functionality. A class feels like a living and responsible member of society with intelligent services, a strong encapsulation barrier, and a well defined interface. Since that's the connotation most people already have, you should probably use the struct keyword if you have a class that has very few methods and has public data (such things do exist in well designed systems!), but otherwise you should probably use the class keyword.
What are the differences between struct and class in C++?
[ "", "c++", "oop", "class", "struct", "c++-faq", "" ]
What are best practices for communicating events from a usercontrol to a parent control/page? I want to do something similar to this:

```
MyPage.aspx:

<asp:Content ID="Content1" ContentPlaceHolderID="MainContentPlaceholder" runat="server">
    <uc1:MyUserControl ID="MyUserControl1" runat="server"
        OnSomeEvent="MyUserControl_OnSomeEvent" />

MyUserControl.ascx.cs:

public partial class MyUserControl : UserControl
{
    public event EventHandler SomeEvent;
    ....
    private void OnSomething()
    {
        if (SomeEvent != null)
            SomeEvent(this, EventArgs.Empty);
    }
```

The question is: what is best practice?
You would want to create an event on the control that is subscribed to in the parent. See [OdeToCode](http://www.odetocode.com/code/94.aspx) for an example.

Here is the article for longevity's sake:

Some user controls are entirely self contained; for example, a user control displaying current stock quotes does not need to interact with any other content on the page. Other user controls will contain buttons to post back. Although it is possible to subscribe to the button click event from the containing page, doing so would break some of the object oriented rules of encapsulation. A better idea is to publish an event in the user control to allow any interested parties to handle the event. This technique is commonly referred to as “event bubbling” since the event can continue to pass through layers, starting at the bottom (the user control) and perhaps reaching the top level (the page) like a bubble moving up a champagne glass.

For starters, let’s create a user control with a button attached.

```
<%@ Control Language="c#" AutoEventWireup="false" Codebehind="WebUserControl1.ascx.cs"
    Inherits="aspnet.eventbubble.WebUserControl1"
    TargetSchema="http://schemas.microsoft.com/intellisense/ie5" %>
<asp:Panel id="Panel1" runat="server" Width="128px" Height="96px">
    WebUserControl1
    <asp:Button id="Button1" Text="Button" runat="server"/>
</asp:Panel>
```

The code behind for the user control looks like the following.
```
public class WebUserControl1 : System.Web.UI.UserControl
{
    protected System.Web.UI.WebControls.Button Button1;
    protected System.Web.UI.WebControls.Panel Panel1;

    private void Page_Load(object sender, System.EventArgs e)
    {
        Response.Write("WebUserControl1 :: Page_Load <BR>");
    }

    private void Button1_Click(object sender, System.EventArgs e)
    {
        Response.Write("WebUserControl1 :: Begin Button1_Click <BR>");
        OnBubbleClick(e);
        Response.Write("WebUserControl1 :: End Button1_Click <BR>");
    }

    public event EventHandler BubbleClick;

    protected void OnBubbleClick(EventArgs e)
    {
        if(BubbleClick != null)
        {
            BubbleClick(this, e);
        }
    }

    #region Web Form Designer generated code

    override protected void OnInit(EventArgs e)
    {
        InitializeComponent();
        base.OnInit(e);
    }

    private void InitializeComponent()
    {
        this.Button1.Click += new System.EventHandler(this.Button1_Click);
        this.Load += new System.EventHandler(this.Page_Load);
    }

    #endregion
}
```

The user control specifies a public event (BubbleClick) which declares a delegate. Anyone interested in the BubbleClick event can add an EventHandler method to execute when the event fires – just like the user control adds an EventHandler for when the Button fires the Click event. In the OnBubbleClick event, we first check to see if anyone has attached to the event (BubbleClick != null); we can then invoke all the event handling methods by calling BubbleClick, passing through the EventArgs parameter and setting the user control (this) as the event sender. Notice we are also using Response.Write to follow the flow of execution.

An ASPX page can now put the user control to work.
```
<%@ Register TagPrefix="ksa" TagName="BubbleControl" Src="WebUserControl1.ascx" %>
<%@ Page language="c#" Codebehind="WebForm1.aspx.cs" AutoEventWireup="false"
    Inherits="aspnet.eventbubble.WebForm1" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>
    <title>WebForm1</title>
</HEAD>
<body MS_POSITIONING="GridLayout">
    <form id="Form1" method="post" runat="server">
        <ksa:BubbleControl id="BubbleControl" runat="server" />
    </form>
</body>
</HTML>
```

In the code behind for the page:

```
public class WebForm1 : System.Web.UI.Page
{
    protected WebUserControl1 BubbleControl;

    private void Page_Load(object sender, System.EventArgs e)
    {
        Response.Write("WebForm1 :: Page_Load <BR>");
    }

    #region Web Form Designer generated code

    override protected void OnInit(EventArgs e)
    {
        InitializeComponent();
        base.OnInit(e);
    }

    private void InitializeComponent()
    {
        this.Load += new System.EventHandler(this.Page_Load);
        BubbleControl.BubbleClick += new EventHandler(WebForm1_BubbleClick);
    }

    #endregion

    private void WebForm1_BubbleClick(object sender, EventArgs e)
    {
        Response.Write("WebForm1 :: WebForm1_BubbleClick from " + sender.GetType().ToString() + "<BR>");
    }
}
```

Notice the parent page simply needs to add an event handler during the InitializeComponent method. When we receive the event we will again use Response.Write to follow the flow of execution.

One word of warning: if at any time events mysteriously stop working, check the InitializeComponent method to make sure the designer has not removed any of the code adding event handlers.
1) Declare a Public event in the user control
2) Issue a RaiseEvent where appropriate inside the user control
3) In the Init event of the parent page, use AddHandler to assign the control's event to the handling procedure you want to use

Simple as that!
Eventhandling in ascx usercontrols
[ "", "c#", "asp.net", "" ]
What is the difference between the following class methods?

Is it that one is static and the other is not?

```
class Test(object):
    def method_one(self):
        print "Called method_one"

    def method_two():
        print "Called method_two"

a_test = Test()
a_test.method_one()
a_test.method_two()
```
In Python, there is a distinction between *bound* and *unbound* methods.

Basically, a call to a member function (like `method_one`), a bound function

```
a_test.method_one()
```

is translated to

```
Test.method_one(a_test)
```

i.e. a call to an unbound method. Because of that, a call to your version of `method_two` will fail with a `TypeError`:

```
>>> a_test = Test()
>>> a_test.method_two()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
```

You can change the behavior of a method using a decorator:

```
class Test(object):
    def method_one(self):
        print "Called method_one"

    @staticmethod
    def method_two():
        print "Called method two"
```

The decorator tells the built-in default metaclass `type` (the class of a class, cf. [this question](https://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python)) to not create bound methods for `method_two`.

Now, you can invoke the static method both on an instance or on the class directly:

```
>>> a_test = Test()
>>> a_test.method_one()
Called method_one
>>> a_test.method_two()
Called method_two
>>> Test.method_two()
Called method_two
```
Methods in Python are a very, very simple thing once you understand the basics of the descriptor system. Imagine the following class:

```
class C(object):
    def foo(self):
        pass
```

Now let's have a look at that class in the shell:

```
>>> C.foo
<unbound method C.foo>
>>> C.__dict__['foo']
<function foo at 0x17d05b0>
```

As you can see, if you access the `foo` attribute on the class you get back an unbound method; however, inside the class storage (the dict) there is a function. Why's that? The reason for this is that the class of your class implements a `__getattribute__` that resolves descriptors. Sounds complex, but it is not. `C.foo` is roughly equivalent to this code in that special case:

```
>>> C.__dict__['foo'].__get__(None, C)
<unbound method C.foo>
```

That's because functions have a `__get__` method which makes them descriptors. If you have an instance of a class it's nearly the same, just that `None` is the class instance:

```
>>> c = C()
>>> C.__dict__['foo'].__get__(c, C)
<bound method C.foo of <__main__.C object at 0x17bd4d0>>
```

Now why does Python do that? Because the method object binds the first parameter of a function to the instance of the class. That's where self comes from. Now sometimes you don't want your class to make a function a method, that's where `staticmethod` comes into play:

```
class C(object):
    @staticmethod
    def foo():
        pass
```

The `staticmethod` decorator wraps your class and implements a dummy `__get__` that returns the wrapped function as a function and not as a method:

```
>>> C.__dict__['foo'].__get__(None, C)
<function foo at 0x17d0c30>
```

Hope that explains it.
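The descriptor lookup described above is easy to verify directly. The following is a small, self-contained sketch (written for Python 3, where unbound methods no longer exist, but the `__get__` machinery for bound methods and `staticmethod` is unchanged):

```python
class C:
    def foo(self):
        return "instance method"

    @staticmethod
    def bar():
        return "static"


c = C()

# Binding by hand through the descriptor protocol yields the same
# bound method that ordinary attribute access (c.foo) produces.
bound = C.__dict__['foo'].__get__(c, C)
print(bound() == c.foo())  # True

# staticmethod's __get__ hands back the plain function, so the call
# works identically through the class or through an instance.
print(C.bar() == c.bar())  # True
```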
Class method differences in Python: bound, unbound and static
[ "", "python", "static-methods", "" ]
We are getting very slow compile times, which can take upwards of 20+ minutes on dual core 2GHz, 2G RAM machines.

A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash.)

We are looking at merging projects. We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This I can see will become a DLL hell as we try to keep things in synch.

I am interested to know how other teams have dealt with this scaling issue; what do you do when your code base reaches a critical mass that you are wasting half the day watching the status bar deliver compile messages?

**UPDATE** I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers.

**EDIT:** Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped):

* New 3GHz laptop - the power of lost utilization works wonders when whinging to management
* Disable Anti Virus during compile
* 'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI

Still not rip-snorting through a compile, but every bit helps.

Orion did mention in a comment that generics may have a play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate.
I wouldn't avoid using generics where they are supposed to be used, just for compile time performance.

**WORKAROUND** We are testing the practice of building new areas of the application in new solutions, importing in the latest dlls as required, then integrating them into the larger solution when we are happy with them.

We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle like experiences with rapid recompiling during development.
The Chromium.org team listed several options for [accelerating the build](http://dev.chromium.org/developers/how-tos/build-instructions-windows) (at this point about half-way down the page):

> In decreasing order of speedup:
>
> * Install Microsoft hotfix [935225](http://support.microsoft.com/kb/935225).
> * Install Microsoft hotfix [947315](http://support.microsoft.com/kb/947315).
> * Use a true multicore processor (ie. an Intel Core Duo 2; not a Pentium 4 HT).
> * Use 3 parallel builds. In Visual Studio 2005, you will find the option in **Tools > Options... > Projects and Solutions > Build and Run > maximum number of parallel project builds**.
> * Disable your anti-virus software for .ilk, .pdb, .cc, .h files and only check for viruses on **modify**. Disable scanning the directory where your sources reside. Don't do anything stupid.
> * Store and build the Chromium code on a second hard drive. It won't really speed up the build but at least your computer will stay responsive when you do gclient sync or a build.
> * Defragment your hard drive regularly.
> * Disable virtual memory.
# We have nearly 100 projects in one solution and a dev build time of only seconds :)

For local development builds we created a Visual Studio Addin that changes `Project references` to `DLL references` and unloads the unwanted projects (and an option to switch them back of course).

* Build our entire solution *once*
* Unload the projects we are not currently working on and change all project references to DLL references.
* Before check-in, change all references back from DLL to project references.

Our builds now only take seconds when we are working on only a few projects at a time. We can also still debug the additional projects as it links to the debug DLLs. The tool typically takes 10-30 seconds to make a large number of changes, but you don't have to do it that often.

## Update May 2015

The deal I made (in comments below) was that I would release the plugin to Open Source *if* it gets enough interest. 4 years later it has only 44 votes (and Visual Studio now has two subsequent versions), so it is currently a low-priority project.
Very slow compile times on Visual Studio 2005
[ "", "c#", "visual-studio", "compilation", "" ]
I like the XMLReader class for its simplicity and speed. But I like the xml\_parse associated functions as they better allow for error recovery. It would be nice if the XMLReader class would throw exceptions for things like invalid entity refs instead of just issuing a warning.
I'd avoid SimpleXML if you can. Though it looks very tempting by getting to avoid a lot of "ugly" code, it's just what the name suggests: simple. For example, it can't handle this:

```
<p>
  Here is <strong>a very simple</strong> XML document.
</p>
```

Bite the bullet and go to the DOM Functions. The power of it far outweighs the little bit of extra complexity. If you're familiar at all with DOM manipulation in Javascript, you'll feel right at home with this library.
[SimpleXML](https://www.php.net/simplexml) seems to do a good job for me.
What XML parser do you use for PHP?
[ "", "php", "xml", "" ]
How to wait for multiple child processes in Python on Windows, without active wait (polling)? Something like this *almost* works for me:

```
proc1 = subprocess.Popen(['python','mytest.py'])
proc2 = subprocess.Popen(['python','mytest.py'])

proc1.wait()
print "1 finished"
proc2.wait()
print "2 finished"
```

The problem is that when `proc2` finishes before `proc1`, the parent process will still wait for `proc1`. On Unix one would use `waitpid(0)` in a loop to get the child processes' return codes as they finish - how to achieve something like this in Python on Windows?
It might seem overkill, but, here it goes:

```
import Queue, thread, subprocess

results = Queue.Queue()

def process_waiter(popen, description, que):
    try:
        popen.wait()
    finally:
        que.put((description, popen.returncode))

process_count = 0

proc1 = subprocess.Popen(['python', 'mytest.py'])
thread.start_new_thread(process_waiter, (proc1, "1 finished", results))
process_count += 1

proc2 = subprocess.Popen(['python', 'mytest.py'])
thread.start_new_thread(process_waiter, (proc2, "2 finished", results))
process_count += 1

# etc

while process_count > 0:
    description, rc = results.get()
    print "job", description, "ended with rc =", rc
    process_count -= 1
```
Building on zseil's answer, you can do this with a mix of subprocess and win32 API calls. I used straight ctypes, because my Python doesn't happen to have win32api installed. I'm just spawning sleep.exe from MSYS here as an example, but clearly you could spawn any process you like. I use OpenProcess() to get a HANDLE from the process' PID, and then WaitForMultipleObjects to wait for any process to finish.

```
import ctypes, subprocess
from random import randint

SYNCHRONIZE = 0x00100000
INFINITE = -1

numprocs = 5
handles = {}

for i in xrange(numprocs):
    sleeptime = randint(5, 10)
    p = subprocess.Popen([r"c:\msys\1.0\bin\sleep.exe", str(sleeptime)],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=False)
    h = ctypes.windll.kernel32.OpenProcess(SYNCHRONIZE, False, p.pid)
    handles[h] = p.pid
    print "Spawned Process %d" % p.pid

while len(handles) > 0:
    print "Waiting for %d children..." % len(handles)
    arrtype = ctypes.c_long * len(handles)
    handle_array = arrtype(*handles.keys())
    ret = ctypes.windll.kernel32.WaitForMultipleObjects(
        len(handle_array), handle_array, False, INFINITE)
    h = handle_array[ret]
    ctypes.windll.kernel32.CloseHandle(h)
    print "Process %d done" % handles[h]
    del handles[h]

print "All done!"
```
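On modern Python (3.2+), the same "report each child as it finishes" behavior can be had portably without raw win32 calls, by parking one thread per child in `wait()` and collecting results as they complete. This is a minimal cross-platform sketch, not the original answers' approach; the two throwaway child commands stand in for `mytest.py`:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor, as_completed


def run_and_wait(args):
    # Each worker thread simply blocks in wait() on one child process.
    proc = subprocess.Popen(args)
    return proc.wait()


# Two trivial child processes standing in for 'mytest.py'.
commands = [
    [sys.executable, "-c", "import time; time.sleep(0.2)"],
    [sys.executable, "-c", "pass"],
]

finished = []
with ThreadPoolExecutor(max_workers=len(commands)) as pool:
    futures = {pool.submit(run_and_wait, cmd): i for i, cmd in enumerate(commands)}
    for fut in as_completed(futures):
        # as_completed yields futures in finish order, not submission order.
        finished.append((futures[fut], fut.result()))
        print("job %d ended with rc = %d" % finished[-1])
```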
Python on Windows - how to wait for multiple child processes?
[ "", "python", "windows", "asynchronous", "" ]
> **Possible Duplicate:**
> [How do you send email from a Java app using Gmail?](https://stackoverflow.com/questions/46663/how-do-you-send-email-from-a-java-app-using-gmail)

How do I send an SMTP Message from Java?
Here's an example for Gmail smtp:

```
import java.io.*;
import java.net.InetAddress;
import java.util.Properties;
import java.util.Date;
import javax.mail.*;
import javax.mail.internet.*;
import com.sun.mail.smtp.*;

public class Distribution {

    public static void main(String args[]) throws Exception {
        Properties props = System.getProperties();
        props.put("mail.smtps.host", "smtp.gmail.com");
        props.put("mail.smtps.auth", "true");
        Session session = Session.getInstance(props, null);
        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("mail@tovare.com"));
        msg.setRecipients(Message.RecipientType.TO,
            InternetAddress.parse("tov.are.jacobsen@iss.no", false));
        msg.setSubject("Heisann " + System.currentTimeMillis());
        msg.setText("Med vennlig hilsen\nTov Are Jacobsen");
        msg.setHeader("X-Mailer", "Tov Are's program");
        msg.setSentDate(new Date());
        SMTPTransport t = (SMTPTransport) session.getTransport("smtps");
        t.connect("smtp.gmail.com", "admin@tovare.com", "<insert password here>");
        t.sendMessage(msg, msg.getAllRecipients());
        System.out.println("Response: " + t.getLastServerResponse());
        t.close();
    }
}
```

Now, do it this way only if you would like to keep your project dependencies to a minimum; otherwise I can warmly recommend using classes from Apache <http://commons.apache.org/email/>

Regards
Tov Are Jacobsen
Another way is to use Aspirin (<https://github.com/masukomi/aspirin>) like this:

```
MailQue.queMail(MimeMessage message)
```

..after having constructed your MimeMessage as above.

Aspirin **is** an SMTP 'server', so you don't have to configure it. But note that sending email to a broad set of recipients isn't as simple as it appears, because of the many different spam filtering rules receiving mail servers and client applications apply.
How do I send an SMTP Message from Java?
[ "", "java", "smtp", "" ]
I'm really confused by the various configuration options for .Net configuration of dlls, ASP.net websites etc. in .Net v2 - especially when considering the impact of a config file at the UI / end-user end of the chain.

So, for example, some of the applications I work with use settings which we access with:

```
string blah = AppLib.Properties.Settings.Default.TemplatePath;
```

Now, this option seems cool because the members are strongly typed, and I won't be able to type in a property name that doesn't exist in the Visual Studio 2005 IDE. We end up with lines like this in the App.Config of a command-line executable project:

```
<connectionStrings>
    <add name="AppConnectionString" connectionString="XXXX" />
    <add name="AppLib.Properties.Settings.AppConnectionString" connectionString="XXXX" />
</connectionStrings>
```

(If we don't have the second setting, someone releasing a debug dll to the live box could have built with the debug connection string embedded in it - eek)

We also have settings accessed like this:

```
string blah = System.Configuration.ConfigurationManager.AppSettings["TemplatePath_PDF"];
```

Now, these seem cool because we can access the setting from the dll code, or the exe / aspx code, and all we need in the Web or App.config is:

```
<appSettings>
    <add key="TemplatePath_PDF" value="xxx"/>
</appSettings>
```

However, the value of course may not be set in the config files, or the string name may be mistyped, and so we have a different set of problems.

So... if my understanding is correct, the former method gives strong typing but bad sharing of values between the dll and other projects. The latter provides better sharing, but weaker typing.

I feel like I must be missing something. For the moment, I'm not even concerned with the application being able to write back values to the configuration files, encryption or anything like that. Also, I had decided that the best way to store any non-connection strings was in the DB...
and then the very next thing that I have to do is store phone numbers to text people in case of DB connection issues, so they must be stored outside the DB!
Nij, our difference in thinking comes from our different perspectives. I'm thinking about developing enterprise apps that predominantly use WinForms clients. In this instance the business logic is contained on an application server. Each client would need to know the phone number to dial, but placing it in the App.config of each client poses a problem if that phone number changes. In that case it seems obvious to store application configuration information (or application-wide settings) in a database and have each client read the settings from there.

The other, .NET way (I make the distinction because we have, in the pre-.NET days, stored application settings in DB tables) is to store application settings in the app.config file and access them by way of the generated Settings class.

I digress. Your situation sounds different. If all the different apps are on the same server, you could place the settings in a web.config at a higher level. However, if they are not, you could also have a separate "configuration service" that all three applications talk to to get their shared settings. At least in this solution you're not replicating the code in three places, which would raise the potential of maintenance problems when adding settings. Sounds a bit over-engineered, though.

My personal preference is to use strongly typed settings. I actually generate my own strongly typed settings class based on what is in my settings table in the database. That way I can have the best of both worlds: IntelliSense for my settings, and settings stored in the db (note: that's in the case where there's no app server).

I'm interested in learning other people's strategies for this too :)
If you use the settings tab in VS 2005+, you can add strongly typed settings and get IntelliSense, such as in your first example.

```
string phoneNum = Properties.Settings.Default.EmergencyPhoneNumber;
```

This is physically stored in App.Config. You could still use the config file's appSettings element, or even roll your own ConfigurationElementCollection, ConfigurationElement, and ConfigurationSection subclasses.

As to where to store your settings, database or config file, in the case of non-connection strings: it depends on your application architecture. If you've got an application server that is shared by all the clients, use the aforementioned method, in App.Config on the app server. Otherwise, you may have to use a database. Placing it in the App.Config on each client will cause versioning/deployment headaches.
Understanding .Net Configuration Options
[ "", "c#", ".net", "configuration", "" ]
How do I find out what directory my console app is running in with C#?
To get the directory where the .exe file is:

```
AppDomain.CurrentDomain.BaseDirectory
```

To get the current directory:

```
Environment.CurrentDirectory
```
Depending on the rights granted to your application, whether [shadow copying](http://msdn.microsoft.com/en-us/library/ms404279.aspx) is in effect or not, and other invocation and deployment options, different methods may work or yield different results, so you will have to choose your weapon wisely.

Having said that, all of the following will yield the same result for a fully-trusted console application that is executed locally at the machine where it resides:

```
Console.WriteLine( Assembly.GetEntryAssembly().Location );
Console.WriteLine( new Uri(Assembly.GetEntryAssembly().CodeBase).LocalPath );
Console.WriteLine( Environment.GetCommandLineArgs()[0] );
Console.WriteLine( Process.GetCurrentProcess().MainModule.FileName );
```

You will need to consult the documentation of the above members to see the exact permissions needed.
How do I find out what directory my console app is running in?
[ "", "c#", ".net", "console-application", "" ]
I need a list of integers from 1 to x where x is set by the user. I could build it with a for loop, e.g. assuming x is an integer set previously:

```
List<int> iList = new List<int>();
for (int i = 1; i <= x; i++)
{
    iList.Add(i);
}
```

This seems dumb; surely there's a more elegant way to do this, something like the [PHP range method](https://www.php.net/manual/en/function.range.php).
If you're using .Net 3.5, [Enumerable.Range](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.range.aspx) is what you need.

> Generates a sequence of integral numbers within a specified range.
LINQ to the rescue:

```
// Adding values to an existing list
var list = new List<int>();
list.AddRange(Enumerable.Range(1, x));

// Creating a new list
var list = Enumerable.Range(1, x).ToList();
```

See [Generation Operators](http://msdn.microsoft.com/en-us/vcsharp/aa336752.aspx#range) on [LINQ 101](http://msdn.microsoft.com/en-us/vcsharp/aa336746.aspx)
Populating a list of integers in .NET
[ "", "c#", ".net", "list", "integer", "" ]
I'm writing a WCF service for the first time. The service and all of its clients (at least for now) are written in C#. The service has to do a lot of input validation on the data it gets passed, so I need to have some way to indicate invalid data back to the client. I've been reading a lot about faults and exceptions, wrapping exceptions in faults, and a lot of conflicting articles that are just confusing me further.

What is the proper way to handle this case? Should I avoid exceptions altogether and package a Results return message? Should I create a special Fault, or a special Exception, or just throw ArgumentExceptions like I would for a non-WCF validation function?

The code I have right now (influenced by [MSDN](http://msdn.microsoft.com/en-us/library/ms733841.aspx)) is:

```
[DataContract]
public class ValidationFault
{
    [DataMember]
    public Dictionary<string, string> Errors { get; private set; }

    [DataMember]
    public bool Fatal { get; private set; }

    [DataMember]
    public Guid SeriesIdentifier { get; private set; }

    public ValidationFault(Guid id, string argument, string error, bool fatal)
    {
        SeriesIdentifier = id;
        Errors = new Dictionary<string, string> {{argument, error}};
        Fatal = fatal;
    }

    public void AddError(string argument, string error, bool fatal)
    {
        Errors.Add(argument, error);
        Fatal |= fatal;
    }
}
```

And on the method there's `[FaultContract(typeof(ValidationFault))]`. So is this the "right" way to approach this?
If you are doing validation on the client and should have valid values once they are passed into the method (the web service call), then I would throw an exception. It could be an exception indicating that a parameter is invalid, with the name of the parameter. (See: ArgumentException)

But you may not want to rely on the client to properly validate the data, and that leaves you with the assumption that data could be invalid coming into the web service. In that case it is not truly an exceptional case and should not be an exception. In that case you could return an enum, or a Result object that has a Status property set to an enum (OK, Invalid, Incomplete) and a Message property set with specifics, like the name of the parameter.

I would ensure that these sorts of errors are found and fixed during development. Your QA process should carefully test valid and invalid uses of the client, and you do not want to relay these technical messages back to the client. What you want to do instead is update your validation system to prevent invalid data from getting to the service call.

My assumption for any WCF service is that there will be more than one UI. One could be a web UI now, but later I may add another using WinForms, WinCE or even a native iPhone/Android mobile application that does not conform to what you expect from .NET clients.
Throwing an exception is not useful from a WCF service.

Why not? Because it comes back as a bare fault and you need to:

a) Set the fault to include exceptions
b) Parse the fault to get the text of the exception and see what happened.

So yes, you need a fault rather than an exception. I would, in your case, create a custom fault which contains a list of the fields that failed the validation as part of the fault contract.

Note that WCF does fun things with dictionaries, which aren't ISerializable; it has special handling, so check the message coming back looks good over the wire; if not, it's back to arrays for you.
WCF faults and exceptions
[ "", "c#", "wcf", "validation", "exception", "" ]
You might have a set of properties that is used on the developer machine, which varies from developer to developer, another set for a staging environment, and yet another for the production environment. In a Spring application you may also have beans that you want to load in a local environment but not in a production environment, and vice versa. How do you handle this? Do you use separate files, ant/maven resource filtering or other approaches?
I just put the various properties in JNDI. This way each of the servers can be configured and I can have ONE war file. If the list of properties is large, then I'll host the properties (or XML) files on another server. I'll use JNDI to specify the URL of the file to use. If you are creating different app files (war/ear) for each environment, then you aren't deploying the same war/ear that you are testing. In one of my apps, we use several REST services. I just put the root url in JNDI. Then in each environment, the server can be configured to communicate with the proper REST service for that environment.
I just use different Spring XML configuration files for each machine, and make sure that all the bits of configuration data that vary between machines are referenced by beans that load from those Spring configuration files.

For example, I have a webapp that connects to a Java RMI interface of another app. My app gets the address of this other app's RMI interface via a bean that's configured in the Spring XML config file. Both my app and the other app have dev, test, and production instances, so I have three configuration files for my app -- one for the production instance, one for the test instance, and one for the dev instance.

Then, the only thing that I need to keep straight is which configuration file gets deployed to which machine. So far, I haven't had any problems with the strategy of creating Ant tasks that handle copying the correct configuration file into place before generating my WAR file; thus, in the above example, I have three Ant tasks: one that generates the production WAR, one that generates the dev WAR, and one that generates the test WAR. All three tasks handle copying the right config file into the right place, and then call the same next step, which is compiling the app and creating the WAR.

Hope this makes some sense...
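One middle ground worth considering: keep a single Spring XML file for every environment and externalize only the values that differ into a properties file resolved by a property placeholder. A minimal sketch -- the file names, property keys, and the `com.example.RmiClient` class are illustrative, not from the original apps:

```xml
<!-- applicationContext.xml: identical in dev, test, and production -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <!-- a different env.properties is put on the classpath per environment -->
    <property name="location" value="classpath:env.properties"/>
</bean>

<bean id="remoteService" class="com.example.RmiClient">
    <!-- resolved from env.properties at startup -->
    <property name="serviceUrl" value="${remote.service.url}"/>
</bean>
```

Each environment then carries a one-line `env.properties` (for example `remote.service.url=rmi://prod-host:1099/OtherApp`), and the Ant/Maven step only has to pick which properties file to ship rather than which whole XML file.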
How do you maintain java webapps in different staging environments?
[ "", "java", "spring", "configuration", "web-applications", "" ]
I'm trying to write a function that formats every (string) member/variable in an object, for example with a callback function. The variable names are unknown to me, so it must work with objects of all classes. How can I achieve something similar to `array_map` or `array_walk` with objects?
Use [get\_object\_vars()](https://www.php.net/manual/en/function.get-object-vars.php) to get an associative array of the members, and then use the array functions you mentioned. By the way, you can also do a `foreach` over an object just like you would over an array, which is sometimes useful as well.
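To get something `array_map`-shaped out of that, a small helper along these lines may do. This is a sketch -- the name `object_map` and the string-only filter are just this example's choices, and note that from outside the class, `get_object_vars()` only sees public members:

```php
<?php
// Apply $callback to every string member of $obj (in place), return $obj.
function object_map(callable $callback, $obj)
{
    foreach (get_object_vars($obj) as $name => $value) {
        if (is_string($value)) {
            $obj->$name = $callback($value);
        }
    }
    return $obj;
}

$o = new stdClass();
$o->title = '  hello  ';
$o->count = 3;

object_map('trim', $o);
// $o->title is now 'hello'; $o->count is untouched
```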
You can use `get_object_vars()`, but if you need more control, try using [reflection](http://web.archive.org/web/20090228224836/http://www.php.net:80/manual/en/language.oop5.reflection.php). It's slower than `get_object_vars()` (or `get_class_methods()` for that matter), but it's much more powerful.
PHP: array_map on object?
[ "", "php", "oop", "" ]
I'm trying to run a LINQ to SQL query that returns a result in a grid view in a search engine style listing. In the simplified example below, is it possible to populate the collection with a comma-separated list of any children that the parent has (NAMESOFCHILDREN) in a single query? ``` var family = from p in db.Parents where p.ParentId == Convert.ToInt32(Request.QueryString["parentId"]) join pcl in db.ParentChildLookup on p.ParentId equals pcl.ParentId join c in db.Children on pcl.ChildId equals c.ChildId select new { Family = "Name: " + p.ParentName + "<br />" + "Children: " + NAMESOFCHILDREN? + "<br />" }; ``` Thanks in advance.
Your joins are going to screw up your cardinality! You don't have a list of Parents! Here's some untested free-hand code. Adding the relationships in the Linq designer gives you relationship properties. String.Join will put the list together. I've added two optional method calls. *Where ... Any* will filter the parents to only those parents that have children. I'm unsure of string.Join's behavior on an empty array. *ToList* will yank Parents into memory; the children will be accessed by further database calls. This may be necessary if you get a runtime *string.Join is not supported by SQL translator* exception. This exception would mean that LINQ tried to translate the method call into something that SQL Server can understand - and failed.

```
int parentID = Convert.ToInt32(Request.QueryString["parentId"]);

List<string> result = db.Parents
    .Where(p => p.ParentId == parentID)
    //.Where(p => p.ParentChildLookup.Children.Any())
    //.ToList()
    .Select(p =>
        "Name: " + p.ParentName + "<br />" +
        "Children: " + String.Join(", ",
            p.ParentChildLookup.Children.Select(c => c.Name).ToArray()) + "<br />"
    ).ToList();
```

Also note: generally you do not want to mix data and markup until the data is properly escaped for markup.
You could try something like the following:

```
var family = from p in db.Parents
             where p.ParentId == Convert.ToInt32(Request.QueryString["parentId"])
             join pcl in db.ParentChildLookup on p.ParentId equals pcl.ParentId
             select new
             {
                 Family = "Name: " + p.ParentName + "<br />" +
                          string.Join(",", (from c in db.Children
                                            where c.ChildId == pcl.ChildId
                                            select c.ChildId.ToString()).ToArray())
             };
```
Linq to SQL Grouping Child Relationships
[ "", "c#", ".net", "linq", "linq-to-sql", "" ]
Does Java have a built-in way to escape arbitrary text so that it can be included in a regular expression? For example, if my users enter `"$5"`, I'd like to match that exactly rather than a `"5"` after the end of input.
Since [Java 1.5, yes](http://download.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html#quote(java.lang.String)): ``` Pattern.quote("$5"); ```
The difference between [`Pattern.quote`](http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#quote-java.lang.String-) and [`Matcher.quoteReplacement`](http://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#quoteReplacement-java.lang.String-) was not clear to me until I saw the following example:

```
s.replaceFirst(Pattern.quote("text to replace"),
               Matcher.quoteReplacement("replacement text"));
```
How to escape text for regular expression in Java?
[ "", "java", "regex", "escaping", "" ]
I am writing a little application to download files over http (as, for example, described [here](https://stackoverflow.com/questions/22676/how-do-i-download-a-file-over-http-using-python#22776)). I also want to include a little download progress indicator showing the percentage of the download progress. Here is what I came up with:

```
import sys
import urllib

def dlProgress(count, blockSize, totalSize):
    percent = int(count*blockSize*100/totalSize)
    sys.stdout.write("%2d%%" % percent)
    sys.stdout.write("\b\b\b")
    sys.stdout.flush()

sys.stdout.write(rem_file + "...")
urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
```

Output: MyFileName... 9%

Any other ideas or recommendations to do this? One thing that's somewhat annoying is the blinking cursor in the terminal on the first digit of the percentage. Is there a way to prevent this? Is there a way to hide the cursor?

**EDIT:** Here's a better alternative using a global variable for the filename in dlProgress and the '\r' code:

```
import sys
import urllib

def dlProgress(count, blockSize, totalSize):
    percent = int(count*blockSize*100/totalSize)
    sys.stdout.write("\r" + rem_file + "...%d%%" % percent)
    sys.stdout.flush()

# rem_file is a global variable used inside dlProgress
urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
```

Output: MyFileName...9%

And the cursor shows up at the END of the line. Much better.
There's a text progress bar library for Python at <http://pypi.python.org/pypi/progressbar/2.2> that you might find useful:

> This library provides a text mode progressbar. This is typically used to display the progress of a long running operation, providing a visual clue that processing is underway.
>
> The ProgressBar class manages the progress, and the format of the line is given by a number of widgets. A widget is an object that may display differently depending on the state of the progress. There are three types of widget: a string, which always shows itself; a ProgressBarWidget, which may return a different value every time its update method is called; and a ProgressBarWidgetHFill, which is like ProgressBarWidget, except it expands to fill the remaining width of the line.
>
> The progressbar module is very easy to use, yet very powerful. It also automatically supports features like auto-resizing when available.
You might also try: ``` sys.stdout.write("\r%2d%%" % percent) sys.stdout.flush() ``` Using a single carriage return at the beginning of your string rather than several backspaces. Your cursor will still blink, but it'll blink after the percent sign rather than under the first digit, and with one control character instead of three you may get less flicker.
How to write a download progress indicator in Python?
[ "", "python", "http", "" ]
Suppose I have a collection (be it an array, generic List, or whatever is the **fastest** solution to this problem) of a certain class, let's call it `ClassFoo`:

```
class ClassFoo
{
    public string word;
    public float score;
    //... etc ...
}
```

Assume there are going to be around 50,000 items in the collection, all in memory. Now I want to obtain, as fast as possible, all the instances in the collection that satisfy a condition on the `word` member, for example like this:

```
List<ClassFoo> result = new List<ClassFoo>();
foreach (ClassFoo cf in collection)
{
    if (cf.word.StartsWith(query) || cf.word.EndsWith(query))
        result.Add(cf);
}
```

How do I get the results as fast as possible? Should I consider some advanced indexing techniques and datastructures?

The application domain for this problem is an autocompleter, which gets a query and gives a collection of suggestions as a result. Assume that the condition doesn't get any more complex than this. Assume also that there's going to be a lot of searches.
With the constraint that the condition clause can be "anything", you're limited to scanning the entire list and applying the condition. If there are limitations on the condition clause, then you can look at organizing the data to more efficiently handle the queries. For example, the code sample with the "byFirstLetter" dictionary doesn't help at all with an "endsWith" query. So, it really comes down to what queries you want to run against that data.

In databases, this problem is the burden of the "query optimizer". In a typical database, if you have a database with no indexes, obviously every query is going to be a table scan. As you add indexes to the table, the optimizer can use that data to make more sophisticated query plans to better get to the data. That's essentially the problem you're describing.

Once you have a more concrete subset of the types of queries, you can make a better decision as to what structure is best. Also, you need to consider the amount of data. If you have a list of 10 elements, each less than 100 bytes, a scan of everything may well be the fastest thing you can do, since you have such a small amount of data. Obviously that doesn't scale to 1M elements, but even clever access techniques carry a cost in setup, maintenance (like index maintenance), and memory.

**EDIT**, based on the comment

If it's an autocompleter and the data is static, then sort it and use a binary search. You're really not going to get faster than that.

If the data is dynamic, then store it in a balanced tree and search that. That's effectively a binary search, and it lets you keep adding data randomly.

Anything else is some specialization of these concepts.
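For the "data is static: sort it and binary search" case, the StartsWith half of the query reduces to two binary searches over the sorted word list. A language-agnostic sketch (shown here in Python for brevity; the same idea maps onto `List<T>.BinarySearch` in C#):

```python
import bisect

def prefix_matches(sorted_words, prefix):
    """Return all words starting with `prefix` from an already-sorted list.

    Two binary searches find the half-open range of matches in O(log n),
    plus O(k) to slice out the k results.
    """
    lo = bisect.bisect_left(sorted_words, prefix)
    # "\uffff" sorts after any character that can follow the prefix,
    # so bisect_right finds the end of the block of matching words.
    hi = bisect.bisect_right(sorted_words, prefix + "\uffff")
    return sorted_words[lo:hi]

words = sorted(["foo", "foobar", "food", "bar", "fox"])
print(prefix_matches(words, "foo"))  # ['foo', 'foobar', 'food']
```

The EndsWith half of the original query can reuse the exact same structure over a second list of reversed words.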
```
var answers = myList.Where(item => item.word.StartsWith(query) || item.word.EndsWith(query));
```

That's the easiest in my opinion, and it should execute rather quickly.
Fastest way to find objects from a collection matched by condition on string member
[ "", "c#", "arrays", "string", "collections", "performance", "" ]
I'm using GCC to generate a dependency file, but my build rules put the output into a subdirectory. Is there a way to tell GCC to put my subdirectory prefix in the dependency file it generates for me? ``` gcc $(INCLUDES) -E -MM $(CFLAGS) $(SRC) >>$(DEP) ```
The answer is in the [GCC manual](http://gcc.gnu.org/onlinedocs/gcc-4.3.2/cpp/Invocation.html): use the `-MT` flag. > `-MT target` > > Change the target of the rule emitted by dependency generation. By default CPP takes the name of the main input file, deletes any directory components and any file suffix such as `.c`, and appends the platform's usual object suffix. The result is the target. > > An `-MT` option will set the target to be exactly the string you specify. If you want multiple targets, you can specify them as a single argument to `-MT`, or use multiple `-MT` options. > > For example, `-MT '$(objpfx)foo.o'` might give > > ``` > $(objpfx)foo.o: foo.c > ```
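Applied to the command from the question, `-MT` combines naturally with `-MM`/`-MF` inside a pattern rule, so the generated rule names the object in the build subdirectory. A sketch, assuming output goes under `obj/` (the directory name is illustrative, and the recipe line must start with a tab):

```make
# Generate obj/foo.d whose rule reads "obj/foo.o: foo.c ..." instead of "foo.o: foo.c ..."
obj/%.d: %.c
	$(CC) $(INCLUDES) $(CFLAGS) -MM -MT 'obj/$*.o' -MF $@ $<
```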
I'm assuming you're using GNU Make and GCC. First add a variable to hold your list of dependency files. Assuming you already have one that lists all your sources:

```
SRCS = \
        main.c \
        foo.c \
        stuff/bar.c

DEPS = $(SRCS:.c=.d)
```

Then include the generated dependencies in the makefile:

```
include $(DEPS)
```

Then add this pattern rule:

```
# automatically generate dependency rules

%.d : %.c
        $(CC) $(CCFLAGS) -MF"$@" -MG -MM -MP -MT"$@" -MT"$(<:.c=.o)" "$<"

# -MF  write the generated dependency rule to a file
# -MG  assume missing headers will be generated and don't stop with an error
# -MM  generate dependency rule for prerequisite, skipping system headers
# -MP  add phony target for each header to prevent errors when header is missing
# -MT  add a target to the generated dependency
```

"$@" is the target (the thing on the left side of the : ), "$<" is the prerequisite (the thing on the right side of the : ). The expression "$(<:.c=.o)" replaces the .c extension with .o.

The trick here is to generate the rule with two targets by adding -MT twice; this makes both the .o file and the .d file depend on the source file and its headers; that way the dependency file gets automatically regenerated whenever any of the corresponding .c or .h files are changed.

The -MG and -MP options keep make from freaking out if a header file is missing.
GCC dependency generation for a different output directory
[ "", "c++", "gcc", "makefile", "dependencies", "" ]
What does the `explicit` keyword mean in C++?
The compiler is allowed to make one implicit conversion to resolve the parameters to a function. This means that the compiler can use constructors callable with a **single parameter** to convert from one type to another in order to get the right type for a parameter. Here's an example with *converting constructors* that shows how it works: ``` struct Foo { // Single parameter constructor, can be used as an implicit conversion. // Such a constructor is called "converting constructor". Foo(int x) {} }; struct Faz { // Also a converting constructor. Faz(Foo foo) {} }; // The parameter is of type Foo, not of type int, so it looks like // we have to pass a Foo. void bar(Foo foo); int main() { // However, the converting constructor allows us to pass an int. bar(42); // Also allowed thanks to the converting constructor. Foo foo = 42; // Error! This would require two conversions (int -> Foo -> Faz). Faz faz = 42; } ``` Prefixing the `explicit` keyword to the constructor prevents the compiler from using that constructor for implicit conversions. Adding it to the above class will create a compiler error at the function call `bar(42)`. It is now necessary to call for conversion explicitly with `bar(Foo(42))` The reason you might want to do this is to avoid accidental construction that can hide bugs. Contrived example: * You have a `MyString` class with a constructor that constructs a string of the given size. You have a function `print(const MyString&)` (as well as an overload `print (char *string)`), and you call `print(3)` (when you *actually* intended to call `print("3")`). You expect it to print "3", but it prints an empty string of length 3 instead.
Suppose you have a class `String`:

```
class String {
public:
    String(int n);         // allocate n bytes to the String object
    String(const char *p); // initializes object with char *p
};
```

Now, if you try:

```
String mystring = 'x';
```

the character `'x'` will be implicitly converted to `int` and then the `String(int)` constructor will be called. But this is not what the user might have intended. So, to prevent such conditions, we shall define the constructor as `explicit`:

```
class String {
public:
    explicit String(int n);  // allocate n bytes
    String(const char *p);   // initializes object with string p
};
```
What does the explicit keyword mean?
[ "", "c++", "constructor", "explicit", "c++-faq", "explicit-constructor", "" ]
I've got a webapp whose original code base was developed with a hand crafted hibernate mapping file. Since then, I've become fairly proficient at 'coding' my hbm.xml file. But all the cool kids are using annotations these days. So, the question is: **Is it worth the effort** to refactor my code to use hibernate annotations? Will I gain anything, other than being hip and modern? Will I lose any of the control I have in my existing hand coded mapping file? A sub-question is, **how much effort will it be?** I like my databases lean and mean. The mapping covers only a dozen domain objects, including two sets, some subclassing, and about 8 tables. Thanks, dear SOpedians, in advance for your informed opinions.
"If it ain't broke - don't fix it!" I'm an old fashioned POJO/POCO kind of guy anyway, but why change to annotations just to be cool? To the best of my knowledge you can do most of the stuff as annotations, but the more complex mappings are sometimes expressed more clearly as XML.
One thing you'll gain from using annotations instead of an external mapping file is that your mapping information will live on the classes and fields themselves, which improves maintainability. You add a field, you immediately add the annotation. You remove one, you also remove the annotation. You rename a class or a field, and the annotation is right there, so you can rename the table or column as well. You make changes in class inheritance, and they're taken into account. You don't have to go and edit an external file some time later. This makes the whole thing more efficient and less error prone.

On the other hand, you'll lose the global view your mapping file used to give you.
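For a feel of the translation effort, here is roughly what one hbm.xml class entry becomes as an annotated entity -- a sketch with illustrative names, not your actual mapping:

```java
// Roughly equivalent to <class name="Person" table="PERSON"> ... </class> in hbm.xml
@Entity
@Table(name = "PERSON")
public class Person {

    @Id
    @GeneratedValue
    @Column(name = "PERSON_ID")
    private Long id;

    @Column(name = "NAME", nullable = false)
    private String name;

    // a <set> mapping becomes:
    @OneToMany(mappedBy = "owner")
    private Set<Pet> pets = new HashSet<Pet>();
}
```

For a dozen domain objects with a couple of sets and some subclassing, this is mostly a mechanical translation (subclassing maps to `@Inheritance` on the root class).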
Is it worth the effort to move from a hand crafted hibernate mapping file to annotaions?
[ "", "java", "hibernate", "annotations", "" ]
How would you explain JavaScript closures to someone with a knowledge of the concepts they consist of (for example functions, variables and the like), but does not understand closures themselves? I have seen [the Scheme example](http://en.wikipedia.org/wiki/Scheme_%28programming_language%29) given on Wikipedia, but unfortunately it did not help.
A closure is a pairing of: 1. A function and 2. A reference to that function's outer scope (lexical environment) A lexical environment is part of every execution context (stack frame) and is a map between identifiers (i.e. local variable names) and values. Every function in JavaScript maintains a reference to its outer lexical environment. This reference is used to configure the execution context created when a function is invoked. This reference enables code inside the function to "see" variables declared outside the function, regardless of when and where the function is called. If a function was called by a function, which in turn was called by another function, then a chain of references to outer lexical environments is created. This chain is called the scope chain. In the following code, `inner` forms a closure with the lexical environment of the execution context created when `foo` is invoked, *closing over* variable `secret`: ``` function foo() { const secret = Math.trunc(Math.random() * 100) return function inner() { console.log(`The secret number is ${secret}.`) } } const f = foo() // `secret` is not directly accessible from outside `foo` f() // The only way to retrieve `secret` is to invoke `f` ``` In other words: in JavaScript, functions carry a reference to a private "box of state", to which only they (and any other functions declared within the same lexical environment) have access. This box of the state is invisible to the caller of the function, delivering an excellent mechanism for data-hiding and encapsulation. And remember: functions in JavaScript can be passed around like variables (first-class functions), meaning these pairings of functionality and state can be passed around your program, similar to how you might pass an instance of a class around in C++. If JavaScript did not have closures, then more states would have to be passed between functions *explicitly*, making parameter lists longer and code noisier. 
So, if you want a function to always have access to a private piece of state, you can use a closure. ...and frequently we *do* want to associate the state with a function. For example, in Java or C++, when you add a private instance variable and a method to a class, you are associating the state with functionality. In C and most other common languages, after a function returns, all the local variables are no longer accessible because the stack-frame is destroyed. In JavaScript, if you declare a function within another function, then the local variables of the outer function can remain accessible after returning from it. In this way, in the code above, `secret` remains available to the function object `inner`, *after* it has been returned from `foo`. ## Uses of Closures Closures are useful whenever you need a private state associated with a function. This is a very common scenario - and remember: JavaScript did not have a class syntax until 2015, and it still does not have a private field syntax. Closures meet this need. ### Private Instance Variables In the following code, the function `toString` closes over the details of the car. ``` function Car(manufacturer, model, year, color) { return { toString() { return `${manufacturer} ${model} (${year}, ${color})` } } } const car = new Car('Aston Martin', 'V8 Vantage', '2012', 'Quantum Silver') console.log(car.toString()) ``` ### Functional Programming In the following code, the function `inner` closes over both `fn` and `args`. ``` function curry(fn) { const args = [] return function inner(arg) { if(args.length === fn.length) return fn(...args) args.push(arg) return inner } } function add(a, b) { return a + b } const curriedAdd = curry(add) console.log(curriedAdd(2)(3)()) // 5 ``` ### Event-Oriented Programming In the following code, function `onClick` closes over variable `BACKGROUND_COLOR`. 
``` const $ = document.querySelector.bind(document) const BACKGROUND_COLOR = 'rgba(200, 200, 242, 1)' function onClick() { $('body').style.background = BACKGROUND_COLOR } $('button').addEventListener('click', onClick) ``` ``` <button>Set background color</button> ``` ### Modularization In the following example, all the implementation details are hidden inside an immediately executed function expression. The functions `tick` and `toString` close over the private state and functions they need to complete their work. Closures have enabled us to modularize and encapsulate our code. ``` let namespace = {}; (function foo(n) { let numbers = [] function format(n) { return Math.trunc(n) } function tick() { numbers.push(Math.random() * 100) } function toString() { return numbers.map(format) } n.counter = { tick, toString } }(namespace)) const counter = namespace.counter counter.tick() counter.tick() console.log(counter.toString()) ``` ## Examples ### Example 1 This example shows that the local variables are not copied in the closure: the closure maintains a reference to the original variables *themselves*. It is as though the stack-frame stays alive in memory even after the outer function exits. ``` function foo() { let x = 42 let inner = () => console.log(x) x = x + 1 return inner } foo()() // logs 43 ``` ### Example 2 In the following code, three methods `log`, `increment`, and `update` all close over the same lexical environment. And every time `createObject` is called, a new execution context (stack frame) is created and a completely new variable `x`, and a new set of functions (`log` etc.) are created, that close over this new variable. 
``` function createObject() { let x = 42; return { log() { console.log(x) }, increment() { x++ }, update(value) { x = value } } } const o = createObject() o.increment() o.log() // 43 o.update(5) o.log() // 5 const p = createObject() p.log() // 42 ``` ### Example 3 If you are using variables declared using `var`, be careful you understand which variable you are closing over. Variables declared using `var` are hoisted. This is much less of a problem in modern JavaScript due to the introduction of `let` and `const`. In the following code, each time around the loop, a new function `inner` is created, which closes over `i`. But because `var i` is hoisted outside the loop, all of these inner functions close over the same variable, meaning that the final value of `i` (3) is printed, three times. ``` function foo() { var result = [] for (var i = 0; i < 3; i++) { result.push(function inner() { console.log(i) } ) } return result } const result = foo() // The following will print `3`, three times... for (var i = 0; i < 3; i++) { result[i]() } ``` ## Final points: * Whenever a function is declared in JavaScript closure is created. * Returning a `function` from inside another function is the classic example of closure, because the state inside the outer function is implicitly available to the returned inner function, even after the outer function has completed execution. * Whenever you use `eval()` inside a function, a closure is used. The text you `eval` can reference local variables of the function, and in the non-strict mode, you can even create new local variables by using `eval('var foo = …')`. * When you use `new Function(…)` (the [Function constructor](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function)) inside a function, it does not close over its lexical environment: it closes over the global context instead. The new function cannot reference the local variables of the outer function. 
* A closure in JavaScript is like keeping a reference (**NOT** a copy) to the scope at the point of function declaration, which in turn keeps a reference to its outer scope, and so on, all the way to the global object at the top of the scope chain. * A closure is created when a function is declared; this closure is used to configure the execution context when the function is invoked. * A new set of local variables is created every time a function is called. ## Links * Douglas Crockford's simulated [private attributes and private methods](http://www.crockford.com/javascript/private.html) for an object, using closures. * A great explanation of how closures can [cause memory leaks in IE](https://www.codeproject.com/Articles/12231/Memory-Leakage-in-Internet-Explorer-revisited) if you are not careful. * MDN documentation on [JavaScript Closures](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures). * The Beginner's Guide to [JavaScript Closures](https://dmitripavlutin.com/javascript-closure/).
Every function in JavaScript maintains a link to its outer lexical environment. A lexical environment is a map of all the names (eg. variables, parameters) within a scope, with their values. So, whenever you see the `function` keyword, code inside that function has access to variables declared outside the function. ``` function foo(x) { var tmp = 3; function bar(y) { console.log(x + y + (++tmp)); // will log 16 } bar(10); } foo(2); ``` This will log `16` because function `bar` closes over the parameter `x` and the variable `tmp`, both of which exist in the lexical environment of outer function `foo`. Function `bar`, together with its link with the lexical environment of function `foo` is a closure. A function doesn't have to *return* in order to create a closure. Simply by virtue of its declaration, every function closes over its enclosing lexical environment, forming a closure. ``` function foo(x) { var tmp = 3; return function (y) { console.log(x + y + (++tmp)); // will also log 16 } } var bar = foo(2); bar(10); // 16 bar(10); // 17 ``` The above function will also log 16, because the code inside `bar` can still refer to argument `x` and variable `tmp`, even though they are no longer directly in scope. However, since `tmp` is still hanging around inside `bar`'s closure, it is available to be incremented. It will be incremented each time you call `bar`. The simplest example of a closure is this: ``` var a = 10; function test() { console.log(a); // will output 10 console.log(b); // will output 6 } var b = 6; test(); ``` When a JavaScript function is invoked, a new execution context `ec` is created. Together with the function arguments and the target object, this execution context also receives a link to the lexical environment of the calling execution context, meaning the variables declared in the outer lexical environment (in the above example, both `a` and `b`) are available from `ec`. 
Every function creates a closure because every function has a link to its outer lexical environment. Note that variables *themselves* are visible from within a closure, *not* copies.
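The "reference, not copy" point can be demonstrated in a few lines: two functions created in the same lexical environment share the very same variable, while each new call creates a fresh one:

```javascript
function makeCounter() {
  let count = 0; // shared by both closures below
  return {
    increment: function () { count++; },
    get: function () { return count; }
  };
}

const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.get()); // 2 -- `get` sees the updates made via `increment`

const other = makeCounter(); // a fresh call creates a fresh `count`
console.log(other.get()); // 0
```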
How do JavaScript closures work?
[ "", "javascript", "function", "variables", "scope", "closures", "" ]
I have to deploy my php/html/css/etc code to multiple servers and i am looking at my options for software that allows easy and secure deployment to multiple servers. Also helps if it could be tied into my SVN. Any suggestions?
[Capistrano](http://capify.org/) is pretty handy for that. There's a few people using it ([1](http://www.simplisticcomplexity.com/2006/8/16/automated-php-deployment-with-capistrano/), [2](http://www.contentwithstyle.co.uk/Blog/178), [3](http://laurentbois.com/2008/08/05/use-capistrano-in-enterprise-for-php-and-ruby-on-rails-applications/)) for deploying PHP code as evidenced by doing a [quick search](http://www.google.com/search?q=capistrano+php+deployment).
Setting up password-less publickey authentication with ssh would allow you to scp your files to any of your servers very quickly (or be automated by a shell script). Here's a simple tutorial: <http://rcsg-gsir.imsb-dsgi.nrc-cnrc.gc.ca/documents/internet/node31.html>
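If you go the publickey + scp/rsync route, the whole multi-server push can be a dozen-line shell script. A sketch -- the server names, paths, and deploy user are placeholders, and the `echo` makes this a dry run; drop it to execute for real:

```shell
#!/bin/sh
# Hypothetical values -- replace with your own
SERVERS="web1.example.com web2.example.com"
SRC="./build/"            # e.g. the output of `svn export`
DEST="/var/www/myapp"

deploy_cmds() {
    for host in $SERVERS; do
        # rsync over ssh; --delete removes files no longer present in SRC
        echo rsync -az --delete "$SRC" "deploy@$host:$DEST"
    done
}

deploy_cmds
```

Pair it with `svn export` into `$SRC` first, so you never ship `.svn` directories to the servers.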
Deploying to multiple servers
[ "", "php", "deployment", "" ]
How can I retrieve the page title of a webpage (title html tag) using Python?
I'll always use [lxml](http://lxml.de/) for such tasks. You could use [beautifulsoup](http://www.crummy.com/software/BeautifulSoup/) as well. ``` import lxml.html t = lxml.html.parse(url) print(t.find(".//title").text) ``` EDIT based on comment: ``` from urllib2 import urlopen from lxml.html import parse url = "https://www.google.com" page = urlopen(url) p = parse(page) print(p.find(".//title").text) ```
Here's a simplified version of [@Vinko Vrsalovic's answer](https://stackoverflow.com/a/51242/4279): ``` import urllib2 from BeautifulSoup import BeautifulSoup soup = BeautifulSoup(urllib2.urlopen("https://www.google.com")) print soup.title.string ``` NOTE: * *soup.title* finds the first *title* element **anywhere** in the html document * *title.string* assumes it has only **one** child node, and that child node is a **string** For [beautifulsoup 4.x](http://www.crummy.com/software/BeautifulSoup/bs4/doc/), use different import: ``` from bs4 import BeautifulSoup ```
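If you'd rather avoid third-party dependencies altogether, the standard library's `html.parser` is enough for this specific job (Python 3 shown; in Python 2 the module was called `HTMLParser`). A minimal sketch:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collect the text inside the first <title> element."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == "title" and self.title is None:
            self._in_title = True

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

def page_title(html):
    parser = TitleParser()
    parser.feed(html)
    return parser.title.strip() if parser.title else None

print(page_title("<html><head><title>Example Page</title></head></html>"))
# Example Page
```

To fetch a live page, feed it the result of `urllib.request.urlopen(url).read().decode()` instead of a literal string.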
How can I retrieve the page title of a webpage using Python?
[ "", "python", "html", "" ]
I'm getting an error message when I try to build my project in eclipse: `The type weblogic.utils.expressions.ExpressionMap cannot be resolved. It is indirectly referenced from required .class files` I've looked online for a solution and cannot find one (except for those sites that make you pay for help). Anyone have any idea of a way to find out how to go about solving this problem? Any help is appreciated, thanks!
How are you adding your WebLogic classes to the classpath in Eclipse? Are you using WTP and a server runtime? If so, is your server runtime associated with your project? If you right-click on your project and choose `Build Path -> Configure Build Path`, then choose the Libraries tab, you should see the WebLogic libraries associated there. If you do not, you can click `Add Library -> Server Runtime`. If the library is not there, then you first need to configure it under `Window -> Preferences -> Server -> Installed Runtimes`.
Add the spring-tx JAR file to your build path and that should resolve it.
Indirectly referenced from required .class file
[ "java", "eclipse", "weblogic" ]
Working on a new back end system for my company, and one of their requests is for a window to become locked down and for the user to be sent to the login screen if they leave it idle for too long. I figure I'd do this with JavaScript by attaching listeners to clicks, mouse moves and key-ups, but I worry about messing with other scripts. Any suggestions?
Firstly, for this to be effective, you have to make sure users are logged out on the server at the end of this idle time. Otherwise, nothing you do on the client side is effective. If you send them to a login page, they can just click the back button. Second, the conventional way to do this is to use a "meta refresh" tag. Adding this to the page: ``` <meta http-equiv="refresh" content="900;url=http://example.com/login"/> ``` will send them to the login page after 15 minutes (900 seconds). This will send them there even if they are doing something on the page. It doesn't detect activity. It just knows how long the page has been up in the browser. This is usually good enough because people don't take 15 minutes to fill in a page (stackoverflow.com is a notable exception, I guess.) If you really need to detect activity on the page, then I think your first instinct is correct. You're going to have to add event handlers to several things. If you are worried about messing with other scripting for validation or other things, you should look at adding event handlers programmatically rather than inline. That is, instead of using ``` <input type="text" onClick="doSomething();"> ``` Access the object model directly with ``` Mozilla way: element.addEventListener('click' ...) Microsoft way: element.attachEvent('onclick' ...) ``` and then make sure you pass along the events after you receive them so existing code still does whatever (validation?) it is supposed to do. <http://www.quirksmode.org/js/introevents.html> has a decent write up on how to do this. -- bmb
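A sketch of the activity-detecting approach: keep the idle-timing logic separate from the DOM wiring so it doesn't interfere with other scripts. The 15-minute limit and the `/login` redirect URL are assumptions for illustration.

```javascript
// Minimal idle tracker: pure timing logic, wired to DOM events separately.
function createIdleTracker(limitMs) {
  var lastActivity = 0;
  return {
    // Call this from every user event (click, keyup, mousemove, ...).
    activity: function (now) { lastActivity = now; },
    // True once limitMs has elapsed with no recorded activity.
    isIdle: function (now) { return now - lastActivity >= limitMs; }
  };
}

// Browser wiring (needs a DOM; addEventListener is the standards path,
// very old IE used attachEvent instead):
// var tracker = createIdleTracker(15 * 60 * 1000);
// ["click", "keyup", "mousemove"].forEach(function (ev) {
//   document.addEventListener(ev, function () { tracker.activity(Date.now()); });
// });
// setInterval(function () {
//   if (tracker.isIdle(Date.now())) window.location = "/login";
// }, 1000);
```

Because these listeners are added with `addEventListener` rather than inline `onclick` attributes, they run alongside any existing handlers instead of replacing them.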
You could just make it log the user out if they don't change pages after a set time. That's what the Angel Learning courseware system seems to do. The other problem you'll face, though, is that some users disable JavaScript.
How can I check to make sure a window is being actively used, and if not alert the end user that they are about to be logged out?
[ "javascript", "session" ]
How do you deal with source control management and automated deployment (configuration management) of database tables. I work in a SQL Server environment and it's pretty easy to script out drop and create files for stored procedures/triggers/functions even jobs. It's also easy to handle scripting out the creation of a new db table. However, if at a later point you want to modify that table, you can't necessarily just drop it and recreate it with the new field for fear of losing data. Is there an automated way to deal with this problem? Do you script out a temp table and backfill after updating the new changed table? (could be rough if there is a lot of data) Any suggestions would be greatly appreciated.
You can automatically create the initial creation script, but ALTER scripts really need to be hand-coded on a case-by-case basis, because in practice you need to do custom stuff in them. In any case, you'll need some way of creating apply and rollback scripts for each change, and have an installer script which runs them (and a rollback which rolls them back of course). Such an installer should probably remember what version the schema is in, and run all the necessary migrations, in the right order. See my article here: <http://marksverbiage.blogspot.com/2008/07/versioning-your-schema.html>
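The "installer that remembers the schema version and runs the necessary migrations in order" part can be sketched in a few lines. This example uses SQLite purely for illustration (the same idea applies to SQL Server); the table and column names are made up.

```python
# Version-tracked migration runner sketch: apply pending scripts in order
# and record each applied version. SQLite is used only for illustration.
import sqlite3

MIGRATIONS = [
    # (version, apply script) -- hand-coded rollback scripts would sit
    # alongside these in practice.
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def current_version(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0   # empty table -> version 0

def migrate(conn):
    version = current_version(conn)
    for v, script in MIGRATIONS:
        if v > version:          # apply only what the database hasn't seen
            conn.execute(script)
            conn.execute("INSERT INTO schema_version VALUES (?)", (v,))
    conn.commit()
```

Because the runner checks the recorded version first, running it twice is a no-op, which is what makes the same installer safe against both a fresh database and a partially upgraded one.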
Tools like Red Gate's SQL Compare are invaluable in making sure you have a complete script. You still may need to manually adjust it to make sure the objects are scripted in the correct order. Make sure to script triggers and constraints, etc., as well as tables. In general you will want to use ALTER commands instead of drop-and-create, especially if the table is at all large. All our tables, functions, and stored procs are required to be under source control as well, so we can return to old versions if need be. Also, our DBAs periodically delete anything they find that is not in source control, which keeps developers from forgetting to do it. Of course, all development scripts being promoted to production should be run on a QA or staging server first to ensure the script will run properly (and with no changes required) before it is run on prod. The timing of running on prod needs to be considered too: you don't want to lock out users during busy periods, and time has shown that loading scripts to production late on a Friday afternoon is usually a bad idea.
How do you deal with configuration management of Database Tables?
[ "sql", "sql-server", "database", "t-sql" ]