Okay... so no-one's tried Astra, or people just avoid Flash questions. After a lot of guesswork it turns out I needed to cast the series to a PieSeries and then work with those member functions, as the ISeries was useless on its own.

    myPieChart.dataTipFunction = function (item:Object, index:int, series:ISeries):String
    {
        var oPieSeries:PieSeries = series as PieSeries;
        return oPieSeries.itemToCategory(item, index) + "\n$"
            + oPieSeries.itemToData(item) + "\n"
            + Number(oPieSeries.itemToPercentage(item)).toFixed(2) + "%";
    };
I think there are too many variables involved to come up with a simple complexity metric unless you make a lot of assumptions. A simple SAX style parser should be linear in terms of document size and flat for memory. Something like XPath would be impossible to describe in terms of just the input document since the complexity of the XPath expression plays a huge role. Likewise for schema validation, a large but simple schema may well be linear, whereas a smaller schema that has a much more complex structure would show worse runtime performance. As with most performance questions the only way to get accurate answers is to measure it and see what happens!
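To make the SAX point concrete, here's a minimal sketch (Python chosen purely for illustration): the handler sees events as they stream past and never builds a tree, so memory stays flat while runtime grows linearly with document size.

```python
import xml.sax


class ElementCounter(xml.sax.ContentHandler):
    """Counts start tags as they stream past; never builds a tree."""

    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        self.count += 1


def count_elements(xml_text):
    handler = ElementCounter()
    xml.sax.parseString(xml_text.encode("utf-8"), handler)
    return handler.count
```

An XPath engine, by contrast, may have to buffer arbitrary amounts of the document depending on the expression, which is why its complexity can't be stated from the input size alone.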
What Is The Best Way To Tell If An Object is Modified?
|c#|
I have an object that is mapped to a cookie as a serialized base-64 string. I only want to write out a new cookie if there are changes made to the object stored in the cookie on the server side. What I want to do is get a hash code when the object is pulled from the cookie/initialized, and compare the original hash code to the hash code that exists just before I send the cookie header off to the client, to ensure I don't have to re-serialize/send the cookie unless changes were made. I was going to override the Object.GetHashCode() method, but I wasn't sure that this is the best way to go about checking if an object is modified. Are there any other ways I can check if an object is modified, or should I override the `GetHashCode()` method?
SLSBs are single-threaded because the transaction context and security principal are associated with a bean instance when it is called. These beans are pooled and, unless the max pool size is reached, calls are processed in separate threads (vendor dependent). If SLSBs had been designed to be thread-safe, every call would have looked like a servlet doGet/doPost, with the request info carrying the transaction context, security context info, etc. So at least the code looks clean (developer dependent).
> we do check the IsDisposed property on the schedule component before using it in the Timer Tick event but it doesn't help.

If I understand that stack trace, it's not your timer that's the problem; it's one in the control itself - it might be them who are not cleaning up properly. Are you explicitly calling Dispose on their control?
Since I was looking for some more in-depth answers, I decided to research the issue myself in more depth. Please let me know if I've misunderstood something.

The general recommendation is to use a separate web server for handling media. Separate means a web server which is not running Django. This server can be, for instance:

- Lighttpd (Lighty)
- Nginx (EngineX)
- Or some other light-weight server

Then, for Django, you can go down different paths. You can either:

- Serve Django via **Apache** and:
  - **mod_python**. This is the stable and recommended/well-documented way. Cons: uses a lot of memory.
  - **mod_wsgi**. From what I understand, mod_wsgi is a newer alternative. It appears to be faster and easier on resources.
  - **mod_fastcgi**. When using FastCGI you are delegating the serving of Django to another process. Since mod_python includes a Python interpreter in every request, it uses a lot of memory. This is a way to bypass that problem. There are also some security concerns. What you do is start your Django FastCGI server in a separate process and then configure Apache via rewrites to call this process when needed.

Or you can:

- Serve Django **without using Apache**, but with another server that supports FastCGI natively. (The documentation mentions that you can do this if you don't have any Apache-specific needs. I guess the reason must be to save memory.)
  - **Lighttpd**. This is the server that runs YouTube. It seems fast and easy to use, however I've seen reports of memory leaks.
  - **nginx**. I've seen benchmarks claiming that this server is even faster than Lighttpd. It's mostly documented in Russian, though.

Another thing: due to limitations in Python, your server should be running in forked mode, not threaded.

So this is my current research, but I want more opinions and experiences.
Windows Vista: Unable to load DLL 'x.dll': Invalid access to memory location. (DllNotFoundException)
|windows-vista|.net|dllnotfoundexception|c#|
|c#|.net|windows-vista|dllnotfoundexception|
I was testing on a customer's box this afternoon which has Windows Vista (he had Home edition, but I am testing on Business Edition with the same results). We make use of a .DLL that gets the Hardware ID of the computer. Its usage is very simple and the sample program I have created works. The DLL is [this one from AzSdk][1]. In fact, this works perfectly under Windows XP. However, for some strange reason, inside our project (way bigger), we get this exception:

    Exception Type: System.DllNotFoundException
    Exception Message: Unable to load DLL 'HardwareID.dll': Invalid access to memory location. (Exception from HRESULT: 0x800703E6)
    Exception Target Site: GetHardwareID

I don't know what can be causing the problem, since I have full control over the folder. The project is a C#.NET Windows Forms application and everything works fine, except the call to the external library. I am declaring it like this (note: it's *not* a COM library and it doesn't need to be registered):

    [DllImport("HardwareID.dll")]
    public static extern String GetHardwareID(bool HDD,
        bool NIC, bool CPU, bool BIOS, string sRegistrationCode);

And then the calling code is quite simple:

    private void button1_Click(object sender, EventArgs e)
    {
        textBox1.Text = GetHardwareID(cb_HDD.Checked,
            cb_NIC.Checked, cb_CPU.Checked, cb_BIOS.Checked,
            "*Registration Code*");
    }

When you create a sample application it works, but inside my project it doesn't. Under XP it works fine. Any ideas about what I should do in Vista to make this work? As I've said, the folder and its subfolders have Full Control for "Everyone".

Thanks in advance,

[1]: http://www.azsdk.com/hardwareid.html
Anyone have experience with Sphinx speech recognition?
|speech-recognition|ivr|
Has anyone used the [Sphinx][1] speech recognition stack to build IVR applications? I am looking for open source alternatives to the expensive and somewhat limiting choices from MSFT and others. I have not been able to find a comprehensive package that ties open source speech/VoIP applications together.

[1]: http://cmusphinx.sourceforge.net/html/cmusphinx.php
|speech-recognition|ivr|sphinx|
Do you have to register a Dialog Box?
|windows-api|
So, I am a total beginner at any kind of Windows-related programming. I have been playing around with the Windows API and came across a couple of examples on how to initialize and create windows and such.

One example creates a regular window (I abbreviated some of the code):

<pre>
int WINAPI WinMain( [...] )
{
    [...]
    // Windows Class setup
    wndClass.cbSize = sizeof( wndClass );
    wndClass.style = CS_HREDRAW | CS_VREDRAW;
    [...]

    // Register class
    RegisterClassEx( &wndClass );

    // Create window
    hWnd = CreateWindow( szAppName, "Win32 App",
        WS_OVERLAPPEDWINDOW, 0, 0, 512, 384,
        NULL, NULL, hInstance, NULL );

    [...]
}
</pre>

The second example creates a dialog box (no abbreviations except the WinMain arguments):

<pre>
int WINAPI WinMain( [...] )
{
    // Create dialog box
    DialogBox(hInstance, MAKEINTRESOURCE(IDD_MAIN_DLG),
        NULL, (DLGPROC)DialogProc);
}
</pre>

The second example does not contain any call to the register function. It just creates the DialogBox with its DialogProc procedure attached. This works fine, but I am wondering if there is a benefit to registering the window class and then creating the dialog box (if this is at all possible).
|c++|windows-api|
Yes, Mockito is a great framework. I use it together with [hamcrest](http://code.google.com/p/hamcrest/) and [Google Guice](http://code.google.com/p/google-guice/) to set up my tests.
SQL Server Fast Forward Cursors
|sql|sql-server|stored-procedures|
It is generally accepted that the use of cursors in stored procedures should be avoided where possible (replaced with set-based logic etc.). If you take the cases where you need to iterate over some data, and can do so in a read-only manner, are fast forward (read-only, forward-only) cursors more or less efficient than, say, while loops? From my investigations it looks as though the cursor option is generally faster and uses fewer reads and less CPU time. I haven't done any extensive testing, but is this what others find? Do cursors of this type (fast forward) carry additional overhead or resources that could be expensive and that I don't know about?

Is all the talk about not using cursors really about avoiding the use of cursors when set-based approaches are available, and about the use of updatable cursors etc.?

Thanks
I have used it and it has worked great for what I have done. However, my measuring of performance wasn't really scientific, because it was all visual. I did notice that IronRuby seemed a little snappier when I compared the two programs on equal tasks. I really think this has more to do with the strong, tight binding .NET has with IIS than with the speed of the framework. But I could totally be wrong, because I didn't really stress my applications to the levels that Twitter might see. From my .NET experience, though, I know it would hold up just as well if not better than current production Ruby applications. By the way, I tested Ruby using FastCGI under IIS.
Do you understand how when you say "num1" you're referring to the same variable each time, and that each time you change num1 you replace the previous value?
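A tiny sketch of that point (written in Python here just for illustration; the original snippet's language isn't shown): each assignment to `num1` replaces the value the name referred to before.

```python
num1 = 10        # num1 refers to 10
num1 = num1 + 5  # uses the current value, then replaces it: num1 is now 15
num1 = 2         # replaced again; the 15 is gone for good
```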
Alternative constructors are the classic example.
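For instance (a Python sketch with a made-up `Point` class): a class method receives the class itself as `cls`, which makes it a natural alternative constructor, and one that keeps working for subclasses.

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def from_string(cls, text):
        """Alternative constructor: build a Point from "x,y" text."""
        x, y = (float(part) for part in text.split(","))
        return cls(x, y)


p = Point.from_string("3,4")
```

Because the factory calls `cls(...)` rather than naming `Point` directly, a subclass calling `from_string` gets an instance of itself back for free.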
You might want to look into the [ADO .Net Entity Framework](http://en.wikipedia.org/wiki/ADO.NET_Entity_Framework) as an alternative to LINQ to SQL, although it does support LINQ as well. I believe LINQ to SQL is designed to be fairly lightweight and simple, whereas the Entity Framework is more heavy duty and probably more suitable in large Enterprise applications.
A unit test should test a single code path through a single method. When the execution of a method passes outside of that method, into another object, and back again, you have a dependency.

When you test that code path with the actual dependency, you are not unit testing; you are integration testing. While that's good and necessary, it isn't unit testing.

If your dependency is buggy, your test may be affected in such a way as to return a false positive. For instance, you may pass the dependency an unexpected null, and the dependency may not throw on null as it is documented to do. Your test does not encounter the null argument exception it should have, and the test passes.

Also, you may find it's hard, if not impossible, to reliably get the dependent object to return exactly what you want during a test. That also includes throwing expected exceptions within tests.

A mock replaces that dependency. You set expectations on calls to the dependent object, set the exact return values it should give you to perform the test you want, and/or what exceptions to throw so that you can test your exception handling code. In this way you can test the unit in question easily.

TL;DR: Mock every dependency your unit test touches.
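As a sketch of that idea using Python's `unittest.mock` (the `fetch_greeting`/`find_name` names are invented for illustration): the mock stands in for the dependency, returns exactly what the test dictates, and can be scripted to throw.

```python
from unittest.mock import Mock


def fetch_greeting(repository, user_id):
    """Unit under test: collaborates with some repository dependency."""
    name = repository.find_name(user_id)
    return "Hello, " + name


# The mock replaces the real repository dependency.
repo = Mock()
repo.find_name.return_value = "Ada"

result = fetch_greeting(repo, 42)

assert result == "Hello, Ada"               # return value fully controlled by the test
repo.find_name.assert_called_once_with(42)  # and the collaboration itself is verifiable

# Expected exceptions are just as easy to script:
repo.find_name.side_effect = KeyError("no such user")
```

With `side_effect` set, the next call raises, so the unit's exception-handling path can be exercised without ever provoking a real failure in the dependency.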
I'd probably just set up Apache on the SAMBA servers and let it serve the files via HTTP. That'd give you a nice autoindex default page too, and you could just wget and concatenate each index for your master list. A couple of other thoughts:

1. file://server/share/file is the de facto Windows way of doing it
2. You can [register protocol handlers][1] in Windows, so you could register smb and redirect it to file://. I'd suspect GNOME/KDE/etc. would offer the same.

[1]: http://msdn.microsoft.com/en-us/library/aa767914.aspx
Using Linq-to-Sql, you can have columns in the DataGridView appear different than in the original table by:

1. In your Linq query, extract the columns that you want, in the order that you want, and store them in a var. Then the autogenerated columns should show them in that order in the DataGridView
2. Use template columns in your DataGridView
3. Do not use drag-and-drop on the Linq-to-Sql design surface to create your entities. Rather, create them by hand and associate them with the database table using table and column properties

As far as I know, there is no drag-and-drop column reorder in the designer itself
I have an object that is mapped to a cookie as a serialized base-64 string. I only want to write out a new cookie if there are changes made to the object stored in the cookie on the server side. What I want to do is get a hash code when the object is pulled from the cookie/initialized, and compare the original hash code to the hash code that exists just before I send the cookie header off to the client, to ensure I don't have to re-serialize/send the cookie unless changes were made. I was going to override .NET's `Object.GetHashCode()` method, but I wasn't sure that this is the best way to go about checking if an object is modified. Are there any other ways I can check if an object is modified, or should I override the `GetHashCode()` method?
|c#|.net|
What is the best way to tell if an object is modified?
|c#|.net|
I have an object that is mapped to a cookie as a serialized base-64 string. I only want to write out a new cookie if there are changes made to the object stored in the cookie on the server side. What I want to do is get a hash code when the object is pulled from the cookie/initialized, and compare the original hash code to the hash code that exists just before I send the cookie header off to the client, to ensure I don't have to re-serialize/send the cookie unless changes were made. I was going to override .NET's `Object.GetHashCode()` method, but I wasn't sure that this is the best way to go about checking if an object is modified. Are there any other ways I can check if an object is modified, or should I override the `GetHashCode()` method?

**Update**

I decided to accept @[rmbarnes](#34882)'s answer as it had an interesting solution to the problem, and because I decided to use his advice at the end of his post and not check for modification. I'd still be interested to hear any other solutions anyone may have to my scenario, however.
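For what it's worth, one alternative to overriding the object's own hash is to fingerprint the *serialized* state, sketched here in Python for brevity (a C# version would hash the same base-64 string the cookie already carries): hash on load, re-hash just before sending, and only rewrite the cookie if the two differ.

```python
import hashlib
import json


def state_fingerprint(obj):
    """Hash the serialized representation, not the object identity."""
    serialized = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.sha1(serialized).hexdigest()


# On load: remember the fingerprint of the deserialized state.
cart = {"items": ["book"], "total": 12.50}
original = state_fingerprint(cart)

# ... request handling may or may not touch the object ...
cart["items"].append("pen")

# Before writing the cookie header: only re-serialize if the state moved.
needs_rewrite = state_fingerprint(cart) != original
```

This sidesteps the identity-versus-state pitfalls of a hand-rolled `GetHashCode()`: anything the serializer can see is covered, and nothing else is.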
I like:

    foreach (DictionaryEntry entry in hashtable)
    {
        Console.WriteLine(entry.Key + ":" + entry.Value);
    }
1. Make code coverage part of the reviews. 2. Make "write a test that exposes the bug" a prerequisite to fixing a bug. 3. Require a certain level of coverage before code can be checked in. 4. Find a good book on test-driven development and use it to show how test-first can speed development.
Try this loop:

    {
        extern char byte_stream[];
        int bytes = offset_in_packet;
        unsigned int n = length_of_packet;  /* Under 4 billion, so this fits in 32 bits. */
        unsigned int t;                     /* 32-bit value used for temporary storage. */
        int i;
        unsigned char curByte;

        for (i = 0; i < 4; i++) {
            /* Mask off one byte at a time, most significant first.
               (The shift must be i * 8, not i * 16, or bytes 1 and 3 come out as zero.) */
            t = n & (0xFF000000u >> (i * 8));
            curByte = (unsigned char)(t >> (24 - (i * 8)));
            if (t || write_zeros) {
                printf("byte %d: 0x%02x\n", i, curByte);
                byte_stream[bytes] = curByte;
                bytes++;
                write_zeros = 1;
            }
        }
    }
DISCLAIMER: I am not a C++ programmer, however I know C really well. I adapted these calls from some C code I have.

You should just be able to write an abstraction class around the C sockets with something like this:

    class my_sock
    {
    private:
        int                sock;
        int                socket_type;
        socklen_t          sock_len;
        struct sockaddr_in server_addr;

    public:
        char           *server_ip;
        unsigned short  server_port;
    };

Then have methods for opening, closing, and sending packets down the socket. For example, the open call might look something like this:

    int my_sock::my_socket_connect()
    {
        int return_code = 0;

        if (this->socket_type != CLIENT_SOCK) {
            cout << "This is not a client socket!\n";
            return -1;
        }

        return_code = connect(this->sock,
            (struct sockaddr *) &this->server_addr,
            sizeof(this->server_addr));

        if (return_code < 0)
            cout << "connect() failure: " << strerror(errno) << "\n";

        return return_code;
    }
I was testing on a customer's box this afternoon which has Windows Vista (he had Home edition, but I am testing on Business Edition with the same results). We make use of a .DLL that gets the Hardware ID of the computer. Its usage is very simple and the sample program I have created works. The DLL is [this one from AzSdk][1]. In fact, this works perfectly under Windows XP. However, for some strange reason, inside our project (way bigger), we get this exception:

    Exception Type: System.DllNotFoundException
    Exception Message: Unable to load DLL 'HardwareID.dll': Invalid access to memory location. (Exception from HRESULT: 0x800703E6)
    Exception Target Site: GetHardwareID

I don't know what can be causing the problem, since I have full control over the folder. The project is a C#.NET Windows Forms application and everything works fine, except the call to the external library. I am declaring it like this (note: it's *not* a COM library and it doesn't need to be registered):

    [DllImport("HardwareID.dll")]
    public static extern String GetHardwareID(bool HDD,
        bool NIC, bool CPU, bool BIOS, string sRegistrationCode);

And then the calling code is quite simple:

    private void button1_Click(object sender, EventArgs e)
    {
        textBox1.Text = GetHardwareID(cb_HDD.Checked,
            cb_NIC.Checked, cb_CPU.Checked, cb_BIOS.Checked,
            "*Registration Code*");
    }

When you create a sample application it works, but inside my project it doesn't. Under XP it works fine. Any ideas about what I should do in Vista to make this work? As I've said, the folder and its subfolders have Full Control for "Everyone".

Thanks in advance,

**UPDATE:** I do not have Vista SP1 installed.

[1]: http://www.azsdk.com/hardwareid.html
I was testing on a customer's box this afternoon which has Windows Vista (he had Home edition, but I am testing on Business Edition with the same results). We make use of a .DLL that gets the Hardware ID of the computer. Its usage is very simple and the sample program I have created works. The DLL is [this one from AzSdk][1]. In fact, this works perfectly under Windows XP. However, for some strange reason, inside our project (way bigger), we get this exception:

    Exception Type: System.DllNotFoundException
    Exception Message: Unable to load DLL 'HardwareID.dll': Invalid access to memory location. (Exception from HRESULT: 0x800703E6)
    Exception Target Site: GetHardwareID

I don't know what can be causing the problem, since I have full control over the folder. The project is a C#.NET Windows Forms application and everything works fine, except the call to the external library. I am declaring it like this (note: it's *not* a COM library and it doesn't need to be registered):

    [DllImport("HardwareID.dll")]
    public static extern String GetHardwareID(bool HDD,
        bool NIC, bool CPU, bool BIOS, string sRegistrationCode);

And then the calling code is quite simple:

    private void button1_Click(object sender, EventArgs e)
    {
        textBox1.Text = GetHardwareID(cb_HDD.Checked,
            cb_NIC.Checked, cb_CPU.Checked, cb_BIOS.Checked,
            "*Registration Code*");
    }

When you create a sample application it works, but inside my project it doesn't. Under XP it works fine. Any ideas about what I should do in Vista to make this work? As I've said, the folder and its subfolders have Full Control for "Everyone".

Thanks in advance,

**UPDATE:** I do not have Vista SP1 installed.

**UPDATE 2:** I have installed Vista SP1 and now, with UAC disabled, not even the simple sample works!!! :( Damn Vista.

[1]: http://www.azsdk.com/hardwareid.html
## Common Lisp ## For a good reference of CL check out [Common Lisp the Language, 2nd Edition](http://www.cs.cmu.edu/afs/cs.cmu.edu/project/ai-repository/ai/html/cltl/cltl2.html)
Visual Studio moved to a 64-bit representation of `time_t` in Visual Studio 2005 (whilst still leaving `__time32_t` for backwards compatibility). As long as you are careful to always write code in terms of `time_t` and don't assume anything about its size, then, as sysrqb points out, the problem will be solved by your compiler.
I write my db release scripts in parallel with coding, and keep the release scripts in a project specific section in SS. If I make a change to the code that requires a db change, then I update the release script at the same time. Prior to release, I run the release script on a clean dev db (copied structure wise from production) and do my final testing on it.
ASP.NET Preview 5 (available on [CodePlex][1]) has an answer for this: the [AcceptVerbs] attribute. Phil Haack has a [blog post][2] discussing how it's used.

[1]: http://www.codeplex.com/aspnet
[2]: http://haacked.com/archive/2008/08/29/how-a-method-becomes-an-action.aspx
ASP.NET Preview 5 (available on [CodePlex][1]) has an answer for this: the [AcceptVerbs] attribute. Phil Haack has a [blog post][2] discussing how it's used.

As for the view data magic key question, it's an interesting problem. If you think of a view as being a bunch of semi-independent components (especially in light of the new partial view support), then making a strongly-typed model becomes less ideal, as the several pieces of the view should be relatively independent of one another.

[1]: http://www.codeplex.com/aspnet
[2]: http://haacked.com/archive/2008/08/29/how-a-method-becomes-an-action.aspx
The arrayCompare() user-defined function at cflib should do it [http://cflib.org/index.cfm?event=page.udfbyid&udfid=1210][1] [1]: http://cflib.org/index.cfm?event=page.udfbyid&udfid=1210 "arrayCompare()"
- C - [The C Programming Language](http://www.amazon.com/Programming-Language-Prentice-Hall-Software/dp/0131103628/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1220122570&sr=1-1) - Obviously I *had* to reference K&R, one of the best programming books out there, full stop.
- C++ - [Accelerated C++](http://www.amazon.com/Accelerated-Practical-Programming-Example-Depth/dp/020170353X/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1220122657&sr=1-1) - This clear, well-written introduction to C++ goes straight to using the STL and gives nice, clear, practical examples. Lives up to its name.
- C# - [Pro C# 2008 and the .NET 3.5 Platform](http://www.amazon.com/2008-NET-Platform-Fourth-Windows-Net/dp/1590598849/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1220122344&sr=8-1) - Bit of a mouthful but wonderfully written and huge depth.
- F# - [Expert F#](http://www.amazon.com/Expert-F-Experts-Voice-Net/dp/1590598504/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1220122421&sr=1-1) - Designed to take experienced programmers from zero to expert in F#. Very well written; one of the authors invented F#, so you can't go far wrong!
- Scheme - [The Little Schemer](http://www.amazon.com/Little-Schemer-Daniel-P-Friedman/dp/0262560992/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1220122404&sr=1-1) - Really unique approach to teaching a programming language, done *really* well.
- Ruby - [Programming Ruby](http://www.amazon.com/Programming-Ruby-Pragmatic-Programmers-Second/dp/0974514055/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1220122751&sr=1-1) - Affectionately known as the 'pick axe' book, this is THE de facto introduction to Ruby. Very well written, clear and detailed.
I would think first about what the system requirements are for data access, data security, scalability, performance, disconnected scenarios, data transformation, and data sizing. On the other side, consider also the experience and background of the developers, operators, and platform administrators. You should also think about what constraints you have regarding programming languages, operating systems, memory footprint, network bandwidth, and hardware. Last, but not least, you have to think about business issues like the budget for licences, support, and operation. After all those considerations you should end up with just a couple of options, and the selection should be easier. In other words, select the technology that best suits the constraints and needs of your organization and project. I certainly think you are right in saying that it is not an obvious choice, given the wide number of alternatives, but this is the only way I think you can narrow them down to the ones that are really feasible for your project.
The simplest way to sort a ComboBox is to use the [ComboBox.Sorted][1] property. However, that won't work if you're using data binding. In that case you'll have to sort the data source itself. You can use either a [SortedList][2] or [SortedDictionary][3] (both sort by the Key), or a [DataView][4]. The DataView has a [Sort][5] property that accepts a sort expression (string), for example:

    view.Sort = "State, ZipCode DESC";

In the above example both State and ZipCode are columns in the DataTable used to create the DataView.

[1]: http://msdn.microsoft.com/en-us/library/system.windows.forms.combobox.sorted.aspx
[2]: http://msdn.microsoft.com/en-us/library/ms132319(VS.80).aspx
[3]: http://msdn.microsoft.com/en-us/library/f7fta44c.aspx
[4]: http://msdn.microsoft.com/en-us/library/system.data.dataview.aspx
[5]: http://msdn.microsoft.com/en-us/library/system.data.dataview.sort.aspx
If you want to increase the execution timeout for this one request, you can set

<pre><code>HttpContext.Current.Server.ScriptTimeout</code></pre>

But you still may have the problem of the client timing out, which you can't reliably solve directly from the server. To get around that you could implement a "processing" page (like Rob suggests) that posts back until the response is ready. Or you might want to look into AJAX to do something similar.
Have you tried adding the following before the call?

    $.ajaxSetup({
        scriptCharset: "utf-8",
        contentType: "application/json; charset=utf-8"
    });

The options are explained <a href="http://docs.jquery.com/Ajax/jQuery.ajax#toptions">here</a>.

> contentType: When sending data to the server, use this content-type. Default is "application/x-www-form-urlencoded", which is fine for most cases.

> scriptCharset: Only for requests with 'jsonp' or 'script' dataType and GET type. Forces the request to be interpreted as a certain charset. Only needed for charset differences between the remote and local content.
Directory Sizes on Remote Host
|php|cgi|hosting|directory-size|
My hosting company does not provide an interface for me to see which folders are consuming the most amount of space, so what I'm looking for is something that will show me the size of each folder within my main folder, recursively. I don't know of anything on the net, and did a few searches, however I came up with no results. This is a LAMP server with a CGI-Bin, so most any PHP script should work, or anything that will work in the CGI-Bin. Something implementing graphs (GD/ImageMagick) would be best, but not required.

It appears that my host supports Perl only in the CGI-Bin, but I am going to call and verify that.
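For reference, the logic any such script needs is small; here it is sketched in Python (a Perl or PHP CGI would be line-for-line similar): walk each top-level folder once and sum the file sizes.

```python
import os


def folder_sizes(root):
    """Return {subfolder_name: total_bytes}, summed recursively."""
    sizes = {}
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if not os.path.isdir(path):
            continue
        total = 0
        for dirpath, _dirnames, filenames in os.walk(path):
            for filename in filenames:
                total += os.path.getsize(os.path.join(dirpath, filename))
        sizes[name] = total
    return sizes
```

Feeding the resulting numbers into GD (or any chart library) is then a separate, cosmetic step.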
How to best implement software updates on windows?
|windows|installation|
I want to implement an "automatic update" system for a windows application. Right now I'm semi-manually creating an ["appcast"][2] which my program checks, and notifies the user that a new version is available. (I'm using [NSIS][1] for my installers). Is there software that I can use that will handle the "automatic" part of the updates, perhaps similar to [Sparkle][3] on the mac? Any issues/pitfalls that I should be aware of? [1]: http://nsis.sourceforge.net/Main_Page [2]: http://connectedflow.com/appcasting/ [3]: http://sparkle.andymatuschak.org/
If your application is written in .Net, you could try [ClickOnce](http://msdn.microsoft.com/en-us/library/142dbbz4.aspx). However, it's difficult to perform administrative or custom actions during install using this approach.
For .NET, a while back Microsoft Patterns + Practices published the [Application Updater Block][1]. This was (to my mind) rather overblown and over-engineered, but did the job quite well. In essence it used a "stub loader" to check a manifest and a Web service to see if a later version of the program than the one installed was available, then used the BITS background downloader technology to download a new version if one was available on the server. Once the new version was downloaded and installed (with .NET this is as simple as an xcopy to the relevant folder), the application would update the manifest. The next time the program was loaded the new version would be launched. While the Patterns + Practices code is .NET specific, there's nothing there that couldn't be copied for a non-.NET application, especially if you have the ability to silently run the install process in the background. [1]: http://msdn.microsoft.com/en-us/library/ms978574.aspx
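The decision at the heart of that stub loader is tiny; a rough sketch in Python (the manifest shape, URL, and version strings here are invented for illustration):

```python
def parse_version(text):
    """'1.10.0' -> (1, 10, 0), so comparisons are numeric, not lexical."""
    return tuple(int(part) for part in text.split("."))


def needs_update(installed, manifest):
    """Compare the locally installed version against the server manifest."""
    return parse_version(manifest["latest"]) > parse_version(installed)


# Stub loader: consult the manifest before launching the real app.
manifest = {"latest": "1.10.0", "url": "https://example.invalid/app-1.10.0.zip"}
if needs_update("1.9.3", manifest):
    pass  # here the real loader would download manifest["url"] (e.g. via BITS),
          # xcopy the new version into place, and update the local manifest
```

Everything else in the Updater Block (background transfer, folder swap, manifest rewrite) hangs off that one comparison. Note the tuple comparison matters: a string compare would wrongly rank "1.9.3" above "1.10.0".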
It's been a while since I've done this, but IIRC, the first case is for creating a dialog dynamically, from an in-memory template. The second example is for the far more common case of creating a dialog using a resource. The dynamic dialog stuff in Win32 was fairly complex, but it allowed you to create a true data-driven interface, and avoid issues with bundling resources with DLLs. As for why use Win32 - if you need a windows app and you don't want to depend on MFC or the .NET runtime, then that's what you use.
There is no solution quite as smooth as Sparkle (that I know of). If you need an easy means of deploying and updating applications, [ClickOnce][1] is an option. Unfortunately, it's inflexible (e.g., no per-machine installation instead of per-user), opaque (you have very little clarity about or control over how its deployment actually works) and non-standard (the paths it stores the installed app in are unlike anything else on Windows).

Much closer to what you're asking would be [ClickThrough][2], a side project of [WiX][3], but I'm not sure it's still in development (if it is, they should be clearer about that…) — and it would use MSI in any case, not NSIS.

You're likely best off rolling something on your own. I'd love to see a Sparkle-like project for Windows, but nobody seems to have given it a shot thus far.

[1]: http://en.wikipedia.org/wiki/ClickOnce
[2]: http://wix.sourceforge.net/clickthrough.html
[3]: http://wix.sourceforge.net/
We use a script like the following. Gzip is from the Cygwin project. I'm sure you could modify the syntax to use a zip tool instead. The "skip" argument is the number of files to not archive off -- we keep 11 days in the 'current' directory.

    @echo off
    setlocal
    For /f "skip=11 delims=/" %%a in ('Dir D:\logs\W3SVC1\*.log /B /O:-N /T:C') do move "D:\logs\W3SVC1\%%a" "D:\logs\W3SVC1\old\%%a"
    d:
    cd "\logs\W3SVC1\old"
    gzip -n *.log
    Endlocal
    exit
Does C# have a way of giving me an immutable Dictionary?
|c#|dictionary|collections|java|
Is there anything built into the C# libraries that can give me an immutable Dictionary? Something along the lines of this in Java:

    Collections.unmodifiableMap(myMap);

And just to clarify, I am not looking to stop the keys/values themselves from being changed, just the structure of the Dictionary. I want something that fails fast and loud if any of IDictionary's mutator methods are called (Add, Remove, Clear).
|c#|.net|java|collections|dictionary|
**Is there anything built into the core C# libraries that can give me an immutable Dictionary?**

Something along the lines of *Java's*:

    Collections.unmodifiableMap(myMap);

And just to clarify, I am not looking to stop the keys/values themselves from being changed, just the structure of the Dictionary. I want something that fails fast and loud if any of IDictionary's mutator methods are called (`Add`, `Remove`, `Clear`).
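For comparison only (this is Python, not a .NET answer): `types.MappingProxyType` gives exactly the behavior being asked for — a read-only view that fails fast and loud on any structural change while reads pass through.

```python
from types import MappingProxyType

data = {"a": 1, "b": 2}
frozen = MappingProxyType(data)  # read-only *view* of data

assert frozen["a"] == 1          # reads work as usual

try:
    frozen["c"] = 3              # any mutation fails fast and loud
except TypeError:
    mutation_failed = True
```

Like Java's `unmodifiableMap`, it is a live view rather than a copy: changes made through the underlying dict still show through the proxy.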
Access 2000 connecting to SQL Server 2005
|sql-server|microsoftaccess|
The company I work for has an old Access 2000 application that was using a SQL Server 2000 back-end. We were tasked with moving the back-end to a SQL Server 2005 database on a new server. Unfortunately, the application was not functioning correctly when trying to do any inserts or updates.

My research found many forum posts saying that Access 2000 -> SQL 2005 is not supported by Microsoft, but I cannot find any Microsoft documentation to verify that. Can anyone either link me to some official documentation, or has anyone used this setup and can confirm that this should be working and our problems lie somewhere else?

Not sure if it matters, but the app is an ADP compiled into an ADE.

Thank you for your help in advance.
|sql-server|ms-access|
> The default value of the argument must be a constant expression. It can't be a variable or a function call.

If you need this functionality however:

    function foo($foo, $bar = false)
    {
        if (!$bar) {
            $bar = $foo;
        }
    }

Assuming `$bar` isn't expected to be a boolean of course.
Access can be a bitch. I've been in the position where I had to go around and tell 20-50 people to close Access so I could go to "design mode" to change the design of the forms and maybe a column. No fun at all. (Old Access, and it might just have been a bad setup.)
Q. Why are you storing the `AnswerCount` in the `Posts` table in the first place? An alternative approach is to eliminate the "write back" to the `Posts` table by not storing the `AnswerCount` in the table but to dynamically calculate that each time. Yes, this will mean you're running an additional query to `SELECT COUNT(*) FROM Answers WHERE post_id = @id` but this typically results in an `INDEX SCAN` and may be more efficient in the use of resources than using `READ ISOLATION`. *There's more than one way to skin a cat. De-normalisation of a database schema can introduce scalability issues.*
Q. Why are you storing the `AnswerCount` in the `Posts` table in the first place?

An alternative approach is to eliminate the "write back" to the `Posts` table by not storing the `AnswerCount` in the table but to dynamically calculate the number of answers to the post as required. Yes, this will mean you're running an additional query:

    SELECT COUNT(*) FROM Answers WHERE post_id = @id

or more typically (if you're displaying this for the home page):

    SELECT post_id, <post fields>, AnswerCount
    FROM Posts
    INNER JOIN AnswersCount_view ON <join criteria>
    WHERE <home page criteria>

but this typically results in an `INDEX SCAN` and may be more efficient in the use of resources than using `READ ISOLATION`.

*There's more than one way to skin a cat. De-normalisation of a database schema can introduce scalability issues.*
Q. Why are you storing the `AnswerCount` in the `Posts` table in the first place?

An alternative approach is to eliminate the "write back" to the `Posts` table by not storing the `AnswerCount` in the table but to dynamically calculate the number of answers to the post as required. Yes, this will mean you're running an additional query:

    SELECT COUNT(*) FROM Answers WHERE post_id = @id

or more typically (if you're displaying this for the home page):

    SELECT p.post_id, p.<additional post fields>, a.AnswerCount
    FROM Posts p
    INNER JOIN AnswersCount_view a ON <join criteria>
    WHERE <home page criteria>

but this typically results in an `INDEX SCAN` and may be more efficient in the use of resources than using `READ ISOLATION`.

*There's more than one way to skin a cat. Premature de-normalisation of a database schema can introduce scalability issues.*
Rather than having the input parameter as a cursor, I would have a table variable (I don't know if Oracle has such a thing; I'm a T-SQL guy) or populate another temp table with the ID values and join on it in the view/function or wherever you need to. The only time for cursors, in my honest opinion, is when you *have* to loop. And when you have to loop, I always recommend doing that outside of the database, in the application logic.
This might be slightly tangential, but hopefully relevant. I used to work for National Instruments, R&D, where I wrote software for NI RF & Communication toolkits. We used LabVIEW quite a bit, and here are the practices we followed:

1. Source control. NI uses Perforce. We did the regular thing - dev/trunk branches, continuous integration, the works.
2. We wrote automated test suites.
3. We had a few people who came in with a background in signal processing and communication. We used to have regular code reviews, and best practices documents to make sure their code was up to the mark.
4. Despite the code reviews, there were a few occasions when "software guys" like me had to rewrite some of this code for efficiency.
5. I know exactly what you mean about stubborn people! We had folks who used to think that pointing out a potential performance improvement in their code was a direct personal insult! It goes without saying that this calls for good management. I thought the best way to deal with these folks is to go slowly, not press too hard for changes and, if necessary, be prepared to do the dirty work. (Example: write a test suite for their code.)
I'm no expert in this area, but I've always understood that this is what [Matlab][1] was created for. There is [a way to integrate Matlab with SVN for source control][2] as well. [1]: http://en.wikipedia.org/wiki/MATLAB [2]: http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=11596&objectType=file
First of all, I would definitely go with a scripting language to avoid having to explain a lot of extra things (for example manual memory management is - mostly - ok if you are writing low-level, performance sensitive stuff, but for somebody who just wants to use a computer as an upgraded scientific calculator it's definitely overkill). Also, look around if there is something specific for your domain (as is [R][1] for statistics). This has the advantage of already working with the concepts the users are familiar with and having specialized code for specific situations (for example calculating standard deviations, applying statistical tests, etc. in the case of R).

If you wish to use a more generic scripting language, I would go with Python. Two things it has going for it are:

- The interactive shell where you can experiment
- Its clear (although sometimes lengthy) syntax

As an added advantage, it has libraries for most of the things you would want to do with it.

[1]: http://www.r-project.org/
Class methods are for when you need to have methods that aren't specific to any particular instance, but still involve the class in some way. The most interesting thing about them is that they can be overridden by subclasses, something that's simply not possible in Java's static methods or Python's module-level functions. If you have a class `MyClass`, and a module-level function that operates on MyClass (factory, dependency injection stub, etc), make it a `classmethod`. Then it'll be available to subclasses.
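A minimal sketch of that override behaviour (the class and method names here are invented for illustration):

    class Shape:
        @classmethod
        def create(cls):
            # 'cls' is whichever class the method was invoked on,
            # so subclasses inherit a correctly-typed factory for free.
            return cls()

    class Circle(Shape):
        @classmethod
        def create(cls):
            # A subclass can override the classmethod entirely -
            # something a Java static method can't do polymorphically.
            obj = super().create()
            obj.kind = "circle"
            return obj

    print(type(Shape.create()).__name__)   # Shape
    print(type(Circle.create()).__name__)  # Circle

Even without the override in `Circle`, calling `Circle.create()` would already return a `Circle`, because `cls` is bound to the subclass.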
Adding to what Matthew already said, it looks like the following modules would be suitable: [Tree::Nary](http://search.cpan.org/perldoc?Tree::Nary) [Tree::Simple](http://search.cpan.org/perldoc?Tree::Simple) [Tree](http://search.cpan.org/perldoc?Tree)
Best browser for web application
|php|hosting|cgi|
> Unable to load DLL 'HardwareID.dll': Invalid access to memory location. (Exception from HRESULT: 0x800703E6)

The name of DllNotFoundException is confusing you - this isn't a problem with finding or loading the DLL file, the problem is that when the DLL is loaded, it does an illegal memory access which causes the loading process to fail.

Like another poster here, I think this is a DEP problem, and that your UAC, etc., changes have finally allowed you to disable DEP for this application.
Surely the most compelling factor is the expertise of you or your team... or the pool of resource you are likely to hire in the future. I would tend to go with the grain most of the time, using MySQL in a LAMP team and SQL Server in a MS team, since either of these products is capable of doing everything necessary even in a high-load environment.

The benefits of any other database are going to be marginal compared to the pain of learning how to use it well. The only exception to this, in my opinion, would be in a high-demand environment where:

a. the obvious choice has been tried and is failing
b. the benefits of scaling multiply the marginal benefit to such a degree that it will be worth the cost of using something unexpected.

I would assume the need to hire at least two and preferably three excellent DBAs with long term familiarity with the new database. And first I would try to hire them for the technology that was failing, because it is more likely to be the way it's used than the technology itself that is causing the problem.
Try checking the [IsDisposed][1] property before accessing the control. [1]: http://msdn.microsoft.com/en-us/library/system.windows.forms.control.isdisposed.aspx
Try checking the [IsDisposed][1] property before accessing the control. You can also check it on the [FormClosing][2] event, assuming you're using the FormClosed event. [1]: http://msdn.microsoft.com/en-us/library/system.windows.forms.control.isdisposed.aspx [2]: http://msdn.microsoft.com/en-us/library/system.windows.forms.form.formclosing.aspx
Try checking the [IsDisposed][1] property before accessing the control. You can also check it on the [FormClosing][2] event, assuming you're using the FormClosed event.

> We do stop the Timer on the FormClosing event and we do check the IsDisposed property on the schedule component before using it in the Timer Tick event but it doesn't help.

Calling GC.Collect before checking IsDisposed may help, but be careful with this. Read this article by Rico Mariani: "[When to call GC.Collect()][3]".

[1]: http://msdn.microsoft.com/en-us/library/system.windows.forms.control.isdisposed.aspx
[2]: http://msdn.microsoft.com/en-us/library/system.windows.forms.form.formclosing.aspx
[3]: http://blogs.msdn.com/ricom/archive/2004/11/29/271829.aspx
from [http://www.go4expert.com/forums/showthread.php?t=290][1]

> The function "getDirectorySize" will ignore links/shortcuts to files/directories. The function "sizeFormat" will suffix the size with bytes, KB, MB or GB accordingly.

Code - (note: `closedir` is called inside the `if` so we never close a handle that failed to open)

    function getDirectorySize($path)
    {
        $totalsize = 0;
        $totalcount = 0;
        $dircount = 0;
        if ($handle = opendir($path)) {
            while (false !== ($file = readdir($handle))) {
                $nextpath = $path . '/' . $file;
                if ($file != '.' && $file != '..' && !is_link($nextpath)) {
                    if (is_dir($nextpath)) {
                        $dircount++;
                        $result = getDirectorySize($nextpath);
                        $totalsize += $result['size'];
                        $totalcount += $result['count'];
                        $dircount += $result['dircount'];
                    } elseif (is_file($nextpath)) {
                        $totalsize += filesize($nextpath);
                        $totalcount++;
                    }
                }
            }
            closedir($handle);
        }
        $total['size'] = $totalsize;
        $total['count'] = $totalcount;
        $total['dircount'] = $dircount;
        return $total;
    }

    function sizeFormat($size)
    {
        if ($size < 1024) {
            return $size . " bytes";
        } else if ($size < (1024 * 1024)) {
            return round($size / 1024, 1) . " KB";
        } else if ($size < (1024 * 1024 * 1024)) {
            return round($size / (1024 * 1024), 1) . " MB";
        } else {
            return round($size / (1024 * 1024 * 1024), 1) . " GB";
        }
    }

Usage -

    $path = "/httpd/html/pradeep/";
    $ar = getDirectorySize($path);
    echo "<h4>Details for the path : $path</h4>";
    echo "Total size : " . sizeFormat($ar['size']) . "<br>";
    echo "No. of files : " . $ar['count'] . "<br>";
    echo "No. of directories : " . $ar['dircount'] . "<br>";

Output -

    Details for the path : /httpd/html/pradeep/
    Total size : 2.9 MB
    No. of files : 196
    No. of directories : 20

[1]: http://www.go4expert.com/forums/showthread.php?t=290
For modal input dialogs, I typically overload `ShowDialog` and pass `out` params for the data I need:

    DialogResult ShowDialog(out DateTime birthday)

I generally find that it's easier to discover and understand versus mixing my properties with the 100+ that the Form class exposes. For forms, I normally have a Controller and an `IView` interface that uses readonly properties to pass data.
`notify()` will wake up one thread while `notifyAll()` will wake up all. As far as I know there is no middle ground. But if you are not sure what `notify()` will do to your threads, use `notifyAll()`. Works like a charm every time.
`%` is replaced with the current file name, thus you can use:

    :w !sudo tee %

(`vim` will detect that the file has been changed and ask whether you want it to be reloaded.)

As a shortcut, you can define your own command. Put the following in your `.vimrc`:

    command W w !sudo tee % >/dev/null
I think it depends on how resources are produced and consumed. If 5 work objects are available at once and you have 5 consumer objects, it would make sense to wake up all threads using notifyAll() so each one can process 1 work object.

If you have just one work object available, what is the point in waking up all consumer objects to race for that one object? The first one checking for available work will get it and all other threads will check and find they have nothing to do.

I found a [great explanation here][1]. In short:

> The notify() method is generally used for **resource pools**, where there are an arbitrary number of "consumers" or "workers" that take resources, but when a resource is added to the pool, only one of the waiting consumers or workers can deal with it. The notifyAll() method is actually used in most other cases. Strictly, it is required to notify waiters of a condition that could allow multiple waiters to proceed. But this is often difficult to know. So as a general rule, **if you have no particular logic for using notify(), then you should probably use notifyAll()**, because it is often difficult to know exactly what threads will be waiting on a particular object and why.

[1]: http://www.javamex.com/tutorials/notifyall.shtml
Simply put, it depends on why your threads are waiting to be notified. Do you want to tell one of the waiting threads that something happened, or do you want to tell all of them at the same time?

In some cases, all waiting threads can take useful action once the wait finishes. An example would be a set of threads waiting for a certain task to finish; once the task has finished, all waiting threads can continue with their business. In such a case you would use **notifyAll()** to wake up all waiting threads at the same time.

In another case, for example mutually exclusive locking, only one of the waiting threads can do something useful after being notified (in this case acquire the lock). In such a case, you would rather use **notify()**. Properly implemented, you *could* use **notifyAll()** in this situation as well, but you would unnecessarily wake threads that can't do anything anyway.
Useful differences:

- Use **notify()** if all your waiting threads are interchangeable (the order they wake up doesn't matter), or if you only ever have one waiting thread. A common example is a thread pool used to execute jobs from a queue: when a job is added, one of the threads is notified to wake up, execute the next job and go back to sleep.
- Use **notifyAll()** for other cases where the waiting threads may have different purposes and should be able to run concurrently. An example is a maintenance operation on a shared resource, where multiple threads are waiting for the operation to complete before accessing the resource.
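The same distinction exists in Python's `threading.Condition`, whose `notify()`/`notify_all()` mirror Java's semantics, which makes for a compact runnable sketch of the notifyAll() case (the `done`/`waiter` names are invented for this example):

    import threading

    cond = threading.Condition()
    done = False        # the condition the waiters are waiting on
    woken = []

    def waiter(name):
        with cond:
            while not done:        # predicate loop guards against spurious wakeups
                cond.wait()
            woken.append(name)

    threads = [threading.Thread(target=waiter, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()

    with cond:
        done = True
        cond.notify_all()   # all three waiters proceed; notify() would
                            # have woken only one of them per call

    for t in threads:
        t.join()

    print(sorted(woken))    # [0, 1, 2]

Because each waiter re-checks the `done` predicate, the sketch is also safe against a waiter that hasn't reached `wait()` yet when the notification fires.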
I'm not sure I understand your question. What exactly are you trying to count? If I understand correctly you're trying to find the most significant non-zero byte. You're probably better off using a loop like this:

    int i;
    int write_zeros = 0;
    for (i = 3; i >= 0; --i) {
        t = (n >> (8 * i)) & 0xff;
        if (t || write_zeros) {
            write_zeros = 1;
            printf("byte %d : 0x%02x\n", 4 - i, t);
            byte_stream[bytes++] = t;
        }
    }