I have a problem using the Java search function in Eclipse on a particular project. When using the Java search on one particular project, I get an error message saying `Class file name must end with .class` (see stack trace below). This does not seem to be happening on all projects, just one particular one, so perhaps there's something I should try to get rebuilt?

I have already tried `Project -> Clean...` and closing Eclipse, deleting all the built class files and restarting Eclipse, to no avail.

The only reference I've been able to find on Google for the problem is at <http://www.crazysquirrel.com/computing/java/eclipse/error-during-java-search.jspx>, but unfortunately his solution (closing, deleting class files, restarting) did not work for me.

If anyone can suggest something to try, or there's any more info I can gather which might help track it down, I'd greatly appreciate the pointers.

```
Version: 3.4.0
Build id: I20080617-2000
```

Also just found this thread - <http://www.myeclipseide.com/PNphpBB2-viewtopic-t-20067.html> - which indicates the same problem may occur when the project name contains a period. Unfortunately, that's not the case in my setup, so I'm still stuck.
```
Caused by: java.lang.IllegalArgumentException: Class file name must end with .class
	at org.eclipse.jdt.internal.core.PackageFragment.getClassFile(PackageFragment.java:182)
	at org.eclipse.jdt.internal.core.util.HandleFactory.createOpenable(HandleFactory.java:109)
	at org.eclipse.jdt.internal.core.search.matching.MatchLocator.locateMatches(MatchLocator.java:1177)
	at org.eclipse.jdt.internal.core.search.JavaSearchParticipant.locateMatches(JavaSearchParticipant.java:94)
	at org.eclipse.jdt.internal.core.search.BasicSearchEngine.findMatches(BasicSearchEngine.java:223)
	at org.eclipse.jdt.internal.core.search.BasicSearchEngine.search(BasicSearchEngine.java:506)
	at org.eclipse.jdt.core.search.SearchEngine.search(SearchEngine.java:551)
	at org.eclipse.jdt.internal.corext.refactoring.RefactoringSearchEngine.internalSearch(RefactoringSearchEngine.java:142)
	at org.eclipse.jdt.internal.corext.refactoring.RefactoringSearchEngine.search(RefactoringSearchEngine.java:129)
	at org.eclipse.jdt.internal.corext.refactoring.rename.RenameTypeProcessor.initializeReferences(RenameTypeProcessor.java:594)
	at org.eclipse.jdt.internal.corext.refactoring.rename.RenameTypeProcessor.doCheckFinalConditions(RenameTypeProcessor.java:522)
	at org.eclipse.jdt.internal.corext.refactoring.rename.JavaRenameProcessor.checkFinalConditions(JavaRenameProcessor.java:45)
	at org.eclipse.ltk.core.refactoring.participants.ProcessorBasedRefactoring.checkFinalConditions(ProcessorBasedRefactoring.java:225)
	at org.eclipse.ltk.core.refactoring.Refactoring.checkAllConditions(Refactoring.java:160)
	at org.eclipse.jdt.internal.ui.refactoring.RefactoringExecutionHelper$Operation.run(RefactoringExecutionHelper.java:77)
	at org.eclipse.jdt.internal.core.BatchOperation.executeOperation(BatchOperation.java:39)
	at org.eclipse.jdt.internal.core.JavaModelOperation.run(JavaModelOperation.java:709)
	at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:1800)
	at org.eclipse.jdt.core.JavaCore.run(JavaCore.java:4650)
	at org.eclipse.jdt.internal.ui.actions.WorkbenchRunnableAdapter.run(WorkbenchRunnableAdapter.java:92)
	at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
```

## Update

Thanks McDowell, closing and opening the project seems to have fixed it (at least for now).
Two more general-purpose mechanisms for fixing some of Eclipse's idiosyncrasies:

* Close and open the project
* Delete the project (but not from disk!) and reimport it as an existing project

Failing that, [bugs.eclipse.org](https://bugs.eclipse.org/bugs/buglist.cgi?query_format=specific&order=relevance+desc&bug_status=__all__&product=JDT&content=Class+file+name+must+end+with+.class) might provide the answer. If the workspace is caching something broken, you may be able to delete it by poking around in **workspace/.metadata/.plugins**. Most of that stuff is fairly transient (though back up first, and watch for deleted preferences).
Comment [#9 to bug 269820](https://bugs.eclipse.org/bugs/show_bug.cgi?id=269820#c9) explains how to delete the search index, which appears to be the solution to a corrupt index whose symptoms are the dreaded

> An internal error occurred during: "Items filtering".
> Class file name must end with .class

message box.

How to delete the search index:

1. Close Eclipse
2. Delete `<workspace>/.metadata/.plugins/org.eclipse.jdt.core/*.index`
3. Delete `<workspace>/.metadata/.plugins/org.eclipse.jdt.core/savedIndexNames.txt`
4. Start Eclipse again
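Those deletion steps can be scripted; here is a minimal sketch, assuming `WORKSPACE` points at your actual Eclipse workspace (the paths are the ones from the bug comment; adjust them if your JDT version stores its index elsewhere):

```shell
#!/bin/sh
# WORKSPACE is an assumption -- point it at your Eclipse workspace.
WORKSPACE="${WORKSPACE:-$HOME/workspace}"
INDEX_DIR="$WORKSPACE/.metadata/.plugins/org.eclipse.jdt.core"

# Eclipse must be closed before the index files are removed.
rm -f "$INDEX_DIR"/*.index
rm -f "$INDEX_DIR/savedIndexNames.txt"
```

Eclipse rebuilds the search index on the next start, so deleting these files is safe (if slow on large workspaces).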
Class file name must end with .class exception in Java search
[ "java", "eclipse", "search" ]
I have several "ASP:TextBox" controls on a form (about 20). When the form loads, the text boxes are populated from a database. The user can change the populated values, and when they submit the form, I take the values posted to the server and conditionally save them (determined by some business logic). All but 1 of the text boxes work as intended. The odd box out, upon postback, does not contain the updated value that the user typed into the box. When debugging the application, it is clear that `myTextBox.Text` reflects the old, pre-populated value, not the new, user-supplied value. Every other box properly shows their respective user-supplied values. I did find a workaround. My solution was to basically extract the text box's value out of the `Request.Form` object: `Request.Form[myTextBox.UniqueID]`, which does contain the user-supplied value. What could be going on here? As I mentioned, the other text boxes receive the user-supplied values just fine, and this particular problematic text box doesn't have any logic associated with it -- it just takes the value and saves it. The main difference between this text box and the others is that this is a multi-line box (for inputting notes), which I believe is rendered as an HTML "textarea" tag instead of an "input" tag in ASP.NET.
Are you initially loading the data only when !Page.IsPostBack? Also, is view state enabled for the text box?
This happens to me all the time.

```
protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        // populate text boxes from database
    }
}
```
Form post doesn't contain textbox data [ASP.NET C#]
[ "c#", "asp.net" ]
I'm trying to perform a SQL query through a linked SSAS server. The initial query works fine:

```
SELECT "Ugly OLAP name" as "Value"
FROM OpenQuery( OLAP, 'OLAP Query')
```

But if I try to add:

```
WHERE "Value" > 0
```

I get an error

> Invalid column name 'Value'

Any ideas what I might be doing wrong?

---

So the problem was that the order in which the elements of the query are processed is different from the order in which they are written. According to this source: <http://blogs.x2line.com/al/archive/2007/06/30/3187.aspx>

The order of evaluation in MSSQL is:

1. FROM
2. ON
3. JOIN
4. WHERE
5. GROUP BY
6. HAVING
7. SELECT
8. ORDER BY

So the alias wasn't processed until after the WHERE and HAVING clauses.
This should work:

```
SELECT A.Value
FROM (
    SELECT "Ugly OLAP name" as "Value"
    FROM OpenQuery( OLAP, 'OLAP Query')
) AS A
WHERE A.Value > 0
```

It's not that Value is a reserved word; the problem is that it's a column alias, not the column name. By making it an inline view, "Value" becomes the column name and can then be used in a WHERE clause.
You're using "Value" as a column alias, and I don't think the alias can appear in the WHERE clause. It's simply used to name the returned column value. Your WHERE clause should refer to the original column name:

```
SELECT "Ugly OLAP name" as "Value"
FROM OpenQuery( OLAP, 'OLAP Query')
WHERE "Ugly OLAP name" > 0
```
"Invalid column name" error on SQL statement from OpenQuery results
[ "sql", "sql-server" ]
Does anyone know of an easy way to escape HTML from strings in [jQuery](http://jquery.com/)? I need to be able to pass an arbitrary string and have it properly escaped for display in an HTML page (preventing JavaScript/HTML injection attacks). I'm sure it's possible to extend jQuery to do this, but I don't know enough about the framework at the moment to accomplish this.
Since you're using [jQuery](https://jquery.com/), you can just set the element's [`text`](http://api.jquery.com/text/) property:

```
// before:
// <div class="someClass">text</div>

var someHtmlString = "<script>alert('hi!');</script>";

// set a DIV's text:
$("div.someClass").text(someHtmlString);

// after:
// <div class="someClass">&lt;script&gt;alert('hi!');&lt;/script&gt;</div>

// get the text in a string:
var escaped = $("<div>").text(someHtmlString).html();

// value:
// &lt;script&gt;alert('hi!');&lt;/script&gt;
```
There is also [the solution from mustache.js](https://github.com/janl/mustache.js/blob/master/mustache.js#L73)

```
var entityMap = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;',
  '/': '&#x2F;',
  '`': '&#x60;',
  '=': '&#x3D;'
};

function escapeHtml (string) {
  return String(string).replace(/[&<>"'`=\/]/g, function (s) {
    return entityMap[s];
  });
}
```
Escaping HTML strings with jQuery
[ "javascript", "jquery", "string", "escaping" ]
Has anyone been able to get xinc to run correctly under OpenBSD's chrooted default Apache? I'd like to keep our development server running fully chrooted just like our Production server so that we make sure our code runs just fine chrooted.
Have you posted the issue on the Xinc [bug tracker](http://code.google.com/p/xinc/issues/list)? Xinc itself should run fine, as it runs both as a daemon and as a web app. As you alluded to, the issue may be that the daemon is not running in a chroot'ed environment whereas the web interface is, leading to either side not grabbing the files.
@dragonmantank In Xinc's case, I hope you used PEAR to install it. ``` pear list-files xinc/Xinc ``` This should do it, and show you where your Xinc install put its files. So even though Xinc is "just" one big PHP script, it's still spread out into rc scripts and all those other things which are necessary to make an application run. I'm sure you don't need to add all paths listed there, but probably some in order to make it run. Aside from Xinc itself, I think it also needs phpUnit and a bunch of other PEAR libs to run, so what I'd propose is this: ``` pear config-get php_dir ``` And then you need to add that path (like Henrik suggested) to the chroot environment.
Running xinc on OpenBSD's Apache Server
[ "php", "continuous-integration", "openbsd", "xinc" ]
I have some code for starting a thread on the .NET CF 2.0:

```
ThreadStart tStart = new ThreadStart(MyMethod);
Thread t = new Thread(tStart);
t.Start();
```

If I call this inside a loop, the items are processed completely out of order. How do I introduce a wait after `t.Start()`, so that the work on the thread completes before the code continues? Will BeginInvoke/EndInvoke be a better option for this than manually creating threads?
How much order do you need to impose on the threads? If you just need all of the work started in the loop to finish before the code continues, but you don't care about the order the work within the loop finishes, then calling Join is the answer. To add more detail to [Kevin Kenny's answer](https://stackoverflow.com/questions/6890/compact-framework-how-to-wait-for-thread-complete-before-continuing#6935), you should call Join *outside* the loop. This means you will need a collection to hold references to the threads you started:

```
// Start all of the threads.
List<Thread> startedThreads = new List<Thread>();
foreach (...)
{
    Thread thread = new Thread(new ThreadStart(MyMethod));
    thread.Start();
    startedThreads.Add(thread);
}

// Wait for all of the threads to finish.
foreach (Thread thread in startedThreads)
{
    thread.Join();
}
```

In contrast, if you called Join inside the loop, the result would basically be the same as not using threads at all. Each iteration of the loop body would create and start a thread but then immediately Join it and wait for it to finish.

If the individual threads produce some result (write a message in a log, for example) then the messages may still appear out of order because there's no coordination between the threads. It is possible to get the threads to output their results in order by coordinating them with a Monitor.
Another way of waiting for a thread to finish is using an `AutoResetEvent`.

```
private readonly AutoResetEvent mWaitForThread = new AutoResetEvent(false);

private void Blah()
{
    ThreadStart tStart = new ThreadStart(MyMethod);
    Thread t = new Thread(tStart);
    t.Start();

    // ... (any other things)

    mWaitForThread.WaitOne();
}

private void MyMethod()
{
    // ... (execute any other action)
    mWaitForThread.Set();
}
```
How to wait for thread complete before continuing?
[ "c#", "multithreading", "compact-framework" ]
To illustrate, assume that I have two tables as follows:

```
VehicleID  Name
1          Chuck
2          Larry

LocationID  VehicleID  City
1           1          New York
2           1          Seattle
3           1          Vancouver
4           2          Los Angeles
5           2          Houston
```

I want to write a query to return the following results:

```
VehicleID  Name   Locations
1          Chuck  New York, Seattle, Vancouver
2          Larry  Los Angeles, Houston
```

I know that this can be done using server side cursors, i.e.:

```
DECLARE @VehicleID int
DECLARE @VehicleName varchar(100)
DECLARE @LocationCity varchar(100)
DECLARE @Locations varchar(4000)

DECLARE @Results TABLE
(
    VehicleID int,
    Name varchar(100),
    Locations varchar(4000)
)

DECLARE VehiclesCursor CURSOR FOR
SELECT [VehicleID], [Name]
FROM [Vehicles]

OPEN VehiclesCursor

FETCH NEXT FROM VehiclesCursor INTO @VehicleID, @VehicleName

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Locations = ''

    DECLARE LocationsCursor CURSOR FOR
    SELECT [City]
    FROM [Locations]
    WHERE [VehicleID] = @VehicleID

    OPEN LocationsCursor

    FETCH NEXT FROM LocationsCursor INTO @LocationCity

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @Locations = @Locations + @LocationCity
        FETCH NEXT FROM LocationsCursor INTO @LocationCity
    END

    CLOSE LocationsCursor
    DEALLOCATE LocationsCursor

    INSERT INTO @Results (VehicleID, Name, Locations)
    SELECT @VehicleID, @VehicleName, @Locations

    FETCH NEXT FROM VehiclesCursor INTO @VehicleID, @VehicleName
END

CLOSE VehiclesCursor
DEALLOCATE VehiclesCursor

SELECT * FROM @Results
```

However, as you can see, this requires a great deal of code. What I would like is a generic function that would allow me to do something like this:

```
SELECT VehicleID
     , Name
     , JOIN(SELECT City FROM Locations WHERE VehicleID = Vehicles.VehicleID, ', ') AS Locations
FROM Vehicles
```

Is this possible? Or something similar?
If you're using SQL Server 2005, you could use the FOR XML PATH command.

```
SELECT [VehicleID]
     , [Name]
     , (STUFF((SELECT CAST(', ' + [City] AS VARCHAR(MAX))
               FROM [Location]
               WHERE (VehicleID = Vehicle.VehicleID)
               FOR XML PATH ('')), 1, 2, '')) AS Locations
FROM [Vehicle]
```

It's a lot easier than using a cursor, and seems to work fairly well.

**Update**

For anyone still using this method with newer versions of SQL Server, there is another way of doing it which is a bit easier and more performant using the [`STRING_AGG`](https://learn.microsoft.com/en-us/sql/t-sql/functions/string-agg-transact-sql?view=sql-server-ver15) method that has been available since SQL Server 2017.

```
SELECT [VehicleID]
     , [Name]
     , (SELECT STRING_AGG([City], ', ')
        FROM [Location]
        WHERE VehicleID = V.VehicleID) AS Locations
FROM [Vehicle] V
```

This also allows a different separator to be specified as the second parameter, providing a little more flexibility over the former method.
Note that [Matt's code](https://stackoverflow.com/questions/6899/how-to-create-a-sql-server-function-to-join-multiple-rows-from-a-subquery-into#6961) will result in an extra comma at the end of the string; using COALESCE (or ISNULL for that matter) as shown in the link in Lance's post uses a similar method but doesn't leave you with an extra comma to remove. For the sake of completeness, here's the relevant code from Lance's link on sqlteam.com:

```
DECLARE @EmployeeList varchar(100)

SELECT @EmployeeList = COALESCE(@EmployeeList + ', ', '') + CAST(EmpUniqueID AS varchar(5))
FROM SalesCallsEmployees
WHERE SalCal_UniqueID = 1
```
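For experimenting without a SQL Server instance, the same aggregation can be sketched with SQLite's `GROUP_CONCAT` (its rough analogue of `STRING_AGG`) through Python's `sqlite3` module; the schema mirrors the tables from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Vehicles (VehicleID INTEGER, Name TEXT);
    CREATE TABLE Locations (LocationID INTEGER, VehicleID INTEGER, City TEXT);
    INSERT INTO Vehicles VALUES (1, 'Chuck'), (2, 'Larry');
    INSERT INTO Locations VALUES
        (1, 1, 'New York'), (2, 1, 'Seattle'), (3, 1, 'Vancouver'),
        (4, 2, 'Los Angeles'), (5, 2, 'Houston');
""")

# GROUP_CONCAT plays the role STRING_AGG plays in SQL Server 2017+.
rows = conn.execute("""
    SELECT V.VehicleID, V.Name, GROUP_CONCAT(L.City, ', ') AS Locations
    FROM Vehicles V
    JOIN Locations L ON L.VehicleID = V.VehicleID
    GROUP BY V.VehicleID, V.Name
    ORDER BY V.VehicleID
""").fetchall()

for row in rows:
    # e.g. (1, 'Chuck', 'New York, Seattle, Vancouver') -- note that
    # SQLite does not guarantee the concatenation order within a group.
    print(row)
```

This is only an illustration of the grouping shape; the T-SQL answers above are what you would actually run against SQL Server.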
How to create a SQL Server function to "join" multiple rows from a subquery into a single delimited field?
[ "sql", "sql-server", "string-concatenation" ]
I'm trying to use the CoreCon API in Visual Studio 2008 to programmatically launch device emulators. When I call `device.Connect()`, I inexplicably get a DirectoryNotFoundException. I get it if I try it in PowerShell or in a C# Console Application. Here's the code I'm using:

```
static void Main(string[] args)
{
    DatastoreManager dm = new DatastoreManager(1033);
    Collection<Platform> platforms = dm.GetPlatforms();

    foreach (var p in platforms)
    {
        Console.WriteLine("{0} {1}", p.Name, p.Id);
    }

    Platform platform = platforms[3];
    Console.WriteLine("Selected {0}", platform.Name);

    Device device = platform.GetDevices()[0];
    device.Connect();
    Console.WriteLine("Device Connected");

    SystemInfo info = device.GetSystemInfo();
    Console.WriteLine("System OS Version:{0}.{1}.{2}", info.OSMajor, info.OSMinor, info.OSBuildNo);

    Console.ReadLine();
}
```

Does anyone know why I'm getting this error? I'm running this on WinXP 32-bit, plain jane Visual Studio 2008 Pro. I imagine it's some config issue since I can't do it from a Console app or PowerShell.
Here's the stack trace:

```
System.IO.DirectoryNotFoundException was unhandled
  Message="The system cannot find the path specified.\r\n"
  Source="Device Connection Manager"
  StackTrace:
    at Microsoft.VisualStudio.DeviceConnectivity.Interop.ConManServerClass.ConnectDevice()
    at Microsoft.SmartDevice.Connectivity.Device.Connect()
    at ConsoleApplication1.Program.Main(String[] args) in C:\Documents and Settings\Thomas\Local Settings\Application Data\Temporary Projects\ConsoleApplication1\Program.cs:line 23
    at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
    at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
    at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
    at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ThreadHelper.ThreadStart()
```
It can be found at `<systemdrive>:\Program files\Common Files\Microsoft Shared\CoreCon\1.0\Bin`. This is the path where you can get this dll, so add this dll to your project.
Installing VS 2008 SP 1 fixed it for me.
Getting DirectoryNotFoundException when trying to Connect to Device with CoreCon API
[ "c#", "visual-studio", "windows-mobile", "compact-framework", "corecon" ]
Is there a way to implement a singleton object in C++ that is:

1. Lazily constructed in a thread-safe manner (two threads might simultaneously be the first user of the singleton - it should still only be constructed once).
2. Doesn't rely on static variables being constructed beforehand (so the singleton object is itself safe to use during the construction of static variables).

(I don't know my C++ well enough, but is it the case that integral and constant static variables are initialized before any code is executed (i.e., even before static constructors are executed - their values may already be "initialized" in the program image)? If so - perhaps this can be exploited to implement a singleton mutex - which can in turn be used to guard the creation of the real singleton.)
Basically, you're asking for synchronized creation of a singleton without using any synchronization (previously-constructed variables). In general, no, this is not possible. You need something available for synchronization.

As for your other question, yes, static variables which can be statically initialized (i.e. no runtime code necessary) are guaranteed to be initialized before other code is executed. This makes it possible to use a statically-initialized mutex to synchronize creation of the singleton. From the 2003 revision of the C++ standard:

> Objects with static storage duration (3.7.1) shall be zero-initialized (8.5) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called static initialization; all other initialization is dynamic initialization. Objects of POD types (3.9) with static storage duration initialized with constant expressions (5.19) shall be initialized before any dynamic initialization takes place. Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit.

If you *know* that you will be using this singleton during the initialization of other static objects, I think you'll find that synchronization is a non-issue. To the best of my knowledge, all major compilers initialize static objects in a single thread, so thread-safety is a non-issue during static initialization. You can declare your singleton pointer to be NULL, and then check to see if it's been initialized before you use it.

However, this assumes that you *know* that you'll use this singleton during static initialization. This is also not guaranteed by the standard, so if you want to be completely safe, use a statically-initialized mutex.

Edit: Chris's suggestion to use an atomic compare-and-swap would certainly work.
If portability is not an issue (and creating additional temporary singletons is not a problem), then it is a slightly lower overhead solution.
Here's [Meyer's singleton](https://en.wikipedia.org/wiki/Singleton_pattern), a very simple lazily constructed singleton getter:

```
Singleton &Singleton::self()
{
    static Singleton instance;
    return instance;
}
```

This is lazy, and C++11 requires it to be thread-safe. In fact, I believe that at least g++ implements this in a thread-safe manner. So if that's your target compiler *or* if you use a compiler which also implements this in a thread-safe manner (maybe newer Visual Studio compilers do? I don't know), then this might be all you need.

See also [N2513: Dynamic Initialization and Destruction with Concurrency](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2513.html) on this topic.
How do you lazily construct a singleton object thread-safely?
[ "c++", "multithreading", "thread-safety", "singleton", "lazy-initialization" ]
I have a WCF service that I have to reference from a .net 2.0 project. I have tried to reference it using the "**add web reference**" method but it messes up the params. For example, I have a method in the service that expects a `char[]` to be passed in, but when I add the web reference, the method expects an `int[]`. So then I tried to setup **svcutil** and it worked... kind of. I could only get the service class to compile by adding a bunch of *.net 3.0* references to my *.net 2.0* project. This didn't sit well with the architect so I've had to can it (and probably for the best too). So I was wondering if anyone has any pointers or resources on how I can setup a *.net 2.0* project to reference a **WCF service**.
This is one of those instances where you need to edit the WSDL. For a start, a useful tool: <http://codeplex.com/storm>
What binding are you using? I think if you stick to the basicHttp binding, you should be able to generate a proxy using the "add web reference" approach from a .NET 2 project. Perhaps if you post the contract/interface definition it might help?

Cheers, Richard
WCF Backward Compatibility Issue
[ "c#", ".net", "wcf" ]
Let's say that I want to have a table that logs the date and the number of columns in some other table (or really any sort of math / string concat etc).

```
CREATE TABLE `log` (
    `id` INTEGER NOT NULL AUTO_INCREMENT,
    `date` DATETIME NOT NULL,
    `count` INTEGER NOT NULL,
    PRIMARY KEY (`id`)
);
```

Is it possible to have the count column calculated for me whenever I do an insert? e.g. do something like:

```
INSERT INTO log (date='foo');
```

and have count calculated by mysql. Obviously I could do it myself by doing a query to get the count and inserting it, but this would be better.
Triggers are the best tool for annotating data when a table is changed by insert, update or delete. To automatically set the date column of a new row in the log with the current date, you'd create a trigger that looked something like this:

```
create trigger log_date before insert on log
for each row
begin
    set new.date = current_date();
end;
```
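To make the trigger idea concrete end-to-end, here is a runnable sketch using SQLite through Python's `sqlite3` module (SQLite's trigger syntax differs slightly from MySQL's, and the `foo` table is an assumed source table): an AFTER INSERT trigger fills in the `count` column from another table at insert time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (id INTEGER PRIMARY KEY);
    CREATE TABLE log (
        id    INTEGER PRIMARY KEY AUTOINCREMENT,
        date  TEXT NOT NULL DEFAULT CURRENT_DATE,
        count INTEGER
    );
    -- After each insert into log, record how many rows foo has.
    CREATE TRIGGER log_count AFTER INSERT ON log
    BEGIN
        UPDATE log SET count = (SELECT COUNT(*) FROM foo)
        WHERE id = NEW.id;
    END;
""")

conn.executemany("INSERT INTO foo (id) VALUES (?)", [(1,), (2,), (3,)])
conn.execute("INSERT INTO log DEFAULT VALUES")

row = conn.execute("SELECT count FROM log").fetchone()
print(row[0])  # 3 -- the number of rows in foo at insert time
```

The MySQL equivalent would use `BEFORE INSERT` with `SET NEW.count = (SELECT COUNT(*) FROM foo);`, avoiding the extra UPDATE.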
You definitely have to declare what to insert. This should be possible by using the [INSERT ... SELECT](http://dev.mysql.com/doc/refman/5.1/en/insert-select.html "INSERT ... SELECT") statement.

```
INSERT INTO log (date, count)
SELECT CURDATE() as date, count(id) as count
FROM foo;
```

Which should insert a new row into the log table, containing today's date and the number of rows in the foo table. (Assuming the foo table has an id column. Use the primary key or another indexed column.)
Calculated columns in mysql on INSERT statements
[ "sql", "mysql", "database" ]
So I am writing a registration form and I need the display name to be only numbers, letters and underscores. Have a look at my code and tell me what I'm doing wrong.

```
<form method="post" action="/" onsubmit="return check_form()">
    <input type="text" id="display-name" name="display-name" maxlength="255" />
    <input type="submit" />
</form>

<script type="text/javascript">
<!--
    var name_regex = /^([a-zA-Z0-9_])+/;

    function check_form()
    {
        if (!name_regex.test(document.forms[0].elements[0].value))
        {
            document.forms[0].elements[0].focus();
            alert("Your display name may only contain letters, numbers and underscores");
            return false;
        }
    }
-->
</script>
```

It's obviously been trimmed down to not include anything not related to the problem, but even this snippet doesn't work.
Your regex

```
/^([a-zA-Z0-9_])+/
```

looks for:

1. Start of string (check), followed by
2. 1 or more letters, numbers, or underscores (check)

And then whatever comes after it doesn't matter. This regex will match anything at all, so long as it begins with a letter, number, or underscore.

If you put a `$` at the end, then it will work - `$` matches 'end of string', so the only way it can match is if there are *only* numbers, letters, and underscores between the start and end of the string.

```
/^([a-zA-Z0-9_])+$/
```

Secondly, I'd suggest using `document.getElementById('display-name').value` instead of `document.forms`, as it won't break if you rearrange the HTML, and is more 'the commonly accepted standard of what to do'.
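To see the difference concretely, a quick sketch in plain JavaScript (the sample input strings are made up):

```javascript
var unanchored = /^([a-zA-Z0-9_])+/;
var anchored   = /^([a-zA-Z0-9_])+$/;

// Matches: it begins with valid characters; the trailing "!" is ignored.
console.log(unanchored.test("user_1!"));  // true

// Fails: the "$" forces the entire string to be valid characters.
console.log(anchored.test("user_1!"));    // false

// Still matches when the whole string really is clean.
console.log(anchored.test("user_1"));     // true
```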
My regexp would go along the lines of: `/^[a-zA-Z0-9_]+$/`

Edit: I think it's the lack of a line-end `$` that makes it fail.
What did I do wrong here? [Javascript Regex]
[ "javascript", "regex" ]
When attempting to compile my C# project, I get the following error:

```
'C:\Documents and Settings\Dan\Desktop\Rowdy Pixel\Apps\CleanerMenu\CleanerMenu\obj\Debug\CSC97.tmp' is not a valid Win32 resource file.
```

Having gone through many Google searches, I have determined that this is usually caused by a 256x256 image inside an icon used by the project. I've gone through all the icons and removed the 256x256 versions, but the error persists. Any ideas on how to get rid of this?

---

@Mike: It showed up mysteriously one night. I've searched the csproj file, but there's no mention of a CSC97.tmp (I also checked the solution file, but I had no luck there either). In case it helps, I've posted the [contents of the csproj file on pastebin](http://pastebin.com/mcd2607b).

@Derek: No problem. Here's the compiler output.

```
------ Build started: Project: Infralution.Licensing, Configuration: Debug Any CPU ------
Infralution.Licensing -> C:\Documents and Settings\Dan\Desktop\Rowdy Pixel\Apps\CleanerMenu\Infralution.Licensing\bin\Debug\Infralution.Licensing.dll
------ Build started: Project: CleanerMenu, Configuration: Debug Any CPU ------
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Csc.exe /noconfig /nowarn:1701,1702 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /main:CleanerMenu.Program /reference:"C:\Documents and Settings\Dan\Desktop\Rowdy Pixel\Apps\CleanerMenu\Infralution.Licensing\bin\Debug\Infralution.Licensing.dll" /reference:..\NotificationBar.dll /reference:..\PSTaskDialog.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Drawing.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Windows.Forms.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:obj\Debug\Interop.IWshRuntimeLibrary.dll /debug+ /debug:full /optimize- /out:obj\Debug\CleanerMenu.exe /resource:obj\Debug\CleanerMenu.Form1.resources /resource:obj\Debug\CleanerMenu.frmAbout.resources /resource:obj\Debug\CleanerMenu.ModalProgressWindow.resources /resource:obj\Debug\CleanerMenu.Properties.Resources.resources /resource:obj\Debug\CleanerMenu.ShortcutPropertiesViewer.resources /resource:obj\Debug\CleanerMenu.LocalizedStrings.resources /resource:obj\Debug\CleanerMenu.UpdatedLicenseForm.resources /target:winexe /win32icon:CleanerMenu.ico ErrorHandler.cs Form1.cs Form1.Designer.cs frmAbout.cs frmAbout.Designer.cs Licensing.cs ModalProgressWindow.cs ModalProgressWindow.Designer.cs Program.cs Properties\AssemblyInfo.cs Properties\Resources.Designer.cs Properties\Settings.Designer.cs Scanner.cs ShortcutPropertiesViewer.cs ShortcutPropertiesViewer.Designer.cs LocalizedStrings.Designer.cs UpdatedLicenseForm.cs UpdatedLicenseForm.Designer.cs
error CS1583: 'C:\Documents and Settings\Dan\Desktop\Rowdy Pixel\Apps\CleanerMenu\CleanerMenu\obj\Debug\CSC97.tmp' is not a valid Win32 resource file

Compile complete -- 1 errors, 0 warnings
------ Skipped Build: Project: CleanerMenu Installer, Configuration: Debug ------
Project not selected to build for this solution configuration
========== Build: 1 succeeded or up-to-date, 1 failed, 1 skipped ==========
```

I have also uploaded the icon I am using. You can [view it here.](http://rowdypixel.com/tmp/CleanerMenu.ico)

---

@Mike: Thanks! After removing everything but the 32x32 image, everything worked great. Now I can go back and add the other sizes one-by-one to see which one is causing me grief. :)

@Derek: Since I first got the error, I'd done a complete reinstall of Windows (and along with it, the SDK.) It wasn't the main reason for the reinstall, but I had a slim hope that it would fix the problem. Now if only I can figure out why it previously worked with all the other sizes...
I don't know if this will help, but from [this forum](http://forums.msdn.microsoft.com/en-US/csharplanguage/thread/4217bec6-ea65-465f-8510-757558b36094/):

> I added an .ico file to the application section of the properties page, and received the error that's been described. When I checked the icon file with an icon editor, it turned out that the file had more than one version of the image, i.e. (16 x 16, 24 x 24, 32 x 32, 48 x 48, Vista compressed). I removed the other formats that I didn't want, resaved the file (just with 32 x 32), and the application now compiles without error.

Try opening the icon in an icon editor and see if you see other formats like described (also, try removing the icon and seeing if the project will build again, just to verify the icon is causing it).
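If you don't have an icon editor handy, the sizes stored in an .ico file can be listed by parsing its directory header. A small sketch (the `ico_sizes` helper name is made up; it reads the standard ICONDIR/ICONDIRENTRY layout, where a stored width or height of 0 denotes 256 pixels):

```python
import struct

def ico_sizes(data):
    """List the (width, height) of each image stored in .ico file bytes."""
    reserved, ico_type, count = struct.unpack_from("<HHH", data, 0)
    if reserved != 0 or ico_type != 1:
        raise ValueError("not an .ico file")
    sizes = []
    for i in range(count):
        off = 6 + 16 * i  # each ICONDIRENTRY is 16 bytes
        width, height = data[off], data[off + 1]
        # A stored value of 0 means 256 pixels.
        sizes.append((width or 256, height or 256))
    return sizes

# Synthetic icon directory with a 32x32 entry plus a 256x256 entry;
# for a real file you would pass open(path, "rb").read() instead.
demo = struct.pack("<HHH", 0, 1, 2)
demo += struct.pack("<BBBBHHII", 32, 32, 0, 0, 1, 32, 40, 22)
demo += struct.pack("<BBBBHHII", 0, 0, 0, 0, 1, 32, 40, 62)
print(ico_sizes(demo))  # [(32, 32), (256, 256)]
```

Running this over the project's icon should reveal whether a 256x256 entry is still lurking in the file.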
I had a similar issue with an "obj/debug/**\***.tmp" file erroring out in my build log. Turns out my C:\ drive was out of space. After clearing some space, my builds started working.
Invalid Resource File
[ "c#", "visual-studio", "resources" ]
[Project Darkstar](http://en.wikipedia.org/wiki/Project_Darkstar) was the topic of the monthly [JavaSIG](http://www.javasig.com/meeting/home.xhtml) meeting down at the Google offices in NYC last night. For those that don't know (probably everyone), Project Darkstar is a framework for massively multiplayer online games that attempts to take care of all of the "hard stuff." The basic idea is that you write your game server logic in such a way that all operations are broken up into tiny tasks. You pass these tasks to the Project Darkstar framework, which handles distributing them to a specific node in the cluster, any concurrency issues, and finally persisting the data.

Apparently doing this kind of thing is a much different problem for video games than it is for enterprise applications. Jim Waldo, who gave the lecture, claims that MMO games have a DB read/write ratio of 50/50, whereas enterprise apps are more like 90% read, 10% write. He also claims that most existing MMOs keep everything in memory exclusively, and only dump to a DB every 6 hours or so. This means if a server goes down, you would lose all of the work since the last DB dump.

Now, the project itself sounds really cool, but I don't think the industry will accept it. First, you have to write your server code in Java. The client code can be written in anything (Jim claims ActionScript 3 is the most popular, followed by C++), but the server stuff has to be Java. Sounds good to me, but I really get the impression that everyone in the games industry hates Java.

Second, unlike other industries where developers prefer to use existing frameworks and libraries, the guys in the games industry seem to like to write everything themselves. Not only that, they like to rewrite everything for every new game they produce. Things are starting to change, where developers are using Havok for physics, Unreal Engine 3 as their platform, etc., but for the most part it looks like everything is still proprietary.
So, are the guys at Project Darkstar just wasting their time? Can a general framework like this really work for complex games with the performance that is required? Even if it does work, are game companies willing to use it?
**Edit: This was written before Oracle bought Sun and started a rampage to kill everything that does not make them a billion $ per day. See the comments for an OSS Fork.** *I still stand by my opinion that stuff like that (MMO Middleware) is realistic, you just need a company that doesn't suck behind it.* The Market may be dominated by a few large games, but that does not mean that there is not a lot of room for more niche games. Let's face it: If you want to reach 100,000+ players, you'll end up building your own technology stack, at least for the critical core. That's what CCP did for EVE Online ([StacklessIO](http://myeve.eve-online.com/devblog.asp?a=blog&bid=584)), that's what Blizzard did for World of Warcraft (although they do use many third-party libraries), that's what Mythic did for Warhammer Online (although they are based on Gamebryo). However, if you aim to be a small, niche MMO (like the dozens of Free-to-Play/Itemshop MMOs), then getting the Network stuff right is just insanely hard, data consistency is even harder and scalability is the biggest b\*tch. But game technology is not your only problem - you also need to tackle Billing. Credit Card only? Have fun selling in Germany then, people there want ELV. That's where you need a reliable billing provider, but you still need to wire the billing application into your accounts to make sure that accounts are blocked/reactivated when billing fails. There are some companies already offering "MMO Infrastructure Services" (e.g. [Arvato's EEIS](http://www.e-eis.com/cms/front_content.php?changelang=2)), but the bottom line is: Stuff like Project Darkstar IS realistic, but assuming that you can build a Multi-Billion-MMO entirely on a Third Party Stack is optimistic, possibly idealistic. But then again, entirely inventing all of the technology is even more stupid - use the Third Party stuff that you need (e.g. 
Billing, Font Rendering, Audio Output...), but [write the stuff that really makes or breaks your business](http://www.codinghorror.com/blog/archives/001172.html) (e.g. Network stack, User interface, etc.) on your own. (Note: Jeff's posting may be [a bit flawed](http://damilare.net/random-talks/dare-obasanjo-vs-jeff-atwood-on-html-validation/), but the overall direction is correct IMHO.) Addendum: Also, the game industry does license and reuse engines a lot. The most prominent game engines are the [Unreal Engine](http://en.wikipedia.org/wiki/Unreal_Engine), [Source Engine](http://en.wikipedia.org/wiki/Source_Engine) and [id Tech](http://en.wikipedia.org/wiki/Id_tech), which fuel dozens, if not hundreds, of games. But there are some lesser-known (outside of the industry) engines. There is [Gamebryo](http://en.wikipedia.org/wiki/Gamebryo), the middleware behind games like Civilization 4 and Fallout 3; there was [RenderWare](http://en.wikipedia.org/wiki/RenderWare), which is now EA-in-house only, but was used in games like Battlefield 2 or The Sims 3. There is the open source [Ogre3d](http://www.ogre3d.org/), which was used in [some](http://www.dtp-entertainment.com/en/games/game_detail.php?id=2973&landid=1) commercial [titles](http://www.venetica-game.com/). If you're just looking for sound, there's stuff like [FMOD](http://www.fmod.org/), or if you want to do font rendering, why not give [FreeType](http://www.freetype.org/) a spin? What I'm saying is: Third-party engines/middleware do exist, and they ARE being used successfully, and have been for more than a decade (I know for sure that id's Wolfenstein engine was licensed to other companies, and that was 1992), even by big companies in multi-million-dollar titles. The important thing is the support, because a good engine with no help in case of an issue is pretty much worthless, or at least very expensive if the developer has to spend their game-development time on unnecessary debugging of the engine. 
If the Darkstar folks manage to get the support side right and 2 or 3 higher profile titles out, I do believe it could succeed in opening the MMO market to a lot more smaller developers and indies.
Sounds like useless tech to me. The MMO world is controlled by a few big game companies that already have their own tech in place. Indie game developers love trying to build MMOs and sometimes they do, but those games rarely gain traction. Larger companies breaking into the MMO world would probably license "proven" technology, or extend their own. Game companies reuse vast quantities of code from game to game. Most/many game companies have developed their own tech internally, and use it on every game they produce. Occasionally, they will do something like replace their physics code with a 3rd party physics engine. If their internal code base (game engine, design tools, internal pipeline) starts to age too much, or become unwieldy, they might switch to one of the big game engines like Unreal. Even then, major chunks of code will continue to be re-used from game to game.
Is Project Darkstar Realistic?
[ "", "java", "concurrency", "distributed", "" ]
What options are there in the industry for enterprise reporting? I'm currently using SSRS 2005, and know that there is another version coming out with the new release of MSSQL. But, it seems like it might also be a good time to investigate the market to see what else is out there. What have you encountered? Do you like it/dislike it? Why? Thank you.
I've used Cognos Series 7, Cognos Series 8, Crystal Reports, Business Objects XI R2 WebIntelligence, Reporting Services 2000, Reporting Services 2005, and Reporting Services 2008. Here's my feedback on what I've learned: **Reporting Services 2008/2005/2000** PROS 1. Cost: Cheapest enterprise business intelligence solution if you are using MS SQL Server as a back-end. You also have a best-in-class ETL solution at no additional cost if you throw in SSIS. 2. Most Flexible: Most flexible reporting solution I've ever used. It has always met all my business needs, particularly in its latest incarnation. 3. Easily Scalable: We initially used this as a departmental solution supporting about 20 users. We eventually expanded it to cover a few thousand users. Despite having a really bad quality virtual server located in a remote data center, we were able to scale to about 50-100 concurrent user requests. On good hardware at a consulting gig, I was able to scale it to a larger set of concurrent users without any issues. I've also seen implementations where multiple SSRS servers were deployed in different countries and SSIS was used to synch the data in the back-ends. This allowed for solid performance in a distributed manner at almost no additional cost. 4. Source Control Integration: This is CRITICAL to me when developing reports with my business intelligence teams. No other BI suite offers an out-of-box solution for this that I've ever used. Every other platform I used either required purchasing a 3rd party add-in or required you to promote reports between separate development, test, and production environments. 5. Analysis Services: I like the tight integration with Analysis Services between SSRS and SSIS. I've read about instances where Oracle and DB2 quotes include installing a SQL Server 2005 Analysis Services server for OLAP cubes. 6. Discoverability: No system has better discoverability than SSRS. 
There are more books, forums, articles, and code sites on SSRS than any other BI suite that I've ever used. If I needed to figure out how to do something in SSRS, I could almost always find it with a few minutes or hours of work. CONS 1. IIS Required for SSRS 2005/2000: Older versions of SSRS required installing IIS on the database server. This was not permissible from an internal controls perspective when I worked at a large bank. We eventually implemented SSRS without authorized approval from IT operations and basically asked for forgiveness later. **This is not an issue in SSRS 2008 since IIS is no longer required.** 2. Report Builder: The web-based report builder was non-existent in SSRS 2000. The web-based report builder in SSRS 2005 was difficult to use and did not have enough functionality. The web-based report builder in SSRS 2008 is definitely better, but it is still too difficult to use for most business users. 3. Database Bias: It works best with Microsoft SQL Server. It isn't great with Oracle, DB2, and other back-ends. **Business Objects XI WebIntelligence** PROS 1. Ease of Use: Easiest to use for your average non-BI end-user for developing ad hoc reports. 2. Database Agnostic: Definitely a good solution if you expect to use Oracle, DB2, or another database back-end. 3. Performant: Very fast performance since most of the page navigations are basically file-system operations instead of database calls. CONS 1. Cost: Number one problem. If I want to scale up my implementation of Business Objects from 30 users to 1000 users, then SAP will make certain to charge you a few hundred thousand dollars. And that's just for the Business Objects licenses. Add in the fact that you will also need database server licenses, and you are now talking about a very expensive system. 
Of course, that could be the personal justification for getting Business Objects: if you can convince management to purchase a very expensive BI system, then you can probably convince management to pay for a large BI department. 2. No Source Control: Lack of out-of-the-box source control integration leads to errors in accidentally modifying and deploying old report definitions by mistake. The "work-around" for this is to promote reports between environments -- a process that I do NOT like to do since it slows down report development and introduces variables from environmental differences. 3. No HTML Email Support: You cannot send an HTML email via a schedule. I regularly do this in SSRS. You can buy an expensive 3rd party add-in to do this, but you shouldn't have to spend more money for this functionality. 4. Model Bias: Report development requires universes -- basically a data model. That's fine for ad hoc report development, but I prefer to use stored procedures to have full control of performance. I also like to build flat tables that are then queried to avoid costly complex joins during report run-time. It is silly to have to build universes that just contain flat tables that are only used by one report. You shouldn't have to build a model just to query a table. Stored procedures are also not supported out of the box without hacking the SQL overrides. 5. Poor Parameter Support: Parameter support is terrible in BOXI WebIntelligence reports. Although I like the meta-data refresh options for general business users, it just isn't robust enough when trying to set up schedules. I almost always have to clone reports and alter the filters slightly, which leads to unnecessary report definition duplication. SSRS beats this hands down, particularly since you can make the value and the label have different values -- unlike BOXI. 6. Inadequate Report Linking Support: I wanted to store one report definition in a central folder and then create linked reports for other users. 
However, I quickly found out end-users needed to have full rights on the parent object to use the object in their own folder. This defeated the entire purpose of using a linked report object. Give me SSRS! 7. Separate CMC: Why do you have to launch another application just to manage your object security? Worse, why isn't the functionality identical between CMC and InfoSys? For example, if you want to set up a scheduled report to retry on failed attempts, then you can specify the number of retries and the retry interval in CMC. However, you can't do this in InfoSys and you can't see the information either. InfoSys allows you to set up event-driven schedules and CMC does not support this feature. 8. Java Version Dependency: BOXI works great on end-user machines so long as they are running the same version of Java as the server. However, once a newer version of Java is installed on your machine, things start to break. We're running Java 1.5 on our BOXI R2 server (the default Java client) and almost everyone in the company is on Java 1.6. If you use Java 1.6, then prompts can freeze your IE and Firefox sessions or crash your report builder unexpectedly. 9. Weak Discoverability: Aside from BOB (Business Objects Board), there isn't much out there on the Internet regarding troubleshooting Business Objects problems. **Cognos Series 8** PROS 1. Ease of Use: Although BOXI is easier to use for writing simple reports for general business users, Cognos is a close 2nd in this area. 2. Database Agnostic: Like BOXI, this is definitely a good solution if you expect to use Oracle, DB2, or another database back-end. 3. FrameWork Manager: This is definitely a best-in-class meta-data repository. BOXI's universe builder wishes it was half as good. This tool is well suited to promoting packages across Development, Test, and Production environments. CONS 1. Cost: Same issue as Business Objects. Similar cost structure. Similar database licensing requirements as well. 2. 
No Source Control: Same issue as Business Objects. I'm not aware of any 3rd party tools that resolve this issue, but they might exist. 3. Model Bias: Same issue as Business Objects. Has better support for stored procedures in FrameWork Manager, though. 4. Poor Parameter Support: Same issue as Business Objects. Has better support for creating prompt pages if you can code in Java. Buggy behavior, though, when users click the back button to return to the prompt page. SSRS beats this out hands-down. 5. Inadequate Error Handling: Error messages in Cognos are nearly impossible to decipher. They generally give you a long negative number and a stack dump as part of the error message. I don't know how many times we "resolved" these error messages by rebuilding reports from scratch. For some reason, it is pretty easy to corrupt a report definition. 6. No Discoverability: It is very hard to track down any answers on how to troubleshoot problems or to implement functionality in Cognos. There just isn't adequate community support in Internet-facing websites for the products. As you can guess from my answer, I believe Microsoft's BI suite is the best platform on the market. However, I must state that most articles I've read on comparisons of BI suites usually do not rate Microsoft's offering as well as SAP's Business Objects and Cognos's Series 8 products. I've also seen Microsoft come out at the bottom in internal reviews of BI suites in two separate companies after they were reviewed by the reigning CIOs. In both instances, though, it seemed like it all boiled down to wanting to be perceived as a major department that justified a large operating budget.
I'd like to make two contributions. One is very negative (CR is rubbish) and the other is very positive (SSRS is backing store independent and available at no cost). On a side note, if you mod an answer down, then add a comment explaining why you think the answer is wrong or counterproductive, unless someone else already said the same thing. Even then, a simple "as above" would be helpful. ## Crystal Reports is rubbish Crystal Reports is an insult to the development community. Simple dialog resize bugs that would be the work of moments to fix have remained uncorrected over ten years and six major releases, so I really doubt that any attempt is ever made to address the tough stuff. Crystal Reports is profoundly untrustworthy, as this SQL demonstrates. ``` SELECT COUNT(*) FROM sometable WHERE 1=0 ``` This statement produces a result of one when it should produce zero. This is a repeatable off-by-one error in the heart of the Crystal Reports SQL engine. The support for CR is equally dismal, having been moved offshore many years ago. If you cough up $200 for a support call, an unintelligible support rep will misunderstand your question and insult your intelligence until you give up, at which point he will - because you have chosen to give up - declare the call resolved. If it's really this bad, why is it so popular? It isn't popular. It's very *un*popular. It gets a toe-hold via great marketing. Management types see glossy adverts promising much, and because CR has been around so long they assume it's all true. Much like bindis (Australian prickle weed) in your lawn, once installed it's nearly impossible to get rid of. Admitting to incompetence is a bad career move for a manager. When managers lack the technical expertise to make a decision, rather than allow a technical person to make the decision they fall back on precedent and repeat the mistakes of their peers. 
They also fail to realise that if they want to actually use the web delivery stuff they are up for a server licence. Also, longevity means it's easy to find people with CR experience. For the details and a good laugh I recommend these links. * [Clubbing the Crystal Dodo](http://secretgeek.net/CrystalDodo.asp) * [Crystal Reports "Sucks"](https://web.archive.org/web/20111030191139/http://msmvps.com/blogs/williamryan/archive/2004/11/07/18148.aspx) * [Crystal Reports Sucks Donkey Dork](http://unbecominglevity.blogharbor.com/blog/_archives/2005/12/21/1474036.html) (dead link, still trying to find content) Or just type "crystal reports sucks" into Google. For a balanced perspective, also try "crystal reports rocks". Don't worry, this won't take much of your time. There are *no* positive reviews outside their own marketing hype. Now for something more positive. ## SQL Reports is effectively free You can install it at no charge as part of *SQL Express with Advanced Services*. You can also install .NET 2.x which brings with it ADO.NET drivers for major database providers as well as generic OLEDB and ODBC support. Since SSRS uses ADO.NET, this means you can connect SSRS to anything to which you can connect ADO.NET, i.e. just about anything. The terms of the licence applying to SSRS as supplied with SQL Express require it to be deployed and installed as part of SQL Express. They don't have anything to say about where reports get their data. SQL Express is limited, but the accompanying SSRS has no such limitations. If your data is provided by another database engine you can support as many users as that engine is licensed to support. Don't get me wrong, at work we have dozens of licensed copies of MS SQL Server. I'm just saying that you can use SSRS against the backing store of your choice, without having to find or justify budget for it. What you will be missing is scheduling and subscription support. 
I speak from experience when I say that it is not profoundly difficult to write a service that fills the gap. SSRS fulfils every promise that CR makes. Easy to use, good support for user DIY, has a schema abstraction tool conceptually similar to CR BO but which works properly, high performance, schedulable, easy to use, stable, flexible, easy to extend, can be controlled interactively or programmatically. In the 2008 edition they even support rich-formatted flow-based templates (mail merge for form letters). It is the best reporting solution I have ever seen in twenty years of software development on platforms ranging from mainframes through minis to micros. It ticks every box I can think of and has only one profound weakness I can recall - the layout model doesn't support positioning relative to page bottom and the only workaround is positioning relative to page top on a known height page. It does not address problems like heterogeneous data provision, but IMHO these can and should be addressed outside of the report proper. Plenty of data warehousing solutions (such as SSIS) provide tools for solving such problems, and it would be absurd to put a half-assed duplicate capability in the report engine. ## Getting a sane decision out of your pointy-haired boss Tell him you think that given its problematic history and unpopularity with developers, choosing Crystal Reports is a courageous move that marks him as a risk-taker. Some bosses are so stupid they will think this is a good thing but with them you are doomed anyway.
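Returning to the `SELECT COUNT(*) ... WHERE 1=0` claim quoted earlier: in standard SQL, `COUNT(*)` over a predicate that matches no rows is well defined and must return zero, so any conforming engine disagrees with the behaviour described. A quick sanity check against SQLite (used here purely as a stand-in for a conforming engine; the complaint above was about Crystal Reports' own SQL layer):

```python
import sqlite3

# COUNT(*) over a predicate that matches no rows must return 0, never 1.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (id INTEGER)")
conn.execute("INSERT INTO sometable VALUES (1), (2)")
(count,) = conn.execute("SELECT COUNT(*) FROM sometable WHERE 1=0").fetchone()
print(count)  # -> 0
conn.close()
```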
Enterprise Reporting Solutions
[ "", "sql", "reporting-services", "reporting", "" ]
I'm trying to do a simple test php script for sessions. Basically it increments a counter (stored in `$_SESSION`) every time you refresh that page. That works, but I'm trying to have a link to destroy the session which reloads the page with the `?destroy=1` parameter. I've tried a couple of if statements to see if that parameter is set and if so to destroy the session but it doesn't seem to work. I've even put an if statement in the main body to pop-up a message if the parameter is set - but it doesn't seem to be picked up. I know I'm doing something silly (I'm a PHP newbie) but I can't seem to find what it is... See code here: ``` <?php if ($_POST['destroy']) { session_destroy(); } else { session_start(); } ?> <html> <head> <title>Session test</title> </head> <body> <?php if (isset($_POST['destroy'])) { echo "Destroy set"; } $_SESSION['counter']++; echo "You have visited this page " . $_SESSION['counter'] . " times" . "<BR>"; echo "I am tracking you using the session id " . session_id() . "<BR>"; echo "Click <a href=\"" . $_SERVER['PHP_SELF'] . "?destroy=1\">here</a> to destroy the session."; ?> ```
I think you put ``` $_POST['destroy'] ``` instead of ``` $_GET['destroy'] ``` You need to use a form if you'd like to use a $\_POST variable. $\_GET variables are stored in the URL.
By the way, you can use > $\_REQUEST['destroy'] which works regardless of whether the data is passed in a POST or a GET request.
Accessing html parameter in PHP
[ "", "php", "session", "" ]
I have a quad core machine and would like to write some code to parse a text file that takes advantage of all four cores. The text file basically contains one record per line. Multithreading isn't my forte so I'm wondering if anyone could give me some patterns that I might be able to use to parse the file in an optimal manner. My first thoughts are to read all the lines into some sort of queue and then spin up threads to pull the lines off the queue and process them, but that means the queue would have to exist in memory and these are fairly large files so I'm not so keen on that idea. My next thoughts are to have some sort of controller that will read in a line and assign it a thread to parse, but I'm not sure if the controller will end up being a bottleneck if the threads are processing the lines faster than it can read and assign them. I know there's probably another simpler solution than both of these but at the moment I'm just not seeing it.
I'd go with your original idea. If you are concerned that the queue might get too large, implement a buffer zone for it (e.g. if it gets above 100 lines, stop reading the file, and if it drops below 20, start reading again; you'd need to do some testing to find the optimal barriers). Make it so that any of the threads can potentially be the "reader thread"; since a thread has to lock the queue to pull an item out anyway, it can also check whether the "low buffer region" has been hit and start reading again. While it's doing this the other threads can read out the rest of the queue. Or if you prefer, have one reader thread assign the lines to three other *processor* threads (via their own queues) and implement a [work-stealing strategy](http://portal.acm.org/citation.cfm?doid=324133.324234). I've never done this so I don't know how hard it is.
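A minimal sketch of the buffer-zone idea (in Python rather than the question's C#, purely to illustrate the pattern; a bounded `queue.Queue(maxsize=...)` gives the high-water behaviour for free, since the reader simply blocks when the buffer is full rather than toggling between barriers):

```python
import queue
import threading

def parse_file(path, num_workers=4, max_buffered=100):
    """Bounded producer/consumer line parser: the reader blocks once the
    queue holds max_buffered lines, so memory stays bounded even for
    very large files."""
    q = queue.Queue(maxsize=max_buffered)
    results, lock = [], threading.Lock()
    sentinel = object()  # tells a worker there are no more lines

    def worker():
        while True:
            line = q.get()
            if line is sentinel:
                break
            parsed = line.strip().upper()  # stand-in for real per-record parsing
            with lock:
                results.append(parsed)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    with open(path) as f:
        for line in f:     # the file is streamed, never held fully in memory
            q.put(line)    # blocks while the buffer is full
    for _ in threads:
        q.put(sentinel)    # one sentinel per worker
    for t in threads:
        t.join()
    return results
```

Note that results come back in nondeterministic order; if ordering matters, tag each line with its index before queueing it.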
Mark's answer is the simpler, more elegant solution. Why build a complex program with inter-thread communication if it's not necessary? Spawn 4 threads. Each thread calculates size-of-file/4 to determine its start point (and stop point). Each thread can then work entirely independently. The *only* reason to add a special thread to handle reading is if you expect some lines to take a very long time to process *and* you expect that these lines are clustered in a single part of the file. Adding inter-thread communication when you don't need it is a *very bad idea*. You greatly increase the chance of introducing an unexpected bottleneck and/or synchronization bugs.
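A sketch of this splitting approach (again in Python for illustration only). The one subtlety the description glosses over is that size/4 byte offsets rarely fall on line boundaries, so each worker must snap its start to the next newline and read one line past its stop:

```python
import os
import threading

def count_lines_parallel(path, num_workers=4):
    """Each worker owns roughly size/num_workers bytes; chunk edges are
    snapped to newline boundaries so no record is split or double-counted."""
    size = os.path.getsize(path)
    chunk = max(size // num_workers, 1)
    counts = [0] * num_workers

    def worker(i):
        start = i * chunk
        stop = size if i == num_workers - 1 else (i + 1) * chunk
        with open(path, "rb") as f:
            if start > 0:
                f.seek(start - 1)
                f.readline()  # skip past the line straddling the boundary
            while f.tell() < stop:
                line = f.readline()
                if not line:
                    break
                counts[i] += 1  # stand-in for real per-record processing

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts)
```

Seeking to `start - 1` and discarding one line means the previous worker always owns any line that crosses a chunk boundary, which is what keeps the workers fully independent with no shared queue.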
Multicore Text File Parsing
[ "", "c#", "multithreading", "" ]
I have a data structure that represents a directed graph, and I want to render that dynamically on an HTML page. These graphs will usually be just a few nodes, maybe ten at the very upper end, so my guess is that performance isn't going to be a big deal. Ideally, I'd like to be able to hook it in with jQuery so that users can tweak the layout manually by dragging the nodes around. Note: I'm not looking for a charting library.
I've just put together what you may be looking for: <http://www.graphdracula.net> It's JavaScript with directed graph layout, SVG, and you can even drag the nodes around. Still needs some tweaking, but is totally usable. You create nodes and edges easily with JavaScript code like this: ``` var g = new Graph(); g.addEdge("strawberry", "cherry"); g.addEdge("cherry", "apple"); g.addEdge("id34", "cherry"); ``` I used the previously mentioned Raphael JS library (the graffle example) plus some code for a force-based graph layout algorithm I found on the net (everything open source, MIT license). If you have any remarks or need a certain feature, I may implement it, just ask! --- You may want to have a look at other projects, too! Below are two meta-comparisons: * [SocialCompare](http://socialcompare.com/en/comparison/javascript-graphs-and-charts-libraries) has an extensive list of libraries, and the "Node / edge graph" line will filter for graph visualization ones. * DataVisualization.ch has evaluated many libraries, including node/graph ones. Unfortunately there's no direct link so you'll have to filter for "graph":[![Selection DataVisualization.ch](https://i.stack.imgur.com/o4lwD.jpg)](http://selection.datavisualization.ch/) Here's a list of similar projects (some have been already mentioned here): ### Pure JavaScript Libraries * [vis.js](http://visjs.org/#gallery) supports many types of network/edge graphs, plus timelines and 2D/3D charts. Auto-layout, auto-clustering, springy physics engine, mobile-friendly, keyboard navigation, hierarchical layout, animation etc. [MIT licensed](https://github.com/almende/vis) and developed by a Dutch firm specializing in research on self-organizing networks. * [Cytoscape.js](http://js.cytoscape.org) - interactive graph analysis and visualization with mobile support, following jQuery conventions. 
Funded via NIH grants and developed by [@maxkfranz](https://stackoverflow.com/users/947225/maxkfranz) (see [his answer below](https://stackoverflow.com/a/10319429/1269037)) with help from several universities and other organizations. * [The JavaScript InfoVis Toolkit](http://thejit.org/demos.html) - Jit, an interactive, multi-purpose graph drawing and layout framework. See for example the [Hyperbolic Tree](http://philogb.github.io/jit/static/v20/Docs/files/Visualizations/Hypertree-js.html). Built by Twitter dataviz architect [Nicolas Garcia Belmonte](http://www.sencha.com/conference/session/sencha-charting-visualization) and [bought by Sencha](http://philogb.github.io/infovis/) in 2010. * [D3.js](http://d3js.org/) Powerful multi-purpose JS visualization library, the successor of Protovis. See the [force-directed graph](http://bl.ocks.org/mbostock/4062045) example, and other graph examples in the [gallery](https://github.com/mbostock/d3/wiki/Gallery). * [Plotly's](https://plot.ly./) JS visualization library uses D3.js with JS, Python, R, and MATLAB bindings. See a networkx example in IPython [here](https://plot.ly/ipython-notebooks/network-graphs/), human interaction example [here](https://plot.ly/ipython-notebooks/bioinformatics/#In-%5B54%5D), and [JS Embed API](https://github.com/plotly/Embed-API). * [sigma.js](http://sigmajs.org/) Lightweight but powerful library for drawing graphs * [jsPlumb](http://jsplumbtoolkit.com/) jQuery plug-in for creating interactive connected graphs * [Springy](http://getspringy.com/) - a force-directed graph layout algorithm * [JS Graph It](http://js-graph-it.sourceforge.net/) - drag'n'drop boxes connected by straight lines. Minimal auto-layout of the lines. * [RaphaelJS's Graffle](http://raphaeljs.com/graffle.html) - interactive graph example of a generic multi-purpose vector drawing library. RaphaelJS can't layout nodes automatically; you'll need another library for that. 
* [JointJS Core](http://www.jointjs.com/demos) - David Durman's MPL-licensed open source diagramming library. It can be used to create either static diagrams or fully interactive diagramming tools and application builders. Works in browsers supporting SVG. Layout algorithms are not included in the core package. * [mxGraph](https://github.com/jgraph/mxgraph) Previously commercial HTML 5 diagramming library, now available under Apache v2.0. mxGraph is the base library used in [draw.io](https://www.draw.io?splash=0). ### Commercial libraries * [GoJS](http://gojs.net/latest/index.html) Interactive graph drawing and layout library * [yFiles for HTML](http://www.yworks.com/yfileshtml) Commercial graph drawing and layout library * [KeyLines](http://keylines.com/) Commercial JS network visualization toolkit * [ZoomCharts](https://zoomcharts.com) Commercial multi-purpose visualization library * [Syncfusion JavaScript Diagram](https://www.syncfusion.com/javascript-ui-controls/diagram) Commercial diagram library for drawing and visualization. ### Abandoned libraries * [Cytoscape Web](http://cytoscapeweb.cytoscape.org/) Embeddable JS Network viewer (no new features planned; succeeded by Cytoscape.js) * [Canviz](http://code.google.com/p/canviz/) JS **renderer** for Graphviz graphs. [Abandoned](https://code.google.com/p/canviz/source/list) in Sep 2013. * [arbor.js](http://arborjs.org/) Sophisticated graphing with nice physics and eye-candy. Abandoned in May 2012. Several [semi-maintained](https://github.com/samizdatco/arbor/issues/56#issuecomment-62842532) forks exist. * [jssvggraph](http://github.com/jackrusher/jssvggraph) "The simplest possible force directed graph layout algorithm implemented as a Javascript library that uses SVG objects". Abandoned in 2012. * [jsdot](https://code.google.com/p/jsdot/) Client side graph drawing application. [Abandoned in 2011](https://code.google.com/p/jsdot/source/list). 
* [Protovis](http://vis.stanford.edu/protovis/ex/force.html) Graphical Toolkit for Visualization (JavaScript). Replaced by d3. * [Moo Wheel](http://labs.unwieldy.net/moowheel/) Interactive JS representation for connections and relations (2008) * [JSViz](http://www.jsviz.org/) 2007-era graph visualization script * [dagre](https://github.com/cpettitt/dagre) Graph layout for JavaScript ### Non-Javascript Libraries * [Graphviz](http://www.graphviz.org/) Sophisticated graph visualization language + Graphviz has been compiled to Javascript using Emscripten [here](https://github.com/mdaines/viz.js/) with an [online interactive demo here](http://mdaines.github.io/viz.js/) * [Flare](http://flare.prefuse.org/) Beautiful and powerful Flash based graph drawing * [NodeBox](http://nodebox.net/code/index.php/Graph) Python Graph Visualization * [Processing.js](http://processingjs.org/) Javascript port of the Processing library by John Resig
*Disclaimer: I'm a developer of Cytoscape.js* Cytoscape.js is a HTML5 graph visualisation library. The API is sophisticated and follows jQuery conventions, including * selectors for querying and filtering (`cy.elements("node[weight >= 50].someClass")` does much as you would expect), * chaining (e.g. `cy.nodes().unselect().trigger("mycustomevent")`), * jQuery-like functions for binding to events, * elements as collections (like jQuery has collections of HTMLDomElements), * extensibility (can add custom layouts, UI, core & collection functions, and so on), * and more. If you're thinking about building a serious webapp with graphs, you should at least consider Cytoscape.js. It's free and open-source: <http://js.cytoscape.org>
Graph visualization library in JavaScript
[ "", "javascript", "jquery", "data-structures", "graph-layout", "" ]
I need to periodically download, extract and save the contents of <http://data.dot.state.mn.us/dds/det_sample.xml.gz> to disk. Anyone have experience downloading gzipped files with C#?
To compress: ``` using (FileStream fStream = new FileStream(@"C:\test.docx.gzip", FileMode.Create, FileAccess.Write)) { using (GZipStream zipStream = new GZipStream(fStream, CompressionMode.Compress)) { byte[] inputfile = File.ReadAllBytes(@"c:\test.docx"); zipStream.Write(inputfile, 0, inputfile.Length); } } ``` To Decompress: ``` using (FileStream fInStream = new FileStream(@"c:\test.docx.gz", FileMode.Open, FileAccess.Read)) { using (GZipStream zipStream = new GZipStream(fInStream, CompressionMode.Decompress)) { using (FileStream fOutStream = new FileStream(@"c:\test1.docx", FileMode.Create, FileAccess.Write)) { byte[] tempBytes = new byte[4096]; int i; while ((i = zipStream.Read(tempBytes, 0, tempBytes.Length)) != 0) { fOutStream.Write(tempBytes, 0, i); } } } } ``` Taken from a post I wrote last year that shows how to decompress a gzip file using C# and the built-in GZipStream class. <http://blogs.msdn.com/miah/archive/2007/09/05/zipping-files.aspx> As for downloading it, you can use the standard [WebRequest](http://msdn.microsoft.com/en-us/library/system.net.webrequest.aspx) or [WebClient](http://msdn.microsoft.com/en-us/library/system.net.webclient.aspx) classes in .NET.
You can use WebClient in System.Net to download: ``` WebClient Client = new WebClient(); Client.DownloadFile("http://data.dot.state.mn.us/dds/det_sample.xml.gz", @"C:\mygzipfile.gz"); ``` then use [#ziplib](http://sharpdevelop.net/OpenSource/SharpZipLib/Default.aspx) to extract Edit: or GZipStream... forgot about that one
How do you download and extract a gzipped file with C#?
[ "", "c#", ".net", "gzip", "" ]
There are numerous libraries providing Linq capabilities to C# code interacting with a MySql database. Which one of them is the most stable and usable on Mono? Background (mostly irrelevant): I have a simple C# (.Net 2.0) program updating values in a MySql database. It is executed nightly via a cron job and runs on a Pentium 3 450Mhz, Linux + Mono. I want to rewrite it using Linq (.Net 3.5) mostly as an exercise (I have not yet used Linq).
The only (free) linq provider for MySql is [DbLinq](http://code2code.net/DB_Linq/), and I believe it is a long way from production-ready. There is also [MyDirect.Net](http://www.devart.com/mysqlnet/) which is commercial, but I have heard mixed reviews of its capability. I've read that MySql will be implementing the Linq to Entities API for the 5.3 version of the .net connector, but I don't know if there's even a timeline for that. In fact, MySql has been totally silent about Entity Framework support for months. --- Addendum: The latest release of the MySql Connector/Net 6.0 has support for the EF according to the release notes. I have no idea how stable/useful this is, so I'd love to hear from anybody who has tried it.
Judging by the [Mono roadmap](http://www.mono-project.com/Mono_Project_Roadmap), I'm not sure Linq is available for Mono yet. At least some of Linq might be available in the very latest release, but Linq to DB is listed for Mono 2.4 (Feb 2009).
How can I use Linq with a MySql database on Mono?
[ "", "c#", "mysql", "linux", "linq", "mono", "" ]
I am creating a GUI for a machine that runs remotely (WinXP) or on the machine itself (Windows CE 6.0). Right now I've created a fast visualisation (read: very simple) of the machine itself. The goal is to make a bit more complex visualisation of the machine and for that I would need a lightweight 3d engine. The engine should run on both full and compact .net framework (I am prepared to do some porting :). What I mean with lightweight is that it doesn't need shading, lighting, advanced cameras. The idea is that the whole scene is shown in isometric perspective. So what I was wondering is if anyone knows a good engine (open source?) or has some helpful resources you could share with me.
Did you try [Irrlicht](http://irrlicht.sourceforge.net/)? > Recently Irrlicht has acquired official .NET bindings, allowing users to develop in .Net languages such as VB.NET, C# and Boo. There is also [Ogre 3D](http://en.wikipedia.org/wiki/OGRE_3D) and also [Axiom Engine](http://en.wikipedia.org/wiki/Axiom_Engine)
It is a good question. I have looked as well, and not seen anything. It would be great to see some easy to access great visual effects for mobile, to somewhat compete with other platforms that are getting better looking. Sometimes with Windows Mobile I feel like I am in the Windows 3.1 days!
Lightweight 3D Graphics Engine .NET (Compact and Full Framework)
[ "", "c#", "3d-engine", "" ]
I need to import a csv file into **Firebird** and I've spent a couple of hours trying out some tools and none fit my needs. The main problem is that all the tools I've been trying like [EMS Data Import](http://www.sqlmanager.net/products/ibfb/dataimport) and [Firebird Data Wizard](http://www.sqlmaestro.com/products/firebird/datawizard/help/06_01_05_csv/) expect that my CSV file contains all the information needed by my table. I need to write some custom SQL in the insert statement. For example, I have a CSV file with the city name, but as my database already has all the cities in another table (normalized), I need to write a subselect in the insert statement to look up the city and write its ID. I also have a stored procedure to create GUIDs. My insert statement would be something like this: ``` INSERT INTO PERSON (ID, NAME, CITY_ID) VALUES((SELECT NEW_GUID FROM CREATE_GUID), :NAME, (SELECT CITY_ID FROM CITY WHERE NAME = :CITY_NAME)) ``` How can I approach this?
It's a bit crude - but for one off jobs, I sometimes use Excel. If you import the CSV file into Excel, you can create a formula which creates an INSERT statement by using string concatenation in the formula. So - if your CSV file has 3 columns that appear in columns A, B, and C in Excel, you could write a formula like... ``` ="INSERT INTO MyTable (Col1, Col2, Col3) VALUES (" & A1 & ", " & B1 & ", " & C1 & ")" ``` Then you can replicate the formula down all of your rows, and copy, and paste the answer into a text file to run against your database. Like I say - it's crude - but it can be quite a 'quick and dirty' way of getting a job done!
Well, if it's a CSV, and this is a one-time process, open up the file in Excel, and then write formulas to populate your data in any way you desire, and then write a simple Concat formula to construct your SQL, and then copy that formula for every row. You will get a large number of SQL statements which you can execute anywhere you want.
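If a script is an option, the same concatenate-per-row idea is easy to automate. Below is a hypothetical Python sketch: the `CREATE_GUID` procedure and the `CITY` lookup come from the question, while the CSV column names (`name`, `city`) are assumed, and the quoting is only the minimal doubling of single quotes rather than a full SQL escaper.

```python
import csv
import io

# Template mirroring the INSERT from the question: the GUID comes from a
# stored procedure and the city id from a subselect, so only NAME and
# CITY_NAME need to come from the CSV file.
TEMPLATE = (
    "INSERT INTO PERSON (ID, NAME, CITY_ID) VALUES ("
    "(SELECT NEW_GUID FROM CREATE_GUID), "
    "'{name}', "
    "(SELECT CITY_ID FROM CITY WHERE NAME = '{city}'));"
)

def make_inserts(csv_text):
    """Yield one INSERT statement per CSV row (assumed columns: name, city)."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Double up single quotes so values survive inside SQL literals
        name = row["name"].replace("'", "''")
        city = row["city"].replace("'", "''")
        yield TEMPLATE.format(name=name, city=city)
```

The generated statements can then be run in any Firebird client as a script.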
Generate insert SQL statements from a CSV file
[ "", "sql", "csv", "insert", "firebird", "" ]
I'm using MSVE, and I have my own tiles I'm displaying in layers on top. Problem is, there's a ton of them, and they're on a network server. In certain directories, there are something on the order of 30,000+ files. Initially I called Directory.GetFiles, but once I started testing in a pseudo-real environment, it timed out. What's the best way to programmatically list, and iterate through, this many files? Edit: My coworker suggested using the MS indexing service. Has anyone tried this approach, and (how) has it worked?
I've worked on a SAN system in the past with telephony audio recordings which had issues with numbers of files in a single folder - that system became unusable somewhere near 5,000 (on Windows 2000 Advanced Server with an application in C#.Net 1.1) - the only sensible solution that we came up with was to change the folder structure so that there were a more reasonable number of files. Interestingly Explorer would also time out! The convention we came up with was a structure that broke the files up into years, months and days - but that will depend upon your system and whether you can control the directory structure...
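That year/month/day convention reduces to a tiny path helper. A hypothetical sketch, shown in Python for brevity; the same zero-padded formatting carries over to C# with `string.Format`:

```python
import os
from datetime import date

def bucketed_path(root, filename, d):
    """Place a file under root/YYYY/MM/DD so no single directory
    accumulates tens of thousands of entries."""
    return os.path.join(root, "%04d" % d.year, "%02d" % d.month,
                        "%02d" % d.day, filename)
```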
Definitely split them up. That said, stay as far away from the Indexing Service as you can.
Searching directories for tons of files?
[ "", "c#", "directory", "file-management", "" ]
So WPF doesn't support standard sorting or filtering behavior for views of CompositeCollections, so what would be a best practice for solving this problem? There are two or more object collections of different types. You want to combine them into a single sortable and filterable collection (without having to manually implement sort or filter). One of the approaches I've considered is to create a new object collection with only a few core properties, including the ones that I would want the collection sorted on, and an object instance of each type. ``` class MyCompositeObject { enum ObjectType; DateTime CreatedDate; string SomeAttribute; myObjectType1 Obj1; myObjectType2 Obj2; } class MyCompositeObjects : List<MyCompositeObject> { } ``` And then loop through my two object collections to build the new composite collection. Obviously this is a bit of a brute force method, but it would work. I'd get all the default view sorting and filtering behavior on my new composite object collection, and I'd be able to put a data template on it to display my list items properly depending on which type is actually stored in that composite item. What suggestions are there for doing this in a more elegant way?
Update: I found a much more elegant solution: ``` class MyCompositeObject { DateTime CreatedDate; string SomeAttribute; Object Obj1; } class MyCompositeObjects : List<MyCompositeObject> { } ``` I found that due to reflection, the specific type stored in Obj1 is resolved at runtime and the type specific DataTemplate is applied as expected!
The "brute force" method you mention is actually the ideal solution. Mind you, all objects are in RAM, there is no I/O bottleneck, so you can pretty much sort and filter millions of objects in less than a second on any modern computer. The most elegant way to work with collections is the System.Linq namespace in .NET 3.5 > Thanks - I also considered LINQ to > objects, but my concern there is loss > of flexibility for typed data > templates, which I need to display the > objects in my list. If you can't predict at this moment how people will sort and filter your object collection, then you should look at the **System.Linq.Expressions** namespace to build your lambda expressions on demand during runtime (first you let the user build the expression, then compile and run it, and at the end you use the reflection namespace to enumerate through the results). It's trickier to wrap your head around, but an invaluable feature, probably (to me definitely) an even more ground-breaking feature than LINQ itself.
Sorting a composite collection
[ "", "c#", ".net", "wpf", "data-binding", "collections", "" ]
I would like to automatically generate PDF documents from [WebObjects](https://en.wikipedia.org/wiki/WebObjects) based on multipage forms. Assuming I have a class which can assemble the related forms (java/wod files), is there a good way to then parse the individual forms into a PDF instead of going to the screen?
The canonical response when asked about PDFs from WebObjects has generally been [ReportMill](http://www.reportmill.com/). It's a PDF document generating framework that works a lot like WebObjects, and includes its own graphical PDF builder tool similar to WebObjects Builder and Interface Builder. You can bind elements in your generated PDFs to dynamic data in your application just as you would for a `WOComponent`. They have couple of tutorial videos on the [ReportMill product page](http://reportmill.com/product/) that should give you an idea of how the tool works. It'll probably be a lot easier than trying to work with FOP programmatically.
I'm not familiar with WebObjects, but I see you have java listed in there. [iText](http://www.lowagie.com/iText/) is a java api for building pdfs. If you can access a java api from WebObjects you should be able to build pdfs that way.
Create PDFs from multipage forms in WebObjects
[ "", "java", "pdf", "webobjects", "" ]
What is the story behind XPath and support for namespaces? Did XPath as a specification precede namespaces? If I have a document where elements have been given a default namespace: ``` <foo xmlns="uri" /> ``` It appears as though some of the XPath processor libraries won't recognize `//foo` because of the namespace whereas others will. The option my team has thought about is to add a namespace prefix using regular expressions to the XPath (you can add a namespace prefix via XmlNameTable) but this seems brittle since XPath is such a flexible language when it comes to node tests. Is there a standard that applies to this? My approach is a bit hackish but it seems to work fine; I remove the `xmlns` declaration with a search/replace and then apply XPath. ``` string readyForXpath = Regex.Replace(xmldocument, "xmlns=\".+\"", String.Empty ); ``` Is that a fair approach or has anyone solved this differently?
I tried something similar to what palehorse proposed and could not get it to work. Since I was getting data from a published service I couldn't change the xml. I ended up using XmlDocument and XmlNamespaceManager like so: ``` XmlDocument doc = new XmlDocument(); doc.LoadXml(xmlWithBogusNamespace); XmlNamespaceManager nSpace = new XmlNamespaceManager(doc.NameTable); nSpace.AddNamespace("myNs", "http://theirUri"); XmlNodeList nodes = doc.SelectNodes("//myNs:NodesIWant",nSpace); //etc ```
You need local-name(): <http://www.w3.org/TR/xpath#function-local-name> To crib from <http://web.archive.org/web/20100810142303/http://jcooney.net:80/archive/2005/08/09/6517.aspx>: ``` <foo xmlns='urn:foo'> <bar> <asdf/> </bar> </foo> ``` This expression will match the “bar” element: ``` //*[local-name()='bar'] ``` This one won't: ``` //bar ```
XPATHS and Default Namespaces
[ "", "c#", "xml", "xpath", "namespaces", "" ]
The following code doesn't compile with gcc, but does with Visual Studio: ``` template <typename T> class A { public: T foo; }; template <typename T> class B: public A <T> { public: void bar() { cout << foo << endl; } }; ``` I get the error: > test.cpp: In member function ‘void B::bar()’: > > test.cpp:11: error: ‘foo’ was not declared in this scope But it should be! If I change `bar` to ``` void bar() { cout << this->foo << endl; } ``` then it *does* compile, but I don't think I have to do this. Is there something in the official specs of C++ that GCC is following here, or is it just a quirk?
This changed in [gcc-3.4](http://gcc.gnu.org/gcc-3.4/changes.html#cplusplus). The C++ parser got much more strict in that release -- per the spec but still kinda annoying for people with legacy or multi-platform code bases.
David Joyner had the history, here is the reason. The problem when compiling `B<T>` is that its base class `A<T>` is unknown from the compiler, being a template class, so no way for the compiler to know any members from the base class. Earlier versions did some inference by actually parsing the base template class, but ISO C++ stated that this inference can lead to conflicts where there should not be. The solution to reference a base class member in a template is to use `this` (like you did) or specifically name the base class: ``` template <typename T> class A { public: T foo; }; template <typename T> class B: public A <T> { public: void bar() { cout << A<T>::foo << endl; } }; ``` More information in [gcc manual](http://gcc.gnu.org/onlinedocs/gcc-3.4.6/gcc/Name-lookup.html).
GCC issue: using a member of a base class that depends on a template argument
[ "", "c++", "templates", "base-class", "class-members", "name-lookup", "" ]
I've recently taken up learning some C# and wrote a Yahtzee clone. My next step (now that the game logic is in place and functioning correctly) is to integrate some method of keeping stats across all the games played. My question is this, how should I go about storing this information? My first thought would be to use a database and I have a feeling that's the answer I'll get... if that's the case, can you point me to a good resource for creating and accessing a database from a C# application? --- Storing in an XML file actually makes more sense to me, but I thought if I suggested that I'd get torn apart ;). I'm used to building web applications and for those, text files are generally frowned upon. So, going with an XML file, what classes should I be looking at that would allow for easy manipulation?
Here is one idea: use Xml Serialization. Design your GameStats data structure and optionally use Xml attributes to influence the schema as you like. I like to use this method for small data sets because it's quick and easy and all I need to do is design and manipulate the data structure. ``` using (FileStream fs = new FileStream(....)) { // Read in stats XmlSerializer xs = new XmlSerializer(typeof(GameStats)); GameStats stats = (GameStats)xs.Deserialize(fs); // Manipulate stats here ... // Rewind and truncate the stream, then write out game stats // (reusing the same serializer instance) fs.SetLength(0); xs.Serialize(fs, stats); fs.Close(); } ```
A database would probably be overkill for something like this - start with storing your information in an XML doc (or series of XML docs, if there's a lot of data). You get all that nifty XCopy deployment stuff, you can still use LINQ, and it would be a smooth transition to a database if you decided later you really needed performant relational query logic.
Store data from a C# application
[ "", "c#", ".net", "" ]
I want to create a draggable and resizable window in JavaScript for cross browser use, but I want to try and avoid using a framework if I can. Has anyone got a link or some code that I can use?
JQuery is more focused on a lot of nice utility functions, and makes DOM manipulation a whole lot easier. Basically, I consider it to be Javascript as it should have been. It's a supremely helpful addition to the Javascript language itself. ExtJS is a suite of GUI components with specific APIs... Use it if you want to easily create components that look like that, otherwise, go with a more flexible framework.
[JQuery](http://jquery.com/) would be a good way to go. And with the [Jquery UI](http://ui.jquery.com/) plugins (such as [draggable](http://docs.jquery.com/UI/Draggables)), it's a breeze.. (there's a [demo](http://ui.jquery.com/repository/real-world/photo-manager/) here). Not using a framework, to keep it 'pure', seems just a waste of time to me. There's good stuff that will save you tremendous amounts of time, time better spent in making your application even better. But you can always check out the source to get some 'inspiration', and adapt it without the overhead of the stuff you won't use. It's well done and easy to read, and you often discover some cross-browser hacks you didn't even think about.. edit: oh, if you REALLY don't want a framework EVER, just check out their source then.. sure you can use some of it for your application.
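For the framework-free route, the core of a draggable box is just three mouse handlers. A minimal sketch: the position math is split into a pure helper, the element is assumed to be absolutely positioned, and resizing can follow the same pattern with a corner handle element.

```javascript
// Pure helper: given the state captured at mousedown and the current
// mouse position, compute where the element should move. Kept separate
// from the DOM wiring so the math is easy to test.
function dragPosition(start, mouseX, mouseY) {
  return {
    left: start.elemX + (mouseX - start.mouseX),
    top: start.elemY + (mouseY - start.mouseY)
  };
}

// Browser-only wiring: makes `el` draggable by its own surface.
// (`el` must be positioned, e.g. style="position:absolute".)
function makeDraggable(el) {
  el.onmousedown = function (e) {
    e = e || window.event;
    var start = {
      mouseX: e.clientX, mouseY: e.clientY,
      elemX: el.offsetLeft, elemY: el.offsetTop
    };
    document.onmousemove = function (e) {
      e = e || window.event;
      var p = dragPosition(start, e.clientX, e.clientY);
      el.style.left = p.left + "px";
      el.style.top = p.top + "px";
    };
    document.onmouseup = function () {
      document.onmousemove = null;
      document.onmouseup = null;
    };
    return false; // suppress text selection while dragging
  };
}
```

Call `makeDraggable(document.getElementById("myWindow"))` after the element exists; the `e || window.event` fallback covers older IE.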
How do I create a draggable and resizable JavaScript popup window?
[ "", "javascript", "dialog", "" ]
What's the **easiest**, **tersest**, and most **flexible** method or library for parsing Python command line arguments?
**This answer suggests `optparse` which is appropriate for older Python versions. For Python 2.7 and above, `argparse` replaces `optparse`. See [this answer](https://stackoverflow.com/questions/3217673/why-use-argparse-rather-than-optparse) for more information.** As other people pointed out, you are better off going with optparse over getopt. getopt is pretty much a one-to-one mapping of the standard getopt(3) C library functions, and not very easy to use. optparse, while being a bit more verbose, is much better structured and simpler to extend later on. Here's a typical line to add an option to your parser: ``` parser.add_option('-q', '--query', action="store", dest="query", help="query string", default="spam") ``` It pretty much speaks for itself; at processing time, it will accept -q or --query as options, store the argument in an attribute called query and has a default value if you don't specify it. It is also self-documenting in that you declare the help argument (which will be used when run with -h/--help) right there with the option. Usually you parse your arguments with: ``` options, args = parser.parse_args() ``` This will, by default, parse the standard arguments passed to the script (sys.argv[1:]) options.query will then be set to the value you passed to the script. You create a parser simply by doing ``` parser = optparse.OptionParser() ``` These are all the basics you need. Here's a complete Python script that shows this: ``` import optparse parser = optparse.OptionParser() parser.add_option('-q', '--query', action="store", dest="query", help="query string", default="spam") options, args = parser.parse_args() print 'Query string:', options.query ``` 5 lines of python that show you the basics. Save it in sample.py, and run it once with ``` python sample.py ``` and once with ``` python sample.py --query myquery ``` Beyond that, you will find that optparse is very easy to extend. 
In one of my projects, I created a Command class which allows you to nest subcommands in a command tree easily. It uses optparse heavily to chain commands together. It's not something I can easily explain in a few lines, but feel free to [browse around in my repository](https://thomas.apestaart.org/moap/trac/browser/trunk/moap/extern/command/command.py) for the main class, as well as [a class that uses it and the option parser](https://thomas.apestaart.org/moap/trac/browser/trunk/moap/command/doap.py)
[`argparse`](https://docs.python.org/library/argparse.html) is the way to go. Here is a short summary of how to use it: **1) Initialize** ``` import argparse # Instantiate the parser parser = argparse.ArgumentParser(description='Optional app description') ``` **2) Add Arguments** ``` # Required positional argument parser.add_argument('pos_arg', type=int, help='A required integer positional argument') # Optional positional argument parser.add_argument('opt_pos_arg', type=int, nargs='?', help='An optional integer positional argument') # Optional argument parser.add_argument('--opt_arg', type=int, help='An optional integer argument') # Switch parser.add_argument('--switch', action='store_true', help='A boolean switch') ``` **3) Parse** ``` args = parser.parse_args() ``` **4) Access** ``` print("Argument values:") print(args.pos_arg) print(args.opt_pos_arg) print(args.opt_arg) print(args.switch) ``` **5) Check Values** ``` if args.pos_arg > 10: parser.error("pos_arg cannot be larger than 10") ``` ## Usage **Correct use:** ``` $ ./app 1 2 --opt_arg 3 --switch Argument values: 1 2 3 True ``` **Incorrect arguments:** ``` $ ./app foo 2 --opt_arg 3 --switch usage: convert [-h] [--opt_arg OPT_ARG] [--switch] pos_arg [opt_pos_arg] app: error: argument pos_arg: invalid int value: 'foo' $ ./app 11 2 --opt_arg 3 Argument values: 11 2 3 False usage: app [-h] [--opt_arg OPT_ARG] [--switch] pos_arg [opt_pos_arg] convert: error: pos_arg cannot be larger than 10 ``` **Full help:** ``` $ ./app -h usage: app [-h] [--opt_arg OPT_ARG] [--switch] pos_arg [opt_pos_arg] Optional app description positional arguments: pos_arg A required integer positional argument opt_pos_arg An optional integer positional argument optional arguments: -h, --help show this help message and exit --opt_arg OPT_ARG An optional integer argument --switch A boolean switch ```
What's the best way to parse command line arguments?
[ "", "python", "command-line", "command-line-arguments", "" ]
I want to include a batch file rename functionality in my application. A user can type a destination filename pattern and (after replacing some wildcards in the pattern) I need to check if it's going to be a legal filename under Windows. I've tried to use regular expression like `[a-zA-Z0-9_]+` but it doesn't include many national-specific characters from various languages (e.g. umlauts and so on). What is the best way to do such a check?
You can get a list of invalid characters from [`Path.GetInvalidPathChars`](http://msdn.microsoft.com/en-us/library/system.io.path.getinvalidpathchars.aspx) and [`GetInvalidFileNameChars`](http://msdn.microsoft.com/en-us/library/system.io.path.getinvalidfilenamechars.aspx). **UPD:** See [Steve Cooper's suggestion](https://stackoverflow.com/questions/62771/how-check-if-given-string-is-legal-allowed-file-name-under-windows#62855) on how to use these in a regular expression. **UPD2:** Note that according to the Remarks section in MSDN "The array returned from this method is not guaranteed to contain the complete set of characters that are invalid in file and directory names." [The answer provided by sixlettervaliables](https://stackoverflow.com/questions/62771/how-check-if-given-string-is-legal-allowed-file-name-under-windows/62888#62888) goes into more details.
From [MSDN's "Naming a File or Directory,"](http://msdn.microsoft.com/en-us/library/aa365247.aspx "Naming a File or Directory") here are the general conventions for what a legal file name is under Windows: You may use any character in the current code page (Unicode/ANSI above 127), except: * `<` `>` `:` `"` `/` `\` `|` `?` `*` * Characters whose integer representations are 0-31 (less than ASCII space) * Any other character that the target file system does not allow (say, trailing periods or spaces) * Any of the DOS names: CON, PRN, AUX, NUL, COM0, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT0, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9 (and avoid AUX.txt, etc) * The file name is all periods Some optional things to check: * File paths (including the file name) may not have more than 260 characters (that don't use the `\?\` prefix) * Unicode file paths (including the file name) with more than 32,000 characters when using `\?\` (note that prefix may expand directory components and cause it to overflow the 32,000 limit)
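A quick way to sanity-check those rules: the sketch below (Python for illustration; the reserved-name list and character set follow the conventions above, while the 255-character component limit is an assumption based on NTFS) rejects a name if it contains a forbidden character, is a reserved device name, or ends in a dot or space. In C#, `Path.GetInvalidFileNameChars` gives the character set, but the reserved device names still need an explicit list.

```python
import re

# Reserved DOS device names from the list above (compared without the
# extension, case-insensitively: "CON.txt" is also rejected).
RESERVED = {"CON", "PRN", "AUX", "NUL"} | {
    "%s%d" % (dev, n) for dev in ("COM", "LPT") for n in range(10)
}

# < > : " / \ | ? * and control characters 0-31 are never allowed.
INVALID_CHARS = re.compile(r'[<>:"/\\|?*\x00-\x1f]')

def is_valid_windows_filename(name):
    if not name or len(name) > 255:       # assumed per-component limit
        return False
    if INVALID_CHARS.search(name):
        return False
    if name.rstrip(". ") != name:         # no trailing dots or spaces
        return False                      # (also rejects all-period names)
    stem = name.split(".")[0].upper()
    return stem not in RESERVED
```

Note that national characters such as umlauts pass, which is exactly what the regex `[a-zA-Z0-9_]+` approach gets wrong.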
How do I check if a given string is a legal/valid file name under Windows?
[ "", "c#", "windows", "file", "filesystems", "" ]
How do I implement a Copy menu item in a Windows application written in C#/.NET 2.0? I want to let the user to mark some text in a control and then select the Copy menu item from an Edit menu in the menubar of the application and then do a Paste in for example Excel. What makes my head spin is how to first determine which child form is active and then how to find the control that contains the marked text that should be copied to the clipboard. Help, please.
With the aid of some heavy pair programming a colleague of mine and I came up with this, feel free to refactor. The code is placed in the main form. The copyToolStripMenuItem\_Click method handles the Click event on the Copy menu item in the Edit menu. ``` /// <summary> /// Recursively traverse a tree of controls to find the control that has focus, if any /// </summary> /// <param name="c">The control to search, might be a control container</param> /// <returns>The control that either has focus or contains the control that has focus</returns> private Control FindFocus(Control c) { foreach (Control k in c.Controls) { if (k.Focused) { return k; } else if (k.ContainsFocus) { return FindFocus(k); } } return null; } private void copyToolStripMenuItem_Click(object sender, EventArgs e) { Form f = this.ActiveMdiChild; // Find the control that has focus Control focusedControl = FindFocus(f.ActiveControl); // See if focusedControl is of a type that can select text/data if (focusedControl is TextBox) { TextBox tb = focusedControl as TextBox; Clipboard.SetDataObject(tb.SelectedText); } else if (focusedControl is DataGridView) { DataGridView dgv = focusedControl as DataGridView; Clipboard.SetDataObject(dgv.GetClipboardContent()); } else if (...more?...) { } } ```
Why not extend the control, so the control itself provides the data which should be copied into the clipboard? Take a look at the [ApplicationCommands](http://msdn.microsoft.com/en-us/library/system.windows.input.applicationcommands.aspx) documentation.
How to implement the Edit -> Copy menu in c#/.net
[ "", "c#", ".net", "winforms", "user-interface", "" ]
At my work we are running a group of tests that consist of about 3,000 separate test cases. Previously we were running this entire test suite on one machine, which took about 24-72 hours to complete the entire test run. We now have created our own system for grouping and distributing the tests among about three separate machines and the tests are prioritized so that the core tests get run first for more immediate results and the extra tests run when there is an available machine. I am curious if anyone has found a good way to distribute their tests among several machines to reduce total test time for a complete run and what tools were used to achieve that. I've done some research and it looks like TestNG is [moving in this direction](http://beust.com/weblog/archives/000362.html), but it looks like it is still under quite a bit of development. We don't plan on rewriting any of our tests, but as we add new tests and test new products or add-ons I'd like to be able to deal with the fact that we are working with very large numbers of tests. On the other hand, if we can find a tool that would help distribute our Junit `3.x` tests even in a very basic fashion, that *would* be helpful since we wouldn't have to maintain our own tooling to do that.
I've seen some people having a play with [distributed JUnit](http://216.93.179.140:8080/wiki/display/GG15UG/Distributed+JUnit+Overview). I can't particularly vouch for how effective it is, but the other teams I've seen seemed to think it was straightforward enough. Hope that helps.
There's also [parallel-junit](https://parallel-junit.java.net/). Depending on how you currently execute your tests its convenience may vary - the idea is just to multithread on a single system that has multiple cores. I've played with it briefly, but it's a change from how we currently run our tests. [Hudson](https://java.net/projects/hudson/), the continuous integration engine I use, also has some ways to distribute test running (separate jobs aggregated results in one).
Test Distribution
[ "", "java", "testing", "enterprise", "" ]
It's fall of 2008, and I still hear developers say that you should not design a site that requires JavaScript. I understand that you should develop sites that degrade gracefully when JS is not present/on. But at what point do you not include functionality that can only be powered by JS? I guess the question comes down to demographics. Are there numbers out there of how many folks are browsing without JS?
5% according to these statistics: <http://www.w3schools.com/browsers/browsers_stats.asp>
Just as long as you're aware of the accessibility limitations you might be introducing, ie for users of screen-reading software, etc. It's one thing to exclude people because they choose to turn off JS or use a browser which doesn't support it, it's entirely another to exclude them because of a disability.
Should you design websites that require JavaScript in this day & age?
[ "", "javascript", "" ]
I've written PL/SQL code to denormalize a table into a much-easier-to-query form. The code uses a temporary table to do some of its work, merging some rows from the original table together. The logic is written as a [pipelined table function](http://www.oreillynet.com/lpt/a/3136), following the pattern from the linked article. The table function uses a `PRAGMA AUTONOMOUS_TRANSACTION` declaration to permit the temporary table manipulation, and also accepts a cursor input parameter to restrict the denormalization to certain ID values. I then created a view to query the table function, passing in all possible ID values as a cursor (other uses of the function will be more restrictive). My question: is this all really necessary? Have I completely missed a much simpler way of accomplishing the same thing? Every time I touch PL/SQL I get the impression that I'm typing way too much. **Update:** I'll add a sketch of the table I'm dealing with to give everyone an idea of the denormalization that I'm talking about. The table stores a history of employee jobs, each with an activation row, and (possibly) a termination row. It's possible for an employee to have multiple simultaneous jobs, as well as the same job over and over again in non-contiguous date ranges. For example: ``` | EMP_ID | JOB_ID | STATUS | EFF_DATE | other columns... | 1 | 10 | A | 10-JAN-2008 | | 2 | 11 | A | 13-JAN-2008 | | 1 | 12 | A | 20-JAN-2008 | | 2 | 11 | T | 01-FEB-2008 | | 1 | 10 | T | 02-FEB-2008 | | 2 | 11 | A | 20-FEB-2008 | ``` Querying that to figure out who is working when in what job is non-trivial. So, my denormalization function populates the temporary table with just the date ranges for each job, for any `EMP_ID`s passed in through the cursor.
Passing in `EMP_ID`s 1 and 2 would produce the following:

```
| EMP_ID | JOB_ID | START_DATE  | END_DATE    |
|      1 |     10 | 10-JAN-2008 | 02-FEB-2008 |
|      2 |     11 | 13-JAN-2008 | 01-FEB-2008 |
|      1 |     12 | 20-JAN-2008 |             |
|      2 |     11 | 20-FEB-2008 |             |
```

(`END_DATE` allows `NULL`s for jobs that don't have a predetermined termination date.) As you can imagine, this denormalized form is much, much easier to query, but creating it--so far as I can tell--requires a temporary table to store the intermediate results (e.g., job records for which the activation row has been found, but not the termination...yet). Using the pipelined table function to populate the temporary table and then return its rows is the only way I've figured out how to do it.
I think a way to approach this is to use analytic functions... I set up your test case using:

```
create table employee_job (
  emp_id   integer,
  job_id   integer,
  status   varchar2(1 char),
  eff_date date
);

insert into employee_job values (1,10,'A',to_date('10-JAN-2008','DD-MON-YYYY'));
insert into employee_job values (2,11,'A',to_date('13-JAN-2008','DD-MON-YYYY'));
insert into employee_job values (1,12,'A',to_date('20-JAN-2008','DD-MON-YYYY'));
insert into employee_job values (2,11,'T',to_date('01-FEB-2008','DD-MON-YYYY'));
insert into employee_job values (1,10,'T',to_date('02-FEB-2008','DD-MON-YYYY'));
insert into employee_job values (2,11,'A',to_date('20-FEB-2008','DD-MON-YYYY'));

commit;
```

I've used the **lead** function to get the next date and then wrapped it all as a sub-query just to get the "A" records and add the end date if there is one.

```
select emp_id,
       job_id,
       eff_date start_date,
       decode(next_status,'T',next_eff_date,null) end_date
from (
  select emp_id,
         job_id,
         eff_date,
         status,
         lead(eff_date,1,null) over (partition by emp_id, job_id
                                     order by eff_date, status) next_eff_date,
         lead(status,1,null) over (partition by emp_id, job_id
                                   order by eff_date, status) next_status
  from employee_job
)
where status = 'A'
order by start_date, emp_id, job_id
```

I'm sure there are some use cases I've missed, but you get the idea. Analytic functions are your friend :)

```
EMP_ID  JOB_ID  START_DATE   END_DATE
     1      10  10-JAN-2008  02-FEB-2008
     2      11  13-JAN-2008  01-FEB-2008
     2      11  20-FEB-2008
     1      12  20-JAN-2008
```
Rather than having the input parameter as a cursor, I would have a table variable (I don't know if Oracle has such a thing; I'm a T-SQL guy) or populate another temp table with the ID values and join on it in the view/function or wherever you need to. The only time for cursors, in my honest opinion, is when you *have* to loop. And when you have to loop I always recommend doing that outside of the database, in the application logic.
Best way to encapsulate complex Oracle PL/SQL cursor logic as a view?
[ "", "sql", "oracle", "plsql", "" ]
Following on from my [previous question](https://stackoverflow.com/questions/19454/enforce-attribute-decoration-of-classesmethods) I have been working on getting my object model to serialize to XML. But I have now run into a problem (quelle surprise!). The problem I have is that I have a collection, which is of an abstract base class type, which is populated by the concrete derived types. I thought it would be fine to just add the XML attributes to all of the classes involved and everything would be peachy. Sadly, that's not the case! So I have done some digging on Google and I now understand *why* it's not working. In that **the `XmlSerializer` is in fact doing some clever reflection in order to serialize objects to/from XML, and since it's based on the abstract type, it cannot figure out what the hell it's talking to**. Fine. I did come across [this page](http://www.codeproject.com/KB/XML/xmlserializerforunknown.aspx) on CodeProject, which looks like it may well help a lot (yet to read/consume fully), but I thought I would like to bring this problem to the StackOverflow table too, to see if you have any neat hacks/tricks in order to get this up and running in the quickest/lightest way possible. One thing I should also add is that I **DO NOT** want to go down the `XmlInclude` route. There is simply too much coupling with it, and this area of the system is under heavy development, so it would be a real maintenance headache!
## Problem Solved!

OK, so I finally got there (admittedly with a **lot** of help from [here](http://www.codeproject.com/KB/XML/xmlserializerforunknown.aspx)!). To summarise:

### Goals:

* I didn't want to go down the *XmlInclude* route due to the maintenance headache.
* Once a solution was found, I wanted it to be quick to implement in other applications.
* Collections of abstract types may be used, as well as individual abstract properties.
* I didn't really want to bother with having to do "special" things in the concrete classes.

### Identified Issues/Points to Note:

* *XmlSerializer* does some pretty cool reflection, but it is *very* limited when it comes to abstract types (i.e. it will only work with instances of the abstract type itself, not subclasses).
* The Xml attribute decorators define how the XmlSerializer treats the properties it finds. The physical type can also be specified, but this creates a **tight coupling** between the class and the serializer (not good).
* We can implement our own XmlSerializer by creating a class that implements *IXmlSerializable*.

## The Solution

I created a generic class, in which you specify the generic type as the abstract type you will be working with. This gives the class the ability to "translate" between the abstract type and the concrete type, since we can hard-code the casting (i.e. we can get more info than the XmlSerializer can). I then implemented the *IXmlSerializable* interface; this is pretty straightforward, but when serializing we need to ensure we write the type of the concrete class to the XML, so we can cast it back when de-serializing. It is also important to note it must be **fully qualified** as the assemblies that the two classes are in are likely to differ. There is of course a little type checking and stuff that needs to happen here. Since the XmlSerializer cannot cast, we need to provide the code to do that, so the implicit operator is then overloaded (I never even knew you could do this!).
The code for the AbstractXmlSerializer is this:

```
using System;
using System.Collections.Generic;
using System.Text;
using System.Xml.Serialization;

namespace Utility.Xml
{
    public class AbstractXmlSerializer<AbstractType> : IXmlSerializable
    {
        // Override the Implicit Conversions Since the XmlSerializer
        // Casts to/from the required types implicitly.
        public static implicit operator AbstractType(AbstractXmlSerializer<AbstractType> o)
        {
            return o.Data;
        }

        public static implicit operator AbstractXmlSerializer<AbstractType>(AbstractType o)
        {
            return o == null ? null : new AbstractXmlSerializer<AbstractType>(o);
        }

        private AbstractType _data;

        /// <summary>
        /// [Concrete] Data to be stored/is stored as XML.
        /// </summary>
        public AbstractType Data
        {
            get { return _data; }
            set { _data = value; }
        }

        /// <summary>
        /// **DO NOT USE** This is only added to enable XML Serialization.
        /// </summary>
        /// <remarks>DO NOT USE THIS CONSTRUCTOR</remarks>
        public AbstractXmlSerializer()
        {
            // Default Ctor (Required for Xml Serialization - DO NOT USE)
        }

        /// <summary>
        /// Initialises the Serializer to work with the given data.
        /// </summary>
        /// <param name="data">Concrete Object of the AbstractType Specified.</param>
        public AbstractXmlSerializer(AbstractType data)
        {
            _data = data;
        }

        #region IXmlSerializable Members

        public System.Xml.Schema.XmlSchema GetSchema()
        {
            return null; // this is fine as schema is unknown.
        }

        public void ReadXml(System.Xml.XmlReader reader)
        {
            // Cast the Data back from the Abstract Type.
            string typeAttrib = reader.GetAttribute("type");

            // Ensure the Type was Specified
            if (typeAttrib == null)
                throw new ArgumentNullException("Unable to Read Xml Data for Abstract Type '" +
                    typeof(AbstractType).Name +
                    "' because no 'type' attribute was specified in the XML.");

            Type type = Type.GetType(typeAttrib);

            // Check the Type is Found.
            if (type == null)
                throw new InvalidCastException("Unable to Read Xml Data for Abstract Type '" +
                    typeof(AbstractType).Name +
                    "' because the type specified in the XML was not found.");

            // Check the Type is a Subclass of the AbstractType.
            if (!type.IsSubclassOf(typeof(AbstractType)))
                throw new InvalidCastException("Unable to Read Xml Data for Abstract Type '" +
                    typeof(AbstractType).Name +
                    "' because the Type specified in the XML differs ('" + type.Name + "').");

            // Read the Data, Deserializing based on the (now known) concrete type.
            reader.ReadStartElement();
            this.Data = (AbstractType)new XmlSerializer(type).Deserialize(reader);
            reader.ReadEndElement();
        }

        public void WriteXml(System.Xml.XmlWriter writer)
        {
            // Write the Type Name to the XML Element as an Attrib and Serialize
            Type type = _data.GetType();

            // BugFix: Assembly must be FQN since Types can/are external to current.
            writer.WriteAttributeString("type", type.AssemblyQualifiedName);
            new XmlSerializer(type).Serialize(writer, _data);
        }

        #endregion
    }
}
```

So, from there, how do we tell the XmlSerializer to work with our serializer rather than the default? We must pass our type within the Xml attributes type property, for example:

```
[XmlRoot("ClassWithAbstractCollection")]
public class ClassWithAbstractCollection
{
    private List<AbstractType> _list;

    [XmlArray("ListItems")]
    [XmlArrayItem("ListItem", Type = typeof(AbstractXmlSerializer<AbstractType>))]
    public List<AbstractType> List
    {
        get { return _list; }
        set { _list = value; }
    }

    private AbstractType _prop;

    [XmlElement("MyProperty", Type = typeof(AbstractXmlSerializer<AbstractType>))]
    public AbstractType MyProperty
    {
        get { return _prop; }
        set { _prop = value; }
    }

    public ClassWithAbstractCollection()
    {
        _list = new List<AbstractType>();
    }
}
```

Here you can see, we have a collection and a single property being exposed, and all we need to do is add the *type* named parameter to the Xml declaration, easy!
:D

**NOTE: If you use this code, I would really appreciate a shout-out. It will also help drive more people to the community :)**

I'm now unsure as to what to do with the answers here, since they all had their pros and cons. I'll upmod those that I feel were useful (no offence to those that weren't) and close this off once I have the rep :) Interesting problem, and good fun to solve! :)
One thing to look at is the fact that in the XmlSerializer constructor you can pass an array of types that the serializer might be having difficulty resolving. I've had to use that quite a few times where a collection or complex set of data structures needed to be serialised and those types lived in different assemblies etc. [XmlSerializer constructor with extraTypes param](http://msdn.microsoft.com/en-us/library/e5aakyae.aspx) EDIT: I would add that this approach has the benefit over XmlInclude attributes etc. that you can work out a way of discovering and compiling a list of your possible concrete types at runtime and stuff them in.
XML Serialization and Inherited Types
[ "", "c#", "xml", "inheritance", "serialization", "xml-serialization", "" ]
I know in JavaScript, *objects* double as hashes, but I have been unable to find a built-in function to get the keys:

```
var h = {a:'b', c:'d'};
```

I want something like

```
var k = h.keys() ; // k = ['a', 'c'];
```

It is simple to write a function myself to iterate over the items and add the keys to an array that I return, but is there a standard cleaner way to do that? I keep feeling it must be a simple built-in function that I missed but I can't find it!
There is a function in modern JavaScript (ECMAScript 5) called [`Object.keys`](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/keys) performing this operation:

```
var obj = { "a" : 1, "b" : 2, "c" : 3};

alert(Object.keys(obj)); // will output ["a", "b", "c"]
```

Compatibility details can be found [here](http://kangax.github.com/es5-compat-table/). On the [Mozilla site](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/keys) there is also a snippet for backward compatibility:

```
if (!Object.keys) Object.keys = function (o) {
    if (o !== Object(o))
        throw new TypeError('Object.keys called on non-object');
    var ret = [], p;
    for (p in o)
        if (Object.prototype.hasOwnProperty.call(o, p)) ret.push(p);
    return ret;
}
```
For production code requiring a large compatibility with client browsers I still suggest [Ivan Nevostruev's answer](https://stackoverflow.com/questions/18912/how-can-i-find-the-keys-of-a-hash/6921193#6921193) with shim to ensure `Object.keys` in older browsers. However, it's possible to get the exact functionality requested using ECMA's new `defineProperty` feature. **As of ECMAScript 5 - Object.defineProperty** As of ECMA5 you can use [`Object.defineProperty()`](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Object/defineProperty) to define non-enumerable properties. The [**current compatibility**](http://kangax.github.io/compat-table/es5/#test-Object.defineProperty) still leaves much to be desired, but this should eventually become usable in all browsers. (Specifically note the current incompatibility with IE8!)

```
Object.defineProperty(Object.prototype, 'keys', {
    value: function keys() {
        var keys = [];
        for (var i in this) if (this.hasOwnProperty(i)) {
            keys.push(i);
        }
        return keys;
    },
    enumerable: false
});

var o = {
    'a': 1,
    'b': 2
}

for (var k in o) {
    console.log(k, o[k])
}

console.log(o.keys())

// OUTPUT
// > a 1
// > b 2
// > ["a", "b"]
```

However, since ECMA5 already added `Object.keys` you might as well use:

```
Object.defineProperty(Object.prototype, 'keys', {
    value: function keys() {
        return Object.keys(this);
    },
    enumerable: false
});
```

**Original answer**

```
Object.prototype.keys = function () {
    var keys = [];
    for (var i in this) if (this.hasOwnProperty(i)) {
        keys.push(i);
    }
    return keys;
}
```

---

**Edit:** Since this answer has been around for a while I'll leave the above untouched. Anyone reading this should also read Ivan Nevostruev's answer below. There's no way of making prototype functions non-enumerable, which leads to them always turning up in for-in loops that don't use `hasOwnProperty`. I still think this answer would be ideal if extending the prototype of Object wasn't so messy.
How can I find the keys of an object?
[ "", "javascript", "object", "key", "" ]
I'm using Eclipse 3.4 and have configured the Java code formatter with all of the options on the *Comments* tab enabled. The problem is that when I format a document comment that contains: ``` * @see <a href="test.html">test</a> ``` the code formatter inserts a space in the closing HTML, breaking it: ``` * @see <a href="test.html">test< /a> ``` Why? How do I stop this happening? This is not fixed by disabling any of the options on the *Comments* tab, such as *Format HTML tags*. The only work-around I found is to disable Javadoc formatting completely by disabling both the *Enable Javadoc comment formatting* and *Enable block comment formatting* options, which means I then have to format comment blocks manually.
I can only assume it's a bug in Eclipse. It only happens with *@see* tags, and it happens for all three built-in code formatter settings. There are some interesting bugs reported already in the neighbourhood, but I couldn't find this specific one. See for example a search for *@see* in the [Eclipse Bugzilla](https://bugs.eclipse.org/bugs/buglist.cgi?query_format=specific&order=relevance+desc&bug_status=__all__&product=JDT&content=%40see).
The XHTML compatibility guidelines recommend that self-closing tags have a space before the closing slash, like so:

```
<gcServer enabled="true" /> <!-- note the space just after "true" -->
```

I can only assume, like Bart said, that there is a bug in Eclipse's reformatter that thinks the closing tag is actually a self-closing tag. Another idea: Can you verify that your **a** tags are balanced (i.e. no unclosed tags higher up in the document)?
Why does the Eclipse code formatter break </a> in a Javadoc @see tag?
[ "", "java", "eclipse", "javadoc", "eclipse-3.4", "" ]
I'm using **Struts 2**. I'd like to return from an Action to the page which invoked it. Say I'm in page **x.jsp**, I invoke the Visual action to change CSS preferences in the session; I want to return to **x.jsp** rather than to a fixed page (i.e. **home.jsp**). Here's the relevant **struts.xml** fragment:

```
<action name="Visual" class="it.___.web.actions.VisualizationAction">
    <result name="home">/pages/home.jsp</result>
</action>
```

Of course my `VisualizationAction.execute()` returns **home**. Is there any "magic" constant (like, say, INPUT\_PAGE) that I may return to do the trick? Must I use a more involved method (i.e. extracting the request page and forwarding to it)? T.I.A.
You can use a dynamic result in struts.xml. For instance:

```
<action name="Visual" class="it.___.web.actions.VisualizationAction">
    <result name="next">${next}</result>
</action>
```

Then in your action, you create a field called next. So to invoke the action you will pass the name of the page that you want to forward to next. The action then returns "next" and struts will know which page to go to. There is a nicer explanation on this post: [Stack Overflow](https://stackoverflow.com/questions/173846/struts2-how-to-do-dynamic-url-redirects)
```
return INPUT;
```

will do the trick. The INPUT constant is defined in the Action interface itself. It indicates that the action needs more input. If by "calling page" you meant the page that took you to the action's input page, then your input page will have to store the HTTP "Referer" header in the request scope for the Action.
Struts 2: return to calling page
[ "", "java", "struts2", "struts", "" ]
When we create stored procedures in Visual Studio (with the extension .sql) the file encoding is automatically set to 'Unicode (UTF-8 with signature) - Codepage 65001'. This causes our DBAs problems as their scripts don't like it. We have to manually go in and change them to 'Western European (windows) - Codepage 1252'. This only seems to be happening since we upgraded to VS 2008 from 2005. Can anyone explain what's going on and how to stop it?
I think somebody faced a problem similar to yours and had the following workaround, which is posted at <http://connect.microsoft.com/VisualStudio/feedback/Workaround.aspx?FeedbackID=319830>
To summarise the link provided by Codeslayer (in case the page url changes)... *Change the sql file templates to ANSI encoding by opening with notepad and then saving it as an ANSI file. You can do the same for files already created. \Common7\Tools\Templates\Database Project Items* You've just made our DBAs very happy!
SQL file encoding in Visual Studio
[ "", "sql", "visual-studio", "stored-procedures", "" ]
The following code illustrates an object literal being assigned, but with no semicolon afterwards:

```
var literal = {
    say: function (msg) {
        alert(msg);
    }
}

literal.say("hello world!");
```

This appears to be legal, and doesn't issue a warning (at least in Firefox 3). Is this completely legal, or is there a strict version of JavaScript where this is not allowed? I'm wondering in particular for future compatibility issues... I would like to be writing "correct" JavaScript, so if technically I need to use the semicolon, I would like to be using it.
Technically no: JavaScript treats semicolons as optional in many situations. But, as a general rule, use them at the end of any statement. Why? Because if you ever want to compress the script, it will save you from countless hours of frustration. Automatic semicolon insertion is performed by the interpreter, so you *can* leave them out if you so choose. In the comments, someone claimed that

> Semicolons are not optional with statements like break/continue/throw

but this is incorrect. They are optional; what is really happening is that line terminators affect the automatic semicolon insertion; it is a subtle difference. Here is the rest of the standard on semicolon insertion:

> For convenience, however, such semicolons may be omitted from the source text in certain situations. These situations are described by saying that semicolons are automatically inserted into the source code token stream in those situations.
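One classic place where automatic semicolon insertion silently changes a program's meaning is a `return` whose value starts on the next line. A small sketch (the function names are mine, purely illustrative):

```javascript
// ASI inserts a semicolon immediately after `return` when the line
// ends there, so the object literal below becomes an unreachable
// block statement and the function returns undefined.
function broken() {
    return
    { value: 42 };
}

// Keeping the opening brace on the same line as `return` avoids it.
function working() {
    return {
        value: 42
    };
}

console.log(broken());        // undefined
console.log(working().value); // 42
```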
The YUI Compressor and dojo shrinksafe should work perfectly fine without semicolons since they're based on a full JavaScript parser. But Packer and JSMin won't. The other reason to always use semicolons at the end of statements is that occasionally you can accidentally combine two statements to create something very different. For example, if you follow the statement with the common technique to create a scope using a closure:

```
var literal = {
    say: function (msg) {
        alert(msg);
    }
}

(function () {
    // ....
})();
```

The parser might interpret the brackets as a function call, here causing a type error, but in other circumstances it could cause a subtle bug that's tricky to trace. Another interesting mishap is if the next statement starts with a regular expression: the parser might think the first forward slash is a division symbol.
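The regular-expression case mentioned at the end can be sketched like this (a contrived example; all of the names are mine):

```javascript
// Intended reading: assign b to a, then evaluate a regex literal.
// Because there is no semicolon after `b`, the parser instead reads
// the second line as division:  a = b / hi / g.exec("x")
var b = 8;
var hi = 2;
var g = { exec: function () { return 1; } };

var a = b
/hi/g.exec("x")

console.log(a); // 4, i.e. (8 / 2) / 1 -- not 8 followed by a regex call
```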
Are semicolons needed after an object literal assignment in JavaScript?
[ "", "javascript", "" ]
What is the best way people have found to do String to Lower case / Upper case in C++? The issue is complicated by the fact that C++ isn't an English only programming language. Is there a good multilingual method?
```
#include <algorithm>
#include <string>

std::string data = "Abc";
std::transform(data.begin(), data.end(), data.begin(), ::toupper);
```

<http://notfaq.wordpress.com/2007/08/04/cc-convert-string-to-upperlower-case/> Also, CodeProject article for common string methods: <http://www.codeproject.com/KB/stl/STL_string_util.aspx>
```
> std::string data = "Abc";
> std::transform(data.begin(), data.end(), data.begin(), ::toupper);
```

This will work, but it will use the standard "C" locale. You can use facets if you need to get a tolower for another locale. The above code using facets would be:

```
std::locale loc("");
const std::ctype<char>& ct = std::use_facet< std::ctype<char> >(loc);
std::transform(str.begin(), str.end(), str.begin(),
               std::bind1st(std::mem_fun(&std::ctype<char>::tolower), &ct));
```
String To Lower/Upper in C++
[ "", "c++", "string", "unicode", "" ]
I am messing around with [a toy interpreter in Java](http://code.google.com/p/zemscript/) and I was considering trying to write a simple compiler that can generate bytecode for the Java Virtual Machine. Which got me thinking, how much optimization needs to be done by compilers that target virtual machines such as JVM and CLI? Do Just In Time (JIT) compilers do constant folding, peephole optimizations etc?
I'm just gonna add two links which explain [Java's bytecode](http://www.ibm.com/developerworks/ibm/library/it-haggar_bytecode/) pretty well and some of the [various optimizations](http://www.ibm.com/developerworks/java/library/j-benchmark1.html) the JVM performs at runtime.
Optimisation is what makes JVMs viable as environments for long-running applications; you can bet that Sun, IBM and friends are doing their best to ensure they can optimise your bytecode and JIT-compiled code in as efficient a manner as possible. With that being said, if you think you can pre-optimise your bytecode then it probably won't do much harm. It is worth being aware, however, that JVMs can tend towards performing better (and not crashing) when presented with just the sort of bytecode the Java compiler tends to construct. It is not unknown for optimisations to be missed, or even for the JVM to crash, when permutations of bytecode occur that are correct but unlike what would be produced by javac. Hopefully that sort of thing is more in the past now, but may be something to be aware of.
Virtual Machine Optimization
[ "", "java", "jvm", "jit", "cil", "" ]
Our application is interfacing with a lot of web services these days. We have our own package that someone wrote a few years back using UTL\_HTTP and it generally works, but needs some hard-coding of the SOAP envelope to work with certain systems. I would like to make it more generic, but lack the experience to know how many scenarios I would have to deal with. The variations are in what namespaces need to be declared and the format of the elements. We have to handle both simple calls with a few parameters and those that pass a large amount of data in an encoded string. I know that 10g has UTL\_DBWS, but there is not a huge number of use cases online. Is it stable and flexible enough for general use? [Documentation](http://stanford.edu/dept/itss/docs/oracle/10g/java.101/b12021/callouts.htm)
I have used `UTL_HTTP` which is simple and works. If you face a challenge with your own package, you can probably find a solution in one of the many wrapper packages around UTL\_HTTP on the net (Google "consuming web services from pl/sql", leading you to e.g. <http://www.oracle-base.com/articles/9i/ConsumingWebServices9i.php>) The reason nobody is using `UTL_DBWS` is that it is not functional in a default installed database. You need to load a ton of Java classes into the database, but the standard instructions seem to be defective - the process spews Java errors right and left and ultimately fails. It seems very few people have been willing to take the time to track down the package dependencies in order to make this approach work.
I had this challenge and found and installed the 'SOAP API' package that Sten suggests on Oracle-Base. It provides some good envelope-creation functionality on top of UTL\_HTTP. However there were some limitations that pertain to your question. SOAP\_API assumes all requests are simple XML, i.e. only a one-layer tag hierarchy. I extended the SOAP\_API package to allow the client code to arbitrarily insert an extra tag. So you can insert a sub-level element, continue to build the request, and remember to insert a closing tag. The namespace issue was a bear for the project: different levels of XML had different namespaces. A nice debugging tool that I used is TCP Trace from Pocket Soap. www.pocketsoap.com/tcptrace/ You set it up like a proxy and watch the HTTP request and response objects between client and server code. Having said all that, we really like having a SOAP client in the database: we have full access to all data and existing PL/SQL code, and can easily loop through cursors and call the external app via SOAP when needed. It was a lot quicker and easier than deploying a middle tier with lots of custom Java or .NET code. Good luck and let me know if you'd like to see my enhanced SOAP API code.
Consuming web services from Oracle PL/SQL
[ "", "sql", "oracle", "web-services", "plsql", "" ]
Is there any JavaScript method similar to the jQuery `delay()` or `wait()` (to delay the execution of a script for a specific amount of time)?
There is the following:

```
setTimeout(function, milliseconds);
```

which can be passed the function to run and the time (in milliseconds) after which it will be executed. See: [Window `setTimeout()` Method](https://www.w3schools.com/jsref/met_win_settimeout.asp).
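A minimal sketch of both scheduling and cancelling a delayed call (the function and variable names here are just illustrative):

```javascript
function sayHello() {
    console.log("Hello after 500 ms");
}

// Pass the function itself (no parentheses) plus the delay in ms.
var timerId = setTimeout(sayHello, 500);

// setTimeout returns an id, so a pending call can also be cancelled
// with clearTimeout before it fires.
var cancelledId = setTimeout(function () {
    console.log("this never runs");
}, 500);
clearTimeout(cancelledId);
```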
Just to add to what everyone else has said about `setTimeout`: If you want to call a function with a parameter in the future, you need to set up some anonymous function calls. You need to pass the function as an argument for it to be called later. In effect this means without brackets behind the name. The following will call the alert at once, and it will display 'Hello world':

```
var a = "world";
setTimeout(alert("Hello " + a), 2000);
```

To fix this you can either put the name of a function (as Flubba has done) or you can use an anonymous function. If you need to pass a parameter, then you have to use an anonymous function.

```
var a = "world";
setTimeout(function () { alert("Hello " + a) }, 2000);
a = "Stack Overflow";
```

But if you run that code you will notice that after 2 seconds the popup will say 'Hello Stack Overflow'. This is because the value of the variable a has changed in those two seconds. To get it to say 'Hello world' after two seconds, you need to use the following code snippet:

```
function callback(a) {
    return function () {
        alert("Hello " + a);
    }
}

var a = "world";
setTimeout(callback(a), 2000);
a = "Stack Overflow";
```

It will wait 2 seconds and then popup 'Hello world'.
Execute script after specific delay using JavaScript
[ "", "javascript", "settimeout", "" ]
Every project invariably needs some type of reporting functionality. From a foreach loop in your language of choice to a full blow BI platform. > To get the job done what tools, widgets, platforms has the group used with success, frustration and failure?
For knocking out fairly "run of the mill" reports, SQL Reporting Services is really quite impressive. For complicated analysis, loading the data (maybe pre-aggregated) into an Excel pivot table is usually adequate for most users. I've found you can spend a lot of time (and money) building a comprehensive "ad-hoc" reporting suite, and after the first month or two of "wow factor", 99% of the reports generated will be the same report with minor differences in a fixed set of parameters. Don't accept it when a user says they want "ad-hoc" reports without specifying what goals and targets they're looking for. They are just fishing, and they need to spend as much time THINKING about THEIR reporting requirements as YOU would have to spend BUILDING their solution. I've spent too much time building "the system that can report everything" only for it to become out of date or out of favour before it was finished. Much better to get the quick wins out of the way as quickly as possible and then spend time "systemising" the most important reports.
For most reports we use [BIRT](http://www.eclipse.org/birt/).
What is your reporting tool of choice?
[ "", "sql", "reporting", "business-intelligence", "" ]
How can I find any unused functions in a PHP project? Are there features or APIs built into PHP that will allow me to analyse my codebase - for example [Reflection](http://ie.php.net/manual/en/language.oop5.reflection.php), [`token_get_all()`](http://php.net/manual/en/function.token-get-all.php)? Are these APIs feature rich enough for me not to have to rely on a third party tool to perform this type of analysis?
Thanks Greg and Dave for the feedback. Wasn't quite what I was looking for, but I decided to put a bit of time into researching it and came up with this quick and dirty solution: ``` <?php $functions = array(); $path = "/path/to/my/php/project"; define_dir($path, $functions); reference_dir($path, $functions); echo "<table>" . "<tr>" . "<th>Name</th>" . "<th>Defined</th>" . "<th>Referenced</th>" . "</tr>"; foreach ($functions as $name => $value) { echo "<tr>" . "<td>" . htmlentities($name) . "</td>" . "<td>" . (isset($value[0]) ? count($value[0]) : "-") . "</td>" . "<td>" . (isset($value[1]) ? count($value[1]) : "-") . "</td>" . "</tr>"; } echo "</table>"; function define_dir($path, &$functions) { if ($dir = opendir($path)) { while (($file = readdir($dir)) !== false) { if (substr($file, 0, 1) == ".") continue; if (is_dir($path . "/" . $file)) { define_dir($path . "/" . $file, $functions); } else { if (substr($file, - 4, 4) != ".php") continue; define_file($path . "/" . $file, $functions); } } } } function define_file($path, &$functions) { $tokens = token_get_all(file_get_contents($path)); for ($i = 0; $i < count($tokens); $i++) { $token = $tokens[$i]; if (is_array($token)) { if ($token[0] != T_FUNCTION) continue; $i++; $token = $tokens[$i]; if ($token[0] != T_WHITESPACE) die("T_WHITESPACE"); $i++; $token = $tokens[$i]; if ($token[0] != T_STRING) die("T_STRING"); $functions[$token[1]][0][] = array($path, $token[2]); } } } function reference_dir($path, &$functions) { if ($dir = opendir($path)) { while (($file = readdir($dir)) !== false) { if (substr($file, 0, 1) == ".") continue; if (is_dir($path . "/" . $file)) { reference_dir($path . "/" . $file, $functions); } else { if (substr($file, - 4, 4) != ".php") continue; reference_file($path . "/" . 
$file, $functions); } } } } function reference_file($path, &$functions) { $tokens = token_get_all(file_get_contents($path)); for ($i = 0; $i < count($tokens); $i++) { $token = $tokens[$i]; if (is_array($token)) { if ($token[0] != T_STRING) continue; if ($tokens[$i + 1] != "(") continue; $functions[$token[1]][1][] = array($path, $token[2]); } } } ?> ``` I'll probably spend some more time on it so I can quickly find the files and line numbers of the function definitions and references; this information is being gathered, just not displayed.
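As a footnote to the tokenizer approach above: the Reflection API mentioned in the question can produce the "defined" half of this table with much less code, at the cost of actually loading the project files first. A hedged sketch (it only sees functions that have already been included, so it is not a drop-in replacement):

```php
<?php
// Sketch: assumes the project files have already been include()d,
// since Reflection can only inspect functions that are loaded.
foreach (get_defined_functions()['user'] as $name) {
    $f = new ReflectionFunction($name);
    printf("%s defined in %s:%d\n", $name, $f->getFileName(), $f->getStartLine());
}
```

The tokenizer version above has the advantage of never executing any project code.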
You can try Sebastian Bergmann's Dead Code Detector: **Note:** *Dead Code Detector has been archived by the owner on Nov 9, 2017* **Alternative tools**: * <https://phpqa.io/> * <https://github.com/phpmd/phpmd> **Dead Code Detector:** > `phpdcd` is a Dead Code Detector (DCD) for PHP code. It scans a PHP project for all declared functions and methods and reports those as being "dead code" that are not called at least once. Source: <https://github.com/sebastianbergmann/phpdcd> Note that it's a static code analyzer, so it might give false positives for methods that are only called dynamically, e.g. it cannot detect `$foo = 'fn'; $foo();` You can install it via PEAR: ``` pear install phpunit/phpdcd-beta ``` After that you can use it with the following options: ``` Usage: phpdcd [switches] <directory|file> ... --recursive Report code as dead if it is only called by dead code. --exclude <dir> Exclude <dir> from code analysis. --suffixes <suffix> A comma-separated list of file suffixes to check. --help Prints this usage information. --version Prints the version and exits. --verbose Print progress bar. ``` --- **Note:** as per the repository notice, *this project is no longer maintained and its repository is only kept for archival purposes*. So your mileage may vary.
How can I find unused functions in a PHP project
[ "", "php", "" ]
How can you do a streaming read on a large XML file that contains a xs:sequence just below root element, without loading the whole file into a XDocument instance in memory?
Going with a SAX-style element parser and the [XmlTextReader](http://msdn.microsoft.com/en-us/library/system.xml.xmltextreader.aspx) class created with [XmlReader.Create](http://msdn.microsoft.com/en-us/library/system.xml.xmlreader.create.aspx) would be a good idea, yes. Here's a slightly-modified code example from [CodeGuru](http://www.codeguru.com/csharp/csharp/cs_data/xml/article.php/c4221/): ``` void ParseURL(string strUrl) { try { using (var reader = XmlReader.Create(strUrl)) { while (reader.Read()) { switch (reader.NodeType) { case XmlNodeType.Element: var attributes = new Hashtable(); var strURI = reader.NamespaceURI; var strName = reader.Name; if (reader.HasAttributes) { for (int i = 0; i < reader.AttributeCount; i++) { reader.MoveToAttribute(i); attributes.Add(reader.Name, reader.Value); } } StartElement(strURI, strName, strName, attributes); break; // //you can handle other cases here // //case XmlNodeType.EndElement: // Todo //case XmlNodeType.Text: // Todo default: break; } } } } catch (XmlException e) { Console.WriteLine("error occurred: " + e.Message); } } ```
I can't add a comment, since I just signed up, but the code sample posted by Hirvox and currently selected as the answer has a bug in it. It should not use the `new` keyword when calling the static `Create` method. Current: ``` using (var reader = new XmlReader.Create(strUrl)) ``` Fixed: ``` using (var reader = XmlReader.Create(strUrl)) ```
How to do streaming read of a large XML file in C# 3.5
[ "", "c#", ".net", "xml", ".net-3.5", "streaming", "" ]
I would like to flash a success message on my page. I am using the jQuery `fadeOut` method to fade and then remove the element. I can increase the duration to make it last longer, however this looks strange. What I would like to happen is have the element be displayed for five seconds, then fade quickly, and finally be removed. How can you animate this using jQuery?
The new `delay()` function in jQuery 1.4 should do the trick. Note that `remove()` is not queued with the animation, so call it from `fadeOut()`'s completion callback: ``` $('#foo').fadeIn(200).delay(5000).fadeOut(200, function() { $(this).remove(); }); ```
use `setTimeout(function(){$elem.hide();}, 5000);` Where `$elem` is the element you wish to hide, and `5000` is the delay in milliseconds. You can actually use any function within the call to `setTimeout()`, that code just defines a small anonymous function for simplicity.
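For jQuery versions before 1.4, where `delay()` is not available, the two answers combine naturally: `setTimeout()` provides the pause and `fadeOut()`'s completion callback does the removal. A browser-only sketch; `#message` is a hypothetical element id and jQuery is assumed to be loaded:

```javascript
// Show the message for five seconds, then fade it out and remove it.
setTimeout(function () {
    $('#message').fadeOut(200, function () {
        $(this).remove(); // runs only after the fade has finished
    });
}, 5000);
```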
How do you pause before fading an element out using jQuery?
[ "", "javascript", "jquery", "animation", "" ]
I've used Apache CXF to expose about ten java classes as web services. I've generated clients using CXF, Axis, and .NET. In Axis and CXF a "Service" or "Locator" is generated. From this service you can get a "Port". The "Port" is used to make individual calls to the methods exposed by the web service. In .NET the "Service" directly exposes the calls to the web service. Can someone explain the difference between a port, a service, a locator, and an endpoint when it comes to web services? Axis: ``` PatientServiceImplServiceLocator locator = new PatientServiceImplServiceLocator(); PatientService service = locator.getPatientServiceImplPort(); ``` CXF: ``` PatientServiceImplService locator = new PatientServiceImplService(); PatientService service = locator.getPatientServiceImplPort(); ``` .net: ``` PatientServiceImplService service = new PatientServiceImplService(); ```
I'd hop over to <http://www.w3.org/TR/wsdl.html> which I think explains Port, Service and Endpoint reasonably well. A locator is an implementation specific mechanism that some WS stacks use to provide access to service endpoints.
I found the information based on Kevin Kenny's answer, but I figured I'd post it here for others. A WSDL document defines services as collections of network endpoints, or ports. In WSDL, the abstract definition of endpoints and messages is separated from their concrete network deployment or data format bindings. This allows the reuse of abstract definitions: messages, which are abstract descriptions of the data being exchanged, and port types, which are abstract collections of operations. The concrete protocol and data format specifications for a particular port type constitute a reusable binding. A port is defined by associating a network address with a reusable binding, and a collection of ports defines a service. Hence, a WSDL document uses the following elements in the definition of network services: * **Types** – a container for data type definitions using some type system (such as XSD). * **Message** – an abstract, typed definition of the data being communicated. * **Operation** – an abstract description of an action supported by the service. * **Port Type** – an abstract set of operations supported by one or more endpoints. * **Binding** – a concrete protocol and data format specification for a particular port type. * **Port** – a single endpoint defined as a combination of a binding and a network address. * **Service** – a collection of related endpoints.
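A skeletal, purely illustrative WSDL for the service in the question shows how those elements nest: the port combines a binding with a network address, and the service groups the ports. The namespace URIs and the address location are placeholders:

```xml
<definitions name="PatientService"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.com/patient">
  <types><!-- XSD type definitions --></types>
  <message name="getPatientRequest"><!-- abstract, typed payload --></message>
  <portType name="PatientService">
    <operation name="getPatient"><!-- abstract operation --></operation>
  </portType>
  <binding name="PatientServiceSoapBinding" type="tns:PatientService">
    <!-- concrete protocol and data format for the port type -->
  </binding>
  <service name="PatientServiceImplService">
    <port name="PatientServiceImplPort" binding="tns:PatientServiceSoapBinding">
      <soap:address location="http://example.com/PatientService"/>
    </port>
  </service>
</definitions>
```

The generated "Service"/"Locator" classes are client-side wrappers around the `<service>` element, and `getPatientServiceImplPort()` returns a proxy for the `<port>`.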
What is the difference between an endpoint, a service, and a port when working with webservices?
[ "", "java", ".net", "web-services", "cxf", "axis", "" ]
`std::swap()` is used by many std containers (such as `std::list` and `std::vector`) during sorting and even assignment. But the std implementation of `swap()` is very generalized and rather inefficient for custom types. Thus efficiency can be gained by overloading `std::swap()` with a custom type specific implementation. But how can you implement it so it will be used by the std containers?
The right way to overload `std::swap`'s implementation (aka specializing it) is to write it in the same namespace as what you're swapping, so that it can be found via [argument-dependent lookup (ADL)](https://en.cppreference.com/w/cpp/language/adl). One particularly easy thing to do is: ``` class X { // ... friend void swap(X& a, X& b) { using std::swap; // bring in swap for built-in types swap(a.base1, b.base1); swap(a.base2, b.base2); // ... swap(a.member1, b.member1); swap(a.member2, b.member2); // ... } }; ```
**Attention Mozza314** Here is a simulation of the effects of a generic `std::algorithm` calling `std::swap`, and having the user provide their swap in namespace std. As this is an experiment, this simulation uses `namespace exp` instead of `namespace std`. ``` // simulate <algorithm> #include <cstdio> namespace exp { template <class T> void swap(T& x, T& y) { printf("generic exp::swap\n"); T tmp = x; x = y; y = tmp; } template <class T> void algorithm(T* begin, T* end) { if (end-begin >= 2) exp::swap(begin[0], begin[1]); } } // simulate user code which includes <algorithm> struct A { }; namespace exp { void swap(A&, A&) { printf("exp::swap(A, A)\n"); } } // exercise simulation int main() { A a[2]; exp::algorithm(a, a+2); } ``` For me this prints out: ``` generic exp::swap ``` If your compiler prints out something different then it is not correctly implementing "two-phase lookup" for templates. If your compiler is conforming (to any of C++98/03/11), then it will give the same output I show. And in that case exactly what you fear will happen, does happen. And putting your `swap` into namespace `std` (`exp`) did not stop it from happening. Dave and I are both committee members and have been working this area of the standard for a decade (and not always in agreement with each other). But this issue has been settled for a long time, and we both agree on how it has been settled. Disregard Dave's expert opinion/answer in this area at your own peril. This issue came to light after C++98 was published. Starting about 2001 Dave and I began to [work this area](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2001/n1289.html). 
And this is the modern solution: ``` // simulate <algorithm> #include <cstdio> namespace exp { template <class T> void swap(T& x, T& y) { printf("generic exp::swap\n"); T tmp = x; x = y; y = tmp; } template <class T> void algorithm(T* begin, T* end) { if (end-begin >= 2) swap(begin[0], begin[1]); } } // simulate user code which includes <algorithm> struct A { }; void swap(A&, A&) { printf("swap(A, A)\n"); } // exercise simulation int main() { A a[2]; exp::algorithm(a, a+2); } ``` Output is: ``` swap(A, A) ``` **Update** An observation has been made that: ``` namespace exp { template <> void swap(A&, A&) { printf("exp::swap(A, A)\n"); } } ``` works! So why not use that? Consider the case that your `A` is a class template: ``` // simulate user code which includes <algorithm> template <class T> struct A { }; namespace exp { template <class T> void swap(A<T>&, A<T>&) { printf("exp::swap(A, A)\n"); } } // exercise simulation int main() { A<int> a[2]; exp::algorithm(a, a+2); } ``` Now it doesn't work again. :-( So you could put `swap` in namespace std and have it work. But you'll need to remember to put `swap` in `A`'s namespace for the case when you have a template: `A<T>`. And since both cases will work if you put `swap` in `A`'s namespace, it is just easier to remember (and to teach others) to just do it that one way.
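To tie this together, here is a minimal made-up value type using the friend-function idiom from the accepted answer; any generic code that does `using std::swap; swap(a, b);` picks this overload up through ADL:

```cpp
#include <vector>

// "Buffer" is an invented example type; its swap exchanges the underlying
// storage in O(1) instead of copying every element.
struct Buffer {
    std::vector<int> data;

    friend void swap(Buffer& a, Buffer& b) noexcept {
        a.data.swap(b.data); // pointer swap, no element copies
    }
};
```

Standard algorithms such as `std::sort` swap elements through this mechanism, so `Buffer` objects get the cheap swap instead of three full vector copies.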
How to overload std::swap()
[ "", "c++", "performance", "optimization", "stl", "c++-faq", "" ]
Is it necessary or advantageous to write custom connection pooling code when developing applications in .NET with an SQL Server database? I know that ADO.NET gives you the option to enable/disable connection pooling -- does that mean that it's built into the framework and I don't need to worry about it? Why do people talk about writing their own connection pooling software and how is this different than what's built into ADO.NET?
The connection pooling built-in to ADO.Net is robust and mature. I would recommend against attempting to write your own version.
I'm no *real* expert on this matter, but I know ADO.NET has its own connection pooling system, and as long as I've been using it it's been faultless. My reaction would be that there's no point in reinventing the wheel... Just make sure you close your connections when you're finished with them and everything will be fine! I hope someone else can give you some more firm answers!
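To make the built-in behaviour concrete: ADO.NET pooling is driven entirely by the connection string, so "using" it mostly means tuning keywords rather than writing code. A hedged sketch; the server, database, and limits are placeholders:

```csharp
using System.Data.SqlClient;

// Pooling is on by default; the keywords below just make the limits explicit.
// Connections with identical strings share a single pool.
var connectionString =
    "Data Source=myServer;Initial Catalog=MyDb;Integrated Security=true;" +
    "Pooling=true;Min Pool Size=5;Max Pool Size=100";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open(); // handed out from the pool when one is available
    // ... run commands ...
} // Dispose/Close returns the connection to the pool; the socket stays open
```

As the second answer notes, the main discipline is disposing connections promptly, since a leaked connection stays checked out of the pool.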
Connection Pooling in .NET/SQL Server?
[ "", "c#", ".net", "sql-server", "connection-pooling", "" ]
I could swear I've seen people typing function headers and then hitting some key combination to auto-create function braces and insert the cursor between them like so: ``` void foo()_ ``` to ``` void foo() { _ } ``` Is this a built-in feature?
Check out [Resharper](http://www.jetbrains.com/resharper/documentation/feature_map.html) - it is a Visual Studio add-on with this feature, among many other development helps. Also see [C# Completer](http://www.knowdotnet.com/articles/csharpcompleter.html), another add-on. If you want to roll your own, check out [this article](http://www.developer.com/net/cplus/article.php/3347271). Insane that one should have to do that, though.
The tools look nice (especially Resharper but at $200-350 ouch!) but I ended up just recording a macro and assigning it to ctrl+alt+[ Macro came out like this: ``` Sub FunctionBraces() DTE.ActiveDocument.Selection.NewLine DTE.ActiveDocument.Selection.Text = "{}" DTE.ActiveDocument.Selection.CharLeft DTE.ActiveDocument.Selection.NewLine(2) DTE.ActiveDocument.Selection.LineUp DTE.ActiveDocument.Selection.Indent End Sub ``` Edit: I used the macro recorder to make this and it wasn't too bad
How do I make Visual Studio auto generate braces for a function block?
[ "", "c#", "visual-studio", "" ]
I would like to monitor a log file that is being written to by an application. I want to process the file line by line as, or shortly after, it is written. I have not found a way of detecting that a file has been extended after reaching eof. The code needs to work on Mac and PC, and can be in any language, though I am most familiar with C++ and Perl. Does anybody have a suggestion for the best way to do it?
In Perl, the [File::Tail](http://search.cpan.org/dist/File-Tail/) module does exactly what you need.
A generic enough answer: Most languages, on EOF, return that no data were read. You can re-try reading after an interval, and if the file has grown since, this time the operating system will return data.
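That retry-after-EOF idea can be sketched portably in C++: clear the stream's EOF flag, and the next pass will pick up whatever the writer has appended since. This is only a sketch; a real tailer would also sleep between passes and handle log rotation:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Returns the lines appended since the last call (a final line without a
// trailing newline may be returned incomplete). The caller keeps the stream
// open and calls this again after a short wait.
std::vector<std::string> readNewLines(std::ifstream& in) {
    std::vector<std::string> lines;
    std::string line;
    while (std::getline(in, line))
        lines.push_back(line);
    in.clear(); // clear the EOF flag so the next call can read appended data
    return lines;
}
```

Calling this in a loop with a short sleep is essentially what `File::Tail` and `tail -f` do.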
Reading data from a log file as a separate application is writing to it
[ "", "c++", "perl", "macos", "file-io", "logging", "" ]
Is there an easy way to iterate over an associative array of this structure in PHP: The array `$searches` has a numbered index, with between 4 and 5 associative parts. So I not only need to iterate over `$searches[0]` through `$searches[n]`, but also `$searches[0]["part0"]` through `$searches[n]["partn"]`. The hard part is that different indexes have different numbers of parts (some might be missing one or two). Thoughts on doing this in a way that's nice, neat, and understandable?
Nest two [`foreach` loops](http://php.net/foreach): ``` foreach ($array as $i => $values) { print "$i {\n"; foreach ($values as $key => $value) { print " $key => $value\n"; } print "}\n"; } ```
I know it's question necromancy, but iterating over multidimensional arrays is easy with SPL iterators: ``` $iterator = new RecursiveIteratorIterator(new RecursiveArrayIterator($array)); foreach($iterator as $key=>$value) { echo $key.' -- '.$value.'<br />'; } ``` See * <http://php.net/manual/en/spl.iterators.php>
Iterating over a complex Associative Array in PHP
[ "", "php", "arrays", "associative-array", "" ]
I specifically want to add the style of `background-color` to the `<body>` tag of a master page, from the code behind (C#) of a content page that uses that master page. I have different content pages that need to make the master page has different colors depending on which content page is loaded, so that the master page matches the content page's theme. I have a solution below: --- I'm looking for something more like: ``` Master.Attributes.Add("style", "background-color: 2e6095"); ``` Inside of the page load function of the content page. But I can't get the above line to work. I only need to change the `background-color` for the `<body>` tag of the page.
What I would do for the particular case is: i. Define the body as a server side control ``` <body runat="server" id="masterpageBody"> ``` ii. In your content aspx page, reference the master page in the Page directive: ``` <%@ Page MasterPageFile="..." %> ``` iii. In the Content Page, you can now simply use ``` Master.FindControl("masterpageBody") ``` and have access to the control. Now, you can change whatever properties/style you like!
This is what I came up with: In the page load function: ``` HtmlGenericControl body = (HtmlGenericControl)Master.FindControl("default_body"); body.Style.Add(HtmlTextWriterStyle.BackgroundColor, "#2E6095"); ``` Where > default\_body = the id of the body tag.
How can I change the background of a masterpage from the code behind of a content page?
[ "", "c#", "asp.net", ".net", "master-pages", "" ]
Is there an easy way in C# to create [Ordinals](http://en.wikipedia.org/wiki/Ordinal_numbers_%28linguistics%29) for a number? For example: * 1 returns 1st * 2 returns 2nd * 3 returns 3rd * ...etc Can this be done through `String.Format()` or are there any functions available to do this?
This page gives you a complete listing of all custom numerical formatting rules: [Custom numeric format strings](https://learn.microsoft.com/en-us/dotnet/standard/base-types/custom-numeric-format-strings) As you can see, there is nothing in there about ordinals, so it can't be done using `String.Format`. However its not really that hard to write a function to do it. ``` public static string AddOrdinal(int num) { if( num <= 0 ) return num.ToString(); switch(num % 100) { case 11: case 12: case 13: return num + "th"; } switch(num % 10) { case 1: return num + "st"; case 2: return num + "nd"; case 3: return num + "rd"; default: return num + "th"; } } ``` Update: Technically Ordinals don't exist for <= 0, so I've updated the code above. Also removed the redundant `ToString()` methods. Also note, this is not internationalized. I've no idea what ordinals look like in other languages.
Remember internationalisation! The solutions here only work for English. Things get a lot more complex if you need to support other languages. For example, in Spanish "1st" would be written as "1.o", "1.a", "1.os" or "1.as" depending on whether the thing you're counting is masculine, feminine or plural! So if your software needs to support different languages, try to avoid ordinals.
Is there an easy way to create ordinals in C#?
[ "", "c#", ".net", "ordinals", "" ]
Regarding the same program as [my question a few minutes ago](https://stackoverflow.com/questions/20061/store-data-from-a-c-application)... I added a setup project and built an MSI for the program (just to see if I could figure it out) and it works great except for one thing. When I tried to install it on my parent's laptop, their antivirus (the free Avast Home Edition) set off an alarm and accused my setup.exe of being a Trojan. Does anyone have any idea why this would be happening and how I can fix it?
Indeed, boot from a clean CD (use a known good machine to build [BartPE](http://nu2.nu/pebuilder/) or something similar) and scan your machine thoroughly. Another good thing to check, though, would be exactly which virus Avast! thinks your program is. Once you know that, you should be able to look it up in one of the virus databases and ensure that your software can't contain it. The odds are that Avast! is just getting a false positive for some reason, and I don't know that there's much you can do about that other than contacting Avast! and hoping for a reply.
I would do what jsight suggested and make sure that your machine did not have a virus. I would also submit the .msi file to [Avast's online scanner](http://onlinescan.avast.com/ "avast! Online Scanner") and see what they identified as being in your package. If that reports your file as containing a trojan, contact Avast and ask them to verify that your .msi package does contain a trojan. If it doesn't contain a trojan, find out from Avast what triggered their scanner. There may be something in your code that matches a pattern that Avast looks for, They may be able to adjust their pattern to ignore your file or you could tweak your code so that it doesn't trigger their scanner.
C# application detected as a virus
[ "", "c#", ".net", "antivirus", "" ]
This isn't a design question, really, though it may seem like it. (Well, okay, it's kind of a design question). What I'm wondering is why the C++ `std::fstream` classes don't take a `std::string` in their constructor or open methods. Everyone loves code examples so: ``` #include <iostream> #include <fstream> #include <string> int main() { std::string filename = "testfile"; std::ifstream fin; fin.open(filename.c_str()); // Works just fine. fin.close(); //fin.open(filename); // Error: no such method. //fin.close(); } ``` This gets me all the time when working with files. Surely the C++ library would use `std::string` wherever possible?
By taking a C string, the C++03 [`std::fstream`](http://en.cppreference.com/w/cpp/io/basic_fstream) class reduced dependency on the `std::string` class. In C++11, however, the `std::fstream` class does allow passing a `std::string` for its constructor parameter. Now, you may wonder why there isn't a transparent conversion from a `std::string` to a C string, so a class that expects a C string could still take a `std::string` just like a class that expects a `std::string` can take a C string. The reason is that this would cause a conversion cycle, which in turn may lead to problems. For example, suppose `std::string` would be convertible to a C string so that you could use `std::string`s with `fstream`s. Suppose also that C strings are convertible to `std::string`s as is the state in the current standard. Now, consider the following: ``` void f(std::string str1, std::string str2); void f(char* cstr1, char* cstr2); void g() { char* cstr = "abc"; std::string str = "def"; f(cstr, str); // ERROR: ambiguous } ``` Because you can convert either way between a `std::string` and a C string the call to `f()` could resolve to either of the two `f()` alternatives, and is thus ambiguous. The solution is to break the conversion cycle by making one conversion direction explicit, which is what the STL chose to do with `c_str()`.
There are several places where the C++ standard committee did not really optimize the interaction between facilities in the standard library. `std::string` and its use in the library is one of these. One other example is `std::swap`. Many containers have a swap member function, but no overload of std::swap is supplied. The same goes for `std::sort`. I hope all these small things will be fixed in the upcoming standard.
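For reference, C++11 settled the question directly: the `std::fstream` constructors and `open()` members gained `std::string` overloads, so the `c_str()` workaround is only needed on pre-C++11 compilers. A small sketch:

```cpp
#include <fstream>
#include <string>

// C++11: std::ifstream accepts std::string directly, no c_str() required.
bool canOpen(const std::string& filename) {
    std::ifstream fin(filename); // std::string overload
    return fin.is_open();
}
```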
Why don't the std::fstream classes take a std::string?
[ "", "c++", "stl", "file-io", "stdstring", "" ]
I'm writing an application that on some stage performs low-level disk operations in Linux environment. The app actually consists of 2 parts, one runs on Windows and interacts with a user and another is a linux part that runs from a LiveCD. User makes a choice of Windows drive letters and then a linux part performs actions with corresponding partitions. The problem is finding a match between a Windows drive letter (like C:) and a linux device name (like /dev/sda1). This is my current solution that I rate as ugly: * store partitions information (i.e. drive letter, number of blocks, drive serial number etc.) in Windows in some pre-defined place (i.e. the root of the system partition). * read a list of partitions from /proc/partitions. Get only those partitions that has major number for SCSI or IDE hard drives and minor number that identifies them as real partitions and not the whole disks. * Try to mount each of them with either ntfs or vfat file systems. Check whether the mounted partition contains the information stored by Windows app. * Upon finding the required information written by the Windows app make the actual match. For each partition found in /proc/partitions acquire drive serial number (via HDIO\_GET\_IDENTITY syscall), number of blocks (from /proc/partitions) and drive offset (/sys/blocks/drive\_path/partition\_name/start), compare this to the Windows information and if this matches - store a Windows drive letter along with a linux device name. There are a couple of problems in this scheme: * This is ugly. Writing data in Windows and then reading it in Linux makes testing a nightmare. * linux device major number is compared only with IDE or SCSI devices. This would probably fail, i.e. on USB or FireWire disks. It's possible to add these types of disks, but limiting the app to only known subset of possible devices seems to be rather bad idea. * looks like HDIO\_GET\_IDENTITY works only on IDE and SATA drives. 
* /sys/block hack may not work on other than IDE or SATA drives. Any ideas on how to improve this schema? Perhaps there is another way to determine windows names without writing all the data in windows app? P.S. The language of the app is C++. I can't change this.
Partitions have UUIDs associated with them. I don't know how to find these in Windows but in linux you can find the UUID for each partition with: > sudo vol\_id -u device (e.g. /dev/sda1) If there is an equivalent function in Windows you could simply store the UUIDs for whatever partition they pick then iterate through all known partitions in linux and match the UUIDs. **Edit:** This may be a linux-only thing, and it may specifically be the vol\_id util that generates these from something (instead of reading off meta-data for the drive). Having said that, there is nothing stopping you getting the source for vol\_id and checking out what it does.
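A note for readers on newer distributions: `vol_id` was later absorbed into util-linux, and `blkid` prints the same filesystem UUID. The device name below is just an example, and both commands need root:

```shell
# Older udev-based systems:
sudo vol_id -u /dev/sda1

# Modern systems (util-linux):
sudo blkid -s UUID -o value /dev/sda1
```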
> Partitions have UUIDs associated with them My knowledge of this is very shallow, but I thought that was only true for disks formatted with GPT (Guid Partition Table) partitions, rather than the old-style MBR format which 99% of the world is still stuck with?
How to match linux device path to windows drive name?
[ "", "c++", "linux", "drives", "" ]
I'm working on a website that will switch to a new style on a set date. The site is built with semantic HTML and CSS, so the change should just require a CSS reference change. I'm working with a designer who will need to be able to see how it's looking, as well as a client who will need to be able to review content updates in the current look as well as design progress on the new look. I'm planning to use a magic querystring value and/or a javascript link in the footer which writes out a cookie to select the new CSS page. We're working in ASP.NET 3.5. Any recommendations? I should mention that we're using IE Conditional Comments for IE8, 7, and 6 support. I may create a function that does a replacement: ``` <link href="Style/<%= GetCssRoot() %>.css" rel="stylesheet" type="text/css" /> <!--[if lte IE 8]> <link type="text/css" href="Style/<%= GetCssRoot() %>-ie8.css" rel="stylesheet" /> <![endif]--> <!--[if lte IE 7]> <link type="text/css" href="Style/<%= GetCssRoot() %>-ie7.css" rel="stylesheet" /> <![endif]--> <!--[if lte IE 6]> <link type="text/css" href="Style/<%= GetCssRoot() %>-ie6.css" rel="stylesheet" /> <![endif]--> ```
In Asp.net 3.5, you should be able to set up the Link tag in the header as a server tag. Then in the codebehind you can set the href property for the link element, based on a cookie value, querystring, date, etc. In your aspx file: ``` <head> <link id="linkStyles" rel="stylesheet" type="text/css" runat="server" /> </head> ``` And in the Code behind: ``` protected void Page_Load(object sender, EventArgs e) { string stylesheetAddress = // logic to determine stylesheet linkStyles.Href = stylesheetAddress; } ```
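Building on that, the querystring-plus-cookie plan from the question could be wired into `Page_Load` like this. A hedged sketch: the `style` key, cookie name, stylesheet naming scheme, and switch-over date are all invented:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Precedence: explicit ?style= override, then stored cookie, then the date.
    string style = Request.QueryString["style"];
    if (!string.IsNullOrEmpty(style))
        Response.Cookies["style"].Value = style; // remember the choice
    else if (Request.Cookies["style"] != null)
        style = Request.Cookies["style"].Value;
    else
        style = DateTime.Now >= new DateTime(2008, 10, 1) // placeholder date
            ? "new"
            : "current";

    linkStyles.Href = string.Format("Style/{0}.css", style);
}
```

The same value can feed the IE conditional-comment links by building each `href` from the chosen root.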
You should look into `ASP.NET` themes, that's exactly what they're used for. They also allow you to skin controls, which means give them a set of default attributes.
How to set up a CSS switcher
[ "", "javascript", "html", "asp.net", "css", "" ]
I am using Java back end for creating an XML string which is passed to the browser. Currently I am using simple string manipulation to produce this XML. Is it essential that I use some XML library in Java to produce the XML string? I find the libraries very difficult to use compared to what I need.
It's not essential, but advisable. However, if string manipulation works for you, then go for it! There are plenty of cases where small or simple XML text can be safely built by hand. Just be aware that creating XML text is harder than it looks. Here's some criteria I would consider: * First: how much *control* do you have on the information that goes into the xml? The less control you have on the source data, the more likely you will have trouble, and the more advantageous the library becomes. For example: (a) Can you *guarantee* that the element names will never have a character that is illegal in a name? (b) How about quotes in an attribute's content? Can they happen, and are you handling them? (c) Does the data ever contain anything that might need to be encoded as an [entity](http://en.wikipedia.org/wiki/XML_entity) (like the less-than which often needs to be output as **&lt;**); are you doing it correctly? * Second, maintainability: is the code that builds the XML easy to understand *by someone else*? You probably don't want to be stuck with the code for life. I've worked with second-hand C++ code that hand-builds XML and it can be surprisingly obscure. Of course, if this is a personal project of yours, then you don't need to worry about "others": substitute "in a year" for "others" above. I wouldn't worry about performance. If your XML is simple enough that you can hand-write it, any overhead from the library is probably meaningless. Of course, your case might be different, but you should measure to prove it first. Finally, Yes; you can hand build XML text by hand if it's simple enough; but not knowing the libraries available is *probably* not the right reason. A modern XML library is a quite powerful tool, but it can also be daunting. However, learning the essentials of your XML library is not that hard, and it can be quite handy; among other things, it's almost a requisite in today's job marketplace. 
Just don't get bogged down by namespaces, schemas and other fancier features until you get the essentials. Good luck.
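If the XML does outgrow string handling, the JDK's built-in DOM plus `Transformer` pipeline already covers the escaping criteria above with no third-party dependency. A sketch; the `error` element and `message` attribute are invented for illustration:

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlBuild {
    // Builds <error message="..."/>; the API escapes <, &, and quotes for us.
    public static String build(String message) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                                             .newDocumentBuilder()
                                             .newDocument();
        Element root = doc.createElement("error");
        root.setAttribute("message", message);
        doc.appendChild(root);

        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}
```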
Xml is hard. Parsing it yourself is a bad idea; generating content yourself is even worse. Have a look at the Xml 1.1 spec. You have to deal with such things as proper encoding, attribute encoding (unescaped quotes or angle brackets produce invalid xml), proper CDATA escaping, UTF encoding, custom DTD entities, and that's before you throw in xml namespaces with the default / empty namespace, namespace attributes, etc. Learn a toolkit, there's plenty available.
Is it essential that I use libraries to manipulate XML?
[ "", "java", "xml", "" ]
I would like to display details of an xml error log to a user in a winforms application and am looking for the best control to do the job. The error data contains all of the server variables at the time that the error occurred. These have been formatted into an XML document that looks something like: ``` <error> <serverVariables> <item> <value /> </item> </serverVariables> <queryString> <item name=""> <value string="" /> </item> </queryString> </error> ``` I would like to read this data from the string that it is stored in and display it to the user via a windows form in a useful way. XML Notepad does a cool job of formatting xml, but is not really what I am looking for since I would rather display item details in a *Name : string* format. Any suggestions or am I looking at a custom implementation? [EDIT] A section of the data that needs to be displayed: ``` <?xml version="1.0" encoding="utf-8"?> <error host="WIN12" type="System.Web.HttpException" message="The file '' does not exist." source="System.Web" detail="System.Web.HttpException: The file '' does not exist. at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath) at" time="2008-09-01T07:13:08.9171250+02:00" statusCode="404"> <serverVariables> <item name="ALL_HTTP"> <value string="HTTP_CONNECTION:close HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) " /> </item> <item name="AUTH_TYPE"> <value string="" /> </item> <item name="HTTPS"> <value string="off" /> </item> <item name="HTTPS_KEYSIZE"> <value string="" /> </item> <item name="HTTP_USER_AGENT"> <value string="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)" /> </item> </serverVariables> <queryString> <item name="tid"> <value string="196" /> </item> </queryString> </error> ```
You can transform your XML data using [XSLT](http://www.xml.com/pub/a/2002/08/14/dotnetxslt.html). Another option is to use XLinq. If you want a concrete code example, provide us with sample data. **EDIT**: here is a sample XSLT transform for your XML file: ``` <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="text"/> <xsl:template match="//error/serverVariables"> <xsl:text>Server variables: </xsl:text> <xsl:for-each select="item"> <xsl:value-of select="@name"/>:<xsl:value-of select="value/@string"/> <xsl:text> </xsl:text> </xsl:for-each> </xsl:template> <xsl:template match="//error/queryString"> <xsl:text>Query string items: </xsl:text> <xsl:for-each select="item"> <xsl:value-of select="@name"/>:<xsl:value-of select="value/@string"/> <xsl:text> </xsl:text> </xsl:for-each> </xsl:template> </xsl:stylesheet> ``` You can apply this transform using the [XslCompiledTransform](http://msdn.microsoft.com/en-us/library/system.xml.xsl.xslcompiledtransform.aspx) class. It should give output like this: > Server variables: > ALL\_HTTP:HTTP\_CONNECTION:close HTTP\_USER\_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) > AUTH\_TYPE: > HTTPS:off > HTTPS\_KEYSIZE: > HTTP\_USER\_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) > > Query string items: > tid:196
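The XSLT above is the .NET-native route; as a cross-check of the same flattening idea — any XML library can produce the *Name : string* listing directly — here is a sketch using Python's standard-library ElementTree. The element and attribute names mirror the sample document in the question; the shortened sample XML is made up for the example.

```python
# Flatten the error document into "name : value" lines, the same output
# the XSLT stylesheet produces, using only the standard library.
import xml.etree.ElementTree as ET

ERROR_XML = """<?xml version="1.0" encoding="utf-8"?>
<error host="WIN12" statusCode="404">
  <serverVariables>
    <item name="HTTPS"><value string="off" /></item>
    <item name="AUTH_TYPE"><value string="" /></item>
  </serverVariables>
  <queryString>
    <item name="tid"><value string="196" /></item>
  </queryString>
</error>"""

def flatten(xml_text):
    """Return 'name : value' lines for every <item> in the document."""
    root = ET.fromstring(xml_text)
    lines = []
    for section in ("serverVariables", "queryString"):
        lines.append(section + ":")
        for item in root.findall(f"./{section}/item"):
            value = item.find("value")
            lines.append(f"  {item.get('name')} : {value.get('string')}")
    return lines

if __name__ == "__main__":
    print("\n".join(flatten(ERROR_XML)))
```

From there, binding the list of name/value pairs to a two-column grid control is straightforward.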
You could try using the `DataGridView` control. To see an example, load an XML file in DevStudio and then right-click on the XML and select "View Data Grid". You'll need to read the API documentation on the control to use it.
Displaying XML data in a Winforms control
[ "c#", "xml", "winforms", "formatting" ]
**Is there a way to utilise the GCC compiler whilst still being able to develop via the Visual Studio IDE?** Our project is cross-platform, and I quite frequently get into trouble from my [colleague](https://stackoverflow.com/users/912/jkp) because I'm checking in code that's not standards compliant (this can be attributed to the VS compiler!). I'd still like to be able to compile using the MS compiler, so I can continue debugging, etc, however I'd like to be able to *switch* to compile using GCC, just so that I can be sure I'm not breaking the build on other platforms. **Is this possible?**
What I am about to suggest would still require a makefile, so I am partially repeating the advice from an earlier reply. Or, as was also mentioned earlier, maybe you already have a makefile, in which case you will have even fewer steps in order to accomplish what I am about to describe. Once you know your specific windows command-line command for invoking make or g++ on your code, then you create a "Pre-Build Event" in your Visual Studio Project. ("Project Properties >> Configuration Properties >> Build Events >> Pre-Build Event"). The pre-build event can call a bat file script, or any other script on your machine, and that script will be able to return an error-code. Essentially, "script OK," or "script FAILED" is the extent of the amount of communication your script can have BACK to visual studio. The script doesn't automatically see all the visual studio environment variables (such as $(InputDir), $(ProjectDir), $(SolutionName), etc), however you can use those variables when you specify how to call the script. In other words, you can pass those values to the script as arguments. Set this up so that every time you build in Visual Studio, the pre-build event will FIRST try to run make/g++ on your code. If your script (the one that calls make/g++) detects any problems, then the script returns an error and the build can be STOPPED right then and there. The script can print to stdout or stderr and that output should be visible to you in the Visual Studio Build output window (the window that usually shows stuff like "========== Build: 3 succeeded, 0 failed"). You can have the script print: "BUILD FAILED, non-portable code detected, make/g++ returned the following:........." This way, you don't have to remember to periodically switch from Visual Studio to the command line. It will be automatically done for you every time you build.
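The pre-build event described above ultimately needs a script that runs the cross-platform compiler, surfaces its output, and reports failure through its exit code. Here is one possible sketch of that gate script; the actual compiler command line is a placeholder — substitute your real make/g++ invocation.

```python
# Pre-build gate: run the portability build, print its output (which shows
# up in the Visual Studio Build window), and exit non-zero so the build
# stops right there on failure.
# Example use from a pre-build event: gate.py g++ -Wall -Werror -c main.cpp
import subprocess
import sys

def run_gate(command):
    """Run the given build command; return its exit code (0 means OK)."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        print("BUILD FAILED, non-portable code detected; the compiler said:")
        print(result.stdout + result.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_gate(sys.argv[1:]))
```

Visual Studio only sees the exit code, so anything non-zero halts the build exactly as the answer describes.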
I don't think there is a simple switch, because gcc's command-line options are very different from VSs. In any case, just running the compiler will be non-trivial, as your build system probably sets a bunch of preprocessor defines and build variables that need to be set for the compile to succeed. If your colleague is working on Unix, he probably has a make, scons or cmake-based build system anyway. You can use [Cygwin](http://www.cygwin.com) to install the standard Unix toolchain on Windows, including gcc, make, flex, bison and all the other Unix goodies. There are native versions of scons and cmake, but those will try to use VS, so that won't help you. I haven't tried installing them through Cygwin to see if that forces them to gcc, but that might not be relevant to you. Creating a make system that uses the VS compiler is possible but painful (been there, done that). And a different question. ;) You can then use a special buildstep to run the gcc compile from inside VS. It would be better to have a separate build target (like Debug and Release), and you can create those in the project files (they're just ASCII files, check them out), but I'm not enough of a VS person to know how easy that would be to do. Keeping it up-to-date will be a little painful, you might want to write a script to create it automatically.
Using GCC from within VS 2005(8) IDE
[ "c++", "visual-studio", "visual-c++", "gcc", "cross-platform" ]
I have a collection of classes that inherit from an abstract class I created. I'd like to use the abstract class as a factory for creating instances of concrete implementations of my abstract class. Is there any way to hide a constructor from all code except a parent class? I'd like to do this basically ``` public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return new ConcreteClassA(); if (args == "b") return new ConcreteClassB(); } } public class ConcreteClassA : AbstractClass { } public class ConcreteClassB : AbstractClass { } ``` But I want to prevent anyone from directly instantiating the 2 concrete classes. I want to ensure that only the MakeAbstractClass() method can instantiate the concrete classes. Is there any way to do this? **UPDATE** I don't need to access any specific methods of ConcreteClassA or B from outside of the Abstract class. I only need the public methods my Abstract class provides. I don't really need to prevent the Concrete classes from being instantiated, I'm just trying to avoid it since they provide no new public interfaces, just different implementations of some very specific things internal to the abstract class. To me, the simplest solution is to [make child classes as samjudson mentioned](https://stackoverflow.com/questions/20185/is-there-a-way-to-make-a-constructor-only-visible-to-a-parent-class-in-c#20200). I'd like to avoid this however since it would make my abstract class' file a lot bigger than I'd like it to be. I'd rather keep classes split out over a few files for organization. I guess there's no easy solution to this...
You can make the sub classes child classes, something like this: ``` public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return new ConcreteClassA(); if (args == "b") return new ConcreteClassB(); throw new ArgumentException("Unknown class type: " + args); } private class ConcreteClassA : AbstractClass { } private class ConcreteClassB : AbstractClass { } } ``` [@Vaibhav](https://stackoverflow.com/questions/20185/is-there-a-way-to-make-a-constructor-only-visible-to-a-parent-class-in-c#20220) This does indeed mean that the classes are also hidden. But this is as far as I am aware the only way to completely hide the constructor. Edit: As others have mentioned, the same thing can be accomplished using Reflection, which might actually be closer to what you would like to be the case - for example the above method relies on the concrete classes being inside the same file as the Abstract class, which probably isn't very convenient. Having said that, this way is a nice 'Hack', and good if the number and complexity of the concrete classes is low.
> To me, the simplest solution is to > make child classes as samjudson > mentioned. I'd like to avoid this > however since it would make my > abstract class' file a lot bigger than > I'd like it to be. I'd rather keep > classes split out over a few files for > organization. No problem, just use the **partial** keyword and you can split your inner classes into as many files as you wish. You don't have to keep it in the same file. **Previous answer:** It's possible, but only with reflection: ``` public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return (AbstractClass)Activator.CreateInstance(typeof(ConcreteClassA), true); if (args == "b") return (AbstractClass)Activator.CreateInstance(typeof(ConcreteClassB), true); throw new ArgumentException("Unknown class type: " + args); } } public class ConcreteClassA : AbstractClass { private ConcreteClassA() { } } public class ConcreteClassB : AbstractClass { private ConcreteClassB() { } } ``` and here is another pattern, without ugly **MakeAbstractClass(string args)** ``` public abstract class AbstractClass<T> where T : AbstractClass<T> { public static T MakeAbstractClass() { T value = (T)Activator.CreateInstance(typeof(T), true); // your processing logic return value; } } public class ConcreteClassA : AbstractClass<ConcreteClassA> { private ConcreteClassA() { } } public class ConcreteClassB : AbstractClass<ConcreteClassB> { private ConcreteClassB() { } } ```
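For a cross-language comparison: Python has no private constructors or access modifiers at all, so the same intent has to be enforced at runtime. A small sketch of the factory idea follows — class names mirror the question, and the guard-flag mechanism is just one convention, not an established API.

```python
# Sketch of the same factory in Python: the base class refuses direct
# construction unless the call is going through its own factory method.
class AbstractClass:
    _in_factory = False  # flipped only while make() is constructing

    def __init__(self):
        if not AbstractClass._in_factory:
            raise TypeError("use AbstractClass.make() instead")

    @classmethod
    def make(cls, args):
        registry = {"a": ConcreteClassA, "b": ConcreteClassB}
        if args not in registry:
            raise ValueError("unknown class type: " + args)
        AbstractClass._in_factory = True
        try:
            return registry[args]()
        finally:
            AbstractClass._in_factory = False


class ConcreteClassA(AbstractClass):
    pass


class ConcreteClassB(AbstractClass):
    pass
```

Like the C# reflection trick, this is a guard rather than a wall — determined callers can bypass it — but it documents and enforces the intended construction path.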
Is there a way to make a constructor only visible to a parent class in C#?
[ "c#", "inheritance", "oop" ]
I'm using Visual C++ 2003 to debug a program remotely via TCP/IP. I had set the Win32 exception c0000005, "Access violation," to break into the debugger when thrown. Then, I set it back to "Use parent setting." The setting for the parent, Win32 Exceptions, is to continue when the exception is thrown. Now, when I debug the program, it breaks each time that exception is thrown, forcing me to click Continue to let it keep debugging. How do I get it to stop breaking like this?
I'd like to support [Will Dean's answer](https://stackoverflow.com/questions/8263/i-cant-get-my-debugger-to-stop-breaking-on-first-chance-exceptions#8304) An access violation sounds like an actual bug in your code. It's not something I'd expect the underlying C/++ Runtime to be throwing and catching internally. The 'first-chance-exceptions' feature is so you can intercept things which get 'caught' in code, using the debugger, and have a look. If there's nothing 'catching' that exception (which makes sense, why on earth would you catch and ignore access violations?), then it will trigger the debugger regardless of what options you may have set.
Is this an exception that your code would actually handle if you weren't running in the debugger?
I can't get my debugger to stop breaking on first-chance exceptions
[ "c++", "visual-studio", "debugging", "visual-studio-2003", "first-chance-exception" ]
I've done this before in C++ by including sqlite.h but is there a similarly easy way in C#?
[Microsoft.Data.Sqlite](https://www.nuget.org/packages/Microsoft.Data.Sqlite) by Microsoft has over 9000 downloads every day, so I think you are safe using that one. Example usage from [the documentation](https://learn.microsoft.com/dotnet/standard/data/sqlite/): ``` using (var connection = new SqliteConnection("Data Source=hello.db")) { connection.Open(); var command = connection.CreateCommand(); command.CommandText = @" SELECT name FROM user WHERE id = $id "; command.Parameters.AddWithValue("$id", id); using (var reader = command.ExecuteReader()) { while (reader.Read()) { var name = reader.GetString(0); Console.WriteLine($"Hello, {name}!"); } } } ```
I'm with Bruce. I AM using <http://system.data.sqlite.org/> with great success as well. Here's a simple class example that I created: ``` using System; using System.Text; using System.Data; using System.Data.SQLite; namespace MySqlLite { class DataClass { private SQLiteConnection sqlite; public DataClass() { //This part killed me in the beginning. I was specifying "DataSource" //instead of "Data Source" sqlite = new SQLiteConnection("Data Source=/path/to/file.db"); } public DataTable selectQuery(string query) { SQLiteDataAdapter ad; DataTable dt = new DataTable(); try { SQLiteCommand cmd; sqlite.Open(); //Initiate connection to the db cmd = sqlite.CreateCommand(); cmd.CommandText = query; //set the passed query ad = new SQLiteDataAdapter(cmd); ad.Fill(dt); //fill the datasource } catch(SQLiteException ex) { //Add your exception code here. } sqlite.Close(); return dt; } } } ``` There is also a [NuGet package: System.Data.SQLite](https://www.nuget.org/packages/System.Data.SQLite) available.
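The same connect/parameterize/read pattern exists in most languages; for a side-by-side comparison, here it is with Python's built-in sqlite3 module. The table and rows are made up for the example, and an in-memory database keeps it self-contained.

```python
# Parameterized SQLite access, the Python analogue of the SQLiteCommand /
# SQLiteDataAdapter pattern above. The ? placeholders keep the query safe
# from injection, just like command parameters in the C# version.
import sqlite3

def select_names(db, min_id):
    """Return user names with id >= min_id, in id order."""
    cur = db.execute(
        "SELECT name FROM user WHERE id >= ? ORDER BY id", (min_id,))
    return [row[0] for row in cur.fetchall()]

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
    db.executemany("INSERT INTO user (id, name) VALUES (?, ?)",
                   [(1, "ada"), (2, "grace"), (3, "edsger")])
    print(select_names(db, 2))  # ['grace', 'edsger']
```

Whatever the language, prefer parameter placeholders over concatenating user input into the SQL string.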
What is the best way to connect and use a sqlite database from C#
[ "c#", "sqlite" ]
I'm about to deploy a medium-sized site powered by Django. I have a dedicated Ubuntu server. I'm really confused over which server software to use. So I thought to myself: why not ask Stack Overflow. What I'm looking for is: * Easy to set up * Fast and easy on resources * Can serve media files * Able to serve multiple Django sites on the same server * I would rather not install PHP or anything else that sucks resources, and for which I have no use. I have heard of mod\_wsgi and mod\_python on Apache, nginx and lighty. What are the pros and cons of these, and have I missed anyone? **@Barry**: Somehow I feel like Apache is too bloated for me. What about the alternatives? **@BrianLy**: OK, I'll check out mod\_wsgi some more. But why do I need Apache if I serve static files with lighty? I have also managed to serve the Django app itself with lighty. Is that bad in any way? Sorry for being so stupid :-) **UPDATE**: What about lighty and nginx - what are the use cases when these are the perfect choice?
Since I was looking for some more in-depth answers, I decided to research the issue myself in depth. Please let me know if I've misunderstood anything. Some general recommendations are to use a separate webserver for handling media. By separate, I mean a webserver which is not running Django. This server can be for instance: * Lighttpd (Lighty) * Nginx (EngineX) * Or some other light-weight server Then, for Django, you can go down different paths. You can either: * Serve Django via **Apache** and: + **mod\_python** This is the stable and recommended/well documented way. Cons: uses a lot of memory. + **mod\_wsgi** From what I understand, mod\_wsgi is a newer alternative. It appears to be faster and easier on resources. + **mod\_fastcgi** When using FastCGI you are delegating the serving of Django to another process. Since mod\_python includes a python interpreter in every request it uses a lot of memory. This is a way to bypass that problem. There are also some security concerns. What you do is start your Django FastCGI server in a separate process and then configure Apache via rewrites to call this process when needed. Or you can: * Serve Django **without using Apache** but with another server that supports FastCGI natively: (The documentation mentions that you can do this if you don't have any Apache specific needs. I guess the reason must be to save memory.) + **Lighttpd** This is the server that runs YouTube. It seems fast and easy to use, however I've seen reports of memory leaks. + **nginx** I've seen benchmarks claiming that this server is even faster than lighttpd. It's mostly documented in Russian though. Another thing: due to limitations in Python your server should be running in forked mode, not threaded. So this is my current research, but I want more opinions and experiences.
I'm using [Cherokee](http://www.cherokee-project.com/). According to [their benchmarks](http://www.cherokee-project.com/benchmarks.html) (grain of salt with them), it handles load better than both Lighttpd and nginx... But that's not why I use it. I use it because if you type `cherokee-admin`, it starts a new server that you can log into (with a one-time password) and configure the whole server through a beautifully-done webmin. That's a killer feature. It has already saved me a *lot* of time. And it's saving my server a lot of resources! As for django, I'm running it as a threaded SCGI process. Works well. Cherokee can keep it running too. Again, very nice feature. The current Ubuntu repo version is very old so I'd advise you use [their PPA](https://launchpad.net/~cherokee-webserver/+archive). Good luck.
Cleanest & Fastest server setup for Django
[ "python", "django", "apache", "hosting" ]
For my blog I am wanting to use the Output Cache to save a cached version of a particular post for around 10 minutes, and that's fine... ``` <%@OutputCache Duration="600" VaryByParam="*" %> ``` However, if someone posts a comment, I want to clear the cache so that the page is refreshed and the comment can be seen. How do I do this in ASP.Net C#?
I've found the answer I was looking for: ``` HttpResponse.RemoveOutputCacheItem("/caching/CacheForever.aspx"); ```
The above are fine if you know what pages you want to clear the cache for. In my instance (ASP.NET MVC) I referenced the same data from all over. Therefore, when I did a [save] I wanted to clear cache site wide. This is what worked for me: <http://aspalliance.com/668> This is done in the context of an OnActionExecuting filter. It could just as easily be done by overriding OnActionExecuting in a BaseController or something. ``` HttpContextBase httpContext = filterContext.HttpContext; httpContext.Response.AddCacheItemDependency("Pages"); ``` Setup: ``` protected void Application_Start() { HttpRuntime.Cache.Insert("Pages", DateTime.Now); } ``` Minor Tweak: I have a helper which adds "flash messages" (Error messages, success messages - "This item has been successfully saved", etc). In order to avoid the flash message from showing up on every subsequent GET, I had to invalidate after writing the flash message. Clearing Cache: ``` HttpRuntime.Cache.Insert("Pages", DateTime.Now); ``` Hope this helps.
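The `AddCacheItemDependency` approach above boils down to a simple mechanism: stamp every cached page with a shared token, and invalidate everything at once by replacing the token. A framework-free sketch of that mechanism (the class and method names are made up for illustration):

```python
# Token-based mass invalidation: each cached entry remembers the token
# that was current when it was stored; bumping the token makes every
# existing entry stale at once -- the same trick as re-inserting the
# "Pages" cache item in the ASP.NET example.
class DependencyCache:
    def __init__(self):
        self._token = 0          # plays the role of the "Pages" cache item
        self._store = {}         # key -> (token_at_store_time, value)

    def put(self, key, value):
        self._store[key] = (self._token, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[0] != self._token:
            return None          # missing, or cached before the last flush
        return entry[1]

    def invalidate_all(self):
        self._token += 1         # every stored entry now fails the check
```

The stale entries are lazily discarded on the next `put`, which mirrors how the ASP.NET output cache drops pages whose dependency changed.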
Clearing Page Cache in ASP.NET
[ "c#", "asp.net", "outputcache" ]
I have inherited an old crusty `PHP application`, and I'd like to refactor it into something a little nicer to deal with, but in a gradual manner. In perl's CPAN, there is a series of classes around Class::DBI that allow you to use database rows as the basis for objects in your code, with the library generating `accessor methods` etc as appropriate, but also allowing you to add additional methods. Does anyone know of something like this for PHP? Especially something that doesn't require wholesale adoption of a "framework"... bonus points if it works in PHP4 too, but to be honest, I'd love to have another reason to ditch that. :-)
It's now defunct but [phpdbi](http://phpdbi.sourceforge.net/web/) is possibly worth a look. If you're willing to let go of some of your caveats (the framework one), I've found that [Doctrine](http://www.phpdoctrine.org/) is a pretty neat way of accessing DBs in PHP. Worth investigating anyway.
I'm trying to get more feedback on my own projects, so I'll suggest my take on ORM: [ORMer](http://greaterscope.net/projects/ORMer) Usage examples are [here](http://greaterscope.net/projects/ORMer/examples) You can phase it in, it doesn't require you to adopt MVC, and it requires very little setup.
Class::DBI-like library for php?
[ "php", "perl", "orm" ]
What are the differences between these two, and which one should I use? ``` string s = "Hello world!"; String s = "Hello world!"; ```
[`string`](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/string) is an alias in C# for [`System.String`](https://learn.microsoft.com/en-us/dotnet/api/system.string). So technically, there is no difference. It's like [`int` *vs.* `System.Int32`](https://stackoverflow.com/questions/62503/c-int-or-int32-should-i-care). As far as guidelines, it's generally recommended to use `string` any time you're referring to an object. e.g. ``` string place = "world"; ``` Likewise, I think it's generally recommended to use `String` if you need to refer specifically to the class. e.g. ``` string greet = String.Format("Hello {0}!", place); ``` This is the style that Microsoft tends to use in [their examples](https://learn.microsoft.com/en-us/dotnet/api/system.string.format#examples). It appears that the guidance in this area may have changed, as [StyleCop](https://github.com/StyleCop) now enforces the use of the C# specific aliases.
Just for the sake of completeness, here's a brain dump of related information... As others have noted, `string` is an alias for `System.String`. Assuming your code using `String` compiles to `System.String` (i.e. you haven't got a using directive for some other namespace with a different `String` type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is: ``` bool: System.Boolean byte: System.Byte char: System.Char decimal: System.Decimal double: System.Double float: System.Single int: System.Int32 long: System.Int64 object: System.Object sbyte: System.SByte short: System.Int16 string: System.String uint: System.UInt32 ulong: System.UInt64 ushort: System.UInt16 ``` Apart from `string` and `object`, the aliases are all to value types. `decimal` is a value type, but not a primitive type in the CLR. The only primitive type which doesn't have an alias is `System.IntPtr`. In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows `DateTime` literals, and has an alias for it too.) There is one circumstance in which you *have* to use the aliases: when explicitly specifying an enum's underlying type. For instance: ``` public enum Foo : UInt32 {} // Invalid public enum Bar : uint {} // Valid ``` That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the *integral-type* production, which is one token of `sbyte`, `byte`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `char`... as opposed to a *type* production as used by variable declarations for example. It doesn't indicate any other difference. Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called `ReadInt32` is unambiguous, whereas a method called `ReadInt` requires interpretation. The caller could be using a language that defines an `int` alias for `Int16`, for example. The .NET framework designers have followed this pattern, good examples being in the `BitConverter`, `BinaryReader` and `Convert` classes.
What is the difference between String and string in C#?
[ "c#", ".net", "string", "types", "alias" ]
I need to prevent [Session Fixation](http://www.owasp.org/index.php/Session_Fixation), a particular type of session hijacking, in a Java web application running in JBoss. However, it appears that the standard idiom [doesn't work in JBoss](http://www.owasp.org/index.php/Session_Fixation_in_Java). Can this be worked around?
[This defect](https://jira.jboss.org/jira/browse/JBAS-4436) (found [here](http://forum.springframework.org/showthread.php?t=54688)) points the way to the solution. The Tomcat instance that runs in JBoss is configured with emptySessionPath="true", rather than "false", which is the default. This can be modified in `.../deploy/jboss-web.deployer/server.xml`; both the HTTP and AJP connectors have this option. The feature itself is used to eliminate the context path (eg. "foo" in <http://example.com/foo>) from being included in the JSESSIONID cookie. Setting it to false will break applications that rely on cross-application authentication, which includes stuff built using some portal frameworks. It didn't negatively affect the application in question, however.
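Independent of the container setting, the application-level defence against session fixation is the one the OWASP page describes: issue a brand-new session ID whenever privileges change, typically at login, so an attacker-supplied ID is never promoted to an authenticated session. A container-agnostic sketch (the store API here is made up for illustration):

```python
# Regenerating the session ID on login: the pre-login ID -- which an
# attacker may have fixated -- is discarded, and the authenticated state
# lives only under a freshly generated, unguessable ID.
import secrets

class SessionStore:
    def __init__(self):
        self._sessions = {}

    def create(self):
        sid = secrets.token_urlsafe(32)  # cryptographically random new ID
        self._sessions[sid] = {}
        return sid

    def rotate_on_login(self, old_sid, user):
        """Invalidate the pre-login session; move its data to a fresh ID."""
        data = self._sessions.pop(old_sid, {})
        new_sid = self.create()
        self._sessions[new_sid] = dict(data, user=user)
        return new_sid
```

In servlet terms this is `session.invalidate()` followed by `request.getSession(true)` — the step that the `emptySessionPath` behaviour was silently undoing.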
This problem and the specific case in which it occurs is a problem in Tomcat as well as JBoss. Tomcat shares the emptySessionPath="true" effect (and actually JBoss inherits it from Tomcat). This really seems like a bug in Tomcat and JBoss when you are trying to prevent session fixation attacks but the servlet spec (at least version 2.3) does not actually require the JSESSIONID to be defined or redefined according to any specific logic. Perhaps this has been cleaned up in later versions.
Resolving Session Fixation in JBoss
[ "java", "security", "jboss" ]
The `datepicker` function only works on the first input box that is created. I'm trying to duplicate a datepicker by cloning the `div` that is containing it. ``` <a href="#" id="dupMe">click</a> <div id="template"> input-text <input type="text" value="text1" id="txt" /> date time picker <input type="text" id="example" value="(add date)" /> </div> ``` To initialize the datepicker, according to the [jQuery UI documentation](http://docs.jquery.com/UI/Datepicker) I only have to do `$('#example').datepicker();` and it does work, but only on the first datepicker that is created. The code to duplicate the `div` is the following: ``` $("a#dupMe").click(function(event){ event.preventDefault(); i++; var a = $("#template") .clone(true) .insertBefore("#template") .hide() .fadeIn(1000); a.find("input#txt").attr('value', i); a.find("input#example").datepicker(); }); ``` The strangest thing is that on the `document.ready` I have: ``` $('#template #example').datepicker(); $("#template #txt").click(function() { alert($(this).val()); }); ``` and if I click on the `#txt` it always works.
I'd recommend just using a common class name as well. However, if you're against this for some reason, you could also write a function to create date pickers for all text boxes in your template `div` (to be called after each duplication). Something like: ``` function makeDatePickers() { $("#template input[type=text]").datepicker(); } ```
I use a CSS class instead: ``` <input type="text" id="BeginDate" class="calendar" /> <input type="text" id="EndDate" class="calendar" /> ``` Then, in your `document.ready` function: ``` $('.calendar').datepicker(); ``` Using it that way for multiple calendar fields works for me.
duplicating jQuery datepicker
[ "javascript", "jquery" ]
A researcher has created a small simulation in MATLAB and we want to make it accessible to others. My plan is to take the simulation, clean up a few things and turn it into a set of functions. Then I plan to compile it into a C library and use [SWIG](https://en.wikipedia.org/wiki/SWIG) to create a Python wrapper. At that point, I should be able to call the simulation from a small Django application. At least I hope so. Do I have the right plan? Are there are any serious pitfalls that I'm not aware of at the moment?
I remember that I was able to wrap a MATLAB simulation into a [DLL file](https://en.wikipedia.org/wiki/Dynamic-link_library) and then call it from a [Delphi](https://en.wikipedia.org/wiki/Embarcadero_Delphi) application. It worked really well.
One thing to remember is that the MATLAB compiler does not actually compile the MATLAB code into native machine instructions. It simply wraps it into a stand-alone executable or a library with its own runtime engine that runs it. You would be able to run your code without MATLAB installed, and you would be able to interface it with other languages, but it will still be interpreted MATLAB code, so there would be no speedup. Matlab Coder, on the other hand, is the thing that can generate C code from Matlab. There are some limitations, though. Not all Matlab functions are supported for code generation, and there are things you cannot do, like change the type of a variable on the fly.
Creating a shared library in MATLAB
[ "python", "c", "matlab" ]
This should be fine seeing as the CLR hasn't actually changed? The boxes running the C# 2.0 code **have** had .NET 3.5 rolled out. The background is that we have a windows service (.NET 2.0 exe built with VS2005, deployed to ~150 servers) that dynamically loads assemblies (almost like plug-ins) to complete various work items asked of it. Whenever we roll out a new version of the bus logic, we just drop the assemblies on an FTP server and the windows service knows how to check for, grab and store the latest versions. New assemblies are now built using VS2008 and targetting .NET 2.0, we know that works ok. However we'd like to start taking advantage of C# 3.0 language features such as LINQ and targetting the assemblies against .NET 3.5 without having to build and deploy a new version of the windows service.
C#3 and .Net 3.5 add new assemblies, but the IL is unchanged. This means that with .Net 2 assemblies you can compile and use C#3, as long as you don't use Linq or anything else that references System.Linq or System.Core. `yield`, `var`, lambda syntax, anon types and initialisers are all compiler cleverness. The IL they produce is cross-compatible. If you can reference the new assemblies for 3.5 it should all just work. There is no new version of ASP.Net - it should still be 2.0.50727 - but you should still compile for 3.5.
> yield, var, lambda syntax, anon types > and initialisers are all compiler > cleverness. The IL they produce is > cross-compatible. Minor nit-picking point, but yield was a 2.0 feature anyway.
C# 2.0 code consuming assemblies compiled with C# 3.0
[ "c#", ".net", ".net-3.5" ]
I'm looking to add a testing suite to my application, however I can't move to the newer testing frameworks for .NET 3.5. Does anyone have a suggestion about good testing frameworks to use?
We use [MbUnit](http://www.mbunit.com/) and [Rhino Mocks](http://www.ayende.com/Blog/archive/2007/03/28/Rhino-Mocks-3.0-Released.aspx) and they prove to work very well together. When doing TDD you will almost certainly need to do some form of dependency injection. While this can be done manually, it's worth looking at an IoC container such as [Castle Windsor](http://www.castleproject.org/projects/windsor/). It's well worth looking at John Paul Bodhood's screencasts to get you started. [JPB's Blog](http://www.jpboodhoo.com/)
NUnit and Rhino Mocks suit each other well, and the [auto-mocking container](http://www.ayende.com/Blog/archive/2007/06/08/The-Auto-Mocking-Container.aspx) might be of interest. If you're looking at BDD too, then [NBehave](http://nbehave.org/) is probably a good choice. If, however, you just mean the style of BDD that relates to unit testing (xSpec), you can get away without adding a framework (though things like [specunit](http://code.google.com/p/specunit-net/) do add some syntactic sugar); [MSpec](http://codebetter.com/blogs/aaron.jensen/archive/2008/05/08/introducing-machine-specifications-or-mspec-for-short.aspx) is also interesting.
What's the best way to implement BDD/TDD in .NET 2.0?
[ "c#", ".net", "testing", "tdd", "bdd" ]
Imagine an object you are working with has a collection of other objects associated with it, for example, the Controls collection on a WinForm. You want to check for a certain object in the collection, but the collection doesn't have a `Contains()` method. There are several ways of dealing with this. * Implement your own `Contains()` method by looping through all items in the collection to see if one of them is what you are looking for. This seems to be the "best practice" approach. * I recently came across some code where instead of a loop, there was an attempt to access the object inside a try statement, as follows: > ``` > try > { > Object aObject = myCollection[myObject]; > } > catch(Exception e) > { > //if this is thrown, then the object doesn't exist in the collection > } > ``` My question is how poor of a programming practice do you consider the second option to be, and why? How does its performance compare to a loop through the collection?
I would have to say that this is pretty bad practice. Whilst some people might be happy to say that looping through the collection is less efficient than throwing an exception, there is an overhead to throwing an exception. I would also question why you are using a collection to access an item by key when you would be better suited to using a dictionary or hashtable.

My main problem with this code, however, is that regardless of the type of exception thrown, you are always going to be left with the same result. For example, an exception could be thrown because the object doesn't exist in the collection, or because the collection itself is null, or because you can't cast `myCollection[myObject]` to `aObject`. All of these exceptions will get handled in the same way, which may not be your intention.

These are a couple of nice articles on when and where it is usually considered acceptable to throw exceptions:

* [Foundations of Programming](http://codebetter.com/blogs/karlseguin/archive/2008/05/29/foundations-of-programming-pt-8-back-to-basics-exceptions.aspx)
* [Throwing exceptions in c#](http://www.blackwasp.co.uk/CSharpThrowingExceptions.aspx)

I particularly like this quote from the second article:

> It is important that exceptions are thrown only when an unexpected or invalid activity occurs that prevents a method from completing its normal function. Exception handling introduces a small overhead and lowers performance so should not be used for normal program flow instead of conditional processing. It can also be difficult to maintain code that misuses exception handling in this way.
The general rule of thumb is to avoid using exceptions for control flow unless the circumstances that will trigger the exception are "exceptional" -- e.g., extremely rare! If this is something that will happen normally and regularly it definitely should not be handled as an exception. Exceptions are very, very slow due to all the overhead involved, so there can be performance reasons as well, if it's happening often enough.
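The overhead claim is easy to demonstrate with a quick timing sketch (shown in Python purely for illustration of the general principle, not as .NET-specific evidence): a lookup miss handled by catching an exception versus one handled by a containment check.

```python
import timeit

d = {i: i for i in range(100)}

def with_exception():
    # Handle the miss by catching the exception -- exercises the
    # exception machinery on every call.
    try:
        return d[-1]
    except KeyError:
        return None

def with_check():
    # Check membership first; no exception machinery involved.
    if -1 in d:
        return d[-1]
    return None

t_exc = timeit.timeit(with_exception, number=100_000)
t_chk = timeit.timeit(with_check, number=100_000)
print(f"exception path: {t_exc:.3f}s, check path: {t_chk:.3f}s")
```

On a typical CPython build the exception path is several times slower for misses, which is the point the answer above is making: exceptions are fine when they are exceptional, but expensive as routine control flow.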
Using unhandled exceptions instead of Contains()?
[ "", "c#", ".net", "error-handling", "" ]
I'm looking to create `favicon.ico` files programmatically from Python, but PIL only has support for reading `ico` files.
According to [Wikipedia](http://en.wikipedia.org/wiki/Favicon) modern browsers can handle favicons in PNG format, so maybe you could just generate that? Alternatively the [ICO article](http://en.wikipedia.org/wiki/ICO_(icon_image_file_format)) describes the format...
You can use [Pillow](http://pillow.readthedocs.org):

```
from PIL import Image

filename = r'logo.png'
img = Image.open(filename)
img.save('logo.ico')
```

Optionally, you may specify the icon sizes you want:

```
icon_sizes = [(16, 16), (32, 32), (48, 48), (64, 64)]
img.save('logo.ico', sizes=icon_sizes)
```

The [Pillow docs](http://pillow.readthedocs.org/en/3.1.x/handbook/image-file-formats.html) say that by default it will generate sizes `[(16, 16), (24, 24), (32, 32), (48, 48), (64, 64), (128, 128), (255, 255)]` and any size bigger than the original size or 255 will be ignored.

Yes, it is in the **Read-only** section of the docs, but it works to some extent.
Is there a Python library for generating .ico files?
[ "", "python", "favicon", "" ]
I'm trying to fix some JavaScript bugs. Firebug makes debugging these issues a lot easier when working in Firefox, but what do you do when the code works fine on Firefox but IE is complaining?
You can also check out the [IE Developer Toolbar](http://www.microsoft.com/downloads/en/details.aspx?FamilyID=95e06cbe-4940-4218-b75d-b8856fced535), which isn't a debugger but will help you analyze the contents of your code. [Visual Studio](http://weblogs.asp.net/scottgu/archive/2007/07/19/vs-2008-javascript-debugging.aspx) will help with the debugging. [Fiddler](http://www.fiddlertool.com/fiddler/) should help you analyse the traffic travelling to and from your browser.
You can try [Firebug Lite](http://getfirebug.com/lite.html) or use Visual Studio to debug the JavaScript.
Is there something like "Firebug for IE" (for debugging JavaScript)?
[ "", "javascript", "debugging", "internet-explorer", "firebug", "javascript-debugger", "" ]
Could someone write-up a step by step guide to developing a C++ based plugin for FireFox on Windows? The links and examples on <http://www.mozilla.org/projects/plugins/> are all old and inaccurate - the "NEW" link was added to the page in 2004. The example could be anything, but I was thinking a plugin that lets JavaScript set the name and then displays "Hello {Name}". To show 2-way communication, it could have a property that returns the full salutation. Though not as important, it would be nice if the plugin would work in Chrome too.
See also <http://developer.mozilla.org/en/Plugins> . And yes, NPAPI plugins should work in Google Chrome as well. [edit 2015: Chrome removes support for NPAPI soon <http://blog.chromium.org/2014/11/the-final-countdown-for-npapi.html> ]
If you need something that works cross-browser (firefox and ie), you could look at firebreath: <http://www.firebreath.org> For general "how to build a npapi plugin on windows" information, I have a few blog posts on the subject (linked to from some of the above sources as well) <http://colonelpanic.net/2009/03/building-a-firefox-plugin-part-one/> I really recommend firebreath, however, since we created it exactly for people who don't have time to do the months (literally) of research that it took us to figure out how it all works. If you don't want to use it as a basis for your plugin, though, you can still find a lot of good example code there. should work on chrome, firefox, and safari on windows too! =] good luck!
How to write a C++ FireFox 3 plugin (not extension) on Windows?
[ "", "c++", "windows", "firefox", "plugins", "" ]
I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files. Is there any python module which can do a regex match on the files without unzipping it. Is there a simple way to solve this problem without unzipping?
There's nothing that will automatically do what you want. However, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file.

```
#!/usr/bin/python
import zipfile
f = zipfile.ZipFile('myfile.zip')
for subfile in f.namelist():
    print subfile
    data = f.read(subfile)
    for line in data.split('\n'):
        print line
```
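Building on the `zipfile` approach, here is a sketch (Python 3; the file names and model list are made up for illustration) that addresses the original question directly: counting model-name mentions across all archive members with one combined regex, without extracting anything to disk.

```python
import io
import re
import zipfile
from collections import Counter

def count_mentions(zip_source, models):
    """Count how often each model name appears across all members of a zip.

    zip_source: a path or file-like object holding the archive.
    models: list of model-name strings to search for.
    """
    # One alternation-based regex scans each file once, instead of
    # running 500 separate searches per file.
    pattern = re.compile("|".join(re.escape(m) for m in models))
    counts = Counter()
    with zipfile.ZipFile(zip_source) as zf:
        for name in zf.namelist():
            # Members are decompressed in memory, never written to disk.
            text = zf.read(name).decode("utf-8", errors="ignore")
            counts.update(pattern.findall(text))
    return counts

# Demo with an in-memory archive so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "I love my Nokia N95, better than the iPhone")
    zf.writestr("b.txt", "Nokia N95 review")

counts = count_mentions(buf, ["Nokia N95", "iPhone"])
print(counts)
```

Note that the files are still decompressed (in memory) before matching; there is no practical way to regex-match the compressed bytes themselves.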
Isn't it (at least theoretically) possible, to read in the ZIP's Huffman coding and then translate the regexp into the Huffman code? Might this be more efficient than first de-compressing the data, then running the regexp? (Note: I know it wouldn't be quite that simple: you'd also have to deal with other aspects of the ZIP coding—file layout, block structures, back-references—but one imagines this could be fairly lightweight.) EDIT: Also note that it's probably much more sensible to just use the `zipfile` solution.
Is there a python module for regex matching in zip files
[ "", "python", "regex", "zip", "text-processing", "" ]
I currently use a DataTable to get results from a database which I can use in my code. However, many example on the web show using a DataSet instead and accessing the table(s) through the collections method. Is there any advantage, performance wise or otherwise, of using DataSets or DataTables as a storage method for SQL results?
It really depends on the sort of data you're bringing back. Since a DataSet is (in effect) just a collection of DataTable objects, you can return multiple distinct sets of data into a single, and therefore more manageable, object. Performance-wise, you're more likely to get inefficiency from unoptimized queries than from the "wrong" choice of .NET construct. At least, that's been my experience.
One major difference is that DataSets can hold multiple tables and you can define relationships between those tables. If you are only returning a single result set though I would think a DataTable would be more optimized. I would think there has to be some overhead (granted small) to offer the functionality a DataSet does and keep track of multiple DataTables.
Datatable vs Dataset
[ "", "c#", "dataset", "datatable", "" ]
I just saw this mentioned in Stack Overflow question *[Best WYSIWYG CSS editor](http://web.archive.org/web/20090503103538/http://stackoverflow.com:80/questions/7975/best-css-editor)* and didn't know it could be done. I'm a Visual Studio newbie, so how do you do it? Is there a separate debugger for JavaScript? I know how to work the one for code-behind pages... I usually use Firebug to deal with debugging JavaScript code. I'm using [Visual Studio 2005](http://en.wikipedia.org/wiki/Microsoft_Visual_Studio#Visual_Studio_2005).
I prefer using [Firebug](http://en.wikipedia.org/wiki/Firebug_%28software%29) for projects I can't use [Visual Studio 2008](http://en.wikipedia.org/wiki/Microsoft_Visual_Studio#Visual_Studio_2008) on.
To debug in Visual Studio 2005, make sure that "disable script debugging" is unchecked. Then load your webpage in Internet Explorer. From the debug menu inside of Visual Studio 2005, select "Attach to process" and pick the instance of Internet Explorer that has your web page loaded. Alternatively, the [Firebug](http://en.wikipedia.org/wiki/Firebug_%28software%29) team has been working on a "lite" version that you can include either as a script in your page or by launching it via a bookmarklet from your browser. It doesn't provide the full debugger that Firebug does, but it gives you a console and a command line from which you can inspect variables and log things to the console.
How do I debug JavaScript in Visual Studio 2005?
[ "", "javascript", "visual-studio", "visual-studio-2005", "debugging", "" ]
I would like to create a database backed interactive AJAX webapp which has a custom (specific kind of events, editing) calendaring system. This would involve quite a lot of JavaScript and AJAX, and I thought about Google Web Toolkit for the interface and Ruby on Rails for server side. Is Google Web Toolkit reliable and good? What hidden risks might be if I choose Google Web Toolkit? Can one easily combine it with Ruby on Rails on server side? Or should I try to use directly a JavaScript library like jQuery? I have no experience in web development except some HTML, but I am an experienced programmer (c++, java, c#), and I would like to use only free tools for this project.
RoR is actually one of the things the GWT is made to work well with, as long as you're using REST properly. It's in the Google Web Toolkit Applications book, and you can see a demo from the book using this kind of idea [here](http://www.gwtapps.com/?page_id=33). That's not to say that you won't have any problems, but I think the support is definitely out there for it. There's a neat project for making RoR/GWT easy that you can find [here](http://code.google.com/p/gwt-on-rails/) (MIT license). I haven't had a chance to try it out yet, but it looks like a good amount of thought has been put into it. One catch is that it looks like it hasn't been fully tested with 2.1 Rails yet, just 2.0, so you may run into a few (probably minor and fixable) errors.
If you are looking to integrate GWT with non-Java backends such as ROR, PHP etc., you should bear in mind that GWT 1.5 now supports JavaScript Overlay types. This feature lets you write classes that can be mapped over the top of native JavaScript objects to easily provide accessor methods for properties of those objects and other extended functionality. See this link for more details: [JavaScript Overlay Types](http://googlewebtoolkit.blogspot.com/2008/08/getting-to-really-know-gwt-part-2.html) So you could return JSON encoded data from your backend via AJAX calls, parse it into a JavaScript Object and then access the data through your GWT Java code using the overlay classes you've created. Or when you render your page you can render static config data as JavaScript Objects and read it in via this mechanism, rather than having to do an AJAX call to grab the data.
Should I use Google Web Toolkit for my new webapp?
[ "", "javascript", "ruby", "gwt", "" ]
From time to time I see an enum like the following: ``` [Flags] public enum Options { None = 0, Option1 = 1, Option2 = 2, Option3 = 4, Option4 = 8 } ``` I don't understand what exactly the `[Flags]` attribute does. Anyone have a good explanation or example they could post?
The `[Flags]` attribute should be used whenever the enumerable represents a collection of possible values, rather than a single value. Such collections are often used with bitwise operators, for example:

```
var allowedColors = MyColor.Red | MyColor.Green | MyColor.Blue;
```

Note that the `[Flags]` attribute **doesn't** enable this by itself - all it does is allow a nice representation by the `.ToString()` method:

```
enum Suits { Spades = 1, Clubs = 2, Diamonds = 4, Hearts = 8 }
[Flags] enum SuitsFlags { Spades = 1, Clubs = 2, Diamonds = 4, Hearts = 8 }

...

var str1 = (Suits.Spades | Suits.Diamonds).ToString();
// "5"
var str2 = (SuitsFlags.Spades | SuitsFlags.Diamonds).ToString();
// "Spades, Diamonds"
```

It is also important to note that `[Flags]` **does not** automatically make the enum values powers of two. If you omit the numeric values, the enum will not work as one might expect in bitwise operations, because by default the values start with 0 and increment.

Incorrect declaration:

```
[Flags]
public enum MyColors
{
    Yellow, // 0
    Green,  // 1
    Red,    // 2
    Blue    // 3
}
```

The values, if declared this way, will be Yellow = 0, Green = 1, Red = 2, Blue = 3. This will render it useless as flags.

Here's an example of a correct declaration:

```
[Flags]
public enum MyColors
{
    Yellow = 1,
    Green = 2,
    Red = 4,
    Blue = 8
}
```

To retrieve the distinct values in your property, one can do this:

```
if (myProperties.AllowedColors.HasFlag(MyColor.Yellow))
{
    // Yellow is allowed...
}
```

or prior to .NET 4:

```
if((myProperties.AllowedColors & MyColor.Yellow) == MyColor.Yellow)
{
    // Yellow is allowed...
}

if((myProperties.AllowedColors & MyColor.Green) == MyColor.Green)
{
    // Green is allowed...
}
```

**Under the covers**

This works because you used powers of two in your enumeration.
Under the covers, your enumeration values look like this in binary ones and zeros:

```
Yellow: 00000001
Green:  00000010
Red:    00000100
Blue:   00001000
```

Similarly, after you've set your property *AllowedColors* to Red, Green and Blue using the binary bitwise OR `|` operator, *AllowedColors* looks like this:

```
myProperties.AllowedColors: 00001110
```

So when you retrieve the value you are actually performing bitwise AND `&` on the values:

```
myProperties.AllowedColors: 00001110
             MyColor.Green: 00000010
             -----------------------
                            00000010  // Hey, this is the same as MyColor.Green!
```

**The None = 0 value**

And regarding the use of `0` in your enumeration, quoting from MSDN:

```
[Flags]
public enum MyColors
{
    None = 0,
    ....
}
```

> Use None as the name of the flag enumerated constant whose value is zero. **You cannot use the None enumerated constant in a bitwise AND operation to test for a flag because the result is always zero.** However, you can perform a logical, not a bitwise, comparison between the numeric value and the None enumerated constant to determine whether any bits in the numeric value are set.

You can find more info about the flags attribute and its usage at [msdn](http://msdn.microsoft.com/en-us/library/system.flagsattribute.aspx) and [designing flags at msdn](http://msdn.microsoft.com/en-us/library/ms229062.aspx)
You can also do this

```
[Flags]
public enum MyEnum
{
    None   = 0,
    First  = 1 << 0,
    Second = 1 << 1,
    Third  = 1 << 2,
    Fourth = 1 << 3
}
```

I find the bit-shifting easier than typing 4, 8, 16, 32 and so on. It has no impact on your code because it's all done at compile time.
What does the [Flags] Enum Attribute mean in C#?
[ "", "c#", "enums", "flags", "" ]
I want to create an allocator which provides memory with the following attributes:

* cannot be paged to disk.
* is incredibly hard to access through an attached debugger

The idea is that this will contain sensitive information (like licence information) which should be inaccessible to the user. I have done the usual research online and asked a few other people about this, but I cannot find a good place to start on this problem.

**Updates**

[Josh](https://stackoverflow.com/questions/8451/secure-memory-allocator-in-c#27194) mentions using `VirtualAlloc` to set protection on the memory space. I have created a custom allocator (shown below). I have found that using the `VirtualLock` function limits the amount of memory I can allocate. This seems to be by design, though. Since I am using it for small objects this is not a problem.

```
//
template<class _Ty>
class LockedVirtualMemAllocator : public std::allocator<_Ty>
{
public:
    template<class _Other>
    LockedVirtualMemAllocator<_Ty>& operator=(const LockedVirtualMemAllocator<_Other>&)
    {   // assign from a related LockedVirtualMemAllocator (do nothing)
        return (*this);
    }

    template<class Other>
    struct rebind {
        typedef LockedVirtualMemAllocator<Other> other;
    };

    pointer allocate( size_type _n )
    {
        SIZE_T allocLen = (_n * sizeof(_Ty));
        DWORD allocType = MEM_COMMIT;
        DWORD allocProtect = PAGE_READWRITE;
        LPVOID pMem = ::VirtualAlloc( NULL, allocLen, allocType, allocProtect );
        if ( pMem != NULL ) {
            ::VirtualLock( pMem, allocLen );
        }
        return reinterpret_cast<pointer>( pMem );
    }

    pointer allocate( size_type _n, const void* )
    {
        return allocate( _n );
    }

    void deallocate(void* _pPtr, size_type _n )
    {
        if ( _pPtr != NULL ) {
            SIZE_T allocLen = (_n * sizeof(_Ty));
            ::SecureZeroMemory( _pPtr, allocLen );
            ::VirtualUnlock( _pPtr, allocLen );
            ::VirtualFree( _pPtr, 0, MEM_RELEASE );
        }
    }
};
```

and is used

```
//a memory safe std::string
typedef std::basic_string<char, std::char_traits<char>,
                          LockedVirtualMemAllocator<char> > modulestring_t;
```

[Ted Percival](https://stackoverflow.com/questions/8451/secure-memory-allocator-in-c#38708) mentions `mlock`, but I have no implementation of that yet.

I found [Practical Cryptography by Niels Ferguson and Bruce Schneier](http://www.schneier.com/book-practical.html) quite helpful as well.
You can't really protect against memory access. You can probably prevent paging if you are running as an admin or as the system, but you cannot prevent the admin or system from reading your memory. Even if you could somehow completely block other processes from reading your memory (which you can't), another process could still actually inject a new thread into your process and read the memory that way. Even if you could somehow completely lock down your process and guarantee that the OS would *never* allow anyone else to access your process, you still don't have full protection. The entire OS could be running in a virtual machine, which could be paused and inspected at any time. You *cannot* protect memory contents from the owner of the system. Hollywood and the music industry have been aching for this for years. If it were possible, they'd already be doing it.
On Unix systems you can use [mlock(2)](http://linux.die.net/man/2/mlock) to lock memory pages into RAM, preventing them being paged. > mlock() and mlockall() respectively lock part or all of the calling > process’s virtual address space into RAM, preventing that memory from > being paged to the swap area. There is a limit to how much memory each process can lock, it can be shown with `ulimit -l` and is measured in kilobytes. On my system, the default limit is 32 kiB per process.
Secure Memory Allocator in C++
[ "", "c++", "security", "memory", "ram-scraping", "" ]
I have an object that is mapped to a cookie as a serialized base-64 string. I only want to write out a new cookie if there are changes made to the object stored in the cookie on server-side. What I want to do is get a hash code when the object is pulled from the cookie/initialized and compare the original hash code to the hash code that exists just before I send the cookie header off to the client to ensure I don't have to re-serialize/send the cookie unless changes were made. I was going to override the .NET's `Object.GetHashCode()` method, but I wasn't sure that this is the best way to go about checking if an object is modified. Are there any other ways I can check if an object is modified, or should I override the `GetHashCode()` method. **Update** I decided to accept @rmbarnes's answer as it had an interesting solution to the problem, and because I decided to use his advice at the end of his post and not check for modification. I'd still be interested to hear any other solutions anyone may have to my scenario however.
At the end of the object's constructor you could serialize the object to a base 64 string just like the cookie stores it, and store this in a member variable.

When you want to check if the cookie needs recreating, re-serialize the object and compare this new base 64 string against the one stored in a member variable. If it has changed, reset the cookie with the new value.

Watch out for the gotcha - don't include the member variable storing the base 64 serialization in the serialization itself. I presume your language uses something like a sleep() function (which is how PHP does it) to serialize itself, so just make sure the member is not included in that function.

This will always work because you are comparing the exact value you'd be saving in the cookie, and wouldn't need to override GetHashCode(), which sounds like it could have nasty consequences.

All that said, I'd probably just drop the test and always reset the cookie; there can't be that much overhead in it when compared to doing the change check, and far less likelihood of bugs.
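The snapshot-and-compare idea above can be sketched like this (shown in Python rather than .NET purely to illustrate the pattern; the class and field names are invented):

```python
import json

class CookieBackedObject:
    def __init__(self, name, count):
        self.name = name
        self.count = count
        # Snapshot the serialized state at load/construction time.
        # Note the snapshot itself is NOT part of the serialization --
        # this is the "gotcha" mentioned above.
        self._baseline = self._serialize()

    def _serialize(self):
        # Deterministic serialization of exactly the fields that would
        # go into the cookie (sort_keys keeps the output stable).
        return json.dumps({"name": self.name, "count": self.count},
                          sort_keys=True)

    def is_modified(self):
        # Re-serialize and compare against the snapshot; only rewrite
        # the cookie when this returns True.
        return self._serialize() != self._baseline

obj = CookieBackedObject("session", 1)
print(obj.is_modified())  # False: nothing changed yet
obj.count += 1
print(obj.is_modified())  # True: state differs from the snapshot
```

Because the comparison is on the exact serialized form, it avoids the hash-collision and Equals()-coverage concerns raised in the other answer.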
GetHashCode() should always be in sync with Equals(), and Equals() isn't necessarily guaranteed to check for *all* of the fields in your object (there's certain situations where you want that to not be the case). Furthermore, GetHashCode() isn't guaranteed to return unique values for all possible object states. It's conceivable (though unlikely) that two object states could result in the same HashCode (which does, after all, only have an int's worth of possible states; see [the Pigeonhole Principle](http://en.wikipedia.org/wiki/Pigeonhole_principle) for more details). If you can ensure that Equals() checks all of the appropriate fields, then you could possibly clone the object to record its state and then check it with Equals() against the new state to see if its changed. BTW: Your mention of serialization gave me an idea. You could serialize the object, record it, and then when you check for object changing, repeat the process and compare the serialized values. That would let you check for state changes without having to make any code changes to your object. However, this isn't a great solution, because: 1. It's probably very inefficient 2. It's prone to serialization changes in the object; you might get false positives on the object state change.
What is the best way to tell if an object is modified?
[ "", "c#", ".net", "" ]
How can I use the Prototype library and create unobtrusive javascript to inject the onmouseover and onmouseout events to each row, rather than putting the javascript in each table row tag? An answer utilizing the Prototype library (instead of mootools, jQuery, etc) would be most helpful.
```
<table id="mytable">
    <tbody>
        <tr><td>Foo</td><td>Bar</td></tr>
        <tr><td>Bork</td><td>Bork</td></tr>
    </tbody>
</table>

<script type="text/javascript">
    $$('#mytable tr').each(function(item) {
        item.observe('mouseover', function() {
            item.setStyle({ backgroundColor: '#ddd' });
        });
        item.observe('mouseout', function() {
            item.setStyle({ backgroundColor: '#fff' });
        });
    });
</script>
```
You can use Prototype's `addClassName` and `removeClassName` methods. Create a CSS class "hilight" that you'll apply to the hilighted `<tr>`'s. Then run this code on page load:

```
var rows = $$('tbody tr');
for (var i = 0; i < rows.length; i++) {
    rows[i].onmouseover = function() { $(this).addClassName('hilight'); }
    rows[i].onmouseout = function() { $(this).removeClassName('hilight'); }
}
```
How can I highlight a table row using Prototype?
[ "", "javascript", "ajax", "prototypejs", "" ]
Is it possible to write a `doctest` unit test that will check that an exception is raised? For example, if I have a function `foo(x)` that is supposed to raise an exception if `x < 0`, how would I write the `doctest` for that?
Yes. You can do it. The [doctest module documentation](https://docs.python.org/3/library/doctest.html) and Wikipedia have an [example](http://en.wikipedia.org/wiki/Doctest#Example_2:_doctests_embedded_in_a_README.txt_file) of it.

```
>>> x
Traceback (most recent call last):
  ...
NameError: name 'x' is not defined
```
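Applied to the question's `foo(x)`, a complete runnable example might look like this (the function body and error message are illustrative):

```python
def foo(x):
    """Return x unchanged, rejecting negative input.

    >>> foo(4)
    4
    >>> foo(-1)
    Traceback (most recent call last):
        ...
    ValueError: x must be non-negative
    """
    if x < 0:
        raise ValueError("x must be non-negative")
    return x

import doctest

# testmod returns TestResults(failed, attempted); both doctest
# examples above -- including the exception one -- should pass.
failures, attempted = doctest.testmod(verbose=False)
print(f"{attempted} examples run, {failures} failures")
```

The `...` line stands in for the traceback stack, which doctest ignores by default; only the final `ValueError: ...` line has to match.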
```
>>> scope  # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
NameError: name 'scope' is not defined
```

Don't know why the previous answers don't have the `IGNORE_EXCEPTION_DETAIL`. I need this for it to work. Python version: 3.7.3.
Can you check that an exception is thrown with doctest in Python?
[ "", "python", "doctest", "" ]
Let's say I have a drive such as **C:\**, and I want to find out if it's shared and what it's share name (e.g. **C$**) is. To find out if it's shared, I can use [NetShareCheck](https://learn.microsoft.com/en-us/windows/desktop/api/Lmshare/nf-lmshare-netsharecheck). How do I then map the drive to its share name? I thought that [NetShareGetInfo](https://learn.microsoft.com/en-us/windows/desktop/api/Lmshare/nf-lmshare-netsharegetinfo) would do it, but it looks like that takes the share name, not the local drive name, as an input.
If all else fails, you could always use [NetShareEnum](https://learn.microsoft.com/en-us/windows/win32/api/lmshare/nf-lmshare-netshareenum) and call [NetShareGetInfo](https://learn.microsoft.com/windows/desktop/api/lmshare/nf-lmshare-netsharegetinfo) on each.
I believe you're looking for [WNetGetConnectionA](https://learn.microsoft.com/en-us/windows/win32/api/winnetwk/nf-winnetwk-wnetgetconnectiona) or [WNetGetConnectionW](https://learn.microsoft.com/en-us/windows/win32/api/winnetwk/nf-winnetwk-wnetgetconnectionw).
Windows/C++: How do I determine the share name associated with a shared drive?
[ "", "c++", "windows", "networking", "share", "" ]
Every method I write to encode a string in Java using 3DES can't be decrypted back to the original string. Does anyone have a simple code snippet that can just encode and then decode the string back to the original string? I know I'm making a very silly mistake somewhere in this code. Here's what I've been working with so far: \*\* note, I am not returning the BASE64 text from the encrypt method, and I am not base64 un-encoding in the decrypt method because I was trying to see if I was making a mistake in the BASE64 part of the puzzle. ``` public class TripleDESTest { public static void main(String[] args) { String text = "kyle boon"; byte[] codedtext = new TripleDESTest().encrypt(text); String decodedtext = new TripleDESTest().decrypt(codedtext); System.out.println(codedtext); System.out.println(decodedtext); } public byte[] encrypt(String message) { try { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9".getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, key, iv); final byte[] plainTextBytes = message.getBytes("utf-8"); final byte[] cipherText = cipher.doFinal(plainTextBytes); final String encodedCipherText = new sun.misc.BASE64Encoder().encode(cipherText); return cipherText; } catch (java.security.InvalidAlgorithmParameterException e) { System.out.println("Invalid Algorithm"); } catch (javax.crypto.NoSuchPaddingException e) { System.out.println("No Such Padding"); } catch (java.security.NoSuchAlgorithmException e) { System.out.println("No Such Algorithm"); } catch (java.security.InvalidKeyException e) { System.out.println("Invalid Key"); } catch (BadPaddingException e) { 
System.out.println("Invalid Key");} catch (IllegalBlockSizeException e) { System.out.println("Invalid Key");} catch (UnsupportedEncodingException e) { System.out.println("Invalid Key");} return null; } public String decrypt(byte[] message) { try { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9".getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher decipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); decipher.init(Cipher.DECRYPT_MODE, key, iv); //final byte[] encData = new sun.misc.BASE64Decoder().decodeBuffer(message); final byte[] plainText = decipher.doFinal(message); return plainText.toString(); } catch (java.security.InvalidAlgorithmParameterException e) { System.out.println("Invalid Algorithm"); } catch (javax.crypto.NoSuchPaddingException e) { System.out.println("No Such Padding"); } catch (java.security.NoSuchAlgorithmException e) { System.out.println("No Such Algorithm"); } catch (java.security.InvalidKeyException e) { System.out.println("Invalid Key"); } catch (BadPaddingException e) { System.out.println("Invalid Key");} catch (IllegalBlockSizeException e) { System.out.println("Invalid Key");} catch (UnsupportedEncodingException e) { System.out.println("Invalid Key");} catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } return null; } } ```
Your code was fine except for the Base 64 encoding bit (which you mentioned was a test), the reason the output may not have made sense is that you were displaying a raw byte array (doing toString() on a byte array returns its internal Java reference, not the String representation of the **contents**). Here's a version that's just a teeny bit cleaned up and which prints "kyle boon" as the decoded string: ``` import java.security.MessageDigest; import java.util.Arrays; import javax.crypto.Cipher; import javax.crypto.SecretKey; import javax.crypto.spec.IvParameterSpec; import javax.crypto.spec.SecretKeySpec; public class TripleDESTest { public static void main(String[] args) throws Exception { String text = "kyle boon"; byte[] codedtext = new TripleDESTest().encrypt(text); String decodedtext = new TripleDESTest().decrypt(codedtext); System.out.println(codedtext); // this is a byte array, you'll just see a reference to an array System.out.println(decodedtext); // This correctly shows "kyle boon" } public byte[] encrypt(String message) throws Exception { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9" .getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, key, iv); final byte[] plainTextBytes = message.getBytes("utf-8"); final byte[] cipherText = cipher.doFinal(plainTextBytes); // final String encodedCipherText = new sun.misc.BASE64Encoder() // .encode(cipherText); return cipherText; } public String decrypt(byte[] message) throws Exception { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9" .getBytes("utf-8")); final byte[] 
keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher decipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); decipher.init(Cipher.DECRYPT_MODE, key, iv); // final byte[] encData = new // sun.misc.BASE64Decoder().decodeBuffer(message); final byte[] plainText = decipher.doFinal(message); return new String(plainText, "UTF-8"); } } ```
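As a side note on the key handling in both snippets: the copy loop simply repeats the first 8 digest bytes, turning the 16-byte MD5 digest into a 24-byte two-key triple-DES key (K1 | K2 | K1). Here is the same derivation sketched in Python, using only the stdlib `hashlib` — shown for clarity, not as an endorsement of MD5-derived keys:

```python
import hashlib

def derive_3des_key(password: str) -> bytes:
    # MD5 gives 16 bytes; DESede wants 24, so the Java loop copies
    # bytes 0..7 to positions 16..23 -- equivalent to digest + digest[:8].
    digest = hashlib.md5(password.encode("utf-8")).digest()
    return digest + digest[:8]

key = derive_3des_key("HG58YZ3CR9")
```

Because the first and third 8-byte subkeys are identical, this is "2TDEA" (two-key triple DES) rather than a full three-key variant.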
Here is a solution using the **javax.crypto** library and the apache commons codec library for encoding and decoding in Base64: ``` import java.security.spec.KeySpec; import javax.crypto.Cipher; import javax.crypto.SecretKey; import javax.crypto.SecretKeyFactory; import javax.crypto.spec.DESedeKeySpec; import org.apache.commons.codec.binary.Base64; public class TrippleDes { private static final String UNICODE_FORMAT = "UTF8"; public static final String DESEDE_ENCRYPTION_SCHEME = "DESede"; private KeySpec ks; private SecretKeyFactory skf; private Cipher cipher; byte[] arrayBytes; private String myEncryptionKey; private String myEncryptionScheme; SecretKey key; public TrippleDes() throws Exception { myEncryptionKey = "ThisIsSpartaThisIsSparta"; myEncryptionScheme = DESEDE_ENCRYPTION_SCHEME; arrayBytes = myEncryptionKey.getBytes(UNICODE_FORMAT); ks = new DESedeKeySpec(arrayBytes); skf = SecretKeyFactory.getInstance(myEncryptionScheme); cipher = Cipher.getInstance(myEncryptionScheme); key = skf.generateSecret(ks); } public String encrypt(String unencryptedString) { String encryptedString = null; try { cipher.init(Cipher.ENCRYPT_MODE, key); byte[] plainText = unencryptedString.getBytes(UNICODE_FORMAT); byte[] encryptedText = cipher.doFinal(plainText); encryptedString = new String(Base64.encodeBase64(encryptedText)); } catch (Exception e) { e.printStackTrace(); } return encryptedString; } public String decrypt(String encryptedString) { String decryptedText=null; try { cipher.init(Cipher.DECRYPT_MODE, key); byte[] encryptedText = Base64.decodeBase64(encryptedString); byte[] plainText = cipher.doFinal(encryptedText); decryptedText= new String(plainText); } catch (Exception e) { e.printStackTrace(); } return decryptedText; } public static void main(String args []) throws Exception { TrippleDes td= new TrippleDes(); String target="imparator"; String encrypted=td.encrypt(target); String decrypted=td.decrypt(encrypted); System.out.println("String To Encrypt: "+ target); 
System.out.println("Encrypted String:" + encrypted); System.out.println("Decrypted String:" + decrypted); } } ``` Running the above program results with the following output: ``` String To Encrypt: imparator Encrypted String:FdBNaYWfjpWN9eYghMpbRA== Decrypted String:imparator ```
How do I use 3DES encryption/decryption in Java?
[ "", "java", "encryption", "3des", "" ]
I'm displaying a set of images as an overlay using Google Maps. Displaying these images should be in an endless loop, but most browsers detect this and display a warning. Is there a way to make an endless loop in JavaScript so that it isn't stopped or warned against by the browser?
Try [setInterval](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setInterval) or [setTimeout](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/settimeout). Here is an example: ``` (show = (o) => setTimeout(() => { console.log(o) show(++o) }, 1000))(1); ``` ``` .as-console-wrapper { max-height: 100% !important; top: 0; } ```
You should use a timer to continuously bring new images instead of an infinite loop. Check the `setTimeout()` function. The caveat is that you should call it in a function that calls itself, for it to wait again. Example taken from [w3schools](http://www.w3schools.com/js/js_timing.asp): ``` var c = 0 var t; function timedCount() { document.getElementById('txt').value = c; c = c + 1; t = setTimeout("timedCount()", 1000); } ``` ``` <form> <input type="button" value="Start count!" onClick="timedCount()"> <input type="text" id="txt"> </form> ```
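The same reschedule-instead-of-loop pattern, sketched with Python's stdlib scheduler purely as an illustration (in the browser, `setTimeout` plays the role of `scheduler.enter`, returning control to the event loop between frames):

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)
frames = []

def show(i):
    # Each run queues the next one instead of blocking in a while-loop.
    frames.append(i)
    if i < 3:  # a real overlay rotation would reschedule unconditionally
        scheduler.enter(0.01, 1, show, (i + 1,))

scheduler.enter(0.01, 1, show, (1,))
scheduler.run()  # returns once nothing is left in the queue
```

The browser never sees a long-running script because each invocation finishes quickly and merely schedules its successor.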
Endless loop in JavaScript that does not trigger warning by browser
[ "", "javascript", "loops", "" ]
I have a bunch of classes I want to rename. Some of them have names that are small and that name is reused in other class names, where I don't want that name changed. Most of this lives in Python code, but we also have some XML code that references class names. Simple search and replace only gets me so far. In my case, I want to rename AdminAction to AdminActionPlug and AdminActionLogger to AdminActionLoggerPlug, so the first one's search-and-replace would also hit the second, wrongly. Does anyone have experience with Python refactoring tools? Bonus points if they can fix class names in the XML documents too.
In the meantime, I've tried two tools that have some sort of integration with vim. The first is [Rope](https://github.com/python-rope/rope), a python refactoring library that comes with a Vim (and emacs) plug-in. I tried it for a few renames, and that definitely worked as expected. It allowed me to preview the refactoring as a diff, which is nice. It is a bit text-driven, but that's alright for me, just takes longer to learn. The second is [Bicycle Repair Man](http://bicyclerepair.sourceforge.net/) which I guess wins points on name. Also plugs into vim and emacs. Haven't played much with it yet, but I remember trying it a long time ago. Haven't played with both enough yet, or tried more types of refactoring, but I will do some more hacking with them.
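Whichever tool ends up doing the real work, the collision the question describes (AdminAction being a prefix of AdminActionLogger) is also avoidable in a quick scripted pass with a negative lookahead; a minimal sketch using only the stdlib `re` module:

```python
import re

code = "class AdminAction: pass\nclass AdminActionLogger(AdminAction): pass"
# (?![A-Za-z0-9_]) refuses matches that continue into a longer identifier,
# so AdminActionLogger is left alone while bare AdminAction is renamed.
renamed = re.sub(r"AdminAction(?![A-Za-z0-9_])", "AdminActionPlug", code)
```

This is text-level only, of course — unlike rope it has no notion of scope, so it will also rewrite matches inside strings and comments.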
I would strongly recommend [PyCharm](https://www.jetbrains.com/pycharm/) - not just for refactorings. Since the first PyCharm answer was posted here a few years ago the refactoring support in PyCharm has improved significantly. [Python Refactorings available in PyCharm](https://www.jetbrains.com/pycharm/webhelp/refactoring-source-code.html) (last checked 2016/07/27 in PyCharm 2016.2) * Change Signature * Convert to Python Package/Module * Copy * Extract Refactorings * Inline * Invert Boolean * Make Top-Level Function * Move Refactorings * Push Members down * Pull Members up * Rename Refactorings * Safe Delete XML refactorings (I checked in context menu in an XML file): * Rename * Move * Copy * Extract Subquery as CTE * Inline Javascript refactorings: * Extract Parameter in JavaScript * Change Signature in JavaScript * Extract Variable in JavaScript
What refactoring tools do you use for Python?
[ "", "python", "refactoring", "" ]
If I call `finalize()` on an object from my program code, will the **JVM** still run the method again when the garbage collector processes this object? This would be an approximate example: ``` MyObject m = new MyObject(); m.finalize(); m = null; System.gc() ``` Would the explicit call to `finalize()` make the **JVM**'s garbage collector not to run the `finalize()` method on object `m`?
According to this simple test program, the JVM will still make its call to finalize() even if you explicitly called it: ``` private static class Blah { public void finalize() { System.out.println("finalizing!"); } } private static void f() throws Throwable { Blah blah = new Blah(); blah.finalize(); } public static void main(String[] args) throws Throwable { System.out.println("start"); f(); System.gc(); System.out.println("done"); } ``` The output is: > start > finalizing! > finalizing! > done Every resource out there says to never call finalize() explicitly, and pretty much never even implement the method because there are no guarantees as to if and when it will be called. You're better off just closing all of your resources manually.
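For contrast, Python's standard library made the opposite design choice here: a `weakref.finalize` callback runs exactly once, so an explicit call suppresses the one at collection time. A small CPython sketch (it relies on refcounting to collect promptly after `del`):

```python
import weakref

class Blah:
    pass

calls = []
obj = Blah()
fin = weakref.finalize(obj, calls.append, "finalizing!")

fin()       # explicit call, like blah.finalize() in the Java test above
del obj     # collection does NOT run the callback a second time
```

Unlike Java's `finalize()`, the callback here lives outside the object, which is one reason it can make the exactly-once guarantee.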
One must understand the garbage collector (GC) workflow to understand the function of finalize. Calling .finalize() will not invoke the garbage collector, nor will calling System.gc(). What finalize actually lets the coder do is declare the object's reference "unreferenced". The GC forces a suspension of the JVM's running operations, which creates a dent in performance. During a run, the GC traverses all referenced objects, starting from the root object (your main method call). This suspension time can be decreased by declaring objects unreferenced manually, because it cuts the cost of the automated run discovering that the reference is obsolete. By declaring finalize(), the coder marks the reference to the object obsolete, so on the next GC run the object is swept without extra operation time. Quote: "After the finalize method has been invoked for an object, no further action is taken until the Java virtual machine has again determined that there is no longer any means by which this object can be accessed by any thread that has not yet died, including possible actions by other objects or classes which are ready to be finalized, at which point the object may be discarded. " from the Java API doc on java.lang.Object.finalize(). For a detailed explanation, you can also check: [javabook.computerware](http://javabook.compuware.com/content/memory/how-garbage-collection-works.aspx)
Java and manually executing finalize
[ "", "java", "garbage-collection", "finalize", "" ]
I have a little game written in C#. It uses a database as back-end. It's a [trading card game](http://en.wikipedia.org/wiki/Collectible_card_game), and I wanted to implement the function of the cards as a script. What I mean is that I essentially have an interface, `ICard`, which a card class implements (`public class Card056: ICard`) and which contains a function that is called by the game. Now, to make the thing maintainable/moddable, I would like to have the class for each card as source code in the database and essentially compile it on first use. So when I have to add/change a card, I'll just add it to the database and tell my application to refresh, without needing any assembly deployment (especially since we would be talking about 1 assembly per card which means hundreds of assemblies). Is that possible? Register a class from a source file and then instantiate it, etc. ``` ICard Cards[current] = new MyGame.CardLibrary.Card056(); Cards[current].OnEnterPlay(ref currentGameState); ``` The language is C# but extra bonus if it's possible to write the script in any .NET language.
[Oleg Shilo's C# Script solution (at The Code Project](https://www.codeproject.com/Articles/8656/C-Script-The-Missing-Puzzle-Piece)) really is a great introduction to providing script abilities in your application. A different approach would be to consider a language that is specifically built for scripting, such as [IronRuby](https://en.wikipedia.org/wiki/IronRuby), [IronPython](https://en.wikipedia.org/wiki/IronPython), or [Lua](https://en.wikipedia.org/wiki/Lua_%28programming_language%29). IronPython and IronRuby are both available today. For a guide to embedding IronPython read [How to embed IronPython script support in your existing app in 10 easy steps](https://blogs.msdn.microsoft.com/jmstall/2005/09/01/how-to-embed-ironpython-script-support-in-your-existing-app-in-10-easy-steps/). Lua is a scripting language commonly used in games. There is a Lua compiler for .NET, available from CodePlex -- <http://www.codeplex.com/Nua> That codebase is a great read if you want to learn about building a compiler in .NET. A different angle altogether is to try [PowerShell](https://en.wikipedia.org/wiki/PowerShell). There are numerous examples of embedding PowerShell into an application -- here's a thorough project on the topic: [Powershell Tunnel](http://code.msdn.microsoft.com/PowerShellTunnel/Wiki/View.aspx?title=PowerShellTunnel%20Reference "PowerShell Tunnel")
You might be able to use IronRuby for that. Otherwise I'd suggest you have a directory where you place precompiled assemblies. Then you could have a reference in the DB to the assembly and class, and use reflection to load the proper assemblies at runtime. If you really want to compile at run-time you could use the CodeDOM, then you could use reflection to load the dynamic assembly. [Microsoft documentation article which might help](https://learn.microsoft.com/dotnet/api/microsoft.csharp.csharpcodeprovider).
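The compile-on-first-use flow the question asks for is what the scripting-language suggestions buy you out of the box; purely as an illustration of the pattern, here is the same flow in Python (the card source and all names are invented for the sketch):

```python
# Source text as it might come out of the database row for card 56.
card_source = """
class Card056:
    def on_enter_play(self, state):
        state["log"].append("card 056 entered play")
"""

# Compile the text once and execute it into a fresh namespace,
# then pull the class object out and instantiate it.
namespace = {}
exec(compile(card_source, "<card056>", "exec"), namespace)

card = namespace["Card056"]()
state = {"log": []}
card.on_enter_play(state)
```

The CodeDOM/reflection route in C# is the statically compiled analogue of this: compile the source into an in-memory assembly, then locate and instantiate the `ICard` implementation by name.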
Adding scripting functionality to .NET applications
[ "", "c#", ".net", "scripting", "compiler-construction", "" ]
I'm looking for a tool which can generate a [Makefile](https://en.wikipedia.org/wiki/Make_(software)#Makefile) for a C/C++ project for different compilers ([GCC](https://en.wikipedia.org/wiki/GNU_Compiler_Collection), [Microsoft Visual C++](https://en.wikipedia.org/wiki/Microsoft_Visual_C%2B%2B), [C++Builder](https://en.wikipedia.org/wiki/C%2B%2BBuilder), etc.) and different platforms (Windows, Linux, and Mac).
Other suggestions you may want to consider: * [Scons](http://www.scons.org/) is a cross-platform, cross-compiler build library, uses Python scripting for the build systems. Used in a variety of large projects, and performs very well. * If you're using [Qt](https://en.wikipedia.org/wiki/Qt_%28software%29), [QMake](https://doc.qt.io/qt-5/qmake-manual.html) is a nice build system too. * [CMake](https://cmake.org/) is also pretty sweet. * Finally, [if all else fails...](http://www.google.co.uk/search?q=cross+platform+build+system)
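To make the generator idea concrete: with CMake, one small input file drives every backend. A minimal sketch (the project name and source path are invented):

```cmake
cmake_minimum_required(VERSION 3.10)
project(demo CXX)
add_executable(demo src/main.cpp)
```

Running `cmake -G "Unix Makefiles"` against this emits a Makefile for GCC, while pointing a Visual Studio generator at the same file emits an MSVC solution — which is exactly the one-input, many-compilers behaviour the question asks for.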
Try [Automatic Makefile Generator](http://www.robertnz.net/genmake.htm#Details). It has support for the following compilers: * Borland 3.1 * Borland 5.0 * Borland 5.0, 16 bit * Borland 5.5 * Borland 5.6 * Borland 5.8 * CC * GNU g++ * GNU g++, dynamic library * Intel 5, 6, 7 for Linux * Intel 5, 6, 7 for Linux, dynamic library * Intel 5, 6, 7 for Windows * Intel 8,9,10 for Linux * Intel 8,9,10 for Linux, dynamic library * Intel 8,9 for Windows * Intel 10 for Windows * Visual C++ 5 * Visual C++ 6, 7, 7.1 * Visual C++ 8 * Open Watcom * Watcom 10A * Watcom 10A, 16 bit
Is there a good tool for Makefile generation?
[ "", "c++", "c", "cross-platform", "makefile", "" ]
I often have to sort a dictionary (consisting of keys & values) by value. For example, I have a hash of words and respective frequencies that I want to order by frequency. There is a `SortedList` which is good for a single value (say frequency), that I want to map back to the word. [SortedDictionary](http://msdn.microsoft.com/en-us/library/f7fta44c.aspx) orders by key, not value. Some resort to a [custom class](http://www.codeproject.com/KB/recipes/lookupcollection.aspx), but is there a cleaner way?
Use: ``` using System.Linq.Enumerable; ... List<KeyValuePair<string, string>> myList = aDictionary.ToList(); myList.Sort( delegate(KeyValuePair<string, string> pair1, KeyValuePair<string, string> pair2) { return pair1.Value.CompareTo(pair2.Value); } ); ``` Since you're targeting .NET 2.0 or above, you can simplify this into lambda syntax -- it's equivalent, but shorter. If you're targeting .NET 2.0 you can only use this syntax if you're using the compiler from Visual Studio 2008 (or above). ``` var myList = aDictionary.ToList(); myList.Sort((pair1,pair2) => pair1.Value.CompareTo(pair2.Value)); ```
Use LINQ: ``` Dictionary<string, int> myDict = new Dictionary<string, int>(); myDict.Add("one", 1); myDict.Add("four", 4); myDict.Add("two", 2); myDict.Add("three", 3); var sortedDict = from entry in myDict orderby entry.Value ascending select entry; ``` This would also allow for great flexibility in that you can select the top 10, top 20, top 10%, etc. Or if you are using your word frequency index for `type-ahead`, you could also include a `StartsWith` clause.
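For comparison only, the same sort-by-value shape in Python — a sort keyed on the value half of each key/value pair, matching the ascending `orderby` in the LINQ answer:

```python
freq = {"the": 12, "cat": 3, "sat": 5}
# Sort the (word, count) pairs by the count, ascending.
by_value = sorted(freq.items(), key=lambda kv: kv[1])
```

The pattern is language-agnostic: flatten the dictionary to pairs, then sort the pairs on the value component.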
How do you sort a dictionary by value?
[ "", "c#", ".net", "sorting", "dictionary", "" ]
I'm trying to compile over 100 java classes from different packages from a clean directory (no incremental compiles) using the following ant tasks: ``` <target name="-main-src-depend"> <depend srcdir="${src.dir}" destdir="${bin.dir}" cache="${cache.dir}" closure="true"/> </target> <target name="compile" depends="-main-src-depend" description="Compiles the project."> <echo>Compiling</echo> <javac target="${javac.target}" source="${javac.source}" debug="${javac.debug}" srcdir="${src.dir}" destdir="${bin.dir}"> <classpath> <path refid="runtime.classpath"/> <path refid="compile.classpath"/> </classpath> </javac> </target> ``` However, the first time I run the compile task I always get a StackOverflowException. If I run the task again the compiler does an incremental build and everything works fine. This is undesirable since we are using [CruiseControl](http://cruisecontrol.sourceforge.net/) to do an automatic daily build and this is causing false build failures. As a quick-and-dirty solution I have created 2 separate tasks, compiling portions of the project in each. I really don't think this solution will hold as more classes are added in the future, and I don't want to be adding new compile tasks every time we hit the "compile limit".
> It will be nice to know; what can > cause or causes a StackOverflowError > during compilation of Java code? It is probable that evaluating the long expression in your java file consumes lots of memory and because this is being done in conjunction with the compilation of other classes, the VM just runs out of stack space. Your generated class is perhaps pushing the legal limits for its contents. See chapter [4.10 Limitations of the Java Virtual Machine](http://java.sun.com/docs/books/jvms/second_edition/html/ClassFile.doc.html#88659) in [The Java Virtual Machine Specification, Second Edition](http://java.sun.com/docs/books/jvms/). **Fix 1: refactor the class** Since your class is being generated, this might not be an option. Still, it is worth looking at the options your class generation tool offers to see if it can produce something less troublesome. **Fix 2: increase the stack size** I think [Kieron](https://stackoverflow.com/questions/16935/ants-javac-tasks-throws-stackoverflowexception#16982) has one solution when he mentions the -Xss argument. [javac](http://java.sun.com/javase/6/docs/technotes/tools/windows/javac.html) takes a number of non-standard arguments that will vary between versions and compiler vendors. My compiler: ``` $ javac -version javac 1.6.0_05 ``` To list all the options for it, I'd use these commands: ``` javac -help javac -X javac -J-X ``` I *think* the stack limit for javac is 512Kb by default. You can increase the stack size for this compiler to 10Mb with this command: ``` javac -J-Xss10M Foo.java ``` You might be able to pass this in an Ant file with a *compilerarg* element nested in your *javac* task. ``` <javac srcdir="gen" destdir="gen-bin" debug="on" fork="true"> <compilerarg value="-J-Xss10M" /> </javac> ```
``` <javac srcdir="gen" destdir="gen-bin" debug="on" fork="true"> <compilerarg value="-J-Xss10M" /> </javac> ``` from the [comment above](https://stackoverflow.com/questions/16935/ants-javac-tasks-throws-stackoverflowexception/19782#19782) is incorrect. You need a space between the -J and -X, like so: ``` <javac srcdir="gen" destdir="gen-bin" debug="on" fork="true"> <compilerarg value="-J -Xss10M" /> </javac> ``` to avoid the following error: ``` [javac] [javac] The ' characters around the executable and arguments are [javac] not part of the command. [javac] Files to be compiled: ``` ... [javac] javac: invalid flag: -J-Xss1m [javac] Usage: javac
Ant's <javac> tasks throws StackOverflowException
[ "", "java", "ant", "" ]
What is actually the difference between these two casts? ``` SomeClass sc = (SomeClass)SomeObject; SomeClass sc2 = SomeObject as SomeClass; ``` Normally, shouldn't they both be explicit casts to the specified type?
The former will throw an exception if the source type can't be cast to the target type. The latter will result in sc2 being a null reference, but no exception. [Edit] My original answer is certainly the most pronounced difference, but as Eric Lippert [points out](http://blogs.msdn.com/ericlippert/archive/2009/10/08/what-s-the-difference-between-as-and-cast-operators.aspx), it's not the only one. Other differences include: * You can't use the 'as' operator to cast to a type that doesn't accept 'null' as a value * You can't use 'as' to *convert* things, like numbers to a different representation (float to int, for example). And finally, using 'as' vs. the cast operator, you're also saying "I'm not sure if this will succeed."
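The exception-versus-null distinction can be sketched in Python terms, if that helps: this hypothetical `as_type` helper mimics what `as` does (Python has no direct equivalent, so this is purely illustrative):

```python
class SomeClass:
    pass

def as_type(obj, cls):
    """Sketch of C#'s 'as': the object if compatible, else None (no throw)."""
    return obj if isinstance(obj, cls) else None

# A cast-style check would raise on a mismatch; 'as' just yields null/None.
sc = as_type("not a SomeClass", SomeClass)
```

As with `as` in C#, the caller is then obliged to check the result for `None` before using it — the failure is deferred rather than signalled immediately.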
Also note that you can only use the `as` keyword with a reference type or a nullable type, e.g.: ``` double d = 5.34; int i = d as int; ``` will not compile ``` double d = 5.34; int i = (int)d; ``` will compile.
Casting: (NewType) vs. Object as NewType
[ "", "c#", ".net", "" ]
I'm currently designing a program that will involve some physics (nothing too fancy, a few balls crashing into each other). What's the most exact datatype I can use to represent position (without a feeling of discrete jumps) in C#? Also, what's the smallest amount of time I can get between t and t+1? One tick? EDIT: Clarifying: What is the smallest unit of time in C#? `[TimeSpan].Tick`?
In .NET a `decimal` will be the most precise datatype that you could use for position. I would just write a class for the position: ``` public class Position { decimal x; decimal y; decimal z; } ``` As for time, your processor can't give you anything smaller than one tick. Sounds like a fun project! Good luck!
The Decimal data type although precise might not be the optimum choice depending on what you want to do. Generally Direct3D and GPUs use 32-bit floats, and vectors of 3 (total 96 bits) to represent a position in x,y,z. This will usually give more than enough precision unless you need to mix both huge scale (planets) and microscopic level (basketballs) in the same "world". Reasons for not using Decimals could be size (4 x larger), speed (orders of magnitude slower) and no trigonometric functions available (AFAIK). On Windows, the [QueryPerformanceCounter](http://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx) API function will give you the highest resolution clock, and [QueryPerformanceFrequency](http://msdn.microsoft.com/en-us/library/ms644905(VS.85).aspx) the frequency of the counter. I believe the Stopwatch described in other comments wraps this in a .net class.
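The float-versus-decimal trade-off discussed in these answers is easy to demonstrate with Python's stdlib, where the same two representations exist (binary floats round, base-10 decimals don't):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly...
inexact = (0.1 + 0.2 == 0.3)          # evaluates False

# ...while Decimal works in base 10 and keeps this arithmetic exact.
exact = (Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # evaluates True
```

For a physics loop, though, the rounding of a 32/64-bit float is far below anything visible on screen, which is why exactness is usually traded away for speed and trig support.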
Datatypes for physics
[ "", "c#", "types", "physics", "" ]