I need to conduct a survey of 3 questions:

* The first question will be Yes/No.
* The second will have multiple answers, in which you can select multiple answers for just that question, as well as an "other" box that you can use to fill in an answer.
* The last question will be a textarea in which the user can enter general comments/suggestions.

I would love for this survey app to have an administration back end so I can export results to a standard format and/or be able to view the stats via the interface. One I did find was [this](http://www.limesurvey.org/). Are there any others like this?

EDIT: PHP/MySQL based.
<http://docs.google.com> Create a form and collect the results of the survey in a spreadsheet.
I looked into this a few years ago. SurveyMonkey is a web service that appears to be widely used, especially by the non-computer-literate. If you want to modify, serve, and count yourself (possible for free), see [this comparison chart](http://websurveytoolbox.org/FeatureTable.html "this comparison chart").
What is a good free online poll/survey app?
[ "", "php", "mysql", "survey", "" ]
**Update:** Thanks for the suggestions guys. After further research, I’ve reformulated the question here: [Python/editline on OS X: £ sign seems to be bound to ed-prev-word](https://stackoverflow.com/questions/217020/pythoneditline-on-os-x-163-sign-seems-to-be-bound-to-ed-prev-word)

On Mac OS X I can’t enter a pound sterling sign (£) into the Python interactive shell.

* Mac OS X 10.5.5
* Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17)
* European keyboard (£ is shift-3)

When I type “£” (i.e. press shift-3) at an empty Python shell, nothing appears. If I’ve already typed some characters, e.g.

```
>>> 1234567890
1234567890
1234567890
```

... then pressing shift-3 will make the cursor position itself after the most recent space, or the start of the line if there are no spaces left between the cursor and the start of the line. In a normal bash shell, pressing shift-3 types a “£” as expected.

Any idea how I can type a literal “£” in the Python interactive shell?
I'd imagine that the terminal emulator is eating the keystroke as a control code. Maybe see if it has a config file you can mess around with?
Not the best solution, but you could type:

```
pound = u'\u00A3'
```

Then you have it in a variable you can use in the rest of your session.
How do I enter a pound sterling character (£) into the Python interactive shell on Mac OS X?
[ "", "python", "bash", "macos", "shell", "terminal", "" ]
We are building a framework that will be used by other developers, and so far we have been using a lot of TDD practices. We have interfaces everywhere and have well-written unit tests that mock the interfaces. However, we are now reaching the point where some of the properties/methods of the input classes need to be internal, and not visible to our framework users (for example, an object Id). The problem then is that we can't put those fields/methods on the interface, as the interface does not describe accessibility. We could:

1. Still use interfaces and upcast in the first line of the method, but that seems to defeat the purpose of interfaces.
2. Use classes as input parameters - breaking the TDD rule that *everything* should be interfaces.
3. Provide another layer which does some translation between public interfaces and internal interfaces.

Is there an existing pattern/approach to deal with this? What do the TDD people say should be done?
You need to be able to replicate those internal methods in your mock objects, and call them in the same way the real object would call them. Then you focus your unit test on the public method that relies on the private method you need to test. If these internal methods are calling other objects or doing a lot of work, you may need to refactor your design. Good luck.
First, there is no general TDD rule that says everything should be an interface. This is coming from a specific style that is not practiced by every TDDer. See <http://martinfowler.com/articles/mocksArentStubs.html> Second, you are experiencing the dichotomy of [public vs. published](http://martinfowler.com/ieeeSoftware/published.pdf). Our team "solved" this problem by introducing a @Published annotation that shows up in the API documentation. Eclipse uses naming conventions, as far as I know. I don't know of a really *good* solution to the problem, unfortunately.
How do you balance Framework/API Design and TDD
[ "", "c#", "design-patterns", "oop", "interface", "tdd", "" ]
I have a Winforms application created in Visual Studio 2005 Pro; it connects to an SQL Server 2005 database using the SqlConnection / SqlCommand / SqlDataAdapter classes to extract data. I have stored procedures in my database to return the data to me. What is the best way to handle queries that take a "long time" to complete? (i.e. long enough that the user starts to think something is wrong.) Currently my application locks up until the query is complete, or the query times out. Obviously this is unacceptable. I'd at least like a progress meter with a "stop" button on it. The progress meter doesn't even have to do anything useful; being a hint to sit patiently and wait would be enough. Even better would be a warning that said something like "This will return 140,000 rows of data. Do you want to continue?" I know this probably requires threads, but how?
Making sure that the queries are optimized is the first step, but afterwards you have to use threading, and .NET has the [BackgroundWorker component](http://msdn.microsoft.com/en-us/library/8xs8549b.aspx) for these cases. Here's a [complete example](http://msdn.microsoft.com/en-us/library/waw3xexc.aspx).
Simple C# 2.0 example:

```
private void Form_Load(object sender, EventArgs e)
{
    BackgroundWorker bw = new BackgroundWorker();
    bw.DoWork += new DoWorkEventHandler(bw_DoWork);
    bw.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bw_RunWorkerCompleted);
    //change UI to reflect we're doing this
    bw.RunWorkerAsync();
}

void bw_DoWork(object sender, DoWorkEventArgs e)
{
    //SQL Work
}

void bw_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    //Let the user know we're done
}
```
What is the correct way to deal with procedures that take a "long time" to complete?
[ "", "c#", "database", "sql-server-2005", "multithreading", "visual-studio-2005", "" ]
While working on a project, I came across a JS script created by a former employee that basically creates a report in the form of

```
Name : Value
Name2 : Value2
```

etc. The problem is that the values can sometimes be floats (with different precision), integers, or even in the form `2.20011E+17`. What I want to output are pure integers. I don't know a lot of JavaScript, though. How would I go about writing a method that takes these sometimes-floats and makes them integers?
You have to convert your input into a number and then round it:

```
function toInteger(number) {
    return Math.round(  // round to nearest integer
        Number(number)  // type cast your input
    );
}
```

Or as a one-liner:

```
function toInt(n) {
    return Math.round(Number(n));
}
```

Testing with different values:

```
toInteger(2.5);           // 3
toInteger(1000);          // 1000
toInteger("12345.12345"); // 12345
toInteger("2.20011E+17"); // 220011000000000000
```
If you need to round to a certain number of digits, use the following function:

```
function roundNumber(number, digits) {
    var multiple = Math.pow(10, digits);
    var rndedNum = Math.round(number * multiple) / multiple;
    return rndedNum;
}
```
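A quick usage sketch of that helper (repeated here so the snippet is self-contained). The `1.005` case is worth knowing about: binary floating point stores it as slightly less than 1.005, so it rounds down.

```javascript
// Same rounding helper as above, repeated so this snippet runs on its own.
function roundNumber(number, digits) {
    var multiple = Math.pow(10, digits);
    return Math.round(number * multiple) / multiple;
}

console.log(roundNumber(12345.6789, 2)); // 12345.68
console.log(roundNumber(2.5, 0));        // 3
// Floating-point caveat: 1.005 * 100 evaluates to 100.49999999999999,
// so this returns 1, not 1.01:
console.log(roundNumber(1.005, 2));      // 1
```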
How do I round a number in JavaScript?
[ "", "javascript", "floating-point", "numbers", "rounding", "" ]
Does anyone know of a really simple way of publishing Java methods as web services? I don't really want the overhead of using Tomcat or Jetty or any of the other container frameworks. Scenario: I've got a set of Java methods in a service type application that I want to access from other machines on the local LAN.
Well, Tomcat or Jetty may be overkill for publishing just some methods as a web service. But on the other hand it's not too complicated and they do the job, so why not? I had a similar problem not too long ago and used Tomcat together with Axis2. Just download Tomcat, unpack it, and deploy the Axis2 WAR. To publish a web service there are several approaches; the one I took is probably one of the easiest: Just build your application as usual and annotate the web service class and methods with the appropriate annotations from javax.jws.\*. Package everything into a jar. Create a service.xml in the META-INF directory of your jar file and put this into it:

```
<service name="name of the service" scope="<one of request, session or application>">
    <description>
        optional description of your service
    </description>
    <messageReceivers>
        <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-only"
                         class="org.apache.axis2.rpc.receivers.RPCInOnlyMessageReceiver" />
        <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out"
                         class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
    </messageReceivers>
    <parameter name="ServiceClass" locked="false">put here the fully qualified name of your service class (e.g. x.y.z.FooService)</parameter>
</service>
```

Rename the .jar to .aar and put it into the /webapps/axis2/WEB-INF/services/ directory. Start Tomcat and the service will be deployed. You can check that it is running by visiting the Axis2 page (<http://localhost:8080/axis2/>). There you will see which services are deployed and which methods are exported. You can also get the WSDL URL there to connect to your service. Read <http://ws.apache.org/axis2/1_4_1/contents.html> for more about using Axis2. The approach I described here is not found exactly like this in the docs, but it works very well.

**Update:** If you just want to provide web services and really don't need any of the other features of Tomcat (e.g. serving of plain old web pages, JSPs or other stuff), you can also use the Axis2 standalone server. But except for the setup part it doesn't change anything I described. I've written a slightly more detailed version of this, which can be found at: <http://www.slashslash.de/lang/en/2008/10/java-webservices-mit-apache-tomcat-und-axis2/> (don't let the German in the URL irritate you; it's written in English).
Erhm. Why not just use [RMI](http://java.sun.com/javase/technologies/core/basic/rmi/index.jsp)?
Simple Java web services
[ "", "java", "web-services", "remoting", "" ]
How can I document a member inline in .Net? Let me explain. Most tools that extract documentation from comments support some kind of inline documentation where you can add a brief after the member declaration. Something like:

```
public static string MyField; /// <summary>Information about MyField.</summary>
```

Is there a way to do this in C# or the .NET languages?
No, you can't. XML comments are only supported as a block level comment, meaning it must be placed before the code element you are documenting. The common tools for extracting XML comments from .NET code do not understand how to parse inline comments like that. If you need this ability you will need to write your own parser.
There is not a built-in way to do this. The XML documentation system is hierarchical; it defines a relationship between the tag `<summary />` and the data that immediately follows it.
How to use inline comments to document members in .NET?
[ "", "c#", ".net", "documentation", "documentation-generation", "xml-documentation", "" ]
I would like to load a BMP file, do some operations on it in memory, and output a new BMP file using C++ on Windows (Win32 native). I am aware of [ImageMagick](http://www.imagemagick.org/) and its C++ binding [Magick++](http://www.imagemagick.org/Magick%2B%2B/), but I think it's overkill for this project since I am currently not interested in other file formats or platforms. What would be the simplest way in terms of code setup to read and write BMP files? The answer may be "just use Magick++, it's the simplest." Related question: [What is the best image manipulation library?](https://stackoverflow.com/questions/158756/what-is-the-best-image-manipulation-library)
When developing just for Windows, I usually just use the ATL [CImage](http://msdn.microsoft.com/en-us/library/bwea7by5(VS.80).aspx) class.
[EasyBMP](http://easybmp.sourceforge.net/) if you want just BMP support. It's simple enough to start using within minutes, and it's multiplatform should you need that.
C++: What's the simplest way to read and write BMP files using C++ on Windows?
[ "", "c++", "windows", "winapi", "image-manipulation", "bmp", "" ]
I have SQL data that looks like this:

```
events
id  name      capacity
1   Cooking   10
2   Swimming  20
3   Archery   15

registrants
id  name
1   Jimmy
2   Billy
3   Sally

registrant_event
registrant_id  event_id
1              3
2              3
3              2
```

I would like to select all of the fields in 'events' as well as an additional field that is the number of people who are currently registered for that event. In this case Archery would have 2 registrants, Swimming would have 1, and Cooking would have 0. I imagine this could be accomplished in a single query but I'm not sure of the correct syntax. **How would a query be written to get that data?**
```
SELECT e.*, ISNULL(ec.TotalRegistrants, 0)
FROM events e
LEFT OUTER JOIN (
    SELECT event_id, COUNT(registrant_id) AS TotalRegistrants
    FROM registrant_event
    GROUP BY event_id
) ec ON e.id = ec.event_id
```
```
SELECT Events.ID, Events.Name, Events.Capacity,
       ISNULL(COUNT(Registrant_Event.Registrant_ID), 0)
FROM Events
LEFT OUTER JOIN Registrant_Event
    ON Events.ID = Registrant_Event.Event_ID
GROUP BY Events.ID, Events.Name, Events.Capacity
```
SQL LEFT OUTER JOIN subquery
[ "", "sql", "database", "count", "subquery", "" ]
I am an advocate of ORM solutions, and from time to time I give a workshop about Hibernate. When talking about framework-generated SQL, people usually start talking about how they need to be able to use "hints", and that this is supposedly not possible with ORM frameworks. Usually something like: "We tried Hibernate. It looked promising in the beginning, but when we let it loose on our very very complex production database it broke down because we were not able to apply hints!". But when asked for a concrete example, the memory of those people is suddenly not so clear any more...

I usually feel intimidated, because the whole "hints" topic sounds like voodoo to me... So can anybody enlighten me? What is meant by SQL hints or DB hints? The only thing I know that is somehow "hint-like" is SELECT ... FOR UPDATE. But this is supported by the Hibernate API...
A SQL statement, especially a complex one, can actually be executed by the DB engine in any number of different ways (which table in the join to read first, which index to use based on many different parameters, etc). An experienced dba can use hints to *encourage* the DB engine to choose a particular method when it generates its execution plan. You would only normally need to do this after extensive testing and analysis of the specific queries (because the DB engines are usually pretty darn good at figuring out the optimum execution plan). Some MSSQL-specific discussion and syntax here: <http://msdn.microsoft.com/en-us/library/ms181714.aspx> Edit: some additional examples at <http://geeks.netindonesia.net/blogs/kasim.wirama/archive/2007/12/31/sql-server-2005-query-hints.aspx>
Query hints are used to guide the query optimiser when it doesn't produce sensible query plans by default. First, a little background on query optimisers:

Database programming is different from pretty much all other software development because it has a mechanical component. Disk seeks and rotational latency (waiting for a particular sector to arrive under the disk head) are very expensive in comparison to CPU. Different query resolution strategies will result in different amounts of I/O, often radically different amounts. Getting this right or wrong can make a major difference to the performance of the query. For an overview of query optimisation, see [this paper](ftp://ftp.research.microsoft.com/users/surajitc/pods98-tutorial.pdf).

SQL is declarative - you specify the logic of the query and let the DBMS figure out how to resolve it. A modern cost-based query optimiser (some systems, such as Oracle, also have a legacy query optimiser retained for backward compatibility) will run a series of transformations on the query. These maintain semantic equivalence but differ in the order and choice of operations. Based on statistics collected on the tables (sizes, distribution histograms of keys) the optimiser computes an estimate of the amount of work needed for each query plan. It selects the most efficient plan.

Cost-based optimisation is heuristic, and is dependent on accurate statistics. As query complexity goes up the heuristics can produce incorrect plans, which can potentially be wildly inefficient. Query hints can be used in this situation to force certain strategies in the query plan, such as a type of join. For example, on a query that usually returns very small result sets you may wish to force a nested loops join. You may also wish to force a certain join order of tables. O/R mappers (or any tool that generates SQL) generate their own queries, which will typically not have hinting information.
In the case that this query runs inefficiently you have limited options, some of which are:

* Examine the indexing on the tables. Possibly you can add an index. Some systems (recent versions of Oracle, for example) allow you to index joins across more than one table.
* Some database management systems (again, Oracle comes to mind) allow you to manually associate a query plan with a specific query string. Query plans are cached by a hash value of the query. If the queries are parameterised the base query string is constant and will resolve to the same hash value.
* As a last resort, you can modify the database schema, but this is only possible if you control the application.

If you control the SQL you can hint queries. In practice it's fairly uncommon to actually need to do this. A more common failure mode with O/R mappers on complex database schemas is that they can make it difficult to express complex query predicates or do complex operations over large bodies of data. I tend to advocate using the O/R mapper for the 98% of work that it's suited for and dropping to stored procedures where they are the appropriate solution. If you really need to hint a query then this might be the appropriate strategy. Unless there is something unusual about your application (for example some sort of DSS) you should only need to escape from the O/R mapper in a minority of situations. You might also find (again, an example would be DSS tools working with the data in aggregate) that an O/R mapper is not really the appropriate strategy for the application.
What are "SQL-Hints"?
[ "", "sql", "hibernate", "orm", "" ]
What is the default capacity of a `StringBuilder`? And when should (or shouldn't) the default be used?
The Venerable J. Skeet has provided a good analysis of precisely this problem: <https://jonskeet.uk/csharp/stringbuilder.html>
The default capacity of StringBuilder is 16 characters (I used .NET Reflector to find out).
Default capacity of StringBuilder
[ "", "c#", ".net", "stringbuilder", "capacity", "" ]
I'm writing code like this, doing a little quick and dirty timing:

```
var sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 1000; i++)
{
    b = DoStuff(s);
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
```

Surely there's a way to call this bit of timing code as a fancy-schmancy .NET 3.0 lambda rather than (God forbid) cutting and pasting it a few times and replacing the **`DoStuff(s)`** with **`DoSomethingElse(s)`**? I know it can be done as a `Delegate` but I'm wondering about the lambda way.
How about extending the Stopwatch class?

```
public static class StopwatchExtensions
{
    public static long Time(this Stopwatch sw, Action action, int iterations)
    {
        sw.Reset();
        sw.Start();
        for (int i = 0; i < iterations; i++)
        {
            action();
        }
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}
```

Then call it like this:

```
var s = new Stopwatch();
Console.WriteLine(s.Time(() => DoStuff(), 1000));
```

You could add another overload which omits the "iterations" parameter and calls this version with some default value (like 1000).
Here's what I've been using:

```
public class DisposableStopwatch : IDisposable
{
    private readonly Stopwatch sw;
    private readonly Action<TimeSpan> f;

    public DisposableStopwatch(Action<TimeSpan> f)
    {
        this.f = f;
        sw = Stopwatch.StartNew();
    }

    public void Dispose()
    {
        sw.Stop();
        f(sw.Elapsed);
    }
}
```

Usage:

```
using (new DisposableStopwatch(t => Console.WriteLine("{0} elapsed", t)))
{
    // do stuff that I want to measure
}
```
Wrapping StopWatch timing with a delegate or lambda?
[ "", "c#", ".net", "delegates", "lambda", "" ]
I have some complex stored procedures that may return many thousands of rows, and take a long time to complete. Is there any way to find out how many rows are going to be returned before the query executes and fetches the data? This is with Visual Studio 2005, a Winforms application and SQL Server 2005.
A solution to your problem might be to re-write the stored procedure so that it limits the result set to some number, like:

```
SELECT TOP 1000 * FROM tblWHATEVER
```

in SQL Server, or

```
SELECT * FROM tblWHATEVER WHERE ROWNUM <= 1000
```

in Oracle. Or implement a paging solution so that the result set of each call is acceptably small.
You mentioned your stored procedures take a long time to complete. Is the majority of the time taken up during the process of selecting the rows from the database or returning the rows to the caller? If it is the latter, maybe you can create a mirror version of your SP that just gets the count instead of the actual rows. If it is the former, well, there isn't really that much you can do since it is the act of finding the eligible rows which is slow.
Can I get the rowcount before executing a stored procedure?
[ "", "c#", "sql-server-2005", "stored-procedures", "visual-studio-2005", "count", "" ]
I need to create a midnight DateTime. I've just done this:

```
DateTime endTime = DateTime.Now;
endTime.Subtract(endTime.TimeOfDay);
```

I haven't tested it yet; I'm assuming it works, but is there a better/cleaner way?
Just use `foo.Date`, or [`DateTime.Today`](https://learn.microsoft.com/en-us/dotnet/api/system.datetime.today) for today's date.
```
DateTime endTime = DateTime.Now.Date;
```

Now `endTime.TimeOfDay.ToString()` returns `"00:00:00"`.
Best way to create a Midnight DateTime in C#
[ "", "c#", "datetime", "" ]
I have this code:

```
CCalcArchive::CCalcArchive() : m_calcMap()
{
}
```

`m_calcMap` is defined as this:

```
typedef CTypedPtrMap<CMapStringToPtr, CString, CCalculation*> CCalcMap;
CCalcMap& m_calcMap;
```

When I compile in Visual Studio 2008, I get this error:

```
error C2440: 'initializing' : cannot convert from 'int' to 'CCalcArchive::CCalcMap &'
```

I don't even understand where it gets the "int" error from, and also why this doesn't work? It feels like I'm actually having some sort of syntax error, but isn't this how member initialization lists are supposed to be used? Also, AFAIK, the MFC class `CTypedPtrMap` has no constructor taking arguments.
The `int` is coming from the fact that `CTypedPtrMap` has a constructor that takes an `int` argument that is defaulted to 10. The real problem that you're running into is that the `m_calcMap` reference initialization you have there is trying to default construct a temporary `CTypedPtrMap` object to bind the reference to. However, only `const` references can be bound to temporary objects. No doubt the error message is not very informative. But even if the `m_calcMap` member were a `const` reference, you'd still have a problem binding it to a temporary. In this case, the MSVC 2008 compiler gives a pretty clear warning:

```
mfctest.cpp(72) : warning C4413: '' : reference member is initialized to a temporary that doesn't persist after the constructor exits
```
I'm not sure where it's getting the `int` from, but you **must** initialize all references in the initializer list. `m_calcMap` is declared as a reference, and so it must be initialized to refer to some instance of a `CCalcMap` object - you can't leave it uninitialized. If there's no way for you to pass the referred-to object into the constructor, or there's a possibility that you need it to not refer to an object, then use a pointer instead of a reference.
Can't initialize an object in a member initialization list
[ "", "c++", "mfc", "" ]
**Background:** I have an HTML page which lets you expand certain content. As only small portions of the page need to be loaded for such an expansion, it's done via JavaScript, and not by directing to a new URL/HTML page. However, as a bonus the user is able to permalink to such expanded sections, i.e. send someone else a URL like *<http://example.com/#foobar>* and have the "foobar" category be opened immediately for that other user. This works using parent.location.hash = 'foobar', so that part is fine.

**Now the question:** When the user closes such a category on the page, I want to empty the URL fragment again, i.e. turn <http://example.com/#foobar> into <http://example.com/> to update the permalink display. However, doing so using `parent.location.hash = ''` causes a reload of the whole page (in Firefox 3, for instance), which I'd like to avoid. Using `window.location.href = '/#'` won't trigger a page reload, but leaves the somewhat unpretty-looking "#" sign in the URL. So is there a way in popular browsers to JavaScript-remove a URL anchor including the "#" sign without triggering a page refresh?
As others have mentioned, [replaceState](https://developer.mozilla.org/en-US/docs/DOM/Manipulating_the_browser_history#The_replaceState%28%29.C2.A0method) in HTML5 can be used to remove the URL fragment. Here is an example:

```
// remove the fragment as far as it can go without adding an entry in browser history:
window.location.replace("#");

// slice off the remaining '#' in HTML5:
if (typeof window.history.replaceState == 'function') {
    history.replaceState({}, '', window.location.href.slice(0, -1));
}
```
Since you are controlling the action on the hash value, why not just use a token that means "nothing", like "#\_" or "#default".
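A minimal sketch of that sentinel-token idea; the `#_` token and the `sectionFromHash` helper name are made up for illustration, not part of any library:

```javascript
// Hypothetical helper: map a location hash to the name of the open section.
// The sentinel "#_" (like an empty hash) means "no section open", so the page
// can set location.hash = '_' on close instead of clearing the hash entirely
// (which would either reload the page or leave a bare '#').
function sectionFromHash(hash) {
    if (hash === '' || hash === '#' || hash === '#_') {
        return null; // nothing is expanded
    }
    return hash.slice(1); // drop the leading '#'
}

// In the browser: var section = sectionFromHash(window.location.hash);
console.log(sectionFromHash('#foobar')); // "foobar"
console.log(sectionFromHash('#_'));      // null
```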
Remove fragment in URL with JavaScript w/out causing page reload
[ "", "javascript", "html", "url", "fragment-identifier", "" ]
I'm writing code on the master page, and I need to know which child (content) page is being displayed. How can I do this programmatically?
This sounds like a bad idea to start with. The idea of the master page is that it shouldn't care which page is being displayed, as it contains only the code common to every page.
I use this:

```
string pageName = this.ContentPlaceHolder1.Page.GetType().FullName;
```

It returns the class name in the format "ASP.default\_aspx", but I find that easy to parse for most purposes. Hope that helps!
How to determine which Child Page is being displayed from Master Page?
[ "", "c#", "asp.net", "master-pages", "" ]
If I make a JFrame like this:

```
public static void main(String[] args) {
    new JFrame().setVisible(true);
}
```

then after closing the window the application doesn't stop (I need to kill it). What is the proper way of showing an application's main window? I'd also like to know the reason for a proposed solution. Thanks in advance.
You should call `setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);` on your JFrame. Example code:

```
public static void main(String[] args) {
    Runnable guiCreator = new Runnable() {
        public void run() {
            JFrame fenster = new JFrame("Hallo Welt mit Swing");
            fenster.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            fenster.setVisible(true);
        }
    };
    SwingUtilities.invokeLater(guiCreator);
}
```
There's a difference between the application window and the application itself... The window runs in its own thread, and finishing `main()` will not end the application if other threads are still active. When closing the window you should also make sure to close the application, possibly by calling `System.exit(0);` Yuval =8-)
Why does my application still run after closing main window?
[ "", "java", "swing", "" ]
I'm trying to print to dot matrix printers (various models) out of C#. Currently I'm using Win32 API calls (you can find a lot of examples online) to send escape codes directly to the printer from my C# application. This works great, but... My problem is that because I'm generating the escape codes and not relying on the Windows print system, the printouts can't be sent to any "normal" printers or to things like PDF print drivers. (This is now causing a problem as we're trying to use the application on a 2008 Terminal Server using Easy Print, which is XPS based.) The question is: How can I print formatted documents (invoices on pre-printed stationery) to dot matrix printers (Epson, Oki and Panasonic... various models) out of C# without using direct printing, escape codes etc.?

**Just to clarify:** I'm trying things like GDI+ (System.Drawing.Printing), but the problem is that it's very hard to get things to line up like the old code did. (The old code sent the characters direct to the printer, bypassing the Windows driver.) Any suggestions how things could be improved so that they could use GDI+ but still line up like the old code did?
You should probably use a reporting tool to make templates that allow you or your users to correctly position the fields with regard to the pre-printed stationery. Using dot matrix printers, you basically have to work in either of 2 modes:

* simple typewriter mode of line/column text, where you send escape sequences to manage a small number of fonts that are included in the printer hardware and have to manage line returns, etc.
* graphic output, where the page is rasterized and the printer driver just drives the print head and pins to output the dots.

The first usage is mostly deprecated under Windows as it does not offer much in the way of controlling the output, and with each printer having its own characteristics it becomes unwieldy and difficult for the software to predict and position things on the page (no WYSIWYG). The second just uses a graphic page paradigm that makes positioning text and graphics independent of the actual capabilities of the printer.

When using pre-printed stationery, your job is to correctly position the data on the page. Doing this by hand is resource-consuming, and creating the layout in code is certainly not recommended, since you'll get stuck with code to change should your printer, page format or printed stationery change. The best approach is to just use the standard printing model offered by .NET and a reporting tool that allows you to define models and templates where the correct text and graphics will be positioned, and then drive this from code. Visual Studio is shipped with a version of Crystal Reports, but there are other, better reporting systems (I use the one from [Developer Express](http://devexpress.com/) for instance), some of them even [free](http://www.fyireporting.com/).
From my experience, it is easier to use two kinds of reports for the same data:

* one report for dot matrix printers using escape codes and anything else that is required, which is saved in a text file and then printed using various methods (`type file.txt > lpt1`, or selecting the default printer in code and using `NOTEPAD /P file.txt`) - see [this page](http://www.robvanderwoude.com/printfiles.html) for more printing methods.
* another report for laser/inkjet printers using a report builder tool (Crystal Reports, [Report Manager](http://reportman.sourceforge.net/), [RLIB](http://rlib.sicompos.com/) or anything available).

Since it is not uncommon to buy the right kind of printer for the right kind of report, this approach has the advantage of letting the customer decide: a dot matrix printer for text reports in A3/A4 paper format (usually for the accounting department), or a laser/inkjet printer for graphical reports.
Dot Matrix printing in C#?
[ "", "c#", "printing", "xps", "" ]
For instance, winsock libs work great across all versions of Visual Studio. But I am having real trouble providing a consistent binary across all the versions. The dll compiled with VS 2005 won't work when linked to an application written in 2008. I upgraded both 2k5 and 2k8 to SP1, but the results haven't changed much. It works somewhat ok. But when they include this with a C# app, the C# app gets access violation errors, while with a classic C++ application it works fine. Is there a strategy I should know about when I provide dlls?
First, don't pass anything other than plain old data across DLL boundaries. i.e. structs are fine. classes are not. Second, make sure that ownership is not transferred - i.e. any structs passed across the dll boundary are never deallocated outside the dll. So, if your dll exports an X\* GetX() function, there is a corresponding FreeX(X\*) type function ensuring that the same runtime that allocated is responsible for de-allocation. Next: Get your DLLs to link to the static runtime. Putting together a project comprising dlls from several 3rd parties, each linked to and expecting different runtimes, potentially different to the runtime expected by the app, is a pain, potentially forcing the installer software to install runtimes for 7.0, 7.1, 8.0 and 9.0 - several of which exist in different service packs which may or may not cause issues. Be kind - statically link your dll projects. -- Edit: You cannot export a c++ class directly with this approach. Sharing class definitions between modules means you MUST have a homogeneous runtime environment as different compilers or versions of compilers will generate decorated names differently. You *can* bypass this restriction by exporting your class instead as a COM style interface... which is to say, while you cannot export a class in a runtime independent way, you CAN export an "interface", which you can easily make by declaring a class containing only pure virtual functions... ``` struct IExportedMethods { virtual long __stdcall AMethod(void)=0; }; // with the win32 macros: interface IExportedMethods { STDMETHOD_(long,AMethod)(THIS)PURE; }; ``` In your class definition, you inherit from this interface: ``` class CMyObject: public IExportedMethods { ... 
``` You can export interfaces like this by making C factory methods: ``` extern "C" __declspec(dllexport) IExportedMethods* WINAPI CreateMyExportedObject(){ return new CMyObject; } ``` This is a very lightweight way of exporting compiler version and runtime independent class versions. Note that you still cannot delete one of these. You must include a release function as a member of the dll or the interface. As a member of the interface it could look like this: ``` interface IExportedMethods { STDMETHOD_(void,Release)(THIS) PURE; }; class CMyObject : public IExportedMethods { STDMETHODIMP_(void) Release(){ delete this; } }; ``` You can take this idea and run further with it - inherit your interface from IUnknown, implement ref counted AddRef and Release methods as well as the ability to QueryInterface for v2 interfaces or other features. And finally, use DllGetClassObject as the means to create your object and get the necessary COM registration going. All this is optional however, you can easily get away with a simple interface definition accessed through a C function.
I disagree with Chris Becke's viewpoint, while seeing the advantages of his approach. The disadvantage is that you are unable to create libraries of utility objects, because you are forbidden to share them across libraries. ## Expanding Chris' solution [How to make consistent dll binaries across VS versions?](https://stackoverflow.com/questions/232926/how-to-make-consistent-dll-binaries-across-vs-versions#232959) Your choices depend on how different the compilers are. On one side, different versions of the same compiler could handle data alignment the same way, and thus, you could expose structs and classes across your DLLs. On the other side, you could mistrust the other libraries' compilers or compile options. In the Windows Win32 API, they handled the problem through "handles". You do the same by: 1 - Never expose a struct. **Expose only pointers** (i.e. a void \* pointer) 2 - This struct data's **access is through functions** taking the pointer as first parameter 3 - this struct's **allocation/deallocation is through functions** This way, you can avoid recompiling everything when your struct changes. The C++ way of doing this is the PImpl. See <http://en.wikipedia.org/wiki/Opaque_pointer> It has the same behaviour as the void \* concept above, but with the PImpl, you can use both RAII, encapsulation, and benefit from strong type safety. This would need compatible decoration (same compiler), but not the same runtime or version (if decorations are the same between versions). ## Another solution? Hoping to mix together DLLs from different compilers/compiler versions is either a recipe for disaster (as you explained in your question), or tedious, as you have to let go of most (if not all) C++ features in your code and fall back to basic C coding, or both. My solution would be: 1 - Be sure all your modules are compiled with the **same compiler/version**. Period. 
2 - Be sure all your modules are compiled to **link dynamically with the same runtime** 3 - Be sure to have an **"encapsulation" of all third-party modules** over which you have no control (unable to compile with your compiler), as explained quite rightly by Chris Becke at [How to make consistent dll binaries across VS versions?](https://stackoverflow.com/questions/232926/how-to-make-consistent-dll-binaries-across-vs-versions#232959). **Note that it is not surprising nor outrageous to mandate that all modules of your application be compiled against the same compiler and the same version of the compiler.** Don't let anyone tell you mixing compilers is a good thing. It is not. The freedom of mixing compilers is, for most people, the same kind of freedom one can enjoy by jumping from the top of a building: You are free to do so, but usually, you just don't want that. My solution enables you to: 1 - export classes and thus, make real, non-castrated, **C++ libraries** (as you are supposed to do with the \_\_declspec(dllexport) of Visual C++ for example) 2 - **transfer allocation ownership** (which happens without your consent when using allocating and/or deallocating inlined code, or STL) 3 - **not be annoyed with problems** tied to the fact that each module has its own version of the runtime (i.e. memory allocation, and some global data used by the C or C++ API) Note that it means you're not supposed to mix debug versions of your modules with release versions of other modules. Your app is either fully in debug or fully in release.
How to make consistent dll binaries across VS versions?
[ "", ".net", "c++", "dll", "" ]
How do I multiply the values of a multi-dimensional array with weights and sum up the results into a new array in PHP or in general? The boring way looks like this: ``` $weights = array(0.25, 0.4, 0.2, 0.15); $values = array ( array(5,10,15), array(20,25,30), array(35,40,45), array(50,55,60) ); $result = array(); for($i = 0; $i < count($values[0]); ++$i) { $result[$i] = 0; foreach($weights as $index => $thisWeight) $result[$i] += $thisWeight * $values[$index][$i]; } ``` Is there a more elegant solution?
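For reference, the loop in the question computes a weighted sum of the *columns* of the jagged array. A quick sketch (Python used purely as a scratchpad here, not as a suggested PHP replacement) to pin down the numbers the code should produce for these exact arrays:

```python
weights = [0.25, 0.4, 0.2, 0.15]
values = [[5, 10, 15], [20, 25, 30], [35, 40, 45], [50, 55, 60]]

# result[i] = sum over rows of weights[row] * values[row][i]
result = [sum(w * row[i] for w, row in zip(weights, values))
          for i in range(len(values[0]))]
print(result)  # approximately [23.75, 28.75, 33.75]
```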
Depends on what you mean by elegant, of course. ``` function weigh(&$vals, $key, $weights) { $sum = 0; foreach($vals as $v) $sum += $v*$weights[$key]; $vals = $sum; } $result = $values; array_walk($result, "weigh", $weights); ``` EDIT: Sorry for not reading your example better. I make result a copy of values, since array\_walk works by reference.
Hm... ``` foreach($values as $index => $ary ) $result[$index] = array_sum($ary) * $weights[$index]; ```
How do I sum up weighted arrays in PHP?
[ "", "php", "arrays", "interpolation", "" ]
I want to find an SQL query to find rows where field1 does not contain $x. How can I do this?
What kind of field is this? The IN operator cannot be used with a single field, but is meant to be used in subqueries or with predefined lists: ``` -- subquery SELECT a FROM x WHERE x.b NOT IN (SELECT b FROM y); -- predefined list SELECT a FROM x WHERE x.b NOT IN (1, 2, 3, 6); ``` If you are searching a string, go for the LIKE operator (but this will be slow): ``` -- Finds all rows where a does not contain "text" SELECT * FROM x WHERE x.a NOT LIKE '%text%'; ``` If you restrict it so that the string you are searching for has to start with the given string, it can use indices (if there is an index on that field) and be reasonably fast: ``` -- Finds all rows where a does not start with "text" SELECT * FROM x WHERE x.a NOT LIKE 'text%'; ```
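If you want to sanity-check the `NOT LIKE` behaviour without a database server at hand, SQLite behaves the same way for this query — Python is used here only as a convenient scratchpad, with made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x (a TEXT)")
conn.executemany("INSERT INTO x VALUES (?)",
                 [("some text here",), ("no match",), ("more text",)])

# Rows where column a does not contain 'text'
rows = [r[0] for r in
        conn.execute("SELECT a FROM x WHERE a NOT LIKE '%text%'")]
print(rows)  # ['no match']
```

Note that `LIKE` is case-insensitive for ASCII in SQLite, which may differ from your server's collation settings.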
`SELECT * FROM table WHERE field1 NOT LIKE '%$x%';` (Make sure you escape $x properly beforehand to avoid SQL injection) Edit: `NOT IN` does something a bit different - your question isn't totally clear so pick which one to use. `LIKE 'xxx%'` can use an index. `LIKE '%xxx'` or `LIKE '%xxx%'` can't.
SQL Query Where Field DOES NOT Contain $x
[ "", "sql", "mysql", "" ]
How can I use/display characters like ♥, ♦, ♣, or ♠ in Java/Eclipse? When I try to use them directly, e.g. in the source code, Eclipse cannot save the file. What can I do? Edit: How can I find the unicode escape sequence?
The problem is that the characters you are using cannot be represented in the encoding you have the file set to (Cp1252). The way I see it, you essentially have two options: Option 1. **Change the encoding.** [According to IBM](http://publib.boulder.ibm.com/infocenter/eruinf/v2r1m1/index.jsp?topic=/com.ibm.iru.doc/concepts/cirerwp.htm), you should set the encoding to UTF-8. I believe this would solve your problem. > * Set the global text file encoding preference Workbench > Editors to "UTF-8". > * If an encoding other than UTF-8 is required, set the encoding on the individual file rather than using the global preference setting. To do this use the File > Properties > Info menu selection to set the encoding on an individual file. Option 2. **Remove the characters which are not supported by the "Cp1252" character encoding.** You can replace the unsupported characters with [Unicode escape sequences](http://en.wikibooks.org/wiki/Java_Programming/Syntax/Unicode_Escape_Sequences) (\uxxxx). While this would allow you to save your file, it is not necessarily the best solution. For the characters you specified in your question here are the Unicode escape sequences: ``` ♥ \u2665 ♦ \u2666 ♣ \u2663 ♠ \u2660 ```
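A small sketch of option 2 — the source below is pure ASCII, so it saves under any file encoding (Cp1252 included), yet still produces the card-suit characters at run time:

```java
public class Suits {
    public static void main(String[] args) {
        // Unicode escapes keep the .java file itself pure ASCII,
        // so the editor's file encoding can no longer reject it.
        String suits = "\u2665\u2666\u2663\u2660"; // hearts diamonds clubs spades
        System.out.println(suits);
        System.out.println(suits.length()); // 4 characters
    }
}
```

Whether the characters *display* correctly in the console is a separate matter of the console's encoding, but the compiled string is correct either way.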
In Eclipse: 1. Go to Window -> Preferences -> General -> Workspace -> TextFileEncoding 2. Set it to UTF-8
How to use Special Chars in Java/Eclipse
[ "", "java", "eclipse", "unicode", "encoding", "cp1252", "" ]
Hey right now I'm using jQuery and I have some global variables to hold a bit of preloaded ajax stuff (preloaded to make pages come up nice and fast): ``` $.get("content.py?pageName=viewer", function(data) {viewer = data;}); $.get("content.py?pageName=artists", function(data) {artists = data;}); $.get("content.py?pageName=instores", function(data) {instores = data;}); $.get("content.py?pageName=specs", function(data) {specs = data;}); $.get("content.py?pageName=about", function(data) {about = data;}); ``` As you can see, we have a huge violation of the DRY principle, but... I don't really see a way to fix it... any ideas? maybe an array?
You don't need `eval()` or `Function()` for this. An array, as you suspected, will do the job nicely: ``` (function() // keep outer scope clean { // pages to load. Each name is used both for the request and the name // of the property to store the result in (so keep them valid identifiers // unless you want to use window['my funky page'] to retrieve them) var pages = ['viewer', 'artists', 'instores', 'specs', 'about']; for (var i=0; i<pages.length; ++i) { // "this" refers to the outer scope; likely the window object. // And will result in page contents being stored in global variables // with the same names as the pages being loaded. We use the with({}) // construct to create a local scope for each callback with the // appropriate context and page name. with ({context: this, pageName: pages[i]}) $.get("content.py?pageName=" + pageName, function(data) {context[pageName] = data;}); } })(); // close scope, execute anonymous function // at this point, viewer, artists, etc. are populated with page contents // (assuming all requests completed successfully) ```
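The `with ({...})` dance above exists because `var`-scoped loop variables were shared across every callback. A jQuery-free sketch of the same per-page capture, using a hypothetical synchronous stand-in for `$.get` so the effect is easy to verify:

```javascript
// Hypothetical stand-in for $.get: calls back immediately with fake content.
function fakeGet(url, callback) {
  callback("content of " + url);
}

var pages = ["viewer", "artists", "instores", "specs", "about"];
var cache = {};

pages.forEach(function (page) {
  // Each forEach invocation gets its own `page` binding, so the
  // inner callback closes over the right name — no `with` needed.
  fakeGet("content.py?pageName=" + page, function (data) {
    cache[page] = data;
  });
});

console.log(cache.viewer); // "content of content.py?pageName=viewer"
```

The same fix falls out of `let` in a plain `for` loop in modern JavaScript, since `let` creates a fresh binding per iteration.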
Using the jQuery each method to iterate through an array of page names and then setting a global (in window scope) variable: ``` jQuery.each( ["viewer", "artists", "instores", "specs", "about"], function (page) { $.get("content.py?pageName=" + page, new Function("window[" + page + "] = arguments[0]")); } ); ``` **Update:** Actually, you don't even need the "new Function": ``` jQuery.each( ["viewer", "artists", "instores", "specs", "about"], function (page) { $.get("content.py?pageName=" + page, function () { window[page] = arguments[0]; }); } ); ```
Assigning values to a list of global variables in JavaScript
[ "", "javascript", "jquery", "dry", "" ]
I have an MFC legacy app that I help to maintain. I'm not quite sure how to identify the version of MFC and I don't think it would make a difference anyway. The app can take some parameters on the command line; I would like to be able to set an errorlevel on exiting the app to allow a bat/cmd file to check for failure and respond appropriately. I don't believe that exit() would work (hadn't tried it yet to be honest) because of the fact that this is an MFC app. Anyone know how to set the errorlevel returned by an MFC app? Can I just use exit()?
I can't take credit for this so please don't up this reply. ``` CWinApp::ExitInstance(); return myExitCode; ``` This will return the errorlevel to the calling batch file for you to then evaluate and act upon.
If your application refuses to update ERRORLEVEL in the DOS shell you run it from, no matter what you do in the code of your program, it might help to run your app with "start /wait" so that the shell is locked until your program ends.
How To Set Errorlevel On Exit of MFC App
[ "", "c++", "windows", "mfc", "batch-file", "cmd", "" ]
Let's say I have an array like this: ``` string [] Filelist = ... ``` I want to create a Linq result where each entry has its position in the array like this: ``` var list = from f in Filelist select new { Index = (something), Filename = f}; ``` Index should be 0 for the 1st item, 1 for the 2nd, etc. What should I use for the expression Index= ?
Don't use a query expression. Use [the overload of `Select` which passes you an index](http://msdn.microsoft.com/en-us/library/bb534869.aspx): ``` var list = FileList.Select((file, index) => new { Index=index, Filename=file }); ```
``` string[] values = { "a", "b", "c" }; int i = 0; var t = (from v in values select new { Index = i++, Value = v}).ToList(); ```
How do you add an index field to Linq results
[ "", "c#", "linq", "" ]
I'm writing some mail-processing software in Python that is encountering strange bytes in header fields. I suspect this is just malformed mail; the message itself claims to be us-ascii, so I don't think there is a true encoding, but I'd like to get out a unicode string approximating the original one without throwing a `UnicodeDecodeError`. So, I'm looking for a function that takes a `str` and optionally some hints and does its darndest to give me back a `unicode`. I could write one of course, but if such a function exists its author has probably thought a bit deeper about the best way to go about this. I also know that Python's design prefers explicit to implicit and that the standard library is designed to avoid implicit magic in decoding text. I just want to explicitly say "go ahead and guess".
As far as I can tell, the standard library doesn't have such a function, though it's not too difficult to write one as suggested above. I think the real thing I was looking for was a way to decode a string and guarantee that it wouldn't throw an exception. The errors parameter to string.decode does that. ``` def decode(s, encodings=('ascii', 'utf8', 'latin1')): for encoding in encodings: try: return s.decode(encoding) except UnicodeDecodeError: pass return s.decode('ascii', 'ignore') ```
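The same fallback chain, exercised on Python 3 `bytes` to show what each candidate encoding catches (the sample strings are made up):

```python
def decode(s, encodings=('ascii', 'utf8', 'latin1')):
    # Try each encoding in order; latin1 maps every byte, so it never fails.
    for encoding in encodings:
        try:
            return s.decode(encoding)
        except UnicodeDecodeError:
            pass
    return s.decode('ascii', 'ignore')

print(decode(b"plain header"))            # ascii succeeds first
print(decode("café".encode("utf8")))      # rejected by ascii, caught by utf8
print(decode("café".encode("latin1")))    # rejected by ascii and utf8, latin1 catches it
```

Ordering matters here: utf8 must come before latin1, because latin1 would happily (and wrongly) decode utf8 bytes into mojibake.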
+1 for the [chardet](https://chardet.readthedocs.io/en/latest/usage.html) module. It is not in the standard library, but you can easily install it with the following command: ``` $ pip install chardet ``` [Example](https://chardet.readthedocs.io/en/latest/usage.html#example-using-the-detect-function): ``` >>> import urllib.request >>> rawdata = urllib.request.urlopen('http://yahoo.co.jp/').read() >>> import chardet >>> chardet.detect(rawdata) {'encoding': 'EUC-JP', 'confidence': 0.99} ``` See [Installing Pip](https://pip.pypa.io/en/stable/installation/) if you don't have one.
Is there a Python library function which attempts to guess the character-encoding of some bytes?
[ "", "python", "email", "character-encoding", "invalid-characters", "" ]
I have a requirement to hide a process in Task Manager. It is for an intranet scenario. So, everything is legitimate. :) Please feel free to share any code you have (preferably in C#) or any other techniques or any issues in going with this route. **Update1**: Most of the users have admin privileges in order to run some legacy apps. So, one of the suggestions was to hide it in task manager. If there are other approaches to prevent users from killing the process, that would be great. **Update2**: Removing the reference to rootkit. Somehow made this post look negative.
There is no supported way to accomplish this. The process list can be read at any privilege level. If you were hoping to hide a process from even Administrators, then this is doubly unsupported. To get this to work, you would need to write a kernel mode rootkit to intercept calls to [NtQuerySystemInformation](http://msdn.microsoft.com/en-us/library/ms724509(VS.85).aspx) so that the SystemProcessInformation info class fails to list your hidden process. Intercepting system calls is very difficult to do safely, and the 64 bit Windows kernels go [out of their way](http://www.microsoft.com/whdc/driver/kernel/64bitPatching.mspx) to prevent this from being possible: trying to modify the syscall table results in an instant blue screen. It's going to be very difficult on those platforms. [This Sony DRM software](https://web.archive.org/web/20190424203808/https://blogs.technet.microsoft.com/markrussinovich/2005/10/31/sony-rootkits-and-digital-rights-management-gone-too-far/) is an example of a rootkit that tries to do something similar (and has several serious problems).
Don't try to stop it from being killed - you're not going to manage it. Instead, make it regularly call home to a webservice. When the webservice notices a client "going silent" it can ping the machine to see if it's just a reboot issue, and send an email to a manager (or whoever) to discipline whoever has killed the process.
How do I hide a process in Task Manager in C#?
[ "", "c#", ".net", "taskmanager", "" ]
How do I find out where the source file for a given Python module is installed? Is the method different on Windows than on Linux? I'm trying to look for the source of the `datetime` module in particular, but I'm interested in a more general answer as well.
For a pure python module you can find the source by looking at `themodule.__file__`. The datetime module, however, is written in C, and therefore `datetime.__file__` points to a .so file (there is no `datetime.__file__` on Windows), and therefore, you can't see the source. If you download a python source tarball and extract it, the modules' code can be found in the **Modules** subdirectory. For example, if you want to find the datetime code for python 2.6, you can look at ``` Python-2.6/Modules/datetimemodule.c ``` You can also find the latest version of this file on github on the web at <https://github.com/python/cpython/blob/main/Modules/_datetimemodule.c>
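A quick illustration of the `__file__` check — the module choices below are arbitrary, and note that on CPython 3 `datetime` gained a thin pure-Python wrapper, so its behaviour differs from the Python 2 description above:

```python
import os
import json

# Pure-Python stdlib modules report the source file they were loaded from:
print(os.__file__)     # e.g. /usr/lib/python3.x/os.py
print(json.__file__)   # points into the json package

# Built-in C modules may have no __file__ at all:
import _ast
print(getattr(_ast, "__file__", "<built-in: no Python source>"))
```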
Running `python -v` from the command line should tell you what is being imported and from where. This works for me on Windows and Mac OS X. ``` C:\>python -v # installing zipimport hook import zipimport # builtin # installed zipimport hook # C:\Python24\lib\site.pyc has bad mtime import site # from C:\Python24\lib\site.py # wrote C:\Python24\lib\site.pyc # C:\Python24\lib\os.pyc has bad mtime import os # from C:\Python24\lib\os.py # wrote C:\Python24\lib\os.pyc import nt # builtin # C:\Python24\lib\ntpath.pyc has bad mtime ... ``` I'm not sure what those bad mtime's are on my install!
How do I find the location of Python module sources?
[ "", "python", "module", "" ]
I'm looking to get data such as Size/Capacity, Serial No, Model No, Heads Sectors, Manufacturer and possibly SMART data.
You can use WMI calls to access info about the hard disks. ``` // Requires using System.Management; and a reference to System.Management.dll ManagementObject disk = new ManagementObject("win32_logicaldisk.deviceid=\"c:\""); disk.Get(); Console.WriteLine("Logical Disk Size = " + disk["Size"] + " bytes"); Console.WriteLine("Logical Disk FreeSpace = " + disk["FreeSpace"] + " bytes"); ```
You should use the [System.Management](http://msdn.microsoft.com/en-us/library/system.management.aspx) namespace: ``` System.Management.ManagementObjectSearcher ms = new System.Management.ManagementObjectSearcher("SELECT * FROM Win32_DiskDrive"); foreach (ManagementObject mo in ms.Get()) { System.Console.Write(mo["Model"]); } ``` For details on the members of the Win32\_DiskDrive class, check out: <http://msdn.microsoft.com/en-us/library/aa394132(VS.85).aspx>
using c# how can I extract information about the hard drives present on the local machine
[ "", "c#", "hard-drive", "" ]
I have been trying to parse Java exceptions that appear in a log for some code I'm working with. My question is, do you parse the exception trace from the top down, or the bottom up? It looks something like this: ``` ERROR [main]</b> Nov/04 11:03:19,440 [localhost].[/BookmarksPortlet].[] - Exception sending context... org.springframework.beans.factory.BeanCreationException: Error creating bean...: Cannot Resolve reference...: Error creating bean... nested exception... nested exception is org.hibernate.HibernateException: Dialect class not found: org.hibernate.dialect.Oracle10gDialect Caused by: ... [similar exceptions and nested exceptions] ... at [start of stack trace] ``` Something like that. Obviously, I'm not looking for the answer to this specific exception, but how do you go about parsing an exception trace like this? Do you start at the top level error, or do you start at the inner most error (under the "caused by" clauses)? The problem is more difficult for me because I'm not working with code I wrote. I'm editing the XML configurations, so I'm not really even looking the Java code. In my own code, I would recognize locations in the trace and would know what sort of things to look for. So how do you approach an exception like this in general?
In your particular example, there's a class missing. As soon as you see an error like that, you know what needs fixing (either correcting the class name, or updating the classpath so that the class can be found). In general, though, I look from my code toward the generated code until I find the error. If I get a NullPointerException, for example, I check to see if it's being caused by one of my classes. If it's a missing class, though, I won't find anything wrong with my own classes, so I'll start at the other end of the stack trace and look for a recognizable error.
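If you end up automating that bottom-up reading, `Throwable.getCause()` walks the same chain that the `Caused by:` lines print — a sketch with made-up exceptions, not the actual Spring/Hibernate classes from the log:

```java
public class RootCause {
    static Throwable rootCauseOf(Throwable t) {
        // Follow the chain of nested exceptions to the deepest one,
        // which corresponds to the last "Caused by:" in a log.
        while (t.getCause() != null) {
            t = t.getCause();
        }
        return t;
    }

    public static void main(String[] args) {
        Exception inner = new IllegalStateException("Dialect class not found");
        Exception outer = new RuntimeException("Error creating bean", inner);
        System.out.println("root cause: " + rootCauseOf(outer).getMessage());
    }
}
```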
This stuff is a little hard to explain, but my first step is nearly always starting from the top and skimming down until I see the familiar `com.mycompany.myproject`. Given the line number attached to that you have a place to work from in your own code, which is often a good start. **Edit:** But, re-reading your question you say it's not your code.. so this may not be a useful answer at all..
How do I approach debugging starting from a Java exception log entry?
[ "", "java", "debugging", "exception", "" ]
I have a C++ library that provides various classes for managing data. I have the source code for the library. I want to extend the C++ API to support C function calls so that the library can be used with C code and C++ code at the same time. I'm using GNU tool chain (gcc, glibc, etc), so language and architecture support are not an issue. Are there any reasons why this is **technically** not possible? Are there any **gotcha's** that I need to watch out for? Are there resources, example code and/or documentation available regarding this? --- Some other things that I have found out: 1. Use the following to wrap your C++ headers that need to be used by C code. ``` #ifdef __cplusplus extern "C" { #endif // // Code goes here ... // #ifdef __cplusplus } // extern "C" #endif ``` 2. Keep "real" C++ interfaces in separate header files that are not included by C. Think [PIMPL principle](http://en.wikipedia.org/wiki/Private_class_data_pattern) here. Using `#ifndef __cplusplus #error` stuff helps here to detect any craziness. 3. Careful of C++ identifiers as names in C code 4. Enums varying in size between C and C++ compilers. Probably not an issue if you're using GNU tool chain, but still, be careful. 5. For structs follow the following form so that C does not get confused. ``` typedef struct X { ... } X ``` 6. Then use pointers for passing around C++ objects, they just have to be declared in C as struct X where X is the C++ object. All of this is courtesy of a friend who's a wizard at C++.
Yes, this is certainly possible. You will need to write an interface layer in C++ that declares functions with `extern "C"`: ``` extern "C" int foo(char *bar) { return realFoo(std::string(bar)); } ``` Then, you will call `foo()` from your C module, which will pass the call on to the `realFoo()` function which is implemented in C++. If you need to expose a full C++ class with data members and methods, then you may need to do more work than this simple function example.
C++ FAQ Lite: ["How to mix C and C++ code"](https://isocpp.org/wiki/faq/mixing-c-and-cpp). Some gotchas are described in answers to these questions: * [32.8] How can I pass an object of a C++ class to/from a C function? * [32.9] Can my C function directly access data in an object of a C++ class?
Using C++ library in C code
[ "", "c++", "c", "gcc", "glibc", "" ]
I have what is essentially a jagged array of name value pairs - I need to generate a set of unique name values from this. The jagged array is approx 86,000 x 11 values. It does not matter to me what way I have to store a name value pair (a single string "name=value" or a specialised class for example KeyValuePair). **Additional Info:** There are 40 distinct names and a larger number of distinct values - probably in the region of 10,000 values. I am using C# and .NET 2.0 (and the performance is so poor I am thinking that it may be better to push my entire jagged array into a sql database and do a select distinct from there). Below is the current code I'm using: ``` List<List<KeyValuePair<string,string>>> vehicleList = retriever.GetVehicles(); this.statsLabel.Text = "Unique Vehicles: " + vehicleList.Count; Dictionary<KeyValuePair<string, string>, int> uniqueProperties = new Dictionary<KeyValuePair<string, string>, int>(); foreach (List<KeyValuePair<string, string>> vehicle in vehicleList) { foreach (KeyValuePair<string, string> property in vehicle) { if (!uniqueProperties.ContainsKey(property)) { uniqueProperties.Add(property, 0); } } } this.statsLabel.Text += "\rUnique Properties: " + uniqueProperties.Count; ```
**I have it running in 0.34 seconds** down from 9+ minutes. The problem is when comparing the KeyValuePair structs. I worked around it by writing a comparer object, and passing an instance of it to the Dictionary. From what I can determine, the KeyValuePair.GetHashCode() returns the hashcode of its `Key` object (in this example the least unique object). As the dictionary adds (and checks existence of) each item, it uses both Equals and GetHashCode functions, but has to rely on the Equals function when the hashcode is less unique. By providing a more unique GetHashCode function, it exercises the Equals function far less often. I also optimised the Equals function to compare the more unique Values before the less unique Keys. 86,000 \* 11 items with 10,000 unique properties runs in 0.34 seconds using the comparer object below (without the comparer object it takes 9 minutes 22 seconds). Hope this helps :) ``` class StringPairComparer : IEqualityComparer<KeyValuePair<string, string>> { public bool Equals(KeyValuePair<string, string> x, KeyValuePair<string, string> y) { return x.Value == y.Value && x.Key == y.Key; } public int GetHashCode(KeyValuePair<string, string> obj) { return (obj.Key + obj.Value).GetHashCode(); } } ``` **EDIT**: If it was just one string (instead of a KeyValuePair, where string = Name+Value) it would be approx twice as fast. It's a nice interesting problem, and I have spent *faaaaaar too much time on it* (I learned quite a bit though)
If you don't need any specific correlation between each key/value pair and the unique values you're generating, you could just use a GUID? I'm assuming the problem is that your current 'Key' isn't unique in this jagged array. ``` Dictionary<System.Guid, KeyValuePair<string, string>> myDict = new Dictionary<Guid, KeyValuePair<string, string>>(); foreach of your key values in their current format myDict.Add(System.Guid.NewGuid(), new KeyValuePair<string, string>(yourKey, yourvalue)) ``` Sounds like it would store what you need but I don't know how you would pull data back from this as there would be no semantic relationship between the generated Guid & what you originally had... Can you provide any more info in your question?
what is the fastest way to generate a unique set in .net 2
[ "", "c#", ".net", "performance", "collections", "" ]
Is there a collection (BCL or other) that has the following characteristics: Sends event if collection is changed AND sends event if any of the elements in the collection sends a `PropertyChanged` event. Sort of an `ObservableCollection<T>` where `T: INotifyPropertyChanged` and the collection is also monitoring the elements for changes. I could wrap an observable collection my self and do the event subscribe/unsubscribe when elements in the collection are added/removed but I was just wondering if any existing collections did this already?
Made a quick implementation myself: ``` public class ObservableCollectionEx<T> : ObservableCollection<T> where T : INotifyPropertyChanged { protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e) { Unsubscribe(e.OldItems); Subscribe(e.NewItems); base.OnCollectionChanged(e); } protected override void ClearItems() { foreach(T element in this) element.PropertyChanged -= ContainedElementChanged; base.ClearItems(); } private void Subscribe(IList iList) { if (iList != null) { foreach (T element in iList) element.PropertyChanged += ContainedElementChanged; } } private void Unsubscribe(IList iList) { if (iList != null) { foreach (T element in iList) element.PropertyChanged -= ContainedElementChanged; } } private void ContainedElementChanged(object sender, PropertyChangedEventArgs e) { OnPropertyChanged(e); } } ``` Admittedly, it would be kind of confusing and misleading to have the PropertyChanged fire on the collection when the property that actually changed is on a contained element, but it would fit my specific purpose. It could be extended with a new event that is fired instead inside ContainedElementChanged. Thoughts? EDIT: Should note that the BCL ObservableCollection only exposes the INotifyPropertyChanged interface through an explicit implementation so you would need to provide a cast in order to attach to the event like so: ``` ObservableCollectionEx<Element> collection = new ObservableCollectionEx<Element>(); ((INotifyPropertyChanged)collection).PropertyChanged += (x,y) => ReactToChange(); ``` EDIT2: Added handling of ClearItems, thanks Josh. EDIT3: Added a correct unsubscribe for PropertyChanged, thanks Mark. EDIT4: Wow, this is really learn-as-you-go :). KP noted that the event was fired with the collection as sender and not with the element when a contained element changes. He suggested declaring a PropertyChanged event on the class marked with *new*. 
This would have a few issues which I'll try to illustrate with the sample below: ``` // work on original instance ObservableCollection<TestObject> col = new ObservableCollectionEx<TestObject>(); ((INotifyPropertyChanged)col).PropertyChanged += (s, e) => { Trace.WriteLine("Changed " + e.PropertyName); }; var test = new TestObject(); col.Add(test); // no event raised test.Info = "NewValue"; //Info property changed raised // working on explicit instance ObservableCollectionEx<TestObject> col = new ObservableCollectionEx<TestObject>(); col.PropertyChanged += (s, e) => { Trace.WriteLine("Changed " + e.PropertyName); }; var test = new TestObject(); col.Add(test); // Count and Item [] property changed raised test.Info = "NewValue"; //no event raised ``` You can see from the sample that 'overriding' the event has the side effect that you need to be extremely careful of which type of variable you use when subscribing to the event since that dictates which events you receive.
@soren.enemaerke: I would have made this comment on your answer post, but I can't (I don't know why, maybe because I don't have many rep points). Anyway, I just thought that I'd mention that in your code you posted I don't think that the Unsubscribe would work correctly because it is creating a new lambda inline and then trying to remove the event handler for it. I would change the add/remove event handler lines to something like: ``` element.PropertyChanged += ContainedElementChanged; ``` and ``` element.PropertyChanged -= ContainedElementChanged; ``` And then change the ContainedElementChanged method signature to: ``` private void ContainedElementChanged(object sender, PropertyChangedEventArgs e) ``` This would recognise that the remove is for the same handler as the add and then remove it correctly. Hope this helps somebody :)
ObservableCollection that also monitors changes on the elements in the collection
[ "c#", "collections" ]
I have an `ArrayList<String>`, and I want to remove repeated strings from it. How can I do this?
If you don't want duplicates in a `Collection`, you should consider why you're using a `Collection` that allows duplicates. The easiest way to remove repeated elements is to add the contents to a `Set` (which will not allow duplicates) and then add the `Set` back to the `ArrayList`: ``` Set<String> set = new HashSet<>(yourList); yourList.clear(); yourList.addAll(set); ``` Of course, this destroys the ordering of the elements in the `ArrayList`.
Although converting the `ArrayList` to a `HashSet` effectively removes duplicates, if you need to preserve insertion order, I'd rather suggest you use this variant: ``` // list is some List of Strings Set<String> s = new LinkedHashSet<>(list); ``` Then, if you need to get back a `List` reference, you can use again the conversion constructor.
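To make the round trip concrete, here is a small self-contained sketch of the order-preserving approach described above (the `Dedup` class and `dedupe` method names are my own, for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class Dedup {
    // Removes duplicates while keeping the first-seen order of the elements.
    public static List<String> dedupe(List<String> list) {
        // LinkedHashSet drops duplicates but remembers insertion order;
        // the ArrayList conversion constructor turns it back into a List.
        return new ArrayList<>(new LinkedHashSet<>(list));
    }

    public static void main(String[] args) {
        System.out.println(dedupe(Arrays.asList("b", "a", "b", "c", "a")));
        // [b, a, c]
    }
}
```

Unlike the plain `HashSet` round trip, `"b"` stays ahead of `"a"` here because the insertion order survives the conversion.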
How do I remove repeated elements from ArrayList?
[ "java", "list", "collections", "arraylist", "duplicates" ]
Here's what [MSDN has to say under *When to Use Static Classes*](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/static-classes-and-static-class-members): > ``` > static class CompanyInfo > { > public static string GetCompanyName() { return "CompanyName"; } > public static string GetCompanyAddress() { return "CompanyAddress"; } > //... > } > ``` > > Use a static class as a unit of > organization for methods not > associated with particular objects. > Also, a static class can make your > implementation simpler and faster > because you do not have to create an > object in order to call its methods. > It is useful to organize the methods > inside the class in a meaningful way, > such as the methods of the Math class > in the System namespace. To me, that example doesn't seem to cover very many possible usage scenarios for static classes. In the past I've used static classes for stateless suites of related functions, but that's about it. So, under what circumstances should (and shouldn't) a class be declared static?
I wrote my thoughts on static classes in an earlier Stack Overflow answer: *[Class with single method -- best approach?](https://stackoverflow.com/questions/205689/class-with-single-method-best-approach#206481)* I used to love utility classes filled up with static methods. They made a great consolidation of helper methods that would otherwise lie around causing redundancy and maintenance hell. They're very easy to use, no instantiation, no disposal, just fire'n'forget. I guess this was my first unwitting attempt at creating a service-oriented architecture - lots of stateless services that just did their job and nothing else. As a system grows however, dragons be coming. **Polymorphism** Say we have the method UtilityClass.SomeMethod that happily buzzes along. Suddenly we need to change the functionality slightly. Most of the functionality is the same, but we have to change a couple of parts nonetheless. Had it not been a static method, we could make a derived class and change the method contents as needed. As it's a static method, we can't. Sure, if we just need to add functionality either before or after the old method, we can create a new class and call the old one inside of it - but that's just gross. **Interface woes** Static methods cannot be defined through interfaces for logical reasons. And since we can't override static methods, static classes are useless when we need to pass them around by their interface. This renders us unable to use static classes as part of a strategy pattern. We might patch some issues up by [passing delegates instead of interfaces](https://learn.microsoft.com/archive/blogs/kirillosenkov/how-to-override-static-methods). **Testing** This basically goes hand in hand with the interface woes mentioned above. As our ability to interchange implementations is very limited, we'll also have trouble replacing production code with test code. 
Again, we can wrap them up, but it'll require us to change large parts of our code just to be able to accept wrappers instead of the actual objects. **Fosters blobs** As static methods are usually used as utility methods and utility methods usually will have different purposes, we'll quickly end up with a large class filled up with non-coherent functionality - ideally, each class should have a single purpose within the system. I'd much rather have five times the classes as long as their purposes are well defined. **Parameter creep** To begin with, that little cute and innocent static method might take a single parameter. As functionality grows, a couple of new parameters are added. Soon further parameters are added that are optional, so we create overloads of the method (or just add default values, in languages that support them). Before long, we have a method that takes 10 parameters. Only the first three are really required; parameters 4-7 are optional. But if parameter 6 is specified, 7-9 are required to be filled in as well... Had we created a class with the single purpose of doing what this static method did, we could solve this by taking in the required parameters in the constructor, and allowing the user to set optional values through properties, or methods to set multiple interdependent values at the same time. Also, if a method has grown to this amount of complexity, it most likely needs to be in its own class anyway. **Demanding consumers to create an instance of classes for no reason** One of the most common arguments is: Why demand that consumers of our class create an instance for invoking this single method, while having no use for the instance afterwards? Creating an instance of a class is a very very cheap operation in most languages, so speed is not an issue. Adding an extra line of code to the consumer is a low cost for laying the foundation of a much more maintainable solution in the future. 
And finally, if you want to avoid creating instances, simply create a singleton wrapper of your class that allows for easy reuse - although this does require that your class be stateless. If it's not stateless, you can still create static wrapper methods that handle everything, while still giving you all the benefits in the long run. Finally, you could also make a class that hides the instantiation as if it was a singleton: MyWrapper.Instance is a property that just returns `new MyClass();` **Only a Sith deals in absolutes** Of course, there are exceptions to my dislike of static methods. True utility classes that do not pose any risk to bloat are excellent cases for static methods - System.Convert as an example. If your project is a one-off with no requirements for future maintenance, the overall architecture really isn't very important - static or non static, doesn't really matter - development speed does, however. **Standards, standards, standards!** Using instance methods does not inhibit you from also using static methods, and vice versa. As long as there's reasoning behind the differentiation and it's standardised. There's nothing worse than looking over a business layer sprawling with different implementation methods.
When deciding whether to make a class static or non-static you need to look at what information you are trying to represent. This entails a more '**bottom-up**' style of programming where you focus on the data you are representing first. Is the class you are writing a real-world object like a rock, or a chair? These things are physical and have physical attributes such as color, weight which tells you that you may want to instantiate multiple objects with different properties. I may want a black chair AND a red chair at the same time. If you ever need two configurations at the same time then you instantly know you will want to instantiate it as an object so each object can be unique and exist at the same time. On the other end, static functions tend to lend more to actions which do not belong to a real-world object or an object that you can easily represent. Remember that C#'s predecessors are C++ and C where you can just define global functions that do not exist in a class. This lends more to '**top-down**' programming. Static methods can be used for these cases where it doesn't make sense that an 'object' performs the task. By forcing you to use classes this just makes it easier to group related functionality which helps you create more maintainable code. Most classes can be represented by either static or non-static, but when you are in doubt just go back to your OOP roots and try to think about what you are representing. Is this an object that is performing an action (a car that can speed up, slow down, turn) or something more abstract (like displaying output). Get in touch with your inner OOP and you can never go wrong!
When to use static classes in C#
[ "c#", "class", "static" ]
``` template <class M, class A> class C { std::list<M> m_List; ... } ``` Is the above code possible? I would like to be able to do something similar. Why I ask is that i get the following error: ``` Error 1 error C2079: 'std::_List_nod<_Ty,_Alloc>::_Node::_Myval' uses undefined class 'M' C:\Program Files\Microsoft Visual Studio 9.0\VC\include\list 41 ```
My guess: you forward declared class M somewhere, and only declared it fully after the template instantiation. My hint: give your formal template arguments a different name than the actual ones. (i.e. class M) ``` // template definition file #include <list> template< class aM, class aT > class C { std::list<aM> m_List; ... }; ``` Example of a bad forward declaration, resulting in the mentioned error: ``` // bad template usage file causing the aforementioned error class M; ... C<M,OtherClass> c; // this would result in your error class M { double data; }; ``` Example of proper declaration, not resulting in the error: ``` // better template usage file class M { double data; }; // or #include the class header ... C<M,OtherClass> c; // this would have to compile ```
Yes. This is very common. As xtofl mentioned, a forward declaration of your parameter would cause a problem at the time of template instantiation, which looks like what the error message is hinting at.
Is it possible to have templated classes within a template class?
[ "c++", "oop", "visual-c++", "templates" ]
I am new to Java and am trying to run a program using Eclipse. But I have no idea how to get the command prompt running within Eclipse... I did some online research and couldn't find anything conclusive! ### Update: I'm not using an applet. It's a normal Java program trying to read a line from the command prompt. I'm trying to do system programming.
Check out this lesson plan on how to get started with Eclipse programs: [Lesson](http://www.ics.uci.edu/~thornton/ics22/LabManual/Lab0/) Specifically, see this image: ![Picture of Eclipse Console with Standard Input](https://www.ics.uci.edu/~thornton/ics22/LabManual/Lab0/Eclipse-ConsoleTab-Stopped.png) If the Console tab is not visible in your Eclipse, go to Window -> Show View -> Console in the menu bar.
If you downloaded the Eclipse/Java package, and you wrote a Java program in Eclipse in a project, just click compile and run, it will run in the output window in Eclipse.
Open command prompt using Eclipse
[ "java", "eclipse" ]
I have a database with a table which is full of conditions and error messages for checking another database. I want to run a loop such that each of these conditions is checked against all the tables in the second database and generate a report which gives the errors. Is this possible in MS Access? For example, the querycrit table: ``` id query error 1 speed<25 and speed>56 speed above limit 2 dist<56 or dist >78 dist within limit ``` I have more than 400 queries like this of different variables. The table against which I am running the queries is the records table: ``` id speed dist accce decele aaa bbb ccc 1 33 34 44 33 33 33 33 2 45 44 55 55 55 22 23 ``` regards ttk
Here is some more sample code. It illustrates the use of two different types of recordsets. You may wish to read [VBA Traps: Working with Recordsets](http://allenbrowne.com/ser-29.html) by Allen Browne and [List of reserved words in Access 2002 and in later versions of Access](http://support.microsoft.com/kb/286335). ``` Sub RunQueryChecks() ' Sub header added so the procedure compiles as shown Dim rs As DAO.Recordset Dim rs2 As ADODB.Recordset Dim tdf As DAO.TableDef Dim strSQL As String Set rs = CurrentDb.OpenRecordset("querycrit") Set rs2 = CreateObject("ADODB.Recordset") rs2.ActiveConnection = CurrentProject.Connection For Each tdf In CurrentDb.TableDefs 'EDIT: TableDefs includes Microsoft system tables and these should 'never be tampered with. They all begin with MSys, so we can leave 'them out of the loop here. LCase makes the comparison 'case-insensitive under the default Option Compare Binary. If LCase(Left(tdf.Name, 4)) <> "msys" And tdf.Name <> "querycrit" Then rs.MoveFirst strSQL = "SELECT * From [" & tdf.Name & "] WHERE " Do While Not rs.EOF On Error Resume Next Debug.Print tdf.Name rs2.Open strSQL & " " & rs![query] If Err.Number = 0 Then On Error GoTo 0 If Not rs2.EOF Then Debug.Print rs![Error] Debug.Print rs2.GetString End If End If Err.Clear rs2.Close rs.MoveNext Loop End If Next End Sub ```
When you say "report", do you mean an Access Report, or would writing to a file or Access Form work? You can create a function or sub in a Module to do this. Open a recordset on your querycrit table and spin through the records dynamically building and running the SQL for the records table. You can write the results of these dynamic queries to a file, or a form, or insert the results into a temp table and drive the Access Report from there.
How to run a loop of queries in access?
[ "sql", "ms-access", "loops" ]
`select max(DELIVERY_TIMESTAMP) from DOCUMENTS;` will return the time that the latest document was delivered. How do I return **the other columns** for the latest document? For example I want `DOC_NAME` for the document that was most recently delivered? I'm not sure how to form the `WHERE` clause.
You have a few options ``` SELECT DOC_NAME FROM DOCUMENTS WHERE DELIVERY_TIMESTAMP IN ( SELECT MAX(DELIVERY_TIMESTAMP) FROM DOCUMENTS ) ``` Or with joins ``` SELECT DOC_NAME FROM DOCUMENTS INNER JOIN ( SELECT MAX(DELIVERY_TIMESTAMP) AS MAX_DELIVERY_TIMESTAMP FROM DOCUMENTS ) AS M ON M.MAX_DELIVERY_TIMESTAMP = DOCUMENTS.DELIVERY_TIMESTAMP ``` It gets more complicated if there are duplicates in a timestamp or you need multiple columns in your "max" criteria (because `MAX()` is only over the one column for all rows) This is where the `JOIN` option is the only option available, because a construction like this is not available (say multiple orders with identical timestamp): ``` SELECT DOC_NAME FROM DOCUMENTS WHERE (DELIVERY_TIMESTAMP, ORDERID) IN ( SELECT TOP 1 DELIVERY_TIMESTAMP, ORDERID FROM DOCUMENTS ORDER BY DELIVERY_TIMESTAMP DESC, ORDERID DESC ) ``` Where you in fact, would need to do: ``` SELECT DOC_NAME FROM DOCUMENTS INNER JOIN ( SELECT TOP 1 DELIVERY_TIMESTAMP, ORDERID FROM DOCUMENTS ORDER BY DELIVERY_TIMESTAMP DESC, ORDERID DESC ) AS M ON M.DELIVERY_TIMESTAMP = DOCUMENTS.DELIVERY_TIMESTAMP AND M.ORDERID = DOCUMENTS.ORDERID ```
``` SELECT DELIVERY_TIMESTAMP, OTHER_COLUMN FROM DOCUMENTS WHERE DELIVERY_TIMESTAMP = (SELECT MAX(DELIVERY_TIMESTAMP) FROM DOCUMENTS) ```
SQL Searching by MAX()
[ "sql" ]
I have a C++ app in VS2005 and import a VB DLL. IntelliSense shows me all the symbols in the DLL as expected but it also shows all (or nearly all) of them again with an underscore prefix (no @s in them though). Why is this? What are the differences between the underscored items and the normal items?
Assuming you're talking VB6, the leading underscore version `_Klass` is the VB-generated default interface for the class `Klass`. This site has a nice explanation: <http://www.15seconds.com/issue/040721.htm>
In (some) C# coding standards the underscore prefix denotes a private variable; that might explain it... is it VB or VB.NET?
What are underscored symbols in a VB DLL?
[ "c++", "dll", "com", "vb6" ]
I've a terrible memory. Whenever I do a CONNECT BY query in Oracle - and I do mean *every* time - I have to think hard and usually through trial and error work out on which argument the PRIOR should go. I don't know why I don't remember - but I don't. Does anyone have a handy memory mnemonic so I always remember? For example: To go **down** a tree from a node - obviously I had to look this up :) - you do something like: ``` select * from node connect by prior node_id = parent_node_id start with node_id = 1 ``` So - I start with a `node_id` of 1 (the top of the branch) and the query looks for all nodes where the `parent_node_id` = 1 and then iterates down to the bottom of the tree. To go **up** the tree the prior goes on the parent: ``` select * from node connect by node_id = prior parent_node_id start with node_id = 10 ``` So starting somewhere down a branch (`node_id = 10` in this case), Oracle first gets the node whose `node_id` matches the `parent_node_id` of the node with `node_id` 10, and so on up the tree. **EDIT**: I **still** get this wrong so I thought I'd add a clarifying edit to expand on the accepted answer - here's how I remember it now: ``` select * from node connect by prior node_id = parent_node_id start with node_id = 1 ``` The 'English language' version of this SQL I now read as... > In NODE, starting with the row in > which `node_id = 1`, the next row > selected has its `parent_node_id` > equal to `node_id` from the previous > (prior) row. **EDIT**: Quassnoi makes a great point - the order you write the SQL makes things a lot easier. ``` select * from node start with node_id = 1 connect by parent_node_id = prior node_id ``` This feels a lot clearer to me - the "start with" gives the first row selected and the "connect by" gives the next row(s) - in this case the children of node_id = 1.
Think about the order in which the records are going to be selected: the link-back column on each record must match the link-forward column on the PRIOR record selected.
I always try to put the expressions in `JOIN`'s in the following order: ``` joined.column = leading.column ``` This query: ``` SELECT t.value, d.name FROM transactions t JOIN dimensions d ON d.id = t.dimension ``` can be treated either like "for each transaction, find the corresponding dimension name", or "for each dimension, find all corresponding transaction values". So, if I search for a given transaction, I put the expressions in the following order: ``` SELECT t.value, d.name FROM transactions t JOIN dimensions d ON d.id = t.dimension WHERE t.id = :myid ``` And if I search for a dimension, then: ``` SELECT t.value, d.name FROM dimensions d JOIN transactions t ON t.dimension = d.id WHERE d.id = :otherid ``` The former query will most probably use index scans first on `(t.id)`, then on `(d.id)`, while the latter one will use index scans first on `(d.id)`, then on `(t.dimension)`, and you can easily see it in the query itself: the searched fields are at left. The driving and driven tables may be not so obvious in a `JOIN`, but it's as clear as a bell for a `CONNECT BY` query: the `PRIOR` row is driving, the non-`PRIOR` is driven. That's why this query: ``` SELECT * FROM hierarchy START WITH id = :root CONNECT BY parent = PRIOR id ``` means "find all rows whose `parent` is a given `id`". This query builds a hierarchy. This can be treated like this: ``` connect_by(row) { add_to_rowset(row); /* parent = PRIOR id */ /* PRIOR id is an rvalue */ index_on_parent.searchKey = row->id; foreach child_row in index_on_parent.search { connect_by(child_row); } } ``` And this query: ``` SELECT * FROM hierarchy START WITH id = :leaf CONNECT BY id = PRIOR parent ``` means "find the rows whose `id` is a given `parent`". This query builds an ancestry chain. **Always put `PRIOR` in the right part of the expression.** **Think of `PRIOR column` as of a constant all your rows will be searched for.**
How do I remember which way round PRIOR should go in CONNECT BY queries
[ "sql", "oracle" ]
I've got a junk directory where I toss downloads, one-off projects, email drafts, and other various things that might be useful for a few days but don't need to be saved forever. To stop this directory from taking over my machine, I wrote a program that will delete all files older than a specified number of days and logs some statistics about the number of files deleted and their size just for fun. I noticed that a few project folders were living way longer than they should, so I started to investigate. In particular, it seemed that folders for projects in which I had used SVN were sticking around. It turns out that the read-only files in the .svn directories are not being deleted. I just did a simple test on a read-only file and discovered that `System.IO.File.Delete` and `System.IO.FileInfo.Delete` will not delete a read-only file. I don't care about protecting files in this particular directory; if something important is in there it's in the wrong place. Is there a .NET class that can delete read-only files, or am I going to have to check for read-only attributes and strip them?
According to [File.Delete's documentation](http://msdn.microsoft.com/en-us/library/system.io.file.delete.aspx), you'll have to strip the read-only attribute. You can set the file's attributes using [File.SetAttributes()](http://msdn.microsoft.com/en-us/library/system.io.file.setattributes.aspx). ``` using System.IO; File.SetAttributes(filePath, FileAttributes.Normal); File.Delete(filePath); ```
According to [File.Delete's documentation](http://msdn.microsoft.com/en-us/library/system.io.file.delete.aspx), you'll have to strip the read-only attribute. You can set the file's attributes using [File.SetAttributes()](http://msdn.microsoft.com/en-us/library/system.io.file.setattributes.aspx).
How do I delete a read-only file?
[ "c#", ".net", "file" ]
Is there a way in C# to: 1. Get all the properties of a class that have attributes on them (versus having to loop through all properties and then check if the attribute exists)? 2. Get all public, internal, and protected properties but NOT private properties? I can't find a way of doing that. I can only do this: ``` PropertyInfo[] props = type.GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic); ```
I don't believe there's a way to do either of these. Just how many types do you have to reflect over, though? Is it really a bottleneck? Are you able to cache the results to avoid having to do it more than once per type?
There isn't really a way to do it any *quicker* - but what you can do is do it less often by caching the data. A generic utility class can be a handy way of doing this, for example: ``` static class PropertyCache<T> { private static SomeCacheType cache; public static SomeCacheType Cache { get { if (cache == null) Build(); return cache; } } static void Build() { // populate "cache" } } ``` Then `PropertyCache<Foo>.Cache` has the data just for `Foo`, etc - with lazy population. You could also use a static constructor if you prefer.
Reflection optimizations with attributes
[ "c#", "reflection", "attributes" ]
I was wondering if I could pop up `JOptionPane`s or other Swing components from within a browser using JSP.
You could embed an applet, but I don't think that's what you want. Swing is for desktop apps; JSP is for web pages. If you want components, try looking into JSF or one of the many Ajax JavaScript frameworks like Prototype.
You may also want to consider GWT, which enables you to develop a web interface in Java code (the Java code is converted to HTML & JavaScript by the GWT compiler). Although you don't program to the Swing API *directly* when writing GWT applications, the GWT API is very similar in style to Swing programming.
Is it possible to display Swing components in a JSP?
[ "java", "swing", "jsp" ]
I have a class derived from `CTreeCtrl`. In `OnCreate()` I replace the default `CToolTipCtrl` object with a custom one: ``` int CMyTreeCtrl::OnCreate(LPCREATESTRUCT lpCreateStruct) { if (CTreeCtrl::OnCreate(lpCreateStruct) == -1) return -1; // Replace tool tip with our own which will // ask us for the text to display with a TTN_NEEDTEXT message CTooltipManager::CreateToolTip(m_pToolTip, this, AFX_TOOLTIP_TYPE_DEFAULT); m_pToolTip->AddTool(this, LPSTR_TEXTCALLBACK); SetToolTips(m_pToolTip); // Update: Added these two lines, which don't help either m_pToolTip->Activate(TRUE); EnableToolTips(TRUE); return 0; } ``` My message handler looks like this: ``` ON_NOTIFY_EX(TTN_NEEDTEXT, 0, &CMyTreeCtrl::OnTtnNeedText) ``` However I never receive a `TTN_NEEDTEXT` message. I had a look with Spy++ and it also looks like this message never gets sent. What could be the problem here? ## Update I'm not sure whether this is relevant: The `CTreeCtrl`'s parent window is of type `CDockablePane`. Could there be some extra work needed for this to work?
Finally! I (partially) solved it: It looks like the CDockablePane parent window indeed caused this problem... First I removed all the tooltip-specific code from the CTreeCtrl-derived class. Everything is done in the parent pane window. Then I edited the parent window's `OnCreate()` method: ``` int CMyPane::OnCreate(LPCREATESTRUCT lpCreateStruct) { if (CDockablePane::OnCreate(lpCreateStruct) == -1) return -1; const DWORD dwStyle = WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN | TVS_CHECKBOXES | TVS_DISABLEDRAGDROP | TVS_HASBUTTONS | TVS_HASLINES | TVS_LINESATROOT | TVS_INFOTIP | TVS_NOHSCROLL | TVS_SHOWSELALWAYS; // TREECTRL_ID is a custom member constant, set to 1 if(!m_tree.Create(dwStyle, m_treeRect, this, TREECTRL_ID ) ) { TRACE0("Failed to create trace tree list control.\n"); return -1; } // m_pToolTip is a protected member of CDockablePane m_pToolTip->AddTool(&m_tree, LPSTR_TEXTCALLBACK, &m_treeRect, TREECTRL_ID); m_tree.SetToolTips(m_pToolTip); return 0; } ``` Unfortunately we cannot simply call `AddTool()` with fewer parameters because the base class will complain in the form of an `ASSERT` about a `uFlag` member if there is no tool ID set. And since we need to set the ID, we also need to set a rectangle. I created a `CRect` member and set it to `(0, 0, 10000, 10000)` in the CTor. I have not yet found a working way to change the tool's rect size so this is my very ugly workaround. This is also why I call this solution partial. 
**Update: [I asked a question regarding this.](https://stackoverflow.com/questions/867724/how-to-modify-the-tool-rect-of-a-ctooltipctrl)** Finally there is the handler to get the tooltip info: ``` // Message map entry ON_NOTIFY(TVN_GETINFOTIP, TREECTRL_ID, &CMyPane::OnTvnGetInfoTip) // Handler void CMyPane::OnTvnGetInfoTip(NMHDR *pNMHDR, LRESULT *pResult) { LPNMTVGETINFOTIP pGetInfoTip = reinterpret_cast<LPNMTVGETINFOTIP>(pNMHDR); // This is a CString member m_toolTipText.ReleaseBuffer(); m_toolTipText.Empty(); // Set your text here... pGetInfoTip->pszText = m_toolTipText.GetBuffer(); *pResult = 0; } ```
I believe you still have to enable the tooltip, even though you are replacing the builtin. ``` EnableToolTips(TRUE); ``` Well, since that did not work for you and since no-one more expert has offered any help, here are a few more suggestions from me. Although they are lame, they might get you moving again: * Make sure your OnCreate() routine is actually being executed. * Enable the tool tip BEFORE you replace it. * The code I use to do this looks like this. (I confess I do not understand all the details; I copied it from some sample code, it worked, and so I never looked at it any more.) ``` // Enable the standard tooltip EnableToolTips(TRUE); // Disable the builtin tooltip CToolTipCtrl* pToolTipCtrl = (CToolTipCtrl*)CWnd::FromHandle((HWND)::SendMessage(m_hWnd, LVM_GETTOOLTIPS, 0, 0L)); ```
How do I display custom tooltips in a CTreeCtrl?
[ "c++", "mfc", "tooltip", "mfc-feature-pack" ]
This probably has a simple answer, but I must not have had enough coffee to figure it out on my own: If I had a comma delimited string such as: ``` string list = "Fred,Sam,Mike,Sarah"; ``` How would I get each element, add quotes around it, and stick it back in a string like this: ``` string newList = "'Fred','Sam','Mike','Sarah'"; ``` I'm assuming iterating over each one would be a start, but I got stumped after that. One solution that is ugly: ``` int number = 0; string newList = ""; foreach (string item in list.Split(new char[] {','})) { if (number > 0) { newList = newList + "," + "'" + item + "'"; } else { newList = "'" + item + "'"; } number++; } ```
``` string s = "A,B,C"; string replaced = "'"+s.Replace(",", "','")+"'"; ``` Thanks for the comments, I had missed the external quotes. Of course.. if the source was an empty string, would you want the extra quotes around it or not ? And what if the input was a bunch of whitespaces... ? I mean, to give a 100% complete solution I'd probably ask for a list of unit tests but I hope my gut instinct answered your core question. *Update*: A LINQ-based alternative has also been suggested (with the added benefit of using String.Format and therefore not having to worry about leading/trailing quotes): ``` string list = "Fred,Sam,Mike,Sarah"; string newList = string.Join(",", list.Split(',').Select(x => string.Format("'{0}'", x)).ToList()); ```
Following Jon Skeet's example above, this is what worked for me. I already had a `List<String>` variable called `__messages`, so this is what I did: ``` string sep = String.Join(", ", __messages.Select(x => "'" + x + "'")); ```
In C#: Add Quotes around string in a comma delimited list of strings
[ "c#", "string" ]
I have an application which has to live as a service. I create an object which then spawns off a bunch of threads. If I set the only reference to that object to null, will all the child threads get cleaned up, or will I suffer from a memory leak? Do I have to explicitly terminate all the child threads?
Yes, you need to make sure your other threads stop. The garbage collector is irrelevant to this. You should also do so in an orderly fashion though - don't just abort them. [Here's a pattern in C# for terminating threads co-operatively](http://pobox.com/~skeet/csharp/threads/shutdown.shtml) - it's easy to translate to Java.
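A minimal Java sketch of that co-operative shutdown idea (the `StoppableWorker` name and structure are my own illustration, not the linked article's code):

```java
public class StoppableWorker implements Runnable {
    // volatile so the worker thread sees the flag change promptly
    private volatile boolean stopRequested = false;
    private long unitsOfWork = 0;

    public void requestStop() {
        stopRequested = true;
    }

    public long getUnitsOfWork() {
        return unitsOfWork;
    }

    @Override
    public void run() {
        // Re-check the flag between units of work instead of being aborted
        while (!stopRequested) {
            unitsOfWork++; // stand-in for one real unit of work
        }
    }
}
```

The service's shutdown path would call `requestStop()` on each worker and then `join()` its thread, so every thread exits cleanly before the object graph is released.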
Threads and static references are 'root objects'. They are immune from GCing and anything that can be traced back to them directly or indirectly cannot be collected. Threads will therefore not be collected as long as they are running. Once the run method exits though, the GC can eat up any unreferenced thread objects.
Will setting the only reference to null mean it and its child threads are garbage collected?
[ "", "java", "multithreading", "" ]
My studio has a large codebase that has been developed over 10+ years. The coding standards that we started with were developed with few developers in house and long before we had to worry about any kind of standards related to C++. Recently, we started a small R&D project in house and we updated our coding conventions to be more suitable for our environment. The R&D work is going to be integrated into existing project code. One major problem facing us is that we now have two standards for the two areas of work, and now the code bases will cross. I don't want two standards at the studio, and I'm actually quite happy to move forward with a single standard. (The 'how' of how we got into this situation isn't important -- just that we are and I had hoped that we wouldn't be.) The problem is refactoring existing code. I'm not very keen on having two code bases (one relatively small and one very large) looking different. I am interested in doing some refactoring of one of the existing codebases to make it conform to the other standard. The problem is, the smaller code base is (IMO) the more desirable standard. I started looking around for a tool that could do large-scale refactoring for me. I'm not interested in rearranging and tightening code. I'm interested in changing things like ``` class my_class {} .... class my_class A; ``` to ``` class MyClass {} .... class MyClass A; ``` Basically doing function/variable-level renaming. I'd prefer not to use something like Visual Assist because that will take a long time. I have upwards of 10000 source/header files with hundreds of thousands of lines of code. Using VA one class at a time would be a time killer and not worth the effort. I did run across [Vera](http://www.inspirel.com/vera/) in another post on SO. That seems like it might do the job and do it well. I'd like to know if anyone has specific experience using Vera for the situation that I'm in, or has any other recommendations for tools that might get the job done. 
I think that it's important that this tool actually understands code structure so that we don't wind up just renaming variables in a search/replace manner, because that will lead to subtle bugs if not done carefully. EDIT: While my example shows I'm going from using `_` between names to CamelCase-style notation, it might be more beneficial for us to move the other way. I'm really looking for a generic solution that will help with large-scale renaming. Thanks.
My process would be to rename each time someone touches a given module. Eventually, all modules would be refactored, but the incremental approach would result in less code breakage (assuming you have a complete set of tests ;) )
I've made changes like this using custom scripts. If I can, I use sed. Otherwise I'll use a scripting language with good support for regular expressions. It is a crude hack which is sure to introduce bugs but unless you find a better solution, it is a path forward.
Code standard refactoring on large codebase
[ "", "c++", "refactoring", "" ]
I've been looking for a simple regex for URLs; does anybody have one handy that works well? I didn't find one in the Zend Framework validation classes and have seen several implementations.
I used this on a few projects; I don't believe I've run into issues, but I'm sure it's not exhaustive: ``` $text = preg_replace( '#((https?|ftp)://(\S*?\.\S*?))([\s)\[\]{},;"\':<]|\.\s|$)#i', "'<a href=\"$1\" target=\"_blank\">$3</a>$4'", $text ); ``` Most of the random junk at the end is to deal with situations like `http://domain.example.` in a sentence (to avoid matching the trailing period). I'm sure it could be cleaned up, but since it worked, I've more or less just copied it over from project to project.
Use the `filter_var()` function to validate whether a string is a URL or not: ``` var_dump(filter_var('example.com', FILTER_VALIDATE_URL)); ``` It is bad practice to use regular expressions when not necessary. **EDIT**: Be careful, this solution is not Unicode-safe and not XSS-safe. If you need a complex validation, maybe it's better to look somewhere else.
PHP validation/regex for URL
[ "", "php", "regex", "url", "validation", "" ]
This has happened to me 3 times now, and I am wondering if anyone is having the same problem. I am running Visual Studio 2008 SP1, and hitting SQL Server 2005 Developer Edition. For testing, I use the Server Explorer to browse a database I have already created. For testing I will insert data by hand (right click table -> show table data). I can do this for a week without problems, but sometimes my computer will crash when the stars are aligned. It doesn't hang, it doesn't blue screen; I see the BIOS boot screen 0.2 seconds after I enter some value in a new row that hasn't been saved yet. I have never seen a crash like this, where it reboots instantly. I think it may happen when I enter something that violates a database constraint, but I am not sure yet (need a few more crashes to pinpoint it). Anyone have the same problem? Know of a solution? Too bad VS wasn't written with managed code, eh? :) UPDATE: I can reproduce it by inserting a duplicate primary key, clicking off the row, clicking OK on the database error notification, then trying to update the primary key again. I agree - it definitely sounds like a hardware or driver issue, but am not really sure how to solve this since this has only happened when doing this one particular thing in VS. I am not overclocking either - I am running a normal (single core) 3 GHz Dell.
Just so you know, when your computer just snaps right back to the BIOS boot screen with no blue screen or other crash data, this is called a "[triple fault](http://en.wikipedia.org/wiki/Triple_fault)" Basically, there was an exception (on a hardware level) whose exception handler triggered an exception whose exception handler triggered an exception. This is almost always a hardware issue since operating systems tend to be VERY careful in the double fault handler not to trigger an exception. Needless to say, it could be a software problem, but I would check your RAM, CPU and motherboard for potential issues. I had a dell which was doing this fairly randomly with Vista and a new mobo/CPU from dell solved it perfectly. EDIT: just to note, it can be very hard to convince dell that it is indeed a hardware issue, but stick with it and eventually they'll cave and fix it for you.
[Test your memory](http://www.memtest.org/), it's the most likely cause of your reboots.
Visual Studio 2008 crashes horribly
[ "", "c#", "sql-server", "visual-studio", "visual-studio-2008", "ide", "" ]
Generally, when using the conditional operator, here's the syntax: ``` int x = 6; int y = x == 6 ? 5 : 9; ``` Nothing fancy, pretty straightforward. Now, let's try to use this when assigning a Lambda to a Func type. Let me explain: ``` Func<Order, bool> predicate = id == null ? p => p.EmployeeID == null : p => p.EmployeeID == id; ``` That's the same syntax, and *should* work, right? For some reason it doesn't. The compiler gives this nice cryptic message: > Error 1 Type of conditional expression cannot be determined because there is no implicit conversion between 'lambda expression' and 'lambda expression' I then went ahead and changed the syntax and this way it *did* work: ``` Func<Order, bool> predicate = id == null ? predicate = p => p.EmployeeID == null : predicate = p => p.EmployeeID == id; ``` I'm just curious as to why it doesn't work the first way? (Side note: I ended up not needing this code, as I found out that when comparing an int value against null, you just use object.Equals)
You can convert a lambda expression to a particular target delegate type, but in order to determine the type of the conditional expression, the compiler needs to know the type of each of the second and third operands. While they're both just "lambda expression" there's no conversion from one to the other, so the compiler can't do anything useful. I wouldn't suggest using an assignment, however - a cast is more obvious: ``` Func<Order, bool> predicate = id == null ? (Func<Order, bool>) (p => p.EmployeeID == null) : p => p.EmployeeID == id; ``` Note that you only need to provide it for one operand, so the compiler can perform the conversion from the other lambda expression.
The C# compiler cannot infer the type of the created lambda expression because it processes the ternary first and then the assignment. You could also do: ``` Func<Order, bool> predicate = id == null ? new Func<Order,bool>(p => p.EmployeeID == null) : new Func<Order,bool>(p => p.EmployeeID == id); ``` but that just sucks. You could also try ``` Func<Order, bool> predicate = id == null ? (Order p) => p.EmployeeID == null : (Order p) => p.EmployeeID == id; ```
How can I assign a Func<> conditionally between lambdas using the conditional ternary operator?
[ "", "c#", "lambda", "conditional-operator", "" ]
Is there any good practice related to `dynamic_cast` error handling (except not using it when you don't have to)? I'm wondering how I should go about the NULL and `bad_cast` it can throw. Should I check for both? And if I catch `bad_cast` or detect NULL, I probably can't recover anyway... For now, I'm using assert to check whether `dynamic_cast` returned a non-NULL value. Would you accept this solution in a code review?
If the `dynamic_cast` *should* succeed, it would be good practice to use `boost::polymorphic_downcast` instead, which goes a little something like this: ``` assert(dynamic_cast<T*>(o) == static_cast<T*>(o)); return static_cast<T*>(o); ``` This way, you will detect errors in the debug build while at the same time avoiding the runtime overhead in a release build. If you suspect the cast *might* fail and you want to detect it, use `dynamic_cast` and cast to a reference type. This cast will throw `bad_cast` in case of error, and will take down your program. (This is good if, as you say, you are not going to recover anyway) ``` T& t = dynamic_cast<T&>(o); t.func(); //< Use t here, no extra check required ``` Use `dynamic_cast` to a pointer type only if the 0-pointer makes sense in the context. You might want to use it in an `if` like this: ``` if (T* t = dynamic_cast<T*>(o)) { t->func(); //< Use t here, it is valid } // consider having an else-clause ``` With this last option you need to make sure that the execution path makes sense if the `dynamic_cast` returns 0. To answer your question directly: I would prefer one of the two first alternatives I have given to having an explicit `assert` in the code :)
`bad_cast` is only thrown when casting to a reference type: ``` dynamic_cast< Derived & >(baseclass) ``` NULL is returned when casting to a pointer type: ``` dynamic_cast< Derived * >(&baseclass) ``` So there's never a need to check both. An assert can be acceptable, but that greatly depends on the context; then again, that's true for pretty much every assert...
c++ dynamic_cast error handling
[ "", "c++", "dynamic-cast", "" ]
What possible reasons could exist for MySQL giving the error `“Access denied for user 'xxx'@'yyy'”` when trying to access a database using PHP-mysqli and working fine when using the command-line mysql tool with exactly the same username, password, socket, database and host? **Update:** There were indeed three users in the `mysql.user` table, each one with a different host (but with the same hashed password), one was set to localhost, one to 127.0.0.1 and one to the machine’s host name. Deleting two of them and changing the host of the third to “%” had only one effect: now the access is denied using the command-line tool also. I did do a ``` select user(); ``` before that in the command line and it yielded the same xxx@yyy that was denied in PHP.
In case anyone’s still interested: I never did solve this particular problem. It really seems like the problem was with the hardware I was running MySQL on. I’ve never seen anything remotely like it since.
Sometimes in PHP/MySQL there is a difference between localhost and 127.0.0.1. In MySQL you grant access based on the host name; for local users this would be localhost. I have seen PHP trying to connect with 'myservername' instead of localhost although 'localhost' was defined in the config. Try to grant access in MySQL for 127.0.0.1 and connect in PHP over 127.0.0.1 port 3306.
Reasons for MySQL authentication error: "Access denied for user 'xxx'@'yyy'"?
[ "", "php", "mysql", "mysql-error-1045", "" ]
I'm trying to compile this code: ``` #include <iostream> using namespace std; class CPosition { private: int itsX,itsY; public: void Show(); void Set(int,int); }; void CPosition::Set(int a, int b) { itsX=a; itsY=b; } void CPosition::Show() { cout << "x:" << itsX << " y:" << itsY << endl; } class CCube { friend class CPosition; private: CPosition Position; }; int main() { CCube cube1; cube1.Position.Show(); cube1.Position.Set(2,3); cube1.Position.Show(); } ``` but I get 'CCube::Position' is not accessible in function main() 3 times. I want class CPosition to be declared outside CCube so that I can use it in the future in new classes, e.g. CBall :) but how can I make it work without using inheritance? Is it possible :)? Regards, PK
In addition to the normal getter, you should also have a const getter. Please note the return by reference. This allows any call to Set() to affect the Position inside CCube, and not a copy that you would otherwise have been updating. ``` class CCube { private: CPosition Position; public: CPosition& getPosition() { return Position; } CPosition const& getPosition() const { return Position; } }; ```
The statement `friend class CPosition;` means that CPosition can now access the private members of the CCube class. To every other class the members are still as private as you declared them. To make the sample work, you'd make Position public: ``` class CCube { public: CPosition Position; }; ```
c++ class friend
[ "", "c++", "class", "inheritance", "friend", "" ]
Here's a very simple question. I have an SP that inserts a row into a table and at the end there's the statement RETURN @@IDENTITY. What I can't seem to find is a way to retrieve this value in C#. I'm using the Enterprise library and using the method: ``` db.ExecuteNonQuery(cmd); ``` I've tried **cmd.Parameters[0].Value** to get the value but that returns 0 all the time. Any ideas?
``` Dim c As New SqlCommand("...") Dim d As New SqlParameter() d.Direction = ParameterDirection.ReturnValue c.Parameters.Add(d) c.ExecuteNonQuery() ' the RETURN value (@@IDENTITY) is now in d.Value ``` It is more or less like this... either this, or just return the value from a stored procedure as an output parameter.
BTW, in most circumstances, you should use SCOPE\_IDENTITY() rather than @@IDENTITY. [Ref](http://msdn.microsoft.com/en-us/library/aa259185.aspx).
Retrieving the value of RETURN @@IDENTITY in C#
[ "", "c#", ".net", "ado.net", "" ]
I am reading a binary log file produced by a piece of equipment. I have the data in a byte[]. If I need to read two bytes to create a short I can do something like this: ``` short value = (short)(byte[1] << 8); value += byte[2]; ``` Now I know the value is correct for valid data. How would I know if the file was messed up and, let's say, the values FF FF were in those two places in the byte array? When I look at the resultant value of converting FF FF to a short, I get a -1. Is this a normal value for FF FF, or did the computer just hit some kind of short bound and roll over with invalid data? For my purposes all of these numbers are going to be positive. If FF FF is actually a short -1, then I just need to validate that all my results are positive. Thank you, Keith *BTW, I am also reading other number data types. I'll show them here just because. The Read function is the basic part of reading from the byte[]. All the other data type reads use the basic Read() function.* ``` public byte Read() { //advance position and then return byte at position byte returnValue; if (_CurrentPosition < _count - 1) { returnValue = _array[_offset + ++_CurrentPosition]; return returnValue; } else throw new System.IO.EndOfStreamException ("Cannot Read Array, at end of stream."); } public float ReadFloat() { byte[] floatTemp = new byte[4]; for (int i = 3; i >= 0; i--) { floatTemp[i] = Read(); } float returnValue = System.BitConverter.ToSingle (floatTemp, 0); if (float.IsNaN(returnValue)) { throw new Exception("Not a Number"); } return returnValue; } public short ReadInt16() { short returnValue = (short)(Read() << 8); returnValue += Read(); return returnValue; } public int ReadInt32() { int returnValue = Read() << 24; returnValue += Read() << 16; returnValue += Read() << 8; returnValue += Read(); return returnValue; } ```
`0xffff` (all bits equal to 1) is -1 for signed shorts, yes. Read up on [Two's complement](http://en.wikipedia.org/wiki/Two's_complement) to learn more about the details. You can switch to a larger datatype, or (as suggested by Grzenio) just use an unsigned type.
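To see the reinterpretation concretely, here is a quick sketch (in Python for brevity; the behaviour is language-independent) showing the same two bytes read as a signed versus an unsigned 16-bit value:

```python
import struct

raw = bytes([0xFF, 0xFF])  # the two bytes from the log file

# ">h" = big-endian signed 16-bit, ">H" = big-endian unsigned 16-bit
(signed,) = struct.unpack(">h", raw)
(unsigned,) = struct.unpack(">H", raw)

print(signed)    # -1     (two's complement interpretation)
print(unsigned)  # 65535  (what you get after switching to an unsigned type)
```

In C#, the equivalent of the second line is simply reading into a `ushort` instead of a `short`, as suggested above.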
Well, you seem to have found `BitConverter` for singles. Now let's see if we can use it for everything else as well... ``` MemoryStream mem = new MemoryStream(_array); public float ReadFloat(Stream str) { byte[] bytes = new byte[4]; str.Read(bytes, 0, 4); return BitConverter.ToSingle(bytes, 0); } public int ReadInt32(Stream str) { byte[] bytes = new byte[4]; str.Read(bytes, 0, 4); return BitConverter.ToInt32(bytes, 0); } ```
Is the binary data I convert to a Short valid?
[ "", "c#", "binary", "bit-shift", "" ]
How do I get the type of a generic typed class within the class? An example: I build a generic typed collection implementing *ICollection<T>*. Within it I have methods like ``` public void Add(T item){ ... } public void Add(IEnumerable<T> enumItems){ ... } ``` How can I ask within the method for the given type *T*? The reason for my question is: If *object* is used as *T*, the collection uses Add(object item) instead of Add(IEnumerable<object> enumItems), even if the parameter is IEnumerable. So in the first case it would add the whole enumerable collection as one object instead of multiple objects of the enumerable collection. So I need something like ``` if (T is object) { // Check for IEnumerable } ``` but of course that cannot work in C#. Suggestions? Thank you very much! Michael
Personally, I would side step the issue by renaming the `IEnumerable<T>` method to `AddRange`. This avoids such issues, and is consistent with existing APIs such as [`List<T>.AddRange`](http://msdn.microsoft.com/en-us/library/z883w3dc.aspx). It also keeps things clean when the `T` you want to add implements `IEnumerable<T>` (rare, I'll admit).
You can use: `typeof(T)` ``` if (typeof(T) == typeof(object) ) { // Check for IEnumerable } ```
.NET: How to check the type within a generic typed class?
[ "", "c#", ".net", "generics", "types", "" ]
We have a function which a single thread calls into (we name this the main thread). Within the body of the function we spawn multiple worker threads to do CPU intensive work, wait for all threads to finish, then return the result on the main thread. The result is that the caller can use the function naively, and internally it'll make use of multiple cores. *All good so far..* The problem we have is dealing with exceptions. We don't want exceptions on the worker threads to crash the application. We want the caller to the function to be able to catch them on the main thread. We must catch exceptions on the worker threads and propagate them across to the main thread to have them continue unwinding from there. ***How can we do this?*** The best I can think of is: 1. Catch a whole variety of exceptions on our worker threads (std::exception and a few of our own ones). 2. Record the type and message of the exception. 3. Have a corresponding switch statement on the main thread which rethrows exceptions of whatever type was recorded on the worker thread. This has the obvious disadvantage of only supporting a limited set of exception types, and would need modification whenever new exception types were added.
C++11 introduced the `exception_ptr` type that allows you to transport exceptions between threads: ``` #include<iostream> #include<thread> #include<exception> #include<stdexcept> static std::exception_ptr teptr = nullptr; void f() { try { std::this_thread::sleep_for(std::chrono::seconds(1)); throw std::runtime_error("To be passed between threads"); } catch(...) { teptr = std::current_exception(); } } int main(int argc, char **argv) { std::thread mythread(f); mythread.join(); if (teptr) { try { std::rethrow_exception(teptr); } catch(const std::exception &ex) { std::cerr << "Thread exited with exception: " << ex.what() << "\n"; } } return 0; } ``` Because in your case you have multiple worker threads, you will need to keep one `exception_ptr` for each of them. Note that `exception_ptr` is a shared ptr-like pointer, so you will need to keep at least one `exception_ptr` pointing to each exception or they will be released. Microsoft specific: if you use SEH Exceptions (`/EHa`), the example code will also transport SEH exceptions like access violations, which may not be what you want.
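For comparison, higher-level threading APIs often wrap this exact pattern for you. As a Python illustration (not part of the C++ answer), `concurrent.futures` captures a worker's exception and re-raises it on whichever thread asks for the result:

```python
from concurrent.futures import ThreadPoolExecutor

def worker():
    # Any exception raised here is captured by the future...
    raise RuntimeError("To be passed between threads")

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(worker)
    try:
        future.result()  # ...and re-raised here, on the calling thread
    except RuntimeError as ex:
        print("Thread exited with exception:", ex)
```

The C++ `exception_ptr` code above is essentially a hand-rolled version of what the future does internally.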
Currently, the only **portable** way is to write catch clauses for all the types of exceptions that you might like to transfer between threads, store the information somewhere from that catch clause and then use it later to rethrow an exception. This is the approach taken by [Boost.Exception](http://www.boost.org/doc/libs/1_36_0/libs/exception/doc/boost-exception.html). In C++0x, you will be able to catch an exception with `catch(...)` and then store it in an instance of `std::exception_ptr` using `std::current_exception()`. You can then rethrow it later from the same or a different thread with `std::rethrow_exception()`. If you are using Microsoft Visual Studio 2005 or later, then the [just::thread C++0x thread library](http://www.stdthread.co.uk) supports `std::exception_ptr`. (Disclaimer: this is my product).
How can I propagate exceptions between threads?
[ "", "c++", "multithreading", "exception", "" ]
I am using the WMD markdown editor in a project for a large number of fields that correspond to a large number of properties in a large number of Entity classes. Some classes may have multiple properties that require the markdown. I am storing the markdown itself since this makes it easier to edit the fields later. However, I need to convert the properties to HTML for display later on. The question is: is there some pattern that I can use to avoid writing markdown conversion code in all my entity classes? I created a utility class with a method that accepts a markdown string and returns the HTML. I am using markdownj and this works fine. The problem is that for each property of each class that stores markdown, I may need another method that converts to HTML: ``` public class Course{ private String description; . . . public String getDescription(){ return description; } public String getDescriptionAsHTML(){ return MarkdownUtil.convert(getDescription()); } . . . } ``` The problem there is that if the Course class has two more properties that need converters, say Tuition and Prerequisites, then I will have to write getTuitionAsHTML() and getPrerequisiteAsHTML(). I find that a bit ugly and would like a cleaner solution. The classes that require this are not part of a single inheritance hierarchy. The other option I am considering is doing this in the controller rather than the model. What are your thoughts on this? Thanks. [EDIT]: New thoughts (Thanks Jasper). Since the project uses Struts2 (I did not say this before), I could create a view component, say, that will convert the markdown for me. Then I use that wherever I need to display the value as HTML.
> The classes that require this are not part of a single inheritance hierarchy. They should at least implement a common interface, otherwise coming up with a clean generic solution is going to be cumbersome. > The other option I am considering is doing this in the controller rather than the model. What are your thoughts on this? This clearly is a responsibility of the View. The #1 MVC rule is that the Model doesn't care about its representation, the markdown in this case. However, I feel that there is too little detail about your current architecture to give a meaningful answer to your question.
You do have one option for doing this if you can't use inheritance or an interface. I know, I know: refactor. But this is reality and \*hit happens. You can use reflection to iterate over your properties and apply the formatting to them. You could either tag them with an attribute or you could adopt a naming scheme (brittle, but still an option).
Design Pattern to apply conversion to multiple properties in multiple classes
[ "", "java", "design-patterns", "markdown", "" ]
I tried to use `OPTION (MAXRECURSION 0)` in a view to generate a list of dates. This seems to be unsupported. Is there a workaround for this issue? EDIT to explain what I actually want to do: I have 2 tables. table1: int weekday, bool available table2: datetime date, bool available I want the result: view1: date (here all days in this year), available (from table2, or from table1 when not in table2). That means I have to apply a join on a date with a weekday. I hope this explanation is understandable, because I actually use more tables with more fields in the query. I found this code to generate the recursion: ``` WITH Dates AS ( SELECT cast('2008-01-01' as datetime) Date UNION ALL SELECT Date + 1 FROM Dates WHERE Date + 1 < DATEADD(yy, 1, GETDATE()) ) ```
[No](http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=124653) - but if you can find a way to do it within 100 levels of recursion (a table of numbers will get you within the 100 recursion levels), you'll be able to do it. And if you have a numbers or pivot table, you won't need the recursion anyway... See [this question](https://stackoverflow.com/questions/10819/sql-auxiliary-table-of-numbers) (but I would create a table and not a table-valued function), [this question](https://stackoverflow.com/questions/40456/sql-missing-rows-when-grouped-by-day-month-year), [this link](http://andre-silva-cardoso.blogspot.com/2007/11/sql-trickspatterns-1-numbers-table.html) and [this link](http://www.sqlmag.com/Article/ArticleID/94376/sql_server_94376.html)
You can use a [CTE](http://blog.crowe.co.nz/archive/2007/09/06/Microsoft-SQL-Server-2005---CTE-Example-of-a-simple.aspx) for hierarchical queries.
Can I use recursion in a Sql Server 2005 View?
[ "", "sql", "sql-server", "recursion", "" ]
In MySQL 5.0 why does the following error occur when trying to create a view with a subquery in the FROM clause? > ERROR 1349 (HY000): View's SELECT contains a subquery in the FROM clause If this is a limitation of the MySQL engine, then why haven't they implemented this feature yet? Also, what are some good workarounds for this limitation? Are there any workarounds that work for any subquery in the FROM clause or are there some queries that can not be expressed without using a subquery in the FROM clause? --- An example query (was buried in a comment): ``` SELECT temp.UserName FROM (SELECT u1.name as UserName, COUNT(m1.UserFromId) as SentCount FROM Message m1, User u1 WHERE u1.uid = m1.UserFromId Group BY u1.name HAVING SentCount > 3 ) as temp ```
Couldn't your query just be written as: ``` SELECT u1.name as UserName from Message m1, User u1 WHERE u1.uid = m1.UserFromID GROUP BY u1.name HAVING count(m1.UserFromId)>3 ``` That should also help with the known speed issues with subqueries in MySQL
I had the same problem. I wanted to create a view to show information of the most recent year, from a table with records from 2009 to 2011. Here's the original query: ``` SELECT a.* FROM a JOIN ( SELECT a.alias, MAX(a.year) as max_year FROM a GROUP BY a.alias ) b ON a.alias=b.alias and a.year=b.max_year ``` Outline of solution: 1. create a view for each subquery 2. replace subqueries with those views Here's the solution query: ``` CREATE VIEW v_max_year AS SELECT alias, MAX(year) as max_year FROM a GROUP BY a.alias; CREATE VIEW v_latest_info AS SELECT a.* FROM a JOIN v_max_year b ON a.alias=b.alias and a.year=b.max_year; ``` It works fine on mysql 5.0.45, without much of a speed penalty (compared to executing the original sub-query select without any views).
MySQL: View with Subquery in the FROM Clause Limitation
[ "", "mysql", "sql", "view", "mysql-error-1349", "" ]
[1] In JDBC, why should we first load the driver using Class.forName("some driver name")? Why didn't Sun take care of loading the driver within the getConnection() method itself, e.g. if I pass the driver name as a parameter to getConnection()? [2] I want to understand JDBC internals. Any pointers towards it are appreciated.
With JDBC 4, you no longer need to use Class.forName(...) see [here](http://www.artima.com/lejava/articles/jdbc_four.html) for one article explaining this: > Connection to a database requires that a suitable JDBC database driver be loaded in the client's VM. In the early days of JDBC, it was common to load a suitable driver via Class.forName(), passing in the name of the class implementing the JDBC Driver interface. The DriverManager class later offered a more flexible means of managing JDBC drivers in a client application. For a driver to become available, DriverManager's registerDriver() had to be invoked with the driver's class name. Alternatively, you could specify the drivers to load via the jdbc.drivers system property. When DriverManager initializes, it attempts to load the drivers associated with that property. > > JDBC 4 adds the ***J2SE Service Provider*** mechanism as another means to specify database drivers. For this to work, driver JAR files must include the file META-INF/services/java.sql.driver. That file must contain a single line with the name of the JDBC driver's implementation of the Driver interface. Invoking getConnection() on DriverManager will load a driver so packaged, if needed. Upon loading the driver, an instance of the driver is created, and then registerDriver() is invoked to make that driver available to clients. Have a look at [Sun's JDBC link](http://java.sun.com/products/jdbc/download.html) for more information on JDBC. The JDBC 4.0 Specification is relatively a nice read compared to some other specs...
There is no way for java.sql to know which class to load if you only give it the JDBC protocol name. Arguably JDBC driver jar files should be able to specify protocol name and driver class within their manifest or elsewhere under META-INF/. In my opinion, you might as well construct the driver instance yourself rather than attempting to load the class with a hardwired string or fiddly services file. There isn't much to JDBC itself. The source is in src.zip of the JDK. DriverManager is the class with code.
Understanding JDBC internals
[ "", "java", "jdbc", "" ]
I have the following script: ``` import getopt, sys opts, args = getopt.getopt(sys.argv[1:], "h:s") for key,value in opts: print key, "=>", value ``` If I name this getopt.py and run it, it doesn't work as it tries to import itself. Is there a way around this, so I can keep this filename but specify on import that I want the standard Python lib and not this file? Solution based on Vinko's answer: ``` import sys sys.path.reverse() from getopt import getopt opts, args = getopt(sys.argv[1:], "h:s") for key,value in opts: print key, "=>", value ```
You shouldn't name your scripts like existing modules. Especially if standard. That said, you can touch sys.path to modify the library loading order ``` ~# cat getopt.py print "HI" ~# python Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> import getopt HI ~# python Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path.remove('') >>> import getopt >>> dir(getopt) ['GetoptError', '__all__', '__builtins__', '__doc__', '__file__', '__name__', 'do_longs', 'do_shorts', 'error', 'getopt', 'gnu_getopt', 'long_has_args', 'os', 'short_has_arg'] ``` In addition, you may wish to avoid the full import and do it differently, like this: ``` import sys sys.path.remove('') from getopt import getopt sys.path.insert(0,'') opts, args = getopt(sys.argv[1:], "h:s") for key,value in opts: print key, "=>", value ```
You should avoid naming your python files with standard library module names.
python name a file same as a lib
[ "", "python", "" ]
I recently switched my hosting provider, and due to the time zone that the server is now in, my code has stopped working. The hosting server reports in Pacific time; however, my code needs to work with GMT, as my site is for the UK market. So all my displays and searches need to be in the format dd/MM/yyyy. How can I account for the difference? For instance, when I do a DateTime.Parse("03/11/2008") it fails, as I assume the Parse is run against the server's settings. I also get "String was not recognized as a valid DateTime." throughout my code.
In your web.config file add `<globalization>` element under `<system.web>` node: ``` <system.web> <globalization culture="en-gb"/> <!-- ... --> </system.web> ```
Try ``` DateTime.Parse("28/11/2008", new CultureInfo("en-GB")) ``` Have a look at [the overload for DateTime.Parse on MSDN](http://msdn.microsoft.com/en-us/library/kc8s65zs.aspx). Also, be careful not to confuse time zones (pacific, GMT) with cultures. Cultures are your actual problem here.
DateTime format on hosting server
[ "", "c#", "asp.net", "datetime", "" ]
I have barcode images in JPG format and want to extract the barcode number from them. Please help!
See the CodeProject article: [Reading Barcodes from an Image - II](http://www.codeproject.com/KB/graphics/barcodeimaging2.aspx). The author ([James](http://www.codeproject.com/script/Membership/Profiles.aspx?mid=1770204)) improves (and credits) a [previously written](http://www.codeproject.com/KB/dotnet/barcodeImaging.aspx) VB library to decode barcodes from an image using only .NET code. There are two projects in the downloadable solution: * The barcode library - written in C# * The test app - written in VB I have successfully used the C# code in VS2008 against a JPG image with an extended (includes alpha chars) code 39 barcode. The library has the ability to scan an entire image for a barcode, where the barcode is only a portion. This has good and bad points. It is more flexible, but you may have to parse out noise. Of course, you will want to start with the cleanest image possible. Also, the scanned barcode must be fairly straight, not rotated or skewed at an angle. If you can limit the scan to a "slice" of the actual barcode, you might get better accuracy. In the article comments, [another user](http://www.codeproject.com/script/Membership/Profiles.aspx?mid=2393263) submits [a function](http://www.codeproject.com/KB/graphics/barcodeimaging2.aspx?fid=191552&select=1482523#xx1482523xx) that re-scans the barcode and uses a checksum digit, which is great if you control the printing of the original barcode and can include the checksum in the first place. There are, of course some very impressive (and some very expensive) commercial solutions that have the advantage of being well-tested, more flexible, can scan more barcode formats, and are more tolerant of image quality due to improved image sampling. But this is a good (*free*) start! 
*You will need to sign up with CodeProject to download the code, but that is free also - and worth the time because there is so much good code on that site!* UPDATE: Incidentally, I just looked at the [blog](http://blog.lemqi.com/) that [Joachim Kerschbaumer](https://stackoverflow.com/users/20227/joachim-kerschbaumer) mentions in [another answer](https://stackoverflow.com/questions/204008/barcode-image-to-code39-conversion-in-c#204084) to your question. Definitely keep an eye on that project! It looks like a **very** robust solution. If it can read those skewed barcodes from those busy images, then it can do anything!
We've developed a C# component that reads values from barcodes of any dimension, rotation, quality, etc. It's not yet released, but we will post detailed information about it at <http://blog.lemqi.com>. It will probably be free (maybe open source), but it is still 1-2 weeks until release, as we have to refactor the code first.
barcode image to Code39 conversion in C#?
[ "", "c#", "image-processing", "barcode", "code39", "" ]
I have a Python script that needs to execute an external program, but for some reason fails. If I have the following script: ``` import os; os.system("C:\\Temp\\a b c\\Notepad.exe"); raw_input(); ``` Then it fails with the following error: > 'C:\Temp\a' is not recognized as an internal or external command, operable program or batch file. If I escape the program with quotes: ``` import os; os.system('"C:\\Temp\\a b c\\Notepad.exe"'); raw_input(); ``` Then it works. However, if I add a parameter, it stops working again: ``` import os; os.system('"C:\\Temp\\a b c\\Notepad.exe" "C:\\test.txt"'); raw_input(); ``` What is the right way to execute a program and wait for it to complete? I do not need to read output from it, as it is a visual program that does a job and then just exits, but I need to wait for it to complete. Also note, moving the program to a non-spaced path is not an option either. --- This does not work either: ``` import os; os.system("'C:\\Temp\\a b c\\Notepad.exe'"); raw_input(); ``` Note the swapped single/double quotes. With or without a parameter to Notepad here, it fails with the error message > The filename, directory name, or volume label syntax is incorrect.
[`subprocess.call`](http://docs.python.org/2/library/subprocess.html#using-the-subprocess-module) will avoid problems with having to deal with quoting conventions of various shells. It accepts a list, rather than a string, so arguments are more easily delimited. i.e. ``` import subprocess subprocess.call(['C:\\Temp\\a b c\\Notepad.exe', 'C:\\test.txt']) ```
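To see why the list form sidesteps the quoting problem, here is a minimal, hedged sketch (modern Python 3 `subprocess.run` shown; the same argument-list behavior applies to `subprocess.call`). The child process is the Python interpreter itself, so nothing outside the standard library is assumed:

```python
import subprocess
import sys

# Each list element is delivered to the child as exactly one argument,
# so a path containing spaces needs no manual quoting at all.
arg_with_spaces = r"C:\Temp\a b c\file.txt"   # just a string here; nothing is opened

# Ask a child Python process to echo its first argument back.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", arg_with_spaces],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # C:\Temp\a b c\file.txt
```

Each list element reaches the child as exactly one `argv` entry, so `"C:\\Temp\\a b c\\Notepad.exe"` and `"C:\\test.txt"` would each arrive intact without any hand-rolled quoting.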
Here's a different way of doing it. If you're using Windows the following acts like double-clicking the file in Explorer, or giving the file name as an argument to the DOS "start" command: the file is opened with whatever application (if any) its extension is associated with. ``` filepath = 'textfile.txt' import os os.startfile(filepath) ``` Example: ``` import os os.startfile('textfile.txt') ``` This will open textfile.txt with Notepad if Notepad is associated with .txt files.
How do I execute a program from Python? os.system fails due to spaces in path
[ "", "python", "shellexecute", "" ]
Is there any way in SQL Server to get the results starting at a given offset? For example, in another type of SQL database, it's possible to do: ``` SELECT * FROM MyTable OFFSET 50 LIMIT 25 ``` to get results 51-75. This construct does not appear to exist in SQL Server. How can I accomplish this without loading all the rows I don't care about? Thanks!
I would avoid using `SELECT *`. Specify columns you actually want even though it may be all of them. **SQL Server 2005+** ``` SELECT col1, col2 FROM ( SELECT col1, col2, ROW_NUMBER() OVER (ORDER BY ID) AS RowNum FROM MyTable ) AS MyDerivedTable WHERE MyDerivedTable.RowNum BETWEEN @startRow AND @endRow ``` **SQL Server 2000** [Efficiently Paging Through Large Result Sets in SQL Server 2000](https://web.archive.org/web/20210506081930/http://www.4guysfromrolla.com/webtech/041206-1.shtml) [A More Efficient Method for Paging Through Large Result Sets](https://web.archive.org/web/20211020131201/https://www.4guysfromrolla.com/webtech/042606-1.shtml)
If you will be processing all pages in order then simply remembering the last key value seen on the previous page and using `TOP (25) ... WHERE Key > @last_key ORDER BY Key` can be the best performing method if suitable indexes exist to allow this to be seeked efficiently - or [an API cursor](https://dba.stackexchange.com/a/68280/3690) if they don't. For selecting an arbitary page the best solution for SQL Server 2005 - 2008 R2 is probably `ROW_NUMBER` and `BETWEEN` For SQL Server 2012+ you can use the enhanced [ORDER BY](http://msdn.microsoft.com/en-us/library/ms188385%28SQL.110%29.aspx) clause for this need. ``` SELECT * FROM MyTable ORDER BY OrderingColumn ASC OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY ``` Though [it remains to be seen how well performing this option will be](http://sqlblogcasts.com/blogs/sqlandthelike/archive/2010/11/10/denali-paging-is-it-win-win.aspx).
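To make the offset/fetch idea concrete, here is a hedged sketch using Python's built-in SQLite, whose `LIMIT ... OFFSET ...` matches the construct in the question (SQL Server 2012+ spells the same thing `OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO MyTable (id, name) VALUES (?, ?)",
    [(i, f"row{i}") for i in range(1, 101)],
)

# Rows 51-75. Always pair the offset with a deterministic ORDER BY;
# without it, the database may return a different "page 3" on each run.
page = conn.execute(
    "SELECT id, name FROM MyTable ORDER BY id LIMIT 25 OFFSET 50"
).fetchall()

print(page[0], page[-1])  # (51, 'row51') (75, 'row75')
```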
Row Offset in SQL Server
[ "", "sql", "sql-server", "" ]
It appears that SQL Server, like most other products, has a random function that is not really that random. So we have this nice little function to generate a 10-character value. Is there a better way to accomplish what the following does? I am betting there is. ``` DECLARE @SaltCount INT; SELECT @SaltCount = COUNT(*) FROM tmp_NewLogin; PRINT 'Set Salt values for all records' + CAST(@SaltCount AS VARCHAR(10)) DECLARE @CharPool CHAR(83); DECLARE @Salt VARCHAR(10); SET @CharPool = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!"#$%&()*+,-./:;<=>?@'; SET NOCOUNT ON; updateSaltValue: SET @Salt = '' SELECT @Salt = @Salt + SUBSTRING(@CharPool, number, 1) FROM ( SELECT TOP 10 number FROM MASTER..[spt_values] WHERE TYPE = 'p' AND Number BETWEEN 1 AND 83 ORDER BY NEWID() ) AS t UPDATE TOP(1) [table] SET [Salt] = @Salt WHERE [Salt] IS NULL IF (@@ROWCOUNT > 0) GOTO updateSaltValue SET NOCOUNT OFF; PRINT 'Completed setting salts for all records'; ```
Most programmers make the mistake of reinventing randomization functionality and end up with something that is not random at all. I'd recommend you stick with the built-in RAND() function. Seed it once, then fetch as many values as you need.
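The same principle holds in any language: construct the generator once, then draw from it repeatedly. As a hedged Python illustration of generating 10-character salts from a pool like the one in the question (for real password salts, a cryptographic source such as `random.SystemRandom` or the `secrets` module is preferable):

```python
import random
import string

# An 83-character pool similar to the @CharPool in the question.
CHAR_POOL = string.ascii_letters + string.digits + '!"#$%&()*+,-./:;<=>?@'

rng = random.Random()            # seeded once, automatically, at construction
# rng = random.SystemRandom()    # swap in for cryptographic-quality randomness

def make_salt(length=10):
    # Reuse the same generator on every call; do not reseed per call,
    # or salts generated within the same clock tick may collide.
    return "".join(rng.choice(CHAR_POOL) for _ in range(length))

salts = [make_salt() for _ in range(5)]
print(salts)
```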
Reinventing RAND is a recipe for disaster. Where have you ever noticed it behaving incorrectly? I don't think you even need to seed it. SQL Server should seed it on its own just fine. Seeding should just be necessary when you need to produce the same "random" sequence several times when testing algorithms or some such.
Generate Random values from SQL
[ "", "sql", "sql-server", "t-sql", "" ]
I would like to be able to emulate the functionality of the "Presenter Tools" in MS Office. This requires the program to be able to detect and use an external/secondary monitor/projector. Could somebody please point me in the right direction for achieving this? I would like to use Java 1.5. Thank you.
Through the [GraphicsEnvironment](http://java.sun.com/javase/6/docs/api/java/awt/GraphicsEnvironment.html) class you can get information about the available screens on the computer. In the docs for [GraphicsDevice](http://java.sun.com/javase/6/docs/api/java/awt/GraphicsDevice.html) there is a basic example showing how to get and use the screen information.
It really depends on the GUI API you are planning. For AWT, see John Meagher's post. For SWT, you can look at Display.getMonitors() for a starting point.
How can I send output to an external/second display in Java?
[ "", "java", "ms-office", "" ]
I would like to either prevent or handle a `StackOverflowException` that I am getting from a call to the `XslCompiledTransform.Transform` method within an `Xsl Editor` I am writing. The problem seems to be that the user can write an `Xsl script` that is infinitely recursive, and it just blows up on the call to the `Transform` method. (That is, the problem is not just the typical programmatic error, which is usually the cause of such an exception.) Is there a way to detect and/or limit how many recursions are allowed? Or any other ideas to keep this code from just blowing up on me?
From Microsoft: > Starting with the .NET Framework > version 2.0, a StackOverflowException > object cannot be caught by a try-catch > block and the corresponding process is > terminated by default. Consequently, > users are advised to write their code > to detect and prevent a stack > overflow. For example, if your > application depends on recursion, use > a counter or a state condition to > terminate the recursive loop. I'm assuming the exception is happening within an internal .NET method, and not in your code. You can do a couple things. * Write code that checks the xsl for infinite recursion and notifies the user prior to applying a transform (Ugh). * Load the XslTransform code into a separate process (Hacky, but less work). You can use the Process class to load the assembly that will apply the transform into a separate process, and alert the user of the failure if it dies, without killing your main app. EDIT: I just tested, here is how to do it: MainProcess: ``` // This is just an example, obviously you'll want to pass args to this. Process p1 = new Process(); p1.StartInfo.FileName = "ApplyTransform.exe"; p1.StartInfo.UseShellExecute = false; p1.StartInfo.WindowStyle = ProcessWindowStyle.Hidden; p1.Start(); p1.WaitForExit(); if (p1.ExitCode == 1) Console.WriteLine("StackOverflow was thrown"); ``` ApplyTransform Process: ``` class Program { static void Main(string[] args) { AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException); throw new StackOverflowException(); } // We trap this, we can't save the process, // but we can prevent the "ILLEGAL OPERATION" window static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e) { if (e.IsTerminating) { Environment.Exit(1); } } } ```
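The separate-process idea is language-agnostic. Here is a hedged Python sketch of the same pattern; since Python's `RecursionError` is catchable (unlike .NET's `StackOverflowException`), the child merely simulates the fatal failure by exiting nonzero:

```python
import subprocess
import sys

# Stand-in for the crash-prone transform. A real StackOverflowException
# kills a .NET process outright; here the child simulates "worker died"
# by exiting nonzero when its recursion blows up.
worker_source = """
import sys
sys.setrecursionlimit(100)

def recurse(n):
    return recurse(n + 1)

try:
    recurse(0)
except RecursionError:
    sys.exit(1)
"""

result = subprocess.run([sys.executable, "-c", worker_source])
if result.returncode != 0:
    print("worker failed; parent survives and can report the error")
```

The parent inspects the exit code exactly as the C# example inspects `p1.ExitCode`, and keeps running regardless of what happened to the worker.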
> **NOTE** The question in the bounty by @WilliamJockusch and the original question are different. > > This answer is about StackOverflow's in the general case of third-party libraries and what you can/can't do with them. If you're looking about the special case with XslTransform, see the accepted answer. --- Stack overflows happen because the data on the stack exceeds a certain limit (in bytes). The details of how this detection works can be found [here](https://stackoverflow.com/questions/30327674/how-is-a-stackoverflowexception-detected/30327998#30327998). > I'm wondering if there is a general way to track down StackOverflowExceptions. In other words, suppose I have infinite recursion somewhere in my code, but I have no idea where. I want to track it down by some means that is easier than stepping through code all over the place until I see it happening. I don't care how hackish it is. As I mentioned in the link, detecting a stack overflow from static code analysis would require solving the halting problem which is *undecidable*. Now that we've established that *there is no silver bullet*, I can show you a few tricks that I think helps track down the problem. I think this question can be interpreted in different ways, and since I'm a bit bored :-), I'll break it down into different variations. **Detecting a stack overflow in a test environment** Basically the problem here is that you have a (limited) test environment and want to detect a stack overflow in an (expanded) production environment. Instead of detecting the SO itself, I solve this by exploiting the fact that the stack depth can be set. The debugger will give you all the information you need. Most languages allow you to specify the stack size or the max recursion depth. Basically I try to force a SO by making the stack depth as small as possible. If it doesn't overflow, I can always make it bigger (=in this case: safer) for the production environment. 
The moment you get a stack overflow, you can manually decide if it's a 'valid' one or not. To do this, pass the stack size (in our case: a small value) to a Thread parameter, and see what happens. The default stack size in .NET is 1 MB, we're going to use a way smaller value: ``` class StackOverflowDetector { static int Recur() { int variable = 1; return variable + Recur(); } static void Start() { int depth = 1 + Recur(); } static void Main(string[] args) { Thread t = new Thread(Start, 1); t.Start(); t.Join(); Console.WriteLine(); Console.ReadLine(); } } ``` *Note: we're going to use this code below as well.* Once it overflows, you can set it to a bigger value until you get a SO that makes sense. **Creating exceptions before you SO** The `StackOverflowException` is not catchable. This means there's not much you can do when it has happened. So, if you believe something is bound to go wrong in your code, you can make your own exception in some cases. The only thing you need for this is the current stack depth; there's no need for a counter, you can use the real values from .NET: ``` class StackOverflowDetector { static void CheckStackDepth() { if (new StackTrace().FrameCount > 10) // some arbitrary limit { throw new StackOverflowException("Bad thread."); } } static int Recur() { CheckStackDepth(); int variable = 1; return variable + Recur(); } static void Main(string[] args) { try { int depth = 1 + Recur(); } catch (ThreadAbortException e) { Console.WriteLine("We've been a {0}", e.ExceptionState); } Console.WriteLine(); Console.ReadLine(); } } ``` Note that this approach also works if you are dealing with third-party components that use a callback mechanism. The only thing required is that you can intercept *some* calls in the stack trace. **Detection in a separate thread** You explicitly suggested this, so here goes this one. You can try detecting a SO in a separate thread.. but it probably won't do you any good. 
A stack overflow can happen *fast*, even before you get a context switch. This means that this mechanism isn't reliable at all... **I wouldn't recommend actually using it**. It was fun to build though, so here's the code :-) ``` class StackOverflowDetector { static int Recur() { Thread.Sleep(1); // simulate that we're actually doing something :-) int variable = 1; return variable + Recur(); } static void Start() { try { int depth = 1 + Recur(); } catch (ThreadAbortException e) { Console.WriteLine("We've been a {0}", e.ExceptionState); } } static void Main(string[] args) { // Prepare the execution thread Thread t = new Thread(Start); t.Priority = ThreadPriority.Lowest; // Create the watch thread Thread watcher = new Thread(Watcher); watcher.Priority = ThreadPriority.Highest; watcher.Start(t); // Start the execution thread t.Start(); t.Join(); watcher.Abort(); Console.WriteLine(); Console.ReadLine(); } private static void Watcher(object o) { Thread towatch = (Thread)o; while (true) { if (towatch.ThreadState == System.Threading.ThreadState.Running) { towatch.Suspend(); var frames = new System.Diagnostics.StackTrace(towatch, false); if (frames.FrameCount > 20) { towatch.Resume(); towatch.Abort("Bad bad thread!"); } else { towatch.Resume(); } } } } } ``` Run this in the debugger and have fun of what happens. **Using the characteristics of a stack overflow** Another interpretation of your question is: "Where are the pieces of code that could potentially cause a stack overflow exception?". Obviously the answer of this is: all code with recursion. For each piece of code, you can then do some manual analysis. It's also possible to determine this using static code analysis. What you need to do for that is to decompile all methods and figure out if they contain an infinite recursion. 
Here's some code that does that for you: ``` // A simple decompiler that extracts all method tokens (that is: call, callvirt, newobj in IL) internal class Decompiler { private Decompiler() { } static Decompiler() { singleByteOpcodes = new OpCode[0x100]; multiByteOpcodes = new OpCode[0x100]; FieldInfo[] infoArray1 = typeof(OpCodes).GetFields(); for (int num1 = 0; num1 < infoArray1.Length; num1++) { FieldInfo info1 = infoArray1[num1]; if (info1.FieldType == typeof(OpCode)) { OpCode code1 = (OpCode)info1.GetValue(null); ushort num2 = (ushort)code1.Value; if (num2 < 0x100) { singleByteOpcodes[(int)num2] = code1; } else { if ((num2 & 0xff00) != 0xfe00) { throw new Exception("Invalid opcode: " + num2.ToString()); } multiByteOpcodes[num2 & 0xff] = code1; } } } } private static OpCode[] singleByteOpcodes; private static OpCode[] multiByteOpcodes; public static MethodBase[] Decompile(MethodBase mi, byte[] ildata) { HashSet<MethodBase> result = new HashSet<MethodBase>(); Module module = mi.Module; int position = 0; while (position < ildata.Length) { OpCode code = OpCodes.Nop; ushort b = ildata[position++]; if (b != 0xfe) { code = singleByteOpcodes[b]; } else { b = ildata[position++]; code = multiByteOpcodes[b]; b |= (ushort)(0xfe00); } switch (code.OperandType) { case OperandType.InlineNone: break; case OperandType.ShortInlineBrTarget: case OperandType.ShortInlineI: case OperandType.ShortInlineVar: position += 1; break; case OperandType.InlineVar: position += 2; break; case OperandType.InlineBrTarget: case OperandType.InlineField: case OperandType.InlineI: case OperandType.InlineSig: case OperandType.InlineString: case OperandType.InlineTok: case OperandType.InlineType: case OperandType.ShortInlineR: position += 4; break; case OperandType.InlineR: case OperandType.InlineI8: position += 8; break; case OperandType.InlineSwitch: int count = BitConverter.ToInt32(ildata, position); position += count * 4 + 4; break; case OperandType.InlineMethod: int methodId = 
BitConverter.ToInt32(ildata, position); position += 4; try { if (mi is ConstructorInfo) { result.Add((MethodBase)module.ResolveMember(methodId, mi.DeclaringType.GetGenericArguments(), Type.EmptyTypes)); } else { result.Add((MethodBase)module.ResolveMember(methodId, mi.DeclaringType.GetGenericArguments(), mi.GetGenericArguments())); } } catch { } break; default: throw new Exception("Unknown instruction operand; cannot continue. Operand type: " + code.OperandType); } } return result.ToArray(); } } class StackOverflowDetector { // This method will be found: static int Recur() { CheckStackDepth(); int variable = 1; return variable + Recur(); } static void Main(string[] args) { RecursionDetector(); Console.WriteLine(); Console.ReadLine(); } static void RecursionDetector() { // First decompile all methods in the assembly: Dictionary<MethodBase, MethodBase[]> calling = new Dictionary<MethodBase, MethodBase[]>(); var assembly = typeof(StackOverflowDetector).Assembly; foreach (var type in assembly.GetTypes()) { foreach (var member in type.GetMembers(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance).OfType<MethodBase>()) { var body = member.GetMethodBody(); if (body!=null) { var bytes = body.GetILAsByteArray(); if (bytes != null) { // Store all the calls of this method: var calls = Decompiler.Decompile(member, bytes); calling[member] = calls; } } } } // Check every method: foreach (var method in calling.Keys) { // If method A -> ... -> method A, we have a possible infinite recursion CheckRecursion(method, calling, new HashSet<MethodBase>()); } } ``` Now, the fact that a method cycle contains recursion, is by no means a guarantee that a stack overflow will happen - it's just the most likely precondition for your stack overflow exception. In short, this means that this code will determine the pieces of code where a stack overflow *can* occur, which should narrow down most code considerably. 
**Yet other approaches** There are some other approaches you can try that I haven't described here. 1. Handling the stack overflow by hosting the CLR process and handling it. Note that you still cannot 'catch' it. 2. Changing all IL code, building another DLL, adding checks on recursion. Yes, that's quite possible (I've implemented it in the past :-); it's just difficult and involves a lot of code to get it right. 3. Use the .NET profiling API to capture all method calls and use that to figure out stack overflows. For example, you can implement checks that if you encounter the same method X times in your call tree, you give a signal. There's a project [clrprofiler](https://github.com/MicrosoftArchive/clrprofiler) that will give you a head start.
How do I prevent and/or handle a StackOverflowException?
[ "", "c#", ".net", "stack-overflow", "xslcompiledtransform", "" ]
I am interested in getting some Python code talking to some Ruby code on Windows, Linux and possibly other platforms. Specifically, I would like to access classes in Ruby from Python and call their methods, access their data, create new instances and so on. An obvious way to do this is via something like XML-RPC or maybe CORBA, but I would be interested in any other approaches. What have other people done to get code from Python and Ruby communicating with one another, either locally on the same system or remotely across a network? Thanks in advance.
Well, you could try [named pipes](http://en.wikipedia.org/wiki/Named_pipe) or something similar but I really think that XML-RPC would be the most headache-free way.
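As a hedged sketch of how little code XML-RPC needs on the Python side: the standard library's `SimpleXMLRPCServer` exposes a function, and any XML-RPC client, including Ruby's standard `XMLRPC::Client`, can call it. A Python client stands in below so the example is self-contained:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Start an XML-RPC server on a free local port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]

def add(a, b):
    return a + b

server.register_function(add, "add")

thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# A Ruby process would point XMLRPC::Client at the same URL;
# here a Python ServerProxy plays the client role.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
print(result)  # 5

server.shutdown()
```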
Please be advised that I don't speak from personal experience here, but I imagine JRuby and Jython (the Ruby and Python implementations on the JVM) would be able to talk to each other easily, as well as to Java code. You may want to look into that.
Ruby to Python bridge
[ "", "python", "ruby", "interop", "" ]
If I do this: ``` // In header class Foo { void foo(bar*); }; // In cpp void Foo::foo(bar* const pBar) { //Stuff } ``` The compiler does not complain that the signatures for Foo::foo do not match. However if I had: ``` void foo(const bar*); //In header void Foo::foo(bar*) {} //In cpp ``` The code will fail to compile. What is going on? I'm using gcc 4.1.x
The *const* keyword in the first example is meaningless. You are saying that you don't plan on changing the pointer. However, the pointer was passed by value, so it does not matter whether you change it or not; it will not affect the caller. Similarly, you could also do this: ``` // In header class Foo { void foo( int b ); }; // In cpp void Foo::foo( const int b ) { //Stuff } ``` You can even do this: ``` // In header class Foo { void foo( const int b ); }; // In cpp void Foo::foo( int b ) { //Stuff } ``` Since the *int* is passed by value, the constness does not matter. In the second example, you are saying that your function takes a pointer to one type, but you then implement it as taking a pointer to another type; therefore it fails.
In the first, you've promised the compiler, but not other users of the class that you will not edit the variable. In your second example, you've promised other users of the class that you will not edit their variable, but failed to uphold that promise. I should also note that there is a distinct difference between ``` bar* const variable ``` and ``` const bar* variable ``` and ``` const bar* const variable ``` In the first form, the pointer will never change, but you can edit the object that is pointed to. In the second form, you can edit the pointer(point it to another object), but never the variable that it points to. In the final form, you will neither edit the pointer, nor the object it points to. [Reference](http://www.parashift.com/c++-faq-lite/const-correctness.html#faq-18.5) To add a bit more of a clarification to the question stated, you can always promise MORE const than less. Given a class: ``` class Foo { void func1 (int x); void func2 (int *x); } ``` You can compile the following implementation: ``` Foo::func1(const int x) {} Foo::func2(const int *x) {} ``` or: ``` Foo::func1(const int x) {} Foo::func2(const int* const x) {} ``` without any problems. You've told your users that you may possibly edit their variables. In your implementation, you've told the compiler that this particular implementation will not edit those variables, even though the told the users you might. You haven't broken a promise to the user, and so the code compiles.
C++ const question
[ "", "c++", "gcc", "constants", "" ]
A couple of recent questions discuss strategies for naming columns, and I was rather surprised to discover the concept of embedding the notion of foreign and primary keys in column names. That is ``` select t1.col_a, t1.col_b, t2.col_z from t1 inner join t2 on t1.id_foo_pk = t2.id_foo_fk ``` I have to confess I have never worked on any database system that uses this sort of scheme, and I'm wondering what the benefits are. The way I see it, once you've learnt the N principal tables of a system, you'll write several orders of magnitude more requests with those tables. To become productive in development, you'll need to learn which tables are the important tables, and which are simple tributaries. You'll want to commit an good number of column names to memory. And one of the basic tasks is to join two tables together. To reduce the learning effort, the easiest thing to do is to ensure that the column name is the same in both tables: ``` select t1.col_a, t1.col_b, t2.col_z from t1 inner join t2 on t1.id_foo = t2.id_foo ``` I posit that, as a developer, you don't need to be reminded that much about which columns are primary keys, which are foreign and which are nothing. It's easy enough to look at the schema if you're curious. When looking at a random ``` tx inner join ty on tx.id_bar = ty.id_bar ``` ... is it all that important to know which one is the foreign key? Foreign keys are important only to the database engine itself, to allow it to ensure referential integrity and do the right thing during updates and deletes. What problem is being solved here? (I know this is an invitation to discuss, and feel free to do so. But at the same time, I *am* looking for an answer, in that I may be genuinely missing something).
I agree with you that the foreign key column in a child table should have the same name as the primary key column in the parent table. Note that this permits syntax like the following: ``` SELECT * FROM foo JOIN bar USING (foo_id); ``` The USING keyword assumes that a column exists by the same name in both tables, and that you want an equi-join. It's nice to have this available as shorthand for the more verbose: ``` SELECT * FROM foo JOIN bar ON (foo.foo_id = bar.foo_id); ``` Note, however, there are cases when you can't name the foreign key the same as the primary key it references. For example, in a table that has a self-reference: ``` CREATE TABLE Employees ( emp_id INT PRIMARY KEY, manager_id INT REFERENCES Employees(emp_id) ); ``` Also a table may have multiple foreign keys to the same parent table. It's useful to use the name of the column to describe the nature of the relationship: ``` CREATE TABLE Bugs ( ... reported_by INT REFERENCES Accounts(account_id), assigned_to INT REFERENCES Accounts(account_id), ... ); ``` I don't like to include the name of the table in the column name. I also eschew the obligatory "id" as the name of the primary key column in every table.
I agree with you. Putting this information in the column name smacks of the crappy Hungarian Notation idiocy of the early Windows days.
Why specify primary/foreign key attributes in column names
[ "", "sql", "naming-conventions", "foreign-keys", "" ]
I'm trying to write some LINQ To SQL code that would generate SQL like ``` SELECT t.Name, g.Name FROM Theme t INNER JOIN ( SELECT TOP 5 * FROM [Group] ORDER BY TotalMembers ) as g ON t.K = g.ThemeK ``` So far I have ``` var q = from t in dc.Themes join g in dc.Groups on t.K equals g.ThemeK into groups select new { t.Name, Groups = (from z in groups orderby z.TotalMembers select z.Name ) }; ``` but I need to do a top/take on the ordered groups subquery. According to <http://blogs.msdn.com/vbteam/archive/2008/01/08/converting-sql-to-linq-part-7-union-top-subqueries-bill-horst.aspx> in VB I could just add TAKE 5 on the end, but I can't get this syntax to work in c#. How do you use the take syntax in c#? edit: PS adding .Take(5) at the end causes it to run loads of individual queries edit 2: I made a slight mistake with the intent of the SQL above, but the question still stands. **The problem is that if you use extension methods in the query like .Take(5), LinqToSql runs lots of SQL queries instead of a single query.**
Here's a faithful translation of the original query. This should not generate repeated roundtrips. ``` var subquery = dc.Groups .OrderBy(g => g.TotalMembers) .Take(5); var query = dc.Themes .Join(subquery, t => t.K, g => g.ThemeK, (t, g) => new { ThemeName = t.Name, GroupName = g.Name } ); ``` The roundtrips in the question are caused by the groupjoin (join into). Groups in LINQ have a heirarchical shape. Groups in SQL have a row/column shape (grouped keys + aggregates). In order for LinqToSql to fill its hierarchy from row/column results, it must query the child nodes seperately using the group's keys. It only does this if the children are used outside of an aggregate.
Second answer, now I've reread the original question. Are you sure the SQL you've shown is actually correct? It won't give the top 5 groups within each theme - it'll match each theme just against the top 5 groups *overall*. In short, I suspect you'll get your original SQL if you use: ``` var q = from t in dc.Themes join g in dc.Groups.OrderBy(z => z.TotalMembers).Take(5) on t.K equals g.ThemeK into groups select new { t.Name, Groups = groups }; ``` But I don't think that's what you actually want...
Can take be used in a query expression in c# linq instead of using .Take(x)?
[ "", "c#", "linq", "linq-to-sql", "" ]
I'm working on a program that processes many requests, none of them reaching more than 50% of CPU (**currently I'm working on a dual core**). So I created a thread for each request, and the whole process is faster. Processing 9 requests, a single thread takes 02min08s, while with 3 threads working simultaneously the time decreased to 01min37s — but it still doesn't use 100% CPU, only around 50%. How can I allow my program to use the processors' full capability? **EDIT** The application isn't IO- or memory-bound; both stay at reasonable levels all the time. I think it has something to do with the 'dual core' thing. There is a locked method invocation that every request uses, but it is really fast; I don't think this is the problem. The most CPU-costly part of my code is the call to a DLL via COM (the same external method is called from all threads). This DLL is also not memory- or IO-bound; it is an AI recognition component — I'm doing OCR recognition of paychecks, one paycheck per request. **EDIT2** It is very probable that the STA COM method is my problem; I have contacted the component owners in order to solve it.
Do you have significant locking within your application? If the threads are waiting for each other a lot, that could easily explain it. Other than that (and the other answers given), it's very hard to guess, really. A profiler is your friend... EDIT: Okay, given the comments below, I think we're onto something: > The more cpu-costly part of my code is > the call of a dll via COM (the same > external method is called from all > threads). Is the COM method running in an STA by any chance? If so, it'll only use one thread, serializing calls. I strongly suspect that's the key to it. It's similar to having a lock around that method call (not quite the same, admittedly).
The problem is the COM object. Most COM objects run in the context of a 'single-threaded apartment'. (You may have seen a [STAThread] attribute on the main method of a .NET application from time to time?) Effectively this means that all dispatches to that object are handled by a single thread. Throwing more cores at the problem just gives you more resources that can sit around and wait or do other things in .NET. You might want to take a look at this article from Joe Duffy (the head parallel .NET guy at Microsoft) on the topic. <http://www.bluebytesoftware.com/blog/PermaLink,guid,8c2fed10-75b2-416b-aabc-c18ce8fe2ed4.aspx> In practice, if you have to do a bunch of things against a single COM object like this you are hosed, because .NET will just serialize access internally behind your back. If you can create multiple COM objects and use them, then you can resolve the issue, because each can be created and accessed from a distinct STA thread. This will work until you hit about 100 STA threads, then things will go wonky. For details, see the article.
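The serializing effect of an STA can be sketched in a few lines: every call, no matter which thread submits it, runs on one dedicated thread. This Python sketch is purely an illustration of the mechanism (it is not COM), showing why adding worker threads cannot speed up calls into such an object:

```python
import queue
import threading

class SingleThreadedApartment:
    """Runs every submitted call on one dedicated thread, like an STA."""

    def __init__(self):
        self._calls = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            func, args, result = self._calls.get()
            if func is None:                       # shutdown sentinel
                break
            # Record which thread actually executed the call.
            result.append((threading.get_ident(), func(*args)))
            self._calls.task_done()

    def invoke(self, func, *args):
        result = []
        self._calls.put((func, args, result))
        self._calls.join()                         # block until the worker ran it
        return result[0]

    def shutdown(self):
        self._calls.put((None, None, None))
```

However many threads call `invoke()`, the work is funneled through `_worker` — exactly the bottleneck described in the answer.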
Why doesn't multithreading in C# reach 100% CPU?
[ "", "c#", ".net", "multithreading", "multicore", "" ]
We use Tomcat to host our WAR-based applications. They are servlet-container-compliant J2EE applications, with the exception of org.apache.catalina.authenticator.SingleSignOn. We are being asked to move to a commercial Java EE application server. 1. The first downside to changing that I see is the cost. No matter what the charges for the application server, Tomcat is free. 2. Second is the complexity. We don't use either EJB nor EAR features (of course not, we can't), and have not missed them. What then are the benefits I'm not seeing? What are the drawbacks that I haven't mentioned? --- Mentioned were... 1. JTA - Java Transaction API - We control transactions via database stored procedures. 2. JPA - Java Persistence API - We use JDBC and again stored procedures to persist. 3. JMS - Java Message Service - We use XML over HTTP for messaging. This is good, please more!
Unless you want EJB proper, you don't need a full-stack J2EE server (commercial or not). You can have most J2EE features (such as JTA, JPA, JMS, JSF) with no full-stack J2EE server. The only benefit of a full-stack J2EE server is that the container will manage all these on your behalf declaratively. With the advent of EJB3, if you need container-managed services, using one is a good thing. You can also have no-cost full-stack servers such as GlassFish, Geronimo or JBoss. You can also run embedded J2EE container-managed services, with embedded GlassFish for example, right inside Tomcat. You may want an EJB container if you want to use session beans, message beans and timer beans nicely managed for you, even with clustering and failover. I would suggest to management to consider upgrades based on feature need. Some of these EJB containers may well use embedded Tomcat as their webserver, so what gives! Some managers just like to pay for things. Ask them to consider a city shelter donation or just go for BEA.
When we set out with the goal to Java EE 6 certify Apache Tomcat as [Apache TomEE](http://tomee.apache.org/comparison.html), here are some of the gaps we had to fill in order to finally pass the Java EE 6 TCK. Not a complete list, but some highlights that might not be obvious even with the existing answers. ## No TransactionManager Transaction Management is definitely required for any certified server. In any web component (servlet, filter, listener, JSF managed bean) you should be able to get a `UserTransaction` injected like so: * `@Resource UserTransaction transaction;` You should be able to use the `javax.transaction.UserTransaction` to create transactions. All the resources you touch in the scope of that transaction should be enrolled in that transaction. This includes, but is not limited to, the following objects: * `javax.sql.DataSource` * `javax.persistence.EntityManager` * `javax.jms.ConnectionFactory` * `javax.jms.QueueConnectionFactory` * `javax.jms.TopicConnectionFactory` * `javax.ejb.TimerService` For example, if in a servlet you start a transaction then: * Update the database * Fire a JMS message to a topic or queue * Create a Timer to do work at some later point .. and then one of those things fails or you simply choose to call `rollback()` on the `UserTransaction`, then all of those things are undone. ## No Connection Pooling To be very clear there are two kinds of connection pooling: * Transactionally aware connection pooling * Non-transactionally aware connection pooling The Java EE specs do not strictly require connection pooling, however if you have connection pooling, it should be transaction aware or you will lose your transaction management. What this means is basically: * Everyone in the same transaction should have the same connection from the pool * The connection should not be returned to the pool until the transaction completes (commit or rollback), regardless of whether someone called `close()` or any other method on the `DataSource`.
A common library used in Tomcat for connection pooling is commons-dbcp. We wanted to also use this in TomEE; however, it did not support transaction-aware connection pooling, so we actually added that functionality into commons-dbcp (yay, Apache) and it is there as of commons-dbcp version 1.4. Note that adding commons-dbcp to Tomcat is still not enough to get transactional connection pooling. You still need the transaction manager and you still need the container to do the plumbing of registering connections with the `TransactionManager` via `Synchronization` objects. In Java EE 7 there's talk of adding a standard way to encrypt DB passwords and package them with the application in a secure file or external storage. This will be one more feature that Tomcat will not support. ## No Security Integration WebServices security, JAX-RS SecurityContext, EJB security, JAAS login and JACC are all security concepts that by default are not "hooked up" in Tomcat even if you individually add libraries like CXF, OpenEJB, etc. These APIs are all of course supposed to work together in a Java EE server. There was quite a bit of work we had to do to get all these to cooperate, and to do it on top of the Tomcat `Realm` API so that people could use all the existing Tomcat `Realm` implementations to drive their "Java EE" security. It's really still Tomcat security, it's just very well integrated. ## JPA Integration Yes, you can drop a JPA provider into a .war file and use it without Tomcat's help.
With this approach you will not get: * `@PersistenceUnit EntityManagerFactory` injection/lookup * `@PersistenceContext EntityManager` injection/lookup * An `EntityManager` hooked up to a transaction-aware connection pool * JTA-managed `EntityManager` support * Extended persistence contexts JTA-managed `EntityManager` basically means that two objects in the same transaction that wish to use an `EntityManager` will both see the same `EntityManager`, and there is no need to explicitly pass the `EntityManager` around. All this "passing" is done for you by the container. How is this achieved? Simple: the `EntityManager` you got from the container is a fake. It's a wrapper. When you use it, it looks in the current transaction for the real `EntityManager` and delegates the call to that `EntityManager`. This is the reason for the mysterious `EntityManager.getDelegate()` method, so users can get the **real** EntityManager if they want and make use of any non-standard APIs. Do so with great care, of course, and never keep a reference to the delegate `EntityManager` or you will have a serious memory leak. The delegate `EntityManager` will normally be flushed, closed, cleaned up and discarded when a transaction completes. If you're still holding onto a reference, you will prevent garbage collection of that `EntityManager` and possibly all the data it holds. * It's always safe to hold a reference to an `EntityManager` you got from the container * It's not safe to hold a reference to `EntityManager.getDelegate()` * Be very careful holding a reference to an `EntityManager` you created yourself via an `EntityManagerFactory` -- you are 100% responsible for its management. ## CDI Integration I don't want to oversimplify CDI, but I find it is a little too big and many people have not taken a serious look -- it's on the "someday" list for many people :) So here are just a couple of highlights that I think a "web guy" would want to know about.
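The "fake" `EntityManager` described above is ordinary delegation keyed on the current transaction. A rough language-neutral sketch of that mechanism, in Python — every name here is invented for illustration; this is not a JPA API:

```python
class TransactionScopedProxy:
    """Delegates every call to the 'real' object owned by the current transaction."""

    def __init__(self, factory, current_transaction):
        self._factory = factory                          # creates the real delegate
        self._current_transaction = current_transaction  # returns a transaction key
        self._delegates = {}                             # transaction key -> real object

    def get_delegate(self):
        # The analogue of EntityManager.getDelegate(): the real, per-transaction object.
        tx = self._current_transaction()
        if tx not in self._delegates:
            self._delegates[tx] = self._factory()
        return self._delegates[tx]

    def __getattr__(self, name):
        # Look up the real object for the current transaction, then forward the call.
        return getattr(self.get_delegate(), name)

    def end_transaction(self):
        # On commit/rollback the container discards the delegate — which is why
        # holding a reference obtained from get_delegate() leaks it.
        self._delegates.pop(self._current_transaction(), None)
```

Two callers inside the same "transaction" transparently hit the same underlying object; a new transaction gets a fresh one.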
You know all the putting and getting you do in a typical webapp? Pulling things in and out of `HttpSession` all day? Using `String` for the key and continuously casting objects you get from the `HttpSession`? You've probably got utility code to do that for you. CDI has this utility code too; it's called `@SessionScoped`. Any object annotated with `@SessionScoped` gets put and tracked in the `HttpSession` for you. You just request the object to be injected into your Servlet via `@Inject FooObject` and the CDI container will track the "real" FooObject instance in the same way I described the transactional tracking of the `EntityManager`. Abracadabra, now you can delete a bunch of code :) Doing any `getAttribute` and `setAttribute` on `HttpServletRequest`? Well, you can delete that too with `@RequestScoped` in the same way. And of course there is `@ApplicationScoped` to eliminate the `getAttribute` and `setAttribute` calls you might be doing on `ServletContext`. To make things even cooler, any object tracked like this can implement a `@PostConstruct` method which gets invoked when the bean gets created and a `@PreDestroy` method to be notified when said "scope" is finished (the session is done, the request is over, the app is shutting down). CDI can do a lot more, but that's enough to make anyone want to re-write an old webapp. ## Some picky things There are some things added in Java EE 6 that are in Tomcat's wheelhouse that were not added. They don't require big explanations, but did account for a large chunk of the "filling in the gaps".
* Support for `@DataSourceDefinition` * Support for Global JNDI (`java:global`, `java:app`, `java:module`) * Enum injection via `@Resource MyEnum myEnum` and * Class injection via `@Resource Class myPluggableClass` and * Support for `@Resource(lookup="foo")` Minor points, but it can be incredibly useful to define a `DataSource` in the app in a portable way, share JNDI entries between webapps, and have the simple power to say "look this thing up and inject it". ## Conclusion As mentioned, not a complete list. No mention of EJB, JMS, JAX-RS, JAX-WS, JSF, Bean Validation and other useful things. But at least some idea of the things often overlooked when people talk about what Tomcat is and is not. Also be aware that what you might have thought of as "Java EE" might not match the actual definition. With the Web Profile, Java EE has shrunk. This was **deliberate**, to address "Java EE is too heavy and I don't need all that". If you cut EJB out of the Web Profile, here's what you have left: * Java Servlets * Java ServerPages (JSP) * Java ServerFaces (JSF) * Java Transaction API (JTA) * Java Persistence API (JPA) * Java Contexts and Dependency Injection (CDI) * Bean Validation It's a pretty darn useful stack.
Besides EAR and EJB, what do I get from a Java EE app server that I don't get in a servlet container like Tomcat?
[ "", "java", "tomcat", "jakarta-ee", "application-server", "" ]
Is there an official C# guideline for the order of items in terms of class structure? Does it go: * Public Fields * Private Fields * Properties * Constructors * Methods ? I'm curious if there is a hard and fast rule about the order of items? I'm kind of all over the place. I want to stick with a particular standard so I can do it everywhere. The real problem is my more complex properties end up looking a lot like methods and they feel out of place at the top before the constructor. Any tips/suggestions?
According to the [StyleCop Rules Documentation](https://github.com/DotNetAnalyzers/StyleCopAnalyzers/blob/master/documentation/SA1201.md) the ordering is as follows. Within a class, struct or interface: (SA1201 and SA1203) * Constant Fields * Fields * Constructors * Finalizers (Destructors) * Delegates * Events * Enums * Interfaces (*interface implementations*) * Properties * Indexers * Methods * Structs * Classes Within each of these groups order by access: (SA1202) * public * internal * protected internal * protected * private Within each of the access groups, order by static, then non-static: (SA1204) * static * non-static Within each of the static/non-static groups of fields, order by readonly, then non-readonly : (SA1214 and SA1215) * readonly * non-readonly An unrolled list is 130 lines long, so I won't unroll it here. The methods part unrolled is: * public static methods * public methods * internal static methods * internal methods * protected internal static methods * protected internal methods * protected static methods * protected methods * private static methods * private methods The documentation notes that if the prescribed order isn't suitable - say, multiple interfaces are being implemented, and the interface methods and properties should be grouped together - then use a partial class to group the related methods and properties together.
Rather than grouping by visibility or by type of item (field, property, method, etc.), how about grouping by functionality?
Order of items in classes: Fields, Properties, Constructors, Methods
[ "", "c#", ".net", "code-cleanup", "code-structure", "" ]
I've got two ListBox'es that are databound to the same BindingList. The issue is that when changing the selected item from the GUI it's changing the position in the BindingList and then the BindingList signals the other ListBox to change its selected item. So I've got the two ListBoxes Selected Item also synchronized which is not good for me. I'd like to maintain the list of items in sync. without the cursor position. How do I disable that cursor so it's not maintained? sample code (just add two ListBoxes to the Form at design time and register the SelectedIndexChanged events and register the button click event with a button): ``` public partial class Form1 : Form { BindingList<string> list = new BindingList<string>(); public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { list.Add("bla1"); list.Add("bla2"); list.Add("bla3"); this.listBox1.DataSource = list; this.listBox2.DataSource = list; } private void listBox1_SelectedIndexChanged(object sender, EventArgs e) { if (listBox1.SelectedIndex != -1) System.Diagnostics.Trace.WriteLine("ListBox1: " + listBox1.SelectedItem.ToString()); } private void listBox2_SelectedIndexChanged(object sender, EventArgs e) { if (listBox2.SelectedIndex != -1) System.Diagnostics.Trace.WriteLine("ListBox2: " + listBox2.SelectedItem.ToString()); } // Register this event to a button private void button1_Click(object sender, EventArgs e) { list.Add("Test"); } } ``` Thanks, --Ran.
Add this line to `Form_Load`: ``` this.listBox1.BindingContext = new BindingContext(); ```
Declaring listBox1 and listBox2 to be of the following type seems to result in the desired behaviour. ``` class MyListBox: ListBox { protected override void OnSelectedIndexChanged (EventArgs a) { if (DataManager != null) { DataManager.SuspendBinding(); } } } ``` Regards, tamberg
C# Binding: How can I disable the CurrencyManager in BindingList so Current Item position is not maintained and not signaled?
[ "", "c#", "data-binding", "binding", "currencymanager", "" ]
I was taking a look through some open-source C++ code and I noticed a lot of double underscores used within the code, mainly at the start of variable names. ``` return __CYGWIN__; ``` Just wondering: Is there a reason for this, or is it just some people's coding style? I would think that it makes the code hard to read.
From [Programming in C++, Rules and Recommendations](http://www.doc.ic.ac.uk/lab/cplus/c++.rules/chap5.html): > The use of two underscores (`__`) in identifiers is reserved for the compiler's internal use according to the ANSI-C standard. > > Underscores (`_`) are often used in names of library functions (such as "_main" and "_exit"). In order to avoid collisions, do not begin an identifier with an underscore.
Unless they feel that they are "part of the implementation", i.e. the standard libraries, then they shouldn't. The rules are fairly specific, and are slightly more detailed than some others have suggested. All identifiers that contain a double underscore or start with an underscore followed by an uppercase letter are reserved for the use of the implementation at all scopes, i.e. they might be used for macros. In addition, all other identifiers which start with an underscore (i.e. not followed by another underscore or an uppercase letter) are reserved for the implementation at the global scope. This means that you can use these identifiers in your own namespaces or in class definitions. This is why Microsoft use function names with a leading underscore and all in lowercase for many of their core runtime library functions which aren't part of the C++ standard. These function names are guaranteed not to clash with either standard C++ functions or user code functions.
Why do people use __ (double underscore) so much in C++
[ "", "c++", "syntax", "" ]
Is there a better/simpler way to find the number of images in a directory and store the count in a variable? ``` function dirCount($dir) { $x = 0; while (($file = readdir($dir)) !== false) { if (isImage($file)) { $x = $x + 1; } } return $x; } ``` This seems like such a long way of doing this; is there no simpler way? **Note:** The isImage() function returns true if the file is an image.
Check out the Standard PHP Library (aka SPL) for DirectoryIterator: ``` $dir = new DirectoryIterator('/path/to/dir'); $x = 0; foreach ($dir as $file) { $x += (isImage($file)) ? 1 : 0; } ``` (FYI there is an undocumented function called iterator\_count() but probably best not to rely on it for now, I would imagine. And you'd need to filter out unseen stuff like . and .. anyway.)
This will give you the count of what is in your dir. I'll leave the part about counting only images to you as I am about to fallll aaasssllleeelppppppzzzzzzzzzzzzz. ``` iterator_count(new DirectoryIterator('path/to/dir/')); ```
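For comparison, the same iterate-and-count idea in another language — a minimal Python sketch, where a naive extension check stands in for the question's `isImage()` function, and `.`/`..` noise never appears because `os.listdir` omits it:

```python
import os

IMAGE_EXTENSIONS = {'.png', '.jpg', '.jpeg', '.gif'}   # stand-in for isImage()

def count_images(path):
    count = 0
    for name in os.listdir(path):                      # omits '.' and '..' already
        full = os.path.join(path, name)
        if (os.path.isfile(full)
                and os.path.splitext(name)[1].lower() in IMAGE_EXTENSIONS):
            count += 1
    return count
```

Directories named like images (e.g. a folder called `d.png`) are excluded by the `isfile` check.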
Return total number of files within a folder using PHP
[ "", "php", "file", "count", "directory", "" ]
I need to store log files and configuration files for my application. Where is the best place to store them? Right now, I'm just using the current directory, which ends up putting them in the Program Files directory where my program lives. The log files will probably be accessed by the user somewhat regularly, so `%APPDATA%` seems a little hard to get to. Is a directory under `%USERPROFILE%\My Documents` the best? It needs to work for all versions of Windows, from 2000 forward.
If you're not using `ConfigurationManager` to manage your application and user settings, you should be. The configuration toolkit in the .NET Framework is remarkably well thought out, and the Visual Studio tools that interoperate with it are too. The default behavior of `ConfigurationManager` puts both invariant (application) and modifiable (user) settings in the right places: the application settings go in the application folder, and the user settings go in `System.Environment.SpecialFolder.LocalApplicationData`. It works properly under all versions of Windows that support .NET. As for log files, `System.Environment.SpecialFolder.LocalApplicationData` is generally the place that you want to put them, because it's guaranteed to be user-writeable. There are certainly cases where you wouldn't - for instance, if you want to write files to a network share so that you easily can access them remotely. There's a pretty wide range of ways to implement that, but most of them start with creating an application setting that contains the path to the shared folder. All of them involve administration. I have a couple of complaints about `ConfigurationManager` and the VS tools: there needs to be better high-level documentation than there is, and better documentation of the VS-generated `Settings` class. The mechanism by which the `app.config` file turns into the application configuration file in the target build directory is opaque (and the source of one of the most frequently asked questions of all: "what happened to my connection string?"). And if there's a way of creating settings that don't have default values, I haven't found it.
Note: You can get the path to the LocalApplicationData folder in .NET by using the following function: ``` string strPath=System.Environment.GetFolderPath(System.Environment.SpecialFolder.LocalApplicationData); ```
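Outside of .NET, the same special folder is reachable through environment variables. A rough Python sketch — the Windows variable names are standard, but the fallback chain here is illustrative, not authoritative:

```python
import os

def local_app_data_dir(app_name, env=None):
    """Return a per-user, writeable folder for app data/logs (Windows-style)."""
    env = os.environ if env is None else env
    base = (env.get('LOCALAPPDATA')                        # XP SP2+/Vista and later
            or env.get('APPDATA')                          # roaming fallback
            or os.path.join(env.get('USERPROFILE', '.'),   # last resort (Win2000-era)
                            'Local Settings', 'Application Data'))
    return os.path.join(base, app_name)
```

Passing an explicit `env` dict makes the lookup testable without touching the real environment.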
Best place to store configuration files and log files on Windows for my program?
[ "", "c#", "windows", "" ]
I have implemented a SAX parser in Java by extending the default handler. The XML has a ñ in its content. When it hits this character it breaks. I print out the char array in the characters method and it simply ends with the character before the ñ. The parser seems to stop after this, as no other methods are called even though there is still much more content; i.e. the endElement method is never called again. Has anyone run into this problem before, or have any suggestions on how to deal with it?
What's the encoding on the file? Make sure the file's encoding declaration matches it. Your parser may be defaulting to ASCII or ISO-8859-1. You can set the encoding like so: ``` <?xml version="1.0" encoding="UTF-8"?> ``` UTF-8 will cover that character; just make sure that's the encoding the file is actually in.
If you are saving your XMLs in ASCII, you can only use the lower half (first 128 characters) of the 8-bit character table. To include accented or other non-English characters in your XML, you will either have to save your XML in UTF-8 or escape your characters, like &#241; for ñ.
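Both options round-trip through any conforming XML parser. A quick check using Python's standard library — purely a demonstration that a numeric character reference and a declared UTF-8 encoding both yield the same ñ:

```python
import xml.etree.ElementTree as ET

# A pure-ASCII document can still carry ñ via a numeric character reference.
ascii_doc = b'<word>ma&#241;ana</word>'
assert ET.fromstring(ascii_doc).text == 'ma\xf1ana'        # 'mañana'

# The same text, declared and actually encoded as UTF-8, parses identically.
utf8_doc = '<?xml version="1.0" encoding="UTF-8"?><word>ma\xf1ana</word>'.encode('utf-8')
assert ET.fromstring(utf8_doc).text == 'ma\xf1ana'
```

The parser only honours the `encoding` declaration when it is handed raw bytes, which is why feeding bytes (not pre-decoded strings) to the parser matters.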
SAX parser breaking on ñ
[ "", "java", "xml", "encoding", "sax", "" ]
* What is the main difference between `int.Parse()` and `Convert.ToInt32()`? * Which one is to be preferred
* If you've got a string, and you expect it to always be an integer (say, if some web service is handing you an integer in string format), you'd use [**`Int32.Parse()`**](http://msdn.microsoft.com/en-us/library/system.int32.parse.aspx). * If you're collecting input from a user, you'd generally use [**`Int32.TryParse()`**](http://msdn.microsoft.com/en-us/library/system.int32.tryparse.aspx), since it allows you more fine-grained control over the situation when the user enters invalid input. * [**`Convert.ToInt32()`**](http://msdn.microsoft.com/en-us/library/System.Convert.ToInt32.aspx) takes an object as its argument. (See Chris S's answer for how it works) `Convert.ToInt32()` also does not throw `ArgumentNullException` when its argument is null the way `Int32.Parse()` does. That also means that `Convert.ToInt32()` is probably a wee bit slower than `Int32.Parse()`, though in practice, unless you're doing a very large number of iterations in a loop, you'll never notice it.
Have a look in reflector: **int.Parse("32"):** ``` public static int Parse(string s) { return System.Number.ParseInt32(s, NumberStyles.Integer, NumberFormatInfo.CurrentInfo); } ``` which is a call to: ``` internal static unsafe int ParseInt32(string s, NumberStyles style, NumberFormatInfo info) { byte* stackBuffer = stackalloc byte[1 * 0x72]; NumberBuffer number = new NumberBuffer(stackBuffer); int num = 0; StringToNumber(s, style, ref number, info, false); if ((style & NumberStyles.AllowHexSpecifier) != NumberStyles.None) { if (!HexNumberToInt32(ref number, ref num)) { throw new OverflowException(Environment.GetResourceString("Overflow_Int32")); } return num; } if (!NumberToInt32(ref number, ref num)) { throw new OverflowException(Environment.GetResourceString("Overflow_Int32")); } return num; } ``` **Convert.ToInt32("32"):** ``` public static int ToInt32(string value) { if (value == null) { return 0; } return int.Parse(value, CultureInfo.CurrentCulture); } ``` As the first (Dave M's) comment says.
What's the main difference between int.Parse() and Convert.ToInt32
[ "", "c#", "" ]
We have a Java application running on WebLogic server that picks up XML messages from a JMS or MQ queue and writes them into another JMS queue. The application doesn't modify the XML content in any way. We use BEA's XMLObject to read and write the messages into queues. The XML messages contain the encoding declaration UTF-8. We have an issue when the XML contains characters that are outside the normal ASCII range (like the £ symbol, for example). When the message is read from the queue we can see that the £ symbol is intact; however, once we write it to the destination queue, the £ symbol is lost and is replaced with £ instead. I have checked the OS-level settings (locale settings) and everything seems to be fine. What else should I be checking to make sure that this doesn't happen?
> once we write it to the destination queue, the £ symbol is lost and is replaced with £ instead That tells me the character is being *written* as UTF-8, but it's being *read* as if it were in a single-byte encoding like ISO-8859-1. (Encode any character in the U+00A0..U+00FF range as UTF-8 and decode the resulting two bytes as ISO-8859-1, and you end up with a two-character sequence beginning with **`Â`** or **`Ã`** — exactly the kind of garbage shown.) I would look at the encoding settings of the receiving JMS queue.
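The mismatch is easy to reproduce outside the JMS stack. This Python snippet (illustration only) shows how a two-byte UTF-8 sequence, read back as ISO-8859-1, turns into the familiar `Â…`/`Ã…` garbage — the lead byte is 0xC2 for code points below U+00C0 and 0xC3 above, hence the two possible first characters:

```python
pound = '\u00a3'                                    # '£'
mangled = pound.encode('utf-8').decode('iso-8859-1')
assert mangled == '\u00c2\u00a3'                    # 'Â£': lead byte 0xC2 reads as 'Â'

n_tilde = '\u00f1'                                  # 'ñ'
mangled = n_tilde.encode('utf-8').decode('iso-8859-1')
assert mangled == '\u00c3\u00b1'                    # 'Ã±': lead byte 0xC3 reads as 'Ã'
```

The fix is never to "repair" the text afterwards but to make the reader decode with the same charset the writer used.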
You should use `InputStream`, `OutputStream`, and `byte[]` to handle XML documents, not `Reader`, `Writer`, and `String`. In the world of JMS, `BytesMessage` is a better fit for XML payloads than `TextMessage`. Every XML document specifies its character encoding internally, and all XML processing APIs are oriented to take byte streams and where necessary figure out the correct character encoding to use themselves. The text-based APIs are only there… to confuse people, I guess! Anyway, applications should let the XML processor deal with character encoding issues, rather than trying to manage it themselves (or using a text-oriented API without a solid understanding of character-encoding issues).
Handling UTF-8 encoding
[ "", "java", "xml", "unicode", "encoding", "utf-8", "" ]
I just learned about [ngrep](http://ngrep.sourceforge.net/), a cool program that lets you easily sniff packets that match a particular string. The only problem is that it can be hard to see the match in the big blob of output. I'd like to write a wrapper script to highlight these matches -- it could use ANSI escape sequences: ``` echo -e 'This is \e[31mRED\e[0m.' ``` I'm most familiar with Perl, but I'm perfectly happy with a solution in Python or any other language. The simplest approach would be something like: ``` while (<STDIN>) { s/$keyword/\e[31m$keyword\e[0m/g; print; } ``` However, this isn't a nice solution, because ngrep prints out hash marks without newlines whenever it receives a non-matching packet, and the code above will suppress the printing of these hashmarks until the script sees a newline. Is there any way to do the highlighting without inhibiting the instant appearance of the hashmarks?
Ah, forget it. This is too much of a pain. It was a lot easier to get the source to ngrep and make it print the hash marks to stderr: ``` --- ngrep.c 2006-11-28 05:38:43.000000000 -0800 +++ ngrep.c.new 2008-10-17 16:28:29.000000000 -0700 @@ -687,8 +687,7 @@ } if (quiet < 1) { - printf("#"); - fflush(stdout); + fprintf (stderr, "#"); } switch (ip_proto) { ``` Then, filtering is a piece of cake: ``` while (<CMD>) { s/($keyword)/\e[93m$1\e[0m/g; print; } ```
This seems to do the trick, at least comparing two windows, one running a straight ngrep (e.g. ngrep whatever) and one being piped into the following program (with ngrep whatever | ngrephl target-string). ``` #! /usr/bin/perl use strict; use warnings; $| = 1; # autoflush on my $keyword = shift or die "No pattern specified\n"; my $cache = ''; while (read STDIN, my $ch, 1) { if ($ch eq '#') { $cache =~ s/($keyword)/\e[31m$1\e[0m/g; syswrite STDOUT, "$cache$ch"; $cache = ''; } else { $cache .= $ch; } } ```
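The same cache-until-delimiter idea ports to Python. This sketch deviates slightly from the Perl above: it also flushes on newlines, and it writes any trailing cached text at EOF rather than dropping it:

```python
import re

RED, RESET = '\x1b[31m', '\x1b[0m'

def highlight_stream(pattern, reader, writer):
    """Copy reader to writer, colouring pattern matches with ANSI escapes.

    '#' (and newlines) are passed through and flushed immediately, so
    ngrep's hash marks still appear in real time.
    """
    regex = re.compile(pattern)

    def emit(text):
        writer.write(regex.sub(lambda m: RED + m.group(0) + RESET, text))

    cache = ''
    while True:
        ch = reader.read(1)
        if not ch:                       # EOF
            break
        if ch in '#\n':                  # flush points
            emit(cache)
            writer.write(ch)
            writer.flush()
            cache = ''
        else:
            cache += ch
    emit(cache)                          # don't lose a trailing partial line
    writer.flush()
```

Wired up as a filter it would be called like `highlight_stream(sys.argv[1], sys.stdin, sys.stdout)` on the receiving end of the ngrep pipe.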
How can I write a wrapper around ngrep that highlights matches?
[ "", "python", "perl", "unix", "networking", "" ]
We're building some software for an in-house kiosk. The software is a basic .NET Windows Forms application with an embedded browser. The kiosk is outfitted with a mat that the user steps on. When the user steps on the mat, it sends a key combination through the keyboard. When the user steps off the mat it sends a different key combination. What we want to do is look for the key combination in our app and, based on whether the user steps on or off, cause the browser to go to a different URL. How do you hook the keyboard to accommodate this type of situation?
If your window is the active window, then you can simply override the forms ProcessCmdKey as such below. ``` protected override bool ProcessCmdKey(ref Message msg, Keys keyData) { const int WM_KEYDOWN = 0x100; const int WM_SYSKEYDOWN = 0x104; if ((msg.Msg == WM_KEYDOWN) || (msg.Msg == WM_SYSKEYDOWN)) { switch (keyData) { case Keys.Down: this.Text = "Down Arrow Captured"; break; case Keys.Up: this.Text = "Up Arrow Captured"; break; case Keys.Tab: this.Text = "Tab Key Captured"; break; case Keys.Control | Keys.M: this.Text = "<CTRL> + M Captured"; break; case Keys.Alt | Keys.Z: this.Text = "<ALT> + Z Captured"; break; } } return base.ProcessCmdKey(ref msg, keyData); } ```
I was doing a somewhat similar search just a short while ago regarding USB card readers. I came across [this article on CodeProject](http://www.codeproject.com/KB/system/rawinput.aspx) for handling raw input of devices. My main goal was differentiating multiple devices that act as keyboards. Your device may have a different interface. It also may have an SDK and documentation. We don't know.
Capturing key press messages
[ "", "c#", ".net", "winforms", "winapi", "" ]
As I understand, the pimpl idiom exists only because C++ forces you to place all the private class members in the header. If the header were to contain only the public interface, theoretically, any change in class implementation would not necessitate a recompile for the rest of the program. What I want to know is why C++ is not designed to allow such a convenience. Why does it demand that the private parts of a class be openly displayed in the header (no pun intended)?
This has to do with the size of the object. The h file is used, among other things, to determine the size of the object. If the private members are not given in it, then you would not know how large an object to new.

You can simulate, however, your desired behavior by the following:

```
class MyClass
{
public:
    // public stuff

private:
    #include "MyClassPrivate.h"
};
```

This does not enforce the behavior, but it gets the private stuff out of the .h file. On the down side, this adds another file to maintain. Also, in visual studio, the intellisense does not work for the private members - this could be a plus or a minus.
I think there is a confusion here. The problem is not about headers. Headers don't do anything (they are just ways to include common bits of source text among several source-code files). The problem, as much as there is one, is that class declarations in C++ have to define everything, public and private, that an instance needs to have in order to work. (The same is true of Java, but the way reference to externally-compiled classes works makes the use of anything like shared headers unnecessary.)

It is in the nature of common Object-Oriented Technologies (not just the C++ one) that someone needs to know the concrete class that is used and how to use its constructor to deliver an implementation, even if you are using only the public parts. The device in (3, below) hides it. The practice in (1, below) separates the concerns, whether you do (3) or not.

1. Use abstract classes that define only the public parts, mainly methods, and let the implementation class inherit from that abstract class. So, using the usual convention for headers, there is an abstract.hpp that is shared around. There is also an implementation.hpp that declares the inherited class and that is only passed around to the modules that implement methods of the implementation. The implementation.hpp file will #include "abstract.hpp" for use in the class declaration it makes, so that there is a single maintenance point for the declaration of the abstracted interface.

2. Now, if you want to enforce hiding of the implementation class declaration, you need to have some way of requesting construction of a concrete instance without possessing the specific, complete class declaration: you can't use new and you can't use local instances. (You can delete though.) Introduction of helper functions (including methods on other classes that deliver references to class instances) is the substitute.

3. Along with or as part of the header file that is used as the shared definition for the abstract class/interface, include function signatures for external helper functions. These functions should be implemented in modules that are part of the specific class implementations (so they see the full class declaration and can exercise the constructor). The signature of the helper function is probably much like that of the constructor, but it returns an instance reference as a result (this constructor proxy can return a NULL pointer and it can even throw exceptions if you like that sort of thing). The helper function constructs a particular implementation instance and returns it cast as a reference to an instance of the abstract class.

Mission accomplished.

Oh, and recompilation and relinking should work the way you want, avoiding recompilation of calling modules when only the implementation changes (since the calling module no longer does any storage allocations for the implementations).
Could C++ have not obviated the pimpl idiom?
[ "", "c++", "language-design", "compiler-theory", "pimpl-idiom", "" ]
The problem is quite basic. I have a JTable showing cached data from a database. If a user clicks in a cell for editing, I want to attempt a lock on that row in the database. If the lock does not succeed, I want to prevent editing. But I can't seem to find any clean way to accomplish this. Am I missing something?
Because you have to test on the click, you can't use the model's way of doing this, so you should try overriding the JTable's [public void changeSelection(int rowIndex, int columnIndex, boolean toggle, boolean extend)](http://java.sun.com/javase/6/docs/api/javax/swing/JTable.html) method. If the row is locked then don't call super.changeSelection and it should leave the row unselected.
Before editing/setting a value, the table model is asked via TableModel.isCellEditable(row, col) whether the cell is editable. Here you can implement your lock, and after TableModel.setValue(row, col, val) you should unlock it.

BUT: the lock operation could take a lot of time and make your UI unresponsive. And that is BAD.

Try a different approach. What about lazy fail? You lock the row, check the validity of the data and fail if the data are newer. If the data are OK, you write them down. UNLOCK.
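The isCellEditable() veto described above can be sketched as follows. This is a minimal stand-alone sketch: the lockedRows set is a hypothetical stand-in for the outcome of the real database lock attempt, which you would normally perform off the Event Dispatch Thread and cache before the edit gesture reaches the model:

```java
import java.util.HashSet;
import java.util.Set;
import javax.swing.table.DefaultTableModel;

public class LockCheckDemo {
    public static void main(String[] args) {
        // Hypothetical stand-in: rows whose DB lock could NOT be acquired.
        final Set<Integer> lockedRows = new HashSet<Integer>();
        lockedRows.add(1);

        DefaultTableModel model = new DefaultTableModel(3, 2) {
            @Override
            public boolean isCellEditable(int row, int column) {
                // Veto editing when the row lock was not obtained.
                return !lockedRows.contains(row);
            }
        };

        System.out.println(model.isCellEditable(0, 0));
        System.out.println(model.isCellEditable(1, 0));
    }
}
```

JTable consults exactly this method before starting a cell editor, so returning false here silently blocks the edit.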
How to do a check before allowing editing of a given row in a JTable
[ "", "java", "swing", "jtable", "" ]
What is the best way of working with calculated fields of Propel objects? Say I have an object "Customer" that has a corresponding table "customers" and each column corresponds to an attribute of my object. What I would like to do is: add a calculated attribute "Number of completed orders" to my object when using it on View A but not on Views B and C. The calculated attribute is a COUNT() of "Order" objects linked to my "Customer" object via ID. What I can do now is to first select all Customer objects, then iteratively count Orders for all of them, but I'd think doing it in a single query would improve performance. But I cannot properly "hydrate" my Propel object since it does not contain the definition of the calculated field(s). How would you approach it?
There are several choices. First is to create a view in your DB that will do the counts for you, similar to my answer [here](https://stackoverflow.com/questions/234785/#235267). I do this for a current Symfony project I work on where the read-only attributes for a given table are actually much, much wider than the table itself. This is my recommendation since grouping columns (max(), count(), etc.) are read-only anyway.

The other options are to actually build this functionality into your model. You absolutely CAN do this hydration yourself, but it's a bit complicated. Here are the rough steps:

1. Add the columns to your *Table* class as protected data members.
2. Write the appropriate getters and setters for these columns.
3. Override the hydrate method and, within it, populate your new columns with the data from other queries. Make sure to call parent::hydrate() as the first line.

However, this isn't much better than what you're talking about already. You'll still need *N* + 1 queries to retrieve a single record set. However, you can get creative in step #3 so that *N* is the number of calculated columns, not the number of rows returned.

Another option is to create a custom selection method on your *Table*Peer class:

1. Do steps 1 and 2 from above.
2. Write custom SQL that you will query manually via the Propel::getConnection() process.
3. Create the dataset manually by iterating over the result set, and handle custom hydration at this point so as to not break hydration when used by the doSelect processes.

Here's an example of this approach:

```
<?php

class TablePeer extends BaseTablePeer
{
    public static function selectWithCalculatedColumns()
    {
        // Do our custom selection, still using propel's column data constants
        $sql = "SELECT " . implode( ', ', self::getFieldNames( BasePeer::TYPE_COLNAME ) ) . "
                     , count(" . JoinedTablePeer::ID . ") AS calc_col
                  FROM " . self::TABLE_NAME . "
             LEFT JOIN " . JoinedTablePeer::TABLE_NAME . "
                    ON " . JoinedTablePeer::ID . " = " . self::FKEY_COLUMN;

        // Get the result set
        $conn = Propel::getConnection();
        $stmt = $conn->prepareStatement( $sql );
        $rs   = $stmt->executeQuery( array(), ResultSet::FETCHMODE_NUM );

        // Create an empty rowset
        $rowset = array();

        // Iterate over the result set
        while ( $rs->next() )
        {
            // Create each row individually
            $row = new Table();
            $startcol = $row->hydrate( $rs );

            // Use our custom setter to populate the new column
            $row->setCalcCol( $row->get( $startcol ) );
            $rowset[] = $row;
        }

        return $rowset;
    }
}
```

There may be other solutions to your problem, but they are beyond my knowledge. Best of luck!
I am doing this in a project now by overriding hydrate() and Peer::addSelectColumns() for accessing postgis fields:

```
// in peer
public static function locationAsEWKTColumnIndex()
{
    return GeographyPeer::NUM_COLUMNS - GeographyPeer::NUM_LAZY_LOAD_COLUMNS;
}

public static function polygonAsEWKTColumnIndex()
{
    return GeographyPeer::NUM_COLUMNS - GeographyPeer::NUM_LAZY_LOAD_COLUMNS + 1;
}

public static function addSelectColumns(Criteria $criteria)
{
    parent::addSelectColumns($criteria);
    $criteria->addAsColumn("locationAsEWKT", "AsEWKT(" . GeographyPeer::LOCATION . ")");
    $criteria->addAsColumn("polygonAsEWKT", "AsEWKT(" . GeographyPeer::POLYGON . ")");
}

// in object
public function hydrate($row, $startcol = 0, $rehydrate = false)
{
    $r = parent::hydrate($row, $startcol, $rehydrate);

    // load GIS info from DB IFF the location field is populated.
    // NOTE: These fields are either both NULL or both NOT NULL, so this IF is OK
    if ($row[GeographyPeer::locationAsEWKTColumnIndex()])
    {
        // load gis data from extra select columns. See GeographyPeer::addSelectColumns().
        $this->location_ = GeoPoint::PointFromEWKT($row[GeographyPeer::locationAsEWKTColumnIndex()]);
        $this->polygon_ = GeoMultiPolygon::MultiPolygonFromEWKT($row[GeographyPeer::polygonAsEWKTColumnIndex()]);
    }

    return $r;
}
```

There's something goofy with AddAsColumn() that I can't remember at the moment, but this does work. You can [read more about the AddAsColumn() issues](http://propel.phpdb.org/trac/ticket/681).
Symfony app - how to add calculated fields to Propel objects?
[ "", "php", "orm", "symfony1", "propel", "" ]
In C# there is the static property [Environment.Newline](http://msdn.microsoft.com/en-us/library/system.environment.newline.aspx) that changed depending on the running platform. Is there anything similar in Java?
**As of Java 7 (and Android API level 19):**

```
System.lineSeparator()
```

Documentation: [Java Platform SE 7](https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#lineSeparator%28%29)

---

**For older versions of Java, use:**

```
System.getProperty("line.separator");
```

See <https://java.sun.com/docs/books/tutorial/essential/environment/sysprop.html> for other properties.
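A quick sanity check (assuming a Java 7+ JDK) that the two calls agree, plus a typical use when assembling platform-correct multi-line text:

```java
public class NewlineDemo {
    public static void main(String[] args) {
        String sep = System.lineSeparator();                  // Java 7+
        String legacy = System.getProperty("line.separator"); // any version
        System.out.println(sep.equals(legacy));

        // Build a two-line string that uses the platform's separator.
        String msg = "first line" + sep + "second line";
        System.out.println(msg.split(sep, -1).length);
    }
}
```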
As of Java 7:

```
System.lineSeparator()
```

> [**Java API : System.lineSeparator**](http://docs.oracle.com/javase/7/docs/api/java/lang/System.html#lineSeparator%28%29)
>
> Returns the system-dependent line separator string. It always returns the same value - the initial value of the system property line.separator. On UNIX systems, it returns "\n"; on Microsoft Windows systems it returns "\r\n".
Is there a Newline constant defined in Java like Environment.Newline in C#?
[ "", "java", "" ]
I've recently been lifted out of the .Net world into the Java world and I miss my unit tests. Using Visual Studio I used NUnit and TestDriven.net to run my unit tests. What is a comparable system for Java Using Eclipse? I'm looking specifically for the plugins that will get me going, or a guide on how to do it. I'm aware that JUnit is what NUnit was initially based on, but I want to know the best way to integrate it into Eclipse as there seem to be a few plugins that do this and I don't have the time to play around with them all. **UPDATE** Okay I didn't know that JUnit was built into the IDE. Are there any plugins that make using JUnit any easier?
Using JUnit with eclipse is actually very easy. Just go to File->New... and select JUnit Test Case. Eclipse will handle adding the JUnit library and all of the imports.
Which version of Eclipse are you using? For as long as I remember (I've been using Eclipse since early 3.xs), Eclipse supports JUnit out of the box. You just: Right-click on a project -> Run As -> JUnit Test Does this not work for you?
What do you need to do Unit testing in Java with Eclipse?
[ "", "java", "eclipse", "unit-testing", "junit", "" ]
Does anyone know how I could programmatically disable/enable sleep mode on Windows Mobile? Thanks!
If you want your program to not be put to sleep while it's running, the best way is to create a KeepAlive type function that calls SystemIdleTimerReset, SHIdleTimerReset and simulates a key touch. Then you need to call it a lot, basically everywhere.

```
#include <windows.h>
#include <commctrl.h>

extern "C"
{
    void WINAPI SHIdleTimerReset();
};

void KeepAlive()
{
    static DWORD LastCallTime = 0;
    DWORD TickCount = GetTickCount();

    if ((TickCount - LastCallTime) > 1000 || TickCount < LastCallTime) // watch for wraparound
    {
        SystemIdleTimerReset();
        SHIdleTimerReset();
        keybd_event(VK_LBUTTON, 0, KEYEVENTF_SILENT, 0);
        keybd_event(VK_LBUTTON, 0, KEYEVENTF_KEYUP | KEYEVENTF_SILENT, 0);
        LastCallTime = TickCount;
    }
}
```

This method only works when the user starts the application manually. If your application is started by a notification (i.e. while the device is suspended), then you need to do more, or else your application will be suspended after a very short period of time until the user powers the device out of suspended mode. To handle this you need to put the device into unattended power mode.

```
if (!::PowerPolicyNotify (PPN_UNATTENDEDMODE, TRUE))
{
    // handle error
}

// do long running process

if (!::PowerPolicyNotify (PPN_UNATTENDEDMODE, FALSE))
{
    // handle error
}
```

During unattended mode use, you still need to call KeepAlive a lot; you can use a separate thread that sleeps for x milliseconds and calls the keep-alive function.

Please note that unattended mode does not bring the device out of sleep mode; it puts the device in a weird half-awake state. So if you start unattended mode while the device is in suspended mode, it will not wake up the screen or anything. All unattended mode does is stop WM from suspending your application. Also, the other problem is that it does not work on all devices; some devices' power management is not very good and it will suspend you anyway no matter what you do.
Modify [the Power Manager registry setting](http://msdn.microsoft.com/en-us/library/aa932196.aspx) that affects the specific sleep condition you want (timeout, battery, AC power, etc) and then SetEvent on a named system event called "PowerManager/ReloadActivityTimeouts" to tell the OS to reload the settings.
Disable sleep mode in Windows Mobile 6
[ "", "c++", "windows-mobile", "pocketpc", "" ]
```
function Submit_click()
{
    if (!bValidateFields())
        return;
}

function bValidateFields()
{
    /// <summary>Validation rules</summary>
    /// <returns>Boolean</returns>
    ...
}
```

So, when I type the call to my bValidateFields() function, IntelliSense in Visual Studio doesn't show my comments. But according to [this](http://weblogs.asp.net/scottgu/archive/2007/06/21/vs-2008-javascript-intellisense.aspx) it should. Should it?
I recall an issue where having turned off the Navigation Bar in VS stopped a lot of the JS intellisense from working properly. If you have it turned off, try turning the Navigation Bar on again and see if it helps. Edit: You may also have to do Ctrl+Shift+J to force the IDE to update the intellisense. Edit2: As @blub said, if there are any issues with the javascript, the intellisense can break. Visual Studio actually evaluates the javascript to create the intellisense, so if there are syntax errors it can fail and not build the intellisense completely, or at all.
The XML comments have to be inside the function, not above it. In Visual Studio 2008, the XML comment information is only displayed for files referenced with a /// <reference... item. Visual Studio 2010 will display XML comment information for functions in the file you are editing and for files you are referencing.
Visual Studio 2008 doesn't show my XML comments in JS files
[ "", "javascript", "visual-studio-2008", "comments", "intellisense", "" ]
What is the best way to do GUIs in [Clojure](http://en.wikipedia.org/wiki/Clojure)? Is there an example of some functional [Swing](http://en.wikipedia.org/wiki/Swing_%28Java%29) or [SWT](http://en.wikipedia.org/wiki/Standard_Widget_Toolkit) wrapper? Or some integration with [JavaFX](http://en.wikipedia.org/wiki/JavaFX) declarative GUI description which could be easily wrapped to [s-expressions](http://en.wikipedia.org/wiki/S-expression) using some macrology? Any tutorials?
I will humbly suggest [Seesaw](https://github.com/daveray/seesaw). [Here's a REPL-based tutorial](https://gist.github.com/1441520) that assumes no Java or Swing knowledge.

---

Seesaw's a lot like what @tomjen suggests. Here's "Hello, World":

```
(use 'seesaw.core)

(-> (frame :title "Hello"
           :content "Hello, Seesaw"
           :on-close :exit)
    pack!
    show!)
```

and here's @Abhijith and @dsm's example, translated pretty literally:

```
(ns seesaw-test.core
  (:use seesaw.core))

(defn handler
  [event]
  (alert event
         (str "<html>Hello from <b>Clojure</b>. Button "
              (.getActionCommand event) " clicked.")))

(-> (frame :title "Hello Swing"
           :on-close :exit
           :content (button :text "Click Me"
                            :listen [:action handler]))
    pack!
    show!)
```
Stuart Sierra recently published a series of blog posts on GUI-development with clojure (and swing). Start off here: <http://stuartsierra.com/2010/01/02/first-steps-with-clojure-swing>
What is the best way to do GUIs in Clojure?
[ "", "java", "user-interface", "lisp", "clojure", "" ]
I have a not-so-small class under development (it changes often) and I need to not provide a public copy constructor and copy assignment. The class has objects with value semantics, so default copy and assignment work. The class is in a hierarchy, with virtual methods, so I provide a virtual Clone() to avoid slicing and to perform "polymorphic copy". I don't want to declare copy assignment and construction protected AND define them (and maintain them in sync with changes) unless I have something special to perform. Does someone know if there's another way? Thanks! UgaSofT
An object from a polymorphic hierarchy, and with value semantics? Something is wrong here.

If you really do need your class to have value semantics, have a look at J. Coplien's Envelope-Letter Idiom, or at this article about Regular Objects [1].

[1] Sean Parent. “Beyond Objects”. Understanding The Software We Write. <http://stlab.adobe.com/wiki/index.php/Papers_and_Presentations>. C++ Connections. Nov 2005.

HTH,
I don't think there is anything in the C++ language that allows you to do this. Although I'd love to be wrong on this point. I've run into this in the past and come up with the following solution. Assume the class is C1.

1. Define a private inner class called Data
2. Put all of the members I would declare in C1 on Data instead
3. Define a protected copy constructor that just copies Data instances between C1.

This approach has a couple of downsides. Namely it feels a bit un-natural and eliminates direct field access (can be mitigated with small accessor functions). It's a roundabout way of doing what you're looking for but it avoids you having to write the copy constructor by hand.
Is there a way to declare the copy constructor non-public AND use the default copy constructor?
[ "", "c++", "constructor", "copy", "" ]
What do I have to consider **in database design** for a new application which should be able to support the most common relational database systems (SQL Server, MySQL, Oracle, PostgreSQL ...)? Is it even worth the effort? What are the pitfalls?
The short answer is to stick to features that are standardly, or close to standardly implemented. What this means in more detail is:

* Avoid anything that uses the database's procedural language (stored procedures or triggers) since this is where the huge differences between the systems come in. You may need to use them to emulate some features, but don't use them to create your own functionality.
* Separate auto-increment fields' sequences from the fields themselves. This will look a bit forced for MSSQL but will implement cleanly in Oracle, DB/2 etc without needing any emulation fixes.
* Keep char and varchar fields below the smallest maximum size for the set of engines you're aiming at.
* When you're writing queries use full JOIN syntax, and bracket the JOINs so that each join is between a single table and bracketed expression.
* Keep date handling logic in the code, not the queries, since a lot of the date functions are outside the standard. (For example: if you want to get stuff for the past two weeks calculate the date two weeks ago in code and use that in the query.)

Beyond that the effort involved shouldn't be too intimidating, so it may well be worth it.
If I were you, I'd **think hard about the return on your investment here**. It always *sounds* like a great idea to be able to hook up to any back end or to change back ends whenever you like, but this very rarely happens in The Real World in my experience. It might turn out that you may cover 95% of your potential customers by supporting just Oracle & SQL Server (or MySQL & SQL Server, or... etc.). **Do your research before going any further**, and good luck!
Database design for database-agnostic applications
[ "", "sql", "database", "database-design", "" ]
Using the PHP [pack()](http://www.php.net/pack) function, I have converted a string into a binary hex representation:

```
$string = md5(time()); // 32 character length
$packed = pack('H*', $string);
```

The H\* formatting means "Hex string, high nibble first". To unpack this in PHP, I would simply use the [unpack()](http://www.php.net/unpack) function with the H\* format flag. How would I unpack this data in Python?
There's an easy way to do this with the `binascii` module:

```
>>> import binascii
>>> print binascii.hexlify("ABCZ")
4142435a
>>> print binascii.unhexlify("4142435a")
ABCZ
```

Unless I'm misunderstanding something about the nibble ordering (high-nibble first is the default… anything different is insane), that should be perfectly sufficient!

Furthermore, Python's `hashlib.md5` objects have a `hexdigest()` method to automatically convert the MD5 digest to an ASCII hex string, so that this method isn't even necessary for MD5 digests. Hope that helps.
There's no corresponding "hex nibble" code for struct.pack, so you'll either need to manually pack into bytes first, like:

```
import struct

hex_string = 'abcdef12'

hexdigits = [int(x, 16) for x in hex_string]
data = ''.join(struct.pack('B', (high << 4) + low)
               for high, low in zip(hexdigits[::2], hexdigits[1::2]))
```

Or better, you can just use the hex codec. ie.

```
>>> data = hex_string.decode('hex')
>>> data
'\xab\xcd\xef\x12'
```

To unpack, you can encode the result back to hex similarly

```
>>> data.encode('hex')
'abcdef12'
```

However, note that for your example, there's probably no need to take the round-trip through a hex representation at all when encoding. Just use the md5 binary digest directly. ie.

```
>>> import md5
>>> x = md5.md5('some string')
>>> x.digest()
'Z\xc7I\xfb\xee\xc96\x07\xfc(\xd6f\xbe\x85\xe7:'
```

This is equivalent to your pack()ed representation. To get the hex representation, use the same encode method above:

```
>>> x.digest().encode('hex')
'5ac749fbeec93607fc28d666be85e73a'
>>> x.hexdigest()
'5ac749fbeec93607fc28d666be85e73a'
```

[Edit]: Updated to use better method (hex codec)
How can I unpack binary hex formatted data in Python?
[ "", "python", "binary", "hex", "" ]
How do I split strings in J2ME in an effective way? There is a [`StringTokenizer`](http://download.oracle.com/javase/1.4.2/docs/api/java/util/StringTokenizer.html) or [`String.split(String regex)`](http://download.oracle.com/javase/1.5.0/docs/api/java/lang/String.html#split%28java.lang.String%29) in the standard edition (J2SE), but they are absent in the micro edition (J2ME, MIDP).
There are a few implementations of a StringTokenizer class for J2ME. This one by [Ostermiller](http://ostermiller.org/utils/StringTokenizer.html) will most likely include the functionality you need. See also [this page on Mobile Programming Pit Stop](https://web.archive.org/web/20120206073031/http://mobilepit.com:80/09/using-stringtokenizer-in-j2me-javame-applications.html) for some modifications and the following example:

```
String firstToken;
StringTokenizer tok;

tok = new StringTokenizer("some|random|data", "|");
firstToken = tok.nextToken();
```
There is no built in method to split strings. You have to write it on your own using `String.indexOf()` and `String.substring()`. Not hard.
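A minimal sketch of that hand-rolled split. It compiles as standard Java so it can be run anywhere, but it deliberately sticks to Vector, indexOf() and substring(), all of which are available under CLDC/MIDP; the class and method names here are just for illustration:

```java
import java.util.Vector;

public class SplitDemo {
    // Split 'text' on the literal 'delimiter' using only CLDC-safe calls.
    static Vector split(String text, String delimiter) {
        Vector parts = new Vector();
        int start = 0;
        int pos = text.indexOf(delimiter, start);
        while (pos >= 0) {
            parts.addElement(text.substring(start, pos));
            start = pos + delimiter.length();
            pos = text.indexOf(delimiter, start);
        }
        parts.addElement(text.substring(start)); // trailing piece
        return parts;
    }

    public static void main(String[] args) {
        Vector v = split("some|random|data", "|");
        for (int i = 0; i < v.size(); i++) {
            System.out.println(v.elementAt(i));
        }
    }
}
```

Note that, unlike String.split(String regex), the delimiter here is a literal string, not a regular expression.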
How do I split strings in J2ME?
[ "", "java", "algorithm", "string", "java-me", "" ]
I wrote a managed C++ class that has the following function:

```
void EndPointsMappingWrapper::GetLastError(char* strErrorMessage)
{
    strErrorMessage = (char*) Marshal::StringToHGlobalAnsi(
        _managedObject->GetLastError()).ToPointer();
}
```

As you can see, this is a simple method to copy the managed string of the last error to the unmanaged world (`char*`). From my unmanaged class I call the method like this:

```
char err[1000];
ofer->GetLastError(err);
```

Putting a breakpoint at the managed C++ method shows that the string is successfully translated into the `char*`. However, once I return to the unmanaged class, the content of `err[1000]` is lost and it's empty again.
You are assigning the value of the passed parameter (strErrorMessage) instead of copying the content of the buffer returned by Marshal::StringToHGlobalAnsi to that address. A correct implementation should be:

```
void EndPointsMappingWrapper::GetLastError(char* strErrorMessage, int len)
{
    char *str = (char*) Marshal::StringToHGlobalAnsi(
        _managedObject->GetLastError()).ToPointer();

    strncpy(strErrorMessage, str, len);
    strErrorMessage[len - 1] = '\0';

    Marshal::FreeHGlobal(IntPtr(str));
}
```

The length is the size of the buffer passed. strncpy() will copy at most **len** bytes. If there is no null byte among the first **len** bytes of **str**, the destination string won't be null terminated. For that reason we force the '\0' in the last byte of the buffer.
We use the following C++ class to do the conversions for us and it works fine. You should be able to modify your method to use it.

**H File**

```
public ref class ManagedStringConverter
{
public:
    ManagedStringConverter( System::String^ pString );
    ~ManagedStringConverter();

    property char* PrimitiveString
    {
        char* get() { return m_pString; }
    }

    /// <summary>
    /// Converts a System::String to a char * string. You must release this with FreeString.
    /// </summary>
    static const char* StringToChar( System::String^ str );

    /// <summary>
    /// Converts a System::String to a __wchar_t * string. You must release this with FreeString.
    /// </summary>
    static const __wchar_t * StringToWChar( System::String^ str );

    /// <summary>
    /// Frees memory allocated in StringToChar()
    /// </summary>
    static void FreeString( const char * pszStr );

private:
    char* m_pString;
};
```

**CPP File**

```
ManagedStringConverter::ManagedStringConverter( System::String^ pString )
{
    m_pString = const_cast<char*>( ManagedStringConverter::StringToChar( pString ) );
}

ManagedStringConverter::~ManagedStringConverter()
{
    ManagedStringConverter::FreeString( m_pString );
}

// static
const char * ManagedStringConverter::StringToChar( System::String^ str )
{
    IntPtr^ ip = Marshal::StringToHGlobalAnsi( str );

    if ( ip != IntPtr::Zero )
    {
        return reinterpret_cast<const char *>( ip->ToPointer() );
    }
    else
    {
        return nullptr;
    }
}

// static
const __wchar_t * ManagedStringConverter::StringToWChar( System::String^ str )
{
    IntPtr^ ip = Marshal::StringToHGlobalUni( str );

    if ( ip != IntPtr::Zero )
    {
        return reinterpret_cast<const __wchar_t *>( ip->ToPointer() );
    }
    else
    {
        return nullptr;
    }
}

// static
void ManagedStringConverter::FreeString( const char * pszStr )
{
    IntPtr ip = IntPtr( (void *)pszStr );
    Marshal::FreeHGlobal( ip );
}
```
string to char* marshaling
[ "", ".net", "c++", "managed-c++", "" ]