I filled up a combobox with the values from an Enum. Now a combobox is text right? So I'm using a getter and a setter. I'm having problems reading the text. Here's the code: ``` public BookType type { get { return (BookType)Enum.Parse(typeof(BookType), this.typeComboBox.Text); } set { this.typeComboBox.Text = value.ToString(); } } ``` For some reason, `this.typeComboBox.Text` always returns an empty string when I select an item on the combobox. Does someone see what I'm doing wrong? EDIT: I have come to the conclusion that the problem lies in timing. The point in time at which I summon the text is indeed after I changed the combobox, but still before that value is parsed as a value. Problem fixed in a different way now, thanks for all the ideas.
I just created a simple windows form, and everything worked okay for me. Here is the code. ``` public enum Test { One, Two, Three } public partial class Form1 : Form { public Form1() { InitializeComponent(); this.comboBox1.DataSource = Enum.GetNames(typeof(Test)); } public Test Test { get { return (Test)Enum.Parse(typeof(Test), this.comboBox1.Text); } set { this.comboBox1.Text = value.ToString(); } } private void button1_Click(object sender, EventArgs e) { MessageBox.Show(this.Test.ToString()); this.Test = Test.Two; MessageBox.Show(this.Test.ToString()); } } ```
``` string selectedText = this.ComboBox.GetItemText(this.ComboBox.SelectedItem); ``` The GetItemText method examines the item and returns the display text of the object bound to that item.
Get the combobox text in C#
[ "c#", ".net", "winforms", "combobox" ]
In C++, how can I establish an SQL connection to store data in an SQL database?
You should have a look at the C preprocessors that traditionally exist for databases ([ecpg](http://www.postgresql.org/docs/8.3/interactive/ecpg.html) for PostgreSQL, [Pro\*C](http://www.cs.umbc.edu/help/oracle8/server.815/a68022/toc.htm) for Oracle ... which let you embed straight SQL directly in your source files) or the [native API](http://dev.mysql.com/doc/refman/6.0/en/apis-cplusplus.html) for MySQL. ECPG works with C++; that is/was not the case for some other preprocessors ...
If you are targeting Windows, then you might want to use ODBC.
How to create database connectivity through C++
[ "c++" ]
I'm writing a Python application that takes a command as an argument, for example: ``` $ python myapp.py command1 ``` I want the application to be extensible, that is, to be able to add new modules that implement new commands without having to change the main application source. The tree looks something like: ``` myapp/ __init__.py commands/ __init__.py command1.py command2.py foo.py bar.py ``` So I want the application to find the available command modules at runtime and execute the appropriate one. Python defines an `__import__()` function, which takes a string for a module name: > `__import__(name, globals=None, locals=None, fromlist=(), level=0)` > > The function imports the module `name`, potentially using the given `globals` and `locals` to determine how to interpret the name in a package context. The `fromlist` gives the names of objects or submodules that should be imported from the module given by `name`. > > Source: <https://docs.python.org/3/library/functions.html#__import__> So currently I have something like: ``` command = sys.argv[1] try: command_module = __import__("myapp.commands.%s" % command, fromlist=["myapp.commands"]) except ImportError: # Display error message command_module.run() ``` This works just fine, I'm just wondering if there is possibly a more idiomatic way to accomplish what we are doing with this code. Note that I specifically don't want to get into using eggs or extension points. This is not an open-source project and I don't expect there to be "plugins". The point is to simplify the main application code and remove the need to modify it each time a new command module is added. --- **See also:** [How do I import a module given the full path?](https://stackoverflow.com/questions/67631)
With Python older than 2.7/3.1, that's pretty much how you do it. For newer versions, see `importlib.import_module` for [Python 2](https://docs.python.org/2/library/importlib.html#importlib.import_module) and [Python 3](https://docs.python.org/3/library/importlib.html#importlib.import_module). Or using `__import__` you can import a list of modules by doing this: ``` >>> moduleNames = ['sys', 'os', 're', 'unittest'] >>> moduleNames ['sys', 'os', 're', 'unittest'] >>> modules = map(__import__, moduleNames) ``` (Note that in Python 3, `map` returns a lazy iterator, so wrap the call in `list()` to actually perform the imports.) Ripped straight from [Dive Into Python](http://web.archive.org/web/20120315061111/http://www.diveintopython.net/functional_programming/dynamic_import.html).
The [recommended](https://docs.python.org/3/library/functions.html#__import__) way for Python 2.7 and 3.1 and later is to use the [`importlib`](http://docs.python.org/3/library/importlib.html#importlib.import_module) module: > **`importlib.import_module(name, package=None)`** > > Import a module. The name argument specifies what module to import in absolute or relative terms (e.g. either `pkg.mod` or `..mod`). If the name is specified in relative terms, then the package argument must be set to the name of the package which is to act as the anchor for resolving the package name (e.g. `import_module('..mod', 'pkg.subpkg')` will import `pkg.mod`). e.g. ``` my_module = importlib.import_module('os.path') ```
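Applying `importlib` to the command-dispatch pattern from the question might look like the sketch below. The `myapp.commands` package is the question's hypothetical layout, so the demonstration at the end uses a stdlib package instead:

```python
import importlib

def load_command(name, package="myapp.commands"):
    """Import a command module by name; return None if it doesn't exist.

    "myapp.commands" is the hypothetical package layout from the
    question -- substitute your own package name.
    """
    try:
        return importlib.import_module(f"{package}.{name}")
    except ImportError:
        return None

# Demonstrated against the stdlib, since myapp.commands doesn't exist here:
import os
assert load_command("path", package="os") is os.path
assert load_command("no_such_command", package="os") is None
```

The caller would then check for `None` (unknown command) before invoking `module.run()`.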
How can I import a module dynamically given its name as string?
[ "python", "python-import" ]
I'm using a logging module that can have reporting enabled/disabled at runtime. Calls generally go something like: ``` WARN( "Danger Will Robinson! There are " + boost::lexical_cast<string>(minutes) + " minutes of oxygen left!" ); ``` I'm using an inline function for WARN, but I'm curious as to how much optimization is going on behind the scenes -- evaluation of the arguments throughout the entire program would be costly. The `WARN` function goes something like this: ``` bool WARNINGS_ENABLED = false; inline void WARN(const string &message) { if (!WARNINGS_ENABLED) { return; } // ... } ``` Given that constructing the string argument has no side-effects, will the compiler optimize it out? Is a certain level of optimization required (`-Ox` in `g++` for some `x`)?
If you need to be able to selectively enable and disable the warnings at run-time, the compiler will *not* be able to optimize out the call. What you need is to rename your *function* to `WARN2` and add a macro something like: ``` #define WARN(s) do {if (WARNINGS_ENABLED) WARN2(s);} while (false) ``` This will prevent the evaluation of s at run-time unless you have warnings enabled. The do-while stuff is a trick that allows it to be used anywhere in the code (naked statement, statement within a braced if-block, statement within an unbraced if-block, braced and unbraced while statements and so on).
You can check what GCC/G++ do by using the **-S** option. This will output the code before it actually gets assembled – see [gcc(1)](http://linux.die.net/man/1/gcc). GCC and G++ more or less behave the same in this case. So I first translated the code into C to make some further tests: ``` char WARNINGS_ENABLED = 0; inline void WARN(const char* message) { if (!WARNINGS_ENABLED) { return; } puts(message); } int main() { WARN("foo"); return 0; } ``` run **gcc -O3 -S file.c** and look into the output file '**file.s**'. You will see that GCC **didn't remove anything**! That's not what you asked for, but in order to give the compiler the opportunity to optimize that code out, you would have to make WARNINGS\_ENABLED **constant**. An alternative is to make it **static** and not change the value within that file. *But*: making it *static* has the side-effect that the symbol is not exported. ``` static const char WARNINGS_ENABLED = 0; inline void WARN(const char* message) { if (!WARNINGS_ENABLED) { return; } puts(message); } int main() { WARN("foo"); return 0; } ``` GCC then completely cleans up the code.
C++ compiler optimization of passed arguments
[ "c++", "optimization", "compiler-construction", "arguments" ]
How do I get: ``` id Name Value 1 A 4 1 B 8 2 C 9 ``` to ``` id Column 1 A:4, B:8 2 C:9 ```
**No CURSOR, WHILE loop, or User-Defined Function needed**. Just need to be creative with FOR XML and PATH. [Note: This solution only works on SQL 2005 and later. Original question didn't specify the version in use.] ``` CREATE TABLE #YourTable ([ID] INT, [Name] CHAR(1), [Value] INT) INSERT INTO #YourTable ([ID],[Name],[Value]) VALUES (1,'A',4) INSERT INTO #YourTable ([ID],[Name],[Value]) VALUES (1,'B',8) INSERT INTO #YourTable ([ID],[Name],[Value]) VALUES (2,'C',9) SELECT [ID], STUFF(( SELECT ', ' + [Name] + ':' + CAST([Value] AS VARCHAR(MAX)) FROM #YourTable WHERE (ID = Results.ID) FOR XML PATH(''),TYPE).value('(./text())[1]','VARCHAR(MAX)') ,1,2,'') AS NameValues FROM #YourTable Results GROUP BY ID DROP TABLE #YourTable ```
If you are on SQL Server 2017 or later (or Azure SQL Database), you can use `STRING_AGG` as below: ``` SELECT id, STRING_AGG(CONCAT(name, ':', [value]), ', ') FROM #YourTable GROUP BY id ```
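The shape of the result is easy to preview outside SQL Server. For instance, SQLite's `GROUP_CONCAT` (a different engine, used here only to illustrate the grouped-concatenation idea, not as a substitute for the answers above) produces the same layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, value INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, "A", 4), (1, "B", 8), (2, "C", 9)])

# One row per id, with name:value pairs concatenated per group.
rows = conn.execute("""
    SELECT id, GROUP_CONCAT(name || ':' || value, ', ')
    FROM t GROUP BY id ORDER BY id
""").fetchall()
print(rows)
```

Note that SQLite does not guarantee the order of elements inside each concatenated group, whereas the `FOR XML PATH` and `STRING_AGG ... WITHIN GROUP` forms can.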
How to use GROUP BY to concatenate strings in SQL Server?
[ "sql", "sql-server", "string-concatenation", "sql-server-group-concat" ]
I don't have much experience with databases, so I don't know which is better for long-term performance, better practice, etc. Here is my (hypothetical) case: imagine you have a database of customer information and history of purchase orders for each. You want to keep track of how much each customer is buying. I can think of two ways of calculating this: 1) Just do a SUM() every time it is needed. This is an easy solution, but the concern is that this database could be 20 years old with tens of thousands of rows for each customer. As more customer purchases are added to the database, the SUM() operation will take longer to calculate. 2) Keep the sum cached in the customer information table, and every time a new purchase is made (updated, deleted, etc), update this cache. That way no matter how many purchase orders there are, there won't be an increase in the calculation time. The downside is that this is a less flexible solution (only the sum over all rows; what about per-month sums or other intervals?); this cached value could get out of sync with the actual total somehow (technically shouldn't happen, but it might) So what should I be doing for this? I know I shouldn't be storing anything I can calculate from what's already in the database, but part of me doesn't like the fact that this type of calculation will get worse in time, and that there is some kind of elegance to option 2.
In database terms, there is no elegance in option 2 - it would be considered a hack that you might use as a last resort if your database gets to be truly humongous - not likely to happen for a novice setting it up for the first time (but possible). There would be a lot of work maintaining the totals; and you'd be forever dealing with the question: "Why doesn't the detail add up to the total?" Go for option 1 until you've proven you can't. Which will in most cases be a long time.
What you're describing in option #2 is a case of premature optimization. Using the SUM() of all purchases is going to work for a really long time (years). When (if) you start to see this function degrade, you can add indexes or a totals table to your database to speed things up. Don't complicate things when a simple solution exists. Of course, the *real* solution is to try both solutions with 20 years of made-up data and see if there's any real difference. I suspect there isn't.
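That experiment is cheap to run. A throwaway script along these lines (table and column names invented for the test; SQLite used as a stand-in engine) shows that an index on the customer column keeps per-customer `SUM()` lookups working as rows accumulate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (customer_id INTEGER, amount REAL)")

# 100 customers x 50 purchases of 10.0 each -- made-up data.
rows = [(cid, 10.0) for cid in range(100) for _ in range(50)]
conn.executemany("INSERT INTO purchases VALUES (?, ?)", rows)

# The index lets the engine find one customer's rows without a full scan.
conn.execute("CREATE INDEX idx_purchases_customer ON purchases(customer_id)")

total = conn.execute(
    "SELECT SUM(amount) FROM purchases WHERE customer_id = ?", (7,)
).fetchone()[0]
print(total)
```

Scaling the row counts up lets you time the indexed `SUM()` against a cached-total scheme before committing to the extra bookkeeping.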
use SUM() or caching
[ "sql", "performance" ]
Does anyone have a good solution for a C# version of the C++ \_\_FUNCTION\_\_ macro? The compiler does not seem to like it.
Try using this instead. ``` System.Reflection.MethodBase.GetCurrentMethod().Name ``` C# doesn't have `__LINE__` or `__FUNCTION__` macros like C++ does, but there are equivalents.
What I currently use is a function like this: ``` using System.Diagnostics; public string __Function() { StackTrace stackTrace = new StackTrace(); return stackTrace.GetFrame(1).GetMethod().Name; } ``` When I need \_\_FUNCTION\_\_, I just call \_\_Function() instead. For example: ``` Debug.Assert(false, __Function() + ": Unhandled option"); ``` Of course this solution uses reflection too, but it is the best option I can find. Since I only use it for Debugging (not Tracing in release builds) the performance hit is not important. I guess what I should do is create debug functions and tag them with ``` [Conditional("DEBUG")] ``` instead, but I haven't got around to that. Thanks to Jeff Mastry for his [solution](http://discuss.fogcreek.com/dotnetquestions/default.asp?cmd=show&ixPost=6163) to this.
C# version of __FUNCTION__ macro
[ "c#", ".net", "macros" ]
With .net 3.5, there is a SyndicationFeed that will load in a RSS feed and allow you to run LINQ on it. Here is an example of the RSS that I am loading: ``` <rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/"> <channel> <title>Title of RSS feed</title> <link>http://www.google.com</link> <description>Details about the feed</description> <pubDate>Mon, 24 Nov 08 21:44:21 -0500</pubDate> <language>en</language> <item> <title>Article 1</title> <description><![CDATA[How to use StackOverflow.com]]></description> <link>http://youtube.com/?v=y6_-cLWwEU0</link> <media:player url="http://youtube.com/?v=y6_-cLWwEU0" /> <media:thumbnail url="http://img.youtube.com/vi/y6_-cLWwEU0/default.jpg" width="120" height="90" /> <media:title>Jared on StackOverflow</media:title> <media:category label="Tags">tag1, tag2</media:category> <media:credit>Jared</media:credit> <enclosure url="http://youtube.com/v/y6_-cLWwEU0.swf" length="233" type="application/x-shockwave-flash"/> </item> </channel> ``` When I loop through the items, I can get back the title and the link through the public properties of SyndicationItem. I can't seem to figure out how to get the attributes of the enclosure tag, or the values of the media tags. I tried using ``` SyndicationItem.ElementExtensions.ReadElementExtensions<string>("player", "http://search.yahoo.com/mrss/") ``` Any help with either of these?
You're missing the namespace. Using [LINQPad](http://www.linqpad.net/ "LINQPad") and your example feed: ``` string xml = @" <rss version='2.0' xmlns:media='http://search.yahoo.com/mrss/'> <channel> <title>Title of RSS feed</title> <link>http://www.google.com</link> <description>Details about the feed</description> <pubDate>Mon, 24 Nov 08 21:44:21 -0500</pubDate> <language>en</language> <item> <title>Article 1</title> <description><![CDATA[How to use StackOverflow.com]]></description> <link>http://youtube.com/?v=y6_-cLWwEU0</link> <media:player url='http://youtube.com/?v=y6_-cLWwEU0' /> <media:thumbnail url='http://img.youtube.com/vi/y6_-cLWwEU0/default.jpg' width='120' height='90' /> <media:title>Jared on StackOverflow</media:title> <media:category label='Tags'>tag1, tag2</media:category> <media:credit>Jared</media:credit> <enclosure url='http://youtube.com/v/y6_-cLWwEU0.swf' length='233' type='application/x-shockwave-flash'/> </item> </channel> </rss> "; XElement rss = XElement.Parse( xml ); XNamespace media = "http://search.yahoo.com/mrss/"; var player = rss.Element( "channel" ).Element( "item" ).Element(media + "player").Attribute( "url" ); player.Dump(); ``` result: url="http://youtube.com/?v=y6\_-cLWwEU0" The construct to look at is: Element(media + "player") — that tells LINQ to use the namespace represented by 'media' as well as the element name 'player'. **Brain damage must be setting in on my part, I thought you were using Linq. Anyway, you need to take the namespace into consideration.**
This should give you an idea on how to do it: ``` using System.Linq; using System.ServiceModel.Syndication; using System.Xml; using System.Xml.Linq; ``` --- ``` SyndicationFeed feed = reader.Read(); foreach (var item in feed.Items) { foreach (SyndicationElementExtension extension in item.ElementExtensions) { XElement ele = extension.GetObject<XElement>(); Console.WriteLine(ele.Value); } } ```
Reading non-standard elements in a SyndicationItem with SyndicationFeed
[ "c#", "rss", "syndication-feed", "syndication-item" ]
I am working on my website, and I am trying to get the url parameter "page" which is an integer that tells which entry to read in the MySQL database that holds the HTML for all the pages. Here is my code, with the MySQL username and password removed for security reasons: ``` if ($_GET["page"]) { $con = mysql_connect("localhost","username","password"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("jmurano_pages", $con); $title=mysql_query("SELECT title FROM pageContent WHERE pageID=" . $_GET['page']); echo "<title>" . $title . "</title>\n"; echo "</head>\n"; echo "<body>\n"; $content = mysql_query("SELECT content FROM pageContent WHERE pageID=" . $_GET['page']); echo $content; echo "\n</body>\n</html>"; } ``` This puts the title as "Resource id #2" and the content as "Resource id #3". I can't think of what I may have done wrong. --- I'm still confused. I'm a complete PHP newbie. What exactly do I need to do to access the content and title?
Apart from the injection vulnerability (see John's answer) you should get the title from the mysql\_query using ``` $res = mysql_query("SELECT title FROM pageContent WHERE pageID=" . $escapedpage); $title = mysql_fetch_assoc($res); $title = $title['title']; $res2 = mysql_query("SELECT content FROM pageContent WHERE pageID=" . $escapedpage); $content = mysql_fetch_assoc($res2); $content = $content['content']; ``` However I think it would be wise to follow an online PHP/MySQL tutorial. **EDIT** even better would be to just use 1 mysql\_query like so: ``` $res = mysql_query("SELECT title, content FROM pageContent WHERE pageID=" . $escapedpage); $row = mysql_fetch_assoc($res); $title = $row['title']; $content = $row['content']; ``` That would save your script time and resources, since only one MySQL query is needed. This tutorial is rather good: <http://www.freewebmasterhelp.com/tutorials/phpmysql>
You've obviously got a lot to learn *(we all had to start somewhere!)*, so a single answer on SO won't be able to teach you everything, but here's a starter: When you run `mysql_query` on a SELECT query, it will return one of two things: * if there was an error in your query, it will return **false**. + Details about this error are available by calling `mysql_error()` * if the query was fine it will return a **resource** + Using this resource, you can call other mysql functions to find out information about the dataset you've just created with your SELECT. + `mysql_fetch_assoc()` will return an associative array of ONE row from your query. - Do this to see: `$row = mysql_fetch_assoc($resource); print_r($row);` + Call it again to retrieve the next row. + When there's no more rows, `mysql_fetch_assoc()` will return false. - Therefore you can write loops like this: `while ($row = mysql_fetch_assoc($resource)) { // do stuff }`
php url parameters
[ "php", "mysql" ]
``` void (int a[]) { a[5] = 3; // this is wrong? } ``` Can I do this so that the array that is passed in is modified? Sorry for deleting, a bit new here... I have another question which might answer my question: If I have ``` void Test(int a) { } void Best(int &a) { } ``` are these two statements equivalent? ``` Test(a); Best(&a); ```
``` void Test(int a[]) { a[5] = 3; } ``` is just alternate syntax for: ``` void Test(int* a) { *(a+5) = 3; } ``` No array is passed, just a pointer. The original array is modified. As for your second revision, given: ``` void Test(int a) { } void Best(int &a) { } ``` then ``` Test(aa); // Passes aa by value. Changes to a in Test() do not affect aa Best(aa); // Passes aa by reference; changes to a DO affect aa Best(&aa); // Compile error: passing a pointer instead of an int. ```
If you get the variable not by reference and not by pointer, it means that the function is essentially isolated, getting an ad-hoc copy of a. No matter what you do (without trying to hack the stack or things like that) you wouldn't have access to that value in the calling context. If you know something about the calling context, you may be able to do things based on some anticipation of stack contents, but it's generally a bad idea. If your method takes a[] which is essentially a\*, then yes, you can alter the contents of the cell that a points to, but you won't be able to alter a (the pointer) itself to point at something else.
C++ Parameter Reference
[ "c++", "parameters", "reference" ]
I have several C# projects along with some internal library components that I'm trying to document together. Sandcastle seems to be the place to go to generate documentation from C#. I would like to know which of the two, DocProject or Sandcastle Help File Builder GUI, is better and supports the features I need. I would like to compile only each project's own part of the document and then have it all integrated together in the end. (i.e. the library components in one documentation project and each project in its own documentation project, then all of the above in a single root using the Help 2 viewer)
I can vouch for Sandcastle Help File Builder. It works really well and you can document any number of assemblies within a Sandcastle Help File Builder project. In theory, you could have a Builder project and generate a doc for each C# project and then have a master Builder project which documents everything.
Here are some useful links for Sandcastle based .NET documentation: [Tutorial on Sandcastle](http://blogs.msdn.com/sandcastle/archive/2006/07/29/682398.aspx) [Sandcastle Help File Builder](http://www.codeplex.com/SHFB) (SHFB) [Tutorial on SHFB](http://www.codeproject.com/KB/cs/SandcastleBuilder.aspx) [Web Project Documentation](http://www.codeplex.com/Release/ProjectReleases.aspx?ProjectName=SandcastleStyles&ReleaseId=13517) [Tutorial on how to document Web Projects](http://www.codeproject.com/KB/vb/SandcastleXmlFileBuilder.aspx) (More manual, and I believe outdated given the previous link) [MSDN reference for XML Documentation](http://msdn.microsoft.com/en-us/library/ms172652.aspx)
DocProject vs Sandcastle Help File Builder GUI
[ "c#", "sandcastle" ]
Is there a tool out there which can convert SQL syntax to LINQ syntax? I just want to rewrite basic queries with join, etc., to [LINQ](http://en.wikipedia.org/wiki/Language_Integrated_Query). It would save me a lot of time.
Edit 7/17/2020: I cannot delete this accepted answer. It used to be good, but now it isn't. Beware really old posts, guys. I'm removing the link. **Linqer** is a SQL to LINQ converter tool. It helps you to learn LINQ and convert your existing SQL statements. Not every SQL statement can be converted to LINQ, but Linqer covers many different types of SQL expressions.
I know that this isn't what you asked for but [LINQPad](http://en.wikipedia.org/wiki/LINQPad) is a really great tool to teach yourself [LINQ](http://en.wikipedia.org/wiki/Language_Integrated_Query) (and it's free :o). When time isn't critical, I have been using it for the last week or so instead of a query window in [SQL Server](http://en.wikipedia.org/wiki/Microsoft_SQL_Server) and my LINQ skills are getting better and better. It's also a nice little code snippet tool. Its only downside is that the free version doesn't have [IntelliSense](https://en.wikipedia.org/wiki/Intelli-sense).
SQL to LINQ Tool
[ "sql", "linq" ]
I am writing a windows service. This service runs another process I've developed through Process class methods, but I want to run this process in debug mode as well, using breakpoints, for instance. How can I do this?
When debugging a service, DebugBreak() is very nice. You can even debug the startup of the service, which can be very hard to time if you try to attach to the process. In C# ``` #if DEBUG System.Diagnostics.Debugger.Break(); #endif ``` In C++ ``` #ifdef _DEBUG DebugBreak(); #endif ``` Also see the question: [how can I use debugbreak() in C#.](https://stackoverflow.com/questions/104235/how-can-i-use-debugbreak-in-c)
From the main menu, "Debug -> Attach to Process".
How to run another process at debug mode?
[ "c#", "visual-studio", "debugging", "visual-studio-2005" ]
Working with a traditional listener callback model. I have several listeners that collect various stuff. Each listener's collected stuff is inside the listener in internal structures. The problem is that I want some of the listeners to be aware of some of the "stuff" in the other listeners. I enforce listener registration order, so if I knowingly register events in some order a later listener can be sure that a previously registered listener updated its stuff and somehow access it to do more stuff. My first attempt at this is to have each listener store a reference to the listeners upon which it depends. So I register listeners in the order of those without dependencies to those with prior-registered dependencies, and then set the references between the listeners in various methods. I am starting to realize how bad this feels and I was wondering if someone has been down this road before. What would be a more appropriate pattern when one of the listeners needs to access stuff in another? Here is some pseudocode to illustrate: ``` interface Listener { onEvent(char e); } class A implements Listener { private int count; public void onEvent(char e) { if(e == 'a') count++; } public int getCount() { return count; } } class B implements Listener { private int count; // private A a; // private void setA(A a) { this.a = a; } public void onEvent(char e) { if(e == 'b') count++; } public int getCount() { return count; } public int getAPlusBCount() { // We know B count, but we don't know A so how would we change this // so B is A aware? Or not even aware, just somehow coupled? This // is the question // return a.getCount() + count; } public void doConditionalHere() { // Do some condition in B that relies on the state of data in A int acount = 0; // a.getCount(); ??? if(acount % 2 == 0) { this.count--; } } } class Run { A a = new A(); B b = new B(); List listeners = new List(); listeners.add(a); listeners.add(b); // The ugly way I add coupling right now is to keep a reference to A // inside B. It's commented out because I am hoping there is a more intelligent approach // b.setA(a); for(char c : "ababbabab") { for(listener : listeners) { listener.onEvent(c); } } } ```
Why not have a central object that will keep track of how many times the onEvent method was fired for all the listener classes ``` public interface CountObserver { public void updateCount(String className); public int getCount(String className); } public class CentralObserver implements CountObserver { private int aCount; private int bCount; public void updateCount(String className) { //There's probably a better way to do this than using //all these if-elses, but you'll get the idea. if (className.equals("AclassName")) { aCount++; } else if (className.equals("BclassName")) { bCount++; } } public int getCount(String className) { if (className.equals("AclassName")) { return aCount; } else if (className.equals("BclassName")) { return bCount; } return 0; // unknown listener class } } class A implements Listener { CountObserver countObserver; public void registerObserver (CountObserver countObserver) { this.countObserver = countObserver; } public void onEvent(char e) { if(e == 'a') { countObserver.updateCount(this.getClass().getName()); } } } //Same thing for B or any other class implementing Listener. Your Listener interface should, of //course, have a method signature for the registerObserver method which all the listener classes //will implement. class Run { private A a = new A(); private B b = new B(); private CentralObserver centralObserver; public void runProgram () { centralObserver = new CentralObserver(); a.registerObserver(centralObserver); b.registerObserver(centralObserver); //run onEvent method for A a couple of times, then for B } public int getAcount () { return centralObserver.getCount(a.getClass().getName()); } public int getBcount () { return centralObserver.getCount(b.getClass().getName()); } } //To get the sum of all the counts just call getAcount + getBcount. Of course, you can always add more listeners and more getXCount methods ```
You're describing a lot of coupling here. Best would be to eliminate all this back-channel dependency, but failing that maybe you could have those with dependencies listening not on the initial listener list, but on whatever they are dependent on. Or you could have them wait till they have all the signals. You could automate the dependency management by having the listeners identify who they are dependent upon. The listener list would be ordered not by insertion order, but to ensure dependent objects follow their dependencies. Your listener interface would look something like this: ``` interface Listener { String getId(); Collection<String> getDependencies(); void onEvent(char e); } ``` Or just have the references, like so: ``` interface Listener { Collection<Listener> getDependencies(); void onEvent(char e); } ```
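One further variant (not from either answer above, just a sketch of the shared-state idea): pass a single mutable context object to every listener, so no listener ever needs a reference to another one. Class and method names here are invented for illustration:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Each listener reads and writes shared counts through the context map,
// so cross-listener state is visible without direct references.
interface ContextListener {
    void onEvent(char e, Map<String, Integer> counts);
}

class CountingListener implements ContextListener {
    private final char match;
    private final String key;

    CountingListener(char match, String key) {
        this.match = match;
        this.key = key;
    }

    public void onEvent(char e, Map<String, Integer> counts) {
        if (e == match) counts.merge(key, 1, Integer::sum);
    }
}

public class SharedStateDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        List<ContextListener> listeners = List.of(
                new CountingListener('a', "a"),
                new CountingListener('b', "b"));

        // Same event stream as the question's pseudocode.
        for (char c : "ababbabab".toCharArray())
            for (ContextListener l : listeners) l.onEvent(c, counts);

        if (counts.get("a") != 4) throw new AssertionError(counts);
        if (counts.get("b") != 5) throw new AssertionError(counts);
        System.out.println(counts);
    }
}
```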
Proper coupling for multiple listeners that need to accessed shared state data
[ "java", "design-patterns", "listener" ]
I have a database with one table, like so: ``` UserID (int), MovieID (int), Rating (real) ``` The userIDs and movieIDs are large numbers, but my database only has a sample of the many possible values (4000 unique users, and 3000 unique movies) I am going to do a matrix SVD (singular value decomposition) on it, so I want to return this database as an ordered array. Basically, I want to return each user in order, and for each user, return each movie in order, and then return the rating for that user, movie pair, or null if that user did not rate that particular movie. example: ``` USERID | MOVIEID | RATING ------------------------- 99835 8847874 4 99835 8994385 3 99835 9001934 null 99835 3235524 2 . . . 109834 8847874 null 109834 8994385 1 109834 9001934 null etc ``` This way, I can simply read these results into a two dimensional array, suitable for my SVD algorithm. (Any other suggestions for getting a database of info into a simple two dimensional array of floats would be appreciated) It is important that this be returned in order so that when I get my two dimensional array back, I will be able to re-map the values to the respective users and movies to do my analysis.
``` SELECT m.UserID, m.MovieID, r.Rating FROM (SELECT a.userid, b.movieid FROM (SELECT DISTINCT UserID FROM Ratings) AS a, (SELECT DISTINCT MovieID FROM Ratings) AS b ) AS m LEFT OUTER JOIN Ratings AS r ON (m.MovieID = r.MovieID AND m.UserID = r.UserID) ORDER BY m.UserID, m.MovieID; ``` *Now tested and it seems to work!* The concept is to create the cartesian product of the list of UserID values in the Ratings table with the list of MovieID values in the Ratings table (ouch!), and then do an outer join of that complete matrix with the Ratings table (again) to collect the ratings values. This is **NOT** efficient. It might be effective. You might do better though to just run the plain simple select of the data, and arrange to populate the arrays as the data arrives. If you have many thousands of users and movies, you are going to be returning many millions of rows, but most of them are going to have nulls. You should treat the incoming data as a description of a sparse matrix, and first set the matrix in the program to all zeroes (or other default value), and then read the stream from the database and set just the rows that were actually present. That query is basically trivial: ``` SELECT UserID, MovieID, Rating FROM Ratings ORDER BY UserID, MovieID; ```
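The query above can be reproduced on a tiny dataset with SQLite (a different engine, but the cartesian-product-plus-outer-join pattern is the same; column names lowercased for the test):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (user_id INTEGER, movie_id INTEGER, rating REAL)")
conn.executemany("INSERT INTO ratings VALUES (?, ?, ?)",
                 [(1, 10, 4.0), (1, 20, 3.0), (2, 30, 9.0)])

# Cross join of distinct users and distinct movies, then outer join back
# to ratings: every (user, movie) pair appears, with NULL where unrated.
rows = conn.execute("""
    SELECT m.user_id, m.movie_id, r.rating
    FROM (SELECT a.user_id, b.movie_id
          FROM (SELECT DISTINCT user_id FROM ratings) AS a,
               (SELECT DISTINCT movie_id FROM ratings) AS b) AS m
    LEFT OUTER JOIN ratings AS r
      ON m.movie_id = r.movie_id AND m.user_id = r.user_id
    ORDER BY m.user_id, m.movie_id
""").fetchall()
print(rows)
```

With 2 users and 3 movies this returns all 6 ordered pairs, which is exactly the dense layout the SVD code wants to read row by row.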
Sometimes the best thing to do is refactor the table/normalize your data (if that is an option). Normalize the data structure: Users Table: (all distinct users) UserId, FirstName, LastName Movies Table: (all distinct movies) MovieId, Name UserMovieRatings: (ratings that users have given to movies) UserId, MovieId, Rating You can do a Cartesian join if you want every combination of users and movies and then use the UserMovieRatings table as needed. It's probably best to do the refactoring now before your system gets any more complicated. Take this time upfront and I'm positive any queries you need to make will come naturally...hope that helps... Sample Query: ``` select UserId, FirstName, LastName, MoveId, Name, cast(null as int) as Rating into #FinalResults from Users cross join Movies ``` ``` update #FinalResults set Rating = UMR.Rating from #FinalResults FR inner join UserMovieRatings UMR on FR.UserId = UMR.UserId and FR.MovieId = UMR.MovieId ```
Novice SQL query question for a movie ratings database
[ "sql", "database", "sql-server-2005" ]
I'm having a bit of a problem with converting the result of a MySQL query to a Java class when using SUM. When performing a simple SUM in MySQL ``` SELECT SUM(price) FROM cakes WHERE ingredient = 'chocolate'; ``` with `price` being an integer, it appears that the `SUM` sometimes returns a string and sometimes an integer, depending on the version of the JDBC driver. Apparently the server does tell the JDBC driver that the result of `SUM` is a string, and the JDBC driver sometimes 'conveniently' converts this to an integer. (see [Marc Matthews' explanation](http://marc.info/?l=mysql-java&m=116295785422117&w=2)). The Java code uses some [BeanInfo](http://java.sun.com/j2se/1.4.2/docs/api/java/beans/BeanInfo.html) and [Introspection](http://java.sun.com/j2se/1.4.2/docs/api/java/beans/Introspector.html) to automagically fill in a (list of) bean(s) with the result of a query. But this obviously can't work if the datatypes differ between servers where the application is deployed. I don't care wether I get a string or an integer, but I'd like to always have the same datatype, or at least know in advance which datatype I'll be getting. Is there some way to know which datatype will be returned by a MySQL `SUM` from within the Java code? Or does anyone know some better way to deal with this?
This is just a guess, but maybe casting to integer will force MySQL to always tell it is an integer. ``` SELECT CAST(SUM(price) AS SIGNED) FROM cakes WHERE ingredient = 'marshmallows'; ```
I have never worked with MySQL before so I cannot say why, but if you say: ``` ResultSet rs = statement.executeQuery("SELECT SUM(price) FROM cakes WHERE ingredient = 'chocolate'"); int sum = 0; if(rs.next()) sum = Integer.parseInt(rs.getString(1)); ``` Then you should not have a problem regardless of the returned datatype as the documentation says: > ### String getString(int columnIndex) throws SQLException > > > Retrieves the value of the designated column in the current row of this ResultSet object **as a String** in the Java programming language
Datatype of SUM result in MySQL
[ "java", "mysql", "sum", "return-type" ]
I was using GetWindowLong like this: ``` [DllImport("user32.dll")] private static extern IntPtr GetWindowLong(IntPtr hWnd, int nIndex); ``` But according to the MSDN docs I am supposed to be using GetWindowLongPtr to be 64bit compatible. <http://msdn.microsoft.com/en-us/library/ms633584(VS.85).aspx> The MSDN docs for GetWindowLongPtr say that I should define it like this (in C++): ``` LONG_PTR GetWindowLongPtr(HWND hWnd, int nIndex); ``` I used to be using IntPtr as the return type, but what the heck would I use for an equivalent for LONG\_PTR? I have also seen GetWindowLong defined as this in C#: ``` [DllImport("user32.dll")] private static extern long GetWindowLong(IntPtr hWnd, int nIndex); ``` What is right, and how can I ensure proper 64bit compatibility?
Unfortunately it's not that easy, because GetWindowLongPtr doesn't exist in 32bit Windows. On 32bit systems GetWindowLongPtr is just a C macro that points to GetWindowLong. If you really need to use GetWindowLongPtr on both 32 and 64 bit systems you'll have to determine the correct one to call at run time. See the description at [pinvoke.net](http://www.pinvoke.net/default.aspx/user32/GetWindowLongPtr.html)
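The "determine the correct one to call at run time" step the answer describes boils down to checking the process pointer size. A tiny illustrative sketch (in Python with ctypes rather than C#, and with a hypothetical helper name) of that dispatch decision:

```python
import ctypes

def window_long_export():
    # Hypothetical helper: in a 64-bit process bind GetWindowLongPtrW;
    # in a 32-bit process that export does not exist in user32.dll
    # (GetWindowLongPtr is only a C macro there), so fall back to
    # GetWindowLongW.
    if ctypes.sizeof(ctypes.c_void_p) == 8:
        return "GetWindowLongPtrW"
    return "GetWindowLongW"

name = window_long_export()
print(name)
```

The same pointer-size test (`IntPtr.Size == 8` in C#) is what a managed wrapper would use before choosing which P/Invoke signature to call.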
You should define GetWindowLongPtr using an IntPtr. In C/C++ a LONG\_PTR is 32-bits on a 32-bit system and 64-bits on a 64-bit system (see [here](http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx)). IntPtr in C# is designed to work the same way (see [here](http://msdn.microsoft.com/en-us/library/system.intptr(VS.71).aspx)). So what you want is: ``` [DllImport("user32.dll")] private static extern IntPtr GetWindowLongPtr(IntPtr hWnd, int nIndex); ```
GetWindowLong vs GetWindowLongPtr in C#
[ "c#", "getwindowlong" ]
Are there any good JavaScript frameworks out there which primary audience is not web programming? Especially frameworks/libraries which improves the object orientation? The framework should be usable within an desktop application embedding a JavaScript engine (such as Spidermonkey or JavaScriptCore), so no external dependency are allowed.
[Dojo](http://dojotoolkit.org/) can be used (and is used) in non-browser environments (e.g., Rhino, Jaxer, SpiderMonkey). It can be easily adapted for other environments too — all DOM-related functions are separated from functions dealing with global language features. [dojo.declare()](http://api.dojotoolkit.org/jsdoc/dojo/1.2/dojo.declare) ([more docs](http://docs.dojocampus.org/dojo/declare)) comes in the Dojo Base (as soon as you load dojo.js) and implements full-blown OOP with single- and multiple- inheritance, automatic constructor chaining, and super-calls. In fact it is the cornerstone of many Dojo facilities. Of course there are more low-level facilities like [dojo.mixin()](http://api.dojotoolkit.org/jsdoc/dojo/1.2/dojo.mixin) to mix objects together and [dojo.extend()](http://api.dojotoolkit.org/jsdoc/dojo/1.2/dojo.extend) to extend a prototype dynamically. More language-related features can be found in [dojox.lang](http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/lang/). Following parts of it are thoroughly explained and documented: [functional](http://lazutkin.com/blog/2008/jan/12/functional-fun-javascript-dojo/), [AOP](http://lazutkin.com/blog/2008/may/18/aop-aspect-javascript-dojo/), [recursion combinators](http://lazutkin.com/blog/2008/jun/30/using-recursion-combinators-javascript/). Dojo comes with other batteries included from string-related algorithms to the date processing. If you are interested in those [you can discover them yourself](http://docs.dojocampus.org/), or contact [the Dojo community](http://dojotoolkit.org/community/).
As far as "improving object orientation" goes, Javascript is already great. You just need to get used to thinking in prototypes instead of classes. After reading Douglas Crockford's [great page on prototypal inheritance](http://javascript.crockford.com/prototypal.html) I really started to enjoy working with javascript. (He also has [a page on class inheritance](http://javascript.crockford.com/inheritance.html) if you must use classes.) **Edit:** If by asking for a *framework* you also mean *helpful libraries that don't use the DOM*, you might be interested in [Functional Javascript](http://osteele.com/sources/javascript/functional/).
Non-web Javascript frameworks
[ "javascript", "frameworks", "desktop" ]
I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. **Usage scenario:** 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. **Current client socket design:** A `TcpListener` spawns a thread to read each client socket as clients connect. The threads block on `Socket.Receive`, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async `Socket.BeginSend` calls from the threads that talk to the exchange side. **Observed problems:** As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. *Note: I have the MSDN Magazine article [Winsock: Get Closer to the Wire with High-Performance Sockets in .NET](http://msdn2.microsoft.com/en-us/magazine/cc300760.aspx), and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.*
A lot of this has to do with many threads running on your system and the kernel giving each of them a time slice. The design is simple, but does not scale well. You probably should look at using Socket.BeginReceive, which will execute on the .NET thread pool (you can configure the number of threads it uses), and then push onto a queue from the asynchronous callback (which can be running in any of the .NET threads). This should give you much higher performance.
Socket I/O performance has improved in .NET 3.5 environment. You can use ReceiveAsync/SendAsync instead of BeginReceive/BeginSend for better performance. Chech this out: <http://msdn.microsoft.com/en-us/library/bb968780.aspx>
Tips / techniques for high-performance C# server sockets
[ "c#", ".net", "performance", "sockets" ]
In our database, we have a system set up to keep track of applications. We have a bool column that indicates whether or not the application is approved. Then there's another column that indicates whether or not the application is denied. If neither column is true, then the application is considered to be pending. Is there any easy way to merge those into one value (like say a tinyint or maybe a string that says "approved", "denied", or "pending") in a view? Or is this going to require something like a Table-valued function? **UPDATE:** It's difficult to choose an answer choose since they were all helpful. I'll go with baldy's since he posted first.
You could use a CASE statement in your query: `select case approved when 1 then 'Approved' else ...`. CASE statements can be nested, so you can delve into the different options. Why not rather use an int column with 3 distinct values, or you can even go as far as using one nullable bool column: when it is null the application is pending, 1 means approved and 0 denied.
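The CASE expression suggested above is portable across SQL dialects. As a quick runnable illustration (using SQLite from Python rather than the asker's SQL Server, purely because it runs anywhere — the expression itself is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE applications (id INTEGER, approved INTEGER, denied INTEGER)"
)
conn.executemany(
    "INSERT INTO applications VALUES (?, ?, ?)",
    [(1, 1, 0), (2, 0, 1), (3, 0, 0)],  # approved, denied, pending
)

# Collapse the two bool columns into one status column, as in the answer.
rows = conn.execute("""
    SELECT id,
           CASE WHEN approved = 1 THEN 'Approved'
                WHEN denied = 1 THEN 'Denied'
                ELSE 'Pending'
           END AS status
    FROM applications
    ORDER BY id
""").fetchall()
print(rows)
```

Wrapping that SELECT in a view gives the single merged "status" column the question asks for, without changing the underlying table.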
You can use a case statement like this: ``` select case when Approved = 1 then 'Approved' when Denied = 1 then 'Denied' else 'Pending' end 'Status' ```
Building a view column out of separate columns
[ "sql", "sql-server", "view" ]
How do I get a platform-independent newline in Java? I can’t use `"\n"` everywhere.
In addition to the line.separator property, if you are using java 1.5 or later and the **String.format** (or other **formatting** methods) you can use `%n` as in ``` Calendar c = ...; String s = String.format("Duke's Birthday: %1$tm %1$te,%1$tY%n", c); //Note `%n` at end of line ^^ String s2 = String.format("Use %%n as a platform independent newline.%n"); // %% becomes % ^^ // and `%n` becomes newline ^^ ``` See the [Java 1.8 API for Formatter](http://docs.oracle.com/javase/8/docs/api/java/util/Formatter.html) for more details.
Java 7 now has a [`System.lineSeparator()`](http://docs.oracle.com/javase/8/docs/api/java/lang/System.html#lineSeparator--) method.
How do I get a platform-independent new line character?
[ "java", "cross-platform", "newline", "eol" ]
I have a large query in a PostgreSQL database. The Query is something like this: ``` SELECT * FROM table1, table2, ... WHERE table1.id = table2.id... ``` When I run this query as a sql query, the it returns the wanted row. But when I tries to use the same query to create a view, it returns an error: > error: column "id" specified more than once. (I use pgAdminIII when executing the queries.) I'll guess this happens because the resultset will have more than one column named "id". Is there someway to solve this, without writing all the column names in the query?
That happens because a view would have two id named columns, one from table1 and one from table2, because of the select \*. You need to specify which id you want in the view. ``` SELECT table1.id, column2, column3, ... FROM table1, table2 WHERE table1.id = table2.id ``` The query works because it can have equally named columns... ``` postgres=# select 1 as a, 2 as a; a | a ---+--- 1 | 2 (1 row) postgres=# create view foobar as select 1 as a, 2 as a; ERROR: column "a" duplicated postgres=# create view foobar as select 1 as a, 2 as b; CREATE VIEW ```
If only join columns are duplicated (i.e. have the same names), then you can get away with changing: ``` select * from a, b where a.id = b.id ``` to: ``` select * from a join b using (id) ```
`column "..." specified more than once` error of view in PostgreSQL
[ "sql", "database", "postgresql", "pgadmin", "sql-view" ]
This question is about using getter methods of a singleton object in worker threads. Here is some pseudo code first: ``` // Singleton class which contains data class MyData { static MyData* sMyData ; int mData1[1024]; int mData2[1024]; int mData3[1024]; MyData* getInstance() { // sMyData is created in the very beginning. return sMyData ; } void getValues(int idx, int& data1,int& data2,int& data3) { data1 = mData1[idx]; data2 = mData2[idx]; data3 = mData3[idx]; } int* getData1() { return &mData1[0]; } } class MyThread { void workerMethod() { MyData* md = MyData::getInstance(); int d1,d2,d3; md->getValue( 12, d1,d2,d3 ); int* data1 = md->getData1(); d1 = data1[34]; } } ``` Now as you see I have some getter methods (all read-only), MyData::getInstance(), MyData::getValue() and MyData::getData1(). The 1st question is how thread-safe these methods are ? Since they are often-called methods, protecting those methods with mutex is something I am trying to avoid. The 2nd question is: what is the suggested way of reading data from central sources in a multi-thread application, especially in worker methods. Thanks ! Paul
Provided that no other thread will try to write to the data in your singleton object, you don't need to protect them: by definition, multiple readers in the absence of a writer is thread-safe. This is a common pattern where the program's initialization code sets up a singleton, which is then only read from by worker threads. However, if *any* thread *ever* writes to this data while others are reading from it, you must protect it in some way. If you have lots of readers and only the occasional writer, it is worth considering some sort of "read-write" lock, which allows multiple readers in the absence of any writers.
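The "multiple readers, no writer" pattern described above is not specific to C++. A small Python sketch of the usage pattern (illustrative only — it shows the shape of the design, not C++ memory-model guarantees): the shared data is fully built before any worker starts and never written afterwards, so the readers need no lock on it; only the genuinely shared-mutable results container is locked.

```python
import threading

# Singleton-style data: initialized before any worker thread starts,
# and read-only from then on.
DATA = tuple(range(1024))

results = []
results_lock = threading.Lock()

def worker():
    # Read-only access to DATA: safe without a lock, by definition.
    total = sum(DATA)
    with results_lock:  # results IS written concurrently, so lock it
        results.append(total)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

If any thread ever wrote to `DATA` mid-run, every access would need synchronization — exactly the caveat in the answer above.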
It is not possible to tell from this alone whether the code is thread-safe. If the data is initialized during object creation and never changes, then this will run correctly. If you are mutating the underlying data through other methods, then the readers will have to perform some sort of synchronization against the writers; there is no way around that. Depending on exactly what you are doing, you might be able to use something lighter weight than a mutex, such as atomic update/synchronization instructions or reader-writer locks, but without knowing more about what you are doing it is impossible to tell.
Singleton getInstance in thread worker methods
[ "c++", "multithreading" ]
I have an application that uses `window.open()` to generate dynamic popups. Unfortunately, I've had trouble creating the content of the new windows using the standard DOM functions (`createElement`, `appendChild`), and I've gone to using `document.write()` to generate the page. Concretely, how can I go from this: ``` function writePopup() { var popup = window.open("", "popup", "height=400px, width=400px"); var doc = popup.document; doc.write("<html>"); doc.write("<head>"); doc.write("<title>Written Popup</title>"); doc.write("</head>"); doc.write("<body>"); doc.write("<p>Testing Write</p>"); doc.write("</body>"); doc.write("</html>"); doc.close(); } ``` To a function that creates the same popup using the DOM? Edit: I did consider using an absolutely positioned element to simulate a popup, and though it looks better, the users need to be able to print the information being shown.
Why not use a library function such as <http://plugins.jquery.com/project/modaldialog> instead of reinventing the wheel? [EDIT] OR ``` function writePopup(){ var popup = window.open("", "_blank", "height=400px, width=400px"); var doc = popup.document; doc.title = 'Written Popup'; var p = doc.createElement('p'); p.innerHTML = 'Testing Write'; doc.body.appendChild(p); } ```
Just doing quick tests, I can get doc to append DOM created HTML to my popup like so: ``` var popup = window.open("", "popup", "height=400px, width=400px"); var doc = popup.document.documentElement; var p = document.createElement("p"); p.innerHTML = "blah"; doc.appendChild(p); ``` My example produces totally invalid HTML I know, but it works (with limited testing obviously).
Refactoring a function that uses window.open to use the DOM rather than write()
[ "javascript", "dom", "web-standards" ]
I came across a class instance function that needed to temporarily change a class instance variable, and then restore it when the function completed. The function had return statements all over the place, and before each return there was a restoring statement. That seemed messy to me, not to mention scary when a exception is thrown. As an improvement I came up with this generalization using a inner class definition. Here is a sample driver program (class restorer). ``` class Unwind { private: bool b_active_; ///< the thing I want to be restored template<typename T> class restorer { T* ref_; T save_; public: restorer(T* perm) : ref_(perm), save_(*ref_) {}; ~restorer() { *ref_ = save_; } }; public: Unwind() : b_active_(false) {}; void a() { out("a in"); b(); out("a end"); } void b() { out("b in"); { restorer<bool> trust_in_the_stack(&b_active_); // "restorer" created on the stack b_active_ = true; // change b_active_ only while "within" b() c(); out("b inner end"); } out("b end"); } void c() { out("c in"); d(); out("c end"); } void d() { out("d in"); cout << "deepest" << endl; out("d end"); } void out(const std::string& msg) { std::cout << msg << ": " << b_active_ << std::endl; } }; int main() { Unwind u; u.a(); return 0; } ``` The output using g++ 4.2.3 (-Wall) was: ``` a in: 0 b in: 0 c in: 1 d in: 1 deepest d end: 1 c end: 1 b inner end: 1 b end: 0 a end: 0 ``` Which is what I expect at "b end". I felt that defining the class restorer inside the class Unwind helps to discourage misuse. My question is, is there a general and safer way to do this? I am worried about lifetime issues. Edit: Please assume that there are no threads, but "downstream" methods on the stack that change behavior based on this b\_active\_ flag.
I revised the sample a bit more based on the comments, and placed it as a Community Wiki answer instead of editing the question. ``` /// c++ code sample #ifndef UTIL_RESTORER_HPP #define UTIL_RESTORER_HPP namespace Utility { /// A Restorer instance ("inst") uses the stack to restore a saved /// value to the named variable when the instance "inst" goes out of /// scope. /// /// Restorer is designed to be an auto variable, not allocated on any /// other memory resource like a heap or in-place. template<typename T> class restorer { T& ref_; T save_; public: restorer(T& perm) : ref_(perm), save_(perm) {} ~restorer() { ref_ = save_; } }; }//NAMESPACE #endif//UTIL_RESTORER_HPP ```
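The same stack-scoped save/restore idiom exists in languages with other scope-exit mechanisms. For comparison only (not part of the original C++ design), a Python context-manager sketch of the restorer — the `finally` clause plays the role of the destructor running on unwind, so the value is restored on early return or exception too:

```python
from contextlib import contextmanager

@contextmanager
def restorer(obj, name):
    # Save the attribute on entry, restore it on exit -- even if the
    # body raises, mirroring the C++ destructor semantics.
    saved = getattr(obj, name)
    try:
        yield
    finally:
        setattr(obj, name, saved)

class Unwind:
    def __init__(self):
        self.b_active = False

u = Unwind()
with restorer(u, "b_active"):
    u.b_active = True
    # ... call downstream methods that observe the flag ...
print(u.b_active)
```

After the `with` block exits, `b_active` is back to its saved value, just like at "b end" in the question's trace.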
I agree with Adam Pierce and also think that you should prefer references over pointers: ``` template<typename T> class restorer { T& ref_; T save_; public: restorer(T& perm) : ref_(perm), save_(ref_) {}; ~restorer() { ref_ = save_; } }; ```
General way to reset a member variable to its original value using the stack?
[ "c++", "callstack" ]
How do I make the XDocument object save an attribute value of an element with single quotes?
If it is absolutely necessary to have single quotes you could write your XML document to a string and then use a string replace to change from single to double quotes.
I'm not sure that any of the formatting options for LINQ to XML allow you to specify that. Why do you need to? It's a pretty poor kind of XML handler which is going to care about it...
Save attribute value of xml element with single quotes using linq to xml
[ "c#", "xml", ".net-3.5", "linq-to-xml" ]
I am developing a .NET CF based graphics application; my project involves a lot of drawing of images. We have decided to port the application to different handset resolutions (240 x 240, 480 x 640, etc.). How would I go about achieving this within a single solution/project? Is there a need to create different projects based on resolutions? How would I handle common files? And I need the changes in one of the common classes to occur across all devices. Thank you, Cronos
A much better way of handling different screen resolutions for Windows Mobile devices is to use **forms inheritance**, which can be tacked onto an existing CF application with minimal effort. Basically, you design each form for a standard 240x320 screen. When you need to re-arrange a form for a new resolution (let's say 240x240), you add a new form to your project and have it inherit from your original 240x320 form: ``` public partial class frmDialog240x240: frmDialog ``` instead of just Form: ``` public partial class frmDialog240x240: Form ``` like usual. On your original form, you need to **set the Modifiers property of each control to Protected** (instead of the default Private). In the designer for your new form, you will see all of the controls on the form you're inheriting from, and you can move them and resize them as you see fit to accommodate the new screen dimensions (this will not affect the original form's layout). When your program is running, it's easy for it to check the screen resolution of the device it's running on and create the appropriate form (a factory method is good for this). Your new form inherits everything from the old form, but uses your new custom layout. This approach allows you to avoid code duplication, because there isn't any.
Anchoring and Docking is the most common mechanism for handling different resolutions (remember also that many devices can rotate the screen, so you need to handle changes even on a single device). Getting screen size, if needed after that, is as simple as querying the Screen object: ``` int screenWidth = Screen.PrimaryScreen.Bounds.Width; int workingHeight = Screen.PrimaryScreen.WorkingArea.Height; ```
Handling Different Resolutions in Visual Studio 2008 for .NET CF
[ "c#", "visual-studio", "compact-framework" ]
If I have the following code, ``` Foo *f = new Foo(); vector<Foo*> vect; vect.push_back(f); // do stuff vect.erase(f); ``` Did I create a memory leak? I guess so, but the word *erase* gives the feeling that it is deleting it. Writing this, I am wondering if it is not a mistake to put a pointer in a STL vector. What do you think?
Yes, you created a memory leak by that. std::vector and other containers will just remove the pointer, they won't free the memory the pointer points to. It's not unusual to put a pointer into a standard library container. The problem, however, is that you have to keep track of deleting it when removing it from the container. A better, yet simple, way to do the above, is to use boost::shared\_ptr: ``` { boost::shared_ptr<foo> f(new foo); std::vector< boost::shared_ptr<foo> > v; v.push_back(f); v.erase(v.begin()); } /* if the last copy of foo goes out of scope, the memory is automatically freed */ ``` The next C++ standard (called C++1x and C++0x commonly) will include `std::shared_ptr`. There, you will also be able to use `std::unique_ptr<T>` which is faster, as it doesn't allow copying. Using `std::unique_ptr` with containers in c++0x is similar to the `ptr_container` library in boost.
Another option is to use the Boost [Pointer Containers](http://www.boost.org/doc/libs/1_37_0/libs/ptr_container/doc/ptr_container.html). They are designed to do exactly what you want.
Does myVector.erase(myPtr) delete the object pointed by myPtr?
[ "c++", "vector", "stl" ]
A common pattern in C++ is to create a class that wraps a lock - the lock is either implicitly taken when object is created, or taken explicitly afterwards. When object goes out of scope, dtor automatically releases the lock. Is it possible to do this in C#? As far as I understand there are no guarantees on when dtor in C# will run after object goes out of scope. Clarification: Any lock in general, spinlock, ReaderWriterLock, whatever. Calling Dispose myself defeats the purpose of the pattern - to have the lock released as soon as we exit scope - no matter if we called return in the middle, threw exception or whatnot. Also, as far as I understand using will still only queue object for GC, not destroy it immediately...
Your understanding regarding `using` is incorrect; it is a way to have scoped actions happen in a deterministic fashion (no queuing to the GC takes place). C# supplies the `lock` keyword, which provides an exclusive lock; if you want different lock types (e.g. Read/Write) you'll have to use the `using` statement. P.S. [This thread](https://stackoverflow.com/questions/173670/why-is-there-no-raii-in-net) may interest you.
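The deterministic, scope-bound release that `using`/`lock` provide is the same guarantee other languages attach to their scope-exit constructs. As a small cross-language illustration only (Python, not C#): a `with` block releases the lock on normal exit, early return, or exception, exactly the behaviour the question asks for:

```python
import threading

lock = threading.Lock()

with lock:
    # The lock is held for exactly this block, and is released
    # deterministically when the block exits for any reason --
    # the analogue of C#'s lock statement / using+IDisposable.
    assert lock.locked()

print(lock.locked())
```

No finalizer or garbage collector is involved in the release; it happens at scope exit, which is the point of the pattern.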
To amplify Timothy's answer, the lock statement does create a scoped lock using a monitor. Essentially, this translates into something like this: ``` lock(_lockKey) { // Code under lock } // is equivalent to this Monitor.Enter(_lockKey); try { // Code under lock } finally { Monitor.Exit(_lockKey); } ``` In C# you rarely use the dtor for this kind of pattern (see the using statement/IDisposable). One thing you may notice in the code is that if an async exception happens between the Monitor.Enter and the try, it looks like the monitor will not be released. The JIT actually makes a special guarantee: if a Monitor.Enter immediately precedes a try block, the async exception will not happen until the try block is entered, thus ensuring the release.
Is it possible to implement scoped lock in C#?
[ "c#", "locking" ]
Is there any way to check if a given index of an array exists? I am trying to set numerical index but something like 1, 5, 6,10. And so I want to see if these indexes already exist and if they do just increase another counter. I normally work with php but I am trying to do this in c++, so basically I am trying to ask if there is an isset() way to use with c++ PS: Would this be easier with vectors? If so, can anyone point me to a good vector tutorial? Thanks
In C++, the size of an array is fixed when it is declared, and while you can index past the end of the declared array size, this is undefined behaviour and the source of hard-to-track-down bugs: ``` int i[10]; i[10] = 2; // Compiles, but undefined behaviour - writing on memory you don't know about ``` It seems that you want array-like behavior, but without all elements being filled. Traditionally, this is in the realms of hash-tables. Vectors are not such a good solution here as you will have empty elements taking up space; much better is something like a map, where you can test if an element exists by searching for it and interpreting the result: ``` #include <iostream> #include <map> #include <string> // Declare the map - integer keys, string values std::map<int, std::string> a; // Add an item at an arbitrary location a[2] = std::string("A string"); // Find a key that isn't present if(a.find(1) == a.end()) { // This code will be run in this example std::cout << "Not found" << std::endl; } else { std::cout << "Found" << std::endl; } ``` One word of warning: Use the above method to find if a key exists, rather than something like testing for a default value ``` if(a[2] == 0) { a[2] = myValueToPutIn; } ``` as the behavior of a map is to insert a default constructed object on the first access of that key value, if nothing is currently present.
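Since the asker comes from PHP's `isset()`, it may help to see the same "does this index exist yet?" bookkeeping in a dynamic language. This Python sketch (illustrative only) mirrors what the `std::map::find` test above gives you in C++ — check membership first, then either create the slot or bump the existing counter:

```python
counts = {}

def record(index):
    # The analogue of isset($counts[$index]) in PHP, or
    # counts.find(index) != counts.end() in C++.
    if index in counts:
        counts[index] += 1
    else:
        counts[index] = 1

for idx in [1, 5, 6, 10, 5, 5]:
    record(idx)
print(counts)
```

The membership test avoids the default-insertion pitfall the answer warns about: asking "is it there?" never creates the entry.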
My personal vote is for using a vector. They will resize dynamically, and as long as you don't do something stupid (like try and access an element that doesn't exist) they are quite friendly to use. As for tutorials the best thing I could point you towards is a [google search](http://www.google.com.au/search?q=C%2B%2B+vector+tutorial)
Check if array index exists
[ "c++", "arrays" ]
Is there any way other than using reflection to access the members of a anonymous inner class?
Anonymous inner classes have a type but no name. You can access fields not defined by the named supertype. However once assigned to a named type variable, the interface is lost. Obviously, you can access the fields from within the inner class itself. One way of adding code is through an instance initialiser: ``` final AtomicInteger y = new AtomicInteger(); new Runnable() { int x; { x = 5; doRun(this); y.set(x); } public void run() { ... blah ... } }; ``` The value returned by the anonymous inner class expression has the anonymous type, so you have one chance to use it outside of the class itself: ``` final int y = new Runnable() { int x; { x = 5; doRun(this); } public void run() { ... blah ... } }.x; ``` You can also pass it through a method declared similar to: ``` <T extends Runnable> T doRun(T runnable); ```
You can use local classes instead of anonymous classes. Look: ``` public class Test { public static void main(String... args) { class MyInner { private int value = 10; } MyInner inner = new MyInner(); System.out.println(inner.value); } } ``` You can have a reference of the `MyInner` type only in the method body, though. So outside of the method you will not be able to use its fields/methods that are not declared in its super class (`java.lang.Object` in this case) or interface.
Accessing inner anonymous class members
[ "java", "anonymous-class" ]
I'm interested in compressing data using Python's `gzip` module. It happens that I want the compressed output to be deterministic, because that's often a really convenient property for things to have in general -- if some non-gzip-aware process is going to be looking for changes in the output, say, or if the output is going to be cryptographically signed. Unfortunately, the output is different every time. As far as I can tell, the only reason for this is the timestamp field in the gzip header, which the Python module always populates with the current time. I don't think you're actually allowed to have a gzip stream without a timestamp in it, which is too bad. In any case, there doesn't seem to be a way for the caller of Python's `gzip` module to supply the correct modification time of the underlying data. (The actual `gzip` program seems to use the timestamp of the input file when possible.) I imagine this is because basically the only thing that ever cares about the timestamp is the `gunzip` command when writing to a file -- and, now, me, because I want deterministic output. Is that so much to ask? Has anyone else encountered this problem? What's the least terrible way to `gzip` some data with an arbitrary timestamp from Python?
From Python 2.7 onwards you can specify the time to be used in the gzip header. N.B. filename is also included in the header and can also be specified manually. ``` import gzip content = b"Some content" f = open("/tmp/f.gz", "wb") gz = gzip.GzipFile(fileobj=f,mode="wb",filename="",mtime=0) gz.write(content) gz.close() f.close() ```
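For reference, a self-contained Python 3 sketch of that approach (the `deterministic_gzip` helper name is mine, not part of the stdlib):

```python
import gzip
import io

def deterministic_gzip(data, level=9):
    """Compress bytes so the output depends only on the input bytes."""
    buf = io.BytesIO()
    # mtime=0 zeroes the header timestamp; filename="" keeps the
    # original-filename field out of the header as well.
    with gzip.GzipFile(fileobj=buf, mode="wb", filename="",
                       mtime=0, compresslevel=level) as gz:
        gz.write(data)
    return buf.getvalue()

a = deterministic_gzip(b"some content")
b = deterministic_gzip(b"some content")
assert a == b  # byte-identical across runs
assert gzip.decompress(a) == b"some content"
```

With the timestamp pinned and no filename embedded, repeated compressions of the same input are byte-identical, which also makes the output safe to diff or sign.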
Yeah, you don't have any pretty options. The time is written with this line in \_write\_gzip\_header: ``` write32u(self.fileobj, long(time.time())) ``` Since they don't give you a way to override the time, you can do one of these things: 1. Derive a class from GzipFile, and copy the `_write_gzip_header` function into your derived class, but with a different value in this one line. 2. After importing the gzip module, assign new code to its time member. You will essentially be providing a new definition of the name time in the gzip code, so you can change what time.time() means. 3. Copy the entire gzip module, and name it my\_stable\_gzip, and change the line you need to. 4. Pass a CStringIO object in as fileobj, and modify the bytestream after gzip is done. 5. Write a fake file object that keeps track of the bytes written, and passes everything through to a real file, except for the bytes for the timestamp, which you write yourself. Here's an example of option #2 (untested): ``` class FakeTime: def time(self): return 1225856967.109 import gzip gzip.time = FakeTime() # Now call gzip, it will think time doesn't change! ``` Option #5 may be the cleanest in terms of not depending on the internals of the gzip module (untested): ``` class GzipTimeFixingFile: def __init__(self, realfile): self.realfile = realfile self.pos = 0 def write(self, bytes): if self.pos == 4 and len(bytes) == 4: self.realfile.write("XYZY") # Fake time goes here. else: self.realfile.write(bytes) self.pos += len(bytes) ```
setting the gzip timestamp from Python
[ "", "python", "gzip", "" ]
There are a number of great Javascript libraries/frameworks out there (jQuery, Prototype, MooTools, etc.), but they all seem to focus on DOM interaction and AJAX functionality. I haven't found any that focus on extending the built-in data types (String, Date, Number, etc.). And by "Extending" I mean methods to solve typical work-a-day problems we all have. An example would be the .NET String.Format() method. Not only is it more convenient, but it makes reading and troubleshooting string concatenation better. While I have already created a String prototype method for this, I'd like to see if a good library has already been developed to address similar productivity issues before launching into a library of my own. Prototype has a few interesting methods in this regard, but since I've already settled on jQuery for DOM work, I really don't need to duplicate functionality on every page. Is anyone aware of a good, lean data type productivity library for Javascript?
The Microsoft AJAX Library has quite a few handy extensions to the JavaScript base types, including String.Format(): <http://www.asp.net/AJAX/Documentation/Live/ClientReference/Global/>
There's a good reason the big javascript libraries steer clear of extending common object prototypes with functions which should really exist (String.trim, Array.shuffle !!)... If each library extended the String object to have a trim() function, then they'd overwrite each other - not good! Instead, what jQuery has done is to put those functions into the jQuery object. `$.trim(String), $.inArray(value, Array)`, etc. Not as "neat" as extending the prototype, but a lot more portable. Remember to consider this before you go adding your own functions!
Javascript libraries to extend built in data type objects
[ "", "javascript", "" ]
Could someone please point me toward a cleaner method to generate a random enum member. This works but seems ugly. Thanks! ``` public T RandomEnum<T>() { string[] items = Enum.GetNames(typeof( T )); Random r = new Random(); string e = items[r.Next(0, items.Length - 1)]; return (T)Enum.Parse(typeof (T), e, true); } ```
``` public T RandomEnum<T>() { T[] values = (T[]) Enum.GetValues(typeof(T)); return values[new Random().Next(0,values.Length)]; } ``` Thanks to @[Marc Gravell] for ponting out that the max in Random.Next(min,max) is exclusive.
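For comparison, the same idea sketched in Python (the enum and helper names below are illustrative, not from the question); passing in a shared generator avoids re-seeding on every call:

```python
import enum
import random

class BookType(enum.Enum):
    FICTION = 1
    NONFICTION = 2
    REFERENCE = 3

def random_member(enum_cls, rng=None):
    """Return a uniformly random member of an Enum class."""
    rng = rng or random  # fall back to the module-level generator
    return rng.choice(list(enum_cls))

shared = random.Random(42)  # one generator, reused across calls
assert random_member(BookType, shared) in list(BookType)
```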
Marxidad's answer is good (note you only need `Next(0,values.Length)`, since the upper bound is exclusive) - but watch out for timing. If you do this in a tight loop, you will get lots of repeats. To make it more random, consider keeping the Random object in a field - i.e. ``` private Random rand = new Random(); public T RandomEnum<T>() { T[] values = (T[]) Enum.GetValues(typeof(T)); return values[rand.Next(0,values.Length)]; } ``` If it is a static field, you will need to synchronize access.
Generate random enum in C# 2.0
[ "", "c#", "enums", "" ]
I'm programming a class that implements the observable pattern (not the interface) and I'm thinking about whether or not the copy constructor should also copy the listeners. On the one hand the copy constructor should create an instance that is as close as possible to the original instance so that it can be swapped out in the display context. On the other hand this would assume the listeners can cope with that kind of thing. Any thoughts? Are there any best practices?
The answer is **it depends on what you want to happen**. There are technically three things you can do: 1. Copy nothing. Any observers will know nothing about the new object. 2. Have the new object add itself to the list of things the old observers are observing. The existing observers will respond to the new object as they did the old. 3. Create new observers that observe the new object, as the old ones did the old object. The new observers will respond to changes in the new object as the old ones did to the old object. The other posters are certainly right that 1) is likely to be the preferred option, if only because doing 2 or 3 in the copy constructor means that the observers are **always** created/copied. Doing nothing allows for observers to be added later if necessary. However it is possible to imagine cases where the other options are right. If you want an observer that responds to *any* instance of a given class, no matter how created, then 2) is right. If your observer does a specific task for the object, and doesn't care about the rest of the system then 3) might be the way. It's often best to think about how your system works than just follow a rule. If this isn't your code that you are modifying, then ask the owner. And if you simply don't know what you want to happen, choose 1) and add the observers later.
Don't copy. The listeners are not aware of the new object and are not expecting to receive messages related to it.
Copying listeners/observers in a copy constructor
[ "", "java", "" ]
I've got a small piece of code that is parsing an index value to determine a cell input into Excel. It's got me thinking... What's the difference between ``` xlsSheet.Write("C" + rowIndex.ToString(), null, title); ``` and ``` xlsSheet.Write(string.Format("C{0}", rowIndex), null, title); ``` Is one "better" than the other? And why?
**Before C# 6** To be honest, I think the first version is simpler - although I'd simplify it to: ``` xlsSheet.Write("C" + rowIndex, null, title); ``` I suspect other answers *may* talk about the performance hit, but to be honest it'll be minimal *if present at all* - and this concatenation version doesn't need to parse the format string. Format strings are great for purposes of localisation etc, but in a case like this concatenation is simpler and works just as well. **With C# 6** String interpolation makes a lot of things simpler to read in C# 6. In this case, your second code becomes: ``` xlsSheet.Write($"C{rowIndex}", null, title); ``` which is probably the best option, IMO.
My initial preference (coming from a C++ background) was for String.Format. I dropped this later on due to the following reasons: * String concatenation is arguably "safer". It happened to me (and I've seen it happen to several other developers) to remove a parameter, or mess up the parameter order by mistake. The compiler will not check the parameters against the format string and you end up with a runtime error (that is, if you're lucky enough not to have it in an obscure method, such as logging an error). With concatenation, removing a parameter is less error prone. You could argue the chance of error is very small, but it **may** happen. ~~- String concatenation allows for null values, `String.Format` does not. Writing "`s1 + null + s2`" does not break, it just treats the null value as String.Empty. Well, this may depend on your specific scenario - there are cases where you'd like an error instead of silently ignoring a null FirstName. However even in this situation I personally prefer checking for nulls myself and throwing specific errors instead of the standard ArgumentNullException I get from String.Format.~~ * String concatenation performs better. Some of the posts above already mention this (without actually explaining why, which determined me to write this post :). Idea is the .NET compiler is smart enough to convert this piece of code: ``` public static string Test(string s1, int i2, int i3, int i4, string s5, string s6, float f7, float f8) { return s1 + " " + i2 + i3 + i4 + " ddd " + s5 + s6 + f7 + f8; } ``` to this: ``` public static string Test(string s1, int i2, int i3, int i4, string s5, string s6, float f7, float f8) { return string.Concat(new object[] { s1, " ", i2, i3, i4, " ddd ", s5, s6, f7, f8 }); } ``` What happens under the hood of String.Concat is easy to guess (use Reflector). The objects in the array get converted to their string via ToString(). Then the total length is computed and only one string allocated (with the total length). 
Finally, each string is copied into the resulting string via wstrcpy in some unsafe piece of code. Reasons `String.Concat` is way faster? Well, we can all have a look what `String.Format` is doing - you'll be surprised at the amount of code required to process the format string. On top of this (I've seen comments regarding the memory consumption), `String.Format` uses a StringBuilder internally. Here's how: `StringBuilder builder = new StringBuilder(format.Length + (args.Length * 8));` So for every passed argument, it reserves 8 characters. If the argument is a one-digit value, then too bad, we have some wasted space. If the argument is a custom object returning some long text on `ToString()`, then there might be even some reallocation needed (worst-case scenario, of course). Compared to this, the concatenation only wastes the space of the object array (not too much, taking into account it's an array of references). There's no parsing for format specifiers and no intermediary StringBuilder. The boxing/unboxing overhead is present in both methods. The only reason I'd go for String.Format is when localization is involved. Putting format strings in resources allows you to support different languages without messing with the code (think about scenarios where formatted values change order depending on the language, i.e. "after {0} hours and {1} minutes" may look quite different in Japanese :). --- To sum up my first (and quite long) post: * best way (in terms of performance vs. maintainability/readability) for me is using string concatenation, without any `ToString()` calls * if you're after performance, make the `ToString()` calls yourself to avoid boxing (I'm somewhat biased towards readability) - same as first option in your question * if you're showing localized strings to the user (not the case here), `String.Format()` has an edge.
When is it better to use String.Format vs string concatenation?
[ "", "c#", ".net", "string", "" ]
I wonder if anyone could suggest the best way of looping through all the `<option>`s in a `<select>` element with jQuery, and building an array. Eg. Instead of the following, whereby a string is passed to the autoCompleteArray(), ``` $("#CityLocal").autocompleteArray( [ "Aberdeen", "Ada", "Adamsville", "Zoar" //and a million other cities... ], { delay:10, minChars:1, matchSubset:1, onItemSelect:selectItem, onFindValue:findValue, autoFill:true, maxItemsToShow:10 } ); ``` ...I need to loop through all the `<options>` in a `<select>` and push them into an array, and just pass that array variable to the function instead of a long string. Eg, ``` $("#CityLocal").autocompleteArray( [ MyBigArrayOfOptions ], { delay:10, minChars:1, matchSubset:1, onItemSelect:selectItem, onFindValue:findValue, autoFill:true, maxItemsToShow:10 } ); ``` I'd be grateful if you could suggest how to push stuff into an array in the correct format. I've pretty much sussed the looping part from another post on this site. Thanks.
This should work: ``` $(document).ready(function(){ // array of option elements' values var optionValues = []; // array of option elements' text var optionTexts = []; // iterate through all option elements $('#sel > option').each(function() { // get value/text and push it into respective array optionValues.push($(this).val()); optionTexts.push($(this).text()); }); // test with alert alert(optionValues); alert(optionTexts); }); ``` Given that your `select` element has ID *sel*.
The [jQuery.map](http://docs.jquery.com/Utilities/jQuery.map#arraycallback) function might be what you're looking for. The code below will create an array that contains all of the values or text values for the `<select>` options. ``` var values = jQuery.map(jQuery("#select")[0].options, function(option) { return option.value; }); var texts = jQuery.map(jQuery("#select")[0].options, function(option) { return option.innerHTML; }); ```
Loop through <select> and build array in the format: "value1","value2","value3"
[ "", "javascript", "jquery", "autocomplete", "" ]
I am writing a java program that needs a file open dialog. The file open dialog isn't difficult, I'm hoping to use a `JFileChooser`. My problem is that I would like to have a dual pane `JFrame` (consisting of 2 `JPanels`). The left panel would have a `JList`, and the right panel would have a file open dialog. When I use `JFileChooser.showOpenDialog()` this opens the dialog box above all other windows, which isn't what I want. Is there any way to have the `JFileChooser` (or maybe another file selection dialog) display inside a `JPanel` and not pop-up above it? Here is the code that I've tried, at this point it's very simplified. I'm only trying to get the `JFileChooser` to be embedded in the `JPanel` at this point. ``` public class JFC extends JFrame{ public JFC() { setSize(800,600); JPanel panel= new JPanel(); JFileChooser chooser = new JFileChooser(); panel.add(chooser); setVisible(true); chooser.showOpenDialog(null); } public static void main(String[] args) { JFC blah = new JFC(); } } ``` I have also tried calling `chooser.showOpenDialog` with `this` and `panel`, but to no avail. Also, I have tried adding the `JFileChooser` directly to the frame. Both of the attempts listed above still have the `JFileChooser` pop up in front of the frame or panel (depending on which I add the `JFileChooser` to).
JFileChooser extends JComponent and Component so you should be able to add it directly to your frame. ``` JFileChooser fc = ... JPanel panel ... panel.add(fc); ```
To access the "buttons" in the file chooser, you will have to add an ActionListener to it: ``` fileChooser.addActionListener(this); [...] public void actionPerformed(ActionEvent action) { if (action.getActionCommand().equals("CancelSelection")) { System.out.printf("CancelSelection\n"); this.setVisible(false); this.dispose(); } if (action.getActionCommand().equals("ApproveSelection")) { System.out.printf("ApproveSelection\n"); this.setVisible(false); this.dispose(); } } ```
JFileChooser embedded in a JPanel
[ "", "java", "jpanel", "jfilechooser", "fileopendialog", "" ]
I am looking for a redistributable component to convert HTML to PDF. I would - at the moment - like to avoid using a "PDF printer", as this requires a printer installation and some user "playing around" in the printers panel might break that feature. The HTML is available in a Browser control or as external file. The HTML is normally fairly simple, but customers can create their own templates, so a "good range" of HTML should be supported. Should be accessible for an automated process from C++ - DLL / COM / external executable with command line support are all fine. Commercial is fine, but in the thousands is not an option right now. So, which components do you know or can you recommend?
[PDFCreator](http://sourceforge.net/projects/pdfcreator/) can function as a virtual printer but it's also usable via COM. The default setup even includes COM examples. You can check the COM samples in the SourceForge SVN repository right here: <http://pdfcreator.svn.sourceforge.net/viewvc/pdfcreator/trunk/COM/>
If you have Microsoft Word installed, I guess you could automate the whole process using the "save as pdf" plugin that can be downloaded from the Microsoft Office Site. You would automate word then open the HTML document inside word, then output as PDF. Might be worth a shot, if you're developing in a Microsoft Environment.
Export HTML to PDF (C++, Windows)
[ "", "c++", "winapi", "pdf", "pdf-generation", "" ]
As the title suggests, I am having trouble maintaining my code on postback. I have a bunch of jQuery code in the Head section and this works fine until a postback occurs after which it ceases to function! How can I fix this? Does the head not get read on postback, and is there a way in which I can force this to happen? JavaScript is: ``` <script type="text/javascript"> $(document).ready(function() { $('.tablesorter tbody tr').tablesorter(); $('.tablesearch tbody tr').quicksearch({ position: 'before', attached: 'table.tablesearch', stripeRowClass: ['odd', 'even'], labelText: 'Search:', delay: 100 }); }); </script> ```
If you just have that code hard coded into your page's head then a post back won't affect it. I would check the following by debugging (FireBug in FireFox is a good debugger): * Verify the script is still in the head on postback. * verify that the css classes are in fact attached to some element in the page. * verify that the jquery code is executing after the browser is done loading on post back. EDIT: Are you using UpdatePanels for your post back? In other words is this an asynchronous postback or a normal full page refresh? EDIT EDIT: AHhhhh... Ok. So if you're using UpdatePanels then the document's ready state is already in the ready so that portion of jquery code won't be fired again. I would extract the jquery delegate out to a separate function that you can also call after the async postback.
put your code in ``` function pageLoad(sender, args) { /* code here */ } ``` instead of in `$(document).ready(function() { ... });` `pageLoad()` is a function that will execute after all postbacks, synchronous and asynchronous. See this answer for more details * [**How to have a javascript callback executed after an update panel postback**](https://stackoverflow.com/questions/1152946/how-to-have-a-javascript-callback-executed-after-an-update-panel-postback/1153002#1153002)
Maintaining JavaScript Code in the <Head> after ASP.Net Postback.
[ "", "asp.net", "javascript", "jquery", "postback", "" ]
We're using Spring/Hibernate on a Websphere Application Server for AIX. On my Windows machine, the problem doesn't occur--only when running off AIX. When a user logs in with an account number, if they prefix the '0' to their login ID, the application rejects the login. In the DB2 table, the column is of numeric type, and there shouldn't be a problem converting '090....' to '90...' Anyone else experience a problem like this? Both machines have Java v1.5. To be more specific, the flow is FormView -> LoginValidator -> LoginController In LoginValidator, the value of login is null with the prefixed 0. Without the 0, the value is what it should be (But again, this is only on the AIX environment--on 2 Windows environments it's fine). Here's the snippet of code where the object equals null.. ``` public class LoginValidator implements Validator { public boolean supports(Class clazz) { return Login.class.equals(clazz); } @SuppressWarnings("all") public void validate(Object obj, Errors errors) { System.out.println("Inside LoginValidator"); Login login = (Login) obj; //null value System.out.println("Before conversion in Validator, store id = " + login.getStoreId()); } } ``` I've also written this short Java program for constructing a Long from a String, and using the java binary that is packaged with WebSphere ``` public class String2Long { public static void main(String[] args){ String a = "09012179"; String b = "9012179"; Long _a = new Long(a); Long _b = new Long(b); System.out.println(a + " => " + _a); //09012179 => 9012179 System.out.println(b + " => " + _b); //9012179 => 9012179 System.out.println("_a.equals(_b) " + _a.equals(_b)); //_a.equals(_b) true } } ``` [# SOLUTION](https://stackoverflow.com/questions/291286/java-not-converting-string-to-long-object-properly#292709)
# SOLUTION A co-worker did some research on Spring updates, and apparently this error was corrected in v2.5.3: > CustomNumberEditor treats number with leading zeros as decimal (removed unwanted octal support while preserving hex) We were using Spring 2.0.5. We simply replaced the jars with Spring 2.5.4, and it worked as it should have! Thanks to everyone for your help/assistance. We will make use of unit tests in the future, but this just turned out to be a Spring bug.
Well there's an awful lot of things going on there. You really need to try to isolate the problem - work out what's being sent to the database, what's being seen by Java etc. Try to pin it down in a short but complete program which *just* shows the problem - then you'll be in a much stronger position to file a bug or fix your code.
Java Not Converting String to Long Object Properly
[ "", "java", "hibernate", "spring", "websphere", "aix", "" ]
I need to compare build outputs of VS2005 in order to be sure I can reproduce the exact same product. When I do two builds one after the other in release and compare the produced files, I get different files; doing a text comparison, it seems that more than just the datetime is different. How can I build in order to reproduce the exact same product each time?
Whenever you build, the compiler embeds: * The date and time * A GUID (used for debugging etc, I believe) * Potentially the assembly version (which may have "1.2.3.\*" and populated automatically) * Potentially a strong hash A couple of options: * Find out where the files are "known to differ immaterially", wipe those bytes, and then compare * Use ildasm to convert the assemblies to text, and then compare those results. (I haven't checked whether or not this reveals the differences too - it may do, in which case again you'd have to explicitly ignore certain sections) This problem does pop up every so often, so I'll see if I can come up with a tool to do the comparisons - although it's unlikely to cope with signed assemblies, as the location of the signature could vary significantly.
One question: you did a ***text*** comparison for binary build outputs? As far as I know, most compilers never produce binary-identical build output for the same project. The compiler encodes into the binary the time of compilation, a special ordinal, etc.
comparing products of builds in release
[ "", "c#", "visual-studio-2005", "build-process", "" ]
Suppose `a` and `b` are both of type `int`, and `b` is nonzero. Consider the result of performing `a/b` in the following cases: 1. `a` and `b` are both nonnegative. 2. `a` and `b` are both negative. 3. Exactly one of them is negative. In Case 1 the result is rounded down to the nearest integer. But what does the standard say about Cases 2 and 3? An old draft I found floating on the Internet indicates that it is implementation dependent (yes, even case 2) but the committee is leaning toward making it always 'round toward zero.' Does anyone know what the (latest) standard says? Please answer only based on the standard, not what makes sense, or what particular compilers do.
According to the May 2008 revision, You're right: > The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined; otherwise (a/b)\*b + a%b is equal to a. If both operands are nonnegative then the remainder is nonnegative; if not, the sign of the remainder is implementation-defined75). Note 75 says: > According to work underway toward the revision of ISO C, the preferred algorithm for integer division follows the rules defined in the ISO Fortran standard, ISO/IEC 1539:1991, in which the quotient is always rounded toward zero. Chances are that C++ will lag C in this respect. As it stands, it's undefined but they have an eye towards changing it. I work in the same department as Stroustrup and with a member of the committee. Things take AGES to get accomplished, and its endlessly political. If it seems silly, it probably is.
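The two conventions are easy to see side by side — sketched here in Python, whose `//` operator floors (rounds toward negative infinity), making it a convenient foil for the truncate-toward-zero rule being standardized:

```python
import math

a, b = -7, 2

assert a // b == -4             # Python floors: -3.5 -> -4
assert math.trunc(a / b) == -3  # C99/C++11-style truncation: -3.5 -> -3

# Whichever rounding rule is used, (a/b)*b + a%b == a must hold,
# which is what pins down the sign of the remainder.
q, r = a // b, a % b
assert q * b + r == a
```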
As an update to the other answers: The last draft of C++11, [n3242](http://www.open-std.org/Jtc1/sc22/wg21/docs/papers/2011/n3242.pdf) which is for most practical purposes identical to the actual C++11 standard, says this in 5.6 point 4 (page 118): > For integral operands the / operator yields the algebraic quotient > with any fractional part discarded; (see note 80) Note 80 states (note that notes are non-normative): > 80) This is often called truncation towards zero. Point 4 goes on to state: > if the quotient a/b is representable in the type of the result, > (a/b)\*b + a%b is equal to a. which can be shown to require the sign of `a%b` to be the same as the sign of `a` (when not zero).
Integer division rounding with negatives in C++
[ "", "c++", "rounding", "" ]
using jython I have a situation where emails come in with different attachments. Certain file types I process others I ignore and dont write to file. I am caught in a rather nasty situation, because sometimes people send an email as an attachment, and that attached email has legal attachments. What I want to do is skip that attached email and all its attachments. using python/jythons std email lib how can i do this? --- to make it clearer I need to parse an email (named ROOT email), I want to get the attachments from this email using jython. Next certain attachments are supported ie .pdf .doc etc now it just so happens that, the clients send an email (ROOT email) with another email message (CHILD email) as an attachment, and in CHILD email it has .pdf attachments and such like. What I need is: to get rid of any CHILD emails attached to the ROOT email AND the CHILD emails attachments. What happens is I walk over the whole email and it just parses every attachment, BOTH ROOT attachments and CHILD attachments as if they were ROOT attachments. I cannot have this. I am only interested in ROOT attachements that are legal ie .pdf .doc. xls .rtf .tif .tiff That should do for now, I have to run to catch a bus! thanks!
The problem with existing suggestions is the walk method. This recursively, depth-first, walks the entire tree, including children. Look at the source of the walk method, and adapt it to skip the recursive part. A cursory reading suggests: ``` if msg.is_multipart(): for part in msg.get_payload(): """ Process message, but do not recurse """ filename = part.get_filename() ``` Reading the pydocs, get\_payload should return a list of the top level messages, without recursing.
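A runnable sketch of that idea (the helper name and the sample MIME construction are mine; only the stdlib `email` package is used):

```python
from email.mime.application import MIMEApplication
from email.mime.message import MIMEMessage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

ALLOWED = ('.pdf', '.doc', '.xls', '.rtf', '.tif', '.tiff')

def top_level_attachments(msg, allowed=ALLOWED):
    """Collect filenames of supported attachments hanging directly off
    the root message; attached emails (message/rfc822) and everything
    inside them are skipped."""
    names = []
    if not msg.is_multipart():
        return names
    for part in msg.get_payload():  # direct children only -- no walk()
        if part.get_content_maintype() == 'message':
            continue                # an attached email: ignore it wholesale
        name = part.get_filename()
        if name and name.lower().endswith(allowed):
            names.append(name)
    return names

# ROOT email: one legal attachment plus an attached CHILD email that
# itself carries a .pdf, which must be ignored.
child = MIMEMultipart()
child.attach(MIMEText('child body'))
child_pdf = MIMEApplication(b'%PDF-', _subtype='pdf')
child_pdf.add_header('Content-Disposition', 'attachment', filename='child.pdf')
child.attach(child_pdf)

root = MIMEMultipart()
root.attach(MIMEText('root body'))
root_pdf = MIMEApplication(b'%PDF-', _subtype='pdf')
root_pdf.add_header('Content-Disposition', 'attachment', filename='root.pdf')
root.attach(root_pdf)
root.attach(MIMEMessage(child))

assert top_level_attachments(root) == ['root.pdf']
```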
What about the example named "[Here’s an example of how to unpack a MIME message like the one above, into a directory of files](http://docs.python.org/library/email-examples.html)"? It looks close from what you want. ``` import email ... msg = email.message_from_file(fp) ... for part in msg.walk(): # multipart/* are just containers if part.get_content_maintype() == 'multipart': continue # Applications should really sanitize the given filename so that an # email message can't be used to overwrite important files filename = part.get_filename() if not filename: ext = mimetypes.guess_extension(part.get_content_type()) ... ```
How do I skip processing the attachments of an email which is an attachment of a different email
[ "", "python", "email", "jython", "attachment", "" ]
How does the C/C++ compiler handle the escape character (`\`) in source code? How is the compiler grammar written to process that character? What does the compiler do after encountering it?
Most compilers are divided into parts: the compiler front-end is called a [lexical analyzer](http://en.wikipedia.org/wiki/Lexical_analyzer) or a scanner. This part of the compiler reads the actual characters and creates tokens. It has a state machine which decides, upon seeing an escape character, whether it is genuine (for example when it appears inside a string) or it modifies the next character. The token is output accordingly as the escape character or some other token (such as a tab or a newline) to the next part of the compiler ([the parser](http://en.wikipedia.org/wiki/Parser)). The state machine can group several characters into a token.
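As an illustration, here is a toy version of that scanner logic in Python — the escape table and names are invented for the sketch, not taken from any real compiler:

```python
ESCAPES = {'n': '\n', 't': '\t', 'r': '\r', '0': '\0',
           '\\': '\\', '"': '"', "'": "'"}

def unescape(literal):
    """Resolve backslash escapes in the body of a string literal,
    the way a lexer does while building the string token's value."""
    out = []
    chars = iter(literal)
    for ch in chars:
        if ch == '\\':  # escape state: the backslash modifies the next char
            try:
                nxt = next(chars)
            except StopIteration:
                raise ValueError("dangling backslash at end of literal")
            if nxt not in ESCAPES:
                raise ValueError("unknown escape \\%s" % nxt)
            out.append(ESCAPES[nxt])
        else:           # normal state: copy the character through
            out.append(ch)
    return ''.join(out)

assert unescape(r'tab\there') == 'tab\there'
assert unescape(r'a\\b') == 'a\\b'
```

A real lexer does the same thing inline while tokenizing, emitting the translated character into the string token rather than the two source characters.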
An interesting note on this subject is [On Trusting Trust [PDF link]](http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf). The paper describes one way a compiler could handle this problem exactly, shows how the c-written-in-c compiler does not have an explicit translation of the codes into ASCII values; and how to bootstrap a new escape code into the compiler so that the understanding of the ASCII value for the new code is also implicit.
What's the Magic Behind Escape(\) Character
[ "", "c++", "c", "compiler-construction", "escaping", "backslash", "" ]
Do you know a simple script to count NLOCs (net lines of code)? The script should count lines of C code. It should not count empty lines or lines with just braces, but it doesn't need to be overly exact either.
I would do that using **awk** & **cpp** (preprocessor) & **wc** . awk removes all braces and blanks, the preprocessor removes all comments and wc counts the lines: ``` find . -name \*.cpp -o -name \*.h | xargs -n1 cpp -fpreprocessed -P | awk '!/^[{[:space:]}]*$/' | wc -l ``` If you want to have comments included: ``` find . -name \*.cpp -o -name \*.h | xargs awk '!/^[{[:space:]}]*$/' | wc -l ```
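If a shell pipeline is inconvenient, the same counting rule (skip lines that are empty or contain only braces) is easy to sketch in plain Python — note this version does not strip comments, so run the file through `cpp -fpreprocessed -P` first if you need that:

```python
import re

def nloc(source):
    """Count lines containing something other than braces and whitespace."""
    return sum(1 for line in source.splitlines()
               if re.search(r'[^{}\s]', line))

sample = """\
int main(void)
{
    int x = 0;

    {
        x = x + 1;
    }
    return x;
}
"""
assert nloc(sample) == 4  # signature + two statements + return
```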
Looking NLOC on the Net, I found mostly "Non-commented lines of code". You don't specify if comments must be skipped... So if I stick to your current message, the following one-liner in Perl should do the job: ``` perl -pe "s/^\s*[{}]?\s*\n//" Dialog.java | wc -l ``` I can extend it to handle line comments: ``` perl -pe "s#^\s*[{}]?\s*\n|^\s*//.*\n##" Dialog.java | wc -l ``` or perhaps ``` perl -pe "s#^\s*(?:[{}]?\s*|//.*)\n##" Dialog.java | wc -l ``` Handling block comments is slightly more tricky (I am not a Perl expert!). [EDIT] Got it... First part can be probably improved (shorter). Was fun to experiment with. ``` perl -e "$x = join('', <>); $x =~ s#/\*.*?\*/##gs; print $x" Dialog.java | perl -pe "s#^\s*(?:[{}]?\s*|//.*)\n##" | wc -l ``` PS.: I use double quotes because I tested on Windows...
Simple script to count NLOC?
[ "", "c++", "c", "metrics", "lines-of-code", "" ]
I'm looking for a way to upload a file to S3. I am using Django. I am currently using Amazon's Python library for uploading along with the following code: View: ``` def submitpicture(request): fuser = request.session["login"] copied_data = request.POST.copy() copied_data.update(request.FILES) content_type = copied_data['file'].get('content-type') ffile = copied_data['file']['content'] key = '%s-%s' % (fuser, ''.join(copied_data['file']['filename'].split(' '))) site_s3.save_s3_data(key, ffile, content_type) ``` Template: ``` <form action="/submitpicture/" method="POST"> <input type="file" id="file" name="file" /> <input type="submit" value="submit" /> </form> ``` However, when I actually try to run it I get the following error: ``` "Key 'file' not found in <QueryDict: {}>" #MultiValueDictKeyError ``` I really don't see what I'm doing wrong. Can someone point me in the right direction? Edit: Just in case someone wonders, I am planning on adding some validation after I get the actual upload working.
You will have to provide the enctype attribute to the FORM element (I've been bitten by this before). For example, your FORM tag should look like: ``` <form action="/submitpicture/" method="POST" enctype="multipart/form-data" > ``` Without the enctype, you will find yourself with an empty request.FILES.
Instead of doing this manually I would take a look at the storage backend David Larlet has written for Django, [django-storages](https://django-storages.readthedocs.io/en/latest/)
How to upload a file with django (python) and s3?
[ "", "python", "django", "file-upload", "amazon-s3", "" ]
I am using python 2.6 on XP. I have just installed py2exe, and I can successfully create a simple hello.exe from a hello.py. However, when I try using py2exe on my real program, py2exe produces a few information messages but fails to generate anything in the dist folder. My setup.py looks like this: ``` from distutils.core import setup import py2exe setup(console=['ServerManager.py']) ``` and the py2exe output looks like this: ``` python setup.py py2exe running py2exe creating C:\DevSource\Scripts\ServerManager\build creating C:\DevSource\Scripts\ServerManager\build\bdist.win32 ... ... creating C:\DevSource\Scripts\ServerManager\dist *** searching for required modules *** *** parsing results *** creating python loader for extension 'wx._misc_' (C:\Python26\lib\site-packages\wx-2.8-msw-unicode\wx\_misc_.pyd -> wx._misc_.pyd) creating python loader for extension 'lxml.etree' (C:\Python26\lib\site-packages\lxml\etree.pyd -> lxml.etree.pyd) ... ... creating python loader for extension 'bz2' (C:\Python26\DLLs\bz2.pyd -> bz2.pyd) *** finding dlls needed *** ``` py2exe seems to have found all my imports (though I was a bit surprised to see win32 mentioned, as I am not explicitly importing it). Also, my program starts up quite happily with this command: ``` python ServerManager.py ``` Clearly I am doing something fundamentally wrong, but in the absence of any error messages from py2exe I have no idea what.
I've discovered that py2exe works just fine if I comment out the part of my program that uses wxPython. Also, when I use py2exe on the 'simple' sample that comes with its download (i.e. in Python26\Lib\site-packages\py2exe\samples\simple), I get this error message: ``` *** finding dlls needed *** error: MSVCP90.dll: No such file or directory ``` So something about wxPython makes py2exe think I need a Visual Studio 2008 DLL. I don't have VS2008, and yet my program works perfectly well as a directory of Python modules. I found a copy of MSVCP90.DLL on the web, installed it in Python26/DLLs, and py2exe now works fine. I still don't understand where this dependency has come from, since I can run my code perfectly okay without py2exe. It's also annoying that py2exe didn't give me an error message like it did with the test\_wx.py sample. Further update: When I tried to run the output from py2exe on another PC, I discovered that it needed to have MSVCR90.DLL installed; so if your target PC hasn't got Visual C++ 2008 already installed, I recommend you download and install the [Microsoft Visual C++ 2008 Redistributable Package](http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=29).
I put this in all my setup.py scripts: ``` distutils.core.setup( options = { "py2exe": { "dll_excludes": ["MSVCP90.dll"] } }, ... ) ``` This keeps py2exe quiet, but you still need to make sure that dll is on the user's machine.
py2exe fails to generate an executable
[ "", "python", "wxpython", "py2exe", "" ]
How do I connect to the database (MySQL) in a connection bean using JSF to retrieve its contents? Also, please let me know how to configure the web.xml file.
To get connected to mysql:

```
public void open() {
    try {
        String databaseName = "custom";
        String userName = "root";
        String password = "welcome";
        String url = "jdbc:mysql://localhost/" + databaseName;
        Class.forName("com.mysql.jdbc.Driver").newInstance();
        // 'connection' is a java.sql.Connection field on this bean
        connection = DriverManager.getConnection(url, userName, password);
    } catch (Exception e) {
        System.out.println("Not able to connect");
    }
}
```

In this case there is nothing to change in web.xml, but you have to add this dependency to pom.xml:

```
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.6</version>
</dependency>
```

This was working successfully.
Here is a very good tutorial on how to use DAO with JSF in the best way: <http://balusc.blogspot.com/2008/07/dao-tutorial-use-in-jsf.html> If you are using JSF, that website can be a good place to find solutions for common problems. There are great and complete examples. Anyway, JSF is a framework that manages the view and the controller layers. For the model layer and the access to a database there are not big difference if you use JSF or any other java web framework that manages the view/controller part of your application.
Connect to database using jsf
[ "", "java", "mysql", "database", "maven-2", "" ]
I'm using two commercial libraries that are produced by the same vendor, called VendorLibA and VendorLibB. The libraries are distributed as many DLLs that depend on the compiler version (e.g. VC7, VC8). Both libraries depend on a another library, produced by this vendor, called VendorLibUtils and contained in one DLL. The problem: VendorLibA uses a different version of VendorLibUtils than VendorLibB. The two versions are not binary compatible, and even if they were it would be a bad idea to use the wrong version. Is there any way I could use the two libraries under the same process? **Note:** LoadLibrary can't solve this since my process is not that one that's importing VendorLibUtils. **EDIT:** Forgot to mention the obvious, I don't have to source code for any of the commercial libraries and probably I will never have (*sigh*). **EDIT:** The alternative btw, is to do this: [How to combine GUI applications in Windows](https://stackoverflow.com/questions/312569/how-to-combine-gui-applications-in-windows)
As you are not using VendorLibUtils directly, I assume you can't use [LoadLibrary](http://msdn.microsoft.com/en-us/library/ms684175(VS.85).aspx) etc. If the VendorLibUtils DLLs only have exports by ordinal, you could probably rename one of the the libraries and patch the corresponding VendorLib*X* to use a different filename for its imports. If the VendorLibUtils DLLs have one or more exported symbols with the same names, you *might* need to patch the imports and export tables too, but let's hope not! :-)
I think your most promising option is to complain, loudly, to the vendor who is distributing mutually incompatible products. That rather goes against the idea of a DLL. You can't just put the DLLs in different directories. Once a DLL with a given name is loaded, all other attempts to load another DLL with the same module name will simply use the one that's already loaded, even if the paths are different. From that, we can conclude that to load two copies of VendorLibUtils, one copy needs to have a different name. You can't just rename the DLL file; the code in your program won't know to look for the different file. Therefore, perhaps there's a way to edit the import table of VendorLibB to make it think the functions it needs are in VendorLibUtilsB.dll instead of just VendorLibUtils.dll. I'm afraid I don't know of any utility that will do that, but I have little doubt it's possible to do.
Can I use two incompatible versions of the same DLL in the same process?
[ "", "c++", "windows", "winapi", "dll", "" ]
I've got a message contained in an byte[], encrypted with "RSA/ECB/PKCS1Padding". To decrypt it I create a Cipher c and initiate it with ``` c = Cipher.getInstance("RSA/ECB/PKCS1Padding"); ``` Untill now I have only decrypted small messages, using the *doFinal()* method, returning an byte[] with the decrypted bytes. ``` c.init(Cipher.DECRYPT_MODE, privateKey); byte[] decryptetBytes = c.doFinal(encryptedBytes); ``` But in this case the data is bigger (approx 500 Bytes), and the *doFinal()*-method throws an exception (javax.crypto.IllegalBlockSizeException: Data must not be longer than 128 bytes). I guess I need to use the *update()*- method, but I can't figure out how to get it to work properly. How is this done?
I think using RSA encryption for anything but key transport is abuse. Generate a new key for a symmetric cipher and encrypt your bulk data with that. Then encrypt the key with RSA. Send the symmetrically-encrypted cipher-text along with the asymmetrically-encrypted content encryption key to your recipient.
With RSA you can only encrypt/decrypt a block whose size is at most your key length minus the padding length. If you have data longer than your key, it was probably just merged into one array, so you should split it into chunks sized to your key (128 bytes suggests a 1024-bit key; with PKCS#1 padding each block actually holds up to 117 bytes of plaintext). Using update() is not the answer here. Simply put, you have to know how this array was created. Generally speaking, RSA shouldn't be used to encrypt large amounts of data as it's quite time consuming. It should be used to encrypt a key for a symmetric cipher, like AES. Take a look here: <https://www.owasp.org/index.php/Digital_Signature_Implementation_in_Java>
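The numbers behind that advice are easy to check: the block size comes straight from the key length, minus the padding overhead (11 bytes for PKCS#1 v1.5). A small sketch of the arithmetic, in Python for brevity; it is illustrative only, since as both answers say, real code should use hybrid encryption rather than chunked RSA:

```python
def max_plaintext_len(key_bits: int, padding_overhead: int = 11) -> int:
    """Largest plaintext a single RSA/PKCS#1 v1.5 block can hold."""
    return key_bits // 8 - padding_overhead

def split_into_blocks(data: bytes, key_bits: int) -> list:
    """Split data into chunks small enough for one RSA block each."""
    size = max_plaintext_len(key_bits)
    return [data[i:i + size] for i in range(0, len(data), size)]
```

So a 1024-bit key gives 117-byte chunks, and the 500-byte message from the question would need 5 RSA operations, which is exactly why the hybrid AES-plus-RSA approach is preferred.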
Java/JCE: Decrypting "long" message encrypted with RSA
[ "", "java", "rsa", "encryption", "jce", "" ]
I need to test a function that needs to query a page on an external server using urllib.urlopen (it also uses urllib.urlencode). The server could be down, the page could change; I can't rely on it for a test. What is the best way to control what urllib.urlopen returns?
Another simple approach is to have your test override urllib's `urlopen()` function. For example, if your module has ``` import urllib def some_function_that_uses_urllib(): ... urllib.urlopen() ... ``` You could define your test like this: ``` import mymodule def dummy_urlopen(url): ... mymodule.urllib.urlopen = dummy_urlopen ``` Then, when your tests invoke functions in `mymodule`, `dummy_urlopen()` will be called instead of the real `urlopen()`. Dynamic languages like Python make it super easy to stub out methods and classes for testing. See my blog posts at <http://softwarecorner.wordpress.com/> for more information about stubbing out dependencies for tests.
I am using [Mock's](http://www.voidspace.org.uk/python/mock/) patch decorator: ``` from mock import patch [...] @patch('urllib.urlopen') def test_foo(self, urlopen_mock): urlopen_mock.return_value = MyUrlOpenMock() ```
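On Python 3, where the function lives at `urllib.request.urlopen`, the same idea works with only the standard library's `unittest.mock`. A minimal sketch (the function under test here is invented for illustration):

```python
import io
import urllib.request
from unittest import mock

def fetch_first_line(url):
    """Code under test: reads one line from a URL."""
    with urllib.request.urlopen(url) as resp:
        return resp.readline()

def test_fetch_first_line():
    # A BytesIO stands in for the HTTP response object.
    fake = io.BytesIO(b"hello\nworld\n")
    with mock.patch("urllib.request.urlopen", return_value=fake):
        assert fetch_first_line("http://example.invalid/") == b"hello\n"
```

Using `mock.patch` as a context manager (or decorator) also restores the real `urlopen` afterwards, which the manual-assignment approach above does not do on its own.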
How can one mock/stub python module like urllib
[ "", "python", "unit-testing", "testing", "mocking", "" ]
This is to all the C# gurus. I have been banging my head on this for some time already, tried all kinds of advice on the net with no avail. The action is happening in Windows Mobile 5.0. I have a DLL named MyDll.dll. In the MyDll.h I have: ``` extern "C" __declspec(dllexport) int MyDllFunction(int one, int two); ``` The definition of MyDllFunction in MyDll.cpp is: ``` int MyDllFunction(int one, int two) { return one + two; } ``` The C# class contains the following declaration: ``` [DllImport("MyDll.dll")] extern public static int MyDllFunction(int one, int two); ``` In the same class I am calling MyDllFunction the following way: ``` int res = MyDllFunction(10, 10); ``` And this is where the bloody thing keeps giving me "Can't find PInvoke DLL 'MyDll.dll'". I have verified that I can actually do the PInvoke on system calls, such as "GetAsyncKeyState(1)", declared as: ``` [DllImport("coredll.dll")] protected static extern short GetAsyncKeyState(int vKey); ``` The MyDll.dll is in the same folder as the executable, and I have also tried putting it into the /Windows folder with no changes nor success. Any advice or solutions are greatly appreciated.
Maybe this seems like an obvious thing to check, but are you compiling the native DLL for the correct CPU architecture? IIRC, Windows Mobile runs on multiple CPU architectures.
I have the same problem, but this time there is no obvious difference in the platforms involved. Why is something that is supposed to be 'normal' so badly documented, so difficult and, worst of all, so 'flaky'? Are there ANY tools that allow me to go onto my PPC emulator and step by step check (a) that the dll is where the CLR expects it (I have put it both in the same directory and in the windows directory on the emulator; I can see it is there using the emulator's file explorer), (b) that the CLR can load it (has enough memory etc), (c) that the CLR can then find what functions are present (useful if I can get beyond the stupid 'can't find the dll' error)? I'd love to force Gates or indeed any of the 'windows mobile' team to use the crap they create.
How do I solve the .NET CF exception "Can't find PInvoke DLL"?
[ "", "c#", ".net", "windows-mobile", "pinvoke", "" ]
I would like to use Linq to query a bus schedule in my project, so that at any time I can get the next 5 bus arrival times. How can I limit my query to the first 5 results? More generally, how can I take a slice of a list in C#? (In Python I would use `mylist[:5]` to get the first 5 elements.)
``` var firstFiveItems = myList.Take(5); ``` Or to slice: ``` var secondFiveItems = myList.Skip(5).Take(5); ``` And of course often it's convenient to get the first five items according to some kind of order: ``` var firstFiveArrivals = myList.OrderBy(i => i.ArrivalTime).Take(5); ```
In case anyone is interested (even if the question does not ask for this version), in C# 2 would be: (I have edited the answer, following some suggestions) ``` myList.Sort(CLASS_FOR_COMPARER); List<string> fiveElements = myList.GetRange(0, 5); ```
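The Python slicing mentioned in the question also has a lazy counterpart: `itertools.islice` works on any iterable, not just lists, much like `Skip`/`Take` on an `IEnumerable`:

```python
from itertools import islice

def take(n, iterable):
    """First n items, like Enumerable.Take."""
    return list(islice(iterable, n))

def slice_of(iterable, start, stop):
    """Items [start:stop), like Skip(start).Take(stop - start)."""
    return list(islice(iterable, start, stop))
```

Because `islice` consumes the iterable lazily, `take(5, some_generator)` never materializes more than the first five elements, the same benefit deferred LINQ queries give in C#.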
How to get first N elements of a list in C#?
[ "", "c#", "" ]
This is a [follow up question](https://stackoverflow.com/questions/319199/why-is-java-able-to-store-0xff000000-as-an-int). So, Java store's integers in [two's-complements](http://en.wikipedia.org/wiki/Two%27s_complement) and you can do the following: ``` int ALPHA_MASK = 0xff000000; ``` In C# this requires the use of an unsigned integer, `uint`, because it interprets this to be `4278190080` instead of `-16777216`. My question, how do declare negative values in hexadecimal notation in c#, and how exactly are integers represented internally? What are the differences to Java here?
C# (rather, .NET) also uses the two's complement, but it supports both signed and unsigned types (which Java doesn't). A bit mask is more naturally an unsigned thing - why should one bit be different than all the other bits? In this specific case, it is safe to use an unchecked cast: ``` int ALPHA_MASK = unchecked((int)0xFF000000); ``` To "directly" represent this number as a signed value, you write ``` int ALPHA_MASK = -0x1000000; // == -16777216 ``` Hexadecimal is not (or should not) be any different from decimal: to represent a negative number, you need to write a negative sign, followed by the digits representing the absolute value.
Well, you can use an unchecked block and a cast: ``` unchecked { int ALPHA_MASK = (int)0xff000000; } ``` or ``` int ALPHA_MASK = unchecked((int)0xff000000); ``` Not terribly convenient, though... perhaps just use a literal integer?
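The two's-complement mapping behind both answers is easy to verify: an unsigned 32-bit pattern and its signed interpretation differ by 2^32 exactly when the top bit is set. A quick check, in Python simply because it makes the bit arithmetic easy to run:

```python
def to_signed32(u: int) -> int:
    """Interpret a 32-bit unsigned pattern as two's-complement."""
    u &= 0xFFFFFFFF
    return u - 0x100000000 if u & 0x80000000 else u

def to_unsigned32(s: int) -> int:
    """Inverse: the raw 32-bit pattern of a signed value."""
    return s & 0xFFFFFFFF
```

This reproduces the numbers from the question: the bit pattern `0xFF000000` is 4278190080 read as unsigned and -16777216 read as signed, which is what Java's `int` literal and C#'s `unchecked` cast both produce.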
Hexadecimal notation and signed integers
[ "", "c#", "types", "hex", "" ]
Given a couple of simple tables like so: ``` create table R(foo text); create table S(bar text); ``` If I were to union them together in a query, what do I call the column? ``` select T.???? from ( select foo from R union select bar from S) as T; ``` Now, in mysql, I can apparently refer to the column of T as 'foo' -- the name of the matching column for the first relation in the union. In sqlite3, however, that doesn't seem to work. Is there a way to do it that's standard across all SQL implementations? If not, how about just for sqlite3? Correction: sqlite3 does allow you to refer to T's column as 'foo' after all! Oops!
Although there is no spelled-out rule, you can use the column names from the first subquery of the union to refer to the union's results.
Try giving the columns an alias:

```
select T.Col1 from (
select foo as Col1 from R
union
select bar as Col1 from S) as T;
```

Or, if the column name is not needed, then T.\* will be enough.
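Both points are easy to verify against SQLite itself, for example from Python's standard-library `sqlite3` module: the union exposes the first subquery's (aliased) column name, and the alias makes the reference portable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE R(foo TEXT);
    CREATE TABLE S(bar TEXT);
    INSERT INTO R VALUES ('a');
    INSERT INTO S VALUES ('b');
""")

cur = conn.execute(
    "SELECT T.col1 FROM "
    "(SELECT foo AS col1 FROM R UNION SELECT bar AS col1 FROM S) AS T "
    "ORDER BY T.col1"
)
rows = [r[0] for r in cur.fetchall()]
# cursor.description reports the result column's name
column_name = cur.description[0][0]
```

Here `rows` is `['a', 'b']` and `column_name` is `'col1'`, the alias given in the first SELECT.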
Column names for a table formed by a UNION
[ "", "sql", "sqlite", "union", "" ]
In C++, I code this way: ``` //foo.h class cBar { void foobar(); } ``` --- ``` //foo.cpp void cBar::foobar() { //Code } ``` I tried to do this on PHP but the parser would complain. PHP's documentation also doesn't help. Can this be done in PHP?
No. You need to include all your function definitions inside the class block. If defining your functions in a separate structure makes you feel better, you could use an interface.

```
interface iBar
{
    function foobar();
}

class cBar implements iBar
{
    function foobar()
    {
        //Code
    }
}
```

I'd suggest just getting used to coding in a new way. It's easy to code consistently within a single language, but I think you are fighting a losing battle if you want to do the same across languages.
You can't really do this in the same manner. You can use [class abstraction](http://nl2.php.net/manual/en/language.oop5.abstract.php) and [interfaces](http://nl2.php.net/manual/en/language.oop5.interfaces.php), though. The main difference between the two is that an interface does not allow you to specify the function body, whereas (non-abstract) methods in an abstract class can hold all kinds of default behaviour.

**Abstraction:**

```
abstract class cBar
{
    // MUST be extended
    abstract protected function foobar();

    // MAY be extended
    protected function someMethod()
    {
        // do stuff
    }
}

class cBarExtender extends cBar
{
    protected function foobar()
    {
        // do stuff
    }
}
```

**Interfacing:**

```
interface cBar
{
    // MUST be implemented (interface methods must be public in PHP)
    public function foobar();
}

class cBarImplementation implements cBar
{
    public function foobar()
    {
        // do stuff
    }
}
```
Can you code a class function outside of a class in PHP?
[ "", "php", "" ]
Does the HTML "select" element have an "on select" event? What exactly is the name of the event?
Is [`onchange`](http://www.w3.org/TR/html401/interact/scripts.html#adef-onchange) what you're looking for?
It's also worth mentioning that this doesn't fire if the selection doesn't change (seems self explanatory). As far as I know there is no event that fires when a user drops down the selectbox and then reselects the original value.
HTML <select> what is the name of on select event?
[ "", "javascript", "jquery", "html", "html-select", "" ]
How do I execute the following shell command using the Python [`subprocess`](https://docs.python.org/library/subprocess.html) module? ``` echo "input data" | awk -f script.awk | sort > outfile.txt ``` The input data will come from a string, so I don't actually need `echo`. I've got this far, can anyone explain how I get it to pipe through `sort` too? ``` p_awk = subprocess.Popen(["awk","-f","script.awk"], stdin=subprocess.PIPE, stdout=file("outfile.txt", "w")) p_awk.communicate( "input data" ) ``` **UPDATE**: Note that while the accepted answer below doesn't actually answer the question as asked, I believe S.Lott is right and it's better to avoid having to solve that problem in the first place!
You'd be a little happier with the following.

```
import subprocess
awk_sort = subprocess.Popen( "awk -f script.awk | sort > outfile.txt",
    stdin=subprocess.PIPE, shell=True )
awk_sort.communicate( b"input data\n" )
```

Delegate part of the work to the shell. Let it connect two processes with a pipeline.

You'd be a lot happier rewriting 'script.awk' into Python, eliminating awk and the pipeline.

**Edit**. Some of the reasons for suggesting that awk isn't helping. [There are too many reasons to respond via comments.]

1. Awk is adding a step of no significant value. There's nothing unique about awk's processing that Python doesn't handle.
2. The pipelining from awk to sort, for large sets of data, may improve elapsed processing time. For short sets of data, it has no significant benefit. A quick measurement of `awk >file ; sort file` and `awk | sort` will reveal if concurrency helps. With sort, it rarely helps because sort is not a once-through filter.
3. The simplicity of "Python to sort" processing (instead of "Python to awk to sort") prevents the exact kind of questions being asked here.
4. Python -- while wordier than awk -- is also explicit where awk has certain implicit rules that are opaque to newbies, and confusing to non-specialists.
5. Awk (like the shell script itself) adds Yet Another Programming language. If all of this can be done in one language (Python), eliminating the shell and the awk programming eliminates two programming languages, allowing someone to focus on the value-producing parts of the task.

Bottom line: awk can't add significant value. In this case, awk is a net cost; it added enough complexity that it was necessary to ask this question. Removing awk will be a net gain.

**Sidebar** Why building a pipeline (`a | b`) is so hard.

When the shell is confronted with `a | b` it has to do the following.

1. Fork a child process of the original shell. This will eventually become b.
2. Build an os pipe (not a Python subprocess.PIPE): call `os.pipe()`, which returns two new file descriptors that are connected via a common buffer. At this point the process has stdin, stdout, stderr from its parent, plus a file that will be "a's stdout" and "b's stdin".
3. Fork a child. The child replaces its stdout with the new a's stdout. Exec the `a` process.
4. The b child replaces its stdin with the new b's stdin. Exec the `b` process.
5. The b child waits for a to complete.
6. The parent is waiting for b to complete.

I think that the above can be used recursively to spawn `a | b | c`, but you have to implicitly parenthesize long pipelines, treating them as if they're `a | (b | c)`.

Since Python has `os.pipe()`, `os.exec()` and `os.fork()`, and you can replace `sys.stdin` and `sys.stdout`, there's a way to do the above in pure Python. Indeed, you may be able to work out some shortcuts using `os.pipe()` and `subprocess.Popen`.

However, it's easier to delegate that operation to the shell.
``` import subprocess some_string = b'input_data' sort_out = open('outfile.txt', 'wb', 0) sort_in = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sort_out).stdin subprocess.Popen(['awk', '-f', 'script.awk'], stdout=sort_in, stdin=subprocess.PIPE).communicate(some_string) ```
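For completeness, the pipeline can also be wired in pure Python without `shell=True`, by handing one process's stdout to the next one's stdin. A portable sketch, where two `python -c` one-liners stand in for awk and sort:

```python
import subprocess
import sys

def run_pipeline(input_data: bytes) -> bytes:
    """Equivalent of `stage1 | stage2`: upper-case the input,
    then sort its words, one per output line."""
    stage1 = subprocess.Popen(
        [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    stage2 = subprocess.Popen(
        [sys.executable, "-c",
         "import sys; print('\\n'.join(sorted(sys.stdin.read().split())))"],
        stdin=stage1.stdout, stdout=subprocess.PIPE)
    # Close our copy of stage1's stdout so stage1 sees EOF/SIGPIPE
    # correctly if stage2 exits early.
    stage1.stdout.close()
    # Fine for small inputs; for large inputs feed stdin incrementally
    # or use communicate() to avoid deadlock on full pipe buffers.
    stage1.stdin.write(input_data)
    stage1.stdin.close()
    out, _ = stage2.communicate()
    return out
```

For example, `run_pipeline(b"banana apple\n")` returns `b"APPLE\nBANANA\n"`.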
How do I use subprocess.Popen to connect multiple processes by pipes?
[ "", "python", "pipe", "subprocess", "" ]
I'm pretty sure this is a simple question in regards to formatting but here's what I want to accomplish: I want to output data onto the screen using `cout`. I want to output this in the form of a table format. What I mean by this is the columns and rows should be properly aligned. Example: ``` Test 1 Test2 2 Iamlongverylongblah 2 Etc 1 ``` I am only concerned with the individual line so my line to output now (not working) is `cout << var1 << "\t\t" << var2 << endl;` Which gives me something like: ``` Test 1 Test2 2 Iamlongverylongblah 2 Etc 1 ```
[setw](http://www.cplusplus.com/reference/iomanip/setw/). ``` #include <iostream> #include <iomanip> using namespace std; int main () { cout << setw(21) << left << "Test" << 1 << endl; cout << setw(21) << left << "Test2" << 2 << endl; cout << setw(21) << left << "Iamlongverylongblah" << 2 << endl; cout << setw(21) << left << "Etc" << 1 << endl; return 0; } ```
I advise using [Boost Format](http://www.boost.org/doc/libs/1_37_0/libs/format/index.html). Use something like this: ``` cout << format("%|1$30| %2%") % var1 % var2; ```
Align cout format as table's columns
[ "", "c++", "string", "format", "" ]
I am using the webbrowser control in Visual Studio. I think it is a wrapper around Internet Explorer. Anyway, all is going well; I am using it in edit mode, but I can't get the document's keydown event to fire (in order to catch Ctrl+V). Has anyone had similar problems with it? Does anyone have a solution?
Indeed the webbrowser control is just a wrapper around the IE browser control. Is your problem that the control's PreviewKeyDown is not working? It seems to be working for me as long as the control has focus.

```
webBrowser1.PreviewKeyDown += new PreviewKeyDownEventHandler(webBrowser1_PreviewKeyDown);
....

private void webBrowser1_PreviewKeyDown(object sender, PreviewKeyDownEventArgs e)
{
    Console.WriteLine(e.KeyCode.ToString() + " " + e.Modifiers.ToString());
    if (e.Modifiers == Keys.Control && e.KeyCode == Keys.V)
    {
        MessageBox.Show("ctrl-v pressed");
    }
}
```

but perhaps I am not completely understanding?
You should override the "WndProc()" method in a class derived from the WebBrowser control, or in the form which contains the webbrowser. Or you can catch the keys with a custom message filter (Application.AddMessageFilter). This way you can also filter mouse actions. I had the same problems years ago, but I don't remember which way I used.
.net webbrowser control
[ "", "c#", ".net", "" ]
I am trying to build a GUI (Swing) for a simple Java application. The application should have a start window like a menu. From there I would like to navigate to several other windows. My question is: what is the best practice to achieve such navigation? Should I build several JFrames and switch their visibility on/off when navigating, OR should I have one JFrame and add/remove JPanels in this single frame to navigate between the windows? Thanks.
If each of your windows corresponds to a different task (possibly nested), you could present your application as an SDI, with a **task panel** on the left, like:

![demo](https://i.stack.imgur.com/Zucvw.png)

Each task would display one JFrame with the associated window.

Another solution: a **[table of contents](http://www.java2s.com/Code/Java/Swing-Components/SwingTaskpanestyle6.htm)** like the one on the left side of this picture (note: this example also displays a task panel at the bottom right)

![SwingXDemo](https://i.stack.imgur.com/KbYmE.png)
I recommend

* Do *not* do an MDI application with sub-frames like those found in the old Windows days. They suck as they make the matter confusing.
* Do design a tabbed interface. The welcome page/menu will be displayed on a first tab that is always created at start. All cool kids do that nowadays:
	+ Visual Studio
	+ Eclipse
	+ Firefox
Gui - Best way to navigate between windows?
[ "", "java", "user-interface", "swing", "" ]
What is your preferred method of traversing a tree data structure, since recursive method calls can be pretty inefficient in some circumstances. I am simply using a generator like the one below. Do you have any hints to make it faster?

```
def children(self):
    stack = [self.entities]
    while stack:
        for e in stack.pop():
            yield e
            if e.entities:
                stack.append(e.entities)
```

Here is some test data. The first one is recursive, the second uses the generator:

```
s = time.time()
for i in range(100000):
    e.inc_counter()
print time.time() - s

s = time.time()
for i in range(100000):
    for e in e.children():
        e.inc_counter_s()
print time.time() - s
```

Results:

```
0.416000127792
0.298999786377
```

Test code:

```
import random

class Entity():
    def __init__(self, name):
        self.entities = []
        self.name = name
        self.counter = 1
        self.depth = 0

    def add_entity(self, e):
        e.depth = self.depth + 1
        self.entities.append(e)

    def inc_counter_r(self):
        for e in self.entities:
            e.counter += 1
            e.inc_counter_r()

    def children(self):
        stack = [self.entities]
        while stack:
            for e in stack.pop():
                yield e
                if e.entities:
                    stack.append(e.entities)

root = Entity("main")

def fill_node(root, max_depth):
    if root.depth <= max_depth:
        for i in range(random.randint(10, 15)):
            e = Entity("node_%s_%s" % (root.depth, i))
            root.add_entity(e)
            fill_node(e, max_depth)

fill_node(root, 3)

import time
s = time.time()
for i in range(100):
    root.inc_counter_r()
print "recursive:", time.time() - s

s = time.time()
for i in range(100):
    for e in root.children():
        e.counter += 1
print "generator:", time.time() - s
```
I can't think of any big algorithmic improvements, but a simple microoptimisation you can make is to bind frequently called methods (such as stack.append / stack.pop) to locals (this saves a dictionary lookup) ``` def children(self): stack = [self.entities] push = stack.append pop = stack.pop while stack: for e in pop(): yield e if e.entities: push(e.entities) ``` This gives a small (~15%) speedup by my tests (using 100 traversals of an 8-deep tree with 4 children at each node gives me the below timings:) ``` children : 5.53942348004 children_bind: 4.77636131253 ``` Not huge, but worth doing if speed is important.
Unless your tree is really large or you have really high (real) requirements for speed, I would choose the recursive method. Easier to read, easier to code.
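One more option beyond the stack-based generator: swapping the stack for `collections.deque` turns the same loop into breadth-first order, and reversing the children on push preserves left-to-right depth-first order. A sketch on a minimal node type (not the question's `Entity` class):

```python
from collections import deque

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def iter_bfs(root):
    """Yield nodes level by level (queue: pop from the left)."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        yield node
        queue.extend(node.children)

def iter_dfs(root):
    """Yield nodes depth-first, preserving left-to-right order
    (stack: push children reversed so the leftmost pops first)."""
    stack = [root]
    while stack:
        node = stack.pop()
        yield node
        stack.extend(reversed(node.children))
```

Both run in one loop without recursion, so neither is limited by Python's recursion depth on very deep trees.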
How do you iterate over a tree?
[ "", "python", "algorithm", "" ]
What is the correct way to convert ASP.NET SOAP-based web services to JSON-based responses? ...And then call these from jQuery? What are "best practices" when integrating jQuery based AJAX and ASP.NET? Articles? Books?
JSON conversion to .NET classes can be done with **System.Runtime.Serialization** and **System.Runtime.Serialization.JSON**. I suspect you're more interested in setting up function calls from client to server. I think it is worth trying [this tutorial](http://asp.net/AJAX/Documentation/Live/tutorials/ASPNETAJAXWebServicesTutorials.aspx). In this tutorial, you'll need to add a webservice '.asmx' file. In the asmx file you will be able to create functions callable from client script. Your ASP.NET pages can also reference client script generated to make calling the .asmx functions. If you actually want to do JSON serialization though, you could also use the following: ``` using System.Runtime.Serialization; using System.Runtime.Serialization.Json; public class JsonSerializer { // To make a type serializeable, mark it with DataContractAttribute // To make a member of such types serializeable, mark them with DataMemberAttribute // All types marked for serialization then need to be passed to JsonSerialize as // parameter 'types' static public string JsonSerialize(object objectToSerialize, params Type[] types) { DataContractJsonSerializer serializer = new DataContractJsonSerializer( types[0], types.Skip(1)); MemoryStream ms = new MemoryStream(); serializer.WriteObject(ms, objectToSerialize); ms.Seek(0, SeekOrigin.Begin); StreamReader sr = new StreamReader(ms); return sr.ReadToEnd(); } } ```
The following article [Extending an existing ASP.NET Web Service to support JSON](http://www.codeproject.com/KB/webservices/XML_and_JSON_on_same_We_S.aspx) by Bobby Soares on codeproject.com talks about using custom method attributes to achieve the desired result.
ASP.NET - What is the correct approach to JSON based web services with jQuery?
[ "", "asp.net", "javascript", "jquery", "web-services", "json", "" ]
In PHP, I can write: ``` $vname = 'phone'; $$vname = '555-1234'; print $phone; ``` ... And the script will output "555-1234". Is there any equivalent in Perl? Is there any way to constrain `$phone` to the scope of the local block, as if I'd written `my $phone`? Using `my $$vname` gives me "Can't declare scalar dereference in my at ..." errors.
What you're attempting to do is called a "symbolic reference." While you can do this in Perl [you shouldn't](http://perl.plover.com/varvarname.html). Symbolic references only work with global variables -- not lexical (`my`) ones. There is no way to restrict their scope. **Symbolic references are dangerous.** For that reason they don't work under the `strict` pragma. In general, whenever you think you need symbolic references you should use a hash instead: ``` my %hash; $hash{phone} = '555-1234'; print $hash{phone}; ``` There are a few cases where symrefs are useful and even necessary. For example, Perl's export mechanism uses them. These are advanced topics. By the time you're ready for them you won't need to ask how. ;-)
Read Mark-Jason Dominus's rants against doing this in *[Why it's stupid to `use a variable as a variable name'](http://www.plover.com/perl/varvarname.html)*. You would limit the scope of your changes to $phone by starting the block with ``` local $phone; ``` or even ``` local $$vname; ``` (Though either changes $phone for any subs called from your block too, so it's not the same as the lexical scope of a `my()` declaration.)
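The hash idiom recommended above is the same one other dynamic languages use instead of symbolic references. In Python, for instance, it looks like this, and the dict is local to the enclosing function, giving exactly the block-scoping the asker wanted from `my $phone`:

```python
def lookup_demo():
    # Instead of creating a variable whose *name* is computed at
    # runtime, use the computed name as a dictionary key.
    fields = {}
    vname = "phone"
    fields[vname] = "555-1234"
    # fields (and everything in it) goes out of scope with the function,
    # unlike a symbolic reference, which would have to be a global.
    return fields["phone"]
```

Calling `lookup_demo()` returns `"555-1234"`, with no dynamically named variable ever created.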
Does Perl have PHP-like dynamic variables?
[ "", "php", "perl", "dynamic", "variables", "" ]
I'm used to working with Java, where large numbers of examples are available. For various reasons I had to switch to C# and am trying to do the following in SharpDevelop: ``` // Form has a menu containing a combobox added via SharpDevelop's GUI // --- Variables string[] languages = new string[2]; languages[0] = "English"; languages[1] = "German"; DataSet myDataSet = new DataSet(); // --- Preparation DataTable lTable = new DataTable("Lang"); DataColumn lName = new DataColumn("Language", typeof(string)); lTable.Columns.Add( lName ); for( int i=0; i<languages.Length; i++ ) { DataRow lLang = lTable.NewRow(); lLang["Language"] = languages[i]; lTable.Rows.Add(lLang); } myDataSet.Tables.Add(lTable); // --- Handling the combobox mnuActionLanguage.ComboBox.DataSource = myDataSet.Tables["Lang"].DefaultView; mnuActionLanguage.ComboBox.DisplayMember = "Language"; ``` One would expect to see some values in the dropdown, but it's empty. Please tell me what I'm doing wrong ;( *EDIT: mnuActionLanguage.ComboBox.DataBind() is what I also found on the net, but it doesn't work in my case.* **SOLUTION** ``` mnuActionLanguage.ComboBox.BindingContext = this.BindingContext; ``` at the end solved the problem!
You need to set the binding context of the ToolStripComboBox.ComboBox. Here is a slightly modified version of the code that I have just recreated using Visual Studio. The menu item combo box is called toolStripComboBox1 in my case. Note the last line of code to set the binding context. I noticed that if the combo is in the visible are of the toolstrip, the binding works without this but not when it is in a drop-down. Do you get the same problem? If you can't get this working, drop me a line via my contact page and I will send you the project. You won't be able to load it using SharpDevelop but will with C# Express. ``` var languages = new string[2]; languages[0] = "English"; languages[1] = "German"; DataSet myDataSet = new DataSet(); // --- Preparation DataTable lTable = new DataTable("Lang"); DataColumn lName = new DataColumn("Language", typeof(string)); lTable.Columns.Add(lName); for (int i = 0; i < languages.Length; i++) { DataRow lLang = lTable.NewRow(); lLang["Language"] = languages[i]; lTable.Rows.Add(lLang); } myDataSet.Tables.Add(lTable); toolStripComboBox1.ComboBox.DataSource = myDataSet.Tables["Lang"].DefaultView; toolStripComboBox1.ComboBox.DisplayMember = "Language"; toolStripComboBox1.ComboBox.BindingContext = this.BindingContext; ```
``` string strConn = "Data Source=SEZSW08;Initial Catalog=Nidhi;Integrated Security=True"; SqlConnection Con = new SqlConnection(strConn); Con.Open(); string strCmd = "select companyName from companyinfo where CompanyName='" + cmbCompName.SelectedValue + "';"; SqlDataAdapter da = new SqlDataAdapter(strCmd, Con); DataSet ds = new DataSet(); Con.Close(); da.Fill(ds); cmbCompName.DataSource = ds; cmbCompName.DisplayMember = "CompanyName"; cmbCompName.ValueMember = "CompanyName"; //cmbCompName.DataBind(); cmbCompName.Enabled = true; ```
C# - Fill a combo box with a DataTable
[ "", "c#", ".net", "combobox", "datatable", "sharpdevelop", "" ]
I am porting some queries from Access to T-SQL and those who wrote the queries used the Avg aggregate function on datetime columns. This is not supported in T-SQL and I can understand why - it doesn't make sense. What is getting averaged? So I was about to start reverse engineering what Access does when it aggregates datetime using Avg, but thought I would throw the question out here first.
I'd imagine that Access is averaging the numeric representation of the dates. You could do similar in T-SQL with the following... ``` select AverageDate = cast(avg(cast(MyDateColumn as decimal(20, 10))) as datetime) from MyTable ```
I'm more familiar with non-MS DBMS, but... Since you cannot add two DATETIME values, you cannot ordinarily average them. However, you could do something similar to: ``` SELECT AVG(datetime_column - TIMESTAMP '2000-01-01 00:00:00.000000') + TIMESTAMP '2000-01-01 00:00:00.000000' FROM table_containing_datetime_column; ``` This calculates the average interval between the start of 2000 and the actual datetime values, and then adds that interval to the start of 2000. The choice of 'start of 2000' is arbitrary; as long as the datetime subtracted in the AVG() function is added back, you get a sensible answer. This does assume that the DBMS used supports SQL standard 'timestamp' notation, and supports the INTERVAL types appropriately. The difference between two DATETIME or TIMESTAMP values should be an INTERVAL (indeed, INTERVAL DAY(9) TO SECOND(6), to be moderately accurate, though the '9' is somewhat debatable). When appropriately mangled for the DBMS I work with, the expression 'works': ``` CREATE TEMP TABLE table_containing_datetime_column ( datetime_column DATETIME YEAR TO FRACTION(5) NOT NULL ); INSERT INTO table_containing_datetime_column VALUES('2008-11-19 12:12:12.00000'); INSERT INTO table_containing_datetime_column VALUES('2008-11-19 22:22:22.00000'); SELECT AVG(datetime_column - DATETIME(2000-01-01 00:00:00.00000) YEAR TO FRACTION(5)) + DATETIME(2000-01-01 00:00:00.00000) YEAR TO FRACTION(5) FROM table_containing_datetime_column; ``` Answer: ``` 2008-11-19 17:17:17.00000 ```
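The subtract-an-epoch-then-add-it-back idea above is easy to sanity-check outside SQL. Here is a small Python sketch (illustrative only; it uses the same two timestamps as the example, and the helper name is made up):

```python
from datetime import datetime, timedelta

def average_datetime(dts, epoch=datetime(2000, 1, 1)):
    """Average datetimes by averaging their offsets from a fixed epoch.

    The choice of epoch is arbitrary: as long as the same epoch that is
    subtracted inside the average is added back, the result is unchanged.
    """
    total = sum((dt - epoch for dt in dts), timedelta(0))
    return epoch + total / len(dts)

dts = [datetime(2008, 11, 19, 12, 12, 12), datetime(2008, 11, 19, 22, 22, 22)]
print(average_datetime(dts))  # 2008-11-19 17:17:17, the midpoint
```

The same answer as the interval-based SQL above, which suggests the approach is sound regardless of which DBMS dialect expresses it.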
Avg on datetime in Access
[ "", "sql", "sql-server", "t-sql", "ms-access", "" ]
My WPF application generates sets of data which may have a different number of columns each time. Included in the output is a description of each column that will be used to apply formatting. A simplified version of the output might be something like: ``` class Data { IList<ColumnDescription> ColumnDescriptions { get; set; } string[][] Rows { get; set; } } ``` This class is set as the DataContext on a WPF DataGrid but I actually create the columns programmatically: ``` for (int i = 0; i < data.ColumnDescriptions.Count; i++) { dataGrid.Columns.Add(new DataGridTextColumn { Header = data.ColumnDescriptions[i].Name, Binding = new Binding(string.Format("[{0}]", i)) }); } ``` Is there any way to replace this code with data bindings in the XAML file instead?
Here's a workaround for Binding Columns in the DataGrid. Since the Columns property is ReadOnly, like everyone noticed, I made an Attached Property called BindableColumns which updates the Columns in the DataGrid everytime the collection changes through the CollectionChanged event. If we have this Collection of DataGridColumn's ``` public ObservableCollection<DataGridColumn> ColumnCollection { get; private set; } ``` Then we can bind BindableColumns to the ColumnCollection like this ``` <DataGrid Name="dataGrid" local:DataGridColumnsBehavior.BindableColumns="{Binding ColumnCollection}" AutoGenerateColumns="False" ...> ``` The Attached Property BindableColumns ``` public class DataGridColumnsBehavior { public static readonly DependencyProperty BindableColumnsProperty = DependencyProperty.RegisterAttached("BindableColumns", typeof(ObservableCollection<DataGridColumn>), typeof(DataGridColumnsBehavior), new UIPropertyMetadata(null, BindableColumnsPropertyChanged)); private static void BindableColumnsPropertyChanged(DependencyObject source, DependencyPropertyChangedEventArgs e) { DataGrid dataGrid = source as DataGrid; ObservableCollection<DataGridColumn> columns = e.NewValue as ObservableCollection<DataGridColumn>; dataGrid.Columns.Clear(); if (columns == null) { return; } foreach (DataGridColumn column in columns) { dataGrid.Columns.Add(column); } columns.CollectionChanged += (sender, e2) => { NotifyCollectionChangedEventArgs ne = e2 as NotifyCollectionChangedEventArgs; if (ne.Action == NotifyCollectionChangedAction.Reset) { dataGrid.Columns.Clear(); foreach (DataGridColumn column in ne.NewItems) { dataGrid.Columns.Add(column); } } else if (ne.Action == NotifyCollectionChangedAction.Add) { foreach (DataGridColumn column in ne.NewItems) { dataGrid.Columns.Add(column); } } else if (ne.Action == NotifyCollectionChangedAction.Move) { dataGrid.Columns.Move(ne.OldStartingIndex, ne.NewStartingIndex); } else if (ne.Action == NotifyCollectionChangedAction.Remove) { foreach 
(DataGridColumn column in ne.OldItems) { dataGrid.Columns.Remove(column); } } else if (ne.Action == NotifyCollectionChangedAction.Replace) { dataGrid.Columns[ne.NewStartingIndex] = ne.NewItems[0] as DataGridColumn; } }; } public static void SetBindableColumns(DependencyObject element, ObservableCollection<DataGridColumn> value) { element.SetValue(BindableColumnsProperty, value); } public static ObservableCollection<DataGridColumn> GetBindableColumns(DependencyObject element) { return (ObservableCollection<DataGridColumn>)element.GetValue(BindableColumnsProperty); } } ```
I've continued my research and have not found any reasonable way to do this. The Columns property on the DataGrid isn't something I can bind against, in fact it's read only. Bryan suggested something might be done with AutoGenerateColumns so I had a look. It uses simple .Net reflection to look at the properties of the objects in ItemsSource and generates a column for each one. Perhaps I could generate a type on the fly with a property for each column but this is getting way off track. Since this problem is so easily sovled in code I will stick with a simple extension method I call whenever the data context is updated with new columns: ``` public static void GenerateColumns(this DataGrid dataGrid, IEnumerable<ColumnSchema> columns) { dataGrid.Columns.Clear(); int index = 0; foreach (var column in columns) { dataGrid.Columns.Add(new DataGridTextColumn { Header = column.Name, Binding = new Binding(string.Format("[{0}]", index++)) }); } } // E.g. myGrid.GenerateColumns(schema); ```
How do I bind a WPF DataGrid to a variable number of columns?
[ "", "c#", "wpf", "xaml", "data-binding", "datagrid", "" ]
Something that confuses me, but has never caused any problems... the recommended way to dispatch an event is as follows: ``` public event EventHandler SomeEvent; ... { .... if (SomeEvent != null) SomeEvent(this, EventArgs.Empty); } ``` In a multi-threaded environment, how does this code guarantee that another thread will not alter the invocation list of `SomeEvent` between the check for null and the invocation of the event?
In C# 6.0 you can use monadic Null-conditional operator `?.` to check for null and raise events in easy and thread-safe way. ``` SomeEvent?.Invoke(this, args); ``` It’s thread-safe because it evaluates the left-hand side only once, and keeps it in a temporary variable. You can read more [here](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/member-access-operators#thread-safe-delegate-invocation).
As you point out, where multiple threads can access `SomeEvent` simultaneously, one thread could check whether `SomeEvent`is null and determine that it isn't. Just after doing so, another thread could remove the last registered delegate from `SomeEvent`. When the first thread attempts to raise `SomeEvent`, an exception will be thrown. A reasonable way to avoid this scenario is: ``` protected virtual void OnSomeEvent(EventArgs args) { EventHandler ev = SomeEvent; if (ev != null) ev(this, args); } ``` This works because whenever a delegate is added to or removed from an event using the default implementations of the add and remove accessors, the Delegate.Combine and Delegate.Remove static methods are used. Each of these methods returns a new instance of a delegate, rather than modifying the one passed to it. In addition, assignment of an object reference in .NET is [atomic](https://stackoverflow.com/questions/14370575/why-are-atomic-operations-considered-thread-safe), and the default implementations of the add and remove event accessors are [synchronised](http://www.albahari.com/threading/part2.aspx). So the code above succeeds by first copying the multicast delegate from the event to a temporary variable. Any changes to SomeEvent after this point will not affect the copy you've made and stored. Thus you can now safely test whether any delegates were registered and subsequently invoke them. Note that this solution solves one race problem, namely that of an event handler being null when it's invoked. It doesn't handle the problem where an event handler is defunct when it's invoked, or an event handler subscribes after the copy is taken. For example, if an event handler depends on state that's destroyed as soon as the handler is un-subscribed, then this solution might invoke code that cannot run properly. See [Eric Lippert's excellent blog entry](http://blogs.msdn.com/ericlippert/archive/2009/04/29/events-and-races.aspx) for more details. 
Also, see [this StackOverflow question and answers](https://stackoverflow.com/questions/786383/c-events-and-thread-safety). EDIT: If you're using C# 6.0, then [Krzysztof's answer](https://stackoverflow.com/a/32421409/13118) looks like a good way to go.
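The snapshot-then-invoke idiom described above is not C#-specific. A rough Python analogue (a hypothetical `Event` class, for illustration only) shows why copying the handler reference before invoking avoids the check-then-invoke race:

```python
import threading

class Event:
    """Minimal multicast 'event': snapshot the handler list before invoking."""

    def __init__(self):
        self._handlers = []
        self._lock = threading.Lock()

    def subscribe(self, handler):
        with self._lock:
            # Build a new list rather than mutating in place, mirroring
            # Delegate.Combine returning a new delegate instance.
            self._handlers = self._handlers + [handler]

    def unsubscribe(self, handler):
        with self._lock:
            handlers = list(self._handlers)
            handlers.remove(handler)
            self._handlers = handlers

    def fire(self, *args):
        handlers = self._handlers  # local snapshot, like 'EventHandler ev = SomeEvent;'
        for h in handlers:         # safe even if another thread unsubscribes now
            h(*args)

calls = []
ev = Event()
ev.subscribe(calls.append)
ev.fire("hello")
print(calls)  # ['hello']
```

As in the C# case, this only solves the null/empty race; a handler can still be invoked after it has logically unsubscribed.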
Checking for null before event dispatching... thread safe?
[ "", "c#", "multithreading", "events", "" ]
I have to build an HTML table that shows data for users versus pages visited. It seems clunky to use for and/or foreach loops, but I can't think of anything better. I'm using PHP, but I would assume that this is language agnostic.
Avoiding loops is probably not possible: even if not in your implementation, looping will still happen at machine level. However, if you want to try to stay 'pure' without nasty code, you can at least do: ``` $tableformat = '<table><thead>%s</thead><tbody>%s</tbody></table>'; $rowformat = '<tr>%s</tr>'; $cellformat = '<td>%s</td>'; $hdata = ''; foreach( $data[0] as $cellname => $cellvalue ) { $hdata .= sprintf( $cellformat, $cellname ); } $hdata = sprintf( $rowformat, $hdata ); $rdata = ""; foreach( $data as $rownum => $rowvalue ) { $row = ""; foreach( $rowvalue as $colname => $colvalue ) { $row .= sprintf( $cellformat, $colvalue ); } $rdata .= sprintf($rowformat,$row); } return sprintf( $tableformat, $hdata, $rdata ); ``` At least that way it might be somewhat maintainable, and you don't have much worry about incomplete strings. You could also substitute some of that code with ``` $hdata = "<tr><td>" . implode( "</td><td>", array_keys( $data[0] )) . "</td></tr>"; $rdata .= "<tr><td>" . implode( "</td><td>", $rowvalue ) . "</td></tr>"; ``` which, while rather concise, will possibly get you into a bit of hot water eventually and will need recoding (and implode internally runs a loop, so it's a lose-lose). If you don't mind the overhead of extra function calls (and still running loops) this could make for more concise code without too many negatives: ``` $tableformat = '<table><thead>%s</thead><tbody>%s</tbody></table>'; $rowformat = '<tr>%s</tr>'; $cellformat = '<td>%s</td>'; function tr( $cells ) { global $rowformat, $cellformat; // needed: PHP functions don't see outer-scope variables $o = ""; foreach( $cells as $i => $v ) { $o .= sprintf( $cellformat, $v ); } return sprintf( $rowformat, $o ); } return sprintf( $tableformat, tr( array_keys($data[0])), implode("", array_map( 'tr', $data )) ); ``` But beware, that's hard to read, and if you keep that up, one day you'll wake up with Lisp.
Well, if you have multiple rows, with data formatted similarly in each row, I can't think of a way to create a table that avoids using a loop. That type of thing is basically what loops were invented for. If you gave us more details on the table you want to build, maybe we can formulate a better method.
Are loops the best way to build a table?
[ "", "php", "html", "loops", "html-table", "" ]
I have a page P1 loading from site S1 which contains an iframe. That iframe loads a page P2 from another site S2. At some point P2 would like to close the browser window, which contains P1 loaded from S1. Of course, since P2 is loaded from another site, it can't just do parent.close(). I have full control over P1 and P2, so I can add JavaScript code to both P1 and P2 as needed. Suggestions on how to resolve this?
It's impossible, I am afraid. JavaScript from an iframe that is loaded to a different site then the one it is being rendered on is strictly prohibited due to security issues. However, if the iframe is pointed to the same site you can get to it like: ``` <iframe name = "frame1" src = "http://yoursite"> </iframe> <script type = "text/javascript"> alert(window.frames["frame1"].document); </script> ```
If they originated from the same domain, you can modify the security restrictions to allow modification between subdomains. Set document.domain = "domain.com"; on both pages, and they are allowed to modify each other. It might work to just set them to a bogus domain, or just simply ".com" or something; I haven't tried that.
JavaScript: closing window from iframe
[ "", "javascript", "iframe", "" ]
So let's say I'm using Python 2.5's built-in default sqlite3 and I have a Django model class with the following code: ``` class SomeEntity(models.Model): some_field = models.CharField(max_length=50, db_index=True, unique=True) ``` I've got the admin interface setup and everything appears to be working fine except that I can create two SomeEntity records, one with some\_field='some value' and one with some\_field='Some Value' because the unique constraint on some\_field appears to be case sensitive. Is there some way to force sqlite to perform a case *in*sensitive comparison when checking for uniqueness? I can't seem to find an option for this in Django's docs and I'm wondering if there's something that I can do directly to sqlite to get it to behave the way I want. :-)
Yes, this can easily be done by adding a unique index to the table with the following command: CREATE UNIQUE INDEX uidxName ON mytable (myfield COLLATE NOCASE) If you need case insensitivity for non-ASCII letters, you will need to register your own collation, with commands similar to the following example, which shows a custom collation that sorts “the wrong way”: ``` import sqlite3 def collate_reverse(string1, string2): return -cmp(string1, string2) con = sqlite3.connect(":memory:") con.create_collation("reverse", collate_reverse) cur = con.cursor() cur.execute("create table test(x)") cur.executemany("insert into test(x) values (?)", [("a",), ("b",)]) cur.execute("select x from test order by x collate reverse") for row in cur: print row con.close() ``` Additional Python documentation for sqlite3 is shown [here](http://docs.python.org/library/sqlite3.html)
Perhaps you can create and use a custom model field; it would be a subclass of CharField but providing a [db\_type](http://docs.djangoproject.com/en/dev/howto/custom-model-fields/#db_type) method returning "text collate nocase"
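Whichever route you take (a custom collation or a custom field's db_type), the underlying SQLite behaviour can be verified with the standard library alone. A minimal sketch (table and values are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE some_entity (some_field TEXT COLLATE NOCASE UNIQUE)")
con.execute("INSERT INTO some_entity VALUES ('some value')")
try:
    # Differs only in case, so the NOCASE unique constraint rejects it.
    con.execute("INSERT INTO some_entity VALUES ('Some Value')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # False: the case-insensitive constraint fired
con.close()
```

This is the database-level effect that the "text collate nocase" column type would produce; it applies only to ASCII letters unless a custom collation is registered, as noted above.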
Can you achieve a case insensitive 'unique' constraint in Sqlite3 (with Django)?
[ "", "python", "django", "sqlite", "django-models", "" ]
I need to warn users about unsaved changes before they leave a page (a pretty common problem). ``` window.onbeforeunload = handler ``` This works but it raises a default dialog with an irritating standard message that wraps my own text. I need to either completely replace the standard message, so my text is clear, or (even better) replace the entire dialog with a modal dialog using jQuery. So far I have failed and I haven't found anyone else who seems to have an answer. Is it even possible? Javascript in my page: ``` <script type="text/javascript"> window.onbeforeunload = closeIt; </script> ``` The closeIt() function: ``` function closeIt() { if (changes == "true" || files == "true") { return "Here you can append a custom message to the default dialog."; } } ``` Using jQuery and jqModal I have tried this kind of thing (using a custom confirm dialog): ``` $(window).beforeunload(function () { confirm('new message: ' + this.href + ' !', this.href); return false; }); ``` which also doesn't work - I cannot seem to bind to the `beforeunload` event.
You can't modify the default dialogue for `onbeforeunload`, so your best bet may be to work with it. ``` window.onbeforeunload = function() { return 'You have unsaved changes!'; } ``` [Here's a reference](http://msdn.microsoft.com/en-us/library/ms536907(VS.85).aspx) to this from Microsoft: > When a string is assigned to the returnValue property of window.event, a dialog box appears that gives users the option to stay on the current page and retain the string that was assigned to it. The default statement that appears in the dialog box, "Are you sure you want to navigate away from this page? ... Press OK to continue, or Cancel to stay on the current page.", cannot be removed or altered. The problem seems to be: 1. When `onbeforeunload` is called, it will take the return value of the handler as `window.event.returnValue`. 2. It will then parse the return value as a string (unless it is null). 3. Since `false` is parsed as a string, the dialogue box will fire, which will then pass an appropriate `true`/`false`. The result is, there doesn't seem to be a way of assigning `false` to `onbeforeunload` to prevent it from the default dialogue. Additional notes on jQuery: * Setting the event in jQuery **may** be problematic, as that allows other `onbeforeunload` events to occur as well. If you wish only for your unload event to occur I'd stick to plain ol' JavaScript for it. * jQuery doesn't have a shortcut for `onbeforeunload` so you'd have to use the generic `bind` syntax. ``` $(window).bind('beforeunload', function() {} ); ``` **Edit 09/04/2018**: custom messages in onbeforeunload dialogs are deprecated since chrome-51 (cf: [release note](https://developers.google.com/web/updates/2016/04/chrome-51-deprecations#remove_custom_messages_in_onbeforeunload_dialogs))
What worked for me, using [jQuery](https://jquery.com/) and tested in IE8, Chrome and Firefox, is: ``` $(window).bind("beforeunload",function(event) { if(hasChanged) return "You have unsaved changes"; }); ``` It is important not to return anything if no prompt is required as there are differences between IE and other browser behaviours here.
How can I override the OnBeforeUnload dialog and replace it with my own?
[ "", "javascript", "jquery", "onbeforeunload", "" ]
Is there a 4 byte unsigned int data type in MS SQL Server? Am I forced to use a bigint?
It doesn't seem so. Here's an article describing how to create your own rules restricting an `int` to positive values. But that doesn't grant you positive values above `2^31-1`. <http://www.julian-kuiters.id.au/article.php/sqlserver2005-unsigned-integer>
Can you just add/subtract 2,147,483,648 (2^31) to the regular int? (Subtract on the way in, and add coming out.) I know it sounds silly, but if you declare a custom datatype that does this, it's integer arithmetic and very fast. It just won't be readable directly from the table.
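The offset trick reads more clearly in code. A language-neutral sketch in Python (the helper names are invented for illustration; the arithmetic is what a custom datatype would do):

```python
OFFSET = 2 ** 31  # 2,147,483,648

def to_storage(unsigned_value):
    """Map 0..2**32-1 onto the signed 32-bit range for storage."""
    if not 0 <= unsigned_value < 2 ** 32:
        raise ValueError("value out of unsigned 32-bit range")
    return unsigned_value - OFFSET  # subtract on the way in

def from_storage(signed_value):
    return signed_value + OFFSET    # add coming out

print(to_storage(0))            # -2147483648, the smallest signed int
print(to_storage(4294967295))   # 2147483647, the largest signed int
print(from_storage(to_storage(3000000000)))  # round-trips: 3000000000
```

Every unsigned 32-bit value maps to exactly one signed 32-bit value and back, so nothing is lost; only raw reads of the table become non-obvious.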
4 byte unsigned int in SQL Server?
[ "", "sql", "sql-server-2005", "types", "" ]
We've got a WinForms app written in C# that has a very custom GUI. The user is not allowed to run any other applications and the user cannot go into the OS (WinXP Pro) at all. We're planning on allowing the user to connect to available wireless networks. We're going to have to create a configuration screen that displays available networks (by SSID) and allows the user to connect. While connected we want to display signal strength. Are there any existing components that provide this capability? I haven't found anything but [this](http://sysnet.ucsd.edu/pawn/wrapi/). I can set the TCP/IP settings using WMI, but it's the wireless stuff that I need a direction on. Many thanks! Matt
[Managed Wifi API](http://www.codeplex.com/managedwifi) should work. This might not be ideal - you have XP, which is good, but you would have to deploy a hotfix. I'd go for it, because all the wifi code I've dealt with (for the Compact Framework) is hideous. This code is as simple as could be. Their sample code doesn't include reading the signal strength, though, and I'm not sure if the Native wifi API provides that. I have written C# code that gets the wireless signal strength, but it did this by PInvoking into a manufacturer-specific DLL available only on their devices. It may be that you'll have to do something similar to get the wireless strength from your PC's wireless card (and that may be why that functionality is not available in an all-purpose API).
It is possible to connect to available wireless networks using Native Wifi. <http://www.codeproject.com/KB/gadgets/SignalStrenghth.aspx> Check the link; the project it describes was developed by me.
Managing wireless network connection in C#
[ "", "c#", ".net", "networking", "wireless", "" ]
I need to know how to return a default row if no rows exist in a table. What would be the best way to do this? I'm only returning a single column from this particular table to get its value. Edit: This would be SQL Server.
One approach for Oracle: ``` SELECT val FROM myTable UNION ALL SELECT 'DEFAULT' FROM dual WHERE NOT EXISTS (SELECT * FROM myTable) ``` Or alternatively in Oracle: ``` SELECT NVL(MIN(val), 'DEFAULT') FROM myTable ``` Or alternatively in SqlServer: ``` SELECT ISNULL(MIN(val), 'DEFAULT') FROM myTable ``` These use the fact that `MIN()` returns `NULL` when there are no rows.
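The MIN()-returns-NULL behaviour these variants rely on is easy to demonstrate with SQLite from Python's standard library (COALESCE plays the role of NVL/ISNULL here; the table name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (val TEXT)")

# MIN() over zero rows yields NULL, which COALESCE replaces.
(result,) = con.execute(
    "SELECT COALESCE(MIN(val), 'DEFAULT') FROM my_table"
).fetchone()
print(result)  # DEFAULT

con.execute("INSERT INTO my_table VALUES ('actual')")
(result2,) = con.execute(
    "SELECT COALESCE(MIN(val), 'DEFAULT') FROM my_table"
).fetchone()
print(result2)  # actual
```

The same query shape gives the real value when a row exists and the default when none does, which is exactly the single-column case the question asks about.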
If your base query is expected to return only one row, then you could use this trick: ``` select NVL( MIN(rate), 0 ) AS rate from d_payment_index where fy = 2007 and payment_year = 2008 and program_id = 18 ``` (Oracle code, not sure if NVL is the right function for SQL Server.)
How to set a default row for a query that returns no rows?
[ "", "sql", "sql-server", "" ]
We have had issues with Mootools not being very backward compatible specifically in the area of drag and drop functionality. I was wondering if anyone has had any similar problems with jQuery not being backward compatible. We are starting to use it quite heavily and are thinking about upgrading to a newer version to start using several plugins that require it. Will we have any issues if we get rid of the older version?
jQuery seems to be nicely backward compatible. I have been using it for more than a couple of years now through several versions of the core and have not had issues when upgrading except a few minor ones with some plugins. I would say that the core seems to be fine but if you're using a lot of plugins you might run into some problems (but these are usually easy to fix, or the new core has that functionality built in anyway so you can just drop them).
jQuery is so serious about backwards compatibility that they produce a "backwards compatibility" plugin for each release: <http://docs.jquery.com/Release:jQuery_1.2#jQuery_1.1_Compatibility_Plugin>. It lets people who don't need backwards compatibility save on page weight.
How well does jQuery support backward compatibility?
[ "", "javascript", "jquery", "jquery-plugins", "mootools", "backwards-compatibility", "" ]
At the moment I'm creating a `DateTime` for each month and formatting it to only include the month. Is there another or any better way to do this?
You can use the [`DateTimeFormatInfo`](http://msdn.microsoft.com/en-us/library/system.globalization.datetimeformatinfo.aspx) to get that information: ``` // Will return January string name = DateTimeFormatInfo.CurrentInfo.GetMonthName(1); ``` or to get all names: ``` string[] names = DateTimeFormatInfo.CurrentInfo.MonthNames; ``` You can also instantiate a new `DateTimeFormatInfo` based on a `CultureInfo` with [`DateTimeFormatInfo.GetInstance`](http://msdn.microsoft.com/en-us/library/system.globalization.datetimeformatinfo.getinstance.aspx) or you can use the current culture's [`CultureInfo.DateTimeFormat`](http://msdn.microsoft.com/en-us/library/system.globalization.cultureinfo.datetimeformat.aspx) property: ``` var dateFormatInfo = CultureInfo.GetCultureInfo("en-GB").DateTimeFormat; ``` Keep in mind that calendars in .Net support up to 13 months, thus you will get an extra empty string at the end for calendars with only 12 months (such as those found in en-US or fr for example).
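As a point of comparison outside .NET, Python's standard library exposes the same "month number to name" mapping, including a placeholder slot so month numbers index directly (locale-dependent; the comments below assume the default English/C locale):

```python
import calendar

# calendar.month_name behaves like a 13-element sequence: index 0 is an
# empty string so that month numbers 1..12 index directly into it.
names = list(calendar.month_name)
print(len(names))          # 13
print(repr(names[0]))      # '' - the placeholder slot
print(names[1], names[12]) # January December (under an English locale)
```

The 13-slot shape is a nice parallel to the .NET note above about calendars supporting up to 13 months, where MonthNames also yields a trailing empty string for 12-month calendars.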
This method will give you a list of key/value pairs mapping month numbers to their names, suitable for binding to a control. We generate it in a single line using Enumerable.Range and LINQ. Hooray, LINQ code-golfing! ``` var months = Enumerable.Range(1, 12).Select(i => new { I = i, M = DateTimeFormatInfo.CurrentInfo.GetMonthName(i) }); ``` To apply it to an ASP dropdown list: ``` // <asp:DropDownList runat="server" ID="ddlMonths" /> ddlMonths.DataSource = months; ddlMonths.DataTextField = "M"; ddlMonths.DataValueField = "I"; ddlMonths.DataBind(); ```
How to list all month names, e.g. for a combo?
[ "", "c#", "asp.net", ".net", "datetime", "" ]
I understand that BigDecimal is recommended best practice for representing monetary values in Java. What do you use? Is there a better library that you prefer to use instead?
`BigDecimal` all the way. I've heard of some folks creating their own `Cash` or `Money` classes which encapsulate a cash value with the currency, but under the skin it's still a `BigDecimal`, probably with [`BigDecimal.ROUND_HALF_EVEN`](http://java.sun.com/j2se/1.5.0/docs/api/java/math/BigDecimal.html#ROUND_HALF_EVEN) rounding. **Edit:** As Don mentions in [his answer](https://stackoverflow.com/questions/285680/representing-monetary-values-in-java#286228), there are open sourced projects like [timeandmoney](http://timeandmoney.sourceforge.net/), and whilst I applaud them for trying to prevent developers from having to reinvent the wheel, I just don't have enough confidence in a pre-alpha library to use it in a production environment. Besides, if you dig around under the hood, you'll see [they use `BigDecimal` too](http://timeandmoney.svn.sourceforge.net/viewvc/timeandmoney/timeandmoney/trunk/src/main/java/com/domainlanguage/money/Money.java?view=markup).
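For readers outside Java, the same half-even ("banker's rounding") discipline exists in Python's stdlib decimal module. A small sketch (the helper name is made up; note the values are constructed from strings, never floats, to avoid binary-float noise):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def to_cents(amount):
    """Round a monetary amount to two places using banker's rounding."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

# Half-even rounds exact ties toward the even neighbouring digit,
# which avoids a systematic upward bias over many transactions:
print(to_cents("2.125"))  # 2.12 - tie goes to the even digit
print(to_cents("2.135"))  # 2.14
print(to_cents("1"))      # 1.00
```

This mirrors the BigDecimal.ROUND_HALF_EVEN convention mentioned above, and the same caveat applies: a real Money type would also carry the currency alongside the amount.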
It can be useful to people arriving here by search engines to know about JodaMoney: <http://www.joda.org/joda-money/>.
Representing Monetary Values in Java
[ "", "java", "currency", "bigdecimal", "" ]
Question: Is exception handling in Java actually slow? Conventional wisdom, as well as a lot of Google results, says that exceptional logic shouldn't be used for normal program flow in Java. Two reasons are usually given, 1. it is really slow - even an order of magnitude slower than regular code (the reasons given vary), and 2. it is messy because people expect only errors to be handled in exceptional code. This question is about #1. As an example, [this page](http://leepoint.net/notes-java/flow/exceptions/03exceptions.html) describes Java exception handling as "very slow" and relates the slowness to the creation of the exception message string - "this string is then used in creating the exception object that is thrown. This is not fast." The article [Effective Exception Handling in Java](http://adtmag.com/articles/2000/08/22/effective-exception-handling-in-java.aspx) says that "the reason for this is due to the object creation aspect of exception handling, which thereby makes throwing exceptions inherently slow". Another reason out there is that the stack trace generation is what slows it down. My testing (using Java 1.6.0\_07, Java HotSpot 10.0, on 32 bit Linux), indicates that exception handling is no slower than regular code. I tried running a method in a loop that executes some code. At the end of the method, I use a boolean to indicate whether to *return* or *throw*. This way the actual processing is the same. I tried running the methods in different orders and averaging my test times, thinking it may have been the JVM warming up. In all my tests, the throw was at least as fast as the return, if not faster (up to 3.1% faster). I am completely open to the possibility that my tests were wrong, but I haven't seen anything out there in the way of the code sample, test comparisons, or results in the last year or two that show exception handling in Java to actually be slow. 
What led me down this path was an API I needed to use that threw exceptions as part of normal control logic. I wanted to correct them in their usage, but now I may not be able to. Will I instead have to praise them for their forward thinking? In the paper [Efficient Java exception handling in just-in-time compilation](http://portal.acm.org/citation.cfm?id=337453), the authors suggest that the presence of exception handlers alone, even if no exceptions are thrown, is enough to prevent the JIT compiler from optimizing the code properly, thus slowing it down. I haven't tested this theory yet.
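The benchmarking methodology is easy to replicate in other runtimes. A rough Python sketch of the same return-vs-throw comparison (the absolute numbers vary by machine and say nothing about the JVM; this only illustrates the harness shape):

```python
import timeit

def via_return(i):
    return i + 1

def via_raise(i):
    # Same computation, but delivered by raising instead of returning.
    raise ValueError(i + 1)

def call_return():
    return via_return(1)

def call_raise():
    try:
        via_raise(1)
    except ValueError as e:
        return e.args[0]

t_return = timeit.timeit(call_return, number=100_000)
t_raise = timeit.timeit(call_raise, number=100_000)
print(f"return: {t_return:.4f}s  raise: {t_raise:.4f}s")
```

As with the Java tests in the question, the work done on both paths must be identical so that only the return-vs-throw mechanism is being timed.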
It depends how exceptions are implemented. The simplest way is using setjmp and longjmp. That means all registers of the CPU are written to the stack (which already takes some time) and possibly some other data needs to be created... all this already happens in the try statement. The throw statement needs to unwind the stack and restore the values of all registers (and possibly other values in the VM). So try and throw are equally slow, and that is pretty slow; however, if no exception is thrown, exiting the try block takes no time whatsoever in most cases (as everything is put on the stack, which cleans up automatically if the method exits).

Sun and others recognized that this is possibly suboptimal, and of course VMs get faster and faster over time. There is another way to implement exceptions, which makes try itself lightning fast (actually nothing happens for try at all in general - everything that needs to happen is already done when the class is loaded by the VM) and it makes throw not quite as slow. I don't know which JVMs use this newer, better technique...

...but are you writing in Java so your code later on only runs on one JVM on one specific system? Since if it may ever run on any other platform or any other JVM version (possibly of any other vendor), who says they also use the fast implementation? The fast one is more complicated than the slow one and not easily possible on all systems. You want to stay portable? Then don't rely on exceptions being fast.

It also makes a big difference what you do within a try block. If you open a try block and never call any method from within it, the try block will be ultra fast, as the JIT can then actually treat a throw like a simple goto. It neither needs to save stack state nor does it need to unwind the stack if an exception is thrown (it only needs to jump to the catch handlers). However, this is not what you usually do.
Usually you open a try block and then call a method that might throw an exception, right? And even if you just use the try block within your method, what kind of method will this be that does not call any other method? Will it just calculate a number? Then why do you need exceptions? There are much more elegant ways to regulate program flow. For pretty much anything else but simple math, you will have to call an external method, and this already destroys the advantage of a local try block.

See the following test code:

```
public class Test {
    int value;

    public int getValue() {
        return value;
    }

    public void reset() {
        value = 0;
    }

    // Calculates without exception
    public void method1(int i) {
        value = ((value + i) / i) << 1;
        // Will never be true
        if ((i & 0xFFFFFFF) == 1000000000) {
            System.out.println("You'll never see this!");
        }
    }

    // Could in theory throw one, but never will
    public void method2(int i) throws Exception {
        value = ((value + i) / i) << 1;
        // Will never be true
        if ((i & 0xFFFFFFF) == 1000000000) {
            throw new Exception();
        }
    }

    // This one will regularly throw one
    public void method3(int i) throws Exception {
        value = ((value + i) / i) << 1;
        // i & 1 is equally fast to calculate as i & 0xFFFFFFF; it is both
        // an AND operation between two integers. The size of the number plays
        // no role.
// AND on 32 bit always ANDs all 32 bits
        if ((i & 0x1) == 1) {
            throw new Exception();
        }
    }

    public static void main(String[] args) {
        int i;
        long l;
        Test t = new Test();

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            t.method1(i);
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method1 took " + l + " ms, result was " + t.getValue()
        );

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            try {
                t.method2(i);
            } catch (Exception e) {
                System.out.println("You'll never see this!");
            }
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method2 took " + l + " ms, result was " + t.getValue()
        );

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            try {
                t.method3(i);
            } catch (Exception e) {
                // Do nothing here, as we will get here
            }
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method3 took " + l + " ms, result was " + t.getValue()
        );
    }
}
```

Result:

```
method1 took 972 ms, result was 2
method2 took 1003 ms, result was 2
method3 took 66716 ms, result was 2
```

The slowdown from the try block is too small to rule out confounding factors such as background processes. But the catch block killed everything and made it 66 times slower!

As I said, the result will not be that bad if you put try/catch and throw all within the same method (method3), but this is a special JIT optimization I would not rely upon. And even when using this optimization, the throw is still pretty slow. So I don't know what you are trying to do here, but there is definitely a better way of doing it than using try/catch/throw.
FYI, I extended the experiment that Mecki did: ``` method1 took 1733 ms, result was 2 method2 took 1248 ms, result was 2 method3 took 83997 ms, result was 2 method4 took 1692 ms, result was 2 method5 took 60946 ms, result was 2 method6 took 25746 ms, result was 2 ``` The first 3 are the same as Mecki's (my laptop is obviously slower). method4 is identical to method3 except that it creates a `new Integer(1)` rather than doing `throw new Exception()`. method5 is like method3 except that it creates the `new Exception()` without throwing it. method6 is like method3 except that it throws a pre-created exception (an instance variable) rather than creating a new one. In Java much of the expense of throwing an exception is the time spent gathering the stack trace, which occurs when the exception object is created. The actual cost of throwing the exception, while large, is considerably less than the cost of creating the exception.
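Neither answer spells it out, but the measurements above suggest a middle ground when an API really must throw for control flow: skip the stack-trace capture, which is where most of the cost goes. A hedged sketch (my own illustration, not from the thread; `CheapException` and the parsing example are invented names):

```java
// Sketch: a throwable that skips the expensive stack walk by overriding
// fillInStackTrace(). Its getStackTrace() will be empty, so this only makes
// sense for exceptions used as flow control, never for real errors.
class CheapException extends Exception {
    CheapException(String message) {
        super(message);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        // Do not capture the stack; just return this instance.
        return this;
    }
}

class CheapExceptionDemo {
    // Parses s, throwing the cheap exception on bad input.
    static int parseOrThrow(String s) throws CheapException {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            throw new CheapException("not a number: " + s);
        }
    }

    // Sums the parseable entries, counting failures via exceptions.
    static int sumParseable(String[] items) {
        int sum = 0;
        for (String s : items) {
            try {
                sum += parseOrThrow(s);
            } catch (CheapException e) {
                // expected for bad entries; no stack trace was captured
            }
        }
        return sum;
    }
}
```

The trade-off is losing the diagnostic trace entirely, so reserve this for hot paths where the exception is part of the protocol rather than a genuine error report.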
What are the effects of exceptions on performance in Java?
[ "", "java", "performance", "exception", "" ]
Here is a nice underhand lob pitch to you guys. So basically I've got my content table with unique primary key IDs and I've got my tag table with unique primary key IDs. I've got a table that has an identity column as a primary key, but the two other columns are the contentID and tagID. What do I need to do to the table to make sure that each contentID and tagID combination appears only once?
There's no reason from a data modeling perspective that you need an identity column in the mapping table. The primary key constraint should be a two-column constraint over `contentID`, `tagID`. Some frameworks (e.g. Rails) demand that *every* table have a surrogate key named `id`, even if it makes no sense for the entity relationship. This is a pity, but apparently this convention gives them some efficiency somewhere else. You can always create a `UNIQUE` constraint over `contentID`, `tagID` in addition to the table's primary key on the surrogate key column.
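A sketch of the DDL this describes (table and column names follow the question's description; the referenced `content` and `tag` tables and their `id` columns are assumptions):

```sql
CREATE TABLE content_tag (
    contentID INT NOT NULL,
    tagID     INT NOT NULL,
    PRIMARY KEY (contentID, tagID),  -- the two-column key described above
    FOREIGN KEY (contentID) REFERENCES content (id),
    FOREIGN KEY (tagID)     REFERENCES tag (id)
);

-- If the surrogate `id` column must stay, keep it as the primary key and
-- enforce uniqueness of the pair separately:
-- ALTER TABLE content_tag
--     ADD CONSTRAINT uq_content_tag UNIQUE (contentID, tagID);
```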
There is one reason to have a numeric (surrogate) key on a many-to-many join table: when there are other child tables that need foreign keys back to the records in this table. For example, say you have a table PetStores with StoreId, a table Breeds with BreedId, and a many-to-many join table StoreBreeds that has a row for each store/breed combination. Now say you want to track the price and store discount for each breed at each store as it has changed over time, so you need another table to record the price of each breed at each store, with valid start and end dates reflecting the range of dates when the price was in effect for that breed at that store. If you only had a meaningful composite key on the many-to-many join table, then the FK in the child table would have to be composite as well, based on StoreId and BreedId. For performance, adding a non-meaningful integral surrogate key to the join table allows you to use that as the FK in derived child tables instead, and thereby increase the performance of joins to retrieve those child records. In a simple case, this ***may*** not be that significant, but in more complex scenarios, where the composite key consists of 4 or more columns, the impact can be substantial.
SQL many-to-many relation table, but unique entries
[ "", "sql", "" ]
I want to debug an application in Linux. The application is created in C++. The GUI is created using QT. The GUI is linked with a static library that can be treated as the back end of the application. I want to debug the static library but am not sure how to do that. I tried using gdb ``` gdb GUI ``` But how can I attach the library? Has anyone had experience in debugging libraries in linux?
gdb will automatically debug functions in the library when they are called. Just call it like

```
gdb ./foo
run
```

:) Be sure you build foo with debugging flags (`-g3` will enable all the debugging stuff for gcc). You should not optimize when debugging (pass at most `-O1` to gcc; do not optimize further), as optimization can confuse the debugger.
If you want to debug the library code itself, you'll need to build the library with the `-g` compiler flag (as well as building the executable with `-g` as [litb pointed out](https://stackoverflow.com/questions/320124/debugging-application-in-linux#320136)). Otherwise gdb will step through your code fine but will throw up its hands each time you make a library call.
Debugging an application in Linux
[ "", "c++", "linux", "debugging", "gdb", "" ]
Example C API signature: `void Func(unsigned char* bytes);` In C, when I want to pass a pointer to an array, I can do: ``` unsigned char* bytes = new unsigned char[1000]; Func(bytes); // call ``` How do I translate the above API to P/Invoke such that I can pass a pointer to C# byte array?
The easiest way to pass an array of bytes is to declare the parameter in your import statement as a byte array.

```
[DllImport("mylib.dll", EntryPoint = "Func", CharSet = CharSet.Auto, SetLastError = true)]
public static extern void Func(byte[] bytes);

byte[] ar = new byte[1000];
Func(ar);
```

You should also be able to declare the parameter as an IntPtr and marshal the data manually.

```
[DllImport("mylib.dll", EntryPoint = "Func", CharSet = CharSet.Auto, SetLastError = true)]
public static extern void Func(IntPtr p);

byte[] ar = new byte[1000];

IntPtr p = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(byte)) * ar.Length);
Marshal.Copy(ar, 0, p, ar.Length);

Func(p);

Marshal.FreeHGlobal(p);
```
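For reference, a sketch of what the native side of such an import might look like. The fill pattern and the 1000-byte contract are invented for illustration; the real API defines its own behavior, and on Windows the function would additionally be exported from a DLL (e.g. with `__declspec(dllexport)`):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical native implementation of void Func(unsigned char* bytes):
   writes a simple repeating pattern into the first 1000 bytes of the
   caller-supplied buffer. */
void Func(unsigned char* bytes)
{
    for (size_t i = 0; i < 1000; i++) {
        bytes[i] = (unsigned char)(i & 0xFF);
    }
}
```

Because the managed `byte[]` is pinned and passed as a pointer for the duration of the call, anything the native code writes into the buffer is visible in the C# array after `Func(ar)` returns.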
You can use unsafe code:

```
unsafe
{
    fixed (byte* pByte = byteArray)
    {
        IntPtr intPtr = new IntPtr((void*)pByte);
        Func(intPtr);
    }
}
```

(Note the braces: `intPtr` is only valid while the array is pinned by `fixed`.)

If you need to use safe code, you can use a few tricks:

```
IntPtr intPtr = Marshal.AllocHGlobal(byteArray.Length);
Marshal.Copy(byteArray, 0, intPtr, byteArray.Length);
Func(intPtr);
Marshal.FreeHGlobal(intPtr);
```

However, the safe code is going to be slow IMHO.
How can I pass a pointer to an array using p/invoke in C#?
[ "", "c#", "c", "api", "pinvoke", "" ]
I have a series of Eclipse projects containing a number of plugins and features that are checked into CVS. I now need to run an automated build of these plugins. Ideally I'd like to do it without having to hardcode large numbers of Eclipse library locations by hand, which has been the problem with the automatically generated Ant files that Eclipse provides. The build also needs to run headlessly. Does anyone have experience of this sort of set-up with Eclipse, and recommendations for how to achieve it?
There are a few options for you to look at, depending on which build scripting language you're using:

* For [Maven2](http://books.sonatype.com/maven-book/reference/multimodule.html), the way forward seems to be Spring Dynamic Modules. Other options are [Pax Construct](http://www.ops4j.org/projects/pax/maven/), [m2eclipse](http://m2eclipse.codehaus.org/), [Maven BND](http://felix.apache.org/site/maven-bundle-plugin-bnd.html)
* For Ant/Gant, [Eclipse PDE Build](http://wiki.eclipse.org/index.php/PDEBuild), [Ant4Eclipse](http://ant4eclipse.sourceforge.net/)
* For command line or both the above, [Buckminster](http://www.eclipse.org/buckminster/).

At my current clients we use Buckminster, which wraps PDE Build, and call it from Ant/CruiseControl. We've got code coming in from multiple repositories all being built into a single RCP product.

Also, [these](https://stackoverflow.com/questions/27818/how-can-i-manage-osgi-build-dependencies) [questions](https://stackoverflow.com/questions/121244/using-custombuildcallbacksxml-in-an-eclipse-rcp-headless-build) may be of help.
The standard way to make an Eclipse build is to use the PDE Build plugin.

<http://help.eclipse.org/help32/index.jsp?topic=/org.eclipse.pde.doc.user/guide/tasks/pde_feature_build.htm>

<http://wiki.eclipse.org/index.php/PDEBuild>

The PDE Build plugin is normally included with the Eclipse IDE and contains a series of templates. The templates help you set up a system that will:

* fetch: check out all plugins and features using a map file that contains the locations of the plugins
* generate: create a build process for every plugin checked out
* process: compile the plugins
* assemble: jar and pack the plugins
* postBuild: allows you to set up automatic tests and deployment

Theoretically all you need to do is modify a customTargets.xml file, write a map file that contains a reference to every plugin that you need to check out, and modify a build.properties file to indicate such properties as the CVS server location.

I had a similar problem to the one you have. The build mechanism is divided into several steps. You can customize the preFetch target of the customTargets.xml file so some "bulk" libraries are imported from specific trees in the repository and added to the build directory, so you don't have to specify every single plugin in the map.
How can I build an Eclipse plugin outside of Eclipse?
[ "", "java", "eclipse", "build-process", "" ]
If I have a list like this: ``` <ul id="mylist"> <li id="list-item1">text 1</li> <li id="list-item2">text 2</li> <li id="list-item3">text 3</li> <li id="list-item4">text 4</li> </ul> ``` What's the easiest way to re-arrange the DOM nodes to my preference? (This needs to happen automatically when the page loads, the list-order preference is gained from a cookie) E.g. ``` <ul id="mylist"> <li id="list-item3">text 3</li> <li id="list-item4">text 4</li> <li id="list-item2">text 2</li> <li id="list-item1">text 1</li> </ul> ```
Though there's probably an easier way to do this using a JS Library, here's a working solution using vanilla js. ``` var list = document.getElementById('mylist'); var items = list.childNodes; var itemsArr = []; for (var i in items) { if (items[i].nodeType == 1) { // get rid of the whitespace text nodes itemsArr.push(items[i]); } } itemsArr.sort(function(a, b) { return a.innerHTML == b.innerHTML ? 0 : (a.innerHTML > b.innerHTML ? 1 : -1); }); for (i = 0; i < itemsArr.length; ++i) { list.appendChild(itemsArr[i]); } ```
You can use this to re-sort an element's children: ``` const list = document.querySelector('#test-list'); [...list.children] .sort((a, b) => a.innerText > b.innerText ? 1 : -1) .forEach(node => list.appendChild(node)); ``` * [`list.children`](https://developer.mozilla.org/en-US/docs/Web/API/Element/children) is an [HTMLCollection](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCollection), so we use the [spread syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) introduced in ES6 to convert it to a standard array. * We [`sort`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort) this array with a [custom comparison function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#sorting_array_of_objects) that sorts by `innerText`. * Every child is re-appended to `list` in the order of the now sorted array. When an element is re-appended it is actually moved from its previous location ([docs](https://developer.mozilla.org/en-US/docs/Web/API/Node/appendChild)), so after every element has been re-appended in sorted order we end up with a sorted `list`.
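Both answers sort by the nodes' text, but the question orders by a preference saved in a cookie. A sketch of that variant (my own, not from either answer): the ordering is kept as a pure function over `node.id` so it is easy to test, and the DOM part is just re-appending in the new order.

```javascript
// Sorts nodes so they follow the order given by preferredIds (e.g. an id
// list read from a cookie). Ids not in the list sink to the end. Works on
// any objects with an `id` property, so real DOM nodes qualify.
function orderByPreference(nodes, preferredIds) {
    var rank = {};
    preferredIds.forEach(function (id, i) { rank[id] = i; });
    var unknown = preferredIds.length; // fallback rank for missing ids
    return nodes.slice().sort(function (a, b) {
        var ra = rank.hasOwnProperty(a.id) ? rank[a.id] : unknown;
        var rb = rank.hasOwnProperty(b.id) ? rank[b.id] : unknown;
        return ra - rb;
    });
}

// In the page, apply it by re-appending in the new order (appendChild moves
// an existing node rather than copying it):
//
//   var list = document.getElementById('mylist');
//   var items = [].slice.call(list.getElementsByTagName('li'));
//   orderByPreference(items, idsFromCookie).forEach(function (li) {
//       list.appendChild(li);
//   });
```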
Easiest way to sort DOM nodes?
[ "", "javascript", "dom", "" ]
I'm looking for a way to select until a sum is reached. My "documents" table has "`tag_id`" and "`size`" fields. I want to select all of the documents with `tag_id = 26`, but I know I can only handle 600 units of size. So, there's no point in selecting 100 documents and discarding 90 of them when I could have known that the first 10 already added up to more than 600 units.

So, the goal is: don't bring back a ton of data to parse through when I'm going to discard most of it. ...but I'd also really like to avoid introducing cursors to this app.

I'm using MySQL.
You need some way to order which records get priority over others when adding up to your max units. Otherwise, how do you know which set of records that totals up to 600 do you keep? ``` SELECT d.id, d.size, d.date_created FROM documents d INNER JOIN documents d2 ON d2.tag_id=d.tag_id AND d2.date_created >= d.date_created WHERE d.tag_id=26 GROUP BY d.id, d.size, d.date_created HAVING sum(d2.size) <= 600 ORDER BY d.date_created DESC ``` This is just a basic query to get you started, and there are a number of problems still to solve: * It stops at <= 600, so in most cases you won't fill up your size limit exactly. This means you might want to tweak it to allow one more record. For example, if the first record is > 600 the query will return nothing, and that could be a problem. * It won't do anything to check for additional smaller records later on that might still fit under the cap. * Records with identical date\_created values could be sort of 'double counted' here and there.
This is much less efficient, but it does avoid a cursor (assuming your documents table also has a serial id column): ``` select a.id, (select sum(b.size) from documents b where b.id <= a.id and b.tag_id = 26) from documents a where a.tag_id = 26 order by a.id ``` Also, this was done in pgsql, so I'm not sure if this exact syntax would work in mysql. Then you can wrap this in another query that looks for those having a sum > 600 (you'll have to name the sum column) and take the first id. Then process all ids below and including that one.
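On databases with window functions (MySQL 8+, SQLite 3.25+ - not available when this was asked), the running total can be expressed directly. A runnable sketch, using Python's sqlite3 purely as a harness; the SQL is the point, and the ordering column here (`id`) stands in for whatever makes a document come "first" in your schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE documents (id INTEGER, tag_id INTEGER, size INTEGER)")
con.executemany(
    "INSERT INTO documents VALUES (?, ?, ?)",
    [(1, 26, 200), (2, 26, 300), (3, 26, 250), (4, 99, 50)],
)

# Keep rows while the running total stays within the 600-unit budget.
rows = con.execute(
    """
    SELECT id, size, running
    FROM (
        SELECT id, size,
               SUM(size) OVER (ORDER BY id) AS running
        FROM documents
        WHERE tag_id = 26
    )
    WHERE running <= 600
    """
).fetchall()

print(rows)  # [(1, 200, 200), (2, 300, 500)]
```

This has the same "stops at <= 600" caveat as the self-join: you may want one extra row to fill the budget exactly.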
How to "select until" a sum is reached
[ "", "mysql", "sql", "" ]
Does anyone have a good algorithm for taking an ordered list of integers, i.e.: [1, 3, 6, 7, 8, 10, 11, 13, 14, 17, 19, 23, 25, 27, 28] into a given number of evenly sized ordered sublists, i.e. for 4 it will be: [1, 3, 6] [7, 8, 10, 11] [13, 14, 17, 19] [23, 25, 27, 28] The requirement being that each of the sublists are ordered and as similar in size as possible.
Splitting the lists evenly means you will have two sizes of lists - size S and S+1. With N sublists and X elements in the original, you get floor(X/N) elements in each of the smaller sublists (S), and X % N is the number of larger sublists (of size S+1). Then iterate over the original array, and (looking at your example) create the small lists first. Something like this maybe:

```
private static List<Integer[]> splitOrderedDurationsIntoIntervals(Integer[] durations, int numberOfIntervals) {
    int sizeOfSmallSublists = durations.length / numberOfIntervals;
    int sizeOfLargeSublists = sizeOfSmallSublists + 1;
    int numberOfLargeSublists = durations.length % numberOfIntervals;
    int numberOfSmallSublists = numberOfIntervals - numberOfLargeSublists;

    List<Integer[]> sublists = new ArrayList<Integer[]>(numberOfIntervals);
    int numberOfElementsHandled = 0;
    for (int i = 0; i < numberOfIntervals; i++) {
        int size = i < numberOfSmallSublists ? sizeOfSmallSublists : sizeOfLargeSublists;
        Integer[] sublist = new Integer[size];
        System.arraycopy(durations, numberOfElementsHandled, sublist, 0, size);
        sublists.add(sublist);
        numberOfElementsHandled += size;
    }
    return sublists;
}
```
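The same small/large bookkeeping can be collapsed into one formula: the i-th of N sublists spans indices [i*X/N, (i+1)*X/N) with integer division, which yields the same "sizes differ by at most one, smaller lists first" layout. A sketch (my own variant, not from the answer):

```java
// Even split via boundary arithmetic: sublist i covers [i*X/N, (i+1)*X/N).
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class EvenSplit {
    static List<int[]> split(int[] values, int numberOfSublists) {
        List<int[]> result = new ArrayList<>(numberOfSublists);
        int x = values.length;
        for (int i = 0; i < numberOfSublists; i++) {
            int from = i * x / numberOfSublists;     // integer division
            int to = (i + 1) * x / numberOfSublists;
            result.add(Arrays.copyOfRange(values, from, to));
        }
        return result;
    }
}
```

For the question's 15 elements and N = 4, the boundaries are 0, 3, 7, 11, 15, giving sizes 3, 4, 4, 4 - exactly the example grouping.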
Here is my own recursive solution, inspired by merge sort and breadth-first tree traversal:

```
private static void splitOrderedDurationsIntoIntervals(Integer[] durations, List<Integer[]> intervals, int numberOfIntervals) {
    int middle = durations.length / 2;
    Integer[] lowerHalf = Arrays.copyOfRange(durations, 0, middle);
    Integer[] upperHalf = Arrays.copyOfRange(durations, middle, durations.length);
    if (lowerHalf.length > upperHalf.length) {
        intervals.add(lowerHalf);
        intervals.add(upperHalf);
    } else {
        intervals.add(upperHalf);
        intervals.add(lowerHalf);
    }

    if (intervals.size() < numberOfIntervals) {
        int largestElementLength = intervals.get(0).length;
        if (largestElementLength > 1) {
            Integer[] duration = intervals.remove(0);
            splitOrderedDurationsIntoIntervals(duration, intervals, numberOfIntervals);
        }
    }
}
```

I was hoping someone might have a suggestion for an iterative solution.
How do I divide an ordered list of integers into evenly sized sublists?
[ "", "java", "algorithm", "sorting", "" ]
I'm looking for a small and fast library implementing an HTTP server in .NET My general requirements are: * Supports multiple simultaneous connections * Only needs to support static content (no server side processing) * HTTP only, HTTPS not needed * Preferably be able to serve a page from an in memory source. I want to integrate it into another app to be able to make changing data available via a browser, but I don't want to have to write it to a file on disk first. For example, just pass it a C# string to use as the current page content. * Preferably open source so I can modify it if needed * *Definitely* needs to be free... it's for a personal project with no budget other than my own time. I also want to be able to release the final product that would use this library freely (even if that means complying to the particular OSS license of that library. Edit: To clarify some more, what I need can be REALLY simple. I need to be able to serve essentially 2 documents, which I would like to be served directly from memory. And that's it. Yes, I could write my own, but I wanted to make sure I wasn't doing something that was already available.
Use [Cassini](http://www.asp.net/downloads/archived/cassini/). Free, Open Source. It would take trivial hacking to serve from memory.
Well, how complicated of a HTTP server do you need? .NET 2.0 has the [HttpListener Class](http://msdn.microsoft.com/en-us/library/system.net.httplistener.aspx) which you can use to [roll your own basic library](http://bartdesmet.net/blogs/bart/archive/2007/02/22/httplistener-for-dummies-a-simple-http-request-reflector.aspx).

Since this is for a personal project and you are willing to invest the time, it would also make for a good learning experience, as you would get to learn how to work with the class. Additionally, according to the MSDN documentation, it has an asynchronous mode that gives each request its own thread.

Getting a basic HTTP server with the class up and running isn't too difficult either; you should be able to get it done in only a couple hundred lines of code.
Light weight HTTP Server library in .NET
[ "", "c#", ".net", "http", "" ]
I'm working with some schema which defines an abstract complex type, eg. ``` <xs:complexType name="MyComplexType" abstract="true"> ``` This type is then referenced by another complex type in the schema: ``` <xs:complexType name="AnotherType"> <xs:sequence> <xs:element name="Data" type="MyComplexType" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> ``` When I run "xsd.exe /d /l:CS MySchema.xsd" I get this error: > Error: There was an error processing MySchema.xsd'. > - Error generating code for DataSet ''. > - Unable to convert input xml file content to a DataSet. DataSet cannot instantiate an abstract ComplexType for the node Data. > - DataSet cannot instantiate an abstract ComplexType for the node Data. It doesn't seem to be invalid schema. Has anyone else run into this issue? Thanks Nick
After evaluating several different schema-to-code tools the only one we found that was able to deal with our schema (very large and extremely complicated) was Liquid XML (<http://www.liquid-technologies.com/>).
I have never run into this problem, but I quickly learned that xsd.exe has a lot of shortcomings. We started using [CodeXS](http://www.bware.biz/CodeXS/) a long time ago. The product is a web service, but the code is available as a command-line tool. It has its own warts, but the code it generates is much better and it is free. Plus, it is extensible, and adding your own plug-ins for it is actually pretty easy. Possibly, it won't have the same limitation?
Generate class for schema with abstract complex type
[ "", "c#", ".net", "xml", "xsd", "schema", "" ]
Here's a relatively common task for me, and, I think, for many a .NET programmer: I want to use the .NET ThreadPool for scheduling worker threads that need to process a given type of tasks. As a refresher, the signatures for the queueing method of the ThreadPool and its associated delegate are: ``` public static bool QueueUserWorkItem ( WaitCallback callBack, Object state ) public delegate void WaitCallback (Object state) ``` Therefore, a typical generic worker thread class would look something like: ``` public class Worker<T> { public void schedule(T i_task) { ThreadPool.QueueUserWorkItem(execute, i_task) } private void execute(Object o){ T task = (T)o; //What happened to the type safety? executeTask(task); } private void executeTask(T i_task){ //process i_task } } ``` Notice the type of the `state` parameter? It's *`Object`* ! What's the compelling reason why the .NET team chose not to make the `QueueUserWorkItem` method (or the whole `ThreadPool` class) generic? I can't believe they just overlooked it. Here's how I'd like to see it: ``` //in the ThreadPool class: public static bool QueueUserWorkItem<T> ( WaitCallback<T> callBack, T state ) public delegate void WaitCallback<T> (T state) ``` This would make the worker class type-safe (and a lot clearer, IMHO): ``` public class Worker<T> { public void schedule(T i_task) { ThreadPool.QueueUserWorkItem<T>(execute, i_task) } private void execute(T i_task){ //process i_task } } ``` I must be missing something.
It sounds like you are talking about a work queue? (and I sound like clippy...) For the record, thread-pool threads should typically be used for short pieces of work. You should ideally create your own threads for a long-lived queue. Note that .NET 4.0 may be adopting the CCR/TPL libraries, so we'll get some inbuilt work queues for free - but it isn't hard to write a threaded work-queue. And you can make it generic, too ;-p Re the question - I prefer the captured variables approach to passing state into threads (be they `Thread`, `ThreadPool`, or `Control.Invoke`): ``` Thread t = new Thread(() => SomeMethod(arg)); t.IsBackground = true; t.Name = "Worker n"; t.Start(); ``` This gives you much more granular control over the thread, without saturating the `ThreadPool`.
Since it's trivial to package whatever state you like by passing an anonymous delegate or lambda to the threadpool (through variable capture), there's no need for a generic version. For example, you could write a utility function: ``` static void QueueItem<T>(Action<T> action, T state) { ThreadPool.QueueUserWorkItem(delegate { action(state); }); } ``` But it wouldn't be terribly useful, as you can simply use a delegate yourself any time you need state in the pooled task.
Generic ThreadPool in .NET
[ "", "c#", ".net", "multithreading", "generics", "threadpool", "" ]
I know how to disable [WSDL-cache](https://stackoverflow.com/questions/303488/in-php-how-can-you-clear-a-wsdl-cache) in PHP, but what about force a re-caching of the WSDL? This is what i tried: I run my code with caching set to disabled, and the new methods showed up as espected. Then I activated caching, but of some reason my old non-working wsdl showed up again. So: how can I force my new WSDL to overwrite my old cache?
I guess when you disable caching it will also stop writing to the cache. So when you re-enable the cache the old cached copy will still be there and valid. You could try (with caching enabled)

```
ini_set('soap.wsdl_cache_ttl', 1);
```

I put in a time-to-live of one second because I think if you put zero in it will disable the cache entirely but not remove the entry. You probably will only want to put that line in when you want to kill the cached copy.
In my php.ini there's an entry which looks like this: ``` soap.wsdl_cache_dir="/tmp" ``` In /tmp, I found a bunch of files named wsdl-[some hexadecimal string] I can flush the cached wsdl files with this command: ``` rm /tmp/wsdl-* ```
Force re-cache of WSDL in php
[ "", "php", "web-services", "caching", "wsdl", "" ]
I want to be able to read from an unsorted source text file (one record in each line), and insert the line/record into a destination text file by specifying the line number where it should be inserted. Where to insert the line/record into the destination file will be determined by comparing the incoming line from the incoming file to the already ordered list in the destination file. (The destination file will start as an empty file and the data will be sorted and inserted into it one line at a time as the program iterates over the incoming file lines.) Incoming File Example: ``` 1 10/01/2008 line1data 2 11/01/2008 line2data 3 10/15/2008 line3data ``` Desired Destination File Example: ``` 2 11/01/2008 line2data 3 10/15/2008 line3data 1 10/01/2008 line1data ``` I could do this by performing the sort in memory via a linked list or similar, but I want to allow this to scale to very large files. (And I am having fun trying to solve this problem as I am a C++ newbie :).) One of the ways to do this may be to open 2 file streams with `fstream` (1 in and 1 out, or just 1 in/out stream), but then I run into the difficulty that it's difficult to find and search the file position because it seems to depend on absolute position from the start of the file rather than line numbers :). I'm sure problems like this have been tackled before, and I would appreciate advice on how to proceed in a manner that is good practice. I'm using Visual Studio 2008 Pro C++, and I'm just learning C++.
The basic problem is that under common OSs, files are just streams of bytes. There is no concept of lines at the filesystem level. Those semantics have to be added as an additional layer on top of the OS provided facilities. Although I have never used it, I believe that VMS has a record oriented filesystem that would make what you want to do easier. But under Linux or Windows, you can't insert into the middle of a file without rewriting the rest of the file. It is similar to memory: At the highest level, its just a sequence of bytes, and if you want something more complex, like a linked list, it has to be added on top.
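Concretely, the read-modify-rewrite approach this implies might look like the sketch below (my own illustration, not from the answer). It is fine for moderate files, but because it rewrites everything after the insertion point it will not scale to the very large files the question mentions - for those, an external merge/sort strategy is the usual answer.

```cpp
#include <cassert>
#include <fstream>
#include <string>
#include <vector>

// Reads all lines of `path`, inserts `line` so that it becomes line number
// `lineNo` (1-based), and rewrites the file. Appends if lineNo is past the end.
void insertLineAt(const std::string& path, std::size_t lineNo, const std::string& line)
{
    std::vector<std::string> lines;
    {
        std::ifstream in(path.c_str());
        std::string current;
        while (std::getline(in, current)) {
            lines.push_back(current);
        }
    } // input stream closed before we rewrite the file

    std::size_t pos = (lineNo == 0) ? 0 : lineNo - 1;
    if (pos > lines.size()) {
        pos = lines.size();
    }
    lines.insert(lines.begin() + pos, line);

    std::ofstream out(path.c_str(), std::ios::trunc);
    for (std::size_t i = 0; i < lines.size(); ++i) {
        out << lines[i] << '\n';
    }
}
```

Where the record should go (the `lineNo`) would come from comparing the incoming record against the already-sorted lines, as described in the question.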
A [distinctly-no-c++] solution would be to use the \*nix `sort` tool, sorting on the second column of data. It might look something like this: ``` cat <file> | sort -k 2,2 > <file2> ; mv <file2> <file> ``` It's not exactly in-place, and it fails the request of using C++, but it does work :) Might even be able to do: ``` cat <file> | sort -k 2,2 > <file> ``` I haven't tried that route, though. \* <http://www.ss64.com/bash/sort.html> - sort man page
C++ inserting a line into a file at a specific line number
[ "", "c++", "algorithm", "file-io", "" ]
Some of the platforms that I develop on, don't have profiling tools. I am looking for suggestions/techniques that you have personally used to help you identify hotspots, without the use of a profiler. The target language is C++. I am interested in what you have personally used.
I've found the following quite useful: ``` #ifdef PROFILING # define PROFILE_CALL(x) do{ \ const DWORD t1 = timeGetTime(); \ x; \ const DWORD t2 = timeGetTime(); \ std::cout << "Call to '" << #x << "' took " << (t2 - t1) << " ms.\n"; \ }while(false) #else # define PROFILE_CALL(x) x #endif ``` Which can be used in the calling function as such: ``` PROFILE_CALL(renderSlow(world)); int r = 0; PROFILE_CALL(r = readPacketSize()); ```
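A variant of the same idea (my own sketch, requiring C++11's `<chrono>` rather than the Windows-only `timeGetTime`): an RAII timer that measures a whole scope, so individual call sites don't each need wrapping.

```cpp
#include <cassert>
#include <chrono>
#include <iostream>
#include <string>

// Times everything from construction until the end of the enclosing scope
// and prints a report from the destructor.
class ScopedTimer {
public:
    explicit ScopedTimer(const std::string& label)
        : label_(label), start_(std::chrono::steady_clock::now()) {}

    // Milliseconds elapsed so far (also used by the destructor's report).
    long long elapsedMs() const {
        return std::chrono::duration_cast<std::chrono::milliseconds>(
                   std::chrono::steady_clock::now() - start_).count();
    }

    ~ScopedTimer() {
        std::cout << label_ << " took " << elapsedMs() << " ms\n";
    }

private:
    std::string label_;
    std::chrono::steady_clock::time_point start_;
};

// Usage:
//   {
//       ScopedTimer t("renderSlow(world)");
//       renderSlow(world);
//   }   // report printed when the scope ends
```

`steady_clock` is used rather than `system_clock` so the measurement is immune to wall-clock adjustments.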
No joke: In addition to dumping timings to std::cout and other text/data oriented approaches I also use the Beep() function. There's something about hearing the gap of silence between two "Beep" checkpoints that makes a different kind of impression. It's like the difference between looking at a written sheet music, and actually HEARING the music. It's like the difference between reading rgb(255,0,0) and seeing fire-engine red. So, right now, I have a client/server app and with Beeps of different frequencies, marking where the client sends the message, where the server starts its reply, finishes its reply, where reply first enters the client, etc, I can very naturally get a feel for where the time is spent.
What techniques can you use to profile your code
[ "", "c++", "profile", "profiler", "homebrew", "" ]
What is the memory model for concurrency in C++03? (And, does C++11 change the memory model to support concurrency better?)
The C++ memory model is the specification of when and why physical memory is read/written with respect to C++ code. Until the next C++ standard, the C++ memory model is the same as C's. In the C++0x standard, a proper memory model for multithreading is expected to be included (see [here](http://en.wikipedia.org/wiki/C%2B%2B0x#Multithreading_memory_model)), and it will possibly be part of the next revision of the C standard, C1X. The current one is rudimentary: * it only specifies the behavior of memory operations observable by the current program. * it doesn't say anything about concurrent memory accesses when multiple processes access the same memory (there is no notion of shared memory or processes). * it doesn't say anything about concurrent memory accesses when multiple threads access the same memory (there is no notion of threads). * it offers no way to specify an ordering for memory accesses (compiler optimizations include code motion, and recent processors reorder accesses; both can break patterns such as double-checked initialization). So, the current state is: C++ memory operations are only specified when you have one process with its main thread and don't write code which depends on a specific ordering of variable reads/writes, and that's it. In essence, this means that aside from the traditional hello world program you're screwed. Of course, you'll be prompted to add that *"it works today on my machine, you can't possibly be right"*. The correct sentence would be *"it works today on my machine with this specific combination of hardware, operating system (thread library) and compiler, which know enough about each other to implement something that somewhat works but will probably break at some point"*. Ok ok, this is a bit harsh but hell, [even Herb Sutter acknowledges that](http://www.open-std.org/JTC1/sc22/wg21/docs/papers/2007/n2197.pdf) (just read the intro) and he is talking about all pre-2007 versions of one of the most ubiquitous C/C++ toolchains...
The C++ standard committee attempts to come up with something which will address all those issues while still being less constraining (and thus better performing) than Java's memory model. Hans Boehm has collected [here](http://www.hboehm.info/c++mm/) some pointers to papers on the issue, both academic and from the C++ committee.
*Seeing some other answers, it seems many C++ programmers are not even aware what the "memory model" you are asking about means.* *The question is about the memory model in this sense: what guarantees (if any) are there about write/read reordering (which may happen on the compiler side or on the runtime side)? This question is very important for multithreaded programming, as without such rules writing correct multithreaded programs is not possible, and the somewhat surprising truth is that, with the current lack of an explicit memory model, many multithreaded programs work more or less "by sheer luck" - most often thanks to compilers assuming pointer aliasing across function calls. - see [Threads Cannot be Implemented as a Library](http://www.hpl.hp.com/techreports/2004/HPL-2004-209.html)* In current C++ there is no standard memory model. Some compilers define a memory model for volatile variables, but this is nonstandard. C++0x defines new "atomic" primitives for this purpose. An exhaustive starting point for checking the recent status can be found at [Threads and memory model for C++](http://www.hpl.hp.com/personal/Hans_Boehm/c++mm/) Important links are also the [Concurrency memory model](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2429.htm), [Atomic Types](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2427.html) and [C++ Data-Dependency Ordering: Atomics and Memory Model](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2427.html) standard proposals.
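To make the ordering guarantees discussed above concrete: under the C++0x/C++11 proposals linked in these answers, cross-thread publication is finally expressible in the language itself via atomics. A minimal sketch of the release/acquire pattern (using `std::atomic` as proposed; compiles with a C++11 compiler):

```cpp
#include <atomic>
#include <cassert>

int payload = 0;                  // ordinary, non-atomic data
std::atomic<bool> ready(false);   // publication flag

void producer() {
    payload = 42;                                  // plain write
    ready.store(true, std::memory_order_release);  // "publish" the payload
}

void consumer(int* out) {
    // The acquire load pairs with the release store above: once `ready`
    // reads as true, the earlier write to `payload` is guaranteed visible.
    while (!ready.load(std::memory_order_acquire)) {}
    *out = payload;
}
```

In a real program `producer` and `consumer` would run on two separate `std::thread`s; pre-C++0x, nothing in the standard prevented the compiler or CPU from reordering the two writes, which is exactly the gap the new memory model closes.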
What is the C++03 memory model for concurrency?
[ "", "c++", "concurrency", "c++03", "memory-model", "" ]
What, if any, is the performance difference between the following two loops? ``` for (Object o: objectArrayList) { o.DoSomething(); } ``` and ``` for (int i=0; i<objectArrayList.size(); i++) { objectArrayList.get(i).DoSomething(); } ```
From Item 46 in [Effective Java](https://rads.stackoverflow.com/amzn/click/com/0321356683) by Joshua Bloch : > The for-each loop, introduced in > release 1.5, gets rid of the clutter > and the opportunity for error by > hiding the iterator or index variable > completely. The resulting idiom > applies equally to collections and > arrays: > > ``` > // The preferred idiom for iterating over collections and arrays > for (Element e : elements) { > doSomething(e); > } > ``` > > When you see the colon (:), read it as > “in.” Thus, the loop above reads as > “for each element e in elements.” Note > that there is no performance penalty > for using the for-each loop, even for > arrays. In fact, it may offer a slight > performance advantage over an ordinary > for loop in some circumstances, as it > computes the limit of the array index > only once. While you can do this by > hand (Item 45), programmers don’t > always do so.
All these loops do exactly the same thing; I just want to show these before throwing in my two cents. First, the classic way of looping through List: ``` for (int i=0; i < strings.size(); i++) { /* do something using strings.get(i) */ } ``` Second, the preferred way since it's less error prone (how many times have YOU done the "oops, mixed the variables i and j in these loops within loops" thing?). ``` for (String s : strings) { /* do something using s */ } ``` Third, the micro-optimized for loop: ``` int size = strings.size(); for (int i = -1; ++i < size;) { /* do something using strings.get(i) */ } ``` Now the actual two cents: At least when I was testing these, the third one was the fastest when counting milliseconds on how long it took for each type of loop with a simple operation in it repeated a few million times - this was using Java 5 with jre1.6u10 on Windows in case anyone is interested. While it at least seems to be so that the third one is the fastest, you really should ask yourself if you want to take the risk of implementing this peephole optimization everywhere in your looping code, since from what I've seen, actual looping isn't usually the most time-consuming part of any real program (or maybe I'm just working in the wrong field, who knows). And as I mentioned above, with the Java *for-each loop* (some refer to it as the *Iterator loop* and others as the *for-in loop*) you are less likely to hit that one particular stupid bug. And before debating how this can even be faster than the other ones, remember that javac doesn't optimize bytecode at all (well, nearly at all anyway); it just compiles it. If you're into micro-optimization though and/or your software uses lots of recursive loops and such, then you may be interested in the third loop type. Just remember to benchmark your software well both before and after changing the for loops you have to this odd, micro-optimized one.
Is there a performance difference between a for loop and a for-each loop?
[ "", "java", "performance", "for-loop", "" ]
I want to build an Axis2 client (I'm only accessing a remote web service, I'm *not* implementing one!) with Maven2 and I don't want to add 21MB of JARs to my project. What do I have to put in my pom.xml to compile the code when I've converted the WSDL with ADB?
(**Note:** This response was provided by Aaron Digulla himself. What follows is the exact text of his own answer.) In maven2, the minimum dependency set to make an ADB client work ("ADB" as in the way you created the Java classes from the WSDL) is this: ``` <dependency> <groupId>org.apache.axis2</groupId> <artifactId>axis2-kernel</artifactId> <version>1.4.1</version> </dependency> <dependency> <groupId>org.apache.axis2</groupId> <artifactId>axis2-adb</artifactId> <version>1.4.1</version> </dependency> ```
The minimum jars for the client are: * activation-1.1.jar * axiom-api-1.2.8.jar * axiom-impl-1.2.8.jar * axis2-adb-1.5.1.jar * axis2-kernel-1.5.1.jar * axis2-transport-http-1.5.1.jar * axis2-transport-local-1.5.1.jar * commons-codec-1.3.jar * commons-httpclient-3.1.jar * commons-logging-1.1.1.jar * httpcore-4.0.jar * mail-1.4.jar * neethi-2.0.4.jar * wsdl4j-1.6.2.jar * XmlSchema-1.4.3.jar STAX jars below are not part of Axis2 1.5.1 release and will be needed if your JDK version is less than 6: * stax-1.2.0.jar * stax-api-1.0.1.jar
What's the minimum classpath for an Axis2 client?
[ "", "java", "maven-2", "classpath", "apache-axis", "" ]
I'm trying to understand the differences between Assembly.Load and Assembly.ReflectionOnlyLoad. In the code below I am attempting to find all of the objects in a given assembly that inherit from a given interface: ``` var myTypes = new List<Type>(); var assembly = Assembly.Load("MyProject.Components"); foreach (var type in assembly.GetTypes()) { if (type.GetInterfaces().Contains(typeof(ISuperInterface))) { myTypes.Add(type); } } ``` This code works fine for me, but I was doing some research into other possibly better alternatives and came across Assembly.ReflectionOnlyLoad() method. I assumed that since I'm not loading or executing any of the objects, essentially just querying on their definitions that I could use ReflectionOnlyLoad for a slight performance increase... But it turns out that when I change Assembly.Load to Assembly.ReflectionOnlyLoad I get the following error when it calls assembly.GetTypes(): > ``` > System.Reflection.ReflectionTypeLoadException: > ``` > > Unable to load one or more of the > requested types. Retrieve the > LoaderExceptions property for more > information. I assumed that the above code was JUST doing reflection and "looking at" the library... but is this some sort of instance of the Heisenberg Uncertainty Principle whereby looking at the library and the objects in it is actually attempting to instantiate them in some way? Thanks, Max
As per Jon's reply, it would be helpful to know what's in `LoaderExceptions`. In lieu of this information, I think I can hazard a guess. From [MSDN](http://msdn.microsoft.com/en-us/library/ms172331(VS.80).aspx): > If the assembly has dependencies, the > ReflectionOnlyLoad method does not > load them. If you need to examine > them, you must load them yourself. You need to attach a handler to `AppDomain.ReflectionOnlyAssemblyResolve` to help the CLR load any dependencies of the assembly you're loading. Have you done this?
The ReflectionOnly methods are the only way you can load a specific Assembly on disk to examine without going via the usual Load/LoadFrom rules. For example, you can load a disk-based assembly with the same identity as one in the GAC. If you tried this with LoadFrom or LoadFile, the GAC assembly is ALWAYS loaded. Additionally, you may not call GetCustomAttributes(...) on the returned Assembly instance, since this would attempt to instantiate the attributes on the assembly, which is not allowed in a reflection-only context. You must use the CustomAttributeData class's static methods for this. No types in an assembly loaded via ReflectionOnly may be instantiated.
C# Assembly.Load vs Assembly.ReflectionOnlyLoad
[ "", "c#", "reflection", "assembly.load", "assembly.reflectiononly", "" ]
How exactly do you make an auto-refreshing `div` with JavaScript (specifically, jQuery)? I know about the `setTimeout` method, but is it really good practice? Is there a better method? ``` function update() { $.get("response.php", function(data) { $("#some_div").html(data); }); window.setTimeout("update();", 10000); } ```
Another modification: ``` function update() { $.get("response.php", function(data) { $("#some_div").html(data); window.setTimeout(update, 10000); }); } ``` The difference with this is that it waits 10 seconds AFTER the ajax call is done. So really the time between refreshes is 10 seconds plus the length of the ajax call. The benefit of this is if your server takes longer than 10 seconds to respond, you don't get two (and eventually, many) simultaneous AJAX calls happening. Also, if the server fails to respond, it won't keep trying. I've used a similar method in the past using .ajax to handle even more complex behaviour: ``` function update() { $("#notice_div").html('Loading..'); $.ajax({ type: 'GET', url: 'response.php', timeout: 2000, success: function(data) { $("#some_div").html(data); $("#notice_div").html(''); window.setTimeout(update, 10000); }, error: function (XMLHttpRequest, textStatus, errorThrown) { $("#notice_div").html('Timeout contacting server..'); window.setTimeout(update, 60000); } }); } ``` This shows a loading message while loading (put an animated gif in there for typical "web 2.0" style). If the server times out (in this case takes longer than 2s) or any other kind of error happens, it shows an error, and it waits for 60 seconds before contacting the server again. This can be especially beneficial when doing fast updates with a larger number of users, where you don't want everyone to suddenly cripple a lagging server with requests that are all just timing out anyways.
``` $(document).ready(function() { $.ajaxSetup({ cache: false }); // This part addresses an IE bug. Without it, IE will only load the first number and will never refresh setInterval(function() { $('#notice_div').load('response.php'); }, 3000); // the "3000" is the refresh interval in milliseconds }); ```
Auto-refreshing div with jQuery - setTimeout or another method?
[ "", "javascript", "jquery", "" ]
How do I send mail via PHP with attachment of HTML file? -> Content of HTML file (code) is in string in DB. Is there some easy way or free script to do this? I don't want to store the file localy, I need to read it out of DB and send it straightaway as attachment (not included in body).
If you have a hard time getting the headers right, you can always use something like [PHP Mailer](https://github.com/PHPMailer/PHPMailer) instead of reinventing the wheel.
I like pear. ``` <? include('Mail.php'); include('Mail/mime.php'); $text = 'Text version of email'; $html = '<html><body>HTML version of email</body></html>'; $file = './files/example.zip'; $crlf = "\r\n"; $hdrs = array( 'From' => 'someone@domain.pl', 'To' => 'someone@domain.pl', 'Subject' => 'Test mime message' ); $mime = new Mail_mime($crlf); $mime->setTXTBody($text); $mime->setHTMLBody($html); $mime->addAttachment($file,'application/octet-stream'); $body = $mime->get(); $hdrs = $mime->headers($hdrs); $mail =& Mail::factory('mail'); $mail->send('mail@domain.pl', $hdrs, $body); ?> ```
How to attach HTML file to email using content taken from DB in PHP?
[ "", "php", "html", "database", "email", "attachment", "" ]
I have a `JFrame` that contains a "display" `JPanel` with a `JTextField` and a "control" `JPanel` with buttons that should access the contents of the display `JPanel`. I think my problem is related to how to use the observer pattern, which in principle I understand. You need to place listeners and update messages, but I don't have a clue where to put these, how to get access from one panel to the other, and whether I need to introduce a "datamodel" class. For example, I want to access the contents of the `JTextField` from the control panel, and I use an anonymous action listener as follows: ``` JButton openfile = new JButton("Convert file"); openfile.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { openButtonPressed(); } }); ```
You need to reduce the coupling between these objects. You can have a master object that owns all the text fields and the button (the panels are irrelevant). Then a separate ActionListener within that master object (I call it a mediator; see the Mediator pattern). That action listener invokes a method on the mediator, which in turn takes the values from the text fields and perhaps creates a transfer object. This way you reduce the coupling between the panels, text fields, etc. and keep the control in one place (the mediator); that is, you don't let them know each other. You can take a look at the code in this question: <https://stackoverflow.com/questions/324554/#324559> It shows these concepts in running code. BTW the observer pattern is already implemented in JTextField, JButton, ActionListener, etc. You just need to add the hooks. I hope this helps. **EDIT** Joined two answers into one. This is the code. ``` class App { // this is the mediator // GUI components. private JFrame frame; private JTextField name; private JTextField count; private JTextField date; // Result is displayed here. private JTextArea textArea; // Fired by this button. private JButton go; private ActionListener actionListener; public App(){ actionListener = new ActionListener(){ public void actionPerformed( ActionEvent e ){ okButtonPressed(); } }; } private void okButtonPressed(){ // template is an object irrelevant to this code. template.setData( getData() ); textArea.setText( template.getTransformedData() ); } public void initialize(){ frame = new JFrame("Code challenge v0.1"); frame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE ); name = new JTextField(); count = new JTextField(); date = new JTextField(); textArea = new JTextArea(); go = new JButton("Go"); go.addActionListener( actionListener ); // prepare the button. layoutComponents(); // a lot of panels are created here. Irrelevant. 
} } ``` Complete and running code can be retrieved [here](https://stackoverflow.com/questions/324554/#324559). It is important to favor composition over inheritance when possible.
It does make the code cleaner if you create the models in one layer and add a layer or two above to create the components and layout. Certainly do not extend the likes of `JFrame` and `JPanel`. Do not feel the need to make the composition hierarchy in the model layer exactly match the display. Then it's just a matter of taking the text from the `Document` and performing the relevant operation. Okay, perhaps not that simple. Swing models are a little bit messy. In particular ButtonModel is brain damaged, and the controller area of code might not be entirely pure.
How to access multiple JPanels inside JFrame?
[ "", "java", "design-patterns", "swing", "controller", "jpanel", "" ]