``` true.ToString(); false.ToString(); Output: True False ``` Is there a valid reason for it being "True" and not "true"? It breaks when writing XML, as XML's boolean type *is lower case*, and also isn't compatible with C#'s true/false (not sure about CLS though). **Update** Here is my very hacky way of getting around it in C# (for use with XML): ``` internal static string ToXmlString(this bool b) { return b.ToString().ToLower(); } ``` Of course that adds 1 more method to the stack, but removes the ToLower() calls everywhere.
Only people from Microsoft can really answer that question. However, I'd like to offer some fun facts about it ;) First, this is what it says in MSDN about the **Boolean.ToString()** method: > **Return Value** > > Type: System.String > > *TrueString* if the value of this > instance is true, or *FalseString* if > the value of this instance is false. > > **Remarks** > > This method returns the > constants "True" or "False". Note that > XML is case-sensitive, and that the > XML specification recognizes "true" > and "false" as the valid set of > Boolean values. If the String object > returned by the ToString() method > is to be written to an XML file, its > String.ToLower method should be > called first to convert it to > lowercase. Here comes the fun fact #1: it doesn't return TrueString or FalseString at all. It uses hardcoded literals "True" and "False". Wouldn't do you any good if it used the fields, because they're marked as readonly, so there's no changing them. The alternative method, **Boolean.ToString(IFormatProvider)** is even funnier: > **Remarks** > > The provider parameter is reserved. It does not participate in the execution of this method. This means that the Boolean.ToString(IFormatProvider) method, unlike most methods with a provider parameter, does not reflect culture-specific settings. What's the solution? Depends on what exactly you're trying to do. Whatever it is, I bet it will require a hack ;)
...because the .NET environment is designed to support many languages. System.Boolean (in mscorlib.dll) is designed to be used internally by languages to support a boolean datatype. C# uses all lowercase for its keywords, hence 'bool', 'true', and 'false'. VB.NET however uses standard casing: hence 'Boolean', 'True', and 'False'. Since the languages have to work together, you couldn't have true.ToString() (C#) giving a different result to True.ToString() (VB.NET). The CLR designers picked the standard CLR casing notation for the ToString() result. The string representation of the boolean true is defined to be Boolean.TrueString. (There's a similar case with System.String: C# presents it as the 'string' type).
Why does Boolean.ToString output "True" and not "true"
[ "c#", ".net", "boolean" ]
I have a table with about 2,000,000 rows. I need to query one of the columns to retrieve the rows where a string exists as part of the value. When I run the query I will know the position of the string, but not beforehand, so a view which takes a substring is not an option. As far as I can see I have three options: 1. using LIKE '% %' 2. using INSTR 3. using SUBSTR I do have the option of creating a function-based index, if I am nice to the DBA. At the moment all queries are taking about two seconds. Does anyone have experience of which of these options will work best, or if there is another option? The select will be used for deletes every few seconds; it will typically select 10 rows. **edit with some more info** The problem comes about as we are using a table for storing objects with arbitrary keys and values. The objects come from outside our system, so we have limited scope to control them, and the text column is something like 'key1=abc,key2=def,keyn=ghi'. I know this is horribly denormalised, but as we don't know what the keys will be (to some extent) it is a reliable way to store and retrieve values. To retrieve a row is fairly fast as we are searching the whole of the column, which is indexed. But the performance is not good if we want to retrieve the rows with key2=def. We may be able to create a table with columns for the most common keys, but I was wondering if there was a way to improve performance with the existing set up.
In Oracle 10: ``` CREATE TABLE test (tst_test VARCHAR2(200)); CREATE INDEX ix_re_1 ON test(REGEXP_REPLACE(REGEXP_SUBSTR(tst_test, 'KEY1=[^,]*'), 'KEY1=([^,]*)', '\1')); SELECT * FROM TEST WHERE REGEXP_REPLACE(REGEXP_SUBSTR(TST_TEST, 'KEY1=[^,]*'), 'KEY1=([^,]*)', '\1') = 'TEST' ``` This will use the newly created index. You will need as many indices as there are `KEY`s in your data. The presence of an `INDEX`, of course, impacts performance, but it depends very little on `REGEXP` being there: ``` SQL> CREATE INDEX ix_test ON test (tst_test) 2 / Index created Executed in 0,016 seconds SQL> INSERT 2 INTO test (tst_test) 3 SELECT 'KEY1=' || level || ';KEY2=' || (level + 10000) 4 FROM dual 5 CONNECT BY 6 LEVEL <= 1000000 7 / 1000000 rows inserted Executed in 47,781 seconds SQL> TRUNCATE TABLE test 2 / Table truncated Executed in 2,546 seconds SQL> DROP INDEX ix_test 2 / Index dropped Executed in 0 seconds SQL> CREATE INDEX ix_re_1 ON test(REGEXP_REPLACE(REGEXP_SUBSTR(tst_test, 'KEY1=[^,]*'), 'KEY1=([^,]*)', '\1')) 2 / Index created Executed in 0,015 seconds SQL> INSERT 2 INTO test (tst_test) 3 SELECT 'KEY1=' || level || ';KEY2=' || (level + 10000) 4 FROM dual 5 CONNECT BY 6 LEVEL <= 1000000 7 / 1000000 rows inserted Executed in 53,375 seconds ``` As you can see, on my not very fast machine (`Core2 4300`, `1 Gb RAM`) you can insert `20000` records per second into an indexed field, and this rate hardly depends on the type of `INDEX` being used: plain or function-based.
You can use [Tom Kyte's runstats package](http://asktom.oracle.com/tkyte/runstats.html) to compare the performance of different implementations - running each, say, 1000 times in a loop. For example, I just compared LIKE with SUBSTR and it said that LIKE was faster, taking about 80% of the time of SUBSTR. Note that "col LIKE '%xxx%'" is different from "SUBSTR(col,5,3) = 'xxx'". The equivalent LIKE would be: ``` col LIKE '____xxx%' ``` using one '\_' for each leading character to be ignored. I think whichever way you do it, the results will be similar - it always involves a full table (or perhaps full index) scan. A function-based index would only help if you knew the offset of the substring at the time of creating the index. I am rather concerned when you say that "The select will be used for deletes every few seconds". This does rather suggest a design flaw somewhere, but without knowing the requirements it's hard to say. **UPDATE:** If your column values are like 'key1=abc,key2=def,keyn=ghi' then perhaps you could consider adding another table like this: ``` create table key_values ( main_table_id references main_table , key_value varchar2(50) , primary key (main_table_id, key_value) ); create index key_values_idx on key_values (key_value); ``` Split the key values up and store them in this table like this: ``` main_table_id key_value 123 key1=abc 123 key2=def 123 key3=ghi ``` (This could be done in an AFTER INSERT trigger on main\_table for example) Then your delete could be: ``` delete main_table where id in (select main_table_id from key_values where key_value = 'key2=def'); ```
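The splitting step (done in the suggested AFTER INSERT trigger above) amounts to a simple transformation; sketched here in Python for illustration, with a hypothetical row id:

```python
def split_key_values(main_table_id, packed):
    # 'key1=abc,key2=def,keyn=ghi' -> one (id, key_value) row per pair,
    # matching the key_values child-table layout suggested above
    return [(main_table_id, kv.strip()) for kv in packed.split(",") if kv.strip()]

rows = split_key_values(123, "key1=abc,key2=def,keyn=ghi")
# rows == [(123, 'key1=abc'), (123, 'key2=def'), (123, 'keyn=ghi')]
```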
Checking for the presence of text in a text column efficiently
[ "sql", "database", "oracle" ]
I have a little frame where I ask for a username & password. This frame is opened by clicking a button in a main window. I also have two buttons: OK and Cancel. When I click the "Cancel" button, I need to close this frame without exiting the app. How can I do that?
You can use either Frame.hide() or Frame.dispose(). I would also recommend looking into JDialog or JOptionPane. Correction: hide() is deprecated; setVisible(false) should be used instead.
Maybe a cleaner way is just change the setDefaultCloseOperation from EXIT\_ON\_CLOSE to DISPOSE\_ON\_CLOSE : ``` setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); ```
java swing close window without exiting app
[ "java", "swing", "button", "exit" ]
I need to create a folder that I use only once, but need to have it exist until the next run. It seems like I should be using the tempfile module in the standard library, but I'm not sure how to get the behavior that I want. Currently, I'm doing the following to create the directory: ``` randName = "temp" + str(random.randint(1000, 9999)) os.makedirs(randName) ``` And when I want to delete the directory, I just look for a directory with "temp" in it. This seems like a dirty hack, but I'm not sure of a better way at the moment. Incidentally, the reason that I need the folder around is that I start a process that uses the folder with the following: ``` subprocess.Popen([command], shell=True).pid ``` and then quit my script to let the other process finish the work.
Creating the folder with a 4-digit random number is insecure, and you also need to worry about collisions with other instances of your program. A much better way is to create the folder using [`tempfile.mkdtemp`](http://docs.python.org/library/tempfile.html), which does exactly what you want (i.e. the folder is not deleted when your script exits). You would then pass the folder name to the second Popen'ed script as an argument, and it would be responsible for deleting it.
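A minimal sketch of that approach (the child command here is just a placeholder for the real worker script):

```python
import subprocess
import sys
import tempfile

# mkdtemp creates a uniquely named directory, readable/writable only by
# the creating user, and does NOT delete it when this script exits.
workdir = tempfile.mkdtemp(prefix="myjob-")

# Pass the directory name to the second process as an argument; that
# process then owns the cleanup (shutil.rmtree) when its work is done.
subprocess.run([sys.executable, "-c", "import sys; print(sys.argv[1])", workdir])
```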
What you've suggested is dangerous. You may have race conditions if anyone else is trying to create those directories -- including other instances of your application. Also, deleting anything containing "temp" may result in deleting more than you intended. As others have mentioned, [tempfile.mkdtemp](http://docs.python.org/library/tempfile.html#tempfile.mkdtemp) is probably the safest way to go. Here is an example of what you've described, including launching a subprocess to use the new directory. ``` import tempfile import shutil import subprocess d = tempfile.mkdtemp(prefix='tmp') try: subprocess.check_call(['/bin/echo', 'Directory:', d]) finally: shutil.rmtree(d) ```
In Python, how do I make a temp file that persists until the next run?
[ "python", "temporary-files" ]
Is there a good, *strict* date parser for Java? I have access to Joda-Time but I have yet to see this option. I found the "Is there a good date parser for Java" question, and while this is related it is sort of the opposite. Whereas that question asked for a lenient, fuzzy-logic parser tolerant of human error, I would like a strict parser. For example, with both Joda-Time (as far as I can tell) and SimpleDateFormat, if you have the format "MM/dd/yyyy" and parse 40/40/4353, this becomes a valid date. I want a parser that knows that 40 is an invalid month and day. Surely some implementation of this exists in Java?
I don't see that [Joda](http://joda-time.sourceforge.net/) recognizes that as a valid date. Example: ``` strict = org.joda.time.format.DateTimeFormat.forPattern("MM/dd/yyyy") try { strict.parseDateTime('40/40/4353') assert false } catch (org.joda.time.IllegalFieldValueException e) { assert 'Cannot parse "40/40/4353": Value 40 for monthOfYear must be in the range [1,12]' == e.message } ``` As best as I can tell, neither does [DateFormat](http://java.sun.com/javase/6/docs/api/java/text/DateFormat.html) with [setLenient(false)](http://java.sun.com/javase/6/docs/api/java/text/DateFormat.html#setLenient(boolean)). Example: ``` try { df = new java.text.SimpleDateFormat('MM/dd/yyyy') df.setLenient(false) df.parse('40/40/4353') assert false } catch (java.text.ParseException e) { assert e.message =~ 'Unparseable' } ``` Hope this helps!
A good way to do strict validation with DateFormat is re-formatting the parsed date and checking equality with the original string: ``` String myDateString = "87/88/9999"; Date myDate = dateFormat.parse(myDateString); if (!myDateString.equals(dateFormat.format(myDate))) { throw new ParseException("Not a strict match: " + myDateString, 0); } ``` Works like a charm.
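For comparison, the same round-trip check is easy to sketch in Python, where `strptime` already rejects out-of-range field values and the re-format catches lenient matches (the format string here is an assumption for the example):

```python
from datetime import datetime

def parse_strict(s, fmt="%m/%d/%Y"):
    d = datetime.strptime(s, fmt)  # rejects out-of-range fields, e.g. month 40
    if d.strftime(fmt) != s:       # rejects e.g. "1/2/2009" (not zero-padded)
        raise ValueError("not a strict match: %r" % s)
    return d

parse_strict("02/14/2009")         # parses fine
```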
Is there a good *strict* date parser for Java?
[ "java", "datetime", "parsing", "strict" ]
Given this linq query against an EF data context: ``` var customers = data.Customers.Where(c => c.EmailDomain.StartsWith(term)) ``` You’d expect it to produce SQL like this, right? ``` SELECT {cols} FROM Customers WHERE EmailDomain LIKE @term+’%’ ``` Well, actually, it does something like this: ``` SELECT {cols} FROM Customer WHERE ((CAST(CHARINDEX(@term, EmailDomain) AS int)) = 1) ``` Do you know why? Also, replacing the Where selector to: ``` c => c.EmailDomain.Substring(0, term.Length) == term ``` it runs 10 times faster but still produces some pretty yucky SQL. **NOTE:** Linq to SQL correctly translates StartsWith into Like {term}%, and nHibernate has a dedicated LikeExpression.
The reason is that CharIndex is a lot faster and cleaner for SQL to perform than LIKE. The reason is, that you ***can*** have some crazy "LIKE" clauses. Example: ``` SELECT * FROM Customer WHERE EmailDomain LIKE 'abc%de%sss%' ``` But, the "CHARINDEX" function (which is basically "IndexOf") ONLY handles finding the first instance of a set of characters... no wildcards are allowed. So, there's your answer :) EDIT: I just wanted to add that I encourage people to use CHARINDEX in their SQL queries for things that they didn't need "LIKE" for. It is important to note though that in SQL Server 2000... a "Text" field can use the *LIKE* method, but not CHARINDEX.
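The two translations are at least equivalent for prefix matching: `CHARINDEX(term, col) = 1` is just another way of saying the column starts with the term. Sketched in Python for illustration:

```python
def charindex_match(term, value):
    # CHARINDEX returns the 1-based position of term (0 if absent),
    # so "= 1" means the match sits at the very start of the string
    return value.find(term) + 1 == 1

def like_prefix_match(term, value):
    # LIKE 'term%' semantics (ignoring LIKE wildcards inside term)
    return value.startswith(term)

for value in ["stackoverflow.com", "example.com", ""]:
    assert charindex_match("stack", value) == like_prefix_match("stack", value)
```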
I don't know about MS SQL Server, but on SQL Server Compact, LIKE 'foo%' is thousands of times faster than CHARINDEX if you have an INDEX on the search column. And now I'm sitting and pulling my hair out over how to force it to use LIKE. <http://social.msdn.microsoft.com/Forums/en-US/adodotnetentityframework/thread/1b835b94-7259-4284-a2a6-3d5ebda76e4b>
SQL produced by Entity Framework for string matching
[ "sql", "entity-framework" ]
When using `System.Windows.Forms.ShowDialog(IWin32Window)`, should I be able to pass in an `IWin32Window` representing any window handle and have it be modal with respect to that window? As part of an Internet Explorer 7 extension I'm trying to open a window modal with respect to an Internet Explorer tab. It's not the currently selected tab, but I can get the hwnd of the tab OK. However, when I pass this to ShowDialog my Form is shown, but it's not modal with respect to anything: I can still do things in Internet Explorer, including in the tab that's supposed to be the owner. My form is shown floating above the Internet Explorer windows and it stays on top, so it's not like it's just opened as a normal form, but it's not correctly modal. Using [Spy++](http://msdn.microsoft.com/en-us/library/aa264396%28v=vs.60%29.aspx), I can find my form and it's owner handle is correctly set. Does this mean that something has gone wrong, or I'm doing something wrong? How do I make my form correctly modal? FYI, I'm using this wrapper class to create an `IWin32Window` from a `hwnd` (thanks [Ryan!](http://ryanfarley.com/blog/archive/2004/03/23/465.aspx)): ``` /// <summary> /// Wrapper class so that we can return an IWin32Window given a hwnd /// </summary> public class WindowWrapper : System.Windows.Forms.IWin32Window { public WindowWrapper(IntPtr handle) { _hwnd = handle; } public IntPtr Handle { get { return _hwnd; } } private IntPtr _hwnd; } ``` UPDATE: Using Internet Explorer 7 & .NET 2.0 UPDATE: Playing around some more with Spy++ and the handles it exposes, I find that if I use a different `hwnd` then I can make my window modal to the tab: I was using the tab's `hwnd` as suggested by the [IWebBrowser2.HWND doc](http://msdn.microsoft.com/en-us/library/aa752126(VS.85).aspx), which in Spy++ appears as class `TabWindowClass`. It has a child of class `Shell DocObject View`, which has a child of Internet\_Explorer\_Server. 
If I use the `hwnd` of the `Internet Explorer_Server` then it works correctly; for example, when I click with the mouse on other tabs, Internet Explorer reacts normally. When I click with the mouse on the tab of interest, it plays the Windows d'oh sound and doesn't do anything. I don't yet know how to programmatically get the `Internet Explorer_Server` `hwnd`, but it should be possible. Also, for what it's worth, while playing with other window handles I was generally able to make my form modal to other applications and dialogs. So I guess the answer to my question is 'many but not all handles'... possibly it depends on the application? UPDATE: Another side-note: The original reason I wanted to make my form modal to the tab instead of the whole window is that when opening a `MessageBox` from my form, passing the form as owner, the `MessageBox` would not always open on top of my form. If a new Internet Explorer tab had just been opened but wasn't active then the `MessageBox` would be hidden and that tab would start flashing. However, since Internet Explorer was disabled with my form opened modal it wasn't possible to switch to that tab, so Internet Explorer would be frozen. I thought that opening my form modal to the tab would solve this, but I've found another solution is to avoid using `MessageBox`: if I use a second form and `ShowDialog(this)` from my first form then the second form correctly opens to the front. So it seems that `Form.ShowDialog()` works better than `MessageBox.Show()` in some cases. More discussion in *[Problems with modal dialogs and messageboxes](http://social.msdn.microsoft.com/Forums/en-US/ieextensiondevelopment/thread/df0fe7f2-0153-47d9-b18f-266d57ab7909)*.
Your code is correct. The problem you are likely running into, though, is that IE has a threading model related to its tabs. I don't know the exact details, but the short version is that each tab can be, and likely is, running on a different thread than the other tabs. The modal-ness of a dialog is specific to the thread where the dialog is running. UI on other threads will be unaffected by a modal dialog on another thread. It's entirely possible you are able to access tabs which are running on a different thread for this reason.
ShowDialog() does two important things. It starts pumping a message loop so it acts modally to the calling code. And it disables any other windows in the application with a EnableWindow(false) API call. The latter is what is not happening in your case. Not entirely surprising, considering that the window that needs to be disabled is not a WF window. You may need to call EnableWindow() yourself. Be sure to re-enable it in before the dialog closes or Windows will go hunting for another app's window to give the focus to.
Should Form.ShowDialog(IWin32Window) work with any window handle?
[ "c#", ".net", "winforms", "modal-dialog", "bho" ]
I have the following statement ``` DateTime now = DateTime.Now; string test = string.Format("{0}{1}{2}{3}", now.Day, now.Month, now.Year, now.Hour); ``` This gives me: ``` test = "242200915" ``` But I'd like to have something like: ``` test = "2402200915" ``` So the question is, how can I enforce the string formatter to output each int with the width of 2 while padding with zeroes?
You can use `string.Format("{0:000} {0:D3}", 7)` to get `007 007` And here is a useful overview on MSDN: [Custom Numeric Format Strings](http://msdn.microsoft.com/en-us/library/0c899ak8.aspx)
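The same minimum-width idea exists in Python's format-spec mini-language; sketched here for comparison, using a fixed timestamp so the output is predictable:

```python
from datetime import datetime

now = datetime(2009, 2, 24, 15, 0)
# pad day/month/hour to width 2, year to width 4, with leading zeros
stamp = "{:02d}{:02d}{:04d}{:02d}".format(now.day, now.month, now.year, now.hour)
# stamp == "2402200915", the same as now.strftime("%d%m%Y%H")
```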
``` DateTime now = DateTime.Now; string test = now.ToString("ddMMyyyyHH"); ```
How to enforce minimum width of formatted string in C#
[ "c#", ".net", "string", "formatting" ]
Using SQL Server 2005 I have a field that contains a datetime value. What I am trying to do is create 2 queries: 1. Compare to see if stored datetime is of the same month+year as current date 2. Compare to see if stored datetime is of the same year as current date There is probably a simple solution but I keep hitting brick walls using various samples I can find, any thoughts? Thanks in advance.
Compare the parts of the date:

```
-- same year as the current date
WHERE YEAR( columnName ) = YEAR( getDate() )

-- same month and year as the current date
WHERE YEAR( columnName ) = YEAR( getDate() )
  AND MONTH( columnName ) = MONTH( getDate() )
```
While the other answers will work, they all suffer from the same problem: they apply a transformation to the column and therefore will never utilize an index on that column. To search the date without a transformation, you need a couple built-in functions and some math. Example below: ``` --create a table to hold our example values create table #DateSearch ( TheDate datetime not null ) insert into #DateSearch (TheDate) --today select getdate() union all --a month in advance select dateadd(month, 1, getdate()) union all --a year in advance select dateadd(year, 1, getdate()) go --declare variables to make things a little easier to see declare @StartDate datetime, @EndDate datetime --search for "same month+year as current date" select @StartDate = dateadd(month, datediff(month, 0, getdate()), 0), @EndDate = dateadd(month, datediff(month, 0, getdate()) + 1, 0) select @StartDate [StartDate], @EndDate [EndDate], TheDate from #DateSearch where TheDate >= @StartDate and TheDate < @EndDate --search for "same year as current date" select @StartDate = dateadd(year, datediff(year, 0, getdate()), 0), @EndDate = dateadd(year, datediff(year, 0, getdate()) + 1, 0) select @StartDate [StartDate], @EndDate [EndDate], TheDate from #DateSearch where TheDate >= @StartDate and TheDate < @EndDate ``` What the statement does to avoid the transformations, is find all values greater-than or equal-to the beginning of the current time period (month or year) AND all values less-than the beginning of the next (invalid) time period. This solves our index problem and also mitigates any issues related to 3ms rounding in the DATETIME type.
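The same half-open-range idea can be used from application code: compute [start-of-period, start-of-next-period) once, then compare the raw column against those bounds. A Python sketch of the boundary computation:

```python
from datetime import date, datetime

def month_range(today):
    # [first instant of this month, first instant of next month)
    start = datetime(today.year, today.month, 1)
    if today.month == 12:
        end = datetime(today.year + 1, 1, 1)
    else:
        end = datetime(today.year, today.month + 1, 1)
    return start, end

start, end = month_range(date(2009, 2, 14))
# a stored value d matches the month iff start <= d < end
```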
Compare current date with stored datetime using month an year only
[ "sql", "sql-server" ]
I am trying to numerically integrate an arbitrary (known when I code) function in my program using numerical integration methods. I am using Python 2.5.2 along with SciPy's numerical integration package. In order to get a feel for it, I decided to try integrating sin(x) and observed this behavior: ``` >>> from math import pi >>> from scipy.integrate import quad >>> from math import sin >>> def integrand(x): ... return sin(x) ... >>> quad(integrand, -pi, pi) (0.0, 4.3998892617846002e-14) >>> quad(integrand, 0, 2*pi) (2.2579473462709165e-16, 4.3998892617846002e-14) ``` I find this behavior odd because: 1. In ordinary integration, integrating over the full cycle gives zero. 2. In numerical integration, this (1) isn't necessarily the case, because you may just be approximating the total area under the curve. In any case, either assuming 1 is true or assuming 2 is true, I find the behavior to be inconsistent. Either both integrations (-pi to pi and 0 to 2\*pi) should return 0.0 (first value in the tuple is the result and the second is the error) or return 2.257... Can someone please explain why this is happening? Is this really an inconsistency? Can someone also tell me if I am missing something really basic about numerical methods? In any case, in my final application, I plan to use the above method to find the arc length of a function. If someone has experience in this area, please advise me on the best policy for doing this in Python. **Edit** **Note** I already have the first differential values at all points in the range stored in an array. Current error is tolerable. **End note** I have read Wikipedia on this. As Dimitry has pointed out, I will be integrating sqrt(1+diff(f(x), x)^2) to get the arc length. What I wanted to ask was: is there a better approximation / best practice / faster way to do this? If more context is needed, I'll post it separately / post context here, as you wish.
The `quad` function comes from an old Fortran library. It judges, from the flatness and slope of the function it is integrating, how to adapt the step size it uses in order to maximize efficiency. This means that you may get slightly different answers from one region to the next even if they're analytically the same. Without a doubt both integrations should return zero. Returning something on the order of 1e-16 is pretty close to zero! The slight differences are due to the way `quad` is rolling over `sin` and changing its step sizes. For your planned task, `quad` will be all you need. EDIT: For what you're doing I think `quad` is fine. It is fast and pretty accurate. My final statement is: use it with confidence unless you find something that has really gone quite awry. If it doesn't return a nonsensical answer then it is probably working just fine. No worries.
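For the arc-length application, `quad` over `sqrt(1 + f'(x)**2)` is the direct route. As an illustration (not part of the original answer), here is a dependency-free sketch of the same computation using composite Simpson's rule; the node count is an arbitrary choice:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n is forced even
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

def arc_length(df, a, b):
    # arc length of y = f(x): integral of sqrt(1 + f'(x)^2) dx
    return simpson(lambda x: math.sqrt(1.0 + df(x) ** 2), a, b)

# arc length of sin(x) over one full period; f'(x) = cos(x)
length = arc_length(math.cos, 0.0, 2.0 * math.pi)
# length is about 7.6404
```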
I think it is probably machine precision since both answers are effectively zero. If you want an answer from the horse's mouth I would post this question on the [scipy discussion board](http://www.nabble.com/Scipy-User-f33045.html)
Can someone explain why scipy.integrate.quad gives different results for equally long ranges while integrating sin(X)?
[ "python", "scipy", "numerical-methods", "numerical-integration" ]
Say a company is building a brand-new application that follows DDD principles. The old codebase has a lot of products (or some other entity of the company) that they want to convert to the new codebase. How should this work be done? Normally it is faster and easier to import using, for example, SSIS, transferring from one database to another. But the main problem here is that a lot of the business rules (implemented in managed code in the domain layer) are skipped... Is it good enough if the developer says: "I have it under control. The rules are duplicated as SQL scripts..."? Should we import the managed code libraries into SQL Server (at least this is possible with .NET and MS SQL Server)? Or should we create an import script in managed code so all the layers in the domain are traversed when the entity is saved in the database? (That can take many hours...) What are your thoughts?
I would suggest that you write a little import application in .NET where you can apply the business rules. Since this task (at least I suppose so) will only run once (or twice ;)), speed is not that important; to speed it up, design it to be multi-threaded if possible. And no, it is not good enough. If anyone says "I have it under control", this is a buzz-sentence and all my alarm bells go off. Some detail will always be forgotten, and this is mostly a little catastrophe ;)
The two options are not mutually exclusive. SQL Server can consume web services. You can create your import service as a web service and then call it from SQL Server. You can, of course, even do this with SSIS.
How to convert old data to a new system? using sql or managed code?
[ "java", ".net", "import", "domain-driven-design" ]
Is there an easy way to create an IM bot on multiple IM networks (AIM, GTalk, YIM, etc.) that can accept and interpret specific commands sent to it to perform a server-related task? Let's say, for instance, I have a website for managing an RSS feed. I want to send a command to an IM bot to add another feed to my collection. The IM bot would associate my screen name with my account from prior setup on the website.
I have done some internal bots for my company using the [XMPP](http://en.wikipedia.org/wiki/Xmpp) (Jabber) protocol, with the [agsXMPP SDK](http://www.ag-software.de/index.php?page=agsxmpp-sdk) and the [Jabber.NET](http://code.google.com/p/jabber-net/) client libraries. I was looking for APIs to work with YIM, AIM and Windows Live Messenger, but I found only COM-exposed APIs, nothing for .NET... But an idea comes to mind: with the XMPP protocol you can configure a local server with IM gateways that allow users to access networks using other protocols through your server. I use eJabberd; you can install a variety of [transport gateways](http://www.ejabberd.im/tutorials-transports) to connect with other IM protocols ([AIM](http://www.ejabberd.im/pyaimt), [MSN](http://delx.net.au/projects/pymsnt/docs/user.html), [ICQ](http://www.ejabberd.im/pyicqt), [GTalk](http://www.ejabberd.im/google-talk)). For GTalk you can [connect directly](http://www.ag-software.de/index.php?page=google-talk) using the libraries I mention... A sample ICQ gateway: ![](https://upload.wikimedia.org/wikipedia/commons/thumb/3/36/Wie_ein_Jabber-Transport_funktioniert.svg/800px-Wie_ein_Jabber-Transport_funktioniert.svg.png)
The short answer to this question is yes, this can be done relatively easily. It involves sending and receiving IMs and interpreting requests from the network you wish to communicate on, and there are libraries available for each of the major IM protocols to make this easier. For Messenger you can try the [DotMsn](http://www.xihsolutions.net/dotmsn) library. I have used it in the past, but at that time it was still quite new and I have not used it since, so I can't vouch for its quality. Jabber uses the XMPP protocol, which is an open standard, so there are bound to be plenty of client libraries available. If I recall correctly, Google Talk uses this protocol or a modified version thereof.
How to create an IM bot with C#
[ "c#", "bots", "instant-messaging" ]
I am creating a new website and this question came up: statistically, across all browsers (phone browsers as well), which gets disabled more often: images, JavaScript, or CSS? The reason I'm looking for an answer is to design the image-replacement technique that covers the most recurring scenarios.
Resource: [W3Schools Browser stats](http://www.w3schools.com/browsers/browsers_stats.asp) (JavaScript stats at the bottom).
Well, that's hard to tell. Users disable images to save bandwidth and disable JavaScript for security. I think most users will disable JavaScript while keeping images.
Statistics or resources: which gets disabled first (more often than others)? Images or Javascript?
[ "javascript", "html", "css", "image" ]
You can do this in Python, but is it possible in PHP? ``` >>> def a(): print 1 ... >>> def a(): print 2 ... >>> a() 2 ``` e.g.: ``` <? function var_dump() {} ?> Fatal error: Cannot redeclare var_dump() in /tmp/- on line 1 ```
This is a *bit* late, but I just want to point out that since PHP 5.3 it is actually possible to override internal functions without using a PHP extension. The trick is that you can redefine an internal PHP function inside a namespace. It's based on the way PHP does name resolution for functions: inside a namespace (say A\B), calls to unqualified functions are resolved at run-time. Here is how a call to the function foo() is resolved: 1. It looks for a function from the current namespace: A\B\foo(). 2. If that doesn't exist, it tries to find and call the global function foo(). So if you define a function A\B\var_dump(), unqualified calls to var_dump() inside the namespace A\B will use your version instead of the built-in.
No, it is not possible to do this as you might expect. From the [`manual`](http://php.net/manual/en/functions.user-defined.php): > PHP does not support function overloading, nor is it possible to undefine or redefine previously-declared functions. HOWEVER, You can use [`runkit_function_redefine`](http://php.net/manual/en/function.runkit-function-redefine.php) and its cousins, but it is definitely not very elegant... You can also use [`create_function`](http://php.net/manual/en/function.create-function.php) to do something like this: ``` <?php $func = create_function('$a,$b','return $a + $b;'); echo $func(3,5); // 8 $func = create_function('$a,$b','return $a * $b;'); echo $func(3,5); // 15 ?> ``` As with runkit, it is not very elegant, but it gives the behavior you are looking for.
Is it possible to replace (monkeypatch) PHP functions?
[ "php", "function", "monkeypatching" ]
Given a vector of strings, what is the best way to write them out to an HDF5 dataset? At the moment I'm doing something like the following:

```
const unsigned int MaxStrLength = 512;

struct TempContainer {
    char string[MaxStrLength];
};

void writeVector (hid_t group, std::vector<std::string> const & v)
{
    //
    // Firstly copy the contents of the vector into a temporary container
    std::vector<TempContainer> tc;
    for (std::vector<std::string>::const_iterator i = v.begin ()
            , end = v.end ()
        ; i != end
        ; ++i)
    {
        TempContainer t;
        strncpy (t.string, i->c_str (), MaxStrLength);
        tc.push_back (t);
    }

    //
    // Write the temporary container to a dataset
    hsize_t dims[] = { tc.size () } ;
    hid_t dataspace = H5Screate_simple(sizeof(dims)/sizeof(*dims)
        , dims
        , NULL);
    hid_t strtype = H5Tcopy (H5T_C_S1);
    H5Tset_size (strtype, MaxStrLength);
    hid_t datatype = H5Tcreate (H5T_COMPOUND, sizeof (TempContainer));
    H5Tinsert (datatype
        , "string"
        , HOFFSET(TempContainer, string)
        , strtype);
    hid_t dataset = H5Dcreate1 (group
        , "files"
        , datatype
        , dataspace
        , H5P_DEFAULT);
    H5Dwrite (dataset, datatype, H5S_ALL, H5S_ALL, H5P_DEFAULT, &tc[0]);
    H5Dclose (dataset);
    H5Sclose (dataspace);
    H5Tclose (strtype);
    H5Tclose (datatype);
}
```

At a minimum, I would really like to change the above so that: 1. It uses variable length strings 2. I don't need to have a temporary container. I have no restrictions over how I store the data, so for example it doesn't have to be a *COMPOUND* datatype if there is a better way to do this. **EDIT:** Just to narrow the problem down: I'm relatively familiar with playing with the data on the C++ side; it's the HDF5 side where I need most of the help. Thanks for your help.
[Many thanks to [dirkgently](https://stackoverflow.com/users/66692/dirkgently) for his help in answering this.]

To write a variable length string in HDF5, use the following:

```
// Create the datatype as follows
hid_t datatype = H5Tcopy (H5T_C_S1);
H5Tset_size (datatype, H5T_VARIABLE);

//
// Pass the string to be written to H5Dwrite
// using the address of the pointer!
const char * s = v.c_str ();
H5Dwrite (dataset
        , datatype
        , H5S_ALL
        , H5S_ALL
        , H5P_DEFAULT
        , &s );
```

One solution for writing a container is to write each element individually. This can be achieved using [hyperslabs](http://www.hdfgroup.org/HDF5/Tutor/select.html). For example:

```
class WriteString
{
public:
    WriteString (hid_t dataset, hid_t datatype
               , hid_t dataspace, hid_t memspace)
        : m_dataset (dataset), m_datatype (datatype)
        , m_dataspace (dataspace), m_memspace (memspace)
        , m_pos () {}

private:
    hid_t m_dataset;
    hid_t m_datatype;
    hid_t m_dataspace;
    hid_t m_memspace;
    int m_pos;

    // ...

public:
    void operator ()(std::vector<std::string>::value_type const & v)
    {
        // Select the file position, 1 record at position 'pos'
        hsize_t count[] = { 1 } ;
        hsize_t offset[] = { m_pos++ } ;
        H5Sselect_hyperslab( m_dataspace
                           , H5S_SELECT_SET
                           , offset
                           , NULL
                           , count
                           , NULL );

        const char * s = v.c_str ();
        H5Dwrite (m_dataset
                , m_datatype
                , m_memspace
                , m_dataspace
                , H5P_DEFAULT
                , &s );
    }
};

// ...

void writeVector (hid_t group, std::vector<std::string> const & v)
{
    hsize_t dims[] = { v.size () } ;
    hid_t dataspace = H5Screate_simple(sizeof(dims)/sizeof(*dims)
                                      , dims, NULL);

    dims[0] = 1;
    hid_t memspace = H5Screate_simple(sizeof(dims)/sizeof(*dims)
                                     , dims, NULL);

    hid_t datatype = H5Tcopy (H5T_C_S1);
    H5Tset_size (datatype, H5T_VARIABLE);

    hid_t dataset = H5Dcreate1 (group, "files", datatype
                              , dataspace, H5P_DEFAULT);

    //
    // Select the "memory" to be written out - just 1 record.
    hsize_t offset[] = { 0 } ;
    hsize_t count[] = { 1 } ;
    H5Sselect_hyperslab( memspace, H5S_SELECT_SET, offset
                       , NULL, count, NULL );

    std::for_each (v.begin ()
                 , v.end ()
                 , WriteString (dataset, datatype, dataspace, memspace));

    H5Dclose (dataset);
    H5Sclose (dataspace);
    H5Sclose (memspace);
    H5Tclose (datatype);
}
```
Here is some working code for writing a vector of variable length strings using the HDF5 C++ API. I incorporate some of the suggestions in the other posts:

1. use H5T\_C\_S1 and H5T\_VARIABLE
2. use `string::c_str()` to obtain pointers to the strings
3. place the pointers into a `vector` of `char*` and pass to the HDF5 API

It is **not necessary** to create expensive copies of the string (e.g. with `strdup()`). `c_str()` returns a pointer to the null terminated data of the underlying string. This is precisely what the function is intended for. Of course, strings with embedded nulls will not work with this...

`std::vector` is guaranteed to have contiguous underlying storage, so using `vector` and `vector::data()` is the same as using raw arrays but is of course much neater and safer than the clunky, old-fashioned C way of doing things.

```
#include <stdexcept>
#include <string>
#include <vector>

#include "H5Cpp.h"

void write_hdf5(H5::H5File file, const std::string& data_set_name,
                const std::vector<std::string>& strings)
{
    H5::Exception::dontPrint();

    try
    {
        // HDF5 only understands vector of char* :-(
        std::vector<const char*> arr_c_str;
        for (unsigned ii = 0; ii < strings.size(); ++ii)
            arr_c_str.push_back(strings[ii].c_str());

        //
        //  one dimension
        //
        hsize_t str_dimsf[1] {arr_c_str.size()};
        H5::DataSpace dataspace(1, str_dimsf);

        // Variable length string
        H5::StrType datatype(H5::PredType::C_S1, H5T_VARIABLE);
        H5::DataSet str_dataset = file.createDataSet(data_set_name, datatype, dataspace);

        str_dataset.write(arr_c_str.data(), datatype);
    }
    catch (H5::Exception& err)
    {
        throw std::runtime_error(std::string("HDF5 Error in ")
                                 + err.getFuncName()
                                 + ": "
                                 + err.getDetailMsg());
    }
}
```
How to best write out a std::vector&lt;std::string&gt; container to an HDF5 dataset?
[ "", "c++", "stl", "hdf5", "" ]
Say you have the following java bean: ``` public class MyBean { private List<String> names = new ArrayList<String>(); public void addName(String name) { names.add(name); fireNamesPropertyChange(name); } } ``` How would you normally implement a property change event for a collection? Do you try and use the index property which seems to be more for arrays than collections?
Take a look at [Glazed Lists](http://publicobject.com/glazedlists/) library, which has support for observable collections. If I were to do it myself, I would likely create custom Listener interface with elementsAdded, elementsRemoved methods, or similar :-) (also depending on my needs)
*(**NOTE**: I updated this post after realizing a few mistakes of my own so this isn't the original but a more refined one instead)*

For this purpose I'd do two new interfaces, `ListListener` and `Listenable`, and then I would create a new class like `ListenableArrayList` which would wrap every `List` method with a call to one (or more) relevant methods defined in `ListListener`. In code it'd be something like this:

```
public class ListenableArrayList<T> implements Listenable<T> {
    private ArrayList<T> internalList;
    private ListListener<T> listener;
    /* .. */
    public void add(T item) {
        listener.beforeAdd(item);
        internalList.add(item);
        listener.afterAdd(item);
    }
    /* .. */
    public void setListener(ListListener<T> listener) {
        this.listener = listener;
    }
}

public interface ListListener<T> {
    /* .. */
    void beforeAdd(T item);
    void afterAdd(T item);
    /* .. */
}

public interface Listenable<T> {
    /* .. */
    void setListener(ListListener<T> listener);
    /* .. */
}
```

The reason I'd do it this way would be to allow for creating truly ad-hoc listeners on the fly instead of tying the ListenableArrayList to some specific implementation. For example, with this the following would be possible:

```
Listenable<String> list = new ListenableArrayList<String>();
list.setListener(new ListListener<String>() {
    @Override
    public void beforeAdd(String item) {
        System.out.println("About to add element "+item+"...");
    }

    @Override
    public void afterAdd(String item) {
        System.out.println("...element "+item+" has been added.");
    }
});
```

A bit cluttered, maybe, but on the other hand this would allow for easy extension to Collections, Sets and whatnot rather easily.
Monitor changes to a collection
[ "", "java", "observer-pattern", "javabeans", "" ]
Alright, so I'm working on programming my own installer in C#, and what I'd like to do is something along the lines of put the files in the .exe, so I can do File.Copy(file, filedir); Or, if this isn't possible, is there another way of doing what I am attempting to do?
I wouldn't code my own installer, but if you truly want to embed files into your assembly you could use strongly typed resources. In the properties dialog of your project open up the "Resources" tab and then add your file. You'll then be able to get the file using:

```
ProjectNamespace.Properties.Resources.MyFile
```

Then you'll be able to write the embedded resource to disk using:

```
System.IO.File.WriteAllBytes(@"C:\MyFile.bin", ProjectNamespace.Properties.Resources.MyFile);
```
Honestly, I would suggest you NOT create your own installer. There are many, many issues with creating installers. Even the big installer makers don't make their own actual installers anymore, they just create custom MSI packages.

Use Microsoft Installer (MSI). It's the right thing to do. Make your own custom front-end for it, but don't recreate the already very complex wheel that exists.

UPDATE: If you're just doing this for learning, then I would shy away from thinking of it as "an installer". You might be tempted to take your "research" and use it someday, and frankly, that's how we end up with so many problems when new versions of Windows come out. People create their own wheels with assumptions that aren't valid.

What you're really trying to do is called "packaging", and you really have to become intimately familiar with the Executable PE format, because you're talking about changing the structure of the PE image on disk. You can simulate it, to a point, with putting files in resources, but that's not really what installers, or self-extractors, do.

Here's a link to a [Self-Extractor](http://www.codeproject.com/KB/winsdk/selfextract.aspx) tutorial, but it's not in C#. I don't know enough about the .NET PE requirements to know if you can do this with a managed code executable or not.

UPDATE2: This is probably more of what you're looking for. It embeds files in the resource, but as I said, it's not really the way professional installers or self-extractors do it. I think there are various limitations on what you can embed as resources. But here's the link to a [Self-Extractor Demo](http://www.codeproject.com/KB/cs/SelfExtractor.aspx) written in C#.
How To Store Files In An EXE
[ "", "c#", "file", "executable", "" ]
We recently had a problem with a Java server application where the application was throwing Errors which were not caught because Error is a separate subclass of Throwable and we were only catching Exceptions. We solved the immediate problem by catching Throwables rather than Exceptions, but this got me thinking as to why you would ever want to catch Exceptions, rather than Throwables, because you would then miss the Errors. So, **why would you want to catch Exceptions, when you can catch Throwables**?
It all depends a bit on what you're going to do with an Error once you've caught it. In general, catching Errors probably shouldn't be seen as part of your "normal" exception flow. If you do catch one, you shouldn't be thinking about "carrying on as though nothing has happened", because the JVM (and various libraries) will use Errors as a way of signalling that "something really serious has happened and we need to shut down as soon as possible". In general, it's best to listen to them when they're telling you the end is nigh. Another issue is that the recoverability or not from an Error may depend on the particular virtual machine, which is something you may or not have control over. That said, there are a few corner cases where it is safe and/or desirable to catch Errors, or at least certain subclasses: * There are cases where you really do want to stop the normal course of flow: e.g. if you're in a Servlet, you might not want the Servlet runner's default exception handler to announce to the world that you've had an OutOfMemoryError, whether or not you can recover from it. * Occasionally, an Error will be thrown in cases where the JVM can cleanly recover from the cause of the error. For example, if an OutOfMemoryError occurs while attempting to allocate an array, in Hotspot at least, it seems you can safely recover from this. (There are of course other cases where an OutOfMemoryError could be thrown where it isn't safe to try and plough on.) So the bottom line is: if you do catch Throwable/Error rather than Exception, it should be a **well-defined case where you know you're "doing something special"**. Edit: Possibly this is obvious, but I forgot to say that in practice, **the JVM might not actually invoke your catch clause** on an Error. I've definitely seen Hotspot glibly gloss over attempts to catch certain OutOfMemoryErrors and NoClassDefFoundError.
From the Java API documentation: > The class `Exception` and its subclasses are a form of `Throwable` that indicates conditions that a reasonable application might want to catch. > > An `Error` is a subclass of `Throwable` that indicates serious problems that a reasonable application should not try to catch. Errors usually are low-level (eg., raised by the virtual machine) and should not be caught by the application since reasonable continuation might not be possible.
Why catch Exceptions in Java, when you can catch Throwables?
[ "", "java", "" ]
I think I understand the concept of a delegate in C# as a pointer to a method, but I cant find any good examples of where it would be a good idea to use them. What are some examples that are either significantly more elegant/better with delegates or cant be solved using other methods?
What exactly do you mean by delegates? Here are two ways in which they can be used:

```
void Foo(Func<int, string> f)
{
    //do stuff
    string s = f(42);
    // do more stuff
}
```

and

```
void Bar()
{
    Func<int, string> f = delegate(int i) { return i.ToString(); };
    //do stuff
    string s = f(42);
    // do more stuff
}
```

The point in the second one is that you can declare new functions on the fly, as delegates. This can be largely replaced by lambda expressions, and is useful any time you have a small piece of logic you want to 1) pass to another function, or 2) just execute repeatedly. LINQ is a good example. Every LINQ function takes a lambda expression as its argument, specifying the behavior. For example, if you have a `List<int> l` then `l.Select(x => x.ToString())` will call ToString() on every element in the list. And the lambda expression I wrote is implemented as a delegate.

The first case shows how Select might be implemented. You take a delegate as your argument, and then you call it when needed. This allows the caller to customize the behavior of the function. Taking Select() as an example again, the function itself guarantees that the delegate you pass to it will be called on every element in the list, and the output of each will be returned. What that delegate actually *does* is up to you. That makes it an amazingly flexible and general function.

Of course, they're also used for subscribing to events. In a nutshell, delegates allow you to reference functions, using them as arguments in function calls, assigning them to variables and whatever else you like to do.
The .NET 1.0 delegates: ``` this.myButton.Click += new EventHandler(this.MyMethod); ``` The .NET 2.0 delegates: ``` this.myOtherButton.Click += delegate { var res = PerformSomeAction(); if(res > 5) PerformSomeOtherAction(); }; ``` They seem pretty useful. How about: ``` new Thread(new ThreadStart(delegate { // do some worker-thread processing })).Start(); ```
C# - Can someone tell me why and where I should use delegates?
[ "", "c#", "delegates", "" ]
Is there any reason to expose an internal collection as a ReadOnlyCollection rather than an IEnumerable if the calling code only iterates over the collection?

```
class Bar
{
    private ICollection<Foo> foos;

    // Which one is to be preferred?
    public IEnumerable<Foo> Foos { ... }

    public ReadOnlyCollection<Foo> Foos { ... }
}

// Calling code:
foreach (var f in bar.Foos)
    DoSomething(f);
```

As I see it, IEnumerable is a subset of the interface of ReadOnlyCollection and it does not allow the user to modify the collection. So if the IEnumerable interface is enough, then that is the one to use. Is that a proper way of reasoning about it, or am I missing something?
**More modern solution** Unless you need the internal collection to be mutable, you could use the [`System.Collections.Immutable`](https://www.nuget.org/packages/System.Collections.Immutable/) package, change your field type to be an immutable collection, and then expose that directly - assuming `Foo` itself is immutable, of course. **Updated answer to address the question more directly** > Is there any reason to expose an internal collection as a ReadOnlyCollection rather than an IEnumerable if the calling code only iterates over the collection? It depends on how much you trust the calling code. If you're in complete control over everything that will ever call this member and you *guarantee* that no code will ever use: ``` ICollection<Foo> evil = (ICollection<Foo>) bar.Foos; evil.Add(...); ``` then sure, no harm will be done if you just return the collection directly. I generally try to be a bit more paranoid than that though. Likewise, as you say: if you only *need* `IEnumerable<T>`, then why tie yourself to anything stronger? **Original answer** If you're using .NET 3.5, you can avoid making a copy *and* avoid the simple cast by using a simple call to Skip: ``` public IEnumerable<Foo> Foos { get { return foos.Skip(0); } } ``` (There are plenty of other options for wrapping trivially - the nice thing about `Skip` over Select/Where is that there's no delegate to execute pointlessly for each iteration.) If you're not using .NET 3.5 you can write a very simple wrapper to do the same thing: ``` public static IEnumerable<T> Wrapper<T>(IEnumerable<T> source) { foreach (T element in source) { yield return element; } } ```
If you only need to iterate through the collection: ``` foreach (Foo f in bar.Foos) ``` then returning **IEnumerable** is enough. If you need random access to items: ``` Foo f = bar.Foos[17]; ``` then wrap it in **ReadOnlyCollection**.
ReadOnlyCollection or IEnumerable for exposing member collections?
[ "", "c#", ".net", "collections", "ienumerable", "readonly-collection", "" ]
Is it possible to specify which X display the JVM is to launch its windows on through an argument to the JVM? I am thinking something like this

```
java -DISPLAY=THE_DISPLAY_I_WANT:0.1 -jar my.jar
```

I looked at the man pages and I can't find anything. Or do I need to wrap my call to the JVM in a shell script like this

```
#!/bin/sh
export DISPLAY=THE_DISPLAY_I_WANT:0.1
java -jar my.jar
```

**I don't want to make a script just to specify an argument that can be passed directly to the JVM.**

PS: No, I don't want to change my environment DISPLAY variable - I want to launch the JVM on whichever DISPLAY I like :)

**Update**

Responding to the question "Why does it matter if I use the second solution": using the second solution, if I would like to start a JVM session on several different displays, I would have to set the DISPLAY for each session. What I am looking for is like what you can do with X programs; try `xterm -display my_display:0.0`

So my question is: can I do this with the JVM? I can't see it documented anywhere. If it can't be done then the correct answer should be **"No you can't"** - I already know the alternative :)

Thanks
Given your constraints on the answer, the answer you're looking for is "**It can't be done**" I'm still curious why the second solution doesn't work for you. I realize it's not what you *want* to do, I just want to know *why*
If you are using sh, bash or the like you can just do ``` DISPLAY=THE_DISPLAY_I_WANT:0.1 java -jar my.jar ```
Is it possible to pass the X Display name to the JVM
[ "", "java", "jvm", "x11", "" ]
I found [this question](https://stackoverflow.com/questions/340762/which-languages-support-tail-recursion-optimization) about which languages optimize tail recursion. Why C# doesn't optimize tail recursion, whenever possible? For a concrete case, why isn't this method optimized into a loop ([Visual Studio 2008](http://en.wikipedia.org/wiki/Microsoft_Visual_Studio#Visual_Studio_2008) 32-bit, if that matters)?: ``` private static void Foo(int i) { if (i == 1000000) return; if (i % 100 == 0) Console.WriteLine(i); Foo(i+1); } ```
JIT compilation is a tricky balancing act between not spending too much time doing the compilation phase (thus slowing down short lived applications considerably) vs. not doing enough analysis to keep the application competitive in the long term with a standard ahead-of-time compilation. Interestingly the [NGen](https://en.wikipedia.org/wiki/Native_Image_Generator) compilation steps are not targeted to being more aggressive in their optimizations. I suspect this is because they simply don't want to have bugs where the behaviour is dependent on whether the JIT or NGen was responsible for the machine code. The [CLR](https://en.wikipedia.org/wiki/Common_Language_Runtime) itself does support tail call optimization, but the language-specific compiler must know how to generate the relevant [opcode](https://learn.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/56c08k0k(v=vs.95)?redirectedfrom=MSDN) and the JIT must be willing to respect it. [F#'s](https://en.wikipedia.org/wiki/F_Sharp_(programming_language)) fsc will generate the relevant opcodes (though for a simple recursion it may just convert the whole thing into a `while` loop directly). C#'s csc does not. See [this blog post](https://learn.microsoft.com/en-us/archive/blogs/davbr/) for some details (quite possibly now out of date given recent JIT changes). Note that the CLR changes for 4.0 [the x86, x64 and ia64 will respect it](https://learn.microsoft.com/en-us/archive/blogs/clrcodegeneration/tail-call-improvements-in-net-framework-4).
This [Microsoft Connect feedback submission](https://web.archive.org/web/20140724090729/http://connect.microsoft.com/VisualStudio/feedback/details/166013/c-compiler-should-optimize-tail-calls) should answer your question. It contains an official response from Microsoft, so I'd recommend going by that. > Thanks for the suggestion. We've > considered emiting tail call > instructions at a number of points in > the development of the C# compiler. > However, there are some subtle issues > which have pushed us to avoid this so > far: 1) There is actually a > non-trivial overhead cost to using the > .tail instruction in the CLR (it is > not just a jump instruction as tail > calls ultimately become in many less > strict environments such as functional > language runtime environments where > tail calls are heavily optimized). 2) > There are few real C# methods where it > would be legal to emit tail calls > (other languages encourage coding > patterns which have more tail > recursion, and many that rely heavily > on tail call optimization actually do > global re-writing (such as > Continuation Passing transformations) > to increase the amount of tail > recursion). 3) Partly because of 2), > cases where C# methods stack overflow > due to deep recursion that should have > succeeded are fairly rare. > > All that said, we continue to look at > this, and we may in a future release > of the compiler find some patterns > where it makes sense to emit .tail > instructions. By the way, as it has been pointed out, it is worth noting that tail recursion *is* optimised on x64.
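To see concretely what the optimization would buy, the tail-recursive `Foo` can be rewritten mechanically as a loop - that rewrite is exactly the transformation a tail-call-optimizing compiler performs. Here is a sketch in Python rather than C# (the reduced limit and the list-collecting variant are illustrative assumptions of this sketch; note that, like the C# compiler, CPython does not eliminate tail calls either):

```python
def foo_recursive(i, limit=500, out=None):
    # Transcription of the C# Foo; the self-call is in tail position.
    # The limit is shrunk from 1000000 so the demo stays inside the
    # default recursion limit (an assumption of this sketch).
    if out is None:
        out = []
    if i == limit:
        return out
    if i % 100 == 0:
        out.append(i)
    return foo_recursive(i + 1, limit, out)

def foo_loop(i, limit=500):
    # What tail-call elimination effectively produces: the recursive
    # call becomes "update the parameter and jump back to the top".
    out = []
    while i != limit:
        if i % 100 == 0:
            out.append(i)
        i += 1
    return out

print(foo_loop(0))  # [0, 100, 200, 300, 400]
assert foo_recursive(0) == foo_loop(0)
```

The loop version uses constant stack space no matter how large the limit is, which is why deep tail recursion that would otherwise overflow becomes safe once the transformation is applied.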
Why doesn't .NET/C# optimize for tail-call recursion?
[ "", "c#", ".net", "optimization", "tail-recursion", "" ]
I am using the POI API to read a template and to create an XLS file through Java. I want to get the chart from the template. Is there any way to do so?
If you build the chart into the template it will automatically update when you insert data into the spreadsheet. If you are wanting to create a chart from scratch I believe that this functionality is not yet available in POI. from the POI site: > You can not currently create charts. > You can however create a chart in > Excel, modify the chart data values > using HSSF and write a new spreadsheet > out. This is possible because POI > attempts to keep existing records > intact as far as possible. [POI Limitations](http://poi.apache.org/spreadsheet/limitations.html)
You can use the JFreeChart API to create charts in Excel: <http://www.programming-free.com/2012/12/create-charts-in-excel-using-java.html>
how to read xls chart in java?
[ "", "java", "excel", "charts", "" ]
So I found [this](https://stackoverflow.com/questions/326919/disabling-button-with-javascript-ff-vs-ie/327152#327152) recommendation, but I can't quite seem to figure out how. This is the code I originally started with: ``` function greySubmits(e) { var value = e.srcElement.defaultValue; // This doesn't work, but it needs to $(e).insert('<input type="hidden" name="commit" value="' + value + '" />'); // This causes IE to not submit at all $$("input[type='submit']").each(function(v) {v.disabled = true;}) } // This works fine Event.observe(window, 'load', function() { $$("input[type='submit']").each(function(e) { Event.observe(e, 'click', greySubmits); }); }); ``` Anyway, I am pretty close, but I can't seem to get any further. Thanks for any help at all! **Update**: Sorry, I guess I wasn't entirely clear. I'd like to disable all of the submit buttons when someone clicks a submit button. But I **do** need to send along the value of the submit button so the server knows which button I clicked, hence the insert call. (Note: insert does [**not**](http://www.prototypejs.org/api/element/insert) create a child of the element you call it on.) And then after disabling the submit buttons I need to call the containing form of the submit buttons submit call, as IE will not submit after you disable the button. Does that make sense?
I finally got it to work. Ryan helped so I'll upvote him :-) Here's the code: ``` function replaceSubmit(e) { var el = e.element(); Element.insert(el, { 'before': '<input type="hidden" name="' + el.name + '" value="' + el.value +'" />'}); } function greySubmits(e) { // Don't disable the submit if the submit was stopped with a return(false) if (e.returnValue) { $$("input[type='submit']").each(function(v) {v.disabled = true;}) } } function fixButtons() { $$("input[type='submit']").each(function(v) { if (Element.hasClassName(v, 'disabled')) { v.disabled = true; } else { v.disabled = false; } }); } Event.observe(window, 'load', function() { fixButtons(); $$("input[type='submit']").each(function(e) { Event.observe(e, 'click', replaceSubmit); }); $$("form").each(function(e) { Event.observe(e, 'submit', greySubmits); }); }); ``` The fixButtons is so that when people click the back button the page will fix all the buttons. And if you want to disable a button and have it not re-enable on a back you just give it a class of disabled.
You need to do exactly what the answer says: "Do not disable the button in its "onclick", but save it, and do it in form's onsubmit."

So in greySubmits() keep the line that sets the hidden value, but remove the line that disables all the submit buttons. Then add another event handler in your onload - to the form, not the submit buttons - that does the disabling.

```
function reallyGreySubmits(e) {
  // This causes IE to not submit at all
  $$("input[type='submit']").each(function(v) {v.disabled = true;})
}

Event.observe(window, 'load', function() {
    $$("input[type='submit']").each(function(e) {
        Event.observe(e, 'click', greySubmits);
    });
    $$("form").each(function(e) {
        Event.observe(e, 'submit', reallyGreySubmits);
    });
});
```

Another option, which I've used, is to not disable the submits but to swap visibility between two elements. On click, mark the submits hidden, and then make visible a div or some other element that displays as "disabled" in their place.
How can I unobtrusively disable submit buttons with Javascript and Prototype?
[ "", "javascript", "prototypejs", "" ]
I have made use of the following JPA implementations:

1. Hibernate
2. Toplink
3. OpenJPA

Each of them has its own strengths and weaknesses.

I found Hibernate the most advanced of the three, except that it mixes some of its own enhancements with JPA, which makes it difficult to switch out to other providers. Most importantly, its query parser is much more lenient when interpreting JPA. It is also slightly difficult to get the correct libraries to support Hibernate; I found it a mission trying to get the right versions of all the dependencies.

Toplink was OK, but one is left with the feeling that it is slightly crippled, as it seems Oracle wants you to use (buy?) their more advanced library. Trying to download it was also a mission because you need to install it by running a jar file. I found that it implemented only the base JPA spec. The reason I used it was that Hibernate uses a lot of libraries that are commonly used in other open source projects, so one would often get classloader problems, especially when using JBoss.

OpenJPA - This has by far the best documentation and is easy to download and use, but it seems very buggy. Maybe it's just my code, but more advanced usage such as OneToMany relationships with CascadeType.ALL set just didn't seem to work. Admittedly it may be my code that was wrong, and I haven't had time to test a clean case, but many incidents like this leave me scared to use it. I really hope it gets better. Its error messages are often useless in helping solve the problem.

What other libraries have people used, and which ones do they prefer and why?
I had the same conclusions about these implementations. 1. OpenJPA was/seemed buggy 2. Hibernate had tons of libraries and seemed to have trouble with *not* lazy loading everything. 3. Toplink ended up as my choice. It was not as flexible as Hibernate would have been but it works and I don't have to install *commons-logging*. The one I would try next is JPOX, which has recently been renamed to [datanucleus](http://www.datanucleus.org/).
Personally I don't feel OpenJPA is mature enough yet. There are other open source libraries that are more mature and I would rather use those. These are the ones I would consider in order: 1. Hibernate. Hibernate has been around for a long time and has really paved the way for ORM in Java. The only issue I have with Hibernate is the licensing. It is LGPL licensed which may cause some commercial companies to squirm (for reasons I won't go into here). Anyway, if LGPL is an issue for you, it is probably good to steer clear. 2. EclipseLink. Some background on eclipselink. Toplink Essentials was Oracle's free version of their JPA implementation. EclipseLink was taken from Toplink, Oracle's full blown JPA implementation. EclipseLink is going to be the JPA 2.0 provider for Glassfish v3.0 so it looks like everything is moving away from Toplink Essentials and to EclipseLink. Although the EclipseLink version is only 1.0.2, the product has been around for a long time under other names. A project I'm working on is on Toplink Essentials right now but we plan to switch to Eclipselink shortly. Hibernate was out for the licensing issues I mentioned earlier.
JPA Implementations - Which one is the best to use?
[ "", "java", "hibernate", "jpa", "toplink", "openjpa", "" ]
Consider:

```
public static T GetValueOrDefault<T>(this IDataReader reader, string columnName)
{
    T returnValue = default(T);
    // ...
}
```

I want to implement something like [this](http://weblogs.asp.net/skillet/archive/2008/06/18/idatareader-extension-methods.aspx) to check DBNull. I can follow the code fine, but I don't quite understand what static T is in VB.NET. Can someone please explain it a bit?
The equivalent of `static` in VB is `Shared`. Shared methods are usually put in helper classes, because they do not require an instance of the class to run.

The type T indicates that this is a generic method (generics were introduced back in VB 8 and C# 2). A generic method effectively takes a type as an argument or returns a generic type.

Extension methods are new in VB 9/C# 3. These allow you to extend an existing type by adding methods. All you need is a Shared method which is available in the same namespace as your code, and in VB the code has to be in a module, not a normal class. A module is a class that can't be instantiated and (therefore) only has shared methods - its members are implicitly Shared, so the keyword is omitted inside it. It is declared with the Module keyword in place of the class keyword.

Here is your code in VB. (Also, for those who know what's going on "under the covers": strangely, setting a value type to `Nothing` does compile in VB and is the supported way to get the default value of a value type.)

```
Imports System.Runtime.CompilerServices

Module DataReaderExtensions ' any module name will do

    <Extension()> _
    Public Function GetValueOrDefault(Of T)(ByVal reader As IDataReader, ByVal columnName As String) As T
        Dim returnValue As T = Nothing
        Return returnValue
    End Function

End Module
```
He's creating an extension method. In C#, that's done by creating a static method (Shared in Visual Basic). The mechanism for creating extension methods in Visual Basic appears to be much different than how you do it in C#. You'll probably want to read the MSDN entry about extension methods, here: <http://msdn.microsoft.com/en-us/library/bb384936.aspx>
VB.NET: What is static T (C#) in VB.NET?
[ "", "c#", ".net", "c#-to-vb.net", "" ]
Following on from my previous question, *[Python time to age](https://stackoverflow.com/questions/508727/python-time-to-age)*, I have now come across a problem regarding the timezone, and it turns out that it's not always going to be "+0200". So when strptime tries to parse it as such, it throws up an exception. I thought about just chopping off the +0200 with [:-6] or whatever, but is there a real way to do this with strptime? I am using Python 2.5.2 if it matters. ``` >>> from datetime import datetime >>> fmt = "%a, %d %b %Y %H:%M:%S +0200" >>> datetime.strptime("Tue, 22 Jul 2008 08:17:41 +0200", fmt) datetime.datetime(2008, 7, 22, 8, 17, 41) >>> datetime.strptime("Tue, 22 Jul 2008 08:17:41 +0300", fmt) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.5/_strptime.py", line 330, in strptime (data_string, format)) ValueError: time data did not match format: data=Tue, 22 Jul 2008 08:17:41 +0300 fmt=%a, %d %b %Y %H:%M:%S +0200 ```
> New in version 2.6. > > For a naive object, the %z and %Z > format codes are replaced by empty > strings. It looks like `%z` is implemented only in >= 2.6, so I think you have to parse the offset manually. I can't see any solution other than stripping off the time zone data: ``` from datetime import timedelta, datetime date_str = "Tue, 22 Jul 2008 08:17:41 +0300" try: offset = int(date_str[-5:]) # note: only handles whole-hour offsets except ValueError: print "Error parsing timezone offset" raise delta = timedelta(hours = offset / 100) fmt = "%a, %d %b %Y %H:%M:%S" time = datetime.strptime(date_str[:-6], fmt) time -= delta # normalize to UTC ```
> is there a real way to do this with strptime? No, but since your format appears to be an RFC822-family date, you can read it much more easily using the [email](http://docs.python.org/library/email.util.html#email.utils.parsedate_tz) library instead: ``` >>> import email.utils >>> email.utils.parsedate_tz('Tue, 22 Jul 2008 08:17:41 +0200') (2008, 7, 22, 8, 17, 41, 0, 1, 0, 7200) ``` (7200 = timezone offset from UTC in seconds)
Python time to age, part 2: timezones
[ "", "python", "datetime", "timezone", "" ]
Got some C# forms/controls that can be called up either from a C# control on a Winform in a Winforms MDI app OR the same C# control used by a PowerBuilder MDI app via COM. I've been using the WinAPI call SetParent to attach forms to the MDI. 1. It works (or seemed to) in both environments. 2. It lets the child window have its own WindowState (Normal, Maximised) instead of taking on that of the child windows already open (which was a real pain). Say the control is called T. Code on control T calls up form D. Control T is on form X. Control T is also on form Y. In .Net all is well, and form D stays within the MDI. In PB: Control T is on PB control PX. Control T is also on PB control PY. For PX all is well. For PY, however, there is a problem - form D does not seem to become an MDI child - it can go outside the app and has a taskbar icon. I stress that this is using the *same* objects as the ones that do work. The SetParent is literally the same line of code. Further research has revealed that SetParent doesn't *really* work for proper MDI childing - but that's OK(ish) because we don't need to merge menus etc. Interestingly, I have found that though SetParent seems to 'work', you don't get the handle back if you try GetParent... ``` Form form = new MyForm(); WindowsMessageHelper.SetParent(form.Handle, MDIParentHandle); //passed down int parentHandle = WindowsMessageHelper.GetParent(form.Handle); ``` parentHandle will always be 0.... Is there any way of making form D behave under all circumstances? My own research has not been promising. I don't really want to go back and rewrite my forms as controls and have PowerBuilder manage them - mainly because there can be multiple instances of each form and PowerBuilder would have to handle that (instead of the controller class/base class I've got doing it in the .Net app). Let me stress that there is NO problem within .Net; the problem only shows up in the PowerBuilder app.
In the end, we found that the difference was that PB was doing the equivalent of setting .MDIParent for the control PX (the one where calling up form D worked) but not for PY. Once that was sorted then we were then getting the correct MDIParent handle and all is now well.
Your child needs to be a System.Windows.Forms.Form, and set its MdiParent property to the MDI Patent window (not its Parent). The container needs to also follow a few rules. A read through the [MDI instructions on MSDN](http://msdn.microsoft.com/en-gb/library/7aw8zc76.aspx) may help further. --- Option two: you may not be able to do this with a single control. Instad consider composition of the core implementation in two wrappers. The first wrapper acts as a WinForms MDI child, the second as a COM wrapper for use under whatever GUI framework PowerBuilder works.
Any new hope? Making a window an MDI child
[ "", "c#", "winforms", "mdi", "" ]
I'm having issues with a very strange error in some code I wrote. The basic idea behind the code can be trivialised in the following example: ``` template <class f, class g> class Ptr; template <class a, class b, class c = Ptr<a,b> > class Base { public: Base(){}; }; template <class d, class e> class Derived : public Base <d,e> { public: Derived(){}; }; template <class f, class g> class Ptr { public: Ptr(){}; Ptr(Base<f,g,Ptr<f,g> >* a){}; }; typedef Derived<double,double> DDerived; int main() { Base<int,int> b = Base<int,int>(); Derived<double,double> d = Derived<double,double>(); DDerived dd = DDerived(); Ptr<double,double> p(&dd); return 1; } ``` The basic idea is that pointers are replaced by the Ptr class (this will eventually be used in an MPI setting, so standard pointers are effectively useless). The pointers are designed to 'point' at the base class, and so can point at any inherited class (as demonstrated in the example). Can anyone think of any reason this might *not* work in a non-trivial case (but a case where the object architecture remains identical)? The error is occurring in code like the following: ``` void function() { vector<Derived<double,double> > nVector(1); // cut down for simplicity nVector[0].SetId(1); // To ensure the node is instantiated correctly Ptr<double,double> temp(&nVector[1]); }; ``` This code produces the (slightly extended version of the) error when compiled with `mpicxx`: no matching function for call to `Ptr<double, double>::Ptr(Derived<double, double>*)` candidates are . . . (some removed for simplicity's sake) `Ptr<f, g>::Ptr(Base<f, g, Ptr<f, g> >*) [with f = double, g = double]` Cheers, Ed EDITED (Detailing the error a little better, added info on the compiler)
Unfortunately, I was being slightly stupid and had forgotten to put my Ptr class in the same namespace as the Base and Derived classes. That, I guess, would be why it wasn't working ! =]
Ok what about this: ``` class BasePtr { public: virtual void* obj() = 0; }; template <class T> class Ptr : public BasePtr { public: Ptr() : ptr(0) {}; Ptr(T* a) : ptr(a) {}; virtual void* obj() { return ptr; } // must return void* here: T* is not a covariant override of void* protected: T* ptr; }; ``` Use BasePtr in the Base class but pass it the correct template object when Base needs to use it. ``` template <class a, class b > class Base { public: Base(){}; void set_ptr( BasePtr* ); }; DDerived dd = DDerived(); Ptr<DDerived> p(&dd); dd.set_ptr( &p ); ``` I still do not quite understand your problem but I hope that helps.
Inheritance issues with template classes
[ "", "c++", "inheritance", "templates", "" ]
I have been spying the MSDN and can't see a property/method for TextBox that allows you to get the default text value that was set on a field; I want to be able to compare the current txtMyTextBox.Text to the default value (like this pseudo-code shows): ``` var myValue = (String.Compare(txtMyTextBox.Text, txtMyTextBox.DefaultText) == 0) ? "" : txtMyTextBox.Text; ``` Is this something which exists in the ASP.NET control? Or am I asking too much? :) Thanks for any help (as always)! Pete
There is no built-in way of retrieving the default value of a textbox during postback. One option would be to use ViewState to store the value during the initial page load and retrieve it from there during the postback to make the comparison.
By DefaultText do you mean the initial text before editing? Perhaps declare this in a constant / field / etc somewhere, and set it programatically rather than in the markup - i.e. in the first load, `txtMyTextBox.Text = defaultText;` - then later you can just compare again to `defaultText` to track changes.
ASP.NET TextBox Control - Get the default text value in code behind?
[ "", "c#", ".net", "asp.net", "web-controls", "" ]
I have an existing enum with numerous items in it. I also have existing code which does certain things with this enum. I would now like a way to view only a subset of the enum members. What I'm looking for is a way to divide my enum into groups. I need to preserve the (int) value of each member, and I need to preserve the ability to view all enum members if needed. The only thing I can think of is to just create a new enum for each sub-enum that contains only the items I want, using the same names and values. This works but violates the whole no-repetition principle. I don't expect anyone to have a better alternative, but I thought I'd ask just in case someone had a fancy trick to show me. Thanks, as always.
In the end, I had to rewrite much of the code but the following "trick" was derived: I trashed C# enums and use static members on a regular class. This class was made into a singleton and is inited on application start. My static members' constructors are allowed to reference another static member as a "parent". Next, my init method uses reflection to go through each static member and indexes them based on several properties. These indexes are stored in hashtables which are also members of the singleton. I thus get: a singleton object which: * has static members which can be easily accessed during design time. * can be used during run-time to lookup certain static members (based on "group" and other properties). My init method does a fair amount of validation. If invalid (such as duplicate) static members are built, you get a run-time error on application startup. Obviously a pretty big hack but I'm quite happy with it.
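The design above is described in C# terms, but the underlying pattern - plain static members plus lookup indexes built by introspection at startup, with validation - is language-neutral. A rough sketch of the idea in Python (all names are hypothetical, not the poster's actual code):

```python
class Country(object):
    """A rich 'enum member': a plain object carrying a value plus metadata."""
    def __init__(self, code, name, group, parent=None):
        self.code = code
        self.name = name
        self.group = group
        self.parent = parent  # a "parent" reference to another static member

class Countries(object):
    # Static members, easy to reference directly at design time: Countries.DE, etc.
    EU = Country(0, "European Union", "union")
    DE = Country(49, "Germany", "eu", parent=EU)
    FR = Country(33, "France", "eu", parent=EU)
    US = Country(1, "United States", "na")

    _by_group = {}

    @classmethod
    def init(cls):
        # The application-start pass: walk the static members via introspection,
        # validate them (duplicates fail fast), and build the group index.
        seen = set()
        for member in vars(cls).values():
            if isinstance(member, Country):
                if member.code in seen:
                    raise ValueError("duplicate code: %r" % member.code)
                seen.add(member.code)
                cls._by_group.setdefault(member.group, []).append(member)

    @classmethod
    def in_group(cls, group):
        return list(cls._by_group.get(group, ()))

Countries.init()
```

At run time, `Countries.in_group("eu")` plays the role of the hashtable lookups the answer describes, while `Countries.DE` remains available for direct design-time access.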
I would go with this (which works in VB.NET at least) ``` enum MySuperEnumGroup { Group1Item1, Group1Item2, Group1Item3, Group2Item1, Group2Item2, Group2Item3, Group3Item1, Group3Item2, Group3Item3 } enum MySubEnumGroup { Group2Item1 = MySuperEnumGroup.Group2Item1, Group3Item1 = MySuperEnumGroup.Group3Item1, Group3Item3 = MySuperEnumGroup.Group3Item3 } ``` Then do some kind of CType when you need to.
Enum subset or subgroup in C#
[ "", "c#", "enums", "" ]
I'm developing a website with a custom search function and I want to collect statistics on what the users search for. It is not a full text search of the website content, but rather a search for companies with search modes like: * by company name * by area code * by provided services * ... How to design the database for storing statistics about the searches? What information is most relevant and how should I query for them?
Well, it's dependent on how the different search modes work, but generally I would say that a table with 3 columns would work: ``` SearchType SearchValue Count ``` Whenever someone does a search, say they search for "Company Name: Initech", first query to see if there are any rows in the table with SearchType = "Company Name" (or whatever enum/id value you've given this search type) and SearchValue = "Initech". If there is already a row for this, UPDATE the row by incrementing the Count column. If there is not already a row for this search, insert a new one with a Count of 1. By doing this, you'll have a fair amount of flexibility for querying it later. You can figure out what the most popular searches for each type are: ``` ... WHERE SearchType = 'Some Search Type' ORDER BY Count DESC ``` You can figure out the most popular search types: ``` ... GROUP BY SearchType ORDER BY SUM(Count) DESC ``` Etc.
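The update-or-insert flow just described can be sketched concretely (SQLite in Python is used here purely for illustration; the table and column names are the example ones from the answer, not a prescribed schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE SearchStats (SearchType TEXT, SearchValue TEXT, Count INTEGER)")

def record_search(search_type, search_value):
    # Try the UPDATE first; if no row matched, INSERT with a count of 1.
    cur = conn.execute(
        "UPDATE SearchStats SET Count = Count + 1 "
        "WHERE SearchType = ? AND SearchValue = ?",
        (search_type, search_value))
    if cur.rowcount == 0:
        conn.execute(
            "INSERT INTO SearchStats (SearchType, SearchValue, Count) "
            "VALUES (?, ?, 1)",
            (search_type, search_value))

record_search("Company Name", "Initech")
record_search("Company Name", "Initech")
record_search("Area Code", "555")

# Most popular searches of one type, as in the answer's first query:
top = conn.execute(
    "SELECT SearchValue, Count FROM SearchStats "
    "WHERE SearchType = 'Company Name' ORDER BY Count DESC").fetchall()
```

The same two queries from the answer then run unchanged against this table; on a busy production database you would also want an index on (SearchType, SearchValue).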
This is a pretty general question but here's what I would do: **Option 1** If you want to strictly separate all three search types, then create a table for each. For company name, you could simply store the CompanyID (assuming your website is maintaining a list of companies) and a search count. For area code, store the area code and a search count. If the area code doesn't exist, insert it. Provided services is most dependent on your setup. The most general way would be to store key words and a search count, again inserting if not already there. Optionally, you could store search date information as well. As an example, you'd have a table with Provided Services Keyword and a unique ID. You'd have another table with an FK to that ID and a SearchDate. That way you could make sense of the data over time while minimizing storage. **Option 2** Treat all searches the same. One table with a Keyword column and a count column, incorporating SearchDate if needed.
How to design a database table structure for storing and retrieving search statistics?
[ "", "sql", "search", "statistics", "" ]
I've been chatting with my colleagues the other day and heard that their coding standard explicitly forbids them to use the `var` keyword in C#. They had no idea why it was so and I've always found implicit declaration to be incredibly useful when coding. I've never had any problems finding out what type the variable was (you only hover over the variable in VS and you'll get the type that way). Does anyone know why it would be a bad idea to use the var keyword in C#?
The writers of the [.Net Framework Design Guidelines](https://rads.stackoverflow.com/amzn/click/com/0321545613) (awesome book) that came out in November 2008 recommend considering using `var` when the Type is obvious and unambiguous. On the other hand, if using `var` would result in an ambiguity when reading the code, as Anton Gogolev pointed out, then it's better not to use it. In the book (Annex A), they actually give this example: ``` var names = new List<string>(); // good usage of var string source = GetSource(); var tokens = source.Split(' '); // ok; most developers know String.Split var id = GetId(); // Probably not good; it's not clear what the type of id is ``` It's possible that, to ensure that readability is not subjected to the whims of lowly developers, your organisation has decided that you were not worthy of `var` and banned it. It's a shame though; it's like having a nice tool at your disposal but keeping it in a locked glass cabinet. In most cases, using `var` for simple types actually helps readability, and we must not forget that there is also no performance penalty for using `var`.
``` var q = GetQValue(); ``` is indeed a bad thing. However, ``` var persistenceManager = ServiceLocator.Resolve<IPersistenceManager>(); ``` is perfectly fine to me. The bottom line is: use descriptive identifier names and you'll get along just fine. As a side note: I wonder how they deal with anonymous types when not allowed to use the `var` keyword. Or don't they use them at all?
Why would var be a bad thing?
[ "", "c#", "coding-style", "implicit-typing", "" ]
Well, I have a byte array, and I know it's an XML-serialized object in the byte array. Is there any way to get the encoding from it? I'm not going to deserialize it, but I'm saving it in an XML field on a SQL server... so I need to convert it to a string?
You could look at the first 40-ish bytes1. They *should* contain the document declaration (assuming it *has* a document declaration) which should either contain the encoding *or* you can assume it's UTF-8 or UTF-16, which should be obvious from how you've understood the `<?xml` part. (Just check for both patterns.) Realistically, do you expect you'll ever get anything other than UTF-8 or UTF-16? If not, you could check for the patterns you get at the start of both of those and throw an exception if it doesn't follow either pattern. Alternatively, if you want to make another attempt, you could always try to decode the document as UTF-8, re-encode it and see if you get the same bytes back. It's not ideal, but it might just work. I'm sure there are more rigorous ways of doing this, but they're likely to be finicky :) --- 1 Quite possibly less than this. I figure 20 characters should be enough, which is 40 bytes in UTF-16.
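To make the "check for both patterns" idea concrete, here is a rough language-neutral sketch in Python (a simplification: it only distinguishes UTF-8 from UTF-16 via BOMs and the byte patterns that `<?` produces, and ignores the `encoding=` attribute and rarer encodings):

```python
def sniff_xml_encoding(data):
    """Crude guess at the encoding of an XML document held in a byte array."""
    # Byte-order marks are the easy case.
    if data.startswith(b"\xff\xfe"):
        return "utf-16-le"
    if data.startswith(b"\xfe\xff"):
        return "utf-16-be"
    if data.startswith(b"\xef\xbb\xbf"):
        return "utf-8"
    # No BOM: look at how '<?' comes out in the first few bytes.
    head = data[:40]
    if head.startswith(b"<\x00?\x00"):   # '<?' encoded as UTF-16-LE
        return "utf-16-le"
    if head.startswith(b"\x00<\x00?"):   # '<?' encoded as UTF-16-BE
        return "utf-16-be"
    # Per the XML spec, a document with no BOM and no other information
    # is assumed to be UTF-8.
    return "utf-8"
```

Once you have a guess, decoding the bytes with it (and falling back or raising on failure) gives you the string to store in the SQL XML column.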
A solution similar to [this question](https://stackoverflow.com/questions/637855/how-to-best-detect-encoding-in-xml-file) could solve this by using a Stream over the byte array. Then you won't have to fiddle at the byte level. Like this: ``` Encoding encoding; using (var stream = new MemoryStream(bytes)) { using (var xmlreader = new XmlTextReader(stream)) { xmlreader.MoveToContent(); encoding = xmlreader.Encoding; } } ```
c# Detect xml encoding from Byte Array?
[ "", "c#", "xml", "encoding", "binary-data", "" ]
Are there any Python object-relational mapping libraries that, given a database schema, can generate a set of Python classes? I know that the major libraries such as [SQLObject](http://www.sqlobject.org/), [SQLAlchemy](http://www.sqlalchemy.org/), and [Django](http://www.djangoproject.com/)'s internal SQL ORM library do a very good job of creating a DB schema given a set of classes, but I'm looking for a library that works in reverse. There is a [related question for Perl libraries](https://stackoverflow.com/questions/362929/is-there-a-perl-orm-with-database-reverse-engineering) on Stack Overflow.
[SQLAlchemy extension to create a python code model from an existing database](http://code.google.com/p/sqlautocode/)
Django has an [inspectdb](http://docs.djangoproject.com/en/dev/ref/django-admin/#inspectdb) command that creates models.py files out of your database.
Python libraries to construct classes from a relational database? (ORM in reverse)
[ "", "python", "database", "orm", "" ]
Does WebKit expose an API for working directly with its DOM? I'm looking for a class like HtmlElement that can be used to build/traverse trees of HTML content. I'm trying to host WebKit as a web browser control in a desktop application, and would prefer a direct API rather than going through COM. Thanks!
Using QT/WebKit (at least version 4.6) it's now possible to access the DOM of the loaded document. [QtWebKit Module reference](http://qt-project.org/doc/qt-4.8/qtwebkit.html) [DOM Traversal Example](http://qt-project.org/doc/qt-4.8/webkit-domtraversal.html)
The following URL has an interesting answer for you. [Where is WebKIT API?](http://ubuntuforums.org/showthread.php?t=901305)
WebKit API for DOM
[ "", "c++", "browser", "webkit", "webbrowser-control", "" ]
In our new project we have to provide a search functionality to retrieve data from hundreds of XML files. I have a brief outline of our current plan below; I would like to know your suggestions/improvements on this. These XML files contain personal information, and the search is based on 10 elements in it, for example last name, first name, email etc. Our current plan is to create a master XmlDocument with all the searchable data and a key to the actual file, so that when the user searches the data we first look at the master file and get the results. We will also cache the actual XML files from recent searches so similar searches can be handled quickly later. Our application is a .NET 2.0 web application.
First: how big are the xml files? `XmlDocument` doesn't scale to "huge"... but can handle "large" OK. Second: can you perhaps put the data into a regular database structure (perhaps SQL Server Express Edition), index it, and access via regular TSQL? That will usually out-perform an xpath search. Equally, if it is structured, SQL Server 2005 and above supports the `xml` data-type, which *shreds* data - this allows you to index and query xml data in the database without having the entire DOM in memory (it translates xpath into relational queries).
Index your XML files. Look into <http://incubator.apache.org/lucene.net/> I recently used it at my previous job to cache our SQL database for fast searching and very little overhead. It provides fast searching of content inside XML files (all depending on how you organize your cache). Very easy and straightforward to use. Much easier than trying to loop through a bunch of files.
Best way to search data in xml files?
[ "", "c#", "asp.net", "xml", "search", ".net-2.0", "" ]
I am currently working on a web application that needs to accept video uploaded by users in any format (.avi, .mov, etc.) and convert it to FLV for playing in a Flash-based player. Since the site is OpenCms-based, the best solution would be a ready-made plugin for OpenCms that allowed uploading and playing videos, doing the transcode operation in the background; but just a set of Java classes to do the transcode would be great, and then I could build the uploading form and playback part on my own.
You basically have two choices if you want to host, transcode and stream flv files (and don't want to buy a video transcoding application): you can call out to FFMpeg/MEncoder or you can use an external Web service. You could also sidestep the problem completely by allowing them to embed YouTube videos on your site. If you go the 'local FFMpeg route' I would suggest simply using ProcessBuilder and constructing a command-line to execute FFMpeg. That way you get full control over what gets executed, you avoid JNI, which is an absolute nightmare to work with, and you keep OS-specific code out of your app. You can find FFMPeg with all the bells and whistles for pretty much any platform. There's a good chance it's already on your server. The nice thing about the 'Local FFMPeg' route is that you don't have to pay for any extra hosting, and everything is running locally, although your hosting admin might start complaining if you're using a crazy amount of disk and CPU. There are some other StackOverflow questions that talk about some of the [gotchas using FFMpeg](https://stackoverflow.com/questions/97781/what-is-the-best-tool-to-convert-common-video-formats-to-flv-on-a-linux-cli) to create flvs that you can actually play in the flash player. The Web service route is nice because there is less setup involved. I have not used [Hey!Watch](http://heywatch.com/page/home) but it looks promising. [PandaStream](http://pandastream.com/) is easy to set up and it works well, plus you get all your videos on S3 with no additional effort.
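As a language-neutral sketch of the "call out to FFMpeg" route described above (shown in Python; Java's ProcessBuilder follows the same shape), the essential steps are building an argument list, running the process, and checking the exit code. The exact flags are assumptions - real transcodes need codec and bitrate tuning, and different FFmpeg versions spell some options differently:

```python
import subprocess

def build_flv_command(src, dest):
    # Hypothetical flag set for illustration; adjust for your FFmpeg build.
    return ["ffmpeg", "-y", "-i", src, "-ar", "22050", "-b:v", "500k", dest]

def transcode_to_flv(src, dest):
    # Run the command, capture output, and fail loudly on a non-zero exit.
    proc = subprocess.Popen(build_flv_command(src, dest),
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError("ffmpeg failed: %s" % err)
    return dest
```

In a web application you would queue these jobs rather than run them inline with the upload request, since a transcode can take minutes.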
There's a great open source tool called [FFmpeg](http://ffmpeg.org/) that I use to transcode my videos. I use PHP making shell calls to bring it to life, but I can't imagine that it would be too hard to get it to play nicely with Java. [(Maybe this could be a good starting point for you.)](http://fmj-sf.net/ffmpeg-java/getting_started.php) I feed my installation 30+ gig batches on a weekly basis and it always comes out as quality material. The only tricky part for me has been getting it compiled to handle a wide variety of video formats. On the bright side, this has provided me with the heavy lifting I need.
Programmatically convert a video to FLV
[ "", "java", "video", "flv", "opencms", "transcode", "" ]
Let's say I have a set of Countries in my application. I expect this data to change, but not very often. In other words, I do not look at this set as operational data (I would not provide CRUD operations for Country, for example). That said, I have to store this data somewhere. I see two ways to do that: * Database driven. Create and populate a Country table. Provide some sort of DAO to access it (findById()?). This way client code will have to know the Id of a country (which can also be a name or ISO code). On the application side I will have a class Country. * Application driven. Create an Enum where I can list all the Countries known to my system. It will be stored in the DB as well, but the difference would be that now client code does not have to have a lookup method (findById, findByName, etc.) and hardcode Ids, names or ISO codes. It will reference a particular country directly. I lean towards the second solution for several reasons. How do you do this? Is it correct to call this 'dictionary data'? Addendum: One of the main problems here is that if I have a lookup method like **findByName("Czechoslovakia")** then after 1992 this will return nothing. I do not know how the client code will react to it (after all, it sort of expects to always get a Country back, because, well, it is dictionary data). It gets even worse if I have something like **findById(ID_CZ)**. It will be really hard to find all these dependencies. If I remove Country.Czechoslovakia from my enum, I force myself to take care of any dependency on Czechoslovakia.
This won't help you, but it depends... - What are you going to do with those countries? Will you store them in other tables in the DB? What will happen with existing data if you add new countries? Will other applications access that data? - Are you going to translate the country names into several languages? - Will the business logic of your application depend on the chosen country? - Do you need a Country class? etc... Without more information I would start with an Enum with a few countries and refactor depending on my needs...
In some applications I've worked on there has been a single 'Enum' table in the database that contained all of this type of data. It simply consisted of two columns: EnumName and Value, and would be populated like this: * "Country", "Germany" * "Country", "United Kingdom" * "Country", "United States" * "Fruit", "Apple" * "Fruit", "Banana" * "Fruit", "Orange" This was then read in and cached at the beginning of the application execution. The advantages being that we weren't using dozens of database tables for each distinct enumeration type; and we didn't have to recompile anything if we needed to alter the data. This could easily be extended to include extra columns, e.g. to specify a default sort order or alternative IDs.
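A quick sketch of that single-table approach (SQLite and Python here purely for illustration; the original applications used a regular SQL database and read the table once at startup):

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Enum (EnumName TEXT, Value TEXT)")
conn.executemany("INSERT INTO Enum VALUES (?, ?)", [
    ("Country", "Germany"), ("Country", "United Kingdom"),
    ("Country", "United States"),
    ("Fruit", "Apple"), ("Fruit", "Banana"), ("Fruit", "Orange"),
])

# Read everything once at application start and cache it per enumeration name.
enum_cache = defaultdict(list)
for name, value in conn.execute(
        "SELECT EnumName, Value FROM Enum ORDER BY rowid"):
    enum_cache[name].append(value)
```

After the cache is built, `enum_cache["Country"]` serves all lookups without touching the database again, and adding a new enumeration is just another set of rows.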
Where do you put your dictionary data?
[ "", "java", "dictionary", "enums", "" ]
Is there any way by which I can find a free port on the server by executing code from a client computer? Example for opening a port on my local machine: ``` ServerSocket localmachine = new ServerSocket(0); int port = localmachine.getLocalPort(); ``` This way I can retrieve a free port on my local machine. Now suppose I have my own server: <http://xyz.com> How can I get a free port on the server by running a program from the client? Basically I want to initiate a socket connection between a client and a server, which will then be used for communication purposes.
The port information is only available to the server itself, so you'll need to run a program on the server to find a free port and send it to the client. If you help us understand why you want to do this, we might be able to give a better answer.
A free port implies it is a port you can use. The only ports a client can use on a server are ports which are open on the server. However, I assume you mean you want to find an unused port. This sounds like you are trying to do something which wouldn't need to be done if you did whatever you are trying to do in a standard way. However, to answer your question literally: ``` public static int unusedPort(String hostname) throws IOException { while (true) { int port = 1 + (int) (65535 * Math.random()); // 1..65535; 0 is not a valid port try { Socket s = new Socket(hostname, port); s.close(); } catch (ConnectException e) { return port; } catch (IOException e) { if (e.getMessage().contains("refused")) return port; throw e; } } } ```
How can I detect a free port on the server by code from client side?
[ "", "java", "" ]
I've been tasked with determining if it's possible to detect link clicks with JavaScript. I'm aware of the onclick attribute, which gets me part of the way there. Other than that, I don't know what the best approach is. I've already told my boss that it will likely involve some form of ajax, which probably involves a big library, which is not acceptable. Is there any other way than to use ajax, or any way to use ajax that won't add a lot of time? Edit: He wants to be able to tell how many times users use the links on the homepage of the site. Unfortunately, we can't do a slick server-side solution because nearly all of the pages on the site are just plain HTML. I would love to convert all the pages to PHP or some other alternative and just take note of HTTP_REFERER data, but that's not currently possible. We're already using Google Analytics; it doesn't record the referrer data. Edit again: It turns out that my boss hadn't seen the overlay, and I assumed he clicked through all the tabs. Upon my investigation, initially they were all reporting zero clicks, but I discovered that we had the old version of Google's analytics blurb in place. A quick upgrade to the new hotness and the problem is solved. Thanks to all the responses.
Actually, Google Analytics does track this data. If you go to the *Content Overview* page of your report, there is a link for **Site Overlay**. This will show you your website overlaid with the number of clicks on each link on the page. [site overlay example http://okay-plus.com/dropbox/img/site\_overlay.jpg](http://okay-plus.com/dropbox/img/site_overlay.jpg)
If this is for data collection about website usage, have you considered [Google Analytics](http://www.google.com/analytics/) instead?
How can I monitor users clicking on links via Javascript?
[ "", "javascript", "html", "ajax", "" ]
I'd love some other opinions on what's more efficient in this code. Basically in the following code, there's a setInterval loop and I need 4 requirements to be true before the code runs in the loop. So in v.1 I wrote an if statement checking all 4. Worked fine. Then I switched to just using try/catch, with the code I want to execute sitting in try{}. The logic was that during each loop, an exception would be generated but suppressed for each invalid condition. On the final loop where all conditions are true, the code executes and clears the interval. Either works. I like the try/catch method because there's less conditional code that I need to write and worry about breaking. But I'm worried that try/catch is really inefficient, especially in a setInterval() loop hitting at 100ms. What are some opinions of other bright minds here on SO? **Try/Catch** ``` var intvar = setInterval(function(){ try{ clearInterval(intvar); jQuery('#'+nav[pageid].t1+'>a').replaceWith(jQuery('<span>'+jQuery('#'+nav[pageid].t1+'>a').text()+'</span>')); //set display classes for nav jQuery('#'+nav[pageid].t1).addClass('selected').find('#'+nav[pageid].t2).addClass('subselect'); //topnav jQuery('#'+nav[pageid].t3).addClass('selected').find('#'+nav[pageid].t4).addClass('subselect'); //leftnav }catch(err){} },100); ``` **IF Block** ``` var intvar = setInterval(function(){ if(typeof jQuery == 'function' && typeof nav == 'object' && typeof pageid != 'undefined' && typeof document.getElementById('leftnav') == 'object'){ clearInterval(intvar); jQuery('#'+nav[pageid].t1+'>a').replaceWith(jQuery('<span>'+jQuery('#'+nav[pageid].t1+'>a').text()+'</span>')); //set display classes for nav jQuery('#'+nav[pageid].t1).addClass('selected').find('#'+nav[pageid].t2).addClass('subselect'); //topnav jQuery('#'+nav[pageid].t3).addClass('selected').find('#'+nav[pageid].t4).addClass('subselect'); //leftnav } },100); ```
Use the if statement. I don't know what the overhead is for a TRY/CATCH, but I suspect it's far greater than evaluating a boolean expression. To hit the TRY/CATCH you will have to: execute a statement, generate an error [with that associated overhead], log the error (presumably), make a stack trace (presumably), and move back into the code. Additionally, if you have to debug code near those lines, the real error could get obfuscated with what you are TRY/CATCHing. Furthermore, it's a misuse of TRY/CATCH and can make your code that much harder to read. Suppose you do this for longer or more obfuscated cases? Where might your catch end up? This is referred to as [Exception handling](http://en.wikipedia.org/wiki/Exception_handling). EDIT: As commented below, you only take the runtime performance hit if you actually cause an exception.
Exceptions should be used for exceptional circumstances (i.e. things that you don't expect to happen normally). You should not, in general, use exceptions to catch something that you can test for with an if statement. Also, from what I understand, exceptions are much more expensive than if statements.
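The question is about JavaScript, but the principle is language-agnostic; a small Python sketch of the two styles (names are illustrative), where the guard version pays only for cheap boolean checks and the exception version pays the full raise-and-catch cost every time the condition fails:

```python
def area_code_if(record):
    # Test first: an if statement, no exception machinery involved.
    if isinstance(record, dict) and "area_code" in record:
        return record["area_code"]
    return None

def area_code_try(record):
    # Same result, but a missing key or a None record triggers a real
    # exception, with the stack-unwinding overhead that implies.
    try:
        return record["area_code"]
    except (TypeError, KeyError):
        return None
```

Both functions behave identically; the difference is only where the cost lands when the "not ready yet" case fires on every tick of a polling loop.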
Javascript: What's more efficient, IF block or TRY/CATCH?
[ "", "javascript", "try-catch", "" ]
Is there really a way to do this? Retrieving a raw .php file from the server (other than getting into the server's FTP account)? Is this the reason why there are tools/scripts to encrypt PHP source code? If it's true, then how does one protect against it? (Without using PHP source code encryption.) edit: the server mentioned has PHP running, e.g. apache-php-mysql, your standard hosting server configuration.
If you are talking about someone else's server, then the short answer is no. If third parties could read your PHP source code, that would be quite a security hole, since PHP files tend to contain database passwords, hash keys, proprietary algorithms and other goodies that you don't want falling in the wrong hands. If you are talking about your own server (ie. that you yourself have access to), then there are simple scripts that you can put on the server, that allow you to specify a path to any file on the server and have it returned as plaintext. *However, you NEVER EVER want to place such a script on a production server, for the reasons mentioned above.*
Generally speaking, you can't access remote source code. The PHP module would have to be disabled for this to occur. But as a thought experiment, how might this happen? Leaving aside wholesale exploits which get access to the entire filesystem, imagine if there were a security hole in an application which allowed you to insert a line into an .htaccess file. Given that an .htaccess writable by the httpd process is useful for apps like Wordpress, it's not too outlandish a possibility. If you added this: ``` php_value engine off ``` The source files now become downloadable!
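One way to protect against that particular vector (assuming you control the main Apache configuration — the directory path below is a placeholder) is to refuse per-directory overrides in the first place:

```apache
# In httpd.conf or the vhost config, NOT in .htaccess: with overrides
# disabled, a compromised writable .htaccess can no longer turn the
# PHP engine off and expose source files as plain text.
<Directory "/var/www/example">
    AllowOverride None
</Directory>
```

This trades away the convenience of per-directory .htaccess configuration, so it's only practical where the application doesn't depend on it.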
PHP security: retrieving PHP file from server, un-processed
[ "php", "security" ]
I've created a windows service in c# and I'm trying to install it for debug using the installutil as recommended here: <http://msdn.microsoft.com/en-us/library/sd8zc8ha.aspx> The installutil says Install completed. However, nothing appears in the service control manager. I've tried this on Server 2008 and XP with the same result. Any ideas?
A colleague of mine had a more or less identical problem. Did you add an installer to your project? For the service to be installed you need to add an installer to your Visual Studio Project. The easiest way to add an installer in Visual Studio is to open your service in Design Mode and right click the design area and select Add Installer. This will add a file ProjectInstaller.cs with itself contains a ServiceInstaller object and a ServiceProcessInstaller object. With the installer added you can set the Service Name, Description and other options that will be used when installing the service. If you now try to use InstallUtil your service should be installed and should show up in the Services list.
I had a similar issue (build installer, no errors, no service appears in `services.msc`) but a different solution, as I had configured the installers. In my case, the Service-project's `Application Properties` (Alt-Enter, Application-tab) **Startup object** was `(not set)` as shown below: ![enter image description here](https://i.stack.imgur.com/li2Q6.png) Picking the default **.Program** and rebuilding service and installer worked (service appeared in `services.msc`). Setting this property is one of the steps in [the MSDN service installer walkthrough](http://msdn.microsoft.com/en-us/library/zt39148a.aspx) referenced in [this SO answer.](https://stackoverflow.com/questions/407109/installing-a-windows-service-from-a-visual-studio-installer-project/407127#407127) Make sure to follow all of the steps!
Installing a Windows Service - No error but it isn't in Service Control Manager
[ "c#", "visual-studio", "service", "windows-services", "installation" ]
When I initialize an object using the new object initializers in C# I cannot use one of the properties within the class to perform a further action, and I do not know why. My example code: ``` Person person = new Person { Name = "David", Age = "29" }; ``` Within the Person Class, x will equal 0 (default): ``` public Person() { int x = Age; // x remains 0 } ``` However person.Age does equal 29. I am sure this is normal, but I would like to understand why.
The properties get set for Name and Age after the constructor 'public Person()' has finished running. ``` Person person = new Person { Name = "David", Age = "29" }; ``` is equivalent to ``` Person tempPerson = new Person() tempPerson.Name = "David"; tempPerson.Age = "29"; Person person = tempPerson; ``` So, in the constructor Age won't have become 29 yet. (tempPerson is a unique variable name you don't see in your code that won't clash with other Person instances constructed in this way. tempPerson is necessary to avoid multi-threading issues; its use ensures that the new object doesn't become available to any other thread until after the constructor has been executed and after all of the properties have been initialized.) --- If you want to be able to manipulate the Age property in the constructor, then I suggest you create a constructor that takes the age as an argument: ``` public Person(string name, int age) { Name = name; Age = age; // Now do something with Age int x = Age; // ... } ```
Note, as an important technical detail, that: ``` Person person = new Person { Name = "David", Age = "29" }; ``` is equivalent to: ``` Person <>0 = new Person(); // a local variable which is not visible within C# <>0.Name = "David"; <>0.Age = "29"; Person person = <>0; ``` but is not equivalent to: ``` Person person = new Person(); person.Name = "David"; person.Age = "29"; ```
What am I doing wrong with C# object initializers?
[ "c#", "object-initializers" ]
I'm creating a Windows service, and after installing the service it stops immediately after starting, though it shouldn't stop at all. Previously, I was getting errors that the service was not responding to the start command in a timely fashion, so I took the init code out and put it in a thread, and now I am here: ``` protected override void OnStart(string[] args) { this.EventLog.WriteEntry("ATNotifier Started"); ThreadPool.QueueUserWorkItem(WaitOnEmailsChanged); ThreadPool.QueueUserWorkItem(Init, "IP"); } ``` The WaitOnEmailsChanged thread simply creates a FileSystemWatcher to watch whether the settings file (an XML document) gets changed, and loads in the data from that file if that happens. For the time being, this just waits indefinitely (which is the general case, as that will only be changed a few times a year), as no changes are being made to the XML document. The Init thread does all kinds of things, including creating and starting a System.Timers.Timer object whose Elapsed method is the meat of the service. I can't understand why it would start and then immediately stop. I should also note that the Event Viewer shows no logs from this app. Edit: I tried creating 'proper' threads, with the same results, and I've removed everything except the creating and starting of the timer like so: ``` protected override void OnStart(string[] args) { this.EventLog.WriteEntry("ATNotifier Started"); m_Timer = new System.Timers.Timer(90000.0); // 1.5 mins m_Timer.Elapsed += new ElapsedEventHandler(m_Timer_Elapsed); m_Timer.Start(); } ``` and I'm still getting the same message. It's almost as if the OnStart is never being called.
The problem turned out to be that the EventLog.WriteEntry call was throwing an error because there was no EventSource associated with it. See <http://msdn.microsoft.com/en-us/library/xzwc042w.aspx>
It might be stopped unexpectedly if your main thread terminates on exception.
windows service stops and starts immediately, but it shouldn't
[ "c#", "windows-services" ]
I am building a site that is an interface used to create XML files that are read as input by a server side program. The website allows users to dynamically create blocks of HTML. Each block can be thought of as an object and contains several input fields. When the user submits the form, the data is turned into an XML file. What is the best way to preserve/rebuild the user generated HTML across the post request? I am using JQuery, but not AJAX.
What [strager said](https://stackoverflow.com/questions/530660/best-way-to-preserve-user-generated-html-across-a-post-request/530703#530703). Plus, with Javascript you can get the HTML string for any element: ``` document.getElementById('myimportantthing').innerHTML ``` And send that to your server for inclusion in your CDATA XML element and you should be good. Although the whole idea of capturing HTML to send to an API for some purpose just reeks of a code smell, without knowing what you are up to I can't say much more about that.
You're probably looking for XML's [CDATA](http://www.w3schools.com/XML/xml_cdata.asp). ``` <post> <![CDATA[ <p>Hello, world! <span style="color: green;">Green text</span> <!-- oops, didn't close the p! --> <ul> <li>list <li>doesn't <li>have closing <li>&lt;/li> (note the lack of use of &gt;) </ul> ]]> </post> ``` Just be sure to escape the `]]>` in the user's input, else they may exploit your use of `CDATA` and mangle your XML!
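Since the question mentions jQuery, here is one way that escaping could look in JavaScript. The trick (a common one — the function name is my own) is to split any `]]>` in the input across two CDATA sections so it can never terminate the section early:

```javascript
// Wrap user-generated HTML in a CDATA section. Any "]]>" inside the
// input is broken into "]]" + ">" across two adjacent CDATA sections,
// so the combined content round-trips unchanged when parsed.
function toCdata(html) {
  return '<![CDATA[' + html.split(']]>').join(']]]]><![CDATA[>') + ']]>';
}
```

A parser reading `<![CDATA[a]]]]><![CDATA[>b]]>` sees two sections whose contents concatenate back to `a]]>b`.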
Best way to preserve user-generated HTML across a post request?
[ "javascript", "jquery", "html", "user-input" ]
I usually get this error and (always) don't know how to solve it. This time I got it again; it seems that I'm not understanding a concept or am missing something. Here's the code: ``` // create a new twitteroo core with provided username/password TwitterooCore core = new TwitterooCore(username, password); // request friends timeline from twitter Users users = core.GetTimeline(Timeline.Friends); // error here ``` I'd appreciate some help, and also an explanation of what's happening. Thanks
I finally found the problem. It was because my firewall seemed to be blocking connections to Visual Studio. Now it works with no changes at all :) Thanks for all your support
There are two specific phrases in the error message, **object reference** and **instance of an object**. These concepts are very basic when dealing with OOP languages. First, an **object reference** can be thought of a variable in a function or class. This term may also refer to function parameters that expect a specific reference to an object. Initially the value of a variable is NULL until it is set to a value using the '=' operator. Frequently you will have a variable declaration and the '=' operation in the same statement. The term **instance of an object** refers to an object that has been created using the syntax `new`. When you call `new` to initialize an object, an unused memory location is allocated to store a copy of the object until the program ends, or the object goes out of scope and is freed by the garbage collector. At creation time the object properties are defined by the constructor method called to create the object. Consider this code: ``` Integer my_int; my_int = new Integer(5); ``` In this example, 'my\_int' is the **object reference** to an `Integer` object **instance** being created. If you try to access 'my\_int', before assigning it a reference to an `Integer` **instance**, then you would have the error, "an **object reference** (my\_int) not set to an **instance of an object** (`Integer`)".
Object reference not set to an instance of an object
[ "c#", "object", "instance" ]
I've written C++ event-driven apps using MFC for GUI buttons, and I use HTML events like onmousedown to trigger some Javascript in a webpage. The idea of event-driven programming is intuitive when you are using the interface and events are hard-coded. I know how to use a function pointer in C, and I see how Windows uses them to run an event loop. I can write code to dynamically call a different function. My problem is, moving away from compile-time and into runtime. I'm talking any language here, but if you need a language, choose Javascript, or PHP as I use them most these days. I don't have a grasp at all of how I might use my own event object to get something done. It's just not a concept I get. And since I don't use them, I don't know what their practical use might be. For example, I suppose that if I wanted to make a custom turn-based game, that I might use event objects to represent an effect of some game-piece ability that just needs to "happen." Can someone give me a simple use of custom events? Or situations where it is useful or practical? Does it depend largely on the language and environment and how events are handled? A Javascript answer would be good, but I read many languages. Sorry I am just deficient in this regard and need some practical, advanced (intermediate?) insight. It's like pointers in C. I "get" pointers and how they are your friend. But there was a time where I didn't, and many people I know still don't. It's simple once you get it, I just don't get custom, dynamic events that way.
When we talk about event-oriented programming, we're usually talking about one or more implementations of the [observer design pattern](http://www.google.com/search?hl=en&client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial&hs=E6G&q=observer+design+pattern&btnG=Search). And one goal of the observer design pattern is to achieve [loose coupling](http://www.google.com/search?q=loose+coupling&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a) between objects. That probably doesn't mean much without a good example, and there are certainly lots of good ones out there; I doubt mine will be the best of them, but I'll take a whack at it nonetheless. Imagine two objects. One object wants to know when something happens with the other object, so it can do something itself -- maybe the family dog, Rover, wants to greet Dad at the door when he gets home from work. There are a couple of ways you might program that scenario, right? * Make sure Dad has a reference to Rover, and when he comes home, call Rover's greetDadAtTheDoor() method, or * Give Rover a reference to Dad, listen for Dad's onArriveHome event, and when it fires, call greetDadAtTheDoor() internally. On the surface, there might not seem to be much difference between the two, but actually, Dad has some of Rover's implementation burned into him; if Rover's implementation had to change for some reason, we'd have to make changes in two places: once in Rover, and then again in Dad. Not a huge deal, maybe, but still, not ideal. Now imagine Mom wants to greet Dad as well. And the cat, Mr. Bigglesworth, who doesn't like Dad very much, wants to make sure he's not around, so he'd rather go outside. And a neighbor, Joe, wants to know when Dad gets home, so he can bug him about that twenty bucks Dad owes him. How do we account for all those scenarios, and the additional ones that'll inevitably crop up? 
Placing references to Mom's greetHusband() method, the cat's getOutNow() method, the dog's greetDadAtTheDoor() method, and Joe's goGetMyMoney() method into Dad's class definition would mess things up fast for Dad -- and for no good reason, since all Dad really needs to do, himself, is come home. Mom, the dog, the cat, and the neighbor just want to be notified when that happens, so they can do whatever their internal implementations require. Languages handle the specifics of this stuff in different ways, but as you'll see when you start Googling around, the pattern usually involves there being an array, or array-like structure, on the event "dispatcher" (in this case, Dad -- i.e., the object everyone else is interested in), and the "subscribers" (e.g., Mom, the cat, Rover, and Joe) all "register" by calling a publicly exposed method on the dispatcher and passing in references to themselves -- references that end up, in some form, in Dad's array of "subscribers." Then, when Dad comes home, he "dispatches" an event -- the "I'm home!" event, say -- which is essentially a function that loops over each of his subscribers and invokes them with some publicly accessible method of their own -- only it's a method whose name Dad doesn't know, doesn't have to know, because the listener provided it when he/she/it passed it in. Since I happen to code mostly ActionScript these days, here's how it might look in my world -- say, as declared from within my Mom class: ``` var dad:Dad = new Dad(); dad.addEventListener(DadEvent.ARRIVED_HOME, greetHusbandAtDoor); private function greetHusbandAtDoor(event:DadEvent):void { // Go greet my husband } ``` In Dad, then, all I have to do, when I come home, is: ``` dispatchEvent(new DadEvent(DadEvent.ARRIVED_HOME)); ``` ... 
and because Flash handles the event-dispatching and notification details for me (again, everyone does it a bit differently, but internally, Flash follows the conceptual model I've described), my Dad comes home, and each family member does what it signed up to do automatically. Hopefully that explains things a bit -- what event-thinking looks like, what loose coupling means, why it's important, and so on. Best of luck!
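The same pattern can be sketched in plain JavaScript, since the question is language-agnostic. This mirrors the ActionScript example above; the class and event names are just illustrations, not a real library:

```javascript
// Minimal dispatcher: Dad keeps a list of subscribers and notifies each
// one when he arrives home, without knowing what any of them will do.
function Dad() { this.listeners = []; }
Dad.prototype.addEventListener = function (listener) {
  this.listeners.push(listener);
};
Dad.prototype.arriveHome = function () {
  // Dispatch the event: loop over subscribers and invoke each in turn.
  this.listeners.forEach(function (listener) { listener('ARRIVED_HOME'); });
};
```

Mom, Rover, the cat, and Joe each call `addEventListener` with their own callback; Dad never needs a reference to any of them.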
In a compiled language, you write your own event loop. In a runtime language, it's implemented by the host environment already, and the programmer only gets an abstracted interface into the events system. You never see the event loop. There are two facets to building functionality with events. The first is defining the conditions under which an event will fire. In JavaScript in a browser, there isn't a single `mousedown` event. There's, in fact, one for every element on the page. In this case, the condition for when any of them will fire is based both on the location of the mouse cursor, and whether the mouse button has just switched from the up state to the down state. This condition may cause multiple events to fire. (In the browser, there's something called event `bubbling` that's related to this, but that's out of scope here.) The other facet is the event handler. An event happens regardless of whether there's a handler for it or not. Providing the handler is up to you. In JavaScript, functions are first-class values. They get defined at runtime, not at compile time. So there's no need for pointers or jump tables. You create a new function at run time, and you assign it to a special property that the host environment looks up when a particular event fires. So on Stack Overflow, there will be an event handler on each of the down arrow's `onclick` events, which turns the arrow red, decrements a number, checks some other variable and conditionally shows a notification. Then when the handler returns, it returns control of the thread to the browser, and hallelujah, the user can interact with the browser again. (All interactions cease while an event is handled, so you'd better make it quick.) Here's an example using the jQuery library. ``` jQuery("a .downarrow") .click(function(e){ e.target.style.color="red" /* ... 
do other things */ }); ``` or in raw JavaScript (version 1.6) it may be ``` Array.prototype.slice.apply(document.getElementsByTagName("a")) .filter(function(o){ return !!(/downarrow/).exec(o.className)}) .forEach(function(o){ o.onclick= function(e){ e.target.style.color="red" /* ...do other things */ } }); ``` It really is that simple. The main reason you would do something like this is that it allows asynchronous interaction. A GUI application doesn't progress in a straight line, step by step. It has to respond dynamically to the user's input. Events are a way to kind of fake the appearance of a multithreaded application. It can look to the user as though many things are happening simultaneously without interrupting each other. You could use a library to define a custom event. This would perhaps encode some domain-specific event. For example: you have a chess game that is thinking about its next move. The process for searching for the move runs out of time. The chess game could fire an `onComputerMove` event. You could handle that event with a function that grabs the specifications for the move, updates the visual representation of the chessboard, pops up a message for the user, and exits. Then when the player moves you can fire an `onPlayerMove` event, which does much of the same thing.
Can you make me understand dynamic, abstract event objects easily?
[ "javascript", "event-driven" ]
I have a `char` (ie. byte) buffer that I'm sending over the network. At some point in the future I might want to switch the buffer to a different type like `unsigned char` or `short`. I've been thinking about doing something like this: ``` typedef char bufferElementType; ``` And whenever I do anything with a buffer element I declare it as `bufferElementType` rather than `char`. That way I could switch to another type by changing this typedef (of course it wouldn't be that simple, but it would at least be easy to identify the places that need to be modified... there'll be a `bufferElementType` nearby). Is this a valid / good use of typedef? Is it not worth the trouble? Is it going to give me a headache at some point in the future? Is it going to make maintainance programmers hate me? I've read through [When Should I Use Typedef In C++](https://stackoverflow.com/questions/516237/when-should-i-use-typedef-in-c), but no one really covered this.
It is a great (and normal) usage. You have to be careful, though, that, for example, the type you select meets the same signed/unsigned criteria, or that it responds similarly to operators. Then it would be easier to change the type afterwards. Another option is to use templates to avoid fixing the type till the moment you're compiling. A class that is defined as: ``` template <typename CharType> class Whatever { CharType aChar; ... }; ``` is able to work with any char type you select, while it responds to all the operators in the same way.
Another advantage of typedefs is that, if used wisely, they can increase readability. As a really dumb example, a Meter and a Degree can both be doubles, but you'd like to differentiate between them. Using a typedef is onc quick & easy solution to [make errors more visible](http://www.joelonsoftware.com/articles/Wrong.html). **Note**: a more robust solution to the above example would have been to create different types for a meter and a degree. Thus, the compiler can enforce things itself. This requires a bit of work, which doesn't always pay off, however. Using typedefs is a quick & easy way to make errors **visible**, as described in the article linked above.
Valid use of typedef?
[ "c++", "typedef" ]
I'd like my desktop Java application to have single sign-on related to Active Directory users. In two steps, I'd like to: 1. Be sure that the particular user has logged in to Windows with some user entry. 2. Check out some setup information for that user from the Active Directory. With [Java: Programatic Way to Determine Current Windows User](https://stackoverflow.com/questions/31394/java-programatic-way-to-determine-current-windows-user) I can get the name of the current Windows user, but can I rely on that? I think ``` System.getProperty("user.name") ``` won't be secure enough? ("user.name" seems to be taken from environment variables, so I can't rely on that, I think?) The question [Authenticating against Active Directory with Java on Linux](https://stackoverflow.com/questions/390150/authenticating-against-active-directory-with-java-on-linux) provides authentication for a given name+password, but I'd like to authenticate based on the Windows logon. For the Active Directory access, LDAP would probably be the choice?
It is not supported. Java 6 has improvements, but not enough yet. Java has its own GSS stack. The problem is that for single sign-on, you need to get the Kerberos ticket from the OS (not the Java stack). Otherwise the user has to authenticate a second time (defeating the purpose of single sign-on). Look at <http://java.sun.com/developer/technicalArticles/J2SE/security/>. Look down for "Access Native GSS-API" and it talks about a new system property sun.security.jgss.native which, when set to true, causes Java to use the underlying OS GSS implementation, giving access to the OS-level authentication. Perfect!.... except it's only supported for Solaris and Linux, not Microsoft Windows. Java 6 however does appear to have enough support for acting as a *server* receiving SPNEGO authentication requests from IE and then authenticating that user against Active Directory. It's just the desktop client support that is still incomplete.
Use [JAAS](http://java.sun.com/javase/6/docs/technotes/guides/security/jaas/JAASRefGuide.html) with an [LDAP LoginModule](http://java.sun.com/javase/6/docs/jre/api/security/jaas/spec/com/sun/security/auth/module/LdapLoginModule.html). This will allow you to plug into the underlying Java security infrastructure. When you need to take the app offline or "debug" the app, you can easily swap out the LDAP module for a dummy module. This allows you to continue testing your "security" without depending on Active Directory. Highly testable, decoupled, and you can change the authentication scheme at a later time with almost no grief.
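For reference, a JAAS login configuration entry wiring up that module might look like the sketch below. The application name, LDAP URL, and option values are illustrative assumptions, not something from your environment:

```
/* Hypothetical JAAS configuration file entry using LdapLoginModule.
   {USERNAME} is replaced by JAAS with the name the user enters. */
MyDesktopApp {
    com.sun.security.auth.module.LdapLoginModule REQUIRED
        userProvider="ldap://ad.example.com:389/DC=example,DC=com"
        authIdentity="{USERNAME}@example.com"
        useSSL=false;
};
```

You would point the JVM at this file with `-Djava.security.auth.login.config=<path>`, and for offline testing swap the entry for a stub module as suggested above.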
How to use Windows login for single-sign-on and for Active Directory entries for Desktop Java application?
[ "java", "active-directory", "single-sign-on", "windows-authentication" ]
As unit testing is not used in our firm, I'm teaching myself to unit test my own code. I'm using the standard .NET test framework for some really basic unit testing. A method of mine returns an `IEnumerable<string>` and I want to test its output. So I created an `IEnumerable<string>` of expected values to test it against. I thought I remembered there being a way to `Assert.ArePartsEqual` or something like that, but I can't seem to find it. So in short, how do I test whether two `IEnumerable<string>` contain the same strings?
I don't know which "standard .net test framework" you're referring to, but if it's Visual Studio Team System Unit testing stuff you could use [CollectionAssert](http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting.collectionassert.aspx). Your test would be like this: ``` CollectionAssert.AreEqual(ExpectedList, ActualList, "..."); ``` **Update:** I forgot CollectionAssert needs an ICollection interface, so you'll have to call ActualList.ToList() to get it to compile. Returning the IEnumerable is a good thing, so don't change that just for the tests.
You want the [`SequenceEqual()`](https://learn.microsoft.com/en-us/dotnet/api/system.linq.enumerable.sequenceequal?view=net-5.0) extension method (LINQ): ``` string[] x = { "abc", "def", "ghi" }; List<string> y = new List<string>() { "abc", "def", "ghi" }; bool isTrue = x.SequenceEqual(y); ``` or just: ``` bool isTrue = x.SequenceEqual(new[] {"abc","def","ghi"}); ``` (it will return false if they are different lengths, or any item is different)
Assert IEnumerables
[ "c#", "unit-testing" ]
Are `true` and `false` keywords in Java?
No. `true` and `false` are literals.
Here's the complete list of [Java Language Keywords](http://java.sun.com/docs/books/tutorial/java/nutsandbolts/_keywords.html). In particular, note that > `true`, `false`, and `null` might seem like > keywords, but **they are actually > literals**; you cannot use them as > identifiers in your programs.
Are true and false keywords?
[ "java" ]
I have created a custom WPF user control which is intended to be used by a third party. My control has a private member which is disposable, and I would like to ensure that its dispose method will always get called once the containing window/application is closed. However, UserControl is not disposable. I tried implementing the IDisposable interface and subscribing to the Unloaded event, but neither gets called when the host application closes. MSDN says that the Unloaded event may not be raised at all, and it might also be triggered more than once, that is, when the user changes the theme. If at all possible, I don't want to rely on consumers of my control remembering to call a specific Dispose method. ``` public partial class MyWpfControl : UserControl { SomeDisposableObject x; // where does this code go? void Somewhere() { if (x != null) { x.Dispose(); x = null; } } } ``` The only solution I have found so far is to subscribe to the Dispatcher's ShutdownStarted event. Is this a reasonable approach? ``` this.Dispatcher.ShutdownStarted += Dispatcher_ShutdownStarted; ```
Interesting blog post here: [Dispose of a WPF UserControl (ish)](https://web.archive.org/web/20091029112834/http://geekswithblogs.net/cskardon/archive/2008/06/23/dispose-of-a-wpf-usercontrol-ish.aspx) It mentions subscribing to [Dispatcher.ShutdownStarted](http://msdn.microsoft.com/en-us/library/system.windows.threading.dispatcher.shutdownstarted.aspx) to dispose of your resources.
The `Dispatcher.ShutdownStarted` event is fired only at the end of the application. It's worth calling the disposing logic as soon as the control goes out of use. In particular, that frees resources when the control is used many times during the application's runtime. So **ioWint**'s solution is preferable. Here's the code: ``` public MyWpfControl() { InitializeComponent(); Loaded += (s, e) => { // only at this point the control is ready Window.GetWindow(this) // get the parent window .Closing += (s1, e1) => Somewhere(); //disposing logic here }; } ```
Disposing WPF User Controls
[ "c#", ".net", "wpf", "user-controls", "dispose" ]
I have a solution in VS 2008 with 2 projects in it. One is a DLL written in C++ and the other is a simple C++ console application created from a blank project. I would like to know how to call the functions in the DLL from the application. Assume I am starting with a blank C++ project and that I want to call a function called `int IsolatedFunction(int someParam)`. How do I call it?
There are many ways to do this but I think one of the easiest options is to link the application to the DLL at link time and then use a *definition file* to define the symbols to be exported from the DLL. **CAVEAT:** The definition file approach works best for *undecorated* symbol names. If you want to export decorated symbols then it is probably better to *NOT USE* the definition file approach. Here is a simple example of how this is done. **Step 1:** Define the function in the *export.h* file. ``` int WINAPI IsolatedFunction(const char *title, const char *test); ``` **Step 2:** Define the function in the *export.cpp* file. ``` #include <windows.h> int WINAPI IsolatedFunction(const char *title, const char *test) { MessageBox(0, title, test, MB_OK); return 1; } ``` **Step 3:** Define the function as an export in the *export.def* definition file. ``` EXPORTS IsolatedFunction @1 ``` **Step 4:** Create a DLL project and add the *export.cpp* and *export.def* files to this project. Building this project will create an *export.dll* and an *export.lib* file. The following two steps link to the DLL at link time. If you don't want to define the entry points at link time, ignore the next two steps and use **LoadLibrary** and **GetProcAddress** to load the function entry point at runtime. **Step 5:** Create a *Test* application project to use the dll by adding the *export.lib* file to the project. Copy the *export.dll* file to the same location as the *Test* console executable. **Step 6:** Call the *IsolatedFunction* function from within the Test application as shown below. ``` #include "stdafx.h" // get the function prototype of the imported function #include "../export/export.h" int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) { // call the imported function found in the dll int result = IsolatedFunction("hello", "world"); return 0; } ```
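For completeness, here is a sketch of the runtime-loading alternative mentioned in step 4, which avoids linking against *export.lib* entirely. It assumes the DLL exports the undecorated name "IsolatedFunction" (which the .def file above ensures); this is Windows-only code and illustrative rather than production-ready.

```cpp
#include <windows.h>
#include <iostream>

// Function-pointer type matching the exported signature, including
// the WINAPI (stdcall) calling convention used in export.h.
typedef int (WINAPI *IsolatedFunctionPtr)(const char *title, const char *test);

int main() {
    HMODULE dll = LoadLibraryA("export.dll");
    if (dll == NULL) {
        std::cerr << "could not load export.dll" << std::endl;
        return 1;
    }

    IsolatedFunctionPtr isolatedFunction =
        (IsolatedFunctionPtr)GetProcAddress(dll, "IsolatedFunction");
    if (isolatedFunction != NULL) {
        int result = isolatedFunction("hello", "world");
        std::cout << "IsolatedFunction returned " << result << std::endl;
    }

    FreeLibrary(dll);
    return 0;
}
```

The trade-off is that missing exports become runtime failures (a NULL from GetProcAddress) instead of link-time errors, so check every pointer before calling through it.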
You can also export functions from the dll and import them from the exe; it is more tricky at first but in the end is much easier than calling LoadLibrary/GetProcAddress. See [MSDN](http://msdn.microsoft.com/en-us/library/16ya5xae%28VS.80%29.aspx). When creating the project with the VS wizard there's a check box in the dll project that lets you export functions. Then, in the exe application you only have to #include a header from the dll with the proper definitions, and add the dll project as a dependency to the exe application. Check this other question if you want to investigate this point further: [Exporting functions from a DLL with dllexport](https://stackoverflow.com/questions/538134/exporting-functions-from-a-dll-with-dllexport).
Calling functions in a DLL from C++
[ "c++", "visual-studio-2008", "dll", "dllimport" ]
I am currently working on a site that requires ACL, and as I am using Zend it makes sense for me to make use of their ACL class, but I have little to zero idea of how to do this. I have read the docs but they confused me further. Basically, all I want to do is set up two user groups, e.g. "normal" and "admin": normal users can access all pages with a controller that is not admin, while admin can obviously also access the admin controller pages. I have many questions: 1. How do I set this up? 2. Should I run it through a DB or the config.ini? 3. Where do I place my ACL.php? 4. How do I write such a script? 5. How do I then call it? Is this done in the Index? I would very much appreciate it if you could guide me to a website or a good tutorial.
I implemented a similar thing not so long ago. The basic concept follows in example code. I created my own configAcl.php file which is loaded in the bootstrap file, in my case index.php. Here is how it'd be according to your case: ``` $acl = new Zend_Acl(); $roles = array('admin', 'normal'); // Controller script names. You have to add all of them if credential check // is global to your application. $controllers = array('auth', 'index', 'news', 'admin'); foreach ($roles as $role) { $acl->addRole(new Zend_Acl_Role($role)); } foreach ($controllers as $controller) { $acl->add(new Zend_Acl_Resource($controller)); } // Here comes the credential definition for the admin user. $acl->allow('admin'); // Has access to everything. // Here comes the credential definition for the normal user. $acl->allow('normal'); // Has access to everything... $acl->deny('normal', 'admin'); // ... except the admin controller. // Finally I store the whole ACL definition to the registry for use // in the AuthPlugin plugin. $registry = Zend_Registry::getInstance(); $registry->set('acl', $acl); ``` Another case is if you want to allow the normal user only the "list" action on all your controllers. It's pretty simple, you'd add a line like this: ``` $acl->allow('normal', null, 'list'); // Has access to all controller list actions. ``` Next you should create a new plugin which takes care of credential checking automatically when there is a request for some controller action. This checking takes place in the preDispatch() method that is called before every call to the controller action. Here is AuthPlugin.php: ``` class AuthPlugin extends Zend_Controller_Plugin_Abstract { public function preDispatch(Zend_Controller_Request_Abstract $request) { $loginController = 'auth'; $loginAction = 'login'; $auth = Zend_Auth::getInstance(); // If user is not logged in and is not requesting login page // - redirect to login page.
if (!$auth->hasIdentity() && $request->getControllerName() != $loginController && $request->getActionName() != $loginAction) { $redirector = Zend_Controller_Action_HelperBroker::getStaticHelper('Redirector'); $redirector->gotoSimpleAndExit($loginAction, $loginController); } // User is logged in or on login page. if ($auth->hasIdentity()) { // Is logged in // Let's check the credential $registry = Zend_Registry::getInstance(); $acl = $registry->get('acl'); $identity = $auth->getIdentity(); // role is a column in the user table (database) $isAllowed = $acl->isAllowed($identity->role, $request->getControllerName(), $request->getActionName()); if (!$isAllowed) { $redirector = Zend_Controller_Action_HelperBroker::getStaticHelper('Redirector'); $redirector->gotoUrlAndExit('/'); } } } } ``` The final steps are loading your configAcl.php and registering the AuthPlugin in the bootstrap file (probably index.php). ``` require_once '../application/configAcl.php'; $frontController = Zend_Controller_Front::getInstance(); $frontController->registerPlugin(new AuthPlugin()); ``` So this is the basic concept. I didn't test the code above (copied, pasted and rewritten just for showcase purposes) so it's not bullet-proof. Just to give an idea. **EDIT** For clarity: the code above in AuthPlugin assumes that the $identity object is filled with user data (the "role" column in the database). This could be done within the login process like this: ``` [...] $authAdapter = new Zend_Auth_Adapter_DbTable($db); $authAdapter->setTableName('Users'); $authAdapter->setIdentityColumn('username'); $authAdapter->setCredentialColumn('password'); $authAdapter->setIdentity($username); $authAdapter->setCredential(sha1($password)); $authAdapter->setCredentialTreatment('? AND active = 1'); $auth = Zend_Auth::getInstance(); $result = $auth->authenticate($authAdapter); if ($result->isValid()) { $data = $authAdapter->getResultRowObject(null, 'password'); // without password $auth->getStorage()->write($data); [...] ```
This solution may prove to be the simplest implementation of Zend\_Acl. Example: ``` class UserController extends Zend_Controller_Action { public function preDispatch(){ $resource = 'user_area'; $privilege = $this->_request->getActionName(); if (!$this->_helper->acl($resource, $privilege)) $this->_redirect(); } } ``` Zend/Controller/Action/Helper/Acl.php ``` class Zend_Controller_Action_Helper_Acl extends Zend_Controller_Action_Helper_Abstract { protected $acl; protected $role; protected function getAcl(){ if (is_null($this->acl)){ $acl = new Zend_Acl(); $acl->addResource(new Zend_Acl_Resource('user_area')); $acl->addResource(new Zend_Acl_Resource('customer_area'), 'user_area'); $acl->addResource(new Zend_Acl_Resource('web_area')); $acl->addRole(new Zend_Acl_Role('guest')); $acl->addRole(new Zend_Acl_Role('user'), 'guest'); $acl->allow('guest', 'web_area'); $acl->allow('guest', 'user_area', array( 'forgot-password', 'login' )); $acl->allow('user', 'user_area'); $acl->allow('customer', 'customer_area'); $this->acl = $acl; } return $this->acl; } protected function getRole(){ if (is_null($this->role)){ $session = new Zend_Session_Namespace('session'); $role = (isset($session->userType)) ? $session->userType : 'guest'; $this->role = $role; } return $this->role; } public function direct($resource, $privilege = null){ $acl = $this->getAcl(); $role = $this->getRole(); $allowed = $acl->isAllowed($role, $resource, $privilege); return $allowed; } } ```
Need guidance to start with Zend ACL
[ "php", "zend-framework", "authentication", "user-controls", "acl" ]
I have a List of this type List<List<int>> that contains this ``` List<int> A = new List<int> {1, 2, 3, 4, 5}; List<int> B = new List<int> {0, 1}; List<int> C = new List<int> {6}; List<int> X = new List<int> {....,....}; ``` I want to have all combinations like this ``` 1-0-6 1-1-6 2-0-6 2-1-6 3-0-6 ``` and so on. Is this possible to solve using LINQ?
It's quite similar to [this answer](https://stackoverflow.com/questions/312732/whats-a-good-way-for-figuring-out-all-possible-words-of-a-given-length/312761#312761) I gave to another question: ``` var combinations = from a in A from b in B from c in C orderby a, b, c select new List<int> { a, b, c }; var x = combinations.ToList(); ``` For a variable number of inputs, now with added generics: ``` var x = AllCombinationsOf(A, B, C); public static List<List<T>> AllCombinationsOf<T>(params List<T>[] sets) { // need array bounds checking etc for production var combinations = new List<List<T>>(); // prime the data foreach (var value in sets[0]) combinations.Add(new List<T> { value }); foreach (var set in sets.Skip(1)) combinations = AddExtraSet(combinations, set); return combinations; } private static List<List<T>> AddExtraSet<T> (List<List<T>> combinations, List<T> set) { var newCombinations = from value in set from combination in combinations select new List<T>(combination) { value }; return newCombinations.ToList(); } ```
If the number of dimensions is fixed, this is simply `SelectMany`: ``` var qry = from a in A from b in B from c in C select new {A=a,B=b,C=c}; ``` However, if the number of dimensions is controlled by the data, you need to use recursion: ``` static void Main() { List<List<int>> outerList = new List<List<int>> { new List<int>(){1, 2, 3, 4, 5}, new List<int>(){0, 1}, new List<int>(){6,3}, new List<int>(){1,3,5} }; int[] result = new int[outerList.Count]; Recurse(result, 0, outerList); } static void Recurse<TList>(int[] selected, int index, IEnumerable<TList> remaining) where TList : IEnumerable<int> { IEnumerable<int> nextList = remaining.FirstOrDefault(); if (nextList == null) { StringBuilder sb = new StringBuilder(); foreach (int i in selected) { sb.Append(i).Append(','); } if (sb.Length > 0) sb.Length--; Console.WriteLine(sb); } else { foreach (int i in nextList) { selected[index] = i; Recurse(selected, index + 1, remaining.Skip(1)); } } } ```
Combination of List<List<int>>
[ "c#", "list", "linq", "generics", "cartesian-product" ]
Does anyone have any sample code that makes use of the .Net framework to connect to the googlemail servers via IMAP SSL and check for new emails?
The URL listed here might be of interest to you: <http://www.codeplex.com/InterIMAP>, which is an extension to <http://www.codeproject.com/KB/IP/imaplibrary.aspx?fid=91819&df=90&mpp=25&noise=5&sort=Position&view=Quick&fr=26&select=2562067#xx2562067xx>
I'd recommend looking at [MailKit](https://github.com/jstedfast/MailKit) as it is probably the most robust mail library out there and it's Open Source (MIT). One of the awesome things about MailKit is that all network APIs are cancelable (something I haven't seen available in any other IMAP library). It's also the only library that I know of that supports threading of messages. ``` using System; using System.Net; using System.Threading; using MailKit.Net.Imap; using MailKit.Search; using MailKit; using MimeKit; namespace TestClient { class Program { public static void Main (string[] args) { using (var client = new ImapClient ()) { using (var cancel = new CancellationTokenSource ()) { client.Connect ("imap.gmail.com", 993, true, cancel.Token); // If you want to disable an authentication mechanism, // you can do so by removing the mechanism like this: client.AuthenticationMechanisms.Remove ("XOAUTH"); client.Authenticate ("joey", "password", cancel.Token); // The Inbox folder is always available... var inbox = client.Inbox; inbox.Open (FolderAccess.ReadOnly, cancel.Token); Console.WriteLine ("Total messages: {0}", inbox.Count); Console.WriteLine ("Recent messages: {0}", inbox.Recent); // download each message based on the message index for (int i = 0; i < inbox.Count; i++) { var message = inbox.GetMessage (i, cancel.Token); Console.WriteLine ("Subject: {0}", message.Subject); } // let's try searching for some messages... var query = SearchQuery.DeliveredAfter (DateTime.Parse ("2013-01-12")) .And (SearchQuery.SubjectContains ("MailKit")) .And (SearchQuery.Seen); foreach (var uid in inbox.Search (query, cancel.Token)) { var message = inbox.GetMessage (uid, cancel.Token); Console.WriteLine ("[match] {0}: {1}", uid, message.Subject); } client.Disconnect (true, cancel.Token); } } } } } ```
using c# .net libraries to check for IMAP messages from gmail servers
[ "c#", ".net", "email", "imap" ]
The question's pretty self-explanatory really. I know vaguely about vectors in maths, but I don't really see the link to C++ vectors.
The mathematical definition of a vector is a member of the set `S^n`, which is an ordered sequence of values from a specific set (`S`). This is what a C++ `vector` stores.
It's called a vector because Alex Stepanov, the designer of the Standard Template Library, was looking for a name to distinguish it from built-in arrays. He admits now that he made a mistake, because mathematics already uses the term 'vector' for a fixed-length sequence of numbers. C++11 compounds this mistake by introducing a class 'array' that behaves similarly to a mathematical vector. Alex's lesson: be very careful every time you name something.
Why is a C++ Vector called a Vector?
[ "c++", "stl", "vector" ]
I used to be a Java developer and we used tools like Ant or Maven to manage our development/testing/UAT environments in a standardized way. This allowed us to handle library dependencies, set OS variables, compile, deploy, run unit tests, and perform all the other required tasks. The generated scripts also guaranteed that all the environments were configured almost identically, and that all the tasks were performed in the same way by all the members of the team. I'm starting to work in Python now and I'd like your advice on which tools I should use to accomplish the same as described for Java.
1. [virtualenv](http://pypi.python.org/pypi/virtualenv) to create a contained virtual environment (prevents different versions of Python or Python packages from stomping on each other). There is increasing buzz from people moving to this tool. The author is the same as that of the older working-env.py mentioned by Aaron. 2. [pip](http://pypi.python.org/pypi/pip) to install packages inside a virtualenv. The traditional tool is easy\_install, as answered by S. Lott, but pip works better with virtualenv. easy\_install still has features not found in pip, though. 3. [scons](http://www.scons.org/) as a build tool, although you won't need this if you stay purely Python. 4. [Fabric](http://pypi.python.org/pypi/Fabric/0.0.3), paste, or [paver](http://www.blueskyonmars.com/projects/paver/) for deployment. 5. [buildbot](http://buildbot.net/trac) for continuous integration. 6. Bazaar, mercurial, or git for version control. 7. [Nose](http://somethingaboutorange.com/mrl/projects/nose/) as an extension for unit testing. 8. [PyFit](http://pypi.python.org/pypi/PyFIT/0.8a2) for [FIT](http://fit.c2.com) testing.
I also work with both java and python. For python development the maven equivalent is setuptools (<http://peak.telecommunity.com/DevCenter/setuptools>). For web application development I use this in combination with paster (<http://pythonpaste.org/>) for the deployment process
Tool (or combination of tools) for reproducible environments in Python
[ "python", "continuous-integration", "installation", "development-environment", "automated-deploy" ]
I have loaded an XmlDocument into memory and created a new XmlElement. Now I am trying to add the XmlElement at the path /report/section/hosts but I don't know how. I can add it easily below the root node of the XML, but I cannot figure out how to navigate to a deeper level in the XML and just append there. In pseudocode, I am trying to do this: *doc.SelectNodes("/report/section/hosts").AppendChild(subRoot);* The code: ``` XmlDocument doc = new XmlDocument(); doc.Load("c:\\data.xml"); //host XmlElement subRoot = doc.CreateElement("host"); //Name XmlElement ElName = doc.CreateElement("name"); XmlText TxtName = doc.CreateTextNode("text text"); ElName.AppendChild(TxtName); subRoot.AppendChild(ElName); doc.DocumentElement.AppendChild(subRoot); doc.Save("c:\\data.xml"); ```
Try SelectSingleNode instead of SelectNodes: ``` XmlElement parent = (XmlElement)doc.SelectSingleNode("/report/section/hosts"); parent.AppendChild(subRoot); ```
The SelectNodes method returns a list of Nodes. You should use SelectSingleNode instead... e.g. (top of my head, did not test in Visual Studio) ``` doc.SelectSingleNode("/report/section/hosts").AppendChild(subRoot); ```
How can I navigate deeper in XML and append data in it
[ "c#", "append", "xmldocument", "selectnodes" ]
I've got a procedure that returns a result set limited by page number and some other parameters. As an OUTPUT parameter I need to return the total number of selected rows according to the parameters, except the page number. So I have something like this: ``` WITH SelectedItems AS (SELECT Id, Row1, Row2, ROW_NUMBER() OVER (ORDER BY Row1) AS Position FROM Items WHERE Row2 = @Row2) SELECT Id, Row1, Row2 FROM SelectedItems WHERE Position BETWEEN @From AND @To ``` And then I need to set the OUTPUT parameter to the number of rows in the inner query. I can just copy the query and count it, but this query could return thousands of rows (and will return more in the future), so I am looking for a method to do that with good performance. I was thinking about table variables; is that a good idea? Or any other suggestions? To be more specific, it's Microsoft SQL Server 2008. Thank you, Jan
You can count the total rows as a separate column in your main query using COUNT(\*). Like this: ``` WITH SelectedItems AS (SELECT Id, Row1, Row2, ROW_NUMBER() OVER (ORDER BY Row1) AS Position, COUNT(*) OVER () AS TotalRows FROM Items WHERE Row2 = @Row2) SELECT Id, Row1, Row2, TotalRows FROM SelectedItems WHERE Position BETWEEN @From AND @To ``` This will return the count in your result set rather than in an output parameter, but that should fit your requirements. Otherwise, combine with a temp table: ``` DECLARE @tmp TABLE (Id int, Row1 int, Row2 int, TotalRows int); WITH SelectedItems AS (SELECT Id, Row1, Row2, ROW_NUMBER() OVER (ORDER BY Row1) AS Position, COUNT(*) OVER () AS TotalRows FROM Items WHERE Row2 = @Row2) INSERT @tmp SELECT Id, Row1, Row2, TotalRows FROM SelectedItems WHERE Position BETWEEN @From AND @To SELECT TOP 1 @TotalRows = TotalRows FROM @tmp SELECT * FROM @tmp ``` You will find using a temp table for just your paged result will not use much memory (depending on your page size of course) and you're only keeping it live for a short period of time. Selecting the full result set from the temp table and selecting the TotalRows will only take a tiny bit longer. This will be much faster than running a totally separate query, which in my test (repeating the WITH) doubled the execution time.
I think you should do it in a separate query. While those two queries might look pretty much the same, but the way query optimizer deals with them would differ pretty significantly. Theoretically, SQL Server might not even go through all the rows in the subquery to be able to count it.
Select COUNT(*) of subquery without running it twice
[ "sql", "sql-server", "t-sql", "count", "paging" ]
I have a Silverlight application using C#, with 2 main functions that I want to make accessible from JavaScript functions. I have done the RegisterScriptableObject() in the class and set up the [ScriptableMember] attributes for the functions I want access to. This is the Silverlight object: ``` <div id="silverlightControlHost"> <object id="silverlightControl" data="data:application/x-silverlight," type="application/x-silverlight-2" width="1024px" height="300px"> <param name="source" value="DrawingWaveForm.xap"/> <param name="onerror" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="2.0.31005.0" /> <param name="autoUpgrade" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=124807" style="text-decoration: none;"> <img src="http://go.microsoft.com/fwlink/?LinkId=108181" alt="Get Microsoft Silverlight" style="border-style: none"/> </a> </object> <iframe style='visibility:hidden;height:0;width:0;border:0px'></iframe> </div> ``` and these are my JavaScript functions: ``` function Start() { var control = document.getElementById("silverlightControl"); control.Content.Page.Start(); } function Stop() { var control = document.getElementById("silverlightControl"); control.Content.Page.Stop(); } ``` Can anyone tell me where I'm going wrong, as it does not seem to work.
As timheuer said, you need to mark your Silverlight methods with [ScriptableMember]. Call this in your class: ``` HtmlPage.RegisterScriptableObject("Page", this); ``` Call the Silverlight methods marked as scriptable from your javascript like this: ``` function CenterMap(latitude, longitude) { var silvercontrol = document.getElementById("ctl00_cphMain_slControl"); if (silvercontrol) silvercontrol.Content.Page.CenterOnCoordinates(latitude, longitude); } ``` [This page](http://blogs.vertigo.com/personal/ralph/Blog/archive/2008/05/15/call-silverlight-from-javascript-call-javascript-from-silverlight.aspx) shows you this and how to do the reverse, calling javascript methods from Silverlight. It's a really nice model.
You need to ensure your C# functions are marked as Scriptable. See <http://silverlight.net/learn/learnvideo.aspx?video=65683> for some walk throughs on how to accomplish this.
Javascript calling C# function under Silverlight
[ "javascript", "silverlight", "function" ]
I wrote a Windows application using C# .Net 2.0 and I want to do something which hides the source code, so that anyone using a reflector/decompiler tool can't see it. I used dotfuscator but it just changed the function names, not all the source code. UPDATE: I want to hide the source code, not because of hiding a key, but to hide how the code works. Thanks,
Well, the source code is yours and unless you explicitly provide it, you'll probably only be providing compiled binaries. Now, these compiled binaries are IL code. To prevent someone "decompiling" and reverse engineering your IL code back to source code, you'll need to obfuscate the IL code. This is done with a code obfuscator. There are many in the marketplace. You've already done this with dotfuscator, however, you say that it only changed the function names, not all the source code. It sounds like you're using the dotfuscator edition that comes with Visual Studio. This is effectively the "community edition" and only contains a subset of the functionality of the "professional edition". Please see [this link](http://msdn.microsoft.com/en-us/library/ms227240(VS.80).aspx) for a comparison matrix of the features of the community edition and the professional edition. If you want more obfuscation of your code (specifically to protect against people using tools such as Reflector), you'll need the professional edition of Dotfuscator, or another code obfuscator product that contains similar functionality.
IL is by definition very expressive in terms of what remains in the body; you'll just have to either: * find a better (read: more expensive) obfuscator * keep the key source under your control (for example, via a web-service, so key logic is never at the client).
Hide c# windows application source code
[ "c#", "obfuscation" ]
I'm now working with code that looks like this ``` public String getName(User user) { user.setSth(...); return user.getName(); } ``` I think it's bad practice to change objects passed as parameters. Is there a tool that detects that kind of code? I looked at findbugs, pmd and checkstyle, but could not find any check for this. P.S. sorry for bad example.
I think you are already on the right track: your best tool to detect this sort of code is almost certainly [Findbugs](http://findbugs.sourceforge.net/). However, you will probably need to write your own detector for this pattern. [Here is an example](http://www.ibm.com/developerworks/library/j-findbug2/) of how you would write a detector, though it isn't exactly the detector that you're looking for. **Caveat**: I don't really agree that a side-effecting getter is always bad style. However, if you really want to find that sort of thing, I would recommend Findbugs.
You won't find anything because, from a tool's point of view, "getName" and "setSth" are just method calls. Humans say "this is a getter" and "this is a setter" but tools don't. In fact, getName() is not a getter because getters don't accept arguments. So the tool can't see anything unusual because methods change objects all the time. If you want to enforce this rule, have a look at extending findbugs and PMD. Both allow you to define additional constraints. What you're looking for is probably: * If method name starts with "get" * AND method body calls a method of any object passed as a parameter then print a warning. That shouldn't take too long. Run this and you will see how many "false positives" you get (warnings about methods which are actually OK). This will help you determine whether it's worth pursuing this further. Plus you'll have a new item to add to your CV :)
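To make that rule concrete, here is a minimal, illustrative sketch that applies the heuristic to raw source text. It is not a real FindBugs or PMD detector (those work on bytecode or an AST, which is far more robust), and the class and method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Crude illustration of the rule: flag a method whose name starts with "get"
// and whose body calls a method on one of its parameters. A real detector
// would inspect the AST (PMD) or bytecode (FindBugs), not raw source text.
public class GetterMutationChecker {

    // Matches a getter-style signature, e.g. "public String getName(User user)".
    private static final Pattern GETTER =
        Pattern.compile("\\w+\\s+(get\\w*)\\s*\\(([^)]*)\\)");

    public static List<String> findSuspects(String methodSource) {
        List<String> warnings = new ArrayList<>();
        Matcher m = GETTER.matcher(methodSource);
        if (!m.find()) {
            return warnings; // not a getter-style method
        }
        String methodName = m.group(1);
        String params = m.group(2);
        String body = methodSource.substring(m.end());

        // Collect parameter names ("User user" -> "user").
        for (String param : params.split(",")) {
            String[] parts = param.trim().split("\\s+");
            if (parts.length < 2) {
                continue;
            }
            String paramName = parts[parts.length - 1];
            // Does the body call a method on this parameter?
            if (body.matches("(?s).*\\b" + Pattern.quote(paramName)
                    + "\\s*\\.\\s*\\w+\\s*\\(.*")) {
                warnings.add(methodName + " calls a method on parameter '"
                        + paramName + "'");
            }
        }
        return warnings;
    }

    public static void main(String[] args) {
        String src = "public String getName(User user) {"
                   + " user.setSth(1); return user.getName(); }";
        for (String w : findSuspects(src)) {
            System.out.println("WARNING: " + w);
        }
    }
}
```

Running it against the getName(User) example from the question prints one warning. A production rule would need real parsing to avoid false positives from strings, comments and read-only calls, which is exactly why extending FindBugs or PMD is the better route.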
Detect changing value of object passed as parameter
[ "java", "parameter-passing", "checkstyle", "findbugs", "pmd" ]
I'm using ASP.NET and C# and am exporting a very large results set to Excel. While my export code is running I would like to show a "loading" animated gif so the users will know their request is processing. I've been trying to do this with multithreading, but I am not very familiar with it. Can anyone guide me in the right direction? Thanks! Jon
Are you trying to do multi-threading on the server? What I'd recommend is that in your client-side javascript you turn on a "please wait" message before posting to the server, then turn the message off when you're done posting. Without knowing more about your actual setup I can't help much further, but last time I implemented this I did something along these lines: Assume we have a div called printLoadingPanel. Using jQuery I set the div to display and take over the window: ``` $("#printLoadingPanel") .css({display:"block",top:"0px",left:"0px",height:screen.availHeight}); ``` I will then start a timer with a 1/2 second interval which will start checking whether we are done printing. I'm only generating and downloading small PDFs so I needed a quick response. If your report is really slow you might want to tweak this: ``` window.setTimeout(checkIfDoneDownload,500); ``` Then my checkIfDoneDownload function hits the server and checks if we have finished generating the file and downloading it. I am using jQuery here again to call an AJAX-enabled WCF service. You could substitute this with PageMethods or any other way to call back to the server. ``` function checkIfDoneDownload() { $.ajax({ type: "POST", url: "../Services/Utilities.svc/IsPrintDownloadDone", data: "{}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(msg) { if (msg.d) { $("#printLoadingPanel").css("display","none"); } else {window.setTimeout(checkIfDoneDownload,500);} }, error: function (xhr, textStatus, errorThrown) { if (xhr.status==12030) {checkIfDoneDownload();} } }); } ``` Now on the server side, I am generating my downloads via an HTTP Handler. Essentially the first thing it does is set a session-level flag to false, and the last thing it does is set it back to true. My check-if-done service just returns the value of the flag.
I've run into this same problem with generating reports. A simple solution would be to run the report at night and then write it to the server. Then provide a link to the report.
Display loading message while Exporting a results set to Excel
[ "c#", "asp.net", "multithreading", "loading" ]
Does anyone have experience with the prefuse graph toolkit? Is it possible to change an already displayed graph, i.e. add/remove nodes and/or edges, and have the display correctly adapt? For instance, prefuse comes with an example that visualizes a network of friends: > <http://prefuse.org/doc/manual/introduction/example/Example.java> What I would like to do is something along the lines of this: ``` // -- 7. add new nodes on the fly ------------------------------------- new Timer(2000, new ActionListener() { private Node oldNode = graph.nodes().next(); // init with random node public void actionPerformed(ActionEvent e) { // insert new node // Node newNode = graph.addNode(); // insert new edge // graph.addEdge(oldNode, newNode); // remember node for next call // oldNode = newNode; } }).start(); ``` But it doesn't seem to work. Any hints?
As pointed out in my other post, the reason new nodes and edges are not visible in the original example is that the colors etc. for the nodes are not set correctly. One way to fix this is to explicitly call vis.run("color"); whenever a node or edge is added. Alternatively, we can ensure that the color action is always running, by initializing the ActionList to which we add it (called "color" in the original example) slightly differently: instead of ``` ActionList color = new ActionList(); ``` we could write ``` ActionList color = new ActionList(Activity.INFINITY); ``` This keeps the action list running indefinitely, so that new nodes/edges will automatically be initialized for their visual appearance. However, it is unclear to me whether this would actually be the preferred method - for things like a dynamic layout action (e.g. ForceDirectedLayout), such a declaration makes perfect sense, but for colors it seems to me that a constantly running coloring action is mostly overhead. So, perhaps the previously posted solution of just running the "color" action explicitly (but only once) whenever the graph gets extended might be the better choice...
You should be aware of the several layers of prefuse: * Data * Visualization * Display To be short, the three layers can be linked this way: ``` Graph graph = new Graph(eg. yourXML_file); Visualization viz = new Visualization(); viz.add(GRAPH, graph); Display disp = new Display(); disp.setVisualization(viz); ``` Display is a graphic component that you add to a panel as usual. Here you only modify the data layer: ``` Node newNode = graph.addNode(); graph.addEdge(oldNode, newNode); ``` You now need to update the visual layer: ``` viz.run("repaint"); ``` The *repaint* action has to be defined: ``` ActionList repaint = new ActionList(); repaint.add(new RepaintAction()); viz.putAction("repaint", repaint); ``` I really advise you to read the [prefuse doc](http://prefuse.org/doc/manual/). And you can find a lot of resources on the official [forum](http://sourceforge.net/forum/forum.php?forum_id=343013). At least, I can tell you that prefuse is for the moment not really efficient for live graph updates. And repainting alone is not enough: as you modified the graph structure, you have to regenerate it in the visualization (i.e. recalculate the node placements etc.). There are two actions already defined in your sample code. Run them at the end of your actionPerformed: ``` viz.run("color"); viz.run("layout"); ``` This method is not very efficient, because it adds a lot of computation each time you add a node, but there are no others for the moment with prefuse.
Prefuse Toolkit: dynamically adding nodes and edges
[ "java", "layout", "graph-theory", "prefuse" ]
I'd like to implement a custom manipulator for ostream to do some manipulation on the next item being inserted into the stream. For example, let's say I have a custom manipulator *quote*: ``` std::ostringstream os; std::string name("Joe"); os << "SELECT * FROM customers WHERE name = " << quote << name; ``` The manipulator *quote* will quote *name* to produce: ``` SELECT * FROM customers WHERE name = 'Joe' ``` How do I go about accomplishing that? Thanks.
It's particularly difficult to add a manipulator to a C++ stream, as one has no control of how the manipulator is used. One can imbue a new locale into a stream, which has a facet installed that controls how numbers are printed - but not how strings are output. And then the problem would still be how to store the quoting state safely into the stream. Strings are output using an operator defined in the `std` namespace. If you want to change the way those are printed, yet keeping the look of manipulators, you can create a proxy class: ``` namespace quoting { struct quoting_proxy { explicit quoting_proxy(std::ostream & os):os(os){} template<typename Rhs> friend std::ostream & operator<<(quoting_proxy const& q, Rhs const& rhs) { return q.os << rhs; } friend std::ostream & operator<<(quoting_proxy const& q, std::string const& rhs) { return q.os << "'" << rhs << "'"; } friend std::ostream & operator<<(quoting_proxy const& q, char const* rhs) { return q.os << "'" << rhs << "'"; } private: std::ostream & os; }; struct quoting_creator { } quote; quoting_proxy operator<<(std::ostream & os, quoting_creator) { return quoting_proxy(os); } } int main() { std::cout << quoting::quote << "hello" << std::endl; } ``` Which would be suitable to be used for `ostream`. If you want to generalize, you can make it a template too and also accept `basic_stream` instead of plain `string`. It has different behaviors to standard manipulators in some cases. Because it works by returning the proxy object, it will not work for cases like ``` std::cout << quoting::quote; std::cout << "hello"; ```
Try this: ``` #include <iostream> #include <iomanip> // The Object that we put on the stream. // Pass in the character we want to 'quote' the next object with. class Quote { public: Quote(char x) :m_q(x) {} private: // Classes that actual does the work. class Quoter { public: Quoter(Quote const& quote,std::ostream& output) :m_q(quote.m_q) ,m_s(output) {} // The << operator for all types. Outputs the next object // to the stored stream then returns the stream. template<typename T> std::ostream& operator<<(T const& quoted) { return m_s << m_q << quoted << m_q; } private: char m_q; std::ostream& m_s; }; friend Quote::Quoter operator<<(std::ostream& str,Quote const& quote); private: char m_q; }; // When you pass an object of type Quote to an ostream it returns // an object of Quote::Quoter that has overloaded the << operator for // all types. This will quote the next object and the return the stream // to continue processing as normal. Quote::Quoter operator<<(std::ostream& str,Quote const& quote) { return Quote::Quoter(quote,str); } int main() { std::cout << Quote('"') << "plop" << std::endl; } ```
Custom manipulator for C++ iostream
[ "c++", "string", "iostream" ]
I have a JFrame (parent) which creates an input frame (child) where I get some parameters. In the child frame I have "OK" and "Cancel" buttons. When the "OK" button is pressed, the parent frame needs to be updated with the new data. What is the best way to do that?
Pass in a reference to the parent frame when you create (or display) the child frame. This will require an overloaded constructor or display method. Once the child has the reference, it can of course call any method that the parent exposes as public, such as updateData().
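A minimal sketch of that pattern (class and method names are invented for illustration; `updateData()` stands for whatever public refresh method your parent exposes). The Swing wiring is left out so the callback itself stays visible - in a real app `ParentFrame` would extend `JFrame` and `onOk()` would be the body of the OK button's `ActionListener`:

```java
// Sketch of the parent-reference pattern; names are illustrative.
class ParentFrame {
    private String data = "";

    // Public, so the child can call it; a real frame would also
    // refresh its components here.
    public void updateData(String newData) {
        this.data = newData;
    }

    public String getData() {
        return data;
    }
}

class ChildDialog {
    private final ParentFrame parent; // reference passed in at creation
    private String input = "";

    ChildDialog(ParentFrame parent) {
        this.parent = parent;
    }

    public void setInput(String input) {
        this.input = input;
    }

    // This is what the OK button's ActionListener would invoke.
    public void onOk() {
        parent.updateData(input);
    }
}
```

The key design point is that the child only depends on the parent's public interface; if several different frames need to open the dialog, you can replace `ParentFrame` with a small listener interface instead.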
As of Java 1.3 ``` public class MyPanel extends JPanel { public MyPanel() { .... JButton cancelButton = new JButton("Cancel"); cancelButton.addActionListener( new ActionListener() { public void actionPerformed(ActionEvent e) { // <<<< HERE'S THE INTERESTING BIT >>>> javax.swing.SwingUtilities.getWindowAncestor(MyPanel.this).dispose(); } } ); add(cancelButton); ..... } } ```
managing parent frame from child frame on java swing
[ "java", "swing", "events", "jframe", "jbutton" ]
Currently I can create PDF files from images in Imagick with this function ``` $im->setImageFormat("pdf"); $im->writeImage("file.pdf"); ``` And it's possible to fetch multiple pages with Imagick like this ``` $im = new imagick("file.pdf[0]"); $im2 = new imagick("file.pdf[1]"); ``` But is it possible to save two image objects to two pages? (example of what I am thinking; it's not possible like this) ``` $im->setImageFormat("pdf"); $im->writeImage("file.pdf[0]"); $im2->setImageFormat("pdf"); $im2->writeImage("file.pdf[1]"); ```
I know this is long past due, but this result came up when I was trying to do the same thing. Here is how you create a multi-page PDF file in PHP and Imagick. ``` $images = array( 'page_1.png', 'page_2.png' ); $pdf = new Imagick($images); $pdf->setImageFormat('pdf'); if (!$pdf->writeImages('combined.pdf', true)) { die('Could not write!'); } ```
The accepted answer wasn't working for me; it always generated a one-page PDF (only the last image from the constructor). To make this work I had to get a file descriptor first, like this: ``` $images = array( 'img1.png', 'img2.png' ); $fp = fopen('combined.pdf', 'w'); $pdf = new Imagick($images); $pdf->resetiterator(); $pdf->setimageformat('pdf'); $pdf->writeimagesfile($fp); fclose($fp); ```
Creating two pdf pages with Imagick
[ "php", "pdf", "imagick" ]
From my experience, one of the bigger problems we come across during our web development process is keeping different setups updated and secure across different servers. My company has its own CMS which is currently installed across 100+ servers. At the moment, we use a hack-ish FTP-based approach, combined with upgrade scripts at specific locations to upgrade all of our CMS setups. Efficiently managing these setups becomes increasingly difficult and risky when there are several custom modules involved. * What is the best way to keep multiple setups of a web application secure and up-to-date? * How do *you* do it? * Are there any specific tips regarding modularity in applications, in order to maintain flexibility towards our clients, but still being able to efficiently manage multiple "branches" of an application? Some contextual information: we mainly develop on the LAMP stack. One of the main factors that helps us sell our CMS is that we can plug in pretty much anything our client wants. This can vary from 10 to 10,000 lines of custom code. **A lot of custom work consists of very small pieces of code; managing all these small pieces of code in Subversion seems quite tedious and inefficient to me (since we deliver around 2 websites every week, this would result in a *lot* of branches).** If there is something I am overlooking, I'd love to hear it from you. Thanks in advance. --- **Roundup:** first of all, thanks for all of your answers. All of these are really helpful. I will most likely use an SVN-based approach, which makes [benlumley](https://stackoverflow.com/users/39161/benlumley)'s solution closest to what I will use. Since the answer to this question might differ in other use cases, I will accept the answer with the most votes at the end of the run. **Please examine the answers and vote for the ones that you think have the most added value.**
I think using a version control system and "branching" the parts of the code that you have to modify could turn out to be the best approach in terms of robustness and efficiency. A distributed version control system could be best suited to your needs, since it would allow you to update your "core" features seamlessly on different "branches" while keeping some changes local if need be. **Edit:** I'm pretty sure that keeping all that up to date with a distributed version control system would be far less tedious than what you seem to expect: you can keep the changes you are sure you're never going to need elsewhere local, and the distributed aspect means each of your deployed applications is actually independent of the others and only the fix you mean to propagate will propagate.
If customizing your application involves changing many little pieces of code, this may be a sign that your application's design is flawed. Your application should have a set of stable core code, extensibility points for custom libraries to plug into, the ability to change appearance using templates, and the ability to change behavior and install plugins using configuration files. In this way, you don't need a separate SVN branch for every client. Rather, keep the core code and extension plugin libraries in source control as normal. In another repository, create a folder for each client and keep all their templates and configuration files there. For now, creating SVN branches may be the only solution that helps you keep your sanity. In your current state, it's almost inevitable that you'll make a mistake and mess up a client's site. At least with branches you are guaranteed to have a stable code base for each client. The only gotcha with SVN branches is if you move or rename a file in a branch, it's impossible to merge that change back down to the trunk (you'd have to do it manually). Good luck! EDIT: For an example of a well-designed application using all the principles I outlined above, see [Magento E-Commerce](http://www.magentocommerce.com/). Magento is the most powerful, extensible and easy to customize web application I've worked with so far.
How to efficiently manage multiple installations of a web application?
[ "php", "web-applications", "module", "release-management" ]
When I try to insert a new record into the database using SQLAlchemy and I don't fill out all values, it tries to insert them as "None" (instead of omitting them). It then complains about "can't be null" errors. Is there a way to have it just omit columns from the sql query if I also omitted them when declaring the instance?
This is a database schema issue, not an SQLAlchemy issue. If your database schema has a column which cannot be NULL, you must put something (i.e. not None) into there. Or change your schema to allow NULL in those columns. Wikipedia has an article [about NULL](http://en.wikipedia.org/wiki/Null_(SQL)) and an article which describes [non-NULL constraints](http://en.wikipedia.org/wiki/Check_Constraint#NOT_NULL_Constraint)
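That this is a schema issue rather than a library issue is easy to demonstrate without SQLAlchemy at all. Here is a sketch against an in-memory SQLite database (table and column names invented): omitting a column is only an error when the schema forbids NULL in it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT NOT NULL, note TEXT)")

# Omitting a nullable column is fine: 'note' simply becomes NULL.
conn.execute("INSERT INTO items (name) VALUES ('widget')")

# Omitting (or passing None for) a NOT NULL column fails, no matter
# which library built the INSERT statement.
try:
    conn.execute("INSERT INTO items (note) VALUES ('broken')")
    violated = False
except sqlite3.IntegrityError:
    violated = True

print(violated)  # → True
```

SQLAlchemy is simply generating an INSERT that names every mapped column; the database, not SQLAlchemy, is what rejects the NULL.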
To add to the answer from Ali A, this means you need to have `nullable=True` in your column definition, so that NULL is allowed in the column. For example: ``` email_address = Column(String, nullable=True) ``` [SQLAlchemy docs for Tables and Columns](http://docs.sqlalchemy.org/en/latest/core/metadata.html#accessing-tables-and-columns), excerpt from v1.2 doc: > nullable – When set to False, will cause the “NOT NULL” phrase to be > added when generating DDL for the column. When True, will normally > generate nothing (in SQL this defaults to “NULL”), except in some very > specific backend-specific edge cases where “NULL” may render > explicitly. Defaults to True unless primary\_key is also True, in which > case it defaults to False. This parameter is only used when issuing > CREATE TABLE statements.
SQLAlchemy and empty columns
[ "python", "database-design", "sqlalchemy" ]
I'm looking to build a simple upload site that will generate temporary URLs to video files after a captcha has been entered. I need to protect the true location of the files, so they cannot be hotlinked directly. A time-based URL that expires after x minutes seems like the best option, but I'm not sure about the actual implementation. Any tips?
Make your urls like this: <http://myvid.com/video?id=1&timestamp=12341561234&hash=1203941h23jk479sdf87sdf> Where timestamp is the unix timestamp and hash is an md5 hash, say, of the timestamp appended to a secret string on the server. Then, when you play that video, check if the timestamp field is valid (by using the hash), and then check to see how old the timestamp is.
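Both halves of that scheme fit in a few lines of PHP. This is only a sketch - the secret, the parameter names, and the ten-minute window are invented for illustration:

```php
<?php
// Illustrative values only - use your own secret and expiry window.
$secret = 'long-random-server-side-secret';
$maxAgeSeconds = 600;

// When rendering the page: build the protected link.
function makeVideoUrl($id, $secret) {
    $ts = time();
    $hash = md5($ts . $secret); // only the server can produce this
    return "/video?id=" . $id . "&timestamp=" . $ts . "&hash=" . $hash;
}

// When serving the video: validate before streaming a single byte.
function isValidRequest($ts, $hash, $secret, $maxAgeSeconds) {
    if (md5($ts . $secret) !== $hash) {
        return false; // timestamp or hash was tampered with
    }
    return (time() - (int)$ts) <= $maxAgeSeconds; // reject expired links
}
```

Because the hash covers the timestamp, a client cannot extend the lifetime of a link by editing the timestamp parameter.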
Yegor, they use mod_rewrite. So when someone enters www.domain.com/video/1234567890/theLongHashCode you write in .htaccess that the URL should be treated as video.php?timestamp=12341561234&hash=1203941h23jk479sdf87sdf This allows you to avoid exposing the actual URL. Some sources on mod_rewrite: <http://www.modrewrite.com/> What you would need to put in your .htaccess file, assuming you have the mod_rewrite module enabled on Apache: ``` RewriteEngine On RewriteRule ^video/([0-9]+)/(.*) video.php?timestamp=$1&hash=$2 ``` This only takes in 2 values: timestamp and hash. The video id is not sent. I would not even send the timestamp. For a temporary URL, I only generate a hash and store it in the database along with the timestamp. So when someone visits a URL, I look up the hash from the database. If the hash exists, then I compare the timestamp from the database with the current time; if it is within the time limit, the URL is considered valid, otherwise it is invalid and I write to the page "This link has expired." So I would have the URL look like: <http://hsbsitez.com/video/thehashcodehere> With the following .htaccess file to interpret that URL. ``` RewriteEngine On RewriteRule ^video/(.*) video.php?hash=$1 ```
How to create temporary urls to prevent hotlinking in php?
[ "php" ]
We have several products which have a lot of shared code and which must be maintained several versions back. To handle this we use a lot of Eclipse projects, some containing library jars, and some containing shared source code (in several projects to avoid getting a giant heap with numerous dependencies, while being able to compile everything from scratch to ensure that source and binaries are consistent). We manage those with projectSet.psf's as these can directly pull all projects out from CVS and leave a fully prepared workspace. We do not do ant builds directly or use maven. We now want to be able to put all these projects and their various versions in a Continuous Integration tool - I like Hudson but this is just a matter of taste - which essentially means that we need an automatic way to check out the projects to a fresh workspace and compile the source folders as described in the project files in each project. Hudson does not provide such an approach to build a project, so I have been considering what the best way to approach this would be. Ideas have been * Find or write an ant plugin/converter that understands projectSet.psf's and maps them to cvs-checkout and compile tags. * Create the build.xml files from within Eclipse and use those. I tried this, and found the result to be verbose and with absolute locations, which is not good with automatic tools putting files where they want to. * Write a Hudson plugin which understands projectSet.psf's to derive a configuration and build it. * Just bite the bullet and manually create and update the CI configuration whenever stuff breaks - I don't like this :) I'd really like to hear about other people's experiences so I can decide how to approach this. --- Edit: Another option might be using a CI tool which knows more about Eclipse projects and/or project sets. We are not religious - this is just a matter of getting stuff running without having to do everything ourselves. Would CruiseControl be a better option perhaps?
Others? --- Edit: Found that ant4eclipse has a "Team Project Set" facility. <http://ant4eclipse.sourceforge.net/> --- Edit: Used the ant4eclipse and ant-contrib ant extensions to build a complete workspace as a signed runnable jar file similar to the Runnable Jar facility in Eclipse 3.5M6. I am still depending on Eclipse to create the initial empty workspace, and extract the ProjectSet, so that is the next hurdle. --- Edit: Ended up with a dual configuration, namely that Hudson extracts the same set of modules as listed in the ProjectSet.psf file from CVS (which needs to have the same tag), causing them to be located next to each other. Then ant4eclipse works well with the projectSet.psf file embedded in the main module. Caveat: The module list in Hudson must be manually updated, and it appears that a manual workspace cleanup is needed afterwards to let Hudson "discover" that there are more projects now than before. This has now worked well for us for a couple of months, but it was quite tedious to get everything working inside the ant file. --- Edit: The "Use Team Projects" with ant4eclipse and a Ctrl-A, Ctrl-C in the Project Panel with a Ctrl-V in the CVS projects in Hudson has turned out to work well enough for us to live with (for mature projects this is very rarely changed). I am awaiting the release of ant4eclipse 1.0 - <http://www.ant4eclipse.org/>, currently in milestone 2 - to see how much homegrown functionality can be replaced with ant4eclipse things. Edit: ant4eclipse is as of 20100609 in M4, so the schedule at <http://www.ant4eclipse.org/node?page=1> is slipping somewhat. --- Edit: My conclusion after using our ant4eclipse approach for a longer period is that the build script gets very gnarly and is hard to maintain. Also, the Team ProjectSet facility (which ant4eclipse uses to locate the projects) works well for CVS-based repositories, but not after we migrated to git (which is a big thing in itself).
New projects will most likely be based on maven, as this has good support in Jenkins.
> Write a Hudson plugin which > understands projectSet.psf's to derive > a configuration and build it. That seems like the winning answer to me. I work with CruiseControl rather than Hudson but in my experience if you can create a plugin that solves your problem it will quickly payoff. And it is generally pretty easy to write a plugin that is custom fit for your solution as opposed to one that needs to work for everyone in a similar situation.
I'm not completely sure I understand the problem, but it sounds like the root issue is that you have many projects, some of which are dependent on others. Some of the projects that are closer to the "leaf" of the dependency tree need to be able to use "stable" (or previously "released") versions of the more "core" projects. I solve exactly this problem using [Hudson](http://hudson-ci.org/), [ant](http://ant.apache.org/), and [ivy](http://ant.apache.org/ivy). I follow a pattern demonstrated by Clark in [Pragmatic Project Automation](https://rads.stackoverflow.com/amzn/click/com/0974514039) (he doesn't demonstrate the dependency problems and solutions, and he uses CruiseControl rather than hudson.) I have a hand-written ant build file (we call it "cc-build.xml", because of our CruiseControl roots.) This file is responsible for refreshing the working space for the project from the CM repository and labeling the contents for future reference. It then hands off control to another hand-written ant build file (build.xml) that is provided by each project's developers. This project is responsible for the traditional build steps (compile, packaging, etc.) It is required to spit out the installable artifacts, unit test reports, etc, to the Hudson artifacts directory. It is my experience that automatically generated build files (by Eclipse or other similar IDE's) will never get close to getting this sufficiently robust for use in a CI scenario. Additionally, it uses ivy to resolve its own dependencies. Ivy supports precisely-specified dependency versions (e.g. "use version 1.1") and it supports "fuzzy versions" (e.g. "use version 1.1+" or "use the latest version in integration status.") Our projects typically start out specifying a very "fuzzy" version for internal projects under ongoing development, and as they get close to a release point, they "freeze" the dependency version so that stuff stops moving underneath them. 
The non-leaf projects (projects that are dependents for other projects) also use ivy to publish their artifacts to our internal ivy repository. That repository keeps all past builds of the dependents, so that any project can always depend on any other previous version. Lastly, each project in Hudson is configured to have a build trigger that causes a rebuild when any of its dependent projects successfully build. This causes them to get built again with the (possibly) new ivy dependent version. It is worth noting that once you get this up and running, consistent automated "labeling" or "tagging" of an automated build's inputs is going to be critical for you - otherwise troubleshooting post-build problems is going to result in having to untangle a hornet's nest to find the original source. Getting all of this setup for our environment took quite a bit of effort (primarily in setting up the ivy repository and ant build files,) but it has paid for itself many times over in saved headaches in manually managing the dependencies and decreased troubleshooting effort.
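The "fuzzy during development, frozen near release" dependency versions described above look roughly like this in an ivy.xml - the organisation and module names are invented for the sketch:

```xml
<ivy-module version="2.0">
  <info organisation="example.org" module="leaf-product"/>
  <dependencies>
    <!-- During active development: track the newest matching build. -->
    <dependency org="example.org" name="core-lib" rev="1.1.+"/>

    <!-- Near a release: freeze to an exact published revision, e.g.: -->
    <!-- <dependency org="example.org" name="core-lib" rev="1.1.7"/> -->
  </dependencies>
</ivy-module>
```

Ivy resolves `1.1.+` (or `latest.integration`) against the internal repository at build time, so the leaf project automatically picks up new core builds until you pin the revision.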
Best way to automatically check out and compile Eclipse projects with Ant in Hudson or another CI tool?
[ "java", "eclipse", "continuous-integration", "hudson", "cvs" ]
How can I get the current time? (in JavaScript) Not the time of the user's computer, like: ``` now = new Date; now_string = addZero(now.getHours()) + ":" + addZero(now.getMinutes()) + ":" + addZero(now.getSeconds()); ``` But the real, accurate world time? Do I need to connect to a server (most likely yes - which one, and how can I retrieve the time from it)? All the searches I do on Google return the `(new Date).getHours()` approach. Edit: I want to avoid showing an incorrect time if the user has the wrong time set on his computer.
First, to get the accurate GMT time you need a source that you trust. This means some server somewhere. Javascript can generally only make HTTP calls, and only to the server hosting the page in question (same origin policy). Thus that server has to be your source for GMT time. I would configure your webserver to use NTP to synchronize its clock with GMT, and have the webserver tell the script what time it is, by writing a variable to the page. Or else make an XmlHttpRequest back to the server when you need to know the time. The downside is that this will be inaccurate due to the latency involved: the server determines the time, writes it to the response, the response travels over the network, and the javascript executes whenever the client's cpu gives it a timeslice, etc. On a slow link you can expect seconds of delay if the page is big. You might be able to save some time by determining how far off from GMT the user's clock is, and just adjusting all the time calculations by that offset. Of course if the user's clock is slow or fast (not just late or early) or if the user changes the time on their PC then your offset is blown. Also keep in mind that the client can change the data, so don't trust any timestamps they send you.
Even if you trust the operators you need to also trust that they have not been hacked or compromised. For a business site or any site with high reliability concerns I'd recommend hosting the time solution yourself. **Edit 2**: [JimmyP's answer](https://stackoverflow.com/questions/489581/getting-the-current-gmt-world-time/489846#489846) has a comment which suggests that the json-time app has some limitations in terms of the number of requests it can support. This means if you need reliability you should host the time server yourself. However, it should be easy enough to add a page on your server which responds with the same format of data. Basically your server takes a query such as ``` http://json-time.appspot.com/time.json?tz=America/Chicago&callback=foo ``` and returns a string such as ``` foo({ "tz": "America\/Chicago", "hour": 15, "datetime": "Thu, 09 Apr 2009 15:07:01 -0500", "second": 1, "error": false, "minute": 7 }) ``` Note the `foo()` which wraps the JSON object; this corresponds to the callback=foo in the query. This means when the script is loaded into the page it will call your foo function, which can do whatever it wants with the time. Server-side programming for this example is a separate question.
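The "offset" idea from earlier in this answer can be sketched in a few lines of plain JavaScript. The server-supplied timestamp is just a function parameter here; in practice it would be written into the page by the server or fetched via XmlHttpRequest, and network latency will still skew it by however long the response took to arrive:

```javascript
// Build a clock corrected by the difference between a trusted server
// timestamp and the client's clock at the moment it was received.
function makeServerClock(serverEpochMs) {
  var offsetMs = serverEpochMs - Date.now(); // how wrong the local clock is
  return function () {
    return new Date(Date.now() + offsetMs);  // "server now", approximately
  };
}

// Example: pretend the server told us it is one minute later than
// the local clock thinks.
var serverNow = makeServerClock(Date.now() + 60 * 1000);
```

Every later call to `serverNow()` applies the stored offset, so one round-trip at page load is enough - until the user adjusts their clock, which invalidates the offset as noted above.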
You can use JSON[P] and access a time API: (The code below should work perfectly, just tested it...) ``` function getTime(zone, success) { var url = 'http://json-time.appspot.com/time.json?tz=' + zone, ud = 'json' + (+new Date()); window[ud]= function(o){ success && success(new Date(o.datetime)); }; document.getElementsByTagName('head')[0].appendChild((function(){ var s = document.createElement('script'); s.type = 'text/javascript'; s.src = url + '&callback=' + ud; return s; })()); } getTime('GMT', function(time){ // This is where you do whatever you want with the time: alert(time); }); ```
Getting the current GMT world time
[ "javascript" ]
This snippet results in a JavaScript runtime error: (`foo` is not defined) ``` if (foo) { // ... } ``` I have to define `foo` first, like so: ``` var foo = foo || null // or undefined, 0, etc. ``` ... And only then can I do: ``` if (foo) { // ... } ``` Why is that? **Update:** This was somewhat of a brainfart on my side of things: of course you can't access a variable which has not been declared. Fun that you can still do a typeof() on an undefined variable, though. I'm gonna accept [miccet](https://stackoverflow.com/users/66841/miccet)'s answer since I think it's the most elegant solution.
You'll have to define it to be able to check it for a value. In this case you're checking whether it's true. This variable is obviously not set to anything at all, same as null in C# and Nothing in VB, for example. If you must - for debugging or whatever - you can check if the variable is undefined like this: ``` if (typeof(variable) == "undefined") ```
I sense you are asking because you are aware that javascript seems to allow undefined variables in some situations (ie no runtime errors) and not in others. The reasoning is as follows: javascript *always* throws an error on checking undefined variables, but *never* throws an error on checking undefined *properties*, as long as you only use one level of indirection. Example: ``` // throws runtime error if(foo) { // ... } // does not throw runtime error if(window.foo) { // ... } // does not throw runtime error var obj = {}; if(obj.foo) { // ... } // throws runtime error if(obj.foo.bar) { // going two levels deep, but foo is undefined // ... } ``` Hope that clears it up a bit.
The use of undefined variables in if-statements
[ "javascript" ]
I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup: * The statements are all trivially simple. Most are in the form `SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1`. The most complex statement in the lot is simply three such selects joined together with `UNION ALL`s. * Each page hit executes *at most* one statement and executes it only once. * I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally. Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use `PDO::MYSQL_ATTR_DIRECT_QUERY` to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it? **EDIT:** Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted" — lots of different perspectives. Ultimately, though, I have to give [rick](https://stackoverflow.com/questions/535464/when-not-to-use-prepared-statements/537834#537834) his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after *following* everyone's advice. :-) Emulated prepared statements it is!
I think you want PDO::ATTR\_EMULATE\_PREPARES. That turns off native database prepared statements, but still allows query bindings to prevent sql injection and keep your sql tidy. From what I understand, PDO::MYSQL\_ATTR\_DIRECT\_QUERY turns off query bindings completely.
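For completeness, a sketch of turning emulation on - the DSN and credentials are placeholders, and the query is borrowed from the question:

```php
<?php
// Placeholder connection details - substitute your own.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', $user, $pass);

// With emulation on, PDO quotes the parameters client-side and sends a
// single regular query: one round-trip, but still no concatenation of
// raw user input into the SQL string.
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, true);

$stmt = $pdo->prepare('SELECT foo, bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1');
$stmt->execute(array($userSuppliedQuux));
$row = $stmt->fetch(PDO::FETCH_ASSOC);
```

The calling code is identical to the native-prepare case, so you can flip the attribute later without touching the queries.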
*Today's rule of software engineering:* if it isn't going to do anything for you, don't use it.
When *not* to use prepared statements?
[ "php", "mysql", "pdo", "prepared-statement" ]
Does the .Net DateTime contain information about time zone where it was created? I have a library parsing DateTime from a format that has "+zz" at the end, and while it parses correctly and adjusts a local time, I need to get what the specific time zone was from the DateTime object. Is this possible at all? All I can see is DateTime.Kind, which specifies if time is local or UTC.
DateTime itself contains no real timezone information. It *may* know if it's UTC or local, but not what local really means. DateTimeOffset is somewhat better - that's basically a UTC time and an offset. However, that's still not really enough to determine the timezone, as many different timezones can have the same offset at any one point in time. This sounds like it may be good enough for you though, as all you've got to work with when parsing the date/time is the offset. The support for time zones as of .NET 3.5 is a lot better than it was, but I'd really like to see a standard "ZonedDateTime" or something like that - a UTC time and an actual time zone. It's easy to build your own, but it would be nice to see it in the standard libraries. EDIT: Nearly four years later, I'd now suggest using [Noda Time](http://nodatime.org/) which has a rather richer set of date/time types. I'm biased though, as the main author of Noda Time :)
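To illustrate the DateTimeOffset suggestion for the parsing case in the question, a small sketch - the input string is invented, and this shows the idea rather than the asker's actual library code:

```csharp
using System;

class Demo
{
    static void Main()
    {
        // Invented example input carrying a "+02:00" offset suffix.
        var dto = DateTimeOffset.Parse("2009-02-13T10:30:00+02:00");

        Console.WriteLine(dto.Offset);       // the +02:00 offset is preserved
        Console.WriteLine(dto.UtcDateTime);  // the corresponding UTC instant
        // Caveat: an offset is not a time zone - many zones share +02:00
        // at any given moment.
    }
}
```

If the library hands you a plain DateTime instead, the offset information is already gone by then; you would need to re-parse the original string with DateTimeOffset to recover it.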
No. > *A developer is responsible for keeping track of time-zone information associated with a DateTime value via some external mechanism.* A quote from an excellent article [here](http://msdn.microsoft.com/en-us/library/ms973825.aspx). A must read for every .Net developer. So my advice is to write a little wrapper class that suits your needs.
Get timezone from DateTime
[ "c#", ".net", "datetime", "timezone" ]
When outputting user input, do you only use `htmlspecialchars()` or are there are functions/actions/methods you also run? I'm looking for something that will also deal with XSS. I'm wondering if I should write a function that escapes user input on output or just use `htmlspecialchars()`. I'm looking for the generic cases, not the specific cases that can be dealt with individually.
I usually use ``` htmlspecialchars($var, ENT_QUOTES) ``` on input fields. I created a method that does this because I use it a lot and it makes the code shorter and more readable.
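The helper method mentioned here can be as small as this - the function name and the explicit UTF-8 charset argument are my choices, not from the answer:

```php
<?php
// Tiny wrapper so every echo of user data is escaped consistently.
// ENT_QUOTES escapes both double and single quotes, so the result is
// safe inside either style of HTML attribute as well as element text.
function h($value) {
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

// Usage (the variable is a placeholder):
echo '<input name="q" value="' . h($userInput) . '">';
```

A one-letter name keeps templates readable, which in turn makes it easier to spot the places where escaping was forgotten.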
**Let's have a quick review of WHY escaping is needed in different contexts:** If you are in a quote-delimited string, you need to be able to escape the quotes. If you are in XML, then you need to separate "content" from "markup". If you are in SQL, you need to separate "commands" from "data". If you are on the command line, you need to separate "commands" from "data". This is a really basic aspect of computing in general. Because the syntax that delimits data can occur IN THE DATA, there needs to be a way to differentiate the DATA from the SYNTAX, hence, escaping. In web programming, the common escaping cases are: 1. Outputting text into HTML 2. Outputting data into HTML attributes 3. Outputting HTML into HTML 4. Inserting data into Javascript 5. Inserting data into SQL 6. Inserting data into a shell command Each one has different security implications if handled incorrectly. THIS IS REALLY IMPORTANT! Let's review this in the context of PHP: 1. Text into HTML: htmlspecialchars(...) 2. Data into HTML attributes: htmlspecialchars(..., ENT\_QUOTES) 3. HTML into HTML: Use a library such as [HTMLPurifier](http://htmlpurifier.org/) to ENSURE that only valid tags are present. 4. Data into Javascript: I prefer `json\_encode`. If you are placing it in an attribute, you still need to use #2 on top of it. 5. Inserting data into SQL: Each driver has an escape() function of some sort. It is best. If you are running in a normal latin1 character set, addslashes(...) is suitable. Don't forget the quotes AROUND the addslashes() call: "INSERT INTO table1 SET field1 = '" . addslashes($data) . "'" 6. Data on the command line: escapeshellarg() and escapeshellcmd() -- read the manual -- Take these to heart, and you will eliminate 95%\* of common web security risks! (\* a guess)
Do you only run htmlspecialchars() on output or is there other functionality you also do?
[ "php", "xss", "escaping", "htmlspecialchars" ]
In my app I have a fair number of entities which have fields which are getting their values set via reflection. (In this case NHibernate is setting them). I'd like to get rid of the "x is never assigned to and will always have its default value 0" warnings, so I can more easily pick out the other warnings. I realize you can surround them in pragma directives, but AFAIK you have to do this for each one. Is there a project wide or solution wide way I could do this?
Use the C# command-line option /nowarn <http://msdn.microsoft.com/en-us/library/7f28x9z3(VS.80).aspx> To do this within Visual Studio, go to **Project properties -> Build -> (Errors and warnings) -> Suppress warnings** and then specify a comma-separated list of the warning numbers which need to be suppressed.
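If you prefer to keep the setting in source control rather than in the IDE dialog, the same list can be added to the project file by hand. A fragment (0649 is the "field is never assigned" warning from the question; 0169, "field is never used", is shown only as a second example):

```
<PropertyGroup>
  <!-- comma- or semicolon-separated warning numbers to suppress project-wide -->
  <NoWarn>0169,0649</NoWarn>
</PropertyGroup>
```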
Open the project properties; on the Build tab, enter the warning IDs you want to suppress in the "Suppress warnings" box.
Globally suppress c# compiler warnings
[ "", "c#", "compiler-warnings", "suppress", "" ]
I am working on a web API for the insurance industry and trying to work out a suitable data structure for the quoting of insurance. The database already contains a "ratings" table which is basically: ``` sysID (PK, INT IDENTITY) goods_type (VARCHAR(16)) suminsured_min (DECIMAL(9,2)) suminsured_max (DECIMAL(9,2)) percent_premium (DECIMAL(9,6)) [Unique Index on goods_type, suminsured_min and suminsured_max] ``` [edit] Each type of goods typically has 3 - 4 ranges for suminsured [/edit] The list of goods\_types rarely changes and most queries for insurance will involve goods worth less than $100. Because of this, I was considering de-normalising using tables in the following format (for all values from $0.00 through to $100.00): ``` Table Name: tblRates[goodstype] suminsured (DECIMAL(9,2)) Primary Key premium (DECIMAL(9,2)) ``` Denormalising this data should be easy to maintain as the rates are generally only updated once per month at most. All requests for values >$100 will always be looked up in the primary tables and calculated. My question(s) are: 1. Am I better off storing the suminsured values as DECIMAL(9,2) or as a value in cents stored in a BIGINT? 2. This de-normalisation method involves storing 10,001 values ($0.00 to $100.00 in $0.01 increments) in possibly 20 tables. Is this likely to be more efficient than looking up the percent\_premium and performing a calculation? - Or should I stick with the main tables and do the calculation?
Don't create new tables. You already have an index on goods_type, min and max values, so this SQL (for a known goods type and its value):

```
SELECT percent_premium
FROM ratings
WHERE goods_type = 'PRECIOUST' AND :PREC_VALUE BETWEEN suminsured_min AND suminsured_max
```

will use your index efficiently.

The data type you are looking for is ***smallmoney***. Use it.
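To make the lookup concrete, here is a small runnable sketch of that range query (sqlite with made-up rates, not the original SQL Server schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ratings (
    goods_type TEXT, suminsured_min REAL, suminsured_max REAL,
    percent_premium REAL)""")
# a unique index like the one in the question lets the range lookup use the index
conn.execute("""CREATE UNIQUE INDEX ix_ratings
    ON ratings (goods_type, suminsured_min, suminsured_max)""")
conn.executemany("INSERT INTO ratings VALUES (?,?,?,?)", [
    ("PRECIOUST", 0.00,   50.00, 1.50),   # made-up rate bands
    ("PRECIOUST", 50.01, 100.00, 1.25),
    ("PRECIOUST", 100.01, 9e9,   1.00),
])

value = 75.00
row = conn.execute(
    """SELECT percent_premium FROM ratings
       WHERE goods_type = ? AND ? BETWEEN suminsured_min AND suminsured_max""",
    ("PRECIOUST", value)).fetchone()

# calculate on the fly instead of storing 10,001 precomputed rows
premium = value * row[0] / 100
print(premium)
```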
The plan you suggest will use a `binary search` on `10001` rows instead of `3` or `4`. It's hardly a performance improvement; don't do that. As for arithmetic, `BIGINT` will be slightly faster, though I think you will hardly notice it.
SQL Data Normalisation / Performance
[ "", "sql", "sql-server", "sql-server-2005", "performance", "denormalization", "" ]
Is there a way to point a Maven target (with classes) path to the JBoss Application Server instead of building an .ear and deploying it every time I edit some sources? Thanks in advance, Etam.
Well, you could just symlink the directory inside the jboss deploy directory (or change the maven target dir property). But if you are editing java source, it will need a redeploy on each change anyway. For that, it's easier to use the [maven cargo plugin](http://cargo.codehaus.org/Maven2+plugin); it can (re)deploy your ear to a running j2ee server.
Try specifying the build directory in your pom. ``` <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi=" http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mycomp.build</groupId> <artifactId>build-base</artifactId> <version>1.0-SNAPSHOT</version> <packaging>pom</packaging> <build> <directory>/path/to/jboss/server/default/deploy</directory> </build> </project> ```
JBoss AS target deploy
[ "", "java", "jboss", "jakarta-ee", "" ]
The default JVM parameters are not optimal for running large applications. Any insights from people who have tuned them on a real application would be helpful. We are running the application on a 32-bit windows machine, where the client JVM is used [by default](http://java.sun.com/docs/hotspot/gc5.0/ergo5.html#0.0.%20Garbage%20collector,%20heap,%20and%20runtime%20compiler|outline). We have added -server and changed the NewRatio to 1:3 (a larger young generation). Any other parameters/tuning which you have tried and found useful? [Update] The specific type of application I'm talking about is a server application that is rarely shut down, taking at least -Xmx1024m. Also assume that the application is profiled already. I'm looking for general guidelines in terms of **JVM performance** only.
There are great quantities of that information around. First, profile the code before tuning the JVM. Second, read the [JVM documentation](http://www.oracle.com/technetwork/java/javase/tech/index-jsp-137187.html) carefully; there are a lot of sort of "urban legends" around. For example, the -server flag only helps if the JVM is staying resident and running for some time; -server "turns up" the JIT/HotSpot, and that needs to have many passes through the same path to get turned up. -server, on the other hand, *slows* initial execution of the JVM, as there's more setup time. There are several good books and websites around. See, for example, <http://www.javaperformancetuning.com/>
# Foreword

## Background

I've been at a Java shop. I spent entire months dedicated to running performance tests on distributed systems, the main apps being in Java, some of which involved products developed and sold by Sun themselves (then Oracle). I will go over the lessons I learned, some history about the JVM, some talk about the internals, a couple of parameters explained, and finally some tuning. I'm trying to keep it to the point so you can apply it in practice.

Things are changing fast in the Java world, so part of this might already be outdated; it's been about a year since I last did all that. (Is Java 10 out already?)

# Good Practices

## What you SHOULD do: benchmark, Benchmark, BENCHMARK!

When you really need to know about performance, you need to perform real benchmarks, specific to your workload. There is no alternative.

Also, **you should monitor the JVM. Enable monitoring.** The good applications usually provide a monitoring web page and/or an API. Otherwise there is the common Java tooling (JVisualVM, JMX, hprof, and some JVM flags).

**Be aware that there is usually no performance to gain by tuning the JVM**. It's more a *"to crash or not to crash, finding the transition point"*. It's about knowing that when you give *that* amount of resources to your application, you can consistently expect *that* amount of performance in return. *Knowledge is power.*

**Performance is mostly dictated by your application. If you want faster, you gotta write better code.**

## What you WILL do most of the time: live with reliable, sensible defaults

We don't get time to optimize and tune every single application out there. Most of the time we'll simply live with sensible defaults.

The first thing to do when configuring a new application is to read the documentation. Most serious applications come with a guide for performance tuning, including advice on JVM settings.
Then you can configure the application:

`JAVA_OPTS: -server -Xms???g -Xmx???g`

* `-server`: enable full optimizations (this flag is automatic on most JVMs nowadays)
* `-Xms` `-Xmx`: set the minimum and maximum heap (always the same value for both; that's about the only optimization to do).

**Well done, you know about all the optimization parameters there are to know about the JVM, congratulations!** That was simple :D

## What you SHALL NOT do, EVER:

Please do NOT copy random strings you found on the internet, especially when they take multiple lines like this:

```
-server -Xms1g -Xmx1g -XX:PermSize=1g -XX:MaxPermSize=256m
-Xmn256m -Xss64k -XX:SurvivorRatio=30 -XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled -XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=10 -XX:+ScavengeBeforeFullGC
-XX:+CMSScavengeBeforeRemark -XX:+PrintGCDateStamps -verbose:gc
-XX:+PrintGCDetails -Dsun.net.inetaddr.ttl=5
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=`date`.hprof
-Dcom.sun.management.jmxremote.port=5616
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-server -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1
-XX:+UseConcMarkSweepGC
```

For instance, this thing found on the first page of google is plain terrible. There are arguments specified multiple times with conflicting values. Some are just forcing the JVM defaults (possibly the defaults from 2 JVM versions ago). A few are obsolete and simply ignored. And finally at least one parameter is so invalid that it will consistently crash the JVM at startup by its mere existence.

# Actual tuning

## How do you choose the memory size:

Read the guide from your application; it should give some indication. Monitor production and adjust afterwards. Perform some benchmarks if you need accuracy.

**Important Note**: The java process will take up to **max heap PLUS 10%**. That extra overhead is heap management, not included in the heap itself.
All the memory is usually preallocated by the process on startup. You may see the process using the max heap ALL THE TIME; that figure is simply not what's really in use. **You need to use Java monitoring tools to see what is really being used.**

Finding the right size:

* If it crashes with OutOfMemoryException, it ain't enough memory
* If it doesn't crash with OutOfMemoryException, it's too much memory
* If it's too much memory BUT the hardware got it and/or is already paid for, it's the *perfect* number, job done!

## JVM6 is bronze, JVM7 is gold, JVM8 is platinum...

The JVM is forever improving. Garbage collection is a very complex thing and there are a lot of very smart people working on it. It had tremendous improvements in the past decade and it will continue to do so.

For informational purposes, there are at least 4 available garbage collectors in Oracle Java 7-8 (HotSpot) and OpenJDK 7-8. (Other JVMs may be entirely different, e.g. Android, IBM, embedded):

* SerialGC
* ParallelGC
* ConcurrentMarkSweepGC
* G1GC
* (plus variants and settings)

[Starting from Java 7 and onward, the Oracle and OpenJDK code are partially shared. The GC should be (mostly) the same on both platforms.]

JVMs >= 7 have many optimizations and pick decent defaults. It changes a bit by platform. The JVM balances multiple things; for instance, it decides whether to enable multicore optimizations based on whether the CPU has multiple cores. You should let it do that. **Do not change or force GC settings.**

It's okay to let the computer take decisions for you (that's what computers are for). It's better to have JVM settings that are 95%-optimal all the time than to force an "always 8 core aggressive collection for lower pause times" on all the boxes, half of them being t2.small in the end.

**Exception**: when the application comes with a performance guide and specific tuning in place, it's perfectly okay to leave the provided settings as is.
**Tip**: Moving to a newer JVM to benefit from the latest improvements can sometimes provide a good boost without much effort.

## Special Case: -XX:+UseCompressedOops

The JVM has a special setting that forces the use of 32-bit indexes internally (read: pointer-like). That allows addressing 4 294 967 295 objects \* 8 bytes per address => 32 GB of memory. (NOT to be confused with the 4GB address space for REAL pointers.) It reduces the overall memory consumption, with a potential positive impact on all caching levels.

**Real life example**: the ElasticSearch documentation states that a running 32GB 32-bit node may be equivalent to a 40GB 64-bit node in terms of actual data kept in memory.

**A note on history**: the flag was known to be unstable in the pre-Java-7 era (maybe even pre-Java-6). It's been working perfectly in newer JVMs for a while.

[Java HotSpot™ Virtual Machine Performance Enhancements](https://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html)

> [...] In Java SE 7, use of compressed oops is the default for 64-bit JVM processes when -Xmx isn't specified and for values of -Xmx less than 32 gigabytes. For JDK 6 before the 6u23 release, use the -XX:+UseCompressedOops flag with the java command to enable the feature.

**See**: once again, the JVM is light years ahead of manual tuning. Still, it's interesting to know about it =)

## Special Case: -XX:+UseNUMA

> Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor.

Source: [Wikipedia](https://en.wikipedia.org/wiki/Non-uniform_memory_access)

Modern systems have extremely complex memory architectures with multiple layers of memory and caches, both private and shared, across cores and CPUs. Quite obviously, accessing data in the L2 cache of the current processor is A LOT faster than having to go all the way to a memory stick on another socket.
I believe that all multi-**socket** systems sold today are NUMA by design, while all consumer systems are NOT. Check whether your server supports NUMA with the command `numactl --show` on Linux.

The NUMA-aware flag tells the JVM to optimize memory allocations for the underlying hardware topology. The performance boost can be substantial (i.e. two digits: +XX%). In fact, someone switching from a "NOT-NUMA 10CPU 100GB" box to a "NUMA 40CPU 400GB" one might experience a [dramatic] loss in performance if he doesn't know about the flag.

**Note**: there are discussions to detect NUMA and set the flag automatically in the JVM <http://openjdk.java.net/jeps/163>

**Bonus**: all applications intending to run on big fat hardware (i.e. NUMA) need to be optimized for it. It is not specific to Java applications.

## Toward the future: -XX:+UseG1GC

The latest improvement in garbage collection is the [G1 collector (read: Garbage First)](http://www.oracle.com/technetwork/java/javase/tech/g1-intro-jsp-135488.html).

It is intended for high-core, high-memory systems. At the absolute minimum: 4 cores + 6 GB memory. It is targeted toward databases and memory-intensive applications using 10 times that and beyond.

Short version: at these sizes, the traditional GCs face too much data to process at once and pauses get out of hand. The G1 splits the heap into many small sections that can be managed independently and in parallel while the application is running.

The first version was available in 2013. It is mature enough for production now, but it will not become the default anytime soon. It is worth a try for large applications.

## Do not touch: Generation Sizes (NewGen, PermGen...)

The GC splits the memory into multiple sections. (Not getting into details; you can google "Java GC Generations".)

I once spent a week trying 20 different combinations of generation flags on an app taking 10000 hits/s. I was getting a magnificent boost ranging from -1% to +1%.
Java GC generations are an interesting topic to read papers on or to write one about. They are not a thing to tune, unless you're part of the 1% who can devote substantial time for negligible gains, among the 1% of people who really need optimizations.

# Conclusion

Hope this can help you. Have fun with the JVM. Java is the best language and the best platform in the world! Go spread the love :D
JVM performance tuning for large applications
[ "", "java", "jvm", "performance", "jvm-arguments", "" ]
I have a simple model that is defined as: ``` class Article(models.Model): slug = models.SlugField(max_length=50, unique=True) title = models.CharField(max_length=100, unique=False) ``` and the form: ``` class ArticleForm(ModelForm): class Meta: model = Article ``` The validation here fails when I try to update an existing row: ``` if request.method == 'POST': form = ArticleForm(request.POST) if form.is_valid(): # POOF form.save() ``` Creating a new entry is fine, however, when I try to update any of these fields, the validation no longer passes. The "errors" property had nothing, but I dropped into the debugger and deep within the Django guts I saw this: slug: "Article with this None already exists" So it looks like is\_valid() fails on a unique value check, but all I want to do is **update** the row. I can't just do: ``` form.save(force_update=True) ``` ... because the form will fail on validation. This looks like something very simple, but I just can't figure it out. I am running Django 1.0.2 What croaks is BaseModelForm.validate\_unique() which is called on form initialization.
I don't think you are actually updating an existing article, but instead creating a new one, presumably with more or less the same content, especially the slug, and thus you will get an error. It is a bit strange that you don't get better error reporting, but also I do not know what the rest of your view looks like. What if you were to try something along these lines (I have included a bit more of a possible view function; change it to fit your needs)? I haven't actually tested my code, so I am sure I've made at least one mistake, but you should at least get the general idea: ``` def article_update(request, id): article = get_object_or_404(Article, pk=id) if request.method == 'POST': form = ArticleForm(request.POST, instance=article) if form.is_valid(): form.save() return HttpResponseRedirect(to-some-suitable-url) else: form = ArticleForm(instance=article) return render_to_response('article_update.html', { 'form': form }) ``` The thing is, as taurean noted, you should instantiate your model form with the object you wish to update, otherwise you will get a new one.
I was also searching for a way to update an existing record and even tried `form.save(force_update=True)`, but received errors. Finally, by trial & error, I managed to update an existing record. The code below is tested and working. Hope this helps... # models.py from djangobook ``` class Author(models.Model): first_name = models.CharField(max_length=30) last_name = models.CharField(max_length=40) email = models.EmailField(blank=True, verbose_name='e-mail') objects = models.Manager() sel_objects=AuthorManager() def __unicode__(self): return self.first_name+' '+ self.last_name class AuthorForm(ModelForm): class Meta: model = Author # views.py # add new record def authorcontact(request): if request.method == 'POST': form = AuthorForm(request.POST) if form.is_valid(): form.save() return HttpResponseRedirect('/contact/created') else: form = AuthorForm() return render_to_response('author_form.html', {'form': form}) ``` # update existing record ``` def authorcontactupd(request,id): if request.method == 'POST': a=Author.objects.get(pk=int(id)) form = AuthorForm(request.POST, instance=a) if form.is_valid(): form.save() return HttpResponseRedirect('/contact/created') else: a=Author.objects.get(pk=int(id)) form = AuthorForm(instance=a) return render_to_response('author_form.html', {'form': form}) ```
Django form fails validation on a unique field
[ "", "python", "django", "" ]
I've read mixed reviews of both [Suds](https://fedorahosted.org/suds/) and [ZSI](http://pywebsvcs.sourceforge.net/) -- two Python SOAP libraries. However, I'm unclear whether either of them can support WS-Attachments. I'd prefer to use Suds (appears to be more straightforward), but I'll defer to whichever library suits my needs.
For your requirements I'd have to recommend ZSI. From its documentation, > It can also be used to build applications using SOAP Messages with Attachments. Their website is not as pretty as Suds but the package includes promising documentation. SOAPpy has support for attachments on its TODO list. Suds does not mention the word "attachments" anywhere. If you need attachments and don't want to implement them yourself, then ZSI is your choice.
I believe [soaplib](http://trac.optio.webfactional.com/) can handle attachments. I'm just not sure exactly how compliant it is with WS-Attachments because they don't trumpet it. Here's a sample client that, their words, allows "multi-part mime payloads": [helloworld\_attach.py](http://trac.optio.webfactional.com/browser/soaplib/trunk/examples/helloworld_attach.py)
Can you recommend a Python SOAP client that can accept WS-Attachments?
[ "", "python", "soap", "" ]
Given:

* Two similar and complex schemas, let's call them XmlA and XmlB.
* We want to convert from XmlA to XmlB.
* Not all the information required to produce XmlB is contained within XmlA (a database lookup will be required).

Can I use XSLT for this, given that I'll need to reference additional data in the database? If so, what are the arguments in favour of using XSLT rather than plain old object mapping and conversion? I'm thinking that the following criteria might influence this decision:

* Performance/speed
* Memory usage
* Code reuse/complexity

The project will be C# based. Thanks.
With C# you can always provide extension objects to XSLT transforms, so that's a non-issue. It's hard to say qualitatively without having the schemas and XML to hand, but I imagine a compiled transform will be faster than object mapping, since you'd have to do a fair amount of wheel reinventing otherwise. Further, one of the huge benefits of XSLT is its maintainability and portability. You'll be able to adapt the XSLT doc really quickly as the schemas change, and on the fly without having to do any rebuilds and takedowns if you're monitoring the file. Could go either way based on what you've given us, though.
My question is how likely are the set-of-transformations to change? If they won't change much, I favor doing it all in one body of source code -- here that would be C#. I would use XSD.exe (.NET XSD tool) generated serialization classes in conjunction with data layers for this kind of thing. On the other hand, if the set-of-transformations are likely to change -- or perhaps need to be 'corrected' post installation -- then I would favor a combination of XSLT and C# extensions to XSLT. The Extension mechanism is straightforward, and if you use the XslCompiledTransform type the performance is quite good.
Converting XML between schemas - XSLT or Objects?
[ "", "c#", "xml", "xslt", "" ]
I have Oracle9i Release 2 ODAC installed and we are migrating from the Microsoft .NET Data Provider for Oracle. We have some SqlCommand caching implemented that uses System.Data.SqlCommandBuilder.DeriveParameters(result) (the variable result is of type SqlCommand), but there is no DeriveParameters() method until ODP.NET version 10. Does anyone have experience using ODP.NET 10.1.XXXXXX or greater with Oracle 9i without problems?
You can use ODP.NET R2 for 10g on 9i without problems, as stated in the readme documentation of odp.net R2. I'd go for the R2 version (so 2.10.2.xxx) as it has a native .NET 2.0 build and many bugfixes not present in the 10.1 version. It also is less buggy than the 9.2.0.xxx version. It will install the 10g client, which can access Oracle databases of version 8i R3 or higher.
You can even use odp.net 11.1.0.6.20 together with Oracle 9i. However, use Oracle server 9.2.0.7, not 9.2.0.6.
ODP.NET OracleCommandBuilder.DeriveParameters for 9i
[ "", "c#", "oracle", "ado.net", "odp.net", "" ]
I have a TreeView that displays hierarchical data. When I select a TreeNode, I generate a series of links that map the path from the selected node to the root node Say: Root Node --> ChildNode --> SelectedNode What I need is that clicking any of these links selects the required node in the TreeView. I know that selecting a TreeNode is done client-side using this code: ``` javascript:__doPostBack(TreeView.ClientID,TreeNodeValuePath) ``` and ``` TreeView_SelectNode(TreeView.ClientID_Data, this,TreeNode.ClientID) ``` Ex: ``` <a style="border-style: none; font-size: 1em;" id="tvMenut1" title="Created On: 1/28/2009 9:50:06 AM" onclick="TreeView_SelectNode(tvMenu_Data, this,'tvMenut1');" href="javascript:__doPostBack('tvMenu','s1\\720,63')" class="tvMenu_0 treenodeParent tvMenu_3"> Etravel2000 ``` My problem is getting the TreeNode.ClientID. Any suggestions? FYI: this page is an AJAX-enabled page.
You can get the ClientID of a TreeNode using the following function; change which attribute you match on depending on how you identify your nodes. ``` function GetTreeNodeID(nodeTooltip) { var tree = document.getElementById(TreeView.ClientID); // Change TreeView ClientID. var treeLink = tree.getElementsByTagName('A'); for(var element in treeLink) if((nodeTooltip == treeLink[element].title) && (treeLink[element].title != "")) return treeLink[element].id; } ``` Then use this code for each TreeNode in the TreeView. ``` spanPath.InnerHtml += "<a href=\"javascript:__doPostBack('tvMenu','" + targetNode.ValuePath + "')\" onclick=\"TreeView_SelectNode(tvMenu_Data, this, GetTreeNodeID('" + targetNode.ToolTip + "'));\">" + targetNode.Text + "</a>&nbsp;&nbsp;-->&nbsp;&nbsp;"; ``` When you select a node from spanPath, it will be selected in the TreeView.
TreeNode.ClientID as in the ASP.NET control ClientID? If so, drop this wherever you're trying to pass it: `<%= TreeNode.ClientID %>`
How to select a TreeNode by Javascript?
[ "", "asp.net", "javascript", "ajax", "" ]
If I have a table ``` CREATE TABLE users ( id int(10) unsigned NOT NULL auto_increment, name varchar(255) NOT NULL, profession varchar(255) NOT NULL, employer varchar(255) NOT NULL, PRIMARY KEY (id) ) ``` and I want to get all unique values of `profession` field, what would be faster (or recommended): ``` SELECT DISTINCT u.profession FROM users u ``` or ``` SELECT u.profession FROM users u GROUP BY u.profession ``` ?
They are essentially equivalent to each other (in fact this is how some databases implement `DISTINCT` under the hood). If one of them is faster, it's going to be `DISTINCT`. This is because, although the two are the same, a query optimizer would have to catch the fact that your `GROUP BY` is not taking advantage of any group members, just their keys. `DISTINCT` makes this explicit, so you can get away with a slightly dumber optimizer. When in doubt, test!
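Here is a tiny runnable illustration of the equivalence (sqlite with made-up rows; real timing tests should of course be run on your own MySQL data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, profession TEXT)")
conn.executemany("INSERT INTO users (name, profession) VALUES (?, ?)", [
    ("a", "doctor"), ("b", "nurse"), ("c", "doctor"), ("d", "pilot"),
])

distinct_rows = conn.execute(
    "SELECT DISTINCT profession FROM users ORDER BY profession").fetchall()
grouped_rows = conn.execute(
    "SELECT profession FROM users GROUP BY profession ORDER BY profession").fetchall()

# both queries produce the same set of unique professions
print(distinct_rows == grouped_rows)
```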
If you have an index on `profession`, these two are synonyms. If you don't, then use `DISTINCT`. `GROUP BY` in `MySQL` sorts results. You can even do: ``` SELECT u.profession FROM users u GROUP BY u.profession DESC ``` and get your professions sorted in `DESC` order. `DISTINCT` creates a temporary table and uses it for storing duplicates. `GROUP BY` does the same, but sorts the distinct results afterwards. So ``` SELECT DISTINCT u.profession FROM users u ``` is faster, if you don't have an index on `profession`.
What's faster, SELECT DISTINCT or GROUP BY in MySQL?
[ "", "mysql", "sql", "database", "group-by", "distinct", "" ]
Is there any way to update a table within the select_expr part of a MySQL select query? Here is an example of what I am trying to achieve: ``` SELECT id, name, (UPDATE tbl2 SET currname = tbl.name WHERE tbl2.id = tbl.id) FROM tbl; ``` This gives me an error in MySQL, but I don't see why this shouldn't be possible as long as I am not changing tbl. Edit: I will clarify why I can't use an ordinary construct for this. Here is the more complex example of the problem which I am working on: ``` SELECT id, (SELECT @var = col1 FROM tbl2), @var := @var+1, (UPDATE tbl2 SET col1 = @var) FROM tbl WHERE ... ``` So I am basically in a situation where I am incrementing a variable during the select statement and want to reflect this change as I am selecting the rows, as I am using the value of this variable during the execution. The example given here can probably be implemented by other means, but the real example, which I won't post here due to there being too much unnecessary code, needs this functionality.
If your goal is to update tbl2 every time you query tbl1, then the best way to do that is to create a stored procedure to do it and wrap it in a transaction, possibly changing isolation levels if atomicity is needed. You can't nest updates in selects.
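As a sketch of the read-then-write-back idea inside one transaction (a sqlite stand-in with made-up columns; in MySQL this logic would live in the stored procedure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl2 (id INTEGER PRIMARY KEY, col1 INTEGER)")
conn.execute("INSERT INTO tbl2 VALUES (1, 10)")

# one transaction: read the counter, then write the increment back.
# sqlite3's connection context manager commits on success, rolls back on error.
with conn:
    (counter,) = conn.execute("SELECT col1 FROM tbl2 WHERE id = 1").fetchone()
    counter += 1
    conn.execute("UPDATE tbl2 SET col1 = ? WHERE id = 1", (counter,))

print(conn.execute("SELECT col1 FROM tbl2 WHERE id = 1").fetchone()[0])
```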
What results do you want? The results of the select, or of the update. If you want to update based on the results of a query you can do it like this: ``` update table1 set value1 = x.value1 from (select value1, id from table2 where value1 = something) as x where id = x.id ```
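In portable SQL, the same update-from-a-query can be written with a correlated subquery instead of the `UPDATE ... FROM` extension shown above (a sqlite sketch with made-up tables and values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, value1 TEXT)")
conn.execute("CREATE TABLE table2 (id INTEGER PRIMARY KEY, value1 TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)", [(1, "old"), (2, "keep")])
conn.execute("INSERT INTO table2 VALUES (1, 'new')")

# update table1 from the matching rows of table2; the EXISTS clause
# leaves rows without a match untouched
conn.execute("""
    UPDATE table1
    SET value1 = (SELECT value1 FROM table2 WHERE table2.id = table1.id)
    WHERE EXISTS (SELECT 1 FROM table2 WHERE table2.id = table1.id)
""")

print(conn.execute("SELECT value1 FROM table1 ORDER BY id").fetchall())
```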
Updating a table within a select statement
[ "", "sql", "mysql", "" ]
My brother recently started learning C++. He told me a problem he encountered while trying to validate input in a simple program. He had a text menu where the user entered an integer `choice`; if they entered an invalid choice, they would be asked to enter it again (do while loop). However, if the user entered a string instead of an int, the code would break. I read various questions on stackoverflow and told him to rewrite his code along the lines of: ``` #include<iostream> #include<limits> using namespace std; int main() { int a; do { cout<<"\nEnter a number:"; cin>>a; if(cin.fail()) { //Clear the fail state. cin.clear(); //Ignore the rest of the wrong user input, till the end of the line. cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); } }while(true); return 0; } ``` While this worked ok, I also tried a few other ideas: 1. Using a try catch block. It didn't work. I think this is because an exception is not raised due to bad input. 2. I tried `if(! cin){//Do Something}` which didn't work either. I haven't yet figured this one out. 3. Thirdly, I tried inputting a fixed length string and then parsing it. I would use atoi(). Is this standards compliant and portable? Should I write my own parsing function? 4. If I write a class that uses cin, but dynamically does this kind of error detection, perhaps by determining the type of the input variable at runtime, would it have too much overhead? Is it even possible? I would like to know what is the best way to do this kind of checking, what are the best practices? I would like to add that while I am not new to writing C++ code, I am new to writing good standards compliant code. I am trying to unlearn bad practices and learn the right ones. I would be much obliged if answerers give a detailed explanation. **EDIT**: I see that litb has answered one of my previous edits. I'll post that code here for reference.
``` #include<iostream> #include<limits> using namespace std; int main() { int a; bool inputCompletionFlag = false; do { cout<<"\nEnter a number:"; cin>>a; if(cin.fail()) { //Clear the fail state. cin.clear(); //Ignore the rest of the wrong user input, till the end of the line. cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); } else { inputCompletionFlag = true; } }while(!inputCompletionFlag); return 0; } ``` This code fails on input like "1asdsdf". I didn't know how to fix it but litb has posted a great answer. :)
Here is code you could use to make sure you also reject things like ``` 42crap ``` Where non-number characters follow the number. If you read the whole line and then parse it and execute actions appropriately, it will possibly require you to change the way your program works. If your program read your number from different places until now, you then have to have one central place that parses one line of input and decides on the action. But maybe that's a good thing too - so you could increase the readability of the code that way by having things separated: **I**nput - **P**rocessing - **O**utput Anyway, here is how you can reject the number-non-number of above. Read a line into a string, then parse it with a `stringstream`: ``` std::string getline() { std::string str; std::getline(std::cin, str); return str; } int choice; std::istringstream iss(getline()); iss >> choice >> std::ws; if(iss.fail() || !iss.eof()) { // handle failure } ``` It eats all trailing whitespace. When it hits the end-of-file of the stringstream while reading the integer or trailing whitespace, then it sets the eof-bit, and we check that. If it failed to read any integer in the first place, then the fail or bad bit will have been set. Earlier versions of this answer used `std::cin` directly - but `std::ws` won't work well together with `std::cin` connected to a terminal (it will block instead, waiting for the user to input something), so we use a `stringstream` for reading the integer. --- Answering some of your questions: **Question:** 1. Using a try catch block. It didn't work. I think this is because an exception is not raised due to bad input. ***Answer:*** Well, you can tell the stream to throw exceptions when you read something. You use the `istream::exceptions` function, which you tell for which kind of error you want to have an exception thrown: ``` iss.exceptions(ios_base::failbit); ``` I never used it myself.
If you do that on `std::cin`, you will have to remember to restore the flags for other readers that rely on it not throwing. I find it way easier to just use the functions *fail* and *bad* to ask for the state of the stream.

**Question:** 2. I tried `if(!cin){ //Do Something }` which didn't work either. I haven't yet figured this one out.

***Answer:*** That could come from the fact that you gave it something like "42crap". For the stream, that is completely valid input when doing an extraction into an integer.

**Question:** 3. Thirdly, I tried inputting a fixed length string and then parsing it. I would use atoi(). Is this standards compliant and portable? Should I write my own parsing function?

***Answer:*** atoi is Standard compliant. But it's not good when you want to check for errors. There is no error checking done by it, as opposed to other functions. If you have a string and want to check whether it contains a number, then do it like in the initial code above.

There are C-like functions that can read directly from a C-string. They exist to allow interaction with old, legacy code and for writing fast performing code. One should avoid them in programs because they work rather low-level and require using raw naked pointers. By their very nature, they can't be enhanced to work with user defined types either. Specifically, this refers to the function "strtol" (string-to-long), which is basically atoi with error checking and the capability to work with other bases (hex for example).

**Question:** 4. If I write a class that uses cin, but dynamically do this kind of error detection, perhaps by determining the type of the input variable at runtime, will it have too much overhead? Is it even possible?

***Answer:*** Generally, you don't need to care too much about overhead here (if you mean runtime overhead). But it depends specifically on where you use that class.
That question will be very important if you are writing a high performance system that processes input and needs high throughput. But if you need to read input from a terminal or a file, you already see what this comes down to: waiting for the user to input something takes so long that you don't need to watch runtime costs at this point, on this scale.

If you mean code overhead - well, it depends on how the code is implemented. You would need to scan the string that you read - whether it contains a number or not, or some arbitrary string. Depending on what you want to scan (maybe you have a "date" input, or a "time" input format too. Look into `boost.date_time` for that), your code can become arbitrarily complex. For simple things like classifying between number or not, I think you can get away with a small amount of code.
This is what I do with C but it's probably applicable for C++ as well. **Input everything as a string.** Then, and only then, parse the string into what you need. It's sometimes better to code your own than try to bend someone else's to your will.
What is the best way to do input validation in C++ with cin?
[ "c++", "coding-style", "validation" ]
I have a Winforms application that shows multiple top-level windows:

```
Form1 form1 = new Form1();
form1.Show();

Form2 form2 = new Form2();
form2.Show();

Application.Run();
```

Inside one of the event-handlers in Form1, I would like to be able to show a modal dialog:

```
Dialog dialog = new Dialog();
dialog.ShowDialog(form1);
```

without suspending the other top-level window. Is this possible?
You'd need to run each top-level window on its own STA thread to achieve that, I believe.
There is a simple solution that seems to work properly. You can simply check if we are being disabled and re-enable if needed.

```
[DllImport("user32.dll")]
private static extern bool EnableWindow(IntPtr handle, bool enable);

protected override void WndProc(ref System.Windows.Forms.Message msg)
{
    if (msg.Msg == 0x000a /* WM_ENABLE */ && msg.WParam == IntPtr.Zero)
    {
        EnableWindow(this.Handle, true);
        return;
    }
    base.WndProc(ref msg);
}
```
C# Winforms: Multiple top level windows and ShowDialog()
[ "c#", "winforms", "modal-dialog" ]
Actually, maybe not full-blown Lex/Yacc. I'm implementing a command-interpreter front-end to administer a webapp. I'm looking for something that'll take a grammar definition and turn it into a parser that directly invokes methods on my object. Similar to how ASP.NET MVC can figure out which controller method to invoke, and how to pony up the arguments. So, if the user types "create foo" at my command-prompt, it should transparently call a method: ``` private void Create(string id) { /* ... */ } ``` Oh, and if it could generate help text from (e.g.) attributes on those controller methods, that'd be awesome, too.
I'm not sure Lex/Yacc will be of any help. You'll just need a basic tokenizer and an interpreter, which are faster to write by hand. If you still want to go the parsing route, see [Irony](https://github.com/IronyProject/Irony). As a sidenote: have you considered PowerShell and its cmdlets?
I've done a couple of small projects with GPLEX/GPPG, which are pretty straightforward reimplementations of LEX/YACC in C#. I've not used any of the other tools above, so I can't really compare them, but these worked fine. GPPG can be found [here](http://gppg.codeplex.com/) and GPLEX [here](http://gplex.codeplex.com/). That being said, I agree, a full LEX/YACC solution probably is overkill for your problem. I would suggest generating a set of bindings using IronPython: it interfaces easily with .NET code, non-programmers seem to find the basic syntax fairly usable, and it gives you a lot of flexibility/power if you choose to use it.
Lex/Yacc for C#?
[ "c#", "parsing", "yacc" ]
I have been wanting to make a RSS reader for a while now (just for fun), but I don't have the slightest idea of where to start. I don't understand anything about RSS. Are there any good tutorials on RSS and how to implement it in an application (not a tutorial on how to make a RSS reader, that would be too easy).
See <http://msdn.microsoft.com/en-us/library/bb943474.aspx> <http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.syndicationfeed.aspx> <http://msdn.microsoft.com/en-us/library/bb943480.aspx> Basically there is a lot of stuff in the .Net 3.5 framework that does the grunt-work of parsing and representing feeds; it's not hard to write a 30-line app that takes in a feed URL and downloads the feed and prints the title and author of all the items, for example. (Works for RSS 2.0 (not others!) or Atom.)
If you are focusing on creating an **RSS Reader** and not on RSS parsing logic, you might want to delegate creation/reading RSS feeds using this free RSS Library called [Argotic](http://www.codeplex.com/Argotic) on CodePlex.com
How can I get started making a C# RSS Reader?
[ "c#", "xml", "rss" ]
When I install my application using the msi file for the second time, I find 2 different behaviors:

1) Sometimes it displays a warning window informing me that there is a previous version installed on the PC, and that if I want to remove it, I should go to the control panel.

2) It displays a wizard which asks me if I want to repair or remove the application.

Actually, I don't want the first behavior. I want it to ask me either to repair the previous version or to remove it. How to do that?!?
I found the solution: I should set the DetectNewerInstalledVersion property to false so that it will do the second behavior. Please correct me if I'm wrong.
If you want the setup to automatically uninstall the old version of your app, you could do the following:

1. Increment the version number of your app in the setup
2. After you do that, VS will pop up a dialog asking if you want a different Product Id. (answer yes/ok)
3. Make sure that RemovePreviousVersions is true.

Next time you install (assuming an older version is present), your setup will automatically uninstall the older version and install the new. The magic is caused by one last guid -- the UpgradeCode. This guid must ALWAYS be the same across all versions of your product. So the logic is that if the version number has changed, and the product code has changed, but the upgrade code is the same, an automatic uninstall will occur.
a problem with the installation package
[ "c#", "windows-installer" ]
How do you compile and execute a .cs file from a command-prompt window?
CSC.exe is the C# compiler included in the .NET Framework and can be used to compile from the command prompt. The output can be an executable, ".exe", if you use "/target:exe", or a DLL if you use "/target:library". CSC.exe is found in the .NET Framework directory, e.g. for .NET 3.5, `c:\windows\Microsoft.NET\Framework\v3.5\`.

To run it, first open a command prompt: click "Start", then type `cmd.exe`. You may then have to cd into the directory that holds your source files.

Run the C# compiler like this:

```
c:\windows\Microsoft.NET\Framework\v3.5\csc.exe /t:exe /out:MyApplication.exe MyApplication.cs ...
```

(all on one line)

If you have more than one source module to be compiled, you can put it on that same command line. If you have other assemblies to reference, use `/r:AssemblyName.dll`.

Ensure you have a static Main() method defined in one of your classes, to act as the "entry point".

To run the resulting EXE, type `MyApplication`, followed by `<ENTER>`, at the command prompt.

[This article](https://msdn.microsoft.com/en-us/library/ms379563(v=vs.80).aspx) on MSDN goes into more detail on the options for the command-line compiler. You can embed resources, set icons, sign assemblies - everything you could do within Visual Studio.

If you have Visual Studio installed, in the Start menu under Visual Studio Tools you can open a "Visual Studio command prompt" that will set up all required environment and path variables for command line compilation.

While it's very handy to know of this, you should combine it with knowledge of some sort of build tool such as [NAnt](http://nant.sourceforge.net/), [MSBuild](http://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx), [FinalBuilder](http://www.finalbuilder.com/) etc. These tools provide a complete build environment, not just the basic compiler.

### On a Mac

On a Mac, the syntax is similar, only the C# compiler is just named `csc`:

```
$ csc /target:exe /out:MyApplication.exe MyApplication.cs ...
```

Then to run it:

```
$ mono MyApplication.exe
```
Another way to compile C# programs **(without using Visual Studio or without having it installed)** is to create a user variable in environment variables, namely "PATH".

Copy the following path into this variable: **"C:\Windows\Microsoft.NET\Framework\v4.0.30319"**, or depending upon which .NET version your PC has.

That way you don't have to mention the whole path every time you compile code. Simply use

**"C:\Users\UserName\Desktop>csc [options] filename.cs"**

or wherever the path of your code is. Now you are good to go.
Compiling/Executing a C# Source File in Command Prompt
[ "c#", "command-line", "compilation", "command-prompt" ]
I have a Category entity which has a nullable ParentId field. When the method below is executing and the categoryId is null, the result seems to be null; however, there are categories which have a null ParentId value. What is the problem here? What am I missing?

```
public IEnumerable<ICategory> GetSubCategories(long? categoryId)
{
    var subCategories = this.Repository.Categories.Where(c => c.ParentId == categoryId)
        .ToList().Cast<ICategory>();

    return subCategories;
}
```

By the way, when I change the condition to (c.ParentId == null), the result seems normal.
The first thing to do is to put on logging, to see what TSQL was generated; for example:

```
ctx.Log = Console.Out;
```

LINQ-to-SQL seems to treat nulls a little inconsistently (depending on literal vs value):

```
using(var ctx = new DataClasses2DataContext())
{
    ctx.Log = Console.Out;
    int? mgr = (int?)null; // redundant int? for comparison...

    // 23 rows:
    var bosses1 = ctx.Employees.Where(x => x.ReportsTo == (int?)null).ToList();

    // 0 rows:
    var bosses2 = ctx.Employees.Where(x => x.ReportsTo == mgr).ToList();
}
```

So all I can suggest is to use the top form with nulls! i.e.

```
Expression<Func<Category,bool>> predicate;
if(categoryId == null)
{
    predicate = c => c.ParentId == null;
}
else
{
    predicate = c => c.ParentId == categoryId;
}
var subCategories = this.Repository.Categories
    .Where(predicate).ToList().Cast<ICategory>();
```

---

Update - I got it working "properly" using a custom `Expression`:

```
static void Main()
{
    ShowEmps(29);   // 4 rows
    ShowEmps(null); // 23 rows
}

static void ShowEmps(int? manager)
{
    using (var ctx = new DataClasses2DataContext())
    {
        ctx.Log = Console.Out;
        var emps = ctx.Employees.Where(x => x.ReportsTo, manager).ToList();
        Console.WriteLine(emps.Count);
    }
}

static IQueryable<T> Where<T, TValue>(
    this IQueryable<T> source,
    Expression<Func<T, TValue?>> selector,
    TValue? value) where TValue : struct
{
    var param = Expression.Parameter(typeof(T), "x");
    var member = Expression.Invoke(selector, param);
    var body = Expression.Equal(
        member, Expression.Constant(value, typeof(TValue?)));
    var lambda = Expression.Lambda<Func<T, bool>>(body, param);
    return source.Where(lambda);
}
```
Another way:

```
Where object.Equals(c.ParentId, categoryId)
```

or

```
Where (categoryId == null ? c.ParentId == null : c.ParentId == categoryId)
```
Compare nullable types in Linq to Sql
[ "c#", "linq-to-sql", "nullable" ]
I have two flags:

```
[Flags]
enum Flags { A = 1, B = 2 };
```

I set them like this:

```
Mode = Flags.A | Flags.B; // default value

for(int i = 0; i < args.Length; i++)
{
    switch(args[i])
    {
        case "--a":
        {
            if ((Mode & Flags.A) == Flags.A && (Mode & Flags.B) == Flags.B) // both, default assumed
            {
                Mode = Flags.A; // only A
            }
            else
            {
                Mode |= Flags.A; // append A
            }
            break;
        }
        case "--b":
        {
            if ((Mode & Flags.A) == Flags.A && (Mode & Flags.B) == Flags.B)
            {
                Mode = Flags.B;
            }
            else
            {
                Mode |= Flags.B;
            }
            break;
        }
    }
}
```

and use them later like this:

```
if((Mode & Flags.A) == Flags.A)
{
    //
}

if((Mode & Flags.B) == Flags.B)
{
    //
}
```

Major reservation: both flags may be set. Or just one; in that case only one part of the code is executed.

Is my code good? What is the best way to set up flags?

**Upd**: Is this less ugly than the first, do you think?

```
Mode = 0; // default value is now empty

for(int i = 0; i < args.Length; i++)
{
    switch(args[i])
    {
        case "--a":
        {
            Mode |= Flags.A;
            break;
        }
        case "--b":
        {
            Mode |= Flags.B;
            break;
        }
    }
}

if(Mode == 0)
{
    Mode = Flags.A | Flags.B; // if no parameters are given, set up both flags
}
```
Here's how I would set my flags: ``` Mode = 0; for(int i = 0; i < args.Length; i++) { switch(args[i]) { case "--a": Mode |= Flags.A; break; case "--b": Mode |= Flags.B; break; } } ``` If both flags should be on by default, I think it makes more sense to change the command line parameters to something like `--not-a` and `--not-b`. This would both reflect the default setting, and let you get rid of `(Mode & Flags.A) == Flags.A && (Mode & Flags.B) == Flags.B`, which is rather ugly, IMHO. Then you can set your flags like this: ``` Mode = Flags.A | Flags.B; for(int i = 0; i < args.Length; i++) { switch(args[i]) { case "--not-a": Mode &= ~Flags.A; break; case "--not-b": Mode &= ~Flags.B; break; } } ``` Finally, if you have a lot of flags (instead of just two), it might be easier to set up your enum like this: ``` [Flags] enum Flags { A = 1, B = 1 << 1, C = 1 << 2, D = 1 << 3, E = 1 << 4, F = 1 << 5 }; ```
You can turn a "bit" off with the following wonderful statement:

```
Mode &= ~Flags.A;
```

I'd recommend including a "null" value in your enum as well:

```
[Flags]
enum Flags
{
    Null = 0,
    A = 1,
    B = 2
}
```

It will keep your life simpler! :-)
Enum as Flag using, setting and shifting
[ "c#", ".net", "enums", "flags" ]
I'm learning C# asynchronous socket programming, and I've learned that it's a good idea to reuse byte buffers in some sort of pool, and then just check one out as needed when receiving data from a socket. However, I have seen two different methods of doing a byte array pool: one used a simple queue system, and just added/removed them from the queue as needed. If one was requested and there were no more left in the queue, a new byte array is created. The other method that I've seen uses one big byte array for the entire program. The idea of a queue still applies, but instead it's a queue of integers which determine the slice (offset) of the byte array to use. If one was requested and there were no more left in the queue, the array must be resized. Which one of these is a better solution for a highly scalable server? My instinct is it would be cheaper to just use many byte arrays because I'd imagine resizing the array as needed (even if we allocate it in large chunks) would be pretty costly, especially when it gets big. Using multiple arrays seems more intuitive too - is there some advantage to using one massive array that I'm not thinking of?
You are correct in your gut feeling. Every time you need to make the array bigger, you will be recreating the array and copying the existing bytes over. Since we are talking about bytes here, the size of the array may get large very quickly. So you will be asking for a contiguous piece of memory each time, which, depending on how your program uses memory, might or might not be viable. This will also, in effect, become a virtual pool, so to speak. A pool by definition has a set of multiple items that are managed and shared by various clients.

The one-array solution is also way more complex to implement. The good thing is that a one-array solution allows you to give out variable-sized chunks, but this comes at the cost of essentially reimplementing malloc: dealing with fragmentation, etc., which you shouldn't get into.

A multiple-array solution allows you to initialize a pool with N buffers and easily manage them in a straightforward fashion. Definitely the approach I'd recommend.
I wouldn't suggest the resizing option. Start simple and work your way up. A queue of byte buffers which gets a new one added to the end when it is exhausted would be a good start. You will probably have to pay attention to threading issues, so my advice would be to use somebody else's thread-safe queue implementation. Next you can take a look at the more complex "pointers" into a big byte array chunk, except my advice would be to have a queue of 4k/16k (some power of two multiple of the page size) blocks that you index into, and when it is full you add another big chunk to the queue. Actually, I don't recommend this at all due to the complexity and the dubious gain in performance. Start simple, work your way up. Pool of buffers, make it thread safe, see if you need anything more.
One big byte buffer or several small ones?
[ "c#", "networking", "sockets", "buffer" ]
I need to generate a unique temporary file with a .csv extension. What I do right now is:

```
string filepath = System.IO.Path.GetTempFileName().Replace(".tmp", ".csv");
```

However, this doesn't guarantee that my .csv file will be unique. I know the chances I'll ever get a collision are very low (especially if you consider that I don't delete the .tmp files), but this code doesn't look good to me. Of course I could manually generate random file names until I eventually find a unique one (which shouldn't be a problem), but I'm curious to know if others have found a nice way to deal with this problem.
Guaranteed to be (statistically) unique:

```
string fileName = System.IO.Path.GetTempPath() + Guid.NewGuid().ToString() + ".csv";
```

(To quote from the wiki article on the probability of a collision:

> ...one's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion [19], that means the probability is about 0.00000000006 (6 × 10−11), equivalent to the odds of creating a few tens of trillions of UUIDs in a year and having one duplicate. In other words, only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%. The probability of one duplicate would be about 50% if every person on earth owned 600 million UUIDs.

)

EDIT: Please also see JaredPar's comments.
Try this function ... ``` public static string GetTempFilePathWithExtension(string extension) { var path = Path.GetTempPath(); var fileName = Path.ChangeExtension(Guid.NewGuid().ToString(), extension); return Path.Combine(path, fileName); } ``` It will return a full path with the extension of your choice. Note, it's not guaranteed to produce a unique file name since someone else could have technically already created that file. However the chances of someone guessing the next guid produced by your app and creating it is very very low. It's pretty safe to assume this will be unique.
How can I create a temp file with a specific extension with .NET?
[ "c#", ".net", "file", "temporary-files" ]