A query that loops through **17 million records to remove duplicates** has been running for about **16 hours** now, and I want to know: if I stop it right now, will the delete statements it has executed so far be finalized, or will they roll back? Has it been deleting as it runs?

I have found that when I do a

```
select count(*) from myTable
```

while the query is running, the row count is about 5 less than the starting row count. Obviously the server resources are extremely poor, but does that mean this process has taken 16 hours to find 5 duplicates (when there are actually thousands), and could be running for days?

This query took 6 seconds on 2000 rows of test data, and it works great on that set, so I figured it would take 15 hours for the complete set.

Any ideas? Below is the query:

```
--Declare the looping variable
DECLARE @LoopVar char(10)

DECLARE
  --Set private variables that will be used throughout
  @long DECIMAL,
  @lat DECIMAL,
  @phoneNumber char(10),
  @businessname varchar(64),
  @winner char(10)

SET @LoopVar = (SELECT MIN(RecordID) FROM MyTable)

WHILE @LoopVar is not null
BEGIN

  --initialize the private variables (essentially this is a .ctor)
  SELECT
    @long = null,
    @lat = null,
    @businessname = null,
    @phoneNumber = null,
    @winner = null

  -- load data from the row declared when setting @LoopVar
  SELECT
    @long = longitude,
    @lat = latitude,
    @businessname = BusinessName,
    @phoneNumber = Phone
  FROM MyTable
  WHERE RecordID = @LoopVar

  --find the winning row with that data. The winning row means
  SELECT top 1 @Winner = RecordID
  FROM MyTable
  WHERE @long = longitude
    AND @lat = latitude
    AND @businessname = BusinessName
    AND @phoneNumber = Phone
  ORDER BY
    CASE WHEN webAddress is not null THEN 1 ELSE 2 END,
    CASE WHEN caption1 is not null THEN 1 ELSE 2 END,
    CASE WHEN caption2 is not null THEN 1 ELSE 2 END,
    RecordID

  --delete any losers.
  DELETE FROM MyTable
  WHERE @long = longitude
    AND @lat = latitude
    AND @businessname = BusinessName
    AND @phoneNumber = Phone
    AND @winner != RecordID

  -- prep the next loop value to go ahead and perform the next duplicate query.
  SET @LoopVar = (SELECT MIN(RecordID)
                  FROM MyTable
                  WHERE @LoopVar < RecordID)
END
```
No, SQL Server will not roll back the deletes it has already performed if you stop query execution. Oracle requires an explicit commit of action queries or the data gets rolled back, but not MSSQL.

With SQL Server, the deletes will not roll back unless you are specifically running in the context of a transaction and you roll back that transaction, or the connection closes without the transaction having been committed. But I don't see a transaction context in your query above.

You could also try restructuring your query to make the deletes a little more efficient, but essentially if the specs of your box are not up to snuff then you might be stuck waiting it out.

Going forward, you should create a unique index on the table to keep yourself from having to go through this again.
Your query is not wrapped in a transaction, so it won't roll back the changes already made by the individual delete statements.

I specifically tested this myself on my own SQL Server using the following query, and the ApplicationLog table was empty even though I cancelled the query:

```
declare @count int
select @count = 5

WHILE @count > 0
BEGIN
    print @count
    delete from applicationlog;
    waitfor time '20:00';
    select @count = @count - 1
END
```

However, your query is likely to take many days or weeks, much longer than 15 hours. Your estimate that you can process 2000 records every 6 seconds is wrong, because each iteration of your while loop will take significantly longer with 17 million rows than it does with 2000 rows. So unless your query takes significantly less than a second for 2000 rows, it will take days for all 17 million.

You should ask a new question on how you can delete duplicate rows efficiently.
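For reference, a common set-based rewrite of this kind of duplicate removal on SQL Server 2005+ uses `ROW_NUMBER()` over the duplicate key and deletes everything but the preferred row. This is only a sketch built from the column names in the question (it preserves the same "winner" ordering); test it on a copy of the table first:

```sql
-- Set-based duplicate removal: rank rows within each duplicate group,
-- keeping the row with the most data (same ORDER BY as the original loop),
-- then delete every row ranked below the winner.
;WITH Ranked AS (
    SELECT RecordID,
           ROW_NUMBER() OVER (
               PARTITION BY longitude, latitude, BusinessName, Phone
               ORDER BY CASE WHEN webAddress IS NOT NULL THEN 1 ELSE 2 END,
                        CASE WHEN caption1  IS NOT NULL THEN 1 ELSE 2 END,
                        CASE WHEN caption2  IS NOT NULL THEN 1 ELSE 2 END,
                        RecordID
           ) AS rn
    FROM MyTable
)
DELETE FROM Ranked
WHERE rn > 1;
```

Deleting through the CTE is legal in T-SQL and removes the losers in a single pass instead of one loop iteration per record.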
If I stop a long running query, does it rollback?
[ "", "sql", "sql-server", "duplicate-data", "" ]
I'm aware I can add maven repositories for fetching dependencies in ~/.m2/settings.xml. But is it possible to add a repository using command line, something like: ``` mvn install -Dmaven.repository=http://example.com/maven2 ``` The reason I want to do this is because I'm using a continuous integration tool where I have full control over the command line options it uses to call maven, but managing the settings.xml for the user that runs the integration tool is a bit of a hassle.
You can do this, but you're probably better off doing it in the POM as others have said.

On the command line you can specify a property for the local repository, and another property for the remote repositories. The remote repository will have all default settings, though.

The example below specifies two remote repositories and a custom local repository:

```
mvn package -Dmaven.repo.remote=http://www.ibiblio.org/maven/,http://myrepo -Dmaven.repo.local="c:\test\repo"
```
One of the goals of Maven's Project Object Model (POM) is to capture all information needed to reliably reproduce an artifact, so passing in settings that affect artifact creation is strongly discouraged.

To achieve your goal, you can check in your user-level settings.xml file with each project and use the -s (or --settings) option to pass it to the build.
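A minimal checked-in settings file for this approach might look like the following. The profile id and file name are placeholders; the repository URL is the one from the question:

```xml
<!-- ci-settings.xml, checked in next to the project.
     Invoke with: mvn -s ci-settings.xml install -->
<settings>
  <profiles>
    <profile>
      <id>ci-repos</id>
      <repositories>
        <repository>
          <id>example-repo</id>
          <url>http://example.com/maven2</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>ci-repos</activeProfile>
  </activeProfiles>
</settings>
```

Since the file lives in version control, the CI tool only needs the one extra `-s` flag on its command line.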
Can I add maven repositories in the command line?
[ "", "java", "maven-2", "build-process", "" ]
Ok, so I just ran into the following problem that raised an eyebrow.

For various reasons I have a testing setup where testing classes in a TestingAssembly.dll depend on the TestBase class in a BaseTestingAssembly.dll. One of the things the TestBase does in the meantime is look for a certain embedded resource in its own and the calling assembly.

So my BaseTestingAssembly contained the following lines:

```
public class TestBase {
    private static Assembly _assembly;
    private static Assembly _calling_assembly;

    static TestBase() {
        _assembly = Assembly.GetExecutingAssembly();
        _calling_assembly = Assembly.GetCallingAssembly();
    }
}
```

Static, since I figured these assemblies would be the same over the application's lifetime, so why bother recalculating them on every single test?

When running this, however, I noticed that both \_assembly and \_calling\_assembly were being set to BaseTestingAssembly rather than BaseTestingAssembly and TestingAssembly respectively. Setting the variables to non-static and having them initialized in a regular constructor fixed this, but I am confused why this happened to begin with. I thought static constructors run the first time a static member gets referenced. That could only have been from my TestingAssembly, which should then have been the caller. Does anyone know what might have happened?
The static constructor is called by the runtime and not directly by user code. You can see this by setting a breakpoint in the constructor and then running in the debugger: the function immediately above it in the call chain is native code.

**Edit:** There are a lot of ways in which static initializers run in a different environment than other user code. Some other ways are:

1. They're implicitly protected against race conditions resulting from multithreading
2. You can't catch exceptions from outside the initializer

In general, it's probably best not to use them for anything too sophisticated. You can implement single-init with the following pattern:

```
private static Assembly _assembly;
private static Assembly Assembly {
    get {
        if (_assembly == null)
            _assembly = Assembly.GetExecutingAssembly();
        return _assembly;
    }
}

private static Assembly _calling_assembly;
private static Assembly CallingAssembly {
    get {
        if (_calling_assembly == null)
            _calling_assembly = Assembly.GetCallingAssembly();
        return _calling_assembly;
    }
}
```

Add locking if you expect multithreaded access.
I think the answer is here in the discussion of [C# static constructors](http://msdn.microsoft.com/en-us/library/k9x6w0hc(VS.80).aspx). My best guess is that the static constructor is getting called from an unexpected context because: > The user has no control on when the > static constructor is executed in the > program
Assembly.GetCallingAssembly() and static constructors?
[ "", "c#", "reflection", "static-constructor", "" ]
I'm looking for some programming guides to C# GUI design. I come from the Java camp (where I can happily hand-code Swing/AWT GUIs) and thus don't have a clue where to start :( Also, what difference (if any) is there between the Windows Presentation Foundation and WinForms?
Chris Sells seems to be 'dah man' with regard to Windows Forms and WPF:

> [http://www.sellsbrothers.com/writing/](https://web.archive.org/web/20120415062302/http://www.sellsbrothers.com/writing/)
> [http://www.sellsbrothers.com/writing/wfbook](https://web.archive.org/web/20120507182223/http://www.sellsbrothers.com/writing/wfbook)
> [http://www.sellsbrothers.com/writing/wpfbook](https://web.archive.org/web/20120415192041/http://www.sellsbrothers.com/writing/wpfbook)

It's also worth taking a look at Charles Petzold:

> <http://www.charlespetzold.com/winforms/index.html>

MS also have a heap of stuff related to design guidelines and usability from a Windows perspective:

> <http://msdn.microsoft.com/en-us/library/aa152962.aspx>
**Windows Presentation Foundation** is a vector-based system that is part of .NET 3.0. It allows you to define your UI in XAML, and can do all sorts of animation, 3D, etc. very easily. It's much newer and still being evaluated by a lot of folks.

**Windows Forms** is a wrapper over the older Windows UI classes (Win32/MFC or whatever). It came with .NET 1.0 and uses C# to define all the UI and its layout. It's the tried and true UI method.
C# .NET 3.5 GUI design
[ "", "c#", ".net", "wpf", "winforms", "user-interface", "" ]
Is there a way to lock a row in the SQL 2005-2008 database without starting a transaction, so other processes cannot update the row until it is unlocked?
You can use [RowLock or other hints](http://blog.sqlauthority.com/2007/04/27/sql-server-2005-locking-hints-and-examples/), but you should be [careful](http://www.sqlteam.com/article/row-locking).

The HOLDLOCK hint will instruct SQL Server to hold the lock until you commit the transaction. The ROWLOCK hint will lock only this record and not issue a page or table lock. The lock will also be released if you close your connection or it times out.

I'd be VERY careful doing this since it will stop any SELECT statements that hit this row dead in their tracks. SQL Server has numerous locking hints that you can use. You can see them in Books Online when you search on either HOLDLOCK or ROWLOCK.
Everything you execute on the server happens in a transaction, either implicit or explicit. You cannot simply lock a row with no transaction (i.e., make the row read-only). You can make the whole database read-only, but not just one row.

Explain your purpose and there might be a better solution: isolation levels, lock hints, and [row versioning](http://msdn.microsoft.com/en-us/library/ms187101.aspx).
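To make the transaction requirement concrete, the hint-based approach described in these answers looks roughly like this. The table and column names are invented for illustration; the hints themselves (UPDLOCK, ROWLOCK, HOLDLOCK) are standard SQL Server table hints:

```sql
-- Sketch: hold a lock on one row for the duration of an (explicit) transaction.
-- Other writers block on this row until COMMIT or ROLLBACK.
BEGIN TRANSACTION;

SELECT *
FROM MyTable WITH (UPDLOCK, ROWLOCK, HOLDLOCK)
WHERE Id = 42;

-- ... do whatever work requires the row to stay unchanged ...

COMMIT TRANSACTION;
```

The point of the sketch is that the lock's lifetime is the transaction's lifetime; there is no supported way to pin the row without one.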
Locking Row in SQL 2005-2008
[ "", "sql", "sql-server", "locking", "" ]
Is there a way in Firebug to start a new script file to apply to a page? Basically I want to do work like I'd normally do on the Firebug console, but be able to paste in multi-line functions, etc. It doesn't seem like the console is amenable to that.
Down in the lower-right corner of the FireBug UI you should see a red square icon with an up arrow. Use that and stretch it to a size you like. --- [![Screenshot](https://i.stack.imgur.com/SE7Ai.gif)](https://i.stack.imgur.com/SE7Ai.gif) [![Screenshot](https://i.stack.imgur.com/eG63w.gif)](https://i.stack.imgur.com/eG63w.gif)
Maybe not within Firebug, but you could try some techniques similar to the jQuery bookmarklet. [bookmarklet link](http://erikandcolleen.com/erik/projects/jquery/bookmarklet/)
Firebug - how can I run multiline scripts or create a new JavaScript file?
[ "", "javascript", "firefox", "firebug", "" ]
According to the documentation, they're pretty much interchangeable. Is there a stylistic reason to use one over the other?
I like to use double quotes around strings that are used for interpolation or that are natural language messages, and single quotes for small symbol-like strings, but will break the rules if the strings contain quotes, or if I forget. I use triple double quotes for docstrings and raw string literals for regular expressions even if they aren't needed.

For example:

```
LIGHT_MESSAGES = {
    'English': "There are %(number_of_lights)s lights.",
    'Pirate':  "Arr! Thar be %(number_of_lights)s lights."
}

def lights_message(language, number_of_lights):
    """Return a language-appropriate string reporting the light count."""
    return LIGHT_MESSAGES[language] % locals()

def is_pirate(message):
    """Return True if the given message sounds piratical."""
    return re.search(r"(?i)(arr|avast|yohoho)!", message) is not None
```
Quoting the official docs at <https://docs.python.org/2.0/ref/strings.html>: > In plain English: String literals can be enclosed in matching single quotes (') or double quotes ("). So there is no difference. Instead, people will tell you to choose whichever style that matches the context, *and to be consistent*. And I would agree - adding that it is pointless to try to come up with "conventions" for this sort of thing because you'll only end up confusing any newcomers.
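The docs' claim that the two quote styles are interchangeable is easy to verify directly; the only practical difference is which quote character needs escaping inside the literal:

```python
# Single- and double-quoted literals produce identical str objects.
assert 'hello' == "hello"
assert "it's" == 'it\'s'            # double quotes avoid escaping the apostrophe
assert 'say "hi"' == "say \"hi\""   # single quotes avoid escaping the double quote

# Triple-quoted and raw strings behave the same way with either quote style.
assert """multi
line""" == '''multi
line'''
assert r"\d+" == '\\d+'             # raw string vs. escaped backslash

print(type('hello') is type("hello"))  # prints: True -- there is only one str type
```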
Single quotes vs. double quotes in Python
[ "", "python", "coding-style", "" ]
I wrote a Java program to add and retrieve data from an MS Access database. At present it goes sequentially through ~200K insert queries in ~3 minutes, which I think is slow. I plan to rewrite it using threads, with 3-4 threads handling different parts of the hundred thousand records. I have a compound question:

* Will this help speed up the program because of the divided workload, or would it be the same because the threads still have to access the database sequentially?
* What strategy do you think would speed up this process (other than query optimization, which I have already done, in addition to using Java's PreparedStatement)?
First, don't use Access. Move your data anywhere else -- SQL/Server -- MySQL -- anything. The DB engine inside access (called Jet) is pitifully slow. It's not a real database; it's for personal projects that involve small amounts of data. It doesn't scale at all. Second, threads rarely help. The JDBC-to-Database connection is a process-wide resource. All threads share the one connection. "But wait," you say, "I'll create a unique Connection object in each thread." Noble, but *sometimes* doomed to failure. Why? Operating System processing between your JVM and the database may involve a socket that's a single, process-wide resource, shared by all your threads. If you have a single OS-level I/O resource that's shared across all threads, you won't see much improvement. In this case, the ODBC connection is one bottleneck. And MS-Access is the other.
1. Don't know. Without knowing more about what the bottleneck is, I can't say whether it will make things faster. If the database is the limiter, then chances are more threads will slow it down.
2. I would dump the Access database to a flat file and then bulk load that file. Bulk loading allows for optimizations which are far, far faster than running multiple insert queries.
Will it be faster to use several threads to update the same database?
[ "", "sql", "database", "multithreading", "" ]
What is the difference between this:

```
this.btnOk.Click += new System.EventHandler(this.btnOK_Click);
```

and this?

```
this.btnOk.Click += this.btnOK_Click;
```

They both work. The former is what Visual Studio defaults to when you use the snippets. But it seems like it only adds extra verbiage, or am I missing something?
No difference. Omitting the delegate instantiation is just syntax candy; the C# compiler will generate the delegate instantiation for you under the hood.
In C# 2.0 and later there is no difference. Before C# 2.0 the explicit delegate instantiation was required due to compiler limitations, but with the advent of C# 2.0's method group conversions, the second form is preferred unless you want to be very explicit.
What is the difference between Events with Delegate Handlers and those without?
[ "", "c#", "events", "" ]
I'm taking a look at how the model system in django works and I noticed something that I don't understand. I know that you create an empty `__init__.py` file to specify that the current directory is a package. And that you can set some variable in `__init__.py` so that import \* works properly. But django adds a bunch of from ... import ... statements and defines a bunch of classes in `__init__.py`. Why? Doesn't this just make things look messy? Is there a reason that requires this code in `__init__.py`?
All imports in `__init__.py` are made available when you import the package (directory) that contains it.

Example:

`./dir/__init__.py`:

```
import something
```

`./test.py`:

```
import dir
# can now use dir.something
```

EDIT: forgot to mention, the code in `__init__.py` runs the first time you import any module from that directory. So it's normally a good place to put any package-level initialisation code.

EDIT2: dgrant pointed out a possible confusion in my example. In `__init__.py`, `import something` can import any module, not necessarily one from the package. For example, we can replace it with `import datetime`; then in our top-level `test.py` both of these snippets will work:

```
import dir
print dir.datetime.datetime.now()
```

and

```
import dir.some_module_in_dir
print dir.datetime.datetime.now()
```

The bottom line is: all names assigned in `__init__.py`, be they imported modules, functions or classes, are automatically available in the package namespace whenever you import the package or a module in the package.
It's just personal preference really, and has to do with the layout of your Python modules.

Let's say you have a module called `erikutils`. There are two ways that it can be a module: either you have a file called *erikutils.py* on your `sys.path`, or you have a directory called *erikutils* on your `sys.path` with an empty *`__init__.py`* file inside it. Then let's say you have a bunch of modules called `fileutils`, `procutils`, `parseutils`, and you want those to be sub-modules under `erikutils`. So you make some .py files called *fileutils.py*, *procutils.py*, and *parseutils.py*:

```
erikutils
  __init__.py
  fileutils.py
  procutils.py
  parseutils.py
```

Maybe you have a few functions that just don't belong in the `fileutils`, `procutils`, or `parseutils` modules. And let's say you don't feel like creating a new module called `miscutils`. AND, you'd like to be able to call the functions like so:

```
erikutils.foo()
erikutils.bar()
```

rather than doing

```
erikutils.miscutils.foo()
erikutils.miscutils.bar()
```

So because the `erikutils` module is a directory, not a file, we have to define its functions inside the *`__init__.py`* file.

In Django, the best example I can think of is `django.db.models.fields`. ALL the Django \*Field classes are defined in the *`__init__.py`* file in the *django/db/models/fields* directory. I guess they did this because they didn't want to cram everything into a hypothetical *django/db/models/fields.py* module, so they split it out into a few submodules (*related.py*, *files.py*, for example) and stuck the core \*Field definitions in the fields module itself (hence, *`__init__.py`*).
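The layout described above can be demonstrated end to end by building a throwaway package at runtime (the `erikutils` name is borrowed from the answer; the file contents are invented for illustration):

```python
import os
import sys
import tempfile
import textwrap

# Build a tiny package on the fly to show that names assigned in
# __init__.py appear directly in the package namespace.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "erikutils")
os.makedirs(pkg)

# A regular submodule.
with open(os.path.join(pkg, "fileutils.py"), "w") as f:
    f.write("def read_all(path):\n    return open(path).read()\n")

# __init__.py re-exports the submodule and defines a "miscutils"-style function.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from . import fileutils   # re-export the submodule

        def foo():                # function living directly in __init__.py
            return "foo"
        """))

sys.path.insert(0, root)
import erikutils

print(erikutils.foo())                   # prints: foo
print(hasattr(erikutils, "fileutils"))   # prints: True
```

Both the function and the submodule are attributes of the package itself, exactly because they were bound inside `__init__.py`.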
Adding code to __init__.py
[ "", "python", "initialization", "package", "" ]
Ever since I started using .NET, I've just been creating helper classes or partial classes to keep code located and contained in their own little containers, etc.

What I'm looking to know is the best practices for making one's code as clean and polished as it possibly can be. Obviously clean code is subjective, but I'm talking about when to use things (not how to use them) such as polymorphism, inheritance, interfaces, classes, and how to design classes more appropriately (to make them more useful, not just say 'DatabaseHelper', as some consider this bad practice in the [code smells wiki](https://stackoverflow.com/questions/114342/what-are-code-smells-what-is-the-best-way-to-correct-them)).

Are there any resources out there that could possibly help with this kind of decision making? Bear in mind that I haven't even started a CS or software engineering course, and that teaching resources are fairly limited in real life.
A real eye-opener to me was [Refactoring: Improving the Design of Existing Code](https://rads.stackoverflow.com/amzn/click/com/0201485672):

> With proper training a skilled system
> designer can take a bad design and
> rework it into well-designed, robust
> code. In this book, Martin Fowler
> shows you where opportunities for
> refactoring typically can be found,
> and how to go about reworking a bad
> design into a good one.

It helped me to efficiently and systematically refactor code. It also helped me a lot in discussions with other developers, when their `holy code` has to be changed...
Jeff Atwood made a [nice blog post on refactoring and code smells](http://blog.codinghorror.com/code-smells/), you might want to check that out. Refactoring code in .NET takes some time to grok. You need to know some object-oriented [design principles](http://books.google.se/books?id=LjJcCnNf92kC) (or [design techniques](http://en.wikipedia.org/wiki/Gang_of_Four_(software))) in order to [refactor effectively](http://www.industriallogic.com/xp/refactoring/catalog.html) and [mercilessly](http://www.extremeprogramming.org/rules/refactor.html). In short, you refactor code in order to remove code smells and make changes easier to do. Also, don't overdo it.
Refactoring Code: When to do what?
[ "", "c#", "vb.net", "refactoring", "coding-style", "" ]
I'm reading data from a table (in a MySQL database) with a Hibernate SQL query. The thing is, the table contains a column that is mapped to a char in the Hibernate model, and sometimes this column is empty. I suppose this is where my exception comes from. How can I map a column of char to my Hibernate model without getting this error? Thanks for your answers!

---

Thank you for your answer! My column is not nullable (I'm using MySQL and this column is NOT NULL), so I don't think that

```
if (str == null) {
```

is appropriate. The error is:

```
15:30:35,289 INFO CharacterType:178 - could not read column value from result set: LSFUS11_20_; String index out of range: 0
```

which results in the following exception:

```
java.lang.StringIndexOutOfBoundsException: String index out of range: 0
    at java.lang.String.charAt(String.java:558)
```

I think I may try your solution, but with:

```
if (str == "") {
```

since it can't be null; it's just an empty string. Thanks for your piece of code, I'm going to try that!
I am assuming from your question that you're mapping this to a primitive char. Next time, please post the stack trace that you receive (you may leave out where you call it; you could include only the Hibernate parts if your project is too sensitive).

If you do map to a primitive char and the column is null, you will get an exception, because primitives cannot have null assigned to them. This class will mitigate that: the "null" character is returned as a character representing "0". You can customize this to your liking:

```
import java.sql.ResultSet;
import java.sql.SQLException;

import org.hibernate.type.CharacterType;

public class NullCharacterType extends CharacterType {

    /**
     * Serializable ID generated by Eclipse
     */
    private static final long serialVersionUID = 1L;

    public NullCharacterType() {
        super();
    }

    public Object get(final ResultSet rs, final String name) throws SQLException {
        final String str = rs.getString(name);
        if (str == null || str.length() == 0) {
            return new Character((char) 0);
        } else {
            return new Character(str.charAt(0));
        }
    }
}
```

To use this new type, in your Hibernate mapping, before you had something like:

```
<property name="theChar" type="character">
```

Now, you just specify the class name as your type:

```
<property name="theChar" type="yourpackage.NullCharacterType">
```

However, the best practice is to not use primitive types for database mapping. If at all possible, use Character instead of char, because that way you won't have an issue with null (null can be assigned to the wrapper types).
Check whether `mysql.jar` (mysql-connector-java-5.1.7-bin) is present in your JBoss server's lib directory. I faced the same problem; after adding the `mysql.jar` file it worked fine.
could not read column value from result set ; String index out of range: 0
[ "", "java", "hibernate", "" ]
I have some decimal data that I am pushing into a SharePoint list where it is to be viewed. I'd like to restrict the number of significant figures displayed in the result data based on my knowledge of the specific calculation. Sometimes it'll be 3, so 12345 will become 12300 and 0.012345 will become 0.0123. Occasionally it will be 4 or 5. Is there any convenient way to handle this?
See: [RoundToSignificantFigures](https://stackoverflow.com/questions/374316/round-a-double-to-x-significant-figures-after-decimal-point/374470#374470) by "P Daddy". I've combined his method with another one I liked.

Rounding to significant figures is a lot easier in T-SQL, where the rounding method is based on rounding position rather than number of decimal places, which is the case with .NET's Math.Round. You can round a number in T-SQL to negative places, which rounds at whole numbers, so no scaling is needed.

Also see this [other thread](https://stackoverflow.com/questions/202302/rounding-to-an-arbitrary-number-of-significant-digits). Pyrolistical's method is good.

The trailing-zeros part of the problem seems like more of a string operation to me, so I included a ToString() extension method which will pad zeros if necessary:

```
using System;
using System.Globalization;

public static class Precision
{
    // 2^-24
    public const float FLOAT_EPSILON = 0.0000000596046448f;

    // 2^-53
    public const double DOUBLE_EPSILON = 0.00000000000000011102230246251565d;

    public static bool AlmostEquals(this double a, double b, double epsilon = DOUBLE_EPSILON)
    {
        // ReSharper disable CompareOfFloatsByEqualityOperator
        if (a == b)
        {
            return true;
        }
        // ReSharper restore CompareOfFloatsByEqualityOperator

        return (System.Math.Abs(a - b) < epsilon);
    }

    public static bool AlmostEquals(this float a, float b, float epsilon = FLOAT_EPSILON)
    {
        // ReSharper disable CompareOfFloatsByEqualityOperator
        if (a == b)
        {
            return true;
        }
        // ReSharper restore CompareOfFloatsByEqualityOperator

        return (System.Math.Abs(a - b) < epsilon);
    }
}

public static class SignificantDigits
{
    public static double Round(this double value, int significantDigits)
    {
        int unneededRoundingPosition;
        return RoundSignificantDigits(value, significantDigits, out unneededRoundingPosition);
    }

    public static string ToString(this double value, int significantDigits)
    {
        // this method will round and then append zeros if needed.
        // i.e. if you round .002 to two significant figures, the resulting number should be .0020.

        var currentInfo = CultureInfo.CurrentCulture.NumberFormat;

        if (double.IsNaN(value))
        {
            return currentInfo.NaNSymbol;
        }

        if (double.IsPositiveInfinity(value))
        {
            return currentInfo.PositiveInfinitySymbol;
        }

        if (double.IsNegativeInfinity(value))
        {
            return currentInfo.NegativeInfinitySymbol;
        }

        int roundingPosition;
        var roundedValue = RoundSignificantDigits(value, significantDigits, out roundingPosition);

        // when rounding causes a cascading round affecting digits of greater significance,
        // need to re-round to get a correct rounding position afterwards
        // this fixes a bug where rounding 9.96 to 2 figures yields 10.0 instead of 10
        RoundSignificantDigits(roundedValue, significantDigits, out roundingPosition);

        if (Math.Abs(roundingPosition) > 9)
        {
            // use exponential notation format
            // ReSharper disable FormatStringProblem
            return string.Format(currentInfo, "{0:E" + (significantDigits - 1) + "}", roundedValue);
            // ReSharper restore FormatStringProblem
        }

        // string.Format is only needed with decimal numbers (whole numbers won't need to be padded with zeros to the right.)
        // ReSharper disable FormatStringProblem
        return roundingPosition > 0
            ? string.Format(currentInfo, "{0:F" + roundingPosition + "}", roundedValue)
            : roundedValue.ToString(currentInfo);
        // ReSharper restore FormatStringProblem
    }

    private static double RoundSignificantDigits(double value, int significantDigits, out int roundingPosition)
    {
        // this method will return a rounded double value at a number of significant figures.
        // the sigFigures parameter must be between 0 and 15, exclusive.

        roundingPosition = 0;

        if (value.AlmostEquals(0d))
        {
            roundingPosition = significantDigits - 1;
            return 0d;
        }

        if (double.IsNaN(value))
        {
            return double.NaN;
        }

        if (double.IsPositiveInfinity(value))
        {
            return double.PositiveInfinity;
        }

        if (double.IsNegativeInfinity(value))
        {
            return double.NegativeInfinity;
        }

        if (significantDigits < 1 || significantDigits > 15)
        {
            throw new ArgumentOutOfRangeException("significantDigits", value, "The significantDigits argument must be between 1 and 15.");
        }

        // The resulting rounding position will be negative for rounding at whole numbers, and positive for decimal places.
        roundingPosition = significantDigits - 1 - (int)(Math.Floor(Math.Log10(Math.Abs(value))));

        // try to use a rounding position directly, if no scale is needed.
        // this is because the scale multiplication after the rounding can introduce error, although
        // this only happens when you're dealing with really tiny numbers, i.e 9.9e-14.
        if (roundingPosition > 0 && roundingPosition < 16)
        {
            return Math.Round(value, roundingPosition, MidpointRounding.AwayFromZero);
        }

        // Shouldn't get here unless we need to scale it.
        // Set the scaling value, for rounding whole numbers or decimals past 15 places
        var scale = Math.Pow(10, Math.Ceiling(Math.Log10(Math.Abs(value))));

        return Math.Round(value / scale, significantDigits, MidpointRounding.AwayFromZero) * scale;
    }
}
```
This might do the trick: ``` double Input1 = 1234567; string Result1 = Convert.ToDouble(String.Format("{0:G3}",Input1)).ToString("R0"); double Input2 = 0.012345; string Result2 = Convert.ToDouble(String.Format("{0:G3}", Input2)).ToString("R6"); ``` Changing the G3 to G4 produces the oddest result though. It appears to round up the significant digits?
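The scaling idea at the heart of these answers (find the position of the leading digit with a base-10 logarithm, then round at the right offset from it) is language-neutral. A quick Python sketch of the core arithmetic, for illustration only since the question is about C# (note that Python's `round` uses banker's rounding at exact midpoints, unlike the C# code's `MidpointRounding.AwayFromZero`):

```python
import math

def round_sig(value, digits):
    """Round value to `digits` significant figures."""
    if value == 0:
        return 0.0
    # position > 0 rounds at decimal places, position < 0 at whole numbers
    position = digits - 1 - math.floor(math.log10(abs(value)))
    return round(value, position)

print(round_sig(12345, 3))     # prints: 12300, as in the question
print(round_sig(0.012345, 3))  # prints: 0.0123
```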
Formatting numbers with significant figures in C#
[ "", "c#", "significant-digits", "" ]
I've been reading through a lot of the rookie Java questions on `finalize()` and find it kind of bewildering that no one has really made it plain that `finalize()` is an unreliable way to clean up resources. I saw someone comment that they use it to clean up Connections, which is really scary, since the only way to come close to a guarantee that a Connection is closed is to implement try (catch) finally.

I was not schooled in CS, but I have been programming in Java professionally for close to a decade now and I have never seen anyone implement `finalize()` in a production system. This still doesn't mean that it doesn't have its uses, or that the people I've worked with have been doing it right.

So my question is, what use cases are there for implementing `finalize()` that cannot be handled more reliably via another process or syntax within the language? Please provide specific scenarios or your experience; simply repeating a Java textbook or finalize's intended use is not enough, as that is not the intent of this question.
You could use it as a backstop for an object holding an external resource (socket, file, etc). Implement a `close()` method and document that it needs to be called. Implement `finalize()` to do the `close()` processing if you detect it hasn't been done. Maybe with something dumped to `stderr` to point out that you're cleaning up after a buggy caller. It provides extra safety in an exceptional/buggy situation. Not every caller is going to do the correct `try {} finally {}` stuff every time. Unfortunate, but true in most environments. I agree that it's rarely needed. And as commenters point out, it comes with GC overhead. Only use if you need that "belt and suspenders" safety in a long-running app. I see that as of Java 9, [`Object.finalize()`](https://docs.oracle.com/javase/9/docs/api/java/lang/Object.html#finalize--) is deprecated! They point us to [`java.lang.ref.Cleaner`](https://docs.oracle.com/javase/9/docs/api/java/lang/ref/Cleaner.html) and [`java.lang.ref.PhantomReference`](https://docs.oracle.com/javase/9/docs/api/java/lang/ref/PhantomReference.html) as alternatives.
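The "implement `close()` and document that it needs to be called" part of this answer is what Java 7's try-with-resources formalizes: any `AutoCloseable` gets its `close()` called deterministically when the block exits, with no reliance on the garbage collector. A minimal sketch (class names invented for illustration):

```java
// A resource holder whose close() is guaranteed to run when the
// try-with-resources block exits -- even if use() throws.
class TrackedResource implements AutoCloseable {
    static boolean closed = false;

    void use() {
        System.out.println("using resource");
    }

    @Override
    public void close() {
        closed = true; // release the socket/file/connection here
    }
}

public class Demo {
    public static void main(String[] args) {
        try (TrackedResource r = new TrackedResource()) {
            r.use();
        } // close() has run by this point
        System.out.println(TrackedResource.closed); // prints: true
    }
}
```

A `finalize()` (or, on Java 9+, a `Cleaner`) then only needs to serve as the backstop for callers who skip the try-with-resources form entirely.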
`finalize()` is a hint to the JVM that it might be nice to execute your code at an unspecified time. This is good when you want code to mysteriously fail to run. Doing anything significant in finalizers (basically anything except logging) is also good in three situations: * you want to gamble that other finalized objects will still be in a state that the rest of your program considers valid. * you want to add lots of checking code to all the methods of all your classes that have a finalizer, to make sure they behave correctly after finalization. * you want to accidentally resurrect finalized objects, and spend a lot of time trying to figure out why they don't work, and/or why they don't get finalized when they are eventually released. If you think you need finalize(), sometimes what you really want is a **phantom reference** (which in the example given could hold a hard reference to a connection used by its referent, and close it after the phantom reference has been queued). This also has the property that it may mysteriously never run, but at least it can't call methods on or resurrect finalized objects. So it's just right for situations where you don't absolutely need to close that connection cleanly, but you'd quite like to, and the clients of your class can't or won't call close themselves (which is actually fair enough - what's the point of having a garbage collector at all if you design interfaces that *require* a specific action be taken prior to collection? That just puts us back in the days of malloc/free.) Other times you need the resource you think you're managing to be more robust. For example, why do you need to close that connection? It must ultimately be based on some kind of I/O provided by the system (socket, file, whatever), so why can't you rely on the system to close it for you when the lowest level of resource is garbage collected?
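A small sketch of the phantom-reference approach mentioned above (the class names are invented for illustration). Note that `PhantomReference.get()` always returns `null`, which is exactly why a phantom reference cannot resurrect or call methods on its referent; cleanup is driven by polling the `ReferenceQueue` instead.

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

class PhantomCleanupDemo {
    // Holds the cleanup state (e.g. a raw connection handle), NOT the owner
    // object itself, so the reference doesn't keep the referent alive.
    static class ConnectionHandle {
        boolean open = true;
        void close() { open = false; }
    }

    static class ConnectionReference extends PhantomReference<Object> {
        final ConnectionHandle handle;
        ConnectionReference(Object referent, ReferenceQueue<Object> q, ConnectionHandle h) {
            super(referent, q);
            this.handle = h;
        }
    }

    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        ConnectionHandle handle = new ConnectionHandle();
        Object owner = new Object();
        ConnectionReference ref = new ConnectionReference(owner, queue, handle);

        // Phantom references never expose the referent:
        System.out.println(ref.get()); // always null

        owner = null;   // drop the last strong reference
        System.gc();    // only a hint; enqueueing is NOT guaranteed

        Reference<?> r = queue.poll();
        if (r instanceof ConnectionReference) {
            // Safe: by the time the reference is enqueued, the referent is
            // unreachable, so we can't accidentally resurrect it.
            ((ConnectionReference) r).handle.close();
        }
    }
}
```

This is the "may mysteriously never run" trade-off in code form: the queue may stay empty if the GC never runs, which is why it only suits cleanup you'd *like* to do, not cleanup you *must* do.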
If the server at the other end absolutely requires you to close the connection cleanly rather than just dropping the socket, then what's going to happen when someone trips over the power cable of the machine your code is running on, or the intervening network goes out? Disclaimer: I've worked on a JVM implementation in the past. I hate finalizers.
Why would you ever implement finalize()?
[ "java", "jvm", "finalize" ]
example: ``` public static void DoSomething<K,V>(IDictionary<K,V> items) { items.Keys.Each(key => { if (items[key] is IEnumerable<?>) { /* do something */ } else { /* do something else */ } }); } ``` Can this be done without using reflection? How do I say IEnumerable in C#? Should I just use IEnumerable since IEnumerable<> implements IEnumerable?
[The previously accepted answer](https://stackoverflow.com/a/74648/1968) is nice but it is wrong. Thankfully, the error is a small one. Checking for `IEnumerable` is not enough if you really want to know about the generic version of the interface; there are a lot of classes that implement only the nongeneric interface. I'll give the answer in a minute. First, though, I'd like to point out that the accepted answer is overly complicated, since the following code would achieve the same under the given circumstances: ``` if (items[key] is IEnumerable) ``` This does even more because it works for each item separately (and not on their common subclass, `V`). Now, for the correct solution. This is a bit more complicated because we have to take the generic type `` IEnumerable`1 `` (that is, the type `IEnumerable<>` with one type parameter) and inject the right generic argument: ``` static bool IsGenericEnumerable(Type t) { var genArgs = t.GetGenericArguments(); if (genArgs.Length == 1 && typeof(IEnumerable<>).MakeGenericType(genArgs).IsAssignableFrom(t)) return true; else return t.BaseType != null && IsGenericEnumerable(t.BaseType); } ``` You can test the correctness of this code easily: ``` var xs = new List<string>(); var ys = new System.Collections.ArrayList(); Console.WriteLine(IsGenericEnumerable(xs.GetType())); Console.WriteLine(IsGenericEnumerable(ys.GetType())); ``` yields: ``` True False ``` Don't be overly concerned by the fact that this uses reflection. While it's true that this adds runtime overhead, so does the use of the `is` operator. Of course the above code is awfully constrained and could be expanded into a more generally applicable method, `IsAssignableToGenericType`. The following implementation is slightly incorrect1 and I’ll leave it here *for historic purposes only*. **Do not use it**. 
Instead, [James has provided an excellent, correct implementation in his answer.](https://stackoverflow.com/a/1075059/1968) ``` public static bool IsAssignableToGenericType(Type givenType, Type genericType) { var interfaceTypes = givenType.GetInterfaces(); foreach (var it in interfaceTypes) if (it.IsGenericType) if (it.GetGenericTypeDefinition() == genericType) return true; Type baseType = givenType.BaseType; if (baseType == null) return false; return baseType.IsGenericType && baseType.GetGenericTypeDefinition() == genericType || IsAssignableToGenericType(baseType, genericType); } ``` 1 It fails when the `genericType` is the same as `givenType`; for the same reason, it fails for nullable types, i.e. ``` IsAssignableToGenericType(typeof(List<int>), typeof(List<>)) == false IsAssignableToGenericType(typeof(int?), typeof(Nullable<>)) == false ``` I’ve created a [gist with a comprehensive suite of test cases](https://gist.github.com/4174727).
Thanks very much for this post. I wanted to provide a version of Konrad Rudolph's solution that has worked better for me. I had minor issues with that version, notably when testing if a Type is a nullable value type: ``` public static bool IsAssignableToGenericType(Type givenType, Type genericType) { var interfaceTypes = givenType.GetInterfaces(); foreach (var it in interfaceTypes) { if (it.IsGenericType && it.GetGenericTypeDefinition() == genericType) return true; } if (givenType.IsGenericType && givenType.GetGenericTypeDefinition() == genericType) return true; Type baseType = givenType.BaseType; if (baseType == null) return false; return IsAssignableToGenericType(baseType, genericType); } ```
How To Detect If Type is Another Generic Type
[ "c#" ]
Can't understand why the following takes place: ``` String date = "06-04-2007 07:05"; SimpleDateFormat fmt = new SimpleDateFormat("MM-dd-yyyy HH:mm"); Date myDate = fmt.parse(date); System.out.println(myDate); //Mon Jun 04 07:05:00 EDT 2007 long timestamp = myDate.getTime(); System.out.println(timestamp); //1180955100000 -- where are the milliseconds? // on the other hand... myDate = new Date(); System.out.println(myDate); //Tue Sep 16 13:02:44 EDT 2008 timestamp = myDate.getTime(); System.out.println(timestamp); //1221584564703 -- why, oh, why? ```
What milliseconds? You are providing only minutes information in the first example, whereas your second example grabs the current date from the system with milliseconds. What is it you're looking for? ``` String date = "06-04-2007 07:05:00.999"; SimpleDateFormat fmt = new SimpleDateFormat("MM-dd-yyyy HH:mm:ss.S"); Date myDate = fmt.parse(date); System.out.println(myDate); long timestamp = myDate.getTime(); System.out.println(timestamp); ```
Instead of using the Sun JDK Time/Date libraries (which leave much to be desired) I recommend taking a look at <http://joda-time.sourceforge.net>. This is a very mature and active sourceforge project and has a very elegant API.
Date.getTime() not including time?
[ "java", "date", "timestamp", "gettime" ]
[This page](http://www.techonthenet.com/oracle/functions/trunc_date.php) mentions how to trunc a timestamp to minutes/hours/etc. in Oracle. How would you trunc a timestamp to seconds in the same manner?
Since the precision of `DATE` is to the second (and no fractions of seconds), there is no need to `TRUNC` at all. The data type `TIMESTAMP` allows for fractions of seconds. If you convert it to a `DATE` the fractional seconds will be removed - e.g. ``` select cast(systimestamp as date) from dual; ```
I am sorry, but all my predecessors seem to be wrong. ``` select cast(systimestamp as date) from dual ``` ..does not truncate, but rounds to the next second instead. I use a function: ``` CREATE OR REPLACE FUNCTION TRUNC_TS(TS IN TIMESTAMP) RETURN DATE AS BEGIN RETURN TS; END; ``` For example: ``` SELECT systimestamp ,trunc_ts(systimestamp) date_trunc ,CAST(systimestamp AS DATE) date_cast FROM dual; ``` Returns: ``` SYSTIMESTAMP DATE_TRUNC DATE_CAST 21.01.10 15:03:34,567350 +01:00 21.01.2010 15:03:34 21.01.2010 15:03:35 ```
How to trunc a date to seconds in Oracle
[ "sql", "oracle" ]
We use [Hudson](http://hudson-ci.org/) as a continuous integration system to execute automated builds (nightly and based on CVS polling) of a lot of our projects. Some projects poll CVS every 15 minutes, some others poll every 5 minutes and some poll every hour. Every few weeks we'll get a build that fails with the following output: ``` FATAL: java.io.IOException: Too many open files java.io.IOException: java.io.IOException: Too many open files at java.lang.UNIXProcess.<init>(UNIXProcess.java:148) ``` The next build always worked (with 0 changes) so we always chalked it up to 2 build jobs being run at the same time and happening to have too many files open during the process. This weekend we had a build fail Friday night (automatic nightly build) with the message and every other nightly build also failed. Somehow this triggered Hudson to continuously build every project which failed until the issue was resolved. This resulted in a build every 30 minutes or so of every project until sometime Saturday night when the issue magically disappeared.
This is Hudson issue 715 (~~<http://issues.hudson-ci.org/browse/HUDSON-715>~~). The current recommendation is to set the 'maximum number of simultaneous polling threads' to keep the polling activity down.
See <https://wiki.jenkins-ci.org/display/JENKINS/I%27m+getting+too+many+open+files+error> for what we need from you to fix this kind of problem.
How do you fix "Too many open files" problem in Hudson?
[ "java", "exception", "continuous-integration", "build-automation", "hudson" ]
I am currently tasked with updating an XML file (persistence.xml) within a jar at a customer's site. I can of course unjar the file, update the XML, then rejar the file for redeployment. I would like to wrap these command-line operations in a Swing app so that the person doing it does not have to drop to the command line. Any thoughts on a way to do this programmatically?
The Java API has [classes](http://java.sun.com/javase/6/docs/api/java/util/jar/package-summary.html) for manipulating JAR files.
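For example, the `java.util.jar` classes can rewrite an entry without shelling out to the `jar` tool at all. This is a rough sketch (file and entry names are made up): it streams every entry into a temporary jar, substituting new bytes for the one entry being updated, then moves the temp file over the original. One caveat noted in the comments: `JarInputStream` consumes a leading `META-INF/MANIFEST.MF` specially, so a jar with a manifest needs that handled too.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarInputStream;
import java.util.jar.JarOutputStream;

class JarUpdateDemo {

    // Rewrites jarPath, replacing the entry named entryName with newBytes.
    // Note: JarInputStream skips a leading manifest; pass it along via the
    // JarOutputStream(OutputStream, Manifest) constructor if you need it kept.
    static void replaceEntry(Path jarPath, String entryName, byte[] newBytes) throws IOException {
        Path tmp = Files.createTempFile("patched", ".jar");
        try (JarInputStream in = new JarInputStream(Files.newInputStream(jarPath));
             JarOutputStream out = new JarOutputStream(Files.newOutputStream(tmp))) {
            JarEntry e;
            while ((e = in.getNextJarEntry()) != null) {
                out.putNextEntry(new JarEntry(e.getName()));
                if (e.getName().equals(entryName)) {
                    out.write(newBytes);          // substitute the updated content
                } else {
                    in.transferTo(out);           // copy the original bytes unchanged
                }
                out.closeEntry();
            }
        }
        Files.move(tmp, jarPath, StandardCopyOption.REPLACE_EXISTING);
    }

    static byte[] readEntry(Path jarPath, String entryName) throws IOException {
        try (JarFile jf = new JarFile(jarPath.toFile());
             InputStream is = jf.getInputStream(jf.getEntry(entryName))) {
            return is.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway jar with one entry, then update it in place.
        Path jar = Files.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry("persistence.xml"));
            out.write("<old/>".getBytes());
            out.closeEntry();
        }
        replaceEntry(jar, "persistence.xml", "<new/>".getBytes());
        System.out.println(new String(readEntry(jar, "persistence.xml"))); // <new/>
    }
}
```

Wired behind a Swing button, this keeps the whole update inside the JVM, with no dependency on `jar` being on the customer's PATH.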
Sure: ``` File tmp = new File("tmp"); tmp.mkdirs(); // run jar with its working directory set to tmp so it extracts there Process unjar = new ProcessBuilder("jar", "-xf", "../myjar.jar").directory(tmp).start(); unjar.waitFor(); // TODO read and update persistence.xml Process jar = new ProcessBuilder("jar", "-cf", "../myjar.jar", ".").directory(tmp).start(); jar.waitFor(); ```
Modify an xml files in a jar file with Java
[ "java", "jakarta-ee", "jboss", "jar" ]
I have a C# application that includes the following code: ``` string file = "relativePath.txt"; //Time elapses... string contents = File.ReadAllText(file); ``` This works fine, most of the time. The file is read relative to the directory that the app was started from. However, in testing, it has been found that if left alone for about 5 hours, the app will throw a `FileNotFoundException` saying that "C:\Documents and Settings\Adminstrator\relativePath.txt" could not be found. If the action that reads the file is run right away though, the file is read from the proper location, which we'll call "C:\foo\relativePath.txt" What gives? And, what is the best fix? Resolving the file against `Assembly.GetEntryAssembly().Location`?
If the file is always in a path relative to the executable assembly, then yes, use Assembly.Location. I mostly use Assembly.GetExecutingAssembly instead of Assembly.GetEntryAssembly when applicable, though. This means that if you're accessing the file from a DLL, the path will be relative to the DLL's path.
One spooky place that can change your path is the OpenFileDialog. As a user navigates between folders it's changing your application directory to the one currently being looked at. If the user closes the dialog in a different directory then you will be stuck in that directory. It has a property called [RestoreDirectory](http://msdn.microsoft.com/en-us/library/system.windows.forms.filedialog.restoredirectory(VS.85).aspx) which causes the dialog to reset the path. But I believe the default is "false".
What would cause the current directory of an executing app to change?
[ "c#", ".net", "filesystems", "working-directory" ]
What I want to do is to remove all accents and umlauts from a string, turning "lärm" into "larm" or "andré" into "andre". What I tried to do was to utf8\_decode the string and then use strtr on it, but since my source file is saved as UTF-8 file, I can't enter the ISO-8859-15 characters for all umlauts - the editor inserts the UTF-8 characters. Obviously a solution for this would be to have an include that's an ISO-8859-15 file, but there must be a better way than to have another required include? ``` echo strtr(utf8_decode($input), 'ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ', 'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy'); ``` **UPDATE:** Maybe I was a bit inaccurate with what I try to do: I do not actually want to remove the umlauts, but to replace them with their closest "one character ASCII" equivalent.
``` iconv("utf-8","ascii//TRANSLIT",$input); ``` Extended [example](http://php.net/manual/en/function.iconv.php#83238)
A little trick that doesn't require setting locales or having huge translation tables: ``` function Unaccent($string) { if (strpos($string = htmlentities($string, ENT_QUOTES, 'UTF-8'), '&') !== false) { $string = html_entity_decode(preg_replace('~&([a-z]{1,2})(?:acute|cedil|circ|grave|lig|orn|ring|slash|tilde|uml);~i', '$1', $string), ENT_QUOTES, 'UTF-8'); } return $string; } ``` The only requirement for it to work properly is to save your files in UTF-8 (as you should already).
PHP: Replace umlauts with closest 7-bit ASCII equivalent in an UTF-8 string
[ "php", "utf-8", "diacritics", "strtr" ]
I'm firing off a Java application from inside of a C# [.NET](http://en.wikipedia.org/wiki/.NET_Framework) console application. It works fine for the case where the Java application doesn't care what the "default" directory is, but fails for a Java application that only searches the current directory for support files. Is there a process parameter that can be set to specify the default directory that a process is started in?
Yes! ProcessStartInfo has a property called *WorkingDirectory*; just use: ``` ... using System.Diagnostics; ... var startInfo = new ProcessStartInfo(); startInfo.WorkingDirectory = // working directory // set additional properties Process proc = Process.Start(startInfo); ```
Use the [*ProcessStartInfo.WorkingDirectory*](http://msdn.microsoft.com/en-us/library/system.diagnostics.processstartinfo.workingdirectory.aspx "Link to MSDN documentation on ProcessStartInfo.WorkingDirectory") property to set it prior to starting the process. If the property is not set, the default working directory is %SYSTEMROOT%\system32. You can determine the value of %SYSTEMROOT% by using: ``` string _systemRoot = Environment.GetEnvironmentVariable("SYSTEMROOT"); ``` Here is some sample code that opens Notepad.exe with a working directory of %ProgramFiles%: ``` ... using System.Diagnostics; ... ProcessStartInfo _processStartInfo = new ProcessStartInfo(); _processStartInfo.WorkingDirectory = @"%ProgramFiles%"; _processStartInfo.FileName = @"Notepad.exe"; _processStartInfo.Arguments = "test.txt"; _processStartInfo.CreateNoWindow = true; Process myProcess = Process.Start(_processStartInfo); ``` There is also an Environment variable that controls the current working directory for your process that you can access directly through the [*Environment.CurrentDirectory*](http://msdn.microsoft.com/en-us/library/system.environment.currentdirectory.aspx "MSDN documentation on Environment.CurrentDirectory") property .
.NET Process.Start default directory?
[ "c#" ]
I'm looking for a good ASP.NET RichTextBox component that integrates fairly easily with .NET Framework 3.5 Ajax, specifically one that can easily provide its values from inside an UpdatePanel. I got burned by RicherComponents RichTextBox which still does not reference the Framework 3.5. thanks!
Look at FCKEditor for a free solution. I'm unsure if it's usable inside an update panel, but it's free and opensource. <http://www.fckeditor.net/>
If you would consider going with an HTML editor instead of a Rich Text format editor, I recommend the Telerik web editor. It is very flexible and integrates quite solidly with Ajax.
What is the best free, Ajax.NET (System.Web.Extensions 3.5) compatible Rich Text Box control?
[ "c#", "asp.net", ".net-3.5", "richtext", "ajax.net" ]
What is the best way (in C++) to set up a container allowing for double-indexing? Specifically, I have a list of objects, each indexed by a key (possibly multiple per key). This implies a multimap. The problem with this, however, is that it means a possibly worse-than-linear lookup to find the location of an object. I'd rather avoid duplication of data, so having each object maintain its own coordinate and have to move itself in the map would be bad (not to mention that moving your own object may indirectly call your destructor whilst in a member function!). I would rather some container that maintains an index both by object pointer and coordinate, and that the objects themselves guarantee stable references/pointers. Then each object could store an iterator to the index (including the coordinate), sufficiently abstracted, and know where it is. Boost.MultiIndex seems like the best idea, but it's very scary and I don't want my actual objects to need to be const. What would you recommend? EDIT: Boost Bimap seems nice, but does it provide stable indexing? That is, if I change the coordinate, references to other elements must remain valid. The reason I want to use pointers for indexing is because objects have otherwise no intrinsic ordering, and a pointer can remain constant while the object changes (allowing its use in a Boost MultiIndex, which, IIRC, does provide stable indexing).
I'm making several assumptions based on your writeup: * Keys are cheap to copy and compare * There should be only one copy of the object in the system * The same key may refer to many objects, but only one object corresponds to a given key (one-to-many) * You want to be able to efficiently look up which objects correspond to a given key, and which key corresponds to a given object I'd suggest: * Use a linked list or some other container to maintain a global list of all objects in the system. The objects are allocated on the linked list. * Create one `std::multimap<Key, Object *>` that maps keys to object pointers, pointing to the single canonical location in the linked list. * Do one of: + Create one `std::map<Object *, Key>` that allows looking up the key attached to a particular object. Make sure your code updates this map when the key is changed. (This could also be a `std::multimap` if you need a many-to-many relationship.) + Add a member variable to the `Object` that contains the current `Key` (allowing O(1) lookups). Make sure your code updates this variable when the key is changed. Since your writeup mentioned "coordinates" as the keys, you might also be interested in reading the suggestions at [Fastest way to find if a 3D coordinate is already used](https://stackoverflow.com/questions/72128/fastest-way-to-find-if-a-3d-coordinate-is-already-used#72178).
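A minimal sketch of that layout (all names here are hypothetical): objects live in a `std::list` so their addresses stay stable, a `std::multimap` maps key to object pointer, and a reverse `std::map` lets the code keep both indexes consistent when a key changes.

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <map>
#include <string>
#include <utility>

struct Object {
    std::string name;
};

// Owns the objects and maintains both indexes, per the layout above.
class Registry {
public:
    using Key = int;  // stand-in for a packed coordinate

    Object* add(Key k, std::string name) {
        objects_.push_back(Object{std::move(name)});
        Object* p = &objects_.back();   // std::list: pointers stay valid
        byKey_.insert(std::make_pair(k, p));
        keyOf_[p] = k;
        return p;
    }

    // Change an object's key, updating both indexes.
    void rekey(Object* p, Key newKey) {
        Key old = keyOf_.at(p);
        auto range = byKey_.equal_range(old);
        for (auto it = range.first; it != range.second; ++it) {
            if (it->second == p) { byKey_.erase(it); break; }
        }
        byKey_.insert(std::make_pair(newKey, p));
        keyOf_[p] = newKey;
    }

    std::size_t countAt(Key k) const { return byKey_.count(k); }
    Key keyOf(Object* p) const { return keyOf_.at(p); }

private:
    std::list<Object> objects_;             // canonical storage, stable addresses
    std::multimap<Key, Object*> byKey_;     // key -> objects (one-to-many)
    std::map<Object*, Key> keyOf_;          // object -> its current key
};
```

Both lookups are O(log n), and because the `Registry` owns the rekey operation, an object never has to "move itself" in the map from inside one of its own member functions.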
One option would be to use two std::maps that reference shared\_ptrs. Something like this may get you going: ``` template<typename T, typename K1, typename K2> class MyBiMap { public: typedef boost::shared_ptr<T> ptr_type; void insert(const ptr_type& value, const K1& key1, const K2& key2) { _map1.insert(std::make_pair(key1, value)); _map2.insert(std::make_pair(key2, value)); } ptr_type find1(const K1& key) { typename std::map<K1, ptr_type>::const_iterator itr = _map1.find(key); if (itr == _map1.end()) throw std::runtime_error("Unable to find key"); return itr->second; } ptr_type find2(const K2& key) { typename std::map<K2, ptr_type>::const_iterator itr = _map2.find(key); if (itr == _map2.end()) throw std::runtime_error("Unable to find key"); return itr->second; } private: std::map<K1, ptr_type> _map1; std::map<K2, ptr_type> _map2; }; ``` Edit: I just noticed the multimap requirement; this still expresses the idea, so I'll leave it.
Best container for double-indexing
[ "c++", "stl", "containers" ]
I have a web application where there are number of Ajax components which refresh themselves every so often inside a page (it's a dashboard of sorts). Now, I want to add functionality to the page so that when there is no Internet connectivity, the current content of the page doesn't change and a message appears on the page saying that the page is offline (currently, as these various gadgets on the page try to refresh themselves and find that there is no connectivity, their old data vanishes). So, what is the best way to go about this?
One way to handle this might be to extend the XmlHTTPRequest object with an explicit timeout method, then use that to determine if you're working in offline mode (that is, for browsers that don't support navigator.onLine). Here's how I implemented Ajax timeouts on one site (a site that uses the [Prototype](http://prototypejs.org/ "Prototype") library). After 10 seconds (10,000 milliseconds), it aborts the call and calls the onFailure method. ``` /** * Monitor AJAX requests for timeouts * Based on the script here: http://codejanitor.com/wp/2006/03/23/ajax-timeouts-with-prototype/ * * Usage: If an AJAX call takes more than the designated amount of time to return, we call the onFailure * method (if it exists), passing an error code to the function. * */ var xhr = { errorCode: 'timeout', callInProgress: function (xmlhttp) { switch (xmlhttp.readyState) { case 1: case 2: case 3: return true; // Case 4 and 0 default: return false; } } }; // Register global responders that will occur on all AJAX requests Ajax.Responders.register({ onCreate: function (request) { request.timeoutId = window.setTimeout(function () { // If we have hit the timeout and the AJAX request is active, abort it and let the user know if (xhr.callInProgress(request.transport)) { var parameters = request.options.parameters; request.transport.abort(); // Run the onFailure method if we set one up when creating the AJAX object if (request.options.onFailure) { request.options.onFailure(request.transport, xhr.errorCode, parameters); } } }, // 10 seconds 10000); }, onComplete: function (request) { // Clear the timeout, the request completed ok window.clearTimeout(request.timeoutId); } }); ```
``` navigator.onLine ``` That should do what you're asking. You probably want to check that in whatever code you have that updates the page. Eg: ``` if (navigator.onLine) { updatePage(); } else { displayOfflineWarning(); } ```
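A sketch of how the dashboard's refresh code might use this (the helper function names here are invented, not a standard API). One caution: `navigator.onLine` is only a hint, since some browsers report `true` whenever there is a LAN connection even with no actual internet access, so pairing it with a request timeout (as in the other answer) is a useful complement.

```javascript
// Hypothetical helpers; only navigator.onLine and the "online"/"offline"
// window events are real browser APIs.
function statusMessage(onLine) {
  return onLine ? "online" : "offline: showing last known data";
}

function currentlyOnline() {
  // In non-browser environments (or very old browsers) navigator may be
  // missing or lack onLine; assume online in that case.
  return typeof navigator === "undefined" || navigator.onLine !== false;
}

function refreshDashboard() {
  if (currentlyOnline()) {
    // updatePage();  // kick off the gadgets' AJAX refreshes
  } else {
    // Leave the old data on screen and show the offline banner instead.
    console.log(statusMessage(false));
  }
}

// In a browser, react to connectivity changes instead of only polling:
if (typeof window !== "undefined" && window.addEventListener) {
  window.addEventListener("online", function () { console.log(statusMessage(true)); });
  window.addEventListener("offline", function () { refreshDashboard(); });
}

refreshDashboard();
```

Gating every gadget's refresh behind one `currentlyOnline()` check is what keeps the stale-but-valid data from being wiped when a refresh fails.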
What is the best method to detect offline mode in the browser?
[ "javascript", "ajax", "offline-mode" ]
Ok, I have two modules, each containing a class; the problem is their classes reference each other. Let's say for example I had a room module and a person module containing CRoom and CPerson. The CRoom class contains information about the room and a CPerson list of everyone in the room. The CPerson class however sometimes needs to use the CRoom class for the room it's in, for example to find the door, or to see who else is in the room. The problem is with the two modules importing each other I just get an import error on whichever is being imported second :( In C++ I could solve this by only including the headers, and since in both cases the classes just have pointers to the other class, a forward declaration would suffice for the header eg: ``` class CPerson;//forward declare class CRoom { std::set<CPerson*> People; ... ``` Is there any way to do this in python, other than placing both classes in the same module or something like that? edit: added python example showing problem using above classes error: > Traceback (most recent call last): > File "C:\Projects\python\test\main.py", line 1, in > from room import CRoom > File "C:\Projects\python\test\room.py", line 1, in > from person import CPerson > File "C:\Projects\python\test\person.py", line 1, in > from room import CRoom > ImportError: cannot import name CRoom > room.py ``` from person import CPerson class CRoom: def __init__(Self): Self.People = {} Self.NextId = 0 def AddPerson(Self, FirstName, SecondName, Gender): Id = Self.NextId Self.NextId += 1# Person = CPerson(FirstName,SecondName,Gender,Id) Self.People[Id] = Person return Person def FindDoorAndLeave(Self, PersonId): del Self.People[PeopleId] ``` person.py ``` from room import CRoom class CPerson: def __init__(Self, Room, FirstName, SecondName, Gender, Id): Self.Room = Room Self.FirstName = FirstName Self.SecondName = SecondName Self.Gender = Gender Self.Id = Id def Leave(Self): Self.Room.FindDoorAndLeave(Self.Id) ```
**No need to import CRoom** You don't use `CRoom` in `person.py`, so don't import it. Due to dynamic binding, Python doesn't need to "see all class definitions at compile time". If you actually *do* use `CRoom` in `person.py`, then change `from room import CRoom` to `import room` and use module-qualified form `room.CRoom`. See [Effbot's Circular Imports](http://effbot.org/zone/import-confusion.htm#circular-imports) for details. *Sidenote:* you probably have an error in `Self.NextId += 1` line. It increments `NextId` of instance, not `NextId` of class. To increment class's counter use `CRoom.NextId += 1` or `Self.__class__.NextId += 1`.
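A runnable sketch of the second option (the module-qualified fix). It generates simplified `room.py`/`person.py` files in a temp directory; `person.py` uses plain `import room` instead of `from room import CRoom`, so the circular import resolves because Python binds the module name immediately but looks up attributes only when they're used. The class and method names are simplified stand-ins for the originals.

```python
import os
import sys
import tempfile
import textwrap

pkg = tempfile.mkdtemp()

# room.py imports person at module level and uses it inside a method.
with open(os.path.join(pkg, "room.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import person

        class CRoom:
            def __init__(self):
                self.people = {}
                self.next_id = 0

            def add_person(self, name):
                # attribute lookup happens at call time, so person is
                # fully initialised by now
                p = person.CPerson(self, name, self.next_id)
                self.people[self.next_id] = p
                self.next_id += 1
                return p
    """))

# person.py imports room the same way; the partially-initialised module
# object already exists in sys.modules, so this does NOT raise ImportError.
with open(os.path.join(pkg, "person.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import room

        class CPerson:
            def __init__(self, room_, name, id_):
                self.room = room_
                self.name = name
                self.id = id_
    """))

sys.path.insert(0, pkg)
import room  # triggers the circular chain: room -> person -> room

r = room.CRoom()
p = r.add_person("Ada")
print(p.name, p.id)  # Ada 0
```

With `from room import CRoom` in `person.py` this same chain raises `ImportError: cannot import name CRoom`, because `room`'s body hasn't reached the class definition yet when `person` asks for it.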
Do you actually need to reference the classes at class definition time? ie. ``` class CRoom(object): person = CPerson("a person") ``` Or (more likely), do you just need to use CPerson in the methods of your class (and vice versa). eg: ``` class CRoom(object): def getPerson(self): return CPerson("someone") ``` If the second, there's no problem - as by the time the method gets **called** rather than defined, the module will be imported. Your sole problem is how to refer to it. Likely you're doing something like: ``` from CRoom import CPerson # or even import * ``` With circularly referencing modules, you can't do this, as at the point one module imports another, the original modules body won't have finished executing, so the namespace will be incomplete. Instead, use qualified references. ie: ``` #croom.py import cperson class CRoom(object): def getPerson(self): return cperson.CPerson("someone") ``` Here, python doesn't need to lookup the attribute on the namespace until the method actually gets called, by which time both modules should have completed their initialisation.
Python module dependency
[ "python", "module", "circular-dependency" ]
If you have two jars in your classpath that contain different versions of the same class, the classpath order becomes critical. I am looking for a tool that can detect and flag such potential conflicts in a given classpath or set of folders. Certainly a script that starts: ``` classes=`mktemp` for i in `find . -name "*.jar"` do echo "File: $i" >> $classes jar tf $i >> $classes ... done ``` with some clever sort/uniq/diff/grep/awk later on has potential, but I was wondering if anyone knows of any existing solutions.
The [Tattletale](http://www.jboss.org/tattletale) tool from JBoss is another candidate: "Spot if a class/package is located in multiple JAR files"
Looks like [jarfish](http://code.google.com/p/jarfish/wiki/Intro) will do what you want with its "dupes" command.
Is there a tool to discover if the same class exists in multiple jars in the classpath?
[ "java", "jar", "classpath" ]
I've seen both done in some code I'm maintaining, but don't know the difference. Is there one? let me add that myCustomer is an instance of Customer
The result of both are exactly the same in your case. It will be your custom type that derives from `System.Type`. The only real difference here is that when you want to obtain the type from an instance of your class, you use `GetType`. If you don't have an instance, but you know the type name (and just need the actual `System.Type` to inspect or compare to), you would use `typeof`. ### Important difference EDIT: Let me add that the call to `GetType` gets resolved at runtime, while `typeof` is resolved at compile time.
GetType() is used to find the *actual* type of a object reference at run-time. This can be different from the type of the variable that references the object, because of inheritance. typeof() creates a Type literal that is of the exact type specified and is determined at compile-time.
What is the difference between myCustomer.GetType() and typeof(Customer) in C#?
[ "c#", ".net" ]
Which is your favorite way to go with strings in C++? A C-style array of `char`s? Or `wchar_t`? `CString`, `std::basic_string`, `std::string`, `BSTR` or `CComBSTR`? Certainly each of these has its own area of application, but anyway, which is your favorite and why?
std::string or std::wstring, depending on your needs. Why? * They're standard * They're portable * They can handle I18N * They have performance guarantees (as per the standard) * Protected against buffer overflows and similar attacks * Are easily converted to other types as needed * Are nicely templated, giving you a wide variety of options while reducing code bloat and improving performance. Really. Compilers that can't handle templates are long gone now. A C-style array of chars is just asking for trouble. You'll still need to deal with them on occasion (and that's what std::string.c\_str() is for), but, honestly -- one of the biggest dangers in C is programmers doing Bad Things with char\* and winding up with buffer overflows. Just don't do it. An array of wchar\_t is the same thing, just bigger. CString, BSTR, and CComBSTR are not standard and not portable. Avoid them unless absolutely forced. Optimally, just convert a std::string/std::wstring to them when needed, which shouldn't be very expensive. Note that std::string is just a typedef of std::basic\_string<char>, but you're still better off using std::string unless you have a really good reason not to. Really Good. Let the compiler take care of the optimization in this situation.
**std::string** !! There's a reason why they call it a "Standard". basic\_string is an implementation detail and should be ignored. BSTR & CComBSTR only for interOp with COM, and only for the moment of interop.
How do you handle strings in C++?
[ "c++", "string" ]
I have a C++ program that when run, by default, displays the X in the upper right corner. Clicking X, minimizes the program. I've added code using the SHInitDialog function to change the X to OK, so that clicking OK exits the program. My question: Is there a better method that applies to the window, since SHInitDialog works best with Dialog Boxes?
With Windows Mobile 5.0 and higher, using the CreateWindowEx function passing it WS\_EX\_CAPTIONOKBTN for the extended style works. @ctacke SHDoneButton may have also worked but I wanted to change the main window without handling it like a dialogbox, which is basically what SHInitDialog is doing.
Take a look at [SHDoneButton](http://msdn.microsoft.com/en-us/library/aa453682.aspx) API.
On Windows Mobile device, what is the best way to display an OK button instead of the X button?
[ "c++", "windows-mobile" ]
I'm doing a tech review and looking at AMF integration with various backends (Rails, Python, Grails etc). Lots of options are out there, question is, what do the Adobe products do (BlazeDS etc) that something like RubyAMF / pyAMF don't?
Other than the NIO (RTMP) channels, LCDS also includes the "data management" features. Using this feature, you basically implement, in an ActionScript class, a CRUD-like interface defined by LCDS, and you get:

* automatic progressive list loading (large lists/datagrids load while scrolling)
* automatic CRUD management (you get the object locally in Flash, modify it, send it back, and the DB gets updated automatically)
* conflict resolution (if multiple users try to update the same record at the same time)
* if I remember well, also some improved integration with the LiveCycle ES workflow engine

IMO, it can be very fast to develop this way, but only if you have only basic requirements and a simple architecture (forget SOA, that otherwise works so well with Flex). I'm fine with BlazeDS.
The data management features for LCDS described here are certainly valid, however I believe they do not let you actually develop a solution faster. A developer still has to write ALL the data access code: query execution, extracting data from data readers into value objects. ALL of this has been solved a dozen times with code generators. For instance, the data management approach in WebORB for Java (much like in WebORB for .NET and PHP) is based on code generation which creates code for both the client side AND the server side. You get all the ActionScript APIs out of the code generator to do full CRUD. Additionally, WebORB provides video streaming and real-time messaging features and goes WAY beyond what both BlazeDS and LCDS offer combined, especially considering that the product is free. Just google it.
What does BlazeDS/LiveCycle Data Services do that something like PyAMF or RubyAMF does not?
[ "", "python", "ruby-on-rails", "ruby", "apache-flex", "blazeds", "" ]
I have a large 2d array that I serialize and base64\_encode and throw into a database. On a different page I pull the array out and when I `base64_decode` the serialized array I can echo it out and it definitely looks valid. However, if I try to `unserialize(base64_decode($serializedArray))` it just throws the same error to the point of nearly crashing Firefox. The error is: > Warning: `unserialize()` [function.unserialize]: Node no longer exists in /var/www/dev/wc\_paul/inc/analyzerTester.php on line 24 I would include the entire serialized array that I echo out but last time I tried that on this form it crashed my Firefox. Does anyone have any idea why this might be happening?
Are you sure you're just serializing an array, and not an object (e.g. DOMNode?) Like resources, not all classes are going to be happy with being unserialized. As an example with the DOM (which your error suggests to me you're working with), every node has a reference to the parentNode, and if the parentNode doesn't exist at the moment a node is being unserialized, it's not able to recreate that reference and problems ensue. I would suggest saving the dom tree as XML to the database and loading it back later.
Make sure that the database field is large enough to hold the serialized array. Serialized data is **very** space-inefficient in PHP, and many DBs (like MySQL) will silently truncate field values that are too long.
PHP unserialize keeps throwing same error over 100 times
[ "", "php", "mysql", "serialization", "" ]
I've updated **php.ini** and moved **php\_mysql.dll** as explained in [steps 6 and 8 here.](https://stackoverflow.com/questions/11919/how-do-i-get-php-and-mysql-working-on-iis-70#94341) I get this error… ``` Fatal error: Call to undefined function mysql_connect() in C:\inetpub... ``` MySQL doesn't show up in my **phpinfo;** report. --- I've updated the **c:\Windows\php.ini** file from ``` ; Directory in which the loadable extensions (modules) reside. extension_dir = "./" ``` to ``` ; Directory in which the loadable extensions (modules) reside. extension_dir = ".;c:\Windows\System32" ``` Result: no change. --- I changed the **php.ini** value of extension\_dir thusly: ``` extension_dir = "C:\Windows\System32" ``` Result: much more in the **phpinfo;** report, but MySQL still isn't working. --- I copied the file **libmysql.dll** from folder **C:\php** to folders **C:\Windows\System32** and **C:\Windows** Result: no change. --- I stopped and **restarted IIS**. Result: new, different errors instead! ``` Warning: mysql_connect() [function.mysql-connect]: Access denied for user '...'@'localhost' (using password: YES) in C:\inetpub\... error in query. ``` ``` Fatal error: Call to a member function RecordCount() on a non-object in C:\inetpub\... ``` --- I found several .php files in the website where I had to set variables: ``` $db_user $db_pass ``` Result: The site works!
As the others say, these two values in php.ini are crucial. I have the following in my php.ini: note the trailing slash - not sure if it is needed - but it does work.

```
extension_dir = "H:\apps\php\ext\"
extension=php_mysql.dll
```

Also it is worth ensuring that you only have one copy of php.ini on your machine - I've had problems with this where I've been editing a php.ini file which php isn't using and getting very frustrated until I realised. Also if php is running as a module within apache you will need to restart the apache server to pick up the changes. Wise to do this in any case if you're not sure. A "php -m" from the cmd prompt will show you the modules that are loaded from the ini file.
In the php.ini file, check if the extension path configuration is valid.
How do I install MySQL modules within PHP?
[ "", "php", "mysql", "windows", "" ]
I have an object of the type System.Drawing.Image and want to make every pixel which has some specific color, for example black, transparent (that is, set alpha to 0 for this pixel). What is the best way to do this?
One good approach is to use the **ImageAttributes** class to set up a list of colors to remap when drawing takes place. The advantage of this is good performance as well as allowing you to alter the remapping colors very easily. Try something like this code...

```
ImageAttributes attribs = new ImageAttributes();
List<ColorMap> colorMaps = new List<ColorMap>();
//
// Remap black to be transparent
ColorMap remap = new ColorMap();
remap.OldColor = Color.Black;
remap.NewColor = Color.Transparent;
colorMaps.Add(remap);
//
// ...add additional remapping entries here...
//
attribs.SetRemapTable(colorMaps.ToArray(), ColorAdjustType.Bitmap);

context.Graphics.DrawImage(image, imageRect, 0, 0, imageRect.Width, imageRect.Height, GraphicsUnit.Pixel, attribs);
```
Construct a Bitmap from the Image, and then call MakeTransparent() on that Bitmap. It allows you to specify a colour that should be rendered as transparent.
Making every pixel of an image having a specific color transparent
[ "", "c#", ".net", "image", "transparency", "" ]
I would like display something more meaningful that animated gif while users upload file to my web application. What possibilities do I have? *Edit: I am using .Net but I don't mind if somebody shows me platform agnostic version.*
Here are a couple of versions of what you're looking for for some common JavaScript toolkits. * Mootools - <http://digitarald.de/project/fancyupload/> * Extjs - <http://extjs.com/learn/Extension:UploadForm>
If you are interested in how all this generally works client-side, this is it: all the solutions hook up the form via JavaScript and change the form's target to a newly created, invisible IFRAME. Then they are free to use AJAX to request some status about the file from the server. The IFRAME trick is needed because all the scripts running in the window that is doing the upload will hang until the request is completed, at which time the file is fully uploaded.
How to implement file upload progress bar on web?
[ "", ".net", "javascript", "ajax", "" ]
I need to replace all WinAPI calls of the

* CreateFile,
* ReadFile,
* SetFilePointer,
* CloseHandle

with my own implementation (which uses low-level file reading via Bluetooth). The code where the functions will be replaced is a video file player, and it already works with regular HDD files. It is also needed that the video player can still play files from the HDD, if the file in the player's input is a regular HDD file. What is the best practice for such a task?
I suggest that you follow these steps:

1. Write a set of wrapper functions, e.g. MyCreateFile, MyReadFile, etc., that initially just call the corresponding API and pass the same arguments along, unmodified.
2. Use your text editor to search for all calls to the original APIs, and replace these with calls to your new wrapper functions.
3. Test that the application still functions correctly.
4. Modify the wrapper functions to suit your own purposes.

Note that CreateFile is a macro which expands to either CreateFileW or CreateFileA, depending on whether UNICODE is defined. Consider using LPCTSTR and the [TCHAR functions](http://msdn.microsoft.com/en-us/library/c426s321%28VS.80%29.aspx) so that your application can be built as either ANSI or Unicode. Please don't use #define, as suggested in other responses here, as this will just lead to maintenance problems, and as Maximilian correctly points out, it's not a best practice.
You could just write your new functions in a custom namespace. e.g. ``` namespace Bluetooth { void CreateFile(/*params*/); void etc... } ``` Then in your code, the only thing you would have to change is: ``` if (::CreateFile(...)) { } ``` to ``` if (Bluetooth::CreateFile(...)) { } ``` Easy! :)
How to replace WinAPI functions calls in the MS VC++ project with my own implementation (name and parameters set are the same)?
[ "", "c++", "winapi", "visual-c++", "" ]
Greetings, I need a way (either via C# or in a .bat file) to get a list of all the computers on a given network. Normally, I use "net view", but this tends to work (from my understanding) only within your domain. I need the names (or at least the IP Addresses) of all computers available on my network. Being able to get all computers on a domain that isn't mine (in which case I'd use WORKGROUP, or whatever the default is) would also work.
[Nmap](http://nmap.org/) is good for this - use the -O option for OS fingerprinting and -oX "filename.xml" for [output](http://nmap.org/book/man-output.html) as xml that you can then parse from c#. A suitable commandline would be (where 192.168.0.0/24 is the subnet to scan): ``` nmap -O -oX "filename.xml" 192.168.0.0/24 ``` leave out the -O if you aren't interested in guessing the OS - if you just want a ping sweep use -sP, or read the docs for the myriad other options.
To expand on what Unkwntech has said - You can also do a "broadcast" ping to avoid having to ping each IP address individually. Immediately after than you can use "arp" to examine the ARP cache and get a list of which IP addresses are on which MAC address.
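To sketch the broadcast-ping-then-arp idea (the addresses and the sample `arp -a` output below are illustrative, not from a real network):

```shell
# Prime the ARP cache, then list cached neighbours:
#   ping -b -c 2 192.168.0.255
#   arp -a
# A typical "arp -a" line looks like the sample below; extracting just the
# IPs is a matter of pulling out the text between the parentheses.
sample_arp_output='host1 (192.168.0.2) at 00:11:22:33:44:55 [ether] on eth0
host2 (192.168.0.7) at 66:77:88:99:aa:bb [ether] on eth0'

extract_ips() {
  # keep only the dotted-quad between the parentheses on each line
  sed -n 's/.*(\([0-9.]*\)).*/\1/p'
}

printf '%s\n' "$sample_arp_output" | extract_ips
```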
Get a list of all computers on a network w/o DNS
[ "", "c#", ".net", "networking", "batch-file", "" ]
Here's the situation. I have a webservice (C# 2.0), which consists of (mainly) a class inheriting from System.Web.Services.WebService. It contains a few methods, which all need to call a method that checks if they're authorized or not. Basically something like this (pardon the architecture, this is purely as an example):

```
public class ProductService : WebService
{
    public AuthHeader AuthenticationHeader;

    [WebMethod(Description="Returns true")]
    [SoapHeader("AuthenticationHeader")]
    public bool MethodWhichReturnsTrue()
    {
        if(!Validate(AuthenticationHeader))
        {
            throw new SecurityException("Access Denied");
        }
        return true;
    }

    [WebMethod(Description="Returns false")]
    [SoapHeader("AuthenticationHeader")]
    public bool MethodWhichReturnsFalse()
    {
        if(!Validate(AuthenticationHeader))
        {
            throw new SecurityException("Access Denied");
        }
        return false;
    }

    private bool Validate(AuthHeader authHeader)
    {
        return authHeader.Username == "gooduser" && authHeader.Password == "goodpassword";
    }
}
```

As you can see, the method `Validate` has to be called in each method. I'm looking for a way to be able to call that method, while still being able to access the soap headers in a sane way. I've looked at the events in the `global.asax`, but I don't think I can access the headers in that class... Can I?
Here is what you need to do to get this to work correctly. It is possible to create your own custom SoapHeader: ``` public class ServiceAuthHeader : SoapHeader { public string SiteKey; public string Password; public ServiceAuthHeader() {} } ``` Then you need a SoapExtensionAttribute: ``` public class AuthenticationSoapExtensionAttribute : SoapExtensionAttribute { private int priority; public AuthenticationSoapExtensionAttribute() { } public override Type ExtensionType { get { return typeof(AuthenticationSoapExtension); } } public override int Priority { get { return priority; } set { priority = value; } } } ``` And a custom SoapExtension: ``` public class AuthenticationSoapExtension : SoapExtension { private ServiceAuthHeader authHeader; public AuthenticationSoapExtension() { } public override object GetInitializer(Type serviceType) { return null; } public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute) { return null; } public override void Initialize(object initializer) { } public override void ProcessMessage(SoapMessage message) { if (message.Stage == SoapMessageStage.AfterDeserialize) { foreach (SoapHeader header in message.Headers) { if (header is ServiceAuthHeader) { authHeader = (ServiceAuthHeader)header; if(authHeader.Password == TheCorrectUserPassword) { return; //confirmed } } } throw new SoapException("Unauthorized", SoapException.ClientFaultCode); } } } ``` Then, in your web service add the following header to your method: ``` public ServiceAuthHeader AuthenticationSoapHeader; [WebMethod] [SoapHeader("AuthenticationSoapHeader")] [AuthenticationSoapExtension] public string GetSomeStuffFromTheCloud(string IdOfWhatYouWant) { return WhatYouWant; } ``` When you consume this service, you must instantiate the custom header with the correct values and attach it to the request: ``` private ServiceAuthHeader header; private PublicService ps; header = new ServiceAuthHeader(); header.SiteKey = "Thekey"; header.Password 
= "Thepassword"; ps.ServiceAuthHeaderValue = header; string WhatYouWant = ps.GetSomeStuffFromTheCloud(SomeId); ```
You can implement the so-called SOAP extension by deriving from [SoapExtension base](http://msdn.microsoft.com/en-us/library/system.web.services.protocols.soapextension.aspx) class. That way you will be able to inspect an incoming SOAP message and perform validate logic before a particular web method is called.
Call a certain method before each webservice call
[ "", "c#", ".net", "web-services", "" ]
I am using Apache Axis to connect my Java app to a web server. I used wsdl2java to create the stubs for me, but when I try to use the stubs, I get the following exception: ``` org.apache.axis.ConfigurationException: No service named `<web service name>` is available ``` What is happening?
Just a guess, but it looks like that error message is reporting that you've left the service name blank. I imagine the code that generates that error message looks like this: ``` throw new ConfigurationException("No service named" + serviceName + " is available"); ```
According to the [documentation](http://wiki.apache.org/ws/FrontPage/Axis/DealingWithCommonExceptions) linked to by @arnonym, this exception is somewhat misleading. In the first attempt to find the service a ConfigurationException is thrown and caught. It is logged at DEBUG level by the ConfigurationException class. Then another attempt is made using a different method to find the service that may then succeed. The workaround for this is to just change the log level on the ConfigurationException class to INFO in your log4j.properties: ``` log4j.logger.org.apache.axis.ConfigurationException = INFO ```
Apache Axis ConfigurationException
[ "", "java", "apache", "apache-axis", "" ]
I'm trying to put an html embed code for a flash video into the `rss feed`, which will then be parsed by a parser `(magpie)` on my other site. How should I encode the embed code on one side, and then decode it on the other so I can insert clean html into the `DB` on the receiving server?
Since RSS is XML, you might want to check out CDATA, which I believe is valid in the various RSS specs. <summary><![CDATA[Data Here]]> Here's the w3schools entry on it: <http://www.w3schools.com/XML/xml_cdata.asp>
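For example, an RSS item wrapping an embed snippet in CDATA might look like this (the element values and URL are illustrative):

```xml
<item>
  <title>Flash video</title>
  <description><![CDATA[
    <object width="425" height="350">
      <param name="movie" value="http://example.com/video.swf" />
      <embed src="http://example.com/video.swf" width="425" height="350"></embed>
    </object>
  ]]></description>
</item>
```

Everything inside the CDATA section is passed through unparsed, so the angle brackets in the embed code don't need entity-encoding.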
htmlencode/htmldecode should do the trick.
What's the best way to pass html embed code via rss feed to a rss parser in php?
[ "", "php", "html", "xml", "validation", "rss", "" ]
How do I cause the page to make the user jump to a new web page after X seconds. If possible I'd like to use HTML but a niggly feeling tells me it'll have to be Javascript. So far I have the following but it has no time delay ``` <body onload="document.location='newPage.html'"> ```
A meta refresh is ugly but will work. The following will go to the new url after 5 seconds: ``` <meta http-equiv="refresh" content="5;url=http://example.com/"/> ``` <http://en.wikipedia.org/wiki/Meta_refresh>
If you are going the JS route, just use

```
setTimeout(function() { window.location.href = 'newPage.html'; }, 5000);
```
Delayed jump to a new web page
[ "", "javascript", "html", "" ]
For all major browsers (except IE), the JavaScript `onload` event doesn’t fire when the page loads as a result of a back button operation — it only fires when the page is first loaded. Can someone point me at some sample cross-browser code (Firefox, Opera, Safari, IE, …) that solves this problem? I’m familiar with Firefox’s `pageshow` event but unfortunately neither Opera nor Safari implement this.
Guys, I found that JQuery has only one effect: the page is reloaded when the back button is pressed. This has nothing to do with "**ready**". How does this work? Well, JQuery adds an **onunload** event listener. ``` // http://code.jquery.com/jquery-latest.js jQuery(window).bind("unload", function() { // ... ``` By default, it does nothing. But somehow this seems to trigger a reload in Safari, Opera and Mozilla -- no matter what the event handler contains. [*edit(Nickolay)*: here's why it works that way: [webkit.org](http://webkit.org/blog/516/webkit-page-cache-ii-the-unload-event/), [developer.mozilla.org](https://developer.mozilla.org/En/Using_Firefox_1.5_caching). Please read those articles (or my summary in a separate answer below) and consider whether you *really* need to do this and make your page load slower for your users.] Can't believe it? Try this: ``` <body onunload=""><!-- This does the trick --> <script type="text/javascript"> alert('first load / reload'); window.onload = function(){alert('onload')}; </script> <a href="http://stackoverflow.com">click me, then press the back button</a> </body> ``` You will see similar results when using JQuery. You may want to compare to this one without **onunload** ``` <body><!-- Will not reload on back button --> <script type="text/javascript"> alert('first load / reload'); window.onload = function(){alert('onload')}; </script> <a href="http://stackoverflow.com">click me, then press the back button</a> </body> ```
Some modern browsers (Firefox, Safari, and Opera, but not Chrome) support the special "back/forward" cache (I'll call it bfcache, which is a term invented by Mozilla), involved when the user navigates Back. Unlike the regular (HTTP) cache, it captures the complete state of the page (including the state of JS, DOM). This allows it to re-load the page quicker and exactly as the user left it. The `load` event is not supposed to fire when the page is loaded from this bfcache. For example, if you created your UI in the "load" handler, and the "load" event was fired once on the initial load, and the second time when the page was re-loaded from the bfcache, the page would end up with duplicate UI elements. This is also why adding the "unload" handler stops the page from being stored in the bfcache (thus making it slower to navigate back to) -- the unload handler could perform clean-up tasks, which could leave the page in unworkable state. For pages that need to know when they're being navigated away/back to, Firefox 1.5+ and the version of Safari with the fix for [bug 28758](https://bugs.webkit.org/show_bug.cgi?id=28758) support special events called "pageshow" and "pagehide". References: * Webkit: <http://webkit.org/blog/516/webkit-page-cache-ii-the-unload-event/> * Firefox: <https://developer.mozilla.org/En/Using_Firefox_1.5_caching>. * Chrome: <https://code.google.com/p/chromium/issues/detail?id=2879>
Is there a cross-browser onload event when clicking the back button?
[ "", "javascript", "pageload", "" ]
I was wondering how to use `cin` so that if the user does not enter in any value and just pushes `ENTER` that `cin` will recognize this as valid input.
You will probably want to try `std::getline`: ``` #include <iostream> #include <string> std::string line; std::getline( std::cin, line ); if( line.empty() ) ... ```
I find that for user input `std::getline` works very well. You can use it to read a line and just discard what it reads. The problem with doing things like this,

```
// Read a number:
std::cout << "Enter a number:";
std::cin >> my_double;
std::cout << "Hit enter to continue:";
std::cin >> throwaway_char; // Hmmmm, does this work?
```

is that if the user enters other garbage e.g. "4.5 - about" it is all too easy to get out of sync and to read what the user wrote the last time before printing the prompt that he needs to see the next time. If you read every complete line with `std::getline( std::cin, a_string )` and then parse the returned string (e.g. using an istringstream or other technique) it is much easier to keep the printed prompts in sync with reading from std::cin, even in the face of garbled input.
C++ having cin read a return character
[ "", "c++", "input", "return", "iostream", "cin", "" ]
What's the difference between `TRUNCATE` and `DELETE` in SQL? If your answer is platform specific, please indicate that.
Here's a list of differences. I've highlighted Oracle-specific features, and hopefully the community can add in other vendors' specific differences also. Differences that are common to most vendors can go directly below the headings, with differences highlighted below. --- # General Overview If you want to quickly delete all of the rows from a table, and you're really sure that you want to do it, and you do not have foreign keys against the tables, then a TRUNCATE is probably going to be faster than a DELETE. Various system-specific issues have to be considered, as detailed below. --- # Statement type Delete is DML, Truncate is DDL ([What is DDL and DML?](https://stackoverflow.com/q/2578194/276052)) --- # Commit and Rollback Variable by vendor **SQL\*Server** Truncate can be rolled back. **PostgreSQL** Truncate can be rolled back. **Oracle** Because a TRUNCATE is DDL it involves two commits, one before and one after the statement execution. Truncate can therefore not be rolled back, and a failure in the truncate process will have issued a commit anyway. However, see Flashback below. --- # Space reclamation Delete does not recover space, Truncate recovers space **Oracle** If you use the REUSE STORAGE clause then the data segments are not de-allocated, which can be marginally more efficient if the table is to be reloaded with data. The high water mark is reset. --- # Row scope Delete can be used to remove all rows or only a subset of rows. Truncate removes all rows. **Oracle** When a table is partitioned, the individual partitions can be truncated in isolation, thus a partial removal of all the table's data is possible. --- # Object types Delete can be applied to tables and tables inside a cluster. Truncate applies only to tables or the entire cluster.
(May be Oracle specific) --- # Data Object Identity **Oracle** Delete does not affect the data object id, but truncate assigns a new data object id *unless* there has never been an insert against the table since its creation Even a single insert that is rolled back will cause a new data object id to be assigned upon truncation. --- # Flashback (Oracle) Flashback works across deletes, but a truncate prevents flashback to states prior to the operation. However, from 11gR2 the FLASHBACK ARCHIVE feature allows this, except in Express Edition [Use of FLASHBACK in Oracle](https://stackoverflow.com/questions/25950145/use-of-flashback-in-oracle) <http://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_flashback.htm#ADFNS638> --- # Privileges Variable **Oracle** Delete can be granted on a table to another user or role, but truncate cannot be without using a DROP ANY TABLE grant. --- # Redo/Undo Delete generates a small amount of redo and a large amount of undo. Truncate generates a negligible amount of each. --- # Indexes **Oracle** A truncate operation renders unusable indexes usable again. Delete does not. --- # Foreign Keys A truncate cannot be applied when an enabled foreign key references the table. Treatment with delete depends on the configuration of the foreign keys. --- # Table Locking **Oracle** Truncate requires an exclusive table lock, delete requires a shared table lock. Hence disabling table locks is a way of preventing truncate operations on a table. --- # Triggers DML triggers do not fire on a truncate. **Oracle** DDL triggers are available. --- # Remote Execution **Oracle** Truncate cannot be issued over a database link. --- # Identity Columns **SQL\*Server** Truncate resets the sequence for IDENTITY column types, delete does not. --- # Result set In most implementations, a `DELETE` statement can return to the client the rows that were deleted. e.g. 
in an Oracle PL/SQL subprogram you could: ``` DELETE FROM employees_temp WHERE employee_id = 299 RETURNING first_name, last_name INTO emp_first_name, emp_last_name; ```
The difference between truncate and delete is listed below: ``` +----------------------------------------+----------------------------------------------+ | Truncate | Delete | +----------------------------------------+----------------------------------------------+ | We can't Rollback after performing | We can Rollback after delete. | | Truncate. | | | | | | Example: | Example: | | BEGIN TRAN | BEGIN TRAN | | TRUNCATE TABLE tranTest | DELETE FROM tranTest | | SELECT * FROM tranTest | SELECT * FROM tranTest | | ROLLBACK | ROLLBACK | | SELECT * FROM tranTest | SELECT * FROM tranTest | +----------------------------------------+----------------------------------------------+ | Truncate reset identity of table. | Delete does not reset identity of table. | +----------------------------------------+----------------------------------------------+ | It locks the entire table. | It locks the table row. | +----------------------------------------+----------------------------------------------+ | Its DDL(Data Definition Language) | Its DML(Data Manipulation Language) | | command. | command. | +----------------------------------------+----------------------------------------------+ | We can't use WHERE clause with it. | We can use WHERE to filter data to delete. | +----------------------------------------+----------------------------------------------+ | Trigger is not fired while truncate. | Trigger is fired. | +----------------------------------------+----------------------------------------------+ | Syntax : | Syntax : | | 1) TRUNCATE TABLE table_name | 1) DELETE FROM table_name | | | 2) DELETE FROM table_name WHERE | | | example_column_id IN (1,2,3) | +----------------------------------------+----------------------------------------------+ ```
What's the difference between TRUNCATE and DELETE in SQL
[ "", "sql", "database", "truncate", "" ]
I need to remove temp files on Tomcat startup; the path to the folder which contains the temp files is in applicationContext.xml. Is there a way to run a method/class only on Tomcat startup?
You could write a `ServletContextListener` which calls your method from the `contextInitialized()` method. You attach the listener to your webapp in web.xml, e.g. ``` <listener> <listener-class>my.Listener</listener-class> </listener> ``` and ``` package my; public class Listener implements javax.servlet.ServletContextListener { public void contextInitialized(ServletContext context) { MyOtherClass.callMe(); } } ``` Strictly speaking, this is only run once on webapp startup, rather than Tomcat startup, but that may amount to the same thing.
You can also use (since Servlet 3.0) an annotation-based approach (no need to add anything to web.xml):

```
@WebListener
public class InitializeListener implements ServletContextListener {

    @Override
    public final void contextInitialized(final ServletContextEvent sce) {
    }

    @Override
    public final void contextDestroyed(final ServletContextEvent sce) {
    }
}
```
Is there a way to run a method/class only on Tomcat/Wildfly/Glassfish startup?
[ "", "java", "tomcat", "jakarta-ee", "web-applications", "startup", "" ]
I'm trying to run the following SQL statement in Oracle, and it takes ages to run: ``` SELECT orderID FROM tasks WHERE orderID NOT IN (SELECT DISTINCT orderID FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL) ``` If I run just the sub-part that is in the IN clause, that runs very quickly in Oracle, i.e. ``` SELECT DISTINCT orderID FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL ``` Why does the whole statement take such a long time in Oracle? In SQL Server the whole statement runs quickly. Alternatively is there a simpler/different/better SQL statement I should use? Some more details about the problem: * Each order is made of many tasks * Each order will be allocated (one or more of its task will have engineer1 and engineer2 set) or the order can be unallocated (all its task have null values for the engineer fields) * I am trying to find all the orderIDs that are unallocated. Just in case it makes any difference, there are ~120k rows in the table, and 3 tasks per order, so ~40k different orders. Responses to answers: * I would prefer a SQL statement that works in both SQL Server and Oracle. * The tasks only has an index on the orderID and taskID. * I tried the NOT EXISTS version of the statement but it ran for over 3 minutes before I cancelled it. Perhaps need a JOIN version of the statement? * There is an "orders" table as well with the orderID column. But I was trying to simplify the question by not including it in the original SQL statement. I guess that in the original SQL statement the sub-query is run every time for each row in the first part of the SQL statement - even though it is static and should only need to be run once? Executing ``` ANALYZE TABLE tasks COMPUTE STATISTICS; ``` made my original SQL statement execute much faster. Although I'm still curious why I have to do this, and if/when I would need to run it again? 
> The statistics give Oracle's
> cost-based optimizer information that
> it needs to determine the efficiency
> of different execution plans: for
> example, the number of rows in a table,
> the average width of rows, highest and
> lowest values per column, number of
> distinct values per column, clustering
> factor of indexes etc.
>
> In a small database you can just set up
> a job to gather statistics every night
> and leave it alone. In fact, this is
> the default under 10g. For larger
> implementations you usually have to
> weigh the stability of the execution
> plans against the way that the data
> changes, which is a tricky balance.
>
> Oracle also has a feature called
> "dynamic sampling" that is used to
> sample tables to determine relevant
> statistics at execution time. It's
> much more often used with data
> warehouses where the overhead of the
> sampling is outweighed by the
> potential performance increase for a
> long-running query.
Often this type of problem goes away if you analyze the tables involved (so Oracle has a better idea of the distribution of the data) ``` ANALYZE TABLE tasks COMPUTE STATISTICS; ```
The "IN" clause is known in Oracle to be pretty slow. In fact, the internal query optimizer in Oracle cannot handle statements with "IN" very well. Try using "EXISTS":

```
SELECT orderID
FROM tasks
WHERE orderID NOT EXISTS (SELECT DISTINCT orderID
                          FROM tasks
                          WHERE engineer1 IS NOT NULL
                            AND engineer2 IS NOT NULL)
```

Caution: Please check if the query builds the same data results.

Edith says: ooops, the query is not well formed, but the general idea is correct. Oracle has to perform a full table scan for the second (inner) query, build the results and then compare them to the first (outer) query, that's why it's slowing down. Try

```
SELECT t1.orderID
FROM tasks t1
WHERE NOT EXISTS (SELECT 1
                  FROM tasks t2
                  WHERE t2.orderID = t1.orderID
                    AND t2.engineer1 IS NOT NULL
                    AND t2.engineer2 IS NOT NULL)
```

or something similar ;-)
Optimising a SELECT query that runs slowly on Oracle but quickly on SQL Server
[ "", "sql", "sql-server", "oracle", "optimization", "" ]
I'm writing a photo gallery webapp for a friend's wedding and they want a photo gallery for guests to submit the digital photos they take on the day. After evaluating all the options, I've decided the easiest thing for users would be to let them use a familiar interface (their email) and just have them send in the pictures as attachments. I've created a mailbox, but now I need to connect and retrieve these attachments for automated processing for adding to the gallery system. But how? Are there any tutorials or prefab classes you've seen for doing this?
What MTA are you using? If you use postfix + maildrop you can create a filtering rule that pipes certain messages through a PHP script that then handles the incoming mails. (google for maildrop and `xfilter`).
I used to do a lot of this before, but I can't find the code; here's a scaled down version I found. It should put you on the correct path. I used to run this type of script from a cronjob. Sorry I can't find the final version. ;(

```
// Open pop mailbox
if (!$mbox = imap_open("{localhost:110/pop3/notls}INBOX", "user", "tester")) {
    die('Cannot connect/check pop mail! Exiting');
}

if ($hdr = imap_check($mbox)) {
    $msgCount = $hdr->Nmsgs;
} else {
    echo "Failed to get mail";
    exit;
}

$MN = $msgCount;
$overview = imap_fetch_overview($mbox, "1:$MN", 0);

for ($X = 1; $X <= $MN; $X++) {
    // Part 1 is usually the plain-text body; attachments live in later parts,
    // so a full solution would walk the message structure for each part
    $file = imap_fetchbody($mbox, $X, 1);
    imap_delete($mbox, $X);
}

imap_expunge($mbox);
imap_close($mbox);
```

Good luck!
How to get emails and their attachments from PHP
[ "", "php", "email", "" ]
I need to know, from within Powershell, if the current drive is a mapped drive or not. Unfortunately, Get-PSDrive is not working "as expected":

```
PS:24 H:\temp >get-psdrive h

Name       Provider      Root                                   CurrentLocation
----       --------      ----                                   ---------------
H          FileSystem    H:\                                               temp
```

but in MS-DOS "net use" shows that H: is really a mapped network drive:

```
New connections will be remembered.

Status       Local     Remote                    Network
-------------------------------------------------------------------------------
OK           H:        \\spma1fp1\JARAVJ$        Microsoft Windows Network
The command completed successfully.
```

What I want to do is to get the root of the drive and show it in the prompt (see: [Customizing PowerShell Prompt - Equivalent to CMD's $M$P$\_$+$G?](https://stackoverflow.com/questions/157923/customizing-powershell-prompt-equivalent-to-cmds-mpg))
Use the .NET framework:

```
PS H:\> $x = new-object system.io.driveinfo("h:\")
PS H:\> $x.drivetype
Network
```
A slightly more compact variation on the accepted answer: ``` [System.IO.DriveInfo]("C") ```
In PowerShell, how can I determine if the current drive is a networked drive or not?
[ "", "c#", "powershell", "networking", "mapped-drive", "" ]
I've got a request from a customer to automatically detect the type of mobile device (not the browser, the type. ex: Moto Q, Blackjack II, etc.) and automatically select the device from a drop down with a list of supported devices. So far I've found that the HTTP Headers (submitted by mobile IE) contain information such as * Resolution * UA-CPU (i've seen ARM from WM 2003 and x86 from WM5) * User Agent (which basically just says Windows CE) The only thing I can think of right now is possibly using a combination of the resolution/cpu and making a "best guess" Any thoughts?
What exactly does the customer mean by "supported"? Surely it means that the phone in question supports the web application and its inner functionality - wouldn't it be better then to forget device detection and simply focus on detecting those capabilities required for the app to function properly? For example, if my mobile website requires Ajax to work then instead of listing all the devices which are said to "support Ajax" I could do some simple object detection to find out for myself. Device detection, just like browser detection, is unreliable. Yes, it's possible but I wouldn't recommend it... on a project I've done we used the User Agent string to detect various devices. The indexOf JavaScript method came in handy! :)
You may want to have a look at WURFL, here: <http://wurfl.sourceforge.net/>. From the site: > So... What is WURFL? > The WURFL is an XML configuration file which contains information about capabilities and features of many mobile devices. > > The main scope of the file is to collect as much information as we can about all the existing mobile devices that access WAP pages so that developers will be able to build better applications and better services for the users.
Possible to detect the *type of mobile device* via javascript or HTTP Headers?
[ "", "javascript", "http", "windows-mobile", "http-headers", "" ]
There seem to be many ways to skin this particular cat - but which is the best and easiest to implement? It would seem that the ASP.net Membership Provider is the one that will save more time, so my questions are:

1. What are the pros/cons of Membership?
2. How do you integrate the auto-generated user db with your own custom-made db, e.g. a customers/orders/products db? (We are talking MS SQL here, BTW.)
3. Where can I find a good tutorial that's up to date?

Many thanks.
Membership is lightweight and easy to set up. You can also use the various providers to use Active Directory or some other member location. You shouldn't need to integrate the databases; you can use one to authenticate users and then, as long as they are valid, go query another database for the data. This is a good way to keep information separate for security reasons.

For a good tutorial, I'd suggest: <http://msdn.microsoft.com/en-us/library/yh26yfzy.aspx>

And if you want to create your own membership provider: <http://www.asp.net/learn/videos/video-189.aspx>
Overall I give it a thumbs up, but there are several minor cons I can think of:

* Roles are just strings; there's no way to attach additional information to them without rolling your own code.
* Some of the Login controls don't set their default button properly, so hitting the "enter" key while in an input field does nothing. You can fix this by setting it yourself.
* No default way to require numbers in a password, just symbols.
* Login controls w/ SqlMembershipProvider don't display specific "user is locked out" messages.

Either a con or a pro, depends on your point of view:

* User names are case-insensitive in the SqlMembershipProvider
New site creation and security/authentication - should I use ASP.net Membership Provider?
[ "", "c#", "asp.net", "security", "authentication", "asp.net-membership", "" ]
I want to clear the Firebug console of the JavaScript already sent. Does something like `console.clear()` exist and work?
`console.clear();` works for me
If you want to see all the available methods under console: ``` for(var i in console) { console.log(i); } ```
Clear the JavaScript sent to Firebug console
[ "", "javascript", "firebug", "" ]
As part of some error handling in our product, we'd like to dump some stack trace information. However, we experience that many users will simply take a screenshot of the error message dialog instead of sending us a copy of the full report available from the program, and thus I'd like to make some minimal stack trace information available in this dialog.

A .NET stack trace on my machine looks like this:

```
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
at System.IO.StreamReader..ctor(String path, Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize)
at System.IO.StreamReader..ctor(String path)
at LVKWinFormsSandbox.MainForm.button1_Click(Object sender, EventArgs e) in C:\Dev\VS.NET\Gatsoft\LVKWinFormsSandbox\MainForm.cs:line 36
```

I have this question: the format looks to be this:

```
at <class/method> [in file:line ##]
```

However, the *at* and *in* keywords - I assume these will be localized if they run, say, a Norwegian .NET runtime instead of the English one I have installed.

Is there any way for me to pick apart this stack trace in a language-neutral manner, so that I can display only the file and line number for those entries that have this? In other words, I'd like this information from the above text:

```
C:\Dev\VS.NET\Gatsoft\LVKWinFormsSandbox\MainForm.cs:line 36
```

Any advice you can give will be helpful.
You should be able to get a StackTrace object instead of a string by saying ``` var trace = new System.Diagnostics.StackTrace(exception); ``` You can then look at the frames yourself without relying on the framework's formatting. See also: [StackTrace reference](http://msdn.microsoft.com/en-us/library/system.diagnostics.stacktrace.aspx)
Here is the code I use to do this without an exception:

```
public static void LogStack()
{
    var trace = new System.Diagnostics.StackTrace();
    foreach (var frame in trace.GetFrames())
    {
        var method = frame.GetMethod();
        if (method.Name.Equals("LogStack"))
            continue;
        Log.Debug(string.Format("{0}::{1}",
            method.ReflectedType != null ? method.ReflectedType.Name : string.Empty,
            method.Name));
    }
}
```
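The walk-the-frames approach in the answers above is the portable fix for the localization problem: ask the runtime for structured frame objects instead of parsing a formatted (and possibly localized) string. The same idea exists in other runtimes; a sketch of the equivalent in Java, with illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

public class StackWalkDemo {
    // Collect "Class::method" pairs from the live stack instead of
    // parsing the text produced by Throwable.printStackTrace().
    public static List<String> frames() {
        List<String> out = new ArrayList<>();
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            out.add(frame.getClassName() + "::" + frame.getMethodName());
        }
        return out;
    }
}
```

Each `StackTraceElement` also exposes `getFileName()` and `getLineNumber()`, which is exactly the file/line pair the question wants, with no string parsing involved.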
Print stack trace information from C#
[ "", "c#", "parsing", "error-handling", "stack-trace", "" ]
I've got two arrays of the same size. I'd like to merge the two so the values of one are the key indexes of the new array, and the values of the new array are the values of the other. Right now I'm just looping through the arrays and creating the new array manually, but I have a feeling there is a much more elegant way to go about this. I don't see any array functions for this purpose, but maybe I missed something? Is there a simple way to do this along these lines?

```
$mapped_array = mapkeys($array_with_keys, $array_with_values);
```
See [`array_combine()`](http://php.net/array_combine) on PHP.net.
(from the docs for easy reading)

array\_combine — Creates an array by using one array for keys and another for its values

**Description**

`array array_combine ( array $keys , array $values )`

Creates an array by using the values from the keys array as keys and the values from the values array as the corresponding values.

**Parameters**

keys - Array of keys to be used. Illegal values for key will be converted to string.

values - Array of values to be used

**Example**

```
<?php
$a = array('green', 'red', 'yellow');
$b = array('avocado', 'apple', 'banana');
$c = array_combine($a, $b);
print_r($c);
?>
```

The above example will output:

```
Array
(
    [green] => avocado
    [red] => apple
    [yellow] => banana
)
```
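For readers coming from other languages, the key/value zip that `array_combine` performs is only a few lines anywhere; a hedged sketch in Java (class and method names are illustrative, not a standard-library API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Combine {
    // Java analogue of PHP's array_combine(): pair keys[i] with values[i].
    public static Map<String, String> combine(String[] keys, String[] values) {
        if (keys.length != values.length) {
            throw new IllegalArgumentException("keys and values must be the same length");
        }
        Map<String, String> result = new LinkedHashMap<>(); // preserves insertion order
        for (int i = 0; i < keys.length; i++) {
            result.put(keys[i], values[i]);
        }
        return result;
    }
}
```

Like `array_combine`, this rejects arrays of different lengths rather than silently truncating.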
Merge two arrays as key value pairs in PHP
[ "", "php", "arrays", "" ]
I want to compress some files (into the [ZIP](http://en.wikipedia.org/wiki/ZIP_%28file_format%29) format) and encrypt them if possible using C#. Is there some way to do this? Can encryption be done as a part of the compression itself?
For compression, look at the [`System.IO.Compression`](http://msdn.microsoft.com/en-us/library/system.io.compression.aspx) namespace and for encryption look at [`System.Security.Cryptography`](http://msdn.microsoft.com/en-us/library/system.security.cryptography.aspx).
For Zip Compression, have you seen <http://www.icsharpcode.net/OpenSource/SharpZipLib/>
Are there any compression and encryption libraries in C#?
[ "", "c#", "encryption", "compression", "" ]
If I use mod\_rewrite to control all my 301 redirects, does this happen before my page is served? So if I also have a bunch of redirect rules in a PHP script that runs on my page, will the .htaccess kick in first?
When a request is made to the URI affected by the .htaccess file, then Apache will handle any rewrite rules before any of your PHP code executes.
The .htaccess will kick in first. If you look at the Apache request cycle:

![Apache Request Cycle](https://i.stack.imgur.com/3PwIg.gif)

PHP is a response handler. mod\_rewrite runs at URI translation, except for rewrite rules in .htaccess and <Directory> or <Location> blocks, which run in the fixup phase. This is because Apache doesn't know which directory it's in (and thus which <Directory> or .htaccess to read) until after URI translation.

In response to gabriel1836's question about the image, I grabbed it from the second slide of [this presentation](http://stein.cshl.org/~lstein/talks/perl_conference/apache_api/) but it's originally from the book: [Writing Apache Modules in Perl and C](http://www.modperl.com/) which I highly recommend.
What happens first? .htaccess or php code?
[ "", "php", "apache", ".htaccess", "mod-rewrite", "" ]
I was wondering if it is possible to not attach the Excel sheet if it is empty, and maybe write a different comment in the email if empty. When I go to report delivery options, there's no such configuration.

**Edit**: I'm running SQL Server Reporting Services 2005. Some possible workarounds as mentioned below:

[MSDN: Reporting Services Extensions](http://msdn.microsoft.com/en-us/library/ms152934(SQL.90).aspx)

[NoRows and NoRowsMessage properties](http://technet.microsoft.com/es-es/library/microsoft.reportingservices.reportrendering.table.norows(SQL.90).aspx)

I should look into these things.
I believe the answer is no, at least not out of the box. It shouldn't be difficult to write your own delivery extension given the printing delivery extension sample included in RS.
Yeah, I don't think that is possible. You could use the "NoRows" property of your table to display a message when no data is returned, but that wouldn't prevent the report from being attached. But at least when they opened the excel file it could print out your custom message instead of an empty document.
SQL Server Reporting Services 2005 - How to Handle Empty Reports
[ "", "sql", "reporting-services", "reporting", "report", "" ]
I have a large number of Enums that implement this interface:

```
/**
 * Interface for an enumeration, each element of which can be uniquely identified by its code
 */
public interface CodableEnum {

    /**
     * Get the element with a particular code
     * @param code
     * @return
     */
    public CodableEnum getByCode(String code);

    /**
     * Get the code that identifies an element of the enum
     * @return
     */
    public String getCode();
}
```

A typical example is:

```
public enum IMType implements CodableEnum {

    MSN_MESSENGER("msn_messenger"),
    GOOGLE_TALK("google_talk"),
    SKYPE("skype"),
    YAHOO_MESSENGER("yahoo_messenger");

    private final String code;

    IMType(String code) {
        this.code = code;
    }

    public String getCode() {
        return code;
    }

    public IMType getByCode(String code) {
        for (IMType e : IMType.values()) {
            if (e.getCode().equalsIgnoreCase(code)) {
                return e;
            }
        }
        return null; // no element with that code
    }
}
```

As you can imagine these methods are virtually identical in all implementations of CodableEnum. I would like to eliminate this duplication, but frankly don't know how. I tried using a class such as the following:

```
public abstract class DefaultCodableEnum implements CodableEnum {

    private final String code;

    DefaultCodableEnum(String code) {
        this.code = code;
    }

    public String getCode() {
        return this.code;
    }

    public abstract CodableEnum getByCode(String code);
}
```

But this turns out to be fairly useless because:

1. An enum cannot extend a class
2. Elements of an enum (SKYPE, GOOGLE\_TALK, etc.) cannot extend a class
3. I cannot provide a default implementation of getByCode(), because DefaultCodableEnum is not itself an Enum.

I tried changing DefaultCodableEnum to extend java.lang.Enum, but this doesn't appear to be allowed. Any suggestions that do not rely on reflection?

Thanks, Don
You could factor the duplicated code into a `CodeableEnumHelper` class:

```
public class CodeableEnumHelper {

    public static CodeableEnum getByCode(String code, CodeableEnum[] values) {
        for (CodeableEnum e : values) {
            if (e.getCode().equalsIgnoreCase(code)) {
                return e;
            }
        }
        return null;
    }
}
```

Each `CodeableEnum` class would still have to implement a `getByCode` method, but the actual implementation of the method has at least been centralized to a single place.

```
public enum IMType implements CodeableEnum {
    ...
    public IMType getByCode(String code) {
        return (IMType) CodeableEnumHelper.getByCode(code, values());
    }
}
```
Abstract enums are potentially very useful (and currently not allowed). But a proposal and prototype exists if you'd like to lobby someone in Sun to add it: <http://freddy33.blogspot.com/2007/11/abstract-enum-ricky-carlson-way.html> Sun RFE: <https://bugs.java.com/bugdatabase/view_bug?bug_id=6570766>
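One way to push the helper idea further is a single generic lookup keyed on the enum class, so no enum needs its own `getByCode` at all. One caveat for the original poster: `Class.getEnumConstants()` is reflection-backed under the hood, so it may not satisfy a strict "no reflection" requirement. A sketch with illustrative names:

```java
import java.util.Arrays;

// Minimal versions of the interface and one enum from the question,
// included here so the sketch is self-contained.
interface CodableEnum {
    String getCode();
}

enum IMType implements CodableEnum {
    MSN_MESSENGER("msn_messenger"), SKYPE("skype");

    private final String code;
    IMType(String code) { this.code = code; }
    public String getCode() { return code; }
}

public class CodableEnums {
    // One generic lookup shared by every enum implementing CodableEnum.
    public static <E extends Enum<E> & CodableEnum> E getByCode(Class<E> type, String code) {
        return Arrays.stream(type.getEnumConstants())
                .filter(e -> e.getCode().equalsIgnoreCase(code))
                .findFirst()
                .orElse(null);
    }
}
```

Usage is `CodableEnums.getByCode(IMType.class, "skype")`, which returns `IMType.SKYPE` without any per-enum lookup code; the intersection bound `Enum<E> & CodableEnum` is what lets the one method serve every such enum with a precise return type.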
How can I eliminate duplicated Enum code?
[ "", "java", "enums", "enumeration", "" ]
I am working with Reporting Services and Sharepoint. I have an application that leverages Reporting Services, however a client would like our application integrated into Sharepoint. Currently we are tightly coupled to the ReportService.asmx web service, which exposes various methods for performing operations. Reporting Services has something called "Sharepoint Integration mode"; when enabled, the report server works differently and Sharepoint is used to manage the reports. Sharepoint adds a new web service called ReportService2006.asmx which is almost exactly the same. Now our application uses a web reference to the ReportService and uses various objects exposed by the service. ReportService2006 has exactly the same objects but they are obviously in a different namespace, e.g. I have 2 web references - 1 to each service - so there is an object MyApplication.ReportService.CatalogItem and another MyApplication.ReportService2006.CatalogItem. I've tried to use dependency injection to abstract the service out of our application, coupled with a factory pattern to determine which implementation of my interface to instantiate. Here's my interface. I've simplified it to include only the calls I need for this application.
```
using System;
using NetworkUserEncrypt.ReportService;

namespace MyApplication.Service
{
    public interface IReportingService
    {
        CatalogItem CreateDataSource(string DataSource, string Parent, bool Overwrite,
            DataSourceDefinition Definition, Property[] Properties);
        void DeleteItem(string Item);
        DataSourceDefinition GetDataSourceContents(string DataSource);
        byte[] GetReportDefinition(string Report);
        CatalogItem[] ListChildren(string Item);
    }
}
```

So I have 2 implementations of this, each instantiating a different web service, e.g:

```
namespace MyApp.Service.Implementation
{
    class ReportingServiceImpl : IReportingService
    {
        ReportingService _service = null;

        public ReportingServiceImpl()
        {
            _service = new ReportingService();
        }

        /* SNIP */
    }
}
```

and

```
namespace MyApp.Service.Implementation
{
    class ReportingService2006Impl : IReportingService
    {
        ReportingService2006 _service = null;

        public ReportingService2006Impl()
        {
            _service = new ReportingService2006();
        }

        /* SNIP */
    }
}
```

So the plan is I can inject these into my ServiceWrapper at run time. However, if you'll notice, the interface is tied to the ReportService and some of the methods return objects that are from the web reference, e.g. CatalogItem. Thus my project won't build because my implementation for ReportService2006 is referencing the CatalogItem from a different namespace. Any ideas? Am I going totally the wrong direction with this?
The most robust solution is to create a CatalogItem interface and create wrappers for each of your web services and hide the whole thing behind a factory. The factory will contain the logic for calling the "correct" web service and the client code will have to be changed to use the interface but it is a change for the better. WCF does solve most of these issues with Service Contracts and if my earlier advice proves to be too unmanageable you could consider migrating towards to a WCF solution.
I think you are headed in the right direction for this situation, It's just going to take a fair amount more of work to drive it home. I would create some proxy classes that can wrap both versions of the classes using reflection or dynamic methods. I've also seen people use the proxy classes from the remoting namespace to intercept method calls at runtime and direct them to the right place, that way you could create the dynamic methods on demand instead of hand coding them, all you really need for that is an interface that matches the object's interface.
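The wrapper-interface-plus-factory shape both answers describe can be sketched language-neutrally. Here it is in Java with invented stand-in types; in the real code the two implementations would wrap the two generated SOAP proxies and map each proxy's own CatalogItem into the shared one:

```java
// Shared item type so callers never see either generated CatalogItem class.
class CatalogItem {
    private final String id;
    private final String name;
    CatalogItem(String id, String name) { this.id = id; this.name = name; }
    public String id() { return id; }
    public String name() { return name; }
}

// The service contract all client code programs against.
interface ReportingService {
    CatalogItem[] listChildren(String item);
}

// One adapter per backing web service; the bodies here are stand-ins
// for calls into the generated proxy plus a mapping step.
class NativeModeService implements ReportingService {
    public CatalogItem[] listChildren(String item) {
        return new CatalogItem[] { new CatalogItem("1", "native:" + item) };
    }
}

class SharePointModeService implements ReportingService {
    public CatalogItem[] listChildren(String item) {
        return new CatalogItem[] { new CatalogItem("1", "sp2006:" + item) };
    }
}

public class ReportingServiceFactory {
    // The factory is the only place that knows which mode is active.
    public static ReportingService create(boolean sharePointIntegrated) {
        return sharePointIntegrated ? new SharePointModeService() : new NativeModeService();
    }
}
```

The design point is that the mapping from each proxy's types to the shared types lives inside the adapters, so switching modes is one boolean (or config value) at the factory rather than a change rippling through every caller.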
.NET Architectural issue: 2 Web Services, how do I change which one is used at run time?
[ "", "c#", ".net", "sharepoint", "architecture", "reporting-services", "" ]
What are the most common memory optimizations in C#, .NET 2.0? I wanted to see if there are common things that people may not be doing by default in a WinForms app.
* use structs for small wrapper objects to avoid heap fragmentation
* think carefully about object lifetimes, especially for large objects, so they do not end up on the LOH unless you intend them to
* think about allocations inside of a loop
* make sure dynamically sized arrays will be of reasonable size, otherwise partition the problem
Use StringBuilder instead of directly modifying a string if you're performing many modifications to the same string.
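The StringBuilder advice applies beyond C#; a small Java illustration of the pattern, building one string across a loop with a mutable buffer instead of creating a new immutable string on every iteration (names are illustrative):

```java
public class Concat {
    // Builds "0,1,...,n-1" with a single mutable buffer. Using
    // s = s + "," + i in the loop would instead allocate and copy a
    // fresh string each pass, giving O(n^2) work for n appends.
    public static String joinRange(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            if (i > 0) sb.append(',');
            sb.append(i);
        }
        return sb.toString();
    }
}
```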
Common memory optimization
[ "", "c#", "optimization", "memory-management", "" ]
Is global memory initialized in C++? And if so, how?

(Second) clarification: When a program starts up, what is in the memory space which will become global memory, prior to primitives being initialized? I'm trying to understand if it is zeroed out, or garbage for example.

The situation is: can a singleton reference be set via an `instance()` call prior to its initialization:

```
MySingleton* MySingleton::_instance = NULL;
```

and get two singleton instances as a result? See my C++ quiz on multiple instances of a singleton...
Yes, global primitives (objects with static storage duration) are zero-initialized. Example:

```
int x;

int main(int argc, char** argv)
{
    assert(x == 0);
    int y;
    //assert(y == 0); <-- wrong, can't assume this.
}
```

You cannot make any assumptions about locals, blocks of memory on the heap... It's safest just to always initialize everything.
From the standard:

> Objects with static storage duration (3.7.1) shall be zero-initialized (8.5) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called *static initialization*; all other initialization is *dynamic initialization*. Objects of POD [plain old data] types (3.9) with static storage duration initialized with constant expressions (5.19) shall be initialized before any dynamic initialization takes place. Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit. [Note: 8.5.1 describes the order in which aggregate members are initialized. The initialization of local static objects is described in 6.7.]

So yes, globals which have static storage duration will be initialized. Objects allocated, e.g., on the heap will of course not be initialized automatically.
Is global memory initialized in C++?
[ "", "c++", "memory", "memory-management", "" ]
I'm trying to install a service using InstallUtil.exe but invoked through `Process.Start`. Here's the code:

```
ProcessStartInfo startInfo = new ProcessStartInfo(m_strInstallUtil, strExePath);
System.Diagnostics.Process.Start(startInfo);
```

where `m_strInstallUtil` is the fully qualified path and exe to "InstallUtil.exe" and `strExePath` is the fully qualified path/name to my service. Running the command line syntax from an elevated command prompt works; running from my app (using the above code) does not. I assume I'm dealing with some process elevation issue, so how would I run my process in an elevated state? Do I need to look at `ShellExecute` for this? This is all on Windows Vista. I am running the process in the VS2008 debugger elevated to admin privilege. I also tried setting `startInfo.Verb = "runas";` but it didn't seem to solve the problem.
You can indicate the new process should be started with elevated permissions by setting the Verb property of your startInfo object to 'runas', as follows: ``` startInfo.UseShellExecute = true; startInfo.Verb = "runas"; ``` This will cause Windows to behave as if the process has been started from Explorer with the "Run as Administrator" menu command. This does mean the UAC prompt will come up and will need to be acknowledged by the user: if this is undesirable (for example because it would happen in the middle of a lengthy process), you'll need to run your entire host process with elevated permissions by [Create and Embed an Application Manifest (UAC)](https://msdn.microsoft.com/en-us/library/bb756929.aspx) to require the 'highestAvailable' execution level: this will cause the UAC prompt to appear as soon as your app is started, and cause all child processes to run with elevated permissions without additional prompting.
This code puts the above all together and restarts the current WPF app with admin privs:

```
if (IsAdministrator() == false)
{
    // Restart program and run as admin
    var exeName = System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName;
    ProcessStartInfo startInfo = new ProcessStartInfo(exeName);
    startInfo.Verb = "runas";
    System.Diagnostics.Process.Start(startInfo);
    Application.Current.Shutdown();
    return;
}

private static bool IsAdministrator()
{
    WindowsIdentity identity = WindowsIdentity.GetCurrent();
    WindowsPrincipal principal = new WindowsPrincipal(identity);
    return principal.IsInRole(WindowsBuiltInRole.Administrator);
}

// To run as admin, alter exe manifest file after building.
// Or create shortcut with "as admin" checked.
// Or ShellExecute (C# Process.Start) can elevate - use verb "runas".
// Or an elevate vbs script can launch programs as admin.
// (does not work: "runas /user:admin" from cmd-line prompts for admin pass)
```

Update: The app manifest way is preferred: Right click project in visual studio, add, new application manifest file, change the file so you have requireAdministrator set as shown in the above. A problem with the original way: If you put the restart code in app.xaml.cs OnStartup, it still may start the main window briefly even though Shutdown was called. My main window blew up if app.xaml.cs init was not run and in certain race conditions it would do this.
Elevating process privilege programmatically?
[ "", "c#", ".net", "windows", "windows-services", "process-elevation", "" ]
I have a website where we use Javascript to submit the login form. On Firefox it prompts the user to remember their password when they login, but on IE7 it doesn't. After doing some research it looks like the user is only prompted in IE7 when the form is submitted via a Submit control. I've created some sample html to prove this is the case.

```
<html>
<head>
    <title>test autocomplete</title>
    <script type="text/javascript">
        function submitForm() {
            return document.forms[0].submit();
        }
    </script>
</head>
<body>
    <form method="GET" action="test_autocomplete.html">
        <input type="text" id="username" name="username">
        <br>
        <input type="password" id="password" name="password"/>
        <br>
        <a href="javascript:submitForm();">Submit</a>
        <br>
        <input type="submit"/>
    </form>
</body>
</html>
```

The href link doesn't get the prompt but the submit button will in IE7. Both work in Firefox. I can't get the style of my site to look the same with a submit button. Does anyone know how to get the remember password prompt to show up when submitting via Javascript?
Why not try hooking the form submission this way?

```
<html>
<head>
    <title>test autocomplete</title>
    <script type="text/javascript">
        function submitForm() {
            return true;
        }
    </script>
</head>
<body>
    <form method="GET" action="test_autocomplete.html" onsubmit="return submitForm();">
        <input type="text" id="username" name="username">
        <br>
        <input type="password" id="password" name="password"/>
        <br>
        <a href="#" onclick="document.getElementById('FORMBUTTON').click();">Submit</a>
        <br>
        <input id="FORMBUTTON" type="submit"/>
    </form>
</body>
</html>
```

That way your function will be called whether the link is clicked or the submit button is pushed (or the enter key is pressed), and you can cancel the submission by returning false. This may affect the way IE7 interprets the form's submission.

Edit: I would recommend always hooking form submission this way rather than calling submit() on the form object. If you call submit() then it will not trigger the form object's onsubmit.
Did you try putting a url in the href and attaching a click event handler to submit the form, returning false from the click handler so that the url does not get navigated to? Alternatively, a hidden submit button triggered via JavaScript?
IE7 form not prompted for remember password when submitted through javascript
[ "", "javascript", "html", "internet-explorer-7", "" ]
I've done some research into server push with javascript and have found the general consensus to be that what I'm looking for lies in the "Comet" design pattern. Are there any good implementations of this pattern built on top of jQuery? If not, are there any good implementations of this pattern at all? And regardless of the answer to those questions, is there any documentation on this pattern from an implementation stand-point?
I wrote the plugin mentioned by Till. The plugin is an implementation of the [Bayeux](http://cometdproject.dojotoolkit.org/documentation/bayeux) protocol and currently supports long-polling (local server via AJAX) and callback-polling (remote server via XSS). There is a [Bayeux](http://cometdproject.dojotoolkit.org/documentation/bayeux) implementation for Python called cometd-twisted that I have heard my plugin works with, but I have not verified this. I have tested and verified it works with [cometd-jetty](http://cometdproject.dojotoolkit.org/documentation/cometd-jetty) and [erlycomet](http://code.google.com/p/erlycomet/) which has a jQuery Comet example included. There is more info on my [blog](http://morglog.org) and the current code with a basic chat example can be found on its [google code page](http://code.google.com/p/jquerycomet/). Hope this info is helpful and feel free to contact me if need any further help with the plugin.
Someone built [a client for Comet using jQuery](http://plugins.jquery.com/project/Comet). I don't know if it's any good though. I've read about Comet and heard about all the good it can do, but I have never gotten around to using it. Just had no time and no use case on any of my current projects. I totally forgot to add a link for implementing Comet. There is Comet Daily and they have a [comparison](http://cometdaily.com/maturity.html) online. The comparison emphasizes the maturity of the different implementations. It's pretty interesting and should get you started. Hope that helps!
Comet and jQuery
[ "", "javascript", "jquery", "design-patterns", "comet", "server-push", "" ]
When a user on our site loses his password and heads off to the Lost Password page we need to give him a new temporary password. I don't really mind how random this is, or if it matches all the "needed" strong password rules, all I want to do is give them a password that they can change later. The application is a Web application written in C#, so I was thinking of being mean and going for the easy route of using part of a Guid, i.e. ``` Guid.NewGuid().ToString("d").Substring(1,8) ``` Suggestions? Thoughts?
There's always [`System.Web.Security.Membership.GeneratePassword(int length, int numberOfNonAlphanumericCharacters)`](http://msdn.microsoft.com/en-us/library/system.web.security.membership.generatepassword.aspx).
``` private static readonly Random rnd = new Random(); /* shared instance, so rapid successive calls don't reseed with the same time-based seed and produce identical passwords */ public string CreatePassword(int length) { const string valid = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890"; StringBuilder res = new StringBuilder(); while (0 < length--) { res.Append(valid[rnd.Next(valid.Length)]); } return res.ToString(); } ``` This has a good benefit of being able to choose from a list of available characters for the generated password (e.g. digits only, only uppercase or only lowercase etc.)
Generating Random Passwords
[ "c#", "passwords", "random" ]
I'm having some internationalisation woes: My UTF-8 string fields are being rendered in the browser as ???? after being returned from the database. After retrieval from the database using Hibernate, the String fields are presented correctly on inspection using the eclipse debugger. However Struts2/Tiles is rendering these strings as ???? in the HTML sent to the browser. The charset directive is present in the HTML header: Perhaps I need to add something to my struts2 or tiles configurations?
You could try something like this. It's taken from Sun's page on [Character Sets and Encodings](http://java.sun.com/j2ee/1.4/docs/tutorial/doc/WebI18N5.html). I think this has to be the very first line in your JSP. ``` <%@ page contentType="text/html; charset=UTF-8" %> ```
OMG - it turns out that the cause was a total WTF: all our tile responses were being served by a homegrown servlet that was ignoring the `<%@ page contentType="text/html; charset=UTF-8" %>` directive (and who knows what else). `TilesDispatchExtensionServlet` : bloody architecture astronauts, i shake my fist at ye.
Struts2 Tiles Tomcat suspected of changing UTF-8 to?
[ "java", "utf-8", "internationalization", "struts2", "tiles" ]
We have the usual **web.xml** for our web application which includes some jsp and jsp tag files. I want to switch to using pre-compiled jsp's. I have the pre-compilation happening in the build ok, and it generates the web.xml fragment and now I want to merge the fragment into the main web.xml. Is there an **include** type directive for **web.xml** that will let me include the fragment. Ideally I will leave things as is for DEV- as its useful to change jsp's on the fly and see the changes immediately but then for UAT/PROD, the jsp's will be pre-compiled and thus work faster.
I use the [Tomcat jasper ANT tasks](http://tomcat.apache.org/tomcat-6.0-doc/jasper-howto.html) in my project, which precompile the JSPs into servlets and add the new servlet mappings to the original web.xml. In the DEV builds, just skip this step and deploy the JSPs without pre-compile and modification of the web.xml. ``` <?xml version="1.0"?> <project name="jspc" basedir="." default="all"> <import file="${build.appserver.home}/bin/catalina-tasks.xml"/> <target name="all" depends="jspc,compile"></target> <target name="jspc"> <jasper validateXml="false" uriroot="${build.war.dir}" webXmlFragment="${build.war.dir}/WEB-INF/generated_web.xml" addWebXmlMappings="true" outputDir="${build.src.dir}" /> </target> <target name="compile"> <javac destdir="${build.dir}/classes" srcdir="${build.src.dir}" optimize="on" debug="off" failonerror="true" source="1.5" target="1.5" excludes="**/*.smap"> <classpath> <fileset dir="${build.war.dir}/WEB-INF/classes"> <include name="*.class" /> </fileset> <fileset dir="${build.war.lib.dir}"> <include name="*.jar" /> </fileset> <fileset dir="${build.appserver.home}/lib"> <include name="*.jar" /> </fileset> <fileset dir="${build.appserver.home}/bin"> <include name="*.jar"/> </fileset> </classpath> <include name="**" /> <exclude name="tags/**"/> </javac> </target> <target name="clean"> <delete> <fileset dir="${build.src.dir}"/> <fileset dir="${build.dir}/classes/org/apache/jsp"/> </delete> </target> </project> ``` If you already have the JSP compilation working and just want to merge the web.xml files, a simple XSLT could be written to add selected elements (such as the servlet mappings) from the newly generated web.xml into your original.
Doh - there is an option on the jasper2 task to auto-merge the fragment into the main web.xml - **addWebXmlMappings** ``` <jasper2 validateXml="false" uriroot="${web.dir}" addWebXmlMappings="true" webXmlFragment="${web.dir}/WEB-INF/classes/jasper_generated_web.xml" outputDir="${web.dir}/WEB-INF/jsp-src" /> ``` I wonder how good the merge is... Annoyingly you need to generate the fragment still, even though it's not needed after this task.
How to merge jsp pre-compiled web.xml fragment with main web.xml using Ant
[ "java", "jsp", "tomcat", "ant", "web.xml" ]
Can you write better code than this? I need to check/uncheck all children according to the parent, and when a child is checked, check the parent; when all children are unchecked, uncheck the parent. ``` $(".parent").children("input").click(function() { $(this).parent().siblings("input").attr("checked", this.checked); }); $(".parent").siblings("input").click(function() { if (this.checked) { $(this).siblings("div").children("input").attr("checked", true); return; } var childs = $(this).siblings("div").siblings("input"); for (i = 0; i < childs.length; i++) { if ($(childs.get(i)).attr("checked")) return; } $(this).parent().children("div").children("input").attr("checked", false); }); ```
``` $(".parent").children("input").click(function() { $(this).parent().siblings("input").attr("checked", this.checked); }); $(".parent").siblings("input").click(function() { $(this).siblings("div").children("input").attr("checked", this.checked || $(this).siblings("input[checked]").length>0 ); }); ```
woah, i'm mega confused. it looks as though you have inputs with other inputs inside of them? ...which doesn't make sense. Here's what I *think* your structure looks like, so here I go. ``` <div class="parent"> <input type="checkbox" /> <div> <input type="checkbox" /> <input type="checkbox" /> </div> <input type="checkbox" /> <div> <input type="checkbox" /> <input type="checkbox" /> </div> </div> ``` And here's the code I'd use. ``` $("input[type='checkbox']").click(function() { // turn on or off all descendants. $(this) // get this checkbox // get the div directly after it .next('div') // get ALL the inputs in that div (not just first level children) .find("input[type='checkbox']") .attr("checked", this.checked) ; // now check if we have to turn the parent on or off. $(this) .parent() // this will be the div .prev('input[type="checkbox"]') // this is the input .attr( "checked", // set checked to true if... this.checked // this one is checked, or... || $(this).siblings("input[type='checkbox'][checked]").length > 0 // any of the siblings are checked. ) ; }); ``` update: i've just tested this and it totally works (woo!). It also works with as many levels of nesting as you want, not just two.
Check/uncheck parents and children
[ "javascript", "jquery" ]
I'm creating a networked server for a boggle-clone I wrote in Python, which accepts users, solves the boards, and scores the player input. The dictionary file I'm using is 1.8MB (the ENABLE2K dictionary), and I need it to be available to several game solver classes. Right now, I have it so that each class iterates through the file line-by-line and generates a hash table (associative array), but the more solver classes I instantiate, the more memory it takes up. What I would like to do is import the dictionary file once and pass it to each solver instance as they need it. But what is the best way to do this? Should I import the dictionary in the global space, then access it in the solver class as globals()['dictionary']? Or should I import the dictionary then pass it as an argument to the class constructor? Is one of these better than the other? Is there a third option?
If you create a dictionary.py module, containing code which reads the file and builds a dictionary, this code will only be executed the first time it is imported. Further imports will return a reference to the existing module instance. As such, your classes can: ``` import dictionary dictionary.words[whatever] ``` where dictionary.py has: ``` words = {} # read file and add to 'words' ```
Even though it is essentially a singleton at this point, the usual arguments against globals apply. For a pythonic singleton-substitute, look up the "borg" object. That's really the only difference. Once the dictionary object is created, you are only binding new references as you pass it along unless you explicitly perform a deep copy. It makes sense that it is centrally constructed once and only once so long as each solver instance does not require a private copy for modification.
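The reference-binding point can be made concrete with a short sketch (the `Solver` class and tiny word list are hypothetical stand-ins, not the poster's code):

```python
# The dictionary is built once at module load time; each solver instance
# merely binds a reference to it, so memory use stays flat no matter how
# many solvers exist.

def load_words():
    # The real program would read the 1.8 MB ENABLE2K file here;
    # a small literal stands in for it.
    return {"cat": 1, "dog": 1, "bird": 1}

WORDS = load_words()  # executed once, when the module is first imported

class Solver:
    def __init__(self, dictionary):
        self.dictionary = dictionary  # binds a reference -- no copy is made

solvers = [Solver(WORDS) for _ in range(3)]

# Every instance shares the single dictionary object.
assert all(s.dictionary is WORDS for s in solvers)
```

Whether you reach the dictionary through a module-level name or a constructor argument, the same single object is used either way; only the style of access differs.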
Best way to store and use a large text-file in python
[ "python", "boggle" ]
1. New class is a subclass of the original object 2. It needs to be php4 compatible
You could have your classes instantiated empty and then loaded by any number of methods. One of these methods could accept an instance of the parent class as an argument, and then copy its data from there ``` class childClass extends parentClass { function childClass() { //do nothing } function loadFromParentObj( $parentObj ) { $this->a = $parentObj->a; $this->b = $parentObj->b; $this->c = $parentObj->c; } }; $myParent = new parentClass(); $myChild = new childClass(); $myChild->loadFromParentObj( $myParent ); ```
You can do it with some black magic, although I would seriously question **why** you have this requirement in the first place. It suggests that there is something severely wrong with your design. Nonetheless: ``` function change_class($object, $new_class) { preg_match('~^O:[0-9]+:"[^"]+":(.+)$~', serialize($object), $matches); return unserialize(sprintf('O:%s:"%s":%s', strlen($new_class), $new_class, $matches[1])); } ``` This is subject to the same limitations as serialize in general, which means that references to other objects or resources are lost.
How do you copy a PHP object into a different object type
[ "php", "oop" ]
Jquery has a great language construct that looks like this: ``` $(document).ready(function() { $("a").click(function() { alert("Hello world!"); }); }); ``` As you might guess this, once the document has loaded, binds a custom function to the onClick event of all ***a*** tags. The question is, how can I achieve this same kind of behavior in Prototype?
Prototype 1.6 provides the [`dom:loaded`](http://prototypejs.org/api/document/observe) event on document: ``` document.observe("dom:loaded", function() { $$('a').each(function(elem) { elem.observe("click", function() { alert("Hello World"); }); }); }); ``` I also use the [`each`](http://prototypejs.org/api/enumerable/each) iterator on the array returned by [`$$()`](http://prototypejs.org/api/utility/dollar-dollar).
``` $(document).observe('dom:loaded', function() { $$('a').invoke('observe', 'click', function() { alert('Hello world!'); }); }); ```
Binding custom functions to DOM events in prototype?
[ "javascript", "dom", "prototypejs" ]
I am developing a Joomla component and one of the views needs to render itself as PDF. In the view, I have tried setting the content-type with the following line, but when I see the response, it is text/html anyways. ``` header('Content-type: application/pdf'); ``` If I do this in a regular php page, everything works as expected. It seems that I need to tell Joomla to use application/pdf instead of text/html. How can I do it? Note: Setting other headers, such as `Content-Disposition`, works as expected.
Since version 1.5 Joomla has the JDocument object. Use [JDocument::setMimeEncoding()](http://api.joomla.org/Joomla-Framework/Document/JDocument.html#setMimeEncoding) to set the content type. ``` $doc =& JFactory::getDocument(); $doc->setMimeEncoding('application/pdf'); ``` In your special case, a look at [JDocumentPDF](http://api.joomla.org/Joomla-Framework/Document/JDocumentPDF.html) may be worthwhile.
For those of you thinking that the above is a very old answer, I confirm that the JDocument::setMimeEncoding() still works, even on the 1.6 version (haven't tried it on 1.7 yet).
How do I set the Content-type in Joomla?
[ "php", "http", "joomla", "content-type" ]
What java GUI layout manager does everyone use? Lately, I have been using [MigLayout](http://www.miglayout.com/), which has some powerful component controls. Just wanted to see what other developers are using other than the standard JDK ones.
[MiGLayout](http://www.miglayout.com/) is a GUI layout manager that is widely used by Java developers.
[GridBagLayout](http://java.sun.com/javase/6/docs/api/java/awt/GridBagLayout.html) is usable. Once you get used to using it, it works great. I think the standard JDK layout managers are pretty powerful on their own. Plus, you get to minimize dependency on 3rd party libraries.
Which Layout Manager do you use?
[ "java", "swing", "layout" ]
What guidelines do you follow to improve the general quality of your code? Many people have rules about how to write C++ code that (supposedly) make it harder to make mistakes. I've seen people *insist* that every `if` statement is followed by a brace block (`{...}`). I'm interested in what guidelines other people follow, and the reasons behind them. I'm also interested in guidelines that you think are rubbish, but are commonly held. Can anyone suggest a few? To get the ball rolling, I'll mention a few to start with: * Always use braces after every `if` / `else` statement (mentioned above). The rationale behind this is that it's not always easy to tell if a single statement is actually one statement, or a preprocessor macro that expands to more than one statement, so this code would break: ``` // top of file: #define statement doSomething(); doSomethingElse // in implementation: if (somecondition) doSomething(); ``` but if you use braces then it will work as expected. * Use preprocessor macros for conditional compilation ONLY. preprocessor macros can cause all sorts of hell, since they don't allow C++ scoping rules. I've run aground many times due to preprocessor macros with common names in header files. If you're not careful you can cause all sorts of havoc! Now over to you.
A few of my personal favorites: Strive to write code that is [const correct](http://www.parashift.com/c++-faq-lite/const-correctness.html). You will enlist the compiler to help weed out easy to fix but sometimes painful bugs. Your code will also tell a story of what you had in mind at the time you wrote it -- valuable for newcomers or maintainers once you're gone. Get out of the memory management business. Learn to use smart pointers: `std::auto_ptr`, `std::tr1::shared_ptr` (or `boost::shared_ptr`) and `boost::scoped_ptr`. Learn the differences between them and when to use one vs. another. You're probably going to be using the Standard Template Library. Read the [Josuttis book](https://rads.stackoverflow.com/amzn/click/com/0201379260). Don't just stop after the first few chapters on containers thinking that you know the STL. Push through to the good stuff: algorithms and function objects.
1. Delete unnecessary code. That is all.
Guidelines to improve your code
[ "c++", "coding-style" ]
I have normally hand written xml like this: ``` <tag><?= $value ?></tag> ``` Having found tools such as simpleXML, should I be using those instead? What's the advantage of doing it using a tool like that?
Good XML tools will ensure that the resulting XML file properly validates against the DTD you are using. Good XML tools also save a bunch of repetitive typing of tags.
If you're dealing with a small bit of XML, there's little harm in doing it by hand (as long as you can avoid typos). However, with larger documents you're frequently better off using an editor, which can validate your doc against the schema and protect against typos.
When writing XML, is it better to hand write it, or to use a generator such as simpleXML in PHP?
[ "php", "xml", "language-agnostic" ]
I have a need to convert Pixels to Points in C#. I've seen some complicated explanations about the topic, but can't seem to locate a simple formula. Let's assume a standard 96dpi, how do I calculate this conversion?
There are [72 points per inch](http://en.wikipedia.org/wiki/Point_%28typography%29); if it is sufficient to assume 96 pixels per inch, the formula is rather simple: points = pixels \* 72 / 96 There is a way to get the configured pixels per inch of your display in Windows using [`GetDeviceCaps`](https://msdn.microsoft.com/en-us/library/dd144877(v=vs.85).aspx). Microsoft has a guide called ["Developing DPI-Aware Applications"](http://msdn.microsoft.com/en-us/library/ms838191.aspx), look for the section "Creating DPI-Aware Fonts". The W3C has defined the pixel measurement `px` as exactly [1/96th of 1in](https://www.w3.org/TR/css3-values/#absolute-lengths) regardless of the actual resolution of your display, so the above formula should be good for all web work.
Try this if your code lies in a form: ``` Graphics g = this.CreateGraphics(); points = pixels * 72 / g.DpiX; g.Dispose(); ```
Convert Pixels to Points
[ "c#", ".net", "pixel", "point" ]
I have always made a point of writing nice code comments for classes and methods with the C# xml syntax. I always expected to easily be able to export them later on. Today I actually have to do so, but am having trouble finding out how. Is there something I'm missing? I want to go *Menu->Build->Build Code Documentation*, but there is no option to do that, there.
Actually it's in the project properties. Build tab, Output section, XML documentation file, and enter the filename. It will be built on every build of the project. After that you can build the actual help with [Sandcastle](http://www.microsoft.com/downloads/details.aspx?FamilyId=E82EA71D-DA89-42EE-A715-696E3A4873B2&displaylang=en).
You could try [NDOC](http://ndoc.sourceforge.net/) or [SandCastle](http://www.codeplex.com/Sandcastle) if you don't mind using 3rd party tools.
How do I export the code documentation in C# / VisualStudio 2008?
[ "c#", ".net", "visual-studio-2008", "xml-documentation" ]
Just how much slower are events? I have written a streaming XML parser (that can handle open-ended and incomplete documents) and by tearing out the events and using an interface instead I got a significant speed boost. Does any one else have any war stories? (Let's not open the GC can of worms here, we all know it's broken :))
Events firing are delegate invocations, which are [a bit slower than virtual calls](http://msdn.microsoft.com/en-us/magazine/cc163759.aspx) [![alt text](https://i.stack.imgur.com/xExdh.gif)](https://i.stack.imgur.com/xExdh.gif) (source: [microsoft.com](http://i.msdn.microsoft.com/cc163759.fig02(en-us).gif)) But dealing with interfaces for the subscriber/publisher/observer/observable scenario is more painful than using events.
Events are really just delegates. From what I recall, they were made much faster in the 2.0 CLR. I'm surprised that replacing events with an interface made your code significantly faster - in my experience they're pretty fast, and if you're dealing with XML, I wouldn't have expected the event calls to be the bottlenecks. Did your code constantly subscribe to and unsubscribe from events? Do you have any indication of the number of event calls that were made when parsing a particular document?
Events in C#
[ "c#", ".net", "performance", "events" ]
How do I truncate a java `String` so that I know it will fit in a given number of bytes storage once it is UTF-8 encoded?
Here is a simple loop that counts how big the UTF-8 representation is going to be, and truncates when it is exceeded: ``` public static String truncateWhenUTF8(String s, int maxBytes) { int b = 0; for (int i = 0; i < s.length(); i++) { char c = s.charAt(i); // ranges from http://en.wikipedia.org/wiki/UTF-8 int skip = 0; int more; if (c <= 0x007f) { more = 1; } else if (c <= 0x07FF) { more = 2; } else if (c <= 0xd7ff) { more = 3; } else if (c <= 0xDFFF) { // surrogate area, consume next char as well more = 4; skip = 1; } else { more = 3; } if (b + more > maxBytes) { return s.substring(0, i); } b += more; i += skip; } return s; } ``` This *does* handle [surrogate pairs](http://en.wikipedia.org/wiki/UTF-16) that appear in the input string. Java's UTF-8 encoder (correctly) outputs surrogate pairs as a single 4-byte sequence instead of two 3-byte sequences, so `truncateWhenUTF8()` will return the longest truncated string it can. If you ignore surrogate pairs in the implementation then the truncated strings may be shorter than they need to be.
I haven't done a lot of testing on that code, but here are some preliminary tests: ``` private static void test(String s, int maxBytes, int expectedBytes) { String result = truncateWhenUTF8(s, maxBytes); byte[] utf8 = result.getBytes(Charset.forName("UTF-8")); if (utf8.length > maxBytes) { System.out.println("BAD: our truncation of " + s + " was too big"); } if (utf8.length != expectedBytes) { System.out.println("BAD: expected " + expectedBytes + " got " + utf8.length); } System.out.println(s + " truncated to " + result); } public static void main(String[] args) { test("abcd", 0, 0); test("abcd", 1, 1); test("abcd", 2, 2); test("abcd", 3, 3); test("abcd", 4, 4); test("abcd", 5, 4); test("a\u0080b", 0, 0); test("a\u0080b", 1, 1); test("a\u0080b", 2, 1); test("a\u0080b", 3, 3); test("a\u0080b", 4, 4); test("a\u0080b", 5, 4); test("a\u0800b", 0, 0); test("a\u0800b", 1, 1); test("a\u0800b", 2, 1); test("a\u0800b", 3, 1); test("a\u0800b", 4, 4); test("a\u0800b", 5, 5); test("a\u0800b", 6, 5); // surrogate pairs test("\uD834\uDD1E", 0, 0); test("\uD834\uDD1E", 1, 0); test("\uD834\uDD1E", 2, 0); test("\uD834\uDD1E", 3, 0); test("\uD834\uDD1E", 4, 4); test("\uD834\uDD1E", 5, 4); } ``` **Updated** Modified code example, it now handles surrogate pairs.
You should use [CharsetEncoder](https://docs.oracle.com/javase/8/docs/api/java/nio/charset/CharsetEncoder.html#encode-java.nio.CharBuffer-java.nio.ByteBuffer-boolean-); the simple `getBytes()` approach of copying as many bytes as fit can cut a UTF-8 character in half. Something like this: ``` public static int truncateUtf8(String input, byte[] output) { ByteBuffer outBuf = ByteBuffer.wrap(output); CharBuffer inBuf = CharBuffer.wrap(input.toCharArray()); CharsetEncoder utf8Enc = StandardCharsets.UTF_8.newEncoder(); utf8Enc.encode(inBuf, outBuf, true); System.out.println("encoded " + inBuf.position() + " chars of " + input.length() + ", result: " + outBuf.position() + " bytes"); return outBuf.position(); } ```
How do I truncate a java string to fit in a given number of bytes, once UTF-8 encoded?
[ "java", "string", "unicode", "utf-8", "truncate" ]
Is there a way to test if an object is a dictionary? In a method I'm trying to get a value from a selected item in a list box. In some circumstances, the list box might be bound to a dictionary, but this isn't known at compile time. I would like to do something similar to this: ``` if (listBox.ItemsSource is Dictionary<??>) { KeyValuePair<??> pair = (KeyValuePair<??>)listBox.SelectedItem; object value = pair.Value; } ``` Is there a way to do this dynamically at runtime using reflection? I know it's possible to use reflection with generic types and determine the key/value parameters, but I'm not sure if there's a way to do the rest after those values are retrieved.
It should be something like the following. I wrote this in the answer box so the syntax may not be exactly right, but I've made it Wiki editable so anybody can fix up. ``` var sourceType = listBox.ItemsSource.GetType(); if (sourceType.IsGenericType && sourceType.GetInterfaces().Any(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IDictionary<,>))) { var valueProperty = listBox.SelectedItem.GetType().GetProperty("Value"); var item = valueProperty.GetValue(listBox.SelectedItem, null); } ``` (The interface check and the property lookup have to go through the runtime types, because `IsAssignableFrom` and member invocation do not work on open generic types like `IDictionary<,>` or `KeyValuePair<,>`.)
Check to see if it implements IDictionary. See the definition of System.Collections.IDictionary to see what that gives you. ``` if (listBox.ItemsSource is IDictionary) { DictionaryEntry pair = (DictionaryEntry)listBox.SelectedItem; object value = pair.Value; } ``` **EDIT:** Alternative when I realized KeyValuePairs aren't castable to DictionaryEntry ``` if (listBox.DataSource is IDictionary) { listBox.ValueMember = "Value"; object value = listBox.SelectedValue; listBox.ValueMember = ""; //If you need it to generally be empty. } ``` This solution uses reflection, but in this case you don't have to do the grunt work, ListBox does it for you. Also if you generally have dictionaries as data sources you may be able to avoid resetting ValueMember all of the time.
Testing if an Object is a Dictionary in C#
[ "c#", "reflection", "collections", "dictionary" ]
Currently I use .Net `WebBrowser.Document.Images()` to do this. It requires the `WebBrowser` to load the document. It's messy and takes up resources. According to [this question](https://stackoverflow.com/questions/138313/how-to-extract-img-src-title-and-alt-from-html-using-php) XPath is better than a regex at this. Anyone know how to do this in C#?
If your input string is valid XHTML you can treat it as XML, load it into an XmlDocument, and do XPath magic :) But that's not always the case. Otherwise you can try this function, which will return all image links from the HTML source: ``` public List<Uri> FetchLinksFromSource(string htmlSource) { List<Uri> links = new List<Uri>(); string regexImgSrc = @"<img[^>]*?src\s*=\s*[""']?([^'"" >]+?)[ '""][^>]*?>"; MatchCollection matchesImgSrc = Regex.Matches(htmlSource, regexImgSrc, RegexOptions.IgnoreCase | RegexOptions.Singleline); foreach (Match m in matchesImgSrc) { string href = m.Groups[1].Value; links.Add(new Uri(href)); } return links; } ``` And you can use it like this: ``` HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.example.com"); request.Credentials = System.Net.CredentialCache.DefaultCredentials; HttpWebResponse response = (HttpWebResponse)request.GetResponse(); if (response.StatusCode == HttpStatusCode.OK) { using(StreamReader sr = new StreamReader(response.GetResponseStream())) { List<Uri> links = FetchLinksFromSource(sr.ReadToEnd()); } } ```
The big issue with any HTML parsing is the "well formed" part. You've seen the crap HTML out there - how much of it is really well formed? I needed to do something similar - parse out all links in a document (and in my case) update them with a rewritten link. I found the [Html Agility Pack](http://www.codeplex.com/htmlagilitypack) over on CodePlex. It rocks (and handles malformed HTML). Here's a snippet for iterating over links in a document: ``` HtmlDocument doc = new HtmlDocument(); doc.Load(@"C:\Sample.HTM"); HtmlNodeCollection linkNodes = doc.DocumentNode.SelectNodes("//a/@href"); Content match = null; // Run only if there are links in the document. if (linkNodes != null) { foreach (HtmlNode linkNode in linkNodes) { HtmlAttribute attrib = linkNode.Attributes["href"]; // Do whatever else you need here } } ``` [Original Blog Post](http://www.rcs-solutions.com/blog/2008/08/13/ProcessingHTMLDocuments.aspx)
How do you parse an HTML string for image tags to get at the SRC information?
[ "c#", ".net", "html", "regex", "xpath" ]
How can we connect a `PHP` script to an `MS Access (.mdb)` file? I tried by including the following `PHP` code: ``` $db_path = $_SERVER['DOCUMENT_ROOT'] . '\WebUpdate\\' . $file_name . '.mdb'; $cfg_dsn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" . $db_path; $odbcconnect = odbc_connect($cfg_dsn, '', ''); ``` But it failed and I received the following error message: ``` Warning: odbc_connect() [function.odbc-connect]: SQL error: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified, SQL state IM002 in SQLConnect in C:\web\WebUpdate\index.php on line 41 ```
Here's a sample for a connect and a simple select... ``` <?php $db_conn = new COM("ADODB.Connection"); $connstr = "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=". realpath("./Northwind.mdb").";"; $db_conn->open($connstr); $rS = $db_conn->execute("SELECT * FROM Employees"); $f1 = $rS->Fields(0); $f2 = $rS->Fields(1); while (!$rS->EOF) { print $f1->value." ".$f2->value."<br />\n"; $rS->MoveNext(); } $rS->Close(); $db_conn->Close(); ?> ```
In the filename, I'm looking at '\WebUpdate\' - it looks like you have one backslash at the beginning and two at the end. Are you maybe missing a backslash at the beginning?
PHP and MS Access
[ "php", "ms-access" ]
I have a temporary file with data that's returned as part of a SOAP response via a MTOM binary attachment. I would like to trash it as soon as the method call "ends" (i.e., finishes transferring). What's the best way for me to do this? The best way I can figure out how to do this is to delete them when the session is destroyed, but I'm not sure if there's a more 'immediate' way to do this. FYI, I'm NOT using Axis, I'm using jax-ws, if that matters. UPDATE: I'm not sure the answerers are really understanding the issue. I know how to delete a file in java. My problem is this: ``` @javax.jws.WebService public class MyWebService { ... @javax.jws.WebMethod public MyFileResult getSomeObject() { File mytempfile = new File("tempfile.txt"); MyFileResult result = new MyFileResult(); result.setFile(mytempfile); // sets mytempfile as MTOM attachment // mytempfile.delete() iS WRONG // can't delete mytempfile because it hasn't been returned to the web service client // yet. So how do I remove it? return result; } } ```
I ran into this same problem. The issue is that the JAX-WS stack manages the file. It is not possible to determine in your code when JAX-WS is done with the file so you do not know when to delete it. In my case, I am using a DataHandler on my object model rather than a file. MyFileResult would have the following field instead of a file field: ``` private DataHandler handler; ``` My solution was to create a customized version of FileDataSource. Instead of returning a FileInputStream to read the contents of the file, I return the following extension of FileInputStream: ``` private class TemporaryFileInputStream extends FileInputStream { private final File file; public TemporaryFileInputStream(File file) throws FileNotFoundException { super(file); this.file = file; } @Override public void close() throws IOException { super.close(); file.delete(); } } ``` Essentially the datasource allows reading only once. After the stream is closed, the file is deleted. Since the JAX-WS stack only reads the file once, it works. The solution is a bit of a hack but seems to be the best option in this case.
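A self-contained sketch of the delete-on-close idea (the class and file names here are illustrative; in the real setup the JAX-WS stack, not your own code, performs the read and the close):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

public class DeleteOnCloseDemo {

    // A FileInputStream that removes its backing file once it is closed.
    static class TemporaryFileInputStream extends FileInputStream {
        private final File file;

        TemporaryFileInputStream(File file) throws FileNotFoundException {
            super(file);
            this.file = file;
        }

        @Override
        public void close() throws IOException {
            super.close();
            file.delete();
        }
    }

    public static void main(String[] args) throws IOException {
        File temp = File.createTempFile("mtom", ".bin");
        try (FileOutputStream out = new FileOutputStream(temp)) {
            out.write("attachment payload".getBytes());
        }

        TemporaryFileInputStream in = new TemporaryFileInputStream(temp);
        while (in.read() != -1) {
            // the consumer (the JAX-WS stack, in the real scenario) reads once
        }
        in.close(); // closing triggers deletion of the backing file

        System.out.println("file still exists: " + temp.exists());
    }
}
```

Because deletion rides on `close()`, the file disappears exactly when the consumer finishes with the stream, without your web method having to know when that happens.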
Are you using standard java temp files? If so, you can do this: ``` File script = File.createTempFile("temp", ".tmp", new File("./")); ... use the file ... script.delete(); // delete when done. ```
Tomcat 6: how to delete temporary files after a web method call has ended?
[ "java", "web-services", "tomcat", "jax-ws" ]
I am trying to add a timed delay to a C++ program, and was wondering if anyone has any suggestions on what I can try or information I can look at. I wish I could give more details on how I will be using the delay, but until I know what options exist for adding one, I am not sure how I should even attempt to implement it.
[In Win32](https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx): ``` #include<windows.h> Sleep(milliseconds); ``` [In Unix](http://linux.die.net/man/3/usleep): ``` #include<unistd.h> unsigned int microsecond = 1000000; usleep(3 * microsecond);//sleeps for 3 second ``` `sleep()` only takes a number of seconds which is often too long.
An updated answer for C++11: Use the `sleep_for` and `sleep_until` functions: ``` #include <chrono> #include <thread> int main() { using namespace std::this_thread; // sleep_for, sleep_until using namespace std::chrono; // nanoseconds, system_clock, seconds sleep_for(nanoseconds(10)); sleep_until(system_clock::now() + seconds(1)); } ``` With these functions there's no longer a need to continually add new functions for better resolution: `sleep`, `usleep`, `nanosleep`, etc. `sleep_for` and `sleep_until` are template functions that can accept values of any resolution via `chrono` types; hours, seconds, femtoseconds, etc. In C++14 you can further simplify the code with the literal suffixes for `nanoseconds` and `seconds`: ``` #include <chrono> #include <thread> int main() { using namespace std::this_thread; // sleep_for, sleep_until using namespace std::chrono_literals; // ns, us, ms, s, h, etc. using std::chrono::system_clock; sleep_for(10ns); sleep_until(system_clock::now() + 1s); } ``` Note that the actual duration of a sleep depends on the implementation: You can ask to sleep for 10 nanoseconds, but an implementation might end up sleeping for a millisecond instead, if that's the shortest it can do.
How do you add a timed delay to a C++ program?
[ "", "c++", "time", "" ]
I have a generic method defined like this: ``` public void MyMethod<T>(T myArgument) ``` The first thing I want to do is check if the value of myArgument is the default value for that type, something like this: ``` if (myArgument == default(T)) ``` But this doesn't compile because I haven't guaranteed that T will implement the == operator. So I switched the code to this: ``` if (myArgument.Equals(default(T))) ``` Now this compiles, but will fail if myArgument is null, which is part of what I'm testing for. I can add an explicit null check like this: ``` if (myArgument == null || myArgument.Equals(default(T))) ``` Now this feels redundant to me. ReSharper is even suggesting that I change the myArgument == null part into myArgument == default(T) which is where I started. Is there a better way to solve this problem? I need to support **both** references types and value types.
To avoid boxing, the best way to compare generics for equality is with `EqualityComparer<T>.Default`. This respects `IEquatable<T>` (without boxing) as well as `object.Equals`, and handles all the `Nullable<T>` "lifted" nuances. Hence: ``` if(EqualityComparer<T>.Default.Equals(obj, default(T))) { return obj; } ``` This will match: * null for classes * null (empty) for `Nullable<T>` * zero/false/etc for other structs
How about this: ``` if (object.Equals(myArgument, default(T))) { //... } ``` Using the `static object.Equals()` method avoids the need for you to do the `null` check yourself. Explicitly qualifying the call with `object.` probably isn't necessary depending on your context, but I normally prefix `static` calls with the type name just to make the code more readable.
Null or default comparison of generic argument in C#
[ "", "c#", "generics", "" ]
I have a function in a native DLL defined as follows: ``` #include <string> void SetPath(string path); ``` I tried to put this in Microsoft's P/Invoke Interop Assistant, but it chokes on the "string" class (which I think is from MFC?). I have tried marshaling it as a variety of different types (C# String, char[], byte[]) but every time I either get a NotSupportedException or a Native Assembly Exception (depending on what marshaling I tried). As anyone ever done Native/Managed Interop where the native string class is used? Is there any way to Marshal this? Am I going to have to write my own Marshaler?
Looks like you're trying to use the C++ standard library string class. I doubt that will be easy to Marshal. Better to stick with a char \* and Marshal as StringBuilder. That's what I usually do. You'll have to add a wrapper that generates the C++ string for you.
The PInvoke interop assistant only supports C not C++. Unfortunately the MFC String class (CString I believe?) is C++ and won't work through the assistant. Instead try using the following ``` void SetPath(__in const WCHAR* path); ```
Marshal C++ "string" class in C# P/Invoke
[ "", ".net", "c++", "string", "interop", "marshalling", "" ]
Which is better to do client side or server side validation? In our situation we are using * jQuery and MVC. * JSON data to pass between our View and Controller. A lot of the validation I do is validating data as users enter it. For example I use the the `keypress` event to prevent letters in a text box, set a max number of characters and that a number is with in a range. I guess the better question would be, Are there any benefits to doing server side validation over client side? --- Awesome answers everyone. The website that we have is password protected and for a small user base(<50). If they are not running JavaScript we will send ninjas. But if we were designing a site for everyone one I'd agree to do validation on both sides.
As others have said, you should do both. Here's why: ## Client Side You want to validate input on the client side first because you can give **better feedback to the average user**. For example, if they enter an invalid email address and move to the next field, you can show an error message immediately. That way the user can correct every field **before** they submit the form. If you only validate on the server, they have to submit the form, get an error message, and try to hunt down the problem. (This pain can be eased by having the server re-render the form with the user's original input filled in, but client-side validation is still faster.) ## Server Side You want to validate on the server side because you can **protect against the malicious user**, who can easily bypass your JavaScript and submit dangerous input to the server. It is very dangerous to trust your UI. **Not only can they abuse your UI, but they may not be using your UI at all, or even a browser**. What if the user manually edits the URL, or runs their own Javascript, or tweaks their HTTP requests with another tool? What if they send custom HTTP requests from `curl` or from a script, for example? (*This is not theoretical; eg, I worked on a travel search engine that re-submitted the user's search to many partner airlines, bus companies, etc, by sending `POST` requests as if the user had filled each company's search form, then gathered and sorted all the results. Those companies' form JS was never executed, and it was crucial for us that they provide error messages in the returned HTML. Of course, an API would have been nice, but this was what we had to do.*) Not allowing for that is not only naive from a security standpoint, but also non-standard: a client should be allowed to send HTTP by whatever means they wish, and you should respond correctly. That includes validation. 
Server side validation is also important for **compatibility** - not all users, even if they're using a browser, will have JavaScript enabled. ## Addendum - December 2016 There are some validations that **can't even be properly done in server-side application code, and are utterly impossible in client-side code**, because they depend on the current state of the database. For example, "nobody else has registered that username", or "the blog post you're commenting on still exists", or "no existing reservation overlaps the dates you requested", or "your account balance still has enough to cover that purchase." **Only the database can reliably validate data which depends on related data.** Developers [regularly screw this up](http://www.bailis.org/papers/feral-sigmod2015.pdf), but [PostgreSQL provides some good solutions](http://nathanmlong.com/2016/01/protect-your-data-with-postgresql-constraints/).
Yes, client side validation can be totally bypassed, always. You need to do both, client side to provide a better user experience, and server side to be sure that the input you get is actually validated and not just supposedly validated by the client.
JavaScript: client-side vs. server-side validation
[ "", "javascript", "security", "validation", "" ]
Is it possible to do a cast within a LINQ query (for the compiler's sake)? The following code isn't terrible, but it would be nice to make it into one query: ``` Content content = dataStore.RootControl as Controls.Content; List<TabSection> tabList = (from t in content.ChildControls select t).OfType<TabSection>().ToList(); List<Paragraph> paragraphList = (from t in tabList from p in t.ChildControls select p).OfType<Paragraph>().ToList(); List<Line> parentLineList = (from p in paragraphList from pl in p.ChildControls select pl).OfType<Line>().ToList(); ``` The code continues on with a few more queries, but the gist is I have to create a List out of each query in order for the compiler to know that all of the objects in `content.ChildControls` are of type `TabSection` and all of the objects in `t.ChildControls` are of type `Paragraph`...and so on and and so forth. Is there a way within the LINQ query to tell the compiler that `t` in `from t in content.ChildControls` is a `TabSection`?
Try this: ``` from TabSection t in content.ChildControls ``` Also, even if this were not available (or for a different, future scenario you may encounter), you wouldn't be restricted to converting everything to Lists. Converting to a List causes query evaluation on the spot. But if you remove the ToList call, you can work with the IEnumerable type, which would continue to defer the execution of the query until you actually iterate or store in a real container.
Depending on what you are trying to do, one of these might do the trick: ``` List<Line> parentLineList1 = (from t in content.ChildControls.OfType<TabSection>() from p in t.ChildControls.OfType<Paragraph>() from pl in p.ChildControls.OfType<Line>() select pl).ToList(); List<Line> parentLineList2 = (from TabSection t in content.ChildControls from Paragraph p in t.ChildControls from Line pl in p.ChildControls select pl).ToList(); ``` Note that one uses OfType<T>(), which you were using. This will filter the results and return only the items of the specified type. The second query implicitly uses Cast<T>(), which casts the results into the specified type. If any item cannot be cast, an exception is thrown. As mentioned by Turbulent Intellect, you should refrain from calling ToList() as long as possible, or try to avoid it altogether.
Doing a Cast Within a LINQ Query
[ "", "c#", ".net", "linq", ".net-3.5", "" ]
I'm writing a small web server in Python, using BaseHTTPServer and a custom subclass of BaseHTTPServer.BaseHTTPRequestHandler. Is it possible to make this listen on more than one port? What I'm doing now: ``` class MyRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): def doGET [...] class ThreadingHTTPServer(ThreadingMixIn, HTTPServer): pass server = ThreadingHTTPServer(('localhost', 80), MyRequestHandler) server.serve_forever() ```
Sure; just start two different servers on two different ports in two different threads that each use the same handler. Here's a complete, working example that I just wrote and tested. If you run this code then you'll be able to get a Hello World webpage at both <http://localhost:1111/> and <http://localhost:2222/> ``` from threading import Thread from SocketServer import ThreadingMixIn from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler class Handler(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header("Content-type", "text/plain") self.end_headers() self.wfile.write("Hello World!") class ThreadingHTTPServer(ThreadingMixIn, HTTPServer): daemon_threads = True def serve_on_port(port): server = ThreadingHTTPServer(("localhost",port), Handler) server.serve_forever() Thread(target=serve_on_port, args=[1111]).start() serve_on_port(2222) ``` *update:* This also works with Python 3 but three lines need to be slightly changed: ``` from socketserver import ThreadingMixIn from http.server import HTTPServer, BaseHTTPRequestHandler ``` and ``` self.wfile.write(bytes("Hello World!", "utf-8")) ```
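The same idea condensed into a self-contained sketch (my addition, not the original answer): binding port 0 lets the OS pick free ports, which avoids the hard-coded 1111/2222 above ever colliding with something already listening:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from threading import Thread
import urllib.request

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello World!")

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True

def serve_in_background():
    # Port 0 asks the OS for any free port; read the real one back afterwards.
    server = ThreadingHTTPServer(("localhost", 0), Handler)
    Thread(target=server.serve_forever, daemon=True).start()
    return server

servers = [serve_in_background(), serve_in_background()]
for srv in servers:
    port = srv.server_address[1]
    print(port, urllib.request.urlopen(f"http://localhost:{port}/").read())
```

Both servers share one Handler class, exactly as in the answer above; only the port discovery differs.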
I would say that threading for something this simple is overkill. You're better off using some form of asynchronous programming. Here is an example using [Twisted](http://twistedmatrix.com/): ``` from twisted.internet import reactor from twisted.web import resource, server class MyResource(resource.Resource): isLeaf = True def render_GET(self, request): return 'gotten' site = server.Site(MyResource()) reactor.listenTCP(8000, site) reactor.listenTCP(8001, site) reactor.run() ``` I also think it looks a lot cleaner to have each port be handled in the same way, instead of having the main thread handle one port and an additional thread handle the other. Arguably that can be fixed in the thread example, but then you're using three threads.
How do I write a python HTTP server to listen on multiple ports?
[ "", "python", "webserver", "" ]
Can you have a multicolumn listview control where one of the columns is a checkbox? Example code or links would be greatly appreciated. I am using Visual Studio 2005.
Allan Anderson created a custom control to let you do this. You can find it here: <http://www.codeproject.com/KB/list/aa_listview.aspx> Here's some example code for that control: ``` GlacialList mylist = new GlacialList(); mylist.Columns.Add( "Column1", 100 ); // this can also be added // through the design time support mylist.Columns.Add( "Column2", 100 ); mylist.Columns.Add( "Column3", 100 ); mylist.Columns.Add( "Column4", 100 ); GLItem item; item = this.glacialList1.Items.Add( "Atlanta Braves" ); item.SubItems[1].Text = "8v"; item.SubItems[2].Text = "Live"; item.SubItems[2].BackColor = Color.Bisque; item.SubItems[3].Text = "MLB.TV"; item = this.glacialList1.Items.Add( "Florida Marlins" ); item.SubItems[1].Text = ""; item.SubItems[2].Text = "Delayed"; item.SubItems[2].BackColor = Color.LightCoral; item.SubItems[3].Text = "Audio"; item.SubItems[1].BackColor = Color.Aqua; // set the background // of this particular subitem ONLY item.UserObject = myownuserobjecttype; // set a private user object item.Selected = true; // set this item to selected state item.SubItems[1].Span = 2; // set this sub item to span 2 spaces ArrayList selectedItems = mylist.SelectedItems; // get list of selected items ```
Better use grid view control, but if you want **only** one column with checkboxes and that column is the **first** one you can just write: ``` this.listView1.CheckBoxes = true; ```
Checkbox in listview control
[ "", "c#", "user-interface", "visual-studio-2005", "checkbox", "" ]
I despise the PHP language, and I'm quite certain that I'm not alone. But the great thing about PHP is the way that mod\_php takes and hides the gory details of integrating with the apache runtime, and achieves CGI-like request isolation and decent performance. What's the shortest-distance approach to getting the same simplicity, speed and isolation as PHP's runtime environment, with Perl semantics? I feel like raw mod\_perl gives me too much rope to hang myself with: cross-request globals, messy config, too many template engines to choose from. FastCGI? HTML::Mason? I'd like to do development largely in Perl, if only I had a framework that let me.
Look at [Catalyst](http://www.catalystframework.org/). This MVC (model, view, controller) framework works stand-alone or with mod_perl and hides a lot of the messy bits. There is a slightly odd learning curve (quick start, slower middle, then it really clicks for advanced stuff). Catalyst allows you to use [Template Toolkit](http://template-toolkit.org/ "Template Toolkit") to separate the design logic from the business logic. Template Toolkit really is great; even if you decide not to use Catalyst, you should be using this. HTML::Mason isn't something I personally like, although if you do all the HTML yourself then you might want to review [Template::Declare](http://search.cpan.org/dist/Template-Declare/), which is another alternative you can also use with Catalyst. For database stuff look at [DBIx::Class](http://search.cpan.org/dist/DBIx-Class/ "DBIx::Class"), which yet again works with Catalyst or on its own.
I just saw [Dancer](http://search.cpan.org/dist/Dancer/lib/Dancer.pm). Looks like this might be a good option.
How can I get PHP's (deployment) simplicity but Perl's power?
[ "", "php", "perl", "" ]
It is discouraged to simply catch `System.Exception`. Instead, only the "known" exceptions should be caught. Now, this sometimes leads to unnecessary repetitive code, for example: ``` try { WebId = new Guid(queryString["web"]); } catch (FormatException) { WebId = Guid.Empty; } catch (OverflowException) { WebId = Guid.Empty; } ``` I wonder: Is there a way to catch both exceptions and only call the `WebId = Guid.Empty` call once? The given example is rather simple, as it's only a [`GUID`](http://en.wikipedia.org/wiki/Globally_unique_identifier). But imagine code where you modify an object multiple times, and if one of the manipulations fails expectedly, you want to "reset" the `object`. However, if there is an unexpected exception, I still want to throw that higher.
Catch `System.Exception` and switch on the types ``` catch (Exception ex) { if (ex is FormatException || ex is OverflowException) { WebId = Guid.Empty; } else throw; } ```
**EDIT:** I do concur with others who are saying that, as of C# 6.0, exception filters are now a perfectly fine way to go: `catch (Exception ex) when (ex is ... || ex is ... )` Except that I still kind of hate the one-long-line layout and would personally lay the code out like the following. I think this is as functional as it is aesthetic, since I believe it improves comprehension. Some may disagree: ``` catch (Exception ex) when ( ex is ... || ex is ... || ex is ... ) ``` **ORIGINAL:** I know I'm a little late to the party here, but holy smoke... Cutting straight to the chase, this kind of duplicates an earlier answer, but if you really want to perform a common action for several exception types and keep the whole thing neat and tidy within the scope of the one method, why not just use a lambda/closure/inline function to do something like the following? I mean, chances are pretty good that you'll end up realizing that you just want to make that closure a separate method that you can utilize all over the place. But then it will be super easy to do that without actually changing the rest of the code structurally. Right? ``` private void TestMethod () { Action<Exception> errorHandler = ( ex ) => { // write to a log, whatever... }; try { // try some stuff } catch ( FormatException ex ) { errorHandler ( ex ); } catch ( OverflowException ex ) { errorHandler ( ex ); } catch ( ArgumentNullException ex ) { errorHandler ( ex ); } } ``` I can't help but wonder (**warning:** a little irony/sarcasm ahead) why on earth go to all this effort to basically just replace the following: ``` try { // try some stuff } catch( FormatException ex ){} catch( OverflowException ex ){} catch( ArgumentNullException ex ){} ``` ...with some crazy variation of this next code smell, I mean example, only to pretend that you're saving a few keystrokes. ``` // sorta sucks, let's be honest... 
try { // try some stuff } catch( Exception ex ) { if (ex is FormatException || ex is OverflowException || ex is ArgumentNullException) { // write to a log, whatever... return; } throw; } ``` Because it certainly isn't automatically more readable. Granted, I left the three identical instances of `/* write to a log, whatever... */ return;` out of the first example. But that's sort of my point. Y'all have heard of functions/methods, right? Seriously. Write a common `ErrorHandler` function and, like, call it from each catch block. If you ask me, the second example (with the `if` and `is` keywords) is both significantly less readable, and simultaneously significantly more error-prone during the maintenance phase of your project. The maintenance phase, for anyone who might be relatively new to programming, is going to compose 98.7% or more of the overall lifetime of your project, and the poor schmuck doing the maintenance is almost certainly going to be someone other than you. And there is a very good chance they will spend 50% of their time on the job cursing your name. And of course FxCop barks at you and so you have to ***also*** add an attribute to your code that has precisely zip to do with the running program, and is only there to tell FxCop to ignore an issue that in 99.9% of cases it is totally correct in flagging. And, sorry, I might be mistaken, but doesn't that "ignore" attribute end up actually compiled into your app? Would putting the entire `if` test on one line make it more readable? I don't think so. I mean, I did have another programmer vehemently argue once long ago that putting more code on one line would make it "run faster." But of course he was stark raving nuts. 
Trying to explain to him (with a straight face--which was challenging) how the interpreter or compiler would break that long line apart into discrete one-instruction-per-line statements--essentially identical to the result if he had gone ahead and just made the code readable instead of trying to out-clever the compiler--had no effect on him whatsoever. But I digress. How much *less* readable does this get when you add three more exception types, a month or two from now? (Answer: it gets a ***lot*** less readable). One of the major points, really, is that most of the point of formatting the textual source code that we're all looking at every day is to make it really, really obvious to other human beings what is actually happening when the code runs. Because the compiler turns the source code into something totally different and couldn't care less about your code formatting style. So all-on-one-line totally sucks, too. Just saying... ``` // super sucks... catch( Exception ex ) { if ( ex is FormatException || ex is OverflowException || ex is ArgumentNullException ) { // write to a log, whatever... return; } throw; } ```
Catch multiple exceptions at once?
[ "", "c#", ".net", "exception", "" ]
I have three (C++) classes: Player, Hand, and Card. Player has a member, hand, that holds a Hand. It also has a method, getHand(), that returns the contents of hand. ``` Hand Player::getHand() { return hand; } ``` Hand has a method, addCard(Card c), that adds a card to the hand. I want to do this: player1.getHand().addCard(c); but it doesn't work. It doesn't throw an error, so it's doing something. But if I examine the contents of player1's hand afterward, the card hasn't been added. How can I get this to work?
If getHand() is not returning a reference, you will be in trouble: the caller receives a copy of the hand, so addCard() modifies that temporary copy while the player's own hand is left untouched. Declare it as `Hand& getHand()` instead.
If getHand() returns by-value you're modifying a copy of the hand and not the original.
How to use one object's method to update another object's attribute?
[ "", "c++", "oop", "methods", "" ]
``` SELECT GETDATE() ``` Returns: `2008-09-22 15:24:13.790` I want that date part without the time part: `2008-09-22 00:00:00.000` How can I get that?
*NOTE: This answer returns the original `DATETIME` or `DATETIME2` type. For an expression that returns a true `DATE` type (SQL Server 2008 and later), see [BenR's answer](https://stackoverflow.com/a/126984/1501497) below.* ``` SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, @your_date)) ``` for example ``` SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) ``` gives me ``` 2008-09-22 00:00:00.000 ``` Pros: * No varchar<->datetime conversions required * No need to think about locale
SQL Server 2008 now has a 'date' data type which contains only a date with no time component. Anyone using SQL Server 2008 and beyond can do the following: ``` SELECT CONVERT(date, GETDATE()) ```
How to return only the Date from a SQL Server DateTime datatype
[ "", "sql", "date", "t-sql", "datetime", "sql-server-2008", "" ]
`fopen` is failing when I try to read in a very moderately sized file in `PHP`. A 6 meg file makes it choke, though smaller files around 100k are just fine. I've read that it is sometimes necessary to recompile `PHP` with the `-D_FILE_OFFSET_BITS=64` flag in order to read files over 20 gigs or something ridiculous, but shouldn't I have no problems with a 6 meg file? Eventually we'll want to read in files that are around 100 megs, and it would be nice to be able to open them and then read through them line by line with fgets as I'm able to do with smaller files. What are your tricks/solutions for reading and doing operations on very large files in `PHP`? Update: Here's an example of a simple codeblock that fails on my 6 meg file - PHP doesn't seem to throw an error, it just returns false. Maybe I'm doing something extremely dumb? ``` $rawfile = "mediumfile.csv"; if($file = fopen($rawfile, "r")){ fclose($file); } else { echo "fail!"; } ``` Another update: Thanks all for your help, it did turn out to be something incredibly dumb - a permissions issue. My small file inexplicably had read permissions when the larger file didn't. Doh!
Are you sure that it's `fopen` that's failing and not your script's timeout setting? The default is usually around 30 seconds or so, and if your file is taking longer than that to read in, it may be tripping that up. Another thing to consider may be the memory limit on your script - reading the file into an array may trip over this, so check your error log for memory warnings. If neither of the above are your problem, you might look into using [`fgets`](http://ie.php.net/fgets) to read the file in line-by-line, processing as you go. ``` $handle = fopen("/tmp/uploadfile.txt", "r") or die("Couldn't get handle"); if ($handle) { while (!feof($handle)) { $buffer = fgets($handle, 4096); // Process buffer here.. } fclose($handle); } ``` **Edit** > PHP doesn't seem to throw an error, it just returns false. Is the path to `$rawfile` correct relative to where the script is running? Perhaps try setting an absolute path here for the filename.
Did 2 tests with a 1.3 GB file and a 9.5 GB file. **1.3 GB** **Using `fopen()`** This process used 15555 ms for its computations. It spent 169 ms in system calls. **Using `file()`** This process used 6983 ms for its computations. It spent 4469 ms in system calls. **9.5 GB** **Using `fopen()`** This process used 113559 ms for its computations. It spent 2532 ms in system calls. **Using `file()`** This process used 8221 ms for its computations. It spent 7998 ms in system calls. Seems `file()` is faster.
Reading very large files in PHP
[ "", "php", "file", "large-files", "" ]
Can I persuade `operator>>` in C++ to read both a `hex` value AND and a `decimal` value? The following program demonstrates how reading hex goes wrong. I'd like the same `istringstream` to be able to read both `hex` and `decimal`. ``` #include <iostream> #include <sstream> int main(int argc, char** argv) { int result = 0; // std::istringstream is("5"); // this works std::istringstream is("0x5"); // this fails while ( is.good() ) { if ( is.peek() != EOF ) is >> result; else break; } if ( is.fail() ) std::cout << "failed to read string" << std::endl; else std::cout << "successfully read string" << std::endl; std::cout << "result: " << result << std::endl; } ```
Use [`std::setbase(0)`](http://en.cppreference.com/w/cpp/io/manip/setbase) which enables prefix dependent parsing. It will be able to parse `10` (dec) as 10 decimal, `0x10` (hex) as 16 decimal and `010` (octal) as 8 decimal. ``` #include <iomanip> is >> std::setbase(0) >> result; ```
You need to tell C++ what your base is going to be. Want to parse a hex number? Change your `is >> result` line to: ``` is >> std::hex >> result; ``` Similarly, `std::dec` indicates decimal numbers and `std::oct` indicates octal.
Can operator>> read an int hex AND decimal?
[ "", "c++", "hex", "istringstream", "" ]
I want to delete all directories and subdirectories under a root directory that are contain "tmp" in their names. This should include any .svn files too. My first guess is to use ``` <delete> <dirset dir="${root}"> <include name="**/*tmp*" /> </dirset> </delete> ``` This does not seem to work as you can't nest a `dirset` in a `delete` tag. Is this a correct approach, or should I be doing something else? * ant version == 1.6.5. * java version == 1.6.0\_04
Here's the answer that worked for me: ``` <delete includeemptydirs="true"> <fileset dir="${root}" defaultexcludes="false"> <include name="**/*tmp*/**" /> </fileset> </delete> ``` I had an added complication: I needed to remove `.svn` directories too. With `defaultexcludes`, `.*` files were being excluded, and so the empty directories weren't really empty, and so weren't getting removed. The attribute `includeemptydirs` (thanks, flicken, XL-Plüschhase) enables the trailing `**` wildcard to match an empty string.
try: ``` <delete includeemptydirs="true"> <fileset dir="${root}"> <include name="**/*tmp*/*" /> </fileset> </delete> ``` --- Thank you, flicken!
How do I delete a dirset of directories with Ant?
[ "", "java", "ant", "build", "" ]
In Oracle, the number of rows returned in an arbitrary query can be limited by filtering on the "virtual" `rownum` column. Consider the following example, which will return, at most, 10 rows. ``` SELECT * FROM all_tables WHERE rownum <= 10 ``` Is there a simple, generic way to do something similar in Ingres?
Blatantly changing my answer. `LIMIT 10` works for MySQL and others; Ingres uses ``` Select First 10 * from myTable ``` [Ref](http://docs.ingres.com/sqlref/Selectinteractive)
`select * from myTable limit 10` does not work. I have discovered one possible solution: ``` TIDs are "tuple identifiers" or row addresses. The TID contains the page number and the index of the offset to the row relative to the page boundary. TIDs are presently implemented as 4-byte integers. The TID uniquely identifies each row in a table. Every row has a TID. The high-order 23 bits of the TID are the page number of the page in which the row occurs. The TID can be addressed in SQL by the name `tid.' ``` So you can limit the number of rows coming back using something like: ``` select * from SomeTable where tid < 2048 ``` The method is somewhat inexact in the number of rows it returns. It's fine for my requirement though because I just want to limit rows coming back from a very large result set to speed up testing.
How to limit result set size for arbitrary query in Ingres?
[ "", "sql", "oracle", "ingres", "" ]
What is the simplest way to determine the length (in seconds) of a given mp3 file, **without using outside libraries**? (python source highly appreciated)
You can use [pymad](http://spacepants.org/src/pymad/). It's an external library, but don't fall for the Not Invented Here trap. Any particular reason you don't want any external libraries? ``` import mad mf = mad.MadFile("foo.mp3") track_length_in_milliseconds = mf.total_time() ``` Spotted [here](http://simonwillison.net/2003/Dec/4/mp3lengths/). -- If you really don't want to use an external library, have a look [here](http://ibofobi.dk/stuff/mp3/) and check out how he's done it. Warning: it's complicated.
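If an external library really is off the table, a rough estimate is possible for the simplest case. The sketch below is my own (the helper name is hypothetical): it reads the first MPEG frame header and assumes a constant-bitrate MPEG1 Layer III file with no leading ID3v2 tag. VBR or tagged files need the full frame walk the linked article describes:

```python
# Bitrate table (kbps) for MPEG1 Layer III; index 0 is "free format", 15 is invalid.
BITRATES_KBPS = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 0]

def estimate_mp3_duration(path):
    """Estimate duration in seconds as file_size * 8 / bitrate (CBR only)."""
    with open(path, "rb") as f:
        header = f.read(4)
        f.seek(0, 2)              # jump to end of file
        size_bytes = f.tell()
    # The 11 set bits of the frame sync must start the file for this shortcut.
    if len(header) < 4 or header[0] != 0xFF or (header[1] & 0xE0) != 0xE0:
        raise ValueError("no MPEG frame sync at start of file (ID3 tag? not an MP3?)")
    bitrate_kbps = BITRATES_KBPS[(header[2] >> 4) & 0x0F]
    if bitrate_kbps == 0:
        raise ValueError("free-format or invalid bitrate index")
    return size_bytes * 8 / (bitrate_kbps * 1000)
```

For a 128 kbps CBR file this is usually within a second or two of the true length; for VBR it can be wildly wrong, which is why the linked "complicated" approach averages over all frames.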
For google followers' sake, here are a few more external libs: * mpg321 -t * ffmpeg -i * midentify (mplayer basically) see [Using mplayer to determine length of audio/video file](https://stackoverflow.com/questions/497681/using-mplayer-to-determine-length-of-audio-video-file) * mencoder (pass it invalid params, it will spit out an error message but also give you info on the file in question, ex $ mencoder inputfile.mp3 -o fake) * mediainfo program <http://mediainfo.sourceforge.net/en> * exiftool * the linux "file" command * mp3info * sox refs: * <https://superuser.com/questions/36871/linux-command-line-utility-to-determine-mp3-bitrate> * <http://www.ruby-forum.com/topic/139468> * [mp3 length in milliseconds](https://stackoverflow.com/questions/993971/mp3-length-in-milliseconds) (making this a wiki for others to add to). and libs: .net: naudio, java: jlayer, c: libmad Cheers!
time length of an mp3 file
[ "", "python", "mp3", "media", "duration", "" ]
What is the best way to generate a current datestamp in Java? YYYY-MM-DD:hh-mm-ss
Using the standard JDK, you will want to use `java.text.SimpleDateFormat`:

```
Date myDate = new Date();
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd:HH-mm-ss");
String myDateString = sdf.format(myDate);
```

However, if you have the option to use the Apache Commons Lang package, you can use `org.apache.commons.lang.time.FastDateFormat`:

```
Date myDate = new Date();
FastDateFormat fdf = FastDateFormat.getInstance("yyyy-MM-dd:HH-mm-ss");
String myDateString = fdf.format(myDate);
```

FastDateFormat has the benefit of being thread-safe, so you can use a single instance throughout your application. It is strictly for formatting dates and does not support parsing like SimpleDateFormat does in the following example:

```
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd:HH-mm-ss");
Date yourDate = sdf.parse("2008-09-18:22-03-15");
```
```
Date d = new Date();
String formatted = new SimpleDateFormat("yyyy-MM-dd:HH-mm-ss").format(d);
System.out.println(formatted);
```
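On Java 8 and later there is a third standard-library option: `java.time.format.DateTimeFormatter`, which, like FastDateFormat, is immutable and thread-safe, so a single shared instance is fine. A small sketch (the class and method names here are mine):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class Timestamp {
    // DateTimeFormatter is immutable and thread-safe; share one instance
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd:HH-mm-ss");

    public static String format(LocalDateTime t) {
        return t.format(FMT);
    }

    public static void main(String[] args) {
        // Current datestamp in the YYYY-MM-DD:hh-mm-ss shape the question asks for
        System.out.println(format(LocalDateTime.now()));
    }
}
```

Unlike FastDateFormat, DateTimeFormatter also supports parsing, via `LocalDateTime.parse(text, FMT)`.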
Generate a current datestamp in Java
[ "", "java", "datestamp", "" ]
I'd like to check if the current browser supports the onbeforeunload event. The common JavaScript way to do this does not seem to work:

```
if (window.onbeforeunload) {
    alert('yes');
} else {
    alert('no');
}
```

Actually, it only checks whether a handler has been attached to the event. Is there a way to detect whether onbeforeunload is supported without sniffing the particular browser name?
I wrote about a more-or-less [reliable inference for detecting event support](http://perfectionkills.com/detecting-event-support-without-browser-sniffing/) in modern browsers some time ago. You can see on a demo page that "beforeunload" is supported in at least Safari 4+, FF3.x+ and IE. **Edit**: This technique is now used in jQuery, Prototype.js, Modernizr, and likely other scripts and libraries.
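The core of that inference is checking whether the host object exposes the corresponding `on<event>` handler property, which it does even when no handler is attached. A stripped-down sketch (the full article also creates a throwaway element and falls back to `setAttribute` for older Firefox, which this omits):

```javascript
// Minimal event-support check: an object that understands an event exposes
// a matching "on<event>" property, whether or not a handler is assigned.
function isEventSupported(eventName, obj) {
  obj = obj || globalThis; // in a browser, this is window
  return ('on' + eventName) in obj;
}

// In a browser: isEventSupported('beforeunload')
```

Note that this is the attached-handler-free version of the check from the question: it tests for the property's *existence*, not its value, so it does not require a handler to already be bound.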
Unfortunately [kangax's answer](https://stackoverflow.com/a/1262811/345716) doesn't work for [Safari on iOS](https://stackoverflow.com/questions/3239834/window-onbeforeunload-not-working-on-the-ipad). In my testing, `beforeunload` was supported in every browser I tried except Safari on iOS :-( Instead I suggest a different approach. The idea is simple: on the very first page visit, we don't yet know whether `beforeunload` is supported, but on that first page we set up both an `unload` and a `beforeunload` handler. If the `beforeunload` handler fires, we set a flag saying that `beforeunload` is supported (`beforeunloadSupported = "yes"`). When the `unload` handler fires, if the flag hasn't been set, we set the flag saying that `beforeunload` is *not* supported. In the following we'll use `localStorage` (supported in all the browsers I care about; see <http://caniuse.com/namevalue-storage>) to get/set the flag. We could just as well have used a cookie, but I chose `localStorage` because there is no reason to send this information to the web server on every request. We just need a flag that survives page reloads. Once it has been detected once, it stays detected forever. With this, you can now call `isBeforeunloadSupported()` and it will tell you.

```
(function($) {
    var field = 'beforeunloadSupported';
    if (window.localStorage && window.localStorage.getItem &&
        window.localStorage.setItem && !window.localStorage.getItem(field)) {
        $(window).on('beforeunload', function () {
            window.localStorage.setItem(field, 'yes');
        });
        $(window).on('unload', function () {
            // If unload fires and beforeunload hasn't set the field,
            // then beforeunload didn't fire and is therefore not
            // supported (cough * iPad * cough)
            if (!window.localStorage.getItem(field)) {
                window.localStorage.setItem(field, 'no');
            }
        });
    }
    window.isBeforeunloadSupported = function () {
        if (window.localStorage && window.localStorage.getItem &&
            window.localStorage.getItem(field) &&
            window.localStorage.getItem(field) == "yes") {
            return true;
        } else {
            return false;
        }
    };
})(jQuery);
```

Here is a full [jsfiddle](http://jsfiddle.net/pmorch/tW827/) with example usage. Note that detection only happens on the second or any subsequent page load on your site. If it is important to you to have it working on the very first page too, you could load an `iframe` on that page with a `src` attribute pointing to a same-domain page containing this detection, make sure it has loaded, and then remove it. That ensures the detection has run, so `isBeforeunloadSupported()` works even on the first page. But I didn't need that, so I didn't put it in my demo.
onbeforeunload support detection
[ "", "javascript", "" ]