So, I started learning to code in [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29) and later [Django](http://en.wikipedia.org/wiki/Django_%28web_framework%29). At first it was hard to look at tracebacks and actually figure out what I had done wrong and where the syntax error was. Some time has passed now, and somewhere along the way I guess I developed a routine for debugging my Django code. Since this routine formed early in my coding experience, I sat down and wondered whether the way I was doing it was ineffective and could be done faster. I usually manage to find and correct the bugs in my code, but I wonder if I should be doing it faster. I usually just use the debug info Django gives when enabled. When things don't end up the way I expected, I break the code flow a lot with a deliberate syntax error and look at the variables at that point in the flow to figure out where the code does something other than what I wanted. But can this be improved? Are there good tools or better ways to debug your Django code?
There are a bunch of ways to do it, but the most straightforward is to simply use the [Python debugger](https://docs.python.org/2/library/pdb.html). Just add the following line to a Django view function: ``` import pdb; pdb.set_trace() ``` or ``` breakpoint()  # Python 3.7+ ``` If you try to load that page in your browser, the browser will hang and you will get a prompt in the console running the server, letting you carry on debugging the actually executing code. However, there are other options (I am not recommending them): * `return HttpResponse({variable to inspect})` * `print {variable to inspect}` * `raise Exception({variable to inspect})` But the Python Debugger (pdb) is highly recommended for all types of Python code. If you are already into pdb, you'd also want to have a look at [**IPDB**](http://pypi.python.org/pypi/ipdb/), which uses [**ipython**](http://ipython.org/) for debugging. Some more useful extensions to pdb are [**pdb++**](https://pypi.python.org/pypi/pdbpp/), suggested by [Antash](https://stackoverflow.com/users/5792269/antash); [**pudb**](https://pypi.python.org/pypi/pudb), suggested by [PatDuJour](https://stackoverflow.com/users/5081188/patdujour); and [**Using the Python debugger in Django**](https://mike.tig.as/blog/2010/09/14/pdb/), suggested by [Seafangs](https://stackoverflow.com/users/884640/seafangs).
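As a concrete sketch (the function below is a made-up stand-in for a view; the name and data shape are illustrative, and the debugger calls are commented out so the snippet runs non-interactively):

```python
def item_summary(request_data):
    """Hypothetical view-like function; request_data stands in for request.POST."""
    items = request_data.get("items", [])
    # Uncomment one of these to drop into the debugger right before the
    # suspicious logic; the server console then shows a (Pdb) prompt where
    # you can inspect `items`, step with `n`, or continue with `c`.
    # import pdb; pdb.set_trace()   # works on any Python version
    # breakpoint()                  # Python 3.7+, honors PYTHONBREAKPOINT
    total_chars = sum(len(item) for item in items)
    return {"count": len(items), "total_chars": total_chars}

print(item_summary({"items": ["ab", "c"]}))  # {'count': 2, 'total_chars': 3}
```

Setting the environment variable `PYTHONBREAKPOINT=0` disables every `breakpoint()` call at once, which is handy when you forget to remove one.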
I really like [Werkzeug](http://werkzeug.pocoo.org/)'s interactive debugger. It's similar to Django's debug page, except that you get an interactive shell on every level of the traceback. If you use [django-extensions](https://github.com/django-extensions/django-extensions), you get a `runserver_plus` management command which starts the development server and gives you Werkzeug's debugger on exceptions. Of course, you should only run this locally, as it gives anyone with a browser the right to execute arbitrary Python code in the context of the server.
How to debug in Django, the good way?
[ "", "python", "django", "debugging", "" ]
I have the following enum. How do I map it in JNA? This enum is further referenced in a structure. ``` typedef enum { eFtUsbDeviceNotShared, eFtUsbDeviceSharedActive, eFtUsbDeviceSharedNotActive, eFtUsbDeviceSharedNotPlugged, eFtUsbDeviceSharedProblem } eFtUsbDeviceStatus; ``` Abdul Khaliq
If you're using JNA, you probably want to explicitly specify the values of the enumeration in Java. By default, Java's basic enum type doesn't really give you that functionality; you have to add a constructor for an EnumSet (see [this](http://java.sun.com/docs/books/tutorial/java/javaOO/enum.html) and [this](http://epirsch.blogspot.com/2008/02/jna-love-nxt.html)). A simple way to encode C enumerations is to use `public static final int` constants wrapped in a class (or interface) with the same name as the enum. You get most of the functionality you'd get from a Java enum with slightly less overhead to assign values. Some good JNA examples, including the snippets below (which were copied from there), are available [here](http://code.google.com/p/jnaerator/wiki/Documentation). Suppose your C code looks like: ``` enum Values { First, Second, Last }; ``` Then the Java looks like: ``` public static interface Values { public static final int First = 0; public static final int Second = 1; public static final int Last = 2; } ```
On my blog I wrote up [a convenient way to use **real** Java `enum`s with JNA](http://technofovea.com/blog/archives/815), rather than just arbitrary `int`s. It's a bit more complex, but it has several advantages: * You get *most* of the type-safety and error-prevention * Your IDE can suggest/autocomplete things * You can make a much class-ier and easier Java API Basically, you need to use a custom [`TypeConverter`](https://jna.java.net/javadoc/com/sun/jna/TypeConverter.html) for the `enum`, and provide that to JNA through a simple [`TypeMapper`](https://jna.java.net/javadoc/com/sun/jna/TypeMapper.html). Most of the extra code is to avoid needing to make a separate `TypeConverter` for each different `enum` class. (In my case, I had to make a lot of them.) --- You can see some real-world code in my [jhllib](https://github.com/DHager/jhllib) project. In particular, look at the definitions and usages of [HlTypeMapper](https://github.com/DHager/jhllib/search?utf8=%E2%9C%93&q=HlTypeMapper), [EnumConverter](https://github.com/DHager/jhllib/search?utf8=%E2%9C%93&q=enumconverter), and [JnaEnum](https://github.com/DHager/jhllib/search?utf8=%E2%9C%93&q=JnaEnum).
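The int-to-enum mapping that such a TypeConverter performs can be sketched without any JNA dependency. The enum below uses the names from the question's C definition; the Java identifiers and the `toNative`/`fromNative` method names are illustrative choices made here, not JNA API:

```java
public class EnumDemo {
    // Mirrors the C eFtUsbDeviceStatus enum. The ordinals happen to match
    // the C values here, but storing the value explicitly survives
    // reordering and gaps in the C definition.
    enum FtUsbDeviceStatus {
        NOT_SHARED(0),
        SHARED_ACTIVE(1),
        SHARED_NOT_ACTIVE(2),
        SHARED_NOT_PLUGGED(3),
        SHARED_PROBLEM(4);

        private final int nativeValue;

        FtUsbDeviceStatus(int nativeValue) { this.nativeValue = nativeValue; }

        // Value to hand across the native boundary.
        int toNative() { return nativeValue; }

        // Look up the enum constant for a value received from native code.
        static FtUsbDeviceStatus fromNative(int value) {
            for (FtUsbDeviceStatus s : values()) {
                if (s.nativeValue == value) return s;
            }
            throw new IllegalArgumentException("Unknown native value: " + value);
        }
    }

    public static void main(String[] args) {
        System.out.println(FtUsbDeviceStatus.fromNative(1)); // SHARED_ACTIVE
    }
}
```

A JNA `TypeConverter` for this enum would then just delegate to `toNative`/`fromNative` at the boundary, which is essentially what the generic converter described above factors out.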
How to map enum in JNA
[ "", "java", "enums", "native", "jna", "" ]
I have a trigger that is supposed to update the logout time (generate a random logout time within an hour of the login time; the seconds have to differ too). How do I achieve that functionality? Please help. Here is my trigger so far: ``` USE TestDB GO CREATE TRIGGER AfterInsertTG ON dbo.usage_reports_Dummy2 AFTER INSERT AS DECLARE @pk_id as int, @member_id as int,@login_time AS DATETIME,@logout_time AS DATETIME ,@ip AS VARCHAR(255),@session_id AS VARCHAR(255); SELECT @pk_id = pk_id , @member_id = member_id, @login_time =login_time, @logout_time = logout_time, @ip = ip, @session_id = session_id FROM usage_reports_Dummy2 IF(@logout_time IS NULL) BEGIN ??????? END GO ``` Thank you all for helping me out, especially Eric for taking some time to think about the formula. I chose Marc's answer because it suits my conditions. Here is the final code: ``` CREATE TRIGGER trgInsert ON dbo.usage_reports_Dummy2 INSTEAD OF INSERT AS BEGIN INSERT INTO dbo.usage_reports_Dummy2(member_id, login_time, logout_time, ip, session_id) SELECT member_id, login_time, logout_time, ip, session_id FROM inserted WHERE logout_time IS NOT NULL INSERT INTO dbo.usage_reports_Dummy2(member_id, login_time, logout_time, ip, session_id) SELECT member_id, login_time, DATEADD(ss, RAND() * 3600, login_time), ip, session_id FROM inserted WHERE logout_time IS NULL END ```
What you could do is create an INSTEAD OF INSERT trigger something like this: ``` CREATE TRIGGER trgInsert ON dbo.usage_reports_Dummy2 INSTEAD OF INSERT AS BEGIN INSERT INTO dbo.usage_reports_Dummy2(member_id, login_time, logout_time, ip, session_id) SELECT member_id, login_time, logout_time, ip, session_id FROM inserted WHERE logout_time IS NOT NULL INSERT INTO dbo.usage_reports_Dummy2(member_id, login_time, logout_time, ip, session_id) SELECT member_id, login_time, DATEADD(ss, RAND() * 3600, login_time), ip, session_id FROM inserted WHERE logout_time IS NULL END ``` That way, if a "logout\_time" is provided, the values are stored "as is", and if not, then your logout\_time is calculated based on your requirements. Marc
You can use [`newid`](http://msdn.microsoft.com/en-us/library/aa276822(SQL.80).aspx) to generate a random sequence for you, then send it to [`checksum`](http://msdn.microsoft.com/en-us/library/aa258245(SQL.80).aspx) to get an int out of it, and then [`mod` (`%`)](http://msdn.microsoft.com/en-us/library/aa276866(SQL.80).aspx) it by 3600 (the number of seconds in an hour). Using the [`dateadd`](http://msdn.microsoft.com/en-us/library/aa258267(SQL.80).aspx) function, you can add that random amount of seconds to your `@login_time`. ``` select @logout_time = dateadd(ss, checksum(newid()) % 3600, @login_time) ``` It should be noted that [`rand`](http://msdn.microsoft.com/en-us/library/aa238382(SQL.80).aspx) only returns one value per statement, so all of your rows would have the same variance between `@login_time` and `@logout_time`. However, if it is a single row, then this works well: ``` select @logout_time = dateadd(ss, rand() * 3600, @login_time) ```
trigger in sql server 2000
[ "", "sql", "sql-server", "t-sql", "sql-server-2000", "" ]
I have this interface: ``` public interface Command<T> { T execute(String... args); } ``` It works fine for most uses. But when I try to model a command that has only side effects (i.e. no return value) I'm tempted to write: ``` public class SideEffectCommand implements Command<Void> { @Override public Void execute(String... args) { return null; // null is fine? } } ``` Is this a common problem? Are there best practices to model `Commands` with **and** without a return value? I've tried this adapter, but I think it is not optimal for several reasons: ``` public abstract class VoidCommand implements Command<Void> { @Override public Void execute(String... args) { execute2(args); return null; } public abstract void execute2(String... args); } ```
Here's a best-of-multiple-worlds implementation. ``` // Generic interface for when a client doesn't care // about the return value of a command. public interface Command { // The interfaces themselves take a String[] rather // than a String... argument, because otherwise the // implementation of AbstractCommand<T> would be // more complicated. public void execute(String[] arguments); } // Interface for clients that do need to use the // return value of a command. public interface ValuedCommand<T> extends Command { public T evaluate(String[] arguments); } // Optional, but useful if most of your commands are ValuedCommands. public abstract class AbstractCommand<T> implements ValuedCommand<T> { public void execute(String[] arguments) { evaluate(arguments); } } // Static utility class with helper methods. public class Commands { private Commands() {} // Non-instantiable. // These are useful if you like the vararg calling style. public static void execute(Command cmd, String... arguments) { cmd.execute(arguments); } public static <T> T execute(ValuedCommand<T> cmd, String... arguments) { return cmd.evaluate(arguments); } // Useful if you have code that requires a ValuedCommand<?> // but you only have a plain Command. public static ValuedCommand<?> asValuedCommand(Command cmd) { return new VoidCommand(cmd); } private static class VoidCommand extends AbstractCommand<Void> { private final Command cmd; public VoidCommand(Command actual) { cmd = actual; } public Void evaluate(String[] arguments) { cmd.execute(arguments); return null; } } } ``` With this implementation, clients can talk about a `Command` if they don't care about the return value, and a `ValuedCommand<T>` if they need a command that returns a certain value. About the only reason not to go with using `Void` straight up is all the unsightly `return null;` statements that you will be forced to insert.
I would stick to using `Void` explicitly. It's easy to see what is going on without another class involved. It'd be nice if you could override a `Void` return with `void` (and `Integer` with `int`, etc), but that is not a priority.
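For reference, a minimal, compilable illustration of the `Command<Void>` approach from the question (the demo command itself is made up):

```java
public class VoidCommandDemo {
    // The interface exactly as in the question.
    interface Command<T> {
        T execute(String... args);
    }

    // A side-effect-only command: Void has no instances, so the only
    // possible return value is null.
    static class JoinAndPrintCommand implements Command<Void> {
        @Override
        public Void execute(String... args) {
            System.out.println(String.join(" ", args));
            return null; // the unsightly-but-harmless part of the contract
        }
    }

    public static void main(String[] args) {
        Command<Void> cmd = new JoinAndPrintCommand();
        Void result = cmd.execute("side", "effects", "only");
        System.out.println(result); // prints "null"
    }
}
```

The type system still enforces that callers cannot do anything meaningful with the result, which is exactly the point of using `Void` over, say, `Object`.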
Void value as return parameter
[ "", "java", "command", "adapter", "" ]
> **Possible Duplicate:** > [When to use virtual destructors?](https://stackoverflow.com/questions/461203/when-to-use-virtual-destructors) When should your C++ object's destructor be `virtual`?
You need a virtual destructor when at least one of the class's methods is virtual. This is because the reason for a virtual method is that you want to use polymorphism: you call a method through a base-class pointer and expect the most derived implementation, which is the whole point of polymorphism. Now, if you did not have a virtual destructor and called the destructor through a pointer to the base class, you would end up calling the base class destructor. In this case you want polymorphism to work on your destructor as well: by calling the destructor through your base class pointer, you want to end up calling the destructor of your most derived class, not your base class. ``` class A { public: virtual void f() {} ~A() {} }; class B : public A { public: void f() {} ~B() {} }; A* thing = new B(); thing->f(); // calls B's f() delete thing; // calls ~A(), not what you wanted; you wanted ~B() ``` Making ~A() virtual turns on polymorphism: ``` virtual ~A() {} ``` so that when you now call ``` delete thing; ``` ~B() will be called (followed by ~A()). You would declare virtual destructors when you design a class as an interface, e.g. when you expect it to be extended or implemented. A good practice in that case is to have an interface class (in the sense of Java interfaces) with virtual methods and a virtual destructor, and then have concrete implementation classes. You can see that STL classes don't have virtual destructors, so they are not supposed to be extended (e.g. std::vector, std::string, ...). If you extend std::vector and you call the destructor on the base class via a pointer or reference, you will definitely not call your specialized class's destructor, which may lead to memory leaks.
From [Stroustrup's C++ Style and Technique FAQ](http://www.research.att.com/~bs/bs_faq2.html#virtual-dtor): > So when should I declare a destructor > virtual? Whenever the class has at > least one virtual function. Having > virtual functions indicate that a > class is meant to act as an interface > to derived classes, and when it is, an > object of a derived class may be > destroyed through a pointer to the > base. Lots of additional info on [when your destructor should be virtual on the C++ FAQ](http://www.parashift.com/c++-faq-lite/virtual-functions.html#faq-20.7). (thanks Stobor) What is a virtual member? From the [C++ FAQ](http://www.parashift.com/c++-faq-lite/virtual-functions.html#faq-20.1): > [20.1] What is a "virtual member function"? > > From an OO perspective, it is the > single most important feature of C++: > [6.9], [6.10]. > > A virtual function allows derived > classes to replace the implementation > provided by the base class. The > compiler makes sure the replacement is > always called whenever the object in > question is actually of the derived > class, even if the object is accessed > by a base pointer rather than a > derived pointer. This allows > algorithms in the base class to be > replaced in the derived class, even if > users don't know about the derived > class. > > The derived class can either fully > replace ("override") the base class > member function, or the derived class > can partially replace ("augment") the > base class member function. The latter > is accomplished by having the derived > class member function call the base > class member function, if desired.
When should your destructor be virtual?
[ "", "c++", "virtual-destructor", "" ]
What is "Fetching rows with a scrollable cursor" all about?
It creates a [cursor](http://en.wikipedia.org/wiki/Cursor_(databases)) for the query, which allows you to iterate over the result set without fetching the whole result at once. A [scrollable cursor](http://en.wikipedia.org/wiki/Cursor_(databases)#Scrollable_cursors), specifically, is one that allows iterating backwards. Example use: You can scroll forward until you find the record you need and iterate back to fetch the previous records, if you need them, too.
[Wikipedia](http://en.wikipedia.org/wiki/Cursor_%28databases%29#Scrollable_cursors) gives this: > With a non-scrollable cursor, also > known as forward-only, one can FETCH > each row at most once, and the cursor > automatically moves to the immediately > following row. A fetch operation after > the last row has been retrieved > positions the cursor after the last > row and returns SQLSTATE 02000 > (SQLCODE +100). And this: > A program may position a scrollable > cursor anywhere in the result set > using the FETCH SQL statement. You should read the article linked earlier, but this looks like some interesting information too: > Scrollable cursors can potentially > access the same row in the result set > multiple times. Thus, data > modifications (insert, update, delete > operations) from other transactions > could have an impact on the result > set. In PHP, you can use scrollable cursors with PDO by using prepared statements (see [`PDOStatement::fetch`](http://php.net/manual/en/pdostatement.fetch.php)): > To request a scrollable cursor for > your PDOStatement object, you must set > the PDO::ATTR\_CURSOR attribute to > PDO::CURSOR\_SCROLL when you prepare > the SQL statement with PDO::prepare(). (there is also an example further down that page) What seems interesting is the ability to "scroll" through the result set, without having to get all the data in memory to iterate over it.
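The forward-only versus scrollable distinction can be mimicked in plain Python, with a list standing in for the result set (the class and method names below are invented for illustration; real cursors use FETCH NEXT / FETCH PRIOR):

```python
class ScrollableCursor:
    """Toy model of a scrollable cursor over an in-memory result set."""

    def __init__(self, rows):
        self._rows = list(rows)
        self._pos = -1  # positioned before the first row

    def fetch_next(self):
        """Like FETCH NEXT: advance and return the row, or None past the end."""
        if self._pos + 1 >= len(self._rows):
            return None  # positioned after the last row
        self._pos += 1
        return self._rows[self._pos]

    def fetch_prior(self):
        """Like FETCH PRIOR: the operation a forward-only cursor cannot do."""
        if self._pos - 1 < 0:
            return None
        self._pos -= 1
        return self._rows[self._pos]


cur = ScrollableCursor(["row1", "row2", "row3"])
cur.fetch_next()           # "row1"
cur.fetch_next()           # "row2"
print(cur.fetch_prior())   # row1
```

A forward-only cursor would only expose `fetch_next`; the whole point of PDO::CURSOR_SCROLL is that the driver keeps enough state to support `fetch_prior`-style positioning as well.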
What is PDO scrollable cursor?
[ "", "php", "mysql", "pdo", "" ]
In order to set a div containing a transparent text image as the highest z-index in my document, I picked the number 10,000 and it solved my problem. Previously I had guessed with the number 3 but it had no effect. So, is there a more scientific way of figuring out what z-index is higher than that of all of your other elements? I tried looking for this metric in Firebug but couldn't find it.
You could call `findHighestZIndex` for a particular element type such as `<div>` like this: ``` findHighestZIndex('div'); ``` assuming a `findHighestZIndex` function that is defined like this: ``` function findHighestZIndex(elem) { var elems = document.getElementsByTagName(elem); // fallback for browsers without Number.MIN_SAFE_INTEGER var highest = Number.MIN_SAFE_INTEGER || -(Math.pow(2, 53) - 1); for (var i = 0; i < elems.length; i++) { var zindex = parseInt( document.defaultView.getComputedStyle(elems[i], null).getPropertyValue("z-index"), 10 ); // elements with z-index "auto" parse to NaN and are skipped if (!isNaN(zindex) && zindex > highest) { highest = zindex; } } return highest; } ```
Stealing some code from abcoder site for the sake of clarity: ``` var maxZ = Math.max.apply(null, $.map($('body *'), function(e,n) { if ($(e).css('position') != 'static') return parseInt($(e).css('z-index')) || 1; })); ```
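The core idea shared by both snippets (parse each element's computed z-index, ignore non-numeric values such as "auto", take the maximum) can be isolated into a DOM-free helper; the function name and the fallback value of 0 are choices made here, not part of either answer:

```javascript
// values: an array of computed z-index strings, e.g. ["auto", "3", "10000"]
function highestZIndex(values) {
  return values
    .map(function (v) { return parseInt(v, 10); })
    .filter(function (n) { return !isNaN(n); })   // drop "auto" and friends
    .reduce(function (max, n) { return Math.max(max, n); }, 0);
}

console.log(highestZIndex(["auto", "3", "10000"])); // 10000
```

In the browser you would feed it the computed styles, e.g. the values gathered by the loop or the jQuery `$.map` above; the parsing and max-finding logic stays the same.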
How can you figure out the highest z-index in your document?
[ "", "javascript", "html", "css", "z-index", "" ]
I have one table named `players`, and then other tables named `tries`, `conversions`, `penalties`, `dropgoals`. I need to get the `player_id` from `players`, then count the following: * number of tries for `player_id` in `tries` * number of conversions for `player_id` in `conversions` * number of penalties for `player_id` in `penalties` * number of dropgoals for `player_id` in `dropgoals` And this all needs to happen in one go, as there are about fifteen players for each game; the site relates to rugby. I have tried the following, which works: `SELECT players.player_id, players.number, CASE WHEN (COUNT(tries.player_id) = 0) THEN '&nbsp;' ELSE COUNT(tries.player_id) END AS nrTries FROM players LEFT JOIN tries ON players.player_id = tries.player_id WHERE players.team_id IS NULL GROUP BY players.player_id ORDER BY players.number` It selects all players from the `players` table and counts their tries, but as soon as I change it to the following it gives me an error: `SELECT players.player_id, players.number, player.name, CASE WHEN (COUNT(tries.player_id) = 0) THEN '&nbsp;' ELSE COUNT(tries.player_id) END AS nrTries FROM players, player LEFT JOIN tries ON players.player_id = tries.player_id WHERE players.player_id = player.player_id AND players.team_id IS NULL GROUP BY players.player_id ORDER BY players.number` **It gives the following error:** `Unknown column 'players.player_id' in 'on clause'` Can someone please help me with this? I have been struggling for days now. Thanks in advance. **// edit:** Hi all, I feel very stupid now. This code works brilliantly, except that when a player has scored in more than one type, say tries and conversions, or tries and penalties, it miscounts the number of each type, and they are all made the same. Let's say my player scored 1 penalty and 3 dropgoals: it outputs 3 penalties and 3 dropgoals. I can't figure out what's wrong.
Here is my query: ``` SELECT players.player_id, players.number, player.name, player.surname, CASE WHEN (COUNT(tries.player_id) = 0) THEN '& nbsp;' ELSE COUNT(tries.player_id) END AS nrTries, CASE WHEN (COUNT(conversions.player_id) = 0) THEN '& nbsp;' ELSE COUNT(conversions.player_id) END AS nrConversions, CASE WHEN (COUNT(dropgoals.player_id) = 0) THEN '& nbsp;' ELSE COUNT(dropgoals.player_id) END AS nrDropgoals, CASE WHEN (COUNT(penalties.player_id) = 0) THEN '& nbsp;' ELSE COUNT(penalties.player_id) END AS nrPenalties FROM players LEFT JOIN tries ON players.player_id = tries.player_id AND tries.game_id = '$game_id' LEFT JOIN conversions ON players.player_id = conversions.player_id AND conversions.game_id = '$game_id' LEFT JOIN dropgoals ON players.player_id = dropgoals.player_id AND dropgoals.game_id = '$game_id' LEFT JOIN penalties ON players.player_id = penalties.player_id AND penalties.game_id = '$game_id' LEFT JOIN player ON players.player_id = player.player_id WHERE players.player_id = player.player_id AND players.team_id IS NULL AND players.game_id = '$game_id' GROUP BY players.player_id ORDER BY players.number ``` ***Please note:*** $game\_id is a PHP variable. ***Also:*** I have included a space between & and nbsp; otherwise it does not get output correctly on SO. Can someone please point me in the right direction?
I'm a little confused why you stated Cletus had the correct solution and then proceeded to not use it while updating your original question. That said, Cletus' solution is slightly off: you need COUNT(), not SUM(). Try the following: ``` SELECT players.player_id, (SELECT COUNT(*) FROM tries WHERE player_id = players.player_id) tries, (SELECT COUNT(*) FROM penalties WHERE player_id = players.player_id) penalties FROM players; ``` This will return 0's instead of `&nbsp;`; I'd recommend handling that in your application code, though. You can add in the CASE mess if you **really** want to get the `&nbsp;` from MySQL.
You haven't been specific enough about the data but, assuming there is one record in each associated table per player and you're happy to show NULL if it's not there, then: ``` SELECT player_id, tries, conversions, penalties, dropgoals FROM players p LEFT JOIN tries t ON t.player_id = p.player_id LEFT JOIN conversions c ON c.player_id = p.player_id LEFT JOIN penalties e ON e.player_id = p.player_id LEFT JOIN dropgoals d ON d.player_id = p.player_id ``` This can be restated as: ``` SELECT player_id, (SELECT tries FROM tries WHERE player_id = p.player_id) tries, (SELECT conversions FROM conversions WHERE player_id = p.player_id) conversions, (SELECT penalties FROM penalties WHERE player_id = p.player_id) penalties, (SELECT dropgoals FROM dropgoals WHERE player_id = p.player_id) dropgoals FROM players p ``` Performance may or may not vary depending on your database engine. If you need to sum this then change it to: ``` SELECT player_id, (SELECT SUM(tries) FROM tries WHERE player_id = p.player_id) tries, (SELECT SUM(conversions) FROM conversions WHERE player_id = p.player_id) conversions, (SELECT SUM(penalties) FROM penalties WHERE player_id = p.player_id) penalties, (SELECT SUM(dropgoals) FROM dropgoals WHERE player_id = p.player_id) dropgoals FROM players p ``` Any of the above can use IFNULL() or similar functions to return 0 instead of NULL, if desired.
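The miscount described in the question's edit is classic join fan-out: joining several one-to-many tables multiplies rows before COUNT runs. A toy reproduction (hypothetical minimal schema; SQLite via Python is used here purely for demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (player_id INTEGER);
    CREATE TABLE penalties (player_id INTEGER);
    CREATE TABLE dropgoals (player_id INTEGER);
    INSERT INTO players VALUES (1);
    INSERT INTO penalties VALUES (1);                  -- 1 penalty
    INSERT INTO dropgoals VALUES (1), (1), (1);        -- 3 dropgoals
""")

# Multiple LEFT JOINs: 1 penalty row x 3 dropgoal rows = 3 joined rows,
# so both counts come out as 3.
fanned = conn.execute("""
    SELECT COUNT(pe.player_id), COUNT(d.player_id)
    FROM players p
    LEFT JOIN penalties pe ON pe.player_id = p.player_id
    LEFT JOIN dropgoals d  ON d.player_id  = p.player_id
    GROUP BY p.player_id
""").fetchone()

# Correlated subqueries count each table independently, avoiding the fan-out.
correct = conn.execute("""
    SELECT (SELECT COUNT(*) FROM penalties WHERE player_id = p.player_id),
           (SELECT COUNT(*) FROM dropgoals WHERE player_id = p.player_id)
    FROM players p
""").fetchone()

print(fanned)   # (3, 3)  -- the miscount from the question's edit
print(correct)  # (1, 3)
```

This is why the correlated-subquery form is the safe one when several independent one-to-many tables hang off the same parent row.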
MySQL - Need help counting id's that correspond to certain in other tables
[ "", "php", "optimization", "mysql", "query-optimization", "" ]
Trying to use the RHEL5.3 GCC 4.3.2 compiler to build my software on that platform. I get the following error no matter what I try when compiling with -O2, but it builds fine without optimization. Any ideas? ``` /usr/bin/ld: myapp: hidden symbol `void std::__ostream_fill<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, long)' isn't defined /usr/bin/ld: final link failed: Nonrepresentable section on output ``` In RHEL5.3, I am using `/usr/bin/g++43` for compilation and linking. The correct `libstdc++.so` is found here: `/usr/lib/gcc/i386-redhat-linux6E/4.3.2/libstdc++.so` which is a text file containing `INPUT ( -lstdc++_nonshared /usr/lib/libstdc++.so.6 )`. Wouldn't that mismatch the system `stdlibc++ 4.1` version?
It turns out to be a GCC bug in RHEL 5.3 :-/. <https://bugzilla.redhat.com/show_bug.cgi?id=493929>. I sent an email to the maintainer, Jakub Jelinek, who said that RHEL 5.4 (which is due out soon) will have a fix and also will bump to GCC 4.4. A workaround is to use -fno-inline, but this has some obvious drawbacks.
You seem to be running into [visibility](http://gcc.gnu.org/wiki/Visibility) issues -- can we see your full command line? For example, what [`-fvisibility-inlines-hidden`](http://gcc.gnu.org/onlinedocs/gcc-4.4.0/gcc/C_002b_002b-Dialect-Options.html#index-fvisibility_002dinlines_002dhidden-153) does may change at different optimization levels because GCC decides to inline different things.
Error linking with GCC 4.3.2 on RHEL 5.3 and libstdc++.so. Any GCC gurus?
[ "", "c++", "linux", "gcc", "linker", "g++", "" ]
You may know the Windows compliance tool that helps people know whether their code is supported by any given version of the MS OS. I am looking for something similar for Python. I am writing a lib with Python 2.6 and I realized that it was not compatible with Python 2.5, due to the use of the `with` keyword. I would like to know if there is a simple and automatic way to avoid this situation in the future. I am also interested in something similar to find out which OSes are supported. Thanks for your help.
In response to [a previous question about this](https://stackoverflow.com/questions/804538/tool-to-determine-what-lowest-version-of-python-required), I wrote [pyqver](http://github.com/ghewgill/pyqver/tree/master). If you have any improvements, please feel free to fork and contribute!
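A toy sketch of the kind of check such a tool automates, walking the AST for syntax that pins a minimum version (this is not pyqver's actual implementation, just the idea):

```python
import ast

def uses_with_statement(source):
    """Return True if `source` contains a `with` block, syntax that
    requires Python 2.6+ (or 2.5 with `from __future__ import with_statement`),
    which is exactly the incompatibility described in the question."""
    tree = ast.parse(source)
    return any(isinstance(node, ast.With) for node in ast.walk(tree))

print(uses_with_statement("with open('f') as f:\n    pass"))  # True
print(uses_with_statement("f = open('f')\nf.close()"))        # False
```

A real checker maintains a table mapping many such syntax features (and stdlib module names) to the version that introduced them, then reports the maximum.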
I recommend you rather use automated tests than a code analysis tool. Be aware that there are subtle behaviour changes in the Python standard library that your code may or may not depend upon. For example `httplib`: When uploading files, it is normal to give the data as a `str`. In Python 2.6 you *can* give stream objects instead (useful for >1GB files) if you nudge them correctly, but in Python 2.5 you will get an error. A comprehensive set of unit tests and integration tests will be much more reliable because they test that your program actually *works* on Python version X.Y. ``` $ python2.6 tests/run_all.py ................................. 33 tests passed [OK] ``` You're Python 2.6 compatible. ``` $ python2.4 tests/run_all.py ...........EEE.........EEE....... 27 tests passed, 6 errors [FAIL] ``` You're *not* Python 2.4 compatible.
Is there a way to know which versions of python are supported by my code?
[ "", "python", "dependencies", "versioning", "" ]
I know the syntax: ``` ALTER TABLE [TheTable] DROP CONSTRAINT [TheDefaultConstraint] ``` but how to I drop the default constraint when I don't know its name? (That is, it was autogenerated at `CREATE TABLE` time.)
If you want to do this manually, you can use Management Studio to find it (under the Constraints node inside the table). To do it using SQL: * If the constraints are default constraints, you can use `sys.default_constraints` to find it: ``` SELECT OBJECT_NAME(parent_object_id) AS TableName, name AS ConstraintName FROM sys.default_constraints ORDER BY TableName, ConstraintName ``` * If you are looking for other constraints as well (check, unique, foreign key, default, primary key), you can use `sysconstraints`: ``` SELECT OBJECT_NAME(id) AS TableName, OBJECT_NAME(constid) AS ConstraintName FROM sysconstraints ORDER BY TableName, ConstraintName ``` You do not say which version of SQL Server you are using. The above work on both SQL 2005 and SQL 2008.
You can use this code to do it automatically: ``` DECLARE @tableName VARCHAR(MAX) = '<MYTABLENAME>' DECLARE @columnName VARCHAR(MAX) = '<MYCOLUMNNAME>' DECLARE @ConstraintName nvarchar(200) SELECT @ConstraintName = Name FROM SYS.DEFAULT_CONSTRAINTS WHERE PARENT_OBJECT_ID = OBJECT_ID(@tableName) AND PARENT_COLUMN_ID = ( SELECT column_id FROM sys.columns WHERE NAME = @columnName AND object_id = OBJECT_ID(@tableName)) IF @ConstraintName IS NOT NULL EXEC('ALTER TABLE '+@tableName+' DROP CONSTRAINT ' + @ConstraintName) ``` Just replace `<MYTABLENAME>` and `<MYCOLUMNNAME>` as appropriate.
How do you drop a default value or similar constraint in T-SQL?
[ "", "sql", "t-sql", "constraints", "default", "" ]
On [MSDN Magazine](http://msdn.microsoft.com/en-us/magazine/dd419663.aspx) there is a good article about MVVM, and in it they bind the validation error in the XAML with `Validation.ErrorTemplate="{x:Null}"`. I don't get why, and how they can display the error from IDataErrorInfo. Can anyone enlighten me on how to get the error message displayed on the screen with the MVVM approach?
I was looking at the same sample just a few minutes ago. Your guess is right. In this code sample they removed the default ErrorTemplate from the TextBox control so it wouldn't show the red rectangle. Instead of using the ErrorTemplate they create a ContentPresenter with content bound to the validation error of the specific text box.
When you bind to an object that supports IDataErrorInfo, there are several features of the WPF Binding class to consider: 1. ValidatesOnDataErrors must be True. This instructs WPF to look for and use the IDataErrorInfo interface on the underlying object. 2. The attached property Validation.HasError will be set to true on the target object if the source object's IDataErrorInfo interface reported a validation problem. You can then use this property with a trigger to change the tooltip of the control to display the validation error message (I'm doing this in my current project and the end users love it). 3. The Validation.Errors attached property will contain an enumeration of any ValidationResult errors resulting from the last validation attempt. If you're going with the tooltip approach, use an IValueConverter to retrieve only the first item; otherwise you run into binding errors when displaying the error message itself. 4. The Binding class exposes NotifyOnValidationError, which, when True, will cause routed events to bubble up from the bound control every time a validation rule's state changes. This is useful if you want to implement an event handler in the container of the bound controls and then add and remove the validation messages to/from a listbox. There are samples on MSDN for doing both styles of feedback (the tooltips as well as the listbox), but I'll paste below the code I rolled to implement the tooltip feedback on my DataGridCells and TextBoxes.
The DataGridCell style: ``` <Style TargetType="{x:Type dg:DataGridCell}" x:Key="DataGridCellStyle"> <Setter Property="ToolTip" Value="{Binding Path=Column.(ToolTipService.ToolTip),RelativeSource={RelativeSource Self}}" /> <Style.Triggers> <Trigger Property="Validation.HasError" Value="True"> <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self},Path=(Validation.Errors), Converter={StaticResource ErrorContentConverter}}" /> </Trigger> </Style.Triggers> </Style> ``` The TextBox style: ``` <Style x:Key="ValidatableTextBoxStyle" TargetType="TextBox"> <!--When the control is not in error, set the tooltip to match the AutomationProperties.HelpText attached property--> <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Mode=Self},Path=(AutomationProperties.HelpText)}" /> <Style.Triggers> <Trigger Property="Validation.HasError" Value="true"> <Setter Property="ToolTip" Value="{Binding RelativeSource={x:Static RelativeSource.Self},Path=(Validation.Errors)[0].ErrorContent}" /> </Trigger> </Style.Triggers> </Style> ``` The ErrorContentConverter (for retrieving the first validation error message for the tooltip): ``` Imports System.Collections.ObjectModel Namespace Converters <ValueConversion(GetType(ReadOnlyObservableCollection(Of ValidationError)), GetType(String))> _ Public Class ErrorContentConverter Implements IValueConverter Public Function Convert(ByVal value As Object, ByVal targetType As System.Type, ByVal parameter As Object, ByVal culture As System.Globalization.CultureInfo) As Object Implements System.Windows.Data.IValueConverter.Convert Dim errors As ReadOnlyObservableCollection(Of ValidationError) = TryCast(value, ReadOnlyObservableCollection(Of ValidationError)) If errors IsNot Nothing Then If errors.Count > 0 Then Return errors(0).ErrorContent End If End If Return String.Empty End Function Public Function ConvertBack(ByVal value As Object, ByVal targetType As System.Type, ByVal parameter As Object, ByVal culture As System.Globalization.CultureInfo) As Object Implements System.Windows.Data.IValueConverter.ConvertBack Throw New NotImplementedException() End Function End Class End Namespace ``` ...and finally an example of using the style in a textbox: ``` <TextBox Text="{Binding Path=EstimatedUnits,ValidatesOnDataErrors=True,NotifyOnValidationError=True}" Style="{StaticResource ValidatableTextBoxStyle}" AutomationProperties.HelpText="The number of units which are likely to sell in 1 year." /> ```
MVVM pattern, IDataErrorInfo and Binding to display error?
[ "", "c#", ".net", "wpf", "mvvm", "binding", "" ]
I have a query like:

```
SELECT EXTRACT(WEEK FROM j.updated_at) as "week", count(j.id)
FROM jobs j
WHERE EXTRACT(YEAR FROM j.updated_at)=2009
GROUP BY EXTRACT(WEEK FROM j.updated_at)
ORDER BY week
```

Which works fine, but I only want to show the last, say, 12 weeks. LIMIT 12 works, but only gives me the first twelve, and I need the order to be in sequential week order (i.e. not reversed) for charting purposes... Is there an equivalent statement in PostgreSQL such as "show BOTTOM x items"?
```
SELECT *
FROM (
    SELECT EXTRACT(WEEK FROM j.updated_at) as "week", count(j.id)
    FROM jobs j
    WHERE EXTRACT(YEAR FROM j.updated_at)=2009
    GROUP BY EXTRACT(WEEK FROM j.updated_at)
    ORDER BY week DESC
    LIMIT 12
) q
ORDER BY week ASC
```

Note that if you have an index on `updated_at`, you may rewrite this query a little so that it's more efficient:

```
SELECT week,
       (
       SELECT COUNT(j.id)
       FROM jobs j
       WHERE j.updated_at >= start_week
         AND j.updated_at < end_week
       )
FROM (
    SELECT week,
           '04.01.2009'::DATE - EXTRACT(DOW FROM '04.01.2009'::DATE)::INTEGER + week * 7 AS start_week,
           '04.01.2009'::DATE - EXTRACT(DOW FROM '04.01.2009'::DATE)::INTEGER + week * 7 + 7 AS end_week
    FROM generate_series(42, 53) week
) q
```
If the value for *week* increases, you could *ORDER BY WEEK DESC* and then take the *TOP 12*, right? UPDATE: You would then have to re-order the 12 resulting rows to have them in chronological order, using *ORDER BY Week ASC*. (I glossed over the request to have them in ascending order afterwards; Thank you, etlerant!)
SQL How to Limit BOTTOM x rows (PostgreSQL)
[ "", "sql", "postgresql", "" ]
I'm learning C++ and I want to know how I can print the IP address of the host machine. Remember that my program is a command-line application (*cmd*). I don't want the code, but some links where I can learn this, not copy and paste. Thanks!
Check this out: [Socket Programming](http://windows-programming.suite101.com/article.cfm/socket_programming_gethostbyname). Winsock looks like a good choice.
Google = [How to find local ip addresses c++](http://www.codeguru.com/forum/showthread.php?t=233261)
How I Can Print The IP Of The Host
[ "", "c++", "printing", "localhost", "ip", "" ]
How do I install Tomcat as a daemon on OS X? It should run even when a user is not logged on. I could probably survive installing Tomcat for a single user and having it run at log in, for now. But long term I need it installed and running as a service on boot.
You could write your own launchd LaunchDaemon plist for it. launchd can be used to set up user- or system-based startup.
See [Updated for 2009](http://wiki.apache.org/tomcat/TomcatOnMacOS) at the bottom of the page for details on what is needed
How to install Tomcat as a daemon on OS X?
[ "", "java", "macos", "tomcat", "daemon", "" ]
I'm having an issue with deserializing an XML file with boolean values. The source XML files I'm deserializing were created from a VB6 app, where all boolean values are capitalized (`True`, `False`). When I try to deserialize the XML, I'm getting a ``` System.FormatException: The string 'False' is not a valid Boolean value. ``` Is there a way to say ignore case with an attribute?
You could read that value as a string into a string field, then have a readonly bool field that had an if statement in it to return bool true or false. For example (using C#):

```
public bool str2bool(string str)
{
    if (str.Trim().ToUpper() == "TRUE")
        return true;
    else
        return false;
}
```

And you can use it in the template:

```
<xsl:if test="user:str2bool($mystr)">
```
Based on [another stack overflow question](https://stackoverflow.com/a/85468/573377) you can do:

```
public class MySerilizedObject
{
    [XmlIgnore]
    public bool BadBoolField { get; set; }

    [XmlElement("BadBoolField")]
    public string BadBoolFieldSerialize
    {
        get { return this.BadBoolField ? "True" : "False"; }
        set
        {
            if (value.Equals("True"))
                this.BadBoolField = true;
            else if (value.Equals("False"))
                this.BadBoolField = false;
            else
                this.BadBoolField = XmlConvert.ToBoolean(value);
        }
    }
}
```
Xml Serialization vs. "True" and "False"
[ "", "c#", "xml", "xml-serialization", "boolean", "" ]
I've just started using Python and I was thinking about which notation I should use. I've read the [PEP 8](http://www.python.org/dev/peps/pep-0008/) guide about notation for Python and I agree with most stuff there except function names (which I prefer in mixedCase style). In C++ I use a modified version of the Hungarian notation where I don't include information about type but only about the scope of a variable (for instance lVariable for a local variable and mVariable for a member variable of a class, g for global, s for static, in for a function's input and out for a function's output.) I don't know if this notation style has a name but I was wondering whether it's a good idea not to use such a notation style in Python. I am not extremely familiar with Python so you guys/gals might see issues that I can't imagine yet. I'm also interested to see what you think of it in general :) Some people might say it makes the code less readable, but I've become used to it and code written without these labels is the code that is less readable for me.
> (**Almost every Python programmer will say** it makes the code less readable, but I've become used to it and code written without these labels is the code that is less readable for me) FTFY. Seriously though, it will help you but confuse and annoy other Python programmers that try to read your code. This also isn't as necessary because of how Python itself works. For example you would *never* need your "mVariable" form because it's obvious in Python:

```
class Example(object):
    def __init__(self):
        self.my_member_var = "Hello"

    def sample(self):
        print self.my_member_var

e = Example()
e.sample()
print e.my_member_var
```

No matter how you access a member variable (using `self.foo` or `myinstance.foo`) it's always clear that it's a member. The other cases might not be so painfully obvious, but if your code isn't simple enough that a reader can keep in mind "the 'names' variable is a parameter" while reading a function, you're probably doing something wrong.
Use PEP-8. It is almost universal in the Python world.
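To make that concrete, here is a small sketch (all names invented purely for illustration) of what PEP 8 naming looks like in place of scope-prefix notation:

```python
# PEP 8 naming on made-up names -- no scope prefixes needed.

GLOBAL_TIMEOUT = 30              # module-level constant: UPPER_CASE


class OrderBook(object):         # class name: CapWords
    def __init__(self):
        self.entries = []        # "self." already marks a member variable


def compute_total(prices):       # function name: lowercase_with_underscores
    running_total = 0            # local variable: plain name, no "l" prefix
    for price in prices:
        running_total += price
    return running_total


print(compute_total([1, 2, 3]))  # -> 6
```

The scope information the Hungarian-style prefixes carry is mostly conveyed by position in Python: module level, `self.`, or function body.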
Python Notation?
[ "", "python", "naming-conventions", "notation", "" ]
I know this is a little broad, but here's the situation: I am using JSP and Java. I have a file located on my server. I would like to add a link to the screen that, when clicked, would open the file for the user to view. The file can either appear in a window in the web browser, or pop up the program needed to open the file (similar to when you are outputting with iText to the screen, where Adobe opens to display the file). I know my output stream already, but how can I write the file to the output stream? Most of what I have read has only dealt with text files, but I might be dealing with image files, etc., as well. Any help is appreciated! Thanks!
Here's a simple implementation of how to achieve it:

```
private static final int BUFFER_LENGTH = 4096; // any reasonable buffer size

protected void doPost(final HttpServletRequest request, final HttpServletResponse response)
        throws ServletException, IOException {
    // extract filename from request
    // TODO use a whitelist to avoid path traversal
    File file = new File(getFileName(request));
    InputStream input = new FileInputStream(file);
    response.setContentLength((int) file.length());
    // TODO map your file to the appropriate MIME type
    response.setContentType(getMimeType(file));
    OutputStream output = response.getOutputStream();
    byte[] bytes = new byte[BUFFER_LENGTH];
    int read = 0;
    while (read != -1) {
        read = input.read(bytes, 0, BUFFER_LENGTH);
        if (read != -1) {
            output.write(bytes, 0, read);
            output.flush();
        }
    }
    input.close();
    output.close();
}
```
You need to add certain fields to the response. For a text/csv, you'd do:

```
response.setContentType("text/csv"); // set MIME type
response.setHeader("Content-Disposition", "attachment; filename=\"" + strExportFileName + "\"");
```

Here's a forum on [sun](http://forums.sun.com/thread.jspa?threadID=751569) about it.
Java output a file to the screen
[ "", "java", "file-io", "" ]
```
template<class T>
class Set {
public:
    void insert(const T& item);
    void remove(const T& item);
private:
    std::list<T> rep;
};

template<typename T>
void Set<T>::remove(const T& item)
{
    typename std::list<T>::iterator it =   // question here
        std::find(rep.begin(), rep.end(), item);
    if (it != rep.end())
        rep.erase(it);
}
```

Why is the `typename` in `remove()` needed?
In general, C++ needs `typename` because of the unfortunate syntax [\*] it inherits from C, that makes it impossible without non-local information to say -- for example -- in `A * B;` whether `A` names a type (in which case this is a declaration of `B` as a pointer to it) or not (in which case this is a multiplication expression -- quite possible since `A`, for all you can tell without non-local information, could be an instance of a class that overloads `operator*` to do something weird;-). In most cases the compiler does have the non-local information needed to disambiguate (though the unfortunate syntax still means the low-level parser needs feedback from the higher-level layer that keeps the symbol table info)... but with templates it doesn't (not in general, though in this specific case it might be technically illegal to specialize a `std::list<T>` so that its `::iterator` is NOT a type name;-). [\*] not just my opinion, but also the opinion of Ken Thompson and Rob Pike, currently my colleagues, who are busy designing and implementing a new programming language for internal use: that new programming language, while its syntax is mostly C-like, does NOT repeat C's syntax design errors -- in the new language (as e.g. in good old Pascal), the syntax is sufficient to distinguish identifiers that must name a type from ones that must not;-).
If you are talking of `typename` used with `std::list<T>::iterator`: The typename is used to clarify that `iterator` is a type defined within class `std::list<T>`. Without typename, `std::list<T>::iterator` will be considered a static member. `typename` is used whenever a name that depends on a template parameter is a type.
Why do we need typename here?
[ "", "c++", "" ]
I am trying to create my hello world windows app in WPF. What should I do to run this window?

## Class1.xaml

```
<Window x:Class="Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="300" Width="300">
    <Grid>
    </Grid>
</Window>
```

## App.xaml

```
<Application x:Class="App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    StartupUri="Class1.xaml">
    <Application.Resources>
    </Application.Resources>
</Application>
```

## Program.cs

```
class Program
{
    [STAThread]
    public static void Main()
    {
        new App().Run();
    }
}
```

I have created a blank sln and added these three files. I have also added WindowsBase, PresentationBase, PresentationFramework refs. But the App is not running. What is the problem?
If you're going to have a `Main` method, your App.xaml file will be ignored. App.xaml is only used if you set its Build Action to be `ApplicationDefinition`. When you do this, you'll notice you get a compiler error, as you'll have two entry points into your program -- so you'll have to lose the `Main` method. If you want to keep your `Main` method, you can. Don't change the Build Action on App.xaml (actually, I think you can delete it), and do something like this:

```
[STAThread]
public static void Main()
{
    App app = new App();
    app.StartupUri = new System.Uri("/Project1;component/Class1.xaml", System.UriKind.Relative);
    app.Run();
}
```

Replace the `/Project1` with your namespace.
Creating a new WPF Application would be the way to go, but I may know one way of fixing it. Your project file probably currently has a section that looks like this:

```
<Page Include="App.xaml">
    <Generator>MSBuild:Compile</Generator>
    <SubType>Designer</SubType>
</Page>
```

You'll need to change it to use `ApplicationDefinition` instead of `Page`, similar to this:

```
<ApplicationDefinition Include="App.xaml">
    <Generator>MSBuild:Compile</Generator>
    <SubType>Designer</SubType>
</ApplicationDefinition>
```

This way your project should know what to actually start when you run your application. The solution I pulled this from was a VS2010 solution, but I think it's the same in VS2008.
Hello World [WPF]
[ "", "c#", ".net", "wpf", "windows", "" ]
Just wondering if it is possible to declare a function of an enumerated type in C++. For example:

```
class myclass {
    //....
    enum myenum { a, b, c, d };
    myenum function();
    //....
};

myenum function()
{
    //....
}
```
yes, it is very common to return an enum type. You will want to put your enum outside of the class though, since the function wants to use it. Or scope the function's enum return type with the class name (the enum must be in a public part of the class definition).

```
class myclass {
public:
    enum myenum { a, b, c, d };
    //....
    myenum function();
    //....
};

myclass::myenum myclass::function()
{
    //....
}
```
Just make sure the enum is in the `public` section of your class:

```
class myclass {
public:
    enum myenum { POSITIVE, ZERO, NEGATIVE };

    myenum function(int n)
    {
        if (n > 0)
            return POSITIVE;
        else if (n == 0)
            return ZERO;
        else
            return NEGATIVE;
    }
};

bool test(int n)
{
    myclass C;
    if (C.function(n) == myclass::POSITIVE)
        return true;
    else
        return n == -5;
}
```
Possible to declare function of enumerated type?
[ "", "c++", "" ]
My current task is to optimise a **[Monte Carlo Simulation](http://en.wikipedia.org/wiki/Monte_Carlo_method)** that calculates Capital Adequacy figures by region for a set of Obligors. It is running about 10 x too slow for where it will need to be in production and the number of daily runs required. Additionally the granularity of the result figures will need to be improved down to desk, possibly book, level at some stage; the code I've been given is basically a prototype which is used by business units in a semi-production capacity. The application is **currently single threaded** so I'll need to make it **multi-threaded**, may look at `System.Threading.ThreadPool` or the **Microsoft [Parallel Extensions](http://en.wikipedia.org/wiki/Parallel_Extensions)** library but I'm constrained to .NET 2 on the server at this bank so I may have to consider this guy's port, <http://www.codeproject.com/KB/cs/aforge_parallel.aspx>. I am trying my best to get them to upgrade to .NET 3.5 SP1 but it's a major exercise in an organisation of this size and might not be possible in my contract time frames. **I've profiled the application** using the trial of **[dotTrace](http://en.wikipedia.org/wiki/DotTrace)** (<http://www.jetbrains.com/profiler>). What other good profilers exist? Free ones? A lot of the execution time is spent **generating uniform random numbers** and then translating this to a normally distributed random number. They are using a C# **[Mersenne twister](http://en.wikipedia.org/wiki/Mersenne_twister)** implementation. I am not sure where they got it or if it's the best way to go about this (or the best implementation) to generate the uniform random numbers. Then this is **translated to a normally distributed** version for use in the calculation (I haven't delved into the translation code yet). Also what is the experience using the following?

* <http://quantlib.org>
* <http://www.qlnet.org> (C# port of quantlib) or
* <http://www.boost.org>

Any alternatives you know of?
I'm a C# developer so would prefer C#, but a **wrapper to C++** shouldn't be a problem, should it? Maybe even faster leveraging the C++ implementations. I am thinking some of these libraries will have the fastest method to directly generate normally distributed random numbers, without the translation step. Also they may have some other functions that will be helpful in the subsequent calculations. Also the computer this is on is a **quad core Opteron 275, 8 GB memory but Windows Server 2003** Enterprise **32 bit**. Should I advise them to upgrade to a **64 bit OS**? Any links to articles supporting this decision would really be appreciated. Anyway, any advice and help you may have is really appreciated.
I have found the Mersenne Twister to be quick. The problem may be in the algorithm (Box-Muller) to transform the uniform distribution to a Gaussian distribution. The standard algorithm looks like:

```
y1 = sqrt( - 2 ln(x1) ) cos( 2 pi x2 )
y2 = sqrt( - 2 ln(x1) ) sin( 2 pi x2 )
```

Where x1 and x2 are uniform random numbers and y1 and y2 are the Gaussian distribution outputs. The square roots are slow, but the trig is worse, and it is unstable close to 0. [Taygeta's page](http://www.taygeta.com/random/gaussian.html) on the subject gives a faster one (in pseudocode):

```
float x1, x2, w, y1, y2;

do {
    x1 = 2.0 * ranf() - 1.0;
    x2 = 2.0 * ranf() - 1.0;
    w = x1 * x1 + x2 * x2;
} while ( w >= 1.0 );

w = sqrt( (-2.0 * ln( w ) ) / w );
y1 = x1 * w;
y2 = x2 * w;
```

If they're not using something like this, you may be able to speed things up quite a bit by avoiding the trig functions or even pre-generating the random numbers.
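For readers who want to see the faster polar form end to end, here is a short Python sketch (Python is used only for illustration; the structure ports directly to C#):

```python
import math
import random

def gaussian_pair(ranf=random.random):
    """Marsaglia polar method: two uniform randoms in, two independent
    standard-normal deviates out, with no trig calls."""
    while True:
        x1 = 2.0 * ranf() - 1.0
        x2 = 2.0 * ranf() - 1.0
        w = x1 * x1 + x2 * x2
        if 0.0 < w < 1.0:   # reject points outside the unit circle (and w == 0)
            break
    w = math.sqrt((-2.0 * math.log(w)) / w)
    return x1 * w, x2 * w

# Quick sanity check: sample mean should be near 0, sample variance near 1.
random.seed(42)
samples = [v for _ in range(20000) for v in gaussian_pair()]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The rejection loop accepts a candidate pair with probability pi/4, so on average it takes about 1.27 attempts (roughly 2.55 uniforms) per pair of normal deviates, which is usually still cheaper than paying for `cos`/`sin` on every sample.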
Have you considered pointing [a profiler at your code](http://www.red-gate.com/)? I've seen cases where simple fixes get very significant improvements, like switching a couple of properties over to fields.
C# Monte Carlo Incremental Risk Calculation optimisation, random numbers, parallel execution
[ "", "c#", "multithreading", "parallel-processing", "random", "montecarlo", "" ]
Why is Java's `Class<T>` generic?
So that generically typed methods can be used -

```
Class<Foo> klass = Foo.class;
Foo f = klass.newInstance();
Foo f2 = klass.cast(obj);   // obj: some Object reference
```
Here is a reasonably good summary of the advantages: <http://download.oracle.com/javase/tutorial/extra/generics/literals.html>
Why is Java's Class<T> generic?
[ "", "java", "generics", "" ]
I was using std::merge to put two sorted collections into one. But my object has a natural key, and a defined addition semantic, so what I am after is a merge\_and\_sum that would not just merge the two collections into a single N+M length collection, but, if the operator== on the object returned true, would operator+ them. I have implemented it thus:

```
template<class _InIt1, class _InIt2, class _OutIt>
_OutIt merge_and_sum(_InIt1 _First1, _InIt1 _Last1,
                     _InIt2 _First2, _InIt2 _Last2,
                     _OutIt _Dest)
{   // copy merging ranges, both using operator<
    for (; _First1 != _Last1 && _First2 != _Last2; ++_Dest)
    {
        if (*_First2 < *_First1)
            *_Dest = *_First2, ++_First2;
        else if (*_First2 == *_First1)
            *_Dest = *_First2 + *_First1, ++_First1, ++_First2;
        else
            *_Dest = *_First1, ++_First1;
    }
    _Dest = copy(_First1, _Last1, _Dest);   // copy any tail
    return (copy(_First2, _Last2, _Dest));
}
```

But I was wondering if I have reinvented something that is composable from the other algorithms.
It sounds like your collections are like multisets with duplicates collapsed by your + operator (maybe just summing the multiplicities instead of keeping redundant copies). I assume so, because you're not changing the sorting order when you +, so + isn't affecting your key. You should use your implementation. There's nothing in STL that will do it as efficiently. The closest semantic I can think of is standard merge followed by unique\_copy. You could *almost* get unique\_copy to work with a side-effectful comparison operator, but that would be extremely ill advised, as the implementation doesn't promise to only compare things directly vs. via a value-copied temporary (or even a given number of times). Your type and variable names are unpleasantly long ;)
You could use `std::merge` with an output iterator of your own creation, which does the following in `operator=`. I think this ends up making more calls to `operator==` than your version, though, so unless it works out as less code it's probably not worth it. ``` if ((mylist.size() > 0) && (newvalue == mylist.back())) { mylist.back() += newvalue; } else { mylist.push_back(newvalue); } ``` (Actually, writing a proper output iterator might be more fiddly than that, I can't remember. But I hope you get the general idea). `mylist` is a reference to the collection you're merging into. If the target doesn't have `back()`, then you'll have to buffer one value in the output iterator, and only write it once you see a non-equal value. Then define a `flush` function on the output iterator to write the last value, and call it at the end. I'm pretty sure that in this case it is too much mess to beat what you've already done.
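As a side note on composability, the same merge-and-sum semantic can be built from two stock library algorithms — a merge followed by a grouping pass that sums runs of equal keys. A Python sketch of the idea (illustration only, not the C++ in question, with made-up key/value accessors):

```python
import heapq
import itertools

def merge_and_sum(a, b, key=lambda p: p[0], value=lambda p: p[1]):
    """Merge two sorted sequences of (key, value) pairs, summing the
    values of entries whose keys compare equal."""
    merged = heapq.merge(a, b, key=key)           # the std::merge step
    return [(k, sum(value(p) for p in group))     # the "unique_copy + operator+" step
            for k, group in itertools.groupby(merged, key=key)]

a = [('apple', 1), ('cherry', 5)]
b = [('apple', 2), ('banana', 3)]
result = merge_and_sum(a, b)   # [('apple', 3), ('banana', 3), ('cherry', 5)]
```

Whether such a composition beats the hand-rolled loop is doubtful — the grouping pass re-compares adjacent elements that the merge already compared — which matches the answers' verdict that the custom implementation is fine.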
STL algorithm for merge with addition
[ "", "c++", "stl", "" ]
A webservice I'm working with sends back a result set that equates to around 66980 lines of XML; .NET returns this as a list object. As the user journey requires that we can reload this set if they step back a page, what's the fastest/best way of storing this result set per-user without slowing everything down? Ta -- *many solutions:* <http://msdn.microsoft.com/en-us/magazine/cc300437.aspx>
I would use memcache as a general way of caching queries. The best part is that it works across nodes (in case you have more web servers).
Save it to HttpContext.Current.Cache, keyed on the user id, possibly something like "MyXml\_UserId".
Fastest way to store a Large Set of Data (C# ASP.net)
[ "", "c#", "asp.net-mvc", ".net-3.5", "caching", "" ]
I found a custom field model ([JSONField](http://www.djangosnippets.org/snippets/377/)) that I would like to integrate into my Django project. * Where do I actually put the JSONField.py file? -- Would it reside in my Django project or would I put it in something like: /django/db/models/fields/ * Since I assume it can be done multiple ways, would it then impact how JSONField (or any custom field for that matter) would get imported into my models.py file as well?
For the first question, I would rather not put it into the django directory, because in case of upgrades you may end up losing all of your changes. It is a general point: modifying an external piece of code will lead to increased maintenance costs. Therefore, I would suggest putting it into some place accessible from your pythonpath - it could be a module in your project, or directly inside the site-packages directory. As for the second question, just "installing" it will not impact your existing models. You have to explicitly use it, either by adding it to all of your models that need it, or by defining a model that uses it, from which all of your models will inherit.
It's worth remembering that Django is just Python, and so the same rules apply to Django customisations as they would for any other random Python library you might download. To use a bit of code, it has to be in a module somewhere on your Pythonpath, and then you can just do `from foo import x`. I sometimes have a `lib` directory within my Django project structure, and put into it all the various things I might need to import. In this case I might put the JSONField code into a module called `fields`, as I might have other customised fields. Since I know my project is already on the Pythonpath, I can just do `from lib.fields import JSONField`, then `myfield = JSONField(options)` in the model definition.
Using and Installing Django Custom Field Models
[ "", "python", "django", "django-models", "" ]
I'm trying to load data from an external `.js` file, containing a JSON representation of a bunch of data. I cannot for the life of me figure out how to access the data inside the page. I'm sure this is really easy and I'm missing something simple! right now, I'm trying this: ``` $(document).ready(function () { $.getJSON("http://api.crunchbase.com/v/1/company/xobni.js", function (data) { alert(data.company_url); }); }); ``` which is obviously very wrong, since nothing happens. I've tried loading it in a `<script>` tag, but firebug tells me it didn't even load. how could I screw that up? anyway, I'm about ready to pull my hair out, and I figure this will take someone else about 15 seconds to figure out.
that data file doesn't have a `company_url` entry. Additionally, the `.js` file is served with the `text/javascript` mime-type, when it should be served with `application/json` (or `application/x-javascript`, correct me on that). **The real reason**, of course, is that you need to add `?callback=?` to your url. Then everything is going to work. So, it'll look like this:

```
$(document).ready(function(){
  $.getJSON("http://api.crunchbase.com/v/1/company/xobni.js?callback=?", function(data){
    alert(data.homepage_url);
  });
});
```
I looked at the JSON data. It looks like there is no company\_url. You might want homepage\_url:

```
$(document).ready(function(){
  $.getJSON("http://api.crunchbase.com/v/1/company/xobni.js", function(data){
    alert(data.homepage_url);
  });
});
```
How do I load external JSON from a script on a web page?
[ "", "javascript", "jquery", "json", "" ]
What do you think of the following code? Is it good? If so, why? If not, why not? And how does the CLR see this code?

```
public abstract class EntityBase<TEntity> : IEquatable<TEntity>
{
    public bool Equals(TEntity other)
    {
        // check equality
    }

    // yes, below is object's Equals and GetHashCode method implementation
}

public class Person : EntityBase<Person>
{
}
```

I have a bit of an odd feeling on this. Like a chicken-and-egg problem. And here is .NET Framework code which has the same behaviour.

```
public sealed class String : IComparable<string>, IEquatable<string> // I removed other interfaces
```

Any thoughts?
For other people's reference, I copied the answer of [Eric Lippert](https://stackoverflow.com/users/88656/eric-lippert) (who answered in the comments).

> Though the definition appears circular, it is not. However, the C# compiler's cycle detection algorithm is both wrong and weak. Wrong because it incorrectly detects non-cycles as cycles, and weak because it fails to detect certain very nasty cycles. If this topic interests you, see my article on it here: <http://blogs.msdn.com/ericlippert/archive/2008/05/07/covariance-and-contravariance-part-twelve-to-infinity-but-not-beyond.aspx>
In the right circumstances (e.g. implementing `IComparable<T>`) it is exactly the right thing to do. But this can only be determined on a case by case basis, looking at the details of why it is being considered. On the other hand, C++ allows the "curiously recurring base pattern", in which a template class inherits from its own type parameter:

```
template<typename T>
class SomeWrapper : public T {
    ...
};
```

This allows some advanced wrapping scenarios, but can rapidly become confusing if used beyond wrapping. Fortunately(?) this pattern is not allowed in .NET.
How this Generic Inheritance is done (internally)?
[ "", "c#", "generics", ".net-2.0", "" ]
I need to remove the focus from several TextBoxes. I tried using: ``` textBox1.Focused = false; ``` Its `ReadOnly` property value is `true`. I then tried setting the focus on the form, so as to remove it from all the TextBoxes, but this also fails to work: ``` this.Focus(); ``` and the function returns `false` when a textbox is selected. So, how do I remove the focus from a TextBox?
You can add the following code: ``` this.ActiveControl = null; //this = form ```
Focusing on the label didn't work for me (doing something like `label1.Focus()`, right?); the textbox still had focus when loading the form. However, trying [Velociraptor's](https://stackoverflow.com/users/122215/velociraptors) answer worked for me: setting the form's ActiveControl to the label, like this:

```
private void Form1_Load(object sender, EventArgs e)
{
    this.ActiveControl = label1;
}
```
How to remove the focus from a TextBox in WinForms?
[ "", "c#", ".net", "winforms", "textbox", "focus", "" ]
I want to use Python to get the group id for a corresponding group name. The routine must work on Unix-like OSes (Linux and Mac OS X). This is what I found so far:

```
>>> import grp
>>> for g in grp.getgrall():
...   if g[0] == 'wurzel':
...     print g[2]
```
If you read the [grp module documentation](http://docs.python.org/library/grp.html) you'll see that grp.getgrnam(groupname) will return one entry from the group database, which is a tuple-like object. You can either access the information by index or by attribute:

```
>>> import grp
>>> groupinfo = grp.getgrnam('root')
>>> print groupinfo[2]
0
>>> print groupinfo.gr_gid
0
```

Other entries are the name, the encrypted password (usually empty, if using a shadow file, it'll be a dummy value) and all group member names. This works fine on any Unix system, including my Mac OS X laptop:

```
>>> import grp
>>> admin = grp.getgrnam('admin')
>>> admin
('admin', '*', 80, ['root', 'admin', 'mj'])
>>> admin.gr_name
'admin'
>>> admin.gr_gid
80
>>> admin.gr_mem
['root', 'admin', 'mj']
```

The module also offers a method to get entries by gid, and as you discovered, a method to loop over all entries in the database:

```
>>> grp.getgrgid(80)
('admin', '*', 80, ['root', 'admin', 'mj'])
>>> len(grp.getgrall())
73
```

Last but not least, python offers similar functionality to get information on the password and shadow files, in the [pwd](http://docs.python.org/library/pwd.html) and [spwd](http://docs.python.org/library/spwd.html) modules, which have a similar API.
See [`grp.getgrnam(name)`](http://docs.python.org/library/grp.html#grp.getgrnam):

> `grp.getgrnam(name)`
>
> Return the group database entry for the given group name. KeyError is raised if the entry asked for cannot be found.
>
> Group database entries are reported as a tuple-like object, whose attributes correspond to the members of the group structure:

```
Index  Attribute  Meaning
0      gr_name    the name of the group
1      gr_passwd  the (encrypted) group password; often empty
2      gr_gid     the numerical group ID
3      gr_mem     all the group member’s user names
```

The numerical group ID is at index 2, or 2nd from last, or the attribute `gr_gid`. GID of `root` is 0:

```
>>> grp.getgrnam('root')
('root', 'x', 0, ['root'])
>>> grp.getgrnam('root')[-2]
0
>>> grp.getgrnam('root').gr_gid
0
>>>
```
get group id by group name (Python, Unix)
[ "", "python", "linux", "unix", "" ]
We need to have a semi-complex report in CRM that displays some accumulated lead values. The only way I see this report working is writing a stored procedure that creates a couple of temporary tables and calculates/accumulates data utilizing cursors. Then there is the issue of getting the data from the stored procedure to be accessible from the Reporting Server report. Does anyone know if that's possible? If I could have the option of writing a custom SQL statement to generate report data, that would be just excellent. Any pointers? Edit: To clarify my use of cursors, I can explain exactly what I'm doing with them. The basis for my report (which should be a chart, btw) is a table (table1) that has 3 relevant columns:

```
Start date
Number of months
Value
```

I create a temp table (temp1) that contains the following columns:

```
Year
Month number
Month name
Value
```

First I loop through the rows in the first table and insert a row in the temp table for each month, incrementing the month, while setting the value to the total value divided by months. I.e.: 2009-03-01,4,1000 in table1 yields

```
2009,03,March,250
2009,04,April,250
2009,05,May,250
2009,06,June,250
```

in the temp1 table. A new cursor is then used to sum and create a running total from the values in temp1 and feed that into temp2, which is returned to the caller as data to chart. Example temp1 data:

```
2009,03,March,250
2009,04,April,200
2009,04,April,250
2009,05,May,250
2009,05,May,100
2009,06,June,250
```

yields temp2 data:

```
2009,03,March,250,250
2009,04,April,450,700
2009,05,May,350,1050
2009,06,June,250,1300
```

The last column is the running totals, which start at zero for each new year.
I found the solution: I downloaded Report Builder 2.0 from Microsoft. This allows me to write queries and call stored procedures for the report data. [Microsoft SQL Server Report Builder link](http://www.microsoft.com/downloads/details.aspx?familyid=9f783224-9871-4eea-b1d5-f3140a253db6&displaylang=en)
I haven't done this - just thinking about how I would start. I would make sure that when the stored procedures populate the temporary tables, they use the Filtered views for pulling data. I would then set the access to execute the SP to have the same security roles as the Filtered views (which should be pretty much to allow members of the PrivReportingGroup). I would think that would cover allowing you to execute the SP in your report. I imagine that if you set up the SP beforehand, the SSRS designer has some means of showing you what data is available and of selecting an SP at design time. But I don't know that for sure.
Using SQL Stored Procedure as data for a Microsoft Dynamics CRM report
[ "", "sql", "stored-procedures", "dynamics-crm", "" ]
Is there any shortcut to accomplishing the equivalent of [PHP's array\_flip function](http://www.php.net/manual/en/function.array-flip.php) in JavaScript or does it have to be done via brute force looping? It has to be used for dozens of arrays so even small speedups will probably add up.
I don't think there's one built in. There's an example implementation [here](http://phpjs.org/functions/array_flip:317), though `:)`.

```
function array_flip( trans ) {
    var key, tmp_ar = {};

    for ( key in trans ) {
        if ( trans.hasOwnProperty( key ) ) {
            tmp_ar[trans[key]] = key;
        }
    }

    return tmp_ar;
}
```
## ES6 version ``` const example = { a: 'foo', b: 'bar' }; const flipped = Object.entries(example) .reduce((obj, [key, value]) => ({ ...obj, [value]: key }), {}); // flipped is {foo: 'a', bar: 'b'} ``` ## ES5 version ``` var example = {a: 'foo', b: 'bar'}; var flipped = Object.keys(example) //get the keys as an array .reduce(function(obj, key) { //build up new object obj[example[key]] = key; return obj; }, {}); //{} is the starting value of obj // flipped is {foo: 'a', bar: 'b'} ```
JavaScript equivalent of PHP function: array_flip
[ "", "javascript", "arrays", "" ]
I get the following error. [Microsoft][ODBC SQL Server Driver][SQL Server]Column 'STATS.VisitorIP' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. I used the VISITORIP column in both select lists. I used a GROUP BY clause in one of them, so I added VISITORIP to it. But I don't have a GROUP BY clause in the other select, so I did not add it there. Why do I receive this error?

```
SELECT S.VISITORIP, S.HTTPADDRESS, S.DATEENTERED
FROM STATS S
JOIN (SELECT S1.VISITORIP, MAX(S1.DATEENTERED)
      FROM STATS S1
      WHERE DATEENTERED BETWEEN '" & OnlineTime & "' AND '" & NOW() & "') S1
  ON S.VISITORIP = S1.VISITORIP AND S.DATEENTERED = S1.DATEENTERED
WHERE S.DATEENTERED BETWEEN '" &TIME& "' AND '" & NOW() & "'
GROUP BY S.VISITORIP, S.HTTPADDRESS, S.DATEENTERED
ORDER BY S.DATEENTERED DESC
```
You need the GROUP BY in the nested subquery because you are trying to get the MAX DATEENTERED for each VISITORIP. VISITORIP exists in the SELECT for that subquery, so you need the GROUP BY in there also in order for it to return the latest date for each visitorip. The MAX column also needs an alias so the outer join condition can refer to it.

```
SELECT S.VISITORIP, S.HTTPADDRESS, S.DATEENTERED
FROM STATS S
JOIN (SELECT S1.VISITORIP, MAX(S1.DATEENTERED) AS DATEENTERED
      FROM STATS S1
      WHERE DATEENTERED BETWEEN '" & OnlineTime & "' AND '" & NOW() & "'
      GROUP BY S1.VISITORIP) S1
  ON S.VISITORIP = S1.VISITORIP AND S.DATEENTERED = S1.DATEENTERED
WHERE S.DATEENTERED BETWEEN '" &TIME& "' AND '" & NOW() & "'
GROUP BY S.VISITORIP, S.HTTPADDRESS, S.DATEENTERED
ORDER BY S.DATEENTERED DESC
```
This (nested query) is not valid: ``` SELECT S1.VISITORIP, MAX(S1.DATEENTERED) FROM STATS S1 WHERE DATEENTERED BETWEEN '" & OnlineTime & "' AND '" & NOW() & "' ``` It should be: ``` SELECT S1.VISITORIP, MAX(S1.DATEENTERED) AS DATEENTERED FROM STATS S1 WHERE DATEENTERED BETWEEN '" & OnlineTime & "' AND '" & NOW() & "' GROUP BY S1.VISITORIP ```
(using join) I get not contained in either an aggregate function or the GROUP BY error
[ "", "sql", "sql-server", "" ]
Is the path to the C# compiler, csc.exe, stored in a reference location somewhere? I am working on a custom tool that lets users compile code snippets into their own DLLs, so the location could vary. I would like to automate this as much as possible before resorting to a locate-file dialog box!
Is there any reason you want to invoke the binary directly, instead of using [`CSharpCodeProvider`](http://msdn.microsoft.com/en-us/library/microsoft.csharp.csharpcodeprovider.aspx) to programmatically compile? You may be interested in my own snippet compiler (Snippy), which you can download the source for [on my C# in Depth site](http://csharpindepth.com/Downloads.aspx).
You can find the compiler here: ``` %windir%\Microsoft.NET\Framework\v[version number]\csc.exe ``` So, for .net 3.5, it would be ``` %windir%\Microsoft.NET\Framework\v3.5\csc.exe ```
How can I determine the path to the C# compiler?
[ "", "c#", "frameworks", "sdk", "" ]
I'm writing a library for manipulating bond graphs, and I'm using the Boost Graph Library to store the data for me. Unfortunately, I can't seem to figure out how to implement a proper visitor pattern using it, as you can't subclass out vertices - you must rely on 'properties' instead. The visitor framework provided in the library seems heavily geared towards working with certain algorithms where vertices are all of the same type, but store different information. In my problem, the vertices are of differing types and store differing types of information - some vertices are resistors, while some are capacitors, etc. How do I go about writing a visitor pattern that works based on a property of a vertex, instead of the vertex itself? My only thought so far has been to write a small class to represent the type of an object that points back to the original vertex that I need to get the graph information. However, this seems very kludgy, and evil to work with.
What do you mean, you can't subclass out vertices? You can use your own vertex class; it's just a matter of specifying it in the Graph typedef. You can even use members as properties when working with BGL algorithms. As for the other way around (which is harder IMO), you need to create a vertex property list and access it using a vertex descriptor... I think. Edit: You specify vertex/edge classes when defining your graph type:

```
struct Vertex { double some_property; };
struct Edge { double weight; };

typedef boost::adjacency_list<
    boost::listS, boost::vecS, boost::undirectedS,
    Vertex, Edge
> Graph; //sorry about the formatting

Graph g;
```

From there on, g[vertex\_descriptor] should return a reference to Vertex, e.g.:

```
//add 100 vertices
for (int i=0; i<100; ++i) {
    Graph::vertex_descriptor v = add_vertex(g);
    g[v].some_property = -1.0;
}

//zero some_property for all vertices
for (Graph::vertex_iterator i = vertices(g).first;
     i != vertices(g).second; ++i) {
    g[*i].some_property = 0.0;
}
```

I couldn't find my visitor code making use of these properties, but I did find the relevant part of the BGL documentation: 1) The part about [Internal Properties](http://www.boost.org/doc/libs/1_39_0/libs/graph/doc/using_adjacency_list.html#sec:adjacency-list-properties), which recommends that you use the following instead: 2) [Bundled Properties](http://www.boost.org/doc/libs/1_39_0/libs/graph/doc/bundles.html) The second link seems to have a Boost function making use of bundled properties via a member pointer. Does this help?
If anyone cares, after 2 months, here is a visitor that looks at the property.

```
class demo_visitor : public default_bfs_visitor {
public:
    template <typename Vertex, typename Graph>
    void discover_vertex( Vertex u, Graph& g) {
        printf("Visited vertex %d with property %f\n",
               u, g[u].some_property);
    }
};
```

If the visitor needs to modify the properties, then things are slightly more complicated. For the issues involved - [click here](https://stackoverflow.com/questions/1510945/modifying-bundled-properties-from-visitor)
Boost Graph Library and Visitors
[ "", "c++", "visitor-pattern", "boost-graph", "" ]
I'm having trouble using python function decorators in Google's AppEngine. I'm not that familiar with decorators, but they seem useful in web programming when you might want to force a user to login before executing certain functions. Anyway, I was following along with a flickr login example [here](http://stuvel.eu/flickrapi/documentation/#example-using-django) that uses django and decorates a function to require the flickr login. I can't seem to get this type of decorator to work in AppEngine. I've boiled it down to this: ``` def require_auth(func): def check_auth(*args, **kwargs): print "Authenticated." return func(*args, **kwargs) return check_auth @require_auth def content(): print "Release sensitive data!" content() ``` This code works from the commandline, but when I run it in GoogleAppEngineLauncher (OS X), I get the following error: ``` check_auth() takes at least 1 argument (0 given) ``` And I'm not really sure why... EDIT to include actual code: @asperous.us I changed the definition of content() to include variable arguments, is that what you meant? @Alex Martelli, 'print' does work within AppEngine, but still a completely fair criticism. Like I said, I'm trying to use the flickr login from the link above. 
I tried to put it into my app like so: ``` def require_flickr_auth(view): def protected_view(request,*args, **kwargs): if 'token' in request.session: token = request.session['token'] log.info('Getting token from session: %s' % token) else: token = None log.info('No token in session') f = flickrapi.FlickrAPI(api_key, api_secret, token=token, store_token=False) if token: # We have a token, but it might not be valid log.info('Verifying token') try: f.auth_checkToken() except flickrapi.FlickrError: token = None del request.session['token'] if not token: # No valid token, so redirect to Flickr log.info('Redirecting user to Flickr to get frob') url = f.web_login_url(perms='read') print "Redirect to %s" % url # If the token is valid, we can call the decorated view. log.info('Token is valid') return view(request,*args, **kwargs) return protected_view @require_flickr_auth def content(*args, **kwargs): print 'Welcome, oh authenticated user!' def main(): print 'Content-Type: text/plain' content() if __name__ == "__main__": main() ``` When I remove the @require\_flickr\_auth decoration, the string 'Welcome ...' prints out just fine. Otherwise I get a big ugly AppEngine exception page with ``` type 'exceptions.TypeError': protected_view() takes at least 1 argument (0 given) ``` at the bottom.
You're calling `content()` without any arguments, but the decorated version `protected_view` requires the `request` argument. Either add the argument to `content` or remove it from `protected_view`. If you're getting that error with your simple version then I'd suspect that `content` is a class method as Alex suggested. Otherwise it looks like you're telling it to expect at least one argument, then not supplying it any.
@Owen, the "takes at least 1 argument" error suggests you're defining content within a class (i.e. as a method) and not as a bare function as you show -- indeed, how exactly are you trying to execute that code in GAE? I.e. what's your app.yaml &c? If you put your code exactly as you gave it in `silly.py` and in your app.yaml you have: ``` handlers: - url: /silly script: silly.py ``` then when you visit `yourapp.appspot.com/silly` you will see absolutely nothing on either the browser or the logs (besides the `"GET /silly HTTP/1.1" 200 -` in the latter of course;-): there is no error but the `print` doesn't DO anything in particular either. So I have to imagine you tried running code different from what you're showing us...!-)
Python Function Decorators in Google App Engine
[ "", "python", "google-app-engine", "decorator", "" ]
``` //This is the function that will run every time a new item is added or the //list is sorted. var showNewOrder = function() { //This function means we get serialize() to tell us the text of each //element, instead of its ID, which is the default return. var serializeFunction = function(el) { return el.get('text'); }; //We pass our custom function to serialize(); var orderTxt = sort.serialize(serializeFunction); //And then we add that text to our page so everyone can see it. $('data').set('text', orderTxt.join(' ')); }; ``` full code is at <http://demos.mootools.net/Dynamic.Sortables> ``` var serializeFunction = function(*el*) { return el.get('text'); }; var orderTxt = sort.serialize(serializeFunction*(el)*); ``` compare the codes. Is el being passed or not? what is going on??? I want to learn advanced parameter usage. If not declaring functions like `function name(parameter1, parameter2, parameter3...)`. If not calling functions like `name(parameter1, parameter2, parameter3...)`. If parameters aren't variables. If declaring functions like `function(parameter1, parameter2, parameter3...)`. If calling functions like `variable(parameter1, parameter2, parameter3...)`. If parameters are objects. I'm interested. You probably have a bookmark with the lessons in which I'm interested... please, share!!!
The value assigned to "serializeFunction" is actually an anonymous function; you can think of it as a *pointer* or reference to a function. "el" is simply a declared input parameter that will be used when that function is called. Looking at the [original code](http://demos.mootools.net/demos/Dynamic.Sortables/demo.js) of the demo that was posted, the call to the `sort.serialize` function receives only the function as a parameter; "serializeFunction" is not being invoked, it is only passed as an argument. So the serialize function, which receives the reference to the function passed as a parameter, is in charge of executing it internally.
This is like a lambda expression.

```
sort.serialize()
```

accepts the **function** as a parameter, not the value.
Advanced parameter usage
[ "", "javascript", "parameters", "mootools", "anonymous-function", "" ]
I just can't seem to get localization to work. I have a class library. Now I want to create *resx* files in there, and return some values based on the thread culture. How can I do that?
* Add a Resource file to your project (you can call it "strings.resx") by doing the following: Right-click **Properties** in the project, select **Add -> New Item...** in the context menu, then in the list of **Visual C# Items** pick **"Resources file"** and name it `strings.resx`.
* Add a string resource in the resx file and give it a good name (example: name it "Hello" and give it the value "Hello")
* Save the resource file (**note:** this will be the **default** resource file, since it does not have a two-letter language code)
* Add `using` directives for `System.Threading` and `System.Globalization` to your program

Run this code:

```
Console.WriteLine(Properties.strings.Hello);
```

It should print "Hello". Now, add a new resource file, named "strings.fr.resx" (note the "fr" part; this one will contain resources in French). Add a string resource with the same name as in strings.resx, but with the value in French (Name="Hello", Value="Salut"). Now, if you run the following code, it should print "Salut":

```
Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo("fr-FR");
Console.WriteLine(Properties.strings.Hello);
```

What happens is that the system will look for a resource for "fr-FR". It will not find one (since we specified "fr" in our file). It will then fall back to checking for "fr", which it finds (and uses). The following code will print "Hello":

```
Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo("en-US");
Console.WriteLine(Properties.strings.Hello);
```

That is because it does not find any "en-US" resource, and also no "en" resource, so it will fall back to the default, which is the one that we added from the start. You can create files with more specific resources if needed (for instance strings.fr-FR.resx and strings.fr-CA.resx for French in France and Canada respectively). In each such file you will need to add the resources for those strings that differ from the resource that it would fall back to.
So if a text is the same in France and Canada, you can put it in strings.fr.resx, while strings that are different in Canadian French could go into strings.fr-CA.resx.
It's quite simple, actually. Create a new resource file, for example `Strings.resx`. Set `Access Modifier` to `Public`. Use the appropriate file template, so Visual Studio will automatically generate an accessor class (the name will be `Strings`, in this case). This is your default language. Now, when you want to add, say, German localization, add a localized resx file. This will typically be `Strings.de.resx` in this case. If you want to add additional localization for, say, Austria, you'll additionally create a `Strings.de-AT.resx`. Now go create a string - let's say a string with the name `HelloWorld`. In your `Strings.resx`, add this string with the value "Hello, world!". In `Strings.de.resx`, add "Hallo, Welt!". And in `Strings.de-AT.resx`, add "Servus, Welt!". That's it so far. Now you have this generated `Strings` class, and it has a property with a getter `HelloWorld`. Getting this property will load "Servus, Welt!" when your locale is de-AT, "Hallo, Welt!" when your locale is any other de locale (including de-DE and de-CH), and "Hello, world!" when your locale is anything else. If a string is missing in the localized version, the resource manager will automatically walk up the chain, from the most specialized to the invariant resource. You can use the `ResourceManager` class for more control over how exactly you are loading things. The generated `Strings` class uses it as well.
How to use localization in C#
[ "", "c#", "localization", "cultureinfo", "" ]
If a class contains a variable named "blah", then the standard getter/setter syntax is obviously getBlah() and setBlah(). But if I have a POJO class with a variable named isBlah, would I use: ``` public type getIsBlah() { return isBlah; } public setIsBlah(type isBlah) { this.isBlah = isBlah; } ``` Or would it be this? ``` public type isBlah() { return isBlah; } public setBlah(type blah) { this.isBlah = blah; } ``` The first seems to conform more strictly to the POJO conventions, but the second type is what IntelliJ generates if I ask it to make a class' getter/setters (and hey, IntelliJ has never let me down yet :] ). So which is the preferred syntax?
One reason for using properties is to decouple the API from the implementation. In other words, you shouldn't feel bound by what your private variable is called. That shouldn't inform the naming beyond trying to keep it readable to code maintainers. I would say that if "type" is `boolean` in this case, then the second form is correct. If it's *not* `boolean`, you should use `getXXX` - but I probably wouldn't use `getIsXXX`. To me, "is" has a very strong correspondence with Boolean properties, and using it in other contexts would not only break the JavaBeans conventions (which could affect other tools) but would be misleading IMO.
Note that the name of the field is completely irrelevant to the [JavaBean specification](http://java.sun.com/javase/technologies/desktop/javabeans/index.jsp). Only the names of the getter/setter are relevant. Normally the name of the getter is `get<PropertyName>()`. Only for `boolean` properties is `is<PropertyName>()` allowed as an alternative. Note that in your example the Bean property name is "Blah" when you call the getter `isBlah()` and it's "IsBlah" when you call your getter `getIsBlah()`. Personally I usually prefer `isBlah()`.
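To make the convention described above concrete, here is a minimal sketch (the class and property names are made up for illustration). Note that the field can be called anything; only the accessor names define the bean property:

```java
// Hypothetical POJO: the field is deliberately named "blah" (not "isBlah"),
// since per the JavaBeans spec only the accessor names matter.
public class Widget {
    private boolean blah;

    // Boolean getter: is<PropertyName>() is the conventional form
    public boolean isBlah() {
        return blah;
    }

    public void setBlah(boolean blah) {
        this.blah = blah;
    }

    public static void main(String[] args) {
        Widget w = new Widget();
        w.setBlah(true);
        System.out.println(w.isBlah()); // prints "true"
    }
}
```

Introspection tools (and most IDEs) will report this as a property named "Blah", regardless of the private field's name.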
What is the correct syntax for "is" variable getter/setters in a POJO class?
[ "", "java", "pojo", "" ]
Hey guys, I want to parse some XML, but I don't know how I can get repeated tags out of one element. I want to parse this:

```
<profile>
 <name>john</name>
 <lang>english</lang>
 <lang>dutch</lang>
</profile>
```

So I want to parse the languages that john speaks. How can I do that?
You can run a `foreach` loop over the element node after you've pulled it in with SimpleXML like so: ``` $xml_profiles = simplexml_load_file($file_profiles); foreach($xml_profiles->profile as $profile) { //-- first foreach pulls out each profile node foreach($profile->lang as $lang_spoken) { //-- will pull out each lang node into a variable called $lang_spoken echo $lang_spoken; } } ``` This has the benefit of being able to handle any number of `lang` elements you may have or not have for each profile element.
``` $profile->lang[0] $profile->lang[1] ```
SimpleXML more tags in 1 element
[ "", "php", "xml", "simplexml", "" ]
How do I split a CSV file in C#? And how do I display the result?
I got the result for my query. It's quite simple: I read a file using System.IO.File, and all the text is stored in a string. After that I split it with a separator. The code is shown below.

```
using System;
using System.Collections.Generic;
using System.Text;

namespace CSV
{
    class Program
    {
        static void Main(string[] args)
        {
            string csv = "user1, user2, user3,user4,user5";
            string[] split = csv.Split(new char[] {',',' '});
            foreach(string s in split)
            {
                if (s.Trim() != "")
                    Console.WriteLine(s);
            }
            Console.ReadLine();
        }
    }
}
```
I've been using the [TextFieldParser Class](http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.fileio.textfieldparser.aspx) in the Microsoft.VisualBasic.FileIO namespace for a C# project I'm working on. It will handle complications such as embedded commas or fields that are enclosed in quotes etc. It returns a string[] and, in addition to CSV files, can also be used for parsing just about any type of structured text file.
Splitting Comma Separated Values (CSV)
[ "", "c#", "csv", "" ]
``` $posts = array( "message" => 'this is a test message' ); foreach ($posts as $post) { echo $post['message']; } ``` Why does the above code only output the first letter in message? "t". Thanks!
foreach takes each element of the array and assigns it to the variable. To get the results I assume you are expecting you just need to do: ``` foreach ($posts as $post) { echo $post; } ``` The specifics as to why your code didn't work: `$post` would be the contents of the array element - in this case a string. Because PHP isn't strongly typed / supports type juggling, you can in fact work with a string as if it were an array, and get to each character in the sequence: ``` foreach ($posts as $post) { echo $post[0]; //'t' echo $post[1]; //'h' } ``` Obviously `$post['message']` therefore is not a valid element, and there is no explicit conversion from `(string)'message'` to `int`, so this evals to `$post[0]`.
``` # $posts is an array with one index ('message') $posts = array( "message" => 'this is a test message' ); # You iterate over the $posts array, so $post contains # the string 'this is a test message' foreach ($posts as $post) { # You try to access an index in the string. # Background info #1: # You can access each character in a string using brackets, just # like with arrays, so $post[0] === 't', $post[1] === 'e', etc. # Background info #2: # You need a numeric index when accessing the characters of a string. # Background info #3: # If PHP expects an integer, but finds a string, it tries to convert # it. Unfortunately, string conversion in PHP is very strange. # A string that does not start with a number is converted to 0, i.e. # ((int) '23 monkeys') === 23, ((int) 'asd') === 0, # ((int) 'strike force 1') === 0 # This means, you are accessing the character at position ((int) 'message'), # which is the first character in the string echo $post['message']; } ``` What you possibly want is either this: ``` $posts = array( array( "message" => 'this is a test message' ) ); foreach ($posts as $post) { echo $post['message']; } ``` Or this: ``` $posts = array( "message" => 'this is a test message' ); foreach ($posts as $key => $post) { # $key === 'message' echo $post; } ```
Array and foreach
[ "", "php", "arrays", "" ]
I have a comma-separated list of regular expressions: ``` .{8},[0-9],[^0-9A-Za-z ],[A-Z],[a-z] ``` I have done a *split* on the comma. Now I'm trying to match this regex against a generated password. The problem is that `Pattern.compile` does not like *square brackets* that are not escaped. Can someone please give me a simple function that takes a string like so: `[0-9]` and returns the escaped string `\[0-9\]`.
You can use [`Pattern.quote(String)`](https://docs.oracle.com/javase/9/docs/api/java/util/regex/Pattern.html#quote-java.lang.String-). From the docs: > `public static String quote​(String s)` > > Returns a literal pattern `String` for the specified `String`. > > This method produces a String that can be used to create a Pattern that would match the string s as if it were a literal pattern. > > Metacharacters or escape sequences in the input sequence will be given no special meaning.
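For illustration, here is a small sketch (the class name is made up) of what `Pattern.quote` returns and how the quoted pattern behaves. Note that `[0-9]` is actually already a valid pattern on its own; quoting is only needed when the brackets should be matched literally:

```java
import java.util.regex.Pattern;

public class QuoteDemo {
    public static void main(String[] args) {
        String rule = "[0-9]";

        // Pattern.quote wraps the input in \Q ... \E quoting markers
        System.out.println(Pattern.quote(rule)); // prints \Q[0-9]\E

        // The quoted pattern matches the text literally
        System.out.println(Pattern.compile(Pattern.quote(rule))
                .matcher("abc[0-9]def").find()); // true

        // The unquoted rule is a character class matching any digit
        System.out.println(Pattern.compile(rule)
                .matcher("p4ssw0rd").find()); // true
    }
}
```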
I was expecting a single backslash to escape the bracket; however, you must use two if you have the pattern stored in a string. The first backslash escapes the second one into the string, so that what regex sees is `\]`. Since regex just sees one backslash, it uses it to escape the square bracket. ``` \\] ``` In regex, that will match a single closing square bracket. If you're trying to match a newline, for example, you'd only use a single backslash. You're using the string escape sequence to insert a newline character into the string. Regex doesn't see `\n` - it sees the newline character, and matches that. You need two backslashes for the bracket because `\]` is not a string escape sequence, it's a regex escape sequence.
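A tiny sketch (class name hypothetical) of the two escaping layers described above:

```java
public class BracketEscapeDemo {
    public static void main(String[] args) {
        // In source, "\\]" is the two runtime characters \ and ]
        String regex = "\\]";
        System.out.println(regex.length());     // 2

        // The regex engine reads \] as an escaped literal bracket
        System.out.println("]".matches(regex)); // true

        // "\n" is a string escape: the pattern contains a real newline
        // character, which matches itself
        System.out.println("\n".matches("\n")); // true
    }
}
```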
How to escape a square bracket for Pattern compilation?
[ "", "java", "regex", "escaping", "special-characters", "" ]
In some **RightToLeft** languages (like Arabic, Persian, Urdu, etc.) each letter can have different shapes: an isolated form, an initial form, and a medial form (you can find them in the Windows Character Map for any Unicode font). Imagine you need the exact characters that the user has entered in a text box. By default, when you convert the String to a CharArray, it will convert each character to its isolated form. **(This is because when the user enters the characters by keyboard, they are in the isolated form, and when they are displayed on the screen, they are converted to the proper format; this is just a guess, because if you build the string using the exact character codes, it will generate the proper array.)** My question is: how can we get that form of the string, the form that is displayed in the textbox? If there is no way in .NET, then this means I need to make my own class to convert this T\_T
Windows uses [Uniscribe](http://msdn.microsoft.com/en-us/library/dd374091.aspx) to perform *contextual shaping* for complex scripts (which can apply to *l-to-r* as well as *r-to-l* languages). The displayed text in a text box is based on the glyph info after the characters have been fed into Uniscribe. Although the Unicode standard defines code points for each of isolated, initial, medial, and final forms of a chracter, not all fonts necessarily support them yet they may have pre-shaped glyphs or use a combination of glyphs—Uniscribe uses a shaping engine from the Windows language pack to determine which glyph(s) to use, based on the font's cmap. Here are some relevant links: * [More Uniscribe Mysteries](http://www.catch22.net/tuts/neatpad/13) (explains difference between glyphs and characters) * Microsoft Bhasha, Glyph Processing: [Uniscribe](http://www.bhashaindia.com/Developers/KnowHow/Glyph/uniscribe.htm) * MSDN: *[Complex Scripts Awareness](http://msdn.microsoft.com/en-us/goglobal/bb688137.aspx)* * Buried in the bowels of Mozilla code is [code that handles complex script rendering](http://lxr.mozilla.org/seamonkey/source/gfx/thebes/src/gfxWindowsFonts.cpp#1566) using Uniscribe. There's also additional [code that scans the list of fonts in the system and reads the cmap tables](http://lxr.mozilla.org/seamonkey/source/gfx/thebes/src/gfxWindowsPlatform.cpp) of each font. (From the comments at <http://www.siao2.com/2005/12/06/500485.aspx>). * Sorting it all Out: *[Did he say shaping? 
It's not in the script!](http://www.siao2.com/2006/05/31/611340.aspx)* The **[TextRenderer](http://msdn.microsoft.com/en-us/magazine/cc751527.aspx).DrawText()** method uses Uniscribe via the Win32 *DrawTextExW()* function, using the following P/Invoke: ``` [DllImport("user32.dll", CharSet=CharSet.Unicode, SetLastError=true)] public static extern int DrawTextExW( HandleRef hDC ,string lpszString ,int nCount ,ref RECT lpRect ,int nFormat ,[In, Out] DRAWTEXTPARAMS lpDTParams); [StructLayout(LayoutKind.Sequential)] public struct RECT { public int left; public int top; public int right; public int bottom; } [StructLayout(LayoutKind.Sequential)] public class DRAWTEXTPARAMS { public int iTabLength; public int iLeftMargin; public int iRightMargin; public int uiLengthDrawn; } ```
So how are you creating the "wrong" string? If you're just putting it in a string literal, then it's quite possible it's just the input method that's wrong. If you copy the "right" string after displaying it, and then paste that into a string literal, what happens? You might also want to check which encoding Visual Studio is using for your source files. If you're *not* putting the string into your source code as a literal, how are you creating it? Given the possibility for confusion, I think I'd want to either keep these strings in a resource, or hard code them using unicode escaping: ``` string text = "\ufb64\ufea0\ufe91\ufeea"; ``` (Then possibly put a comment afterwards showing the non-escaped value; at least then if it looks about right, it won't be *too* misleading. Admittedly it's then easy for the two to get out of sync...)
How do I get the characters for context-shaped input in a complex script?
[ "", "c#", "string", "unicode", "char", "" ]
In a SQL Server execution plan, what is the difference between an Index Scan and an Index Seek? I'm on SQL Server 2005.
An index scan is where SQL server reads the whole of the index looking for matches - the time this takes is proportional to the size of the index. An index seek is where SQL server uses the b-tree structure of the index to seek directly to matching records (see [http://mattfleming.com/node/192](https://web.archive.org/web/20141003135304/http://www.mattfleming.com:80/node/192) for an idea on how this works) - time taken is only proportional to the number of matching records. * In general an index seek is preferable to an index scan (when the number of matching records is proportionally much lower than the total number of records), as the time taken to perform an index seek is constant regardless of the total number of records in your table. * Note however that in certain situations an index scan can be faster than an index seek (sometimes *significantly* faster) - usually when the table is very small, or when a large percentage of the records match the predicate.
The basic rule to follow is Scans are bad, Seeks are good. **Index Scan** When SQL Server does a scan, it loads the object which it wants to read from disk into memory, then reads through that object from top to bottom looking for the records that it needs. **Index Seek** When SQL Server does a seek, it knows where in the index the data is going to be, so it loads up the index from disk, goes directly to the part of the index that it needs and reads to where the data that it needs ends. This is obviously a much more efficient operation than a scan, as SQL already knows where the data it is looking for is located. --- **How can I modify an Execution Plan to use a Seek instead of a Scan?** When SQL Server is looking for your data, probably one of the largest things which will make SQL Server switch from a seek to a scan is when some of the columns you are looking for are not included in the index you want it to use. Most often this will have SQL Server fall back to doing a clustered index scan, since the clustered index contains all the columns in the table. This is one of the biggest reasons (in my opinion at least) that we now have the ability to INCLUDE columns in an index, without adding those columns to the indexed columns of the index. By including the additional columns in the index we increase the size of the index, but we allow SQL Server to read the index without having to go back to the clustered index, or to the table itself, to get these values. **References** For information regarding the specifics of each of these operators within a SQL Server Execution plan see.... * [Clustered Index Scan - Books Online](http://msdn.microsoft.com/en-us/library/ms175184.aspx) * [Clustered Index Seek - Books Online](http://technet.microsoft.com/en-us/library/ms190400.aspx)
SQL Server Plans : difference between Index Scan / Index Seek
[ "", "sql", "sql-server", "sql-execution-plan", "" ]
I have a client who is embedding videos into his WordPress blog. The problem is they have a large CSS dropdown that sneaks behind the Flash video. I understand that setting the video's wmode to opaque will fix this, but I obviously need this to apply to every video they upload, and not have to go into the HTML to add this tag. Is there any way I can do this programmatically?
For the record, it needs to be changed in TWO places: see [here](http://www.communitymx.com/content/article.cfm?cid=e5141).
To avoid confusion with all the edits I've done to my previous answer, I'm creating a new answer with a fully tested and working sample page. It has been tested and is working in IE 6, 7 & 8, Opera 9.6 & 10, Safari 3 & 4, Google Chrome, but no version of Firefox I tested (2, 3 or 3.5): ``` <html> <head><title>Opacity text</title></head> <body> <div style="color:Red;position:absolute;top:0px;left:0px;"> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<br> </div> <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" height="200" width="300"> <param name="movie" value="http://freevideocoding.com/flvplayer.swf?file=http://www.freevideoediting.com/TVQvideos/Queen Demo--flv.flv&autoStart=false"> <param name="bgcolor" value="#ffff00"> </object> <!-- all you need to make this work is the script listed below. everything else is just sample code to provide a demonstration that the script shown below actually works --> <script type="text/javascript"> function makeObjectsOpaque_TestedAndWorking() { var elementToAppend = document.createElement('param'); elementToAppend.setAttribute('name', 'wmode'); elementToAppend.setAttribute('value', 'opaque'); var objects = document.getElementsByTagName('object'); for (var i = 0; i < objects.length; i++) { var newObject = objects[i].cloneNode(true); elementToAppend = elementToAppend.cloneNode(true); newObject.appendChild(elementToAppend); objects[i].parentNode.replaceChild(newObject, objects[i]); } } window.onload = makeObjectsOpaque_TestedAndWorking; </script> </body> </html> ```
How do I programmatically set all <object>'s to have the wmode set to opaque?
[ "", "javascript", "flash", "video", "" ]
Not that I'm not appreciative of the powers of multithreading or `ThreadPool`, but I'm scared I broke something since I'm getting a roughly 20x speed increase (2-3s down from over a minute) with a relatively naive usage of `ThreadPool`. So I submit my code here to be torn apart by people far wiser than I. Am I doing something wrong here, or is this just a far better candidate for multithreading than I ever hoped? (Yes, this function is an entire thread: like I said, this used to take over a minute to run.)

**EDIT:** To answer my own question, no, this is broken: it seems to be running multiple times, but over the same trigger. Is this because of the way lambdas are handled?

```
private static void CompileEverything()
{
    try
    {
        // maintain the state of our systray icon
        object iconLock = new object();
        bool iconIsOut = true;

        // keep a count of how many threads are still running
        object runCountLock = new object();
        int threadRunning = 0;

        foreach (World w in Worlds)
        {
            foreach (Trigger t in w.Triggers)
            {
                lock (runCountLock)
                {
                    threadRunning++;
                }

                ThreadPool.QueueUserWorkItem(o =>
                {
                    // [snip]: Do some work involving compiling code already in memory with CSharpCodeProvider

                    // provide some pretty feedback
                    lock (iconLock)
                    {
                        if (iconIsOut)
                            notifyIcon.Icon = Properties.Resources.Icon16in;
                        else
                            notifyIcon.Icon = Properties.Resources.Icon16out;
                        iconIsOut = !iconIsOut;
                    }

                    lock (runCountLock)
                    {
                        threadRunning--;
                    }
                });
            }
        }

        // wait for all the threads to finish up
        while (true)
        {
            lock (runCountLock)
            {
                if (threadRunning == 0)
                    break;
            }
        }

        // set the notification icon to our default icon.
        notifyIcon.Icon = Properties.Resources.Icon16;
    }
    // we're going down before we finished starting...
    // oh well, be nice about it.
    catch (ThreadAbortException) { }
}
```
Interlocked.Increment is better than locking, but the polling loop at the end scares me. First, if you're going to loop, then do a Thread.Sleep(0) to release the processor each time. Second, if you're going to poll a variable, then you need to make sure it's either marked volatile or you use MemoryBarrier, or else the compiler may assume no outside thread will change it and therefore optimize away the check, leading to an infinite loop. Even better would be for each thread to check for the count hitting zero and set an event if it does. You can then wait on the event instead of polling. The only trick is that you want to increment once in the main thread before the dispatch loop, then decrement and check for zero before waiting on the event. *edit* If it's broken because it's reusing the trigger, then the closure is wrong. Try copying the value of the trigger into a variable local to the inside of the loop and using that variable for the lambda expression.
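A sketch of that increment-before-dispatch, decrement-and-signal idea, written in Python rather than C# for brevity (the class name and structure here are illustrative, not taken from any library):

```python
import threading

class CountdownLatch:
    """Wait until a running-work counter drops back to zero."""
    def __init__(self):
        self._count = 1          # pre-increment for the dispatching thread
        self._lock = threading.Lock()
        self._done = threading.Event()

    def add(self):
        with self._lock:
            self._count += 1

    def leave(self):
        with self._lock:
            self._count -= 1
            if self._count == 0:
                self._done.set()

    def wait(self):
        self.leave()             # drop the dispatcher's own reference
        self._done.wait()        # block on the event instead of polling

latch = CountdownLatch()
results = []

def work(n):
    results.append(n * n)        # stand-in for the real per-item work
    latch.leave()

for i in range(8):
    latch.add()                  # increment BEFORE dispatching
    threading.Thread(target=work, args=(i,)).start()
latch.wait()

print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The pre-increment for the dispatcher is what prevents the count from transiently hitting zero while work is still being queued.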
I think you can do better. There is no need to lock around the changes to threadRunning. You can just use Interlocked.Increment() and Interlocked.Decrement(): ``` Interlocked.Increment(ref threadRunning); ThreadPool.QueueUserWorkItem(o => { // [snip]: Do some work involving compiling code already in memory with CSharpCodeProvider // provide some pretty feedback lock (iconLock) { notifyIcon.Icon = (iconIsOut ? Properties.Resources.Icon16in : Properties.Resources.Icon16out); iconIsOut = !iconIsOut; } Interlocked.Decrement(ref threadRunning); }); ```
ThreadPool giving amazing results, did I do this right? (no, I didn't)
[ "", "c#", "multithreading", "threadpool", "" ]
This is probably a really straightforward answer - but a bit of advice would be appreciated. I have a small system where all of my objects use the same load() and loadMultiple() methods. Exactly the same code in each. Very simple, example below. ``` public static function load($id) { // Instantiate the object $object = new self($id); return $object; } ``` The example code above can't go in an interface obviously, but neither can it go in an abstract class as I am instantiating the object (same in the loadMultiple method). What is the most elegant solution to this? I am currently planning to create a parent class such as: ``` class Object_Model { public function load() { // Code here } public function loadMultiple() { // Code here } } class Object1 extends Object_Model { } class Object2 extends Object_Model { } ``` But I am sure there must be a better way. Is there?
I agree with Ionut - just go with a traditional factory method. ``` <?php abstract class A { protected $id; protected function __construct( $id ) { $this->id = $id; } final public static function factory( $subclass, $id ) { // Instantiate the object if ( is_subclass_of( $subclass, __CLASS__ ) ) { return new $subclass($id); } throw new Exception( "$subclass is not a valid class for this factory." ); } } class B extends A{} class C extends A{} class D{} try { $b = A::factory( 'B', 1 ); $c = A::factory( 'C', 2 ); $d = A::factory( 'D', 2 ); } catch ( Exception $e ) { echo "Oops! " . $e->getMessage(); } ``` You might be able to do something with [late static binding](http://us.php.net/lsb), but that's PHP 5.3 only
As your current code uses a *static* method, I believe it would be just fine to have a function which takes two arguments: the name of the class to instantiate and that id: ``` function load($class, $id) { return new $class($id); } ``` It's OK to implement factories using functions. Or is there more logic that you do in there that cannot be extracted in a function?
Object Orientated Common Methods: Interfaces, Abstract, or something else
[ "", "php", "oop", "" ]
I have these classes:

```
public abstract class CustomField
{
    public String Id { get; set; }
    public String Name { get; set; }
    public String Description { get; set; }
    public FieldType Type { get; set; }

    public enum FieldType
    {
        String = 0,
        Integer = 1,
        Boolean = 2,
        List = 3
    }
}

public class StringCustomField : CustomField
{
    public String Value { get; set; }
    public Int32 MinLenght { get; set; }
    public Int32 MaxLenght { get; set; }

    public StringCustomField()
    {
        this.Type = FieldType.String;
    }
}

public class CustomGroup
{
    public String Id { get; set; }
    public String Name { get; set; }
    public String Description { get; set; }
    public List<CustomField> FieldList = new List<CustomField>();
}
```

When I try to transfer `CustomGroup` through my webservice I get this error:

> The remote server returned an error: NotFound

Serialization is failing when C# tries to transfer my `StringCustomField` through my `CustomField`. What am I doing wrong?

Marc Gravell told me to do the following, and I understand the solution, but something is wrong: it has no effect and I still catch the same error. Help!

```
[XmlInclude(typeof(StringCustomField))]
[XmlInclude(typeof(IntegerCustomField))]
[XmlInclude(typeof(BooleanCustomField))]
[XmlInclude(typeof(ListCustomField))]
public abstract class CustomField
{
    public String Id { get; set; }
    public String Name { get; set; }
    public String Description { get; set; }
    public FieldType Type { get; set; }

    public enum FieldType
    {
        String = 0,
        Integer = 1,
        Boolean = 2,
        List = 3
    }
}
```
If you are sending subclasses as xml, you will need `[XmlInclude]`: ``` [XmlInclude(typeof(StringCustomField))] public abstract class CustomField {...} ``` You can add multiple `[XmlInclude(...)]` markers for any other subclasses in the model.
`List<CustomField>` will serialize and deserialize to a `CustomField[]` if you're using a web service, won't it?
Why can't I send my custom class through my webservice?
[ "", "c#", "web-services", "serialization", "" ]
For fun, I'm playing with a class to easily cache function results. The basic idea is that you can take any function you want — though you'd only want to use it for relatively expensive functions — and easily wrap it to use relatively inexpensive dictionary lookups for later runs with the same argument. There's really not much to it:

```
public class AutoCache<TKey, TValue>
{
    public AutoCache(Func<TKey, TValue> FunctionToCache)
    {
        _StoredFunction = FunctionToCache;
        _CachedData = new Dictionary<TKey, TValue>();
    }

    public TValue GetResult(TKey Key)
    {
        if (!_CachedData.ContainsKey(Key))
            _CachedData.Add(Key, _StoredFunction(Key));
        return _CachedData[Key];
    }

    public void InvalidateKey(TKey Key)
    {
        _CachedData.Remove(Key);
    }

    public void InvalidateAll()
    {
        _CachedData.Clear();
    }

    private Dictionary<TKey, TValue> _CachedData;
    private Func<TKey, TValue> _StoredFunction;
}
```

Unfortunately, there are some additional restrictions that make this much less useful than it could be. There are also some features we could add and other considerations to the implementation. I'm looking for thoughts on ways this can be improved for any of the following points:

* This requires a function that returns the same result for a given set of arguments (it must be stateless). Probably no way to change this.
* It's limited to a very narrow delegate range. Could we expand it to easily work for any function that accepts at least one parameter and returns a value, perhaps by wrapping arguments in an anonymous type? Or would we need an additional implementation for each Func delegate we wanted to support? If so, can we build an abstract class to make this easier?
* It's not thread-safe.
* No automatic invalidation. This makes it dangerous for garbage collection. You need to keep it around for a while for it to be useful, and that means you're not going to really ever discard old and potentially un-needed cache items.
* Can we inherit from this to make the cache bi-directional for the case where the function has a single argument? As a point of reference, if I ever use this in real code the most likely place I envision it is as part of a business logic layer, where I use this code to wrap a method in the data access layer that just pulls data from a lookup table. In this case, the database trip would be expensive relative to the dictionary and there would almost always be exactly one 'key' value for the lookup, so it's a good match.
Another name for this automatic caching of function results is memoization. For a public interface, consider something along these lines: ``` public Func<T,TResult> Memoize<T,TResult>(Func<T,TResult> f) ``` ... and simply use polymorphism to store T's in a dictionary of object. Extending the delegate range could be implemented via currying and partial function application. Something like this: ``` static Func<T1,Func<T2,TResult>> Curry(Func<T1,T2,TResult> f) { return x => y => f(x, y); } // more versions of Curry ``` Since `Curry` turns functions of multiple arguments into functions of single arguments (but that may return functions), the return values are eligible for memoization themselves. Another way to do it would be to use reflection to inspect the delegate type, and store tuples in the dictionary rather than simply the argument type. A simplistic tuple would be simply an array wrapper whose hashcode and equality logic used deep comparisons and hashing. Invalidation could be helped with weak references, but creating dictionaries with `WeakReference` keys is tricky - it's best done with the support of the runtime (WeakReference values is much easier). I believe there are some implementations out there. Thread safety is easily done by locking on the internal dictionary for mutation events, but having a lock-free dictionary may improve performance in heavily concurrent scenarios. That dictionary would probably be even harder to create - there's an interesting [presentation on one for Java here](http://video.google.com/videoplay?docid=2139967204534450862) though.
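For comparison across languages, the memoization idea described above is only a few lines in Python (a sketch only; modern Python spells this `functools.lru_cache`):

```python
import functools

def memoize(f):
    cache = {}
    @functools.wraps(f)
    def wrapper(*args):
        # Hashable argument tuples double as the cache key.
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

calls = []

@memoize
def slow_add(a, b):
    calls.append((a, b))  # record real (non-cached) invocations
    return a + b

slow_add(2, 3)
slow_add(2, 3)
slow_add(4, 5)
print(len(calls))  # 2 (the repeated call was served from the cache)
```

The closure over `cache` plays the same role as the dictionary field in the C# class: each wrapped function gets its own private cache.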
Wow - what serendipity - I had just recently posted a question about [opaque keys in C#](https://stackoverflow.com/questions/1165967/is-there-any-reason-not-to-use-this-opaquekey-pattern-with-c-3) ... and it was precisely because I'm trying to implement something related to function result caching. How funny.

This type of metaprogramming can be tricky with C# ... especially because generic type parameters can result in awkward code duplication. You often end up repeating almost the same code in multiple places, with different type parameters, to achieve type safety.

So here's my variation on your approach that uses my opaque key pattern and closures to create cacheable functions. The sample below demonstrates the pattern with either one or two arguments, but it's relatively easy to extend to more. It also uses extension methods to create a transparent pattern for wrapping a Func<> with a cacheable Func<> using the `AsCacheable()` method. Closures capture the cache that is associated with the function - and make its existence transparent to other callers.

This technique has many of the same limitations as your approach (thread safety, holding on to references, etc) - I suspect they aren't too hard to overcome - but it DOES support an easy way to extend to multiple parameters, and it allows cacheable functions to be completely substitutable with regular ones - since they are just a wrapper delegate.

It's also worth noting that if you create a second instance of the CacheableFunction - you get a separate cache. This can be both a strength and a weakness ... since in some situations you may not realize this is happening.

Here's the code:

```
public interface IFunctionCache
{
    void InvalidateAll();
    // we could add more overloads here...
} public static class Function { public class OpaqueKey<A, B> { private readonly object m_Key; public A First { get; private set; } public B Second { get; private set; } public OpaqueKey(A k1, B k2) { m_Key = new { K1 = k1, K2 = k2 }; First = k1; Second = k2; } public override bool Equals(object obj) { var otherKey = obj as OpaqueKey<A, B>; return otherKey == null ? false : m_Key.Equals(otherKey.m_Key); } public override int GetHashCode() { return m_Key.GetHashCode(); } } private class AutoCache<TArgs,TR> : IFunctionCache { private readonly Dictionary<TArgs,TR> m_CachedResults = new Dictionary<TArgs, TR>(); public bool IsCached( TArgs arg1 ) { return m_CachedResults.ContainsKey( arg1 ); } public TR AddCachedValue( TArgs arg1, TR value ) { m_CachedResults.Add( arg1, value ); return value; } public TR GetCachedValue( TArgs arg1 ) { return m_CachedResults[arg1]; } public void InvalidateAll() { m_CachedResults.Clear(); } } public static Func<A,TR> AsCacheable<A,TR>( this Func<A,TR> function ) { IFunctionCache ignored; return AsCacheable( function, out ignored ); } public static Func<A, TR> AsCacheable<A, TR>( this Func<A, TR> function, out IFunctionCache cache) { var autocache = new AutoCache<A,TR>(); cache = autocache; return (a => autocache.IsCached(a) ? autocache.GetCachedValue(a) : autocache.AddCachedValue(a, function(a))); } public static Func<A,B,TR> AsCacheable<A,B,TR>( this Func<A,B,TR> function ) { IFunctionCache ignored; return AsCacheable(function, out ignored); } public static Func<A,B,TR> AsCacheable<A,B,TR>( this Func<A,B,TR> function, out IFunctionCache cache ) { var autocache = new AutoCache<OpaqueKey<A, B>, TR>(); cache = autocache; return ( a, b ) => { var key = new OpaqueKey<A, B>( a, b ); return autocache.IsCached(key) ? 
autocache.GetCachedValue(key) : autocache.AddCachedValue(key, function(a, b)); }; } } public class CacheableFunctionTests { public static void Main( string[] args ) { Func<string, string> Reversal = s => new string( s.Reverse().ToArray() ); var CacheableReverse = Reversal.AsCacheable(); var reverse1 = CacheableReverse("Hello"); var reverse2 = CacheableReverse("Hello"); // step through to prove it uses caching Func<int, int, double> Average = (a,b) => (a + b)/2.0; var CacheableAverage = Average.AsCacheable(); var average1 = CacheableAverage(2, 4); var average2 = CacheableAverage(2, 4); } } ```
Cache Function results
[ "", "c#", ".net", "caching", "" ]
Let's say you have various objects of arbitrary type that you would like to store in a key+value type of table. The key could for example be an int, string or guid. What would the value be? String, binary or something else? And how would you store and load the objects? I would think some sort of serialization, but what kind?

---

I have one solution at the moment where I have a class with these two methods:

```
public T Get<T>(string key)
public void Set<T>(string key, T value)
```

In the database I have a table with a string column and a binary column. I then use a `BinaryFormatter` to serialize and deserialize the value and Linq2Sql to put the binary result in the database table. But is this a good solution? Currently I have only dared to try this with simple values like integers and strings. How do the `BinaryFormatter` and serialization in general work with more complex types like structs and classes? Especially if for example the value contains things like arrays or lists. Any pointers?

---

At the moment I will be using it to store various last-selected-or-typed-etc type of values. Although they may not always necessarily be typed. For example it may be choosing values from a list. The main point is that they will pretty much just be convenience stuff for the user, so not very critical data.
If you can restrict yourself to specific types which map easily to SQL types, I'd be tempted to keep those in separate columns in the table, making sure you only fill one of them in. That way you have human-readable data in the database, which makes ad-hoc querying easier. It also means you're not forever locked into .NET. There are loads of different serialization options available. The core framework ones are `BinaryFormatter` and `XmlSerializer` of course; XML is much more portable, but at a cost of space. I believe it's also less thoroughly customisable. There are third-party serialization technologies such as [Thrift](http://incubator.apache.org/thrift/) and [Protocol Buffers](http://code.google.com/p/protobuf/). These will be more restrictive (in terms of what they can serialize) but more portable. (Disclaimer: my 20% project is a [C# port](http://code.google.com/p/protobuf-csharp-port/) of Protocol Buffers, so I'm not entirely unbiased here.) You should also consider versioning - what do you want to happen if you change the data structure you're serializing/deserializing? Maybe you don't need to be able to read "old" records, maybe you do. Maybe you need old code to be able to read "new" records - or maybe not. Your choice of technology should really be driven by requirements. The more general you try to make it, the more complex it will become - so work out what you really need before you try to come up with a solution.
When we have done this type of thing we have kept the data in a byte array while in C#, and stored it in a varbinary(max) column in SQL Server. **EDIT** Based on comment You could try having a property on your class that was the byte array of your value field.
C#: How would you store arbitrary objects in an SQL Server?
[ "", "c#", "sql-server", "database-design", "serialization", "data-structures", "" ]
I am trying to create my first fluent interface and I was just wondering what other people thought was more fluent, and which one they would prefer to use?

```
Check.Field().Named("est").WithValueOf("sdsd").IsNotNull()
Check.Field("est").WithValueOf("sdsd").IsNotNull()
Check.Field("est").WithValue("sdsd").IsNotNull()
```

Cheers Anthony
Last one definitely: ``` Check.Field("est").WithValue("sdsd").IsNotNull() ```
I concur: ``` Check.Field("est").WithValue("sdsd").IsNotNull() ``` As short as possible, while still making sense. Avoid noise words like `.as. .of. .and. .in.` unless they add contextual meaning. I've seen fluent interfaces that do this, and it adds nothing useful except more typing and more hoops for the application to jump through when it executes.
Which is more fluent - longer or shorter syntax?
[ "", "c#", ".net", "fluent-interface", "" ]
I'm involved in a development project that is using freeglut (based on the long-defunct glut) for its client. The client will eventually allow full interaction with a large-scale 3D environment. Should I let the development continue with freeglut (is it even possible), or should I advise they use another alternative such as libsdl, opentk or even axiom? I'm not a graphics person, but I get the feeling freeglut might potentially be a limited choice. The most convincing answer (for or against) will be accredited.

EDIT: A few points to make...

* The project is already using the Tao Framework.
* DirectX and XNA are not options (ie: something like freeglut or libsdl has to be used).

I did do my research and found that freeglut was once again under active development and that they have a release pending. That doesn't change my feeling that it may still be a potentially limited choice. My question isn't on how it's done but on whether or not freeglut is still a viable choice for something that could potentially get big, and whether or not there are more "modern" solutions that might ease development a bit.

EDIT: It would help if other alternatives have better multi-threading support (not for rendering objects but for processing data and so forth).

EDIT: To elaborate a bit more... the client must work in at least Linux and Windows.

Thanks!
I don't think there's anything wrong with freeglut. It is being actively developed, and there is an active board for support on Nabble. That said, it is not difficult to create an OpenGL context in Windows, so why not just use it directly? I always thought people used Glut/FreeGlut because that was what the Red Book used. (though the callbacks for mouse and keyboard do simplify things) I never timed it, but Glut always felt a little slower than using OpenGL through interop. The [Tao project](http://www.taoframework.com/) has a good OpenGL wrapper if you want to use OpenGL directly. It also has nice .Net bindings for FreeGlut and GLFW, a Glut alternative which offers mouse and keyboard callbacks too.
I'd go for libsdl: its multimedia capabilities make it easier to work with audio hardware as well; it has native bindings to C# and a variety of other languages; and it can also be of use if you ever decide to integrate a mobile interface for your project.
A project I'm assisting with is using freeglut for the client. Should I advise otherwise?
[ "", "c#", "3d", "sdl", "freeglut", "" ]
In Python 2.x when you want to mark a method as abstract, you can define it like so: ``` class Base: def foo(self): raise NotImplementedError("Subclasses should implement this!") ``` Then if you forget to override it, you get a nice reminder exception. Is there an equivalent way to mark a field as abstract? Or is stating it in the class docstring all you can do? At first I thought I could set the field to NotImplemented, but when I looked up what it's actually for (rich comparisons) it seemed abusive.
Yes, you can. Use the `@property` decorator. For instance, if you have a field called "example" then can't you do something like this: ``` class Base(object): @property def example(self): raise NotImplementedError("Subclasses should implement this!") ``` Running the following produces a `NotImplementedError` just like you want. ``` b = Base() print b.example ```
Alternate answer: ``` @property def NotImplementedField(self): raise NotImplementedError class a(object): x = NotImplementedField class b(a): # x = 5 pass b().x a().x ``` This is like Evan's, but concise and cheap--you'll only get a single instance of NotImplementedField.
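Later Python versions added a third option via the `abc` module, which moves the failure to instantiation time instead of attribute-access time. Python 3 syntax is shown below; in Python 2.6+ the equivalent would use `abc.abstractproperty` together with the `__metaclass__` attribute.

```python
from abc import ABCMeta, abstractmethod

class Base(metaclass=ABCMeta):
    @property
    @abstractmethod
    def example(self):
        """Subclasses must define this value."""

class Good(Base):
    example = 5  # a plain class attribute satisfies the abstract property

class Bad(Base):
    pass  # forgot to define example

print(Good().example)  # 5
try:
    Bad()  # abstract classes cannot be instantiated
except TypeError as e:
    print("refused:", e)
```

The advantage over raising `NotImplementedError` in a getter is that you cannot even construct an incomplete subclass, so the mistake surfaces immediately.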
Equivalent of NotImplementedError for fields in Python
[ "", "python", "abstract-class", "" ]
I'm looking for a regular expression that will match text given the following requirements:

* contains only 10 digits (only numbers)
* starts with 9.

These examples should match:

* 9999999999
* 9876543210

These examples should not match:

* 999999999
* 1234567890
* 8912456789
* qwe3456&ert

It is basically for Indian mobile numbers. Please provide examples; I have already searched Google, and the answers I found there over-validate.
Try something like this: ``` ^9\d{9}$ ```
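A quick check of that pattern against the examples from the question (Python shown here; the expression itself works the same way in C#'s `Regex`):

```python
import re

pattern = re.compile(r"^9\d{9}$")

# Should match: exactly ten digits, starting with 9.
assert pattern.match("9999999999")
assert pattern.match("9876543210")

# Should not match: wrong length, wrong first digit, or non-digits.
assert not pattern.match("999999999")    # only nine digits
assert not pattern.match("1234567890")   # starts with 1
assert not pattern.match("8912456789")   # starts with 8
assert not pattern.match("qwe3456&ert")  # not all digits
print("all checks passed")
```

The `^` and `$` anchors are what enforce "only 10 digits": without them the pattern would also match a ten-digit run embedded in longer text.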
I always prefer the REs that can be used on any engine (so no fancy `"\d"` things): ``` ^9[0-9]{9}$ ``` or, in a crunch, ``` ^9[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]$ ``` if the RE engine doesn't even have `"{}"`. Yes, I know the C# engine can do both `"\d"` and `"{}"` but that's not always the case.
Regular expression for Indian mobile phone numbers?
[ "", "c#", "regex", "" ]
Sometimes I see that certain projects write something to the output during compilation. How can that be achieved in MSVC++? Thanks!
use `#pragma message` e.g. ``` #define MESSAGE(t) message(__FILE__ "(" STRINGXXX(__LINE__) ") : " t) #define STRINGXXX(x) STRINGYYY(x) #define STRINGYYY(x) #x ``` then if you put ``` #pragma MESSAGE("TODO: testing") ``` it will appear as a clickable message just like the normal compiler messages
You want to include something like this in your source code: ``` #pragma message("Hello World") ```
MSVC++ how to output something to the "output"-window during compilation
[ "", "c++", "visual-c++", "" ]
I've been searching and reading around about this and couldn't find anything really useful. I'm writing a small C# Win app that allows the user to send files to a web server, not by FTP, but by HTTP using POST. Think of it like a web form, but running in a Windows application. I have my HttpWebRequest object created using something like this

```
HttpWebRequest req = WebRequest.Create(uri) as HttpWebRequest
```

and also set the `Method`, `ContentType` and `ContentLength` properties. But that's as far as I can go. This is my piece of code:

```
HttpWebRequest req = WebRequest.Create(uri) as HttpWebRequest;
req.KeepAlive = false;
req.Method = "POST";
req.Credentials = new NetworkCredential(user.UserName, user.UserPassword);
req.PreAuthenticate = true;
req.ContentType = file.ContentType;
req.ContentLength = file.Length;

HttpWebResponse response = null;

try
{
    response = req.GetResponse() as HttpWebResponse;
}
catch (Exception e)
{
}
```

So my question is basically: how can I send a file (text file, image, audio, etc.) with C# via HTTP POST. Thanks!
Using .NET 4.5 (or .NET 4.0 by adding the [Microsoft.Net.Http](http://nuget.org/Packages/Microsoft.Net.Http "Microsoft.Net.Http") package from NuGet) there is an easier way to simulate form requests. Here is an example: ``` private async Task<System.IO.Stream> Upload(string actionUrl, string paramString, Stream paramFileStream, byte [] paramFileBytes) { HttpContent stringContent = new StringContent(paramString); HttpContent fileStreamContent = new StreamContent(paramFileStream); HttpContent bytesContent = new ByteArrayContent(paramFileBytes); using (var client = new HttpClient()) using (var formData = new MultipartFormDataContent()) { formData.Add(stringContent, "param1", "param1"); formData.Add(fileStreamContent, "file1", "file1"); formData.Add(bytesContent, "file2", "file2"); var response = await client.PostAsync(actionUrl, formData); if (!response.IsSuccessStatusCode) { return null; } return await response.Content.ReadAsStreamAsync(); } } ```
To send the raw file *only*: ``` using(WebClient client = new WebClient()) { client.UploadFile(address, filePath); } ``` If you want to emulate a browser form with an `<input type="file"/>`, then that is harder. See [this answer](https://stackoverflow.com/questions/566462/upload-files-with-httpwebrequest-multipart-form-data) for a multipart/form-data answer.
Send a file via HTTP POST with C#
[ "", "c#", "http", "post", "system.net", "" ]
I have a multithreaded program where I create a generator function and then pass it to new threads. I want it to be shared/global in nature so each thread can get the next value from the generator. Is it safe to use a generator like this, or will I run into problems/conditions accessing the shared generator from multiple threads? If not, is there a better way to approach the problem? I need something that will cycle through a list and produce the next value for whichever thread calls it.
It's not thread-safe; simultaneous calls may interleave, and mess with the local variables. The common approach is to use the master-slave pattern (now called farmer-worker pattern in PC). Make a third thread which generates data, and add a Queue between the master and the slaves, where slaves will read from the queue, and the master will write to it. The standard queue module provides the necessary thread safety and arranges to block the master until the slaves are ready to read more data.
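A minimal sketch of that farmer-worker arrangement (Python 3 names; the module is called `Queue` in Python 2, and the sentinel-per-worker convention is one common choice, not the only one):

```python
import threading
import queue  # the module is named Queue in Python 2

def produce(gen, q, n_workers):
    # The single producer drains the generator into the thread-safe queue.
    for item in gen:
        q.put(item)
    for _ in range(n_workers):
        q.put(None)  # one sentinel per worker means "no more data"

def work(q, results):
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for real per-item work

q = queue.Queue(maxsize=10)  # bounded, so the producer blocks if workers lag
results = []                 # list.append is atomic under CPython's GIL
gen = (x for x in range(100))

workers = [threading.Thread(target=work, args=(q, results)) for _ in range(4)]
producer = threading.Thread(target=produce, args=(gen, q, len(workers)))
for t in workers:
    t.start()
producer.start()
producer.join()
for t in workers:
    t.join()

print(len(results))  # 100
```

Only the producer thread ever touches the generator, so the question of generator thread-safety disappears entirely.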
Edited to add benchmark below. You can wrap a generator with a lock. For example, ``` import threading class LockedIterator(object): def __init__(self, it): self.lock = threading.Lock() self.it = it.__iter__() def __iter__(self): return self def next(self): self.lock.acquire() try: return self.it.next() finally: self.lock.release() gen = [x*2 for x in [1,2,3,4]] g2 = LockedIterator(gen) print list(g2) ``` --- Locking takes 50ms on my system, Queue takes 350ms. Queue is useful when you really do have a queue; for example, if you have incoming HTTP requests and you want to queue them for processing by worker threads. (That doesn't fit in the Python iterator model--once an iterator runs out of items, it's done.) If you really do have an iterator, then LockedIterator is a faster and simpler way to make it thread safe. ``` from datetime import datetime import threading num_worker_threads = 4 class LockedIterator(object): def __init__(self, it): self.lock = threading.Lock() self.it = it.__iter__() def __iter__(self): return self def next(self): self.lock.acquire() try: return self.it.next() finally: self.lock.release() def test_locked(it): it = LockedIterator(it) def worker(): try: for i in it: pass except Exception, e: print e raise threads = [] for i in range(num_worker_threads): t = threading.Thread(target=worker) threads.append(t) t.start() for t in threads: t.join() def test_queue(it): from Queue import Queue def worker(): try: while True: item = q.get() q.task_done() except Exception, e: print e raise q = Queue() for i in range(num_worker_threads): t = threading.Thread(target=worker) t.setDaemon(True) t.start() t1 = datetime.now() for item in it: q.put(item) q.join() start_time = datetime.now() it = [x*2 for x in range(1,10000)] test_locked(it) #test_queue(it) end_time = datetime.now() took = end_time-start_time print "took %.01f" % ((took.seconds + took.microseconds/1000000.0)*1000) ```
Are Generators Threadsafe?
[ "", "python", "multithreading", "thread-safety", "generator", "" ]
To be clear, I'm not asking if/why multiple inheritance is good or bad. I've heard a lot of arguments from both sides of that debate. I'm wondering if there is any kind of design problem or scenario in C++ in which multiple inheritance is either the only way to accomplish something, or at least is the most optimal way over all other alternatives to the point that it wouldn't make sense to consider anything else. Obviously, this question doesn't apply to languages that don't support multiple inheritance.
You can't do [policy-based design](http://en.wikipedia.org/wiki/Policy-based_design) without multiple inheritance. So if policy-based design is the most elegant way to solve your problem, then that means you need multiple inheritance to solve your problem, over all other options. Multiple inheritance can be very useful if it's not misused (like everything, in any language).
In Java there are situations where you inherit from a class and also implement one or two interfaces. That is something you would resolve with multiple inheritance in C++, I think.
When might multiple inheritance be the only reasonable solution?
[ "", "c++", "multiple-inheritance", "" ]
In this code sample, is there any way to continue on the outer loop from the catch block? ``` while { // outer loop while { // inner loop try { throw; } catch { // how do I continue on the outer loop from here? continue; } } } ```
UPDATE: This question was inspiration for [my article on this subject.](http://ericlippert.com/2010/01/11/continuing-to-an-outer-loop/) Thanks for the great question! --- "continue" and "break" are nothing more than a pleasant syntax for a "goto". Apparently by giving them cute names and restricting their usages to particular control structures, they no longer draw the ire of the "all gotos are all bad all the time" crowd. If what you want to do is a continue-to-outer, you *could* simply define a label at the top of the outer loop and then "goto" that label. If you felt that doing so did not impede the comprehensibility of the code, then that might be the most expedient solution. However, I would take this as an opportunity to consider whether your control flow would benefit from some refactoring. Whenever I have conditional "break" and "continue" in nested loops, I consider refactoring. Consider: ``` successfulCandidate = null; foreach(var candidate in candidates) { foreach(var criterion in criteria) { if (!candidate.Meets(criterion)) { // TODO: no point in continuing checking criteria. // TODO: Somehow "continue" outer loop to check next candidate } } successfulCandidate = candidate; break; } if (successfulCandidate != null) // do something ``` Two refactoring techniques: First, extract the inner loop to a method: ``` foreach(var candidate in candidates) { if (MeetsCriteria(candidate, criteria)) { successfulCandidate = candidate; break; } } ``` Second, can *all* the loops be eliminated? If you are looping because you are trying to search for something, then refactor it into a query. ``` var results = from candidate in candidates where criteria.All(criterion=>candidate.Meets(criterion)) select candidate; var successfulCandidate = results.FirstOrDefault(); if (successfulCandidate != null) { do something with the candidate } ``` If there are no loops then there is no need to break or continue!
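The loop-to-query refactoring above is language-agnostic; as a rough Python rendering of the same idea (the `meets` predicate is hypothetical, for illustration only):

```python
def first_match(candidates, criteria, meets):
    """Query-style search: no nested loops, so no break/continue gymnastics."""
    return next(
        (c for c in candidates if all(meets(c, k) for k in criteria)),
        None,  # returned when no candidate meets every criterion
    )
```

Because `all` short-circuits on the first failed criterion and `next` stops at the first successful candidate, this keeps the early-exit behaviour the nested `break`/`continue` version was trying to achieve.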
``` while { // outer loop while { // inner loop try { throw; } catch { // how do I continue on the outer loop from here? goto REPEAT; } } // end of outer loop REPEAT: // some statement or ; } ``` Problem solved. (what?? Why are you all giving me that dirty look?)
Continue in nested while loops
[ "", "c#", "loops", "while-loop", "continue", "" ]
I'm looking for the rules involving passing C++ templates functions as arguments. This is supported by C++ as shown by an example here: ``` #include <iostream> void add1(int &v) { v += 1; } void add2(int &v) { v += 2; } template <void (*T)(int &)> void doOperation() { int temp = 0; T(temp); std::cout << "Result is " << temp << std::endl; } int main() { doOperation<add1>(); doOperation<add2>(); } ``` Learning about this technique is difficult, however. [Googling for "function as a template argument"](http://www.google.com/search?q=function+as+template+argument) doesn't lead to much. And the classic [C++ Templates The Complete Guide](https://rads.stackoverflow.com/amzn/click/com/0201734842) surprisingly also doesn't discuss it (at least not from my search). The questions I have are whether this is valid C++ (or just some widely supported extension). Also, is there a way to allow a functor with the same signature to be used interchangeably with explicit functions during this kind of template invocation? The following does **not** work in the above program, at least in [Visual C++](http://en.wikipedia.org/wiki/Visual_C%2B%2B#32-bit_versions), because the syntax is obviously wrong. It'd be nice to be able to switch out a function for a functor and vice versa, similar to the way you can pass a function pointer or functor to the std::sort algorithm if you want to define a custom comparison operation. ``` struct add3 { void operator() (int &v) {v += 3;} }; ... doOperation<add3>(); ``` Pointers to a web link or two, or a page in the C++ Templates book would be appreciated!
Yes, it is valid. As for making it work with functors as well, the usual solution is something like this instead: ``` template <typename F> void doOperation(F f) { int temp = 0; f(temp); std::cout << "Result is " << temp << std::endl; } ``` which can now be called as either: ``` doOperation(add2); doOperation(add3()); ``` [See it live](https://godbolt.org/z/3pwX4G) The problem with this is that it makes it tricky for the compiler to inline the call to `add2`, since all the compiler knows is that a function pointer type `void (*)(int &)` is being passed to `doOperation`. (But `add3`, being a functor, can be inlined easily. Here, the compiler knows that an object of type `add3` is passed to the function, which means that the function to call is `add3::operator()`, and not just some unknown function pointer.)
Template parameters can be either parameterized by type (typename T) or by value (int X). The "traditional" C++ way of templating a piece of code is to use a functor - that is, the code is in an object, and the object thus gives the code a unique type. When working with traditional functions, this technique doesn't work well, because a change in type doesn't indicate a *specific* function - rather it specifies only the signature of many possible functions. So: ``` template<typename OP> int do_op(int a, int b, OP op) { return op(a,b); } int add(int a, int b) { return a + b; } ... int c = do_op(4,5,add); ``` Isn't equivalent to the functor case. In this example, do\_op is instantiated for all function pointers whose signature is int X (int, int). The compiler would have to be pretty aggressive to fully inline this case. (I wouldn't rule it out though, as compiler optimization has gotten pretty advanced.) One way to tell that this code doesn't quite do what we want is: ``` int (* func_ptr)(int, int) = add; int c = do_op(4,5,func_ptr); ``` is still legal, and clearly this is not getting inlined. To get full inlining, we need to template by value, so the function is fully available in the template. ``` typedef int(*binary_int_op)(int, int); // signature for all valid template params template<binary_int_op op> int do_op(int a, int b) { return op(a,b); } int add(int a, int b) { return a + b; } ... int c = do_op<add>(4,5); ``` In this case, each instantiated version of do\_op is instantiated with a specific function already available. Thus we expect the code for do\_op to look a lot like "return a + b". (Lisp programmers, stop your smirking!) We can also confirm that this is closer to what we want because this: ``` int (* func_ptr)(int,int) = add; int c = do_op<func_ptr>(4,5); ``` will fail to compile. GCC says: "error: 'func\_ptr' cannot appear in a constant-expression." In other words, I can't fully expand do\_op because you haven't given me enough info at compile time to know what our op is. So if the second example is really fully inlining our op, and the first is not, what good is the template? What is it doing? The answer is: type coercion. This riff on the first example will work: ``` template<typename OP> int do_op(int a, int b, OP op) { return op(a,b); } float fadd(float a, float b) { return a+b; } ... int c = do_op(4,5,fadd); ``` That example will work! (I am not suggesting it is good C++ but...) What has happened is do\_op has been templated around the *signatures* of the various functions, and each separate instantiation will write different type coercion code. So the instantiated code for do\_op with fadd looks something like: ``` convert a and b from int to float. call the function ptr op with float a and float b. convert the result back to int and return it. ``` By comparison, our by-value case requires an exact match on the function arguments.
Function passed as template argument
[ "", "c++", "templates", "code-generation", "functor", "" ]
I'm trying to learn the currently accepted features of c++11 and I'm having trouble with auto and decltype. As a learning exercise I'm extending the std class list with some generic functions. ``` template<class _Ty, class _Ax = allocator<_Ty>> class FList : public std::list<_Ty, _Ax> { public: void iter(const function<void (_Ty)>& f) { for_each(begin(), end(), f); } auto map(const function<float (_Ty)>& f) -> FList<float>* { auto temp = new FList<float>(); for (auto i = begin(); i != end(); i++) temp->push_back(f(*i)); return temp; } }; auto *ints = new FList<int>(); ints->push_back(2); ints->iter([](int i) { cout << i; }); auto *floats = ints->map([](int i) { return (float)i; }); floats->iter([](float i) { cout << i; }); ``` For the member map I want the return type to be generic depending on what the passed function returns. So for the return type I could do something like this. ``` auto map(const function<float (_Ty)>& f) -> FList<decltype(f(_Ty))>* ``` This would also need to remove the float type in the function template. ``` auto map(const function<auto (_Ty)>& f) -> FList<decltype(f(_Ty))>* ``` I could use a template class but that makes the use of instances more verbose since i have to specify the return type. ``` template<class T> FList<T>* map(const function<T (_Ty)>& f) ``` To sum of my question i'm trying to figure out how to define map without using a template class and still have it generic in the type it returns.
Deriving from `std::list` or other `std::` containers is discouraged. Write your operations as free functions so they can work on any standard container via iterators. Do you mean "define map without using a template function"? You should be able to use the `result_type` member type of `std::function` to get the type it returns. Also it's not necessary for you to specify that the function is passed as a `std::function`. You could leave it open as any type, and let the compiler join everything up. You only need `std::function` for runtime polymorphism. And using new to create raw heap-allocation objects and returning them by pointer is soooo 1992! :) Your iter function is essentially the same thing as the [range-based for loop](http://en.wikipedia.org/wiki/C%2B%2B0x#Range-based_for-loop). But all that aside... do you mean something like this? ``` template <class TFunc> auto map(const TFunc &f) -> FList<decltype(f(_Ty()))>* { auto temp = new FList<decltype(f(_Ty()))>(); for (auto i = begin(); i != end(); i++) temp->push_back(f(*i)); return temp; } ``` This will match anything callable, and will figure out the return type of the function by using decltype. Note that it requires \_Ty to be default constructible. You can get around that by manufacturing an instance: ``` template <class T> T make_instance(); ``` No implementation is required because no code is generated that calls it, so the linker has nothing to complain about (thanks to dribeas for pointing this out!) So the code now becomes: ``` FList<decltype(f(make_instance<_Ty>()))>* ``` Or, literally, a list of whatever type you'd get from calling the function f with a reference to an instance of \_Ty. And as a free bonus for accepting, look up rvalue references - these will mean that you can write: ``` std::list<C> make_list_somehow() { std::list<C> result; // blah... return result; } ``` And then call it like this: ``` std::list<C> l(make_list_somehow()); ``` Because std::list will have a "move constructor" (like a copy constructor but chosen when the argument is a temporary, like here), it can steal the contents of the return value, i.e. do the same as an optimal `swap`. So there's no copying of the whole list. (This is why C++0x will make naively-written existing code run faster - many popular but ugly performance tricks will become obsolete). And you can get the same kind of thing for free for ANY existing class of your own, without having to write a correct move constructor, by using `unique_ptr`. ``` std::unique_ptr<MyThing> myThing(make_my_thing_somehow()); ```
You can't use auto in function arguments where you want the types of the arguments to be deduced. You use templates for that. Have a look at: <http://thenewcpp.wordpress.com/2011/10/18/the-keyword-auto/> and <http://thenewcpp.wordpress.com/2011/10/25/decltype-and-declval/>. They both explain how to use auto and decltype, and should give you enough information about how they are used. In particular, the make\_instance approach in another answer can be done better with declval.
Using auto and decltype in C++11
[ "", "c++", "templates", "c++11", "decltype", "auto", "" ]
Why am I getting an error "Attribute value must be constant". Isn't **null** constant??? ``` @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface SomeInterface { Class<? extends Foo> bar() default null;// this doesn't compile } ```
I don't know why, but the [JLS](http://java.sun.com/docs/books/jls/third_edition/html/interfaces.html) is very clear: ``` Discussion Note that null is not a legal element value for any element type. ``` And the definition of a default element is: ``` DefaultValue: default ElementValue ``` Unfortunately I keep finding that the new language features (Enums and now Annotations) have very unhelpful compiler error messages when you don't meet the language spec. EDIT: A little googling found the [following](https://web.archive.org/web/20090804142546/http://groups.csail.mit.edu/pag/jsr308/specification/java-annotation-design.html#htoc48) in the JSR-308, where they argue for allowing nulls in this situation: > We note some possible objections to the proposal. > > The proposal doesn't make anything possible that was not possible before. > > The programmer-defined special value provides better documentation than null, which might mean “none”, “uninitialized”, null itself, etc. > > The proposal is more error-prone. It's much easier to forget checking against null than to forget checking for an explicit value. > > The proposal may make the standard idiom more verbose. Currently only the users of an annotation need to check for its special values. With the proposal, many tools that process annotations will have to check whether a field's value is null lest they throw a null pointer exception. I think only the last two points are relevant to "why not do it in the first place." The last point certainly brings up a good point - an annotation processor never has to be concerned that it will get a null on an annotation value. I tend to see that kind of check as more the job of annotation processors and other such framework code, done to make the developer's code clearer rather than the other way around, but it would certainly make it hard to justify changing it.
Try this ``` @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface SomeInterface { Class bar() default void.class; } ``` It does not require a new class, and `void` is already a keyword in Java that means "nothing".
Error setting a default null value for an annotation's field
[ "", "java", "annotations", "default", "" ]
I'm using PHP's "simplexml\_load\_file" to get some data from Flickr. My goal is to get the photo url. I'm able to get the following value (assigned to PHP variable): ``` <p><a href="http://www.flickr.com/people/19725893@N00/">codewrecker</a> posted a photo:</p> <p><a href="http://www.flickr.com/photos/19725893@N00/2302759205/" title="Santa Monica Pier"><img src="http://farm3.static.flickr.com/2298/2302759205_4fb109f367_m.jpg" width="180" height="240" alt="Santa Monica Pier" /></a></p> ``` **How can I extract just this part of it?** ``` http://farm3.static.flickr.com/2298/2302759205_4fb109f367_m.jpg ``` Just in case it helps, here's the code I'm working with: ``` <?php $xml = simplexml_load_file("http://api.flickr.com/services/feeds/photos_public.gne?id=19725893@N00&lang=en-us&format=xml&tags=carousel"); foreach($xml->entry as $child) { $flickr_content = $child->content; // gets html including img url // how can I get the img url from "$flickr_content"??? } ?> ```
You can probably get away with using a regular expression for this, assuming that the way the HTML is formed is pretty much going to stay the same, e.g.: ``` if (preg_match('/<img src="([^"]+)"/i', $string, $matches)) { $imageUrl = $matches[1]; } ``` This is fairly un-robust, and if the HTML is going to change (e.g. the order of parameters in the `<img>` tag, risk of malformed HTML etc.), you would be better off using an HTML parser.
It's not solving your problem (and is probably total overkill), but worth mentioning because I've used the library on 2 projects and it's well written. phpFlickr - <http://phpflickr.com/>
How to Extract the URL from A String in PHP?
[ "", "php", "simplexml", "flickr", "" ]
I always have trouble with Java layouts, but the main thing bugging me now is that when the content changes, in particular changes it sizes, it's not laid out again properly. Take the example below: ``` package layouttest; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JTextArea; public class LayoutTestStart extends JFrame implements ActionListener { static JButton button= new JButton("Expand"); static JTextArea f = new JTextArea("A medium sized text"); static LayoutTestStart lst; public static void main(String[] args) { //Schedule a job for the event-dispatching thread: //creating and showing this application's GUI. javax.swing.SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGUI(); } }); } public static void createAndShowGUI() { lst = new LayoutTestStart(); lst.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JPanel all = new JPanel(); button.addActionListener(lst); all.add(button); all.add(f); lst.getContentPane().add(all); lst.setVisible(true); lst.pack(); } @Override public void actionPerformed(ActionEvent e) { f.setText(f.getText()+"\n"+f.getText()); // this doesn't work f.invalidate(); // this does but it's cheating // lst.pack(); } } ``` The only way I get this to work is to call lst.pack(), but that's cheating since then each component should have a reference to it's JFrame, which gets messy when a component is a seperate class. What's the preferred way to let this example work?
Well, generally, users don't like the size of the frame changing every time they hit enter. The frame should be designed to accommodate growth. So you would define the text area to have a given number of rows and columns. Then you add the text area to a scroll pane and add the scrollpane to the frame. Then as the data changes, scrollbars will appear or disappear as required. If however you truly need to have a dynamically changing frame then you should use pack(). You can use: ``` SwingUtilities.windowForComponent(...) ``` where the Component is the source component of the ActionEvent, to find the Window to pack().
`revalidate` instead of `invalidate`. `invalidate` just marks the container as needing a layout. `revalidate` does that and then schedules a `validate`. BTW: I suggest: avoid extending from `JFrame` and other components; avoid multiple inheritance of interface and avoid (mutable) statics.
Letting a Java Swing program layout itself again
[ "", "java", "swing", "layout", "" ]
I was wondering if anyone had any suggestions for minimizing a function, f(x,y), where x and y are integers. I have researched lots of minimization and optimization techniques, like BFGS and others out of GSL, and things out of Numerical Recipes. So far, I have tried implenting a couple of different schemes. The first works by picking the direction of largest descent f(x+1,y),f(x-1,y),f(x,y+1),f(x,y-1), and follow that direction with line minimization. I have also tried using a downhill simplex (Nelder-Mead) method. Both methods get stuck far away from a minimum. They both appear to work on simpler functions, like finding the minimum of a paraboloid, but I think that both, and especially the former, are designed for functions where x and y are real-valued (doubles). One more problem is that I need to call f(x,y) as few times as possible. It talks to external hardware, and takes a couple of seconds for each call. Any ideas for this would be greatly appreciated. Here's an example of the error function. Sorry I didn't post this before. This function takes a couple of seconds to evaluate. Also, the information we query from the device does not add to the error if it is below our desired value, only if it is above ``` double Error(x,y) { SetDeviceParams(x,y); double a = QueryParamA(); double b = QueryParamB(); double c = QueryParamC(); double _fReturnable = 0; if(a>=A_desired) { _fReturnable+=(A_desired-a)*(A_desired-a); } if(b>=B_desired) { _fReturnable+=(B_desired-b)*(B_desired-b); } if(c>=C_desired) { _fReturnable+=(C_desired-c)*(C_desired-c); } return Math.sqrt(_fReturnable) } ```
How do you define f(x,y)? Minimisation is a hard problem, depending on the complexity of your function. Genetic Algorithms could be a good candidate. Resources: [Genetic Algorithms in Search, Optimization, and Machine Learning](https://rads.stackoverflow.com/amzn/click/com/0201157675) [Implementing a Genetic Algorithm in C#](http://www.c-sharpcorner.com/UploadFile/mgold/GeneticAlgorithm12032005044205AM/GeneticAlgorithm.aspx?ArticleID=840446e9-cb2b-4602-a769-73d6e1d9277b) [Simple C# GA](http://www.codeproject.com/kb/recipes/btl_ga.aspx)
There are many, many solutions here. In fact, there are entire books and academic disciplines based on the subject. I am reading an excellent one right now: [How to Solve It: Modern Heuristics](https://rads.stackoverflow.com/amzn/click/com/3540660615). There is no one solution that is correct - different solutions have different advantages based on specific knowledge of your function. It has even been proven that there is no one heuristic that performs the best at all optimization tasks. If you know that your function is quadratic, you can use [Gauss-Newton](http://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm) to find the minimum in one step. A genetic algorithm can be a great general-purpose tool, or you can try simulated annealing, which is less complicated.
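Since the question is language-agnostic at heart, here is a hedged Python sketch (not from either answer; all names are illustrative) of integer coordinate descent with memoisation. The cache guarantees each (x, y) point triggers at most one expensive call to f, which matters when every evaluation talks to hardware. Like the asker's own schemes, it can stall in a local minimum on a multimodal surface, so random restarts or simulated annealing may still be needed:

```python
def minimize_int(f, start, max_steps=1000):
    """Greedy descent over the integer grid, evaluating f(x, y) at most once per point."""
    cache = {}

    def eval_f(p):
        if p not in cache:          # each (x, y) is measured at most once
            cache[p] = f(*p)
        return cache[p]

    current = start
    for _ in range(max_steps):
        x, y = current
        neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        best = min(neighbours + [current], key=eval_f)
        if best == current:         # local minimum on the integer grid
            break
        current = best
    return current, eval_f(current)
```

On a convex error surface like the one in the question (a sum of squared overshoots), this walks straight to the minimum with a handful of evaluations per step.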
Minimization of f(x,y) where x and y are integers
[ "", "c#", "algorithm", "numerical-analysis", "minimization", "" ]
I want to add a timer rather than a countdown which automatically starts when the form loads. Starting time should be 45 minutes and once it ends, i.e. on reaching 0 minutes, the form should terminate with a message displayed. How can I do this? Language: preferably C#.
Bit more detail: ``` private void Form1_Load(object sender, EventArgs e) { Timer MyTimer = new Timer(); MyTimer.Interval = (45 * 60 * 1000); // 45 mins MyTimer.Tick += new EventHandler(MyTimer_Tick); MyTimer.Start(); } private void MyTimer_Tick(object sender, EventArgs e) { MessageBox.Show("The form will now be closed.", "Time Elapsed"); this.Close(); } ```
Something like this in your form main. Double click the form in the visual editor to create the form load event. ``` Timer Clock = new Timer(); Clock.Interval = 2700000; // 45 minutes = 2,700,000 ms Clock.Tick += new EventHandler(Timer_Tick); Clock.Start(); ``` Then add an event handler to do something when the timer fires. ``` public void Timer_Tick(object sender, EventArgs eArgs) { if (sender == Clock) { // do something here } } ```
Add timer to a Windows Forms application
[ "", "c#", "winforms", "" ]
> **Possible Duplicates:** > [Why should we typedef a struct so often in C?](https://stackoverflow.com/questions/252780/why-should-we-typedef-a-struct-so-often-in-c) > [Difference between ‘struct’ and ‘typedef struct’ in C++?](https://stackoverflow.com/questions/612328/difference-between-struct-and-typedef-struct-in-c) What is the difference between the following type declarations? ``` struct Person { int age; }; typedef struct { int age; }Person; ``` I understand that ``` struct { int age; }Person; ``` creates an instance of an unnamed struct called Person, where ``` struct Person { int age; }; ``` declares a type called Person, but not an instance. But I still don't get what the typedef does.
I think that's the same as in C: typedef creates an alias of a type... in your first case, the name of the type is "`struct Person`", while in the second case it is just "`Person`". Usually, when you have to declare self-referencing structures (like lists), you use both, because the typedef has no effect until the structure is defined (unless you make a forward declaration), for example: ``` typedef struct node { void *data; struct node *next; } TNode, *PTNode; ``` so now you can declare variables of the same type in the following ways: ``` struct node *node1; TNode *node2; PTNode node3; ``` the three variables above are of the same type, pointers to the node structure.
In C, structs live in their own name space, so you have to write `struct Person` if you want to use the struct's type name. The `typedef` eliminates the need for that prefix. In C++, structs live in the same name space as everything else, so there's no need to do this. It's usually seen as an unnecessary C-ism.
What is the difference when using typedef when declaring a struct?
[ "", "c++", "struct", "typedef", "" ]
In a graphical program I'm writing using pygame I use a tuple representing a coordinate like this: (50, 50). Sometimes, I call a function which returns another tuple such as (3, -5), which represents the change in coordinate. What is the best way to add the change value to the coordinate value. It would be nice if I could do something like coordinate += change, but it appears that would simply concatenate the two tuples to something like (50, 50, 3, -5). Rather than adding the 1st value to the 1st value and the 2nd to the 2nd, and returning a resulting tuple. Until now I've been using this rather tiresome method: coord = (coord[0] + change[0], coord[1] + change[1]) What is a better, more concise method to add together the values of two tuples of the same length. It seems especially important to know how to do it if the tuples are of an arbitrary length or a particularly long length that would make the previous method even more tiresome.
Well, one way would be ``` coord = tuple(sum(x) for x in zip(coord, change)) ``` If you are doing a lot of math, you may want to investigate using [NumPy](https://numpy.org/), which has much more powerful array support and better performance.
List comprehension is probably more readable, but here's another way: ``` >>> a = (1,2) >>> b = (3,4) >>> tuple(map(sum,zip(a,b))) (4, 6) ```
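For completeness, a third compact variant, assuming only the standard library's `operator` module (Python 3 shown here); like the answers above it works for tuples of any matching length:

```python
import operator

coord = (50, 50)
change = (3, -5)
# element-wise addition without writing out each index by hand
moved = tuple(map(operator.add, coord, change))
```

`operator.add` avoids the intermediate pair tuples that `zip` plus `sum` creates, which can matter a little for very long tuples.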
Adding Values From Tuples of Same Length
[ "", "python", "" ]
Does a C++ shared library have its own memory space? Or does it share the caller process' one? I have a shared library which contains some classes and wrapper functions. One of this wrapper function is kinda: `libXXX_construct()` which initializes an object and returns the pointer to the said object. Once I use `libXXX_construct()` in a caller program where is the object placed?Is it in the "caller" memory space or is it in the library's memory space?
A linked instance of the shared library shares the memory space of the instance of the executable that linked to it, directly or indirectly. This is true for both Windows and the UN\*X-like operating systems. Note that this means that static variables in shared libraries are not a way of inter-process communication (something a lot of people think).
All the shared libraries share the virtual memory space of their **process**. (Including the main executable itself)
Shared libraries memory space
[ "", "c++", "memory", "memory-management", "shared-libraries", "" ]
With this code: **test.py** ``` import sys import codecs sys.stdout = codecs.getwriter('utf-16')(sys.stdout) print "test1" print "test2" ``` Then I run it as: ``` test.py > test.txt ``` In Python 2.6 on Windows 2000, I'm finding that the newline characters are being output as the byte sequence **`\x0D\x0A\x00`** which of course is wrong for UTF-16. Am I missing something, or is this a bug?
Try this: ``` import sys import codecs if sys.platform == "win32": import os, msvcrt msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) class CRLFWrapper(object): def __init__(self, output): self.output = output def write(self, s): self.output.write(s.replace("\n", "\r\n")) def __getattr__(self, key): return getattr(self.output, key) sys.stdout = CRLFWrapper(codecs.getwriter('utf-16')(sys.stdout)) print "test1" print "test2" ```
The newline translation is happening inside the stdout file. You're writing "test1\n" to sys.stdout (a StreamWriter). StreamWriter translates this to "t\x00e\x00s\x00t\x001\x00\n\x00", and sends it to the real file, the original sys.stdout. That file doesn't know that you've converted the data to UTF-16; all it knows is that any \n values in the output stream need to be converted to \x0D\x0A, which results in the output you're seeing.
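On Python 3 the same pitfall can be avoided by wrapping the byte stream yourself with `newline=''`, which disables the `\n` → platform-newline translation before the bytes reach the underlying file. A self-contained sketch against an in-memory buffer (illustrative, not part of the original answers; for real stdout you would wrap `sys.stdout.buffer` the same way):

```python
import io

buf = io.BytesIO()
# newline='' stops the text layer from rewriting '\n', so no stray CR bytes
# can land inside the UTF-16 stream
out = io.TextIOWrapper(buf, encoding='utf-16-le', newline='')
out.write("test1\ntest2\n")
out.flush()
raw = buf.getvalue()
```

Each code unit is two bytes, so '\n' comes out as the clean pair `0A 00` rather than the broken `0D 0A 00` sequence the question observed.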
Bug with Python UTF-16 output and Windows line endings?
[ "", "python", "windows", "utf-16", "" ]
I am having a problem placing an instance of my reference-counting Pointer`<Type>` class into my Array class. Using the debugger, it seems that the constructor is never called (which messes up the reference-count and causes a segfault down the line)! My push\_back function is: ``` void push_back(const T& element) { if (length >= max) reallocate(max > 0 ? max * 2 : 1); new (&data[length]) T(element); ++length; } ``` The reference-count is the same before new is called as after. I'm very sure this is the problem, but I can't figure out why the constructor wouldn't be called. Additionally Pointer::Pointer(...) compiles whether it takes a Pointer`<T>`& or a const Pointer`<T>`& (huh?), and has the problem regardless as well! Maybe there are some details on placement new I am not taking into account. If anyone has some thoughts, they'd be much appreciated! edit: [as requested, a relevant excerpt from Pointer] ``` // ... private: T* p; public: //! Constructor Pointer() : p(0) { } //! Copy Constructor template<class X> Pointer(Pointer<X>& other) : p(other.getPointer()) { if (p) p->incrementRef(); } //! Constructor (sets and increments p) Pointer(T* p) : p(p) { if (p) p->incrementRef(); } //! Destructor (decrements p) ~Pointer() { if (p) p->decrementRef(); } // ... ``` I've also implemented operator = for Pointer`<T>`& and `T*`, as well as operator -> and operator `T*`
Your comment and your code are out of sync: ``` //! Copy Constructor template<class X> Pointer(Pointer<X>& other) ``` A constructor generated from a class template is not a *copy constructor* (there's a footnote in 12.8 [class.copy] that clarifies this), so won't prevent the compiler from generating a *copy constructor* for you. This generated constructor will be a better match for a standard copy as non-template functions are preferred to template functions in overload resolution. It appears that you need to write an explicit *copy constructor* in your pointer class to get the desired effect.
According to the docs, the constructor should be called... A few things you can check. To test the pointer: ``` Pointer<int> p1(new int); Pointer<int> p2(p1); // Does this call constructor properly? ``` To test the array: ``` Array<std::string> array; std::string str("bla"); array.push_back(str); // Does this call string's constructor ``` That's what fails, right? ``` Array<Pointer<int> > array; Pointer<int> p1(new int); array.push_back(p1); ``` If all else fails, you can always do this to make sure the copy constructor or operator= is invoked: ``` T* t = new (&data[length]) T(); *t = element; ```
Having a problem with placement-new!
[ "", "c++", "constructor", "new-operator", "" ]
I know Crystal Reports very well and have worked with it for the last 3 years. I wanted to learn SQL Server Reporting Services, so which is the best resource to learn Reporting Services quickly?
I'm using this one to learn Reporting Services: <http://www.apress.com/book/view/1590599926> A good book to learn and practice.
I learned almost all of it by practice. Of course you should read a book as well, but the best way is to "play" with SSRS. Don't forget to install the AdventureWorks sample database, since almost all of the official SSRS examples use it; the examples will teach you what you can do with the software. You can also try to "convert" some big reports from CR to SSRS in order to learn the differences between the platforms (I did it that way). With 3 years of CR experience, SSRS will be a piece of cake :)
Reporting Services in SQL Server
[ "", "sql", "sql-server-2005", "reporting-services", "" ]
How can I play a sound based on waveform data that my .NET program is generating from user input and mathematical functions? By "waveform data" I mean SPL (sound pressure level) values in a fixed interval time-series (probably 44.1 kHz). I presume that this requires some kind of streaming buffer arrangement. Note, that this has to be live/real-time, so just creating a .wav file and then playing that will not be sufficient. VB.NET is preferred, but C# is acceptable also. Just to clarify: What I am looking for is a simple working code example.
You can do this using [NAudio](https://github.com/naudio/NAudio). You create a stream that derives from WaveStream and in its overriden Read method, you return your samples which you can generate on the fly. You have control over the size of the buffers used by the soundcard which gives you control over the latency.
[How to play from an array of doubles](http://alvas.net/alvas.audio,tips.aspx#tip32) ``` PlayerEx pl = new PlayerEx(); private static void PlayArray(PlayerEx pl) { double fs = 8000; // sample freq double freq = 1000; // desired tone short[] mySound = new short[4000]; for (int i = 0; i < 4000; i++) { double t = (double)i / fs; // current time mySound[i] = (short)(Math.Cos(t * freq) * (short.MaxValue)); } IntPtr format = AudioCompressionManager.GetPcmFormat(1, 16, (int)fs); pl.OpenPlayer(format); byte[] mySoundByte = new byte[mySound.Length * 2]; Buffer.BlockCopy(mySound, 0, mySoundByte, 0, mySoundByte.Length); pl.AddData(mySoundByte); pl.StartPlay(); } ```
Play sound in .NET using generated waveform data
[ "", "c#", ".net", "vb.net", "audio", "waveform", "" ]
I have the following array of integers: ``` int[] array = new int[7] { 1, 3, 5, 2, 8, 6, 4 }; ``` I wrote the following code to get the top 3 elements in the array: ``` var topThree = (from i in array orderby i descending select i).Take(3); ``` When I check what's inside the `topThree`, I find: > {System.Linq.Enumerable.TakeIterator} > count:0 What did I do wrong and how can I correct my code?
How did you "check what's inside the topThree"? The easiest way to do so is to print them out: ``` using System; using System.Linq; public class Test { static void Main() { int[] array = new int[7] { 1, 3, 5, 2, 8, 6, 4 }; var topThree = (from i in array orderby i descending select i).Take(3); foreach (var x in topThree) { Console.WriteLine(x); } } } ``` Looks okay to me... There are potentially more efficient ways of finding the top N values than sorting, but this will certainly work. You might want to consider using dot notation for a query which only does one thing: ``` var topThree = array.OrderByDescending(i => i) .Take(3); ```
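For comparison, here is the same top-N selection sketched in Python (not part of the original answer; the variable names are my own). `heapq.nlargest` echoes the point above that a full sort is not the most efficient way to find the top N values:

```python
import heapq

array = [1, 3, 5, 2, 8, 6, 4]

# Full descending sort, then slice -- O(n log n), the analogue of
# OrderByDescending(i => i).Take(3)
top_three_sorted = sorted(array, reverse=True)[:3]

# Partial selection -- O(n log k), cheaper when k is much smaller than n
top_three_heap = heapq.nlargest(3, array)

print(top_three_sorted)  # [8, 6, 5]
print(top_three_heap)    # [8, 6, 5]
```

Both give the same three values; the heap-based version only keeps k candidates around while scanning the input.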
Your code seems fine to me; maybe you want to get the result back as another array? ``` int[] topThree = array.OrderByDescending(i => i) .Take(3) .ToArray(); ```
How to get the top 3 elements in an int array using LINQ?
[ "", "c#", "linq", ".net-3.5", "" ]
Visual Studio will automatically create using statements for you whenever you create a new page or project. Some of these you will never use. Visual Studio has the useful feature to "remove unused usings". I wonder if there is any negative effect on program performance if the using statements which are never accessed, remain mentioned at the top of the file.
An unused using has no impact to the runtime performance of your application. It can affect the performance of the IDE and the overall compilation phase. The reason why is that it creates an additional namespace in which name resolution must occur. However these tend to be minor and shouldn't have a noticeable impact on your IDE experience for most scenarios. It can also affect the performance of evaluating expressions in the debugger for the same reasons.
No, it's just a compile-time/coding style thing. .NET binaries use fully qualified names under the hood.
How is performance affected by an unused using directive?
[ "", "c#", ".net", "visual-studio", "using", "" ]
I have a situation wherein I need to reference a C# library in my ColdFusion code. Any suggestions or links would be really useful. Cheers
ColdFusion 8+ supports using .NET classes. Here is an [example](http://www.coldfusioncookbook.com/entries/How-can-I-accesscall-functions-in-Net-classes.html): ``` <cfobject type = ".NET" name = "myInstance" class = "myDotNetClass" assembly = "C:/Net/Assemblies/dotNetClass.dll"> <!--- Call a method---> <cfset myVar = myInstance.myDotNetClass(5)> ```
You'll have to make your .Net library COM visible first. In Visual Studio 2008 you can do this by going into your project's properties, Selecting the Application tab, select Assembly Information, and select the checkbox to make assembly COM visible. Make sure that your class is public and not static because I found that the calling programs can't see the static classes. This may not be what you're asking but hope it helps. If this is a step in what you need then I would suggest searching for using C# .Net libraries in MS Access for info on making your assemblies COM accessible. Although MS Access isn't what you're using there's a lot of info on the topic of assemblies to COM.
How to reference C# library in ColdFusion?
[ "", "c#", "coldfusion", "" ]
I am using the StreamWriter object to write to either a file that is created by the constructor or already exists. If the file exists then it appends data. If not, then it should create a file and then also append data. The problem is that when the file needs to be created, the StreamWriter constructor creates the file but does not write any data to the file. ``` bool fileExists = File.Exists(filePath); using (StreamWriter writer = new StreamWriter(filePath, true)) { if (!fileExists) { writer.WriteLine("start"); } writer.WriteLine("data"); } ``` EDIT: Thanks for the answers. The using block takes care of closing the writer. As for other people saying it works for them, is there any information I can give you to further diagnose the problem? The file is located across a network. Could that be a potential problem? Intermittently I receive the errors, "Could not find a part of the path ..." and "The specified network name is no longer available."
Alright, so I figured it out. My local machine was having problems intermittently accessing the file over the network. I uploaded the code to the server and ran it there without any problems. I really appreciate all the help. I'm sorry the solution wasn't very exciting.
The code ran fine on my computer. Can we know what the variable filePath contains? Perhaps you were looking at the wrong file... UPDATE: Network problem? Maybe someone was doing something on the other side of the network. Try writing to a local file. If it works, try writing to a remote file on another location.
StreamWriter - Not appending to created file
[ "", "c#", "file-io", "streamwriter", "" ]
I have an abstract class in a library. I'm trying to make it as easy as possible to properly implement a derivation of this class. The trouble is that I need to initialize the object in a three-step process: grab a file, do a few intermediate steps, and then work with the file. The first and last steps are particular to the derived class. Here's a stripped-down example. ``` abstract class Base { // grabs a resource file specified by the implementing class protected abstract void InitializationStep1(); // performs some simple-but-subtle boilerplate stuff private void InitializationStep2() { return; } // works with the resource file protected abstract void InitializationStep3(); protected Base() { InitializationStep1(); InitializationStep2(); InitializationStep3(); } } ``` The trouble, of course, is the virtual method call in the constructor. I'm afraid that the consumer of the library will find themselves constrained when using the class if they can't count on the derived class being fully initialized. I could pull the logic out of the constructor into a protected `Initialize()` method, but then the implementer might call `Step1()` and `Step3()` directly instead of calling `Initialize()`. The crux of the issue is that there would be no obvious error if `Step2()` is skipped; just terrible performance in certain situations. I feel like either way there is a serious and non-obvious "gotcha" that future users of the library will have to work around. Is there some other design I should be using to achieve this kind of initialization? I can provide more details if necessary; I was just trying to provide the simplest example that expressed the problem.
That's way too much to place in the constructor of any class, much less of a base class. I suggest you factor that out into a separate `Initialize` method.
I would consider creating an [abstract factory](http://en.wikipedia.org/wiki/Abstract_factory_pattern) that is responsible for instantiating and initializing instances of your derived classes using a [template method](http://en.wikipedia.org/wiki/Template_method) for initialization. As an example: ``` public abstract class Widget { protected abstract void InitializeStep1(); protected abstract void InitializeStep2(); protected abstract void InitializeStep3(); protected internal void Initialize() { InitializeStep1(); InitializeStep2(); InitializeStep3(); } protected Widget() { } } public static class WidgetFactory { public static T CreateWidget<T>() where T : Widget, new() { T newWidget = new T(); newWidget.Initialize(); return newWidget; } } // consumer code... var someWidget = WidgetFactory.CreateWidget<DerivedWidget>(); ``` This factory code could be improved dramatically - especially if you are willing to use an IoC container to handle this responsibility... If you don't have control over the derived classes, you may not be able to prevent them from offering a public constructor that can be called - but at least you can establish a usage pattern that consumers could adhere to. It's not always possible to prevent users of your classes from shooting themselves in the foot - but, you can provide infrastructure to help consumers use your code correctly when they familiarize themselves with the design.
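For what it's worth, here is the shape of that factory-plus-template-method idea sketched in Python rather than C# (a rough sketch only; all names are invented for illustration). The point is that construction goes through one entry point, so the boilerplate step can never be skipped:

```python
from abc import ABC, abstractmethod

class Widget(ABC):
    @abstractmethod
    def _initialize_step1(self):  # grab the resource file
        ...

    def _initialize_step2(self):  # the simple-but-subtle boilerplate
        self.log.append("step2")

    @abstractmethod
    def _initialize_step3(self):  # work with the resource file
        ...

    @classmethod
    def create(cls):
        """The only supported way to build a widget: runs all three
        steps, in order, after the object is fully constructed."""
        widget = cls()
        widget.log = []
        widget._initialize_step1()
        widget._initialize_step2()
        widget._initialize_step3()
        return widget

class DerivedWidget(Widget):
    def _initialize_step1(self): self.log.append("step1")
    def _initialize_step3(self): self.log.append("step3")

w = DerivedWidget.create()
print(w.log)  # ['step1', 'step2', 'step3']
```

Since the steps run after `__init__` returns, the object is fully constructed before any overridable method is called, which sidesteps the virtual-call-in-constructor problem entirely.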
How to Avoid Calling Virtual Methods from a Base Constructor
[ "", "c#", "inheritance", "constructor", "virtual-functions", "" ]
I am currently refactoring some code which performs Windows Impersonation for testability and have run into a bit of a roadblock. This is the bit of code that I am having trouble with: ``` ... if (LogonUserA(user, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref token) > 0) { if (DuplicateToken(token, 2, ref tokenDuplicate)) { var tempWindowsIdentity = new System.Security.Principal.WindowsIdentity(tokenDuplicate); var impersonationContext = tempWindowsIdentity.Impersonate(); ... } ... } ``` How do I mock the behaviour of instantiating a WindowsIdentity object out? I have thought of various alternatives: * Pass in a factory class that would create the instance and mock the behaviour of that * Pass in a delegate that handles the creation of the instance (i.e. like a C++ function pointer) None of these alternatives seem particularly good to me because I'm afraid they would blur the intent of the method as the method signature would look something like the following: ``` public bool Impersonate(string user, string password, string domain, Factory factory) ``` or ``` public bool Impersonate(string user, string password, string domain, delegate WinIDCreator) ``` Because the intent of the method is to impersonate a particular user, it doesn't make sense to me that either a Factory class or Delegate should be provided to it. I do want to isolate and mock this behaviour out however as I am uncomfortable with the thought of a new WindowsIdentity instance being created every time I run a bunch of unit tests. Any ideas or comments?
I think you are on the right track with the Factory idea, but I'd inject the Factory in the class constructor, not as a method parameter. Your default constructor can create an instance of the default Factory if one is not supplied. You are also going to have a few problems -- like needing real login ids and passwords in your unit tests -- if you don't factor out the LogonUserA and DuplicateToken methods as well. I'd suggest a thin wrapper around this implementing an interface that you can also inject in the constructor. Below are some of the highlights to show you how to start structuring it. ``` public interface ILogonHelper { bool LogonUser( string user, string domain, string password, ref int token ); bool DuplicateToken( int token, ref int duplicateToken ); } public class MyClass { public MyClass( ILogonHelper logonHelper, IIdentityFactory factory ) { this.LogonHelper = logonHelper ?? new DefaultLogonHelper(); this.IdentityFactory = factory ?? new DefaultIdentityFactory(); } ... if (this.LogonHelper.LogonUser(user, domain, password, ref token)) { if (this.LogonHelper.DuplicateToken(token, ref tokenDuplicate)) { var tempWindowsIdentity = this.IdentityFactory.CreateIdentity(tokenDuplicate); var impersonationContext = tempWindowsIdentity.Impersonate(); ... } ... } ```
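To illustrate the injection idea without the Win32 details, here is a minimal Python sketch (all class and method names here are invented; this is not the questioner's API). A hand-rolled fake factory stands in for the mock:

```python
class FakeIdentity:
    """Test double standing in for WindowsIdentity."""
    def __init__(self, token):
        self.token = token
        self.impersonated = False

    def impersonate(self):
        self.impersonated = True
        return self

class FakeIdentityFactory:
    """Test double standing in for the real identity factory."""
    def create_identity(self, token):
        return FakeIdentity(token)

class Impersonator:
    # The factory is a constructor dependency: production code would pass
    # a factory that wraps the real OS calls, tests pass a fake.
    def __init__(self, identity_factory):
        self._factory = identity_factory

    def impersonate(self, token):
        identity = self._factory.create_identity(token)
        identity.impersonate()
        return identity

impersonator = Impersonator(identity_factory=FakeIdentityFactory())
result = impersonator.impersonate(token=42)
print(result.impersonated)  # True
```

The test never touches LogonUser or a real WindowsIdentity, yet it still verifies that `Impersonate` asks the factory for an identity and impersonates it.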
I'm a Java dev but ... Why not make the "Factory" an attribute of the class that contains the Impersonate method? The "Factory" attribute, perhaps "windowIdentityFactory", could be set in the constructor or via a setter method (using some type of dependency injection). In the test, you would supply the class with a mock Factory (as you've suggested). In production, you give it the real deal. ``` ... if (LogonUserA(user, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref token) > 0) { if (DuplicateToken(token, 2, ref tokenDuplicate)) { var tempWindowsIdentity = windowIdentityFactory.newInstance(tokenDuplicate); var impersonationContext = tempWindowsIdentity.Impersonate(); ... } ... } ```
Mocking the instantiation of an object for a unit test
[ "", "c#", "unit-testing", "rhino-mocks", "" ]
I have some simple code in a class: ``` private ITemplate _content1 = null; [TemplateContainer(typeof(ContentContainer))] public ITemplate Content1 { get { return _content1; } set { _content1 = value; } } ``` I need about 15 of these content containers. I could just copy and paste this block 15 times and change the number, but there must be a better way. Can anyone suggest a cleaner way to do this?
Try this instead: ``` [TemplateContainer(typeof(ContentContainer))] public ITemplate Content1 { get; set; } ```
There is a property (`prop`) snippet (snippets are native to Visual Studio). Either modify the snippet (it is a simple XML file) or create a new one for your task. ReSharper has an easier way, called code templates. Or generate the properties you need with a T4 script, but that is probably overkill.
C# code generation / removing redundant code
[ "", "c#", "templates", "code-generation", "" ]
When manipulating controls on a .NET Windows form, which of the following is best practice and why? ``` //Hide control from user and stop control from being usable oControl.Enabled = false; oControl.Visible = false; ``` or ``` //Hide control from user and stop control from being usable oControl.Visible = false; ``` I've been using the first case and always disabling a control when hiding it, but I've been told that this is **wrong** and that I should only be hiding it. I seem to vaguely remember reading somewhere that if you don't specifically disable a control, it can continue to interact with the user. Any enlightenment would be appreciated.
`Enabled` refers to whether or not the user can interact with the control (i.e. if the control is grayed out or not). `Visible` refers to whether or not the control is displayed (usually if this is false the control is not rendered at all, however not all the time apparently - see the comments of this post). If the control is not rendered, then the value of the enabled property will have no impact.
Whether you will need to set `Enabled = false` when hiding a control depends on the control in question, and what kind of interaction it offers. For many controls (such as a `Button` or a `CheckBox`), setting `Visible = false` will suffice to prevent any interaction between the user and the control. But some controls (it seems to be especially those offering a Shortcut key property), will still offer user interaction when not visible. For instance the `ToolStripMenuItem` (and the "older" `MenuItem`) will still have their `Click` event invoked when the shortcut key is pressed, regardless of `Visible` being `true` or `false`. Setting `Enabled = false` will prevent invoking the `Click` event through shortcut keys in those cases. From that point of view, I would *not* advice against setting `Enabled = false` when hiding a control in a WinForms application.
Should a control be disabled and hidden or just hidden?
[ "", "c#", ".net", "winforms", "controls", "" ]
Here's the current code I am using. ``` <? header("Content-type: image/png"); // example: <img src="gradient.php?height=600&width=100&start=00FF00&end=ff0000" /> $height=100; $width=1; $start='000000'; $end='FFFFFF'; extract($_REQUEST); // overwrite using vars from url $start_r = hexdec(substr($start,0,2)); $start_g = hexdec(substr($start,2,2)); $start_b = hexdec(substr($start,4,2)); $end_r = hexdec(substr($end,0,2)); $end_g = hexdec(substr($end,2,2)); $end_b = hexdec(substr($end,4,2)); $image = @imagecreate($width,$height); for($y=0;$y<$height;$y++){ for($x=0;$x<$width;$x++){ if($start_r==$end_r) $new_r = $start_r; $difference = $start_r-$end_r; $new_r = $start_r-intval(($difference/$height)*$y); if($start_g==$end_g) $new_g = $start_g; $difference = $start_g-$end_g; $new_g = $start_g-intval(($difference/$height)*$y); if($start_b==$end_b) $new_b = $start_b; $difference = $start_b - $end_b; $new_b = $start_b-intval(($difference/$height)*$y); $row_color = imagecolorresolve($image,$new_r,$new_g,$new_b); imagesetpixel($image,$x,$y,$row_color); } } imagepng($image); imagedestroy($image); ?> ``` The above code works perfect in making vertical (top to bottom) gradients but I'd like to be able to make horizontal ones as well. I have a very good understanding for PHP, but I don't deal with PHP image functions very often. If someone can help me and figure this out I'd really appreciate it!
This code will produce a horizontal (left-to-right) gradient and make it faster as well. I have commented out the useless code so you know what to delete. ``` for($x=0;$x<$width;$x++){ /*if($start_r==$end_r) $new_r = $start_r;*/ // ^^ the line above is useless, $new_r will be set below either way $difference = $start_r-$end_r; $new_r = $start_r-intval(($difference/$width)*$x); /*if($start_g==$end_g) $new_g = $start_g;*/ // ^^ the line above is useless, $new_g will be set below either way $difference = $start_g-$end_g; $new_g = $start_g-intval(($difference/$width)*$x); /*if($start_b==$end_b) $new_b = $start_b;*/ // ^^ the line above is useless, $new_b will be set below either way $difference = $start_b - $end_b; $new_b = $start_b-intval(($difference/$width)*$x); $new_color = imagecolorresolve($image,$new_r,$new_g,$new_b); // ^^ used to be $row_color for($y=0;$y<$height;$y++){ imagesetpixel($image,$x,$y,$new_color); } } ```
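The per-pixel math here is plain linear interpolation of each channel, which is easy to check outside GD. A small Python sketch of the same formula (function names invented for illustration; PHP's `intval` and Python's `int()` both truncate toward zero, so the two agree):

```python
def lerp_channel(start, end, pos, length):
    # Mirrors the PHP expression:
    # $start - intval((($start - $end) / $length) * $pos)
    return start - int(((start - end) / length) * pos)

def gradient_row(start_rgb, end_rgb, width):
    """One horizontal row of (r, g, b) tuples fading from start to end."""
    return [
        tuple(lerp_channel(s, e, x, width) for s, e in zip(start_rgb, end_rgb))
        for x in range(width)
    ]

print(gradient_row((0, 0, 0), (255, 255, 255), 4))
# [(0, 0, 0), (63, 63, 63), (127, 127, 127), (191, 191, 191)]
```

Note that the end color is approached but never reached exactly (the last column is one interpolation step short of it), which matches the behavior of the original PHP loop.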
Thanks go out to Gert! Here is the final code I came up with, it's efficient, the images cache, and the file sizes are very friendly. ``` <? header("Content-type: image/png"); // example: <img src="gradient.php?width=100&start=00FF00&end=ff0000&type=x" /> $width = 1; $height=1; $start='000000'; $end='FFFFFF'; $type='x'; extract($_REQUEST); $path = "gradients/".$start."-".$end."_".$width."x".$height."_".$type.".png"; if(file_exists($path)) echo file_get_contents($path); else{ $r1 = hexdec(substr($start,0,2)); $g1 = hexdec(substr($start,2,2)); $b1 = hexdec(substr($start,4,2)); $r2 = hexdec(substr($end,0,2)); $g2 = hexdec(substr($end,2,2)); $b2 = hexdec(substr($end,4,2)); $image = @imagecreate($width,$height); switch($type){ case 'x': $d1 = 'height'; $d2 = 'width'; $v1 = 'y'; $v2 = 'x'; break; case 'y': $d1 = 'width'; $d2 = 'height'; $v1 = 'x'; $v2 = 'y'; break; } for($$v1=0;$$v1<$$d1;$$v1++){ $r = $r1-intval((($r1-$r2)/$$d1)*$$v1); $g = $g1-intval((($g1-$g2)/$$d1)*$$v1); $b = $b1-intval((($b1-$b2)/$$d1)*$$v1); $color = imagecolorresolve($image,$r,$g,$b); for($$v2=0;$$v2<$$d2;$$v2++) imagesetpixel($image,$x,$y,$color); } imagepng($image,$path,1); imagepng($image,NULL,1); imagedestroy($image); }?> ``` Variable $type can be x or y, and it would be in relation to your CSS sheet and what coordinate is repeating... Here are some examples: ``` <style type="text/css"> body{ background:url(gradient.php?height=123&start=ABBABB&end=FFF000&type=x) repeat-x scroll top left; /* the &type=x' so the repeat is 'repeat-x'. height needs set. */ } </style> <style type="text/css"> body{ background:url(gradient.php?width=345&start=111222&end=999888&type=y) repeat-y scroll top left; /* the &type=y' so the repeat is 'repeat-y'. width needs set. */ } </style> ```
Does anyone have a script to create a horizontal gradient (left to right) using PHP?
[ "", "php", "image", "gradient", "" ]
I have been working on a Notes integration project and I am using the Domingo API for communicating with Lotus Notes. This API is very useful, however I don't see any NotesUIDocument class and limited support for RichText in Lotus Notes. I have checked in the Notes.jar file and even that jar file seems to miss the NotesUIDocument functionality. Does anybody know of any alternative for this ?
`NotesUIDocument` is a LotusScript class which works because LotusScript support is embedded into the Notes client UI. When using Java, you generally work with the back-end classes such as `Document` (`NotesDocument` in LotusScript). Why do you need access to `NotesUIDocument` from Java? Any possible alternative may depend on your specific needs. **Update:** I don't believe you'll be able to get tight UI integration between the Notes client and a Java application. In terms of rich text, the Java classes in Notes.jar include a set of classes for rich text manipulation which will cater for the basic functionality, but you won't get as much rich text editing flexibility as you do through the Notes UI.
You can try using LS2J this allows you to use lotusscript for all the front end stuff and allows you to call your java back-end code.
Update UIDocument in LotusNotes
[ "", "java", "lotus-notes", "lotus", "" ]
I realise that this might be a question that has been asked and answered, but please bear with me. I want to know if it is possible to use annotations to inject code into your classes compile time. The classic example is to generate a getter and setter for the members of your object. This is not exactly what I need it for, but it serves to illustrate the basic idea. Now on the internet the basic answer I get is no, but this guy did it: [link text](http://www.hanhuy.com/pfn/java_property_annotation) Does anyone know how he does what he does (and if he actually does what he says he does)? The main thing is that he does not use an annotation processor to generate a new java file to compile. This technique I am aware of and will not work for our purpose. Thanks
Modifying code at compile time is not supported, but it seems to be possible by using unsupported javac-internal APIs. [Here](http://forums.sun.com/thread.jspa?threadID=5269463) is a post referencing the hanhuy-panno solution, along with a link to the [code](http://svntrac.hanhuy.com/repo/browser/hanhuy/trunk/panno/src/com/hanhuy/panno/processing/PropertyProcessor.java)...
I went looking for [something similar](https://stackoverflow.com/questions/340353/plugging-in-to-java-compilers) last year. There is no standard way to alter classes using annotation processors or the compiler and the annotation API documentation recommends creating decorators. If you are willing to live with the hacks, have a look at [Adrian Kuhn](https://stackoverflow.com/users/24468/adrian-kuhn)'s use of the private API where he [adds Roman numeral literals to Java](http://www.iam.unibe.ch/~akuhn/blog/2008/roman-numerals-in-your-java/). This approach is limited to the Sun javac compiler and you would need to implement something else if you used another (like the Eclipse compiler). --- Edit: anyone interested in this area should check out [Project Lombok](http://projectlombok.org/).
Can annotations be used for code injection?
[ "", "java", "annotations", "" ]
Is it dangerous to view an access log via a web browser without sanitizing it? I am considering recording an access log and viewing it via a web browser, but if an attacker modifies his remote host or user agent or something similar, can he attack me by inserting attack code into his remote host, user agent, etc.? So do I need to sanitize with htmlspecialchars before opening the access log file in a web browser? I mean, if an attacker inserts some attack code into his remote host or user agent or some other field, and then I view that access log via a web browser, will my PC be affected by that code?
You probably want some HTML formatting for the output and therefore have to sanitize/encode the log data. But for the argument's sake: if you send the output as text/plain, the client isn't supposed to parse any HTML/JavaScript. E.g. the output of ``` <?php header('Content-type: text/plain; charset=utf-8'); echo '<script>alert(document.URL);</script>'; ``` displays as ``` <script>alert(document.URL);</script> ``` (at least in FF3, IE8, Opera, Safari).
Yes, this is dangerous. For example, a malicious user can just request something like this: ``` GET /<script src="http://www.evilsite.com/malicious.js"></script> HTTP/1.1 Host: www.example.com Connection: close User-Agent: <script src="http://www.evilsite.com/malicious.js"></script> ``` And compromise your view page with malicious JavaScript. Since you're probably viewing the log on your site, you'd be logged in as an account with administrative rights. With the malicious JavaScript, the attacker can steal your session cookie and take over your session, complete with all the things you can do while logged in. So, in conclusion, you should **definitely** escape access log pages, unless you like having your administrative accounts compromised.
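The same encoding step sketched in Python for comparison (PHP's equivalent is `htmlspecialchars`); this is the transformation that defuses a malicious User-Agent before it reaches the browser:

```python
import html

# A User-Agent header value an attacker fully controls
malicious_user_agent = '<script src="http://evil.example/x.js"></script>'

# Encode before interpolating any log field into an HTML page
safe = html.escape(malicious_user_agent)
print(safe)
# &lt;script src=&quot;http://evil.example/x.js&quot;&gt;&lt;/script&gt;
```

The encoded string renders as visible text instead of executing, so the admin viewing the log page sees the attack payload rather than running it.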
Is it dangerous to view an access log without sanitizing via a web browser?
[ "", "php", "security", "logging", "sanitize", "" ]
At some point `java.lang.Override` started to be available for use with implementations of methods declared in interfaces. I'm pretty sure there was a time when it just worked for overrides of methods defined in superclasses. How can I find out when (i.e. at which version) this change happened?
Use of the `@Override` annotation on methods that are implemented from interfaces and not overridden from a superclass is a new feature in Java 6. See [@Override specification changes in Java 6](http://dertompson.com/2008/01/25/override-specification-changes-in-java-6/): > Between Java 5 and Java 6, changes to the specification of @Override have been made. In Java 6 it is possible to add the @Override annotation to methods that implement methods of an interface, which is not allowed in Java 5. I noticed the difference when a friend told me that he had to remove all the @Override annotations to make his Java 6 project compile with Java 5. The interesting thing is that there is no documentation about this change. The API specification of @Override is exactly the same in both Java versions. I found out that this was forgotten by Sun developers. [Peter Ahé, a former developer at Sun, calls it the @Override Snafu](http://blogs.oracle.com/ahe/entry/override_snafu).
I don't know how you'd find this out, but it happened between 5 and 6. (i.e. it's forbidden in 5 but accepted in 6.)
How can I find out when this Java API change happened?
[ "", "java", "" ]
Is there an online Python interpreter or some such that would allow me to try out simple python code from my iPhone? Something like [try ruby! (in your browser)](http://tryruby.hobix.com/) for Python, and works with the iPhone?
No. Python and all interpretation is strictly verboten on the iPhone. That said, if you jailbreak your phone, you can install linux with python and go to town. **Edit:** I didn't notice the word "online" before. Here's an online python interpreter: <http://try-python.mired.org/>
[CodePad](http://www.codepad.org) allows you to run snippets of Python code (and many other languages!).
Online Python interpreter for use *from* the iPhone?
[ "", "iphone", "python", "" ]
Should I check whether particular key is present in Dictionary **if I am sure it will be added in dictionary by the time I reach the code to access it?** There are two ways I can access the value in dictionary 1. checking ContainsKey method. If it returns true then I access using indexer [key] of dictionary object. or 2. TryGetValue which will return true or false as well as return value through out parameter. (2nd will perform better than 1st if I want to get value. [Benchmark](http://dotnetperls.com/trygetvalue).) However if I am sure that the function which is accessing global dictionary will surely have the key then should I still check using TryGetValue or without checking I should use indexer[]. Or I should never assume that and always check?
Use the indexer if the key is meant to be present - if it's not present, it will throw an appropriate exception, which is the right behaviour if the absence of the key indicates a bug. If it's valid for the key not to be present, use `TryGetValue` instead and react accordingly. (Also apply Marc's advice about accessing a shared dictionary safely.)
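Python draws exactly the same line between the two access styles, which may make the rule easier to remember (a small sketch for comparison, not C#): the indexer raises when absence is a bug, while `get` returns a default when absence is a valid case.

```python
data = {"present": 1}

# Indexer-style access: absence indicates a bug, so let it raise
assert data["present"] == 1
missing_raised = False
try:
    data["missing"]
except KeyError:
    missing_raised = True

# TryGetValue-style access: absence is a legitimate case
value = data.get("missing")               # None instead of an exception
value_or_default = data.get("missing", 0) # or an explicit default

print(missing_raised, value, value_or_default)  # True None 0
```

In both languages the choice signals intent to the reader: an unguarded lookup says "this key must be here", a guarded one says "it may legitimately be absent".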
If the dictionary is global (static/shared), you should be synchronizing access to it (this is important; otherwise you can corrupt it). Even if your thread is only **reading** data, it needs to respect the locks of other threads that might be editing it. However; if you are sure that the item is there, the indexer should be fine: ``` Foo foo; lock(syncLock) { foo = data[key]; } // use foo... ``` Otherwise, a useful pattern is to check and add in the same lock: ``` Foo foo; lock(syncLock) { if(!data.TryGetValue(key, out foo)) { foo = new Foo(key); data.Add(key, foo); } } // use foo... ``` Here we only add the item if it wasn't there... but **inside the same lock**.
Should I check whether particular key is present in Dictionary before accessing it?
[ "", "c#", ".net", "dictionary", "" ]
I have the following test case in Eclipse, using JUnit 4, which is refusing to pass. **What could be wrong?** ``` @Test(expected = IllegalArgumentException.class) public void testIAE() { throw new IllegalArgumentException(); } ``` This exact test case came about when trying to test my own code, where the expected tag didn't work. I wanted to see if JUnit would pass the most basic test. It didn't. I've also tested with custom exceptions as expected, without luck. **Screenshot:** [![Screenshot](https://i.stack.imgur.com/tVTcA.png)](https://i.stack.imgur.com/tVTcA.png)
The problem is that your `nnounceThreadTest` extends `TestCase`. Because it extends `TestCase`, the JUnit runner treats it as a JUnit 3.8 test, and the test runs because its name starts with the word "test", hiding the fact that the `@Test` annotation is in fact not being used at all. To fix this, remove the `extends TestCase` from the class definition.
Instead of removing `extends TestCase`, you can add this annotation to run your test case with JUnit 4, which supports annotations: `@RunWith(JUnit4.class)`
JUnit expected tag not working as expected
[ "", "java", "unit-testing", "junit", "junit4", "" ]
I have the following code (sorry for the large code chunk, but I could not narrow it down any more) ``` template <bool B> struct enable_if_c { typedef void type; }; template <> struct enable_if_c<false> {}; template <class Cond> struct enable_if : public enable_if_c<Cond::value> {}; template <typename X> struct Base { enum { value = 1 }; }; template <typename X, typename Y=Base<X>, typename Z=void> struct Foo; template <typename X> struct Foo<X, Base<X>, void> { enum { value = 0 }; }; template <typename X, typename Y> struct Foo<X, Y, typename enable_if<Y>::type > { enum { value = 1 }; }; int main(int, char**) { Foo<int> foo; } ``` But it fails to compile with gcc (v4.3) with ``` foo.cc: In function ‘int main(int, char**)’: foo.cc:33: error: ambiguous class template instantiation for ‘struct Foo<int, Base<int>, void>’ foo.cc:24: error: candidates are: struct Foo<X, Base<X>, void> foo.cc:27: error: struct Foo<X, Y, typename enable_if<Y>::type> foo.cc:33: error: aggregate ‘Foo<int, Base<int>, void> foo’ has incomplete type and cannot be defined ``` OK, so it's ambiguous, but I wasn't expecting it to be a problem, since with specialization there will almost always be some ambiguity. However this error is only triggered when using the class with `enable_if<...>`; if I replace it with a class like the following there is no problem. ``` template <typename X, typename Y> struct Foo<X, Y, void > { enum { value = 2 }; }; ``` Why does this class not cause an ambiguity while the others do? Aren't the two the same thing for classes with a true `::value`? Anyway, any hints as to what I am doing wrong are appreciated. **Thanks for the answers**, my real problem (to get the compiler to select my first specialization) was solved by replacing `struct Foo<X, Base<X>, void>` with `struct Foo<X, Base<X>, typename enable_if< Base<X> >::type >` which seems to work the way I want.
The gist of your question is that you have: ``` template <typename X, typename Y, typename Z> struct Foo {}; template <typename X> struct Foo<X, Base<X>, void> {}; // #1 template <typename X, typename Y> struct Foo<X, Y, typename whatever<Y>::type> {}; // #2 ``` and you're trying to match it to ``` Foo<int, Base<int>, void> ``` Obviously, both specializations match (the first with `X = int`, the second with `X = int, Y = Base<int>`). According to the standard, section 14.5.4, if there are multiple matching specializations, a partial ordering (as defined in 14.5.5.2) among them is constructed and the most specialized one is used. In your case, however, neither one is more specialized than the other. (Simply put, a template is more specialized than another if you can replace each type parameter of the latter template with some type and as a result get the signature of the former. Also, if you have `whatever<Y>::type` and you replace `Y` with `Base<X>` you get `whatever<Base<X> >::type`, not `void`, i.e. no processing is performed.) If you replace `#2` with ``` template <typename X, typename Y> struct Foo<X, Y, void > {}; // #3 ``` then the candidate set again contains both templates; however, #1 is more specialized than #3 and as such is selected.
Aren't you missing a ``` < ``` symbol?
ambiguous template weirdness
[ "", "c++", "templates", "gcc", "specialization", "" ]
I don't understand when an output parameter should be used; I personally wrap the result in a new type if I need to return more than one value, and I find that a lot easier to work with than `out`. I have seen methods like this: ``` public void Do(int arg1, int arg2, out int result) ``` Are there any cases where that actually makes sense? How about `TryParse`: why not return a `ParseResult` type, or, in the newer framework, a nullable type?
Out is good when you have a `TryNNN` function and it's clear that the out-parameter will always be set even if the function does not succeed. This allows you rely on the fact that the local variable you declare will be set rather than having to place checks later in your code against null. (A comment below indicates that the parameter could be set to `null`, so you may want to verify the documentation for the function you're calling to be sure if this is the case or not.) It makes the code a little clearer and easier to read. Another case is when you need to return some data and a status on the condition of the method like: ``` public bool DoSomething(int arg1, out string result); ``` In this case the return can indicate if the function succeeded and the result is stored in the out parameter. Admittedly, this example is contrived because you can design a way where the function simply returns a `string`, but you get the idea. A disadvantage is that you have to declare a local variable to use them: ``` string result; if (DoSomething(5, out result)) UpdateWithResult(result); ``` Instead of: ``` UpdateWithResult(DoSomething(5)); ``` However, that may not even be a disadvantage, it depends on the design you're going for. In the case of DateTime, both means (Parse and TryParse) are provided.
Well, as with most things, it depends. Let us look at the options: * you could return whatever you want as the return value of the function * if you want to return multiple values or the function already has a return value, you can either use out params or create a new composite type that exposes all these values as properties In the case of TryParse, using an out param is efficient - you don't have to create a new type, which would be 16B of overhead (on 32-bit machines), or incur the perf cost of having them garbage collected post the call. TryParse could be called from within a loop, for instance - so out params rule here. For functions that would not be called within a loop (i.e. performance is not a major concern), returning a single composite object might be 'cleaner' (subjective to the beholder). Now with anonymous types and dynamic typing, it might become even easier. Note: 1. `out` params have some rules that need to be followed, i.e. the compiler will ensure that the function does initialize the value before it exits. So TryParse has to set the out param to some value even if the parse operation failed 2. The TryXXX pattern is a good example of when to use out params - Int32.TryParse was introduced because people complained of the perf hit of catching exceptions to know if a parse failed. Also, the most likely thing you'd do if the parse succeeded is to obtain the parsed value - using an out param means you do not have to make another method call to Parse
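As a point of comparison for the wrap-the-result approach the question mentions, languages without `out` parameters usually return the success flag and the value together as a tuple. Here is a minimal Python sketch of the Try-pattern (the function name is made up for illustration):

```python
def try_parse_int(text):
    # Mirrors Int32.TryParse: never raises, returns a success flag
    # plus the parsed value (0 on failure, like the out parameter).
    try:
        return True, int(text)
    except (TypeError, ValueError):
        return False, 0

ok, value = try_parse_int("42")    # True, 42
ok, value = try_parse_int("oops")  # False, 0
```

This avoids both the exception cost and the need to declare a local variable up front, which is roughly the trade-off the answer above describes.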
When should I use out parameters?
[ "", "c#", ".net", "out", "" ]
Does anyone know of an elegant way to get the decimal part of a number only? In particular I am looking to get the exact number of places after the decimal point so that the number can be formatted appropriately. I was wondering if there is a way to do this without any kind of string extraction using the culture-specific decimal separator... For example: 98.0 would be formatted as 98; 98.20 would be formatted as 98.2; 98.2765 would be formatted as 98.2765; etc.
If it's only for formatting purposes, just calling `ToString` will do the trick, I guess? ``` double d = (double)5 / 4; Console.WriteLine(d.ToString()); // prints 1.75 d = (double)7 / 2; Console.WriteLine(d.ToString()); // prints 3.5 d = 7; Console.WriteLine(d.ToString()); // prints 7 ``` That will, of course, format the number according to the current culture (meaning that the decimal sign, thousand separators and such will vary). **Update** As Clement H points out in the comments: if we are dealing with large numbers, at some point d.ToString() will return a string with scientific formatting instead (such as `"1E+16"` instead of `"10000000000000000"`). One way to overcome this problem, and force the full number to be printed, is to use `d.ToString("0.#")`, which will also result in the same output for lower numbers as the code sample above produces.
You can get all of the relevant information from the [Decimal.GetBits](http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx) method assuming you really mean `System.Decimal`. (If you're talking about decimal formatting of a float/double, please clarify the question.) Basically `GetBits` will return you 4 integers in an array. You can use the scaling factor (the fourth integer, after masking out the sign) to indicate the number of decimal places, but you should be aware that it's not necessarily the number of *significant* decimal places. In particular, the decimal representations of 1 and 1.0 are different (the former is 1/1, the latter is 10/10). Unfortunately, manipulating the 96 bit integer is going to require some fiddly arithmetic unless you can use .NET 4.0 and `BigInteger`. To be honest, you'll get a *simpler* solution by using the built in formatting with `CultureInfo.InvariantCulture` and then finding everything to the right of "."
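For what it's worth, the trailing-zero trimming the question asks for can be prototyped quickly outside .NET. Here is a rough sketch using Python's `decimal` module, which carries a scale the same way the `Decimal.GetBits` representation does (this is only an illustration of the idea, not C# code):

```python
from decimal import Decimal

def trim(d):
    # normalize() strips trailing zeros but may switch to exponent
    # notation (e.g. 120.00 -> 1.2E+2), so re-expand with 'f' formatting.
    return format(d.normalize(), "f")

print(trim(Decimal("98.0")))     # 98
print(trim(Decimal("98.20")))    # 98.2
print(trim(Decimal("98.2765")))  # 98.2765
```

The C# equivalent of this shortcut is essentially the invariant-culture string formatting mentioned at the end of the answer above.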
Get the decimal part of a number and the number of places after the decimal point (C#)
[ "", "c#", "decimal", "numbers", "" ]
I'm trying to get the number of times each item in a list is in a string in Python: ``` paragraph = "I eat bananas and a banana" def tester(x): return len(re.findall(x,paragraph)) map(tester, ['banana', 'loganberry', 'passion fruit']) ``` Returns [2, 0, 0] What I'd like to do however is extend this so I can feed the paragraph value into the map() function. Right now, the tester() function has paragraph hardcoded. Does anybody have a way to do this (perhaps make an n-length list of paragraph values)? Any other ideas here? Keep in mind that each of the array values will have a weight at some point in the future - hence the need to keep the values in a list rather than crunching them all together. UPDATE: The paragraph will often be 20K and the list will often have 200+ members. My thinking is that map operates in parallel - so it will be much more efficient than any serial methods.
Here's a response to the movement of the goalposts ("I probably need the regex because I'll need word delimiters in the near future"): This method parses the text once to obtain a list of all the "words". Each word is looked up in a dictionary of the target words, and if it is a target word it is counted. The time taken is O(P) + O(T) where P is the size of the paragraph and T is the number of target words. All other solutions to date (including the currently accepted solution) except my Aho-Corasick solution are O(PT). ``` def counts_all(targets, paragraph, word_regex=r"\w+"): tally = dict((target, 0) for target in targets) for word in re.findall(word_regex, paragraph): if word in tally: tally[word] += 1 return [tally[target] for target in targets] def counts_iter(targets, paragraph, word_regex=r"\w+"): tally = dict((target, 0) for target in targets) for matchobj in re.finditer(word_regex, paragraph): word = matchobj.group() if word in tally: tally[word] += 1 return [tally[target] for target in targets] ``` The finditer version is a strawman -- it's much slower than the findall version. Here's the currently accepted solution expressed in a standardised form and augmented with word delimiters: ``` def currently_accepted_solution_augmented(targets, paragraph): def tester(s): def f(x): return len(re.findall(r"\b" + x + r"\b", s)) return f return map(tester(paragraph), targets) ``` which goes overboard on closures and could be reduced to: ``` # acknowledgement: # this is structurally the same as one of hughdbrown's benchmark functions def currently_accepted_solution_augmented_without_extra_closure(targets, paragraph): def tester(x): return len(re.findall(r"\b" + x + r"\b", paragraph)) return map(tester, targets) ``` All variations on the currently accepted solution are O(PT). Unlike the currently accepted solution, the regex search with word delimiters is not equivalent to a simple `paragraph.find(target)`. Because the re engine doesn't use the "fast search" in this case, adding the word delimiters changes it from slow to **very** slow.
A closure would be a quick solution: ``` paragraph = "I eat bananas and a banana" def tester(s): def f(x): return len(re.findall(x,s)) return f print map(tester(paragraph), ['banana', 'loganberry', 'passion fruit']) ```
Using map() to get number of times list elements exist in a string in Python
[ "", "python", "regex", "mapreduce", "" ]
I get this error: ``` The error description is 'Only one top level element is allowed in an XML document.'. Could not find prepared statement with handle 0. The XML parse error 0xc00ce555 occurred on line number 1, near the XML text "<value1>34</value1><value1>33</value1><value1>32</value1>". The statement has been terminated. ``` This is the stored procedure call: ``` public bool HideFromList(string commentList, bool state) { //commentList =<values><value1>34</value1><value1>33</value1><value1>32/value1></values> using (SqlConnection cn = new SqlConnection(this.ConnectionString)) { SqlCommand cmd = new SqlCommand("VisibleFromList", cn); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("@XMLDoc", SqlDbType.Xml).Value = commentList; cmd.Parameters.Add("@state", SqlDbType.Int).Value = state; cn.Open(); int ret = cmd.ExecuteNonQuery(); return (ret == 1); } } ``` This is my stored procedure: ``` ALTER PROCEDURE dbo.VisibleFromList ( @XMLDoc xml, @state BIT ) AS BEGIN DECLARE @docHandle int EXEC sp_xml_preparedocument @docHandle OUTPUT, @XMLDoc UPDATE tbh_Comments SET Visible = @state WHERE CommentID IN (SELECT * FROM OPENXML(@docHandle, '/values/value1', 2) WITH (value1 INT '.')) END ``` But if I modify the SP with an embedded input string, it does work: ``` ALTER PROCEDURE dbo.VisibleFromList ( @XMLDoc xml, @state BIT ) AS BEGIN DECLARE @docHandle int EXEC sp_xml_preparedocument @docHandle OUTPUT, '<values> <value1>33</value1> <value1>34</value1> </values>' UPDATE tbh_Comments SET Visible = @state WHERE CommentID IN (SELECT * FROM OPENXML(@docHandle, '/values/value1', 2) WITH (value1 INT '.')) END ``` How can I make it work with the input parameter?
Are you positive your commentList is as you describe? ``` //commentList =<values><value1>34</value1><value1>33</value1><value1>32/value1></values> ``` If your list is indeed similar to that, it should work. According to your error, it appears the `<values>` element may not exist: ``` near the XML text "<value1>34</value1><value1>33</value1><value1>32</value1>". ```
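The "Only one top level element" message is a generic XML well-formedness rule, so you can reproduce it with any XML parser to confirm what the server is complaining about. A quick illustration in Python (not part of the C#/SQL code, just a demonstration):

```python
import xml.etree.ElementTree as ET

good = "<values><value1>34</value1><value1>33</value1></values>"
ids = [int(e.text) for e in ET.fromstring(good).findall("value1")]  # [34, 33]

bad = "<value1>34</value1><value1>33</value1>"  # two top-level elements
try:
    ET.fromstring(bad)
    well_formed = True
except ET.ParseError:
    # Fails because there is junk after the first document element:
    # the same rule the SQL XML parser enforces with error 0xc00ce555.
    well_formed = False
```

So if the string that reaches the parameter is missing the `<values>` wrapper, you get exactly the error in the question.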
I recommend you convert your procedures to use [XML Data Type Methods](http://msdn.microsoft.com/en-us/library/ms190798.aspx). Unlike the sp\_prepare\_document stuff, these are implemented natively by the SQL engine and they interact better with queries, produce better error messages and you [don't run the risk of leaking document handles](http://msdn.microsoft.com/en-us/library/aa260386(SQL.80).aspx). So in your procedure you could do something like: ``` declare @XMLDoc xml; select @XMLDoc =N'<values> <value1>33</value1> <value1>34</value1> </values>'; UPDATE tbh_Comments SET Visible = @state WHERE CommentID IN ( SELECT v.value('.', 'INT') as CommentID FROM @XMLDoc.nodes('/values/value1') t(v)) ```
Stored Procedure and XML
[ "", "c#", "asp.net", "sql-server", "xml", "stored-procedures", "" ]
`Boo` seems like a very cool language. Is it 100% C# compatible? I mean: can I use any C# DLL/class? Could I make use of the XNA framework?
As far as I know, Boo has an implementation on top of the .NET CLR - which implies that it should be able to both consume, and be consumed by C# code. The syntax may not always be pretty when consuming Boo from C# - but the opposite should be quite elegant, given Boo's syntax. Also, all of the classes in the .NET BCL should be available to you in Boo.
Yes Boo is easily consumed by C# and vice versa. Most of the best features of Boo don't carry over to C#, such as syntactic macros, for obvious reasons, but you can create Macros in C# and consume them in Boo. Additionally Boo has the nice feature of being able to create Modules, which is something you can't do in C#. They both can create extension methods. Boo has 'duck' typing while C# now has the "dynamic" keyword. While they're both functionally equivalent you might end up seeing the two merge eventually. Boo currently has known issues with generics, but the feature will be completely supported once they are all ironed out. I suspect there will have to be some extra work done to support the new Co/Contra-variance features in .NET 4 as well.
Is Boo 100% C# compatible?
[ "", "c#", ".net", "compatibility", "boo", "" ]
I have an associative array in PHP: ``` $asd['a'] = 10; $asd['b'] = 1; $asd['c'] = 6; $asd['d'] = 3; ``` I want to sort this on the basis of its values and then get the keys for the first 4 values. How can I do that in PHP?
[asort()](http://php.net/asort) should keep the index association: ``` asort($asd); ``` After that, a simple foreach can get you the first four values: ``` $i = 0; foreach ($asd as $key=>$value) { if ($i >= 4) break; // do something with $asd[$key] or $value $i++; } ```
An alternative to the other answers. This one without a loop: ``` asort($asd); $top_four_keys = array_slice(array_keys($asd), 0, 4); ```
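For comparison, the same sort-then-slice idea is a one-liner in Python, which may help clarify what `asort` plus `array_keys`/`array_slice` is doing (purely illustrative; the question itself is about PHP):

```python
asd = {'a': 10, 'b': 1, 'c': 6, 'd': 3}

# Sort the keys by their associated values (ascending, like asort)
# and keep the first four.
top_four_keys = sorted(asd, key=asd.get)[:4]
print(top_four_keys)  # ['b', 'd', 'c', 'a']
```

Either way, the key-to-value association survives the sort, which is the point of using `asort` over a plain `sort`.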
PHP associative Array
[ "", "php", "arrays", "" ]
I got the assignment to unify and simplify the company's email sendouts from their site, with the possibility to edit the emails themselves. So I'm sketching a C# Windows Forms application with a WYSIWYG editor to manage all the emails. The email content is stored in a SQL DB. But I'm in dire need of some tips and pointers on the logic of some of the sendouts. Some of the sendouts are action-triggered from signups on the site etc. But some sendouts are interval-based, like search-match email notifications and other reminder emails which are sent out at intervals ranging from every 5 minutes to every midnight. My dilemma is this: How do you best handle interval-based sendouts? Can you implement some kind of daemon or service that checks stored procedures at given intervals and triggers sendouts if there are any hits? I would prefer it if the application could handle both the managing of the email content and the scheduled sendouts (the 5-minute checks and every midnight). Or is there any other smarter way to tackle the interval-based sendouts? Thankful for tips and pointers on how to tackle this.
For the sign-up emails you can just add code to send the email to whatever is being executed when users are signing up. For the scheduled emails there are a couple of ways that you could handle it. If you think just managing it all in the database is better, and are on SQL 2005 or above, there is built-in Database Mail functionality. <http://msdn.microsoft.com/en-us/library/ms175887.aspx> Recurring emails can be sent by scheduled jobs that check a queue of emails that you maintain. If you want to handle the emails in your code you can use System.Net.Mail. The scheduling can be handled by a Windows Service or a recurring task in the OS. <http://support.microsoft.com/kb/226795>
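To make the "check every N minutes" idea concrete, the core of such a poller is just a due-date comparison. Here is a small sketch in Python (the real implementation would live in a Windows Service or scheduled task as described above; the job names and structure here are invented):

```python
def due_jobs(jobs, now):
    # jobs: list of (name, interval_seconds, last_run_timestamp) tuples.
    # 'now' is passed in explicitly so the logic is testable without sleeping.
    return [name for name, interval, last_run in jobs
            if now - last_run >= interval]

jobs = [("search-match", 5 * 60, 0), ("midnight-reminders", 24 * 3600, 0)]
print(due_jobs(jobs, 300))  # ['search-match']
```

The service's loop would call something like this on a timer, send the emails for each due job, and record the new `last_run` timestamps (in the database, so the queue survives restarts).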
Try a scheduling engine like <http://quartznet.sourceforge.net/>
Building a Email sending application in C#
[ "", "c#", "winforms", "email", "" ]
I'm trying to migrate my code from Visual C++ 6 to Visual C++ 2008 Express, but when I build the solution I receive this error message: > ``` icl: warning: problem with Microsoft compilation of 'c:\Desenvolvimento\DFF\Base\\version.cpp' 1>C:\Arquivos de programas\Microsoft Visual Studio 9.0\VC\include\string.h(69): error: expected a ";" 1> __DEFINE_CPP_OVERLOAD_STANDARD_FUNC_0_1(char *, __RETURN_POLICY_DST, __EMPTY_DECLSPEC, _strset, _Inout_z_, char, _Dest, _In_ int, _Value) ``` There are a lot of errors like this, sometimes expecting ";", sometimes ")". Thanks, Leandro Lima
The error was occurring because in Visual C++ 6 I called the Intel compiler from a .bat file to create a version number for my project. Now I'm using the Microsoft compiler and I forgot to change the call.
Be careful about upgrading to express version if you have need for MFC and the control wizards/editors. You will not have access to that kind of functionality in Express if I recall correctly.
Migrating from Visual C++ 6 to Visual C++ 2008 express
[ "", "c++", "visual-studio-2008", "visual-c++", "visual-c++-2008", "visual-c++-2008-express", "" ]
Given the following HTML: ``` <p><img id="one" alt="at beginning, return true" />Some Text</p> <p>Some <img id="two" alt="in middle, return false" />Text</p> <p>Some Text<img id="three" alt="at end, return false" /></p> ``` How would I be able to tell that `$("img#one")` is at the beginning of its parent node? Ideally what I'm trying to do is this: ``` $("p>img").each(function () { var $this = $(this); var $parent = $this.parent(); if ("$this is at the beginning of $parent.html()") { $parent.before($this); } else { $parent.after($this); } }); ``` **Edit:** with [sebasgo's help](https://stackoverflow.com/questions/1144384/how-can-i-tell-if-a-jquery-node-is-at-the-beginning-of-its-parent-node/1144418#1144418), here's the final code and result: ``` $("p>img").each(function () { var $this = $(this); var $parent = $this.parent(); if (this == this.parentNode.firstChild) { $parent.before($this); } else { $parent.after($this); } }); <img id="one" alt="at beginning, return true" /> <p>Some Text</p> <p>Some Text</p> <img id="two" alt="in middle, return false" /> <p>Some Text</p> <img id="three" alt="at end, return false" /> ```
Use ``` var elem = $("img#one").get(0) if (elem.parentNode.firstChild == elem) { .... } ``` Hope this works better.
The following code will tell you if the node in question is the first child or not, so it should work for you. I've just been battling with a similar problem and this worked for me. ``` if($(this).prev().length) { // Not first-child } else { // is first-child } ```
How can I tell if a jQuery node is at the beginning of its parent node?
[ "", "javascript", "jquery", "html", "dom", "" ]