Is there a way to set my own custom test case names when using parameterized tests in JUnit4? I'd like to change the default — `[Test class].runTest[n]` — to something meaningful.
**This feature has made it into [JUnit 4.11](https://github.com/junit-team/junit/wiki/Download-and-Install).** To change the name of parameterized tests, you write: ``` @Parameters(name="namestring") ``` *`namestring`* is a string which can contain the following special placeholders: * `{index}` - the index of this set of arguments. The default *`namestring`* is `{index}`. * `{0}` - the first parameter value from this invocation of the test. * `{1}` - the second parameter value * and so on The final name of the test will be the name of the test method, followed by the *`namestring`* in brackets, as shown below. For example (adapted from the unit test for the `Parameterized` annotation): ``` @RunWith(Parameterized.class) static public class FibonacciTest { @Parameters( name = "{index}: fib({0})={1}" ) public static Iterable<Object[]> data() { return Arrays.asList(new Object[][] { { 0, 0 }, { 1, 1 }, { 2, 1 }, { 3, 2 }, { 4, 3 }, { 5, 5 }, { 6, 8 } }); } private final int fInput; private final int fExpected; public FibonacciTest(int input, int expected) { fInput = input; fExpected = expected; } @Test public void testFib() { assertEquals(fExpected, fib(fInput)); } private int fib(int x) { return x < 2 ? x : fib(x - 1) + fib(x - 2); } } ``` will give names like `testFib[1: fib(1)=1]` and `testFib[4: fib(4)=3]`. (The `testFib` part of the name is the method name of the `@Test`.)
Looking at JUnit 4.5, its runner clearly doesn't support that, as that logic is buried inside a private class inside the Parameterized class. You could forgo the JUnit Parameterized runner and create your own that understands the concept of names (which raises the question of how you would set a name...). From a JUnit perspective, it would be nice if, instead of (or in addition to) just passing an increment, they passed the comma-delimited arguments, as TestNG does. If the feature is important to you, you can comment on the Yahoo mailing list referenced at www.junit.org.
Changing names of parameterized tests
[ "java", "junit", "junit4", "parameterized" ]
Say I have a type that implements a property with a string type: ``` public class Record { public string Value { get; set; } } ``` Then I have an interface that defines a property with the same name: ``` public interface IIntValued { public int Value { get; set; } } ``` I can use explicit interface as follows: ``` public class Record : IIntValued { public string Value { get; set; } int IIntValued.Value { get{ return 0; } set{} } } ``` However, if I want to be able to reference the string "Value" in my explicit interface, can I do it? If so, how? I imagine it to be something like: ``` public class Record : IIntValued { public string Value { get; set; } public int IIntValued.Value { get { string value = /*Magic here*/.Value; return int.parse(value); } set{} } } ``` As you can see, I want the "string valued" "Value" property for an expression in the "int valued" "Value" property. If it were another explicitly implemented interface member, I could typecast to that Interface and then use, but how would it work for an implicit type member? Note: The example is a bit contrived, but hopefully demonstrates the language question.
For an implicit type member, just `Value` or `this.Value` should be fine - because it won't resolve to `IIntValued.Value` by default.
Sure you can! The problem is that you were placing accessibility keywords where they are illegal: explicitly-implemented members cannot have an accessibility keyword. (An implicitly-implemented member would be written without the `IIntValued.` prefix on the member name.) Here's a sample that works. ``` public interface IIntValued { int Value { get; set; } } public class Record : IIntValued { public string Value { get; set; } int IIntValued.Value { get { string value = this.Value; return int.Parse(value); } set { } } } ```
Can one reference a same-named implicit property in an explicit Interface implementation?
[ "c#", "explicit-interface" ]
I'm using this code to make my Java program open a (visible) CMD window: ``` try { String line; Process p = Runtime.getRuntime().exec("cmd /C start \"Render\" \"" + myPath + "\\punchRender.cmd\""); BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream())); while ((line = input.readLine()) != null) { System.out.println(line); jLabel7.setText(line); } input.close(); } catch (Exception err) { err.printStackTrace(); } ``` and I've been trying to do the same thing with the OSX terminal, this is where I'm at right now: ``` try { String line; Process p = Runtime.getRuntime().exec("sh " + myPath + "/punchRender.sh"); BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream())); while ((line = input.readLine()) != null) { System.out.println(line); jLabel7.setText(line); } input.close(); } catch (Exception err) { err.printStackTrace(); } ``` So far, no luck :( Any suggestions? The .sh file isn't even running...
I would just make sure your shell script has its execute bit set, and pass in the shell script file name directly: `Process p = Runtime.getRuntime().exec(myPath + "/punchRender.sh")` **Edit:** I don't know offhand whether Java has a way to set Unix/Linux file permissions (to set the eXecute bit), or how best to escape the quotes, but it would be something like this: `Process chmod = Runtime.getRuntime().exec("chmod u+x \"" + myPath + "/punchRender.sh\"")`
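As it turns out, since Java 6 the execute bit can be set directly with `java.io.File.setExecutable`, with no need to shell out to `chmod`. A minimal sketch (the temp-file name is an arbitrary stand-in for the real script path):

```java
import java.io.File;
import java.io.IOException;

public class MakeExecutable {
    public static void main(String[] args) throws IOException {
        // Stand-in for myPath + "/punchRender.sh"; a temp file keeps the sketch self-contained.
        File script = File.createTempFile("punchRender", ".sh");
        script.deleteOnExit();
        // Java 6+: set the owner's execute bit without an external chmod process.
        boolean changed = script.setExecutable(true);
        System.out.println("executable: " + (changed && script.canExecute()));
    }
}
```

With the bit set, `Runtime.getRuntime().exec(myPath + "/punchRender.sh")` can then launch the script directly.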
This should work. Not only running the script, but opening a Terminal window as well: `Process p = Runtime.getRuntime().exec("open -a /Applications/Utilities/Terminal.app \"" + myPath + "/punchRender.sh\"");`
Run CMD equivalent in OSX?
[ "java", "macos", "shell", "scripting", "sh" ]
I have an object of class A. I want to override one of the methods of that class. Can this be done? More specifically, I have an object that is being injected into a field, and I need to override one of its methods before I can use it. I am trying to see if reflection could help solve the problem. Note that the method I am trying to override does not depend on private variables of that class.
Look into [Dynamic Proxy classes](http://java.sun.com/j2se/1.3/docs/guide/reflection/proxy.html).
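A minimal sketch of what a dynamic proxy buys you: wrap an existing object and intercept one method while delegating everything else to it. Note that `java.lang.reflect.Proxy` only works for methods declared on an interface the object implements, which is why a bytecode-based library like CGLIB is needed for plain classes. The choice of `List` and of overriding `size()` here is purely illustrative:

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxyDemo {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        List<String> real = new ArrayList<>();
        real.add("a");

        // Intercept size(); forward every other call to the real object.
        List<String> proxied = (List<String>) Proxy.newProxyInstance(
            List.class.getClassLoader(),
            new Class<?>[] { List.class },
            (proxy, method, methodArgs) -> "size".equals(method.getName())
                ? 42
                : method.invoke(real, methodArgs));

        System.out.println(proxied.size()); // overridden behavior
        System.out.println(proxied.get(0)); // delegated to the real list
    }
}
```

In a dependency-injection setting, you would wrap the injected object the same way before using it.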
[CGLIB](http://cglib.sourceforge.net/) should be able to help you to achieve what you're trying to do. Check out [Enhancer class](http://cglib.sourceforge.net/xref/net/sf/cglib/proxy/Enhancer.html).
In Java, given an object, is it possible to override one of the methods?
[ "java", "reflection", "dependency-injection" ]
I'd like to populate my DropDownList using a simple xml file: ``` <?xml version="1.0" encoding="utf-8" ?> <Databases> <Database>foo</Database> <Database>bar</Database> <Database>baz</Database> </Databases> ``` My XPath is ``` /Databases/Database ``` My drop down list is rendered as: ``` <select name="databaseDropDownList" id="databaseDropDownList"> <option selected="selected" value="System.Web.UI.WebControls.XmlDataSourceNodeDescriptor">System.Web.UI.WebControls.XmlDataSourceNodeDescriptor</option> <option value="System.Web.UI.WebControls.XmlDataSourceNodeDescriptor">System.Web.UI.WebControls.XmlDataSourceNodeDescriptor</option> <option value="System.Web.UI.WebControls.XmlDataSourceNodeDescriptor">System.Web.UI.WebControls.XmlDataSourceNodeDescriptor</option> </select> ``` How should I extract the text? Thanks
I can't recall it off the top of my head, but I think there was a bug in XmlDataSource that prevents you from binding to the values of XML nodes; it works with attributes only. Please correct me if I am wrong about this. There's a slight modification you need to make to your XML file: ``` <%@ Page Language="C#" %> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { string xml = @"<?xml version=""1.0"" encoding=""utf-8"" ?> <Databases> <Database name=""foo"" /> <Database name=""bar"" /> <Database name=""baz"" /> </Databases>"; databasesSource.Data = xml; databasesSource.DataBind(); } } </script> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:DropDownList ID="databases" runat="server" DataSourceID="databasesSource" DataValueField="name" DataTextField="name" /> <asp:XmlDataSource ID="databasesSource" runat="server" XPath="/Databases/Database" /> </div> </form> </body> </html> ``` Note that I added a *name* attribute instead of using the value of the node directly. If you can't modify the structure of your original XML file, you can apply an XSLT transformation to it using the [TransformFile](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.xmldatasource.transformfile.aspx) property, as described in this [post](http://weblogs.asp.net/rajbk/pages/431322.aspx).
I had the same problem today. My solution: this is my XML: ``` <?xml version="1.0" encoding="utf-8"?> <pokemons> <pokemon> <nome itemname="bulbassaur">bulbassaur </nome> </pokemon> <pokemon> <nome itemname="charmander">chamander </nome> </pokemon> <pokemon> <nome itemname="squirtle"> squirtle </nome> </pokemon> </pokemons> ``` And I put `DataTextField="itemname"` on the DropDownList server control, e.g.: ``` <asp:DropDownList ID="DropDownList1" runat="server" AutoPostBack="True" DataSourceID="XmlDataSource1" DataTextField="itemname"> ``` It's working without problems. Probably not the best solution, but at least better than `System.Web.UI.WebControls.XmlDataSourceNodeDescriptor`.
Populate DropDownList from XmlDataSource
[ "c#", "asp.net", "xpath", "drop-down-menu" ]
Using CakePHP, I have a generic address table which I want to link to customers, vendors, and contacts. Most of the tables only have a 1-to-1 relationship with it, but I want my customers table to have 2. Perhaps for clarification: I have a customers table ``` id, int mailing_address_id, int billing_address_id, int ``` and an addresses table ``` id,int addr, varchar city, varchar etc.... ``` Now I know I could put a `customer_id` in the addresses table, but I don't want to do that because I have a vendors table, a contacts table, and other tables that are all going to use the addresses table; the `customer_id` would not really be relevant to those other tables. I'd like the Customer model to automatically link in the two addresses.
Follow Travis Leleu's suggestion - because it's a good idea, regardless. Then add an enum field to the `Addresses` table called `table_id`. The value of the `table_id` field could be "customer", "vendor", "contact", and whatever other tables would link to the addresses table. Also include a single foreign key called `entity_id`. This foreign key would be the primary key of the corresponding customer, vendor, or whatever. When you, for example, want the billing address for a certain vendor, add in the `$conditions` array: ``` 'Address.entity_id'=>'123456' 'Address.table_id'=>'vendor' 'Address.type'=>'billing' ``` With this set-up you could have as many tables as you want referencing the `Addresses` table.
I like [Kyle's](https://stackoverflow.com/questions/655068/how-do-i-use-multiple-foreign-keys-in-one-table-referencing-another-table-in-cake/684105#684105) and [Travis's](https://stackoverflow.com/questions/655068/how-do-i-use-multiple-foreign-keys-in-one-table-referencing-another-table-in-cake/655407#655407) suggestions, but you can also put the foreign keys the other direction. If you want your addresses to be independent and have several other tables reference them, then you should be able to define two [belongsTo relationships](http://book.cakephp.org/view/78/Associations-Linking-Models-Together#belongsTo-81) from customer to address. Each relationship then has to specify which field to use as the foreign key. ``` <?php class Customer extends AppModel { var $name = 'Customer'; var $belongsTo = array( 'BillingAddress' => array( 'className' => 'Address', 'foreignKey' => 'billing_address_id' ), 'MailingAddress' => array( 'className' => 'Address', 'foreignKey' => 'mailing_address_id' ) ); } ?> ``` However, both of these solutions leave you open to orphaned addresses, because the foreign key constraint isn't really correct. The simplest solution might be to just add a bunch of optional foreign keys to the address table, like `customer_id`, `company_id`, `employee_id`, and so on. Then you've got a standard arc pattern, and the keys are pointing the right direction, so you get correct referential integrity. Another solution is to design a more general entity table that has address as a child table. Then customer, company, and employee are all subtypes of the entity table. For more details on that style of schema, I recommend [Data Model Patterns](https://rads.stackoverflow.com/amzn/click/com/0932633293) by David Hay.
How do I use multiple foreign keys in one table referencing another table in cakephp
[ "php", "cakephp" ]
Using the Win32 APIs, is it possible to create a window or dialog in one thread and then collect events for it from another thread? Are HWNDs tied to threads? Trying the contrived example below, I never see GetMessage() fire. ``` HWND g_hWnd; DWORD WINAPI myThreadProc(LPVOID lpParam) { MSG msg; while(GetMessage(&msg, g_hWnd, 0, 0) > 0) { ... } } int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) { g_hWnd = CreateDialog(hInstance, MAKEINTRESOURCE(IDD_MYDIALOG), 0, myDlgProc); CreateThread(NULL, 0, myThreadProc, NULL, 0, NULL); ... } ``` But here, I do. ``` HWND g_hWnd; HINSTANCE g_hInstance; DWORD WINAPI myThreadProc(LPVOID lpParam) { MSG msg; g_hWnd = CreateDialog(g_hInstance, MAKEINTRESOURCE(IDD_MYDIALOG), 0, myDlgProc); while(GetMessage(&msg, g_hWnd, 0, 0) > 0) { ... } } int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) { g_hInstance = hInstance; CreateThread(NULL, 0, myThreadProc, NULL, 0, NULL); ... } ``` Can somebody explain what I'm seeing?
No. GetMessage returns messages on the current thread's input queue. The HWND parameter is a filter, so that GetMessage only returns messages in the current thread's input queue intended for that window. Windows have thread affinity - messages intended for a window get handled on the thread that created and therefore owns the window.
Use [AttachThreadInput](http://msdn.microsoft.com/en-us/library/ms681956(VS.85).aspx).
Can the HWND from CreateWindow/CreateDialog be GetMessage'd from another thread?
[ "c++", "c", "windows", "multithreading", "winapi" ]
Why does `SELECT * FROM TABLE ORDER BY RAND()` Work? I thought `ORDER BY` only works for columns. So what exactly does it mean to `ORDER BY RAND()` or `ORDER BY SUM()`?
`ORDER BY` will work with any value you can put in your results (though it doesn't have to be one of the values in the results). This can be a column in any of the source tables, or a value calculated using a function. For example, you could use `ORDER BY UPPER(name)` for a case-insensitive sort. If you `ORDER BY RAND()` you're ordering by a random number generated for each row in the results, i.e. returning the rows in a random order. If you're ordering by `SUM()` you've probably got a `GROUP BY` in there too, so you could order customers by their total invoice amount, for example. Ideally you want to order by an indexed column, as this will be much faster.
**You can order by nearly everything**: functions (*like RAND()*), aggregates (*like SUM()*) and so on. For example, the [MySQL documentation](http://dev.mysql.com/doc/refman/5.1/en/select.html) states as syntax: ``` [ORDER BY {col_name | expr | position}] ``` And the [PostgreSQL documentation](http://www.postgresql.org/docs/8.2/static/sql-select.html) is even more explicit: > The optional **ORDER BY** clause has this general form: > > `ORDER BY expression [ ASC | DESC | USING operator ] [, ...]` > > expression can be the name or ordinal number of an output column (SELECT list item), **or it can be an arbitrary expression** formed from input-column values. For your second question: * `ORDER BY RAND()` or `ORDER BY RANDOM()` does what it says: your rows are shuffled and you get them back in a random order. So if you do a `SELECT * FROM ... ORDER BY RAND() LIMIT 1` you select a (= one) random row out of your table. * `ORDER BY SUM(Column)` only makes sense in combination with a `GROUP BY` clause.
Does ORDER BY only work with columns?
[ "sql" ]
I'm looking to create a form on my website where users can enter their email address and a message (however long). Then, upon clicking a submit button, the code sends the message as an email to my mailbox. I was wondering if someone knows what code I could use. I am using PHP as my server-side language.
You need to have access to an SMTP server somewhere. Assuming you do, use the PHP `mail` function to send the mail like so: ``` $Name = "John doe"; //senders name $email = "email@adress.com"; //senders e-mail adress $recipient = "recipient@emailadress.com"; //recipient $mail_body = "The text for the mail..."; //mail body $subject = "Subject for receiver"; //subject $header = "From: ". $Name . " <" . $email . ">\r\n"; //optional headerfields mail($recipient, $subject, $mail_body, $header); ``` The SMTP server is set in the php.ini file in these two lines: ``` SMTP = servername smtp_port = 25 ``` More information at the [w3schools site](http://www.w3schools.com/PHP/php_mail.asp).
While the above two answers provide a basic email-sending approach, there's one thing you should consider: the code is not secure. Spammers can inject `Cc:` headers and send spam using the form. If they do, your SMTP provider may ban your account. Try a dedicated mailer, like [PHPMailer](http://phpmailer.codeworxtech.com/).
How do i create an email contact form for my website
[ "php", "email", "forms", "sendmail", "contacts" ]
I have a function that generates a CRC check byte based on the content of any packet. The problem is in translating the function from C++ to C#. C++ code: ``` unsigned char GenerateCheckByte( char* packet, int length, unsigned long seed ) { if( !packet ) return 0; unsigned long checksum = 0xFFFFFFFF; length &= 0x7FFF; char* ptr = packet; unsigned long moddedseed = seed << 8; for( int i = 0; i < length; i++ ) checksum = ( checksum >> 8 ) ^ table[moddedseed + ( ( *(ptr++) ^ checksum ) & 0xFF )]; unsigned char result = ( (checksum>>24)&0xFF ) + ( (checksum>>8)&0xFF ) + ( (checksum>>16)&0xFF ) + ( checksum&0xFF ); return result; } ``` The `char*` (packet) can also be defined as LPBYTE; the idea is that the value assigned to `*packet` is assigned to `*ptr`, and as you can see `*ptr` increases, meaning a byte array is passed in and, by incrementing the pointer, it moves to the next byte. I tried to do it in C# and failed many times. After some hard work I figured out some code, but I can't execute it. C# code: ``` public static unsafe byte GenerateCheckByte(byte *packet, int length, UInt32 seed ) { if (*packet == 0) return 0; UInt32 checksum = 0xFFFFFFFF; length &= 0x7FFF; byte *ptr = packet; UInt32 moddedseed = seed << 8; for (int i = 0; i < length; i++) checksum = ( checksum >> 8 ) ^ Table.table[moddedseed + ( ( *(ptr++) ^ checksum ) & 0xFF )]; byte result = (byte)(( (checksum>>24)&0xFF ) + ( (checksum>>8)&0xFF ) + ( (checksum>>16)&0xFF ) + ( checksum&0xFF )); return result; } ``` It doesn't look that bad, but I can't call it: ``` unsafe { packetBuffer[5] = Functions.GenerateCheckByte(&packetBuffer[0], 18, packet.seedCRC); } ``` error: "You can only take the address of an unfixed expression inside of a fixed statement initializer" **Please note** packetBuffer in both the C++ and C# applications is `byte[] packetBuffer = new byte[18];`
You could make the method accept a byte array: ``` public static unsafe byte GenerateCheckByte(byte[] packetArray, int length, UInt32 seed) { fixed(byte *packet = packetArray) { ... etc } } ``` It's better to keep the unsafe stuff hidden away as much as possible behind managed interfaces. Then calling it would be easy: ``` packetBuffer[5] = Functions.GenerateCheckByte(packetBuffer, 18, ... ``` In fact, it would be better to write `GenerateCheckByte` to operate on an array anyway, instead of delving into `unsafe` techniques: ``` public static unsafe byte GenerateCheckByte(byte[] packet, int length, UInt32 seed ) { if (packet == null) throw new ArgumentNullException("packet"); // the right way in C# UInt32 checksum = 0xFFFFFFFF; length &= 0x7FFF; UInt32 moddedseed = seed << 8; for (int i = 0; i < length; i++) checksum = ( checksum >> 8 ) ^ Table.table[moddedseed + ( ( packet[i] ^ checksum ) & 0xFF )]; byte result = (byte)(( (checksum>>24)&0xFF ) + ( (checksum>>8)&0xFF ) + ( (checksum>>16)&0xFF ) + ( checksum&0xFF )); return result; } ``` Write the simplest, safest implementation you can, and only mess with pointers if you find a bottleneck in profiling. Are you just translating a lot of existing C/C++ into C#? There's little point doing that unless you get some new safety/maintainability from it. :)
You shouldn't have to use unsafe code at all. If you send in a byte array to the function, it can access it without using pointers. I haven't tested the code, but it should be something like this: ``` byte GenerateCheckByte(byte[] packet, ulong seed) { if (packet == null) return 0; int length = packet.Length & 0x7FFF; ulong checksum = 0xFFFFFFFF; ulong moddedseed = seed << 8; for (int i = 0; i < length; i++) { checksum = (checksum >> 8) ^ table[moddedseed + ((packet[i] ^ checksum) & 0xFF)]; } return (byte)( ((checksum >> 24) & 0xFF) + ((checksum >> 16) & 0xFF) + ((checksum >> 8) & 0xFF) + (checksum & 0xFF) ); } ```
How to assign byte[] as a pointer in C#
[ "c#", "pointers", "byte" ]
For this question, we want to avoid having to write a special query since the query would have to be different across multiple databases. Using only hibernate criteria, we want to be able to escape special characters. This situation is the reason for needing the ability to escape special characters: Assume that we have table 'foo' in the database. Table 'foo' contains only 1 field, called 'name'. The 'name' field can contain characters that may be considered special in a database. Two examples of such a name are 'name\_1' and 'name%1'. Both the '\_' and '%' are special characters, at least in Oracle. If a user wants to search for one of these examples after they are entered in the database, problems may occur. ``` criterion = Restrictions.ilike("name", searchValue, MatchMode.ANYWHERE); return findByCriteria(null, criterion); ``` In this code, 'searchValue' is the value that the user has given the application to use for its search. If the user wants to search for '%', the user is going to be returned with every 'foo' entry in the database. This is because the '%' character represents the "any number of characters" wildcard for string matching and the SQL code that hibernate produces will look like: ``` select * from foo where name like '%' ``` Is there a way to tell hibernate to escape certain characters, or to create a workaround that is not database type specific?
LikeExpression's constructors are all protected, so it's not a viable option. Also, it has [problems of its own](http://opensource.atlassian.com/projects/hibernate/browse/HHH-2997). A colleague and I created a patch which works pretty well. The gist of the patch is that for the LikeExpression constructor which consumes a MatchMode, we escape the special characters. For the constructor which consumes a Character (the escape character), we assume the user escapes the special characters on their own. We also parameterized the escape character to ensure that it can't corrupt the SQL query if they use something like \ or a quote character. ``` package org.hibernate.criterion; import org.hibernate.Criteria; import org.hibernate.HibernateException; import org.hibernate.dialect.Dialect; import org.hibernate.engine.TypedValue; public class LikeExpression implements Criterion { private final String propertyName; private final String value; private final Character escapeChar; protected LikeExpression( String propertyName, Object value) { this(propertyName, value.toString(), (Character) null); } protected LikeExpression( String propertyName, String value, MatchMode matchMode) { this( propertyName, matchMode.toMatchString( value .toString() .replaceAll("!", "!!") .replaceAll("%", "!%") .replaceAll("_", "!_")), '!' ); } protected LikeExpression( String propertyName, String value, Character escapeChar) { this.propertyName = propertyName; this.value = value; this.escapeChar = escapeChar; } public String toSqlString( Criteria criteria, CriteriaQuery criteriaQuery) throws HibernateException { Dialect dialect = criteriaQuery.getFactory().getDialect(); String[] columns = criteriaQuery.getColumnsUsingProjection( criteria, propertyName ); if ( columns.length != 1 ) { throw new HibernateException( "Like may only be used with single-column properties" ); } String lhs = lhs(dialect, columns[0]); return lhs + " like ?" + ( escapeChar == null ? "" : " escape ?" 
); } public TypedValue[] getTypedValues( Criteria criteria, CriteriaQuery criteriaQuery) throws HibernateException { return new TypedValue[] { criteriaQuery.getTypedValue( criteria, propertyName, typedValue(value) ), criteriaQuery.getTypedValue( criteria, propertyName, escapeChar.toString() ) }; } protected String lhs(Dialect dialect, String column) { return column; } protected String typedValue(String value) { return value; } } ``` If you're wondering what the lhs and typedValue methods are for, the new IlikeExpression should answer those questions. ``` package org.hibernate.criterion; import org.hibernate.dialect.Dialect; public class IlikeExpression extends LikeExpression { protected IlikeExpression( String propertyName, Object value) { super(propertyName, value); } protected IlikeExpression( String propertyName, String value, MatchMode matchMode) { super(propertyName, value, matchMode); } protected IlikeExpression( String propertyName, String value, Character escapeChar) { super(propertyName, value, escapeChar); } @Override protected String lhs(Dialect dialect, String column) { return dialect.getLowercaseFunction() + '(' + column + ')'; } @Override protected String typedValue(String value) { return super.typedValue(value).toLowerCase(); } } ``` After this, the only thing left is to make Restrictions use these new classes: ``` public static Criterion like(String propertyName, Object value) { return new LikeExpression(propertyName, value); } public static Criterion like(String propertyName, String value, MatchMode matchMode) { return new LikeExpression(propertyName, value, matchMode); } public static Criterion like(String propertyName, String value, Character escapeChar) { return new LikeExpression(propertyName, value, escapeChar); } public static Criterion ilike(String propertyName, Object value) { return new IlikeExpression(propertyName, value); } public static Criterion ilike(String propertyName, String value, MatchMode matchMode) { return new 
IlikeExpression(propertyName, value, matchMode); } public static Criterion ilike(String propertyName, String value, Character escapeChar) { return new IlikeExpression(propertyName, value, escapeChar); } ``` Edit: Oh yeah. This works for Oracle. We're not sure about other databases though.
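Independent of the patch, the escaping it performs can be factored into a small standalone helper (a sketch; the `!` escape character and the method name are arbitrary choices). Note that `!` must be doubled first, otherwise the escape characters introduced by the later replacements would themselves be re-escaped:

```java
public class LikeEscaper {
    /** Escape LIKE wildcards so user input matches literally; '!' is the escape character. */
    public static String escapeLike(String input) {
        return input.replace("!", "!!")   // must come first
                    .replace("%", "!%")
                    .replace("_", "!_");
    }

    public static void main(String[] args) {
        System.out.println(escapeLike("name%1")); // name!%1
        System.out.println(escapeLike("name_1")); // name!_1
        System.out.println(escapeLike("50%!"));   // 50!%!!
    }
}
```

The escaped value can then be handed to the escape-character constructor (which, per the patch above, assumes the caller has already escaped the special characters).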
It's not a very clean way to do it, but an `sqlRestriction` should be easier: ``` criterions.add(Restrictions.sqlRestriction(columnName + " ilike '!%' escape '!'")); ``` You can even do a starts-with search using the same principle: ``` criterions.add(Restrictions.sqlRestriction(columnName + " ilike '!%%' escape '!'")); ```
Using hibernate criteria, is there a way to escape special characters?
[ "java", "database", "hibernate", "criteria", "escaping" ]
Does anyone know of an open source library that provides an FTP client with TLS and SSL for .NET? We are using a commercial library now but we are not very happy with it, so we are thinking of switching. So instead of rolling our own, are there any LGPL (or equivalent) libraries out there? Or, if we have to roll our own (it will be under an LGPL, BSD, or Apache-style licence), does anyone know of a half-finished code base we can start with?
Take a look at these libraries if FtpWebRequest with SSL enabled is not sufficient: * <http://code.google.com/p/sshsync/> - SshSync : Directory synchronisation via SSH * <http://ftpclient.codeplex.com> - FtpRequest library for .NET 2.0 * <http://www.rebex.net/ftp.net/>
Well, [FtpWebRequest](http://msdn.microsoft.com/en-us/library/system.net.ftpwebrequest.aspx) supports SSL via [EnableSsl](http://msdn.microsoft.com/en-us/library/system.net.ftpwebrequest.enablessl.aspx) (using "`AUTH TLS`") - is that sufficient?
Any OSS .Net FTP (client) with TLS and SSL?
[ "c#", ".net", "open-source", "ftp" ]
Back in 2005, Quirksmode.com released this article: <http://www.quirksmode.org/dom/classchange.html> which showed "proof" that changing the style of an element by changing its class (i.e. `elem.className = x`) was almost twice as fast as changing its style via its style property (i.e. `elem.style.someStyle = x`), except in Opera. As a result of that article, we started using a className-based solution to do things like showing/hiding elements on our site. The problem is, one of our developers would much rather use jQuery's equivalent methods for this kind of thing (i.e. `$(something).hide()`), and I'm having a hard time convincing him that our className-based function is worth using, as I can only find a single article, written four years ago. Does anyone know of any more recent or more comprehensive investigations into this issue?
[There is a flaw in the benchmark that article uses](http://webkit.org/blog/13/classname-vs-style/). In my personal experience I've never seen a case where updating a className outperforms inline style setting. I have no concrete proof of this (I do vaguely remember an article I'm going to try to dig up), but I have noticed that large clientside apps (for example gmail, or google maps) prefer setting inline styles to classNames, and it was in the context of analysis of these apps that I first heard of the speed increase in doing so. Note that I am not promoting one over the other: setting the className dynamically goes a long way in terms of maintainability/readability and separating concerns.
Micro-optimization is evil. I think unless you are hiding a seriously large amount of elements at once or something, the difference in milliseconds is unimportant if by some chance that article is still relevant nowadays. With that in mind, I would go with jQuery's methods as they are battle tested and more concise.
Javascript: Performance of .className Changes vs. .style Changes
[ "javascript", "performance" ]
Can anyone help? I have 2 applications and they are nearly identical: they have completely different presentation layers (web), while the business logic and database are nearly identical. Basically one app has some things that the other doesn't, so I was wondering if I can do the following without breaking any rules: Each app has its own presentation layer. Each app has its own service layer. Each app has its own data access layer. Each app shares another service layer. Hence both apps can access the shared service layer, since the business logic is identical, but each also has another service layer containing the 5 or 6 methods that are specific to that app. The data access layer: I don't see any way of sharing this, as there are 2 different DBs with Entity Framework, hence the EDM is in there, so it's not dynamic, and the tables differ slightly. I could use IoC on the shared data access layer, I suppose. Would anyone comment on whether this is good practice? What I didn't want to do is have only a separate service layer when a lot of it is shared. Is this a good idea? Maybe I have got it wrong; is there a better way?
As Arjen de Blok said, your business entities should use a repository: an object with methods to query, update, or insert domain entities. The interface which describes your repository belongs to your domain layer, but the implementation belongs to the infrastructure layer (DAL). You can share the domain and infrastructure libraries between your two projects. If these two projects should retrieve their data through a shared web service or a shared database, you just have to choose (i.e. inject) the correct implementation of your repository (your domain objects know only about the interface of your repository, not about the concrete type).
If the business logic is mostly identical then you should focus on this first. If you want to do DDD then you should identify your entities and (business) services first and place these in a single library. These entities and business services should talk to your infrastructure layer (your DAL). If the infrastructure layer is very different in these two applications then try to work with interfaces: wrap the infrastructure layer with interfaces and only talk from the domain layer to your infrastructure layer via these interfaces. To bind your business logic to your infrastructure's implementation you could use IoC/DI.
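The "talk to the infrastructure only through interfaces, then inject the implementation" idea is language-agnostic; here is a minimal Python sketch of it (all class and method names are invented for the illustration, not taken from the question's codebase):

```python
from abc import ABC, abstractmethod

# The domain layer depends only on this abstract repository interface.
class UserRepository(ABC):
    @abstractmethod
    def find_name(self, user_id):
        ...

# One concrete implementation per application/infrastructure.
class InMemoryUserRepository(UserRepository):
    def __init__(self, rows):
        self._rows = rows

    def find_name(self, user_id):
        return self._rows[user_id]

# Shared business service: receives the repository via constructor injection,
# so either app can plug in its own data access implementation.
class GreetingService:
    def __init__(self, repo):
        self._repo = repo

    def greet(self, user_id):
        return "Hello, " + self._repo.find_name(user_id)

repo = InMemoryUserRepository({1: "Alice"})
service = GreetingService(repo)
print(service.greet(1))
```

Each application would construct the shared service with its own repository implementation; the shared library never references a concrete DAL type.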
Repository Pattern with 2 services & 2 dataccess layers - C# DDD?
[ "c#", "frameworks", "entity", "repository-pattern" ]
Are there any free tools available for generating a class diagram from C++ source files, and if possible for MFC source files too?
We use [doxygen](http://www.doxygen.nl) with [graphviz](http://www.graphviz.org/) support
You could try SourceNavigator. I'm not sure what the current state of the project is, but [here's a place to start](http://sourcenav.sourceforge.net/).
Create class diagram from c++ source?
[ "c++", "class-design" ]
I have an ASP button that looks like this:

```
<asp:Button ID="btnReset" runat="server"
    OnClientClick = "hideOverlay('<%=pnlOverlay.ClientID %>', '<%=pnlAddComment.ClientID %>');"
    CssClass ="btnCancel PopUpButton" />
```

The problem is the ASP tags in the hideOverlay part; I can't get it working. Why isn't it working, and how do I fix it?
Try below examples

First Example

In aspx

```
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:Button ID="btnReset" runat="server" CssClass="btnCancel PopUpButton" />
        <asp:Panel ID="pnlOverlay" runat="server">
        </asp:Panel>
        <asp:Panel ID="pnlAddComment" runat="server">
        </asp:Panel>
    </div>
    </form>
</body>
</html>
```

In Codebehind

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Default10 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        btnReset.Attributes.Add("onclick",
            string.Format("hideOverlay('{0}','{1}')", pnlOverlay.ClientID, pnlAddComment.ClientID));
    }
}
```

It will generate the below source for the button

```
<input type="submit" name="btnReset" value="" onclick="hideOverlay('pnlOverlay','pnlAddComment');" id="btnReset" class="btnCancel PopUpButton" />
```

Second Example

In Aspx

```
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:Button ID="btnReset" runat="server" CssClass="btnCancel PopUpButton"
            OnClientClick=<%# "hideOverlay('" + pnlOverlay.ClientID + "', '" + pnlAddComment.ClientID +"');" %> />
        <asp:Panel ID="pnlOverlay" runat="server">
        </asp:Panel>
        <asp:Panel ID="pnlAddComment" runat="server">
        </asp:Panel>
    </div>
    </form>
</body>
</html>
```

In CodeBehind

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Default10 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        btnReset.DataBind();
    }
}
```

It will generate the below source for the button

```
<input type="submit" name="btnReset" value="" onclick="hideOverlay('pnlOverlay', 'pnlAddComment');" id="btnReset" class="btnCancel PopUpButton" />
```
Third Example

In aspx

```
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:Button ID="btnReset" runat="server" CssClass="btnCancel PopUpButton" OnClientClick="hideOverlay();" />
        <asp:Panel ID="pnlOverlay" runat="server">
        </asp:Panel>
        <asp:Panel ID="pnlAddComment" runat="server">
        </asp:Panel>
    </div>
    </form>
</body>
<script type="text/javascript" >
    function hideOverlay() {
        var pnlOverlayID = '<%= pnlOverlay.ClientID %>';
        var pnlAddCommentID = '<%= pnlAddComment.ClientID %>';
        //Do your stuff
    }
</script>
</html>
```

It will generate the following source for the script portion

```
<script type="text/javascript" >
    function hideOverlay() {
        var pnlOverlayID = 'pnlOverlay';
        var pnlAddCommentID = 'pnlAddComment';
        //Do your stuff
    }
</script>
```
Try replacing "=" with "#" in your inline code, e.g. **<%=pnlOverlay.ClientID %>** => **<%#pnlOverlay.ClientID %>**, so that the **ClientID** is evaluated as a data-binding expression. [OnClientClick](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.button.onclientclick.aspx) is only used to call client-side script such as JavaScript code. If you are trying to call a method in code-behind, you should use the **OnClick** event.
C# OnClientClick doesn't render asp tags?
[ "c#", "onclientclick" ]
In JavaScript, how would you check if an element is actually visible? I don't just mean checking the `visibility` and `display` attributes. I mean, checking that the element is not:

* `visibility: hidden` or `display: none`
* underneath another element
* scrolled off the edge of the screen

For technical reasons I can't include any scripts. I can however use [Prototype](http://en.wikipedia.org/wiki/Prototype_JavaScript_Framework) as it is on the page already.
For point 2, I see that no one has suggested `document.elementFromPoint(x,y)`; to me it is the fastest way to test if an element is nested in or hidden by another. You can pass the offsets of the targeted element to the function. Here's PPK's test page on [elementFromPoint](http://www.quirksmode.org/dom/tests/elementfrompoint.html). From [MDN's documentation](https://developer.mozilla.org/en-US/docs/Web/API/DocumentOrShadowRoot/elementFromPoint): > The `elementFromPoint()` method—available on both the Document and ShadowRoot objects—returns the topmost Element at the specified coordinates (relative to the viewport).
I don't know how much of this is supported in older or not-so-modern browsers, but I'm using something like this (without the need for any libraries):

```
function visible(element) {
    if (element.offsetWidth === 0 || element.offsetHeight === 0) return false;
    var height = document.documentElement.clientHeight,
        rects = element.getClientRects(),
        on_top = function(r) {
            var x = (r.left + r.right)/2, y = (r.top + r.bottom)/2;
            return document.elementFromPoint(x, y) === element;
        };
    for (var i = 0, l = rects.length; i < l; i++) {
        var r = rects[i],
            in_viewport = r.top > 0 ? r.top <= height : (r.bottom > 0 && r.bottom <= height);
        if (in_viewport && on_top(r)) return true;
    }
    return false;
}
```

It checks that the element has an area > 0, and then it checks if any part of the element is within the viewport and that it is not hidden "under" another element (actually I only check a single point in the center of the element, so it's not 100% assured -- but you could just modify the script to iterate over all the points of the element, if you really need to...).

*Update*

Modified on\_top function that checks every pixel:

```
on_top = function(r) {
    for (var x = Math.floor(r.left), x_max = Math.ceil(r.right); x <= x_max; x++)
        for (var y = Math.floor(r.top), y_max = Math.ceil(r.bottom); y <= y_max; y++) {
            if (document.elementFromPoint(x, y) === element) return true;
        }
    return false;
};
```

Don't know about the performance :)
How do I check if an element is really visible with JavaScript?
[ "javascript", "dom", "visibility" ]
How can I get the age of someone given their date of birth as a C# DateTime? **I want a precise age, like 40.69 years old.**
This will calculate the exact age. The fractional part of the age is calculated relative to the number of days between the last and the next birthday, so it will handle leap years correctly. The fractional part is linear across the year (and doesn't take into account the different lengths of the months), which seems to make most sense if you want to express a fractional age.

```
// birth date
DateTime birthDate = new DateTime(1968, 07, 14);

// get current date (don't call DateTime.Today repeatedly, as it changes)
DateTime today = DateTime.Today;

// get the last birthday
int years = today.Year - birthDate.Year;
DateTime last = birthDate.AddYears(years);
if (last > today)
{
    last = last.AddYears(-1);
    years--;
}

// get the next birthday
DateTime next = last.AddYears(1);

// calculate the number of days between them
double yearDays = (next - last).Days;

// calculate the number of days since last birthday
double days = (today - last).Days;

// calculate exact age
double exactAge = (double)years + (days / yearDays);
```
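For reference, the same last-birthday/next-birthday computation translates directly to other languages. Here is a Python sketch of the identical algorithm (it sidesteps the 29 February edge case, which `DateTime.AddYears` handles in the C# version, so treat it as an illustration rather than a drop-in port):

```python
from datetime import date

def exact_age(birth, today):
    # whole years elapsed since the last birthday
    years = today.year - birth.year
    last = birth.replace(year=birth.year + years)
    if last > today:
        years -= 1
        last = birth.replace(year=birth.year + years)
    # fraction of the current birthday-to-birthday year that has elapsed
    nxt = last.replace(year=last.year + 1)
    return years + (today - last).days / (nxt - last).days

print(exact_age(date(2000, 1, 1), date(2010, 1, 1)))   # exactly on a birthday
print(exact_age(date(1968, 7, 14), date(2009, 3, 31))) # fractional age
```

On a birthday the fraction is exactly zero, and the divisor is 365 or 366 depending on whether the current birthday year contains a leap day, matching the C# behaviour.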
This would be an approximate calculation:

```
TimeSpan span = DateTime.Today.Subtract(birthDate);
Console.WriteLine( "Age: " + (span.TotalDays / 365.25).ToString() );
```

BTW: see also this question on Stack Overflow: [How do I calculate someone’s age in C#?](https://stackoverflow.com/questions/9/how-do-i-calculate-someones-age-in-c)
Age in years with decimal precision given a datetime
[ "c#", "datetime" ]
I've looked through the various questions on unit testing but can't find one that specifically answers this question. I've got several PHP classes that contain functions that look like this:

```
static function _setSuspended($Suspended, $UserID)
{
    try {
        $con = Propel::getConnection();

        $c1 = new Criteria();
        $c1->add(DomainsPeer::USERID, $UserID);

        $update = new Criteria();
        $update->add(DomainsPeer::SUSPENDED, $Suspended);

        BasePeer::doUpdate($c1, $update, $con);
        return true;
    }
    catch(PropelException $e) {
        return $e->getMessage();
    }
}
```

I'm using Propel as my ORM. I've read through various unit testing topics that talk about creating 'Mocks' and 'Stubs' and whatnot, but I have not been able to find anything that specifically tells you how to test a function like the above. My thinking goes something like: I need to test the function above, so I would want to call it. But if I call it, it uses Propel as the ORM, and according to unit testing principles I should isolate each function by itself. I just don't see a way to do that. What am I missing here?
I've found that mocking the ORM doesn't give me any confidence because the ORM configuration never gets tested. ORMs also have lots of action at a distance effects which can give false confidence with unit tests. Mocking the database driver or providing an alternate in-memory database gives me much higher confidence my code is correct and is about as hard as mocking the ORM. SQLite is a great in-memory database for unit testing. It's on the PDO supported database list. (PDO is the Propel 1.3 database driver.) If you don't want to use an in-memory database, you might be able to find a PDO mock already written.
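The in-memory-database approach looks roughly like this in any language. Here is a Python/SQLite sketch (a simplified stand-in for the `_setSuspended` function, not the Propel code itself; the table and function names are assumptions for the demo):

```python
import sqlite3

def set_suspended(con, suspended, user_id):
    """Data-access function under test: flip the suspended flag for a user."""
    con.execute("UPDATE domains SET suspended = ? WHERE user_id = ?",
                (suspended, user_id))
    con.commit()
    return True

# The test wires the function to a throwaway in-memory database:
# fast, isolated, and it exercises real SQL instead of a mock.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE domains (user_id INTEGER, suspended INTEGER)")
con.execute("INSERT INTO domains VALUES (7, 0)")

result = set_suspended(con, 1, 7)
print(result, con.execute(
    "SELECT suspended FROM domains WHERE user_id = 7").fetchone()[0])
```

The key move is that the connection is passed in (or otherwise injectable), so the test controls which database the function talks to.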
This is a generic answer in that I'm not familiar with Propel at all and only somewhat more familiar with PHP. The basic answer is that you use dependency injection. Instead of referring directly to your ORM, you create a wrapper around it, then inject the wrapper into your class/function to actually use. To do unit testing, then you create a mock or fake version of the wrapper that doesn't interface to the ORM but instead lets you configure the responses from the wrapper to your method invocations. This allows you to factor out the ORM when unit testing your functions.
How do you do Unit Testing With an App that Uses an ORM?
[ "php", "unit-testing", "phpunit", "propel" ]
I'm using PHP and phpMyAdmin to create a small profile site. I'm giving members an ID number based on the biggest number currently in the database, plus one.

I did 25 tests before I got the PHP script where I wanted it to be. I then deleted those 25 entries using phpMyAdmin. But now, when my PHP code does this:

```
function getLatestID() {
    $query = "SELECT max(member_id) FROM members";
    $result = @mysql_query($query) or showError("unable to query database for user information");
    if (!($record = mysql_fetch_array($result)))
        return null;
    return $record[0];
}
```

I get the wrong number.

Test scenario: the database table holds 3 entries, with IDs 1, 2 and 3. I start a debugging session and put a breakpoint on the `return $record[0]`. I check its contents and instead of 3, which is the biggest number, it's 28. As in 25+3=28, counting the 25 entries that I already deleted...

Does anybody know what's causing this and how I can fix it?
It's probably because you have auto\_increment set and the query is returning the highest id. When you deleted the other records, you probably didn't reset the auto increment count.
If you're using `auto_increment` in MySQL then deleting records won't decrease the next value. You can empty a table with `TRUNCATE TABLE mytable` - this will reset the value.
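The behaviour is easy to reproduce outside MySQL. SQLite's `AUTOINCREMENT` keeps the same kind of persistent counter, so this Python sketch shows `max(id)` reflecting the counter rather than the surviving rows (column names mirror the question, the rest is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# AUTOINCREMENT makes SQLite behave like MySQL's auto_increment:
# the counter keeps climbing even after rows are deleted.
con.execute("CREATE TABLE members "
            "(member_id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
for name in ("a", "b", "c"):
    con.execute("INSERT INTO members (name) VALUES (?)", (name,))

con.execute("DELETE FROM members")                  # wipe all three rows
con.execute("INSERT INTO members (name) VALUES ('d')")

new_id = con.execute("SELECT max(member_id) FROM members").fetchone()[0]
print(new_id)  # 4, not 1: max(id) follows the counter, not the row count
```

This is also why `SELECT max(id)` is a fragile way to assign the next ID; letting the database's auto-increment do it avoids both this surprise and race conditions.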
MySQL query in PHP gives obvious wrong result
[ "php", "mysql", "phpmyadmin" ]
I have three javascript files that I want to merge into a single file. Is it possible to just copy paste all of the code into one file or will there be namespace conflicts and other problems? EDIT: I'm worried that each file acts like a namespace encapsulating each file's code, and that this encapsulating will cease to existing if I merge the files.
If the script files were all loaded in the `<head>` and you paste them in the same order they appeared in the HTML then there shouldn't be any problems. Having said that, if they use document.write I'm not sure...
If they all work when loaded sequentially, it makes no difference if you concatenate them into a single file and use that instead. Just make sure you put them together in the same order as when you load them separately.
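The merge really is that mechanical. A small Python sketch of a build step that bundles scripts in their original load order (the file names and contents are invented for the example):

```python
import os
import tempfile

# Three tiny "scripts" whose order matters: later ones use earlier definitions.
sources = ["var a = 1;\n", "var b = a + 1;\n", "console.log(a + b);\n"]

tmpdir = tempfile.mkdtemp()
paths = []
for i, text in enumerate(sources):
    path = os.path.join(tmpdir, "script%d.js" % i)
    with open(path, "w") as f:
        f.write(text)
    paths.append(path)

# Concatenate in load order; the bundle is byte-for-byte the sequential scripts.
bundle_path = os.path.join(tmpdir, "bundle.js")
with open(bundle_path, "w") as bundle:
    for path in paths:
        with open(path) as f:
            bundle.write(f.read())

merged = open(bundle_path).read()
print(merged)
```

Since each original file ran in the shared global scope anyway, the concatenated bundle sees exactly the same names in the same order.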
JavaScript: merging files without problems?
[ "javascript", "performance", "optimization" ]
In an answer to [Is it safe to store objects of a class which has an std::auto\_ptr as its member variable in std::vector?](https://stackoverflow.com/questions/700588/is-it-safe-to-store-objects-of-a-class-which-has-an-stdautoptr-as-its-member-v) I stated that a class that contained an auto\_ptr could be stored in a vector **provided the class had a user-defined copy constructor**.

There were several comments suggesting that this was not the case, so this question is an attempt to clear the issue up. Consider the following code:

```
#include <memory>
#include <vector>
using namespace std;

struct Z {};

struct A {
    A( Z z ) : p( new Z(z) ) {}
    A( const A & a ) : p( a.p.get() ? new Z( *a.p.get()) : 0 ) {}
    // no assigment op or dtor defined by intent
    auto_ptr <Z> p;
};

int main() {
    vector <A> av;
    Z z;
    A a(z);
    av.push_back( a );
    av.push_back( A(z) );
    av.clear();
}
```

Please examine the above and in your reply indicate where undefined behaviour in the meaning of the C++ Standard could occur for this particular class used in this particular way. I am not interested in whether the class is useful, well-behaved, sortable, or how it performs under exceptions. **Please also note that this is not a question about the validity of creating a vector of auto\_ptrs - I am well aware of the issues regarding that.**

> Thanks all for your inputs on what in retrospect is probably a rather silly question. I guess I focussed too much on the copy ctor and forgot about assignment. The lucky winner of my acceptance points (and points mean prizes!) is **litb** for a typically exhaustive explanation (sorry earwicker)
Trying to put the list of places together that makes the example undefined behavior.

```
#include <memory>
#include <vector>
using namespace std;

struct Z {};

struct A {
    A( Z z ) : p( new Z(z) ) {}
    A( const A & a ) : p( a.p.get() ? new Z( *a.p.get()) : 0 ) {}
    // no assigment op or dtor defined by intent
    auto_ptr <Z> p;
};

int main() {
    vector <A> av;
    ...
}
```

I will examine the lines up to the one where you instantiate the vector with your type `A`. The Standard has to say

In `23.1/3`:

> The type of objects stored in these components must meet the requirements of CopyConstructible types (20.1.3), and the additional requirements of Assignable types.

In `23.1/4` (emphasis mine):

> In Table 64, T is the type used to instantiate the container, t is a value of T, and u is a value of (*possibly const*) T.
>
> ```
> +-----------+---------------+---------------------+
> |expression |return type    |postcondition        |
> +-----------+---------------+---------------------+
> |t = u      |T&             |t is equivalent to u |
> +-----------+---------------+---------------------+
> ```
>
> Table 64

In `12.8/10`:

> If the class definition does not explicitly declare a copy assignment operator, one is declared implicitly. The implicitly-declared copy assignment operator for a class X will have the form
>
> ```
> X& X::operator=(const X&)
> ```
>
> if
>
> * each direct base class B of X has a copy assignment operator whose parameter is of type const B&, const volatile B& or B, and
> * for all the nonstatic data members of X that are of a class type M (or array thereof), each such class type has a copy assignment operator whose parameter is of type const M&, const volatile M& or M.
> Otherwise, the implicitly declared copy assignment operator will have the form
>
> ```
> X& X::operator=(X&)
> ```

(Note the last and second-last sentence.)

In `17.4.3.6/1 and /2`:

> In certain cases (replacement functions, handler functions, operations on types used to instantiate standard library template components), the C++ Standard Library depends on components supplied by a C++ program. If these components do not meet their requirements, the Standard places no requirements on the implementation.
>
> In particular, the effects are undefined in the following cases:
>
> * for types used as template arguments when instantiating a template component, if the operations on the type do not implement the semantics of the applicable Requirements subclause (20.1.5, 23.1, 24.1, 26.1). Operations on such types can report a failure by throwing an exception unless otherwise specified.

Now, if you look at the specification of `auto_ptr` you will note it has a copy-assignment operator that takes a non-const `auto_ptr`. Thus, the implicitly declared copy assignment operator of your class will *also* take a non-const type as its parameter. If you read the above places carefully, you will see how it says that instantiating a vector with your type as written is undefined behavior.
Objects stored in containers are required to be "CopyConstructable" as well as "Assignable" (C++2008 23.1/3). Your class tries to deal with the CopyConstructable requirement (though I'd argue it still doesn't meet it - I edited that argument out since it's not required and because it's arguable I suppose), but it doesn't deal with the Assignable requirement. To be Assignable (C++2008 23.1/4), the following must be true where `t` is a value of `T` and `u` is a value of (possibly `const`) `T`: > `t = u` returns a `T&` and `t` is equivalent to `u` The standard also says in a note (20.4.5/3): "`auto_ptr` does not meet the CopyConstructible and Assignable requirements for Standard Library container elements and thus instantiating a Standard Library container with an `auto_ptr` results in undefined behavior." Since you don't declare or define an assignment operator, an implicit one will be provided that uses the `auto_ptr`'s assignment operator, which definitely makes `t` not equivalent to `u`, not to mention that it won't work at all for "`const T u`" values (which is what [Earwicker's answer](https://stackoverflow.com/questions/704780/class-containing-autoptr-stored-in-vector/705153#705153) points out - I'm just pointing out the exact portion(s) of the standard).
Class containing auto_ptr stored in vector
[ "c++", "stl" ]
I have an XML file similar to this:

```
<?xml version="1.0" encoding="UTF-8"?>
<data>
    <resource key="123">foo</resource>
    <resource key="456">bar</resource>
    <resource key="789">bar</resource>
</data>
```

I want to put this into a (sorted) Dictionary as key/value pairs, i.e.: 123:foo, 456:bar, etc. The keys are unknown. How can I do this?
Try this,

```
string s = "<data><resource key=\"123\">foo</resource><resource key=\"456\">bar</resource><resource key=\"789\">bar</resource></data>";

XmlDocument xml = new XmlDocument();
xml.LoadXml(s);

XmlNodeList resources = xml.SelectNodes("data/resource");
SortedDictionary<string,string> dictionary = new SortedDictionary<string,string>();
foreach (XmlNode node in resources){
    dictionary.Add(node.Attributes["key"].Value, node.InnerText);
}
```
This looks like a job for LINQ to XML:

```
static void Main(string[] args)
{
    XDocument yourDoc = XDocument.Load("the.xml");

    var q = from c in yourDoc.Descendants("resource")
            orderby (int) c.Attribute("key")
            select c.Attribute("key").Value + ":" + c.Value;

    foreach (string s in q)
        Console.WriteLine(s);

    Console.ReadLine();
}
```
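For comparison outside .NET, the same transformation in a short Python sketch, parsing the question's XML into a mapping and emitting the pairs in key order (it feeds bytes to the parser because the document carries an encoding declaration):

```python
import xml.etree.ElementTree as ET

xml_bytes = b"""<?xml version="1.0" encoding="UTF-8"?>
<data>
    <resource key="123">foo</resource>
    <resource key="456">bar</resource>
    <resource key="789">bar</resource>
</data>"""

root = ET.fromstring(xml_bytes)
# Map each resource's key attribute to its text content.
pairs = {node.get("key"): node.text for node in root.iter("resource")}

for key in sorted(pairs):          # iterate in sorted key order
    print("%s:%s" % (key, pairs[key]))
```

Like the `SortedDictionary` answer, the keys need not be known in advance; sorting happens over whatever keys the document contains.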
convert xml to sorted dictionary
[ "c#", "xml" ]
I've just replaced this piece of code:

```
foreach( var source in m_sources ) {
    if( !source.IsExhausted ) {
        ....
    }
}
```

with this one:

```
foreach( var source in m_sources.Where( src => !src.IsExhausted ) ) {
    ...
}
```

Now the code looks better (to me), but I'm wondering what's really happening here. I'm concerned about performance in this case, and it'd be bad news if applying this filter meant that some kind of compiler magic took place.

Are the two pieces of code doing basically the 'same' thing? Are temporary containers created to do the filtering and then passed to my foreach?

Any help on the subject will be much appreciated. Thanks.
The `yield return` keyword and lambdas do involve the creation of hidden classes at compile time and the allocation of extra objects at runtime, and if your background is in C or C++ then it's only natural to be concerned about performance. Natural, but wrong! [I tried measuring the overhead](http://incrediblejourneysintotheknown.blogspot.com/2009/03/amazing-speed-of-net-garbage-collector.html) for lambdas with closure over local variables, and found it to be so incredibly small (a matter of nanoseconds) that it would be of no significance in almost all applications.
It depends on the type of m\_sources.

If it is a data context from LINQ to SQL or Entity Framework, the argument you pass is compiled as an instance of Expression and parsed to create SQL (with the help of the data model). There are some real costs in this process, but they are likely (in most cases) to be dominated by the round trip to the database.

If it is IEnumerable, then Where is pretty much implemented as:

```
public static IEnumerable<T> Where(this IEnumerable<T> input, Func<T, bool> predicate)
{
    foreach (var v in input)
    {
        if (predicate(v))
        {
            yield return v;
        }
    }
}
```

which is pretty efficient and performs lazily (so if you break out of your loop early, the predicate will not be applied to the whole collection).
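The lazy, no-temporary-container behaviour is easy to demonstrate with a generator, Python's analogue of the `yield return` iterator above (the predicate and data are invented for the demo):

```python
def where(source, predicate):
    """Rough Python analogue of the LINQ-to-Objects Where iterator."""
    for item in source:
        if predicate(item):
            yield item

calls = []

def not_exhausted(n):
    calls.append(n)            # record every predicate invocation
    return n % 2 == 0

# Nothing runs until iteration starts, and breaking early stops evaluation.
for n in where(range(10), not_exhausted):
    if n >= 2:
        break

print(calls)  # the predicate never saw 3..9
```

No intermediate list is built; each element is tested exactly when the `for` loop asks for the next filtered value, which mirrors why the LINQ version costs little more than the hand-written `if`.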
What is the LINQ to objects 'where' clause doing behind the scenes?
[ "c#", ".net", "linq", "performance", "where-clause" ]
When creating a "forgotten password" mechanism, we might want to create a temporary password for the user which is stored using SHA1 (feel free to suggest other C# cryptography mechanisms). How long should we make the temporary password? Too short, and it could be brute-forced. Too long, and the extra length is redundant, since the string is hashed anyway (a 20-character and a 50-character string result in a hash of the same length).

**Update**

Sorry if this was misleading. Sure, we can pick a number out of the air, but I was wondering if there was a good mathematical reason to pick 13 rather than 12.
I think this is good advice regarding temp passwords: [The definitive guide to form-based website authentication](https://stackoverflow.com/questions/549/the-definitive-guide-to-website-authentication-beta/477583#477583) It talks about avoiding generating them in favour of getting to the real action the user wants.
I generally go with 10 characters. No particular reason for that, just something that I'd guess is above average length for a password chosen by a user. Just by the fact that it's randomly generated, it'll probably be more secure and more difficult to brute force than anything chosen by your users. People pick stupid passwords such as myspace1, stackoverflow1, 12341234 etc.
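On the mathematical side: each character drawn uniformly from a 62-symbol alphabet contributes log2(62) ≈ 5.95 bits of entropy, so 10 characters give about 59.5 bits and 13 give about 77. The jump from 12 to 13 thus adds ~6 bits, which matters far less than using a cryptographically strong generator in the first place. A hedged Python sketch (the alphabet choice is an assumption):

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits   # 62 symbols

def temp_password(length=10):
    # secrets.choice draws from a CSPRNG, unlike random.choice
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = temp_password()
entropy_bits = 10 * math.log2(len(ALPHABET))
print(pw, round(entropy_bits, 1))
```

Since the temporary password should also expire quickly and be single-use, the entropy requirement is much lower than for a long-lived credential.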
Optimum temporary password length for SHA hash
[ "c#", "sha1", "change-password" ]
### Duplicate:

> [How do you Programmatically Download a Webpage in Java?](https://stackoverflow.com/questions/238547/how-do-you-programmatically-download-a-webpage-in-java)
>
> [How to fetch html in Java](https://stackoverflow.com/questions/31462/how-to-fetch-html-in-java)

I'm developing an application in which the user inputs the URL of some website, and then the application has to analyze that URL. How can I get access to the HTML file, using Java? Do I need to use HttpRequest? How does that work? Thanks.
URLConnection is fine for simple cases. When there are things like redirections involved, you are better off using Apache's [HTTPClient](http://hc.apache.org/httpclient-3.x/)
You could just use a URLConnection. See this [Java Tutorial](http://java.sun.com/docs/books/tutorial/networking/urls/readingWriting.html) from Sun
Get HTML file in Java
[ "java" ]
I have an asynchronous operation that for various reasons needs to be triggered using an HTTP call to an ASP.NET web page. When my page is requested, it should start this operation and immediately return an acknowledgment to the client. This method is also exposed via a WCF web service, and there it works perfectly.

On my first attempt, an exception was thrown, telling me:

```
Asynchronous operations are not allowed in this context. Page starting an asynchronous operation has to have the Async attribute set to true and an asynchronous operation can only be started on a page prior to PreRenderComplete event.
```

So of course I added the `Async="true"` parameter to the `@Page` directive. Now I'm not getting an error, but the page blocks until the asynchronous operation completes. How do I get a true fire-and-forget page working?

**Edit:** Some code for more info. It's a bit more complicated than this, but I've tried to get the general idea in there.

```
public partial class SendMessagePage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string message = Request.QueryString["Message"];
        string clientId = Request.QueryString["ClientId"];

        AsyncMessageSender sender = new AsyncMessageSender(clientId, message);
        sender.Start();

        Response.Write("Success");
    }
}
```

The AsyncMessageSender class:

```
public class AsyncMessageSender
{
    private BackgroundWorker backgroundThread;
    private string client;
    private string msg;

    public AsyncMessageSender(string clientId, string message)
    {
        this.client = clientId;
        this.msg = message;

        // setup background thread to listen
        backgroundThread = new BackgroundWorker();
        backgroundThread.WorkerSupportsCancellation = true;
        backgroundThread.DoWork += new DoWorkEventHandler(backgroundThread_DoWork);
    }

    public void Start()
    {
        backgroundThread.RunWorkerAsync();
    }

    ...
    // after that it's pretty predictable
}
```
If you don't care about returning anything to the user, you can just fire up either a separate thread or, for a quick and dirty approach, a delegate invoked asynchronously. If you don't care about notifying the user when the async task finishes, you can ignore the callback. Try putting a breakpoint at the end of the SomeVeryLongAction() method, and you'll see that it finishes running after the page has already been served up:

```
private delegate void DoStuff(); //delegate for the action

protected void Page_Load(object sender, EventArgs e)
{
}

protected void Button1_Click(object sender, EventArgs e)
{
    //create the delegate
    DoStuff myAction = new DoStuff(SomeVeryLongAction);
    //invoke it asynchronously, control passes to next statement
    myAction.BeginInvoke(null, null);

    Button1.Text = DateTime.Now.ToString();
}

private void SomeVeryLongAction()
{
    for (int i = 0; i < 100; i++)
    {
        //simulation of some VERY long job
        System.Threading.Thread.Sleep(100);
    }
}
```
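The fire-and-forget shape is language-independent. Here is a hedged Python sketch of the same idea with a background thread (sleep duration and names are arbitrary; the `join` exists only so the demo can observe completion, a real handler would simply return):

```python
import threading
import time

done = []

def some_very_long_action():
    time.sleep(0.05)          # simulate slow work
    done.append(True)

# Fire-and-forget: start the worker and return to the caller immediately.
worker = threading.Thread(target=some_very_long_action, daemon=True)
worker.start()
print("response sent while work continues:", done)  # likely still empty here

worker.join()                 # demo only: wait so we can see the side effect
print("background work finished:", done)
```

The request handler's job ends at `start()`; the worker outlives the "response", which is exactly the blocking-free behaviour the question is after.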
If you are running WebForms, set Async="true" in the .aspx page where you are making the request:

`<%@ Page Language="C#" Async="true" ... %>`
Running an asynchronous operation triggered by an ASP.NET web page request
[ "c#", "asp.net", "asynchronous" ]
I'm searching for a workflow library/framework for Python. I'm astonished that I cannot find anything which is simple and not attached to Zope/Plone. Does anyone know of an open-source, simple workflow library/framework? Django support is preferred, but not required.
Try [GoFlow](http://code.djangoproject.com/wiki/GoFlow), a workflow engine for Django.
Unfortunately it seems like most/all of the projects listed here are no longer active. Here's a new project which is currently ongoing: <http://packages.python.org/django-workflows/overview.html>
Does anyone know about workflow frameworks/libraries in Python?
[ "python", "django", "workflow" ]
I am exporting data from a database using PHP to convert it into a CSV. I figured it'd be useful to provide the first row with titles (similar to the `<th>` element in HTML) so the end user would understand the columns' meanings.

Example:

```
=============
| id | name |
=============
| 0  | tim  |
| 1  | tom  |
=============
```

Which would look like this as a CSV:

```
id, name
0, tim
1, tom
```

Is there a way to mark up the first row's columns, or to do anything differently, so that programs that often read CSVs (for example Microsoft Excel) will treat them accordingly? I.e. is there a semantic hook to inform the client (possibly Excel, but not restricted to it) that this is a column header?
Nope. And to make it even more fun, there's nothing that says that the header line has to be present at all. Good times, good times...
One key thing to avoid with CSVs is using 'ID' as the first characters in the file. The lowercase 'id' or double-quoted '"ID"' is acceptable, but if Excel comes across an upper-case 'ID' at the start of a file it tries to open the file as a SYLK file and fails. (edit: note that the single quotes in the above should be ignored)
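A quick sketch of writing such a file defensively with Python's csv module, keeping the header lowercase so the file never begins with the bytes `ID`:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "name"])   # lowercase header sidesteps Excel's SYLK detection
writer.writerow([0, "tim"])
writer.writerow([1, "tom"])

text = buf.getvalue()

# Reading it back: by convention the first row is treated as the header.
rows = list(csv.reader(io.StringIO(text)))
header, data = rows[0], rows[1:]
print(header, data)
```

This also illustrates the larger point: the "header-ness" of the first row is purely a convention applied by the consumer, not anything encoded in the CSV format itself.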
Is there a way (or best practice) to markup the head (<th> equivalent) of a CSV document?
[ "php", "csv" ]
I just installed a trial version of [ReSharper](http://www.jetbrains.com/resharper/index.html) and one of the first things I noticed is that it always suggests to replace explicitly typed local variables with implicitly typed ones, e.g: ``` public string SomeMethod(int aParam) { int aNumber = SomeOtherMethod(aParam); // should be changed to: var aNumber = SomeOtherMethod(aParam); } ``` I think explicitly typed variables are more readable (more explicit). What do you think about ReSharper's suggestion? Is there any advantage in using implicitly typed variables? When do you use implicit/explict vars?
I personally only use "var" when I can clearly tell the variable's type just by reading the declaration, for example:

```
var someVariable = new List<int>();
```

In the example above, it's evident that "var" refers to "List<int>".

I don't like to use "var" when I have to go to some method definition to find out what type "var" represents, or have to rely on Visual Studio's IntelliSense popup. For example, this is not OK to me:

```
var someVariable = SomeMethod();
```

I mean, what is the "SomeMethod" function supposed to return? Can you tell just by looking at that line of code? No, you can't, and that is why I avoid using "var" in those situations.
There's a lot of discussion about this, but I think it all comes down to personal taste, just like using the 'this' keyword almost everywhere. I *personally* prefer explictly typed variables, but when using nested generic collections things can become more readable using an implicitly typed variable. Look at: ``` Dictionary<string, Dictionary<string, string>> myDictionary = new Dictionary<string, Dictionary<string, string>>(); ``` vs: ``` var myDictionary = new Dictionary<string, Dictionary<string, string>>(); ``` EDIT: this SO topic covers the same topic, with some nice replies: [What to use: var or object name type?](https://stackoverflow.com/questions/236878/what-to-use-var-or-object-name-type) EDIT2: Working a lot with async nowadays, I find that using explicity typed variables can sometimes prevent nasty bugs. Consider this silly example where you would want to return the Id of a user. Also consider that `GetUserAsync` returns a `Task<User>`. If you use implicitly typed variables, you would end up using something like this: ``` public long GetUserId() { var user = GetUserAsync(); return user.Id; } ``` This compiles, but it is wrong. 'user' is actually a `Task<User>`. And it compiles as `Task` also has an `Id` property. In this case, one would accidentally return the Id of a Task instead of the User. ``` public long GetUserId() { User user = GetUserAsync(); return user.Id; } ``` The above does not compile, as the compiler will complain that you cannot cast a Task to a User. Adding the `await` keyword of course solves this. I've actually had this happen to me once :-)
Using implicitly typed local variables
[ "", "c#", "coding-style", "implicit-typing", "" ]
I have written a library that exposes references to several related object types. All of these objects have their lifetimes managed by the library internally via `boost::shared_ptr` A user of the library would also be able to know, by nature of the library, the lifetimes of any of the exposed objects. So they could store pointers or keep references to these objects. It would be reasonable for them to do this and know when those objects are no longer valid. But I feel guilty forcing my users to be reasonable. **Is it acceptable to have a library expose `weak_ptr`'s to its objects? Have other libraries done this?** I have profiled this library's usage in apps and have found it to be too mission-critical to expose `weak_ptr` exclusively. **Would it be wiser to have matching API functions expose either a reference *or* a weak\_ptr or to make any object capable of exposing a `weak_ptr` to itself?**
If the `shared_ptr`s are already directly accessible to the library's users, then they've already got access to the `weak_ptr`s, simply via the corresponding `weak_ptr` constructor. But if the `shared_ptr`s are all internal to the library, that's a different story. In that case, I'd recommend letting each object pass out `weak_ptr`s to itself, in addition to any other access your library offers. That gives the users the most flexibility: if they need a `weak_ptr`, they've got immediate access to it; if they need a `shared_ptr`, they can easily get it; and if they just need access to the object itself, they can ignore the smart pointers entirely. Of course, I don't know what your library does or how it's used or designed. That might change my recommendation.
Coming up with convoluted mechanisms to get at the objects of your library will only result in people not using your library. If the semantics of the library dictate you need to have people using weak\_ptrs, there no way around the user knowing that the objects may go away at some point. Make the interface express as much information about the usage of the library as possible, keeps documentation down and makes it infinitely easier to use. You can't design around bad/inexperienced users.
Is it wise to provide access to weak_ptr in a library interface?
[ "", "c++", "boost", "weak-ptr", "" ]
I'm about to start on a little project where I create a C++ program to store inventory data in a file (I guess a .txt will do): • Item Description • Quantity on Hand • Wholesale Cost • Retail Cost • Date Added to Inventory I need to be able to: • Add new records to the file • Display any record in the file • Change any record in the file Is there anything I should know before I start that could make this much easier and more efficient? For example, should I try to use XML, or would that be too hard to work with via C++? I've never really understood the most efficient way of doing this. Would I search through the file and look for things in brackets or something? *EDIT* The data size shouldn't be too large. It is for homework, I guess you could say. I want to write a struct's contents into the file; how would I go about doing that?
There are many approaches. Is this for homework or for real use? If it's for homework, there are probably some restrictions on what you may use. Otherwise I suggest an embedded DBMS like [SQLite](http://www.sqlite.org/). There are others too, but this will be the most powerful solution, and will also have the easiest implementation. XML is also acceptable, and has many reusable implementations available, but it will start losing performance once you go into thousands of records. The same goes for JSON. And one might still debate which one is simpler - JSON or XML. Another possibility is to create a `struct` and write its contents directly to the file. It will get tricky though if the record size is not constant. And, if the record format changes, the file will need to be rebuilt. Otherwise this solution could be one of the best performance-wise - if implemented carefully.
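The `struct`-to-file option above can be kept simple by fixing the record size. A rough sketch of the idea using Python's `struct` module (the field widths here are invented for illustration; the same layout translates directly to a C++ POD struct written with `fstream::write`):

```python
import struct

# Hypothetical fixed-width record: 32-byte description, quantity on hand,
# wholesale and retail cost in cents, and an 8-byte date string (YYYYMMDD).
RECORD = struct.Struct("<32s i i i 8s")

def pack_item(desc, qty, wholesale_cents, retail_cents, date):
    return RECORD.pack(desc.encode("utf-8")[:32].ljust(32, b"\0"),
                       qty, wholesale_cents, retail_cents,
                       date.encode("ascii"))

def unpack_item(raw):
    desc, qty, wholesale, retail, date = RECORD.unpack(raw)
    return (desc.rstrip(b"\0").decode("utf-8"), qty,
            wholesale, retail, date.decode("ascii"))

rec = pack_item("WRT54G router", 12, 3499, 5499, "20090115")
# Because every record is exactly RECORD.size bytes, record N sits at byte
# offset N * RECORD.size, so a single record can be displayed or changed
# in place with seek()/write() - no rewrite of the whole file needed.
```

The in-place update property is exactly the "display any record / change any record" requirement from the question; the cost is that a format change forces a rebuild, as the answer notes.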
Could you please enlighten us why don't you want to use a database engine for it? If it is just for learning then.... give us please an estimated size of stored data in that file and the access pattern (how many users, how often they do it etc.)? The challenge will be to create an efficient search and modification code. For the search, it's about data structures and organization. For the modification, it's how would you write updates to the file without reading it completely into memory, updating it there and then writing it again completely back to the file.
What is the best way I should go about creating a program to store information into a file, edit the information in that file, and add new information
[ "", "c++", "xml", "file", "save", "inventory", "" ]
I'm making a dll that has to respond to an application's requests. One of the application's requirements is that a call should not take long to complete. Say, I have a function foo(), which is called by the host application: ``` int foo(arg){ // some code i need to execute, say, LengthyRoutine(); return 0; } ``` Let's say foo has to perform a task (or call a function) that is certain to take a long time. The application allows me to set a wait variable; if this variable is non-zero when foo returns, it calls foo again and again (resetting the wait variable before each call) until wait is returned 0. What's the best approach to this? Do I go: ``` int foo(arg){ if (inRoutine == TRUE) { wait = 1; return 0; } else { if (doRoutine == TRUE) { LengthyRoutine(); return 0; } } return 0; } ``` This doesn't really solve the problem that LengthyRoutine is going to take a long time to complete. Should I spawn a thread of some sort that updates inRoutine depending on whether or not it has finished its task? Thanks..
Spawning another thread is pretty much the best way to do it, just make sure you set the result variables before you set the variable that says you're finished to avoid race conditions. If this is called often you might want to spawn a worker thread ahead of time and reuse it to avoid thread start overhead. There is another possible solution, do part of the work each time the function is called, however this spends more time in the DLL and probably isn't optimal, as well as being more complex to implement the worker code for most algos.
If you are programming in C, use a callback - pass the callback to foo. You have to agree on the callback signature and do some housekeeping to trigger it when the work in LengthyRoutine is done. ``` typedef void (*callbackFunction)(void); int foo(int arg, callbackFunction done) { // some code i need to execute, say, // register the callback and return right away // trigger LengthyRoutine to run after this function returns return 0; } void LengthyRoutine(callbackFunction done) { // do lengthy routine // now inform the caller with their supplied callback done(); } ``` Essentially the [Observer](http://en.wikipedia.org/wiki/Observer_pattern#C.2B.2B) Pattern in C. C++ makes the work a lot easier/cleaner in my opinion.
How to avoid blocking (C++, Win32)
[ "", "c++", "multithreading", "blocking", "" ]
Trying to find some verbose reference on the intricacies of Attributes. Any help would be appreciated. At this point, I'd specifically like to know what time during runtime does an attribute constructor get ran? * If it's over a class * If it's over a property * If it's over a method Thanks.
The constructor is invoked when you call GetCustomAttributes() on the type or MemberInfo.
Reading the C# specification (17.3.2 in the 2.0 version), it's unspecified. Only the way to convert from the metadata to an instance is. So you may need to test on different implementations, because if it isn't specified it's bound to be interpreted differently.
At what time during runtime does an attribute constructor get run?
[ "", "c#", ".net", "attributes", "" ]
I'm working on a script for work to extract data from an old template engine schema: ``` [%price%] { $54.99 } [%/price%] [%model%] { WRT54G } [%/model%] [%brand%]{ LINKSYS } [%/brand%] ``` Everything within the `[% %]` is the key, and everything in the `{ }` is the value. Using Python and regex, I was able to get this far: `(?<=\[%)(?P<key>\w*?)(?=%\])`, which returns `['price', 'model', 'brand']`. I'm just having a problem getting it to match the brace-delimited data as a value.
just for grins: ``` import re RE_kv = re.compile("\[%(.*)%\].*?\n?\s*{\s*(.*)") matches = re.findall(RE_kv, test, re.M) for k, v in matches: print k, v ``` output: ``` price $54.99 model WRT54G brand LINKSYS ``` Note I did just enough regex to get the matches to show up, it's not even bounded at the end for the close brace. Use at your own risk.
I agree with Devin that a single regex isn't the best solution. If there do happen to be any strange cases that aren't handled by your regex, there's a real risk that you won't find out. I'd suggest using a finite state machine approach. Parse the file line by line, first looking for a price-model-brand block, then parse whatever is within the braces. Also, make sure to note if any blocks aren't opened or closed correctly as these are probably malformed. You should be able to write something like this in python in about 30-40 lines of code.
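A sketch of that state-machine approach (assuming the well-formed `[%key%] { value } [%/key%]` shape shown in the question; the two states are "looking for a key" and "looking for its value"):

```python
import re

OPEN = re.compile(r"\[%(\w+)%\]")       # matches [%price%] but not [%/price%]
VALUE = re.compile(r"\{\s*(.*?)\s*\}")

def parse(text):
    result, key = {}, None
    for line in text.splitlines():
        if key is None:
            m = OPEN.search(line)
            if m:
                key = m.group(1)
                line = line[m.end():]   # the value may sit on the same line
        if key is not None:
            m = VALUE.search(line)
            if m:
                result[key] = m.group(1)
                key = None
    return result

sample = """[%price%]
{ $54.99 }
[%/price%]
[%model%]
{ WRT54G }
[%/model%]
[%brand%]{ LINKSYS }
[%/brand%]"""

parsed = parse(sample)
```

The closing tags fall through both branches (`\w` does not match the `/`), and a malformed block could be flagged wherever a new `[%...%]` opener appears while a value is still pending.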
Parsing template schema with Python and Regular Expressions
[ "", "python", "regex", "parsing", "grouping", "" ]
Let's say I have this SQL: ``` SELECT p.ParentId, COUNT(c.ChildId) FROM ParentTable p LEFT OUTER JOIN ChildTable c ON p.ParentId = c.ChildParentId GROUP BY p.ParentId ``` How can I translate this into LINQ to SQL? I got stuck at the COUNT(c.ChildId), the generated SQL always seems to output COUNT(\*). Here's what I got so far: ``` from p in context.ParentTable join c in context.ChildTable on p.ParentId equals c.ChildParentId into j1 from j2 in j1.DefaultIfEmpty() group j2 by p.ParentId into grouped select new { ParentId = grouped.Key, Count = grouped.Count() } ``` Thank you!
``` from p in context.ParentTable join c in context.ChildTable on p.ParentId equals c.ChildParentId into j1 from j2 in j1.DefaultIfEmpty() group j2 by p.ParentId into grouped select new { ParentId = grouped.Key, Count = grouped.Count(t=>t.ChildId != null) } ```
Consider using a subquery: ``` from p in context.ParentTable let cCount = ( from c in context.ChildTable where p.ParentId == c.ChildParentId select c ).Count() select new { ParentId = p.ParentId, Count = cCount } ; ``` --- If the query types are connected by an association, this simplifies to: ``` from p in context.ParentTable let cCount = p.Children.Count() select new { ParentId = p.ParentId, Count = cCount } ; ```
LINQ - Left Join, Group By, and Count
[ "", "c#", ".net", "linq", "linq-to-sql", "" ]
I have a Java program that stores a lot of mappings from Strings to various objects. Right now, my options are either to rely on hashing (via HashMap) or on binary searches (via TreeMap). I am wondering if there is an efficient and standard trie-based map implementation in a popular and quality collections library? I've written my own in the past, but I'd rather go with something standard, if available. Quick clarification: While my question is general, in the current project I am dealing with a lot of data that is indexed by fully-qualified class name or method signature. Thus, there are many shared prefixes.
You might want to look at the [Trie implementation that Limewire is contributing](https://github.com/google/guava/issues/10) to the Google Guava.
There is no trie data structure in the core Java libraries. This may be because tries are usually designed to store character strings, while Java data structures are more general, usually holding any `Object` (defining equality and a hash operation), though they are sometimes limited to `Comparable` objects (defining an order). There's no common abstraction for "a sequence of symbols," although `CharSequence` is suitable for character strings, and I suppose you could do something with `Iterable` for other types of symbols. Here's another point to consider: when trying to implement a conventional trie in Java, you are quickly confronted with the fact that Java supports Unicode. To have any sort of space efficiency, you have to restrict the strings in your trie to some subset of symbols, or abandon the conventional approach of storing child nodes in an array indexed by symbol. This might be another reason why tries are not considered general-purpose enough for inclusion in the core library, and something to watch out for if you implement your own or use a third-party library.
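For reference, the core structure is small enough to sketch. This is an illustration of the idea (in Python, for brevity), not a replacement for a tuned library implementation:

```python
class TrieMap:
    """String-keyed map backed by a trie of nested dicts. The "\0" slot in a
    node holds that key's value (a simplification: real keys must not
    contain "\0")."""

    def __init__(self):
        self.root = {}

    def put(self, key, value):
        node = self.root
        for ch in key:
            node = node.setdefault(ch, {})
        node["\0"] = value

    def get(self, key, default=None):
        node = self.root
        for ch in key:
            node = node.get(ch)
            if node is None:
                return default
        return node.get("\0", default)

t = TrieMap()
# Fully-qualified names share long prefixes, so these two keys share all
# trie nodes up to the final component - the space win the question is after.
t.put("java.util.List", 1)
t.put("java.util.Map", 2)
```

The dict-per-node layout sidesteps the Unicode symbol-array problem mentioned above at the cost of hashing each character, which is roughly the trade-off any general-purpose trie library has to make too.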
Where do I find a standard Trie based map implementation in Java?
[ "", "java", "algorithm", "optimization", "trie", "" ]
I have the following table: ``` CREATE TABLE `score` ( `score_id` int(10) unsigned NOT NULL auto_increment, `user_id` int(10) unsigned NOT NULL, `game_id` int(10) unsigned NOT NULL, `thescore` bigint(20) unsigned NOT NULL, `timestamp` timestamp NOT NULL default CURRENT_TIMESTAMP, PRIMARY KEY (`score_id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; ``` That's a score table that stores the user\_id, game\_id and score of each game. There are trophies for the first 3 places of each game. I have a user\_id and I would like to check if that specific user got any trophies from any of the games. Can I somehow create this query without creating a temporary table?
``` SELECT s1.* FROM score s1 LEFT OUTER JOIN score s2 ON (s1.game_id = s2.game_id AND s1.thescore < s2.thescore) GROUP BY s1.score_id HAVING COUNT(*) < 3; ``` This query returns the rows for all winning games. Although ties are included; if the scores are 10,16,16,16,18 then there are four winners: 16,16,16,18. I'm not sure how you handle that. You need some way to resolve ties in the join condition. For example, if ties are resolved by the earlier game winning, then you could modify the query this way: ``` SELECT s1.* FROM score s1 LEFT OUTER JOIN score s2 ON (s1.game_id = s2.game_id AND (s1.thescore < s2.thescore OR s1.thescore = s2.thescore AND s1.score_id < s2.score_id)) GROUP BY s1.score_id HAVING COUNT(*) < 3; ``` You could also use the `timestamp` column to resolve ties, if you can depend on it being `UNIQUE`. However, MySQL tends to create a temporary table for this kind of query anyway. Here's the output of `EXPLAIN` for this query: ``` +----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+ | 1 | SIMPLE | s1 | ALL | NULL | NULL | NULL | NULL | 9 | Using temporary; Using filesort | | 1 | SIMPLE | s2 | ALL | PRIMARY | NULL | NULL | NULL | 9 | | +----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+ ```
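Outside SQL, the same "top three per game, earlier score_id wins ties" rule is easy to state directly; a sketch with made-up sample rows:

```python
from collections import defaultdict

# (score_id, user_id, game_id, thescore) - invented sample data
scores = [
    (1, 10, 1, 500), (2, 11, 1, 400), (3, 12, 1, 300), (4, 13, 1, 200),
    (5, 10, 2, 50),  (6, 11, 2, 900),
]

def trophy_winners(rows, places=3):
    """user_ids holding a top-`places` score in some game; ties resolved
    in favour of the lower (earlier) score_id, as in the second query."""
    by_game = defaultdict(list)
    for score_id, user_id, game_id, thescore in rows:
        by_game[game_id].append((-thescore, score_id, user_id))
    winners = set()
    for game_rows in by_game.values():
        for _, _, user_id in sorted(game_rows)[:places]:
            winners.add(user_id)
    return winners

def has_trophy(rows, user_id):
    return user_id in trophy_winners(rows)
```

Sorting on `(-thescore, score_id)` is the procedural twin of the `s1.thescore < s2.thescore OR ... s1.score_id < s2.score_id` join condition above.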
``` SELECT game_id, user_id FROM score score1 WHERE (SELECT COUNT(*) FROM score score2 WHERE score1.game_id = score2.game_id AND score2.thescore > score1.thescore) < 3 ORDER BY game_id ASC, thescore DESC; ```
How to fetch 3 first places of each game from the score table in mysql?
[ "", "mysql", "sql", "greatest-n-per-group", "" ]
I have a UserControl that contains an UpdatePanel which wraps some other controls. The UserControl will be used on some pages that already have a ScriptManager and other pages that do not have a ScriptManager. I'd like the UserControl to automatically bring its own ScriptManager if one does not exist. I have tried ScriptManager.GetCurrent and if it returns null I create my own ScriptManager and insert it into the Form, but I can't find a place early enough in the UserControl's lifecycle to run this code. I keep getting the error "The control with ID 'uPnlContentList' requires a ScriptManager on the page. The ScriptManager must appear before any controls that need it." every time I try loading the page. The places I've tried running my code are OnInit, CreateChildControls and PageLoad, and they never get called because it dies before reaching them. Where should I put this check?
I hate to come at this in another direction, but are you using a master page? And if so, have you considered placing a single ScriptManager on it and being done with it?
Put a ScriptManager in your MasterPage that all your ASPX files reference. Then anywhere else in the site that you need to work with the ScriptManager, use a ScriptManagerProxy; it will get the ScriptManager from the master page and let you work with it in the user control and in the code behind. Now calling ScriptManager.GetCurrent(Page) will give you a reference to a script manager. ``` <asp:ScriptManagerProxy> <Scripts> <asp:ScriptReference Path="~/Scripts/myscript.js" /> </Scripts> </asp:ScriptManagerProxy> ``` Here's a link: [MSDN Info](http://msdn.microsoft.com/en-us/library/system.web.ui.scriptmanagerproxy.aspx)
AJAX ScriptManager in UserControl
[ "", "c#", "ajax", "user-controls", "scriptmanager", "" ]
I'm trying to modify a CSS style attribute for a div based on the information I get from a database table in the code behind of my aspx page. The following is essentially what I am trying to do, but I get errors. Aspx: ``` <div id="testSpace" runat="server"> Test </div> ``` Code Behind: ``` testSpace.Style = "display:none;" testSpace.Style("display") = "none"; ``` What am I doing wrong?
``` testSpace.Style.Add("display", "none"); ```
It's an HtmlGenericControl so not sure what the recommended way to do this is, so you could also do: ``` testSpace.Attributes.Add("style", "text-align: center;"); ``` or ``` testSpace.Attributes.Add("class", "centerIt"); ``` or ``` testSpace.Attributes["style"] = "text-align: center;"; ``` or ``` testSpace.Attributes["class"] = "centerIt"; ```
How do you modify a CSS style in the code behind file for divs in ASP.NET?
[ "", "c#", "asp.net", "css", "" ]
Does C# inline access to properties? I'm aware of the 32 byte (instruction?) limit on the JIT for inlining, but will it inline properties or just *pure* method calls?
It's up to the JIT (the C# compiler doesn't do any inlining as far as I'm aware), but I believe the JIT will inline trivial properties in most cases. Note that it *won't* inline members of types deriving from `MarshalByRefObject` which includes `System.Windows.Forms.Control` (via `System.ComponentModel.Component`). I've also seen `double` fields end up being less efficient when accessed via properties - it could be that there are some subtleties around that (due to register use etc). Also note that the 64-bit and 32-bit JITs are different, including their treatment of what gets inlined. EDIT: I've just found a [2004 blog entry](http://blogs.msdn.com/davidnotario/archive/2004/11/01/250398.aspx) by David Notario with some more information. However, that was before 2.0 shipped - I wouldn't be surprised to see that at least *some* of it had changed now. Might be of interest anyway. EDIT: [Another question](https://stackoverflow.com/questions/648409/inlining-functions) referred to a [2008 Vance Morrison blog entry](http://blogs.msdn.com/vancem/archive/2008/08/19/to-inline-or-not-to-inline-that-is-the-question.aspx) which gives more information. Interesting stuff.
A property access *is* just a pure method call. There is no difference in the IL the compiler emits for a property access and for a method call with a similar signature, which sort of answers your question.
Does C# inline properties?
[ "", "c#", ".net", "optimization", "clr", "" ]
When I try to change a UI property (specifically Enabled), my thread throws System.Threading.ThreadAbortException. How do I access the UI from a thread?
You could use a BackgroundWorker and then change the UI like this: ``` control.Invoke((MethodInvoker)delegate { control.Enabled = true; }); ```
If you are using C# 3.5, it is really easy to use extension methods and lambdas to prevent updating the UI from other threads. ``` public static class FormExtensions { public static void InvokeEx<T>(this T @this, Action<T> action) where T : Form { if (@this.InvokeRequired) { @this.Invoke(action, @this); } else { action(@this); } } } ``` So now you can use `InvokeEx` on any form and be able to access any properties/fields that aren't part of `Form`. ``` this.InvokeEx(f => f.label1.Text = "Hello"); ```
Accessing UI in a thread
[ "", "c#", "multithreading", "user-interface", "" ]
I need to dispose of an object so it can release everything it owns, but it doesn't implement IDisposable, so I can't use it in a using block. How can I make the garbage collector collect it?
You can force a collection with GC.Collect(). Be very careful using this, since a full collection can take some time. The best-practice is to just let the GC determine when the best time to collect is. Does the object contain unmanaged resources but does not implement IDisposable? If so, it's a bug. If it doesn't, it shouldn't matter if it gets released right away, the garbage collector should do the right thing.
If it "owns" anything other than memory, you need to fix the object to use IDisposable. If it's not an object you control this is something worth picking a different vendor over, because it speaks to the core of how well your vendor really understands .Net. If it does just own memory, even a lot of it, all you have to do is make sure the object goes out of scope. *Don't* call `GC.Collect()` — it's one of those things that if you have to ask, you shouldn't do it.
Garbage Collection on one object, C#
[ "", "c#", ".net-3.5", "garbage-collection", "" ]
Say I want to run a simple Python script that is a web server. And I want to connect a browser to it that refreshes every N seconds, i.e. so the page is updated. How would I do that? I assume the Python server script would generate some HTML that contains javascript that sleeps and forces a refresh? Or can the browser force a refresh? Thanks in advance.
``` <meta http-equiv="refresh" content="5" /> ``` That would reload the page after 5 seconds.
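On the Python side, the server only has to embed that tag in every page it returns. A minimal sketch with the standard library's `http.server` (Python 3; the port and page text are arbitrary choices for illustration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

def render_page(interval=5):
    # The meta tag below is what makes the browser re-request the page.
    return ("<html><head>"
            f'<meta http-equiv="refresh" content="{interval}" />'
            "</head><body>"
            f"<p>Rendered at {time.strftime('%H:%M:%S')}</p>"
            "</body></html>")

class RefreshHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_page().encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve, uncomment:
# HTTPServer(("", 8000), RefreshHandler).serve_forever()
```

Each refresh re-runs `render_page`, so the timestamp (or whatever data you substitute in) is regenerated on every request.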
Refreshing the page is quite an old method and is generally a more expensive in terms of bandwidth than simply updating specific parts of your page. The solutions presented here already will do what you want, but you should also try looking into dynamic updates, using AJAX. It's not too hard to grasp. Basically rather than the browser refreshing the entire page, you can use AJAX to request only the data from the web-server that needs updating on the page. There are plenty of tutorials on AJAX. If you need to support IE6, it can be just a tad more difficult, otherwise it is quite simple. In theory: 1. Create an XMLHTTPRequest object. 2. Tell it a URL that points to the data that needs updating on your page (it doesn't have to be XML). 3. Give the object a function that can process the data when it is received. 4. Tell the object to send the request to the server and retrieve the data.
How do I get a web browser to refresh automatically?
[ "", "javascript", "html", "browser", "" ]
The following javascript to resize a select list breaks in Google Chrome. It works when tabbing into the field, but clicking on it results in the "Aw, Snap!" error page. ``` <select onfocus="this.setAttribute('size', 3);"> <option>selectList with onfocus</option> <option>2</option> <option>3</option> <option>4</option> </select> ``` Works fine in FF and IE. It's some kind of conflict between onfocus (there's no problems if I implement it onClick) and setting the size attribute. I'm told it breaks in Safari too. Any assistance, ideas or workarounds are greatly appreciated. (P.S. Yeh I know it's not very nice form to resize a select list, but it's what the boss/client wants)
Change the line with the select to this: ``` <select onfocus="var that = this; setTimeout(function() {that.setAttribute('size', 3);}, 0);"> ``` It works for me in Chrome. I haven't tried in Safari but I suspect it will work as well. Basically all we do here is escaping the call stack with setTimeout, which appears to get around the bug in Webkit.
I also found this: * <http://code.google.com/p/chromium/issues/detail?id=4579> * <http://bugs.webkit.org/show_bug.cgi?id=17648> So it seems this is a WebKit issue and for some reason they don't want to fix it (make it standards-compliant).
Google Chrome breaks when onfocus sets select size
[ "", "javascript", "select", "google-chrome", "focus", "setattribute", "" ]
Is there a more succinct way to define a class in a namespace than this: ``` namespace ns { class A {}; } ``` I was hoping something like `class ns::A {};` would work, but alas not.
You're close, you can forward declare the class in the namespace and then define it outside if you want: ``` namespace ns { class A; // just tell the compiler to expect a class def } class ns::A { // define here }; ``` What you cannot do is define the class in the namespace without members and then define the class again outside of the namespace. That violates the One Definition Rule (or somesuch nonsense).
You can do that, but it's not really more succint. ``` namespace ns { class A; } class ns::A { }; ``` Or ``` namespace ns { class B; } using ns::B; class B { }; ```
Defining a class within a namespace
[ "", "c++", "class", "syntax", "namespaces", "definition", "" ]
This is more out of curiosity than a real requirement, but I'm wondering if it's possible to treat JavaScript as (ideally) a first-class .NET citizen, or (secondarily) have some way of invoking/interpreting pure JavaScript functions (that don't require the DOM) in a desktop setting? Has anyone ever attempted implementing a CLR version of JavaScript? Something tugs at the back of my mind concerning this, but now that I think about it it was probably PHP, not JavaScript.
For your second option, there's [Rhino](http://en.wikipedia.org/wiki/Rhino_(JavaScript_engine)) and [things like it](http://en.wikipedia.org/wiki/JavaScript#Uses_outside_web_pages).
Using the DLR (Dynamic Language Runtime) you can use Managed JScript. See the official JScript blog from Microsoft here. <http://blogs.msdn.com/jscript/archive/2007/05/04/managed-jscript-announced.aspx> This is goes for Ruby (IronRuby), Python (IronPython), and Dynamic VB. You can also write your own DLR language.
Interpreting JavaScript outside of the browser?
[ "", ".net", "javascript", "clr", "" ]
I'm interested mostly in C++ and method/class name/signature automatic changes.
I do this a lot, so I'm anxiously awaiting other replies too. The only tricks I know are really basic. Here are my best friends in Emacs when refactoring code: ``` M-x query-replace ``` This allows you to do a global search and replace. You'll be doing this a ton when you move methods and commonly-accessed data to other classes or namespaces. ``` C-x 3 ``` This gives you a display with two buffers side-by-side. You can then proceed to load different files in them, and move your cursor from one to the other with `C-x o`. This is pretty basic stuff, but I mention it because of how powerful it makes the next one... ``` C-x ( (type any amount of stuff and/or emacs commands here) C-x ) ``` This is how you define a macro in emacs. Any time you find yourself needing to do the same thing over and over to a bunch of code (and it is too complex for query-replace), this is a lifesaver. If you mess up, you can hit `C-g` to stop the macro definition, and then undo (`C-_`) until you are back to where you started. The keys to invoke the macro are `C-x e`. If you want to do it a bunch of times, you can hit `Esc` and type in a number first. Eg: `Esc 100 C-x e` will try to invoke your macro 100 times. (Note: On Windows you can get "Meta" by hitting the Esc key, or holding down Alt).
In recent Emacs versions (24), Semantic is able to this. 0. Possibly activate semantic mode `M-x semantic-mode RET`. 1. Bring up the Symref buffer with `C-c , g`. 2. Press `C-c C-e` to open all references. 3. Rename with `R`. 4. Save all the edited buffers with `C-x s !`
How can I refactor C++ source code using emacs?
[ "", "c++", "emacs", "refactoring", "" ]
I have been looking for some time at how a programmer can simulate AI decisions based on percentages of actions, for Final Fantasy Tactics-like (strategy) games. Say, for example, that the AI character has the following actions: Attack 1: 10% Attack 2: 9% Magic : 4% Move : 1% All of this is far from equaling 100%. Now at first I thought about having an array with 100 empty slots; attack 1 would have 10 slots and attack 2 would have 9 slots in the array. Combined with a random number, I could then get the action to perform. My problem here is that this is not really efficient, or doesn't seem to be. Also, importantly, what do I do if I land on an empty slot? Do I have to calculate all actions for each character so they total 100%, or maybe define a "default" action for everyone? Or maybe there is a more efficient way to see all of this? I think that percentages are the easiest way to implement an AI.
The best answer I can come up with is to make a list of all the possible moves you want the character to have, give each a relative value, then scale all of them to total 100%. EDIT: For example, here are three moves I have. I want attack and magic to be equally likely, and fleeing to be half as likely as attacking or using magic: * attack = 20 * magic = 20 * flee = 10 This adds up to 50, so dividing each by this total gives me a fractional value (multiply by 100 for percentage): * attack = 0.4 * magic = 0.4 * flee = 0.2 Then, I would make from this a list of cumulative values (i.e. each entry is a sum of that entry and all that came before it): * attack = 0.4 * magic = 0.8 * flee = 1 Now, generate a random number between 0 and 1 and find the first entry in the list that is greater than or equal to that number. That is the move you make.
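That cumulative-threshold table can be sketched in a few lines; the weights below are this answer's example values (Python here, but the idea ports anywhere):

```python
import random
from bisect import bisect_right

def make_chooser(weighted_moves):
    """weighted_moves: list of (name, relative_weight) pairs. Builds the
    cumulative table once; the returned function maps r in [0, 1) to a move."""
    total = float(sum(w for _, w in weighted_moves))
    names, cumulative, running = [], [], 0.0
    for name, weight in weighted_moves:
        running += weight / total
        names.append(name)
        cumulative.append(running)
    def choose(r=None):
        if r is None:
            r = random.random()          # uniform in [0, 1)
        i = bisect_right(cumulative, r)  # first entry strictly greater than r
        return names[min(i, len(names) - 1)]
    return choose

choose = make_chooser([("attack", 20), ("magic", 20), ("flee", 10)])
```

The weights never need to sum to anything in particular (the 10%/9%/4%/1% set from the question works unchanged) because the division by `total` rescales them, which also answers the "what if the slots don't fill 100%" worry.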
No, you just create thresholds. One simple way is: 0 - 9 -> Attack 1 10 - 18 -> Attack 2 19 - 22 -> Magic 23 -> Move 24 - 99 -> Something else (the ranges need to add up to 100) Now create a random number and mod it by 100 (so num = randomNumber % 100) to decide your action. The better the random number generator, the closer you will get to a proper distribution. So you take the result and see which category it falls into. You can actually make this even more efficient, but it is a good start.
How to manage AI actions based on percentages
[ "", "c#", "" ]
I was thinking about some stuff lately and I was wondering what would be the RIGHT way to do something like the following scenario (I'm sure it is a quite common thing for DB guys to do something like it). Let's say you have a products table, something like this (MySQL): ``` CREATE TABLE `products` ( `id` int(11) NOT NULL auto_increment, `product_name` varchar(255) default NULL, `product_description` text, KEY `id` (`id`), KEY `product_name` (`product_name`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; ``` Nothing out of the ordinary here. Now lets say that there are a hierarchy of categories in a different table, and there is a separate table which binds many-to-many relationships with products table - so that each product belongs to some kind of a category (I'll omit those, because thats not the issue here). Now comes the interesting part - what IF each of the categories mandates additional set of variables to the product items. For example products in the computer monitors category must have LCD/CRT enum field, screen size enum etc. - and some other category, lets say ice creams have some other variables like flavor varchar, shelf storage time int etc. The problem herein lies in that all products have a common set of variables (id, name, description and sort of like that), but there are additional variables which are not consistent from category to category - but all products should share common set, because in the end they all belong to the products group, so one can query for example SELECT \* FROM products ORDER BY company\_id (trivial example, maybe not representative, but you get the picture). 
Now, I see several potential resolutions: - generate a separate table for each product category and store products there with the appropriate additional variables - stupid and not query friendly - the products table stays the same with common variables, and for each category create a separate table with the additional variables, binding the two tables with a JOIN - normalized, but query performance and clarity issues - how would one filter down products by category (1st table - products) and additionally filter on an extra variable (e.g. 17" LCD monitors)? - it would require SQL JOIN trickery - the products table stays the same plus another variable of type text that holds, for example, JSON data with the additional variables - compact and neat, but can't filter on those variables with SQL I know I'm missing something quite obvious and simple here - I'm a bit rusty on the normalization techniques :) --- *edit:* I've been searching around stackoverflow before asking this question without success. However, after I posted the question I clicked on one of my tags 'normalization' and found several similar questions which led me to look up 'generalization specialization relational design'. The point of the story is that this must be the first occurrence in my internet life that tags are actually useful in search. However, I would still like to hear from you guys and your opinions. --- *edit2*: The problem with approach no.2 is that I expect somewhere around ~1000 specializations. There is a hierarchy (1-4 levels deep) of categories and end nodes add specialized variables - they accumulate in the order of ~1000, so it would be a bit impractical to add specialized tables to join with. --- **edit3**: Due to the vast attribute volatility in my case, the "entity attribute value" approach that was suggested looks like the way to go. Here come the query nightmares! Thanks guys.
How many product types do you expect? Do they each have their own application logic? You can do a generalized model called the "entity attribute value" model, but it has a LOT of pitfalls when you're trying to deal with specific properties of a product. Simple search queries turn into real nightmares at times. The basic idea is that you have a table that holds the product ID, property name (or ID into a properties table), and the value. You can also add in tables to hold templates for each product type. So one set of tables would tell you for any given product what properties it can have (possibly along with valid value ranges) and another set of tables would tell you for any individual product what the values are. I would caution strongly against using this model though, since it seems like a really slick idea until you have to actually implement it. If your number of product types is reasonably limited, I'd go with your second solution - one main product table with base attributes and then additional tables for each specific type of product.
I've been doing this in `Oracle`. I had the following tables: ``` t_class (id RAW(16), parent RAW(16)) -- holds class hierarchy. t_property (class RAW(16), property VARCHAR) -- holds class members. t_declaration (id RAW(16), class RAW(16)) -- holds GUIDs and types of all class instances t_instance (id RAW(16), class RAW(16), property VARCHAR2(100), textvalue VARCHAR2(200), intvalue INT, doublevalue DOUBLE, datevalue DATE) -- holds 'common' properties t_class1 (id RAW(16), amount DOUBLE, source RAW(16), destination RAW(16)) -- holds 'fast' properties for class1. t_class2 (id RAW(16), comment VARCHAR2(200)) -- holds 'fast' properties for class2 --- etc. ``` `RAW(16)` is where `Oracle` holds `GUID`s. If you want to select all properties for an object, you issue: ``` SELECT i.* FROM ( SELECT id FROM t_class START WITH id = (SELECT class FROM t_declaration WHERE id = :object_id) CONNECT BY parent = PRIOR id ) c JOIN t_property p ON p.class = c.id LEFT JOIN t_instance i ON i.id = :object_id AND i.class = p.class AND i.property = p.property ``` `t_property` holds stuff you normally don't search on (like text descriptions etc.). Fast properties are in fact normal tables you have in the database, to make the queries efficient. They hold values only for the instances of a certain class or its descendants. This is to avoid extra joins. You don't have to use fast tables and limit all your data to these four tables. 
For you task it will look like this (I'll use strings in square brackets instead of GUID's for the sake of brevity): ``` t_class id parent [ClassItem] [ClassUnknown] [ClassMonitor] [ClassItem] [ClassLCD] [ClassMonitor] t_property class property [ClassItem] price [ClassItem] vendor [ClassItem] model [ClassMonitor] size [ClassLCD] matrixType t_declaration id class [1] [ClassLCD] -- Iiyama ProLite E1700 t_instance -- I'll put all values into one column, disregarding type (INT, VARCHAR etc) id class property value [1] [ClassItem] price $300 [1] [ClassItem] vendor Iiyama [1] [ClassItem] model ProLite E1700s [1] [ClassMonitor] size 17 [1] [ClassLCD] matrixType TFT ``` If you need some complex query that searches on, say, `size` AND `matrixType`, you may remove them from `property` and `instance` and create another table: ``` t_lcd (id RAW(16), size INT, matrixType VARCHAR2(200)) id size matrixType [1] 17 TFT ``` and use it to join with other properties instead of `t_declaration` in the query above. But this model is viable even without the fast tables.
How would you model data variables variance on common scheme? SQL
[ "", "sql", "mysql", "database", "database-design", "normalization", "" ]
I've been looking at most Python crypto libraries, and I've decided to use either PyCrypto or M2Crypto. I've discarded ezPyCrypto because it only supports MD5 for signing, and Keyczar because it's not mature enough. So, I've read that RSA is vulnerable to several attacks if the to-be-encrypted text (or signature hash) is not properly padded. What does that mean? Neither PyCrypto nor M2Crypto mentions anything about this, and Google didn't find anything relevant. Do these libraries automatically add the padding? How can one tell? If the answer to the above is no, what is considered proper padding?
PyCrypto doesn't add the mentioned padding; M2Crypto does. M2Crypto is built on top of OpenSSL, supports mostly everything you need, and is still maintained and up to date, while PyCrypto issues several deprecation warnings.
One of the reasons for **random** padding might be that "from the book" RSA with a low exponent (let's say 3) can be cracked really easily if the exact same message is sent to several people (three). You'd therefore better make sure that you don't send the exact same message, by applying some kind of random (yet invertible) transformation to your message beforehand. Maybe that's what this padding thing is about!? EDIT: I looked on Wikipedia; what I was talking about is called Hastad's attack.
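The low-exponent broadcast attack mentioned here can be demonstrated with plain integers. A Python sketch (the primes and the message are arbitrary toy values, real keys are vastly larger, and `pow(x, -1, n)` needs Python 3.8+):

```python
def icbrt(n):
    """Integer cube root of a non-negative integer, by bisection."""
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    N = 1
    for n in moduli:
        N *= n
    x = 0
    for r, n in zip(residues, moduli):
        Ni = N // n
        x += r * Ni * pow(Ni, -1, n)  # modular inverse (Python 3.8+)
    return x % N

e = 3
moduli = [1000003, 1000033, 1000037]  # pairwise coprime (all prime), toy-sized
message = 424242                      # the same plaintext sent to all three recipients
ciphertexts = [pow(message, e, n) for n in moduli]

# CRT recovers message**3 exactly, because message**3 < n1*n2*n3;
# a plain integer cube root then reveals the message without any key.
recovered = icbrt(crt(ciphertexts, moduli))
print(recovered)  # prints 424242
```

This is exactly why textbook RSA must randomize (pad) each encryption: with padding, the three recipients see three different plaintexts and the CRT trick no longer applies.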
Python: How to add RSA padding?
[ "", "python", "encryption", "rsa", "" ]
Let T1 and T2 be `DataTable`s with the following fields ``` T1(CustID, ColX, ColY) T2(CustID, ColZ) ``` I need the joined table ``` TJ (CustID, ColX, ColY, ColZ) ``` How can this be done in C# code in a simple way? Thanks.
If you are allowed to use LINQ, take a look at the following example. It creates two DataTables with integer columns, fills them with some records, join them using LINQ query and outputs them to Console. ``` DataTable dt1 = new DataTable(); dt1.Columns.Add("CustID", typeof(int)); dt1.Columns.Add("ColX", typeof(int)); dt1.Columns.Add("ColY", typeof(int)); DataTable dt2 = new DataTable(); dt2.Columns.Add("CustID", typeof(int)); dt2.Columns.Add("ColZ", typeof(int)); for (int i = 1; i <= 5; i++) { DataRow row = dt1.NewRow(); row["CustID"] = i; row["ColX"] = 10 + i; row["ColY"] = 20 + i; dt1.Rows.Add(row); row = dt2.NewRow(); row["CustID"] = i; row["ColZ"] = 30 + i; dt2.Rows.Add(row); } var results = from table1 in dt1.AsEnumerable() join table2 in dt2.AsEnumerable() on (int)table1["CustID"] equals (int)table2["CustID"] select new { CustID = (int)table1["CustID"], ColX = (int)table1["ColX"], ColY = (int)table1["ColY"], ColZ = (int)table2["ColZ"] }; foreach (var item in results) { Console.WriteLine(String.Format("ID = {0}, ColX = {1}, ColY = {2}, ColZ = {3}", item.CustID, item.ColX, item.ColY, item.ColZ)); } Console.ReadLine(); // Output: // ID = 1, ColX = 11, ColY = 21, ColZ = 31 // ID = 2, ColX = 12, ColY = 22, ColZ = 32 // ID = 3, ColX = 13, ColY = 23, ColZ = 33 // ID = 4, ColX = 14, ColY = 24, ColZ = 34 // ID = 5, ColX = 15, ColY = 25, ColZ = 35 ```
I wanted a function that would join tables without requiring you to define the columns using an anonymous type selector, but had a hard time finding any. I ended up having to make my own. Hopefully this will help anyone in the future who searches for this: ``` private DataTable JoinDataTables(DataTable t1, DataTable t2, params Func<DataRow, DataRow, bool>[] joinOn) { DataTable result = new DataTable(); foreach (DataColumn col in t1.Columns) { if (result.Columns[col.ColumnName] == null) result.Columns.Add(col.ColumnName, col.DataType); } foreach (DataColumn col in t2.Columns) { if (result.Columns[col.ColumnName] == null) result.Columns.Add(col.ColumnName, col.DataType); } foreach (DataRow row1 in t1.Rows) { var joinRows = t2.AsEnumerable().Where(row2 => { foreach (var parameter in joinOn) { if (!parameter(row1, row2)) return false; } return true; }); foreach (DataRow fromRow in joinRows) { DataRow insertRow = result.NewRow(); foreach (DataColumn col1 in t1.Columns) { insertRow[col1.ColumnName] = row1[col1.ColumnName]; } foreach (DataColumn col2 in t2.Columns) { insertRow[col2.ColumnName] = fromRow[col2.ColumnName]; } result.Rows.Add(insertRow); } } return result; } ``` An example of how you might use this: ``` var test = JoinDataTables(transactionInfo, transactionItems, (row1, row2) => row1.Field<int>("TransactionID") == row2.Field<int>("TransactionID")); ``` One caveat: This is certainly not optimized, so be mindful when getting to row counts above 20k. If you know that one table will be larger than the other, try to put the smaller one first and the larger one second.
Inner join of DataTables in C#
[ "", "c#", "join", "datatable", "inner-join", "" ]
Is there any built-in class/method for comparing the content of two audio/video files? Or is there any built-in class/method for converting an audio/video file to a bit stream?
You could use the hash functions in System.Security.Cryptography on two file streams and compare them. This is easy to do and works well for small files. If your files are big, which they probably are if you're dealing with audio/video, then reading in the file and generating the hash can take a bit of time.
The other answers are good - either hashing (if you are comparing the file to multiple candidates) or a byte-wise comparison (if comparing two single files). Here are a couple of additional thoughts: First, check the file sizes - if they are different, then don't waste time comparing bytes. These are quick to check. Second, try searching from the end or the middle of the file using a binary chop approach. E.g., suppose you have a file like this: ``` ABCDEFGHIJKLMNOP ``` Then it is modified to this: ``` ABCDEF11GHIJKLMN ``` For the file size to remain the same with content having been inserted, the other bytes will be "knocked out". So a binary chop approach might pick this up with fewer reads (e.g., seek to and read bytes SIZE/2-10 to SIZE/2+10 from both files, and compare). You could try to combine the techniques. If you do it over a good enough sample of the data you deal with, you might find that of all the different files you compare (example): * 80% were found because the file size was different (10ms per file) * 10% were found due to binary chop (50ms per file) * 10% were found due to linear byte comparisons (2000ms per file) Doing a binary chop over the whole file wouldn't be so smart, since I expect the hard disk will be faster reading linearly rather than seeking to random spots. But if you check SIZE/2, then SIZE/4 and 3xSIZE/4, then SIZE/8, for say 5 iterations, you might find most of the differences without having to do a byte-wise comparison. Just some ideas. Also, instead of reading from the front of the file, perhaps try reading from the end of the file backwards. Again you might be trading off seek time for probability, but in the "insert" scenario, assuming a change is made halfway into the file, you'll probably find it faster by starting from the end than from the start.
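A sketch of the cheap-checks-first idea (in Python for brevity; the chunk size is an arbitrary choice): compare sizes before bytes, with a hash variant for the one-against-many case:

```python
import hashlib
import os

def files_equal(path_a, path_b, chunk_size=64 * 1024):
    """Cheap size check first, then a chunked byte-by-byte comparison."""
    if os.path.getsize(path_a) != os.path.getsize(path_b):
        return False
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            chunk_a = fa.read(chunk_size)
            chunk_b = fb.read(chunk_size)
            if chunk_a != chunk_b:
                return False
            if not chunk_a:  # both files exhausted at the same point
                return True

def file_digest(path, chunk_size=64 * 1024):
    """Hash variant: useful when one file is compared against many candidates."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The byte-wise version can stop at the first mismatch, while hashing always reads both files in full, which is why it only pays off when a digest can be reused across many comparisons.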
File comparison in C#
[ "", "c#", "video", "file", "audio", "comparison", "" ]
I had this connection pool problem: > [How to solve a connection pool problem between ASP.NET and SQL Server?](https://stackoverflow.com/questions/670774/how-to-solve-a-connection-pool-problem-between-asp-net-and-sql-server) And I am now tracing using the SQL profiler, and I found some queries take about 400 duration to finish and return data. Is that value too much? Could it cause the previous connection pool problem?
400 means 400 ms, or 0.4 seconds. Whether that is too long really depends. If this is your front page or other high traffic area, then that's too much unless this is an intranet site. You may need to rethink how that data can be obtained, or use caching to reduce the number of times you actually have to run the query. If this is a report, or other more involved page, then 400 ms is not out of line. At best you might want to check if the query can be done a bit faster to improve the overall user experience. You may also want to run the Tuning Advisor to determine if the queries are only running slow because you don't have the correct indexes.
It depends on your application. That's less than 1/2 a second. If you're dealing with tens of millions of rows of data then that might be pretty good. On the other hand, if you've got 1000s of users, your datasets are small, and this query is being run constantly, then maybe you need to look at the performance of the query.
Is a duration of 394 when executing a SQL query too much?
[ "", "asp.net", "sql", "sql-server", "" ]
I have an XBAP application with the following user control: ``` <UserControl x:Class="XXX.UsersGrid" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Height="Auto" Width="Auto"> <UserControl.Resources> <DataTemplate x:Key="UpArrowUsers"> <DockPanel> <TextBlock Text="xxUser" x:Name="upArrowUsersHeader" HorizontalAlignment="Center"></TextBlock> <Path x:Name="arrow" StrokeThickness = "1" Fill= "gray" Data= "M 5,10 L 15,10 L 10,5 L 5,10"/> </DockPanel> </DataTemplate> </UserControl.Resources> </UserControl> ``` ... Now I want to fetch the string "xxUser" from a resx file which is embedded as a resource in the application. How do I achieve this?
None of those answers are close to what you want. I'd start by reading about Localization in WPF. You'll find that if you are doing localization with WPF you'll want x:Uid defined on every node in your app. <http://msdn.microsoft.com/en-us/library/ms788718.aspx>
I was able to do it in a program with: ``` <TextBlock VerticalAlignment="Center" Margin="3" Text="{x:Static prop:Resources.OpenButton}" Visibility="{Binding Source={x:Static prop:Settings.Default}, Path=ShowButtonText, Converter={StaticResource BoolToVis}}"></TextBlock> ``` I also had to include the .Properties namespace in my xaml, like so: ``` xmlns:prop="clr-namespace:MyProjectNamespace.Properties" ``` This allowed me to not only use the string resources I had defined for my project for globalization, but I was also able to bind (two way) to my application's Settings. This let me *very easily* remember the window's position, size, etc. As you can see, use Settings. for settings, and Resources. for resources. As Steven mentioned, I think the "official" way or the "best" way is to stick x:Uid on everything that you want to globalize, but I didn't and it worked with no problems. I think the x:Uid thing is mostly required if you are using automated tools or breaking the translation task up as you would in a large project. I just did all my own stuff manually in VS, so maybe it was ok. Ben
How to fetch string from resource to assign in WPF Resource section in xaml
[ "", "c#", "wpf", "localization", "xbap", "" ]
I've been running into problem after problem trying to use a third-party HTML editor to do what (I hoped) was a simple operation. Because of these problems, I'm looking for recommendations for an alternative HTML parser I could use to perform the operations. Here's my situation: I have span tags in my HTML (with an ID attribute to identify them) and I simply want to replace their contents based on an update in another area of my client. For example: ``` <html> <body> <p>Hello <span id="1">name</span> you are <span id="2">age</span></p> </body> </html> ``` I've been trying to use the `HTMLDocument` class in `javax.swing.text` like this: ``` Element e; e = document.getElement(document.getDefaultRootElement(), Attribute.ID, "1"); document.setInnerHTML(e, "John"); e = document.getElement(document.getDefaultRootElement(), Attribute.ID, "2"); document.setInnerHTML(e, "99"); ``` but the element returned is a leaf element and won't allow the innerHTML to be set. Unfortunately, the document, reader & parser are all supplied by a 3rd party & so I can't really modify them. So, what I was hoping for was that someone else has had a similar problem and could recommend an alternative library to do this. Thanks in advance, B.
Can you really not accomplish that with javax.swing.text.HTMLDocument? I have never tried this but, reading through the API, something along the lines of ``` document.replace(e.getStartOffset(), e.getEndOffset()-e.getStartOffset(), "John", null) ``` instead of using setInnerHTML() could work.
Have you tried [HTML Parser](http://htmlparser.sourceforge.net/)? It is a robust, open source HTML parsing library for Java.
recommendations for a java HTML parser/editor
[ "", "java", "html", "parsing", "" ]
My company is currently in the process of creating a large multi-tier software package in C#. We have taken a SOA approach to the structure and I was wondering whether anyone has any advice as to how to make it extensible by users with programming knowledge. This would involve a two-fold process: approval by the administrator of a production system to allow a specific plugin to be used, and also the actual plugin architecture itself. We want to allow the users to write scripts to perform common tasks, modify the layout of the user interface (written in WPF) and add new functionality (e.g. allowing charting of tabulated data). Does anyone have any suggestions on how to implement this, or know where one might obtain the knowledge to do this kind of thing? I was thinking this would be the perfect corner-case for releasing the software open-source with a restrictive license on distribution; however, I'm not keen on allowing the competition access to our source code. Thanks. EDIT: Thought I'd just clarify to explain why I chose the answer I did. I was referring to production administrators external to my company (i.e. the client), and giving them some way to automate/script things in an easier manner without requiring them to have a full knowledge of C# (they are mostly end-users with limited programming experience) - I was thinking more of a DSL. This may be an out-of-reach goal and the Managed Extensibility Framework seems to offer the best compromise so far.
I would take a look at the **MEF** initiative from Microsoft. It's a framework that lets you add extensibility to your applications. It's in beta now, but should be part of .Net 4.0. Microsoft shares the source, so you can look how it's implemented and interface with it. So basically your extensibility framework will be open for everyone to look at but it won't force you to publish your application code or the plug-ins code.
Just use interfaces. Define an IPlugin that every plugin must implement, and use a well defined messaging layer to allow the plugin to make changes in the main program. You may want to look at a program like Mediaportal or Meedios which heavily depend on user plugins.
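The interface-plus-messaging-layer idea is language-agnostic; here is a minimal sketch of its shape (in Python for brevity, and all the names here, such as IPlugin, Host and post_message, are invented placeholders rather than a real API):

```python
from abc import ABC, abstractmethod

class IPlugin(ABC):
    """Contract every plugin must implement."""

    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def execute(self, host) -> None:
        """Called by the main program; `host` is the messaging layer."""

class Host:
    """The well-defined surface a plugin may touch, instead of app internals."""
    def __init__(self):
        self.log = []

    def post_message(self, text):
        self.log.append(text)

class ChartPlugin(IPlugin):
    """Example plugin: adds charting via the host's messaging layer only."""
    def name(self):
        return "charting"

    def execute(self, host):
        host.post_message(f"{self.name()}: drawing chart")

host = Host()
plugin: IPlugin = ChartPlugin()
plugin.execute(host)
print(host.log)  # prints ['charting: drawing chart']
```

The point of the pattern is that the plugin never reaches into the main program directly; everything goes through the narrow host interface, which is also the natural place to enforce the administrator-approval step.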
Extensibility without Open-Source
[ "", "c#", ".net", "plugins", "extensibility", "" ]
I want to make ^N work the same as Down arrow in a tree control. I thought I'd just have to add the following to the KeyDown handler: ``` SendKeys.Send("{Down}"); ``` but this gets treated as a Control-Down arrow since the control key is currently pressed. The [msdn page](http://msdn.microsoft.com/en-us/library/system.windows.forms.sendkeys.send.aspx) describes how to turn on the control modifier but not how to turn it off. Thanks, Keith
Sorry to come late to the party but I think I found a solution: First, import SetKeyboardState: ``` [DllImport("user32.dll")] public static extern bool SetKeyboardState(byte[] lpKeyState); ``` Then, just call it with a zeroed byte array before calling SendKeys.Send(): ``` SetKeyboardState(new byte[256]); SendKeys.Send("your key sequence"); ``` That worked for me. Hope this helps!
You can p/invoke to keybd\_event. That will let you simulate a key being both in "up" and "down" state.
SendKeys.Send and turning off key modifiers
[ "", "c#", "keyboard", "sendkeys", "" ]
What's the difference between casting an Int to a string and the ToString() method ? For example :- ``` int MyInt = 10; label1.Text = (string)MyInt; // This Doesn't Work label1.Text = MyInt.ToString(); // but this does. ```
Well, `ToString()` is just a method call which returns a string. It's defined in `object` so it's always valid to call on anything (other than a null reference). The cast operator can do one of four things: * A predefined conversion, e.g. `int` to `byte` * An execution time reference conversion which may fail, e.g. casting `object` to `string`, which checks for the target object being an appropriate type * A user-defined conversion (basically calling a static method with a special name) which is known at compile-time * An unboxing conversion which may fail, e.g. casting `object` to `int` In this case, you're asking the compiler to emit code to convert from `int` to `string`. None of the above options apply, so you get a compile-time error.
The difference is that with the cast, you ask the compiler to assume that the int is in fact a string, which is not the case. With the ToString(), you ask for a string representation for the int, which is in fact a string :)
What's the difference between casting an int to a string and the ToString() method in C#
[ "", "c#", "" ]
How can I delete a column of a Dataset? Is there a method that will do this for me like this: ``` rh.dsDetail = ds.Tables[0].Columns.Remove( ``` Or maybe like this: ``` rh.dsDetail = ds.Tables[0].Columns.Remove( ds.Tables[0].Columns[1],ds.Tables[0].Columns[2]) ```
First, a `DataTable` has columns, not a data-set. If you want to get rid of them, then: ``` table.Columns.Clear(); ``` otherwise, if you have the index: ``` table.Columns.RemoveAt(0); ``` should do the job if you have the column index. Note that if you remove column 0, then the numbers will shuffle (so you might need to do in reverse order). Alternatively, you may want to remove by name: ``` table.Columns.Remove("Foo"); ```
I have done this before in both winforms & web apps. You must go thru the DataTable in reverse and then do an AcceptChanges at the end. This particular example is typical, in that it removes all the GUIDs (columns ending with "id"). ``` private void RemoveUselessFields(ref DataTable dtResults) { for (int i = dtResults.Columns.Count - 1; i >= 0; i--) { DataColumn column = dtResults.Columns[i]; if (column.ColumnName.Substring(column.ColumnName.Length - 2, 2).ToUpper() == "ID") { dtResults.Columns.Remove(column); } } dtResults.AcceptChanges(); } ```
How can I delete a column of a Dataset?
[ "", "c#", ".net", "dataset", "" ]
If I type in Description: Apple Quantity: 10 Wholesale Cost: 30 Retail Cost: 20 Date Added: December These are the contents in my .dat file: **1Apple103020December** But when I load my program, it doesn't load the struct back in correctly, resulting in there being 0 items in my list. Is that what it is supposed to look like, or am I doing something seriously wrong? Code: ``` #include "stdafx.h" #include <iostream> #include <fstream> #include <string> #include <vector> using namespace System; using namespace std; #pragma hdrstop bool isValidChoice(int size, int choice); template<typename T> void writeVector(ofstream &out, const vector<T> &vec); template<typename T> vector<T> readVector(ifstream &in); template<typename T> vector<T> addItem(vector<T> &vec); template<typename T> void printItemDescriptions(vector<T> &vec); template<typename T> int displayRecord(vector<T> &vec); struct InventoryItem { string Description; int Quantity; int wholesaleCost; int retailCost; string dateAdded; } ; int main(void) { cout << "Welcome to the Inventory Manager extreme! [Version 1.0]" << endl; ifstream in("data.dat"); if (in.is_open()) { cout << "File \'data.dat\' has been opened successfully." << endl; } else { cout << "Error opening data.dat" << endl;} cout << "Loading data..." << endl; vector<InventoryItem> structList = readVector<InventoryItem>( in ); cout <<"Load complete." 
<< endl << endl; in.close(); while (1) { string line = ""; cout << "There are currently " << structList.size() << " items in memory."; cout << endl << endl; cout << "Commands: " << endl; cout << "1: Add a new record " << endl; cout << "2: Display a record " << endl; cout << "3: Edit a current record " << endl; cout << "4: Delete a record " << endl; cout << "5: Save current information " << endl; cout << "6: Exit the program " << endl; cout << endl; cout << "Enter a command 1-6: "; getline(cin , line); int rValue = atoi(line.c_str()); system("cls"); ofstream out("data.dat"); switch (rValue) { case 1: addItem(structList); break; case 2: displayRecord(structList); break; case 3: break; case 4: break; case 5: if (!structList.size()) { cout << "There are no items to save! Enter one first!" << endl << endl; system("pause"); system("cls"); break; } writeVector(out , structList); break; case 6: return 0; default: cout << "Command invalid. You can only enter a command number 1 - 6. Try again. " << endl; } out.close(); } system("pause"); return 0; } template<typename T> void writeVector(ofstream &out, const vector<T> &vec) { out << vec.size(); for(vector<T>::const_iterator i = vec.begin(); i != vec.end(); i++) { out << *i; } cout << "Save completed!" 
<< endl << endl; } ostream &operator<<(ostream &out, const InventoryItem &i) { out << i.Description; out << i.Quantity; out << i.wholesaleCost << i.retailCost; out << i.dateAdded; return out; } istream &operator>>(istream &in, InventoryItem &i) { in >> i.Description; in >> i.Quantity; in >> i.wholesaleCost >> i.retailCost; in >> i.dateAdded; return in; } template<typename T> vector<T> readVector(ifstream &in) { size_t size; if (in.fail()) { in >> size; } else { size = 0; } vector<T> vec; vec.reserve(size); for(unsigned int i = 0; i < size; i++) { T tmp; in >> tmp; vec.push_back(tmp); } return vec; } template<typename T> vector<T> addItem(vector<T> &vec) { system("cls"); string word; unsigned int number; InventoryItem newItem; cout << "-Add a new item-" << endl << endl; cout << "Enter the description for the item: "; getline (cin , word); newItem.Description = word; cout << endl; cout << "Enter the quantity on hand for the item: "; getline (cin , word); number = atoi(word.c_str()); newItem.Quantity = number; cout << endl; cout << "Enter the Retail Cost for the item: "; getline (cin , word); number = atoi(word.c_str()); newItem.retailCost = number; cout << endl; cout << "Enter the Wholesale Cost for the item: "; getline (cin , word); number = atoi(word.c_str()); newItem.wholesaleCost = number; cout << endl; cout << "Enter current date: "; getline (cin , word); newItem.dateAdded = word; vec.push_back(newItem); return vec; } template<typename T> void printItemDescriptions(vector<T> &vec) { int size = vec.size(); if (size) { cout << "---------------------------------" << endl; cout << "| ~ Item Descriptions ~ |" << endl; cout << "---------------------------------" << endl; cout << "*********************************" << endl; for (int i = 0 ; i < size ; i++) { cout << "(" << i+1 << ")" << ": " << vec[i].Description << endl; } cout << "*********************************" << endl << endl; } } template<typename T> int displayRecord(vector<T> &vec) { string word = ""; string 
quit = "quit"; int choice = 1; int size = vec.size(); if (size) { printItemDescriptions(vec); cout << endl; while (1) { cout << "Type \"exit\" to return to the Main Menu." << endl << endl; cout << "Enter \"list\" to re-display the items." << endl << endl; cout << endl; cout << "Pick the number of the item you would like to display: "; getline (cin , word); if (convertToLower(word) == "exit") { system("cls"); return 0; } if (convertToLower(word) == "list") { system("cls"); displayRecord(vec); } choice = atoi(word.c_str()); if (isValidChoice(size, choice)) { system("cls"); cout << endl << "[Item (" << choice << ") details] " << endl << endl; cout << "******************" << endl; cout << "* Description * " << vec[choice-1].Description << endl; cout << "******************" << endl << endl; cout << "******************" << endl; cout << "*Quantity On Hand* " << vec[choice-1].Quantity << endl; cout << "******************" << endl << endl; cout << "******************" << endl; cout << "* Wholesale Cost * " << vec[choice-1].wholesaleCost << endl; cout << "****************** " << endl << endl; cout << "******************" << endl; cout << "* Retail Cost * " << vec[choice-1].retailCost << endl; cout << "****************** " << endl << endl; cout << "******************" << endl; cout << "* Data Added * " << vec[choice-1].dateAdded << endl; cout << "****************** " << endl << endl; } else { system("cls"); cout << "That item doesn't exist!" << endl; cout << "Pick another item or enter \"list\" to see available items." << endl << endl; } } } else { cout << "There are currently no items to display." << endl << endl; system("pause"); system("cls"); return 0; } return 1; } bool isValidChoice(int size, int choice) { for (int i = 0 ; i <= size ; i++) { if (choice == size) { return true; } } return false; } string convertToLower(string word) { for (unsigned int i = 0 ; i < word.size() ; i++) { word[i] = tolower(word[i]); } return word; } ```
In your read function: ``` if (in.fail()) ``` should be: ``` if ( ! in.fail()) ``` Or better: ``` int n; if ( ! (in >> n) ) { n = 0; } ```
Use some separator for structure items, e.g. " " (space)
What am I doing wrong with my serializing a vector with structs in it to a .dat file?
[ "", "c++", "serialization", "vector", "fstream", "" ]
Every 15-30 minutes Netbeans shows a "`java.lang.OutOfMemoryError: PermGen space`". From what I learned from Google this seems to be related to classloader leaks or memory leaks in general. Unfortunately all suggestions I found were related to application servers and I have no idea to adapted them to Netbeans. (I'm not even sure it's the same problem) Is it a problem in my application? How can I find the source?
It is because of constant class loading. Java stores class byte code and all the constants (e.g. string constants) in the permanent heap, which is not garbage collected by default (which makes sense in the majority of situations, because classes are loaded only once during the lifetime of an application). In applications that keep loading classes throughout their entire lifetime, such as: * web and application servers during hot redeployment; * IDEs when running developed applications (every time you hit the Run button in NetBeans or Eclipse it loads your application's classes anew); * etc. this behavior is a problem, because the heap eventually fills up. You need to turn on permanent heap garbage collection to prevent this error. I use the options ``` -XX:MaxPermSize=256M -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled ``` (they stopped my Eclipse 3.4 from throwing "`java.lang.OutOfMemoryError: PermGen space`", so they should also work with NetBeans). **Edit**: Just note that for NetBeans you set those options in: `[Netbeans installation]\etc\netbeans.conf` You should prefix those options with `-J` and add them to `netbeans_default_options` (see the comments in `netbeans.conf` for more information).
Try adding the following argument to the NetBeans configuration file (`netbeans.conf`): -J-XX:MaxPermSize=256m
How can I prevent PermGen space errors in Netbeans?
[ "", "java", "netbeans", "netbeans6.5", "java-6", "" ]
I have seen the [prior](https://stackoverflow.com/questions/10634/should-i-switch-from-nant-to-msbuild) questions and answers. In that one question, the original poster asked a followup question: > what are the compelling reasons to use msbuild? are there cons? I didn't see the answer to that. I'd like to know the converse, too. What are the compelling features of Nant? I think, for nant, cross-platform is big. For msbuild, it is the currency and integration with Visual Studio. Does this sound right? Anything else? **EDIT/Added**: anyone have a feature list comparison? Someone said "nant has more features out of the box." Which ones? Would it make sense to combine these projects, combine efforts so as to benefit mutually? Has anyone asked MS if they'd be willing to contribute msbuild to the community, like WiX? What are the chances? **EDIT2**: I just found [this prior discussion](https://stackoverflow.com/questions/476163/nant-or-msbuild-which-one-to-choose-and-when), not sure why I couldn't find it before.
**Nant** has more features out of the box, but **MSBuild** has a much better fundamental structure (item metadata rocks) which makes it much easier to build reusable MSBuild scripts. MSBuild takes a while to understand, but once you do it's very nice. Learning materials: * [Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build](https://rads.stackoverflow.com/amzn/click/com/0735626286) by Sayed Ibrahim Hashimi (Jan, 2009) * [Deploying .NET Applications: Learning MSBuild and ClickOnce](https://rads.stackoverflow.com/amzn/click/com/B001D25YT6) by Sayed Y. Hashimi (Sep, 2008)
I simply find NAnt easier to use. I dare say this is partly due to my background in Ant, but I found building a NAnt file for Protocol Buffers to be a *much* simpler job than building an MSBuild file for MiscUtil. (Even now there are things in the MiscUtil build which I'd like to include but can't - it seems ridiculously hard to dump the output of a task to a text file, IIRC.) The concepts are simpler, and there seem to be fewer gotchas in terms of when file collections are evaluated etc. I currently like using a setup which I previously thought was really silly - I use NAnt for my "main" build file, but invoke MSBuild to do the actual "compile my .NET project" step. The idea of having two build systems for the same project is abhorrent, but I basically don't treat the MSBuild part as a full build system - it's just an easy way of compiling, and I never need to manually examine the project file. (I only interact with it via Visual Studio.) I've been able to evolve my Protocol Buffers build very easily this way, and I doubt I'd have had the same experience if I'd used MSBuild. Soon I'm going to try building it all with Mono (when 2.4 is released - until then there are showstoppers in gmcs) at which point we'll see how portable the strategy is...
Why would I want to continue to use Nant when MSBuild is available?
[ "", "c#", ".net", "msbuild", "nant", "" ]
I'm trying to pass command-line arguments to a C# application, but I have a problem passing something like this ``` "C:\Documents and Settings\All Users\Start Menu\Programs\App name" ``` even if I add `" "` to the argument. Here is my code: ``` public ObjectModel(String[] args) { if (args.Length == 0) return; //no command line arg. //System.Windows.Forms.MessageBox.Show(args.Length.ToString()); //System.Windows.Forms.MessageBox.Show(args[0]); //System.Windows.Forms.MessageBox.Show(args[1]); //System.Windows.Forms.MessageBox.Show(args[2]); //System.Windows.Forms.MessageBox.Show(args[3]); if (args.Length == 3) { try { RemoveInstalledFolder(args[0]); RemoveUserAccount(args[1]); RemoveShortCutFolder(args[2]); RemoveRegistryEntry(); } catch (Exception e) { } } } ``` And here is what I'm passing: ``` C:\WINDOWS\Uninstaller.exe "C:\Program Files\Application name\" "username" "C:\Documents and Settings\All Users\Start Menu\Programs\application name" ``` The problem is I can get the first and the second args correctly, but the last one comes through as `C:\Documents`. Any help?
I just ran a check and verified the problem. It surprised me, but it is the last \ in the first argument. ``` "C:\Program Files\Application name\" <== remove the last '\' ``` This needs more explanation, does anybody have an idea? I'm inclined to call it a bug. --- Part 2, I ran a few more tests and ``` "X:\\aa aa\\" "X:\\aa aa\" next ``` becomes ``` X:\\aa aa\ X:\\aa aa" next ``` A little Google action gives some insight from a [blog by Jon Galloway](http://weblogs.asp.net/jgalloway/archive/2006/10/05/_5B002E00_NET-Gotcha_5D00_-Commandline-args-ending-in-_5C002200_-are-subject-to-CommandLineToArgvW-whackiness.aspx), the basic rules are: * the backslash is the escape character * always escape quotes * only escape backslashes when they precede a quote.
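The whackiness described above can be reproduced with a small, hypothetical model of the MSVCRT argument-splitting rules (all names below are mine, written in Python purely for illustration; the rules modeled are: backslashes are literal unless they precede a quote, 2n backslashes plus a quote yield n backslashes and a toggling quote, and 2n+1 backslashes plus a quote yield n backslashes and a literal quote):

```python
def split_args(cmdline):
    # Tiny sketch of the splitting rules; not a full CommandLineToArgvW clone.
    args, cur, i, in_quotes, has_arg = [], "", 0, False, False
    while i < len(cmdline):
        c = cmdline[i]
        if c == "\\":
            n = 0
            while i < len(cmdline) and cmdline[i] == "\\":
                n += 1
                i += 1
            if i < len(cmdline) and cmdline[i] == '"':
                cur += "\\" * (n // 2)
                if n % 2:            # odd run: the quote is escaped
                    cur += '"'
                    i += 1           # even run: quote is handled next pass
            else:
                cur += "\\" * n      # not before a quote: all literal
            has_arg = True
        elif c == '"':
            in_quotes = not in_quotes
            has_arg = True
            i += 1
        elif c == " " and not in_quotes:
            if has_arg:
                args.append(cur)
            cur, has_arg = "", False
            i += 1
        else:
            cur += c
            has_arg = True
            i += 1
    if has_arg:
        args.append(cur)
    return args

# The trailing backslash swallows the closing quote, merging the arguments:
print(split_args(r'"X:\\aa aa\" next'))
```

Running the sketch on the two test strings from the answer reproduces both results: `"X:\\aa aa\\" next` splits into two arguments ending in a single backslash, while `"X:\\aa aa\" next` collapses into one argument containing a literal quote.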
To add to Ian Kemp's answer: if your assembly is called "myProg.exe" and you pass in the string "C:\Documents and Settings\All Users\Start Menu\Programs\App name", like so ``` C:\>myprog.exe "C:\Documents and Settings\All Users\Start Menu\Programs\App name" ``` then the string "C:\Documents and Settings\All Users\Start Menu\Programs\App name" will be at args[0].
Passing command-line arguments in C#
[ "", "c#", "command-line-arguments", "" ]
If I write a PHP script to connect to an SMTP server (in my case, this is Postfix installed locally) and send an email, the first "received" header in the e-mail turns out as: ``` Received: from [SERVER_IP] (SERVER_HOSTNAME [127.0.0.1]) by SERVER_HOSTNAME (Postfix) with ESMTP id D682A34381 for <TO_EMAIL>; Thu, 5 Mar 2009 17:25:18 +0000 (GMT) ``` This is because the PHP script is being accessed through the browser and Apache is bound to the SERVER\_IP. However, if I execute the same script from the PHP CLI, then the first "received" header is instead: ``` Received: from localhost.localdomain (SERVER_HOSTNAME [127.0.0.1]) by SERVER_HOSTNAME (Postfix) with ESMTP id AB51934381 for <TO_EMAIL>; Thu, 5 Mar 2009 17:18:01 +0000 (GMT) ``` This is because the PHP CLI is being called by a regular Linux user (through a cron). How can I cause the PHP CLI to be bound to the hostname/IP so that when it is called by the user through the cron job, the "received" header shows the server hostname/IP instead of localhost.localdomain? **Note 1:** the hostname is correctly set in hostname --fqdn, /etc/hosts, /etc/sysconfig/network and /proc/sys/kernel/hostname **Note 2:** I'm using [Swift Mailer](http://www.swiftmailer.org/) in PHP, although this is probably irrelevant.
Consider seriously using Swift Mailer's "NativeMail" or even the "Sendmail" interface. Both of these options will avoid the network traffic, the internal IP address, and the Received header in question. They will also be significantly faster, which may be important to you. If you insist on using SMTP, note that Swift "autodetects" the local host name (that is supplied in the SMTP dialog) using the `$_SERVER["SERVER_ADDR"]` variable, which won't be set by the CLI. If you cannot arrange for this to have a better value, you can pass it to the Swift constructor as the second argument, as in: ``` $swift = new Swift(new Swift_Connection_SMTP("localhost"), "my_domain_here"); ```
I believe it will work better if you don't connect to the server by the localhost address, and instead connect to its assigned real IP address or its fully qualified domain name.
PHP hostname bindings
[ "", "php", "linux", "email", "" ]
I have used the "select" keyword and extension method to return an `IEnumerable<T>` with LINQ, but I have a need to return a generic `Dictionary<T1, T2>` and can't figure it out. The example I learned this from used something in a form similar to the following: ``` IEnumerable<T> coll = from x in y select new SomeClass{ prop1 = value1, prop2 = value2 }; ``` I've also done the same thing with extension methods. I assumed that since the items in a `Dictionary<T1, T2>` can be iterated as `KeyValuePair<T1, T2>` that I could just replace "SomeClass" in the above example with "`new KeyValuePair<T1, T2> { ...`", but that didn't work (Key and Value were marked as readonly, so I could not compile this code). Is this possible, or do I need to do this in multiple steps? Thanks.
The extension methods also provide a [ToDictionary](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.todictionary.aspx) extension. It is fairly simple to use; the general usage is passing a lambda selector for the key and getting the object as the value, but you can pass a lambda selector for both key and value. ``` class SomeObject { public int ID { get; set; } public string Name { get; set; } } SomeObject[] objects = new SomeObject[] { new SomeObject { ID = 1, Name = "Hello" }, new SomeObject { ID = 2, Name = "World" } }; Dictionary<int, string> objectDictionary = objects.ToDictionary( o => o.ID, o => o.Name); ``` Then `objectDictionary[1]` would contain the value "Hello".
A more explicit option is to project the collection to an IEnumerable of `KeyValuePair` and then convert it to a Dictionary. ``` Dictionary<int, string> dictionary = objects .Select(x=> new KeyValuePair<int, string>(x.Id, x.Name)) .ToDictionary(x=>x.Key, x=>x.Value); ```
Select a Dictionary<T1, T2> with LINQ
[ "", "c#", ".net", "linq", "generics", "" ]
I know this isn't specific to PHP, but what's the point of using timezones listed like this: <https://www.php.net/manual/en/timezones.america.php>? For example "America/Indianapolis" and "America/New\_York". What is the problem with EST, EDT, CST, CDT, etc.?
This is just a [different timezone format](http://en.wikipedia.org/wiki/List_of_tz_zones_by_name) provided by the [Zoneinfo database](http://en.wikipedia.org/wiki/Zoneinfo): > The time zones in the database are > given uniform names, such as > “America/New\_York”, in an attempt to > make them easier to understand by > humans and to remove ambiguity.
`'America/New_York'` doesn't depend on the date. `EST` is only valid in winter, while in summer you have to change it to `EDT`. Another problem with 3-letter codes is that there are conflicts. `EST` means Eastern Standard Time, but that might be in America or Australia.
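The point is easy to see concretely with any API backed by the same tz (Olson) database; here is a small illustration using Python's `zoneinfo` module (Python is used only because it exposes the same database the PHP manual lists; this is not PHP-specific):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+, reads the tz database

ny = ZoneInfo("America/New_York")

# The same zone name yields a different abbreviation and UTC offset
# depending on the date, which a fixed "EST" or "EDT" label cannot do.
winter = datetime(2009, 1, 15, 12, 0, tzinfo=ny)
summer = datetime(2009, 7, 15, 12, 0, tzinfo=ny)

print(winter.tzname())  # winter: EST
print(summer.tzname())  # summer: EDT
```

So "America/New_York" encodes both EST and EDT, plus the rules for when each applies; a bare abbreviation carries none of that.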
What's the point of using wordy Time Zones?
[ "", "php", "timezone", "" ]
I'm trying to write an extension method in .NET that will operate on a generic collection, and remove all items from the collection that match a given criteria. This was my first attempt: ``` public static void RemoveWhere<T>(this ICollection<T> Coll, Func<T, bool> Criteria){ foreach (T obj in Coll.Where(Criteria)) Coll.Remove(obj); } ``` However this throws an InvalidOperationException, "Collection was modified; enumeration operation may not execute". Which does make sense, so I made a second attempt with a second collection variable to hold the items that need to be removed and iterate through that instead: ``` public static void RemoveWhere<T>(this ICollection<T> Coll, Func<T, bool> Criteria){ List<T> forRemoval = Coll.Where(Criteria).ToList(); foreach (T obj in forRemoval) Coll.Remove(obj); } ``` This throws the same exception; I'm not sure I really understand why as 'Coll' is no longer the collection being iterated over, so why can't it be modified? If anyone has any suggestions as to how I can get this to work, or a better way to achieve the same, that'd be great. Thanks.
For `List<T>`, this exists already, as `RemoveAll(Predicate<T>)`. As such, I'd suggest that you keep the name (allowing familiarity, and precedence). Basically, you can't remove while iterating. There are two common options: * use indexer based iteration (`for`) and removal * buffer the items to remove, and remove after the `foreach` (as you've already done) So perhaps: ``` public static void RemoveAll<T>(this IList<T> list, Func<T, bool> predicate) { for (int i = 0; i < list.Count; i++) { if (predicate(list[i])) { list.RemoveAt(i--); } } } ``` Or more generally for any `ICollection<T>`: ``` public static void RemoveAll<T>(this ICollection<T> collection, Func<T, bool> predicate) { T element; for (int i = 0; i < collection.Count; i++) { element = collection.ElementAt(i); if (predicate(element)) { collection.Remove(element); i--; } } } ``` This approach has the advantage of avoiding lots of extra copies of the list.
As Marc said, `List<T>.RemoveAll()` is the way to go for lists. I'm surprised your second version didn't work though, given that you've got the call to `ToList()` after the `Where()` call. Without the `ToList()` call it would certainly make sense (because it would be evaluated lazily), but it should be okay as it is. Could you show a short but complete example of this failing? EDIT: Regarding your comment in the question, I still can't get it to fail. Here's a short but *complete* example which works: ``` using System; using System.Collections.Generic; using System.Linq; public class Staff { public int StaffId; } public static class Extensions { public static void RemoveWhere<T>(this ICollection<T> Coll, Func<T, bool> Criteria) { List<T> forRemoval = Coll.Where(Criteria).ToList(); foreach (T obj in forRemoval) { Coll.Remove(obj); } } } class Test { static void Main(string[] args) { List<Staff> mockStaff = new List<Staff> { new Staff { StaffId = 3 }, new Staff { StaffId = 7 } }; Staff newStaff = new Staff{StaffId = 5}; mockStaff.Add(newStaff); mockStaff.RemoveWhere(s => s.StaffId == 5); Console.WriteLine(mockStaff.Count); } } ``` If you could provide a similar *complete* example which fails, I'm sure we can work out the reason.
How to conditionally remove items from a .NET collection
[ "", "c#", ".net", "collections", "extension-methods", "" ]
I need to calculate Count, Average and a couple other things of a column as a result of a Query and then put the result in a text box in a form. I was thinking about either implementing it in the Query and putting the results in a column or something like that, OR use a VBA function to calculate it; however I don't know how to calculate this for an entire column in VBA. Any suggestions/examples?
Actually I found that this is very easy in VBA. I didn't want to add another field to my Query so I did this: ``` Forms!MyForm!AvgTextBox = Avg([mytable.values2Baveraged]) Forms!MyForm!CountTextBox = Count([mytable.values2Bcounted]) ``` This calculates the function on the entire column. This worked perfectly just in case anyone cared.
Have you considered the domain aggregate functions? These roughly take the form: ``` DAvg("SomeField","SomeTable","Where Statement, If Required") DCount("*","SomeTable") ``` You can set the Control Source of a control to a function, but domain aggregate functions may not suit a large recordset. An alternative is to use a recordset: ``` Dim rs As DAO.Recordset Set rs = CurrentDB.OpenRecordset("SELECT Count(*) As CountAll " _ & "FROM SomeTable") Me!txtTextBox=rs!CountAll ```
Average, Count, etc of entire column of table in SQL Query or as Function in VBA to Display result in Form?
[ "", "sql", "ms-access", "vba", "" ]
I understand the copy constructor is called in three instances 1. When instantiating one object and initializing it with values from another object. 2. When passing an object by value. **3. When an object is returned from a function by value.** I have a question about no. 3: if the copy constructor is called when an object is returned by value, shouldn't it create problems if the object is declared locally in the function? I mean, the copy constructor here is a deep-copy one and takes a reference to an object as its parameter.
It's called exactly to avoid problems. A new object serving as the result is initialized from the locally-defined object, then the locally-defined object is destroyed. In the case of a user-defined deep-copy constructor it's all the same. First storage is allocated for the object that will serve as the result, then the copy constructor is called. It uses the passed reference to access the locally-defined object and copy what's necessary to the new object.
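A small, self-contained C++ sketch (all names below are mine) makes the ordering visible: the copy constructor builds the return value from the local object before the locals are destroyed. Returning one of two locals is used here deliberately, because it inhibits the named-return-value optimization that could otherwise elide the copy:

```cpp
#include <cassert>

struct Tracker {
    static int copies;   // counts copy-constructor invocations
    int value;
    explicit Tracker(int v) : value(v) {}
    Tracker(const Tracker& other) : value(other.value) { ++copies; }
};
int Tracker::copies = 0;

// The ternary makes the returned object unknown at compile time, so the
// compiler must copy the chosen local into the return value.
Tracker pick(bool first) {
    Tracker a(1), b(2);
    return first ? a : b;
}

int copies_made_returning(bool first) {
    Tracker::copies = 0;
    Tracker t = pick(first);   // t is valid even though a and b are gone
    (void)t;
    return Tracker::copies;
}
```

Depending on the standard level, this reports one copy (C++17, where the temporary-to-`t` initialization is guaranteed to be elided) or possibly two on older compilers; the point is that at least one copy always happens, so the caller never touches the destroyed local.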
The copy is done before the called function exits, and copies the then-existing local variable into the return value. The called function has access to the memory the return value will occupy, even though that memory is not "in scope" when the copy is being made, it's still available.
Copy Constructor in C++ is called when object is returned from a function?
[ "", "c++", "constructor", "copy-constructor", "" ]
I have a simple base and derived class, and I want both to have `shared_from_this()`. This simple solution: ``` class foo : public enable_shared_from_this<foo> { void foo_do_it() { cout<<"foo::do_it\n"; } public: virtual function<void()> get_callback() { return boost::bind(&foo::foo_do_it,shared_from_this()); } virtual ~foo() {}; }; class bar1 : public foo , public enable_shared_from_this<bar1> { using enable_shared_from_this<bar1>::shared_from_this; void bar1_do_it() { cout<<"foo::do_it\n"; } public: virtual function<void()> get_callback() { return boost::bind(&bar1::bar1_do_it,shared_from_this()); } }; ``` causes the exception `tr1::bad_weak_ptr` in the following code: ``` shared_ptr<foo> ptr(shared_ptr<foo>(new bar1)); function<void()> f=ptr->get_callback(); f(); ``` So after "googling" I found the following solution: ``` class bar2 : public foo { void bar2_do_it() { cout<<"foo::do_it\n"; } shared_ptr<bar2> shared_from_this() { return boost::static_pointer_cast<bar2>(foo::shared_from_this()); } public: virtual function<void()> get_callback() { return boost::bind(&bar2::bar2_do_it,shared_from_this()); } }; ``` And now it works. Is there a better, more convenient and correct way to `enable_shared_from_this` for both parent and child? Thanks
Sorry, but there isn't. The problem is that `shared_ptr<foo>` and `shared_ptr<bar1>` are different types. I don't understand everything that's going on under the hood, but I *think* that when the constructor returns and is assigned to a `shared_ptr<foo>`, the internal `weak_ptr<bar1>` sees that nothing is pointing to it (because only a `shared_ptr<bar1>` would increment the counter) and resets itself. When you call `bar1::shared_from_this` in `get_callback`, you get the exception because the internal `weak_ptr` isn't pointing to anything. Essentially, `enable_shared_from_this` only seems to work transparently from a single class in a hierarchy. If you try [implementing it manually](http://www.boost.org/doc/libs/1_38_0/libs/smart_ptr/sp_techniques.html#from_this), the problem should become obvious.
The OP solution can be made more convenient by defining the following on the base class. ``` protected: template <typename Derived> std::shared_ptr<Derived> shared_from_base() { return std::static_pointer_cast<Derived>(shared_from_this()); } ``` This can be made more convenient by placing it in a base class (for reuse). ``` #include <memory> template <class Base> class enable_shared_from_base : public std::enable_shared_from_this<Base> { protected: template <class Derived> std::shared_ptr<Derived> shared_from_base() { return std::static_pointer_cast<Derived>(shared_from_this()); } }; ``` and then deriving from it as follows. ``` #include <functional> #include <iostream> class foo : public enable_shared_from_base<foo> { void foo_do_it() { std::cout << "foo::do_it\n"; } public: virtual std::function<void()> get_callback() { return std::bind(&foo::foo_do_it, shared_from_base<foo>()); } }; class bar1 : public foo { void bar1_do_it() { std::cout << "bar1::do_it\n"; } public: virtual std::function<void()> get_callback() override { return std::bind(&bar1::bar1_do_it, shared_from_base<bar1>()); } }; ```
How to enable_shared_from_this of both parent and derived
[ "", "c++", "boost", "smart-pointers", "" ]
This is a followup question to one I previously asked: [start-program-if-not-already-running-in-java](https://stackoverflow.com/questions/660205/start-program-if-not-already-running-in-java "start-program-if-not-already-running-in-java") I didn't get a great solution there (as there doesn't appear to be one), but I have a related question: Is there any way to launch an application in Java code (an .exe in Windows, not a Java app) and have it start minimized? Or perhaps to minimize it right after it starts? That would solve the focus issue from the other question, and the already-running problem would more or less deal with itself. Clarification, again: the Java client and the .exe are running in Windows and I really don't have the ability to write any wrappers or make use of JNI mojo or anything like that. I more or less need a pure Java solution. Again, thanks for the help and I am more than willing to accept an answer that is simply: "This is just not possible."
Windows only: ``` public class StartWindowMinimized { public static void main(String[] args) throws IOException { if (args.length != 1) { System.err .println("Expected: one argument; the command to launch minimized"); } String cmd = "cmd.exe /C START /MIN "; Runtime.getRuntime().exec(cmd + args[0]); } } ``` Sample usage: ``` java -cp . StartWindowMinimized notepad.exe java -cp . StartWindowMinimized cmd.exe ``` To understand the arguments involved: ``` cmd /? START /? ```
I'm not that familiar with the specifics of Java, but according to a web site I just looked at, if you're using java.awt.Frame (which includes JFrame from Swing), you should use the function off of that frame called setState, which accepts Frame.ICONIFIED and Frame.NORMAL as a parameter (iconified would be the minimized state). [How do I minimize a Java application window?](http://www.jguru.com/faq/view.jsp?EID=53754)
Launch Application in a minimized state from Java
[ "", "java", "windows", "" ]
I have a table full of id's,categories and weights that I need to reference in my program as I read in records that contain those categories. What is the most efficient method to read those from a database and put into a structure that I can reference? The ID's (and possibly the names) would be unique Data might look like: ``` ID,Category,Weight 1,Assignment,5 2,Test,10 3,Quiz,5 4,Review,3 ```
Your best bet is to read in your table using a DataReader, and put each row into an object containing Category and Weight, then each object into a Dictionary.
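Ignoring the data-access layer, the shape of the lookup structure this describes can be sketched as follows (a hypothetical Python sketch using the sample rows from the question; in C# this would be a `Dictionary` keyed by the ID, with each row coming from a DataReader):

```python
# Sample rows from the question: (ID, Category, Weight)
rows = [
    (1, "Assignment", 5),
    (2, "Test", 10),
    (3, "Quiz", 5),
    (4, "Review", 3),
]

# Key by the unique ID; the value bundles category and weight so a
# record's category can be resolved with one dictionary lookup.
lookup = {row_id: {"category": cat, "weight": weight}
          for row_id, cat, weight in rows}

print(lookup[2])  # {'category': 'Test', 'weight': 10}
```

Since the IDs are unique, each lookup is a constant-time hash access rather than a scan over the table.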
If you're using a later version of .NET, you could always use Linq to just grab that data for you.
Loading a lookup table from a database into a C# program - data structure?
[ "", "c#", "list", "data-structures", "lookup", "" ]
While tracing the active connections on my db I found that sometimes the connections exceed 100; is that normal? After a few minutes it returns to 20 or 25 active connections. [more details about my problem](https://stackoverflow.com/questions/673584/is-a-duration-of-394-when-executing-a-sql-query-too-much) **Traffic on the site is around 200 visitors per day.** **Why am I asking? Because the default MaxPool in the ASP.NET connection string is 100. Also, I am using the connection in the website (IIS).**
That really depends on your site and your traffic. I've seen a site peak out at over 350 active connections to SQL during its peak time. That was for roughly 7,000 concurrent web users, on two web servers, plus various backend processes. # Edit Some additional information that we need to give you a better answer: * How many web processes hit your SQL server? For example, are you using web gardens? Do you have multiple servers, and how many if you do? This is important because then you can calculate how many connections you can have by figuring out how many worker threads per process you have configured. Assume the worst case: each thread is running, which would add a connection to the pool. * Are you using connection pooling? If so, you're going to see the connections stick around after the user's request ends. By default it's enabled. * How many concurrent users do you have? But I think you're going after this wrong: you're having an issue with no free connections available in your pool. The first thing I'd look for is any leaked connections (connections being held open for longer than they should). For example, passing a data reader up to the web page could be a sign of this. The next thing is to evaluate the default settings. Maybe you should run a web garden, which should give you more connections, or increase the number of connections available. The last thing I would do is try to optimize queries like in your last question. Let's say you cut those queries in half; all you've done is bought yourself more time until more users come onto the system, and you're right back here, only this time you might not be able to optimize that query yet again.
You're leaving out some details making it difficult to answer correctly but... It depends, really. If you're not using connection pooling then each time a page is hit that requires access to the database a new connection is going to be opened. So sure, it could be perfectly normal.
Is more than 100 active connection to SQL server db not normal in an ASP.NET website?
[ "", "asp.net", "sql", "sql-server", "security", "" ]
Below is a page that handles a login script and I am wondering if I have introduced any security holes. I have been reading articles on protecting against injections and other attacks and wanted to make sure that my code is secure. It is submitted via ajax and returns JSON based on whether the login is correct or not. ``` <?php ob_start(); session_start(); include ("config.inc.php"); include ("jsonEncode.php"); // ausername and apassword sent from form $ausername = ''; $apassword = ''; $ausername = mysql_real_escape_string(stripslashes($_GET['username'])); $apassword = mysql_real_escape_string(stripslashes($_GET['password'])); $sql = "SELECT * FROM admin WHERE ausername='$ausername' AND apassword='$apassword' LIMIT 1"; $result = mysql_query($sql) or die(mysql_error()); $data = mysql_fetch_array($result); $count = mysql_num_rows($result); if($count==1){ $_SESSION['ausername'] = $ausername; $_SESSION['apassword'] = $apassword; $_SESSION['admin_id'] = $data['a_id']; $a_id = $data['a_id']; $_SESSION['LastLogin'] = $data['last_login']; $query = "UPDATE admin SET last_login = Now() WHERE `a_id`= $a_id"; mysql_query($query); //echo $query; $_SESSION['aloggedin'] = "1234"; // valid $var = array('avalid' => 1, 'ausername' => $ausername, 'apassword' => $apassword); print php_json_encode($var); }else{ // invalid $var = array('avalid' => 0, 'ausername' => $ausername, 'apassword' => $apassword); print php_json_encode($var); } ?> ```
You might want to use the POST method rather than GET with the login form, otherwise their password will appear in the URL and URLs aren't very secure (they might get bookmarked or sent to another server as a referral URL, for example).
You don't need to strip the slashes. Unless you are also stripping slashes when these columns are populated, you've actually introduced a security hole -- if for whatever reason you don't have a unique constraint on the username field, and/or you have slashes in the in the stored username or password fields, and their passwords differed only by a slash, you could get one user logged in as another.
Is my SQL request secure with mysql_real_escape_string?
[ "", "php", "mysql", "" ]
I want to pass a variable set by the user to the `setcookie` function. I would like to let the user change the color of some parts of the website. So far the information about the color is sent to the server with `$_SESSION['colorcode']`, and I would like to add it to `setcookie` as well so that when the user logs in to the site next time, his/her color is there. I've got this code: ``` setcookie( 'colorcode', $_SESSION['colorcode'], time() + 60 * 60 * 24 * 30, '', '', false, true ); ``` I would like to save the value of the variable in a cookie, but it only works for the session. What is wrong? How do I do it so the color is there when the user logs in? I'm looking for a way other than storing it in a database or file.
Did you read back the value from the cookie at the beginning of the next session? Setting the cookie looks good, but I think the last parameters could be omitted. ``` setcookie("colorcode", $_SESSION['colorcode'], time()+3600*24*30, '/'); ``` Perhaps even the path (`'/'`) is optional. But this only sets the cookie. You have to read the data back in when the user returns to your site the next time. ``` if ( !isset($_SESSION['colorcode']) and isset($_COOKIE['colorcode']) ) { if ( preg_match('/^#?[0-9a-fA-F]{6}$/', $_COOKIE['colorcode']) ) { $_SESSION['colorcode'] = $_COOKIE['colorcode']; } else { // bad value... delete cookie if you like } } ``` When there is no `colorcode` in the session but the cookie value exists, the data is validated, and if it's a valid six-digit hex color code, the value is inserted into the session. The validation is necessary because a cookie is data that comes from the user and is therefore potentially malicious.
Try to check this: ``` setcookie('colorcode',$_SESSION['colorcode'],time()+60*60*24*30); ```
how to use variable in setcookie()?
[ "", "php", "session", "cookies", "" ]
Some of my colleagues are convinced that committing build artefacts to the subversion repository is a good idea. The argument is that this way, installation and update on the test machines is easy - just "svn up"! I'm sure there are weighty arguments against this bad practice, but all I can think of are lame ones like "it takes up more room". What are the best, killer reasons to not do this? And what other approaches should we do instead? This is for Java code if that makes a difference. Everything is compiled from Eclipse (with no automated PDE builds). When I say add the build artifacts, I mean a commit would look like this: ``` "Added the new Whizbang feature" M src/foo/bar/Foo.java M bin/Foo.jar ``` Each code change has the corresponding generated jar file.
In my opinion the code repository should only contain source code as well as third-party libraries required to compile this source code (the third-party libraries might also be retrieved with some dependency management tool during the build process). The resulting binaries should not get checked in along with the source code. I think the problem in your case is that you don't have proper build scripts in place. That's why building a binary from the sources involves some work like starting up Eclipse, importing the project, adjusting classpaths, etc... If there are build scripts in place, getting the binaries can be done with a command like: ``` svn update; ant dist ``` I think the most important reason not to check in the binaries along with the source is the resulting size of your repository. This will cause: * A larger repository and maybe too little space on the versioning system server * Lots of traffic between the versioning system server and the clients * Longer update times (imagine you do an SVN update from the internet...) Another reason might be: * Source code is easily comparable, so lots of the features of a versioning system make sense. But you can't easily compare binaries... Also, your approach as described above introduces a lot of overhead in my opinion. What if a developer forgets to update a corresponding jar file?
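For reference, the `ant dist` step mentioned above could be backed by a minimal, hypothetical `build.xml` along these lines (the project and directory names are assumptions, not taken from the question):

```xml
<!-- Minimal sketch: compile the sources and package them into a jar -->
<project name="myapp" default="dist">
    <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes"/>
    </target>
    <target name="dist" depends="compile">
        <mkdir dir="dist"/>
        <jar destfile="dist/myapp.jar" basedir="build/classes"/>
    </target>
</project>
```

With something like this in place, a test machine never needs Eclipse at all; checking out the sources and running the build is the whole installation step.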
Firstly, Subversion (and all others nowadays) are not source code control managers (I always thought SCM means Software Configuration Management), but version control systems. That means they store changes to the stuff you store in them, it doesn't have to be source code, it could be image files, bitmap resources, configuration files (text or xml), all kinds of stuff. There's only 1 reason why built binaries shouldn't be considered as part of this list, and that's because you can rebuild them. However, think why you would want to store the released binaries in there as well. Firstly, its a system to assist you, not to tell you how you should build your applications. Make the computer work for you, instead of against you. So what if storing binaries takes up space - you have hundreds of gigabytes of disk space and super fast networks. Its not a big deal to store binary objects in there anymore (whereas ten years ago it might have been a problem - this is perhaps why people think of binaries in SCM as a bad practice). Secondly, as a developer, you might be comfortable with using the system to rebuild any version of an application, but the others who might use it (eg qa, test, support) might not. This means you'd need an alternative system to store the binaries, and really, you already have such a system, its your SCM! Make use of it. Thirdly, you assume that you *can* rebuild from source. Obviously you store all the source code in there, but you don't store the compiler, the libraries, the sdks, and *all* the other dependant bits that are required. What happens when someone comes along and asks "can you build me the version we shipped 2 years ago, a customer has a problem with that version". 2 years is an eternity nowadays, do you even have the same compiler you used back then? What happens when you check all the source out only to find that the newly updated sdk is incompatible with your source and fails with errors? 
Do you wipe your development box and reinstall all the dependencies just to build this app? Can you even remember what all the dependencies were?! The last point is the big one: to save a few KB of disk space, you might cost yourself days if not weeks of pain. (And Sod's law also says that whichever app you need to rebuild will be the one that required the most obscure, difficult-to-set-up dependency you were ever glad to get rid of.) So store the binaries in your SCM, and don't worry over trivialities. PS. We stick all binaries in their own 'release' directory per project; then when we want to update a machine, we use a special 'setup' project that consists of nothing but svn:externals. You export the setup project and you're done, as it fetches the right things and puts them into the right directory structure.
Alternative to binaries in Subversion
[ "", "java", "svn", "" ]
Why wasn't the `.clone()` method specified in the `java.lang.Cloneable` interface?
Basically, it's a broken interface. Ken Arnold and Bill Venners discussed it in [Java Design Issues](http://www.artima.com/intv/issues3.html). Arnold: > If I were to be God at this point, and many people are probably glad I am not, I would say deprecate `Cloneable` and have a `Copyable`, because `Cloneable` has problems. Besides the fact that it's misspelled, `Cloneable` doesn't contain the `clone` method. That means you can't test if something is an instance of `Cloneable`, cast it to `Cloneable`, and invoke `clone`. You have to use reflection again, which is awful. That is only one problem, but one I'd certainly solve.
See this bug in the Java bugs database: <https://bugs.java.com/bugdatabase/view_bug?bug_id=4098033> Essentially, this is a design flaw in earlier versions of Java that they are not intending to fix in the Cloneable interface as to do so would break compatibility with some existing code.
Java: Rationale of the Cloneable interface
[ "", "java", "clone", "specifications", "cloneable", "" ]
I'm running Python 2.4 in a game engine and I want to be able to turn off all prints if needed. For example I'd like to have the prints on for a debug build, and then turned off for a release build. It's also imperative that it's as transparent as possible. My solution to this in the C code of the engine is having the `printf` function inside a `vararg` macro, and defining that to do nothing in a release build. This is my current solution: ``` DebugPrints = True def PRINT (*args): global DebugPrints if DebugPrints: string = "" for arg in args: string += " " + str(arg) print string ``` It makes it easy to toggle print outs, but there is possibly a better way to format the string. My main issue is that this is actually adding a lot more function calls to the program. I'm wondering if there is anything you can do to how the print keyword works?
Yes, you can assign `sys.stdout` to whatever you want. Create a little class with a `write` method that does nothing: ``` class DevNull(object): def write(self, arg): pass import sys sys.stdout = DevNull() print "this goes to nirvana!" ``` With the same technique you can also have your prints logged to a file by setting `sys.stdout` to an opened file object.
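A minimal, self-contained sketch of the same idea (written here in Python 3 syntax, where `print` is a function; the `DevNull` name is just illustrative), including how to restore normal output afterwards:

```python
import io
import sys

class DevNull(object):
    """A file-like object that silently discards everything written to it."""
    def write(self, arg):
        pass
    def flush(self):
        pass

saved = sys.stdout              # keep a reference so output can be restored
sys.stdout = DevNull()
print("this goes to nirvana!")  # discarded
sys.stdout = saved              # printing works normally again

# The same trick can capture output instead of discarding it:
buf = io.StringIO()
sys.stdout = buf
print("captured")
sys.stdout = saved
```

Setting `sys.stdout` to an opened file object, as suggested above, works the same way.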
I know an answer has already been marked as correct, but Python has a debug flag that provides a cleaner solution. You use it like this: ``` if __debug__: print "whoa" ``` If you invoke Python with -O or -OO (as you normally would for a release build), `__debug__` is set to `False`. What's even better is that `__debug__` is a special case for the interpreter; it will actually strip out that code when it writes the `pyc`/`pyo` files, making the resulting code smaller/faster. Note that you can't assign values to `__debug__`, so it's entirely based on those command-line arguments.
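A small sketch of the flag in action (Python 3 syntax; launching a child interpreter is just a convenient way to demonstrate the effect of `-O` from within one script):

```python
import subprocess
import sys

# In the child interpreter started with -O, __debug__ is False and
# `if __debug__:` blocks are stripped from the compiled bytecode.
out = subprocess.check_output(
    [sys.executable, "-O", "-c", "print(__debug__)"]
).decode().strip()
```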
What is the best way to toggle python prints?
[ "", "python", "printing", "" ]
I have the following code: ``` delegate int doStuffDel(int instanceNo, int sleepTime, int repeatCount); string result; private int doStuff(int instanceNo, int sleepTime, int repeatCount) { for (int i = 0; i < repeatCount; i++) { Console.Write(instanceNo); Thread.Sleep(sleepTime); } result = instanceNo + " repeated " + repeatCount; return instanceNo; } private void button3_Click(object sender, EventArgs e) { doStuffDel del = doStuff; IAsyncResult ar = del.BeginInvoke(3, 120, 50, finishedCallback, result); } private void finishedCallback(IAsyncResult ar) { Console.WriteLine("Done. The result was " + ar.AsyncState.ToString()); } ``` I thought ar.AsyncState would return the string passed as the last argument in the call to BeginInvoke, but it's null. Does anybody know why? PS, I know I could pass the del as the last argument in BeginInvoke and then call EndInvoke in the callback, to get some result from the doStuff method -- or I could just get the string val from the class! -- but I am surprised that the AsyncState on the AsyncResult object is null...
The value of `ar.AsyncState` is the value of result when you originally pass it to `BeginInvoke`. The fact that when you passed it, you used the value of `result` doesn't mean that it's "bound" to the result variable. It's just the value of the `result` variable when you pass it to start with. To see this, either set `result` to some test value before calling `BeginInvoke`, or just pass a string literal instead. Either way, you'll get that value in the callback. On the other hand, your callback and original delegate both have the same target object, so you could use `result` within the callback and it would have the value you want - that's another way of maintaining state.
It's passed in when you call BeginInvoke. At that time, *result* is null. If you set *result* before the call, you'll see it in the callback.
Passing data into a callback method (via BeginInvoke) in c#
[ "", "c#", ".net", "multithreading", "" ]
What's the difference between: ``` InputStream is = this.getClass().getClassLoader().getResourceAsStream(fileName) ``` and ``` InputStream is = Thread.currentThread().getContextClassLoader().getResourceAsStream(fileName) ``` and ``` InputStream is = this.getClass().getResourceAsStream(fileName) ``` When are each one more appropriate to use than the others? The file that I want to read is in the classpath as my class that reads the file. My class and the file are in the same jar and packaged up in an EAR file, and deployed in WebSphere 6.1.
There are subtle differences as to how the `fileName` you are passing is interpreted. Basically, you have 2 different methods: `ClassLoader.getResourceAsStream()` and `Class.getResourceAsStream()`. These two methods will locate the resource differently. In `Class.getResourceAsStream(path)`, the path is interpreted as a path local to the package of the class you are calling it from. For example, calling `String.class.getResourceAsStream("myfile.txt")` will look for a file in your classpath at the following location: `"java/lang/myfile.txt"`. If your path starts with a `/`, then it will be considered an absolute path, and will start searching from the root of the classpath. So calling `String.class.getResourceAsStream("/myfile.txt")` will look at the following location in your class path `./myfile.txt`. `ClassLoader.getResourceAsStream(path)` will consider all paths to be absolute paths. So calling `String.class.getClassLoader().getResourceAsStream("myfile.txt")` and `String.class.getClassLoader().getResourceAsStream("/myfile.txt")` will both look for a file in your classpath at the following location: `./myfile.txt`. Every time I mention a location in this post, it could be a location in your filesystem itself, or inside the corresponding jar file, depending on the Class and/or ClassLoader you are loading the resource from. In your case, you are loading the class from an Application Server, so you should use `Thread.currentThread().getContextClassLoader().getResourceAsStream(fileName)` instead of `this.getClass().getClassLoader().getResourceAsStream(fileName)`. `this.getClass().getResourceAsStream()` will also work. Read [this article](http://www.javaworld.com/javaworld/javaqa/2003-08/01-qa-0808-property.html) for more detailed information about that particular problem. --- ## Warning for users of Tomcat 7 and below One of the answers to this question states that my explanation seems to be incorrect for Tomcat 7. 
I've tried to look around to see why that would be the case. So I've looked at the source code of Tomcat's `WebAppClassLoader` for several versions of Tomcat. The implementation of `findResource(String name)` (which is ultimately responsible for producing the URL to the requested resource) is virtually identical in Tomcat 6 and Tomcat 7, but is different in Tomcat 8. In versions 6 and 7, the implementation does not attempt to normalize the resource name. This means that in these versions, `classLoader.getResourceAsStream("/resource.txt")` may not produce the same result as `classLoader.getResourceAsStream("resource.txt")` even though it should (since that's what the Javadoc specifies). [[source code]](https://github.com/apache/tomcat/blob/7.0.96/java/org/apache/catalina/loader/WebappClassLoaderBase.java) In version 8 though, the resource name is normalized to guarantee that the absolute version of the resource name is the one that is used. Therefore, in Tomcat 8, the two calls described above should always return the same result. [[source code]](https://github.com/apache/tomcat/blob/8.5.45/java/org/apache/catalina/loader/WebappClassLoaderBase.java) As a result, you have to be extra careful when using `ClassLoader.getResourceAsStream()` or `Class.getResourceAsStream()` on Tomcat versions earlier than 8. And you must also keep in mind that `class.getResourceAsStream("/resource.txt")` actually calls `classLoader.getResourceAsStream("resource.txt")` (the leading `/` is stripped).
Use `MyClass.class.getClassLoader().getResourceAsStream(path)` to load a resource associated with your code. Use `MyClass.class.getResourceAsStream(path)` as a shortcut, and for resources packaged within your class' package. Use `Thread.currentThread().getContextClassLoader().getResourceAsStream(path)` to get resources that are part of client code, not tightly bound to the calling code. You should be careful with this as the thread context class loader could be pointing at anything.
Different ways of loading a file as an InputStream
[ "", "java", "inputstream", "" ]
I have some site content that contains abbreviations. I have a list of recognised abbreviations for the site, along with their explanations. I want to create a regular expression which will allow me to replace all of the recognised abbreviations found in the content with some markup. For example: content: ``` This is just a little test of the memb to see if it gets picked up. Deb of course should also be caught here. ``` abbreviations: ``` memb = Member; deb = Debut; ``` result: ``` This is just a little test of the [a title="Member"]memb[/a] to see if it gets picked up. [a title="Debut"]Deb[/a] of course should also be caught here. ``` (This is just example markup for simplicity). Thanks. EDIT: CraigD's answer is nearly there, but there are issues. I only want to match whole words. I also want to keep the correct capitalisation of each word replaced, so that deb is still deb, and Deb is still Deb as per the original text. For example, this input: ``` This is just a little test of the memb. And another memb, but not amemba. Deb of course should also be caught here.deb! ```
First you would need to [`Regex.Escape()`](http://msdn.microsoft.com/en-us/library/system.text.regularexpressions.regex.escape.aspx) all the input strings. Then you can look for them in the string, and iteratively replace them with the markup you have in mind (note the `@` verbatim-string prefix on the pattern: in a regular C# string literal, `"\b"` is a backspace character, not the regex word-boundary anchor): ``` string abbr = "memb"; string word = "Member"; string pattern = String.Format(@"\b{0}\b", Regex.Escape(abbr)); string substitute = String.Format("[a title=\"{0}\"]{1}[/a]", word, abbr); string output = Regex.Replace(input, pattern, substitute); ``` EDIT: I asked if a simple [`String.Replace()`](http://msdn.microsoft.com/en-us/library/system.string.replace.aspx) wouldn't be enough - but I can see why regex is desirable: you can use it to enforce "whole word" replacements only, by making a pattern that uses word-boundary anchors. You can go as far as building a single pattern from all your escaped input strings, like this: ``` \b(?:{abbr_1}|{abbr_2}|{abbr_3}|{abbr_n})\b ``` and then using a [match evaluator](http://msdn.microsoft.com/en-us/library/cft8645c.aspx) to find the right replacement. This way you can avoid iterating the input string more than once.
Not sure how well this will scale to a big word list, but I think it should give the output you want (although in your question the 'result' seems identical to 'content')? Anyway, let me know if this is what you're after ``` using System; using System.Collections.Generic; using System.Linq; using System.Text.RegularExpressions; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { var input = @"This is just a little test of the memb to see if it gets picked up. Deb of course should also be caught here."; var dictionary = new Dictionary<string,string> { {"memb", "Member"} ,{"deb","Debut"} }; var regex = "(" + String.Join(")|(", dictionary.Keys.ToArray()) + ")"; foreach (Match metamatch in Regex.Matches(input , regex /*@"(memb)|(deb)"*/ , RegexOptions.IgnoreCase | RegexOptions.ExplicitCapture)) { input = input.Replace(metamatch.Value, dictionary[metamatch.Value.ToLower()]); } Console.Write (input); Console.ReadLine(); } } } ```
Highlight a list of words using a regular expression in c#
[ "", "c#", "regex", "" ]
I have a menu item in the Tools menu, but it needs to go in the file->new menu, however even changing "Tools" to "File" in the pre-generated visual studio code does not give the expected result!!!?
Have you tried running devenv.exe /resetaddin Your.AddIn.Name at the command line (e.g. devenv.exe /resetaddin MyAddin1.Connect)?
You need to change the event listener code as well. Check the auto-generated code segment in the top part of the code.
Visual Studio Addin - Adding a Command to the file->new menu
[ "", "c#", "visual-studio", "visual-studio-addins", "" ]
I want to crop a thumbnail image in my Django application, so that I get a quadratic image that shows the center of the image. This is not very hard, I agree. I have already written some code that does exactly this, but somehow it lacks a certain ... elegance. I don't want to play code golf, but there must be a way to express this shorter and more pythonic, I think. ``` x = y = 200 # intended size image = Image.open(filename) width = image.size[0] height = image.size[1] if (width > height): crop_box = ( ((width - height)/2), 0, ((width - height)/2)+height, height ) image = image.crop(crop_box) elif (height > width): crop_box = ( 0, ((height - width)/2), width, ((height - width)/2)+width ) image = image.crop(crop_box) image.thumbnail([x, y], Image.ANTIALIAS) ``` Do you have any ideas, SO? **edit:** explained x, y
I think this should do (note that in PIL the size attribute is lowercase `image.size`): ``` size = min(image.size) origin_x = image.size[0] / 2 - size / 2 origin_y = image.size[1] / 2 - size / 2 crop_box = (origin_x, origin_y, origin_x + size, origin_y + size) image = image.crop(crop_box) ```
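The crop-box arithmetic can be checked without PIL at all; a sketch of the centred square box for a landscape and a portrait image (the function name here is just illustrative):

```python
def center_square_box(width, height):
    """Return the (left, upper, right, lower) box of a centred square crop."""
    size = min(width, height)
    left = (width - size) // 2
    upper = (height - size) // 2
    return (left, upper, left + size, upper + size)

landscape = center_square_box(400, 200)  # wide image: crop the sides
portrait = center_square_box(200, 400)   # tall image: crop top and bottom
```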
The `fit()` function in the PIL [ImageOps](http://www.pythonware.com/library/pil/handbook/imageops.htm) module does what you want: ``` ImageOps.fit(image, (min(*image.size),) * 2, Image.ANTIALIAS, 0, (.5, .5)) ```
What's a more elegant rephrasing of this cropping algorithm? (in Python)
[ "", "image", "python-imaging-library", "python", "crop", "" ]
I found this piece of code inside System.Web.ISAPIRuntime using Reflector ``` public void DoGCCollect() { for (int i = 10; i > 0; i--) { GC.Collect(); } } ``` Can anyone comment on this? Is there a reason to do GC.Collect() in a loop? Why 10 times and not 3, 5 or 20? Analysis indicated that it's not used inside .net framework but it's public so I suppose that IIS could call it... **edit :** Just for clarification purposes : I have never called GC.Collect and I have no intentions of using it. I know it's a bad idea in most (if not all) cases. The question is why .net framework does it. Thanks for all your answers.
I don't think you're going to get a better explanation than "one of Microsoft's programmers is pretty clueless, and apparently, no one else bothered to look at his code before it was checked in". ;) It does look scary though. It's a fairly common response to bugs you don't understand, though. For whatever reason, you're trying to call the GC, and somehow, calling it doesn't seem to solve your problem (perhaps the real problem was just that you should wait for the finalizer thread or something), so the naive solution is obviously "Well, I'll just keep calling it then". Similar to pressing 'print' repeatedly until your document is printed. Perhaps you should submit it to thedailywtf.com though.
Yes, that's horrible. Firstly, you shouldn't need to do it at all. However, if you really want to force garbage collection as hard as you can, and wait for it to finish, you should probably use: ``` GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced); GC.WaitForPendingFinalizers(); // Collect anything that's just been finalized GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced); ``` Really not a nice idea though.
GC.Collect in a loop?
[ "", "c#", ".net", "garbage-collection", "" ]
Is there some way to make the following possible, or should it be done elsewhere? ``` class JobRecordForm(forms.ModelForm): supervisor = forms.ModelChoiceField( queryset = User.objects.filter(groups__name='Supervisors'), widget = forms.RadioSelect, initial = request.user # is there some way to make this possible? ) class Meta: model = JobRecord ```
You might want to handle this in your view function, since your view function must create the initial form and it knows the user. ``` form = JobRecordForm( {'supervisor':request.user} ) ``` This will trigger validation of this input, BTW, so you can't provide hint values this way.
If you do this in your view.py instead: ``` form = JobRecordForm( initial={'supervisor':request.user} ) ``` Then you won't trigger the validation. See <http://docs.djangoproject.com/en/dev/ref/forms/api/#dynamic-initial-values>
Django Forms, set an initial value to request.user
[ "", "python", "django-forms", "" ]
So I am trying to write an .sh file that will be executable, this is how I'm currently writing it: ``` Writer output = null; try { output = new BufferedWriter(new FileWriter(file2)); output.write(shellScriptContent); output.close(); } catch (IOException ex) { Logger.getLogger(PunchGUI.class.getName()).log(Level.SEVERE, null, ex); } ``` So that writes the file just fine, but it is not executable. Is there a way to change the executable status when I write it? Edit: To further clarify, I am trying to make it execute by default, so that for instance, if you double clicked the generated file, it would automatically execute.
You'd need to chmod it, and you can probably do it by exec'ing a system command like such: Really all you'd need is to fire off something like this: ``` Runtime.getRuntime().exec("chmod u+x "+FILENAME); ``` But if you want to keep track of it more explicitly and capture stdout / stderr, then something more like: ``` Process p = Runtime.getRuntime().exec("chmod u+x "+FILENAME); BufferedReader stdInput = new BufferedReader(new InputStreamReader(p.getInputStream())); BufferedReader stdError = new BufferedReader(new InputStreamReader(p.getErrorStream())); ``` Which I got from here: <http://www.devdaily.com/java/edu/pj/pj010016/pj010016.shtml> **Update:** Test program (note that the script content must be a double-quoted Java String, with the inner quotes escaped): ``` package junk; import java.io.BufferedWriter; import java.io.FileWriter; import java.io.IOException; import java.io.Writer; public class Main{ private String scriptContent = "#!/bin/bash\necho \"yeah toast!\" > /tmp/toast.txt"; public void doIt(){ try{ Writer output = new BufferedWriter(new FileWriter("/tmp/toast.sh")); output.write(scriptContent); output.close(); Runtime.getRuntime().exec("chmod u+x /tmp/toast.sh"); }catch (IOException ex){} } public static void main(String[] args){ Main m = new Main(); m.doIt(); } } ``` On Linux, if you open up a file browser, double click on /tmp/toast.sh and choose to run it, it should generate a text file /tmp/toast.txt with the words 'yeah toast'. I assume Mac would do the same since it's BSD under the hood.
You can call [`File.setExecutable()`](http://java.sun.com/javase/6/docs/api/java/io/File.html#setExecutable(boolean)) to set the owner's executable bit for the file, which might be sufficient for your case. Or you can just `chmod` it yourself with a system call with `Process`. Alas, full-powered programmatic alteration of file permissions isn't available until Java 7. It'll be part of the New IO feature set, which you can read more about [**here**](http://jcp.org/en/jsr/detail?id=203).
Write an executable .sh file with Java for OSX
[ "", "java", "shell", "scripting", "executable", "sh", "" ]
What is the best way to **deep** clone an interconnected set of objects? Example: ``` class A { B theB; // optional // ... } class B { A theA; // optional // ... } class Container { A[] a; B[] b; } ``` The obvious thing to do is walk the objects and deep clone everything as I come to it. This creates a problem however -- if I clone an `A` that contains a `B`, and that `B` is also in the `Container`, that `B` will be cloned twice after I clone the `Container`. The next logical step is to create a `Dictionary` and look up every object before I clone it. This seems like it could be a slow and ungraceful solution, however. Any thoughts?
It's not an elegant solution for sure, but it isn't uncommon to use a dictionary (or hashmap). One of the benefits is that a hashmap has a constant lookup time, so speed does not really suffer here.
The dictionary solution you suggested is the best I know of. To optimize further, you could use `object.GetHashCode()` to get a hash for the object, and use that as the dictionary key. Should be fast unless you're talking about huge object trees (10s to 100s of thousands of objects).
How to deep clone interconnected objects in C#?
[ "", "c#", "clone", "" ]
In a while loop I have: ``` $row = mysql_fetch_array($result) ``` Held under `$row` are two arrays which when shown using `print_r` display the following: ``` Array ( [4] => Post Content [value] => Post Content ) Array ( [4] => Post Title [value] => Post Title ) ``` How can I choose "Post Content" and "Post Title" from the array without it being repeated in the while loop? The original version of this question confused the duplication with the problem. The issue is how to extract the second array `[value]` when they are both held under `$row`.
Final solution was found by slightly restructuring the original query as per Topbit's suggestion and the use of two while loops. ``` while($row_title = mysql_fetch_array($result_title)){ $row_body = mysql_fetch_array($result_body); // code } ``` Thank you to all those who suggested solutions.
You can also make sure the values won't be inserted twice using mysql\_fetch\_assoc() or mysql\_fetch\_row(). Thus: ``` $row = mysql_fetch_assoc($result); Array ( ['value'] => Post Content ) Array ( ['value'] => Post Title ) ```
How to select specific part of an array when two arrays are returned?
[ "", "php", "arrays", "" ]
I want to learn a text manipulation language and I have zeroed in on Python. Apart from text manipulation Python is also used for numerical applications, machine learning, AI, etc. My question is how do I approach the learning of Python language so that I am quickly able to write sophisticated text manipulation utilities. Apart from regular expressions in the context of "text manipulation" what language features are more important than others what modules are useful and so on.
Beyond regular expressions, here are some important features: * Generators, see [Generator Tricks for Systems Programmers](http://www.dabeaz.com/generators-uk/) by David Beazley for a lot of great examples of pipelining unlimited amounts of text through generators. For tools, I recommend looking at the following: * [Whoosh](http://whoosh.ca/), a pure Python search engine that will give you some nice real life examples of parsing text using [pyparsing](http://pyparsing.wikispaces.com/) and text processing in Python in general. * Ned Batchelder's nice [reviews of various Python parsing tools](http://nedbatchelder.com/text/python-parsers.html). * [mxTextTools](http://www.egenix.com/products/python/mxBase/mxTextTools/) * [Docutils](http://docutils.sourceforge.net/) source code for more advanced text processing in Python, including a sophisticated state machine. **Edit:** A good link specific to text processing in Python: * [Text Processing in Python](http://gnosis.cx/TPiP/) by David Mertz. I think the book is still available, although it's probably a bit dated now.
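A tiny generator pipeline in the spirit of Beazley's examples (names and data here are illustrative); each stage lazily consumes the previous one, so arbitrarily large inputs stream through in constant memory:

```python
import re

def strip_newlines(lines):
    # In real use `lines` would be a file object; any iterable of lines works.
    for line in lines:
        yield line.rstrip("\n")

def grep(pattern, lines):
    # Lazily yield only the lines matching `pattern`.
    pat = re.compile(pattern)
    for line in lines:
        if pat.search(line):
            yield line

raw = ["error: disk full\n", "ok\n", "error: timeout\n"]
errors = list(grep(r"^error", strip_newlines(raw)))
```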
There's a book [Text Processing in Python](http://gnosis.cx/TPiP/). I haven't read it myself yet, but I've read other articles by this author and generally they're good stuff.
Python and text manipulation
[ "", "python", "text", "" ]
Is everything under the GAC precompiled (ngened)? If so, then all of .NET is precompiled, so it's not possible for the CLR to optimize them at runtime? Like if you use List in your application then CLR will not be able to optimize the List itself, but only how it's used in your app? Doesn't this defeat the purpose of JIT, to get lots of optimizations at runtime? So effectively losing all the potential optimizations for the BCL?
No, the GAC is not automatically pre-JITted; however, the GAC is a prerequisite to pre-JITting. Actually, only a small minority of things are pre-JITted. Besides which - if the BCL were pre-JITted, then those optimisations *would have **already** been done* by NGEN, so the "losing all the potential optimizations" concern is a non-issue.
The GAC can contain non-ngen'd code (it must contain it as well as the native images when using ngen, since ngen'd images do not contain all the needed metadata). Ngen'd code requires the dll be installed in the GAC to function efficiently (technically you can do without, but the resulting name verification triggers a full read of your dll *anyway*, which is likely to make your startup time *worse*). Pre 3.5 SP1 the ngen compilation was definitely fractionally different from the runtime one; see [this article](http://www.codeguru.com/columns/experts/article.php/c4651) for some more details. I would imagine this still holds true for 3.5 SP1 since those issues are hard to solve. Since ngen only really gives you two big wins, you should consider whether either or both are significant enough in your scenario to justify the complexity and cost associated with their use. 1. The startup time for those dlls is much reduced * To be a really big win, though, all dlls loaded at startup need to be ngen'd (to avoid the overhead of loading the jit itself) 2. The native images can share memory space across multiple processes. * Pretty pointless if you only run one or two processes. I suggest [this article detailing some of the changes in the 2.0 ngen](http://msdn.microsoft.com/en-gb/magazine/cc163808.aspx) is a good read; it covers things like hard binding, which is a big improvement, and links in to the excellent general documentation on writing efficient managed code, though that has suffered from link rot - see [msdn Chapter 5](http://msdn.microsoft.com/en-us/library/ms998547.aspx) it refers to. (Note that that doc is old, but many of the themes are still valid.)
GAC vs JIT
[ "", "c#", ".net", "optimization", "gac", "" ]
I have some data that I would like to gzip, uuencode and then print to standard out. What I basically have is: ``` compressor = Popen("gzip", stdin = subprocess.PIPE, stdout = subprocess.PIPE) encoder = Popen(["uuencode", "dummy"], stdin = compressor.stdout) ``` The way I feed data to the compressor is through compressor.stdin.write(stuff). What I really need to do is to send an EOF to the compressor, and I have no idea how to do it. At some point, I tried compressor.stdin.close() but that doesn't work -- it works well when the compressor writes to a file directly, but in the case above, the process doesn't terminate and stalls on compressor.wait(). Suggestions? In this case, gzip is an example and I really need to do something with piping the output of one process to another. Note: The data I need to compress won't fit in memory, so communicate isn't really a good option here. Also, if I just run ``` compressor.communicate("Testing") ``` after the 2 lines above, it still hangs with the error ``` File "/usr/lib/python2.4/subprocess.py", line 1041, in communicate rlist, wlist, xlist = select.select(read_set, write_set, []) ```
I suspect the issue is with the order in which you open the pipes. UUEncode is funny in that it will whine when you launch it if there's no incoming pipe set up in just the right way (try launching the darn thing on its own in a Popen call, with just PIPE as the stdin and stdout, to see the explosion). Try this: ``` encoder = Popen(["uuencode", "dummy"], stdin=PIPE, stdout=PIPE) compressor = Popen("gzip", stdin=PIPE, stdout=encoder.stdin) compressor.communicate("UUencode me please") encoded_text = encoder.communicate()[0] print encoded_text begin 644 dummy F'XL(`%]^L$D``PL-3<U+SD])5<A-52C(24TL3@4`;2O+"!(````` ` end ``` You are right, btw... there is no way to send a generic EOF down a pipe. After all, each program really defines its own EOF. The way to do it is to close the pipe, as you were trying to do. EDIT: I should be clearer about uuencode. As a shell program, its default behaviour is to expect console input. If you run it without a "live" incoming pipe, it will block waiting for console input. By opening the encoder second, before you had sent material down the compressor pipe, the encoder was blocking waiting for you to start typing. Jerub was right in that there was something blocking.
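The close-the-pipe-to-signal-EOF idea can be demonstrated without gzip or uuencode; in this sketch a child Python interpreter stands in for the filter, so it is portable:

```python
import subprocess
import sys

# The child reads everything from stdin and upper-cases it; it only sees
# EOF -- and can therefore finish -- once we close its stdin.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
child.stdin.write(b"testing")
child.stdin.close()          # this is the EOF signal
output = child.stdout.read()
child.wait()
```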
This is not the sort of thing you should be doing directly in Python; there are eccentricities regarding how things work that make it a much better idea to do this with a shell. If you can just use subprocess.Popen("foo | bar", shell=True), then all the better. What might be happening is that gzip has not been able to output all of its input yet, and the process will not exit until its stdout writes have been finished. You can look at what system call a process is blocking on if you use strace. Use `ps auxwf` to discover which process is the gzip process, then use `strace -p $pidnum` to see what system call it is performing. Note that stdin is FD 0 and stdout is FD 1; you will probably see it reading or writing on those file descriptors.
Python Popen, closing streams and multiple processes
[ "", "python", "stream", "popen", "eof", "" ]
From a MySQL query I get data which I put into a dictionary "d": > d = {0: (datetime.timedelta(0, > 25200),), 1: (datetime.timedelta(0, > 25500),), 2: (datetime.timedelta(0, > 25800),), 3: (datetime.timedelta(0, > 26100),), 4: (datetime.timedelta(0, > 26400),), 5: (datetime.timedelta(0, > 26700),)} I have a list "m" with numbers like: > m = [3, 4, 1, 4, 7, 4] I'd like to test "m", and wherever there is the number "4", I'd like to receive another list "h" with the hours from "d", where the indexes of list "m" correspond to the keys of dictionary "d", so: m[1], m[3], m[5] would get me the hours assigned to d[1], d[3], d[5] in list "h": > h = [7:05:00, 7:15:00, 7:25:00] I'll appreciate your input on that...
I'm not entirely sure if this is what you're looking for, but I'll take a shot: ``` >>> indices = [index for index, i in enumerate(m) if i == 4] >>> h = [d[i][0] for i in indices] ``` Then you have to process the timedeltas as you want to.
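Putting it together with the question's data (Python 3 syntax; `str()` on a `timedelta` yields exactly the `H:MM:SS` strings the question asks for):

```python
import datetime

d = {0: (datetime.timedelta(0, 25200),), 1: (datetime.timedelta(0, 25500),),
     2: (datetime.timedelta(0, 25800),), 3: (datetime.timedelta(0, 26100),),
     4: (datetime.timedelta(0, 26400),), 5: (datetime.timedelta(0, 26700),)}
m = [3, 4, 1, 4, 7, 4]

# Indexes of m holding the value 4 select the corresponding keys of d.
h = [str(d[i][0]) for i, value in enumerate(m) if value == 4]
```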
``` deltas = [str(d[i][0]) for i, j in enumerate(m) if j == 4] ``` produces a list of the delta representations as strings.
Python - Acquire value from dictionary depending on location/index in list
[ "", "python", "list", "dictionary", "" ]
I have written a small application that can restore a database (C# and SQL2005), but after I have accessed the database, I can't drop it - it says that it is in use. I guess it has to do with SqlConnection pooling, but can I force it to release the database?
Specify 'Pooling=False' in the connection string.
Dispose your SqlConnection object.
How to force SQLconnection to release a database?
[ "", "c#", "dispose", "sqlconnection", "" ]
I am a long-time PHP user when it comes to web applications and am mostly comfortable with it. However, I have one semi-large project whose maintenance / extensibility has reached the end of its life cycle. I was weighing different PHP frameworks (there were none when the project originated), since that is the way to go for this project, and I came to the conclusion the best option would be to do it with Zend Framework. * Symfony seemed too complicated (I don't like setting up the database model like that) * CakePHP seemed murky * CodeIgniter I liked at first, but then it seemed to me it is more like Zend with fewer features and no Zend behind it * Zend I like in that I can use only what I want without being tied into a specific directory structure, and of course there is Zend behind it. Performance is what potentially bothers me. Now, after this little rationale behind choosing Zend, there are several things I see as deal breakers when choosing a framework. * I haven't used an ORM in the past because I am more than comfortable writing SQL directly, so I still need to be convinced to use one * Not too much abstraction over the guts * Flexible directory structure As long as this project is going to be written anew, I might just as well write it in Python/Django, since I am quite familiar with Python, but not with Django. So, I would like to know if there is someone who has worked with both Zend Framework and Django and can outline a few key differences? I must also say that this project is made as a standard site/admin dual project. That is, it is basically two sites in one. One is the frontend for users, the other is for data administration in the backend. I must and will build the backend on my own; some scaffolding methods would be cool, but fully automatic scaffolding is as good as nothing in this case. I am still not quite sure how one approaches building basically two applications within the directory structure of, what is supposed to be, one application.
Do you just make two separate applications and rely on the URL scheme from there on to separate them? www.example.com and all of the /\* being one application and www.example.com/admin/\* being a second application. Sorry for the long question(s), but as you can see - everything is pretty much related to one problem - I need to start a project anew; it has an already established database+data which I can remodel, but I would like to keep that kind of work to a minimum. --- Ok, thank you everybody - looks like I'll try and implement this stuff with Zend, as it gives me the most flexibility out of the package (I did tests with both), and we'll see how it goes.
Zend doesn't include a real ORM. It provides some helper classes, but you are mostly on your own in modelling your database and writing your own SQL queries. So you would have full freedom there. As Deresh says, Zend is modularized so you can pick and choose the parts you want. Personally I use Zend together with DB\_DataObjects from PEAR as my ORM. It can auto-generate your skeleton code. It is a very simple solution for handling simple queries, but I can always write custom SQL where necessary. Regarding separation of the admin and frontend, I would suggest putting them on different subdomains, e.g.: admin.yoursite.com (backend) and www.yoursite.com (frontend). You can probably make it work with both of them on the same domain, but it is not really a use case that is supported by Django or Zend.
Well, Django is more of a full-stack framework than Zend; it's more similar to symfony than Zend. Django can reverse engineer your database into ORM classes, and has a CLI tool that helps you do stuff (admin and model generators, project skeleton generation, etc.). Zend is more of a component framework. It has its own MVC and ORM implementations, but you need to wire that stuff up yourself. Its approach is to use only the stuff that you need without imposing a directory structure. In your case Django will have some advantages because of its great admin generator module, and Django itself is pretty fast (faster than most PHP frameworks). I'm personally using symfony with Zend Framework for some stuff I need (Mail, OpenID, Lucene search), but have played a bit with Django and I like it.
What are the real world differences between Zend Framework and Django?
[ "", "php", "django", "zend-framework", "" ]
### My table: ``` ID NAME COST PAR P_val S_val 1 X 5 0 1 0 1 y 5 0 2 0 1 z 5 0 0 5 2 XY 4 0 4 4 ``` I need to update the `PAR` field with the `SUM(S_val)`, grouped by ID: * For ID 1 `PAR` should be `SUM(S_val) WHERE ID=1` * For ID 2 `PAR` should be `SUM(S_val) WHERE ID=2` ### Expected output: ``` ID NAME COST PAR P_val S_val 1 X 5 5 1 0 1 y 5 5 2 0 1 z 5 5 0 5 2 XY 4 4 4 4 ``` How can I `UPDATE` the `PAR` value? ### My code: ``` UPDATE Table_Name SET PAR = (SELECT SUM(S_val) FROM Table_Name WHERE ID=1) FROM Table_Name ``` This does not work.
Unfortunately, in `MySQL` you cannot update a table while selecting from that same table in a subquery (it fails with error 1093). You'll need to create a function as a workaround: ``` DELIMITER $$ CREATE FUNCTION `fn_get_sum`(_id INT) RETURNS int(11) READS SQL DATA BEGIN DECLARE r INT; SELECT SUM(s_val) INTO r FROM table_name WHERE id = _id; RETURN r; END $$ DELIMITER ; UPDATE table_name SET par = fn_get_sum(id) ```
Try: ``` UPDATE Table_NAme SET PAR= summedValue FROM TAble_NAME t JOIN ( SELECT ID, SUM(S_val) as summedvalue FROM TABLE_NAME GROUP BY ID ) s on t.ID = s.ID ```
UPDATE with SUM() in MySQL
[ "", "sql", "mysql", "" ]
I have looked for a **good** example of a **Builder pattern** (in C#), but cannot find one, either because I don't understand the Builder pattern or I am trying to do something that was never intended. For example, if I have an abstract automobile and abstract builder methods to create car parts, I should be able to send all 30 of my choices to the Director, have it build the pieces I need, then build my automobile. Regardless of which car, truck, semi, etc. is produced, I should be able to "drive" it in exactly the same way. The first problem is that most examples hard code property values into the concrete parts, which I really think should come from a database. I thought the idea was to send my choices to the Director (from a data source) and have the builder create a customized product based on my data. The second problem is that I want the builder methods to actually create the parts then assign them to the product, not pass strings but real strongly typed product parts. For example, I want to create a form on the fly by having a Builder manufacture form fields for me, including a label, an input section, validation, etc. This way I can read the object from my ORM, check out the object's metadata, pass this to my Builder and add the newly created user control result to my web form. However, every Builder example I find only has hard coded data instead of passing choices from the main code to the Builder and kicking out a customized product. Everything seems to be a big static case statement. For example, if I have three parameters with 10 choices each, I don't want to build 30 concrete Builder methods, I want to create only enough to manufacture the properties my product requires, which may be only three. I am tempted to have the Director exist in the main code only. 
There should be a way to automatically determine which concrete builder method to call similar to polymorphism and method overloads (although that is a very bad example) instead of using a case statement within the pattern. (Every time I need to add a new product type, I will need to modify the existing Director, which is bad).
I am going to refer to the C# example in the Wikipedia article [here](http://en.wikipedia.org/wiki/Builder_pattern). > First problem is most examples hard code property values into the concrete parts, which I really think should come from a database. I thought the idea was to send my choices to the Director (from a data source) and have the builder create a customized product based on my data. In this case you would have a class implementing PizzaBuilder that knows how to retrieve data from a database. You can do it several ways. One would be to make a HawaiianPizzaBuilder. When the class initializes, it queries the database for a Hawaiian Pizza and retrieves the row. Then when the various Build(x) methods are called it would set the properties to the corresponding field of the retrieved database row. Another would be to just make a PizzaDatabaseBuilder and make sure that when you initialize the class you pass it the ID of the row you need for that type of pizza. For example, instead of ``` waiter.PizzaBuilder = new HawaiianPizzaBuilder(); ``` You use ``` waiter.PizzaBuilder = new PizzaDatabaseBuilder("Hawaiian"); ``` > Second problem is I want the builder methods to actually create the parts then assign them to the product, not pass strings but real strongly typed product parts. Should not be an issue. What you need is another Factory/Builder-type pattern to initialize the fields of the Pizza. For example, instead of ``` public override void BuildDough() { pizza.Dough = "pan baked"; } ``` you would do something like ``` public override void BuildDough() { pizza.Dough = new DoughBuilder("pan baked"); } ``` or ``` public override void BuildDough() { pizza.Dough = new PanBakedDoughBuilder(); } ``` DoughBuilder can go to another table in your database to properly fill out a PizzaDough class.
Mostly the call of a BuilderPattern looks like this: ``` Car car = new CarBuilder().withDoors(4).withColor("red").withABS(true).build(); ```
Design Pattern: Builder
[ "", "c#", "design-patterns", "interface-builder", "builder", "" ]
Is C# able to define macros as is done in the C programming language with pre-processor statements? I would like to simplify regular typing of certain repeating statements such as the following: ``` Console.WriteLine("foo"); ```
No, C# does not support preprocessor macros like C. Visual Studio on the other hand has [snippets](https://learn.microsoft.com/en-us/visualstudio/ide/visual-csharp-code-snippets?view=vs-2017). Visual Studio's snippets are a feature of the IDE and are expanded in the editor rather than replaced in the code on compilation by a preprocessor.
You can use a C preprocessor (like mcpp) and rig it into your .csproj file. Then you change "build action" on your source file from Compile to Preprocess or whatever you call it. Just add **BeforeBuild** to your .csproj like this: ``` <Target Name="BeforeBuild" Inputs="@(Preprocess)" Outputs="@(Preprocess->'%(Filename)_P.cs')"> <Exec Command="..\Bin\cpp.exe @(Preprocess) -P -o %(RelativeDir)%(Filename)_P.cs" /> <CreateItem Include="@(Preprocess->'%(RelativeDir)%(Filename)_P.cs')"> <Output TaskParameter="Include" ItemName="Compile" /> </CreateItem> </Target> ``` You may have to manually change Compile to Preprocess on at least one file (in a text editor) - then the "Preprocess" option should be available for selection in Visual Studio. I know that macros are heavily overused and misused but removing them completely is equally bad if not worse. A classic example of macro usage would be **NotifyPropertyChanged**. Every programmer who had to rewrite this code by hand thousands of times knows how painful it is without macros.
C# Macro definitions in Preprocessor
[ "", "c#", "macros", "c-preprocessor", "" ]
I'm writing a module and want to have a unified exception hierarchy for the exceptions that it can raise (e.g. inheriting from a `FooError` abstract class for all the `foo` module's specific exceptions). This allows users of the module to catch those particular exceptions and handle them distinctly, if needed. But many of the exceptions raised from the module are raised because of some other exception; e.g. failing at some task because of an OSError on a file. What I need is to **“wrap” the exception caught such that it has a different type and message**, so that information is available further up the propagation hierarchy by whatever catches the exception. But I don't want to lose the existing type, message, and stack trace; that's all useful information for someone trying to debug the problem. A top-level exception handler is no good, since I'm trying to decorate the exception before it makes its way further up the propagation stack, and the top-level handler is too late. This is partly solved by deriving my module `foo`'s specific exception types from the existing type (e.g. `class FooPermissionError(OSError, FooError)`), but that doesn't make it any easier to wrap the existing exception instance in a new type, nor modify the message. Python's [PEP 3134](http://www.python.org/dev/peps/pep-3134/) “Exception Chaining and Embedded Tracebacks” discusses a change accepted in Python 3.0 for “chaining” exception objects, to indicate that a new exception was raised during the handling of an existing exception. What I'm trying to do is related: I need it also working in earlier Python versions, and I need it not for chaining, but only for polymorphism. What is the right way to do this?
**Python 3** introduced **exception chaining** (as described in [PEP 3134](http://www.python.org/dev/peps/pep-3134/)). This allows, when raising an exception, to cite an existing exception as the “cause”: ``` try: frobnicate() except KeyError as exc: raise ValueError("Bad grape") from exc ``` The caught exception (`exc`, a KeyError) thereby becomes part of (is the “cause of”) the new exception, a ValueError. The “cause” is available to whatever code catches the new exception. By using this feature, the `__cause__` attribute is set. The built-in exception handler also [knows how to report the exception's “cause” and “context”](https://www.python.org/dev/peps/pep-3134/#enhanced-reporting) along with the traceback. --- In **Python 2**, it appears this use case has no good answer (as described by [Ian Bicking](http://blog.ianbicking.org/2007/09/12/re-raising-exceptions/) and [Ned Batchelder](http://nedbatchelder.com/blog/200711/rethrowing_exceptions_in_python.html)). Bummer.
You can use sys.exc\_info() to get the traceback, and raise your new exception with said traceback (as the PEP mentions). If you want to preserve the old type and message, you can do so on the exception, but that's only useful if whatever catches your exception looks for it. For example ``` import sys def failure(): try: 1/0 except ZeroDivisionError, e: type, value, traceback = sys.exc_info() raise ValueError, ("You did something wrong!", type, value), traceback ``` Of course, this is really not that useful. If it was, we wouldn't need that PEP. I'd not recommend doing it.
Re-raise exception with a different type and message, preserving existing information
[ "", "python", "exception", "polymorphism", "" ]
I've got 2 entities in JPA: Entry and Comment. Entry contains two collections of Comment objects. ``` @Entity public class Entry { ... @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL) @IndexColumn(base = 1, name = "dnr") private List<Comment> descriptionComments = new ArrayList<Comment>(); @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL) @IndexColumn(base = 1, name = "pmnr") private List<Comment> postMortemComments = new ArrayList<Comment>(); ... } ``` To store such objects, JPA+Hibernate creates an "Entry" table, a "Comment" table and a SINGLE "Entry\_Comment" table: `create table Entry_Comment (Entry_id integer not null, postMortemComments_id integer not null, pmnr integer not null, descriptionComments_id integer not null, dnr integer not null, primary key (Entry_id, dnr), unique (descriptionComments_id), unique (postMortemComments_id))` Storing objects fails, as `descriptionComments_id` and `postMortemComments_id` cannot be "not null" at the same time. How do I store an object containing two collections of the same type using JPA+Hibernate?
This is one of the many Hibernate bugs ([HHH-3410](http://opensource.atlassian.com/projects/hibernate/browse/HHH-3410) to be precise). I've managed to fix it by adding `@JoinTable` annotations to `@OneToMany` relationships, each having its own table name. In your case it would look like this: ``` @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL) @JoinTable(name="entity_descriptioncomments") @IndexColumn(base = 1, name = "dnr") private List<Comment> descriptionComments = new ArrayList<Comment>(); @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL) @JoinTable(name="entity_postmortemcomments") @IndexColumn(base = 1, name = "pmnr") private List<Comment> postMortemComments = new ArrayList<Comment>(); ``` **Note**: you must add `@IndexColumn` annotation as well (because of the other Hibernate issue with multiple EAGER bags: [HHH-1718](https://hibernate.onjira.com/browse/HHH-1718)/[EJB-346](https://hibernate.onjira.com/browse/EJB-346)).
To store 2 collections like that in JPA with *DataNucleus* (<http://www.datanucleus.org>) you would do exactly as you've done. You have no `@JoinTable` annotation, hence an FK should be placed in `Comment` for each of the collections. If you actually do have `@JoinTable` somewhere (or the XML equivalent) then setting the names of the respective join tables (one for each collection) would work too (so they each have their own join table). Having a shared join table between 2 collections is possible in DataNucleus too, but that's not standard JPA; it's a vendor extension. How that maps to Hibernate I've no idea, but then this is JPA so it should be consistent, since that's the point of having a spec ;-)
How to have 2 collections of the same type in JPA?
[ "", "java", "hibernate", "jpa", "collections", "persistence", "" ]