I reloaded an old project, and tried to run it only to run into the Binding Failure MDA Assistant: ``` BindingFailure was detected Message: The assembly with display name 'SoapTest.XmlSerializers' failed to load in the 'LoadFrom' binding context of the AppDomain with ID 1. The cause of the failure was: System.IO.FileNotFoundException: Could not load file or assembly 'SoapTest.XmlSerializers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified. ``` After googling this, the only solution I could find was to turn off the MDA. This works, but I would prefer to solve the underlying problem (SoapTest.XmlSerializers.dll not being in the bin\Debug directory). How can I fix this? What creates the SoapTest.XmlSerializers.dll, and why is it not being created on a Rebuild? Yes, I have tried updating the Web Reference (this doesn't help). No I don't want to upgrade to WCF Soap Services.
Actually, yes: I went to the Properties window and set the "Generate serialization assembly" option to "On". This ensures the serialization assembly is generated every time you build the project. I guess that error happens when you edit the assembly info of the class library you are referencing without rebuilding it.
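For reference, that IDE option corresponds to an MSBuild property in the project file; a sketch of what the `.csproj` entry looks like (property name from memory; verify against your own project file):

```xml
<PropertyGroup>
  <!-- Valid values: Auto (default), On, Off.
       "On" forces the *.XmlSerializers.dll to be generated on every build. -->
  <GenerateSerializationAssemblies>On</GenerateSerializationAssemblies>
</PropertyGroup>
```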
In VS.NET you can go to the project settings and try different settings for XML serialization; I believe there is a setting for "Auto" that you may want to disable.
How do I fix a Binding Failure in Soap?
[ "", "c#", ".net", "soap", "" ]
We're reviewing one of the company's system's exception handling and found a couple of interesting things. Most of the code blocks (if not all of them) are inside a try/catch block, and inside the catch block a new BaseApplicationException is being thrown - which seems to be coming from the Enterprise Libraries. I'm in a bit of a trouble here as I don't see the benefits off doing this. (throwing another exception anytime one occurs) One of the developers who's been using the system for a while said it's because that class's in charge of publishing the exception (sending emails and stuff like that) but he wasn't too sure about it. After spending some time going through the code I'm quite confident to say, that's all it does is collecting information about the environment and than publishing it. My question is: - Is it reasonable to wrap all the code inside try { } catch { } blocks and than throw a new exception? And if it is, why? What's the benefit? My personal opinion is that it would be much easier to use an HttpModule, sign up for the Error event of the Application event, and do what's necessary inside the module. If we'd go down this road, would we miss something? Any drawbacks? Your opinion's much appreciated.
**Never**1 `catch (Exception ex)`. Period2. There is no way you can handle all the different kinds of errors that you may catch. Never3 catch an Exception-derived type if you can't handle it or provide additional information (to be used by subsequent exception handlers). Displaying an error message is **not** the same as *handling* the error. A couple of reasons for this, off the top of my head: * Catching and rethrowing is expensive * You'll end up losing the stack trace * You'll have a low signal-to-noise ratio in your code If you know how to handle a **specific** exception (and reset the application to pre-error state), catch it. (That's why it's called *exception* ***handling***.) To handle exceptions that are not caught, listen for the appropriate events. When doing WinForms, you'll need to listen for `System.AppDomain.CurrentDomain.UnhandledException`, and - if you're doing `Threading` - `System.Windows.Forms.Application.ThreadException`. For web apps, there are similar mechanisms (`System.Web.HttpApplication.Error`). As for wrapping framework exceptions in your application (non-)specific exceptions (i.e. `throw new MyBaseException(ex);`): Utterly pointless, and a bad smell.4 --- ***Edit*** 1 *Never* is a very harsh word, especially when it comes to engineering, as @Chris pointed out in the comments. I'll admit to being high on principles when I first wrote this answer. 2,3 See 1. 4 If you don't bring anything new to the table, I still stand by this. If you have caught `Exception ex` as part of a method that you know could fail in any number of ways, I believe that the current method should reflect that in its signature. And as you know, exceptions are not part of the method signature.
If I am reading the question correctly, I would say that implementing a try / catch which intercepts exceptions (you don't mention - is it catching all exceptions, or just a specific one?) and throws a different exception is generally a bad thing. ## Disadvantages: At the very least you will lose stack trace information - the stack you will see will only extend to the method in which the new exception is thrown - you potentially lose some good debug info here. If you are catching Exception, you are running the risk of masking critical exceptions, like OutOfMemory or StackOverflow, with a less critical exception, and thus leaving the process running where perhaps it should have been torn down. ## Possible Advantages: In some very specific cases you could take an exception which doesn't have much debug value (like some exceptions coming back from a database) and wrap it with an exception which adds more context, e.g. the id of the object you were dealing with. However, in almost all cases this is a bad smell and should be used with caution. Generally you should only catch an exception when there is something realistic that you can do in that location, i.e. recovering, rolling back, going to plan B, etc. If there is nothing you can do about it, just allow it to pass up the chain. You should only catch and throw a new exception if there is specific and useful data available in that location which can augment the original exception and hence aid debugging.
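The "wrap only when you add context" advice can be illustrated with a short sketch. The question is about C#, but the idea is language-neutral; here it is in Python, where `raise ... from e` keeps the original exception chained to the wrapper instead of discarding it (the class and function names below are made up purely for illustration):

```python
class DataAccessError(Exception):
    """Wrapper that adds context (e.g. the id of the object involved)."""

def load_record(record_id):
    try:
        # Stand-in for a low-level failure (e.g. a database lookup miss).
        raise KeyError(record_id)
    except KeyError as e:
        # Re-raise with added context, keeping the original exception as
        # the cause so the full chain is still available for debugging.
        raise DataAccessError(f"failed loading record {record_id}") from e

try:
    load_record(42)
except DataAccessError as wrapped:
    # The original exception is preserved, not lost.
    assert isinstance(wrapped.__cause__, KeyError)
```

The contrast with the anti-pattern described above is that a bare `raise DataAccessError(...)` with no cause would throw away exactly the debug information the original exception carried.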
Exception handling using an HttpModule
[ "", "c#", "asp.net", "exception", "httpmodule", "" ]
[open\_basedir](http://www.php.net/features.safe-mode) limits the files that can be opened by PHP within a directory-tree. I am storing several class libraries and configuration files outside of my web root directory. This way the web server does not make them publicly accessible. However when I try to include them from my application I get an open\_basedir restriction error like this: > Warning: realpath() > [function.realpath]: open\_basedir > restriction in effect. > File(/var/www/vhosts/domain.tld/zend/application) > is not within the allowed path(s): > (/var/www/vhosts/domain.tld/httpdocs:/tmp) > in > /var/www/vhosts/domain.tld/httpdocs/index.php > on line 5 My web root is here: ``` /var/www/vhosts/domain.tld/httpdocs ``` My libraries and configuration directory are here: ``` /var/www/vhosts/domain.tld/zend ``` What would be the best workaround to relax the open\_basedir restriction so that the directory tree under the domain folder becomes available to my application? I have a number of domains that I want to do this with, and I'm also obviously wary of creating security vulnerabilities. Note: I am using CentOS, Apache, Plesk, and I have root ssh access to the server. And though this doesn't apply to Zend Framework directly, I am using it in this instance. So here is the inclusion from Zend's bootstrap: ``` define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../zend/application/')); set_include_path(APPLICATION_PATH . '/../zend/library' . PATH_SEPARATOR . get_include_path()); ```
You can also do this easily on a per-directory basis using the Apache (assuming this is your web server) configuration file (e.g. httpd.conf) ``` <Directory /var/www/vhosts/domain.tld/httpdocs> php_admin_value open_basedir "/var/www/vhosts/domain.tld/httpdocs:/var/www/vhosts/domain.tld/zend" </Directory> ``` you can also completely remove the restriction with ``` <Directory /var/www/vhosts/domain.tld/httpdocs> php_admin_value open_basedir none </Directory> ```
Add the paths you need access to (/var/www/vhosts/domain.tld/zend) to your open\_basedir directive (you can specify several paths using the path separator: ':', or ';' on Windows). Note that the values in open\_basedir are prefixes, which means that anything under /var/www/vhosts/domain.tld/zend will be accessible.
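For example, the resulting directive would look like this (shown as a php.ini-style line; under Plesk the exact file this lives in varies per vhost, so treat the location as an assumption):

```ini
; ':' separates paths on Linux, ';' on Windows; each entry acts as a prefix.
open_basedir = "/var/www/vhosts/domain.tld/httpdocs:/var/www/vhosts/domain.tld/zend:/tmp"
```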
How can I relax PHP's open_basedir restriction?
[ "", "php", "security", "apache", "" ]
I have a set of Word documents which I want to publish using a PHP tool I've written. I copy and paste the Word documents into a text box and then save them into MySQL using the PHP program. The problem I have arises from all the non-standard characters that Word documents have, like curly quotes and ellipses ("..."). What I do at the moment is manually search and replace these kinds of things (and also foreign symbols such as e-acute) with either plain text or HTML entities (`&eacute;` etc.) Is there a function in PHP I can call that will take the output of a Word document and convert everything that should be entities into entities, and other symbols that don't display properly in Firefox into symbols that do display. Thanks!
A better solution would be to ensure that your database is set up to support UTF-8 characters. The additional characters available in the extended set should cover all the "non-standard" characters that you're talking about. Otherwise, if you really must convert these characters into HTML entities, use [htmlentities()](http://www.php.net/htmlentities).
This has served me well in the past: ``` $str = mb_convert_encoding($str, 'HTML-ENTITIES', 'UTF-8') ```
Converting a Word document into usable HTML in PHP
[ "", "php", "ms-word", "" ]
We have PHP 5.2.6 deployed to c:\php and in that folder there is the php.ini file. On Windows, can a website override these settings similar to the way that apache has .htaccess? e.g. ``` DirectoryIndex index.php index.html <IfModule mod_php5.c> php_flag magic_quotes_gpc off php_flag register_globals off </IfModule> <IfModule mod_php4.c> php_flag magic_quotes_gpc off php_flag register_globals off </IfModule> ``` ***Update:*** I was aware of ini\_set() but wondered if there was a declarative way to do this in a configuration file in the website rather than in script.
I would recommend doing all you can to avoid changing `register_globals` to on, as it's a major security hole. But you can try using `ini_set()` to change the settings within your PHP code, although some settings cannot be changed once PHP has started running. (These are somewhat server dependent, I believe.)
You can override the directives in the php.ini file several ways, but not all directives can be changed by each method. See the [php.ini directives](http://www.php.net/manual/en/ini.php) page in the manual for a list of the directives and the methods that will work on each one. The last column in the table lists the methods that will work on that particular directive. In increasing level of access: * `PHP_INI_USER` - Can be set in user scripts with `ini_set()` (or any higher method) * `PHP_INI_PERDIR` - Can be set using the .htaccess file with `php_value` for string values or `php_flag` for binary values (or any higher method) * `PHP_INI_SYSTEM` - Can be set using php.ini or httpd.conf only (both require access to the server's configuration files) * `PHP_INI_ALL` - Can be set using any of the above methods
Can php.ini settings be overridden by a website using PHP + IIS6?
[ "", "php", "windows", "iis", "configuration", "" ]
I am writing a diagnostic page for SiteScope, and one area we need to test is whether the connection to the file/media assets is accessible from the web server. One way I think I can do this is to load the image via code-behind and test whether the IIS status message is 200. So basically I should be able to navigate within the site to a folder like this: /media/1/image.jpg and see if it returns 200... if not, throw an exception. I am struggling to figure out how to write this code. Any help is greatly appreciated. Thanks
Just use HEAD. No need to download the entire image if you don't need it. Here's some boilerplate code. ``` HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("url"); request.Method = "HEAD"; bool exists; try { request.GetResponse(); exists = true; } catch { exists = false; } ```
You might want to also check that you got an OK status code (ie HTTP 200) and that the mime type from the response object matches what you're expecting. You could extend that along the lines of, ``` public bool doesImageExistRemotely(string uriToImage, string mimeType) { HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uriToImage); request.Method = "HEAD"; try { HttpWebResponse response = (HttpWebResponse)request.GetResponse(); if (response.StatusCode == HttpStatusCode.OK && response.ContentType == mimeType) { return true; } else { return false; } } catch { return false; } } ```
Test to see if an image exists in C#
[ "", "c#", ".net", "iis", "" ]
Is there any good reason that an empty set of round brackets (parentheses) isn't valid for calling the default constructor in C++? ``` MyObject object; // ok - default ctor MyObject object(blah); // ok MyObject object(); // error ``` I seem to type "()" automatically every time. Is there a good reason this isn't allowed?
**Most vexing parse** This is related to what is known as "C++'s most vexing parse". Basically, anything that can be interpreted by the compiler as a function declaration will be interpreted as a function declaration. Another instance of the same problem: ``` std::ifstream ifs("file.txt"); std::vector<T> v(std::istream_iterator<T>(ifs), std::istream_iterator<T>()); ``` `v` is interpreted as a declaration of function with 2 parameters. The workaround is to add another pair of parentheses: ``` std::vector<T> v((std::istream_iterator<T>(ifs)), std::istream_iterator<T>()); ``` Or, if you have C++11 and list-initialization (also known as uniform initialization) available: ``` std::vector<T> v{std::istream_iterator<T>{ifs}, std::istream_iterator<T>{}}; ``` With this, there is no way it could be interpreted as a function declaration.
Because it is treated as the declaration for a function: ``` int MyFunction(); // clearly a function MyObject object(); // also a function declaration ```
Why can't the default constructor be called with empty brackets?
[ "", "c++", "constructor", "default-constructor", "c++-faq", "most-vexing-parse", "" ]
I have data that looks like > ``` > CUSTOMER, CUSTOMER_ID, PRODUCT > ABC INC 1 XYX > ABC INC 1 ZZZ > DEF CO 2 XYX > DEF CO 2 ZZZ > DEF CO 2 WWW > GHI LLC 3 ZYX > ``` I'd like to write a query that'd make the data look like this: > ``` > CUSTOMER, CUSTOMER_ID, PRODUCTS > ABC INC 1 XYX, ZZZ > DEF CO 2 XYX, ZZZ, WWW > GHI LLC 3 ZYX > ``` Using Oracle 10g, if it helps. I saw something that would work using MySQL, but I need a plain SQL or Oracle equivalent. I've also seen examples of stored procs that could be made; however, I cannot use a stored proc with the product I'm using. Here's how it'd work in MySQL if I were using it: ``` SELECT CUSTOMER, CUSTOMER_ID, GROUP_CONCAT( PRODUCT ) FROM MAGIC_TABLE GROUP BY CUSTOMER, CUSTOMER_ID ``` Thank you.
[This link](http://www.oracle-base.com/articles/10g/StringAggregationTechniques.php) refers to a number of examples of different ways to do this on Oracle. See if there's something there that you have permissions on your database to do.
I think LISTAGG is the best aggregate group by function to use in this situation: ``` SELECT CUSTOMER, CUSTOMER_ID, LISTAGG(PRODUCT, ', ') WITHIN GROUP (ORDER BY PRODUCT) FROM SOME_TABLE GROUP BY CUSTOMER, CUSTOMER_ID ORDER BY 1, 2 ```
SQL - Multiple Values comma separated when using GROUP BY
[ "", "sql", "oracle", "oracle10g", "" ]
So what I'm looking for here is something like PHP's [print\_r](https://www.php.net/print_r) function. This is so I can debug my scripts by seeing the state of the object in question.
You are really mixing together two different things. Use [`dir()`](https://docs.python.org/3/library/functions.html#dir), [`vars()`](https://docs.python.org/3/library/functions.html#vars) or the [`inspect`](https://docs.python.org/3/library/inspect.html) module to get what you are interested in (I use `__builtins__` as an example; you can use any object instead). ``` >>> l = dir(__builtins__) >>> d = __builtins__.__dict__ ``` Print that dictionary however fancy you like: ``` >>> print l ['ArithmeticError', 'AssertionError', 'AttributeError',... ``` or ``` >>> from pprint import pprint >>> pprint(l) ['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException', 'DeprecationWarning', ... >>> pprint(d, indent=2) { 'ArithmeticError': <type 'exceptions.ArithmeticError'>, 'AssertionError': <type 'exceptions.AssertionError'>, 'AttributeError': <type 'exceptions.AttributeError'>, ... '_': [ 'ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException', 'DeprecationWarning', ... ``` Pretty printing is also available in the interactive debugger as a command: ``` (Pdb) pp vars() {'__builtins__': {'ArithmeticError': <type 'exceptions.ArithmeticError'>, 'AssertionError': <type 'exceptions.AssertionError'>, 'AttributeError': <type 'exceptions.AttributeError'>, 'BaseException': <type 'exceptions.BaseException'>, 'BufferError': <type 'exceptions.BufferError'>, ... 'zip': <built-in function zip>}, '__file__': 'pass.py', '__name__': '__main__'} ```
You want `vars()` mixed with `pprint()`: ``` from pprint import pprint pprint(vars(your_object)) ```
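As a quick illustration of what the `pprint(vars(...))` combination produces (using a throwaway class, not anything from the question):

```python
from pprint import pformat

class Point:
    """Minimal object with two instance attributes."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

# vars(p) returns the instance's __dict__; pformat renders it readably
# (pprint would print it directly instead of returning a string).
p = Point(1, 2)
print(pformat(vars(p)))  # {'x': 1, 'y': 2}
```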
Is there a built-in function to print all the current properties and values of an object?
[ "", "python", "debugging", "introspection", "pretty-print", "python-datamodel", "" ]
In PHP, I am able to use a normal function as a variable without problem, but I haven't figured out how to use a static method. Am I just missing the right syntax, or is this not possible? (EDIT: the first suggested answer does not seem to work. I've extended my example to show the errors returned.) ``` function foo1($a,$b) { return $a/$b; } class Bar { static function foo2($a,$b) { return $a/$b; } public function UseReferences() { // WORKS FINE: $fn = foo1; print $fn(1,1); // WORKS FINE: print self::foo2(2,1); print Bar::foo2(3,1); // DOES NOT WORK ... error: Undefined class constant 'foo2' //$fn = self::foo2; //print $fn(4,1); // DOES NOT WORK ... error: Call to undefined function self::foo2() //$fn = 'self::foo2'; //print $fn(5,1); // DOES NOT WORK ... error: Call to undefined function Bar::foo2() //$fn = 'Bar::foo2'; //print $fn(5,1); } } $x = new Bar(); $x->UseReferences(); ``` (I am using PHP v5.2.6 -- does the answer change depending on version too?)
PHP handles callbacks as strings, not function pointers. The reason your first test works is because the PHP interpreter treats **foo1** as a string. If you have E\_NOTICE level errors enabled, you should see proof of that: "Use of undefined constant foo1 - assumed 'foo1'". You can't call static methods this way, unfortunately. The scope (class) is relevant, so you need to use call\_user\_func instead. ``` <?php function foo1($a,$b) { return $a/$b; } class Bar { public static function foo2($a,$b) { return $a/$b; } public function UseReferences() { $fn = 'foo1'; echo $fn(6,3); $fn = array( 'self', 'foo2' ); print call_user_func( $fn, 6, 2 ); } } $b = new Bar; $b->UseReferences(); ```
In php 5.2, you can use a variable as the method name in a static call, but to use a variable as the class name, you'll have to use callbacks as described by BaileyP. However, from php 5.3, you **can** use a variable as the class name in a static call. So: ``` class Bar { public static function foo2($a,$b) { return $a/$b; } public function UseReferences() { $method = 'foo2'; print Bar::$method(6,2); // works in php 5.2.6 $class = 'Bar'; print $class::$method(6,2); // works in php 5.3 } } $b = new Bar; $b->UseReferences(); ?> ```
Reference to static method in PHP?
[ "", "php", "syntax", "" ]
I want to use the DateTime.TryParse method to get the datetime value of a string into a Nullable. But when I try this: ``` DateTime? d; bool success = DateTime.TryParse("some date text", out (DateTime)d); ``` the compiler tells me > 'out' argument is not classified as a variable Not sure what I need to do here. I've also tried: ``` out (DateTime)d.Value ``` and that doesn't work either. Any ideas?
``` DateTime? d=null; DateTime d2; bool success = DateTime.TryParse("some date text", out d2); if (success) d=d2; ``` (There might be more elegant solutions, but why don't you simply do something like the above?)
As Jason says, you can create a variable of the right type and pass that. You might want to encapsulate it in your own method: ``` public static DateTime? TryParse(string text) { DateTime date; if (DateTime.TryParse(text, out date)) { return date; } else { return null; } } ``` ... or if you like the conditional operator: ``` public static DateTime? TryParse(string text) { DateTime date; return DateTime.TryParse(text, out date) ? date : (DateTime?) null; } ``` Or in C# 7: ``` public static DateTime? TryParse(string text) => DateTime.TryParse(text, out var date) ? date : (DateTime?) null; ```
How do I use DateTime.TryParse with a Nullable<DateTime>?
[ "", "c#", "datetime", "nullable", "" ]
This may seem a bit trivial, but I have not been able to figure it out. I am opening up a SPSite and then trying to open up a SPWeb under that SPSite. This is working fine on the VPC, which has the same Site Collection/Site hierarchy, but on production, I get an exception telling me that the URL is invalid when I try the SPSite.OpenWeb(webUrl);. I have verified that the URLs are correct. The Code: ``` try { SPSite scheduleSiteCol = new SPSite(branchScheduleURL); lblError.Text += Environment.NewLine + "Site Collection URL: " + scheduleSiteCol.Url; SPWeb scheduleWeb = scheduleSiteCol.OpenWeb(branchScheduleURL.Replace(scheduleSiteCol.Url, "")); //<--- Throws error on this line SPList scheduleList = scheduleWeb.GetList(branchScheduleURL + "/lists/" + SPContext.Current.List.Title); return scheduleList.GetItemById(int.Parse(testID)); } catch (System.Exception ex) { lblError.Text += Environment.NewLine + ex.ToString(); return null; } ``` Note: branchScheduleURL is actually the whole URL that includes the URL of the Web as well. The output + exception: > Site Collection URL: <https://ourSite.com/mocc> > > System.ArgumentException: Invalid URL: /internal/scheduletool. at Microsoft.SharePoint.SPSite.OpenWeb(String strUrl, Boolean requireExactUrl) at Microsoft.SharePoint.SPSite.OpenWeb(String strUrl) at MOCCBranchScheduleListWeb.MOCCBranchScheduleListV3.GetConflictListItem(String branchScheduleURL, String testID)System.NullReferenceException: Object reference not set to an instance of an object. at MOCCBranchScheduleListWeb.MOCCBranchScheduleListV3.CheckForConflicts(String[] cfcFlags1, DateTime startTime, DateTime endTime, String[] cfcFlags2) Note: <https://ourSite.com/mocc/internal/scheduletool> is the SPWeb I am trying to open. Am I missing something obvious? Any help would be greatly appreciated. Thanks.
Look at the examples table at the bottom of [this page](http://msdn.microsoft.com/en-us/library/ms955307.aspx). Try not sending any parameters into the OpenWeb() method (2nd row).
Try getting the SPWeb object for "Internal" first, then get the sub-web SPWebCollection for that object. From that, try to get the SPWeb object for "ScheduleTool" using the GetSubwebsForCurrentUser() method.
MOSS 2007 -- Invalid URL Exception SPSite.OpenWeb(...)
[ "", "c#", "sharepoint", "sharepoint-2007", "moss", "wss", "" ]
Is it possible to run both debuggers within the same PHP installation simultaneously. They both use different ports so communication with the client IDEs/other apps wouldn't be an issue. I ask only because using the Zend Debugger with ZendStudio has proven to be much easier (fewer steps to start/stop debugging from the browser), but I really like some of the profiling tools available that only work with XDebug. So in a nutshell, I would love to be able to have the best of both worlds if possible.
<http://www.suspekt.org/2008/08/04/xdebug-203-stealth-patch/> (in particular the last comment) seems to indicate that the profiling parts of Xdebug will work fine alongside Zend Debugger, with the patch installed.
It is possible. The simplest way on a development web server would be to run two different Apache processes with different php.ini files referencing the different debugger modules.
Using Xdebug & Zend Debugger Simultaneously?
[ "", "php", "profiling", "xdebug", "zend-studio", "zend-debugger", "" ]
Take this non-compiling code for instance: ``` public string GetPath(string basefolder, string[] extraFolders) { string version = Versioner.GetBuildAndDotNetVersions(); string callingModule = StackCrawler.GetCallingModuleName(); return AppendFolders(basefolder, version, callingModule, extraFolders); } private string AppendFolders(params string[] folders) { string outstring = folders[0]; for (int i = 1; i < folders.Length; i++) { string fixedPath = folders[i][0] == '\\' ? folders[i].Substring(1) : folders[i]; Path.Combine(outstring, fixedPath); } return outstring; } ``` This example is a somewhat simplified version of testing code I am using. Please, I am only interested in solutions having directly to do with the `params` keyword. I know how lists and other similar things work. Is there a way to "explode" the extraFolders array so that its contents can be passed into AppendFolders along with other parameters?
One option is to make the `params` parameter an `object[]`: ``` static string appendFolders(params object[] folders) { return (string) folders.Aggregate("",(output, f) => Path.Combine( (string)output ,(f is string[]) ? appendFolders((object[])f) : ((string)f).TrimStart('\\'))); } ``` If you want something more strongly-typed, another option is to create a custom union type with implicit conversion operators: ``` static string appendFolders(params StringOrArray[] folders) { return folders.SelectMany(x=>x.AsEnumerable()) .Aggregate("", (output, f)=>Path.Combine(output,f.TrimStart('\\'))); } class StringOrArray { string[] array; public IEnumerable<string> AsEnumerable() { return array; } public static implicit operator StringOrArray(string s) { return new StringOrArray{array=new[]{s}};} public static implicit operator StringOrArray(string[] s) { return new StringOrArray{array=s};} } ``` In either case, this **will** compile: ``` appendFolders("base", "v1", "module", new[]{"debug","bin"}); ```
Just pass it. The folders parameter is an array first; the "params" functionality is a little bit of compiler magic, but it's not required. ``` AppendFolders(extraFolders); ``` Now, in this particular instance, you'll have to add some things to that array first. ``` List<string> lstFolders = new List<string>(extraFolders); lstFolders.Insert(0, callingModule); lstFolders.Insert(0, version); lstFolders.Insert(0, basefolder); return AppendFolders(lstFolders.ToArray()); ```
Is it possible to explode an array so that its elements can be passed to a method with the params keyword?
[ "", "c#", "parameters", "keyword", "variadic-functions", "params-keyword", "" ]
I need to run a JNDI provider without the overhead of a J2EE container. I've tried to follow the directions in this [article](http://www.javaworld.com/javaworld/jw-04-2002/jw-0419-jndi.html), which describes (on page 3) exactly what I want to do. Unfortunately, these directions fail. I had to add the jboss-common.jar to my classpath too. Once I did that, I get a stack trace: ``` $ java org.jnp.server.Main 0 [main] DEBUG org.jboss.naming.Naming - Creating NamingServer stub, theServer=null,rmiPort=0,clientSocketFactory=null,serverSocketFactory=org.jboss.net.sockets.DefaultSocketFactory@ad093076[bindAddress=null] Exception in thread "main" java.lang.NullPointerException at org.jnp.server.Main.getNamingInstance(Main.java:301) at org.jnp.server.Main.initJnpInvoker(Main.java:354) at org.jnp.server.Main.start(Main.java:316) at org.jnp.server.Main.main(Main.java:104) ``` I'm hoping to make this work, but I would also be open to other lightweight standalone JNDI providers. All of this is to make ActiveMQ work, and if somebody can suggest another lightweight JMS provider that works well outside of the vm the clients are in without a full blown app server that would work too.
[Apache ActiveMQ](http://activemq.apache.org/) already comes with an integrated lightweight JNDI provider. See [these instructions on using it](http://activemq.apache.org/jndi-support.html). Basically you just add the jndi.properties file to the classpath and you're done. ``` java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory # use the following property to configure the default connector java.naming.provider.url = failover:tcp://localhost:61616 # use the following property to specify the JNDI name the connection factory # should appear as. #connectionFactoryNames = connectionFactory, queueConnectionFactory, topicConnectionFactry # register some queues in JNDI using the form # queue.[jndiName] = [physicalName] queue.MyQueue = example.MyQueue # register some topics in JNDI using the form # topic.[jndiName] = [physicalName] topic.MyTopic = example.MyTopic ```
Use a jndi.properties file like this: ``` java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory # use the following property to configure the default connector java.naming.provider.url=tcp://jmshost:61616 # use the following property to specify the JNDI name the connection factory # should appear as. #connectionFactoryNames = connectionFactory, queueConnectionFactory, topicConnectionFactry # register some queues in JNDI using the form # queue.[jndiName] = [physicalName] #queue.MyQueue = example.MyQueue # register some topics in JNDI using the form # topic.[jndiName] = [physicalName] topic.myTopic = MY.TOPIC ``` Make sure that this file is in your classpath. Then you can lookup the topic/queue like this (minus appropriate try/catches): ``` context = new InitialContext(properties); context = (Context) context.lookup("java:comp/env/jms"); topicConnectionFactory = (TopicConnectionFactory) context.lookup("ConnectionFactory"); topic = (Topic) context.lookup("myTopic"); ```
JNDI without a J2EE Container (with JNP? Maybe some other provider?)
[ "", "java", "jakarta-ee", "jboss", "jms", "jndi", "" ]
Assume you have a flat table that stores an ordered tree hierarchy: ``` Id Name ParentId Order 1 'Node 1' 0 10 2 'Node 1.1' 1 10 3 'Node 2' 0 20 4 'Node 1.1.1' 2 10 5 'Node 2.1' 3 10 6 'Node 1.2' 1 20 ``` Here's a diagram, where we have `[id] Name`. Root node 0 is fictional. ``` [0] ROOT / \ [1] Node 1 [3] Node 2 / \ \ [2] Node 1.1 [6] Node 1.2 [5] Node 2.1 / [4] Node 1.1.1 ``` What minimalistic approach would you use to output that to HTML (or text, for that matter) as a correctly ordered, correctly indented tree? Assume further you only have basic data structures (arrays and hashmaps), no fancy objects with parent/children references, no ORM, no framework, just your two hands. The table is represented as a result set, which can be accessed randomly. Pseudo code or plain English is okay, this is purely a conceptional question. Bonus question: Is there a fundamentally better way to store a tree structure like this in a RDBMS? --- **EDITS AND ADDITIONS** To answer one commenter's ([Mark Bessey](https://stackoverflow.com/users/17826/mark-bessey)'s) question: A root node is not necessary, because it is never going to be displayed anyway. ParentId = 0 is the convention to express "these are top level". The Order column defines how nodes with the same parent are going to be sorted. The "result set" I spoke of can be pictured as an array of hashmaps (to stay in that terminology); in my example it was meant to be already there. Some answers go the extra mile and construct it first, but that's okay. The tree can be arbitrarily deep. Each node can have N children. I did not exactly have a "millions of entries" tree in mind, though. Don't mistake my choice of node naming ('Node 1.1.1') for something to rely on. The nodes could equally well be called 'Frank' or 'Bob', no naming structure is implied, this was merely to make it readable. *I have posted my own solution so you guys can pull it to pieces.*
Now that [MySQL 8.0 supports recursive queries](https://dev.mysql.com/doc/refman/8.0/en/with.html), we can say that [all popular SQL databases support recursive queries](https://stackoverflow.com/questions/324935/mysql-with-clause/325243#325243) in standard syntax. ``` WITH RECURSIVE MyTree AS ( SELECT * FROM MyTable WHERE ParentId IS NULL UNION ALL SELECT m.* FROM MyTABLE AS m JOIN MyTree AS t ON m.ParentId = t.Id ) SELECT * FROM MyTree; ``` I tested recursive queries in MySQL 8.0 in my presentation [Recursive Query Throwdown](https://www.slideshare.net/billkarwin/recursive-query-throwdown) in 2017. Below is my original answer from 2008: --- There are several ways to store tree-structured data in a relational database. What you show in your example uses two methods: * **Adjacency List** (the "parent" column) and * **Path Enumeration** (the dotted-numbers in your name column). Another solution is called **Nested Sets**, and it can be stored in the same table too. Read "[Trees and Hierarchies in SQL for Smarties](https://rads.stackoverflow.com/amzn/click/com/1558609202)" by Joe Celko for a lot more information on these designs. I usually prefer a design called **Closure Table** (aka "Adjacency Relation") for storing tree-structured data. It requires another table, but then querying trees is pretty easy. I cover Closure Table in my presentation [Models for Hierarchical Data with SQL and PHP](http://www.slideshare.net/billkarwin/models-for-hierarchical-data) and in my book [SQL Antipatterns Volume 1: Avoiding the Pitfalls of Database Programming](https://pragprog.com/titles/bksap1/sql-antipatterns-volume-1/). ``` CREATE TABLE ClosureTable ( ancestor_id INT NOT NULL REFERENCES FlatTable(id), descendant_id INT NOT NULL REFERENCES FlatTable(id), PRIMARY KEY (ancestor_id, descendant_id) ); ``` Store all paths in the Closure Table, where there is a direct ancestry from one node to another. Include a row for each node to reference itself. 
For example, using the data set you showed in your question: ``` INSERT INTO ClosureTable (ancestor_id, descendant_id) VALUES (1,1), (1,2), (1,4), (1,6), (2,2), (2,4), (3,3), (3,5), (4,4), (5,5), (6,6); ``` Now you can get a tree starting at node 1 like this: ``` SELECT f.* FROM FlatTable f JOIN ClosureTable a ON (f.id = a.descendant_id) WHERE a.ancestor_id = 1; ``` The output (in MySQL client) looks like the following: ``` +----+ | id | +----+ | 1 | | 2 | | 4 | | 6 | +----+ ``` In other words, nodes 3 and 5 are excluded, because they're part of a separate hierarchy, not descending from node 1. --- Re: comment from e-satis about immediate children (or immediate parent). You can add a "`path_length`" column to the `ClosureTable` to make it easier to query specifically for an immediate child or parent (or any other distance). ``` INSERT INTO ClosureTable (ancestor_id, descendant_id, path_length) VALUES (1,1,0), (1,2,1), (1,4,2), (1,6,1), (2,2,0), (2,4,1), (3,3,0), (3,5,1), (4,4,0), (5,5,0), (6,6,0); ``` Then you can add a term in your search for querying the immediate children of a given node. These are descendants whose `path_length` is 1. ``` SELECT f.* FROM FlatTable f JOIN ClosureTable a ON (f.id = a.descendant_id) WHERE a.ancestor_id = 1 AND path_length = 1; +----+ | id | +----+ | 2 | | 6 | +----+ ``` --- Re comment from @ashraf: "How about sorting the whole tree [by name]?" Here's an example query to return all nodes that are descendants of node 1, join them to the FlatTable that contains other node attributes such as `name`, and sort by the name. 
``` SELECT f.name FROM FlatTable f JOIN ClosureTable a ON (f.id = a.descendant_id) WHERE a.ancestor_id = 1 ORDER BY f.name; ``` --- Re comment from @Nate: ``` SELECT f.name, GROUP_CONCAT(b.ancestor_id order by b.path_length desc) AS breadcrumbs FROM FlatTable f JOIN ClosureTable a ON (f.id = a.descendant_id) JOIN ClosureTable b ON (b.descendant_id = a.descendant_id) WHERE a.ancestor_id = 1 GROUP BY a.descendant_id ORDER BY f.name +------------+-------------+ | name | breadcrumbs | +------------+-------------+ | Node 1 | 1 | | Node 1.1 | 1,2 | | Node 1.1.1 | 1,2,4 | | Node 1.2 | 1,6 | +------------+-------------+ ``` --- A user suggested an edit today. SO moderators approved the edit, but I am reversing it. The edit suggested that the ORDER BY in the last query above should be `ORDER BY b.path_length, f.name`, presumably to make sure the ordering matches the hierarchy. But this doesn't work, because it would order "Node 1.1.1" after "Node 1.2". If you want the ordering to match the hierarchy in a sensible way, that is possible, but not simply by ordering by the path length. For example, see my answer to [MySQL Closure Table hierarchical database - How to pull information out in the correct order](https://stackoverflow.com/questions/8252323/mysql-closure-table-hierarchical-database-how-to-pull-information-out-in-the-c).
If you use nested sets (sometimes referred to as Modified Pre-order Tree Traversal) you can extract the entire tree structure or any subtree within it in tree order with a single query, at the cost of inserts being more expensive, as you need to manage columns which describe an in-order path through the tree structure. For [django-mptt](http://code.google.com/p/django-mptt/), I used a structure like this: ``` id parent_id tree_id level lft rght -- --------- ------- ----- --- ---- 1 null 1 0 1 14 2 1 1 1 2 7 3 2 1 2 3 4 4 2 1 2 5 6 5 1 1 1 8 13 6 5 1 2 9 10 7 5 1 2 11 12 ``` Which describes a tree which looks like this (with `id` representing each item): ``` 1 +-- 2 | +-- 3 | +-- 4 | +-- 5 +-- 6 +-- 7 ``` Or, as a nested set diagram which makes it more obvious how the `lft` and `rght` values work: ``` __________________________________________________________________________ | Root 1 | | ________________________________ ________________________________ | | | Child 1.1 | | Child 1.2 | | | | ___________ ___________ | | ___________ ___________ | | | | | C 1.1.1 | | C 1.1.2 | | | | C 1.2.1 | | C 1.2.2 | | | 1 2 3___________4 5___________6 7 8 9___________10 11__________12 13 14 | |________________________________| |________________________________| | |__________________________________________________________________________| ``` As you can see, to get the entire subtree for a given node, in tree order, you simply have to select all rows which have `lft` and `rght` values between its `lft` and `rght` values. It's also simple to retrieve the tree of ancestors for a given node. The `level` column is a bit of denormalisation for convenience more than anything and the `tree_id` column allows you to restart the `lft` and `rght` numbering for each top-level node, which reduces the number of columns affected by inserts, moves and deletions, as the `lft` and `rght` columns have to be adjusted accordingly when these operations take place in order to create or close gaps. 
I made some [development notes](http://code.google.com/p/django-mptt/source/browse/trunk/NOTES) at the time when I was trying to wrap my head around the queries required for each operation. In terms of actually working with this data to display a tree, I created a [`tree_item_iterator`](http://code.google.com/p/django-mptt/source/browse/trunk/mptt/utils.py#29) utility function which, for each node, should give you sufficient information to generate whatever kind of display you want. More info about MPTT: * [Trees in SQL](https://web.archive.org/web/20081102031702/http://www.intelligententerprise.com/001020/celko.jhtml) (archive) * [Storing Hierarchical Data in a Database](http://www.sitepoint.com/print/hierarchical-data-database) * [Managing Hierarchical Data in MySQL](http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/)
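The subtree-by-range claim above is easy to sanity-check outside SQL; a small Python sketch (using the `lft`/`rght` values from the answer's own table, illustrative only):

```python
# Illustrative check of the nested-set property: a node's subtree is every
# row whose lft falls between that node's own lft and rght. The id -> (lft,
# rght) pairs below are copied from the answer's table.
nodes = {
    1: (1, 14), 2: (2, 7), 3: (3, 4), 4: (5, 6),
    5: (8, 13), 6: (9, 10), 7: (11, 12),
}

def subtree_ids(root_id):
    """Ids in the subtree rooted at root_id, ordered by lft (tree order)."""
    lft, rght = nodes[root_id]
    hits = [(l, i) for i, (l, r) in nodes.items() if lft <= l <= rght]
    return [i for _, i in sorted(hits)]

print(subtree_ids(2))
print(subtree_ids(5))
```

The SQL equivalent is the single `WHERE lft BETWEEN node.lft AND node.rght` predicate the answer describes.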
What is the most efficient/elegant way to parse a flat table into a tree?
[ "", "sql", "algorithm", "recursion", "tree", "hierarchical-data", "" ]
Using something like this:

```
try:
    # Something...
except Exception as excep:
    logger = logging.getLogger("component")
    logger.warning("something raised an exception: " + excep)
    logger.info("something raised an exception: " + excep)
```

I would rather not have it at the error level because, in my special case, it is not an error.
From the [logging documentation](http://docs.python.org/library/logging.html#logging.Logger.debug): > There are three keyword arguments in `kwargs` which are inspected: `exc_info`, `stack_info`, and `extra`. > > If `exc_info` does not evaluate as false, it causes exception information to be added to the logging message. If an exception tuple (in the format returned by [`sys.exc_info()`](https://docs.python.org/3/library/sys.html#sys.exc_info)) or an exception instance is provided, it is used; otherwise, [`sys.exc_info()`](https://docs.python.org/3/library/sys.html#sys.exc_info) is called to get the exception information. So do: ``` logger.warning("something raised an exception:", exc_info=True) ```
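A self-contained check (not from the answer) that `exc_info=True` really attaches the traceback at warning level — the logger name and string-buffer handler here are arbitrary demo choices:

```python
import io
import logging

# Arbitrary demo setup: route the 'component' logger into a string buffer
# so the formatted output can be inspected.
stream = io.StringIO()
logger = logging.getLogger('component')
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.INFO)

try:
    1 / 0
except ZeroDivisionError:
    # exc_info=True makes logging call sys.exc_info() and append the traceback.
    logger.warning('something raised an exception:', exc_info=True)

log_text = stream.getvalue()
print(log_text)
```

The output is the warning message followed by a normal `Traceback (most recent call last):` block, without the record being promoted to error level.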
Here is one that works (python 2.6.5). ``` logger.critical("caught exception, traceback =", exc_info=True) ```
How do I log an exception at warning- or info-level with traceback using the python logging framework?
[ "", "python", "exception", "logging", "python-logging", "" ]
I'm writing a Windows Forms Application in C#.NET On startup, the application displays a splash screen which is running in a separate thread. Whilst the splash screen is showing, the main application is initialising. Once the main application has finished initialising, the main form of the application is displayed, and the splash screen still shows over the top. Everything so far is as expected. Then, the Splash screen is closed, which causes that thread to exit. For some reason, at the point, the main application windows gets sent behind all other open Windows, notably the Windows Explorer window where you clicked the .exe file to run the application in the first place! What could be causing the windows to suddenly jump "behind" like this?
Try calling .Activate() on your main window when your thread closes. It's never been active, and thus has low Z-Order, so whatever is higher will naturally be above it. I had to fix this exact scenario in our app. Don't forget! You may need to marshal the call to the correct thread using an Invoke()!
I've had this happen at times too. Bob's response is the easiest and works for me in the majority of cases. However, there have been some times where I need to use brute force. Do this via interop like this:

```
[DllImport("user32.dll")]
public static extern bool SetForegroundWindow(IntPtr hWnd);
```
Application window sent behind other windows on closing different thread (C#)
[ "", "c#", "winforms", "multithreading", "splash-screen", "" ]
I have found that when I execute the show() method for a contextmenustrip (a right click menu), if the position is outside that of the form it belongs to, it shows up on the taskbar also. I am trying to create a right click menu for when clicking on the notifyicon, but as the menu hovers above the system tray and not inside the form (as the form can be minimised when right clicking) it shows up on the task bar for some odd reason Here is my code currently: ``` private: System::Void notifyIcon1_MouseClick(System::Object^ sender, System::Windows::Forms::MouseEventArgs^ e) { if(e->Button == System::Windows::Forms::MouseButtons::Right) { this->sysTrayMenu->Show(Cursor->Position); } } ``` What other options do I need to set so it doesn't show up a blank process on the task bar.
Try assigning your menu to the ContextMenuStrip property of NotifyIcon rather than showing it in the mouse click handler.
The best and right way, without Reflection, is:

```
{
    UnsafeNativeMethods.SetForegroundWindow(new HandleRef(notifyIcon.ContextMenuStrip, notifyIcon.ContextMenuStrip.Handle));
    notifyIcon.ContextMenuStrip.Show(Cursor.Position);
}
```

where **UnsafeNativeMethods.SetForegroundWindow** is:

```
public static class UnsafeNativeMethods
{
    [DllImport("user32.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
    public static extern bool SetForegroundWindow(HandleRef hWnd);
}
```
Show a ContextMenuStrip without it showing in the taskbar
[ "", ".net", "c++", "winforms", "" ]
I have boiled down an issue I'm seeing in one of my applications to an incredibly simple reproduction sample. I need to know if there's something amiss or something I'm missing. Anyway, below is the code. The behavior is that the code runs and steadily grows in memory until it crashes with an OutOfMemoryException. That takes a while, but the behavior is that objects are being allocated and are not being garbage collected. I've taken memory dumps and ran !gcroot on some things as well as used ANTS to figure out what the problem is, but I've been at it for a while and need some new eyes. This reproduction sample is a simple console application that creates a Canvas and adds a Line to it. It does this continually. This is all the code does. It sleeps every now and again to ensure that the CPU is not so taxed that your system is unresponsive (and to ensure there's no weirdness with the GC not being able to run). Anyone have any thoughts? I've tried this with .NET 3.0 only, .NET 3.5 and also .NET 3.5 SP1 and the same behavior occurred in all three environments. Also note that I've put this code in a WPF application project as well and triggered the code in a button click and it occurs there too. 
``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Controls; using System.Windows.Shapes; using System.Windows; namespace SimplestReproSample { class Program { [STAThread] static void Main(string[] args) { long count = 0; while (true) { if (count++ % 100 == 0) { // sleep for a while to ensure we aren't using up the whole CPU System.Threading.Thread.Sleep(50); } BuildCanvas(); } } [System.Runtime.CompilerServices.MethodImpl(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)] private static void BuildCanvas() { Canvas c = new Canvas(); Line line = new Line(); line.X1 = 1; line.Y1 = 1; line.X2 = 100; line.Y2 = 100; line.Width = 100; c.Children.Add(line); c.Measure(new Size(300, 300)); c.Arrange(new Rect(0, 0, 300, 300)); } } } ``` NOTE: the first answer below is a bit off-base since I explicitly stated already that this same behavior occurs during a WPF application's button click event. I did not explicitly state, however, that in that app I only do a limited number of iterations (say 1000). Doing it that way would allow the GC to run as you click around the application. Also note that I explicitly said I've taken a memory dump and found my objects were rooted via !gcroot. I also disagree that the GC would not be able to run. The GC does not run on my console application's main thread, especially since I'm on a dual core machine which means the Concurrent Workstation GC is active. Message pump, however, yes. To prove the point, here's a WPF application version that runs the test on a DispatcherTimer. It performs 1000 iterations during a 100ms timer interval. More than enough time to process any messages out of the pump and keep the CPU usage low. 
``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Shapes; namespace SimpleReproSampleWpfApp { public partial class Window1 : Window { private System.Windows.Threading.DispatcherTimer _timer; public Window1() { InitializeComponent(); _timer = new System.Windows.Threading.DispatcherTimer(); _timer.Interval = TimeSpan.FromMilliseconds(100); _timer.Tick += new EventHandler(_timer_Tick); _timer.Start(); } [System.Runtime.CompilerServices.MethodImpl(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)] void RunTest() { for (int i = 0; i < 1000; i++) { BuildCanvas(); } } [System.Runtime.CompilerServices.MethodImpl(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)] private static void BuildCanvas() { Canvas c = new Canvas(); Line line = new Line(); line.X1 = 1; line.Y1 = 1; line.X2 = 100; line.Y2 = 100; line.Width = 100; c.Children.Add(line); c.Measure(new Size(300, 300)); c.Arrange(new Rect(0, 0, 300, 300)); } void _timer_Tick(object sender, EventArgs e) { _timer.Stop(); RunTest(); _timer.Start(); } } } ``` NOTE2: I used the code from the first answer and my memory grew very slowly. Note that 1ms is much slower and less iterations than my example. You have to let it run for a couple minutes before you start to notice growth. After 5 minutes it's at 46MB from a starting point of 30MB. NOTE3: Removing the call to .Arrange completely eliminates the growth. Unfortunately, that call is pretty vital to my use since in many cases I'm creating PNG files from the Canvas (via the RenderTargetBitmap class). Without the call to .Arrange it doesn't layout the canvas at all.
I was able to reproduce your problem using the code you provided. Memory keeps growing because the Canvas objects are never released; a memory profiler indicates that the Dispatcher's ContextLayoutManager is holding on to them all (so that it can invoke OnRenderSizeChanged when necessary). It seems that a simple workaround is to add ``` c.UpdateLayout() ``` to the end of `BuildCanvas`. That said, note that `Canvas` is a `UIElement`; it's supposed to be used in UI. It's not designed to be used as an arbitrary drawing surface. As other commenters have already noted, the creation of thousands of Canvas objects may indicate a design flaw. I realise that your production code may be more complicated, but if it's just drawing simple shapes on a canvas, GDI+-based code (i.e., the System.Drawing classes) may be more appropriate.
WPF in .NET 3 and 3.5 has an internal memory leak. It only triggers under certain situations. We could never figure out exactly what triggers it, but we had it in our app. Apparently it's fixed in .NET 4. I think it's the same as the one mentioned in [this blog post](http://wesaday.wordpress.com/tag/cmilchannel-hwndsource/) At any rate, putting the following code in the `App.xaml.cs` constructor solved it for us ``` public partial class App : Application { public App() { new HwndSource(new HwndSourceParameters()); } } ``` If nothing else solves it, try that and see
Simple WPF sample causes uncontrolled memory growth
[ "", "c#", "wpf", "memory-leaks", "" ]
What I am looking for is a way to call a method after another method has been invoked but before it is entered. Example:

```
public class Test
{
    public void Tracer( ... )
    {
    }

    public int SomeFunction( string str )
    {
        return 0;
    }

    public void TestFun()
    {
        SomeFunction( "" );
    }
}
```

In the example above I would like to have Tracer() called after SomeFunction() has been invoked by TestFun() but before SomeFunction() is entered. I'd also like to get reflection data on SomeFunction().

---

I found something interesting in everyone's answers. The **best answer to the question** is to use Castle's DynamicProxy; however, this is not what I'm going to use to solve my problem because it requires adding a library to my project. I have only a few methods that I need to "trace", so I've chosen to go with a modified "core" methodology mixed with the way DynamicProxy is implemented. I explain this in my answer to my own question below. Just as a note, I'm going to be looking into AOP and the ContextBoundObject class for some other applications.
You can use a dynamic proxy ([Castle's DynamicProxy](http://www.castleproject.org/dynamicproxy/index.html) for example) to intercept the call, run whatever code you wish, and then either invoke your method or not, depending on your needs.
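What a dynamic proxy generates can be sketched in a few lines — here in Python rather than C#, purely to illustrate the interception idea (all names invented, nothing Castle-specific): the wrapper runs the tracer after the call is issued but before the target body is entered, and could equally decide not to invoke the target at all.

```python
# Toy illustration of call interception: a proxy is just a wrapper that
# runs extra code between "invoked" and "entered".
trace_log = []

def tracer(func, args, kwargs):
    # Runs after the call is issued but before the target body executes;
    # func gives us reflection-style data about the target.
    trace_log.append((func.__name__, args))

def intercept(func):
    """Minimal stand-in for what a dynamic proxy generates per method."""
    def wrapper(*args, **kwargs):
        tracer(func, args, kwargs)    # pre-entry hook
        return func(*args, **kwargs)  # now actually enter the target
    return wrapper

@intercept
def some_function(s):
    return 0

result = some_function('')
print(trace_log, result)
```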
**Use delegates!** ``` delegate void SomeFunctionDelegate(string s); void Start() { TraceAndThenCallMethod(SomeFunction, "hoho"); } void SomeFunction(string str) { //Do stuff with str } void TraceAndThenCallMethod(SomeFunctionDelegate sfd, string parameter) { Trace(); sfd(parameter); } ```
Is there a way in .NET to have a method called automatically after another method has been invoked but before it is entered
[ "", "c#", ".net", "" ]
I have the following two models:

```
class Activity(models.Model):
    name = models.CharField(max_length=50, help_text='Some help.')
    entity = models.ForeignKey(CancellationEntity)
    ...

class Cancellation(models.Model):
    activity = models.ForeignKey(Activity)
    date = models.DateField(default=datetime.now().date())
    description = models.CharField(max_length=250)
    ...
```

I would like the Activity model to be aware when a Cancellation related to it is saved (both inserted or updated). What is the best way to go about this?
What you want to look into is [Django's signals](http://docs.djangoproject.com/en/dev/ref/signals/) (check out [this page](http://docs.djangoproject.com/en/dev/topics/signals/) too), specifically the model signals--more specifically, the **post\_save** signal. Signals are Django's version of a plugin/hook system. The post\_save signal gets sent every time a model is saved, whether it's updated or created (and it'll let you know if it was created). This is how you'd use signals to get notified when an Activity has a Cancellation:

```
from django.db.models.signals import post_save

class Activity(models.Model):
    name = models.CharField(max_length=50, help_text='Some help.')
    entity = models.ForeignKey(CancellationEntity)

    @staticmethod
    def cancellation_occurred(sender, instance, created, **kwargs):
        # grab the Activity the cancellation points at
        self = instance.activity
        # do something
        ...

class Cancellation(models.Model):
    activity = models.ForeignKey(Activity)
    date = models.DateField(default=datetime.now().date())
    description = models.CharField(max_length=250)
    ...

post_save.connect(Activity.cancellation_occurred, sender=Cancellation)
```
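Independent of Django, `post_save` is at heart an observer list; a stripped-down, purely illustrative Python sketch of the mechanism (not Django's actual implementation — class and argument names are invented):

```python
# Toy observer/"signal" mechanism, mimicking the shape of post_save.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver, sender=None):
        self._receivers.append((receiver, sender))

    def send(self, sender, **kwargs):
        # Notify every receiver registered for this sender (or for any sender).
        for receiver, wanted_sender in self._receivers:
            if wanted_sender is None or wanted_sender is sender:
                receiver(sender=sender, **kwargs)

post_save = Signal()
notifications = []

class Cancellation:
    def save(self):
        # A real ORM fires the signal from inside its save() machinery.
        post_save.send(sender=Cancellation, instance=self, created=True)

def cancellation_occurred(sender, instance, created, **kwargs):
    notifications.append((sender.__name__, created))

post_save.connect(cancellation_occurred, sender=Cancellation)
Cancellation().save()
print(notifications)
```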
What's wrong with the following? ``` class Cancellation( models.Model ): blah blah def save( self, **kw ): for a in self.activity_set.all(): a.somethingChanged( self ) super( Cancellation, self ).save( **kw ) ``` It would allow you to to control the notification among models very precisely. In a way, this is the canonical "Why is OO so good?" question. I think OO is good precisely because your collection of Cancellation and Activity objects can cooperate fully.
In Django how do I notify a parent when a child is saved in a foreign key relationship?
[ "", "python", "django", "django-models", "" ]
I am selecting from a table that has an XML column using T-SQL. I would like to select a certain type of node and have a row created for each one. For instance, suppose I am selecting from a *people* table. This table has an XML column for *addresses*. The XML is formatted similarly to the following:

```
<address>
  <street>Street 1</street>
  <city>City 1</city>
  <state>State 1</state>
  <zipcode>Zip Code 1</zipcode>
</address>
<address>
  <street>Street 2</street>
  <city>City 2</city>
  <state>State 2</state>
  <zipcode>Zip Code 2</zipcode>
</address>
```

How can I get results like this:

```
Name        City       State
Joe Baker   Seattle    WA
Joe Baker   Tacoma     WA
Fred Jones  Vancouver  BC
```
Here is your solution: ``` /* TEST TABLE */ DECLARE @PEOPLE AS TABLE ([Name] VARCHAR(20), [Address] XML ) INSERT INTO @PEOPLE SELECT 'Joel', '<address> <street>Street 1</street> <city>City 1</city> <state>State 1</state> <zipcode>Zip Code 1</zipcode> </address> <address> <street>Street 2</street> <city>City 2</city> <state>State 2</state> <zipcode>Zip Code 2</zipcode> </address>' UNION ALL SELECT 'Kim', '<address> <street>Street 3</street> <city>City 3</city> <state>State 3</state> <zipcode>Zip Code 3</zipcode> </address>' SELECT * FROM @PEOPLE -- BUILD XML DECLARE @x XML SELECT @x = ( SELECT [Name] , [Address].query(' for $a in //address return <address street="{$a/street}" city="{$a/city}" state="{$a/state}" zipcode="{$a/zipcode}" /> ') FROM @PEOPLE AS people FOR XML AUTO ) -- RESULTS SELECT [Name] = T.Item.value('../@Name', 'varchar(20)'), street = T.Item.value('@street' , 'varchar(20)'), city = T.Item.value('@city' , 'varchar(20)'), state = T.Item.value('@state' , 'varchar(20)'), zipcode = T.Item.value('@zipcode', 'varchar(20)') FROM @x.nodes('//people/address') AS T(Item) /* OUTPUT*/ Name | street | city | state | zipcode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Joel | Street 1 | City 1 | State 1 | Zip Code 1 Joel | Street 2 | City 2 | State 2 | Zip Code 2 Kim | Street 3 | City 3 | State 3 | Zip Code 3 ```
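Outside T-SQL, the same "one row per node" shredding is plain XML iteration; a Python sketch with the standard library (illustrative only, using the question's address fragment wrapped in a root element so it parses as one document):

```python
import xml.etree.ElementTree as ET

# The question's column value, wrapped in an invented <root> element
# (an XML column may hold a fragment with no single root).
xml_column = """<root>
  <address>
    <street>Street 1</street><city>City 1</city>
    <state>State 1</state><zipcode>Zip Code 1</zipcode>
  </address>
  <address>
    <street>Street 2</street><city>City 2</city>
    <state>State 2</state><zipcode>Zip Code 2</zipcode>
  </address>
</root>"""

# One output row per <address> node, mirroring the SQL .nodes() shredding.
rows = [
    (addr.findtext('city'), addr.findtext('state'))
    for addr in ET.fromstring(xml_column).iter('address')
]
print(rows)
```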
Here's how I do it generically: I shred the source XML via a call such as ``` DECLARE @xmlEntityList xml SET @xmlEntityList = ' <ArbitrarilyNamedXmlListElement> <ArbitrarilyNamedXmlItemElement><SomeVeryImportantInteger>1</SomeVeryImportantInteger></ArbitrarilyNamedXmlItemElement> <ArbitrarilyNamedXmlItemElement><SomeVeryImportantInteger>2</SomeVeryImportantInteger></ArbitrarilyNamedXmlItemElement> <ArbitrarilyNamedXmlItemElement><SomeVeryImportantInteger>3</SomeVeryImportantInteger></ArbitrarilyNamedXmlItemElement> </ArbitrarilyNamedXmlListElement> ' DECLARE @tblEntityList TABLE( SomeVeryImportantInteger int ) INSERT @tblEntityList(SomeVeryImportantInteger) SELECT XmlItem.query('//SomeVeryImportantInteger[1]').value('.','int') as SomeVeryImportantInteger FROM [dbo].[tvfShredGetOneColumnedTableOfXmlItems] (@xmlEntityList) ``` by utilizing the scalar-valued function ``` /* Example Inputs */ /* DECLARE @xmlListFormat xml SET @xmlListFormat = ' <ArbitrarilyNamedXmlListElement> <ArbitrarilyNamedXmlItemElement>004421UB7</ArbitrarilyNamedXmlItemElement> <ArbitrarilyNamedXmlItemElement>59020UH24</ArbitrarilyNamedXmlItemElement> <ArbitrarilyNamedXmlItemElement>542514NA8</ArbitrarilyNamedXmlItemElement> </ArbitrarilyNamedXmlListElement> ' declare @tblResults TABLE ( XmlItem xml ) */ -- ============================================= -- Author: 6eorge Jetson -- Create date: 01/02/3003 -- Description: Shreds a list of XML items conforming to -- the expected generic @xmlListFormat -- ============================================= CREATE FUNCTION [dbo].[tvfShredGetOneColumnedTableOfXmlItems] ( -- Add the parameters for the function here @xmlListFormat xml ) RETURNS @tblResults TABLE ( -- Add the column definitions for the TABLE variable here XmlItem xml ) AS BEGIN -- Fill the table variable with the rows for your result set INSERT @tblResults SELECT tblShredded.colXmlItem.query('.') as XmlItem FROM @xmlListFormat.nodes('/child::*/child::*') as tblShredded(colXmlItem) RETURN END 
--SELECT * FROM @tblResults ```
Select XML nodes as rows
[ "", "sql", "xml", "t-sql", "xpath", "" ]
What translation occurs when writing to a file that was opened in text mode that does not occur in binary mode? Specifically in MS Visual C. ``` unsigned char buffer[256]; for (int i = 0; i < 256; i++) buffer[i]=i; int size = 1; int count = 256; ``` Binary mode: ``` FILE *fp_binary = fopen(filename, "wb"); fwrite(buffer, size, count, fp_binary); ``` Versus text mode: ``` FILE *fp_text = fopen(filename, "wt"); fwrite(buffer, size, count, fp_text); ```
I believe that most platforms will ignore the "t" option or the "text-mode" option when dealing with streams. On Windows, however, this is not the case. If you take a look at the description of the fopen() function at [MSDN](http://msdn.microsoft.com/en-us/library/yeby3zcb(vs.71).aspx), you will see that specifying the "t" option will have the following effect: * line feeds ('\n') will be translated to '\r\n' sequences on output * carriage return/line feed sequences will be translated to line feeds on input. * If the file is opened in append mode, the end of the file will be examined for a ctrl-z character (character 26) and that character removed, if possible. It will also interpret the presence of that character as being the end of file. This is an unfortunate holdover from the days of CP/M (something about the sins of the parents being visited upon their children up to the 3rd or 4th generation). Contrary to previously stated opinion, the ctrl-z character will not be appended.
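The output translation is easy to observe from Python, which exposes the same text/binary distinction portably — here forcing the '\r\n' translation explicitly via the `newline` parameter (since outside Windows the C runtime simply ignores the 't' flag), then reading the file back in binary mode, which performs no translation:

```python
import os
import tempfile

# Write with '\n' -> '\r\n' translation forced on: Python's portable
# analogue of what the Windows CRT's text mode does on output.
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')

with open(path, 'w', newline='\r\n') as f:
    f.write('line one\nline two\n')

# Binary mode hands back the raw bytes, so the inserted '\r' bytes show up.
with open(path, 'rb') as f:
    raw_bytes = f.read()

print(raw_bytes)
```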
In text mode, a newline "\n" may be converted to a carriage return + newline "\r\n" Usually you'll want to open in binary mode. Trying to read any binary data in text mode won't work, it will be corrupted. You can read text ok in binary mode though - it just won't do automatic translations of "\n" to "\r\n". See [fopen](http://hostprogressive.com/support/php_5_docs/function.fopen.html)
Difference between files written in binary and text mode
[ "", "c++", "c", "file-io", "" ]
Using the following code I get a nicely formatted string:

```
Request.QueryString.ToString
```

Gives me something like:

&hello=world&microsoft=sucks

But when I use this code to clone the collection to another object (of the same type), I get the type name back from the ToString() method instead.

```
System.Collections.Specialized.NameValueCollection variables = new System.Collections.Specialized.NameValueCollection(Request.QueryString);

if (!string.IsNullOrEmpty(variables["sid"]))
    variables.Remove("sid");

Response.Write(variables.ToString());
```

Is there a tidier way to output it rather than looping through the collection and building the string manually?
You can also use Reflector to extract the `HttpValueCollection` class into your own, and then use it.
HttpValueCollection is internal, but you can use "var" to declare it without extracting it with Reflector. ``` var query = HttpUtility.ParseQueryString(Request.Url.Query); query["Lang"] = myLanguage; // Add or replace param string myNewUrl = Request.Url.AbsolutePath + "?" + query; ```
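The parse → drop key → re-serialise round-trip the question needs can be sketched language-neutrally; here with Python's standard library (illustrative only, not the ASP.NET API, and the query string is invented for the demo):

```python
from urllib.parse import parse_qsl, urlencode

# Hypothetical query string; 'sid' is the key being removed, mirroring
# the question's variables.Remove("sid").
query = 'hello=world&sid=abc123&lang=en'

# Parse into pairs, filter out the unwanted key, rebuild the string.
pairs = [(k, v) for k, v in parse_qsl(query) if k != 'sid']
rebuilt = urlencode(pairs)
print(rebuilt)
```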
Outputting a manipulated QueryString in C#
[ "", "c#", "asp.net", "collections", "query-string", "" ]
OK, first for context look at the Windows desktop; You can take items (folders, files) on the desktop and drag them around to different places and they "stay" where you dragged them. This seems to be a pretty useful feature to offer users so as to allow them to create their own "groupings" of items. My question is thus: Is there a control in .NET that approximates this behavior with a collection of items? I'm thinking something like a listview in "LargeIcon" mode, but it allows you to drag the icons around to different places inside the control.
You can do this with a standard ListView control by implementing drag-and-drop. Here's a sample control that does this: ``` using System; using System.Drawing; using System.Windows.Forms; public class MyListView : ListView { private Point mItemStartPos; private Point mMouseStartPos; public MyListView() { this.AllowDrop = true; this.View = View.LargeIcon; this.AutoArrange = false; this.DoubleBuffered = true; } protected override void OnDragEnter(DragEventArgs e) { if (e.Data.GetData(typeof(ListViewItem)) != null) e.Effect = DragDropEffects.Move; } protected override void OnItemDrag(ItemDragEventArgs e) { // Start dragging ListViewItem item = e.Item as ListViewItem; mItemStartPos = item.Position; mMouseStartPos = Control.MousePosition; this.DoDragDrop(item, DragDropEffects.Move); } protected override void OnDragOver(DragEventArgs e) { // Move icon ListViewItem item = e.Data.GetData(typeof(ListViewItem)) as ListViewItem; if (item != null) { Point mousePos = Control.MousePosition; item.Position = new Point(mItemStartPos.X + mousePos.X - mMouseStartPos.X, mItemStartPos.Y + mousePos.Y - mMouseStartPos.Y); } } } ```
I think the closest would be the ListView control, but even that is more like an explorer window. You might be able to create your own view that does what you want, but you'd need to manually persist icon locations somewhere.
.NET: is there a Click-and-drag "Desktop-Like" control?
[ "", "c#", ".net", "forms", "controls", "user-interface", "" ]
Is it possible to use `request.setAttribute` on a JSP page and then on HTML Submit get the same request attribute in the `Servlet`?
No. Unfortunately the Request object is only available until the page finishes loading - once it's complete, you'll lose all values in it unless they've been stored somewhere. If you want to persist attributes through requests you need to either: 1. Have a hidden input in your form, such as `<input type="hidden" name="myhiddenvalue" value="<%= request.getParameter("value") %>" />`. This will then be available in the servlet as a request parameter. 2. Put it in the session (see `request.getSession()` - in a JSP this is available as simply `session`) I recommend using the Session as it's easier to manage.
The reply by Phil Sacre was correct; however, the session shouldn't be used just for the hell of it. You should only use it for values which *really* need to live for the lifetime of the session, such as a user login. It's common to see people overuse the session and run into more issues, especially when dealing with a collection or when users return to a page they previously visited only to find they have values still remaining from a previous visit. A smart program minimizes the scope of variables as much as possible; a bad one uses the session too much.
Using request.setAttribute in a JSP page
[ "", "java", "jakarta-ee", "setattribute", "request-object", "" ]
Consider this: ``` var query = from r in this._db.Recipes where r.RecipesID == recipeID select new { r.RecipesID, r.RecipesName }; ``` How would I get individual columns in my `query` object without using a for-loop? Basically: how do I translate `DataTable.Rows[0]["ColumnName"]` into LINQ syntax?
This is the way to go about it: ``` DataContext dc = new DataContext(); var recipe = (from r in dc.Recipes where r.RecipesID == 1 select r).FirstOrDefault(); if (recipe != null) { id = recipe.RecipesID; name = recipe.RecipesName; } ```
It's really unclear what you are looking for, as your two samples are compatible. As close as I can figure, what you want is: ``` var rows = query.ToList(); string name = rows[0].RecipesName; ```
Linq to SQL: DataTable.Rows[0]["ColumnName"] equivalent
[ "", "c#", "linq", "c#-3.0", "" ]
How would I reset the primary key counter on a sql table and update each row with a new primary key?
I would add another column to the table first, populate that with the new PK. Then I'd use update statements to update the new fk fields in all related tables. Then you can drop the old PK and old fk fields. **EDIT:** Yes, as Ian says you will have to drop and then recreate all foreign key constraints.
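The steps above can be sketched in generic SQL. Everything here is illustrative — the `parent`/`child` table names, the `old_id`/`new_id` columns, and the dense-renumbering rule are assumptions, and the constraint-handling DDL in the last step differs per DBMS:

```sql
-- 1. Add a column to hold the new key values
ALTER TABLE parent ADD new_id INT;

-- 2. Populate it (here: a dense renumbering in old-key order)
UPDATE parent
SET new_id = (SELECT COUNT(*) FROM parent p2 WHERE p2.old_id <= parent.old_id);

-- 3. Re-point every referencing table while both keys still exist
UPDATE child
SET parent_id = (SELECT p.new_id FROM parent p WHERE p.old_id = child.parent_id);

-- 4. Drop the FK and PK constraints, drop old_id, rename new_id to old_id,
--    then recreate the constraints (syntax here is DBMS-specific).
```

Doing step 3 before dropping anything is what keeps the parent/child rows relatable throughout the migration.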
Not sure which DBMS you're using but if it happens to be SQL Server: ``` SET IDENTITY_INSERT [MyTable] ON ``` allows you to update/insert the primary key column. Then when you are done updating the keys (you could use a [CURSOR](http://msdn.microsoft.com/en-us/library/ms180169.aspx) for this if the logic is complicated) ``` SET IDENTITY_INSERT [MyTable] OFF ``` Hope that helps!
renumber primary key
[ "", "sql", "primary-key", "" ]
I have been trying to read a picture saved in an Access DB as an OLE object into a PictureBox in a C# Windows application. The code that does this is presented below: ``` string connString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\Rajesh\SampleDB_2003.mdb;"; OleDbConnection oConn = new OleDbConnection(connString); oConn.Open(); string commandString = "select * from employee where id = " + id + ""; OleDbCommand oCmd = new OleDbCommand(commandString, oConn); OleDbDataReader oReader = oCmd.ExecuteReader(CommandBehavior.SequentialAccess); while (oReader.Read()) { txtID.Text = ((int)oReader.GetValue(0)).ToString(); txtName.Text = (string)oReader.GetValue(1); txtAge.Text = ((int)oReader.GetValue(2)).ToString(); txtType.Text = (string)oReader.GetValue(3); byte[] imageBytes = (byte[])oReader.GetValue(4); MemoryStream ms = new MemoryStream(); ms.Write(imageBytes, 0, imageBytes.Length); Bitmap bmp = new Bitmap(ms); pbPassport.Image = bmp; } ``` When I execute the above code, a 'Parameter is not valid' exception is thrown at the line: ``` Bitmap bmp = new Bitmap(ms) ``` From the exception message, it is clear that 'ms' is in a format that is not recognisable. Any suggestions for getting past this?
Your bytestream is corrupted somehow, because I tried your exact method but filled the byte array with PNG data from a file instead. I would suggest creating two streams: one from the database, and one from the file that was the source of the image in the database. Then compare them byte by byte. If there is even one byte of difference, the database image data is corrupt.
Unfortunately I have no good answer for you, but I can tell you that when I tried, I got the same results. Sometimes skipping the first 78 bytes of the byte array worked, sometimes it didn't. This is because the OLE Object datatype stores some kind of header in the field, so that Access knows what type of OLE Object it is. I could not find a reliable way to work out exactly where this header stopped and real data started, but I also gave up, so good luck :)
Read a picture from Access DB into PictureBox
[ "", "c#", "ms-access", "picturebox", "" ]
I have recently been working with Python using Komodo Edit and other simpler editors but now I am doing a project which is to be done in C# using VS 08. I would appreciate any hints on how to get productive on that platform as quickly as possible.
As far as becoming proficient with C# I would highly recommend [Programming C#](https://rads.stackoverflow.com/amzn/click/com/0596527438) and [C# in Depth](https://rads.stackoverflow.com/amzn/click/com/1933988363). For Visual Studio, start poking around in the IDE a lot, play around, get familiar with it. Start with simple projects and explore all the different aspects. Learn how to [optimize Visual Studio](https://stackoverflow.com/questions/8440/visual-studio-optimizations) and get familiar with some of the great [keyboard shortcuts / hidden features](https://stackoverflow.com/questions/98606/favorite-visual-studio-keyboard-shortcuts) of the IDE. Definitely do each of the following at least once: **Projects:** * Create a simple console application (e.g. hello world) * Create a class library (managed .dll) and use it from another application you create * Create a simple windows application * Create a simple asp.net web app **Debugging:** * Debug a command line app * Get familiar with: breakpoints, the locals and watch windows, step over, step into, step out of, continue, stop debugging * Create a command line app which uses a function in a class library. Store the dll and symbol file (.pdb) for the library but delete the source code, debug through the app as it goes into the library * Debug into a webservice * Learn how to use ILDasm and ILAsm **Command Line:** * Get familiar with the Visual Studio command line environment * Build using only the command line * Debug from the command line using devenv.exe /debugexe * Use ILDasm / ILAsm from the command line to disassemble a simple app into .IL, reassemble it into a differently named file, test to see that it still works **Testing:** * Create unit tests (right click in a method, select the option to create a test) * Learn how to: run all unit tests, run all unit tests under the debugger, rerun failed unit tests, see details on test failures, run a subset of unit tests * Learn how to collect code coverage statistics for your tests **Source Control:** * Learn how to interact with your source control system of choice while developing using VS **Refactoring et al:** * Become familiar with all of the built-in refactorings (especially rename and extract method) * Use "Go To Definition" * Use "Find All References" * Use "Find In Files" (ctrl-shift-F) **IDE & Keyboard Shortcuts:** * Learn how to use the designer well for web and winforms * Get very familiar with the Solution Explorer window * Experiment with different window layouts until you find one you're comfortable with, keep experimenting later to see if that's still the best choice * Learn the ins and outs of intellisense, use it to your advantage as much as possible * Learn the keyboard shortcut for everything you do
I would personally concentrate on learning the core parts of both C# and .NET first. For me, that would mean writing console apps (rather than Windows Forms) to experiment with the language and important aspects like IO. When you're happy with the foundations, move onto whichever "peripheral" technology (WinForms, WPF, ASP.NET, WCF etc) you need for your project. In terms of books, I can recommend both [C# 3.0 in a Nutshell](http://msmvps.com/blogs/jon_skeet/archive/2008/03/31/book-review-c-3-0-in-a-nutshell.aspx) and [Accelerated C# 2008](http://msmvps.com/blogs/jon_skeet/archive/2008/08/01/book-review-accelerated-c-2008-by-trey-nash.aspx). The links are to my reviews of the books. Both cover language + core libraries. I wouldn't worry too much about LINQ to start with - get comfortable with the rest of the language, particularly delegates and generics, before you tackle LINQ. At that point, I'd *thoroughly* recommend playing with LINQ to Objects for quite a while before you start using LINQ to SQL or the Entity Framework. (On the other hand, if you need to use XML at all, I'd go straight to LINQ to XML - it's a whole XML API, not just a LINQ provider. It's much nicer than the normal DOM API.)
Fastest way to get productive in VS 08 and C#
[ "", "c#", "visual-studio", "visual-studio-2008", "development-environment", "" ]
I have a problem with a logging setup in a Spring webapp deployed under Tomcat 6. The webapp uses the commons-logging API; at runtime, Log4j should be used. The log file is created but remains empty - no log entries occur. The setup is the following: WEB-INF/web.xml: ``` <context-param> <param-name>log4jConfigLocation</param-name> <param-value>/WEB-INF/log4j.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class> </listener> ``` WEB-INF/classes/commons-logging.properties: ``` org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger ``` WEB-INF/log4j.xml: ``` <log4j:configuration xmlns:log4j='http://jakarta.apache.org/log4j/'> <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender"> ... </appender> <appender name="FILE" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="${catalina.home}/logs/my.log"/> ... </appender> <logger name="my.package"> <level value="INFO"/> </logger> <root> <level value="ERROR"/> <appender-ref ref="CONSOLE"/> <appender-ref ref="FILE"/> </root> </log4j:configuration> ``` The file logs/my.log is created, but no logs appear. There are info logs on the Tomcat console, but not with the configured layout pattern. The commons-logging-1.1.1.jar and log4j-1.2.14.jar are included in WEB-INF/lib. Any idea what is wrong here?
There are numerous documented instances on the web warning people about the use of commons-logging. So much so, that [SLF4J](http://www.slf4j.org/) is gaining a lot of popularity. Considering that you are not interested in using Tomcat with Log4j, you should just use Log4j directly in your application. Particularly if there is no chance that you'll be switching logging frameworks in the future. It'll reduce the complexity of your application, and get rid of any class loader issues you are having with commons-logging. This should be a relatively easy search and replace in your text, as commons-logging and log4j both use a similar call structure for their logging methods.
Be especially careful that you have **not** placed log4j.jar in the Tomcat commons/lib directory. If the root classloader loads the log4j libraries, you'll run into conflicts and initialization problems when your webapps also try to use log4j. If you need to use log4j for common Tomcat logging, you'll need to be careful that your webapps do not attempt to load log4j as well. If you have multiple webapps on the server, then you'll need to make sure that each webapp's log initialization does not stomp on the initialization of the other webapps. Each webapp will need to use unique Logger IDs, which can be accomplished with unique package names. Using a common log4j in Tomcat with multiple webapps causes serious conflicts when you have shared libraries that all want to do logging, such as Hibernate or Spring. The next webapp that attempts to initialize log4j may close the logger of the previous one. It can be a mess.
Problem with Commons Logging / Log4j setup in spring webapp with tomcat 6
[ "", "java", "tomcat", "logging", "log4j", "" ]
I am writing an XML document in C#. I have something like this... ``` string output = "REAPP DUE NO OF M CASE NBR APPL NBR DATE GRPS M CASE NBR APPL NBR DATE GRPS _ _ _"; ``` and I do this... ``` objXmlTextWriter.WriteStartElement("Case"); objXmlTextWriter.WriteString(record); objXmlTextWriter.WriteEndElement(); ``` and the xml element turns out like this... ``` <Case>REAPP DUE NO OF REAPP DUE NO OF M CASE NBR APPL NBR DATE GRPS M CASE NBR APPL NBR DATE GRPS _ _ _ </Case> ``` It has basically converted white space with length greater than 1 to 1 character of white space. How do I prevent this?
Sorry, got confused before (deleted the wrong answer). But I can't reproduce what you seem to be seeing. Check this program... it does preserve spaces. ``` class Program { static void Main(string[] args) { XmlWriter w = XmlTextWriter.Create("./foo.xml"); w.WriteStartElement("foo"); w.WriteString(" THIS HAS VARYING SPACeS "); w.WriteEndElement(); w.Close(); StreamReader sr = new StreamReader("./foo.xml"); Console.WriteLine(sr.ReadToEnd()); Console.ReadKey(); } } ```
OK... This is only an IE display problem. Something to do with the font I guess. When I open the XML in notepad++ I see the white space is actually preserved properly.
Preserve white space in string with XmlTextWriter.WriteString
[ "", "c#", ".net", "xml", "whitespace", "" ]
Ruby on Rails has a lot of ways to generate JavaScript. Particularly when it comes to Ajax. Unfortunately, there are a few problems that I often see with the JavaScript that it generates. * Rails typically uses inline event handling. ``` <a onclick="somejavascript(); return false;" /> ``` This is generally frowned upon, as it's mixing behavior in with the XHTML. * The generated JavaScript also relies heavily on Prototype. Personally, I prefer jQuery. * In my experience, the attitude with a lot of Rails developers has been to write as much of the code in Ruby as possible. The final step is to generate some very procedural and repetitive JavaScript. Often, this code ends up being very inflexible and difficult to debug. So, my question is: how much JavaScript do you write manually for your projects and how much of it is generated server-side with Rails/Ruby? Or is there a happy medium where you get the benefits of both? With a subquestion: if you write a lot of the JavaScript manually, what techniques do you use to fit it into the MVC model?
If you prefer jQuery you can use the [jQuery on Rails Project](http://ennerchi.com/projects/jrails), a drop-in replacement for Prototype built on jQuery. Some of what Rails does with JavaScript generation is good and some is bad. In the bad instances, write it yourself and keep it unobtrusive. Any time you're uncomfortable with the JavaScript Rails generates, you can go ahead and write it yourself. And be sure to check out this great intro to [unobtrusive Javascript](http://www.railsenvy.com/2008/1/3/unobtrusive-javascript) that was done with Rails in mind.
I used to work in Symfony (a Rails clone) and at first, we used a lot of Javascript helpers. Client requirements led us (me!) to have to write a lot of code the helpers just couldn't generate. I eventually came to the conclusion that I prefer not to use helpers **at all.** Progressive enhancement is the way to go, in my opinion. Generate standards-friendly HTML that works without JavaScript enabled, then pile on the fancy functionality on document ready. By the way, I've also switched from Prototype to jQuery and have no desire to switch back! In my opinion, jQuery is better suited to progressive enhancement.
How much JavaScript do you let Rails generate?
[ "", "javascript", "ruby-on-rails", "ruby", "" ]
I have a Timestamp value that comes from my application. The user can be in any given local TimeZone. Since this date is used for a WebService that assumes the time given is always in GMT, I have a need to convert the user's parameter from say (EST) to (GMT). Here's the kicker: The user is oblivious to his TZ. He enters the creation date that he wants to send to the WS, so what I need is: **User enters:** 5/1/2008 6:12 PM (EST) **The parameter to the WS needs to be**: 5/1/2008 6:12 PM (GMT) I know TimeStamps are always supposed to be in GMT by default, but when sending the parameter, even though I created my Calendar from the TS (which is supposed to be in GMT), the hours are always off unless the user is in GMT. What am I missing? ``` Timestamp issuedDate = (Timestamp) getACPValue(inputs_, "issuedDate"); Calendar issueDate = convertTimestampToJavaCalendar(issuedDate); ... private static java.util.Calendar convertTimestampToJavaCalendar(Timestamp ts_) { java.util.Calendar cal = java.util.Calendar.getInstance( GMT_TIMEZONE, EN_US_LOCALE); cal.setTimeInMillis(ts_.getTime()); return cal; } ``` With the previous Code, this is what I get as a result (Short Format for easy reading): [May 1, 2008 11:12 PM]
Thank you all for responding. After a further investigation I got to the right answer. As mentioned by Skip Head, the TimeStamped I was getting from my application was being adjusted to the user's TimeZone. So if the User entered 6:12 PM (EST) I would get 2:12 PM (GMT). What I needed was a way to undo the conversion so that the time entered by the user is the time I sent to the WebServer request. Here's how I accomplished this: ``` // Get TimeZone of user TimeZone currentTimeZone = sc_.getTimeZone(); Calendar currentDt = new GregorianCalendar(currentTimeZone, EN_US_LOCALE); // Get the Offset from GMT taking DST into account int gmtOffset = currentTimeZone.getOffset( currentDt.get(Calendar.ERA), currentDt.get(Calendar.YEAR), currentDt.get(Calendar.MONTH), currentDt.get(Calendar.DAY_OF_MONTH), currentDt.get(Calendar.DAY_OF_WEEK), currentDt.get(Calendar.MILLISECOND)); // convert to hours gmtOffset = gmtOffset / (60*60*1000); System.out.println("Current User's TimeZone: " + currentTimeZone.getID()); System.out.println("Current Offset from GMT (in hrs):" + gmtOffset); // Get TS from User Input Timestamp issuedDate = (Timestamp) getACPValue(inputs_, "issuedDate"); System.out.println("TS from ACP: " + issuedDate); // Set TS into Calendar Calendar issueDate = convertTimestampToJavaCalendar(issuedDate); // Adjust for GMT (note the offset negation) issueDate.add(Calendar.HOUR_OF_DAY, -gmtOffset); System.out.println("Calendar Date converted from TS using GMT and US_EN Locale: " + DateFormat.getDateTimeInstance(DateFormat.SHORT, DateFormat.SHORT) .format(issueDate.getTime())); ``` The code's output is: (User entered 5/1/2008 6:12PM (EST) Current User's TimeZone: EST Current Offset from GMT (in hrs):-4 (Normally -5, except is DST adjusted) TS from ACP: 2008-05-01 14:12:00.0 Calendar Date converted from TS using GMT and US\_EN Locale: 5/1/08 6:12 PM (GMT)
``` public static Calendar convertToGmt(Calendar cal) { Date date = cal.getTime(); TimeZone tz = cal.getTimeZone(); log.debug("input calendar has date [" + date + "]"); //Returns the number of milliseconds since January 1, 1970, 00:00:00 GMT long msFromEpochGmt = date.getTime(); //gives you the current offset in ms from GMT at the current date int offsetFromUTC = tz.getOffset(msFromEpochGmt); log.debug("offset is " + offsetFromUTC); //create a new calendar in GMT timezone, set to this date and add the offset Calendar gmtCal = Calendar.getInstance(TimeZone.getTimeZone("GMT")); gmtCal.setTime(date); gmtCal.add(Calendar.MILLISECOND, offsetFromUTC); log.debug("Created GMT cal with date [" + gmtCal.getTime() + "]"); return gmtCal; } ``` Here's the output if I pass the current time ("12:09:05 EDT" from `Calendar.getInstance()`) in: > DEBUG - input calendar has date [Thu Oct 23 12:09:05 EDT 2008] > DEBUG - offset is -14400000 > DEBUG - Created GMT cal with date [Thu Oct 23 08:09:05 EDT 2008] 12:09:05 GMT is 8:09:05 EDT. The confusing part here is that `Calendar.getTime()` returns you a `Date` in your current timezone, and also that there is no method to modify the timezone of a calendar and have the underlying date rolled also. Depending on what type of parameter your web service takes, your may just want to have the WS deal in terms of milliseconds from epoch.
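A complementary pure-JDK sketch (the class name and the example instant are mine, not from the answers above): since a `Date`/`Timestamp` holds only an epoch offset with no zone of its own, you can leave the instant untouched and let a formatter apply whichever zone the web service expects.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class GmtFormatDemo {

    // Render the same instant in a given time zone. A Date (or Timestamp)
    // holds only milliseconds since the epoch; the zone is applied by the
    // formatter, not stored in the value itself.
    public static String format(Date instant, TimeZone zone) {
        SimpleDateFormat fmt = new SimpleDateFormat("M/d/yyyy h:mm a", Locale.US);
        fmt.setTimeZone(zone);
        return fmt.format(instant);
    }

    public static void main(String[] args) {
        // 2008-05-01 22:12:00 GMT, i.e. 6:12 PM in US Eastern (EDT, GMT-4)
        Date instant = new Date(1209679920000L);
        System.out.println("GMT: " + format(instant, TimeZone.getTimeZone("GMT")));
        System.out.println("Eastern: " + format(instant, TimeZone.getTimeZone("America/New_York")));
    }
}
```

Formatting with a GMT-zoned formatter is often all that's needed when the remote side expects GMT wall-clock strings.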
How to handle calendar TimeZones using Java?
[ "", "java", "calendar", "timezone", "" ]
What is the best way to copy the contents of one stream to another? Is there a standard utility method for this?
**From .NET 4.5 on, there is the [`Stream.CopyToAsync` method](http://msdn.microsoft.com/en-us/library/system.io.stream.copytoasync.aspx)** ``` input.CopyToAsync(output); ``` This will return a [`Task`](http://msdn.microsoft.com/en-us/library/system.threading.tasks.task.aspx) that can be continued on when completed, like so: ``` await input.CopyToAsync(output) // Code from here on will be run in a continuation. ``` Note that depending on where the call to `CopyToAsync` is made, the code that follows may or may not continue on the same thread that called it. The [`SynchronizationContext`](http://msdn.microsoft.com/en-us/library/system.threading.synchronizationcontext.aspx) that was captured when calling [`await`](http://msdn.microsoft.com/en-us/library/vstudio/hh191443.aspx) will determine what thread the continuation will be executed on. Additionally, this call (and this is an implementation detail subject to change) still sequences reads and writes (it just doesn't waste a threads blocking on I/O completion). **From .NET 4.0 on, there's is the [`Stream.CopyTo` method](http://msdn.microsoft.com/en-us/library/system.io.stream.copyto.aspx)** ``` input.CopyTo(output); ``` **For .NET 3.5 and before** There isn't anything baked into the framework to assist with this; you have to copy the content manually, like so: ``` public static void CopyStream(Stream input, Stream output) { byte[] buffer = new byte[32768]; int read; while ((read = input.Read(buffer, 0, buffer.Length)) > 0) { output.Write (buffer, 0, read); } } ``` Note 1: This method will allow you to report on progress (x bytes read so far ...) Note 2: Why use a fixed buffer size and not `input.Length`? Because that Length may not be available! From the [docs](https://learn.microsoft.com/en-us/dotnet/api/system.io.stream.canseek): > If a class derived from Stream does not support seeking, calls to Length, SetLength, Position, and Seek throw a NotSupportedException.
`MemoryStream` has `.WriteTo(outstream);` and .NET 4.0 has `.CopyTo` on normal stream object. .NET 4.0: ``` instream.CopyTo(outstream); ```
How do I copy the contents of one stream to another?
[ "", "c#", "stream", "copying", "" ]
Here's the scenario: You have a Windows server that users remotely connect to via RDP. You want your program (which runs as a service) to know who is currently connected. This may or may not include an interactive console session. Please note that this is the **not** the same as just retrieving the current interactive user. I'm guessing that there is some sort of API access to Terminal Services to get this info?
Here's my take on the issue: ``` using System; using System.Collections.Generic; using System.Runtime.InteropServices; namespace EnumerateRDUsers { class Program { [DllImport("wtsapi32.dll")] static extern IntPtr WTSOpenServer([MarshalAs(UnmanagedType.LPStr)] string pServerName); [DllImport("wtsapi32.dll")] static extern void WTSCloseServer(IntPtr hServer); [DllImport("wtsapi32.dll")] static extern Int32 WTSEnumerateSessions( IntPtr hServer, [MarshalAs(UnmanagedType.U4)] Int32 Reserved, [MarshalAs(UnmanagedType.U4)] Int32 Version, ref IntPtr ppSessionInfo, [MarshalAs(UnmanagedType.U4)] ref Int32 pCount); [DllImport("wtsapi32.dll")] static extern void WTSFreeMemory(IntPtr pMemory); [DllImport("wtsapi32.dll")] static extern bool WTSQuerySessionInformation( IntPtr hServer, int sessionId, WTS_INFO_CLASS wtsInfoClass, out IntPtr ppBuffer, out uint pBytesReturned); [StructLayout(LayoutKind.Sequential)] private struct WTS_SESSION_INFO { public Int32 SessionID; [MarshalAs(UnmanagedType.LPStr)] public string pWinStationName; public WTS_CONNECTSTATE_CLASS State; } public enum WTS_INFO_CLASS { WTSInitialProgram, WTSApplicationName, WTSWorkingDirectory, WTSOEMId, WTSSessionId, WTSUserName, WTSWinStationName, WTSDomainName, WTSConnectState, WTSClientBuildNumber, WTSClientName, WTSClientDirectory, WTSClientProductId, WTSClientHardwareId, WTSClientAddress, WTSClientDisplay, WTSClientProtocolType } public enum WTS_CONNECTSTATE_CLASS { WTSActive, WTSConnected, WTSConnectQuery, WTSShadow, WTSDisconnected, WTSIdle, WTSListen, WTSReset, WTSDown, WTSInit } static void Main(string[] args) { ListUsers(Environment.MachineName); } public static void ListUsers(string serverName) { IntPtr serverHandle = IntPtr.Zero; List<string> resultList = new List<string>(); serverHandle = WTSOpenServer(serverName); try { IntPtr sessionInfoPtr = IntPtr.Zero; IntPtr userPtr = IntPtr.Zero; IntPtr domainPtr = IntPtr.Zero; Int32 sessionCount = 0; Int32 retVal = WTSEnumerateSessions(serverHandle, 0, 1, ref sessionInfoPtr, ref sessionCount); Int32 dataSize = Marshal.SizeOf(typeof(WTS_SESSION_INFO)); IntPtr currentSession = sessionInfoPtr; uint bytes = 0; if (retVal != 0) { for (int i = 0; i < sessionCount; i++) { WTS_SESSION_INFO si = (WTS_SESSION_INFO)Marshal.PtrToStructure((System.IntPtr)currentSession, typeof(WTS_SESSION_INFO)); currentSession += dataSize; WTSQuerySessionInformation(serverHandle, si.SessionID, WTS_INFO_CLASS.WTSUserName, out userPtr, out bytes); WTSQuerySessionInformation(serverHandle, si.SessionID, WTS_INFO_CLASS.WTSDomainName, out domainPtr, out bytes); Console.WriteLine("Domain and User: " + Marshal.PtrToStringAnsi(domainPtr) + "\\" + Marshal.PtrToStringAnsi(userPtr)); WTSFreeMemory(userPtr); WTSFreeMemory(domainPtr); } WTSFreeMemory(sessionInfoPtr); } } finally { WTSCloseServer(serverHandle); } } } } ```
Another option, if you don't want to deal with the P/Invokes yourself, would be to use the [Cassia](https://github.com/danports/cassia) library: ``` using System; using System.Security.Principal; using Cassia; namespace CassiaSample { public static class Program { public static void Main(string[] args) { ITerminalServicesManager manager = new TerminalServicesManager(); using (ITerminalServer server = manager.GetRemoteServer("your-server-name")) { server.Open(); foreach (ITerminalServicesSession session in server.GetSessions()) { NTAccount account = session.UserAccount; if (account != null) { Console.WriteLine(account); } } } } } } ```
How do you retrieve a list of logged-in/connected users in .NET?
[ "", "c#", ".net", "windows-services", "authentication", "" ]
I am relatively new to matchers. I am toying around with [hamcrest](http://code.google.com/p/hamcrest/) in combination with JUnit and I kinda like it. Is there a way, to state that one of multiple choices is correct? Something like ``` assertThat( result, is( either( 1, or( 2, or( 3 ) ) ) ) ) //does not work in hamcrest ``` The method I am testing returns one element of a collection. The list may contain multiple candidates. My current implementation returns the first hit, but that is not a requirement. I would like my testcase to succeed, if any of the possible candidates is returned. How would you express this in Java? (I am open to hamcrest-alternatives)
``` assertThat(result, anyOf(equalTo(1), equalTo(2), equalTo(3))) ``` From [Hamcrest tutorial](http://hamcrest.org/JavaHamcrest/tutorial#logical): > `anyOf` - matches if any matchers match, short circuits (like Java ||) See also [Javadoc](http://hamcrest.org/JavaHamcrest/javadoc/2.2/org/hamcrest/core/AnyOf.html). Moreover, you could write your own Matcher, which is quite easy to do.
marcos is right, but you have a couple other options as well. First of all, there *is* an either/or: ``` assertThat(result, either(is(1)).or(is(2))); ``` but if you have more than two items it would probably get unwieldy. Plus, the typechecker gets weird on stuff like that sometimes. For your case, you could do: ``` assertThat(result, isOneOf(1, 2, 3)) ``` or if you already have your options in an array/Collection: ``` assertThat(result, isIn(theCollection)) ``` See also [Javadoc](http://hamcrest.org/JavaHamcrest/javadoc/1.3/org/hamcrest/Matchers.html).
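As an aside, the short-circuit semantics of `anyOf` can be illustrated with a small hand-rolled sketch in plain Java. This is only an illustration of the behavior, not Hamcrest's actual implementation, and the class and method names here are made up:

```java
import java.util.function.Predicate;

public class AnyOfSketch {

    // Simplified stand-in for Hamcrest's anyOf: the combined matcher succeeds
    // if any of the given matchers succeeds, stopping at the first success
    // (like Java's || operator).
    @SafeVarargs
    public static <T> Predicate<T> anyOf(Predicate<T>... matchers) {
        return value -> {
            for (Predicate<T> matcher : matchers) {
                if (matcher.test(value)) {
                    return true; // short circuit: later matchers never run
                }
            }
            return false;
        };
    }

    public static void main(String[] args) {
        Predicate<Integer> oneTwoOrThree =
                anyOf(v -> v.equals(1), v -> v.equals(2), v -> v.equals(3));
        System.out.println(oneTwoOrThree.test(2)); // true
        System.out.println(oneTwoOrThree.test(7)); // false
    }
}
```

The real Hamcrest combinators additionally carry a description for failure messages, which is the main reason to prefer them over raw boolean checks in tests.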
Multiple correct results with Hamcrest (is there an or-matcher?)
[ "", "java", "junit", "hamcrest", "matcher", "" ]
I have a table that was imported as all UPPER CASE and I would like to turn it into Proper Case. What script have any of you used to complete this?
Here's a UDF that will do the trick... ``` create function ProperCase(@Text as varchar(8000)) returns varchar(8000) as begin declare @Reset bit; declare @Ret varchar(8000); declare @i int; declare @c char(1); if @Text is null return null; select @Reset = 1, @i = 1, @Ret = ''; while (@i <= len(@Text)) select @c = substring(@Text, @i, 1), @Ret = @Ret + case when @Reset = 1 then UPPER(@c) else LOWER(@c) end, @Reset = case when @c like '[a-zA-Z]' then 0 else 1 end, @i = @i + 1 return @Ret end ``` You will still have to use it to update your data though.
This function: * "Proper Cases" all "UPPER CASE" words that are delimited by white space * leaves "lower case words" alone * works properly even for non-English alphabets * is portable in that it does not use fancy features of recent SQL server versions * can be easily changed to use NCHAR and NVARCHAR for unicode support,as well as any parameter length you see fit * white space definition can be configured ``` CREATE FUNCTION ToProperCase(@string VARCHAR(255)) RETURNS VARCHAR(255) AS BEGIN DECLARE @i INT -- index DECLARE @l INT -- input length DECLARE @c NCHAR(1) -- current char DECLARE @f INT -- first letter flag (1/0) DECLARE @o VARCHAR(255) -- output string DECLARE @w VARCHAR(10) -- characters considered as white space SET @w = '[' + CHAR(13) + CHAR(10) + CHAR(9) + CHAR(160) + ' ' + ']' SET @i = 1 SET @l = LEN(@string) SET @f = 1 SET @o = '' WHILE @i <= @l BEGIN SET @c = SUBSTRING(@string, @i, 1) IF @f = 1 BEGIN SET @o = @o + @c SET @f = 0 END ELSE BEGIN SET @o = @o + LOWER(@c) END IF @c LIKE @w SET @f = 1 SET @i = @i + 1 END RETURN @o END ``` Result: ``` dbo.ToProperCase('ALL UPPER CASE and SOME lower ÄÄ ÖÖ ÜÜ ÉÉ ØØ ĈĈ ÆÆ') ----------------------------------------------------------------- All Upper Case and Some lower Ää Öö Üü Éé Øø Cc Ææ ```
SQL Server: Make all UPPER case to Proper Case/Title Case
[ "", "sql", "sql-server", "uppercase", "title-case", "" ]
I have been tasked with optimizing some SQL queries at work. Everything I have found points to using Explain Plan to identify problem areas. The problem is I cannot find out exactly what the explain plan is telling me. You get Cost, Cardinality, and Bytes. What do these indicate, and how should I be using them as a guide? Are low numbers better? High numbers better? Any input would be greatly appreciated. Or if you have a better way to go about optimizing a query, I would be interested.
You get more than that actually depending on what you are doing. Check out this [explain plan](http://www.oracle-base.com/articles/8i/ExplainPlanUsage.php) page. I'm assuming a little bit here that you are using Oracle and know how to run the script to display the plan output. What may be more important to start with is looking at the left hand side for the use of a particular index or not and how that index is being utilized. You should see things like "(Full)", "(By Index Rowid)", etc if you are doing joins. The cost would be the next thing to look at with lower costs being better and you will notice that if you are doing a join that is not using an index you may get a very large cost. You may also want to read details about the [explain plan columns](http://www.cs.umbc.edu/help/oracle8/server.815/a67775/ch13_exp.htm).
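For reference, generating and displaying a plan in Oracle is typically done as below. The `emp`/`dept` query is just a placeholder, and `DBMS_XPLAN` assumes a reasonably recent Oracle version with the standard PLAN_TABLE available:

```sql
-- Parse the statement and store its plan in PLAN_TABLE (nothing is executed)
EXPLAIN PLAN FOR
SELECT e.ename, d.dname
FROM emp e
JOIN dept d ON d.deptno = e.deptno
WHERE e.sal > 1000;

-- Pretty-print the most recent plan, including the Cost/Cardinality/Bytes columns
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The `DBMS_XPLAN.DISPLAY` output also shows, per step, whether an index was used and how (e.g. full scan vs. access by rowid), which is what the answer above suggests checking first.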
I also assume you are using Oracle. And I also recommend that you check out the explain plan web page, for starters. There is a lot to optimization, but it can be learned. A few tips follow: First, when somebody tasks you to optimize, they are almost always looking for acceptable performance rather than ultimate performance. If you can reduce a query's running time from 3 minutes down to 3 seconds, don't sweat reducing it down to 2 seconds, until you are asked to. Second, do a quick check to make sure the queries you are optimizing are logically correct. It sounds absurd, but I can't tell you the number of times I've been asked for advice on a slow running query, only to find out that it was occasionally giving wrong answers! And as it turns out, debugging the query often turned out to speed it up as well. In particular, look for the phrase "Cartesian Join" in the explain plan. If you see it there, the chances are awfully good that you've found an unintentional cartesian join. The usual pattern for an unintentional cartesian join is that the FROM clause lists tables separated by comma, and the join conditions are in the WHERE clause. Except that one of the join conditions is missing, so that Oracle has no choice but to perform a cartesian join. With large tables, this is a performance disaster. It is possible to see a Cartesian Join in the explain plan where the query is logically correct, but I associate this with older versions of Oracle. Also look for the unused compound index. If the first column of a compound index is not used in the query, Oracle may use the index inefficiently, or not at all. Let me give an example: The query was: ``` select * from customers where State = @State and ZipCode = @ZipCode ``` (The DBMS was not Oracle, so the syntax was different, and I've forgotten the original syntax). A quick peek at the indexes revealed an index on Customers with the columns (Country, State, ZipCode) in that order. I changed the query to read ``` select * from customers where Country = @Country and State = @State and ZipCode = @ZipCode ``` and now it ran in about 6 seconds instead of about 6 minutes, because the optimizer was able to use the index to good advantage. I asked the application programmers why they had omitted the country from the criteria, and this was their answer: they knew that all the addresses had country equal to 'USA' so they figured they could speed up the query by leaving that criterion out! Unfortunately, optimizing database retrieval is not really the same as shaving microseconds off of computing time. It involves understanding the database design, especially indexes, and at least an overview of how the optimizer does its job. You generally get better results from the optimizer when you learn to collaborate with it instead of trying to outsmart it. Good luck coming up to speed at optimization!
How to use Explain Plan to optimize queries?
[ "", "sql", "oracle", "optimization", "" ]
What is the most reliable and secure way to determine what page either sent, or called (via AJAX), the current page? I don't want to use the `$_SERVER['HTTP_REFERER']`, because of the (lack of) reliability, and I need the page being called to only come from requests originating on my site. Edit: I am looking to verify that a script that performs a series of actions is being called from a page on my website.
The REFERER is sent by the client's browser as part of the HTTP protocol, and is therefore unreliable indeed. It might not be there, it might be forged, you just can't trust it if it's for security reasons. If you want to verify if a request is coming from your site, well you can't, but you can verify the user has been to your site and/or is authenticated. Cookies are sent in AJAX requests so you can rely on that.
What I have found best is to generate a CSRF token and save it in the session for links where you need to verify the referrer. So if you are generating a FB callback, it would look something like this:

```
$token = uniqid(mt_rand(), TRUE);
$_SESSION['token'] = $token;
$url = "http://example.com/index.php?token={$token}";
```

Then the index.php will look like this:

```
if(empty($_GET['token']) || $_GET['token'] !== $_SESSION['token']) {
    show_404();
}

//Continue with the rest of the code
```

I do know of secure sites that do the equivalent of this for all their secure pages.
Determining Referer in PHP
[ "", "php", "http-referer", "" ]
I have a Swing application that includes radio buttons on a form. I have the `ButtonGroup`; however, looking at the available methods, I can't seem to get the name of the selected `JRadioButton`. Here's what I can tell so far:

* From `ButtonGroup`, I can perform a `getSelection()` to return the `ButtonModel`. From there, I can perform a `getActionCommand`, but that doesn't seem to always work. I tried different tests and got unpredictable results.
* Also from `ButtonGroup`, I can get an Enumeration from `getElements()`. However, then I would have to loop through each button just to check and see if it is the one selected.

Is there an easier way to find out which button has been selected? I'm programming this in Java 1.3.1 and Swing.
I would just loop through your `JRadioButtons` and call `isSelected()`. If you really want to go from the `ButtonGroup` you can only get to the models. You could match the models to the buttons, but then if you have access to the buttons, why not use them directly?
I had a similar problem and solved it with this:

```
import java.util.Enumeration;

import javax.swing.AbstractButton;
import javax.swing.ButtonGroup;

public class GroupButtonUtils {

    public String getSelectedButtonText(ButtonGroup buttonGroup) {
        for (Enumeration<AbstractButton> buttons = buttonGroup.getElements(); buttons.hasMoreElements();) {
            AbstractButton button = buttons.nextElement();

            if (button.isSelected()) {
                return button.getText();
            }
        }

        return null;
    }
}
```

It returns the text of the selected button.
How do I get which JRadioButton is selected from a ButtonGroup
[ "", "java", "swing", "" ]
I'm writing a query to summarize some data. I have a flag in the table that is basically boolean, so I need some sums and counts based on one value of it, and then the same thing for the other value, like so: ``` select location ,count(*) ,sum(duration) from my.table where type = 'X' and location = @location and date(some_tstamp) = @date group by location ``` And then the same for another value of the type column. If I join this table twice, how do I still group so I can only get aggregation for each table, i.e. count(a.`*`) instead of count(\*)... Would it be better to write two separate queries? **EDIT** Thanks everybody, but that's not what I meant. I need to get a summary where type = 'X' and a summary where type = 'Y' separately...let me post a better example. What I meant was a query like this: ``` select a.location ,count(a.*) ,sum(a.duration) ,count(b.*) ,sum(b.duration) from my.table a, my.table b where a.type = 'X' and a.location = @location and date(a.some_tstamp) = @date and b.location = @location and date(b.some_tstamp) = @date and b.type = 'Y' group by a.location ``` What do I need to group by? Also, DB2 doesn't like count(a.`*`), it's a syntax error.
```
select location
       ,Sum(case when type = 'X' then 1 else 0 end) as xCount
       ,Sum(case when type = 'Y' then 1 else 0 end) as YCount
       ,Sum(case when type = 'X' then duration else 0 end) as xCountDuration
       ,Sum(case when type = 'Y' then duration else 0 end) as YCountDuration
from my.table
where location = @location
and date(some_tstamp) = @date
group by location
```

This should work in SQL Server. I guess DB2 should have something similar.

Edit: Add a where condition to limit the records to type = 'X' or type = 'Y', if "type" can have values other than X and Y.
Your example with the join doesn't make a lot of sense. You're doing a Cartesian product between A and B. Is this really what you want? The following will find count(\*) and sum(duration) for each pair that satisfies the WHERE clause. Based on your description, this sounds like what you're looking for: ``` select type ,location ,count(*) ,sum(duration) from my.table where type IN ('X', 'Y') and location = @location and date(some_tstamp) = @date group by type, location ```
Group by when joining the same table twice
[ "", "sql", "db2", "" ]
How do I get the [ASCII](https://en.wikipedia.org/wiki/ASCII) value of a character as an `int` in Python?
From [here](http://mail.python.org/pipermail/python-win32/2005-April/003100.html): > The function **`ord()`** gets the int value > of the char. And in case you want to > convert back after playing with the > number, function **`chr()`** does the trick. ``` >>> ord('a') 97 >>> chr(97) 'a' >>> chr(ord('a') + 3) 'd' >>> ``` In Python 2, there was also the `unichr` function, returning the [Unicode](http://en.wikipedia.org/wiki/Unicode) character whose ordinal is the `unichr` argument: ``` >>> unichr(97) u'a' >>> unichr(1234) u'\u04d2' ``` In Python 3 you can use `chr` instead of `unichr`. --- [ord() - Python 3.6.5rc1 documentation](https://docs.python.org/3/library/functions.html#ord) [ord() - Python 2.7.14 documentation](https://docs.python.org/2/library/functions.html#ord)
Note that `ord()` doesn't give you the ASCII value per se; it gives you the numeric value of the character in whatever encoding it's in. Therefore the result of `ord('ä')` can be 228 if you're using Latin-1, or it can raise a `TypeError` if you're using UTF-8. It can even return the Unicode codepoint instead if you pass it a unicode: ``` >>> ord(u'あ') 12354 ```
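To make that concrete, here is a small Python 3 sketch (in Python 3 every `str` is Unicode, so `ord()` always returns the code point, and you encode explicitly to see the byte values under a given encoding):

```python
ch = "ä"  # U+00E4, LATIN SMALL LETTER A WITH DIAERESIS

# ord() on a str returns the Unicode code point, independent of encoding.
codepoint = ord(ch)

# Encoding first shows the numeric byte values for a particular encoding.
latin1_bytes = list(ch.encode("latin-1"))  # one byte
utf8_bytes = list(ch.encode("utf-8"))      # two bytes

print(codepoint)     # 228
print(latin1_bytes)  # [228]
print(utf8_bytes)    # [195, 164]
```

So "the ASCII value" is only well-defined for characters in the ASCII range; outside it, the answer depends on which encoding's bytes you are looking at.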
How to get the ASCII value of a character
[ "", "python", "ascii", "" ]
I want to do some basic filtering on a file. Read it, do processing, write it back. I'm not looking for "golfing", but want the simplest and most elegant method to achieve this. I came up with:

```
from __future__ import with_statement

filename = "..." # or sys.argv...

with open(filename) as f:
    new_txt = # ...some translation of f.read()

open(filename, 'w').write(new_txt)
```

The `with` statement makes things shorter since I don't have to explicitly open and close the file.

Any other ideas?
Actually an easier way using fileinput is to use the inplace parameter:

```
import fileinput
for line in fileinput.input(filenameToProcess, inplace=1):
    process(line)
```

If you use the inplace parameter it will redirect stdout to your file, so that if you do a print it will write back to your file. This example adds line numbers to your file:

```
import fileinput
for line in fileinput.input("b.txt", inplace=1):
    print "%d: %s" % (fileinput.lineno(), line),
```
I would go for elegance a different way: implement your file-reading and filtering operations as generators. You'll write more lines of code, but it will be more flexible, maintainable, and performant code. See David M. Beazley's [Generator Tricks for Systems Programmers](http://www.dabeaz.com/generators/), which is essential reading for anyone writing this kind of code.
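A minimal sketch of that generator-based approach (Python 3 here; the `transform` step below is a placeholder — substitute your own filtering logic — and it writes to a separate output path for clarity rather than rewriting the file in place):

```python
def read_lines(path):
    # Generator: yields lines lazily instead of reading the whole file.
    with open(path) as f:
        for line in f:
            yield line

def transform(lines):
    # Placeholder filter/translation step; swap in your own processing.
    for line in lines:
        yield line.upper()

def write_lines(lines, path):
    # Consume the pipeline and write the result back out.
    with open(path, "w") as f:
        f.writelines(lines)

# Compose the pipeline; nothing is read until write_lines() consumes it:
# write_lines(transform(read_lines("in.txt")), "out.txt")
```

Each stage stays independently testable, and because the stages are lazy, the file is streamed line by line rather than slurped into memory.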
Open file, read it, process, and write back - shortest method in Python
[ "", "python", "coding-style", "" ]
Which algorithm does the JavaScript `Array#sort()` function use? I understand that it can take all manner of arguments and functions to perform different kinds of sorts, I'm simply interested in which algorithm the vanilla sort uses.
I've just had a look at the WebKit (Chrome, Safari …) [source](http://trac.webkit.org/browser/trunk/Source/JavaScriptCore/runtime/ArrayPrototype.cpp?rev=138530#L647). Depending on the type of array, different sort methods are used: [Numeric arrays](http://trac.webkit.org/browser/trunk/Source/JavaScriptCore/runtime/JSArray.cpp?rev=138530#L972) (or arrays of primitive type) are sorted using the C++ standard library function [`std::qsort`](http://en.cppreference.com/w/cpp/algorithm/qsort) which implements some variation of quicksort (usually [introsort](https://en.wikipedia.org/wiki/Introsort)). [Contiguous arrays of non-numeric type](http://trac.webkit.org/browser/trunk/Source/JavaScriptCore/runtime/JSArray.cpp?rev=138530#L1065) are stringified and sorted using mergesort, if available (to obtain a stable sorting) or `qsort` if no merge sort is available. For other types (non-contiguous arrays and presumably for associative arrays) WebKit uses either [selection sort](http://en.wikipedia.org/wiki/Selection_Sort) (which they call [“min” sort](http://trac.webkit.org/browser/trunk/Source/JavaScriptCore/runtime/ArrayPrototype.cpp?rev=138530#L668)) or, in some cases, it sorts via an AVL tree. Unfortunately, the documentation here is rather vague so you’d have to trace the code paths to actually see for which types which sort method is used. And then there are gems like [this comment](http://trac.webkit.org/browser/trunk/Source/JavaScriptCore/runtime/JSArray.cpp?rev=138530#L1124): ``` // FIXME: Since we sort by string value, a fast algorithm might be to use a // radix sort. That would be O(N) rather than O(N log N). ``` – Let’s just hope that whoever actually “fixes” this has a better understanding of asymptotic runtime than the writer of this comment, and realises that [radix sort has a slightly more complex runtime description](https://stackoverflow.com/a/474040/1968) than simply O(N). (Thanks to phsource for pointing out the error in the original answer.)
If you look at this bug [224128](https://bugzilla.mozilla.org/show_bug.cgi?id=224128), it appears that MergeSort is being used by Mozilla.
Javascript Array.sort implementation?
[ "", "javascript", "algorithm", "arrays", "sorting", "" ]
I have a search form on each of my pages. If I use form helper, it defaults to `$_POST`. I'd like the search term to show up in the URI: ``` http://example.com/search/KEYWORD ``` I've been on Google for about an hour, but to no avail. I've only found articles on how `$_GET` is basically disabled, because of the native URI convention. I can't be the first person to want this kind of functionality, am I? Thanks in advance!
There's a better fix if you're dealing with people without JS enabled.

**View:**

```
<?php echo form_open('ad/pre_search');?>
<input type="text" name="keyword" />
</form>
```

**Controller:**

```
<?php
function pre_search()
{
    redirect('ad/search/' . $this->input->post('keyword'));
}

function search()
{
    // do stuff
}
?>
```

I have used this many times before.
As far as I know, there is no method of accomplishing this with a simple POST. However, you can access the form via Javascript and update the destination. For example: ``` <form id="myform" onsubmit="return changeurl();" method="POST"> <input id="keyword"> </form> <script> function changeurl() { var form = document.getElementById("myform"); var keyword = document.getElementById("keyword"); form.action = "http://mysite.com/search/"+escape(keyword.value); return true; } </script> ```
Simple Search: Passing Form Variable to URI Using CodeIgniter
[ "", "php", "codeigniter", "" ]
I know plenty about the different ways of parsing text for information. For parsing integers, for example, what kind of performance can be expected? I am wondering if anyone knows of any good stats on this. I am looking for some real numbers from someone who has tested this. Which of these offers the best performance in which situations?

```
Parse(...) // Crash if the case is extremely rare .0001%

If (SomethingIsValid) // Check the value before parsing
    Parse(...)

TryParse(...) // Using TryParse

try
{
   Parse(...)
}
catch
{
   // Catch any thrown exceptions
}
```
Always use **T.TryParse(string str, out T value)**. Throwing exceptions is expensive and should be avoided if you can handle the situation *a priori*. Using a try-catch block to "save" on performance (because your invalid data rate is low) is an abuse of exception handling at the expense of maintainability and good coding practices. Follow sound software engineering development practices, write your test cases, run your application, THEN benchmark and optimize.

> "We should forget about small efficiencies, say about 97% of the time: **premature optimization is the root of all evil**. Yet we should not pass up our opportunities in that critical 3%" -Donald Knuth

Therefore you assign, arbitrarily like in carbon credits, that the performance of try-catch is *worse* and that the performance of TryParse is *better*. Only after we've run our application and determined that we have some sort of slowdown w.r.t. string parsing would we even consider using anything other than TryParse.

*(edit: since it appears the questioner wanted timing data to go with good advice, here is the timing data requested)*

Times for various failure rates on 10,000 inputs from the user (for the unbelievers):

```
Failure Rate    Try-Catch           TryParse           Slowdown
  0%            00:00:00.0131758    00:00:00.0120421     0.1
 10%            00:00:00.1540251    00:00:00.0087699    16.6
 20%            00:00:00.2833266    00:00:00.0105229    25.9
 30%            00:00:00.4462866    00:00:00.0091487    47.8
 40%            00:00:00.6951060    00:00:00.0108980    62.8
 50%            00:00:00.7567745    00:00:00.0087065    85.9
 60%            00:00:00.7090449    00:00:00.0083365    84.1
 70%            00:00:00.8179365    00:00:00.0088809    91.1
 80%            00:00:00.9468898    00:00:00.0088562   105.9
 90%            00:00:01.0411393    00:00:00.0081040   127.5
100%            00:00:01.1488157    00:00:00.0078877   144.6
```

The timing code:

```
/// <param name="errorRate">Rate of errors in user input</param>
/// <returns>Total time taken</returns>
public static TimeSpan TimeTryCatch(double errorRate, int seed, int count)
{
  Stopwatch stopwatch = new Stopwatch();
  Random random = new Random(seed);
  string bad_prefix = @"X";
stopwatch.Start(); for(int ii = 0; ii < count; ++ii) { string input = random.Next().ToString(); if (random.NextDouble() < errorRate) { input = bad_prefix + input; } int value = 0; try { value = Int32.Parse(input); } catch(FormatException) { value = -1; // we would do something here with a logger perhaps } } stopwatch.Stop(); return stopwatch.Elapsed; } /// <param name="errorRate">Rate of errors in user input</param> /// <returns>Total time taken</returns> public static TimeSpan TimeTryParse(double errorRate, int seed, int count) { Stopwatch stopwatch = new Stopwatch(); Random random = new Random(seed); string bad_prefix = @"X"; stopwatch.Start(); for(int ii = 0; ii < count; ++ii) { string input = random.Next().ToString(); if (random.NextDouble() < errorRate) { input = bad_prefix + input; } int value = 0; if (!Int32.TryParse(input, out value)) { value = -1; // we would do something here with a logger perhaps } } stopwatch.Stop(); return stopwatch.Elapsed; } public static void TimeStringParse() { double errorRate = 0.1; // 10% of the time our users mess up int count = 10000; // 10000 entries by a user TimeSpan trycatch = TimeTryCatch(errorRate, 1, count); TimeSpan tryparse = TimeTryParse(errorRate, 1, count); Console.WriteLine("trycatch: {0}", trycatch); Console.WriteLine("tryparse: {0}", tryparse); } ```
Try-Catch will always be slower. TryParse will be faster. The IF and TryParse are the same.
Parsing Performance (If, TryParse, Try-Catch)
[ "", "c#", "parsing", "text", "" ]
I know that you can run almost all Java in *Dalvik's VM* that you can in *Java's VM* but the limitations are not very clear. Has anyone run into any major stumbling blocks? Any major libraries having trouble? Any languages that compile to Java byte code (**Scala**, **Jython** etc...) not work as expected?
There are a number of things that Dalvik will not handle, or will not handle quite the same way as standard Java bytecode, though most of them are quite advanced.

The **most severe example is runtime bytecode generation** and custom class loading. Let's say you would like to create some bytecode and then use a classloader to load it for you: if that trick works on your normal machine, it is guaranteed not to work on Dalvik, unless you change your bytecode generation. That prevents you from using certain dependency injection frameworks, the best-known example being Google Guice (though I am sure some people are working on that). On the other hand AspectJ should work, as it uses bytecode instrumentation as a compilation step (though I don't know if anyone has tried).

As to other JVM languages -- anything that in the end compiles to standard bytecode and does not use bytecode instrumentation at runtime can be converted to Dalvik and should work. I know people have run Jython on Android and it worked OK.

Another thing to be aware of is that there is **no just-in-time compilation**. This is not strictly Dalvik's problem (you can always compile any bytecode on the fly if you wish), but Android does not support it and is unlikely to do so. In effect, while microbenchmarking for standard Java was useless -- components had different runtime characteristics in tests than as parts of larger systems -- microbenchmarks for Android phones totally make sense.
If you watch the "[Dalvik Virtual Machine internals](http://www.youtube.com/watch?v=ptjedOZEXPM)" Google IO session, you can see that Dalvik does not support [generational GC](http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Generational_GC_.28aka_Ephemeral_GC.29). So it could degrade the performance of frequent object creation and deletion. The Java VM supports generational GC, so it would show better GC performance in the same situation. And also, Dalvik uses a [trace-granularity JIT](http://dl.google.com/googleio/2010/android-jit-compiler-androids-dalvik-vm.pdf) instead of a method-granularity JIT.
What can you not do on the Dalvik VM (Android's VM) that you can in Sun VM?
[ "", "java", "android", "jvm", "dalvik", "" ]
I need to write an update statement that uses multiple tables to determine which rows to update, since in Oracle, multiple tables aren't allowed in an UPDATE. The following query will return a "ORA-00971: Missing SET keyword" error

```
UPDATE TABLE1 a, TABLE2 b
SET a.COL1 = 'VALUE'
WHERE a.FK = b.PK
AND b.COL2 IN ('SET OF VALUES')
```

Looking up the UPDATE statement syntax on Oracle, I found the following [link](http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/update_statement.htm), which shows that you can use a subquery in place of a table name. When I tried to write the query like this, I got a "ORA-01779: Cannot modify a column which maps to a non key-preserved table"

```
UPDATE
(
    SELECT a.COL1
    FROM TABLE1 a, TABLE2 b
    WHERE a.FK = b.PK
    AND b.COL2 IN ('SET OF VALUES')
) update_tbl
SET update_tbl.COL1 = 'VALUE'
```

I did rewrite the query (shown below) using an EXISTS statement instead and it works fine, but I would still like to know how this is done.

```
UPDATE TABLE1 update_tbl
SET update_tbl.COL1 = 'VALUE'
WHERE EXISTS
(
    SELECT 1
    FROM TABLE1 a, TABLE2 b
    WHERE a.FK = b.PK
    AND b.COL2 IN ('SET OF VALUES')
    AND update_tbl.PK = a.PK
)
```

Thanks!

-Nate
Another option: ``` UPDATE TABLE1 a SET a.COL1 = 'VALUE' WHERE a.FK IN ( SELECT b.PK FROM TABLE2 b WHERE b.COL2 IN ('SET OF VALUES') ) ``` Your second example would work if (a) the view included the **declared** PK of TABLE1: ``` UPDATE ( SELECT a.COL1, a.PKCOL FROM TABLE1 a, TABLE2 b WHERE a.FK = b.PK AND b.COL2 IN ('SET OF VALUES') ) update_tbl SET update_tbl.COL1 = 'VALUE' ``` ... and (b) TABLE1.FK was a **declared** foreign key to TABLE2 (By declared I mean that a constraint exists and is enabled).
I find that a nice, quick, consistent way to turn a SELECT statement into an UPDATE is to make the update based on the ROWID. ``` UPDATE TABLE1 SET COL1 = 'VALUE' WHERE ROWID in ( SELECT a.rowid FROM TABLE1 a, TABLE2 b WHERE a.FK = b.PK AND b.COL2 IN ('SET OF VALUES') ) ``` So, your inner query is defining the rows to update.
Using a subquery instead of a table name in an Oracle Update Statement
[ "", "sql", "oracle", "" ]
I have a page that is supposed to launch the Print Preview page onload. I found this: ``` var OLECMDID = 7; /* OLECMDID values: * 6 - print * 7 - print preview * 1 - open window * 4 - Save As */ var PROMPT = 1; // 2 DONTPROMPTUSER var WebBrowser = '<OBJECT ID="WebBrowser1" WIDTH=0 HEIGHT=0 CLASSID="CLSID:8856F961-340A-11D0-A96B-00C04FD705A2"></OBJECT>'; document.body.insertAdjacentHTML('beforeEnd', WebBrowser); WebBrowser1.ExecWB(OLECMDID, PROMPT); WebBrowser1.outerHTML = ""; ``` But... 1. it does not work in FireFox. 2. it's kind of ugly. Is there a better way for IE or a way that works for FireFox?
You can't; Print Preview is a feature of the browser, and therefore should be protected from being called by JavaScript, as it would be a security risk. That's why your example uses ActiveX, which bypasses the JavaScript security restrictions.

So instead, use the print stylesheet that you already should have, and show it for media=screen,print instead of media=print. Read [Alist Apart: Going to Print](http://www.alistapart.com/articles/goingtoprint/) for a good article on the subject of print stylesheets.
I think the best that's possible in cross-browser JavaScript is `window.print()`, which (in Firefox 3, for me) brings up the 'print' dialog and not the print preview dialog. FYI, the *print dialog* is your computer's Print popup, what you get when you do Ctrl-p. The *print preview* is Firefox's own Preview window, and it has more options. It's what you get with Firefox Menu > Print...
How can Print Preview be called from Javascript?
[ "", "javascript", "printing", "" ]
In my experience Meta-templates are really fun (when your compilers are compliant), and can give good performance boosts, and luckily I'm surrounded by seasoned C++ programmers that also grok meta-templates. However, occasionally a new developer arrives and can't make heads or tails of some of the meta-template tricks we use (mostly Andrei Alexandrescu stuff), for a few weeks until he gets initiated appropriately. So I was wondering what's the situation for other C++ programmers out there? Should meta-template programming be something C++ programmers should be "required" to know (excluding entry level students of course), or not? Edit: Note my question is related to production code and not little samples or prototypes
If you can find enough candidates who really know template meta-programming then by all means, require it. You will be showing a lot of qualified and potentially productive people the door (there are plenty of legitimate reasons not to know how to do this, namely that if you do it on a lot of platforms, you will create code that can't compile, or that average developers will have trouble understanding). Template meta-programming is great, but let's face it, it's pushing C++ to the limit. Now, a candidate should probably understand the basics (compute n! at compile time, or at least explain how it works if they are shown the code). If your new developers are reliably becoming productive within a few weeks, then your current recruiting is probably pretty good.
Yes, but I would not personally place a high priority on it. It's a nifty feature, but it's a bit situational, and good C++ code can be developed without it. I've personally used it once or twice, but haven't really found it to be valuable enough in *my* work to regularly use it. (Maybe that's a function of my lack of C++ production experience, though)
Are C++ meta-templates required knowledge for programmers?
[ "", "c++", "metaprogramming", "" ]
I have a table, call it TBL. It has two columns, call them A and B. Now in the query I require one column as A and the other column should be a comma-separated list of all B's which are against A in TBL.

e.g. TBL is like this

```
1 Alpha
2 Beta
1 Gamma
1 Delta
```

Result of query should be

```
1 Alpha,Gamma,Delta
2 Beta
```

This type of thing is very easy to do with cursors in a stored procedure. But I am not able to do it through MS Access, because apparently it does not support stored procedures.

Is there a way to run a stored procedure in MS Access? Or is there a way through SQL to run this type of query?
You can concatenate the records with a User Defined Function (UDF). The code below can be pasted 'as is' into a standard module. The SQL for you example would be: ``` SELECT tbl.A, Concatenate("SELECT B FROM tbl WHERE A = " & [A]) AS ConcA FROM tbl GROUP BY tbl.A ``` This code is by DHookom, Access MVP, and is taken from <http://www.tek-tips.com/faqs.cfm?fid=4233> ``` Function Concatenate(pstrSQL As String, _ Optional pstrDelim As String = ", ") _ As String 'example 'tblFamily with FamID as numeric primary key 'tblFamMem with FamID, FirstName, DOB,... 'return a comma separated list of FirstNames 'for a FamID ' John, Mary, Susan 'in a Query '(This SQL statement assumes FamID is numeric) '=================================== 'SELECT FamID, 'Concatenate("SELECT FirstName FROM tblFamMem ' WHERE FamID =" & [FamID]) as FirstNames 'FROM tblFamily '=================================== ' 'If the FamID is a string then the SQL would be '=================================== 'SELECT FamID, 'Concatenate("SELECT FirstName FROM tblFamMem ' WHERE FamID =""" & [FamID] & """") as FirstNames 'FROM tblFamily '=================================== '======For DAO uncomment next 4 lines======= '====== comment out ADO below ======= 'Dim db As DAO.Database 'Dim rs As DAO.Recordset 'Set db = CurrentDb 'Set rs = db.OpenRecordset(pstrSQL) '======For ADO uncomment next two lines===== '====== comment out DAO above ====== Dim rs As New ADODB.Recordset rs.Open pstrSQL, CurrentProject.Connection, _ adOpenKeyset, adLockOptimistic Dim strConcat As String 'build return string With rs If Not .EOF Then .MoveFirst Do While Not .EOF strConcat = strConcat & _ .Fields(0) & pstrDelim .MoveNext Loop End If .Close End With Set rs = Nothing '====== uncomment next line for DAO ======== 'Set db = Nothing If Len(strConcat) > 0 Then strConcat = Left(strConcat, _ Len(strConcat) - Len(pstrDelim)) End If Concatenate = strConcat End Function ```
I believe you can create VBA functions and use them in your Access queries. That might help you.
Does MS Access (2003) have anything comparable to a stored procedure? I want to run a complex query in MS Access
[ "", "sql", "ms-access", "stored-procedures", "ms-office", "" ]
Exception is: 'Country' has a SelectedValue which is invalid because it does not exist in the list of items. Parameter name: value ``` UserService.DsUserAttributes dsCountry = us_service.GetUserAttributeDropDown(systemId, "Country"); Country.DataSource = dsCountry.tblDropDownValues; Country.DataTextField = "AttrValue"; Country.DataValueField = "Id"; Country.DataBind(); ``` The values held within dsCountry.tblDropDownValues are: ``` Id AttrValue AttrName aefa28e0-a118-11dd-ad8b-080c210c9a66 PLEASE SELECT Country 213a743e-ea0b-419c-bd44-03b1c35241b3 USA Country eefa1387-8dc0-11d8-975f-13da67a41a5d CANADA Country ```
Every time I got this error it was because the keys I was matching on to bind my data didn't match. The lines you showed may not be the problem. Look into when you are possibly loading a record from another table and binding its values into the dropdown list.

For example, say you have a dropdown list on your page that contains all your country names with country Ids as the values behind them. These values are all stored in a table CountryTbl. You also have a grid on the page where a user can select which record from another table they want to edit. Let's say this record contains information like Name, Phone #, and Country, and it's all stored in another table UserTbl.

When the form is attempting to bind its data from UserTbl, it is trying to set your Country DropDownList equal to a bound value from UserTbl. But what can happen, and has happened to me many times, is that you have bad data in your UserTbl, like a Country that doesn't exist anymore, or another value in its Country field that just doesn't match any of the values in your Country dropdown.

Look into your database tables and see if you have any CountryIds in your "UserTbl" that don't match any of the CountryIds in your "CountryTbl".
Try binding the data on postback. It sounds like when the event handler is accessing the DropDownList, it hasn't been repopulated with the values you are initially binding to the DropDownList. Also, remember to bind early enough so that it is bound before the event handler starts its processing. OnInit or OnLoad should be good enough. Otherwise, we might need some more details:

1. When is the exception occurring? During the rendering, or on the postback in the event handler?
2. Do you have a stack trace on the error that might point to the point in the code where the selected value is accessed?
3. When are you binding the data?
SelectedValue is invalid - doesn't exist in list - C#/ASP.NET
[ "", "c#", "asp.net", "" ]
Has Java 1.6 SDK been released for mac? I can't seem to find it...
Yes, but only for 64-bit Intel-based Macs, i.e. Macs using Core 2 (duo or solo) or Xeon chips. The original Core chip is not supported, nor is any PPC chip. Also, Java 6 is currently Leopard-only. The update should appear in Software Update, if you have a compatible Mac and recent-enough OS version. Here are the release notes for the [first update](http://developer.apple.com/releasenotes/Java/JavaLeopardUpdate1RN/Introduction/chapter_1_section_1.html). And here are the [second update](http://developer.apple.com/releasenotes/Java/JavaLeopardUpdate2RN/Introduction/chapter_1_section_1.html)'s release notes.
If you don't have a 64-bit Intel-based Mac you may also be interested in [SoyLatte](http://landonf.bikemonkey.org/static/soylatte/) which is an independent X11-based port of Java 1.6 based on the FreeBSD version.
Java 1.6 SDK on Mac OS 10.5
[ "", "java", "macos", "" ]
At the moment, our application uses the Trident Win32 component, but we want to move away from that for a few reasons, chief among them being our desire to go cross-platform. We're looking at [WebKit](http://webkit.org/) and [Gecko](http://developer.mozilla.org/en/Gecko), but I'd love to get some feedback before I make a decision. Here are some of the most important requirements: 1. It should be relatively fast, with a small footprint. Ideally, we would be able to trim away anything we don't need without too much effort. 2. Decent documentation is important. I don't anticipate needing to do anything too unusual with it, but you never know. 3. We're using C++, and would like to work with a well-designed object-oriented architecture, if possible. 4. Cross-platform is a must, and good performance would be helpful in the long run (we may end up porting to mobile platforms). Are there any considerations I need to take into account before making a decision? Has anyone worked with WebKit or Gecko before? If so, are there any articles or tutorials I might find useful? Update: Thanks for the responses guys. We ended up going with Qt 4.5, which includes WebKit. We're really pleased with it so far, in fact I think Qt is probably the best UI framework I've ever used; the difference between coding with the native Win32 APIs and this is staggering. It's also real easy to learn, the only major issue we had was getting used to the signals/slots paradigm.
A little history might help in your decision. When Apple was considering which engine to use in making Safari they looked at Gecko, but decided to go with KHTML, fork it and called it WebKit. Their reasons for doing this was that Gecko had tons of legacy cruft still leftover from Netscape and was far more complicated. KHTML/WebKit was newer, and thus had less legacy. It was also cleaner, quicker, and better documented. One of the goals of Firefox 3 was to clean up the codebase and simplify it. From what I've heard they did do this, but I don't know how it compares with current iterations of WebKit. It apparently wasn't enough for Google when they made Chrome, and they have a significant stake in Firefox. See [here](http://arstechnica.com/articles/paedia/mozilla-committed-to-gecko.ars/1) for more details.
I am biased, but if you do not mind using (LGPL-ed) Qt, what about using [QtWebKit](http://doc.trolltech.com/4.5/qtwebkit.html)? It is fairly easy to embed, cross-platform of course, has a nice web browser demonstration, and also other [related examples](http://labs.trolltech.com/blogs/category/labs/internet/webkit/).
What is the best HTML Rendering Engine to embed in an application?
[ "", "c++", "qt", "webkit", "gecko", "rendering-engine", "" ]
Is there a dialect of XML for defining the tables, indexes and relations of a relational database, and a "compiler" or stylesheet to transform that definition into SQL CREATE statements (DDL)? E.g., something that might look like:

```
<Table Name="orders">
    <Column Name="order_id" Type="varchar" Size="20"/>
    ... etc ...
</Table>
```

I'd like to keep the configuration of a service and its dependencies all in one place, and XML is looking like the best choice because of its wide support and its ability to mix namespaces. With it, I could write an installation program that can install this service and create the database, its tables, indexes, relations, etc. without being tied to a specific SQL implementation. **Edit:** This has nothing to do with ORM.
Something like [xml2ddl](http://xml2ddl.berlios.de/)?
Sounds like XML based migrations, never seen one though. If you're into OR/M you could take a look at (N)Hibernate's [hbm2ddl](http://www.hibernate.org/hib_docs/reference/en/html/toolsetguide.html) tool. It generates the appropriate create commands for the schema on various database dialects out of an XML definition.
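If no ready-made tool fits, the transform itself is small. Here is a minimal sketch in Python with `xml.etree` (it uses the `<Table>`/`<Column>` vocabulary from the question, which is not any standard schema, and ignores indexes and relations):

```python
import xml.etree.ElementTree as ET

def xml_to_ddl(xml_text):
    """Turn the question's <Table>/<Column> vocabulary into a CREATE TABLE statement."""
    table = ET.fromstring(xml_text)
    cols = []
    for col in table.findall("Column"):
        col_type = col.get("Type", "")
        if col.get("Size"):
            # render sized types as e.g. varchar(20)
            col_type += "(%s)" % col.get("Size")
        cols.append("%s %s" % (col.get("Name"), col_type))
    return "CREATE TABLE %s (%s);" % (table.get("Name"), ", ".join(cols))

xml_text = '<Table Name="orders"><Column Name="order_id" Type="varchar" Size="20"/></Table>'
print(xml_to_ddl(xml_text))
# -> CREATE TABLE orders (order_id varchar(20));
```

Dialect differences (type names, quoting, autoincrement) would live in the mapping from `Type`/`Size` attributes to SQL text.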
Is there an XML language for defining/authoring SQL database schemas?
[ "", "sql", "xml", "database", "xslt", "" ]
Using Vim, I'm trying to pipe text selected in visual mode to a UNIX command and have the output appended to the end of the current file. For example, say we have a SQL command such as:

```
SELECT * FROM mytable;
```

I want to do something like the following:

```
<ESC>
V                                " select text
:'<,'>!mysql -uuser -ppass mydb
```

But instead of having the output overwrite the currently selected text, I would like to have the output appended to the end of the file. You probably see where this is going. I'm working on using Vim as a simple SQL editor. That way, I don't have to leave Vim to edit, tweak, test SQL code.
How about copying the selected text to the end of the file, select the copy and run the command? If you do not want to repeat the same commands over and over again, you can record the sequence by using `q` or add a new command. I have tried the latter as follows:

```
:com -range C <line1>,<line2>yank | $ | put | .,$ !rev
```

With it you can select some lines and then type `:C`. This will first yank the selection, then go to the end of the file, paste the yanked text and run the command (`rev` in this case) over the new text.
If you prefer a more programmatic approach, you can have

```
:call append(line("$"), system("command", GetSelectedText()))
```

where `GetSelectedText` is the reusable function:

```
func! GetSelectedText()
  normal gv"xy
  let result = getreg("x")
  normal gv
  return result
endfunc
```
How to pipe visually selected text to a UNIX command and append output to current buffer in Vim
[ "", "sql", "vim", "editor", "vi", "" ]
I'm searching for a **PHP syntax highlighting engine** that can be customized (i.e. I can provide my **own tokenizers** for new languages) and that can handle several languages *simultaneously* (i.e. on the same output page). This engine has to work well together with **CSS classes**, i.e. it should format the output by inserting `<span>` elements that are adorned with `class` attributes. Bonus points for an extensible schema. I do *not* search for a client-side syntax highlighting script (JavaScript). So far, I'm stuck with [GeSHi](http://qbnz.com/highlighter/). Unfortunately, GeSHi fails abysmally for several reasons. The main reason is that the different language files define completely different, inconsistent styles. I've worked hours trying to refactor the different language definitions down to a common denominator but since most definition files are in themselves quite bad, I'd finally like to switch. Ideally, I'd like to have an API similar to [CodeRay](http://coderay.rubychan.de/), [Pygments](http://pygments.org/) or the JavaScript [dp.SyntaxHighlighter](http://code.google.com/p/syntaxhighlighter/).

## Clarification:

I'm looking for a code highlighting software written *in* PHP, not *for* PHP (since I need to use it from inside PHP).
Since no existing tool satisfied my needs, I wrote my own. Lo and behold:

## [Hyperlight](https://github.com/klmr/hyperlight)

Usage is extremely easy: just use

```
<?php hyperlight($code, 'php'); ?>
```

to highlight code. Writing new language definitions is relatively easy, too – using regular expressions and a powerful but simple state machine. By the way, I still *need* a lot of definitions so feel free to contribute.
[I marked this answer as *Community Wiki* because you're specifically *not* looking for Javascript] **<http://softwaremaniacs.org/soft/highlight/>** is a PHP (*plus the following list of other languages supported*) syntax highlighting library:

> *Python, Ruby, Perl, PHP, XML, HTML, CSS, Django, Javascript, VBScript, Delphi, Java, C++, C#, Lisp, RenderMan (RSL and RIB), Maya Embedded Language, SQL, SmallTalk, Axapta, 1C, Ini, Diff, DOS .bat, Bash*

It uses *<span class="keyword">* style markup. It has also been integrated in the [dojo toolkit](http://dojotoolkit.org/) (as a dojox project: [dojox.lang.highlight](http://trac.dojotoolkit.org/browser/dojox/trunk/lang/tests/test_highlight.html?rev=11623&format=txt)) Though not the most popular way to run a webserver, strictly speaking, Javascript is not only implemented on the client-side, but there are also [Server-Side Javascript engine/platform combinations too](http://en.wikipedia.org/wiki/Server-side_JavaScript).
PHP syntax highlighting
[ "", "php", "syntax-highlighting", "" ]
What is the best way to overwrite a specific line in a file? I basically want to search a file for the string '@parsethis' and overwrite the rest of that line with something else.
If the file isn't too big, the best way would probably be to read the file into an array of lines with [file()](http://php.net/file), search through the array of lines for your string and edit that line, then [implode()](http://php.net/implode) the array back together and [fwrite()](http://php.net/fwrite) it back to the file.
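The same read-modify-rewrite steps, sketched in Python for illustration (the PHP version would use `file()`, `implode()` and `fwrite()` as described above; the function name here is made up for the example):

```python
def replace_marked_line(path, marker, replacement):
    """Overwrite the rest of any line containing `marker` (the marker itself included)."""
    with open(path) as f:
        lines = f.readlines()              # read the whole file into a list of lines
    for i, line in enumerate(lines):
        if marker in line:
            # keep the text before the marker, replace everything from it onward
            lines[i] = line.split(marker)[0] + replacement + "\n"
    with open(path, "w") as f:
        f.writelines(lines)                # write the edited lines back out
```

As the answer notes, this holds the entire file in memory, so it only suits files that are not too big.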
If the file is really big (log files or something like this) and you are willing to sacrifice speed for memory consumption you could open two files and essentially do the trick [Jeremy Ruten](https://stackoverflow.com/questions/235604/overwite-line-in-file-with-php#235610) proposed by using files instead of system memory.

```
$source='in.txt';
$target='out.txt';

// copy operation
$sh=fopen($source, 'r');
$th=fopen($target, 'w');

while (!feof($sh)) {
    $line=fgets($sh);
    if (strpos($line, '@parsethis')!==false) {
        $line='new line to be inserted' . PHP_EOL;
    }
    fwrite($th, $line);
}

fclose($sh);
fclose($th);

// delete old source file
unlink($source);

// rename target file to source file
rename($target, $source);
```
Overwrite Line in File with PHP
[ "", "php", "file-io", "fopen", "fwrite", "" ]
I have a little demonstration below of a peculiar problem.

```
using System;
using System.Windows.Forms;

namespace WindowsApplication1
{
    public class TestForm : Form
    {
        private System.Windows.Forms.TabControl tabControl1;
        private System.Windows.Forms.TabPage tabPage1;
        private System.Windows.Forms.TabPage tabPage2;
        private System.Windows.Forms.TextBox textBox1;

        public TestForm()
        {
            //Controls
            this.tabControl1 = new System.Windows.Forms.TabControl();
            this.tabPage1 = new System.Windows.Forms.TabPage();
            this.tabPage2 = new System.Windows.Forms.TabPage();
            this.textBox1 = new System.Windows.Forms.TextBox();

            // tabControl1
            this.tabControl1.Anchor = ((System.Windows.Forms.AnchorStyles)((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom) | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right)));
            this.tabControl1.Controls.Add(this.tabPage1);
            this.tabControl1.Controls.Add(this.tabPage2);
            this.tabControl1.Location = new System.Drawing.Point(12, 12);
            this.tabControl1.Name = "tabControl1";
            this.tabControl1.SelectedIndex = 0;
            this.tabControl1.Size = new System.Drawing.Size(260, 240);
            this.tabControl1.TabIndex = 0;
            this.tabControl1.Selected += new System.Windows.Forms.TabControlEventHandler(this.tabControl1_Selected);

            // tabPage1
            this.tabPage1.Controls.Add(this.textBox1);
            this.tabPage1.Location = new System.Drawing.Point(4, 22);
            this.tabPage1.Name = "tabPage1";
            this.tabPage1.Size = new System.Drawing.Size(252, 214);
            this.tabPage1.TabIndex = 0;
            this.tabPage1.Text = "tabPage1";

            // tabPage2
            this.tabPage2.Location = new System.Drawing.Point(4, 22);
            this.tabPage2.Name = "tabPage2";
            this.tabPage2.Size = new System.Drawing.Size(192, 74);
            this.tabPage2.TabIndex = 1;
            this.tabPage2.Text = "tabPage2";

            // textBox1
            this.textBox1.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right)));
            this.textBox1.Location = new System.Drawing.Point(6, 38);
            this.textBox1.Name = "textBox1";
            this.textBox1.Size = new System.Drawing.Size(240, 20);
            this.textBox1.TabIndex = 0;

            // TestForm
            this.ClientSize = new System.Drawing.Size(284, 264);
            this.Controls.Add(this.tabControl1);
            this.Name = "Form1";
            this.Text = "Form1";
        }

        //Tab Selected
        private void tabControl1_Selected(object sender, EventArgs e)
        {
            this.Text = "TextBox Width: " + this.textBox1.Width.ToString();
        }
    }

    //Main
    static class Program
    {
        static void Main()
        {
            Application.Run(new TestForm());
        }
    }
}
```

If you run the above C# code you will have a small form containing a tabcontrol. Within the tabcontrol is a textbox on the first tab. If you follow these steps you will see the problem:

1. Select tabPage2 (textBox1's width is reported in the form title)
2. Resize the form
3. Select tabPage1 (The wrong textBox1 width is reported)

Any ideas what is going on here? The textbox is obviously bigger than what is being reported. If you click again on tabPage2 the correct size is then updated. Obviously there is an event updating the width of textBox1. Can I trigger this when tabPage1 is selected?
Firstly, thanks for the complete program - it made it *much* easier to work out what was going on! While the textbox isn't visible, it isn't resized. When you select tabPage1, the Selected event fires *before* the controls become visible and the textbox gets laid out again. Now, that's why it's happening - but what's your real situation? If you actually want to capture the size of controls changing, subscribe to their Resize events. If not, could you explain more about what you're trying to achieve?
I'm pretty sure that what's happening is the `Selected` event is raised slightly before the tab page becomes visible. The text box is not resized until the tab page becomes visible, so you end up checking the value of the text box's size before it is actually resized. When you change tabs again, the text box is already resized, so you get the correct value. Change the last few lines of your example form to look like this and it will become apparent:

```
    this.textBox1.SizeChanged += TextboxSizeChanged;
}

//Tab Selected
private void tabControl1_Selected(object sender, EventArgs e)
{
    System.Diagnostics.Debug.WriteLine("tab selected");
    this.Text = "TextBox Width: " + this.textBox1.Width.ToString();
}

private void TextboxSizeChanged(object sender, EventArgs e)
{
    System.Diagnostics.Debug.WriteLine("Textbox resized");
}
```
Question about an event peculiarity in a TabControl
[ "", "c#", "winforms", "event-handling", "tabcontrol", "" ]
Does anyone know how to programmatically find out where the java classloader actually loads the class from? I often work on large projects where the classpath gets very long and manual searching is not really an option. I recently had a [problem](https://stackoverflow.com/questions/226280/eclipse-class-version-bug "problem") where the classloader was loading an incorrect version of a class because it was on the classpath in two different places. So how can I get the classloader to tell me where on disk the actual class file is coming from? ***Edit:*** What about if the classloader actually fails to load the class due to a version mismatch (or something else), is there any way we could find out what file it's trying to read before it reads it?
Here's an example:

```
package foo;

public class Test
{
    public static void main(String[] args)
    {
        ClassLoader loader = Test.class.getClassLoader();
        System.out.println(loader.getResource("foo/Test.class"));
    }
}
```

This printed out:

```
file:/C:/Users/Jon/Test/foo/Test.class
```
Another way to find out where a class is loaded from (without manipulating the source) is to start the Java VM with the option: `-verbose:class`
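A third option, offered here as a supplementary sketch rather than something from the answers above, is to ask the class's `ProtectionDomain` for its `CodeSource` (the class name `WhereLoaded` is just for the example):

```java
public class WhereLoaded {
    public static void main(String[] args) {
        // CodeSource is null for classes loaded by the bootstrap loader
        // (e.g. java.lang.String), but for application classes it points
        // at the JAR file or class directory they were loaded from.
        java.security.CodeSource src =
                WhereLoaded.class.getProtectionDomain().getCodeSource();
        System.out.println(src == null ? "loaded by bootstrap loader"
                                       : src.getLocation());
    }
}
```

Unlike `getResource`, this gives you the containing JAR or directory rather than the individual `.class` file, which is often what you want when hunting duplicate classpath entries.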
Find where java class is loaded from
[ "", "java", "classpath", "classloader", "" ]
I am using C#. By default, when I add a web form in Visual Studio 2008 with or without a master page, the AutoEventWireup attribute is set to true in the page directive. This attribute is also set to true inside the master page master directive. What value should I have AutoEventWireup set to (true/false)? What are the pros and cons of both values? Any help is greatly appreciated. Thank you.
That is a way of automatically wiring up event handlers to events based on naming conventions that Microsoft has set up. The way this is implemented is with reflection, if I remember correctly. At runtime, ASP.NET will inspect your class, look for methods with signatures that match the naming convention expected, and then wire those up as handlers for their respective events. That said, the pros are that it is a standard approach and saves you the trouble of wiring up the event handlers yourself. A perceived "con" would be that it takes an extra step (reflection) that costs a bit more than if you were to do it yourself. For the most part, the reflection "cost" is so little that it really isn't worth mentioning, but it is important to be aware of what is happening under the covers.
That just causes the runtime to automatically hook up the conventional page lifecycle methods like Page\_Load to their equivalent eventhandler (Page.Load). If you set AutoEventWireup to false, then you need something like:

```
protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
}
```

instead of the MS convention of:

```
protected void Page_Load(object sender, EventArgs e)
{
}
```

You also don't have to worry about calling base.OnLoad, because the wireup automatically does that. But, there might be a small performance benefit from setting it to false - I have never verified that though.

-Nate Davis
What value should AutoEventWireup be set to in the page directive in ASP.Net?
[ "", "c#", "asp.net", "vb.net", "" ]
Suppose a header file defines a function template. Now suppose two implementation files `#include` this header, and each of them has a call to the function template. In both implementation files the function template is instantiated with the same type.

```
// header.hh
template <typename T>
void f(const T& o)
{
    // ...
}

// impl1.cc
#include "header.hh"

void fimpl1()
{
    f(42);
}

// impl2.cc
#include "header.hh"

void fimpl2()
{
    f(24);
}
```

One may expect the linker would complain about multiple definitions of `f()`. Specifically, if `f()` wouldn't be a template then that would indeed be the case.

* How come the linker doesn't complain about multiple definitions of `f()`?
* Is it specified in the standard that the linker must handle this situation gracefully? In other words, can I always count on programs similar to the above to compile and link?
* If the linker can be clever enough to disambiguate a set of function template instantiations, why can't it do the same for regular functions, given they are identical as is the case for instantiated function templates?
In order to support C++, the linker is smart enough to recognize that they are all the same function and throw out all but one. EDIT: clarification: The linker doesn't compare function contents and determine that they are the same. Templated functions are marked as such and the linker recognizes that they have the same signatures.
The Gnu C++ compiler's manual has [a good discussion of this](http://gcc.gnu.org/onlinedocs/gcc/Template-Instantiation.html). An excerpt:

> C++ templates are the first language feature to require more intelligence from the environment than one usually finds on a UNIX system. Somehow the compiler and linker have to make sure that each template instance occurs exactly once in the executable if it is needed, and not at all otherwise. There are two basic approaches to this problem, which are referred to as the Borland model and the Cfront model.
>
> ## Borland model
>
> Borland C++ solved the template instantiation problem by adding the code equivalent of common blocks to their linker; the compiler emits template instances in each translation unit that uses them, and the linker collapses them together. The advantage of this model is that the linker only has to consider the object files themselves; there is no external complexity to worry about. This disadvantage is that compilation time is increased because the template code is being compiled repeatedly. Code written for this model tends to include definitions of all templates in the header file, since they must be seen to be instantiated.
>
> ## Cfront model
>
> The AT&T C++ translator, Cfront, solved the template instantiation problem by creating the notion of a template repository, an automatically maintained place where template instances are stored. A more modern version of the repository works as follows: As individual object files are built, the compiler places any template definitions and instantiations encountered in the repository. At link time, the link wrapper adds in the objects in the repository and compiles any needed instances that were not previously emitted. The advantages of this model are more optimal compilation speed and the ability to use the system linker; to implement the Borland model a compiler vendor also needs to replace the linker. The disadvantages are vastly increased complexity, and thus potential for error; for some code this can be just as transparent, but in practice it can be very difficult to build multiple programs in one directory and one program in multiple directories. Code written for this model tends to separate definitions of non-inline member templates into a separate file, which should be compiled separately.
>
> When used with GNU ld version 2.8 or later on an ELF system such as GNU/Linux or Solaris 2, or on Microsoft Windows, G++ supports the Borland model. On other systems, G++ implements neither automatic model.
Multiple definitions of a function template
[ "", "c++", "templates", "linker", "" ]
After looking at another question on SO ([Using NaN in C++](https://stackoverflow.com/questions/235386/using-nan-in-c)) I became curious about `std::numeric_limits<double>::signaling_NaN()`. I could not get signaling\_NaN to throw an exception. I thought perhaps by signaling it really meant a signal so I tried catching SIGFPE but nope... Here is my code:

```
double my_nan = numeric_limits<double>::signaling_NaN();
my_nan++;
my_nan += 5;
my_nan = my_nan / 10;
my_nan = 15 / my_nan;
cout << my_nan << endl;
```

`numeric_limits<double>::has_signaling_NaN` evaluates to true, so it is implemented on my system. Any ideas? I am using ms visual studio .net 2003's C++ compiler. I want to test it on another when I get home. Thanks!
You can use the [`_control87()`](http://msdn.microsoft.com/en-us/library/e9b52ceh.aspx) function to enable floating-point exceptions. From the MSDN documentation on `_control87()`:

> **Note:** The run-time libraries mask all floating-point exceptions by default. When floating point exceptions are enabled, you can use [`signal()`](http://msdn.microsoft.com/en-us/library/xdkz3x12.aspx) or [SEH (Structured Exception Handling)](http://msdn.microsoft.com/en-us/library/ms680657(VS.85).aspx) to catch them.
A word of warning: Using 3rd party DLLs may silently enable these exceptions. This is especially true for loading DLL's that are written in a language that enables them by default. I've had that happen in two instances: Printing from an embedded browser control to a HP printer, and registering my DLL (that sets some initial values to NaN) from InnoSetup which is written in Delphi.
How to use std::signaling_nan?
[ "", "c++", "visual-c++", "floating-point", "nan", "" ]
Is it possible to serialize and deserialize a class in C++? I've been using Java for 3 years now, and serialization / deserialization is fairly trivial in that language. Does C++ have similar features? Are there native libraries that handle serialization? An example would be helpful.
The [`Boost::serialization`](http://www.boost.org/doc/libs/1_36_0/libs/serialization/doc/index.html) library handles this rather elegantly. I've used it in several projects. There's an example program, showing how to use it, [here](http://www.boost.org/doc/libs/1_36_0/libs/serialization/example/demo.cpp). The only native way to do it is to use streams. That's essentially all the `Boost::serialization` library does, it extends the stream method by setting up a framework to write objects to a text-like format and read them from the same format. For built-in types, or your own types with `operator<<` and `operator>>` properly defined, that's fairly simple; see [the C++ FAQ](https://isocpp.org/wiki/faq/serialization#serialize-simple-types) for more information.
I realize this is an old post but it's one of the first that comes up when searching for `c++ serialization`. I encourage anyone who has access to C++11 to take a look at [cereal](https://uscilab.github.io/cereal/), a C++11 header-only library for serialization that supports binary, JSON, and XML out of the box. cereal was designed to be easy to extend and use and has a similar syntax to boost::serialization.
Is it possible to serialize and deserialize a class in C++?
[ "", "c++", "serialization", "" ]
Can you add new statements (like `print`, `raise`, `with`) to Python's syntax? Say, to allow..

```
mystatement "Something"
```

Or,

```
new_if True:
    print "example"
```

Not so much if you *should*, but rather if it's possible (short of modifying the python interpreters code)
You may find this useful - [Python internals: adding a new statement to Python](http://eli.thegreenplace.net/2010/06/30/python-internals-adding-a-new-statement-to-python/), quoted here:

---

This article is an attempt to better understand how the front-end of Python works. Just reading documentation and source code may be a bit boring, so I'm taking a hands-on approach here: I'm going to add an `until` statement to Python. All the coding for this article was done against the cutting-edge Py3k branch in the [Python Mercurial repository mirror](http://code.python.org/hg/branches/py3k/).

### The `until` statement

Some languages, like Ruby, have an `until` statement, which is the complement to `while` (`until num == 0` is equivalent to `while num != 0`). In Ruby, I can write:

```
num = 3
until num == 0 do
  puts num
  num -= 1
end
```

And it will print:

```
3
2
1
```

So, I want to add a similar capability to Python. That is, being able to write:

```
num = 3
until num == 0:
  print(num)
  num -= 1
```

### A language-advocacy digression

This article doesn't attempt to suggest the addition of an `until` statement to Python. Although I think such a statement would make some code clearer, and this article displays how easy it is to add, I completely respect Python's philosophy of minimalism. All I'm trying to do here, really, is gain some insight into the inner workings of Python.

### Modifying the grammar

Python uses a custom parser generator named `pgen`. This is a LL(1) parser that converts Python source code into a parse tree. The input to the parser generator is the file `Grammar/Grammar`**[1]**. This is a simple text file that specifies the grammar of Python.

**[1]**: From here on, references to files in the Python source are given relatively to the root of the source tree, which is the directory where you run configure and make to build Python.

Two modifications have to be made to the grammar file. The first is to add a definition for the `until` statement.
I found where the `while` statement was defined (`while_stmt`), and added `until_stmt` below **[2]**:

```
compound_stmt: if_stmt | while_stmt | until_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
until_stmt: 'until' test ':' suite
```

**[2]**: This demonstrates a common technique I use when modifying source code I’m not familiar with: *work by similarity*. This principle won’t solve all your problems, but it can definitely ease the process. Since everything that has to be done for `while` also has to be done for `until`, it serves as a pretty good guideline.

Note that I've decided to exclude the `else` clause from my definition of `until`, just to make it a little bit different (and because frankly I dislike the `else` clause of loops and don't think it fits well with the Zen of Python).

The second change is to modify the rule for `compound_stmt` to include `until_stmt`, as you can see in the snippet above. It's right after `while_stmt`, again.

When you run `make` after modifying `Grammar/Grammar`, notice that the `pgen` program is run to re-generate `Include/graminit.h` and `Python/graminit.c`, and then several files get re-compiled.

### Modifying the AST generation code

After the Python parser has created a parse tree, this tree is converted into an AST, since ASTs are [much simpler to work with](http://eli.thegreenplace.net/2009/02/16/abstract-vs-concrete-syntax-trees/) in subsequent stages of the compilation process. So, we're going to visit `Parser/Python.asdl` which defines the structure of Python's ASTs and add an AST node for our new `until` statement, again right below the `while`:

```
| While(expr test, stmt* body, stmt* orelse)
| Until(expr test, stmt* body)
```

If you now run `make`, notice that before compiling a bunch of files, `Parser/asdl_c.py` is run to generate C code from the AST definition file.
This (like `Grammar/Grammar`) is another example of the Python source-code using a mini-language (in other words, a DSL) to simplify programming. Also note that since `Parser/asdl_c.py` is a Python script, this is a kind of [bootstrapping](http://en.wikipedia.org/wiki/Bootstrapping_%28compilers%29) - to build Python from scratch, Python already has to be available.

While `Parser/asdl_c.py` generated the code to manage our newly defined AST node (into the files `Include/Python-ast.h` and `Python/Python-ast.c`), we still have to write the code that converts a relevant parse-tree node into it by hand. This is done in the file `Python/ast.c`. There, a function named `ast_for_stmt` converts parse tree nodes for statements into AST nodes. Again, guided by our old friend `while`, we jump right into the big `switch` for handling compound statements and add a clause for `until_stmt`:

```
case while_stmt:
    return ast_for_while_stmt(c, ch);
case until_stmt:
    return ast_for_until_stmt(c, ch);
```

Now we should implement `ast_for_until_stmt`. Here it is:

```
static stmt_ty
ast_for_until_stmt(struct compiling *c, const node *n)
{
    /* until_stmt: 'until' test ':' suite */
    REQ(n, until_stmt);

    if (NCH(n) == 4) {
        expr_ty expression;
        asdl_seq *suite_seq;

        expression = ast_for_expr(c, CHILD(n, 1));
        if (!expression)
            return NULL;
        suite_seq = ast_for_suite(c, CHILD(n, 3));
        if (!suite_seq)
            return NULL;
        return Until(expression, suite_seq, LINENO(n), n->n_col_offset, c->c_arena);
    }

    PyErr_Format(PyExc_SystemError,
                 "wrong number of tokens for 'until' statement: %d",
                 NCH(n));
    return NULL;
}
```

Again, this was coded while closely looking at the equivalent `ast_for_while_stmt`, with the difference that for `until` I've decided not to support the `else` clause. As expected, the AST is created recursively, using other AST creating functions like `ast_for_expr` for the condition expression and `ast_for_suite` for the body of the `until` statement. Finally, a new node named `Until` is returned.
Note that we access the parse-tree node `n` using some macros like `NCH` and `CHILD`. These are worth understanding - their code is in `Include/node.h`.

### Digression: AST composition

I chose to create a new type of AST for the `until` statement, but actually this isn't necessary. I could've saved some work and implemented the new functionality using composition of existing AST nodes, since:

```
until condition:
    # do stuff
```

Is functionally equivalent to:

```
while not condition:
    # do stuff
```

Instead of creating the `Until` node in `ast_for_until_stmt`, I could have created a `Not` node with a `While` node as a child. Since the AST compiler already knows how to handle these nodes, the next steps of the process could be skipped.

### Compiling ASTs into bytecode

The next step is compiling the AST into Python bytecode. The compilation has an intermediate result which is a CFG (Control Flow Graph), but since the same code handles it I will ignore this detail for now and leave it for another article.

The code we will look at next is `Python/compile.c`. Following the lead of `while`, we find the function `compiler_visit_stmt`, which is responsible for compiling statements into bytecode. We add a clause for `Until`:

```
case While_kind:
    return compiler_while(c, s);
case Until_kind:
    return compiler_until(c, s);
```

If you wonder what `Until_kind` is, it's a constant (actually a value of the `_stmt_kind` enumeration) automatically generated from the AST definition file into `Include/Python-ast.h`. Anyway, we call `compiler_until` which, of course, still doesn't exist. I'll get to it in a moment.

If you're curious like me, you'll notice that `compiler_visit_stmt` is peculiar. No amount of `grep`-ping the source tree reveals where it is called. When this is the case, only one option remains - C macro-fu.
Indeed, a short investigation leads us to the `VISIT` macro defined in `Python/compile.c`:

```
#define VISIT(C, TYPE, V) {\
    if (!compiler_visit_ ## TYPE((C), (V))) \
        return 0; \
```

It's used to invoke `compiler_visit_stmt` in `compiler_body`. Back to our business, however...

As promised, here's `compiler_until`:

```
static int
compiler_until(struct compiler *c, stmt_ty s)
{
    basicblock *loop, *end, *anchor = NULL;
    int constant = expr_constant(s->v.Until.test);

    if (constant == 1) {
        return 1;
    }
    loop = compiler_new_block(c);
    end = compiler_new_block(c);
    if (constant == -1) {
        anchor = compiler_new_block(c);
        if (anchor == NULL)
            return 0;
    }
    if (loop == NULL || end == NULL)
        return 0;

    ADDOP_JREL(c, SETUP_LOOP, end);
    compiler_use_next_block(c, loop);
    if (!compiler_push_fblock(c, LOOP, loop))
        return 0;
    if (constant == -1) {
        VISIT(c, expr, s->v.Until.test);
        ADDOP_JABS(c, POP_JUMP_IF_TRUE, anchor);
    }
    VISIT_SEQ(c, stmt, s->v.Until.body);
    ADDOP_JABS(c, JUMP_ABSOLUTE, loop);
    if (constant == -1) {
        compiler_use_next_block(c, anchor);
        ADDOP(c, POP_BLOCK);
    }
    compiler_pop_fblock(c, LOOP, loop);
    compiler_use_next_block(c, end);

    return 1;
}
```

I have a confession to make: this code wasn't written based on a deep understanding of Python bytecode. Like the rest of the article, it was done in imitation of the kin `compiler_while` function. By reading it carefully, however, keeping in mind that the Python VM is stack-based, and glancing into the documentation of the `dis` module, which has [a list of Python bytecodes](http://docs.python.org/py3k/library/dis.html) with descriptions, it's possible to understand what's going on.

### That's it, we're done... Aren't we?

After making all the changes and running `make`, we can run the newly compiled Python and try our new `until` statement:

```
>>> until num == 0:
...   print(num)
...   num -= 1
...
3
2
1
```

Voila, it works!
Let's see the bytecode created for the new statement by using the `dis` module as follows: ``` import dis def myfoo(num): until num == 0: print(num) num -= 1 dis.dis(myfoo) ``` Here's the result: ``` 4 0 SETUP_LOOP 36 (to 39) >> 3 LOAD_FAST 0 (num) 6 LOAD_CONST 1 (0) 9 COMPARE_OP 2 (==) 12 POP_JUMP_IF_TRUE 38 5 15 LOAD_NAME 0 (print) 18 LOAD_FAST 0 (num) 21 CALL_FUNCTION 1 24 POP_TOP 6 25 LOAD_FAST 0 (num) 28 LOAD_CONST 2 (1) 31 INPLACE_SUBTRACT 32 STORE_FAST 0 (num) 35 JUMP_ABSOLUTE 3 >> 38 POP_BLOCK >> 39 LOAD_CONST 0 (None) 42 RETURN_VALUE ``` The most interesting operation is number 12: if the condition is true, we jump to after the loop. This is correct semantics for `until`. If the jump isn't executed, the loop body keeps running until it jumps back to the condition at operation 35. Feeling good about my change, I then tried running the function (executing `myfoo(3)`) instead of showing its bytecode. The result was less than encouraging: ``` Traceback (most recent call last): File "zy.py", line 9, in myfoo(3) File "zy.py", line 5, in myfoo print(num) SystemError: no locals when loading 'print' ``` Whoa... this can't be good. So what went wrong? ### The case of the missing symbol table One of the steps the Python compiler performs when compiling the AST is create a symbol table for the code it compiles. The call to `PySymtable_Build` in `PyAST_Compile` calls into the symbol table module (`Python/symtable.c`), which walks the AST in a manner similar to the code generation functions. Having a symbol table for each scope helps the compiler figure out some key information, such as which variables are global and which are local to a scope. 
To fix the problem, we have to modify the `symtable_visit_stmt` function in `Python/symtable.c`, adding code for handling `until` statements, after the similar code for `while` statements **[3]**:

```
case While_kind:
    VISIT(st, expr, s->v.While.test);
    VISIT_SEQ(st, stmt, s->v.While.body);
    if (s->v.While.orelse)
        VISIT_SEQ(st, stmt, s->v.While.orelse);
    break;
case Until_kind:
    VISIT(st, expr, s->v.Until.test);
    VISIT_SEQ(st, stmt, s->v.Until.body);
    break;
```

**[3]**: By the way, without this code there's a compiler warning for `Python/symtable.c`. The compiler notices that the `Until_kind` enumeration value isn't handled in the switch statement of `symtable_visit_stmt` and complains. It's always important to check for compiler warnings!

And now we really are done. Compiling the source after this change makes the execution of `myfoo(3)` work as expected.

### Conclusion

In this article I've demonstrated how to add a new statement to Python. Albeit requiring quite a bit of tinkering in the code of the Python compiler, the change wasn't difficult to implement, because I used a similar and existing statement as a guideline.

The Python compiler is a sophisticated chunk of software, and I don't claim to be an expert in it. However, I am really interested in the internals of Python, and particularly its front-end. Therefore, I found this exercise a very useful companion to theoretical study of the compiler's principles and source code. It will serve as a base for future articles that will get deeper into the compiler.

### References

I used a few excellent references for the construction of this article. Here they are, in no particular order:

* [PEP 339: Design of the CPython compiler](http://www.python.org/dev/peps/pep-0339/) - probably the most important and comprehensive piece of *official* documentation for the Python compiler. Being very short, it painfully displays the scarcity of good documentation of the internals of Python.
* "Python Compiler Internals" - an article by Thomas Lee * "Python: Design and Implementation" - a presentation by Guido van Rossum * Python (2.5) Virtual Machine, A guided tour - a presentation by Peter Tröger [original source](http://eli.thegreenplace.net/2010/06/30/python-internals-adding-a-new-statement-to-python/)
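As a small aside to the article's AST-composition digression: the `until` / `while not` equivalence it relies on is easy to check in standard, unpatched Python, where `while not` plays the role of `until`:

```python
def countdown_until(num):
    """Standard-Python rendering of the article's `until num == 0` loop."""
    seen = []
    while not num == 0:   # reads as: "until num == 0"
        seen.append(num)
        num -= 1
    return seen

print(countdown_until(3))  # → [3, 2, 1]
```

This is exactly why the article notes that an `Until` AST node could have been composed from a `Not` node wrapping a `While` node.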
One way to do things like this is to preprocess the source and modify it, translating your added statement to python. There are various problems this approach will bring, and I wouldn't recommend it for general usage, but for experimentation with language, or specific-purpose metaprogramming, it can occasionally be useful.

For instance, let's say we want to introduce a "myprint" statement that, instead of printing to the screen, logs to a specific file. ie:

```
myprint "This gets logged to file"
```

would be equivalent to

```
print >>open('/tmp/logfile.txt','a'), "This gets logged to file"
```

There are various options as to how to do the replacing, from regex substitution to generating an AST, to writing your own parser depending on how close your syntax matches existing python. A good intermediate approach is to use the tokenize module. This should allow you to add new keywords, control structures etc while interpreting the source similarly to the python interpreter, thus avoiding the breakage crude regex solutions would cause.

For the above "myprint", you could write the following transformation code:

```
import tokenize

LOGFILE = '/tmp/log.txt'

def translate(readline):
    for type, name, _, _, _ in tokenize.generate_tokens(readline):
        if type == tokenize.NAME and name == 'myprint':
            yield tokenize.NAME, 'print'
            yield tokenize.OP, '>>'
            yield tokenize.NAME, "open"
            yield tokenize.OP, "("
            yield tokenize.STRING, repr(LOGFILE)
            yield tokenize.OP, ","
            yield tokenize.STRING, "'a'"
            yield tokenize.OP, ")"
            yield tokenize.OP, ","
        else:
            yield type, name
```

(This does make myprint effectively a keyword, so using it as a variable elsewhere will likely cause problems)

The problem then is how to use it so that your code is usable from python. One way would just be to write your own import function, and use it to load code written in your custom language.
ie:

```
import new

def myimport(filename):
    mod = new.module(filename)
    f = open(filename)
    data = tokenize.untokenize(translate(f.readline))
    exec data in mod.__dict__
    return mod
```

This requires you handle your customised code differently from normal python modules however. ie "`some_mod = myimport("some_mod.py")`" rather than "`import some_mod`"

Another fairly neat (albeit hacky) solution is to create a custom encoding (See [PEP 263](http://www.python.org/dev/peps/pep-0263/)) as [this](http://code.activestate.com/recipes/546539/) recipe demonstrates. You could implement this as:

```
import codecs, cStringIO, encodings
from encodings import utf_8

class StreamReader(utf_8.StreamReader):
    def __init__(self, *args, **kwargs):
        codecs.StreamReader.__init__(self, *args, **kwargs)
        data = tokenize.untokenize(translate(self.stream.readline))
        self.stream = cStringIO.StringIO(data)

def search_function(s):
    if s != 'mylang':
        return None
    utf8 = encodings.search_function('utf8')  # Assume utf8 encoding
    return codecs.CodecInfo(
        name='mylang',
        encode=utf8.encode,
        decode=utf8.decode,
        incrementalencoder=utf8.incrementalencoder,
        incrementaldecoder=utf8.incrementaldecoder,
        streamreader=StreamReader,
        streamwriter=utf8.streamwriter)

codecs.register(search_function)
```

Now after this code gets run (eg. you could place it in your .pythonrc or site.py) any code starting with the comment "# coding: mylang" will automatically be translated through the above preprocessing step. eg.

```
# coding: mylang
myprint "this gets logged to file"
for i in range(10):
    myprint "so does this : ", i, "times"
myprint ("works fine" "with arbitrary" + " syntax"
  "and line continuations")
```

Caveats: There are problems with the preprocessor approach, as you'll probably be familiar with if you've worked with the C preprocessor. The main one is debugging. All python sees is the preprocessed file which means that text printed in the stack trace etc will refer to that.
If you've performed significant translation, this may be very different from your source text. The example above doesn't change line numbers etc, so won't be too different, but the more you change it, the harder it will be to figure out.
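For completeness, the token-stream idea above can be sketched in modern Python 3 as well (the answer's code is from the Python 2 era). The `rename_name` helper below is purely illustrative - it isn't from any library - and it only demonstrates the safe-rewriting property, not the full `myprint` expansion:

```python
import io
import tokenize

def rename_name(source, old, new):
    """Replace every NAME token `old` with `new` in `source`.

    Unlike a regex substitution, this never touches string
    literals or comments, because it operates on tokens.
    """
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string == old:
            tokens.append((tokenize.NAME, new))
        else:
            tokens.append((tok.type, tok.string))
    return tokenize.untokenize(tokens)

out = rename_name('myprint("myprint")\n', "myprint", "print")
# Only the NAME token changed; the string literal survived intact.
print(out)
```

Note that `untokenize` run on 2-tuples may adjust whitespace, so only the token content - not exact spacing - is preserved.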
Can you add new statements to Python's syntax?
[ "", "python", "syntax", "language-specifications", "" ]
I'm making another app's window topmost to ensure that a click in my app brings the other's dialog into view. The problem I'm having is that I don't get focus back to my app after the call. If the other app has more windows, one of them ends up with focus, and otherwise no window (looking at the taskbar only) gets focus. Where should I start investigating the issue?

My code for making the other app topmost is:

```
Process p = Process.GetProcessById(trackedProcessID);
IntPtr h = p.MainWindowHandle;
uint TOPMOST_FLAGS = SWP_NOMOVE | SWP_NOSIZE | SWP_ASYNCWINDOWPOS;
SetWindowPos(h, HWND_TOPMOST, 0, 0, 0, 0, TOPMOST_FLAGS);
```

with constants as

```
public static readonly uint SWP_NOMOVE = 0x0002;
public static readonly uint SWP_NOSIZE = 0x0001;
public static readonly uint SWP_ASYNCWINDOWPOS = 0x4000;
public static readonly IntPtr HWND_TOPMOST = new IntPtr(-1);
```
Related: [Unexpected behaviour of Process.MainWindowHandle](https://stackoverflow.com/questions/48288/unexpected-behaviour-of-processmainwindowhandle) Basically MainWindowHandle gives you the current top-most window of the process despite what the documentation says. That explains why the main window of your other process doesn't necessarily get focus. Your other problem is because you are not returning focus to your app after giving it away. Actually, the correct term for what you are doing is establishing [z-order](http://en.wikipedia.org/wiki/Z-order). Instead of trying to control z-order - which cannot be guaranteed - you might be better off sending messages to the other process.
Have you tried SWP\_NOACTIVATE?
focus lost after making another app's window topmost
[ "", "c#", "winapi", "focus", "z-order", "topmost", "" ]
Building a client-side Swing application, what should be notified on a bus (application-wide message system, similar in concept to JMS but much simpler) and what should be notified using direct listeners?

When using a bus, I always have an inescapable feeling of "I have no idea who uses that and where". Also, no set order, hard to veto events, hard to know exactly what's going on at a set time.

On the other hand, using listeners means either directly referencing the source object (coupling) or passing the reference through myriad conversions (A--b\_listener-->B, B--c\_listener-->C only because A needs to know something only C can tell, but B has no interest in it).

So, are there any rules of thumb regarding this? Any suggestion how to balance?
Event buses are very, very useful tools for providing decoupling in certain architectures. Listeners are easy to implement, but they have significant limitations when your object and dependency graph gets large. Listeners tend to run into problems with cyclic dependencies (events can 'bounce' in odd ways, and you wind up having to play games to ensure that you don't get stuck. Most binding frameworks do this for you, but there's something distasteful about knowing that listener events are shooting off into a million places). I make this kind of decision based on project size and scalability. If it's a big app, or there are aspects of the app that can by dynamic (like plugin modules, etc...) then a bus is a good way to keep the architecture clean (OSGI-like module containers are another approach, but heavier weight). If you are considering a bus architecture, I recommend that you take a look at the [Event Bus](https://eventbus.dev.java.net/) project - it works very well with Swing and provides a robust, out of the box solution for what you are asking about.
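To make the decoupling point concrete, here is a minimal publish/subscribe bus sketch (Python used for illustration; this shows the general pattern, not the API of the Event Bus project mentioned above):

```python
class EventBus:
    """Tiny topic-based publish/subscribe bus."""

    def __init__(self):
        self._subscribers = {}  # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Publisher and subscribers share only the bus and a topic name;
        # neither holds a direct reference to the other.
        for handler in self._subscribers.get(topic, []):
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("user.saved", received.append)
bus.publish("user.saved", {"id": 42})
print(received)  # → [{'id': 42}]
```

The cost, as the question notes, is that nothing in this code tells you who listens to "user.saved" - you trade compile-time traceability for looser coupling.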
In my experience, trying to make Swing do something it wasn't designed for, or doesn't want you to do, is extremely painful. I would go with the simplest thing that would work; keep your code clean, do it the "Swing Way" until you start seeing problems.
Bus or listeners/delegates in client-side Swing application?
[ "", "java", "swing", "events", "architecture", "" ]
When I build a unit test project, the test output is copied to a TestResults folder before the tests are executed. The issue I'm having is that not all the files in the Debug/bin directory are copied to the TestResults folder. How can I get a file that is copied to the Debug/bin directory to also be copied to the TestResults folder?
The standard way to do this is by specifying the [deployment items](http://msdn.microsoft.com/en-us/library/ms182473.aspx "MSDN: Team System Test Deployment") in the *`.testrunconfig`* file, which can be accessed via the *Edit Test Run Configurations* item in the Visual Studio ***Test*** menu or in the *Solution Items* folder.
You can specify the deployment attribute as in the example shown below. You also need to set the "Content" & "Copy if newer" properties (there is no documentation on the latter settings, but you have to set those to make it work).

```
[TestMethod]
[DeploymentItem("mytestdata.xml")]
public void UploadTest()
{
}
```
How can I get "Copy to Output Directory" to work with Unit Tests?
[ "", "c#", ".net", "visual-studio-2008", "unit-testing", "mstest", "" ]
How do I print debug messages in the Google Chrome JavaScript Console? Please note that the JavaScript Console is not the same as the JavaScript Debugger; they have different syntaxes AFAIK, so the **print** command in JavaScript Debugger will not work here. In the JavaScript Console, `print()` will send the parameter to the printer.
Executing following code from the browser address bar: ``` javascript: console.log(2); ``` successfully prints message to the "JavaScript Console" in Google Chrome.
Improving on Andru's idea, you can write a script which creates console functions if they don't exist: ``` if (!window.console) console = {}; console.log = console.log || function(){}; console.warn = console.warn || function(){}; console.error = console.error || function(){}; console.info = console.info || function(){}; ``` Then, use any of the following: ``` console.log(...); console.error(...); console.info(...); console.warn(...); ``` These functions will log different types of items (which can be filtered based on log, info, error or warn) and will not cause errors when console is not available. These functions will work in Firebug and Chrome consoles.
How do I print debug messages in the Google Chrome JavaScript Console?
[ "", "javascript", "console", "debugging", "google-chrome", "" ]
I'm writing a client-server app using BSD sockets. It needs to run in the background, continuously transferring data, but cannot hog the bandwidth of the network interface from normal use. Depending on the speed of the interface, I need to throttle this connection to a certain max transfer rate. What is the best way to achieve this, programmatically?
The problem with sleeping a constant amount of 1 second after each transfer is that you will have choppy network performance.

Let BandwidthMaxThreshold be the desired bandwidth threshold.

Let TransferRate be the current transfer rate of the connection.

Then...

If you detect your TransferRate > BandwidthMaxThreshold then you do a SleepTime = 1 + SleepTime \* 1.02 (increase sleep time by 2%)

Before or after each network operation do a Sleep(SleepTime)

If you detect your TransferRate is a lot lower than your BandwidthMaxThreshold you can decrease your SleepTime. Alternatively you could just decay/decrease your SleepTime over time always. Eventually your SleepTime will reach 0 again.

Instead of an increase of 2% you could also do an increase by a larger amount linearly of the difference between TransferRate - BandwidthMaxThreshold.

This solution is good because there will be no sleeps if the user's transfer rate is already below the threshold.
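A rough sketch of that adaptive-sleep scheme (shown in Python for brevity; `sock` stands for any object with a blocking `send`, and the growth/decay constants are the tunables suggested above):

```python
import time

def throttled_send(sock, data, max_rate, chunk=1460):
    """Send `data` while trying to keep the average rate under
    `max_rate` bytes/second: grow the inter-chunk sleep when the
    measured rate is too high, decay it toward zero otherwise."""
    sleep_time = 0.0
    sent_total = 0
    start = time.monotonic()
    for offset in range(0, len(data), chunk):
        sent_total += sock.send(data[offset:offset + chunk])
        elapsed = time.monotonic() - start
        rate = sent_total / elapsed if elapsed > 0 else float("inf")
        if rate > max_rate:
            sleep_time = 0.001 + sleep_time * 1.02  # back off by ~2%
        else:
            sleep_time *= 0.98                      # decay toward zero
        if sleep_time > 0.0:
            time.sleep(sleep_time)
    return sent_total
```

When the connection is naturally slower than `max_rate`, `sleep_time` decays to zero and the function adds no overhead - the property the answer highlights.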
The best way would be to use a [token bucket](http://en.wikipedia.org/wiki/Token_bucket). Transmit only when you have enough tokens to fill a packet (1460 bytes would be a good amount), or if you are the receive side, read from the socket only when you have enough tokens; a bit of simple math will tell you how long you have to wait before you have enough tokens, so you can sleep that amount of time (be careful to calculate how many tokens you gained by how much you *actually* slept, since most operating systems can sleep your process for longer than you asked). To control the size of the bursts, limit the maximum amount of tokens you can have; a good amount could be one second worth of tokens.
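A minimal sketch of the token bucket described above (Python used for brevity; `rate` is bytes per second and `capacity` caps the burst, e.g. one second's worth of tokens):

```python
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens (bytes) added per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = self.capacity
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def consume(self, n):
        """Block until `n` tokens are available, then take them."""
        while True:
            self._refill()
            if self.tokens >= n:
                self.tokens -= n
                return
            # Sleep just long enough for the deficit to refill, then
            # re-check, since the OS may sleep us for longer than asked.
            time.sleep((n - self.tokens) / self.rate)
```

Before sending (or reading) each packet of, say, 1460 bytes, you would call `bucket.consume(1460)`.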
How do you throttle the bandwidth of a socket connection in C?
[ "", "c++", "c", "sockets", "network-programming", "" ]
I was reading some 3rd party code and I found this:

```
x.Flags = x.Flags ^ Flags.Hidden;
```

What does it do? I've used '&' and '|' for bitwise 'and' and 'or' with enums, but it's the first time I see that symbol...
^ is the bitwise XOR operator in C#.

EDIT: For booleans, `a ^ b` returns true when exactly one of `a` and `b` is true - i.e. one or the other, but not both.
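The toggling effect is easy to see with plain integers - a quick sketch (Python used for illustration; `HIDDEN` is a stand-in for the `Flags.Hidden` enum value):

```python
HIDDEN = 0b0100   # stand-in for Flags.Hidden

flags = 0b0001          # Hidden is currently off
flags = flags ^ HIDDEN  # XOR turns Hidden on...
assert flags == 0b0101
flags = flags ^ HIDDEN  # ...and XOR again turns it back off,
assert flags == 0b0001  # leaving every other bit untouched
```

So `x.Flags = x.Flags ^ Flags.Hidden` flips the Hidden bit each time it runs, without disturbing any other flags.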
That would be the 'xor' operator. In your example code, it would toggle the Flags.Hidden either on or off, depending on the current value of x.Flags. The benefit of doing it this way is that it allows you to change the setting for Flags.Hidden without affecting any other flags that have been set.
What does '^' do in c# (Enums)?
[ "", "c#", "enums", "operators", "" ]
How do I generate friendly URLs within the ASP.NET MVC Framework? For example, we've got a URL that looks like this:

```
http://site/catalogue/BrowseByStyleLevel/1
```

The 1 is the Id of the study level (Higher in this case) to browse, but I'd like to reformat the URL in the same way StackOverflow does it. For example, these two URLs will take you to the same place:

> <https://stackoverflow.com/questions/119323/nested-for-loops-in-different-languages>
>
> <https://stackoverflow.com/questions/119323/>

**EDIT:** The friendly part of the url is referred to as a *slug*.
There are two steps to solve this problem. First, create a new route or change the default route to accept an additional parameter: ``` routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}/{ignoreThisBit}", new { controller = "Home", action = "Index", id = "", ignoreThisBit = ""} // Parameter defaults ) ``` Now you can type whatever you want to at the end of your URI and the application will ignore it. When you render the links, you need to add the "friendly" text: ``` <%= Html.ActionLink("Link text", "ActionName", "ControllerName", new { id = 1234, ignoreThisBit="friendly-text-here" }); ```
This is how I have implemented the slug URL on my application. **Note:** The default Maproute should not be changed and also the routes are processed in the order in which they're added to the route list. ``` routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); routes.MapRoute("Place", "{controller}/{action}/{id}/{slug}", new { controller = "Place", action = "Details", id = UrlParameter.Optional,slug="" }); ```
How can I create a friendly URL in ASP.NET MVC?
[ "", "c#", "asp.net-mvc", "asp.net-mvc-routing", "friendly-url", "slug", "" ]
Is there a way to deserialize Expressions in C#? I would like to store Expressions in a database and load them at run time.
I continued work on the library that was mentioned by [Serializing and Deserializing Expression Trees in C#](https://stackoverflow.com/questions/217961/serializing-and-deserializing-expression-trees-in-c/1486253#1486253) It looks like the project was abandoned (2008) but I did some work on it and now it works with .NET 4.0 and Silverlight. I made bug fixes to their code and also made it more DAL-independent. <http://expressiontree.codeplex.com/>
There's an project on GitHub/Nuget called [MetaLinq](https://github.com/mcintyre321/metalinq) that aims to make working with expression tress more easy. It converts between normal Expressions and 'EditableExpressions' which are mutable and fully Serializable, so can be used with Json, Xml, Binary etc. Also check this [blog post](https://nomadic-developer.com/2007/05/24/announcing-metalinq-linq-to-expressions/) for more info.
Serializing and Deserializing Expression Trees in C#
[ "", "c#", "serialization", "expression-trees", "" ]
The situation is as follows: I've got 2 models: 'Action' and 'User'. These models refer to the tables 'actions' and 'users', respectively.

My action table contains a column `user_id`. At this moment, I need an overview of all actions, and the users to which they are assigned. When I use `$action->fetchAll()`, I only have the user ID, so I want to be able to join the data from the user model, preferably without making a call to `findDependentRowset()`.

I thought about creating custom `fetchAll()`, `fetchRow()` and `find()` methods in my model, but this would break default behaviour.

What is the best way to solve this issue? Any help would be greatly appreciated.
I designed and implemented the table-relationships feature in Zend Framework. My first comment is that you wouldn't use `findDependentRowset()` anyway -- you'd use `findParentRow()` if the Action has a foreign key reference to User. ``` $actionTable = new Action(); $actionRowset = $actionTable->fetchAll(); foreach ($actionRowset as $actionRow) { $userRow = $actionRow->findParentRow('User'); } ``` **Edit:** In the loop, you now have an $actionRow and a $userRow object. You can write changes back to the database through either object by changing object fields and calling `save()` on the object. You can also use the Zend\_Db\_Table\_Select class (which was implemented after I left the project) to retrieve a Rowset based on a join between Action and User. ``` $actionTable = new Action(); $actionQuery = $actionTable->select() ->setIntegrityCheck(false) // allows joins ->from($actionTable) ->join('user', 'user.id = action.user_id'); $joinedRowset = $actionTable->fetchAll($actionQuery); foreach ($joinedRowset as $joinedRow) { print_r($joinedRow->toArray()); } ``` Note that such a Rowset based on a join query is read-only. You cannot set field values in the Row objects and call `save()` to post changes back to the database. **Edit:** There is no way to make an arbitrary joined result set writable. Consider a simple example based on the joined result set above: ``` action_id action_type user_id user_name 1 Buy 1 Bill 2 Sell 1 Bill 3 Buy 2 Aron 4 Sell 2 Aron ``` Next for the row with action\_id=1, I change one of the fields that came from the User object: ``` $joinedRow->user_name = 'William'; $joinedRow->save(); ``` Questions: when I view the next row with action\_id=2, should I see 'Bill' or 'William'? If 'William', does this mean that saving row 1 has to automatically update 'Bill' to 'William' in all other rows in this result set? Or does it mean that `save()` automatically re-runs the SQL query to get a refreshed result set from the database? 
What if the query is time-consuming? Also consider the object-oriented design. Each Row is a separate object. Is it appropriate that calling `save()` on one object has the side effect of changing values in a separate object (even if they are part of the same collection of objects)? That seems like a form of [Content Coupling](http://en.wikipedia.org/wiki/Coupling_(computer_science)) to me. The example above is a relatively simple query, but much more complex queries are also permitted. Zend\_Db cannot analyze queries with the intention to tell writable results from read-only results. That's also why MySQL views are not updateable.
You could always make a view in your database that does the join for you. ``` CREATE OR REPLACE VIEW VwAction AS SELECT [columns] FROM action LEFT JOIN user ON user.id = action.user_id ``` Then just use ``` $vwAction->fetchAll(); ``` Just remember that views in MySQL are read-only (assuming this is MySQL)
Automatically joining tables without breaking default behaviour in Zend Framework
[ "", "php", "zend-framework", "" ]
I am trying to override the DataGridViewTextBoxCell's paint method in a derived class so that I can indent the foreground text by some variable amount of pixels. I would like it if the width of the column adjusts so that its total width is the length of my cells text plus the "buffer" indent. Does anyone know of a way to accomplish this? My lame implementation is listed below:

```
public class MyTextBoxCell : DataGridViewTextBoxCell
{
    ....

    protected override void Paint(Graphics graphics, Rectangle clipBounds, Rectangle cellBounds, int rowIndex,
        DataGridViewElementStates cellState, object value, object formattedValue, string errorText,
        DataGridViewCellStyle cellStyle, DataGridViewAdvancedBorderStyle advancedBorderStyle,
        DataGridViewPaintParts paintParts)
    {
        clipBounds.Inflate(100, 0);
        DataGridViewPaintParts pp = DataGridViewPaintParts.Background | DataGridViewPaintParts.Border |
            DataGridViewPaintParts.ContentBackground | DataGridViewPaintParts.ErrorIcon;
        base.Paint(graphics, clipBounds, cellBounds, rowIndex, cellState, value, formattedValue,
            errorText, cellStyle, advancedBorderStyle, pp);

        string text = formattedValue as string;

        //My lame attempt to indent 20 pixels??
        TextRenderer.DrawText(graphics, text, cellStyle.Font,
            new Point(cellBounds.Location.X + 20, cellBounds.Location.Y),
            cellStyle.SelectionForeColor, TextFormatFlags.EndEllipsis);
    }
}
```
If you are trying to auto-size the columns (depending on the size of the cell contents) then you should look at the `Column.AutoSizeMode` property and the `Column.DefaultCellStyle` property.

```
const int INDENTCOEFF = 5;

DataGridViewCellStyle cellStyle = new DataGridViewCellStyle();
cellStyle.Padding = new Padding(INDENTCOEFF, 5, INDENTCOEFF, 5); // left, top, right, bottom
MyColumn.DefaultCellStyle = cellStyle;
MyColumn.AutoSizeMode = DataGridViewAutoSizeColumnMode.AllCells;
```
You can just hook up to the CellFormattingEvent in the datagridview and do your formatting there. Or, if you're inherting from the DataGridView, you can just override the OnCellFormatting method. The code would look something like this: ``` if (e.ColumnIndex == 1) { string val = (string)e.Value; e.Value = String.Format(" {0}", val); e.FormattingApplied = true; } ``` Just some rough code, but you get the idea.
Overriding DataGridViewTextBoxCell Paint Method
[ "", "c#", "winforms", "" ]
Is there a good test suite or tool set that can automate website navigation -- with Javascript support -- and collect the HTML from the pages? Of course I can scrape straight HTML with BeautifulSoup. But this does me no good for sites that require Javascript. :)
You could use [Selenium](http://selenium.dev/) or [Watir](http://watir.com/) to drive a real browser.

There are also some JavaScript-based headless browsers:

* [PhantomJS](http://www.phantomjs.org/) is a headless Webkit browser.
  + [pjscrape](http://nrabinowitz.github.com/pjscrape/) is a scraping framework based on PhantomJS and jQuery.
  + [CasperJS](http://n1k0.github.com/casperjs/) is a navigation scripting & testing utility based on PhantomJS, if you need to do a little more than point at URLs to be scraped.
* [Zombie](http://zombie.labnotes.org/) for [Node.js](http://nodejs.org)

Personally, I'm most familiar with Selenium, which has support for writing automation scripts in a good number of languages and has more mature tooling, such as the excellent [Selenium IDE](http://selenium-ide.openqa.org/) extension for Firefox, which can be used to write and run testcases, and can export test scripts to many languages.
Using [HtmlUnit](http://htmlunit.sourceforge.net/) is also a possibility. > HtmlUnit is a "GUI-Less browser for > Java programs". It models HTML > documents and provides an API that > allows you to invoke pages, fill out > forms, click links, etc... just like > you do in your "normal" browser. > > It has fairly good JavaScript support > (which is constantly improving) and is > able to work even with quite complex > AJAX libraries, simulating either > Firefox or Internet Explorer depending > on the configuration you want to use. > > It is typically used for testing > purposes or to retrieve information > from web sites.
What's a good tool to screen-scrape with Javascript support?
[ "", "javascript", "screen-scraping", "" ]
In the documentation of hardware that allows us to control it via UDP/IP, I found the following fragment: > In this communication protocol, DWORD is a 4 bytes data, WORD is a 2 bytes data, > BYTE is a single byte data. The storage format is little endian, namely 4 bytes (32bits) data is stored as: d7-d0, d15-d8, d23-d16, d31-d24; double bytes (16bits) data is stored as: d7-d0 , d15-d8. I am wondering how this translates to C#? Do I have to convert stuff before sending it over? For example, if I want to send over a 32 bit integer, or a 4 character string?
C# itself doesn't define the endianness. Whenever you convert to bytes, however, you're making a choice. The [BitConverter](http://msdn.microsoft.com/en-us/library/system.bitconverter.aspx) class has an [IsLittleEndian](http://msdn.microsoft.com/en-us/library/system.bitconverter.islittleendian.aspx) field to tell you how it will behave, but it doesn't give the choice. The same goes for BinaryReader/BinaryWriter. My [MiscUtil](http://pobox.com/~skeet/csharp/miscutil) library has an EndianBitConverter class which allows you to define the endianness; there are similar equivalents for BinaryReader/Writer. No online usage guide I'm afraid, but they're trivial :) (EndianBitConverter also has a piece of functionality which isn't present in the normal BitConverter, which is to do conversions in-place in a byte array.)
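Independent of the C# API, the byte layout the hardware document describes ("d7-d0, d15-d8, ...") is just standard little-endian order, which is easy to verify with a quick sketch (Python's `struct` used here purely for illustration):

```python
import struct

value = 0x12345678

little = struct.pack("<I", value)  # '<' = little-endian, as the device expects
big = struct.pack(">I", value)     # '>' = big-endian / network order

assert little == b"\x78\x56\x34\x12"  # low byte first: d7-d0, d15-d8, ...
assert big == b"\x12\x34\x56\x78"
```

Whichever language you serialize in, the bytes on the wire must end up in that first order.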
You can also use ``` IPAddress.NetworkToHostOrder(...) ``` For short, int or long.
C# little endian or big endian?
[ "", "c#", "hardware", "udp", "endianness", "" ]
I am trying to debug a JavaScript script that gets read in a Firefox extension and executed. I only can see errors via the Firebug console (my code is invisible to Firebug), and it's reporting a "unterminated string literal." I checked the line and the lines around it and everything seems fine-parentheses, braces, and quotes are balanced, etc. What are other possible causes that I should be looking for?
Most browsers seem to have problems with code like this: ``` var foo = "</script>"; ``` In Firefox, Opera and IE8 this results in an unterminated string literal error. Can be pretty nasty when serializing html code which includes scripts.
Look for linebreaks! Those are often the cause.
Common sources of unterminated string literal
[ "", "javascript", "debugging", "firefox-addon", "" ]
Does anyone have a good article or tutorial on correctly using Dispose and IDisposable? I am trying to explain this to some junior devs and wanted to get some extra material, examples, etc.
A good place to start is this MSDN article by Shawn Farkas <http://msdn.microsoft.com/en-us/magazine/cc163392.aspx>. The definitive article is by Joe Duffy at <http://www.bluebytesoftware.com/blog/2005/04/08/DGUpdateDisposeFinalizationAndResourceManagement.aspx> with some follow-up in <http://www.bluebytesoftware.com/blog/2008/02/18/IDisposableFinalizationAndConcurrency.aspx>
Many older sources of advice on this imply that IDisposable is linked in some way to finalizers. Pay very close attention to the comments of Herb Sutter on this subject. Do NOT leap to the conclusion that you need to write a finalizer if you implement IDisposable, to "make sure that Dispose runs eventually in case the client forgets to call it". This is generally not the case. A class may implement IDisposable so that it can make various calls on managed objects that it owns, e.g. Dispose those objects in turn. This is absolutely pointless to do from a finalizer because finalizers can't reliably access other managed objects. Also anything a finalizer does must be capable of being called from any thread. Finalizers are a very obscure advanced thing, whereas IDisposable is a simple pattern that ought to be ubiquitous. And the advent of SafeHandle [changes the situation further](https://stackoverflow.com/questions/155780/safehandle-in-c).
Winforms Disposing IDisposable
[ "", "c#", "winforms", "dispose", "idisposable", "" ]
I was wondering if there is a native C++ (or STL/Boost) function which will search a CString for a specified string? e.g. ``` CString strIn = "Test number 1"; CString strQuery = "num"; bool fRet = SomeFn(strIn, StrQuery); if( fRet == true ) { // Ok strQuery was found in strIn ... ``` I have found a small number of functions like CompareNoCase IndexOf etc... but so far they don't really do what I want them to do (or use CLR/.Net) Thanks!
[CString::Find()](http://msdn.microsoft.com/ja-jp/library/ms928981.aspx) is what you want, one of the overloads does sub-string searching. ``` CString strIn = "test number 1"; int index = strIn.Find("num"); if (index != -1) // ok, found ```
[string::find](http://www.cplusplus.com/reference/string/string/find.html)
Searching CStrings in C++
[ "", "c++", "mfc", "string", "search", "cstring", "" ]
Here is a snippet of the code: ``` HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(request.RawUrl); WebRequest.DefaultWebProxy = null;//Ensure that we will not loop by going again in the proxy HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse(); string charSet = response.CharacterSet; Encoding encoding; if (String.IsNullOrEmpty(charSet)) encoding = Encoding.Default; else encoding = Encoding.GetEncoding(charSet); StreamReader resStream = new StreamReader(response.GetResponseStream(), encoding); return resStream.ReadToEnd(); ``` The problem is that if I test with <http://www.google.fr>, the "é" characters are not displayed correctly. I have tried changing ASCII to UTF8 and it still displays incorrectly. I have tested the html file in a browser and the browser displays the html text correctly, so I am pretty sure the problem is in the method I use to download the html file. What should I change? *removed dead ImageShack link* ### Update 1: Code and test file changed
Firstly, the easier way of writing that code is to use a StreamReader and ReadToEnd: ``` HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(myURL); using (HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse()) { using (Stream resStream = response.GetResponseStream()) { StreamReader reader = new StreamReader(resStream, Encoding.???); return reader.ReadToEnd(); } } ``` Then it's "just" a matter of finding the right encoding. How did you create the file? If it's with Notepad then you probably want `Encoding.Default` - but that's obviously not portable, as it's the default encoding for *your* PC. In a well-run web server, the response will indicate the encoding in its headers. Having said that, response headers sometimes claim one thing and the HTML claims another, in some cases.
CharacterSet is "ISO-8859-1" by default, if it is not specified in server's content type header (different from "charset" meta tag in HTML). I compare HttpWebResponse.CharacterSet with charset attribute of HTML. If they are different - I use the charset as specified in HTML to re-read the page again, but with correct encoding this time. See the code: ``` string strWebPage = ""; // create request System.Net.WebRequest objRequest = System.Net.HttpWebRequest.Create(sURL); // get response System.Net.HttpWebResponse objResponse; objResponse = (System.Net.HttpWebResponse)objRequest.GetResponse(); // get correct charset and encoding from the server's header string Charset = objResponse.CharacterSet; Encoding encoding = Encoding.GetEncoding(Charset); // read response using (StreamReader sr = new StreamReader(objResponse.GetResponseStream(), encoding)) { strWebPage = sr.ReadToEnd(); // Close and clean up the StreamReader sr.Close(); } // Check real charset meta-tag in HTML int CharsetStart = strWebPage.IndexOf("charset="); if (CharsetStart > 0) { CharsetStart += 8; int CharsetEnd = strWebPage.IndexOfAny(new[] { ' ', '\"', ';' }, CharsetStart); string RealCharset = strWebPage.Substring(CharsetStart, CharsetEnd - CharsetStart); // real charset meta-tag in HTML differs from supplied server header??? if(RealCharset!=Charset) { // get correct encoding Encoding CorrectEncoding = Encoding.GetEncoding(RealCharset); // read the web page again, but with correct encoding this time // create request System.Net.WebRequest objRequest2 = System.Net.HttpWebRequest.Create(sURL); // get response System.Net.HttpWebResponse objResponse2; objResponse2 = (System.Net.HttpWebResponse)objRequest2.GetResponse(); // read response using (StreamReader sr = new StreamReader(objResponse2.GetResponseStream(), CorrectEncoding)) { strWebPage = sr.ReadToEnd(); // Close and clean up the StreamReader sr.Close(); } } } ```
Encoding trouble with HttpWebResponse
[ "", "c#", "encoding", "" ]
I have a WPF app that makes use of a Winforms User Control that I have created using C++/CLI. When my app goes to parse the XAML for my main window, it throws an exception. The information appears to be somewhat abbreviated, but it says: ``` A first chance exception of type 'System.Windows.Markup.XamlParseException' occurred in PresentationFramework.dll Additional information: is not a valid Win32 application. (Exception from HRESULT: 0x800700C1) Error in markup file 'OsgViewer;component/osgviewerwin.xaml' Line 1 Position 9. ``` I commented out my Winforms control in the XAML and everything loads fine. I figured maybe the constructor for my control is doing something bad, so I put a breakpoint in it, but the breakpoint does not appear to be enabled when I start to run the app, and is never hit, which I understand to mean the DLL containing that line is not loaded. Which would most likely cause an exception to be thrown when an object of a type in the DLL is instantiated - the body of the object's constructor couldn't be found. I have done this successfully on a different project in the past, so I pulled in a different WinForms User Control from that app, and instantiated it in the XAML, and that all works fine. So it's something in this DLL. I have a reference to the DLL in my WPF C# app, and when I load the DLL in Object Browser all the required classes and namespaces show up fine. The app compiles fine, the problem just shows up when parsing the XAML. Anybody seen something like this? Any ideas as to what could be causing this? Ideas for debugging it? Thanks! ``` <Window x:Class="OsgViewer.OsgViewerWin" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:int="clr-namespace:System.Windows.Forms.Integration;assembly=WindowsFormsIntegration" xmlns:myns="clr-namespace:MyGlobalNS.MyNS;assembly=MyAssembly" ... 
<int:WindowsFormsHost x:Name="m_Host"> <myns:CMyClass x:Name="m_MyClass" /> </int:WindowsFormsHost> ... </Window> ```
I have experienced problems like that (but not with the exact same error message). It seems as if WPF cannot instantiate your Winforms User Control. The challenge is to find out why. Here are my suggestions that you could try: 1. Check if you have enabled unmanaged debugging (in Project Properties -> Debug) 2. Find out if there are any dependencies of your C++/CLI DLL (where the Winforms control is implemented) and whether those dependencies can be resolved. In order to find out dependencies on native DLLs, you should use the tool [Dependency Walker (depends.exe)](http://www.dependencywalker.com/). .NET Reflector will only examine managed dependencies. 3. Comment out code of your Winforms User Control step by step and try again. 4. Use Gflags.exe to turn on *Loader Snaps* (cf. [Debugging LoadLibrary Failures](http://blogs.msdn.com/junfeng/archive/2006/11/20/debugging-loadlibrary-failures.aspx))
I've seen this problem when trying to use boost::threads. To support thread-local storage, boost::threads makes some Win32 API call that is incompatible with CLI applications. The problem gets triggered if you try to #include something from threads in CLI code. Solution is to either avoid using boost::threads entirely or restrict its use to .cpp files in native code.
WPF throwing an exception parsing XAML that includes a Winforms User Control
[ "", "c#", "wpf", "winforms", "xaml", "user-controls", "" ]
When I am creating a new database table, what factors should I take into account for selecting the primary key's data type?
Sorry to do that, but I found that the answers I gave to related questions (you can check [this](https://stackoverflow.com/questions/159087/composite-primary-keys-versus-unique-object-id-field#164338) and [this](https://stackoverflow.com/questions/224576/will-sql-server-2005-penalize-me-for-using-an-nvarchar50-as-a-primary-key-inste#225407)) could apply to this one. I reshaped them a little bit... You will find many posts dealing with this issue, and each choice you'll make has its pros and cons. Arguments for these usually refer to relational database theory and database performance. On this subject, my point is very simple: ***surrogate primary keys ALWAYS work***, while ***Natural keys MIGHT NOT ALWAYS work one of these days***, and this for multiple reasons: field too short, rules change, etc. To this point, you've guessed here that I am basically a member of the uniqueIdentifier/surrogate primary key team, and even if I appreciate and understand arguments such as the ones presented here, I am still looking for the case where "natural" key is better than surrogate ... In addition to this, one of the most important but always forgotten arguments in favor of this basic rule is related to **code normalization and productivity**: each time I create a table, **shall I lose time** 1. identifying its primary key and its physical characteristics (type, size) 2. remembering these characteristics each time I want to refer to it in my code? 3. explaining my PK choice to other developers in the team? **My answer is no** to all of these questions: 1. I have no time to lose trying to identify "the best Natural Primary Key" when the surrogate option gives me a bullet-proof solution. 2. I do not want to remember that the Primary Key of my Table\_whatever is a 10 characters long string when I write the code. 3. I don't want to lose my time negotiating the Natural Key length: "well if You need 10 why don't you take 12 ***to be on the safe side***?". 
This ***"on the safe side"*** argument really annoys me: If you want to stay on the safe side, it means that you are really not far from the unsafe side! Choose surrogate: it's bullet-proof! So I've been working for the last five years with a very basic rule: each table (let's call it 'myTable') has its first field called `'id_MyTable'` which is of uniqueIdentifier type. Even if this table supports a "many-to-many" relation, where a field combination offers a very acceptable Primary Key, I prefer to create this `'id_myManyToManyTable'` field as a uniqueIdentifier, just to stick to the rule, and because, finally, it does not hurt. The major advantage is that you don't have to care anymore about the use of Primary Key and/or Foreign Key within your code. Once you have the table name, you know the PK name and type. Once you know which links are implemented in your data model, you'll know the name of available foreign keys in the table. And if you still want to have your "Natural Key" somewhere in your table, I advise you to build it following a standard model such as ``` Tbl_whatever id_whatever, unique identifier, primary key code_whatever, whateverTypeYouWant(whateverLengthYouEstimateTheRightOne), indexed ..... ``` Where id\_ is the prefix for primary key, and code\_ is used for "natural" indexed field. Some would argue that the code\_ field should be set as unique. This is true, and it can be easily managed either through DDL or external code. Note that many "natural" keys are calculated (invoice numbers), so they are already generated through code. I am not sure that my rule is the best one. But it is a very efficient one! If everyone applied it, we would, for example, avoid the time lost answering this kind of question!
If using a numeric key, make sure the datatype is going to be large enough to hold the number of rows you might expect the table to grow to. If using a guid, does the extra space needed to store the guid need to be considered? Will coding against guid PKs be a pain for developers or users of the application? If using composite keys, are you sure that the combined columns will always be unique?
What should I consider when selecting a data type for my primary key?
[ "", "sql", "database-design", "" ]
I come from a .NET world and I'm new to writing C++. I'm just wondering what the preferred naming conventions are when it comes to naming local variables and struct members. For example, the legacy code that I've inherited has a lot of these: ``` struct MyStruct { TCHAR szMyChar[STRING_SIZE]; bool bMyBool; unsigned long ulMyLong; void* pMyPointer; MyObject** ppMyObjects; }; ``` Coming from a C# background I was shocked to see the variables with Hungarian notation (I couldn't stop laughing at the pp prefix the first time I saw it). I would much rather name my variables this way instead (although I'm not sure if capitalizing the first letter is a good convention; I've seen other ways - see the links below): ``` struct MyStruct { TCHAR MyChar[STRING_SIZE]; bool MyBool; unsigned long MyLong; void* MyPointer; MyObject** MyObjects; }; ``` My question: Is this (the former way) still a preferred way to name variables in C++? References: <http://geosoft.no/development/cppstyle.html> <http://www.syntext.com/books/syntext-cpp-conventions.htm> <http://ootips.org/hungarian-notation.html> Thanks!
That kind of Hungarian Notation is fairly useless, and possibly worse than useless if you have to change the type of something. ([The *proper* kind of Hungarian Notation](http://www.joelonsoftware.com/articles/Wrong.html) is a different story.) I suggest you use whatever your group does. If you're the only person working on the program, name them whatever way makes the most sense to you.
The most important thing is to be consistent. If you're working with a legacy code base, name your variables and functions *consistently* with the naming convention of the legacy code. If you're writing new code that is only interfacing with old code, use your naming convention in the new code, but be consistent with yourself too.
Variable Naming Conventions in C++
[ "", "c++", "variables", "naming-conventions", "" ]
I'm getting odd results from a `MySQL SELECT` query involving a `LEFT JOIN`, and I can't understand whether my understanding of `LEFT JOIN` is wrong or whether I'm seeing a genuinely odd behavior. I have two tables with a many-to-one relationship: For every record in `table 1` there are 0 or more records in `table 2`. I want to select all the records in table 1 with a column that counts the number of related records in table 2. As I understand it, `LEFT JOIN` should always return all records on the `LEFT` side of the statement. Here's a test database that exhibits the problem: ``` CREATE DATABASE Test; USE Test; CREATE TABLE Dates ( dateID INT UNSIGNED NOT NULL AUTO_INCREMENT, date DATE NOT NULL, UNIQUE KEY (dateID) ) TYPE=MyISAM; CREATE TABLE Slots ( slotID INT UNSIGNED NOT NULL AUTO_INCREMENT, dateID INT UNSIGNED NOT NULL, UNIQUE KEY (slotID) ) TYPE=MyISAM; INSERT INTO Dates (date) VALUES ('2008-10-12'),('2008-10-13'),('2008-10-14'); INSERT INTO Slots (dateID) VALUES (3); ``` The Dates table has three records, and Slots has one - and that record points to the third record in Dates. If I do the following query.. ``` SELECT d.date, count(s.slotID) FROM Dates AS d LEFT JOIN Slots AS s ON s.dateID=d.dateID GROUP BY s.dateID; ``` ..I expect to see a table with 3 rows in - two with a count of 0, and one with a count of 1. But what I actually see is this: ``` +------------+-----------------+ | date | count(s.slotID) | +------------+-----------------+ | 2008-10-12 | 0 | | 2008-10-14 | 1 | +------------+-----------------+ ``` The first record with a zero count appears, but the later record with a zero count is ignored. Am I doing something wrong, or do I just not understand what LEFT JOIN is supposed to do?
You need to `GROUP BY d.dateID`. In two of your cases, `s.DateID` is `NULL` (`LEFT JOIN`) and these are combined together. I think you will also find that this is invalid (ANSI) SQL, because d.date is not part of a `GROUP BY` or the result of an aggregate operation, and should not be able to be `SELECT`ed.
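As a sketch of the fix, here is the corrected query reproduced against SQLite (schema simplified; MySQL-specific clauses such as `TYPE=MyISAM` dropped). Grouping by `d.dateID` keeps all three left-side rows:

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL schema (AUTO_INCREMENT
# spelled the SQLite way).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Dates (dateID INTEGER PRIMARY KEY AUTOINCREMENT, date TEXT NOT NULL)")
cur.execute("CREATE TABLE Slots (slotID INTEGER PRIMARY KEY AUTOINCREMENT, dateID INTEGER NOT NULL)")
cur.executemany("INSERT INTO Dates (date) VALUES (?)",
                [("2008-10-12",), ("2008-10-13",), ("2008-10-14",)])
cur.execute("INSERT INTO Slots (dateID) VALUES (3)")

# Grouping by d.dateID (not s.dateID) keeps every left-side row,
# including the two dates with zero slots.
rows = cur.execute(
    "SELECT d.date, COUNT(s.slotID) FROM Dates AS d "
    "LEFT JOIN Slots AS s ON s.dateID = d.dateID "
    "GROUP BY d.dateID ORDER BY d.dateID").fetchall()
print(rows)
```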
I think you mean to group by d.dateId.
MySQL LEFT JOIN SELECT not selecting all the left side records?
[ "", "sql", "mysql", "join", "" ]
Given the following inheritance tree, what would be the best way of implementing it in a way that works? ``` abstract class Foo<T> : IEnumerable<T> { public abstract Bar CreateBar(); } class Bar<T> : Foo<T> { // Bar's provide a proxy interface to Foo's and limit access nicely. // The general public shouldn't be making these though, they have access // via CreateBar() protected Bar(Foo base) { // snip... } } class Baz<T> : Foo<T> { public Bar CreateBar() { return new Bar(this); } } ``` This fails with: `'Bar.Bar()' is inaccessible due to its protection level`. I don't want the constructor being public, only classes that inherit from `Foo` should be able to create `Bar`s. `Bar` is a specialised `Foo`, and any type of `Foo` should be able to create one. Public internal is an 'option' here, as the majority of the predefined extensions to `Foo` will be internal to the DLL, but I consider this a sloppy answer, since anyone who comes along later who wants to create their own type of `Foo` or `Baz` (which is likely to happen) will be stuck with a default `CreateBar()` implementation, which may or may not meet their needs. Perhaps there is a way of refactoring this to make it work nicely? I'm banging my head on the wall trying to design this so it'll work though. **Edit (More info):** Slightly more concrete: Foo is implementing IEnumerable and long story short, Bar is providing the same interface, but to a limited subset of that enumerable object. All Foo's should be able to create subsets of themselves (ie. Bar) and return it. But I don't want to have everyone who ever wants to implement a Foo to have to worry about this, because Bar will do the proxying and worry about limiting the range, etc.
Okay, new answer: 1. Split Bar into an interface and a concrete class. 2. Express the public abstract method in terms of IBar. 3. Make Bar a private nested class in Foo, implementing IBar. Give it an internal constructor which you can call from Foo. 4. Write a protected method in Foo which creates an instance of Bar from itself. Classes deriving from Foo can use this to implement the abstract method if just proxying is good enough, and classes with more complicated needs can just implement IBar directly. You could even change the abstract method to a virtual one, and create a new Bar from "this" by default. EDIT: One variant on this would be to make Bar a *protected* nested class within Foo, with a public constructor. That way any derived class would be able to instantiate it for themselves, but no unrelated class would be able to "see" it at all. You'd still need to separate the interface from the implementation (so that the interface can be public) but I think that's a good thing anyway.
Would it be possible for you to make Baz a nested type within Bar? That's the only way you'll give it more access to Bar than it would otherwise have. Just having the same parent class only gives it access to protected members of Foo, and Foo doesn't have special access to Bar. I suspect there are other tortuous ways of doing this with nested types, but really it's going to be quite unpleasant for maintenance engineers. It's quite an odd design though, to force one derived class to create an instance of a different class derived from the same base class. Is that really what you need? Perhaps if you put this in more concrete terms it would be easier to come up with alternative designs.
Inheritance trees and protected constructors in C#
[ "", "c#", "inheritance", "protected", "" ]
I am doing 2nd year computer science and we have a software engineering group project. There are 5 people in the group and we would like to build a web application in PHP. Please suggest some ideas for me.
Have a look at Paul Graham's list of "Startup Ideas We'd Like to Fund" - lots more ideas and the CMS has been done to death. <http://ycombinator.com/ideas.html> The list in short: 1. A cure for the disease of which the RIAA is a symptom. 2. Simplified browsing 3. New news 4. Outsourced IT 5. Enterprise software 2.0 6. More variants of CRM 7. Something your company needs that doesn't exist 8. Dating 9. Photo/video sharing services 10. Auctions 11. Web Office apps 12. Fix advertising 13. Online learning 14. Tools for measurement 15. Off the shelf security 16. A form of search that depends on design 17. New payment methods (tricky) 18. The WebOS (si tienes 'webos', sorry Spanish joke) 19. Application and/or data hosting 20. Shopping guides 21. Finance software for individuals and small businesses 22. A web-based Excel/database hybrid 23. More open alternatives to Wikipedia 24. A buffer against bad customer service 25. A Craigslist competitor 26. Better video chat 27. Hardware/software hybrids 28. Fixing email overload 29. Easy site builders for specific markets 30. Startups for startups
How about a [content management system](http://en.wikipedia.org/wiki/Content_management_system)? This allows you to show off every part of your web development skill and it shouldn't be hard to do with a 5 person team. CMSs often include file uploads, file management and on-line text editors (something like TinyMCE). They're actually quite fun to develop and when the system is completed it has a great "wow" factor. Especially when you show people how you can edit the contents of your website on-line.
Suggestions for a Web application for a group project
[ "", "php", "web-applications", "" ]
Is there a good way to read RAW image files (especially Canon CR2 and Adobe DNG files) as GDI+ bitmaps that is reasonably fast? I found an example running under WPF that would read an image using any installed image codec and then display it in an image control. And I modified this example to create a GDI+ bitmap by writing the WPF image to a MemoryStream and creating the Bitmap from that. But this process is slow! Horribly slow! Opening a simple image takes around 10 seconds on my computer. This solution also requires references to the WPF assemblies and that doesn't feel right, especially not since I would like to run the code in an ASP.NET project. There are programs that will do batch conversions of the images, but I would prefer converting the images dynamically when requested. So, any suggestions?
Disclaimer: I work at Atalasoft. Our [DotImage Photo Pro](http://www.atalasoft.com/products/dotimage) product can do this. If you want to try to do it yourself, look into wrapping the opensource DCRaw or look at how Paint.NET does it (I think there's a RAW plugin for it)
The DotImage Photo Pro component worked well, but I had a problem extracting the preview image from raw files using it. It is also outside my budget for this project. But, I found the code for a RAW plugin for Paint.NET [here](http://paintdotnet.forumer.com/viewtopic.php?f=16&t=3759) and it was quite simple to adapt to my needs. The plugin runs the DCRaw executable using `Process.Start` and reads its output from the `StandardOutput` stream. Quite simple and fast! :-) Edit: The link to the plugin doesn't work anymore, but here is the code I used to extract the images. The following code extracts the jpg-preview stored in the raw file. If you want the full image you should remove the -e argument. But be aware that for some cameras you will get a ppm-image that GDI+ cannot read. ``` public Stream GetImageData(string inputFile, string dcRawExe) { var startInfo = new ProcessStartInfo(dcRawExe) { Arguments = "-c -e \"" + inputFile + "\"", RedirectStandardOutput = true, UseShellExecute = false }; var process = Process.Start(startInfo); var image = Image.FromStream(process.StandardOutput.BaseStream); var memoryStream = new MemoryStream(); image.Save(memoryStream, ImageFormat.Png); return memoryStream; } ``` Also, you will need a copy of DCRaw. I used the DcrawMS.exe from this site: <http://www.insflug.org/raw/Downloads/>
Reading RAW image files as GDI+ bitmaps
[ "", "c#", "asp.net", ".net", "gdi+", "dng", "" ]
Is there a simple way to parse a date that may be in MM/DD/yyyy, or M/D/yyyy, or some combination? i.e. the zero is optional before a single digit day or month. To do it manually, one could use: ``` String[] dateFields = dateString.split("/"); int month = Integer.parseInt(dateFields[0]); int day = Integer.parseInt(dateFields[1]); int year = Integer.parseInt(dateFields[2]); ``` And validate with: ``` dateString.matches("\\d\\d?/\\d\\d?/\\d\\d\\d\\d") ``` Is there a call to SimpleDateFormat or JodaTime that would handle this?
Looks like my problem was using "MM/DD/yyyy" when I should have used "MM/dd/yyyy". Uppercase **D** is "Day in year", while lowercase **d** is "Day in month". ``` new SimpleDateFormat("MM/dd/yyyy").parse(dateString); ``` Does the job. Also, "M/d/y" works interchangeably. A closer reading of the [SimpleDateFormat API Docs](http://java.sun.com/j2se/1.5.0/docs/api/java/text/SimpleDateFormat.html) reveals the following: "For parsing, the number of pattern letters is ignored unless it's needed to separate two adjacent fields."
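A self-contained sketch of that point; the same `MM/dd/yyyy` pattern parses both padded and unpadded input:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class Main {
    public static void main(String[] args) throws Exception {
        SimpleDateFormat fmt = new SimpleDateFormat("MM/dd/yyyy");
        // For parsing, the number of pattern letters is ignored, so the
        // same pattern accepts both zero-padded and unpadded fields.
        Date padded = fmt.parse("05/05/1999");
        Date unpadded = fmt.parse("5/5/1999");
        System.out.println(padded.equals(unpadded));
    }
}
```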
Yep, use setLenient: ``` DateFormat df = new SimpleDateFormat("MM/dd/yyyy"); df.setLenient(true); System.out.println(df.parse("05/05/1999")); System.out.println(df.parse("5/5/1999")); ```
Date time parsing that accepts 05/05/1999 and 5/5/1999, etc
[ "", "java", "date", "time", "parsing", "" ]
I am inserting a column in a DataGridView programmatically (i.e., not bound to any data tables/databases) as follows: ``` int lastIndex = m_DGV.Columns.Count - 1; // Count = 4 in this case DataGridViewTextBoxColumn col = (DataGridViewTextBoxColumn)m_DGV.Columns[lastIndex]; m_DGV.Columns.RemoveAt(lastIndex); m_DGV.Columns.Insert(insertIndex, col); // insertIndex = 2 ``` I have found that my columns are visually out of order sometimes using this method. A workaround is to manually set the DisplayIndex property of the column afterwards. Adding this code "fixes it", but I don't understand why it behaves this way. ``` Console.Write(m_DGV.Columns[0].DisplayIndex); // Has value of 0 Console.Write(m_DGV.Columns[1].DisplayIndex); // Has value of 1 Console.Write(m_DGV.Columns[2].DisplayIndex); // Has value of 3 Console.Write(m_DGV.Columns[3].DisplayIndex); // Has value of 2 col.DisplayIndex = insertIndex; Console.Write(m_DGV.Columns[0].DisplayIndex); // Has value of 0 Console.Write(m_DGV.Columns[1].DisplayIndex); // Has value of 1 Console.Write(m_DGV.Columns[2].DisplayIndex); // Has value of 2 Console.Write(m_DGV.Columns[3].DisplayIndex); // Has value of 3 ``` As an aside, my grid can grow its column count dynamically. I wanted to grow it in chunks, so each insert didn't require a column allocation (and associated initialization). Each "new" column would then be added by grabbing an unused column from the end, inserting it into the desired position, and making it visible.
I suspect this is because the order of the columns in the DataGridView does not necessarily dictate the display order, though when DisplayIndex values are not explicitly assigned, the order of the columns dictates them by default. That is why there is a DisplayIndex property, so you may add columns to the collection without performing Inserts - you just need to specify the DisplayIndex value and a cascade update occurs for everything with an equal or greater DisplayIndex. It appears from your example the inserted column is also receiving the first skipped DisplayIndex value. From [a question/answer](http://www.themssforum.com/Csharp/DataGridView-DisplayIndex/) I found: > Changing the DisplayIndex will cause > all the columns between the old > DisplayIndex and the new DisplayIndex > to be shifted. As with nearly all collections (other than LinkedLists) it's always better to add to a collection [than insert into](http://www.dotnetperls.com/Content/List-Insert.aspx) a collection. The behavior you are seeing is a reflection of that rule.
I have a couple of ideas. 1. How about addressing your columns by a unique name, rather than the index in the collection? They might not already have a name, but you could keep track of who's who if you gave them a name that meant something. 2. You can use the `GetFirstColumn`, `GetNextColumn`, `GetPreviousColumn`, `GetLastColumn` methods of the `DataGridViewColumnCollection` class, which work on display order, not the order in the collection. You can also just iterate through the collection using a for loop and `m_DGV.Columns[i]` until you find the one you want. 3. Create an inherited `DataGridView` and `DataGridViewColumnCollection`. The `DataGridView` simply is overridden to use your new collection class. Your new `DataGridViewColumnCollection` will include a method to address the collection by display index, presumably by iterating through the collection until you find the one you want (see #2). Or you can save a dictionary and keep it updated for very large numbers of columns. I doubt the performance increase of keeping a dictionary, since every time a column moves, you essentially have to rewrite the entire thing. Iterating through is O(n) anyway, and unless you're talking asynchronous operations with hundreds of columns, you're probably okay. You might be able to override the `this[]` operator as well, assuming it doesn't screw up the `DataGridView`. Idea #1 might be the easiest to implement, but not necessarily the prettiest. Idea #2 works, and you can put it in a function `DataGridViewColumn GetColumnByDisplayIndex(int Index)`. Idea #3 is cute, and certainly the most encapsulated approach, but isn't exactly trivial.
Why does Columns.Insert not update the DisplayIndex in DataGridView (C#)?
[ "", "c#", "datagridview", "" ]
I am looking for an Applet with similar functionality to the Oracle/Stellent OutsideIn ActiveX control or the Autonomy KeyView technology that acts as a browser plug-in allowing the rendering/display of a large number of file formats (word processing, spreadsheet, graphics, etc.). I currently use the Stellent solution, but due to some restrictions of some of our clients I would prefer something that either exists as a Java Applet or Silverlight control, or has a Java API that I could build an applet on top of (neither of the two I mentioned do). At a bare minimum it would need to display at least the following formats: MS Word, Excel, PowerPoint; MS Outlook MSG files; Adobe PDF; standard image formats (BMP, PNG, JPEG, TIFF); WordPerfect; HTML. Any suggestions?
If a commercial product is an option, [ViewOne](http://www.daeja.com/products/viewone1.asp) is a nice product. It's an Applet and you can view a large variety of documents.
If you are looking for pure Java component that supports all these formats, I'm pretty confident that it doesn't exist. If what you want is to embed Browser, MS Office, Acrobat etc. you would need an ActiveX container. Here are some choices: [JDIC](https://jdic.dev.java.net/#demos) - if you are using Swing (see the Document Viewer demo.) [SWT ActiveX container](http://www.alphaworks.ibm.com/tech/swtactivexcontainer) - if you are using SWT [TeamDev WinPack](http://www.teamdev.com/winpack/demo.jsf) - if your time is more valuable than your money ;-) The product is very polished, the price is reasonable and the support is excellent. Note that with any of these you need to have installed Acrobat, MS Office (or the free doc viewers) and whatever else applications you need to edit the file formats.
Looking for a "Universal" Document viewer component/library
[ "", "java", "applet", "viewer", "plugins", "" ]
What is the most efficient way in C# 2.0 to check each character in a string and return true if they are all valid hexadecimal characters and false otherwise?

### Example

```
void Test()
{
    OnlyHexInString("123ABC"); // Returns true
    OnlyHexInString("123def"); // Returns true
    OnlyHexInString("123g");   // Returns false
}

bool OnlyHexInString(string text)
{
    // Most efficient algorithm to check each digit in C# 2.0 goes here
}
```
```
public bool OnlyHexInString(string test)
{
    // For C-style hex notation (0xFF) you can use @"\A\b(0[xX])?[0-9a-fA-F]+\b\Z"
    return System.Text.RegularExpressions.Regex.IsMatch(test, @"\A\b[0-9a-fA-F]+\b\Z");
}
```
Something like this: (I don't know C# so I'm not sure how to loop through the chars of a string.)

```
loop through the chars {
    bool is_hex_char = (current_char >= '0' && current_char <= '9') ||
                       (current_char >= 'a' && current_char <= 'f') ||
                       (current_char >= 'A' && current_char <= 'F');
    if (!is_hex_char) {
        return false;
    }
}
return true;
```

**Code for Logic Above** (note: `var` is not available in C# 2.0, so the loop variable is typed explicitly)

```
private bool IsHex(IEnumerable<char> chars)
{
    foreach (char c in chars)
    {
        bool isHex = ((c >= '0' && c <= '9') ||
                      (c >= 'a' && c <= 'f') ||
                      (c >= 'A' && c <= 'F'));
        if (!isHex)
            return false;
    }
    return true;
}
```
Check a string to see if all characters are hexadecimal values
[ "", "c#", "" ]
What would be the best way to fill an array from user input? Would a solution be showing a prompt message and then getting the values from the user?
```
string[] answer = new string[10];
for (int i = 0; i < answer.Length; i++)
{
    answer[i] = Console.ReadLine();
}
```
Could you clarify the question a bit? Are you trying to get a fixed number of answers from the user? What data type do you expect -- text, integers, floating-point decimal numbers? That makes a big difference. If you wanted, for instance, an array of integers, you could ask the user to enter them separated by spaces or commas, then use

```
string foo = Console.ReadLine();
string[] tokens = foo.Split(',');
List<int> nums = new List<int>();
int oneNum;
foreach (string s in tokens)
{
    if (Int32.TryParse(s, out oneNum))
        nums.Add(oneNum);
}
```

Of course, you don't necessarily have to go the extra step of converting to ints, but I thought it might help to show how you would.
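For what it's worth, here is a compilable version of the parsing sketch above — the class and method names are my own, and note that in C# 2.0 `Split` takes a `char`, not a string:

```csharp
using System;
using System.Collections.Generic;

class IntParser
{
    // Splits on commas and keeps only the tokens that parse as Int32.
    public static List<int> ParseInts(string input)
    {
        List<int> nums = new List<int>();
        string[] tokens = input.Split(',');
        foreach (string s in tokens)
        {
            int oneNum;
            if (Int32.TryParse(s.Trim(), out oneNum))
                nums.Add(oneNum);
        }
        return nums;
    }

    static void Main()
    {
        // "x" fails to parse and is silently skipped.
        List<int> nums = ParseInts("1, 2, x, 42");
        Console.WriteLine(nums.Count);  // 3
        Console.WriteLine(nums[2]);     // 42
    }
}
```

In a console app you would pass `Console.ReadLine()` into `ParseInts` instead of the literal string.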
How to Fill an array from user input C#?
[ "", "c#", "" ]
```
private string? typeOfContract
{
    get { return (string?)ViewState["typeOfContract"]; }
    set { ViewState["typeOfContract"] = value; }
}
```

Later in the code I use it like this:

```
typeOfContract = Request.QueryString["type"];
```

I am getting the following error at the declaration of `typeOfContract` line stating:

> The type 'string' must be a non-nullable value type in order to use
> it as parameter 'T' in the generic type or method
> 'System.Nullable<T>'

Any ideas? Basically, I want to make sure that `"type"` exists in the `QueryString` before performing an action.
System.String is a reference type and already "nullable". Nullable<T> and the ? suffix are for value types such as Int32, Double, DateTime, etc.
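A minimal sketch of the distinction (the class name is mine): reference types take `null` directly, while the `?` suffix exists only for value types:

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        string s = null;   // legal: string is a reference type, no '?' needed
        int? n = null;     // Nullable<int>: the '?' is for value types only
        // int m = null;   // would not compile: int is a non-nullable value type

        Console.WriteLine(s == null);   // True
        Console.WriteLine(n.HasValue);  // False
        n = 5;
        Console.WriteLine(n.Value);     // 5
    }
}
```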
You are making it complicated. `string` is already nullable. You don't need to make it **more** nullable. Take out the `?` on the property type.
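Once the `?` is gone, the existence check the question asks about falls out naturally: `Request.QueryString` is a `NameValueCollection`, whose indexer returns `null` for a missing key rather than throwing. Sketched here against a bare `NameValueCollection` so it runs outside ASP.NET:

```csharp
using System;
using System.Collections.Specialized;

class QueryCheck
{
    static void Main()
    {
        // Stand-in for Request.QueryString.
        NameValueCollection query = new NameValueCollection();
        query.Add("type", "fixed");

        string typeOfContract = query["type"];     // "fixed"
        string missing = query["no-such-key"];     // null, not an exception

        Console.WriteLine(typeOfContract != null); // True
        Console.WriteLine(missing == null);        // True
    }
}
```

So `if (Request.QueryString["type"] != null) { ... }` is all the guard that's needed before acting on the value.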
C# nullable string error
[ "", "c#", "nullable", "" ]
I am trying to create a databound WPF GridView whose rows can either be read-only or editable (by double-clicking or through a context menu). I would like the row to return to a read-only state if any of its editable controls loses focus. The functionality I am looking for is very similar to [this example](http://blogs.msdn.com/atc_avalon_team/archive/2006/03/14/550934.aspx) but with an entire row being edited simultaneously (rather than a single cell). Does anyone know how to implement this?
With the ListView + GridView control it's quite complex because this control "thinks in columns", so you have to create a template for every column and switch the read-only template with the edit template (for every cell). I suggest you take a look at the Xceed DataGrid. It's free and it implements the edit functionality in a simpler way (you can find info here: <http://xceed.com/Grid_WPF_Intro.html>).
There is also the "official" WPF DataGrid from Microsoft at CodePlex: <http://www.codeplex.com/wpf>
Editable WPF GridView Row
[ "", "c#", "wpf", "gridview", "" ]
How do I get the list of open file handles by process id in C#? I'm interested in digging down and getting the file names as well. Looking for the programmatic equivalent of what process explorer does. Most likely this will require interop. Considering adding a bounty on this, the implementation is nasty complicated.
Ouch, this is going to be hard to do from managed code. There is a [sample on CodeProject](http://www.codeproject.com/KB/shell/OpenedFileFinder.aspx?fid=422864&df=90&mpp=25&noise=3&sort=Position&view=Quick&fr=26&select=2277170). Most of the stuff can be done via interop, but you need a driver to get the filename because it lives in the kernel's address space. Process Explorer embeds the driver in its resources. Getting this all hooked up from C#, and supporting 64-bit as well as 32, is going to be a major headache.
You can also run the command-line app, [Handle](http://technet.microsoft.com/en-us/sysinternals/bb896655.aspx), by Mark Russinovich, and parse the output.
How do I get the list of open file handles by process in C#?
[ "", "c#", ".net", "" ]
I'm using Windows Vista and C# .NET 3.5, but I had my friend run the program on XP and it has the same problem. So I have a C# program that I have running in the background with an icon in the system tray. I have a low-level keyboard hook so when I press two keys (Ctrl+Windows in this case) it'll pull up the application's main form. The form is set to be full screen in the combo key press event handler:

```
this.FormBorderStyle = FormBorderStyle.None;
this.WindowState = FormWindowState.Maximized;
```

So it basically works. When I hit Ctrl+Windows it brings up the form, no matter what program I have given focus to. But sometimes the taskbar will still show up over the form, which I don't want. I want it to always be full screen when I hit that key combo. I figure it has something to do with what application has focus originally. But even when I click on my main form, the taskbar sometimes stays there. So I wonder if focus really is the problem. It just seems like sometimes the taskbar is being stubborn and doesn't want to sit behind my program. Anyone have any ideas how I can fix this?

EDIT: More details - I'm trying to achieve the same effect that a web browser has when you put it into fullscreen mode, or when you put PowerPoint into presentation mode. In a Windows form you do that by setting the border style to none and maximizing the window. But sometimes the window won't cover the taskbar for some reason. Half the time it will. If I have the main window topmost, the others will fall behind it when I click on it, which I don't want if the taskbar is hidden.
Try this (where `this` is your form):

```
this.Bounds = Screen.PrimaryScreen.Bounds;
this.TopMost = true;
```

That'll set the form to fullscreen, and it'll cover the taskbar.
I've tried so many solutions; some of them worked on Windows XP and none of them worked on Windows 7. In the end I wrote a simple method to do it.

```
private void GoFullscreen(bool fullscreen)
{
    if (fullscreen)
    {
        this.WindowState = FormWindowState.Normal;
        this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.None;
        this.Bounds = Screen.PrimaryScreen.Bounds;
    }
    else
    {
        this.WindowState = FormWindowState.Maximized;
        this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.Sizable;
    }
}
```

The order of the code is important: it will not work if you swap the WindowState and FormBorderStyle assignments. One advantage of this method is that it leaves TopMost set to false, which allows other forms to appear over the main form. It absolutely solved my problem.
C# - Why won't a fullscreen winform app ALWAYS cover the taskbar?
[ "", "c#", ".net", "winforms", "fullscreen", "keyboard-hook", "" ]
Is it more performant to have a two-dimensional array (`type[,]`) or an array of arrays (`type[][]`) in C#, particularly for initial allocation and item access?
Of course, if all else fails... test it! Following gives (in "Release", at the console):

```
Size 1000, Repeat 1000
    int[,] set: 3460
    int[,] get: 4036 (chk=1304808064)
    int[][] set: 2441
    int[][] get: 1283 (chk=1304808064)
```

So a jagged array is quicker, at least in this test. Interesting! However, it is a *relatively* small factor, so I would still stick with whichever describes my requirement better. Except for some specific (high CPU/processing) scenarios, readability / maintainability should trump a small performance gain. Up to you, though. Note that this test assumes you access the array much more often than you create it, so I have not included timings for creation, where I would *expect* rectangular to be slightly quicker unless memory is highly fragmented.

```
using System;
using System.Diagnostics;

static class Program
{
    static void Main()
    {
        Console.WriteLine("First is just for JIT...");
        Test(10, 10);
        Console.WriteLine("Real numbers...");
        Test(1000, 1000);
        Console.ReadLine();
    }

    static void Test(int size, int repeat)
    {
        Console.WriteLine("Size {0}, Repeat {1}", size, repeat);
        int[,] rect = new int[size, size];
        int[][] jagged = new int[size][];
        for (int i = 0; i < size; i++)
        {
            // don't count this in the metrics...
            jagged[i] = new int[size];
        }

        Stopwatch watch = Stopwatch.StartNew();
        for (int cycle = 0; cycle < repeat; cycle++)
        {
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    rect[i, j] = i * j;
                }
            }
        }
        watch.Stop();
        Console.WriteLine("\tint[,] set: " + watch.ElapsedMilliseconds);

        int sum = 0;
        watch = Stopwatch.StartNew();
        for (int cycle = 0; cycle < repeat; cycle++)
        {
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    sum += rect[i, j];
                }
            }
        }
        watch.Stop();
        Console.WriteLine("\tint[,] get: {0} (chk={1})", watch.ElapsedMilliseconds, sum);

        watch = Stopwatch.StartNew();
        for (int cycle = 0; cycle < repeat; cycle++)
        {
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    jagged[i][j] = i * j;
                }
            }
        }
        watch.Stop();
        Console.WriteLine("\tint[][] set: " + watch.ElapsedMilliseconds);

        sum = 0;
        watch = Stopwatch.StartNew();
        for (int cycle = 0; cycle < repeat; cycle++)
        {
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    sum += jagged[i][j];
                }
            }
        }
        watch.Stop();
        Console.WriteLine("\tint[][] get: {0} (chk={1})", watch.ElapsedMilliseconds, sum);
    }
}
```
I believe that [,] can allocate one contiguous chunk of memory, while [][] is N+1 chunk allocations where N is the size of the first dimension. So I would guess that [,] is faster on initial allocation. Access is probably about the same, except that [][] would involve one extra dereference. Unless you're in an exceptionally tight loop it's probably a wash. Now, if you're doing something like image processing where you are referencing *between* rows rather than traversing row by row, locality of reference will play a big factor and [,] will probably edge out [][] depending on your cache size. As Marc Gravell mentioned, usage is key to evaluating the performance...
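The allocation difference can be seen in a small compilable sketch (the sizes are arbitrary): a rectangular array is one block, a jagged array is a spine plus one block per row:

```csharp
using System;

class ArrayShapes
{
    static void Main()
    {
        // Rectangular: a single contiguous allocation of 3*4 ints.
        int[,] rect = new int[3, 4];

        // Jagged: one allocation for the spine, then one per row (4 total).
        int[][] jagged = new int[3][];
        for (int i = 0; i < jagged.Length; i++)
            jagged[i] = new int[4];

        Console.WriteLine(rect.Length);      // 12 (all elements)
        Console.WriteLine(jagged.Length);    // 3 (rows only)
        Console.WriteLine(jagged[0].Length); // 4

        // Jagged rows need not all be the same length; rectangular rows must be.
        jagged[1] = new int[7];
        Console.WriteLine(jagged[1].Length); // 7
    }
}
```

The extra dereference mentioned above is visible in the access syntax itself: `rect[i, j]` is one indexed load into one block, while `jagged[i][j]` first fetches the row reference, then indexes into it.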
What's better in regards to performance? type[,] or type[][]?
[ "", "c#", "performance", "arrays", "" ]