|c#|.net|winforms|
When you use a Windows Forms TextBox, the default number of tab stops (spaces) is 8. How do you modify this?
First add the following namespace:

    using System.Runtime.InteropServices;

Then add the following after the class declaration:

    private const int EM_SETTABSTOPS = 0x00CB;

    [DllImport("User32.dll", CharSet = CharSet.Auto)]
    public static extern IntPtr SendMessage(IntPtr h, int msg, int wParam, int[] lParam);

Then add the following to the Form_Load event:

    // Define the value of the tab indent
    int[] stops = { 16 };
    // Change the indent
    SendMessage(this.textBox1.Handle, EM_SETTABSTOPS, 1, stops);
How do I get a value from an XML web service in C#?
|c#|xml|
In C#, if I need to open an HTTP connection, download XML and get one value from the result, how would I do that? For consistency, imagine the web service is at www.webservice.com and that if you pass it the POST argument fXML=1 it gives you back:

    <xml><somekey>somevalue</somekey></xml>

I'd like it to spit out "somevalue".
It can be used to hide your code from casual observers, but as you say: it's easily circumvented. It really can't be any other way, since the server needs to decrypt the code to execute it. It's DRM, basically, and fails for the same reason as all the other DRM does - you can't simultaneously hide the data, and allow it to be accessed.
If you're leaning towards Python, you may be interested in checking out the Python wrapper for FileMaker. It provides two-way access to the FileMaker data via FileMaker's built-in XML services. You can find some quite thorough information on this at: [http://code.google.com/p/pyfilemaker/][1] [1]: http://code.google.com/p/pyfilemaker/
[Pdftotext][1] An open-source program (part of Xpdf) which you could call from Python (not what you asked for, but it might be useful). I've used it with no problems. I think Google uses it in Google Desktop. [1]: http://en.wikipedia.org/wiki/Pdftotext
If I have to expose APIs, I prefer doing it as JSON. Python has excellent support for JSON (JSON objects deserialize straight into Python dictionaries).
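For instance, a minimal round-trip sketch with Python's built-in `json` module (the payload shape is purely illustrative):

```python
import json

# Serialize a Python dictionary into a JSON string for an API response
payload = {"status": "ok", "results": [1, 2, 3]}
body = json.dumps(payload)

# Consumers decode it straight back into a dictionary
data = json.loads(body)
print(data["results"])  # → [1, 2, 3]
```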
Doesn't ODBC use the old "?" syntax for parameters? Try this:

    select col1, col2 from table1 where col3 = ?

The order of your parameters becomes important then, but it's less vulnerable to SQL injection than simply appending the parameter value.
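To illustrate the positional "?" style, here's a sketch using Python's standard `sqlite3` module, which happens to use the same qmark placeholder convention as ODBC (the table and values are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (col1 TEXT, col2 TEXT, col3 TEXT)")
conn.execute("INSERT INTO table1 VALUES ('a', 'b', 'match')")
conn.execute("INSERT INTO table1 VALUES ('c', 'd', 'other')")

# Parameters are bound positionally, in the order the "?" marks appear;
# the driver handles escaping, so the value can't inject SQL.
rows = conn.execute(
    "SELECT col1, col2 FROM table1 WHERE col3 = ?", ("match",)
).fetchall()
print(rows)  # → [('a', 'b')]
```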
Yep, use table and index partitioning with filegroups. You don't even have to change the select statements - only if you want to squeeze the last bit of speed out of the result. Another option is workload balancing with two servers and two-way replication between them.
Sending e-mail from a Custom SQL Server Reporting Services Delivery Extension
|c#|reporting-services|
I've developed my own delivery extension for Reporting Services 2005, to integrate it with our SaaS marketing solution. It takes the subscription, and takes a snapshot of the report with a custom set of parameters. It then renders the report and sends an e-mail with a link and the report attached as XLS. Everything works fine, until mail delivery... Here's my code for sending e-mail:

    public static List<string> SendMail(SubscriptionData data, Stream reportStream, string reportName, string smptServerHostname, int smtpServerPort)
    {
        List<string> failedRecipients = new List<string>();

        MailMessage emailMessage = new MailMessage(data.ReplyTo, data.To);
        emailMessage.Priority = data.Priority;
        emailMessage.Subject = data.Subject;
        emailMessage.IsBodyHtml = false;
        emailMessage.Body = data.Comment;

        if (reportStream != null)
        {
            Attachment reportAttachment = new Attachment(reportStream, reportName);
            emailMessage.Attachments.Add(reportAttachment);
            reportStream.Dispose();
        }

        try
        {
            SmtpClient smtp = new SmtpClient(smptServerHostname, smtpServerPort);

            // Send the MailMessage
            smtp.Send(emailMessage);
        }
        catch (SmtpFailedRecipientsException ex)
        {
            // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List
            failedRecipients.Add(ex.FailedRecipient);
        }
        catch (SmtpFailedRecipientException ex)
        {
            // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List
            failedRecipients.Add(ex.FailedRecipient);
        }
        catch (SmtpException ex)
        {
            throw ex;
        }
        catch (Exception ex)
        {
            throw ex;
        }

        // Return the List of failed recipient e-mail addresses, so the client can maintain its list.
        return failedRecipients;
    }

The value for smptServerHostname is localhost, and the port is 25. I verified that I can actually send mail by using Telnet, and it works.

**Here's the error message I get from SSRS:**

    ReportingServicesService!notification!4!08/28/2008-11:26:17:: Notification 6ab32b8d-296e-47a2-8d96-09e81222985c completed.
    Success: False, Status: Exception Message: Failure sending mail. Stacktrace: at MyDeliveryExtension.MailDelivery.SendMail(SubscriptionData data, Stream reportStream, String reportName, String smptServerHostname, Int32 smtpServerPort) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MailDelivery.cs:line 48 at MyDeliveryExtension.MyDelivery.Deliver(Notification notification) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MyDelivery.cs:line 153, DeliveryExtension: My Delivery, Report: Clicks Development, Attempt 1
    ReportingServicesService!dbpolling!4!08/28/2008-11:26:17:: NotificationPolling finished processing item 6ab32b8d-296e-47a2-8d96-09e81222985c

**Could this have something to do with Trust/Code Access Security?** My delivery extension is granted full trust in rssrvpolicy.config:

    <CodeGroup class="UnionCodeGroup"
               version="1"
               PermissionSetName="FullTrust"
               Name="MyDelivery_CodeGroup"
               Description="Code group for MyDelivery extension">
        <IMembershipCondition class="UrlMembershipCondition"
                              version="1"
                              Url="C:\Program Files\Microsoft SQL Server\MSSQL.2\Reporting Services\ReportServer\bin\MyDeliveryExtension.dll" />
    </CodeGroup>

Could trust be an issue here?

Another theory: SQL Server and SSRS were installed in the security context of Local System. Am I right, or is this service account restricted from accessing any network resource - even its own SMTP server? I tried changing all SQL Server service logons to Administrator - but still without any success.

I also tried logging onto the SMTP server in my code, by providing:

    NetworkCredential("Administrator", "password")

and also

    NetworkCredential("Administrator", "password", "MyRepServer")

Can anyone help here, please?
> And you can change your code "on the fly" without restarting the server

You can accomplish this with ASP.NET if you make a Web Site project (as opposed to a Web Application project). Using a Web Site project, you can post changes to code-behinds without having to refresh anything on the server, and the server does the compile work for you on all code changes. See <a href="http://msdn.microsoft.com/en-us/library/aa730880(VS.80).aspx#wapp_topic5">here</a> for more info on this. This should also solve your difficulties with deploying the assembly to the GAC. As the server handles all compilations for Web Site projects, you won't have to redeploy any assemblies when changing files.
Capturing Cmd-C keyboard event from modular Flex application in browser or AIR.
|apache-flex|air|
It seems that it is impossible to capture the keyboard event normally used for copy when running a Flex application in the browser or as an AIR app, presumably because the browser or OS is intercepting it first. Is there a way to tell the browser or OS to let the event through? For example, on an AdvancedDataGrid I have set the keyUp event to handleCaseListKeyUp(event), which calls the following function:

    private function handleCaseListKeyUp(event:KeyboardEvent):void
    {
        var char:String = String.fromCharCode(event.charCode).toUpperCase();

        if (event.ctrlKey && char == "C")
        {
            trace("Ctrl-C");
            copyCasesToClipboard();
            return;
        }

        if (!event.ctrlKey && char == "C")
        {
            trace("C");
            copyCasesToClipboard();
            return;
        }

        // Didn't match event to capture, just drop out.
        trace("charCode: " + event.charCode);
        trace("char: " + char);
        trace("keyCode: " + event.keyCode);
        trace("ctrlKey: " + event.ctrlKey);
        trace("altKey: " + event.altKey);
        trace("shiftKey: " + event.shiftKey);
    }

When run, I can never get the release of the "C" key while also pressing the command key (which shows up as KeyboardEvent.ctrlKey). I get the following trace results:

    charCode: 0
    char:
    keyCode: 17
    ctrlKey: false
    altKey: false
    shiftKey: false

As you can see, the only event I can capture is the release of the command key; the release of the "C" key isn't even sent. Has anyone successfully implemented standard copy and paste keyboard handling? Am I destined to just use the "C" key on its own (as shown in the code example) or make a copy button available? Or do I need to create the listener manually at a higher level and pass the event down into my modular application's guts?
`inherited` must be explicitly called in descendant objects, as well as in visual form inheritance. If you use class completion, it adds `inherited` automatically if you flagged the definition as `override` (but not for `reintroduce`). If you are using visual form inheritance, then when you add a new event handler through the form editor, it will add `inherited` as well.
Does anyone know where to find free database design templates?
|database|templates|
I'm obviously not talking about a full solution, but just a good starting point for common applications for software architects. It could be for a CMS, an e-commerce storefront, an address book, etc. A UML diagram is not essential, but at the least a table schema with data types. Thanks!
You might take a look at [Free Pascal][1] as it includes a 64 bit version and is mostly Delphi compatible syntax. [1]: http://www.freepascal.org/
FTS *can* help in this scenario; the question is whether it is worth it or not. To begin with, let's look at why `LIKE` may not be the most effective search. When you use `LIKE`, especially when you are searching with a % at the beginning of your comparison, SQL Server needs to perform both a table scan of every single row *and* a byte-by-byte check of the column you are checking. FTS has some better algorithms for matching data, as well as better statistics on variations of names. Therefore FTS can provide better performance for matching Smith, Smythe, Smithers, etc. when you look for Smith. It is, however, a bit more complex to use FTS, as you'll need to master `CONTAINS` vs `FREETEXT` and the arcane format of the search. However, if you want to do a search where either FName or LName match, you can do that with one statement instead of an OR. To determine if FTS is going to be effective, determine how much data you have. I use FTS on a database of several hundred million rows and that's a real benefit over searching with `LIKE`, but I don't use it on every table. If your table size is more reasonable, less than a few million, you can get similar speed by creating an index for each column that you're going to be searching on, and SQL Server should perform an index scan rather than a table scan.
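As a rough illustration of the token-based matching idea (using SQLite's FTS5 here purely because it ships with Python's standard library — SQL Server's `CONTAINS`/`FREETEXT` syntax differs, and the table/data below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A full-text index: queries match tokens, not raw substrings,
# and matching is case-insensitive by default
conn.execute("CREATE VIRTUAL TABLE people USING fts5(fname, lname)")
conn.execute("INSERT INTO people VALUES ('John', 'Smith')")
conn.execute("INSERT INTO people VALUES ('Jane', 'Jones')")

# One MATCH predicate searches both columns at once,
# instead of OR-ing two LIKE clauses together
rows = conn.execute(
    "SELECT fname, lname FROM people WHERE people MATCH 'smith'"
).fetchall()
print(rows)  # → [('John', 'Smith')]
```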
Update schema and rows in one transaction, SQL Server 2005
|database|transactions|ado|
I'm currently updating a legacy system which allows users to dictate part of the schema of one of its tables. Users can create and remove columns from the table through this interface. This legacy system is using ADO 2.8, and is using SQL Server 2005 as its database (you don't even WANT to know what database it was using before the attempt to modernize this beast began... but I digress. =) ) In this same editing process, users can define (and change) a list of valid values that can be stored in these user created fields (if the user wants to limit what can be in the field). When the user changes the list of valid entries for a field, if they remove one of the valid values, they are allowed to choose a new "valid value" to map any rows that have this (now invalid) value in it, so that they now have a valid value again. In looking through the old code, I noticed that it is extremely vulnerable to putting the system into an invalid state, because the changes mentioned above are not done within a transaction (so if someone else came along halfway through the process mentioned above and made their own changes... well, you can imagine the problems that might cause). The problem is, I've been trying to get them to update under a single transaction, but whenever the code gets to the part where it changes the schema of that table, all of the other changes (updating values in rows, be it in the table where the schema changed or not... they can be completely unrelated tables even) made up to that point in the transaction appear to be silently dropped. I receive no error message indicating that they were dropped, and when I commit the transaction at the end no error is raised... but when I go to look in the tables that were supposed to be updated in the transaction, only the new columns are there. None of the non-schema changes made are saved. Looking on the net for answers has, thus far, proved to be a waste of a couple hours... so I turn here for help. 
Has anyone ever tried to perform a transaction through ADO that both updates the schema of a table and updates rows in tables (be it that same table, or others)? Is it not allowed? Is there any documentation out there that could be helpful in this situation?

EDIT: Okay, I did a trace, and these commands were sent to the database (explanations in parentheses):

**(I don't know what's happening here, looks like it's creating a temporary stored procedure...?)**

<pre><code>declare @p1 int
set @p1=180150003
declare @p3 int
set @p3=2
declare @p4 int
set @p4=4
declare @p5 int
set @p5=-1
</code></pre>

**(Retrieving the table that holds definition information for the user-generated fields)**

<pre><code>exec sp_cursoropen @p1 output,N'SELECT * FROM CustomFieldDefs ORDER BY Sequence',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
go
</code></pre>

**(I think my code was iterating through the list of them here, grabbing the current information)**

<pre><code>exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursorfetch 180150003,1025,1,1
go
exec sp_cursorfetch 180150003,1028,1,1
go
exec sp_cursorfetch 180150003,32,1,1
go
</code></pre>

**(This appears to be where I'm entering the modified data for the definitions; I go through each and update any changes that occurred in the definitions for the custom fields themselves)**

<pre><code>exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=1,@Description='asdf',@Format='U|',@IsLookUp=1,@Length=50,@Properties='U|',@Required=1,@Title='__asdf',@Type='',@_Version=1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursor
180150003,33,1,N'[CustomFieldDefs]',@Sequence=2,@Description='give',@Format='Y',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_give',@Type='B',@_Version=1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=3,@Description='up',@Format='###-##-####',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_up',@Type='N',@_Version=1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=4,@Description='Testy',@Format='',@IsLookUp=0,@Length=50,@Properties='',@Required=0,@Title='_Testy',@Type='',@_Version=1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=5,@Description='you',@Format='U|',@IsLookUp=0,@Length=250,@Properties='U|',@Required=0,@Title='_you',@Type='',@_Version=1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=6,@Description='never',@Format='mm/dd/yyyy',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_never',@Type='D',@_Version=1
go
exec sp_cursorfetch 180150003,32,1,1
go
exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=7,@Description='gonna',@Format='###-###-####',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_gonna',@Type='C',@_Version=1
go
exec sp_cursorfetch 180150003,32,1,1
go
</code></pre>

**(This is where my code removes the column deleted through the interface before this saving began... it is also the ONLY thing, as far as I can tell, that actually happens during this transaction)**

<pre><code>ALTER TABLE CustomizableTable DROP COLUMN _weveknown;
</code></pre>

**(Now if any of the definitions were altered in such a way that the user-created column's properties need to be changed or indexes on the columns need to be added/removed, it is done here, along with giving a default value to any rows that didn't have a value yet for the given column... 
note that, as far as I can tell, NONE of this actually happens when the stored procedure finishes.)** <code><pre> go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '__asdf' go ALTER TABLE CustomizableTable ALTER COLUMN __asdf VarChar(50) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx___asdf') CREATE NONCLUSTERED INDEX idx___asdf ON CustomizableTable ( __asdf ASC) WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF); go select * from IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx___asdf') CREATE NONCLUSTERED INDEX idx___asdf ON CustomizableTable ( __asdf ASC) WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF); go UPDATE CustomizableTable SET [__asdf] = '' WHERE [__asdf] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_give' go ALTER TABLE CustomizableTable ALTER COLUMN _give Bit NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__give') DROP INDEX idx__give ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_give] = 0 WHERE [_give] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_up' go ALTER TABLE CustomizableTable ALTER COLUMN _up Int NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__up') DROP INDEX idx__up ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_up] = 0 WHERE [_up] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_Testy' go ALTER TABLE CustomizableTable ADD _Testy VarChar(50) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = 
OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__Testy') DROP INDEX idx__Testy ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_Testy] = '' WHERE [_Testy] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_you' go ALTER TABLE CustomizableTable ALTER COLUMN _you VarChar(250) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__you') DROP INDEX idx__you ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_you] = '' WHERE [_you] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_never' go ALTER TABLE CustomizableTable ALTER COLUMN _never DateTime NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__never') DROP INDEX idx__never ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_never] = '1/1/1900' WHERE [_never] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_gonna' go ALTER TABLE CustomizableTable ALTER COLUMN _gonna Money NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__gonna') DROP INDEX idx__gonna ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_gonna] = 0 WHERE [_gonna] IS NULL go </pre></code> **(Closing the Transaction...?)** <code><pre> exec sp_cursorclose 180150003 go </pre></code> After all that ado above, only the deletion of the column occurs. Everything before and after it in the transaction appears to be ignored, and there were no messages in the SQL Trace to indicate that something went wrong during the transaction.
Simple:

    // Create the namespace object only if it doesn't already exist.
    // (The var declaration avoids a ReferenceError when MyNamespace
    // hasn't been declared anywhere else yet.)
    var MyNamespace = MyNamespace || {};

    MyNamespace.foo = function() {
        this.length = 0;
    };

    MyNamespace.foo.prototype.getLength = function() {
        return this.length;
    };

    // Usage:
    // var f = new MyNamespace.foo();
    // f.getLength(); // 0
@Thomas WISA is: W=Windows<br> I=IIS<br> S=SQL (Microsoft SQL Server)<br> A=ASP (or ASP .NET) As for choosing between them, I would think that the available resources and talent would be the deciding factor. If you can get great ASP .NET and MS SQL devs, go that route. If you've got a bunch of PHP/MySQL gurus on hand, go LAMP. The reality is, regardless of the pros and cons of the platform, you'll struggle to get a great system on WISA out of a primarily PHP dev team, and vice versa.
Lucene doesn't allow you to start WildcardQueries with an asterisk by default, because those are incredibly expensive queries and will be very, very, very slow on large indexes. If you're using the Lucene QueryParser, call setAllowLeadingWildcard(true) on it to enable it. If you want all of the documents with a certain field set, you are much better off querying or walking the index programmatically than using QueryParser. You should really only use QueryParser to parse user input.
> What do you do if you are given a pile of crap and seem like you are stuck in a perpetual state of cleanup that you know with the addition of any new feature or code can break the current set because the current software is like a house of cards? > How can we do unit testing then? You start small. The project I just got into had no unit testing until a few months ago. When coverage was that low, we would simply pick a file that had no coverage and click "add tests". Right now we're up to over 40%, and we've managed to pick off most of the low-hanging fruit. (The best part is that even at this low level of coverage, we've already run into many instances of the code doing the wrong thing, and the testing caught it. That's a huge motivator to push people to add more testing.)
Cost is our biggest thing pushing us towards the LAMP environment, no question about it. Trying to go through Corporate procurement for Windows and SQL server licenses is horrific.
How to write a spec that is productive?
|specs|
I've seen different program managers write specs in different formats; almost everyone has had his/her own style of writing a spec. On one hand are those wordy documents which, given to a programmer, are likely to cause him/her to miss a few things. I personally dread Word document specs... I think it's because of my reading style: I am always speed-reading, which I think causes me to miss out on key points. On the other hand, I have seen innovative specs written in Excel by one of our clients. The way he wrote the spec was to create a mock application in Excel and use some VBA to mock it up. He would do things like, on button click, where the form should go or what action it should perform (in comments). On a data form, he would display a form in cells, and on each data entry cell he would comment on what the valid values are, what validation it should perform, etc. I think that with this technique, it was less likely to miss out on things that needed to be done. Also, it was much easier for the developer to unit test. The tester, too, had a better understanding of the system, as it 'performed' before actually being written. Visio is another tool for doing screen design, but I still think Excel has an edge over it, considering its VBA support and its functions. Do you think this should become a more popular way of writing specs? I know it involves a bit of extra work on the part of the project manager (or whoever is writing the spec), but the payoff is huge... I myself could see a lot of productivity gain from using it. And are there any better formats of specs that would actually help the programmer?
The [WMD editor][1] is completely free (in the speech and beer senses of the word). It's available under an MIT-style license. [1]: http://wmd-editor.com/
C2 wiki discussion: http://c2.com/cgi/wiki?BuyDontBuild
[Sandcastle][1] works pretty well for generating documentation, and it supports a couple of different output formats. As far as I know, there isn't any Visual Studio integration for it, but there is a [SandcastleGUI][2] available. [1]: http://www.codeplex.com/Sandcastle [2]: http://www.inchl.nl/SandcastleGUI/
It turns out that readFully() adds more performance overhead than I was willing to put up with. In the end I compromised: I call skipBytes() once, and if that returns fewer than the right number of bytes, I call readFully() for the remaining bytes.
It sounds like the simplest thing to do here may be to create an abstract base class with all of the common GIS members and then to inherit the other X classes that will have nothing more than the necessary NHibernate mappings. I would then use the Factory pattern to create the object of the specific type using your metadata.
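A minimal sketch of that shape in Python (all class names here are hypothetical — substitute your actual GIS types and NHibernate-mapped subclasses):

```python
from abc import ABC, abstractmethod

class GisFeature(ABC):
    """Abstract base class holding the common GIS members."""

    def __init__(self, feature_id):
        self.feature_id = feature_id

    @abstractmethod
    def kind(self):
        ...

# Concrete subclasses add little beyond what the mapping layer needs
class PointFeature(GisFeature):
    def kind(self):
        return "point"

class PolygonFeature(GisFeature):
    def kind(self):
        return "polygon"

# Factory: choose the concrete type from metadata at runtime
_REGISTRY = {"point": PointFeature, "polygon": PolygonFeature}

def create_feature(metadata_type, feature_id):
    return _REGISTRY[metadata_type](feature_id)

f = create_feature("polygon", 42)
print(type(f).__name__, f.kind())  # → PolygonFeature polygon
```

The same idea maps onto C#/NHibernate: the abstract base carries the shared members, and the factory consults your metadata to pick which mapped subclass to instantiate.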
It's the [WMD](http://wmd-editor.com/) Markdown editor, which is free and seems to be pretty easy to use. Just include the JavaScript for it and (in the easiest case) it just attaches to the first textarea it finds. Here's some info about the [Perl implementation of Markdown](http://daringfireball.net/projects/markdown/) which, according to the site, WMD is 100% compatible with.

[@Chris Upchurch](http://stackoverflow.com/questions/51808#51822) Technically the current release isn't open-source, just free to use. The next version is supposed to be released with an MIT license though.

> "now completely free to use. The next release will be open source under an MIT-style license."
With PowerShell:

    Get-Process | Sort-Object CPU -Descending | Select-Object -First 3 | Format-Table CPU, ProcessName -HideTableHeaders

returns something like:

    16.8041632 System
    12.568072 csrss
    11.9872368 powershell
As an alternative, log4net, though potentially dangerous, has given me better results than System.Diagnostics. Basically, in log4net you have a method for each of the various log levels, each with an overload taking an Exception parameter. When you pass the exception as the second argument, it will print the stack trace to whichever appender you have configured. Example: `Logger.Error("Danger!!!", myException);`
I think it will be useful to read this first: [Creating and Consuming a Web Service][1] (in .NET) This is a series of tutorials of how web services are used in .NET, including how XML input is used (deserialization). [1]: https://web.archive.org/web/20211020134836/https://aspnet.4guysfromrolla.com/articles/062602-1.aspx
Tested only in Octave... (no MATLAB license). Variations of arrayfun() exist; check the documentation.

    #!/usr/bin/octave -qf
    function result = dostuff(my_ten, my_one)
        result = my_ten + my_one
    endfunction

    tens = [ 10 20 30 ];
    ones = [ 1 2 3 ];    % note: this shadows the built-in ones() function

    x = arrayfun(@dostuff, tens, ones);
    x

Yields...

    x =
       11   22   33
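For comparison, a rough Python analogue of the same element-wise application, using only the standard library (no NumPy) — `map()` over two sequences plays the role of `arrayfun()` over two arrays:

```python
def dostuff(my_ten, my_one):
    return my_ten + my_one

tens = [10, 20, 30]
units = [1, 2, 3]  # named "units" to avoid shadowing anything

# map() applies the function element-wise across both sequences,
# pairing tens[i] with units[i], much like arrayfun() does
x = list(map(dostuff, tens, units))
print(x)  # → [11, 22, 33]
```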
How to insert a row into a dataset via SSIS
|sql-server|ssis|
I'm trying to create an SSIS package that takes data from an XML data source and, for each row, inserts another row with some preset values. Any ideas? I'm thinking I could use a DataReader source to generate the preset values by doing the following:

    SELECT 'foo' as 'attribute1', 'bar' as 'attribute2'

The question is, how would I insert one row of this type for every row in the XML data source?
Java Desktop application framework
|java|eclipse|desktop|platform|netbeans|
I am working on designing and building a desktop application. I am thinking about using Eclipse or NetBeans as the base of this application. However, I have never built on either of these platforms. I am personally leaning towards NetBeans because it seems like that platform has an easier learning curve. But I wanted to ask people that have actually built on these platforms: which one is easier to use? My personal definition of easier is as follows:

* Easy to get started with
* Consistent and logical API
* Good documentation
* Easy to build and deploy

Thanks very much, Josh
You can use something like this:

    var client = new WebClient();
    var response = client.UploadValues("www.webservice.com", "POST", new NameValueCollection {{"fXML", "1"}});
    using (var reader = new StringReader(Encoding.UTF8.GetString(response)))
    {
        var xml = XElement.Load(reader);
        var value = xml.Element("somekey").Value;
        Console.WriteLine("Some value: " + value);
    }

Note I didn't have a chance to test this code, but it should work :)
Avoid [modes][1]. It's frustrating to a user when input works sometimes but not others, or does different things at different times. [1]: http://www.osnews.com/story/18904/Common_Usability_Terms_pt_V_Modes
Win32 ToolTip disappears never to re-appear with Commctl 6
|windows|winapi|
|project-management|productivity|specs|
The best way to deal with it is to probably put a no-cache directive in your ASP.NET pages (or a master page if you're using one). I don't think there's a way to deal with this directly in your ASP.NET code (since the cache decision is happening on the client). As for MVC, don't know how you would accomplish that (assuming it's different from Web Forms-based ASP.NET); I haven't used it.
One of the Microsoft Press books has excellent examples of various documents, including an SRS (which I think is what you are talking about). It might be one of the requirements books by Karl Wiegers (I think that's his name, I'm blanking on it right now). I've seen US government organizations use that as a template, and from my three work experiences with the government, they like to make their own wherever they can, so if they are reusing it, it must be good. Also - a spec should contain NO CODE, in my opinion. It should focus on what the system must do, should do, and can not do using text and diagrams.
Best method to parse various custom XML documents in Java
|java|xml|
What is the best method to parse multiple, discrete, custom XML documents with Java?
[Joel on Software][1] is particularly good at these and has some good articles about the subject... [a specific case][2] [1]: http://www.joelonsoftware.com/articles/fog0000000036.html [2]: http://www.joelonsoftware.com/articles/AardvarkSpec.html
As far as I know (or at least have read), it's best to try not to work in response to user events, but rather think "in the page". Architect your application so it doesn't care if the back button is pushed - it will just deal with it. This may mean a little extra work from a development point of view, but overall will make the application a lot more robust. I.e. if step 3 performs some data changes, then the user clicks back (to step 2) and clicks next again, the application checks to see if the changes have been made. Or ideally, it doesn't make any _hard_ changes until the user clicks "OK" at the end. This way, all the changes are stored and you can repopulate the form based on previously entered values on load, each and every time. I hope that makes sense :)
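The "check whether the changes were already made" idea above amounts to making each wizard step idempotent. A minimal sketch in Python (hypothetical names, purely for illustration - the original context is ASP.NET):

```python
def apply_step_changes(record: dict, new_values: dict) -> bool:
    """Apply a wizard step's changes idempotently.

    Returns False if the changes are already present, so a user who
    clicks Back and then Next again does no harm.
    """
    if all(record.get(k) == v for k, v in new_values.items()):
        return False  # already applied; nothing to do
    record.update(new_values)
    return True

profile = {}
assert apply_step_changes(profile, {"plan": "gold"}) is True   # first pass applies
assert apply_step_changes(profile, {"plan": "gold"}) is False  # back/next is a no-op
```

The same shape works whether the "record" is session state or a database row: re-running the step is always safe.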
WISA can be cheap: if your application doesn't need anything beyond shared hosting, there is little cost. It can also be expensive - then again, so can LAMP once you get to the same size. Personally, I like the WISA stack, but it's more out of familiarity than anything. Two things that stand out:

* SQL Server - only Oracle comes close to this; none of the free RDBMSes can even hold a candle to it.
* C# - performance-wise, it's far better than any of the big three P's in LAMP (Perl, PHP and Python). Of course, if you use Java it's comparable.

There is no need to be religious about one or the other. Do what fits your needs best, and do what you prefer to work in.
My decision was based on two things. First and foremost, I hated programming in ASP. I did it for an old job, and when given a choice I would choose PHP. I also tend to enjoy Linux over Windows. When it came to actually picking, though, the corporate heads chose LAMP due to cost. Because let's be honest, as developers, language isn't that big of a deal. One thing I didn't get into, but apparently MySQL isn't exactly free in business situations. I don't know the details, but you should look into it before getting sued.
Capturing Cmd-C (or Ctrl-C) keyboard event from modular Flex application in browser or AIR.
It seems that it is impossible to capture the keyboard event normally used for copy when running a Flex application in the browser or as an AIR app, presumably because the browser or OS is intercepting it first.

Is there a way to tell the browser or OS to let the event through?

For example, on an AdvancedDataGrid I have set the keyUp event to handleCaseListKeyUp(event), which calls the following function:

    private function handleCaseListKeyUp(event:KeyboardEvent):void
    {
        var char:String = String.fromCharCode(event.charCode).toUpperCase();

        if (event.ctrlKey && char == "C")
        {
            trace("Ctrl-C");
            copyCasesToClipboard();
            return;
        }

        if (!event.ctrlKey && char == "C")
        {
            trace("C");
            copyCasesToClipboard();
            return;
        }

        // Didn't match event to capture, just drop out.
        trace("charCode: " + event.charCode);
        trace("char: " + char);
        trace("keyCode: " + event.keyCode);
        trace("ctrlKey: " + event.ctrlKey);
        trace("altKey: " + event.altKey);
        trace("shiftKey: " + event.shiftKey);
    }

When run, I can never get the release of the "C" key while also pressing the command key (which shows up as KeyboardEvent.ctrlKey). I get the following trace results:

    charCode: 0
    char:
    keyCode: 17
    ctrlKey: false
    altKey: false
    shiftKey: false

As you can see, the only event I can capture is the release of the command key; the release of the "C" key while holding the command key isn't even sent.

Has anyone successfully implemented standard copy and paste keyboard handling? Am I destined to just use the "C" key on its own (as shown in the code example) or make a copy button available? Or do I need to create the listener manually at a higher level and pass the event down into my modular application's guts?
    var today = new Date();
    // setDate() returns a timestamp (a number), not a Date,
    // so mutate a second Date object rather than assigning the return value
    var yesterday = new Date();
    yesterday.setDate(today.getDate() - 1);
    var d = new Date();
    d.setDate(d.getDate() - 1);
The technical answer is that the ***Grammar*** of the PHP language only allows subscript notation on the end of **variable expressions** and not **expressions** in general, which is the case in most other languages. I've always viewed it as a deficiency in the language, because it is possible to have a grammar that resolves subscripts against any expression unambiguously. It could be the case, however, that they're using an inflexible parser generator or they simply don't want to break some sort of backwards compatibility.

Here are a couple more examples of invalid subscripts on valid expressions:

    $x = array(1,2,3);
    print ($x)[1]; // illegal, inside a parenthetical expression, not a variable

    function ret($foo) { return $foo; }
    echo ret($x)[1]; // illegal, inside a call expression, not a variable
*getDate()-1* should do the trick.

Quick example:

    var day = new Date("January 1 2008");
    day.setDate(day.getDate() - 1);
    alert(day);
If I understand this correctly... We had an object that could use different hardware options. To facilitate this we used an abstract interface, Device. Device had a bunch of functions that would be fired on certain events. The usage would be the same, but the various implementations of Device would either have fully fleshed-out functions or just return immediately. To make life even easier, the functions were void and threw exceptions when something went wrong.
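That arrangement is essentially the null-object pattern. A minimal sketch in Python (the names `Device`, `RealDevice`, `NullDevice` and the event handlers are hypothetical; the original was presumably in another language):

```python
class Device:
    """Abstract hardware interface: handlers fired on certain events."""
    def on_power_up(self) -> None:
        raise NotImplementedError
    def on_data_ready(self, data: bytes) -> None:
        raise NotImplementedError

class RealDevice(Device):
    """Hardware option present: fully fleshed-out handlers."""
    def __init__(self):
        self.log = []
    def on_power_up(self) -> None:
        self.log.append("powered up")
    def on_data_ready(self, data: bytes) -> None:
        if not data:
            raise ValueError("empty payload")  # errors surface as exceptions
        self.log.append(data)

class NullDevice(Device):
    """Hardware option absent: every handler just returns immediately."""
    def on_power_up(self) -> None:
        pass
    def on_data_ready(self, data: bytes) -> None:
        pass

# Calling code is identical regardless of which option is installed.
for device in (RealDevice(), NullDevice()):
    device.on_power_up()
    device.on_data_ready(b"\x01")
```

The caller never branches on which hardware is present; the "do nothing" behavior lives in the null implementation instead.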
You'll probably have to use a tool, such as [ILMerge][1], to merge the two assemblies. [1]: http://research.microsoft.com/~mbarnett/ILMerge.aspx
I recommend that you use the [Simple Logging Facade for Java][1] (SLF4J). It supports different providers that include Log4J and can be used as a replacement for Apache Commons Logging. [1]: http://slf4j.org
I tend to prefer using the new LINQ syntax:

    myListView.DataSource = (
        from rec in GetAllRecords().Values
        where rec.Name == "foo"
        select rec ).ToList();
    myListView.DataBind();

Why are you getting a dictionary when you don't use the key? You're paying for that overhead.
Thanks @Eric, it works! Just a few code corrections for future reference:

- The first select statement is missing a comma after wp_term_relationship tr2
- In the same select statement the following must be changed: <pre><code>wp_terms t2, wp_term_taxonomy tt2, wp_term_relationship tr2</code></pre> should be <pre><code>wp_terms t3, wp_term_taxonomy tt3, wp_term_relationship tr3</code></pre>
What you are looking for is called DOSKEY. You can use the `doskey` command to create macros in the command interpreter. For example:

    doskey mcd=mkdir "$*"$Tpushd "$*"

creates a new command "mcd" that creates a new directory and then changes to that directory (I prefer "pushd" to "cd" in this case because it lets me use "popd" later to go back to where I was before). The $* will be replaced with the remainder of the command line after the macro, and the $T is used to delimit the two different commands that I want to evaluate. If I typed:

    mcd foo/bar

at the command line, it would be equivalent to:

    mkdir "foo/bar"&pushd "foo/bar"

The next step is to create a file that contains a set of macros which you can then import by using the /macrofile switch. I have a file (c:\tools\doskey.macros) which defines the commands that I regularly use. Each macro should be specified on a line with the same syntax as above.

But you don't want to have to manually import your macros every time you launch a new command interpreter. To make it happen automatically, just open up the registry key

    HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor\AutoRun

and set the value to be `doskey /macrofile "c:\tools\doskey.macros"`. Doing this will make sure that your macros are automatically predefined every time you start a new interpreter.

Extra thoughts:

- If you want to do other things in AutoRun (like set environment parameters), you can delimit the commands with the ampersand. Mine looks like: `set root=c:\SomeDir&doskey /macrofile "c:\tools\doskey.macros"`
- If you prefer that your AutoRun settings be set per-user, you can use the HKCU node instead of HKLM.
- You can also use doskey to control things like the size of the command history.
- I like to end all of my navigation macros with `$*` so that I can chain things together.
- Be careful to add quotes as appropriate in your macros if you want to be able to handle paths with spaces in them.
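To make the /macrofile part concrete, a macro file like the `c:\tools\doskey.macros` mentioned above is just one macro per line; it might look something like this (only `mcd` comes from this answer - the other entries are hypothetical examples):

    mcd=mkdir "$*"$Tpushd "$*"
    home=pushd "%USERPROFILE%"$*
    np=notepad $*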
Yes, [A-Za-z] will be much faster than setting the `RegexOptions.IgnoreCase`, largely because of Unicode strings. But it's also much more limiting -- [A-Za-z] does *not* match accented international characters, it's literally the A-Za-z ASCII set and nothing more. I don't know if you saw Tim Bray's answer to my message, but it's a good one: > One of the trickiest issues in internationalized search is upper and lower case. This notion of case is limited to languages written in the Latin, Greek, and Cyrillic character sets. English-speakers naturally expect search to be case-insensitive if only because they’re lazy: if Nadia Jones wants to look herself up on Google she’ll probably just type in nadia jones and expect the system to take care of it. > So it’s fairly common for search systems to “normalize” words by converting them all to lower- or upper-case, both for indexing and queries. > The trouble is that the mapping between cases is not always as straightforward as it is in English. For example, the German lower-case character “ß” becomes “SS” when upper-cased, and good old capital “I” when down-cased in Turkish becomes the dotless “ı” (yes, they have “i”, its upper-case version is “İ”). I have read (but not verified first-hand) that the rules for upcasing accented characters such “é” are different in France and Québec. One of the results of all this is that software such as java.String.toLowerCase() tends to run astonishingly slow as it tries to work around all these corner-cases. http://www.tbray.org/ongoing/When/200x/2003/10/11/SearchI18n
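The ß corner case Bray mentions is easy to see for yourself. A quick check using only the Python 3 standard library (Python chosen just for illustration; the same mappings apply in any Unicode-correct runtime):

```python
# German sharp s upper-cases to TWO characters:
print("ß".upper())           # SS

# ...so naive lower()/upper() round-trips are lossy:
print("SS".lower())          # ss  (not ß)

# casefold() is the aggressive normalization intended for
# caseless matching; it maps both spellings to the same string:
print("Straße".casefold() == "strasse".casefold())  # True
```

This is one reason case-insensitive matching over full Unicode is slower than a literal `[A-Za-z]` class: the engine has to consult these mapping tables instead of flipping one ASCII bit.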
Both `FREETEXTTABLE` and `CONTAINSTABLE` will return the `[RANK]` column, but make sure you are using either the correct variation or union both of them to get all appropriate results.
Using [P/Invoke Interop Assistant][1]:

    [System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential)]
    public struct SidIdentifierAuthority {
        /// BYTE[6]
        [System.Runtime.InteropServices.MarshalAsAttribute(
            System.Runtime.InteropServices.UnmanagedType.ByValArray,
            SizeConst = 6,
            ArraySubType = System.Runtime.InteropServices.UnmanagedType.I1)]
        public byte[] Value;
    }

    public partial class NativeMethods {
        /// Return Type: BOOL->int
        ///pIdentifierAuthority: PSID_IDENTIFIER_AUTHORITY->_SID_IDENTIFIER_AUTHORITY*
        ///nSubAuthorityCount: BYTE->unsigned char
        ///nSubAuthority0: DWORD->unsigned int
        ///nSubAuthority1: DWORD->unsigned int
        ///nSubAuthority2: DWORD->unsigned int
        ///nSubAuthority3: DWORD->unsigned int
        ///nSubAuthority4: DWORD->unsigned int
        ///nSubAuthority5: DWORD->unsigned int
        ///nSubAuthority6: DWORD->unsigned int
        ///nSubAuthority7: DWORD->unsigned int
        ///pSid: PSID*
        [System.Runtime.InteropServices.DllImportAttribute("advapi32.dll", EntryPoint = "AllocateAndInitializeSid")]
        [return: System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.Bool)]
        public static extern bool AllocateAndInitializeSid(
            [System.Runtime.InteropServices.InAttribute()] ref SidIdentifierAuthority pIdentifierAuthority,
            byte nSubAuthorityCount,
            uint nSubAuthority0,
            uint nSubAuthority1,
            uint nSubAuthority2,
            uint nSubAuthority3,
            uint nSubAuthority4,
            uint nSubAuthority5,
            uint nSubAuthority6,
            uint nSubAuthority7,
            out System.IntPtr pSid);
    }

[1]: http://www.codeplex.com/clrinterop
You mean like this?

    <edmx:ConceptualModels>
      <Schema xmlns="http://schemas.microsoft.com/ado/2006/04/edm" Namespace="Model1" Alias="Self">
        <EntityContainer Name="Model1Container" >
          <EntitySet Name="ColorSet" EntityType="Model1.Color" />
          <EntitySet Name="DoctorSet" EntityType="Model1.Doctor" />
          <EntitySet Name="PatientSet" EntityType="Model1.Patient" />
          <EntitySet Name="UsedCarSet" EntityType="Model1.UsedCar" />
          <AssociationSet Name="Vehicle_Color" Association="Model1.Vehicle_Color">
            <End Role="Colors" EntitySet="ColorSet" />
            <End Role="Vehicles" EntitySet="UsedCarSet" />
          </AssociationSet>
          <AssociationSet Name="DoctorPatient" Association="Model1.DoctorPatient">
            <End Role="Doctor" EntitySet="DoctorSet" />
            <End Role="Patient" EntitySet="PatientSet" />
          </AssociationSet>
        </EntityContainer>
        <EntityType Name="Color">
          <Key>
            <PropertyRef Name="ColorID" />
          </Key>
          <Property Name="ColorID" Type="Int32" Nullable="false" />
          <NavigationProperty Name="Vehicles" Relationship="Model1.Vehicle_Color" FromRole="Colors" ToRole="Vehicles" />
        </EntityType>
        <EntityType Name="Doctor">
          <Key>
            <PropertyRef Name="DoctorID" />
          </Key>
          <Property Name="DoctorID" Type="Int32" Nullable="false" />
          <NavigationProperty Name="Patients" Relationship="Model1.DoctorPatient" FromRole="Doctor" ToRole="Patient" />
        </EntityType>
        <EntityType Name="Patient">
          <Key>
            <PropertyRef Name="PatientID" />
          </Key>
          <Property Name="PatientID" Type="Int32" Nullable="false" />
          <NavigationProperty Name="Doctors" Relationship="Model1.DoctorPatient" FromRole="Patient" ToRole="Doctor" />
        </EntityType>
        <EntityType Name="UsedCar">
          <Key>
            <PropertyRef Name="VehicleID" />
          </Key>
          <Property Name="VehicleID" Type="Int32" Nullable="false" />
          <NavigationProperty Name="Color" Relationship="Model1.Vehicle_Color" FromRole="Vehicles" ToRole="Colors" />
        </EntityType>
        <Association Name="Vehicle_Color">
          <End Type="Model1.Color" Role="Colors" Multiplicity="1" />
          <End Type="Model1.UsedCar" Role="Vehicles" Multiplicity="*" />
        </Association>
        <Association Name="DoctorPatient">
          <End Type="Model1.Doctor" Role="Doctor" Multiplicity="*" />
          <End Type="Model1.Patient" Role="Patient" Multiplicity="*" />
        </Association>
      </Schema>
    </edmx:ConceptualModels>
I'm creating a [ToolTip](http://msdn.microsoft.com/en-us/library/bb760250&#40;VS.85&#41;.aspx) window and adding tools to it using the flags TTF_IDISHWND | TTF_SUBCLASS (C++, Win32). I have a manifest file such that my program uses the new Windows XP themes (comctl32 version 6).

When I hover over a registered tool, the tip appears. Good. When I click the mouse, the tip disappears. Ok. However, moving away from the tool and back again does not make the tip re-appear. I need to hover over a different tool and then come back to my tool to get the tip to come back. When I remove my manifest file (to use the older non-XP comctl32), the problem goes away.

After doing some experimentation, I discovered the following differences between ToolTips in comctl32 version 5 (old) and comctl32 version 6 (new):

- New TTF_TRANSPARENT ToolTips (when used in-place) actually return HTCLIENT from WM_NCHITTEST if a mouse button is down, thus getting WM_LBUTTONDOWN and stealing focus for a moment before vanishing. This causes the application's border to flash.
- Old TTF_TRANSPARENT ToolTips always return HTTRANSPARENT from WM_NCHITTEST, and thus never get WM_LBUTTONDOWN themselves and never steal focus. (This seems to be just aesthetic, but may impact the next point...)
- New ToolTips seem not to get WM_TIMER events after a mouse-click, and only resume getting (a bunch of) timer events after being de-activated and re-activated. Thus, they do not re-display their tip window after a mouse click and release.
- Old ToolTips get a WM_TIMER message as soon as the mouse is moved again after click/release, so they are ready to re-display their tip.

Thus, as a comctl32 workaround, I had to:

- subclass the TOOLTIPS_CLASS window and always return HTTRANSPARENT from WM_NCHITTEST if the tool asked for transparency.
- avoid using TTF_SUBCLASS and rather process the mouse messages myself so I could de-activate/re-activate upon receiving WM_xBUTTONUP.
I assume that the change in internal behavior was to accommodate the new "clickable" features in ToolTips like hyperlinks, but the hover behavior appears to be thus broken. Does anyone know of a better solution than my subclass workaround? Am I missing some other point?
Will random data appended to a JPG make it unusable?
|hacks|jpeg|file-format|steganography|
So, to simplify my life I want to be able to append from 1 to 7 additional characters on the end of some JPG images my program is processing*. Having tried this out with a few programs, it appears they are fine with the additional characters, which occur after the [FF D9 that specifies the end of the image][1] - so it appears that the file format is well defined enough that the 'corruption' I'm adding at the end shouldn't matter.

I can always post-process the files later if needed, but my preference is to do the simplest thing possible - which is to let them remain (I'm decrypting other file types and they won't mind, so having a special case is annoying).

I figure with all the [Steganography][2] hullabaloo years ago, someone has some input here...

-Adam

\* (encryption processing by 8-byte blocks; I don't want to save the pre-encrypted file size, so I append 0x00 to the input data and leave them there after decoding)

[1]: http://en.wikipedia.org/wiki/JPEG#Syntax_and_structure
[2]: http://en.wikipedia.org/wiki/Steganography
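For what it's worth, the pad/strip scheme described above can be sketched in a few lines of Python (helper names are made up; the `fake_jpeg` below is just the SOI/EOI markers around dummy bytes, not a real image):

```python
BLOCK = 8
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def pad_to_block(data: bytes) -> bytes:
    """Append 0x00 bytes so len(data) is a multiple of BLOCK (adds 0-7 bytes)."""
    return data + b"\x00" * (-len(data) % BLOCK)

def strip_padding(data: bytes) -> bytes:
    """Optional post-processing: drop everything after the last EOI marker.

    Safe here because the padding is all 0x00 and so can never
    contain a spurious FF D9 sequence itself.
    """
    end = data.rfind(EOI)
    return data[: end + len(EOI)] if end != -1 else data

fake_jpeg = b"\xff\xd8" + b"image-data" + EOI
padded = pad_to_block(fake_jpeg)
assert len(padded) % BLOCK == 0
assert strip_padding(padded) == fake_jpeg
```

So even if you leave the trailing zeros in place, recovering the exact original file later is a one-line `rfind`.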
The JAutodoc plugin for eclipse does exactly what you need, but with a package granularity : right click on a package, select "Add javadoc for members..." and the skeleton will be added. There are numerous interesting options : templates for javadoc, adding a TODO in the header of every file saying : "template javadoc, must be filled...", etc.
The Server Explorer should support any database system that provides an ODBC driver. In the case of Oracle there is a built-in driver with Visual Studio. In the Add Connection dialog, click the Change button next to the data source; you should then get a list of the providers you have drivers for.
Here are instructions on how to connect to your MySQL database from Visual Studio:

> To make the connection in Server Explorer you need to do the following:
>
> * First of all you need to install the MyODBC connector 3.51 (or latest) on the development machine (NB. you can find this at http://www.mysql.com/products/connector/odbc/)
>
> * Create a data source in Control Panel/Administrative Tools with a connection to your database. This data source is going to be used purely for Server Manager and you don't need to worry about creating the same data source on your clients' PCs when you have made your VS.NET application (unless you want to) - I don't want to cover this in this answer, too long. For the purpose of this explanation I will pretend that you created a MyODBC data source called 'AADSN' to database 'noddy' on MySQL server 'SERVER01' and have a root password of 'fred'. The server can be either the Computer Name (found in Control Panel/System/Computer Name), or alternatively it can be the IP address. NB. Make sure that you test this connection before continuing with this explanation.
>
> * Open your VS.NET project
>
> * Go to Server Explorer
>
> * Right-click on 'Data Connections'
>
> * Select 'Add Connection'
>
> * In DataLink Properties, go to the Provider tab and select "Microsoft OLE DB Provider For ODBC drivers"
>
> * Click Next
>
> * If you previously created an ODBC data source then you could just select that. The disadvantage of this is that when you install your project application on the client machine, the same data source needs to be there. I prefer to use a connection string. This should look something like:
>
>       DSN=AADSN;DESC=MySQL ODBC 3.51 Driver DSN;DATABASE=noddy;SERVER=SERVER01;UID=root;PASSWORD=fred;PORT=3306;SOCKET=;OPTION=11;STMT=;
>
> If you omit the password from the connection string then you must make sure that the data source you created (AADSN) contains a password. I am not going to describe what these mean; you can look in the documentation for MyODBC for that. Just ensure that you get a "Connection Succeeded" message when you test the data source.
Starting a new job focused on brownfield application refactoring & Agile
|.net|vb.net|refactoring|brownfield|
I am starting a new job on Monday. The company has a home grown enterprise case management application written in ASP.NET/VB.NET. They are attempting to implement an Agile development process. They have gone so far as to get two people Scrum Master certified and hire an Agile coach. They are currently focused on 6-9 months of refactoring. My question is what are some good approaches/tooling given this environment for becoming familiar with the code base and being productive as soon as I hit the ground?