Suppose I have the following C code:
unsigned int u = 1234;
int i = -5678;
unsigned int result = u + i;
What implicit conversions are going on here, and is this code safe for all values of u and i? (safe, in the sense that even though _result_ in this example will overflow to some huge positive number, I could cast it back to an *int* and get the real result)
|
<Assuming Windows>
If they only need to edit Office files one user at a time, infrequently, put the files on a network share with appropriate permissions and back them up nightly. Office will warn them if someone already has a file open.
If it's more complicated than just Office files, consider SharePoint. I think SVN is too complicated, especially since conflict resolution and comparison of binary files (e.g. old Word docs) don't really work. |
See section 2.7, *Type Conversions* in the K&R book |
So you can clean up any open connections, etc. initialized in the try block. If you opened a connection and then an exception occurred, that connection would not be properly closed. This type of scenario is what the finally block is for. |
Implicit casting and integer overflow in the evaluation of expressions in C. |
Does this help:
http://www.west-wind.com/weblog/posts/76293.aspx |
Does this help:
http://www.west-wind.com/weblog/posts/76293.aspx
Response.ContentType = "application/octet-stream";
Response.AppendHeader("Content-Disposition","attachment; filename=logfile.txt");
Response.TransmitFile( Server.MapPath("~/logfile.txt") );
Response.End();
Response.TransmitFile is the accepted way of sending large files, instead of Response.WriteFile.
|
On Vista, could it also be that it's "marked" as unsafe because it's been downloaded from the internet, and you have to click the Unblock button on its Explorer properties dialog? |
Check out the **db-utils** package. If you use apt, you can install it with the following: `apt-get install db4.4-util`
Additional links:
- [http://rpmfind.net/linux/rpm2html/search.php?query=db-utils][1]
- [http://packages.ubuntu.com/search?suite=default&section=all&arch=any&searchon=names&keywords=db4.4-util][2]
[1]: http://rpmfind.net/linux/rpm2html/search.php?query=db-utils
[2]: http://packages.ubuntu.com/search?suite=default&section=all&arch=any&searchon=names&keywords=db4.4-util |
Check out the **db-utils** package. If you use apt, you can install it with the following: `apt-get install db4.4-util`
Additional links:
- [http://rpmfind.net/linux/rpm2html/search.php?query=db-utils][1]
- [http://packages.ubuntu.com/search?keywords=db4.4-util][2]
- [Man page of db4.4_dump][3]
[1]: http://rpmfind.net/linux/rpm2html/search.php?query=db-utils
[2]: http://packages.ubuntu.com/search?keywords=db4.4-util
[3]: http://www.linux.gr/cgi-bin/man/man2html?db4.4_dump+1 |
<pre><code>
ALTER TABLE table ADD COLUMN tmp INT; -- use the same type as X and Y
UPDATE table SET tmp = X;
UPDATE table SET X = Y;
UPDATE table SET Y = tmp;
ALTER TABLE table DROP COLUMN tmp;
</code></pre>
Something like this? |
<pre><code>
ALTER TABLE table ADD COLUMN tmp INT; -- use the same type as X and Y
UPDATE table SET tmp = X;
UPDATE table SET X = Y;
UPDATE table SET Y = tmp;
ALTER TABLE table DROP COLUMN tmp;
</code></pre>
Something like this?
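Alternatively, MySQL is commonly reported to swap two columns without a schema change by going through a user variable, since single-table UPDATE assignments are applied left to right (a sketch; `@tmp` is just an illustrative name, and this left-to-right behavior is MySQL-specific, not standard SQL):

```sql
UPDATE table SET X = (@tmp := X), X = Y, Y = @tmp;
```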
Edit: About Greg's comment:
No, this doesn't work:
<pre><code>
mysql> select * from test;
+------+------+
| x | y |
+------+------+
| 1 | 2 |
| 3 | 4 |
+------+------+
2 rows in set (0.00 sec)
mysql> update test set x=y, y=x;
Query OK, 2 rows affected (0.00 sec)
Rows matched: 2 Changed: 2 Warnings: 0
mysql> select * from test;
+------+------+
| x | y |
+------+------+
| 2 | 2 |
| 4 | 4 |
+------+------+
2 rows in set (0.00 sec)
</code></pre> |
Auto defines in C editors... Why? |
|c|eclipse|define| |
When I let Eclipse create a new file (.c or .h) in a C project, the editor always auto-creates a #define at the top of the file. For example, if the file is named 'myCFile.c' there will be a #define at the start of the file like this:
#ifndef MYCFILE_C_
#define MYCFILE_C_
I have seen other editors do this as well (Codewright and SlikEdit I think).
The #defines don't seem to do anything for the editor as I can just delete them without any problem, and I can't think of a reason why *I* would want to use them. Does anyone know why they are there? |
The first answer you gave uses what is known as ANSI join syntax; the other is also valid and will work in any relational database.
I agree with grom that you should use ANSI join syntax. As they said, the main reason is clarity. Rather than having a where clause with lots of predicates, some of which join tables and others restricting the rows returned, with ANSI join syntax you make it blindingly clear which conditions are used to join your tables and which are used to restrict the results. |
JohnLavoie - you don't need that. DATE in Oracle is actually a date and time data type. The only difference between DATE and TIMESTAMP is that DATE resolves down to the second while TIMESTAMP resolves down to fractions of a second. Therefore the Ask Tom article is perfectly valid for TIMESTAMP columns as well. |
Sometimes using reflection can be a bit of a hack, but a lot of the time it's simply the most fantastic code tool.
Look at the .Net property grid - anyone who's used Visual Studio will be familiar with it. You can point it at any object and it will produce a simple property editor. That uses reflection; in fact most of VS's toolbox does.
Look at unit tests - they're loaded by reflection (at least in NUnit and MSTest).
Reflection allows dynamic-style behaviour from static languages.
The one thing it really needs is duck typing - the C# compiler already supports this: you can `foreach` anything that looks like `IEnumerable`, whether it implements the interface or not. You can use the C#3 collection syntax on any class that has a method called `Add`.
Use reflection wherever you need dynamic-style behaviour - for instance you have a collection of objects and you want to check the same property on each.
The risks are similar to those of dynamic types - compile-time exceptions become run-time ones. Your code is not as 'safe' and you have to react accordingly.
The .Net reflection code is very quick, but not as fast as the explicit call would have been. |
I recently used reflection in C# for finding implementations of a specific interface. I had written a simple batch-style interpreter that looked up "actions" for each step of the computation based on the class name. Reflecting over the current namespace then turns up the right implementation of my IStep interface, which can be Execute()ed. This way, adding new "actions" is as easy as creating a new derived class - no need to add it to a registry, or even worse: forgetting to add it to a registry... |
As the mantra goes 'Go with the simplest thing that can possibly work.'
- If fake classes can get the job done, go with them.
- If you need an interface with multiple methods to be mocked, go with a mock framework.
Avoid *always* using mocks, because they make tests brittle. Your tests end up with intimate knowledge of the methods called by the implementation; if the mocked interface changes, your tests break. So use your best judgment. |
As the mantra goes 'Go with the simplest thing that can possibly work.'
- If fake classes can get the job done, go with them.
- If you need an interface with multiple methods to be mocked, go with a mock framework.
Avoid *always* using mocks, because they make tests brittle. Your tests end up with intimate knowledge of the methods called by the implementation; if the mocked interface or your implementation changes, your tests break. This is bad because you'll spend additional time getting your tests to run instead of just getting your SUT to run. *Tests should not be inappropriately intimate with the implementation.* So use your best judgment. |
We have been developing an Outlook add-in using Visual Studio 2008. However, I am facing strange behavior when adding a command button to a custom command bar. The behavior shows up when we add the button in the Reply, Reply All and Forward windows. The issue is that the caption of the command button is not visible, though when we debug using VS it shows the caption correctly. The button is captionless when viewed in Outlook (2003).
I have the code snippet as below. Any help would be appreciated.
private void AddButtonInNewInspector(Microsoft.Office.Interop.Outlook.Inspector inspector)
{
    try
    {
        if (inspector.CurrentItem is Microsoft.Office.Interop.Outlook.MailItem)
        {
            try
            {
                foreach (CommandBar c in inspector.CommandBars)
                {
                    if (c.Name == "custom")
                    {
                        c.Delete();
                    }
                }
            }
            catch
            {
            }
            finally
            {
                // Add the custom command bar and command button.
                CommandBar myCommandBar = inspector.CommandBars.Add("custom", MsoBarPosition.msoBarTop, false, true);
                myCommandBar.Visible = true;
                CommandBarControl myCommandbarButton = myCommandBar.Controls.Add(MsoControlType.msoControlButton, 1, "Add", System.Reflection.Missing.Value, true);
                myCommandbarButton.Caption = "Add Email";
                myCommandbarButton.Width = 900;
                myCommandbarButton.Visible = true;
                myCommandbarButton.DescriptionText = "This is the Add Email button";
                CommandBarButton btnclickhandler = (CommandBarButton)myCommandbarButton;
                btnclickhandler.Click += new Microsoft.Office.Core._CommandBarButtonEvents_ClickEventHandler(this.OnAddEmailButtonClick);
            }
        }
    }
    catch (System.Exception ex)
    {
        MessageBox.Show(ex.Message, "AddButtonInNewInspector");
    }
} |
Actually, I recently made the switch from Visual Studio (*years* of experience there) to Linux and the first thing I did was try to find a reasonable IDE.
And then I noticed that this simply isn't how you work there*, and I threw everything out, spent a few days reading manuals, set up my shell (bash), set up a GVIM environment, learned the GCC/binutils toolchain, `make` and `gdb` and lived happily ever after.
There are certainly disadvantages, but I am sure that I have worked much more productively since making the switch.
*) It <em>really</em> isn't! And that's not an anachronism either because the toolchain described above is incredibly mature and powerful. Any IDE has to replicate hundreds of unique features to get anywhere near the power of these tools that have been honed to the point of perfection for decades. The learning curve might be quite steep, but much less so than I expected. |
I'm currently using this:
String content = null;
URLConnection connection = null;
try {
connection = new URL("http://www.google.com").openConnection();
Scanner scanner = new Scanner(connection.getInputStream());
scanner.useDelimiter("\\Z");
content = scanner.next();
}catch ( Exception ex ) {
ex.printStackTrace();
}
System.out.println(content);
But not sure if there's a better way. |
You may want to check the documentation for Apache's `org.apache.commons.httpclient` package (HttpClient) and the related packages [here][1]. Sending an HTTP request from a Java application is pretty easy to do. Poking through the documentation should get you off in the right direction.
[1]: http://hc.apache.org/httpclient-3.x/apidocs/index.html |
Something like:
Pattern exp = Pattern.compile(
"<meta name=\"generator\" value=\"([^\"]*)\" />");
Matcher matcher=exp.matcher(html);
if(matcher.find())
{
System.out.println("Generator: "+matcher.group(1));
} |
Depends on how sophisticated an HTTP request you need to build (authentication, etc.). Here's one simple way I've seen used in the past.
StringBuilder html = new StringBuilder();
java.net.URL url = new URL("http://www.google.com/");
BufferedReader input = null;
try {
input = new BufferedReader(
new InputStreamReader(url.openStream()));
String htmlLine;
while ((htmlLine=input.readLine())!=null) {
html.append(htmlLine);
}
}
finally {
if (input != null) input.close();
}
Pattern exp = Pattern.compile(
"<meta name=\"generator\" value=\"([^\"]*)\" />");
Matcher matcher = exp.matcher(html.toString());
if(matcher.find())
{
System.out.println("Generator: "+matcher.group(1));
}
*Probably plenty of typos here to be found when compiled.
(hope this wasn't homework)*
|
I haven't tried this, but wouldn't the basic framework be
1. Open a java.net.HttpURLConnection
2. Get an input stream using getInputStream
3. Use the regular expression in Mike's answer to parse out the bit you want
|
This error often occurs because of incompatibility between the VMware Tools version and recent kernels (you can test this using older kernels). Sometimes you can fix things with patches found all over the internet, but I prefer to downgrade my kernel, or to avoid the latest distribution version in VMware. It can be really annoying. Another problem you may have is with your mouse pointer in X Windows, appearing an inch to the left of or below where it really is.
About vmware-any-any-update117: it's a patch for VMware running under Linux, usually the Workstation version. It won't have any effect on the Tools. |
This is a topic I was just talking about at work. The main problem is that unless database migrations are handled for you nicely by your framework (e.g. Rails and its migration scripts), it is left up to you.
The current way that we do it has apparent flaws, and I am open to other suggestions.
1. Have a schema dump with static data that is required to be there kept up to date and in version control.
2. Every time you do a schema-changing action (ALTER, CREATE, etc.), dump it to a file and throw it in version control.
3. Make sure you update the original sql db dump.
4. When doing pushes to live make sure you or your script applies the sql files to the db.
5. Clean up old sql files that are in version control as they become old.
This is by no means optimal and is really not intended as a "backup" db. It's simply to make pushes to live easy, and to keep developers on the same page. There is probably something cool you could setup with capistrano as far as automating the application of the sql files to the db.
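In the absence of framework support, step 2 above can be scripted; a rough sketch (the `db/migrations` layout, file naming, and the example statement are my own illustrative conventions, not from the post):

```shell
#!/bin/sh
# Record a schema-changing statement as a timestamped file that is
# then committed to version control (statement and paths illustrative).
mkdir -p db/migrations
stamp=$(date +%Y%m%d%H%M%S)
echo "ALTER TABLE users ADD COLUMN email VARCHAR(255);" \
    > "db/migrations/${stamp}_add_email.sql"
ls db/migrations
```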
Db specific version control would be pretty awesome. There is probably something that does that and if there isn't there probably should be. |
As the mantra goes 'Go with the simplest thing that can possibly work.'
1. If fake classes can get the job done, go with them.
2. If you need an interface with multiple methods to be mocked, go with a mock framework.
Avoid *always* using mocks, because they make tests brittle. Your tests end up with intimate knowledge of the methods called by the implementation; if the mocked interface or your implementation changes, your tests break. This is bad because you'll spend additional time getting your tests to run instead of just getting your SUT to run. *Tests should not be inappropriately intimate with the implementation.*
So use your best judgment. I prefer mocks when they'll save me writing and updating a fake class with n >> 3 methods.
**Update** Epilogue/Deliberation:
(Thanks to Toran Billups for example of a mockist test. See below)
Hi Doug, well I think we've transcended into another holy war - classic TDDers vs mockist TDDers. I think I belong to the former.
- If I am on test #101, Test_ExportProductList, and I find I need to add a new param to IProductService.GetProducts(), I do that to get this test green. I use a refactoring tool to update all other references. Now I find all the mockist tests calling this member blow up. Then I have to go back and update all these tests - a waste of time. Why did ShouldPopulateProductsListOnViewLoadWhenPostBackIsFalse fail? Was it because the code is broken? No - the tests are broken. I favor *one test failure = one place to fix*. Mocking frequently goes against that. Would stubs be better? If I had a fake_class.GetProducts()... sure, one place to change instead of shotgun surgery over multiple Expect calls. In the end it's a matter of style; if you had a common utility method, MockHelper.SetupExpectForGetProducts(), that'd also suffice, but you'll see that this is uncommon.
- If you cover up the test name, the test is hard to read. A lot of plumbing code for the mock framework hides the actual test being performed.
- It requires you to learn this particular flavor of mocking framework.
|
[TurboGears](http://www.turbogears.org/) currently uses [SQLObject](http://www.sqlobject.org/) by default but you can use it with [SQLAlchemy](http://docs.turbogears.org/1.0/SQLAlchemy). They are saying that the next major release of TurboGears (1.1) will use SQLAlchemy by default. |
Unexpected behaviour of Process.MainWindowHandle |
|c#|.net|msdn| |
I've been trying to understand [Process.MainWindowHandle][1].
According to MSDN; "The main window is the window that is created when the process is started. After initialization, other windows may be opened, including the Modal and TopLevel windows, but <em>the first window associated with the process remains the main window</em>." (Emphasis added)
But while debugging I noticed that MainWindowHandle seemed to change value... which I wasn't expecting, especially after consulting the documentation above.
To confirm the behaviour I created a standalone WinForms app with a timer to check the MainWindowHandle of the "DEVENV" (Visual Studio) process every 100ms.
Here's the interesting part of this test app...
IntPtr oldHWnd = IntPtr.Zero;
void GetMainwindowHandle()
{
Process[] processes = Process.GetProcessesByName("DEVENV");
if (processes.Length!=1)
return;
IntPtr newHWnd = processes[0].MainWindowHandle;
if (newHWnd != oldHWnd)
{
oldHWnd = newHWnd;
textBox1.AppendText(processes[0].MainWindowHandle.ToString("X")+"\r\n");
}
}
private void timer1Tick(object sender, EventArgs e)
{
GetMainwindowHandle();
}
You can see the value of MainWindowHandle changing when you (for example) click on a drop-down menu inside VS.
![MainWindowHandleMystery][2]
Perhaps I've misunderstood the documentation.
Can anyone shed light?
[1]: http://msdn.microsoft.com/en-gb/library/system.diagnostics.process.mainwindowhandle.aspx
[2]: http://i37.tinypic.com/11ky1yb.jpg |
1. [Code::Blocks][1]
2. [Eclipse CDT][2]
[1]: http://www.codeblocks.org/
[2]: http://www.eclipse.org/cdt/
Soon you'll find that IDEs are not enough, and you'll have to learn the GCC toolchain anyway (which isn't hard, at least learning the basic functionality). But no harm in reducing the transitional pain with the IDEs, IMO. |
I would strongly recommend getting a copy of Michael Feathers' book [Working Effectively With Legacy Code][1] (by "Legacy Code" Feathers means any system that isn't adequately covered by unit tests). It is full of good ideas which should help you with your refactoring and phasing in of best practices.
Downsides of ORM - from experience, there is a bit of a learning curve in getting to grips with the concepts and idiosyncracies of the chosen ORM solution. But once you've overcome these you should find that future development and maintenance is considerably swifter. I don't miss the days when I used to laboriously hand-craft data access layers.
[1]: http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052 |
I would strongly recommend getting a copy of Michael Feathers' book [Working Effectively With Legacy Code][1] (by "Legacy Code" Feathers means any system that isn't adequately covered by unit tests). It is full of good ideas which should help you with your refactoring and phasing in of best practices.
Sure, you could phase in the introduction of an ORM, initially using it for accessing some subset of your domain model. And yes, I have found that use of an ORM speeds up development time - this is one of the key benefits and I certainly don't miss the days when I used to laboriously hand-craft data access layers.
Downsides of ORM - from experience, there is inevitably a bit of a learning curve in getting to grips with the concepts, configuration and idiosyncracies of the chosen ORM solution.
[1]: http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052 |
I would strongly recommend getting a copy of Michael Feathers' book [Working Effectively With Legacy Code][1] (by "Legacy Code" Feathers means any system that isn't adequately covered by unit tests). It is full of good ideas which should help you with your refactoring and phasing in of best practices.
Sure, you could phase in the introduction of an ORM, initially using it for accessing some subset of your domain model. And yes, I have found that use of an ORM speeds up development time - this is one of the key benefits and I certainly don't miss the days when I used to laboriously hand-craft data access layers.
Downsides of ORM - from experience, there is inevitably a bit of a learning curve in getting to grips with the concepts, configuration and idiosyncracies of the chosen ORM solution.
*Edit: corrected author's name*
[1]: http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052 |
Your PM's concerns are valid - make sure you get your system under test before making any major refactorings.
I would strongly recommend getting a copy of Michael Feathers' book [Working Effectively With Legacy Code][1] (by "Legacy Code" Feathers means any system that isn't adequately covered by unit tests). This is chock full of good ideas for how to break down those couplings and dependencies you speak of, in a safe manner that won't risk introducing regression bugs.
[1]: http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052
|
|c| |
Lets take the code
int a, b, c;
...
if ((a + b) > C)
If the values of a and b sum to more than the max value of an int, will the integrity of the comparison be compromised? I was thinking that there might be an implicit upcast, or an overflow bit that is checked and factored into the evaluation of this expression. |
C: Implicit casting and integer overflow in the evaluation of expressions |
Well, for one thing, if you RETURN inside your try block, the finally will still run, but code listed below the try-catch-finally block will not. |
The finally block is supposed to execute whether you caught the exception or not.
See [Try / Catch / Finally example][1]
[1]: http://neptune.netcomp.monash.edu.au/JavaHelp/howto/try_catch_finally.htm#whyFinally |
In the case that the try and the catch are empty, there is no difference. Otherwise you can be sure that the finally will be executed.
If you, for example, throw a new exception in your catch block (rethrow), then the assignment will only be executed if it is in the finally block.
Normally a finally is used to clean up after yourself (close DB connections, file handles and the like).
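A minimal sketch of the clean-up guarantee (names here are illustrative) - the finally block runs even when the try returns early:

```java
public class FinallyDemo {
    static StringBuilder log = new StringBuilder();

    static int run() {
        try {
            log.append("try;");
            return 1;                  // finally still runs before this returns
        } finally {
            log.append("finally;");    // clean-up always executes
        }
    }

    public static void main(String[] args) {
        System.out.println(run() + " " + log);  // prints "1 try;finally;"
    }
}
```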
You should never use control statements (return, break, continue) in a finally, as this can be a maintenance nightmare and is therefore considered bad practice. |
When you cast from signed to unsigned (and vice versa) the internal representation of the number does not change. What changes is how the compiler interprets the sign bit. So yes, aside from the possible overflows, it is safe to cast from signed to unsigned, though the result will probably be much larger after changing sign. |
Ok, so what you really need is a "jagged array". This will allow you to have an "array that contains other arrays of varying lengths".
Dim arr As Integer()() = {New Integer() {1, 2, 3}, New Integer() {4, 5, 6, 7, 8, 9, 9, 9}, New Integer() {5, 4, 3, 2}}
For x = 0 To arr.GetUpperBound(0)
Console.WriteLine("Row " & x & " has " & (arr(x).GetUpperBound(0) + 1) & " columns")
For y = 0 To arr(x).GetUpperBound(0)
Console.WriteLine("(" & x & "," & y & ") = " & arr(x)(y))
Next
Next
Output:
Row 0 has 3 columns
(0,0) = 1
(0,1) = 2
(0,2) = 3
Row 1 has 8 columns
(1,0) = 4
(1,1) = 5
(1,2) = 6
(1,3) = 7
(1,4) = 8
(1,5) = 9
(1,6) = 9
(1,7) = 9
Row 2 has 4 columns
(2,0) = 5
(2,1) = 4
(2,2) = 3
(2,3) = 2
|
Dog should return a Leg, not a DogLeg, as the return type. The actual class may be a DogLeg, but the point is to decouple, so the user of Dog doesn't have to know about DogLegs; they only need to know about Legs.
Change:
class Dog : Animal
{
public override DogLeg GetLeg() {...}
}
to:
class Dog : Animal
{
public override Leg GetLeg() {...}
}
Don't Do this:
if(a instanceof Dog){
DogLeg dl = (DogLeg)a.GetLeg();
it defeats the purpose of programming to the abstract type. |
In an MVC framework, the model has datasource classes that contain the actual database language; in most cases the developer uses save, saveField, delete, find, etc. methods and the framework translates these into SQL queries. This is not only safer and easier, it is also very convenient in that the code is datasource-independent, i.e. you can change database servers and keep the code. |
Have a look at SOS Assist (http://old.thinktecture.com/SOSAssist/default.html); it provides a GUI around SOS. |
By default, the browser does not print background images at all.
In Firefox
* File > Page Setup > Check Off "Print Background"
* File > Print Preview
In IE
* Tools > Internet Options > Advanced > Printing
* Check Off "Print Background Images and Colors"
In Opera
* File > Print Options > Check Off "Print Page Background"
* File > Print Preview (You may have to scroll down/up to see it refresh)
|
Use the [ASCII table][1] to pick a range of letters, where $range_start and $range_end are values from the decimal column in the ASCII table.
// range is numbers through capital and lower case letters
$range_start = 48;
$range_end = 122;
$random_string = "";
$random_string_length = 10;
for ($i = 0; $i < $random_string_length; $i++) {
$ascii_no = round( mt_rand( $range_start , $range_end ) ); // generates a number within the range
// finds the character represented by $ascii_no and adds it to the random string
// study **chr** function for a better understanding
$random_string .= chr( $ascii_no );
}
echo $random_string;
See More:
- [chr][2] function
- [mt_rand][3] function
[1]: http://www.asciitable.com/
[2]: http://au.php.net/chr
[3]: http://au.php.net/mt_rand |
Use the [ASCII table][1] to pick a range of letters, where $range_start and $range_end are values from the decimal column in the ASCII table.
I find this method nicer compared [to the method described][2], where the range of characters is specifically defined within another string.
// range is numbers (48) through capital and lower case letters (122); note a few punctuation characters fall inside this range too
$range_start = 48;
$range_end = 122;
$random_string = "";
$random_string_length = 10;
for ($i = 0; $i < $random_string_length; $i++) {
$ascii_no = round( mt_rand( $range_start , $range_end ) ); // generates a number within the range
// finds the character represented by $ascii_no and adds it to the random string
// study **chr** function for a better understanding
$random_string .= chr( $ascii_no );
}
echo $random_string;
See More:
- [chr][3] function
- [mt_rand][4] function
[1]: http://www.asciitable.com/
[2]: http://stackoverflow.com/questions/48124/generating-pseudorandom-alpha-numeric-strings#48125
[3]: http://au.php.net/chr
[4]: http://au.php.net/mt_rand |
This may well be a bug in the designer; if you take a look at the .designer.cs file (maybe doing a diff from before and after you set NullValue to null) you should be able to see the code it generates. |
Storing Windows passwords. |
|windows|password|security| |
You can transform your XML data using [XSLT][1].
Another option is to use XLinq.
If you want a concrete code example, provide us with sample data.
[1]: http://www.xml.com/pub/a/2002/08/14/dotnetxslt.html |
You can transform your XML data using [XSLT][1].
Another option is to use XLinq.
If you want a concrete code example, provide us with sample data.
**EDIT**:
here is a sample XSLT transform for your XML file:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="text"/>
<xsl:template match="//error/serverVariables">
<xsl:text>Server variables:
</xsl:text>
<xsl:for-each select="item">
<xsl:value-of select="@name"/>:<xsl:value-of select="value/@string"/>
<xsl:text>
</xsl:text>
</xsl:for-each>
</xsl:template>
<xsl:template match="//error/queryString">
<xsl:text>Query string items:
</xsl:text>
<xsl:for-each select="item">
<xsl:value-of select="@name"/>:<xsl:value-of select="value/@string"/>
<xsl:text>
</xsl:text>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
You can apply this transform using [XslCompiledTransform][2] class.
It should give output like this:
> Server variables:
> ALL_HTTP:HTTP_CONNECTION:close HTTP_USER_AGENT:Mozilla/4.0 (compatible MSIE 6.0; Windows NT 5.1; SV1)
> AUTH_TYPE:
> HTTPS:off
> HTTPS_KEYSIZE:
> HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;S )
>
> Query string items:
> tid:196
[1]: http://www.xml.com/pub/a/2002/08/14/dotnetxslt.html
[2]: http://msdn.microsoft.com/en-us/library/system.xml.xsl.xslcompiledtransform.aspx |
You could use something like:
svnadmin dump repositorypath | gzip > backupname.svn.gz |
You could use something like (Linux):
svnadmin dump repositorypath | gzip > backupname.svn.gz
Since Windows does not ship with GZip, there it is just:
svnadmin dump repositorypath > backupname.svn |
It's to guard against [multiple definitions][1].
[1]: http://www.fredosaurus.com/notes-cpp/preprocessor/ifdef.html |
I think it's a throwback to C include issues, where multiple copies of the source would get included unless you are meticulous with include chains (one file includes n others).
Checking whether a symbol is defined, and including the contents only if it is not yet defined, was a way out of this. |
I've recently started using CruiseControl.NET (<http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET>). It works reasonably well, although configuration could be easier. CruiseControl.NET is free and open source, and seems to integrate with most standard tools, although I've personally only used it with CVS, SVN, NUnit and MSBuild. |
Sometimes people include a whole .c file in other .c files (or even .h files), so it has the exact same purpose of preventing an include file from getting included multiple times and the compiler spitting out multiple definition errors.
It is strange, though, that it would be the default behavior of an editor to put this in anything but a .h file. This would be a rarely needed feature. |
Your PM's concerns are valid - make sure you get your system under test before making any major refactorings.
I would strongly recommend getting a copy of Michael Feathers' book [Working Effectively With Legacy Code][1] (by "Legacy Code" Feathers means any system that isn't adequately covered by unit tests). This is chock full of good ideas for how to break down those couplings and dependencies you speak of, in a safe manner that won't risk introducing regression bugs.
Good luck with the refactoring programme; in my experience it's an enjoyable and cathartic process from which you can learn a lot.
[1]: http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052
|
Your PM's concerns are valid - make sure you get your system under test before making any major refactorings.
I would strongly recommend getting a copy of Michael Feathers' book [Working Effectively With Legacy Code][1] (by "Legacy Code" Feathers means any system that isn't adequately covered by unit tests). This is chock full of good ideas for how to break down those couplings and dependencies you speak of, in a safe manner that won't risk introducing regression bugs.
Good luck with the refactoring programme; in my experience it's an enjoyable and cathartic process from which you can learn a lot.
[1]: http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052
|
It depends exactly what you mean by 'better'. I'd go for the object oriented way (using classes) because I find it makes for cleaner code (at least in my opinion). However, I'm not sure what the speed penalties might be for that option. |
C++ IDE for linux |
|c++|ide|linux| |
|c++|linux|ide| |
I'm currently in the process of expanding my programming horizons to Linux. In order to do that, it is important to have a good basic toolset you can rely on, and what is more basic than the IDE in which you write your code? (Honestly, you don't want to code in Notepad; been there, done that.)
There are two other questions/answers I could find here at Stack Overflow that are somewhat related:
- [Lightweight IDE for linux][1] and
- [What tools do you use to develop
C++ applications on Linux?][2]
but I'm not really looking for a *lightweight* IDE, and if it is really worth the money I will pay for it, so it doesn't need to be free either.
So my question is: *what is a good IDE available on Linux to use as a programming platform for writing C++ code?*
The minimum should be what any other good IDE offers: syntax highlighting, code completion (like [IntelliSense][3] or its Eclipse counterpart) and integrated debugging (basic breakpoints are good).
I have already searched for this myself, but there is so much to choose from that it is almost impossible to separate the good from the bad by hand, especially for someone like me without any C++ coding experience on Linux. However, I do know that [Eclipse supports C++][4], and I really like that IDE for Java, but is it any good for C++, and won't I miss out on something that is even better?
The second post actually has some good suggestions, but what I am missing is what exactly makes the suggested IDEs so good for the user - what are their advantages/disadvantages?
Maybe my question should therefore be: *what IDE do you propose given your own experience with it, and why that one? ... convince me*
[1]: http://stackoverflow.com/questions/2756/lightweight-ide-for-linux
[2]: http://stackoverflow.com/questions/17228/what-tools-do-you-use-to-develop-c-applications-on-linux
[3]: http://en.wikipedia.org/wiki/IntelliSense
[4]: http://www.eclipse.org/cdt/ |
Other than the downtime a few weeks ago, none that I've heard of.
They did a good job, considering the one time it was down was because of an obscure server error that cascaded throughout the cloud. They were very open about it and resolved it as soon as they found out. (It happened during a weekend, IIRC.)
So they are pretty reliable. My advice is to double-check your code, and bring it up with Amazon support if it is still a problem. |
I just left [this post in your other thread][1], though what you have above might work as well. I don't think either would be any easier than the other. The Apache packages can be accessed by just using `import org.apache.commons.httpclient.HttpClient` at the top of your code.
Edit: Forgot the link ;)
[1]: http://stackoverflow.com/questions/31415/quick-way-to-find-a-value-in-html-java |
I was using the basicHttp binding, but the problem was actually with the XmlSerializer. It doesn't properly recognize the WSDL generated by WCF (even with basicHttp bindings) for anything other than basic value types.
We got around this by adding the reference to the 3.0 DLLs and using the DataContractSerializer. |
I assume you're on Vista using VS2008? In that case I think that the [FOS_PICKFOLDERS option][1] is being used when calling the Vista file dialog [IFileDialog][2]. I'm afraid that in .NET code this would involve plenty of gnarly P/Invoke interop code to get working.
[1]: http://msdn.microsoft.com/en-us/library/bb761832.aspx
[2]: http://msdn.microsoft.com/en-us/library/bb775966.aspx |
I agree, quad-checking your code would be a good idea. I'm not saying that it can't happen, but I don't believe that I have ever seen it, and I've used S3 a pretty good bit now. I have, however, mismanaged exceptions/connection breaks a few times and ended up with pieces that didn't match what I was expecting.
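One cheap sanity check for that kind of mismatch (a sketch, not something S3 hands you as a helper; the payloads here are invented) is to compare the MD5 of the bytes you actually received against the ETag header, which for plain non-multipart uploads is the hex MD5 of the object:

```python
import hashlib

def body_matches_etag(body: bytes, etag: str) -> bool:
    """Compare a downloaded S3 body against the response's ETag header.

    For plain (non-multipart) uploads the ETag is the hex MD5 of the
    object, usually wrapped in double quotes.
    """
    return hashlib.md5(body).hexdigest() == etag.strip('"')

# Simulate a download: the ETag the server sent vs. the bytes we got
sent_etag = '"' + hashlib.md5(b"expected payload").hexdigest() + '"'
print(body_matches_etag(b"expected payload", sent_etag))   # True
print(body_matches_etag(b"truncated payl", sent_etag))     # False
```

If the check fails after a retry or a dropped connection, you know the problem is on your side of the wire rather than in what S3 stored.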
I would be pretty surprised if they actually send bad data, but, as always, anything is possible. |
The Brail view engine has been implemented to be used in ASP.NET MVC. The [MvcContrib][1] project implemented the code. The [source code][2] is located on Google Code.
As far as the controllers go, I really am not sure; I am not that familiar with Boo. I know a lot of developers use it for configuration instead of XML, for instance. My tip would be: if Boo classes can inherit from the Controller base class and you stick to the naming conventions, you should be all right. If you vary from the naming conventions, you will need to implement your own IControllerFactory to instantiate the Boo controllers as the requests come in.
I have been following the ASP.NET MVC bits since the first CTP, and in that whole time I have not seen anybody use Boo to code with. I think you will be the first to try to accomplish this.
[1]: http://www.mvccontrib.org
[2]: http://code.google.com/p/mvccontrib/ |
I uninstalled the previous 32-bit version, reinstalled as 64-bit, and now I get a completely different error. It's mentioned as requiring FP2 to fix, but since I'm using Express-C, I can't install the fixpack (IBM doesn't provide fixpacks for free DB2 products). Anyway, thanks for the help. At least I can come closer to connecting now. :) |
I have a lot of experience with this. My application is highly iterative, and schema changes happen frequently. I do a production release roughly every 2 to 3 weeks, with 50-100 items cleared from my FogBugz list for each one. Every release we've done over the last few years has required schema changes to support new features.
The key to this is to practice the changes several times in a test environment before actually making them on the live servers.
I keep a deployment checklist file that is copied from a template and then heavily edited for each release with anything that is out of the ordinary.
I have two scripts that I run on the database, one for schema changes, one for programmability (procedures, views, etc). The change script is coded by hand, and the one with the procs is scripted via PowerShell. The change script is run when everything is turned off (you have to pick a time that annoys the least amount of users for this), and it is run command by command, manually, just in case anything goes weird. The most common problem I have run into is adding a unique constraint that fails due to duplicate rows.
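The command-by-command run is easier if you first split the script into its batches. A rough sketch of that splitting step (a hypothetical helper; my actual tooling is hand-run SQL plus PowerShell, and the schema statements below are made up):

```python
def split_batches(script: str) -> list[str]:
    """Split a T-SQL change script on its GO separators so each batch
    can be run, and eyeballed, one at a time."""
    batches, current = [], []
    for line in script.splitlines():
        if line.strip().upper() == "GO":
            if current:
                batches.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        batches.append("\n".join(current))   # trailing batch with no final GO
    return batches

script = (
    "ALTER TABLE Users ADD Email nvarchar(256) NULL\n"
    "GO\n"
    "CREATE UNIQUE INDEX IX_Users_Email ON Users(Email)\n"
    "GO"
)
print(split_batches(script))   # two batches, one statement each
```

Running each batch individually is what lets you stop at the unique-constraint failure instead of discovering it halfway through a long script.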
When preparing for an integration testing cycle, I go through my checklist on a test server, as if that server was production. Then, in addition to that, I go get an actual copy of the production database (this is a good time to swap out your offsite backups), and I run the scripts on a restored local version (which is also good because it proves my latest backup is sound). I'm killing a lot of birds with one stone here.
So that's 4 databases total:
1. Dev: all changes must be made in the change script, never with Management Studio.
2. Test: Integration testing happens here
3. Copy of production: Last minute deployment practice
4. Production
You really, really need to get it right when you do it on production. Backing out schema changes is hard.
As far as hotfixes, I will only ever hotfix procedures, never schema, unless it's a very isolated change and crucial for the business. |
I'm writing an administrative application which will poll multiple Windows systems for various bits of data. In many cases it will use WMI, but in some cases it may need to read remote registry or remotely execute some command or script on the polled system. These functions will require admin-level access to the polled systems. Now, I expect that in most use cases, there will be a domain, and the polling application can run as a service with Domain Admin (or equivalent) privileges, which means I do not have to worry about storing passwords - the admin setting up the app will define the service's username/password via standard Windows mechanisms.
But there are always a few black sheep out there: systems for which we will have to define a username and password, store them securely, then invoke this user/pass pair at the time we poll that system. So keep in mind - in this case the program being written *is the user* who *sends* the password to the authenticating system.
I am not sure whether I will need to use reversible encryption (decrypting to plaintext at time of use), or if there is some Windows mechanism which would allow me to store and then reuse only a hash. Obviously the second mechanism is preferable; I'd like my program to either *never* know the password's plaintext value, or know it for the shortest amount of time possible.
I need suggestions for smart and secure ways to accomplish this.
Thanks for looking! |
|windows|security|passwords| |
|c#|.net|windows|security|passwords| |
I'm writing (in C# with .NET 3.5) an administrative application which will poll multiple Windows systems for various bits of data. In many cases it will use WMI, but in some cases it may need to read remote registry or remotely execute some command or script on the polled system. This polling will happen at repeating intervals - usually nightly, but can be configured to happen more (or less) frequently. So the poll could happen as often as every 10 minutes or as rarely as once a month. It needs to happen in an automated way, without any human intervention.
These functions will require admin-level access to the polled systems. Now, I expect that in most use cases, there will be a domain, and the polling application can run as a service with Domain Admin (or equivalent) privileges, which means I do not have to worry about storing passwords - the admin setting up the app will define the service's username/password via standard Windows mechanisms.
But there are always a few black sheep out there. The program may run in non-domain environments, or in cases where some polled systems are not members of the domain. In these cases we will have to define a username and password, store them securely, then invoke this user/pass pair at the time we poll that system. So keep in mind - in this case the program being written *is the user* who *sends* the password to the authenticating system.
I am not sure whether I will need to use reversible encryption (decrypting to plaintext at time of use), or if there is some Windows mechanism which would allow me to store and then reuse only a hash. Obviously the second mechanism is preferable; I'd like my program to either *never* know the password's plaintext value, or know it for the shortest amount of time possible.
I need suggestions for smart and secure ways to accomplish this.
Thanks for looking! |
Preventing XML Serialization of IEnumerable and ICollection<T> & Inherited Types |
|xml|serialization|inheritance|.net-2.0| |
**NOTE: _XmlIgnore_ is NOT the answer!**
OK, so following on from my question on [XML Serialization and Inherited Types](http://stackoverflow.com/questions/20084/xml-serialization-and-inherited-types), I began integrating that code into my application I am working on, stupidly thinking all will go well..
I ran into problems with a couple of classes I have that implement _IEnumerable_ and _ICollection<T>_
The problem with these is that when the XmlSerializer comes to serialize them, it views them as an external property, and instead of using the property we would like it to (i.e. the one with our _AbstractXmlSerializer_) it comes here and falls over (due to the type mismatch), pretty much putting us back to square one.
My current solution is to remove the interface implementation (in this current application, it's no real big deal, it just made the code prettier).
**Do I need to swallow my pride on this one and accept it can't be done?** I know I have kinda pushed and got more out of the XmlSerializer than what was expected of it :)
_ _ _
###Edit
I should also add, I am currently working in framework 2. |
**NOTE: _XmlIgnore_ is NOT the answer!**
OK, so following on from my question on [XML Serialization and Inherited Types](http://stackoverflow.com/questions/20084/xml-serialization-and-inherited-types), I began integrating that code into my application I am working on, stupidly thinking all will go well..
I ran into problems with a couple of classes I have that implement _IEnumerable_ and _ICollection<T>_
The problem with these is that when the XmlSerializer comes to serialize them, it views them as an external property, and instead of using the property we would like it to (i.e. the one with our _AbstractXmlSerializer_) it comes here and falls over (due to the type mismatch), pretty much putting us back to square one. You cannot decorate these methods with the **XmlIgnore** attribute either, so we cannot stop it that way.
My current solution is to remove the interface implementation (in this current application, it's no real big deal, it just made the code prettier).
**Do I need to swallow my pride on this one and accept it can't be done?** I know I have kinda pushed and got more out of the XmlSerializer than what was expected of it :)
_ _ _
###Edit
I should also add, I am currently working in framework 2. |
|xml|.net-2.0|serialization|inheritance| |
**NOTE: _XmlIgnore_ is NOT the answer!**
OK, so following on from my question on [XML Serialization and Inherited Types](http://stackoverflow.com/questions/20084/xml-serialization-and-inherited-types), I began integrating that code into my application I am working on, stupidly thinking all will go well..
I ran into problems with a couple of classes I have that implement _IEnumerable_ and _ICollection<T>_
The problem with these is that when the XmlSerializer comes to serialize them, it views them as an external property, and instead of using the property we would like it to (i.e. the one with our _AbstractXmlSerializer_) it comes here and falls over (due to the type mismatch), pretty much putting us back to square one. You cannot decorate these methods with the **XmlIgnore** attribute either, so we cannot stop it that way.
My current solution is to remove the interface implementation (in this current application, it's no real big deal, it just made the code prettier).
**Do I need to swallow my pride on this one and accept it can't be done?** I know I have kinda pushed and got more out of the XmlSerializer than what was expected of it :)
_ _ _
###Edit
I should also add, I am currently working in framework 2.
_ _ _
###Update
I have accepted [lomaxx's answer](http://stackoverflow.com/questions/31799/preventing-xml-serialization-of-ienumerable-and-icollectiont-inherited-types#31810). In my scenario I cannot actually do this, but I do know it will work. Since there have been no other suggestions, I ended up removing the interface implementation from the code. |
Since you're using Tortoise you may want to check out this link on LosTechies. It should be almost exactly what you are looking for.
http://www.lostechies.com/blogs/joshua_lockwood/archive/2007/09/12/subversion-tip-of-the-day-moving-files.aspx |
What are the names given to these 2 LINQ expressions |
|linq| |
I'm trying to find the correct names for these 2 "types" of coding expressions in LINQ so that I can refer to them correctly. I want to say that the first is called "Fluent Style"?
var selectVar = arrayVar.Select( (a,i) => new { Line = a });
var selectVar =
from s in arrayVar
select new { Line = s };
|
Web browsers will not POST any disabled or hidden form elements.
(@Dan: asp:Buttons do not call __doPostBack() because HTML input controls already submit the form.)
So what's happening is:
1. The user clicks on a button in your dialog.
2. The button calls SimpleModal's close() method, hiding the dialog and the button
3. The client POSTs the form (without the button's ID)
4. The ASP.NET framework can't figure out which button was clicked
5. Your server-side code doesn't get executed.
The solution is to do whatever you need to do on the client (closing the dialog in this case) and then call __doPostBack() yourself.
For example (where "dlg" is the client-side SimpleModal dialog reference):
btn.OnClientClick = string.Format("{0}; dlg.close();",
ClientScript.GetPostBackEventReference(btn, null));
That should hide the dialog, submit the form, and call whatever server-side event you have for that button. |
Web browsers will not POST any disabled or hidden form elements.
So what's happening is:
1. The user clicks on a button in your dialog.
2. The button calls SimpleModal's close() method, hiding the dialog and the button
3. The client POSTs the form (without the button's ID)
4. The ASP.NET framework can't figure out which button was clicked
5. Your server-side code doesn't get executed.
The solution is to do whatever you need to do on the client (closing the dialog in this case) and then call __doPostBack() yourself.
For example (where "dlg" is the client-side SimpleModal dialog reference):
btn.OnClientClick = string.Format("{0}; dlg.close();",
ClientScript.GetPostBackEventReference(btn, null));
That should hide the dialog, submit the form, and call whatever server-side event you have for that button.
@Dan
asp:Buttons do not call __doPostBack() because HTML input controls already submit the form.
|
Web browsers will not POST any disabled or hidden form elements.
So what's happening is:
1. The user clicks on a button in your dialog.
2. The button calls SimpleModal's close() method, hiding the dialog and the button
3. The client POSTs the form (without the button's ID)
4. The ASP.NET framework can't figure out which button was clicked
5. Your server-side code doesn't get executed.
The solution is to do whatever you need to do on the client (closing the dialog in this case) and then call __doPostBack() yourself.
For example (where "dlg" is the client-side SimpleModal dialog reference):
btn.OnClientClick = string.Format("{0}; dlg.close();",
ClientScript.GetPostBackEventReference(btn, null));
That should hide the dialog, submit the form, and call whatever server-side event you have for that button.
@Dan
>> All standard ASP.NET postbacks work by calling a __doPostBack javascript method on the page.
asp:Buttons do not call __doPostBack() because HTML input controls already submit the form.
|
The definitive pieces on this subject have been written by Joe Celko, and he has worked a number of them into a book called Joe Celko's Trees and Hierarchies in SQL for Smarties.
He favours a technique called the nested set model. An introduction to his work on this subject can be found [here](http://www.intelligententerprise.com/001020/celko.jhtml)
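The best-known model from that book, the nested set model, stores each node as a (lft, rgt) interval so that subtree queries become simple range comparisons. A toy sketch of the idea (the table contents and names here are invented for illustration):

```python
# Toy nested-set table: each node stores a (lft, rgt) interval, and a
# node's descendants are exactly the rows whose lft falls inside it.
nodes = [
    {"name": "root",    "lft": 1, "rgt": 8},
    {"name": "child_a", "lft": 2, "rgt": 5},
    {"name": "leaf_a1", "lft": 3, "rgt": 4},
    {"name": "child_b", "lft": 6, "rgt": 7},
]

def descendants(nodes, name):
    """Roughly: SELECT c.name FROM tree c, tree p
       WHERE p.name = ? AND c.lft BETWEEN p.lft + 1 AND p.rgt - 1"""
    parent = next(n for n in nodes if n["name"] == name)
    return [n["name"] for n in nodes
            if parent["lft"] < n["lft"] < parent["rgt"]]

print(descendants(nodes, "root"))     # ['child_a', 'leaf_a1', 'child_b']
print(descendants(nodes, "child_a"))  # ['leaf_a1']
```

The trade-off, as Celko discusses, is that reads like this are a single range scan, while inserts and moves require renumbering the intervals.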
|
Jeff's mechanism was to create some sort of cached object which ASP.NET would automatically recreate at some interval - it seemed to be an ASP.NET-specific solution, so it probably won't help you (or me) much in the Java world.
See [https://stackoverflow.fogbugz.com/default.asp?W13117][1]
> Atwood: Well, I originally asked on Twitter, because I just wanted something light weight. I really didn't want to like write a windows service. I felt like that was out of band code. Plus the code that actually does the work is a web page in fact, because to me that is a logical unit of work on a website is a web page. So, it really is like we are calling back into the web site, it's just like another request in the website, so I viewed it as something that should stay inline, and the little approach that we came up that was recommended to me on Twitter was to essentially to add something to the application cache with a fixed expiration, then you have a call back so when that expires it calls a certain function which does the work then you add it back in to the cache with the same expiration. So, it's a little bit, maybe "ghetto" is the right word.
My approach has always been to have the OS (i.e. cron or the Windows task scheduler) load a specific URL at some interval, and then set up a page at that URL to check its queue and perform whatever tasks are required, but I'd be interested to hear if there's a better way.
[1]: https://stackoverflow.fogbugz.com/default.asp?W13117 |
Jeff's mechanism was to create some sort of cached object which ASP.NET would automatically recreate at some interval - it seemed to be an ASP.NET-specific solution, so it probably won't help you (or me) much in the Java world.
See [https://stackoverflow.fogbugz.com/default.asp?W13117][1]
> Atwood: Well, I originally asked on Twitter, because I just wanted something light weight. I really didn't want to like write a windows service. I felt like that was out of band code. Plus the code that actually does the work is a web page in fact, because to me that is a logical unit of work on a website is a web page. So, it really is like we are calling back into the web site, it's just like another request in the website, so I viewed it as something that should stay inline, and the little approach that we came up that was recommended to me on Twitter was to essentially to add something to the application cache with a fixed expiration, then you have a call back so when that expires it calls a certain function which does the work then you add it back in to the cache with the same expiration. So, it's a little bit, maybe "ghetto" is the right word.
My approach has always been to have the OS (i.e. cron or the Windows task scheduler) load a specific URL at some interval, and then set up a page at that URL to check its queue and perform whatever tasks are required, but I'd be interested to hear if there's a better way.
From the transcript, it looks like FogBugz uses the Windows-service-loading-a-URL approach as well.
> Spolsky: So we have this special page called heartbeat.asp. And that page, whenever you hit it, and anybody can hit it at anytime: doesn't hurt. But when that page runs it checks a queue of waiting tasks to see if there's anything that needs to be done. And if there's anything that needs to be done, it does one thing and then looks in that queue again and if there's anything else to be done it returns a plus, and the entire web page that it returns is just a single character with a plus in it. And if there's nothing else to be done, the queue is now empty, it returns a minus. So, anybody can call this and hit it as many times, you can load up heartbeat.asp in your web browser you hit Ctrl-R Ctrl-R Ctrl-R Ctrl-R until you start getting minuses instead of pluses. And when you've done that FogBugz will have completed all of its maintenance work that it needs to do. So that's the first part, and the second part is a very, very simple Windows service which runs, and its whole job is to call heartbeat.asp and if it gets a plus, call it again soon, and if it gets a minus call it again, but not for a while. So basically there's this Windows service that's always running, that has a very, very, very simple task of just hitting a URL, and looking to see if it gets a plus or a minus and, and then scheduling when it runs again based on whether it got a plus or a minus. And obviously you can do any kind of variation you want on this theme, like for example, uh you could actually, instead of returning just a plus or minus you could say "Okay call me back in 60 seconds" or "Call me back right away I have more work to be done." And that's how it works... so that maintenance service it just runs, you know, it's like, you know, a half page of code that runs that maintenance service, and it never has to change, and it doesn't have any of the logic in there, it just contains the tickling that causes these web pages to get called with a certain guaranteed frequency. 
> And inside that web page at heartbeat.asp there's code that maintains a queue of tasks that need to be done and looks at how much time has elapsed and does, you know, late-night maintenance and every seven days delete all the older messages that have been marked as spam and all kinds of just maintenance background tasks. And uh, that's how that does that.
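The plus/minus polling loop described above is simple enough to sketch; here `hit_heartbeat` is a stand-in for whatever HTTP call your platform uses, and the task names are invented:

```python
import time

def drain_queue(hit_heartbeat, busy_delay=0.0):
    """Keep hitting the heartbeat page while it returns '+' (more work
    queued); stop once it returns '-' (queue empty). Returns hit count.
    The service would then sleep a long idle delay before calling again."""
    hits = 0
    while True:
        hits += 1
        if hit_heartbeat() == "-":
            return hits          # queue drained
        time.sleep(busy_delay)   # more work waiting: call back soon

# Simulate a server with three pending maintenance tasks
pending = ["spam sweep", "old message purge", "index rebuild"]

def fake_heartbeat():
    if pending:
        pending.pop()            # heartbeat.asp does one unit of work
        return "+"
    return "-"

print(drain_queue(fake_heartbeat))   # 4: three '+' hits, then one '-'
```

The service side stays a half page of code, exactly as described: it only decides when to call back, while all the real maintenance logic lives behind the URL.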
[1]: https://stackoverflow.fogbugz.com/default.asp?W13117 |