A lot of people would probably argue that the second option is "correct", because any value used more than once should be refactored into a constant. I would most likely use the first option. You have already gotten close to the "Code Complete" solution by encapsulating the dictionary entry in a strongly typed property. This reduces the chance of retrieving the wrong dictionary entry somewhere in your implementation.
There are only 2 places where you could mess up typing "myKey", in the getter and setter, and this would be very easy to spot.
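For illustration, the same encapsulation can be sketched in Python; the class and key names here are invented, not from the original code:

```python
class Settings:
    """Wraps a dictionary entry behind a typed property so the magic
    string "myKey" appears in exactly two places: the getter and the
    setter, where a typo would be easy to spot."""

    def __init__(self):
        self._store = {}

    @property
    def my_value(self):
        # The only read of the magic key
        return self._store.get("myKey")

    @my_value.setter
    def my_value(self, value):
        # The only write of the magic key
        self._store["myKey"] = value
```

Callers never touch the key itself, only the property.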
The second option would just get too messy. |
What about Exif? It's probably not as secure as what you're thinking of, but most users don't even know it exists, and even though Exif makes the watermark information easy to read, those who care enough would still be able to extract it no matter where you hid it. |
As per Coincoin's answer, this feature is also great for setting the default editor for ASPX. If you want to go to the Code Editor most often, then this is a default you'd want to change. |
Alternative Hostname for an IIS web site for internal access only |
|iis|sharepoint|moss| |
I'm using IIS in Windows 2003 Server for a SharePoint intranet. External incoming requests will be using the host header `portal.mycompany.com` and be forced to use SSL.
I was wondering if there's a way to set up an alternate host header such as `http://internalportal/` which only accepts requests from the internal network, but doesn't force the users to use SSL.
Any recommendations for how to set this up? |
|sharepoint|iis|moss|wss| |
What is the best way to create rounded corners using CSS? |
|css| |
What is the best way to create rounded corners using CSS? |
ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the `ps:alpha` delegate in ImageMagick, just adjusted to use JPEG as output):
gs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \
-dMaxBitmap=500000000 -dLastPage=1 -sDEVICE=jpeg -dTextAlphaBits=4 \
-dGraphicsAlphaBits=4 -r72x72 -sOutputFile=$OUTPUT -f$INPUT
where `$OUTPUT` and `$INPUT` are the output and input filenames. Adjust the `72x72` to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.)
This is good for two reasons:
1. You don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions.
2. ImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step.
Other things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the `-sDEVICE=jpeg` to `-sDEVICE=png16m`. |
ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the `ps:alpha` delegate in ImageMagick, just adjusted to use JPEG as output):
gs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \
-dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \
-sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \
-sOutputFile=$OUTPUT -f$INPUT
where `$OUTPUT` and `$INPUT` are the output and input filenames. Adjust the `72x72` to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.)
This is good for two reasons:
1. You don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions.
2. ImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step.
Other things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the `-sDEVICE=jpeg` to `-sDEVICE=png16m`. |
Have a look at [Sandcastle][1], which does exactly that. It's also one of the simpler solutions out there, and more or less the tool of choice, so in the long run it may pay off. Maybe we could help you set up Sandcastle if you specify what issues you encountered during setup?
[1]: http://blogs.msdn.com/sandcastle/ |
Despite what the [MSDN article][1] says about User/Machine Hive, it doesn't write to HKEY_USERS. Rather it writes to HKCU if you select Just Me and HKLM if you select Everyone.
So my solution is going to be to use the User/Machine Hive, and then in the application it checks if the registry entries are in HKCU and if not, copies them from HKLM. I know this probably isn't the most ideal way of doing it, but it has the least amount of changes.
[1]: http://msdn.microsoft.com/en-us/library/x6kd89c5(VS.80).aspx |
Google Tech Talk: Linus Torvalds on git
<http://www.youtube.com/watch?v=4XpnKHJAok8>
The Git Wiki's comparison page
<http://git.or.cz/gitwiki/GitSvnComparsion> |
Git is not better than Subversion. But it's not worse either. It's different.
The key difference is that it is decentralized. Imagine you are a developer on the road, you develop on your laptop and you want to have source control so that you can go back 3 hours.
With Subversion, you have a problem: the SVN repository may be in a location you can't reach (say, inside your company network while you have no internet access at the moment), so you cannot commit. If you want to make a copy of your code, you have to literally copy/paste it.
With Git, you do not have this problem. Your local copy is a repository, and you can commit to it and get all benefits of source control. When you regain connectivity to the main repository, you can commit against it.
This looks good at first, but just keep in mind the added complexity to this approach.
Git seems to be the "new, shiny, cool" thing. It's by no means bad (there is a reason Linus wrote it for the Linux Kernel development after all), but I feel that many people jump on the "Distributed Source Control" train just because it's new and is written by Linus Torvalds, without actually knowing why/if it's better.
Subversion has problems, but so do Git, Mercurial, CVS, TFS and whatever else. |
You'll need to compile with the target CPU set to x86 to force your code to use the 32-bit version of the VFP OLE DB provider. [Microsoft has stated][1] that there are no plans to release a 64-bit edition of the Visual FoxPro OLE DB provider. For what it's worth, Microsoft has [also stated][2] that VFP 9 is the final version of Visual FoxPro and support will end in 2015. If you need the OLE DB provider for VFP 9, you can get it [here][3].
[1]: http://msdn.microsoft.com/en-us/vfoxpro/bb190293.aspx
[2]: http://msdn.microsoft.com/en-us/vfoxpro/bb308952.aspx
[3]: http://www.microsoft.com/downloads/details.aspx?FamilyId=E1A87D8F-2D58-491F-A0FA-95A3289C5FD4&displaylang=en |
Depends on the version: 4 is by value, 5 is by reference. |
Would you mind sharing which hosting environment you're using? It's common for a shared host to support Python and MySQL with the MySQLdb module (e.g., GoDaddy does this). Try the following CGI script to check whether MySQLdb is installed.
#!/usr/bin/python
import MySQLdb
print '''Content-Type: text/html

<html>
<head>
<title>MySQLdb module contents</title>
</head>
<body>
MySQLdb module contains:
<ul>%s</ul>
</body>
</html>
''' % ''.join([ '<li>%s</li>' % i for i in dir(MySQLdb) ]) |
So I'm working on a project that will allow users to enter poker hand histories from sites like PokerStars and then display the hand to them.
It seems that regex would be a great tool for this, however I rank my regex knowledge at "slim to none".
So I'm using PHP and looping through this block of text line by line and on lines like this:
Seat 1: fabulous29 (835 in chips)
Seat 2: Nioreh_21 (6465 in chips)
Seat 3: Big Loads (3465 in chips)
Seat 4: Sauchie (2060 in chips)
I want to extract seat number, name, & chip count so the format is
Seat [number]: [letters&numbers&characters] ([number] in chips)
I have NO IDEA where to start or what commands I should even be using to optimize this.
Any advice is greatly appreciated - even if it is just a link to a tutorial on PHP regex or the name of the command(s) I should be using.
Cheers! |
|php|regex| |
We actually use a combination of NAnt and MSBuild with Cruise Control. NAnt is used for script flow control and calls MSBuild to compile projects. After the physical build is triggered, NAnt is used to publish the individual project build outputs to a shared location.
Not sure this is <i>the best</i> process. I think many of us are still looking for a great build tool. One promising thing I heard on [.Net Rocks][1] recently is [James Kovac's PSake][2], a build system he based entirely on PowerShell. Sounds really promising since what you can do with PowerShell is fairly limitless in theory.
[1]: http://www.dotnetrocks.com/default.aspx?showNum=362
[2]: http://codebetter.com/blogs/james.kovacs/archive/2008/06/27/introducing-psake.aspx |
We use MSBuild, because we started with VS2005 (now 2008), and MSBuild was already "built in" to the SDK - less maintenance on the build server. It's a NAnt clone, really. |
We use MSBuild, because we started with VS2005 (now 2008), and MSBuild was already "built in" to the SDK - less maintenance on the build server. It's a NAnt clone, really - both tools are infinitely flexible in that they let you create custom build tasks in code, and both have a decent set of community build tasks already created.
- [MSBuild Community Tasks][1]
- [NAntContrib][2]
[1]: http://msbuildtasks.tigris.org
[2]: http://nantcontrib.sourceforge.net/ |
I've used both and prefer NAnt. It's really hard for me to say one is "better" than the other. |
Look at the PCRE section in the PHP Manual. Also, http://www.regular-expressions.info/ is a great site for learning regex. Disclaimer: Regex is very addictive once you learn it. |
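For the seat lines in the question, a pattern along these lines should work; it's sketched in Python here purely for illustration, and the same pattern can be dropped into PHP's preg_match:

```python
import re

# Seat [number]: [name] ([number] in chips)
# The name is captured lazily so it can contain spaces and symbols.
SEAT_RE = re.compile(r'^Seat (\d+): (.+?) \((\d+) in chips\)$')

def parse_seat(line):
    """Return (seat, name, chips) for a seat line, or None if no match."""
    m = SEAT_RE.match(line.strip())
    if m is None:
        return None
    return int(m.group(1)), m.group(2), int(m.group(3))
```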
ADD:
LIMIT n,1
That will limit the results to one result starting at result n. |
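A quick sketch of the offset behaviour, shown here with Python's built-in sqlite3 (SQLite accepts the same MySQL-style `LIMIT offset,count` syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (name TEXT, points INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("a", 10), ("b", 20), ("c", 30), ("d", 40)])

# LIMIT 2,1 skips the first two rows and returns exactly one,
# i.e. the third-highest score here.
row = conn.execute(
    "SELECT name, points FROM scores ORDER BY points DESC LIMIT 2,1"
).fetchone()
print(row)  # ('b', 20)
```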
You should only need to override the equals() method if you want specific behaviour when adding objects to sorted data structures (SortedSet etc.)
When you do that you should also override hashCode.
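The same equals/hashCode contract exists in Python as `__eq__`/`__hash__`; a minimal sketch (not from the linked article):

```python
class Point:
    """Two equal points must also hash equally, otherwise hash-based
    containers (sets, dict keys) will treat them as distinct."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Hash the same fields that equality compares
        return hash((self.x, self.y))
```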
See [here][1] for a complete explanation.
[1]: http://www.geocities.com/technofundo/tech/java/equalhash.html |
I'm not entirely sure what exactly to use for that without trying it, but a great tool I use all the time to validate my regexes is [RegExr][1], which gives a great Flash interface for trying out your regex, including real-time matching and a library of predefined snippets to use. Definitely a great time saver :)
[1]: http://gskinner.com/RegExr/ |
There is an [official recommendation][1], and a section in the [Django Book][2].
[1]: http://docs.djangoproject.com/en/dev/topics/auth/#storing-additional-information-about-users
[2]: http://www.djangobook.com/en/1.0/chapter12/#cn222 |
One method you might want to consider if you're going to be working with the values a lot is to write them to a temporary table first. Then you just join on it like normal.
This way, you're only parsing once.
It's easiest to use one of the 'Split' UDFs, but so many people have posted examples of those, I figured I'd go a different route ;)
This example will create a temporary table for you to join on (#tmpDept) and fill it with the department id's that you passed in. I'm assuming you're separating them with commas, but you can -- of course -- change it to whatever you want.
IF OBJECT_ID('tempdb..#tmpDept', 'U') IS NOT NULL
BEGIN
DROP TABLE #tmpDept
END
SET @DepartmentIDs=REPLACE(@DepartmentIDs,' ','')
CREATE TABLE #tmpDept (DeptID INT)
DECLARE @DeptID INT
IF IsNumeric(@DepartmentIDs)=1
BEGIN
SET @DeptID=@DepartmentIDs
INSERT INTO #tmpDept (DeptID) SELECT @DeptID
END
ELSE
BEGIN
WHILE CHARINDEX(',',@DepartmentIDs)>0
BEGIN
SET @DeptID=LEFT(@DepartmentIDs,CHARINDEX(',',@DepartmentIDs)-1)
SET @DepartmentIDs=RIGHT(@DepartmentIDs,LEN(@DepartmentIDs)-CHARINDEX(',',@DepartmentIDs))
INSERT INTO #tmpDept (DeptID) SELECT @DeptID
END
-- Pick up the final ID, which has no trailing comma
SET @DeptID=@DepartmentIDs
INSERT INTO #tmpDept (DeptID) SELECT @DeptID
END
This will allow you to pass in one department id, multiple id's with commas in between them, or even multiple id's with commas and spaces between them.
So if you did something like:
SELECT Departments.Name
FROM Departments
JOIN #tmpDept ON Departments.DepartmentID=#tmpDept.DeptID
ORDER BY Departments.Name
You would see the names of all of the department IDs that you passed in...
Again, this can be simplified by using a function to populate the temporary table... I mainly did it without one just to kill some boredom :-P
-- Kevin Fairchild
|
According to [ASP.NET AJAX in Action](http://www.amazon.com/gp/redirect.html?ie=UTF8&location=http%3A%2F%2Fwww.amazon.com%2FASP-NET-AJAX-Action-Alessandro-Gallo%2Fdp%2F1933988142&tag=diaryofamadma-20&linkCode=ur2&camp=1789&creative=9325), p. 257
> Just before the old markup is replaced with the updated HTML, all the DOM elements in the panel are examined for Microsoft Ajax behaviours or controls attached to them. To avoid memory leaks, the components associated with DOM elements are disposed, and then destroyed when the HTML is replaced.
So as far as I know, any asp.net ajax components within the update panel are disposed to prevent memory leaks, but anything else in there will just be replaced with the html received.
So if you don't have any asp.net ajax components in the target container for the response, it would be basically the same as an inner html replacement with any other js framework / ajax request, so I would say that it's just how the browser handles this, rather than asp.net ajax causing this.
Also, while it may be "leaking", that may be by design, meaning that the browser might not have reclaimed and released the DOM elements yet. And Drip itself might be causing those to leak, as it is attaching to those DOM elements. |
I setup the gridview and then used the html text writer object to spit it out to a .xls file, like so:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
'get the select command of the gridview
sqlGridview.SelectCommand = Session("strSql")
gvCompaniesExport.DataBind()
lblTemp.Text = Session("strSql")
'do the export
doExport()
'close the window
Dim closeScript As String = "<script language='javascript'> window.close() </scri"
closeScript = closeScript & "pt>"
'split the ending script tag across a concatenate to keep it from causing problems
'this will write it to the asp.net page and fire it off, closing the window
Page.RegisterStartupScript("closeScript", closeScript)
End Sub
Public Sub doExport()
Response.AddHeader("content-disposition", "attachment;filename=IndianaCompanies.xls")
Response.ContentType = "application/vnd.ms-excel"
Response.Charset = ""
Me.EnableViewState = False
Dim objStrWriter As New System.IO.StringWriter
Dim objHtmlTextWriter As New System.Web.UI.HtmlTextWriter(objStrWriter)
'Get the gridview HTML from the control
gvCompaniesExport.RenderControl(objHtmlTextWriter)
'writes the dg info
Response.Write(objStrWriter.ToString())
Response.End()
End Sub
|
To combine revision 3 and 4 into a single revision, you can use git rebase. If you want to remove the changes in revision 3, you need to use the edit command in the interactive rebase mode. If you want to combine the changes into a single revision, use squash.
I have successfully used this squash technique, but have never needed to remove a revision before. The git-rebase documentation under "Splitting commits" should hopefully give you enough of an idea to figure it out. (Or someone else might know).
From the <a href="http://www.kernel.org/pub/software/scm/git/docs/git-rebase.html">git documentation</a>:
<blockquote>
<p>Start it with the last commit you want to retain as-is:</p>
<blockquote><code>git rebase -i &lt;after-this-commit&gt;</code></blockquote>
<p>An editor will be fired up with all the commits in your current branch (ignoring merge commits), which come after the given commit. You can reorder the commits in this list to your heart's content, and you can remove them. The list looks more or less like this:</p>
<blockquote><pre>pick deadbee The oneline of this commit
pick fa1afe1 The oneline of the next commit
...</pre></blockquote>
<p>The oneline descriptions are purely for your pleasure; git-rebase will not look at them but at the commit names ("deadbee" and "fa1afe1" in this example), so do not delete or edit the names.</p>
<p>By replacing the command "pick" with the command "edit", you can tell git-rebase to stop after applying that commit, so that you can edit the files and/or the commit message, amend the commit, and continue rebasing.</p>
<p>If you want to fold two or more commits into one, replace the command "pick" with "squash" for the second and subsequent commit. If the commits had different authors, it will attribute the squashed commit to the author of the first commit.</p>
</blockquote> |
Possible in [CSS2][1] (@page, looks like [Opera only][2]) and in [CSS3][3] which will work [nowhere][4]. Sorry.
[1]: http://www.w3.org/TR/CSS2/page.html#page-size-prop
[2]: http://www.webdevout.net/browser-support-css#css2propsprint
[3]: http://www.w3.org/TR/css3-page/#page-size-prop
[4]: http://www.webdevout.net/browser-support-css#css3propsprint |
It means that rather than needing to push the instruction pointer on the stack, you can simply jump to the top of a recursive function and continue execution. This allows for functions to recurse indefinitely without overflowing the stack.
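As an illustration (in Python, which does not itself perform tail-call elimination, so this only demonstrates the equivalence): a tail-recursive function and the loop a tail-call-optimizing compiler effectively rewrites it into:

```python
def fact_tail(n, acc=1):
    # Tail-recursive: the recursive call is the very last operation,
    # so nothing in the current frame is needed after it returns.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_loop(n, acc=1):
    # What tail-call elimination amounts to: rebind the arguments
    # and jump back to the top instead of pushing a new stack frame.
    while n > 1:
        n, acc = n - 1, acc * n
    return acc

print(fact_tail(10) == fact_loop(10) == 3628800)  # True
```

The loop version can run for any `n` without overflowing the stack, which is exactly what the optimization buys you.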
I wrote a [blog][1] post on the subject, which has graphical examples of what the stack frames look like.
[1]: http://blogs.msdn.com/chrsmith/archive/2008/08/07/understanding-tail-recursion.aspx |
Have a look at Hibernate as a simpler way to interface to a database. |
If I were designing the next version of C#, I'd think about each function having an automatically included local variable holding the name of the class and the name of the function. In most cases, the compiler's optimizer would take it out.
I'm not sure there's much of a demand for that sort of thing though. |
If I were designing the next version of C#, I'd think about each function having an automatically included local variable holding the name of the class and the name of the function. In most cases, the compiler's optimizer would take it out.
I'm not sure there's much of a demand for that sort of thing though. |
Who is your web host?
This is just a guess, but it's common for a shared host to support Python and MySQL with the MySQLdb module (e.g., GoDaddy does this). Try the following CGI script to see if MySQLdb is installed.
#!/usr/bin/python
module_name = 'MySQLdb'
head = '''Content-Type: text/html

%s is ''' % module_name
try:
__import__(module_name)
print head + 'installed'
except ImportError:
print head + 'not installed' |
I don't have any experience with <http://www.SiteGround.com> as a web host personally.
This is just a guess, but it's common for a shared host to support Python and MySQL with the MySQLdb module (e.g., GoDaddy does this). Try the following CGI script to see if MySQLdb is installed.
#!/usr/bin/python
module_name = 'MySQLdb'
head = '''Content-Type: text/html

%s is ''' % module_name
try:
__import__(module_name)
print head + 'installed'
except ImportError:
print head + 'not installed' |
It's by value according to the [PHP Documentation][1].
> By default, function arguments are passed by value (so that if the value of the argument within the function is changed, it does not get changed outside of the function). To allow a function to modify its arguments, they must be passed by reference.
>
> To have an argument to a function always passed by reference, prepend an ampersand (&) to the argument name in the function definition.
<?php
function add_some_extra(&$string)
{
$string .= 'and something extra.';
}
$str = 'This is a string, ';
add_some_extra($str);
echo $str; // outputs 'This is a string, and something extra.'
?>
[1]: http://fr3.php.net/manual/en/functions.arguments.php |
Git, and DVCS in general, is great for developers doing a lot of coding independently of each other, because everyone has their own branch. If you need a change from someone else, though, she has to commit to her local repo, and then she must push that changeset to you or you must pull it from her.
My own reasoning also makes me think DVCS makes things harder for QA and release management if you do things like centralized releases. Someone has to be responsible for doing that push/pull from everyone else's repository, resolving any conflicts that would have been resolved at initial commit time before, then doing the build, and then having all the other developers re-sync their repos.
All of this can be addressed with human processes, of course; DVCS just broke something that was fixed by centralized version control in order to provide some new conveniences. |
How do you debug PHP scripts? |
|php|debugging| |
How do you debug your PHP script?
I am aware of basic debugging such as using the Error Reporting. The breakpoint debugging in PHPEclipse is also quite useful. Any other good/better techniques out there? |
I've used the [Zend Studio (5.5)][1], together with [Zend Platform][2]. That gives proper debugging, breakpoints/stepping over the code etc., although at a price.
[1]: http://www.zend.com/products/studio/
[2]: http://www.zend.com/en/products/platform/ |
When you only use a magic string in one context, like you do, I think it's alright.
But if you ever need to use the key in another part of the class, go <code>const</code>. |
Hudson can't build my Maven 2 project because it says artifacts are missing from the repository? (they aren't) |
|continuous-integration|maven-2|hudson|mvn| |
I'm using Hudson and Maven 2 for my automated build/CI. I can build fine with maven from the command line, but when I run the same goal with Hudson, the build fails complaining of missing artifacts. I'm running Hudson as a windows XP service. |
Database sharding and Rails |
|database|ruby-on-rails|activerecord|sharding|shards| |
What's the best way to deal with a sharded database in Rails? Should the sharding be handled at the application layer, the active record layer, the database driver layer, a proxy layer, or something else altogether? What are the pros and cons of each? |
If you disable the text box (set `Enabled = false`), the text in it is still scrollable and selectable. If you don't like the visual presentation of a disabled text box (gray background usually) you can manually override the colors.
Be warned, manually overriding colors is going to make your form/control look weird on systems that do not use the default color/theme settings. Don't assume that because your control is white that everyone's control is going to be white. That's why you should always use the system colors whenever possible (defined in the `System.Drawing.SystemColors` enumeration) such as `SystemColors.ControlLight`. |
Make sure you're running Hudson as the same user that you are using to run Maven from the command line. Maven creates a separate repository for each user. If you are running Hudson as a Windows service, this won't be the same user as you have logged on as and will be running "mvn" commands with. This means the artifacts in the repositories may be different.
To fix, either start Hudson manually as the user which works, or update the repository for the user which Hudson is running as. |
This page does a good job of addressing some of my questions: http://www.b-list.org/weblog/2006/sep/10/django-tips-laying-out-application/
Specifically:
> 1. To define custom template tags or filters, you must create a sub-directory in the application’s directory called `templatetags`, and it must contain a file named `__init__.py` so that it can be imported as a Python module.
> 2. To define unit tests which will automatically be noticed by Django’s testing framework, put them in a module called tests (which can be either a file named tests.py or a directory called tests). The testing framework will also find any doctests in that module, but the preferred place for those is, of course, the docstrings of the classes or functions they’re designed to test.
> 3. To provide custom SQL which will be executed immediately after your application is installed, create a sub-directory called sql inside the application’s directory; the file names should be the same as the names of the models whose tables they’ll operate on; for example, if you have an app named weblog containing a model named Entry, then the file sql/entry.sql inside the app’s directory can be used to modify or insert data into the entries table as soon as it’s been created.
The note about tests.py and tests (the directory) also holds for models, which helps address the problem of having way to many tests (or models) for one file.
I would still like to see some examples / suggestions for app/project break down, and big django sites that work well. |
Your algorithm isn't quite clearly defined. If you have a complete graph, your algorithm would seem to entail, in the first step, removing all but the two minimum elements. Also, listing _all_ the cycles in a graph can take exponential time. |
Your algorithm isn't quite clearly defined. If you have a complete graph, your algorithm would seem to entail, in the first step, removing all but the two minimum elements. Also, listing _all_ the cycles in a graph can take exponential time.
Elaboration:
In a graph with n nodes and an edge between every pair of nodes, there are, if I have my math right, n!/(2k(n-k)!) cycles of size k, if you're counting a cycle as some subgraph of k nodes and k edges with each node having degree 2. |
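That count is easy to sanity-check by brute force for small graphs; a quick sketch in Python:

```python
from itertools import permutations
from math import factorial

def cycles_formula(n, k):
    # n! / (2k * (n-k)!) -- claimed number of k-cycles in K_n
    return factorial(n) // (2 * k * factorial(n - k))

def cycles_brute_force(n, k):
    # Enumerate all length-k vertex sequences and canonicalize each
    # cycle: rotate so the smallest vertex comes first, then take the
    # lexicographically smaller of the two directions.
    seen = set()
    for seq in permutations(range(n), k):
        i = seq.index(min(seq))
        rot = seq[i:] + seq[:i]
        rev = (rot[0],) + tuple(reversed(rot[1:]))
        seen.add(min(rot, rev))
    return len(seen)

print(cycles_formula(5, 3), cycles_brute_force(5, 3))  # 10 10
```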
Have you tried debugging into the framework source code? Maybe there are some comments in the HttpCachePolicy class with an explanation. Sounds like a bug to me though.
I agree with you. There is nothing in the HTTP spec that should prevent an ETag when Cache-Control is private.
I see where UpdateCachedHeaders will only add the ETag if the cacheability isn't private, but I don't see anywhere that the header is removed if you specify it manually. |
I think you need to use HttpCacheability.ServerAndPrivate
That should give you cache-control: private in the headers and let you set an ETag.
The documentation on that needs to be a bit better. |
I think you need to use HttpCacheability.ServerAndPrivate
That should give you cache-control: private in the headers and let you set an ETag.
The documentation on that needs to be a bit better.
**Edit:** Markus found that you also have to call `cache.SetOmitVaryStar(true)`, otherwise the cache will add the `Vary: *` header to the output, and you don't want that. |
I wouldn't consider SOAP legacy at all. REST vs. SOAP is really just the continuation of the debate of COM/CORBA vs. HTTP POST/GET etc. SOAP is nothing more than an updated version of the same principles defined with COM and CORBA (contracts, providers, consumers etc.). It's just that SOAP appears to have succeeded (at least partially) where the other two failed (and it could be that SOAP just has a better marketing team); that is, SOAP really does allow two different systems to connect rather easily compared to its predecessors. That being said, it still suffers from the same drawbacks that COM/CORBA did... it can get really complex.
I think REST is just coming back into style at the moment. It's nothing new, people are just taking another look at it. Look at the web. It's REST and it's been around for years. 5 years from now people are going to look back and say the same thing about it being legacy and the need to change. It's the nature of software development. Everything goes in cycles.
The debate about which one is better is going to be just like the tabs vs. spaces debate. There are going to be people on different sides swearing that one is better. Really in the end, they both accomplish the same goal. Sure one will be a better solution than the other in some situations, but in the end neither will be superior 100% of the time. |
For a laugh:
ary = eval("[[this, is],[a, nested],[array]]".gsub(/(\w+?)/, "'\\1'") )
=> [["this", "is"], ["a", "nested"], ["array"]]
Disclaimer: You definitely shouldn't do this as `eval` is a terrible idea, but it is fast and has the useful side effect of throwing an exception if your nested arrays aren't valid |
Where does "Change Management" end and "Project Failure" begin? |
|project-management|project-failure|change-management| |
I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data.
Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot.
So I ask you this:
* What is change management, and how does it apply to a project?
* Where does "change management" end, and "project failure" begin?
|
I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data.
Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot.
So I ask you this:
* What is change management, and how does it apply to a project?
* Where does "change management" end, and "project failure" begin?
<hr>
@shog9:
I wasn't asking about a blame game with the consultants, especially since in this case I *represent* the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality *was* finally implemented.
I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. Is there even a difference? Does this minor level of schedule slippage constitute statistical "project failure?" |
I follow the article at [http://dev.piwik.org/trac/wiki/HowToSetupDevelopmentEnvironmentWindows][1] to set up an Eclipse environment that has debugging features like you mentioned. The ability to step into the code is a much better way to debug than the old method of var_dump and print at various points to see where your flow goes wrong. When all else fails, though, and all I have is SSH and vim, I still var_dump()/die() to find where the code goes south.
[1]: http://dev.piwik.org/trac/wiki/HowToSetupDevelopmentEnvironmentWindows |
print_r( debug_backtrace() );
or something like that :-) |
It's all about the ease of use/steps required to do something.
If I'm developing a single project on my PC/laptop, git is better, because it is far easier to set up and use.
You don't need a server, and you don't need to keep typing repository URLs in when you do merges.
If it were just 2 people, I'd say git is also easier, because you can just push and pull from each other.
Once you get beyond that though, I'd go for subversion, because at that point you need to set up a 'dedicated' server or location.
You can do this just as well with git as with SVN, but the benefits of git get outweighed by the need to do additional steps to sync with a central server. In SVN you just commit. In git you have to git commit, then git push. The additional step gets annoying simply because you end up doing it so much.
SVN also has the benefit of better GUI tools, however the git ecosystem seems to be catching up quickly, so I wouldn't worry about this in the long term. |
If you have the ability to use WScript.Shell then you can just execute pscp.exe from the [Putty][1] package. Obviously this is less than ideal, but it will get the job done and let you use SCP/SFTP in classic ASP.
[1]: http://www.chiark.greenend.org.uk/~sgtatham/putty/ |
I am by no means authoritative, but I believe the only supported path is from 6.5 to 7. Certainly that would be the sanest route; then I believe you can migrate from 7 directly to 2005 pretty painlessly.
As for scripting out all the objects - I would advise against it, as you will inevitably miss something (unless your database is truly trivial).
<http://www.php.net/manual/en/migration5.oop.php>
> In PHP 5 there is a new Object Model. PHP's handling of objects has been completely rewritten, allowing for better performance and more features. In previous versions of PHP, objects were handled like primitive types (for instance integers and strings). The drawback of this method was that semantically the whole object was copied when a variable was assigned, or passed as a parameter to a method. In the new approach, objects are referenced by handle, and not by value (one can think of a handle as an object's identifier). |
Variables containing primitive types are passed by value in PHP5. Variables containing objects are passed by reference. There's quite an interesting article from Linux Journal from 2006 which mentions this and other OO differences between 4 and 5.
<http://www.linuxjournal.com/article/9170> |
I worked on a project for my previous employer that used .NET, and there was a built-in .resx format we used. We basically had a base .resx file containing all the default strings, and then multiple files with different translations. The consequence of this is that you have to be very diligent about ensuring that all strings visible in the application are stored in the .resx, and anytime one is changed you have to update all languages you support.
If you get lazy and don't notify the people in charge of translations, or you embed strings without going through your localization system, it will be a nightmare to try and fix it later. Similarly, if localization is an afterthought, it will be very difficult to put in place. Bottom line, if you don't have all visible strings stored externally in a standard place, it will be very difficult to find all that need to be localized.
One other note, very strictly avoid concatenating visible strings directly, such as
String message = "The " + item + " is on sale!";
Instead, you must use something like
String message = String.Format("The {0} is on sale!", item);
The reason for this is that different languages often order the words differently, and concatenating strings directly will need a new build to fix, but if you used some kind of string replacement mechanism like above, you can modify your .resx file (or whatever localization files you use) for the specific language that needs to reorder the words. |
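The `.resx` mechanism above is .NET-specific, but the positional-placeholder principle is the same everywhere; here is a minimal illustration in Java using `MessageFormat` (the patterns here are made up for the example):

```java
import java.text.MessageFormat;

public class LocalizedMessage {
    public static void main(String[] args) {
        // English pattern, as it might be stored in a resource file
        String pattern = "The {0} is on sale!";
        System.out.println(MessageFormat.format(pattern, "laptop"));
        // prints: The laptop is on sale!

        // A translator can move the {0} placeholder anywhere in the
        // localized pattern -- no code change or rebuild required.
        String reorderedPattern = "On sale now: {0}!";
        System.out.println(MessageFormat.format(reorderedPattern, "laptop"));
        // prints: On sale now: laptop!
    }
}
```

Because the argument index travels with the placeholder, word order becomes purely a data problem rather than a code problem.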
I have found MooTools to be pretty slick for animations, just a little smoother than jQuery.
I generally prefer jQuery, which I find to be a little more intuitive (in my head anyway), but I would go with MooTools if slick animation is your biggest requirement.
I suppose how successful the project is depends on who the client is. If the client were the company directors and they are happy, then the project was successful regardless of the failures along the way. |
XStream from codehaus.org does XML serialization/deserialization largely without coding.
You can use annotations to tweak it.
It's working well in two projects where I work.
See my users group presentation at [http://cjugaustralia.org/?p=61][1]
[1]: http://cjugaustralia.org/?p=61 |
I would go for your final option, JavaDB (Sun's distribution of [Derby][1]), and use an object-relational layer like [Hibernate][2] or [iBatis][3]. Using the first three approaches means you are going to spend more time building a database engine than developing application features.
[1]: http://db.apache.org/derby
[2]: http://hibernate.org
[3]: http://ibatis.apache.org |
EXEC sp_helplogins
You can also pass an "@LoginNamePattern" parameter to get information about a specific login:
EXEC sp_helplogins @LoginNamePattern='fred' |
I have found that Indy, while a simpler concept in the beginning, is awkward to manage due to the need to kill sockets to free threads at application termination. In addition, I had the Indy library stop working after an OS patch upgrade. ScktComp works well for my application. |
Select * From Master..SysUsers Where IsSqlUser = 1 |
Environment.NewLine
for system-independent newlines.
@allain, @GateKiller your query selects users not logins
To select logins you can use this query:
SELECT name FROM master..sysxlogins WHERE sid IS NOT NULL
In MSSQL 2005/2008 the syslogins table is used instead of sysxlogins.
Have a look in the syslogins or sysusers tables in the master schema. Not sure if this is still around in more recent MSSQL versions though. In MSSQL 2005 there are views called sys.syslogins and sys.sysusers.
Selecting from sysusers will get you information about *users* on the selected database, not *logins* on the server. |
@DrPizza Profiling only really helps improve an implementation; it doesn't reveal opportunities for parallel execution, or suggest better algorithms (unless you've experience to the contrary, in which case I'd really like to see your profiler).
I only have single-core machines at home, but I ran a Java equivalent of your BitArray sieve, and a single-threaded version of the inversion of the sieve - holding the marking primes in an array, and using a [wheel][1] to reduce the search space by a factor of five, then marking a bit array in increments of the wheel using each marking prime. It also reduces storage to O(sqrt(N)) instead of O(N), which helps both in terms of the largest N, paging, and bandwidth.
For medium values of N (1e8 to 1e12), the primes up to sqrt(N) can be found quite quickly, and after that you should be able to parallelise the subsequent search on the CPU quite easily. On my single core machine, the wheel approach finds primes up to 1e9 in 28s, whereas your sieve (after moving the sqrt out of the loop) takes 86s - the improvement is due to the wheel; the inversion means you can handle N larger than 2^32 but makes it slower. Code can be found [here][2]. You could parallelise the output of the results from the naive sieve after you go past sqrt(N) too, as the bit array is not modified after that point; but once you are dealing with N large enough for it to matter the array size is too big for ints.
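For reference, the straightforward bit-array sieve being compared against looks roughly like this in Java - a minimal sketch without the wheel or the O(sqrt(N)) storage trick described above:

```java
import java.util.BitSet;

public class Sieve {
    // Classic Sieve of Eratosthenes over a BitSet.
    // Set bits mark composites; clear bits (from 2 up) are prime.
    static BitSet sieve(int n) {
        BitSet composite = new BitSet(n + 1);
        for (int p = 2; (long) p * p <= n; p++) {
            if (!composite.get(p)) {
                // Start at p*p: smaller multiples were marked by smaller primes.
                for (long m = (long) p * p; m <= n; m += p) {
                    composite.set((int) m);
                }
            }
        }
        return composite;
    }

    static int countPrimes(int n) {
        BitSet composite = sieve(n);
        int count = 0;
        for (int i = 2; i <= n; i++) {
            if (!composite.get(i)) count++;
        }
        return count;
    }
}
```

Note the `(long) p * p` casts: for N near the top of the int range, `p * p` would otherwise overflow, which is the same 2^32 boundary mentioned above.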
[1]: http://en.wikipedia.org/wiki/Wheel_factorization
[2]: http://www.tincancamera.com/examples/java/primes/ |
Consistency, convenience and readability. "UINT32" is much more readable and writable than "unsigned long long", which is the equivalent on some systems.
Also, the compiler and processor may be fixed for the life of a project, but the code from that project may find new life in another project. In this case, having consistent data types is very convenient. |
It depends.
If I am genuinely hacking something up for my own use then I will write the best code that I don't have to think about. Let the compiler be my friend for warnings etc. but I won't automatically create types for the hell of it.
The more likely the code is to be used, even occasionally, I ramp up the level of checks.
- minimal magic numbers
- better variable names
- fully checked & defined array/string lengths
- programming by contract assertions
- null value checks
- exceptions (depending upon context of the code)
- basic explanatory comments
- accessible usage documentation (if perl etc.) |
Is the C# static constructor thread safe? |
|c#|multithreading|singleton| |
In other words, is this Singleton implementation thread safe:
public class Singleton
{
private static Singleton instance;
private Singleton() { }
static Singleton()
{
instance = new Singleton();
}
public static Singleton Instance
{
get { return instance; }
}
} |
I'll take a different definition of defensive programming, as the one that's advocated by _[Effective Java](http://java.sun.com/docs/books/effective/)_ by Josh Bloch. In the book, he talks about how to handle mutable objects that callers pass to your code (e.g., in setters), and mutable objects that you pass to callers (e.g., in getters).
* For setters, make sure to clone any mutable objects, and store the clone. This way, callers cannot change the passed-in object after the fact to break your program's invariants.
* For getters, either return an immutable view of your internal data, if the interface allows it; or else return a clone of the internal data.
* When calling user-supplied callbacks with internal data, send in an immutable view or clone, as appropriate, unless you intend the callback to alter the data, in which case you have to validate it after the fact.
The take-home message is to make sure no outside code can hold an alias to any mutable objects that you use internally, so that you can maintain your invariants. |
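A minimal sketch of both rules, loosely following the `Period` example from Effective Java (the class and its fields here are illustrative, not from the original question):

```java
import java.util.Date;

public final class Period {
    private final Date start;
    private final Date end;

    public Period(Date start, Date end) {
        // Defensively copy the mutable parameters BEFORE validating them,
        // so a caller cannot mutate them between the check and the store.
        this.start = new Date(start.getTime());
        this.end = new Date(end.getTime());
        if (this.start.after(this.end)) {
            throw new IllegalArgumentException("start after end");
        }
    }

    // Getters return copies, so no outside code ever holds an alias
    // to the internal Date objects.
    public Date start() { return new Date(start.getTime()); }
    public Date end()   { return new Date(end.getTime()); }
}
```

With this in place, mutating either the original arguments or the objects returned by the getters leaves the `Period`'s invariants intact.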
I don't know if you can say that there is any one technique that is "the best". Below are a whole bunch of different approaches. Find one that suits your site and coding style, and go with it.
1. [CSS Design: Creating Custom Corners
& Borders][1]
2. [CSS Rounded Corners 'Roundup'][2]
3. [25 Rounded Corners Techniques with CSS][3]
[1]: http://www.alistapart.com/articles/customcorners/
[2]: http://www.smileycat.com/miaow/archives/000044.php
[3]: http://www.cssjuice.com/25-rounded-corners-techniques-with-css/ |
I looked at this early on in the creation of Stack Overflow and couldn't find *any* method of creating rounded corners that didn't leave me feeling like I just walked through a sewer.
[CSS3 does finally define](http://24ways.org/2006/rounded-corner-boxes-the-css3-way) the
border-radius:
Which is exactly how you'd want it to work. This works OK in the latest versions of Safari and Firefox, but not at all in IE7 (and I don't think in IE8) or Opera.
In the meantime, it's hacks all the way down. I'm interested in hearing what other people think is the cleanest way to do this across IE7, FF2/3, Safari 3, and Opera 9.5 at the moment.
Publishing Flash video |
|flash|video|webcam| |
What options do I have to publish Flash video from webcams other than Adobe's Flash Media Server and Red5? I would prefer a solution in .NET, Python or PHP but can't find anything.
|