| instruction (string, 21-27.8k chars) | chosen (string, 18-28.2k chars) | rejected (string, 18-33.6k chars) | __index_level_0__ (int64, 0-50k) |
|---|---|---|---|
<p>Suppose I have <a href="http://www.eia.doe.gov/emeu/aer/txt/ptb0808.html" rel="nofollow noreferrer">a table</a>. Now, I'm interested in Getting Useful Data Easily. This means I'd rather not drop it into Excel and go through contortions, nor somehow get it into CSV and then into a DB, and then into SQL.</p>
<p>I'd like to be able to execute a SQL query directly against a table in HTML.</p>
<p>Has anyone heard of a tool like this before? I've never seen one.</p>
|
<p>There's no tool to query an HTML table with SQL as far as I know, but <a href="http://www.w3schools.com/xquery/" rel="nofollow noreferrer">XQuery</a> is a standard language you can use to find elements in an XML or XHTML source in a manner very similar to using SQL to query databases.</p>
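<p>As an illustration of the approach (not from the original answer): if the table is well-formed XHTML, you can already get SQL-like selections with XPath, a related standard available in the Java class library; the file name and the "column 1 = '2005'" condition below are invented for the example.</p>
<pre><code>import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Roughly "SELECT col2 FROM table WHERE col1 = '2005'",
// expressed as an XPath query over the table rows.
public class TableQuery {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("table.xhtml")); // hypothetical file
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList cells = (NodeList) xpath.evaluate(
                "//tr[td[1] = '2005']/td[2]", doc, XPathConstants.NODESET);
        for (int i = 0; i < cells.getLength(); i++) {
            System.out.println(cells.item(i).getTextContent());
        }
    }
}
</code></pre>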
|
<p>No, there is absolutely no tool that lets you run SQL queries against an HTML table... but you could easily write some simple JavaScript if you had a specific query you wanted done.</p>
| 30,899
|
<p>What are the best workarounds for using a SQL <code>IN</code> clause with instances of <code>java.sql.PreparedStatement</code>, which does not support multiple values (for SQL injection security reasons): one <code>?</code> placeholder represents one value, rather than a list of values.</p>
<p>Consider the following SQL statement:</p>
<pre><code>SELECT my_column FROM my_table where search_column IN (?)
</code></pre>
<p>Using <code>preparedStatement.setString( 1, "'A', 'B', 'C'" );</code> is essentially a non-working attempt at a workaround, and it defeats the reason for using <code>?</code> in the first place.</p>
<p>What workarounds are available?</p>
|
<p>An analysis of the various options available, and the pros and cons of each is available in Jeanne Boyarsky's <em><a href="http://www.javaranch.com/journal/200510/Journal200510.jsp#a2" rel="nofollow noreferrer">Batching Select Statements in JDBC</a></em> entry on JavaRanch Journal.</p>
<p>The suggested options are:</p>
<ul>
<li>Prepare <code>SELECT my_column FROM my_table WHERE search_column = ?</code>, execute it for each value and UNION the results client-side. Requires only one prepared statement. Slow and painful.</li>
<li>Prepare <code>SELECT my_column FROM my_table WHERE search_column IN (?,?,?)</code> and execute it. Requires one prepared statement per size-of-IN-list. Fast and obvious.</li>
<li>Prepare <code>SELECT my_column FROM my_table WHERE search_column = ? ; SELECT my_column FROM my_table WHERE search_column = ? ; ...</code> and execute it. [Or use <code>UNION ALL</code> in place of those semicolons. --ed] Requires one prepared statement per size-of-IN-list. Stupidly slow, strictly worse than <code>WHERE search_column IN (?,?,?)</code>, so I don't know why the blogger even suggested it.</li>
<li>Use a stored procedure to construct the result set.</li>
<li>Prepare N different size-of-IN-list queries; say, with 2, 10, and 50 values. To search for an IN-list with 6 different values, populate the size-10 query so that it looks like <code>SELECT my_column FROM my_table WHERE search_column IN (1,2,3,4,5,6,6,6,6,6)</code>. Any decent server will optimize out the duplicate values before running the query.</li>
</ul>
<p>None of these options are ideal.</p>
<p>The best option, if you are using JDBC4 and a server that supports <code>x = ANY(y)</code>, is to use <code>PreparedStatement.setArray</code> as described in <a href="https://stackoverflow.com/questions/178479/preparedstatement-in-clause-alternatives/10240302#10240302">Boris's answer</a>.</p>
<p>There doesn't seem to be any way to make <code>setArray</code> work with IN-lists, though.</p>
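<p>For illustration, a minimal sketch of that JDBC4 route (assuming a PostgreSQL-style server; the <code>connection</code> variable and the <code>"varchar"</code> element type name are assumptions, and type names vary by driver):</p>
<pre class="lang-java prettyprint-override"><code>String sql = "SELECT my_column FROM my_table WHERE search_column = ANY (?)";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    // createArrayOf takes the SQL type name of the array elements.
    java.sql.Array values =
        connection.createArrayOf("varchar", new Object[] { "A", "B", "C" });
    ps.setArray(1, values);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString("my_column"));
        }
    }
}
</code></pre>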
<hr />
<p>Sometimes SQL statements are loaded at runtime (e.g., from a properties file) but require a variable number of parameters. In such cases, first define the query:</p>
<pre class="lang-none prettyprint-override"><code>query=SELECT * FROM table t WHERE t.column IN (?)
</code></pre>
<p>Next, load the query. Then determine the number of parameters prior to running it. Once the parameter count is known, run:</p>
<pre class="lang-java prettyprint-override"><code>sql = any( sql, count );
</code></pre>
<p>For example:</p>
<pre class="lang-java prettyprint-override"><code>/**
* Converts a SQL statement containing exactly one IN clause to an IN clause
* using multiple comma-delimited parameters.
*
* @param sql The SQL statement string with one IN clause.
* @param params The number of parameters the SQL statement requires.
* @return The SQL statement with (?) replaced with multiple parameter
* placeholders.
*/
public static String any(String sql, final int params) {
    // Create a comma-delimited list of "?" placeholders, one per
    // parameter (Collections is java.util.Collections).
    final StringBuilder sb = new StringBuilder(
        String.join(", ", Collections.nCopies(params, "?")));

    // For more than 1 parameter, replace the single parameter with
    // multiple parameter placeholders.
    if (sb.length() > 1) {
        sql = sql.replace("(?)", "(" + sb + ")");
    }

    // Return the SQL statement with the expanded placeholder list.
    return sql;
}
</code></pre>
<p>For certain databases where passing an array via the JDBC 4 specification is unsupported, this method can facilitate transforming the slow <code>= ?</code> into the faster <code>IN (?)</code> clause condition, which can then be expanded by calling the <code>any</code> method.</p>
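<p>As a usage sketch (assuming <code>java.util.Arrays</code> and <code>java.util.List</code> are imported and <code>connection</code> is an open JDBC <code>Connection</code>; both names are illustrative), expand the clause first and then bind each value to its own placeholder:</p>
<pre class="lang-java prettyprint-override"><code>List<String> values = Arrays.asList("A", "B", "C");
String sql = any("SELECT * FROM table t WHERE t.column IN (?)", values.size());

try (PreparedStatement ps = connection.prepareStatement(sql)) {
    for (int i = 0; i < values.size(); i++) {
        ps.setString(i + 1, values.get(i)); // JDBC parameters are 1-based
    }
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // process each matching row
        }
    }
}
</code></pre>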
|
<p>My workaround (JavaScript)</p>
<pre><code>var s1 = " SELECT * "
       + " FROM table t "
       + " WHERE t.field IN ";
var s3 = '(';
for (var i = 0; i < searchTerms.length; i++)
{
    if (i + 1 == searchTerms.length)
    {
        s3 = s3 + '?)';   // last placeholder closes the list
    }
    else
    {
        s3 = s3 + '?, ';
    }
}
var query = s1 + s3;
var pstmt = connection.prepareStatement(query);
for (var i = 0; i < searchTerms.length; i++)
{
    pstmt.setString(i + 1, searchTerms[i]);
}
</code></pre>
<p><code>searchTerms</code> is the array that contains your input/keys/fields, etc.</p>
| 21,511
|
<p>What's a nice way to merge two sorted arrays in ActionScript (specifically ActionScript 3.0)? The resulting array should be sorted and without duplicates.</p>
|
<p>To merge (concatenate) arrays, use <code>.concat()</code>.</p>
<p>Below are two examples of how you can concatenate arrays and remove duplicates at the same time.</p>
<p>More convenient way: (you can use <code>ArrayUtil.createUniqueCopy()</code> from <a href="http://code.google.com/p/as3corelib/" rel="noreferrer">as3corelib</a>)</p>
<pre><code>// from as3corelib:
import com.adobe.utils.ArrayUtil;
var a1:Array = ["a", "b", "c"];
var a2:Array = ["c", "b", "x", "y"];
var c:Array = ArrayUtil.createUniqueCopy(a1.concat(a2)); // result: ["a", "b", "c", "x", "y"]
</code></pre>
<p>Slightly faster way: (you can loop through the arrays yourself and use <a href="http://livedocs.adobe.com/flex/3/langref/Array.html#indexOf()" rel="noreferrer"><code>Array.indexOf()</code></a> to check for duplicates)</p>
<pre><code>var a1:Array = ["a", "b", "c"];
var a2:Array = ["c", "b", "x", "y"];
var a3:Array = ["a", "x", "x", "y", "z"];
var c:Array = arrConcatUnique(a1, a2, a3); // result: ["a", "b", "c", "x", "y", "z"]
private function arrConcatUnique(...args):Array
{
var retArr:Array = new Array();
for each (var arg:* in args)
{
if (arg is Array)
{
for each (var value:* in arg)
{
if (retArr.indexOf(value) == -1)
retArr.push(value);
}
}
}
return retArr;
}
</code></pre>
|
<p>Please follow the steps below to get your answer:</p>
<ol>
<li>Concatenate the two arrays using the "concat" method.</li>
<li>Sort the new (concatenated) array using the "sort" method provided by the Array class.</li>
<li>Write a user-defined function to remove duplicates (see the function below).</li>
</ol>
<pre><code>function removeDuplicates(p_arr:Array):Array {
    var ansArr:Array = new Array();
    var len:uint = p_arr.length;
    var i:uint = 0;
    var j:uint = 0;
    ansArr[j] = p_arr[i];
    i++;
    j++;
    while (i < len)
    {
        // Compare against the last element copied, not the empty slot at j.
        if (ansArr[j - 1] != p_arr[i])
        {
            ansArr[j] = p_arr[i];
            j++;
        }
        i++;
    }
    return ansArr;
}
</code></pre>
<p>The returned "ansArr" will be a sorted, duplicate-free merge of the two arrays.</p>
| 43,498
|
<p>I am trying to debug an ASP.NET web form that requires a value from the querystring. I just want to debug that page without having to go through the entire process of creating the querystring value and sending the request.</p>
<p>Any suggestions on how to attach a querystring value in VS2008 when I start the debugging process?</p>
<p>NOTE: I do not want to hard code the value in the pages code behind.</p>
|
<p>Visual Studio has an option to select which page to start when debugging.</p>
<p>It's under Properties -> Web -> Start Url</p>
<p>There you can enter the exact url you want.</p>
|
<p>Woot, this helped me greatly!</p>
<p>Simply right-click on your web project from the solution explorer, click properties (or alt-enter), click the Web left-side-tab, and set a specific start page (including your qs param)!</p>
| 41,631
|
<p>I'd like to have it yell hooray whenever an assert statement succeeds, or at the very least have it display the number of successful assert statements that were encountered.</p>
<p>I'm using JUnit4.</p>
<p>Any suggestions?</p>
|
<p>If you want to see some output for each successful assertion, another simple approach which requires no external dependencies or source code, would be to define your own Assert class which delegates all methods to the standard JUnit Assert class, as well as logging successful assertions (failed assertions will be reported as usual by the JUnit class).</p>
<p>You then run a global search-and-replace on your test classes from "org.junit.Assert" => "com.myco.test.Assert", which should fix-up all regular and static import statements.</p>
<p>You could also then easily migrate your approach to the quieter-is-better-camp and change the wrapper class to just report the total # of passed assertions per test or per class, etc.</p>
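<p>A minimal sketch of such a wrapper (the package name and logging style are invented; only two delegate methods are shown):</p>
<pre><code>package com.myco.test;

// Delegates to org.junit.Assert; a failed assertion still throws as usual,
// so the log line is only reached when the assertion passed.
public final class Assert {

    private static int passed = 0;

    public static void assertTrue(boolean condition) {
        org.junit.Assert.assertTrue(condition);
        hooray();
    }

    public static void assertEquals(Object expected, Object actual) {
        org.junit.Assert.assertEquals(expected, actual);
        hooray();
    }

    // ...delegate the remaining org.junit.Assert methods the same way...

    private static void hooray() {
        passed++;
        System.out.println("Hooray! Assertions passed so far: " + passed);
    }
}
</code></pre>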
|
<p>JUnit's Javadoc unfortunately says that only failed assertions are recorded (<a href="http://junit.sourceforge.net/javadoc_40/index.html" rel="nofollow noreferrer">http://junit.sourceforge.net/javadoc_40/index.html</a>),</p>
<p>so it seems it is not possible.</p>
| 21,762
|
<p>I'm not much of a Visual Basic person, but I am tasked with maintaining an old VB6 app. Whenever I check out a file, the editor will replace a bunch of the uppercase variable names with lowercase automatically. How can I make this stop!? I don't want to have to change them all back, and it's a pain to have these changes show up in SourceSafe "Differences" when I'm trying to locate the REAL differences.</p>
<p>It is changing it automatically in the definition, too:
<code>Dim C as Control</code> becomes <code>Dim c as Control</code>. <code>Dim X&</code> becomes <code>Dim x&</code>. But it doesn't do it all the time; for example, three lines down from <code>Dim x&</code>, there's a <code>Dim Y&</code>, uppercase, which it did not change. Why's it do this to me?</p>
|
<p>Continuing from <a href="https://stackoverflow.com/questions/248760/vb6-editor-changing-case-of-variable-names#248773">DJ's answer</a>...</p>
<p>And it won't only change the case of variables in the same scope either.</p>
<p>It will change the case of all variables with the same name in your entire project. So even if they're declared in uppercase in one place, another module might have different variables using the same variable names in lowercase, causing all variables in your project to change to lowercase, depending on which of the declarations was loaded (?) or edited last.</p>
<p>So the reason your C and X variables are changing case, while the Y isn't, is probably because C and X are declared somewhere else in your project too, but in lowercase, while Y isn't.</p>
<p>There's another mention of it <a href="http://www.programmingtalk.com/showthread.php?t=47839" rel="nofollow noreferrer">here</a>, where they mostly seem concerned with such variable names conflicting when case is being used to differentiate local from global variables. They end up going for prefixes instead.</p>
<p>The only alternative I can think of is to use some other editor with VB6-highlighting capabilities to do your editing...</p>
|
<p>DJ is spot on... VB always changes the case of variables to match the original declaration. It's a 'feature'.</p>
| 30,916
|
<p>I recall reading somewhere that using references to smart pointers can cause memory corruption. Is this simply because of using the reference of the smart pointer after it's been destroyed? Or does the reference counting get messed up?</p>
<p>Thanks for clarifying</p>
|
<p>Assuming you are talking about shared_ptr here...</p>
<blockquote>
<p>Is this simply because of using the
reference of the smart pointer after
its been destroyed?</p>
</blockquote>
<p>This is indeed the reason: you may not know absolutely the lifetime of the pointer your reference refers to.</p>
<p>To get around this, you'd want to look into boost::weak_ptr. It doesn't participate in reference counting. When you need to use it, it gives you a shared_ptr which goes away once you're done with it. It will also let you know when the referred-to pointer has been collected.</p>
<p>From the <a href="http://www.boost.org/doc/libs/1_36_0/libs/smart_ptr/weak_ptr.htm#expired" rel="noreferrer">weak_ptr</a> documentation</p>
<blockquote>
<p>The weak_ptr class template stores a
"weak reference" to an object that's
already managed by a shared_ptr. To
access the object, a weak_ptr can be
converted to a shared_ptr using the
shared_ptr constructor or the member
function lock. When the last
shared_ptr to the object goes away and
the object is deleted, the attempt to
obtain a shared_ptr from the weak_ptr
instances that refer to the deleted
object will fail: the constructor will
throw an exception of type
boost::bad_weak_ptr, and
weak_ptr::lock will return an empty
shared_ptr.</p>
</blockquote>
<p>Note the method expired() will also tell you if your ptr is still around.</p>
|
<p>We have custom-made smart pointers, and we always make a habit of passing them as a 'const smartpointer &'.</p>
<p>This doesn't increment or decrement the smart pointer's reference count, and as such - and more importantly - calls to InterlockedIncrement/Decrement are avoided, which in turn avoids a <a href="http://en.wikipedia.org/wiki/Memory_barrier" rel="nofollow noreferrer">memory fence</a> and all the things that go with that: bus locking, cache invalidation, ...</p>
| 21,598
|
<p>I have a table with a binary column which stores files of a number of different possible filetypes (PDF, BMP, JPEG, WAV, MP3, DOC, MPEG, AVI etc.), but no columns that store either the name or the type of the original file. Is there any easy way for me to process these rows and determine the type of each file stored in the binary column? Preferably it would be a utility that only reads the file headers, so that I don't have to fully extract each file to determine its type.</p>
<p><strong>Clarification</strong>: I know that the approach here involves reading just the beginning of each file. I'm looking for a good resource (aka links) that can do this for me without too much fuss. Thanks.</p>
<p>Also, <strong>just C#/.NET on Windows, please</strong>. I'm not using Linux and can't use Cygwin (doesn't work on Windows CE, among other reasons).</p>
|
<p>You can use these tools to find the file format.</p>
<p>File Analyzer
<a href="http://www.softpedia.com/get/Programming/Other-Programming-Files/File-Analyzer.shtml" rel="nofollow noreferrer">http://www.softpedia.com/get/Programming/Other-Programming-Files/File-Analyzer.shtml</a></p>
<p>What Format
<a href="http://www.jozy.nl/whatfmt.html" rel="nofollow noreferrer">http://www.jozy.nl/whatfmt.html</a></p>
<p>PE file format analyser
<a href="http://peid.has.it/" rel="nofollow noreferrer">http://peid.has.it/</a></p>
<p>This website may also be helpful for you:
<a href="http://mark0.net/onlinetrid.aspx" rel="nofollow noreferrer">http://mark0.net/onlinetrid.aspx</a></p>
<p>Note:
I have included the download links to make sure that you get the right tool names and information.</p>
<p>Please verify the source before you download them.</p>
<p>I have used one of these tools in the past - I think it was File Analyzer - and it will tell you the closest match.</p>
<p>Happy tooling.</p>
|
<p>A lot of filetypes have well defined headers that begin the file. You could check the first few bytes to check to see how the file begins.</p>
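<p>The asker wants C#/.NET, but the idea is language-agnostic; here it is sketched in Java (the signatures listed are well-known "magic numbers"; the class itself is invented, and the same byte checks translate directly to C#):</p>
<pre><code>// Guess a file type from its first few bytes, without extracting the file.
public final class MagicBytes {

    public static String guessType(byte[] header) {
        if (startsWith(header, new byte[] { '%', 'P', 'D', 'F' }))  return "PDF";
        if (startsWith(header, new byte[] { 'B', 'M' }))            return "BMP";
        if (startsWith(header, new byte[] { (byte) 0xFF, (byte) 0xD8, (byte) 0xFF })) return "JPEG";
        if (startsWith(header, new byte[] { 'R', 'I', 'F', 'F' }))  return "WAV or AVI (RIFF container)";
        if (startsWith(header, new byte[] { 'I', 'D', '3' }))       return "MP3 (with ID3 tag)";
        if (startsWith(header, new byte[] { (byte) 0xD0, (byte) 0xCF, 0x11, (byte) 0xE0 })) return "DOC (OLE compound file)";
        return "unknown";
    }

    private static boolean startsWith(byte[] data, byte[] prefix) {
        if (data.length < prefix.length) return false;
        for (int i = 0; i < prefix.length; i++) {
            if (data[i] != prefix[i]) return false;
        }
        return true;
    }
}
</code></pre>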
| 41,223
|
<p>I have a project that I have been working on for a while, just one of those little pet projects that I would like to one day release to open source. </p>
<p>Now I started the project about 12 months ago but I was only working on it lightly, I have just started to concentrate a lot more of my time on it(almost every night). </p>
<p>Because it is a framework like application I sometimes struggle with a sense of direction due to the fact I don't have anything driving my design decisions and I sometimes end up making features that are hard to use or even find. I have been reading about how to do TDD and thought maybe this will help me with some of the problems that I am having. </p>
<p>So the question is: do you think it's a good idea to start using TDD on a project that doesn't already use it?</p>
<p>EDIT: I have just added a bit to clarify what I mean by struggling with a "sense of direction"; it probably wasn't the best thing to say without clarification.</p>
|
<p>In my opinion, it's never too late to adopt a better practice - or to drop a worse one - so I'd say "Yes, you should start".</p>
<p>However ... (there's always a "but") ...</p>
<p>... one of the biggest gains of TDD is that it impacts on your design, encouraging you to keep responsibilities separate, interactions clean and so on.</p>
<p>At this point in your project, you may find it difficult to get tests written for some aspects of your framework. Don't give up though, even if you can't test some areas, your quality will be the better for the areas you can test, and your skills will improve for the experience.</p>
|
<p>Absolutely.</p>
<p>Introduce TDD to new code and if time allows, introduce "Comment Driven Design" with your existing code if it's not already tested.</p>
<ul>
<li>Comment out the block of existing code you need to test</li>
<li>Write your test</li>
<li>Uncomment your original code one statement at a time (if you have an if block, uncomment the entire block)</li>
<li>Determine if your original code ultimately passes your test and if not, re-write to pass your tests accordingly</li>
</ul>
| 37,653
|
<p>Is it possible to integrate my PHP web-based ecommerce application with QuickBooks Online Edition?</p>
<p>When I make a sale on my web site, I would like to be able to make the corresponding journal entry in my accounting books.</p>
<p>Note, I'm referring to QuickBooks <strong>Online Edition</strong>, <strong>not</strong> the desktop software.</p>
|
<p>I have now built a set of PHP classes that facilitates communication with QuickBooks Online Edition. It makes communicating with QuickBooks Online Edition as easy as:</p>
<pre><code>// Create the connection to QuickBooks
$API = new QuickBooks_API(...);
// Build the Customer object
$Customer = new QuickBooks_Object_Customer();
$Customer->setName($name);
$Customer->setShipAddress('134 Stonemill Road', '', '', '', '', 'Storrs', 'CT', '', '06268');
// Send the request to QuickBooks
$API->addCustomer($Customer, '_add_customer_callback', 15);
// The framework also supports processing raw qbXML requests
$API->qbxml('
<CustomerQueryRq>
<FullName>Keith Palmer Jr.</FullName>
</CustomerQueryRq>', '_raw_qbxml_callback');
</code></pre>
<p>You can download the framework from my thread here:
<a href="https://idnforums.intuit.com/messageview.aspx?catid=56&threadid=9164" rel="noreferrer">QuickBooks Online Edition PHP Package</a></p>
<p>I've started writing some documentation/tips on how to integrate web applications with QuickBooks Online Edition here:
<a href="http://wiki.consolibyte.com/wiki/doku.php/quickbooks_online_edition" rel="noreferrer">QuickBooks Integration wiki</a></p>
|
<p>It looks like Quickbooks OE has an XML-based SDK, available at:</p>
<p><a href="http://developer.intuit.com/technical_resources/default.aspx?id=1492" rel="nofollow noreferrer">http://developer.intuit.com/technical_resources/default.aspx?id=1492</a></p>
| 24,047
|
<p>I have one large Access database that I need to normalize into five tables and a lookup table. I understand the theory behind normalization and have already sketched out the look of the tables, but I am lost on how to transform my table to get the database normalized. The Table Analyzer doesn't offer the breakdown that I want.</p>
|
<p>If you have a single table, add an Autonumber field to it.</p>
<p>Then create your other tables, and use the Autonumber value from the original single table as the foreign key to join them back to the original data.</p>
<p>If you had tblPerson:</p>
<pre><code> tblPerson
LastName, FirstName, WorkPhone, HomePhone
</code></pre>
<p>and you wanted to break it down, add PersonID autonumber and then create a phone table:</p>
<pre><code> tblPhone
PhoneID, PersonID, PhoneNumber, Type
</code></pre>
<p>Then you'd append data from tblPerson for the appropriate fields:</p>
<pre><code> INSERT INTO tblPhone (PersonID, PhoneNumber, Type)
SELECT tblPerson.PersonID, tblPerson.WorkPhone, "Work"
FROM tblPerson
WHERE tblPerson.WorkPhone Is Not Null;
</code></pre>
<p>and then you'd run another query for the home phone:</p>
<pre><code> INSERT INTO tblPhone (PersonID, PhoneNumber, Type)
SELECT tblPerson.PersonID, tblPerson.HomePhone, "Home"
FROM tblPerson
WHERE tblPerson.HomePhone Is Not Null;
</code></pre>
<p>Someone suggested a UNION query, which you'd have to save as you can't have a UNION query as a subselect in Jet SQL. The saved query would look something like this:</p>
<pre><code> SELECT tblPerson.PersonID, tblPerson.WorkPhone, "Work" As Type
FROM tblPerson
WHERE tblPerson.WorkPhone Is Not Null
UNION ALL
SELECT tblPerson.PersonID, tblPerson.HomePhone, "Home" As Type
FROM tblPerson
WHERE tblPerson.HomePhone Is Not Null;
</code></pre>
<p>If you saved that as qryPhones, you'd then append qryPhones with this SQL:</p>
<pre><code> INSERT INTO tblPhone (PersonID, PhoneNumber, Type)
SELECT qryPhones.PersonID, qryPhones.WorkPhone, qryPhones.Type
FROM qryPhones;
</code></pre>
<p>Obviously, this is just the simplest example. You'd do the same for all the fields. The key is that you have to create a PK value for your source table that will tie all the derived records back to the original table.</p>
|
<p>Can queries, particularly Union queries, offer a solution? Where are you seeing a problem?</p>
| 34,902
|
<p>I would like to access the work items in our TFS programmatically. Shouldn't there be an obvious command line tool to extract such information? Or a WebService I can just call? I have already checked into using Excel - this is neat, but I want something more hardcore...</p>
|
<p>Take a look at the TFS API (<a href="http://msdn.microsoft.com/en-us/library/bb130146(VS.80).aspx" rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/library/bb130146(VS.80).aspx</a>). It gives you access to the same code used by Microsoft to create the Visual Studio integration and their version control command line tool (<a href="http://msdn.microsoft.com/en-us/library/z51z7zy0(VS.80).aspx" rel="nofollow noreferrer">tf.exe</a>).</p>
<p>You can also take a look at the power tools. <a href="http://msdn.microsoft.com/en-us/tfs2008/bb980963.aspx" rel="nofollow noreferrer">tfpt.exe</a> is the power tool command line and has many other advanced features. That said - you can do pretty much what you want with the SDK.</p>
<p>The new version of the power tools will be out soon, and that looks to have <a href="http://blogs.msdn.com/bharry/archive/2008/10/01/preview-of-the-next-tfs-power-tools-release.aspx" rel="nofollow noreferrer">Powershell support</a> coming.</p>
<p>Enjoy!</p>
|
<p>If you download the <a href="http://msdn.microsoft.com/sv-se/tfs2008/bb980963(en-us).aspx" rel="nofollow noreferrer">TFS Power Tools</a>, you can use "tfpt query" to your advantage.</p>
| 25,030
|
<p>The following code snippet illustrates a memory leak when opening XPS files. If you run it and watch the task manager, it will grow and not release memory until the app exits.</p>
<p>'****** Console application BEGINS.</p>
<pre><code>Module Main
Const DefaultTestFilePath As String = "D:\Test.xps"
Const DefaultLoopRuns As Integer = 1000
Public Sub Main(ByVal Args As String())
Dim PathToTestXps As String = DefaultTestFilePath
Dim NumberOfLoops As Integer = DefaultLoopRuns
If (Args.Count >= 1) Then PathToTestXps = Args(0)
If (Args.Count >= 2) Then NumberOfLoops = CInt(Args(1))
Console.Clear()
Console.WriteLine("Start - {0}", GC.GetTotalMemory(True))
For LoopCount As Integer = 1 To NumberOfLoops
Console.CursorLeft = 0
Console.Write("Loop {0:d5}", LoopCount)
' The more complex the XPS document and the more loops, the more memory is lost.
Using XPSItem As New Windows.Xps.Packaging.XpsDocument(PathToTestXps, System.IO.FileAccess.Read)
Dim FixedDocSequence As Windows.Documents.FixedDocumentSequence
' This line leaks a chunk of memory each time, when commented out it does not.
FixedDocSequence = XPSItem.GetFixedDocumentSequence
End Using
Next
Console.WriteLine()
GC.Collect() ' This line has no effect, I think the memory that has leaked is unmanaged (C++ XPS internals).
Console.WriteLine("Complete - {0}", GC.GetTotalMemory(True))
Console.WriteLine("Loop complete but memory not released, will release when app exits (press a key to exit).")
Console.ReadKey()
End Sub
End Module
</code></pre>
<p>'****** Console application ENDS.</p>
<p>The reason it loops a thousand times is because my code processes lots of files and leaks memory quickly forcing an OutOfMemoryException. Forcing Garbage Collection does not work (I suspect it is an unmanaged chunk of memory in the XPS internals).</p>
<p>The code was originally in another thread and class but has been simplified to this.</p>
<p>Any help greatly appreciated.</p>
<p>Ryan</p>
|
<p>Well, I found it. It IS a bug in the framework, and to work around it you add a call to UpdateLayout. The Using statement can be changed to the following to provide a fix:</p>
<pre><code> Using XPSItem As New Windows.Xps.Packaging.XpsDocument(PathToTestXps, System.IO.FileAccess.Read)
Dim FixedDocSequence As Windows.Documents.FixedDocumentSequence
Dim DocPager As Windows.Documents.DocumentPaginator
FixedDocSequence = XPSItem.GetFixedDocumentSequence
DocPager = FixedDocSequence.DocumentPaginator
DocPager.ComputePageCount()
            ' This is the fix: each page must be laid out, otherwise resources are never released.
For PageIndex As Integer = 0 To DocPager.PageCount - 1
DirectCast(DocPager.GetPage(PageIndex).Visual, Windows.Documents.FixedPage).UpdateLayout()
Next
FixedDocSequence = Nothing
End Using
</code></pre>
|
<p>I can't give you any authoritative advice, but I did have a few thoughts:</p>
<ul>
<li>If you want to watch your memory inside the loop, you need to be collecting memory inside the loop as well. Otherwise you will <em>appear</em> to leak memory by design, since it's more efficient to collect larger blocks less frequently (as needed) rather than constantly collect small amounts. In this case the scope block created by the Using statement <em>should</em> be enough, but your use of GC.Collect indicates that maybe something else is going on.</li>
<li>Even GC.Collect is only a suggestion (okay, very <em>strong</em> suggestion, but still a suggestion): it doesn't guarantee that all outstanding memory is collected.</li>
<li>If the internal XPS code really is leaking memory, the only way to force the OS to collect it is to trick the OS into thinking the application has ended. To do that you could perhaps create a dummy application that handles your XPS code and is called from the main app; alternatively, moving the XPS code into its own AppDomain inside your main code may be enough as well.</li>
</ul>
| 26,906
|
<p>Our app is made up of several Modules, and we would like to take advantage of the XP feature that would allow these to be grouped together. For example all windows in "Module A" would be grouped together, separately from windows in "Module B". </p>
<p>I've tried setting the AssemblyTitle attribute in the
project's AssemblyInfo.cs file but still no title appears, only the count of
the number of items. </p>
<p>Is there any way to have control over this, or is it all controlled by Windows?</p>
<p>This is in a WinForms application, for Windows XP. Note that all modules are launched/hosted by a single process but what we want to do is group together all windows contained in a certain module. The Application style is SDI/MDI hybrid, just like MS Word.</p>
<p>Thanks</p>
|
<p>In addition to what Eoin says, Windows will only group taskbar buttons on a process-by-process basis. In other words, all the windows in your app will appear under one group.</p>
<p>The only way to get them into separate groups is to define one process for each of the groups you want. Then use [assembly: AssemblyDescription] to customise the title of each.</p>
<p>To my knowledge there is no way to modify these taskbar groups by hand.</p>
|
<p>IMO that feature is about the worst feature in XP, and I am so glad that I can disable it.</p>
<p>But that aside: if you have the group option enabled in your taskbar, then Windows should do it automatically.
In our company we have Java applications, and all the different Java windows are always grouped because they are all started by java.exe.</p>
<p>I think that if you start all modules with the same tool it should work.</p>
| 30,546
|
<p>I have a somewhat complicated branching structure at work (at least for me). It is something like this:</p>
<pre>
Main
|
1
|
2
| \
3 \
Ver2
|
1
| \
2 \
| ProjectA
3 |
1
</pre>
<p>There are 2 branches off of main. "Ver2" which has everyone's changes for the next version, and "ProjectA" which is my work.</p>
<p>My question is: Is there a way to create a config spec that knows what has been merged so I get:</p>
<ol>
<li>Anything from ProjectA that has not been merged</li>
<li>If the LATEST from ProjectA has been merged to Ver2, then get the LATEST from Ver2 branch</li>
<li>If there is not a ProjectA branch, get from Ver2</li>
<li>If there is no Ver2, get from MAIN</li>
</ol>
<p>For example, in the above case, if I merged version 1 from ProjectA to version 2 in Ver2 branch, then I would want to see version 3 on Ver2. However, if I have not yet merged those files, I would want version 1 from ProjectA in my view.</p>
|
<p>Mentioning the dragon book and antlr means you've answered your own question.</p>
<p>If you're looking for other parser generators you could also check out boost::spirit (<a href="http://spirit.sourceforge.net/" rel="noreferrer">http://spirit.sourceforge.net/</a>).</p>
<p>Depending on what you're trying to achieve you might also want to consider a DSL, which you can either parse yourself or write in a scripting language like boo, ruby, python etc...</p>
|
<p>Hmm … your request is a bit unspecific. While there are many recent developments in this general area, they're all quite specialized (naturally, since the field has matured). The original parsing approaches haven't really changed, though. You might want to read up on changes in parser creation tools (<a href="http://www.antlr.org/" rel="nofollow noreferrer">Antlr</a>, <a href="http://www.devincook.com/goldparser/" rel="nofollow noreferrer">Gold Parser</a>, to name but a few).</p>
| 30,285
|
<p>I would like to construct a query that displays all the results in a table, but is offset by 5 from the start of the table. As far as I can tell, MySQL's <code>LIMIT</code> requires a limit as well as an offset. Is there any way to do this?</p>
|
<p>From the <a href="http://dev.mysql.com/doc/refman/5.0/en/select.html#id4651990" rel="noreferrer">MySQL Manual on LIMIT</a>:</p>
<blockquote>
<p>To retrieve all rows from a certain
offset up to the end of the result
set, you can use some large number for
the second parameter. This statement
retrieves all rows from the 96th row
to the last:</p>
</blockquote>
<pre><code>SELECT * FROM tbl LIMIT 95, 18446744073709551615;
</code></pre>
|
<p>Just today I was reading about the best way to get huge amounts of data (more than a million rows) from a MySQL table. One way is, as suggested, using <code>LIMIT x,y</code> where <code>x</code> is the offset and <code>y</code> the number of rows you want returned. However, as I found out, it isn't the most efficient way to do so. If you have an autoincrement column, you can just as easily use a <code>SELECT</code> statement with a <code>WHERE</code> clause saying from which record you'd like to start.</p>
<p>For example,
<code>SELECT * FROM table_name WHERE id > x;</code></p>
<p>It seems that mysql gets all results when you use <code>LIMIT</code> and then only shows you the records that fit in the offset: not the best for performance.</p>
<p>Source: an answer to this question in the <a href="http://forums.mysql.com/read.php?24,112440,112440" rel="nofollow">MySQL Forums</a>. Just take note: the thread is about 6 years old.</p>
| 31,882
|
<p>I have an action handling a form post, but I want to make sure they are authenticated before the action. The problem is that the post data is lost because the user is redirected to the login page, and then back.</p>
<pre><code> [AcceptVerbs(HttpVerbs.Post)]
[Authorize]
public ActionResult AskQuestion(string question)
{
....
}
</code></pre>
<p>Any ideas?</p>
<p>Cheers</p>
|
<p>You need to serialize your form values and a RedirectUrl to a hidden field.</p>
<p>After authentication deserialize the data in your hidden field and redirect based on the value of the RedirectUrl.</p>
<p>You will need a custom Authorize class to handle this.</p>
|
<p>You can also use the session to save the information...</p>
| 49,457
|
<p>I have written some code in my VB.NET application to send an HTML e-mail (in this case, a lost password reminder).</p>
<p>When I test the e-mail, it gets eaten by my spam filter. One of the things that it's scoring badly on is because of the following problem:</p>
<pre>MIME_QP_LONG_LINE RAW: Quoted-printable line longer than 76 chars</pre>
<p>I've been through the source of the e-mail, and I've broken each line longer than 76 characters into two lines with a CR+LF in between, but that hasn't fixed the problem.</p>
<p>Can anyone point me in the right direction?</p>
<p>Thanks!</p>
|
<p>Quoted-printable expands 8-bit characters to "={HEX-Code}", thus making the messages longer. Maybe you are just hitting this limit?</p>
<p>Have you tried to break the message at, say, 70 characters? That should provide space for a couple of characters per line.</p>
<p>Or you just encode the email with Base64 - all mail clients can handle that.</p>
<p>Or you just set Content-Transfer-Encoding to 8bit and send the data unencoded. I know of no mail server unable to handle 8bit bytes these days.</p>
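<p>To see why the length limit bites, here is a tiny Java sketch (illustration only) of what quoted-printable does to a single non-ASCII character; the byte values shown are standard UTF-8:</p>
<pre><code>import java.nio.charset.StandardCharsets;

// "é" is two bytes in UTF-8 (0xC3 0xA9); quoted-printable turns each
// byte into three characters, so lines grow quickly.
public class QpExpansion {
    public static void main(String[] args) {
        StringBuilder qp = new StringBuilder();
        for (byte b : "é".getBytes(StandardCharsets.UTF_8)) {
            qp.append(String.format("=%02X", b));
        }
        System.out.println(qp); // prints =C3=A9
    }
}
</code></pre>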
|
<p>This is a bug in the implementation of the Quoted-Printable encoding in System.Net.Mail.MailMessage, which has been there for a long time, but is apparently now fixed, as of .Net 4 Beta 2.</p>
<p><a href="http://connect.microsoft.com/VisualStudio/feedback/details/156052/mailmessage-body-encoding-quoted-printable-violates-rfcs-soft-line-breaks-requirements" rel="nofollow noreferrer">http://connect.microsoft.com/VisualStudio/feedback/details/156052/mailmessage-body-encoding-quoted-printable-violates-rfcs-soft-line-breaks-requirements</a></p>
<p>One work-around is to use Base64 encoding instead (even though it would not otherwise be good practice to send a plain-text MIME part in a non-human readable encoding like this). Asking the user of the class to manually split the lines of the message before sending it is not a general solution, as the modified message is not what they wanted to send (e.g. it might include a link which is longer than 76 chars, and so cannot be split). Quoted-Printable can handle messages with lines which are longer than 76 chars <em>before</em> encoding, as long as it is implemented correctly.</p>
| 8,304
|
<p>I would very much like to integrate <a href="http://www.logilab.org/857" rel="noreferrer">pylint</a> into the build process for my python projects, but I have run into one show-stopper: one of the error types that I find extremely useful - <code>E1101: %s %r has no %r member</code> - constantly reports errors when using common django fields, for example:</p>
<pre><code>E1101:125:get_user_tags: Class 'Tag' has no 'objects' member
</code></pre>
<p>which is caused by this code:</p>
<pre><code>def get_user_tags(username):
"""
Gets all the tags that username has used.
Returns a query set.
"""
return Tag.objects.filter( ## This line triggers the error.
tagownership__users__username__exact=username).distinct()
# Here is the Tag class, models.Model is provided by Django:
class Tag(models.Model):
"""
Model for user-defined strings that help categorize Events on
on a per-user basis.
"""
name = models.CharField(max_length=500, null=False, unique=True)
def __unicode__(self):
return self.name
</code></pre>
<p>How can I tune Pylint to properly take fields such as objects into account? (I've also looked into the Django source, and I have been unable to find the implementation of <code>objects</code>, so I suspect it is not "just" a class field. On the other hand, I'm fairly new to python, so I may very well have overlooked something.)</p>
<p><strong>Edit:</strong> The only way I've found to tell pylint to not warn about these warnings is by blocking all errors of the type (E1101) which is not an acceptable solution, since that is (in my opinion) an extremely useful error. If there is another way, without augmenting the pylint source, please point me to specifics :)</p>
<p>See <a href="http://creswick.github.io/blog/2008/09/05/wrestling-python/" rel="noreferrer">here</a> for a summary of the problems I've had with <code>pychecker</code> and <code>pyflakes</code> -- they've proven to be far too unstable for general use. (In pychecker's case, the crashes originated in the pychecker code -- not the source it was loading/invoking.)</p>
|
<p>Do not disable or weaken Pylint functionality by adding <code>ignores</code> or <code>generated-members</code>.<br>
Use an actively developed Pylint plugin that <strong>understands</strong> Django.<br>
<a href="https://github.com/landscapeio/pylint-django" rel="noreferrer">This Pylint plugin for Django</a> works quite well:</p>
<pre><code>pip install pylint-django
</code></pre>
<p>and when running pylint add the following flag to the command:</p>
<pre><code>--load-plugins pylint_django
</code></pre>
<p>Detailed blog post <a href="https://blog.landscape.io/using-pylint-on-django-projects-with-pylint-django.html" rel="noreferrer">here</a>.</p>
|
<p>For heroku users, you can also use <a href="https://stackoverflow.com/a/31000713/527489">Tal Weiss's answer to this question</a> using the following syntax to run pylint with the pylint-django plugin (replace <code>timekeeping</code> with your app/package):</p>
<pre><code># run on the entire timekeeping app/package
heroku local:run pylint --load-plugins pylint_django timekeeping
# run on the module timekeeping/report.py
heroku local:run pylint --load-plugins pylint_django timekeeping/report.py
# With temporary command line disables
heroku local:run pylint --disable=invalid-name,missing-function-docstring --load-plugins pylint_django timekeeping/report.py
</code></pre>
<p>Note: I was unable to run without specifying project/package directories.</p>
<p>If you have issues with <code>E5110: Django was not configured.</code>, you can also invoke as follows to try to work around that (again, change <code>timekeeping</code> to your app/package):</p>
<pre><code>heroku local:run python manage.py shell -c 'from pylint import lint; lint.Run(args=["--load-plugins", "pylint_django", "timekeeping"])'
# With temporary command line disables, specific module
heroku local:run python manage.py shell -c 'from pylint import lint; lint.Run(args=["--load-plugins", "pylint_django", "--disable=invalid-name,missing-function-docstring", "timekeeping/report.py"])'
</code></pre>
| 14,135
|
<p>One mentor I respect suggests that a simple bean is a waste of time - that value objects 'MUST' contain some business logic to be useful. </p>
<p>Another says such code is difficult to maintain and that all business logic must be externalized. </p>
<p>I realize this question is subjective. Asking anyway - want to know answers from more perspectives.</p>
|
<p>The idea of putting data and business logic together is to promote encapsulation, and to expose as little internal state as possible to other objects. That way, clients can rely on an interface rather than on an implementation. See the <a href="http://www.pragmaticprogrammer.com/articles/tell-dont-ask" rel="noreferrer">"Tell, Don't Ask"</a> principle and the <a href="http://en.wikipedia.org/wiki/Law_of_Demeter" rel="noreferrer">Law of Demeter</a>. Encapsulation makes it easier to understand the states data can be in, easier to read code, easier to decouple classes and generally easier to unit test.</p>
<p>Externalising business logic (generally into "Service" or "Manager" classes) makes questions like "where is this data used?" and "What states can it be in?" a lot more difficult to answer. It's also a procedural way of thinking, wrapped up in an object. This can lead to an <a href="http://www.martinfowler.com/bliki/AnemicDomainModel.html" rel="noreferrer">anemic domain model</a>.</p>
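<p>A small invented Java example of the difference (the <code>Account</code> classes are illustration only):</p>
<pre><code>// Anemic style: the rule lives in every caller, which must ask for
// state, decide, and write state back.
class AnemicAccount {
    private long balance;
    public long getBalance() { return balance; }
    public void setBalance(long balance) { this.balance = balance; }
}

// "Tell, Don't Ask" style: the object guards its own invariant.
class Account {
    private long balance;
    public void withdraw(long amount) {
        if (amount > balance) {
            throw new IllegalStateException("insufficient funds");
        }
        balance -= amount;
    }
}
</code></pre>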
<p>Externalising behaviour isn't always bad. For example, a <a href="http://martinfowler.com/eaaCatalog/serviceLayer.html" rel="noreferrer">service layer</a> might orchestrate domain objects, but without taking over their state-manipulating responsibilities. Or, when you are mostly doing reads/writes to a DB that map nicely to input forms, maybe you don't need a domain model - or the painful object/relational mapping overhead it entails - at all.</p>
<p>Transfer Objects often serve to decouple architectural layers from each other (or from an external system) by providing the minimum state information the calling layer needs, without exposing any business logic.</p>
<p>This can be useful, for example when preparing information for the view: just give the view the information it needs, and nothing else, so that it can concentrate on <em>how</em> to display the information, rather than <em>what</em> information to display. For example, the TO might be an aggregation of several sources of data.</p>
<p>One advantage is that your views and your domain objects are decoupled. Using your domain objects in JSPs can make your domain harder to refactor and promotes the indiscriminate use of getters and setters (hence breaking encapsulation).</p>
<p>However, there's also an overhead associated with having a lot of Transfer Objects and often a lot of duplication, too. Some projects I've been on end up with TO's that basically mirror other domain objects (which I consider an anti-pattern).</p>
|
<p>My personal preference is to put all business logic in the domain model itself, that is in the "true" domain objects. So when Data Transfer Objects are created they are mostly just a (immutable) state representation of domain objects and hence contain no business logic. They can contain methods for cloning and comparing though, but the meat of the business logic code stays in the domain objects.</p>
| 13,556
|
<p>How do we filter an XML document based on another XML document? I have to remove all the elements which are not present in the lookup XML. Both the input XML and the lookup XML have the same root element; we are using XSLT 1.0.</p>
<p>Ex Input</p>
<pre><code><Root>
<E1 a="1">V1</E1>
<E2>V2</E2>
<E3>V3</E3>
<E5>
<SE51>SEV1</SE51>
<SE52>SEV2</SE52>
</E5>
<E6>
<SE61>SEV3</SE61>
<SE62>SEV4</SE62>
</E6>
</Root>
</code></pre>
<p>Filter Xml</p>
<pre><code><Root>
<E1 a="1"></E1>
<E2></E2>
<E5>
<SE51></SE51>
<SE52></SE52>
</E5>
</Root>
</code></pre>
<p>Expected Output</p>
<pre><code><Root>
<E1 a="1">V1</E1>
<E2>V2</E2>
<E5>
<SE51>SEV1</SE51>
<SE52>SEV2</SE52>
</E5>
</Root>
</code></pre>
|
<p>Here is the required transformation:</p>
<pre>
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:z="inline:text.xml"
  exclude-result-prefixes="z">

  <xsl:output omit-xml-declaration="yes" indent="yes"/>
  <xsl:strip-space elements="*"/>

  <z:filter>
    <Root>
      <E1 a="1"></E1>
      <E2></E2>
      <E5>
        <SE51></SE51>
        <SE52></SE52>
      </E5>
    </Root>
  </z:filter>

  <xsl:variable name="vFilter" select="document('')/*/z:filter"/>

  <xsl:template match="/">
    <xsl:apply-templates select="*[name()=name($vFilter/*)]">
      <xsl:with-param name="pFiltNode" select="$vFilter/*"/>
    </xsl:apply-templates>
  </xsl:template>

  <xsl:template match="*">
    <xsl:param name="pFiltNode"/>
    <xsl:copy>
      <xsl:copy-of select="@*"/>
      <xsl:for-each select="text() | *">
        <xsl:choose>
          <xsl:when test="self::text()">
            <xsl:copy-of select="."/>
          </xsl:when>
          <xsl:otherwise>
            <xsl:variable name="vFiltNode"
              select="$pFiltNode/*[name()=name(current())]"/>
            <xsl:apply-templates select="self::node()[$vFiltNode]">
              <xsl:with-param name="pFiltNode" select="$vFiltNode"/>
            </xsl:apply-templates>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:for-each>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>
</pre>
<p>When this transformation is applied on the following XML document (the original one plus the addition of <code><SE511>SEV11</SE511></code> to <strong>demonstrate that the filtering works on any level</strong>):</p>
<pre>
<Root>
   <E1 a="1">V1</E1>
   <E2>V2</E2>
   <E3>V3</E3>
   <E5>
      <SE51>SEV1</SE51>
      <SE511>SEV11</SE511>
      <SE52>SEV2</SE52>
   </E5>
   <E6>
      <SE61>SEV3</SE61>
      <SE62>SEV4</SE62>
   </E6>
</Root>
</pre>
<p><strong>the wanted result is produced</strong>:</p>
<pre>
<Root>
   <E1 a="1">V1</E1>
   <E2>V2</E2>
   <E5>
      <SE51>SEV1</SE51>
      <SE52>SEV2</SE52>
   </E5>
</Root>
</pre>
<p><strong>Do notice the following details</strong> of this solution:</p>
<ol>
<li>Templates are applied only to elements that have a matching node in the filter-document and also to all text nodes of such elements.</li>
<li>The template that matches an element is passed as parameter the corresponding node in the filter-document.</li>
<li>When applying templates to an element-child, its corresponding node is found and passed as the expected parameter.</li>
</ol>
<p>Do enjoy!</p>
|
<p>Hmmm, you're sort of talking about merging (assuming your filter doc is variable). There's a couple of possibilities which vary with the language you're implementing all of this in. Could you provide more info about the app?</p>
<p>Otherwise I suggest a quick google on "xslt +merge" and see if some result there grabs you.</p>
| 43,656
|
<p><em>Omitting details of methods to calculate primes, and methods of factorisation.</em></p>
<h3>Why bother to factorise?</h3>
<h3>What are its applications?</h3>
|
<p>Wow, so much fighting in this thread.</p>
<p>Ironically, this question HAS a major valid answer.</p>
<p>Factorization is actually used heavily in encryption/decryption algorithms, so much so that RSA regularly conducts competitions wherein the task is to factorize certain large numbers that are products of very large prime numbers.</p>
<p>This is, in turn, because several encryption/decryption algorithms are based on the premise that factorization takes a very long time, which (supposedly) makes it difficult and/or impractical to crack certain encryption/decryption algorithms given the assumption that the hacker/cracker does not have access to public/private keys.</p>
<p>Factorization algorithms can then be used to verify just how strong any given encryption/decryption algorithm is.</p>
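<p>To make the asymmetry concrete, here is a naive trial-division factoriser in Java (illustration only): multiplying two primes is instant, while undoing it this way takes time on the order of the square root of the number, which is hopeless at cryptographic sizes.</p>
<pre><code>import java.util.ArrayList;
import java.util.List;

// Naive trial division - fine for small numbers, utterly impractical
// for the several-hundred-digit products used in RSA.
public class Factorise {
    public static List<Long> factor(long n) {
        List<Long> factors = new ArrayList<Long>();
        for (long d = 2; d * d <= n; d++) {
            while (n % d == 0) {
                factors.add(d);
                n /= d;
            }
        }
        if (n > 1) factors.add(n); // whatever remains is prime
        return factors;
    }

    public static void main(String[] args) {
        System.out.println(factor(3L * 2147483647L)); // [3, 2147483647]
    }
}
</code></pre>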
|
<p>It can be used to crack some types of encryption (if the key was small enough).</p>
<p>You would also need it for some types of scientific software.</p>
<p>One more application is to answer <a href="http://projecteuler.net/" rel="nofollow noreferrer">ProjectEuler</a> Questions.</p>
| 30,044
|
<p>I normally don't work on Windows development, and am completely unfamiliar with the toolchain and build system. My embedded product includes some Windows DLLs from a third party in its filesystem (which are used by a Windows machine which mounts the filesystem).</p>
<p>I have a problem: the most recent release of these DLLs has tripled in size compared to previous builds, and they no longer fit in the filesystem. There have not been many changes in the functionality of the DLLs, so I suspect the developers simply forgot to strip debug symbols in this drop. I will ask them, but getting an answer often takes days due to timezone and language differences.</p>
<p>Could someone explain, using simple steps for someone unfamiliar with VisualC, how to determine if a DLL still contains debugging information and how to strip it out?</p>
|
<p>Generally the debug info itself is built as a separate <code>*.pdb</code> file (Program DataBase), instead of being appended onto the binary as in unix. If the developers did indeed build a debug version of the library, a more serious problem might be that of dependencies. If a release version of a binary links to <code>MSVCRT.DLL</code>, then the debug build would link to <code>MSVCRTD.DLL</code> (other runtime libraries are similarly named with the D suffix). To find the dependencies for a particular binary, try:</p>
<pre><code>dumpbin /imports whatever.dll
</code></pre>
<p>This will show all the runtime dependencies for the library <code>whatever.dll</code> (note that both library names and symbols from those libraries are listed). If you do not see the list of dependencies you expect, there is probably a problem that can only be fixed by having the original developer rebuild the library in the proper build mode.</p>
|
<p>Ignoring for the moment the other suggestions such as getting a release version, which is valid. The tool the developers would be looking for is actually <code>link.exe</code> from Visual Studio (or the SDK or WDK).</p>
<p>If they would like you to be able to make use of a debugger together with their code they could create public PDB files for you. The options they'd want to use are:</p>
<pre><code> /PDB:filename
/PDBSTRIPPED:filename
</code></pre>
<p>However, I'm afraid that you yourself can't do a lot about it. The PDB files themselves are separate files and <strong>debug information does not usually get included in binaries on modern MS compilers</strong> (though some RTTI stuff may get included, not to mention the file names and strings for <code>ASSERT</code> and similar macros and "functions" - which is the most likely explanation for the perceived bloat).</p>
<p>Note: <code>binplace.exe</code> from the WDK provides the same functionality as the above flags, but has a somewhat more convoluted (albeit fitting for the WDK build process) syntax.</p>
| 22,874
|
<p>Knuth <a href="http://www-cs-faculty.stanford.edu/~knuth/news08.html" rel="nofollow noreferrer">recently objected</a> to 64-bit systems, saying that for programs which fit in 4 gigs of memory, "they effectively throw away half of the cache" because the pointers are twice as big as on a 32-bit system.</p>
<p>My question is: can this problem be avoided by installing a 32-bit operating system on a 64-bit machine? And are there any bandwidth-intensive benchmarks which demonstrate the advantage in this case?</p>
|
<p>The answer is: yes it can to a certain extent, although the performance difference is unlikely to be great.</p>
<p>Any benchmark to test this will have to do a lot of pointer resolution, which will be difficult to separate out from the noise. Designing a benchmark that will not optimise away is difficult. <a href="http://www.ibm.com/developerworks/java/library/j-jtp02225.html" rel="nofollow noreferrer">This article about flawed java benchmarks</a> was posted by someone in response to another question, but many of the principles described in it will apply to this.</p>
|
<p>I've seen somewhere that the best mix (on x86 CPUs) is to use a 64-bit OS and 32-bit applications.</p>
<p>with a 64-bit OS you get:</p>
<ul>
<li>ability to handle more than 4GB of address space</li>
<li>more, bigger registers to help in data-copying operations</li>
</ul>
<p>with a 32-bit app you get:</p>
<ul>
<li>smaller pointers</li>
<li>less, smaller registers to save on context switches</li>
</ul>
<p>cons:</p>
<ul>
<li>all libraries must be duplicated. tiny by HD space standards.</li>
<li>all loaded libraries are duplicated on RAM. not so tiny...</li>
</ul>
<p>Surprisingly, there seems not to be any overhead when switching modes. I guess that breaking from userspace to kernel costs the same, no matter the bitness of the userspace.</p>
<p>Of course, there are some applications that benefit from a big address space. But for everything else, you can get an extra 5% performance by staying at 32-bit.</p>
<p>And no, I don't care about this small speedup. But it doesn't "offend" me to run 32-bit Firefox on a 64-bit Kubuntu machine (like I've seen on some forums).</p>
| 31,732
|
<p>I need to pass an ID and a password to a batch file at the time of running rather than hardcoding them into the file.</p>
<p>Here's what the command line looks like:</p>
<pre><code>test.cmd admin P@55w0rd > test-log.txt
</code></pre>
|
<p>Another useful tip is to use <code>%*</code> to mean "all". For example:</p>
<pre><code>echo off
set arg1=%1
set arg2=%2
shift
shift
fake-command /u %arg1% /p %arg2% %*
</code></pre>
<p>When you run:</p>
<pre><code>test-command admin password foo bar
</code></pre>
<p>The above batch file will run:</p>
<pre><code>fake-command /u admin /p password admin password foo bar
</code></pre>
<p>I may have the syntax slightly wrong, but this is the general idea.</p>
|
<p>If you're worried about security/password theft (which is what led you to design a solution that takes login credentials at execution time instead of hard-coding them, without the need for a database), then you could store the API, or half of the password decryption code, or the decryption key in the program file. At run time the user would type the username/password at the console (read via <code>set /p</code>) to be hashed/decrypted before being passed to the program code for execution.</p>
<p>If you're running a script to run your program with various users/passwords, then command-line args will suit you.</p>
<p>If you're making a test file to see the output/effects of different logins, then you could store all the logins in an encrypted file to be passed as an arg to test.cmd - unless you want to sit at the command line and type all the logins until finished.</p>
<p>The number of args that can be supplied is <a href="https://learn.microsoft.com/en-us/troubleshoot/windows-client/shell-experience/command-line-string-limitation" rel="nofollow noreferrer">limited by the total characters on the command line</a>. To overcome this limitation, the trick in the previous paragraph is a workaround that avoids risking exposure of user passwords.</p>
| 4,610
|
<p>I'm using jQuery UI's draggable and droppable libraries in a simple ASP.NET proof of concept application. This page uses the ASP.NET AJAX UpdatePanel to do partial page updates. The page allows a user to drop an item into a trashcan div, which will invoke a postback that deletes a record from the database, then rebinds the list (and other controls) that the item was drug from. All of these elements (the draggable items and the trashcan div) are inside an ASP.NET UpdatePanel.</p>
<p>Here is the dragging and dropping initialization script:</p>
<pre><code> function initDragging()
{
$(".person").draggable({helper:'clone'});
$("#trashcan").droppable({
accept: '.person',
tolerance: 'pointer',
hoverClass: 'trashcan-hover',
activeClass: 'trashcan-active',
drop: onTrashCanned
});
}
$(document).ready(function(){
initDragging();
var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.add_endRequest(function()
{
initDragging();
});
});
function onTrashCanned(e,ui)
{
var id = $('input[id$=hidID]', ui.draggable).val();
if (id != undefined)
{
$('#hidTrashcanID').val(id);
__doPostBack('btnTrashcan','');
}
}
</code></pre>
<p>When the page posts back, partially updating the UpdatePanel's content, I rebind the draggables and droppables. When I then grab a draggable with my cursor, I get an "htmlfile: Unspecified error." exception. I can resolve this problem in the jQuery library by replacing <code>elem.offsetParent</code> with calls to this function that I wrote:</p>
<pre><code>function IESafeOffsetParent(elem)
{
try
{
return elem.offsetParent;
}
catch(e)
{
return document.body;
}
}
</code></pre>
<p>I also have to avoid calls to elem.getBoundingClientRect() as it throws the same error. For those interested, I only had to make these changes in the <code>jQuery.fn.offset</code> function in the <a href="http://plugins.jquery.com/project/dimensions" rel="nofollow noreferrer">Dimensions Plugin</a>.</p>
<p>My questions are: </p>
<ul>
<li>Although this works, are there better ways (cleaner; better performance; without having to modify the jQuery library) to solve this problem?</li>
<li>If not, what's the best way to manage keeping my changes in sync when I update the jQuery libraries in the future? For, example can I extend the library somewhere other than just inline in the files that I download from the jQuery website.</li>
</ul>
<p><b>Update:</b></p>
<p>@some It's not publicly accessible, but I will see if SO will let me post the relevant code into this answer. Just create an ASP.NET Web Application (name it <b>DragAndDrop</b>) and create these files. Don't forget to set Complex.aspx as your start page. You'll also need to download the <a href="http://ui.jquery.com/download_builder/" rel="nofollow noreferrer">jQuery UI drag and drop plug in</a> as well as <a href="http://code.google.com/p/jqueryjs/downloads/detail?name=jquery-1.2.6.js" rel="nofollow noreferrer">jQuery core</a></p>
<p><b>Complex.aspx</b></p>
<pre><code><%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Complex.aspx.cs" Inherits="DragAndDrop.Complex" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title>Untitled Page</title>
<script src="jquery-1.2.6.min.js" type="text/javascript"></script>
<script src="jquery-ui-personalized-1.5.3.min.js" type="text/javascript"></script>
<script type="text/javascript">
function initDragging()
{
$(".person").draggable({helper:'clone'});
$("#trashcan").droppable({
accept: '.person',
tolerance: 'pointer',
hoverClass: 'trashcan-hover',
activeClass: 'trashcan-active',
drop: onTrashCanned
});
}
$(document).ready(function(){
initDragging();
var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.add_endRequest(function()
{
initDragging();
});
});
function onTrashCanned(e,ui)
{
var id = $('input[id$=hidID]', ui.draggable).val();
if (id != undefined)
{
$('#hidTrashcanID').val(id);
__doPostBack('btnTrashcan','');
}
}
</script>
</head>
<body>
<form id="form1" runat="server">
<asp:ScriptManager ID="ScriptManager1" runat="server">
</asp:ScriptManager>
<div>
<asp:UpdatePanel ID="updContent" runat="server" UpdateMode="Always">
<ContentTemplate>
<asp:LinkButton ID="btnTrashcan" Text="trashcan" runat="server" CommandName="trashcan"
onclick="btnTrashcan_Click" style="display:none;"></asp:LinkButton>
<input type="hidden" id="hidTrashcanID" runat="server" />
<asp:Button ID="Button1" runat="server" Text="Save" onclick="Button1_Click" />
<table>
<tr>
<td style="width: 300px;">
<asp:DataList ID="lstAllPeople" runat="server" DataSourceID="odsAllPeople"
DataKeyField="ID">
<ItemTemplate>
<div class="person">
<asp:HiddenField ID="hidID" runat="server" Value='<%# Eval("ID") %>' />
Name:
<asp:Label ID="lblName" runat="server" Text='<%# Eval("Name") %>' />
<br />
<br />
</div>
</ItemTemplate>
</asp:DataList>
<asp:ObjectDataSource ID="odsAllPeople" runat="server" SelectMethod="SelectAllPeople"
TypeName="DragAndDrop.Complex+DataAccess"
onselecting="odsAllPeople_Selecting">
<SelectParameters>
<asp:Parameter Name="filter" Type="Object" />
</SelectParameters>
</asp:ObjectDataSource>
</td>
<td style="width: 300px;vertical-align:top;">
<div id="trashcan">
drop here to delete
</div>
<asp:DataList ID="lstPeopleToDelete" runat="server"
DataSourceID="odsPeopleToDelete">
<ItemTemplate>
ID:
<asp:Label ID="IDLabel" runat="server" Text='<%# Eval("ID") %>' />
<br />
Name:
<asp:Label ID="NameLabel" runat="server" Text='<%# Eval("Name") %>' />
<br />
<br />
</ItemTemplate>
</asp:DataList>
<asp:ObjectDataSource ID="odsPeopleToDelete" runat="server"
onselecting="odsPeopleToDelete_Selecting" SelectMethod="GetDeleteList"
TypeName="DragAndDrop.Complex+DataAccess">
<SelectParameters>
<asp:Parameter Name="list" Type="Object" />
</SelectParameters>
</asp:ObjectDataSource>
</td>
</tr>
</table>
</ContentTemplate>
</asp:UpdatePanel>
</div>
</form>
</body>
</html>
</code></pre>
<p><b>Complex.aspx.cs</b></p>
<pre><code>namespace DragAndDrop
{
public partial class Complex : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
protected List<int> DeleteList
{
get
{
if (ViewState["dl"] == null)
{
List<int> dl = new List<int>();
ViewState["dl"] = dl;
return dl;
}
else
{
return (List<int>)ViewState["dl"];
}
}
}
public class DataAccess
{
public IEnumerable<Person> SelectAllPeople(IEnumerable<int> filter)
{
return Database.SelectAll().Where(p => !filter.Contains(p.ID));
}
public IEnumerable<Person> GetDeleteList(IEnumerable<int> list)
{
return Database.SelectAll().Where(p => list.Contains(p.ID));
}
}
protected void odsAllPeople_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
{
e.InputParameters["filter"] = this.DeleteList;
}
protected void odsPeopleToDelete_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
{
e.InputParameters["list"] = this.DeleteList;
}
protected void Button1_Click(object sender, EventArgs e)
{
foreach (int id in DeleteList)
{
Database.DeletePerson(id);
}
DeleteList.Clear();
lstAllPeople.DataBind();
lstPeopleToDelete.DataBind();
}
protected void btnTrashcan_Click(object sender, EventArgs e)
{
int id = int.Parse(hidTrashcanID.Value);
DeleteList.Add(id);
lstAllPeople.DataBind();
lstPeopleToDelete.DataBind();
}
}
}
</code></pre>
<p><b>Database.cs</b></p>
<pre><code>namespace DragAndDrop
{
public static class Database
{
private static Dictionary<int, Person> _people = new Dictionary<int,Person>();
static Database()
{
Person[] people = new Person[]
{
new Person("Chad")
, new Person("Carrie")
, new Person("Richard")
, new Person("Ron")
};
foreach (Person p in people)
{
_people.Add(p.ID, p);
}
}
public static IEnumerable<Person> SelectAll()
{
return _people.Values;
}
public static void DeletePerson(int id)
{
if (_people.ContainsKey(id))
{
_people.Remove(id);
}
}
public static Person CreatePerson(string name)
{
Person p = new Person(name);
_people.Add(p.ID, p);
return p;
}
}
public class Person
{
private static int _curID = 1;
public int ID { get; set; }
public string Name { get; set; }
public Person()
{
ID = _curID++;
}
public Person(string name)
: this()
{
Name = name;
}
}
}
</code></pre>
|
<p>@arilanto - I include this script after my jQuery scripts. Performance-wise it's not the best solution, but it is a quick, easy workaround.</p>
<pre><code>function IESafeOffsetParent(elem)
{
try
{
return elem.offsetParent;
}
catch(e)
{
return document.body;
}
}
// The Offset Method
// Originally By Brandon Aaron, part of the Dimension Plugin
// http://jquery.com/plugins/project/dimensions
jQuery.fn.offset = function() {
/// <summary>
/// Gets the current offset of the first matched element relative to the viewport.
/// </summary>
/// <returns type="Object">An object with two Integer properties, 'top' and 'left'.</returns>
var left = 0, top = 0, elem = this[0], results;
if ( elem ) with ( jQuery.browser ) {
var parent = elem.parentNode,
offsetChild = elem,
offsetParent = IESafeOffsetParent(elem),
doc = elem.ownerDocument,
safari2 = safari && parseInt(version) < 522 && !/adobeair/i.test(userAgent),
css = jQuery.curCSS,
fixed = css(elem, "position") == "fixed";
// Use getBoundingClientRect if available
if (false && elem.getBoundingClientRect) {
var box = elem.getBoundingClientRect();
// Add the document scroll offsets
add(box.left + Math.max(doc.documentElement.scrollLeft, doc.body.scrollLeft),
box.top + Math.max(doc.documentElement.scrollTop, doc.body.scrollTop));
// IE adds the HTML element's border, by default it is medium which is 2px
// IE 6 and 7 quirks mode the border width is overwritable by the following css html { border: 0; }
// IE 7 standards mode, the border is always 2px
// This border/offset is typically represented by the clientLeft and clientTop properties
// However, in IE6 and 7 quirks mode the clientLeft and clientTop properties are not updated when overwriting it via CSS
// Therefore this method will be off by 2px in IE while in quirksmode
add( -doc.documentElement.clientLeft, -doc.documentElement.clientTop );
// Otherwise loop through the offsetParents and parentNodes
} else {
// Initial element offsets
add( elem.offsetLeft, elem.offsetTop );
// Get parent offsets
while ( offsetParent ) {
// Add offsetParent offsets
add( offsetParent.offsetLeft, offsetParent.offsetTop );
// Mozilla and Safari > 2 does not include the border on offset parents
// However Mozilla adds the border for table or table cells
if ( mozilla && !/^t(able|d|h)$/i.test(offsetParent.tagName) || safari && !safari2 )
border( offsetParent );
// Add the document scroll offsets if position is fixed on any offsetParent
if ( !fixed && css(offsetParent, "position") == "fixed" )
fixed = true;
// Set offsetChild to previous offsetParent unless it is the body element
offsetChild = /^body$/i.test(offsetParent.tagName) ? offsetChild : offsetParent;
// Get next offsetParent
offsetParent = offsetParent.offsetParent;
}
// Get parent scroll offsets
while ( parent && parent.tagName && !/^body|html$/i.test(parent.tagName) ) {
// Remove parent scroll UNLESS that parent is inline or a table to work around Opera inline/table scrollLeft/Top bug
if ( !/^inline|table.*$/i.test(css(parent, "display")) )
// Subtract parent scroll offsets
add( -parent.scrollLeft, -parent.scrollTop );
// Mozilla does not add the border for a parent that has overflow != visible
if ( mozilla && css(parent, "overflow") != "visible" )
border( parent );
// Get next parent
parent = parent.parentNode;
}
// Safari <= 2 doubles body offsets with a fixed position element/offsetParent or absolutely positioned offsetChild
// Mozilla doubles body offsets with a non-absolutely positioned offsetChild
if ( (safari2 && (fixed || css(offsetChild, "position") == "absolute")) ||
(mozilla && css(offsetChild, "position") != "absolute") )
add( -doc.body.offsetLeft, -doc.body.offsetTop );
// Add the document scroll offsets if position is fixed
if ( fixed )
add(Math.max(doc.documentElement.scrollLeft, doc.body.scrollLeft),
Math.max(doc.documentElement.scrollTop, doc.body.scrollTop));
}
// Return an object with top and left properties
results = { top: top, left: left };
}
function border(elem) {
/// <summary>
/// This method is internal.
/// </summary>
/// <private />
add( jQuery.curCSS(elem, "borderLeftWidth", true), jQuery.curCSS(elem, "borderTopWidth", true) );
}
function add(l, t) {
/// <summary>
/// This method is internal.
/// </summary>
/// <private />
left += parseInt(l, 10) || 0;
top += parseInt(t, 10) || 0;
}
return results;
};
</code></pre>
|
<p>I tried the following workaround for the <strong>getBoundingClientRect()</strong> unspecified error during drag and drop, and it works fine.</p>
<p>In jquery-1.4.2.js (<em>i.e. the base jQuery file, where the error is actually thrown</em>), replace the <strong>elem.getBoundingClientRect()</strong> call:</p>
<pre><code>// the line which throws the unspecified error
var box = elem.getBoundingClientRect(),
</code></pre>
<p>with this:</p>
<pre><code>var box = null;
try
{
    box = elem.getBoundingClientRect();
}
catch(e)
{
    box = { top : elem.offsetTop, left : elem.offsetLeft };
}
</code></pre>
<p>This solves the issue, and drag and drop will work quietly even after a postback through the UpdatePanel.</p>
<p>Regards</p>
<p>Raghu</p>
| 48,547
|
<p>I'm using VisualSVN Server to host an SVN repo, and for some automation work, I'd like to be able to get specific versions via the http[s] layer.</p>
<p>I can get the HEAD version simply via an http[s] request to the server (httpd?) - but is there any ability to specify the revision, perhaps as a query-string? I can't seem to find it...</p>
<p>I don't want to do a checkout unless I can help it, as there are a lot of files in the specific folder, and I don't want them all - just one or two.</p>
|
<p>Better late than never;
<a href="https://entire/Path/To/Folder/file/?p=REV" rel="noreferrer">https://entire/Path/To/Folder/file/?p=REV</a></p>
<p>?p=REV specifies the revision.</p>
|
<p>Subversion does not publicly document the Uris it uses internally to access that information. (And where it is documented, it is explicitly stated that this can change in future versions)</p>
<p>To access this information on the web you could use a web viewer (e.g. <a href="http://websvn.tigris.org" rel="nofollow noreferrer">websvn</a>, <a href="http://viewvc.org" rel="nofollow noreferrer">viewvc</a>).</p>
<p>If you want to access it from your own program you could also use a client binding like SharpSvn.</p>
<pre><code>using (SvnClient client = new SvnClient())
using (FileStream fs = File.Create("c:\\temp\\file.txt"))
{
// Perform svn cat http://svn.collab.net/svn/repos/trunk/COMMITTERS -r 23456
// > file.txt
SvnCatArgs a = new SvnCatArgs();
a.Revision = 23456;
client.Cat(new Uri("http://svn.collab.net/svn/repos/trunk/COMMITTERS"), a, fs);
}
</code></pre>
<p>[Update 2008-12-31: One of the next few versions of Subversion will start documenting public urls you can use for retrieving old versions.]</p>
| 19,062
|
<p>I am trying to unlock a file from a C# program, using unlocker.</p>
<p>In my UI, I put a button to unlock the file the app couldn't delete. When the user pushes the button, I want unlocker (the famous app) to be opened.</p>
<p>I have read about it on the Unlocker website, and there are some explanations about the command line to use, but nothing works.</p>
<p>I ran the following command:</p>
<p>"c:\Program Files\unlocker\unlocker.exe" -L "PATHFORTHEFILE.doc"</p>
<p>Nothing happens. I have tried without parameters and with -LU.</p>
<p>Any idea?</p>
<p>Something more efficient than unlocker to integrate it with software?</p>
|
<p>If unlocker comes with parameters -L and -U, I don't think L would be the one you want to unlock with. Probably U is for unlocking ;)</p>
<p>If you have any control of the application that is locking the file, it would be a better solution to have that program free the file rather than a third party app rip it away like this.</p>
|
<p>Look at the documentation for the <a href="http://msdn.microsoft.com/en-us/library/system.diagnostics.process.aspx" rel="nofollow noreferrer">System.Diagnostics.Process</a> class and the related ProcessStartInfo class.</p>
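<p>A minimal sketch of launching it from C# with <code>Process</code>/<code>ProcessStartInfo</code> (the Unlocker path and the <code>-L</code> switch are copied from the question, not verified - check Unlocker's own documentation for the correct switch):</p>
<pre><code>using System.Diagnostics;

ProcessStartInfo psi = new ProcessStartInfo();
psi.FileName = @"c:\Program Files\unlocker\unlocker.exe";
psi.Arguments = "-L \"PATHFORTHEFILE.doc\"";
psi.UseShellExecute = false;

using (Process p = Process.Start(psi))
{
    p.WaitForExit(); // block until Unlocker exits, if that suits your UI
}
</code></pre>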
| 42,458
|
<p>Other than pencil & paper?
I found freemind, the mapping tool very useful.
Any other ideas?</p>
|
<p>Personally I like physical paper notebooks and pencils to workout ideas :-), but some times I use <a href="http://freemind.sourceforge.net/" rel="nofollow noreferrer">FreeMind</a>.</p>
<p>Check this list of <a href="http://en.wikipedia.org/wiki/List_of_mind_mapping_software" rel="nofollow noreferrer">Mind Mapping Software</a>.</p>
|
<p>A whiteboard and a camera to capture what's on the whiteboard quickly, to be able to move on to the next thing.</p>
| 44,601
|
<p>Since both a <code>Table Scan</code> and a <code>Clustered Index Scan</code> essentially scan all records in the table, why is a Clustered Index Scan supposedly better?</p>
<p>As an example - what's the performance difference between the following when there are many records?:</p>
<pre><code>declare @temp table(
SomeColumn varchar(50)
)
insert into @temp
select 'SomeVal'
select * from @temp
-----------------------------
declare @temp table(
RowID int not null identity(1,1) primary key,
SomeColumn varchar(50)
)
insert into @temp
select 'SomeVal'
select * from @temp
</code></pre>
|
<p>In a table without a clustered index (a heap table), data pages are not linked together - so traversing pages requires a <a href="http://msdn.microsoft.com/en-us/library/ms188270.aspx" rel="noreferrer">lookup into the Index Allocation Map</a>.</p>
<p>A clustered table, however, has its <a href="http://msdn.microsoft.com/en-us/library/ms177443.aspx" rel="noreferrer">data pages linked in a doubly linked list</a> - making sequential scans a bit faster. Of course, in exchange, you have the overhead of dealing with keeping the data pages in order on <code>INSERT</code>, <code>UPDATE</code>, and <code>DELETE</code>. A heap table, however, requires a second write to the IAM.</p>
<p>If your query has a <code>RANGE</code> operator (e.g.: <code>SELECT * FROM TABLE WHERE Id BETWEEN 1 AND 100</code>), then a clustered table (being in a guaranteed order) would be more efficient - as it could use the index pages to find the relevant data page(s). A heap would have to scan all rows, since it cannot rely on ordering.</p>
<p>And, of course, a clustered index lets you do a CLUSTERED INDEX SEEK, which is pretty much optimal for performance...a heap with no indexes would always result in a table scan.</p>
<p>So:</p>
<ul>
<li><p>For your example query where you select all rows, the only difference is the doubly linked list a clustered index maintains. This should make your clustered table just a tiny bit faster than a heap with a large number of rows.</p></li>
<li><p>For a query with a <code>WHERE</code> clause that can be (at least partially) satisfied by the clustered index, you'll come out ahead because of the ordering - so you won't have to scan the entire table.</p></li>
<li><p>For a query that is not satisfied by the clustered index, you're pretty much even...again, the only difference being that doubly linked list for sequential scanning. In either case, you're suboptimal.</p></li>
<li><p>For <code>INSERT</code>, <code>UPDATE</code>, and <code>DELETE</code> a heap may or may not win. The heap doesn't have to maintain order, but does require a second write to the IAM. I think the relative performance difference would be negligible, but also pretty data dependent.</p></li>
</ul>
<p>Microsoft has a <a href="http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/clusivsh.mspx" rel="noreferrer">whitepaper</a> which compares a clustered index to an equivalent non-clustered index on a heap (not exactly the same as I discussed above, but close). Their conclusion is basically to put a clustered index on all tables. I'll do my best to summarize their results (again, note that they're really comparing a non-clustered index to a clustered index here - but I think it's relatively comparable):</p>
<ul>
<li><code>INSERT</code> performance: clustered index wins by about 3% due to the second write needed for a heap.</li>
<li><code>UPDATE</code> performance: clustered index wins by about 8% due to the second lookup needed for a heap.</li>
<li><code>DELETE</code> performance: clustered index wins by about 18% due to the second lookup needed and the second delete needed from the IAM for a heap.</li>
<li>single <code>SELECT</code> performance: clustered index wins by about 16% due to the second lookup needed for a heap.</li>
<li>range <code>SELECT</code> performance: clustered index wins by about 29% due to the random ordering for a heap.</li>
<li>concurrent <code>INSERT</code>: heap table wins by 30% under load due to page splits for the clustered index.</li>
</ul>
|
<p>A table scan has to examine every single row of the table. The clustered index scan only needs to scan the index. It doesn't scan every record in the table. That's the point, really, of indices.</p>
| 3,930
|
<p>I have been working on a web project (ASP.NET) for around six months. The final product is about to go live. The project uses SQL Server as the database. We have done performance testing with some large volumes of data; the results show that performance degrades when the data becomes too large, say 2 million rows (timeout issues, delayed responses, etc). At first we were using a fully normalized database, but now we have made it partially normalized due to performance issues (to reduce joins). First of all, is that the right decision?
Plus, what are the possible solutions when the data size becomes very large, as the number of clients increases in future?</p>
<p>I would like to add further:</p>
<ul>
<li>2 million rows are entity tables, tables resolving the relations have much larger rows.</li>
<li>Performance degrades when data + no. of users increases. </li>
<li>Denormalization was done after identifying the heavily used queries.</li>
<li>We are also using some heavy amount of xml columns and xquery. Can this be the cause?</li>
<li>A bit off topic: some folks on my project say that a dynamic SQL query is faster than a stored procedure approach. They have done some kind of performance testing to prove their point. I think the opposite is true. Some of the heavily used queries are dynamically created, whereas most of the other queries are encapsulated in stored procedures.</li>
</ul>
|
<p>In the scheme of things, a few million rows is not a particulary large Database.</p>
<p>Assuming we are talking about an OLTP database, denormalising without first identifying the root cause of your bottlenecks is a <strong>very, very bad idea</strong>. </p>
<p>The first thing you need to do is profile your query workload over a representative time period to identify where most of the work is being done (for instance, using SQL Profiler, if you are using SQL Server). Look at the number of logical reads a query performs multiplied by the number of times executed. Once you have identified the top ten worst performing queries, you need to examine the query execution plans in detail.</p>
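<p>On SQL Server 2005 you can also get a similar "logical reads times executions" ranking straight from the DMVs instead of Profiler; a rough sketch (just an alternative way to find the same candidates):</p>
<pre><code>-- Worst offenders by total logical reads across all executions of each cached statement
SELECT TOP 10
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
</code></pre>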
<p>I'm going to go out on a limb here (because it is usually the case), but I would be surprised if your problem is not either</p>
<ol>
<li>Absence of the 'right' covering indexes for the costly queries</li>
<li>Poorly configured or under specified disk subsystem</li>
</ol>
<p>This <a href="https://stackoverflow.com/questions/257906/ms-sql-server-2008-how-can-i-log-and-find-the-most-expensive-queries#257944">SO answer</a> describes how to profile to find the worst performing queries in a workload.</p>
|
<p>We've always tried to develop using a database that is as close to the "real world" as possible. That way you avoid a lot of gotchas like this one, since any ol' developer would go mental if his connection kept timing out during debugging. The best way to debug SQL performance problems IMO is what Mitch Wheat suggests: profile to find the offending scripts and start with them. Optimizing scripts can take you far, and then you need to look at indexes. Also make sure that your SQL Server has enough horsepower; IO (disk) is especially important. And don't forget: cache is king. Memory is cheap; buy more. :)</p>
| 20,048
|
<p>I'd like to convert a Parallels Virtual Machine image on my mac into an image usable by Virtual PC 2007. Does anyone know how to do that, or if it is possible?</p>
|
<p>It looks like qemu-img from <a href="http://bellard.org/qemu/" rel="nofollow noreferrer">qemu</a> can do this, at least looking at its commandline help on a Ubuntu 8.04 machine where it claims support for, among others, the "parallels" and the "vpc" format.</p>
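<p>An untested sketch of what the conversion command would look like (the file names are made up; the format names are the ones listed in that help output):</p>
<pre><code>qemu-img convert -f parallels harddisk.hdd -O vpc converted.vhd
</code></pre>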
<p>Have not tried myself, though. Hope this helps.</p>
|
<p>If it's a Windows image, I would mount the VM using a tool like <a href="http://www.prowesscorp.com/support/help/smartdeploy_vdc/Welcome_to_SmartVDK_v1.0.htm" rel="nofollow noreferrer">SmartVDK</a>, then capture the VM with ImageX to a WIM file. You can then mount a blank VHD with SmartVDK and apply the image using ImageX /APPLY.</p>
<p>The qemu-img tool is better if you're performing the conversion on a Mac or Linux machine.</p>
<p>Keep in mind that you will probably encounter difficulties booting the drive if the drive serials have changed. Also, the hardware will be different. It is often better to build a new image and then to mount the converted drive, copying over anything else you need.</p>
| 8,998
|
<p>The market is flooded with VPS (virtual private server) hosting options. It seems everyone and their mother has a overloaded server in his/her closet. Enterprise options always seem priced insanely high, which make the ones that are cheap and claim enterprise level seem shaky.</p>
<p>What do you look for in a quality VPS provider (language support, 24/hr tech, etc), and how if at all do you check their credibility?</p>
|
<p>Most virtual hosting platforms will have a trial period in which you can test out their reliability. They will also give you a list of their high profile sites on their systems. Most keep track of the traffic hogs as it's a great way for them to attest their own stability.</p>
<p>I would recommend <a href="http://www.slicehost.com/" rel="nofollow noreferrer">Slicehost</a> as I have been with them for over a year and love the control. They have an amazing panel in which you can console in, rebuild slices, and restart slices in an instance. They also allow a VERY fast and painless memory upgrade, bandwidth pooling (taking all of your accounts bandwidth into one large pool), and they allow lots of different Linux kernel OSes.</p>
<p>So to answer your question without sounding like a complete advertisement:</p>
<ol>
<li>Check about their remote capabilities to manage your VPS.</li>
<li>Check out their largest clients and some big sites on their systems.</li>
<li>Test out their VPS for 30 days or so and give their support a test!</li>
<li>Check out forums where people talk about services (like this thread mentioning Slicehost 3 times already).</li>
<li>Check out places and make sure people aren't complaining of overselling or crowding out servers. I know in a VPS world, things are sandboxed a lot more than shared hosts, but it's still nice to know they can handle loads.</li>
<li>Check out the abilities to move servers or add more memory to your VPS.</li>
</ol>
<p>Those are things that I look for.</p>
|
<p>I've tried quite a few of them. The only one that I can recommend wholeheartedly is <a href="http://www.slicehost.com/" rel="nofollow noreferrer">Slicehost</a>. They are incredibly good at what they do. I have many clients running on their systems.</p>
| 13,487
|
<p>I've got menu items that look like this</p>
<pre><code><ul>
<li>Item1<span class="context-trigger"></span></li>
<li>Item2<span class="context-trigger"></span></li>
<li>Item3<span class="context-trigger"></span></li>
</ul>
</code></pre>
<p>with CSS that turns the above into a horizontal menu, and JS that turns the [spans] into buttons that bring up contextual menus. Vaguely like this:</p>
<pre>
Item1^ Item2^ Item3^
</pre>
<p>If the menu gets too wide for the browser width, it wraps, which is what I want. The problem is that sometimes it's putting in line-breaks before the [spans]. I only want it to break between [li]s. Any ideas?</p>
|
<p>try using </p>
<pre><code>white-space: nowrap;
</code></pre>
<p>in the css definition of your context-trigger class.</p>
<p>Edit: I think patmortech is correct though; putting nowrap on the span does not work, because there is no "white space" content. Sticking the style on the LI element might not work either, because the browser might break up the parts since the span is a nested element in the li. You might reconsider your code, drop the SPAN element and use CSS on the LI elements.</p>
|
<p>If you float the <code><li></code> elements, you should get the effect you want.</p>
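<p>A minimal sketch of the float approach (the selectors assume the markup from the question):</p>
<pre><code>ul li {
    float: left;          /* each item is its own block, so wrapping only happens between items */
    white-space: nowrap;  /* keep the label and its trigger span on one line */
    margin-right: 1em;
}
</code></pre>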
| 32,802
|
<p>We have a system that is concurrently inserted a large amount of data from multiple stations while also exposing a data querying interface. The schema looks something like this (sorry about the poor formatting):</p>
<pre><code>[SyncTable]
SyncID
StationID
MeasuringTime
[DataTypeTable]
TypeID
TypeName
[DataTable]
SyncID
TypeID
DataColumns...
</code></pre>
<p>Data insertion is done in a "Synchronization" and goes like this (we only insert data into the system, we never update)</p>
<pre><code>INSERT INTO SyncTable(StationID, MeasuringTime) VALUES (X,Y); SELECT @@IDENTITY
INSERT INTO DataTable(SyncID, TypeID, DataColumns) VALUES
(SyncIDJustInserted, InMemoryCachedTypeID, Data)
... lots (500) similar inserts into DataTable ...
</code></pre>
<p>And queries goes like this ( for a given station, measuringtime and datatype)</p>
<pre><code>SELECT SyncID FROM SyncTable WHERE StationID = @StationID
AND MeasuringTime = @MeasuringTime
SELECT DataColumns FROM DataTable WHERE SyncID = @SyncIDJustSelected
AND DataTypeID = @TypeID
</code></pre>
<p>My question is how can we combine the transaction level on the inserts and NOLOCK/READPAST hints on the queries so that:</p>
<ol>
<li>We maximize the concurrency in our system while favoring the inserts (we need to store a lot of data, something as high as 2000+ records a second)</li>
<li>Queries only return data from "commited" synchronization (we don't want a result set with a half inserted synchronization or a synchronization with some skipped entries due to lock-skipping)</li>
<li>We don't care if the "newest" data is included in the query, we care more for consistency and responsiveness then for "live" and up-to-date data</li>
</ol>
<p>This may be very conflicting goals and may require a high transaction isolation level but I am interested in all tricks and optimizations to achieve high responsiveness on both inserts and selects. I'll be happy to elaborate if more details are needed to flush out more tweaks and tricks.</p>
<p>UPDATE: Just adding a bit more information for future replies. We are running SQL Server 2005 (2008 within six months probably) on a SAN network with 5+ TB of storage initially. I'm not sure what kind of RAID the SAn is set up to and precisely how many disks we have available.</p>
|
<ol>
<li><p>What type of disk system will you be using? If you have a large striped RAID array, writes should perform well. If you can estimate your required reads and writes per second, you can plug those numbers into a formula and see if your disk subsystem will keep up. Maybe you have no control over hardware...</p></li>
<li><p>Wouldn't you wrap the inserts in a transaction, which would make them unavailable to the reads until the insert is finished? (See the sketch just after this list.)</p></li>
<li><p>This should follow if your hardware is configured correctly and you're paying attention to your SQL coding - which it seems you are.</p></li>
</ol>
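<p>On point 2, a minimal sketch of wrapping one synchronization in a transaction (the table and column names are the placeholders from the question):</p>
<pre><code>BEGIN TRANSACTION;

INSERT INTO SyncTable (StationID, MeasuringTime) VALUES (@StationID, @MeasuringTime);

DECLARE @SyncID int;
SET @SyncID = SCOPE_IDENTITY();

INSERT INTO DataTable (SyncID, TypeID, DataColumns) VALUES (@SyncID, @TypeID, @Data);
-- ... the remaining ~500 DataTable inserts for this synchronization ...

COMMIT TRANSACTION;
</code></pre>
<p>Readers running at the default READ COMMITTED level will then never see a half-inserted synchronization, at the cost of briefly blocking behind the writer.</p>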
<p>Look into SQLIO.exe and SQL Stress tools:</p>
<p>SQLIOStress.exe
SQLIOStress.exe simulates various patterns of SQL Server 2000 I/O behavior to ensure rudimentary I/O safety.</p>
<p>The SQLIOStress utility can be downloaded from the Microsoft Web site. See the following article.</p>
<p>• How to Use the SQLIOStress Utility to Stress a Disk Subsystem such as SQL Server
<a href="http://support.microsoft.com/default.aspx?scid=kb;en-us;231619" rel="nofollow noreferrer">http://support.microsoft.com/default.aspx?scid=kb;en-us;231619</a></p>
<p>Important The download contains a complete white paper with extended details about the utility.</p>
<p>SQLIO.exe
SQLIO.exe is a SQL Server 2000 I/O utility used to establish basic benchmark testing results.</p>
<p>The SQLIO utility can be downloaded from the Microsoft Web site. See the following:
• SQLIO Performance Testing Tool (SQL Development) – Customer Available
<a href="http://download.microsoft.com/download/f/3/f/f3f92f8b-b24e-4c2e-9e86-d66df1f6f83b/SQLIO.msi" rel="nofollow noreferrer">http://download.microsoft.com/download/f/3/f/f3f92f8b-b24e-4c2e-9e86-d66df1f6f83b/SQLIO.msi</a></p>
| 36,528
|
<p>We have an intranet site backed by SVN, such that the site is a checkout out copy of the repository (working folder used only by IIS). Something on the site has been causing problems today, and I want to know how to find out what was checked out to that working folder in the last 48 hours.</p>
<p><strong>Update:</strong> If there's an option I need to turn on to enable this in the future, what is it?</p>
<p>Also, as a corollary question, if I have to use the file creation time, how can I do that quickly in a recursive manner for a large folder?</p>
<hr>
<p>If I have to check creation times, then <a href="https://stackoverflow.com/questions/56682/how-to-see-if-a-subfile-of-a-directory-has-changed">this question</a> will be helpful to the solution as well.</p>
|
<p>All the code in the web folder should be backed by SVN commits, shouldn't it?</p>
<p>If this is the case you should easily be able to track the problem down just by looking through your SVN logs at the last few changes that got committed.
<code>svn info</code> will tell you which revision the working copy currently is at, so you know where to start looking</p>
<p>Once you track down the commit with the bug in it, you can use <code>svn blame</code> to find the person that did it, and explain to them what they overlooked and how they caused the bug. Then you can make them buy everyone lunch for screwing up the site.</p>
<p>If you have locally modified/added any files which aren't in SVN, then <code>svn stat</code> and <code>svn diff</code> will show you what those changes are, so you can figure out if they are causing the problem too. You should then revert those changes so your working copy is a clean checkout, or commit the changes into the repository. </p>
<p>There's nothing worse than trying to track down a bug in your code only to find out 3 hours later that the bug is not actually in any of your code, but in some stupid local tweak someone made in the working copy that never got committed :-(</p>
|
<p>I would run a <em>svn st</em> in the web folder (to find any files that are changed since the checkout) and compare that to the repository.</p>
| 7,882
|
<p>Can anyone give an idea of how should I implement undo/redo of cutting/copying/pasting of files (dirs, subdirs) mapped in a treeview in C#?</p>
<p>It would be great to have some code samples.</p>
|
<p>Undo / redo is typically implemented using the so-called "command pattern". Search with Google or read the following article:</p>
<blockquote>
<p><a href="http://blogs.vbcity.com/jspano/articles/198.aspx" rel="noreferrer">http://blogs.vbcity.com/jspano/articles/198.aspx</a></p>
</blockquote>
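<p>Since the question asks for code samples: a bare-bones sketch of the command pattern applied to file operations. All the names here (<code>IFileCommand</code>, <code>MoveFileCommand</code>, <code>CommandHistory</code>) are made up for illustration, not from any library:</p>
<pre><code>using System.Collections.Generic;
using System.IO;

public interface IFileCommand
{
    void Execute();
    void Undo();
}

// One concrete command: moving a file (cut/paste); copy and delete work the same way.
public class MoveFileCommand : IFileCommand
{
    private readonly string _source;
    private readonly string _target;

    public MoveFileCommand(string source, string target)
    {
        _source = source;
        _target = target;
    }

    public void Execute() { File.Move(_source, _target); }
    public void Undo()    { File.Move(_target, _source); }
}

// The form keeps two stacks; every executed command is pushed onto the undo stack.
public class CommandHistory
{
    private readonly Stack<IFileCommand> _undo = new Stack<IFileCommand>();
    private readonly Stack<IFileCommand> _redo = new Stack<IFileCommand>();

    public void Do(IFileCommand command)
    {
        command.Execute();
        _undo.Push(command);
        _redo.Clear();
    }

    public void Undo()
    {
        if (_undo.Count == 0) return;
        IFileCommand command = _undo.Pop();
        command.Undo();
        _redo.Push(command);
    }

    public void Redo()
    {
        if (_redo.Count == 0) return;
        IFileCommand command = _redo.Pop();
        command.Execute();
        _undo.Push(command);
    }
}
</code></pre>
<p>Your TreeView handlers would then create a command per cut/copy/paste and hand it to <code>CommandHistory.Do()</code>, and the undo/redo menu items just call <code>Undo()</code>/<code>Redo()</code>.</p>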
|
<p>For a quick linear undo/redo, you can use <a href="http://www.dofactory.com/Patterns/PatternMemento.aspx" rel="nofollow noreferrer">Memento pattern</a> using zip of file as memento.</p>
| 48,049
|
<p>I came across this snippet of code on MSDN:</p>
<pre><code>entityBuilder.Metadata = @"res://*/AdventureWorksModel.csdl|
res://*/AdventureWorksModel.ssdl|
res://*/AdventureWorksModel.msl";
</code></pre>
<p>What does the <code>res://*/</code> mean and how does it work? I think it has to do with resource files, but I am not sure.</p>
<p>Google is no help because of the punctuation.</p>
|
<p>You've got the right idea:</p>
<p><a href="http://msdn.microsoft.com/en-us/library/aa767740.aspx" rel="nofollow noreferrer" title="res protocol">res protocol</a></p>
<p>EDIT: Incidentally, you should check <a href="http://searchdotnet.com/" rel="nofollow noreferrer" title="searchdotnet">searchdotnet</a> for technical searches, it still uses google but filters on .net stuff.</p>
|
<p>The 'res' protocol allows only access to unmanaged win32 resources, for managed .net assemblies use the new 'resx' protocol: <a href="http://social.msdn.microsoft.com/forums/en-US/csharpgeneral/thread/097794c5-6acd-4563-8237-5e385ca7e563" rel="nofollow">MSDN</a></p>
| 39,318
|
<p>Coming from <a href="https://stackoverflow.com/questions/356778/php-query-single-value-per-iteration-or-fetch-all-at-start-and-retrieve-from-ar">another question of mine</a> where I learnt not to EVER use db queries within loops I consequently have to learn how to fetch all the data in a convenient way before I loop through it.</p>
<p>Let's say I have two tables 'scales' and 'items'. Each item in items belongs to one scale in scales and is linked with a foreign key (scaleID). I want to fetch all that data into an array structure in one query such that the first dimension are all the scales with all the columns and nested within, all items of one scale all columns.</p>
<p>Result would be something like that:</p>
<pre><code>scale 1, scaleParam1, scaleParam2, ...
....item1, itemParam1, itemParam2, ...
....item2, itemParam1, itemParam2, ...
scale 2, scaleParam2, scaleParam2, ...
....item1, itemParam1, itemParam2, ...
....item2, itemParam1, itemParam2, ...
</code></pre>
<p>So far I've done mainly left joins for one-to-one relationships. This is a one-to-many and I just can't wrap my mind around it. </p>
<p>Is it a right join, could it also be done with a subquery, how to get the full outer rows into it as well...</p>
<p>Later I would like to iterate through it with two nested foreach loops.</p>
<p>Maybe it's just that I have a headache...</p>
|
<p>The query should look something like this:</p>
<pre><code>SELECT * FROM scales
INNER JOIN items ON scales.id = items.scale_id
</code></pre>
<p>If you want to iterate through with nested loops, you'll need to pull this data into an array - hopefully you're not pulling back so much that it'll eat up too much memory.</p>
<pre><code>$scales = array();
while ($row = mysql_fetch_assoc($data))
{
if (!isset($scales[$row['scale_id']]))
{
$row['items'] = array();
$scales[$row['scale_id']] = $row;
}
$scales[$row['scale_id']]['items'][] = $row;
}
</code></pre>
<p>Then you can loop through:</p>
<pre><code>foreach ($scales as $scale)
{
foreach ($scale['items'] as $item)
; //... do stuff
}
</code></pre>
<p>Note: this is somewhat naive in that $scale and $item will both contain fields from BOTH tables... if that's a problem then you need to change the assignments in the loop above to pull out only the fields you want.</p>
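<p>One way around that column-name collision is to alias the columns explicitly in the join (the <code>name</code> columns here are assumed, since the question only mentions <code>scaleID</code>):</p>
<pre><code>SELECT s.id AS scale_id, s.name AS scale_name,
       i.id AS item_id, i.name AS item_name
FROM scales s
INNER JOIN items i ON s.id = i.scale_id
ORDER BY s.id
</code></pre>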
|
<p>It might be easier to first get all the scales, then all the items.</p>
<pre><code>//first get scales
while ($row = fetchrowfunctionhere()) {
$scale = $scales->createFromArray($row);
}
//then get items
$lastId = null;
while ($row = fetchrowfunctionhere()) {
$scaleId = $row['scaleID'];
if ($lastId != $scaleId) {
$scale = $scales->getByScaleId($scaleId);
}
$item = $items->createFromArray($row);
$scale->addItem($item);
$lastId = $scaleId;
}
</code></pre>
<p>or everything in one sql</p>
<pre><code>$lastId = null;
while ($row = fetchrowfunctionhere()) {
$scaleData = array_slice($row, 0, 5, true);
$itemData = array_slice($row, 5, 5, true);
$scaleId = $scaleData['scaleID'];
if ($lastId != $scaleId) {
$scale = $scales->createFromArray($scaleData);
}
$item = $items->createFromArray($itemData);
$scale->addItem($item);
$lastId = $scaleId;
}
</code></pre>
<p>everything as one happy array</p>
<pre><code>while ($row = fetchrowfunctionhere()) {
$scaleData = array_slice($row, 0, 5, true);
$itemData = array_slice($row, 5, 5, true);
$scaleId = $scaleData['scaleID'];
if (!isset($scales[$scaleId])) {
$scales[$scaleId] = $scaleData;
}
$itemId = $itemData['itemID'];
$scales[$scaleId]['items'][$itemId] = $itemData;
}
</code></pre>
| 46,571
|
<p>I have a project where multiple developers are using a copy of the same windows Virtual PC image (W2K3 SE SP2). Because our solution is tied to the machine-name (less than ideal, i know) all of the developers have the same machine name.</p>
<p>We use a VPN to connect to a remote system, upon connection we get the "Windows Error: A duplicate name exists on the network" error.</p>
<p>Since all development is happening locally, we're not dependent on other machines connecting to us -- only outbound connections.</p>
<p>I know it's best practice to change the machine name, but what's the reasoning behind this? What impact would this have?</p>
|
<p>You can't. The compiler needs to know the size of the object when declaring the class.</p>
<p>References are an alternative, although they have to be instantiated at construction time, so it's not always feasible.</p>
<p>Another alternative are smart pointers, but I suppose that's technically still a pointer.</p>
<p>It would be good to know why you don't want to use a pointer to suggest some other construct though...</p>
|
<p>If you are able to use a reference, you can retain the same use syntax. However, your reference has to be initialised straight away in the constructor, so your ctor absolutely must be defined out-of-line. (You will also need to free the object in the destructor too.)</p>
<pre><code>// bar.h
class foo;
class bar {
foo& foo_;
public:
bar();
~bar();
};
// bar.cc
bar::bar() : foo_(*new foo)
{
// ...
}
bar::~bar()
{
// ...
delete &foo_;
}
</code></pre>
<p>Your mileage may vary. :-)</p>
| 39,347
|
<p>Using only ANSI C, is there any way to measure time with milliseconds precision or more? I was browsing time.h but I only found second precision functions.</p>
|
<p>There is no ANSI C function that provides better than 1 second time resolution but the POSIX function <a href="http://www.opengroup.org/onlinepubs/000095399/functions/gettimeofday.html" rel="noreferrer"><code>gettimeofday</code></a> provides microsecond resolution. The clock function only measures the amount of time that a process has spent executing and is not accurate on many systems.</p>
<p>You can use this function like this:</p>
<pre><code>struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
// Some code you want to time, for example:
sleep(1);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
printf("Time elapsed: %ld.%06ld\n", (long int)tval_result.tv_sec, (long int)tval_result.tv_usec);
</code></pre>
<p>This returns <code>Time elapsed: 1.000870</code> on my machine.</p>
|
<p>Under Windows:</p>
<pre><code>#include <windows.h>
#include <wchar.h>

wchar_t buff[32];
SYSTEMTIME t;
GetLocalTime(&t);
swprintf_s(buff, 32, L"[%02d:%02d:%02d:%d]\t", t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
</code></pre>
| 47,159
|
<p>For example, referencing something as System.Data.Datagrid as opposed to just Datagrid. Please provide examples and explanation. Thanks.</p>
|
<p>The benefit is that you don't need to add an import for everything you use, especially if it's the only thing you use from a particular namespace, it also prevents collisions.</p>
<p>The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.</p>
<p>Personally I tend to use imports for most things unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.</p>
|
<p>I don't think there is really a downside, just readability vs actual time spent coding. In general, if you don't have namespaces with ambiguous objects, I don't think it's really needed. Another thing to consider is the level of use. If you have one method that uses reflection and you are alright with typing System.Reflection 10 times, then it's not a big deal, but if you plan on using a namespace a lot then I would recommend an import.</p>
| 12,452
|
<p>OK, this begins to drive me crazy. I have an asp.net webapp. Pretty straightforward, most of the code in the .aspx.vb, and a few classes in App_Code.</p>
<p>The problem, which has begun to occur only today (even though most of the code was already written), is that once in a while, I have this error message :</p>
<blockquote>
<p>Error BC30002: Type ‘XXX’ is not defined</p>
</blockquote>
<p>The error occurs about every time I modify the files in the App_Code folder. EDIT : OK, this happens also if I don't touch anything for a while then refresh the page. I'm still trying to figure out exactly how to trigger this error.</p>
<p>I just have to wait a little bit without touching anything, then refresh the page and it works, but it's very annoying.</p>
<p>So I searched a little bit, but nothing came up except imports missing. Any idea ?</p>
|
<p>I think I found the problem.</p>
<p>My code was like that :</p>
<pre><code>Imports CMS
Sub Whatever()
Dim a as new Arbo.MyObject() ' Arbo is a namespace inside CMS
    Dim b as new Util.MyOtherObject() ' Util is a namespace inside CMS
End Sub
</code></pre>
<p>I'm not sure why I wrote it like that, but it turns out the fact I was calling classes without either calling their whole namespace or importing their whole namespace was triggering the error.</p>
<p>I rewrote it like this :</p>
<pre><code>Imports CMS.Arbo
Imports CMS.Util
Sub Whatever()
Dim a as new MyObject()
Dim b as new MyOtherObject()
End Sub
</code></pre>
<p>And now it works...</p>
|
<p>Sounds like it happens every time the website spins up (the app gets recycled every time you touch app_code and probably you have IIS configured to shut down the website after X minutes of inactivity).</p>
<p>I bet it has something to do with the asp.net worker process not having the correct access rights on the server. So its trying to load an assembly and is being denied. </p>
<p><a href="http://msdn.microsoft.com/en-us/library/aa302435.aspx" rel="nofollow noreferrer">Check this link</a> and Table 19.3 for a list of all the folders the worker process account must have access to in order to function. And don't forget to give it rights to all files and folders in your virtual directory!</p>
| 11,063
|
<p>We've just "upgraded" our production database server from 32-bit to 64-bit. It's running SQL Server 2005 Standard on Windows Server 2003. During the night after the upgrade the server was unavailable for nearly an hour - client requests were timing out. The problem then seemed to fix itself. The only clue I have as to the problem is what's in the SQL server logs:</p>
<p>LazyWriter: warning, no free buffers found.</p>
<p>Memory Manager
VM Reserved = 8470288 KB
VM Committed = 2167672 KB
AWE Allocated = 0 KB
Reserved Memory = 1024 KB
Reserved Memory In Use = 0 KB</p>
<p>Message
Memory node Id = 0
VM Reserved = 8464528 KB
VM Committed = 2162000 KB
AWE Allocated = 0 KB
SinglePage Allocator = 103960 KB
MultiPage Allocator = 31832 KB</p>
<p>MEMORYCLERK_SQLGENERAL (Total)
VM Reserved = 0 KB
VM Committed = 0 KB
AWE Allocated = 0 KB
SM Reserved = 0 KB
SM Committed = 0 KB
SinglePage Allocator = 4352 KB</p>
<p>Then there are many more messages like it starting with MEMORYCLERK.</p>
<p>Does anyone know what is going on? It seems like it's run out of memory and, granted, the server only has 2GB of physical RAM, which isn't very much by today's standards, but surely it shouldn't just completely STOP WORKING? Should I set the maximum memory SQL is allowed to use to 1.6GB or so? Is there something else I can do (OTHER THAN installing more RAM, obviously)?</p>
|
<p>2GB is certainly not very much. In fact I believe Microsoft recommends that you have 2GB of memory just to run the OS and other tasks.</p>
<p><a href="http://blogs.msdn.com/slavao/archive/2006/11/13/q-a-does-sql-server-always-respond-to-memory-pressure.aspx" rel="nofollow noreferrer">Check this blog posting</a> and <a href="http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1739998&SiteID=1" rel="nofollow noreferrer">this microsoft forum posting</a> for more information.</p>
<p>Memory is cheap, add more if you can. </p>
<p><a href="https://i.stack.imgur.com/goN9f.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/goN9f.jpg" alt="alt text"></a><br>
<sub>(source: <a href="http://icanhascheezburger.files.wordpress.com/2008/03/funny-pictures-computer-more-rams-field.jpg" rel="nofollow noreferrer">wordpress.com</a>)</sub> </p>
|
<p>There's been some sporadic reports of MSSQL allocating enough memory to cause page faulting to disk<a href="http://support.microsoft.com/kb/918483/en-us" rel="nofollow noreferrer">1</a> - which, of course, results in drastically decreased performance.</p>
<p>Though I haven't seen anything official from MS, reports are that setting max memory to somewhere between 512M and 1G less than physical RAM should help.</p>
<p>Enterprise Edition allows you to keep MSSQL pages from being paged out, which should also solve the problem. And, obviously, more RAM will help - but probably may not alleviate it.</p>
<p><a href="http://support.microsoft.com/kb/918483/en-us" rel="nofollow noreferrer">1</a> There is some debate as to whether MSSQL is trying to allocate too much RAM, the OS is paging it out, or MSSQL is just allocating to the wrong pools. Regardless, max mem should help cases 1 and 2, and SP2 is supposed to solve 3.</p>
<p>Edit: A colleague pointed me to a related <a href="http://support.microsoft.com/kb/918483/en-us" rel="nofollow noreferrer">KB article</a> with a few hotfixes listed. It references different error messages (are you running SP2?), but the symptoms and behavior seem to fit your situation.</p>
| 23,034
|
<p>Most ASP.NET hosts give you a single website in IIS. Then, they let you set subfolders as applications. Are there any shared ASP.NET 3.5 hosts that give you multiple websites with a single account?</p>
<p>I have several low traffic websites that don't use much bandwidth.</p>
|
<p>WebHost4Life offers this, though there's a small charge per domain. ($15/year or so). I'm sure most hosts can do this, but fees vary.</p>
|
<p>I find that for low-bandwidth websites the lower-end packages at <a href="http://www.vpsland.com/ezwin.html" rel="nofollow noreferrer">http://www.vpsland.com/ezwin.html</a> work well. You get the control of Remote Desktop and can load pretty much anything you want, starting at $18. The only issue I had with them was that their support was a bit slow, and their connection seemed slower than most.</p>
<p>I then switched to MaximumASP with one of their VPS's. I absolutely love MaximumASP, their support was great and their servers are speedy. The only issue I have with them is I can't use MySQL, they block it for performance reasons. (use referral id: CRAB-6573)</p>
<p>I have since switched to Mosso (<a href="http://www.mosso.com" rel="nofollow noreferrer">http://www.mosso.com</a>). I love Mosso. I don't get the full access via Remote Desktop that I get with my MaximumASP account (which I stil have for my Windows Sharepoint Services sites, and other sites where Remote access is required.). But, Mosso is fast, has great support, and I can run ASP.NET, PHP, Ruby on Rails, and more all within the same domain, or I can have multiple domains (at no additional cost). The only issue I have is I found that their description of MSSQL being included was a bit misleading. I was shocked to find that MSSQL would cost $5 month per database per 500MB of data. Seems a bit steep. However, I use NHibernate in all my sites so it was no big deal to convert to MySQL so I went that route and haven't looked back. If you are interested in Mosso, give them my referral id and you get $100 off your second month. REF-CODEMONKEY</p>
<p>Another good host is Gate.com. I found that they were middle of the road as far as performance. But their support was great, and the price was right.</p>
<p>The one host I would avoid is 1&1.com. I made the stupid mistake of registering my domain with their "FREE" offer. I then found that I did not have the control I needed for file system rights. I cancelled the account within 30 days and got my money back. However, my domain ended up on the auction block. I still have not gotten the domain back. They keep paying for it. It was a very good domain (only 5 characters), but I have to chalk it up as a stupid mistake (which I knew better, I was just being cheap).</p>
| 37,283
|
<p>I need my .net application to use the .html extension instead of .aspx </p>
<p>I'm converting a php app and there are external applications which depend on that extension to function.</p>
<p>What is the best way to do this?</p>
<p>Thanks</p>
|
<p>In IIS, when you create the application for the virtual directory, click on "Configuration" for the application, and edit "App mappings", i.e. add a new mapping for html.</p>
<p>Or, in your web.config, add these sections:</p>
<pre><code><httpHandlers>
<remove verb="*" path="*.html" />
<add verb="*" path="*.html" type="System.Web.UI.PageHandlerFactory" />
</httpHandlers>
<compilation>
<buildProviders>
<buildProvider
extension=".html"
type="System.Web.Compilation.PageBuildProvider" />
</buildProviders>
</compilation>
</code></pre>
<p>EDIT: Added the section, according to the comment. Thanks Chris.</p>
|
<p>Note that I am not 100% sure this will work with the PHP extension, we are using this procedure for a custom extension here.</p>
<p>You can change the IIS configuration:
Open the IIS Console (right click on My Computer > Manage... > Services and applications)</p>
<ul>
<li>If you are in a website, open the websites properties and the "Home directory" tab.</li>
<li>If you are in a virtual directory, the properties then the "Virtual Directory" tab.</li>
</ul>
<p>Click The "Configuration Button", look up the .aspx extension and use the same configuration for the ".php" extension (tip: you can copy paste the executable dll name between both dialogs)</p>
| 29,757
|
<p>I have an application that uses a cron like job to update a set of data. The update process happens once a minute and doesn't last long. A servlet exposes this data set to the users. My problem is that during the update process, the servlet requests should block and wait for the process to complete. </p>
<p>The bottom line is that I have these two functions:</p>
<pre><code>private void updateData() {
}
public List getData() {
}
</code></pre>
<p>The first function runs once a minute. The second one can be called any number of times simultaneously. When updateData runs, all calls of getData must wait for it to complete. One getData call shouldn't block subsequent calls of the same function. The updateData function has higher priority than the getData, i.e. when updateData is to run, it has to wait for all calls of getData to complete, but new calls shouldn't be allowed to start.</p>
<p>What synchronization mechanism should I use for a case like this? I am using a Java server, but I would be interested to know what solutions exist for other platforms as well.</p>
|
<p>You can use a <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/locks/ReadWriteLock.html" rel="nofollow noreferrer">ReadWriteLock</a> instead of synchronize. </p>
<blockquote>
<p>A ReadWriteLock maintains a pair of associated locks, one for read-only operations and one for writing. The read lock may be held simultaneously by multiple reader threads, so long as there are no writers. The write lock is exclusive. </p>
</blockquote>
<pre><code>private final ReadWriteLock lock = new ReentrantReadWriteLock();

public void updateData() {
    lock.writeLock().lock();
    try {
        /* do stuff. */
    } finally {
        lock.writeLock().unlock();
    }
}

public List getData() {
    lock.readLock().lock();
    try {
        /* process/return data. */
    } finally {
        lock.readLock().unlock();
    }
}
</code></pre>
|
<p>You need to synchronize access on the data.</p>
<pre><code>private final Object updateLock = new Object();

public void updateData() {
    synchronized (updateLock) {
        /* do stuff. */
    }
}

public List getData() {
    List data;
    synchronized (updateLock) {
        data = getRealData();
    }
    /* process/return data. */
    return data;
}
</code></pre>
| 48,481
|
<p>Is anyone using the business intelligence tool Inetsoft Style Report ? I'm stuck with it and was wondering if anyone has advice on tuning and/or best practices for server admin? We are running on a fast Solaris box using Tomcat with a db2 database. </p>
|
<p>You have my sympathy. We run ours on a dual Xeon with 4G ram, and it's still a pig. One of our programmers is an ex-Inetsoft employee and has done everything known to optimize it.</p>
<p>My only suggestion is if your organization is considering StyleReport, run!</p>
|
<p>One thing you can check is whether the CPU is fully utilized when the server is running. If it's not at or close to 100%, check the Tomcat thread pool size and try increasing it. There should be parameters in StyleReport to control its thread pool sizes too. Normally, if the CPUs are fully utilized, the performance should be OK.</p>
| 31,293
|
<p>I keep a JMS connection always open, because I have a MessageListener on it. </p>
<p>Is it a common need to worry about minimizing maintenance of applications with long lived JMS connections? </p>
<p>I was thinking something along the lines of try to recover from some possible common well known kinds of failure, like temporary connectivity failure.</p>
|
<p>A good JMS provider will deal with network outages such as a dropped socket or a message broker failing over or being rebooted. e.g. here is how you <a href="http://activemq.apache.org/how-do-i-configure-automatic-reconnection.html" rel="nofollow noreferrer">enable automatic reconnection</a> in <a href="http://activemq.apache.org/" rel="nofollow noreferrer">Apache ActiveMQ</a>.</p>
<p>It's often quite a pain to recreate all of your JMS resources (connection, sessions, producers, consumers) - it's much easier for the JMS provider to do it for you.</p>
<p>If you must use a provider which can't support this feature - consider either switching, or using the Spring JMS helper classes which can do some of this for you. </p>
|
<p>You will need to handle two cases:</p>
<ol>
<li>Firewall between you and the JMS server. Most firewalls will cut an "idle" connection after a couple of hours. If this is the case, send a message every hour or so or, if you can, enable TCP_KEEPALIVE. This is a TCP/IP option which will cause the underlying socket to send a test message after some time.</li>
<li>The other server is rebooted. When this happens, you'll get a "connection lost" error when you try to send the next message. In this case, just open the connection once more and try again (see the sketch below).</li>
</ol>
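<p>If your provider can't do the reconnect for you, a minimal retry sketch for case #2 could look like the following. This is not any provider's API: the <code>producer</code> field and <code>rebuildConnection()</code> are placeholders for whatever bootstrap code and resources you already have.</p>
<pre><code>import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;

// Minimal sketch for case #2: one retry after a lost connection.
private MessageProducer producer;   // assumption: created by your own bootstrap code

public void sendWithRetry(Message message) throws JMSException {
    try {
        producer.send(message);
    } catch (JMSException e) {
        rebuildConnection();        // assumption: recreates Connection, Session and the producer field
        producer.send(message);     // single retry; let the exception propagate if this also fails
    }
}
</code></pre>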
<p>I suggest installing a local JMS server (any will do), connecting your app to it and then stopping the server. That will give you the error message to expect for case #2. Then write a unit test with <a href="http://mockrunner.sourceforge.net/" rel="nofollow noreferrer">MockRunner</a> to make sure your error handling is correct.</p>
| 44,202
|
<p>If you want to associate some constant value with a class, here are two ways to accomplish the same goal:</p>
<pre><code>class Foo
{
public:
static const size_t Life = 42;
};
class Bar
{
public:
enum {Life = 42};
};
</code></pre>
<p>Syntactically and semantically they appear to be identical from the client's point of view:</p>
<pre><code>size_t fooLife = Foo::Life;
size_t barLife = Bar::Life;
</code></pre>
<p>Is there any reason other than just pure style concerns why one would be preferable to another?</p>
|
<p>The <code>enum</code> hack used to be necessary because many compilers didn't support in-place initialization of the value. Since this is no longer an issue, go for the other option. Modern compilers are also capable of optimizing this constant so that no storage space is required for it.</p>
<p>The only reason for not using the <code>static const</code> variant is if you want to <em>forbid</em> taking the address of the value: you can't take an address of an <code>enum</code> value while you can take the address of a constant (and this would prompt the compiler to reserve space for the value after all, but only <em>if</em> its address is really taken).</p>
<p>Additionally, the taking of the address will yield a link-time error unless the constant is explicitly <em>defined</em> as well. Notice that it can still be initialized at the site of declaration:</p>
<pre><code>struct foo {
static int const bar = 42; // Declaration, initialization.
};
int const foo::bar; // Definition.
</code></pre>
|
<p>Well, if needed, you can take the address of a static const member value. You'd have to declare a separate member variable of the enum type to take the address of it.</p>
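<p>To make the difference concrete, here is a small sketch reusing the <code>Foo</code>/<code>Bar</code> names from the question (pre-C++17 rules, where a static const member that is ODR-used still needs an out-of-class definition):</p>
<pre><code>#include <cstddef>

struct Foo { static const std::size_t Life = 42; };
struct Bar { enum { Life = 42 }; };

const std::size_t Foo::Life;        // out-of-class definition, needed once the address is taken

const std::size_t *p = &Foo::Life;  // fine: a static const member is an object with an address
// const int *q = &Bar::Life;       // error: Bar::Life is an enumerator, not an object
</code></pre>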
| 25,110
|
<p>By default, IIS6 does not serve .json (no wildcard <code>MIME</code> type).</p>
<p>Therefore a 404 not found is thrown. I then add a new MIME type <code>(.json, text/plain or application/x-javascript or application/json)</code> which works fine.</p>
<p>However, when you then add a new mapping <code>(Home Directory -> Configuration -> Add) with .json, C:\WINDOWS\system32\inetsrv\asp.dll</code>, "<code>GET,POST</code>" and try to browse to the file, you get a 404. </p>
<p>If you remove the mapping and try and <code>POST or GET</code> to it, you get a <code>405</code>.</p>
<p>...</p>
<p>Suggestions?</p>
|
<p>By default, IIS in W2K3 and above won't serve files that aren't of a MIME type that it knows about (instead returning 404 errors).</p>
<p>You need to add a MIME type to IIS to allow it to serve that type of file. You can set it at the site level or at the server level.</p>
<p>To set this for the entire server:</p>
<ul>
<li>Open the properties for the server in IIS Manager and click MIME Types</li>
<li>Click "New". Enter "JSON" for the extension and "application/json" for the MIME type.</li>
</ul>
<hr>
<h2>Update</h2>
<p>Given this post is found in the Sencha "getting started guide", I thought it's worth <em>upgrading</em> the steps.</p>
<p>On Windows 7 Enterprise SP1 (64-bit),
the IIS Manager looks a little different. So the steps:</p>
<p>*. Open IIS Manager. Then you get following window.</p>
<p><img src="https://i.stack.imgur.com/tTsKd.png" alt="enter image description here"></p>
<p>*. Right click on MIME and choose <code>open feature</code></p>
<p>*. Click on ADD from top right corner Actions menu</p>
<p><img src="https://i.stack.imgur.com/UBcZT.png" alt="enter image description here"></p>
<p>*. Rest is as per Evan's.</p>
|
<p>If you don't have IIS installed, obviously, you'll want to install it prior to launching the IIS Manager. I needed (on Windows 7) to go to: 'control panel/program and features' then click 'turn windows features on or off'</p>
<p>Reference: <a href="http://www.howtogeek.com/howto/windows-vista/how-to-install-iis-on-windows-vista/">http://www.howtogeek.com/howto/windows-vista/how-to-install-iis-on-windows-vista/</a></p>
<p>It appears that starting the IIS Manager is different for different systems. I did 'start/run/inetmgr.</p>
<p>Reference: <a href="http://msdn.microsoft.com/en-us/library/bb763170(v=vs.100).aspx" rel="nofollow">http://msdn.microsoft.com/en-us/library/bb763170(v=vs.100).aspx</a></p>
<p>Reference: <a href="http://technet.microsoft.com/en-us/library/cc770472(v=ws.10).aspx" rel="nofollow">http://technet.microsoft.com/en-us/library/cc770472(v=ws.10).aspx</a></p>
<p>Troy Frericks.</p>
| 43,160
|
<p>Looking to do a very small, quick 'n dirty side project. I like the fact that the Google App Engine is running on Python with Django built right in - gives me an excuse to try that platform... but my question is this:</p>
<p>Has anyone made use of the app engine for anything other than a toy problem? I see some good example apps out there, so I would assume this is good enough for the real deal, but wanted to get some feedback.</p>
<p>Any other success/failure notes would be great.</p>
|
<p>I have tried app engine for my small quake watch application
<a href="http://quakewatch.appspot.com/" rel="nofollow noreferrer">http://quakewatch.appspot.com/</a></p>
<p>My purpose was to see the capabilities of app engine, so here are the main points:</p>
<ol>
<li>it doesn't come with Django by default; it has its own web framework which is pythonic, has a URL dispatcher like Django, and uses Django templates.
So if you have Django experience, you will find it easy to use
<ul>
<li>But you can use any pure python framework and Django can be easily added see
<a href="http://code.google.com/appengine/articles/django.html" rel="nofollow noreferrer">http://code.google.com/appengine/articles/django.html</a>
google-app-engine-django (<a href="http://code.google.com/p/google-app-engine-django/" rel="nofollow noreferrer">http://code.google.com/p/google-app-engine-django/</a>) project is excellent and works almost like working on a Django project</li>
</ul></li>
<li>You can not execute any long-running process on the server; what you do is reply to a request, and that should be quick, otherwise App Engine will kill it.
So if your app needs lots of backend processing, App Engine is not the best way;
otherwise you will have to do the processing on a server of your own</li>
<li>My quakewatch app has a subscription feature, meaning I had to email the latest quakes as they happened, but I can not run a background process in App Engine to monitor new quakes.
The solution here is to use a third-party service like pingablity.com which can connect to one of your pages and execute the subscription emailer,
but here too you will have to take care that you don't spend much time,
or break the task into several pieces</li>
<li>It provides Django-like modeling capabilities, but the backend is totally different; for a new project it should not matter.</li>
</ol>
<p>But overall I think it is excellent for creating apps which do not need a lot of background processing.</p>
<p>Edit:
Now <a href="http://code.google.com/appengine/docs/python/taskqueue/" rel="nofollow noreferrer">task queues</a> can be used for running batch processing or scheduled tasks</p>
<p>Edit:
after working on/creating a real application on GAE for a year, my opinion now is that unless you are making an application which needs to scale to millions and millions of users, don't use GAE. Maintaining and doing trivial tasks in GAE is a headache due to its distributed nature; avoiding deadline-exceeded errors, counting entities or doing complex queries requires complex code, so a small complex application should stick to LAMP.</p>
<p>Edit:
Models should be designed specially, considering all the transactions you wish to have in the future, because only entities in the same entity group can be used in a transaction, and this makes the process of updating two different groups a nightmare, e.g. transferring money from user1 to user2 in a transaction is impossible unless they are in the same entity group, but making them the same entity group may not be best for frequent-update purposes....
read this <a href="http://blog.notdot.net/2009/9/Distributed-Transactions-on-App-Engine" rel="nofollow noreferrer">http://blog.notdot.net/2009/9/Distributed-Transactions-on-App-Engine</a></p>
|
<p>Take a look at the <a href="http://mysqlgame.appspot.com/" rel="nofollow noreferrer">sql game</a>; it is very stable and actually pushed traffic limits at one point so that it was getting throttled by Google. I have seen nothing but good news about App Engine, other than hosting your app on servers someone else controls completely.</p>
| 13,541
|
<p>I'm developing a shareware desktop application. I'm to the point where I need to implement the trial-use/activation code. How do you approach something like this? I have my own ideas, but I want to see what the stackoverflow community thinks.</p>
<p>I'm developing with C++/Qt. The intended platform is Windows/Mac/Linux.</p>
<p>Thanks for your advice!</p>
|
<p><strong>What to protect against and what not to protect against:</strong></p>
<p>Keep in mind that people will always find a way to get around your trial period. So you want to make it annoying for someone to get around your trial period, but it doesn't need to be impossible to get around. </p>
<p>Most people will think it's too much work to try and get around your trial period if there is even a simple mechanism. For example people can always use filemon/regmon to see which files and registry entries change upon installing your software.</p>
<p>That being said, a simple mechanism is best, because it wastes less of your time. </p>
<p><strong>Here are some ideas:</strong></p>
<ul>
<li>You can keep a tick count somewhere in the registry for every unique day the application is run. If the tick count > 30, then show them an expired message. </li>
<li>You can store the install date, but take heed to check whether they have more days available than your trial is supposed to allow, and if so tell them they are expired (see the sketch after this list). This will protect against people changing their date to a future day before installing. </li>
<li>I would recommend having your uninstaller remove your "days running" count. This is because people may re-evaluate your product months later and eventually buy. But if they can't evaluate it, they won't buy. No serious user would have time to uninstall/re-install just to gain extra use of your product.</li>
</ul>
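<p>Since the question mentions C++/Qt, a minimal sketch of the install-date idea could use <code>QSettings</code> (which maps to the registry on Windows and ini/plist files elsewhere). The organization, application and key names here are made up:</p>
<pre><code>#include <QSettings>
#include <QDate>

// Returns true once the trial has run out (or the clock was set back).
bool trialExpired(int trialDays = 30)
{
    QSettings settings("MyCompany", "MyApp");    // hypothetical names
    QDate firstRun = settings.value("firstRun").toDate();
    if (!firstRun.isValid()) {
        settings.setValue("firstRun", QDate::currentDate());
        return false;                            // first launch starts the trial
    }
    int daysUsed = firstRun.daysTo(QDate::currentDate());
    // A negative count means the system date was moved back after install.
    return daysUsed < 0 || daysUsed > trialDays;
}
</code></pre>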
<p><strong>Extending trials:</strong></p>
<p>For us, when a customer requests a trial extension, we send them an automated email that contains a program "TrialExtend.exe" and a trial extend code. This program contacts our server with the trial extend code to validate it. If the code is validated, their trial period is reset. </p>
|
<p>If you are [fairly] likely to have a network connection, you can have the installer register with your website, then check against it every time it starts.</p>
<p>If that's not feasible, writing a value into a world-modifiable point on the filesystem (a registry entry, an entry in an /etc conf file, etc.) may be workable.</p>
| 24,105
|
<p>I am looking for pointers to the solution of the following problem: I have a set of rectangles, whose heights are known and x-positions also, and I want to pack them in the most compact form. With a little drawing (where all rectangles are of the same width, but the width may vary in real life), I would like, instead of:</p>
<pre><code>-r1-
-r2--
-r3--
-r4-
-r5--
</code></pre>
<p>something like:</p>
<pre><code>-r1- -r3--
-r2-- -r4-
-r5--
</code></pre>
<p>All hints will be appreciated. I am not necessarily looking for "the" best solution.</p>
|
<p>Your problem is a simpler variant, but you might get some tips reading about heuristics developed for the "binpacking" problem. There has been a lot written about this, but <a href="http://en.wikipedia.org/wiki/Bin_packing_problem" rel="nofollow noreferrer">this page</a> is a good start. </p>
|
<p>Put a tetris-like game into your website. Generate the blocks that fall and the size of the play area based on your parameters. Award points to players based on the compactness (less free space = more points) of their design. Get your website visitors to perform the work for you.</p>
| 18,480
|
<p>I have a logical error. I provided the following as input:</p>
<ul>
<li>the salary is 30000</li>
<li>the child n° is 9</li>
</ul>
<p>So the net salary will be:</p>
<ul>
<li><p>the family bonus + salary - tax</p>
<pre><code> (750) + (30000) - (3000)
</code></pre></li>
<li><p>but my program counts them as </p>
<pre><code> (1500) + (30000) + (6000)
</code></pre></li>
</ul>
<p>My program doubled (accumulated) the family bonus and the tax. Can anyone explain why?</p>
<pre><code>class Program
{
static void Main(string[] args)
{
Employee e = new Employee();
e.ReadEmployee();
e.PrintEmployee();
}
}
class Employee
{
private string n;
private int byear;
private double sal;
private bool gen;
private bool mar;
private int child;
public static double tax = 0;
public static double familybonus = 0;
public string Ename
{
get { return this.n; }
set
{
this.n = value;
}
}
public int Birthyear
{
get { return this.byear; }
set
{
if (value >= 1970 && value <= 1990) this.byear = value;
else this.byear = 0;
}
}
public double Salary
{
get { return this.sal; }
set
{
if (value >= 5000 && value <= 50000) this.sal = value;
else this.sal = 0;
}
}
public bool Gender
{
get { return this.gen; }
set { this.gen = value; }
}
public bool Married
{
get { return this.mar; }
set { this.mar = value; }
}
public int NChildren
{
get { return this.child; }
set
{
if (value >= 0 && value <= 12) this.child = value;
else this.child = 0;
}
}
public double getAge()
{
return 2008 - this.Birthyear;
}
public double getNet()
{
double net = getFamilyBonus() + this.Salary - getTax();
return net;
}
public double getFamilyBonus()
{
if (this.Married == true)
familybonus += 300;
if (this.NChildren == 1) familybonus += 200;
else if (this.NChildren == 2) familybonus += 350;
else if (this.NChildren >= 3) familybonus += 450;
return familybonus;
}
public double getTax()
{
if (Salary < 10000)
tax = 0;
if (Salary <= 10000 && Salary >= 20000)
tax += Salary * 0.05;
else tax += Salary * 0.1;
return tax;
}
public void ReadEmployee()
{
Console.Write("Enter Employee Name: ");
Ename = Console.ReadLine();
Console.Write("Enter Employee birth date: ");
Birthyear = int.Parse(Console.ReadLine());
while (Birthyear < 1970 || Birthyear > 1990)
{
Console.WriteLine("Invalid Birthyear!");
Console.Write("Enter Employee Birth date: ");
Birthyear = int.Parse(Console.ReadLine());
}
string g = null;
while (g != "M" && g != "m" && g != "F" && g != "f")
{
Console.Write("Enter Employee Gender (M/F)");
g = Convert.ToString(Console.ReadLine());
}
if (g == "M" || g == "m")
Gender = true;
else
Gender = false;
Console.Write("Enter Employee Salary: ");
Salary = Double.Parse(Console.ReadLine());
while (Salary < 5000 || Salary > 50000)
{
Console.WriteLine("Invalid Salary!");
Console.Write("Enter Employee Salary: ");
Salary = int.Parse(Console.ReadLine());
}
string m = null;
while (m != "true" && m != "True" && m != "false" && m != "False")
{
Console.Write("Married (true/false)");
m = Console.ReadLine();
}
if (m == "true")
this.Married = true;
else
this.Married = false;
Console.Write("Enter Employee Children count: ");
NChildren = int.Parse(Console.ReadLine());
while (NChildren < 0 || NChildren > 12)
{
Console.WriteLine("Invalid NChildren!");
Console.Write("Enter Employee Children count: ");
NChildren = int.Parse(Console.ReadLine());
}
}
public void PrintEmployee()
{
Console.Write("Hello ");
{
if (Gender == true)
Console.Write("Mr. ");
else
Console.Write("Mrs. ");
Console.WriteLine(Ename);
}
Console.WriteLine("You are {0} years old", getAge());
Console.WriteLine("Salary= {0}", Salary);
Console.WriteLine("Tax= {0}", getTax());
Console.WriteLine("Family bonus= {0}", getFamilyBonus());
Console.WriteLine("Net= {0}", getNet());
}
}
</code></pre>
|
<p>I took the existing code and hard-wired the inputs (rather than using Console.ReadLine()), and I get:</p>
<blockquote>
<p>You are 28 years old Salary= 30000
Tax= 3000 Family bonus= 750 Net= 25500</p>
</blockquote>
<p>The main problem seems to be not initializing values - i.e. treating fields as variables:</p>
<pre><code>public double getTax()
{
if (Salary < 10000)
tax = 0;
if (Salary <= 10000 && Salary >= 20000)
tax += Salary * 0.05;
else tax += Salary * 0.1;
return tax;
}
</code></pre>
<p>OK - and what does <code>tax</code> start at if <code>Salary >= 10000</code>, etc.? Likewise <code>familybonus</code> in <code>getFamilyBonus</code>. By the way, <strong>how can Salary be both <code><= 10000</code> and <code>>= 20000</code></strong>?</p>
<p>To illustrate, I've changed the output to:</p>
<pre><code> Console.WriteLine("Tax= {0}", getTax());
Console.WriteLine("Tax= {0}", getTax());
Console.WriteLine("Tax= {0}", getTax());
</code></pre>
<p>Which shows:</p>
<blockquote>
<p>Tax= 3000 Tax= 6000 Tax= 9000</p>
</blockquote>
<p>My advice would be: don't store calculated values unless you know the math is so complex that it is worth it. Just calculate them as needed (no field at all).</p>
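<p>For example, <code>getTax</code> could compute a local value on every call instead of accumulating into the static field. The sketch below also assumes the contradictory condition was meant to be the 10000 to 20000 bracket; that is a guess about the intent, not something the original code states:</p>
<pre><code>public double getTax()
{
    double tax;                   // local, so repeated calls cannot accumulate
    if (Salary < 10000)
        tax = 0;
    else if (Salary <= 20000)     // presumed intent: 10000..20000 taxed at 5%
        tax = Salary * 0.05;
    else
        tax = Salary * 0.1;
    return tax;
}
</code></pre>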
|
<p>Another problem seems to lie in the fact that you don't initialize familybonus when you say familybonus += 300. So every time you call getFamilyBonus it's added to the previous result. You call it twice in the PrintEmployee function, once directly and once indirectly by calling getNet.</p>
| 36,909
|
<p>What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET?</p>
|
<p>For C#:</p>
<pre><code>foreach(ObjectType objectItem in objectTypeList)
{
// ...do some stuff
}
</code></pre>
<p>Answer for VB.NET from <strong>Purple Ant</strong>:</p>
<pre><code>For Each objectItem as ObjectType in objectTypeList
'Do some stuff '
Next
</code></pre>
|
<p>I may be missing something, but iterating through a generic list should be fairly simple if you use my examples below. The List<> class implements the IList and IEnumerable interfaces so that you can easily iterate through them basically any way you want.</p>
<p>The most efficient way would be to use a for loop: </p>
<pre><code>for(int i = 0; i < genericList.Count; ++i)
{
// Loop body
}
</code></pre>
<p>You may also choose to use a foreach loop:</p>
<pre><code>foreach(<insertTypeHere> o in genericList)
{
// Loop body
}
</code></pre>
| 3,600
|
<p>What is the Big-O time complexity of the following nested loops:</p>
<pre class="lang-cpp prettyprint-override"><code>for (int i = 0; i < N; i++) {
for (int j = i + 1; j < N; j++) {
System.out.println("i = " + i + " j = " + j);
}
}
</code></pre>
<p>Would it be <em>O(N^2)</em> still?</p>
|
<p>Yep, it's still O(n^2), it has a smaller constant factor, but that doesn't affect O notation.</p>
|
<p>Yes, it would be N squared. The actual number of inner-loop steps is the sum 0 + 1 + ... + (N - 1), which is N*(N - 1)/2, if I'm not mistaken. Big O only takes into account the highest exponent and no constants, and thus this is still N squared.</p>
| 47,255
|
<p>PLA has a heat capacity of <a href="https://www.sd3d.com/wp-content/uploads/2017/06/MaterialTDS-PLA_01.pdf" rel="nofollow noreferrer">1.8-2.1 J/g-K</a>, while <a href="http://www.matweb.com/search/datasheet_print.aspx?matguid=4de1c85bb946406a86c52b688e3810d0" rel="nofollow noreferrer">PETG 1.1-1.3 J/g-K</a>. This means that each gram of PLA needs more energy to heat up. I assume no "melting latent energy", since we talk about plastics.</p>
<p>The density is about the same.</p>
<p>Still, printing speed for PETG is said to be kept at max at 60 mm/s, while PLA can easily go up to 100 mm/s.</p>
<p>Why is PETG supposed to be printed slower than PLA?</p>
<p>Edit: a link to a more recent question may be of interest: <a href="https://3dprinting.stackexchange.com/questions/10173/power-consumption-of-filament-extrusion/10175?noredirect=1#comment30444_10175">Power consumption of filament extrusion</a></p>
|
<p>I'm adding this answer to somewhat challenge the findings of my original answer, and the premise of the question: PETG does not need lower print speeds, and can even be printed at higher speeds than PLA under some conditions due to reduced need for cooling. You can see this from some of the "#speedboatrace" entries printed with PETG. So what was really going on with the original claim and my agreement with it?</p>
<p>I think my original answer is still somewhat true: it's likely that it takes more hotend power to melt PETG at a rate that can be successfully extruded <em>and bonded</em> than to do the same for PLA. But there are other factors at play in the perception that "PETG has to be printed slow".</p>
<p>FarO did not specify details of the printer(s) in question, but I found the big limiting factor for my Ender 3 printing PETG was the stock extruder, which presumably was skipping badly to begin with, and even worse with Linear Advance, trying to keep the filament under high pressure to compensate for its compressibility. Since replacing the extruder with a direct drive one, I've had no problem printing PETG at the same speed as PLA, and both can print much faster than I ever could with the stock bowden extruder.</p>
|
<p>The density of PLA is around 1.25 g/cm³ and the density of PETG is around 1.38 g/cm³. When you're talking about the amount of energy needed to melt a particular <em>volume</em> (which is what your extrusion units are) rather than mass, you need to scale the heat capacities (with units of <span class="math-container">$\frac{\mathrm J}{\mathrm g\cdot \mathrm K}$</span>) by the density to get <span class="math-container">$\frac{\mathrm J}{\mathrm{cm}^3\cdot \mathrm K}$</span>. This brings their volumetric heat capacities somewhat closer: 2.25-2.63 vs 1.52-1.79 (about 47 % higher for PLA rather than your figure of about 62 %), but with PLA still higher.</p>
<p>You also have to account for heat loss to the environment. PLA is typically printed around 200 °C or 210 °C at most; PETG in my experience requires 250 °C to reach low enough viscosity to be printable at any speed. Assuming an ambient 20 °C, the rate of heat loss should be something like 25 % higher for PETG. So the hotend has that much additional energy needed to begin with.</p>
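<p>(Taking roughly 205 °C as a PLA midpoint, the ratio of temperature differences to ambient is <span class="math-container">$\frac{250 - 20}{205 - 20} \approx 1.24$</span>, which is where the "about 25 % higher" heat-loss estimate comes from, all else such as nozzle geometry and airflow being equal.)</p>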
<p>Beside that, PLA is printed at temperatures where it's still extrudable and able to bond even if the temperature drops significantly below the nominal nozzle temperature (down to 180 °C, maybe even slightly lower), whereas PETG has trouble with increased viscosity and poor bonding right away if temperature drops.</p>
<p>Going broader still, PETG seems to need to keep its heat longer after being extruded in order for layers to bond well. (As evidenced by the need to lower fan or turn it off completely.) A slow-moving nozzle both provides heat (from the proximity of the nozzle itself) to slow the cooling, and reduces air flow across the part (by not causing as much air flow itself just by moving).</p>
| 1,633
|
<p>How do I write code where a company icon appears on the left side next to the URL address in the browser's address bar?</p>
|
<p>You are looking for a <a href="https://stackoverflow.com/questions/6642/preferred-way-to-use-favicons#6645">Favicon</a>.</p>
|
<p>load a file on the webserver called <a href="https://stackoverflow.com/favicon.ico">favicon.ico</a></p>
| 3,135
|
<p>I have had a bug recently that only manifested itself when the library was built as a release build rather than a debug build. The library is a .NET dll with a COM wrapper and I am using CoCreateInstance to create a class from the dll in an unmanaged c++ app. When I finally tracked the bug down it was caused by accessing a singleton object. I had the singleton instance declared like so:</p>
<pre><code>private static readonly MyObjectType s_instance = new MyObjectType();
</code></pre>
<p>and then accessed it with:</p>
<pre><code>public static MyObjectType Instance
{
get
{
return s_instance;
}
}
</code></pre>
<p>this was failing. Changing it to:</p>
<pre><code>private static MyObjectType s_instance;
public static MyObjectType Instance
{
get
{
if (s_instance==null)
{
s_instance = new MyObjectType();
}
return s_instance;
}
}
</code></pre>
<p>fixed the issue. Any ideas why the initial usage didn't work and if there are any downsides to doing it either way?</p>
<p>The release dll seemed to be perfectly usable from another managed app.</p>
|
<p>Try adding an (empty) static constructor, or initialize the singleton <em>in</em> a static constructor.</p>
<p>Jon Skeet has a full discussion of singleton patterns <a href="https://csharpindepth.com/articles/Singleton" rel="nofollow noreferrer">here</a>. I'm not sure why it failed, but at a guess it could relate to the <code>beforefieldinit</code> flag. See his 4th example, where he adds a static constructor to tweak this flag. I don't claim to be an expert on <code>beforefieldinit</code>, but this symptom seems to fit some of the symptoms discussed <a href="https://csharpindepth.com/articles/BeforeFieldInit" rel="nofollow noreferrer">here</a>.</p>
|
<p>Just reiterating what Marc Gravell said, but it sounds <em>a lot</em> like a beforefieldinit problem, which means the empty static constructor is your solution. You'd need to post any and all constructors in the class to get a definitive answer.</p>
<p>The second method has the advantage of lazy loading (where that is an advantage).</p>
| 26,794
|
<p>With more and more code pushed to the Views in Asp.Net MVC (i.e. AJAX, JQuery, etc...), how do you maintain the 'testability'?</p>
<ul>
<li>How do you test your Views?</li>
<li>How do you test your views with client-side jscript code?</li>
<li>How do you test your Views with Async behavior?</li>
</ul>
<p>It seems that most examples on the testability of MVC deal with controllers. What about Views?</p>
|
<p><a href="http://selenium.openqa.org/" rel="nofollow noreferrer">Selenium</a> is a great tool for testing the front end of any web app. It is written in the browser's native language, JavaScript. Having the browser run the test framework code gives your tests the ability to expose browser incompatibility issues. It is free and open source.</p>
|
<p>Also see other free browser automation tools like ArtOfTest and WatiN. The Selenium stack can be a little complicated to set up.</p>
| 18,896
|
<p>Ten years ago when I first encountered the <a href="http://en.wikipedia.org/wiki/Capability_Maturity_Model" rel="noreferrer">CMM for software</a> I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels improving their processes.</p>
<p>But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed an adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first hand that their processes are as chaotic, and the quality of their software products as varied, as other, non-CMM businesses.</p>
<p>So I'm wondering, has anyone seen a real, tangible benefit from adherence to process improvement according to CMM?</p>
<p>And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as <a href="http://en.wikipedia.org/wiki/Six_Sigma" rel="noreferrer">six-sigma</a>) have been equally or more beneficial?</p>
<p>Does anyone still believe?</p>
<p>As an aside, for those who haven't yet seen it, check out this funny-because-its-true <a href="http://en.wikipedia.org/wiki/Capability_Immaturity_Model" rel="noreferrer">parody</a></p>
|
<p>At the heart of the matter lies this problem, neatly described by the CMM guidance itself...</p>
<p>“<em>...Sound judgment is necessary to use the CMM correctly and with insight. Intelligence, experience and knowledge must shape an appropriate interpretation of the CMM in a specific environment. That interpretation should be based on the business needs and objectives of the organization and the projects. A rote, checklist-oriented application of the CMM has the potential to harm an organization rather than help it...</em>”</p>
<p>From Page 14, section 1.6 of <em>The Capability Maturity Model, Guidelines for Improving the Software Process</em> by the Carnegie Mellon University Software Engineering Institute, ISBN 0-201-54664-7.</p>
|
<p>At school, I was taught: CMM is a good Idea, but lacking certification (anyone can say they are level 5 / level 4) it ends up being a marketing tool for offshore shops. So, yeah, the idea is sound, but how do you prove adherence?</p>
| 9,106
|
<p><strong>For a particular segment of Java code, I'd like to measure:</strong></p>
<ul>
<li><strong>Execution time (most likely <em>thread execution time</em>)</strong></li>
<li><strong>Memory usage</strong></li>
<li><strong>CPU load (specifically attributable to the code segment)</strong></li>
</ul>
<p>I'm a relative Java novice and am not familiar with how this might be achieved. I've been referred to <a href="http://en.wikipedia.org/wiki/JMX" rel="nofollow noreferrer">JMX</a>, however I'm not sure how that might be used, and JMX looks a bit 'heavy' for what I'm looking to do.</p>
<p>Ideally I'd like some measurement class that can be told what I would like to measure, with the option of calling a <code>start()</code> method prior to a code segment and a <code>stop()</code> method after. Relevant metrics would be logged to a file I specify.</p>
<p>For example:</p>
<pre><code>import com.example.metricLogger;
metricLogger logger = new metricLogger();
logger.setLogPath(pathToLogFile);
logger.monitor(executionTime);
logger.monitor(memoryUsage);
logger.monitor(cpuLoad);
logger.start();
/* Code to be measured */
logger.stop();
</code></pre>
<p><strong>Is there any standard/common/conventional way of achieving this in Java?</strong></p>
<p>Such measurements are for one-off performance comparisons, and so I'm not looking for any in-production long-term monitoring processes.</p>
<p>I'm more than happy to be referred to tutorials or external examples and don't expect a full answer here. That said, if anything as simple as the above can be achieved a realistic example would go down really well.</p>
|
<p>Profiling may be an easier option since you don't require in-production stats. Profiling also doesn't require code modification. VisualVM (which ships w/ the JDK 1.6.06+) is a simple tool. If you want something more in-depth I'd go with Eclipse TPTP, Netbeans profiler, or JProfiler(pay).</p>
<p>If you want to write your own, consider the following:</p>
<p>Simple measurments like execution time can be done by "clocking" the section you're interested in:</p>
<pre><code>long start = System.nanoTime(); // requires java 1.5
// Segment to monitor
double elapsedTimeInSec = (System.nanoTime() - start) * 1.0e-9;
</code></pre>
<p>You can use a similar technique to monitor memory via Runtime.getRuntime().*memory() methods. Keep in mind that tracking memory usage in a garbage collected environment is trickier than simple subtraction.</p>
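<p>For example, a very rough before/after measurement (rough because the garbage collector can run at any point in between, and <code>gc()</code> is only a request):</p>
<pre><code>Runtime rt = Runtime.getRuntime();
rt.gc();                                            // ask for a collection to reduce noise
long before = rt.totalMemory() - rt.freeMemory();

// Segment to monitor

long usedBytes = (rt.totalMemory() - rt.freeMemory()) - before;
</code></pre>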
<p>CPU load is hard to measure in Java; I typically stick with execution time and optimize the longer / repetitive sections</p>
|
<p>We can measure the CPU and memory used during a specific invoked method by collecting the CPU and memory metrics during its execution.<br>
Of course, if other concurrent threads for other methods consume memory and CPU during its execution, you are stuck. So it is a valid approach only while you are able to execute the method in an isolated way. </p>
<p>For the CPU you can get its current value : </p>
<pre><code>// Note: getProcessCpuLoad() lives on com.sun.management.OperatingSystemMXBean (JDK 7+),
// so that is the interface to import, not java.lang.management.OperatingSystemMXBean.
OperatingSystemMXBean osBean = ManagementFactory.getPlatformMXBean(
        OperatingSystemMXBean.class);
double processCpuLoad = osBean.getProcessCpuLoad();
</code></pre>
<p>For the memory you can do that : </p>
<pre><code>MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
int currentHeapUsedInMo = (int) (memoryMXBean.getHeapMemoryUsage().getUsed() / 1_000_000);
</code></pre>
<p>About the memory measure, waiting for a major collect before executing the method improves its reliability. </p>
<p>For example something like that may help : </p>
<pre><code>import com.google.common.testing.GcFinalization;
GcFinalization.awaitFullGc();
foo.execute(); // method to execute
</code></pre>
<p><code>GcFinalization</code> comes from the <a href="https://static.javadoc.io/com.google.guava/guava-testlib/23.0/com/google/common/testing/GcFinalization.html" rel="nofollow noreferrer">Guava test library</a>. </p>
<p>All that has few overheads. So the idea is collecting metrics (for example each second) for each invoked method you want to monitor and when the method returned, compute the max/average or any useful information for them. </p>
<p>I would favor AOP to do that.<br>
Spring AOP is a simple and good way to create aspects and set pointcuts for them but you can also do it with AspectJ if you need some particular things in terms of AOP features. </p>
| 38,769
|
<p>I maintain several old MFC applications using Visual Studio 7 and I was considering upgrading to Visual Studio 2008. After reading <a href="https://stackoverflow.com/questions/86562/what-is-missing-in-the-visual-studio-express-editions#86614">this question</a>, I realise that the Express edition will not be able to do this.</p>
<p>Does anyone know if I will be able to compile old MFC apps with VS2008 Standard edition or will I need to get the more expensive Professional edition ?</p>
|
<p>Packaging GTK and its dependencies on Windows is a full-time project in itself. Qt is much more easily distributed since it has no dependencies that do not come with Windows.</p>
<p>Qt has been cross platform from the start. GTK has not always been cross platform. Such fundamental decisions shape the overall design, and should be made before any code is written in my opinion.</p>
<p>I would comment more, but I'd be drifting into speculation, the above two things I know for certain.</p>
|
<p>I recommend to use Qt because:</p>
<ul>
<li>It's cross-platform and covers a wide range of operating systems (including mobile)</li>
<li>It is open source and getting better quickly</li>
<li>It has a nice GUI designer and a very capable IDE (Qt Creator)</li>
<li>The API design is excellent and easy to use</li>
<li>It has great documentation which is easy to read</li>
<li>It has the Qt translation system, which enables you to have a multilingual app</li>
<li>The GUI layout system, where the widgets resize themselves according to a layout, makes everything much easier</li>
<li>QML gives you the power to create fantastic GUIs with great graphics and animations</li>
<li>It has great support for networking and connectivity (sockets, SSL, www, IPC, ...)</li>
<li>It has QTestLib for testing the code</li>
<li>It has many language bindings if you don't want to use C++</li>
</ul>
| 24,331
|
<p>What is the difference between Obfuscation, Hashing, and Encryption?</p>
<p>Here is my understanding: </p>
<ul>
<li>Hashing is a one-way algorithm; cannot be reversed </li>
<li>Obfuscation is similar to encryption but doesn't require any "secret" to understand (ROT13 is one example) </li>
<li>Encryption is reversible but a "secret" is required to do so</li>
</ul>
|
<p>Hashing is a technique of creating semi-unique keys based on larger pieces of data. In a given hash you will eventually have "collisions" (e.g. two different pieces of data calculating to the same hash value) and when you do, you typically create a larger hash key size.</p>
<p>obfuscation generally involves trying to remove helpful clues (i.e. meaningful variable/function names), removing whitespace to make things hard to read, and generally doing things in convoluted ways to make following what's going on difficult. It provides no serious level of security like "true" encryption would.</p>
<p>Encryption can follow several models, one of which is the "secret" method, called private key encryption, where both parties have a secret key. Public key encryption uses a publicly shared key to encrypt and a private recipient key to decrypt. With public key encryption, only the recipient needs to keep a secret.</p>
|
<p>All fine, except obfuscation is not really similar to encryption - sometimes it doesn't even involve ciphers as simple as ROT13.</p>
| 22,291
|
<p>When hosting a WCF service on IIS you have an option of manually configuring the endpoint or declaratively by means of WebServiceHostFactory. It doesn't seem to be that difficult to manually create the endpoint so I figured I would ask.</p>
<ul>
<li>What are the benefits of using WebServiceHostFactory? </li>
<li>Are there any performance implications to dynamically creating the endpoints?</li>
</ul>
|
<p>Can you clarify : are you asking specifically about <strong>Web</strong>ServiceHostFactory (emph: "Web")? Or just the difference between IIS hosting it vs starting your own server through code?</p>
<p>WebServiceHostFactory is new in .NET 3.5, and supports some of the newer AJAX/JSON stuff.</p>
<p>Actually, within IIS (using .svc), you are already using a ServiceHostFactory - simply the default one shipped with WCF. You can write your own factory if you want, and I've done this in the past to create a factory that <strong>only</strong> listens on https (I had an issue on a farm hosting multiple sites, where it couldn't identify the correct site for http, but https was fine - so I completely disabled http via the factory).</p>
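<p>Writing such a factory is just a small subclass. The following is a sketch of the "https only" idea described above (a guess at the shape of it, not the exact code that was used); you would reference it from the .svc file's <code>Factory</code> attribute:</p>
<pre><code>using System;
using System.ServiceModel;
using System.ServiceModel.Activation;

public class HttpsOnlyServiceHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        // Keep only the https base addresses that IIS handed us.
        Uri[] httpsOnly = Array.FindAll(baseAddresses,
            uri => uri.Scheme == Uri.UriSchemeHttps);
        return base.CreateServiceHost(serviceType, httpsOnly);
    }
}
</code></pre>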
<p>Performance shouldn't be any different as long as you don't go mad and listen on 200 end-points...</p>
<p>Generally, manually creating the server is used when you are hosting the server in (for example) a windows service. IIS is fine for some things, but app-pools get recycled, so aren't ideal for a server that needs to retain long-lived state. IIS has the advantage of being much easier to configure, especially with security (SSL etc) and compression.</p>
|
<p>I am definitely not an expert (yet), but cons that come to mind are:</p>
<ul>
<li>you can only have one authentication method at a time (i.e. not both Windows and anonymous), cf. <a href="https://stackoverflow.com/questions/575021/webservicehostfactory-and-iis-authentication">WebServiceHostFactory and IIS authentication</a></li>
<li>Error handling is hard to do in a generic way (no Application_OnError, so you'll have to set up your endpoints manually after all)</li>
</ul>
<p>Pro:</p>
<ul>
<li>effortless setup of REST services from scratch.</li>
</ul>
| 32,559
|
<p>I want to write an <code>onClick</code> event which submits a form several times, iterating through selected items in a multi-select field, submitting once for each. </p>
<p><strong>How do I code the loop?</strong></p>
<p>I'm working in Ruby on Rails and using <code>remote_function()</code> to generate the JavaScript for the ajax call.</p>
|
<p>My quick answer (as I've not coded it yet) would be to create another function that creates a POST using XMLHTTPRequest and the specific parameters for a single call. Then inside your onClick() handler call that function as you loop through your selected items.</p>
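<p>A rough sketch of that idea in plain JavaScript; the element id, URL and parameter name are placeholders, and in the Rails setup from the question the equivalent call would normally be generated by <code>remote_function()</code>:</p>
<pre><code>function submitPerSelection() {
    var select = document.getElementById('items');    // the multi-select field
    for (var i = 0; i < select.options.length; i++) {
        if (!select.options[i].selected) continue;
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/calculations', true);       // placeholder URL
        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        xhr.send('item_id=' + encodeURIComponent(select.options[i].value));
    }
}
</code></pre>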
<p>I would suggest that you do a Proof of Concept just using a dummy HTML page and javascript and then try to figure out how to get it to work in RoR.</p>
<p>Also, why are you attempting to make the multiple calls from the browser as opposed to handling the looping conditions in the RoR controller?</p>
|
<p>Unless you're modifying the browser DOM, I can't think of a reason that you would want to do this. (But without knowing fully what you're trying to do, I could be wrong in this case =)</p>
<p>You should be able to send back data from mulitple objects (even nested complex objects in your form) in just one POST.</p>
<p>Chances are the rails code will be a lot less complex, easier to write (and easier to debug!) than any javascript you come up with. </p>
<p>If you need to update different parts of the page depending on what the user has selected, you can still make multiple updates to the DOM via RJS in your render :update block, so that shouldn't be an issue.</p>
<p>You'll also have the (large) benefit of only one server round-trip instead of the multiple trips you would need using multiple POSTS.</p>
| 8,656
|
<p>I've got a client-side JavaScript function which is triggered on a button click (basically, it's a calculator!). Sometimes, due to the enormous amount of data on the page, the JavaScript calculator function takes too long and makes the page appear inactive to the user. I was planning to display a transparent div over the entire page, maybe with a busy indicator (in the center), until the calculator function ends, so that the user waits till the process ends. </p>
<pre>
function CalculateAmountOnClick() {
// Display transparent div
// MY time consuming loop!
{
}
// Remove transparent div
}
</pre>
<p>Any ideas on how to go about this? Should I assign a CSS class to a div (which surrounds my entire page's content) using JavaScript when my calculator function starts? I tried that but didn't get the desired results. I was facing issues with transparency in IE 6. Also, how will I show a loading message + image in such a transparent div?</p>
<p>TIA</p>
|
<p>Javacript to show a curtain:</p>
<pre><code>function CalculateAmountOnClick () {
var curtain = document.body.appendChild( document.createElement('div') );
curtain.id = "curtain";
curtain.onkeypress = curtain.onclick = function(){ return false; }
try {
// your operations
}
finally {
curtain.parentNode.removeChild( curtain );
}
}
</code></pre>
<p>Your CSS:</p>
<pre><code>#curtain {
position: fixed;
_position: absolute;
z-index: 99;
left: 0;
top: 0;
width: 100%;
height: 100%;
_height: expression(document.body.offsetHeight + "px");
background: url(curtain.png);
_background: url(curtain.gif);
}
</code></pre>
<p>(Move MSIE 6 underscore hacks to conditionally included files as desired.)</p>
<p>You could set this up as add/remove functions for the curtain, or as a wrapper:</p>
<pre><code>function modalProcess( callback ) {
var ret;
var curtain = document.body.appendChild( document.createElement('div') );
curtain.id = "curtain";
curtain.onkeypress = curtain.onclick = function(){ return false; }
try {
ret = callback();
}
finally {
curtain.parentNode.removeChild( curtain );
}
return ret;
}
</code></pre>
<p>Which you could then call like this:</p>
<pre><code>var result = modalProcess(function(){
// your operations here
});
</code></pre>
|
<p>In addition to all of the above, don't forget to put an invisible iframe behind the shim, so that it shows up above select boxes in IE.</p>
<p>Edit:
This site, although it provides a solution to a more complex problem, does cover creating a modal background.
<a href="http://www.codeproject.com/KB/aspnet/ModalDialogV2.aspx" rel="nofollow noreferrer">http://www.codeproject.com/KB/aspnet/ModalDialogV2.aspx</a></p>
| 25,192
|
<p>I'm executing several discrete queries in a single batch against SQL Server. For example:</p>
<pre>
update tableX set colA = 'freedom';
select lastName from customers;
insert into tableY (a,b,c) values (x,y,z);
</pre>
<p>Now, I want to capture the result in a DataSet (from select statement) which is easy enough to do...but how do I also capture the "meta" response from that command similar to the way Query Analyzer/SQL Mgt Studio does when it displays the "Messages" tab and diplays something similar to:</p>
<pre>
(1 Row affected)
(2 Rows Updated)
</pre>
|
<p>look into SQL Connection events. I think that's what you're after:
<a href="http://msdn.microsoft.com/en-us/library/a0hee08w.aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/a0hee08w.aspx</a></p>
|
<p>Nick is right to suggest <code>@@ROWCOUNT</code> - in fact, as a matter of routine I always use <code>SET NOCOUNT ON</code>, which has a (small) performance benefit - but more importantly, this detail is an implementation detail - so you code shouldn't care...</p>
<p>If you want to return a value (such as number of rows updated), use any of:</p>
<ul>
<li>return value</li>
<li>output parameter</li>
<li>SELECT statement</li>
</ul>
<p>The last has the most overhead</p>
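<p>For example, with <code>SET NOCOUNT ON</code> you can still hand the counts back explicitly where you actually want them, as an ordinary result set the client reads like any other:</p>
<pre><code>SET NOCOUNT ON;

UPDATE tableX SET colA = 'freedom';
SELECT @@ROWCOUNT AS rowsUpdated;   -- number of rows the UPDATE touched

SELECT lastName FROM customers;
</code></pre>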
| 38,423
|
<p>So I've seen some very good design software, but almost all of it is very expensive. I'm just wondering if there's a good cheap design software out there.</p>
|
<p>Try Fusion 360. It's free for educators, students, enthusiasts and start-ups. It's not 100% intuitive, but once you learn the basics, it probably has all the facilities that you will ever need for mechanical design.</p>
|
<p>Try Fusion 360. It's free for educators, students, enthusiasts and start-ups. It's not 100% intuitive, but once you learn the basics, it probably has all the facilities that you will ever need for mechanical design.</p>
| 707
|
<p>I have a <code>QDirModel</code> whose current directory is set. Then I have a <code>QListView</code> which is supposed to show the files in that directory. This works fine.</p>
<p>Now I want to limit the files shown, so it only shows <em>png</em> files (the filename ends with .png). The problem is that using a <code>QSortFilterProxyModel</code> and setting the filter regexp will try to match every parent of the files as well. According to the documentation:</p>
<blockquote>
<p>For hierarchical models, the filter is
applied recursively to all children.
If a parent item doesn't match the
filter, none of its children will be
shown.</p>
</blockquote>
<p>So, how do I get the <code>QSortFilterProxyModel</code> to only filter the files in the directory, and not the directories it resides in?</p>
|
<p>We ran into something similar where I work, and ended up making our own proxy model to do our filtering. However, looking through the documentation for what you want (which seems like it would be a more common case), I came across two possibilities.</p>
<ol>
<li>You might be able to set a name filter on the QDirModel and filter things that way. I have no idea if this will work like you want, or if the name filters apply to directories also. The documentation is kind of sparse on these.</li>
<li>Subclass the QSortFilterProxyModel and override the <code>filterAcceptsRow</code> function. From the documentation:</li>
</ol>
<blockquote>
<p>Custom filtering behavior can be achieved by reimplementing the filterAcceptsRow() and filterAcceptsColumn() functions.</p>
</blockquote>
<p>Then you could presumably use the model index to check if the index item is a directory (automatically accept) or a file (filter on filename).</p>
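<p>A sketch of that second option, assuming the source model really is the <code>QDirModel</code> from the question (the class name and ".png" filter are made up for illustration):</p>
<pre><code>#include <QSortFilterProxyModel>
#include <QDirModel>

class PngFilterProxyModel : public QSortFilterProxyModel
{
protected:
    bool filterAcceptsRow(int sourceRow, const QModelIndex &sourceParent) const
    {
        QDirModel *dirModel = qobject_cast<QDirModel *>(sourceModel());
        QModelIndex index = dirModel->index(sourceRow, 0, sourceParent);
        if (dirModel->isDir(index))
            return true;   // accept directories so their children remain reachable
        return dirModel->fileName(index).endsWith(".png", Qt::CaseInsensitive);
    }
};
</code></pre>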
|
<p>Just use <a href="http://api.kde.org/frameworks-api/frameworks5-apidocs/kitemmodels/html/classKRecursiveFilterProxyModel.html" rel="nofollow">KRecursiveFilterProxyModel</a> model from the <a href="http://api.kde.org/frameworks-api/frameworks5-apidocs/kitemmodels/html/index.html" rel="nofollow">KItemModels</a> KDE API</p>
| 31,232
|
<p>I want to display some WPF elements near to the selected item of a ListView. How can I obtain the coordinates (screen or relative) of the selected ListViewItem? </p>
<pre><code><ListView
x:Name="TechSchoolListView"
ClipToBounds="False"
Width="Auto" Height="Auto"
HorizontalContentAlignment="Stretch"
VerticalContentAlignment="Top"
ItemTemplate="{DynamicResource TechSchoolDataTemplate}"
ItemsSource="{Binding Path=TechSchoolResearchList, Mode=Default}"
SelectedIndex="1"
SelectedValue="{Binding Path=SelectedTechSchool, Mode=Default}"
SelectionChanged="TechSchoolList_SelectionChanged"
ItemContainerStyle="{DynamicResource TechSchoolItemContainerStyle}"
ScrollViewer.CanContentScroll="False"
ScrollViewer.VerticalScrollBarVisibility="Disabled" >
<ListView.Background>
<SolidColorBrush Color="{DynamicResource PanelBackgroundColor}"/>
</ListView.Background>
</ListView>
</code></pre>
|
<p>You should use <a href="http://msdn.microsoft.com/en-us/library/aa346420.aspx" rel="nofollow noreferrer">ContainerFromElement</a> to get the item's container, which is a visual and from there you can get the coordinates. You can't express this in XAML, however. You need to do it in code, on one of the ListView events, raised when the selected item is changed. Btw, keep in mind that the item can be its own container.</p>
<p>You can't do this in XAML, as there's no attached property on the item that shows the item is selected. (though I haven't played with WPF in a while, so that might have changed)</p>
|
<p>Although Franci Penov's answer is correct, I would like to give a code sample to show how what he described worked for me.</p>
<pre><code>ListView listView = sender as ListView;
UIElement selectedContainer = (UIElement)listView.ItemContainerGenerator
    .ContainerFromIndex(listView.SelectedIndex);
Point startPoint = selectedContainer.PointToScreen(new Point(0, 0));
</code></pre>
| 26,996
|
<p>This is really annoying, we've switched our client downloads page to a different site and want to send a link out with our installer. When the link is created and overwrites the existing file, the metadata in windows XP still points to the same place even though the contents of the .url shows the correct address. I can change that URL property to google.com and it points to the same place when I copy over the file. </p>
<pre>
[InternetShortcut]
URL=https://www.xxxx.com/?goto=clientlogon.php
IDList=
HotKey=0
</pre>
<p>It works if we rename our link .url file. But we expect that the directory will be reused and that would result in one bad link and one good link which is more confusing than it is cool. </p>
|
<p>Take a look at here: <a href="http://www.cyanwerks.com/file-format-url.html" rel="nofollow noreferrer">http://www.cyanwerks.com/file-format-url.html</a></p>
<p>It explains there's a Modified field you can add to the .url file. It also explains how to interpret it.</p>
|
<p>.URL files are weird (are they documented anywhere?)</p>
<p>Mine look like this and I don't seem to have that problem (maybe because of the Modified entry?)</p>
<pre><code>[DEFAULT]
BASEURL=http://www.xxxx.com/Help
[InternetShortcut]
URL=http://www.xxxx.com/Help
Modified=60D0EDADF1CAC5014B
</code></pre>
| 17,062
|
<p>I'm a bit new to jQuery and hope somebody can help me out.</p>
<p>I'm trying to change an element (li) to another element (div) after the (li) has been dropped.</p>
<p>Sample code:</p>
<pre><code>$("#inputEl>li").draggable({
revert: true,
opacity: 0.4,
helper: "clone"
});
$("#dropEl")
.droppable({
accept: ".drag",
hoverClass: "dropElhover",
drop: function(ev, ui) {
// change the li element to div here
}
});
</code></pre>
<p>The problem is, when i use</p>
<pre><code>drop: function(ev, ui) {
$(ui.draggable).replaceWith("<div>Some content</div>");
}
</code></pre>
<p>the original draggable elements will be disabled when the function above is triggered.</p>
<p>I'm using the latest jQuery and jQuery UI stable versions.</p>
|
<p>So, what you want is to keep your original list intact and drop list items into dropEl?
How about this:</p>
<pre><code>drop: function(ev,ui) {
$(this).append("<div>Some content</div>");
}
</code></pre>
<p>Or, if you want to replace the list elements with a div element and also have the div element draggable, you could try this:</p>
<pre><code>drop: function(ev, ui) {
$(ui.draggable).replaceWith("<div>Some content</div>");
$("#inputEl>div").draggable({
revert: true,
opacity: 0.4,
helper: "clone"
});
}
</code></pre>
<p>The original draggable call only makes items draggable at the time it is called. If you change or add elements and want them to be draggable, you will need to call the draggable() function again.</p>
|
<p>Thanks for the reply.</p>
<p>Your first code worked, and plus I can also sort the divs in the droppable like this:</p>
<pre><code> drop: function(ev, ui) {
$(this).append("<div>Some content</div>");
$("#dropEl").sortable();
}
</code></pre>
<p><strike>Now the problem is how do I know which list is which once I have changed it to divs?</strike></p>
<p>I use the following code to get each element id:</p>
<pre><code>drop: function(ev, ui) {
revert: true;
var this_id = $(ui.draggable).attr("id");
$(this).append('<div id="'+this_id+'">Some content</div>');
$("#dropEl").sortable();
}
</code></pre>
| 36,968
|
<p>I am a beginner in Python and have a question that is very confusing for me.
If I define a function first, but within that function I have to use a value which is produced by another function defined below it, can I do it like this? Or how can I use the return value of another function inside a function?
For example:</p>
<pre><code>def hello(x,y):
    good=hi(iy,ix)
    "then do somethings,and use the parameter'good'."
    return something

def hi(iy,ix):
    "code"
    return good
</code></pre>
|
<p>The scope of functions <code>hello</code> and <code>hi</code> are entirely different. They do not have any variables in common.</p>
<p>Note that the result of calling <code>hi(x,y)</code> is some object. You save that object with the name <code>good</code> in the function <code>hello</code>.</p>
<p>The variable named <code>good</code> in <code>hello</code> is a different variable, unrelated to the variable named <code>good</code> in the function <code>hi</code>.</p>
<p>They're spelled the same, but the exist in different namespaces. To prove this, change the spelling the <code>good</code> variable in one of the two functions, you'll see that things still work.</p>
<hr>
<p>Edit. Follow-up: "so what should i do if i want use the result of <code>hi</code> function in <code>hello</code> function?"</p>
<p>Nothing unusual. Look at <code>hello</code> closely.</p>
<pre><code>def hello(x,y):
    fordf150 = hi(y,x)
    "then do somethings,and use the variable 'fordf150'."
    return something

def hi( ix, iy ):
    "compute some value, good."
    return good
</code></pre>
<p>Some script evaluates <code>hello( 2, 3)</code>.</p>
<ol>
<li><p>Python creates a new namespace for the evaluation of <code>hello</code>.</p></li>
<li><p>In <code>hello</code>, <code>x</code> is bound to the object <code>2</code>. Binding is done position order.</p></li>
<li><p>In <code>hello</code>, <code>y</code> is bound to the object <code>3</code>.</p></li>
<li><p>In <code>hello</code>, Python evaluates the first statement, <code>fordf150 = hi( y, x )</code>, <code>y</code> is 3, <code>x</code> is 2.</p>
<p>a. Python creates a new namespace for the evaluation of <code>hi</code>.</p>
<p>b. In <code>hi</code>, <code>ix</code> is bound to the object <code>3</code>. Binding is done position order.</p>
<p>c. In <code>hi</code>, <code>iy</code> is bound to the object <code>2</code>.</p>
<p>d. In <code>hi</code>, something happens and <code>good</code> is bound to some object, say <code>3.1415926</code>.</p>
<p>e. In <code>hi</code>, a <code>return</code> is executed; identifying an object as the value for <code>hi</code>. In this case, the object is named by <code>good</code> and is the object <code>3.1415926</code>.</p>
<p>f. The <code>hi</code> namespace is discarded. <code>good</code>, <code>ix</code> and <code>iy</code> vanish. The object (<code>3.1415926</code>), however, remains as the value of evaluating <code>hi</code>.</p></li>
<li><p>In <code>hello</code>, Python finishes the first statement, <code>fordf150 = hi( y, x )</code>, <code>y</code> is 3, <code>x</code> is 2. The value of <code>hi</code> is <code>3.1415926</code>.</p>
<p>a. <code>fordf150</code> is bound to the object created by evaluating <code>hi</code>, <code>3.1415926</code>.</p></li>
<li><p>In <code>hello</code>, Python moves on to other statements.</p></li>
<li><p>At some point <code>something</code> is bound to an object, say, <code>2.718281828459045</code>.</p></li>
<li><p>In <code>hello</code>, a <code>return</code> is executed; identifying an object as the value for <code>hello</code>. In this case, the object is named by <code>something</code> and is the object <code>2.718281828459045</code>.</p></li>
<li><p>The namespace is discarded. <code>fordf150</code> and <code>something</code> vanish, as do <code>x</code> and <code>y</code>. The object (<code>2.718281828459045</code>), however, remains as the value of evaluating <code>hello</code>.</p></li>
</ol>
<p>Whatever program or script called <code>hello</code> gets the answer.</p>
|
<p>The "hello" function doesn't mind you calling the "hi" function which is hasn't been defined yet, provided you don't try to actually use the "hello" function until after the both functions have been defined.</p>
| 46,658
|
<p>I realize that since UNIX sockets are platform-specific, there has to be some non-Java code involved. Specifically, we're interested in using JDBC to connect to a MySQL instance which only has UNIX domain sockets enabled. </p>
<p>It doesn't look like this is supported, but from what I've read it should be at least possible to write a SocketFactory for JDBC based on UNIX sockets <em>if</em> we can find a decent implementation of UNIX sockets for Java. </p>
<p>Has anyone tried this? Does anyone know of such an implementation?</p>
|
<p>Check out the JUDS library. It is a Java Unix domain socket library...</p>
<p><a href="https://github.com/mcfunley/juds" rel="noreferrer">https://github.com/mcfunley/juds</a></p>
|
<p>Some searching on the internet has uncovered the following useful-looking library:</p>
<p><a href="http://www.nfrese.net/software/gnu_net_local/overview.html" rel="nofollow noreferrer">http://www.nfrese.net/software/gnu_net_local/overview.html</a></p>
<p><a href="http://web.archive.org/web/20080820110115/http://www.nfrese.net/software/gnu_net_local/doc/javadoc/index.html" rel="nofollow noreferrer">Wayback Link</a></p>
<p>Writing a socket factory should be easy enough. Once you've done so, you can pass it to your driver <a href="http://mysql.telepac.pt/doc/refman/5.0/en/mxj-driver-launched.html" rel="nofollow noreferrer">THUSLY</a>.(<a href="http://web.archive.org/web/20080820110115/http://www.nfrese.net/software/gnu_net_local/doc/javadoc/index.html" rel="nofollow noreferrer">Wayback Link</a>).</p>
<pre><code>import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import com.mysql.management.driverlaunched.ServerLauncherSocketFactory;
public class ConnectorMXJTestExample {
public static void main(String[] args) throws Exception {
String hostColonPort = "localhost:3336";
String driver = com.mysql.jdbc.Driver.class.getName();
String url = "jdbc:mysql://" + hostColonPort + "/" + "?"
+ "socketFactory="
+ ServerLauncherSocketFactory.class.getName();
String userName = "root";
String password = "";
Class.forName(driver);
Connection conn = null;
try {
conn = DriverManager.getConnection(url, userName, password);
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT VERSION()");
rs.next();
String version = rs.getString(1);
rs.close();
stmt.close();
System.out.println("------------------------");
System.out.println(version);
System.out.println("------------------------");
} finally {
try {
conn.close();
} catch (Exception e) {
e.printStackTrace();
}
ServerLauncherSocketFactory.shutdown(hostColonPort);
}
}
}
</code></pre>
| 20,580
|
<p>Is there a way to dump an entire assemblies source to files using reflector?</p>
|
<p>Yes, right click on the assembly and choose Export...</p>
|
<p>Yes, right click on the assembly and choose Export...</p>
| 33,749
|
<p>As far as I know, there's no way to use {% include %} within a dynamic JS file to include styles. But I don't want to have to make another call to the server to download styles. </p>
<p>Perhaps it would be possible by taking a stylesheet and injecting it into the head element of the document...has anyone done this before? </p>
|
<p>In your JS file:</p>
<pre><code>var style = document.createElement('link');
style.setAttribute('rel', 'stylesheet');
style.setAttribute('type', 'text/css');
style.setAttribute('href', 'style.css');
document.getElementsByTagName('head')[0].appendChild(style);
</code></pre>
<p>Hope that helps.</p>
|
<p>I can envision cases where you'd want to dynamically generate JS or CSS, but generally you're better off creating static files for each and making your code general enough to fulfill all your needs.</p>
<p>This goes beyond a simple matter of code reuse - if you're dynamically generating any of this, it will need to be re-downloaded each time it's used. You're wasting CPU time rendering the templates, and wasting bandwidth sending the same (or potentially the same) data over the wire over and over.</p>
<p>But if you have a good use case for meta-coding, there's no reason why you can't either:</p>
<p>a) put the JS or CSS in the header (or body, in the case of JS) of your rendered template
b) create a view for the JS or CSS, and use Django's template engine to render them.</p>
<p>The {% include %} tag will work fine for (a), and for (b) you'd just use normal HTML to reference the URL of your view.</p>
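<p>A rough sketch of option (b), with hypothetical names throughout (the template name, view name, and context values are made up):</p>
<pre><code># views.py -- illustrative sketch, not from the question
from django.shortcuts import render

def dynamic_css(request):
    # Render a CSS template through Django's template engine and
    # serve it with the correct content type.
    return render(request, "dynamic.css", {"accent": "#336699"},
                  content_type="text/css")
</code></pre>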
| 49,766
|
<p>I am currently working on the authentication of an AJAX based site, and was wondering if anybody had any recommendations on best practices for this sort of thing.</p>
<p>My original approach was a cookie based system. Essentially I set a cookie with an auth code, and every data access changed the cookie. As well, whenever there was a failed authentication, all sessions by that user were de-authenticated, to keep hijackers out. To hijack a session, somebody would have to leave themselves logged in, and a hacker would need to have the very last cookie update sent to spoof a session.</p>
<p>Unfortunately, due to the nature of AJAX, when making multiple requests quickly, they might come back out of order, setting the cookie wrong and breaking the session, so I need to reimplement.</p>
<p>My ideas were: </p>
<ul>
<li>A decidedly less secure session based method</li>
<li>using SSL over the whole site (seems like overkill)</li>
<li>Using an iFrame which is ssl authenticated to do secure transactions (I just sorta assume this is possible, with a little bit of jquery hacking)</li>
</ul>
<p>The issue is not the data being transferred, the only concern is that somebody might get control over an account that is not theirs.</p>
|
<p>Personally, I have not found using SSL for the entire site (or most of it) to be overkill. Maybe a while ago, when speeds and feeds were slower. Now I wouldn't hesitate to put any part of a site under SSL.</p>
<p>If you've decided that using SSL for the entire site is acceptable, you might consider just using the old "Basic Authentication" where the server returns the <em>401</em> response which causes the browser to prompt for username/password. If your application can live with this type of login, it works great for AJAX and all other accesses to your site because the browser handles re-submitting requests with appropriate credentials (and it is safe if you use SSL, but <b>only</b> if you use SSL -- don't use Basic auth with plain http!).</p>
|
<p>What if you put a "generated" timestamp on each of the responses from the server and the AJAX application could always use the cookie with the latest timestamp.</p>
| 14,470
|
<p>I wonder what's the best deployment directory for Rails apps? Some developers use directories such as <code>/u/apps/#{appname}</code>. Are there <strong>any</strong> advantages when using <code>/u/apps/#{appname}</code> instead of <code>/var/www/#{appname}</code> or other OS default directories?</p>
<p>Obviously I want to pick the directory with the best security properties and the least friction for setting up the server environment.</p>
<p>How do you deploy your Rails apps? Why are you using a specific directory? Do you think it really matters anyway?</p>
|
<p>As other people have said, it really doesn't matter where you keep your applications - the thing that does matter is that you're consistent about it, so that whichever server you're on, its just a case of going to the usual location.</p>
<p>I think the only reason people use /u/apps/#{appname} is that it's Capistrano's default setting - certainly it seems odd to me doing things that way.</p>
|
<p>Like the other posters I think you should just put them wherever feels most natural. Read <code>man hier</code> if you'd like to see what directories in the standard UNIX hierarchy are meant for. I like putting things somewhere logical under <code>/var</code></p>
<p>Another very important consideration is that you should never put your Rails application directory somewhere where <code>RAILS_ROOT</code> will be accessible on the web. So sticking an entire Rails application in the subdirectory of a regular site is a big no-no.</p>
| 20,096
|
<p>Here's the scenario: </p>
<p>A C# Windows Application project stored in SVN is used to create an executable. Normally, a build server handles the build process and creates builds at regular intervals which are used by testing. In this particular instance I was asked to modify a specific build and create the executable. </p>
<p>I'm not entirely sure if the build server modifies the project files, but I know it creates a tag in SVN of the source code it used to compile the executables. Using that tag I've checked out the code on a second machine, which is a development machine. I then compiled the source on the development machine.</p>
<p>When executed, the application that was compiled on the development machine does not function exactly like the one compiled by the build server. For example, on the testing machines a DateTime parse exception is detected by the application. However, the build machine's executable does not throw any exceptions. If I run the executable on the development machine no exceptions are thrown.</p>
<p>So in summary, both machines are theoretically using the same source code and projects.<br>
The development machine's executable only works on the dev machine. The Build machine's executable works on every machine, including the dev machine.</p>
<p>Are the machine's Regional Settings or Time Zone stored in the compiled executable? Any idea what might cause this behaviour or how to check the executables to find the possible differences and correct them?</p>
<p>Unfortunately, I cannot take a testing machine and attach a debugger to it. As soon as I can I will.</p>
|
<p>The app uses the Regional Settings of the machine it's running on, and it looks like it is your problem. You can force a thread to use a specific culture by setting System.Threading.Thread.CurrentThread.CurrentCulture and System.Threading.Thread.CurrentThread.CurrentUICulture to a specific value.</p>
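<p>A minimal sketch of both approaches (the date literal is just an illustration):</p>
<pre><code>using System.Globalization;
using System.Threading;

// Pin the whole thread to one culture so parsing behaves the same everywhere:
Thread.CurrentThread.CurrentCulture = CultureInfo.InvariantCulture;
Thread.CurrentThread.CurrentUICulture = CultureInfo.InvariantCulture;

// Or, safer still, pass the culture explicitly at each parse site:
DateTime d = DateTime.Parse("2008-10-31", CultureInfo.InvariantCulture);
</code></pre>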
|
<p>I had a similar problem once (except in C++). When I compared the sizes of the compiled executables, they were way off. Unfortunately, after days of searching, the best solution I found was to uninstall VS05 and re-install it.</p>
| 11,247
|
<p>I usually type my map declarations, but was doing some maintenance and found one without typing. This got me thinking (Oh No!). What is the default typing of a Map declaration? Consider the following:</p>
<pre><code>Map map = new HashMap();
map.put("one", "1st");
map.put("two", new Integer(2));
map.put("three", "3rd");
for (Map.Entry entry : map.entrySet()) {
System.out.println(entry.getKey() + " -> " + entry.getValue());
}
</code></pre>
<p>This errors with an incompatible-types error on Map.Entry. So if I type the declaration with:</p>
<pre><code>Map<Object, Object> map = new HashMap();
</code></pre>
<p>then all works well. So what is the default type that gets set in the declaration about? Or am I missing something else?</p>
|
<p>There is no default type.</p>
<p>The types in Java generics are only for compile-time checking. They are erased at runtime and essentially gone.</p>
<p>Think of generics as a static helper to a) better document your code, and b) enable some limited compile-time checking for type safety.</p>
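<p>For illustration, a sketch of the fully parameterized version of the code from the question; once the declaration carries type arguments, the loop compiles cleanly:</p>
<pre><code>Map<String, Object> map = new HashMap<String, Object>();
map.put("one", "1st");
map.put("two", Integer.valueOf(2));
map.put("three", "3rd");

for (Map.Entry<String, Object> entry : map.entrySet()) {
    System.out.println(entry.getKey() + " -> " + entry.getValue());
}
</code></pre>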
|
<p><a href="http://java.sun.com/j2se/1.4.2/docs/api/java/util/HashMap.html" rel="nofollow noreferrer">HashMap</a> is a collection of objects, Think C++ containers. Each element of the map is a "bucket" to hold data.<br>
You are putting different types of data in the buckets, the hashmap needs to know that these are not all the same data type. If only one type of data was placed in the hashmap, you would get a warning but it would compile. </p>
| 33,041
|
<p>If I simply wrap my query with:</p>
<pre><code>BEGIN TRANSACTION
COMMIT TRANSACTION
</code></pre>
<p>If anything fails inside of that, will it automatically rollback?</p>
<p>From looking at other code, they seem to check for an error, if there is an error then they do a GOTO statement which then calls ROLLBACK TRANSACTION</p>
<p>But that seems like a lot of work, to have to check for IF( @@ERROR <> 0) after every insert/update.</p>
|
<p>I typically do something like this inside my stored procedures. It keeps things nice and safe and passes along any errors that I encounter.</p>
<pre><code>SET XACT_ABORT ON;
BEGIN TRY
BEGIN TRANSACTION;
-- Code goes here
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
DECLARE
@ERROR_SEVERITY INT,
@ERROR_STATE INT,
@ERROR_NUMBER INT,
@ERROR_LINE INT,
@ERROR_MESSAGE NVARCHAR(4000);
SELECT
@ERROR_SEVERITY = ERROR_SEVERITY(),
@ERROR_STATE = ERROR_STATE(),
@ERROR_NUMBER = ERROR_NUMBER(),
@ERROR_LINE = ERROR_LINE(),
@ERROR_MESSAGE = ERROR_MESSAGE();
RAISERROR('Msg %d, Line %d, :%s',
@ERROR_SEVERITY,
@ERROR_STATE,
@ERROR_NUMBER,
@ERROR_LINE,
@ERROR_MESSAGE);
END CATCH
</code></pre>
|
<p>For transaction control you use begin, commit and rollback. You begin a transaction by supplying BEGIN TRANSACTION. Then you put the various SQL statements you need. Then you end the transaction by issuing either a commit or rollback. COMMIT TRANSACTION will commit all the changes that you did to the database after the BEGIN statement and make them permanent, so to speak. ROLLBACK TRANSACTION will rollback all changes that you did to the database after the BEGIN statement. However, it will not change variable values.</p>
<p>Example:</p>
<pre><code>BEGIN TRANSACTION
UPDATE table SET column = 'ABC' WHERE column = '123'
COMMIT TRANSACTION
--//column now has a value of 'ABC'
BEGIN TRANSACTION
UPDATE table SET column = 'ABC' WHERE column = '123'
ROLLBACK TRANSACTION
--//column still has its previous value ('123'). No changes were made.
</code></pre>
| 37,063
|
<p>We're using WatiN for testing our UI, but one page (which is unfortunately not under our teams control) takes forever to finish loading. Is there a way to get WatiN to click a link on the page before the page finishes rendering completely?</p>
|
<p>Here's the code we found to work:</p>
<pre><code>IE browser = new IE(....);
browser.Button("SlowPageLoadingButton").ClickNoWait();
Link continueLink = browser.Link(Find.ByText("linktext"));
continueLink.WaitUntilExists();
continueLink.Click();
</code></pre>
|
<p>You should be able to leave out the call to WaitUntilExists() since WatiN does this internally when you call a method or property on an element (like the link.Click() in your example).</p>
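<p>Assuming that behaviour, the example above reduces to something like this sketch:</p>
<pre><code>IE browser = new IE(....);
browser.Button("SlowPageLoadingButton").ClickNoWait();
// Click() waits for the element internally, so no explicit WaitUntilExists() is needed:
browser.Link(Find.ByText("linktext")).Click();
</code></pre>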
<p>HTH,
Jeroen van Menen
Lead dev WatiN</p>
| 7,140
|
<p>We have a project coming up where the PM is insistent that the team should "eat their own dog food".</p>
<p>At what point is it realistic to do this?</p>
<p>e.g. assume we have to write an editor. We can't use this editor at the beginning to actually code because it doesn't exist. We have to use another editor.</p>
<p>For a while during the project, using a buggy editor is going to slow the project down and will be counter productive.</p>
<p>So at what point do we switch?</p>
<p>Update: After some discussion within the team, the points we will stress during development are:</p>
<ul>
<li>Implement smallest subset possible to start off with</li>
<li>Identify critical features asap</li>
<li>Only switch some of the developers to use the new product to minimise risk</li>
</ul>
|
<p><em>Some</em> of you should be using it as soon as you possibly can. The first version should be stripped-down, with only the most essential features that you <em>need</em> in order to use it as an (in this case) editor. Once you start using it you'll find out in a hurry which features are important.</p>
|
<p>Depending on how the development is being done you can switch earlier or later. If you are using a TDD methodology, or one where finding and fixing bugs is higher on the list, I would start whenever you have enough features that you feel would help your day-to-day life. This could be really early in the development if you have prioritized your features effectively.</p>
<p>Otherwise I would wait until you get to some of the later stages, pre alpha or pre beta. This means that you are not feeling too much pain early in the development. </p>
<p>As mentioned by others, if you can change your development efforts to try to make the product usable earlier, do it! I would recommend having people start using the product in earnest as early as possible to help evaluate the various features and get your initial users emotionally attached to the product. A developer who cares will often put in that extra effort to make the project just that much better.</p>
| 28,253
|
<p>I have a need to run a relatively large number of virtual machines on a relatively small number of physical hosts. Each virtual machine isn't doing too much - each only needs to run essentially one basic network service - think SMTP or the like. Furthermore, the load on each is going to be extremely light. </p>
<p>Unfortunately, the numbers are something like 100 virtual machines on 5 physical hosts. Each host is decent enough - core 2 with 2 gigs of ram and a 1tb disk. However, I know just taking a vmware image of ubuntu and throwing it on that machine won't get me anywhere near 100 instances and would be something closer to 20.</p>
<p>So, is there any hope for this ratio of images to hosts? Also, which implementation of virtual machine would be best suited for this purpose - ie has efficient overall usage of resources? We mostly use vmware here, but if there is a significant performance advantage that could be gained by switching to Xen or the like, I am sure we would consider it.</p>
<p>Thank you in advance for your insights :)</p>
<p>Note: We ended up using OpenVZ and it worked rather well. The default parameters for an ubuntu template let us run about 40 instances per machine.</p>
|
<p>there are three main fronts to make those fit:</p>
<ol>
<li><p>lower overhead. OpenVZ, Vserver, chroot, would be ideal if applicable. if you really need each instance to be a real VM with its own kernel, try KVM/Xen instead of VMWare. may be less mature, but you'll have a lot more flexibility.</p></li>
<li><p>smaller guests. try Ubuntu JeOS, or roll your own with busybox</p></li>
<li><p>share as much as possible between guests. try sharing a single R/O image with all the OS, and mount a small R/W image for each guest on /var, /home, /etc, etc</p></li>
</ol>
|
<p>Cloud Foundry. I know nothing about VMs compared to anyone else who may have submitted an answer, but from what I understand if you have a host, a VM on that host, and then Cloud Foundry on that VM you can easily create a base Secondary VM and easily replicate and configure all of your services within that Secondary VM set, while keeping hardware usage low. I don't know if it will work for sure, but from what I understand that would be one of the more minimal approaches and it is a two hull approach which would reduce possible risk of damaging the host machine.</p>
| 18,638
|
<p>Is it even possible to perform address (physical, not e-mail) validation? It seems like the sheer number of address formats, even in the US alone, would make this a fairly difficult task. On the other hand it seems like a task that would be necessary for several business requirements.</p>
|
<p>Here's a free and sort of "outside the box" way to do it. Not 100% perfect, but it should reject blatantly non-existent addresses.</p>
<p>Submit the entire address to <a href="https://developers.google.com/maps/documentation/geocoding/" rel="noreferrer">Google's geocoding web service</a>. This service attempts to return the exact coordinates of the location you feed it, i.e. latitude and longitude.</p>
<p>In my experience if the address is invalid you will get a result of 602 from the service. There's definitely a possibility of false positives or false negatives, but used in conjunction with other consistency checks it could be useful.</p>
<p>(<a href="http://developer.yahoo.com/maps/rest/V1/geocode.html" rel="noreferrer">Yahoo's geocoding web service</a>, on the other hand, will return the coordinates of the center of the town if the town exists but the rest of the address is bogus. Potentially useful as long as you pay close attention to the "precision" field in the result).</p>
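<p>As a rough sketch against the current JSON endpoint (the API key and address are placeholders; newer versions of the service report failures in a <code>status</code> field rather than a numeric result code):</p>
<pre><code>import requests  # illustrative only

resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "1600 Amphitheatre Pkwy, Mountain View, CA",
            "key": "YOUR_KEY"},
)
data = resp.json()
if data["status"] != "OK":
    print("Address could not be geocoded:", data["status"])
</code></pre>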
|
<p>For US addresses you can require a valid state, and verify that the zip is valid. You could even check that the zip code is in the right state, but beyond that I don't think there are many tests you could run that wouldn't provide a lot of false negatives.</p>
<p>What are you trying to do -- prevent simple mistakes or enforce some kind of identity check?</p>
| 16,346
|
<p>I am using a Formlabs Form 3 printer with clear resin. After printing the model, I wash it with isopropyl alcohol and dry it. Then I cure it using the Formlabs Form Cure for 5 minutes at 60 °C.
After curing the model, the clear print loses some of its transparency.</p>
<p>Is this normal? can it be avoided?</p>
|
<p>This happens to most resins, and the amount of haziness is directly related to the type of resin. Not all clear resins do this, mind you, but it has to do with the spectrum of light used for curing (natural sunlight cures do this far worse).</p>
|
<p>Clouding is a known issue with colored transparent resins, as is yellowing with clear resin.</p>
<p>Uncle Jessy did quite a good video explaining the issue and how to best avoid it.</p>
<p>The conclusion was that you should wash and dry them with as little UV exposure as possible (Drying them inside a box in a warm room rather than in direct sunlight), then coating them with Clear Coat lacquer or a similar product, then curing them.</p>
<p><a href="https://www.youtube.com/watch?v=1Ya0DSVYXsE&t=5s" rel="nofollow noreferrer">enter link description here</a></p>
| 2,097
|
<p>I use LINQ to Objects operations on an ordered array.
Which operations shouldn't I do to be sure the order of the array is not changed?</p>
|
<p>I examined the methods of <a href="http://msdn.microsoft.com/en-us/library/system.linq.enumerable_methods.aspx" rel="noreferrer">System.Linq.Enumerable</a>, discarding any that returned non-IEnumerable results. I checked the remarks of each to determine how the order of the result would differ from order of the source.</p>
<p>Preserves Order Absolutely. You can map a source element by index to a result element</p>
<ul>
<li>AsEnumerable</li>
<li>Cast</li>
<li>Concat</li>
<li>Select</li>
<li>ToArray</li>
<li>ToList</li>
</ul>
<p>Preserves Order. Elements are filtered or added, but not re-ordered.</p>
<ul>
<li>Distinct</li>
<li>Except</li>
<li>Intersect</li>
<li>OfType</li>
<li>Prepend (new in .net 4.7.1)</li>
<li>Skip</li>
<li>SkipWhile</li>
<li>Take</li>
<li>TakeWhile</li>
<li>Where</li>
<li>Zip (new in .net 4)</li>
</ul>
<p>Destroys Order - we don't know what order to expect results in.</p>
<ul>
<li>ToDictionary</li>
<li>ToLookup</li>
</ul>
<p>Redefines Order Explicitly - use these to change the order of the result</p>
<ul>
<li>OrderBy</li>
<li>OrderByDescending</li>
<li>Reverse</li>
<li>ThenBy</li>
<li>ThenByDescending</li>
</ul>
<p>Redefines Order according to some rules.</p>
<ul>
<li>GroupBy - The IGrouping objects are yielded in an order based on the order of the elements in source that produced the first key of each IGrouping. Elements in a grouping are yielded in the order they appear in source. </li>
<li>GroupJoin - GroupJoin preserves the order of the elements of outer, and for each element of outer, the order of the matching elements from inner.</li>
<li>Join - preserves the order of the elements of outer, and for each of these elements, the order of the matching elements of inner. </li>
<li>SelectMany - for each element of source, selector is invoked and a sequence of values is returned.</li>
<li>Union - When the object returned by this method is enumerated, Union enumerates first and second in that order and yields each element that has not already been yielded. </li>
</ul>
<hr>
<p>Edit: I've moved Distinct to Preserving order based on this <a href="https://github.com/dotnet/corefx/blob/master/src/System.Linq/src/System/Linq/Enumerable.cs" rel="noreferrer">implementation</a>.</p>
<pre><code> private static IEnumerable<TSource> DistinctIterator<TSource>
(IEnumerable<TSource> source, IEqualityComparer<TSource> comparer)
{
Set<TSource> set = new Set<TSource>(comparer);
foreach (TSource element in source)
if (set.Add(element)) yield return element;
}
</code></pre>
|
<p>The question here is specifically referring to LINQ-to-Objects.</p>
<p>If you're using LINQ-to-SQL instead there is no order there unless you impose one with something like: </p>
<pre><code>mysqlresult.OrderBy(e=>e.SomeColumn)
</code></pre>
<p>If you do not do this with LINQ-to-SQL then the order of results can vary between subsequent queries, even of the same data, which could cause an intermittent bug.</p>
| 25,033
|
<p>Is there a simple way, in a pretty standard UNIX environment with bash, to run a command to delete all but the most recent X files from a directory?</p>
<p>To give a bit more of a concrete example, imagine some cron job writing out a file (say, a log file or a tar-ed up backup) to a directory every hour. I'd like a way to have another cron job running which would remove the oldest files in that directory until there are less than, say, 5.</p>
<p>And just to be clear, if there's only one file present, it should never be deleted.</p>
|
<p>The problems with the existing answers:</p>
<ul>
<li>inability to handle filenames with embedded spaces or newlines.
<ul>
<li>in the case of solutions that invoke <code>rm</code> directly on an unquoted command substitution (<code>rm `...`</code>), there's an added risk of unintended globbing.</li>
</ul>
</li>
<li>inability to distinguish between files and directories (i.e., if <em>directories</em> happened to be among the 5 most recently modified filesystem items, you'd effectively retain <em>fewer</em> than 5 files, and applying <code>rm</code> to directories will fail).</li>
</ul>
<p><a href="https://stackoverflow.com/a/299911/45375">wnoise's answer</a> addresses these issues, but the solution is <em>GNU</em>-specific (and quite complex).</p>
<p>Here's a pragmatic, <strong>POSIX-compliant solution</strong> that comes with only <strong>one caveat</strong>: it cannot handle filenames with embedded <em>newlines</em> - but I don't consider that a real-world concern for most people.</p>
<p><sup>For the record, here's the explanation for why it's generally not a good idea to parse <code>ls</code> output: <a href="http://mywiki.wooledge.org/ParsingLs" rel="noreferrer">http://mywiki.wooledge.org/ParsingLs</a></sup></p>
<pre><code>ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {}
</code></pre>
<p><sup>Note: This command operates in the <strong><em>current</em> directory</strong>; to <strong>target a directory <em>explicitly</em>, use a subshell (<code>(...)</code>) with <code>cd</code></strong>:<br />
<code>(cd /path/to && ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {})</code><br />
The same <strong>applies analogously to the commands below</strong>.</sup></p>
<p>The above is <strong>inefficient</strong>, because <code>xargs</code> has to invoke <code>rm</code> separately <em>for each filename</em>.<br />
However, your platform's specific <code>xargs</code> implementation may allow you to solve this problem:</p>
<hr />
<p>A solution that <strong>works with <em>GNU</em> <code>xargs</code></strong> is to use <strong><code>-d '\n'</code></strong>, which makes <code>xargs</code> consider each input line a separate argument, yet passes as many arguments as will fit on a command line <em>at once</em>:</p>
<pre><code>ls -tp | grep -v '/$' | tail -n +6 | xargs -d '\n' -r rm --
</code></pre>
<p><sup>Note: Option <code>-r</code> (<code>--no-run-if-empty</code>) ensures that <code>rm</code> is not invoked if there's <em>no input</em>.</sup></p>
<p>A solution that <strong>works with <em>both</em> <em>GNU</em> <code>xargs</code> <em>and</em> <em>BSD</em> <code>xargs</code></strong> (including on <strong>macOS</strong>) - though technically still <em>not</em> POSIX-compliant - is to use <strong><code>-0</code></strong> to handle <code>NUL</code>-separated input, after first translating newlines to <code>NUL</code> (<code>0x0</code>) chars., which also passes (typically) all filenames <em>at once</em>:</p>
<pre><code>ls -tp | grep -v '/$' | tail -n +6 | tr '\n' '\0' | xargs -0 rm --
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>ls -tp</code> prints the names of filesystem items sorted by how recently they were modified, in descending order (most recently modified items first) (<code>-t</code>), with directories printed with a trailing <code>/</code> to mark them as such (<code>-p</code>).</p>
<ul>
<li>Note: It is the fact that <code>ls -tp</code> always outputs file / directory <em>names</em> only, not full paths, that necessitates the subshell approach mentioned above for targeting a directory other than the current one (<code>(cd /path/to && ls -tp ...)</code>).</li>
</ul>
</li>
<li><p><code>grep -v '/$'</code> then weeds out directories from the resulting listing, by omitting (<code>-v</code>) lines that have a trailing <code>/</code> (<code>/$</code>).</p>
<ul>
<li><em>Caveat</em>: Since a <em>symlink that points to a directory</em> is technically not itself a directory, such symlinks will <em>not</em> be excluded.</li>
</ul>
</li>
<li><p><code>tail -n +6</code> skips the first <em>5</em> entries in the listing, in effect returning all <em>but</em> the 5 most recently modified files, if any.<br />
Note that in order to exclude <code>N</code> files, <code>N+1</code> must be passed to <code>tail -n +</code>.</p>
</li>
<li><p><code>xargs -I {} rm -- {}</code> (and its variations) then invokes on <code>rm</code> on all these files; if there are no matches at all, <code>xargs</code> won't do anything.</p>
<ul>
<li><code>xargs -I {} rm -- {}</code> defines placeholder <code>{}</code> that represents each input line <em>as a whole</em>, so <code>rm</code> is then invoked once for each input line, but with filenames with embedded spaces handled correctly.</li>
<li><code>--</code> in all cases ensures that any filenames that happen to start with <code>-</code> aren't mistaken for <em>options</em> by <code>rm</code>.</li>
</ul>
</li>
</ul>
<hr />
<p>A <strong>variation</strong> on the original problem, <strong>in case the matching files need to be processed <em>individually</em> or <em>collected in a shell array</em></strong>:</p>
<pre><code># One by one, in a shell loop (POSIX-compliant):
ls -tp | grep -v '/$' | tail -n +6 | while IFS= read -r f; do echo "$f"; done
# One by one, but using a Bash process substitution (<(...),
# so that the variables inside the `while` loop remain in scope:
while IFS= read -r f; do echo "$f"; done < <(ls -tp | grep -v '/$' | tail -n +6)
# Collecting the matches in a Bash *array*:
IFS=$'\n' read -d '' -ra files < <(ls -tp | grep -v '/$' | tail -n +6)
printf '%s\n' "${files[@]}" # print array elements
</code></pre>
|
<pre><code>leaveCount=5
fileCount=$(ls -1 *.log | wc -l)
tailCount=$((fileCount - leaveCount))
# avoid negative tail argument
[[ $tailCount -lt 0 ]] && tailCount=0
ls -t *.log | tail -$tailCount | xargs rm -f
</code></pre>
| 4,538
|
<p>According to this</p>
<p><a href="http://perldoc.perl.org/UNIVERSAL.html" rel="noreferrer">http://perldoc.perl.org/UNIVERSAL.html</a></p>
<p>I shouldn't use UNIVERSAL::isa() and should instead use $obj->isa() or CLASS->isa().</p>
<p>This means that to find out if something is a reference in the first place and then is reference to this class I have to do</p>
<pre><code>eval { $poss->isa("Class") }
</code></pre>
<p>and check $@ and all that gumph, or else</p>
<pre><code>use Scalar::Util 'blessed';
blessed $ref && $ref->isa($class);
</code></pre>
<p>My question is why? What's wrong with UNIVERSAL::isa called like that? It's much cleaner for things like:</p>
<pre><code>my $self = shift if UNIVERSAL::isa($_[0], __PACKAGE__)
</code></pre>
<p>To see whether this function is being called on the object or not. And is there a nice clean alternative that doesn't get cumbersome with ampersands and potentially long lines?</p>
|
<p>The primary problem is that if you call <code>UNIVERSAL::isa</code> directly, you are bypassing any classes that have overloaded <code>isa</code>. If those classes rely on the overloaded behavior (which they probably do or else they would not have overridden it), then this is a problem. If you invoke <code>isa</code> directly on your blessed object, then the correct <code>isa</code> method will be called in either case (overloaded if it exists, UNIVERSAL:: if not).</p>
<p>The second problem is that <code>UNIVERSAL::isa</code> will only perform the test you want on a blessed reference just like every other use of <code>isa</code>. It has different behavior for non-blessed references and simple scalars. So your example that doesn't check whether <code>$ref</code> is blessed is not doing the right thing, you're ignoring an error condition and using <code>UNIVERSAL</code>'s alternate behavior. In certain circumstances this can cause subtle errors (for example, if your variable contains the name of a class).</p>
<p>Consider:</p>
<pre><code>use CGI;
my $a = CGI->new();
my $b = "CGI";
print UNIVERSAL::isa($a,"CGI"); # prints 1, $a is a CGI object.
print UNIVERSAL::isa($b,"CGI"); # Also prints 1!! Uh-oh!!
</code></pre>
<p>So, in summary, don't use <code>UNIVERSAL::isa</code>... Do the extra error check and invoke <code>isa</code> on your object directly.</p>
|
<p>Right. It does a wrong thing for classes that overload <code>isa</code>. Just use the following idiom:</p>
<pre><code>if (eval { $obj->isa($class) }) {
</code></pre>
<p>It is easily understood and commonly accepted.</p>
| 25,000
|
<p>With SVN, I had a single big repository I kept on a server, and checked-out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could checkout a specific project, commit and it updated the 'master' project, or I could checkout the entire thing.</p>
<p>Now, I have a bunch of git repositories, for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command..</p>
<p>Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory-sticks/harddrives as backup.</p>
<p>The problem is that it's a private repository, and git doesn't allow checking out only a specific folder (which I could push to github as a separate project, but have the changes appear in both the master repo and the sub-repos).</p>
<p>I <em>could</em> use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories, and don't really contain the actual code, so it's useless for backup)</p>
<p>Currently I have a folder of git-repos (for example, ~/code_projects/proj1/.git/ ~/code_projects/proj2/.git/), and after doing changes to proj1 I do <code>git push github</code>, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then do <code>git push backupdrive1</code>, <code>git push mymemorystick</code> etc</p>
<p>So, the question: How do your personal code and projects with git repositories, and keep them synced and backed-up?</p>
|
<p>I would <strong>strongly</strong> advise against putting unrelated data in a given
Git repository. The overhead of creating new repositories is quite
low, and that is a <em>feature</em> that makes it possible to keep
different lineages completely separate.</p>
<p>Fighting that idea means ending up with unnecessarily tangled history,
which renders administration more difficult and--more
importantly--"archeology" tools less useful because of the resulting
dilution. Also, as you mentioned, Git assumes that the "unit of
cloning" is the repository, and practically has to do so because of
its distributed nature.</p>
<p>One solution is to keep every project/package/etc. as its own <em>bare</em>
repository (i.e., without working tree) under a blessed hierarchy,
like:</p>
<pre><code>/repos/a.git
/repos/b.git
/repos/c.git
</code></pre>
<p>Once a few conventions have been established, it becomes trivial to
apply administrative operations (backup, packing, web publishing) to
the complete hierarchy, which serves a role not entirely dissimilar to
"monolithic" SVN repositories. Working with these repositories also
becomes somewhat similar to SVN workflows, with the addition that one
<em>can</em> use local commits and branches:</p>
<pre><code>svn checkout --> git clone
svn update --> git pull
svn commit --> git push
</code></pre>
<p>You can have multiple remotes in each working clone, for the ease of
synchronizing between the multiple parties:</p>
<pre><code>$ cd ~/dev
$ git clone /repos/foo.git # or the one from github, ...
$ cd foo
$ git remote add github ...
$ git remote add memorystick ...
</code></pre>
<p>You can then fetch/pull from each of the "sources", work and commit
locally, and then push ("backup") to each of these remotes when you
are ready with something like (note how that pushes the <em>same</em> commits
and history to each of the remotes!):</p>
<pre><code>$ for remote in origin github memorystick; do git push $remote; done
</code></pre>
<p>The easiest way to turn an existing working repository <code>~/dev/foo</code>
into such a bare repository is probably:</p>
<pre><code>$ cd ~/dev
$ git clone --bare foo /repos/foo.git
$ mv foo foo.old
$ git clone /repos/foo.git
</code></pre>
<p>which is mostly equivalent to a <code>svn import</code>--but does not throw the
existing, "local" history away.</p>
<p>Note: <em>submodules</em> are a mechanism to include shared <em>related</em>
lineages, so I indeed wouldn't consider them an appropriate tool for
the problem you are trying to solve.</p>
|
<p>There is another method for having nested git repos, but it doesn't solve the problem you're after. Still, for others who are looking for the solution I was after:</p>
<p>In the top level git repo just hide the folder in .gitignore containing the nested git repo. This makes it easy to have two separate (but nested!) git repos.</p>
| 5,745
|
<p>Are there any documented techniques for speeding up mySQL dumps and imports?</p>
<p>This would include my.cnf settings, using ramdisks, etc. </p>
<p>Looking only for documented techniques, preferably with benchmarks showing potential speed-up.</p>
|
<ol>
<li>Get a copy of <em>High Performance MySQL</em>. Great book. </li>
<li>Extended inserts in dumps </li>
<li>Dump with --tab format so you can use mysqlimport, which is
faster than mysql < dumpfile (see the sketch after this list)</li>
<li>Import with multiple threads, one for each table.</li>
<li>Use a different database engine if possible. importing into a
heavily transactional engine like innodb is awfully slow. Inserting
into a non-transactional engine like
MyISAM is much much faster.</li>
<li>Look at the table compare script in the Maatkit toolkit and see if you can
update your tables rather than dumping them and importing them. But
you're probably talking about backups/restores.</li>
</ol>
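<p>A minimal sketch of points 3 and 4 combined (the paths and database name are made up; <code>--tab</code> writes files on the database host and requires the FILE privilege):</p>
<pre><code># dump each table to its own tab-delimited file
mysqldump --tab=/tmp/dump mydb

# reload with mysqlimport, using several parallel threads
mysqlimport --use-threads=4 --local mydb /tmp/dump/*.txt
</code></pre>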
|
<p>Use indexes but not too many, activate the query cache, and use Sphinx for big databases. Here are some good tips: <a href="http://www.keedeo.com/media/1857/26-astuces-pour-accelerer-vos-requetes-mysql" rel="nofollow">http://www.keedeo.com/media/1857/26-astuces-pour-accelerer-vos-requetes-mysql</a> (In French)</p>
| 9,270
|
<p>Is there a simple and foolproof way we can test an AJAX installation? We have a problem in calling a webscript using AJAX from a JS file. The error is 'ServiceLib' is not defined. The error gets a few hits on Google.</p>
<p>We've added some AJAX functionality to a customer's app. This works fine here in the office on dev machines and on our IIS Server, it works fine on the customer's test web site, but when we put the app on the live site, the webscript calls fail.</p>
<p>The customer installed AJAX on their live server a few days ago. We've verified that the service lib files are there and in the right places. </p>
<p>We've already spent hours on this with no solution and still do not know for sure whether there is something wrong with our code, or something is wrong on their server, or for that matter, whether AJAX is even correctly installed. Part of our problem is that we have no access to their live server, so there is not much we can do other than change lines in our own code, give the app files to our contact there, and see what happens. The contact knows less than we do, so we are working blind. A strange situation, I know, but there is bureaucracy involved.</p>
<p>Many thanks
Mike Thomas</p>
|
<p>Firebug might help - if you can get someone at the far end to install it, it may be able to give you an insight into what is going on with the ajax requests via its console, which logs and gives you the ability to view the return data of all ajax requests.</p>
|
<p>I'm thinking...</p>
<p>There are three parts to the process:<br>
1) The client-side javascript logic in the browser sends the HTTP request to the server.<br>
2) The server-side ASP.NET page processes it and responds.<br>
3) The client-side logic receives the response and updates the web page, or whatever. </p>
<p>Swap out each part with something simpler and diagnostic to see where in the pipeline the break is.</p>
<p>For example, create a diagnostic webpage that's a substitue for #1 that calls the server-side page directly.</p>
<p>If that seems to work, create a different server-side ASP.NET page that's very simple, just logs something, to prove that the real #1 does what your diagnostic #1 did.</p>
<p>Ya know, your standard debugging binary search...</p>
| 41,748
|
<p>I'm running an xcopy command in a batch script which copies a file to a shared drive on another workstation; however the workstation requires a login before connecting to the share. Is there a way to script the login/connect into the batch file? </p>
<p>thanks in advance</p>
|
<p>You can use the "net use x: \\servername\sharename /user:username password" command to log in to the share within the batch file. However, putting the password into a plaintext batch file is generally a bad idea.</p>
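<p>For example, a rough sketch (the share name, credentials and paths are placeholders):</p>
<pre><code>net use X: \\servername\sharename /user:DOMAIN\username password
xcopy C:\data\report.txt X:\incoming\ /Y
net use X: /delete
</code></pre>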
|
<p>You can use net use to map a temporary drive and login using the credentials. This is what we had to do. Perhaps there is a better way. Then at the end of the script we unmap the drive.</p>
<p>Here is a link to the net use command: <a href="http://www.cezeo.com/tips-and-tricks/net-use-command/" rel="nofollow noreferrer">http://www.cezeo.com/tips-and-tricks/net-use-command/</a></p>
| 41,632
|
<p>I have a web application project (wap) that is successfully being deployed to a development server by our tfsbuild server.</p>
<p>I'd like the build server to run our collection of webtests after deployment.</p>
<p>What is a best practice (or ANY practice) for doing this?</p>
|
<p>You're almost there with your code. I agree with you, the MSDN is not quite explicit on what's inside that byte array, but here's what you can do :</p>
<pre><code>IPAddress address = new IPAddress(_ClientIPAddress.Address.Skip(2).Take(4).ToArray());
</code></pre>
<p>The first two bytes do not seem to be used, but in the case of AF_INET (which is IPv4, or 2) the next four bytes are the IPv4 address of the client.</p>
<p>You might also want to make sure that your code will handle IPv6 (AF_INET6) properly, or handle the fact that AF_INET6 is a likely value. You'll probably need to read 16 bytes instead of 4 for this protocol.</p>
|
<p>The real answer can be found here. <a href="http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.terminal_services/2007-03/msg00474.html" rel="nofollow noreferrer">http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.terminal_services/2007-03/msg00474.html</a></p>
| 36,736
|
<p>I'm looking for a simple algorithm to generate a large number of random, aesthetically pleasing colors. So no crazy neon colors, colors reminiscent of feces, etc. </p>
<p>I've found solutions to this problem but they rely on alternative color palettes than RGB.
I would rather just use straight RGB than mapping back and forth. These other solutions also can at most generate only 32 or so pleasing random colors. </p>
<p>Any ideas would be great.</p>
|
<p>You could average the RGB values of random colors with those of a constant color:</p>
<p><em>(example in Java)</em></p>
<pre><code>public Color generateRandomColor(Color mix) {
Random random = new Random();
int red = random.nextInt(256);
int green = random.nextInt(256);
int blue = random.nextInt(256);
// mix the color
if (mix != null) {
red = (red + mix.getRed()) / 2;
green = (green + mix.getGreen()) / 2;
blue = (blue + mix.getBlue()) / 2;
}
Color color = new Color(red, green, blue);
return color;
}
</code></pre>
<p><br/>
Mixing random colors with white (255, 255, 255) creates neutral pastels by increasing the lightness while keeping the hue of the original color. These randomly generated pastels usually go well together, especially in large numbers.</p>
<p>Here are some pastel colors generated using the above method:</p>
<p><img src="https://i.stack.imgur.com/8jKGx.jpg" alt="First"></p>
<p><br/>
You could also mix the random color with a constant pastel, which results in a tinted set of neutral colors. For example, using a light blue creates colors like these:</p>
<p><img src="https://i.stack.imgur.com/zI406.jpg" alt="Second"></p>
<p><br/>
Going further, you could add heuristics to your generator that take into account complementary colors or levels of shading, but it all depends on the impression you want to achieve with your random colors.</p>
<p>Some additional resources:</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/Color_theory" rel="noreferrer">http://en.wikipedia.org/wiki/Color_theory</a></li>
<li><a href="http://en.wikipedia.org/wiki/Complementary_color" rel="noreferrer">http://en.wikipedia.org/wiki/Complementary_color</a></li>
</ul>
|
<p>You could have them be within a certain brightness. That would control the amount of "neon" colors a bit. For instance, if the "brightness"</p>
<pre><code>brightness = sqrt(R^2+G^2+B^2)
</code></pre>
<p>was within a certain high bound, it would have a washed out, light color to it. Conversely, if it was within a certain low bound, it would be darker. This would eliminate any crazy, standout colors and if you chose a bound really high or really low, they would all be fairly close to either white or black.</p>
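<p>A quick sketch of that rejection test, sticking with Java as in the answer above (the bounds are arbitrary):</p>
<pre><code>public static boolean isAcceptable(int r, int g, int b) {
    double brightness = Math.sqrt(r * r + g * g + b * b);
    // reject colors that are too dark or too washed out
    return brightness > 130 && brightness < 370;
}
</code></pre>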
| 6,467
|
<p>Given this:</p>
<pre><code>Public Sub timReminder_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs)
If DateTime.Now() > g_RemindTime Then
Reminders.ShowDialog()
timReminder.Enabled = False
End If
End Sub
</code></pre>
<p>I want to be able to say this (as I would in Delphi):</p>
<pre><code>timReminder.Tick = timReminder_Tick
</code></pre>
<p>But I get errors when I try it.</p>
<p>Does anyone know how I can assign a custom event to a timer's on-tick event at runtime in VB.NET?</p>
|
<p>Use the 'AddHandler' and 'AddressOf' keywords to add a handler to the Tick event.</p>
<pre><code>AddHandler timReminder.Tick, AddressOf timReminder_Tick
</code></pre>
|
<p>The <code>AddHandler</code> statement is a very powerful tool.</p>
<p>Try using it to add an event to a series of controls within a collection.</p>
<p>The handler can add validation or error checking to all types of controls and will work with whatever you add to the form.</p>
| 18,873
|
<p>In C# WinForms, what's the proper way to get the backward/forward history stacks for the System.Windows.Forms.WebBrowser?</p>
|
<p>Check out <a href="http://www.bsalsa.com/downloads.html" rel="nofollow noreferrer">http://www.bsalsa.com/downloads.html</a>. This is a series of Delphi components (free source code, you can see an example of this here: <a href="http://staruml.cvs.sourceforge.net/staruml/staruml/staruml/components/plastic-components/src/embeddedwb.pas?revision=1.1&view=markup" rel="nofollow noreferrer">http://staruml.cvs.sourceforge.net/staruml/staruml/staruml/components/plastic-components/src/embeddedwb.pas?revision=1.1&view=markup</a> - it's the starUML projects code) and they have, among other things, a way to get at the history, favorites, etc using the IE MSHTML interfaces. It's written in Object Pascal but it shouldn't be too hard to figure out what's going on. If you download the "Embedded Web Browser Components Package" take a look at the stuff in EmbeddedWB_D2005\Source - there's all sorts of goodies there.</p>
|
<p>It doesn't look like it's possible.</p>
<p>My suggestion would be to catch the <strong>Navigated</strong> event and maintain your own list. A possible problem with that is when the user clicks <em>back</em> in the browser, you don't know to unwind the stack.</p>
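<p>A bare-bones sketch of that bookkeeping (a fragment of a form class; the back-button caveat above still applies):</p>
<pre><code>private readonly List<Uri> history = new List<Uri>();

private void webBrowser1_Navigated(object sender, WebBrowserNavigatedEventArgs e)
{
    history.Add(e.Url);   // record every page the control lands on
}
</code></pre>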
| 7,836
|
<p>What resources have to be manually cleaned up in <em>C#</em> and what are the consequences of not doing so?</p>
<p>For example, say I have the following code:</p>
<pre><code>myBrush = new System.Drawing.SolidBrush(System.Drawing.Color.Black);
// Use Brush
</code></pre>
<p>If I don't clean up the brush using the dispose method, I'm assuming the garbage collector frees the memory used at program termination? Is this correct?</p>
<p>What other resources do I need to manually clean up?</p>
|
<ul>
<li>Handles to internal windows data structures.</li>
<li>Database connections.</li>
<li>File handles.</li>
<li>Network connections.</li>
<li>COM/OLE references.</li>
</ul>
<p>The list goes on.</p>
<p>It's important to call <code>Dispose</code> or even better yet, use the <code>using</code> pattern.</p>
<pre><code>using (SolidBrush myBrush = new System.Drawing.SolidBrush(System.Drawing.Color.Black))
{
// use myBrush
}
</code></pre>
<hr>
<p>If you don't dispose something, it'll be cleaned up when the garbage collector notices that there are no more references to it, which may be after some time.</p>
<p>In the case of <code>System.Drawing.Brush</code>, Windows will keep internal windows structures for the brush loaded in memory until all programs release their handle.</p>
|
<p>The garbage collector will handle any managed resources. In your example, the brush will be cleaned up when the garbage collector decides to, which will happen some time after the last reference to the brush is no longer valid.</p>
<p>There are certain things that need to be manually cleaned up, but those are pointers retrieved from unmanaged sources, such as DLL calls, nothing within the .NET Framework needs this treatment however.</p>
| 14,270
|