| instruction | chosen | rejected | __index_level_0__ |
|---|---|---|---|
<p>I need to design a SOAP API (my first one!). What are the best practices regarding errors returned to the caller?</p>
<p>Assuming an API as follows:</p>
<pre><code>[WebMethod]
public List<someClass> GetList(String param1)
{
}
</code></pre>
<p>Should I:</p>
<ol>
<li>Throw an exception and let the SOAP infrastructure generate a SOAP fault -- the caller would have to try/catch. This is not very explanatory to the caller.</li>
<li>Have the return parameter be an XmlDocument of some sort, with the first element being a return value, followed by the List.</li>
<li>Change the returned SOAP packet itself. Looking at the packet, I see that the generated response looks like the following:</li>
</ol>
<pre><code> <GetListResponse>
<GetListResult>
...
...
</GetListResult>
</GetListResponse>
</code></pre>
<p>Can we somehow change the return packet so that the "GetListResult" element is changed to "GetListError" in case of an error?</p>
<ul>
<li>Any other way?</li>
</ul>
<p>Thanks!</p>
|
<p>Probably the most appropriate SOA pattern to follow would be a Fault Contract, which is essentially a Data Contract that is wrapped in the SOAPException.</p>
<p>I am posting examples in .NET, since it looks like that is what you are using (and that is what I know :) )</p>
<p>In WCF, you can define a DataContract, then decorate your OperationContract interface with a "FaultContract" attribute that specifies the fault type the operation can return:</p>
<pre><code>[System.ServiceModel.ServiceContract]
public partial interface MyServiceContract
{
    [System.ServiceModel.FaultContract(typeof(MyService.FaultContracts.ErrorMessageFaultContract))]
    [System.ServiceModel.OperationContract(...)]
    ResponseMessage SOAMethod(RequestMessage request);
}
</code></pre>
<p>For ASMX web services, (as it appears you are using from your code snippet), you can't use this attribute or setup. So to implement the pattern, you would need to:</p>
<ul>
<li>Define a serializable class to hold your exception information (i.e. ErrorData)</li>
<li>When an exception is thrown in your service, catch it and in your error handling code, add the info to the ErrorData class</li>
<li><p>Append the serialized ErrorData class to a SoapException class:</p>
<pre><code>SoapException mySoapException = new SoapException(message, SoapException.ServerFaultCode, "", serializedErrorDataClass);
</code></pre></li>
<li><p>Throw the SoapException in your code</p></li>
<li>On your client side, you will need to deserialize the message to interpret it.</li>
</ul>
<p>Kind of seems like a lot of work, but this way you have total control of what data gets returned. Incidentally, this is the pattern that is used by the <a href="http://msdn.microsoft.com/en-us/library/cc487895.aspx" rel="nofollow noreferrer">ServiceFactory</a> from Microsoft patterns & practices for ASMX web services.</p>
|
<p>I can't give you specifics for .NET (which seems to be what you're asking), but SOAP provides a mechanism for expressing strongly-typed exceptions. The SOAP fault element can have an optional detail sub-element, and this can contain arbitrary XML documents, such as your GetListError. These document types should be defined in the WSDL as a wsdl:fault inside the wsdl:operation.</p>
<p>The trick is persuading the web service stack to turn an exception (which is the "correct" way of writing your business logic) into a properly marshalled fault detail. And I can't help you with that bit.</p>
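<p>To make that concrete, a SOAP 1.1 fault carrying such a custom detail payload might look like the following sketch (the GetListError element, its children, and the namespace are illustrative assumptions, not a fixed schema):</p>

```xml
<soap:Fault>
  <faultcode>soap:Server</faultcode>
  <faultstring>Could not retrieve the list</faultstring>
  <detail>
    <GetListError xmlns="urn:example:getlist">
      <ErrorCode>LIST_NOT_FOUND</ErrorCode>
      <Message>No list matches param1</Message>
    </GetListError>
  </detail>
</soap:Fault>
```

<p>Declaring GetListError as a wsdl:fault for the operation lets schema-aware clients deserialize the detail into a typed exception instead of scraping fault strings.</p>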
| 39,966
|
<p>I'm a longtime Java programmer working on a PHP project, and I'm trying to get PHPUnit up and working. When unit testing in Java, it's common to put test case classes and regular classes into separate directories, like this -</p>
<pre><code>/src
MyClass.java
/test
MyClassTest.java
</code></pre>
<p>and so on.</p>
<p>When unit testing with PHPUnit, is it common to follow the same directory structure, or is there a better way to lay out test classes? So far, the only way I can get the "include("MyClass.php")" statement to work correctly is to include the test class in the same directory, but I don't want to include the test classes when I push to production.</p>
|
<p>I think it's a good idea to keep your files separate. I normally use a folder structure like this:</p>
<pre><code>/myapp/src/ <- my classes
/myapp/tests/ <- my tests for the classes
/myapp/public/ <- document root
</code></pre>
<p>In your case, for including the class in your test file, why not just pass the whole path to the include statement?</p>
<pre><code>include('/path/to/myapp/src/MyClass.php');
</code></pre>
<p>or </p>
<pre><code>include('../src/MyClass.php');
</code></pre>
|
<p>I put my test cases next to the source in a file with the same name but a .phpt extension. The deployment script simply filters out *.phpt when pushing to production.</p>
| 9,135
|
<p>I am working on writing a Highly Available agent for JBoss Application Server to run on Solaris Open HA Cluster. As I don't know much of the JBoss AS, can someone please tell me how can I probe the status of the application server?</p>
<p>I want to know the health of the application server, for example whether it is currently running or not.</p>
|
<p>Out of the box, JBoss has a JMX console that provides information about the modules loaded into the micro-kernel and the services that are running. This application is usually available at <a href="http://hostname:8080/jmx-console" rel="nofollow noreferrer">http://hostname:8080/jmx-console</a>, and you could conceivably use its presence or absence as an indicator of whether your JBoss server is running.</p>
<p>In the context of an HA cluster, you probably have a load-balancing switch or other layer 4-7 aware device in front of the nodes. If you want to detect the status of each node in the cluster, you'll need to make sure you're using the node's local IP address.</p>
<p>Of course, most of the processes that run in the micro-kernel are JMX enabled ... if you want to know the status of an individual process, just ask it!</p>
|
<p>Thanks for the answer. </p>
<p>One way to check the status of the JBoss server would be to probe a JBoss port (it can be the port of any essential service, e.g. the JNDI service) with a TCP socket request. If the port accepts the connection, JBoss is up; otherwise it is down.</p>
<p>This, I figured out, is a more programmable way of checking the status. :P Especially in the case of Solaris OHAC, where JBoss would reside on a global filesystem.</p>
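A minimal sketch of such a probe in Python (the host and port here are assumptions; substitute whichever essential JBoss port you decide to monitor, e.g. 1099 for JNDI):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the default JNDI port on the local node.
# is_port_open("127.0.0.1", 1099)
```

Keep in mind that an open port only proves the listener is alive, not that the deployed applications are healthy, so a real HA probe may want to combine this with an HTTP check against the JMX console.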
| 24,459
|
<p>Part of my application maps resources stored in a number of locations onto web URLs like this:</p>
<pre><code>http://servername/files/path/to/my/resource/
</code></pre>
<p>The resources location is modelled after file paths and as a result there can be an unlimited level of nesting. Is it possible to construct an MVC route that matches this so that I get the path in its entirety passed into my controller? Either as a single string or possibly as an params style array of strings.</p>
<p>I guess this requires a match on the files keyword, followed by some sort of wildcard. Though I have no idea if MVC supports this. </p>
|
<p>A route like</p>
<pre><code>"Files/{*path}"
</code></pre>
<p>will get the path as a single string. The <code>*</code> designates it as a wildcard mapping and it will consume the whole URL after <code>"Files/"</code>.</p>
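Conceptually, the catch-all parameter behaves like a greedy trailing capture. A small Python regex analogy (only an illustration of the matching behaviour, not how ASP.NET implements routing):

```python
import re

# "Files/{*path}" seen as a regex: capture everything after "Files/".
route = re.compile(r"^Files/(?P<path>.*)$")

def match_path(url):
    """Return the captured path segment as one string, or None on a miss."""
    m = route.match(url)
    return m.group("path") if m else None
```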
|
<p>For more information on ASP.NET's Routing feature, please see MSDN:</p>
<p><a href="http://msdn.microsoft.com/en-us/library/cc668201.aspx" rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/library/cc668201.aspx</a></p>
<p>And for the "catch-all" parameters you want to use, see the section under "Handling a Variable Number of Segments".</p>
| 41,858
|
<p>Requirements:</p>
<ul>
<li>free, preferably open-source</li>
<li>implemented in one of the .NET managed langs</li>
</ul>
<p>Google found these:</p>
<ul>
<li><a href="http://www.codeproject.com/KB/recipes/diffengine.aspx" rel="noreferrer">A Generic, Reusable Diff
Algorithm</a> on codeproject</li>
<li><a href="http://www.mathertel.de/Diff/" rel="noreferrer">An O(ND) Difference Algorithm for C#</a></li>
<li><a href="http://razor.occams.info/code/diff/" rel="noreferrer">Diff/Merge/Patch Library for C#/.NET</a> by Joshua Tauberer</li>
</ul>
<p>EDIT:</p>
<p>No apps please, only libraries.</p>
|
<p>You can grab <a href="https://stackoverflow.com/questions/848246/how-can-i-use-javascript-within-an-excel-macro">the COM component that uses Google's Diff/Patch/Match</a>. It works from .NET. </p>
<p><strong>Update, 2010 Oct 17</strong>: The <a href="http://code.google.com/p/google-diff-match-patch/" rel="noreferrer">Google Diff/Patch/Merge code</a> has been ported to C#. The COM component still works, but if you're coming from .NET, you'll wanna use the .NET port directly. </p>
|
<p>GitSharp includes a diff engine based on Myers diff. Take a look at the demo, which implements a simple WPF diff viewer based on the Diff.Sections collection: <a href="http://www.eqqon.com/index.php/GitSharp#GitSharp.Demo" rel="noreferrer">http://www.eqqon.com/index.php/GitSharp#GitSharp.Demo</a></p>
| 16,745
|
<p>I'm thinking of starting a wiki, probably on a low cost LAMP hosting account. I'd like the option of exporting my content later in case I want to run it on <code>IIS/ASP.NET</code> down the line. I know in the weblog world, there's an open standard called BlogML which will let you export your blog content to an <strong>XML</strong> based format on one site and import it into another. Is there something similar with wikis?</p>
|
<p>The correct answer is ... "it depends".</p>
<p>It depends on which wiki you're using or planning to use. I've used various ones over the years: <a href="http://moinmo.in/" rel="noreferrer">MoinMoin</a> was OK and uses files rather than a database; <a href="https://help.ubuntu.com/" rel="noreferrer">Ubuntu</a> seem to like it. <a href="http://www.mediawiki.org/wiki/MediaWiki" rel="noreferrer">MediaWiki</a> everyone knows about, and <a href="http://jamwiki.org" rel="noreferrer">JAMWiki</a> is a Java clone(ish) of MediaWiki that aims to be markup-compatible with it. Both use databases, and you can generally connect whichever database you want; JAMWiki is pre-configured to use an internal HSQLDB instance.</p>
<p>I recently converted about 80 pages from a MoinMoin wiki into JAMWiki pages, and this was probably 90% handled by a tiny Perl script I found somewhere (I'll provide a link if I can find it again). The other 10% was unfortunately a by-hand experience (they were of the utmost importance, being recipes for the missus) ;-)</p>
<p>I also recently set up a MediaWiki instance for work and that took all of about 8 minutes to do. So that'd be my choice.</p>
|
<p>I haven't heard of WikiML.</p>
<p>I think your biggest obstacle is gonna be converting one wiki markup to another. For example, some wikis use Markdown (which is what Stack Overflow uses), others use another markup syntax (e.g. BBCode), etc. The bottom line is - assuming the contents are stored in a database, it's not impossible to export and parse them to make them "fit" in another system. It might just be a pain in the ass.</p>
<p>And if the contents are not in a database, it's gonna be a royal pain in the ass. :D</p>
<p>Another solution would be to stay with the same system. I am not sure what the reason is for changing the technology later on. It's not like a growing project requires IIS/ASP.NET all of a sudden. (It might just be the other way around.) But for example, if you could stick with PHP for a while, you could also run that on IIS.</p>
| 5,980
|
<p>I want to use the <a href="http://nltk.sourceforge.net/index.php/Main_Page" rel="noreferrer">nltk</a> libraries in C++.</p>
<p>Is there a glue language/mechanism I can use to do this?</p>
<p>Reason: I haven't done any serious programming in C++ for a while and want to revise NLP concepts at the same time.</p>
<p>Thanks</p>
|
<p>You can also try the <a href="http://www.boost.org/doc/libs/1_37_0/libs/python/doc/index.html" rel="noreferrer">Boost.Python</a> library, which has <a href="http://www.boost.org/doc/libs/1_37_0/libs/python/doc/v2/callbacks.html" rel="noreferrer">this capability</a>. This library is mainly used to expose C++ to Python, but it can be used the other way around.</p>
|
<p>I haven't tried directly calling Python functions from C++, but here are some alternative ideas...</p>
<p>Generally, it's easier to call C++ code from a high-level language like Python than the other way around. If you're interested in this approach, you could create a C++ codebase and access it from Python. You could either use the C API provided by Python directly (it should be described somewhere in the Python docs) or use a tool like SWIG to automate the C++-to-Python wrapping process.</p>
<p>Depending on how you want to use the library, you could alternatively create Python scripts which you call from C++ with the <a href="http://www.opengroup.org/onlinepubs/000095399/functions/exec.html" rel="nofollow noreferrer">exec*</a> functions.</p>
| 38,004
|
<p>So I have a small C# app that needs to periodically check the contents of directories on multiple machines on the network. I thought I could just read \\hostname\C$ as a directory path, but with the normal Directory class there doesn't seem to be a way to authenticate against the other servers so you can access the hidden share.
I'm sure there's an easy way to do this that I've overlooked, but at the moment I'm a bit stumped.</p>
|
<p>From <a href="http://bytes.com/forum/thread689145.html" rel="nofollow noreferrer">http://bytes.com/forum/thread689145.html</a>:</p>
<blockquote>
<p>All processes run in the context of a
logged-in user account. If you want to
open a file on another computer, your
application must be running in the
context of a user that has permissions
to open files on that machine. You can
do this with Impersonation.</p>
</blockquote>
<p>The easiest way seems to be to give the current user appropriate rights on the other machines.</p>
|
<p>Are you looking for a way to set the current user at run-time?</p>
<p>If not, as long as the user running the process has access to those machines, this will work for you:</p>
<pre><code>DirectoryInfo di = new DirectoryInfo(@"\\machineName\c$\temp");
FileInfo[] files = di.GetFiles();
foreach (FileInfo f in files)
{
Debug.WriteLine(f.Name);
}
</code></pre>
| 16,954
|
<p>I have a string. I need to replace all instances of a given array of strings from this original string - how would I do that?</p>
<p>Currently I am using...</p>
<pre><code>var inputString = "this is my original string.";
var replacement = "";
var pattern = string.Join("|", arrayOfStringsToRemove);
inputString = Regex.Replace(inputString, pattern, replacement);
</code></pre>
<p>This works fine, but unfortunately it breaks down when someone tries to remove a character that has a special meaning in the regex.</p>
<p>How should I do this? Is there a better way?</p>
|
<p>Build the pattern using Regex.Escape:</p>
<pre><code>StringBuilder pattern = new StringBuilder();
foreach (string s in arrayOfStringsToRemove)
{
    pattern.Append("(");
    pattern.Append(Regex.Escape(s));
    pattern.Append(")|");
}
string result = Regex.Replace(inputString,
    pattern.ToString(0, pattern.Length - 1), // remove trailing |
    replacement);
</code></pre>
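For comparison, the same escape-then-join idea sketched in Python (the names are my own; `re.escape` plays the role of `Regex.Escape`):

```python
import re

def remove_all(text, strings_to_remove):
    """Remove every occurrence of the given literal strings in one pass,
    escaping each so metacharacters like '.' or '|' are matched literally."""
    pattern = "|".join(re.escape(s) for s in strings_to_remove)
    return re.sub(pattern, "", text)
```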
|
<p>You need to escape special characters in the pattern with a backslash:</p>
<pre><code>\
</code></pre>
<p>Sometimes (when the backslash itself sits inside a C# string literal) you may need to use two backslashes:</p>
<pre><code>\\
</code></pre>
| 46,803
|
<p>I have a Prusa i3 MK3 or maybe it was upgraded to a i3 MK3S.</p>
<p>How can I figure out which one I have?</p>
<p>The <a href="https://shop.prusa3d.com/en/original-prusa-i3-mk3s/1390-original-prusa-i3-mk3-to-mk3s-upgrade-kit.html" rel="nofollow noreferrer">description of the upgrade kit</a> talks about</p>
<ul>
<li>the SuperPINDA (how is it different from the old one?)</li>
<li>a number of small changes (which?)</li>
<li>improved plastic parts (which parts, how are they different?)</li>
<li>metal clips (where to look for them?)</li>
<li>a number of minor changes to the extruder plastic parts (which ones, before and after?)</li>
</ul>
<p>I'd like to figure that out without taking the printer apart.</p>
|
<p><a href="https://i.stack.imgur.com/1ldX4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1ldX4.png" alt="Left right" /></a></p>
<p><a href="https://i.stack.imgur.com/TtrmF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TtrmF.png" alt="Filament sensors" /></a></p>
<p>The MK3 has 4 pins on the filament sensor, the MK3S has only 3 pins. While you need to take the extruder apart to see that, you can also have a look at the cable instead:</p>
<p><a href="https://i.stack.imgur.com/5C6JE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5C6JE.png" alt="Filament sensor cable" /></a></p>
<p>Matching the sensor, the MK3 has a 4-strand cable that includes a blue wire, and the MK3S has a 3-strand cable without blue.</p>
<p><a href="https://i.stack.imgur.com/nuCqp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nuCqp.png" alt="Bearing clips" /></a></p>
<p>Looking under the heatbed, you'll find the MK3 bearings are fixed with U-bolts while the MK3S has broader bearing clips.</p>
|
<p>On the LCD, the MK3 will show <strong>Original Prusa MK3 OK</strong>, while the MK3S/+ will show <strong>Original Prusa MK3S OK</strong></p>
| 1,985
|
<p>If something goes wrong in a WCF REST call, such as the requested resource is not found, how can I play with the HTTP response code (setting it to something like HTTP 404, for example) in my OperationContract method?</p>
|
<p>There is a <a href="http://msdn.microsoft.com/en-us/library/system.servicemodel.web.weboperationcontext.aspx" rel="noreferrer"><code>WebOperationContext</code></a> that you can access and it has a <a href="http://msdn.microsoft.com/en-us/library/system.servicemodel.web.weboperationcontext.outgoingresponse.aspx" rel="noreferrer"><code>OutgoingResponse</code></a> property of type <a href="http://msdn.microsoft.com/en-us/library/system.servicemodel.web.outgoingwebresponsecontext.aspx" rel="noreferrer"><code>OutgoingWebResponseContext</code></a> which has a <a href="http://msdn.microsoft.com/en-us/library/system.servicemodel.web.outgoingwebresponsecontext.statuscode.aspx" rel="noreferrer"><code>StatusCode</code></a> property that can be set.</p>
<pre><code>WebOperationContext ctx = WebOperationContext.Current;
ctx.OutgoingResponse.StatusCode = System.Net.HttpStatusCode.OK;
</code></pre>
|
<p>This did not work for me for WCF Data Services. Instead, you can use DataServiceException in the case of Data Services. I found the following post useful:
<a href="http://social.msdn.microsoft.com/Forums/en/adodotnetdataservices/thread/f0cbab98-fcd7-4248-af81-5f74b019d8de" rel="nofollow">http://social.msdn.microsoft.com/Forums/en/adodotnetdataservices/thread/f0cbab98-fcd7-4248-af81-5f74b019d8de</a></p>
| 16,951
|
<p>So I was reading these Asp.Net <a href="http://www.hanselman.com/blog/ASPNETInterviewQuestions.aspx" rel="nofollow noreferrer">interview questions</a> at Scott Hanselman's blog and I came across this question. Can anyone shed some light of what he's talking about.</p>
|
<pre><code><asp:LinkButton ID="lbEdit" CssClass="button"
OnClientClick="javascript:alert('do something')"
onclick="OnEdit" runat="server">Edit</asp:LinkButton>
</code></pre>
<p>The <code>OnClientClick</code> attribute means you can add some JavaScript without losing the PostBack functionality - that would be my answer in the interview.</p>
|
<p>I think what he's asking here is how you wire up javascript functions to work hand in hand with your ASP.NET postback functionality.</p>
<p>i.e. How can I trigger a control's event using my own JavaScript?</p>
<p>The ASP.NET class library contains a <code>ClientScript</code> class - Found in the <code>System.Web.UI.Page</code> class - which enables you to programmatically add JavaScript to your ASP.NET page.</p>
<p>This contains a method called <code>GetPostBackEventReference</code> which will generate the <code>__doPostBack</code> script ASP.NET utilises to trigger events wired up to your web controls.</p>
<p>Hope that makes sense</p>
| 12,513
|
<p>Is there a way to use a non-member non-friend function on an object using the same "dot" notation as member functions?</p>
<p>Can I pull a (any) member out of a class, and have users use it in the same way they always have?</p>
<p>Longer Explanation:</p>
<p><a href="http://www.ddj.com/cpp/184401197" rel="nofollow noreferrer">Scott Meyers</a>, Herb Sutter, et all, argue that non-member non-friend functions are a part of an object's interface, and can improve encapsulation. I agree with them.</p>
<p>However, after recently reading this article: <a href="http://www.gotw.ca/gotw/084.htm" rel="nofollow noreferrer">http://www.gotw.ca/gotw/084.htm</a> I find myself questioning the syntax implications.</p>
<p>In that article, Herb proposes having a single <code>insert</code>, <code>erase</code>, and <code>replace</code> member, and several non-member non-friend functions of the same name.</p>
<p>Does this mean, as I think it does, that Herb thinks some functions should be used with the dot notation, and others as a global function?</p>
<pre><code>std::string s("foobar");
s.insert( ... ); /* One like this */
insert( s , ...); /* Others like this */
</code></pre>
<p>Edit:</p>
<p>Thanks everyone for your very useful answers, however, I think the point of my question has been overlooked.</p>
<p>I specifically did not mention the specific case of operators, and how they retain the "natural" notation. Nor that you should wrap everything in a namespace. These things are written in the article I linked to.</p>
<p><strong>The question itself was:</strong></p>
<p>In the article, Herb suggests that one insert() method be a member, while the rest are non-member non-friend functions.</p>
<p>This implies that to use one form of insert() you have to use dot notation, while for the others, you do not.</p>
<p><strong>Is it just me, or does that sound crazy?</strong></p>
<p>I have a hunch that perhaps you can use a single syntax. (I'm thinking of how Boost::function can take a *this parameter for mem_fun.)</p>
|
<p>Yes, it means that part of the interface of an object is composed of non member functions.</p>
<p>And you're right about the fact it involves the use of the following notation, for an object of class T:</p>
<pre><code>void T::doSomething(int value) ; // method
void doSomething(T & t, int value) ; // non-member non-friend function
</code></pre>
<p>That is, if you want the doSomething function/method to return void and take an int parameter called "value".</p>
<p>But two things are worth mentioning.</p>
<p>The first is that the functions that are part of the interface of a class should be in the same namespace as that class. This is yet another reason (if another reason was needed) to use namespaces, if only to "put together" an object and the functions that are part of its interface.</p>
<p>The good part is that it promotes good encapsulation. The bad part is that it uses a function-like notation I, personally, dislike a lot.</p>
<p>The second is that operators are not subject to this limitation. For example, the += operator for a class T can be written in two ways:</p>
<pre><code>T & operator += (T & lhs, const T & rhs)
{
    // do something like lhs.value += rhs.value
    return lhs ;
}

T & T::operator += (const T & rhs)
{
    // do something like this->value += rhs.value
    return *this ;
}
</code></pre>
<p>But both notations are used as:</p>
<pre><code>void doSomething(T & a, T & b)
{
a += b ;
}
</code></pre>
<p>which is, from an aesthetic viewpoint, quite better than the function-like notation.</p>
<p>Now, it would be a very cool syntactic sugar to be able to write a function from the same interface, and still be able to call it through the "." notation, like in C#, as mentioned by michalmocny.</p>
<h2>Edit: Some examples</h2>
<p>Let's say I want, for whatever reason, to create two "Integer-like" classes.
The first will be IntegerMethod:</p>
<pre><code>class IntegerMethod
{
public :
IntegerMethod(const int p_iValue) : m_iValue(p_iValue) {}
int getValue() const { return this->m_iValue ; }
void setValue(const int p_iValue) { this->m_iValue = p_iValue ; }
IntegerMethod & operator += (const IntegerMethod & rhs)
{
this->m_iValue += rhs.getValue() ;
return *this ;
}
IntegerMethod operator + (const IntegerMethod & rhs) const
{
return IntegerMethod (this->m_iValue + rhs.getValue()) ;
}
std::string toString() const
{
std::stringstream oStr ;
oStr << this->m_iValue ;
return oStr.str() ;
}
private :
int m_iValue ;
} ;
</code></pre>
<p>This class has 6 methods which can access its internals.</p>
<p>The second is IntegerFunction:</p>
<pre><code>class IntegerFunction
{
public :
IntegerFunction(const int p_iValue) : m_iValue(p_iValue) {}
int getValue() const { return this->m_iValue ; }
void setValue(const int p_iValue) { this->m_iValue = p_iValue ; }
private :
int m_iValue ;
} ;
IntegerFunction & operator += (IntegerFunction & lhs, const IntegerFunction & rhs)
{
lhs.setValue(lhs.getValue() + rhs.getValue()) ;
return lhs ;
}
IntegerFunction operator + (const IntegerFunction & lhs, const IntegerFunction & rhs)
{
return IntegerFunction(lhs.getValue() + rhs.getValue()) ;
}
std::string toString(const IntegerFunction & p_oInteger)
{
std::stringstream oStr ;
oStr << p_oInteger.getValue() ;
return oStr.str() ;
}
</code></pre>
<p>It has only 3 methods and, as such, reduces the quantity of code that can access its internals. It also has 3 non-member non-friend functions.</p>
<p>The two classes can be used as:</p>
<pre><code>void doSomething()
{
{
IntegerMethod iMethod(25) ;
iMethod += 35 ;
std::cout << "iMethod : " << iMethod.toString() << std::endl ;
IntegerMethod result(0), lhs(10), rhs(20) ;
result = lhs + 20 ;
// result = 10 + rhs ; // WON'T COMPILE
result = 10 + 20 ;
result = lhs + rhs ;
}
{
IntegerFunction iFunction(125) ;
iFunction += 135 ;
std::cout << "iFunction : " << toString(iFunction) << std::endl ;
IntegerFunction result(0), lhs(10), rhs(20) ;
result = lhs + 20 ;
result = 10 + rhs ;
result = 10 + 20 ;
result = lhs + rhs ;
}
}
</code></pre>
<p>When we compare the operator use ("+" and "+="), we see that making an operator a member or a non-member has no difference in its apparent use. Still, there are two differences:</p>
<ol>
<li><p>the member has access to all its internals. The non-member must use public member methods</p>
</li>
<li><p>For some binary operators, like + and *, it is interesting to have type promotion, because in one case (i.e., the lhs promotion, as seen above) it won't work for a member method.</p>
</li>
</ol>
<p>Now, if we compare the non-operator use ("toString"), we see the member non-operator version is more "natural" for Java-like developers than the non-member function. Despite this unfamiliarity, for C++ it is important to accept that, despite its syntax, the non-member version is better from an OOP viewpoint because it does not have access to the class internals.</p>
<p>As a bonus: If you want to add an operator (resp. a non-operator function) to an object which has none (for example, the GUID structure of <code>windows.h</code>), then you can, without needing to modify the structure itself. For the operator, the syntax will be natural, and for the non-operator, well...</p>
<p><i>Disclaimer: Of course these classes are dumb: the set/getValue methods are almost direct access to the internals. But replace the Integer with a String, as proposed by Herb Sutter in <a href="http://www.gotw.ca/gotw/084.htm" rel="nofollow noreferrer">Monoliths "Unstrung"</a>, and you'll see a more real-like case.</i></p>
|
<p>Yes, they should be either global or namespace-scoped.
Non-member non-friend functions look much prettier in C# where they do use dot notation (they are called <a href="http://en.wikipedia.org/wiki/Extension_method" rel="nofollow noreferrer">extension methods</a>).</p>
| 43,068
|
<p>Is there a way to make sure a (large, 300K) background picture is always displayed first BEFORE any other content is shown on the page?</p>
<p>On the server we have access to PHP.</p>
|
<p>All the html content is served and parsed before it even starts to fetch the image, so you have a problem before you start. </p>
<p>You could circumvent this by programmatically hiding the content, and then triggering a "show" of it when the image is loaded. </p>
<p>i.e.:</p>
<pre><code><html>
<body>
<img src="background.png"
     onload="document.getElementById('content').style.display = 'block';" />
<div id="content" style="display:none;">
...
</div>
</body>
</html>
</code></pre>
|
<p>I think the only way you'll be able to do this is with javascript - Send the user HTML that only contains your background image and some javascript that either waits for a certain amount of time before displaying the rest of the content or uses AJAX to retrieve the rest of the content (essentially the same thing).</p>
| 24,484
|
<p>I was trying to use the slime-connect function to get access to a remote server with sbcl. I followed all the steps from the slime.mov movie from <a href="http://www.guba.com/watch/30000548671" rel="noreferrer">Marco Baringer,</a> but I got stuck when creating the ssh connection for slime. This is after already starting the swank server on the remote machine. I did it like this:</p>
<p><code>ssh -L 4005:127.0.0.1:4005 user@server.com</code></p>
<p>And I got these errors, on local SLIME:</p>
<p>Lisp connection closed unexpectedly: connection broken by remote peer </p>
<p>...and on the remote server: </p>
<p>channel 3: open failed: connect failed: Connection refused</p>
<p>What could possibly be wrong?</p>
|
<p>I don't know, but you can try to connect to swank on the remote machine locally.</p>
<pre><code>ssh user@server.com
telnet 127.0.0.1 4005
</code></pre>
<p>Maybe there you will find errors. Also, you can try localhost:4005 instead of 127.0.0.1 and check whether the localhost interface is properly configured.</p>
|
<p>For me the problem was that the <code>slime</code> (v2.22) function from Emacs started swank with an additional argument, <code>from-emacs t</code>, which <code>swank-loader.lisp</code> (v2.22) didn't support.</p>
<p>What worked for me was editing <code>slime-v2.22/swank-loader.lisp:init</code> to accept one new argument, <code>from-emacs</code>, which of course isn't used in the function's body, because I don't know how this argument should be treated. But <code>slime</code> now starts fine and is workable.</p>
<p>Also, while starting <code>slime</code> I receive a warning about incompatible versions: slime v2.23 and swank v2.22. But as I checked with <code>list-packages</code> and simply by folder names - I have <code>slime</code> and <code>swank</code> both at version v2.22. That's confusing to me right now.</p>
<p>If somebody knows details about it, please comment.</p>
| 48,731
|
<p>I've read that columns chosen for indices should discriminate well among the rows, i.e. index columns should not contain a large number of rows with the same value. This would suggest that booleans or an enum such as gender would be a bad choice for an index.</p>
<p>But say I want to find users by gender and in my particular database, only 2% of the users are female, then in that case it seems like the gender column would be a useful index when getting the female users, but not when getting all the male users.</p>
<p>So would it generally be a good idea to put an index on such a column?</p>
|
<p>Indexing a low-cardinality column to improve search performance is common in my world. Oracle supports a "bitmapped index" which is designed for these situations. See <a href="http://www.dba-oracle.com/art_9i_indexing.htm" rel="nofollow noreferrer">this article</a> for a short overview. </p>
<p>Most of my experience is with Oracle, but I assume that other RDBMS' support something similar.</p>
|
<p>This is a case where I would let the server statistics inform me of when to create the index. Unless you know that this query is going to predominate or that running such a query would not meet your performance goals a priori, then creating the index prematurely may just cost you performance rather than increase it. Also, you may want to think about how you would actually use the query. In this case, my guess would be that you'd typically be doing some sort of aggregation based on this column rather than simply selecting the users who meet the criteria. In that event, you'll be doing the table scan anyway and the index won't buy you anything.</p>
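To put rough numbers on that selectivity trade-off, here is a toy cost model in Python (the figures are illustrative assumptions, not measurements from any particular RDBMS, and real optimizers weigh many more factors such as clustering and I/O patterns):

```python
def estimated_rows_read(total_rows, selectivity, use_index):
    """Crude model: a full scan reads every row, while an index lookup
    reads roughly only the matching rows (index traversal cost ignored)."""
    return round(total_rows * selectivity) if use_index else total_rows

# With 1,000,000 users and 2% female, an indexed lookup touches ~20,000
# rows versus 1,000,000 for a scan; the 98% male side barely benefits.
```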
| 39,098
|
<p>What is the best compression algorithm that allows random reads/writes in a file?</p>
<p>I know that any adaptive compression algorithms would be out of the question.</p>
<p>And I know huffman encoding would be out of the question.</p>
<p>Does anyone have a better compression algorithm that would allow random reads/writes?</p>
<p>I think you could use any compression algorithm if you write it in blocks, but ideally I would not like to have to decompress a whole block at a time. But if you have suggestions on an easy way to do this and how to know the block boundaries, please let me know. If this is part of your solution, please also let me know what you do when the data you want to read crosses a block boundary.</p>
<p>In the context of your answers please assume the file in question is 100GB, and sometimes I'll want to read the first 10 bytes, sometimes I'll want to read the last 19 bytes, and sometimes I'll want to read 17 bytes in the middle.</p>
|
<p>I am stunned at the number of responses that imply that such a thing is impossible.</p>
<p>Have these people never heard of "compressed file systems",
which have been around since before Microsoft was sued in 1993 by Stac Electronics over compressed file system technology?</p>
<p>I hear that <a href="http://en.wikibooks.org/wiki/Data_Compression/Dictionary_compression#LZS" rel="noreferrer">LZS</a> and <a href="http://en.wikibooks.org/wiki/Data_Compression/Dictionary_compression#LZJB" rel="noreferrer">LZJB</a> are popular algorithms for people implementing compressed file systems, which necessarily require both random-access reads and random-access writes.</p>
<p>Perhaps the simplest and best thing to do is to turn on file system compression for that file, and let the OS deal with the details.
But if you insist on handling it manually, perhaps you can pick up some tips by reading about <a href="http://www.informit.com/articles/article.aspx?p=26353&seqNum=4" rel="noreferrer">NTFS transparent file compression</a>.</p>
<p>Also check out:
<a href="https://stackoverflow.com/questions/429987/compression-formats-with-good-support-for-random-access-within-archives">"StackOverflow: Compression formats with good support for random access within archives?"</a></p>
|
<p>I don't know of any compression algorithm that allows random reads, never mind random writes. If you need that sort of ability, your best bet would be to compress the file in chunks rather than as a whole. </p>
<p>e.g.<br>We'll look at the read-only case first. Let's say you break up your file into 8K chunks. You compress each chunk and store each compressed chunk sequentially. You will need to record where each compressed chunk is stored and how big it is. Then, say you need to read N bytes starting at offset O. You will need to figure out which chunk it's in (O / 8K), decompress that chunk and grab those bytes. The data you need may span multiple chunks, so you have to deal with that scenario.</p>
<p>Things get complicated when you want to be able to write to the compressed file. You have to deal with compressed chunks getting bigger and smaller. You may need to add some extra padding to each chunk in case it expands (it's still the same size uncompressed, but different data will compress to different sizes). You may even need to move chunks if the compressed data is too big to fit back in the original space it was given.</p>
<p>This is basically how compressed file systems work. You might be better off turning on file system compression for your files and just read/write to them normally.</p>
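The read-only scheme described above is easy to sketch generically. Below is a minimal illustration (Python with zlib, purely as an assumption for the sketch; the 8K chunk size matches the example, and the in-memory chunk list stands in for the on-disk layout plus its index of compressed sizes):

```python
import zlib

CHUNK = 8192  # uncompressed chunk size (the 8K from the example above)

def compress_chunks(data):
    """Compress data chunk by chunk; also build the index you would
    persist on disk (the compressed size of each chunk)."""
    chunks, index = [], []
    for i in range(0, len(data), CHUNK):
        c = zlib.compress(data[i:i + CHUNK])
        chunks.append(c)
        index.append(len(c))
    return chunks, index

def read_range(chunks, offset, n):
    """Read n bytes starting at uncompressed offset `offset`.
    Only the chunks actually touched are decompressed; the range is
    assumed to lie within the file."""
    out = b""
    while n > 0:
        ci, within = divmod(offset, CHUNK)   # which chunk, and where in it
        plain = zlib.decompress(chunks[ci])
        piece = plain[within:within + n]     # may stop at a chunk boundary
        out += piece
        offset += len(piece)
        n -= len(piece)
    return out
```

A writable variant would additionally have to cope with chunks changing compressed size, which is where the padding and relocation mentioned above come in.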
| 29,261
|
<p>We are developing a web application which is available in 3 languages. </p>
<p>There are key-value pairs to translate everything. At the moment we use Excel (key, german, french, english) for this. But this does not work well: if more than one person edits the file, there is no way to automatically merge the different versions.</p>
<p>Is there a good (and free) tool which can handle this job?</p>
<p>--- additional information ---</p>
<p>(This is a STRUTS application.) But the question is how to manage this kind of information in general, or at least in a convenient way that supports multiple users editing the single file at once ("<em>mergeable</em>" file types).</p>
|
<p>Why not use <a href="http://www.gnu.org/software/gettext/" rel="nofollow noreferrer">gettext</a> and manage separate .po files? See that <a href="http://mel.melaxis.com/devblog/2005/08/06/localizing-php-web-sites-using-gettext/" rel="nofollow noreferrer">blog entry</a>.</p>
|
<p>Good question.
There are some best practices, depending on what you actually code in (Java, MS Windows C#, etc.).</p>
<p>I solved this (but I think there must be a better way) by using a SQL db instead of an Excel file, and wrote a plug-in for VS (VB6, ..., emacs) that was able to insert new keys into the db without a round trip through version control. The key is the developer's best-guess name for the label (key => save, sv => "spara", no => "", en => "save").</p>
<p>This db can then be generated as a module, class, object, or text file appropriate to the target platform, and accessed in whatever way the IDE supports; in C# that might look like <code>bt.label = corelang.save;</code>.</p>
<p>Someone else can then do all the language stuff, and then we just update the db and rerun the generation to the platform resources.</p>
| 33,169
|
<p>In <code>C</code> can I <strong>pass a multidimensional array</strong> to a function <strong>as a single argument</strong> when I don't know what the dimensions of the array are going to be?</p>
<p>Besides, my multidimensional array may contain types other than strings.</p>
|
<p>You can do this with any data type. Simply make it a pointer-to-pointer:</p>
<pre><code>typedef struct {
int myint;
char* mystring;
} data;
data** array;
</code></pre>
<p>But don't forget you still have to malloc the variable, and it does get a bit complex:</p>
<pre><code>//initialize
int x,y,w,h;
w = 10; //width of array
h = 20; //height of array
//malloc the 'y' dimension
array = malloc(sizeof(data*) * h);
//iterate over 'y' dimension
for(y=0;y<h;y++){
//malloc the 'x' dimension
array[y] = malloc(sizeof(data) * w);
//iterate over the 'x' dimension
for(x=0;x<w;x++){
//malloc the string in the data structure
array[y][x].mystring = malloc(50); //50 chars
//initialize
array[y][x].myint = 6;
strcpy(array[y][x].mystring, "w00t");
}
}
</code></pre>
<p>The code to deallocate the structure looks similar - don't forget to call free() on everything you malloced! (Also, in robust applications you should <a href="http://www.google.com/search?q=check+malloc+return" rel="noreferrer">check the return of malloc()</a>.)</p>
<p>Now let's say you want to pass this to a function. You can still use the double pointer, because you probably want to do manipulations on the data structure, not the pointer to pointers of data structures:</p>
<pre><code>int whatsMyInt(data** arrayPtr, int x, int y){
return arrayPtr[y][x].myint;
}
</code></pre>
<p>Call this function with:</p>
<pre><code>printf("My int is %d.\n", whatsMyInt(array, 2, 4));
</code></pre>
<p>Output:</p>
<pre><code>My int is 6.
</code></pre>
|
<pre><code>#include <cstdio>

int matmax(int **p, int dim) // p - matrix, dim - dimension of the matrix
{
    return p[0][0];
}

int main()
{
    int *u[5]; // will be a 5x5 matrix
    for (int i = 0; i < 5; i++)
        u[i] = new int[5];

    u[0][0] = 1; // initialize u[0][0] - not mandatory
    // put data in u[][]

    printf("%d", matmax(u, 5)); // call to function

    for (int i = 0; i < 5; i++) // free what was allocated
        delete[] u[i];

    getchar(); // just to see the result
    return 0;
}
</code></pre>
| 2,606
|
<p>I have a string "1112224444"; it is a telephone number. I want to format it as 111-222-4444 before I store it in a file. It is in a data record, and I would prefer to be able to do this without assigning a new variable.</p>
<p>I was thinking:</p>
<pre><code>String.Format("{0:###-###-####}", i["MyPhone"].ToString() );
</code></pre>
<p>but that does not seem to do the trick.</p>
<p>** UPDATE **</p>
<p>Ok. I went with this solution</p>
<pre><code>Convert.ToInt64(i["Customer Phone"]).ToString("###-###-#### ####")
</code></pre>
<p>Now it gets messed up when the extension is less than 4 digits: it fills in the numbers from the right, so</p>
<pre><code>1112224444 333 becomes
11-221-244 3334
</code></pre>
<p>Any ideas?</p>
|
<p>I prefer to use regular expressions:</p>
<pre><code>Regex.Replace("1112224444", @"(\d{3})(\d{3})(\d{4})", "$1-$2-$3");
</code></pre>
|
<p>Here is an improved version of @Jon Skeet's answer, with null checks, written as an extension method.</p>
<pre class="lang-csharp prettyprint-override"><code>public static string ToTelephoneNumberFormat(this string value, string format = "({0}) {1}-{2}") {
if (string.IsNullOrWhiteSpace(value))
{
return value;
}
else
{
string area = value.Substring(0, 3) ?? "";
string major = value.Substring(3, 3) ?? "";
string minor = value.Substring(6) ?? "";
return string.Format(format, area, major, minor);
}
}
</code></pre>
| 22,859
|
<p>I am trying to establish a basic .NET Remoting communication between 2x 64bit windows machines. If Machine1 is acting as client and Machine2 as server, then everything works fine. The other way around the following exception occurs:</p>
<p>System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 172.16.7.44:6666</p>
<p>The server code:</p>
<pre><code>TcpChannel channel = new TcpChannel(6666);
ChannelServices.RegisterChannel(channel);
RemotingConfiguration.RegisterWellKnownServiceType(
typeof(MyRemotableObject),"HelloWorld",WellKnownObjectMode.Singleton);
</code></pre>
<p>The client code:</p>
<pre><code>TcpChannel chan = new TcpChannel();
ChannelServices.RegisterChannel(chan);
// Create an instance of the remote object
remoteObject = (MyRemotableObject)Activator.GetObject(
typeof(MyRemotableObject), "tcp://172.16.7.44:6666/HelloWorld");
</code></pre>
<p>Any idea whats wrong with my code?</p>
|
<p>Windows Firewall? (Question author says this is not it.)</p>
<p>To track down connection issues the standard approach applies (apply in any order):</p>
<ul>
<li>ping the machine</li>
<li>double check if some process really is listening in port 6666 (<code>netstat -an</code>)</li>
<li>telnet the machine on port 6666</li>
<li>try to use a different service on the machine.</li>
<li>check if some configuration upsets the server process listening on 6666 and causes it to refuse you. (don't know if that is possible with .NET remoting)</li>
<li>watch communication with the machine using a packet sniffer (Packetyzer, for example) to find out what's going on at the TCP/IP level.</li>
<li>maybe active network infrastructure components between server and client (layer-3 switches, firewalls, NAT-routers, whatever) are interfering</li>
</ul>
| 35,999
|
<p>I'm in need of a distributed file system that must scale to very large sizes (about 100TB realistic max). Filesizes are mostly in the 10-1500KB range, though some files may peak at about 250MB.</p>
<p>I very much like the thought of systems like GFS with built-in redundancy for backup which would - statistically - render file loss a thing of the past.</p>
<p>I have a couple of requirements:</p>
<ul>
<li>Open source</li>
<li>No SPOFs</li>
<li>Automatic file replication (that is, no need for RAID)</li>
<li>Managed client access</li>
<li>Flat namespace of files - preferably</li>
<li>Built in versioning / delayed deletes</li>
<li>Proven deployments</li>
</ul>
<p>I've looked seriously at MogileFS as it does fulfill most of the requirements. It does not have any managed clients, but it should be rather straight forward to do a port of the Java client. However, there is no versioning built in. Without versioning, I will have to do normal backups besides the file replication built into MogileFS.</p>
<p>Basically I need protection from a programming error that suddenly purges a lot of files it shouldn't have. While MogileFS does protect me from disk & machine errors by replicating my files over X number of devices, it doesn't save me if I do an unwarranted delete.</p>
<p>I would like to be able to specify that a delete operation doesn't actually take effect until after Y days. The delete will logically have taken place, but I can restore the file state for Y days, until it's actually deleted. Also, MogileFS does not have the ability to check for disk corruption during writes - though again, this could be added.</p>
<p>Since we're a Microsoft shop (Windows, .NET, MSSQL) I'd optimally like the core parts to be running on Windows for easy maintainability, while the storage nodes run *nix (or a combination) due to licensing.</p>
<p>Before I even consider rolling my own, do you have any suggestions for me to look at? I've also checked out HadoopFS, OpenAFS, Lustre & GFS - but neither seem to match my requirements.</p>
|
<p>Do you absolutely need to host this on your own servers? Much of what you need could be provided by Amazon S3. The delayed delete feature could be implemented by recording deletes to a SimpleDB table and running a garbage collection pass periodically to expunge files when necessary. </p>
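The delayed-delete scheme described above can be sketched independently of S3/SimpleDB. The following is a toy illustration (Python; the in-memory tombstone table and the grace-period constant are assumptions standing in for the real metadata store):

```python
import time

GRACE_SECONDS = 7 * 24 * 3600  # the "Y days" grace period (an assumption here)

class DelayedDeleteStore:
    """Toy store with tombstone-based delayed deletes: deletes are recorded
    first and only expunged later by a periodic garbage-collection pass."""

    def __init__(self):
        self.files = {}       # name -> contents
        self.tombstones = {}  # name -> time the delete was requested

    def put(self, name, data):
        self.files[name] = data
        self.tombstones.pop(name, None)  # re-writing cancels a pending delete

    def delete(self, name):
        self.tombstones[name] = time.time()  # logical delete only

    def get(self, name):
        if name in self.tombstones:
            return None  # logically deleted, but still restorable
        return self.files.get(name)

    def restore(self, name):
        return self.tombstones.pop(name, None) is not None

    def gc(self, now=None):
        """Periodic pass that physically expunges expired tombstones."""
        now = time.time() if now is None else now
        for name, t in list(self.tombstones.items()):
            if now - t > GRACE_SECONDS:
                self.files.pop(name, None)
                del self.tombstones[name]
```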
<p>There is still a single point of failure if you rely on a single internet connection. And of course you could consider Amazon themselves to be a point of failure but the failure rate is always going to be far lower because of scale.</p>
<p>And hopefully you realize the other benefits: the ability to scale to any capacity, and no need for IT staff to replace failed disks or systems. Usage costs will continually drop as disk capacity and bandwidth get cheaper (while disks you purchase depreciate in value).</p>
<p>It's also possible to take a hybrid approach: use S3 as a secure backend archive, cache "hot" data locally, and find a caching strategy that best fits your usage model. This can greatly reduce bandwidth usage and improve I/O, especially if data changes infrequently.</p>
<p>Downsides:</p>
<ul>
<li>Files on S3 are immutable: they can only be replaced entirely or deleted. This is great for caching, not so great for efficiency when making small changes to large files.</li>
<li>Latency and bandwidth are those of your network connection. Caching can help improve this, but you'll never get the same level of performance.</li>
</ul>
<p>Versioning would also be a custom solution, but could be implemented using SimpleDB along with S3 to track sets of revisions to a file. Overall, it really depends on your use case whether this would be a good fit.</p>
|
<p>You could try running a source control system on top of your reliable file system. The problem then becomes how to expunge old check-ins after your timeout. You can set up an Apache server with DAV_SVN and it will commit each change made through the DAV interface. I'm not sure how well this would scale with the large file sizes you describe.</p>
| 42,235
|
<p>Setup is SQL2005 SP2 with Reporting Services installed locally on Win2003 64-bit.
When users browse Report Manager at <a href="http://server/reports" rel="nofollow noreferrer">http://server/reports</a>, they get a login dialog for every request, but only if they use IE7. In Firefox everything works.</p>
<p>The site is in "local intranet" zone on IE.</p>
<p>It seems like an NTLM issue. I've tried reinstalling, changing permissions on the service account, and changing permissions on the SRS directory; nothing works.</p>
|
<p>After reading what the error code means thanks to the answer from VonC I understood where to look. The problem was a lot more obscure that it seems.</p>
<p>I looked into the configuration folder for Eclipse (logs are either written there or in the .metadata folder when something goes wrong), and I found a huge log file. Inside the file I found the following error:</p>
<pre><code>application org.eclipse.sdk not found
</code></pre>
<p>and the following exception, followed by a plugin name, several times:</p>
<pre><code>java.util.zip.ZipException: Too many open files
</code></pre>
<p>Several plugins could not be loaded, causing a cascade of missing dependencies that prevented Eclipse from launching. I searched the web for this exception and found the following bug description from SUN, which says that <a href="http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6423026" rel="nofollow noreferrer">Java 1.5.0 can not open more than 2,100 zip files</a>.</p>
<p>The problem started a while after I installed the BABEL project translations for Eclipse into the build computer. These are more than 900 fragments, containing translations for many plugins, one for each language. As I installed it on top of an existing eclipse installation, it seemed that it was not a problem to open them.... until I cleared the workspace for the builds. Then Eclipse wouldn't launch anymore. I went over the limit. It didn't help that the first thing I tried to fix the build was, again.... clearing everything.</p>
<p>Because I only use this computer for headless builds, I didn't realize that the problem was in Eclipse itself and I was looking inside the build process. I only realized when I looked into the log file.</p>
<p>After installing Java 1.6.0_11 I was able to launch Eclipse and go on with my build.</p>
|
<p>It should mean "<strong><em>ant</em></strong> <strong>build failed</strong>", meaning the headless ant script fails at some point.</p>
<p>You should check if you can catch the log/output generated by this script to analyze this ant session and see at what point the ant script fails.</p>
<hr>
<p>If it fails right away, it usually is because of:</p>
<ul>
<li>a change in rights (writing access), or </li>
<li>in environment variable modifications, or </li>
<li>in resource access (path non accessible). </li>
</ul>
<p>You also have to check if the computer is still in its original windows domain, and if the rights (admin ?) associated with the account running the ant script are still the same.</p>
| 48,905
|
<p>I'm re-designing an app I inherited that sends digital photos from a laptop to a web server. The idea is to take photos "out on the field" and have them instantly published on a web page (with some more fancy features).</p>
<p>Typical scenario<br>
1. Photos are transferred from the camera to the laptop using standard USB.<br>
2. The photos are processed in various ways. (Not important)<br>
3. Each photo is POSTed in small pieces (~64 kb each) using a webrequest to a standard Apache web server where it's merged together again.</p>
<p>The problem with the current design is that it often hangs when the network connection is unreliable. As we're using a mobile network (3G) and often end up out of coverage, I need a way to handle this properly.</p>
<p>My question is whether there's a better solution for doing this that won't make the app hang when the connection drops every now and then.</p>
<p>(Bonus question is how this could be properly unit tested without having to take a hike with the laptop.)</p>
<p>EDIT 2008-11-24:
I've now managed to set up a proper test environment for this using a combination of NetLimiter and TMnetsim (freeware). I tried setting 5 kb/sec and dropping 1% of all packets - my app still works well with the new design.</p>
<p>EDIT 2008-12-11:
Just to update how I did this: I created one background worker (as suggested below) that is started whenever a camera is detected, to copy the photos from the camera to the PC. Another background worker is started when files arrive on the PC, to upload them using asynchronous HTTP transfer. It sure was a pain to get everything right, especially since the operation should be "cancellable" at any time... But anyhow, now it works. A big THANKS to everyone who helped me!</p>
|
<p>I'd avoid using HTTP at all from any thread that has UI unless you actually want to block until the response is received. You can try using the same logic from a background thread which will run as long as it needs to. Just be sure to have logic which will detect when the connection is lost (probably from a timeout) and will retry at a regular (but not frequent) interval until connecting again.</p>
<p>Your best bet would be creating some sort of background worker process which will upload the photos once they are saved to a dropbox directory on the device. I will say though that creating a .NET based background process is not trivial.</p>
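The detect-and-retry logic described above (time out, then retry at a regular but not frequent interval) might look like this in outline. This is a generic sketch in Python; the injected `send` callable is a placeholder for the real HTTP POST of one photo chunk:

```python
import time

def upload_with_retry(send, chunk, retry_interval=30, max_attempts=None):
    """Retry a chunk upload until it succeeds (or max_attempts is reached).
    `send` stands in for the real HTTP POST and is expected to raise
    OSError on timeouts or dropped connections."""
    attempt = 0
    while True:
        attempt += 1
        try:
            return send(chunk)
        except OSError:
            if max_attempts is not None and attempt >= max_attempts:
                raise
            time.sleep(retry_interval)  # regular, but not frequent, retries
```

Because this blocks between attempts, it belongs on the background worker thread, never on the UI thread.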
|
<p>Firstly, I would put the transfer process outside of the application. Sync the files with a utility that can resume a transfer from where the last one left off.
<br><br>
You can simulate communication drops with some type of <a href="http://en.wikipedia.org/wiki/Faraday_cage" rel="nofollow noreferrer">Faraday cage</a>.</p>
| 35,510
|
<p>I was asked this question during an interview. They're both O(nlogn) and yet most people use Quicksort instead of Mergesort. Why is that?</p>
|
<p>Quicksort has O(<i>n</i><sup>2</sup>) worst-case runtime and O(<i>n</i>log<i>n</i>) average case runtime. However, it’s superior to merge sort in many scenarios because many factors influence an algorithm’s runtime, and, when taking them all together, quicksort wins out.</p>
<p>In particular, the often-quoted runtime of sorting algorithms refers to the number of comparisons or the number of swaps necessary to perform to sort the data. This is indeed a good measure of performance, especially since it’s independent of the underlying hardware design. However, other things – such as locality of reference (i.e. do we read lots of elements which are probably in cache?) – also play an important role on current hardware. Quicksort in particular requires little additional space and exhibits good cache locality, and this makes it faster than merge sort in many cases.</p>
<p>In addition, it’s very easy to avoid quicksort’s worst-case run time of O(<i>n</i><sup>2</sup>) almost entirely by using an appropriate choice of the pivot – such as picking it at random (this is an excellent strategy).</p>
<p>In practice, many modern implementations of quicksort (in particular libstdc++’s <code>std::sort</code>) are actually <a href="http://en.wikipedia.org/wiki/Introsort" rel="noreferrer">introsort</a>, whose theoretical worst-case is O(<i>n</i>log<i>n</i>), same as merge sort. It achieves this by limiting the recursion depth, and switching to a different algorithm (<a href="http://en.wikipedia.org/wiki/Heapsort" rel="noreferrer">heapsort</a>) once it exceeds log<i>n</i>.</p>
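The random-pivot strategy is simple enough to sketch in a few lines (Python, out-of-place for clarity; real implementations sort in place, which is part of what gives quicksort its cache advantage):

```python
import random

def quicksort(xs):
    """Out-of-place quicksort with a random pivot, which makes the
    O(n^2) worst case vanishingly unlikely for any fixed input."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```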
|
<p>In C/C++ land, when not using STL containers, I tend to use quicksort, because it is built
into the runtime library, while mergesort is not.</p>
<p>So I believe that in many cases, it is simply the path of least resistance.</p>
<p>In addition, performance can be much higher with quicksort in cases where the entire dataset does not fit into the working set.</p>
| 9,579
|
<pre><code>int i = 4;
string text = "Player ";
cout << (text + i);
</code></pre>
<p>I'd like it to print <code>Player 4</code>.</p>
<p>The above is obviously wrong but it shows what I'm trying to do here. Is there an easy way to do this or do I have to start adding new includes?</p>
|
<p>Well, if you use cout you can just write the integer directly to it, as in</p>
<pre><code>std::cout << text << i;
</code></pre>
<p>The C++ way of converting all kinds of objects to strings is through <a href="http://en.cppreference.com/w/cpp/io/basic_stringstream" rel="noreferrer">string streams</a>. If you don't have one handy, just create one.</p>
<pre><code>#include <sstream>
std::ostringstream oss;
oss << text << i;
std::cout << oss.str();
</code></pre>
<p>Alternatively, you can just convert the integer and append it to the string.</p>
<pre><code>oss << i;
text += oss.str();
</code></pre>
<p>Finally, the Boost libraries provide <a href="http://www.boost.org/doc/libs/release/doc/html/boost_lexical_cast.html" rel="noreferrer"><code>boost::lexical_cast</code></a>, which wraps around the stringstream conversion with a syntax like the built-in type casts.</p>
<pre><code>#include <boost/lexical_cast.hpp>
text += boost::lexical_cast<std::string>(i);
</code></pre>
<p>This also works the other way around, i.e. to parse strings.</p>
|
<p>There are a few options, and which one you want depends on the context.</p>
<p>The simplest way is</p>
<pre><code>std::cout << text << i;
</code></pre>
<p>or if you want this on a single line</p>
<pre><code>std::cout << text << i << endl;
</code></pre>
<p>If you are writing a single threaded program and if you aren't calling this code a lot (where "a lot" is thousands of times per second) then you are done.</p>
<p>If you are writing a multi-threaded program and more than one thread is writing to cout, then this simple code can get you into trouble. Let's assume that the library that came with your compiler made cout thread-safe enough that any single call to it won't be interrupted. Now let's say that one thread is using this code to write "Player 1" and another is writing "Player 2". If you are lucky you will get the following:</p>
<pre><code>Player 1
Player 2
</code></pre>
<p>If you are unlucky you might get something like the following</p>
<pre><code>Player Player 2
1
</code></pre>
<p>The problem is that <code>std::cout << text << i << endl;</code> turns into 3 function calls. The code is equivalent to the following:</p>
<pre><code>std::cout << text;
std::cout << i;
std::cout << endl;
</code></pre>
<p>If instead you used the C-style printf, and again your compiler provided a runtime library with reasonable thread safety (each function call is atomic) then the following code would work better:</p>
<pre><code>printf("Player %d\n", i);
</code></pre>
<p>Being able to do something in a single function call lets the io library provide synchronization under the covers, and now your whole line of text will be atomically written.</p>
<p>For simple programs, std::cout is great. Throw in multithreading or other complications and the less stylish printf starts to look more attractive.</p>
| 9,038
|
<p>I'm building a small web app in PHP that stores some information in a plain text file. However, this text file is used/modified by all users of my app at some given point in time and possible at the same time.</p>
<p>So the questions is. What would be the best way to make sure that only one user can make changes to the file at any given point in time?</p>
|
<p>You should put a lock on the file:</p>
<pre><code> $fp = fopen("/tmp/lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
ftruncate($fp, 0); // truncate file
fwrite($fp, "Write something here\n");
fflush($fp); // flush output before releasing the lock
flock($fp, LOCK_UN); // release the lock
} else {
echo "Couldn't get the lock!";
}
fclose($fp);
</code></pre>
<p>Take a look at <a href="http://www.php.net/flock" rel="nofollow noreferrer">http://www.php.net/flock</a>.</p>
|
<p>I don't think a single file shared by many users should be the strategy you use; otherwise you'll probably need to implement a single (global) access point that monitors whether the file is currently being edited. Acquire a lock, do your modification, release the lock, etc. I'd go with 'Nobody's suggestion to use a database (SQLite if you don't want the overhead of a fully decked-out RDBMS).</p>
| 37,474
|
<p>I have a tree of active record objects, something like:</p>
<pre><code>class Part < ActiveRecord::Base
has_many :sub_parts, :class_name => "Part"
def complicated_calculation
if sub_parts.size > 0
return self.sub_parts.inject(0){ |sum, current| sum + current.complicated_calculation }
else
sleep(1)
return rand(10000)
end
end
end
</code></pre>
<p>It is too costly to recalculate the complicated_calculation each time. So, I need a way to cache the value. However, if any part is changed, it needs to invalidate its cache and the cache of its parent, and grandparent, etc.</p>
<p>As a rough draft, I created a column to hold the cached calculation in the "parts" table, but this smells a little rotten. It seems like there should be a cleaner way to cache the calculated values without stuffing them along side the "real" columns.</p>
|
<p>I suggest using association callbacks.</p>
<pre><code>class Part < ActiveRecord::Base
has_many :sub_parts,
:class_name => "Part",
:after_add => :count_sub_parts,
:after_remove => :count_sub_parts
private
def count_sub_parts
update_attribute(:sub_part_count, calculate_sub_part_count)
end
def calculate_sub_part_count
# perform the actual calculation here
end
end
</code></pre>
<p>Nice and easy =)</p>
|
<p>I've found that sometimes there is good reason to de-normalize information in your database. I have something similar in an app that I am working on and I just re-calculate that field anytime the collection changes. </p>
<p>It doesn't use a cache and it stores the most up to date figure in the database. </p>
| 21,850
|
<p>What does it mean when a <a href="http://en.wikipedia.org/wiki/PostgreSQL" rel="noreferrer">PostgreSQL</a> process is "idle in transaction"?</p>
<p>On a server that I'm looking at, the output of "ps ax | grep postgres" I see 9 PostgreSQL processes that look like the following:</p>
<pre><code>postgres: user db 127.0.0.1(55658) idle in transaction
</code></pre>
<p>Does this mean that some of the processes are hung, waiting for a transaction to be committed? Any pointers to relevant documentation are appreciated.</p>
|
<p>The <a href="http://www.postgresql.org/docs/8.3/interactive/monitoring-ps.html" rel="noreferrer">PostgreSQL manual</a> indicates that this means the transaction is open (inside BEGIN) and idle. It's most likely a user connected using the monitor who is thinking or typing. I have plenty of those on my system, too.</p>
<p>If you're using Slony for replication, however, the <a href="http://slony1.projects.postgresql.org/slony1-1.2.6/doc/adminguide/faq.html" rel="noreferrer">Slony-I FAQ</a> suggests <code>idle in transaction</code> may mean that the network connection was terminated abruptly. Check out the discussion in that FAQ for more details.</p>
|
<p>As mentioned here: <a href="http://archives.postgresql.org/pgsql-bugs/2008-06/msg00102.php" rel="noreferrer">Re: BUG #4243: Idle in transaction</a> it is probably best to check your pg_locks table to see what is being locked and that might give you a better clue where the problem lies.</p>
| 7,386
|
<p>What's the shortest Perl one-liner that print out the first 9 powers of a hard-coded 2 digit decimal (say, for example, .37), each on its own line? </p>
<p>The output would look something like:</p>
<pre><code>1
0.37
0.1369
[etc.]
</code></pre>
<p>Official Perl golf rules:</p>
<ol>
<li>Smallest number of (key)strokes wins</li>
<li>Your stroke count includes the command line</li>
</ol>
|
<p>With perl 5.10.0 and above:</p>
<pre><code>perl -E'say 0.37**$_ for 0..8'
</code></pre>
<p>With older perls you don't have <code>say</code> and -E, but this works:</p>
<pre><code>perl -le'print 0.37**$_ for 0..8'
</code></pre>
<p>Update: the first solution is made of 30 key strokes. Removing the first 0 gives 29. Another space can be saved, so my final solution is this with 28 strokes:</p>
<pre><code>perl -E'say.37**$_ for 0..8'
</code></pre>
|
<pre><code>perl -e "for(my $i = 1; $i < 10; $i++){ print((.37**$i). \"\n\"); }"
</code></pre>
<p>Just a quick entry. :)</p>
<p>Fixed to line break!</p>
| 25,782
|
<p>Will the individual UML diagram shapes be NSView subclasses or NSBezierPaths? How are the diagrams created and managed?</p>
|
<p>One way to do this is to:</p>
<ul>
<li>Create a document-based app</li>
<li>Design model classes for the different objects the end-user will be able to draw in your canvas, all sharing one abstract superclass</li>
<li>In your CanvasView class, implement drawRect: and have it call the NSDocument subclass (or, for a more granular design, its view controller) to get all the objects that should be drawn, in the right order to draw them.</li>
<li>For each of these objects, call a drawInteriorInView:rect: method or something similar that they all have implemented, from within your CanvasView's drawRect: implementation.</li>
</ul>
<p>The advantage of such a granular design is that you can decide to replace NSBezierPath drawing with straight CoreGraphics calls if you find a need to do so, without having to completely re-architect the app.</p>
<p>Typical Cocoa controls, like for instance a tableView, implement a bunch of different drawing methods, one for the background, one for the gridlines, etc. etc. all of them called (when applicable) from the view's drawRect:.</p>
<p>Or you could of course look at <a href="http://apptree.net/drawkitmain.htm" rel="nofollow noreferrer">GCDrawKit</a>, which seems to have a pretty functional implementation. Especially check out the sample app that comes with it.</p>
|
<p>Have you looked at OmniGraffle? It may do what you need.</p>
<p>[non-programming-related answer...]</p>
| 39,598
|
<p>I know in ASP.NET I can get an item from a DropDownList by using</p>
<pre><code>DropDownList1.Items.FindByText
</code></pre>
<p>Is there a similar method I can use in WPF for a ComboBox?</p>
<p>Here's the scenario.</p>
<p>I have a table called RestrictionFormat that contains a column called RestrictionType, the type is a foreign key to a table that stores these values.</p>
<p>In my editor application I'm writing, when the user selects the RestrictionFormat from a ComboBox (this works fine), I'm pulling up the details for editing. I'm using a second ComboBox to make sure the user only selects one RestrictionType when editing. I already have the second combobox bound property from the RestrictionType table, but I need to change the selected index on it to match the value specified in the record.</p>
<p>Does this make sense?</p>
|
<p>Can you use ItemContainerGenerator?</p>
<p>ItemContainerGenerator contains a ContainerFromItem method that takes an object parameter. If you have a reference to the full object that your comboBox contains (or a way to reconstruct it), you can use the following:</p>
<pre><code>ComboBoxItem item =
(ComboBoxItem)myComboBox.ItemContainerGenerator.ContainerFromItem(myObject);
</code></pre>
|
<p>You can retrieve combobox items in two ways:</p>
<p>By item:</p>
<pre><code>ComboBoxItem item = (ComboBoxItem) control.ItemContainerGenerator.ContainerFromItem(control.SelectedItem);
</code></pre>
<p>By index:</p>
<pre><code>ComboBoxItem item = (ComboBoxItem) control.ItemContainerGenerator.ContainerFromIndex(1);
</code></pre>
| 6,260
|
<p>I'm currently logging via the simplest of methods within my servlet using Tomcat. I use the ServletConfig.getServletContext().log to record activity. This writes to the localhost.YYYY-MM-DD.log in $TOMCAT_HOME/logs.</p>
<p>I don't want to get away from the simplicity of this logging mechanism unless absolutely necessary. But I would like to name my log file. Rather than "localhost".YYYY-MM-DD.log, is there a way to have it write to "myAppName".YYYY-MM-DD.log. I know I could create my own mechanism, but again, I looking for simplicity here.</p>
<p>I'm hoping to stay away from a complete framework like Log4j.</p>
|
<p>This question is old, but I figured it was valuable to have the correct answer here.</p>
<p>Filip is confusing server-side Java with client-side Java. He is correct that you cannot share sessions between two server-side platforms, such as Java (J2EE) and ASP.Net without using a custom approach.</p>
<p>However, applets are client-side and therefore should be able to access the session information of the host page. The issue is that ASP.Net 2.0 added the HttpOnly flag on session cookies. This flag prevents JavaScript and Java applets from accessing these cookies.</p>
<p>The workaround is to turn off the HttpOnly flag on session cookies. While you may be able to do it in the configuration in newer versions of ASP.Net, in previous versions the solution was to add the following code to your Global.asax file:</p>
<pre><code>protected void Application_EndRequest(object sender, EventArgs e)
{
/**
* @note Remove the HttpOnly attribute from session cookies, otherwise the
* Java applet won't have access to the session. This solution taken
* from
* http://blogs.msdn.com/jorman/archive/2006/03/05/session-loss-after-migrating-to-asp-net-2-0.aspx
*
* For more information on the HttpOnly attribute see:
*
* http://msdn.microsoft.com/netframework/programming/breakingchanges/runtime/aspnet.aspx
* http://msdn2.microsoft.com/en-us/library/system.web.httpcookie.httponly.aspx
*/
if (Response.Cookies.Count > 0)
{
foreach (string lName in Response.Cookies.AllKeys)
{
if (lName == FormsAuthentication.FormsCookieName ||
lName.ToLower() == "asp.net_sessionid")
{
Response.Cookies[lName].HttpOnly = false;
}
}
}
}
</code></pre>
<p>Note that even with this fix, not all browser/OS/Java combinations can access cookies. I'm currently researching an issue with session cookies not being accessible on Firefox 4.0.1 with Java 1.6.0_13 on Windows XP. </p>
<p>The workaround is to use the approach Dr. Dad suggested, where the session ID gets passed to the applet as a parameter, and then either gets embedded into the request URL (requires URL sessions to be turned on in the server-side configuration) or sent as a manually-set cookie.</p>
|
<p>Filip's answer isn't entirely correct. I ran a program to sniff the HTTP headers on my workstation, and the Java applet does in fact present the ASP.NET authentication ticket in some circumstances - just not reliably enough for my needs.</p>
<p>Eventually I did find a solution to this, but it didn't entirely solve my problem. You can add an entry to the web.config in .NET 2.0: <code><httpCookies httpOnlyCookies="false" /></code>; but this didn't work for all my users.</p>
<p>The long term solution turned out to be modifying the Java applet so that it doesn't need to retrieve anything from the web server.</p>
| 24,637
|
<p>Does anyone have tools or experience with code coverage for PL/SQL. I believe this is possible using DBMS_PROFILER?</p>
|
<p><a href="http://www.toadworld.com/BLOGS/tabid/67/EntryID/267/Default.aspx" rel="noreferrer">http://www.toadworld.com/BLOGS/tabid/67/EntryID/267/Default.aspx</a> has info about checking code coverage using the PL/SQL profiler. </p>
<p>Some helpful info about profiling on 9i or 10g is included in Metalink Article 243755.1 "Implementing and Using the PL/SQL Profiler" for information on profiling code. Grab the prof.zip from the bottom of the article, it has a profiler.sql which will nicely format your results after a profiling run. </p>
<p>More 10g documentation is available here without a MetaLink account: <a href="http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_profil.htm" rel="noreferrer">http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_profil.htm</a></p>
<p>If you are running 11g there is a new Hierarchical Profiler documented here:
<a href="http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28424/adfns_profiler.htm" rel="noreferrer">http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28424/adfns_profiler.htm</a></p>
|
<p>With Oracle 12c Release 2 (12.2) you can now use the <strong>DBMS_PLSQL_CODE_COVERAGE</strong> package. In fact, this provides basic-block level coverage.</p>
<p>A basic block is the smallest unit of executable code.</p>
<p>These are few references to find more information.</p>
<p><a href="https://docs.oracle.com/en/database/oracle/oracle-database/12.2/adfns/basic-block-coverage.html#GUID-35842E6A-630A-418F-8314-6A30F1FBDF83" rel="nofollow noreferrer">Oracle documentation</a></p>
<p><a href="https://oracle-base.com/articles/12c/dbms_plsql_code_coverage-12cr2" rel="nofollow noreferrer">Tutorial</a></p>
<p><a href="https://youtu.be/CIKQy8su5DQ?t=2704" rel="nofollow noreferrer">Youtube Video</a></p>
| 18,391
|
<p>I have to delete some rows from a data table. I've heard that it is not ok to change a collection while iterating through it. So instead of a for loop in which I check if a row meets the demands for deletion and then mark it as deleted, I should first iterate through the data table and add all of the rows in a list, then iterate through the list and mark the rows for deletions. What are the reasons for this, and what alternatives do I have (instead of using the rows list I mean)?.</p>
|
<p>Iterating Backwards through the List sounds like a better approach, because if you remove an element and other elements "fall into the gap", that does not matter because you have already looked at those. Also, you do not have to worry about your counter variable becoming larger than the .Count.</p>
<pre><code> List<int> test = new List<int>();
test.Add(1);
test.Add(2);
test.Add(3);
test.Add(4);
test.Add(5);
test.Add(6);
test.Add(7);
test.Add(8);
for (int i = test.Count-1; i > -1; i--)
{
if(someCondition){
test.RemoveAt(i);
}
}
</code></pre>
|
<p>When I need to remove an item from a collection that I am enumerating I usually enumerate it in reverse.</p>
| 39,736
|
<p>I've had a hard time trying to find good examples of how to manage database schemas and data between development, test, and production servers.</p>
<p>Here's our setup. Each developer has a virtual machine running our app and the MySQL database. It is their personal sandbox to do whatever they want. Currently, developers will make a change to the SQL schema and do a dump of the database to a text file that they commit into SVN.</p>
<p>We're wanting to deploy a continuous integration development server that will always be running the latest committed code. If we do that now, it will reload the database from SVN for each build.</p>
<p>We have a test (virtual) server that runs "release candidates." Deploying to the test server is currently a very manual process, and usually involves me loading the latest SQL from SVN and tweaking it. Also, the data on the test server is inconsistent. You end up with whatever test data the last developer to commit had on his sandbox server.</p>
<p>Where everything breaks down is the deployment to production. Since we can't overwrite the live data with test data, this involves manually re-creating all the schema changes. If there were a large number of schema changes or conversion scripts to manipulate the data, this can get really hairy.</p>
<p>If the problem was just the schema, It'd be an easier problem, but there is "base" data in the database that is updated during development as well, such as meta-data in security and permissions tables.</p>
<p>This is the biggest barrier I see in moving toward continuous integration and one-step-builds. How do <strong><em>you</em></strong> solve it?</p>
<hr>
<p>A follow-up question: how do you track database versions so you know which scripts to run to upgrade a given database instance? Is a version table like Lance mentions below the standard procedure?</p>
<hr>
<p>Thanks for the reference to Tarantino. I'm not in a .NET environment, but I found their <a href="http://code.google.com/p/tarantino/wiki/DatabaseChangeManagement" rel="noreferrer">DataBaseChangeMangement wiki page</a> to be very helpful. Especially this <a href="http://tarantino.googlecode.com/svn/docs/Database-Change-Management.ppt" rel="noreferrer">Powerpoint Presentation (.ppt)</a></p>
<p>I'm going to write a Python script that checks the names of <code>*.sql</code> scripts in a given directory against a table in the database and runs the ones that aren't there in order based on a integer that forms the first part of the filename. If it is a pretty simple solution, as I suspect it will be, then I'll post it here.</p>
<hr>
<p>I've got a working script for this. It handles initializing the DB if it doesn't exist and running upgrade scripts as necessary. There are also switches for wiping an existing database and importing test data from a file. It's about 200 lines, so I won't post it (though I might put it on pastebin if there's interest).</p>
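<p>For reference, here is a minimal, hypothetical sketch of the approach described above (the <code>applied_migrations</code> table and the file naming are illustrative, not taken from the poster's actual ~200-line script, and <code>sqlite3</code> stands in for MySQL so the sketch is self-contained):</p>

```python
import os
import re
import sqlite3  # stand-in for MySQL, to keep the sketch self-contained


def run_pending_migrations(conn, script_dir):
    """Run *.sql scripts whose names aren't yet recorded, in numeric order."""
    conn.execute("CREATE TABLE IF NOT EXISTS applied_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM applied_migrations")}
    # Scripts are named like '001_description.sql'; sort by the integer prefix.
    scripts = [f for f in os.listdir(script_dir) if re.match(r"\d+_.*\.sql$", f)]
    for name in sorted(scripts, key=lambda f: int(f.split("_", 1)[0])):
        if name in applied:
            continue  # already run against this database instance
        with open(os.path.join(script_dir, name)) as fh:
            conn.executescript(fh.read())  # run the whole multi-statement script
        conn.execute("INSERT INTO applied_migrations (name) VALUES (?)", (name,))
        conn.commit()
```

<p>Running it twice is safe: already-recorded scripts are skipped, which is the property that makes one-step upgrades possible.</p>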
|
<p>There are a couple of good options. I wouldn't use the "restore a backup" strategy.</p>
<ol>
<li><p>Script all your schema changes, and have your CI server run those scripts on the database. Have a version table to keep track of the current database version, and only execute the scripts if they are for a newer version.</p></li>
<li><p>Use a migration solution. These solutions vary by language, but for .NET I use Migrator.NET. This allows you to version your database and move up and down between versions. Your schema is specified in C# code.</p></li>
</ol>
|
<p>I've written a tool which (by hooking into <a href="http://www.codeplex.com/OpenDBiff" rel="nofollow noreferrer">Open DBDiff</a>) compares database schemas, and will suggest migration scripts to you. If you make a change that deletes or modifies data, it will throw an error, but provide a suggestion for the script (e.g. when a column is missing in the new schema, it will check if the column has been renamed and create xx - generated script.sql.suggestion containing a rename statement).</p>
<p><a href="http://code.google.com/p/migrationscriptgenerator/" rel="nofollow noreferrer">http://code.google.com/p/migrationscriptgenerator/</a> SQL Server only I'm afraid :( It's also pretty alpha, but it is VERY low friction (particularly if you combine it with Tarantino or <a href="http://code.google.com/p/simplescriptrunner/" rel="nofollow noreferrer">http://code.google.com/p/simplescriptrunner/</a>)</p>
<p>The way I use it is to have a SQL scripts project in your .sln. You also have a db_next database locally which you make your changes to (using Management Studio or <a href="http://wiki.fluentnhibernate.org/show/GettingStarted:+First+Project" rel="nofollow noreferrer">NHibernate Schema Export</a> or <a href="http://msdn.microsoft.com/en-us/library/bb399420.aspx" rel="nofollow noreferrer">LinqToSql CreateDatabase</a> or something). Then you execute migrationscriptgenerator with the _dev and _next DBs, which creates the SQL update scripts for migrating across.</p>
| 2,811
|
<p>I'm currently playing around with <a href="http://pear.php.net/package/HTML_QuickForm" rel="noreferrer">HTML_QuickForm</a> for generating forms in PHP. It seems kind of limited in that it's hard to insert my own javascript or customizing the display and grouping of certain elements.</p>
<p>Are there any alternatives to QuickForm that might provide more flexibility?</p>
|
<p>If you find it hard to insert Javascript into the form elements, consider using a JavaScript framework such as <a href="http://www.prototypejs.org/" rel="nofollow noreferrer">Prototype</a> or <a href="http://jquery.com/" rel="nofollow noreferrer">jQuery</a>. There, you can centralize the task of injecting event handling into form controls.</p>
<p>By that, I mean that you won't need to insert event handlers into the HTML form code. Instead, you register those events from somewhere else. For example, in Prototype you would be able to write something like this:</p>
<pre><code>$('myFormControl').observe('click', myClickFunction)
</code></pre>
<p>Also have a look at the answers to <a href="https://stackoverflow.com/questions/34126/whats-the-best-way-to-add-event-in-javascript">another question</a>.</p>
<p>/EDIT: of course, you can also insert custom attributes and thus event handlers into the form elements using HTML_QuickForm. However, the above way is superior.</p>
|
<p>I can't really say anything about it but, the other day, I ran across the <a href="http://www.phpformclass.com/page/index" rel="nofollow noreferrer">clonefish</a> form library. It looked promising enough to end up in my bookmarks list as a "look at this later".</p>
| 8,524
|
<p>In my multithreaded application and I see heavy lock contention in it, preventing good scalability across multiple cores. I have decided to use lock free programming to solve this.</p>
<p>How can I write a lock free structure?</p>
|
<p>Short answer is:</p>
<p>You cannot.</p>
<p>Long answer is:</p>
<p>If you are asking this question, you probably do not know enough to be able to create a lock free structure. Creating lock free structures is extremely hard, and only experts in this field can do it. Instead of writing your own, search for an existing implementation. When you find it, check how widely it is used, how well it is documented, if it is well proven, and what the limitations are - even some lock free structures other people have published are broken.</p>
<p>If you do not find a lock free structure corresponding to the structure you are currently using, rather adapt the algorithm so that you can use some existing one.</p>
<p>If you still insist on creating your own lock free structure, be sure to:</p>
<ul>
<li>start with something very simple</li>
<li>understand memory model of your target platform (including read/write reordering constraints, what operations are atomic)</li>
<li>study a lot about problems other people encountered when implementing lock free structures</li>
<li>do not just guess if it will work, prove it</li>
<li>heavily test the result</li>
</ul>
<p>More reading:</p>
<p><a href="http://en.wikipedia.org/wiki/Lock-free_and_wait-free_algorithms" rel="noreferrer">Lock free and wait free algorithms at Wikipedia</a></p>
<p><a href="http://www.ddj.com/cpp/210600279" rel="noreferrer">Herb Sutter: Lock-Free Code: A False Sense of Security</a></p>
|
<p>Well, it depends on the kind of structure, but you have to make the structure so that it carefully and silently detects and handles possible conflicts.</p>
<p>I doubt you can make one that is 100% lock-free, but again, it depends on what kind of structure you need to build.</p>
<p>You might also need to shard the structure so that multiple threads work on individual items, and then later on synchronize/recombine.</p>
| 11,787
|
<p>I am using the ADONetAppender to (try) to log data via a stored procedure (so that I may inject logic into the logging routine).</p>
<p>My configuration settings are listed below. Can anybody tell what I'm doing wrong?</p>
<pre class="lang-xml prettyprint-override"><code><appender name="ADONetAppender_SqlServer" type="log4net.Appender.ADONetAppender">
<bufferSize value="1" />
<threshold value="ALL"/>
<param name="ConnectionType" value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
<param name="ConnectionString" value="<MyConnectionString>" />
<param name="UseTransactions" value="False" />
<commandText value="dbo.LogDetail_via_Log4Net" />
<commandType value="StoredProcedure" />
<parameter>
<parameterName value="@AppLogID"/>
<dbType value="String"/>
<size value="50" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%property{LoggingSessionId}" />
</layout>
</parameter>
<parameter>
<parameterName value="@CreateUser"/>
<dbType value="String"/>
<size value="50" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%property{HttpUser}" />
</layout>
</parameter>
<parameter>
<parameterName value="@Message"/>
<dbType value="String"/>
<size value="8000" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%message" />
</layout>
</parameter>
<parameter>
<parameterName value="@LogLevel"/>
<dbType value="String"/>
<size value="50"/>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%level" />
</layout>
</parameter>
</appender>
</code></pre>
|
<p>Use "AnsiString" as dbType for varchar. "String" for nvarchar.</p>
<p><a href="http://msdn.microsoft.com/en-us/library/system.data.dbtype%28v=VS.90%29.aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/system.data.dbtype%28v=VS.90%29.aspx</a></p>
|
<pre><code><log4net>
<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
<bufferSize value="1"/>
<connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
<connectionString value="Data Source=yourservername;initial Catalog=Databasename;User ID=sa;Password=xyz;"/>
<commandText value="INSERT INTO Log4Net ([Date], [Thread], [Level], [Logger], [Message],
[Exception]) VALUES (@log_date, @thread, @log_level, @logger, @message, @exception)"/>
<parameter>
<parameterName value="@log_date"/>
<dbType value="DateTime"/>
<layout type="log4net.Layout.RawTimeStampLayout"/>
</parameter>
<parameter>
<parameterName value="@thread"/>
<dbType value="String"/>
<size value="255"/>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%thread ip=%property{ip}"/>
</layout>
</parameter>
<parameter>
<parameterName value="@log_level"/>
<dbType value="String"/>
<size value="50"/>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%level"/>
</layout>
</parameter>
<parameter>
<parameterName value="@logger"/>
<dbType value="String"/>
<size value="255"/>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%logger"/>
</layout>
</parameter>
<parameter>
<parameterName value="@message"/>
<dbType value="String"/>
<size value="4000"/>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%message"/>
</layout>
          </parameter>
      </appender>
  </log4net>
</code></pre>
| 15,744
|
<p>I have a problem redrawing a custom view in simple cocoa application. Drawing is based on one parameter that is being changed by a simple NSSlider. However, although i implement -setParameter: and -parameter methods and bind slider's value to that parameter in interface builder i cannot seem to make a custom view to redraw itself.</p>
<p>The code that does redrawing is like this:</p>
<pre><code>- (void)setParameter:(int)newParameter {
    parameter = newParameter;
NSLog(@"Updated parameter: %d", parameter);
[self setNeedsDisplay:YES];
}
</code></pre>
<p>I DO get the message about setting the new parameter although the view doesn't redraw itself. Any ideas are welcome!</p>
|
<p>The usual syntax is: <code>[self setNeedsDisplay:YES]</code>, although I would assume that that means the same thing. Are you implementing the <code>- (void)drawRect:(NSRect)rect</code> method, or using the <code>drawRect:</code> method of your superclass?</p>
|
<p>In iOS 6 there isn't such a function to call: <code>setNeedsDisplay:YES</code>. I've got the same problem, and came up with this solution: <a href="https://stackoverflow.com/a/15027374/1280800">https://stackoverflow.com/a/15027374/1280800</a>.</p>
<p>Hope it will help.</p>
| 42,414
|
<p>I am trying to connect to 2 databases on the same instance of MySQL from 1 PHP script.</p>
<p>At the moment the only way I've figured out is to connect to both databases with a different user for each.</p>
<p>I am using this in a migration script where I am grabbing data from the original database and inserting it into the new one, so I am looping through large lists of results.</p>
<p>Connecting to 1 database and then trying to initiate a second connection with the same user just changes the current database to the new one.</p>
<p>Any other ideas?</p>
|
<p>You'll need to pass a boolean true as the optional fourth argument to mysql_connect(). See <a href="http://php.net/mysql_connect" rel="noreferrer">PHP's mysql_connect() documentation</a> for more info.</p>
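<p>A sketch of what that looks like with the legacy <code>mysql_*</code> API (credentials and database names are placeholders):</p>

```php
<?php
// Without the 4th argument, calling mysql_connect() again with identical
// credentials returns the SAME link, so selecting a database on the "second"
// connection would silently switch the first one too.
$old = mysql_connect('localhost', 'user', 'pass');
$new = mysql_connect('localhost', 'user', 'pass', true); // true forces a new link

mysql_select_db('legacy_db', $old);
mysql_select_db('new_db', $new);

// Each query now names the link it should run on.
$result = mysql_query('SELECT * FROM customers', $old);
?>
```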
|
<p>First Connect Two Database</p>
<pre><code>$database1 = mysql_connect("localhost","root","password");
$database2 = mysql_connect("localhost","root","password");
</code></pre>
<p>Now Select The Database </p>
<pre><code>$database1_select = mysql_select_db("db_name_1") or die("Can't Connect To Database",$database1);
$database_select = mysql_select_db("db_name_2") or die("Can't Connect To Database",$database2);
</code></pre>
<p>Now if we want to run query then specify database Name at the end like,</p>
<pre><code>$select = mysql_query("SELECT * FROM table_name",$database1);
</code></pre>
| 29,120
|
<p>We have a warm SQL backup: full backup nightly, txn logs shipped every so often during the day and restored. I need to move the data files to another disk. These DBs are in a "warm backup" state (such that I can't unmark them as read-only - "Error 5063: Database '<dbname>' is in warm standby. A warm-standby database is read-only.") and am worried about detaching and re-attaching.</p>
<p>How do we obtain the "warm backup" status after detach/attach operations are complete?</p>
|
<p>The only solution I know is to create a complete backup of your active database and restore this backup to a copy of the database in a 'warm backup' state. First create a backup from the active db:</p>
<pre><code>backup database activedb to disk='somefile'
</code></pre>
<p>Then restore the backup on another sql server. If needed you can use the WITH REPLACE option to change the default storage directory</p>
<pre><code>restore database warmbackup from disk='somefile'
with norecovery, replace ....
</code></pre>
<p>Now you can create backups of the logs and restore them to the warmbackup with the restore log statement.</p>
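<p>As an illustrative sketch (file names are placeholders), the log shipping cycle then looks like this - <code>WITH STANDBY</code> is what keeps the database readable between restores:</p>

```sql
BACKUP LOG activedb TO DISK = 'somefile_log'

RESTORE LOG warmbackup FROM DISK = 'somefile_log'
WITH STANDBY = 'undofile.dat'
```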
|
<p>It looks like you didn't complete the restore task. Just do the restore task only for the TRANSACTION LOG. Then it will be fine immediately when you finish that.</p>
| 13,013
|
<p>I'm currently adding verbose tooltips to our site, and I'd like (without having to resort to a whizz-bang jQuery plugin, I know there are many!) to use carriage returns to format the tooltip.</p>
<p>To add the tip I'm using the <code>title</code> attribute. I've looked around the usual sites and using the basic template of:</p>
<pre><code><a title='Tool?Tip?On?New?Line'>link with tip</a>
</code></pre>
<p>I've tried replacing the <code>?</code> with:</p>
<ul>
<li><code><br /></code></li>
<li><code>&#013; / &#13;</code></li>
<li><code>\r\n</code></li>
<li><code>Environment.NewLine</code> (I'm using C#)</li>
</ul>
<p>None of the above works. Is it possible?</p>
|
<p>It’s simple: just press <kbd>Enter</kbd>!</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><a href="#" title='Tool
Tip
On
New
Line'>link with tip</a></code></pre>
</div>
</div>
</p>
|
<p>Use <code>data-html="true"</code> and apply <code><br></code>.</p>
| 46,800
|
<p>just wanted to gather different ideas and perspectives as to which layer should (and why) LINQ fall into?</p>
|
<p>It depends on what you want to do with LINQ. When using LINQ to SQL I'd recommend the DAL, but LINQ is more than just database access. You can use it to manipulate lists, IEnumerables of business objects and so on... LINQ itself can be useful everywhere in your application.</p>
|
<p>I think LINQ should be the very lower-level (DAL) and I think it should be wrapped into a BLL.</p>
<p>I know a lot of people like to use the partial accessibility of the models that LINQ to SQL generates but I think you should have clear separation of interests (see what I did there?). I think if you're going to have business logic it needs to be decoupled completely from your data access logic.</p>
<p>I think what makes it tricky is the fact that you can keep chaining those LINQ extension methods anywhere you have a using System.Linq line in your code. Again though I think LINQ belongs with the definition and should be at the lowest possible level. It also makes TDD/Unit Testing much, much easier when you wrap the usage of LINQ in a BLL.</p>
| 21,071
|
<p>If a part is wanted to be made the strongest possible, what slicer settings should be used? </p>
<ul>
<li><p>3-5 shells vs all shells, no infill? </p></li>
<li><p>100% infill vs some other % infill?</p></li>
<li><p>Thin layer height vs thick layer height?</p></li>
<li><p>Any other relevant settings?</p></li>
</ul>
|
<p>If your real question is what would be the strongest then I say - the solid would be the strongest - no doubt.</p>
<p>But if the question is: </p>
<ul>
<li>what would be the strongest in comparison to weight or</li>
<li>what is the strongest in comparison to the cost (amount of material)</li>
</ul>
<p>then these are good questions!</p>
<p>You can of course find many tutorials and comparisons on the net and there will be many answers - which all of them could be good/bad ;)</p>
<p>If these are your questions then instead of a simple answer you can ask more questions like:</p>
<ul>
<li>in which orientation or</li>
<li>for what purpose or</li>
<li>for continuous stress or maybe for variable stress or</li>
<li>for bending forces / shearing forces or maybe tearing forces</li>
</ul>
<p>all these forces and circumstances could require a different answer... which could also lead to other questions :)</p>
<p>But according to my experience, the strongest settings (for general purposes) are 3 outlines (and the same number of first/last layers) and triangle infill at 20-25 %.</p>
<p>Why I think this is the strongest: 3 outlines give a good chance of solid layer adhesion even if there are geometric/design issues, and triangle infill gives a good (and common) way to carry and spread forces.</p>
<p>But as I said it depends on many input data.</p>
<p>Let's look at these figures:</p>
<p><a href="https://i.stack.imgur.com/q6OeG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q6OeG.png" alt="enter image description here"></a></p>
<p>in figure A we have the strongest composition for compression; this is because all working forces try to damage the material particles themselves, which is of course hard to do (depending on material density, the length of the polymers, the way they are tangled, and so on - in general, material strength only).</p>
<p>If we consider figure B, where forces try to tear the layers apart, then we know that we rely on the adhesion between layers, which can vary with printing parameters (such as temperature and speed).</p>
<p>Finally, figure C shows shearing forces - in terms of layered structure this doesn't really differ from tearing apart, but the result (the resistance of an object) is even weaker. This is because we rely on adhesion and additionally have a smaller effective area of working adhesion, which reduces the endurance of the object.</p>
|
<p>This question is practically unanswerable without the load case or the part being known.</p>
<p>Input for the "strongest" part is depending on:</p>
<ul>
<li>Load case (compression, tension, shear)</li>
<li>Part design</li>
<li># of perimeters</li>
<li>Filament type</li>
<li>Infill percentage (incl. local increased infill for e.g. fasteners; see e.g. <a href="/q/6522">"Different infill in the same part"</a>)</li>
<li>Part orientation when slicing</li>
<li>etc.</li>
</ul>
<p>Do note that 100 % infill does not guarantee the strongest solution, from <a href="https://community.ultimaker.com/topic/19727-100-infill-settings/" rel="nofollow noreferrer">ahoeben</a>:</p>
<blockquote>
<p>Final note: 100% infill is not always the strongest or best quality. If you overextrude by just a little bit, that will quickly add up with 100% infill. With a lower % of infill, the overextruded material has somewhere to go. <br><br>There can also be issues with cooling with high amounts of infill; you are not only putting more material on the print, but also more heat. On the other hand printing a layer is going to take a long time, so there should be time to cool. But shrinking/warping while cooling is also something that is affected by the amount of material.</p>
</blockquote>
| 491
|
<p>Vista has introduced a new API to display a text in the list view control when it doesn't have any items. As the MSDN library states, I should process the <code>LVN_GETEMPTYMARKUP</code> notification.</p>
<p>In the inherited <code>ListView</code> control the <code>WndProc</code> method is overriden:</p>
<pre><code>protected override void WndProc(ref Message m) {
try {
if(m.Msg == 78 /* WM_NOTIFY */) {
var nmhdr = (NMHDR)Marshal.PtrToStructure(m.LParam, typeof(NMHDR));
if(nmhdr.code == -187 /* LVN_GETEMPTYMARKUP */) {
var nmlvemptymarkup =
(NMLVEMPTYMARKUP)Marshal.PtrToStructure(m.LParam, typeof(NMLVEMPTYMARKUP));
nmlvemptymarkup.szMarkup = "The ListView is empty.";
m.Result = (IntPtr)1;
}
}
} finally {
base.WndProc(ref m);
}
}
</code></pre>
<p>However, it doesn't work (although it doesn't throw any exception). Actually I never get <code>nmhdr.code</code> equals to -187. Any ideas?</p>
|
<p><code>WM_NOTIFY</code> messages are not sent to the control (the listview), but rather to the parent (the form). This made sense in the Win32 world because these messages are very useful to intercept but it was moderately painful to subclass the control, especially when you wanted different behaviour in each case. With .NET that's no longer the case.</p>
<p>Conveniently, the standard .NET message processing "reflects" the message back at the control, so that you can intercept the message and handle it within the control's WndProc -- but the reflected message is no longer <code>WM_NOTIFY</code> (0x004E), but rather <code>WM_REFLECT</code>|<code>WM_NOTIFY</code> (0x204E).</p>
<p>So if you modify your WndProc to look for that value instead then it should work.</p>
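<p>In other words, only the message constant in the snippet above needs to change; a minimal sketch (the <code>0x2000</code> offset is the Windows Forms reflection base):</p>

```csharp
const int WM_NOTIFY = 0x004E;
const int WM_REFLECT = 0x2000; // WinForms reflects notifications back as WM_REFLECT | WM_NOTIFY

protected override void WndProc(ref Message m) {
    if (m.Msg == (WM_REFLECT | WM_NOTIFY)) { // 0x204E
        // ... same NMHDR / NMLVEMPTYMARKUP handling as in the question ...
    }
    base.WndProc(ref m);
}
```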
|
<p>Have you tried calling <code>SetWindowTheme</code> on the control, as indicated in <a href="http://msdn.microsoft.com/en-us/magazine/cc163384.aspx" rel="nofollow noreferrer">this article</a>?</p>
| 36,541
|
<p>I'm looking for suggestions on how to debug and chase down deadlock issues in an SQL Server 2000 database. I've had it recommended to me to use trace flags 1204 and 3605, which I have found give me the following:</p>
<p>1204 - this trace flag returns the type of locks participating in the deadlock and the current command affected. </p>
<p>3605 - this trace flag sends trace output to the error log.</p>
<p>The specific stored procedures, tables and indexes still need to be uncovered, so the goal is to use these trace flags to do so. And then from there I'll know better what indexes need tuning, locking hints for tuning queries, and potential sproc bugs to fix.</p>
<p>Any other suggestions or happy ending war stories on chasing down this all to common problem?</p>
|
<p>Here is the bible on deadlock troubleshooting: <a href="http://blogs.msdn.com/bartd/archive/2006/09/09/Deadlock-Troubleshooting_2C00_-Part-1.aspx" rel="nofollow noreferrer">http://blogs.msdn.com/bartd/archive/2006/09/09/Deadlock-Troubleshooting_2C00_-Part-1.aspx</a></p>
|
<p>Once you know the SQL statements involved, a code analysis will surely help. Some rules of thumb are checking the order in which the rows are accessed and checking the isolation level used for the SQL statements. A profiler trace can help a lot.</p>
<p>Most of the time, it is because of a reader trying to get a shared lock on a resource locked exclusively by a writer.</p>
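<p>For reference, the classic SQL Server 2000 deadlock trace flags (1204 for reporting deadlock participants, 3605 for routing output to the error log) can be enabled at runtime for all connections; treat this as a sketch to verify against your server rather than a definitive recipe:</p>

```sql
-- 1204: report the resources and lock types involved in each deadlock.
-- 3605: send the trace output to the SQL Server error log.
-- -1 : apply globally, to all connections.
DBCC TRACEON (1204, 3605, -1)

-- Check which trace flags are currently enabled.
DBCC TRACESTATUS (-1)
```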
| 7,581
|
<p>My table structure looks like this:</p>
<pre><code> tbl.users tbl.issues
+--------+-----------+ +---------+------------+-----------+
| userid | real_name | | issueid | assignedid | creatorid |
+--------+-----------+ +---------+------------+-----------+
| 1 | test_1 | | 1 | 1 | 1 |
| 2 | test_2 | | 2 | 1 | 2 |
+--------+-----------+ +---------+------------+-----------+
</code></pre>
<p>Basically I want to write a query that will end in a results table looking like this:</p>
<pre><code> (results table)
+---------+------------+---------------+-----------+--------------+
| issueid | assignedid | assigned_name | creatorid | creator_name |
+---------+------------+---------------+-----------+--------------+
| 1 | 1 | test_1 | 1 | test_1 |
| 2 | 1 | test_1 | 2 | test_2 |
+---------+------------+---------------+-----------+--------------+
</code></pre>
<p>My SQL looks like this at the moment:</p>
<pre><code>SELECT
`issues`.`issueid`,
`issues`.`creatorid`,
`issues`.`assignedid`,
`users`.`real_name`
FROM `issues`
JOIN `users`
ON ( `users`.`userid` = `issues`.`creatorid` )
OR (`users`.`userid` = `issues`.`assignedid`)
ORDER BY `issueid` ASC
LIMIT 0 , 30
</code></pre>
<p>This returns something like this:</p>
<pre><code> (results table)
+---------+------------+-----------+-----------+
| issueid | assignedid | creatorid | real_name |
+---------+------------+-----------+-----------+
| 1 | 1 | 1 | test_1 |
| 2 | 1 | 2 | test_1 |
| 2 | 1 | 2 | test_2 |
+---------+------------+-----------+-----------+
</code></pre>
<p>Can anyone help me get to the desired results table?</p>
|
<pre><code>SELECT
IssueID,
AssignedID,
CreatorID,
AssignedUser.real_name AS AssignedName,
CreatorUser.real_name AS CreatorName
FROM Issues
LEFT JOIN Users AS AssignedUser
ON Issues.AssignedID = AssignedUser.UserID
LEFT JOIN Users AS CreatorUser
ON Issues.CreatorID = CreatorUser.UserID
ORDER BY `issueid` ASC
LIMIT 0, 30
</code></pre>
|
<pre><code>SELECT DISTINCT i.issueid, i.creatorid, i.assignedid, u.real_name
FROM issues i, users u
WHERE u.userid = i.creatorid OR u.userid = i.assignedid
ORDER BY i.issueid ASC
LIMIT 0 , 30
</code></pre>
<p>The parentheses should be left out; MySQL would treat a parenthesized select list as a single row-constructor expression and reject it. Note also that this still produces one row per matching user, so it does not merge the creator and assignee names onto a single row.</p>
| 31,705
|
<p>How do I create an ASP.NET web service that returns JSON formatted data?</p>
|
<p>The most important thing to understand is how to represent data in JSON format.
Please refer to <a href="http://www.json.org/" rel="nofollow noreferrer">http://www.json.org/</a> to learn more about it.</p>
<p>Once you understand this, the rest is pretty straightforward.</p>
<p>Please check the following URL for an example of the same.</p>
<p><a href="http://www.ajaxprojects.com/ajax/tutorialdetails.php?itemid=264" rel="nofollow noreferrer">http://www.ajaxprojects.com/ajax/tutorialdetails.php?itemid=264</a>
<a href="http://code.msdn.microsoft.com/JSONSampleDotNet" rel="nofollow noreferrer">http://code.msdn.microsoft.com/JSONSampleDotNet</a> <br>
<a href="http://www.phdcc.com/xml2json.htm" rel="nofollow noreferrer">http://www.phdcc.com/xml2json.htm</a></p>
<p>I recommend the jQuery library for this. It's a lightweight, rich library which supports calling web services, handling JSON-formatted output, etc.</p>
<p>Refer to www.jquery.com for more info.</p>
|
<p>In our project the requirements were as follows: ASP.NET 2.0 on the server, and pure JavaScript on the browser (no jQuery libs, or .NET AJAX).</p>
<p>In that case, on the server side, just mark the web method to use JSON. Note that both input and output params are JSON formatted.</p>
<pre><code>[WebMethod]
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public String Foo(String p1, String p2)
{
return "Result: p1= " + p1 + " p2= " + p2;
}
</code></pre>
<p>On the JavaScript side, use the regular XmlHttpRequest object, make sure you format your input params as JSON, and do an 'eval' on the output params.</p>
<pre><code>var httpobj = getXmlHttpRequestObject();
//Gets the browser specific XmlHttpRequest Object
function getXmlHttpRequestObject()
{
if (window.XMLHttpRequest)
return new XMLHttpRequest();
else if(window.ActiveXObject)
return new ActiveXObject("Microsoft.XMLHTTP");
}
function CallService()
{
//Set the JSON formatted input params
var param = "{'p1' : 'value1', 'p2' : 'value2'}";
//Send it to webservice
if(httpobj.readyState == 4 || httpobj.readyState == 0)
{
httpobj.open("POST", 'service.asmx/' + 'Foo', true);
//Mark the request as JSON and UTF-8
httpobj.setRequestHeader('Content-Type','application/json; charset=utf-8');
httpobj.onreadystatechange = OnSuccess;
httpobj.send(param);
}
}
function OnSuccess()
{
if (httpobj.readyState == 4)
{
//Retrieve the JSON return param
var response = eval("(" + httpobj.responseText + ")");
}
}
</code></pre>
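<p>As an aside, calling <code>eval</code> on a server response is risky if the response could ever contain untrusted content. Where available (natively in modern browsers, or via the json2.js shim in older ones), <code>JSON.parse</code> is the safer way to decode it. The <code>"d"</code> wrapper property below is illustrative of how some ASP.NET versions wrap script-service results; the exact payload shape is an assumption:</p>

```javascript
// Safer alternative to eval() for decoding the service's JSON response.
var responseText = '{"d": "Result: p1= value1 p2= value2"}'; // illustrative payload
var response = JSON.parse(responseText);
console.log(response.d); // "Result: p1= value1 p2= value2"
```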
| 42,088
|
<p>I would like to turn the HTML generated by my CFM page into a PDF, and have the user prompted with the standard "Save As" prompt when navigating to my page.</p>
|
<p>You should use the cfdocument tag (with format="PDF") to generate the PDF by placing it around the page you are generating. You'll want to specify a filename attribute, otherwise the document will just stream right to your browser.</p>
<p>After you have saved the content as a PDF, use cfheader and cfcontent in combination to output the PDF as an attachment ("Save As") and add the file to the response stream. I also added deletefile="Yes" on the cfcontent tag to keep the file system clean of the files.</p>
<pre><code><cfdocument format="PDF" filename="file.pdf" overwrite="Yes">
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Hello World</title>
</head>
<body>
Hello World
</body>
</html>
</cfdocument>
<cfheader name="Content-Disposition" value="attachment;filename=file.pdf">
<cfcontent type="application/octet-stream" file="#expandPath('.')#\file.pdf" deletefile="Yes">
</code></pre>
<p>As an aside: I'm just using file.pdf for the filename in the example above, but you might want to use some random or session-generated string for the filename to avoid problems resulting from race conditions.</p>
|
<p>I'm not that familiar with ColdFusion, but what you need to do is set the Content-Type of the page when the user requests it to be application/octet-stream. This will prompt them for a download every time. </p>
<p>Hope this helps!</p>
| 9,969
|
<p>I converted a VS2006 VC++ project to VS2008. When compiling I get the above error. How do I fix it? Am I missing this exe?</p>
|
<p>There is a bug in the Visual Studio 2008 Standard Edition installer. It does not install cl.exe if you only install Visual C++ but not Visual C#. To work around this you have to install Visual C# even if you do not need this.</p>
|
<p>I had this problem under Windows 10 and solved it by adding the following paths to the PATH environment variable:</p>
<pre><code>C:\ProgramFilesC\VS2008\Common7\IDE
C:\ProgramFilesC\VS2008\VC\bin\x86_amd64
</code></pre>
<p>where C:\ProgramFilesC\VS2008 is the path where I installed Visual Studio.</p>
| 13,462
|
<p>When an <code>Expression<T></code> is compiled, is the resultant code implicitly cached by the framework? I'm thinking along the lines of the static <code>Regex</code> methods where the framework implicitly compiles and caches the last few regexes.</p>
<p>If compiled <code>Expression<T></code> objects are <b>not</b> cached, can you recommend some best practices for keeping the compile-time down or any gotchas that could cause problems if I manually cache an expression?</p>
<pre><code>public MyResultType DoSomething(int arg1, int arg2)
{
var result = invokeHandler(
(IDoSomethingHandler h) => h.DoSomething(arg1, arg2)
);
return result;
}
private TResult invokeHandler<T, TResult>(Expression<Func<T, TResult>> action)
where T : class
{
// Here, I might want to check to see if action is already cached.
var compiledAction = action.Compile();
    var methodCallExpr = action.Body as MethodCallExpression;
// Here, I might want to store methodCallExpr in a cache somewhere.
var handler = ServiceLocator.Current.GetInstance<T>();
var result = compiledAction(handler);
return result;
}
</code></pre>
<p>In this example, I'm slightly concerned that if I cache the compiled expression, that it will use the values of <code>arg1</code> and <code>arg2</code> as they were at the time the expression was compiled, rather than retrieving those values from the appropriate place in the stack (i.e. rather than getting the current values).</p>
|
<p>No; I do not believe that it is; if you want it cached, you must hold onto the <code>Delegate</code> reference (typically <code>Func<...></code> or <code>Action<...></code>). Likewise, if you want to get the best performance, you would compile it as a parameterised expression, so you can send in different values when you invoke it.</p>
<p>In this case, re-phrasing would help:</p>
<pre><code>public MyResultType DoSomething(int arg1, int arg2)
{
var result = invokeHandler(
(IDoSomethingHandler h, int a1, int a2) => h.DoSomething(a1, a2),
arg1, arg2);
return result;
}
private TResult invokeHandler<T, TResult>(Expression<Func<T,int,int,TResult>> action,
int arg1, int arg2)
where T : class
{
// Here, I might want to check to see if action is already cached.
var compiledAction = action.Compile();
    var methodCallExpr = action.Body as MethodCallExpression;
// Here, I might want to store methodCallExpr in a cache somewhere.
var handler = ServiceLocator.Current.GetInstance<T>();
var result = compiledAction(handler, arg1, arg2);
return result;
}
</code></pre>
<p>i.e. make the numbers parameters of the expression, and pass the <em>actual</em> ones it at runtime (rather than being constants in the expression).</p>
|
<p>Lambda expressions are not cached automatically. You will need to implement your own caching/memoization algorithms for that. Check the related Stack Overflow question:</p>
<p><a href="https://stackoverflow.com/questions/66382/is-it-possible-to-cache-a-value-evaluated-in-a-lambda-expression-c-linq">Is it possible to cache a value evaluated in a lambda expression?</a></p>
<p>It is important to note that lambda expressions are lazily evaluated in C#.</p>
| 32,322
|
<p>I develop exclusively on VMs. I currently run Boot Camp on a MacBook Pro and do all my development on a series of Virtual PC VMs for many different environments. This <a href="http://www.andrewconnell.com/blog/articles/UseVirtualPCsDifferencingDisksToYourAdvantage.aspx" rel="nofollow noreferrer">post by Andrew Connell</a> litterally changed the way I work.</p>
<p>I'm thinking about switching to Fusion and running everything in OS X but I wasn't able to answer the following questions about VM Fusion/Workstation/Server. <strong>I need to know if the following features from Virtual PC/Server exist in their VMWare counter parts.</strong></p>
<ol>
<li>Differencing Disks (ability to create a Base VM and provision new VMs which just add deltas on top of the base [saves a ton of disk space, and makes it easy to spin up new VMs with a base set of functionality]). <em>(Not available with Fusion, need Workstation [$189])</em></li>
<li>Undo disks (ability to rollback all changes to the VM within a session). <em>(Available in both Workstation and Fusion [$189/$79.99 respectively])</em></li>
<li>Easily NAT out a different subnet for the VM to sit in. <em>(In both Fusion/Workstation).</em></li>
<li>Share VMs between VM Player and VM Server. I'd like to build up a VM locally (on OS X/Fusion) and then move it to some server (Win2k3/Win2k8 and VM Server) and host it there but with VM Server. <em>(In both Fusion/Workstation).</em></li>
<li>An equivalent to Hyper-V. <em>(Both Fusion and Workstation take advantage of a type-2 hypervisor for 64-bit VMs; neither does for 32-bit VMs. VMWare claims they're no slower as a result, and some <a href="http://www.thehypervisor.com/?p=57" rel="nofollow noreferrer">benchmarks corroborate this assertion</a>).</em></li>
<li>Ability to Share disks between multiple VMs. If I have a bunch of databases on a virtual disk and want them to appear on more than one VM I should be able to just attach them. <em>(Available in both Fusion and Workstation)</em></li>
<li>(Nice to have) Support for multiple processors assigned to a VM <em>(Available in both Fusion and Workstation).</em></li>
</ol>
<p><strong>Is there a VMWare guru out there who knows for sure that the above features are available on the other side?</strong> </p>
<p><strong>Also the above has been free (as long as you have licenses for Windows machines), besides buying Fusion are there any other costs?</strong></p>
<p><strong>The end result of my research, thanks so much!</strong>
<em>You can only create Linked Clones and Full Clones (which are close to differencing disks) in VMWare Workstation (not Fusion). Workstation also has better snapshot management, in addition to other features which are difficult to enumerate. That being said, Workstation is $189 (as opposed to $79) and not available on OS X. In addition, Fusion 1.1 (the current release) has a bunch of display bugs on OS X 10.5 (it works well on 10.4). These will be remedied in Fusion 2.0, which is currently in RC1. I'll probably wait until v2.0 comes out and then use both Workstation/Fusion to provision and use these VMs on OS X.</em></p>
|
<p>There's actually a way to turn that "feature" off. This will allow the user to post whichever characters they want, and there will be no need to convert characters to an alias using Javascript. See this article for <a href="http://mdid.org/mdidwiki/index.php?title=Disabling_Request_Validation" rel="nofollow noreferrer">disabling request validation</a>. It means that you'll have to do your own validation, but from the sounds of your post, it seems that is what you are looking to do anyway. You can also disable it per page by following <a href="http://www.asp.net/learn/whitepapers/request-validation/" rel="nofollow noreferrer">the instructions here</a>.</p>
|
<p>My first comment would be to avoid using JavaScript to change the angle brackets. Bypassing this is as simple as disabling JavaScript in the browser. Almost all server-side languages have some utility method that converts some HTML characters into their entity counterparts. For instance, PHP uses htmlentities(), and I am sure .NET has an equivalent utility method. In the least, you can do a regex replace for angle brackets, parenthesis and double quotes, and that will get you a long way toward a secure solution.</p>
| 8,707
|
<p>I've been working on a web crawling .NET app in my free time, and one of the features of this app that I wanted to included was a pause button to pause a specific thread.</p>
<p>I'm relatively new to multi-threading and I haven't been able to figure out a currently supported way to pause a thread indefinitely. I can't remember the exact class/method, but I know there is a way to do this that has been flagged as obsolete by the .NET framework.</p>
<p>Is there any good general purpose way to indefinitely pause a worker thread in C# .NET.</p>
<p>I haven't had a lot of time lately to work on this app and the last time I touched it was in the .NET 2.0 framework. I'm open to any new features (if any) that exist in the .NET 3.5 framework, but I'd like to know of solution that also works in the 2.0 framework since that's what I use at work and it would be good to know just in case.</p>
|
<p>Never, ever use <code>Thread.Suspend</code>. The major problem with it is that 99% of the time you can't know what that thread is doing when you suspend it. If that thread holds a lock, you make it easier to get into a deadlock situation, etc. Keep in mind that code you are calling may be acquiring/releasing locks behind the scenes. Win32 has a similar API: <code>SuspendThread</code> and <code>ResumeThread</code>. The following docs for <code>SuspendThread</code> give a nice summary of the dangers of the API:</p>
<p><a href="http://msdn.microsoft.com/en-us/library/ms686345(VS.85).aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/ms686345(VS.85).aspx</a></p>
<blockquote>
<p>This function is primarily designed for use by debuggers. It is not intended to be used for thread synchronization. Calling SuspendThread on a thread that owns a synchronization object, such as a mutex or critical section, can lead to a deadlock if the calling thread tries to obtain a synchronization object owned by a suspended thread. To avoid this situation, a thread within an application that is not a debugger should signal the other thread to suspend itself. The target thread must be designed to watch for this signal and respond appropriately.</p>
</blockquote>
<p>The proper way to suspend a thread indefinitely is to use a <code>ManualResetEvent</code>. The thread is most likely looping, performing some work. The easiest way to suspend the thread is to have the thread "check" the event each iteration, like so:</p>
<pre><code>while (true)
{
_suspendEvent.WaitOne(Timeout.Infinite);
// Do some work...
}
</code></pre>
<p>You specify an infinite timeout so when the event is not signaled, the thread will block indefinitely, until the event is signaled at which point the thread will resume where it left off.</p>
<p>You would create the event like so:</p>
<pre><code>ManualResetEvent _suspendEvent = new ManualResetEvent(true);
</code></pre>
<p>The <code>true</code> parameter tells the event to start out in the signaled state.</p>
<p>When you want to pause the thread, you do the following:</p>
<pre><code>_suspendEvent.Reset();
</code></pre>
<p>And to resume the thread:</p>
<pre><code>_suspendEvent.Set();
</code></pre>
<p>You can use a similar mechanism to signal the thread to exit and wait on both events, detecting which event was signaled.</p>
<p>Just for fun I'll provide a complete example:</p>
<pre><code>public class Worker
{
ManualResetEvent _shutdownEvent = new ManualResetEvent(false);
ManualResetEvent _pauseEvent = new ManualResetEvent(true);
Thread _thread;
public Worker() { }
public void Start()
{
_thread = new Thread(DoWork);
_thread.Start();
}
public void Pause()
{
_pauseEvent.Reset();
}
public void Resume()
{
_pauseEvent.Set();
}
public void Stop()
{
// Signal the shutdown event
_shutdownEvent.Set();
// Make sure to resume any paused threads
_pauseEvent.Set();
// Wait for the thread to exit
_thread.Join();
}
public void DoWork()
{
while (true)
{
_pauseEvent.WaitOne(Timeout.Infinite);
if (_shutdownEvent.WaitOne(0))
break;
// Do the work here..
}
}
}
</code></pre>
|
<p>The Suspend() and Resume() methods may be deprecated, but they are in no way useless. If, for example, you have a thread doing lengthy work that alters data, and the user wishes to stop it, he clicks a button. Of course, you need to ask for confirmation, but at the same time you do not want the thread to continue altering data if the user decides that he really does want to abort.</p>
<p>Suspending the thread while waiting for the user to click the Yes or No button in the confirmation dialog is the <em>only</em> way to prevent it from altering the data before you signal the designated abort event that will allow it to stop. Events may be fine for simple threads with one loop, but complicated threads with complex processing are another issue. Certainly, Suspend() must <em>never</em> be used for synchronisation, since that is not its intended function.</p>
<p>Just my opinion.</p>
| 17,259
|
<p>Is exposing CRUD operations through SOAP web services a bad idea? My instinct tells me that it is, not least because the overhead of the database calls involved could be huge. I'm struggling to find documentation for/against this (anti)pattern, so I was wondering if anyone could point me to some documentation or has an opinion on the matter. </p>
<p>Also, if anyone knows of best practises (and/or documentation to that effect) when designing soap services, that would be great.</p>
<p>Here's an example of how the web service would look:</p>
<ul>
<li>Create</li>
<li>Delete</li>
<li>Execute</li>
<li>Fetch</li>
<li>Update </li>
</ul>
<p>And here's what the implementation would look like:</p>
<pre><code>[WebMethod]
public byte[] Fetch(byte[] requestData)
{
SelectRequest request = (SelectRequest)Deserialize(requestData);
DbManager crudManager = new DbManager();
object result = crudManager.Select(request.ObjectType, request.Criteria);
return Serialize(result);
}
</code></pre>
|
<p>I think publishing a SOAP service that exposes CRUD operations to anonymous, public "users" would be a particularly bad idea. If, however, you can restrict one or both of these caveats, then I see nothing wrong with it (moreover I've implemented such services many times).</p>
<ul>
<li><p>You can require, in addition to whatever method parameters you require to perform the operation, username & password parameters that in effect authenticate the originator prior to processing the request; a failure to authenticate can be signalled with the return of a SOAP exception. If you were especially paranoid, you could optionally run the service over SSL.</p></li>
<li><p>You can have the server solution that deals with sending and receiving the requests filter based on IP, only allowing requests from a list of approved addresses.</p></li>
</ul>
<p>Yes, there are overheads to running requests over SOAP (as opposed to exposing direct database access) - namely the processing time to wrap a request into a HTTP request, open a socket & send it (and the reverse at the receiving end and the again for the response) - but, it does have advantages.</p>
<p>Java (though the NetBeans IDE) and .Net (through VS), both support consumption of Web Services into projects / solutions - the biggest benefit of this is that objects / structures on the remote service are automatically translated into native objects in the consuming application, which is exceptionally handy.</p>
|
<p>There is nothing wrong with exposing the CRUD operations via SOAP web-services per se.</p>
<p>You will obviously find quite a lot of examples for such services.</p>
<p>However, depending on your particular requirements, you might find that using SOAP is too much overhead, or that you would be better off using JSON/AJAX etc.</p>
<p>So I believe that unless you provide additional details about your particular situation, there is no good answer for your question.</p>
| 22,687
|
<p>Imagine in the Global.asax.cs file I had an instance class as a private field. Let's say like this:</p>
<pre><code>private MyClass _myClass = new MyClass();
</code></pre>
<p>And I had a static method on Global called GetMyClass() that gets the current HttpApplication and returns that instance.</p>
<pre><code>public static MyClass GetMyClass()
{
return ((Global)HttpContext.Current.ApplicationInstance)._myClass;
}
</code></pre>
<p>So I could get the instance on the current requests httpapplication by calling Global.GetMyClass().</p>
<p>Keep in mind that there is more than one (Global) HttpApplication. There is an HttpApplication for each request and they are pooled/shared, so in the truest sense it is not a real <a href="http://en.wikipedia.org/wiki/Singleton_pattern" rel="nofollow noreferrer">singleton</a>. But it does follow the pattern to a degree.</p>
<p>So as the question asked, would you consider this at the very least the singleton pattern?</p>
<p>Would you say it should not be used? Would you discourage its use? Would you say it's a <strong><em>possibly</em></strong> bad practice like a true singleton.</p>
<p>Could you see any problems that may arise from this type of usage scenario?</p>
<p>Or would you say it's not a true singleton, so it's OK, and not bad practice. Would you recommend this as a semi-quasi singleton where an instance per request is required? If not what other pattern/suggestion would you use/give?</p>
<p>Have you ever used anything such as this?</p>
<p>I have used this on past projects, but I am unsure if it's a practice I should stay away from. I have never had any issues in the past though.</p>
<p>Please give me your thoughts and opinions on this.</p>
<p>I am not asking what a singleton is. And I consider a singleton bad practice when used improperly which is in many many many cases. That is me. However, that is not what I am trying to discuss. I am trying to discuss THIS scenario I gave.</p>
|
<p>Whether or not this fits the cookie-cutter pattern of a Singleton, it still suffers from the same problems as Singleton:</p>
<ul>
<li>It is a static, concrete reference and cannot be substituted for separate behavior or stubbed/mocked during a test</li>
<li>You cannot subclass this and preserve this behavior, so it's quite easy to circumvent the singleton nature of this example</li>
</ul>
|
<p>I would say that it is definitely NOT a singleton. Design patterns are most useful as definitions of common coding practices. When you talk about singletons, you are talking about an object where there is only one instance.</p>
<p>As you yourself have noted, there are multiple HttpApplications, so your code does not follow the design of a Singleton and does not have the same side-effects.</p>
<p>For example, one might use a singleton to update currency exchange rates. If this person unknowingly used your example, they would fire up seven instances to do the job that 'only one object' was meant to do.</p>
| 22,427
|
<p>Modern UIs are starting to give their UI elements nice inertia when moving. Tabs slide in, page transitions, even some listboxes and scroll elements have nice inertia to them (the iPhone, for example). What is the best algorithm for this? It is more than just gravity, as they speed up and then slow down as they fall into place. I have tried various formulae for speeding up to a maximum (terminal) velocity and then slowing down, but nothing I have tried "feels" right. It always feels a little bit off. Is there a standard for this, or is it just a matter of playing with various numbers until it looks/feels right?</p>
|
<p>You're talking about two different things here.</p>
<p>One is momentum - giving things residual motion when you release them from a drag. This is simply about remembering the velocity of a thing when the user releases it, then applying that velocity to the object every frame and also reducing the velocity every frame by some amount. How you reduce velocity every frame is what you experiment with to get the feel right.</p>
<p>The other thing is ease-in and ease-out animation. This is about smoothly accelerating/decelerating objects when you move them between two positions, instead of just linearly interpolating. You do this by simply feeding your 'time' value through a sigmoid function before you use it to interpolate an object between two positions. One such function is</p>
<pre><code>smoothstep(t) = 3*t*t - 2*t*t*t [0 <= t <= 1]
</code></pre>
<p>This gives you both ease-in and ease-out behaviour. However, you'll more commonly see only ease-out used in GUIs. That is, objects start moving snappily, then slow to a halt at their final position. To achieve that you just use the right half of the curve, ie.</p>
<pre><code>smoothstep_eo(t) = 2*smoothstep((t+1)/2) - 1
</code></pre>
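<p>The two curves above translate directly into code; this is a small, self-contained sketch (the function names follow the pseudocode above, and the <code>animate</code> helper is an illustrative addition):</p>

```python
def smoothstep(t):
    # Cubic ease-in/ease-out on [0, 1]: starts and ends with zero velocity.
    return 3 * t * t - 2 * t * t * t

def smoothstep_eo(t):
    # Ease-out only: remap t onto the right half of the smoothstep curve.
    return 2 * smoothstep((t + 1) / 2) - 1

def animate(start, end, t):
    # Position of a UI element at normalized time t in [0, 1]:
    # it starts moving quickly and settles into its final position.
    return start + (end - start) * smoothstep_eo(t)

print(animate(0, 100, 0.0))   # 0.0
print(animate(0, 100, 1.0))   # 100.0
```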
|
<p>It's playing with the numbers.. What feels good is good.</p>
<p>I've tried to develop magic formulas myself for years. In the end the ugly hack always felt best. Just make sure you somehow time your animations properly and don't rely on some kind of redraw/refresh rate. These tend to change based on the OS.</p>
| 23,837
|
<p>I'm writing a small GUI app that contains some "editor" functionality, and I'd like to let users open a few sample text files to test things out quickly. The easiest way of doing this would be to package a separate zip with the appropriate sample files and have users open them manually; I'd like to make things a little more user-friendly and allow them to pick the files from inside the application and then run them.</p>
<p>So... what do I use? I initially considered .properties but that doesn't seem terribly well suited for the job...</p>
|
<p>You can include a resource file right in the jar and then open it as a resource stream in your app. If you're using Spring, you can inject the resource right into a bean. If not, check out <a href="http://java.sun.com/javase/6/docs/api/java/lang/Class.html#getResourceAsStream(java.lang.String)" rel="nofollow noreferrer">Class.getResourceAsStream()</a>. You just have to be careful about the path you use to get to the resource file.</p>
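<p>A minimal, runnable sketch of the resource-stream approach. The <code>/samples/hello.txt</code> path mentioned in the comment is hypothetical; the <code>String.class</code> probe is used only because that resource is guaranteed to exist on the classpath:</p>

```java
import java.io.InputStream;

public class ResourceDemo {
    public static void main(String[] args) throws Exception {
        // In a real app you would bundle samples in the jar and load them with
        // something like MyApp.class.getResourceAsStream("/samples/hello.txt").
        // Here we load a resource that always exists so the sketch is runnable:
        try (InputStream in = String.class.getResourceAsStream("String.class")) {
            System.out.println(in != null ? "resource found" : "resource missing");
        }
    }
}
```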
|
<p>Your FileDialog can be given a FilenameFilter that filters files by any criteria you like. You can default-point it to a directory of sample files, have it ignore everything not named ".sample" or "MySampleXXXX.java", e.g. </p>
<pre><code>myDialog.setFilenameFilter( new FilenameFilter() {
    public boolean accept (File dir, String name) {
return name.startsWith("FooBar");
}
} );
</code></pre>
| 33,939
|
<p>I'm writing a web app that points to external links. I'm looking to create a non-sequential, non-guessable id for each document that I can use in the URL. I did the obvious thing: treating the url as a string and str#crypt on it, but that seems to choke on any non-alphanumeric characters, like the slashes, dots and underscores.</p>
<p>Any suggestions on the best way to solve this problem?</p>
<p>Thanks!</p>
|
<p>Depending on how long a string you would like you can use a few alternatives:</p>
<pre><code>require 'digest'
Digest.hexencode('http://foo-bar.com/yay/?foo=bar&a=22')
# "687474703a2f2f666f6f2d6261722e636f6d2f7961792f3f666f6f3d62617226613d3232"
require 'digest/md5'
Digest::MD5.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22')
# "43facc5eb5ce09fd41a6b55dba3fe2fe"
require 'digest/sha1'
Digest::SHA1.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22')
# "2aba83b05dc9c2d9db7e5d34e69787d0a5e28fc5"
require 'digest/sha2'
Digest::SHA2.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22')
# "e78f3d17c1c0f8d8c4f6bd91f175287516ecf78a4027d627ebcacfca822574b2"
</code></pre>
<p>Note that this won't be unguessable, you may have to combine it with some other (secret but static) data to salt the string:</p>
<pre><code>salt = 'foobar'
Digest::SHA1.hexdigest(salt + 'http://foo-bar.com/yay/?foo=bar&a=22')
# "dbf43aff5e808ae471aa1893c6ec992088219bbb"
</code></pre>
<p>Now it becomes much harder to generate this hash for someone who doesn't know the original content and has no access to your source.</p>
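On the salting approach: a keyed HMAC is the standard construction for exactly this "secret but static data" pattern, and it avoids some subtle weaknesses of plain `salt + data` concatenation. A rough equivalent (sketched in Python here; the function name and secret are just for illustration):

```python
import hmac
import hashlib

SECRET = b"foobar"  # server-side secret, analogous to the salt above

def url_token(url):
    """Deterministic, non-guessable id for a URL using HMAC-SHA1."""
    return hmac.new(SECRET, url.encode("utf-8"), hashlib.sha1).hexdigest()

token = url_token("http://foo-bar.com/yay/?foo=bar&a=22")
print(token)  # 40 hex chars; stable across calls, unforgeable without SECRET
```

The same id is produced every time for the same URL, so it can double as a lookup key.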
|
<p>Use <a href="http://ruby-doc.org/stdlib/libdoc/digest/rdoc/index.html" rel="nofollow noreferrer">Digest::MD5</a> from Ruby's standard library:</p>
<pre><code>Digest::MD5.hexdigest(my_url)
</code></pre>
| 9,344
|
<p>I am trying to determine what issues could be caused by using the following serialization surrogate to enable serialization of anonymous functions/delegate/lambdas. </p>
<pre><code>// see http://msdn.microsoft.com/msdnmag/issues/02/09/net/#S3
class NonSerializableSurrogate : ISerializationSurrogate
{
public void GetObjectData(object obj, SerializationInfo info, StreamingContext context)
{
foreach (FieldInfo f in obj.GetType().GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
info.AddValue(f.Name, f.GetValue(obj));
}
public object SetObjectData(object obj, SerializationInfo info, StreamingContext context,
ISurrogateSelector selector)
{
foreach (FieldInfo f in obj.GetType().GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
f.SetValue(obj, info.GetValue(f.Name, f.FieldType));
return obj;
}
}
</code></pre>
<p><strong>Listing 1</strong> <em>adapted from</em> <a href="http://www.agilekiwi.com/dotnet/CountingDemo.cs" rel="noreferrer">Counting Demo</a></p>
<p>The main issue I can think of that might be a problem is that the anonymous class is an internal compiler detail and its structure is not guaranteed to remain constant between revisions to the .NET Framework. I'm fairly certain this is the case based on my research into the similar problem with iterators.</p>
<h2>Background</h2>
<p>I am investigating the serialization of anonymous functions. I was expecting this not to work, but found it did for some cases. As long as the lambda did <em>not</em> force the compiler to generate an anonymous class, everything works fine. </p>
<p>A SerializationException is thrown if the compiler requires a generated class to implement the anonymous function. This is because the compiler generated class is not marked as serializable.</p>
<h2>Example</h2>
<pre><code>namespace Example
{
[Serializable]
class Other
{
public int Value;
}
[Serializable]
class Program
{
static void Main(string[] args)
{
MemoryStream m = new MemoryStream();
BinaryFormatter f = new BinaryFormatter();
// Example 1
Func<int> succeeds = () => 5;
f.Serialize(m, succeeds);
// Example 2
Other o = new Other();
Func<int> fails = () => o.Value;
f.Serialize(m, fails); // throws SerializationException - Type 'Example.Program+<>c__DisplayClass3' in Assembly 'Example, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is not marked as serializable.
}
    }
}
</code></pre>
<p><strong>Listing 2</strong></p>
<p>This is similar to the issue of trying to serialize <em>iterators</em> and I had found the following code in a previous search (see <a href="http://www.agilekiwi.com/dotnet/CountingDemo.cs" rel="noreferrer">countingdemo</a>) Using the code from <strong>Listing 1</strong> and an ISurrogateSelector I was able to successfully serialize and deserialize the second failing example.</p>
<h2>Objective</h2>
<p>I have a system that is exposed via a web service. The system has a complex but small state (many objects, not a lot of properties per object). The state is saved in the ASP.NET Cache, but is also serialized to a BLOB in SQL in case of cache expiration. Some objects need to execute arbitrary "events" upon reaching some condition. Hence they have properties accepting Action/Func objects. Contrived example:</p>
<pre><code> class Command
{
public Command(Action action, Func<bool> condition);
}
</code></pre>
<p>Somewhere else</p>
<pre><code> void DoSomethingWithThing(Thing thing)
{
state = Store.GetCurrentState();
        Command cmd = new Command(() => thing.Foo(), () => thing.IsReady());
state.Add(cmd);
Store.Save(state);
}
</code></pre>
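For what it's worth, the same limitation exists outside .NET. Python's pickle, for instance, serializes a top-level function by reference but refuses a closure, because the captured state lives in a hidden object the serializer doesn't know how to handle — the same underlying problem as the compiler-generated display class:

```python
import pickle

def plain():
    # Top-level function: pickled by reference, round-trips fine.
    return 5

def make_check(thing):
    # The returned lambda captures 'thing', analogous to the
    # compiler-generated display class capturing 'o' in Listing 2.
    return lambda: thing

ok = pickle.dumps(plain)

try:
    pickle.dumps(make_check(object()))
    closure_serialized = True
except Exception as e:
    closure_serialized = False
    print("closure refused:", e)
```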
|
<p>Did you see this post that I wrote as a followup to the CountingDemo: <a href="http://dotnet.agilekiwi.com/blog/2007/12/update-on-persistent-iterators.html" rel="noreferrer">http://dotnet.agilekiwi.com/blog/2007/12/update-on-persistent-iterators.html</a> ? Unfortunately, Microsoft have confirmed that they probably will change the compiler details (one day), in a way that is likely to cause problems. (e.g. f/when you update to the new compiler, you won't be able to deserialise the stuff you saved under the old (current) compiler.)</p>
|
<p>I'm not 100% on this, but I believe that if you want to "save" a delegate or some code to the database that can be fairly dynamic, what you need to do is create an Expression, then you can compile the expression into a Func<...>. </p>
<p><a href="http://blogs.msdn.com/charlie/archive/2008/01/31/expression-tree-basics.aspx" rel="nofollow noreferrer">Expression Tree Basics</a></p>
<p><a href="http://www.lostechies.com/blogs/jimmy_bogard/archive/2009/06/17/more-on-late-bound-invocations-with-expression-trees.aspx" rel="nofollow noreferrer">Late Bound Invocations with Expression Trees</a></p>
| 41,658
|
<p>In my everlasting quest to suck less I'm trying to understand the "yield" statement, but I keep encountering the same error.</p>
<blockquote>
<p>The body of [someMethod] cannot be an iterator block because
'System.Collections.Generic.List< AClass>' is not an iterator interface type.</p>
</blockquote>
<p>This is the code where I got stuck:</p>
<pre><code>foreach (XElement header in headersXml.Root.Elements()){
yield return (ParseHeader(header));
}
</code></pre>
<p>What am I doing wrong? Can't I use yield in an iterator? Then what's the point?
In this example it said that <code>List<ProductMixHeader></code> is not an iterator interface type.
<code>ProductMixHeader</code> is a custom class, but I imagine <code>List</code> is an iterator interface type, no?</p>
<p><strong>--Edit--</strong><br>
Thanks for all the quick answers.<br>
I know this question isn't all that new and the same resources keep popping up.<br>
It turned out I was thinking I could return <code>List<AClass></code> as a return type, but since <code>List<T></code> isn't lazy, it cannot. Changing my return type to <code>IEnumerable<T></code> solved the problem :D</p>
<p>A somewhat related question (not worth opening a new thread): is it worth giving <code>IEnumerable<T></code> as a return type if I'm sure that 99% of the cases I'm going to go .ToList() anyway? What will the performance implications be?</p>
|
<p>A method using <em>yield return</em> must be declared as returning one of the following two interfaces (or their non-generic counterparts):</p>
<pre><code>IEnumerable<SomethingAppropriate>
IEnumerator<SomethingAppropriate>
</code></pre>
<p>(thanks <a href="https://stackoverflow.com/users/22656/jon-skeet">Jon</a> and <a href="https://stackoverflow.com/users/23354/marc-gravell">Marc</a> for pointing out IEnumerator)</p>
<p>Example:</p>
<pre><code>public IEnumerable<AClass> YourMethod()
{
foreach (XElement header in headersXml.Root.Elements())
{
yield return (ParseHeader(header));
}
}
</code></pre>
<p>yield is a lazy producer of data, only producing another item after the first has been retrieved, whereas returning a list will return everything in one go.</p>
<p>So there is a difference, and you need to declare the method correctly.</p>
<p>For more information, read <a href="https://stackoverflow.com/questions/317462/some-help-understanding-yield#317502">Jon's answer here</a>, which contains some very useful links.</p>
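The laziness is easy to see with a side effect (demonstrated in Python, whose generators behave like C# iterator blocks):

```python
parsed = []  # records *when* each header is actually processed

def parse_headers(headers):
    for h in headers:
        parsed.append(h)       # side effect marks the moment of work
        yield h.upper()

gen = parse_headers(["a", "b", "c"])
print(parsed)                  # []  -- creating the generator did no work
first = next(gen)              # parses only "a"
print(first, parsed)
rest = list(gen)               # forces the remainder, like .ToList()
print(rest, parsed)
```

As for the edit's follow-up question: if callers almost always call `.ToList()`, returning `IEnumerable<T>` costs nothing extra — it's still a single pass — but it leaves the caller the option to stop early or stream, so it's usually the better signature.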
|
<p>What does the method you're using this in look like? I don't think this can be used in just a loop by itself.</p>
<p>For example...</p>
<pre><code>public IEnumerable<string> GetValues() {
foreach(string value in someArray) {
if (value.StartsWith("A")) { yield return value; }
}
}
</code></pre>
| 41,010
|
<p>Is there a way in PHP to compile a regular expression, so that it can then be compared to multiple strings without repeating the compilation process? Other major languages can do this -- Java, C#, Python, Javascript, etc.</p>
|
<p>The Perl-Compatible Regular Expressions library may already be optimized for your use case without providing a Regex class like other languages do: </p>
<blockquote>
<p>This extension maintains a global per-thread cache of compiled regular expressions (up to 4096).</p>
<p><a href="http://www.php.net/manual/en/intro.pcre.php" rel="noreferrer">PCRE Introduction</a></p>
</blockquote>
<p>This is how the study modifier which <a href="https://stackoverflow.com/questions/209906/compile-regex-in-php#210027">Imran</a> described can store the compiled expression between calls.</p>
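The compile-once idea the asker is after looks like this in languages that do expose a regex object (Python used here for illustration; its `re` module also keeps an internal cache of compiled patterns, much like PCRE's per-thread cache):

```python
import re

# Compiled once, reused many times -- the explicit equivalent of what
# PHP's PCRE extension does for you behind the scenes.
date_re = re.compile(r"\d{4}-\d{2}-\d{2}")

inputs = ["2008-10-21", "not a date", "1999-01-01"]
dates = [s for s in inputs if date_re.match(s)]
print(dates)  # ['2008-10-21', '1999-01-01']
```

So in PHP, simply reusing the same pattern string in repeated `preg_match()` calls already benefits from the cached compilation.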
|
<p>I'm not positive that you can. If you check out <a href="https://rads.stackoverflow.com/amzn/click/com/0596528124" rel="nofollow noreferrer" rel="nofollow noreferrer">Mastering Regular Expressions</a>, some PHP-specific optimization techniques are discussed in Chapter 10: PHP. Specifically, the use of the S pattern modifier causes the regex engine to "study" the regular expression before it applies it. Depending on your pattern and your text, this could give you some speed improvements.</p>
<p><em>Edit</em>: you can take a peek at the contents of the book using <a href="http://books.google.ro/books?id=NYEX-Q9evKoC&dq=Mastering+Regular+Expressions&printsec=frontcover&source=bn&hl=ro&ei=XAraSZv9Kcma_AaTjPXPDA&sa=X&oi=book_result&ct=result&resnum=5#PPA478,M1" rel="nofollow noreferrer">books.google.com</a>.</p>
| 25,768
|
<p>What do I lose by adopting test driven design?</p>
<p>List only negatives; do not list benefits written in a negative form.</p>
|
<p>If you want to do "real" TDD (read: test first with the red, green, refactor steps) then you also have to start using mocks/stubs, when you want to test integration points.</p>
<p>When you start using mocks, after a while, you will want to start using Dependency Injection (DI) and an Inversion of Control (IoC) container. To do that you need to use interfaces for everything (which have a lot of pitfalls themselves). </p>
<p>At the end of the day, you have to write a lot more code, than if you just do it the "plain old way". Instead of just a customer class, you also need to write an interface, a mock class, some IoC configuration and a few tests.</p>
<p>And remember that the test code should also be maintained and cared for. Tests should be as readable as everything else and it takes time to write good code.</p>
<p>Many developers don't quite understand how to do all these "the right way". But because everybody tells them that TDD is the only true way to develop software, they just try the best they can.</p>
<p>It is much harder than one might think. Often projects done with TDD end up with a lot of code that nobody really understands. The unit tests often test the wrong thing, the wrong way. And nobody agrees on what a good test should look like, not even the so-called gurus.</p>
<p>All those tests make it a lot harder to "change" (opposite to refactoring) the behavior of your system and simple changes just becomes too hard and time consuming.</p>
<p>If you read the TDD literature, there are always some very good examples, but often in real life applications, you must have a user interface and a database. This is where TDD gets really hard, and most sources don't offer good answers. And if they do, it always involves more abstractions: mock objects, programming to an interface, MVC/MVP patterns etc., which again require a lot of knowledge, and... you have to write even more code.</p>
<p>So be careful... if you don't have an enthusiastic team and at least one experienced developer who knows how to write good tests and also knows a few things about good architecture, you really have to think twice before going down the TDD road.</p>
|
<p>TDD requires a certain organization for your code. This might be inefficient or difficult to read. Or even architecturally wrong; for example, since <code>private</code> methods cannot be called outside a class, you have to make methods non-private to make them testable, which is just wrong.</p>
<p>When code changes, you have to change the tests as well. With refactoring this can be a
lot of extra work.</p>
| 8,987
|
<p>As an amateur software developer (I'm still in academia) I've written a few schemas for XML documents. I routinely run into design flubs that cause ugly-looking XML documents because I'm not entirely certain what the semantics of XML exactly are.</p>
<p>My assumptions:</p>
<pre><code><property> value </property>
</code></pre>
<p>property = value</p>
<pre><code><property attribute="attval"> value </property>
</code></pre>
<p>A property with a special descriptor, the attribute.</p>
<pre><code><parent>
<child> value </child>
</parent>
</code></pre>
<p>The parent has a characteristic "child" which has the value "value."</p>
<pre><code><tag />
</code></pre>
<p>"Tag" is a flag or it directly translates to text. I'm not sure on this one.</p>
<pre><code><parent>
<child />
</parent>
</code></pre>
<p>"child" describes "parent." "child" is a flag or boolean. I'm not sure on this one, either.</p>
<p>Ambiguity arises if you want to do something like representing cartesian coordinates:</p>
<pre><code><coordinate x="0" y="1" />
<coordinate> 0,1 </coordinate>
<coordinate> <x> 0 </x> <y> 1 </y> </coordinate>
</code></pre>
<p>Which one of these options is most correct? I would lean towards the third based upon my current conception of XML schema design, but I really don't know.</p>
<p>What are some resources that succinctly describe how to effectively design xml schemas?</p>
|
<p>One general (but important!) recommendation is never to store multiple logical pieces of data in a single node (be it a text node or an attribute node). Otherwise, you end up needing your own parsing logic <strong>on top of</strong> the XML parsing logic you normally get for free from your framework.</p>
<p>So in your coordinate example,
<code><coordinate x="0" y="1" /></code>
and
<code><coordinate> <x>0</x> <y>1</y> </coordinate></code>
are both reasonable to me.</p>
<p>But <code><coordinate> 0,1 </coordinate></code> isn’t very good, because it’s storing two logical pieces of data (the X-coordinate and the Y-coordinate) in a single XML node—forcing the consumer to parse the data <strong>outside</strong> of their XML parser. And while splitting a string by a comma is pretty simple, there are still some ambiguities like what happens if there's an extra comma at the end.</p>
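The practical difference shows up as soon as you consume the document. A sketch with a standard XML parser (Python's ElementTree used here purely for illustration):

```python
import xml.etree.ElementTree as ET

# Attribute form: the parser hands you each logical value separately.
node = ET.fromstring('<coordinate x="0" y="1" />')
x, y = int(node.get("x")), int(node.get("y"))

# Packed-text form: you must re-parse the text node yourself, and
# decide what stray whitespace or a trailing comma should mean.
node2 = ET.fromstring('<coordinate> 0,1 </coordinate>')
x2, y2 = (int(part) for part in node2.text.strip().split(","))

print((x, y), (x2, y2))  # (0, 1) (0, 1)
```

Both forms arrive at the same data, but only the second required hand-written parsing logic on top of the XML parser.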
|
<p>Look at the relationships of the data you are trying to represent is the best approach that I've found.</p>
| 28,666
|
<p>I am trying to do a search in my Eclipse (Java) workspace to find all instances of static variables that are not final.</p>
<p>I tried various regexes but they do not result in any matches. Can someone suggest a regex that will match all lines containing <code>static</code> and not containing <code>final</code>, and not ending in a <code>{</code>?</p>
<p>The last part about not ending with a <code>{</code> will eliminate static methods.</p>
<p>An example:</p>
<pre><code>public class FlagOffendingStatics {
private static String shouldBeFlagged = "not ok";
private static final String ok = "this is fine";
public static void methodsAreOK() {
}
}
</code></pre>
|
<p>This pattern works:</p>
<pre><code>[^(final)] static [^(final)][^(\})]*$
</code></pre>
<p>Here is a test:</p>
<pre><code>$ cat test.txt
private int x = "3";
private static x = "3";
private final static String x = "3";
private static final String x = "3";
private static String x = "3";
public static void main(String args[]) {
blah;
}
$ grep "[^(final)] static [^(final)][^(\})]*$" test.txt
private static x = "3";
private static String x = "3";
</code></pre>
<p>(I realize that <code>private static x = "3";</code> isn't valid syntax, but the pattern still holds ok.)</p>
<p>The pattern accounts for the fact that <code>final</code> can appear before or after <code>static</code> with <code>[^(final)] static [^(final)]</code>. The rest of the pattern, <code>[^(\})]*$</code>, is meant to prevent any <code>{</code> characters from appearing in the remainder of the line.</p>
<p>This pattern will not work however if anyone likes to write their method statements like this:</p>
<pre><code>private static void blah()
{
//hi!
}
</code></pre>
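The pattern can also be sanity-checked outside Eclipse; a quick harness (Python used for the regex engine, same character-class syntax as the pattern above):

```python
import re

# The pattern from the answer: a char not in (final) before " static ",
# one after it, then no '(' or '}' to the end of the line.
pattern = re.compile(r"[^(final)] static [^(final)][^(\})]*$")

lines = [
    'private int x = "3";',
    'private final static String x = "3";',
    'private static final String x = "3";',
    'private static String x = "3";',
    "public static void main(String args[]) {",
]
matches = [line for line in lines if pattern.search(line)]
print(matches)  # only the non-final static field survives
```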
|
<p>One of the IntelliJ code inspections already does this. You can actually run the code inspector stand-alone if you want and have it generate a report (useful for a nightly build).</p>
<p>As the previous poster said, Find Bugs will do this and I imagine other code inspection tools will do it as well. You're probably better off integrating one of those more complete code inspection tools rather than a one-off script just for this one thing.</p>
| 29,786
|
<p>Using the following code I get a nice formatted string:</p>
<pre><code>Request.QueryString.ToString
</code></pre>
<p>Gives me something like: &hello=world&microsoft=sucks</p>
<p>But when I use this code to clone the collection to another object (of the same type) I get the Type() back from the ToString() method instead.</p>
<pre><code>System.Collections.Specialized.NameValueCollection variables = new System.Collections.Specialized.NameValueCollection(Request.QueryString);
if (!string.IsNullOrEmpty(variables["sid"]))
variables.Remove("sid");
Response.Write(variables.ToString());
</code></pre>
<p>Is there a tidier way to output it rather than looking and building the string manually?</p>
|
<p>HttpValueCollection is internal, but you can use "var" to declare it without extracting it with Reflector.</p>
<pre><code>var query = HttpUtility.ParseQueryString(Request.Url.Query);
query["Lang"] = myLanguage; // Add or replace param
string myNewUrl = Request.Url.AbsolutePath + "?" + query;
</code></pre>
|
<p>Request.QueryString actually returns an HttpValueCollection object (which, unfortunately, is internal to System.Web, so you can't use it directly). </p>
<p>Nevertheless, HttpValueCollection is derived from NameValueCollection, and its Remove() method remains intact, so you should be able to call Request.QueryString.Remove("sid");</p>
| 28,462
|
<p>In a recent project the "lead" developer designed a database schema where "larger" tables would be split across two separate databases with a view on the main database which would union the two separate database-tables together. The main database is what the application was driven off of so these tables looked and felt like ordinary tables (except some quirky things around updating). This seemed like a HUGE performance problem. We do see problems with performance around these tables but nothing to make him change his mind about his design. Just wondering what is the best way to do this, or if it is even worth doing?</p>
|
<p>I don't think that you are really going to gain anything by partitioning the table across multiple databases in a single server. All you have essentially done there is increased the overhead in working with the "table" in the first place by having several instances (i.e. open in two different DBs) of it under a single SQL Server instance.</p>
<p>How large of a dataset do you have? I have a client with a 6 million row table in SQL Server that contains 2 years worth of sales data. They use it transactionally and for reporting without any noticeable speed problems. </p>
<p>Tuning the indexes and choosing the correct clustered index is crucial to performance of course. </p>
<p>If your dataset is really large and you are looking to partition, you will get more bang for your buck partitioning the table across physical servers. </p>
|
<p>I would disagree with the assumption that nothing can be gained by partitioning.</p>
<p>If the partition data is physically and logically aligned, then the potential IO of queries should be dramatically reduced.</p>
<p>For example, we have a table which has a batch field stored as an INT.</p>
<p>If we partition the data by this field and then re-run a query for a particular batch, we should be able to run SET STATISTICS IO ON before and after partitioning and see a reduction in IO.</p>
<p>If we have a million rows per partition and each partition is written to a separate device, the query should be able to eliminate the nonessential partitions.</p>
<p>I've not done a lot of partitioning on SQL Server, but I do have experience of partitioning on Sybase ASE, where this is known as partition elimination. When I have time I'm going to test out the scenario on a SQL Server 2005 machine.</p>
| 20,327
|
<p>I'm getting a 404 error when trying to run another web service on an IIS 6 server which is also running Sharepoint 2003. I'm pretty sure this is an issue with sharepoint taking over IIS configuration. Is there a way to make a certain web service or web site be ignored by whatever Sharepoint is doing?</p>
|
<p>I found the command line solution.</p>
<pre><code>STSADM.EXE -o addpath -url http://localhost/<your web service/app> -type exclusion
</code></pre>
|
<p>You'll have to go into the SharePoint admin console and explicitly allow that web application to run on the same web site as SharePoint. </p>
<p>I believe it is under Define Managed Paths.</p>
<p>Central Administration > Application Management > Define Managed Paths</p>
| 12,987
|
<p>There are a lot of different systems for balancing load and achieving redundancy in production servers (Not just web servers)</p>
<ul>
<li>Round-robin DNS</li>
<li>Linux Virtual Server</li>
<li>Cisco Local Director</li>
<li>F5 BigIP</li>
<li>Windows NLB</li>
<li>etc?</li>
</ul>
<p>If you use one of these (or another) in production, which one? How well does it work for you? Have you evaluated others?</p>
|
<p>For our apache processes we use(d): <a href="http://www.f5.com/products/big-ip/" rel="noreferrer">http://www.f5.com/products/big-ip/</a>
This seems like the industry standard. I guess it all comes down to how much you're paying, and what you're load balancing. </p>
<p>e.g. Websphere could be done:</p>
<p>big ip -> Apache 1 -> WebSphere 1</p>
<p>big ip -> Apache 2 -> WebSphere 2</p>
<p>or you could cross it:</p>
<p>big ip -> Apache 1 -> WebSphere 1 & 2 (round robin)</p>
<p>big ip -> Apache 2 -> WebSphere 2 & 1 (round robin)</p>
<p>We used the latter and it worked perfectly. Watch out for the scenario where one host fails: in most cases you're going to lose that request if it just times out. </p>
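The crossed round-robin scheme is easy to model; a toy sketch of the routing idea (host names hypothetical, illustrated in Python):

```python
from itertools import cycle

# Each Apache front end round-robins across both WebSphere back ends,
# but starts at a different one, so load stays spread even if one
# front end drops out.
backends_for = {
    "apache1": cycle(["websphere1", "websphere2"]),
    "apache2": cycle(["websphere2", "websphere1"]),
}

def route(front_end):
    return next(backends_for[front_end])

print([route("apache1") for _ in range(4)])
```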
|
<p>We are using HAProxy with great success. I had never seen it go above 2% CPU usage even during high load average.</p>
| 20,777
|
<p>Is this seen as an inefficient prime number generator? It seems to me that this is pretty efficient. Is it the use of the stream that makes the program run slower?</p>
<p>I am trying to submit this to <a href="http://www.spoj.pl/" rel="nofollow noreferrer">SPOJ</a> and it tells me that the time limit was exceeded... </p>
<pre><code>#include <iostream>
#include <sstream>
using namespace std;
int main() {
int testCases, first, second, counter = 0;
bool isPrime = true;
stringstream out;
cin >> testCases;
for (int i = 0; i < testCases; i++) {
// get the next two numbers
cin >> first >> second;
if (first%2 == 0)
first++;
// find the prime numbers between the two given numbers
for (int j = first; j <= second; j+=2) {
// go through and check if j is prime
for (int k = 2; k < j; k++) {
if (j%k == 0) {
isPrime = false;
break;
}
}
if (isPrime) {
out << j << "\n";
}
isPrime = true;
}
out << "\n";
}
cout << out.str();
return 0;
}
</code></pre>
<p>EDIT: The program is supposed to generate prime numbers between the numbers specified in the input. (See here for more details: <a href="http://www.spoj.pl/problems/PRIME1/" rel="nofollow noreferrer">Prime Generator Problem</a> )</p>
<p>-Tomek</p>
|
<p>This is one step (skipping even numbers) above the naive algorithm. I would suggest the <a href="http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes" rel="nofollow noreferrer">Sieve Of Eratosthenes</a> as a more efficient algorithm. From the above link:</p>
<blockquote>
  <p>The complexity of the algorithm is
  O(n(log n)(log log n)) bit operations with a memory
  requirement of O(n). The segmented
  version of the sieve of Eratosthenes,
  with basic optimizations such as wheel
  factorization, uses O(n) operations
  and O(n<sup>1/2</sup> log log n / log n) bits of
  memory.</p>
</blockquote>
<p>The algorithm you give is somewhere near O(n^2). The speedup you get by skipping evens isn't that great because you would find an even number not to be prime on the first test. The sieve has a much greater memory requirement, but the runtime complexity is far superior for large <em>N</em>.</p>
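For the SPOJ problem specifically (bounds up to 10^9 but each range at most 10^5 wide), the segmented variant mentioned above is the usual fit: sieve the small primes up to sqrt(high), then strike their multiples inside [low, high]. A sketch (in Python for brevity; the same structure translates directly to C++):

```python
import math

def primes_in_range(low, high):
    """Segmented Sieve of Eratosthenes over the interval [low, high]."""
    # Step 1: ordinary sieve up to sqrt(high) for the "base" primes.
    limit = math.isqrt(high) + 1
    base = [True] * (limit + 1)
    base[0:2] = [False, False]
    for i in range(2, math.isqrt(limit) + 1):
        if base[i]:
            for j in range(i * i, limit + 1, i):
                base[j] = False
    small_primes = [i for i, is_p in enumerate(base) if is_p]

    # Step 2: mark composites in the requested window only.
    low = max(low, 2)
    seg = [True] * (high - low + 1)
    for p in small_primes:
        start = max(p * p, ((low + p - 1) // p) * p)
        for j in range(start, high + 1, p):
            seg[j - low] = False
    return [low + i for i, is_p in enumerate(seg) if is_p]

print(primes_in_range(1, 30))
```

Memory stays proportional to the window width plus sqrt(high), which is what makes the huge bounds tractable.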
|
<p>It can be made slightly more efficient. You don't need to start k at 2, you're already making sure not to test even numbers. So start k at 3.<br>
Then increment k by 2 every time because you don't need to test other even numbers.
The most efficient way that I can think of is to only test if a number is divisible by known prime numbers (then when you find another one add that to the list you test with).</p>
| 28,646
|
<p>My knowledge about implementing a parser is a bit rusty.</p>
<p>I have no idea about the current state of research in the area, and could need some links regarding recent advances and their impact on performance. </p>
<p>General resources about writing a parser are also welcome, (tutorials, guides etc.) since much of what I had learned at college I have already forgotten :)</p>
<p>I have the Dragon book, but that's about it.</p>
<p>And does anyone have input on parser generators like ANTLR and their performance? (ie. comparison with other generators)</p>
<p><strong>edit</strong> My main target is RDF/OWL/SKOS in N3 notation. </p>
|
<p>Mentioning the dragon book and antlr means you've answered your own question.</p>
<p>If you're looking for other parser generators you could also check out boost::spirit (<a href="http://spirit.sourceforge.net/" rel="noreferrer">http://spirit.sourceforge.net/</a>).</p>
<p>Depending on what you're trying to achieve you might also want to consider a DSL, which you can either parse yourself or write in a scripting language like boo, ruby, python etc...</p>
|
<p>Hmm … your request is a bit unspecific. While there are many recent developments in this general area, they're all quite specialized (naturally, since the field has matured). The original parsing approaches haven't really changed, though. You might want to read up on changes in parser creation tools (<a href="http://www.antlr.org/" rel="nofollow noreferrer">Antlr</a>, <a href="http://www.devincook.com/goldparser/" rel="nofollow noreferrer">Gold Parser</a>, to name but a few).</p>
| 30,284
|
<p>I'm asking myself if it is possible to check if in ADO.NET the current transaction can be rolled back.</p>
<p>The msdn suggests the following implementation:</p>
<pre><code>private static void ExecuteSqlTransaction(string connectionString)
{
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
SqlCommand command = connection.CreateCommand();
SqlTransaction transaction;
// Start a local transaction.
transaction = connection.BeginTransaction("SampleTransaction");
// Must assign both transaction object and connection
// to Command object for a pending local transaction
command.Connection = connection;
command.Transaction = transaction;
try
{
command.CommandText =
"Insert into Region (RegionID, RegionDescription) VALUES (100, 'Description')";
command.ExecuteNonQuery();
command.CommandText =
"Insert into Region (RegionID, RegionDescription) VALUES (101, 'Description')";
command.ExecuteNonQuery();
// Attempt to commit the transaction.
transaction.Commit();
Console.WriteLine("Both records are written to database.");
}
catch (Exception ex)
{
Console.WriteLine("Commit Exception Type: {0}", ex.GetType());
Console.WriteLine(" Message: {0}", ex.Message);
// Attempt to roll back the transaction.
try
{
transaction.Rollback();
}
catch (Exception ex2)
{
// This catch block will handle any errors that may have occurred
// on the server that would cause the rollback to fail, such as
// a closed connection.
Console.WriteLine("Rollback Exception Type: {0}", ex2.GetType());
Console.WriteLine(" Message: {0}", ex2.Message);
}
}
}
}
</code></pre>
<p>Additionally there is the note:
Try/Catch exception handling should always be used when rolling back a transaction. A Rollback generates an InvalidOperationException if the connection is terminated or if the transaction has already been rolled back on the server.</p>
<p>But I can't really believe that a try/catch is the recommended solution to check if a rollback is possible.</p>
<p>I know that in the SQL Server implementation the SQLTransaction objects returns null on the Connection property if the transaction is a "zombie".</p>
<p>But this is quite implementation specific and it only works with SQL Server.</p>
<p>So is there a db-independent way to detect if a transaction can be rolled back?</p>
<p>tia
Martin</p>
|
<p>A lot of this complexity is handled by using a TransactionScope object in a "using" statement - check it out on <a href="http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx" rel="nofollow noreferrer">MSDN</a>. One caveat is that a TransactionScope will automatically "scale up" to use a distributed transaction when it is deemed necessary - sometimes this is desireable, and other times it isn't, so be careful if you are nesting TransactionScopes.</p>
|
<p>The problem is that on anything other than SQL Server 2005, the TransactionScope is promoted to a distributed transaction, which is quite an overhead.</p>
| 44,232
|
<p>I need a database that could be stored network drive and would allow multiple users (up to 20) to use it without any server software.</p>
<p>I'm considering MS Access or Berkeley DB.</p>
<p>Can you share your experience with file databases? <br>
Which one did you use, did you have any problems with it?</p>
|
<p>I would suggest <a href="http://www.sqlite.org/index.html" rel="noreferrer">SQLite</a> because the entire database is stored in a single file, and it quite safely handles multiple users accessing it at the same time. There are several different libraries that you can use for your client application and there is no server software needed.</p>
<p>One of the strengths is that it mimics SQL servers so closely that if you need to convert from using a database file to a full-fledged SQL Server, most of your queries in your client won't need to change. You'll just need to migrate the data over to the new server database (which I wouldn't be surprised if there are programs to convert SQLite databases to MySQL databases, for example.)</p>
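SQLite ships in Python's standard library, so the single-file setup takes only a few lines to sketch (an in-memory database is used here to keep the example self-contained; in practice you would pass a path on the network drive, e.g. `\\server\share\app.db`):

```python
import sqlite3

# ":memory:" for the sketch; a real deployment would connect to a
# single .db file on the shared drive that all 20 clients open.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

# Plain SQL, so a later migration to a server RDBMS changes little.
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
```

SQLite serializes writers with file locking, so concurrent access works, though heavy simultaneous writing over a network share is its weak spot.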
|
<p>I have been using Access for some time and in a variety of situations, including on-line. I have found that Access works well if it is properly set up according to the <a href="http://msdn.microsoft.com/en-us/library/aa167840.aspx" rel="nofollow noreferrer">guidelines</a>. One advantage of Access is that it includes everything in one package: Forms, Query Building, Reports, Database Management, and VBA. In addition, it works well with all other Office applications. The Access 2007 runtime can be obtained free from <a href="http://www.microsoft.com/downloads/details.aspx?familyid=d9ae78d9-9dc6-4b38-9fa6-2c745a175aed&displaylang=en" rel="nofollow noreferrer">here</a>, which makes distribution less expensive. Access is certainly unsuitable for large operations, but it should be quite suitable for twenty users. EDIT: <a href="http://office.microsoft.com/en-us/access/HP051868081033.aspx" rel="nofollow noreferrer">Microsoft</a> puts the number of concurrent users at 255.</p>
| 5,545
|
<p>When I'm creating a user for my web application, an SMTP email (using ASP.NET's SmtpClient) is sent to the user with the
automatically generated password. However, sometimes what I notice is that it times out and the new user simply won't receive the email with the password.</p>
<p>Alright, so I'll display a message indicating that the mail did not go through but the user is created.</p>
<p>Therefore, the sys admin has 2 options so far:</p>
<ol>
<li>Reset the password for the user and hope another SMTP mail is sent with the auto-generated password.</li>
<li>Delete and recreate the user.</li>
</ol>
<p>I could rollback the user creation if the smtp is not sent but what is the best practice to tackle this problem?</p>
<p>I'm thinking that I should retry sending the email 3 times with a timeout period of 5 seconds each. So 15 seconds would be the worse case scenario.</p>
<p>Is this the way to go?</p>
|
<p>Well, depending on your platform, if you can just hand off your mail to a local MTA, it should handle the retries and such. Your program can just queue the mail and move on, not worry about dealing with timeouts and graylists etc.</p>
<p>If the message still can't be delivered, you could always try resending it (via a password reset feature). If that fails as well, most likely there was a mistake in the email address, and I would suggest deleting the account, causing the user to re-register.</p>
<p>This, of course, might not be possible on some systems, depending what can be done with an unconfirmed user - that really depends on what you allow people to do before their email is validated.</p>
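If you do implement the question's "retry 3 times, 5 seconds apart" idea in application code rather than delegating to an MTA, the shape is language-agnostic; here is a minimal sketch in Python (in ASP.NET the equivalent wrapper would go around your SmtpClient call):

```python
import time

def send_with_retries(send_func, attempts=3, delay_seconds=5):
    """Call send_func(); on failure, retry up to `attempts` times total.

    send_func is any zero-argument callable that raises on failure,
    e.g. a wrapper around your SMTP send.
    """
    for attempt in range(attempts):
        try:
            send_func()
            return True
        except Exception:
            if attempt < attempts - 1:
                time.sleep(delay_seconds)
    return False
```

If every attempt fails, the caller can then decide whether to roll back the user creation or surface a "resend" option, rather than leaving the account in limbo.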
|
<p>IMHO you should notify the user, asking them to verify the email, without retries. </p>
<p>If the user does not verify the email and leaves the page, you had better roll back the account, since the user cannot access it anyway. </p>
<p>Most cases of timeout are caused by invalid email accounts. Users either made a mistake or gave you a nonexistent email address to avoid being spammed. </p>
<p>If at all possible, do not ask for your users' emails. The number one rule of programming should be: DO NOT annoy the user.</p>
| 8,148
|
<p>I would like to ask if there are any other alternatives, aside from DocumentViewer, for displaying an XPS document in a WPF application? A ready-to-use control or class in .NET if possible. </p>
<p>This is because DocumentViewer is a little slow when you are scrolling through the pages.</p>
<p>Thanks!</p>
|
<p>No, unless there are any third-party controls that I'm unaware of.</p>
|
<p>Use <a href="http://www.nixps.com/" rel="nofollow noreferrer">NiXPS</a> to convert your XPS document to PDF and use any PDF viewer for WPF out there. This way you will get better performance than you do with DocumentViewer.</p>
| 49,369
|
<p>I am using WinCVS as client and CVSNT as my source control server. Some of the files I wanted to add to my CVS repo, were added as Unicode files. Now, I want to recommit the same as ANSI (aka ASCII) files. However, despite deleting the old files from the repo, every time I add the file with the same name, it automatically assigns Unicode encoding to the file. </p>
<p>Is there a way out? Or in other words, can I change the encoding of a file, once it is added to CVS?</p>
|
<p>There's a couple of things that (might) come into play here:</p>
<ul>
<li><p>you can disable automatic file type detection in WinCvs itself: go to Admin|Preferences|Globals, the option named "Supply control when adding files" - in theory you should be able to use the regular Add command from the toolbar after you have done this</p></li>
<li><p>make sure you don't have any entries in your <a href="http://www.cvsnt.org/manual/html/Wrappers.html" rel="nofollow noreferrer">cvswrappers</a> (both client- and server-side) that define the file types you're adding as unicode</p></li>
<li><p>recent versions of WinCvs come bundled with a macro for adding files with a specific k-mode for the cases not covered by the WinCvs UI (look for Macros|Add|Extended Add Selection...) - you should probably explicitly force it to use "Text" (aka -kt) to make sure the server performs no file type auto-detection either</p></li>
<li><p>CVSNT supports versioning of file type changes. The command sequence for this in your case would be <code>cvs update -kt</code> followed by <code>cvs commit -f</code></p></li>
<li><p>recent versions of WinCvs also come bundled with a macro for performing the latter, it's under Macros|CVS|Change File Options</p></li>
</ul>
<p>[I am the author of both of the macros quoted here so feel free to contact me if they're giving you any trouble - you can find my contact information inside the macros themselves]</p>
|
<p>Firstly, I'd recommend using <a href="http://www.tortoisecvs.org/" rel="nofollow noreferrer">TortoiseCVS</a> as it has better <code>CVSNT</code> support. While <code>CVS</code> and <code>CVSNT</code> are very similar, <code>CVSNT</code> is <strong><em>not</em></strong> <code>CVS</code>.
The TortoiseCVS <code>Add</code> dialog shows the file types as it guesses them, and you can override the file type there.</p>
<p>For the existing situation, assuming that you don't want to keep the history of the Unicode files, you may try the following.</p>
<p>OK, first the warning:</p>
<p>THOU SHALT NOT EDIT THE CVS REPOSITORY BY HAND (unless thou art truly desperate)</p>
<p>Now for the instructions to break said rule, at your own risk.</p>
<ol>
<li>Make a backup of your CVSNT repository directory (simple ZIP file will do)</li>
<li>On the client, ensure that the Unicode files are actually deleted locally and in the repository.</li>
<li>In the CVSNT repository directory:
<ol start="3">
<li>Find the module directory where you originally added the files</li>
<li>Find the <code>Attic</code> directory</li>
<li>Delete the files, including their <code>,v</code> revision files.</li>
</ol></li>
<li>On the client:
<ol start="7">
<li>Verify that the files have ANSI encoding (plain ol' text files)</li>
<li><code>cvs add</code> the files again</li>
</ol></li>
</ol>
| 35,598
|
<p>I am developing an Intranet Application in ASP.NET that uses Windows Authentication. I have created an AD class that gets information from the Active Directory Domain Controller. This is what I use to get things like User Groups that the currently logged user is in. The problem is in the ASP.NET Roles that the user is in. Seems funny, but the list of AD Groups that the user is in is totally different from the list of Roles that the user has.</p>
<p>Here is what I get:</p>
<p>List of AD Groups for a certian user</p>
<ol>
<li>Developers</li>
<li>Account Operators</li>
<li>Domain Admins</li>
<li>IS_IUSRS</li>
<li>Administrators</li>
</ol>
<p>List of the same user's Roles</p>
<ol>
<li>PUDDOM\Domain Users</li>
<li>Everyone</li>
<li>BUILTIN\Users</li>
<li>BUILTIN\Administrators</li>
<li>NT AUTHORITY\INTERACTIVE</li>
<li>NT AUTHORITY\Authenticated Users</li>
<li>NT AUTHORITY\This Organization</li>
<li>LOCAL</li>
<li>PUDDOM\Domain Admins</li>
<li>PUDDOM\Denied RODC Password Replication Group</li>
<li>PUDDOM\DnsAdmins</li>
</ol>
<p>OTHER INFORMATION:</p>
<ol>
<li>I have my website use Integrated Windows Authentication from IIS7.</li>
<li>A portion of my <code>web.config</code> looks like </li>
</ol>
<p>this:</p>
<pre><code><authentication mode="Windows" />
<authorization>
<deny users="?" />
</authorization>
<roleManager defaultProvider="AspNetWindowsTokenRoleProvider" enabled="true">
<providers>
<remove name="AspNetSqlRoleProvider" />
</providers>
</roleManager>
</code></pre>
<p>How do I reconcile the list?</p>
<p>I need to make my Roles list reflect the same data as my AD Groups list because I will use Role trimming in my site map and other stuff. </p>
|
<p>Part of your result is because some of the groups the user belongs to are members of other groups. The full list of a user's groups will include all groups the user belongs to, both directly and via membership in other groups who are also members. </p>
<p>The list will also include built-in groups that are normally hidden from the native admin tools by default.</p>
<p>Your best bet is to have the application store a master list of the groups you want your application to see, or a list of groups you want your application to ignore (either an include list, or an exclude list). Then when you pull back a specific user's roles, just compare it to the configured exclude or include list and filter out the unwanted results. </p>
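The include/exclude filtering described above is language-agnostic; a minimal sketch (shown here in Python for brevity, with group names taken from the question) might be:

```python
# Roles as returned by the role provider (modeled on the question's list).
user_roles = [
    r"PUDDOM\Domain Users",
    "Everyone",
    r"BUILTIN\Administrators",
    r"PUDDOM\Domain Admins",
    r"NT AUTHORITY\Authenticated Users",
]

# Include-list: only the roles the application cares about survive.
include = {r"PUDDOM\Domain Admins", r"BUILTIN\Administrators"}
visible_roles = [g for g in user_roles if g in include]

# Exclude-list: alternatively, drop the built-in noise.
exclude = {"Everyone", r"NT AUTHORITY\Authenticated Users"}
trimmed_roles = [g for g in user_roles if g not in exclude]
```

Either list would live in configuration so that the site map trimming only ever sees the groups you intend it to.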
<p>If you want an easier way, there is an open source <a href="http://www.codeproject.com/KB/aspnet/active_directory_roles.aspx" rel="nofollow noreferrer">AD role provider over at codeproject</a> that already has support for both exclude and include lists. It also has other nice features like optional caching that will REALLY speed up your application. The build-in AD role provider performs very poorly. </p>
|
<p>You might need to assign the website permissions specifically to the groups you want to enumerate.</p>
| 33,099
|
<p>How to UDP Broadcast with C in Linux?</p>
|
<p>In many IP stack, such as Linux, this code does not work. Your socket must have broadcast permissions. Try this:</p>
<pre><code>#include <sys/socket.h>

int bcast_sock = socket(AF_INET, SOCK_DGRAM, 0);
int broadcastEnable = 1;
int ret = setsockopt(bcast_sock, SOL_SOCKET, SO_BROADCAST,
                     &broadcastEnable, sizeof(broadcastEnable));
/* Add other code, sockaddr, sendto() etc. */
</code></pre>
|
<p>Typically using the Berkeley sockets API, to <code>sendto()</code> one or more datagrams to a known <a href="http://en.wikipedia.org/wiki/Broadcast_address" rel="nofollow noreferrer">broadcast-class</a> IP address.</p>
| 43,784
|
<p>I would like to automate the process of setting up a new PC, this would include downloading and installing the latest windows and office updates; installing software from a network share (some software will require a restart so the script would need to be able to login and continue) adding PC to a domain and setting up local user accounts.</p>
<p>Is this possible and what would be the best scripting language to achieve this?</p>
|
<p>Check out <a href="http://www.nliteos.com/" rel="nofollow noreferrer">nLite</a>. Allows you to pre-configure many options, slipstream updates and service packs, etc.</p>
|
<p>The standard method in enterprise IT is the <a href="https://learn.microsoft.com/en-us/windows/deployment/deploy-windows-mdt/deploy-windows-10-with-the-microsoft-deployment-toolkit" rel="nofollow noreferrer">Microsoft Deployment Toolkit (MDT)</a>. Even if another OS deployment technique (SCCM, BigFix, SpecOps...) is used, the Windows images are often developed in MDT. </p>
<p>There is no better guide to getting started than <a href="https://deploymentresearch.com/Research" rel="nofollow noreferrer">Johan Arwidmark's</a> book series <a href="https://deploymentartist.com/Books" rel="nofollow noreferrer">"Deployment Fundamentals"</a>. There is also material at <a href="https://www.windows-noob.com/forums/forum/74-microsoft-deployment-toolkit-mdt/" rel="nofollow noreferrer">Windows Noob</a>.</p>
<p>You could integrate <a href="https://chocolatey.org/" rel="nofollow noreferrer">Chocolatey</a>, <a href="https://boxstarter.org/" rel="nofollow noreferrer">BoxStarter</a> or <a href="https://ninite.com/" rel="nofollow noreferrer">Ninite</a> for app management after the OS is deployed.</p>
| 12,850
|
<p>Sometimes I get Oracle connection problems because I can't figure out which tnsnames.ora file my database client is using.</p>
<p>What's the best way to figure this out? ++happy for various platform solutions. </p>
|
<p>Oracle provides a utility called <code>tnsping</code>:</p>
<pre><code>R:\>tnsping someconnection
TNS Ping Utility for 32-bit Windows: Version 9.0.1.3.1 - Production on 27-AUG-20
08 10:38:07
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
C:\Oracle92\network\ADMIN\sqlnet.ora
C:\Oracle92\network\ADMIN\tnsnames.ora
TNS-03505: Failed to resolve name
R:\>
R:\>tnsping entpr01
TNS Ping Utility for 32-bit Windows: Version 9.0.1.3.1 - Production on 27-AUG-20
08 10:39:22
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
C:\Oracle92\network\ADMIN\sqlnet.ora
C:\Oracle92\network\ADMIN\tnsnames.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (COMMUNITY = **)
(PROTOCOL = TCP) (Host = ****) (Port = 1521))) (CONNECT_DATA = (SID = ENTPR0
1)))
OK (40 msec)
R:\>
</code></pre>
<p>This should show what file you're using. The utility sits in the Oracle <code>bin</code> directory.</p>
|
<p>The easiest way is probably to check the <strong>PATH</strong> environment variable of the process that is connecting to the database. Most likely the tnsnames.ora file is in <em>the first Oracle bin directory on the path</em>\..\network\admin. The TNS_ADMIN environment variable, or a value in the registry (for the current Oracle home), may override this.</p>
<p>Using filemon like suggested by others will also do the trick.</p>
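The resolution order described above can be sketched from a Unix-style shell; the <code>/opt/oracle</code> fallback below is just an example path, not a guaranteed default:

```shell
# TNS_ADMIN (if set) takes precedence; otherwise the Oracle home's
# network/admin directory is searched.
tns_dir="${TNS_ADMIN:-${ORACLE_HOME:-/opt/oracle}/network/admin}"
echo "$tns_dir/tnsnames.ora"
```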
| 3,169
|
<p>I've been developing a Smalltalk variant for just the fun of it and I wonder what would be a fellow stackoverflowers choice when it comes to targeting a back-end. These were my current considerations:</p>
<p>.NET, JVM: These two VM's are mainly for statically typed languages and I assume it would be quite hard to target such a dynamic language like smalltalk.</p>
<p>Python (as source code): Seems to be the simplest way. Also, it would be better if I could emit Python bytecode, but it's not as well documented as other VMs AFAIK (need to dig into Python's source code for details!).</p>
<p>Self made interpreter: Out of the question as it's no fun :-)</p>
<p>LLVM, NekoVM, Parrot are other options I'm checking out. What would be your take on this?</p>
|
<p>Don't discount .NET or the JVM so quickly. Dynamic languages are being developed for both (e.g. Groovy, JRuby, Jython on the JVM; IronRuby, IronPython on .NET) and .NET is gaining the "DLR" - Dynamic Language Runtime. (See <a href="http://blogs.msdn.com/hugunin/" rel="nofollow noreferrer">Jim Hugunin's blog</a> for more details.)</p>
|
<p>.NET has the DLR now, which sits on top of the CLR, for dynamic languages.</p>
| 32,784
|
<p>I am working on a large C++ project in Visual Studio 2008, and there are a lot of files with unnecessary <code>#include</code> directives. Sometimes the <code>#include</code>s are just artifacts and everything will compile fine with them removed, and in other cases classes could be forward declared and the #include could be moved to the <code>.cpp</code> file. Are there any good tools for detecting both of these cases?</p>
|
<p>While it won't reveal unneeded include files, Visual studio has a setting <code>/showIncludes</code> (right click on a <code>.cpp</code> file, <code>Properties->C/C++->Advanced</code>) that will output a tree of all included files at compile time. This can help in identifying files that shouldn't need to be included.</p>
<p>You can also take a look at the pimpl idiom to let you get away with fewer header file dependencies to make it easier to see the cruft that you can remove.</p>
|
<p>Some of the existing answers state that it's hard. That's indeed true, because you need a full compiler to detect the cases in which a forward declaration would be appropriate. You can't parse C++ without knowing what the symbols mean; the grammar is simply too ambiguous for that. You must know whether a certain name names a class (could be forward-declared) or a variable (can't). Also, you need to be namespace-aware.</p>
| 9,997
|
<p>I am trying to write a JavaScript function that will return its first argument (a function) with all the rest of its arguments as preset parameters to that function.</p>
<p>So:</p>
<pre>function out(a, b) {
document.write(a + " " + b);
}
function setter(...) {...}
setter(out, "hello")("world");
setter(out, "hello", "world")();
</pre>
<p>Would output "hello world" twice, for some implementation of <code>setter</code>.</p>
<p>I ran into an issue with manipulating the arguments array on my first try, but it seems there would be a better way to do this.</p>
|
<p>First of all, you need a partial - <a href="https://stackoverflow.com/questions/218025/what-is-the-difference-between-currying-and-partial-application"><strong>there is a difference between a partial and a curry</strong></a> - and here is all you need, <em>without a framework</em>:</p>
<pre><code>function partial(func /*, 0..n args */) {
var args = Array.prototype.slice.call(arguments, 1);
return function() {
var allArguments = args.concat(Array.prototype.slice.call(arguments));
return func.apply(this, allArguments);
};
}
</code></pre>
<p>Now, using your example, you can do exactly what you are after:</p>
<pre><code>partial(out, "hello")("world");
partial(out, "hello", "world")();
// and here is my own extended example
var sayHelloTo = partial(out, "Hello");
sayHelloTo("World");
sayHelloTo("Alex");
</code></pre>
<p>The <code>partial()</code> function could be used to implement, but <em>is not</em> currying. Here is a quote from <a href="http://www.uncarved.com/blog/not_currying.mrk" rel="noreferrer"><strong>a blog post on the difference</strong></a>:</p>
<blockquote>
<p>Where partial application takes a function and from it builds a function which takes fewer arguments, currying builds functions which take multiple arguments by composition of functions which each take a single argument.</p>
</blockquote>
<p>Hope that helps.</p>
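For comparison, the same partial application can be written far more compactly with rest/spread parameters (ES2015+, which did not exist when this answer was written):

```javascript
// Modern sketch of partial(): collect preset args, prepend the rest later.
function partial(func, ...preset) {
  return (...later) => func(...preset, ...later);
}

const out = (a, b) => a + " " + b;

const sayHelloTo = partial(out, "Hello");
console.log(sayHelloTo("World"));              // Hello World
console.log(partial(out, "hello", "world")()); // hello world
```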
|
<p><strong>EDIT:</strong> See Jason Bunting's response. This answer actually shows a sub-par way of chaining numerous <code>out</code> calls, not a single <code>out</code> call with presets for some of the arguments. If this answer actually helps with a similar problem, be sure to make use of <code>apply</code> and <code>call</code> as Jason recommends, instead of the obscure way to use <code>eval</code> that I thought up.</p>
<p>Well... your out will actually write "undefined" a lot in this... but this should be close to what you want:</p>
<pre><code>function out(a, b) {
document.write(a + " " + b);
}
function getArgString( args, start ) {
var argStr = "";
for( var i = start; i < args.length; i++ ) {
if( argStr != "" ) {
argStr = argStr + ", ";
}
argStr = argStr + "arguments[" + i + "]"
}
return argStr;
}
function setter(func) {
var argStr = getArgString( arguments, 1 );
eval( "func( " + argStr + ");" );
var newSettter = function() {
var argStr = getArgString( arguments, 0 );
if( argStr == "" ) {
argStr = "func";
} else {
argStr = "func, " + argStr;
}
return eval( "setter( " + argStr + ");" );
}
return newSettter;
}
setter(out, "hello")("world");
setter(out, "hello", "world")();
</code></pre>
<p>I'd probably move the code in getArgString into the setter function itself though... a little bit safer since I used 'eval's.</p>
| 41,548
|
<p>By default, data returned by a <code>GROUP BY</code> clause is ordered ascending.
How do I change it to descending?</p>
|
<p>You can use derived tables in your SQL.
For example, if you want to pick up the most recent activity row per customer, you might first attempt:</p>
<pre><code>select *
from activities
group by id_customer
order by creation_date
</code></pre>
<p>but it doesn't work. Try instead:</p>
<pre><code>SELECT *
FROM ( select *
from activities
order by creation_date desc ) sorted_list
GROUP BY id_customer
</code></pre>
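Note that this trick relies on MySQL's nonstandard handling of <code>GROUP BY</code> with unaggregated columns, and is not guaranteed to work on other engines (or on recent MySQL with <code>ONLY_FULL_GROUP_BY</code> enabled). On any engine that supports window functions, a portable sketch would be:

```sql
-- Keep only the most recent activity per customer (MySQL 8+ / standard SQL).
SELECT *
FROM ( SELECT a.*,
              ROW_NUMBER() OVER (PARTITION BY id_customer
                                 ORDER BY creation_date DESC) AS rn
       FROM activities a ) ranked
WHERE rn = 1;
```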
|
<p><code>ORDER BY foo DESC</code>?</p>
| 11,690
|
<p>I find it much easier to write MXML classes with embedded Script than trying to make an actionscript file.</p>
<p>I was wondering however what kind of overhead there is of using an MXML file - in terms of file size. I'm not especially concerned about performance, but if that is relevant would be interested in any findings.</p>
<p>Obviously I'm talking about UI components that have layout. MXML is MUCH easier to visualize and modify, but I'm unclear exactly what it compiles down to. I'm hoping there is a negligible difference.</p>
|
<p>If you're going for the same functionality, MXML is not going to make your swf any bigger.</p>
<p>The thing that's affecting size is using the Flex SDK and its components. Whether you declare them with MXML or AS3, you're using them and their code is being built into the swf. By the same token, if you're referencing the Flex RSL, and thus avoiding building the Flex stuff directly into your swf, it will be the same size either way. Data Binding does create a lot of events and listeners, so that might cause some bloat, but not any more than if you declared the data binding mechanism with the AS3 utility functions. </p>
<p>Since MXML does generate intermediate AS3 code, it might be more verbose than you would care to write on your own, so you could see some additional size from that. To peek at it (which is good for understanding in general), you can build with the compiler directive that keeps the generated code. </p>
<blockquote>
<p>From: <a href="http://www.flashguru.co.uk/flex-2-compilation-hidden-goodies" rel="nofollow noreferrer">http://www.flashguru.co.uk/flex-2-compilation-hidden-goodies</a></p>
<ol>
<li>Right-click a Flex Project in the Navigator Panel.</li>
<li>Select Properties from the Context Menu.</li>
<li>Select Flex Compiler in the Properties Window.</li>
<li>Enter -keep-generated-actionscript into the ‘Additional compiler
arguments’ field.</li>
<li>Click ‘OK’ to apply the changes.</li>
<li>Build your Flex Project by clicking the Run button.</li>
<li>Right-click your Flex Project again in the Navigator Panel.</li>
<li>Choose Refresh from the Context-Menu.</li>
<li>A new folder should appear under your Flex Project in the Navigator
Panel, named ‘generated’</li>
</ol>
</blockquote>
<p>This is a good thing to do once you get into debugging and profiling your project, since you can really see where the compiler is doing the right (or wrong) thing.</p>
|
<p>Including the Flex framework (whether via MXML or pure Actionscript) will significantly increase the size of your final SWF.</p>
<p>I've just written a flash movie that consists of a single button. The MXML version was 175K while the no-Flex Actionscript version was 2K.</p>
<p>Though I'm not using it myself, this project seems promising for getting (some) Flex functionality with a lot less bloat:</p>
<p><a href="http://code.google.com/p/e4xu/" rel="nofollow noreferrer">http://code.google.com/p/e4xu/</a></p>
| 33,069
|
<p>What is the best method for adding options to a <code><select></code> from a JavaScript object using jQuery?</p>
<p>I'm looking for something that I don't need a plugin to do, but I would also be interested in the plugins that are out there.</p>
<p>This is what I did:</p>
<pre><code>selectValues = { "1": "test 1", "2": "test 2" };

for (key in selectValues) {
    if (typeof selectValues[key] === 'string') {
        $('#mySelect').append('<option value="' + key + '">' + selectValues[key] + '</option>');
    }
}
</code></pre>
<p><strong>A clean/simple solution:</strong></p>
<p>This is a cleaned up and simplified <a href="https://stackoverflow.com/questions/170986/what-is-the-best-way-to-add-options-to-a-select-from-an-array-with-jquery/171007#171007">version of matdumsa's</a>:</p>
<pre><code>$.each(selectValues, function(key, value) {
$('#mySelect')
.append($('<option>', { value : key })
.text(value));
});
</code></pre>
<p>Changes from matdumsa's: (1) removed the close tag for the option inside append() and (2) moved the properties/attributes into a map as the second parameter of append().</p>
|
<p>The same as other answers, in a jQuery fashion:</p>
<pre><code>$.each(selectValues, function(key, value) {
$('#mySelect')
.append($("<option></option>")
.attr("value", key)
.text(value));
});
</code></pre>
|
<p>I decided to chime in a bit.</p>
<ol>
<li>Deal with prior selected option; some browsers mess up when we append</li>
<li>ONLY hit DOM once with the append</li>
<li>Deal with <code>multiple</code> property while adding more options</li>
<li>Show how to use an object</li>
<li>Show how to map using an array of objects</li>
</ol>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>// objects as value/desc
let selectValues = {
"1": "test 1",
"2": "test 2",
"3": "test 3",
"4": "test Four"
};
//use div here as using "select" mucks up the original selected value in "mySelect"
let opts = $("<div />");
let opt = {};
$.each(selectValues, function(value, desc) {
opts.append($('<option />').prop("value", value).text(desc));
});
opts.find("option").appendTo('#mySelect');
// array of objects called "options" in an object
let selectValuesNew = {
options: [{
value: "1",
description: "2test 1"
},
{
value: "2",
description: "2test 2",
selected: true
},
{
value: "3",
description: "2test 3"
},
{
value: "4",
description: "2test Four"
}
]
};
//use div here as using "select" mucks up the original selected value
let opts2 = $("<div />");
let opt2 = {}; //only append after adding all options
$.map(selectValuesNew.options, function(val, index) {
opts2.append($('<option />')
.prop("value", val.value)
.prop("selected", val.selected)
.text(val.description));
});
opts2.find("option").appendTo('#mySelectNew');</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<select id="mySelect">
<option value="" selected="selected">empty</option>
</select>
<select id="mySelectNew" multiple="multiple">
<option value="" selected="selected">2empty</option>
</select></code></pre>
</div>
</div>
</p>
| 20,622
|
<p>How can I find poor performing SQL queries in Oracle?</p>
<p>Oracle maintains statistics on the shared SQL area, with one row per SQL string (<code>v$sqlarea</code>).
But how can we identify which of them are performing badly?</p>
|
<p>I found this SQL statement to be a useful place to start (sorry I can't attribute this to the original author; I found it somewhere on the internet):</p>
<pre><code>SELECT * FROM
(SELECT
sql_fulltext,
sql_id,
elapsed_time,
child_number,
disk_reads,
executions,
first_load_time,
last_load_time
FROM v$sql
ORDER BY elapsed_time DESC)
WHERE ROWNUM < 10
/
</code></pre>
<p>This finds the top SQL statements that are currently stored in the SQL cache ordered by elapsed time. Statements will disappear from the cache over time, so it might be no good trying to diagnose last night's batch job when you roll into work at midday.</p>
<p>You can also try ordering by disk_reads and executions. Executions is useful because some poor applications send the same SQL statement way too many times. This SQL assumes you use bind variables correctly.</p>
<p>Then, you can take the <code>sql_id</code> and <code>child_number</code> of a statement and feed them into this baby:-</p>
<pre><code>SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child));
</code></pre>
<p>This shows the actual plan from the SQL cache and the full text of the SQL.</p>
|
<p>While searching I found the following query, which does the job with one assumption (query execution time > 6 seconds):</p>
<pre><code>SELECT username, sql_text, sofar, totalwork, units
FROM v$sql, v$session_longops
WHERE sql_address = address AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;
</code></pre>
<p>I think the above query will list the details for the current user.</p>
<p>Comments are welcome!!</p>
| 40,909
|
<p>So I've switched the Trigorilla board in the printer with a SKR 1.3 with TMC2208 drivers and installed the latest Marlin 2.0, with a config based on <a href="https://www.thingiverse.com/thing:3741425/" rel="nofollow noreferrer">this one</a>. You can find the <a href="https://pastebin.com/ij1G5tSw" rel="nofollow noreferrer">Configuration.h here</a>, the only thing I changed in Configuration_adv.h was the pin of the hotend fan. </p>
<p>Now when let the printer autocalibrate the delta settings, it tells me that the height is 141.35 mm, instead of the actual ~300 mm and I had to set the radius to 78 mm, instead of the actual 115 mm so that it doesn't try to probe outside the bed.</p>
<p>What settings could I have set so horribly wrong that I get these results?</p>
|
<p>So the problem was that the TMC2208 were wired for UART mode, yet Marlin was configured for standalone, which apparently makes them work, but with completely wrong step sizes. Changing it in the configuration completely eliminated the problem</p>
|
<p>I would check the "steps per distance" setting. If the motors were moving more than the firmware thinks, the height would measure as shorter than actual (since the number of steps would be less than the firmware expected). Similarly, the radius would scale up.</p>
<p>You replaced the controller and motor drivers, so perhaps the micro-stepping is different.</p>
<p>If the result is inconsistently wrong, it could be a dynamics setting, such as acceleration or max velocity.</p>
| 1,476
|
<p>In C I could, for example, zero out bit #10 in a 32 bit unsigned value like so:</p>
<pre><code>unsigned long value = 0xdeadbeef;
value &= ~(1<<10);
</code></pre>
<p>How do I do that in Python ?</p>
|
<p>Bitwise operations on Python ints work much like in C. The <code>&</code>, <code>|</code> and <code>^</code> operators in Python work just like in C. The <code>~</code> operator works as for a signed integer in C; that is, <code>~x</code> computes <code>-x-1</code>.</p>
<p>You have to be somewhat careful with left shifts, since Python integers aren't fixed-width. Use bit masks to obtain the low order bits. For example, to do the equivalent of shift of a 32-bit integer do <code>(x << 5) & 0xffffffff</code>.</p>
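A short illustration of both points (the <code>~</code> behaviour and the width caveat):

```python
value = 0xdeadbeef

# Clearing bit 10 works exactly like the C snippet in the question.
value &= ~(1 << 10)
assert hex(value) == "0xdeadbaef"

# ~ acts like C's on a signed int: ~x == -x - 1.
assert ~0 == -1

# Python ints are unbounded, so shifts just grow the number;
# masking restores 32-bit wraparound semantics.
x = 0x80000000
assert x << 1 == 0x100000000
assert (x << 1) & 0xFFFFFFFF == 0
```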
|
<p>Have you tried copying and pasting your code into the Python REPL to see what will happen?</p>
<pre><code>>>> value = 0xdeadbeef
>>> value &= ~(1<<10)
>>> hex (value)
'0xdeadbaef'
</code></pre>
| 17,787
|
<p>What would I do if I want to have a generic method that only accepts types that have overloaded an operator, for instance the subtraction operator. I tried using an interface as a constraint but interfaces can't have operator overloading. </p>
<p>What is the best way to achieve this?</p>
|
<p>There is no immediate answer; operators are static, and cannot be expressed in constraints - and the existing primitives don't implement any specific interface (contrast with <code>IComparable<T></code>, which can be used to emulate greater-than / less-than).</p>
<p>However; if you just want it to work, then in .NET 3.5 there are some options...</p>
<p>I have put together a library <a href="http://www.yoda.arachsys.com/csharp/miscutil/usage/genericoperators.html" rel="noreferrer">here</a> that allows efficient and simple access to operators with generics - such as:</p>
<pre><code>T result = Operator.Add(first, second); // implicit <T>; here
</code></pre>
<p>It can be downloaded as part of <a href="http://www.yoda.arachsys.com/csharp/miscutil/" rel="noreferrer">MiscUtil</a></p>
<p>Additionally, in C# 4.0, this becomes possible via <code>dynamic</code>:</p>
<pre><code>static T Add<T>(T x, T y) {
dynamic dx = x, dy = y;
return dx + dy;
}
</code></pre>
<p>I also had (at one point) a .NET 2.0 version, but that is less tested. The other option is to create an interface such as </p>
<pre><code>interface ICalc<T>
{
    T Add(T a, T b);
    T Subtract(T a, T b);
}
</code></pre>
<p>etc., but then you need to pass an <code>ICalc<T></code> through all the methods, which gets messy.</p>
|
<p>There is a piece of code, stolen from the internet, that I use a lot for this. It looks up, or builds using <code>IL</code>, the basic arithmetic operators. It is all done within an <code>Operation<T></code> generic class, and all you have to do is assign the required operation to a delegate, like <code>add = Operation<double>.Add</code>.</p>
<p>It is used like this:</p>
<pre><code>public struct MyPoint
{
public readonly double x, y;
public MyPoint(double x, double y) { this.x=x; this.y=y; }
// User types must have defined operators
public static MyPoint operator+(MyPoint a, MyPoint b)
{
return new MyPoint(a.x+b.x, a.y+b.y);
}
}
class Program
{
// Sample generic method using Operation<T>
public static T DoubleIt<T>(T a)
{
Func<T, T, T> add=Operation<T>.Add;
return add(a, a);
}
// Example of using generic math
static void Main(string[] args)
{
var x=DoubleIt(1); //add integers, x=2
var y=DoubleIt(Math.PI); //add doubles, y=6.2831853071795862
MyPoint P=new MyPoint(x, y);
var Q=DoubleIt(P); //add user types, Q=(4.0,12.566370614359172)
var s=DoubleIt("ABC"); //concatenate strings, s="ABCABC"
}
}
</code></pre>
<p><code>Operation<T></code> Source code courtesy of paste bin: <a href="http://pastebin.com/nuqdeY8z" rel="nofollow">http://pastebin.com/nuqdeY8z</a></p>
<p>with attribution below:</p>
<pre><code>/* Copyright (C) 2007 The Trustees of Indiana University
*
* Use, modification and distribution is subject to the Boost Software
* License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
* http://www.boost.org/LICENSE_1_0.txt)
*
* Authors: Douglas Gregor
* Andrew Lumsdaine
*
* Url: http://www.osl.iu.edu/research/mpi.net/svn/
*
* This file provides the "Operations" class, which contains common
* reduction operations such as addition and multiplication for any
* type.
*
* This code was heavily influenced by Keith Farmer's
* Operator Overloading with Generics
* at http://www.codeproject.com/csharp/genericoperators.asp
*
* All MPI related code removed by ja72.
*/
</code></pre>
| 17,774
|
<p>How do you remove the jagged edges from a wide button in internet explorer? For example:</p>
<p><img src="https://i.stack.imgur.com/em5K0.gif" alt="alt text"></p>
|
<p>You can also eliminate Windows XP's styling of buttons (and every other version of Windows) by setting the <code>background-color</code> and/or <code>border-color</code> on your buttons.</p>
<p>Try the following styles:</p>
<pre><code>background-color: black;
color: white;
border-color: red green blue yellow;
</code></pre>
<p>You can of course make this much more pleasing to the eyes. But you get my point :)</p>
<p>Stack Overflow uses this approach.</p>
|
<p>Not too much you can do about it, but the good news is that it is fixed in IE8</p>
<p><a href="http://webbugtrack.blogspot.com/2007/08/bug-101-buttons-render-stretched-and.html" rel="nofollow noreferrer">http://webbugtrack.blogspot.com/2007/08/bug-101-buttons-render-stretched-and.html</a></p>
| 18,145
|
<p>Is there a difference between 3D printing and additive manufacturing if any then explain?</p>
|
<p>Yes and No at the same time:</p>
<h1>3D Printing is a subset of Additive Manufacturing</h1>
<h3>but treated as a synonym at this time</h3>
<p>3D printing is a process that takes some material, in a fluid state that fuses with the model to shape an object from it. The material could be plastics, ceramic paste or even metal. The fluid state could be the normal state, or just be present for the fusing process (think powder and resin based systems), or be a transitional phase (as in filament based systems).</p>
<p>Additive manufacturing is just a bit broader: at the moment most, if not all, AM processes are some sort of 3D printing. But AM could include other processes that don't fit 3D printing. For example, an automatic bricklaying machine could, under some view, be Additive Manufacturing, but it is not 3D printing in the traditional sense.</p>
<p>So: All 3D Printing is Additive Manufacturing, but not all Additive Manufacturing is necessarily 3D Printing.</p>
|
<h3>Origin</h3>
<p>3D printing and additive manufacturing (AM) both refer to a <a href="https://en.wikipedia.org/wiki/3D_printing_processes" rel="nofollow noreferrer">range of processes</a> where, opposed to subtractive manufacturing methodologies, materials are joined to create products. E.g. <a href="https://en.wikipedia.org/wiki/Fused_filament_fabrication" rel="nofollow noreferrer">FFF</a>, <a href="https://en.wikipedia.org/wiki/Selective_laser_sintering" rel="nofollow noreferrer">SLS</a>, etc.</p>
<p>From <a href="https://wohlersassociates.com/additive-manufacturing.html" rel="nofollow noreferrer">this reference</a> you see a reference to 3D printing:</p>
<blockquote>
<p>Additive manufacturing is the official industry standard term (ASTM
F2792) for all applications of the technology. It is defined as the
process of joining materials to make objects from 3D model data,
usually layer upon layer, as opposed to subtractive manufacturing
methodologies.</p>
</blockquote>
<p>From e.g. <a href="https://www.tth.com/difference-3d-printing-additive-manufacturing-rapid-prototyping/" rel="nofollow noreferrer">this reference</a> one reads that there is no difference:</p>
<blockquote>
<p>Between the terms 3D printing and additive manufacturing, there is no
difference. 3D printing and additive manufacturing are synonyms for
the same process.</p>
</blockquote>
<h3>Usage now</h3>
<p>However, as the AM processes and applications grew in time, 3D printing has become a subset of AM. As worded by <a href="https://www.additivemanufacturing.media/columns/additive-manufacturing-and-3d-printing-are-two-different-things" rel="nofollow noreferrer">Peter Zelinski</a> in August 2017:</p>
<blockquote>
<p>To be sure, the terms overlap. They can be used in ways that make them
sound like synonyms. But the relationship between them and the
difference between them is this: <strong>3D printing is the operation at the
heart of additive manufacturing</strong>, just as “turning” or “molding” might
be the operation at the heart of a conventional manufacturing process.
In short, <strong>additive manufacturing requires and includes 3D printing,
but it also entails more than 3D printing, and it refers to something
more rigorous</strong>.</p>
</blockquote>
| 1,095
|
<p>How does one <strong>reliably</strong> determine a file's type? File extension analysis is not acceptable. There must be a rubyesque tool similar to the UNIX file(1) command?</p>
<p>This is regarding MIME or content type, not file system classifications, such as directory, file, or socket.</p>
|
<p>There is a ruby binding to <code>libmagic</code> that does what you need. It is available as a gem named <a href="https://rubygems.org/gems/ruby-filemagic/versions/0.7.1" rel="noreferrer">ruby-filemagic</a>:</p>
<pre><code>gem install ruby-filemagic
</code></pre>
<p>It requires the <code>libmagic-dev</code> package.</p>
<p>The documentation seems a little thin, but this should get you started:</p>
<pre><code>$ irb
irb(main):001:0> require 'filemagic'
=> true
irb(main):002:0> fm = FileMagic.new
=> #<FileMagic:0x7fd4afb0>
irb(main):003:0> fm.file('foo.zip')
=> "Zip archive data, at least v2.0 to extract"
irb(main):004:0>
</code></pre>
|
<p>You could give a go with <a href="http://mime-types.rubyforge.org/" rel="nofollow noreferrer">MIME::Types for Ruby</a>.</p>
<blockquote>
<p>This library allows for the identification of a file’s likely MIME content type. The identification of MIME content type is based on a file’s filename extensions.</p>
</blockquote>
| 7,469
|
<p>I am a bit confused about socket programming in C.</p>
<p>You create a socket, bind it to an interface and an IP address and get it to listen. I found a couple of web resources on that, and understood it fine. In particular, I found an article <s><a href="http://users.actcom.co.il/~choo/lupg/tutorials/internetworking/internet-theory.html" rel="nofollow noreferrer">Network programming under Unix systems</a></s> to be very informative.</p>
<p>What confuses me is the timing of data arriving on the socket.</p>
<p>How can you tell when packets arrive, and how big the packet is, do you have to do all the heavy lifting yourself? </p>
<p>My basic assumption here is that packets can be of variable length, so once binary data starts appearing down the socket, how do you begin to construct packets from that?</p>
|
<p>Short answer is that you have to do all the heavy lifting yourself. You can be notified that there is data available to be read, but you won't know how many bytes are available. In most IP protocols that use variable length packets, there will be a header with a known fixed length prepended to the packet. This header will contain the length of the packet. You read the header, get the length of the packet, then read the packet. You repeat this pattern (read header, then read packet) until communication is complete.</p>
<p>When reading data from a socket, you request a certain number of bytes. The read call may block until the requested number of bytes are read, but it can return fewer bytes than what was requested. When this happens, you simply retry the read, requesting the remaining bytes.</p>
<p>Here's a typical C function for reading a set number of bytes from a socket:</p>
<pre><code>/* buffer points to memory block that is bigger than the number of bytes to be read */
/* socket is open socket that is connected to a sender */
/* bytesToRead is the number of bytes expected from the sender */
/* bytesRead is a pointer to a integer variable that will hold the number of bytes */
/* actually received from the sender. */
/* The function returns either the number of bytes read, */
/* 0 if the socket was closed by the sender, and */
/* -1 if an error occurred while reading from the socket */
int readBytes(int socket, char *buffer, int bytesToRead, int *bytesRead)
{
*bytesRead = 0;
while(*bytesRead < bytesToRead)
{
int ret = read(socket, buffer + *bytesRead, bytesToRead - *bytesRead);
if(ret <= 0)
{
/* either connection was closed or an error occurred */
return ret;
}
else
{
*bytesRead += ret;
}
}
return *bytesRead;
}
</code></pre>
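<p>The same read-exactly-N-bytes pattern can be sketched in Python, where <code>recv</code> may likewise return fewer bytes than requested; the 4-byte big-endian length header is an assumption for illustration, not part of any particular protocol:</p>

```python
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from sock, retrying short reads."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            raise ConnectionError("socket closed before %d bytes arrived" % n)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

def recv_message(sock):
    """Read one length-prefixed message: a 4-byte big-endian header, then the payload."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo with a connected socket pair (no network needed)
a, b = socket.socketpair()
payload = b"hello, sockets"
a.sendall(struct.pack("!I", len(payload)) + payload)
print(recv_message(b))  # b'hello, sockets'
a.close(); b.close()
```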
|
<p>Sockets operate at a higher level than raw packets - it's like a file you can read/write from. Also, when you try to read from a socket, the operating system will block (put on hold) your process until it has at least some data to fulfill the request.</p>
| 7,132
|
<p>What is the difference between <strong><em>anonymous methods</em></strong> of C# 2.0 and <strong><em>lambda expressions</em></strong> of C# 3.0.?</p>
|
<p><a href="http://msdn.microsoft.com/en-us/library/0yw3tz5k.aspx" rel="noreferrer">The MSDN page on anonymous methods explains it</a></p>
<blockquote>
<p>In versions of C# before 2.0, the only
way to declare a delegate was to use
named methods. C# 2.0 introduced
anonymous methods and in C# 3.0 and
later, lambda expressions supersede
anonymous methods as the preferred way
to write inline code. However, the
information about anonymous methods in
this topic also applies to lambda
expressions. There is one case in
which an anonymous method provides
functionality not found in lambda
expressions. Anonymous methods enable
you to omit the parameter list, and
this means that an anonymous method
can be converted to delegates with a
variety of signatures. This is not
possible with lambda expressions. For
more information specifically about
lambda expressions, see Lambda
Expressions (C# Programming Guide).</p>
</blockquote>
<p><a href="http://msdn.microsoft.com/en-us/library/bb397687.aspx" rel="noreferrer">And regarding lambda expressions</a>:</p>
<blockquote>
<p>A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types.
All lambda expressions use the lambda operator =>, which is read as "goes to". The left side of the lambda operator specifies the input parameters (if any) and the right side holds the expression or statement block. The lambda expression x => x * x is read "x goes to x times x." This expression can be assigned to a delegate type as follows: </p>
</blockquote>
|
<p>First, convenience: lambdas are easier to read and write.</p>
<p>Second, expressions: lambdas can be compiled to <em>either</em> a delegate, <em>or</em> an expression tree (<code>Expression<T></code> for some delegate-type T, such as <code>Func<int,bool></code>). Expression trees are the more exciting of the two, as they are the key to LINQ to out-of-process data stores.</p>
<pre><code>Func<int,bool> isEven = i => i % 2 == 0;
Expression<Func<int,bool>> isEven = i => i % 2 == 0;
</code></pre>
<p>Note that lambda expressions with statement bodies can only be compiled to delegates, not <code>Expression</code>s:</p>
<pre><code>Action a = () => { Console.WriteLine(obj.ToString()); };
</code></pre>
| 25,563
|
<p>I am trying to generate a report by querying 2 databases (Sybase) in classic ASP.</p>
<p>I have created 2 connection strings:<br></p>
<blockquote>
<p>connA for databaseA<br>
connB for databaseB</p>
</blockquote>
<p>Both databases are present on the same server (don't know if this matters)<br></p>
<p>Queries:</p>
<p><code>q1 = SELECT column1 INTO #temp FROM databaseA..table1 WHERE xyz="A"</code></p>
<p><code>q2 = SELECT columnA,columnB,...,columnZ FROM table2 a, #temp b WHERE b.column1=a.columnB</code></p>
<p>followed by:</p>
<pre><code>response.Write(rstsql)
set rstSQL = CreateObject("ADODB.Recordset")
rstSQL.Open q1, connA
rstSQL.Open q2, connB
</code></pre>
<p>When I try to open up this page in a browser, I get error message:</p>
<blockquote>
<p>Microsoft OLE DB Provider for ODBC Drivers error '80040e37'</p>
<p>[DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]#temp not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).</p>
</blockquote>
<p>Could anyone please help me understand what the problem is and help me fix it?</p>
<p>Thanks.</p>
|
<p>With both queries, it looks like you are trying to insert into #temp. #temp is located on one of the databases (for arguments sake, databaseA). So when you try to insert into #temp from databaseB, it reports that it does not exist.</p>
<p>Try changing it from <em>Into <strong>#temp</strong> From</em> to <em>Into <strong>databaseA.dbo.#temp</strong> From</em> in both statements. </p>
<p>Also, make sure that the connection strings have permissions on the other DB, otherwise this will not work.</p>
<p>Update: relating to the temp table going out of scope - if you have one connection string that has permissions on both databases, then you could use this for both queries (while keeping the connection alive). While querying the table in the other DB, be sure to use [DBName].[Owner].[TableName] format when referring to the table.</p>
|
<p>Your temp table is out of scope; it is only 'alive' during the first connection and will not be available in the second connection. Just move all of it into one block of code and execute it inside one connection.</p>
| 3,716
|
<p>Just wondering how much people log within their applications???</p>
<p>I have seen this:</p>
<blockquote>
<p>"I typically like to use the ERROR log
level to log any exceptions that are
caught by the application. I will use
the INFO log level as a "first level"
debugging scheme to show whenever I
enter or exit a method. From there I
use the DEBUG log level to trace
detailed information. The FATAL log
level is used for any exceptions that
I have failed to catch in my web based
applications."</p>
</blockquote>
<p>Which had this code sample with it:</p>
<pre><code>Public Class LogSample
    Private Shared ReadOnly Log As log4net.ILog = log4net.LogManager.GetLogger(GetType(LogSample))

    Public Function AddNumbers(ByVal Number1 As Integer, ByVal Number2 As Integer) As Integer
        Dim intResults As Integer

        Log.Info("Starting AddNumbers Method...")
        Log.Debug("Number1 Specified: " & Number1)
        Log.Debug("Number2 Specified: " & Number2)

        Try
            intResults = Number1 + Number2
        Catch ex As Exception
            Log.Error("Error Adding Numbers.", ex)
        End Try

        Log.Info("AddNumbers Method Complete.")

        Return intResults
    End Function
End Class
</code></pre>
<p>But this just seems to add so much to the method. For instance, a method that would normally be maybe 7 lines of code suddenly becomes 12 lines. The method also loses some of its clarity and simplicity.</p>
<p>But in saying that, the benefit of having the logging in place can be real: for instance, performance monitoring in a production system, or chasing down aberrant bugs in production (not that you would have all this logging turned on all the time).</p>
<p>Hence I am wondering what people do?
Cheers
Anthony </p>
|
<p>You are right that this does make the code more difficult to read and maintain. One recommendation is to consider looking into an AOP (Aspect oriented Programming) tool to separate your logging logic from your application logic. Castle Windsor and Spring are two that come to mind within the .Net community that you may want to research.</p>
|
<p>From a security standpoint logging can be an interesting topic. I wrote a <a href="http://blogs.csoonline.com/when_ddos_attacks_become_personal" rel="nofollow noreferrer">blog entry</a> on CSO Online a while back in the wake of a couple of DDOS attacks. This is the section where I talked about logging, hope it helps a bit:</p>
<blockquote>
<p>Techniques such as log throttling,
write only logs, and using log servers
can strengthen the retroactive
security of a system. After a possible
DDoS attack has occurred the company
will no doubt want to investigate the
attack. An investigation is only
possible if the correct level of
logging has been used. Too much and
the logs will quickly become filled,
which could be the reason for the DoS
in the first place. Too little and the
logs will be worthless because they
don’t contain enough information to
catch the criminal.</p>
</blockquote>
| 28,708
|
<p>Here's another C#/.NET question based merely on curiousity more than an immediate <em>need</em> ...</p>
<p>If you had a <code>Socket</code> instance and you wanted to wrap it in the higher-level <code>TcpClient</code> class, is that possible and how would you do it?</p>
<p>Conversely if you have an instance of <code>TcpClient</code>, is it possible to get the underlying <code>Socket</code>?</p>
|
<blockquote>
<p>If you had a Socket instance and you
wanted to wrap it in the higher-level
TcpClient class, is that possible and
how would you do it?</p>
</blockquote>
<pre><code>Socket socket = ...;
TcpClient client = new TcpClient();
client.Client = socket;
</code></pre>
<blockquote>
<p>Conversely if you have an instance of
TcpClient, is it possible to get the
underlying Socket?</p>
</blockquote>
<p>Get the underlying Socket using <a href="http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.client.aspx" rel="noreferrer">TcpClient.Client</a> property.</p>
|
<p>From TcpClient to Socket is very easy. <code>tcpClientInstance.Client</code> is the underlying Socket instance.</p>
| 24,848
|
<p>I need to read selected files, matching on the file name, from a remote zip archive using Python. I don't want to save the full zip to a temporary file (it's not that large, so I can handle everything in memory).</p>
<p>I've already written the code and it works, and I'm answering this myself so I can search for it later. But since evidence suggests that I'm one of the dumber participants on Stackoverflow, I'm sure there's room for improvement.</p>
|
<p>Here's how I did it (grabbing all files ending in ".ranks"):</p>
<pre><code>import urllib2, cStringIO, zipfile

try:
    remotezip = urllib2.urlopen(url)
    zipinmemory = cStringIO.StringIO(remotezip.read())
    zfile = zipfile.ZipFile(zipinmemory)  # don't shadow the builtin zip()
    for fn in zfile.namelist():
        if fn.endswith(".ranks"):
            ranks_data = zfile.read(fn)
            for line in ranks_data.split("\n"):
                # do something with each line
                pass
except urllib2.HTTPError:
    # handle exception
    pass
</code></pre>
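<p>For reference, a Python 3 version of the same idea uses <code>io.BytesIO</code> in place of <code>cStringIO</code> (a sketch; the helper name is mine):</p>

```python
import io
import zipfile

def ranks_members(zip_bytes):
    """Yield (name, data) for every *.ranks member of an in-memory zip archive."""
    archive = zipfile.ZipFile(io.BytesIO(zip_bytes))
    for name in archive.namelist():
        if name.endswith(".ranks"):
            yield name, archive.read(name)

# Fetching the archive is then one call (url is a placeholder):
#   import urllib.request
#   with urllib.request.urlopen(url) as resp:
#       for name, data in ranks_members(resp.read()):
#           ...
```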
|
<p>Bear in mind that merely decompressing a ZIP file may result in <a href="http://en.wikipedia.org/wiki/Decompression_bomb" rel="nofollow noreferrer">a security vulnerability</a>.</p>
| 12,010
|
<p>I have the following configuration, but I have not able to find any documentation on how to set a maximum backup files on date rolling style. I know that you can do this with size rolling style by using the maxSizeRollBackups.</p>
<pre><code><appender name="AppLogFileAppender" type="log4net.Appender.RollingFileAppender">
<file value="mylog.log" />
<appendToFile value="true" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<rollingStyle value="Date" />
<datePattern value=".yyMMdd.'log'" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%d %-5p %c - %m%n" />
</layout>
</appender>
</code></pre>
|
<p>You can't.</p>
<p>from
<a href="https://logging.apache.org/log4net/release/sdk/html/T_log4net_Appender_RollingFileAppender.htm" rel="noreferrer">log4net SDK Reference<br />
RollingFileAppender Class
</a></p>
<blockquote>
<p><strong>CAUTION</strong></p>
<p>A maximum number of backup files when rolling on date/time boundaries is not supported.</p>
</blockquote>
|
<p>It's fairly easy to inherit from a log4net appender and add say your own override method which performs the clean up of files. I overrode OpenFile to do this. Here's an example of a custom log4net appender to get you started: <a href="https://stackoverflow.com/a/2385874/74585">https://stackoverflow.com/a/2385874/74585</a></p>
| 12,089
|