| instruction (string, 27–22.5k chars) | chosen (string, 27–28.2k chars) | rejected (string, 19–24.9k chars) | __index_level_0__ (int64, 0–10k) |
|---|---|---|---|
<p>In SQL Server how do you query a database to bring back all the tables that have a field of a specific name?</p>
|
<p>The following query will bring back a unique list of tables where <code>Column_Name</code> is equal to the column you are looking for:</p>
<pre><code>SELECT Table_Name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE Column_Name = 'Desired_Column_Name'
GROUP BY Table_Name
</code></pre>
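<p>The same catalog lookup can be cross-checked from application code. A minimal Python sketch follows; SQLite is used here as a stand-in (it has no <code>INFORMATION_SCHEMA</code>, so <code>PRAGMA table_info</code> is queried per table), and the table and column names are made up for the illustration:</p>

```python
import sqlite3

def tables_with_column(conn, column_name):
    """Return the names of all tables that contain the given column."""
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    matches = []
    for table in tables:
        # PRAGMA table_info lists one row per column; row[1] is the column name
        cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
        if column_name in cols:
            matches.append(table)
    return matches

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, city TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.execute("CREATE TABLE stores (id INTEGER, city TEXT)")

print(tables_with_column(conn, "city"))  # ['customers', 'stores']
```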
|
<pre><code>SELECT Table_Name
FROM Information_Schema.Columns
WHERE Column_Name = 'YourFieldName'
</code></pre>
| 2,565
|
<p>I know Hostmonster allows Python. Has anyone successfully run Django on there? Any problems? </p>
|
<p>I know this is an old question, but here is the answer: <strong>yes, Hostmonster does support Django</strong></p>
<p>There used to be <a href="http://helpdesk.hostmonster.com/index.php/kb/article/000531" rel="nofollow noreferrer">this Hostmonster KB article</a> with details, including instructions on how to set it up, but that link appears to be dead these days, and the only Hostmonster article about Django now is about <a href="https://my.hostmonster.com/cgi/help/585" rel="nofollow noreferrer">troubleshooting</a>. You can find instructions on how to set up Django on Hostmonster <a href="http://www.calebmadrigal.com/django-on-hostmonster/" rel="nofollow noreferrer">elsewhere</a>.</p>
|
<p>Well, if it supports Python, you might be able to run Django using a FastCGI setup, as described here: <a href="https://docs.djangoproject.com/en/dev/howto/deployment/fastcgi/" rel="nofollow noreferrer">FastCGI Docs</a>. If it supports mod_python (which I guess is what you mean), then sure, you can install it using the steps listed here: <a href="https://docs.djangoproject.com/en/1.1/howto/deployment/modpython/" rel="nofollow noreferrer">ModPython docs</a></p>
| 4,163
|
<p>How do I sort a list of dictionaries by a specific key's value? Given:</p>
<pre><code>[{'name': 'Homer', 'age': 39}, {'name': 'Bart', 'age': 10}]
</code></pre>
<p>When sorted by <code>name</code>, it should become:</p>
<pre><code>[{'name': 'Bart', 'age': 10}, {'name': 'Homer', 'age': 39}]
</code></pre>
|
<p>The <a href="https://docs.python.org/library/functions.html#sorted" rel="noreferrer"><code>sorted()</code></a> function takes a <code>key=</code> parameter:</p>
<pre><code>newlist = sorted(list_to_be_sorted, key=lambda d: d['name'])
</code></pre>
<p>Alternatively, you can use <a href="https://docs.python.org/library/operator.html#operator.itemgetter" rel="noreferrer"><code>operator.itemgetter</code></a> instead of defining the function yourself:</p>
<pre><code>from operator import itemgetter
newlist = sorted(list_to_be_sorted, key=itemgetter('name'))
</code></pre>
<p>For completeness, add <code>reverse=True</code> to sort in descending order:</p>
<pre><code>newlist = sorted(list_to_be_sorted, key=itemgetter('name'), reverse=True)
</code></pre>
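<p>Putting this together with the data from the question gives a complete, runnable sketch:</p>

```python
from operator import itemgetter

people = [{'name': 'Homer', 'age': 39}, {'name': 'Bart', 'age': 10}]

# ascending sort on the 'name' key
by_name = sorted(people, key=itemgetter('name'))
print(by_name)
# [{'name': 'Bart', 'age': 10}, {'name': 'Homer', 'age': 39}]

# descending sort on the 'age' key
by_age_desc = sorted(people, key=itemgetter('age'), reverse=True)
print([p['name'] for p in by_age_desc])
# ['Homer', 'Bart']
```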
|
<p>Sorting by multiple columns, some of them in descending order: the <code>cmps</code> array is global to the <code>cmp</code> function, containing field names and <code>inv == -1</code> for descending, <code>1</code> for ascending (Python 2):</p>
<pre><code>def cmpfun(a, b):
    for (name, inv) in cmps:
        res = cmp(a[name], b[name])
        if res != 0:
            return res * inv
    return 0

data = [
    dict(name='alice', age=10),
    dict(name='baruch', age=9),
    dict(name='alice', age=11),
]

all_cmps = [
    [('name', 1), ('age', -1)],
    [('name', 1), ('age', 1)],
    [('name', -1), ('age', 1)],
]

print 'data:', data
for cmps in all_cmps:
    print 'sort:', cmps
    print sorted(data, cmpfun)
</code></pre>
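<p>For reference, the code above is Python 2 (<code>cmp</code> and the <code>print</code> statement were removed in Python 3). A Python 3 sketch of the same multi-column, mixed-direction sort can use <code>functools.cmp_to_key</code>:</p>

```python
from functools import cmp_to_key

def make_comparator(cmps):
    """Build a comparison function from (field, direction) pairs,
    where direction is 1 for ascending and -1 for descending."""
    def compare(a, b):
        for name, inv in cmps:
            if a[name] != b[name]:
                return inv if a[name] > b[name] else -inv
        return 0
    return compare

data = [
    dict(name='alice', age=10),
    dict(name='baruch', age=9),
    dict(name='alice', age=11),
]

# name ascending, age descending
ordered = sorted(data, key=cmp_to_key(make_comparator([('name', 1), ('age', -1)])))
print(ordered)
```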
| 9,850
|
<p>Almost 5 years ago Joel Spolsky wrote this article, <a href="http://www.joelonsoftware.com/articles/Unicode.html" rel="nofollow noreferrer">"The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)"</a>.</p>
<p>Like many, I read it carefully, realizing it was high-time I got to grips with this "replacement for ASCII". Unfortunately, 5 years later I feel I have slipped back into a few bad habits in this area. Have you?</p>
<p>I don't write many specifically international applications, however I have helped build many ASP.NET internet facing websites, so I guess that's not an excuse. </p>
<p>So for my benefit (and I believe many others) can I get some input from people on the following:</p>
<ul>
<li>How to "get over" ASCII once and for all</li>
<li>Fundamental guidance when working with Unicode.</li>
<li>Recommended (recent) books and websites on Unicode (for developers). </li>
<li>Current state of Unicode (5 years after Joels' article) </li>
<li>Future directions.</li>
</ul>
<p>I must admit I have a .NET background and so would also be happy for information on Unicode in the .NET framework. Of course this shouldn't stop anyone with a differing background from commenting though.</p>
<p>Update: See <a href="https://stackoverflow.com/questions/898/internationalization-in-your-projects">this related question</a> also asked on StackOverflow previously.</p>
|
<p>Since I read the Joel article and some other I18N articles, I have always kept a close eye on my character encoding, and it actually works if you do it consistently. If you work in a company where it is standard to use UTF-8 and everybody knows and does this, it will work.</p>
<p>Here are some interesting articles (besides Joel's) on the subject:</p>
<ul>
<li><a href="http://www.tbray.org/ongoing/When/200x/2003/04/06/Unicode" rel="noreferrer">http://www.tbray.org/ongoing/When/200x/2003/04/06/Unicode</a></li>
<li><a href="http://www.tbray.org/ongoing/When/200x/2003/04/26/UTF" rel="noreferrer">http://www.tbray.org/ongoing/When/200x/2003/04/26/UTF</a></li>
</ul>
<p>A quote from the first article, with tips for using Unicode:</p>
<ul>
<li>Embrace Unicode, don't fight it; it's probably the right thing to do, and if it weren't you'd probably have to anyhow.</li>
<li>Inside your software, store text as UTF-8 or UTF-16; that is to say, pick one of the two and stick with it.</li>
<li>Interchange data with the outside world using XML whenever possible; this makes a whole bunch of potential problems go away.</li>
<li>Try to make your application browser-based rather than write your own client; the browsers are getting really quite good at dealing with the texts of the world.</li>
<li>If you're using someone else's library code (and of course you are), assume its Unicode handling is broken until proved to be correct.</li>
<li>If you're doing search, try to hand the linguistic and character-handling problems off to someone who understands them.</li>
<li>Go off to Amazon or somewhere and buy the latest revision of the printed Unicode standard; it contains pretty well everything you need to know.</li>
<li>Spend some time poking around the Unicode web site and learning how the code charts work.</li>
<li>If you're going to have to do any serious work with Asian languages, go buy the O'Reilly book on the subject by Ken Lunde.</li>
<li>If you have a Macintosh, run out and grab Lord Pixel's Unicode Font Inspection tool. Totally cool.</li>
<li>If you're really going to have to get down and dirty with the data, go attend one of the twice-a-year Unicode conferences. All the experts go and if you don't know what you need to know, you'll be able to find someone there who knows.</li>
</ul>
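<p>The "pick one encoding and stick with it" tip boils down to being explicit at every boundary. A minimal Python 3 sketch (where <code>str</code> is already Unicode):</p>

```python
# Inside the program, text is Unicode (str in Python 3); bytes exist only at the edges.
text = "naïve café"

# Encode explicitly when writing to the outside world...
payload = text.encode("utf-8")

# ...and decode explicitly with the same codec when reading back in.
assert payload.decode("utf-8") == text

# Decoding with the wrong codec is where mojibake comes from:
print(payload.decode("latin-1"))  # 'naÃ¯ve cafÃ©'
```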
|
<p>I spent a while working with search engine software. You wouldn't believe how many web sites serve up content with HTTP headers or meta tags which lie about the encoding of the pages. Often, you'll even get a document which contains both ISO-8859 characters and UTF-8 characters.</p>
<p>Once you've battled through a few of those sorts of issues, you start taking the proper character encoding of data you produce really seriously.</p>
| 8,357
|
<p>Does any know a good rule of thumb for the appropriate pagefile size for a Windows 2003 server running SQL Server?</p>
|
<p>Regardless of the amount of RAM, you still need a pagefile at least 1.5 times the amount of physical RAM. This is true even if you have a 1 TB RAM machine: you'll need a 1.5 TB pagefile on disk (sounds crazy, but is true).</p>
<p>When a process asks MEM_COMMIT memory via VirtualAlloc/VirtualAllocEx, the requested size needs to be reserved in the pagefile. This was true in the first Win NT system, and is still true today see <a href="http://msdn.microsoft.com/en-us/library/ms810627.aspx" rel="noreferrer">Managing Virtual Memory in Win32</a>:</p>
<blockquote>
<p>When memory is committed, physical
pages of memory are allocated <strong>and
space is reserved in a pagefile</strong>.</p>
</blockquote>
<p>Barring some extremely odd cases, SQL Server will always ask for MEM_COMMIT pages. And given the fact that SQL uses a <a href="http://msdn.microsoft.com/en-us/library/ms178145.aspx" rel="noreferrer">Dynamic Memory Management</a> policy that reserves upfront as much buffer pool as possible (reserves and <em>commits</em> in terms of VAS), SQL Server will request at start up a huge reservation of space in the pagefile. If the pagefile is not properly sized, errors 801/802 will start showing up in SQL Server's ERRORLOG file and operations will start to fail.</p>
<p>This always causes some confusion, as administrators erroneously assume that a large amount of RAM eliminates the need for a pagefile. In truth the contrary happens: a large amount of RAM increases the need for a pagefile, just because of the inner workings of the Windows NT memory manager. The reserved pagefile is, hopefully, never used.</p>
|
<p>If you're looking for high performance, you are going to want to avoid paging completely, so the page file size becomes less significant. Invest in as much RAM as feasible for the DB server.</p>
| 2,472
|
<p>I have never worked with web services and rails, and obviously this is something I need to learn.
I have chosen to use hpricot because it looks great.
Anyway, _why's been nice enough to provide the following example on the <a href="http://code.whytheluckystiff.net/hpricot/" rel="nofollow noreferrer">hpricot website</a>:</p>
<pre><code>#!ruby
require 'hpricot'
require 'open-uri'
# load the RedHanded home page
doc = Hpricot(open("http://redhanded.hobix.com/index.html"))
# change the CSS class on links
(doc/"span.entryPermalink").set("class", "newLinks")
# remove the sidebar
(doc/"#sidebar").remove
# print the altered HTML
puts doc
</code></pre>
<p>Which looks simple, elegant, and easy peasy.
It works great in Ruby, but my question is: how do I break this up in Rails?</p>
<p>I experimented with adding this all to a single controller, but couldn't think of the best way to call it in a view.</p>
<p>So if you were parsing an XML file from a web API and printing it in nice clean HTML with Hpricot, how would you break up the activity over the models, views, and controllers, and what would you put where?</p>
|
<p>Model, model, model, model, model. Skinny controllers, simple views.</p>
<p>Have a RedHandedHomePage model do the parsing on initialization, then call a render method on it from the controller, set the output to an instance variable, and print that in a view.</p>
|
<p>I'd probably go for a REST approach and have resources that represent the different entities within the XML file being consumed. Do you have a specific example of the XML that you can give?</p>
| 4,845
|
<p>Since the first layer is so important, I am looking for an easy way to generate the G-code to print just the first layer.
I see that with Slic3r you can cut the model at a given Z.
But for test purposes I would prefer to just select a number of layers to be generated, so I can easily generate different "first layer(s)" tests with different first-layer settings (width, height, speed, flow...).
The only way I have achieved this is by editing the G-code.
Any help?
Thanks</p>
|
<p>I understand your question like this:</p>
<blockquote>
<p>I know I could cut the mesh and just slice the bottom of my model, but since I am interested in a given <strong>number of layers</strong> and the height of a layer may change according to settings (e.g.: 0.2mm, 0.1mm, 0.05mm...), I want to find a way to generate an arbitrary number of layers from the full model. I use slic3r.</p>
</blockquote>
<p>If my understanding is correct, then you can achieve what you want with a few steps.</p>
<p><strong>Use verbose GCODE</strong></p>
<p>The setting is under "Print settings → Output Options". This will output gcode with comments in it.</p>
<p><strong>Save the finishing gcode of a valid printing job</strong></p>
<p>Basically, open a valid gcode file, and save the last few lines (comments will help you to understand which ones, it changes from printer to printer) in a separate file (<code>gcode.tail</code>). These lines are typically those that move away the nozzle from the print, disable the heating element, the steppers and the part cooling fan.</p>
<p><strong>Prepare the <code>first-lines.sh</code> script</strong></p>
<pre><code>#! /usr/bin/env sh
sed -e '/move to next layer (3)/,$d' "$1" > /tmp/gcode.tmp
cat ~/gcode.tail >> /tmp/gcode.tmp
cat /tmp/gcode.tmp
</code></pre>
<p>What this script does is:</p>
<ul>
<li>take a file name from the command line (<code>$1</code>) and save into <code>gcode.tmp</code> only the part of it up to and excluding the line saying "move to the next layer (3)" (you should actually use the number of layers you actually want here, <code>3</code> is just an example). Again, the presence of such a line depends on you generating "verbose gcode".</li>
<li>append to <code>gcode.tmp</code> the content of the file <code>gcode.tail</code> (here replace <code>~/</code> with the actual path on your machine).</li>
<li>output as a stream the full content of <code>gcode.tmp</code></li>
</ul>
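<p>If you'd rather avoid sed (e.g. on Windows without CygWin), the same truncate-and-append step can be sketched in Python; the marker string and the <code>gcode.tail</code> idea are the same hypothetical ones as above:</p>

```python
MARKER = "move to next layer (3)"  # comment emitted by slic3r's verbose gcode

def truncate_gcode(gcode_text, tail_text, marker=MARKER):
    """Keep everything before the first line containing `marker`,
    then append the saved finishing gcode (the gcode.tail content)."""
    kept = []
    for line in gcode_text.splitlines(keepends=True):
        if marker in line:
            break
        kept.append(line)
    return "".join(kept) + tail_text

sample = "G28 ; home\nG1 X10 ; layer 1\n; move to next layer (3)\nG1 X20\n"
print(truncate_gcode(sample, "M104 S0\nM84\n"))
```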
<p><strong>Set your printer to automatically run the script onto the generated gcode</strong></p>
<p>This setting is again under "Print settings → Output Options". You have to type in the full path to <code>first-lines.sh</code>. Also remember to make the script executable (<code>chmod +x first-lines.sh</code>).</p>
<p>You can also hover over the textbox to get additional information of how you can access slic3r variables there (for example you may want to read the layer height from the settings and compute within the script the number of layers you want to keep).</p>
<p><strong>Profit</strong></p>
<p>:)</p>
<p>Final notes:</p>
<ul>
<li>I tried the sed command and have post-processing scripts running on my gcode myself, so it should work, but I haven't tried the full procedure myself, if you encounter bugs please leave a comment so I can fix the answer for everybody. :)</li>
<li>I use slic3r Prusa Edition (I believe these settings are the same, but just in case... you may wish to <a href="https://github.com/prusa3d/Slic3r/releases" rel="nofollow noreferrer">download</a> that version).</li>
<li>All of the above should work out-of-the-box on all mainstream Linux distributions and OSX. For windows, it has been suggested in the comments to install <a href="http://www.cygwin.com/" rel="nofollow noreferrer">CygWin</a>.</li>
<li>Since this procedure still slices the full model and then throws away most of it, you could make it faster by only slicing a reasonably thick "bottom part" of your model. For example: say that you know you will never want to print more than 5 layers and never with a layer height past 0.3mm... in this case you could keep only the bottom 2mm of your model and you'd be safe for all other combinations of layers and layer heights. Don't keep <em>exactly</em> 1.5mm though, as this is likely to generate a different top layer than the one in the full model.</li>
</ul>
<p>Good luck! :)</p>
|
<p>You could just insert an M30 at the end of layer 1, and the job will end there.</p>
| 789
|
<p>Is there any need of Virtual Constructors? If so can any one post a scenario?</p>
|
<p>If you are talking about virtual destructors in C++ (there isn't any such thing as virtual constructors) then they should always be used if you are using your child classes polymorphically.</p>
<pre><code>class A
{
public:
    ~A();
};

class B : public A
{
public:
    ~B();
};

A* pB = new B();
delete pB; // NOTE: WILL NOT CALL B's destructor

class A
{
public:
    virtual ~A();
};

class B : public A
{
public:
    virtual ~B();
};

A* pB = new B();
delete pB; // NOTE: WILL CALL B's destructor
</code></pre>
<p><strong>Edit:</strong> Not sure why I've got a downvote for this (would be helpful if you left a comment...) but have a read here as well</p>
<p><a href="http://blogs.msdn.com/oldnewthing/archive/2004/05/07/127826.aspx" rel="nofollow noreferrer">http://blogs.msdn.com/oldnewthing/archive/2004/05/07/127826.aspx</a></p>
|
<p>In C++, all constructors are implicitly virtual (with a little extra). That is, the constructor of the base class is called before that of the derived class. So, it's like they're sort of virtual. Because, in a virtual method, if the derived class implements a method of the same signature, only the method in the derived class is invoked.</p>
<p>However, <strong>in a constructor, BOTH METHODS ARE INVOKED</strong> (see example below).</p>
<p>For a more complete explanation of why this is so, please see Item 9 of Effective C++, Third Edition, By Scott Meyers (Never call a virtual function during construction or destruction). The title of the item may be misleading in relation to the question, but if you read the explanation, it'll make perfect sense.</p>
<pre><code>#include <iostream>

class Animal {
public:
    Animal() {
        std::cout << "Animal Constructor Invoked." << std::endl;
    }
    virtual void eat() {
        std::cout << "I eat like a generic animal.\n";
    }
    // always make destructors virtual in base classes
    virtual ~Animal() {
    }
};

class Wolf : public Animal {
public:
    Wolf() {
        std::cout << "Wolf Constructor Invoked." << std::endl;
    }
    void eat() {
        std::cout << "I eat like a wolf!" << std::endl;
    }
};

int main() {
    Wolf wolf;
    std::cout << "-------------" << std::endl;
    wolf.eat();
}
</code></pre>
<p>Output:</p>
<pre><code>Animal Constructor Invoked.
Wolf Constructor Invoked.
-------------
I eat like a wolf!
</code></pre>
| 9,600
|
<p>If I have something like a UILabel linked to a xib file, do I need to release it on dealloc of my view? The reason I ask is because I don't alloc it, which makes me think I don't need to release it either?
eg (in the header):</p>
<pre><code>IBOutlet UILabel *lblExample;
</code></pre>
<p>in the implementation:</p>
<pre><code>....
[lblExample setText:@"whatever"];
....
-(void)dealloc{
[lblExample release];//?????????
}
</code></pre>
|
<p>If you follow what is now considered to be best practice, you <em>should</em> release outlet properties, because you should have retained them in the set accessor:</p>
<pre><code>@interface MyController : MySuperclass {
    Control *uiElement;
}
@property (nonatomic, retain) IBOutlet Control *uiElement;
@end

@implementation MyController

@synthesize uiElement;

- (void)dealloc {
    [uiElement release];
    [super dealloc];
}
@end
</code></pre>
<p>The advantage of this approach is that it makes the memory management semantics explicit and clear, <em>and it works consistently across all platforms for all nib files</em>.</p>
<p>Note: The following comments apply only to iOS prior to 3.0. With 3.0 and later, you should instead simply nil out property values in viewDidUnload.</p>
<p>One consideration here, though, is when your controller might dispose of its user interface and reload it dynamically on demand (for example, if you have a view controller that loads a view from a nib file, but on request -- say under memory pressure -- releases it, with the expectation that it can be reloaded if the view is needed again). In this situation, you want to make sure that when the main view is disposed of you also relinquish ownership of any other outlets so that they too can be deallocated. For UIViewController, you can deal with this issue by overriding <code>setView:</code> as follows:</p>
<pre><code>- (void)setView:(UIView *)newView {
    if (newView == nil) {
        self.uiElement = nil;
    }
    [super setView:newView];
}
</code></pre>
</code></pre>
<p>Unfortunately this gives rise to a further issue. Because UIViewController currently implements its <code>dealloc</code> method using the <code>setView:</code> accessor method (rather than simply releasing the variable directly), <code>self.anOutlet = nil</code> will be called in <code>dealloc</code> as well as in response to a memory warning... This will lead to a crash in <code>dealloc</code>.</p>
<p>The remedy is to ensure that outlet variables are also set to <code>nil</code> in <code>dealloc</code>:</p>
<pre><code>- (void)dealloc {
    // release outlets and set variables to nil
    [anOutlet release], anOutlet = nil;
    [super dealloc];
}
</code></pre>
|
<p>Related: <a href="https://stackoverflow.com/questions/6578/understanding-reference-counting-with-cocoa-objective-c">Understanding reference counting with Cocoa / Objective C</a></p>
| 8,714
|
<p>In my database, I have an entity table (let's call it Entity). Each entity can have a number of entity types, and the set of entity types is static. Therefore, there is a connecting table that contains rows of the entity id and the name of the entity type. In my code, EntityType is an enum, and Entity is a Hibernate-mapped class.<br>
in the Entity code, the mapping looks like this:</p>
<pre><code>@CollectionOfElements
@JoinTable(
name = "ENTITY-ENTITY-TYPE",
joinColumns = @JoinColumn(name = "ENTITY-ID")
)
@Column(name="ENTITY-TYPE")
public Set<EntityType> getEntityTypes() {
return entityTypes;
}
</code></pre>
<p>Oh, did I mention I'm using annotations?<br>
Now, what I'd like to do is create an HQL query or search using a Criteria for all Entity objects of a specific entity type.</p><p>
<a href="http://opensource.atlassian.com/projects/hibernate/browse/HHH-869?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel" rel="nofollow noreferrer">This</a> page in the Hibernate forum says this is impossible, but then this page is 18 months old. Can anyone tell me if this feature has been implemented in one of the latest releases of Hibernate, or planned for the coming release?</p>
|
<p>HQL:</p>
<pre><code>select entity from Entity entity where :type = some elements(entity.entityTypes)
</code></pre>
<p>I think that you can also write it like:</p>
<pre><code>select entity from Entity entity where :type in elements(entity.entityTypes)
</code></pre>
|
<p>Is your relationship bidirectional, i.e., does <code>EntityType</code> have an <code>Entity</code> property? If so, you can probably do something like <code>entity.Name from EntityType where name = ?</code></p>
| 7,185
|
<p>I am writing a VC++ MFC dialog based app which requires Microsoft MapPoint embedding in it. To do this I'm using MS VC++ .NET 2003 and MapPoint Europe 2006 to do this but am having problems as when I select "Insert ActiveX Control" no MapPoint control appears in the list of options. I have tried manually registering <code>mappointcontrol.ocx</code> with <code>regsvr32</code> which appears to succeed but still the control doesn't appear on the list.</p>
<p>Can anyone suggest what I am doing wrong here, and any possible solutions.</p>
<p>Thanks</p>
<p>Ian</p>
|
<p>I've found some benefits to makefiles with large projects, mainly related to unifying the location of the project settings. It's somewhat easier to manage the list of source files, include paths, preprocessor defines and so on, if they're all in a makefile or other build config file. With multiple configurations, adding an include path means you need to make sure you update every config manually through Visual Studio's fiddly project properties, which can get pretty tedious as a project grows in size.</p>
<p>Projects which use a lot of custom build tools can be easier to manage too, such as if you need to compile pixel / vertex shaders, or code in other languages without native VS support.</p>
<p>You'll still need to have various different project configurations however, since you'll need to differentiate the invocation of the build tool for each config (e.g. passing in different command line options to make).</p>
<p>Immediate downsides that spring to mind:</p>
<ul>
<li>Slower builds: VS isn't particularly quick at invoking external tools, or even working out whether it needs to build a project in the first place.</li>
<li>Awkward inter-project dependencies: It's fiddly to set up so that a dependee causes the base project to build, and fiddlier to make sure that they get built in the right order. I've had some success getting SCons to do this, but it's always a challenge to get working well.</li>
<li>Loss of some useful IDE features: Edit & Continue being the main one!</li>
</ul>
<p>In short, you'll spend less time managing your project configurations, but more time coaxing Visual Studio to work properly with it.</p>
|
<p>You can use nant to build the projects individually thus replacing the solution and have 1 coding solution and no build solutions.</p>
<p>One thing to keep in mind is that the solution and csproj files from VS 2005 and up are MSBuild scripts. So if you get acquainted with MSBuild, you might be able to reuse the existing files, to make VS easier to work with and to make your deployment easier.</p>
| 7,507
|
<p>In most of my C++ projects I have heavily used ASSERT statements, as follows:</p>
<pre><code>int doWonderfulThings(const int* fantasticData)
{
ASSERT(fantasticData);
if(!fantasticData)
return -1;
// ,,,
return WOW_VALUE;
}
</code></pre>
<p>But the TDD community seems to enjoy doing something like this:</p>
<pre><code>int doMoreWonderfulThings(const int* fantasticData)
{
if(!fantasticData)
return ERROR_VALUE;
// ...
return AHA_VALUE;
}
TEST(TDD_Enjoy)
{
ASSERT_EQ(ERROR_VALUE, doMoreWonderfulThings(0L));
ASSERT_EQ(AHA_VALUE, doMoreWonderfulThings("Foo"));
}
</code></pre>
<p>In my experience the first approach has let me remove many subtle bugs.
But the TDD approach is a very smart idea for handling legacy code.</p>
<p>"Google" - they compare "FIRST METHOD" to "Walk the shore with life-vest, swim ocean without any safe guard".</p>
<p>Which one is better?
Which one makes software robust?</p>
|
<p>In my (limited) experience the first option is quite a bit safer. In a test case you only test predefined input and compare the outcome, which works well as long as every possible edge case has been checked. The first option checks every input and thus tests the 'live' values; it filters out bugs really quickly, though it comes with a performance penalty.</p>
<p>In <a href="https://rads.stackoverflow.com/amzn/click/com/0735619670" rel="nofollow noreferrer">Code Complete</a> Steve McConnell teaches us that the first method can be used successfully to filter out bugs in a <strong>debug</strong> build. In a release build you can filter out all assertions (for instance with a compiler flag) to get the extra performance.</p>
<p>In my opinion the best way is to use both methods:</p>
<p>Method 1 to catch illegal values</p>
<pre><code>int doWonderfulThings(const int* fantasticData)
{
    ASSERT(fantasticData);
    ASSERTNOTEQUAL(0, *fantasticData);
    return WOW_VALUE / *fantasticData;
}
</code></pre>
</code></pre>
<p>and method 2 to test edge-cases of an algorithm.</p>
<pre><code>int doMoreWonderfulThings(const int fantasticNumber)
{
    int count = 100;
    for(int i = 0; i < fantasticNumber; ++i) {
        count += 10 * fantasticNumber;
    }
    return count;
}

TEST(TDD_Enjoy)
{
    // Test the lower edge (the loop body never runs, so count stays 100)
    ASSERT_EQ(100, doMoreWonderfulThings(-1));
    ASSERT_EQ(100, doMoreWonderfulThings(0));
    ASSERT_EQ(110, doMoreWonderfulThings(1));

    // Test some random values
    ASSERT_EQ(350, doMoreWonderfulThings(5));
    ASSERT_EQ(2350, doMoreWonderfulThings(15));
    ASSERT_EQ(225100, doMoreWonderfulThings(150));
}
</code></pre>
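<p>The "use both" advice is language-agnostic. Here is a minimal Python sketch of the same split (runtime assertions for illegal values, unit-style checks for algorithm edges; the value 100 stands in for <code>WOW_VALUE</code>, which isn't defined in the original):</p>

```python
def do_wonderful_things(fantastic_data):
    # Method 1: runtime assertions catch illegal 'live' values in debug runs
    assert fantastic_data is not None, "fantastic_data must not be None"
    assert fantastic_data != 0, "fantastic_data must be non-zero"
    return 100 // fantastic_data  # 100 stands in for WOW_VALUE

def do_more_wonderful_things(fantastic_number):
    count = 100
    for _ in range(fantastic_number):
        count += 10 * fantastic_number
    return count

# Method 2: unit-style checks pin down the algorithm's edge cases
assert do_more_wonderful_things(0) == 100   # loop body never runs
assert do_more_wonderful_things(1) == 110
assert do_more_wonderful_things(5) == 350
assert do_wonderful_things(4) == 25
```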
|
<p>I don't know which particular TDD subcommunity you're referring to, but the TDD patterns I've come across either use Assert.AreEqual() for positive results or use an ExpectedException mechanism (e.g., attributes in .NET) to declare the error that should be observed.</p>
| 3,789
|
<p>I've got a JavaScript "object", built this way:</p>
<pre><code>function foo()
{
this.length = 0;
}
foo.prototype.getLength = function()
{
return this.length;
}
...
</code></pre>
<p>I know how to emulate namespaces with singleton JavaScript objects, but what is the best way to "namespace" an object such as that above that will be instanced?</p>
<p>I know that several JavaScript libraries have namespacing capabilities, but I'm using jQuery and would rather not add another library to the mix. I'd like to be able to provide my own namespacing scheme, perhaps by exploiting jQuery, for the JS objects of mine that need to be instanced.</p>
<p>Thanks
rp</p>
|
<p>Simple:</p>
<pre><code>var MyNamespace = MyNamespace || {};
MyNamespace.foo = function() {
this.length = 0;
};
MyNamespace.foo.prototype.getLength = function() {
return this.length;
};
</code></pre>
|
<p>Another alternative may be the <a href="http://www.bobjs.com" rel="nofollow">bob.js</a> framework:</p>
<pre><code>bob.ns.setNs('myApp.myFunctions', {
say: function(msg) {
console.log(msg);
}
});
//sub-namespace
bob.ns.setNs('myApp.myFunctions.mySubFunctions', {
hello: function(name) {
myApp.myFunctions.say('Hello, ' + name);
}
});
//call:
myApp.myFunctions.mySubFunctions.hello('Bob');
</code></pre>
| 4,296
|
<p>I took a data structures class in C++ last year, and consequently implemented all the major data structures in templated code. I saved it all on a flash drive because I have a feeling that at some point in my life, I'll use it again. I imagine <em>something</em> I end up programming will need a B-Tree, or is that just delusional? How long do you typically save the code you write for possible reuse? </p>
|
<p>Forever (or as close as I can get). That's the whole point of a source control system.</p>
|
<p>Source control, keep it offsite and keep it for life! You'll never have to worry about it.</p>
| 9,365
|
<p>When building projects in C++, I've found debugging linking errors to be tricky, especially when picking up other people's code. What strategies do people use for debugging and fixing linking errors?</p>
|
<p>Not sure what your level of expertise is, but here are the basics. </p>
<p>Below is a linker error from VS 2005 - yes, it's a giant mess if you're not familiar with it.</p>
<pre><code>ByteComparator.obj : error LNK2019: unresolved external symbol "int __cdecl does_not_exist(void)" (?does_not_exist@@YAHXZ) referenced in function "void __cdecl TextScan(struct FileTextStats &,char const *,char const *,bool,bool,__int64)" (?TextScan@@YAXAAUFileTextStats@@PBD1_N2_J@Z)
</code></pre>
<p>There are a couple of points to focus on: </p>
<ul>
<li>"ByteComparator.obj" - Look for a ByteComparator.cpp file, this is the source of the linker problem</li>
<li>"int __cdecl does_not_exist(void)" - This is the symbol it couldn't find, in this case a function named does_not_exist()</li>
</ul>
<p>At this point, in many cases the fastest way to resolution is to search the code base for this function and find where the implementation is. Once you know where the function is implemented you just have to make sure the two places get linked together.</p>
<p>If you're using VS2005, you would use the "Project Dependencies..." right-click menu. If you're using gcc, you would look in your makefiles for the executable generation step (gcc called with a bunch of .o files) and add the missing .o file.</p>
<hr>
<p>In a second scenario, you may be missing an "external" dependency, which you don't have code for. The Win32 libraries are often times implemented in static libraries that you have to link to. In this case, go to <a href="http://msdn.microsoft.com/en-us/default.aspx" rel="noreferrer">MSDN</a> or <a href="http://www.google.com/microsoft" rel="noreferrer">"Microsoft Google"</a> and search for the API. At the bottom of the API description the library name is given. Add this to your project properties "Configuration Properties->Linker->Input->Additional Dependencies" list. For example, the function timeGetTime()'s <a href="http://msdn.microsoft.com/en-us/library/ms713418(VS.85).aspx" rel="noreferrer">page on MSDN</a> tells you to use Winmm.lib at the bottom of the page.</p>
|
<p>One of the common linking errors I've run into is when a function is used differently from how it's defined. If you see such an error you should make sure that every function you use is properly declared in some .h file.<br>
You should also make sure that all the relevant source files are compiled into the same lib file. An error I've run into is when I have two sets of files compiled into two separate libraries, and I cross-call between libraries.</p>
<p>Is there a failure you have in mind?</p>
| 5,535
|
<p>A couple of weeks ago, I bought a custom 3D printer that has an Ultimaker 2 motherboard in it. However, the dimensions of the printer are not the same as the Ultimaker 2's (X and Y are the same; Z is a bit smaller). The printer had Tinker firmware installed. Today, I tried to print a premade .gcode file (which was for another 3D printer, I guess) and after pressing print, the machine told me the file would overwrite machine settings, and I pressed yes. After that, the dimensions of my 3D printer changed in its firmware. The bed rises more than it should while starting calibration, and does not set its position precisely. (To make the 1 mm gap, I had to move the bed down 4-5 mm away from where it should be.) Now the question is, what can I do to fix this problem? I also tried reinstalling the original firmware, which didn't really work. (All the parts are original except the frame, which is a bit shorter.) How should I measure the height of the printing area?</p>
|
<p>Is your printer an Ultimaker clone or something else? All of the original firmwares located on TinkerGnome's GitHub are configured for Ultimaker printers, so if you are using them on something different you will need to configure them before use. The easiest option would be editing the print file that changed your settings to restore your desired settings and then reloading it.</p>
<p>How to find your actual Z? Well that's a bit difficult without more information. I'm guessing from your description that your printer homes at Z max? If it's homing at Z max you need to home the machine, jog the Z axis to where you want 0 to be (usually using a piece of paper between the nozzle and bed), then record the Z axis position and enter that as your travel limit in the firmware. If your printer homes at Z min this could be as simple as changing the homing offset.</p>
|
<p>Firmware is stored in flash and contains some default values, but flashing it does not affect the <em>calibration</em> values, which are stored in EEPROM.</p>
<p>You should be able to reset the calibration to 'factory defaults', this actually means to take the defaults in the firmware and store them in EEPROM.</p>
<p>Use <code>M502</code> to load firmware defaults into the current session.
Use <code>M500</code> to write the settings from the current session into EEPROM.</p>
<p>It is unusual for a design in gcode to include modifications to the settings, but maybe it was done to change acceleration or something similar.</p>
| 1,020
|
<p>I want to log onto Stack Overflow using OpenID, but I thought I'd set up my own OpenID provider, just because it's harder :) How do you do this in Ubuntu?</p>
<p>Edit: Replacing 'server' with the correct term OpenID provider (Identity provider would also be correct according to <a href="http://en.wikipedia.org/wiki/Openid#Using_OpenID" rel="noreferrer">wikipedia</a>).</p>
|
<p>I personally used <a href="https://siege.org/phpmyid.html" rel="nofollow noreferrer">phpMyID</a> just for StackOverflow. It's a simple two-file PHP script to put somewhere on a subdomain. Of course, it's not as easy as installing a .deb, but since OpenID relies completely on HTTP, I'm not sure it's advisable to install a self-contained server...</p>
|
<p>I totally understand where you're coming from with this question. I already had an OpenID at <a href="http://www.myopenid.com" rel="nofollow noreferrer">www.myopenid.com</a> but it feels a bit weird relying on a 3rd party for such an important login (a.k.a. my permanent "home" on the internet).</p>
<p>Luckily, it is easy to move to using your own server as an OpenID server - in fact, it can be done with just two files with phpMyID.</p>
<ul>
<li>Download "phpMyID-0.9.zip" from <a href="http://siege.org/projects/phpMyID/" rel="nofollow noreferrer">http://siege.org/projects/phpMyID/</a></li>
<li>Move it to your server and unzip it to view the README file which explains everything.</li>
<li>The zip has two files: <em>MyID.config.php, MyID.php</em>. I created a directory called <code><mydocumentroot>/OpenID</code> and renamed <em>MyID.config.php</em> to <em>index.php</em>. This means my OpenID URL will be very cool: <code>http://<mywebsite>/OpenID</code></li>
<li>Decide on a username and password and then create a hash of them using: <code>echo -n '<myUserName>:phpMyID:<myPassword>' | openssl md5</code></li>
<li>Open <em>index.php</em> in a text editor and add the username and password hash in the placeholder. Save it.</li>
<li>Test by browsing to <code>http://<mywebsite>/OpenID/</code></li>
<li>Test ID is working using: <a href="http://www.openidenabled.com/resources/openid-test/checkup/" rel="nofollow noreferrer">http://www.openidenabled.com/resources/openid-test/checkup/</a></li>
</ul>
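<p>The hashing step above can also be done without openssl. Here is a small Python sketch (the helper name <code>phpmyid_hash</code> is mine, not part of phpMyID; the <code>user:phpMyID:password</code> format follows the README step above):</p>

```python
import hashlib

def phpmyid_hash(username: str, password: str, realm: str = "phpMyID") -> str:
    # phpMyID expects an HTTP-digest-style hash: md5("username:realm:password")
    return hashlib.md5(f"{username}:{realm}:{password}".encode()).hexdigest()

print(phpmyid_hash("alice", "s3cret"))  # 32-character lowercase hex digest
```

<p>Paste the resulting digest into the placeholder in <em>index.php</em> exactly as you would the openssl output.</p>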
<p>Rerefence info: <a href="http://www.wynia.org/wordpress/2007/01/15/setting-up-an-openid-with-php/" rel="nofollow noreferrer">http://www.wynia.org/wordpress/2007/01/15/setting-up-an-openid-with-php/</a> , <a href="http://siege.org/projects/phpMyID/" rel="nofollow noreferrer">http://siege.org/projects/phpMyID/</a> , <a href="https://blog.stackoverflow.com/2009/01/using-your-own-url-as-your-openid/">https://blog.stackoverflow.com/2009/01/using-your-own-url-as-your-openid/</a></p>
| 4,820
|
<p>Has anyone succeeded in installing the auto bed levelling on a Rumba board with Marlin firmware?</p>
<p>I have the last stable version <a href="https://github.com/MarlinFirmware/Marlin/releases/tag/1.1.0-RC6" rel="nofollow noreferrer">1.1.0 RC6</a>.</p>
<p>I would appreciate some direction especially about:</p>
<ul>
<li>How and which pin to activate for the servo?</li>
<li>How to test it with G-code before I move to settings of the probe sequence?</li>
</ul>
<p>I have only installed the hardware for now (5 V servo) connected to Ext. 3 (EXP3):</p>
<ul>
<li>Pin 2 (+5V);</li>
<li>Pin 4 (GND), and;</li>
<li>Pin 6 (PWM),</li>
</ul>
<p><a href="https://i.stack.imgur.com/ETHOL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ETHOL.png" alt="Servo and RAMPS 1.4 and RUMBA connections"></a></p>
<p><a href="https://i.stack.imgur.com/ZGvw0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZGvw0.jpg" alt="RUMBA EXP3 pinout"></a></p>
<p>but I cannot move it with the G-code command <code>M280 P0 S180</code>. I have no idea where to start in the firmware to get this going. However, my ultimate goal is to set up ABL.</p>
|
<p>General note: I do not have this board, so I cannot test these steps myself. Read the documentation in configuration.h; it is very detailed and should guide you pretty well. I am specifically looking at Marlin 1.1 RC7 on GitHub, so the lines below may vary slightly from what you see.</p>
<p>As to the pins to connect on the board for the servo, pins_RUMBA.h is where they are defined/mapped. For other boards, there is a pins_[your_board_name].h that will define the pins for any given board. </p>
<p>The default Servo pin for Rumba is:</p>
<pre><code>#define SERVO0_PIN 5
</code></pre>
<p>Pin 6 appears to be used for a third extruder heater.</p>
<pre><code>#define HEATER_2_PIN 6 // EXTRUDER 3
</code></pre>
<p>In configuration.h you must uncomment (delete the slashes "//" at the beginning) the lines and fill in your stow and deploy angles in the second line for the servo. Find these lines under the Z probe options heading.</p>
<pre><code>//#define Z_ENDSTOP_SERVO_NR 0
//#define Z_SERVO_ANGLES {70,0} // Z Servo Deploy and Stow angles
</code></pre>
<p>Define your probe offsets from your extruder nozzle:</p>
<pre><code>#define X_PROBE_OFFSET_FROM_EXTRUDER 10 // X offset: -left +right [of the nozzle]
#define Y_PROBE_OFFSET_FROM_EXTRUDER 10 // Y offset: -front +behind [the nozzle]
#define Z_PROBE_OFFSET_FROM_EXTRUDER 0 // Z offset: -below +above [the nozzle]
</code></pre>
<p>Based on your comment about using two Z end stop switches, there is an option you must enable to use the standard end stop switch for homing, and only use the probe end stop for mesh bed leveling type operations. The config.h file has a lot of information on this; please read it for your own and your printer's safety.</p>
<p>Uncomment this line:</p>
<pre><code>//#define Z_MIN_PROBE_ENDSTOP
</code></pre>
<p>and comment this line:</p>
<pre><code>#define Z_MIN_PROBE_USES_Z_MIN_ENDSTOP_PIN
</code></pre>
<p>Then set the carriage height to allow the z probe room to swing down and move:</p>
<pre><code>#define Z_PROBE_DEPLOY_HEIGHT 15 // Raise to make room for the probe to deploy / stow
#define Z_PROBE_TRAVEL_HEIGHT 5 // Raise between probing points.
</code></pre>
<p>For autobed leveling uncomment:</p>
<pre><code>//#define AUTO_BED_LEVELING_FEATURE // Delete the comment to enable
</code></pre>
<p>Then set probe points corners: </p>
<pre><code>#if ENABLED(AUTO_BED_LEVELING_GRID)
#define LEFT_PROBE_BED_POSITION 15
#define RIGHT_PROBE_BED_POSITION 170
#define FRONT_PROBE_BED_POSITION 20
#define BACK_PROBE_BED_POSITION 170
#define MIN_PROBE_EDGE 10 // The Z probe minimum square sides can be no smaller than this.
</code></pre>
<p>Set the number of points to probe in each direction (x and y), default is 2, so it will probe 4 locations, the other common choice is 3, so it will probe a grid of 9 locations.</p>
<pre><code>// Set the number of grid points per dimension.
// You probably don't need more than 3 (squared=9).
#define AUTO_BED_LEVELING_GRID_POINTS 2
</code></pre>
<p>That should be everything you need for a basic setup, although there are more options that I did not go through. Please look at all the documentation comments in configuration.h file as it is very comprehensive, even if it can be a bit confusing.</p>
<p>I hope this helps!</p>
|
<p><strong>For future reference.</strong></p>
<p>My issue with the servo not moving was caused by a wiring mistake.
The Exp. 3 connector has 14 pins, as per this diagram.</p>
<p><a href="https://i.stack.imgur.com/g4yyn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g4yyn.png" alt="enter image description here"></a></p>
<p>However, when physically looking at the board, what you see is this:
<a href="https://i.stack.imgur.com/51eF6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/51eF6.jpg" alt="enter image description here"></a></p>
<p>I took the first 2 pins on the right of that connector and the 3rd one of the first row, thinking that I was connecting pins 2-4-5 of Exp. 3.
I was wrong, because the first 2 pins (1-2) are not part of Exp. 3.</p>
<p>The right way to connect the servo is as follows:
<a href="https://i.stack.imgur.com/nDWcX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nDWcX.jpg" alt="enter image description here"></a></p>
<p>Then use PWM1 (pin 5 of Exp. 3).
I decided to leave a record of this issue and its solution for anyone who may experience the same problem.</p>
| 416
|
<p>After a long battle with SKR Mini v2, TFT35 and BLTouch and creating the right firmware. I thought I was through it all and ready to start printing again after finally being able to set the Z offset and auto level the bed. My printer has other thoughts. Now my bed temperature will only heat up to 10 °C below the set point temperature and after a few minutes it starts beeping and says this on the screen "<code>Heating Failed: Bed Printer Halted, Please Reset</code>". As an example, set it to 60 °C, it will get up to 50 °C normally and stop at 50 °C.</p>
<p>Anyone gone through this? I'm sure there is some setting in the firmware that I have messed up. I'm hoping someone can educate me on my mistake.</p>
|
<p>Searching the error message "Heating Failed: Bed Printer Halted, Please Reset" seems to indicate that the bed heater is timing out from not reaching temperature.</p>
<ol>
<li><p>If you measure the voltage applied to the bed heater before the error message, does the voltage stay at maximum (i.e., 12 V for a 12 V bed heater), or does it vary?</p>
</li>
<li><p>If you raise the target bed temperature, does it still error out and stop at 50 °C?</p>
</li>
<li><p>Since you only indicate changing firmware, we would assume the bed heater is the same as when previously working. Is this true?</p>
</li>
<li><p>Is the resistance of the bed heater a few ohms and not megaohms?</p>
</li>
</ol>
|
<p>I was having the same issue after installing BTT SKR Mini E3 V2 and BLTouch on my Ender 5 Pro.</p>
<p>I did two things and my bed heats normally now, but I changed/did both things and can't say which fixed the issue for me.</p>
<ul>
<li>I noticed the case fan wasn't coming on. I had it plugged into <code>Fan 1</code> on the board. In my Marlin firmware, I noticed in <code>Configuration_adv.h</code> the <code>USE_CONTROLLER_FAN</code> was commented out, so I enabled it (removed the '//' in front of <code>#define USE_CONTROLLER_FAN</code>). After I flashed the firmware with this change, the controller box fan did start kicking on as I'd expect.</li>
<li>I also did PID tuning on the bed. The firmware I'd compiled had the <code>PIDTEMPBED</code> enabled as I prefer to tune the bed when I do the hotend too. I hadn't yet tuned the bed when I was getting the 10-degree heating failure. My bed was set to 60 °C and it wouldn't heat above 52 °C without giving the "Heating Failed: Bed Printer Halted, Please Reset". After PID tuning (which I did after the fan firmware change too) the bed heats up to 60 °C and stays there now.</li>
</ul>
| 1,836
|
<p>I'm trying to use the <a href="http://optiflag.rubyforge.org/discussion.html" rel="nofollow noreferrer">Optiflag</a> package in my Ruby code and whenever I try to do the necessary <code>require optiflag.rb</code>, my program fails with the standard <code>no such file to load -- optiflag</code> message. I added the directory with that library to my $PATH variable, but it's still not working. Any ideas?</p>
|
<p>Is it a gem? Are you doing </p>
<pre><code>require 'rubygems'
require 'optiflag'
</code></pre>
<p>or equivalent?</p>
|
<p>I also keep having this problem with RubyXL; I tried using single and double quotes. Is there something else that needs to be done? Maybe putting a file somewhere? I already successfully installed the gem with sudo gem install rubyXL (RubyXL actually didn't work).</p>
| 7,670
|
<p>I have a recursive algorithm which steps through a string, character by character, and parses it to create a tree-like structure. I want to be able to keep track of the character index the parser is currently at (for error messages as much as anything else) but am not keen on implementing something like a tuple to handle multiple returned types.</p>
<p>I tried using an Integer type, declared outside the method and passed into the recursive method, but because it's final, recursive call increments are "forgotten" when I return. (Because the increment of the Integer value makes the passed-by-value object reference point at a new object)</p>
<p>Is there a way to get something similar to work which won't pollute my code?</p>
|
<p>Since you've already discovered the pseudo-mutable integer "hack," how about this option:</p>
<p>Does it make sense for you to make a separate Parser class? If you do this, you can store the current state in a member variable. You probably need to think about how you're going to handle any thread safety issues, and it might be overkill for this particular application, but it might work for you.</p>
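<p>A minimal sketch of that idea, in Python for brevity (the same shape carries over to a Java class; the toy grammar of parenthesised groups is just a stand-in for your real parser):</p>

```python
class Parser:
    """The current index is instance state, so every recursive call
    shares it, and error messages can read it at any time."""

    def __init__(self, text):
        self.text = text
        self.pos = 0  # survives across recursive calls

    def parse_group(self):
        children = []
        while self.pos < len(self.text):
            ch = self.text[self.pos]
            self.pos += 1
            if ch == "(":
                children.append(self.parse_group())  # recurse; pos keeps advancing
            elif ch == ")":
                return children  # end of this group
            else:
                children.append(ch)
        return children

parser = Parser("a(b(c)d)e")
print(parser.parse_group())  # ['a', ['b', ['c'], 'd'], 'e']
print(parser.pos)            # 9 -- usable in error messages
```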
|
<p>To be honest, I would recode the function to make it a linear algorithm that uses a loop. This way you have no chance of running out of stack space if you are stepping through an extremely large string. Also, you would not need the extra parameter just to keep track of the count.</p>
<p>This also would probably have the result of making the algorithm faster because it does not need to make a function call for every character.</p>
<p>Unless of course there is a specific reason it needs to be recursive.</p>
| 5,499
|
<p>I've seen this weird behavior on several sites recently: I scroll down a page and follow a link to another page. When I click the Back button and return, I am left back at the top of the previous page, not at the link. This is very annoying if I'm clicking on links in a search results page or a list of "10 Best Foo Bars...".</p>
<p>See this page as an <a href="http://www.typophile.com/forum/5" rel="nofollow noreferrer">example</a>. Strangely, the page works as expected in IE6 on WinXP, but not on FF2 on the same machine. On Mac OS X 10.4 it works in FF2, but not in FF3. I checked for any weird preference settings, but I can't find any that are different.</p>
<p>Any idea what is causing this?</p>
|
<p>Many sites have a text box (for searching the site, or something) that is set to automatically take focus when the page loads (using javascript or something). In many browsers, the page will jump to that text box when it gets focus.</p>
<p>It really is very annoying :(</p>
|
<p>Typically this behaviour is caused by the browser cache set by the site having a small or no time before expiry.</p>
<p>On many sites, when you hit "back" you get brought back to the link you hit, as your browser is pulling the page from your cache. If this cache has not been set, a new page request is made, and the browser treats it as fresh content.</p>
<p>On the page <a href="http://www.typophile.com/forum/5" rel="nofollow noreferrer">linked above</a>, the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21" rel="nofollow noreferrer">"Expires" header</a> seems to be set to less than a minute ahead of my local clock, which is causing my browser to get a fresh copy when I hit "back" after that expiry time.</p>
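<p>For illustration, here is a small Python sketch of what a longer-lived header value looks like (the one-hour lifetime is an arbitrary choice; any server-side language can emit the same header):</p>

```python
import time
from email.utils import formatdate

def expires_header(seconds_ahead: int) -> str:
    # Build an RFC 1123 date for the Expires header, e.g.
    # "Expires: Wed, 01 Jan 2031 12:00:00 GMT"
    return "Expires: " + formatdate(time.time() + seconds_ahead, usegmt=True)

# With an hour-long lifetime, hitting "back" within the hour serves the
# page from the browser cache, preserving the scroll position.
print(expires_header(3600))
```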
| 9,523
|
<p>There's a <a href="http://groups.google.com/group/comp.lang.c++.moderated/browse_thread/thread/8e0235d58c8635c2" rel="noreferrer" title="assertions: does it matter that they are disabled in production?">discussion</a> going on over at comp.lang.c++.moderated about whether or not assertions, which in C++ only exist in debug builds by default, should be kept in production code or not.</p>
<p>Obviously, each project is unique, so my question here is <strong>not</strong> so much <strong>whether</strong> assertions should be kept, <strong>but in which cases</strong> this is recommendable/not a good idea.</p>
<p>By assertion, I mean:</p>
<ul>
<li>A run-time check that tests a condition which, when false, reveals a bug in the software.</li>
<li>A mechanism by which the program is halted (maybe after really minimal clean-up work).</li>
</ul>
<p>I'm not necessarily talking about C or C++.</p>
<p>My own opinion is that if you're the programmer, but don't own the data (which is the case with most commercial desktop applications), you should keep them on, because a failing assertion shows a bug, and you should not go on with a bug, with the risk of corrupting the user's data. This forces you to test strongly before you ship, and makes bugs more visible, thus easier to spot and fix.</p>
<p>What's your opinion/experience?</p>
<p>Cheers,</p>
<p>Carl</p>
<p>See related question <a href="https://stackoverflow.com/questions/419406/are-assertions-good">here</a></p>
<hr>
<p><strong>Responses and Updates</strong></p>
<p>Hey Graham,</p>
<blockquote>
<p>An assertion is error, pure and simple and therefore should be handled like one.
Since an error should be handled in release mode then you don't really need assertions.</p>
</blockquote>
<p>That's why I prefer the word "bug" when talking about assertions. It makes things much clearer. To me, the word "error" is too vague. A missing file is an error, not a bug, and the program should deal with it. Trying to dereference a null pointer is a bug, and the program should acknowledge that something smells like bad cheese.</p>
<p>Hence, you should test the pointer with an assertion, but the presence of the file with normal error-handling code.</p>
<hr>
<p>Slightly off-topic, but an important point in the discussion.</p>
<p>As a heads-up, if your assertions break into the debugger when they fail, why not. But there are plenty of reasons a file could not exist that are completely outside of the control of your code: read/write rights, disk full, USB device unplugged, etc. Since you don't have control over it, I feel assertions are not the right way to deal with that.</p>
<p>Carl</p>
<hr>
<p>Thomas,</p>
<p>Yes, I have Code Complete, and must say I strongly disagree with that particular advice.</p>
<p>Say your custom memory allocator screws up and zeroes a chunk of memory that is still used by some other object. It happens to zero a pointer that this object dereferences regularly, and one of the invariants is that this pointer is never null, and you have a couple of assertions to make sure it stays that way. What do you do if the pointer suddenly is null? Do you just if() around it, hoping that it works?</p>
<p>Remember, we're talking about product code here, so there's no breaking into the debugger and inspecting the local state. This is a real bug on the user's machine.</p>
<p>Carl</p>
|
<p>Assertions are comments that do not become outdated. They document which theoretical states are intended, and which states should not occur. If code is changed so that the allowed states change, the developer is soon informed and needs to update the assertion.</p>
|
<p>An assertion is error, pure and simple and therefore should be handled like one.</p>
<p>Since an error should be handled in release mode then you don't really need assertions.</p>
<p>The main benefit I see for assertions is a conditional break - they are much easier to set up than drilling through VC's windows to set up something that takes 1 line of code.</p>
| 3,832
|
<p>I have a flex application that needs the ability to generate and execute JavaScript. When I say this, I mean I need to execute raw JavaScript that I create in my Flex application (not just an existing JavaScript method)</p>
<p>I am currently doing this by exposing the following JavaScript method:</p>
<pre><code>function doScript(js){ eval(js);}
</code></pre>
<p>I can then do something like this in Flex (note: I am doing something more substantial than an alert box in the real Flex app):</p>
<pre><code>ExternalInterface.call("doScript","alert('foo')");
</code></pre>
<p>My question is: does this impose any security risk? I am assuming not, since the Flex and JavaScript all run client side...</p>
<p>Is there a better way to do this?</p>
|
<p>There's no need for the JavaScript function, the first argument to <code>ExternalInterface.call</code> can be any JavaScript code, it doesn't have to be a function name (the documentation says so, but it is wrong).</p>
<p>Try this:</p>
<pre><code>ExternalInterface.call("alert('hello')");
</code></pre>
|
<pre><code>ExternalInterface.call("eval", "alert('hello');");
</code></pre>
| 4,086
|
<p>From my experience with <a href="http://en.wikipedia.org/wiki/OpenID" rel="nofollow noreferrer">OpenID</a>, I see a number of significant downsides:</p>
<p><strong>Adds a <a href="http://en.wikipedia.org/wiki/Single_Point_of_Failure" rel="nofollow noreferrer">Single Point of Failure</a> to the site</strong><br>
It is not a failure that can be fixed by the site even if detected. If the OpenID provider is down for three days, what recourse does the site have to allow its users to login and access the information they own?</p>
<p><strong>Takes a user to another site's content every time they log on to your site</strong><br>
Even if the OpenID provider does not have an error, the user is re-directed to their site to login. The login page has content and links. So there is a chance a user will actually be drawn away from the site to go down the Internet rabbit hole.</p>
<p>Why would I want to send my users to another company's website?<br/>
[ Note: my provider no longer does this and seems to have fixed this problem (for now).]</p>
<p><strong>Adds a non-trivial amount of time to the signup</strong><br>
To sign up with the site a new user is forced to read a new standard, choose a provider, and sign up. Standards are something that the technical people should agree to in order to make a user experience frictionless. They are not something that should be thrust on the users.</p>
<p><strong>It is a Phisher's Dream</strong><br>
OpenID is incredibly insecure and stealing the person's ID as they log in is trivially easy. [ taken from David Arno's <a href="https://stackoverflow.com/questions/60436/what-is-the-benefit-of-using-only-openid-authentication-on-a-site#173467">Answer</a> below ]</p>
<hr>
<p>For all of the downside, the one upside is to allow users to have fewer logins on the Internet. If a site has opt-in for OpenID then users who want that feature can use it.</p>
<p><strong>What I would like to understand is:</strong><br>
What benefit does a site get for making OpenID <b>mandatory</b>?</p>
|
<p>The benefit of making OpenID mandatory is simply that login code for the website does not need to be written (beyond the OpenID integration), and no precautions need to be taken around storing user passwords etc.</p>
<p>Not having your own login code also means not having to deal with a lot of support issues like resetting of lost passwords etc.</p>
<p>Certainly most of your downsides are valid, so I guess it becomes a trade off.</p>
<p>What surprises me is that there are not more sites forming a close relationship with a particular OpenID provider to simplify the account signup phase - i.e. some sort of 'You can use any OpenID you like, but you can also create one right now by entering a username and password etc' login page, which automatically creates a new account with the selected provider for you.</p>
|
<p>The main benefit of having an OpenID will be seen in the long term. Instead of having to apply to different sites for an identity, you do that once and then use it on all the sites that require a unique identity. Of course for secure sites like banking and trading it will need a different kind of thinking altogether. But for social networking sites and the like you can use it easily. </p>
<p>Mom and Dad will find it easy too because now they have to remember only one username/password. A lot of times it gets hard for us to remember what login we have at which site, and we end up using the username/password of Site A on Site B. OpenID will solve that. Plus it's a good revenue model for both the OpenID provider and the user: I can give one such provider all the details I am willing to share, and earn money for each detail I provide. </p>
<p>Maybe the provider can coax me to tell it more about myself using that as an incentive, which it can then sell to the sites I register with. So Site A pays OpenID for my information. OpenID then passes a cut of that on to me. Site A doesn't have to manage users, OpenID gets money, user gets money, everybody is happy :)</p>
<p>This way you won't have to make OpenID mandatory. People themselves will want it. OpenID providers will then compete amongst themselves to provide better services, and where there is competition there will be better value provided to all concerned. I think it's a fabulous idea.</p>
<p><strong>Edit:</strong>
Regarding downtimes at one particular provider; if OpenID provider A is not confident of providing 100% uptime, it can take the help of another provider B, and the user on Provider A can choose from the options provider A gives. The site which goes to provider A to authenticate a user will know which other providers to go to in case provider A is not working. This will be stored in its database on first login automatically.
Anybody want to brainstorm the implementation details? :) </p>
| 8,529
|
<p>Suppose I have the X, Y, and Z coordinates (either in a list or a function z = f(x,y)) that define a shape like the one shown, and I want to 3D print it with a solid bottom. Is there an easy way to do this? If not, how can a functionally well-defined shape be put into 3D modeling software like FreeCAD?</p>
<p><a href="https://i.stack.imgur.com/5bcSU.jpg" rel="nofollow noreferrer" title="3D graph"><img src="https://i.stack.imgur.com/5bcSU.jpg" alt="3D graph" title="3D graph" /></a></p>
|
<p>For stuff like this, OpenSCAD is your friend. There are several different approaches you could take:</p>
<ol>
<li><p>Generate an image file with grayscale color representing the height of the function on an XY grid, and use the <a href="https://en.wikibooks.org/wiki/OpenSCAD_User_Manual/Other_Language_Features#Surface" rel="nofollow noreferrer"><code>surface</code></a> feature to import it as a heightmap.</p>
</li>
<li><p>Write the function as a mathematical expression in OpenSCAD language, and write a module to generate a <a href="https://en.wikibooks.org/wiki/OpenSCAD_User_Manual/Primitive_Solids#polyhedron" rel="nofollow noreferrer"><code>polyhedron</code></a> by iterating over a sufficiently fine coordinate grid, sampling the function, and producing points and triangles.</p>
</li>
<li><p>Use a library someone else has already written for this purpose. I'm not aware of specific ones but pretty sure there are quite a few.</p>
</li>
</ol>
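<p>As a sketch of the data-preparation step for approaches 1 and 2, assuming you can evaluate z = f(x,y) in Python (the ripple function and the 50x50 grid are arbitrary examples), a short script can emit a whitespace-separated <code>.dat</code> heightmap that OpenSCAD's <code>surface()</code> reads directly:</p>

```python
import math

def write_heightmap(path, f, xs, ys):
    # One text row per y value, space-separated z samples per x value --
    # the .dat layout that OpenSCAD's surface() imports.
    with open(path, "w") as out:
        for y in ys:
            out.write(" ".join(f"{f(x, y):.4f}" for x in xs) + "\n")

# Example surface: a radial ripple, shifted up so every z is positive
# (a positive floor helps give the print a solid bottom).
grid = [i * 0.2 - 5 for i in range(50)]
write_heightmap("surface.dat", lambda x, y: math.cos(math.hypot(x, y)) + 1.5, grid, grid)

# In OpenSCAD: surface(file = "surface.dat");
```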
|
<p>Not sure what you're asking exactly, but you can use this workflow:</p>
<ol>
<li>SideFX Houdini: math function + STL file generation
<a href="https://i.stack.imgur.com/28jr1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/28jr1.jpg" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/E0Itn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E0Itn.jpg" alt="enter image description here" /></a></li>
<li>(optional) FreeCAD: Import STL as mesh, do whatever you need to do in FreeCAD, export to STL
<a href="https://i.stack.imgur.com/txRno.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/txRno.png" alt="enter image description here" /></a></li>
<li>a slicer: import the STL and slice it.
<a href="https://i.stack.imgur.com/Ofydl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ofydl.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/zCxgG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zCxgG.jpg" alt="enter image description here" /></a></li>
</ol>
| 2,032
|
<p>Everyone remembers Google Browser Sync, right? I thought it was great. Unfortunately, Google decided not to upgrade the service to Firefox 3.0. Mozilla is developing a replacement for Google Browser Sync which will be a part of the Weave project. I have tried using Weave and found it to be very slow or totally inoperable. Granted, they are in an early development phase right now, so I can not really complain. </p>
<p>This specific problem of browser sync got me to thinking, though. What do all of you think of Mozilla or someone else making a server/client package that we, the users, could run on our own 'main' machines? Then you would just have to know your own IP or have some way to announce it to your client browsers at work or wherever.</p>
<p>There are several problems I can think of with this: non-static IPs, opening up ports on your local computer, etc. It just seems that Mozilla does not want to handle the traffic created by many people syncing their browsers. There is not a way for them to monetize this traffic since all the data uploaded must be encrypted. </p>
|
<p>Mozilla Weave is capable of running on personal servers. It uses WebDAV to communicate with HTTP servers and can be configured to connect to private servers. I've tried setting it up on my own servers but with no success (Mainly because I'm not very good at working with Apache to configure WebDAV)</p>
<p>I'm hoping Mozilla Weave eventually allows FTP access so I can easily use my server to host my firefox profile.</p>
<p>If you're interested in trying Mozilla Weave on a personal server, there's a tutorial here:<br>
<a href="http://marios.tziortzis.com/page/blog/article/setting-up-mozilla-weave-on-your-server/" rel="nofollow noreferrer">http://marios.tziortzis.com/page/blog/article/setting-up-mozilla-weave-on-your-server/</a></p>
|
<p>I've been using the Firefox Scrapbook extension, sync'd via FolderShare. It takes a little setup, but the nice thing is that Scrapbook grabs a local copy of each page so it works offline or if the site goes away.</p>
| 3,444
|
<p>After hours of debugging, it appears to me that in Firefox, the innerHTML of a DOM element reflects what is actually in the markup, but in IE, the innerHTML reflects what's in the markup PLUS any changes made by the user or dynamically (i.e. via JavaScript).</p>
<p>Has anyone else found this to be true? Any interesting work-arounds to ensure both behave the same way?</p>
|
<p>I use jQuery's <a href="http://docs.jquery.com/Attributes/html" rel="noreferrer">.html()</a> to get a consistent result across browsers.</p>
|
<p>Using prototype and the $("thisid") syntax instead of document.getElementById("thisid") might do the trick for you. It worked for me.</p>
| 5,738
|
<p>What is the difference between using angle brackets and quotes in an <code>include</code> directive?</p>
<ul>
<li><code>#include <filename></code></li>
<li><code>#include "filename"</code></li>
</ul>
|
<p>What differs is the locations in which the preprocessor searches for the file to be included.</p>
<ul>
<li><p><code>#include <filename></code> The preprocessor searches in an implementation-defined manner, normally in directories pre-designated by the compiler/IDE. This method is normally used to include header files for the C standard library and other header files associated with the target platform.</p>
</li>
<li><p><code>#include "filename"</code> The preprocessor also searches in an implementation-defined manner, but one that is normally used to include programmer-defined header files; the search typically includes the same directory as the file containing the directive (unless an absolute path is given).</p>
</li>
</ul>
<p>For GCC, a more complete description is available in the GCC <a href="https://gcc.gnu.org/onlinedocs/cpp/Search-Path.html" rel="noreferrer">documentation on search paths</a>.</p>
|
<p>There exist two ways to write an #include statement. These are:</p>
<pre><code>#include"filename"
#include<filename>
</code></pre>
<p>The meaning of each form is </p>
<pre><code>#include"mylib.h"
</code></pre>
<p>This command would look for the file <code>mylib.h</code> in the current directory as well as the specified list of directories as mentioned in the include search path that might have been set up.</p>
<pre><code>#include<mylib.h>
</code></pre>
<p>This command would look for the file <code>mylib.h</code> in the specified list of directories only.</p>
<p>The include search path is nothing but a list of directories that would be searched for the file being included. Different C compilers let you set the search path in different manners.</p>
| 4,174
|
<p>From the <a href="http://marlinfw.org/docs/gcode/M502.html" rel="nofollow noreferrer"><code>M502</code> documentation page</a> can be read that <code>M502</code>:</p>
<blockquote>
<p>Reset all configurable settings to their factory defaults.</p>
</blockquote>
<p><sub><em>Please note that this phrasing from the manual has been used in the question title!</em></sub></p>
<blockquote>
<p>To also reset settings in EEPROM, follow with M500.</p>
</blockquote>
<p>Note that:</p>
<blockquote>
<p>This command can be used even if EEPROM_SETTINGS is disabled.</p>
</blockquote>
<p>The question is what is the definition of <em>"all configurable settings"</em>?</p>
<p>Are these the settings that are displayed with <a href="http://marlinfw.org/docs/gcode/M503.html" rel="nofollow noreferrer"><code>M503</code></a>, or are there hidden settings?</p>
|
<p>What Marlin does when <code>M502</code> is called is defined in the <a href="https://github.com/MarlinFirmware/Marlin/blob/bugfix-2.0.x/Marlin/src/module/configuration_store.cpp" rel="noreferrer"><code>configuration_store.cpp</code></a> file.</p>
<p>It resets:</p>
<ul>
<li>Max acceleration</li>
<li>Steps per mm</li>
<li>Max feedrate / speed</li>
<li>Min segment time</li>
<li>Acceleration (Normal, Retract, Travel)</li>
<li>Min feedrate</li>
<li>Min travel feedrate</li>
<li>Jerk settings</li>
<li>Junction deviation</li>
<li>Home and SCARA offsets</li>
<li>Hot end offsets</li>
<li>Filament runout sensor distance</li>
<li>Tool change parameters (Swap length, extra prime, prime speed, retract speed, Park
positions, Z raise)</li>
<li>Backlash correction distances and smoothing parameters</li>
<li>Extensible UI</li>
<li>Magnetic parking extruder settings</li>
<li>ABL (fade height, stored points, nozzle offset, servo angles)</li>
<li>Delta calibration data (Height, Endstop offset, radius, rod length, segments per
second, calibration radius, trim angle)</li>
<li>Dual / triple endstop adjustments</li>
<li>Preheat parameters</li>
<li>PID parameters</li>
<li>self-defined thermistors</li>
<li>LCD contrast</li>
<li>Power loss recovery</li>
<li>Firmware retraction</li>
<li>Filament diameter (for volumetric extrusion)</li>
<li>Endstops (if disabled)</li>
<li>Stepper drivers</li>
<li>Linear advance parameters</li>
<li>Motor currents (digipot)</li>
<li>CNC coordinate system (if selected)</li>
<li>Skew correction parameters</li>
<li>Advance pause filament change lengths</li>
</ul>
|
<p>Technically, the description as "factory settings" is misleading, as the settings called up are much better described as "firmware-defined settings". But since firmware upgrades usually are rare and far between, these settings can be considered "factory" for the usual user, even as we always urge users to test if their firmware <a href="https://3dprinting.stackexchange.com/questions/8466/what-is-thermal-runaway-protection">has TRP enabled</a> and upgrade if not so. </p>
<p>Depending on the firmware, this usually means the settings described in <a href="https://3dprinting.stackexchange.com/a/11439/8884">this answer</a>, but it could also be more narrow or extend to different and custom settings inside the firmware. Marlin, when it uses <a href="https://sebastian.expert/enable-disable-eeprom-commands-marlin/" rel="nofollow noreferrer">EEPROM_SETTINGS</a>, uses <code>Configuration.h</code> and the additional <code>Configuration_adv.h</code> to define what the factory settings are.</p>
<p>For example in <a href="https://3dprinting.stackexchange.com/q/11410/8884">this question</a> the firmware defined the additional settings in <code>Configuration_adv.h</code>. Installing firmware does not by itself alter the EEPROM, so these settings needed to be <em>seeded into SRAM</em> via <a href="http://marlinfw.org/docs/gcode/M500.html" rel="nofollow noreferrer"><code>M502</code></a> and then <em>saved into EEPROM</em> via <a href="http://marlinfw.org/docs/gcode/M500.html" rel="nofollow noreferrer"><code>M500</code></a>. </p>
<p>The remaining commands in the <code>M50X</code> series are obviously <a href="http://marlinfw.org/docs/gcode/M500.html" rel="nofollow noreferrer"><code>M501</code></a> and <a href="http://marlinfw.org/docs/gcode/M500.html" rel="nofollow noreferrer"><code>M503</code></a>. <code>M501</code> overwrites the SRAM settings with those from the EEPROM, useful if you toy with the SRAM settings to troubleshoot or play with offsets in a somewhat safe manner. <code>M503</code> in turn reports all settings currently in the SRAM, which can be changed during running. Most of these settings can be stored into the EEPROM, if EEPROM_SETTINGS is enabled, but they don't necessarily have to come from the EEPROM at the moment, as they can be altered due to a lot of reasons. Your G-code to print could call for example <code>G20</code> and <code>M149 K</code> just to mess with you by swapping to Inches and Kelvin, but that would be easily fixable by recalling <code>M501</code>. This following <code>M503</code> output was given as an example by <a href="https://sebastian.expert/marlin-g-codes-m500-m501-m502-m503/" rel="nofollow noreferrer">Sebastian.expert</a>:</p>
<pre><code>G21 ; Units in mm
M149 C ; Units in Celsius
Filament settings: Disabled
M200 D1.75
M200 D0
Steps per unit:
M92 X100.00 Y100.00 Z398.70 E100.00
Maximum feedrates (units/s):
M203 X400.00 Y400.00 Z8.00 E50.00
Maximum Acceleration (units/s2):
M201 X1000 Y1000 Z100 E10000
Acceleration (units/s2): P R T
M204 P400.00 R1000.00 T1000.00
Advanced: S T B X Z E
M205 S0.00 T0.00 B20000 X10.00 Y10.00 Z0.30 E5.00
Home offset:
M206 X0.00 Y0.00 Z0.00
Auto Bed Leveling:
M420 S1 Z0.00
Material heatup parameters:
M145 S0 H195 B55 F0
M145 S1 H205 B60 F0
PID settings:
M301 P52.25 I5.60 D122.00
Z-Probe Offset (mm):
M851 Z-2.41
</code></pre>
<p>Note that with a <a href="https://3dprinting.stackexchange.com/questions/10573/what-is-a-printer-console-terminal">console or terminal</a>, you could sometimes alter EEPROM settings directly via a UI. Among the terminals I know to have this ability is Repetier Host.</p>
| 1,519
|
<p>I once worked with an architect who banned the use of SQL views. His main reason was that views made it too easy for a thoughtless coder to needlessly involve joined tables which, if that coder tried harder, could be avoided altogether. Implicitly he was encouraging code reuse via copy-and-paste instead of encapsulation in views.</p>
<p>The database had nearly 600 tables and was highly normalised, so most of the useful SQL was necessarily verbose.</p>
<p>Several years later I can see at least one bad outcome from the ban - we have many hundreds of dense, lengthy stored procs that verge on unmaintainable.</p>
<p>In hindsight I would say it was a bad decision, but what are your experiences with SQL views? Have you found them bad for performance? Any other thoughts on when they are or are not appropriate?</p>
|
<p>There are some very good uses for views; I have used them a lot for tuning and for exposing less normalized sets of information, or for UNION-ing results from multiple selects into a single result set.</p>
<p>Obviously any programming tool can be used incorrectly, but I can't think of any times in my experience where a poorly tuned view has caused any kind of drawbacks from a performance standpoint, and the value they can provide by providing explicitly tuned selects and avoiding duplication of complex SQL code can be significant.</p>
<p>Incidentally, I have never been a fan of architectural "rules" that are based on keeping developers from hurting themselves. These rules often have unintended side-effects -- the last place I worked didn't allow using NULLs in the database, because developers might forget to check for null. This ended up forcing us to work around "1/1/1900" dates and integers defaulted to "0" in all the software built against the databases, and introducing a litany of bugs caused by devs working around places where NULL was the appropriate value.</p>
|
<p>Let's see if I can come up with a lame analogy ...</p>
<p>"I don't need a phillips screwdriver. I carry a flat head and a grinder!"</p>
<p>Dismissing views out of hand will cause pain long term. For one, it's easier to debug and modify a single view definition than it is to ship modified code.</p>
| 6,417
|
<p>Ok I’m so new at Blender. Half the time it’s knowing what question you have to ask to get the answer you need...</p>
<p>I want to make a 3D printable warrior type action figure but I only have one very detailed frontal view. Can I input that image into somewhere and it gives me a 3D model to start from? Like will it generate a 360 view? I have no idea where to start.</p>
|
<p>In short, no. A 2D image has insufficient information to determine a 3D form.</p>
<p>If you want to do this yourself, what you could do is start with the 2D outline in a program like Blender (as 0scar mentioned in a comment), extrude it to make a thin "cardboard cutout", then begin shaping it into three dimensions from there. Imagine it like cutting a slab of Play-doh with a cookie cutter matching your 2D outline, then using the picture and your imagination as a guide to form it into 3D. I'm not sure whether something like that makes any more sense than just starting from scratch modeling it.</p>
<p>Alternatively, nowadays there <em>might</em> be some "AI" models to produce a reasonable guess at what 3D structure you want, with the knowledge that it's supposed to be a person, for a 2D image you provide. I'm not sure if there's anything yet of usable quality, but it's something you could look for.</p>
|
<p>If you only have a 2D photo then, as others have already said, it's not really possible. However, if you have the action figure itself you could try a technique called photogrammetry, where you use your phone to take multiple images of the object and then use software to build up a model. Search YouTube for guides on photogrammetry or 3D scanning with your phone. I've never done it but it's certainly something I'd like to try!</p>
| 2,008
|
<p>I'm trying to write some PHP to upload a file to a folder on my webserver. Here's what I have:</p>
<pre><code><?php
if ( !empty($_FILES['file']['tmp_name']) ) {
move_uploaded_file($_FILES['file']['tmp_name'], './' . $_FILES['file']['name']);
header('Location: http://www.mywebsite.com/dump/');
exit;
}
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html>
<head>
<title>Dump Upload</title>
</head>
<body>
<h1>Upload a File</h1>
<form action="upload.php" enctype="multipart/form-data" method="post">
<input type="hidden" name="MAX_FILE_SIZE" value="1000000000" />
Select the File:<br /><input type="file" name="file" /><br />
<input type="submit" value="Upload" />
</form>
</body>
</html>
</code></pre>
<p>I'm getting these errors:</p>
<blockquote>
<p>Warning: move_uploaded_file(./test.txt) [function.move-uploaded-file]: failed to open stream: Permission denied in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3</p>
<p>Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'C:\WINDOWS\Temp\phpA30E.tmp' to './test.txt' in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3</p>
<p>Warning: Cannot modify header information - headers already sent by (output started at E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php:3) in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 4</p>
</blockquote>
<p>PHP version 4.4.7
Running IIS on a Windows box. This particular file/folder has 777 permissions.</p>
<p>Any ideas?</p>
|
<p>As it's Windows, there is no real 777. If you're using <a href="http://fr2.php.net/manual/en/function.chmod.php" rel="noreferrer">chmod</a>, check the Windows-related comments.</p>
<p>Check that the IIS Account can access (read, write, modify) these two folders:</p>
<pre><code>E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\
C:\WINDOWS\Temp\
</code></pre>
|
<p>Create a folder named "image" with folder permission <code>777</code></p>
<pre><code><?php
move_uploaded_file($_FILES['file']['tmp_name'],"image/".$_FILES['file']['name']);
?>
</code></pre>
| 2,569
|
<p>I'm using the VS2008 installer (plus a custom Orca action) to create an installer for my .NET product.</p>
<p>I just recently found out that one of the third-party assemblies I was using is x86-specific (as it includes some native code); thus, x64 customers were getting crashes on startup with errors about the assembly not being appropriate for their platform.</p>
<p>I sent such a customer a copy of the x64 version of this third-party assembly, and told him to just copy it over the existing x86 one. It worked, sweet! So now I just need to make the installer do this for me.</p>
<p>This actually appears nontrivial :(. Ideally, I just want the installer (which would be x86, since that can run on both platforms) to include both the x86 and x64 versions of this third-party assembly, and install the appropriate one. In other words, I want a single installer that makes my users' lives easy.</p>
<p>I thought I had this worked out, using MSI conditional statements and all that. But apparently no... the VS2008 setup projects won't compile unless you specify "x86" or "x64." If you specify x86, it gives a compilation error saying it can't include the x64 assembly. If you specify x64, then the result cannot be executed on an x86 computer. Damn!</p>
<p>Someone must have had this problem before. Unfortunately Google is unhelpful, so I turn to StackOverflow!</p>
|
<p>When I looked into this a year ago, I came to the conclusion that it was not possible. It's worth noting that many Microsoft-supplied MSI files come in separate x86 and x64 flavors -- and presumably, they'd only deliver a single file if that were possible.</p>
|
<p>I have had some success by using two features to selectively include the two sets of files (in separate components of course, with their individualized file identifiers!). </p>
<p>The installation must be marked as x32 to install on both x32 and x64. It will always install to the x32 directories and will largely be treated as a 32 bit application running under WOW.</p>
<p>Using the VersionNT64 property you can determine if a Windows 64 installation is present, and conditionally install the files you need.</p>
<p>I'm not sure how much of this functionality is available in VS2008 install projects - I am using some other commercial tools to set up the installer this way. Of course you can use Orca to do it too, though it was non-trivial to get it to work with commercial tools and Orca is much harder.</p>
| 8,618
|
<p>How can I get the MAC Address using only the compact framework?</p>
|
<p>1.4 of the OpenNETCF code gets the information from the following P/Invoke call:</p>
<pre><code> [DllImport ("iphlpapi.dll", SetLastError=true)]
public static extern int GetAdaptersInfo( byte[] ip, ref int size );
</code></pre>
<p>The physical address (returned as the MAC address) is, I think, around index 400-408 of the byte array after the call. So you can just use that directly if you don't want to use OpenNETCF (why though? OpenNETCF rocks more than Stonehenge!)</p>
<p>Wonderful P/Invoke.net gives a full example <a href="http://www.pinvoke.net/default.aspx/iphlpapi/GetAdaptersInfo.html" rel="nofollow noreferrer">here</a>.</p>
<p>Oh and to properly answer your question:</p>
<blockquote>
<p>only using the Compact Framework</p>
</blockquote>
<p>You can't. That's life with CF; if you want some fun, try sending data with a socket synchronously with a timeout. :D</p>
|
<p>Add a reference to System.Management.dll and use something like:</p>
<pre><code>Dim mc As System.Management.ManagementClass
Dim mo As ManagementObject
mc = New ManagementClass("Win32_NetworkAdapterConfiguration")
Dim moc As ManagementObjectCollection = mc.GetInstances()
For Each mo In moc
If mo.Item("IPEnabled") = True Then
ListBox1.Items.Add("MAC address " & mo.Item("MacAddress").ToString())
End If
Next
</code></pre>
| 6,440
|
<p>Do you know if it's possible to build an application for the LinkedIn platform?</p>
|
<p>While LinkedIn has promised a public API for a very long time now, they have yet to deliver. </p>
<p>No, there is no <strong>public</strong> LinkedIn API yet. </p>
<p>IMO, their widgets (which there are only two of at the moment, which are very limited) don't count. </p>
<p>They say that they are open to being contacted with specific uses for their API and they may give access to parts as needed - but that is if they accept your ideas for integration. They have been very picky with this - and have not accepted my attempts to integrate with LinkedIn yet, they tell me I have to wait with everyone else, apparently my applications are not "high-profile" enough. </p>
<p>Sure, you'll find many Google results talking about their "promised" API, but they are empty promises and won't be of much help. </p>
|
<p>Yes, Linkedin has an API:</p>
<ul>
<li><a href="http://www.programmableweb.com/api/linkedin" rel="nofollow noreferrer">http://www.programmableweb.com/api/linkedin</a></li>
<li><a href="http://blog.linkedin.com/blog/2007/12/the-intelligent.html" rel="nofollow noreferrer">http://blog.linkedin.com/blog/2007/12/the-intelligent.html</a></li>
</ul>
<p>So you could build an application that uses it.</p>
<p><strong>Update:</strong> (<a href="http://blog.linkedin.com/blog/2007/12/the-intelligent.html" rel="nofollow noreferrer">from second link</a>)</p>
<blockquote>
<p>We’ll be phasing all of this in over the coming months and to get involved with the Intelligent Application Platform either for APIs, widgets, or hosted applications, send us an e-mail to <strong><em>developers@linkedin.com</em></strong> telling us what you want to build and what you need to build it.</p>
</blockquote>
<p>Since there are published Mashups using LinkedIn I would assume that means you can use the API even if the documentation isn't readily available.</p>
<p>As a tip, in the future include links to what you found that didn't work, so we know not to give it to you again.</p>
<p>I poked around a bit more and I found some <a href="http://www.linkedin.com/static?key=developers_widgets&trk=hb_ft_widgets" rel="nofollow noreferrer">more on their widgets</a> which appears to be the main focus of their API.</p>
| 6,225
|
<p>I have heard there is a way to cheat single inheritance and implement multiple inheritance in Java. Does anyone know how to implement this(with out using interface)?</p>
<p>Just out of curiosity ;-)</p>
|
<p>Sure you can, but it's tricky and you should really consider if that's the way you want to go.<br>
The idea is to use scope-based inheritance coupled with type-based one. Which is type-talk for saying that for internal purposes, inner classes "inherit" methods and fields of the outer class. It's a bit like mixins, where the outer class is mixed-in to the inner class, but not as safe, as you can change the state of the outer class as well as use its methods.<br>
Gilad Bracha (one of the main java language designers) wrote a <a href="http://bracha.org/dyla.pdf" rel="nofollow noreferrer">paper</a> discussing that.
So, suppose you want to share some methods for internal use between some unrelated classes (e.g, for string manipulation), you can create sub classes of them as inner classes of a class that has all the needed methods, and the sub classes could use methods both from their super classes and from the outer class. </p>
<p>Anyway, it's tricky for complex classes, and you could get most of the functionality using static imports (from java 5 on). Great question for job interviews and pub quizzes, though ;-)</p>
|
<p>Yes, you can say that it's a trick, and it is very interesting: you cannot inherit multiple classes into a single class, but it is possible to implement multiple interfaces in a class, like</p>
<pre><code>public class parents implements first, second{
}
</code></pre>
<p>but remember, you have to implement the methods declared in the interfaces.</p>
| 9,596
|
<p>Given a table of votes (users vote for a choice, and must supply an email address):</p>
<pre><code>votes
--
id: int
choice: int
timestamp: timestamp
ip: varchar
email: varchar
</code></pre>
<p>What's the best way to count "unique" votes (a user being a unique combination of email + ip) given the constraint they may only vote <em>twice</em> per hour?</p>
<p>It's possible to count the number of hours between first and last vote and determine the maximum number of allowed votes for that timeframe, but that allows users to compress all their votes into say, a single hour-long window, and still have them counted.</p>
<p>I realize anonymous online voting is inherently flawed, but I'm not sure how to do this with SQL. Should I be using an external script or whatever instead? (For each choice, for each email+ip pair, get a vote, calculate the next +1h timestamp, count/discard/tally votes, move on to the next hour, etc...)</p>
|
<p>Something like </p>
<pre><code>select email, ip, count(choice)
from votes
group by email, ip, dateadd(hour, datediff(hour, 0, timestamp), 0)
</code></pre>
<p>If I understand correctly</p>
|
<p>You could rewrite your insert statement to only allow votes to be inserted based on your constraints (here, at most two votes per rolling hour):</p>
<pre><code>Insert Into Votes
    (Choice, Timestamp, IP, Email)
Select
    @Choice, @Timestamp, @IP, @Email
Where
    (Select Count(*) From Votes Where
        IP = @IP
        and Email = @Email
        and Timestamp > DateAdd(hh, -1, GetDate())) < 2
</code></pre>
<p>You didn't mention which SQL language you were using so this is in SQL Server 2005.</p>
| 9,688
|
<p>Typically in a large network a computer needs to operate behind an authenticated proxy - any connections to the outside world require a username/password which is often the password a user uses to log into email, workstation etc.</p>
<p>This means having to put the network password in the <code>apt.conf</code> file as well as typically the <code>http_proxy, ftp_proxy</code> and <code>https_proxy</code> environment variables defined in <code>~/.profile</code></p>
<p>I realise that with <code>apt.conf</code> you could set <code>chmod 600</code> (which it isn't by default on Ubuntu/Debian!) but on our system there are people who need root privileges.</p>
<p>I also realise that it is technically impossible to secure a password from someone who has root access, however I was wondering if there was a way of <i>obscuring</i> the password to prevent accidental discovery. Windows operates with users as admins yet somehow stores network passwords (probably stored deep in the registry obscured in some way) so that in typical use you won't stumble across it in plain text</p>
<p>I only ask since the other day, I entirely by accident discovered somebody elses password in this way when comparing configuration files across systems.</p>
<p>@monjardin - Public key authentication is not an alternative on this network I'm afraid. Plus I doubt it is supported amongst the majority of commandline tools.</p>
<p>@Neall - I don't mind the other users having web access, they can use my credentials to access the web, I just don't want them to happen across my password in plain text.</p>
|
<p>With the following approach you never have to save your proxy password in plain text. You just have to type in a password interactively as soon as you need http/https/ftp access:</p>
<ul>
<li>Use openssl to encrypt your plain text proxy password into a file, with e.g. AES256 encryption:</li>
</ul>
<blockquote>
<p>openssl enc -aes-256-cbc -in pw.txt -out pw.bin</p>
</blockquote>
<ul>
<li>Use a (different) password for protecting the encoded file</li>
<li>Remove plain text pw.txt</li>
<li>Create an alias in e.g. ~/.alias to set your http_proxy/https_proxy/ftp_proxy environment variables (set appropriate values for $USER/proxy/$PORT)</li>
</ul>
<blockquote>
<p>alias myproxy='PW=`openssl aes-256-cbc -d -in pw.bin`; PROXY="http://$USER:$PW@proxy:$PORT"; export http_proxy=$PROXY; export https_proxy=$PROXY; export ftp_proxy=$PROXY'</p>
</blockquote>
<ul>
<li>you should source this file into your normal shell environment (on some systems this is done automatically)</li>
<li>type 'myproxy' and enter your openssl password you used for encrypting the file</li>
<li>done.</li>
</ul>
<p><strong>Note:</strong> the password is available (and readable) inside the users environment for the duration of the shell session. If you want to clean it from the environment after usage you can use another alias:</p>
<blockquote>
<p>alias clearproxy='export http_proxy=; export https_proxy=; export
ftp_proxy='</p>
</blockquote>
|
<p>Is public key authentication a valid alternative for you?</p>
| 3,945
|
<p>I have a Data Access Object, TransactionDao. When you call TransactionDao.Save(transaction) I would like for it to set a transaction.IsSaved=true flag (this is a simplification; the actual thing I'm trying to do is not quite so banal). So when mocking my TransactionDao with RhinoMocks, how can I indicate that it should transform its input?</p>
<p>Ideally I would like to write something like this:</p>
<pre><code>Expect.Call(delegate {dao.Save(transaction);}).Override(x => x.IsSaved=true);
</code></pre>
<p>Does anyone know how to do this?</p>
<hr>
<p>Though I got a hint of how to do it from the answer specified below, the actual type signature is off. Because of what Mark Ingram posted, it seems the best answer, though nobody's explicitly said it, is to do this:</p>
<pre><code>public delegate void FakeSave(Transaction t);
...
Expect.Call(delegate {dao.Save(t); }).Do( new FakeSave(delegate(Transaction t2) { t.IsSaved = true; }));
</code></pre>
|
<p>Gorge, </p>
<p>The simplest solution, which I found, applied to your question is the following:</p>
<pre><code>Expect.Call(() => dao.Save(transaction))
.Do(new Action<Transaction>(x => x.IsSaved = true));
</code></pre>
<p>So you don't need to create a special delegate or anything else. Just use Action which is in standard .NET 3.5 libraries.</p>
<p>Hope this helps.
Frantisek</p>
|
<p>You should mock the transaction and make it return true for IsSaved, if you can mock the transaction, of course.</p>
<pre><code>ITransaction transaction = _mocker.DynamicMock<ITransaction>();
Expect.Call(transaction.IsSaved).Return(true);
_mocker.ReplayAll();
dao.Save(transaction);
</code></pre>
| 8,396
|
<p>I'm running Repetier Host v1.6.1 with Repetier Firmware v0.92.9. My computer is running Windows 7 Pro SP1, 64-bit.</p>
<p>If I set a print going via USB then switch to another user (note: I do not log out), then the printer's display shows that the command buffer drops from 16 to 0 until it stops printing altogether. If I switch back to the user that is running Repetier Host then the buffer fills up again and the print job resumes.</p>
<p>Before I updated Repetier Host this didn't happen, I could leave it running while I switched users and the job would run just fine. I'm not sure why this behaviour has changed, but is there any way to get it to run properly under a background user?</p>
|
<p>Is it possible that in updating Repetier you inadvertently installed it for a single user rather than for everyone? If so, that might account for its stopping when the user is changed. </p>
|
<p>I believe what happens here is that Windows suspends the process running the print job, either due to the program not being in focus, because you switch user, or both.</p>
<p>You could try to <em>increase the priority of the print process in task manager</em>, and see if that helps.</p>
<p><strong>In Windows 7:</strong></p>
<ol>
<li>Open Task Manager</li>
<li>In the <em>Applications</em> tab, right click the application, and select <em>Go To Process</em>, which will take you to its background process in the <em>Process</em> tab.</li>
<li>Right click the process, go to <em>Set Priority</em> and select some priority higher than the current level.</li>
</ol>
<p><strong>In Windows 10:</strong></p>
<ol>
<li>Open Task Manager</li>
<li>In the <em>Processes</em> tab, right click the application, and select <em>Go To Details</em>, which will take you to its background process in the <em>Details</em> tab.</li>
<li>Right click the process, go to <em>Set Priority</em> and select some priority higher than the current level.</li>
</ol>
<p><em>PS: <a href="https://stackoverflow.com/questions/1663993/what-is-the-realtime-process-priority-setting-for">Avoid setting the priority to Realtime</a>, as that effectively will give the process full control of your computer's resources, which could kneel your computer if the program is poorly written.</em></p>
| 297
|
<p>I have a <a href="http://www.samurize.com/modules/news/" rel="noreferrer">Samurize</a> config that shows a CPU usage graph similar to Task manager. </p>
<p>How do I also display the name of the process with the current highest CPU usage percentage? </p>
<p>I would like this to be updated, at most, once per second. Samurize can call a command line tool and display its output on screen, so this could also be an option.</p>
<hr>
<p>Further clarification: </p>
<p>I have investigated writing my own command line c# .NET application to enumerate the array returned from System.Diagnostics.Process.GetProcesses(), but the Process instance class does not seem to include a CPU percentage property. </p>
<p>Can I calculate this in some way?</p>
|
<p>What you want to get is the instant CPU usage (kind of)...</p>
<p>Actually, the instant CPU usage for a process does not exist. Instead you have to take two measurements and calculate the average CPU usage; the formula is quite simple:</p>
<blockquote>
<p>AvgCpuUsed = [TotalCPUTime(process,time2) - TotalCPUTime(process,time1)] / [time2-time1]</p>
</blockquote>
<p>The smaller the difference between Time2 and Time1 is, the more "instant" your measurement will be. Windows Task Manager calculates the CPU use with an interval of one second. I've found that is more than enough, and you might even consider doing it in 5-second intervals, because the act of measuring itself takes up CPU cycles...</p>
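<p>To make the formula concrete, here is a tiny worked example with made-up sample values (shown in Java; the numbers are illustrative only, not from a real process):</p>

```java
public class CpuFormula {
    public static void main(String[] args) {
        // Made-up samples: total CPU time the process has consumed, in ms
        double totalCpuAtTime1 = 1200.0; // TotalCPUTime(process, time1)
        double totalCpuAtTime2 = 1450.0; // TotalCPUTime(process, time2)

        // Wall-clock timestamps of the two measurements, in ms
        double time1 = 10000.0;
        double time2 = 11000.0; // one second later, like Task Manager

        // AvgCpuUsed = [TotalCPUTime(time2) - TotalCPUTime(time1)] / [time2 - time1]
        double avgCpuUsed = (totalCpuAtTime2 - totalCpuAtTime1) / (time2 - time1);

        // 250 ms of CPU consumed in a 1000 ms window: 25% of one core
        System.out.println((avgCpuUsed * 100) + "%"); // prints 25.0%
    }
}
```

<p>On a multi-core machine you would further divide by the core count, which is what the code below does with the processor count.</p>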
<p>So, first, to get the average CPU time</p>
<pre><code> using System.Diagnostics;
float GetAverageCPULoad(int procID, DateTime from, DateTime to)
{
// For the current process
//Process proc = Process.GetCurrentProcess();
// Or for any other process given its id
Process proc = Process.GetProcessById(procID);
System.TimeSpan lifeInterval = (to - from);
// Get the CPU use
float CPULoad = (proc.TotalProcessorTime.TotalMilliseconds / lifeInterval.TotalMilliseconds) * 100;
// You need to take the number of present cores into account
return CPULoad / System.Environment.ProcessorCount;
}
</code></pre>
<p>now, for the "instant" CPU load you'll need an specialized class:</p>
<pre><code> class ProcLoad
 {
 // Last wall-clock time you checked each process
 public Dictionary<int, DateTime> lastCheckedDict = new Dictionary<int, DateTime>();
 // CPU time each process had consumed at that last check
 public Dictionary<int, TimeSpan> lastCpuTimeDict = new Dictionary<int, TimeSpan>();

 public float GetCPULoad(int procID)
 {
 Process proc = Process.GetProcessById(procID);
 DateTime now = DateTime.Now;
 TimeSpan cpuNow = proc.TotalProcessorTime;

 float load = 0;
 if (lastCheckedDict.ContainsKey(procID))
 {
 // Apply the formula to the deltas since the last check
 double wallMs = (now - lastCheckedDict[procID]).TotalMilliseconds;
 double cpuMs = (cpuNow - lastCpuTimeDict[procID]).TotalMilliseconds;
 load = (float)(cpuMs / wallMs) * 100 / System.Environment.ProcessorCount;
 }
 lastCheckedDict[procID] = now;
 lastCpuTimeDict[procID] = cpuNow;
 return load;
 }
 }
</code></pre>
<p>You should call that class from a timer (or whatever interval method you like) for <strong>each process you want to monitor</strong>, if you want all the processes just use the <a href="http://msdn.microsoft.com/en-us/library/1f3ys1f9.aspx" rel="nofollow noreferrer">Process.GetProcesses</a> static method</p>
|
<p>With PowerShell:</p>
<pre><code>Get-Process | Sort-Object CPU -desc | Select-Object -first 3 | Format-Table CPU,ProcessName -hidetableheader
</code></pre>
<p>returns somewhat like:</p>
<pre><code> 16.8641632 System
12.548072 csrss
11.9892168 powershell
</code></pre>
| 7,488
|
<p>Which compiles to faster code: "ans = n * 3" or "ans = n+(n*2)"?</p>
<p>Assuming that n is either an int or a long, and that it is running on a modern Win32 Intel box.</p>
<p>Would this be different if there was some dereferencing involved, that is, which of these would be faster?</p>
<pre>
long a;
long *pn;
long ans;
...
*pn = some_number;
ans = *pn * 3;
</pre>
<p>Or</p>
<pre>
ans = *pn+(*pn*2);
</pre>
<p>Or, is it something one need not worry about as optimizing compilers are likely to account for this in any case?</p>
|
<p>IMO such micro-optimization is not worth it unless you work with some exotic compiler. I would put readability first.</p>
|
<p>Compilers are good at optimising code such as yours. Any modern compiler would produce the same code for both cases and additionally replace <code>* 2</code> by a left shift.</p>
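<p>For what it's worth, the two expressions are algebraically identical for every fixed-width integer value of n (overflow wraps the same way in both forms), which is exactly why the compiler is free to emit whichever instruction sequence it considers cheapest. A quick sketch of the equivalence, written in Java here only because the identity is language-independent:</p>

```java
public class TripleCheck {
    static int timesThree(int n)   { return n * 3; }
    static int addAndDouble(int n) { return n + (n * 2); }

    public static void main(String[] args) {
        // Identical for every input, including values that overflow,
        // because fixed-width two's-complement arithmetic wraps consistently.
        int[] samples = { 0, 1, -1, 7, Integer.MAX_VALUE, Integer.MIN_VALUE };
        for (int n : samples) {
            if (timesThree(n) != addAndDouble(n)) {
                throw new AssertionError("mismatch at n=" + n);
            }
        }
        System.out.println("both forms agree");
    }
}
```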
| 7,730
|
<p>I have taken over a large code base and would like to get an overview how and where certain classes and their methods are used.</p>
<p>Is there any good tool that can somehow visualize the dependencies and draw a nice call tree or something similar?</p>
<p>The code is in C++ in Visual Studio if that helps narrow down any selection.</p>
|
<p>Here are a few options:</p>
<ul>
<li><a href="http://www.codedrawer.com/index.html" rel="nofollow noreferrer">CodeDrawer</a></li>
<li><a href="http://www.westernwares.com/" rel="nofollow noreferrer">CC-RIDER</a></li>
<li><a href="http://www.doxygen.nl/index.html" rel="nofollow noreferrer">Doxygen</a></li>
</ul>
<p>The last one, doxygen, is more of an automatic documentation tool, but it is capable of generating dependency graphs and inheritance diagrams. It's also licensed under the GPL, unlike the first two which are not free.</p>
|
<p>In Java I would start with JDepend. In .NET, with NDepend. Don't know about C++.</p>
| 5,630
|
<p>I have to do some JavaScript in the future, so it is time to update my toolbox. Right now I use Firefox with some addons:</p>
<ul>
<li>JavaScript Shell from <a href="https://www.squarefree.com/bookmarklets/webdevel.html" rel="nofollow noreferrer">https://www.squarefree.com/bookmarklets/webdevel.html</a></li>
<li>Firefox Dom Inspector</li>
<li>Firebug</li>
<li>Greasemonkey</li>
<li>Stylish</li>
</ul>
<p>I plan to use <a href="http://www.hacksrus.com/~ginda/venkman/" rel="nofollow noreferrer">Venkman Javascript debugger</a> as well as <a href="https://github.com/pivotal/jsunit" rel="nofollow noreferrer">jsunit</a> and <a href="http://jslint.com/" rel="nofollow noreferrer">js-lint</a>.</p>
<p>For programming I'm stick with vim.</p>
<p>So what other tools do you use when developing JavaScript?</p>
|
<p>I use both Firefox and IE for Web Development and a few add-ons in each:</p>
<p><strong>Firefox:</strong></p>
<ul>
<li><a href="https://addons.mozilla.org/en-US/firefox/addon/1843" rel="nofollow noreferrer">Firebug</a></li>
<li><a href="https://addons.mozilla.org/en-US/firefox/addon/60" rel="nofollow noreferrer">Web Developer Toolbar</a></li>
</ul>
<p><strong>Internet Explorer:</strong></p>
<ul>
<li><a href="http://www.microsoft.com/en-us/download/details.aspx?id=18359" rel="nofollow noreferrer">IE Developer Toolbar</a></li>
<li><a href="http://www.fiddlertool.com/" rel="nofollow noreferrer">Fiddler</a></li>
<li>Visual Studio for JS Debugging</li>
</ul>
|
<p>A couple more::</p>
<ol>
<li>IE Explorer Toolbar </li>
<li>Firefox Developer Toolbar </li>
</ol>
| 5,470
|
<p>Does anyone know how to setup <a href="http://www.selenic.com/mercurial/wiki/index.cgi/Mercurial" rel="noreferrer">Mercurial</a> to use <a href="http://www.perforce.com/perforce/products/merge.html" rel="noreferrer">p4merge</a> as the merge/diff tool on OS X 10.5?</p>
|
<p><strong>This will work for merging:</strong></p>
<p>Place this into your <code>~/.hgrc</code> (or, optionally, your <code>Mercurial.ini</code> on Windows):</p>
<pre><code>[merge-tools]
p4.priority = 100
p4.premerge = True # change this to False if you don't trust hg's internal merge
p4.executable = /Applications/p4merge.app/Contents/MacOS/p4merge
p4.gui = True
p4.args = $base $local $other $output
</code></pre>
<p>Requires Mercurial 1.0 or newer. Clearly you'll need to update the path to that executable to reflect wherever you've got p4merge installed.</p>
<hr>
<p><strong>You can't change what <code>hg diff</code> uses</strong>; but you <em>can</em> use the <code>extdiff</code> extension to create new diff commands that use the display you want. </p>
<p>So <code>hg pdiff</code> could run p4 merge, etc.</p>
|
<p>I'm guessing there's a CLI tool for p4merge (which I know nothing about).</p>
<p>I wrote a blog post about using Changes.app, and some other GUI tools with Mercurial: <a href="http://schinckel.net/2008/04/26/mercurial-with-os-x-gui-tools/" rel="nofollow noreferrer">Using Mercurial with GUI Tools.</a></p>
<p>Basically, you need to know the calling expectations of the CLI tool that loads up the diff tool. IE, how to make it load data from a particular file, and how to make it wait for exit. There should be enough info on the post to give you some ideas.</p>
| 6,459
|
<p>Is there a way to have the MSDN documentation on a USB key?
Either the web version or the MSDN Library program.
I've been setting up my USB key with PortableApps stuff.</p>
|
<p>I think when you do step 2 and install the documentation, you can just direct it to the USB key's drive letter. Easy peasy.</p>
|
<p><a href="https://windows-cdn.softpedia.com/screenshots/MSDN-to-USB_2.png" rel="nofollow noreferrer"><img src="https://windows-cdn.softpedia.com/screenshots/MSDN-to-USB_2.png" alt="enter image description here" /></a></p>
<p>@Oleg You can use <a href="https://marketplace.visualstudio.com/items?itemName=Moataz99.MSDNtoUSB" rel="nofollow noreferrer">MSDN to USB</a>, it works offline with any Visual Studio 2010|2015|2017|2019 IDE product.</p>
<p>Apparently, you've to download the docs 1st, then use this tool to "<strong>Backup MSDN</strong>" to your USB drive, later use the same tool to "<strong>Locate MSDN</strong>".</p>
| 5,806
|
<p>As a LAMP developer considering moving to a .Net IIS platform, one of my concerns is the loss of productivity due to lack of shell... Has anyone else had this experience? Is there possibly a Linux shell equivalent for Windows?</p>
|
<p>Depending on what version of IIS you're considering, I would second lbrandy's recommendation to check out <a href="http://www.microsoft.com/windowsserver2003/technologies/management/powershell/default.mspx" rel="noreferrer">PowerShell</a>. Microsoft is working on a PowerShell provider for IIS (specifically version 7). There is a decent post about this at <a href="http://blogs.iis.net/thomad/archive/2008/04/14/iis-7-0-powershell-provider-tech-preview-1.aspx" rel="noreferrer">http://blogs.iis.net/thomad/archive/2008/04/14/iis-7-0-powershell-provider-tech-preview-1.aspx</a>. The upcoming version of PowerShell will also <a href="http://www.microsoft.com/technet/scriptcenter/topics/winpsh/remote.mspx" rel="noreferrer">add remoting capabilities</a> so that you can remotely manage machines. PowerShell is quite different from *NIX shells, though, so that is something to consider.</p>
<p>Hope this helps.</p>
|
<p>You should make your choice of server platform based on the environment as a whole, and that includes the admin/management interfaces supplied.</p>
<p>I'm afraid that if you don't like the way Windows implements management of IIS, then that's too bad. Having said that, a bit of delving around in the WMI interfaces will generally yield a solution that you should find usable. I used to do quite a bit of WMI scripting (mostly via PowerShell) in order to have a reliable environment rebuild capability.</p>
| 2,390
|
<p>Ok, I did it, I ordered myself an Ender-3, a genuine 24V e3D hotend, inductive sensor and some better tubing/clamps to cope with the problem the CR10/Ender line has occasionally.</p>
<p>But now I need to fix up my Cura for the machine coming in. The start is the CR10, and fixing the dimensions is easy.</p>
<p>But now comes the tricky part: Start and End G-code. For my TronXY I never bothered with changing it away from the "basic" settings that a "custom 3D printer" on Marlin gave, but this time I want to know what I type in there. The basic code, after I dragged out the G-code handbook from the RepRap wiki to add the missing comments is:</p>
<pre><code>G28 ;Home
G1 Z15.0 F6000 ;Move the Gantry up 15mm going fast
;Prime the extruder
G92 E0 ; reset extrusion distance
G1 F200 E3 ; extrude 3mm of feed stock
G92 E0 ; reset extrusion distance
</code></pre>
<p>The <a href="https://reprap.org/wiki/Start_GCode_routines" rel="noreferrer">RepRap Wiki</a> suggests that there could be made so much more from this. </p>
<p>I would love to <em>swipe</em> the nozzle before starting to print, making sure that the curled up filament from this first extrusion doesn't get squished against the nozzle and make a bad first layer.</p>
<p><strong>What does an example (commented) G-code for swiping the nozzle look like?</strong></p>
|
<h2>The lazy way: Skirt/Brim</h2>
<p>With my TronXY X1 I learned pretty fast, that this first bit of extrusion on an unheated bed can totally mess up the first layer by being just in the way, as explained in the question.</p>
<p>To some degree, this behavior can be avoided by adding a skirt of a certain length. An equally good alternative, which also increases bed adhesion for tricky parts, is the brim. Both are set not via G-code but by the slicer. In Ultimaker Cura both are found under the Build Plate Adhesion setting as the adhesion type, as the following pictures show:</p>
<h3>Skirt: 2 lines, minimum length 250mm</h3>
<p><a href="https://i.stack.imgur.com/iN6IM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iN6IM.png" alt="Skirt: 2 lines, minimum length 250mm" /></a></p>
<h3>Brim: minimum length 250mm, 8mm width</h3>
<p><a href="https://i.stack.imgur.com/uL1Fn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uL1Fn.png" alt="Brim: minimum length 250mm, 8mm width" /></a></p>
<h2>The Prusa Priming-line</h2>
<p>Angus/MakersMuse introduced me to the Prusa Priming Line in one of his <a href="https://www.youtube.com/watch?v=6csbJ5965Bk" rel="nofollow noreferrer">tutorial videos</a>. For his Wanhao he used (for the video) just this start G-code script:</p>
<pre><code>G28
G1 Y-3 F500 ; Move out of print volume
G1 X60 E9 F500 ; start purge line
G1 X100 E12.5 F500 ; finish purge line
</code></pre>
<p>This resulted in a nice line like this:
<a href="https://i.stack.imgur.com/GAWQW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GAWQW.png" alt="A printed "Prusa Priming Line"" /></a></p>
<p>In Ultimaker Cura, this piece of code is added via the preferences of a machine. In Cura 3, you usually had to add the starting G-code yourself. Since at least version 4.2.1, all preset machines contain some variant of a priming line, usually at the left side.</p>
|
<h2>Blob & Cut</h2>
<p>A different approach is to move to a position out off the print bed, then extrude a long piece of filament and then move forward, cutting the filament at the edge of the print surface. A sample G-code that needs to be adjusted to your bed:</p>
<pre><code>G28
G1 Y-5 X5 Z0.1 F500 ; Move off the print surface, a bit above the 0
G92 E0 ; reset extrusion distance
G1 F200 E10 ; extrude 10mm of feed stock
G92 E0 ; reset extrusion distance
G1 Y2 F500 ; Move into the print platform
</code></pre>
| 957
|
<p>In podcast #15, Jeff mentioned he twittered about how to run a regular event in the background as if it was a normal function - unfortunately I can't seem to find that through twitter. Now I need to do a similar thing and are going to throw the question to the masses.</p>
<p>My current plan is that when the first user (probably me) enters the site, it starts a background thread that waits until the allotted time (hourly on the hour) and then kicks off the event, blocking the others (I am a Windows programmer by trade, so I think in terms of events and WaitOnMultipleObjects) until it completes.</p>
<p>How did Jeff do it in Asp.Net and is his method applicable to the Java web-app world?</p>
|
<p>I think developing a custom solution for running background tasks isn't always worth the effort, so I recommend using the <a href="http://www.quartz-scheduler.org/" rel="nofollow noreferrer">Quartz Scheduler</a> in Java.</p>
<p>In your situation (need to run background tasks in a web application) you could use the ServletContextListener included in the distribution to <a href="http://www.quartz-scheduler.org/api/2.2.1/org/quartz/ee/servlet/QuartzInitializerListener.html" rel="nofollow noreferrer">initialize the engine at the startup of your web container</a>.</p>
<p>After that you have a number of possibilities to start (trigger) your background tasks (jobs), e.g. you can use Calendars or cron-like expressions. In your situation most probably you should settle with <a href="http://www.quartz-scheduler.org/api/2.2.1/org/quartz/CronTrigger.html" rel="nofollow noreferrer">SimpleTrigger</a> that lets you run jobs in fixed, regular intervals.</p>
<p>The jobs themselves can be described easily too in Quartz, however you haven't provided any details about what you need to run, so I can't provide a suggestion in that area.</p>
|
<p>Here is how they do it on StackOverflow.com:</p>
<p><a href="https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/">https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/</a></p>
| 7,070
|
<p>I'm trying to perform a SQL query through a linked SSAS server. The initial query works fine:</p>
<pre><code>SELECT "Ugly OLAP name" as "Value"
FROM OpenQuery( OLAP, 'OLAP Query')
</code></pre>
<p>But if I try to add:</p>
<pre><code>WHERE "Value" > 0
</code></pre>
<p>I get an error</p>
<blockquote>
<p>Invalid column name 'Value'</p>
</blockquote>
<p>Any ideas what I might be doing wrong?</p>
<hr>
<p>So the problem was that the order in which the elements of the query are processed is different from the order in which they are written. According to this source:</p>
<p><a href="http://blogs.x2line.com/al/archive/2007/06/30/3187.aspx" rel="noreferrer">http://blogs.x2line.com/al/archive/2007/06/30/3187.aspx</a></p>
<p>The order of evaluation in MSSQL is:</p>
<ol>
<li>FROM</li>
<li>ON</li>
<li>JOIN</li>
<li>WHERE</li>
<li>GROUP BY</li>
<li>HAVING</li>
<li>SELECT</li>
<li>ORDER BY</li>
</ol>
<p>So the alias wasn't processed until after the WHERE and HAVING clauses.</p>
|
<p>This should work:</p>
<pre><code>SELECT A.Value
FROM (
SELECT "Ugly OLAP name" as "Value"
FROM OpenQuery( OLAP, 'OLAP Query')
) AS a
WHERE a.Value > 0
</code></pre>
<p>It's not that Value is a reserved word, the problem is that it's a column alias, not the column name. By making it an inline view, "Value" becomes the column name and can then be used in a where clause.</p>
|
<p>Oh, bummer. I just saw that you select AS FOO. Don't you need a HAVING clause in this case?</p>
<pre><code>SELECT whatever AS value FROM table HAVING value > 1;
</code></pre>
<p>I still would not use "value". But to be sure, look it up in your docs!</p>
| 6,846
|
<p>The topic says the most of it - what is the reason for the fact that static methods can't be declared in an interface?</p>
<pre><code>public interface ITest {
public static String test();
}
</code></pre>
<p>The code above gives me the following error (in Eclipse, at least): "Illegal modifier for the interface method ITest.test(); only public & abstract are permitted".</p>
|
<p>There are a few issues at play here. The first is the issue of declaring a static method without defining it. This is the difference between</p>
<pre><code>public interface Foo {
public static int bar();
}
</code></pre>
<p>and</p>
<pre><code>public interface Foo {
public static int bar() {
...
}
}
</code></pre>
<p>The first is impossible for the reasons that <a href="https://stackoverflow.com/questions/21817/why-cant-i-declare-static-methods-in-an-interface#21826">Espo</a> mentions: you don't know which implementing class is the correct definition.</p>
<p>Java <em>could</em> allow the latter; and in fact, starting in Java 8, it does!</p>
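<p>To illustrate the Java 8 change: the second form compiles as-is, and the method is invoked on the interface itself rather than through an instance, so the "which implementation?" ambiguity never arises:</p>

```java
interface Foo {
    // Legal since Java 8: a static method *with* a body
    static int bar() {
        return 42;
    }
}

public class StaticInterfaceDemo {
    public static void main(String[] args) {
        // Called via the interface name; implementing classes neither
        // inherit nor override it, so there is exactly one definition.
        System.out.println(Foo.bar()); // prints 42
    }
}
```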
|
<p>Perhaps a code example would help, I'm going to use C#, but you should be able to follow along.</p>
<p>Let's pretend we have an interface called IPayable:</p>
<pre><code>public interface IPayable
{
    void Pay(double amount);
}
</code></pre>
<p>Now, we have two concrete classes that implement this interface:</p>
<pre><code>public class BusinessAccount : IPayable
{
public void Pay(double amount)
{
//Logic
}
}
public class CustomerAccount : IPayable
{
public void Pay(double amount)
{
//Logic
}
}
</code></pre>
<p>Now, let's pretend we have a collection of various accounts; to do this we will use a generic list of the type IPayable:</p>
<pre><code>List<IPayable> accountsToPay = new List<IPayable>();
accountsToPay.Add(new CustomerAccount());
accountsToPay.Add(new BusinessAccount());
</code></pre>
<p>Now, we want to pay $50.00 to all those accounts:</p>
<pre><code>foreach (IPayable account in accountsToPay)
{
account.Pay(50.00);
}
</code></pre>
<p>So now you see how interfaces are incredibly useful. </p>
<p>They are used on instantiated objects only. Not on static classes. </p>
<p>If you had made Pay static, when looping through the IPayables in accountsToPay there would be no way to figure out if it should call Pay on BusinessAccount or CustomerAccount.</p>
| 4,188
|
<p>I'm working on a web application that needs to print silently -- that is, without user involvement. What's the best way to accomplish this? It doesn't look like it can be done strictly with JavaScript, nor Flash and/or AIR. The closest I've seen involves a Java applet.</p>
<p>I can understand why it would a Bad Idea for just any website to be able to do this. This specific instance is for an internal application, and it's perfectly acceptable if the user needs to add the URL to a trusted site list, install an addon, etc.</p>
|
<p>Here’s what you need to do to enable Firefox immediately print without showing the print preferences dialog box.</p>
<ol>
<li><p>Type about:config at Firefox’s location bar and hit Enter.</p></li>
<li><p>Right click at anywhere on the page and select New > Boolean</p></li>
<li><p>Enter the preference name as print.always_print_silent and click OK.</p></li>
</ol>
<hr>
<p>I found that somewhere and it helped me</p>
|
<p>I have to be honest, I am kinda thinking out loud here... But could it not be done with an applet of some sort (be it Java or whatever) that is given trusted permissions (such as those within the Intranet zone) or something?</p>
<p>May be worth investigating what permissions can be given to each zone?</p>
<p>Following a Google, I think you definately have a challenge, so far most of the articles I have seen involve printing to <a href="http://aspalliance.com/462" rel="nofollow noreferrer">printers connected to the server</a>.</p>
<p>If it's internal, would it be possible to route printing from the server to department/user printers or something?</p>
| 4,193
|
<pre>
kt@rails-ubuntu:~/rails/ledger/releases/20080820144914/.git/hooks$ ls -al
ls: cannot access post-commit: No such file or directory
ls: cannot access update: No such file or directory
ls: cannot access post-update: No such file or directory
ls: cannot access commit-msg: No such file or directory
ls: cannot access pre-rebase: No such file or directory
ls: cannot access post-receive: No such file or directory
ls: cannot access pre-applypatch: No such file or directory
ls: cannot access pre-commit: No such file or directory
total 8
drwxrwxr-x 2 kt kt 4096 2008-09-09 18:10 .
drwxrwxr-x 4 kt kt 4096 2008-09-09 18:10 ..
-????????? ? ? ? ? ? commit-msg
-????????? ? ? ? ? ? post-commit
-????????? ? ? ? ? ? post-receive
-????????? ? ? ? ? ? post-update
-????????? ? ? ? ? ? pre-applypatch
-????????? ? ? ? ? ? pre-commit
-????????? ? ? ? ? ? pre-rebase
-????????? ? ? ? ? ? update
</pre>
|
<p>First off, here's your question, nicely formatted (surround it in < pre > tags to get this):</p>
<pre>kt@rails-ubuntu:~/rails/ledger/releases/20080820144914/.git/hooks$ ls -al
ls: cannot access post-commit: No such file or directory
ls: cannot access update: No such file or directory
ls: cannot access post-update: No such file or directory
[snip]</pre>
<p>Anyway, you need to boot up in single-user mode and run fsck. If you can't reboot right now, just move the directory to /tmp and forget about it.</p>
|
<p>I had the same problem, caused by Aptana Studio working with Rails, more than once.
The long-term solution was to avoid using Aptana to create files.</p>
| 9,854
|
<p>I came across several issues which now seem to have been mitigated.
Firstly, I changed from a 0.4 to a 0.5 mm nozzle. Because of the backpressure I was not able to print my PETG (Colorfabb XT filament) below 270 °C, which caused unresolvable oozing. After that I was able to extrude down to 230 °C.</p>
<p>The left print below shows the result. I disassembled the hotend, there was no leak or whatsoever. Maybe it was too cold for printing. However, the temperature displayed was 250°C. Then I replaced the cheap aluminum heat block with a copper alloy based one. After that my PIDs did not work anymore. I had to greatly enhance the d-term, otherwise there was a big overshoot. Guess there was a serious heat conducting issue with the old BQ hotend. </p>
<p><a href="https://i.stack.imgur.com/uxd43.jpg" rel="nofollow noreferrer" title="Prints using BQ Alu Heatblock (left) and Copper alloy heatblock (right)"><img src="https://i.stack.imgur.com/uxd43.jpg" alt="Prints using BQ Alu Heatblock (left) and Copper alloy Heatblock (right)" title="Prints using BQ Alu Heatblock (left) and Copper alloy heatblock (right)"></a></p>
<p>Anyway, from there it became better. However, I noticed that I still have severe underextrusion after travel moves (second piece, first picture, first piece, second picture). I use Cura, so I activated retraction with the feature to prime after travel moves. I got the wall closed just after 0.35 mm³. </p>
<p><a href="https://i.stack.imgur.com/Oj1Ey.jpg" rel="nofollow noreferrer" title="Prints using Copper alloy Heatblock (left)"><img src="https://i.stack.imgur.com/Oj1Ey.jpg" alt="Prints using Copper alloy Heatblock (left)" title="Prints using Copper alloy Heatblock (left)"></a></p>
<p>My Question: Is this underextrusion after travel moves normal for PETG/XT? I did not discover such behavior with PLA or ABS in the past. </p>
<p>Current Site Advice: Despite the weight, copper heat blocks seem to be worth the upgrade. </p>
|
<p>Have you tried adjusting the Extra Restart Distance in Simplify3D or Retraction Extra Prime in Ultimaker Cura? Most slicers should have a similar setting, perhaps named just slightly differently. A small positive value can help prime the nozzle after a long travel when you've lost back pressure or oozed a little.</p>
|
<p>Zero retraction is just a special (worst) case of under-retraction. You'll pretty much always lose material to oozing if you don't retract before travel. At best this ends up hidden inside the print (but can still affect weight and weight balance); usually it'll also harm the surface.</p>
<p>Ensure that retraction is set to <em>always</em> happen (not skipping short travel), and at least 5-6 mm for bowden extruders. Direct extruders can get away with less but I'm not sure exactly how much less. Less-rigid materials need more retraction to make up for compression of the filament between the extruder gear and the nozzle.</p>
| 950
|
<p>I'm using a <code>Dictionary<string, int></code> where the <code>int</code> is a count of the key.</p>
<p>Now, I need to access the last-inserted Key inside the Dictionary, but I do not know the name of it. The obvious attempt:</p>
<pre><code>int LastCount = mydict[mydict.keys[mydict.keys.Count]];
</code></pre>
<p>does not work, because <code>Dictionary.Keys</code> does not implement a []-indexer.</p>
<p>I just wonder if there is any similar class? I thought about using a Stack, but that only stores a string. I could now create my own struct and then use a <code>Stack<MyStruct></code>, but I wonder if there is another alternative, essentially a Dictionary that implements an []-indexer on the Keys?</p>
|
<p>As @Falanwe points out in a comment, doing something like this is <strong><em>incorrect</em></strong>:</p>
<pre><code>int LastCount = mydict.Keys.ElementAt(mydict.Count -1);
</code></pre>
<p>You <strong>should not</strong> depend on the order of keys in a Dictionary. If you need ordering, you should use an <a href="https://learn.microsoft.com/en-us/dotnet/api/system.collections.specialized.ordereddictionary" rel="noreferrer">OrderedDictionary</a>, as suggested in this <a href="https://stackoverflow.com/a/5540/1724702">answer</a>. The other answers on this page are interesting as well.</p>
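<p>The ordering guarantee is the key point. As a cross-language illustration, Java's LinkedHashMap makes the same contract that OrderedDictionary provides in .NET, namely that iteration follows insertion order, so "the last inserted key" becomes a well-defined question:</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LastKeyDemo {
    public static void main(String[] args) {
        // LinkedHashMap iterates in insertion order by contract,
        // unlike a plain HashMap (or a .NET Dictionary).
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("alpha", 3);
        counts.put("beta", 1);
        counts.put("gamma", 7);

        String lastKey = null;
        for (String key : counts.keySet()) {
            lastKey = key; // after the loop: the most recently inserted key
        }
        System.out.println(lastKey + " = " + counts.get(lastKey)); // prints gamma = 7
    }
}
```

<p>The OrderedDictionary suggested above plays the same role on the .NET side.</p>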
|
<p>Visual Studio's <a href="https://visualstudio.uservoice.com/forums/121579-visual-studio-2015/suggestions/16494583-generic-ordereddictionary" rel="nofollow noreferrer">UserVoice</a> gives a link to <a href="https://github.com/mattmc3/dotmore/blob/master/dotmore/Collections/Generic/OrderedDictionary.cs" rel="nofollow noreferrer">generic OrderedDictionary implementation</a> by dotmore.</p>
<p>But if you only need to get key/value pairs by index and don't need to get values by keys, you may use one simple trick. Declare some generic class (I called it ListArray) as follows:</p>
<pre><code>class ListArray<T> : List<T[]> { }
</code></pre>
<p>You may also declare it with constructors:</p>
<pre><code>class ListArray<T> : List<T[]>
{
public ListArray() : base() { }
public ListArray(int capacity) : base(capacity) { }
}
</code></pre>
<p>For example, you read some key/value pairs from a file and just want to store them in the order they were read so to get them later by index:</p>
<pre><code>ListArray<string> settingsRead = new ListArray<string>();
using (var sr = new StreamReader(myFile))
{
string line;
while ((line = sr.ReadLine()) != null)
{
string[] keyValueStrings = line.Split(separator);
for (int i = 0; i < keyValueStrings.Length; i++)
keyValueStrings[i] = keyValueStrings[i].Trim();
settingsRead.Add(keyValueStrings);
}
}
// Later you get your key/value strings simply by index
string[] myKeyValueStrings = settingsRead[index];
</code></pre>
<p>As you may have noticed, you don't necessarily have to keep just pairs of key/value in your ListArray. The item arrays may be of any length, as in a jagged array.</p>
| 2,623
|
<p>Is there an easy way to tell if a ruby script is already running and then handle it appropriately? For example: I have a script called really_long_script.rb. I have it cronned to run every 5 minutes. When it runs, I want to see if the previous run is still running and then stop the execution of the second script. Any ideas?</p>
|
<p>Using ps is a really poor way of doing that, and probably open to race conditions.</p>
<p>The traditional Unix/Linux way would be to write the PID to a file (typically in /var/run) and check to see if that file exists on startup.</p>
<p>e.g. with the pidfile located at /var/run/myscript.pid, you'd check whether it exists before running the program. There are a few tricks to avoid race conditions, involving opening the file with O_EXCL (exclusive create) and symbolic links.</p>
<p>However unlikely a collision may be, you should still try to code around race conditions by using atomic operations on the filesystem.</p>
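<p>A minimal sketch of that atomic-create trick, shown here in Java for concreteness (in Ruby the same idea is File.open(path, File::WRONLY | File::CREAT | File::EXCL)). The point is that the existence check and the file creation happen as a single atomic operation, so two instances cannot both win:</p>

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PidFileDemo {
    /** Try to claim the pidfile; true if we are the only instance. */
    static boolean claim(Path pidFile) throws IOException {
        try {
            // CREATE_NEW fails atomically if the file already exists,
            // which closes the check-then-create race window.
            Files.write(pidFile, "12345\n".getBytes(),
                        StandardOpenOption.CREATE_NEW);
            return true;
        } catch (FileAlreadyExistsException e) {
            return false; // a previous run is (or appears to be) active
        }
    }

    public static void main(String[] args) throws IOException {
        Path pidFile = Paths.get(System.getProperty("java.io.tmpdir"),
                                 "myscript-demo.pid");
        Files.deleteIfExists(pidFile);      // start clean for the demo
        System.out.println(claim(pidFile)); // true  - we got the lock
        System.out.println(claim(pidFile)); // false - "already running"
        Files.deleteIfExists(pidFile);      // a real script deletes this on exit
    }
}
```

<p>A real script would also write its own PID into the file and remove the file on exit, stale-lock handling being the usual remaining wrinkle.</p>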
<p>To save re-inventing the wheel, you might want to look at <a href="http://rubyforge.org/projects/pidify/" rel="noreferrer">http://rubyforge.org/projects/pidify/</a></p>
|
<p>In bash:</p>
<pre><code>if ps aux | grep really_long_script.rb | grep -vq grep
then
echo Script already running
else
ruby really_long_script.rb
fi
</code></pre>
| 5,910
|
<p>I have a self-referential Role table that represents a tree structure </p>
<pre><code>ID [INT] AUTO INCREMENT
Name [VARCHAR]
ParentID [INT]
</code></pre>
<p>I am using an ADO.NET DataTable and DataAdapter to load and save values to this table. This works if I only create children of existing rows. If I make a child row, then make a child of that child, then Update, the temporary ID value generated by the DataTable is going into the ParentID column. I have the following data relation set:</p>
<pre><code>dataset.Relations.Add(New DataRelation("RoleToRole",RoleTable.Columns("ID"), RoleTable.Columns("ParentID")))
</code></pre>
<p>And when I make new child rows in the DataTable I call the SetParentRow method</p>
<pre><code>newRow.SetParentRow(parentRow)
</code></pre>
<p>Is there something special I have to do to get the ID generation to propagate recursively when I call Update on the DataAdapter?</p>
|
<p>I don't know ADO.net in particular, but most ORMs won't automatically insert the ID of a new record in a relationship. You'll have to resort to the 2-step process:</p>
<ol>
<li>build and save parent</li>
<li>build and save child with relationship to parent</li>
</ol>
<p>The reason that this is difficult for ORMs is because you might have circular dependencies, and it wouldn't know which object it needed to create an ID for first. Some ORMs are smart enough to figure out those relationships where there are no such circular dependencies, but most aren't.</p>
|
<p>Does it make any difference if you go</p>
<pre><code>newRow.SetParentRow(parentRow, RoleTable.Relations("RoleToRole"))
</code></pre>
| 2,724
|
<p>We've got two developers on the same closed (ugh, stupid gov) network, Another developer a couple minutes drive down the road, and a fourth developer half-way across the country. E-Mail, ftp, and removal media are all possible methods of transfer for the people not on the same network.</p>
<p>I am one of the two closed network developers, consider us the "master" location.</p>
<p>What is the best Mercurial setup/pattern for group? What is the best way to trasmit changes to/from the remote developers? As I am in charge, I figured that I would have to keep at least one master repo with another local repo in which I can develop. Each other person should just need a clone of the master. Is this right? I guess this also makes me responsible for the merging? </p>
<p>As you can see, I'm still trying to wrap my head around distributed version control. I don't think there is any other way to do this with the connectivity situation.</p>
|
<p>The users outside the network can make <a href="http://hgbook.red-bean.com/hgbookch12.html#x16-26800012.1" rel="nofollow noreferrer">patches</a>, and/or use <a href="http://hgbook.red-bean.com/hgbookch14.html#x18-32100014.4" rel="nofollow noreferrer">email</a> to send their updates to the main repo, or to someone like yourself to merge them. The other internal people can have local copies, like yourself, and do merges -- but if you are taking these out-of-network patches, it might be better that one person deal with them so nobody gets confused; that's something you'd have to consider yourself.</p>
<p>Syncing the other way, you'd create a patch, and then email it or get a flash drive to the remote developers so they can patch their systems. You're going to need some good communication in the team, man; I am thankful I'm not in your shoes.</p>
<p>Those are my only suggestions --well, the obvious, get them a VPN connection! I'd love to hear how it goes, what plans stabilize into a weekly groove, et cetera. </p>
|
<p>Correct. The only way anything makes it onto the closed network is via flash drive.</p>
| 2,660
|
<p>When I use the PrintOut method to print a Worksheet object to a printer, the "Printing" dialog (showing filename, destination printer, pages printed and a Cancel button) is displayed even though I have set DisplayAlerts = False. The code below works in an Excel macro but the same thing happens if I use this code in a VB or VB.Net application (with the reference changes required to use the Excel object).</p>
<pre><code>Public Sub TestPrint()
Dim vSheet As Worksheet
Application.ScreenUpdating = False
Application.DisplayAlerts = False
Set vSheet = ActiveSheet
vSheet.PrintOut Preview:=False
Application.DisplayAlerts = True
Application.ScreenUpdating = True
End Sub
</code></pre>
<p>EDIT: The answer below sheds more light on this (that it may be a Windows dialog and not an Excel dialog) but does not answer my question. Does anyone know how to prevent it from being displayed?</p>
<p>EDIT: Thank you for your extra research, Kevin. It looks very much like this is what I need. Just not sure I want to blindly accept API code like that. Does anyone else have any knowledge about these API calls, and whether they do what the author purports?</p>
|
<p>When you say the "Printing" Dialog, I assume you mean the "Now printing xxx on " dialog rather than standard print dialog (select printer, number of copies, etc). Taking your example above & trying it out, that is the behaviour I saw - "Now printing..." was displayed briefly & then auto-closed.</p>
<p>What you're trying to control may not be tied to Excel, but instead be Windows-level behaviour. If it is controllable, you'd need to a) disable it, b) perform your print, c) re-enable. If your code fails, there is a risk this is not re-enabled for other applications.</p>
<p>EDIT: Try this solution: <a href="http://www.mrexcel.com/archive2/11900/13336.htm" rel="nofollow noreferrer">How do you prevent printing dialog when using Excel PrintOut method</a>. It seems to describe exactly what you are after.</p>
|
<p>The API calls in the article linked by Kevin Haines hide the Printing dialog like so:</p>
<ol>
<li>Get the handle of the Printing dialog window.</li>
<li>Send a message to the window to tell it not to redraw</li>
<li>Invalidate the window, which forces a redraw that never happens</li>
<li>Tell Windows to repaint the window, which causes it to disappear.</li>
</ol>
<p>That's oversimplified to put it mildly.</p>
<p>The API calls are safe, but you will probably want to make sure that screen updating for the Printing dialog is set to True if your application fails.</p>
| 9,281
|
<p>I'm writing an application that is basically just a preferences dialog, much like the tree-view preferences dialog that Visual Studio itself uses. The function of the application is simply a pass-through for data from a serial device to a file. It performs many, many transformations on the data before writing it to the file, so the GUI for the application is simply all the settings that dictate what those transformations should be.</p>
<p>What's the best way to go about designing/coding a tree-view preferences dialog? The way I've been going about it is building the main window with a docked tree control on the left. Then I have been creating container controls that correspond to each node of the tree. When a node is selected, the app brings that node's corresponding container control to the front, moves it to the right position, and maximizes it in the main window. This seems really, really clunky while designing it. It basically means I have tons of container controls beyond the edge of the main window during design time that I have to keep scrolling the main window over to in order to work with them. I don't know if this totally makes sense the way I'm writing this, but maybe this visual for what I'm talking about will make more sense:</p>
<p><img src="https://i.stack.imgur.com/bVRJB.png" alt="form design"></p>
<p>Basically I have to work with this huge form, with container controls all over the place, and then do a bunch of run-time reformatting to make it all work. This seems like a <em>lot</em> of extra work. Am I doing this in a totally stupid way? Is there some "obvious" easier way of doing this that I'm missing?</p>
|
<p>A tidier way is to create separate forms for each 'pane' and, in each form constructor, set</p>
<pre><code>this.TopLevel = false;
this.FormBorderStyle = FormBorderStyle.None;
this.Dock = DockStyle.Fill;
</code></pre>
<p>That way, each of these forms can be laid out in its own designer, instantiated one or more times at runtime, and added to the empty area like a normal control.</p>
<p>Perhaps the main form could use a <code>SplitContainer</code> with a static <code>TreeView</code> in one panel, and space to add these forms in the other. Once they are added, they could be flipped through using <code>Hide/Show</code> or <code>BringToFront/SendToBack</code> methods.</p>
<pre><code>SeparateForm f = new SeparateForm();
MainFormSplitContainer.Panel2.Controls.Add(f);
f.Show();
</code></pre>
|
<p>I would probably create several panel classes based on a base class inheriting CustomControl. These controls would then have methods like Save/Load and so on. That way, I could design each of the panels separately.</p>
<p>I have used a Wizard control that, in design mode, handled several pages, so that one could click Next in the designer and design all the pages through it. Though this had several disadvantages when connecting code to the controls, it suggests that you could have a similar setup by building some designer classes. I have never written designer classes in VS myself, so I can't say how to, or if it's worth it :-)</p>
<p>I'm a little curious about how you intend to handle the load/save of values to/from the controls? There must be a lot of code in one class if all your pages are in one big Form?</p>
<p>And yet another way would of course be to generate the GUI code as each page is requested, using info about what types of settings there are.</p>
| 2,577
|
<p>In Cura, I can edit my .ini profile settings as needed, for example, when I change filament or models. It's annoying to have to edit the profile on another computer and update it on my Pi just to change the infill, etc. I've searched <a href="http://plugins.octoprint.org/" rel="noreferrer">http://plugins.octoprint.org/</a> and can't find any plugins that do this.</p>
|
<p>As far as I know, this is currently not possible.</p>
|
<p>There is an import function.
<a href="https://github.com/foosel/OctoPrint/wiki/Plugin:-Cura" rel="nofollow">https://github.com/foosel/OctoPrint/wiki/Plugin:-Cura</a></p>
<p>By the way, in Cura 2.1 you'll also need to re-import your .ini.</p>
<p>Regards :)</p>
| 162
|
<p>I'm curious about everyones practices when it comes to using or distributing libraries for an application that you write.</p>
<p>First of all, when developing your application do you link the debug or release version of the libraries? (For when you run your application in debug mode)</p>
<p>Then when you run your app in release mode just before deploying, which build of the libraries do you use?</p>
<p>How do you perform the switch between your debug and release version of the libraries? Do you do it manually, do you use macros, or whatever else is it that you do?</p>
|
<p>I would first determine what requirements are needed from the library:</p>
<ol>
<li>Debug/Release</li>
<li>Unicode support</li>
<li>And so on..</li>
</ol>
<p>With that determined you can then create configurations for each combination required by yourself or other library users.</p>
<p>When compiling and linking, it is very important that you keep the libraries and executable consistent with respect to the configurations used, i.e. don't mix release &amp; debug when linking.
I know on the Windows/VS platform this can cause subtle memory issues if debug &amp; release libs are mixed within an executable.</p>
<p>As Brian has mentioned, in Visual Studio it's best to use the Configuration Manager to set up how each configuration you require should be built.</p>
<p>For example our projects require the following configurations to be available depending on the executable being built.</p>
<ol>
<li>Debug+Unicode</li>
<li>Debug+ASCII</li>
<li>Release+Unicode</li>
<li>Release+ASCII</li>
</ol>
<p>The users of this particular project use the Configuration Manager to match their executable requirements with the project's available configurations.</p>
<p>Regarding the use of macros, they are used extensively to implement compile-time decisions, such as whether the debug or release version of a function is to be linked. If you're using VS, you can view the pre-processor definitions attribute to see how the various macros are defined, e.g. _DEBUG, _RELEASE; this is how the configuration controls what's compiled.</p>
<p>What platform are you using to compile/link your projects?</p>
<p>EDIT: Expanding on your updated comment..</p>
<p>If the <strong>Configuration Manager</strong> option is not available to you then I recommend using the following properties from the project:</p>
<ul>
<li><strong>Linker</strong>-><strong>Additional Library Directories</strong> or <strong>Linker</strong>-><strong>Input</strong></li>
</ul>
<p>Use the macro <code>$(ConfigurationName)</code> to link with the appropriate library configuration e.g. Debug/Release.</p>
<pre><code>$(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.lib
</code></pre>
<ul>
<li><strong>Build Events</strong> or <strong>Custom Build Step</strong> configuration property</li>
</ul>
<p>Execute a copy of the required library file(s) from the dependent project prior (or after) to the build occurring.</p>
<pre><code>xcopy $(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.dll $(IntDir)
</code></pre>
<p>The macro <code>$(ProjectDir)</code> will be substituted for the current project's location and causes the operation to occur relative to the current project.
The macro <code>$(ConfigurationName)</code> will be substituted for the currently selected configuration (default is <code>Debug</code> or <code>Release</code>) which allows the correct items to be copied depending on what configuration is being built currently.</p>
<p>If you use a regular naming convention for your project configurations it will help, as you can use the <code>$(ConfigurationName)</code> macro, otherwise you can simply use a fixed string.</p>
|
<p>I use VS. The way I do it is to add the libraries I need through the project's references, which basically just says in what folder to look for a specific library at project load time. I develop my libraries to be as project-independent and reusable as possible, so they are all projects of their own. For the libraries a specific project needs, I create a "3rdParty" or "libs" folder at the same level as my "src" folder in my svn folder tree. I tend to only use release libraries, but when I hit some unknown issue and want to switch to debug, I manually copy a debug version of the files into the "lib" folder and reload the project.</p>
<p>I am unsure whether I should keep both debug and release versions in my svn tree. Since the libraries are projects of their own, keeping them in the svn tree of another project doesn't feel right; they can be rebuilt without a hitch at any moment.</p>
<p>And then I wanted to find a way of making the switch more... hmmm... well, basically automatic, if you will, but that's not what I really mean. It just feels that switching the files manually between release and debug isn't right. Maybe I haven't found it yet, but what I would like is an option that would do something like:
for library "stack.dll" look in "......\3rdParty\" for release and "......\3rdPartyD\" for debug.</p>
<p>What do you suggest? Remember, the libraries are external projects; their built files live entirely elsewhere. In fact, think of it as having to check out another project, build it, and copy the built library whenever you want another copy. How would you set that up?</p>
| 6,776
|
<p>How is your JavaScript code organized? Does it follow patterns like MVC, or something else?</p>
<p>I've been working on a side project for some time now, and the further I get, the more my webpage has turned into a full-featured application. Right now I'm sticking with <a href="http://jquery.com" rel="noreferrer">jQuery</a>; however, the logic on the page is growing to the point where some organization, or dare I say it, "architecture" is needed. My first approach is "MVC-ish":</p>
<ul>
<li>The 'model' is a JSON tree that gets extended with helpers</li>
<li>The view is the DOM plus classes that tweak it</li>
<li>The controller is the object where I connect events handling and kick off view or model manipulation</li>
</ul>
<p>I'm very interested, however, in how other people have built more substantial JavaScript apps. I'm not interested in GWT, or other server-oriented approaches... just in the approach of "JavaScript + &lt;generic web service-y thingy here&gt;"</p>
<p>Note: earlier I said JavaScript "is not really OO, not really functional". This, I think, distracted everyone. Let's put it this way: because JavaScript is unique in many ways, and I'm coming from a strongly-typed background, I don't want to force paradigms that I know but that were developed in very different languages.</p>
|
<p>...but JavaScript has many facets that <strong>are</strong> OO.</p>
<p>Consider this:</p>
<pre><code>var Vehicle = jQuery.Class.create({
init: function(name) { this.name = name; }
});
var Car = Vehicle.extend({
fillGas: function(){
this.gas = 100;
}
});
</code></pre>
<p>I've used this technique to create page-level javascript classes that have their own state, this helps keep it contained (and I often identify areas that I can reuse and put into other classes).</p>
<p>This is also especially useful when you have components/server controls that have their own script to execute, but when you might have multiple instances on the same page. This keeps the state separate.</p>
|
<p>Not 100% sure what you mean here, but I will say that after doing ASP.NET for the last 6 years, my web pages are now mostly driven by JavaScript once the basic page rendering is done by the server. I use JSON for everything (have been for about 3 years now) and use <a href="http://www.mochikit.com" rel="nofollow noreferrer"><strong>MochiKit</strong></a> for my client-side needs.</p>
<p>By the way, JavaScript <em>is</em> OO, but since it uses prototypical inheritance, people don't give it credit in that way. I would also argue that it is functional as well, it all depends on how you write it. If you are really interested in functional programming styles, check out <a href="http://www.mochikit.com" rel="nofollow noreferrer"><strong>MochiKit</strong></a> - you may like it; it leans quite a bit towards the functional programming side of JavaScript.</p>
| 5,249
|
<p>I have a svn repository, R, that depends on a library, l, in another repository.</p>
<p>The goal is that when someone checks out R, they also check out l. We want l to still be in its own repository so that l can be updated without dependence on R.</p>
<p>I don't know much about external svn links, but I believe that when depending on a svn-based library one can link to it externally, 'ext'.</p>
<p>If l is in a git repository, can I do something similar? I'd like to preserve the goal stated above.</p>
|
<p>I suggest using a script wrapper for svn co. </p>
<pre><code>#!/bin/sh
svn co path://server/R svn-R
git clone path://server/l git-l
</code></pre>
<p>Or similar.</p>
| 9,834
|
<p>When I was in China my company's website was blocked for about 24 hours.</p>
<p>I assume it was the "Great Chinese Firewall" but I was wondering if there is any way that I can find out exactly where a packet or TCP/IP connection gets blocked.</p>
<p>I was able to verify that it wasn't being blocked at our end(I used the local host file to point to the backup server inside of China) or at the end of our server (Other people could still connect to both ISPs).</p>
<p>I tried tracert but only port 80 was being redirected. I could ssh into the server without any problems.</p>
<p>The other problem is that most of the routers in China just drop the packets and don't respond to ping etc so you can't find out their IP addresses.</p>
<p>In the future are there any tools that can track down where packets are being blocked?</p>
|
<p><a href="http://michael.toren.net/code/tcptraceroute/" rel="nofollow noreferrer">tcptraceroute</a></p>
|
<p>I have lots of problems with that firewall. Having my server in the USA doesn't help. If you need tools to test a site hosted outside of China as if you were in China, you can try this page:</p>
<p><a href="http://www.websitepulse.com/help/tools.php" rel="nofollow noreferrer">http://www.websitepulse.com/help/tools.php</a></p>
<p>Good luck</p>
| 7,206
|
<p>Does someone have any tips/advice on database design for a web application? The kind of stuff that can save me a lot of time/effort in the future when/if the application I'm working on takes off and starts having a lot of usage.</p>
<p>To be a bit more specific, the application is a strategy game (browser based, just text) that will mostly involve players issuing "orders" that will be stored in the database and processed later, with the results also being stored there (the history of "orders" and the corresponding results will probably get quite big).</p>
<p><strong>Edited to add more details</strong> (as requested):</p>
<p>platform: Django</p>
<p>database engine: I was thinking of using MySQL (unless there's a big advantage in using another)</p>
<p>the schema: all I have now are some Django models, and that's far too much detail to post here. And if I start posting schemas this becomes too specific, and I was looking for general tips. For example, consider that I issue "orders" that will be later processed and return a result that I have to store to display some kind of "history". In this case is it better to have a separate table for the "history" or just one that aggregates both the "orders" and the result? I guess I could cache the "history" table, but this would take more space in the database and also more database operations because I would have to constantly create new rows instead of just altering them in the aggregate table.</p>
|
<p>You have probably touched on a much larger issue of designing for high scalability and performance in general.</p>
<p>Essentially, for your database design I would follow good practices such as adding foreign keys and indexes to data you expect to be used frequently, normalising your data by splitting it into smaller tables, and identifying which data is read frequently and which is written frequently, so you can optimise each case.</p>
<p>Much more important than your database design for high-performance web applications is your effective use of caching, both at the client level through HTML page caching and at the server level through cached data or serving up static files in place of dynamic ones.</p>
<p>The great thing about caching is that it can be added as it is needed, so that when your application does take off then you evolve accordingly.</p>
<p>As far as your historical data is concerned, it is a great thing to cache, as you do not expect it to change frequently. If you wish to produce regular and fairly intensive reports from your data, then it is good practice to put this data into another database so as not to bring your web application to a halt while they run.</p>
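<p>To make the server-side idea concrete, here is a minimal Python sketch (the table stand-in and names are invented for illustration) of caching rarely-changing history lookups in memory:</p>

```python
import functools

# Stand-in for the real orders/history table.
HISTORY_DB = {42: ["move north", "attack"]}

@functools.lru_cache(maxsize=256)
def order_history(player_id):
    """In real life this would be an expensive query; it is cached
    because history rows are effectively write-once."""
    return tuple(HISTORY_DB.get(player_id, ()))
```

<p>The same shape works one level up with memcached or pre-generated static pages; the point is that the cache layer is bolted on without touching the schema.</p>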
<p>Of course this kind of optimisation really isn't necessary unless you think your application will warrant it.</p>
|
<p>Why don't you post the schema you have now? It's too broad a question to answer usefully without some detail of what platform and database you're going to use and the table structure you're proposing...</p>
| 6,975
|
<p>Using Firebug v1.20b7 with Firefox v3.0.1 I use firebug a lot for web devlopment.</p>
<p>I have very often the problem that Firebug won't show its web console for seeing the <code>POSTs</code> and <code>GETs</code>. I can view all the other tabs, including the NET tab that gives me a lot of the same information that the CONSOLE tab does.</p>
<p>Curious if anyone else has had this problem, and maybe a solution, or maybe this is a bug of Firebug.</p>
|
<p>There is a limitation in Firebug (or rather, in Firefox itself), which will be fixed in one of the newer Firefox releases.</p>
<p>The bug is caused by the fact that Firebug needs to send data <em>a second time</em> to monitor what's going on in the connection.</p>
<p>There's now a special API hook in the Firefox trunk that should make this workaround unnecessary in the future, so that Firebug can really spy on what's going on :)</p>
|
<p>Well, 1.20b7 is technically a <em>beta</em> version of Firebug. :)</p>
<p>I've had problems with certain features off and on, but a restart of Firefox seems to fix it more often than not.</p>
| 2,759
|
<p>I've got a System.Collections.Generic.List(Of MyCustomClass) object.</p>
<p>Given integer variables pagesize and pagenumber, how can I collect just a single page of <code>MyCustomClass</code> objects?</p>
<p>This is what I've got. How can I improve it?</p>
<pre><code>'my given collection and paging parameters
Dim AllOfMyCustomClassObjects As System.Collections.Generic.List(Of MyCustomClass) = GIVEN
Dim pagesize As Integer = GIVEN
Dim pagenumber As Integer = GIVEN
'collect current page objects
Dim PageObjects As New System.Collections.Generic.List(Of MyCustomClass)
Dim objcount As Integer = 1
For Each obj As MyCustomClass In AllOfMyCustomClassObjects
    If objcount > pagesize * (pagenumber - 1) And objcount <= pagesize * pagenumber Then
PageObjects.Add(obj)
End If
objcount = objcount + 1
Next
'find total page count
Dim totalpages As Integer = CInt(Math.Floor(objcount / pagesize))
If objcount Mod pagesize > 0 Then
totalpages = totalpages + 1
End If
</code></pre>
|
<p>Generic.List should provide the Skip() and Take() methods, so you could do this:</p>
<pre><code>Dim PageObjects As New System.Collections.Generic.List(Of MyCustomClass)
PageObjects = AllOfMyCustomClassObjects.Skip((pagenumber - 1) * pagesize).Take(pagesize).ToList()
</code></pre>
<hr>
<p>If by "without Linq" you meant on the 2.0 Framework, I don't believe List(Of T) supports those methods. In that case, use GetRange like Jonathan suggested.</p>
|
<p>You use GetRange on your IEnuramble implementing collection:</p>
<pre><code>List<int> lolInts = new List<int>();
for (int i = 0; i < 100; i++)
{
    lolInts.Add(i);
}

List<int> page1 = lolInts.GetRange(0, 50);
List<int> page2 = lolInts.GetRange(50, 50);
</code></pre>
<p>I trust you can figure out how to use GetRange to grab an individual page from here.</p>
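<p>For reference, the paging arithmetic in the question (start index plus total page count) condenses to a few lines; here it is as a language-neutral Python sketch with 1-based page numbers:</p>

```python
def get_page(items, page_number, page_size):
    """Return one 1-based page of items plus the total page count."""
    total_pages = (len(items) + page_size - 1) // page_size  # ceiling division
    start = (page_number - 1) * page_size
    return items[start:start + page_size], total_pages
```

<p>The ceiling-division trick replaces the floor-plus-remainder check in the original code.</p>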
| 4,148
|
<p><code>System.IO.Directory.GetFiles()</code> returns a <code>string[]</code>. What is the default sort order for the returned values? I'm assuming by name, but if so how much does the current culture effect it? Can you change it to something like creation date? </p>
<p><strong>Update:</strong> MSDN points out that the sort order is not guaranteed for .Net 3.5, but the 2.0 version of the page doesn't say anything at all and neither page will help you sort by things like creation or modification time. That information is lost once you have the array (it contains only strings). I could build a comparer that would check for each file it gets, but that means accessing the file system repeatedly when presumably the .GetFiles() method already does this. Seems very inefficient.</p>
|
<p>If you're interested in properties of the files such as CreationTime, then it would make more sense to use System.IO.DirectoryInfo.GetFileSystemInfos().
You can then sort these using one of the extension methods in System.Linq, e.g.:</p>
<pre><code>DirectoryInfo di = new DirectoryInfo("C:\\");
FileSystemInfo[] files = di.GetFileSystemInfos();
var orderedFiles = files.OrderBy(f => f.CreationTime);
</code></pre>
<p>Edit - sorry, I didn't notice the .NET2.0 tag so ignore the LINQ sorting. The suggestion to use System.IO.DirectoryInfo.GetFileSystemInfos() still holds though.</p>
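<p>The same approach (fetch each file's metadata once, then sort on it) can be sketched in Python, purely as an illustration of the pattern:</p>

```python
import os

def files_by_creation_time(directory):
    """Return file paths in a directory, sorted oldest-created first."""
    paths = [os.path.join(directory, name) for name in os.listdir(directory)]
    files = [p for p in paths if os.path.isfile(p)]
    # getctime is creation time on Windows, but metadata-change time on Unix.
    return sorted(files, key=os.path.getctime)
```

<p>Either way, the key point is that the timestamps are read once up front rather than repeatedly inside a comparer.</p>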
|
<p>Just an idea. I like to find an easy way out and try to reuse already available resources. If I were to sort files, I would just create a process and make a system call to "DIR [x:\Folders\SubFolders*.*] /s /b /on" and capture the output.</p>
<p>With the system's DIR command you can sort by:</p>
<pre><code>/O List by files in sorted order.
sortorder N By name (alphabetic) S By size (smallest first)
E By extension (alphabetic) D By date/time (oldest first)
G Group directories first - Prefix to reverse order
The /S switch includes sub folders
</code></pre>
<p>I am not sure whether D (by date/time) uses the last-modified date or the file-creation date. But if the needed sort order is already built into the DIR command, I get it just by making the system call. And it's FAST. I am just the lazy guy ;)</p>
<p><em>After a little googling I found the switch to sort by a particular date/time:</em></p>
<pre><code>/t [[:]TimeField] : Specifies which time field to display or use for sorting. The following list describes each of the values you can use for TimeField.
Value Description
c : Creation
a : Last access
w : Last written
</code></pre>
| 7,618
|
<p>I have my own class inside the file "Particles.h" and the class's implementation is inside "Particles.cpp"</p>
<p>I want the file "Load.h" to recognize my classes inside there, so I've added the line</p>
<pre><code>#include "Particles.h"
</code></pre>
<p>and the file doesn't recognize it and in the past everything was OK (I haven't made any changes inside that class).</p>
<p>What should I do?</p>
|
<p>It sounds like your include path - the list of directories that the compiler scans in order to locate files that you #include - is set incorrectly. Which compiler are you using?</p>
|
<p>Make sure the file "Particles.cpp" also includes "Particles.h" to start with, that the files are in the same folder, and that they are all part of the same project. It would also help if you shared the error message that you are getting from your compiler.</p>
| 9,751
|
<p>What are they and what are they good for?</p>
<p>I do not have a CS degree and my background is VB6 -> ASP -> ASP.NET/C#. Can anyone explain it in a clear and concise manner?</p>
|
<p>Imagine if every single line in your program was a separate function. Each accepts, as a parameter, the next line/function to execute. </p>
<p>Using this model, you can "pause" execution at any line and continue it later. You can also do inventive things like temporarily hop up the execution stack to retrieve a value, or save the current execution state to a database to retrieve later.</p>
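<p>That "each step receives the rest of the program" idea can be shown directly. Here is a small Python sketch in continuation-passing style, where no function returns a value; each one hands its result to the continuation <code>k</code>:</p>

```python
def add(a, b, k):
    k(a + b)          # pass the result onward instead of returning it

def square(x, k):
    k(x * x)

def sum_of_squares(a, b, k):
    # Each nested lambda is "the rest of the computation".
    square(a, lambda a2:
        square(b, lambda b2:
            add(a2, b2, k)))

results = []
sum_of_squares(3, 4, results.append)  # results becomes [25]
```

<p>Pausing execution then amounts to stashing <code>k</code> somewhere and calling it later.</p>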
|
<p>Think of threads. A thread can be run, and you can get the result of its computation. A continuation is a thread that you can copy, so you can run the same computation twice.</p>
| 6,186
|
<p>I have a spool of translucent PLA filament that doesn't work well with the filament sensor on my Prusa i3 MK3. The translucency trips up the sensor, making it think the filament ran out. I thought I'd create a filament profile in Slic3r and disable the sensor in the "Start G-code" block that gets inserted at the beginning of the exported gcode file. </p>
<p>I've got the following code:</p>
<pre><code>M900 K{if printer_notes=~/.*PRINTER_HAS_BOWDEN.*/}200{else}30{endif}; Filament gcode
M406 ; Disable filament sensor
M117 Filament sensor OFF
</code></pre>
<p>The first line is provided by Prusa's default PLA profile. The second line should disable the sensor, and the third line should print the "Filament sensor OFF" message. If I look in the gcode, it's there:</p>
<pre><code>G92 E0.0
M221 S95
M900 K30; Filament gcode
M406 ; Disable filament sensor
M117 Filament sensor OFF
G21 ; set units to millimeters
G90 ; use absolute coordinates
M83 ; use relative distances for extrusion
;BEFORE_LAYER_CHANGE
</code></pre>
<p>But if I print this gcode file, I see no message, and when checking the sensor in the "Tune" menu while printing, the sensor is still on.</p>
<p>I thought I might have a problem with line endings, but looking at the file in a hex editor, all the lines seem to end with a <code>0A</code> line feed character, including mine. </p>
<p>Why isn't my printer doing anything with the M406 and M117 messages? Full gcode file <a href="https://pastebin.com/YDTN2Qes" rel="nofollow noreferrer">here</a>.</p>
|
<p>Following on from Toon's answer, here is a rundown of <a href="https://www.youtube.com/channel/UCb8Rde3uRL1ohROUVg46h1A" rel="nofollow noreferrer">Thomas Sanladerer</a>'s excellent video: <a href="https://www.youtube.com/watch?v=Mbn1ckR86Z8" rel="nofollow noreferrer">3D printing guides: Calibration and why you might be doing it wrong</a>.</p>
<p><em>However, this may not be a definitive answer to the actual question about warts and bumps...</em></p>
<hr>
<p><strong><a href="https://www.youtube.com/watch?v=Mbn1ckR86Z8&t=8" rel="nofollow noreferrer">0:08 - A step back</a></strong></p>
<p>Back in time - when the RepRap project (and the hobby grade 3D printing market) was new territory - it was seen to be a <em>doable</em> technology, with no restrictions imposed by patents. The new printers created and developed included Darwin, Sells Mendel and Prusa Mendel. These often produced unusable parts.</p>
<p>Impromptu solutions and kludges led to fixes that, by today's standards, give poor quality prints. People today often believe that because those fixes worked back then, they must still be valid solutions today. That is not necessarily the case.</p>
<p>The common misconception is that it is necessary to calibrate the steps per mm for all axes other than the extruder: adjusting the X, Y and Z steps per mm until the 10 mm cube measures exactly 10x10x10 mm, even if that means squeezing the callipers.</p>
<hr>
<blockquote>
<p><strong><a href="https://www.youtube.com/watch?v=Mbn1ckR86Z8&t=85" rel="nofollow noreferrer">1:25 - Car analogy</a></strong></p>
<p>You find that your car pulls to the left, when going in a straight
line, so you adjust the steering. However, then in hard corners and
the rain the car handles poorly. </p>
<p>Upon closer inspection, it then turns out that the car had a flat
tyre. You wouldn't compensate for having a flat tyre by adjusting the
steering, now would you?</p>
</blockquote>
<hr>
<p>In order to get that 10 mm cube precise, the usual approach is to calibrate the filament diameter and extrusion multiplier (the most straightforward option), but some printers aren't even that precise in the first place.</p>
<p>Mechanical play, ripple, slack and backlash can throw you off by 0.1 mm. Compensating for this 0.1 mm is certainly possible and achievable. However, on a larger print, say 100 mm, this overcompensation becomes more evident, and you will be an entire millimeter off the desired dimensions.</p>
<p>So, use the ideal calculated steps per mm. Timing belts and threaded rods are made to tight tolerances; therefore, the worst case when using the ideal steps per mm setting is an inaccuracy of 0.5%.</p>
<p>So, to find the ideal calculated steps use <a href="http://prusaprinters.org/calculator/" rel="nofollow noreferrer">Prusa's calculator</a> which is very good indeed. </p>
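<p>For a belt-driven axis, the ideal figure is also easy to compute by hand. Here is a minimal sketch of the calculation, assuming typical values not stated in the video (200 full steps per motor revolution, 1/16 microstepping, a 2 mm pitch GT2 belt, and a 20-tooth pulley):</p>
<pre><code>def steps_per_mm(full_steps=200, microsteps=16, belt_pitch=2.0, pulley_teeth=20):
    # one motor revolution advances the belt by pitch * teeth millimetres
    return (full_steps * microsteps) / (belt_pitch * pulley_teeth)

print(steps_per_mm())  # 80.0 steps/mm for these example values
</code></pre>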
<p>If you are not using belts, or have a very large printer, then it is worth recalibrating the steps per mm for X and Y, as 0.5% will make a noticeable difference in larger parts.</p>
<p>Use the files and instructions for these <strong>Calibration sticks</strong> on <a href="https://www.youmagine.com/designs/calibration-sticks" rel="nofollow noreferrer">Youmagine</a> for proper recalibration, without the results being skewed by the extrusion multiplier being off by a bit.</p>
<p><strong><a href="https://www.youtube.com/watch?v=Mbn1ckR86Z8&t=225" rel="nofollow noreferrer">3:45 - So what do I need to do?</a></strong></p>
<p>What do you need to empirically calibrate your printer? In actual fact, not all that much:</p>
<ul>
<li>extruder steps per mm setting</li>
<li><p>extrusion multiplier (see video link - <a href="https://www.youtube.com/watch?v=YUPfBJz3I6Y" rel="nofollow noreferrer">Extruder calibration</a>)</p></li>
<li><p>print speed, jerk and acceleration settings - These depend upon how much quality you want to sacrifice for increased speed.</p>
<p>Pro-tip: <em>slow your printing down</em>. For example, try printing at half speed. Quality <em>may</em> be improved, and even if it isn't you will be able to observe more clearly what is happening, and going wrong.
(see video link - <a href="https://www.youtube.com/watch?v=7HsIZuj9vOs" rel="nofollow noreferrer">Super Fast Guide:Tuning Speeds</a>)</p></li>
</ul>
<p><strong><a href="https://www.youtube.com/watch?v=Mbn1ckR86Z8&t=270" rel="nofollow noreferrer">4:30 - Other than that?</a></strong></p>
<p>There is not much else that needs calibrating, per se.</p>
<p>With regards to slicer software, only a certain range of settings makes sense, but this isn't printer calibration. You simply learn the slicer software and, with familiarity, see how far you can go.</p>
<p>These days any well maintained and well built and solid printer will produce good prints.</p>
<p>Most slicers give you decent prints without tweaking or calibrating, other than the basic settings about your printer and deciding how the part should be printed.</p>
<p>What about print temp and retract settings? Well, just use the default settings, or settings which depend upon the type of filament. So, no calibration is required there, as it is a property of the filament.</p>
<p><strong><a href="https://www.youtube.com/watch?v=Mbn1ckR86Z8&t=324" rel="nofollow noreferrer">5:24 - Summing up</a></strong></p>
<p>Don't try to calibrate everything</p>
<p>The technology, in particular the software, i.e. slicers, is still developing and improving. Slic3r's prototype beta (in Nov 2014) has added compensation for fitting errors without messing other things up, which is essentially what the cube calibration tries to do, but in the correct way.</p>
|
<p>Have you correctly calibrated your steps per mm a.k.a. esteps? Tom made a great video about it:</p>
<p><a href="https://www.youtube.com/watch?v=Mbn1ckR86Z8" rel="nofollow noreferrer">3D printing guides: Calibration and why you might be doing it wrong</a></p>
| 903
|
<p>I have an old notebook computer that works just fine, but the outside of the lid is badly damaged and needs to be replaced. The screen and wiring are fine, so I only need to replace the housing that is exposed to the outside world.</p>
<p><strong>What is the best filament for an impact-resistant printed housing?</strong> Should I consider other options that may prevent damage to the internal components? Are there any alternatives with cosmetic benefits?</p>
<p>Edit:
Since I was asked, presume I may be willing to buy a new part to upgrade or accommodate a new filament type.</p>
|
<p>For casings I use a combination of TPU and PETG or PLA. PETG shell gives it rigidity and TPU gives it a bit of impact protection. So corners and inside layers of TPU within a hard PETG or PLA shell (shell has no corners).</p>
<p>I haven't had a problem with either but obviously PLA won't withstand heat very well, so it depends on environment.</p>
<p>For a laptop case you'd maybe want to do it the other way around with the outside shell of TPU and inside layers of PETG for rigidity.</p>
|
<p>If you just cared about impact resistance of the housing itself, the clear choice would be TPU, which would be basically indestructible. However, the housing is there to protect what's inside - not only from impact, but from stresses (e.g. bending) that could break it. This means you need a material that both provides rigidity and avoids breaking easily itself.</p>
<p>If you were doing an old (90s or earlier) style laptop case that's a tank, I'd actually say yeah, go with TPU 95A or higher (98A or so if you could get it) and add some reinforcement ribs/stiffeners. This stuff can be quite rigid at 100% infill, and it will hold up fine to heat, abrasion, even most chemicals. But if this is a modern slim style case, a small amount of material needs to provide a lot of rigidity and that's not going to work.</p>
<p>PLA actually fares really well here in some ways - it's one of the most rigid printable materials, and very easy to get good bonding. If you check for example CNC Kitchen's strength tests, you'll find plain PLA usually coming out on top of most comparisons. However, PLA doesn't handle heat well, which might rule it out.</p>
<p>ASA, ABS, or PC is probably your best bet, but I don't have any experience with them so I'll leave the part about them as something for another answerer to write.</p>
| 2,139
|
<p>The <code>datepicker</code> function only works on the first input box that is created.</p>
<p>I'm trying to duplicate a datepicker by cloning the <code>div</code> that is containing it.</p>
<pre><code><a href="#" id="dupMe">click</a>
<div id="template">
input-text <input type="text" value="text1" id="txt" />
date time picker <input type="text" id="example" value="(add date)" />
</div>
</code></pre>
<p>To initialize the datepicker, according to the <a href="http://docs.jquery.com/UI/Datepicker" rel="nofollow noreferrer">jQuery UI documentation</a> I only have to do <code>$('#example').datepicker();</code> and it does work, but only on the first datepicker that is created.</p>
<p>The code to duplicate the <code>div</code> is the following:</p>
<pre><code>$("a#dupMe").click(function(event){
event.preventDefault();
i++;
var a = $("#template")
.clone(true)
.insertBefore("#template")
.hide()
.fadeIn(1000);
a.find("input#txt").attr('value', i);
a.find("input#example").datepicker();
});
</code></pre>
<p>The strangest thing is that on the <code>document.ready</code> I have:</p>
<pre><code>$('#template #example').datepicker();
$("#template #txt").click(function() { alert($(this).val()); });
</code></pre>
<p>and if I click on the <code>#txt</code> it always works.</p>
|
<p>I use a CSS class instead:</p>
<pre><code><input type="text" id="BeginDate" class="calendar" />
<input type="text" id="EndDate" class="calendar" />
</code></pre>
<p>Then, in your <code>document.ready</code> function:</p>
<pre><code>$('.calendar').datepicker();
</code></pre>
<p>Using it that way for multiple calendar fields works for me.</p>
|
<p>The html I am cloning has multiple datepicker inputs.</p>
<p>Using Ryan Stemkoski's answer and Alex King's comment, I came up with this solution:</p>
<pre><code>var clonedObject = this.el.find('.jLo:last-child')
clonedObject.find('input.ui-datepicker').each(function(index, element) {
$(element).removeClass('hasDatepicker');
$(element).datepicker();
});
clonedObject.appendTo('.jLo');
</code></pre>
<p>Thanks yall.</p>
| 5,515
|
<p>I've just installed two TMC2208 drivers on my RAMPS board. I followed a very good step by step tutorial and after some issues, I got it nearly to work.</p>
<p>One problem I still have is that when I tell the printer to lift the Z axis by 5 mm, it lifts it by 10 cm.</p>
<p>I haven't changed anything regarding the steps/mm. Previously I had the Pololus, with 1/16 microstepping and now I also have 1/16 on configuration_adv.h file on Marlin 1.1.8</p>
<p>However what I noticed when doing a <code>M122</code> is a line which reads:</p>
<pre><code>msteps 256
</code></pre>
<p>which sounds like the microstepping was set at 1/256 instead.</p>
<p>Maybe somebody could tell me if I missed something?</p>
<p><strong>UPDATE:</strong></p>
<p>After some more digging into it, here is what I've done so far:</p>
<ul>
<li>Solder the pins on the driver. Original from Watterrot</li>
<li>Solder the bridge pads for enabling UART communication</li>
<li>Solder the pin for the communication heading upwards</li>
<li>Change the <code>configuration_adv.h</code> on Marlin (1.1.8) and enable all that is to enable: USE_TMC2208, Enable debugging, selecting the Z axis, etc</li>
<li>Check the pins on <code>pins_RAMPS.h</code> and make sure they are available in my setting</li>
<li>Make a Y cable with the 1 kOhm resistor for the TX pin</li>
<li>Hook everything up</li>
</ul>
<p>No matter what I did, the motor moves twice as much as requested. Although I set up 1/16 microstepping, the same I had with my Pololus, I performed the reverse calculation to find out that the actual microstepping on the driver is 1/8.</p>
<p>After more investigation, the issue seems to be that the driver is not recognized at all by the Marlin/Board. Thinking that it was a problem with the TX/RX communication, I dug into the available info out there and I found this, <a href="https://github.com/MarlinFirmware/Marlin/issues/9396" rel="nofollow noreferrer">Bug: TMC2208 UART Communication uses wrong pins for SoftwareSerial #9396</a>.</p>
<p>I proceeded to change the assigned pins for serial RX/TX, but everything is exactly the same.</p>
<p>I tried a different Arduino (original), another RAMPS board and even the 1.1.x and 2.0 bugfix branches from Marlin.</p>
<p>It seems that the driver is on "legacy" mode and software manipulation is not possible. Although I went through the steps to enable it.</p>
|
<p>I don't have these controllers, but I read that with default settings the TMC2208 will interpolate the microstepping set by the I/O configuration pins up to 256 microsteps. Please look into how you set up the DIP switches / jumper caps on your board; it seems that only two are used (MS1 and MS2). Furthermore, can't you just decrease the Z entry of the <code>DEFAULT_AXIS_STEPS_PER_UNIT</code> array in your configuration file?</p>
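<p>If the Z axis really is moving twice as far as requested, halving that single Z entry would be a quick (if inelegant) workaround. A sketch of what the line looks like in a Marlin configuration file (the values shown are placeholders, not your printer's actual settings):</p>
<pre><code>// Configuration.h -- X, Y, Z, E steps per mm (example values only)
// If Z moves 2x too far, halve the third entry, e.g. 400 -> 200
#define DEFAULT_AXIS_STEPS_PER_UNIT { 80, 80, 400, 93 }
</code></pre>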
|
<p>Most likely your issue is related to the PDN_UART pin on the TMC2208 driver board. On some manufacturers' boards the jumper is not set to UART mode by default, so you most likely need to solder the jumper into the right configuration. Look at the datasheet of your driver board, for example:
<a href="https://github.com/bigtreetech/BIGTREETECH-TMC2208-V3.0/blob/master/TMC2208-V3.0%20manual.pdf" rel="nofollow noreferrer">https://github.com/bigtreetech/BIGTREETECH-TMC2208-V3.0/blob/master/TMC2208-V3.0%20manual.pdf</a> </p>
| 904
|
<p>Our software must be able to run on SQL Server 2000 and 2005. To simplify development, we're running our SQL Server 2005 databases in compatibility level 80. However, database performance seems slower on SQL 2005 than on SQL 2000 in some cases (we have not confirmed this using benchmarks yet). Would upgrading the compatibility level to 90 improve performance on the SQL 2005 servers?</p>
|
<p>I think I read somewhere that the SQL Server 2005 database engine should be about 30% faster than the SQL Server 2000 engine. It might be that you have to run your database in compatibility mode 90 to get these benefits.</p>
<p>But I stumbled on two scenarios where performance can drop dramatically when using MSSQL 2005 compared to MSSQL 2000:</p>
<ol>
<li><p>Parameter sniffing: when using a stored procedure, SQL Server will calculate exactly one execution plan at the time you first call the procedure. The execution plan depends on the parameter values given for that call. In our case, procedures which normally took about 10 seconds were running for hours under MSSQL 2005. Take a look <a href="http://furrukhbaig.wordpress.com/category/parameter-sniffing/" rel="noreferrer">here</a> and <a href="http://www.microsoft.com/technet/prodtechnol/sql/2005/qrystats.mspx" rel="noreferrer">here</a>.</p></li>

<li><p>When using distributed queries, MSSQL 2005 behaves differently concerning assumptions about the sort order on the remote server. The default behavior is that the server copies the whole remote tables involved in a query to the local tempdb and then executes the joins locally. The workaround is to use OPENQUERY, where you can control exactly which result set is transferred from the remote server.</p></li>
</ol>
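<p>A commonly cited workaround for the first scenario (not part of the original answer) is to copy the sniffed parameter into a local variable, so the plan is built from average statistics instead of the first caller's value. A sketch in T-SQL, with invented table and column names:</p>
<pre><code>-- hypothetical procedure illustrating the local-variable workaround
CREATE PROCEDURE GetOrders @CustomerId INT
AS
BEGIN
    DECLARE @cid INT
    SET @cid = @CustomerId  -- optimizer can no longer sniff the caller's value
    SELECT * FROM Orders WHERE CustomerId = @cid
END
</code></pre>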
|
<p>Also, FYI: if you run compatibility level 90, then some things are no longer supported, like old-style outer joins <code>(*= and =*)</code>.</p>
| 2,885
|
<p>Is there any DRM or license management solution for 3D printing? I'm looking for something, that would help me limit the number of prints someone can make from my projects. Basically, I would like to sell the "right to make no more than X copies" of my design. I don't expect it to be bullet-proof (like Widevine L1 for video), but it should at least help me with license management.</p>
|
<p>Good luck with that.</p>
<p>Issues you will face:</p>
<ul>
<li>using a G-code editor (or built-in printer software) to create multiple copies of the object in a single print session</li>
<li>user writing the printer file to an SD card, then block-copying the SD card</li>
<li>defining a "print". Specifically:</li>
<li>is #2 another functional copy of the object, or did #1 fail? Failed prints happen a lot.</li>
<li>is #3 another functional copy of the object, or was #2 damaged in post-processing? Like removing supports or left it in the acetone 5 seconds too long and it's now a blob.</li>
<li>is #4 another functional copy of the object, or was #3 tossed because the guy running the printer grabbed the wrong spool and made a perfectly good print in the perfectly wrong material/colour.</li>
<li>is #5 another functional copy of the object, or was #4 damaged in shipping?</li>
</ul>
<p>The whole process is one-way. There just isn't a path for anything other than the end user to know what they pulled off the print bed. Unless you make the entire process from design software to printer you won't change that. And good luck selling it, 'cause the user base is rather strongly opposed to that kind of thing. For example, ANY kind of DRM in the product, from printer software to filament, is an immediate no-buy for me. I don't care if you are paying me to take it, I will find an alternative without that particular annoyance. And no, I don't use Microsoft or Adobe products either. </p>
|
<h2>Generally: No</h2>
<p>Let's face the obvious problems of the files exchanged and the files used for printing, and then look into why it is a bad idea in the first place.</p>
<h2>G-code</h2>
<p>G-code is in its design a .txt file that contains specific orders for a machine. There is a g-code command that forces the printer to delete the file (<a href="https://marlinfw.org/docs/gcode/M030.html" rel="nofollow noreferrer"><code>M30</code></a>) but that needs the exact file path - and just does nothing if the path is incorrect. One could make a slicer profile, that after printing, deletes the file from the SD-card, but doing so as a user is only useful for one-off jobs and it can be prevented by the user by simply shutting down the printer before the line is triggered.</p>
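<p>As an illustration of that mechanism, a self-deleting profile could at most append something like the following end-gcode (a sketch; the filename is hypothetical, and <code>M30</code> needs the file's exact path on the card):</p>
<pre><code>; hypothetical end-gcode appended by a slicer profile
M400             ; wait for all buffered moves to finish
M30 /model.gcode ; ask the firmware to delete this file from the SD card
</code></pre>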
<p>Because .gcode is essentially a .txt file with a custom extension and no special features, it can't provide DRM beyond containing a self-delete-after-completion command.</p>
<h2>STL & OBJ</h2>
<p>STL and OBJ are the most relevant exchange files. They are open source and not intended to contain any DRM.</p>
<h2>Other formats?</h2>
<p>Even with a different format, you would need both a <strong>proprietary 3D format</strong> that can only be read by your DRM-enabled slicer as well as a <strong>proprietary printer file type</strong> that both contain DRM. However, using a proprietary printer command type means you lock down your file to be only available to your printer family, which in turn makes this printer family <strong>less</strong> desirable, as it can't work with the standard g-code format.</p>
| 1,676
|
<p>Given a .SCAD file which contains some modules, how can I execute one of those modules from the command line?</p>
<p><strong>example.scad</strong></p>
<pre><code>module One() { ... }
module Two() { ... }
</code></pre>
<p><strong>render.sh</strong></p>
<pre><code>openscad -q -o one.stl --module One example.scad
</code></pre>
<p>Note that there is no <code>--module</code> option, but that is what I'm attempting to do. The workaround would be to make another .SCAD file e.g. <code>one.scad</code> which includes <code>example.scad</code>, and simply calls <code>One();</code> within and render that file from the shell file. But this is not ideal.</p>
|
<p>The <code>openscad -D</code> option can actually include arbitrary code, not just variable definitions, so you can include a call to the module. You can even use <code>/dev/null</code> on Linux or NUL on Windows as the input file and have 100 % of your code in <code>-D</code> statements.</p>
|
<p>Elaborating on the answer from @r-github-stop-helping-ice, I find the following pattern useful:</p>
<h6>example.scad</h6>
<pre><code>module a() { .. }
module b() { .. }
module c() { .. }
module print(item=0) {
if (item==1) a();
if (item==2) b();
if (item==3) c();
}
item = 0;
print(item);
</code></pre>
<h6>render.sh</h6>
<pre><code>#!/bin/bash
for i in {1..3}; do
openscad -D item=${i} -o item_${i}.stl example.scad
done
</code></pre>
| 1,786
|
<p>What kind of multi-threading issues do you have to be careful for in asp.net? </p>
|
<p>One thing to watch out for is things that expire (I think <code>HttpContext</code> does). If you are using it for "fire and forget" operations, remember that, all of a sudden, if the ASP.NET cleanup code runs before your operation is done, you won't be able to access certain information.</p>
|
<p>Programmatic Caching is one area which immediately comes to my mind. It is a great feature which needs to be used carefully. Since it is shared across requests, you have to put locks around it before updating it.</p>
<p>Another place I would check is any code accessing filesystem like writing to log files. If one request has a read-write lock on a file, other concurrent requests will error out if not handled properly.</p>
| 7,293
|
<p>I'm trying to use <code>jQuery</code> to format code blocks, specifically to add a <code><pre></code> tag inside the <code><code></code> tag:</p>
<pre><code>$(document).ready(function() {
$("code").wrapInner("<pre></pre>");
});
</code></pre>
<p>Firefox applies the formatting correctly, but IE puts the entire code block on one line. If I add an alert </p>
<pre><code>alert($("code").html());
</code></pre>
<p>I see that IE has inserted some additional text into the pre tag:</p>
<pre><code><PRE jQuery1218834632572="null">
</code></pre>
<p>If I reload the page, the number following jQuery changes.</p>
<p>If I use <code>wrap()</code> instead of <code>wrapInner()</code>, to wrap the <code><pre></code> outside the <code><code></code> tag, both IE and Firefox handle it correctly. But shouldn't <code><pre></code> work <em>inside</em> <code><code></code> as well?</p>
<p>I'd prefer to use <code>wrapInner()</code> because I can then add a CSS class to the <code><pre></code> tag to handle all formatting, but if I use <code>wrap()</code>, I have to put page formatting CSS in the <code><pre></code> tag and text/font formatting in the <code><code></code> tag, or Firefox and IE both choke. Not a huge deal, but I'd like to keep it as simple as possible.</p>
<p>Has anyone else encountered this? Am I missing something?</p>
|
<p>That's the difference between <a href="http://www.w3.org/TR/html4/struct/global.html#h-7.5.3" rel="noreferrer">block and inline</a> elements. <a href="http://www.w3.org/TR/html4/sgml/dtd.html#block" rel="noreferrer"><code>pre</code> is a block level element</a>. It's not legal to put it inside a <code>code</code> tag, which <a href="http://www.w3.org/TR/html4/struct/text.html#h-9.2.1" rel="noreferrer">can only contain inline content</a>.</p>
<p>Because browsers have to support whatever godawful tag soup they might find on the real web, Firefox tries to do what you mean. IE happens to handle it differently, which is fine by the spec; behavior in that case is unspecified, because it should never happen.</p>
<ul>
<li>Could you instead <em>replace</em> the <code>code</code> element with the <code>pre</code>? (Because of the block/inline issue, technically that should only work if the elements are inside <a href="http://www.w3.org/TR/html4/sgml/dtd.html#flow" rel="noreferrer">an element with "flow" content</a>, but the browsers might do what you want anyway.)</li>
<li>Why is it a <code>code</code> element in the first place, if you want <code>pre</code>'s behavior?</li>
<li>You could also give the <code>code</code> element <code>pre</code>'s whitespace preserving power with the CSS <a href="http://www.blooberry.com/indexdot/css/properties/text/whitespace.htm" rel="noreferrer"><code>white-space: pre</code></a>, but apparently <a href="http://www.quirksmode.org/css/whitespace.html" rel="noreferrer">IE 6 only honors that in Strict Mode</a>.</li>
</ul>
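<p>As a sketch of the third option, a single CSS rule can give the existing <code>code</code> elements the whitespace handling of <code>pre</code>, avoiding the invalid nesting entirely (the class name is made up):</p>
<pre><code>/* preserve whitespace and line breaks inside the code element */
code.preserve { white-space: pre; display: block; }
</code></pre>
<p>Add the class to the markup (or target <code>code</code> directly), keeping in mind the IE 6 Strict Mode caveat above.</p>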
|
<p>Are you using the latest jQuery ?
What if you try </p>
<pre><code>$("code").wrapInner(document.createElement("pre"));
</code></pre>
<p>Is it any better or do you get the same result ?</p>
| 3,387
|
<p>I get this error on an update panel within a popupControlExtender which is within a dragPanelExtender.</p>
<p>I see that a lot of other people have this issue and have various fixes none of which have worked for me.</p>
<p>I would love to hear a logical explanation for why this is occurring and a foolproof way to avoid such issues in the future.</p>
<p>I have found that like others maintain this error does not occur when the trigger is a LinkButton rather than an ImageButton, still wondering if anyone has an explanation.</p>
|
<p>My best guess is that the UpdatePanel is not able to write out the custom "async" property to the postback request properly. This is likely due to blocking from one of the controls wrapping it (my gut feeling is that it's the PopupControlExtender - it tends to have odd behavior with UpdatePanels, as it is intended to manage the events inside it for its show/hide purposes).</p>
<p>I would recommend either removing the updatepanel and rolling your own solution for your specific business need for having it there, or implementing your own popup script (probably slightly easier to write).</p>
<p>Incidentally, for some background, the "this._postbackSettings.async" is your AJAX.NET framework trying to figure out whether this is an async call or not. You might be able to overcome it by setting this programmatically before the postback is sent (catch the postback event and add the field to the postback request if it is not already there).</p>
<p>Just some thoughts...I do not believe there is a "plug and play" answer for this one!</p>
|
<p>Settign "EnablePartialRendering" to false on the ScriptManager control prevents the error, but it is not an optimal solution. Losing the benefit of partial rendering could be a big deal, depending on your application.</p>
<p>Just for the record, I wasn't doing exactly the same as other folks who saw the error. I have a PopupControlExtender, in which is a checkboxlist. I added a "select all" link with a javascript method to programmatically select/deselect all. I'm not using an Imagebutton. I didn't see the error before adding the javascript and now even after removing it the error remains. There has to be another change I'm missing.</p>
<p>I hope this helps someone...</p>
<p>--Matt</p>
| 8,191
|
<p>I've built a 3D printer from sourced parts and mounted the hotend cooler to blow air over the heatsink.</p>
<p>Talking to a friend, he said it's better to reverse the airflow over the heatsink, but he couldn't give me an argument other than that everywhere he saw it, it was like this: all coolers are mounted to suck the hot air away from the heatsink.</p>
<p>Is one way better than the other? And if so, why?</p>
|
<p>Getting hot air from the radiator is definitely the wrong idea, for a few reasons:</p>
<ol>
<li>hot air can damage your fan (fans are usually not heatproof), while cold air cools the fan</li>
<li>cold air is denser, so the fan can move more cold air than hot air, making cooling more efficient (fig. A)</li>
<li>if the radiator is clogged with dust, it's much better to push cold air into the clogged radiator (fig. C) than to suck the air through it (fig. B). When the flow is decreased, there is a higher chance of damaging the fan, as it doesn't cool itself</li>
</ol>
<p>Of course, we are talking about 3D printers with small plastic fans. Notice the construction of (almost) all fans: they are designed to take air in at the top and push it out at the bottom (this prevents the fan motor being damaged or clogged by dust)... and their mounting holes (or clips) usually naturally suggest directing the air at the heatsink.</p>
<p><a href="https://i.stack.imgur.com/DMoSn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DMoSn.png" alt="enter image description here"></a></p>
|
<p>In the case of 3D Printing, it's going to be faster to cool the radiator by blowing out the heat from the source.</p>
<p>The idea is that you're trying to get rid of as much heat as possible in the quickest means possible. By blowing away from the radiator, you're allowing the ambient temperature to cool the hot air being blown out.</p>
<p>If you were to blow the ambient air towards the radiator, the blown air would warm slightly as it is overwhelmed by the heat of the radiator. Even though the ambient air may be cooler, it would take more time to cool off the radiator.</p>
<p>Example of bad airflow, which will take much longer to cool the radiator:
<a href="https://i.stack.imgur.com/N6U9u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N6U9u.png" alt="enter image description here"></a></p>
<p>Example of good airflow, allowing the hot air to be quickly cooled by the ambient temperature of the build space:
<a href="https://i.stack.imgur.com/RRqeg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RRqeg.png" alt="enter image description here"></a></p>
| 481
|
<p>I deleted a branch in TFS and just found out that I need the changes that were on it.</p>
<p>How do I recover the branch or the changes done on it?</p>
|
<p>Specifically in Visual Studio go to "Tools-Options" then Select "Source Control-visual Studio Team Founation Server" and check the "Show deleted items in the Source Control explorer".</p>
<p>Having done that - you can then right click a folder and say "Undelete"</p>
|
<p>As described in the <a href="http://blogs.msdn.com/jimlamb/articles/tfsfaq.aspx" rel="noreferrer">TFS FAQ</a>:</p>
<p><strong>Are Deletes physical or logical? Can accidental deletes be recovered?</strong></p>
<p>Deletes are fully recoverable with the “undelete” operation. You wouldn’t want to do a SQL restore because that would roll back every change to the TFS in the time since the file was deleted.</p>
| 7,203
|
<p>When designing a database to use MVCC (Multi-Version Concurrency Control), you create tables with either a boolean field like "IsLatest" or an integer "VersionId", and you never do any updates; you only insert new records when things change.</p>
<p>MVCC gives you automatic auditing for applications that require a detailed history, and it also relieves pressure on the database with regards to update locks. The cons are that it makes your data size much bigger and slows down selects, due to the extra clause necessary to get the latest version. It also makes foreign keys more complicated.</p>
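<p>For concreteness, the "extra clause" needed to fetch only the latest version looks something like this (a sketch with invented table and column names, using the <code>VersionId</code> variant):</p>
<pre><code>-- select only the newest version of each product
SELECT pv.*
FROM ProductVersion pv
WHERE pv.VersionId = (SELECT MAX(v.VersionId)
                      FROM ProductVersion v
                      WHERE v.ProductId = pv.ProductId)
</code></pre>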
<p>(Note that I'm <em>not</em> talking about the native MVCC support in RDBMSs like SQL Server's snapshot isolation level)</p>
<p>This has been discussed in other posts here on Stack Overflow. [todo - links]</p>
<p>I am wondering, which of the prevalent entity/ORM frameworks (Linq to Sql, ADO.NET EF, Hibernate, etc) can cleanly support this type of design? This is a major change to the typical ActiveRecord design pattern, so I'm not sure if the majority of tools that are out there could help someone who decides to go this route with their data model. I'm particularly interested in how foreign keys would be handled, because I'm not even sure of the best way to data model them to support MVCC.</p>
|
<p>I might consider implementing the MVCC tier purely in the DB, using stored procs and views to handle my data operations. Then you could present a reasonable API to any ORM that was capable of mapping to and from stored procs, and you could let the DB deal with the data integrity issues (since it's pretty much built for that). If you went this way, you might want to look at a purer mapping solution like IBatis or IBatis.net.</p>
|
<p>I always figured you'd use a db trigger on update and delete to push those rows out into a TableName_Audit table.</p>
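<p>A sketch of such a trigger in T-SQL, with invented table and column names:</p>
<pre><code>-- copy the pre-change rows into the audit table on update and delete
CREATE TRIGGER trg_Product_Audit ON Product
AFTER UPDATE, DELETE
AS
BEGIN
    INSERT INTO Product_Audit (ProductId, Name, Price, AuditDate)
    SELECT ProductId, Name, Price, GETDATE()
    FROM deleted  -- "deleted" holds the rows as they were before the change
END
</code></pre>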
<p>That'd work with ORMs, give you your history and wouldn't decimate select performance on that table. Is that a good idea or am I missing something?</p>
| 6,889
|
<p>Can i print out a url <code>/admin/manage/products/add</code> of a certain view in a template?</p>
<p>Here is the rule i want to create a link for</p>
<pre><code>(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
</code></pre>
<p>I would like to have /manage/products/add in a template without hardcoding it. How can i do this?</p>
<p><strong>Edit:</strong> I am not using the default admin (well, i am but it is at another url), this is my own</p>
|
<p>You can use <code>get_absolute_url</code>, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.</p>
<p>You want to use <a href="https://docs.djangoproject.com/en/1.2/topics/http/urls/#naming-url-patterns" rel="nofollow noreferrer">named URL patterns</a>. Here's a quick intro:</p>
<p>Change the line in your urls.py to:</p>
<pre><code>(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, "create-product"),
</code></pre>
<p>Then, in your template you use this to display the URL:</p>
<pre><code>{% url create-product %}
</code></pre>
<p>If you're using Django 1.5 or higher you need this:</p>
<pre><code>{% url 'create-product' %}
</code></pre>
<p>You can do some more powerful things with named URL patterns, they're very handy. Note that named URL patterns require Django 1.0 or later (at the time of writing, the development version).</p>
|
<p>The preferred way of creating the URL is by adding a <code>get_absolute_url</code> method to your model classes. You can hardcode the path there so you at least get closer to following the KISS philosophy.</p>
<p>You can go further by utilizing the <code>permalink</code> decorator that figures the path based on the urls configuration.</p>
<p>You can read more in the django documentation <a href="https://docs.djangoproject.com/en/dev/ref/models/instances/#get-absolute-url" rel="nofollow noreferrer">here</a>.</p>
| 6,941
|
<p>Which class design is better and why?</p>
<pre><code>public class User
{
public String UserName;
public String Password;
public String FirstName;
public String LastName;
}
public class Employee : User
{
public String EmployeeId;
public String EmployeeCode;
public String DepartmentId;
}
public class Member : User
{
public String MemberId;
public String JoinDate;
public String ExpiryDate;
}
</code></pre>
<p>OR</p>
<pre><code>public class User
{
public String UserId;
public String UserName;
public String Password;
public String FirstName;
public String LastName;
}
public class Employee
{
public User UserInfo;
public String EmployeeId;
public String EmployeeCode;
public String DepartmentId;
}
public class Member
{
public User UserInfo;
public String MemberId;
public String JoinDate;
public String ExpiryDate;
}
</code></pre>
|
<p>The question is simply answered by recognising that inheritance models an "IS-A" relationship, while membership models a "HAS-A" relationship.</p>
<ul>
<li>An employee IS A user</li>
<li>An employee HAS A userinfo</li>
</ul>
<p>Which one is correct? This is your answer.</p>
|
<p>Three more options:</p>
<ol>
<li><p>Have the <code>User</code> class contain the supplemental information for both employees and members, with unused fields blank (the <code>ID</code> of a particular <code>User</code> would indicate whether the user was an employee, member, both, or whatever).</p></li>
<li><p>Have an <code>User</code> class which contains a reference to an <code>ISupplementalInfo</code>, where <code>ISupplementalInfo</code> is inherited by <code>ISupplementalEmployeeInfo</code>, <code>ISupplementalMemberInfo</code>, etc. Code which is applicable to all users could work with <code>User</code> class objects, and code which had a <code>User</code> reference could get access to a user's supplemental information, but this approach would avoid having to change <code>User</code> if different combinations of supplemental information are required in future.</p></li>
<li><p>As above, but have the <code>User</code> class contain some kind of collection of <code>ISupplementalInfo</code>. This approach would have the advantage of facilitating the run-time addition of properties to a user (e.g. because a <code>Member</code> got hired). When using the previous approach, one would have to define different classes for different combinations of properties; turning a "member" into a "member+customer" would require different code from turning an "employee" into an "employee+customer". The disadvantage of the latter approach is that it would make it harder to guard against redundant or inconsistent attributes (using something like a <code>Dictionary<Type, ISupplementalInfo></code> to hold supplemental information could work, but would seem a little "bulky").</p></li>
</ol>
<p>I would tend to favor the second approach, in that it allows for future expansion better than would direct inheritance. Working with a collection of objects rather than a single object might be slightly burdensome, but that approach may be better able than the others to handle changing requirements.</p>
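<p>A minimal sketch of the second and third options combined (Python for brevity; the class and field names here are illustrative, not from the original):</p>

```python
class User:
    """Core identity; role-specific data is attached, not inherited."""
    def __init__(self, user_name):
        self.user_name = user_name
        self._supplemental = {}            # maps info type -> info object

    def attach(self, info):
        key = type(info)
        if key in self._supplemental:      # guard against redundant attributes
            raise ValueError(f"{key.__name__} already attached")
        self._supplemental[key] = info

    def get(self, info_type):
        return self._supplemental.get(info_type)

class EmployeeInfo:
    def __init__(self, employee_id, department_id):
        self.employee_id = employee_id
        self.department_id = department_id

class MemberInfo:
    def __init__(self, member_id, join_date):
        self.member_id = member_id
        self.join_date = join_date

u = User("jdoe")
u.attach(MemberInfo("M9", "2008-01-15"))
u.attach(EmployeeInfo("E42", "D7"))        # a Member who later got hired
print(u.get(EmployeeInfo).employee_id)     # E42
print(u.get(MemberInfo) is not None)       # True
```

<p>Because the supplemental records are keyed by type, turning a member into a member-plus-employee at run time is just another <code>attach</code> call, while the duplicate-key guard keeps attributes from becoming redundant.</p>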
| 5,978
|
<p>How do you access a SQL Server 2005 Express Edition from a application in a network computer?</p>
<p>The access I need is both from application (Linq-to-SQL and ODBC) and from Management Studio</p>
|
<p>See this KB Article. <a href="http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277" rel="noreferrer">How to configure SQL Server 2005 to allow remote connections</a>.<br>
Oh, and remember that the SQL Server instance name will probably be <code>MyMachineName\SQLExpress</code></p>
|
<p>If you're running it on a 2k3 box, you need to install all updates for Sql Server and the 2003 server. </p>
<p>Check the event logs after you start the Sql Server. It logs everything well, telling you if its being blocked, and where it is listening for connections.</p>
<p>From a remote machine, you can use telnet to see if a sql server is listening for remote connections. You just need the IP and the port of the server (default is 1433). From the command line:</p>
<pre><code>telnet 192.168.10.10 1433
</code></pre>
<p>If you get a blank screen, its listening. If you get thrown back to the command prompt, something is blocking you.</p>
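<p>The same telnet-style check can be scripted; here is a rough Python equivalent (the function name and defaults are my own, not part of the original answer):</p>

```python
import socket

def port_open(host, port, timeout=3.0):
    """Telnet-style reachability check: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# port_open('192.168.10.10', 1433) mirrors `telnet 192.168.10.10 1433`
print(port_open('127.0.0.1', 1, timeout=1.0))  # port 1 is almost never open
```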
| 3,081
|
<p>I did this Just for kicks (so, not exactly a question, i can see the downmodding happening already) but, in lieu of Google's newfound <a href="http://www.google.com/search?hl=en&q=1999999999999999-1999999999999995&btnG=Search" rel="nofollow noreferrer">inability</a> to do <a href="http://www.google.com/search?hl=en&q=400000000000002-400000000000001&btnG=Search" rel="nofollow noreferrer">math</a> <a href="http://www.google.com/search?hl=en&q=10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001-10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000&btnG=Search" rel="nofollow noreferrer">correctly</a> (check it! according to google 500,000,000,000,002 - 500,000,000,000,001 = 0), i figured i'd try the following in C to run a little theory.</p>
<pre><code>#include <stdio.h>
#include <stdlib.h>

int main()
{
    char* a = "399999999999999";
    char* b = "399999999999998";
    float da = atof(a);
    float db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);

    a = "500000000000002";
    b = "500000000000001";
    da = atof(a);
    db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);
    return 0;
}
</code></pre>
<p>When you run this program, you get the following</p>
<pre><code> 399999999999999 - 399999999999998 = 0.000000
500000000000002 - 500000000000001 = 0.000000
</code></pre>
<p>It would seem like Google is using simple 32-bit floating-point precision (hence the error here); if you switch <code>float</code> for <code>double</code> in the above code, you fix the issue! Could this be it?</p>
<p>/mp</p>
|
<p>In C#, try <code>(double.MaxValue == (double.MaxValue - 100))</code>; you'll get <code>true</code>...</p>
<p>But that's what it is supposed to be:</p>
<p><a href="http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems</a> </p>
<p>Thinking about it, you have 64 bits representing a number greater than 2^64 (<code>double.MaxValue</code>), so inaccuracy is expected.</p>
|
<blockquote>
<p>2^64 is not the maximum value of a double. 2^64 is the number of unique values that a double (or any other 64-bit type) can hold. Double.MaxValue is equal to 1.79769313486232e308.</p>
</blockquote>
<p>Not even; the IEEE encodings use multiple encodings for the same values. Specifically, NaN is represented by an exponent of all-bits-1, and then <em>any</em> non-zero value for the mantissa. As such, there are 2<sup>52</sup> NaNs for doubles, 2<sup>23</sup> NaNs for singles.</p>
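<p>Both points (the question's 32-bit rounding and the many NaN encodings) can be checked directly, for instance with Python's <code>struct</code> module:</p>

```python
import math
import struct

def to_float32(x):
    # Round-trip through IEEE 754 binary32 (what C's `float` usually is)
    return struct.unpack('f', struct.pack('f', x))[0]

a, b = to_float32(500000000000002.0), to_float32(500000000000001.0)
print(a - b)   # 0.0: both collapse to the same 24-bit significand
print(500000000000002.0 - 500000000000001.0)   # 1.0 with 64-bit doubles

# Many distinct bit patterns are all NaN: exponent all ones, significand nonzero
nan1 = struct.unpack('>f', bytes.fromhex('7fc00000'))[0]
nan2 = struct.unpack('>f', bytes.fromhex('7fc00001'))[0]
print(math.isnan(nan1) and math.isnan(nan2))   # True
```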
| 4,675
|
<p>I've got a Renkforce RF1000 which should be a good 3D printer. I got it second-hand for my birthday one year ago. I've got no way of contacting the old owner.</p>
<p>I spend a good amount of hours fine-tuning the slicer settings last year but at best got mediocre prints. Between September and a week ago I lived somewhere else and didn't touch my printer.</p>
<p>Now here's what I don't know:</p>
<ul>
<li>I don't know what parts are replaced</li>
<li>I don't know what my nozzle size is</li>
<li>I don't know if the limit switches are calibrated correctly
<ul>
<li>Though I think they are. This doesn't seem to be a problem. I did re-calibrate the Z-axis</li>
</ul>
</li>
</ul>
<p>Here are some important details:</p>
<ul>
<li>I use 3 mm Renkforce PLA filament which I print at 190 °C on a bed heated at 60 °C. The PLA is over one year old now.</li>
<li>There's a fan on the motor on top that isn't connected to anything.</li>
</ul>
<p>Here are some of the problems I've got:</p>
<ul>
<li>I've had multiple prints failing due to the extruder not working properly. The motor keeps on spinning but the "feed knurl" remains stationary</li>
<li>I can't seem to get the right retraction settings</li>
<li>I can't seem to get my prints to consistently stick. It tends to work when I heat the bed to 60 °C and use glue and get lucky.</li>
</ul>
<p>Feel free to give any thoughts you've got. These are the most important questions I've got:</p>
<ol>
<li>Should I replace the nozzle with <a href="https://www.conrad.com/p/renkforce-nozzle-v2-03-mm-suitable-for-3d-printer-renkforce-rf1000-renkforce-rf2000-1296238?searchTerm=renkforce%20nozzle&searchType=suggest&searchSuggest=product" rel="nofollow noreferrer">this one</a> so that I know what nozzle I've got and so I'm sure this isn't a problem?</li>
<li>Should I replace the filament with new 1.75 mm PLA? If so, why?</li>
<li>How do I fix the extruder?</li>
</ol>
<ul>
<li>I tried getting the "feed knurl" off but can't seem to do this easily. I've got some super glue I could try to put in there but something's telling me this might be a very bad idea...</li>
</ul>
<ol start="4">
<li>Is the unconnected fan important and if so: what do I do with it? There's no remaining wire to connect it to.</li>
<li>How tight should the 4 screws that hold the filament between the extruder and the rolling disk be?</li>
</ol>
<p>For now, these are all hardware problems. I can post my Slic3r settings too but I believe the hardware should be fixed before going into slicer settings.</p>
<p>Here are some pictures showing the problems:</p>
<p><a href="https://i.stack.imgur.com/ynZ9v.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ynZ9v.jpg" alt="Top image one" /></a></p>
<p>This is the extruder. The feeding mechanism can be seen in front. It shows the "feed knurl" of which the inside spins while the outside remains stationary (question 3). Next to it are 4 screws which determine how tight the filament is held against the extruder (question 5). On the back it shows a black fan, this got placed by the previous owner but isn't connected (question 4).</p>
<p><a href="https://i.stack.imgur.com/Cu0UM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cu0UM.jpg" alt="Image unconnected fan with unscrewed extruder" /></a></p>
<p>This image shows the unconnected fan (question 4) to the right. Behind it is the motor that's connected to the extruder. The motor works but the extruder doesn't spin with it. The extruder has a little black hole on top.</p>
<p><a href="https://i.stack.imgur.com/WyWj2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WyWj2.jpg" alt="Front view extruder" /></a></p>
<p>This shows the extruder from the front. The inner layer spins, the outer layer doesn't (question 3)</p>
<p><a href="https://i.stack.imgur.com/yZd1o.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yZd1o.jpg" alt="Failed print example" /></a></p>
<p>These are some of the prints when the extruder was still working.</p>
<p><a href="https://i.stack.imgur.com/zdToY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zdToY.jpg" alt="Nozzle and print bed" /></a></p>
<p>Nozzle and print bed (question 1)</p>
|
<p>One hundred percent infill is not necessarily stronger than lower values. By having such a high infill figure, the forces on the model as it cools are magnified and not in a particularly good manner.</p>
<p>Consider that you could use twenty to thirty percent infill to get the strength you require for this application, saving filament and time for the print. You've not noted how many wall layers were used, but for increased strength, four to five would make for a very strong model.</p>
|
<p>The reason for this sort of error might be one of the following:</p>
<ol>
<li>a clogged nozzle (<a href="https://www.youtube.com/watch?v=bg4sOaSvimY" rel="nofollow noreferrer">try doing this</a>)</li>
<li>a disturbed bed level (<a href="https://www.youtube.com/watch?v=lL3Gmy4hh3Y" rel="nofollow noreferrer">resolve this issue</a>)</li>
<li>poor filament quality</li>
</ol>
| 1,644
|
<p>I am using a Formlabs Form 3 printer with clear resin. After printing the model, I wash it with isopropyl alcohol and dry it. Then I cure it using the Formlabs Form Cure for 5 minutes at 60 °C.
After curing the model, the clear print loses some of its transparency.</p>
<p>Is this normal? can it be avoided?</p>
|
<p>This happens to most resins, and the amount of haziness is directly related to the type of resin. Not all clear resins do this, mind you; it has to do with the curing spectrum of light (natural sunlight cures do this far worse).</p>
|
<p>Clouding is a known issue with colored transparent resins, as is yellowing with clear resin.</p>
<p>Uncle Jessy did quite a good video explaining the issue and how to best avoid it.</p>
<p>The conclusion was that you should wash and dry them with as little UV exposure as possible (Drying them inside a box in a warm room rather than in direct sunlight), then coating them with Clear Coat lacquer or a similar product, then curing them.</p>
<p><a href="https://www.youtube.com/watch?v=1Ya0DSVYXsE&t=5s" rel="nofollow noreferrer">Uncle Jessy's video on resin clouding and yellowing</a></p>
| 2,097
|
<p>In OS X, in order to quickly get at menu items from the keyboard, I want to be able to type a key combination, have it run a script, and have the script focus the Search field in the Help menu. It should work just like the key combination for Spotlight, so if I run it again, it should dismiss the menu. I can run the script with Quicksilver, but how can I write the script?</p>
|
<p>Here is the script I came up with.</p>
<pre><code>tell application "System Events"
tell (first process whose frontmost is true)
click menu "Help" of menu bar 1
end tell
end tell
</code></pre>
|
<p>Here is the script I came up with.</p>
<pre><code>tell application "System Events"
tell (first process whose frontmost is true)
click menu "Help" of menu bar 1
end tell
end tell
</code></pre>
| 9,477
|
<p>I'm trying to install <a href="http://godi.camlcity.org/godi/index.html" rel="noreferrer">GODI</a> on linux (Ubuntu). It's a library management tool for the ocaml language. I've actually installed this before --twice, but awhile ago-- with no issues --that I can remember-- but this time I just can't figure out what I'm missing.</p>
<pre><code>$ ./bootstrap --prefix /home/nlucaroni/godi
$ ./bootstrap_stage2
.: 1: godi_confdir: not found
Error: Command fails with code 2: /bin/sh
Failure!
</code></pre>
<p>I had added the proper directories to the path, and they show up with a quick <code>echo $PATH</code>, and <code>godi_confdir</code> reported as being:</p>
<pre><code> /home/nlucaroni/godi/etc
</code></pre>
<p>(...and the directory exists, with the godi.conf file present). So, I can't figure out why <code>./bootstrap_stage2</code> isn't working.</p>
|
<p>What is the output of <code>which godi_confdir</code>?</p>
<p>P.S. I remember having this exact same problem, but I don't remember precisely how I fixed it.</p>
|
<p>What is the output of <code>which godi_confdir</code>?</p>
<p>P.S. I remember having this exact same problem, but I don't remember precisely how I fixed it.</p>
| 6,952
|
<p>We are rewriting our legacy <a href="https://en.wikipedia.org/wiki/Accounting_information_system" rel="nofollow noreferrer">accounting system</a> in VB.NET and SQL Server. We brought in a new team of .NET/ SQL Programmers to do the rewrite. Most of the system is already completed with the dollar amounts using floats. The legacy system language, I programmed in, did not have a float, so I probably would have used a decimal.</p>
<p>What is your recommendation?</p>
<p>Should the float or decimal data type be used for dollar amounts?</p>
<p>What are some of the pros and cons for either?</p>
<p>One <em>con</em> mentioned in our <a href="https://en.wikipedia.org/wiki/Scrum_%28software_development%29#Daily_scrum" rel="nofollow noreferrer">daily scrum</a> was you have to be careful when you calculate an amount that returns a result that is over two decimal positions. It sounds like you will have to round the amount to two decimal positions.</p>
<p>Another <em>con</em> is all displays and printed amounts have to have a <em>format statement</em> that shows two decimal positions. I noticed a few times where this was not done and the amounts did not look correct. (i.e. 10.2 or 10.2546)</p>
<p>A <em>pro</em> is the float-only approach takes up eight bytes on disk where the decimal would take up nine bytes (decimal 12,2).</p>
|
<blockquote>
<p>Should Float or Decimal data type be used for dollar amounts?</p>
</blockquote>
<p>The answer is easy. Never floats. <em>NEVER</em>!</p>
<p>Floats were according to <a href="http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=4610933" rel="nofollow noreferrer">IEEE 754</a> always binary, only the new standard <a href="http://www.intel.com/technology/itj/2007/v11i1/s2-decimal/1-sidebar.htm" rel="nofollow noreferrer">IEEE 754R</a> defined decimal formats. Many of the fractional binary parts can never equal the exact decimal representation.</p>
<p>Any binary number can be written as <code>m/2^n</code> (<code>m</code>, <code>n</code> positive integers), any decimal number as <code>m/(2^n*5^n)</code>.
As binaries lack the prime <code>factor 5</code>, all binary numbers can be exactly represented by decimals, but not vice versa.</p>
<pre><code>0.3 = 3/(2^1 * 5^1) = 0.3
0.3 = [0.25/0.5] [0.25/0.375] [0.25/0.3125] [0.28125/0.3125] ...
         1/4         1/8          1/16           1/32
</code></pre>
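<p>This can be verified in Python with the <code>fractions</code> module, which exposes the exact binary value a float literal actually stores (an illustration of the claim above, not part of the original answer):</p>

```python
from fractions import Fraction

# The double that the literal 0.3 actually stores, as an exact fraction:
print(Fraction(0.3))                     # 5404319552844595/18014398509481984
print(Fraction(0.3) == Fraction(3, 10))  # False: 3/10 has no finite binary form

# The other direction always works: every binary fraction is a finite decimal
print(Fraction(1, 8))                    # 1/8, i.e. exactly 0.125
print(float(Fraction(1, 8)) == 0.125)    # True
```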
<p>So you end up with a number either higher or lower than the given decimal number. Always.</p>
<p>Why does that matter? Rounding.</p>
<p>Normal rounding means 0..4 down, 5..9 up. So it <em>does</em> matter if the result is
either <code>0.049999999999</code>... or <code>0.0500000000</code>... You may know that it means 5 cents, but the computer does not know that and rounds <code>0.0499</code>... down (wrong) and <code>0.0500</code>... up (right).</p>
<p>Given that the result of floating point computations always contain small error terms, the decision is pure luck. It gets hopeless if you want decimal round-to-even handling with binary numbers.</p>
<p>Unconvinced? You insist that in your account system everything is perfectly ok?
Assets and liabilities equal? Ok, then take each of the given formatted numbers of each entry, parse them and sum them with an independent decimal system!</p>
<p>Compare that with the formatted sum. Oops, there is something wrong, isn't it?</p>
<blockquote>
<p>For that calculation, extreme accuracy and fidelity was required (we used Oracle's
FLOAT) so we could record the "billionths of a penny" being accrued.</p>
</blockquote>
<p>It doesn't help against this error. Because all people automatically assume that the computer sums right, and practically no one checks independently.</p>
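<p>The drift and the rounding hazard are easy to reproduce; Python's <code>decimal</code> module is used here purely as an illustration of exact base-10 arithmetic with explicit rounding:</p>

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Summing a price three times in binary floating point drifts immediately:
total = 0.10 + 0.10 + 0.10
print(total)                    # 0.30000000000000004
print(total == 0.30)            # False

# Decimal stores exact base-10 values, so sums and comparisons behave:
d = Decimal('0.10') + Decimal('0.10') + Decimal('0.10')
print(d == Decimal('0.30'))     # True

# And rounding is explicit and predictable (banker's rounding here):
print(Decimal('0.045').quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN))  # 0.04
print(Decimal('0.055').quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN))  # 0.06
```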
|
<p>Your accountants will want to control how you round. Using float means that you'll be constantly rounding, usually with a <code>FORMAT()</code> type statement, which isn't the way you want to do it (use <code>floor</code> / <code>ceiling</code> instead).</p>
<p>You have currency datatypes (<code>money</code>, <code>smallmoney</code>), which should be used instead of float or real. Storing decimal (12,2) will eliminate your roundings, but will also eliminate them during intermediate steps - which really isn't what you'll want at all in a financial application.</p>
| 8,718
|
<p>What is the best way of creating an alphabetically sorted list in Python?</p>
|
<p>Basic answer:</p>
<pre><code>mylist = ["b", "C", "A"]
mylist.sort()
</code></pre>
<p>This modifies your original list (i.e. sorts in-place). To get a sorted copy of the list, without changing the original, use the <a href="http://docs.python.org/library/functions.html#sorted" rel="noreferrer"><code>sorted()</code></a> function:</p>
<pre><code>for x in sorted(mylist):
    print(x)
</code></pre>
<p>However, the examples above are a bit naive, because they don't take locale into account, and perform a case-sensitive sorting. You can take advantage of the optional parameter <code>key</code> to specify custom sorting order (the alternative, using <code>cmp</code>, is a deprecated solution, as it has to be evaluated multiple times - <code>key</code> is only computed once per element).</p>
<p>So, to sort according to the current locale, taking language-specific rules into account (<a href="http://docs.python.org/library/functools.html#functools.cmp_to_key" rel="noreferrer"><code>cmp_to_key</code></a> is a helper function from functools):</p>
<pre><code>sorted(mylist, key=cmp_to_key(locale.strcoll))
</code></pre>
<p>And finally, if you need, you can specify a <a href="http://docs.python.org/library/locale.html" rel="noreferrer">custom locale</a> for sorting:</p>
<pre><code>import locale
from functools import cmp_to_key
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') # vary depending on your lang/locale
assert sorted((u'Ab', u'ad', u'aa'),
key=cmp_to_key(locale.strcoll)) == [u'aa', u'Ab', u'ad']
</code></pre>
<p>Last note: you will see examples of case-insensitive sorting which use the <code>lower()</code> method - those are incorrect, because they work only for the ASCII subset of characters. Those two are wrong for any non-English data:</p>
<pre><code># this is incorrect!
mylist.sort(key=lambda x: x.lower())
# alternative notation, a bit faster, but still wrong
mylist.sort(key=str.lower)
</code></pre>
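<p>If locale machinery is unavailable, <code>str.casefold</code> (Python 3) is a safer case-insensitive key than <code>lower()</code>, although it still ignores locale collation order:</p>

```python
words = ['banana', 'Apple', 'Cherry', 'apple']

print(sorted(words))                    # ['Apple', 'Cherry', 'apple', 'banana']
print(sorted(words, key=str.casefold))  # ['Apple', 'apple', 'banana', 'Cherry']

# casefold handles mappings lower() misses, e.g. the German sharp s
print('Straße'.casefold() == 'strasse')  # True
```

<p>Note that ties between keys (<code>'Apple'</code> and <code>'apple'</code>) keep their original relative order, because Python's sort is stable.</p>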
|
<p>Suppose <code>s = "ZWzaAd"</code> </p>
<p>To sort above string the simple solution will be below one.</p>
<pre><code>print ''.join(sorted(s))
</code></pre>
| 5,683
|
<p>I am designing a part that has to clamp around a 11mm bushing, and due to other design constraints, it has to be printed with a semicircle-shaped overhang:</p>
<p><a href="https://i.stack.imgur.com/LcLSm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LcLSm.png" alt="enter image description here"></a></p>
<p>This is proving very challenging to print. Two copies of this part have to clamp tightly around the bushing in all directions. Support material is rather hard to remove from the very top of the arc (where the overhang angle is the highest) and I often end up removing just too little of the support material (so the part doesn't fit around the bushing) or too much (and the bushing can wobble around).</p>
<p>Is there any way I can modify the design of this part (bearing in mind that it absolutely has to be printed in this orientation) to make it more tolerant of my inaccuracy when removing supports, or is there perhaps some way to manually design supports that are easier to remove (Simplify3D and Cura both don't quite cut it)?</p>
|
<p>You could modify it as shown in my picture. I added lines tangent to the 11 mm circle and in this example set them to a 40 degree overhang, which should be fine. The top line is also tangent to the circle; in my experience it's easier to bridge one small section than to print a bunch of shallow overhangs like an arc would require. You still end up with quite a bit of contact while remaining easy to print.</p>
<p>I did something similar to this on <a href="https://github.com/limited660/E1x/blob/master/STL%20Files/X_Axis_Idler_Block_-_1x.stl" rel="noreferrer">my printer</a> where the smooth rods enter horizontal holes.</p>
<p><a href="https://i.stack.imgur.com/ByUUM.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ByUUM.jpg" alt="Angled Overhang"></a></p>
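<p>If you want to sanity-check that geometry numerically, here is a small sketch (it assumes the 40 degree lines are measured from the horizontal and that the flat roof is tangent to the top of the circle; the function name is mine):</p>

```python
import math

def overhang_geometry(radius_mm, overhang_deg):
    """Where a tangent line at `overhang_deg` from horizontal touches the
    circle, and how wide the flat roof bridging the top has to be."""
    a = math.radians(overhang_deg)
    tangent_height = radius_mm * math.cos(a)        # tangency height above centre
    bridge_span = 2 * radius_mm * math.tan(a / 2)   # flat top between the lines
    return tangent_height, bridge_span

h, span = overhang_geometry(11 / 2, 40)   # 11 mm bushing, 40 degree tangents
print(f"tangency {h:.2f} mm above centre, bridge span {span:.2f} mm")
# tangency 4.21 mm above centre, bridge span 4.00 mm
```

<p>So the bridge at the top ends up around 4 mm wide instead of the full 11 mm diameter, which is an easy span for most printers.</p>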
|
<blockquote>
<p>if I printed it in a different orientation it would delaminate too easily</p>
</blockquote>
<p>If you flip it around, the orientation of the layers would be the same, except that you would print from top to bottom instead of bottom to top.</p>
<p>With the arch opening to the top, there would be <strong>no overhang</strong>. Without overhang, you don't need support structures (red). The holes for screw heads have flat roofs (green) that the printer should be able to gap by moving over them quickly.</p>
<p>Here's an image that shows the original orientation at the top and my proposed flipped orientation at the bottom. The blue line is the print bed.
<a href="https://i.stack.imgur.com/xqHJl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xqHJl.png" alt="both orientations"></a></p>
| 139
|
<p>I've started using Vim to develop Perl scripts and am starting to find it very powerful. </p>
<p>One thing I like is to be able to open multiple files at once with:</p>
<pre><code>vi main.pl maintenance.pl
</code></pre>
<p>and then hop between them with:</p>
<pre><code>:n
:prev
</code></pre>
<p>and see which file are open with:</p>
<pre><code>:args
</code></pre>
<p>And to add a file, I can say: </p>
<pre><code>:n test.pl
</code></pre>
<p>which I expect would then be added to my list of files, but instead it wipes out my current file list and when I type <code>:args</code> I only have <code>test.pl</code> open.</p>
<p>So how can I add and remove files in my args list?</p>
|
<p>Why not use tabs (introduced in Vim 7)?
You can switch between tabs with <code>:tabn</code> and <code>:tabp</code>,
With <code>:tabe <filepath></code> you can add a new tab; and with a regular <code>:q</code> or <code>:wq</code> you close a tab.
If you map <code>:tabn</code> and <code>:tabp</code> to your <kbd>F7</kbd>/<kbd>F8</kbd> keys you can easily switch between files.</p>
<p>If there are not that many files or you don't have Vim 7 you can also split your screen in multiple files: <code>:sp <filepath></code>. Then you can switch between splitscreens with <kbd>Ctrl</kbd>+<kbd>W</kbd> and then an arrow key in the direction you want to move (or instead of arrow keys, <kbd>w</kbd> for next and <kbd>W</kbd> for previous splitscreen)</p>
|
<p>If you're on OS X and want to be able to click on your tabs, use MouseTerm and SIMBL (taken from <a href="http://ayaz.wordpress.com/2010/10/19/using-mouse-inside-vim-on-terminal-app/" rel="nofollow noreferrer">here</a>). Also, check out this <a href="https://stackoverflow.com/questions/1727261/scrolling-inside-vim-in-macs-terminal?rq=1">related discussion</a>.</p>
| 7,720
|
<p>As you may know, in <code>VS 2008</code> <kbd>ctrl</kbd>+<kbd>tab</kbd> brings up a nifty navigator window with a thumbnail of each file. I love it, but there is one tiny thing that is annoying to me about this feature: <em>the window stays around after releasing the <kbd>ctrl</kbd> key</em>. When doing an <kbd>alt</kbd>+<kbd>tab</kbd> in windows, you can hit tab to get to the item you want (while still holding down the <kbd>alt</kbd> key), and then when you find what you want, <em>lifting up</em> on the <kbd>alt</kbd> key selects that item.</p>
<p>I wish <code>VS 2008</code> would do the same. For me, when I lift off of <kbd>ctrl</kbd>, the window is still there. I have to hit <kbd>enter</kbd> to actually select the item. I find this annoying.</p>
<p>Does anyone know how to make <code>VS 2008</code> dismiss the window on the <em>release</em> of the <kbd>ctrl</kbd> key?</p>
|
<p>You probably have the text-to-speech narrator enabled.</p>
<p><a href="http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2467648&SiteID=1&mode=1" rel="nofollow noreferrer">http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2467648&SiteID=1&mode=1</a></p>
<blockquote>
<p>Just uncheck all checkboxes under
"Text-To-Speech" narrator software.</p>
<p>--> To open Narrator using the keyboard, press <strong><kbd>CTRL</kbd>+<kbd>ESC</kbd></strong>, press <strong><kbd>R</kbd></strong>,
type narrator, and then press Enter.</p>
</blockquote>
<p>This one drove me crazy for several months until I found this posting.</p>
|
<p>Just in case anyone still needed a fix for this (I've encountered this behavior in VS2010) what you can do is:<br /></p>
<ul>
<li>Close VS</li>
<li>Enable sticky keys</li>
<li>Reopen VS</li>
<li>Disable sticky keys</li>
</ul>
<p>This solved it for me.</p>
| 3,758
|
<p>In Ruby, I'm trying to do the following.</p>
<pre><code>def self.stats(since)
return Events.find(:all, :select => 'count(*) as this_count', :conditions => ['Date(event_date) >= ?', (Time.now - since)]).first.this_count
end
</code></pre>
<p>where "since" is a string representing an amount of time ('1 hour', '1 day', '3 days') and so on. Any suggestions?</p>
|
<p>I hacked this together with the ActiveSupport gem:</p>
<pre><code>require 'active_support'
def string_to_date(date_string)
parts = date_string.split
return parts[0].to_i.send(parts[1])
end
sinces = ['1 hour', '1 day', '3 days']
sinces.each do |since|
puts "#{since} ago: #{string_to_date(since).ago(Time.now)}"
end
</code></pre>
<p>[edit] To answer your question, you might try it like that:</p>
<pre><code>:conditions => ['Date(event_date) >= ?', (string_to_date(since).ago(Time.now))]
</code></pre>
|
<p>I agree with <a href="https://stackoverflow.com/a/40580/1933902">John Millikin</a>. <a href="http://chronic.rubyforge.org/" rel="nofollow noreferrer">Chronic</a>, or even your own helpers, would be a much lighter and effective dependency to carry than whole ActiveSupport, assuming you are not already trapped inside Rails.</p>
| 6,180
|
<p>How do you restore a database backup using SQL Server 2005 over the network? I recall doing this before but there was something odd about the way you had to do it.</p>
|
<p>The database is often running as a service under an account with no network access. If this is the case, then you wouldn't be able to restore directly over the network. Either the backup needs to be copied to the local machine or the database service needs to run as a user with the proper network access.</p>
|
<pre><code>EXEC sp_configure 'show advanced options', 1
GO

-- Update currently configured values for advanced options.
RECONFIGURE
GO

-- Enable xp_cmdshell.
EXEC sp_configure 'xp_cmdshell', 1
GO

-- Update currently configured values for advanced options.
RECONFIGURE
GO
</code></pre>
<p>Then map the network share, either from a command prompt (cmd):</p>
<pre><code>NET USE Z: \\172.100.1.100\Shared Password /USER:administrator /Persistent:no
</code></pre>
<p>or from SQL Server itself:</p>
<pre><code>EXEC xp_cmdshell 'NET USE Z: \\172.100.1.100\Shared Password /USER:administrator /Persistent:no'
</code></pre>
<p>Afterwards drive Z: will be visible in SQL Server Management Studio, or just run:</p>
<pre><code>RESTORE DATABASE DB FROM DISK = 'Z:\DB.BAK'
WITH REPLACE
</code></pre>
| 3,723