<p>I am getting a 24&nbsp;V based Ender 3. From the factory, it has an aluminium bed. So I also put this <a href="https://www.amazon.de/gp/product/B071ZQ6VV6/ref=oh_aui_detailpage_o01_s00?ie=UTF8&amp;psc=1" rel="noreferrer">LJ12 A3-4-Z/BX Inductive NPN NO 4 mm</a> sensor with a 6-36&nbsp;V operating voltage into the box together with a few other spare parts. Now, as I read up on these things, something dawns on me: the normal input and output voltage of a simple switch is 5&nbsp;V, as sensors are run on 5&nbsp;V on most boards (and in digital logic). These inductive sensors, however, require at least 6&nbsp;V.</p> <p>I do not want to fry my machine by putting 24&nbsp;V into the sensor input: what do I have to do (besides making a mount)?</p>
<p>Inductive sensors work better when you apply a voltage higher than 5&nbsp;V. Usually they are rated for 6-36&nbsp;V, but please do check.</p> <p>To prevent frying your board when connecting the sensor to (12 or) 24&nbsp;V, you could optically isolate the 5&nbsp;V and the (12 or) 24&nbsp;V circuits with an optocoupler module:</p> <p><em>Image of an optocoupler module</em><br> <a href="https://i.stack.imgur.com/Xfrlg.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Xfrlg.jpg" alt="OptoCoupler module"></a></p> <p>This module switches optically based on the output of the sensor and should be connected correctly:</p> <p><em>Image of connecting an optocoupler module to the sensor and to the board</em><br> <sub>Please note that the image uses a capacitive sensor rather than an inductive sensor; both are connected similarly</sub> <a href="https://i.stack.imgur.com/munY6.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/munY6.jpg" alt="wiring of an optocoupler module"></a></p> <p>Note that there are many sorts of sensors; a few are listed <a href="https://3dprint.wiki/reprap/anet/a8/improvement/autobedleveling#sensor_support" rel="noreferrer">here</a>. Generally speaking, the larger the diameter of the sensor, the larger the detection distance to the bed. Note that these work well with metal beds (iron/steel better than aluminium), but will not work for glass (capacitive sensors work on glass but are prone to drift from moisture in the air; a touch sensor may then be a better alternative).</p>
<p>Another possibility is to create a <a href="https://en.wikipedia.org/wiki/Voltage_divider" rel="nofollow noreferrer">voltage divider</a> circuit.</p> <blockquote> <p>In electronics, a voltage divider (also known as a potential divider) is a passive linear circuit that produces an output voltage (V<sub>out</sub>) that is a fraction of its input voltage (V<sub>in</sub>). Voltage division is the result of distributing the input voltage among the components of the divider. A simple example of a voltage divider is two resistors connected in series, with the input voltage applied across the resistor pair and the output voltage emerging from the connection between them.</p> </blockquote> <p>Note that fluctuations in the power supply voltage can cause the divider output to be either too low or too high for the pin, which could result in a missed detection or could burn the on-board processor.</p> <p>A voltage divider based on 12&nbsp;V is depicted below, based on <a href="https://mertarauh.com/2017/01/18/dont-trust-the-internet-and-how-to-add-an-inductive-proximity-sensor-to-your-3d-printer-the-proper-and-easiest-way" rel="nofollow noreferrer">this source</a>:</p> <p><a href="https://i.stack.imgur.com/6HSYk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6HSYk.png" alt="12 V voltage divider"></a></p> <p>Please note that a different input voltage requires different resistor values; e.g. for 24&nbsp;V you would require (<a href="http://www.ohmslawcalculator.com/voltage-divider-calculator" rel="nofollow noreferrer">by calculation</a>) a 2.63&nbsp;k&#8486; resistor (which does not exist as a standard value, so put two resistors in <a href="https://en.wikipedia.org/wiki/Series_and_parallel_circuits" rel="nofollow noreferrer">series</a>: a 2.2&nbsp;k&#8486; and a 430&nbsp;&#8486; to get that value).</p> <hr> <p><em>NOTE:<br>This is posted as a separate answer because the question by the OP can be answered by different solutions; this solution uses a totally different approach than my other answer. It is up to the OP to decide which of the answers suits them best. This is not in contrast to the SE approach, as can be taken from various Meta questions on this topic. Incorporating this answer into the other answer (with multiple approaches under captions) is by definition not better than having various answers; different solutions should be in different answers so that they can be voted on independently of each other.</em></p>
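The divider arithmetic behind those resistor values is easy to sanity-check with a few lines of script. This is a minimal sketch, assuming a 10&nbsp;kΩ upper resistor (an assumption on my part, not stated in the answer; with it, the quoted 2.63&nbsp;kΩ lower resistor brings 24&nbsp;V down to roughly the 5&nbsp;V a logic pin expects):

```python
def divider_vout(vin, r1, r2):
    """Output of a two-resistor divider: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

# A 2.2 kOhm and a 430 Ohm resistor in series stand in for the
# non-standard 2.63 kOhm value mentioned in the answer.
r1 = 10_000        # upper resistor (assumed value, check your schematic)
r2 = 2_200 + 430   # lower resistor pair = 2630 Ohm
vout = divider_vout(24.0, r1, r2)
print(f"{vout:.2f} V")  # close to 5 V
```

Plugging in different supply voltages shows why the 12&nbsp;V and 24&nbsp;V builds need different lower resistors.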
<p>A friend of mine is now building a web application with J2EE and Struts, and it's going to be prepared to display pages in several languages.</p> <p>I was told that the best way to support a multi-language site is to use a properties file where you store all the strings of your pages, something like:</p> <pre><code>welcome.english = "Welcome!"
welcome.spanish = "¡Bienvenido!"
...
</code></pre> <p>This solution is OK, but what happens if your site displays news or something like that (a blog)? I mean, content that is not static and is updated often... The people who maintain the site would have to write every new entry in each supported language and store each version of the entry in the database. The application then loads only the entries in the user's chosen language.</p> <p>How do you design the database to support this kind of implementation?</p> <p>Thanks.</p>
<p>The way I have designed the database before is to have a News table containing basic info like NewsID (int), NewsPubDate (datetime), NewsAuthor (varchar/int), and then a linked table NewsText that has these columns: NewsID (int), NewsText (text), NewsLanguageID (int). Lastly, you have a Language table that has LanguageID (int) and LanguageName (varchar).</p> <p>Then, when you want to show your users the news page you do:</p> <pre><code>SELECT NewsText
FROM News
INNER JOIN NewsText ON News.NewsID = NewsText.NewsID
WHERE NewsText.NewsLanguageID = &lt;&lt;Session["UserLanguageID"]&gt;&gt;
</code></pre> <p>That Session bit is a local variable where you store the user's language when they log in or enter the site for the first time.</p>
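As a quick sanity check of this schema, here is a hedged sketch using Python's built-in `sqlite3` module; the sample rows and the `user_language_id` value (which would come from the session) are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The three tables described above: News, NewsText, and Language.
conn.executescript("""
    CREATE TABLE Language (LanguageID INTEGER PRIMARY KEY, LanguageName TEXT);
    CREATE TABLE News     (NewsID INTEGER PRIMARY KEY, NewsPubDate TEXT, NewsAuthor TEXT);
    CREATE TABLE NewsText (NewsID INTEGER, NewsText TEXT, NewsLanguageID INTEGER);
""")
conn.execute("INSERT INTO Language VALUES (1, 'English'), (2, 'Spanish')")
conn.execute("INSERT INTO News VALUES (1, '2008-09-01', 'alice')")
conn.execute("INSERT INTO NewsText VALUES (1, 'Welcome!', 1), (1, '¡Bienvenido!', 2)")

user_language_id = 2  # stand-in for Session["UserLanguageID"]
rows = conn.execute(
    "SELECT NewsText FROM News "
    "JOIN NewsText ON News.NewsID = NewsText.NewsID "
    "WHERE NewsText.NewsLanguageID = ?", (user_language_id,)
).fetchall()
print(rows)  # [('¡Bienvenido!',)]
```

Each translation is one row in NewsText, so adding a language later means adding rows, not columns.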
<p>@Auron</p> <p>That's what we apply it to. Our apps are all PHP, but gettext has a long heritage.</p> <p>Looks like there is a <a href="http://www.gnu.org/software/autoconf/manual/gettext/Java.html" rel="nofollow noreferrer">good Java implementation</a>.</p>
<p>Additionally, how can I format it as a string padded with zeros?</p>
<p>To generate the number, call <code>rand</code> with 10 to the power of 10:</p> <pre><code>rand(10 ** 10)
</code></pre> <p>To pad the number with zeros, you can use the string format operator:</p> <pre><code>'%010d' % rand(10 ** 10)
</code></pre> <p>or the <code>rjust</code> method of string:</p> <pre><code>rand(10 ** 10).to_s.rjust(10, '0')
</code></pre>
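For comparison, the same idea sketched in Python (not from the original answer, just an equivalent):

```python
import random

n = random.randrange(10 ** 10)  # uniform over 0 .. 9_999_999_999
padded = f"{n:010d}"            # left-pad with zeros to 10 characters
print(padded)
```

Note that a plain random number below 10<sup>10</sup> can have fewer than 10 digits, which is exactly why the zero-padding step is needed.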
<p>A random string of 10 digits, using the <code>string_pattern</code> gem:</p> <pre><code>require 'string_pattern'
puts "10:N".gen
</code></pre>
<p>I've been put in charge of coming up with a training itinerary for my team at work for a migration from C++ to Visual Studio 2008 with C#.</p> <p>The actual language switch I'm not too worried about; it's the learning curve for Visual Studio. What does everyone think would be the best way to set up a training course?</p> <p>I was thinking of having a list of different features of the IDE and having the team members create pages on our internal wiki about them, but I'm not sure if that would be hands-on enough to be useful.</p>
<p>We are a C++ shop that is moving to C# for UI work (our image processing and 3D graphics code will stay in native C++). I found <a href="http://media.wiley.com/assets/264/22/0764557599_bonus_AppD.pdf" rel="nofollow noreferrer">C# for C++ Developers</a> a very quick and handy introduction to the language. Our team has been using Visual Studio for a while, whereas I came from an SVN/SlickEdit/CMake/Ant kind of environment in my last job. I found it very helpful to just dive in and start working, but as I figured things out, I documented them on our internal wiki. It's been about 6 months, and not only am I very comfortable with Visual Studio, but the rest of the team has had me streamlining our build process and converting our build system to do out-of-place builds from Visual Studio (which I document on the wiki, of course). So I'd say do both - dive in and do real work, but document what you learn - which not only helps others, but reinforces it in your mind.</p>
<p>I purchased the on-demand training from Pluralsight about 4 months ago, and IMHO it is the best training out there.</p> <p><a href="http://www.pluralsight.com/main/olt/Courses.aspx" rel="nofollow noreferrer">Pluralsight on-demand training</a></p>
<p>What is the best way to implement a file download action from a web page using ASP.NET 2.0?</p> <p>Log files for an action are created in a directory called <code>[Application Root]/Logs</code>. I have the full path and want to provide a button that, when clicked, will download the log file from the IIS server to the user's local PC.</p>
<p>Does this help:</p> <p><a href="http://www.west-wind.com/weblog/posts/76293.aspx" rel="noreferrer">http://www.west-wind.com/weblog/posts/76293.aspx</a></p> <pre><code>Response.ContentType = "application/octet-stream";
Response.AppendHeader("Content-Disposition", "attachment; filename=logfile.txt");
Response.TransmitFile(Server.MapPath("~/logfile.txt"));
Response.End();
</code></pre> <p><code>Response.TransmitFile</code> is the accepted way of sending large files, instead of <code>Response.WriteFile</code>.</p>
<p><a href="http://forums.asp.net/p/1481083/3457332.aspx" rel="noreferrer">http://forums.asp.net/p/1481083/3457332.aspx</a></p> <pre><code>string filename = @"Specify the file path in the server over here....";
FileInfo fileInfo = new FileInfo(filename);
if (fileInfo.Exists)
{
    Response.Clear();
    Response.AddHeader("Content-Disposition", "attachment; filename=" + fileInfo.Name);
    Response.AddHeader("Content-Length", fileInfo.Length.ToString());
    Response.ContentType = "application/octet-stream";
    Response.Flush();
    Response.TransmitFile(fileInfo.FullName);
    Response.End();
}
</code></pre> <p><br /> <strong>Update:</strong></p> <p>The initial code</p> <pre><code>Response.AddHeader("Content-Disposition", "inline;attachment; filename=" + fileInfo.Name);
</code></pre> <p>has "inline;attachment", i.e. two values for Content-Disposition.</p> <p>I don't know exactly when it started, but in Firefox <strong>only</strong>, the proper file name was not appearing. The file download box would appear with the name of the web page and its extension (<em>pagename.aspx</em>). After the download, if you renamed it back to the actual name, the file opened successfully.</p> <p>As per <a href="https://www.iana.org/assignments/cont-disp/cont-disp.xhtml" rel="noreferrer">this page</a>, it operates on a <strong>first come, first served</strong> basis. Changing the value to <code>attachment</code> only solved the issue.</p> <p>PS: I am not sure if this is the best practice, but the issue is resolved.</p>
<p>Forgive this long-winded post, but I thought it would be helpful to include the back story leading up to a very scary situation on my Ender 3.</p> <p>I was printing a job and everything started out fine just like it normally does. I'm not sure if it is related, but partway through the job, there was a power dip severe enough to trip the alarm on the UPS my computer is attached to. It didn't seem to affect the job on the printer though. (the printer is not on a UPS)</p> <p>A little while later I noticed the extruder clicking like there was a jam, and the print had started severely under extruding. The hot end temp was showing around 209 °C, which is about what it was supposed to be, so I figured the nozzle was getting old and had probably gotten something stuck in it. I canceled the job, did a cold pull, then went ahead and replaced the nozzle and restarted the job.</p> <p>The second run started off looking ok, but the clicking and under extruding soon came back. I stopped the job and was standing there next to the printer trying to figure out what was going on when I noticed it reboot itself. Octoprint showed a message (I can't recall the exact wording) of something to the effect of &quot;your printer reset and the job is probably failed&quot;.</p> <p>Weird. It's never done that before. I hadn't restarted since the power dip, so I thought maybe it was just in a bad state and physically powered everything down to start fresh. But as soon as it powered back up, the heater went active and the hot end started warming up. The status display said the temp was set at 0 °C, but the &quot;current temp&quot; reading was rising. I quickly went into the menus to select &quot;cool down&quot; and couldn't find it. There was no cool-down option on the menu!</p> <p>By this time the hot end was up to around 270 °C and starting to smoke, so I shut the whole thing down again. I let it sit for a little while and powered up again. 
Immediately upon receiving power, the heater went active and started heating up again! I'm pretty sure had I not been standing there it would have eventually caught fire and possibly burned down my house!</p> <p>This is the original Creality board, but I did flash it with Marlin U1.R2.7 maybe a year ago. It has been completely stable since. Thermal runaway protection is enabled (and tested), but I don't think it would have done anything in this scenario since the printer thinks the heater is off (hence the missing cooldown option). Even if it did detect an issue, I'm not sure it would have been able to do anything since it seems to be permanently shorted &quot;on&quot;.</p> <p>Has anyone heard of something like this before? Any idea what might have caused it?</p> <p>I opened the case and inspected the board for signs of something shorting out or a smoked component, but everything appears normal. I disconnected the heater so I could power up and everything seems to work other than the heater terminals always having power.</p> <p>Obviously, my trust in this machine has been severely diminished. I already ordered an SKR Mini E3 V2 to replace the Creality board since I had been thinking of upgrading anyway. But I would really like to know what happened so I can assess the probability of it happening again.</p> <p>Ideally, I would also like to figure out some way of protecting against this happening again in the future and causing a fire when I might not be standing next to the printer next time.</p> <p>My first thought is some sort of thermal fuse (like the ones in hairdryers) in line with the heater that would &quot;blow&quot; if it gets past a certain temperature. A typical consumer printer should never have a reason to get above 260-270 °C, so it seems appropriate to kill the circuit if it goes beyond that regardless of what the board is telling it to do.</p> <p>Has anyone tried retrofitting something like that? 
Are you aware of any printers on the market that include such a feature?</p> <p>Any suggestions on how to troubleshoot the board to figure out what happened?</p>
<p>Your onboard MOSFET has failed; you need to replace the component, or better yet, the whole board. It seems the MOSFET isn’t reacting to the signal from the microprocessor; instead it is “always on”.</p> <p>Note that a MOSFET usually fails short, but can fail open in certain circumstances. See <a href="https://electronics.stackexchange.com/questions/27124/do-mosfets-usually-burn-open-or-closed">Do MOSFETs usually burn open or closed?</a>.</p>
<p>My understanding is that the hot end heats until the thermistor signals to the board that it is near/at temperature.</p> <p>On an Ender 3 V2 with the Meanwell power supply, the unit has thermal runaway protection, so if the thermistor fails the board will shut down.</p> <p>In your case, if the failed thermistor never alerts the board by changing resistance, the board will keep heating the block. So I think you need to pull the thermistor and test that its resistance changes with temperature: connect it to a multimeter on the resistance setting, then heat it with a hairdryer.</p> <p>Another possibility is that a failed print has clogged the nozzle area, and the resulting blob of plastic has broken a wire to the thermistor. This might be repairable by soldering.</p> <p>Thermistors are available as replacement items too; it may pay to simply get one along with a spare belt and nozzles.</p>
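When testing the thermistor with a multimeter, it helps to know roughly what resistance to expect at a given temperature. A hedged sketch of the beta model follows, assuming a 100&nbsp;kΩ, B&nbsp;=&nbsp;3950 NTC (a common part on Ender-style hot ends, but check your actual part's datasheet; these values are assumptions, not taken from the question):

```python
import math

def ntc_resistance(temp_c, r0=100_000.0, t0_c=25.0, beta=3950.0):
    """Beta-model NTC resistance: R = R0 * exp(B * (1/T - 1/T0)), T in kelvin."""
    t = temp_c + 273.15
    t0 = t0_c + 273.15
    return r0 * math.exp(beta * (1.0 / t - 1.0 / t0))

# Expected readings while warming the sensor with a hairdryer:
# resistance should drop steadily as the temperature rises.
for temp in (25, 60, 100):
    print(f"{temp:>3} degC -> {ntc_resistance(temp) / 1000:8.1f} kOhm")
```

If the multimeter reading stays flat (or reads open/short) while the sensor is heated, the thermistor or its wiring is the likely fault.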
<p>Is it possible to configure <a href="http://www.apachefriends.org/en/xampp.html" rel="noreferrer">xampp</a> to serve up a file outside of the <code>htdocs</code> directory?</p> <p>For instance, say I have a file located as follows:</p> <p><code>C:\projects\transitCalculator\trunk\TransitCalculator.php</code></p> <p>and my <a href="http://www.apachefriends.org/en/xampp.html" rel="noreferrer">xampp</a> files are normally served out from:</p> <p><code>C:\xampp\htdocs\</code></p> <p>(because that's the default configuration) Is there some way to make Apache recognize and serve up my <code>TransitCalculator.php</code> file without moving it under <code>htdocs</code>? Preferably I'd like Apache to serve up/have access to the entire contents of the projects directory, and I don't want to move the projects directory under <code>htdocs</code>.</p> <p>edit: edited to add Apache to the question title to make Q/A more "searchable"</p>
<p>Ok, per <a href="https://stackoverflow.com/questions/1408/#2471">pix0r</a>'s, <a href="https://stackoverflow.com/questions/1408/#1413">Sparks</a>' and <a href="https://stackoverflow.com/questions/1408/#1414">Dave</a>'s answers it looks like there are three ways to do this:</p> <hr> <h2><a href="https://stackoverflow.com/questions/1408/#2471">Virtual Hosts</a></h2> <ol> <li>Open C:\xampp\apache\conf\extra\httpd-vhosts.conf.</li> <li>Un-comment ~line 19 (<code>NameVirtualHost *:80</code>).</li> <li><p>Add your virtual host (~line 36):</p> <pre><code>&lt;VirtualHost *:80&gt;
    DocumentRoot C:\Projects\transitCalculator\trunk
    ServerName transitcalculator.localhost
    &lt;Directory C:\Projects\transitCalculator\trunk&gt;
        Order allow,deny
        Allow from all
    &lt;/Directory&gt;
&lt;/VirtualHost&gt;
</code></pre></li> <li><p>Open your hosts file (C:\Windows\System32\drivers\etc\hosts).</p></li> <li><p>Add</p> <pre><code>127.0.0.1 transitcalculator.localhost #transitCalculator
</code></pre> <p>to the end of the file (before the Spybot - Search &amp; Destroy stuff if you have that installed).</p></li> <li>Save (You might have to save it to the desktop, change the permissions on the old hosts file (right click > properties), and copy the new one into the directory over the old one (or rename the old one) if you are using Vista and have trouble).</li> <li>Restart Apache.</li> </ol> <p>Now you can access that directory by browsing to <a href="http://transitcalculator.localhost/" rel="noreferrer">http://transitcalculator.localhost/</a>.</p> <hr> <h2><a href="https://stackoverflow.com/questions/1408/#1413">Make an Alias</a></h2> <ol> <li><p>Starting at ~line 200 of your <code>httpd.conf</code> file, copy everything between <code>&lt;Directory "C:/xampp/htdocs"&gt;</code> and <code>&lt;/Directory&gt;</code> (~line 232) and paste it immediately below with <code>C:/xampp/htdocs</code> replaced with your desired directory (in this case <code>C:/Projects</code>) to give your server the correct permissions for the new directory.</p></li> <li><p>Find the <code>&lt;IfModule alias_module&gt;&lt;/IfModule&gt;</code> section (~line 300) and add</p> <pre><code>Alias /transitCalculator "C:/Projects/transitCalculator/trunk"
</code></pre> <p>(or whatever is relevant to your desires) below the <code>Alias</code> comment block, inside the module tags.</p></li> </ol> <hr> <h2><a href="https://stackoverflow.com/questions/1408/#1414">Change your document root</a></h2> <ol> <li><p>Edit ~line 176 in C:\xampp\apache\conf\httpd.conf; change <code>DocumentRoot "C:/xampp/htdocs"</code> to <code>DocumentRoot "C:/Projects"</code> (or whatever you want).</p></li> <li><p>Edit ~line 203 to match your new location (in this case <code>C:/Projects</code>).</p></li> </ol> <hr> <p><strong>Notes:</strong></p> <ul> <li>You have to use forward slashes "/" instead of back slashes "\".</li> <li>Don't include the trailing "/" at the end.</li> <li><em>Restart your server.</em></li> </ul>
<p>Solution to allow Apache 2 to host websites outside of htdocs:</p> <p>Underneath the "DocumentRoot" directive in httpd.conf, you should see a directory block. Replace this directory block with:</p> <pre><code>&lt;Directory /&gt; Options FollowSymLinks AllowOverride All Allow from all &lt;/Directory&gt; </code></pre> <p><strong><em>REMEMBER NOT TO USE THIS CONFIGURATION IN A REAL ENVIRONMENT</em></strong></p>
<p>I'm a very experienced Java programmer who has spent my entire career working with Java 1.4 and earlier. Where can I find a quick reference that will give me everything I need to know about the new features in Java 5 and later?</p>
<p>Here's a good place to start:</p> <p><a href="https://docs.oracle.com/javase/1.5.0/docs/relnotes/features.html" rel="nofollow noreferrer">https://docs.oracle.com/javase/1.5.0/docs/relnotes/features.html</a></p> <p><a href="https://web.archive.org/web/20081011231920/http://java.sun.com/developer/technicalArticles/releases/j2se15" rel="nofollow noreferrer">http://java.sun.com/developer/technicalArticles/releases/j2se15/</a></p>
<p>I can recommend Bruce Eckel's "Thinking in Java" 4th edition. He goes over a bunch of basic stuff you can skip, but his treatment of new 1.5 features is very thorough, especially the chapter on generics. And it is a good Java reference to own.</p>
<p>Are there any techniques for getting a smooth finish on parts printed with co-polyester (PET) filaments? More specifically, I am looking for an alternative that does not roughen the look of the part - such as using sandpaper - but rather works like acetone baths do for ABS.</p> <p>In particular, I want to treat ColorFabb's XT filament made from the <a href="http://www.eastman.com/Markets/3D_Printing/Pages/Products.aspx" rel="nofollow">Eastman Amphora™ 3D polymer</a> (<a href="http://ws.eastman.com/ProductCatalogApps/PageControllers/ProdDatasheet_PC.aspx?Product=71100831&amp;sCategoryName=Generic" rel="nofollow">datasheet</a>). This polymer is also used in:</p> <ul> <li>ColorFabb <a href="http://colorfabb.com/co-polyesters" rel="nofollow">nGen and XT</a></li> <li>Taulman3D n-vent</li> <li>TripTech Athiri 1800</li> <li>3DXTech 3DXNano</li> </ul>
<p>Ethyl acetate (sold as a MEK substitute) is supposed to work for vapor smoothing PET. It doesn't seem very toxic (it's used to decaffeinate coffee and tea, and as a nail polish remover), but you might want to look more into it. There's a post on Printed Solid's blog where he vapor smoothed ColorFabb XT and MadeSolid PET+ along with a few other filaments and got some good results.</p> <p><a href="http://printedsolid.com/blogs/news/37035395-vapor-smoothing-3d-printed-parts-pla-colorfabb-xt-t-glase-pet" rel="nofollow">http://printedsolid.com/blogs/news/37035395-vapor-smoothing-3d-printed-parts-pla-colorfabb-xt-t-glase-pet</a></p> <p>The links in the blog don't work for me, but Google was able to find slightly larger versions:</p> <p><a href="https://cdn.shopify.com/s/files/1/0887/0138/files/blog_2014-03-20-18.38.04-1024x613.jpg?16147388421280943481" rel="nofollow">https://cdn.shopify.com/s/files/1/0887/0138/files/blog_2014-03-20-18.38.04-1024x613.jpg?16147388421280943481</a></p> <p><a href="https://cdn.shopify.com/s/files/1/0887/0138/files/blog_2014-03-21-18.16.28-1024x612.jpg?9543779874607042697" rel="nofollow">https://cdn.shopify.com/s/files/1/0887/0138/files/blog_2014-03-21-18.16.28-1024x612.jpg?9543779874607042697</a></p>
<p>I've found a <a href="http://www.plasticsintl.com/plastics_chemical_resistence_chart.html" rel="nofollow">chart</a> which covers several plastics and solvents; only two of the solvents (<a href="https://en.wikipedia.org/wiki/Chloromethane" rel="nofollow">Chloromethane</a> and <a href="https://en.wikipedia.org/wiki/Chloroform" rel="nofollow">Chloroform</a>) are rated "D", which includes dissolving the material. Both seem to be quite nasty, and I doubt you will be able to purchase them without being placed on several lists.</p> <p>Is it possible that something like <a href="http://www.smooth-on.com/Epoxy-Coatings-XTC/c1397_1429/index.html" rel="nofollow">XTC-3D</a> from Smooth-On would work for you?</p> <p>There is also some more information on dissolving PET <a href="https://www.researchgate.net/post/How_do_you_dissolve_polyethyelene_terephthalate_PET" rel="nofollow">here</a>; several sources also mention PET is affected by hydrogen peroxide, but they do not mention to what degree the plastic is affected.</p>
<p>If you have a JSF <code>&lt;h:commandLink&gt;</code> (which uses the <code>onclick</code> event of an <code>&lt;a&gt;</code> to submit the current form), how do you execute JavaScript (such as asking for delete confirmation) prior to the action being performed?</p>
<pre><code>&lt;h:commandLink id="myCommandLink" action="#{myPageCode.doDelete}"&gt;
    &lt;h:outputText value="#{msgs.deleteText}" /&gt;
&lt;/h:commandLink&gt;

&lt;script type="text/javascript"&gt;
if (document.getElementById) {
    var commandLink = document.getElementById('&lt;c:out value="${myPageCode.myCommandLinkClientId}" /&gt;');
    if (commandLink &amp;&amp; commandLink.onclick) {
        var commandLinkOnclick = commandLink.onclick;
        commandLink.onclick = function() {
            var result = confirm('Do you really want to &lt;c:out value="${msgs.deleteText}" /&gt;?');
            if (result) {
                return commandLinkOnclick();
            }
            return false;
        }
    }
}
&lt;/script&gt;
</code></pre> <p>Other JavaScript actions (like validating form input, etc.) could be performed by replacing the call to <code>confirm()</code> with a call to another function.</p>
<p>In JSF 1.2 you can specify onclick events.</p> <p>Also, other libraries such as <a href="http://myfaces.apache.org/" rel="nofollow noreferrer">MyFaces</a> or <a href="http://www.icefaces.org" rel="nofollow noreferrer">IceFaces</a> implement the "onclick" handler.</p> <p>What you'd need to do then is simply:</p> <pre><code>&lt;h:commandLink action="#{bean.action}" onclick="if (!confirm('Are you sure?')) return false;" /&gt;
</code></pre> <p>Note: you can't just do <code>return confirm(...)</code> as this will block the rest of the JavaScript in the onclick event from happening, which would effectively stop your action from happening no matter what the user chose!</p>
<p>I've got the following rough structure:</p> <pre><code>Object -&gt; Object Revisions -&gt; Data
</code></pre> <p>The Data can be shared between several Objects.</p> <p>What I'm trying to do is clean out old Object Revisions. I want to keep the first, the active one, and a spread of revisions so that the last change for a time period is kept. The Data might be changed a lot over the course of 2 days then left alone for months, so I want to keep the last revision before the changes started and the end change of the new set.</p> <p>I'm currently using a cursor and temp table to hold the IDs and dates between changes so I can select out the low-hanging fruit to get rid of. This means using @LastID, @LastDate, updates and inserts to the temp table, etc.</p> <p>Is there an easier/better way to calculate the date difference between the current row and the next row in my initial result set without using a cursor and temp table?</p> <p>I'm on SQL Server 2000, but would be interested in any new features of 2005 or 2008 that could help with this as well.</p>
<p>Here is example SQL. If you have an Identity column, you can use this instead of "ActivityDate".</p> <pre><code>SELECT DATEDIFF(HOUR, prev.ActivityDate, curr.ActivityDate)
FROM MyTable curr
JOIN MyTable prev ON prev.ObjectID = curr.ObjectID
WHERE prev.ActivityDate = (SELECT MAX(maxtbl.ActivityDate)
                           FROM MyTable maxtbl
                           WHERE maxtbl.ObjectID = curr.ObjectID
                             AND maxtbl.ActivityDate &lt; curr.ActivityDate)
</code></pre> <p>I could remove "prev", but have it there assuming you need IDs from it for deleting.</p>
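The prev/curr pairing that this self-join expresses can be sketched procedurally; the Python below is purely illustrative (the column names follow the SQL, the sample rows are made up):

```python
from datetime import datetime
from itertools import groupby

rows = [  # (ObjectID, ActivityDate) - illustrative revision history
    (1, datetime(2008, 9, 1, 8, 0)),
    (1, datetime(2008, 9, 1, 14, 0)),
    (1, datetime(2008, 9, 3, 9, 0)),
    (2, datetime(2008, 9, 2, 10, 0)),
]

diffs = []
for obj_id, group in groupby(sorted(rows), key=lambda r: r[0]):
    dates = [d for _, d in group]
    # pair each revision with the previous one, like the prev/curr self-join
    for prev, curr in zip(dates, dates[1:]):
        hours = (curr - prev).total_seconds() / 3600
        diffs.append((obj_id, hours))

print(diffs)  # [(1, 6.0), (1, 43.0)] - object 2 has only one revision
```

A long gap between consecutive revisions marks the boundary of a burst of changes, which is the signal the question is after.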
<p>Hrmm, interesting challenge. I think you can do it without a self-join if you use the new-to-2005 pivot functionality.</p>
<p>Is it possible to programmatically push files to a wireless SD card - like a www.eye.fi card?</p> <p>I use a Mac and thought I could do this using some AppleScript - but have not found a way...</p> <p>Derek</p>
<p>The Eye-Fi card relies on image files being written to a specific directory on the card before it will transfer them. Beyond that it works exactly like a memory card.</p> <p>Write a file to it as if you're writing to a regular memory card, and as long as it's a JPG image file of reasonable size, and in an appropriate directory (something under \DCIM\ probably), it should transfer.</p> <p>If you're having trouble, double-check that it works with your camera, find out where your camera puts the images on the card, and duplicate that. You might even try giving the files names similar to the ones your camera produces.</p> <p>-Adam</p>
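A minimal sketch of that idea in Python, treating the card as an ordinary mounted volume. The mount point and the <code>100EYEFI</code> folder name here are assumptions for illustration, so check where your card actually mounts and where your camera writes images:

```python
import shutil
from pathlib import Path

def push_to_card(source, card_mount="/Volumes/EYE-FI"):
    """Copy a JPEG into the card's DCIM folder so the card's firmware can
    pick it up. The mount point and folder name are assumptions."""
    dest_dir = Path(card_mount) / "DCIM" / "100EYEFI"
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(source).name   # keep a camera-style name, e.g. IMG_0001.JPG
    shutil.copy2(source, dest)
    return dest
```

Usage would be something like `push_to_card("IMG_0001.JPG")` once the card is mounted on the Mac.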
<p>It looks like you can treat it just like an external hard drive (plug the memory card in and figure out where the mount point is).</p>
<p>I have a <strong>DLL</strong> with some <strong>COM objects</strong>. Sometimes, these objects <strong>crash</strong> and register an error event in the Windows Event Log with lots of hexadecimal information. I have no clue why these crashes happen.</p> <p>So, how can I trace those COM objects' exceptions?</p>
<p>The first step is to look up the fail code's hex value (e.g. E_FAIL, 0x80004005). I've had really good luck with posting that value into Google to get a sense of what the error code means.</p> <p>Then, I just use trial and error to try to isolate the location in code that's failing, and the root cause of the failure.</p>
<p>If you just want a really quick way to find out what the error code means, you could use the "Error Lookup" tool packaged with Visual Studio (details <a href="http://msdn.microsoft.com/en-us/library/akay62ya(VS.80).aspx" rel="nofollow noreferrer">here</a>). Enter the hex value, and it will give you the string describing that error code.</p> <p>Of course, once you know that, you've still got to figure out why it's happening.</p>
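A first read on those hex values can also be automated: a COM HRESULT packs a severity bit, a facility, and a code into 32 bits, so the fields can be unpacked before you go searching. A small sketch:

```python
def decode_hresult(hr):
    """Split a 32-bit HRESULT into its severity, facility, and code fields."""
    hr &= 0xFFFFFFFF
    return {
        "failed":   bool(hr & 0x80000000),  # severity bit: 1 means failure
        "facility": (hr >> 16) & 0x1FFF,    # which subsystem raised the error
        "code":     hr & 0xFFFF,            # subsystem-specific error code
    }

print(decode_hresult(0x80004005))  # E_FAIL: failure, FACILITY_NULL (0), code 0x4005
```

Knowing the facility narrows down which subsystem (COM itself, Win32, ITF, etc.) produced the error before you start the trial-and-error isolation.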
<p>We've been using "Drip" to try and identify why pages with UpdatePanels in them tend to use a lot of client-side memory. With a page with a regular postback, we are seeing 0 leaks detected by Drip. However, when we add an update panel to the mix, every single DOM object that is inside of the update panel appears to leak (according to Drip).</p> <p>I am not certain if Drip is reliable enough to report these kinds of things - the reported leaks do seem to indicate Drip is modifying the page slightly.</p> <p>Does anyone have any experience with this? Should I panic and stop using Microsoft Ajax? I'm not above doubting Microsoft, but it seems fishy to me that it could be <em>this</em> bad.</p> <p>Also, if you know of a tool that is better than Drip, that would be helpful as well.</p>
<p>According to <a href="http://www.amazon.com/gp/redirect.html?ie=UTF8&amp;location=http%3A%2F%2Fwww.amazon.com%2FASP-NET-AJAX-Action-Alessandro-Gallo%2Fdp%2F1933988142&amp;tag=diaryofamadma-20&amp;linkCode=ur2&amp;camp=1789&amp;creative=9325" rel="nofollow noreferrer">ASP.NET AJAX in Action</a>, p. 257:</p> <blockquote> <p>Just before the old markup is replaced with the updated HTML, all the DOM elements in the panel are examined for Microsoft Ajax behaviours or controls attached to them. To avoid memory leaks, the components associated with DOM elements are disposed, and then destroyed when the HTML is replaced.</p> </blockquote> <p>So as far as I know, any ASP.NET AJAX components within the update panel are disposed to prevent memory leaks, but anything else in there will just be replaced with the HTML received.</p> <p>So if you don't have any ASP.NET AJAX components in the target container for the response, it would be basically the same as an innerHTML replacement with any other JS framework / Ajax request, so I would say that it's just how the browser handles this, rather than ASP.NET AJAX causing it.</p> <p>Also, while it may be "leaking", it may be by design, meaning that the browser might not have reclaimed and released the DOM elements yet. Drip itself might also be causing those to leak, as it is attaching to those DOM elements.</p>
<p>That's very likely. This was pretty much what we assumed (browser problem, not necessarily Ajax).</p> <p>Our problem is now, with this application being accessed by many people via a Citrix environment, with each page continually creating DOM objects and not releasing them, the Citrix environment starts thrashing after some usage. I've seen similar complaints online (especially where you are dumb enough to access an Ajax website via Citrix), but it doesn't make me feel much better that this is the intended behavior.</p> <p>I'm wondering now if anyone has come up with a clever workaround. We also have a client app where we are using the .NET BrowserControl to access these websites, rather than just straight IE7, so if anyone knows a secret API call (<em>FreeStaleDomObjectsFTW()</em>) we can utilize from that end of the stack, that would be useful as well.</p>
<p>I've come across a rather interesting (and frustrating) problem with IE6. We are serving up some server-generated PDFs and then simply setting headers in PHP to force a browser download of the file. Works fine and all, except in IE6, but <strong>only</strong> if the Windows user account is set to standard user (i.e. not administrator).</p> <p>Since this is for a corporate environment, of course all their accounts are set up this way. Weird thing is, that in the download dialog, the Content-Type is not recognized:</p> <pre><code>header( 'Pragma: public' );
header( 'Expires: 0' );
header( 'Cache-Control: must-revalidate, pre-check=0, post-check=0' );
header( 'Cache-Control: public' );
header( 'Content-Description: File Transfer' );
header( 'Content-Type: application/pdf' );
header( 'Content-Disposition: attachment; filename="xxx.pdf"' );
header( 'Content-Transfer-Encoding: binary' );
echo $content;
exit;
</code></pre> <p>I also tried writing the file content to a temporary file first so I could also set the <code>Content-Length</code> in the header but that didn't help.</p>
<p>Some versions of IE seem to take</p> <pre><code>header( 'Expires: 0' );
header( 'Cache-Control: must-revalidate, pre-check=0, post-check=0' );
</code></pre> <p>way too seriously and remove the downloaded content before it's passed to the plugin to display it. </p> <p>Remove these two and you should be fine.</p> <p>And make sure you are not using any server-side GZIP compression when working with PDFs, because some versions of Acrobat seem to struggle with this.</p> <p>I know I'm vague here, but the above tips are based on real-world experience I got using a web application serving dynamically built PDFs containing barcodes. I don't know what versions are affected, I only know that using the two "tricks" above made the support calls go away :p</p>
<p>As pilif already mentions, make sure to turn off the server-side gzip compression. For me this has caused problems with PDF files (among other types) and for maybe-not-so-obscure reasons also with .zip files both under Internet Explorer and FireFox.</p> <p>As far as I could tell, the last bit of the zip footer would get stripped (at least by FireFox) causing a corrupted format.</p> <p>In PHP you can use the following code:</p> <pre><code>ini_set("zlib.output_compression",0); </code></pre>
<p>I'm trying to write a simple audio player for a website, and am using the EMBED... tag to embed the audio and setting HIDDEN="true" and using various javascript commands to control the audio playback. It works fine for realplayer and mplayer but the quicktime plugin doesn't respond to javascript if the hidden bit is set - is there any workaround for this?</p>
<p>First, I suggest you use the <code>object</code> HTML tag, which is standardized; <code>embed</code> is not.</p> <p>Then you could simply hide your embedded audio using CSS instead of this <code>hidden</code> param.</p> <p>Even better, you should hide it using CSS through JavaScript, because if you do it this way, people who don't have JavaScript enabled but support CSS are able to use the plugin player directly.</p>
<p>I found that setting height=0 width=0 worked the same as hidden=true and solved the problem</p>
<p>Creality does offer its firmware on <a href="https://www.creality3d.cn/download/firmware_c0001" rel="noreferrer">creality3d.cn</a> as .hex files. These are pretty good as backups as one can't alter and destroy them by accident.</p> <p>But... How do you install them?!</p> <hr> <p>This is about installing firmware <em>directly</em> and <em>without</em> another microcontroller. Using another microcontroller is covered in <a href="https://3dprinting.stackexchange.com/questions/6685">How to install new firmware via a Microcontroller?</a></p>
<p>Creality also provides an installation PDF. The process they propose is twofold and might need different settings on other machines<sup>1</sup>. Spots where I assume you might need to adjust are noted with <sup>A</sup>. Note that <strong>this solution depends on Cura</strong>.</p> <h2>1. Install the printer as a periphery machine.</h2> <p>This part is specific to Windows. If you use Linux or a Mac, you will need a different setup, but you should get the same results.</p> <ol> <li>Turn on the power on the printer and connect it from the Micro-USB to a USB port of the computer. This should automatically install the driver. If not, the driver is on the SD card provided with the printer<sup>2</sup>. <ul> <li>To install manually, press the <code>Windows key</code>, type "<code>MANAGER</code>" and choose <em>Device Manager</em>. Find the serial port that shows yellow, right-click, choose <code>Update driver software &gt; Browse my computer for driver software</code>. Now <code>Browse</code>, find the location of the USB driver on the SD card and click <code>Next</code>.</li> <li>Generally, the serial port (COM) you need to update has the biggest number, but this can change.</li> <li>A good idea is to confirm the correct port with software like <a href="https://www.repetier.com/download-now/" rel="nofollow noreferrer">Repetier Host</a>, with which you can control the printer directly - if it works, you got the drivers and the port correct. Also, you then know the correct baud rate.</li> </ul></li> <li>After the driver installation, launch <a href="https://ultimaker.com/en/products/ultimaker-cura-software" rel="nofollow noreferrer">Cura</a> to do some settings. In <code>File &gt; Preferences</code>: <ul> <li>Print Window is "Pronterface UI"<sup>A</sup></li> </ul></li> <li>Switch to <code>Machine &gt; Machine Settings</code>: <ul> <li>Serial Port: choose the one that was just updated</li> <li>Baudrate: 115200<sup>A</sup></li> </ul></li> </ol> <h2>2. Upload the .hex file via Cura</h2> <ol> <li><code>Machine &gt; Install custom Firmware</code></li> <li>Make sure the printer is connected, then <code>OK</code></li> <li>Find the .hex file on your PC, then confirm.</li> <li>Wait for the process to finish.</li> </ol> <hr> <p><sup>1 - most likely, you will have to change the baud rate</sup><br> <sup>2 - This might not be true for all manufacturers, but it is for Creality. Other manufacturers might have different sources for these.</sup><br> <sup>A - Adjust as needed!</sup></p>
<p>The Prusa i3 MK3 comes with <a href="https://www.prusa3d.com/slic3r-prusa-edition/" rel="nofollow noreferrer">Slic3r, Prusa Edition</a>. It has a menu for flashing the firmware, which takes a HEX file as input.</p> <p><a href="https://i.stack.imgur.com/L5TEH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L5TEH.png" alt="Slic3r screenshot"></a></p> <p>You can then select the HEX file. It will auto-detect the printer, if connected via USB.</p> <p><a href="https://i.stack.imgur.com/AQ9Lk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AQ9Lk.png" alt="Screenshot firmware update details"></a></p>
<p>When entering a question, stackoverflow presents you with a list of questions that it thinks likely to cover the same topic. I have seen similar features on other sites or in other programs, too (Help file systems, for example), but I've never programmed something like this myself. Now I'm curious to know what sort of algorithm one would use for that.</p> <p>The first approach that comes to my mind is splitting the phrase into words and look for phrases containing these words. Before you do that, you probably want to throw away insignificant words (like 'the', 'a', 'does' etc), and then you will want to rank the results.</p> <p>Hey, wait - let's do that for web pages, and then we can have a ... watchamacallit ... - a "search engine", and then we can sell ads, and then ...</p> <p>No, seriously, what are the common ways to solve this problem?</p>
<p>One approach is the so-called bag-of-words model.</p> <p>As you guessed, first you count how many times words appear in the text (usually called a document in the NLP lingo). Then you throw out the so-called stop words, such as "the", "a", "or" and so on.</p> <p>You're left with words and word counts. Do this for a while and you get a comprehensive set of words that appear in your documents. You can then create an index for these words: "aardvark" is 1, "apple" is 2, ..., "z-index" is 70092. </p> <p>Now you can take your word bags and turn them into vectors. For example, if your document contains two references for aardvarks and nothing else, it would look like this:</p> <pre><code>[2 0 0 ... 70k zeroes ... 0]. </code></pre> <p>After this you can count the "angle" between the two vectors with <a href="http://en.wikipedia.org/wiki/Dot_product" rel="noreferrer">a dot product</a>. The smaller the angle, the closer the documents are.</p> <p>This is a simple version and there are other, more advanced techniques. May the <a href="http://en.wikipedia.org/wiki/Document_classification" rel="noreferrer">Wikipedia be with you</a>.</p>
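<p>To make the vector/angle idea concrete, here is a minimal Python sketch. The tiny stop-word list and the example documents are made up for illustration; a real system would use a full stop-word list and a fixed word index as described above.</p>

```python
from collections import Counter
import math

# Toy stop-word list; real systems use much larger lists.
STOP_WORDS = {"the", "a", "an", "or", "and", "is"}

def bag_of_words(document):
    """Count word occurrences, skipping stop words."""
    return Counter(w for w in document.lower().split() if w not in STOP_WORDS)

def cosine_similarity(bag_a, bag_b):
    """Cosine of the angle between two word-count vectors.

    1.0 means the same direction (very similar documents),
    0.0 means no words in common.
    """
    dot = sum(bag_a[w] * bag_b[w] for w in set(bag_a) & set(bag_b))
    norm_a = math.sqrt(sum(c * c for c in bag_a.values()))
    norm_b = math.sqrt(sum(c * c for c in bag_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

doc1 = bag_of_words("the aardvark ate the apple")
doc2 = bag_of_words("an aardvark and an apple")
print(cosine_similarity(doc1, doc2))  # shared words make the angle small
```

<p>Ranking candidate questions is then just sorting them by their similarity to the query document.</p>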
<p>Here is a bag-of-words solution with <code>TfidfVectorizer</code> in Python 3 (note the <code>stopwords</code> import from <code>nltk.corpus</code>, which the original snippet was missing):</p> <pre><code>#from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')
s = set(stopwords.words('english'))

train_x_cleaned = []
for i in train_x:
    sentence = filter(lambda w: not w in s, i.split(&quot;,&quot;))
    train_x_cleaned.append(' '.join(sentence))

vectorizer = TfidfVectorizer(binary=True)
train_x_vectors = vectorizer.fit_transform(train_x_cleaned)
print(vectorizer.get_feature_names_out())
print(train_x_vectors.toarray())

from sklearn import svm

clf_svm = svm.SVC(kernel='linear')
clf_svm.fit(train_x_vectors, train_y)

test_x = vectorizer.transform([&quot;test phrase 1&quot;, &quot;test phrase 2&quot;, &quot;test phrase 3&quot;])
print(type(test_x))
clf_svm.predict(test_x)
</code></pre>
<p>Is the edit control I'm typing in now, with all its buttons and rules freely available for use? </p> <p>My web project is also .Net based.</p>
<p>It's the <a href="http://wmd-editor.com/" rel="nofollow noreferrer">WMD</a> Markdown editor, which is free and seems to be pretty easy to use. Just include the JavaScript for it and (in the easiest case), it just attaches to the first textarea it finds.</p> <p>Here's some info about the <a href="http://daringfireball.net/projects/markdown/" rel="nofollow noreferrer">Perl implementation of Markdown</a> which, according to the site, WMD is 100% compatible with.</p> <p><br /> <a href="https://stackoverflow.com/questions/51808#51822">@Chris Upchurch</a> Technically the current release isn't open-source, just free to use. The next version is supposed to be released with an MIT license though.</p> <blockquote> <p>"now completely free to use. The next release will be open source under an MIT-style license."</p> </blockquote>
<p>The <a href="http://wmd-editor.com/" rel="nofollow noreferrer">WMD editor</a> is completely free (in the speech and beer senses of the word). It's available under an MIT-style license.</p>
<p>We are using classic ASP to call a C# DLL, and in the C# DLL we are using System.Drawing.Printing.PrinterSettings.InstalledPrinters to get a list of available printers. If the printers are not shared, they will not show up when a user tries to print. The Local System account can see and print to them from a VB6 DLL, and Administrators can print just fine from the C# DLL as you might expect. Is there some sort of permission we need to grant the user so these printers will be available?</p>
<p>As I recall, running a website uses the Network User account, which may not have permission to view local printers.</p> <p>There was a page on MSDN that said how you can impersonate another user that might have access to the printers, but I've not been able to find it.</p> <p>Edit: I posted too soon. <a href="http://support.microsoft.com/kb/306158" rel="nofollow noreferrer">Here's the page</a>.</p> <p>HTH</p>
<p>I have a web site in ASP.NET that uses a master page. In this master page I have a MultiView control with one view that has all the content for the content pages and one view that has some content local to the master page where I show error messages from all content pages.</p> <p>In this error view I have an ASP.NET Label control that displays the error messages. Usually the label works fine, but in some few cases the label is null, which renders a NullReferenceException. Now I have handled this case by checking if the label is null before using it, but still my question is:</p> <p><em>Why is this label null? What are the circumstances that can generate this?</em></p> <p><strong>EDIT:</strong> In the master page I have a method called SetErrorText that takes a string and sets the label. I'm calling this method from the content pages' Page_Load method, and this generally works fine. In all but two cases (that I've discovered so far) the label is initialised, and nothing separates these two cases from all the ones that work.</p> <p>Also, all other controls in the master page are initialised, such as the View control that houses the label.</p> <p>When the Page_Load of a content page rolls around, the master page should be populated.</p>
<p>It seems that the problem was one of sloppiness. Someone had forgotten to delete the auto-generated Content-controls that Visual Studio throws in on all content pages where the master page has a ContentPlaceHolder-control.</p> <p>If a content page has a Content-control, all controls that are placed in the ContentPlaceHolder-control on the master page will be null, it seems.</p>
<p>Could you be accessing it before it is created? Check the page lifecycle: <a href="http://msdn.microsoft.com/en-us/library/ms178472.aspx" rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/library/ms178472.aspx</a></p>
<p>Two reasons this would be useful, in case there's some other way to achieve these goals: 1) Building in the Flash IDE is really slow; I was hoping a third-party compiler would be faster. 2) It would be nice to be able to build projects on machines that don't have the Flash IDE installed. I'm familiar with other AS3 compilers that are out there, but I don't know of any that take FLA files as input.</p>
<p>To answer the original question, there is no way to compile FLAs without using the Flash IDEs. </p> <p>The only partial solution to to use a command line script that automates opening Flash Authoring and compiling the FLA. You can find one such example here:</p> <p><a href="http://www.mikechambers.com/blog/2004/02/20/flashcommand-flash-command-line-compiler-for-os-x/" rel="nofollow noreferrer">http://www.mikechambers.com/blog/2004/02/20/flashcommand-flash-command-line-compiler-for-os-x/</a></p> <p>If you just need to compile ActionScript code and assets, there are a number of options (some included in this thread), including the mxmlc compiler included in the Flex SDK (provided by Adobe).</p> <p><a href="http://www.adobe.com/products/flex/" rel="nofollow noreferrer">http://www.adobe.com/products/flex/</a></p> <p>Hope that helps...</p> <p>mike chambers</p> <p>mesh@adobe.com</p>
<p>There's a plugin for Eclipse called FDT. It uses the open source compiler MTASC and supports ANT. The tool is free for OpenSource developers. Get more Infos here: <a href="http://fdt.powerflasher.com/" rel="nofollow noreferrer">http://fdt.powerflasher.com/</a></p> <p>Hope it helps :)</p>
<p>Here's a basic regex technique that I've never managed to remember. Let's say I'm using a fairly generic regex implementation (e.g., grep or grep -E). If I were to do a list of files and match any that end in either <code>.sty</code> or <code>.cls</code>, how would I do that?</p>
<pre><code>ls | grep -E "\.(sty|cls)$" </code></pre> <ul> <li><code>\.</code> matches literally a <code>"."</code> - an unescaped <code>.</code> matches any character</li> <li><code>(sty|cls)</code> - match <code>"sty"</code> or <code>"cls"</code> - the | is an <code>or</code> and the brackets limit the expression.</li> <li><code>$</code> forces the match to be at the end of the line</li> </ul> <p>Note, you want <code>grep -E</code> or <code>egrep</code>, not <code>grep -e</code> as that's a different option for lists of patterns.</p>
<pre><code>egrep "\.sty$|\.cls$" </code></pre>
<p>On several occasions I've wanted lettering/numbering printed as part of a design, but with the ability to make it stand out more effectively in the printed object without having to do detailed manual finishing. Is there a good material which can catch in and fill sunken (depth 0.4&nbsp;mm, width 0.6-1.0&nbsp;mm) lines/strokes of alphanumeric characters without sticking to the surface (including fine layer ridges) of the print? My best result so far has been with crayon wax, but I wonder if there are more suitable materials. (Polymer clay, perhaps?)</p> <p>Results with crayons:</p> <p><img src="https://i.stack.imgur.com/xLmJ6.jpg" alt="3D printed D20s with numbering filled with crayon wax"></p> <p>Durability is nice (and essential for some applications), but for many uses I have in mind it's not such a big deal. For example, another place I've wanted clear text is on test panels to check nut/bolt thread sizes, in which case the text is unlikely to receive harsh treatment, but any heat-based curing processes might affect the dimensional accuracy negatively. So both durable and non-durable solutions are interesting to me.</p>
<p>The great pics really help with the answerability of this question. From how catastrophic the failure is, and how it's clearly independent of any specialty needs for the particular print such as tiny bed-adhesion contacts, sharp overhangs, bridges, etc., this is definitely not a problem with temperature. Different people recommend different temperatures for PLA, but I find that 210°C works well for me, and if you go much lower you'll hit problems getting the needed extrusion rate for anything but slow print speeds.</p> <p>I've seen nearly this exact phenomenon before, so I knew it was probably a matter of the bed being too high, blocking extrusion of the first layer and forcing what little material can escape out to the sides of the nozzle, then tearing into it when the next adjacent line is laid out.</p> <p>If I didn't know that, though, I'd still start looking for a source of the problem that's related to extrusion rate. Something was clearly wrong with getting the right amount of material in the right space, which indicates to me that there's either too much material (overextrusion/wrong filament diameter selected) or too little space (bed too high).</p>
<p>I found the solution to be the exact opposite. My bed was too low (as in too far from the hotend). All the above mentioned aside, I did also drop the flow rate on the brim and initial layer by roughly 2-3 %. Now it prints perfectly again. (your mileage may vary)</p>
<p>In C#, when you implement an interface, all members are implicitly public. Wouldn't it be better if we could specify the accessibility modifier (<code>protected</code>, <code>internal</code>, except <code>private</code> of course), or should we just use an abstract class instead?</p>
<p>If an interface is internal, all its members will be internal to the assembly. If a nested interface is protected, only the subclasses of the outer class could access that interface.</p> <p>Internal members for an interface outside of its declaring assembly would be pointless, as would protected members for an interface outside of its declaring outer class.</p> <p>The point of an interface is to describe a <strong>contract</strong> between a implementing type and users of the interface. Outside callers aren't going to care and shouldn't <strong>have</strong> to care about implementation, which is what internal and protected members are for.</p> <p>For protected members that are called by a base class, abstract classes are the way to go for specifying a contract between base classes and classes that inherit from them. But in this case, implementation details are usually very relevant, unless it's a degenerate pure abstract class (where <em>all</em> members are abstract) in which case protected members are useless. In that case, go with an interface and save the single base class for implementing types to choose.</p>
<p>I'm familiar with Java rather than C#, but why on earth would you want a private member within an interface? It couldn't have any implementation and would be invisible to implementing classes, so it would be useless. Interfaces exist to specify behaviour. If you need default behaviour then use an abstract class.</p>
<p>How do you create a database from an Entity Data Model.</p> <p>So I created a database using the EDM Designer in VisualStudio 2008, and now I want to generate the SQL Server Schema to create storage in SQL Server. </p>
<p>From what I understand you are not just supposed to use EDM as a "pretty" database designer; in fact EDM does not depend on a specific storage layer. It tries to abstract that part for the developer. There are design schemas (CSDL) and storage schemas (SSDL). Anyway, don't mean to lecture you. ;)</p> <p>There is <a href="http://msdn.microsoft.com/en-us/library/bb387165.aspx" rel="nofollow noreferrer">EDM Generator</a>, which you use to create models and classes, etc. For a DDL kind of export, I've never done that, but what I did was map my EDM to an existing database, which was easier for me to get started.</p> <p>There is a great tutorial on MSDN, which details step-by-step instructions on how to go about using an existing database, but also touches on the start-from-scratch approach.</p> <p><a href="http://msdn.microsoft.com/en-us/magazine/cc163286.aspx" rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/magazine/cc163286.aspx</a></p>
<p>The feature "Generate Database Schema from Model" is scheduled for a future release of Entity Framework. V1 doesn't support schema generation based on EF models.</p>
<p>During my last print I noticed this jumpy behavior during heat up (blue is bed, red is hotend):</p> <p><a href="https://i.stack.imgur.com/KlD2K.png" rel="nofollow noreferrer" title="Graph of heatbed and hotend temperature over time"><img src="https://i.stack.imgur.com/KlD2K.png" alt="Graph of heatbed and hotend temperature over time" title="Graph of heatbed and hotend temperature over time"></a></p> <p>So I am wondering about the jumps. What could cause this? I do not think that it is a defunct sensor (it is in both sensors) and I also don't think it is real. Could this pose some sort of hazard as temperature might not properly be controlled?</p> <p>I use an Anet A8 printer with Marlin 1.1.9 and Octoprint</p>
<p>Since the terminal temperatures never exceed the setpoints, there's no apparent potential for disaster. Whether it's due to missing readings or to some sequencing of power (current) applied to the bed vs. the hotend, it really doesn't matter. </p> <p>If you have a similar graph of the temperatures over an hour of printing and you see significant anomalies there, that might be of interest. </p>
<p>It looks like everything is OK with the real temperature, and it is just OctoPrint missing some readings, like Oscar said. But I think it is unlikely to be the USB cable's fault; the whole print would fail in that case. I think the Pi is too busy, maybe due to a webcam streaming at a high rate (try reducing the framerate). Another reason might be if you use a Pi Zero W, especially if you transfer data over the Wi-Fi at the same time, since Wi-Fi and USB cause bottlenecks for each other on that board.</p> <p>PS. I'd comment, but don't have the privilege yet.</p>
<p>During printing, my printer occasionally makes some mystery moves: it will very slowly move either the X or Y axis all the way to the left/front, before very slowly moving back to its original position and resuming the print as normal. I've checked my G-code files, and the moves are definitely not part of the G-code. What could be causing this?</p> <p>I'm printing from an SD card on a Cartesian printer.</p>
<p>The issue was due to a corrupt SD-card, which was occasionally having some garbage read from it. It turns out that Marlin will try interpret a corrupt move command like <code>G0 X1q3.54</code> and still read as many numbers as it can. In this example, it would be interpreted as <code>G0 X1</code> rather than (as might have been intended) <code>G0 X103.54</code>.</p> <p>This explains my symptoms perfectly:</p> <ul> <li><p>X and Y always moved to (approximately) their home positions, but it was always only one of them (it's quite unlikely that both moves are corrupted).</p></li> <li><p>Z was not affected because Z moves are much rarer in the G-code (only on layer change) and thus it was very unlikely that a Z move would be affected.</p></li> <li><p>E was not affected since a request to move E to near 0 would be prevented by Marlin's long extrusion prevention.</p></li> </ul>
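<p>To illustrate how a lenient parser can misread a corrupted field, here is a small Python sketch. It mimics the C <code>strtod</code>-style behavior described above (read as many leading numeric characters as possible, ignore the rest); it is an illustration, not Marlin's actual parser code.</p>

```python
import re

def lenient_float(field):
    """Parse a numeric prefix the way C's strtod does: consume as many
    leading numeric characters as possible and silently ignore the rest.

    Returns None if no digits were found at all.
    """
    text = re.match(r"[-+]?\d*\.?\d*", field).group(0)
    return float(text) if any(ch.isdigit() for ch in text) else None

# A single corrupted byte turns "X103.54" into "X1q3.54"; a lenient
# parser stops at the junk character and happily returns 1.0.
print(lenient_float("103.54"))  # 103.54 - the intended coordinate
print(lenient_float("1q3.54"))  # 1.0 - the corrupted reading
```

<p>So a single flipped byte in the middle of a coordinate doesn't produce an error; it silently produces a move to a completely different position, which is exactly the slow-crawl-towards-home symptom described.</p>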
<p>Besides a corrupt SD card that stores wrong bits, leading to absurd commands, it's also possible that other parts in the creation of the file are compromised:</p> <p>This can for example happen if the card is removed during writing - but in this case it should mostly result in an incomplete file when importing into an interpreter.</p> <p>More of a random distribution of bad commands would appear if the writing process itself is faulty, for example if the SD port is defective or the adapter has an error. Another way that writing can fail is if the drivers for the SD card adapter/port are corrupted. </p> <p>To detect a bad file or corrupted card, re-import the G-code into a slicer (for example, Cura allows this) and look at the tool paths. If any port does this with any card, software is to blame: see if it persists after a driver update (rare!) and a reinstall of the slicer. If it fails in one port but works in a different one, the port or adapter might be at fault and might need replacement. If it is endemic to one card, that card is corrupt and should be thrown out. If it is endemic to a single file, overwrite it with a new one - sometimes writing fails for reasons that are almost impossible to understand.</p> <p>If the file and card are fine but read wrongly at the printer, then the card reader in the printer or the board is to blame.</p>
<p>How do you calculate the distance between 2 cities? </p>
<p>If you need to take the curvature of the earth into account, the Great-Circle distance is what you're looking for. <a href="http://en.wikipedia.org/wiki/Great-circle_distance" rel="nofollow noreferrer">The Wikipedia article</a> probably does a better job of explaining how the formula works than me, and there's also <a href="http://www.edwilliams.org/avform.htm" rel="nofollow noreferrer">this aviation formulary page</a> that goes into more detail.</p> <p>The formulas are only the first part of the puzzle though; if you need to make this work for arbitrary cities, you'll need a location database to get the lat/long from. Luckily you can get this for free from <a href="http://www.geonames.org/" rel="nofollow noreferrer">Geonames.org</a>, although there are commercial DBs available (ask Google). So, in general, look up the two cities you want, get the lat/long coordinates and plug them into the formula as in <a href="http://en.wikipedia.org/wiki/Great-circle%5Fdistance#Worked_example" rel="nofollow noreferrer">the Wikipedia Worked Example</a>.</p> <p>Other suggestions:</p> <ul> <li>For a full commercial solution, there's <a href="http://www.alk.com/pcmiler/pcmiler-system-req.asp" rel="nofollow noreferrer">PC Miler</a> which is used by many trucking companies to calculate shipping rates.</li> <li>Make calls to the Google Maps (or other) API. If you need to do many requests per day, consider caching the results on the server.</li> <li>Also <strong>very important</strong> is to consider building an equivalence database for cities, suburbs, towns etc. if you think you'll ever need to group your data. This gets really complicated though, and you may not find a one-size-fits-all solution for your problem.</li> </ul> <p>Last but not least, Joel wrote an article about this problem a while back, so here you go: <a href="http://www.joelonsoftware.com/items/2006/10/09.html" rel="nofollow noreferrer">New Feature: Job Search</a></p>
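<p>As a concrete illustration of the great-circle distance, here is a minimal Python implementation using the haversine formula; the London and Paris coordinates below are approximate example values, and the earth radius is the usual 6371 km mean radius.</p>

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/long points (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine of the central angle between the two points.
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return radius_km * 2 * math.asin(math.sqrt(a))

# Example coordinates (approximate): London and Paris.
print(great_circle_km(51.5074, -0.1278, 48.8566, 2.3522))  # roughly 340 km
```

<p>Once you have the lat/long pairs from your location database, this is all you need for an "as the crow flies" distance; road distance is a different (and much harder) problem.</p>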
<p>You find the Lat/Lon of the city, then use a distance estimation algorithm for Lat/Lon coordinates.</p>
<p>It seems there are some missing lines on the outer wall on the Z-axis with my prints. I'm not able to pinpoint the problem. Does anyone have ideas about what might be wrong with my setup/settings?</p> <p>Example:</p> <p><a href="https://i.stack.imgur.com/3q1k3.jpg" rel="nofollow noreferrer" title="Printed model with printing errors highlighted"><img src="https://i.stack.imgur.com/3q1k3.jpg" alt="Printed model with printing errors highlighted" title="Printed model with printing errors highlighted" /></a></p> <p>Here are some settings that I think are relevant:<br /> Printer: Ender 3 v1<br /> Filament: Das Filament<br /> Slicer: Cura</p> <ul> <li>Hotend temp: 215 °C</li> <li>Layer height: 0.2 mm</li> <li>Wall speed: 30 mm/s</li> <li>Travel speed: 200 mm/s</li> <li>Retraction distance: 6.5 mm</li> <li>Combing mode: not in skin (Max comb: 30)</li> </ul> <p>Cheers</p>
<p>According to <a href="https://github.com/MarlinFirmware/Configurations/pull/633#issuecomment-995206382" rel="nofollow noreferrer">'The-EG' comment</a> in this GitHub issue, <a href="https://github.com/MarlinFirmware/Configurations/pull/633" rel="nofollow noreferrer">Add Creality Ender 2 Pro config #633</a>, you can often determine the stepper drivers in one of a few ways:</p> <ol> <li><p>Listen to the sound. The 'TMC22**' will sound much quieter</p> </li> <li><p>Look for a marking in Sharpie on the SD card reader</p> <pre><code>C = HR4998 E = A4988 A = TMC2208 B = TMC2209 H = TMC2225 </code></pre> </li> <li><p>Remove the heat sink</p> <p><a href="https://github.com/MarlinFirmware/Configurations/pull/633#issuecomment-995480295" rel="nofollow noreferrer">https://github.com/MarlinFirmware/Configurations/pull/633#issuecomment-995480295</a></p> <p>After removing the heat sink, it appears that the chip is actually a <code>MS35775</code></p> <p><a href="https://i.stack.imgur.com/0o5mK.jpg" rel="nofollow noreferrer" title="Closeup of MS35775 on board"><img src="https://i.stack.imgur.com/0o5mK.jpg" alt="Closeup of MS35775 on board" title="Closeup of MS35775 on board" /></a></p> </li> </ol>
<p>MS35775 appears to be TMC2208 compatible. You can find the data sheet on relmon.com; here is the overview:</p> <ul> <li>2-phase stepping motor, peak current of 2 A</li> <li>Step/dir interface with 2, 4, 8, 16, or 32 microsteps</li> <li>Internal 256 microsteps</li> <li>Quiet mode</li> <li>Fast mode</li> <li>HS Rdson 0.29 Ω, LS Rdson 0.28 Ω</li> <li>Voltage range 4.75 ~ 36 V</li> <li>When the motor is still, it will enter power-saving mode automatically</li> <li>Internal resistor mode is optional (no need for an external sense resistor)</li> <li>Single-wire UART bus and OTP control</li> <li>QFN28 package</li> </ul>
<p>I have a Z-axis which schematically follows the same principle as the MakerBot one (a threaded rod and two guide rods with linear ball bearings carrying a platform). It is from an old experimental lab doing physics or chemistry.</p> <p>The axis moves smoothly for about 5 cm, but then it gets stuck at either end of that range. Both the driving threaded rod and the guide rods seem perfectly fine and should allow for further movement, as far as I can tell by eye.</p> <p>Where should I look to find further issues, and how could I improve the performance? Do I need to take the construction apart?</p>
<p>There could be a few issues at play:</p> <ol> <li>The smooth rods are not parallel, which causes the bearings to bind the further you go up.</li> <li>Part of the thread is damaged, not allowing it to pass through the nut.</li> <li>The threaded rod is bent significantly, to where it either doesn't pass through the nut or binds the assembly. (Is the end of the threaded rod opposite the motor constrained?)</li> <li>Even less likely is that the motor could be damaged. Since you said it moved 5 cm, I would have to assume that is several revolutions of the motor, so this is unlikely but possible.</li> </ol> <p>The easiest way to find the problem would be to unhook/remove the threaded rod and see if you can move the carriage up and down the rods by hand; if so, you've just narrowed the problem down significantly.</p>
<p>Is it possible to actually make use of placement new in portable code when using it for arrays?</p> <p>It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case.</p> <p>The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption:</p> <pre><code>#include &lt;new&gt;
#include &lt;stdio.h&gt;

class A
{
    public:
        A() : data(0) {}
        virtual ~A() {}
        int data;
};

int main()
{
    const int NUMELEMENTS=20;

    char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
    A *pA = new(pBuffer) A[NUMELEMENTS];

    // With VC++, pA will be four bytes higher than pBuffer
    printf("Buffer address: %x, Array address: %x\n", pBuffer, pA);

    // Debug runtime will assert here due to heap corruption
    delete[] pBuffer;

    return 0;
}
</code></pre> <p>Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only <code>sizeof(A)*NUMELEMENTS</code> big, the last element in the array is written into unallocated heap.</p> <p>So the question is: can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.</p>
<p>Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually. For example:</p> <pre><code>int main(int argc, char* argv[])
{
    const int NUMELEMENTS=20;

    char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
    A *pA = (A*)pBuffer;

    // Construct each element in place; placement new returns the address,
    // so there is nothing to assign back into the array.
    for(int i = 0; i &lt; NUMELEMENTS; ++i)
    {
        new (pA + i) A();
    }

    printf("Buffer address: %x, Array address: %x\n", pBuffer, pA);

    // don't forget to destroy!
    for(int i = 0; i &lt; NUMELEMENTS; ++i)
    {
        pA[i].~A();
    }

    delete[] pBuffer;

    return 0;
}
</code></pre> <p>Regardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks ;)</p> <p><em>Note</em>: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still illustrates the point :) Hope it helps in some way!</p> <hr> <p>Edit:</p> <p>The reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are, it wouldn't be able to do this.</p>
<p>I think gcc does the same thing as MSVC, but of course this doesn't make it "portable".</p> <p>I think you can work around the problem when NUMELEMENTS is indeed a compile-time constant, like so:</p> <pre><code>typedef A Arr[NUMELEMENTS];

A* p = new (buffer) Arr;
</code></pre> <p>This should use the scalar placement new.</p>
<p>The .NET garbage collector will eventually free up memory, but what if you want that memory back immediately? What code do you need to use in a class <code>MyClass</code> to call</p> <pre><code>MyClass.Dispose() </code></pre> <p>and free up all the used space by variables and objects in <code>MyClass</code>?</p>
<p>IDisposable has nothing to do with freeing memory. IDisposable is a pattern for freeing <em>unmanaged</em> resources -- and memory is quite definitely a managed resource.</p> <p>The links pointing to GC.Collect() are the correct answer, though use of this function is generally discouraged by the Microsoft .NET documentation.</p> <p><strong>Edit:</strong> Having earned a substantial amount of karma for this answer, I feel a certain responsibility to elaborate on it, lest a newcomer to .NET resource management get the wrong impression.</p> <p>Inside a .NET process, there are two kinds of resource -- managed and unmanaged. "Managed" means that the runtime is in control of the resource, while "unmanaged" means that it's the programmer's responsibility. And there really is only one kind of managed resource that we care about in .NET today -- memory. The programmer tells the runtime to allocate memory and after that it's up to the runtime to figure out when the memory can freed. The mechanism that .NET uses for this purpose is called <a href="http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)" rel="noreferrer">garbage collection</a> and you can find plenty of information about GC on the internet simply by using Google.</p> <p>For the other kinds of resources, .NET doesn't know anything about cleaning them up so it has to rely on the programmer to do the right thing. 
To this end, the platform gives the programmer three tools:</p> <ol> <li>The IDisposable interface and the "using" statement in VB and C#</li> <li>Finalizers</li> <li>The IDisposable pattern as implemented by many BCL classes</li> </ol> <p>The first of these allows the programmer to efficiently acquire a resource, use it and then release it all within the same method.</p> <pre><code>using (DisposableObject tmp = DisposableObject.AcquireResource())
{
    // Do something with tmp
}
// At this point, tmp.Dispose() will automatically have been called
// BUT, tmp may still be a perfectly valid object that still takes up memory
</code></pre> <p>If "AcquireResource" is a factory method that (for instance) opens a file and "Dispose" automatically closes the file, then this code cannot leak a file resource. But the memory for the "tmp" object itself may well still be allocated. That's because the IDisposable interface has absolutely no connection to the garbage collector. If you <em>did</em> want to ensure that the memory was freed, your only option would be to call <code>GC.Collect()</code> to force a garbage collection.</p> <p>However, it cannot be stressed enough that this is probably not a good idea. It's generally much better to let the garbage collector do what it was designed to do, which is to manage memory.</p> <p>What happens if the resource is being used for a longer period of time, such that its lifespan crosses several methods? Clearly, the "using" statement is no longer applicable, so the programmer would have to manually call "Dispose" when he or she is done with the resource. And what happens if the programmer forgets? If there's no fallback, then the process or computer may eventually run out of whichever resource isn't being properly freed.</p> <p>That's where finalizers come in. A finalizer is a method on your class that has a special relationship with the garbage collector. 
The GC promises that -- before freeing the memory for any object of that type -- it will first give the finalizer a chance to do some kind of cleanup.</p> <p>So in the case of a file, we theoretically don't need to close the file manually at all. We can just wait until the garbage collector gets to it and then let the finalizer do the work. Unfortunately, this doesn't work well in practice because the garbage collector runs non-deterministically. The file may stay open considerably longer than the programmer expects. And if enough files are kept open, the system may fail when trying to open an additional file.</p> <p>For most resources, we want both of these things. We want a convention to be able to say "we're done with this resource now" and we want to make sure that there's at least some chance for the cleanup to happen automatically if we forget to do it manually. That's where the "IDisposable" pattern comes into play. This is a convention that allows IDisposable and a finalizer to play nicely together. You can see how the pattern works by looking at the <a href="http://msdn.microsoft.com/en-us/library/system.idisposable.aspx" rel="noreferrer">official documentation for IDisposable</a>.</p> <p><strong>Bottom line:</strong> If what you really want to do is to just make sure that memory is freed, then IDisposable and finalizers will not help you. But the IDisposable interface is part of an extremely important pattern that all .NET programmers should understand.</p>
<p>In answer to the original question, with the information given so far by the original poster, it is 100% certain that he does not know enough about programming in .NET to even be given the answer: use GC.Collect(). I would say it is 99.99% likely that he really doesn't need to use GC.Collect() at all, as most posters have pointed out. </p> <p>The correct answer boils down to 'Let the GC do its job. Period. You have other stuff to worry about. But you might want to consider whether and when you should dispose of or clean up specific objects, and whether you need to implement IDisposable and possibly Finalize in your class.'</p> <p>Regarding Keith's post and his Rule #4:</p> <p>Some posters are confusing rule 3 and rule 4. Keith's rule 4 is absolutely correct, unequivocally. It's the one rule of the four that needs no editing at all. I would slightly rephrase some of his other rules to make them clearer, but they are essentially correct if you parse them correctly, and actually read the whole post to see how he expands on them.</p>
(In such a case, it is silly to think you actually NEED to immediately free up memory with a forced GC, anyway.)</p></li> <li><p>If your class uses an unmanaged resource, OR instantiates another object that itself implements IDisposable, then your class should either:</p> <p>a) dispose/release these immediately in a local context in which they were created, OR...</p> <p>b) implement IDisposable in the pattern recommended within Keith's post, or a few thousand places on the internet, or in literally about 300 books by now.</p> <p>b.1) Furthermore, if (b), and it is an unmanaged resource that has been opened, both IDisposable AND Finalize SHOULD ALWAYS be implemented, per Keith's Rule #4.<br> In this context, Finalize absolutely IS a safety net in one sense: if someone instantiates YOUR IDisposable object that uses an unmanaged resource, and they fail to call dispose, then Finalize is the last chance for YOUR object to close the unmanaged resource properly.<br> (Finalize should do this by calling Dispose in such a way that the Dispose method skips over releasing anything BUT the unmanaged resource. Alternatively, if your object's Dispose method IS called properly by whatever instantiated your object, then it BOTH passes on the Dispose call to all IDisposable objects it has instantiated, AND releases the unmanaged resources properly, ending with a call to suppress the Finalize on your object, which means that the impact of using Finalize is reduced if your object is disposed properly by the caller. All of these points are included in Keith's post, BTW.) </p> <p>b.2) IF your class is only implementing IDisposable because it needs to essentially pass on a Dispose to an IDisposable object it has instantiated, then don't implement a Finalize method in your class in that case. Finalize is for handling the case that BOTH Dispose was never called by whatever instantiated your object, AND an unmanaged resource was utilized that's still unreleased. 
</p></li> </ol> <p>In short, regarding Keith's post, he is completely correct, and that post is the most correct and complete answer, in my opinion. He may use some short-hand statements that some find 'wrong' or object to, but his full post expands on the usage of Finalize completely, and he is absolutely correct. Be sure to read his post completely before jumping on one of the rules or preliminary statements in his post.</p>
<p>I hope I haven't painted myself into a corner. I've gotten what seems to be most of the way through implementing a Makefile and I can't get the last bit to work. I hope someone here can suggest a technique to do what I'm trying to do.</p> <p>I have what I'll call "bills of materials" in version controlled files in a source repository and I build something like:</p> <pre><code>make VER=x
</code></pre> <p>I want my Makefile to use $(VER) as a tag to retrieve a BOM from the repository, generate a dependency file to include in the Makefile, rescan including that dependency, and then build the product. </p> <p>More generally, my Makefile may have several targets -- A, B, C, etc. -- and I can build different versions of each so I might do:</p> <pre><code>make A VER=x
make B VER=y
make C VER=z
</code></pre> <p>and the dependency file includes information about all three targets.</p> <p>However, creating the dependency file is somewhat expensive so if I do:</p> <pre><code>make A VER=x
...make source (not BOM) changes...
make A VER=x
</code></pre> <p>I'd really like the Makefile to not regenerate the dependency. And just to make things as complicated as possible, I might do:</p> <pre><code>make A VER=x
.. change version x of A's BOM and check it in
make A VER=x
</code></pre> <p>so I need to regenerate the dependency on the second build.</p> <p>The checkout messes up the timestamps used to regenerate the dependencies, so I think I need a way for the dependency file to depend not on the BOM but on some indication that the BOM changed.</p> <p>What I've come to is making the BOM checkout happen in a .PHONY target (so it always gets checked out) and keeping track of the contents of the last checkout in a ".sig" file (if the signature file is missing or the contents are different from the signature of the new file, then the BOM changed), and having the dependency generation depend on the signatures. 
At the top of my Makefile, I have some setup:</p> <pre><code>BOMS = $(addsuffix .bom,$(MAKECMDGOALS))
SIGS = $(subst .bom,.sig,$(BOMS))
DEP = include.d

-include $(DEP)
</code></pre> <p>And it seems I always need to do:</p> <pre><code>.PHONY: $(BOMS)

$(BOMS):
    ...checkout TAG=$(VER) $@
</code></pre> <p>But, as noted above, if I do just that, and continue:</p> <pre><code>$(DEP) : $(BOMS)
    ... recreate dependency
</code></pre> <p>Then the dependency gets updated every time I invoke make. So I try:</p> <pre><code>$(DEP) : $(SIGS)
    ... recreate dependency
</code></pre> <p>and</p> <pre><code>$(BOMS):
    ...checkout TAG=$(VER) $@
    ...if $(subst .bom,.sig,$@) doesn't exist
    ...   create signature file
    ...else
    ...   if new signature is different from file contents
    ...      update signature file
    ...   endif
    ...endif
</code></pre> <p>But the dependency generation doesn't get tripped when the signature changes. I think it's because $(SIGS) isn't a target, so make doesn't notice when the $(BOMS) rule updates a signature.</p> <p>I tried creating a .sig:.bom rule and managing the timestamps of the checked out BOM with touch, but that didn't work.</p> <p>Someone suggested something like:</p> <pre><code>$(DEP) : $(SIGS)
    ... recreate dependency

$(BOMS) : $(SIGS)
    ...checkout TAG=$(VER) $@

$(SIGS) :
    ...if $(subst .bom,.sig,$(BOMS)) doesn't exist
    ...   create it
    ...else
    ...   if new signature is different from file contents
    ...      update signature file
    ...   endif
    ...endif
</code></pre> <p>but how can the BOM depend on the SIG when the SIG is created from the BOM? As I read it, that says: "create the SIG from the BOM and if the SIG is newer than the BOM then checkout the BOM". How do I bootstrap that process? Where does the first BOM come from?</p>
<p>Make is very bad at being able to detect actual file changes, as opposed to just updated timestamps. </p> <p>It sounds to me that the root of the problem is that the bom-checkout always modifies the timestamp of the bom, causing the dependencies to be regenerated. I would probably try to solve this problem instead -- try to checkout the bom without messing up the timestamp. A wrapper script around the checkout tool might do the trick; first checkout the bom to a temporary file, compare it to the already checked out version, and replace it only if the new one is different.</p> <p>If you're not strictly bound to using make, there are other tools which are much better at detecting actual file changes (SCons, for example).</p>
<p>I'm not a make expert, but I would try having $(BOMS) depend on $(SIGS), and making the $(SIGS) target execute the if/else rules that you currently have under the $(BOMS) target.</p> <pre><code>$(DEP) : $(SIGS)
    ... recreate dependency

$(BOMS) : $(SIGS)
    ...checkout TAG=$(VER) $@

$(SIGS) :
    ...if $(subst .bom,.sig,$(BOMS)) doesn't exist
    ...   create it
    ...else
    ...   if new signature is different from file contents
    ...      update signature file
    ...   endif
    ...endif
</code></pre> <p><strong>EDIT:</strong> You're right, of course, you can't have $(BOM) depend on $(SIGS). But in order to have $(DEP) recreate, you need to have $(SIG) as a target. Maybe have an intermediate target that depends on both $(BOM) and $(SIG).</p> <pre><code>$(DEP) : $(SIGS)
    ... recreate dependency

$(NEWTARGET) : $(BOMS) $(SIGS)

$(BOMS) :
    ...checkout TAG=$(VER) $@

$(SIGS) :
    ...if $(subst .bom,.sig,$(BOMS)) doesn't exist
    ...   create it
    ...else
    ...   if new signature is different from file contents
    ...      update signature file
    ...   endif
    ...endif
</code></pre> <p>$(SIGS) might also need to depend on $(BOMS); I would play with that and see.</p>
<p>When compiling from source, I never know which configure flags to use to optimize the compilation for my environment. Assume the following hardware/OS:</p> <ul> <li>Single Core, 2 GHz Intel</li> <li>512MB Ram</li> <li>Debian 4</li> </ul> <p>I usually just go with </p> <p><code>./configure --prefix=/usr/local</code></p> <p>Should I be doing anything else?</p>
<p>I always use Debian packages. Compiling from source can break your development environment through library conflicts, and such problems are hard to detect.</p>
<p>You might want to check those few options out, which may be required by a Ruby on Rails environment, in which case they should be compiled in. Just make sure the directory corresponds to your current settings.</p> <pre><code>--with-openssl-dir=/usr --with-readline-dir=/usr --with-zlib-dir=/usr
</code></pre>
<p>I'm considering using the ChangePassword control on an ASP.NET 2.0 Webform. I don't want the 'cancel' button to show.</p> <p>Is there a good way to hide it without resorting to silly "width = 0" sort of games? </p> <p>Or perhaps there's a generic way to walk through the parts of a composite control like this and hide individual parts?</p>
<p>Set CancelButtonStyle.CssClass to something like "hiddenItem" and set the CSS to "display:none".</p> <p>Otherwise you can convert the control to a template and simply delete away the cancel-button manually. When you click the control in Design-mode in Visual Studio, you get a little arrow with options and one of them is "Convert to Template".</p>
<p>You can use the ChangePassword.CancelButtonStyle Property to set the CSS-class on the Cancel Button. Then just apply "display: none" on the specified class.</p>
<p>Does anyone know of a way to alter a computed column without dropping the column in SQL Server. I want to stop using the column as a computed column and start storing data directly in the column, but would like to retain the current values.</p> <p>Is this even possible?</p>
<p>Not that I know of, but here is something you can do:</p> <p>Add another column to the table, update that column with the values of the computed column, then drop the computed column.</p>
<p>Ok, so let me see if I got this straight. You want to take a column that is currently computed and make it a plain-jane data column. Normally this would drop the column but you want to keep the data in the column.</p> <ol> <li>Make a new table with the primary key columns from your source table and the generated column.</li> <li>Copy the data from your source table into the new table.</li> <li>Change the column on your source table.</li> <li>Copy the data back.</li> </ol> <p>No matter what you do I am pretty sure changing the column will drop it. This way is a bit more complex but not that bad and it saves your data.</p> <p>[Edit: @SqlMenace's answer is much easier. :) Curse you Menace!! :)]</p>
<p>I have up to 4 files based on this structure (note the prefixes are dates)</p> <ul> <li>0830filename.txt</li> <li>0907filename.txt</li> <li>0914filename.txt</li> <li>0921filename.txt</li> </ul> <p>I want to open the most recent one (0921filename.txt). How can I do this in a batch file?</p> <p>Thanks.</p>
<p>This method uses the actual file modification date, to figure out which one is the latest file:</p> <pre><code>@echo off
for /F %%i in ('dir /B /O:-D *.txt') do (
    call :open "%%i"
    exit /B 0
)
:open
start "dummy" "%~1"
exit /B 0
</code></pre> <p>This method, however, chooses the last file in alphabetic order (or the first one, in reverse-alphabetic order), so if the filenames are consistent - it will work:</p> <pre><code>@echo off
for /F %%i in ('dir /B *.txt^|sort /R') do (
    call :open "%%i"
    exit /B 0
)
:open
start "dummy" "%~1"
exit /B 0
</code></pre> <p>You actually have to choose which method is better for you.</p>
<p>Use a regular expression to parse the relevant integer out and compare the values.</p>
<p>I heard that if you use port 443 (the port usually used for https) for ssh, the encrypted packets look the same to your isp.</p> <p>Could this be a way to avoid traffic shaping/throttling?</p>
<p>I'm not sure it's true that any given ssh packet "looks" the same as any given https packet.</p> <p>However, over their lifetime they don't behave the same way. The session setup and teardown don't look alike (SSH offers a plain-text banner during the initial connect, for one thing). Also, typically wouldn't an https session be short-lived? Connect, get your data, disconnect, whereas ssh would connect and persist for long periods of time. I think perhaps using 443 instead of 22 might get past naive filters, but I don't think it would fool someone specifically looking for active attempts to bypass their filters.</p> <p>Is throttling ssh a common occurrence? I've experienced people blocking it, but I don't think I've experienced throttling. Heck, I usually use ssh tunnels to bypass other blocks since people don't usually care about it.</p>
<p>Your ISP is probably more likely to traffic shape port 443 over 22, seeing as 22 requires more real-time responsiveness.</p> <p>Not really a programming question though, maybe you'll get a more accurate response somewhere else..</p>
<p>Does Java have a built-in way to escape arbitrary text so that it can be included in a regular expression? For example, if my users enter "$5", I'd like to match that exactly rather than a "5" after the end of input.</p>
<p>Since <a href="http://download.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html#quote(java.lang.String)" rel="noreferrer">Java 1.5, yes</a>:</p> <pre><code>Pattern.quote("$5"); </code></pre>
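To see why the quoting matters, here is a small sketch (the class name and sample strings below are made up for illustration). Without quoting, the "$" is treated as an end-of-input anchor, so the pattern can never match; with <code>Pattern.quote</code> the user's input matches literally:

```java
import java.util.regex.Pattern;

public class QuoteDemo {
    public static void main(String[] args) {
        String userInput = "$5";
        String text = "price is $5 today";

        // Raw pattern: "$" anchors at end of input, so "$5" can never match.
        boolean raw = Pattern.compile(userInput).matcher(text).find();

        // Quoted pattern: wrapped in \Q...\E, so it matches the literal "$5".
        boolean quoted = Pattern.compile(Pattern.quote(userInput)).matcher(text).find();

        System.out.println(raw);    // false
        System.out.println(quoted); // true
    }
}
```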
<p>^(Negation) symbol is used to match something that is not in the character group.</p> <p>This is the link to <a href="https://learn.microsoft.com/en-us/dotnet/standard/base-types/regular-expression-language-quick-reference" rel="nofollow noreferrer">Regular Expressions</a></p> <p>Here is the image info about negation:</p> <p><img src="https://i.stack.imgur.com/m5cXU.png" alt="Info about negation"></p>
<p>I am using a rich text editor on a web page. .NET has a feature that prevents one from posting HTML tags, so I added a JavaScript snippet to change the angle brackets to an alias pair of characters before the post. The alias is replaced on the server with the necessary angle bracket and then stored in the database. XSS aside, what are common ways of fixing this problem? (i.e. Is there a better way?)</p> <p>If you have comments on XSS (cross-site scripting), I'm sure that will help someone.</p>
<p>There's actually a way to turn that "feature" off. This will allow the user to post whichever characters they want, and there will be no need to convert characters to an alias using Javascript. See this article for <a href="http://mdid.org/mdidwiki/index.php?title=Disabling_Request_Validation" rel="nofollow noreferrer">disabling request validation</a>. It means that you'll have to do your own validation, but from the sounds of your post, it seems that is what you are looking to do anyway. You can also disable it per page by following <a href="http://www.asp.net/learn/whitepapers/request-validation/" rel="nofollow noreferrer">the instructions here</a>.</p>
<p>My first comment would be to avoid using JavaScript to change the angle brackets. Bypassing this is as simple as disabling JavaScript in the browser. Almost all server-side languages have some utility method that converts some HTML characters into their entity counterparts. For instance, PHP uses htmlentities(), and I am sure .NET has an equivalent utility method. In the least, you can do a regex replace for angle brackets, parenthesis and double quotes, and that will get you a long way toward a secure solution.</p>
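The entity-encoding idea mentioned above can be sketched in a few lines. This is a hypothetical, language-agnostic illustration only (the class name `EscapeDemo` and helper `escapeHtml` are made up); in production you should use your framework's own encoder (e.g. .NET's `HttpUtility.HtmlEncode`) rather than rolling your own:

```java
public class EscapeDemo {
    // Minimal angle-bracket/entity escaping, analogous in spirit to PHP's
    // htmlentities() for the characters that matter most for tag injection.
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeHtml("<script>alert(\"hi\")</script>"));
        // prints: &lt;script&gt;alert(&quot;hi&quot;)&lt;/script&gt;
    }
}
```

Because each character is handled independently, already-escaped text passed through this sketch would be double-escaped, which is another reason to prefer a vetted library in real code.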
<p>I'm starting to develop with Flex on my Mac, but I can't find good tools to ease the development (apart from Flex Builder).</p> <p>What is your favourite choice for Flex development on Mac?</p>
<p>TextMate + the <a href="http://blog.simongregory.com/10/textmate-actionscript-3-and-flex-bundles/" rel="nofollow noreferrer">Flex and ActionScript 3 bundles</a> is a great combo. Throw in <a href="http://ciaranwal.sh/2008/08/05/textmate-plug-in-projectplus" rel="nofollow noreferrer">ProjectPlus</a> and you have an almost full featured development environment. What's missing is visual design tools (which I'm sceptical of anyway), debugger (the command line version isn't very easy to work with) and a profiler.</p> <p>I've long used TextMate and the additions mentioned above for all my Flex development, but lately the lack of debugger and profiler has made me use FlexBuilder too, just to get those tools.</p>
<p>TextMate is great, but if you're looking for something free, you can hack AS3 onto Xcode (I've used it, and it is fine, but some of the highlighting is off, and auto-completion is weak).</p> <p>As for a debugging environment, I would recommend XTrace (<a href="http://mabblog.com/xtrace.html" rel="nofollow noreferrer">http://mabblog.com/xtrace.html</a>). The library that comes with it is AS2, but you can easily port it to AS3 (as I did).</p>
<p>The default Python install on OS X 10.5 is 2.5.1 with a FAT 32 bit (Intel and PPC) client. I want to setup apache and mysql to run django. In the past, I have run Apache and MySQL to match this install in 32 bit mode (even stripping out the 64 bit stuff from Apache to make it work).</p> <p>I want to upgrade Python to 64 bit. I am completely comfortable with compiling it from source with one caveat. How do I match the way that the default install is laid out? Especially, with regards to site-packages being in <code>/Library/Python/2.5/</code> and not the one in buried at the top of the framework once I compile it. </p>
<p>Personally, I wouldn't worry about it until you see a problem. Messing with the default python install on a *Nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what python has done for the nix world until you have a problem with it.</p> <p>You can also add a second python installation, but that also causes more problems than it's worth IMO.</p> <p>So I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python?</p>
<p>Essentially, yes. I was not sure you could do it like that (current version does not do it like that). When using the python install script, however, there is no option (that I can find) to specify where to put directories and files (eg --prefix). I was hoping to match the current layout of python related files so as to avoid 'polluting' my machine with redundant files.</p>
<p>The <code>Open</code> button on the open file dialog used in certain windows applications includes a dropdown arrow with a list of additional options &mdash; namely <code>Open With..</code>. </p> <p><a href="https://i.stack.imgur.com/GLM3T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GLM3T.png" alt="Open File Dialog"></a></p> <p>I haven't seen this in every Windows application, so you may have to try a few to get it, but SQL Server Management Studio and Visual Studio 2017 will both show the button that way if you go to the menu and choose <em><code>File</code>-><code>Open</code>-><code>File...</code></em></p> <p>I want to use a button like this with a built-in list in one of my applications, but I can't find the control they're using anywhere in Visual Studio. I should clarify that I'm looking for that specific button, not the entire dialog. Any thoughts?</p>
<p>I used the draggable search in Spy++ (installed with VS) to look at the split open button on the file-open dialog of VS.</p> <p>This revealed that it's an ordinary windows button with a style which includes BS_DEFSPLITBUTTON. That's a magic keyword which gets you to some interesting places, including</p> <p><a href="http://www.codeplex.com/windowsformsaero/SourceControl/FileView.aspx?itemId=212902&amp;changeSetId=9930" rel="noreferrer">http://www.codeplex.com/windowsformsaero/SourceControl/FileView.aspx?itemId=212902&amp;changeSetId=9930</a></p> <p>and here</p> <p><a href="http://msdn.microsoft.com/en-us/library/bb775949.aspx#using_splits" rel="noreferrer">http://msdn.microsoft.com/en-us/library/bb775949.aspx#using_splits</a></p> <p>Hope this helps you. </p> <p>EDIT:</p> <p>I've actually just tried that code from CodePlex and it does create a split button - but you do have to make sure you've set the button's FlatStyle to 'System' rather than 'Standard' which is the default. I've not bothered to hook-up the event handling stuff for the drop-down, but that's covered in the MSDN link, I think.</p> <p>Of course, this is Vista-only (but doesn't need Aero enabled, despite the name on codeplex) - if you need earlier OS support, you'll be back to drawing it yourself.</p>
<p>Since I found the control in Windows itself, I was hoping to find it built-in somewhere already so I didn't have to add anything to my code-base to use it. But the split button at <a href="http://blogs.msdn.com/jfoscoding/articles/491523.aspx" rel="nofollow noreferrer">this link</a> (found via the msdn suggestion) looks pretty promising.</p> <p>I'll try it later myself, but I don't know how well it will handle visual styles.</p>
<p>I have a 12 V/30 A power supply. Will a RAMPS 1.4 shield burn if I use that power supply with it?</p>
<p>Voltage and current behave differently and it is important to remember two essential points:</p> <ul> <li>PSU voltage specifications <em><strong>should never be exceeded</strong></em>, else the circuit will (most likely) be damaged, and;</li> <li>PSU current specifications <em><strong>can be exceeded</strong></em>, without any risk to the circuit<sup>1</sup>.</li> </ul> <p>So, if your circuit requires 12 V and 3 A, then the PSU must supply a maximum of 12 V, but its current rating can be 5 A, 10 A or whatever.</p> <p>It may help to think of it this way:</p> <ul> <li>Voltage will force its way into a circuit, and if you exceed the specified requirement of that circuit, then the circuit will be damaged, for sure;</li> <li>Current passively sits there and is only taken by the circuit as it is required - if your power supply can provide 1000 A, but the circuit only needs 11 A, then the circuit will only take 11 A. The remaining 989 A will be ignored.</li> </ul> <p><strong>Note</strong>: There is one caveat, where the mismatching is reversed: If a PSU can only supply 5 A, but the circuit requires 10 A, then the PSU <em>may</em> be damaged as the circuit attempts to draw more current than the PSU can provide.</p> <hr /> <h3>The water analogy</h3> <p>You can think of electricity as similar to water in a water tank, and a tap as being the outlet/socket/circuit:</p> <ul> <li>The higher the water tank is above the tap, the higher the water pressure - <strong>this water pressure is analogous to voltage</strong>. If the water pressure is excessively high, then the tap could fail, in the same way that a circuit will blow;</li> <li>The larger the tank, the more water there is that can be provided - <strong>this capacity of the tank is analogous to the current</strong>. You don't have to use, or drink, all of the water in the tank, but (often) it's nice to have an excess available, even though it might be wasteful or not particularly cost effective.</li> </ul> <hr /> <h3>Footnote</h3> <p><sup>1</sup> The only major downside to having an overrated PSU with respect to current is that, should the circuit that the PSU is supplying have an issue, such as a short circuit, then a lot more current will be drawn than if the PSU weren't so overrated. This could be dangerous if, for example, you are touching part of that circuit. This is why the current rating of a PSU should closely match the current requirement of the circuit which the PSU is supplying.</p> <p>Other considerations for using an overrated PSU could (arguably) include a longer life or increased reliability. This would be due to less stress being placed upon the PSU if the current rating of the PSU is, for example, 25-50 % greater than the current requirement of the circuit.</p>
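The rule above (the load, not the PSU, determines how much current flows) can be sketched in a few lines of Python. The figures are illustrative only, not taken from any specific printer board:

```python
def current_drawn(load_power_w, supply_voltage_v):
    """Current taken by the load, via I = P / V."""
    return load_power_w / supply_voltage_v

def psu_is_sufficient(psu_max_current_a, load_power_w, supply_voltage_v):
    """A PSU is safe to use when it can supply at least the load's current."""
    return psu_max_current_a >= current_drawn(load_power_w, supply_voltage_v)

# A 132 W load on a 12 V rail draws 11 A...
print(current_drawn(132, 12))          # 11.0
# ...so both a 30 A and a 1000 A PSU are fine; a 5 A PSU is not.
print(psu_is_sufficient(30, 132, 12))  # True
print(psu_is_sufficient(5, 132, 12))   # False
```

The excess current capacity of the 30 A supply is simply never used.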
<p>No, it won't burn if you use a power supply that puts out 12 V and can supply up to 30 A.</p>
<p>What's the easiest way to convert a percentage to a color ranging from Green (100%) to Red (0%), with Yellow for 50%?</p> <p>I'm using plain 32bit RGB - so each component is an integer between 0 and 255. I'm doing this in C#, but I guess for a problem like this the language doesn't really matter that much.</p> <p>Based on Marius and Andy's answers I'm using the following solution:</p> <pre><code>double red = (percent &lt; 50) ? 255 : 256 - (percent - 50) * 5.12; double green = (percent &gt; 50) ? 255 : percent * 5.12; var color = Color.FromArgb(255, (byte)red, (byte)green, 0); </code></pre> <p>Works perfectly. The only adjustment I had to make to Marius' solution was to use 256, as (255 - (percent - 50) * 5.12) yields -1 at 100%, which for some reason results in yellow in Silverlight ((-1, 255, 0) -> yellow)...</p>
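For reference, the ramp above transcribed into a short Python sketch (Python used purely for illustration; the constant 5.12 is just 256/50, mapping each half of the 0-100 range onto 0-255):

```python
def percent_to_rgb(percent):
    # Mirrors the C# ternaries: red ramps down above 50%, green ramps up below.
    red = 255 if percent < 50 else 256 - (percent - 50) * 5.12
    green = 255 if percent > 50 else percent * 5.12
    # Clamp before truncating, standing in for the (byte) casts.
    return (int(min(red, 255)), int(min(green, 255)), 0)

print(percent_to_rgb(0))    # (255, 0, 0)   red
print(percent_to_rgb(50))   # (255, 255, 0) yellow
print(percent_to_rgb(100))  # (0, 255, 0)   green
```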
<p>I made this function in JavaScript. It returns the color as a CSS string. It takes the percentage as a variable, with a range from 0 to 100. The algorithm could be made in any language:</p> <pre><code>function setColor(p){ var red = p&lt;50 ? 255 : Math.round(256 - (p-50)*5.12); var green = p&gt;50 ? 255 : Math.round((p)*5.12); return "rgb(" + red + "," + green + ",0)"; } </code></pre>
<p>Because it's R-G-B, the colors go from integer values of -1 (white) to -16777216 (black), with red, green and yellow somewhere in between. Yellow is actually -256, while red is -65536 and green is -16744448. So yellow actually isn't between red and green in the RGB notation. I know that in terms of wavelengths, green is on one side and red is on the other side of the spectrum, but I've never seen this type of notation used in computers, as the spectrum doesn't represent all visible colours.</p>
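The "negative colors" mentioned above are nothing mysterious: they are 0xAARRGGBB values read back as signed 32-bit integers. A small Python sketch (illustrative only) reproduces each figure from the answer:

```python
def argb_to_signed(a, r, g, b):
    # Pack ARGB into one 32-bit word, then reinterpret as a signed int,
    # the way languages with 32-bit ints (Java, VB, C#) would show it.
    value = (a << 24) | (r << 16) | (g << 8) | b
    return value - 0x100000000 if value >= 0x80000000 else value

print(argb_to_signed(255, 255, 255, 255))  # -1         white
print(argb_to_signed(255, 255, 255, 0))    # -256       yellow
print(argb_to_signed(255, 255, 0, 0))      # -65536     red
print(argb_to_signed(255, 0, 128, 0))      # -16744448  green
```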
<p>I have a set of calculation methods sitting in a .Net DLL. I would like to make those methods available to Excel (2003+) users so they can use them in their spreadsheets.</p> <p>For example, my .net method:</p> <pre><code>public double CalculateSomethingReallyComplex(double a, double b) {...} </code></pre> <p>I would like enable them to call this method just by typing a formula in a random cell:</p> <pre><code>=CalculateSomethingReallyComplex(A1, B1) </code></pre> <p>What would be the best way to accomplish this?</p>
<p>There are two methods - you can used Visual Studio Tools for Office (VSTO):</p> <p><a href="http://blogs.msdn.com/pstubbs/archive/2004/12/31/344964.aspx" rel="nofollow noreferrer">http://blogs.msdn.com/pstubbs/archive/2004/12/31/344964.aspx</a></p> <p>or you can use COM:</p> <p><a href="http://blogs.msdn.com/eric_carter/archive/2004/12/01/273127.aspx" rel="nofollow noreferrer">http://blogs.msdn.com/eric_carter/archive/2004/12/01/273127.aspx</a></p> <p>I'm not sure if the VSTO method would work in older versions of Excel, but the COM method should work fine.</p>
<p>I keep looking around for 3D scanning software but can't find anything I like. Any suggestions? I'm more looking for a free app so I can use my android phone, but am open to desktop based solutions too.</p>
<h2>1. Meshroom</h2> <p><a href="https://alicevision.org/" rel="nofollow noreferrer">https://alicevision.org/</a></p> <p>Meshroom can take any number of overlapping photographs and turn them into a 3D model. Those pictures can be taken from any phone, or even a drone. For best results, you will need to know the make/model and sensor size of the camera.</p> <p>Works on Windows/Linux and requires a CUDA-compatible video card.</p> <p><strong>Resources</strong></p> <ul> <li><a href="https://alicevision.org/#meshroom" rel="nofollow noreferrer">https://alicevision.org/#meshroom</a></li> <li><a href="https://blog.prusaprinters.org/photogrammetry-2-3d-scanning-simpler-better-than-ever_29393/" rel="nofollow noreferrer">Prusa Blog on Meshroom</a></li> </ul> <h2>2. ADOP</h2> <p><a href="https://github.com/darglein/ADOP" rel="nofollow noreferrer">https://github.com/darglein/ADOP</a></p> <p>ADOP is a neural network similar to Meshroom that can combine photos into a 3D model; however, it has a very impressive ability to generate highly accurate data from just a few pictures. It also provides a virtual reality viewer.</p> <p>Works on Linux/Windows. Does require some familiarity with compiling code.</p> <p><strong>Resources</strong></p> <ul> <li>ADOP featured on <a href="https://www.youtube.com/watch?v=dZ_5TPWGPQI" rel="nofollow noreferrer">2-minute papers YouTube channel </a></li> <li><a href="https://github.com/darglein/ADOP" rel="nofollow noreferrer">https://github.com/darglein/ADOP</a></li> </ul>
<p>I personally use <a href="https://scann3d.it/" rel="nofollow noreferrer">SCANN3D</a>. It is available on Android but I don't know if it is for desktop too.</p>
<p>I have the following html code: </p> <pre><code>&lt;h3 id="headerid"&gt;&lt;span onclick="expandCollapse('headerid')"&gt;&amp;uArr;&lt;/span&gt;Header title&lt;/h3&gt; </code></pre> <p>I would like to toggle between up arrow and down arrow each time the user clicks the span tag. </p> <pre><code>function expandCollapse(id) { var arrow = $("#"+id+" span").html(); // I have tried with .text() too if(arrow == "&amp;dArr;") { $("#"+id+" span").html("&amp;uArr;"); } else { $("#"+id+" span").html("&amp;dArr;"); } } </code></pre> <p>My function is going always the else path. If I make a javacript:alert of <code>arrow</code> variable I am getting the html entity represented as an arrow. How can I tell jQuery to interpret the <code>arrow</code> variable as a string and not as html. </p>
<p>When the HTML is parsed, what JQuery sees in the DOM is a <code>UPWARDS DOUBLE ARROW</code> ("⇑"), not the entity reference. Thus, in your Javascript code you should test for <code>"⇑"</code> or <code>"\u21d1"</code>. Also, you need to change what you're switching to:</p> <pre><code>function expandCollapse(id) { var arrow = $("#"+id+" span").html(); if(arrow == "\u21d1") { $("#"+id+" span").html("\u21d3"); } else { $("#"+id+" span").html("\u21d1"); } } </code></pre>
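The underlying point, that the entity text and the decoded character are different strings, can be demonstrated outside the browser too. A quick Python sketch using the standard library's `html` module (Python chosen just for illustration):

```python
import html

# The browser decodes the entity before jQuery ever sees it:
assert html.unescape("&uArr;") == "\u21d1"  # UPWARDS DOUBLE ARROW
assert html.unescape("&dArr;") == "\u21d3"  # DOWNWARDS DOUBLE ARROW

# So the string read back from the DOM never equals the raw entity text:
print(html.unescape("&dArr;") == "&dArr;")  # False
```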
<p>Maybe you're not getting an exact match because the browser is lower-casing the entity or something. Try using a caret (^) and lower-case "v" just for testing.</p> <p>Edited - My first theory was plain wrong.</p>
<p>Even though I have a robust and fast computer (Pentium Dual Core 2.0 with 2Gb RAM), I'm always searching for lightweight software to have on it, so it runs fast even when many apps are up and running simultaneously.</p> <p>On the last few weeks I've been migrating gradually to Linux and want to install a free lightweight yet useful IDE to program on C++ and PHP. Syntax highlighting and code completion tips are must-haves.</p>
<p><a href="http://projects.gnome.org/gedit/" rel="nofollow noreferrer">gedit</a></p> <ul> <li>Syntax highlighting</li> <li>Fast, lightweight</li> <li>Tabs</li> <li>GUI</li> </ul>
<p>Any of the popular editors can be turned into an IDE.</p> <p>I use Vi on the console and have used various GUI editors over the years. This doesn't just go for Linux; I use Crimson Editor on Windows as a C/Python/Z80asm IDE.</p>
<p>I'm using <code>IIS 5.1</code> in Windows XP on my development computer. I'm going to set up HTTPS on my company's web server, but I want to try doing it locally before doing it on a production system.</p> <p>But when I go into the Directory Security tab of my web site's configuration section, the "Secure communication" groupbox is disabled. Is there something I need to do to make this groupbox enabled?</p>
<p>I'm having a similar issue. You might be able to do something like this:</p> <pre><code>SET arithabort arith_overflow off SET @the_date = CAST(@date_string AS DATETIME) IF @the_date is NULL set @the_date = getdate() SET arithabort arith_overflow on </code></pre> <p>However, this doesn't work well in a select. It will work well in a cursor (boo) or in logic before / after a SQL batch.</p>
<p>Found <a href="http://blog.sqlauthority.com/2007/07/12/sql-server-validate-field-for-date-datatype-using-function-isdate/" rel="nofollow noreferrer">this in the second result</a> in Google when searching for "validate date string sql".</p> <pre><code>----Invalid date SELECT ISDATE('30/2/2007') RETURNS : 0 (Zero) ----Valid date SELECT ISDATE('12/12/20007') RETURNS : 1 (ONE) ----Invalid DataType SELECT ISDATE('SQL') RETURNS : 0 (Zero) </code></pre>
<p>I've just tried to make some small labels with embossed lettering. Base is 1.5 mm, with letters and a perimeter protruding by another 1 mm. After printing, I realised that Cura is not giving me a top layer, all I get is an inner and outer wall for the perimeter and the lettering:</p> <p><a href="https://i.stack.imgur.com/Hri17.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hri17.png" alt="Screenshot showing missing infill in top layer"></a></p> <p>The base is OK, with 4 layers, but there is no infill at the top. Presumably there is a setting that I've missed (I can appreciate that maybe this is a detail to the slicing which is model dependant). Layers are 0.2 mm, top and bottom set to 4 layers.</p> <p>I had walls set to 1.6 mm, (4 layers) and the wall features here are 1 mm (i.e. 2.5x the nozzle). The features are solid so far as I know, not drawn as a hollow.</p> <p>I found a relevant bug for Cura: <a href="https://github.com/Ultimaker/Cura/issues/1303" rel="nofollow noreferrer">https://github.com/Ultimaker/Cura/issues/1303</a> (and I also understand a workaround now, just need to write up an answer).</p>
<p>What is happening here is that Cura is struggling with the quantised extrusion width. This behaviour can be improved in the slicer; they have an issue tracked here <a href="https://github.com/Ultimaker/Cura/issues/1303" rel="nofollow noreferrer">https://github.com/Ultimaker/Cura/issues/1303</a> and 2.4 beta has improved it.</p> <p>There are some pictures in this <a href="https://3dprinting.stackexchange.com/a/3581/4927">answer</a> which show how Cura 2.3 slices a rectangle of increasing width. I got nothing at the nozzle width, then a pair of overlapping lines up to twice the width (which I assume will be over-extruded due to the overlap). For anything wider than simply two sides, the result depends on the number of walls requested.</p> <ul> <li>One wall results in 100% infill for small regions (regardless of the infill setting). This is good, but maybe 1 wall is not appropriate for the whole model.</li> <li>Multiple walls seem to prevent the infill until there is space for pairs of walls. So a width of 3x the nozzle has a gap. Small tweaks to the nozzle size can maybe push this quantisation point about, if 1 wall is not a good workaround.</li> </ul> <p>For rectangular geometry, it's worth trying to quantize small parts in 0.8 mm increments. For curves, setting a 1-wall shell might be better.</p>
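The quantisation effect described above can be sketched with some back-of-the-envelope arithmetic. This is an illustrative model, not Cura's actual algorithm: two perimeter lines always go in first, and the middle only gets filled if whole extra lines fit.

```python
def gap_after_walls(feature_width_mm, line_width_mm=0.4):
    # Work in whole microns to dodge float rounding.
    feature = round(feature_width_mm * 1000)
    line = round(line_width_mm * 1000)
    remaining = feature - 2 * line          # after the two perimeter lines
    filled = (remaining // line) * line     # only whole extra lines fit
    return (remaining - filled) / 1000      # leftover unfillable slot, in mm

print(gap_after_walls(0.8))  # 0.0  - exactly two lines, fully solid
print(gap_after_walls(1.2))  # 0.0  - a third line fits in the middle
print(gap_after_walls(1.0))  # 0.2  - a 0.2 mm slot nothing can fill
```

This is why quantising feature widths to multiples of the line width (0.8 mm steps for a 0.4 mm nozzle with two walls) avoids the gaps.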
<p>Try Cura 2.4.0-beta2: there is option named "Fill Gaps Between Walls" under "Shell" settings, set it to "everywhere". In stable Cura 2.3.1 this option is missing.</p>
<p>I saw this cool <a href="http://www.thingiverse.com/thing:1231999" rel="nofollow noreferrer">Nuka-Cola bottle opener</a>. I made it:</p> <p><a href="https://i.stack.imgur.com/ysVvW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ysVvW.jpg" alt="3D print of Nuka-Cola bottle opener"></a> </p> <p>As you can see, nasty gaps have appeared in the text. What could've caused them? I am sure I set the filament diameter correctly. So why did the Slic3er not generate enough paths here to fill the letters properly?</p> <p>I used Slic3er and Repetier Host. My printer is Prusa i3 MK2 1.75mm. The material I used was PLA.</p>
<p>I had a similar issue as described in <a href="https://3dprinting.stackexchange.com/questions/3461/why-does-my-domed-print-have-a-thin-shell">this question</a>. Curiously, yours is almost opposite as my raised lettering was OK, but the surrounding areas were poorly filled.</p> <p>I came to the conclusion that Slic3r was simply not able to properly handle the geometry in my part. I tried with Cura and had great results!</p>
<p>The proper term here is under-extrusion.</p> <p>There are a lot of reasons this can happen. For my Ultimaker, over time it developed very bad under-extrusion due to the build-up of carbon. I had to do atomic pulls / increase the extrusion multiplier.</p> <p>I would:</p> <ol> <li><p>Verify that you move exactly the amount of plastic you think you are. Mark 200 mm of material, extrude 100 mm, and fix the firmware steps per mm for the extruder if incorrect.</p></li> <li><p>Triple-check the filament measurements. Try all along the length. See if the plastic has bad variance.</p></li> <li>Verify whether the slicer itself is generating the gaps. Letters are hard for slicing programs: if it cannot print a full line, it often will print a gap. You can adjust this by allowing greater line overlaps. This is probably what is happening.</li> </ol>
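Step 1 above reduces to one line of arithmetic: if you request an extrusion and measure how much filament actually went in, the corrected steps/mm is the old value scaled by requested over actual. A minimal sketch (the 93.0 figure is just an example, not a recommendation for any particular printer):

```python
def corrected_esteps(old_steps_per_mm, requested_mm, actually_extruded_mm):
    # Under-extrusion (actual < requested) raises steps/mm; over-extrusion lowers it.
    return old_steps_per_mm * requested_mm / actually_extruded_mm

# Asked for 100 mm but only 96 mm was pulled in:
print(round(corrected_esteps(93.0, 100, 96), 2))  # 96.88
```

The new value would then be written back with something like Marlin's `M92 E<value>` and saved.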
<p>I'm using an Ender 5 with standard PLA and Creality slicer 4.8.2.</p> <p>How can I deliberately maximise stringing, and if possible get it to be as consistent as possible.</p> <p>My aim is to have &quot;thousands of hair like threads strung between two rocky pillars&quot;.</p> <p>If possible I'd like to do this in the slicer with PLA, rather than using cotton or some other material after the model has printed.</p>
<ol> <li>Eliminate retraction in the slicer.</li> <li>Print at a higher hot end temperature; something like +10 °C above the recommended temperature.</li> <li>Slow down the speed at which the hot end moves when not printing.</li> <li>Maximize hot end movement without printing where you want strings.</li> </ol>
<p>Slicers will perform a retraction when moving from one solid to another, the value of which is part of the settings. I've not researched if a specific slicer will allow a negative retraction, but if it's possible, it's likely to create adjustable stringing.</p> <p>If negative retraction is not possible, one can identify the retraction segments in g-code of the print and find/replace those values with extrude rather than retract. My slicer, Simplify3D does not support reversed retraction, but the code is clear when examined with a text editor:</p> <pre><code>G1 E-4.0000 F2400 G1 Z0.300 F1200 G1 X118.760 Y117.415 F12000 G1 Z0.250 F1200 G1 E0.0000 F2400 G92 E0.0000 G1 X122.415 Y113.760 E0.0972 F900 build g-code removed for clarity G1 X118.760 Y117.415 E1.5303 G92 E0.0000 G1 E-4.0000 F2400 G1 Z0.300 F1200 G1 X158.280 Y117.216 F12000 G1 Z0.250 F1200 G1 E0.0000 F2400 G92 E0.0000 G1 X162.216 Y113.280 E0.1047 F900 build g-code removed for clarity G1 X158.280 Y117.216 E1.5902 G92 E0.0000 G1 E-4.0000 F2400 build g-code removed for clarity G1 E0.0000 F2400 G92 E0.0000 G1 X162.415 Y113.760 E0.0972 F900 build g-code removed for clarity G1 X158.760 Y117.415 E1.5303 G92 E0.0000 G1 E-4.0000 F2400 </code></pre> <p>All entries beginning with G1 E-4.0000 represent the 4 mm retraction called by the slicer. One could search for just that code and replace it with a positive value. Some experimentation is indicated to accomplish the desired result. The F value is feed rate and presents another value to adjust.</p> <p>Pursuant to Oscar's comment, I overlooked the lack of movement as a factor. He is correct, such a modification is likely to create a blob. As a possible compensation, a better modification would be to create a custom extrusion code with a reduced flow rate, using the existing code as a reference.</p> <p>This starts to complicate the process substantially, requiring far more calculations and edits. 
I suggest that it could still be accomplished, but it would be more easily done with post-processing of the code via Python or similar, which I am not qualified to address.</p>
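As a starting point, the post-processing hinted at above can be sketched in a few lines of Python. The retraction string matched here is the one shown in the Simplify3D listing; the replacement "ooze" move is an illustrative, untested value to tune by experiment:

```python
RETRACT = "G1 E-4.0000 F2400"   # the fixed retraction line from the listing
OOZE = "G1 E0.5000 F300"        # hypothetical forward extrusion to maximise strings

def de_retract(gcode: str) -> str:
    """Replace every retraction line with a small forward extrusion."""
    return "\n".join(
        OOZE if line.strip() == RETRACT else line
        for line in gcode.splitlines()
    )

sample = "G1 X118.760 Y117.415 F12000\nG1 E-4.0000 F2400\nG1 Z0.300 F1200"
print(de_retract(sample))
```

A real script would also want to handle the matching un-retract (`G1 E0.0000 F2400`) and `G92 E0.0000` resets, since extruder positions are relative to those.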
<p>Is it possible to have one application binary build for multiple mobile devices (on <a href="http://brew.qualcomm.com/brew/" rel="nofollow noreferrer">BREW</a> platform), rather than making a separate build for each device using build script with conditional compilation.</p> <p>In particular is is possible to use single BREW application build for multiple screen resolutions?</p> <p>Note that the goal is to have a single <em>binary</em> build. If it would be just to have a single codebase, than conditional compilation and smart build script would do the trick.</p>
<p>Yes, it is possible, we were able to do this at my previous place of work. What's required is tricky though:</p> <ol> <li>Compile for the lowest common denominator BREW version. Version 1.1 is the base for all current handsets out there.</li> <li>Your code must be able to handle multiple resolutions. The methods for detecting screen width and height are accurate for all handsets in my experience.</li> <li>All your resources must load on all devices. This would require making your own custom image loader to work around certain device issues. For sound, I know simple MIDI type 0 works on all but QCP should also work (no experience of it myself).</li> <li>Use bitmap fonts. There are too many device issues with fonts to make it worthwhile using the system fonts.</li> <li>Design your code structure as a finite state machine. I cannot emphasise this enough - do this and many, many problems never materialise.</li> <li>Have workarounds for every single device issue. This is the hard part! It's possible but this rabbit hole is incredibly deep...</li> </ol> <p>In the end, the more complex and advanced the application, the less likely you can go this route. Some device properties simply cannot be detected reliably at runtime (such as platform ID) and so multiple builds are then required.</p>
<p>Another idea might be to divide the handsets into 2 to 4 categories based on, say, screen dimensions and create builds for them. It is a much faster route too, as you will be able to support all the handsets you want with much less complexity.</p> <p>Another thing to check is the BREW versions on the handsets you want to launch on. If, say, BREW 1.1 is on one handset and that is owned by a small percentage of your target market, it doesn't make sense to work to support it.</p>
<p>Are there any open-source libraries that all programmers should know about? I'm thinking something general, a sort of extension to the standard <code>java.util</code> that contains basic functions that are useful for all kinds of application.</p>
<ul> <li><p>Apache Commons</p></li> <li><p>Log4j</p></li> <li><p>Google collections</p></li> </ul>
<p><a href="http://jexcelapi.sourceforge.net/" rel="nofollow noreferrer">JXL</a> for Excel workbook creation/editing. I work in a bank, and the multipurpose reporting tool for daily work is Excel. Whatever application we write must import/export from/to Excel. Its only failing is that it has memory problems with large workbooks, and formatting is a little obscure.</p>
<p>I've had a tough time setting up my replication server. Is there any program (OS X, Windows, Linux, or PHP no problem) that lets me monitor and resolve replication issues? (btw, for those following, I've been on this issue <a href="https://stackoverflow.com/questions/8166/mysql-replication-if-i-dont-specify-any-databases-will-logbin-log-everything">here</a>, <a href="https://stackoverflow.com/questions/3798/full-complete-mysql-db-replication-ideas-what-do-people-do">here</a>, <a href="https://stackoverflow.com/questions/8365/mysql-administrator-backups-compatibility-mode-what-exactly-is-this-doing">here</a> and <a href="https://stackoverflow.com/questions/30660/mysql-binary-log-replication-can-it-be-set-to-ignore-errors">here</a>)</p> <p>My production database is several megs in size and growing. Every time the database replication stops and the databases inevitably begin to slide out of sync, I cringe. My last resync from dump took almost 4 hours roundtrip!</p> <p>As always, even after sync, I run into this kind of show-stopping error:</p> <pre><code>Error 'Duplicate entry '252440' for key 1' on query. </code></pre> <p>I would love it if there was some way to closely monitor what's going on and perhaps let the software deal with it. I'm even all ears for service companies which may help me monitor my data better. Or an alternate way to mirror altogether.</p> <p><strong>Edit</strong>: going through my previous questions I found <a href="https://stackoverflow.com/questions/30660/mysql-binary-log-replication-can-it-be-set-to-ignore-errors#30889">this</a> which helps tremendously. I'm still all ears on the monitoring solution.</p>
<p>To monitor the servers we use the free <a href="http://www.maatkit.org/tools.html" rel="nofollow noreferrer">tools from Maatkit</a> ... simple, yet efficient.</p> <p>The binary replication is available in 5.1, so I guess you've got some balls. We still use 5.0 and it works OK, but of course we had our share of issues with it.</p> <p>We use a Master-Master replication with a MySql Proxy as a load-balancer in front, and to prevent it from having errors:</p> <ul> <li>we removed all unique indexes</li> <li>for the few cases where we really needed unique constraints we made sure we used REPLACE instead of INSERT (MySql Proxy can be used to guard for proper usage ... it can even rewrite your queries)</li> <li>scheduled scripts doing intensive reports are always accessing the same server (not the load-balancer) ... so that dangerous operations are replicated safely</li> </ul> <p>Yeah, I know it sounds simple and stupid, but it solved 95% of all the problems we had.</p>
<p>We use MySQL replication to replicate data to close to 30 servers. We monitor them with Nagios. You can probably check the replication status and use an event handler to restart it with 'SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; Start Slave;'. That will fix the error, but you'll lose the insert that caused the error.</p> <p>About the error, do you use memory tables on your slaves? I ask this because the only time we ever got a lot of these errors, they were caused by a bug in the latest releases of MySQL. 'Delete From Table Where Field = Value' would delete only one row in memory tables even though there were multiple rows.</p> <p><a href="http://bugs.mysql.com/bug.php?id=30590" rel="nofollow noreferrer">MySQL bug description</a></p>
<p>I want to know what exactly is the sequence of calls that occurs when a getter/setter created through Class::MethodMaker is called?</p> <p>How much costlier are getter/setters defined by MethodMaker than the native ones (overwritten in the module)?</p>
<p>I don't have a simple answer for your question regarding Class::MethodMaker performance. As a previous answer mentioned, you can use the debugger to find out what's going on under the hood. However, I know that Class::MethodMaker generates <em>huge</em> amounts of code at install time. This would indicate three separate things to me:</p> <ol> <li>Regarding run-time, it's <em>probably</em> on the faster side of the whole slew of method generators. Why generate loads of code at install time otherwise?</li> <li>It installs O(Megabytes) of code on your disk!</li> <li>It may potentially be slow at compile time, depending on what parts of the generated code are loaded for simple use cases.</li> </ol> <p>You really need to spend a few minutes to think about what you really need. If you want simple accessor methods auto-generated but write anything more complicated by hand, maybe look at Class::Accessor::Fast. Or, if you want the fastest possible accessor-methods, investigate Class::XSAccessor, whose extra-simple methods run as C/XS code and are approximately twice as fast as the fastest Perl accessor. (Note: I wrote the latter module, so take this with a grain of salt.)</p> <p>One further comment: if you're ever going to use the PAR/PAR::Packer toolkit for packaging your application, note that the large amount of code of Class::MethodMaker results in a significantly larger executable and a slower initial start-up time. Additionally, there's a known incompatibility between C::MethodMaker and PAR. But that may be considered a PAR bug.</p>
<p>The real question is: does it matter?</p> <p>It's yet another accessors generating module. These modules all have a speed/functionality trade-off. Just pick one that offers everything you need. It's not like accessors are likely to become a bottleneck in your application.</p>
<p>I'd like to learn how to program in Assembler. I've done a bit of assembly before (during my A-Level Computing course) but that was very definitely a simplified 'pseudo-assembler'. I've borrowed my Dad's old Z80 Assembler reference manual, and that seems quite interesting so if possible I'd like to have a go with Z80 assembler.</p> <p>However, I don't have a Z80 processor to hand, and would like to do it on my PC (I have windows or linux so either is good). I've found various assemblers around on the internet, but I'm not particularly interested in assembling down to a hex file, I want to just be able to assemble it to something that some kind of simulator on the PC can run. Preferably this simulator would show me the contents of all the registers, memory locations etc, and let me step through instructions. I've found a few bits of software that suggest they might do this - but they either refuse to compile, or don't seem to work properly. Has anyone got any suggestions? If there are good simulator/IDE things available for another type of assembler then I could try that instead (assuming there is a good online reference manual available).</p>
<blockquote> <p>I've found a few bits of software that suggest they might do this - but they either refuse to compile, or don't seem to work properly. Has anyone got any suggestions?</p> </blockquote> <p>Write one. You're best off picking a nice, simple instruction set (Z80 should be perfect). I remember doing this as a first-year undergraduate exercise - I think we wrote the simulator in C++ and simulated 6800 assembly, but really any language/instruction set will do.</p> <p>The idea of "learning assembly language" these days is to get the idea of how computers work at the lowest level, only a select few (compiler writers, etc.) have any real reason to actually be writing assembly code these days. Modern processors are stuffed full of features designed to be used by compilers to help optimise code for speed/concurrent execution/power consumption/etc., and trying to write assembly by hand for a modern processor would be a nightmare. </p> <p>Don't fret about getting your application production-ready unless you want to - in all likelihood the bits of software you've found so far were written by people exactly like you who wanted to figure out how assembly works and wrote their own simulator, then realised how much work would be involved in getting it "production ready" so the general public could use it.</p>
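The exercise described above (write your own simulator) really does start very small. Here is a toy sketch in Python: a few-instruction "CPU" with visible registers, stepped one instruction at a time. The mnemonics are made up for illustration; a real Z80 or 6800 simulator just grows from this shape:

```python
class TinyCPU:
    def __init__(self, program):
        self.program = program      # list of (opcode, *operands) tuples
        self.pc = 0                 # program counter
        self.regs = {"A": 0, "B": 0}

    def step(self):
        """Execute one instruction; return False when halted."""
        op, *args = self.program[self.pc]
        self.pc += 1
        if op == "LD":              # LD reg, value
            self.regs[args[0]] = args[1]
        elif op == "ADD":           # ADD dst, src  ->  dst += src (8-bit wrap)
            self.regs[args[0]] = (self.regs[args[0]] + self.regs[args[1]]) & 0xFF
        elif op == "HALT":
            return False
        return True

cpu = TinyCPU([("LD", "A", 200), ("LD", "B", 100), ("ADD", "A", "B"), ("HALT",)])
while cpu.step():
    print(cpu.pc, cpu.regs)   # the "show me the registers at each step" part
print(cpu.regs["A"])          # 44: 300 wrapped to 8 bits, as on a real Z80
```

From here the work is mostly mechanical: a real instruction set, flags, memory, and a decoder for binary opcodes instead of tuples.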
<p>Take a look at <a href="http://www.z80.info/z80emu.htm" rel="nofollow noreferrer">Thomas Scherrer Z80 Emulators</a> for a listing of potential emulators you could use.</p>
<p>I have a bit of an odd request. I am studying the vulnerability of 3D printers and would like to know if there is a way to disable the limit switches on, for example, an Ender 3 in the G-code.</p> <p>Ideally, the exploit would be used by plugging in the malicious code via SD card into the 3D printer. I have found ways to change the nozzle temp and things like that, however, nothing on the limit switches.</p> <p>If I were the manufacturer, I wouldn't implement the function, so if it's not possible it will not be a surprise. If that's the case, what would be some other options for tearing this thing up?</p>
<h2>Ethics and justification:</h2> <p>If you have physical access to the device you could just... physically damage the device. Running arbitrary G-code is just more complicated</p> <p>Most printer farms have a strict no outside G-code policy for good reason. Because direct physical access to the printer does not provide any security.</p> <p>This is also a problem for CNC Mills and other equipment. The firmware is not there to protect you. If you ask the system to put the spindle into XYZ_POS, the device's job is to deliver your 300 dollar end mill into the part as expediently and directly as you have told it. It does not hold your hand.</p> <p>As a result, it's probably best that <strong>no one</strong> run pre-compiled G-code. You should use an STL and generate your own G-code. Not only does it allow you to properly tune the print for your specific printer, but it prevents nasties from breaking your printer.</p> <p>Do not test this on a system you do not own. You will face civil and criminal liability for any damages as a result of performing this on printers that are not your own. Additionally, I would question the value of your research provided that physical access to a device automatically makes it vulnerable; let alone running arbitrary code from an untrusted source.</p> <p>To help your research: Most &quot;secure&quot; facilities prevent even touching a machine with external code. All code has to be generated on trusted and isolated machines by trusted persons. 
There is no guarantee that the &quot;trusted&quot; person can't manufacture a job that will crash their 300,000 dollar CNC mill and cause damage - but there are serious repercussions for doing so.</p> <h2>It's extremely difficult to &quot;tear up&quot; a printer by turning off the endstops.</h2> <p>You can disable physical endstops with a simple command, <a href="https://marlinfw.org/docs/gcode/M121.html" rel="nofollow noreferrer"><code>M121</code></a>.</p> <p>However, this only does so much.</p> <p>With <a href="https://marlinfw.org/docs/gcode/M121.html" rel="nofollow noreferrer"><code>M121</code></a>, the printer is only really vulnerable before it has been <a href="https://marlinfw.org/docs/gcode/G028.html" rel="nofollow noreferrer">home</a>d. Most printers will <a href="https://marlinfw.org/docs/gcode/G028.html" rel="nofollow noreferrer">auto home</a> when starting a print, or refuse to print until they are <a href="https://marlinfw.org/docs/gcode/G028.html" rel="nofollow noreferrer">homed</a>. If you issue <code>M121</code> and then try to ram an axis past an endstop, it will continue only until it has counted steps to the <a href="https://marlinfw.org/docs/gcode/M211.html" rel="nofollow noreferrer">software endstop value</a>, <code>[XYZ]_MIN_POS</code> or <code>[XYZ]_MAX_POS</code>.</p> <p>Older printers didn't have software endstops and would keep going until they were stopped or turned off. But this wouldn't <em>explicitly</em> cause damage, as physical limitations would prevent it. 
The exceptions are old designs that do not physically limit the Z position, where the nozzle can be crashed into the printer.</p> <p>Nowadays, the printer would crash into the axis limit and then stop because it would hit the software endstop (and funnily enough, be homed as a result - I have done this as an experiment on my own printers).</p> <p>This would not damage the printer - it would just be annoying for a short time until it hits the software endstop.</p> <p>You can, however, make it work like an older printer, provided it does not have stall or crash sensing (like Prusas have by default), by issuing <a href="https://marlinfw.org/docs/gcode/M211.html" rel="nofollow noreferrer"><code>M211 S0</code></a>.</p> <p>This turns off the software endstops and should allow the printer to keep trying to reach the value programmed in the G-code. It should continue to count until it gets to XYZ_POS and then finally stop.</p> <p>Will this cause damage? Maybe. If the Z-axis is high and it can physically push its way into the printbed hard enough, it can cause damage to the hotend. However, generally, most printers are designed (these days) so that the Z-axis cannot go too far into the print bed and cause (too much) damage. Modified printers can be vulnerable if they modify the bed but do not physically limit the Z-axis from going down too far. Additionally, when the printer <a href="https://marlinfw.org/docs/gcode/G028.html" rel="nofollow noreferrer">autohomes</a> on print it usually will set zero to the endstop before the payload can be run, giving a home and only allowing excursion to the axis positives. When this happens, no damage can really occur outside of overheating the steppers/drivers and possibly damaging belts.</p> <p>There are also some firmware protections to help protect the end-user (e.g. Prusa Crash Detection) - depending on the printer, these may need to be circumvented as well.</p> <p>How can you really tear up a printer? If you have physical access to it - you have total control over it. 
Simply being physically near it is enough for you to just throw it out a window; bothering to run any code at all is superfluous. But if you're trying to research how a threat actor can mess up someone's printer?</p> <p>Thomas Sanladerer made an excellent video on these vulnerabilities (specific to 3D printers) here: <div class="youtube-embed"><div> <iframe width="640px" height="395px" src="https://www.youtube.com/embed/RKdsp2vrmag?start=0"></iframe> </div></div></p> <p>In his video, he's speaking directly on the threat of running pre-formed G-code from public sources. (Something that you should not do - something that <strong>no one</strong> should do.)</p> <h2>Conclusion:</h2> <p>In security, physical access to a system by an untrusted person, or running untrusted code on a machine, is not allowed.</p> <p>For this vulnerability to be exploited, it requires a person to do things that they should not be doing - running generated G-code from an untrusted source. This includes from &quot;friends&quot; or even family.</p> <p>I won't tell you how to &quot;really tear up&quot; a 3D printer, as there are a plethora of horror stories out on the internet of printers catching fire, let alone the threat of running some random code on your printer. How to do it is publicly available. You just have to figure it out yourself.</p> <p>The lesson here is simple: <strong>Just don't run pre-compiled G-code.</strong></p>
<h2>Potential threat?</h2> <p>I can imagine that this is a potential threat for printers that are operated in an environment where there are lots of people using the printer and there is access to the SD card (e.g. schools, libraries or maker spaces). It would take just a file on the SD card being run the next time the printer boots to change settings through G-code. As you can imagine, it is quite easy to change the coordinates of the origin - this has been discussed in previous questions - but also speeds, endstop state and acceleration values, for instance.</p> <h2>Would it destroy a printer?</h2> <p>Yes, it can. Lead screw motion is stronger than belts; the latter will just skip, but lead screws may do some serious damage depending on the printer construction (I've had my unintentional fair share of that). But the most obvious problem is the time that is needed to fix the problem: re-calibrating and repairing.</p> <h2>Limited to malicious intent?</h2> <p>No. Managing an Ultimaker 3 Extended for a staff association, I can speak from experience that it doesn't take malicious code, or intent, to mess up a printer. Working from home due to COVID-19 measures (my work needs those who can work at home to stay at home, in order to allow those who operate machines and test equipment to go to work and comply with the x people per y square meter rules that the government laid out), I had been away from the UM3E, only to find it in a terrible state a year later. It appeared that there had been people with no experience in slicing using the wrong type of materials for slicing with respect to what was loaded into the machine... (we operate non-Ultimaker materials, so the material is not recognized by the RFID reader). The printer is now under lock and key...</p>
<p>I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them.</p>
<p><a href="http://mrevelle.blogspot.com/2006/10/closure-on-closures.html" rel="noreferrer">Closure on closures</a></p> <blockquote> <p>Objects are data with methods attached, closures are functions with data attached.</p> </blockquote> <pre><code>def make_counter():
    i = 0
    def counter():  # counter() is a closure
        nonlocal i
        i += 1
        return i
    return counter

c1 = make_counter()
c2 = make_counter()
print(c1(), c1(), c2(), c2())  # -&gt; 1 2 1 2
</code></pre>
<p>The best explanation I ever saw of a closure was to explain the mechanism. It went something like this:</p> <p>Imagine your program stack as a degenerate tree where each node has only one child and the single leaf node is the context of your currently executing procedure.</p> <p>Now relax the constraint that each node can have only one child.</p> <p>If you do this, you can have a construct ('yield') that can return from a procedure without discarding the local context (i.e. it doesn't pop it off the stack when you return). The next time the procedure is invoked, the invocation picks up the old stack (tree) frame and continues executing where it left off.</p>
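<p>That "return without discarding the local context" mechanism is exactly what Python exposes as a generator; a minimal sketch:</p>

```python
def counter():
    # The local variable i lives in a frame that is suspended on
    # yield instead of being popped off the stack on return.
    i = 0
    while True:
        yield i  # hand control back to the caller, keep the frame
        i += 1

c = counter()
print(next(c), next(c), next(c))  # -> 0 1 2
```

<p>Each <code>next()</code> call resumes the suspended frame where it left off, which is the "picks up the old stack frame and continues executing" step described above.</p>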
<p>In a recent SharePoint project, I implemented an authentication web part which should replace the NTLM authentication dialog box. It works fine as long as the user provides valid credentials. Whenever the user provides invalid credentials, the NTLM dialog box pops up in Internet Explorer.</p> <p>My JavaScript code which does the authentication via XmlHttpRequest looks like this:</p> <pre><code>function Login() {
    var request = GetRequest(); // retrieves XmlHttpRequest
    request.onreadystatechange = function() {
        if (this.status == 401) {
            // unauthorized request -&gt; invalid credentials
            // do something to suppress NTLM dialog box...
            // already tried location.reload(); and window.location = &lt;url to authentication form&gt;;
        }
    }
    request.open("GET", "http://myServer", false, "domain\\username", "password");
    request.send(null);
}
</code></pre> <p>I don't want the NTLM dialog box to be displayed when the user provides invalid credentials. Instead, the postback by the login button in the authentication form should be executed. In other words, the browser should not find out about my unauthorized request.</p> <p>Is there any way to do this via JavaScript?</p>
<p><a href="https://stackoverflow.com/users/2199/mark-brackett">Mark</a>'s comment is correct; The NTLM auth prompt is triggered by a 401 response code and the presence of NTLM as the first mechanism offered in the WWW-Authenticate header (Ref: <a href="http://curl.haxx.se/rfc/ntlm.html" rel="nofollow noreferrer">The NTLM Authentication Protocol</a>).</p> <p>I'm not sure if I understand the question description correctly, but I think you are trying to wrap the NTLM authentication for SharePoint, which means you don't have control over the server-side authentication protocol, correct? If you're not able to manipulate the server side to avoid sending a 401 response on failed credentials, then you will not be able to avoid this problem, because it's part of the (client-side) spec:</p> <h3><a href="http://www.w3.org/TR/2006/WD-XMLHttpRequest-20060619/" rel="nofollow noreferrer">The XMLHttpRequest Object</a></h3> <blockquote> <p>If the UA supports HTTP Authentication [RFC2617] it SHOULD consider requests originating from this object to be part of the protection space that includes the accessed URIs and send Authorization headers and handle 401 Unauthorised requests appropriately. if authentication fails, UAs should prompt the users for credentials.</p> </blockquote> <p>So the spec actually calls for the browser to prompt the user accordingly if any 401 response is received in an XMLHttpRequest, just as if the user had accessed the URL directly. As far as I can tell the only way to really avoid this would be for you to have control over the server side and cause 401 Unauthorized responses to be avoided, as Mark mentioned.</p> <p>One last thought is that you may be able to get around this using a proxy, such a separate server side script on another webserver. 
That script then takes a user and pass parameter and checks the authentication, so that the user's browser isn't what's making the original HTTP request and therefore isn't receiving the 401 response that's causing the prompt. If you do it this way you can find out from your &quot;proxy&quot; script if it failed, and if so then prompt the user again until it succeeds. On a successful authentication event, you can simply fetch the HTTP request as you are now, since everything works if the credentials are correctly specified.</p>
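<p>A sketch of that proxy idea, in Python for brevity: a server-side helper performs the authenticated request itself, so the user's browser only ever sees the proxy's own response and never the 401. Note the function name and the Basic-auth scheme here are illustrative assumptions; SharePoint's actual NTLM handshake would need an NTLM-capable client library on the proxy side.</p>

```python
import base64
import urllib.error
import urllib.request

def credentials_valid(url, username, password):
    """Hypothetical proxy-side check: attempt the authenticated request
    ourselves and report success/failure, so the caller's browser never
    receives the 401 that triggers the login dialog."""
    request = urllib.request.Request(url)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    try:
        with urllib.request.urlopen(request) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False  # 401 and friends end up here, not in the browser
```

<p>The page's JavaScript would then call this proxy endpoint instead of the protected URL directly, and re-run the login form's postback whenever the proxy reports failure.</p>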
<p>I was able to get this working for all browsers except Firefox. See my blog post below from a few years ago. My post is aimed at IE only, but with some small code changes it should work in Chrome and Safari.</p> <p><a href="http://steve.thelineberrys.com/ntlm-login-with-anonymous-fallback-2/" rel="nofollow">http://steve.thelineberrys.com/ntlm-login-with-anonymous-fallback-2/</a></p> <p>EDIT:</p> <p>The gist of my post is wrapping your JS xml call in a try catch statement. In IE, Chrome, and Safari, this will suppress the NTLM dialog box. It does not seem to work as expected in Firefox.</p>
<p>I've heard that creating a new process on a Windows box is more expensive than on Linux. Is this true? Can somebody explain the technical reasons for why it's more expensive and provide any historical reasons for the design decisions behind those reasons?</p>
<p>mweerden: NT has been designed for multi-user from day one, so this is not really a reason. However, you are right that process creation plays a less important role on NT than on Unix, as NT, in contrast to Unix, favors multithreading over multiprocessing.</p> <p>Rob, it is true that fork is relatively cheap when COW is used, but as a matter of fact, fork is mostly followed by an exec. And an exec has to load all images as well. Discussing the performance of fork therefore is only part of the truth.</p> <p>When discussing the speed of process creation, it is probably a good idea to distinguish between NT and Windows/Win32. As far as NT (i.e. the kernel itself) goes, I do not think process creation (NtCreateProcess) and thread creation (NtCreateThread) are significantly slower than on the average Unix. There might be a little bit more going on, but I do not see the primary reason for the performance difference here.</p> <p>If you look at Win32, however, you'll notice that it adds quite a bit of overhead to process creation. For one, it requires the CSRSS to be notified about process creation, which involves LPC. It requires at least kernel32 to be loaded additionally, and it has to perform a number of additional bookkeeping work items before the process is considered to be a full-fledged Win32 process. And let's not forget about all the additional overhead imposed by parsing manifests, checking if the image requires a compatibility shim, checking whether software restriction policies apply, yada yada.</p> <p>That said, I see the overall slowdown in the sum of all those little things that have to be done in addition to the raw creation of a process, VA space, and initial thread. But as said in the beginning -- due to the favoring of multithreading over multiprocessing, the only software that is seriously affected by this additional expense is poorly ported Unix software. 
Although this situation changes when software like Chrome and IE8 suddenly rediscovers the benefits of multiprocessing and begins to frequently start up and tear down processes...</p>
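<p>The difference is easy to measure yourself; a rough, hedged benchmark sketch (absolute numbers vary wildly with antivirus, hardware and OS version, so only the relative comparison between platforms is meaningful):</p>

```python
import subprocess
import sys
import time

def time_process_creation(n=20):
    """Average wall-clock cost of spawning and reaping a trivial child
    process; run the same script on Windows and Linux to compare."""
    start = time.perf_counter()
    for _ in range(n):
        # Spawn the interpreter with a no-op program and wait for exit.
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    return (time.perf_counter() - start) / n

print(f"{time_process_creation() * 1000:.1f} ms per process")
```

<p>Note this times the full spawn-plus-exec path (the realistic case discussed above), not a bare fork.</p>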
<p>All that, plus there's the fact that on the Windows machine antivirus software will most probably kick in during CreateProcess... That's usually the biggest slowdown.</p>
<p>I'd like to use a different icon for the demo version of my game, and I'm building the demo with a different build config than I do for the full version, using a preprocessor define to lock out some content, use different graphics, etc. Is there a way that I can make Visual Studio use a different icon for the app icon in the demo config but continue to use the regular icon for the full version's config?</p>
<p>According to <a href="http://msdn.microsoft.com/en-us/library/aa381033(VS.85).aspx" rel="noreferrer">this page</a> you may use preprocessor directives in your *.rc file. You should write something like this:</p> <pre><code>#ifdef _DEMO_VERSION_
IDR_MAINFRAME ICON "demo.ico"
#else
IDR_MAINFRAME ICON "full.ico"
#endif
</code></pre>
<p>This will get you halfway there: <a href="http://www.codeproject.com/KB/dotnet/embedmultipleiconsdotnet.aspx" rel="nofollow noreferrer">http://www.codeproject.com/KB/dotnet/embedmultipleiconsdotnet.aspx</a></p> <p>Then you need to find the Win32 call which will set the displayed icon from the list of embedded icons.</p>
<p>Recently, thanks to Rails' popularity, many people have started using ActiveRecord as the model. However, before I heard of Rails (my peer group was not a fan of open source stuff, we were taught in a .NET school...) and while I was doing my final year project, I found this definition for a model:</p> <blockquote> <p>The model represents enterprise data and the business rules that govern access to and updates of this data. Often the model serves as a software approximation to a real-world process, so simple real-world modeling techniques apply when defining the model.</p> </blockquote> <p>It doesn't say the model should represent one table, as ActiveRecord does. And normally within a transaction, one may have to query a few unrelated tables and then manipulate data from different tables... So if ActiveRecord is used as the model, then one would either have to cram all the logic code into the controller (which is kinda popular in some PHP frameworks), which makes it difficult to test, or hack the ActiveRecord model so that it performs database operations not only on the table it maps to, but also on other related tables as well...</p> <p>So, what is so great about abusing (IMHO) ActiveRecord as the model in an MVC architectural pattern?</p>
<p>Martin Fowler described this pattern in Patterns of Enterprise Application Architecture together with two other patterns or architectures. These patterns are good for different situations and different amounts of complexity.</p> <p>If you want to do only simple stuff you can use Transaction Script. This is an architecture you saw in lots of old ASP and PHP pages, where a single script contained the business logic, data-access logic and presentation logic. This falls apart fast when things get more complicated.</p> <p>The next thing you can do is add some separation between presentation and model. This is Active Record. The model is still tied to the database but you have a bit more flexibility because you can reuse your model/data-access between views/pages/whatever. It's not as flexible as it could be but depending on your data-access solution it can be flexible enough. Frameworks like CSLA in .Net have a lot of aspects from this pattern (I think Entity Framework looks a bit too much like this too). It can still handle a lot of complexity without becoming unmaintainable.</p> <p>The next step is separating your data-access layer and your model. This usually requires a good OR mapper or a lot of work. So not everyone wants to go this way. Lots of methodologies like domain-driven design prescribe this approach.</p> <p>So it's all a matter of context: what do you need and what is the best solution? I even still use Transaction Script sometimes for simple single-use code.</p>
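<p>To make the middle option concrete, here is a minimal Active-Record-style sketch in Python (table and column names are made up for illustration): the model class mirrors exactly one table and each instance carries its own persistence methods.</p>

```python
import sqlite3

class User:
    """Active-Record-style model: one class per table; the instance
    knows how to save and find itself (illustrative sketch only)."""
    _db = sqlite3.connect(":memory:")
    _db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    def __init__(self, name, id=None):
        self.id = id
        self.name = name

    def save(self):
        cur = User._db.execute(
            "INSERT INTO users (name) VALUES (?)", (self.name,))
        self.id = cur.lastrowid
        return self

    @classmethod
    def find(cls, id):
        row = cls._db.execute(
            "SELECT id, name FROM users WHERE id = ?", (id,)).fetchone()
        return cls(row[1], row[0]) if row else None

alice = User("alice").save()
print(User.find(alice.id).name)  # -> alice
```

<p>Reuse between views comes for free, but every query still goes through a class tied to a single table, which is exactly the limitation the question points at and the reason the next step (a separate data-access layer) exists.</p>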
<p>The great thing about using the Rails ActiveRecord as a model in MVC is that it gives you an automatic ORM (Object Relational Mapper) and an easy way to create associations between models. As you have pointed out, MVC can sometimes be lacking.</p> <p>Therefore, for some complex transaction involving many models, I'd suggest using a Presenter in between your controller and your models (<a href="http://blog.jayfields.com/2007/03/rails-presenter-pattern.html" rel="nofollow noreferrer">Rails Presenter Pattern</a>). The Presenter would aggregate your models and transactional logic and would remain easily testable. You definitely want to strive to keep all of your business logic in your models or presenters, and out of your controllers (<a href="http://weblog.jamisbuck.org/2006/10/18/skinny-controller-fat-model" rel="nofollow noreferrer">Skinny Controller, Fat Model</a>).</p>
<p>I am trying to create a Task Scheduler task to start my SQL Server 2005 instance every morning, because something stops it every night. This is a temporary solution until I can diagnose the stoppage.</p> <p>I created a task to run under my admin user, and to start the program, <em>cmd</em> with the arguments <em>/c net start mssqlserver</em>. When I manually run the command, in a console under my admin user, it runs, but when I try to manually execute the task, it logs the following message, and the service remains stopped:</p> <p><em>action "C:\Windows\system32\cmd.EXE" with return code 2</em>.</p> <p>Any suggestions?</p>
<p>Use an enum:</p> <pre><code>public enum Foo {
    INSTANCE;
}
</code></pre> <p>Joshua Bloch explained this approach in his <a href="http://sites.google.com/site/io/effective-java-reloaded" rel="noreferrer">Effective Java Reloaded</a> talk at Google I/O 2008: <a href="http://www.youtube.com/watch?v=pi_I7oD_uGI#t=28m50s" rel="noreferrer">link to video</a>. Also see slides 30-32 of his presentation (<a href="https://14b1424d-a-62cb3a1a-s-sites.googlegroups.com/site/io/effective-java-reloaded/effective_java_reloaded.pdf?attachauth=ANoY7crKCOet2NEUGW7RV1XfM-Jn4z8YJhs0qJM11OhLRnFW_JbExkJtvJ3UJvTE40dhAciyWcRIeGJ-n3FLGnMOapHShHINh8IY05YViOJoZWzaohMtM-s4HCi5kjREagi8awWtcYD0_6G7GhKr2BndToeqLk5sBhZcQfcYIyAE5A4lGNosDCjODcBAkJn8EuO6572t2wU1LMSEUgjvqcf4I-Fp6VDhDvih_XUEmL9nuVJQynd2DRpxyuNH1SpJspEIdbLw-WWZ&amp;attredirects=0" rel="noreferrer">effective_java_reloaded.pdf</a>):</p> <blockquote> <h3>The Right Way to Implement a Serializable Singleton</h3> <pre><code>public enum Elvis {
    INSTANCE;
    private final String[] favoriteSongs =
        { "Hound Dog", "Heartbreak Hotel" };
    public void printFavorites() {
        System.out.println(Arrays.toString(favoriteSongs));
    }
}
</code></pre> </blockquote> <p><strong>Edit:</strong> An <a href="http://www.ddj.com/java/208403883?pgno=3" rel="noreferrer">online portion of "Effective Java"</a> says:</p> <blockquote> <p>"This approach is functionally equivalent to the public field approach, except that it is more concise, provides the serialization machinery for free, and provides an ironclad guarantee against multiple instantiation, even in the face of sophisticated serialization or reflection attacks. While this approach has yet to be widely adopted, <strong>a single-element enum type is the best way to implement a singleton</strong>."</p> </blockquote>
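<p>The "serialization machinery for free" property can be demonstrated in Python as well, whose enum members are likewise deserialized back to the one canonical instance; a small sketch:</p>

```python
import pickle
from enum import Enum

class Elvis(Enum):
    INSTANCE = 0

    def favorites(self):
        return ["Hound Dog", "Heartbreak Hotel"]

# Round-tripping through serialization yields the very same object,
# so no second instance can be conjured up this way.
clone = pickle.loads(pickle.dumps(Elvis.INSTANCE))
print(clone is Elvis.INSTANCE)  # -> True
```

<p>Enum members are pickled by name, so unpickling resolves back to the existing member instead of constructing a new object, mirroring the guarantee the Java enum gives.</p>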
<p>Sometimes a simple "<strong><code>static Foo foo = new Foo();</code></strong>" is not enough. Just think of some basic data insertion you want to do.</p> <p>On the other hand, you would have to synchronize any method that instantiates the singleton variable as such. Synchronisation is not bad as such, but it can lead to performance issues or locking (in very, very rare situations with this example). The solution is:</p> <pre><code>public class Singleton {

    private static Singleton instance = null;

    static {
        instance = new Singleton();
        // do some of your instantiation stuff here
    }

    private Singleton() {
        if (instance != null) {
            throw new ErrorYouWant("Singleton double-instantiation, should never happen!");
        }
    }

    public static Singleton getSingleton() {
        return instance;
    }
}
</code></pre> <p>Now what happens? The class is loaded via the class loader. Directly after the class has been interpreted from a byte array, the VM executes the <strong>static { }</strong> block. That's the whole secret: the static block is only called once, at the time the given class (name) of the given package is loaded by this one class loader.</p>
<p>I have an absolutely positioned <code>div</code> containing several children, one of which is a relatively positioned <code>div</code>. When I use a <code>percentage-based width</code> on the child <code>div</code>, it collapses to <code>0 width</code> on IE7, but not on Firefox or Safari.</p> <p>If I use <code>pixel width</code>, it works. If the parent is relatively positioned, the percentage width on the child works.</p> <ol> <li>Is there something I'm missing here?</li> <li>Is there an easy fix for this besides the <code>pixel-based width</code> on the child?</li> <li>Is there an area of the CSS specification that covers this?</li> </ol>
<p>The parent <code>div</code> needs to have a defined <code>width</code>, either in pixels or as a percentage; in Internet&nbsp;Explorer&nbsp;7, child percentage <code>div</code>s only work correctly when the parent's <code>width</code> is defined.</p>
<p>The <code>div</code> needs to have a defined width:</p> <pre><code>&lt;div id="parent" style="width:230px;"&gt;
    &lt;div id="child1"&gt;&lt;/div&gt;
    &lt;div id="child2"&gt;&lt;/div&gt;
&lt;/div&gt;
</code></pre>
<p>When using a filament based printer, what operator behaviors increase the frequency at which a bed must be relevelled between prints?</p>
<h1>Mechanical interaction</h1> <h2>Operator induced regular actions</h2> <p>When an operator reaches into the machine and operates something on the bed, this can induce errors that slowly accumulate. The most typical operations, such as removing something from the printbed, cleaning the printbed or swapping the printbed wholesale, can all result in a slow but steady unleveling.</p> <p>Regular leveling can counteract this. With good training, you might get away with releveling once in a dozen prints or less.</p> <h2>Maintenance of/work on the printhead</h2> <p>During maintenance such as swapping nozzles, checking connections and cleaning the printhead, there is a very high chance that the printbed is touched due to the usually very cramped area one has to work in. For example, it is near impossible not to touch the printbed when swapping nozzles on my Ender 3, if I want to use my torque screwdriver. This is true even with the Z-axis at the highest position, due to the dimensions of my torque screwdriver.</p> <p>By virtue of the work on the printhead, the 0-level is usually thrown off anyway, and as such a relevel is <strong>always</strong> in order after any printhead maintenance more invasive than cleaning the fan ducts.</p> <p>Avoiding nozzle swaps unless necessary can reduce the workload - it might be cheaper in the long term to have two machines with different setups than one machine where you swap the nozzle for each print - unless you charge for the accompanying work on the setup change.</p> <h2>Operator induced irregular actions</h2> <p>There are cases where the operator did not plan to operate in the area of the printbed but actually might impact it by reckless or accidental action. 
In other words: accidents happen, tools drop onto the printbed and hands end up there if an operator stumbles.</p> <p>While releveling might not be <em>necessary</em> after all such accidental contacts, occasionally checking it and fixing it is good practice.</p> <h2>Mechanical failure</h2> <p>The way you test and maintain your Z-level is paramount in how often you need to validate the Z-level. If your springs are too strong and push the leveling knobs loose on their own, or your Z-sensor is mounted only weakly, then these create problems on their own.</p> <p>Note that even in normal operation, the oscillation of the printer will make any bolt under stress that is not glued in place or jammed in tight loosen a little over time. This does include the bed leveling knobs.</p>
<p>The only ones I have found are:</p> <ul> <li>Manually putting pressure on the bed when removing prints.</li> <li>Removing the bed covering, e.g. a glass plate.</li> <li>Damaging the bed in some way. For example, my bed has high spots on it (always has). This means that if I remove the glass plate I use and put it back, it sits slightly differently. If I orient the glass a different way than before, it always needs levelling.</li> <li>Changing filament types.</li> <li>Changing cover types, e.g. magnetic and glass are different thicknesses.</li> <li>Changing first layer needs. Sometimes I need the first layer squished a bit depending on what I'm doing.</li> <li>Lastly, on my Ender 3 Pro, if the Z-axis switch isn't screwed in tight enough it can slip down a fraction.</li> </ul>
<p>I would like to think that some of the software I'm writing today will be used in 30 years. But I am also aware that a lot of it is based upon the UNIX tradition of exposing time as the number of seconds since 1970.</p> <pre class="lang-c prettyprint-override"><code>#include &lt;stdio.h&gt;
#include &lt;time.h&gt;
#include &lt;limits.h&gt;

void print(time_t rt) {
    struct tm * t = gmtime(&amp;rt);
    puts(asctime(t));
}

int main() {
    print(0);
    print(time(0));
    print(LONG_MAX);
    print(LONG_MAX+1);
}
</code></pre> <p>Execution results in:</p> <ul> <li>Thu Jan 1 00:00:00 1970</li> <li>Sat Aug 30 18:37:08 2008</li> <li>Tue Jan 19 03:14:07 <strong>2038</strong></li> <li>Fri Dec 13 20:45:52 <strong>1901</strong></li> </ul> <blockquote> <blockquote> <p>The functions ctime(), gmtime(), and localtime() all take as an argument a time value representing the time in seconds since the Epoch (00:00:00 UTC, January 1, 1970; see time(3) ).</p> </blockquote> </blockquote> <p>I wonder if there is anything proactive to do in this area as a programmer, or are we to trust that all software systems (aka Operating Systems) will somehow be magically upgraded in the future?</p> <p><strong>Update</strong> It would seem that indeed 64-bit systems are safe from this:</p> <pre class="lang-java prettyprint-override"><code>import java.util.*;

class TimeTest {
    public static void main(String[] args) {
        print(0);
        print(System.currentTimeMillis());
        print(Long.MAX_VALUE);
        print(Long.MAX_VALUE + 1);
    }

    static void print(long l) {
        System.out.println(new Date(l));
    }
}
</code></pre> <ul> <li>Wed Dec 31 16:00:00 PST 1969</li> <li>Sat Aug 30 12:02:40 PDT 2008</li> <li>Sat Aug 16 23:12:55 PST <strong>292278994</strong></li> <li>Sun Dec 02 08:47:04 PST <strong>292269055</strong></li> </ul> <p>But what about the year 292278994?</p>
<p>I have written portable replacement for time.h (currently just localtime(), gmtime(), mktime() and timegm()) which uses 64 bit time even on 32 bit machines. It is intended to be dropped into C projects as a replacement for time.h. It is being used in Perl and I intend to fix Ruby and Python's 2038 problems with it as well. This gives you a safe range of +/- 292 million years.</p> <p>You can find the code <a href="https://github.com/schwern/y2038" rel="noreferrer">at the y2038 project</a>. Please feel free to post any questions to the <a href="https://github.com/schwern/y2038/issues" rel="noreferrer">issue tracker</a>.</p> <p>As to the "this isn't going to be a problem for another 29 years", peruse this <a href="https://github.com/schwern/y2038/wiki/Why-Bother%3F" rel="noreferrer">list of standard answers</a> to that. In short, stuff happens in the future and sometimes you need to know when. I also have <a href="http://www.slideshare.net/schwern/whos-afraid-of-2038-presentation" rel="noreferrer">a presentation on the problem, what is not a solution, and what is</a>.</p> <p>Oh, and don't forget that many time systems don't handle dates before 1970. Stuff happened before 1970, sometimes you need to know when.</p>
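<p>As a quick sanity check of what wider time values buy you: with 64-bit (or, as in Python, arbitrary-precision) arithmetic, the classic rollover instant is just another representable moment:</p>

```python
import datetime

# 2**31 - 1 seconds after the epoch: where signed 32-bit time_t overflows.
rollover = 2**31 - 1
moment = datetime.datetime.fromtimestamp(rollover, datetime.timezone.utc)
print(moment)  # -> 2038-01-19 03:14:07+00:00

# One second later: no wraparound back to 1901.
print(moment + datetime.timedelta(seconds=1))
```

<p>The hard part, as the answer above explains, is not representing the value but getting every library between the clock and your code to use the wider type.</p>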
<p>By 2038, time libraries should all be using 64-bit integers, so this won't actually be that big of a deal (on software that isn't completely unmaintained).</p> <p>COBOL programs might be fun though.</p>
<p>I'm currently working on a ray-tracer in C# as a hobby project. I'm trying to achieve a decent rendering speed by implementing some tricks from a C++ implementation and have run into a spot of trouble.</p> <p>The objects in the scenes which the ray-tracer renders are stored in a KdTree structure and the tree's nodes are, in turn, stored in an array. The optimization I'm having problems with is while trying to fit as many tree nodes as possible into a cache line. One means of doing this is for nodes to contain a pointer to the left child node only. It is then implicit that the right child follows directly after the left one in the array.</p> <p>The nodes are structs and during tree construction they are successfully put into the array by a static memory manager class. When I begin to traverse the tree it, at first, seems to work just fine. Then at a point early in the rendering (about the same place each time), the left child pointer of the root node is suddenly pointing at a null pointer. I have come to the conclusion that the garbage collector has moved the structs as the array lies on the heap.</p> <p>I've tried several things to pin the addresses in memory but none of them seems to last for the entire application lifetime as I need. The 'fixed' keyword only seems to help during single method calls and declaring 'fixed' arrays can only be done on simple types which a node isn't. Is there a good way to do this or am I just too far down the path of stuff C# wasn't meant for?</p> <p>Btw, changing to C++, while perhaps the better choice for a high performance program, is not an option.</p>
<p>Firstly, if you're using C# normally, you can't suddenly get a null reference due to the garbage collector moving stuff, because the garbage collector also updates all references, so you don't need to worry about it moving stuff around.</p> <p>You can pin things in memory but this may cause more problems than it solves. For one thing, it prevents the garbage collector from compacting memory properly, and may impact performance in that way.</p> <p>One thing I would say from your post is that using structs may not help performance as you hope. C# fails to inline any method calls involving structs, and even though they've fixed this in their latest runtime beta, structs frequently don't perform that well.</p> <p>Personally, I would say C++ tricks like this don't generally tend to carry over too well into C#. You may have to learn to let go a bit; there can be other more subtle ways to improve performance ;)</p>
<p>Is it really prohibitive to store the pair of array reference and index?</p>
5,920
<p>I am looking for a text editor to be used in a web page. Where users can format the text and get a WYSIWYG experience. Doesn't need to be too fancy. But has to be easy to use and integrate into the page. Has to generate HTML as output. Support AJAX (one I checked works only with standard form submit) and has to be small in terms of download to the user's browser.</p>
<p>Well, it depends what platform you are on if you are looking for server-side functionality as well, but the de facto badass WYSIWYG editor in my opinion is <a href="http://www.fckeditor.net/" rel="noreferrer">FCKeditor</a>. I have worked with this personally in numerous environments (both professional and hobby level) and have always been impressed.</p> <p>It's certainly worth a look. I believe it is employed by open source projects such as SubText as well. Perhaps <a href="http://weblogs.asp.net/jgalloway/" rel="noreferrer">Jon Galloway</a> can add to this if he reads this question. Or <a href="http://haacked.com/Default.aspx" rel="noreferrer">Phil</a> if he is currently a user.</p>
<p>I've been using FCK for some time now, after "Free Text Box" or something like that. I had problems only once, when I put FCK inside an ASP.NET AJAX UpdatePanel, but found a fix on the forums; the problem was solved in the next release.<br> I would like to see a nice photo browser in it, because FCK only comes with a simple browser that displays filenames, no thumbnails. The other one, which has thumbnails, costs a bunch of money.<br> I didn't try it with ASP.NET MVC, so I don't know how uploading will work. It uses one ascx for wrapping the JS functionality.</p>
4,150
<p>I have been attempting to get my Anet to print for a couple of months now. I haven't had a huge amount of time, but when I have, I've worked on it. A lot of the problems I have had I have been able to troubleshoot; for example, for a while I was unable to get any filament to lay, and I have since fixed this issue. Now, however, my filament will extrude but it is very spacey and not solid. I have tried re-calibrating the bed, un-clogging the extruder, and increasing the flow rate but nothing works. Is it possible that I have a bad board and that is causing my problem?</p> <p><img src="https://i.stack.imgur.com/k9Hj6.jpg" alt="Failed print" title="Failed print"></p>
<p>The skirt <em>looks</em> ok, which tends to rule out the most extreme flow or blockage issues. However, the skirt/bottom layer can be over-squashed, so it's not ideal for calibration.</p> <p>I assume this is intended to be a 100% layer rather than infill. It looks like you're achieving about 50% infill, which is a good clue.</p> <p>I guessed (and confirmed in comments) that you are set for 3 mm filament, but using 1.75 mm? (I had Cura default to this on me not long ago, despite printing perfectly before, and no intentional changes). One easy check would have been to try one of the provided pre-sliced models that came with the printer.</p> <p>The effect of configuring for 3 mm filament is to reduce flow by a factor of 3 (diameter squared), so even a 50% over-extrusion still leaves you with less than half the plastic volume required.</p> <p>After fixing this, you will need to raise the Z-home position slightly to enable the first layer to extrude properly.</p>
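The "factor of 3 (diameter squared)" mentioned above comes from the ratio of cross-sectional areas of the two filament sizes; a quick check (the two diameters are the common filament standards, nothing printer-specific):

```python
# Flow scales with the cross-sectional area of the filament, i.e. with
# diameter squared. Slicing for 3 mm filament while feeding 1.75 mm
# therefore under-extrudes by roughly a factor of three:
configured_diameter = 3.0   # mm, what the slicer thinks it has
actual_diameter = 1.75      # mm, what the printer is actually fed
flow_factor = (configured_diameter / actual_diameter) ** 2
print(round(flow_factor, 2))  # 2.94
```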
<p>I had similar issues with my A8 at first. If it <em>did</em> work fine and now it does not, then it is not a firmware issue. My issues were resolved when I got a <em>new</em> nozzle. If the tip is worn out or squished from crashing into the bed, no amount of cleaning is going to solve it. Also the filament it came with is probably extremely dusty and can easily reclog after cleaning.</p>
1,322
<p>And why would I use one over the other in my code?</p>
<blockquote> <p><strong>Caution: This answer appears to be out of date.</strong> It was posted before .NET 4 became available, which apparently introduced some optimizations regarding <code>Type</code> and thus rendered the information in this answer obsolete. See <a href="https://stackoverflow.com/a/1438443/240733">this more recent answer</a> for details.</p> </blockquote> <p>According to <a href="http://blogs.msdn.com/vancem/archive/2006/10/01/779503.aspx" rel="nofollow noreferrer">this blog post (from 2006) by Vance Morrison</a>, <code>RuntimeTypeHandle</code> is a value type (<code>struct</code>) that wraps an unmanaged pointer, so <code>Type.GetTypeHandle(obj).Equals(anotherHandle)</code> is faster to use for strict "is exactly the same type" comparisons than <code>obj.GetType().Equals(anotherType)</code> &mdash; the latter creates <code>System.Type</code> instances which are, apparently, heavier.</p> <p>However, it's also less obvious, and definitely falls under the category "micro-optimization" so if you're wondering when you need one over the other, you should probably just use System.Type.</p>
<p>In .NET 4.0 Beta 1 <code>RuntimeTypeHandle</code> just wraps <code>RuntimeType</code>. It seems all benefits of using it as a cheap <code>Type</code> proxy have gone.</p> <h3>Evidence for the above claim:</h3> <ul> <li>Microsoft's <a href="https://referencesource.microsoft.com/#mscorlib/system/runtimehandles.cs" rel="nofollow noreferrer">Reference Source for the <code>System.RuntimeTypeHandle</code> type</a> shows that this type is indeed only a wrapper around <code>System.RuntimeType</code> these days. </li> <li><a href="http://higherlogics.blogspot.ch/2013/09/clr-cost-of-dynamic-type-tests.html" rel="nofollow noreferrer">Sandro Magi's 2013 blog article "CLR: The Cost of Dynamic Type Tests" </a> contains a benchmark and a final note showing that the supposed performance benefits of <code>RuntimeTypeHandle</code> are gone in .NET 4.</li> </ul>
6,158
<p>My first attempt to print in ABS has been an unmitigated disaster, and I could do with some advice before I try again.</p> <p>I've been trying to print a <a href="https://www.thingiverse.com/thing:4067324" rel="noreferrer">Bean Hopper Extension for Melitta Barista TS Smart</a> coffee machine, but every time the layers separate, usually at or near a corner.</p> <p><a href="https://i.stack.imgur.com/Net7D.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Net7Dm.png" alt="Bean Hopper Extension for Melitta Barista TS Smart" /></a></p> <p>My first attempt had it lifting from the bed before a few layers had completed. When I tried it with a thin brim on the outside, it stuck down for a while, but eventually started lifting away from the bed. When I tried it with a raft, it got further, but then the print detached itself from the raft at one corner. My final attempt used a wider brim on both inside and outside, and it managed to get even further through the print. None of the brim had detached from the bed, but then I noticed layers higher up pulling apart, and when I eventually cancelled the print, I saw it had started separating earlier and managed to recover in later layers.</p> <p>At this point I gave up and printed it in PLA, first time, but I would like to understand what I can do to improve my chances of printing this next time.</p> <p>Also, I plan to redesign this with the outer wall extending straight up to the level of the bottom of the notch, with a flat top in which I intend to add a channel for a rubber seal. Will widening of part of the model cause more problems for an ABS print, or will it help strengthen it and prevent it separating as before?</p> <p><sup>I'm using an Ender 3 V2 with a heated bed, using Prit-stik as my bed glue.</sup></p>
<p>First of all, the bed should be at 100-110 °C and fan 0 %.</p> <p>I had <em>incredibly</em> good results by using a (sacrificial) &quot;draft shield&quot; as shown below together with a large brim (10 mm). It creates a warmer micro-climate around the print, with much less warping. My printer bed back then was 130x130 mm, therefore drafts were strong. You will get even better results, since you have about 200x200 mm.</p> <p><a href="https://i.stack.imgur.com/M0l3m.png" rel="noreferrer"><img src="https://i.stack.imgur.com/M0l3m.png" alt="enter image description here" /></a></p> <p><div class="youtube-embed"><div> <iframe width="640px" height="395px" src="https://www.youtube.com/embed/OCcwXF5Z40A?start=47"></iframe> </div></div></p> <p>If you were able to add something like this, it would be even better:</p> <p><a href="https://i.stack.imgur.com/Qnwfl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qnwfl.png" alt="enter image description here" /></a></p> <p>You can get it by designing a &quot;U&quot;-shaped part in Fusion 360 or anything else, which you will add twice to each print and scale accordingly.</p> <p>The reason is that the normal draft shield will deform a lot and detach from the print bed at least in part, resulting in an opening at the bottom and a chimney effect which reduces the effectiveness of the shield. With the two U parts, it won't happen as easily.</p> <p>The draft shield is the first thing to try.</p> <p>You can also add a &quot;shield&quot; out of cardboard around the printer. You don't need to close the top, it will work quite well.</p> <p>You can also get 2-4 20-40 W halogen lamps (from a car, for example; they already run at 12 V, but don't connect them to the same power supply as the printer!) and point them at the part. If you have two, mount them at the two sides of the horizontal gantry: they don't need to move horizontally, only vertically. 
They will keep the part quite warm without the need of an enclosure (but it would help). If you have four, mount them maybe at the corners of the bed, hoping the shaking won't kill them too quickly.</p> <p>The solutions I listed so far result in better prints, but also in <strong>stronger</strong> prints because they reduce the causes of warping. The halogen lamps, in particular, keep the printed part quite hot and do wonders.</p> <p>Another solution, which is the easiest and works extremely well but weakens the print because it only hides the warping, is to use Dimafix on the bed. The print will stick like there is no tomorrow (bed temperature above 90 °C) but the stresses are still there and the print may split in the middle between layers, if it's tall.</p>
<p>You make no mention of a heated enclosure, so I assume you aren't using one. ABS undergoes significant contraction as it cools down. If you're not using an enclosure, the temperature differences between the heated bed, the cool middle of the print, and the heat of the freshly-printed plastic will cause severe warping.</p>
1,964
<p>My printer is ignoring the &quot;Z offset&quot; setting in Cura and the &quot;Bed Z&quot; value stored in the printer's LCD settings. It was working before, but after a firmware update it stopped. Even after flashing the previous firmware back, &quot;Bed Z&quot; changes no longer affect anything. Whether I change &quot;Bed Z&quot; during prints or use the &quot;Z offset&quot; option in Cura, the nozzle still prints at the same height.</p>
<h2>Z offset in Cura</h2> <p>Z offset in Cura is nothing more than an additional increase/decrease in height calculated throughout the whole file. This does require the installation of a plugin from the Ultimaker Cura &quot;Marketplace&quot; called &quot;Z Offset Setting&quot; by developer &quot;fieldOfView&quot;. This should work (if not, this implies the plugin is defective; version 3.5.7, the latest version to date, appears to work as it should). The G-code is parsed (as in post-processed) by the plugin to recalculate all Z values (a redefinition of the Z level with <code>G92</code> might have been a lot easier...). As an example, an object is sliced without an offset, with a negative offset (-0.333 mm) and with a positive offset (0.333 mm) below. The layer numbered <code>;LAYER:0</code> will start printing at:</p> <p>No offset:</p> <pre><code>G0 F4285.7 X81.669 Y84.791 Z0.27 </code></pre> <p>Negative offset:</p> <pre><code>G0 F4285.7 X81.669 Y84.791 Z-0.063 ;adjusted by z offset </code></pre> <p>Positive offset:</p> <pre><code>G0 F4285.7 X81.669 Y84.791 Z0.603 ;adjusted by z offset </code></pre> <h2>Z offset in firmware</h2> <p>A Z offset is the distance between the nozzle and the build plate surface that is needed for good adhesion. This is obtained by putting a piece of A4/Letter paper between the nozzle and the build plate during levelling of the bed. Basically this is an offset from the endstop position. Using the menu of a Marlin-operated printer you can manually adjust this value, e.g. during printing of the first layer, by turning the jog dial for Z babystepping. The paper levelling method is a manual/mechanical Z offset that can be extended with the value changed by the menu. When storing such an offset you basically change the offset from the endstop position; this can be done with <a href="https://marlinfw.org/docs/gcode/M206.html" rel="nofollow noreferrer">G-code <code>M206</code></a>. 
Storing the value to be retained for future prints, the value needs to be stored to memory (using <a href="https://marlinfw.org/docs/gcode/M500.html" rel="nofollow noreferrer">G-code <code>M500</code></a> or an equivalent save settings from the menu of the printer controller).</p> <p><em>Note this offset is something different when using a Z probe. The Z probe offset in firmware is set by <code>M851</code> or <code>G29.1</code>, depending on the used firmware.</em></p>
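The post-processing approach described above (recalculating every Z value in the file, as the "Z Offset Setting" plugin effectively does) can be sketched in a few lines of Python. The regex and function are illustrative, not the plugin's actual code:

```python
import re

# Matches a Z coordinate like "Z0.27" or "Z-0.063" on a move line.
Z_PATTERN = re.compile(r"Z(-?\d+\.?\d*)")

def apply_z_offset(gcode_line: str, offset: float) -> str:
    """Shift every Z coordinate on a G0/G1 move by `offset` millimetres."""
    if not gcode_line.startswith(("G0 ", "G1 ")):
        return gcode_line  # leave comments, M-codes, etc. untouched

    def shift(match: re.Match) -> str:
        return f"Z{float(match.group(1)) + offset:.3f}"

    return Z_PATTERN.sub(shift, gcode_line)

print(apply_z_offset("G0 F4285.7 X81.669 Y84.791 Z0.27", 0.333))
# G0 F4285.7 X81.669 Y84.791 Z0.603
```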
<p>I have the same problem. You need to check your G-code to find where the problem is.</p> <p>Try comparing the G-code file with different values of the Z offset.</p> <p>In my case it looks like this:</p> <p>No offset</p> <pre><code>;LAYER_COUNT:107
;LAYER:0
M106 S255
G1 F300 Z0.84
G0 F6000 X124.645 Y78.208 Z0.84
;TYPE:SKIRT
G1 F300 Z0.44
G1 F1500 E0
</code></pre> <p>With offset (comments made by Cura)</p> <pre><code>;LAYER_COUNT:107
;LAYER:0
M106 S255
G1 F300 Z2.06 ;adjusted by z offset
G92 Z0.84 ;consider this the original z before offset
G0 F6000 X119.093 Y42.498 Z0.84
;TYPE:SKIRT
G1 F300 Z0.44
</code></pre> <p>I hope this helps.</p>
1,657
<p>The first time I load the website on the production web server, it starts very slowly; subsequent pages load very quickly (including the home page).</p> <p>I precompiled the site, but nothing changed. </p> <p>I don't have any code in Application_Start. I don't have cached items.</p> <p>Any ideas? How can I find out what is happening?</p>
<p>It's just your app domain loading up and loading any binaries into memory. Also, it's initializing static variables, so if you have a static variable that loads up a lot of data from the db, it might take a bit.</p>
<p>This sounds very much like background compiling; though if you're precompiling, that shouldn't be an issue.</p> <p>First thing I would look at is your ORM (if any). NHibernate, in particular, has a serious startup penalty, as it runs multiple compilers in the background at startup to turn each class in your data layer into its own in-memory assembly.</p>
4,586
<p>I've got a lot of ugly code that looks like this:</p> <pre><code>if (!string.IsNullOrEmpty(ddlFileName.SelectedItem.Text))
    results = results.Where(x =&gt; x.FileName.Contains(ddlFileName.SelectedValue));

if (chkFileName.Checked)
    results = results.Where(x =&gt; x.FileName == null);

if (!string.IsNullOrEmpty(ddlIPAddress.SelectedItem.Text))
    results = results.Where(x =&gt; x.IpAddress.Contains(ddlIPAddress.SelectedValue));

if (chkIPAddress.Checked)
    results = results.Where(x =&gt; x.IpAddress == null);

...etc.
</code></pre> <p><code>results</code> is an <code>IQueryable&lt;MyObject&gt;</code>.<br> The idea is that for each of these innumerable dropdowns and checkboxes, if the dropdown has something selected, the user wants to match that item. If the checkbox is checked, the user wants specifically those records where that field is null or an empty string. (The UI doesn't let both be selected at the same time.) This all adds to the LINQ Expression which gets executed at the end, after we've added all the conditions.</p> <p>It <em>seems</em> like there ought to be some way to pull out an <code>Expression&lt;Func&lt;MyObject, bool&gt;&gt;</code> or two so that I can put the repeated parts in a method and just pass in what changes. I've done this in other places, but this set of code has me stymied. (Also, I'd like to avoid "Dynamic LINQ", because I want to keep things type-safe if possible.) Any ideas?</p>
<p>I'd convert it into a single LINQ statement:</p> <pre><code>var results = //get your initial results
    from x in GetInitialResults()
    //either we don't need to check, or the check passes
    where string.IsNullOrEmpty(ddlFileName.SelectedItem.Text)
       || x.FileName.Contains(ddlFileName.SelectedValue)
    where !chkFileName.Checked || string.IsNullOrEmpty(x.FileName)
    where string.IsNullOrEmpty(ddlIPAddress.SelectedItem.Text)
       || x.IpAddress.Contains(ddlIPAddress.SelectedValue)
    where !chkIPAddress.Checked || string.IsNullOrEmpty(x.IpAddress)
    select x;
</code></pre> <p>It's no shorter, but I find this logic clearer.</p>
<pre><code>results = results.Where(x =&gt;
    (string.IsNullOrEmpty(ddlFileName.SelectedItem.Text) || x.FileName.Contains(ddlFileName.SelectedValue))
    &amp;&amp; (!chkFileName.Checked || string.IsNullOrEmpty(x.FileName))
    &amp;&amp; ...);
</code></pre>
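The conditional filter-chaining pattern in the question and both answers can be sketched language-agnostically in Python; the record fields and UI values here are stand-ins for the dropdowns and checkboxes, not anything from the original code:

```python
# Each UI control contributes one optional predicate; a record must pass all.
def build_filters(selected_file, file_null_checked):
    filters = []
    if selected_file:  # dropdown has a selection -> substring match
        filters.append(lambda r: selected_file in (r["file_name"] or ""))
    if file_null_checked:  # checkbox checked -> match null/empty values
        filters.append(lambda r: not r["file_name"])
    return filters

def apply(records, filters):
    # Equivalent of chaining .Where() calls: conjunction of all predicates.
    return [r for r in records if all(f(r) for f in filters)]

records = [{"file_name": "app.log"}, {"file_name": None}]
print(apply(records, build_filters("app", False)))  # [{'file_name': 'app.log'}]
```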
7,863
<p>Looking for an example that:</p> <ol> <li>Launches an EXE</li> <li>Waits for the EXE to finish.</li> <li>Properly closes all the handles when the executable finishes.</li> </ol>
<p>Something like this:</p> <pre><code>STARTUPINFO info={sizeof(info)};
PROCESS_INFORMATION processInfo;
if (CreateProcess(path, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &amp;info, &amp;processInfo))
{
    WaitForSingleObject(processInfo.hProcess, INFINITE);
    CloseHandle(processInfo.hProcess);
    CloseHandle(processInfo.hThread);
}
</code></pre>
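For comparison, the same launch / wait / clean-up sequence looks like this in Python, where `subprocess.run()` waits for the child and releases the process resources itself (this is an analogy to the Win32 pattern, not a wrapper around it):

```python
import subprocess
import sys

# 1. Launch a child process, 2. wait for it to finish,
# 3. let run() close the underlying handles on return.
result = subprocess.run(
    [sys.executable, "-c", "print('done')"],
    capture_output=True,
    text=True,
)
print(result.returncode)      # 0
print(result.stdout.strip())  # done
```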
<p>Here is a solution for <code>CreateProcessA</code></p> <pre><code>STARTUPINFOA initInfo = { 0 };
initInfo.cb = sizeof(initInfo);
PROCESS_INFORMATION procInfo = { 0 };

CreateProcessA(PATH_FOR_EXE, NULL, NULL, NULL, FALSE, 0, NULL, NULL,
               &amp;initInfo, &amp;procInfo);
</code></pre>
6,406
<p>I am converting from an existing CVS repository to an SVN repository. The CVS repository has a few branches and I'd like to rename the branches while converting.</p> <p>The wanted conversion is like this:</p> <pre><code>CVS branch    SVN branch
HEAD    -&gt; branches/branchX
branchA -&gt; trunk
branchB -&gt; branches/branchB
branchC -&gt; branches/branchC
</code></pre> <p>That is, CVS HEAD becomes a normal branch and CVS branchA becomes SVN trunk.</p> <p>Both CVS and SVN repositories will be on the same Linux machine.</p> <p>How could this be done? </p> <p>Also, a conversion where CVS branchA becomes SVN trunk and all other CVS branches are ignored might be enough.</p>
<blockquote>I am especially interested in preserving commit history. If I rename and move branches around in SVN after the conversion, will the history be preserved?</blockquote> <p>Yes. Subversion also keeps track of changes to the directory structure, and all version history is preserved even if a file is moved in the tree. </p> <p>I recommend converting the repository with <a href="http://cvs2svn.tigris.org/" rel="noreferrer">cvs2svn</a>, including branches and tags. Once the repository is in Subversion you can move the branches and tags around as you wish. This also keeps the history of the actual tags and branches being renamed, which may be interesting in a historical context later.</p>
<p>I am especially interested in preserving commit history. If I rename and move branches around in SVN after the conversion, will the history be preserved?</p>
3,513
<p>I have been considering taking the plunge and installing Linux on my Playstation 3. As C# is my current language of choice, the next logical step would be to install Mono.</p> <p>I have done a little research and found that <a href="http://psubuntu.com/wiki/InstallationInstructions" rel="nofollow noreferrer">http://psubuntu.com/wiki/InstallationInstructions</a> has instruction on installing Ubuntu and links to download an ISO containing a PS3 specific version of Ubuntu. There is also this cool project at <a href="http://code.google.com/p/celldotnet/" rel="nofollow noreferrer">http://code.google.com/p/celldotnet/</a> that has developed some code to utilize the 6 additional SPU cores of the CPU, not just the general-purpose one that you have access to by default.</p> <p>The problem is that the project documentation seems a little thin. Has anyone set up a PS3 to develop .NET code? If so, what problems did you encounter? Does anyone have any code samples of how to even partially load up one of these monster processors?</p> <p><strong>Update:</strong></p> <p>I do realize that basic .NET/Mono programming will come into play here. The part I am fuzzy on is what sort of data structures do you pass to a specialty core? If I am reading this right, <a href="http://en.wikipedia.org/wiki/Cell_processor#Synergistic_Processing_Elements_.28SPE.29" rel="nofollow noreferrer">the 6 SPU cores have 128 registers at 128 bits each</a>. I haven't seen any discussion on how to go about coding effectively for this.</p> <p><strong>Update 2:</strong></p> <p><a href="http://arstechnica.com/hardware/news/2009/11/end-of-the-line-for-ibms-cell.ars" rel="nofollow noreferrer">IBM has announced</a> that further development on the Cell processor has been cancelled. While this pretty much kills any desire I might have to develop on the platform, hopefully someone else might add some useful info.</p>
<p>Just found <a href="http://tirania.org/blog/archive/2008/Feb-26.html" rel="nofollow noreferrer">this posting</a> from Miguel de Icaza's blog. Promising that as recently as Feb 2008 he was looking into this. As he is <a href="https://stackoverflow.com/users/16929/migueldeicaza">a member of the SO community</a> now, I hope he can shed some further light on the topic.</p>
<p>The PS3 features a PPC general purpose CPU.</p> <p>You can try to cross compile mono to ppc and go from there.</p> <p>Mono from svn has received a lot of attention regarding the ppc port, so I would advise using it instead of the 2.0 release.</p>
4,680
<p>What should I use to virtualize my desktop, vmx, xen, or vmware?</p> <p><strong>Needs to work on a linux or windows host, sorry virtual pc.</strong></p> <p>@Derek Park: Free as in speech, not beer. I want to be able to make a new virtual machine from my own licensed copies of windows, for that vmware is kind of expensive.</p>
<p>Try <a href="http://www.virtualbox.org/" rel="noreferrer">VirtualBox</a>. It's free, open source, and it runs on Windows, Linux, Macintosh and OpenSolaris.</p>
<p>Um, <a href="http://info.vmware.com/content/GLP_VMwarePlayer" rel="nofollow noreferrer">VMware is free</a>.</p>
3,307
<p>I'm wanting to parse a string into a nullable int in C#. ie. I want to get back either the int value of the string or null if it can't be parsed.</p> <p>I was kind of hoping that this would work</p> <pre><code>int? val = stringVal as int?;
</code></pre> <p>But that won't work, so the way I'm doing it now is I've written this extension method</p> <pre><code>public static int? ParseNullableInt(this string value)
{
    if (value == null || value.Trim() == string.Empty)
    {
        return null;
    }
    else
    {
        try
        {
            return int.Parse(value);
        }
        catch
        {
            return null;
        }
    }
}
</code></pre> <p>Is there a better way of doing this?</p> <p><strong>EDIT:</strong> Thanks for the TryParse suggestions, I did know about that, but it worked out about the same. I'm more interested in knowing if there is a built-in framework method that will parse directly into a nullable int?</p>
<p><code>int.TryParse</code> is probably a tad easier:</p> <pre><code>public static int? ToNullableInt(this string s)
{
    int i;
    if (int.TryParse(s, out i))
        return i;
    return null;
}
</code></pre> <p><strong>Edit</strong> @Glenn <code>int.TryParse</code> is "built into the framework". It and <code>int.Parse</code> are <em>the</em> way to parse strings to ints.</p>
<p>You should <em>never</em> use an exception if you don't have to - the overhead is horrible.</p> <p>The variations on TryParse solve the problem - if you want to get creative (to make your code look more elegant) you could probably do something with an extension method in 3.5 but the code would be more or less the same.</p>
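The "parse or null" idiom discussed above translates directly to Python, where the idiomatic route is exception handling because there is no built-in TryParse; this sketch makes no claim about the relative overhead debated in the answers:

```python
from typing import Optional

def parse_nullable_int(value: Optional[str]) -> Optional[int]:
    """Return the parsed integer, or None if the input can't be parsed."""
    if value is None or not value.strip():
        return None
    try:
        return int(value.strip())
    except ValueError:
        return None

print(parse_nullable_int("42"))   # 42
print(parse_nullable_int("abc"))  # None
```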
6,682
<p>Can I get a 'when to use' for these and others? </p> <pre><code>&lt;% %&gt; &lt;%# EVAL() %&gt; </code></pre> <p>Thanks</p>
<p>Check out the <a href="http://quickstarts.asp.net/QuickStartv20/aspnet/doc/pages/syntax.aspx#expressions" rel="noreferrer">Web Forms Syntax Reference</a> on MSDN.</p> <p>For basics, </p> <ul> <li><p>&lt;% %> is used for pure code blocks. I generally only use this for if statements</p> <blockquote> <p>&lt;% if (IsLoggedIn) { %><br/> &nbsp; &lt;div class="authenticated"><br/> &lt;% } else { %><br/> &nbsp; &lt;div class="unauthenticated"><br/> &lt;% } %></p> </blockquote></li> <li><p>&lt;%= Expression %> is used to add text into your markup; that is, it equates to</p> <blockquote> <p>&lt;div class='&lt;%= IsLoggedIn ? "authenticated" : "unauthenticated" %>'></p> </blockquote></li> <li><p>&lt;%# Expression %> is very similar to the above, but it is evaluated in a DataBinding scenario. One thing that this means is that you can use these expressions to set values of runat="server" controls, which you can't do with the &lt;%= %> syntax. Typically this is used inside of a template for a databound control, but you can also use it in your page, and then call Page.DataBind() (or Control.DataBind()) to cause that code to evaluate.</p></li> </ul> <p>The others mentioned in the linked article are less common, though certainly have their uses, too.</p>
<p>In ASP.NET 4.0, comes &lt;%: %> syntax for writing something html encoded.</p> <pre><code>&lt;%: "&lt;script&gt;alert('Hello XSS')&lt;/script&gt;" %&gt; </code></pre> <p>The above can be used instead of the belove.</p> <pre><code>&lt;%= Html.Encode("&lt;script&gt;alert('Hello XSS')&lt;/script&gt;")%&gt; </code></pre>
4,787
<p>Jeff mentioned the concept of 'Progressive Enhancement' when talking about using JQuery to write stackoverflow.</p> <p>After a quick Google, I found a couple of high-level discussions about it.</p> <p>Can anyone recommend a good place to start as a programmer.</p> <p>Specifically, I have been writing web apps in PHP and would like to use YUI to improve the pages I am writing, but a lot of them seem very JavaScript based, with most of the donkey work being done using JavaScript. To me, that seems a bit overkill, since viewing the site without Javascript will probably break most of it.</p> <p>Anyone have some good places to start using this idea, I don't really care about the language.</p> <p>Ideally, I would like to see how you start creating the static HTML first, and then adding the YUI (or whatever Ajax framework) to it so that you get the benefits of a richer client?</p>
<p>As you've said</p> <blockquote> <p>To me, that seems a bit overkill, since viewing the site without Javascript will probably break most of it.</p> </blockquote> <p>This isn't progressive enhancement. Progressive enhancement is when the site works perfectly without JavaScript or CSS, and <em>then</em> adding (layering) these extra technologies/code to increase the usability and functionality of the website.</p> <p>The best example I can give is the tag input box on this website. With JavaScript turned off, it would still work allowing you to enter tags separated with a space. With JavaScript turned on, you get a drop down with suggestions of previous entries.</p> <p>This is progressive enhancement.</p>
<p>Going at it from the other direction is sometimes referred to as graceful degradation. This is usually needed when the site is built first with the enhanced functionality afforded by the various technologies then modified to degrade gracefully for browsers with those technologies are not available.</p> <p>It is also graceful degradation when designing to work with older browsers (ancient in the Internets terminology) such as IE 5.5, Netscape, etc...</p> <p>In my opinion it is much more work to gracefully degrade the application. Progressively enhancing it tends to be much more efficient; however, sometimes the need to take an existing app and make it accessible in these lacking environments arise.</p>
2,459
<p>I am in charge of about 100+ documents (Word documents, not source code) that need revision by different people in my department. Currently all the documents are in a shared folder where they will retrieve, revise and save them back into the folder. </p> <p>What I am doing now is looking up the "date modified" in the shared folder, opening up recently modified documents and using the "Track Changes" function in MS Word to apply the changes. I find this a bit tedious.</p> <p>So will it be better and easier if I commit these to a version control system?</p> <p>Basically I want to keep different versions of a file.</p> <p><hr> What have I learned from the answers:</p> <ul> <li><p>Use Time Machine to save different versions (or Shadow Copy in Vista)</p></li> <li><p>There is a difference between text and binary documents when you use a version control app. (I didn't know that)</p></li> <li><p>Diff won't work on binary files</p></li> <li><p>A notification system (i.e. email) for revision is great</p></li> <li><p>Google Docs revision feature.</p></li> </ul> <p><strong>Update</strong> : </p> <p>I played around with the Google Docs revision feature and feel that it is almost right for me. I'm just a bit annoyed with the too frequent versioning (autosaving). </p> <p>But what feels right for me doesn't mean it feels right for my dept. Will they be okay with saving all these documents with Google? </p>
<p>I've worked with Word documents in SVN. With <a href="http://tortoisesvn.tigris.org/" rel="noreferrer">TortoiseSVN</a>, you can easily diff Word documents (between working copy and repository, or between two repository revisions). It's really slick and definitely recommended.</p> <p>The other thing to do if you're using Word documents in SVN is to add the <a href="http://svnbook.red-bean.com/en/1.2/svn.advanced.locking.html" rel="noreferrer">svn:needs-lock</a> property to the Word documents. This will prevent two people from trying to edit the same document at the same time, since unfortunately there's no good way to merge Word documents.</p> <p>With the above two things, handling revision controlled Word documents is at least tolerable. It certainly beats the alternative of using a shared folder and track-changes.</p>
<p>You could do that, but if those files are binary you should always put a lock on them before editing. That way you won't get a conflict (which would be unresolvable).</p>
4,960
<p>I have a set of tree objects with a depth somewhere in the 20s. Each of the nodes in this tree needs access to its tree's root.</p> <p>A couple of solutions:</p> <ol> <li>Each node can store a reference to the root directly (wastes memory)</li> <li>I can compute the root at runtime by "going up" (wastes cycles)</li> <li><s>I can use static fields (but this amounts to globals)</s></li> </ol> <p>Can someone provide a design that doesn't use a global (in any variation) but is more efficient than #1 or #2 in memory or cycles respectively?</p> <p><strong>Edit:</strong> Since I have a Set of Trees, I can't simply store it in a static since it'd be hard to differentiate between trees. (thanks maccullt)</p>
<p>Pass the root as a parameter to whichever functions in the node need it.</p> <p>Edit: The options are really the following:</p> <ol> <li>Store the root reference in the node</li> <li>Don't store the root reference at all</li> <li>Store the root reference in a global</li> <li>Store the root reference on the stack (my suggestion, either visitor pattern or recursive)</li> </ol> <p>I think that covers all the possibilities; there is no option 5.</p>
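A minimal sketch of option 4 (in Java for the sake of a runnable example; the question is C#, and the class and method names here are my own invention): the root reference lives only on the call stack, so no node stores it and no upward walk is needed.

```java
import java.util.ArrayList;
import java.util.List;

class Node {
    final String name;
    final List<Node> children = new ArrayList<>();

    Node(String name) { this.name = name; }

    Node addChild(String childName) {
        Node child = new Node(childName);
        children.add(child);
        return child;
    }

    // Whatever per-node work needs the root receives it as an argument,
    // so it costs neither a per-node field nor a walk up the tree.
    void visit(Node root, StringBuilder out) {
        out.append(root.name).append('/').append(name).append(' ');
        for (Node child : children) {
            child.visit(root, out);
        }
    }
}

public class RootOnStack {
    public static void main(String[] args) {
        Node root = new Node("root");
        root.addChild("a").addChild("b");
        StringBuilder sb = new StringBuilder();
        root.visit(root, sb);   // the only place that must know the root
        System.out.println(sb.toString().trim());
    }
}
```

The recursion threads the root down the call chain, which is exactly what the visitor-pattern variant does with a visitor object instead of a plain parameter.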
<p>You could derive a class from TreeView and then add a singleton static property. That way you are effectively adding a global field that references the single instance of the class but have the benefit of it being namespace scoped to that class.</p>
<p>I'm writing a few little bash scripts under Ubuntu linux. I want to be able to run them from the GUI without needing a terminal window to enter any input or view any output.</p> <p>So far the only input required is a password for sudo - and gksudo handles that fine. But I haven't found an easy way to show a message box yet. Is there some kind of 'gkmessage' command available? I'd prefer something present in a default Ubuntu install, but I don't mind installing a new package if necessary.</p>
<p>I believe <a href="http://live.gnome.org/Zenity" rel="noreferrer">Zenity</a> will do what you want. It's specifically designed for displaying GTK dialogs from the command line, and it's available as an <a href="http://packages.ubuntu.com/hardy/zenity" rel="noreferrer">Ubuntu package</a>.</p>
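For example (the message text and path below are invented), a script can pop up a simple dialog with:

```shell
#!/bin/bash
# Info box with an OK button; blocks until dismissed:
zenity --info --text="Backup finished"

# Yes/no question; exit status 0 means the user clicked Yes:
if zenity --question --text="Delete temporary files?"; then
    rm -rf /tmp/myscript-work
fi
```

There are also `--error`, `--warning`, `--entry` (text input), and `--progress` variants, so most simple GUI interactions can be handled without a terminal.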
<p>Kdialog and dialog are both good, but I'd recommend Zenity. Quick, easy, and much better looking than xmessage or dialog. </p>
<p>Using reflection, how can I get all types that implement an interface with C# 3.0/.NET 3.5 with the least code, and minimizing iterations?</p> <p>This is what I want to re-write:</p> <pre><code>foreach (Type t in this.GetType().Assembly.GetTypes()) if (t is IMyInterface) ; //do stuff </code></pre>
<p>Mine would be this in c# 3.0 :)</p> <pre><code>var type = typeof(IMyInterface); var types = AppDomain.CurrentDomain.GetAssemblies() .SelectMany(s =&gt; s.GetTypes()) .Where(p =&gt; type.IsAssignableFrom(p)); </code></pre> <p>Basically, the least amount of iterations will always be:</p> <pre><code>loop assemblies loop types see if implemented. </code></pre>
<p>You could use some LINQ to get the list:</p> <pre><code>var types = from type in this.GetType().Assembly.GetTypes()
            where typeof(ISomeInterface).IsAssignableFrom(type)
            select type;
</code></pre> <p>(Note that a plain <code>type is ISomeInterface</code> test wouldn't work here, since <code>type</code> is a <code>Type</code> object, not an instance.) But really, is that more readable?</p>
<p>Is there a C library function that will return the index of a character in a string?</p> <p>So far, all I've found are functions like strstr that will return the found char *, not its location in the original string.</p>
<p><code>strstr</code> returns a pointer to the found substring, so you can use pointer arithmetic: (Note: this is a sketch rather than a complete program.)</p> <pre><code>const char *source = "test string";        /* assume source address is  */
                                           /* 0x10, for example         */
const char *found = strstr(source, "in");  /* should return 0x18        */

if (found != NULL)  /* strstr returns NULL if the item is not found */
{
    int index = found - source;  /* index is 8           */
    /* source[8] gets you 'i' */
}
</code></pre>
<p>string.h is part of standard C, and it provides strchr() for exactly this: it returns a pointer to the first occurrence of a character, from which you can compute the index. <a href="http://www.cplusplus.com/reference/clibrary/cstring/strchr.html" rel="nofollow noreferrer">See here</a></p>
<p>So, I know that try/catch does add some overhead and therefore isn't a good way of controlling process flow, but where does this overhead come from and what is its actual impact?</p>
<p>I'm not an expert in language implementations (so take this with a grain of salt), but I think one of the biggest costs is unwinding the stack and storing it for the stack trace. I suspect this happens only when the exception is thrown (but I don't know), and if so, this would be a decently sized hidden cost every time an exception is thrown... so it's not like you are just jumping from one place in the code to another; there is a lot going on.</p> <p>I don't think it's a problem as long as you are using exceptions for EXCEPTIONAL behavior (not your typical, expected path through the program).</p>
<p>Let us analyse one of the biggest possible costs of a try/catch block when used where it shouldn't need to be used:</p> <pre><code>int x;
try
{
    x = int.Parse("1234");
}
catch
{
    return;
}
// some more code here...
</code></pre> <p>And here's the one without try/catch:</p> <pre><code>int x;
if (int.TryParse("1234", out x) == false)
{
    return;
}
// some more code here
</code></pre> <p>Not counting the insignificant white-space, one might notice that these two equivalent pieces of code are almost exactly the same length in bytes. The latter contains 4 bytes less indentation. Is that a bad thing?</p> <p>To add insult to injury, a student decides to loop while the input can be parsed as an int. The solution without try/catch might be something like:</p> <pre><code>while (int.TryParse(...))
{
    ...
}
</code></pre> <p>But how does this look when using try/catch?</p> <pre><code>try
{
    for (;;)
    {
        x = int.Parse(...);
        ...
    }
}
catch
{
    ...
}
</code></pre> <p>Try/catch blocks are magical ways of wasting indentation, and we still don't even know the reason it failed! Imagine how the person doing the debugging feels when code continues to execute past a serious logical flaw, rather than halting with a nice obvious exception error. Try/catch blocks are a lazy man's data validation/sanitation.</p> <p>One of the smaller costs is that try/catch blocks do indeed disable certain optimizations: <a href="http://msmvps.com/blogs/peterritchie/archive/2007/06/22/performance-implications-of-try-catch-finally.aspx" rel="nofollow noreferrer">http://msmvps.com/blogs/peterritchie/archive/2007/06/22/performance-implications-of-try-catch-finally.aspx</a>. I guess that's a positive point too. It can be used to disable optimizations that might otherwise cripple safe, sane message passing algorithms for multithreaded applications, and to catch possible race conditions ;) That's about the only scenario I can think of to use try/catch. Even that has alternatives.</p>
<p>I have played with the idea of using a wiki (MediaWiki) to centralize all project information for a development project. This was done using extensions that pull information from SVN (using <a href="http://svnkit.com/" rel="nofollow noreferrer">SVNKit</a>) and by linking to Bugzilla to extract work assigned to a developer or work remaining for a release.</p> <p>Examples:</p> <pre><code>&lt;bugzilla type="summary" user="richard.tasker@gmail.com" /&gt; </code></pre> <p>would return a summary</p> <p><img src="https://i.stack.imgur.com/rfJjy.png" alt="Bugzilla Summary"></p> <pre><code>&lt;bugzilla type="status" status="ASSIGNED" product="SCM BEPPI" /&gt; </code></pre> <p>would return</p> <p><img src="https://i.stack.imgur.com/YSV0t.png" alt="Bugzilla Status"></p> <p>Do you think that this would be useful? If so then what other integrations would you think would be valuable?</p>
<p>I think this would be extremely useful. Depending on the size of a project, team members come and go. And a wiki is a good tool to keep the history and the "spirit" of a project available to new team members. I did that in many projects, and though the projects are already finished, all the information is still available.</p> <p>One more idea: also try to integrate meeting schedules, minutes, etc. If your team communicates via IM, try to integrate a log of the conversations.</p>
<p>The other classic integration would be your source code repository, e.g. svn, or cvs. <a href="http://trac.edgewall.org/" rel="nofollow noreferrer">trac</a> is an existing product that does exactly this - it combines a wiki, custom bug tracker, and integrates nicely with svn.</p>
<p>How do you actually perform datetime operations such as adding dates, finding differences, or counting the days (excluding weekends) in an interval? I personally have started to pass some of these operations to my PostgreSQL DBMS, as typically I only need to issue one SQL statement to obtain an answer. However, to do it the PHP way I would have to write a lot more code, which means more chances for errors to occur...</p> <p>Are there any libraries in PHP that do datetime operations without requiring a lot of code, and that beat SQL in a situation like "Given two dates, how many workdays are there between them?", which SQL solves with this query?</p> <pre class="lang-sql prettyprint-override"><code>SELECT COUNT(*) AS total_days
FROM (SELECT date '2008-8-26' + generate_series(0, (date '2008-9-1' - date '2008-8-26')) AS all_days) AS calendar
WHERE EXTRACT(isodow FROM all_days) &lt; 6;
</code></pre>
<p>While for most datetime operations I would normally convert to Unix time and perform addition, subtraction, etc. on the Unix time integer, you may want to look at the Zend Framework Zend_Date class. </p> <p>This has a lot of the functionality you describe. Although Zend is billed as a "framework", it works exceptionally well as a class library to pick and choose elements from. We routinely include it in projects and then just pull in bits as and when we need them. </p>
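For comparison, the workday count from the question's SQL can be expressed in a few lines of general-purpose code. This is a minimal sketch in Python (the PHP date functions can do the equivalent), mirroring the generate_series/isodow logic:

```python
from datetime import date, timedelta

def workdays(start: date, end: date) -> int:
    """Count days in [start, end] inclusive whose ISO weekday is Mon-Fri,
    just like EXTRACT(isodow FROM d) < 6 in the PostgreSQL query."""
    span = (end - start).days + 1
    return sum(
        1
        for i in range(span)
        if (start + timedelta(days=i)).isoweekday() < 6
    )

# Same interval as the SQL example: 2008-08-26 (Tue) .. 2008-09-01 (Mon)
print(workdays(date(2008, 8, 26), date(2008, 9, 1)))  # prints 5
```

Neither version handles public holidays; for that you would still need a holiday table, in SQL or in code.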
<p>To get working days/holidays, a PostgreSQL CTE FTW -- see <a href="http://osssmb.wordpress.com/2009/12/02/business-days-working-days-sql-for-postgres-2/" rel="nofollow">http://osssmb.wordpress.com/2009/12/02/business-days-working-days-sql-for-postgres-2/</a></p>
<p>I have a script that has a part that looks like that:</p> <pre><code>for file in `ls *.tar.gz`; do echo encrypting $file gpg --passphrase-file /home/$USER/.gnupg/backup-passphrase \ --simple-sk-checksum -c $file done </code></pre> <p>For some reason if I run this script manually, works perfectly fine and all files are encrypted. If I run this as cron job, <code>echo $file</code> works fine (I see "encrypting &lt;file&gt;" in the log), but the file doesn't get encrypted and gpg silent fails with no stdout/stderr output.</p> <p>Any clues?</p>
<p>It turns out that the answer was easier than I expected. There is a <code>--batch</code> parameter missing; gpg tries to read from /dev/tty, which doesn't exist for cron jobs. To debug this I used the <code>--exit-on-status-write-error</code> parameter. I was pointed in that direction by the exit status of 2, revealed by echoing <code>$?</code> as Cd-Man suggested. </p>
<p>Make sure the user running the cron job has the permissions needed to encrypt the files. </p>
<p>I know the answer is not going to be simple, and I already use a couple of (I think ugly) kludges. I am simply looking for some elegant answers.</p> <p>Abstract class:</p> <pre><code>public interface IOtherObjects { }

public abstract class MyObjects&lt;T&gt; where T : IOtherObjects
{
    ...

    public List&lt;T&gt; ToList()
    {
        ...
    }
}
</code></pre> <p>Children:</p> <pre><code>public class MyObjectsA : MyObjects&lt;OtherObjectA&gt; // (where OtherObjectA implements IOtherObjects)
{
}

public class MyObjectsB : MyObjects&lt;OtherObjectB&gt; // (where OtherObjectB implements IOtherObjects)
{
}
</code></pre> <p>Is it possible, looping through a collection of MyObjects (or another similar grouping, generic or otherwise), to then utilise the <em>ToList</em> method of the <em>MyObjects</em> base class, as we do not specifically know the type of T at that point? </p> <p><strong>EDIT</strong> As for specific examples, whenever this has come up, I've thought about it for a while, and done something different instead, so there is no current requirement. But as it has come up quite frequently, I thought I would float it.</p> <p><strong>EDIT</strong> @Sara, it's not the specific type of the collection I care about (it could be a List), but still the ToList method of each instance is relatively unusable without an anonymous type</p> <p>@aku, true, and this question may be relatively hypothetical; however, being able to retrieve and work with a List of T, knowing only the base type, would be very useful. Having ToList return a List of the base type has been one of my workarounds</p> <p><strong>EDIT</strong> @ all: So far, this has been the sort of discussion I was hoping for, though it largely confirms all I suspected. Thanks all so far, but anyone else, feel free to input.</p> <p><strong>EDIT</strong> @Rob, Yes it works for a defined type, but not when the type is only known as a List of IOtherObjects. </p> <p>@Rob <strong>Again</strong> Thanks. 
That has usually been my kludgy workaround (no disrespect :) ). Either that or using the ConvertAll function to downcast through a delegate. Thanks for taking the time to understand the problem.</p> <p><strong>QUALIFYING EDIT</strong> in case I have been a little confusing</p> <p>To be more precise (I may have let my latest implementation of this get too complex):</p> <p>Let's say I have 2 object types, B and C, inheriting from object A.</p> <p>Many scenarios have presented themselves where, from a List of B, a List of C, or in other cases a List of either (but I don't know which if I am at a base class), I have needed a less specific List of A. </p> <p>The above example was a watered-down example of the <em>List Of Less Specific</em> problem's latest incarnation.</p> <p>Usually it has presented itself as I think through possible scenarios that limit the amount of code that needs writing and seem a little more elegant than other options. I really wanted a discussion of possibilities and other points of view, which I have more or less got. I am surprised no one has mentioned ConvertAll() so far, as that is another workaround I have used, but a little too verbose for the scenarios at hand</p> <p>@Rob <strong>Yet Again</strong> and Sara</p> <p>Thanks; however, I do feel I understand generics in all their static contexted glory, and did understand the issues at play here.</p> <p>The actual design of our system and its usage of generics (and I can say this with only a touch of bias, as I was only one of the players in the design) has been done well. 
It is when I have been working with the core API that I have found situations where I have been in the wrong scope for doing something simply; instead I had to deal with them a little less elegantly than I would like (trying either to be clever or perhaps lazy - I'll accept either of those labels).</p> <p>My distaste for what I termed a kludge is largely that we are required to loop through our record set simply to convert the objects to their base type, which may be a performance hit.</p> <p>I guess I was wondering if anyone else had come across this in their coding before, and if anyone had been cleverer, or at least more elegant, than me in dealing with it.</p>
<p>In your case MyObjectsA and MyObjectsB don't have a common predecessor. A generic class is a template for <em>different</em> classes, not a common base class. If you want to have common properties in different classes, use interfaces. You can't call <em>ToList</em> in a loop because it has a different signature in different classes. You could create a ToList that returns <em>objects</em> rather than a specific type.</p>
<p>Generics are used for static time type checks <strong>not</strong> runtime dispatch. Use inheritance/interfaces for runtime dispatch, use generics for compile-time type guarantees.</p> <pre><code>interface IMyObjects : IEnumerable&lt;IOtherObjects&gt; {} abstract class MyObjects&lt;T&gt; : IMyObjects where T : IOtherObjects {} IEnumerable&lt;IMyObjects&gt; objs = ...; foreach (IMyObjects mo in objs) { foreach (IOtherObjects oo in mo) { Console.WriteLine(oo); } } </code></pre> <p>(Obviously, I prefer Enumerables over Lists.)</p> <p><strong>OR</strong> Just use a proper dynamic language like VB. :-)</p>
<p>I have an executable (compiled by someone else) that is hitting an assertion near my code. I work on the code in Visual C++ 2003, but I don't have a project file for this particular executable (the code is used to build many different tools). Is it possible to launch the binary in Visual C++'s debugger and just tell it where the sources are? I've done this before in GDB, so I know it ought to be possible.</p>
<p>Without the PDB symbols for that application you're going to have a tough time making heads or tails of what is going on and where. I think any source code information is going to be only in the PDB file that was created when the application was built.</p> <p>This is assuming that a PDB file was EVER created for this application - which is not the default configuration for release-mode VC++ projects, I think. Since you're hitting an assertion, I'm guessing this is a debug configuration?</p>
<p>Short of any other answers, I would try attaching to the executable process in Visual Studio, setting a break point in your code and when you step into the process you don't have source to, it should ask for a source file.</p>
<p>I develop exclusively on VMs. I currently run Boot Camp on a MacBook Pro and do all my development on a series of Virtual PC VMs for many different environments. This <a href="http://www.andrewconnell.com/blog/articles/UseVirtualPCsDifferencingDisksToYourAdvantage.aspx" rel="nofollow noreferrer">post by Andrew Connell</a> literally changed the way I work.</p> <p>I'm thinking about switching to Fusion and running everything in OS X, but I wasn't able to answer the following questions about VMware Fusion/Workstation/Server. <strong>I need to know if the following features from Virtual PC/Server exist in their VMware counterparts.</strong></p> <ol> <li>Differencing disks (ability to create a base VM and provision new VMs which just add deltas on top of the base [saves a ton of disk space, and makes it easy to spin up new VMs with a base set of functionality]). <em>(Not available with Fusion, need Workstation [$189])</em></li> <li>Undo disks (ability to roll back all changes to the VM within a session). <em>(Available in both Workstation and Fusion [$189/$79.99 respectively])</em></li> <li>Easily NAT out a different subnet for the VM to sit in. <em>(In both Fusion/Workstation).</em></li> <li>Share VMs between VM Player and VM Server. I'd like to build up a VM locally (on OS X/Fusion) and then move it to some server (Win2k3/Win2k8 and VM Server) and host it there with VM Server. <em>(In both Fusion/Workstation).</em></li> <li>An equivalent to Hyper-V. <em>(Both Fusion and Workstation take advantage of a type-2 hypervisor for 64-bit VMs; neither does for 32-bit VMs. VMware claims they're no slower as a result, and some <a href="http://www.thehypervisor.com/?p=57" rel="nofollow noreferrer">benchmarks corroborate this assertion</a>).</em></li> <li>Ability to share disks between multiple VMs. If I have a bunch of databases on a virtual disk and want them to appear on more than one VM, I should be able to just attach them. 
<em>(Available in both Fusion and Workstation)</em></li> <li>(Nice to have) Support for multiple processors assigned to a VM <em>(Available in both Fusion and Workstation).</em></li> </ol> <p><strong>Is there a VMware guru out there who knows for sure that the above features are available on the other side?</strong> </p> <p><strong>Also, the above has been free (as long as you have licenses for Windows machines); besides buying Fusion, are there any other costs?</strong></p> <p><strong>The end result of my research, thanks so much!</strong> <em>You can only create linked clones and full clones (which are close to differencing disks) in VMware Workstation (not Fusion). Workstation also has better snapshot management, in addition to other features which are difficult to enumerate. That being said, Workstation is $189 (as opposed to $79) and not available on OS X. In addition, Fusion 1.1 (the current release) has a bunch of display bugs on OS X 10.5 (it works well on 10.4). These will be remedied in Fusion 2.0, which is currently in RC1. I'll probably wait until v2.0 comes out and then use both Workstation/Fusion to provision and use these VMs on OS X.</em></p>
<p>There's actually a way to turn that "feature" off. This will allow the user to post whichever characters they want, and there will be no need to convert characters to an alias using Javascript. See this article for <a href="http://mdid.org/mdidwiki/index.php?title=Disabling_Request_Validation" rel="nofollow noreferrer">disabling request validation</a>. It means that you'll have to do your own validation, but from the sounds of your post, it seems that is what you are looking to do anyway. You can also disable it per page by following <a href="http://www.asp.net/learn/whitepapers/request-validation/" rel="nofollow noreferrer">the instructions here</a>.</p>
<p>My first comment would be to avoid using JavaScript to change the angle brackets. Bypassing this is as simple as disabling JavaScript in the browser. Almost all server-side languages have some utility method that converts some HTML characters into their entity counterparts. For instance, PHP uses htmlentities(), and I am sure .NET has an equivalent utility method. In the least, you can do a regex replace for angle brackets, parenthesis and double quotes, and that will get you a long way toward a secure solution.</p>
<p>I would like to read the text and binary attachments in a saved Outlook message (.msg file) from a Java application, without resorting to native code (JNI, Java Native Interface). </p> <p><a href="http://poi.apache.org/hsmf/index.html" rel="noreferrer">Apache POI-HSMF</a> seems to be in the right direction, but it's in very early stages of development... </p>
<ul> <li><p>You could use Apache POIFS, which seems to be a little more mature, but that would appear to duplicate the efforts of POI-HSMF.</p></li> <li><p>You could use POI-HSMF and contribute changes to get the features you need working. That's often how FOSS projects like that expand.</p></li> <li><p>You could use com4j, j-Interop, or some other COM-level interop feature and interact directly with the COM interfaces that provide access to the structured document. That would be much easier than trying to hit it directly through JNI.</p></li> </ul>
<p>Have you tried to use Jython with the Python win32 extensions (<a href="http://www.jython.org/Project/" rel="nofollow noreferrer">http://www.jython.org/Project/</a> + <a href="http://python.net/crew/mhammond/win32/" rel="nofollow noreferrer">http://python.net/crew/mhammond/win32/</a>)? If this is for a "personal" or "internal" project Jython with Python may be a very good choice. If you are building a "shrink wrapped" software package this may not be the best option.</p>
<p>In some VB6 code, I have a handler for a TreeView's Collapse event:</p> <pre><code>Private Sub MyTree_Collapse(ByVal Node as MSComCtlLib.Node) </code></pre> <p>This is called whenever a node in the tree is collapsed, whether by the user or programmatically. As it turns out, through some roundabout execution, it may happen that this handler will wind up telling a node to collapse, leading to infinite recursion.</p> <p>I can think of multiple ways to skin this cat, but what seems simplest to me is to tell the TreeView not to raise events for some period of time. I can't find a simple call to let me do this, though. Has anyone successfully done this, or do I need to keep track of state in some other manner so I can respond appropriately when recursive events come along?</p>
<p>I think that like many events in VB, it can't be switched off.</p> <p>Just set a boolean flag as you've suggested.</p>
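A sketch of that flag approach (the variable name is invented): guard the handler with a module-level Boolean so it ignores the Collapse events that its own code triggers.

```vb
' Module-level reentrancy guard:
Private mSuppressEvents As Boolean

Private Sub MyTree_Collapse(ByVal Node As MSComCtlLib.Node)
    If mSuppressEvents Then Exit Sub

    mSuppressEvents = True
    ' ... code that may collapse other nodes, which would otherwise
    '     re-enter this handler and recurse infinitely ...
    mSuppressEvents = False
End Sub
```

If the handler can raise errors, reset the flag in an error handler as well, so a failure doesn't leave events suppressed forever.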
<p>I would declare the flag variable as STATIC in the Sub. This avoids making the variable global and makes it keep its value between calls.</p>
<p>I was noticing on a print I had just done that the quality was not up to snuff. I had just started using a roll of PLA filament that I had been keeping on a shelf without a wrapper for a couple of months. How long can you store filament before it gets too hydrated from the air to print? I expected more than a couple of months, but perhaps I am wrong?</p>
<p>In <strong>theory</strong>, most filaments don't go bad within a year. However, practice shows that adverse conditions can impact filaments over time and age them to unusability.</p> <p>Among the damaging factors is heat, but most filaments are also hygroscopic and absorb water, some to a small degree, others, like Nylon, heavily.</p> <p>As a result, it is always a good idea to at least try to store filament dry. To enforce this, some use racks in a well-heated room; others are blessed with very dry weather overall. And in some locations, like the coast, you might even be forced to use dryboxes for each and every filament to slow the degradation from steady hot, humid air leaching the additives out of the filament.</p> <p>Dryboxes can keep the filament reasonably isolated from the surrounding air and so prevent moisture from interacting with it to some degree. It is also a good idea to store filament out of direct sunlight, as UV light might destroy the color and/or the plastic. More information on why to use them is available, for example, at <a href="https://3dprinting.stackexchange.com/questions/6982/which-filaments-actually-do-need-to-be-stored-in-a-drybox?">this question</a></p> <p>A couple of construction videos using an IKEA box and a bit of foam were offered by <a href="https://www.youtube.com/watch?v=OY5n9q-wS7k" rel="nofollow noreferrer">Tom (Thomas Sanladerer)</a> and <a href="https://www.youtube.com/watch?v=WEFtUKGAd7k" rel="nofollow noreferrer">CNC Kitchen (Stefan Hermann)</a> in 2017.</p> <p>But fear not: most filaments - PLA included - can be freshened up again if the damage is not too prolonged! Baking them at a low temperature or storing them in a dehumidifier has worked in some climates. For PLA, keep the temperature below 80 °C. A couple of hours should get some of the moisture that has seeped in out again. 
The quality might not get back to that of fresh filament in all cases, but you might at least regain reasonable to good printability.</p> <p><a href="https://3dsolved.com/filaments-and-moisture-absorption/" rel="nofollow noreferrer">Also note that different filaments are affected differently.</a> ABS, for example, is a little less hygroscopic than PLA, while HIPS is one of the least hygroscopic filaments available.</p>
<p>To answer your question: it depends on the relative humidity. Generally, a few weeks in a semi-arid environment. But it'll still print really well. If you're looking for really tight filament diameter tolerances, a week or even less could change the diameter by .001 or more if there is moisture in the air. I've used PLA that has been exposed to air for a few months and it's been fine, with only small issues.</p> <p>If the filament has absorbed too much moisture, you'll typically hear popping coming from the hot extruder as the water is burned off at the nozzle. Sometimes steam comes out of the nozzle as well when printing with "wet" filament. Usually, you can still get great results even if it has absorbed moisture. If the filament diameter has been affected significantly (if it jams, or the water keeps the plastic too cool when it's coming off the nozzle), you can dry it out in an oven for a few hours like @trish suggested. I keep my filaments in a plastic bag with desiccant inside a big storage bin. Probably overkill, but my water softener is in the same room and introduces moisture to the air, and my A/C blows right at the setup.</p> <p>Dust is probably worse because it can accumulate much faster and cause jams.</p> <p>Source and further reading:</p> <p><a href="https://www.fusion3design.com/the-importance-of-properly-storing-your-3d-printing-filament/" rel="nofollow noreferrer">https://www.fusion3design.com/the-importance-of-properly-storing-your-3d-printing-filament/</a></p>
<p>This would be a question for anyone who has code in the App_Code folder and uses a hardware load balancer. It's true the hardware load balancer could be set to sticky sessions to solve the issue, but in a perfect world, I would like that feature turned off.</p> <p>When a file is in the App_Code folder and the site is not pre-compiled, IIS will generate random file names for these files.</p> <pre><code>server1 "/ajax/SomeControl, App_Code.tjazq3hb.ashx" server2 "/ajax/SomeControl, App_Code.wzp3akyu.ashx" </code></pre> <p>So when a user posts the page and gets transferred to the other server, nothing works.</p> <p>Does anyone have a solution for this? I could change to a pre-compiled web site, but we would lose the ability for our QA department to just promote the changed files.</p>
<p>Do you have the &lt;machineKey&gt; node on both servers set to the same value?</p> <p>You can override the machine.config setting in web.config to set this. The keys need to match; otherwise you can get strange situations like this.</p>
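For reference, the override lives in web.config's <code>&lt;system.web&gt;</code> section. The key values below are placeholders; generate your own and copy the identical values to every server in the farm:

```xml
<system.web>
  <!-- Both servers must share identical keys; do NOT use AutoGenerate
       in a web farm. Replace the placeholders with your own hex strings. -->
  <machineKey
      validationKey="[128 hex characters, identical on every server]"
      decryptionKey="[48 hex characters, identical on every server]"
      validation="SHA1" />
</system.web>
```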
<p>If it's a hardware load balancer, you shouldn't have an issue, because all that is known there is the request URL, for which the server compiles the requested page and serves it.</p> <p>The only issue I can think of that you might have is with session and view state.</p>