<p>The LCD resin printers I've looked at have fairly standard resolutions, similar to a smartphone's, and I understand they use the same technology. However, color LCD screens have three RGB sub-pixels for each color pixel. See for example this magnified picture of an S-IPS LCD screen: <a href="https://i.stack.imgur.com/bml7t.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bml7t.jpg" alt="magnified view of an S-IPS LCD screen"></a> It seems like they could just omit the color filter and have three grayscale pixels for each color pixel.</p> <p>3D printing uses just one color - UV. So why don't resin printers have three times the usual resolution?</p> <p>All the results about sub-pixels that I could find are about anti-aliasing, which is different (using the existing pixels better vs. having more pixels).</p>
<p>If what Thomas Sanladerer states is correct, the Mars 2 Pro (monochrome LCD, no color filter) has a layer time approximately 1/3 as long as the Mars/Pro printers (which use color LCDs):</p> <p><div class="youtube-embed"><div> <iframe width="640px" height="395px" src="https://www.youtube.com/embed/0yZ4KiK_pl0?start=0"></iframe> </div></div></p> <p>This would indicate that, with the color filter present, only about 1/3 of the light passes through the LCD. Put in other words, only ONE sub-pixel per pixel is capable of transmitting UV light.</p> <p>Consequently, using all of the sub-pixels would not improve resolution, because the other two are always opaque to UV.</p>
<p>It should be as simple as using a monochrome LCD. You don't actually want any color filters to interfere with the UV light.</p> <p>Do we know that LCD printers are not using monochrome LCD panels? It always seemed so obvious that I assumed it was the practice. All you need are the front and back polarizer layers and the LCD itself to rotate the light.</p>
1,420
<p>I'm new to 3D printing. I've made some projects before just fine, but this project is giving me problems.</p> <p>I 3D printed a trumpet mouthpiece, and the printer made supports inside the mouthpiece funnel. I can easily remove the exterior supports, but I don't know how to get to the supports inside. Does anyone know how I would do that?</p> <p><a href="https://i.stack.imgur.com/U6ys5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U6ys5.jpg" alt="Here&#39;s one image."></a> <a href="https://i.stack.imgur.com/DJ9KP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DJ9KP.jpg" alt="Here&#39;s another."></a></p> <p><a href="https://www.thingiverse.com/thing:1539908" rel="nofollow noreferrer">Here is the link to the model</a> <a href="https://i.stack.imgur.com/8R11d.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8R11d.jpg" alt="enter image description here"></a></p>
<p>That model didn’t need supports. Most of the time you can break the connections between the supports and the model by inserting a knife or flat object into the cavity. In this case you could use a probe to loosen the joins and then wedge them out. If that doesn’t work, you can use needle-nose pliers to remove the pieces. </p> <p>Your best and cleanest option would be to reprint the mouthpiece without supports at all. Print it so that the wide section (the section touching your mouth) is on the buildplate.</p>
<p>&quot;Don't use supports&quot; &lt;-- that is the best answer.</p> <h2>Warning</h2> <p>Sadly, if you print vertically, you end up with an extremely fragile piece - it'll snap between layers. The OP properly printed horizontally for strength, and by using a very &quot;thin&quot; layer height he can possibly get by without supports. That's the only way to succeed.</p> <p>I guess I should state that this answer assumes you want to use the print in a trumpet. If it's purely for decoration, then definitely print vertically.</p>
1,522
<p>I asked a couple of coworkers about <a href="http://ankhsvn.open.collab.net" rel="nofollow noreferrer">AnkhSVN</a> and neither one of them was happy with it. One of them went as far as saying that AnkhSVN has messed up his devenv several times.</p> <p>What's your experience with AnkhSVN? I really miss having an IDE integrated source control tool.</p>
<p>Older AnkhSVN (pre 2.0) was very crappy and I was only using it for the shiny icons in the solution explorer. I relied on Tortoise for everything except reverts.</p> <p>The newer Ankh is a complete rewrite (it now uses the Source Control API of the IDE) and looks &amp; works much better. Still, I haven't forced it to do any heavy lifting. The icons are enough for me.</p> <p>The only gripe I have with 2.0 is the fact that it slaps its footprint onto <strong>.sln</strong> files. I always revert them lest they cause problems for co-workers who do not have Ankh installed. I don't know whether my fears are groundless or not.</p> <hr /> <p>Addendum:</p> <p>I have been using v2.1.7141 a bit more extensively for the last few weeks, and here are the new things I have to add:</p> <ul> <li>No ugly crashes like the ones that plagued v1.x. Yay!</li> <li>For some reason, &quot;Show Changes&quot; (diff) windows are limited to only two. Meh.</li> <li>Diff windows do not allow editing/reverting yet. Boo!</li> <li>Updates, commits and browsing are MUCH faster than Tortoise. Yay!</li> </ul> <p>All in all, I would not use it standalone, but once you start using it, it becomes an almost indispensable companion to Tortoise.</p>
<p>I tried AnkhSVN (1.0.3, just 4 months ago), and it did not work the way I wanted it to (i.e., I needed to select things in the browser window instead of acting on the active file). I ended up making some macros that utilize TortoiseSVN and work much more like what I expected.</p> <p>I've been very happy using TortoiseSVN via Explorer and my macros inside the IDE.</p>
3,864
<p>I've heard I should store filament in a sealed container, preferably with a desiccant. </p> <p>But let's say I let a spool get a little old on the printer, or I purchased a filament spool that was old or improperly packaged. How would I know? How would this impact prints (what kind of symptoms would I see)? What could I do (perhaps in the slicer) to correct for this and prolong the life of marginal filament?</p> <p>And the corollary... in a typical environment, how long can filament be left out without suffering from the exposure?</p> <p>I'm thinking mainly about PLA, but responses for ABS and other materials are useful, too.</p>
<p>The primary issue with long-term exposure of filament to the environment is that it will absorb moisture from the air. When a filament that has absorbed water is passing through the hot end of a printer, the water will turn to steam and cause problems with extrusion:</p> <ul> <li>Small bubbles of steam can form, causing extrusion to sputter - you might hear a sizzling noise and have poor consistency.</li> <li>Large steam bubbles can cause significant oozing followed by no extrusion.</li> <li>Extreme cases can cause mysterious jams that seem to clear themselves (the extruder cannot overcome the steam pressure).</li> </ul> <p>In short, this will cause terrible print quality and failed prints. As the effects are not consistent, there is nothing that can be done with slicer settings to "recalibrate" for filament that has absorbed water.</p> <p>This can be avoided by storing filament in an air-tight container with desiccant to ensure low humidity. Some people use "dry boxes" that allow the spool to be mounted inside while filament is passed to the printer, so there is minimal exposure even while the spool is in use.</p> <p>If you do suspect that your filament has absorbed moisture, you can dry it out by placing the spool in a warm oven or in a food dehydrator for a few hours. If you weigh it before and after, you should find that it weighs several grams less afterwards. <strong>WARNING:</strong> It is important that the temperature does not soften the plastic at all, or it can become distorted or bind on the spool. Most ovens will peak well above the set temperature as the thermostat cycles. Of course, fully melting a roll of filament could destroy your oven or cause a fire.</p> <p>It's hard to say how much environmental exposure is too much, as every filament and environment is different. When I started out, I had several spools of PLA that I stored in the open for months. I didn't think I was having any problems, but I was also learning a lot and improving my printer settings at the same time. After I got PETG, it became unusable with oozing and jams after about two weeks, but a few hours in my oven was a miracle cure! I then dried some PLA as well, and I found that print quality did improve, but not amazingly so. I have not used ABS, but in theory it is less hygroscopic than PLA, so it is probably not very sensitive to exposure.</p> <p>I set up a dry storage box, and I am careful to always store PETG or my "good" PLA when I'm not actively using it. I have a couple of rolls of PLA that I don't like as much anyway and generally just use for draft prints, and I don't really worry about those that much.</p> <p>Note: An object that has been printed will also absorb moisture, but in general this isn't a problem.</p>
<p>Moisture creeps into PLA filament when it is exposed to a humid environment for a long time. It is audible as popping bubbles when the filament is heated in the hotend, and it worsens print quality. The moisture also causes a noticeable change in material properties, as it breaks the long molecular chains of the thermoplastic. Once this has happened, the effect is irreversible. </p> <p>The breaking of the long molecular chains results in a more brittle filament; it can even snap during feeding, leading to unpleasant results like unfinished prints. If the filament snaps when you bend it, instead of folding, your filament has taken up moisture.</p>
1,022
<p>I like Doxygen to create documentation of C or PHP code. I have an upcoming Python project and I think I remember that Python doesn't have <code>/* .. */</code> comments, and also has its own self-documentation facility which seems to be the pythonic way to document.</p> <p>Since I'm familiar with Doxygen, how can I use it to produce my Python documentation? Is there anything in particular that I need to be aware of?</p>
<p>This is <a href="http://www.doxygen.nl/manual/docblocks.html#pythonblocks" rel="noreferrer">documented on the doxygen website</a>, but to summarize here:</p> <p>You can use doxygen to document your Python code. You can either use the Python documentation string syntax:</p> <pre><code>"""@package docstring
Documentation for this module.

More details.
"""

def func():
    """Documentation for a function.

    More details.
    """
    pass
</code></pre> <p>In which case the comments will be extracted by doxygen, but you won't be able to use any of the <a href="http://www.doxygen.nl/manual/commands.html#cmd_intro" rel="noreferrer">special doxygen commands</a>.</p> <p><strong>Or</strong> you can (similar to C-style languages under doxygen) double up the comment marker (<code>#</code>) on the first line before the member:</p> <pre><code>## @package pyexample
#  Documentation for this module.
#
#  More details.

## Documentation for a function.
#
#  More details.
def func():
    pass
</code></pre> <p>In that case, you can use the special doxygen commands. There's no particular Python output mode, but you can apparently improve the results by setting <code>OPTIMIZE_OUTPUT_JAVA</code> to <code>YES</code>.</p> <p>Honestly, I'm a little surprised at the difference - it seems like once doxygen can detect the comments in <code>##</code> blocks or <code>"""</code> blocks, most of the work would be done and you'd be able to use the special commands in either case. Maybe they expect people using <code>"""</code> to adhere to more Pythonic documentation practices and that would interfere with the special doxygen commands?</p>
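For reference, a minimal Doxyfile sketch for a Python project might look like the following. This is only illustrative: the project name and input path are placeholders, and defaults vary between doxygen versions, so check `doxygen -g` output for yours.

```
# Illustrative Doxyfile fragment for a Python project (placeholder values)
PROJECT_NAME         = "MyPythonProject"
INPUT                = src
FILE_PATTERNS        = *.py
RECURSIVE            = YES
EXTRACT_ALL          = YES
# There is no Python-specific output mode; this setting reportedly helps:
OPTIMIZE_OUTPUT_JAVA = YES
GENERATE_LATEX       = NO
```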
<p>Another very good documentation tool is <a href="http://www.sphinx-doc.org/en/master/index.html" rel="nofollow noreferrer">Sphinx</a>. It will be used for the upcoming Python 2.6 <a href="http://docs.python.org/dev/" rel="nofollow noreferrer">documentation</a> and is used by <a href="http://docs.djangoproject.com/en/dev/" rel="nofollow noreferrer">Django</a> and a lot of other Python projects.</p> <p>From the Sphinx website:</p> <ul> <li><strong>Output formats</strong>: HTML (including Windows HTML Help) and LaTeX, for printable PDF versions</li> <li><strong>Extensive cross-references</strong>: semantic markup and automatic links for functions, classes, glossary terms and similar pieces of information</li> <li><strong>Hierarchical structure</strong>: easy definition of a document tree, with automatic links to siblings, parents and children</li> <li><strong>Automatic indices</strong>: general index as well as a module index</li> <li><strong>Code handling</strong>: automatic highlighting using the Pygments highlighter</li> <li><strong>Extensions</strong>: automatic testing of code snippets, inclusion of docstrings from Python modules, and more</li> </ul>
8,303
<p>I've been using Subversion for code control with TortoiseSVN to interface with the server for the past few months, and in general it's been going great! However, occasionally my FoxPro IDE will change the case of a file extension without warning, where "<em>program.prg</em>" becomes "<em>program.<strong>PRG</strong></em>". TortoiseSVN apparently takes this to mean the first file was removed, so it becomes flagged as "missing" and the second name comes up as "non-versioned", wreaking havoc on my ability to track changes to the file. I understand that Subversion has its origins in the case-sensitive world of *nix, but is there any way to make either Subversion or TortoiseSVN treat file names case-insensitively when used with Windows?</p>
<p>Unfortunately, Subversion is case-sensitive. This is due to the fact that files from Subversion can be checked out on both case-sensitive file systems (e.g., *nix) and case-insensitive file systems (e.g., Windows, Mac).</p> <p><a href="http://svn.apache.org/repos/asf/subversion/trunk/contrib/hook-scripts/case-insensitive.py" rel="nofollow noreferrer">This pre-commit hook script</a> may help you avoid problems when you check in files. If it doesn't solve your problem, my best suggestion is to write a little script to make sure that all extensions are lowercase and run it every time before you check in/check out. It'll be a PITA, but maybe your best bet.</p>
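As a sketch of the "little script" idea, the snippet below walks a working copy and reports files whose extension contains uppercase letters. It is only an illustration: it does a plain directory scan (skipping `.svn` metadata), and in a real working copy you would rename offenders with `svn move` so history is preserved.

```python
import os

def find_uppercase_extensions(root):
    """Return paths under root whose file extension contains uppercase letters."""
    offenders = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip Subversion metadata directories
        dirnames[:] = [d for d in dirnames if d != ".svn"]
        for name in filenames:
            ext = os.path.splitext(name)[1]
            if ext != ext.lower():
                offenders.append(os.path.join(dirpath, name))
    return offenders

if __name__ == "__main__":
    for path in find_uppercase_extensions("."):
        print(path)  # rename these with 'svn move' to keep history
```

Run it from the working-copy root before committing; anything it prints is a candidate for an `svn move` back to the lowercase name.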
<p>Nope, you sure can't. SVN is case-sensitive unless you were to rewrite the code somehow ... it <em>is</em> open-source.</p>
3,539
<p>As much as we would all like to say it is a benefit to programmers to be language agnostic, is it really feasible to support multiple enterprise Web applications of the same type all written in different languages? Think about how complicated a CMS or e-commerce system can be -- now imagine supporting three different CMS platforms all written in different languages. I would hate to be known as a .NET or Java or PHP shop, but I also don't want to be the vendor who says they can support a solution they have never worked with, upsetting a client who wonders why we can't get something done right on time. Can anyone speak from experience on this? Does your company usually just suck it up and try to learn a new platform on the fly? Do you bill for getting up to speed, or eat those costs?</p>
<p><a href="http://racket-lang.org/" rel="noreferrer">Racket</a> (<a href="http://racket-lang.org/new-name.html" rel="noreferrer">formerly Dr Scheme</a>) has a nice editor, several different Scheme dialects, an attempt at visual debugging, lots of libraries, and can run on most platforms. It even has some modes specifically geared around learning the language.</p>
<p>LispMe works on a Palm Pilot; take it anywhere and Scheme on the go. A GREAT way to learn Scheme.</p>
8,403
<p>Seems like there are so many different ways of automating one's build/deployment that it becomes difficult to parse through all the different scenarios that people support in tutorials on the web. So I wanted to present the question to the stackoverflow crowd ... what would be the best way to set up an automated build and deployment system using the following configuration:</p> <ul> <li>Visual Studio 2008</li> <li>Web Application Project</li> <li>CruiseControl.NET</li> </ul> <p>One of the first things I tried was to have CCnet automatically zip the output and copy it to the server, but then that requires manual work to unzip at the destination. However, if we try to copy all the files individually, then it could potentially take a long time if it's a large application (build server lives outside of the datacenter in our office ... I know).</p> <p>Also of particular interest is how we would support multiple environments as we have dev, qa, uat, and then of course prod.</p> <p><a href="http://blogs.iis.net/msdeploy/archive/2008/01/22/welcome-to-the-web-deployment-team-blog.aspx" rel="noreferrer">MSDeploy</a> seems really interesting, but unless I'm interpreting the literature incorrectly, doesn't help in the scenario of deploying from the output of a build server. If anything, it seems like it'll be useful in deploying one build across a build farm ... but even for deploying from one environment to another, one would have to manually change config settings and web service URLs, etc.</p>
<p>I recently spent a few days working on automating deployments at my company.</p> <p>We use a combination of CruiseControl, NAnt, MSBuild to generate a release version of the app. Then a separate script uses MSDeploy and XCopy to backup the live site and transfer the new files over.</p> <p>Our solution is briefly described in an answer to this question <a href="https://stackoverflow.com/questions/45783/automate-deployment-for-web-applications">Automate Deployment for Web Applications?</a></p>
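The "zip the output, copy it to the server, unzip it there" step from the question can itself be scripted so no manual unzipping is needed. Below is a rough, illustrative Python sketch (paths are placeholders; a real .NET setup would more likely use MSDeploy, robocopy, or an NAnt/MSBuild task, and the target would typically be a UNC share):

```python
import shutil
import zipfile
from pathlib import Path

def deploy(build_output: str, target_dir: str) -> None:
    """Zip the build output, place the archive at the target, and unzip it there."""
    build = Path(build_output)
    target = Path(target_dir)   # e.g. a mounted share like //server/site (placeholder)
    target.mkdir(parents=True, exist_ok=True)

    # Create target/release.zip from the contents of the build output directory
    archive = Path(shutil.make_archive(str(target / "release"), "zip", root_dir=build))
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)   # no manual unzip step on the server
    archive.unlink()            # remove the transferred archive
```

Transferring one archive instead of thousands of small files is also much faster over a slow office-to-datacenter link, which is the pain point described above.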
<p>I had a <a href="https://stackoverflow.com/questions/55323/aspnet-web-application-build-output-how-do-i-include-all-deployment-files">related question</a> about getting a deployable set of files from an automated build. I found Web Deployment Projects (links and all in the old question) did what I needed - they're a VS and MSBuild add-on.</p>
8,215
<p>The handle of a microwave oven broke.</p> <p>I can't just order a replacement part, because I can't even attach the new one. </p> <p>The problem is that the screw heads are somewhere on the interior side of the door, which cannot be disassembled (non-destructively, at least). I wouldn't even do it, for safety reasons.</p> <p>I have access to the threads of two loose and captive screws to work with (indicated by the two red lines in picture one). The screws are not machine screws, but screws for plastic, like in the attached picture.</p> <p>The plan is to 3D-print the plastic part of the handle and reuse the front aluminum cover.</p> <p>I don't want the handle to be loose, so I'm looking for suggestions on how to attach the new handle. I have a lot of ideas; maybe I will share them later if they are not mentioned at some point. The main problem is: how do you attach something when all you have to work with is a loose, non-machine screw?</p> <p><a href="https://i.stack.imgur.com/ZziLk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZziLk.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/QY4kK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QY4kK.jpg" alt="enter image description here"></a></p>
<p>This answer is intended to be a generic answer for Z-offset determination. The question is not clear on how the Z-offset has been determined. It appears as if this distance was measured, while in reality it cannot be measured.</p> <p>A touch (or an inductive or capacitive) probe uses a trigger point to determine the distance of the probe trigger point to the bed print surface. Correct installation is straightforward, as is determining the nozzle-to-trigger-point distance. For a touch sensor, the probing element is either stowed, fully deployed, or pushed in during leveling up to the point that the trigger point is reached, after which the probe stows the rest of the pin, see figure:</p> <p><a href="https://i.stack.imgur.com/wdto7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wdto7.png" alt="BLTouch or 3DTouch sensor Z-offset definition"></a></p> <p>The <code>M851 Zxx.xx</code> offset is determined by lowering the nozzle beyond the trigger point until the nozzle hits a paper sheet. If the stowed-position-to-nozzle distance is used instead, the distance is too large and the nozzle will dive into the bed when printing.</p>
<p>You should be able to offset this with a <code>G54 Z-1.3</code> - <strong>if your setup accepts these gcodes</strong>. </p> <p>If you do this, always add a <code>G53</code> to the very start and just before the <code>M30</code> to clear all offsets after job finish (or in the event of a cancel, at the start of the next job).</p> <p>I'm not experienced with a wide variety of printers or firmware, but our repetier-based printers (and we use the same controls for our refurbished Fadal CNC machines) use G53-G59:</p> <p><a href="https://www.cnccookbook.com/g54-g92-g52-work-offsets-cnc-g-code/" rel="nofollow noreferrer">As explained in this tutorial from cnccookbook.com</a>:</p> <blockquote> <p>Basic work offsets are very simple to specify: simply enter one of G54, G55, G56, G57, G58, or G59. [...] When you execute the work offset g-code, the XYZ offset will be added to all of your coordinates from that point forward.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/G-code" rel="nofollow noreferrer">As detailed on Wikipedia</a>:</p> <blockquote> <p><strong>G54-59</strong>: Have largely replaced position register (G50 and G92). Each tuple of axis offsets relates program zero directly to machine zero. Standard is 6 tuples (G54 to G59), with optional extensibility to 48 more via G54.1 P1 to P48.</p> </blockquote> <p><a href="http://hyrel3d.net/wiki/index.php/Gcode#G54_through_G59_-_Set_Offsets" rel="nofollow noreferrer">And on the gcode dictionary provided by Hyrel 3D</a>:</p> <blockquote> <p><strong>G54 through G59 - Set Offsets</strong> G54, G55, G56, G57, G58, and G59 will each store and invoke offsets in the X, Y, and/or Z axes for all subsequent moves. 
Any values not invoked will remain with their previous value (0 unless earlier specified otherwise).</p> <ul> <li>X is the offset in mm in the X axis.</li> <li>Y is the offset in mm in the Y axis.</li> <li>Z is the offset in mm in the Z axis.</li> </ul> <p>Here is an example:</p> <p><code>G54 X100 Y-50</code></p> <p>This command is decoded and executed by the printer as follows:</p> <p>G54 (set offsets)<br> - X100 (+100mm to all X coordinates)<br> - Y-50 (-50mm to all Y coordinates)<br></p> <p>Note that this differs from an M6, where the offsets are only applied to a SINGLE tool position.</p> </blockquote> <p><em>Disclaimer: I work for Hyrel 3D.</em></p>
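To illustrate the modal behavior described above (offsets persist and are added to all subsequent coordinates, and axes not named keep their previous values), here is a small Python toy model. It is purely a conceptual sketch, not real firmware or G-code parsing:

```python
class OffsetModel:
    """Toy model of modal G54-style work offsets (illustration only)."""

    def __init__(self):
        self.offset = {"X": 0.0, "Y": 0.0, "Z": 0.0}

    def g54(self, **axes):
        # Axes named in the command get new offsets; unnamed axes keep theirs.
        for axis, value in axes.items():
            self.offset[axis] = value

    def apply(self, **coords):
        # Every subsequent coordinate has the current offset added.
        return {a: v + self.offset[a] for a, v in coords.items()}

m = OffsetModel()
m.g54(X=100, Y=-50)              # like G54 X100 Y-50
print(m.apply(X=10, Y=10, Z=5))  # X and Y are shifted; Z keeps its 0 offset
```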
1,468
<p>I got myself a Creality Ender 3 and just finished assembling it, and I noticed that the home position is at the right instead of the left. It is also about 3&nbsp;mm in front of the right corner.</p> <p>Could anyone maybe help me understand what is wrong?</p>
<p>Homing is the process where the machine finds the limits of all 3 axes (at the endstop switches), to have a common reference point for each axis. From the endstops, firmware-defined values give the printer head the offset to the home position, also known as the origin (X=0, Y=0). If the origin position is in front of the plate, wrong offsets may be defined, but you can <a href="/q/6375">re-adjust the position</a>.</p> <p>If the stepper is driving the printer head away from the endstop on a particular axis on homing (in your case the X direction), there might be a few possible problems at hand:</p> <ol> <li>The stepper cables are plugged in reversed</li> <li>The wires in the stepper cable are switched</li> <li>The stepper direction is reversed in firmware (this is not very probable for a kit or bought printer)</li> <li>The stepper motor itself is built mirrored (a not-uncommon mistake for Y stepper motors on the Anet A8)</li> </ol> <p>A quick fix would be to rotate the connector of the cable going into the X stepper by 180&deg;; if this works, you can disassemble the cable and properly arrange the stepper wires.</p>
<p>Did you make sure that the X, Y, Z and other cords/connectors were connected to the right places? To clarify, the things that plug into the motors.<a href="https://i.stack.imgur.com/05zAt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/05zAt.jpg" alt="What I was trying to explain. Apologies for the low quality."></a></p>
1,192
<p>I am using jQuery. I call a JavaScript function with the following HTML:</p> <pre><code>&lt;li&gt;&lt;span&gt;&lt;a href="javascript:uncheckEl('tagVO-$id')"&gt;$tagname&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
</code></pre> <p>I would like to remove the <code>li</code> element and I thought this would be easy with the <code>$(this)</code> object. This is my JavaScript function:</p> <pre><code>function uncheckEl(id) {
  $("#"+id+"").attr("checked","");
  $("#"+id+"").parent("li").css("color","black");
  $(this).parent("li").remove(); // This is not working
  retrieveItems();
}
</code></pre> <p>But <code>$(this)</code> is undefined. Any ideas?</p>
<p>Try something like this (e.g. to hide the <code>&lt;li&gt;</code>):</p> <pre><code>function uncheckEl(id, ref) {
  (...)
  $(ref).parent().parent().hide(); // this should be your &lt;li&gt;
}
</code></pre> <p>And your link (note that a <code>javascript:</code> href cannot give you the element, but inside an <code>onclick</code> attribute <code>this</code> refers to the clicked element):</p> <pre><code>&lt;a href="#" onclick="uncheckEl('tagVO-$id', this); return false;"&gt;
</code></pre> <p><code>$(this)</code> is not defined inside your function, because how is it supposed to know where the action was called from? You pass no reference into it, so <code>$(this)</code> could refer to anything but the <code>&lt;a&gt;</code>.</p>
<p>Why not something like:</p> <pre><code>&lt;li id="uncheck_tagVO-$id"&gt;$tagname&lt;/li&gt;
</code></pre> <p>and</p> <pre><code>$('li').click( function() {
  var id = this.id.split("_")[1];
  $('#'+id).attr("checked","").parent("li").css("color","black");
  $(this).remove();
  retrieveItems();
});
</code></pre>
8,530
<p>Simple as the title states: Can you use only Java commands to take a screenshot and save it? Or, do I need to use an OS specific program to take the screenshot and then grab it off the clipboard?</p>
<p>Believe it or not, you can actually use <a href="http://java.sun.com/javase/6/docs/api/java/awt/Robot.html#createScreenCapture&#40;java.awt.Rectangle&#41;" rel="noreferrer"><code>java.awt.Robot</code></a> to "create an image containing pixels read from the screen." You can then write that image to a file on disk.</p> <p>I just tried it, and the whole thing ends up like:</p> <pre><code>Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage capture = new Robot().createScreenCapture(screenRect);
ImageIO.write(capture, "bmp", new File(args[0]));
</code></pre> <p>NOTE: This will only capture the primary monitor. See <a href="http://java.sun.com/javase/6/docs/api/java/awt/GraphicsConfiguration.html" rel="noreferrer">GraphicsConfiguration</a> for multi-monitor support.</p>
<p>You can use <code>java.awt.Robot</code> to achieve this task.</p> <p>Below is the code of the server, which saves the captured screenshot as an image in your directory.</p> <pre><code>import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.sql.SQLException;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.imageio.ImageIO;

public class ServerApp extends Thread {

    private ServerSocket serverSocket = null;
    private static Socket server = null;
    private Date date = null;
    private static final String DIR_NAME = "screenshots";

    public ServerApp() throws IOException, ClassNotFoundException, Exception {
        serverSocket = new ServerSocket(61000);
        serverSocket.setSoTimeout(180000);
    }

    public void run() {
        while (true) {
            try {
                server = serverSocket.accept();
                date = new Date();
                DateFormat dateFormat = new SimpleDateFormat("_yyMMdd_HHmmss");
                String fileName = server.getInetAddress().getHostName().replace(".", "-");
                System.out.println(fileName);
                BufferedImage img = ImageIO.read(ImageIO.createImageInputStream(server.getInputStream()));
                ImageIO.write(img, "png", new File("D:\\screenshots\\" + fileName + dateFormat.format(date) + ".png"));
                System.out.println("Image received!!!!");
                //lblimg.setIcon(img);
            }
            catch (SocketTimeoutException st) {
                System.out.println("Socket timed out!" + st.toString());
                //createLogFile("[stocktimeoutexception]"+stExp.getMessage());
                break;
            }
            catch (IOException e) {
                e.printStackTrace();
                break;
            }
            catch (Exception ex) {
                System.out.println(ex);
            }
        }
    }

    public static void main(String[] args) throws IOException, SQLException, ClassNotFoundException, Exception {
        ServerApp serverApp = new ServerApp();
        serverApp.createDirectory(DIR_NAME);
        Thread thread = new Thread(serverApp);
        thread.start();
    }

    private void createDirectory(String dirName) {
        File newDir = new File("D:\\" + dirName);
        if (!newDir.exists()) {
            boolean isCreated = newDir.mkdir();
        }
    }
}
</code></pre> <p>And this is the client code, which runs on a thread and captures a screenshot of the user's screen every few minutes.</p> <pre><code>package com.viremp.client;

import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.Socket;
import java.util.Random;
import javax.imageio.ImageIO;

public class ClientApp implements Runnable {

    private static long nextTime = 0;
    private static ClientApp clientApp = null;
    private String serverName = "192.168.100.18"; // server ip
    private int portNo = 61000;
    //private Socket serverSocket = null;

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(String[] args) throws InterruptedException {
        clientApp = new ClientApp();
        clientApp.getNextFreq();
        Thread thread = new Thread(clientApp);
        thread.start();
    }

    private void getNextFreq() {
        long currentTime = System.currentTimeMillis();
        Random random = new Random();
        long value = random.nextInt(180000); //1800000
        nextTime = currentTime + value;
        //return currentTime+value;
    }

    @Override
    public void run() {
        while (true) {
            if (nextTime &lt; System.currentTimeMillis()) {
                System.out.println(" get screen shot ");
                try {
                    clientApp.sendScreen();
                    clientApp.getNextFreq();
                }
                catch (AWTException e) {
                    System.out.println(" err" + e);
                }
                catch (IOException e) {
                    e.printStackTrace();
                }
                catch (Exception e) {
                    e.printStackTrace();
                }
            }
            //System.out.println(" statrted ....");
        }
    }

    private void sendScreen() throws AWTException, IOException {
        Socket serverSocket = new Socket(serverName, portNo);
        Toolkit toolkit = Toolkit.getDefaultToolkit();
        Dimension dimensions = toolkit.getScreenSize();
        Robot robot = new Robot(); // Robot class
        BufferedImage screenshot = robot.createScreenCapture(new Rectangle(dimensions));
        ImageIO.write(screenshot, "png", serverSocket.getOutputStream());
        serverSocket.close();
    }
}
</code></pre>
8,267
<p>I was always attracted to the world of kernel hacking and embedded systems.<br> Has anyone got good tutorials (+easily available hardware) on starting to mess with such stuff?<br> Something like kits for writing drivers etc, which come with good documentation and are affordable?</p> <p>Thanks!</p>
<p>If you are completely new to kernel development, I would suggest not starting with hardware development, but going for some "software-only" kernel modules like proc files / sysfs, or, for more complex examples, filesystem / network development, developing on a UML/VMware/VirtualBox/... machine so crashing your machine won't hurt so much :) For embedded development you could go for a small ARM development kit or a small Via C3/C4 machine, or any old PC which you can burn with your homebrew USB / PCI / whatever device.</p> <p>A good place to start is probably <a href="http://kernelnewbies.org/" rel="noreferrer">Kernelnewbies.org</a> - which has lots of links and useful information for kernel developers, and also features a list of easy-to-implement tasks to tackle for beginners.</p> <p>Some books to read:</p> <p><a href="http://books.google.de/books?id=cbbMrRNiC4cC&amp;dq=understanding+the+linux+kernel&amp;pg=PP1&amp;ots=TvUb8_r9Yd&amp;sig=7Cn76JnJktn0B9cvLjh-pxFEdfo&amp;hl=de&amp;sa=X&amp;oi=book_result&amp;resnum=5&amp;ct=result#PPP1,M1" rel="noreferrer">Understanding the Linux Kernel</a> - a very good reference detailing the design of the kernel subsystems</p> <p><a href="http://lwn.net/Kernel/LDD3/" rel="noreferrer">Linux Device Drivers</a> - is written more like a tutorial with a lot of example code, focusing on getting you going and explaining key aspects of the Linux kernel. It introduces the build process and the basics of kernel modules.</p> <p><a href="http://www.tldp.org/LDP/lkmpg/2.6/html/" rel="noreferrer">Linux Kernel Module Programming Guide</a> - Some more introductory material</p> <p>As suggested earlier, looking at the Linux code is always a good idea, especially as Linux kernel APIs tend to change quite often ... 
LXR helps a lot with a very nice browsing interface - <a href="http://lxr.linux.no" rel="noreferrer">lxr.linux.no</a></p> <p>To understand the Kernel Build process, this link might be helpful:</p> <p><a href="http://www.ravnborg.org/kbuild/makefiles.html" rel="noreferrer">Linux Kernel Makefiles (kbuild)</a></p> <p>Last but not least, browse the Documentation directory of the Kernel Source distribution!</p> <p>Here are some interesting exercises insolently stolen from a kernel development class: </p> <ul> <li>Write a kernel module which creates the file /proc/jiffies reporting the current time in jiffies on every read access.</li> <li>Write a kernel module providing the proc file /proc/sleep. When an application writes a number of seconds as ASCII text into this file ("echo 3 > /proc/sleep"), it should block for the specified amount of seconds. Write accesses should have no side effect on the contents of the file, i.e., on the read accesses, the file should appear to be empty (see LDD3, ch. 6/7)</li> <li>Write a proc file where you can store some text temporarily (using echo "blah" > /proc/pipe) and get it out again (cat /proc/pipe), clearing the file. Watch out for synchronisation issues.</li> <li>Modify the pipe example module to register as a character device /dev/pipe, add dynamic memory allocation for write requests.</li> <li>Write a really simple file system.</li> </ul>
<p>Some things to be prepared for:</p> <ul> <li>you'll be cross-compiling. The embedded device will use a MIPS, PowerPC, or ARM CPU but won't have enough CPU power, memory, or storage to compile its own kernel in a reasonable amount of time. </li> <li>An embedded system often uses a serial port as the console, and to lower the cost there is usually no connector soldered onto production boards. Debugging kernel panics is very difficult: unless you can solder on a serial port connector, you won't have much information about what went wrong. </li> </ul> <p>The <A HREF="http://www.nslu2-linux.org/" rel="nofollow noreferrer">Linksys NSLU2</A> is a low-cost way to get a real embedded system to work with, and has a USB port to add peripherals. Any of a number of wireless access points can also be used, see the <A HREF="http://wiki.openwrt.org/TableOfHardware" rel="nofollow noreferrer">OpenWrt compatibility page.</A> Be aware that current models of the Linksys WRT54G you'll find in stores can no longer be used with Linux: they have less RAM and Flash in order to reduce the cost. Cisco/Linksys now uses vxWorks on the WRT54G, with a smaller memory footprint.</p> <p>If you really want to get into it, evaluation kits for embedded CPUs start at a couple hundred US dollars. I'd recommend not spending money on these unless you need it professionally for a job or consulting contract.</p>
8,575
<p>How do I duplicate a whole line in <strong>Vim</strong> in a similar way to <kbd>Ctrl</kbd>+<kbd>D</kbd> in IntelliJ IDEA/ Resharper or <kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>&#8593;</kbd>/<kbd>&#8595;</kbd> in <strong>Eclipse</strong>?</p>
<p><kbd>y</kbd><kbd>y</kbd> or <kbd>Y</kbd> to copy the line (mnemonic: <em><strong>y</strong>ank</em>) <br />or <br /><kbd>d</kbd><kbd>d</kbd> to <em><strong>d</strong>elete</em> the line (Vim copies what you deleted into a clipboard-like "register", like a <em>cut</em> operation)</p> <p>then</p> <p><kbd>p</kbd> to <em><strong>p</strong>aste</em> the copied or deleted text <em>after</em> the current line <br />or <br /><kbd>P</kbd> to <em><strong>p</strong>aste</em> the copied or deleted text <em>before</em> the current line</p>
<p>I use this mapping, which mimics the VS Code behaviour. I hope it is useful!</p> <pre><code>nnoremap &lt;A-d&gt; :t. &lt;CR&gt;== inoremap &lt;A-d&gt; &lt;Esc&gt;:t. &lt;CR&gt;==gi vnoremap &lt;A-d&gt; :t$ &lt;CR&gt;gv=gv </code></pre>
9,900
<p>In Ruby on Rails, I'm attempting to update the <code>innerHTML</code> of a div tag using the <code>form_remote_tag</code> helper. This update happens whenever an associated select tag receives an onchange event. The problem is, <code>&lt;select onchange="this.form.submit();"&gt;</code>; doesn't work. Nor does <code>document.forms[0].submit()</code>. The only way to get the onsubmit code generated in the form_remote_tag to execute is to create a hidden submit button, and invoke the click method on the button from the select tag. Here's a working ERb partial example.</p> <pre><code>&lt;% form_remote_tag :url =&gt; product_path, :update =&gt; 'content', :method =&gt; 'get' do -%&gt; &lt;% content_tag :div, :id =&gt; 'content' do -%&gt; &lt;%= select_tag :update, options_for_select([["foo", 1], ["bar", 2]]), :onchange =&gt; "this.form.commit.click" %&gt; &lt;%= submit_tag 'submit_button', :style =&gt; "display: none" %&gt; &lt;% end %&gt; &lt;% end %&gt; </code></pre> <p>What I want to do is something like this, but it doesn't work.</p> <pre><code>&lt;% form_remote_tag :url =&gt; product_path, :update =&gt; 'content', :method =&gt; 'get' do -%&gt; &lt;% content_tag :div, :id =&gt; 'content' do -%&gt; # the following line does not work &lt;%= select_tag :update, options_for_select([["foo", 1], ["bar", 2]]), :onchange =&gt; "this.form.onsubmit()" %&gt; &lt;% end %&gt; &lt;% end %&gt; </code></pre> <p>So, is there any way to remove the invisible submit button for this use case?</p> <p>There seems to be some confusion. So, let me explain. 
The basic problem is that <code>submit()</code> doesn't call the <code>onsubmit()</code> code rendered into the form.</p> <p>The actual HTML form that Rails renders from this ERb looks like this:</p> <pre><code>&lt;form action="/products/1" method="post" onsubmit="new Ajax.Updater('content', '/products/1', {asynchronous:true, evalScripts:true, method:'get', parameters:Form.serialize(this)}); return false;"&gt; &lt;div style="margin:0;padding:0"&gt; &lt;input name="authenticity_token" type="hidden" value="4eacf78eb87e9262a0b631a8a6e417e9a5957cab" /&gt; &lt;/div&gt; &lt;div id="content"&gt; &lt;select id="update" name="update" onchange="this.form.commit.click"&gt; &lt;option value="1"&gt;foo&lt;/option&gt; &lt;option value="2"&gt;bar&lt;/option&gt; &lt;/select&gt; &lt;input name="commit" style="display: none" type="submit" value="submit_button" /&gt; &lt;/div&gt; &lt;/form&gt; </code></pre> <p>I want to axe the invisible submit button, but using a straight form.submit appears to not work. So, I need some way to call the form's onsubmit event code.</p> <p>Update: Orion Edwards' solution would work if there wasn't a <code>return(false);</code> generated by Rails. I'm not sure which is worse though, sending a phantom click to an invisible submit button or calling eval on the <code>getAttribute('onsubmit')</code> call after removing the return call with a JavaScript string replacement! </p>
<p>I realize this question is kind of old, but what the heck are you doing eval for?</p> <pre><code>document.getElementById('formId').onsubmit(); document.getElementById('formId').submit(); </code></pre> <p>or</p> <pre><code>document.formName.onsubmit(); document.formName.submit(); </code></pre> <p>When the DOM of a document is loaded, the events are not strings any more; they are functions.</p> <pre><code>alert(typeof document.formName.onsubmit); // function </code></pre> <p>So there's no reason to convert a function to a string just so you can eval it.</p>
<p>If you didn't actually want to submit the form, but just invoke whatever code happened to be in the onsubmit, you could possibly do this: (untested)</p> <pre><code>var code = document.getElementById('formId').getAttribute('onsubmit'); eval(code); </code></pre>
5,399
<p>Anything that's as good and as stable and as feature-rich as GigaSpaces?</p>
<p>Gigaspaces is top notch as far as a Javaspaces implementation goes for scalability and performance. Are you restricted to a Javaspaces implementation? Blitz Javaspaces is top notch for a free product.</p>
<p>I'd suggest taking a look at Gartner's "Competitive Landscape: In-Memory Data Grids" at <a href="http://www.gartner.com/technology/reprints.do?id=1-1HCCIMJ&amp;ct=130718&amp;st=sb" rel="nofollow">http://www.gartner.com/technology/reprints.do?id=1-1HCCIMJ&amp;ct=130718&amp;st=sb</a></p>
4,339
<p>In Java 5 and above you have the foreach loop, which works magically on anything that implements <code>Iterable</code>:</p> <pre><code>for (Object o : list) { doStuff(o); } </code></pre> <p>However, <code>Enumerable</code> still does not implement <code>Iterable</code>, meaning that to iterate over an <code>Enumeration</code> you must do the following:</p> <pre><code>for(; e.hasMoreElements() ;) { doStuff(e.nextElement()); } </code></pre> <p>Does anyone know if there is a reason why <code>Enumeration</code> still does not implement <code>Iterable</code>?</p> <p><strong>Edit:</strong> As a clarification, I'm not talking about the language concept of an <a href="http://en.wikipedia.org/wiki/Enumerated_type" rel="noreferrer">enum</a>, I'm talking a Java-specific class in the Java API called '<a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/Enumeration.html" rel="noreferrer">Enumeration</a>'. </p>
<p>Enumeration hasn't been modified to support Iterable because it's an interface, not a concrete class (like Vector, which was modified to support the Collections interface).</p> <p>If Enumeration was changed to support Iterable it would break a bunch of people's code.</p>
<p>It is possible to create an Iterable from any object with a method that returns an Enumeration, using a lambda as an adapter. In Java 8, use Guava's static <a href="https://guava.dev/releases/snapshot-jre/api/docs/com/google/common/collect/Iterators.html#forEnumeration(java.util.Enumeration)" rel="nofollow noreferrer"><code>Iterators.forEnumeration</code></a> method, and in Java 9+ use the Enumeration instance method <a href="https://docs.oracle.com/javase/9/docs/api/java/util/Enumeration.html#asIterator--" rel="nofollow noreferrer"><code>asIterator</code></a>.</p> <p>Consider the Servlet API's <code>HttpSession.getAttributeNames()</code>, which returns an <code>Enumeration&lt;String&gt;</code> rather than an <code>Iterator&lt;String&gt;</code>.</p> <p><strong>Java 8 using Guava</strong></p> <pre class="lang-java prettyprint-override"><code>Iterable&lt;String&gt; iterable = () -&gt; Iterators.forEnumeration(session.getAttributeNames()); </code></pre> <p><strong>Java 9+</strong></p> <pre class="lang-java prettyprint-override"><code>Iterable&lt;String&gt; iterable = () -&gt; session.getAttributeNames().asIterator(); </code></pre> <p>Note that these lambdas are truly Iterable; they return a fresh Iterator each time they are invoked. You can use them exactly like any other Iterable in an enhanced <code>for</code> loop, <code>StreamSupport.stream(iterable.spliterator(), false)</code>, and <code>iterable.forEach()</code>.</p> <p>The same trick works on classes that provide an Iterator but don't implement Iterable. <code>Iterable&lt;Something&gt; iterable = notIterable::createIterator;</code></p>
4,686
<p>I purchased a new 4.2.7 (256k) board for my Ender 5 Pro and added the CR Touch. I got everything connected and flashed the board with the &quot;Ender-5 Pro- Marlin2.0.1 - V1.1.1 - ALT - TMC2235.bin&quot; firmware from the creality.com/download &gt; Accessory Firmware &gt; CR Touch Firmware for 32-bit Motherboard &gt; Ender-5 Pro.zip</p> <p><strong>Problem:</strong></p> <p>When told to Auto Home, the Z-axis drops 5 mm then checks X-axis &amp; Y-axis endstops = OK</p> <p>But then the Z-axis drops another 3 mm and the CR Touch deploys &gt; retracts &gt; deploys &gt; retracts and faults out.</p> <p>The instructions say to move the Z-axis to get the offset, but it will only go down while the CR Touch is faulted (stopped).</p> <p>I've checked all wiring and made sure the Z-axis endstop was disconnected. No help.</p> <p>Thinking it was a board issue, I wanted to check the normal, no CR Touch, functionality, so I flashed the Marlin2.0.1 V1.0.1 original version TMC2225 &quot;Marlin2.0.1 - V1.0.1 - Endstop - TMC2225.bin&quot; firmware onto the board, and reconnected the Z-axis endstop. The printer worked like normal.</p> <p>Not wanting to fail, I decided to try another firmware, so I decided to use TH3D_Unified2_CrealityV4X_256K. I went through and set up everything using <code>CUSTOM_PROBE</code> settings. And wouldn't you know it... SAME results as with the Creality &quot;Ender-5 Pro- Marlin2.0.1 - V1.1.1 - ALT - TMC2235.bin&quot; firmware.</p> <p>Bed drops 5 mm at the start of Auto Home, then drops another 3 mm after centering to deploy the CR Touch. The CR Touch deploys &gt; retracts &gt; deploys &gt; retracts, then faults out. At no point does the bed ever try to go up.</p> <p>The only other thing I can think of is that the CR Touch isn't working right. I thought it was supposed to deploy and then the bed was to come up to it to detect it. Could the wiring &quot;harness&quot; for the CR Touch be incorrectly wired? (Wiring below)</p> <p>Can someone help me with this one? 
Do I have a bad board? Do I have a bad CR Touch? Or am I just not getting a setting right somewhere?</p> <hr/> <p><strong>CR Touch wiring (current):</strong> 5-pin from factory; connectors can only be put in one way.</p> <p>At the Creality 4.2.7 board, from left to right:</p> <p>G = White | V = Black | IN = Yellow | G = Red | Out = Blue</p> <p>At the Creality CR Touch, from left to right with the Creality logo facing you:</p> <p>Blue | Red | Yellow | Black | White</p>
<p>If you have an empty spool of the same brand, you could weigh the empty spool and the one you're trying to &quot;measure&quot; to get an approximate weight of the remaining filament. Divide by the (presumably available from manufacturer) weight per meter to get a rough length in meters, if that's more useful to you than weight.</p>
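<p>The arithmetic above can be sketched like this; the spool weights, filament diameter and density below are made-up example figures, not data for any particular brand:</p>

```python
import math

def remaining_length_m(gross_g, empty_spool_g, diameter_mm=1.75, density_g_cm3=1.24):
    """Estimate metres of filament left from a weighed spool.

    density_g_cm3 is roughly 1.24 for PLA and 1.04 for ABS (approximate values).
    """
    filament_g = gross_g - empty_spool_g
    radius_cm = diameter_mm / 2 / 10                   # mm -> cm
    grams_per_cm = density_g_cm3 * math.pi * radius_cm ** 2
    return filament_g / grams_per_cm / 100             # cm -> m

# e.g. a spool weighing 950 g whose empty spool weighs 220 g:
print(round(remaining_length_m(950, 220)))             # roughly 245 m of PLA
```

<p>The dominant source of error is the assumed empty-spool weight, which is why having an actual empty spool of the same brand to weigh helps a lot.</p>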
<p>Besides using the window on a spool that estimates the amount of filament left, I've used large calipers to measure the diameter of an empty spool and the diameter of the filament left on the spool.</p>
2,111
<p>Quite a few methods exist for passing data into a Flex binary from a Rails application. Right now, I'm using the old e4x resultFormat with a xml.erb template. I've done AMF before, but I feel like inlining parameters into the embed itself is a better solution because you don't have to wait for the browser to load a swf binary and the binary to make a web request. </p> <p>Taking all of that into account, what is the best practice for rendering a Flex widget with a Rails back-end these days?</p>
<p>Sounds to me like you shouldn't use Flex at all. Flex is great for writing applications that load and display data, but is a huge overhead if you just want to display charts. If you're worried about delivering the SWF and the data in two separate responses, you should be worrying about the 200-300 K overhead of the Flex framework.</p> <p>Go with something more lightweight like <a href="http://code.google.com/p/flot/" rel="nofollow noreferrer">Flot</a> and deliver the JavaScript code that generates the charts along with the data. I don't think Flex is a suitable platform for what you want.</p>
<p>It depends: are you going to want the charts to be 'live', as in updated in real time when data changes? If so, then using AMF with a Flex native RemoteObject gives you plenty of polling options; you can also just use the simpler Flex native HTTPService.</p> <p>Read about them both here: <a href="http://livedocs.adobe.com/flex/3/langref/index.html" rel="nofollow noreferrer">Flex Actionscript 3.0 Documentation</a></p>
6,951
<p>I tried out printing a "bridge" for the first time using this <a href="https://www.thingiverse.com/thing:284380" rel="nofollow noreferrer">Bridge Calibration model</a>. The model creator suggests printing, tweaking settings, and printing again until it works "properly." </p> <p>My first try worked reasonably well, but the underside of the bridge is pretty rough and stringy. But I don't know what a "good bridge" is supposed to look like.</p> <p><a href="https://i.stack.imgur.com/htfAV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/htfAV.jpg" alt="printed bridge"></a></p> <p>How smooth should the underside of the bridge be? Any suggestions for what changes I should make to my slicer settings? Here is what I have currently:</p> <p><a href="https://i.stack.imgur.com/87qVV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/87qVV.png" alt="slicer settings for bridging"></a></p> <p>Here is a picture of my hotend on my Monoprice Maker Select Plus, as requested:</p> <p><a href="https://i.stack.imgur.com/JxNbw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JxNbw.jpg" alt="hotend of Monoprice Maker Select Plus"></a></p>
<p>Unless you are using a calibrated temperature sensor, it is an open question what the actual temperature will be.</p> <p>Actually, it doesn't really matter exactly what the temperature is; you just need to find the sweet spot for your filaments on your machine. Compared to temperatures reported by others, your settings may differ a little, but that does not matter.</p>
<p>Temperature from thermosensors gets collected as a resistance value that changes with temperature. The chip on your board derives the temperature from this value using a temperature-resistance table.</p> <p>If you are using Marlin firmware, the table your machine references is selected under the Thermal Settings header inside <code>Configuration.h</code>.</p>
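<p>In Marlin that selection is a one-line define per sensor; the value <code>1</code> below (the common 100k EPCOS NTC table) is only an example, so use the number that matches your actual sensor:</p>

```cpp
// Configuration.h (Marlin) -- picks the temperature/resistance table for hotend sensor 0.
// '1' is the common 100k EPCOS thermistor table; check the comment block above this
// setting in Marlin's Configuration.h for the full list and pick the entry for YOUR sensor.
#define TEMP_SENSOR_0 1
```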
1,126
<p>Recently I noticed that Cura always gives me a shorter printing time estimate than the print itself takes on my TEVO Tarantula with Marlin firmware.</p> <p>The time difference is about 15&nbsp;%, at a requested printing speed of 50&nbsp;mm/s.</p> <p><strong>How could I verify the real printing speed?</strong></p>
<p><strong>Printing speed is dependent on the firmware and physical properties of your printer</strong>.</p> <p>Slicers typically compute the expected time by assuming the printer will execute <em>exactly</em> what it is instructed to do, but a printer is a real object, with mass and momentum, and stepper motors that have an upper limit for their power output and rotation speed.</p> <p>So for example, the GCODE may say "extrude 200mm at 100mm/s" and the slicer will compute that operation as taking 2 seconds. However the printer will need to accelerate and decelerate at the extremes of the movement, and it may even be incapable of reaching speeds over 70mm/s, so the <em>actual</em> operation will likely take 3 seconds or more.</p> <p>Accelerations and decelerations account for most of the difference between ideal time and real one, and since the number and intensity of those is totally dependent on the GCODE/model being printed, <strong>it is not possible to simply multiply the computed time by a given factor</strong> (for example <code>1.15</code>, as your question seems to imply). A large cylinder printed in vase mode will have a printing time much more similar to the computed one than an intricate model with a very complex surface, for example.</p> <p>In recent years, slicers that are maintained by a printer manufacturer (cura, slic3r PE) have become better at estimating printing times for their own printers, as the settings of the firmware are accounted for in the actual estimating algorithm.</p> <p>If you use Octoprint, you may have noticed that the time estimate octoprint gives improves over time, as <strong>octoprint will analyse the GCODE and measure the elapsed time, and will be able to guesstimate the real time with an increasing degree of accuracy</strong>.</p>
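<p>As a rough sketch of the effect, here is the textbook trapezoidal-profile time for a single move; the 500&nbsp;mm/s² acceleration is an assumed example value, not a reading from any particular printer:</p>

```python
import math

def move_time(distance_mm, target_mm_s, accel_mm_s2=500.0):
    """Time for one move with a trapezoidal velocity profile (zero start/end speed)."""
    ramp_dist = target_mm_s ** 2 / accel_mm_s2     # accel + decel distance combined
    if ramp_dist >= distance_mm:                   # triangle: target speed never reached
        peak = math.sqrt(distance_mm * accel_mm_s2)
        return 2 * peak / accel_mm_s2
    return 2 * target_mm_s / accel_mm_s2 + (distance_mm - ramp_dist) / target_mm_s

print(round(move_time(200, 50), 2))   # 4.1 s vs the naive 200/50 = 4.0 s
print(round(move_time(2, 50), 2))     # 0.13 s vs 0.04 s naive: short moves never reach 50 mm/s
```

<p>Even this ideal model makes a long straight move ~2.5&nbsp;% slower than the naive estimate, and short detailed moves several times slower, which is the kind of thing a gap like the 15&nbsp;% one in the question is made of.</p>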
<p>I use an Anet A8, which has a system acceleration of 400 and jerk of 20. When I slice with Cura, I use G-code without acceleration control, because the printer handles this on its own, and combining it with G-code acceleration/jerk control gives me strange behavior (e.g. ultra-slow movement in fine detail sections, or random nearly-stopped movement). But to get a realistic approximation of the print time, I enable the acceleration/jerk control in Cura, and then I get a result that is really close to the real values.</p> <p>To be more specific to your question: 50&nbsp;mm/s is the <em>maximum</em> speed you can set, not an exact, absolute speed. If you know your printer, you can also experiment with the following approach: set the max speed really high, so that the maximum material volume per time is the limiting factor, and set the acceleration and jerk as low as needed for good results. Your printer will then accelerate on long lines as far as it can, up to the point where it has to decelerate for the next corner.</p>
874
<p>Is there anything like <a href="http://en.wikipedia.org/wiki/ELMAH" rel="noreferrer">ELMAH</a> for Windows Forms? </p> <p>I'm looking for a standard way to process unhandled exceptions and grab a screenshot and other environment information before packaging that up for support.</p>
<p>The very same Jeff Atwood coded a nice solution (albeit in VB.NET), which I had to modify and "fix", but which I have been happily using since then.</p> <p>You can view his solution <a href="http://www.codeproject.com/KB/exception/ExceptionHandling.aspx" rel="nofollow noreferrer">here</a>.</p>
<p>There is a commercial alternative called <a href="http://www.smartassembly.com/product/index.aspx" rel="nofollow noreferrer">{smartassembly}</a> that does error reporting.</p>
9,226
<p>I just started using GNU Emacs as my text editor and I am concerned about getting afflicted with "<a href="http://en.wikipedia.org/wiki/Emacs#Emacs_pinky" rel="nofollow noreferrer">Emacs Pinky</a>" by having to constantly press the control key with my pinky finger as is required when using Emacs. How can I avoid potentially getting this type of repetitive strain injury?</p>
<p>Making <KBD>caps lock</KBD> another control key is a good place to start. Invest in an ergonomic keyboard. Some emacs users even go as far as to get foot pedal things for control and meta...</p>
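<p>On Linux/X11 the Caps Lock remap is a one-liner; the file name below is just one common place to put it (any X startup script works), and the option name is the standard xkb one:</p>

```
# ~/.xprofile -- make Caps Lock an extra Control key
setxkbmap -option ctrl:nocaps
```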
<p>Just to overcome this issue, I remap all the copy, paste, save, etc. shortcuts onto the numpad. For further ease I bought a separate numpad and placed it behind my laptop keyboard.</p> <p><img src="https://i.stack.imgur.com/zBQUO.jpg" /></p> <p>You can easily remap the keys using AHK (AutoHotkey). I am using the following key mapping script:</p> <pre><code>NumpadIns::^s NumpadEnd::^c NumpadDown::^v NumpadPgDn::^x NumpadLeft::^+v NumpadClear::Control NumpadRight::^a NumpadHome::q NumpadUp::Tab NumpadDel::^f NumpadEnter::Space </code></pre> <p>1 : copy <br/> 2 : paste <br/> 0 : save <br/> etc.</p>
7,574
<p>I am having an issue with my new printer: it is not printing circles correctly, although I have changed the firmware and the motor stepping. Please advise if anyone knows the solution. <a href="https://i.stack.imgur.com/7egRT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7egRT.jpg" alt="enter image description here"></a></p>
<p>Possible causes for the printer not printing correct dimensions:</p> <ul> <li>Incorrect number of steps/mm in firmware settings</li> <li>Belts are not tight enough</li> <li>Pulley slips on the shaft</li> </ul> <p>Looking at the picture, I would go for the first case, because distortion looks regular. Try checking microstep settings on your board, and settings in the firmware.</p>
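<p>If you suspect the first cause, you can sanity-check the theoretical steps/mm for a belt-driven axis with a quick calculation; the motor and pulley numbers below are common defaults (1.8° stepper, 1/16 microstepping, GT2 belt, 20-tooth pulley), not necessarily what your printer uses:</p>

```python
def belt_steps_per_mm(full_steps_per_rev=200, microsteps=16,
                      belt_pitch_mm=2.0, pulley_teeth=20):
    """Steps/mm for a belt axis: steps per motor turn / belt moved per turn."""
    mm_per_rev = belt_pitch_mm * pulley_teeth   # 2 mm GT2 * 20 teeth = 40 mm per turn
    return full_steps_per_rev * microsteps / mm_per_rev

print(belt_steps_per_mm())                      # 80.0, the usual GT2/20T value at 1/16
```

<p>If the value stored in your firmware (Marlin reports it with <code>M503</code>) differs from the theoretical one by a factor of 2, 4, etc., a wrong microstepping setting on the driver is a likely culprit.</p>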
<p>This could be because your belts are either old or not tight enough. Try adjusting or replacing the belts and try again.</p>
685
<p>My company has a common code library which consists of many class libary projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references. </p> <p>Some arguments in favor of project references:</p> <ul> <li>Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.</li> <li>Project references would assist in keeping up with common component changes committed to the source control system as changes would be easily identifiable without the active solution.</li> </ul> <p>Some arguments in favor of binary references:</p> <ul> <li>Binary references would simplify solutions and make for faster solution loading times.</li> <li>Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.</li> <li>Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.</li> <li>Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone. </li> <li>Binary references will ensure that concurrent development on the class library project will have no impact on the consuming application as a stable version of the binary will be referenced rather than an influx version. It would be the decision of the project lead whether or not to incorporate a newer release of the component if necessary.</li> </ul> <p>What is your policy/preference when it comes to using project or binary references?</p>
<p>It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.</p> <p>However, one thing we've looked into is to reference the binary files, to gain all the advantages you note, but have the binaries built by a common build system where the source code is in a common location, accessible from all developer machines (at least if they're sitting on the network at work), so that any debugging can in fact dive into library code, if necessary.</p> <p>However, on the same note, we've also tagged a lot of the base classes with appropriate attributes in order to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would only be vastly outsized by code from the base libraries. This way when you hit the <em>Step Into</em> debugging shortcut key on a library class, you resurface into the next piece of code at your current level, instead of having to wade through tons of library code.</p> <p>Basically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.</p> <p>Also, if I load the global solution file, that contains all the projects and basically, just everything, ReSharper 4 seems to have some kind of coronary problem, as Visual Studio practically comes to a stand-still.</p>
<p>I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion.</p> <p>In short, I separate it by concept.</p>
6,875
<p>I'm a software engineer, not a lawyer, and my university doesn't offer any courses geared toward licensing software. In fact, their law-related courses are lacking (but slowly growing in number). Where can I go to learn about open-source licenses and how to choose them?</p>
<p>There are lots described here: <a href="http://www.gnu.org/licenses/license-list.html#SoftwareLicenses" rel="noreferrer">http://www.gnu.org/licenses/license-list.html#SoftwareLicenses</a></p> <p>The decision of which one to use can be political, but should ultimately be determined by your plans/desires for the software. If you want to ensure it is always free then choose GPL or another "Copyleft" license. If you don't mind some commercial use, choose another one that's compatible with that. </p>
<p>We had a similar dilemma. At our company we decided to invest lots of time in a framework, with the eventual hope of releasing it to the open source community. The business is built using open source tools (Apache, PHP, etc.); it was time to give back. We decided on an LGPL/MPL dual license. That way, we could incorporate fixes/improvements from the community, while still protecting applications (particularly ours) running on top of it from being forced to go open source as well.</p>
6,330
<p>I'm using ADO.NET to access SQL Server 2005 and would like to be able to log from inside the T-SQL stored procedures that I'm calling. Is that somehow possible?</p> <p>I'm unable to see output from the 'print'-statement when using ADO.NET and since I want to use logging just for debuging the ideal solution would be to emit messages to DebugView from SysInternals.</p>
<p>I solved this by writing a SQLCLR procedure as Eric Z Beard suggested. The assembly must be signed with a strong name key file.</p> <pre><code>using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; public partial class StoredProcedures { [Microsoft.SqlServer.Server.SqlProcedure] public static int Debug(string s) { System.Diagnostics.Debug.WriteLine(s); return 0; } } </code></pre> <p>Created a key and a login:</p> <pre><code>USE [master] CREATE ASYMMETRIC KEY DebugProcKey FROM EXECUTABLE FILE = 'C:\..\SqlServerProject1\bin\Debug\SqlServerProject1.dll' CREATE LOGIN DebugProcLogin FROM ASYMMETRIC KEY DebugProcKey GRANT UNSAFE ASSEMBLY TO DebugProcLogin </code></pre> <p>Imported it into SQL Server:</p> <pre><code>USE [mydb] CREATE ASSEMBLY SqlServerProject1 FROM 'C:\..\SqlServerProject1\bin\Debug\SqlServerProject1.dll' WITH PERMISSION_SET = unsafe CREATE FUNCTION dbo.Debug( @message as nvarchar(200) ) RETURNS int AS EXTERNAL NAME SqlServerProject1.[StoredProcedures].Debug </code></pre> <p>Then I was able to log in T-SQL procedures using</p> <pre><code>exec Debug @message = 'Hello World' </code></pre>
<p>You could use output variables for passing back messages, but that relies on the proc executing without errors.</p> <pre><code>create procedure usp_LoggableProc @log varchar(max) OUTPUT as -- T-SQL statement here ... select @log = @log + 'X is foo' </code></pre> <p>And then in your ADO code somewhere:</p> <pre><code>string log = (string)SqlCommand.Parameters["@log"].Value; </code></pre> <p>You could use raiserror to create your own custom errors with the information that you require, and that will be available to you through the usual SqlException Errors collection in your ADO code:</p> <pre><code>RAISERROR('X is Foo', 10, 1) </code></pre> <p>Hmm, but just for debugging and in your situation, I can't help feeling you could simply insert varchar messages into an error table like the others have suggested and <code>select * from</code> it when you're debugging.</p>
8,313
<p>In .NET there are two version numbers available when building a project, File Version and Assembly Version. How are you using these numbers? Keeping them the same? Auto-incrementing one, but manually changing the other?</p> <p>Also what about the <code>AssemblyInformationalVersion</code> attribute?</p> <p>I'd found this support Microsoft Knowledge Base (KB) article that provided some help: <a href="http://support.microsoft.com/kb/556041" rel="noreferrer">How to use Assembly Version and Assembly File Version</a>.</p>
<p>In solutions with multiple projects, one thing I've found very helpful is to have all the AssemblyInfo files point to a single project that governs the versioning. So my AssemblyInfos have a line:</p> <pre><code>[assembly: AssemblyVersion(Foo.StaticVersion.Bar)] </code></pre> <p>I have a project with a single file that declares the string:</p> <pre><code>namespace Foo { public static class StaticVersion { public const string Bar= "3.0.216.0"; // 08/01/2008 17:28:35 } } </code></pre> <p>My automated build process then just changes that string by pulling the most recent version from the database and incrementing the second last number.</p> <p>I only change the Major build number when the featureset changes dramatically.</p> <p>I don't change the file version at all.</p>
<p>I keep them the same. But then, I don't have multifile assemblies, which is when the AssemblyVersion number becomes important. I use Microsoft-style date encoding for my build numbers, rather than auto-incrementing (I don't find the number of times that something has been built to be all that important).</p>
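<p>For reference, the "Microsoft-style date encoding" used when you write <code>AssemblyVersion("1.0.*")</code> is commonly described as: build = days since 1 January 2000, revision = seconds since local midnight divided by two. A small sketch of that scheme (the epoch and divisor here are assumptions based on common documentation, so verify against your own builds):</p>

```python
from datetime import datetime

def auto_build_revision(now):
    """Compute (build, revision) the way csc's AssemblyVersion 1.0.* is
    commonly described: days since 2000-01-01, and seconds since local
    midnight divided by two."""
    epoch = datetime(2000, 1, 1)
    build = (now.date() - epoch.date()).days
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    revision = int((now - midnight).total_seconds()) // 2
    return build, revision

# Example: a build started at 2008-08-01 17:28:35 would map to:
build, revision = auto_build_revision(datetime(2008, 8, 1, 17, 28, 35))
print(f"1.0.{build}.{revision}")
```
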
3,556
<p>What is the minimum I need to add to a .NET 2.0 WebSite's web.config to make it .NET 3.5? </p> <p>Visual Studio adds all the config sections and script handlers, but if you aren't using those are they are really necessary?</p> <p>Is there a command line tool to "upgrade" a .NET 2.0 web.config to 3.5?</p>
<p>There is a good description of the 3.5 web.config available here: <a href="https://web.archive.org/web/20211020153237/https://www.4guysfromrolla.com/articles/121207-1.aspx" rel="nofollow noreferrer">https://web.archive.org/web/20211020153237/https://www.4guysfromrolla.com/articles/121207-1.aspx</a></p> <p>The assemblies and config sections are important because they tell the runtime to use the new 3.5 dlls instead of the 2.0 dlls</p> <p>The <em>codedom</em> section tells the compiler to use 3.5.</p> <p>If you're not using ASP.Net Ajax you can probably skip the rest. I've never tested that though.</p>
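<p>For reference, the pieces that actually retarget the runtime are fairly small. A rough sketch of the relevant fragments (a sketch only — the version numbers and public key token should be checked against your installed framework, and the compiler <code>type</code> attribute is elided here):</p>

```xml
<configuration>
  <system.web>
    <compilation>
      <assemblies>
        <!-- Pulls in the 3.5 assemblies (System.Core shown as an example) -->
        <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
      </assemblies>
    </compilation>
  </system.web>
  <system.codedom>
    <compilers>
      <!-- Tells the compiler to target C# 3.0 -->
      <compiler language="c#;cs;csharp" extension=".cs" type="..." warningLevel="4">
        <providerOption name="CompilerVersion" value="v3.5"/>
      </compiler>
    </compilers>
  </system.codedom>
</configuration>
```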
<p>It depends on which features you want to include. Most of the 3.5 ASP.NET extensions are optional. You will want to include the assembly for System.Core and System.Xml.Linq. You will also need to add compiler support for C# 3.0 if you plan to use that in your code behind. If you're deploying to IIS 7 there are HTTP handlers for the ASP.NET extensions and script modules.</p>
5,408
<p>What are some resources for getting started writing a Firefox Addon? Is there an API guide somewhere? Is there a getting started tutorial somewhere? Is there a developer discussion board somewhere?</p>
<p>We tried to make <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions</a> answer all those questions. The first three links in the documentation section are about getting started (that includes something like Adam's link, before it became stale). The newsgroup and the irc channel in the Community section are the official discussion boards.</p> <p>Mozilla is very complex, so any kind of API guide would be overwhelming and hard to write. So your best bet is to check the code snippets page (also linked from the MDC Extensions page), then search MDC/google, then ask in the forums.</p>
<p>This has the best solutions: <a href="https://developer.mozilla.org/en/Extensions" rel="nofollow">https://developer.mozilla.org/en/Extensions</a>, but you could also try the Greasemonkey script compiler.</p>
4,441
<p>According to the documentation, they're pretty much interchangeable. Is there a stylistic reason to use one over the other?</p>
<p>I like to use double quotes around strings that are used for interpolation or that are natural language messages, and single quotes for small symbol-like strings, but will break the rules if the strings contain quotes, or if I forget. I use triple double quotes for docstrings and raw string literals for regular expressions even if they aren't needed.</p> <p>For example:</p>

<pre><code>LIGHT_MESSAGES = {
    'English': "There are %(number_of_lights)s lights.",
    'Pirate':  "Arr! Thar be %(number_of_lights)s lights."
}

def lights_message(language, number_of_lights):
    """Return a language-appropriate string reporting the light count."""
    return LIGHT_MESSAGES[language] % locals()

def is_pirate(message):
    """Return True if the given message sounds piratical."""
    return re.search(r"(?i)(arr|avast|yohoho)!", message) is not None
</code></pre>
<p><code>'</code> = <code>"</code></p>

<p><code>/</code> = <code>\</code> = <code>\\</code></p>

<p>example:</p>

<pre><code>f = open('c:\word.txt', 'r')
f = open("c:\word.txt", "r")
f = open("c:/word.txt", "r")
f = open("c:\\word.txt", "r")
</code></pre>

<p>Results are the same</p>

<p>=>> no, they're not the same. A single backslash will escape characters. You just happen to luck out in that example because <code>\k</code> and <code>\w</code> aren't valid escapes like <code>\t</code> or <code>\n</code> or <code>\\</code> or <code>\"</code></p>

<p>If you want to use single backslashes (and have them interpreted as such), then you need to use a "raw" string. You can do this by putting an '<code>r</code>' in front of the string</p>

<pre><code>im_raw = r'c:\temp.txt'
non_raw = 'c:\\temp.txt'
another_way = 'c:/temp.txt'
</code></pre>

<p>As far as paths in Windows are concerned, forward slashes are interpreted the same way. Clearly the string itself is different though. I wouldn't guarantee that they're handled this way on an external device though.</p>
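<p>The raw-string point is easy to check for yourself; a quick sketch (note that newer Python versions warn about unrecognized escapes like <code>\w</code>):</p>

```python
# \w is not a recognized escape sequence, so Python keeps the backslash --
# that's the "luck" described above.
lucky = 'c:\word.txt'
explicit = 'c:\\word.txt'
raw = r'c:\word.txt'

print(lucky == explicit == raw)   # True: all three are the same string

# \t IS a recognized escape (a tab character), so the luck runs out:
print('c:\temp.txt' == r'c:\temp.txt')   # False
```
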
7,987
<p>While plenty of solutions exist for entering dates (such as calendars, drop-down menus, etc.), it doesn't seem like there are too many "standard" ways to ask for a time (or time range).</p> <p>I've personally tried drop-down menus for the hour, minute, and second fields (and sometimes an "AM/PM" field, as well). I've also tried several clock-like input devices, most of which are too hard to use for the typical end-user. I've even tried "pop-out" time selection menus (which allow you to, for example, hover over the hour "10" to receive a sub-menu that contains ":00",":15",":30", and ":45") -- but none of these methods seem natural.</p> <p>So far, the best (and most universal) method I have found is just using simple text fields and forcing a user to manually populate the hour, minute, and second. Alternatively, I've had good experiences creating something similar to Outlook's "Day View" which allows you to drag and drop an event to set the start and end times.</p> <p>Is there a "best way" to ask for this information? Is anybody using some type of time input widget that's really intuitive and easy to use? Or is there at least a way that's more efficient than using plain text boxes?</p>
<p>What I found works best is to really learn CSS. I mean <strong>really</strong> learn CSS.</p> <p>It can be a confusing language to learn, but if you read enough about it and practice, eventually you'll learn the best way to do things.</p> <p>The key is to do it enough that it comes natural. CSS can be very elegant if you know what you want to do before you start and you have enough experience to do it.</p> <p>Granted, it is also a major PITA to do sometimes, but even cross-browser issues aren't so bad if you <strong>really</strong> practice at it and learn what works and what doesn't, and how to get around problems.</p> <p>All it takes is practice and in time you can become good at it.</p>
<p>CSS takes a bit of time to learn, but the thing I initially found most discouraging was the fact that so many hacks were needed to get all browsers to behave the same way. Learning a system which doesn't adhere to logic seems dumb... but I've clung to the vague belief that there is logic behind each browser's idiosyncrasy, in the form of the W3 spec. It seems that the new generation browsers are slowly coming into line - but IE6 still makes my life hell on a daily basis.</p> <p>Maybe creating an abstraction layer between compliant/valid CSS code and the browsers' shoddy implementations wouldn't be a bad thing. But if such a thing was created - would it need to be powered by JS (or jQuery)? (and would that create an unreasonable burden, in terms of processing cost?)</p> <p>I've found it useful to 'level the ground' when scripting with CSS. There are probably loads of different flavours of reset script out there - but using YUI resets has helped me to reduce the number of quirks I'd otherwise encounter - and YUI grids make life a little easier sometimes. </p>
4,508
<p>Suppose I want to to print several copies of a large, flat object, such as a hollow frame. If I make an STL file consisting of several copies floating just above each other then my 3D printer's software happily generates supports for the upper objects to sit on, as shown below:</p> <p><a href="https://i.stack.imgur.com/XbxbGm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XbxbGm.png" alt="enter image description here"></a></p> <p>My question is, is this a crazy idea? If it will work it seems it has several advantages. I can print more copies at once than if I tried to fit them next to each other on the platform, and if one of the prints fails the ones below it will still be OK. It doesn't take that much more filament than printing them all separately.</p> <p>I'll probably try it at some point just to see what happens, but it's not something I've heard of people doing before, so I'm wondering if there's a good reason why it won't work or would otherwise not be a good strategy.</p>
<p>According to the RepRap.org <a href="http://reprap.org/wiki/G-code" rel="nofollow noreferrer">list of G-Code commands</a>, see <strong><a href="http://reprap.org/wiki/G-code#G0_.26_G1:_Move" rel="nofollow noreferrer">G0 &amp; G1: Move</a></strong>:</p> <blockquote> <p>The <code>Ennn</code> command is <em>The amount to extrude between the starting point and ending point</em>. </p> </blockquote> <p>However, according to <strike><a href="https://github.com/daid/Cura/issues/385" rel="nofollow noreferrer">this</a></strike> a discussion, that is now deleted from GitHub, about the Cura slicing engine:</p> <p>The <strong>E</strong> values are in <em>absolute mode</em>, so perhaps the firmware is attempting to move the stepper motor to the absolute position (which is almost 50% through your print). This may lead to clogging or skipping depending on how hot your extruder is at that point.</p> <p>As a last resort, you can perform a Boolean subtract on your model of the section that's already printed and re-slice the model to print the remaining bit. Then glue, or ABS weld, the remaining piece to the main print. I've done this in the past, it's not super glamorous, but it gets the job done if the part doesn't require a lot of structural integrity.</p> <p><strong>I was incorrect with the following statements with regard to the Cura slicing engine:</strong> </p> <p><s>It's been a while since I've looked at 3D printer G-Code, but from what I remember, <strong>E</strong> values can be the bane of any manually written G-Code. Usually the slicing engine generates the <strong>E</strong> value as an incremental step value throughout the G-Code (at least this was true for Skeinforge and early MakerWare, please verify this). 
So, if the value is incremental and depending on the controller, this value could be lost or corrupt if a new print is initialized.</p> <p>I would hope, that if you're using a slicing engine's <em>custom G-Code</em> input, that the software would be able to compensate situations like this and reformat your provided G-Code to match the value of <strong>E</strong> or any similar command.</s></p>
<p>Is the hotend temperature set correctly? If you only preheat the hotend but then turn it off, it will behave exactly as you described (while you're above <code>EXTRUDE_MINTEMP</code> it will behave normally, but once the temperature drops too low it will continue doing the XYZ-moves, but stop extruding). Perhaps you need to add an <code>M109</code> command to set the temperature properly.</p> <p><a href="https://3dprinting.stackexchange.com/questions/850/no-extrusion-when-trying-to-resume-failed-print-with-manually-edited-gcode/851#851">In his answer</a>, tbm0115 mentions absolute and relative coordinates. Whether absolute or relative coordinates are used is set using the <code>G90</code>/<code>G91</code> commands and will always be the same for all axes (XYZ and E). He also mentions that:</p> <blockquote> <p>so perhaps the firmware is attempting to move the stepper motor to the absolute position (which is almost 50% through your print). This may lead to clogging or skipping depending on how hot your extruder is at that point.</p> </blockquote> <p>This shouldn't happen, as extremely long extrude moves are ignored by Marlin. When the printer encounters</p> <pre><code>G1 F3600 X210.390 Y105.771 E4104.65185 </code></pre> <p>it performs only the XYZ-part of the move, but doesn't move the extruder (assuming the printer was reset and the current extruder position is 0). However, it does update the internal value for the extruder position, so the next move happens normally. Perhaps missing the initial extrude segment isn't a big deal, but if you want the print to resume perfectly where you left off, you should add a <code>G92</code> command to the beginning to initialize the extruder position correctly, for example:</p> <pre><code>G92 E4104.4 </code></pre> <p>(but the exact value depends on the last extruder position in the previous layer).</p>
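<p>The value for that <code>G92</code> line can be recovered from the sliced file automatically. A minimal sketch (this assumes absolute extrusion and Marlin-style <code>G1 ... E&lt;value&gt;</code> moves, and you'd scan only the part of the file that has already printed):</p>

```python
import re

def last_e_value(gcode_lines):
    """Return the last absolute E coordinate seen, or None if no
    extrusion move was found."""
    e = None
    for line in gcode_lines:
        line = line.split(';', 1)[0]                # strip comments
        m = re.search(r'\bE(-?\d+(?:\.\d+)?)', line)
        if m:
            e = float(m.group(1))
    return e

already_printed = [
    "G28 ; home",
    "G1 F3600 X210.390 Y105.771 E4104.40",
    "G1 X211.0 Y106.0 ; travel move, no extrusion",
]
e = last_e_value(already_printed)
print(f"G92 E{e}")   # prepend this line to the resumed G-code
```
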
232
<p>I have never used octal numbers in my code nor come across any code that used it (hexadecimal and bit twiddling notwithstanding).</p> <p>I started programming in C/C++ about 1994 so maybe I'm too young for this? Does older code use octal? C includes support for these by prepending a 0, but where is the code that uses these base 8 number literals?</p>
<p>I recently had to write network protocol code that accesses 3-bit fields. Octal comes in handy when you want to debug that. </p> <p>Just for effect, can you tell me what the 3-bit fields of this are?</p> <pre><code>0x492492 </code></pre> <p>On the other hand, this same number in octal:</p> <pre><code>022222222 </code></pre> <p>Now, finally, in binary (in groups of 3):</p> <pre><code>010 010 010 010 010 010 010 010 </code></pre>
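<p>The regrouping is mechanical, which is exactly why octal fits 3-bit fields so well; a quick sketch to double-check the numbers above:</p>

```python
def three_bit_fields(value, width=24):
    """Split value into 3-bit fields, most significant field first."""
    return [(value >> shift) & 0b111 for shift in range(width - 3, -3, -3)]

n = 0x492492
print(oct(n))                  # 0o22222222 -- one octal digit per field
print(three_bit_fields(n))     # [2, 2, 2, 2, 2, 2, 2, 2]
```
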
<p>Octal is and was most useful with the first available display hardware (7-segment displays). These original displays did not have the decoders available later.</p> <p>Thus the digital register outputs were grouped to fit the available display, which was capable of only displaying eight (8) symbols: 0, 1, 2, 3, 4, 5, 6, 7.</p> <p>Also the first CRT display tubes were raster scan displays, and the simplest character-symbol generators were equivalent to the 7-segment displays.</p> <p>The motivating driver was, as always, the least expensive display possible. </p>
6,635
<p>I'm still putting aside money for buying my first 3D printer and I'm designing all the things I have to print. </p> <p>As I have <strong>a lot of things</strong> to print I would like to print as many of them as possible in a single print.</p> <p>I have some arcs of a circle (between 90° and 320°, and 10 to 30 cm in diameter) and I'm going to slice them so I'll be able to print many of them, vertically, in a single print. I read on the internet that I cannot print over 45°... but starting from where?</p> <p>The image below shows how I would like to print my (orange) things</p> <p><a href="https://i.stack.imgur.com/wNnOf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wNnOf.png" alt="Illustration of vertical arcs"></a></p> <p>I suppose I can print my things without supports because from Y-start to Y-end they are &lt;= 45° (as the green line shows), is that right?</p> <p>The red line, instead, shows a case where the angle, starting from a (Y: 50%) point, is higher than 45°.</p> <p><strong>So the question is</strong>: Can I print my things in such a way?</p>
<p>Again, I'm not a specialist in intellectual property law (or any other sort of lawyer), but I have a different interpretation.</p> <p>There is a potential difference between the use of the product, and the use of the derivatives of the product. For example, GCC has a restrictive (open) license, but it's OK (in some contexts) to use the compiled code in a commercial product.</p> <p>Generally, you (as creator of a work) will own all of the rights to a work. The supplier of the tools you use does not <em>usually</em> gain any rights to the resulting work.</p> <p>So long as <em>you</em> don't gain from your use of the tool, it is probably OK for other people to profit from the use. Sharing with non-commercial should be your safest option if you're worried - but be aware that this doesn't prevent people from taking your design to a print service indirectly.</p> <p>Their <a href="http://www.sketchup.com/license/d/sketchup" rel="nofollow noreferrer">license</a> says:</p> <blockquote> <p>Non-commercial use means that you may not sell, rent, or lease the output of the Software</p> </blockquote> <p>This implies that they are not making any claims to restrict how other non-connected persons might use your works. It's specifically talking about use as part of employment, or you making a gain.</p> <p>I would also note that to me the language in the license does not look to have been drafted with a clear understanding of intent on this issue.</p>
<p>IANAL, naturally, so take this with that in mind.</p> <p>From what I've read and been instructed, any software which is licenced as non-commercial cannot be redistributed or incorporated into any commercial software product. The other thing about "noncommercial" software licenses is that they can't be used for corporate design work -- as opposed to home user hacking. </p> <p>Now, all you need to do is ensure that the license restrictions remain with the mixup files you post. There is no doubt that, so long as you do that, you are free from any legal problems that a downstream vendor may incur upon trying to sell the hardware produced. </p>
512
<p>This question would probably apply equally as well to other languages with C-like multi-line comments. Here's the problem I'm encountering. I'm working with Java code in Eclipse, and I wanted to comment out a block of code. However, there is a string that contains the character sequence "*/", and Eclipse thinks that the comment should end there, even though it is inside a string. It gives me tons of errors and fails to build.</p>

<pre><code>/* ... some Java code ...
... "... */ ..." ...
... more Java code ...
*/
</code></pre>

<p>Does the Java specification match with Eclipse's interpretation of my multi-line comment? I would like to think that Java and/or Eclipse would account for this sort of thing.</p>
<p>Eclipse is correct. There is no interpretation context inside a comment (no escaping, etc). See <a href="http://java.sun.com/docs/books/jls/second_edition/html/lexical.doc.html#48125" rel="noreferrer">JLS §3.7</a>.</p>
<p>A simple test shows Eclipse is correct:</p>

<pre><code>public class Test {
    public static final void main(String[] args) throws Exception {
        String s = "This is the original string.";
        /* This is commented out.
        s = "This is the end of a comment: */ ";
        */
        System.out.println(s);
    }
}
</code></pre>

<p>This fails to compile with:</p>

<pre><code>Test.java:5: unclosed string literal
        s = "This is the end of a comment: */ ";
</code></pre>
5,289
<p>How do I get the key of the current element in a <code>foreach</code> loop in C#?</p>

<p>For example:</p>

<h2>PHP</h2>

<pre><code>foreach ($array as $key =&gt; $value) {
    echo("$value is assigned to key: $key");
}
</code></pre>

<h3>What I'm trying to do in C#:</h3>

<pre><code>int[] values = { 5, 14, 29, 49, 99, 150, 999 };

foreach (int val in values) {
    if(search &lt;= val &amp;&amp; !stop) {
        // Set key to a variable
    }
}
</code></pre>
<p><a href="https://stackoverflow.com/questions/60032/getting-the-array-key-in-a-foreach-loop#60035">Grauenwolf's way</a> is the most straightforward and performant way of doing this with an array:</p> <blockquote> <p>Either use a for loop or create a temp variable that you increment on each pass.</p> </blockquote> <p>Which would of course look like this:</p>

<pre><code>int[] values = { 5, 14, 29, 49, 99, 150, 999 };

for (int key = 0; key &lt; values.Length; ++key)
    if (search &lt;= values[key] &amp;&amp; !stop)
    {
        // set key to a variable
    }
</code></pre>

<p>With .NET 3.5 you can take a more functional approach as well, but it is a little more verbose at the site, and would likely rely on a couple <a href="http://en.wikipedia.org/wiki/Apply" rel="noreferrer">support functions</a> for <a href="http://en.wikipedia.org/wiki/Visitor_pattern" rel="noreferrer">visiting</a> the elements in an IEnumerable. Overkill if this is all you need it for, but handy if you tend to do a lot of collection processing.</p>
<p><code>myKey = Array.IndexOf(values, val);</code></p>
8,481
<p>In the PHP manual, to show the syntax for functions with optional parameters, they use brackets around each set of dependent optional parameter. For example, for the <a href="http://php.net/date" rel="noreferrer"><code>date()</code></a> function, the manual reads:</p> <pre><code>string date ( string $format [, int $timestamp = time() ] ) </code></pre> <p>Where <code>$timestamp</code> is an optional parameter, and when left blank it defaults to the <a href="http://php.net/time" rel="noreferrer"><code>time()</code></a> function's return value.</p> <p>How do you go about creating optional parameters like this when defining a custom function in PHP?</p>
<p>Much like the manual, use an equals (<code>=</code>) sign in your definition of the parameters:</p>

<pre><code>function dosomething($var1, $var2, $var3 = 'somevalue') {
    // Rest of function here...
}
</code></pre>
<p>Starting with PHP 7.1 there is type hinting for nullable parameters:</p>

<pre><code>function func(?Object $object) {}
</code></pre>

<p>It will work for these cases:</p>

<pre><code>func(null);         // as nullable parameter
func(new Object()); // as parameter of declared type
</code></pre>

<p>But for an optional value, the signature should look like:</p>

<pre><code>function func(Object $object = null) {}  // In case of objects
function func(?Object $object = null) {} // or the same with nullable parameter
function func(string $object = '') {}    // In case of scalar type - string, with a string value as default
function func(string $object = null) {}  // In case of scalar type - string, with null as default value
function func(?string $object = '') {}   // or the same with nullable parameter
function func(int $object = 0) {}        // In case of scalar type - integer, with an integer value as default
function func(int $object = null) {}     // In case of scalar type - integer, with null as default value
function func(?int $object = 0) {}       // or the same with nullable parameter
</code></pre>

<p>Then it can be invoked as:</p>

<pre><code>func();             // as optional parameter
func(null);         // as nullable parameter
func(new Object()); // as parameter of declared type
</code></pre>
5,524
<p>Has anyone built a website with IronPython and ASP.NET. What were your experiences and is the combination ready for prime-time?</p>
<p>The current version of ASP.NET integration for IronPython is not very up-to-date and is more of a "proof-of-concept." I don't think I'd build a production website based on it.</p> <p><strong>Edit:</strong> I have a very high level of expectation for how things like this should work, and might be setting the bar a little high. Maybe you should take what's in "ASP.NET Futures", write a test application for it and see how it works for you. If you're successful, I'd like to hear about it. Otherwise, I think there should be a newer CTP of this in the next six months.</p> <p>(I'm a developer on IronPython and IronRuby.)</p> <p><strong>Edit 2:</strong> Since I originally posted this, a <a href="http://www.codeplex.com/aspnet/Wiki/View.aspx?title=Dynamic%20Language%20Support" rel="nofollow noreferrer">newer version</a> has been released.</p>
<p>Keep a look out for ASP.NET MVC</p> <p>The IronRuby guys have got some internal builds of MVC to work with IronRuby, and IronPython 2 and IronRuby have a lot of code in common with the DLR.</p> <p>I'm not sure if they'll support IronPython/IronRuby when MVC is released, but it's definitely worth keeping your eye on anyway - The old ASP.NET forms-based development model is old, busted, and the sooner it goes away the better.</p>
3,379
<p>I am trying to dynamically compile code using CodeDom. I can load other assemblies, but I cannot load System.Data.Linq.dll. I get an error:</p> <p>Metadata file 'System.Data.Linq.dll' could not be found</p> <p>My code looks like:</p>

<pre><code>CompilerParameters compilerParams = new CompilerParameters();
compilerParams.CompilerOptions = "/target:library /optimize";
compilerParams.GenerateExecutable = false;
compilerParams.GenerateInMemory = true;
compilerParams.IncludeDebugInformation = false;
compilerParams.ReferencedAssemblies.Add("mscorlib.dll");
compilerParams.ReferencedAssemblies.Add("System.dll");
compilerParams.ReferencedAssemblies.Add("System.Data.Linq.dll");
</code></pre>

<p>Any ideas? </p>
<p>That may be because this assembly is stored in a different location than mscorlib is. It should work if you provide a full path to the assembly. The most convenient way to get the full path is to let the .NET loader do the work for you. I would try something like this:</p> <pre><code>compilerParams.ReferencedAssemblies.Add(typeof(DataContext).Assembly.Location); </code></pre>
<p>This may be a silly answer, but are you sure the code is running on a machine with .NET Framework 3.5?</p>
8,577
<p>I have had my Ender 3 Pro for about 2 years now and it has been working amazingly!</p> <p>Since November it's struggling very much while printing. I am quite sure the problem lies in the extruder motor.</p> <p>About 1 in 5 prints comes out OK. The main problem is that the first 2-3 layers are all good. But as the prints develop, under-extrusion problems appear. It's not a clog, because I can push the filament and it flows smoothly. It's like the motor cannot push the filament to the feeding line.</p> <p>Things I've tried so far:</p> <ul> <li>Upgrading the extruder from plastic to aluminum (verified the tension, not too loose, not too tight)</li> <li>Changed hotend (PTFE, nozzle, block, etc)</li> <li>E-steps calibrated</li> <li>Tried switching MicroSD</li> <li>3 different brands of filament</li> </ul> <p>I noticed that the motor is getting pretty hot. I mean, you can't have your hand in there for more than a second. My theory is that this overheating softens the filament and it cannot feed. Is this possible? The aluminum extruder also gets pretty hot. There is no clicking sound or anything weird while printing, the first layers come out perfectly ok.</p>
<p>There are a few probable causes I can think of based on your description. That it only starts happening after the printer has been running for a bit makes me think it's a heat issue.</p> <p>The first problem I'd check out is that extruder motor getting that hot. It should be warm, but not so hot it's uncomfortable to hold. Either the stepper is defective, or you have the current to it set too high. There is a small screw on the motherboard beside each motor driver that allows you to adjust the current to each stepper. Find a guide for your printer to adjust it properly. You'll likely need a multimeter to do it right.</p> <p>The current being too high on the extruder could also cause the stepper driver (a chip on the mainboard) to overheat. When those overheat they go into thermal shutdown and will stop driving the extruder for a moment until they cool off enough to resume. If your printer has a fan for the mainboard, make sure that is functioning properly as well. Those chips <em>do</em> normally get hot enough they are uncomfortable to touch, so to check if that's the problem you'd need a thermometer and to check what model they are to see what their maximum operating temperature is.</p> <p>If fixing that doesn't solve it, it could be the heatsink on your hotend is getting hot enough that filament starts melting there, you can get a clog that stops the extruder from pushing filament out properly, but still feels like it can be extruded by hand. Make sure the heatsink on the hotend has sufficient cooling. After a bit of printing try touching the top of the heatsink - if it's uncomfortable to touch, you have a problem there.</p>
<blockquote> <p>I noticed that the motor is getting pretty hot.</p> </blockquote> <p>That's not normal. Replacement motors are cheap and easy to find, so I'd swap out the motor before doing anything else. If the new one also heats up the same way, there might be a problem with the main board, but I'll bet a new motor will solve the problem.</p>
2,086
<p>Is there a way to draw a line along a curved path with a gradient that varies in a direction perpendicular to the direction of the line? I am using the GDI+ framework for my graphics.</p>
<p>The simple answer is no. You can create a GraphicsPath in order to describe what you would like to draw, using AddPoint/AddLine/AddBezier and so forth as needed to describe the complex path of what you want to draw. When you draw the path you can provide a Brush which can be something like LinearGradientBrush or RadialGradientBrush. Neither of those gradient brushes reacts to the actual path being drawn in the sense of changing direction as the drawing occurs. You have to specify the angles etc as constant for the entire gradient area.</p>
<p>One possible method you can use is to set the clip region of the Graphics object to be that of the line only. Then draw a Linear Gradient over the extremes of the line e.g.</p>

<pre><code>GraphicsPath gp = new GraphicsPath();
gp.AddArc(); // etc...
graphics.SetClip( gp );
graphics.FillRectangle( myLinearGradientBrush, gp.GetBounds());
</code></pre>

<p>The above code might give you what you are looking for. </p>
8,824
<p>What are some guidelines for maintaining responsible session security with PHP? There's information all over the web and it's about time it all landed in one place!</p>
<p>There are a couple of things to do in order to keep your session secure:</p>
<ol>
<li>Use SSL when authenticating users or performing sensitive operations.</li>
<li>Regenerate the session id whenever the security level changes (such as logging in). You can even regenerate the session id every request if you wish.</li>
<li>Have sessions time out</li>
<li>Don't use register globals</li>
<li>Store authentication details on the server. That is, don't send details such as username in the cookie.</li>
<li>Check the <code>$_SERVER['HTTP_USER_AGENT']</code>. This adds a small barrier to session hijacking. You can also check the IP address. But this causes problems for users that have a changing IP address due to load balancing on multiple internet connections etc (which is the case in our environment here).</li>
<li>Lock down access to the sessions on the file system or use custom session handling</li>
<li>For sensitive operations consider requiring logged in users to provide their authentication details again</li>
</ol>
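<p>Point 6 (binding the session to the user agent) boils down to storing a fingerprint at login and comparing it on every request. A sketch of the idea — in Python rather than PHP, purely to show the shape of the check; the salted-hash scheme here is an illustration, not a specific recommendation:</p>

```python
import hashlib

SERVER_SECRET = "some-server-side-salt"   # hypothetical; keep out of the client

def fingerprint(user_agent):
    # Hash rather than store the raw header, so a leaked session store
    # doesn't hand an attacker the exact string to replay.
    return hashlib.sha256((SERVER_SECRET + user_agent).encode()).hexdigest()

def on_login(session, user_agent):
    session["ua_fp"] = fingerprint(user_agent)

def check_request(session, user_agent):
    return session.get("ua_fp") == fingerprint(user_agent)

session = {}
on_login(session, "Mozilla/5.0 (X11; Linux x86_64)")
print(check_request(session, "Mozilla/5.0 (X11; Linux x86_64)"))  # True
print(check_request(session, "curl/8.0"))                         # False
```
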
<p>This is pretty trivial and obvious, but be sure to <a href="http://www.php.net/session_destroy" rel="nofollow noreferrer">session_destroy</a> after every use. This can be difficult to implement if the user does not log out explicitly, so a timer can be set to do this.</p> <p>Here is a good <a href="http://www.elated.com/articles/javascript-timers-with-settimeout-and-setinterval/" rel="nofollow noreferrer">tutorial</a> on setTimeout() and clearTimeout().</p>
2,281
<p>Cura does not seem to let the full print area be used. My printer is a <a href="https://www.lulzbot.com/store/printers/lulzbot-mini" rel="noreferrer">Lulzbot Mini</a>. The design illustrated below can be found <a href="https://www.tinkercad.com/things/hBE6Aj2EJMo-skyrail-marble-coster-banked-curve-beta" rel="noreferrer">here</a>.</p> <p><a href="https://i.imgur.com/y1WpAws.png" rel="noreferrer"> <img src="https://imgur.com/y1WpAws.png" /> </a> <a href="https://i.imgur.com/42n8npt.png" rel="noreferrer"> <img src="https://i.imgur.com/42n8npt.png" /> </a></p>
<p>Cura is likely factoring in your skirt. Change the skirt lines to 0 and you might be able to print (<code>Expert</code> -&gt; <code>Switch to full settings</code>, then click the options button next to &quot;platform adhesion type.&quot;). Cura also seems to have an in-built build size offset of about 2 mm. I can't seem to get rid of it in any way other than to change the build size.</p>
<p>If you set skirt setting to 'none', it will get you up to full bed area -1 mm at the edges, so, for example, 198x198 instead of 200x200.</p> <p>You need to go to 'Travel' and set 'Travel avoid distance' to zero to get the full 200x200, or whatever, bed area.</p>
203
<p>What's needed to successfully make a cross-domain call from Silverlight?</p>
<p>If I understand your question correctly you would need to have a clientaccesspolicy.xml file in the domain web root of the server that you wish to call (ie www.example.com/clientaccesspolicy.xml) that defines that it is ok for services from other domains to call services on that domain.</p> <p>Read the <a href="http://msdn.microsoft.com/en-us/library/cc197955%28VS.95%29.aspx" rel="nofollow noreferrer">How to Make a Service Available Across Domain Boundaries</a> MSDN article for more detailed information.</p>
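For reference, a maximally permissive clientaccesspolicy.xml has roughly the following shape. Treat this as a sketch and check the MSDN article above for the authoritative schema; a wildcard domain opens the service to every caller, so tighten it for production:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- WARNING: uri="*" allows calls from any domain; restrict in production -->
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```

The file must sit in the web root of the domain being called (e.g. www.example.com/clientaccesspolicy.xml), not the domain hosting the Silverlight app.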
<p>Maybe also check out JSONP <a href="http://www.west-wind.com/weblog/posts/107136.aspx" rel="nofollow noreferrer">http://www.west-wind.com/weblog/posts/107136.aspx</a> for example this is how you can get Twitter updates in JavaScript on the client side even though Twitter is on a different domain than you web page.</p>
4,961
<p>Is there actually a way to determine whether the sheet of glass that one has received from a Chinese supplier on eBay is actually a sheet of Borosilicate glass, and not just a piece of normal glass intended for glazing purposes?</p> <p>Are there any laminations, or markings/features, visible under a certain light or if the sheet is viewed at a particular angle?</p> <p>I ask because most of the 200 x 300 mm sheets on eBay are priced at around £16 - £26, but there are a few priced at just £10, which still claim to be toughened glass, although they do not mention the word Borosilicate, such as this one, <a href="http://www.ebay.co.uk/itm/300x200x3mm-3D-Printer-Heated-Bed-Toughened-Glass-Clear-Build-Plate-UK-STOCK-/302067007035" rel="nofollow noreferrer">300x200x3mm 3D Printer Heated Bed Toughened Glass Clear Build Plate UK STOCK</a>. </p> <p>To paraphrase, "Usually if things seem too good to be true, then they are probably fake." However, I wondered whether it was worth a punt as it is only £10, and then whether I would be able to verify its veracity once it was in my hands.</p>
<p>Google turned up this thread: <a href="http://arstechnica.com/civis/viewtopic.php?t=96214" rel="nofollow noreferrer">http://arstechnica.com/civis/viewtopic.php?t=96214</a></p> <p>There don't seem to be any easy, definitive tests. You have two main options, both of which seem to require a fairly high level of laboratory skill (but nothing fancy in terms of equipment).</p> <p>The refractive index of Borosilicate glass is very similar to vegetable oil (implying no reflection from a boundary if you have a pool of oil on the glass).</p> <p>The density of the glass can be measured too, using a water bath. This should also be able to give a fairly accurate answer.</p> <p>Stress testing seems the most reliable way to determine if you have a 'good' sample, but may be more expensive.</p>
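The density route in particular is just arithmetic once you have a mass and a volume. A rough Python sketch using commonly quoted handbook densities (about 2.23 g/cm³ for borosilicate and about 2.52 g/cm³ for soda-lime glass; the 0.1 g/cm³ tolerance is an illustrative assumption, not lab-grade):

```python
def density(mass_g: float, volume_cm3: float) -> float:
    """Density in g/cm^3 from a weighed mass and a measured volume."""
    return mass_g / volume_cm3

def classify(rho: float) -> str:
    """Crude classification using approximate handbook densities (g/cm^3)."""
    if abs(rho - 2.23) < 0.1:
        return "likely borosilicate"
    if abs(rho - 2.52) < 0.1:
        return "likely soda-lime"
    return "inconclusive"

# A 200 x 300 x 3 mm sheet has a volume of 20 * 30 * 0.3 = 180 cm^3,
# so the two glass types differ by roughly 50 g in mass -- easily
# detectable on a kitchen scale.
volume = 20 * 30 * 0.3
boro = classify(density(180 * 2.23, volume))  # mass a borosilicate sheet would have
soda = classify(density(180 * 2.52, volume))  # mass a soda-lime sheet would have
```

Measuring the volume by water displacement avoids trusting the seller's stated dimensions.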
<blockquote> <p>"toughened" glass,</p> </blockquote> <p>Lead, cadmium and barium would have that effect. It goes without saying that you wouldn't want that kind of glass anywhere near your neighborhood. </p> <p>Even those items that mention borosilicate glass in the product description may have never seen any boron. </p> <p>Some studies have found that more than 90% of items sourced from China are fraudulently labeled. There is no enforcement. </p> <p>While stats say nothing about any particular sample, I'd certainly keep them in mind. </p>
436
<p>In TFS when you merge branch A to branch B and checkin, you get a single changeset on B (typically with a comment like "merged A->B"). </p> <p>This means B doesn't have any of the checkin history from A. So if someone created a new file on branch A, you can't tell who created it from branch B. And if someone updated a file on A, you can't tell who did the update from branch B.</p> <p>Is there any way to see this kind of detailed changeset history across branches? Some kind of power toy, or third party tool, or anything?</p> <p>Update: The TFS Power Toy tfpt history /followbranches tool does not "expand merges," it only "expands branches" and therefore doesn't solve this problem.</p>
<p>TFS 2010 will include support for this.</p> <p>Brian Harry talks about it in <a href="http://channel9.msdn.com/pdc2008/TL52/" rel="noreferrer">this presentation</a>.</p> <p>You will now be able to see where a change originated and who made it after the change has been merged to a different branch.</p>
<p>Might want to try the TFS Follow branch History tool:</p> <p><a href="http://www.codeplex.com/TFSBranchHistory" rel="nofollow noreferrer"><a href="http://www.codeplex.com/TFSBranchHistory" rel="nofollow noreferrer">http://www.codeplex.com/TFSBranchHistory</a></a></p>
6,789
<p>I'm testing an ASP.NET site. When I execute it, it starts the ASP.NET Development Server and opens up a page.</p> <p>Now I want to test it in the intranet I have. </p> <ol> <li><p>Can I use this server or do I need to configure IIS on this machine? </p></li> <li><p>Do I need to configure something for it to work?</p></li> </ol> <p>I've changed the localhost to the correct IP and I opened up the firewall.</p> <p>Thanks</p>
<p><strong>Yes you can! And you don't need IIS</strong></p> <p>Just use a simple Java TCP tunnel. Download this Java app &amp; just tunnel the traffic back. <a href="http://jcbserver.uwaterloo.ca/cs436/software/tgui/tcpTunnelGUI.shtml" rel="nofollow noreferrer">http://jcbserver.uwaterloo.ca/cs436/software/tgui/tcpTunnelGUI.shtml</a></p> <p>In command prompt, you'd then run the java app like this... Let's assume you want external access on port 80 and your standard debug environment runs on port 1088...</p> <pre><code>java -jar tunnel.jar 80 localhost 1088 </code></pre> <p>(Also answered here: <a href="https://stackoverflow.com/questions/1555058/accessing-asp-net-development-server-external-to-vm/7394823#7394823">Accessing asp. net development server external to VM</a>)</p>
<p>I believe the built in ASP.NET server only works on localhost. You'll have to use IIS.</p>
3,942
<p>I haven't touched SharePoint in years. If I want to set up a development environment to get up to speed, what options do I have? I don't have an MSDN license; is there any way I can get up and running for free? (for development only)</p>
<p>You <strong>need</strong> a Windows 2003 Server (or 2008 Server, but I have no experience with that), no way around that. You can then of course use Visual C# 2005 Express and the SharePoint Services 3.0 if that's your target.</p> <p>If you want to do development on SharePoint 2007, you have to buy a SharePoint 2007 license, which has a pretty hefty fee attached to it.</p> <p>As for SQL, SQL 2005 Express works fine for development.</p> <p>There is a good article on how to set up SharePoint on a single server:</p> <p><a href="http://blogs.msdn.com/martinkearn/archive/2007/03/28/how-to-install-sharepoint-server-2007-on-a-single-machine.aspx" rel="nofollow noreferrer">http://blogs.msdn.com/martinkearn/archive/2007/03/28/how-to-install-sharepoint-server-2007-on-a-single-machine.aspx</a></p> <p>You CAN use a trial version of Windows 2003 and SharePoint 2007, though, if it's only needed for a limited time (I believe the trials run 180 days).</p>
<p>The action pack is fantastic value, you can use the Windows Server from that, as well as SharePoint Enterprise / Standard.</p>
3,275
<p>I have a page that uses </p> <pre><code>$(id).show("highlight", {}, 2000); </code></pre> <p>to highlight an element when I start a ajax request, that might fail so that I want to use something like</p> <pre><code>$(id).show("highlight", {color: "#FF0000"}, 2000); </code></pre> <p>in the error handler. The problem is that if the first highlight haven't finished, the second is placed in a queue and wont run until the first is ready. Hence the question: Can I somehow stop the first effect?</p>
<p>From the jQuery docs: </p> <p><a href="http://docs.jquery.com/Effects/stop" rel="nofollow noreferrer">http://docs.jquery.com/Effects/stop</a></p> <blockquote> <p><em>Stop the currently-running animation on the matched elements.</em>...</p> <p>When <code>.stop()</code> is called on an element, the currently-running animation (if any) is immediately stopped. If, for instance, an element is being hidden with <code>.slideUp()</code> when <code>.stop()</code> is called, the element will now still be displayed, but will be a fraction of its previous height. Callback functions are not called.</p> <p>If more than one animation method is called on the same element, the later animations are placed in the effects queue for the element. These animations will not begin until the first one completes. When <code>.stop()</code> is called, the next animation in the queue begins immediately. If the <code>clearQueue</code> parameter is provided with a value of <code>true</code>, then the rest of the animations in the queue are removed and never run.</p> <p>If the <code>jumpToEnd</code> argument is provided with a value of true, the current animation stops, but the element is immediately given its target values for each CSS property. In our above <code>.slideUp()</code> example, the element would be immediately hidden. The callback function is then immediately called, if provided...</p> </blockquote>
<p>In my case, using below code does not work and keep your opacity value remain: </p> <pre><code>$('#identifier').stop(true, true).effect("pulsate", { times: 1 }, 1); </code></pre> <p>For me just remove opacity are working:</p> <pre><code>$('#identifier').stop(true, true).css('opacity',''); </code></pre>
8,467
<p>How do I save each sheet in an Excel workbook to separate <code>CSV</code> files with a macro?</p> <p>I have an excel with multiple sheets and I was looking for a macro that will save each sheet to a separate <code>CSV (comma separated file)</code>. Excel will not allow you to save all sheets to different <code>CSV</code> files.</p>
<p>Here is one that will give you a visual file chooser to pick the folder you want to save the files to and also lets you choose the CSV delimiter (I use pipes '|' because my fields contain commas and I don't want to deal with quotes):</p> <pre><code>' ---------------------- Directory Choosing Helper Functions -----------------------
' Excel and VBA do not provide any convenient directory chooser or file chooser
' dialogs, but these functions will provide a reference to a system DLL
' with the necessary capabilities
Private Type BROWSEINFO   ' used by the function GetFolderName
    hOwner As Long
    pidlRoot As Long
    pszDisplayName As String
    lpszTitle As String
    ulFlags As Long
    lpfn As Long
    lParam As Long
    iImage As Long
End Type

Private Declare Function SHGetPathFromIDList Lib "shell32.dll" _
    Alias "SHGetPathFromIDListA" (ByVal pidl As Long, ByVal pszPath As String) As Long

Private Declare Function SHBrowseForFolder Lib "shell32.dll" _
    Alias "SHBrowseForFolderA" (lpBrowseInfo As BROWSEINFO) As Long

Function GetFolderName(Msg As String) As String
    ' returns the name of the folder selected by the user
    Dim bInfo As BROWSEINFO, path As String, r As Long
    Dim X As Long, pos As Integer

    bInfo.pidlRoot = 0&amp;   ' Root folder = Desktop
    If IsMissing(Msg) Then
        bInfo.lpszTitle = "Select a folder."   ' the dialog title
    Else
        bInfo.lpszTitle = Msg   ' the dialog title
    End If
    bInfo.ulFlags = &amp;H1   ' Type of directory to return

    X = SHBrowseForFolder(bInfo)   ' display the dialog

    ' Parse the result
    path = Space$(512)
    r = SHGetPathFromIDList(ByVal X, ByVal path)
    If r Then
        pos = InStr(path, Chr$(0))
        GetFolderName = Left(path, pos - 1)
    Else
        GetFolderName = ""
    End If
End Function
'---------------------- END Directory Chooser Helper Functions ----------------------

Public Sub DoTheExport()
    Dim FName As Variant
    Dim Sep As String
    Dim wsSheet As Worksheet
    Dim nFileNum As Integer
    Dim csvPath As String

    Sep = InputBox("Enter a single delimiter character (e.g., comma or semi-colon)", _
                   "Export To Text File")

    'csvPath = InputBox("Enter the full path to export CSV files to: ")
    csvPath = GetFolderName("Choose the folder to export CSV files to:")
    If csvPath = "" Then
        MsgBox ("You didn't choose an export directory. Nothing will be exported.")
        Exit Sub
    End If

    For Each wsSheet In Worksheets
        wsSheet.Activate
        nFileNum = FreeFile
        Open csvPath &amp; "\" &amp; _
            wsSheet.Name &amp; ".csv" For Output As #nFileNum
        ExportToTextFile CStr(nFileNum), Sep, False
        Close nFileNum
    Next wsSheet
End Sub

Public Sub ExportToTextFile(nFileNum As Integer, _
    Sep As String, SelectionOnly As Boolean)

    Dim WholeLine As String
    Dim RowNdx As Long
    Dim ColNdx As Integer
    Dim StartRow As Long
    Dim EndRow As Long
    Dim StartCol As Integer
    Dim EndCol As Integer
    Dim CellValue As String

    Application.ScreenUpdating = False
    On Error GoTo EndMacro:

    If SelectionOnly = True Then
        With Selection
            StartRow = .Cells(1).Row
            StartCol = .Cells(1).Column
            EndRow = .Cells(.Cells.Count).Row
            EndCol = .Cells(.Cells.Count).Column
        End With
    Else
        With ActiveSheet.UsedRange
            StartRow = .Cells(1).Row
            StartCol = .Cells(1).Column
            EndRow = .Cells(.Cells.Count).Row
            EndCol = .Cells(.Cells.Count).Column
        End With
    End If

    For RowNdx = StartRow To EndRow
        WholeLine = ""
        For ColNdx = StartCol To EndCol
            If Cells(RowNdx, ColNdx).Value = "" Then
                CellValue = ""
            Else
                CellValue = Cells(RowNdx, ColNdx).Value
            End If
            WholeLine = WholeLine &amp; CellValue &amp; Sep
        Next ColNdx
        WholeLine = Left(WholeLine, Len(WholeLine) - Len(Sep))
        Print #nFileNum, WholeLine
    Next RowNdx

EndMacro:
    On Error GoTo 0
    Application.ScreenUpdating = True
End Sub
</code></pre>
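The pipe-delimiter idea in the macro above is not VBA-specific; for comparison, this is how Python's standard csv module writes pipe-delimited rows (the row data is made up for illustration):

```python
import csv
import io

rows = [
    ["name", "comment"],
    ["widget", "cheap, cheerful"],  # embedded comma is safe with a pipe delimiter
]

# Write to an in-memory buffer; a real export would open a file instead.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="|")
writer.writerows(rows)
output = buf.getvalue()
```

Unlike the macro, csv.writer will still quote a field if it happens to contain the delimiter itself, so the output stays parseable either way.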
<p>Please look into <a href="http://www.mrexcel.com/forum/excel-questions/265154-saving-multiple-sheets-separate-csv-files-visual-basic-applications.html" rel="nofollow">Von Pookie's answer</a>, all credits to him/her. </p> <pre><code>Sub asdf()
    Dim ws As Worksheet, newWb As Workbook

    Application.ScreenUpdating = False
    For Each ws In Sheets(Array("EID Upload", "Wages with Locals Upload", "Wages without Local Upload"))
        ws.Copy
        Set newWb = ActiveWorkbook
        With newWb
            .SaveAs ws.Name, xlCSV
            .Close (False)
        End With
    Next ws
    Application.ScreenUpdating = True
End Sub
</code></pre>
8,351
<p>Does C# have built-in support for parsing strings of page numbers? By page numbers, I mean the format you might enter into a print dialog that's a mixture of comma and dash-delimited.</p> <p>Something like this:</p> <pre><code>1,3,5-10,12 </code></pre> <p>What would be really nice is a solution that gave me back some kind of list of all page numbers represented by the string. In the above example, getting a list back like this would be nice:</p> <pre><code>1,3,5,6,7,8,9,10,12 </code></pre> <p>I just want to avoid rolling my own if there's an easy way to do it.</p>
<p>Should be simple:</p> <pre><code>foreach( string s in "1,3,5-10,12".Split(',') )
{
    // try and get the number
    int num;
    if( int.TryParse( s, out num ) )
    {
        yield return num;
        continue;   // skip the rest
    }

    // otherwise we might have a range
    // split on the range delimiter
    string[] subs = s.Split('-');

    int start, end;

    // now see if we can parse a start and end
    if( subs.Length &gt; 1 &amp;&amp;
        int.TryParse(subs[0], out start) &amp;&amp;
        int.TryParse(subs[1], out end) &amp;&amp;
        end &gt;= start )
    {
        // create a range between the two values
        int rangeLength = end - start + 1;
        foreach(int i in Enumerable.Range(start, rangeLength))
        {
            yield return i;
        }
    }
}
</code></pre> <p><strong>Edit:</strong> thanks for the fix ;-) </p>
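The same splitting-and-range logic ports almost line for line to other languages. For illustration, here is a generator-based Python sketch of the approach (the function name is mine, not from the answer):

```python
def parse_pages(spec: str):
    """Yield page numbers from a print-dialog string like '1,3,5-10,12'."""
    for part in spec.split(","):
        part = part.strip()
        if part.isdigit():  # plain page number
            yield int(part)
            continue
        # otherwise it may be a range like '5-10'
        lo, sep, hi = part.partition("-")
        if sep and lo.strip().isdigit() and hi.strip().isdigit():
            start, end = int(lo), int(hi)
            if end >= start:  # silently skip malformed ranges, like the C# version
                yield from range(start, end + 1)

pages = list(parse_pages("1,3,5-10,12"))
```

Like the C# snippet, this ignores malformed elements rather than raising, which matches typical print-dialog behaviour.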
<p>Here's a slightly modified version of lassevk's code that handles the string.Split operation inside of the Regex match. It's written as an extension method and you can easily handle the duplicates problem using the Distinct() extension from LINQ.</p> <pre><code>/// &lt;summary&gt;
/// Parses a string representing a range of values into a sequence of integers.
/// &lt;/summary&gt;
/// &lt;param name="s"&gt;String to parse&lt;/param&gt;
/// &lt;param name="minValue"&gt;Minimum value for open range specifier&lt;/param&gt;
/// &lt;param name="maxValue"&gt;Maximum value for open range specifier&lt;/param&gt;
/// &lt;returns&gt;An enumerable sequence of integers&lt;/returns&gt;
/// &lt;remarks&gt;
/// The range is specified as a string in the following forms or combination thereof:
/// 5            single value
/// 1,2,3,4,5    sequence of values
/// 1-5          closed range
/// -5           open range (converted to a sequence from minValue to 5)
/// 1-           open range (converted to a sequence from 1 to maxValue)
///
/// The value delimiter can be either ',' or ';' and the range separator can be
/// either '-' or ':'. Whitespace is permitted at any point in the input.
///
/// Any elements of the sequence that contain non-digit, non-whitespace, or non-separator
/// characters or that are empty are ignored and not returned in the output sequence.
/// &lt;/remarks&gt;
public static IEnumerable&lt;int&gt; ParseRange2(this string s, int minValue, int maxValue)
{
    const string pattern =
        @"(?:^|(?&lt;=[,;]))                        # match must begin with start of string or delim, where delim is , or ;
        \s*(                                       # leading whitespace
        (?&lt;from&gt;\d*)\s*(?:-|:)\s*(?&lt;to&gt;\d+)   # capture 'from &lt;sep&gt; to' or '&lt;sep&gt; to', where &lt;sep&gt; is - or :
        |                                          # or
        (?&lt;from&gt;\d+)\s*(?:-|:)\s*(?&lt;to&gt;\d*)   # capture 'from &lt;sep&gt; to' or 'from &lt;sep&gt;', where &lt;sep&gt; is - or :
        |                                          # or
        (?&lt;num&gt;\d+)                             # capture lone number
        )\s*                                       # trailing whitespace
        (?:(?=[,;\b])|$)                           # match must end with end of string or delim, where delim is , or ;";

    Regex regx = new Regex(pattern, RegexOptions.IgnorePatternWhitespace | RegexOptions.Compiled);

    foreach (Match m in regx.Matches(s))
    {
        Group gpNum = m.Groups["num"];
        if (gpNum.Success)
        {
            yield return int.Parse(gpNum.Value);
        }
        else
        {
            Group gpFrom = m.Groups["from"];
            Group gpTo = m.Groups["to"];
            if (gpFrom.Success || gpTo.Success)
            {
                int from = (gpFrom.Success &amp;&amp; gpFrom.Value.Length &gt; 0 ? int.Parse(gpFrom.Value) : minValue);
                int to = (gpTo.Success &amp;&amp; gpTo.Value.Length &gt; 0 ? int.Parse(gpTo.Value) : maxValue);
                for (int i = from; i &lt;= to; i++)
                {
                    yield return i;
                }
            }
        }
    }
}
</code></pre>
6,139
<p>I'm in the process of refactoring some code which includes moving folders around, and I would like to regularly merge to keep things current. What is the best way to merge after I've moved folders around in my working copy?</p>
<p>You can move the files around in StarTeam also. Then merge after that. </p> <p>Whatever you do, make sure you don't delete the files and re-add in StarTeam. You'll lose the file history if you do that.</p>
<p>The problem is I'm worried about breaking the build in the meantime while I'm moving folders in StarTeam. I suppose the only way to avoid that is to be ready to upload updated project files as soon as I move things around in StarTeam and do it as quickly as possible.</p>
4,819
<p>Does anyone here have any useful code which uses reduce() function in python? Is there any code other than the usual + and * that we see in the examples?</p> <p>Refer <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=98196" rel="noreferrer">Fate of reduce() in Python 3000</a> by GvR</p>
<p>The other uses I've found for it besides <code>+</code> and <code>*</code> were with <code>and</code> and <code>or</code>, but now we have <code>any</code> and <code>all</code> to replace those cases. </p> <p><code>foldl</code> and <code>foldr</code> do come up in Scheme a lot... </p> <p>Here are some cute usages:</p> <p><b>Flatten a list</b></p> <p>Goal: turn <code>[[1, 2, 3], [4, 5], [6, 7, 8]]</code> into <code>[1, 2, 3, 4, 5, 6, 7, 8]</code>.</p> <pre><code>reduce(list.__add__, [[1, 2, 3], [4, 5], [6, 7, 8]], []) </code></pre> <p><b>List of digits to a number</b></p> <p>Goal: turn <code>[1, 2, 3, 4, 5, 6, 7, 8]</code> into <code>12345678</code>.</p> <p>Ugly, slow way:</p> <pre><code>int("".join(map(str, [1,2,3,4,5,6,7,8]))) </code></pre> <p>Pretty <code>reduce</code> way:</p> <pre><code>reduce(lambda a,d: 10*a+d, [1,2,3,4,5,6,7,8], 0) </code></pre>
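In Python 3, reduce moved into functools, but both snippets above work unchanged once it is imported:

```python
from functools import reduce

# Flatten a list of lists by folding list concatenation over it.
flat = reduce(list.__add__, [[1, 2, 3], [4, 5], [6, 7, 8]], [])

# Horner's rule: fold each digit into the running total.
number = reduce(lambda a, d: 10 * a + d, [1, 2, 3, 4, 5, 6, 7, 8], 0)
```

Note that the flatten version is O(n²) because each `+` copies the accumulator; `itertools.chain.from_iterable` is the usual production choice.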
<p>Using reduce() to find out if a list of dates are consecutive:</p> <pre><code>from datetime import date, timedelta

def checked(d1, d2):
    """
    We assume the date list is sorted.
    If d2 &amp; d1 are different by 1, everything up to d2 is consecutive, so d2
    can advance to the next reduction.
    If d2 &amp; d1 are not different by 1, returning d1 - 1 for the next reduction
    will guarantee the result produced by reduce() to be something other than
    the last date in the sorted date list.

    Definition 1: 1/1/14, 1/2/14, 1/2/14, 1/3/14 is consider consecutive
    Definition 2: 1/1/14, 1/2/14, 1/2/14, 1/3/14 is consider not consecutive
    """
    #if (d2 - d1).days == 1 or (d2 - d1).days == 0:  # for Definition 1
    if (d2 - d1).days == 1:                          # for Definition 2
        return d2
    else:
        return d1 + timedelta(days=-1)

# datelist = [date(2014, 1, 1), date(2014, 1, 3),
#             date(2013, 12, 31), date(2013, 12, 30)]

# datelist = [date(2014, 2, 19), date(2014, 2, 19), date(2014, 2, 20),
#             date(2014, 2, 21), date(2014, 2, 22)]

datelist = [date(2014, 2, 19), date(2014, 2, 21),
            date(2014, 2, 22), date(2014, 2, 20)]

datelist.sort()

if datelist[-1] == reduce(checked, datelist):
    print "dates are consecutive"
else:
    print "dates are not consecutive"
</code></pre>
3,675
<p>I've created a python object, but I want to send signals on it. I made it inherit from gobject.GObject, but there doesn't seem to be any way to create a new signal on my object.</p>
<p>You can also define signals inside the class definition:</p> <pre><code>class MyGObjectClass(gobject.GObject):
    __gsignals__ = {
        "some-signal": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, (object, )),
    }
</code></pre> <p>The contents of the tuple are the same as the last three arguments to <code>gobject.signal_new</code>.</p>
<p>Here is how:</p> <pre><code>import gobject

class MyGObjectClass(gobject.GObject):
    ...

gobject.signal_new("signal-name", MyGObjectClass,
                   gobject.SIGNAL_RUN_FIRST, None, (str, int))
</code></pre> <p>Where the second to last argument is the return type and the last argument is a tuple of argument types.</p>
9,225
<p>Does anyone have a trusted Proper Case or PCase algorithm (similar to a UCase or Upper)? I'm looking for something that takes a value such as <code>"GEORGE BURDELL"</code> or <code>"george burdell"</code> and turns it into <code>"George Burdell"</code>.</p> <p>I have a simple one that handles the simple cases. The ideal would be to have something that can handle things such as <code>"O'REILLY"</code> and turn it into <code>"O'Reilly"</code>, but I know that is tougher.</p> <p>I am mainly focused on the English language if that simplifies things.</p> <hr> <p><strong>UPDATE:</strong> I'm using C# as the language, but I can convert from almost anything (assuming like functionality exists).</p> <p>I agree that the McDonald's scenario is a tough one. I meant to mention that along with my O'Reilly example, but did not in the original post.</p>
<p>Unless I've misunderstood your question I don't think you need to roll your own, the TextInfo class can do it for you.</p> <pre><code>using System.Globalization; CultureInfo.InvariantCulture.TextInfo.ToTitleCase("GeOrGE bUrdEll") </code></pre> <p>Will return "George Burdell. And you can use your own culture if there's some special rules involved.</p> <p><strong>Update:</strong> <a href="https://stackoverflow.com/users/1684/michael-wolfenden">Michael</a> (in a comment to this answer) pointed out that this will not work if the input is all caps since the method will assume that it is an acronym. The naive workaround for this is to .ToLower() the text before submitting it to ToTitleCase.</p>
<p>You do not mention which language you would like the solution in so here is some pseudo code.</p> <pre><code>Loop through each character If the previous character was an alphabet letter Make the character lower case Otherwise Make the character upper case End loop </code></pre>
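A minimal transcription of that pseudocode in Python (illustrative only; it shows why the simple rule gets "O'Reilly" right but "McDonald" wrong, since it uppercases after any non-letter and only there):

```python
def proper_case(text: str) -> str:
    """Uppercase any character that follows a non-letter; lowercase the rest."""
    out = []
    prev_alpha = False
    for ch in text:
        out.append(ch.lower() if prev_alpha else ch.upper())
        prev_alpha = ch.isalpha()
    return "".join(out)

a = proper_case("GEORGE BURDELL")
b = proper_case("O'REILLY")
c = proper_case("McDonald")   # the hard case: internal capital is lost
```

This is essentially what Python's built-in str.title() does as well, which is why title-casing routines in most languages share the McDonald's problem.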
5,208
<p>So I was reading those Windows Vista UI guidelines someone linked to in another question, and they mentioned that you should be able to survive a switch to 120 DPI. Well, I fire up my handy VM with my app installed, and what do we get... AAAAGH!!! MASSIVE UI FAIL!</p> <p>Everything's all jumbled: some containers aren't big enough for their text; some controls that were positioned "next to each other" are now all squished together/spread apart; some buttons aren't tall enough; my <code>ListView</code> columns aren't wide enough... eeek.</p> <p>It sounds like a completely different approach is in order. My previous one was basically using the VS2008 Windows Forms designer to create, I guess, a pixel-based layout. I can see that if I were to stick with Windows Forms, <code>FlowLayoutPanel</code>s would be helpful, although I've found them rather inflexible in the past. They also don't solve the problem where the containers (e.g. the form itself) aren't big enough; presumably there's a way to do that? Maybe that <code>AutoSize</code> property?</p> <p>This might also be a sign that it's time to jump ship to WPF; I'm under the impression that it's specifically designed for this kind of thing.</p> <p>The basic issue seems to come down to these:</p> <ul> <li>If I were to stick with Windows Forms, what are all the tricks to achieving a font-size-independent layout that can survive the user setting his fonts large, or setting the display to 120 DPI?</li> <li>Does WPF have significant advantages here, and if so, can you try to convince me that it's worth the switch?</li> <li>Are there any general "best-practices" for font-size-independent layouts, either in the .NET stack or in general?</li> </ul>
<p>Learn how the <em>Anchor</em> and <em>Dock</em> properties work on your controls, leave anything that can <em>AutoSize</em> itself alone, and use a <code>TableLayoutPanel</code> when you can.</p> <p>If you do these three things, you'll get a lot of the WPF design experience in Windows Forms. A well-designed <code>TableLayoutPanel</code> will do its best to size the controls so that they fit the form properly. Combined with <em>AutoSize</em> controls, docking, and the <em>AutoScaleMode</em> mentioned by Soeren Kuklau you should be able to make something that scales well. If not, your form might just have too many controls on it; consider splitting it into tab pages, floating toolboxes, or some other space.</p> <p>In WPF it's a lot easier because the concept of auto-sizing controls is built-in; in most cases if you are placing a WPF element by using a coordinate pair you are doing it wrong. Still, you can't change the fact that at lower resolutions it doesn't take much 120 dpi text to fill up the screen. Sometimes the problem is not your layout, but an attempt to put too much into a small space.</p>
<blockquote> <p>If I were to stick with Windows Forms, what are all the tricks to achieving a font-size-independent layout that can survive the user setting his fonts large, or setting the display to 120 DPI?</p> </blockquote> <p>For one, <a href="http://msdn.microsoft.com/en-us/library/system.windows.forms.autoscalemode.aspx" rel="nofollow noreferrer">AutoScaleMode</a> may be your friend.</p>
7,331
<p>I have a web application developed with ASP.NET and C# that is running on my company's intranet. Because all the users for this application are using Microsoft Outlook without exception, I would like the application to open up an Outlook message on the client side. <strong>I understand that Office is designed to be run on the desktop and not from a server, however I have no trouble creating a Word or Excel document on the client-side.</strong> </p> <p>I have code that instantiates the Outlook object using the Microsoft.Office.Interop.Outlook namespace and Outlook installed on the server. When I try to run the code from the server, I get a DCOM source error message that states "The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {000C101C-0000-0000-C000-000000000046} to the user. This security permission can be modified using the Component Services administrative tool." I have modified the permissions using the Component Services tool, but still get this same error. </p> <p>Is there a way to overcome this or is this a fruitless exercise because Outlook cannot be opened on the client side from the server-side code?</p> <p>Mailto will not work due to the extreme length that the emails can obtain. Also, the user that sends it needs to add eye-candy to the text for the recipients.</p>
<p>You cannot open something on the client from server side code. You'd have to use script on the page to do what you're wanting (or something else client-side like ActiveX or embedded .NET or something) </p> <p>Here's a sample Javascript that invokes an Outlook MailItem from an webpage. This could easily be injected into the page from your server-side code so it executes on the client. </p> <p><a href="http://www.codeproject.com/KB/aspnet/EmailUsingJavascript.aspx" rel="nofollow noreferrer">http://www.codeproject.com/KB/aspnet/EmailUsingJavascript.aspx</a></p>
<p>If everyone in the company uses Outlook, then just using a standard "mailto" link should always open Outlook. It sounds like you're over-engineering this.</p>
6,620
<p>I printed a test cube with ABS on my Ender 3 and after some tuning, I still have a problem I cannot solve. Two of the vertical edges (I believe they are the ones on the X+ side) are slightly squished in. Could this be due to warping or something else? I printed it at 0.1 mm layer height, 235/110 °C hotend/bed temperatures.</p> <p>Here is a picture:</p> <p><a href="https://i.stack.imgur.com/A45Rn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A45Rn.jpg" alt="enter image description here" /></a></p>
<p>I think this is caused by the shape of the hole. The shape is parabolic or circular, this means that if you slice it as in the green part, the slicer determines the placing of the layers along the curvature. If it has a shallow curvature, and slicing layer height is relatively thick, the curvature of the object cannot be followed. But, if you have more than 1 top layer, this usually should not be visible. I don't think that you have a single layer, so this is a pretty odd anomaly that I have not seen yet (that you look at the infill without top layers).</p> <p>You could get past this by using a local different infill, e.g. 100&nbsp;% infill. You can look at <a href="https://3dprinting.stackexchange.com/a/6523/5740">this answer</a> of <a href="https://3dprinting.stackexchange.com/q/6522/5740">the question: "Different infill in the same part"</a>. This should help you out printing in your preferred orientation.</p>
<p>I think this is caused by the shape of the hole. The shape is parabolic or circular, this means that if you slice it as in the green part, the slicer determines the placing of the layers along the curvature. If it has a shallow curvature, and slicing layer height is relatively thick, the curvature of the object cannot be followed. But, if you have more than 1 top layer, this usually should not be visible. I don't think that you have a single layer, so this is a pretty odd anomaly that I have not seen yet (that you look at the infill without top layers).</p> <p>You could get past this by using a local different infill, e.g. 100&nbsp;% infill. You can look at <a href="https://3dprinting.stackexchange.com/a/6523/5740">this answer</a> of <a href="https://3dprinting.stackexchange.com/q/6522/5740">the question: "Different infill in the same part"</a>. This should help you out printing in your preferred orientation.</p>
1,149
<p><em>TLDR - I’ve just driven myself insane trying to fix what I thought was a physical problem with my Z-axis, but it seems to have been solved by switching from the TH3D firmware to the Creality firmware. I’m hoping someone can help me see where I went wrong here, so I can learn from it. Have I missed something obvious?</em></p> <p>My printer is a Creality Ender 3 Pro. I have a BLTouch. I recently upgraded to the silent 4.2.7 board and compiled my own firmware using TH3D Unified 2 using their instructions. <a href="https://pastebin.com/MYs8sYGj" rel="nofollow noreferrer">This is my <code>configuration.h</code></a>.</p> <p>This “worked”, but after that upgrade, I had severe print problems. (See the photos below) These are supposed to be 20x20x20 calibration cubes (the big one is 200%). They are significantly taller than they should be. It may be hard to see in the photo, but this is because the Z spacing on the bottom layers seems to be too high. But, at the same point on each print (even the bigger one), the problem seems to just stop - and layer spacing is correct again for the end of the print.</p> <p><a href="https://i.stack.imgur.com/BGayd.jpg" rel="nofollow noreferrer" title="Comparison of multiple calibration cubes"><img src="https://i.stack.imgur.com/BGayd.jpg" alt="Comparison of multiple calibration cubes" title="Comparison of multiple calibration cubes" /></a></p> <p>I tried everything to figure out what this was and I was convinced it was a physical problem with my Z-axis. I read lots of questions on here but nothing quite on topic. I printed on different parts of the bed and got the same result. I re-calibrated my Z steps, but these turned out to be pretty much at the firmware default anyway. I tightened every bolt and eccentric nut I could find that related to the X-axis gantry, but nothing made any difference. 
Same problem, every time.</p> <p>Out of desperation, I eventually switched to the official Creality firmware for the Ender 3 Pro 4.2.7 board with BLTouch - and the problem was fixed:</p> <p><a href="https://i.stack.imgur.com/7tXcf.jpg" rel="nofollow noreferrer" title="Printed calibration cube after firmware change"><img src="https://i.stack.imgur.com/7tXcf.jpg" alt="Printed calibration cube after firmware change" title="Printed calibration cube after firmware change" /></a></p> <p>Does anyone have any ideas about what caused this? Have I overlooked something obvious? I’d ideally like to go back to the TH3D firmware but it’s seemingly not an option.</p>
<p>It's well known in mathematical circles that the &quot;salesman problem&quot; is what mathematicians call &quot;hard&quot; -- in their usage, that means a lot of extremely smart people have worked on the problem for many years (more than a century?) and still not found a robust, works-every-time solution.</p> <p>What's probably happening with Cura and other slicers is that, for their version of this issue (the most efficient way to visit multiple locations), the decision was made that reducing computing time in slicing was more practical than optimizing travel time of the machine. This is a reasonable decision, from a programming standpoint, because you're likely to be sitting in front of a screen, getting more and more impatient (and thinking less and less of the software you're using) every second the slicing takes, but when the actual printing is going on, you can be doing something else (sleeping, working at your day job, etc.)</p> <p>Therefore, it's likely that what you see in Cura <em>is</em> optimized -- to minimize <em>your</em> time on the way to a solution, rather than to minimize the time for a machine that simply doesn't care if a print takes five hours or nine.</p>
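<p>To make the trade-off concrete, here is a toy comparison (in Python, with made-up part coordinates, not anything a real slicer does verbatim) between a greedy nearest-neighbour tour, roughly the kind of cheap heuristic a slicer can afford, and an exhaustive search. The greedy tour costs O(n&sup2;); the exact one is O(n!) and already hopeless for a plate with a few dozen parts:</p>

```python
import itertools
import math

def tour_length(points, order):
    """Total travel distance when visiting the points in the given order."""
    return sum(math.dist(points[a], points[b])
               for a, b in zip(order, order[1:]))

def nearest_neighbour(points):
    """Greedy heuristic: always hop to the closest unvisited point."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        here = order[-1]
        nxt = min(unvisited, key=lambda i: math.dist(points[here], points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def brute_force(points):
    """Exact optimum: try every permutation, O(n!), hopeless beyond ~10 points."""
    best = min(itertools.permutations(range(1, len(points))),
               key=lambda perm: tour_length(points, (0,) + perm))
    return [0] + list(best)

# Made-up part positions on a build plate.
points = [(0, 0), (9, 0), (1, 1), (8, 1), (0, 2), (9, 2)]
greedy = nearest_neighbour(points)
exact = brute_force(points)
print(tour_length(points, greedy) >= tour_length(points, exact))  # True: greedy is never shorter
```

<p>The gap between the two tours is the travel time the slicer is trading away for a fast slice.</p>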
<p><strong>Long story short:</strong> The only setting I know of that improves the travel paths is &quot;Combing Mode OFF&quot;, and in my case it did not help. In your case I suggest you give PrusaSlicer a try. I assume that the overall print duration will improve because of a better calculation of the travel paths. But this is only my personal opinion comparing these two slicers.</p> <p><strong>Further explanation:</strong> I downloaded Cura 4.9 and made an install from scratch. I tried to reproduce your issue by placing lots of copies of the same part. As the printer I selected the Ultimaker S5 and used the standard configuration for slicing. I let Cura arrange the parts on the print plate. I checked the travel paths between the parts, and in most cases Cura chose the nearest part to move the printhead to next. In my opinion, there could be a more efficient choice for the next part to print. After this first test I experimented with the settings (e.g. &quot;Combing Mode&quot; OFF) but without an improvement in travel movements.</p> <p>In the past I used Cura in combination with an Ultimaker S5 at work to print parts for production use cases. Over the past two years I noticed lots of parts where the travel movements were chosen very inefficiently, at the cost of a long print duration.</p> <p>For comparison I used my standard slicer &quot;PrusaSlicer&quot; and did the test under the same conditions: standard settings, auto-arrangement of the parts. Overall the travel paths are calculated more efficiently, but there is also some room for improvement.</p>
<p>In the past few months, a devastatingly small number of users have cast up-votes, many users aren't coming back to the site, and existing users don't have much reputation, because no one is voting up great questions / answers.</p> <p>So... let me start like this..</p> <p><a href="https://3dprinting.stackexchange.com/users?tab=Voters&amp;filter=all">THIS IS REALLY DEVASTATING</a></p> <p>We all need to vote up users and try to answer their questions, so we can get more users answering / asking questions. The main reason I'm writing this post is that this is a great site, but we need new regular users who are going to vote and ask questions. I think it has all come down to 20-30 active users, and the rest of the users aren't so active.</p> <p>I'm relatively new to the site. I decided to use it because it helped me a lot, and I'm doing my best to answer the questions. My main concern is also that my answers don't get any up-votes.</p> <p>So... I'll let it all out. I think the main thing we need to focus on is motivation.</p> <p>WE NEED TO MOTIVATE OTHERS TO VOTE. And we can all do that if we vote other users up.</p> <p>That's from my perspective. Even one up-vote motivates you to go further.</p> <p>I think that I explained what I mean. :) It's all in the VOTE UP!</p>
<p>I completely agree! I just posted <a href="https://3dprinting.meta.stackexchange.com/questions/197/3d-printing-se-beta-status">my own reminder</a>, focusing more on efforts to get us out of Beta.</p> <p>I'm sorry you can feel discouraged sometimes, I think a lot of users around the Stack Exchange network can feel that way at times.</p> <p>I think people sometimes forget that an up-vote to an answer isn't necessarily that it was helpful to you, specifically. But, rather that <strong>the answer is a good <em>quality</em> answer</strong> and <strong>will be <em>useful</em> to others</strong> as well!</p>
<p>Hmm, yeah, we might have a minor problem. I know most of my rep is disproportionately from question acceptance and unusually low on +1s. I wonder if it has to do with any active member being able to make the top 50 list in their first month, so people may be hoarding their upvotes trying to be number one? </p>
<p>I'm trying to find a least-resistance path from C# to C++, and while I feel I handle C# pretty well after two solid years, I'm still not sure I've gotten the "groove" of C++, despite numerous attempts.</p> <p>Are there any particular books or websites that might be suitable for this transition?</p>
<p>I recommend <em>The C++ Programming Language</em> by Bjarne Stroustrup. It's not a suitable book for new programmers, but I found it quite effective as a programmer who was experienced in other languages and didn't want to waste too much time learning how while loops work. It's a dense but quite comprehensive book.</p>
<p>I'd consider <a href="http://en.wikipedia.org/wiki/The_C_Programming_Language_(book)" rel="nofollow noreferrer">K&amp;R</a> a prerequisite for C++. Perhaps the best thing about C++ is that it's a better C.</p> <p>And of course, Stroustrup (as suggested by Mladen Jankovic) is a must read.</p>
<p>We've been using the 32-bit Linux version of the <a href="http://wrapper.tanukisoftware.org/" rel="nofollow noreferrer">JavaServiceWrapper</a> for quite a while now and it's working perfectly. We are now considering also using it on 64-bit Linux systems. There are downloads for 64-bit binaries on the website, but looking into the Makefile for the 64-bit version, I found the following comment:</p> <pre><code># This makefile is in progess. It builds, but the resulting libwrapper.so does not yet work.
# If you know how to fix it then please help out.
</code></pre> <p>Can anyone confirm that this comment is just outdated and the resulting wrapper will work without flaws?</p>
<p>I've had it running in production on 64-bit red hat without any trouble for the last year or so. </p>
<p>Take a look at <a href="http://yajsw.sourceforge.net/" rel="nofollow">http://yajsw.sourceforge.net/</a>. </p> <p>It's free and rather compatible reimplementation of TanukiSoftware Java Service Wrapper featuring free 64-bit support.</p>
<p>I'm coding a framework along with a project which uses this framework. The project is a Bazaar repository, with the framework in a subfolder below the project.</p> <p>I want to give the framework a Bazaar repository of its own. How do I do it?</p>
<p>You use the split command:</p> <pre><code>bzr split sub_folder
</code></pre> <p>This creates an independent tree in the subfolder, which you can now export and work on separately.</p>
<p>As far as I know, there is not a way to do this easily with Bazaar. One possibility is to take the original project, branch it, and then remove everything unrelated to the framework. You can then move the files in the subdir to the main dir. It's quite a chore, but it is possible to preserve the history.</p> <p>You will end up with something like:</p> <pre><code>branch project:
  .. other files..
  framework/a.file
  framework/b.file
  framework/c.file

branch framework:
  a.file
  b.file
  c.file
</code></pre>
<p>I've got a direct drive extruder printer that I have equipped with a Bowden tube to give it a nice and defined filament path from my spool and to protect it from moisture when it exits my drybox. I've noticed that this 1 m Bowden tube adds quite a bit of friction to the system. I am keenly aware of it as feeding filament from the drybox into the tube by hand is quite a chore already, especially with high friction materials like PETG. Also I notice that I can sustain only lower volumetric flow without skipping or underextrusion after adding the tube.</p> <p>So I figured, why not have a motor on both sides of the Bowden tube? A big and heavy one at the entrance, providing a baseline pressure on the filament. And a comparatively light one on the other end, adding that fine control you want for your retraction, rather than trying to implement that via a 1 m wet noodle full of static friction.</p> <p>Seems like it would give you the best of both worlds. Not quite as light as a full Bowden system, but you could have all the control of a direct drive system, with much less of the weight/flow rate tradeoff.</p> <p>Anyone aware of this being done before? Or any good arguments as to why it is a stupid idea? Seems like a fairly straightforward mod; in the simplest implementation, just add an additional extruder of the same spec on the other side of your Bowden tube, and split the control signal to be the same amongst both motors.</p>
<p>A Bowden tube is by design fairly small diameter to match the filament within, constraining the forces applied by the remote extruder mechanism. As you've noted, friction is a consideration.</p> <p>For your application, you would not have to have such a tightly constrained diameter. You could use a Bowden tube for 2.85 mm filament, if your direct drive extruder is made for 1.75 mm filament.</p> <p>You'd have the environmental control of the smaller tube, the path control and all of the other benefits, but none (or little) of the friction.</p> <p>Additionally, you would not have the complexity of managing retraction or synchronizing feed that a dual motor system presents.</p>
<p>I have a similar setup with one of my printers (using 1.75 mm filament), so this answer is my own experience with guiding tubes from spool to extruder (my previous printer was a direct drive extruder, the latter uses a Bowden setup, but both with a tube leading up to the extruder).</p> <p>Initially I used smaller diameter tubing (inner diameter of 2 and 3 mm, as commonly used in Bowden setups for 1.75 mm and 2.85 mm filament respectively), but when I changed to 4 mm inner diameter (and 6 mm outer diameter) all friction was gone. The benefit of using a 4 mm inner diameter is that it fits over the 2/4 mm ID/OD tubes; as such I slide the larger tube over a piece of 4 mm outer diameter tube, which for me also simplifies feeding the filament to the extruder.</p>
<p>I have a table similar to this:</p> <pre><code>CREATE TABLE example (
    id integer primary key,
    name char(200),
    parentid integer,
    value integer);
</code></pre> <p>I can use the parentid field to arrange data into a tree structure.</p> <p>Now here's the bit I can't work out. Given a parentid, is it possible to write an SQL statement to add up all the value fields under that parentid and recurse down the branch of the tree?</p> <p><strong>UPDATE:</strong> I'm using PostgreSQL, so the fancy MS-SQL features are not available to me. In any case, I'd like this to be treated as a generic SQL question.</p> <p>BTW, I'm very impressed to have 6 answers within 15 minutes of asking the question! Go stack overflow!</p>
<p>There are a few ways to do what you need in PostgreSQL.</p> <ul> <li><p>If you can install modules, look at the tablefunc contrib. It has a connectby() function that handles traversing trees. <a href="http://www.postgresql.org/docs/8.3/interactive/tablefunc.html" rel="noreferrer">http://www.postgresql.org/docs/8.3/interactive/tablefunc.html</a></p></li> <li><p>Also check out the ltree contrib, which you could adapt your table to use: <a href="http://www.postgresql.org/docs/8.3/interactive/ltree.html" rel="noreferrer">http://www.postgresql.org/docs/8.3/interactive/ltree.html</a></p></li> <li><p>Or you can traverse the tree yourself with a PL/pgSQL function.</p></li> </ul> <p>Something like this:</p> <pre><code>create or replace function example_subtree (integer)
returns setof example as '
declare
  child record;
  descendant record;
begin
  for child in select * from example where parentid = $1 loop
    return next child;
    for descendant in select * from example_subtree(child.id) loop
      return next descendant;
    end loop;
  end loop;
  return;
end;' language 'plpgsql';

select sum(value) as value_sum
  from example_subtree(1234);
</code></pre>
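<p>For completeness: PostgreSQL 8.4 and later can do this traversal declaratively with a recursive common table expression, so no procedural function is needed. A minimal sketch of the idea, run here against an in-memory SQLite database (which shares the <code>WITH RECURSIVE</code> syntax) so the example is self-contained; the schema is the one from the question and the sample rows are made up:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE example (
    id integer primary key,
    name char(200),
    parentid integer,
    value integer);
INSERT INTO example VALUES
    (1, 'root',  NULL, 10),
    (2, 'child', 1,    20),
    (3, 'leaf',  2,     5),
    (4, 'other', NULL, 99);
""")

# Recursively collect every row below parentid = 1, then sum the values.
row = conn.execute("""
WITH RECURSIVE subtree(id, value) AS (
    SELECT id, value FROM example WHERE parentid = 1
    UNION ALL
    SELECT e.id, e.value
    FROM example e
    JOIN subtree s ON e.parentid = s.id
)
SELECT sum(value) FROM subtree;
""").fetchone()
print(row[0])  # 25 (= 20 + 5, the subtree below id 1)
```

<p>The same <code>WITH RECURSIVE</code> query runs unchanged on PostgreSQL once you are on 8.4 or newer.</p>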
<p>If you need to store arbitrary graphs, not just hierarchies, you could push Postgres to the side and try a graph database such as <a href="http://agraph.franz.com/support/learning/" rel="nofollow noreferrer">AllegroGraph</a>:</p> <p>Everything in the graph database is stored as a triple (source node, edge, target node) and it gives you first class support for manipulating the graph structure and querying it using a SQL like language. </p> <p>It doesn't integrate well with something like Hibernate or Django ORM but if you are serious about graph structures (not just hierarchies like the Nested Set model gives you) check it out. </p> <p>I also believe Oracle has finally added a support for real Graphs in their latest products, but I'm amazed it's taken so long, lots of problems could benefit from this model.</p>
<blockquote> <p>See also <a href="https://stackoverflow.com/questions/1700917/how-does-a-wcf-server-inform-a-wcf-client-about-changes-better-solution-then-si">How does a WCF server inform a WCF client about changes? (Better solution then simple polling, e.g. Coment or long polling)</a></p> </blockquote> <p>I need to use push-technology with WCF through client firewalls. This must be a common problem, and I know for a fact it works in theory (see links below), but I have failed to get it working, and I haven't been able to find a code sample that demonstrates it.</p> <p>Requirements: </p> <ul> <li>WCF</li> <li>Clients connects to server through tcp port 80 (netTcpBinding).</li> <li>Server pushes back information at irregular intervals (1 min to several hours).</li> <li>Users should not have to configure their firewalls, server pushes must pass through firewalls that have all inbound ports closed. TCP duplex on the same connection is needed for this, a dual binding does not work since a port has to be opened on the client firewall.</li> <li>Clients sends heartbeats to server at regular intervals (perhaps every 15 mins) so server knows client is still alive.</li> <li>Server is IIS7 with WAS.</li> </ul> <p>The solution seems to be duplex netTcpBinding. Based on this information:</p> <p><a href="http://blogs.msdn.com/drnick/archive/2006/05/01/configuring-wcf-for-nats-and-firewalls.aspx" rel="nofollow noreferrer">WCF through firewalls and NATs</a></p> <p><a href="http://blogs.msdn.com/drnick/archive/2006/10/20/keeping-connections-open-in-iis.aspx" rel="nofollow noreferrer">Keeping connections open in IIS</a></p> <p>But I have yet to find a code sample that works.. I've tried combining the "Duplex" and "TcpActivation" samples from Microsoft's WCF Samples without any luck. Please can someone point me to example code that works, or build a small sample app. Thanks a lot!</p>
<p>I've found a couple of solutions:</p> <p><a href="http://www.zeroc.com/" rel="nofollow noreferrer" title="ZeroC Ice">ZeroC Ice</a> GPL with a commercial option. Have only tested quickly. Looks more powerful than .NET Remoting and is very actively developed.</p> <p><a href="http://www.remobjectssdk.com/" rel="nofollow noreferrer" title="RemObjects">RemObjects</a> Commercial, active development, supports everything but does not seem to have all the more advanced features that GenuineChannels use.</p> <p><a href="http://www.genuinechannels.com/" rel="nofollow noreferrer" title="GenuineChannels">GenuineChannels</a>. It uses remoting with a lot of nice added features, the most important one being it works through NATs without the need to open the client firewall. Unfortunately seems to be very dead.</p> <p>Another solution is to use streaming with IIS, according to this article: <a href="http://blogs.msdn.com/drnick/archive/2006/10/20/keeping-connections-open-in-iis.aspx" rel="nofollow noreferrer" title="Keeping connections open in IIS">Keeping connections open in IIS</a></p> <p>The client makes the first connection (http with IIS6, tcp with IIS7) to the server at port 80, the connection is then kept open with a streaming response that never ends.</p> <p>I haven't had the time to experiment with this, and I haven't found a sample that says it specifically solves the firewall-problem, but here's an excellent sample that probably works: <a href="http://blogs.thinktecture.com/buddhike/archive/2007/05/23/414851.aspx" rel="nofollow noreferrer">Streaming XML</a>. </p>
<p>You can make the following change on the client to access a duplex web service from a firewall-enabled client:</p> <ul> <li>Check the WebHttp option under Firewall -&gt; Advanced -&gt; Settings (of the Network Connection Settings) -&gt; Web Server (Http)</li> </ul>
<p>How can I improve the look and feel of my Linux desktop to suit my programming needs?</p> <p>I found Compiz and it makes switching between my workspaces (which is something I do all the time to make the most of my 13.3" screen laptop) easy and look great - so what else don't I know about that make my programming environment more productive/pleasing?</p> <p>@Rob Cooper - thanks for the heads-up, hope this reword addresses the issues</p>
<p>I found that the best programming experience comes from having quick access to all your tools. This means getting comfortable with basic command line acrobatics and really learning keyboard shortcuts, flags, and little productivity apps.</p> <p>I find that most of my workflow comes down to just a few apps and commands:</p> <ul> <li>Terminator</li> <li>SVN commands - ci, co, status, log, etc.</li> <li>Command line FTP</li> <li>Vim</li> <li>Basic command-line operations (cd, rm, mv, cp, touch, grep, and std i/o redirection comprise 80% of my work day)</li> </ul> <p>Not to say that GUI apps aren't necessary. A few I use:</p> <ul> <li>Diffmerge</li> <li>RapidSVN</li> <li>Filezilla</li> <li>VirtualBox</li> <li>GnomeDo (this really should be first)</li> </ul> <p>When it comes down to it, the real improvement in programming experience comes from just that - programming experience. Just pick a set of tools and stick with them until you know them inside and out.</p>
<p>If you have half decent 3D acceleration on board, CompizFusion adds attractive desktop effects like mapping your workspaces onto a cube and using that to switch between them / move windows between them. Looks pretty and improves general usability - great!</p> <p><a href="http://en.wikipedia.org/wiki/Compiz" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Compiz</a></p>
<p>I have a JavaScript object. Is there a built-in or accepted best practice way to get the length of this object?</p> <pre><code>const myObject = new Object(); myObject[&quot;firstname&quot;] = &quot;Gareth&quot;; myObject[&quot;lastname&quot;] = &quot;Simpson&quot;; myObject[&quot;age&quot;] = 21; </code></pre>
<h2>Updated answer</h2> <p><strong>Here's an update as of 2016 and <a href="http://kangax.github.io/compat-table/es5/" rel="noreferrer">widespread deployment of ES5</a> and beyond.</strong> For IE9+ and all other modern ES5+ capable browsers, you can use <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys" rel="noreferrer"><code>Object.keys()</code></a> so the above code just becomes:</p> <pre><code>var size = Object.keys(myObj).length;
</code></pre> <p>This doesn't have to modify any existing prototype since <code>Object.keys()</code> is now built-in.</p> <p><strong>Edit</strong>: Objects can have symbolic properties that cannot be returned via the <code>Object.keys</code> method. So the answer would be incomplete without mentioning them.</p> <p>The Symbol type was added to the language to create unique identifiers for object properties. The main benefit of the Symbol type is the prevention of overwrites.</p> <p><code>Object.keys</code> or <code>Object.getOwnPropertyNames</code> does not work for symbolic properties. To return them you need to use <code>Object.getOwnPropertySymbols</code>.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>var person = {
  [Symbol('name')]: 'John Doe',
  [Symbol('age')]: 33,
  "occupation": "Programmer"
};

const propOwn = Object.getOwnPropertyNames(person);
console.log(propOwn.length); // 1

let propSymb = Object.getOwnPropertySymbols(person);
console.log(propSymb.length); // 2</code></pre> </div> </div> </p> <h2>Older answer</h2> <p>The most robust answer (i.e. that captures the intent of what you're trying to do while causing the fewest bugs) would be:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>Object.size = function(obj) {
  var size = 0, key;
  for (key in obj) {
    if (obj.hasOwnProperty(key)) size++;
  }
  return size;
};

// Get the size of an object
const myObj = {};
var size = Object.size(myObj);</code></pre> </div> </div> </p> <p>There's a sort of convention in JavaScript that you <a href="https://stackoverflow.com/questions/10757455/object-prototype-is-verboten">don't add things to Object.prototype</a>, because it can break enumerations in various libraries. Adding methods to Object is usually safe, though.</p> <hr />
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>var myObject = new Object();
myObject["firstname"] = "Gareth";
myObject["lastname"] = "Simpson";
myObject["age"] = 21;

var size = JSON.stringify(myObject).length;
document.write(size);</code></pre> </div> </div> </p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>JSON.stringify(myObject)</code></pre> </div> </div> </p>
<p>I've been asked to write a Windows service in C# to periodically monitor an email inbox and insert the details of any messages received into a database table.</p> <p>My instinct is to do this via POP3 and sure enough, Googling for ".NET POP3 component" produces countless (ok, 146,000) results.</p> <p>Has anybody done anything similar before and can you recommend a decent component that won't break the bank (a few hundred dollars maximum)?</p> <p>Would there be any benefits to using IMAP rather than POP3?</p>
<p>I recommend <a href="http://www.chilkatsoft.com" rel="nofollow noreferrer">Chilkat</a>. They have pretty stable components, and you can get their email component for as cheap as $99 for a single developer. Personally, I think going with the whole package of components is a better deal, as it's only $289, and comes with many useful components. I'm not affiliated with them in any way, although I probably sound like I am.</p>
<p>How about WCF? It's free.</p> <p>If you have an Exchange server: <a href="http://msdn.microsoft.com/en-us/library/bb397812.aspx" rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/library/bb397812.aspx</a></p> <p>an example for pop3: <a href="http://bartdesmet.net/blogs/bart/archive/2006/09/13/4417.aspx" rel="nofollow noreferrer">http://bartdesmet.net/blogs/bart/archive/2006/09/13/4417.aspx</a></p>
<p>I need to do a dump of a table on a remote server, but I can't access the server directly. The only access I have is through PHP scripts.</p> <p>Is there some way in which MySQL will return an </p> <pre><code>INSERT INTO `table_name` (`field1`, `field2`) VALUES ('a', 'b'), ('c', 'd') </code></pre> <p>statement, like what <code>mysqldump</code> will return?</p> <p>I don't have access to phpMyAdmin, and I preferably don't want to use <code>exec</code>, <code>system</code> or <code>passthru</code>.</p> <p>See <a href="https://stackoverflow.com/questions/81934/easy-way-to-export-a-sql-table-without-access-to-the-server-or-phpmyadmin">this question</a> for another export method</p>
<p>1) Can you run mysqldump from exec or passthru?<br /> 2) Take a look at this: <a href="http://www.php-mysql-tutorial.com/perform-mysql-backup-php.php" rel="noreferrer">http://www.php-mysql-tutorial.com/perform-mysql-backup-php.php</a></p>
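<p>For reference, the manual approach (selecting the rows and emitting quoted <code>INSERT</code> statements yourself, which is what the linked tutorial does in PHP) is easy to sketch in any language. Below is a minimal illustration in Python, with an in-memory SQLite table standing in for the MySQL one; the quoting here is deliberately simplistic and not a substitute for proper escaping of untrusted data:</p>

```python
import sqlite3

def dump_table(conn, table):
    """Emit one INSERT statement per row: a minimal mysqldump-style export.
    Assumes a trusted table name and simple text/numeric column values."""
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = ", ".join(f"`{d[0]}`" for d in cur.description)
    lines = []
    for row in cur:
        vals = ", ".join(
            "NULL" if v is None
            else str(v) if isinstance(v, (int, float))
            else "'" + str(v).replace("'", "''") + "'"
            for v in row)
        lines.append(f"INSERT INTO `{table}` ({cols}) VALUES ({vals});")
    return "\n".join(lines)

# Demo against an in-memory table shaped like the one in the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (field1 TEXT, field2 TEXT)")
conn.executemany("INSERT INTO table_name VALUES (?, ?)",
                 [("a", "b"), ("c", "d")])
print(dump_table(conn, "table_name"))
# INSERT INTO `table_name` (`field1`, `field2`) VALUES ('a', 'b');
# INSERT INTO `table_name` (`field1`, `field2`) VALUES ('c', 'd');
```

<p>The point is only that a mysqldump-style export is a plain SELECT plus string assembly, so it can be done entirely from a PHP script that is allowed to query the table.</p>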
<p>If you can use PHP scripts on the server, I would recommend <a href="http://www.phpmyadmin.net/home_page/index.php" rel="nofollow noreferrer">phpMyAdmin</a>. Then you can do this from the web interface.</p>
<p>Working on a little side project web app...</p> <p>I'd like to have it set up so that, when users send email to a certain account, I can kick off a PHP script that reads the email, pulls out some key info, and writes it to a database.</p> <p>What's the best way to do this? A cron job that checks for new email?</p> <p>The app is running on a "Dedicated-Virtual" Server at MediaTemple, so I guess I have a reasonable level of control, which should give me a range of options.</p> <p>I'm very slowly learning the ropes (PHP, MySQL, configuring/managing the server), so your advice and insight are much appreciated.</p> <p>Cheers, Matt Stuehler</p>
<p>Procmail is how I do it. Here's an example where I actually process the text inside the email to archive it back to a MySQL database.</p> <pre><code>:0:
* ^(From).*test@example.com
{
  :0 c
  | php /var/www/app/process_email.php
}
</code></pre>
<p>A cron job is the common solution to such a task. <a href="http://php.net/manual/en/function.imap-getmailboxes.php" rel="nofollow noreferrer">Checking for new mail with PHP</a> is no problem. If you run a qmail server (maybe other servers can do this too?) you can <a href="http://www.faqts.com/knowledge_base/view.phtml/aid/34099/fid/139" rel="nofollow noreferrer">fire a script</a> on every received mail, which triggers your PHP script.</p>
<p>I'd really like to handle this without monkey-patching but I haven't been able to find another option yet.</p> <p>I have an array (in Ruby) that I need to sort by multiple conditions. I know how to use the sort method and I've used the trick on sorting using an array of options to sort by multiple conditions. However, in this case I need the first condition to sort ascending and the second to sort descending. For example:</p> <pre><code>ordered_list = [[1, 2], [1, 1], [2, 1]] </code></pre> <p>Any suggestions?</p> <p>Edit: Just realized I should mention that I can't easily compare the first and second values (I'm actually working with object attributes here). So for a simple example it's more like:</p> <pre><code>ordered_list = [[1, "b"], [1, "a"], [2, "a"]] </code></pre>
<p>How about:</p> <pre><code>ordered_list = [[1, "b"], [1, "a"], [2, "a"]]
ordered_list.sort! do |a, b|
  [a[0], b[1]] &lt;=&gt; [b[0], a[1]]
end
</code></pre>
<p>I had this same basic problem, and solved it by adding this:</p> <pre><code>class Inverter
  attr_reader :o

  def initialize(o)
    @o = o
  end

  def &lt;=&gt;(other)
    if @o &amp;&amp; other.o
      -(@o &lt;=&gt; other.o)
    else
      @o &lt;=&gt; other.o
    end
  end
end
</code></pre> <p>This is a wrapper that simply inverts the &lt;=&gt; function, which then allows you to do things like this:</p> <pre><code>your_objects.sort_by { |y| [y.prop1, Inverter.new(y.prop2)] }
</code></pre>
<p>Recently my printer has been creating double images along the X-axis, I just printed a calibration cube after checking all of my rods to make sure nothing was catching and here was my result: <a href="https://i.stack.imgur.com/vAqB6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vAqB6.jpg" alt="Calibration Cube"></a> It looks like there is another X to the right of the X that is supposed to be, would this be a retraction problem, or could it be a problem with the filament because I printed a calibration cube with another filament a couple of weeks ago and there was no double X.</p>
<p>I agree with @FernandoBaltazar. You have several things going on. If the problem is recent, you may have loose belts, but you may also need to perform acceleration and jerk tuning. Reducing your print speeds may also help.</p>
<p>There are several problems, but lets start with a method for discovering them.</p> <p>The first thing to do is look for any looseness or slop in the mechanism. Try wiggling the hot end, looking for any free movement. Try moving along the axes that move. You haven't specified the machine, so I can't suggest the specific points to try moving. If the bed moves, try wiggling the bed. If the extruder moves, try wiggling the extruder on each movement axis. Nothing should move at all until the motors skip. </p> <p>If the mechanism is fully tight (and from this picture, I expect it to have some "wiggle room"), you could follow by slowing the machine, including reducing the acceleration settings. If you find something moving, you'll need to investigate why -- what is allowing the movement to happen. On a belt-driven machine, the belts can be loose, but one can also get movement from loose set screws, motor mounts, hot-ends which are coming apart, or loose thrust bearings or nuts on threaded shafts. Applying force to the parts that move during printing and watching for movement is the best diagnostic.</p> <p>In the future, when asking questions it can be helpful to include the type of machine, any modifications you've made, and information about the printing conditions.</p>
<p>This may be a long shot, but I was wondering if the signals seen across a stepper motor's windings could indicate whether the stepper was being told to move over some short time period. This is in particular for the stepper used on extruders.</p> <p>I've read that PWM in used in stepper drivers and controls current patterns to move the motor. And that a current must also be maintained through the windings if the motor is to hold its position. So it would seem that there is always a pulsing waveform across the windings whenever the stepper is energized, correct? When the motor is holding its position, is there anything distinctive about the waveform?</p> <p>This is for a filament sensor I'd like to make. The sensor would be located at the extruder motor. It would monitor movement/flow of the solid filament. A lack of filament flow could be because of filament runout, tangled or caught filament, or non-extruding travel moves. I'd like to be able to tell the first two causes from the last one (when it's not supposed to be extruding for some hundreds of milliseconds or so). It would also be nice to tell 'no directed movement' from 'very slow movement' which would happen with small nozzles, slow speed or other slow extrusion situations.</p> <p>I watched the waveforms with an oscilloscope while printing, but travel moves were quick and relatively rare, so I couldn't definitely see if there was something I could use during those times. Could I just filter the pulse waveform (what corner frequency?) to get an approximation of the current waveform going through the coils -- on the idea that the waveform should resemble a DC level during non-extruding but still energized times. Perhaps another low-pass filtering of that DC level, or a high-pass of the waveform to indicate directed extrusion? Using DSP on a micro, of course.</p> <p>Are there any experts here on the subject of low-level stepper motor control?</p>
<blockquote> <p>So it would seem that there is always a pulsing waveform across the windings whenever the stepper is energized, correct?</p> </blockquote> <p>Correct.</p> <blockquote> <p>When the motor is holding its position, is there anything distinctive about the waveform?</p> </blockquote> <p><a href="https://www.youtube.com/watch?v=Phds2oFOIYg" rel="nofollow noreferrer">Yes there is</a>. This video contains a short example of what the current waveforms would look like.</p> <p>If I understand you correctly: You want to detect filament events by doing current sensing on the phases of the stepper motor. The short of it is that if the motor is not turning then the <a href="https://en.wikipedia.org/wiki/Counter-electromotive_force" rel="nofollow noreferrer">Back ElectroMotive Force</a> will be zero. As the speed increases the BEMF goes up, which causes the current levels to go down. This is how stall detection works in some driver systems.</p> <p>You may be able to determine if what you want to do is even practical by studying the fundamentals of stepper motor driving; both <a href="https://www.arrow.com/en/research-and-events/articles/voltage-versus-current-mode-control-in-stepper-motors" rel="nofollow noreferrer">voltage mode and current mode</a>.</p> <p>Apart from that it may be just as practical to use a <a href="https://en.wikipedia.org/wiki/Rotary_encoder" rel="nofollow noreferrer">rotary encoder</a> upstream of the extruder motor to detect the filament movement, or lack thereof.</p> <p>In your case detecting if the filament is present but not moving calls for stall detection. If the PWM pulse occurs but the rate at which the voltage increased (the slope) did not change, then there was no BEMF event. The lack of such an event would indicate that the rotor did not move. 
If there is no filament to push, then when the rotor moves it will <em>ring</em> (overshoot its target position and fluctuate back and forth until it settles into the electrical position that the controller intended), because the mechanical load has changed.</p> <p>For a NEMA17 stepper, these would be very small changes in very brief events. I would start with installing <a href="https://www.sparkfun.com/products/14544" rel="nofollow noreferrer">current sensors</a> on both phases, and then doing some signal processing on top of that. </p>
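<p>To make the "signal processing on top of that" concrete, here is a rough, hypothetical sketch (not tied to any particular driver or sensor; all names and constants are illustrative): low-pass filter the sampled phase current to strip the PWM ripple, then use the variance of the filtered signal over a window to separate a stepping motor (whose phase current sweeps through the microstep sine table) from a holding one (near-constant current).</p>

```python
# Hypothetical sketch: classify "stepping" vs "holding" from sampled
# phase current.  All constants are illustrative, not tuned for real
# hardware.
import math
import random

def low_pass(samples, alpha=0.1):
    """Single-pole IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def is_stepping(samples, threshold=0.01):
    """True if the filtered phase current varies enough to imply motion."""
    filtered = low_pass(samples)
    mean = sum(filtered) / len(filtered)
    variance = sum((v - mean) ** 2 for v in filtered) / len(filtered)
    return variance > threshold

# Simulated data: holding = steady current plus PWM ripple,
# stepping = slow sinusoidal phase current plus the same ripple.
random.seed(0)
ripple = lambda: 0.05 * random.uniform(-1, 1)
holding = [1.0 + ripple() for _ in range(500)]
stepping = [math.sin(2 * math.pi * 5 * n / 500) + ripple() for n in range(500)]

print(is_stepping(holding), is_stepping(stepping))
```

<p>Telling "no movement commanded" apart from "very slow movement" then comes down to the window length and threshold, which is exactly the hard tuning problem raised in the question.</p>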
<p>There is nothing you can do to monitor the extruder stepper motor to assure that filament is being fed when it should be fed. A common condition is where the filament binds but the extruder stepper continues to move as commanded. The drive pulley strips the dents it presses in the filament, and grinds down the filament. The extruder still steps, and the filament does not.</p> <p>You can reduce this for particular filaments by increasing the pressure on the filament by the drive pulley, but the optimum pressure varies with the filament. For instance, I've found that PET needs a lesser pressure than PLA.</p> <p>@User77232 suggests using a separate rotary encoder to detect the motion of the filament through a different sensor. This may be better, since you can detect a filament jam even while the extruder is still moving. If you are using a PTFE tube to deliver filament to the extruder, the encoder can be mounted at the other end of the tube. You may need to make allowance for some compression and stretch of the filament in the tube, which could delay detecting "lack of movement".</p> <p>I had hoped that the Trinamic TMC2100 chips (spec <a href="https://www.trinamic.com/fileadmin/assets/Products/ICs_Documents/TMC2100_datasheet_Rev1.09.pdf" rel="nofollow noreferrer">here</a>) would provide detection of drive torque by watching the back EMF, but reading the spec didn't look helpful. There was an old patent (that should be long expired) that detected the current needed to move a stepper motor to achieve faster acceleration and quieter operation, but I didn't find on the Trinamic driver a useful torque limit signal. If one could detect the torque required, one could set a peak torque value that is lower than the initial filament strip torque.</p> <p>Perhaps one could mount the extruder stepper motor with a compliant mount, perhaps something like a TPU or rubber material, and use a strain gauge to measure the stretch of the mount.</p>
1,270
<p>What sort of database schema would you use to store email messages, with as much header information as practical/possible, into a database?</p> <p>Assume that they have been fed into a script from the MTA and parsed into the relevant headers/body/attachments.</p> <p>Would you store the message body whole in the database table, or split any MIME-parts apart? What about attachments?</p>
<p>You may want to check the <a href="http://www.archiveopteryx.org/architecture" rel="noreferrer">architecture</a> and the <a href="http://www.archiveopteryx.org/schema" rel="noreferrer">DB schema</a> of "Archiveopteryx".</p>
<p>If it is already split up, and you can be sure that the routine to split the data is sound, then I would split up the table as granular as possible. You can always parse it back together in your middle tier. If space is not an issue, you could always store it twice. One, split up into the relevant fields, and another field that has the whole thing as one blob, if putting it back together is hard.</p>
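<p>As a minimal sketch of the "split up, plus keep one raw copy" approach (table and column names here are purely illustrative, shown with SQLite only for brevity):</p>

```python
import sqlite3

# Illustrative schema: common headers as columns, repeatable headers
# and MIME parts in child tables, plus the complete raw message so the
# original can always be reconstructed byte-for-byte.
SCHEMA = """
CREATE TABLE message (
    id          INTEGER PRIMARY KEY,
    message_id  TEXT UNIQUE,        -- Message-ID header
    subject     TEXT,
    sent_at     TEXT,               -- normalised Date header
    raw_source  BLOB                -- complete original message
);
CREATE TABLE header (               -- every header, preserving order
    message_ref INTEGER REFERENCES message(id),
    seq         INTEGER,
    name        TEXT,
    value       TEXT
);
CREATE TABLE part (                 -- one row per MIME part/attachment
    message_ref INTEGER REFERENCES message(id),
    seq         INTEGER,
    content_type TEXT,
    filename    TEXT,               -- NULL for inline bodies
    body        BLOB
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

<p>The child tables give you the granular queries, and <code>raw_source</code> covers the case where reassembly from parts turns out to be lossy.</p>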
9,068
<p>I was just looking through some information about Google's <a href="http://code.google.com/apis/protocolbuffers/" rel="noreferrer">protocol buffers</a> data interchange format. Has anyone played around with the code or even created a project around it?</p> <p>I'm currently using XML in a Python project for structured content created by hand in a text editor, and I was wondering what the general opinion was on Protocol Buffers as a user-facing input format. The speed and brevity benefits definitely seem to be there, but there are so many factors when it comes to actually generating and processing the data.</p>
<p>If you are looking for user facing interaction, stick with xml. It has more support, understanding, and general acceptance currently. If it's internal, I would say that protocol buffers are a great idea.</p> <p>Maybe in a few years as more tools come out to support protocol buffers, then start looking towards that for a public facing api. Until then... <a href="http://en.wikipedia.org/wiki/JSON" rel="noreferrer">JSON</a>?</p>
<p>From your brief description, it sounds like protocol buffers is not the right fit. The phrase "structured content created by hand in a text editor" pretty much screams for XML.</p> <p>But if you want efficient, low latency communications with data structures that are not shared outside your organization, binary serialization such as protocol buffers can offer a huge win.</p>
2,404
<p>I've worked on a number of database systems in the past where moving entries between databases would have been made a lot easier if all the database keys had been <a href="http://en.wikipedia.org/wiki/Globally_Unique_Identifier" rel="noreferrer">GUID / UUID</a> values. I've considered going down this path a few times, but there's always a bit of uncertainty, especially around performance and un-read-out-over-the-phone-able URLs.</p> <p>Has anyone worked extensively with GUIDs in a database? What advantages would I get by going that way, and what are the likely pitfalls?</p>
<p>Advantages:</p> <ul> <li>Can generate them offline.</li> <li>Makes replication trivial (as opposed to ints, which make it REALLY hard)</li> <li>ORMs usually like them</li> <li>Unique across applications. So we can use the PKs from our CMS (guid) in our app (also guid) and know we are NEVER going to get a clash.</li> </ul> <p>Disadvantages:</p> <ul> <li>Larger space use, but space is cheap(er)</li> <li>Can't order by ID to get the insert order.</li> <li>Can look ugly in a URL, but really, WTF are you doing putting a REAL DB key in a URL!? (This point disputed in comments below)</li> <li>Harder to do manual debugging, but not that hard.</li> </ul> <p>Personally, I use them for most PKs in any system of a decent size, but I got &quot;trained&quot; on a system which was replicated all over the place, so we HAD to have them. YMMV.</p> <p>I think the duplicate data thing is rubbish - you can get duplicate data however you do it. Surrogate keys are usually frowned upon wherever I've been working. We DO use the WordPress-like system though:</p> <ul> <li>unique ID for the row (GUID/whatever). Never visible to the user.</li> <li>public ID is generated ONCE from some field (e.g. the title - make it the-title-of-the-article)</li> </ul> <p><strong>UPDATE:</strong> So this one gets +1'ed a lot, and I thought I should point out a big downside of GUID PKs: Clustered Indexes.</p> <p>If you have a lot of records, and a clustered index on a GUID, your insert performance will SUCK, as you get inserts in random places in the list of items (that's the point), not at the end (which is quick).</p> <p>So if you need insert performance, maybe use an auto-inc INT, and generate a GUID if you want to share it with someone else (e.g., showing it to a user in a URL).</p>
<p>One thing not mentioned so far: UUIDs make it much harder to profile data.</p> <p>For web apps at least, it's common to access a resource with the id in the url, like <code>stackoverflow.com/questions/45399</code>. If the id is an integer, this both</p> <ul> <li>provides information about the number of questions (i.e., on September 5th, 2008, the 45,399th question was asked)</li> <li>provides a leverage point to iterate through questions (what happens when I increment that by 1? I open the next asked question)</li> </ul> <p>From the first point, I can combine the timestamp from the question and the number to profile how frequently questions are asked and how that changes over time. This matters less on a site like Stack Overflow, with publicly available information, but, depending on context, this may expose sensitive information.</p> <p>For example, say I am a company that offers customers a permissions-gated portal. The address is <code>portal.com/profile/{customerId}</code>. If the id is an integer, you could profile the number of customers, regardless of being able to see their information, by querying for <code>lastKnownCustomerCount + 1</code> regularly and checking whether the result is <code>404 - NotFound</code> (customer does not exist) or <code>403 - Forbidden</code> (customer does exist, but you do not have access to view).</p> <p>UUIDs' non-sequential nature mitigates these issues. This isn't guaranteed to prevent profiling, but it's a start.</p>
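<p>The non-sequential point is easy to demonstrate, e.g. with Python's standard <code>uuid</code> module (used here only to illustrate the contrast with an incrementing integer key):</p>

```python
import uuid

# An auto-increment key is trivially enumerable: 41, 42, 43, ...
# A random (version 4) UUID has no meaningful "next" value to try.
a = uuid.uuid4()
b = uuid.uuid4()

print(a, b)  # two unrelated 36-character identifiers
```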
6,724
<p>In a SharePoint list I want an auto-number column that gets incremented as I add items to the list. How best can I go about this?</p>
<p>SharePoint lists automatically have an "ID" column which auto-increments. You simply need to select this column from the "modify view" screen to view it.</p>
<p>Peetha has the best idea, I've done the same with a custom list in our SP site. Using a workflow to auto increment is the best way, and it is not that difficult. Check this website out: <a href="http://splittingshares.wordpress.com/2008/04/11/auto-increment-a-number-in-a-new-list-item/" rel="nofollow noreferrer">http://splittingshares.wordpress.com/2008/04/11/auto-increment-a-number-in-a-new-list-item/</a></p> <p>I give much appreciation to the person who posted that solution, it is very cool!!</p>
9,812
<p>I get the following error pretty regularly when compiling in Visual Studio and running my web application:</p> <p>"Unable to start debugging on the web server. The web server did not respond in a timely manner. This may be because another debugger is already attached to the web server."</p> <p>Normally this is after having debug the application once already. From the command line I run "iisreset /restart" and it fixes the problem.</p> <p>How do I prevent this from happening in the first place?</p>
<p>I find that this happens if I'm debugging with Firefox as my browser. When I exit Firefox the VS2005/8 debug session doesn't terminate. I have not found a solution for this (yet).</p> <p>If this is what's happening with you then a quicker solution than running iisreset is to hit Shift-F5 when in Visual Studio and this will terminate the current debug session. You can then hit F5 and this will start a new debug session.</p>
<p>I just solved this problem on my machine. My problem was that I upgraded IE 9 to IE 10 and got this error. Solution: remove IE 10 and downgrade to IE 9. Go to "Programs and Features" --> "View recent updates" --> find IE 10 --> uninstall it --> reboot --> IE 9 is back --> debug --> works OK.</p>
2,463
<p>What is the best way to setup one Linux box to listen on its serial port for incoming connections? I've done a lot of googling but I can't find the right combination of commands to actually get them to talk!</p> <p>My main objective is to provide a serial interface to running instances of kvm/qemu VMs. They currently only have a VNC interface (they are on headless servers, no X). I can get the VM to create a serial device by starting it with the -serial file: flag, but how to talk to it, is a whole other problem. Both boxes are running Ubuntu 8.04.</p>
<p>The <a href="http://tldp.org/HOWTO/Serial-HOWTO.html" rel="noreferrer">Linux Serial HOWTO</a> has a lot of detailed information about serial communication in general. The more-specific <a href="http://tldp.org/HOWTO/Remote-Serial-Console-HOWTO/index.html" rel="noreferrer">Linux Remote Serial Console HOWTO</a> is what you're really looking for if you want to be able to log into your virtualized systems using the serial port as if you were at the console. As Hein indicated, you'll need a null modem cable and need to run minicom on the remote terminal.</p> <p>The Linux console is used in two ways, each of which must be configured separately for serial use. You can configure the kernel to copy its messages over the serial port, which is occasionally interesting for watching the system boot and nearly indispensable if you're doing kernel debugging. (This requires kernel support and updating the boot parameters so the kernel knows you want serial output; see <a href="http://tldp.org/HOWTO/Remote-Serial-Console-HOWTO/configure-kernel.html" rel="noreferrer">chapter 5</a> of the second howto.) You're probably more interested in logging in via the serial port, which requires running getty on the serial port after boot (just like your system already runs getty on the virtual terminals after boot), which is described in detail in <a href="http://tldp.org/HOWTO/Remote-Serial-Console-HOWTO/getty.html" rel="noreferrer">chapter 6</a> of the howto.</p>
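<p>For reference, the classic SysV-style entry from that howto's getty chapter looks like the line below (device, speed and terminal type are whatever matches your setup; note that Ubuntu 8.04 uses Upstart, so there the equivalent goes in a job file under <code>/etc/event.d</code> rather than <code>/etc/inittab</code>):</p>

```
# Respawn a login prompt on the first serial port at 9600 baud
T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100
```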
<p>I assume you connect the two serial ports using a "null modem" cable.</p> <p>Use a program like minicom to talk to remote system -- you probably need to set up the communication parameters and possibly turn off hardware flow control (if your cable doesn't have the flow-control lines connected).</p>
9,503
<p>Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):</p> <p><a href="http://github.com/wspr/pstool/tree/master" rel="nofollow noreferrer">http://github.com/wspr/pstool/tree/master</a></p> <p>The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.</p> <p>In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.</p> <p>Is this a perversion of version control?</p> <hr> <p>@<a href="https://stackoverflow.com/questions/39154/do-you-version-derived-files#39157">Jon Limjap</a> raised a good point:</p> <blockquote> <p>Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?</p> </blockquote> <p>That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. 
So it really does make more sense to version only the non-generated files.</p> <p>On the other hand, @<a href="https://stackoverflow.com/questions/39154/do-you-version-derived-files#39179">Madir</a>'s comment that:</p> <blockquote> <p>the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes </p> </blockquote> <p>is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.</p> <p>And this, I think, is the more important use case <em>for my particular set of projects</em>.</p>
<p>I am using Tortoise SVN for small system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. Whilst it doesn't make a lot of sense to have these source-code versioned in theory, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also - in case of disaster - the rollback to the previous step is again one click in SVN. </p> <p>So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.</p>
<p>In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more linear audit trail than branching-and-tagging source control.</p>
6,024
<p>Does anybody know if there is a way to create an SQLite database based on an XSD <code>DataSet</code>? In the past, I've just used a basic SQLite manager, but I want to fuse things a bit more with my <code>.NET</code> development if possible.</p>
<p>I suspect in the general case this is hard; XML Schema allows for some quite bizarre type constructions. I'm not sure how you would do substitution groups, or extensions to restrictions of types for example.</p> <p>However it should be possible to knock something together quite quickly (especially mapping from the classes in System.Xml.Schema) that works for 90% of schemas (i.e. sequence and choice elements with a few simple data types).</p>
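<p>A toy version of that "90% of schemas" idea, assuming only top-level elements containing a sequence of simple-typed elements (the type map and naming are guesses, and attributes, choice elements, nesting and restrictions are all ignored, so this is a sketch rather than a converter):</p>

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

# Crude XSD-type -> SQLite-type mapping; anything unknown becomes TEXT.
TYPE_MAP = {
    "xs:string": "TEXT",
    "xs:int": "INTEGER",
    "xs:decimal": "REAL",
    "xs:dateTime": "TEXT",
}

def xsd_to_sql(xsd_text):
    """Emit one CREATE TABLE per top-level element that contains a
    complexType/sequence of simple elements; skip everything else."""
    root = ET.fromstring(xsd_text)
    statements = []
    for elem in root.findall(XS + "element"):
        seq = elem.find(XS + "complexType/" + XS + "sequence")
        if seq is None:
            continue
        cols = ["    %s %s" % (c.get("name"), TYPE_MAP.get(c.get("type"), "TEXT"))
                for c in seq.findall(XS + "element")]
        statements.append("CREATE TABLE %s (\n%s\n);"
                          % (elem.get("name"), ",\n".join(cols)))
    return "\n".join(statements)
```

<p>Feeding the generated SQL straight into <code>sqlite3</code> then gives you a database matching the DataSet's XSD, for the simple cases at least.</p>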
<p>I'm sure you could write a small app that takes an XSD file and parses it into a SQL script. I've never seen code out there to do it though, but that's not saying it doesn't exist.</p>
2,274
<p>Alright, this might be a bit of a long shot, but I am having problems getting AnkhSVN to connect from Visual Studio 2005 to an external SVN server. There is a network proxy in the way, but I can't seem to find a way in AnkhSVN to configure the proxy, and it doesn't seem to detect the Internet Explorer proxy configuration. Is there any way to resolve this issue, or will it likely just not work?</p>
<p><a href="http://en.wikipedia.org/wiki/Printf#printf_format_placeholders" rel="noreferrer">http://en.wikipedia.org/wiki/Printf#printf_format_placeholders</a> is Wikipedia's reference for format placeholders in printf. <a href="http://www.cplusplus.com/reference/clibrary/cstdio/printf.html" rel="noreferrer">http://www.cplusplus.com/reference/clibrary/cstdio/printf.html</a> is also helpful</p> <p>Basically in a simple form it's %[width].[precision][type]. Width allows you to make sure that the variable which is being printed is at least a certain length (useful for tables etc). Precision allows you to specify the precision a number is printed to (e.g. decimal places etc) and the type informs C/C++ what the variable you've given it is (character, integer, double etc).</p> <p>Hope this helps</p> <p><strong>UPDATE:</strong></p> <p>To clarify using your examples:</p> <pre><code>printf( "%10.1f %10.2f\n", radius, area ); </code></pre> <p>%10.1f (referring to the first argument: radius) means make it 10 characters long (i.e. pad with spaces), and print it as a float with one decimal place.</p> <p>%10.2f (referring to the second argument: area) means make it 10 characters long (as above) and print with two decimal places.</p>
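<p>Python borrowed the same placeholder syntax, so the width/precision behaviour is easy to poke at interactively:</p>

```python
# "%[width].[precision]f": pad to at least `width` characters total,
# keeping `precision` digits after the decimal point.
radius, area = 2.5, 19.63495

print("%10.1f %10.2f" % (radius, area))

# 19.63495 rounded to two decimals is "19.63" (5 chars), padded to 10:
assert "%10.2f" % area == "     19.63"
assert len("%10.1f" % radius) == 10
```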
<p>One issue that hasn't been raised by others is whether <strong>double</strong> is the same as a <strong>float</strong>. On some systems a different format specifier was needed for a double compared to a float, not least because the parameters passed could be of different sizes. <PRE>
%f  - float
%lf - double
%g  - double
</PRE></p>
3,861
<p>So I've done the trivial "warmup" apps with GAE. Now I'd like to build something with a more complex directory structure. Something along the lines of:</p> <pre><code>siteroot/
    models/
    controllers/
        controller1/
        controller2/
        ...
    templates/
        template1/
        template2/
        ...
</code></pre> <p>...etc. The controllers will be Python modules handling requests. They would then need to locate (Django-style) templates in associated folders. Most of the demo apps I've seen resolve template paths like this:</p> <pre><code>path = os.path.join(os.path.dirname(__file__), 'myPage.html') </code></pre> <p>...the <code>__file__</code> property resolves to the currently executing script. So, in my above example, if a Python script were running in controllers/controller1/, then the 'myPage.html' would resolve to that same directory -- controllers/controller1/myPage.html -- and I would rather cleanly separate my Python code and templates.</p> <p>The solution I've hacked together feels... hacky:</p> <pre><code>base_paths = os.path.split(os.path.dirname(__file__))
template_dir = os.path.join(base_paths[0], "templates")
</code></pre> <p>So, I'm just snipping off the last element of the path for the currently running script and appending the template directory to the new path. The other (non-GAE specific) solutions I've seen for resolving Python paths seem pretty heavyweight (such as splitting paths into lists and manipulating accordingly). Django seems to have an answer for this, but I'd rather stick to the GAE API, vs. creating a full Django app and modifying it for GAE.</p> <p>I'm assuming anything hard-coded would be non-starter, since the apps live on Google's infinite server farm. So what's a better way?</p>
<p>You can't use relative paths, as Toni suggests, because you have no guarantee that the path from your working directory to your app's directory will remain the same.</p> <p>The correct solution is to either use os.path.split, as you are, or to use something like:</p> <pre><code>path = os.path.join(os.path.dirname(__file__), '..', 'templates', 'myPage.html') </code></pre> <p>My usual approach is to generate a path to the template directory using the above method, and store it as a member of my controller object, and provide a "getTemplatePath" method that takes the provided filename and joins it with the basename.</p>
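<p>Wrapped up as the kind of base-controller helper described above (a hypothetical sketch; the names and directory layout are made up to match the structure in the question):</p>

```python
import os

class BaseController:
    """Controllers living in siteroot/controllers/<name>/ resolve
    templates from the sibling siteroot/templates/ directory,
    independent of the current working directory."""

    def __init__(self, module_file):
        # siteroot/controllers/controller1/foo.py -> siteroot
        controller_dir = os.path.dirname(os.path.abspath(module_file))
        site_root = os.path.dirname(os.path.dirname(controller_dir))
        self.template_dir = os.path.join(site_root, "templates")

    def get_template_path(self, name):
        return os.path.join(self.template_dir, name)

# Each concrete controller would pass its own __file__:
c = BaseController("/srv/siteroot/controllers/controller1/main.py")
print(c.get_template_path("myPage.html"))
```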
<p>The <code>dirname</code> function returns an absolute path, use relative paths. See what is the current directory when your controllers are executed with <code>os.path.abspath(os.path.curdir)</code> and build a path to the templates relative to that location (without the <code>os.path.abspath</code> part of course).</p> <p>This will only work if the current directory is somewhere inside <em>siteroot</em>, else you could do something like this:</p> <pre><code>template_dir = os.path.join(os.path.dirname(__file__), os.path.pardir, "templates") </code></pre>
8,722
<p>I am using Tomcat as a server and Internet Explorer 6 as a browser. A web page in our app has about 75 images. We are using SSL. It seems to be very slow at loading all the content. How can I configure Tomcat so that IE caches the images?</p>
<p>If you are serving a page over https then you'll need to serve all the included static or dynamic resources over https (either from the same domain, or another domain, also over https) to avoid a security warning in the browser.</p> <p>Content delivered over a secure channel will not be written to disk by default by most browsers and so lives in the browser's memory cache, which is much smaller than the on-disk cache. This cache also disappears when the application quits.</p> <p>Having said all of that, there are things you can do to improve the cacheability of SSL assets within a single browser session. For starters, ensure that all your assets have reasonable Expires and Cache-Control headers. If Tomcat is sitting behind Apache then use mod_expires to add them. This will avoid the browser having to check if the image has changed between pages</p> <pre><code>&lt;Location /images&gt;
    FileETag none
    ExpiresActive on
    ExpiresDefault "access plus 1 month"
&lt;/Location&gt;
</code></pre> <p>Secondly, and this is specific to MSIE and Apache, most Apache SSL configs include these lines</p> <pre><code>SetEnvIf User-Agent ".*MSIE.*" \
    nokeepalive ssl-unclean-shutdown \
    downgrade-1.0 force-response-1.0
</code></pre> <p>This disables keepalive for ALL MSIE agents. IMHO this is far too conservative; the last MSIE browsers to have issues using SSL were 5.x and unpatched versions of 6.0 pre SP2, both of which are very uncommon now. The following is more lenient and will not disable keepalives when using MSIE and SSL</p> <pre><code>BrowserMatch "MSIE [1-4]" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
BrowserMatch "MSIE [5-9]" ssl-unclean-shutdown
</code></pre>
<p>Content served over a HTTPS connection <strong>never gets cached</strong> in the browser. You cannot do much about it. </p> <p>Usually, images in your web site are not very sensitive and are served over HTTP for this very reason.</p>
9,797
<p>Is there a way to convert an HTML string into an image <code>.tiff</code> file?</p> <p>I am using C# .NET 3.5. The requirement is to give the user an option to fax a confirmation. The confirmation is created with XML and an XSLT. Typically it is e-mailed. </p> <p>Is there a way I can take the HTML string generated by the transformation and convert that to a <code>.tiff</code> or any image that can be faxed?</p> <p>3rd party software is allowed, however the cheaper the better.</p> <p>We are using a 3rd party fax library that will only accept <code>.tiff</code> images, but if I can get the HTML to be any image I can convert it into a <code>.tiff</code>.</p>
<p>Here are some free-as-in-beer possibilities:</p> <p>You can use the PDFCreator printer driver that comes with Ghostscript and print directly to a TIFF file or many other formats.</p> <p>If you have MSOffice installed, the Microsoft Office Document Image Writer will produce a file you can convert to other formats.</p> <p>But in general, your best bet is to print to a driver that will produce an image file of some kind or a Windows Metafile (.wmf) file.</p> <p>Is there some reason why you can't just print-to-fax? Does the third-party software not support a printer driver? That's unusual these days.</p>
<p>A starting point might be the software of <a href="http://www.websupergoo.com" rel="nofollow noreferrer">WebSuperGoo</a>, which provide rich image editing products, cheap or for free.</p> <p>I know for sure their PDF Writer can do basic HTML (<a href="http://www.websupergoo.com/helppdf6net/source/3-concepts/b-htmlstyles.htm" rel="nofollow noreferrer">http://www.websupergoo.com/helppdf6net/source/3-concepts/b-htmlstyles.htm</a>). This should not be too hard to convert to TIFF.</p> <p>This does not include the full HTML subset or CSS. That might require using Microsofts IE ActiveX component.</p>
7,752
<p>Glass is always level, easy to clean, easy to work with.</p> <p>Aluminium allows for the addition of automatic bed leveling with an inductive sensor and distributes heat a little more evenly.</p> <p>When printing mostly ABS and PLA, which one is better?</p>
<p>I believe printing directly on aluminium is unwise, simply because it will expand when heated, typically giving the bed a concave or convex shape. Glass, on the other hand, does not (at least not significantly).</p> <p>As pointed out in the comments below, the heat expansion of aluminium could potentially be mitigated by increasing the thickness of the bed, as well as heating it evenly. Also, a common solution is to place a glass plate on top of an aluminium bed, at the cost of a slightly longer heat-up time.</p> <p>In my experience, printing directly on heated glass can be very practical and give a nice surface finish for some PLA variants and other materials that support it. I don't know if printing directly on aluminium can give similar benefits.</p>
<p>You could always slightly slot the holes on your aluminum bed so it's not corner-constrained and forced to bow in the middle.</p> <p>My advice is to use both, with hairspray. I'm currently testing the effectiveness of insulation on the bottom of the plate.</p>
186
<p>I have a very dense point cloud (billions of points) of the exterior of a building obtained by laser scanning it with a Leica head. I successfully subsampled it down to around 500,000 and I'm trying to print the building by first creating a mesh. I tried using CloudCompare, Meshlab and PDAL, using Poisson surface reconstruction. However, the resulting mesh is full of holes, mainly in the roofs which have the lowest point density, and I cannot print it. Is there any algorithm which could use the fact that the point cloud is precisely the exterior part of a geometric thing?</p>
<p>The foundation of any 3D printer is the controller and the firmware. Many devices are based on Arduino-type controllers, with stepper motor driver boards either integrated or added as a plug-in component.</p> <p>Some manufacturers will use in-house or outside resources and develop their own boards and firmware.</p> <p>You can search for 3D printer controllers and get a pretty comprehensive list of the various devices available for purchase. Smoothieboard is one device, Raspberry Pi and Arduino as noted above, and others.</p> <p>Varying "flavors" of firmware can be found to load onto these controllers as well.</p> <p>The field is extensive.</p> <p>To address your focus regarding the printing aspect, that's one stepper motor per print head/nozzle (usually), and it involves calibrating the amount of filament dispensed from the nozzle per unit of steps, or, more easily understood, the number of steps per unit of filament movement. My stepper motor for the extruder has a planetary gear and moves 100 mm of filament for about 5000 steps.</p> <p>All of the parameters you've noted are integrated with the firmware. Motor calibration requires movement per step or steps per millimeter to be entered, unless you purchase a turnkey system with the values loaded.</p> <p>You can adjust many of the parameters from the slicing software, but it's more practical to determine the calibration settings, enter that information into your slicer and proceed with model management.</p> <p>Look into Instructables for others' build projects to see what they've accomplished and the steps involved in such a build. This can give you a starting point for your efforts.</p>
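<p>The filament calibration mentioned above is simple arithmetic: command a known length, measure what actually moved, and rescale. A quick sketch (the starting figure matches the extruder described above, roughly 5000 steps per 100 mm, i.e. 50 steps/mm; the measured value is invented for illustration):</p>

```python
# Standard extruder steps/mm calibration: request a known length of
# filament, measure the actual movement, and rescale steps-per-mm.
def calibrated_steps_per_mm(current_steps_per_mm, requested_mm, measured_mm):
    return current_steps_per_mm * requested_mm / measured_mm

# e.g. starting at 50 steps/mm, if a 100 mm request only moved 98 mm:
print(calibrated_steps_per_mm(50, 100, 98))  # ~51.02 steps/mm
```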
<p>This is an extension to fred_dot_u's answer. As I am in the process of building my own printer, I decided to use the RAMPS Arduino shield for electronics and Marlin firmware + Arduino Mega 2560 as a logic controller. </p> <p>As the above are battle-tested, I don't need to reinvent the wheel, and can rather focus on the mechanics. The RAMPS mainboard will provide you with the ability to connect 5 steppers and 3 PWM-regulated devices (that can be heated bed, nozzle and fan, or a combination), and there is still the possibility to connect more, as a bunch of I/O pins are ready to allocate. </p> <p>Marlin firmware is available on GitHub, so you can download it and then provide all the mechanical parameters needed to properly drive your printer, that is:</p> <ol> <li>Mainboard type</li> <li>Kinematics type (CoreXY, CoreXZ, ...)</li> <li>Bed size (x, y)</li> <li>Gantry size (z)</li> <li>Steps/mm for steppers</li> <li>Thermal protection parameters </li> <li>LCD display (if in use)</li> </ol> <p>The main benefit of using Marlin for me is that the printer configuration is extremely customizable.</p>
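<p>Those parameters end up as defines in Marlin's <code>Configuration.h</code>; a few representative lines (the values are placeholders for my geometry, not recommendations, and exact macro names can shift between Marlin versions):</p>

```cpp
// Representative Configuration.h excerpts (Marlin 1.1.x style)
#define MOTHERBOARD BOARD_RAMPS_14_EFB       // RAMPS 1.4: Extruder/Fan/Bed
#define X_BED_SIZE 200                       // bed size (x, y) in mm
#define Y_BED_SIZE 200
#define Z_MAX_POS 180                        // gantry height (z)
#define DEFAULT_AXIS_STEPS_PER_UNIT { 80, 80, 4000, 50 }  // steps/mm X, Y, Z, E
#define THERMAL_PROTECTION_HOTENDS           // thermal protection parameters
#define THERMAL_PROTECTION_BED
```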
<p>I'm looking for a tool ($, free, open source; I don't care) that will allow me to view not just the memory statistics for a .NET program, but also the object hierarchy. I'd really like to be able to drill down through each object and view its footprint, as well as all the objects it references.</p> <p>I've looked at things like Ants Profiler from RedGate, but it's not quite what I want: I can't view specific instances.</p> <p>EDIT: I've used the .NET Memory Profiler (the one that ships with Visual Studio, and the one that used to be part of the SDK (?)) before, and while it's really good (and shows views most others don't), what I'm really after is being able to drill down through my object hierarchy, viewing each object instance.</p>
<p>I have used JetBrains DotTrace and Redgate Ants, both of which I would recommend. A lesser known profiler I have also used is .Net Memory Profiler (<a href="http://memprofiler.com/" rel="nofollow noreferrer">http://memprofiler.com/</a>), which at the time I used it provided a different perspective on memory usage than the former two profilers mentioned. I find DotTrace and Ants to be very similar, though each one is slightly different.</p>
<p><a href="http://www.jetbrains.com/profiler/" rel="nofollow noreferrer">JetBrains dottrace profiler</a> is the best. I wouldn't work without it. It is hard to find a tool that is free and performs well in this arena. Dottrace is hands down the best profiler I have used for .Net.</p>
<p>I mostly use Java and generics are relatively new. I keep reading that Java made the wrong decision or that .NET has better implementations etc. etc.</p> <p>So, what are the main differences between C++, C#, Java in generics? Pros/cons of each?</p>
<p>I'll add my voice to the noise and take a stab at making things clear:</p> <h2>C# Generics allow you to declare something like this.</h2> <pre><code>List&lt;Person&gt; foo = new List&lt;Person&gt;(); </code></pre> <p>and then the compiler will prevent you from putting things that aren't <code>Person</code> into the list.<br /> Behind the scenes the C# compiler is just putting <code>List&lt;Person&gt;</code> into the .NET dll file, but at runtime the JIT compiler goes and builds a new set of code, as if you had written a special list class just for containing people - something like <code>ListOfPerson</code>.</p> <p>The benefit of this is that it makes it really fast. There's no casting or any other stuff, and because the dll contains the information that this is a List of <code>Person</code>, other code that looks at it later on using reflection can tell that it contains <code>Person</code> objects (so you get intellisense and so on).</p> <p>The downside of this is that old C# 1.0 and 1.1 code (before they added generics) doesn't understand these new <code>List&lt;something&gt;</code>, so you have to manually convert things back to plain old <code>List</code> to interoperate with them. This is not that big of a problem, because C# 2.0 binary code is not backwards compatible. The only time this will ever happen is if you're upgrading some old C# 1.0/1.1 code to C# 2.0</p> <h2>Java Generics allow you to declare something like this.</h2> <pre><code>ArrayList&lt;Person&gt; foo = new ArrayList&lt;Person&gt;(); </code></pre> <p>On the surface it looks the same, and it sort-of is. The compiler will also prevent you from putting things that aren't <code>Person</code> into the list.</p> <p>The difference is what happens behind the scenes. Unlike C#, Java does not go and build a special <code>ListOfPerson</code> - it just uses the plain old <code>ArrayList</code> which has always been in Java. 
When you get things out of the array, the usual <code>Person p = (Person)foo.get(1);</code> casting-dance still has to be done. The compiler is saving you the key-presses, but the speed hit/casting is still incurred just like it always was.<br /> When people mention &quot;Type Erasure&quot; this is what they're talking about. The compiler inserts the casts for you, and then 'erases' the fact that it's meant to be a list of <code>Person</code> not just <code>Object</code></p> <p>The benefit of this approach is that old code which doesn't understand generics doesn't have to care. It's still dealing with the same old <code>ArrayList</code> as it always has. This is more important in the java world because they wanted to support compiling code using Java 5 with generics, and having it run on old 1.4 or previous JVM's, which microsoft deliberately decided not to bother with.</p> <p>The downside is the speed hit I mentioned previously, and also because there is no <code>ListOfPerson</code> pseudo-class or anything like that going into the .class files, code that looks at it later on (with reflection, or if you pull it out of another collection where it's been converted into <code>Object</code> or so on) can't tell in any way that it's meant to be a list containing only <code>Person</code> and not just any other array list.</p> <h2>C++ Templates allow you to declare something like this</h2> <pre><code>std::list&lt;Person&gt;* foo = new std::list&lt;Person&gt;(); </code></pre> <p>It looks like C# and Java generics, and it will do what you think it should do, but behind the scenes different things are happening.</p> <p>It has the most in common with C# generics in that it builds special <code>pseudo-classes</code> rather than just throwing the type information away like java does, but it's a whole different kettle of fish.</p> <p>Both C# and Java produce output which is designed for virtual machines. 
If you write some code which has a <code>Person</code> class in it, in both cases some information about a <code>Person</code> class will go into the .dll or .class file, and the JVM/CLR will do stuff with this.</p> <p>C++ produces raw x86 binary code. Everything is <em>not</em> an object, and there's no underlying virtual machine which needs to know about a <code>Person</code> class. There's no boxing or unboxing, and functions don't have to belong to classes, or indeed anything.</p> <p>Because of this, the C++ compiler places no restrictions on what you can do with templates - basically any code you could write manually, you can get templates to write for you.<br /> The most obvious example is adding things:</p> <p>In C# and Java, the generics system needs to know what methods are available for a class, and it needs to pass this down to the virtual machine. The only way to tell it this is by either hard-coding the actual class in, or using interfaces. For example:</p> <pre><code>string addNames&lt;T&gt;( T first, T second ) { return first.Name() + second.Name(); } </code></pre> <p>That code won't compile in C# or Java, because it doesn't know that the type <code>T</code> actually provides a method called Name(). You have to tell it - in C# like this:</p> <pre><code>interface IHasName{ string Name(); }; string addNames&lt;T&gt;( T first, T second ) where T : IHasName { .... } </code></pre> <p>And then you have to make sure the things you pass to addNames implement the IHasName interface and so on. The java syntax is different (<code>&lt;T extends IHasName&gt;</code>), but it suffers from the same problems.</p> <p>The 'classic' case for this problem is trying to write a function which does this</p> <pre><code>string addNames&lt;T&gt;( T first, T second ) { return first + second; } </code></pre> <p>You can't actually write this code because there are no ways to declare an interface with the <code>+</code> method in it. 
You fail.</p> <p>C++ suffers from none of these problems. The compiler doesn't care about passing types down to any VM's - if both your objects have a .Name() function, it will compile. If they don't, it won't. Simple.</p> <p>So, there you have it :-)</p>
<p>NB: I don't have enough reputation to comment, so feel free to move this as a comment to the appropriate answer.</p> <p>Contrary to popular belief (the origin of which I have never understood), .NET implemented true generics without breaking backward compatibility, and explicit effort was spent on that. You don't have to change your non-generic .NET 1.0 code into generics just to use it in .NET 2.0. Both the generic and non-generic lists are still available in the .NET Framework from 2.0 through 4.0, for nothing else but backward compatibility. Therefore old code that still uses the non-generic ArrayList will still work, and uses the same ArrayList class as before. Backward code compatibility has always been maintained, from 1.0 until now... So even in .NET 4.0, you still have the option to use any non-generic class from the 1.0 BCL if you choose to do so.</p> <p>So I don't think Java had to break backward compatibility to support true generics.</p>
<p>Are there any <strong>good</strong> grid-hosting companies out there that offer .NET stacks? Something like MediaTemple - which won't host the worlds fastest websites, but for the price is far better than "shared hosting". I've used Rackspace's Mosso, but it sucked - it never felt like a normal .NET stack (caching was odd, site recompilation was odd).</p>
<p>Try <a href="http://www.gogrid.com" rel="noreferrer">gogrid.com</a> they seem to have a very nice following in the cloud computing circles.</p>
<p>Can you talk a bit more about the not-so-normal .NET experience? I was thinking of going with Mosso...</p>
<p>At my job, I have a clutch of six Windows services that I am responsible for, written in C# 2003. Each of these services contain a timer that fires every minute or so, where the majority of their work happens.</p> <p>My problem is that, as these services run, they start to consume more and more CPU time through each iteration of the loop, even if there is no meaningful work for them to do (ie, they're just idling, looking through the database for something to do). When they start up, each service uses an average of (about) 2-3% of 4 CPUs, which is fine. After 24 hours, each service will be consuming an entire processor for the duration of its loop's run.</p> <p>Can anyone help? I'm at a loss as to what could be causing this. Our current solution is to restart the services once a day (they shut themselves down, then a script sees that they're offline and restarts them at about 3AM). But this is not a long term solution; my concern is that as the services get busier, restarting them once a day may not be sufficient... but as there's a significant startup penalty (they all use NHibernate for data access), as they get busier, exactly what we <em>don't</em> want to be doing is restarting them more frequently.</p> <hr> <p>@akmad: True, it is very difficult.</p> <ol> <li>Yes, a service run in isolation will show the same symptom over time.</li> <li>No, it doesn't. We've looked at that. This can happen at 10AM or 6PM or in the middle of the night. There's no consistency.</li> <li>We do; and they are. The services are doing exactly what they should be, and nothing else.</li> <li>Unfortunately, that requires foreknowledge of exactly when the services are going to be maxing out CPUs, which happens on an unpredictable schedule, and never very quickly... which makes things doubly difficult, because my boss will run and restart them when they start having problems without thinking of debug issues.</li> <li>No, they're using a fairly consistent amount of RAM (approx. 
60-80MB each, out of 4GB on the machine).</li> </ol> <p>Good suggestions, but rest assured, we have tried all of the usual troubleshooting. What I'm hoping is that this is a .NET issue that someone might know about, that we can work on solving. My boss' solution (which I emphatically don't want to implement) is to put a field in the database which holds multiple times for the services to restart during the day, so that he can make the problem go away and not think about it. I'm desperately seeking the cause of the real problem so that I can fix it, because that solution will become a disaster in about six months.</p> <hr> <p>@Yaakov Ellis: They each have a different function. One reads records out of an Oracle database somewhere offsite; another one processes those records and transfers files belonging to those records over to our system; a third checks those files to make sure they're what we expect them to be; another is a maintenance service that constantly checks things like disk space (that we have enough) and polls other servers to make sure they're alive; one is running only to make sure all of these other ones are running and doing their jobs, monitors and reports errors, and restarts anything that's failed to keep the whole system going 24 hours a day.</p> <p>So, if you're asking what I think you're asking, no, there isn't one common thing that all these services do (other than database access via NHibernate) that I can point to as a potential problem. Unfortunately, if that turns out to be the actual issue (which wouldn't surprise me greatly), the whole thing might be screwed -- and I'll end up rewriting all of them in simple SQL. I'm hoping it's a garbage collector problem or something easier to deal with than NHibernate.</p> <hr> <p>@Joshdan: No secret. As I said, we've tried all the usual troubleshooting. Profiling was unhelpful: the profiler we use was unable to point to any code that was actually executing when the CPU usage was high. 
These services were torn apart about a month ago looking for this problem. Every section of code was analyzed to attempt to figure out if our code was the issue; I'm not here asking because I haven't done my homework. Were this a simple case of the services doing more work than anticipated, that's something that would have been caught. </p> <p>The problem here is that, most of the time, the services are not doing anything at all, yet still manage to consume 25% or more of four CPU cores: they're finding no work to do, and exiting their loop and waiting for the next iteration. This should, quite literally, take almost no CPU time at all.</p> <p>Here's a example of behaviour we're seeing, on a service with no work to do for two days (in an unchanging environment). This was captured last week:</p> <p>Day 1, 8AM: Avg. CPU usage approx 3%<br> Day 1, 6PM: Avg. CPU usage approx 8%<br> Day 2, 7AM: Avg. CPU usage approx 20%<br> Day 2, 11AM: Avg. CPU usage approx 30% </p> <p>Having looked at all of the possible mundane reasons for this, I've asked this question here because I figured (rightly, as it turns out) that I'd get more innovative answers (like Ubiguchi's), or pointers to things I <em>hadn't</em> thought of (like Ian's suggestion).</p> <hr> <blockquote> <p>So does the CPU spike happen immediately preceding the timer callback, within the timer callback, or immediately following the timer callback?</p> </blockquote> <p>You misunderstand. This is not a spike. If it were, there would be no problem; I can deal with spikes. But it's not... the CPU usage is going up generally. Even when the service is doing nothing, waiting for the next timer hit. When the service starts up, things are nice and calm, and the graph looks like what you'd expect... generally, 0% usage, with spikes to 10% as NHibernate hits the database or the service does some trivial amount of work. 
But this increases to an across-the-board 25% (more if I let it go too far) usage at all times while the process is running.</p> <p>That made Ian's suggestion the logical silver bullet (NHibernate does <em>a lot</em> of stuff when you're not looking). Alas, I've implemented his solution, but it hasn't had an effect (I have no proof of this, but I actually think it's made things worse... average usage is <em>seeming</em> to go up much faster now). Note that stripping out the NHibernate "sections" (as you recommend) is not feasible, since that would strip out about 90% of the code in the service, which would let me rule out the timer as a problem (which I absolutely intend to try), but can't help me rule out NHibernate as the issue, because if NHibernate is causing this, then the dodgy fix that's implemented (see below) is just going to have to become The Way The System Works; we are so dependent on NHibernate for this project that the PM simply won't accept that it's causing an unresolvable structural problem.</p> <blockquote> <p>I just noted a sense of desperation in the question -- that your problems would continue barring a small miracle</p> </blockquote> <p>Don't mean for it to come off that way. At the moment, the services are being restarted daily (with an option to input any number of hours of the day for them to shutdown and restart), which patches the problem but cannot be a long-term solution once they go onto the production machine and start to become busy. The problems will not continue, whether I fix them or the PM maintains this constraint on them. Obviously, I would prefer to implement a real fix, but since the initial testing revealed no reason for this, and the services have already been extensively reviewed, the PM would rather just have them restart multiple times than spend any more time trying to fix them. 
That's entirely out of my control and makes the miracle you were talking about more important than it would otherwise be.</p> <blockquote> <p>That is extremely intriguing (insofar as you trust your profiler).</p> </blockquote> <p>I don't. But then, these are Windows services written in .NET 1.1 running on a Windows 2000 machine, deployed by a dodgy Nant script, using an old version of NHibernate for database access. There's little on that machine I would actually say I trust.</p>
<p>It's obviously pretty difficult to remotely debug your unknown application... but here are some things I'd look at:</p> <ol> <li>What happens when you only run one of the services at a time? Do you still see the slow-down? This may indicate that there is some contention between the services.</li> <li>Does the problem always occur around the same time, regardless of how long the service has been running? This may indicate that something else (a backup, virus scan, etc.) is causing the machine (or db) as a whole to slow down.</li> <li>Do you have logging or some other mechanism to be sure that the service is only doing work as often as you think it should?</li> <li>If you can see the performance degradation over a short time period, try running the service for a while and then attach a profiler to see exactly what is pegging the CPU.</li> <li>You don't mention anything about memory usage. Do you have any of this information for the services? It's possible that you're using up most of the RAM and causing the disk to thrash, or some similar problem.</li> </ol> <p>Best of luck!</p>
<blockquote> <p>Good suggestions, but rest assured, we have tried all of the usual troubleshooting. What I'm hoping is that this is a .NET issue that someone might know about, that we can work on solving.</p> </blockquote> <p>My feeling is that no matter how bizarre the underlying cause, the usual troubleshooting steps are your best bet for locating the issue.</p> <p>Since this is a performance issue, good measurements are invaluable. The overall process CPU usage is far too broad a measurement. <em>Where</em> is your service spending its time? You could use a profiler to measure this, or just log various section start and stops. If you aren't able to do even that, then use Andrea Bertani's suggestion -- isolate sections by removing others.</p> <p>Once you've located the general area, then you can make even finer-grained measurements, until you sort out the source of the CPU usage. If it's not obvious how to fix it at that point, you at least have ammunition for a much more specific question.</p> <p>If you have in fact already done all this usual troubleshooting, please do let us in on the secret.</p>
<p>Could someone recommend any good resources for creating Graphical User Interfaces, preferably in C/C++?</p> <p>Currently my biggest influence is <a href="http://www.3dbuzz.com" rel="nofollow noreferrer">3DBuzz.com</a>'s <a href="http://www.3dbuzz.com/xcart/product.php?productid=30&amp;cat=12&amp;page=1" rel="nofollow noreferrer">C++/OpenGL VTMs</a> (Video Training Modules). While they are very good, they cover a large area of graphics programming, so only skim the surface of GUI programming.</p> <p>This question does relate to "<a href="https://stackoverflow.com/questions/7489/how-do-i-make-a-gui">How do I make a GUI?</a>", where there is also a rough outline of my current structure.</p> <p>Any response would be appreciated.</p> <p><strong>Edit:</strong> I've just read some of the answers and spotted that I've missed some points. I had to type the question twice as I caught the wrong button and lost it.</p> <p>I missed two important points, first: This will be used cross platform including homebrew on a Sony PSP. Second: I want to create a GUI system, not use an existing one.</p> <p><strong>Edit 2:</strong> I think some of you are missing the point. I don't want to <strong>use</strong> an existing GUI system, I want to build one.</p> <p>Qt in its current form is not portable to the PSP, never mind the overkill of such a task.</p> <p>That said, I've decided to create an IM-GUI, and have started to prototype the code.</p>
<p>I wouldn't use OpenGL for the GUI unless you are planning for hardware accelerated and/or 3D effects that you don't think you would get with a more traditional GUI toolkit (e.g <a href="http://trolltech.com/" rel="nofollow noreferrer">Qt</a>, <a href="http://www.wxwidgets.org/" rel="nofollow noreferrer">wxWidgets</a>, <a href="http://www.gtk.org/" rel="nofollow noreferrer">GTK</a>, etc). If you just need a quick simple GUI for hosting your OpenGL graphics then <a href="http://www.fltk.org/" rel="nofollow noreferrer">FLTK</a> is a nice choice. Otherwise, for rendering the GUI directly in OpenGL their are libraries like <a href="http://www.cegui.org.uk/wiki/index.php/Main_Page" rel="nofollow noreferrer">Crazy Eddie's GUI</a> that do just that and provide lots of <a href="http://en.wikipedia.org/wiki/Skinnable" rel="nofollow noreferrer">skinnable</a> <a href="http://en.wikipedia.org/wiki/GUI_widget" rel="nofollow noreferrer">widgets</a> that you won't have to reinvent. The window and OpenGL context could then be provide with a portable library like <a href="http://www.libsdl.org/" rel="nofollow noreferrer">SDL</a>.</p> <p><strong>EDIT</strong>: Now that I've gone back and taken at look your <a href="https://stackoverflow.com/questions/7489/how-do-i-make-a-gui">other post</a> I think I have a better understanding of what you are asking. For a GUI on an embedded system like the Nintendo DS, I would consider using an "immediate mode" GUI. <a href="http://sol.gfxile.net/who.html" rel="nofollow noreferrer">Jari Komppa</a> has a <a href="http://sol.gfxile.net/imgui/" rel="nofollow noreferrer">good tutorial about them</a>, but you could use a more object-oriented approach with C++ than the C code he presents.</p>
<p>I'll second Qt. It's cross platform, and I found it <em>much</em> easier to work with than the built in Visual Studio GUI stuff. It's dual-licensed, so if you don't want your code to be GPL you could purchase a license instead.</p>
<p>I'm seeing the following pattern on my Prusa clone:</p> <p><a href="https://i.stack.imgur.com/uskfa.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/uskfa.jpg" alt="Photo of low quality second layer print"></a></p> <p>The first layer prints OK, layer adhesion to the bed is good. The second layer shows the weird gaps. On larger pieces, or with an increased feed rate, it sometimes gets so bad that parts of the second layer detach and are dragged across the print. I get the impression that this effect is worse when printing in the "top left to bottom right" direction, but not as bad when printing in the "top right to bottom left" direction. </p> <p>This occurs with multiple filaments and materials. I've already tried to manipulate the flow rate, temperatures and print speed, but gotten little improvement if any. </p> <p>What might be causing this issue?</p>
<p>I wouldn't describe that as an "OK" first layer. The nozzle is either too far from the bed, or you're underextruding. Underextrusion also looks like the cause of your second layer problems.</p>
<p>Your first layer is horribly miscalibrated as well.</p> <p>You really should do all calibrations over, including your firmware, when your result is this bad.</p> <p>In the following, let's just assume your firmware is set right, and that your flow is right as well. The second layer speaks loudly of overcooked, dribbling plastic. You might have a jam. Maybe the factory left some metal shavings in the hot end; I've had it happen. Try taking apart the extruder and make sure there are no obstructions. Clean the tip with a micro hand drill bit.</p> <p>After that we will need more info. Post your settings. You might have underflow (you set the filament size higher than it is) and have the temperature so hot that you are literally boiling the plastic and it is all dripping sporadically instead of in a controlled way.</p> <p>Having the temperature too high can increase carbonized build-up and create jams. Did you ever have a good print? My money is on a partial obstruction.</p>
<p>I am running an application through gdb and I want to set a breakpoint for any time a specific variable is accessed / changed. Is there a good method for doing this? I would also be interested in other ways to monitor a variable in C/C++ to see if/when it changes.</p>
<p><em>watch</em> only breaks on write, <em>rwatch</em> lets you break on read, and <em>awatch</em> lets you break on read/write.</p> <p>You can set read watchpoints on memory locations:</p> <pre><code>gdb$ rwatch *0xfeedface
Hardware read watchpoint 2: *0xfeedface
</code></pre> <p>but one limitation applies to the rwatch and awatch commands; you can't use gdb variables in expressions:</p> <pre><code>gdb$ rwatch $ebx+0xec1a04f
Expression cannot be implemented with read/access watchpoint.
</code></pre> <p>So you have to expand them yourself:</p> <pre><code>gdb$ print $ebx
$13 = 0x135700
gdb$ rwatch *0x135700+0xec1a04f
Hardware read watchpoint 3: *0x135700 + 0xec1a04f
gdb$ c
Hardware read watchpoint 3: *0x135700 + 0xec1a04f

Value = 0xec34daf
0x9527d6e7 in objc_msgSend ()
</code></pre> <p><strong>Edit:</strong> Oh, and by the way: you need either hardware <strong>or software support</strong>. Software is obviously much slower. To find out if your OS supports hardware watchpoints you can check the <em>can-use-hw-watchpoints</em> setting.</p> <pre><code>gdb$ show can-use-hw-watchpoints
Debugger's willingness to use watchpoint hardware is 1.
</code></pre>
<p>In addition to what has already been answered/commented by <a href="https://stackoverflow.com/users/5577/asksol">asksol</a> and <a href="https://stackoverflow.com/users/2508150/paolo-m">Paolo M</a>:</p> <p>At first I didn't understand why we need to cast the results. Even though I read this: <a href="https://sourceware.org/gdb/onlinedocs/gdb/Set-Watchpoints.html" rel="nofollow noreferrer">https://sourceware.org/gdb/onlinedocs/gdb/Set-Watchpoints.html</a>, it wasn't intuitive to me...</p> <p>So I did an experiment to make the result clearer. Code (let's say that int main() is at Line 3; int i=0 is at Line 5 and other code.. is from Line 10):</p> <pre><code>int main()
{
    int i = 0;
    int j;
    i = 3840;   // binary 1111 0000 0000, to take into account endianness
    other code..
}
</code></pre> <p>Then I started gdb with the executable file. In my first attempt, I set the watchpoint on the location of the variable without casting; the following were the results displayed:</p> <pre><code>Breakpoint 2 at 0x10040109b: file testing2.c, line 10.
(gdb) s
7       i = 3840;
(gdb) p i
$1 = 0
(gdb) p &amp;i
$2 = (int *) 0xffffcbfc
(gdb) watch *0xffffcbfc
Hardware watchpoint 3: *0xffffcbfc
(gdb) s
[New Thread 13168.0xa74]

Thread 1 &quot;testing2&quot; hit Breakpoint 2, main () at testing2.c:10
10      b = a;
(gdb) p i
$3 = 3840
(gdb) p *0xffffcbfc
$4 = 3840
(gdb) p/t *0xffffcbfc
$5 = 111100000000
</code></pre> <p>As we can see, the breakpoint that I had set for line 10 was hit. gdb didn't break on the watchpoint because, although variable i underwent a change, the location being watched didn't change (due to endianness, it continued to remain all 0's).</p> <p>In my second attempt, I cast the address of the variable so as to watch all of its sizeof(int) bytes. This time:</p> <pre><code>(gdb) p &amp;i
$6 = (int *) 0xffffcbfc
(gdb) p i
$7 = 0
(gdb) watch *(int *) 0xffffcbfc
Hardware watchpoint 6: *(int *) 0xffffcbfc
(gdb) b 10
Breakpoint 7 at 0x10040109b: file testing2.c, line 10.
(gdb) i b
Num     Type           Disp Enb Address            What
6       hw watchpoint  keep y                      *(int *) 0xffffcbfc
7       breakpoint     keep y   0x000000010040109b in main at testing2.c:10
(gdb) n
[New Thread 21508.0x3c30]

Thread 1 &quot;testing2&quot; hit Hardware watchpoint 6: *(int *) 0xffffcbfc

Old value = 0
New value = 3840

Thread 1 &quot;testing2&quot; hit Breakpoint 7, main () at testing2.c:10
10      b = a;
</code></pre> <p>gdb broke, since it detected that the value had changed.</p>
<p><a href="https://stackoverflow.com/questions/180/function-for-creating-color-wheels">Original Question</a></p> <p>If you are given N maximally distant colors (and some associated distance metric), can you come up with a way to sort those colors into some order such that the first M are also reasonably close to being a maximally distinct set?</p> <p>In other words, given a bunch of distinct colors, come up with an ordering so I can use as many colors as I need starting at the beginning and be reasonably assured that they are all distinct and that nearby colors are also very distinct (e.g., bluish red isn't next to reddish blue).</p> <p>Randomizing is OK but certainly not optimal.</p> <p><em>Clarification: Given some large and visually distinct set of colors (say 256, or 1024), I want to sort them such that when I use the first, say, 16 of them that I get a relatively visually distinct subset of colors. This is equivalent, roughly, to saying I want to sort this list of 1024 so that the closer individual colors are visually, the farther apart they are on the list.</em></p>
<p>This also sounds to me like some kind of <em>resistance graph</em> where you try to map out the path of least resistance. If you invert the requirements to the path of maximum resistance, it could perhaps be used to produce a set that from the start produces maximum difference as you go, and towards the end starts to go back to values closer to the others.</p> <p>For instance, here's one way to perhaps do what you want.</p> <ol> <li>Calculate the distance (ref <a href="https://stackoverflow.com/questions/1313/followup-finding-an-accurate-distance-between-colors">your other post</a>) from each color to all other colors</li> <li>Sum the distances for each color; this gives you an indication of <em>how far away this color is from all other colors in total</em></li> <li>Order the list by distance, going down</li> </ol> <p>This would, it seems, produce a list that starts with the color that is farthest away from all other colors, and then goes down; colors towards the end of the list would be closer to other colors in general.</p> <p>Edit: Reading your reply to my first post, about the spatial subdivision: it would not exactly fit the above description, since colors close to other colors would fall to the bottom of the list. But let's say you have a cluster of colors somewhere; at least one of the colors from that cluster would be located near the start of the list, and it would be the one that generally was farthest away from all other colors in total. If that makes sense.</p>
<p>Do you mean that from a set of N colors, you need to pick M colors, where M &lt; N, such that M is the <em>best</em> representation of the N colors in the M space?</p> <p>As a better example, reduce a true-color (24 bit color space) to a 8-bit mapped color space (GIF?).</p> <p>There are quantization algorithms for this, like the <a href="http://www.imagemagick.org/www/quantize.html" rel="nofollow noreferrer">Adaptive Spatial Subdivision</a> algorithm used by ImageMagic.</p> <p>These algorithms usually don't just pick existing colors from the source space but creates new colors in the target space that most closely resemble the source colors. As a simplified example, if you have 3 colors in the original image where two are red (with different intensity or bluish tints etc.) and the third is blue, and need to reduce to two colors, the target image could have a red color that is some kind of average of the original two red + the blue color from the original image.</p> <p>If you need something else then I didn't understand your question :)</p>
<p>Below is my current char* to hex string function. I wrote it as an exercise in bit manipulation. It takes ~7ms on an AMD Athlon MP 2800+ to hexify a 10 million byte array. Is there any trick or other way that I am missing?</p> <p>How can I make this faster?</p> <p>Compiled with -O3 in g++</p> <pre><code>static const char _hex2asciiU_value[256][2] = { {'0','0'}, {'0','1'}, /* snip..., */ {'F','E'},{'F','F'} }; std::string char_to_hex( const unsigned char* _pArray, unsigned int _len ) { std::string str; str.resize(_len*2); char* pszHex = &amp;str[0]; const unsigned char* pEnd = _pArray + _len; clock_t stick, etick; stick = clock(); for( const unsigned char* pChar = _pArray; pChar != pEnd; pChar++, pszHex += 2 ) { pszHex[0] = _hex2asciiU_value[*pChar][0]; pszHex[1] = _hex2asciiU_value[*pChar][1]; } etick = clock(); std::cout &lt;&lt; "ticks to hexify " &lt;&lt; etick - stick &lt;&lt; std::endl; return str; } </code></pre> <p><strong>Updates</strong></p> <p>Added timing code</p> <p><a href="https://stackoverflow.com/questions/69115/char-to-hex-string-exercise#69126">Brian R. Bondy</a>: replace the std::string with a heap-allocated buffer and change ofs*16 to ofs &lt;&lt; 4 - however the heap-allocated buffer seems to slow it down? - result ~11ms</p> <p><a href="https://stackoverflow.com/questions/69115/#69305">Antti Sykäri</a>: replace inner loop with </p> <pre><code> int upper = *pChar &gt;&gt; 4; int lower = *pChar &amp; 0x0f; pszHex[0] = pHex[upper]; pszHex[1] = pHex[lower]; </code></pre> <p>result ~8ms</p> <p><a href="https://stackoverflow.com/questions/69115?sort=votes#69218">Robert</a>: replace <code>_hex2asciiU_value</code> with a full 256-entry table, sacrificing memory space but result ~7ms!</p> <p><a href="https://stackoverflow.com/questions/69115/char-to-hex-string-exercise#70254">HoyHoy</a>: Noted it was producing incorrect results</p>
<p>At the cost of more memory you can create a full 256-entry table of the hex codes:</p> <pre><code>static const char _hex2asciiU_value[256][2] = { {'0','0'}, {'0','1'}, /* ..., */ {'F','E'},{'F','F'} }; </code></pre> <p>Then index directly into the table; no bit fiddling required.</p> <pre><code>const char *pHexVal = _hex2asciiU_value[*pChar]; pszHex[0] = pHexVal[0]; pszHex[1] = pHexVal[1]; </code></pre>
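<p>As an aside, the snipped 256-entry table doesn't have to be typed by hand; a quick (hypothetical) generator script can print the whole C declaration:</p>

```python
# Emit the full 256-entry C lookup table for the snipped declaration.
entries = ", ".join("{'%c','%c'}" % (h[0], h[1])
                    for h in (format(i, "02X") for i in range(256)))
print("static const char _hex2asciiU_value[256][2] = { %s };" % entries)
```

<p>Paste the output into the source (or run the script as a build step) and the <code>/* ..., */</code> gap is filled mechanically.</p>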
<p>Make sure your compiler optimization is turned on to the highest working level.</p> <p>You know, flags like '-O1' to '-O3' in gcc.</p>
<p>Is there a realistic way of implementing a multi-threaded model in PHP, whether truly or just simulating it? Some time back it was suggested that you could force the operating system to load another instance of the PHP executable and handle other simultaneous processes.</p> <p>The problem with this is that when the PHP code finishes executing, the PHP instance remains in memory because there is no way to kill it from within PHP. So if you are simulating several threads you can imagine what's going to happen. So I am still looking for a way in which multi-threading can be done or simulated effectively from within PHP. Any ideas?</p>
<h1>Multi-threading is possible in php</h1> <p>Yes you can do multi-threading in PHP with <a href="https://github.com/krakjoe/pthreads" rel="noreferrer">pthreads</a> </p> <p>From <a href="http://www.php.net/manual/en/intro.pthreads.php" rel="noreferrer">the PHP documentation</a>:</p> <blockquote> <p>pthreads is an object-orientated API that provides all of the tools needed for multi-threading in PHP. PHP applications can create, read, write, execute and synchronize with Threads, Workers and Threaded objects.</p> <p><strong>Warning</strong>: The pthreads extension cannot be used in a web server environment. Threading in PHP should therefore remain to CLI-based applications only.</p> </blockquote> <p><strong>Simple Test</strong></p> <pre><code>#!/usr/bin/php &lt;?php class AsyncOperation extends Thread { public function __construct($arg) { $this-&gt;arg = $arg; } public function run() { if ($this-&gt;arg) { $sleep = mt_rand(1, 10); printf('%s: %s -start -sleeps %d' . "\n", date("g:i:sa"), $this-&gt;arg, $sleep); sleep($sleep); printf('%s: %s -finish' . 
"\n", date("g:i:sa"), $this-&gt;arg); } } } // Create a array $stack = array(); //Initiate Multiple Thread foreach ( range("A", "D") as $i ) { $stack[] = new AsyncOperation($i); } // Start The Threads foreach ( $stack as $t ) { $t-&gt;start(); } ?&gt; </code></pre> <p>First Run</p> <pre><code>12:00:06pm: A -start -sleeps 5 12:00:06pm: B -start -sleeps 3 12:00:06pm: C -start -sleeps 10 12:00:06pm: D -start -sleeps 2 12:00:08pm: D -finish 12:00:09pm: B -finish 12:00:11pm: A -finish 12:00:16pm: C -finish </code></pre> <p>Second Run </p> <pre><code>12:01:36pm: A -start -sleeps 6 12:01:36pm: B -start -sleeps 1 12:01:36pm: C -start -sleeps 2 12:01:36pm: D -start -sleeps 1 12:01:37pm: B -finish 12:01:37pm: D -finish 12:01:38pm: C -finish 12:01:42pm: A -finish </code></pre> <p><strong>Real World Example</strong></p> <pre><code>error_reporting(E_ALL); class AsyncWebRequest extends Thread { public $url; public $data; public function __construct($url) { $this-&gt;url = $url; } public function run() { if (($url = $this-&gt;url)) { /* * If a large amount of data is being requested, you might want to * fsockopen and read using usleep in between reads */ $this-&gt;data = file_get_contents($url); } else printf("Thread #%lu was not provided a URL\n", $this-&gt;getThreadId()); } } $t = microtime(true); $g = new AsyncWebRequest(sprintf("http://www.google.com/?q=%s", rand() * 10)); /* starting synchronization */ if ($g-&gt;start()) { printf("Request took %f seconds to start ", microtime(true) - $t); while ( $g-&gt;isRunning() ) { echo "."; usleep(100); } if ($g-&gt;join()) { printf(" and %f seconds to finish receiving %d bytes\n", microtime(true) - $t, strlen($g-&gt;data)); } else printf(" and %f seconds to finish, request failed\n", microtime(true) - $t); } </code></pre>
<p>Multithreading means performing multiple tasks or processes simultaneously. There is no direct way to achieve multithreading in PHP, but we can achieve almost the same results in the following way.</p> <pre><code>chdir(dirname(__FILE__)); // if you want to run this file as a cron job for ($i = 0; $i &lt; 2; $i += 1) { exec("php test_1.php $i &gt; test.txt &amp;"); // This will execute test_1.php and leave that process executing in the // background, moving to the next iteration of the loop immediately // without waiting for the script in test_1.php to complete; $i is // passed as an argument. } </code></pre> <p>Test_1.php</p> <pre><code>$conn = mysql_connect($host, $user, $pass); $db = mysql_select_db($db); $i = $argv[1]; // this is the argument passed from the index.php file for ($j = 0; $j &lt; 5000; $j++) { mysql_query("insert into test set id='$i', comment='test', datetime=NOW() "); } </code></pre> <p>This will execute test_1.php two times simultaneously, and both processes will run in the background at the same time, so in this way you can achieve multithreading in PHP.</p> <p>This guy has done really good work: <a href="https://github.com/krakjoe/pthreads" rel="nofollow">Multithreading in php</a></p>
<p>Once I've called <code>DragManager.acceptDrag</code> is there any way to "unaccept" the drag? Say that I have a view which can accept drag and drop, but only in certain areas. Once the user drags over one of these areas I call <code>DragManager.acceptDrag(this)</code> (from a <code>DragEvent.DRAG_OVER</code> handler), but if the user then moves out of this area I'd like to change the status of the drag to not accepted and show the <code>DragManager.NONE</code> feedback. However, neither calling <code>DragManager.acceptDrag(null)</code> nor <code>DragManager.showFeedback(DragManager.NONE)</code> seems to have any effect. Once I've accepted the drag and set the feedback type I can't seem to change it.</p> <p>Just to make it clear: the areas where the user should be able to drop are not components or even display objects; in fact they are just ranges in the text of a text field (like the selection). Had they been components of their own I could have solved it by making each of them accept drag events individually. I guess I could create proxy components that float over the text to emulate it, but I'd rather not if it isn't necessary.</p> <hr> <p>I've managed to get it working in both AIR and the browser now, but only by putting proxy components on top of the ranges of text where you should be able to drop things. That way I get the right feedback and drops are automatically unaccepted on drag exit.</p> <p>This is the oddest thing about D&amp;D in AIR:</p> <pre><code>DragManager.doDrag(initiator, source, event, dragImage, offsetX, offsetY); </code></pre> <p>In browser-based Flex, <code>offsetX</code> and <code>offsetY</code> should be negative (so says the documentation, and it works fine). However, when running <em>exactly the same code</em> in AIR you have to make the offsets positive. The same numbers, but positive. 
That is very, very weird.</p> <hr> <p>I've tested some more, and what <a href="https://stackoverflow.com/questions/10870/how-can-i-unaccept-a-drag-in-flex#11209">@maclema</a> suggested works, but not if you run in AIR. It seems like drag and drop in AIR is different. It's really, really weird because not only is the feedback not showing correctly and it's not possible to unaccept, but the coordinates are also completely off. I just tried my application in a browser instead of AIR and dragging and dropping is completely broken.</p> <p>Also, skipping the <code>dragEnter</code> handler works fine in AIR, but breaks everything when running in a browser.</p>
<p>Are you using only the dragEnter method? If you are trying to reject the drag while still dragging over the same component you need to use both the dragEnter and dragOver methods.</p> <p>Check out this example:</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"&gt; &lt;mx:Script&gt; &lt;![CDATA[ import mx.core.DragSource; import mx.managers.DragManager; import mx.events.DragEvent; private function onDragEnter(e:DragEvent):void { if ( e.target == lbl ) { if ( e.localX &lt; lbl.width/2 ) { trace("accept"); DragManager.acceptDragDrop(this); } else { DragManager.acceptDragDrop(null); } } } private function doStartDrag(e:MouseEvent):void { if ( e.buttonDown ) { var ds:DragSource = new DragSource(); ds.addData("test", "text"); DragManager.doDrag(btn, ds, e); } } ]]&gt; &lt;/mx:Script&gt; &lt;mx:Label id="lbl" text="hello world!" left="10" top="10" dragEnter="onDragEnter(event)" dragOver="onDragEnter(event)" /&gt; &lt;mx:Button id="btn" x="47" y="255" label="Button" mouseMove="doStartDrag(event)"/&gt; &lt;/mx:Application&gt; </code></pre>
<p>ok, I see the problem now. Rather than null, try setting it to the dragInitiator.</p> <p>Check this out.</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"&gt; &lt;mx:Script&gt; &lt;![CDATA[ import mx.controls.Alert; import mx.events.DragEvent; import mx.managers.DragManager; import mx.core.DragSource; private function doStartDrag(e:MouseEvent):void { if ( e.buttonDown &amp;&amp; !DragManager.isDragging ) { var ds:DragSource = new DragSource(); ds.addData("test", "test"); DragManager.doDrag(btn, ds, e); } } private function handleDragOver(e:DragEvent):void { if ( e.localX &lt; cvs.width/2 ) { //since null does nothing, lets just set to accept the drag //operation, but accept it to the dragInitiator DragManager.acceptDragDrop(e.dragInitiator); } else { //accept drag DragManager.acceptDragDrop(cvs); DragManager.showFeedback( DragManager.COPY ); } } private function handleDragDrop(e:DragEvent):void { if ( e.dragSource.hasFormat("test") ) { Alert.show("Got a drag drop!"); } } ]]&gt; &lt;/mx:Script&gt; &lt;mx:Canvas x="265" y="66" width="321" height="245" backgroundColor="#FF0000" id="cvs" dragOver="handleDragOver(event)" dragDrop="handleDragDrop(event)"&gt; &lt;/mx:Canvas&gt; &lt;mx:Button id="btn" x="82" y="140" label="Drag Me" mouseDown="doStartDrag(event)"/&gt; &lt;/mx:WindowedApplication&gt; </code></pre>