| instruction | chosen | rejected | __index_level_0__ |
|---|---|---|---|
<p>One I am aware of is <a href="http://search.cpan.org/dist/Perl-Critic/" rel="nofollow noreferrer">Perl::Critic</a></p>
<p>And my googling has resulted in no results on multiple attempts so far. :-(</p>
<p>Does anyone have any recommendations here?</p>
<p>Any resources to configure Perl::Critic as per our coding standards and run it on code base would be appreciated.</p>
|
<p>In terms of setting up a profile, have you tried <code>perlcritic --profile-proto</code>? This will emit to stdout all of your installed policies with all their options with descriptions of both, including their default values, in perlcriticrc format. Save and edit to match what you want. Whenever you upgrade Perl::Critic, you may want to run this command again and do a diff with your current perlcriticrc so you can see any changes to existing policies and pick up any new ones.</p>
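<p>As a sketch of what a pruned profile might look like, here is a minimal perlcriticrc (the policy names below are real Perl::Critic policies, but the severities and option values are only illustrative, not recommendations):</p>

```ini
# Global defaults
severity = 3
verbose  = 8

# Promote a policy you care about
[TestingAndDebugging::RequireUseStrict]
severity = 5

# Tune a policy's option
[Subroutines::ProhibitExcessComplexity]
max_mccabe = 15

# Disable a policy you disagree with (leading dash)
[-RegularExpressions::RequireExtendedFormatting]
```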
<p>In terms of running perlcritic regularly, set up a <a href="http://search.cpan.org/dist/Test-Perl-Critic/" rel="noreferrer">Test::Perl::Critic</a> test along with the rest of your tests. This is good for new code.</p>
<p>For your existing code, use <a href="http://search.cpan.org/dist/Test-Perl-Critic-Progressive/" rel="noreferrer">Test::Perl::Critic::Progressive</a> instead. T::P::C::Progressive will succeed the first time you run it, but will save counts on the number of violations; thereafter, T::P::C::Progressive will complain if any of the counts go up. One thing to look out for is when you revert changes in your source control system. (You are using one, aren't you?) Say I check in a change and run tests and my changes reduce the number of P::C violations. Later, it turns out my change was bad, so I revert to the old code. The T::P::C::Progressive test will fail due to the reduced counts. The easiest thing to do at this point is to just delete the history file (default location t/.perlcritic-history) and run again. It should reproduce your old counts and you can write new stuff to bring them down again.</p>
<p>Perl::Critic has a lot of policies that ship with it, but there are a bunch of add-on distributions of policies. Have a look at <a href="http://search.cpan.org/dist/Task-Perl-Critic/" rel="noreferrer">Task::Perl::Critic</a> and <a href="http://search.cpan.org/dist/Task-Perl-Critic-IncludingOptionalDependencies/" rel="noreferrer">
Task::Perl::Critic::IncludingOptionalDependencies</a>.</p>
<p>You don't need to have a single perlcriticrc handle all your code. Create separate perlcriticrc files for each set of files you want to test and then a separate test that points to each one. For an example, have a look at the author tests for P::C itself at <a href="http://perlcritic.tigris.org/source/browse/perlcritic/trunk/Perl-Critic/xt/author/" rel="noreferrer">http://perlcritic.tigris.org/source/browse/perlcritic/trunk/Perl-Critic/xt/author/</a>. When author tests are run, there's a test that runs over all the code of P::C, a second test that applies additional rules just on the policies, and a third one that criticizes P::C's tests.</p>
<p>I personally think that everyone should run at the "brutal" severity level, but knock out the policies they don't agree with. Perl::Critic isn't entirely self-compliant; even the P::C developers don't agree with everything Conway says. Look at the perlcriticrc files used on Perl::Critic itself, and search the Perl::Critic code for instances of "## no critic"; I count 143 at present.</p>
<p>(Yes, I'm one of the Perl::Critic developers.)</p>
|
<p>In addition to the cosmetic best practices, I always find it useful to run Devel::Cover on my unit test suite to check test coverage.</p>
| 7,456
|
<p>Say that instead of returning <code>void</code>, a method returned a reference to its object, even if that didn't make any particular semantic sense. It seems to me like it would give you more options for how the methods are called, allowing you to use them in a fluent-interface-like style, and I can't really think of any disadvantages, since you don't have to do anything with the return value (not even store it).</p>
<p>So suppose you're in a situation where you want to update an object and then get its current value.
Instead of saying</p>
<pre><code>myObj.Update();
var val = myObj.GetCurrentValue();
</code></pre>
<p>you will be able to combine the two lines to say</p>
<pre><code>var val = myObj.Update().GetCurrentValue();
</code></pre>
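<p>The pattern being described can be sketched like this (in Python for brevity; the class and method names are hypothetical stand-ins for the question's <code>myObj</code>):</p>

```python
class Counter:
    """Toy object whose mutator returns the object itself to allow chaining."""
    def __init__(self):
        self.value = 0

    def update(self):
        self.value += 1
        return self  # return the object instead of nothing

    def get_current_value(self):
        return self.value

# The two-line form from the question:
obj = Counter()
obj.update()
val_a = obj.get_current_value()

# ...and the combined, chained form:
val_b = Counter().update().get_current_value()
```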
<hr>
<p><strong>EDIT:</strong> I asked the below on a whim. In retrospect, I agree that it's likely to be unnecessary and overcomplicated; however, my question regarding returning <code>this</code> rather than <code>void</code> stands.</p>
<p>On a related note, what do you guys think of having the language include a new bit of syntactic sugar:</p>
<pre><code>var val = myObj.Update()<.GetCurrentValue();
</code></pre>
<p>This operator would have a low order of precedence so myObj.Update() would execute first and then call GetCurrentValue() on myObj instead of the void return of Update.</p>
<p>Essentially I'm imagining an operator that will say "call the method on the right-hand side of the operator on the first valid object on the left". Any thoughts?</p>
|
<p>I think as a general policy, it simply doesn't make sense. Method chaining in this manner works with a properly defined interface but it's only appropriate if it makes semantic sense. </p>
<p>Your example is a prime one where it's not appropriate, because it makes no semantic sense.</p>
<p>Similarly, your syntactic sugar is unnecessary with a properly designed fluent interface.</p>
<p>Fluent interfaces or method chaining <b>can</b> work very well, but need to be designed carefully.</p>
|
<p>At first sight it may look good, but for a consistent interface you would need all methods to return a reference to <code>this</code> (which has its own problems).</p>
<p>Let's say you have a class with two methods: <code>GetA</code>, which returns <code>this</code>, and <code>GetB</code>, which returns another object.</p>
<p>Then you can call <code>obj.GetA().GetB()</code>, but not <code>obj.GetB().GetA()</code>, which at the very least isn't consistent.</p>
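<p>The inconsistency can be sketched like this (Python, with hypothetical classes): the chain only continues on the original object for as long as each method returns it.</p>

```python
class Other:
    """Stand-in for the 'another object' that GetB returns."""
    def get_value(self):
        return 42

class Obj:
    def get_a(self):
        return self       # the chain can continue on Obj
    def get_b(self):
        return Other()    # the chain now continues on Other, not Obj

obj = Obj()
result = obj.get_a().get_b()   # fine: get_a keeps us on Obj
# obj.get_b().get_a()          # AttributeError: Other has no get_a
```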
<p>With Pascal (and Visual Basic) you can call several methods of the same object.</p>
<blockquote>
<pre><code>with obj
.GetA();
.GetB();
end with;
</code></pre>
</blockquote>
<p>The problem with this feature is that you can easily write code that is harder to understand than it should be. Adding a new operator would probably make it even harder.</p>
| 8,133
|
<p>How do you retrieve selected text using Regex in C#?</p>
<p>I am looking for C# code that is equivalent to this Perl code:</p>
<pre><code>$indexVal = 0;
if($string =~ /Index: (\d*)/){$indexVal = $1;}
</code></pre>
|
<pre><code>int indexVal = 0;
Regex re = new Regex(@"Index: (\d*)");
Match m = re.Match(s);
if (m.Success)
    indexVal = int.Parse(m.Groups[1].Value);
</code></pre>
<p>I might have the group number wrong, but you should be able to figure it out from here.</p>
|
<pre><code>int indexVal = 0;
Match m = Regex.Match(s, @"(?<=Index: )(\d*)");
if (m.Success)
{
    indexVal = Convert.ToInt32(m.Value);
}
</code></pre>
| 3,072
|
<p><a href="https://en.wikipedia.org/wiki/Stereolithography">Stereolithography</a> produces parts by projecting ultraviolet light on the top of a vat of liquid photopolymer, causing it to harden. <a href="https://en.wikipedia.org/wiki/Continuous_Liquid_Interface_Production">CLIP</a> produces parts by projecting ultraviolet light through the bottom of a vat of liquid photopolymer, causing it to harden. This seems like a minor difference, yet CLIP is reportedly much faster (I've seen numbers as high as 100x). Why is this?</p>
|
<p>It's important to understand what specifically is being compared. <strong>CLIP is much faster than bottom-up technologies that require a peel step between every layer.</strong> For example, the Form1 galvo SLA printer tilts the resin vat to separate the transparent bottom from the print. That is, by far, the slowest part of SLA/DLP printing with most modern light sources. Where the speed comes in is that without a peel, a continuous "movie" can be used to cure the resin rather than a series of alternating images and peels. </p>
<p>Top-down printers can print dramatically faster than bottom-up-and-peel printers. CLIP is not necessarily faster than top-down. For example, the Gizmo 3D line of top-down printers are very similar in print speed to CLIP. (<a href="http://www.gizmo3dprinters.com.au/">http://www.gizmo3dprinters.com.au/</a>)</p>
<p>Most "consumer" SLA printers these days use bottom-up-and-peel techniques, because this has some practical advantages over top-down printers:</p>
<ul>
<li>Way less resin is required to fill the printer when the part is pulled out as it builds rather than being lowered into the tank (along with the Z stage) as it builds. Resin is expensive. This also means bottom up printers can be smaller and have fewer mechanical parts such as leveling devices submerged in resin. </li>
<li>Standard resins contain an inhibitor chemical that prevents polymerization in the presence of oxygen, which causes the surface layer exposed to air (and low-level stray light) to not cure. So top-down printers must shoot light through a non-curing layer before reaching curable resin. This makes the tuning more sensitive and can somewhat reduce detail compared to a bottom-up printer curing right on the window.</li>
<li>Replacement vats or windows for bottom-up printers may be seen by manufacturers as a profit-generating consumable, since they have to be replaced somewhat frequently.</li>
<li>Top-down printers have to worry somewhat more about resin flow rates as the part is lowered. Air bubbles may be pulled into the resin or the fresh resin layer above the part may vary significantly in thickness if the part is submerged too fast for the resin viscosity. (Admittedly, bottom-up printers will experience excessive suction forces and potentially break off bits of the print at high peel speeds.)</li>
</ul>
<p>CLIP is a bottom-up technique that doesn't require a peel step, because the vat creates an oxygen layer over the window that keeps the resin from curing directly on the surface and sticking. In that way, it arguably performs more like a top-down printer than a bottom-up printer. </p>
<p>Top-down printers that are designed to overcome the above issues and use high-intensity light sources can achieve exceptionally high print speeds, including "continuous" build techniques similar to those used in CLIP.</p>
|
<p>The 100x figure could be true in some situations. I wish I could see the part(s) they printed to measure this 100x, but that is another story.</p>
<p>Looking at their videos, they can move the build plate at a maximum speed of 10 mm per minute. In the Eiffel tower video you will see they have to change over to standard speed for the flat layers; that is because flat layers stop resin flow and can't be printed continuously.</p>
<p>Prodways have shown that they can move it at 20mm per minute, but again there are things that are not advertised/mentioned. To achieve higher speeds you need to make the resins more reactive. Making resins more reactive means the resins won't last as long in the vat/bottle. So they expire sooner. It also means they could harden under normal light conditions so it makes it difficult to work with. If you look at the Prodways video you will see resin waste on the build plate. That shows that the projector brightness was set too high.</p>
<p>Gizmo can print between 5 and 25 times faster than a leading brand of SLA printers, depending also on the number of parts and their complexity on the build plate. The decision was made to advertise the build plate moving speed, e.g. 3 mm per minute, rather than the number of times faster than anything else, because that value doesn't change with the number of items on the build plate, though it does change with the projection area size.</p>
<p>Imagine you just have a single line going upwards then most SLA printers could do it at the same speed. B9 Creator (bottom up) users have actually shown they can do continuous printing when printing really tiny items with very thin walls, because they don't have suction problems in those situations.</p>
<p>Now, when you add many small objects with small features to the build plate, DLP printers will display the full layer in one go, whereas laser-based SLA machines need to draw out each part like an FDM machine.</p>
<p>Continuous printing does have limitations. Resin needs to flow in from the bottom, or you will get resin starvation and holes, so you might not be able to use it for everything. You should see it as another tool in your printing toolbox rather than the be-all and end-all.</p>
<p>So after all that, the 100x depends on so many things that you shouldn't get stuck on that number, other than knowing it is a way of marketing the printers.</p>
| 161
|
<p>I know the so-called textbook definition of unit tests and integration tests. What I am curious about is when it is time to write unit tests... I will write them to cover as many sets of classes as possible.</p>
<p>For example, if I have a <code>Word</code> class, I will write some unit tests for the <code>Word</code> class. Then, I begin writing my <code>Sentence</code> class, and when it needs to interact with the <code>Word</code> class, I will often write my unit tests such that they test both <code>Sentence</code> and <code>Word</code>... at least in the places where they interact.</p>
<p><em>Have these tests essentially become integration tests because they now test the integration of these 2 classes, or is it just a unit test that spans 2 classes?</em></p>
<p>In general, because of this uncertain line, I rarely actually write integration tests... or is using the finished product to see if all the pieces work properly the actual integration test, even though it is manual and rarely repeated beyond the scope of each individual feature?</p>
<p>Am I misunderstanding integration tests, or is there really just very little difference between integration and unit tests?</p>
|
<p>The key difference, to me, is that <strong>integration tests</strong> reveal if a feature is working or is broken, since they stress the code in a scenario close to reality. They invoke one or more software methods or features and test if they act as expected.</p>
<p>By contrast, a <strong>unit test</strong> of a single method relies on the (often wrong) assumption that the rest of the software works correctly, because it explicitly mocks every dependency.</p>
<p>Hence, when a unit test for a method implementing some feature is green, it does <b>not</b> mean the feature is working.</p>
<p>Say you have a method somewhere like this:</p>
<pre><code>public SomeResults DoSomething(SomeInput someInput) {
    var someResults = [Do your job with someInput];
    Log.TrackTheFactYouDidYourJob();
    return someResults;
}
</code></pre>
<p><code>DoSomething</code> is very important to your customer: it's a feature, the only thing that matters. That's why you usually write a Cucumber specification asserting it: you wish to <i>verify</i> and <i>communicate</i> the feature is working or not.</p>
<pre><code>Feature: To be able to do something
In order to do something
As someone
I want the system to do this thing
Scenario: A sample one
Given this situation
When I do something
Then what I get is what I was expecting for
</code></pre>
<p>No doubt: if the test passes, you can assert you are delivering a working feature. This is what you can call <b>Business Value</b>.</p>
<p>If you want to write a unit test for <code>DoSomething</code> you should pretend (using some mocks) that the rest of the classes and methods are working (that is: that, all dependencies the method is using are correctly working) and assert your method is working.</p>
<p>In practice, you do something like:</p>
<pre><code>public SomeResults DoSomething(SomeInput someInput) {
    var someResults = [Do your job with someInput];
    FakeAlwaysWorkingLog.TrackTheFactYouDidYourJob(); // using a mock Log
    return someResults;
}
</code></pre>
</code></pre>
<p>You can do this with Dependency Injection, or some Factory Method or any Mock Framework or just extending the class under test.</p>
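<p>As a sketch (in Python, with hypothetical names mirroring the pseudocode above), injecting the log lets the unit test swap in a mock that never breaks:</p>

```python
from unittest.mock import Mock

def do_something(some_input, log):
    """Hypothetical stand-in for DoSomething: the log is injected."""
    some_results = some_input * 2   # [Do your job with someInput]
    log.track_the_fact_you_did_your_job()
    return some_results

# Unit test: the fake log always works, so only this method's logic is tested.
fake_log = Mock()
result = do_something(21, fake_log)
```

Even if the real log is broken, this test stays green; that is exactly the lie (and the bug-isolating power) described here.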
<p>Suppose there's a bug in <code>Log.TrackTheFactYouDidYourJob()</code>.
Fortunately, the Gherkin spec will find it and your end-to-end tests will fail.</p>
<p>The feature won't work, because <code>Log</code> is broken, not because <code>[Do your job with someInput]</code> is not doing its job. And, by the way, <code>[Do your job with someInput]</code> is that method's sole responsibility.</p>
<p>Also, suppose <code>Log</code> is used in 100 other features, in 100 other methods of 100 other classes.</p>
<p>Yep, 100 features will fail. But, fortunately, 100 end-to-end tests are failing as well and revealing the problem. And, yes: <b>they are telling the truth</b>.</p>
<p>It's very useful information: I know I have a broken product. It's also very confusing information: it tells me nothing about where the problem is. It communicates the symptom, not the root cause.</p>
<p>Yet, <code>DoSomething</code>'s unit test is green, because it's using a fake <code>Log</code>, built to never break. And, yes: <b>it's clearly lying</b>. It's communicating a broken feature is working. How can it be useful?</p>
<p>(If <code>DoSomething()</code>'s unit test fails, be sure: <code>[Do your job with someInput]</code> has some bugs.)</p>
<p>Suppose this is a system with a broken class:
<img src="https://i.stack.imgur.com/611ZY.jpg" alt="A system with a broken class"></p>
<p>A single bug will break several features, and several integration tests will fail.</p>
<p><img src="https://i.stack.imgur.com/rKhSr.jpg" alt="A single bug will break several features, and several integration tests will fail"></p>
<p>On the other hand, the same bug will break just one unit test.</p>
<p><img src="https://i.stack.imgur.com/M64Vb.jpg" alt="The same bug will break just one unit test"></p>
<p>Now, compare the two scenarios.</p>
<ul>
<li>All your features using the broken <code>Log</code> are red</li>
<li>All your unit tests are green, only the unit test for <code>Log</code> is red</li>
</ul>
<p>Actually, unit tests for all modules using a broken feature are green because, by using mocks, they removed the dependencies. In other words, they run in an ideal, completely fictional world. And this is the only way to isolate bugs and seek them out. Unit testing means mocking. If you aren't mocking, you aren't unit testing.</p>
<h2>The difference</h2>
<p>Integration tests tell <b>what</b>'s not working. But they are of no use in <b>guessing where</b> the problem could be.</p>
<p>Unit tests are the sole tests that tell you <b>where</b> exactly the bug is. To draw this information, they must run the method in a mocked environment, where all other dependencies are supposed to correctly work.</p>
<p>That's why I think your sentence "Or is it just a unit test that spans 2 classes" is somewhat misplaced. A unit test should never span 2 classes.</p>
<p>This reply is basically a summary of what I wrote here: <a href="http://arialdomartini.wordpress.com/2011/10/21/unit-tests-lie-thats-why-i-love-them/" rel="noreferrer">Unit tests lie, that's why I love them</a>.</p>
|
<p>If you're a TDD purist, you write the tests before you write production code. Of course, the tests won't compile, so you first make the tests compile, then make the tests pass.</p>
<p>You can do this with unit tests, but you can't with integration or acceptance tests. If you tried with an integration test, nothing would ever compile until you've finished!</p>
| 3,192
|
<p>I have been doing some java development lately and have started using Eclipse. For the most part, I think it is great, but being a C/C++ guy used to doing all of his editing in vim, I find myself needlessly hitting the Esc key over and over.</p>
<p>It would be really nice if I got all the nice features of Eclipse but could still do basic editing the same way I can in vim. Does anyone know of any Eclipse plugins that would help with this?</p>
|
<p><a href="http://vrapper.sourceforge.net/home/" rel="noreferrer">Vrapper</a>:</p>
<blockquote>
<p>an Eclipse plugin which acts as a wrapper for Eclipse text editors to provide a Vim-like input scheme for moving around and editing text.</p>
<p>Unlike other plugins which embed Vim in Eclipse, Vrapper imitates the behaviour of Vim while still using whatever editor you have opened in the workbench. The goal is to have the comfort and ease which comes with the different modes, complex commands and count/operator/motion combinations which are the key features behind editing with Vim, while preserving the powerful features of the different Eclipse text editors, like code generation and refactoring...</p>
</blockquote>
|
<p><a href="https://marketplace.eclipse.org/content/viable-vim-eclipse" rel="nofollow noreferrer">Viable</a> has pretty much what you are looking for along with some extra features which none of the other plugins for eclipse seem to have, like some support for visual block mode, command line history, window splitting, and piping external commands.</p>
<p>It is paid ($15.00 CAD) but free to try with all the features. I personally like it better than the other solutions.</p>
| 4,841
|
<p>I've only been using VS 2008 Team Foundation for a few weeks. Over the last few days, I've noticed that sometimes one of my objects/controls on my page just disappears from intellisense. The project builds perfectly and the objects are still in the HTML, but I still can't find the object. </p>
<p>Any one else notice this?</p>
<p><strong>Edit:</strong> For what it's worth, I know if I close VS and then open it up again, it comes back.</p>
|
<p>I am also having a number of problems with VS 2008. Who would guess that I don't ever need to select multiple controls on a web form...</p>
<p>Anyway, a lot has been fixed in Service Pack 1, which is in Beta currently. Might be worth installing that. It has gone a little way to fixing absolute positioning. This isn't your problem, of course, but your fix might be in there as well.</p>
|
<p>I occasionally get this in Visual Studio 2005.</p>
<p>A method I use to get the controls back, is to switch the web page between code view and design view. I know it's not a fix but it's a little quicker than restarting Visual Studio.</p>
| 2,805
|
<p>Recently, in one of her videos, a YouTuber stated that prints from large format printers are more brittle than if you were to print them in parts and glue them together. This seems to contradict the testimonials from customers of a large format printer, who say they get good prints from those printers (which have a print area of a square meter).</p>
<p>Would a print from a large format printer be more brittle than a print made of smaller pieces superglued together?</p>
<p>(with all other aspects being equal, e.g. the nozzle, the temps, the material and the shape of the object)</p>
<p>The YouTuber didn't cite any sources to back up her claim.</p>
|
<p>If you break up a large piece into multiple smaller pieces and properly glue them together, you effectively add stiffeners (as a result of the printed walls). This could lead to a stiffer model, which may be the source of the confusion: large single prints seem "more brittle" only in comparison to assembled ones.</p>
<p>If printing is conducted under similar conditions on large printers, there is no reason the model should become more brittle unless the conditions aren't the same. But that would be true for small printers too, e.g. if one print was printed in a draft.</p>
|
<p>I'd recommend getting the object to fit together by design rather than glue. That said, if the item is never to be disassembled, I tend to use Zap-A-Gap; that stuff sticks like crazy, though you must not squeeze the parts together but let them sit naturally.</p>
| 1,352
|
<p>I don't want this to be a producer-specific question, but I would like to know if the Sparkmaker is good enough to print small details on OO/HO scale objects.
I'm referring here to objects like furniture and other household items at scale.
I wasn't able to find any visuals of very small objects printed on this printer.</p>
|
<p>Can you tell by looking at the de-adhesion what isn't sticking? </p>
<p>It may be that printing the ASA at higher temperature is melting the BVOH enough that it doesn't stick, being molten. If so, then it may be possible to print the first layer of ASA at a cooler temperature, slower if necessary to still succeed at extrusion, with fans blowing. Then, print the next layer of ASA at a higher temperature, also with fans flowing.</p>
<p>A test might be to print the BVOH, then a layer of ASA, and stop. Let it cool and test the adhesion. If it sticks under these conditions, then a better command of the temperature profile may offer a way forward.</p>
|
<p>Here's how I got it to work. I set the heated bed to 110 °C and disabled the underlayer of the raft. I set the BVOH and the ASA to 235 °C. After the 4th layer I set the heated bed to 90 °C, and after the 6th layer I set it down to 60 °C, so that the BVOH hardened and adhered to the bed, making sure that nothing curled or moved. After the print finished, it took 24 hours for the BVOH to fully dissolve, but that's probably because I set my support to 100% infill. Either way, I got a perfect ASA extruder mount.</p>
| 1,307
|
<p>Hello, I am using Cura to slice my prints, and despite turning the flow rate down to the minimum value of 5%, my prints are still hugely over-extruding.</p>
<p>I have calibrated the extruder stepper carefully, using Pronterface, so I do not understand why this is occurring. I have also timed how long it took to extrude a certain length, compared it to how long it was meant to take, and it was exactly the same. Therefore, I have concluded it is not a calibration problem with the stepper.</p>
<p>So I think there is a problem with the settings in Cura. Originally, I had the flow rate at 100% and this was really, really terrible. Then I turned it down as far as possible and the print got better, but there was still over-extrusion. I can't turn it down any further.</p>
<p>I cannot figure out what the problem could possibly be, and as you can imagine it is very, very frustrating.</p>
<p>Here is the print profile:
<a href="https://i.stack.imgur.com/VUpdO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VUpdO.png" alt="Screenshot of Print Profile"></a></p>
<p><strong>Edit</strong> - Some extra information</p>
<p>The printer is not any particular model; it is a homemade CD-drive 3D printer. It shares many similarities with the Curiosity 3D printer, so if you want more information on how it works, their website will be of much value:
<a href="http://www.instructables.com/id/Curiosity-120-eWaste-Educational-3D-Printer/" rel="noreferrer">Link</a></p>
<p>The extruder is a bowden style one. It uses a cheap e3d hotend <a href="https://www.amazon.co.uk/Finlon-Hotend-Extruder-Printer-Filament/dp/B0734MR3JR/ref=sr_1_1?ie=UTF8&qid=1513678249&sr=8-1&keywords=e3d%20hotend" rel="noreferrer">Link to amazon</a> and a RepRap extruder kit as the motor <a href="https://www.amazon.co.uk/Gaoxing-Tech-Printer-Extruder-Stepper-Bracket-Motor/dp/B06XSJ9M2T/ref=sr_1_fkmr0_1?s=industrial&ie=UTF8&qid=1513678339&sr=1-1-fkmr0&keywords=extruder%20motor%20kit%C2%A8" rel="noreferrer">Link to amazon</a>. </p>
<p>Here are the Machine settings:</p>
<p><a href="https://i.stack.imgur.com/yqlpf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yqlpf.png" alt="Screenshot of Machine settings"></a></p>
<p>This is a photo of two failed prints. On the left is a G and on the right is a heart.
<a href="https://i.stack.imgur.com/cIMsi.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/cIMsi.jpg" alt="Photo of two failed prints"></a></p>
<p>This is what it was the G was meant to look like:</p>
<p><a href="https://i.stack.imgur.com/3KgHH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3KgHH.png" alt="Pronterface G"></a></p>
<p><strong>Edit 2</strong> - More information</p>
<p>So here is a link to the configuration.h file which I previously modified for my 3D printer.<a href="https://www.dropbox.com/s/jdeaqu8dodjm8ln/Configuration.h?dl=0" rel="noreferrer">Dropbox link</a></p>
<p>Also the filament I use is "Robox PLA SmartReel Cornflower Blue" <a href="http://www.desktop3dprinter.com/cel-robox-smartreels/robox-pla-smartreel-cornflower-blue.html" rel="noreferrer">link</a></p>
<p>Any help would be much appreciated. </p>
|
<p>First of all: can you tell us what kind of printer you use and which material? Please give us some pictures of some prints as well. The type of extruder setup is also relevant: is it a Bowden or direct-drive extruder?</p>
<p>For an FDM machine like the Prusa styled printers with PLA:</p>
<ul>
<li>A flow of 5 % is totally wrong; normally PLA should run flawlessly between 90 % and 105 %. The problem is elsewhere: neither the flow setting nor the temperature is the culprit.
<ul>
<li>Temp between 185 °C and 210 °C should be fine for most PLAs</li>
<li>Did you set up your printer correct in Cura (Preferences -> Printers -> Machine Settings)? Material Diameter, Nozzle size and G-Code flavour are correct?</li>
</ul></li>
</ul>
<p>If you give us some more information, I will be glad to get this solved.</p>
|
<p>I've been building 3D printers for a few years and working in the computer science field for over 2 decades. Here's a simple trick I use for adjusting your extruder steps.</p>
<ol>
<li><p>Tweak your flow rate till it's where it should be. Then mark down
the percentage it's at.</p>
</li>
<li><p>Go into your configuration file and use that percentage to adjust the number of steps. So if <span class="math-container">$x$</span> is the number of steps and your flow rate needs to be at 40 % to extrude normally, then:</p>
<p><span class="math-container">$x * 0.4 =$</span> new step count</p>
</li>
<li><p>Save the file and compile.</p>
</li>
</ol>
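<p>The arithmetic from step 2 can be sketched as a small helper (Python; the numbers are hypothetical examples, not values for any particular printer):</p>

```python
def corrected_steps(current_steps_per_mm, flow_rate_percent):
    """Fold the slicer's flow-rate correction into the firmware steps/mm."""
    return current_steps_per_mm * (flow_rate_percent / 100.0)

# e.g. firmware configured at 100 steps/mm, but prints only look right
# with the slicer flow set to 40 %:
new_steps = corrected_steps(100.0, 40.0)  # 100 * 0.4 = 40.0 steps/mm
```

After updating the configuration with the new value, the slicer flow rate goes back to 100 %.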
| 768
|
<p>As a programmer I need a place to store my stuff. I've been running a server in my parents closet for a long time, but I recently came across a decent 2U server.<br>
I have no experience dealing with hosting companies, beyond the very cheap stuff, and I'm wondering what I should look for in a colo or if I should just keep my closet server.</p>
|
<p>There are three major factors here.</p>
<ol>
<li>Cost. The colo will obviously be more expensive than sticking a server in your parents' closet.</li>
<li>Quality. The colo should be a lot more reliable than the server in your parents' closet. They aren't as likely to go down when there's a power surge. They should provide some support if things do go wrong on their end. They will also likely give you better bandwidth.</li>
<li>Convenience. It is a lot easier to fix a broken box when you can walk over to it and plug up a monitor. Going to the colo to troubleshoot is probably not going to be convenient, if it's even possible. Transferring files from your laptop to the server in the closet is also going to be a lot faster than transferring over the Internet. On the other hand, if it's your box in the closet, you have to deal with the hardware problems, so it can balance out.</li>
</ol>
<p>Personally, I pay for a (shared) server. I find that having someone else handle the server is worth it. Uploading large files can get really frustrating, but having to maintain an extra box in the closet is too much hassle for me.</p>
<p>You really have to decide what you value most. Is it worth the extra money to you to have a more reliable, more hands-off server?</p>
|
<p>I'd absolutely go for the server that's located under my roof, as long as I don't need it to be connected to the internet with a static IP.</p>
<p>Why:</p>
<ul>
<li>It's a target for hackers, as soon as it is reachable from the net</li>
<li>Any problems with the machine? I'd rather walk to the closet instead of calling a hotline - and probably pay for the service</li>
<li>Connection speed (from me to the server)</li>
<li>I can turn it off as long as I don't need it. Less power consumption, which is better for the environment.</li>
<li>The hosting of a machine costs money all the time. Even when you don't need it.</li>
</ul>
| 2,998
|
<p>I'm trying to do 'Attach to Process' for debugging in Visual Studio 2008 and I can't figure out what process to attach to. Help.</p>
|
<p>Indeed it is still w3wp.exe - You'll need to check the '<strong>Show processes in all sessions</strong>' option to get it to show up though.</p>
<p>(It caught me out for a while too.)</p>
|
<p>Isn't it w3wp.exe?</p>
| 8,599
|
<p>I have a Monoprice Architect, which is a barebones clone of the FlashForge Creator Pro, or Replicator 1 Dual. I have upgraded the power supply and added a heated bed and, after getting fed up with MakerBot software, I've started using Cura to slice, then post-process with GPX.</p>
<p>I did a lot of searching and finally found someone who posted their start and end G-code for this particular printer. The only catch is that his code only works on version 15.04.</p>
<p>Don't get me wrong, 15.04 is a huge upgrade compared to MakerWare. But, I would really like to start using a newer version like 2.5 or anything relatively new.</p>
<p>Here is the start code I found. I have tried it in 2.5 with errors in post-processing. Any help is appreciated!</p>
<pre><code>; -- START GCODE --
M136 ; start build
M73 P0
G90 ; absolute coordinates
;
; set temperatures and assert Vref
M140 S{print_bed_temperature}
M104 S{print_temperature} T0
G130 X118 Y118 A118 B118 ; set stepper motor Vref to defaults
; let the Z stepper vref stay at eeprom level (probably 40)
;
; home and recall eeprom home position
T0 ; home on the right nozzle
G28 X Y Z ; home all axes at homing speed
G92 X0 Y0 Z0 A0 B0 ; set all coords to 0 for now
G1 Z5 F500 ; move Z 5mm away so we can carefully hit the limit switch
G161 Z F100 ; home Z slowly
M132 X Y Z ; recall stored home offsets for XYZ axes
;
; wait for heat up
G1 X110 Y-72 Z30 F3300 ; move to waiting position
M116 ; wait for temps
;
; purge and wipe
G92 E0 ; set current extruder position as 0 so that E15 below makes sense
G1 X110 Y-70 Z0.2 F2400.0 ; move to just on the bed
G1 X110 Y70 E15 F1200.000 ; extrude a line of filament along the right edge of the bed
G92 E0 ; set E to 0 again because the slicer's next extrusion is relative to this 0
;
; Sliced at: {day} {date} {time}
; Basic settings: Layer height: {layer_height} Walls: {wall_thickness} Fill: {fill_density}
; Print time: {print_time}
; Filament used: {filament_amount}m {filament_weight}g
; Filament cost: {filament_cost}
; -- end of START GCODE --
</code></pre>
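<p>A likely reason 2.x-era Cura rejects this start G-code is the 15.04-style <code>{placeholder}</code> names; newer Cura versions renamed many slicer variables (for example to <code>{material_print_temperature}</code> and <code>{material_bed_temperature}</code> — the mapping below is an assumption to verify against your Cura version). A sketch of rewriting them:</p>

```python
import re

# Hypothetical mapping from 15.04-style names to Cura 2.x-style names;
# check these against the actual setting names in your Cura version.
RENAMES = {
    "print_temperature": "material_print_temperature",
    "print_bed_temperature": "material_bed_temperature",
}

def update_placeholders(gcode: str) -> str:
    """Rewrite {old_name} placeholders in start/end G-code."""
    return re.sub(
        r"\{(\w+)\}",
        lambda m: "{" + RENAMES.get(m.group(1), m.group(1)) + "}",
        gcode,
    )

print(update_placeholders("M140 S{print_bed_temperature}"))
```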
|
<p>Yes, with the proper equipment.</p>
<p>Printing wax filament (at 51 seconds): <a href="https://youtu.be/tibkVZB_n9c?t=51s" rel="noreferrer">https://youtu.be/tibkVZB_n9c?t=51s</a></p>
<p>There are also options for melting wax, filling a heated reservoir head, and printing with that. I recommend doing this with a cold ambient temperature, so that the wax solidifies quickly. There's no point in just printing a puddle. :)</p>
|
<p>Just make a silicone mold of any of your prints and pour your wax into the silicone. Silicone putty is pretty cheap, and so is wax, so there's no need to break the bank. Printers have become less expensive and can do a pretty good job.</p>
| 631
|
<p>I have an NHibernate session. In this session, I am performing exactly 1 operation, which is to run this code to get a list:</p>
<pre><code>public IList<Customer> GetCustomerByFirstName(string customerFirstName)
{
return _session.CreateCriteria(typeof(Customer))
.Add(new NHibernate.Expression.EqExpression("FirstName", customerFirstName))
.List<Customer>();
}
</code></pre>
<p>I am calling <code>Session.Flush()</code> at the end of the <code>HttpRequest</code>, and I get a <code>HibernateAdoException</code>. NHibernate is passing an update statement to the db, and causing a foreign key violation. If I don't run the <code>flush</code>, the request completes with no problem. The issue here is that I need the flush in place in case there is a change that occurs within other sessions, since this code is reused in other areas. Is there another configuration setting I might be missing?</p>
<hr>
<p>Here's the code from the exception:</p>
<pre><code>[SQL: UPDATE CUSTOMER SET first_name = ?, last_name = ?, strategy_code_1 = ?, strategy_code_2 = ?, strategy_code_3 = ?, dts_import = ?, account_cycle_code = ?, bucket = ?, collector_code = ?, days_delinquent_count = ?, external_status_code = ?, principal_balance_amount = ?, total_min_pay_due = ?, current_balance = ?, amount_delinquent = ?, current_min_pay_due = ?, bucket_1 = ?, bucket_2 = ?, bucket_3 = ?, bucket_4 = ?, bucket_5 = ?, bucket_6 = ?, bucket_7 = ? WHERE customer_account_id = ?]
</code></pre>
<p>No parameters are showing as being passed.</p>
|
<p>Always be careful with NULLable fields whenever you deal with NHibernate. If your field is NULLable in the DB, make sure the corresponding .NET class uses a Nullable type too. Otherwise, all kinds of weird things will happen. The usual symptom is that NHibernate will try to update the record in the DB even though you have not changed any fields since you read the entity from the database.</p>
<p>The following sequence explains why this happens:</p>
<ol>
<li>NHibernate retrieves raw entity's data from DB using ADO.NET</li>
<li>NHibernate constructs the entity and sets its properties</li>
<li>If the DB field contained NULL, the property will be set to the default value for its type:
<ul>
<li>properties of reference types will be set to null</li>
<li>properties of integer and floating point types will be set to 0</li>
<li>properties of boolean type will be set to false</li>
<li>properties of DateTime type will be set to DateTime.MinValue</li>
<li>etc.</li>
</ul></li>
<li>Now, when the transaction is committed, NHibernate compares the value of the property to the original field value it read from the DB, and since the field contained NULL but the property contains a non-null value, <strong>NHibernate considers the property dirty, and forces an update of the entity.</strong></li>
</ol>
<p>Not only does this hurt performance (you get an extra round-trip to the DB and an extra update every time you retrieve the entity) but it also may cause hard-to-troubleshoot errors with DateTime columns. Indeed, when a DateTime property is initialized to its default value it's set to 1/1/0001. When this value is saved to the DB, ADO.NET's SqlClient can't convert it to a valid SqlDateTime value, since the smallest possible SqlDateTime is 1/1/1753!</p>
<p>The easiest fix is to make the class property use a Nullable type, in this case "DateTime?". Alternatively, you could implement a custom type mapper by implementing IUserType with its Equals method properly comparing DbNull.Value with whatever the default value of your value type is. In our case, Equals would need to return true when comparing 1/1/0001 with DbNull.Value. Implementing a fully functional IUserType is not really that hard, but it does require knowledge of NHibernate trivia, so prepare to do some substantial googling if you choose to go that way.</p>
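<p>The hydrate-then-compare sequence above can be sketched as a toy model (illustrative Python, not NHibernate's actual code): a DB NULL hydrated into a non-nullable property becomes the type's default, so the later dirty check sees a difference and forces the spurious UPDATE.</p>

```python
# Stand-in default values for non-nullable .NET types.
DEFAULTS = {"int": 0, "bool": False, "datetime": "0001-01-01"}

def hydrate(db_value, dotnet_type):
    """A non-nullable property can't hold NULL, so it gets the default."""
    return DEFAULTS[dotnet_type] if db_value is None else db_value

def is_dirty(db_value, property_value):
    # NHibernate-style dirty check: compare the current property value
    # to the value originally read from the database.
    return db_value != property_value

db_value = None                    # column was NULL
prop = hydrate(db_value, "int")    # property becomes 0
print(is_dirty(db_value, prop))    # True, so a spurious UPDATE is issued
```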
|
<p>I also experienced this problem in NH 2.0.1 when trying to hide the inverse ends of many-to-many bags using access="noop" (hint: this doesn't work). </p>
<p>Converting them to access="field" + adding a field on the class fixed the problem. Pretty hard to track them down though.</p>
| 5,522
|
<p>What is the fastest, yet secure way to encrypt passwords (in PHP preferably), and for whichever method you choose, is it portable?</p>
<p>In other words, if I later migrate my website to a different server, will my passwords continue to work?</p>
<p>The method I am using now, as I was told, is dependent on the exact versions of the libraries installed on the server.</p>
|
<p>If you are choosing an encryption method for your login system then speed is not your friend, Jeff had a to-and-frow with Thomas Ptacek about passwords and the <a href="http://chargen.matasano.com/chargen/2007/9/7/enough-with-the-rainbow-tables-what-you-need-to-know-about-s.html" rel="noreferrer">conclusion</a> was that you should use the slowest, most secure encryption method you can afford to. </p>
<blockquote>
<p>From Thomas Ptacek's blog:<br>
Speed is exactly what you don’t want in a password hash function.</p>
<p>Modern password schemes are attacked with incremental password crackers.</p>
<p>Incremental crackers don’t precalculate all possible cracked passwords. They consider each password hash individually, and they feed their dictionary through the password hash function the same way your PHP login page would. Rainbow table crackers like Ophcrack use space to attack passwords; incremental crackers like John the Ripper, Crack, and LC5 work with time: statistics and compute.</p>
<p>The password attack game is scored in time taken to crack password X. With rainbow tables, that time depends on how big your table needs to be and how fast you can search it. With incremental crackers, the time depends on how fast you can make the password hash function run.</p>
<p>The better you can optimize your password hash function, the faster your password hash function gets, the weaker your scheme is. MD5 and SHA1, even conventional block ciphers like DES, are designed to be fast. MD5, SHA1, and DES are weak password hashes. On modern CPUs, raw crypto building blocks like DES and MD5 can be bitsliced, vectorized, and parallelized to make password searches lightning fast. Game-over FPGA implementations cost only hundreds of dollars. </p>
</blockquote>
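<p>The "slow is good" point can be demonstrated with any tunable-work-factor KDF. This Python sketch uses PBKDF2 via the standard library (an illustration of the work-factor idea, not the specific scheme the article discusses; in PHP you would reach for <code>crypt()</code>/bcrypt-style functions instead):</p>

```python
import hashlib
import os
import time

def hash_password(password: str, salt: bytes, iterations: int) -> bytes:
    # PBKDF2-HMAC-SHA256: the iteration count is the work an attacker
    # must pay for every single guess.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)

t0 = time.perf_counter()
hash_password("hunter2", salt, 1_000)
t_fast = time.perf_counter() - t0

t0 = time.perf_counter()
hash_password("hunter2", salt, 200_000)
t_slow = time.perf_counter() - t0

print(f"1k iterations: {t_fast:.4f}s, 200k iterations: {t_slow:.4f}s")
```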
|
<p><a href="http://php.net/password_hash" rel="nofollow noreferrer"><code>password_hash ( string $password , int $algo [, array $options ] )</code></a>. (PHP 5 >= 5.5.0, PHP 7)</p>
<p><code>password_hash()</code> creates a new password hash using a strong one-way hashing algorithm. <code>password_hash()</code> is compatible with <code>crypt()</code>. Therefore, password hashes created by <code>crypt()</code> can be used with <code>password_verify()</code>.</p>
| 2,309
|
<p>I'd like to use Segoe UI 9 pt on Vista, and Tahoma 8 pt on Windows XP/etc. (Actually, I'd settle for Segoe UI on both, but my users probably don't have it installed.) But, these being quite different, they really screw up the layout of my forms. So... is there a good way to deal with this?</p>
<p>An example: I have a <code>Label</code>, with some blank space in the middle, into which I place a <code>NumericUpDown</code> control. If I use Segoe UI, the <code>NumericUpDown</code> is about 5 pixels or so to the left of the blank space, compared to when I use Tahoma. This is a pain; I'm not sure what to do here.</p>
<p>So most specifically, my question would be: how can I place controls in the middle of a blank space in my <code>Label</code>s (or <code>CheckBox</code>es, etc.)? Most generally: is there a good way to handle varying fonts in Windows Forms?</p>
<p><strong><em>Edit</em></strong>: I don't think people understood the question. I know how to <em>vary my fonts</em> based on OS. I just don't know how to <em>deal with the layout problems</em> that arise from doing so.</p>
<p><strong><em>Reply to ajryan, quick_dry</em></strong>: OK, you guys understand the question. I guess MeasureString might work, although I'd be interested in further exploration of better ways to solve this problem.</p>
<p>The problem with splitting the control is most apparent with, say, a <code>CheckBox</code>. There, if the user clicks on the "second half" of the CheckBox (which would be a separate <code>Label</code> control, I guess), the <code>CheckBox</code> doesn't change state.</p>
|
<p>It's strange to need to layout one control within another. You might be solving an upstream problem wrong. Are you able to split the label into two labels with the updown between and maybe rely on a Windows Forms TableLayout panel? </p>
<p>If it's essential to try to position based on font sizes, you could use <a href="http://msdn.microsoft.com/en-us/library/6xe5hazb.aspx" rel="nofollow noreferrer">Graphics.MeasureString</a>("String before updown", myLabel.Font)</p>
<p>If what you're after is font-dependent control positioning, you should probably retitle the question.</p>
<p><hr>
[edit] You can handle the click event of the "second half" part of the label and change the checkbox state on that event. The whole thing seems like a hack though. What is the problem being solved by this weird control layout? Why do you need an up-down in the middle of a label?</p>
|
<p>First of all, you can find out which version of Windows you are using with the <a href="http://msdn.microsoft.com/en-us/library/system.operatingsystem.platform.aspx" rel="nofollow noreferrer">OperatingSystem.Platform</a> property in the System library.</p>
<p>Second, it is possible that you may put your font settings in Resource files, and determine which resource file to use depending on certain conditions (e.g., your operating system version).</p>
<p>Personally though, I think it would be nice to let your user determine the fonts that <em>they</em> prefer as opposed to the font that you want for them to use.</p>
<p>Finally, you might want to take a look at <a href="http://windowsclient.net/" rel="nofollow noreferrer">WPF</a> as this is one of the problem spaces that it was designed to solve.</p>
| 5,795
|
<p>If you are writing a program that is executable from the command line, you often want to offer the user several options or flags, along with possibly more than one argument. I have stumbled my way through this many times, but is there some sort of design pattern for looping through args and calling the appropriate handler functions?</p>
<p>Consider:</p>
<pre><code>myprogram -f filename -d directory -r regex
</code></pre>
<p>How do you organize the handler functions after you retrieve the arguments using whatever built-ins for your language? (language-specific answers welcomed, if that helps you articulate an answer)</p>
|
<p>I don't know of any documented "patterns" for processing.</p>
<p>I believe one of the oldest libraries/APIs for handling arguments is getopt. Googling "getopt" shows lots of man pages and links to implementations.</p>
<p>Generally, I have a preferences or settings service in my application that the argument processor knows how to communicate with. Arguments are then translated into something in this service that the application than then query. This could be as simple as a dictionary of settings (like a string setting named "filename").</p>
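<p>As a sketch of that dispatch idea, here is the question's <code>myprogram -f filename -d directory -r regex</code> example using Python's <code>argparse</code> (handler names and messages are invented for illustration):</p>

```python
import argparse

# Hypothetical handlers, one per option.
def handle_file(path):
    return f"processing file {path}"

def handle_dir(path):
    return f"scanning directory {path}"

def handle_regex(pattern):
    return f"matching against {pattern}"

HANDLERS = {"filename": handle_file, "directory": handle_dir, "regex": handle_regex}

def main(argv):
    parser = argparse.ArgumentParser(prog="myprogram")
    parser.add_argument("-f", dest="filename")
    parser.add_argument("-d", dest="directory")
    parser.add_argument("-r", dest="regex")
    args = parser.parse_args(argv)
    # Dispatch each supplied option to its handler function.
    return [HANDLERS[name](value)
            for name, value in vars(args).items() if value is not None]

print(main(["-f", "notes.txt", "-r", "^J[0-9]{7}"]))
```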
|
<p>You don't mention a language for this but if you are looking for a really nice Objective-C wrapper around getopt then Dave Dribin's DDCLI framework is really nice.</p>
<p><a href="http://www.dribin.org/dave/blog/archives/2008/04/29/ddcli" rel="nofollow noreferrer">http://www.dribin.org/dave/blog/archives/2008/04/29/ddcli</a></p>
| 7,804
|
<p>I'm just beginning to have a look at Objective-C and Cocoa with a view to playing with the iPhone SDK. I'm reasonably comfortable with C's <code>malloc</code> and <code>free</code> concept, but Cocoa's references counting scheme has me rather confused. I'm told it's very elegant once you understand it, but I'm just not over the hump yet.</p>
<p>How do <code>release</code>, <code>retain</code> and <code>autorelease</code> work and what are the conventions about their use?</p>
<p>(Or failing that, what did you read which helped you get it?)</p>
|
<p>Let's start with <code>retain</code> and <code>release</code>; <code>autorelease</code> is really just a special case once you understand the basic concepts. </p>
<p>In Cocoa, each object keeps track of how many times it is being referenced (specifically, the <code>NSObject</code> base class implements this). By calling <code>retain</code> on an object, you are telling it that you want to up its reference count by one. By calling <code>release</code>, you tell the object you are letting go of it, and its reference count is decremented. If, after calling <code>release</code>, the reference count is now zero, then that object's memory is freed by the system.</p>
<p>The basic way this differs from <code>malloc</code> and <code>free</code> is that any given object doesn't need to worry about other parts of the system crashing because you've freed memory they were using. Assuming everyone is playing along and retaining/releasing according to the rules, when one piece of code retains and then releases the object, any other piece of code also referencing the object will be unaffected.</p>
<p>What can sometimes be confusing is knowing the circumstances under which you should call <code>retain</code> and <code>release</code>. My general rule of thumb is that if I want to hang on to an object for some length of time (if it's a member variable in a class, for instance), then I need to make sure the object's reference count knows about me. As described above, an object's reference count is incremented by calling <code>retain</code>. By convention, it is also incremented (set to 1, really) when the object is created with an "init" method. In either of these cases, it is my responsibility to call <code>release</code> on the object when I'm done with it. If I don't, there will be a memory leak.</p>
<p>Example of object creation:</p>
<pre><code>NSString* s = [[NSString alloc] init]; // Ref count is 1
[s retain]; // Ref count is 2 - silly
// to do this after init
[s release]; // Ref count is back to 1
[s release]; // Ref count is 0, object is freed
</code></pre>
<p>Now for <code>autorelease</code>. Autorelease is used as a convenient (and sometimes necessary) way to tell the system to free this object up after a little while. From a plumbing perspective, when <code>autorelease</code> is called, the current thread's <code>NSAutoreleasePool</code> is alerted of the call. The <code>NSAutoreleasePool</code> now knows that once it gets an opportunity (after the current iteration of the event loop), it can call <code>release</code> on the object. From our perspective as programmers, it takes care of calling <code>release</code> for us, so we don't have to (and in fact, we shouldn't).</p>
<p>What's important to note is that (again, by convention) all object creation <em>class</em> methods return an autoreleased object. For example, in the following example, the variable "s" has a reference count of 1, but after the event loop completes, it will be destroyed.</p>
<pre><code>NSString* s = [NSString stringWithString:@"Hello World"];
</code></pre>
<p>If you want to hang onto that string, you'd need to call <code>retain</code> explicitly, and then explicitly <code>release</code> it when you're done.</p>
<p>Consider the following (very contrived) bit of code, and you'll see a situation where <code>autorelease</code> is required:</p>
<pre><code>- (NSString*)createHelloWorldString
{
NSString* s = [[NSString alloc] initWithString:@"Hello World"];
// Now what? We want to return s, but we've upped its reference count.
// The caller shouldn't be responsible for releasing it, since we're the
// ones that created it. If we call release, however, the reference
// count will hit zero and bad memory will be returned to the caller.
// The answer is to call autorelease before returning the string. By
// explicitly calling autorelease, we pass the responsibility for
// releasing the string on to the thread's NSAutoreleasePool, which will
// happen at some later time. The consequence is that the returned string
// will still be valid for the caller of this function.
return [s autorelease];
}
</code></pre>
<p>I realize all of this is a bit confusing - at some point, though, it will click. Here are a few references to get you going:</p>
<ul>
<li><a href="http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/MemoryMgmt/Articles/MemoryMgmt.html" rel="noreferrer" title="Apple's introduction to Cocoa's memory management">Apple's introduction</a> to memory management.</li>
<li><a href="https://rads.stackoverflow.com/amzn/click/com/0321774086" rel="noreferrer" rel="nofollow noreferrer">Cocoa Programming for Mac OS X (4th Edition)</a>, by Aaron Hillegas - a very well written book with lots of great examples. It reads like a tutorial.</li>
<li>If you're truly diving in, you could head to <a href="http://www.bignerdranch.com/" rel="noreferrer" title="Big Nerd Ranch">Big Nerd Ranch</a>. This is a training facility run by Aaron Hillegas - the author of the book mentioned above. I attended the Intro to Cocoa course there several years ago, and it was a great way to learn.</li>
</ul>
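<p>If it helps to see the retain/release contract in isolation, here is a toy model in Python. This is only an illustration of the counting rules described above, not how Cocoa is actually implemented:</p>

```python
class RefCounted:
    """Toy model of Cocoa-style manual reference counting."""

    def __init__(self):
        self.ref_count = 1    # "alloc/init" starts the count at 1
        self.freed = False

    def retain(self):
        self.ref_count += 1
        return self

    def release(self):
        assert not self.freed, "over-release!"
        self.ref_count -= 1
        if self.ref_count == 0:
            self.freed = True  # stand-in for the memory being freed

s = RefCounted()   # ref count is 1
s.retain()         # ref count is 2
s.release()        # ref count is back to 1
s.release()        # ref count is 0, object is "freed"
print(s.freed)
```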
|
<p>As several people mentioned already, Apple's <a href="http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/MemoryMgmt/Articles/mmPractical.html" rel="nofollow noreferrer">Intro to Memory Management</a> is by far the best place to start.</p>
<p>One useful link I haven't seen mentioned yet is <a href="http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/MemoryMgmt/Articles/mmPractical.html" rel="nofollow noreferrer">Practical Memory Management</a>. You'll find it in the middle of Apple's docs if you read through them, but it's worth direct linking. It's a brilliant executive summary of the memory management rules with examples and common mistakes (basically what other answers here are trying to explain, but not as well).</p>
| 2,828
|
<p>I'm new to development (an admin by trade) and I'm setting up my development environment and I would like to set up a CruiseControl.Net server on Server 2008. A quick Google did not turn up any instructions for getting it running on IIS 7.0, so I was wondering if anyone had experience getting this set up.</p>
|
<p>What Dale Ragan said; it installed flawlessly on our Windows Server 2008 machine, including the Dashboard running on IIS 7. Just give it a shot; should work fine.</p>
|
<p>I got it running by following the steps in <a href="http://www.matthewbussa.com/2011/05/installing-and-configuring-cruise.html" rel="nofollow">this blog</a>. Additionally, I had to enable ASP.NET, as shown in <a href="http://lamiavitaintrycatch.wordpress.com/2008/05/20/http-error-50019-iis-70-thoubles-fixed/" rel="nofollow">this blog</a>. Lastly, to get the package install working, I gave full permission to the local users on the webdashboard directory, as in <a href="http://jira.public.thoughtworks.org/browse/CCNET-1664" rel="nofollow">this bug report</a>.</p>
| 3,435
|
<p>I experience that on several different machines, with plugins, without plugins, with VB.net or c# solutions of many different sizes, <strong>closing</strong> the solution in <strong>VS 2005</strong> generally takes significantly more time than actually <strong>building</strong> the solution.</p>
<p>This has always been the case for me since I started using Visual Studio 2005, so I have learned to live with it, but I am curious:</p>
<p>What on earth is visual studio doing when you have actually told it to shut down? Is it significant? Is it configurable, can you turn it off?</p>
|
<blockquote>
<p>What on earth is visual studio doing
when you have actually told it to shut
down?</p>
</blockquote>
<p>You can use <a href="http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx" rel="nofollow noreferrer">Process Monitor</a> from sysinternals. It maybe because of some plugins. Try resetting your Visual Studio settings (<strong>Tools->Import and Export Settings->Reset All Settings</strong>).</p>
|
<p>I've found that closing all the open documents before you close the solution helps speed it up. Or maybe it's just a perception thing, but it <em>seems</em> faster :)</p>
| 6,716
|
<p>Is it a good idea or do I need to calibrate my E steps after I switch to a new filament due to the different types I use (to make my prints accurate)?</p>
<p>E.g. calibrate when switching from PLA to ABS/PETG? </p>
|
<p>No, that will not be necessary.</p>
<p>However, you could use calipers to measure the diameter of the filament (e.g. at 5 positions over a few meters) and calculate the mean diameter. If there is a significant difference between the new and the currently used filament, you could change the diameter in the slicer (or the flow modifier); you do not need to calibrate the steps per millimeter every time you change filament.</p>
<p>You only need to calibrate the steps per millimeter if you change something in the extruder hardware setup, e.g. a different extruder, a different stepper driver, a new gear, etc. As long as the hardware is not changed, a calibrated extruder setup will move a certain amount of filament regardless of the diameter variation (per rotation of the extruder gear, an amount of <span class="math-container">$2 \times \pi \times (gear\ radius)$</span> mm of filament).</p>
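<p>The diameter-to-flow adjustment mentioned above can be sketched as follows, under the assumption that flow should scale with the filament's cross-sectional area, i.e. with diameter squared:</p>

```python
def flow_correction(nominal_diameter_mm: float, measurements_mm: list) -> float:
    """Flow multiplier compensating for filament that is thicker or
    thinner than nominal; extruded volume per mm of filament scales
    with the diameter squared."""
    mean_d = sum(measurements_mm) / len(measurements_mm)
    return (nominal_diameter_mm / mean_d) ** 2

# Filament measuring slightly thick at 5 positions: extrude a bit less.
print(round(flow_correction(1.75, [1.77, 1.76, 1.78, 1.77, 1.77]), 3))
```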
|
<p>The above is not entirely true. Maybe in a perfect world, but we are far away from that. Just this week I tested it with an all-metal Micro Swiss hotend.</p>
<ul>
<li>Calibrated with PETG at 240 °C. Result E116</li>
<li>Calibrated with PLA at 180 °C. Result E96...</li>
</ul>
<p>That's a huge deviation that one can't control with the flow setting. A smarter way to calibrate the extruder would be to take the heating and nozzle out of the equation: just run the filament dry out of the end of the PTFE tube, using the same calculation method. Unfortunately, this is still unreliable.</p>
| 1,090
|
<p>I need to implement a <a href="http://en.wikipedia.org/wiki/Wavetable" rel="nofollow noreferrer">wavetable synthesizer</a> in an ARM Cortex-M3 core. I'm looking for any code or tools to help me get started.</p>
<p>I'm aware of <a href="http://elm-chan.org/works/mxb/report.html" rel="nofollow noreferrer">this AVR implementation</a>. I actually converted it to a PIC a while back. Now I am looking for something similar, but a little better sounding.</p>
<p>ANSI C code would be great. Any code snippets (C or C++), samples, tools, or just general information would be greatly appreciated.</p>
<p>Thanks.</p>
|
<p>The Synthesis Toolkit (STK) is excellent, but it is C++ only:</p>
<p><a href="http://ccrma.stanford.edu/software/stk/" rel="noreferrer">http://ccrma.stanford.edu/software/stk/</a></p>
<p>You may be able to extract the wavetable synthesizer code from the STK though.</p>
|
<p>Any good ARM synth can be changed into a wavetable synth in less than a day. Scanning a wave read from a file is, audio-wise, nearly the same thing as generating it mathematically; a wavetable provides massive banks of waveforms at zero processing cost. You need the waves themselves, but the wavetable oscillator code itself is about 20 lines: change your waveform knob from 3 options to 100 to select which WAV you are reading, and use a ramp/counter to read the WAV files (as arrays). </p>
<p>From 7 years of synth experience, I'd recommend changing the ~20 lines of the oscillator function of your favorite synth to adapt it to read wave arrays. The wavetable only uses about 20 lines of logic; the rest of the synthesizer is more important: LFOs, filters, input parameters, preset memory... Use your favorite synth, find a wavetable library as WAV files and folders, and replace your favorite synth's oscillators with wavetable functions. It will sound almost the same, with lower processing cost. </p>
<p>A synth normally uses sine, square, saw, and antialiased oscillator functions for the wave... </p>
<p>A wavetable synth uses about 20 lines of code at its base, and tens to hundreds of waves, each ideally sampled at every octave. If you can get a wavetable sound library, the synth just loops and pitch-shifts the sounds, and pro synths can also mix multiple octaves. </p>
<p>WTfunction = </p>
<ul>
<li>load WAV files into N arrays</li>
<li>change waveform = select waveform array from WAV list</li>
<li>read waveform array at desired Hz </li>
</ul>
<p>wavescanner function =</p>
<ul>
<li>crossfade between 2 waves and assign xfade to LFO, i.e. sine and xfade.</li>
</ul>
<p>The envelope, filter, amplitude, all other functions are independent from the wave generation function in all synths. </p>
<p>Remember that the most powerful psychoacoustic tool for synthesizers is deviation from the pure digital tone of the notes, called unison detune; the sonic character of synthesizers mostly comes from chorus and unison detune. </p>
<p>Wavetables hold either single periods of waves or, in more advanced synths, longer sections. The single-period stuff is super easy to write into code. The advanced wavetables are sampled per octave, with waves lasting N periods, even 2-3 seconds (e.g. piano); that means they change sound quality through the octaves, so the complex wavetables crossfade between multiple per-octave recordings. </p>
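<p>The "20 lines of logic" above can be sketched in Python. This is the single-period, phase-accumulator form; a real ARM Cortex-M3 implementation would use fixed-point phase accumulation and add interpolation:</p>

```python
import math

def make_sine_table(size=256):
    """One period of a sine wave; in practice this would be loaded
    from a WAV file array instead."""
    return [math.sin(2 * math.pi * i / size) for i in range(size)]

def render(table, freq_hz, sample_rate=44100, n_samples=64):
    """Phase-accumulator wavetable oscillator: step through the table
    at a rate proportional to the desired frequency."""
    out, phase = [], 0.0
    step = freq_hz * len(table) / sample_rate  # table indices per sample
    for _ in range(n_samples):
        i = int(phase) % len(table)
        out.append(table[i])                   # no interpolation, for brevity
        phase += step
    return out

samples = render(make_sine_table(), freq_hz=440.0)
print(len(samples))
```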
| 4,935
|
<p>I have this gigantic ugly string:</p>
<pre class="lang-none prettyprint-override"><code>J0000000: Transaction A0001401 started on 8/22/2008 9:49:29 AM
J0000010: Project name: E:\foo.pf
J0000011: Job name: MBiek Direct Mail Test
J0000020: Document 1 - Completed successfully
</code></pre>
<p>I'm trying to extract pieces from it using regex. In this case, I want to grab everything after <code>Project Name</code> up to the part where it says <code>J0000011:</code> (the 11 is going to be a different number every time).</p>
<p>Here's the regex I've been playing with:</p>
<pre class="lang-none prettyprint-override"><code>Project name:\s+(.*)\s+J[0-9]{7}:
</code></pre>
<p>The problem is that it doesn't stop until it hits the <strong>J0000020:</strong> at the end.</p>
<p>How do I make the regex stop at the first occurrence of <code>J[0-9]{7}</code>?</p>
|
<p>Make <code>.*</code> non-greedy by adding '<code>?</code>' after it:</p>
<pre><code>Project name:\s+(.*?)\s+J[0-9]{7}:
</code></pre>
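<p>The difference is easy to demonstrate in Python (the same quantifier syntax applies in .NET's regex flavor); <code>DOTALL</code> lets <code>.</code> span lines, as the original multi-line match implies:</p>

```python
import re

log = ("J0000000: Transaction A0001401 started on 8/22/2008 9:49:29 AM\n"
       "J0000010: Project name: E:\\foo.pf\n"
       "J0000011: Job name: MBiek Direct Mail Test\n"
       "J0000020: Document 1 - Completed successfully\n")

# Greedy: runs on to the LAST "Jnnnnnnn:" it can still satisfy.
greedy = re.search(r"Project name:\s+(.*)\s+J[0-9]{7}:", log, re.DOTALL)
# Lazy: stops at the FIRST "Jnnnnnnn:".
lazy = re.search(r"Project name:\s+(.*?)\s+J[0-9]{7}:", log, re.DOTALL)

print(repr(greedy.group(1)))
print(repr(lazy.group(1)))
```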
|
<p><code>(Project name:\s+[A-Z]:(?:\\\w+)+\.[a-zA-Z]+\s+J[0-9]{7})(?=:)</code></p>
<p>This will work for you.</p>
<p>Using <code>(?:\\\w+)+\.[a-zA-Z]+</code> (a backslash-separated path followed by an extension) is more restrictive than <code>.*</code>.</p>
| 4,246
|
<p>Can I use this approach efficiently?</p>
<pre><code>using(SqlCommand cmd = new SqlCommand("GetSomething", new SqlConnection(Config.ConnectionString))
{
cmd.Connection.Open();
// set up parameters and CommandType to StoredProcedure etc. etc.
cmd.ExecuteNonQuery();
}
</code></pre>
<p>My concern is : Will the Dispose method of the SqlCommand (which is called when exiting the using block) close the underlying SqlConnection object or not?</p>
|
<p>No, disposing of the <code>SqlCommand</code> will not affect the Connection. A better approach would be to also wrap the <code>SqlConnection</code> in a using block as well: </p>
<pre><code>using (SqlConnection conn = new SqlConnection(connstring))
{
conn.Open();
using (SqlCommand cmd = new SqlCommand(cmdstring, conn))
{
cmd.ExecuteNonQuery();
}
}
</code></pre>
<p>Otherwise, the Connection is unchanged by the fact that a Command that was using it was disposed (maybe that is what you want?). But keep in mind that a Connection should be disposed of as well, and it is likely more important to dispose of than a Command. </p>
<p><strong>EDIT:</strong> </p>
<p>I just tested this: </p>
<pre><code>SqlConnection conn = new SqlConnection(connstring);
conn.Open();
using (SqlCommand cmd = new SqlCommand("select field from table where fieldid = 1", conn))
{
Console.WriteLine(cmd.ExecuteScalar().ToString());
}
using (SqlCommand cmd = new SqlCommand("select field from table where fieldid = 2", conn))
{
Console.WriteLine(cmd.ExecuteScalar().ToString());
}
conn.Dispose();
</code></pre>
<p>The first command was disposed when the using block was exited. The connection was still open and good for the second command. </p>
<p><strong><em>So, disposing of the command definitely does not dispose of the connection it was using.</em></strong></p>
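<p>The same command-vs-connection lifetime split exists in other database APIs. As an analogy (Python's <code>sqlite3</code>, not ADO.NET), closing a cursor leaves the connection open and usable, mirroring the C# test above:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (fieldid INTEGER, field TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "one"), (2, "two")])

cur = conn.cursor()
cur.execute("SELECT field FROM t WHERE fieldid = 1")
first = cur.fetchone()[0]
cur.close()                # dispose of the "command"

cur2 = conn.cursor()       # the connection is still open and good
cur2.execute("SELECT field FROM t WHERE fieldid = 2")
second = cur2.fetchone()[0]
conn.close()               # the connection must be closed separately

print(first, second)
```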
|
<p>I use this pattern. I have this private method somewhere in my app:</p>
<pre><code>private void DisposeCommand(SqlCommand cmd)
{
try
{
if (cmd != null)
{
if (cmd.Connection != null)
{
cmd.Connection.Close();
cmd.Connection.Dispose();
}
cmd.Dispose();
}
}
catch { } //don't blow up
}
</code></pre>
<p>Then I always create SQL commands and connections in a try block (but without being wrapped in a using block) and always have a finally block as:</p>
<pre><code> finally
{
DisposeCommand(cmd);
}
</code></pre>
<p>The connection object being a property of the command object makes a using block awkward in this situation - but this pattern gets the job done without cluttering up your code.</p>
| 8,600
|
<p>How would you go about proving that two queries are functionally equivalent, eg they will always both return the same result set.</p>
<hr>
<p>As I had a specific query in mind when I was doing this, I ended up doing as @dougman suggested: running both queries over about 10% of the rows in the tables concerned and comparing the results, ensuring there were no out-of-place results.</p>
|
<p>The best you can do is compare the 2 query outputs based on a given set of inputs looking for any differences. To say that they will always return the same results for all inputs really depends on the data.</p>
<p>For Oracle, one of the better (if not the best) approaches, and a very efficient one, is here (<kbd>Ctrl</kbd>+<kbd>F</kbd> Comparing the Contents of Two Tables):<br>
<a href="http://www.oracle.com/technetwork/issue-archive/2005/05-jan/o15asktom-084959.html" rel="noreferrer">http://www.oracle.com/technetwork/issue-archive/2005/05-jan/o15asktom-084959.html</a></p>
<p>Which boils down to:</p>
<pre><code>select c1,c2,c3,
count(src1) CNT1,
count(src2) CNT2
from (select a.*,
1 src1,
to_number(null) src2
from a
union all
select b.*,
to_number(null) src1,
2 src2
from b
)
group by c1,c2,c3
having count(src1) <> count(src2);
</code></pre>
|
<p>You don't.</p>
<p>If you need a high level of confidence that a performance change, for example, hasn't changed the output of a query, then test the hell out of it.</p>
<p>If you need a really high level of confidence .. then errrm, test it even more.</p>
<p>Massive levels of testing aren't that hard to cobble together for a SQL query. Write a proc which will iterate over a large/complete set of possible parameters, call each query with each set of params, and write the outputs to respective tables. Compare the two tables and there you have it.</p>
<p>It's not exactly scientific, which I guess was the OP's question, but I'm not aware of a formal method to prove equivalency.</p>
| 8,101
|
<p>I've never had to do IPC on Windows before. I'm developing a pair of programs, a standard GUI/CLI app, and a windows service. The app has to tell the service what to do. So, assuming the communication is local only, what would be the best communication method for these two processes?<br />
By best I mean more robust and less error prone, not the best performance nor the easiest to code.</p>
<p>Note I'm asking about what to use, a standard TCP socket, named pipes, or some other means of communication only.</p>
|
<p>IPC in .Net can be achieved using:</p>
<h1>WCF</h1>
<p>using named pipes <strong>requires .Net 3.0</strong> and above.</p>
<h2>Code example</h2>
<ul>
<li>The WCF class <strong>NetNamedPipeBinding</strong> can be used for interprocess communication on the same machine. The MSDN documentation for this class includes a code sample covering this scenario <a href="http://msdn.microsoft.com/en-us/library/system.servicemodel.netnamedpipebinding.aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/system.servicemodel.netnamedpipebinding.aspx</a></li>
</ul>
<hr />
<h1>Remoting</h1>
<p>The original IPC framework released with .Net 1.0. I believe remoting is no longer being actively developed, and you are encouraged to use WCF instead.</p>
<h2>Code example</h2>
<p><a href="http://www.codeproject.com/KB/threads/process_communication.aspx/process_communication.aspx" rel="noreferrer">Inter-process communication via Remoting</a> - uses a tcp channel</p>
<h2>Resources</h2>
<ul>
<li>GenuineChannels, sell a remoting toolkit that includes a Shared Memory Channel. <a href="http://www.genuinechannels.com/Index.aspx" rel="noreferrer">http://www.genuinechannels.com/Index.aspx</a></li>
<li><a href="http://www.thinktecture.com/resourcearchive/tools-and-software/dotnetremotingprojects" rel="noreferrer">Ingo Rammer</a>, wrote the definitive .Net remoting book, <a href="https://rads.stackoverflow.com/amzn/click/com/1590594177" rel="noreferrer" rel="nofollow noreferrer">Advanced .NET Remoting, Second Edition </a></li>
</ul>
<hr />
<h1>Win32 RPC using csharptest-net RpcLibrary</h1>
<p>I came across a project recently that has wrapped the Win32 RPC library and created a .NET class library that can be used for local and remote RPC.</p>
<p><strong>Project home page</strong>: <a href="http://csharptest.net/projects/rpclibrary/" rel="noreferrer">http://csharptest.net/projects/rpclibrary/</a></p>
<p><strong>MSDN references:</strong></p>
<ul>
<li>How rpc works: <a href="http://technet.microsoft.com/en-us/library/cc738291(v=ws.10).aspx" rel="noreferrer">http://technet.microsoft.com/en-us/library/cc738291(v=ws.10).aspx</a></li>
<li>RPC functions: <a href="http://msdn.microsoft.com/en-us/library/aa378623(v=VS.85).aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/aa378623(v=VS.85).aspx</a></li>
</ul>
<p>Also has a google protocol buffers rpc client that runs on top of the library: <a href="https://code.google.com/p/protobuf-csharp-rpc/" rel="noreferrer">https://code.google.com/p/protobuf-csharp-rpc/</a></p>
<hr />
<h1>WM_COPYDATA</h1>
<p>For completeness it's also possible to use the WIN32 method with the <a href="http://msdn.microsoft.com/en-us/library/ms649011.aspx" rel="noreferrer">WM_COPYDATA</a> message. I've used this method before in .Net 1.1 to create a single instance application opening multiple files from windows explorer.</p>
<h2>Resources</h2>
<ul>
<li><a href="http://msdn.microsoft.com/en-us/library/ms649011.aspx" rel="noreferrer">MSDN - WM_COPYDATA</a></li>
<li><a href="http://www.codeproject.com/KB/cs/wm_copydata_use.aspx" rel="noreferrer">Code example</a></li>
<li><a href="http://www.pinvoke.net/search.aspx?search=wm_copyData" rel="noreferrer">PInvoke.net declaration</a></li>
</ul>
<h1>Sockets</h1>
<p>Using a custom protocol (harder)</p>
|
<p>Your best bet is to use WCF. You will be able to create a service host in the Windows service and expose a well-defined interface that the GUI application can consume. WCF will let you communicate via named pipes if you choose, or via any other communication protocol like TCP, HTTP, etc. Using WCF you get great tool support and lots of available information.</p>
| 7,285
|
<p>I am trying to set up dynamic per-item menus (Edit Control Block) in SharePoint 2007. My goal is to have certain features that are available based on the current user's group membership.</p>
<p>I know that the CustomAction tag that controls the creation of this menu item has a Rights attribute. The problem that I have with this is that the groups I am using have identical rights in the site (ViewListItems, ManageAlerts, etc). The groups that we have set up deal more with function, such as Manager, Employee, etc. We want to be able to assign a custom feature to a group, and have the menu items associated with that feature visible only to members of that group. Everyone has the same basic site permissions, but will have extra options availble based on their login credentials.</p>
<p>I have seen several articles on modifying the Core.js file to hide items in the context menu, but they are an all-or-nothing approach. There is an interesting post at <a href="http://blog.thekid.me.uk/archive/2008/04/29/sharepoint-custom-actions-in-a-list-view-webpart.aspx" rel="noreferrer">http://blog.thekid.me.uk/archive/2008/04/29/sharepoint-custom-actions-in-a-list-view-webpart.aspx</a> that shows how to dynamically modify the Actions menu. It is trivial to modify this example to check the users group and show or hide the menu based on membership. Unfortunately, this example does not seem to apply to context menu items as evidenced here <a href="http://forums.msdn.microsoft.com/en-US/sharepointdevelopment/thread/c2259839-24c4-4a7e-83e5-3925cdd17c44/" rel="noreferrer">http://forums.msdn.microsoft.com/en-US/sharepointdevelopment/thread/c2259839-24c4-4a7e-83e5-3925cdd17c44/</a>.</p>
<p>Does anyone know of a way to do this without using javascript? If not, what is the best way to check the user's group from javascript?</p>
|
<p>A short list might be:</p>
<ul>
<li>Avoid memory leaks through the use of shared pointers to manage memory allocation and cleanup</li>
<li>Use the <a href="https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization" rel="noreferrer">Resource Acquisition Is Initialization</a> (RAII) idiom to manage resource cleanup - especially in the presence of exceptions</li>
<li>Avoid calling virtual functions in constructors</li>
<li>Employ minimalist coding techniques - for example, declaring variables only when needed, scoping variables, and early-out design where possible. </li>
<li>Truly understand the exception handling in your code - both with regard to exceptions you throw, as well as ones thrown by classes you may be using indirectly. This is especially important in the presence of templates.</li>
</ul>
<p>RAII, shared pointers and minimalist coding are of course not specific to C++, but they help avoid problems that do frequently crop up when developing in the language. </p>
<p>Some excellent books on this subject are:</p>
<ul>
<li>Effective C++ - Scott Meyers</li>
<li>More Effective C++ - Scott Meyers</li>
<li>C++ Coding Standards - Sutter & Alexandrescu</li>
<li>C++ FAQs - Cline</li>
</ul>
<p>Reading these books has helped me more than anything else to avoid the kind of pitfalls you are asking about.</p>
|
<pre><code>#include <boost/shared_ptr.hpp>

class A {
public:
    void nuke() {
        // Pitfall: this constructs a temporary shared_ptr that believes it
        // owns 'this'. When the temporary dies at the end of the statement,
        // it calls delete on 'this' -- undefined behavior here, because the
        // object lives on the stack and was never allocated with new.
        boost::shared_ptr<A> (this);
    }
};

int main(int argc, char** argv) {
    A a;
    a.nuke(); // deletes a stack-allocated object: undefined behavior
    return 0;
}
</code></pre>
| 5,031
|
<p>In Eclipse, I can switch through open editors using control-page up/down. This works great, except for editors like XML or JavaScript, where there are Design and Source tabs. For those editors, it just toggles between the different tabs. Is there any way to get Eclipse to ignore them? I know about alt-F6 for "Next Editor", but that doesn't use the same order that the editor tabs are displayed in, so it's confusing.</p>
|
<p>You're right -- looks like Eclipse has acknowledged it as a <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=199499" rel="nofollow noreferrer">bug</a>. It's fixed in 3.5.</p>
|
<p>I was initially thinking <kbd>Alt</kbd>-<kbd>←</kbd> and <kbd>Alt</kbd>-<kbd>→</kbd> might do what you want, but that's more for going forward and backwards in history of tabs you've viewed. Which might sort of get you what you want, but is probably just as confusing as <kbd>Alt</kbd>-<kbd>F6</kbd>.</p>
<p>I think it sounds more like a bug in Eclipse, might be worth going over to eclipse.org to see if there's a pre-existing bug for this.</p>
| 8,697
|
<p>Is there any way to apply an attribute to a model file in ASP.NET Dynamic Data to hide the column?</p>
<p>For instance, I can currently set the display name of a column like this:</p>
<pre><code>[DisplayName("Last name")]
public object Last_name { get; set; }
</code></pre>
<p>Is there a similar way to hide a column?</p>
<p><strong>Edit</strong>: Many thanks to Christian Hagelid for going the extra mile and giving a spot-on answer :-)</p>
|
<p>Had no idea what ASP.NET Dynamic Data was, so you prompted me to do some research :)</p>
<p>Looks like the attribute you are looking for is </p>
<pre><code>[ScaffoldColumn(false)]
</code></pre>
<p>There is also a similar attribute for tables</p>
<pre><code>[ScaffoldTable(false)]
</code></pre>
<p><a href="http://davidhayden.com/blog/dave/archive/2008/05/15/DynamicDataWebsitesScaffoldTableScaffoldColumnAttributes.aspx" rel="noreferrer">source</a></p>
|
<p>A much, much easier method: If you want to only show certain columns in the List page, but all or others in the Details, etc. pages, see <a href="https://stackoverflow.com/questions/80175/how-do-i-hide-a-column-only-on-the-list-page-in-asp-net-dynamic-data">How do I hide a column only on the list page in ASP.NET Dynamic Data?</a></p>
<p>Simply set AutoGenerateColumns="false" in the GridView control, then define exactly the columns you want:</p>
<pre><code><Columns>
    ...
    <asp:DynamicField DataField="FirstName" HeaderText="First Name" />
    <asp:DynamicField DataField="LastName" HeaderText="Last Name" />
</Columns>
</code></pre>
| 4,526
|
<p>Refactoring is the process of improving the existing system design without changing its behavior.</p>
<p>Besides Martin Fowler's seminal book "<a href="https://rads.stackoverflow.com/amzn/click/com/0201485672" rel="nofollow noreferrer" rel="nofollow noreferrer">Refactoring - Improving the design of existing code</a>" and Joshua Kerievsky's book "<a href="https://rads.stackoverflow.com/amzn/click/com/0321213351" rel="nofollow noreferrer" rel="nofollow noreferrer">Refactoring to Patterns</a>", are there any good resources on refactoring? </p>
|
<p><a href="http://www.refactoring.com/" rel="noreferrer">http://www.refactoring.com/</a> might help you. They have a long list of methods here:</p>
<ul>
<li><a href="http://www.refactoring.com/catalog/index.html" rel="noreferrer">http://www.refactoring.com/catalog/index.html</a></li>
</ul>
<p>Joel's article <a href="http://www.joelonsoftware.com/articles/fog0000000348.html" rel="noreferrer">Rub a dub dub</a> shows you why you should refactor and not rewrite (but I guess you already knew that rewriting is <a href="http://www.joelonsoftware.com/articles/fog0000000069.html" rel="noreferrer">a thing you should never do</a>..)</p>
|
<p>What language is your codebase in?</p>
<p>Eclipse has quite good support for Java, but unfortunately only limited support for C++ code.</p>
<p><a href="http://www.ibm.com/developerworks/library/l-eclipse.html" rel="nofollow noreferrer">Here's</a> an article from the makers.</p>
| 7,127
|
<p>Simple ASP.NET application.</p>
<p>I have two drop-down controls. On the first-drop down I have a JavaScript <code>onChange</code> event. The JavaScript enables the second drop-down and removes a value from it (the value selected in the first drop-down). If they click the blank first value of the drop-down, then the second drop-down will be disabled (and the options reset).</p>
<p>I also have code in the <code>OnPreRender</code> method that will enable or disable the second drop-down based on the value of the first drop-down. This is so that the value of the first drop-down can be selected in code (loading user settings).</p>
<p>My problem is:</p>
<ol>
<li>The user selects something in the first drop-down. The second drop-down will become enabled through JavaScript. </li>
<li>They then change a third drop-down that initiates a post back. After the post back the drop-downs are in the correct state (first value selected, second drop-down enabled).</li>
<li>If they then click the back button, the second drop-down will no longer be enabled although it should be since there's something selected in the first drop-down.</li>
</ol>
<p>I've tried adding a startup script (that will set the correct state of the second-drop down) through <code>ClientScript.RegisterStartupScript</code>, however when this gets called the first drop-down has a <code>selectedIndex</code> of <code>0</code>, not what it actually is. My guess is that the value of the selection gets set after my start script (but still doesn't call the <code>onChange</code> script).</p>
<p>Any ideas on what to try?</p>
|
<pre><code><%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<script runat="server">
protected void indexChanged(object sender, EventArgs e)
{
Label1.Text = " I did something! ";
}
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>Test Page</title>
</head>
<body>
<script type="text/javascript">
function firstChanged() {
if(document.getElementById("firstSelect").selectedIndex != 0)
document.getElementById("secondSelect").disabled = false;
else
document.getElementById("secondSelect").disabled = true;
}
</script>
<form id="form1" runat="server">
<div>
<select id="firstSelect" onchange="firstChanged()">
<option value="0"></option>
<option value="1">One</option>
<option value="2">Two</option>
<option value="3">Three</option>
</select>
<select id="secondSelect" disabled="disabled">
<option value="1">One</option>
<option value="2">Two</option>
<option value="3">Three</option>
</select>
<asp:DropDownList ID="DropDownList1" AutoPostBack="true" OnSelectedIndexChanged="indexChanged" runat="server">
<asp:ListItem Text="One" Value="1"></asp:ListItem>
<asp:ListItem Text="Two" Value="2"></asp:ListItem>
</asp:DropDownList>
<asp:Label ID="Label1" runat="server"></asp:Label>
</div>
</form>
<script type="text/javascript">
window.onload = function() {firstChanged();}
</script>
</body>
</html>
</code></pre>
<p>Edit: I replaced the whole code sample. This should work even in your user control.
I believe that <code>ClientScript.RegisterStartupScript</code> is not working because the code you register that way is executed <em>before</em> window.onload is called. And I assume (I am not sure of this point) that the DOM objects do not have their values set at that time, which is why you are getting a selectedIndex that is always 0.</p>
|
<p>If the second dropdown is initially enabled through javascript (I'm assuming this is during a javascript onchange, since you didn't specify), then clicking the back button to reload the previous postback will never enable it. </p>
<p>Mixing ASP.NET with classic javascript can be hairy. You might want to have a look at ASP.NET's Ajax implementation (or the third-party AjaxPanel control if you're forced to use an older ASP.NET version). Those will give you the behaviour that you want through pure C#, without forcing you to resort to javascript hackery-pokery.</p>
| 3,078
|
<p>I've just started learning Lisp and I can't figure out how to compile and link lisp code to an executable.</p>
<p>I'm using <code>clisp</code> and <code>clisp -c</code> produces two files:</p>
<ul>
<li>.fas</li>
<li>.lib</li>
</ul>
<p>What do I do next to get an executable?</p>
|
<p>I was actually trying to do this today, and I found typing this into the CLisp REPL worked:</p>
<pre><code>(EXT:SAVEINITMEM "executable.exe"
:QUIET t
:INIT-FUNCTION 'main
:EXECUTABLE t
:NORC t)
</code></pre>
<p>where main is the name of the function you want to call when the program launches, <code>:QUIET t</code> suppresses the startup banner, and <code>:EXECUTABLE t</code> makes a native executable. </p>
<p>It can also be useful to call </p>
<pre><code>(EXT:EXIT)
</code></pre>
<p>at the end of your main function in order to stop the user from getting an interactive lisp prompt when the program is done.</p>
<p>EDIT: Reading the documentation, you may also want to add <code>:NORC t</code>
(read <a href="http://clisp.cons.org/impnotes/image.html" rel="nofollow noreferrer">link</a>). This suppresses loading the RC file (for example, <code>~/.clisprc.lisp</code>).</p>
|
<p>I know this is an old question but the Lisp code I'm looking at is 25 years old :-)</p>
<p>I could not get compilation working with clisp on Windows 10.
However, it worked for me with <a href="https://www.cs.utexas.edu/users/novak/gclwin.html" rel="nofollow noreferrer">gcl</a>.</p>
<p>If my lisp file is jugs2.lisp,</p>
<pre><code>gcl -compile jugs2.lisp
</code></pre>
<p>This produces the file jugs2.o if the jugs2.lisp file has no errors.</p>
<p>Run gcl with no parameters to launch the lisp interpreter:</p>
<pre><code>gcl
</code></pre>
<p>Load the .o file:</p>
<pre><code>(load "jugs2.o")
</code></pre>
<p>To create an EXE:</p>
<pre><code>(si:save-system "jugs2")
</code></pre>
<p>When the EXE is run it needs the DLL <code>oncrpc.dll</code>; this is in the <code><gcl install folder>\lib\gcl-2.6.1\unixport</code> folder that gcl.bat creates.</p>
<p>When run, it shows a Lisp environment; call <code>(main)</code> to run the main function.</p>
| 4,470
|
<p>I wrote an Active X plugin for IE7 which implements IObjectWithSite besides some other necessary interfaces (note no IOleClient). This interface is queried and called by IE7. During the SetSite() call I retrieve a pointer to IE7's site interface which I can use to retrieve the IHTMLDocument2 interface using the following approach:</p>
<pre><code>IUnknown *site = pUnkSite; /* retrieved from IE7 during SetSite() call */
IServiceProvider *sp = NULL;
IHTMLWindow2 *win = NULL;
IHTMLDocument2 *doc = NULL;
if(site) {
site->QueryInterface(IID_IServiceProvider, (void **)&sp);
if(sp) {
sp->QueryService(IID_IHTMLWindow2, IID_IHTMLWindow2, (void **)&win);
if(win) {
win->get_document(&doc);
}
}
}
if(doc) {
/* found */
}
</code></pre>
<p>I tried a similar approach on PIE as well using the following code; however, even the IPIEHTMLWindow2 interface cannot be acquired, so I'm stuck:</p>
<pre><code>IUnknown *site = pUnkSite; /* retrieved from PIE during SetSite() call */
IPIEHTMLWindow2 *win = NULL;
IPIEHTMLDocument1 *tmp = NULL;
IPIEHTMLDocument2 *doc = NULL;
if(site) {
site->QueryInterface(__uuidof(*win), (void **)&win);
if(win) { /* never the case */
win->get_document(&tmp);
if(tmp) {
tmp->QueryInterface(__uuidof(*doc), (void **)&doc);
}
}
}
if(doc) {
/* found */
}
</code></pre>
<p>Using the IServiceProvider interface doesn't work either; I already tested this.</p>
<p>Any ideas?</p>
|
<p>I found the following code in the Google Gears code, <a href="http://code.google.com/p/gears/source/browse/trunk/gears/base/ie/activex_utils.cc?r=2157" rel="nofollow noreferrer">here</a>. I copied the functions I think you need to here. The one you need is at the bottom (GetHtmlWindow2), but the other two are needed as well. Hopefully I didn't miss anything, but if I did the stuff you need is probably at the link.</p>
<pre><code>#ifdef WINCE
// We can't get IWebBrowser2 for WinCE.
#else
HRESULT ActiveXUtils::GetWebBrowser2(IUnknown *site, IWebBrowser2 **browser2) {
CComQIPtr<IServiceProvider> service_provider = site;
if (!service_provider) { return E_FAIL; }
return service_provider->QueryService(SID_SWebBrowserApp,
IID_IWebBrowser2,
reinterpret_cast<void**>(browser2));
}
#endif
HRESULT ActiveXUtils::GetHtmlDocument2(IUnknown *site,
IHTMLDocument2 **document2) {
HRESULT hr;
#ifdef WINCE
// Follow path Window2 -> Window -> Document -> Document2
CComPtr<IPIEHTMLWindow2> window2;
hr = GetHtmlWindow2(site, &window2);
if (FAILED(hr) || !window2) { return false; }
CComQIPtr<IPIEHTMLWindow> window = window2;
CComPtr<IHTMLDocument> document;
hr = window->get_document(&document);
if (FAILED(hr) || !document) { return E_FAIL; }
return document->QueryInterface(__uuidof(*document2),
reinterpret_cast<void**>(document2));
#else
CComPtr<IWebBrowser2> web_browser2;
hr = GetWebBrowser2(site, &web_browser2);
if (FAILED(hr) || !web_browser2) { return E_FAIL; }
CComPtr<IDispatch> doc_dispatch;
hr = web_browser2->get_Document(&doc_dispatch);
if (FAILED(hr) || !doc_dispatch) { return E_FAIL; }
return doc_dispatch->QueryInterface(document2);
#endif
}
HRESULT ActiveXUtils::GetHtmlWindow2(IUnknown *site,
#ifdef WINCE
IPIEHTMLWindow2 **window2) {
// site is javascript IDispatch pointer.
return site->QueryInterface(__uuidof(*window2),
reinterpret_cast<void**>(window2));
#else
IHTMLWindow2 **window2) {
CComPtr<IHTMLDocument2> html_document2;
// To hook an event on a page's window object, follow the path
// IWebBrowser2->document->parentWindow->IHTMLWindow2
HRESULT hr = GetHtmlDocument2(site, &html_document2);
if (FAILED(hr) || !html_document2) { return E_FAIL; }
return html_document2->get_parentWindow(window2);
#endif
}
</code></pre>
|
<p>Well I was aware of the gears code already. The mechanism gears uses is based on a workaround through performing an explicit method call into the gears plugin from the gears loader to set the window object and use that as site interface instead of the IUnknown provided by IE Mobile in the SetSite call. Regarding to the gears code the Google engineers are aware of the same problem I'm asking and came up with this workaround I described.</p>
<p>However, I believe there must be another, more "official" way of dealing with this issue, since explicitly setting the site on an ActiveX control/plugin isn't very clean. I'm going to ask the MS IE Mobile team directly now and will keep you informed once I get a solution. It might be a bug in IE Mobile, which is the most likely explanation I can think of, but who knows...</p>
<p>But thanks anyways for your response ;))</p>
| 6,766
|
<p>Is there a standard way of dealing with globally scoped variables in PHP? Session scoped? </p>
<p>From the research I've done, it looks like the options are mostly add-ons or external. APC might work, but would be limited to a single PHP instance and not so useful for a farm of servers. <em>Memcached</em> seems like it would work, but I was hoping to find something within PHP. </p>
<p>Does its stateless approach keep there from being a standard method for handling this?</p>
|
<p>A persistence layer is the only way to go with PHP: either a file-based solution or a database.</p>
<p>PHP natively doesn't provide any mechanism for application-scoped variables.</p>
|
<p>You can do session variables with <a href="http://www.php.net/manual/en/reserved.variables.session.php" rel="nofollow noreferrer">$_SESSION</a>.</p>
| 7,344
|
<p>Does anyone have any experience with a good wpf pivot table control?</p>
|
<p><a href="http://www.devexpress.com/Products/NET/Controls/WPF/Grid/" rel="noreferrer">DevExpress Grid control</a> has good pivot functionality. Used it for Winforms a lot. I think the WPF version is also available.</p>
|
<p>I am a consultant at Infragistics and I've been working exclusively the last few weeks with the Infragistics XamPivotGrid. We've been working to make this control fast and memory efficient. As a user and a developer, I am highly impressed with the usability of this control. You can check out this control in the WPF and the Silverlight NetAdvantage Data Visualization products at www.infragistics.com!</p>
<p>:-)</p>
| 7,881
|
<p>We have some really old code that calls WebServices using behaviours (webservice.htc), and we are having some strange problems... since they've been deprecated a long time ago, I want to change the call.</p>
<p>What's the correct way of doing it? It's ASP.NET 1.1</p>
|
<p>You should be able to generate a proxy class using wsdl.exe. Then just use the web service as you normally would.</p>
|
<p>While I'm not 100% sure what the Web Service behavior does, I recall it allows client-side script to call Web Services, which would make AJAX its contemporary replacement.</p>
<p>Since you're using .NET 1.1 how about using <a href="http://www.ajaxpro.info" rel="nofollow noreferrer">Ajax.NET Professional</a> to connect to the web services?</p>
| 5,541
|
<p>I've long been under the impression that <code>goto</code> should never be used if possible.</p>
<p>However, while perusing libavcodec (which is written in C) the other day, I was surprised to notice multiple uses of it.</p>
<p>Is it ever advantageous to use <code>goto</code> in a language that supports loops and functions? If so, why? Please provide a concrete example that clearly justifies the use of a <code>goto</code>.</p>
|
<p>Everybody who is anti-<code>goto</code> cites, directly or indirectly, Edsger Dijkstra's <a href="http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html" rel="noreferrer">GoTo Considered Harmful</a> article to substantiate their position. Too bad Dijkstra's article has virtually <em>nothing</em> to do with the way <code>goto</code> statements are used these days and thus what the article says has little to no applicability to the modern programming scene. The <code>goto</code>-less meme verges now on a religion, right down to its scriptures dictated from on high, its high priests and the shunning (or worse) of perceived heretics.</p>
<p>Let's put Dijkstra's paper into context to shed a little light on the subject.</p>
<p>When Dijkstra wrote his paper the popular languages of the time were unstructured procedural ones like BASIC, FORTRAN (the earlier dialects) and various assembly languages. It was quite common for people using the higher-level languages to jump <em>all over their code base</em> in twisted, contorted threads of execution that gave rise to the term "spaghetti code". You can see this by hopping on over to <a href="http://web.archive.org/web/20150215080553/http://www.dunnington.u-net.com/public/startrek/STTR1" rel="noreferrer">the classic Trek game</a> written by Mike Mayfield and trying to figure out how things work. Take a few moments to look that over.</p>
<p><strong>THIS</strong> is "the unbridled use of the go to statement" that Dijkstra was railing against in his paper in 1968. <strong>THIS</strong> is the environment he lived in that led him to write that paper. The ability to jump anywhere you like in your code at any point you liked was what he was criticising and demanding be stopped. Comparing that to the anaemic powers of <code>goto</code> in C or other such more modern languages is simply risible.</p>
<p>I can already hear the raised chants of the cultists as they face the heretic. "But," they will chant, "you can make code very difficult to read with <code>goto</code> in C." Oh yeah? You can make code very difficult to read without <code>goto</code> as well. Like this one:</p>
<pre><code>#define _ -F<00||--F-OO--;
int F=00,OO=00;main(){F_OO();printf("%1.3f\n",4.*-F/OO/OO);}F_OO()
{
_-_-_-_
_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_
_-_-_-_
}
</code></pre>
<p>Not a <code>goto</code> in sight, so it must be easy to read, right? Or how about this one:</p>
<pre><code>a[900]; b;c;d=1 ;e=1;f; g;h;O; main(k,
l)char* *l;{g= atoi(* ++l); for(k=
0;k*k< g;b=k ++>>1) ;for(h= 0;h*h<=
g;++h); --h;c=( (h+=g>h *(h+1)) -1)>>1;
while(d <=g){ ++O;for (f=0;f< O&&d<=g
;++f)a[ b<<5|c] =d++,b+= e;for( f=0;f<O
&&d<=g; ++f)a[b <<5|c]= d++,c+= e;e= -e
;}for(c =0;c<h; ++c){ for(b=0 ;b<k;++
b){if(b <k/2)a[ b<<5|c] ^=a[(k -(b+1))
<<5|c]^= a[b<<5 |c]^=a[ (k-(b+1 ))<<5|c]
;printf( a[b<<5|c ]?"%-4d" :" " ,a[b<<5
|c]);} putchar( '\n');}} /*Mike Laman*/
</code></pre>
<p>No <code>goto</code> there either. It must therefore be readable.</p>
<p>What's my point with these examples? It's not language features that make unreadable, unmaintainable code. It's not syntax that does it. It's bad programmers that cause this. And bad programmers, as you can see in that above item, can make <strong>any</strong> language feature unreadable and unusable. Like the <code>for</code> loops up there. (You can see them, right?)</p>
<p>Now to be fair, some language constructs are easier to abuse than others. If you're a C programmer, however, I'd peer far more closely at about 50% of the uses of <code>#define</code> long before I'd go on a crusade against <code>goto</code>!</p>
<p>So, for those who've bothered to read this far, there are several key points to note.</p>
<ol>
<li>Dijkstra's paper on <code>goto</code> statements was written for a programming environment where <code>goto</code> was a <em>lot</em>
more potentially damaging than it is in most modern languages that aren't an assembler.</li>
<li>Automatically throwing away all uses of <code>goto</code> because of this is about as rational as saying "I tried
to have fun once but didn't like it so now I'm against it".</li>
<li>There are legitimate uses of the modern (anaemic) <code>goto</code> statements in code that cannot be adequately
replaced by other constructs.</li>
<li>There are, of course, illegitimate uses of the same statements.</li>
<li>There are, too, illegitimate uses of the modern control statements like the "<code>godo</code>" abomination where an always-false <code>do</code> loop is broken out of using <code>break</code> in place of a <code>goto</code>. These are often worse than judicious use of <code>goto</code>.</li>
</ol>
|
<p>The problem with <code>goto</code>, and the most important argument of the goto-less programming movement, is that if you use it too frequently, your code, although it might behave correctly, becomes unreadable, unmaintainable, unreviewable, etc. In 99.99% of cases <code>goto</code> leads to spaghetti code. Personally, I cannot think of any good reason to use <code>goto</code>.</p>
| 4,406
|
<p>I'm having a strange problem.</p>
<p>I have to use <code>GetPostBackEventReference</code> to force a postback, but it only works the first time; after the first postback, the .NET function is not rendered... any ideas?</p>
<p>This is what I'm missing after the postback:</p>
<pre><code><script language="javascript" type="text/javascript">
<!--
function __doPostBack(eventTarget, eventArgument) {
    var theform;
    if (window.navigator.appName.toLowerCase().indexOf("microsoft") > -1) {
        theform = document.Main;
    }
    else {
        theform = document.forms["Main"];
    }
    theform.__EVENTTARGET.value = eventTarget.split("$").join(":");
    theform.__EVENTARGUMENT.value = eventArgument;
    theform.submit();
}
// -->
</script>
</code></pre>
|
<p>The first thing I would look at is whether you have any ASP controls (such as a LinkButton or a ComboBox, which don't normally generate a submit but require a postback) being displayed on the page.
<strong>The __doPostBack function will only be put into the page if ASP.NET thinks that one of your controls requires it.</strong></p>
<p>If you aren't using one of those you can use: </p>
<pre><code>Page.ClientScript.GetPostBackClientHyperlink(controlName, "")
</code></pre>
<p>to add the function to your page</p>
|
<p>Well, following that idea I created a dummy function containing the postback reference, and it works... it's still weird, though, given that it rendered correctly the first time:</p>
<pre><code>this.Page.RegisterClientScriptBlock("DUMMY", "<script language='javascript'>function dummy() { " + this.Page.GetPostBackEventReference(this) + "; } </script>");
</code></pre>
| 7,340
|
<p>I have recently, in anticipation of printing carbon-fiber-reinforced nylon filament (Polymaker PA6-CF) on my Prusa i3 MK3S, built a filament drybox. It is made out of a large aluminum equipment enclosure which I had spare, every seam, rivet, hinge mount, foot screw, sealing lip, etc. of which has been sealed with copious amounts of high-quality silicone sealant (this is condensation-curing silicone, so it does release a small amount of moisture as it cures, but it has been in here several days and is about as cured as it gets now). The lip around the edge of the lid is clamped hard against a rubber sealing ring, which I have cleaned meticulously, when the lid is closed and latched. In short, it is, by 3D printer drybox standards, unusually well-sealed.</p>
<p><a href="https://i.stack.imgur.com/D7xhB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D7xhB.jpg" alt="The exterior of the box" /></a>
<a href="https://i.stack.imgur.com/xJpod.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xJpod.jpg" alt="Inside" /></a>
(yes, I know my silicone work is a mess, but let nobody say I missed a spot!)</p>
<p>For drying, I have poured two pounds of orange self-indicating silica gel, freshly regenerated, directly into the bottom of the box, exposing the maximum possible surface area (as opposed to having it in a container).</p>
<p>Humidity is monitored by a DHT22 temperature/humidity sensor, connected to an external microcontroller used for data reporting. The wire passthrough is a 1 mm diameter hole in a custom 3D-printed passthrough (held in by a printed nut) with 3 pieces of magnet wire running through it, sealed with glue from both ends; I really doubt it's leaking through this either. The datasheet for this sensor indicates it should be accurate even at low humidities: +/- ~2.5% RH over most of its range, never exceeding +/- 5%. Per the datasheet, its accuracy should far exceed that of the little battery-powered digital hygrometers many drybox builds seem to use.
<a href="https://i.stack.imgur.com/3ca5x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3ca5x.png" alt="enter image description here" /></a></p>
<p>Despite all of this, per the DHT22, it only reaches between 16.1 and 16.5% RH. As I understand it, a well-sealed drybox with well-exposed desiccant should be capable of better than this - I've seen sub-10% reported online. It's also worryingly close to the 20% maximum storage RH of the filament, per its TDS. What's going on here? My only real theories are:</p>
<ul>
<li>A lack of internal air circulation has resulted in stratification, with denser dry air close to the desiccant at the bottom and damper air at the top where the humidity sensor is located - this might be fixed with an internal circulation fan?</li>
<li>My sensor is not as accurate as its datasheet claims</li>
<li>The desiccant is somehow defective</li>
<li>My expectations are unrealistic and 16% RH is normal and acceptable performance for such a drybox</li>
</ul>
<p>Having tried most of the obvious things already (seal the box better, add more desiccant, increase desiccant surface area, let the box settle over days, install a high-quality sensor, ensure nothing inside the box is releasing absorbed moisture, seal the box even better), I'm beginning to expect it's the latter. Is it acceptable to remove my nylon from its sealed packaging and begin storing it in the drybox with this performance? Have others had success with nylon stored at 16% RH (as opposed to the sub-10% RH I've sometimes seen people talk about dryboxes reaching)? Are reported sub-10% readings for other dryboxes on the internet even accurate, or do the cheap little battery-powered hygrometers most DIY dryboxes use read inaccurately low at low humidities?</p>
|
<p>Track the humidity graph over days (one measurement every 3 hours is fine). Be careful: polling the sensor too often may warm it up and cause inaccuracies; the temperature dependence is quite strong. Also, you may need to poll twice at a 5 s interval and discard the first reading if you want more accurate values.</p>
<p>If the increase in humidity over a month is linear, the sensor is ok and you have a small leak. If the increase follows a non-linear curve, then the sensor is not reliable.</p>
<p>But I can tell you already: the sensor is <a href="https://www.kandrsmith.org/RJS/Misc/Hygrometers/calib_many.html" rel="nofollow noreferrer">not all that reliable</a>. Use a Bosch BME280, which is reliable. I use a Xiaomi temperature/humidity sensor and it is reliable down to 1% humidity, which I achieve after drying the silica. It then rises linearly to 10-12% over three months, at which point I dry the silica again. I use an IKEA Samla box with foam tape between the lid and box.</p>
<p>Also, silica poured loose like that does not expose the maximum surface: rather, make some pouches out of kitchen paper and stand them vertically, or the upper layers will shield the lower beads.</p>
<p>For info, as you may already know: nylon filament must be dried in an oven upon opening; it's not dry enough straight from the factory. Also, a drybox will not draw humidity out of filament in any reasonable time, so it's only useful for maintaining dryness.</p>
|
<p>Alright, there were some good suggestion in the comments, and I gave them a look, but I think I solved this myself.</p>
<p>I think, after a lot more testing, that it was some combination of stratification and a faulty sensor. I managed to find four more DHT22s with which I could test the box, and found that the sensor I had been using consistently read ~16% RH where others would read 6-9%. Additionally, I did some tests with the sensor at different heights in the box, which were somewhat inconclusive, so I attached a fan to the sensor's power to circulate the air inside the box, to avoid stratification if that was occurring.</p>
<p>Across my three best-agreeing sensors, with the fan (which I'm still not certain provides a major benefit, but certainly makes sense if stratification is assumed to be a problem), I'm now seeing values that seem to be around 6% to 8% RH depending on the sensor (this amount of difference between them is to be expected per the datasheet, 6% vs 16% is not), much more in line with the performance I would expect from what I have seen online. Time to finally unwrap the nylon, I guess.</p>
<p>I won't mark this answer as the solution until I see this performance maintained and get good nylon prints out of it, but for now, I think this is probably most of the way to an answer.</p>
| 2,120
|
<p>I suck at math. I need to figure out how to calculate a video duration with only a few examples of values. For example, a value of 70966 is displayed as 1:10 minutes. A value of 30533 displays as 30 seconds. A value of 7007 displays as 7 seconds. </p>
|
<p>Looks like the numbers are in milliseconds. So to convert to seconds, divide by 1000, then divide by 60 to find minutes etc.</p>
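<p>A quick sketch of that conversion in Python (the asker's language isn't stated, so treat this as illustration), checked against the sample values from the question:</p>

```python
def format_duration(ms):
    """Convert a millisecond count to an m:ss display string."""
    total_seconds = ms // 1000        # drop the millisecond remainder
    minutes, seconds = divmod(total_seconds, 60)
    return f"{minutes}:{seconds:02d}"

print(format_duration(70966))  # -> 1:10
print(format_duration(30533))  # -> 0:30
print(format_duration(7007))   # -> 0:07
```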
|
<p>I'm not sure if I completely understand this, but:</p>
<pre><code> 70966 / 70 seconds = 1013.8
</code></pre>
<p>So dividing the "value" by 1013.8 should get the duration, approximately...</p>
<p>Edit: Yes, Ben is right, you should divide by 1000. I got 1013.8 because the 70 seconds was rounded down from 70.966 seconds to 70.</p>
| 5,818
|
<p>What is the reason for the following warning in some C++ compilers?</p>
<blockquote>
<p>No newline at end of file</p>
</blockquote>
<p>Why should I have an empty line at the end of a source/header file?</p>
|
<p>Think of some of the problems that can occur if there is no newline. According to the ANSI standard, the <code>#include</code> of a file inserts that file's contents exactly as they are, and does not insert a newline after the included contents. So if you include a file with no newline at its end, the parser will see the last line of <code>foo.h</code> as being on the same line as the first line of <code>foo.cpp</code>. What if the last line of <code>foo.h</code> was a comment without a newline? Now the first line of <code>foo.cpp</code> is commented out. These are just a couple of examples of the types of problems that can creep up.</p>
<hr>
<p>Just wanted to point any interested parties to James' answer below. While the above answer is still correct for C, the new C++ standard (C++11) has been changed so that this warning should no longer be issued if using C++ and a compiler conforming to C++11.</p>
<p>From C++11 standard via James' post: </p>
<blockquote>
<p>A source file that is not empty and that does not end in a new-line character, or that ends in a new-line character immediately preceded by a backslash character before any such splicing takes place, shall be processed as if an additional new-line character were appended to the file (C++11 §2.2/1).</p>
</blockquote>
|
<p>That's not an error. It's just a warning.</p>
<p>Open the file in an editor, go to the last line of the file, and hit enter to add a blank line to the end of the file.</p>
<p>Though, besides that, you should be using <code>#include <iostream></code> instead of <code><iostream.h></code>. Then put in a <code>using std::cout;</code> after it.</p>
| 9,790
|
<p>Given a Python object of any kind, is there an easy way to get the list of all methods that this object has?</p>
<p>Or,</p>
<p>if this is not possible, is there at least an easy way to check if it has a particular method other than simply checking if an error occurs when the method is called?</p>
|
<p><strong>For many objects</strong>, you can use this code, replacing 'object' with the object you're interested in:</p>
<pre class="lang-py prettyprint-override"><code>object_methods = [method_name for method_name in dir(object)
                  if callable(getattr(object, method_name))]
</code></pre>
<p>I discovered it at <a href="https://web.archive.org/web/20180901124519/http://www.diveintopython.net/power_of_introspection/index.html" rel="noreferrer">diveintopython.net</a> (now archived), that should provide some further details!</p>
<p><strong>If you get an <code>AttributeError</code>, you can use this instead</strong>:</p>
<p><code>getattr()</code> is intolerant of pandas-style Python 3.6 abstract virtual sub-classes. This code does the same as above but ignores exceptions.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame([[10, 20, 30], [100, 200, 300]],
                  columns=['foo', 'bar', 'baz'])

def get_methods(object, spacing=20):
    methodList = []
    for method_name in dir(object):
        try:
            if callable(getattr(object, method_name)):
                methodList.append(str(method_name))
        except Exception:
            methodList.append(str(method_name))
    processFunc = lambda s: ' '.join(s.split())  # collapse whitespace in docstrings
    for method in methodList:
        try:
            print(str(method.ljust(spacing)) + ' ' +
                  processFunc(str(getattr(object, method).__doc__)[0:90]))
        except Exception:
            print(method.ljust(spacing) + ' ' + ' getattr() failed')

get_methods(df['foo'])
</code></pre>
|
<p>You can make use of dir() which is pre-defined in Python.</p>
<pre><code>import module_name
dir(module_name)
</code></pre>
<p>You can also pass an object to dir() as</p>
<pre><code>dir(object_name)
</code></pre>
<p>If the object is an instance of a pre-defined class such as <code>int</code> or <code>str</code>, it displays the methods of that class (you may know those methods as built-in functions). If the object is an instance of a user-defined class, it displays all the methods defined in that class.</p>
| 5,468
|
<p><a href="https://stackoverflow.com/questions/32149/does-anyone-have-a-good-proper-case-algorithm">Using this question</a> as the base is there an alogrithm or coding example to change some text to Pascal or Camel casing.</p>
<p>For example:</p>
<pre><code>mynameisfred
</code></pre>
<p>becomes</p>
<pre><code>Camel: myNameIsFred
Pascal: MyNameIsFred
</code></pre>
|
<p>I found a thread with a bunch of Perl guys arguing the toss on this question over at <a href="http://www.perlmonks.org/?node_id=336331" rel="nofollow noreferrer">http://www.perlmonks.org/?node_id=336331</a>.</p>
<p>I hope this isn't too much of a non-answer to the question, but I would say you have a bit of a problem in that it would be a very open-ended algorithm which could have a lot of 'misses' as well as hits. For example, say you inputted:-</p>
<pre><code>camelCase("hithisisatest");
</code></pre>
<p>The output could be:-</p>
<pre><code>"hiThisIsATest"
</code></pre>
<p>Or:-</p>
<pre><code>"hitHisIsATest"
</code></pre>
<p>There's no way the algorithm would know which to prefer. You could add some extra code to specify that you'd prefer more common words, but again misses would occur (Peter Norvig wrote a very small spelling corrector over at <a href="http://norvig.com/spell-correct.html" rel="nofollow noreferrer">http://norvig.com/spell-correct.html</a> which <em>might</em> help algorithm-wise, I wrote a <a href="http://web.archive.org/web/20080930045207/http://www.codegrunt.co.uk/code/cs/spellcorrect/spell-correct.cs" rel="nofollow noreferrer">C# implementation</a> if C#'s your language).</p>
<p>I'd agree with Mark and say you'd be better off having an algorithm that takes a delimited input, i.e. this_is_a_test and converts that. That'd be simple to implement, i.e. in pseudocode:-</p>
<pre><code>SetPhraseCase(phrase, CamelOrPascal):
    if no delimiters
        if camelCase
            return lowerFirstLetter(phrase)
        else
            return capitaliseFirstLetter(phrase)

    words = splitOnDelimiter(phrase)
    if camelCase
        ret = lowerFirstLetter(first word)
    else
        ret = capitaliseFirstLetter(first word)
    for i in 2 to len(words): ret += capitaliseFirstLetter(words[i])
    return ret

capitaliseFirstLetter(word):
    if len(word) <= 1 return upper(word)
    return upper(word[0]) + word[1..len(word)]

lowerFirstLetter(word):
    if len(word) <= 1 return lower(word)
    return lower(word[0]) + word[1..len(word)]
</code></pre>
<p>You could also replace my capitaliseFirstLetter() function with a proper case algorithm if you so wished.</p>
<p>A C# implementation of the above described algorithm is as follows (complete console program with test harness):-</p>
<pre><code>using System;

class Program {
    static void Main(string[] args) {
        var caseAlgorithm = new CaseAlgorithm('_');

        while (true) {
            string input = Console.ReadLine();
            if (string.IsNullOrEmpty(input)) return;

            Console.WriteLine("Input '{0}' in camel case: '{1}', pascal case: '{2}'",
                input,
                caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.CamelCase),
                caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.PascalCase));
        }
    }
}

public class CaseAlgorithm {
    public enum CaseMode { PascalCase, CamelCase }

    private char delimiterChar;

    public CaseAlgorithm(char inDelimiterChar) {
        delimiterChar = inDelimiterChar;
    }

    public string SetPhraseCase(string phrase, CaseMode caseMode) {
        // You might want to do some sanity checks here like making sure
        // there's no invalid characters, etc.
        if (string.IsNullOrEmpty(phrase)) return phrase;

        // .Split() will simply return a string[] of size 1 if no delimiter present so
        // no need to explicitly check this.
        var words = phrase.Split(delimiterChar);

        // Set first word accordingly.
        string ret = setWordCase(words[0], caseMode);

        // If there are other words, set them all to pascal case.
        if (words.Length > 1) {
            for (int i = 1; i < words.Length; ++i)
                ret += setWordCase(words[i], CaseMode.PascalCase);
        }

        return ret;
    }

    private string setWordCase(string word, CaseMode caseMode) {
        switch (caseMode) {
            case CaseMode.CamelCase:
                return lowerFirstLetter(word);
            case CaseMode.PascalCase:
                return capitaliseFirstLetter(word);
            default:
                throw new NotImplementedException(
                    string.Format("Case mode '{0}' is not recognised.", caseMode.ToString()));
        }
    }

    private string lowerFirstLetter(string word) {
        return char.ToLower(word[0]) + word.Substring(1);
    }

    private string capitaliseFirstLetter(string word) {
        return char.ToUpper(word[0]) + word.Substring(1);
    }
}
</code></pre>
|
<p>The only way to do that would be to run each section of the word through a dictionary.</p>
<p>"mynameisfred" is just an array of characters, splitting it up into my Name Is Fred means understanding what the joining of each of those characters means.</p>
<p>You could do it easily if your input was separated in some way, e.g. "my name is fred" or "my_name_is_fred".</p>
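<p>For illustration, the delimited case is simple enough to sketch in a few lines of Python (function names are my own, not from any library):</p>

```python
def to_camel(phrase, delimiter="_"):
    """my_name_is_fred -> myNameIsFred"""
    words = phrase.split(delimiter)
    # note: str.capitalize() also lowercases the rest of each word
    return words[0].lower() + "".join(w.capitalize() for w in words[1:])

def to_pascal(phrase, delimiter="_"):
    """my_name_is_fred -> MyNameIsFred"""
    return "".join(w.capitalize() for w in phrase.split(delimiter))

print(to_camel("my_name_is_fred"))   # -> myNameIsFred
print(to_pascal("my_name_is_fred"))  # -> MyNameIsFred
```

<p>With no delimiter at all, these simply return the input unchanged (bar the first letter), which is exactly the problem described above: without a dictionary there is no way to find the word boundaries in "mynameisfred".</p>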
| 5,216
|
<p>I have a database with a few dozen tables interlinked with foreign keys. Under normal circumstances, I want the default <code>ON DELETE RESTRICT</code> behavior for those constraints. But when trying to share a snapshot of the database with a consultant, I needed to remove some sensitive data. I wish that my memory of a <code>DELETE FROM Table CASCADE</code> command hadn't been pure hallucination.</p>
<p>What I ended out doing was dumping the database, writing a script to process the dump by adding <code>ON DELETE CASCADE</code> clauses too all the foreign key constraints, restoring from that, performing my deletes, dumping again, removing the <code>ON DELETE CASCADE</code>, and finally restoring again. That was easier than writing the deletion query I'd have needed to do this in SQL -- removing whole slices of the database isn't a normal operation, so the schema isn't exactly adapted to it.</p>
<p>Does anyone have a better solution for the next time something like this comes up?</p>
|
<p>You do not need to dump and restore. You should be able to just drop the constraint, rebuild it with cascade, do your deletes, drop it again, and the rebuild it with restrict.</p>
<pre><code>CREATE TABLE "header"
(
    header_id serial NOT NULL,
    CONSTRAINT header_pkey PRIMARY KEY (header_id)
);

CREATE TABLE detail
(
    header_id integer,
    stuff text,
    CONSTRAINT detail_header_id_fkey FOREIGN KEY (header_id)
        REFERENCES "header" (header_id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION
);

insert into header values(1);
insert into detail values(1,'stuff');

delete from header where header_id=1;

alter table detail drop constraint detail_header_id_fkey;
alter table detail add constraint detail_header_id_fkey FOREIGN KEY (header_id)
    REFERENCES "header" (header_id) on delete cascade;

delete from header where header_id=1;

alter table detail add constraint detail_header_id_fkey FOREIGN KEY (header_id)
    REFERENCES "header" (header_id) on delete restrict;
</code></pre>
|
<p>You may want to look into using <a href="http://www.postgresql.org/docs/8.2/static/ddl-schemas.html" rel="nofollow noreferrer">schemas</a> with PostgreSQL. I've done this in past projects to allow different groups of people or developers to have their own data. Then you can use your scripts to create multiple copies of your database for just such situations.</p>
| 6,754
|
<p>SVN externals allow you to make an SVN folder appear as if it's at another location. A good use for this is having a common folder shared across all of your projects in SVN.</p>
<p>I have a /trunk/common folder in SVN that I share via several different project.</p>
<p>Example:</p>
<ul>
<li>Project1 : /trunk/project1/depends</li>
<li>Project2 : /trunk/project2/depends</li>
<li>Project3 : /trunk/project3/depends</li>
<li>Project4 : /trunk/project4/depends</li>
</ul>
<p>Each of these depends folders is empty but has an svn:external defined to point to my /trunk/common folder.</p>
<p>The problem is that when I view the log within any of the projects (/trunk/projectX/), it does not show changes from the svn:externals. I am using TortoiseSVN as my SVN client.</p>
<p>Does anyone know how to change this behavior? I would like for the show log of /trunk/projectX to include any changes to any defined svn:externals as well.</p>
|
<p>This is not possible with the current release of Subversion, other than by explicitly calling svn log on the target of the externals directory.</p>
<p>You can try issuing a feature request at <a href="http://subversion.apache.org/" rel="nofollow noreferrer">the Apache Subversion website</a>.</p>
|
<p>When you display the log for a local versioned folder, it will show the changes that are relative to this particular folder. Externals are only <strong>a link</strong> to a different folder on the repository. The only thing you can track about external references, from a folder which depends on this external project, is the <strong>reference definition itself</strong>. That is because the reference is a subversion property of the dependent folder.</p>
<p>Imagine you have the following repo hierarchy :</p>
<pre><code>repo
  myfirstproject
    trunk
  mysecondproject
    trunk
  mycommonlib
    trunk
</code></pre>
<p>and that <code>mysecondproject\trunk</code> folder has the following <em>svn:external</em> property :</p>
<pre><code>svn://mysrv/repo/mysharedlib@2451 sharedlib
</code></pre>
<p>A checkout of <em>mysecondproject\trunk</em> inside a new folder <em>secondproject</em> will create something like this on your file system :</p>
<pre><code>secondproject    Folder (refers mysecondproject/trunk)
  sharedlib      Folder (refers mycommonlib/trunk @ revision #2451)
</code></pre>
<p>Calling "Show log" command of Tortoise from <em>secondproject</em> folder will only show <em>secondproject</em> files changes, and eventually changes that occurred on the <em>svn:external</em> property of the folder.</p>
<p>To get change log of the external project, you need to call "Show log" from the inner folder <em>sharedlib</em>, which makes sense.</p>
| 8,822
|
<p>Everyone I talk to who knows (knew) about it claims it was the greatest thing since sliced bread. Why did it fail? Or, if it didn't fail, who's using it now?</p>
|
<p>Check out <a href="http://www.gigaspaces.com" rel="noreferrer">GigaSpaces</a>. It's a quite successful Jini/Javaspaces implementation.</p>
<p>I think Jini has a great model, but it is stuck with Java. Web services are more appealing because they work with standardized protocols, even though Jini service discovery is more natural.</p>
|
<p>Zeroconf and other discovery protocols are similarly referred to as the greatest thing since sliced bread; it's just that the flavor keeps changing.</p>
| 6,661
|
<p>My team is working on a crappy old website and most of the pages are still ASP classic. However, we've recently migrated to forms authentication using ASP.NET and wildcard mapping. Everything works surprisingly well except for one thing: logged in users are timing out too quickly. After looking in the logs it appears people are timing out exactly after 20 minutes (which is the specified timeout due to inactivity). </p>
<p>So, our hypothesis is that the ASP classic pages are not tripping whatever mechanism in the forms authentication framework that resets the inactivity timer. I've googled around and even read the wildcard mapping post by the Great Gu but still can't find anyone else who is having this problem. So, 1) Have you ever seen this problem? and 2) What's the best workaround? (other than manually placing a hidden frame in every janky ASP page that loads a dumb .NET page in the background)</p>
<p><strong>Update</strong>: <em>slidingExpiration</em> is set to <em>true</em> </p>
<p><strong>Also</strong>: We can't use perpetual sessions because we need the application to time out after 20 minutes of inactivity. Also, this terrible site was written so that the interface is usually stored in the page. There's no simple piece of interface code I could slip the JavaScript into. We tried to put some js into an include file that was called by about 80% of our pages but it's caused some esoteric problems with file download buffers so we may have to try a different tack. Thanks. </p>
|
<p><strong>Create a perpetual session.</strong></p>
<p>Essentially you end up emitting some JavaScript and an image tag in your master page or navigation user controls (whatever you're using for consistent navigation). This JavaScript, on some interval, changes the source of the image tag to an HTTP handler endpoint (some .aspx or .ashx) which returns a 1x1 px clear GIF as the response for the image. The constant requests ensure that idle pages will keep the session alive.</p>
<p><strong>As long as a browser window is open to your page your ASP.NET session will never time out.</strong></p>
<p>Often the JavaScript will tack on a random number to the request so that the browser doesn't cache the request.</p>
<p>A decent walkthrough is available <a href="http://www.codeproject.com/KB/aspnet/SessionForever.aspx" rel="nofollow noreferrer">here</a>.</p>
|
<p>I am assuming that you have manually created the cookie, in which case your timeout value in code is probably overriding your timeout value in the configuration.</p>
<p>First, if possible (which it probably isn't) don't create the cookie manually, it will save you from not only this headache but dozens of others.</p>
<p>If you must manually create the cookie, make sure that the timeout you are using is actually reading the timeout value that you have set in the configuration file and that sliding expiration is set to true (which you have said it was).</p>
<p>That said, we still have occasional strange timeout problems when the cookies are manually created. Where I work we implemented a solution which allowed the cookies to be created automatically and timeouts were no longer a problem; however, it did create other issues and we were forced to switch back.</p>
| 8,211
|
<p>We develop custom survey web sites and I am looking for a way to automate the pattern testing of these sites. Surveys often contain many complex rules and branches which are triggered on how items are responded too. All surveys are rigorously tested before being released to clients. This testing results in a lot of manual work. I would like to learn of some options I could use to automate these tests by responding to questions and verifying the results in the database. The survey sites are produced by an engine which creates and writes asp pages and receives the responses to process into a database. So the only way I can determine to test the site is to interact with the web pages themselves. I guess in a way I need to build some type of bot; I really don't know much about the design behind them.</p>
<p>Could someone please provide some suggestions on how to achieve this? Thank you for your time.</p>
<p>Brett</p>
|
<p>Check out selenium: <a href="http://selenium.openqa.org/" rel="nofollow noreferrer">http://selenium.openqa.org/</a></p>
<p>Also, check out the answers to this other question: <a href="https://stackoverflow.com/questions/484/how-do-you-test-layout-design-across-multiple-browsersoss">https://stackoverflow.com/questions/484/how-do-you-test-layout-design-across-multiple-browsersoss</a></p>
|
<p>Sounds like your engine could generate a test script using something like <a href="http://search.cpan.org/~petdance/Test-WWW-Mechanize-1.20/Mechanize.pm" rel="nofollow noreferrer">Test::WWW::Mechanize</a></p>
| 9,476
|
<p>I just bought a new RAMPS 1.6 shield to replace my old RAMPS 1.4 shield. The problem is that it's just not working, the motors don't move, heatbed/nozzle don't get heated. It was all working with the RAMPS 1.4 shield.</p>
<p>I checked the RAMPS 1.6 and there's no bridging in the solder joints.</p>
<p>The DRV8825 is also drawing current, but the motor doesn't move.</p>
<p>I am using Repetier firmware. I also tried Marlin (default configuration), and even that doesn't work.</p>
|
<p>IMHO, it's probably better to get the printer as-is first. This way you can get familiar with the printer and 3D printing in general before you get overwhelmed with all the extra introduced variables from your upgrades.</p>
<p>The new model already has loads of features that were considered upgrades on the older model and you can already get very nice results.</p>
<p>Upgrading your printer is a part of the 3D printing hobby, and you'll probably never be done with it :)</p>
|
<p>I have an Ender 3 V2 and I haven't upgraded anything except updating the firmware. The first things I'd suggest are updating the firmware and changing the hotend to an all-metal one, which will let you print at higher temperatures. Adding a BLTouch for automatic bed leveling will also make your life a bit easier.</p>
| 1,737
|
<p>I have been working with a <code>string[]</code> array in C# that gets returned from a function call. I could possibly cast to a <code>Generic</code> collection, but I was wondering if there was a better way to do it, possibly by using a temp array.</p>
<p><strong>What is the best way to remove duplicates from a C# array?</strong></p>
|
<p>You could possibly use a LINQ query to do this:</p>
<pre><code>int[] s = { 1, 2, 3, 3, 4};
int[] q = s.Distinct().ToArray();
</code></pre>
|
<pre><code>using System;
using System.Collections.Generic;

namespace Rextester
{
    public class Program
    {
        public static void Main(string[] args)
        {
            List<int> listofint1 = new List<int> { 4, 8, 4, 1, 1, 4, 8 };
            List<int> updatedlist = removeduplicate(listofint1);
            foreach (int num in updatedlist)
                Console.WriteLine(num);
        }

        public static List<int> removeduplicate(List<int> listofint)
        {
            List<int> listofintwithoutduplicate = new List<int>();
            foreach (var num in listofint)
            {
                if (!listofintwithoutduplicate.Contains(num))
                {
                    listofintwithoutduplicate.Add(num);
                }
            }
            return listofintwithoutduplicate;
        }
    }
}
</code></pre>
| 3,106
|
<p>I am working on a project that requires the manipulation of enormous matrices, specifically pyramidal summation for a copula calculation. </p>
<p>In short, I need to keep track of a relatively small number of values (usually a value of 1, and in rare cases more than 1) in a sea of zeros in the matrix (multidimensional array). </p>
<p>A sparse array allows the user to store a small number of values, and assume all undefined records to be a preset value. Since it is not physically possible to store all values in memory, I need to store only the few non-zero elements. This could be several million entries.</p>
<p>Speed is a huge priority, and I would also like to dynamically choose the number of variables in the class at runtime.</p>
<p>I currently work on a system that uses a binary search tree (b-tree) to store entries. Does anyone know of a better system?</p>
|
<p>For C++, a map works well. Several million objects won't be a problem. 10 million items took about 4.4 seconds and about 57 meg on my computer.</p>
<p>My test application is as follows:</p>
<pre><code>#include <stdio.h>
#include <stdlib.h>
#include <map>

class triple {
public:
    int x;
    int y;
    int z;
    bool operator<(const triple &other) const {
        if (x < other.x) return true;
        if (other.x < x) return false;
        if (y < other.y) return true;
        if (other.y < y) return false;
        return z < other.z;
    }
};

int main(int, char**)
{
    std::map<triple,int> data;
    triple point;
    int i;

    for (i = 0; i < 10000000; ++i) {
        point.x = rand();
        point.y = rand();
        point.z = rand();
        //printf("%d %d %d %d\n", i, point.x, point.y, point.z);
        data[point] = i;
    }
    return 0;
}
</code></pre>
<p>Now to dynamically choose the number of variables, the easiest solution is to represent <strong>index as a string</strong>, and then use string as a key for the map. For instance, an item located at [23][55] can be represented via "23,55" string. We can also extend this solution for higher dimensions; such as for three dimensions an arbitrary index will look like "34,45,56". A simple implementation of this technique is as follows:</p>
<pre><code>std::map<std::string, int> data;
char ix[100];
sprintf(ix, "%d,%d", x, y); // 2 vars
data[ix] = i;
sprintf(ix, "%d,%d,%d", x, y, z); // 3 vars
data[ix] = i;
</code></pre>
|
<p>Since only the entries at [a][b][c]...[w][x][y][z] are of consequence, we can store just the indices themselves, not the value 1, which appears almost everywhere, is always the same, and gives us nothing to hash. Noting that the curse of dimensionality is present, I suggest going with an established tool such as NIST or Boost, or at least reading their sources, to circumvent needless blunders.</p>
<p>If the work needs to capture the temporal dependence distributions and parametric tendencies of unknown data sets, then a map or B-tree with a single-valued root is probably not practical. We can store only the indices themselves for all the 1 values, hashed if ordering (sensibility for presentation) can be subordinated to reducing run time. Since non-zero values other than one are few, an obvious candidate for those is whatever data structure you can find readily and understand. If the data set is truly vast, I suggest some sort of sliding window that manages file/disk/persistent I/O yourself, moving portions of the data into scope as needed, with code that you can understand. If you are committed to providing an actual solution to a working group, failing to do so leaves you at the mercy of consumer-grade operating systems.</p>
| 2,632
|
<p>I have a (potentially dumb) question about the C++ STL. When I make a container (vector, set, map, etc), is it allocated on the stack or on the heap? If I make a set and put 5 million strings, will I have to worry about a stack overflow?</p>
|
<p>STL classes by default allocate their internal buffers from the heap, although these classes also allow custom allocators that allow a user to specify an alternate location to allocate from - e.g. a shared memory pool.</p>
|
<p>The container itself is allocated where you decide (it can be the stack, the heap, an object's member, etc) but the memory it uses is, by default, as others described, taken on the Free Store (managed through new and delete) which is not the same as the heap (managed through malloc/free).</p>
<p><a href="http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.3" rel="nofollow noreferrer">Don't mix the two!</a></p>
| 5,335
|
<p>I have a list of bean objects passed into my JSP page, and one of them is a comment field. This field may contain newlines, and I want to replace them with semicolons using JSTL, so that the field can be displayed in a text input. I have found one solution, but it's not very elegant. I'll post below as a possibility.</p>
|
<p>Here is a solution I found. It doesn't seem very elegant, though:</p>
<pre><code><%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<% pageContext.setAttribute("newLineChar", "\n"); %>
${fn:replace(item.comments, newLineChar, "; ")}
</code></pre>
|
<p>You could write your own JSP function to do the replacement.</p>
<p>This means you'd end up with something like:</p>
<pre><code><%@ taglib prefix="ns" uri="..." %>
...
${ns:replace(data)}
</code></pre>
<p>Where <code>ns</code> is a namespace prefix you define and <code>replace</code> is your JSP function.</p>
<p>These functions are pretty easy to implement (they're just a static method) although I can't seem to find a good reference for writing these at the moment.</p>
| 8,250
|
<p>I'm using SQL Server and trying to update rows from within the same table. I want to use a table alias for readability.
This is the way I am doing it at the moment:</p>
<pre><code>UPDATE ra
SET ra.ItemValue = rb.ItemValue
FROM dbo.Rates ra, dbo.Rates rb
WHERE ra.ResourceID = rb.ResourceID
AND ra.PriceSched = 't8'
AND rb.PriceSched = 't9'
</code></pre>
<p>Are there easier / better ways?</p>
|
<pre><code>UPDATE ra
SET ra.ItemValue = rb.ItemValue
FROM dbo.Rates ra
INNER JOIN dbo.Rates rb
ON ra.ResourceID = rb.ResourceID
WHERE ra.PriceSched = 't8'
AND rb.PriceSched = 't9';
</code></pre>
<p>This might help in improving performance.</p>
|
<p><strong>Table aliases also work in an UPDATE query in T-SQL (Microsoft SQL); on MS SQL Server 2008 R2 the following works just fine:</strong></p>
<pre><code>UPDATE A_GeneralLedger set ScheduleId=g.ScheduleId
from A_GeneralLedger l inner join A_AcGroup g on g.ACGroupID=l.AccountGroupID
</code></pre>
| 5,149
|
<p>I've seen people doing things like this in their HTML:</p>
<pre><code><!--[if IE]>
<link rel="stylesheet" href="ie.css" type="text/css" />
<![endif]-->
</code></pre>
<p>Does this work across all modern browsers and is there a list of browser types that will work with that kind of if statement?</p>
<p><strong>Edit</strong></p>
<p>Thanks <a href="https://stackoverflow.com/questions/46124/is-there-a-list-of-browser-conditionals-for-use-including-stylesheets#46126">Ross</a>. Interesting to find out about <strong>gt, lt, gte, & lte</strong>.</p>
|
<p>This works across all browsers because anything except IE sees <code><!--IGNORED COMMENT--></code>. Only IE reads the comment if it contains a conditional clause. Have a look at <a href="http://www.quirksmode.org/css/condcom.html" rel="noreferrer">this article</a></p>
<p>You can also specify which version of IE. For example:</p>
<pre><code><!--[if IE 8]>
<link rel="stylesheet" type="text/css" href="ie8.css" />
<![endif]-->
</code></pre>
|
<p>Further to Ross' answer, you can only target the Internet Explorer rendering engine with conditional comments; there is no similar construct for other browsers. For example, you can't write conditional comments that target Firefox, but are ignored by Internet Explorer. </p>
<p>The way I achieve the same effect as your example above is to sniff the user agent string. I then deliver a suitable CSS file for that browser. This isn't perfect because sometimes people change their user-agent string for compatibility. </p>
<p>The other way to target different browsers is to utilise browser specific hacks. These are particularly nasty because they usually rely on bugs in the browser and bugs are liable to be fixed!</p>
<p>User-agent sniffing is the best all-round solution in my opinion.</p>
| 6,814
|
<p>I'm writing an asp.net application that will need to be localized to several regions other than North America. What do I need to do to prepare for this globalization? What are your top 1 to 2 resources for learning how to write a world ready application.</p>
|
<p>A couple of things that I've learned:</p>
<ul>
<li><p>Absolutely and brutally minimize the number of images you have that contain text. Doing so will make your life a billion percent easier since you won't have to get a new set of images for every friggin' language.</p></li>
<li><p>Be very wary of css positioning that relies on things always remaining the same size. If those things contain text, they will <strong>not</strong> remain the same size, and you will then need to go back and fix your designs.</p></li>
<li><p>If you use character types in your sql tables, make sure that any of those that might receive international input are unicode (nchar, nvarchar, ntext). For that matter, I would just standardize on using the unicode versions.</p></li>
<li><p>If you're building SQL queries dynamically, make sure that you include the N prefix before any quoted text if there's any chance that text might be unicode. If you end up putting garbage in a SQL table, check to see if that's there.</p></li>
<li><p>Make sure that all your web pages definitively state that they are in a unicode format. See Joel's article, mentioned above.</p></li>
<li><p>You're going to be using resource files a lot for this project. That's good - ASP.NET 2.0 has great support for such. You'll want to look into the App_LocalResources and App_GlobalResources folder as well as GetLocalResourceObject, GetGlobalResourceObject, and the concept of meta:resourceKey. Chapter 30 of <a href="https://rads.stackoverflow.com/amzn/click/com/0470041781" rel="nofollow noreferrer">Professional ASP.NET 2.0</a> has some great content regarding that. The 3.5 version of the book may well have good content there as well, but I don't own it.</p></li>
<li><p>Think about fonts. Many of the standard fonts you might want to use aren't unicode capable. I've always had luck with Arial Unicode MS, MS Gothic, MS Mincho. I'm not sure about how cross-platform these are, though. Also, note that not all fonts support all of the Unicode character definition. Again, test, test, test.</p></li>
<li><p>Start thinking now about how you're going to get translations into this system. Go talk to whoever is your translation vendor about how they want data passed back and forth for translation. Think about the fact that, through your local resource files, you will likely be repeating some commonly used strings through the system. Do you normalize those into global resource files, or do you have some sort of database layer where only one copy of each text used is generated. In our recent project, we used resource files which were generated from a database table that contained all the translations and the original, english version of the resource files. </p></li>
<li><p>Test. Generally speaking I will test in German, Polish, and an Asian language (Japanese, Chinese, Korean). German and Polish are wordy and nearly guaranteed to stretch text areas, Asian languages use an entirely different set of characters which tests your unicode support.</p></li>
</ul>
|
<p>I would suggest:</p>
<ol>
<li>Put all strings in either the database or resource files. </li>
<li>Allow extra space for translated text, as some (e.g. German) are wordier.</li>
</ol>
| 8,362
|
<p>I have a view that has a list of jobs in it, with data like who they're assigned to and the stage they are in. I need to write a stored procedure that returns how many jobs each person has at each stage.</p>
<p>So far I have this (simplified):</p>
<pre><code>DECLARE @ResultTable table
(
StaffName nvarchar(100),
Stage1Count int,
Stage2Count int
)
INSERT INTO @ResultTable (StaffName, Stage1Count)
SELECT StaffName, COUNT(*) FROM ViewJob
WHERE InStage1 = 1
GROUP BY StaffName
INSERT INTO @ResultTable (StaffName, Stage2Count)
SELECT StaffName, COUNT(*) FROM ViewJob
WHERE InStage2 = 1
GROUP BY StaffName
</code></pre>
<p>The problem with that is that the rows don't combine. So if a staff member has jobs in stage1 and stage2 there's two rows in @ResultTable. What I would really like to do is to update the row if one exists for the staff member and insert a new row if one doesn't exist.</p>
<p>Does anyone know how to do this, or can suggest a different approach?
I would really like to avoid using cursors to iterate on the list of users (but that's my fall back option).</p>
<p>I'm using SQL Server 2005.</p>
<p><strong>Edit: @Lee:</strong> Unfortunately the InStage1 = 1 was a simplification. It's really more like WHERE DateStarted IS NOT NULL and DateFinished IS NULL.</p>
<p><strong>Edit: @BCS:</strong> I like the idea of doing an insert of all the staff first so I just have to do an update every time. But I'm struggling to get those UPDATE statements correct.</p>
|
<p>You could just check for existence and use the appropriate command. I believe this really does use a cursor behind the scenes, but it's the best you'll likely get: </p>
<pre><code>IF (EXISTS (SELECT * FROM MyTable WHERE StaffName = @StaffName))
begin
UPDATE MyTable SET ... WHERE StaffName = @StaffName
end
else
begin
INSERT MyTable ...
end
</code></pre>
<p>SQL2008 has a new MERGE capability which is cool, but it's not in 2005.</p>
|
<p>The following query on your result table should combine the rows again. This is assuming that InStage1 and InStage2 are never both '1'.</p>
<pre><code>select distinct rt1.StaffName, rt2.Stage1Count, rt3.Stage2Count
from @ResultTable rt1
left join @ResultTable rt2 on rt1.StaffName=rt2.StaffName and rt2.Stage1Count is not null
left join @ResultTable rt3 on rt1.StaffName=rt3.StaffName and rt3.Stage2Count is not null
</code></pre>
| 3,957
|
<p>I have a page that is generated which inserts an HTML comment near the top of the page. Inside the comment is a *nix-style command.</p>
<pre><code><!-- command --option value --option2 value2 --option3 -->
</code></pre>
<p>This comment breaks the page completely. What is wrong with the comment to cause this to happen, and why is this the case?</p>
|
<p>Comments in the <a href="http://www.w3.org/TR/REC-xml/#sec-comments" rel="noreferrer">XML Spec from the w3.org</a> :</p>
<blockquote>
<p>For compatibility, the string "--"
(double-hyphen) MUST NOT occur within
comments.</p>
</blockquote>
|
<p>Comments at the top of the page before <code><html></code> will throw IE into quirks mode, which could explain why the page breaks, if that's where your comment appears.</p>
<p>For more information, check out the "Triggering different rendering modes" on <a href="http://en.wikipedia.org/wiki/Quirks_mode" rel="nofollow noreferrer">this wikipedia page</a></p>
| 2,733
|
<p>I can do an <code>eval("something()");</code> to execute the code dynamically in JavaScript. Is there a way for me to do the same thing in C#?</p>
<p>An example of what I am trying to do is: I have an integer variable (say <code>i</code>) and I have multiple properties by the names: "Property1", "Property2", "Property3", etc.
Now, I want to perform some operations on the " Property<em>i</em> " property depending on the value of <code>i</code>. </p>
<p>This is really simple with Javascript. Is there any way to do this with C#?</p>
|
<p><strong>DISCLAIMER:</strong> This answer was written back in 2008. The landscape has changed drastically since then.</p>
<p>Look at the other answers on this page, especially the one detailing <code>Microsoft.CodeAnalysis.CSharp.Scripting</code>.</p>
<p>Rest of answer will be left as it was originally posted but is no longer accurate.</p>
<hr />
<p>Unfortunately, C# isn't a dynamic language like that.</p>
<p>What you can do, however, is to create a C# source code file, full with class and everything, and run it through the CodeDom provider for C# and compile it into an assembly, and then execute it.</p>
<p>This forum post on MSDN contains an answer with some example code down the page somewhat:<br />
<a href="https://social.msdn.microsoft.com/Forums/vstudio/en-US/6a783cc4-bb54-4fec-b504-f9b1ed786b54/create-a-anonymous-method-from-a-string" rel="nofollow noreferrer">create a anonymous method from a string? </a></p>
<p>I would hardly say this is a very good solution, but it is possible anyway.</p>
<p>What kind of code are you going to expect in that string? If it is a minor subset of valid code, for instance just math expressions, it might be that other alternatives exists.</p>
<hr />
<p><strong>Edit</strong>: Well, that teaches me to read the questions thoroughly first. Yes, reflection would be able to give you some help here.</p>
<p>If you split the string by the ; first, to get individual properties, you can use the following code to get a PropertyInfo object for a particular property for a class, and then use that object to manipulate a particular object.</p>
<pre><code>String propName = "Text";
PropertyInfo pi = someObject.GetType().GetProperty(propName);
pi.SetValue(someObject, "New Value", new Object[0]);
</code></pre>
<p>Link: <a href="https://learn.microsoft.com/en-us/dotnet/api/system.reflection.propertyinfo.setvalue" rel="nofollow noreferrer">PropertyInfo.SetValue Method</a></p>
|
<p>Unfortunately, C# doesn't have any native facilities for doing exactly what you are asking. </p>
<p>However, my C# eval program does allow for evaluating C# code. It provides for evaluating C# code at runtime and supports many C# statements. In fact, this code is usable within any .NET project, however, it is limited to using C# syntax. Have a look at my website, <a href="http://csharp-eval.com" rel="nofollow">http://csharp-eval.com</a>, for additional details.</p>
| 2,663
|
<p>A checklist for improving execution time between .NET code and SQL Server.
Anything from the basic to weird solutions is appreciated.</p>
<p><strong>Code:</strong></p>
<p>Change default timeout in command and connection by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#67438">avgbody</a>.</p>
<p>Use stored procedure calls instead of inline sql statement by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#67438">avgbody</a>.</p>
<p>Look for blocking/locking using Activity monitor by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#67387">Jay Shepherd</a>.</p>
<p><strong>SQL Server:</strong></p>
<p>Watch out for parameter sniffing in stored procedures by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#68183">AlexCuse</a>.</p>
<p>Beware of dynamically growing the database by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#67660">Martin Clarke</a>.</p>
<p>Use Profiler to find any queries/stored procedures taking longer then 100 milliseconds by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#67598">BradO</a>.</p>
<p>Increase transaction timeout by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#67438">avgbody</a>.</p>
<p>Convert dynamic stored procedures into static ones by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#67438">avgbody</a>.</p>
<p>Check how busy the server is by <a href="https://stackoverflow.com/questions/67366/a-checklist-for-fixing-net-apps-to-sql-server-timeout-problems-and-improve-exec#67387">Jay Shepherd</a>.</p>
|
<p>In the past some of my solutions have been:</p>
<ol>
<li><p>Fix the default timeout setting of the SqlCommand:</p>
<pre><code>Dim myCommand As New SqlCommand("[dbo].[spSetUserPreferences]", myConnection)
myCommand.CommandType = CommandType.StoredProcedure
myCommand.CommandTimeout = 120
</code></pre></li>
<li><p>Increase the timeout in the connection string:</p>
<pre><code>Data Source=mydatabase;Initial Catalog=Match;Persist Security Info=True;User ID=User;Password=password;Connection Timeout=120
</code></pre></li>
<li><p>Increase the transaction time-out in SQL Server 2005:</p>
<p>In Management Studio's Tools > Options > Designers, increase the "Transaction time-out after:" value, whether or not "Override connection string time-out value for table designer updates" is checked.</p></li>
<li><p>Convert dynamic stored procedures into static ones</p></li>
<li><p>Make the code call a stored procedure instead of writing an inline sql statement in the code.</p></li>
</ol>
|
<p>A few quick ones...</p>
<ul>
<li>Check Processor use of server to see if it's just too busy</li>
<li>Look for blocking/locking going on with the Activity monitor</li>
<li>Network issues/performance</li>
</ul>
| 9,291
|
<p>I have a simple 2-column layout with a footer that clears both the right and left div in my markup. My problem is that I can't get the footer to stay at the bottom of the page in all browsers. It works if the content pushes the footer down, but that's not always the case. </p>
|
<p>To get a sticky footer:</p>
<ol>
<li><p>Have a <code><div></code> with <code>class="wrapper"</code> for your content.</p></li>
<li><p>Right <strong>before</strong> the closing <code></div></code> of the <code>wrapper</code> place the
<code><div class="push"></div></code>.</p></li>
<li><p>Right <strong>after</strong> the closing <code></div></code> of the <code>wrapper</code> place the
<code><div class="footer"></div></code>.</p></li>
</ol>
<pre class="lang-css prettyprint-override"><code>* {
margin: 0;
}
html, body {
height: 100%;
}
.wrapper {
min-height: 100%;
height: auto !important;
height: 100%;
margin: 0 auto -142px; /* the bottom margin is the negative value of the footer's height */
}
.footer, .push {
height: 142px; /* .push must be the same height as .footer */
}
</code></pre>
|
<p>Try putting a container div (with overflow:auto) around the content and sidebar.</p>
<p>If that doesn't work, do you have any screenshots or example links where the footer isn't displayed properly?</p>
| 6,374
|
<p>I have inherited a client site which crashes every 3 or 4 days. It is built using the zend-framework with which I have no knowledge.</p>
<p>The following code:</p>
<pre><code><?php
// Make sure classes are in the include path.
ini_set('include_path', ini_get('include_path') . PATH_SEPARATOR . 'lib' . PATH_SEPARATOR . 'app' . DIRECTORY_SEPARATOR . 'classes');
// Use autoload so include or require statements are not needed.
require_once 'Zend/Loader.php';
Zend_Loader::registerAutoload();
// Run the application.
App_Main::run('production');
</code></pre>
<p>Is causing the following error:</p>
<pre>
[Tue Sep 02 12:58:45 2008] [error] [client 78.***.***.32] PHP Warning: require_once(Zend/Loader.php) [function.require-once]: failed to open stream: No such file or directory in /srv/www/vhosts/example.co.uk/httpdocs/bootstrap.php on line 6
[Tue Sep 02 12:58:45 2008] [error] [client 78.***.***.32] PHP Fatal error: require_once() [function.require]: Failed opening required 'Zend/Loader.php' (include_path='.:.:/usr/share/php5:/usr/share/php5/PEAR') in /srv/www/vhosts/example.co.uk/httpdocs/bootstrap.php on line 6
</pre>
<p>I don't even know where to begin trying to fix this. My level of knowledge of PHP is intermediate but like I said, I have no experience with Zend. Also, contacting the original developer is not an option.</p>
<p>The interesting thing is that even though the code is run every time a page of the site is hit the error is only happening every now and then.</p>
<p>I believe it must be something to do with the include_path but I am not sure.</p>
|
<p>For a start, I think your include path should maybe have a trailing slash. Here is an example of mine:</p>
<pre><code> set_include_path('../library/ZendFramework-1.5.2/library/:../application/classes/:../application/classes/excpetions/:../application/forms/');
</code></pre>
<p>Your bootstrap file will be included by another file (probably an index.php file). This means that if your include path is relative (as mine is) instead of absolute, then the path at which Loader.php is looked for changes whenever the file including bootstrap.php changes.</p>
<p>For example, I have two index.php files in my Zend app, one for the front end, and one for the admin area. These index files each need their own bootstrap.php with different relative paths, because they are included by different index files, which means the paths <strong>have to be relative to the originally requested index file, not the bootstrap file they are defined within</strong>.</p>
<p>This could explain why your problem is intermittent, there could be another file including the bootstrap somewhere that is only used occasionally. I'd search through all the sites files for 'bootstrap.php' and see all the places which are including / requiring this file.</p>
|
<p>It works sometimes so there isn't anything inherently wrong on the PHP end of things (if the path was wrong it would never work... but it does, yes?). So what is causing Loader.php to be periodically inaccessible? I would suspect a permissions problem. Something that is making Loader.php or the directory that it is in inaccessible. Maybe a cron job is setting/reseting permissions? Check that first. See what permissions are when it is working and what they are when it is not. </p>
| 6,037
|
<p>I'm executing stored procedures using SET FMTONLY ON, in order to emulate what our code generator does. However, it seems that the results are cached when executed like this, as I'm still getting a <em>Conversion failed</em> error from a proc that I have just dropped! This happens even when I execute the proc without SET FMTONLY ON.</p>
<p>Can anyone please tell me what's going on here?</p>
|
<p>Some statements will still be executed, even with <strong><code>SET FMTONLY ON</code></strong>. Your "Conversion failed" error could be from something as simple as a <code>set variable</code> statement in the stored proc. For example, this returns the metadata for the first query, but throws an exception when it runs the last statement:</p>
<pre><code>SET FMTONLY on
select 1 as a
declare @a int
set @a = 'a'
</code></pre>
<p>As for running a dropped procedure, that's a new one to me. SQL Server uses the system tables to determine the object to execute, so it doesn't matter if the execution plan is cached for that object. If you drop it, it is deleted from the system tables, and should never be executable. Could you please query sysobjects (or sys.objects) just before you execute the procedure? I expect you'll find that you haven't dropped it.</p>
|
<ol>
<li><p>This sounds like a client-side error. Do you get the same message when running through SQL Management Studio?</p></li>
<li><p>Have you confirmed that there isn't another procedure with the same name that's owned by a different schema/user?</p></li>
</ol>
| 9,764
|
<p>What is the difference in terms of functionality between the Apache HTTP Server and Apache Tomcat?</p>
<p>I know that Tomcat is written in Java and the HTTP Server is in C, but other than that I do not really know how they are distinguished. Do they have different functionality?</p>
|
<p>Apache Tomcat is used to deploy your Java Servlets and JSPs. So in your Java project you can build your WAR (short for Web ARchive) file, and just drop it in the deploy directory in Tomcat.</p>
<p>So basically Apache is an HTTP Server, serving HTTP. Tomcat is a Servlet and JSP Server serving Java technologies.</p>
<p>Tomcat includes Catalina, which is a servlet container. A servlet, at the end, is a Java class. JSP files (which are similar to PHP, and older ASP files) are generated into Java code (HttpServlet), which is then compiled to .class files by the server and executed by the Java virtual machine. </p>
|
<p>Apache is a web server that serves content over HTTP.</p>
<p>Apache Tomcat is a Java servlet container. It has the same features as a web server, but is customized to execute Java servlets and JSP pages.</p>
| 5,051
|
<p>I printed a raspberry pi case from PLA. <a href="https://www.thingiverse.com/thing:1549574" rel="noreferrer">This case</a>.</p>
<p>I fitted it to my raspberry pi and now the wifi keeps dropping out. When I take it out of the case there is no problem. </p>
<p>The filament I am using is Black, 3D Hero 1.75mm.</p>
<p>Has anyone experienced anything like this before, if so how did you work around it?
Is it common with PLA is there another material that wouldn't cause an issue?</p>
|
<p>Typically the black pigments in polymers are mineral-based. If you are having true WiFi issues, then it sounds like your filament is using black iron oxide, also called <a href="https://en.wikipedia.org/wiki/Magnetite" rel="nofollow noreferrer">magnetite</a>, for the black pigment. This is the same material used in radar-absorbing paints, which would make sense as to why you are having radio-related issues.</p>
| 1,262
|
<p>I'm looking for a robust, easy-to-use, regular expression evaluator for native C++, either platform-independent or Windows-specific.</p>
<p>Requirements:</p>
<ul>
<li>Can't use Boost or ACE regex libraries (unfortunately)</li>
<li>Can't use .NET Regex (or any managed code)</li>
</ul>
<p>The main requirement is that it should be standalone and open.</p>
|
<p>C++11 now includes support for regular expressions in the standard library, so it is platform-independent. You just need a recent compiler; check the following list to see which ones support it.</p>
<p><a href="http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport" rel="noreferrer">http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport</a></p>
<p>Hope it helps</p>
|
<p>Why don't you use Microsoft ATL's regex library? Kenny Kerr has written a <a href="http://weblogs.asp.net/kennykerr/archive/2008/07/18/visual-c-in-short-regular-expressions.aspx" rel="nofollow noreferrer">short article</a> on that recently.</p>
<blockquote>
<p>ATL includes a lightweight regular expression implementation. Although
originally part of Visual C++, it is now included with the ATL Server
download.</p>
<p>The <code>CAtlRegExp</code> class template implements the parser and matching
engine. ...</p>
<p>The regular expression grammar is defined at the top of the <code>atlrx.h</code>
header file.</p>
</blockquote>
| 2,346
|
<p>Which is more efficient for the compiler and the best practice for checking whether a string is blank? </p>
<ol>
<li>Checking whether the length of the string == 0</li>
<li>Checking whether the string is empty (strVar == "")</li>
</ol>
<p>Also, does the answer depend on language?</p>
|
<p>Yes, it depends on language, since string storage differs between languages.</p>
<ul>
<li>Pascal-type strings: <code>Length = 0</code>.</li>
<li>C-style strings: <code>[0] == 0</code>. </li>
<li>.NET: <code>.IsNullOrEmpty</code>.</li>
</ul>
<p>Etc. </p>
|
<p>Actually, IMO the best way to check whether a string is blank is the IsNullOrEmpty() method of the string class.</p>
<p><a href="http://msdn.microsoft.com/en-us/library/system.string.isnullorempty." rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/library/system.string.isnullorempty.</a></p>
<p>Update: I assumed .Net, in other languages, this might be different.</p>
| 3,145
|
<p>I am creating a GUI for a machine that runs remote (WinXP) or on the machine itself (Windows CE 6.0).</p>
<p>Right now I've created a fast visualisation (read: very simple) of the machine itself. The goal is to make a bit more complex visualisation of the machine and for that I would need a lightweight 3d engine.</p>
<p>The engine should run on both the full and the Compact .NET Framework (I am prepared to do some porting :). <br />
What I mean by lightweight is that it doesn't need shading, lighting, or advanced cameras. The idea is that the whole scene is shown in isometric perspective.</p>
<p>So what I was wondering is if anyone knows a good engine (open source?) or has some helpful resources you could share with me.</p>
|
<p>Did you try <a href="http://irrlicht.sourceforge.net/" rel="nofollow noreferrer">Irrlicht</a>. </p>
<blockquote>
<p>Recently Irrlicht has acquired official .NET bindings, allowing users to develop in .Net languages such as VB.NET, C# and Boo.</p>
</blockquote>
<p>There is also <a href="http://en.wikipedia.org/wiki/OGRE_3D" rel="nofollow noreferrer">Ogre 3D</a> and also <a href="http://en.wikipedia.org/wiki/Axiom_Engine" rel="nofollow noreferrer">Axiom Engine</a></p>
|
<p>It is a good question. I have looked as well, and not seen anything. It would be great to see some easy to access great visual effects for mobile, to somewhat compete with other platforms that are getting better looking.</p>
<p>Sometimes with Windows Mobile I feel like I am in the Windows 3.1 days!</p>
| 6,726
|
<p>While writing a custom IHttpHandler I came across a behavior that I didn't expect concerning the HttpCachePolicy object.</p>
<p>My handler calculates and sets an entity-tag (using the SetETag method on the HttpCachePolicy associated with the current response object). If I set the cache-control to public using the SetCacheability method everything works like a charm and the server sends along the e-tag header. If I set it to private the e-tag header will be suppressed.</p>
<p>Maybe I just haven't looked hard enough but I haven't seen anything in the HTTP/1.1 spec that would justify this behavior. Why wouldn't you want to send E-Tag to browsers while still prohibiting proxies from storing the data?</p>
<pre><code>using System;
using System.Web;
public class Handler : IHttpHandler {
    public void ProcessRequest(HttpContext ctx) {
        ctx.Response.Cache.SetCacheability(HttpCacheability.Private);
        ctx.Response.Cache.SetETag("\"static\"");
        ctx.Response.ContentType = "text/plain";
        ctx.Response.Write("Hello World");
    }
    public bool IsReusable { get { return true; } }
}
</code></pre>
<p>Will return</p>
<pre>
Cache-Control: private
Content-Type: text/plain; charset=utf-8
Content-Length: 11
</pre>
<p>But if we change it to public it'll return</p>
<pre>
Cache-Control: public
Content-Type: text/plain; charset=utf-8
Content-Length: 11
Etag: "static"
</pre>
<p>I've run this on the ASP.NET development server and IIS6 so far with the same results. Also I'm unable to explicitly set the ETag using</p>
<pre><code>Response.AppendHeader("ETag", "static")
</code></pre>
<p><strong>Update</strong>: It's possible to append the ETag header manually when running in IIS7, I suspect this is caused by the tight integration between ASP.NET and the IIS7 pipeline.</p>
<p><strong>Clarification</strong>: It's a long question but the core question is this: <strong>why does ASP.NET do this, how can I get around it and should I?</strong></p>
<p><strong>Update</strong>: I'm going to accept <a href="https://stackoverflow.com/questions/32824/why-does-httpcacheabilityprivate-suppress-etags#34004">Tony's answer</a> since it's essentially correct (go Tony!). I found that if you want to emulate HttpCacheability.Private fully you can set the cacheability to ServerAndPrivate, but you also have to call cache.<a href="http://msdn.microsoft.com/en-us/library/system.web.httpcachepolicy.setomitvarystar.aspx" rel="nofollow noreferrer">SetOmitVaryStar</a>(true), otherwise the cache will add the <strong>Vary: *</strong> header to the output and you don't want that. I'll edit that into the answer when I get edit permissions (or if you see this Tony perhaps you could edit your answer to include that call?)</p>
|
<p>I think you need to use HttpCacheability.ServerAndPrivate</p>
<p>That should give you cache-control: private in the headers and let you set an ETag.</p>
<p>The documentation on that needs to be a bit better.</p>
<p><strong>Edit:</strong> Markus found that you also have to call cache.SetOmitVaryStar(true), otherwise the cache will add the Vary: * header to the output and you don't want that.</p>
|
<p>If like me you're unhappy with the workaround mentioned here of using Cacheability.ServerAndPrivate, and you really want to use Private instead - perhaps because you are customising pages individually for users and it makes no sense to cache on the server - then at least in .NET 3.5 you can set ETag through Response.Headers.Add and this works fine. </p>
<p>N.B. if you do this you have to implement the comparison of the client headers yourself and the HTTP 304 response handling - not sure if .NET takes care of this for you under normal circumstances.</p>
| 5,282
|
<p>I have ordered a dual hotend Chimera and it came with 2x 12 V heater elements (in my rush I forgot to order the one with 2x 24V).</p>
<p><a href="https://i.stack.imgur.com/xhQNem.png" rel="nofollow noreferrer" title="Chimera dual filament hotend"><img src="https://i.stack.imgur.com/xhQNem.png" alt="Chimera dual filament hotend" title="Chimera dual filament hotend"></a></p>
<p>Is it possible to run these 12 V heater elements in series? </p>
<p><em>I am planning on running this with an SRK 1.3 board.</em></p>
|
<p>This answer is intended to be a generic answer for Z-offset determination. The question is not clear on how the Z-offset has been determined. It appears as if this distance is measured, while in reality it cannot be measured.</p>
<p>A touch (or an inductive or capacitive) probe uses a trigger point to determine the distance from the probe trigger point to the bed print surface. Correct installation is trivial, as is determining the nozzle-to-trigger-point offset. For a touch sensor, the probing pin is either stowed, fully deployed, or pushed in during leveling up to the point that the trigger point is reached and the probe stows the rest of the pin, see figure:</p>
<p><a href="https://i.stack.imgur.com/wdto7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wdto7.png" alt="BLTouch or 3DTouch sensor Z-offset definition"></a></p>
<p>The <code>M851 Zxx.xx</code> offset is determined by lowering the nozzle beyond the trigger point until the nozzle hits a paper sheet. If the stowed position to nozzle distance is used, the distance is too large and the nozzle will dive into the bed on printing.</p>
|
<p>You should be able to offset this with a <code>G54 Z-1.3</code> - <strong>if your setup accepts these gcodes</strong>. </p>
<p>If you do this, always add a <code>G53</code> to the very start and just before the <code>M30</code> to clear all offsets after job finish (or in the event of a cancel, at the start of the next job).</p>
<p>I'm not experienced with a wide variety of printers or firmware, but our repetier-based printers (and we use the same controls for our refurbished Fadal CNC machines) use G53-G59:</p>
<p><a href="https://www.cnccookbook.com/g54-g92-g52-work-offsets-cnc-g-code/" rel="nofollow noreferrer">As explained in this tutorial from cnccookbook.com</a>:</p>
<blockquote>
<p>Basic work offsets are very simple to specify: simply enter one of G54, G55, G56, G57, G58, or G59. [...] When you execute the work offset g-code, the XYZ offset will be added to all of your coordinates from that point forward.</p>
</blockquote>
<p><a href="https://en.wikipedia.org/wiki/G-code" rel="nofollow noreferrer">As detailed on Wikipedia</a>:</p>
<blockquote>
<p><strong>G54-59</strong>: Have largely replaced position register (G50 and G92). Each tuple of axis offsets relates program zero directly to machine zero. Standard is 6 tuples (G54 to G59), with optional extensibility to 48 more via G54.1 P1 to P48.</p>
</blockquote>
<p><a href="http://hyrel3d.net/wiki/index.php/Gcode#G54_through_G59_-_Set_Offsets" rel="nofollow noreferrer">And on the gcode dictionary provided by Hyrel 3D</a>:</p>
<blockquote>
<p><strong>G54 through G59 - Set Offsets</strong>
G54, G55, G56, G57, G58, and G59 will each store and invoke offsets in the X, Y, and/or Z axes for all subsequent moves. Any values not invoked will remain with their previous value (0 unless earlier specified otherwise).</p>
<ul>
<li>X is the offset in mm in the X axis.</li>
<li>Y is the offset in mm in the Y axis.</li>
<li>Z is the offset in mm in the Z axis.</li>
</ul>
<p>Here is an example:</p>
<p><code>G54 X100 Y-50</code></p>
<p>This command is decoded and executed by the printer as follows:</p>
<p>G54 (set offsets)<br>
- X100 (+100mm to all X coordinates)<br>
- Y-50 (-50mm to all Y coordinates)<br></p>
<p>Note that this differs from an M6, where the offsets are only applied to a SINGLE tool position.</p>
</blockquote>
<p><em>Disclaimer: I work for Hyrel 3D.</em></p>
| 1,467
|
<p>When you get a badge or aren't logged in to stack overflow there's a groovy little notification bar at the top of the page that lets you know there's something going on.</p>
<p>I know the SOflow team use JQuery, but I was wondering if anyone knew of an implementation of the same style of notification system in asp.net AJAX.</p>
<p>On a side note, what's the "official" name for this style of notification bar?</p>
|
<p>I'm currently using this:</p>
<pre><code>String content = null;
URLConnection connection = null;
try {
connection = new URL("http://www.google.com").openConnection();
Scanner scanner = new Scanner(connection.getInputStream());
scanner.useDelimiter("\\Z");
content = scanner.next();
scanner.close();
}catch ( Exception ex ) {
ex.printStackTrace();
}
System.out.println(content);
</code></pre>
<p>But not sure if there's a better way.</p>
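<p>For reference, one variation I've considered that avoids the <code>\Z</code> delimiter trick is draining the stream manually (the <code>readAll</code> helper name is my own, not a JDK method):</p>

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class FetchUrl {
    // Drain an InputStream into a String; works for URLConnection streams too.
    static String readAll(InputStream in) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return buf.toString("UTF-8");
        } catch (IOException e) {
            throw new RuntimeException(e); // sketch: surface errors unchecked
        }
    }

    public static void main(String[] args) {
        // With a real connection you would pass
        // new URL("http://www.google.com").openConnection().getInputStream() here.
        InputStream demo = new ByteArrayInputStream("<html>hi</html>".getBytes());
        System.out.println(readAll(demo));
    }
}
```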
|
<p>It's not a library but a tool named <code>curl</code>, generally installed on most servers. On Ubuntu you can easily install it with</p>
<pre><code>sudo apt install curl
</code></pre>
<p>Then fetch any HTML page and store it in a local file, for example:</p>
<pre><code>curl https://www.facebook.com/ > fb.html
</code></pre>
<p>You will get the home page HTML. You can open it in your browser as well.</p>
| 5,140
|
<p>I work in VBA, and want to parse a string eg</p>
<pre><code><PointN xsi:type='typens:PointN'
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
xmlns:xs='http://www.w3.org/2001/XMLSchema'>
<X>24.365</X>
<Y>78.63</Y>
</PointN>
</code></pre>
<p>and get the X & Y values into two separate integer variables.</p>
<p>I'm a newbie when it comes to XML, since I'm stuck in VB6 and VBA, because of the field I work in.</p>
<p>How do I do this?</p>
|
<p>This is a bit of a complicated question, but it seems like the most direct route would be to load the XML document or XML string via MSXML2.DOMDocument which will then allow you to access the XML nodes.</p>
<p>You can find more on MSXML2.DOMDocument at the following sites:</p>
<ul>
<li><a href="https://web.archive.org/web/20161217090033/http://en.allexperts.com/q/XML-1469/Manipulating-XML-files-Excel.htm" rel="noreferrer">Manipulating XML files with Excel VBA & Xpath</a></li>
<li>MSXML - <a href="http://msdn.microsoft.com/en-us/library/ms763742(VS.85).aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/ms763742(VS.85).aspx</a></li>
<li><a href="https://web.archive.org/web/20161030020427/http://www.xml.com:80/lpt/a/979" rel="noreferrer">An Overview of MSXML 4.0</a></li>
</ul>
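<p>A rough sketch of that route in VBA (untested; late binding via <code>CreateObject</code>, and note the sample values are decimals, so <code>Double</code> rather than <code>Integer</code> is used):</p>

```vb
Dim doc As Object
Dim x As Double, y As Double

Set doc = CreateObject("MSXML2.DOMDocument")
doc.async = False

If doc.LoadXML(strXml) Then          ' strXml holds the <PointN> string
    x = Val(doc.SelectSingleNode("//X").Text)   ' 24.365
    y = Val(doc.SelectSingleNode("//Y").Text)   ' 78.63
Else
    MsgBox "Parse error: " & doc.parseError.reason
End If
```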
|
<p>Often it is easier to parse without VBA, for example when you don't want to enable macros. This can be done with the <code>REPLACE</code> and <code>FIND</code> worksheet functions. Put the XML in cell A1 and enter your start and end tags into cells B1 and C1.</p>
<pre><code>Cell A1: {your XML here}
Cell B1: <X>
Cell C1: </X>
Cell D1: =REPLACE(A1,1,FIND(B1,A1)+LEN(B1)-1,"")
Cell E1: =REPLACE(D1,FIND(C1,D1),LEN(D1)-FIND(C1,D1)+1,"")
</code></pre>
<p>And cell E1 will hold your parsed value:</p>
<pre><code>Cell A1: {your XML here}
Cell B1: <X>
Cell C1: </X>
Cell D1: 24.365</X><Y>78.63</Y></PointN>
Cell E1: 24.365
</code></pre>
| 3,245
|
<p>I have connected to a server via SFTP using FileZilla and accepted adding the server's SSH key to the key cache in FileZilla. </p>
<p>How can I extract this cached key to a keyfile so that may use it through other SFTP applications that require a keyfile be made available? </p>
<p>I have not been able to find anything in the FileZilla documentation related to this.</p>
|
<p>If you use the standard openssh console client (cygwin or from linux), host keys are stored, one-per-line, in ~/.ssh/known_hosts. From there, it's a simple matter of figuring out which bit of that host key is needed for your library.</p>
<p>PuTTY also stores host keys, but it appears to encode them in hex. Those can be found at HKCU\Software\SimonTatham\PuTTY\SshHostKeys</p>
|
<p>Unless I am misunderstanding you: you don't need to.</p>
<p>If you connect to the server with another application (e.g. PuTTY) and it has not seen the server before, then you will be prompted to accept the key. </p>
<p>I see why you might want to do this, but each application could have its own way to store keys.</p>
| 6,573
|
<p>There is filament residue on my glass bed that is so thin that it won't come off. I have had residue buildup for over a year now and none of it has come off. This residue appears with both the PETG and the PLA filament I have. Is there any way to remove this residue?</p>
<p><a href="https://i.stack.imgur.com/DmG4e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DmG4e.png" alt="Filament residue on bed" /></a></p>
|
<p>I would try a single edge razor blade at a low angle used as a scraper. If you can't feel the blade catching in the residue, it probably isn't an issue.</p>
<p>If you must get rid of it, you don't have good choices of solvents. Maybe you can burn it off by placing the glass in an oven through a clean cycle. With luck the heating and cooling won't break the glass.</p>
<p>If you can't scrape it off with a sharp razor blade, I would just let it be.</p>
|
<p>Maybe you could set your bed to its maximum temperature and print a large, 1 mm thick square onto it using a higher than normal temperature for that filament; maybe this bonds the residue to the new print. Let it cool down and pry the print off, or remove it while hot.</p>
| 1,763
|
<p>Should I still be using tables anyway?</p>
<p>The table code I'd be replacing is:</p>
<pre><code><table>
<tr>
<td>Name</td><td>Value</td>
</tr>
...
</table>
</code></pre>
<p>From what I've been reading I should have something like</p>
<pre><code><label class="name">Name</label><label class="value">Value</label><br />
...
</code></pre>
<p>Ideas and links to online samples greatly appreciated. I'm a developer way out of my design depth.</p>
<p>EDIT: My need is to be able to both to display the data to a user and edit the values in a separate (but near identical) form.</p>
|
<p>I think that definition lists are pretty close semantically to name/value pairs.</p>
<pre><code><dl>
<dt>Name</dt>
<dd>Value</dd>
</dl>
</code></pre>
<p><a href="http://www.maxdesign.com.au/presentation/definition" rel="noreferrer">Definition lists - misused or misunderstood?</a></p>
|
<p>Use the <code>float</code> property, e.g.:</p>
<p>css:</p>
<pre class="lang-css prettyprint-override"><code>.left {
float:left;
padding-right:20px
}
</code></pre>
<p>html:</p>
<pre><code><div class="left">
Name<br/>
AnotherName
</div>
<div>
Value<br />
AnotherValue
</div>
</code></pre>
| 8,654
|
<p>I'm looking for any idea of what could cause this problem. I'm printing (1.75mm PLA @ 220C) a 14cm x 14cm box, sliced with Simplify3D. Relevant settings are 3 bottom layers, 3 top layers, 3 outline/perimeter shells, and 15% orthogonal infill every other layer.</p>
<p>The first 3 layers print fine. Here's the first (bottom) layer after removing the print and turning it over:
<a href="https://i.stack.imgur.com/S45eF.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/S45eF.jpg" alt="bottom layer"></a></p>
<p>The infill (layers 4-10) also prints beautifully (see left side of photo below).</p>
<p>But the moment it starts printing the next layer (layer 11, the top layer of the bottom of the box), which should be solid fill exactly like layers 1-3, it starts underextruding and generally looking like crap:</p>
<p><a href="https://i.stack.imgur.com/bPScE.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/bPScE.jpg" alt="Not Good"></a></p>
<p>The first time this happened, I figured the nozzle got clogged or the extruder gear started slipping. This is not the freshest PLA, so maybe it was a PLA quality problem. But the same thing happened at the same layer when I tried it again. And again. And again. As soon as I cancel the print I can have it extrude 5cm of filament and it's fine, no clogs...and if I immediately start another print it again perfectly prints layers 1-10. So it doesn't seem to be the extruder, the nozzle, or PLA quality. I can't imagine the "stress" of laying 7 layers of infill could screw up the next layer...</p>
<p>I just can't figure out how layers 2 and 3 could be basically perfect, but layer 11 is consistently a disaster, when they should be almost the exact same gcode (only a mm apart). I looked at the gcode and it's basically identical for layers 3 and 11, including same feedrate (G1 F2250).</p>
<p>This is on a DeltaMaker printer with a new E3D Lite6 hot end. Can add more details if needed, but basically I'm just looking for an idea of what could cause this.</p>
<p>Update: Just tried a different roll of PLA and got the same results.</p>
|
<p>I doubt this is a printer or filament issue. Rather, I suspect it might be related to how your slicer handles <em>bridging</em> scenarios (which basically is what laying down layers over infill is).</p>
<p>You could try to:</p>
<ol>
<li><p>Increase the infill percentage and/or use an infill pattern that provides better support for the top layer (so that it isn't allowed to sag as much). You will probably still see <em>some</em> sag, but that is usually covered up by the next layer. Adding print cooling might also help here.</p></li>
<li><p>Adjust the bridging settings in your slicer software. If under extrusion is the main issue, perhaps you could try to increase extrusion for bridges?</p></li>
</ol>
<p>I am no expert with Simplify3D, and I suspect that the default settings actually might be pretty good. I would therefore try option 1 first. Perhaps someone else here can give a better answer related to your slicer. :-)</p>
|
<p>The infill percentage is a bit low. </p>
<p>Even with infill percentages that are higher than 15% (I use 25%), a single layer on top of infill will not create a good top. It takes at least two and looks really good with three. The first layer is often patchy looking! </p>
<p>If you're stopping your print after one layer, and have more layers to go, maybe let it continue and see if you are more satisfied with it after the second or third layer.</p>
<p>If you only have one top full coverage layer in your model, then I believe you will need to configure one or two more. </p>
<p>Sometimes too much heat will cause the layers after the infill to look sparse and sag, so temperature (as always) is a factor for covering infill. :+)</p>
| 399
|
<p>As part of our current database work, we are looking at dealing with the process of updating databases.</p>
<p>A point which has been brought up recurrently is that of dealing with system vs. user values; in our project, user and system values are stored together. For example...</p>
<p>We have a list of templates.</p>
<pre><code>1, <system template>
2, <system template>
3, <system template>
</code></pre>
<p>These are mapped in the app to an enum (1, 2, 3)</p>
<p>Then a user comes in and adds...</p>
<pre><code>4, <user template>
</code></pre>
<p>...and...</p>
<pre><code>5, <user template>
</code></pre>
<p>Then.. we issue an upgrade.. and insert as part of our upgrade scripts...</p>
<pre><code><new id> [6], <new system template>
</code></pre>
<p>THEN!!... we find a bug in the new system template and need to update it... The problem is how? We cannot update record using ID6 (as we may have inserted it as 9, or 999, so we have to identify the record using some other mechanism)</p>
<p>So, we've come to two possible solutions for this.</p>
<h2><strong>In the red corner (speed)....</strong></h2>
<p>We simply start user Ids at 5000 (or some other value) and test data at 10000 (or some other value). This would allow us to make modifications to system values and test them up to the lower limit of the next ID range. </p>
<p>Advantage...Quick and easy to implement, </p>
<p>Disadvantage... could run out of values if we don't choose a big enough range!</p>
<h2><strong>In the blue corner (scalability)...</strong></h2>
<p>We store, system and user data separately, use GUIDs as Ids and merge the two lists using a view.</p>
<p>Advantage...Scalable..No limits w/regard to DB size. </p>
<p>Disadvantage.. More complicated to implement. (many to one updatable views etc.)</p>
<hr>
<p>I plump squarely for the first option, but looking for some ammo to back me up!</p>
<p>Does anyone have any thoughts on these approaches, or even one(s) that we've missed?</p>
|
<p>I have never had problems (performance or development - TDD & unit testing included) using GUIDs as the ID for my databases, and I've worked on some pretty big ones. Have a look <a href="http://www.sql-server-performance.com/articles/per/guid_performance_p2.aspx" rel="nofollow noreferrer">here</a>, <a href="http://weblogs.asp.net/wwright/archive/2007/11/04/the-gospel-of-the-guid-and-why-it-matters.aspx" rel="nofollow noreferrer">here</a> and <a href="http://weblogs.asp.net/wwright/archive/2007/11/11/gospel-of-the-guid-answers-to-your-burning-questions-comments-and-insults.aspx" rel="nofollow noreferrer">here</a> if you want to find out more about using GUIDs (and the potential GOTCHAS involved) as your primary keys - but I can't recommend it highly enough since moving data around safely and DB synchronisation becomes as easy as brushing your teeth in the morning :-)</p>
<p>For your question above, I would either recommend a third column (if possible) that indicates whether or not the template is user or system based, or you can at the very least generate GUIDs for system templates as you insert them and keep a list of those on hand, so that if you need to update a template, you can just target that same GUID in your DEV, UAT and/or PRODUCTION databases without fear of overwriting other templates. The third column would come in handy though for selecting all system or user templates at will, without the need to separate them into two tables (this is overkill IMHO).</p>
<p>I hope that helps,</p>
<p>Rob G</p>
|
<p>Maybe I didn't get it, but couldn't you use GUIDs as Ids and still have user and system data together? Then you can access the system data by the (non-changable) GUIDs.</p>
| 3,109
|
<p>Is it possible to validate an xml file against its associated schema using Visual Studio 2005 IDE? </p>
<p>I could only see options to create a schema based on the current file, or show the XSLT output</p>
<p><img src="https://i.stack.imgur.com/x9mHo.png" alt="alt text"></p>
|
<p>It's done automatically, errors appear as warnings in the "Error List" and are additionally underlined with the blue squiggle in the source file. </p>
<p>Not sure if there is another way to validate the file, but this will do for now.</p>
|
<p><a href="http://msdn.microsoft.com/en-us/library/system.xml.schema.xmlschemavalidator.aspx" rel="nofollow noreferrer">XmlSchemaValidator</a></p>
<p>Warning: It's not pretty to use.</p>
| 7,557
|
<p>I'm looking for any way that I can improve the selector performance of a jQuery call. Specifically things like this:</p>
<p>Is <code>$("div.myclass")</code> faster than <code>$(".myclass")</code></p>
<p>I would think it might be, but I don't know if jQuery is smart enough to limit the search by tag name first, etc. Anyone have any ideas for how to formulate a jQuery selector string for best performance?</p>
|
<p>There is no doubt that <strong>filtering by tag name first is much faster</strong> than filtering by classname.</p>
<p>This will be the case until all browsers implement getElementsByClassName natively, as is the case with getElementsByTagName.</p>
|
<p>I've been on some of the jQuery mailing lists and from what I've read there, they most likely filter by tag name then class name (or vice versa if it was faster). They are obsessive about speed and would use anything to gain a smidgen of performance.</p>
<p>I really wouldn't worry about it too much anyway unless you are running that selector thousands of times/sec. </p>
<p>If you are really concerned, try doing some benchmarking and see which is faster.</p>
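<p>If you do benchmark it, a rough harness might look like this (the <code>time</code> helper is ours; the jQuery calls assume a browser page with jQuery loaded and some <code>.myclass</code> elements):</p>

```javascript
// Minimal timing harness: run fn() reps times and report elapsed ms.
function time(label, fn, reps) {
  var start = Date.now();
  for (var i = 0; i < reps; i++) {
    fn();
  }
  var elapsed = Date.now() - start;
  console.log(label + ': ' + elapsed + ' ms');
  return elapsed;
}

// Only meaningful in a page that actually has jQuery loaded.
if (typeof $ !== 'undefined') {
  time('tag-qualified selector', function () { $('div.myclass'); }, 1000);
  time('class-only selector',    function () { $('.myclass');    }, 1000);
}
```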
| 6,826
|
<p>What software is recommended for working with and editing large XML schemas? I'm looking for both Windows and Linux software (doesn't have to be cross platform, just want suggestions for both) that help with dealing with huge XML files.</p>
|
<p>I may be old fashioned, but I prefer my text editor. </p>
<p>I use emacs, and it has a fairly decent xml mode. </p>
<p>Most good text editors will have decent syntax highlighting and tag-matching facilities. Your IDE might already do it (IntelliJ IDEA does, and I believe Eclipse does as well). Good text editors will be able to deal with <em>huge</em> files, but some text editors may not be able to handle them. How big are we talking about?</p>
|
<p><strong>XML Copy Editor</strong> - Windows and Linux</p>
<p>Fast, free, and supports XML schema validation.</p>
<p>Official Website
<a href="http://xml-copy-editor.sourceforge.net/" rel="nofollow">http://xml-copy-editor.sourceforge.net/</a></p>
<p>How to install in Ubuntu
<a href="http://ubuntuforums.org/showthread.php?t=1640003" rel="nofollow">http://ubuntuforums.org/showthread.php?t=1640003</a></p>
| 2,397
|
<p>I have an object that I want to print in 3D. But I have a few questions about it. What are the things that I have to watch out for when 3D printing? </p>
<p>I know how to change the metric size etc. Some people said that it's best to set the thickness to a low amount and not make the object solid (to leave the inside empty/hollow) in order to save money when printing.</p>
<p>Is this true or does it not matter?</p>
<p>Also what if I want two parts of an object to be separate colours or materials? Do I have to change this in Blender? </p>
<p>Any advice and information would be helpful, thanks.</p>
<p><a href="https://i.stack.imgur.com/TG6Mg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TG6Mg.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/KIEy1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KIEy1.png" alt="enter image description here"></a></p>
|
<p>You are correct about the walls. Using a <em>Solidify</em> object modifier is probably your best bet. A low <em>Thickness:</em> value (<em>0.1</em> is probably good) helps keep the walls thin but strong. You can monitor the thickness while you adjust the value from <em>Wireframe</em> view.</p>
<p><a href="https://i.stack.imgur.com/u4wif.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u4wif.png" alt="enter image description here"></a></p>
<p>Additionally, and <strong>this is probably the most important thing to know</strong>, your mesh must be clean. By clean, I mean it must all be one piece. No separate cubes, cylinders, etc. that you added while modeling, just one solid piece. Think about it this way. If you have added a cube and part of that cube is inside the rest, it might look good from the outside. But the 3D Printer isn't printing the outside, it's printing everything. So that wall, albeit hidden, that is present on the inside of your mesh <strong>will be printed</strong>.</p>
<p><em>Bad:</em></p>
<p><a href="https://i.stack.imgur.com/de7eW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/de7eW.png" alt="enter image description here"></a></p>
<p><em>Good:</em></p>
<p><a href="https://i.stack.imgur.com/RQXI3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RQXI3.png" alt="enter image description here"></a></p>
<p>Lastly, if you have parts of your mesh that can't be printed from the bottom up, or wouldn't stand by itself, consider adding supports. You can always cut these off later.</p>
<p><em>Leg added because it wouldn't stand by itself:</em></p>
<p><a href="https://i.stack.imgur.com/H1tqC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H1tqC.png" alt="enter image description here"></a></p>
|
<blockquote>
<p>What are the things that i have to watch out for when 3d printing? </p>
</blockquote>
<p><a href="https://blender.stackexchange.com/q/7910/2816">non manifold geometry</a> : geometry that can not exist in the real world. </p>
<p>It's a good idea to check if the dimensions of your mesh are correct before exporting: </p>
<p><a href="https://i.stack.imgur.com/cfYiB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cfYiB.png" alt="enter image description here"></a>
<hr>
Turning on mesh analysis allows visual inspection of problems such as intersecting faces, sharp edges, edges with thickness below a threshold, and other criteria.<br>
<em>the inspection tool will color the faces with those problems.</em> </p>
<p><a href="https://i.stack.imgur.com/BLGEK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BLGEK.png" alt="enter image description here"></a>
<hr>
And lastly you can make selections by traits, such as loose geometry, interior faces or unconnected vertices. </p>
<p><a href="https://i.stack.imgur.com/phD8w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/phD8w.png" alt="enter image description here"></a></p>
<p><em>note that those are just tools to help you find the problems , none of them will fix the mesh for 3d printing.</em> </p>
| 305
|
<p>In WPF, how would I apply multiple styles to a <code>FrameworkElement</code>? For instance, I have a control which already has a style. I also have a separate style which I would like to add to it without blowing away the first one. The styles have different TargetTypes, so I can't just extend one with the other.</p>
|
<p><strong>I think the simple answer is that you can't do (at least in this version of WPF) what you are trying to do.</strong></p>
<p><em>That is, for any particular element only one Style can be applied.</em></p>
<p>However, as others have stated above, maybe you can use <code>BasedOn</code> to help you out. Check out the following piece of loose xaml. In it you will see that I have a base style that is setting a property that exists on the base class of the element that I want to apply two styles to. And, in the second style which is based on the base style, I set another property.</p>
<p><strong>So, the idea here ... is if you can somehow separate the properties that you want to set ... according the inheritance hierarchy of the element you want to set multiple styles on ... you might have a workaround.</strong></p>
<pre><code><Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Page.Resources>
<Style x:Key="baseStyle" TargetType="FrameworkElement">
<Setter Property="HorizontalAlignment" Value="Left"/>
</Style>
<Style TargetType="Button" BasedOn="{StaticResource baseStyle}">
<Setter Property="Content" Value="Hello World"/>
</Style>
</Page.Resources>
<Grid>
<Button Width="200" Height="50"/>
</Grid>
</Page>
</code></pre>
<p><br>
Hope this helps.</p>
<p><strong>Note:</strong></p>
<p>One thing in particular to note. If you change the <code>TargetType</code> in the second style (in first set of xaml above) to <code>ButtonBase</code>, the two Styles do not get applied. However, check out the following xaml below to get around that restriction. Basically, it means you need to give the Style a key and reference it with that key.</p>
<pre><code><Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Page.Resources>
<Style x:Key="baseStyle" TargetType="FrameworkElement">
<Setter Property="HorizontalAlignment" Value="Left"/>
</Style>
<Style x:Key="derivedStyle" TargetType="ButtonBase" BasedOn="{StaticResource baseStyle}">
<Setter Property="Content" Value="Hello World"/>
</Style>
</Page.Resources>
<Grid>
<Button Width="200" Height="50" Style="{StaticResource derivedStyle}"/>
</Grid>
</Page>
</code></pre>
|
<p><strong>If you are trying to apply a unique style to just one single element</strong> as an addition to a base style, there is a completely different way to do this that is IMHO much better for readable and maintainable code.</p>
<p>It's extremely common to need to tweak parameters per individual element. Defining dictionary styles just for use on one element is extremely cumbersome to maintain or make sense of. To avoid creating styles just for one-off element tweaks, read my answer to my own question here:</p>
<p><a href="https://stackoverflow.com/a/54497665/1402498">https://stackoverflow.com/a/54497665/1402498</a></p>
| 3,680
|
<p>I have somehow misconfigured fingers. This leads to a very annoying situation.</p>
<ol>
<li>I select a block of text to copy;</li>
<li>I move the cursor the place where I want to paste the code;</li>
<li>I accidentally press Ctrl+C again instead of Ctrl+V;</li>
<li>My block of copied text is replaced by an empty block;</li>
<li>I have to go back and do it all over again. Grrrrr.</li>
</ol>
<p>Is there any way to disable this behavior, that is to disable copy of empty blocks of text in Visual Studio 2005+?</p>
|
<p>It's not copying an empty block, it's copying the blank line. You can change this setting in Tools > Options > Text Editor > All Languages > 'Apply Cut or Copy Commands to blank lines when there is no selection'</p>
|
<p>Go to Tools > Options > Text Editor > All Languages > General</p>
<p>The option on that page is "Apply Cut or Copy commands to blank lines when there is no selection"</p>
| 9,971
|
<p>I'm generating some XML documents and when it comes to the address part I have fragments that look like this:</p>
<pre><code><Address>15 Sample St
Example Bay
Some Country</Address>
</code></pre>
<p>The XSLT that I have for converting this to XHTML has some funky recursive template to convert newline characters within strings to <code>&lt;br/&gt;</code> tags.</p>
<p>This is all working fine; but is it considered "bad practice" to rely on linebreaks within XML documents? If so, is it recommended that I do this instead?</p>
<pre><code><Address><Line>15 Sample St</Line>
<Line>Example Bay</Line>
<Line>Some Country</Line></Address>
</code></pre>
<p>Seems like it'd be really awkward to wrap every place where my text may be multiple lines with tags like that..</p>
|
<p>It's generally considered bad practice to rely on linebreaks, since it's a fragile way to differentiate data. While most XML processors will preserve any whitespace you put in your XML, it's not guaranteed. </p>
<p>The real problem is that most applications that output your XML into a readable format consider all whitespace in an XML document interchangeable, and might collapse those linebreaks into a single space. That's why your XSLT has to jump through such hoops to render the data properly. Using a "br" tag would vastly simplify the transform.</p>
<p>Another potential problem is that if you open up your XML document in an XML editor and pretty-print it, you're likely to lose those line breaks. </p>
<p>If you do keep using linebreaks, make sure to add an xml:space="preserve" attribute to "address." (You can do this in your DTD, if you're using one.)</p>
<p><strong>Some suggested reading</strong></p>
<ul>
<li>An <a href="http://www.xml.com/pub/a/2001/11/07/whitespace.html" rel="noreferrer">article from XML.com</a> says the following:</li>
</ul>
<blockquote>
<p>XML applications often seem to take a
cavalier attitude toward whitespace
because the rules about the places in
an XML document where whitespace
doesn't matter sometimes give these
applications free rein to add or
remove whitespace in certain places.</p>
</blockquote>
<ul>
<li><a href="http://www.dpawson.co.uk/xsl/sect2/N8321.html" rel="noreferrer">A collection of XSL-list posts regarding whitespace</a>. </li>
</ul>
|
<p>Yes, I think using a <a href="http://en.wikipedia.org/wiki/CDATA" rel="nofollow noreferrer">CDATA</a> block would protect the whitespace. Although some parser APIs allow you to preserve whitespace.</p>
| 2,892
|
<p>What's the simplest way to add a click event handler to a canvas element that will return the x and y coordinates of the click (relative to the canvas element)?</p>
<p>No legacy browser compatibility required, Safari, Opera and Firefox will do.</p>
|
<p>If you like simplicity but still want cross-browser functionality I found this solution worked best for me. This is a simplification of @Aldekein's solution but <strong>without jQuery</strong>.</p>
<pre><code>function getCursorPosition(canvas, event) {
const rect = canvas.getBoundingClientRect()
const x = event.clientX - rect.left
const y = event.clientY - rect.top
console.log("x: " + x + " y: " + y)
}
const canvas = document.querySelector('canvas')
canvas.addEventListener('mousedown', function(e) {
getCursorPosition(canvas, e)
})
</code></pre>
|
<p>Hey, this is in dojo, just 'cause it's what I had the code in already for a project.</p>
<p>It should be fairly obvious how to convert it back to non-dojo vanilla JavaScript. </p>
<pre><code> function onMouseClick(e) {
     // clientX/clientY are viewport-relative; subtract the
     // canvas position to make them relative to the canvas
     var rect = canvas.getBoundingClientRect();
     var x = e.clientX - rect.left;
     var y = e.clientY - rect.top;
 }
var canvas = dojo.byId(canvasId);
dojo.connect(canvas,"click",onMouseClick);
</code></pre>
<p>Hope that helps. </p>
| 7,949
|
<p>I inserted some values into a table. There is a column whose value is auto-generated. In the next statement of my code, I want to retrieve this value.</p>
<p>Can you tell me how to do it the right way?</p>
|
<p><code>@@IDENTITY</code> is not scope safe and will get you back the id from another table if you have an insert trigger on the original table, always use <code>SCOPE_IDENTITY()</code></p>
|
<pre><code>SELECT SCOPE_IDENTITY() AS Id
</code></pre>
<p>There is also <code>@@IDENTITY</code>, but if you have a trigger, it will return the identity generated during the trigger, whereas <code>SCOPE_IDENTITY()</code> respects your scope.</p>
| 6,764
|
<p>I've been using PostgreSQL a little bit lately, and one of the things that I think is cool is that you can use languages other than SQL for scripting functions and whatnot. But when is this actually useful?</p>
<p>For example, the documentation says that the main use for PL/Perl is that it's pretty good at text manipulation. But isn't that more of something that should be programmed into the application?</p>
<p>Secondly, is there any valid reason to use an untrusted language? It seems like making it so that any user can execute any operation would be a bad idea on a production system.</p>
<p>PS. Bonus points if someone can make <a href="http://pgfoundry.org/projects/pllolcode" rel="nofollow noreferrer">PL/LOLCODE</a> seem useful.</p>
|
<p>"isn't that [text manipulation] more of something that should be programmed into the application?"</p>
<p>Usually, yes. The generally accepted "<a href="http://en.wikipedia.org/wiki/Multitier_architecture" rel="nofollow noreferrer">three-tier</a>" application design for databases says that your logic should be in the middle tier, between the client and the database. However, sometimes you need some logic in a trigger or need to index on a function, requiring that some code be placed into the database. In that case all the usual "which language should I use?" questions come up.</p>
<p>If you only need a little logic, the most-portable language should probably be used (pl/pgSQL). If you need to do some serious programming though, you might be better off using a more expressive language (maybe pl/ruby). This will always be a judgment call.</p>
<p>"is there any valid reason to use an untrusted language?"</p>
<p>As above, yes. Again, putting direct file access (for example) into your middle tier is best when possible, but if you need to fire things off based on triggers (that might need access to data not available directly to your middle tier), then you need untrusted languages. It's not ideal, and should generally be avoided. And you definitely need to guard access to it.</p>
|
<p>I think most additional languages are offered so that if you develop in that language on a regular basis, you can feel comfortable writing db functions, triggers, etc. The usefulness of these features is to provide control over data as close to the data as possible.</p>
| 6,059
|
<p>In Ruby, given an array in one of the following forms...</p>
<pre><code>[apple, 1, banana, 2]
[[apple, 1], [banana, 2]]
</code></pre>
<p>...what is the best way to convert this into a hash in the form of...</p>
<pre><code>{apple => 1, banana => 2}
</code></pre>
|
<p><strong>NOTE</strong>: For a concise and efficient solution, please see <a href="https://stackoverflow.com/a/20831486/332936">Marc-André Lafortune's answer</a> below.</p>
<p>This answer was originally offered as an alternative to approaches using flatten, which were the most highly upvoted at the time of writing. I should have clarified that I didn't intend to present this example as a best practice or an efficient approach. Original answer follows.</p>
<hr>
<p><strong>Warning!</strong> Solutions using <strong>flatten</strong> will not preserve Array keys or values!</p>
<p>Building on @John Topley's popular answer, let's try:</p>
<pre><code>a3 = [ ['apple', 1], ['banana', 2], [['orange','seedless'], 3] ]
h3 = Hash[*a3.flatten]
</code></pre>
<p>This throws an error:</p>
<pre><code>ArgumentError: odd number of arguments for Hash
from (irb):10:in `[]'
from (irb):10
</code></pre>
<p>The constructor was expecting an Array of even length (e.g. ['k1','v1','k2','v2']). What's worse is that a different Array which flattened to an even length would just silently give us a Hash with incorrect values.</p>
<p>If you want to use Array keys or values, you can use <strong>map</strong>:</p>
<pre><code>h3 = Hash[a3.map {|key, value| [key, value]}]
puts "h3: #{h3.inspect}"
</code></pre>
<p>This preserves the Array key:</p>
<pre><code>h3: {["orange", "seedless"]=>3, "apple"=>1, "banana"=>2}
</code></pre>
|
<p>If the numeric values are sequential indexes, then we could have simpler ways...
Here's my code submission. My Ruby is a bit rusty</p>
<pre><code> input = ["cat", 1, "dog", 2, "wombat", 3]
 hash = Hash.new
 input.each_with_index do |item, index|
   hash[item] = input[index + 1] if index.even?
 end
 hash #=> {"cat"=>1, "dog"=>2, "wombat"=>3}
</code></pre>
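<p>For reference, a concise modern-Ruby approach (Ruby 2.1 or later) handles both input shapes; this is a sketch rather than something from the answers above:</p>

```ruby
pairs  = ["apple", 1, "banana", 2]        # flat form
nested = [["apple", 1], ["banana", 2]]    # nested form

flat_hash   = pairs.each_slice(2).to_h    # slice into key/value pairs, then convert
nested_hash = nested.to_h                 # an array of pairs converts directly

puts flat_hash == nested_hash             # true
```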
| 6,067
|
<p>Here at work, we often need to find a string from the list of strings that is the closest match to some other input string. Currently, we are using Needleman-Wunsch algorithm. The algorithm often returns a lot of false-positives (if we set the minimum-score too low), sometimes it doesn't find a match when it should (when the minimum-score is too high) and, most of the times, we need to check the results by hand. We thought we should try other alternatives.</p>
<p>Do you have any experiences with the algorithms?
Do you know how the algorithms compare to one another?</p>
<p>I'd really appreciate some advice.</p>
<p>PS: We're coding in C#, but you shouldn't care about it - I'm asking about the algorithms in general.</p>
<hr>
<p>Oh, I'm sorry I forgot to mention that.</p>
<p>No, we're not using it to match duplicate data. We have a list of strings that we are looking for - we call it search-list. And then we need to process texts from various sources (like RSS feeds, web-sites, forums, etc.) - we extract parts of those texts (there are entire sets of rules for that, but that's irrelevant) and we need to match those against the search-list. If the string matches one of the strings in search-list - we need to do some further processing of the thing (which is also irrelevant).</p>
<p>We can not perform the normal comparison, because the strings extracted from the outside sources, most of the times, include some extra words etc.</p>
<p>Anyway, it's not for duplicate detection.</p>
|
<p>OK, Needleman-Wunsch (NW) is a classic end-to-end ("global") aligner from the bioinformatics literature. It was long ago available as "align" and "align0" in the FASTA package. The difference was that the "0" version wasn't as biased about avoiding end-gapping, which often made it easier to favor high-quality internal matches. Smith-Waterman, I suspect you're aware, is a local aligner and is the original basis of BLAST. FASTA had its own local aligner as well that was slightly different. All of these are essentially heuristic methods for estimating Levenshtein distance relevant to a scoring metric for individual character pairs (in bioinformatics, often given by Dayhoff/"PAM", Henikoff&amp;Henikoff, or other matrices and usually replaced with something simpler and more reasonably reflective of replacements in linguistic word morphology when applied to natural language).</p>
<p>Let's not be precious about labels: Levenshtein distance, as referenced in practice at least, is basically edit distance and you have to estimate it because it's not feasible to compute it generally, and it's expensive to compute exactly even in interesting special cases: the water gets deep quick there, and thus we have heuristic methods of long and good repute.</p>
<p>Now as to your own problem: several years ago, I had to check the accuracy of short DNA reads against reference sequence known to be correct and I came up with something I called "anchored alignments".</p>
<p>The idea is to take your reference string set and "digest" it by finding all locations where a given N-character substring occurs. Choose N so that the table you build is not too big but also so that substrings of length N are not too common. For small alphabets like DNA bases, it's possible to come up with a perfect hash on strings of N characters and make a table and chain the matches in a linked list from each bin. The list entries must identify the sequence and start position of the substring that maps to the bin in whose list they occur. These are "anchors" in the list of strings to be searched at which an NW alignment is likely to be useful.</p>
<p>When processing a query string, you take the N characters starting at some offset K in the query string, hash them, look up their bin, and if the list for that bin is nonempty then you go through all the list records and perform alignments between the query string and the search string referenced in the record. When doing these alignments, you line up the query string and the search string <em>at</em> the anchor and extract a substring of the search string that is the same length as the query string and which contains that anchor at the same offset, K.</p>
<p>If you choose a long enough anchor length N, and a reasonable set of values of offset K (they can be spread across the query string or be restricted to low offsets) you should get a subset of possible alignments and often will get clearer winners. Typically you will want to use the less end-biased align0-like NW aligner.</p>
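<p>The digest-and-lookup idea can be sketched in Python (illustrative only; the function names and toy parameters are mine, not from the original system):</p>

```python
from collections import defaultdict

def build_anchor_table(search_strings, n):
    """Map every length-n substring to (string index, offset) pairs."""
    table = defaultdict(list)
    for idx, s in enumerate(search_strings):
        for pos in range(len(s) - n + 1):
            table[s[pos:pos + n]].append((idx, pos))
    return table

def candidate_anchors(table, query, n, offsets):
    """Return (string index, string offset, query offset K) anchor hits."""
    hits = []
    for k in offsets:
        for idx, pos in table.get(query[k:k + n], []):
            hits.append((idx, pos, k))
    return hits
```

Each hit tells you where to line the query up against a search string before running the (more expensive) NW alignment on just that window.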
<p>This method tries to boost NW a bit by restricting its input, and this has a performance gain because you do fewer alignments and they are more often between similar sequences. Another good thing to do with your NW aligner is to allow it to give up after some amount or length of gapping occurs to cut costs, especially if you know you're not going to see or be interested in middling-quality matches.</p>
<p>Finally, this method was used on a system with small alphabets, with K restricted to the first 100 or so positions in the query string and with search strings much larger than the queries (the DNA reads were around 1000 bases and the search strings were on the order of 10000, so I was looking for approximate substring matches justified by an estimate of edit distance specifically). Adapting this methodology to natural language will require some careful thought: you lose on alphabet size but you gain if your query strings and search strings are of similar length.</p>
<p>Either way, allowing more than one anchor from different ends of the query string to be used simultaneously might be helpful in further filtering data fed to NW. If you do this, be prepared to possibly send overlapping strings each containing one of the two anchors to the aligner and then reconcile the alignments... or possibly further modify NW to emphasize keeping your anchors mostly intact during an alignment using penalty modification during the algorithm's execution.</p>
<p>Hope this is helpful or at least interesting.</p>
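<p>For readers who haven't met Needleman-Wunsch before, here is a minimal score-only Python sketch of the global-alignment recurrence, with a toy scoring scheme rather than the substitution matrices discussed above:</p>

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the standard dynamic-programming recurrence."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(rows):            # aligning a prefix against nothing costs gaps
        score[i][0] = i * gap
    for j in range(cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                  # substitute/match
                              score[i - 1][j] + gap, # gap in b
                              score[i][j - 1] + gap) # gap in a
    return score[-1][-1]
```

A full aligner would also keep a traceback to recover the alignment itself; the score alone is often enough for filtering candidates.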
|
<p>To expand on Cd-MaN's answer, it sounds like you're facing a normalization problem. It isn't obvious how to handle scores between alignments with varying lengths.</p>
<p>Given what you are interested in, you may want to obtain p-values for your alignment. If you are using Needleman-Wunsch, you can obtain these p-values using Karlin-Altschul statistics <a href="http://www.ncbi.nlm.nih.gov/BLAST/tutorial/Altschul-1.html" rel="nofollow">http://www.ncbi.nlm.nih.gov/BLAST/tutorial/Altschul-1.html</a></p>
<p>BLAST can do local alignments and evaluate them using these statistics. If you are concerned about speed, this would be a good tool to use.</p>
<p>Another option is to use HMMER. HMMER uses Profile Hidden Markov Models to align sequences. Personally, I think this is a more powerful approach since it also provides positional information. <a href="http://hmmer.janelia.org/" rel="nofollow">http://hmmer.janelia.org/</a></p>
| 7,176
|
<p>How am I supposed to get the <code>IDENTITY</code> of an inserted row?</p>
<p>I know about <code>@@IDENTITY</code> and <code>IDENT_CURRENT</code> and <code>SCOPE_IDENTITY</code>, but don't understand the implications or impacts attached to each.</p>
<p>Can someone please explain the differences and when I would be using each?</p>
|
<ul>
<li><p><a href="http://msdn.microsoft.com/en-us/library/ms187342.aspx" rel="noreferrer"><code>@@IDENTITY</code></a> returns the last identity value generated for any table in the current session, across all scopes. <strong>You need to be careful here</strong>, since it's across scopes. You could get a value from a trigger, instead of your current statement.</p></li>
<li><p><a href="http://msdn.microsoft.com/en-us/library/ms190315.aspx" rel="noreferrer"><code>SCOPE_IDENTITY()</code></a> returns the last identity value generated for any table in the current session and the current scope. <strong>Generally what you want to use</strong>.</p></li>
<li><p><a href="http://msdn.microsoft.com/en-us/library/ms175098.aspx" rel="noreferrer"><code>IDENT_CURRENT('tableName')</code></a> returns the last identity value generated for a specific table in any session and any scope. This lets you specify which table you want the value from, in case the two above aren't quite what you need (<strong>very rare</strong>). Also, as @<a href="https://stackoverflow.com/questions/42648/best-way-to-get-identity-of-inserted-row#42665">Guy Starbuck</a> mentioned, "You could use this if you want to get the current IDENTITY value for a table that you have not inserted a record into."</p></li>
<li><p>The <a href="http://msdn.microsoft.com/en-us/library/ms177564.aspx" rel="noreferrer"><code>OUTPUT</code> clause</a> of the <code>INSERT</code> statement will let you access every row that was inserted via that statement. Since it's scoped to the specific statement, it's <strong>more straightforward</strong> than the other functions above. However, it's a little <strong>more verbose</strong> (you'll need to insert into a table variable/temp table and then query that) and it gives results even in an error scenario where the statement is rolled back. That said, if your query uses a parallel execution plan, this is the <strong>only guaranteed method</strong> for getting the identity (short of turning off parallelism). However, it is executed <em>before</em> triggers and cannot be used to return trigger-generated values.</p></li>
</ul>
|
<p>After your insert statement, add the following, making sure the table name is the one you just inserted into. It returns the identity of the row affected just now by your insert statement. </p>
<pre><code>IDENT_CURRENT('tableName')
</code></pre>
| 6,420
|
<p>Has anyone run into issues serializing components into a file and reading them back, specifically in the area where the component vendor upgrades the VCL components. For example a file serialized with DelphiX and then years later read back with delphiY. Do the serialization formats change and if so what can be done to prevent errors reading in the componets when upgrading.</p>
|
<p>The built-in RTTI-based system for serializing published properties is vulnerable to changes in the components. Going forwards is manageable as long as old properties are kept in new objects, i.e. you leave the property interface as-is but can toss away the contents if you like. Going backwards is worse: a property saved by a newer version can't be loaded by an older one, and that will be a problem.</p>
<p>There are components / libs (<a href="http://www.torry.net/quicksearchd.php?String=RTTI&Title=Yes" rel="nofollow noreferrer">http://www.torry.net/quicksearchd.php?String=RTTI&Title=Yes</a>) that can add serialization in XML format and this may help a bit as you can choose to skip content you don't know.</p>
<p>You still need to be mindful about how you design your published content and should probably find a way to "ignore but propagate" content that your current version don't understand. This will allow you to open and change a file in a newer format while attempting to keep newer attributes, instead of stripping them.</p>
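<p>Language aside, the "ignore but propagate" idea can be sketched like this (Python is used purely for illustration, and the property names are invented):</p>

```python
KNOWN_PROPERTIES = {"Width", "Height", "Caption"}

def load_component(saved):
    """Apply the properties this version understands; keep the rest
    opaque so a re-save doesn't strip newer-version content."""
    applied, propagated = {}, {}
    for name, value in saved.items():
        (applied if name in KNOWN_PROPERTIES else propagated)[name] = value
    return applied, propagated

def save_component(applied, propagated):
    """Write back both the edited properties and the opaque ones."""
    return {**applied, **propagated}
```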
|
<p>Formats will defintely change, as vendors will add features to their components. Serialization simply loops over all published properties and saves them to a stream. When they are read back, each of the properties that is read from the stream will be set back to the component. If the property does not exist anymore, you have a problem. I don't think you can do anything about that besides some basic exception handling.</p>
<p>Best way to guarantee compatibility is to do your own serialization.</p>
| 9,960
|
<p>I am using a Prusa i3 printer and just recently I have been having problems with the first layer. Here is a picture of what it looks like.</p>
<p><a href="https://i.stack.imgur.com/ShfZn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ShfZn.jpg" alt="First extruded layer on Prusa i3"></a> </p>
<p>I have tried levelling the bed and it seems to be level. I was using a glue stick before but it started bubbling on that, so I switched to tape and that worked once but it is not working now. I wiped the tape with rubbing alcohol and it happened again. </p>
<p>What else can I do to get the first layer right?</p>
|
<p>Oh wow. You are way overcooking your PLA. It is bubbling because it is BOILING. I can tell you have it too hot because the lines are all blurred, as if they are liquid and pooled together. </p>
<p>That said, I can't give you details because you didn't post settings. I would start by turning the temps to 170 °C for the hotend and 60 °C for the plate. Also, the glue stick itself should never bubble. It is unclear if that is an issue.</p>
|
<p>I had this problem and I had to adjust the nozzle temperature 10 °C below the minimum temperature of 190 °C!</p>
| 517
|
<p>I recently bought a BigTreeTech SKR V1.3 and uncommented <code>REPRAP_DISCOUNT_FULL_GRAPHIC_SMART_CONTROLLER</code> and clicked the upload button but faced an error that says:</p>
<pre><code>Marlin\src\lcd\ultralcd.cpp:767:9: error: 'touch_buttons' was not declared in this scope
if (touch_buttons) {
^~~~~~~~~~~~~
Marlin\src\lcd\ultralcd.cpp:767:9: note: suggested alternative: 'buttons'
if (touch_buttons) {
^~~~~~~~~~~~~
buttons
Marlin\src\lcd\ultralcd.cpp:805:5: error: 'else' without a previous 'if'
else wait_for_unclick = false;
^~~~
*** [.pio\build\LPC1768\src\src\lcd\ultralcd.cpp.o] Error 1
</code></pre>
<p>I am not sure what above message means, but can anyone else shed some light on why I am receiving these errors?</p>
<p>FWIW, I am using Marlin 2.</p>
|
<p>There is a temporary solution which I have found here, on the reprap forums, <a href="https://reprap.org/forum/read.php?13,857852,857876#msg-857876" rel="nofollow noreferrer">Re: Upload to the board failed after LCD enabled</a>:</p>
<blockquote>
<p>An official fix has been posted. Grab the new ultralcd.cpp from [<a href="https://github.com/MarlinFirmware/Marlin/blob/bugfix-2.0.x/Marlin/src/lcd/ultralcd.cpp" rel="nofollow noreferrer">github.com</a>]</p>
</blockquote>
<p>Apparently the sources contained a bug which was fixed later; downloading the new version fixed the problem.</p>
|
<p><a href="http://marlinfw.org/meta/download/" rel="nofollow noreferrer">Marlin 2.0.0 is in an <strong><em>Alpha</em></strong> state.</a> 'Alpha' is a state before <em>Beta</em>, meaning, it's not hot off the presses, it isn't even <em>off the presses</em>. You need to go back to the most recent "stable" version of Marlin, which is 1.1.9 (found on the same page as the link above). This should <em>most likely</em> solve the errors and problems you're seeing.</p>
| 1,433
|
<p>I know that here at SE I cant ask recommendation of products and so. I have a printer that I wanna buy and I want to know if it's good. Where should I post this question in order to get some smart people to take a look?</p>
|
<p>Great question! 3DPrinting SE attracts a lot of new Stack Exchange network users, which unfortunately can come with questions that don't always meet the Q&A style for the network that this site tries to uphold.</p>
<p>Questions like this are probably best asked in the <a href="http://chat.stackexchange.com/rooms/34216/the-hotbed-3d-printing">Chat</a> room. Currently, it's not very active, but hopefully if we have a few people interested in these more "off-topic" questions the activity will pick up. Pings are going to be a very useful tool if we're going to try and utilize the chat window more. Pinging some of the highly active users may help get quality answers to those off-topic questions.</p>
|
<p>Some of your question could be acceptable. What you want to avoid is "questions which are too broad, unclear, incomplete or primarily opinion-based". That still leaves lots of other acceptable questions to ask like what printers have a specific feature or whether a specific printer has a specific feature. Note the word "specific".</p>
<p>There is also a <a href="https://hardwarerecs.stackexchange.com/">Hardware Recommendations</a> SE Beta that may work for you.</p>
| 39
|
<p>This ties in with <a href="https://3dprinting.stackexchange.com/questions/394/when-building-a-ramps-1-4-based-printer-can-i-safely-plugin-just-some-of-the-co">my other question about components</a> and the <a href="https://3dprinting.stackexchange.com/questions/389/in-the-standard-pc-cable-wire-that-goes-from-the-wall-outlet-to-the-switching-po">other question about electricity</a>; how can I check to see how many amps are being pulled? Can I check one component at a time to make sure I'm not going over the limit, and then add them all in together once I've summed the amps to make sure it's safe to hook everything up? The amps shouldn't change, right? </p>
<p>What settings should my multimeter be set to? And to check how much it's pulling, do I just put the multimeter's leads on the green terminals on RAMPS 1.4?</p>
|
<p>To measure amps (current), the meter has to be wired in series with the item to be measured (for this reason, ammeters are designed to have very low resistance). </p>
<p>This has the down-side that you have to disconnect the component to put the meter in line with it. That makes it hard to do the "check a component at a time" method you mentioned.</p>
<p>An ammeter measures <em>actual</em> current flow, so you really can't test a component for it in isolation. Components can have wildly different "current draw" depending on the situation. For example, motor current varies with torque and speed; current through a resistor varies with the voltage across it; and so on.</p>
<p>There are special "clamp-on" current meters that just clamp around a conductor and report the current by using induction. Very nice if you have one.</p>
<p>If you just want the total current the entire RAMPS board is pulling, put the ammeter between the power supply and the RAMPS power input connection(s). Be <em>very</em> sure not to have the meter set to read volts or ohms when you do this (it might or might not survive).</p>
|
<p>The easiest method is to use a clamp-on amp meter on either the hot or neutral of your power supply. </p>
<p>In most cases clamp on meters only work with AC xor DC power so you would only be able to get the reading on one side of the power supply.</p>
<p>Multiply this number by the voltage and you get the wattage.</p>
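<p>A quick sanity check on that arithmetic, with made-up readings:</p>

```python
volts = 12.0           # DC supply voltage
amps = 8.5             # current read off the clamp meter
watts = volts * amps   # power drawn from the supply
print(watts)           # 102.0
```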
| 170
|
<p>I'm using Visual C++ 2003 to debug a program remotely via TCP/IP.</p>
<p>I had set the Win32 exception c00000005, "Access violation," to break into the debugger when thrown. Then, I set it back to "Use parent setting." The setting for the parent, Win32 Exceptions, is to continue when the exception is thrown.</p>
<p>Now, when I debug the program, it breaks each time that exception is thrown, forcing me to click Continue to let it keep debugging. How do I get it to stop breaking like this?</p>
|
<p>I'd like to support <a href="https://stackoverflow.com/questions/8263/i-cant-get-my-debugger-to-stop-breaking-on-first-chance-exceptions#8304">Will Dean's answer</a></p>
<p>An access violation sounds like an actual bug in your code. It's not something I'd expect the underlying C/++ Runtime to be throwing and catching internally.</p>
<p>The 'first-chance-exceptions' feature is so you can intercept things which get 'caught' in code, using the debugger, and have a look. If there's nothing 'catching' that exception (which makes sense, why on earth would you catch and ignore access violations?), then it will trigger the debugger regardless of what options you may have set.</p>
|
<p><kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>E</kbd> (or Debug\Exceptions)</p>
<p>From there you can select which exceptions break.</p>
| 2,972
|
<p>1, Create and build a default Windows Forms project and look at the project properties. It says that the project is targetting .NET Framework 2.0. </p>
<p>2, Create a Setup project that installs just the single executable from the Windows Forms project. </p>
<p>3, Run that installer and it always says that it needs to install .NET 3.5 SP1 on the machine. But it obviously only really needs 2.0 and so I do not want customers to be forced to install .NET 3.5 when they do not need it. They might already have 2.0 installed and so forcing the upgrade is not desirable!</p>
<p>I have looked at the prerequisites of the setup project and checked the .NET Framework 2.0 entry and all the rest are unchecked. So I cannot find any reason for this strange runtime requirement. Anybody know how to resolve this one?</p>
|
<p>No need to edit the file manually. The hint is just above the GUID there: "LaunchCondition".</p>
<ol>
<li>Right click the setup project</li>
<li>Select "View" -> "Launch Conditions"</li>
<li>Expand the "Launch Conditions" node if it isn't already expanded</li>
<li>Right click the ".NET Framework" node and select "Properties Window"</li>
<li>In the "Properties" window change the "Version" value to the appropriate value, in your case 2.0.50727.</li>
</ol>
<p>I'm not sure why this isn't set appropriately from the start.</p>
|
<p>I've always used Inno Setup to deploy my projects. It's very fast, and very customizable. There's almost nothing you can't do with a bit of scripting. Inno Setup can detect which version of the Framework is installed, and prompt the user if the correct version is not present (with scripting).</p>
<p>I recommend that you try alternative deployment tools like Inno Setup and see if you like them. There's a wealth of opportunity out there.</p>
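<p>As a rough sketch of the kind of scripting meant here: the registry key below is the standard marker that .NET 2.0 is installed, but treat the exact message text and flow as assumptions rather than a finished installer:</p>
<pre><code>[Code]
function InitializeSetup(): Boolean;
begin
  // .NET Framework 2.0 writes this registry key when installed
  Result := RegKeyExists(HKLM,
    'SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727');
  if not Result then
    MsgBox('This application requires the .NET Framework 2.0.' + #13#10 +
           'Please install it and then run this setup again.',
           mbError, MB_OK);
end;
</code></pre>
<p>Returning <code>False</code> from <code>InitializeSetup</code> aborts the install, so the user is never forced to pull in 3.5.</p>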
| 9,405
|
<p>I want to assign the decimal variable "trans" to the double variable "this.Opacity".</p>
<pre class="lang-cs prettyprint-override"><code>decimal trans = trackBar1.Value / 5000;
this.Opacity = trans;
</code></pre>
<p>When I build the app it gives the following error:</p>
<blockquote>
<p>Cannot implicitly convert type decimal to double</p>
</blockquote>
|
<p>An explicit cast to <code>double</code> like this isn't necessary:</p>
<pre><code>double trans = (double) trackBar1.Value / 5000.0;
</code></pre>
<p>Identifying the constant as <code>5000.0</code> (or as <code>5000d</code>) is sufficient:</p>
<pre><code>double trans = trackBar1.Value / 5000.0;
double trans = trackBar1.Value / 5000d;
</code></pre>
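<p>The reason <code>5000.0</code> works is that with two integer operands the division truncates before any conversion happens. The same rule applies in C# and Java; a minimal Java sketch of the pitfall, where the value 2500 stands in for a hypothetical trackbar position:</p>

```java
public class DivisionPitfall {
    public static void main(String[] args) {
        int value = 2500;                   // stand-in for trackBar1.Value
        System.out.println(value / 5000);   // integer division truncates: prints 0
        System.out.println(value / 5000.0); // one double operand promotes: prints 0.5
    }
}
```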
|
<p><strong>Try the following code:</strong></p>
<h3>Decimal values</h3>
<pre><code>decimal d1 = 3234.3434m;
</code></pre>
<h3>Convert to double</h3>
<pre><code>double r1 = Decimal.ToDouble(d1);
</code></pre>
| 2,230
|
<p>I would like to be able to add custom commands/scripts to be executed during a print.</p>
<p>For example, I would like to write some software to take a picture and check that the print hasn't moved off the bed between layers. </p>
<p>Does anyone know if any of the available software/firmware allows custom scripts or calling back to the computer before continuing printing?</p>
<p>I am happy to build/buy a new printer if anyone knows a control board that has this sort of feature. </p>
|
<p>One possible solution is to add a layer-change script (Simplify3D has that out of the box) and then, using Marlin firmware, set a value on a digital pin that could trigger external actions.</p>
<blockquote>
<p><a href="https://forum.simplify3d.com/viewtopic.php?t=1959" rel="nofollow noreferrer">Layer Change G-Code</a>: I personally haven't had to use this, but I'm
sure that there are some excellent reasons/ideas to use for this. If
you'd like for a G-Code script to be inserted in-between each layer,
than you can simply place it in this tab. One interesting use of this,
is for the FlashForge Dreamer, to have the lights blink in between
each layer, however that can be a bit too much at times! </p>
</blockquote>
<p>The syntax for the M42 command is <code>M42 S&lt;value to be written to pin&gt; P&lt;pin number&gt;</code>. For example, to set digital pin 30 high, you would use <code>M42 S1 P30</code>.</p>
<blockquote>
<p><a href="http://forums.reprap.org/read.php?146,168237,173098" rel="nofollow noreferrer">The MARLIN firmware will</a> not enable you to change the status / write
values to any of the pins in use for things such as the heaters,
thermistors, end stops etc. The command will let you send values other
than 0 and 1 to any pins which can output analogue values. (0-255)</p>
</blockquote>
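<p>Putting the two pieces together, a layer-change script along these lines could pulse a pin to fire an external camera. The pin number and timings are assumptions for illustration; <code>M400</code> (wait for moves to finish) and <code>G4</code> (dwell) are standard Marlin commands:</p>
<pre><code>M400        ; wait for all buffered moves to finish before the photo
M42 P30 S1  ; set digital pin 30 high -- trigger the camera (pin 30 is an example)
G4 P500     ; dwell 500 ms to give the camera time to capture
M42 P30 S0  ; set the pin low again and resume printing
</code></pre>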
|
<p>I saw this 3 years ago; it is a <a href="https://www.youtube.com/watch?time_continue=33&v=L-_5KsAOVko" rel="nofollow noreferrer">closed-loop control</a>. I think it is better than taking a photo and then analyzing it with some kind of optical-recognition software, which introduces its own possible failure modes.</p>
<p>The video also links to the <a href="https://www.youmagine.com/designs/dc-motor-closed-loop-control-software" rel="nofollow noreferrer">additional resources</a> the 3D printer will need, according to its author.</p>
| 877
|
<p>I have a project that I would like to start beta testing soon; it is a PHP/MySQL site for programmers. I have beta tested sites before, but it was always in-house, and I have never done betas that could/should be seen by someone other than those associated with the project.</p>
<p>How should I go about starting the beta, and what problems, aside from those the beta is supposed to expose, should I be prepared for?</p>
|
<p>First, accept the fact that problems with your app (code, usability, etc.) will be discovered.</p>
<p>Then, make sure you have a clear way for users to communicate with you (form mail, email, uservoice, etc.). The easier you make this the better. For example, there is a uservoice link on every page of SO.</p>
<p>One philosophy I strongly believe in: if it's confusing to your users, it's broken. Be willing to change your app (no matter how "beautiful" the design may be) if your users are confused or not liking it. This doesn't mean you have to cave on your decisions, just that you need to consider revisions to improve the user experience.</p>
|
<p>Beta testing is a part of acceptance testing.
This type of testing assures the customer of the functionality and quality of the product.
Beta testing is done at the customer's end, in an uncontrolled environment.
In beta testing, customer-driven test cases are written, and the customer can enter whatever they want to enter.
Here the developer doesn't have any control over the testing approach.</p>
| 6,021