| instruction | chosen | rejected | __index_level_0__ |
|---|---|---|---|
<p>I'm using VS2008 SP1 - I don't know if this would have happened before I applied SP1, as I never tried it before yesterday. I attempted to debug a fairly simple XSLT file and got this crash:</p>
<hr>
<h2>Microsoft Visual Studio</h2>
<p>Unexpected error encountered. It is recommended that you restart the application as soon as possible.</p>
<p>Error: Unspecified error</p>
<p>File: vsee\pkgs\vssprovider\sccprj.cpp</p>
|
<p>We have reproduced this issue and will fix it in the next release of Visual Studio.</p>
<p>You are welcome to use <a href="http://connect.microsoft.com/" rel="nofollow noreferrer">Microsoft Connect</a> site for reporting any issues related to Visual Studio.</p>
<p>Best regards,<br/>
Anton Lapounov<br/>
Data Programmability Team @ Microsoft</p>
|
<p>The same problem here: after the stylesheet finishes processing, I get "Unspecified error" (and everything seems OK after closing the error message box). Setting the source control plugin to "None" in the VS options gets rid of the problem. </p>
| 3,342
|
<p>I have an activity that has a TabHost containing a set of TabSpecs each with a listview containing the items to be displayed by the tab. When each TabSpec is created, I set an icon to be displayed in the tab header.</p>
<p>The TabSpecs are created in this way within a <code>setupTabs()</code> method which loops to create the appropriate number of tabs:</p>
<pre><code>TabSpec ts = mTabs.newTabSpec("tab");
ts.setIndicator("TabTitle", iconResource);
ts.setContent(new TabHost.TabContentFactory() {
    public View createTabContent(String tag) {
        ...
    }
});
mTabs.addTab(ts);
</code></pre>
<p>There are a couple of instances where I want to be able to change the icon which is displayed in each tab during the execution of my program. Currently, I am deleting all the tabs, and calling the above code again to re-create them.</p>
<pre><code>mTabs.getTabWidget().removeAllViews();
mTabs.clearAllTabs(true);
setupTabs();
</code></pre>
<p>Is there a way to replace the icon that is being displayed without deleting and re-creating all of the tabs?</p>
|
<p>The short answer is, you're not missing anything. The Android SDK doesn't provide a direct method to change the indicator of a <code>TabHost</code> after it's been created. The <code>TabSpec</code> is only used to build the tab, so changing the <code>TabSpec</code> after the fact will have no effect.</p>
<p>I think there's a workaround, though. Call <code>mTabs.getTabWidget()</code> to get a <code>TabWidget</code> object. This is just a subclass of <code>ViewGroup</code>, so you can call <code>getChildCount()</code> and <code>getChildAt()</code> to access individual tabs within the <code>TabWidget</code>. Each of these tabs is also a View, and in the case of a tab with a graphical indicator and a text label, it's almost certainly some other <code>ViewGroup</code> (maybe a <code>LinearLayout</code>, but it doesn't matter) that contains an <code>ImageView</code> and a <code>TextView</code>. So with a little fiddling with the debugger or <code>Log.i</code>, you should be able to figure out a recipe to get the <code>ImageView</code> and change it directly.</p>
<p>The downside is that if you're not careful, the exact layout of the controls within a tab could change and your app could break. Your initial solution is perhaps more robust, but then again it might lead to other unwanted side effects like flicker or focus problems.</p>
|
<p>Try This:</p>
<pre><code>tabHost.setOnTabChangedListener(new OnTabChangeListener() {
    public void onTabChanged(String tabId) {
        if (TAB_MAP.equals(tabId)) {
            ImageView iv = (ImageView) tabHost.getTabWidget().getChildAt(0).findViewById(android.R.id.icon);
            iv.setImageDrawable(getResources().getDrawable(R.drawable.tab_map_black));
            iv = (ImageView) tabHost.getTabWidget().getChildAt(1).findViewById(android.R.id.icon);
            iv.setImageDrawable(getResources().getDrawable(R.drawable.tab_list_white));
        } else if (TAB_LIST.equals(tabId)) {
            ImageView iv = (ImageView) tabHost.getTabWidget().getChildAt(0).findViewById(android.R.id.icon);
            iv.setImageDrawable(getResources().getDrawable(R.drawable.tab_map_white));
            iv = (ImageView) tabHost.getTabWidget().getChildAt(1).findViewById(android.R.id.icon);
            iv.setImageDrawable(getResources().getDrawable(R.drawable.tab_list_black));
        }
    }
});
</code></pre>
| 5,748
|
<p>I have a home built RepRap with all sides open..</p>
<p>Would there be any advantage to enclosing the print area in acrylic?</p>
|
<p>It is hard to tell whether you personally should enclose your printer. However, you asked for the advantages and I will name some of them on which one can base a decision.</p>
<p>A 3D printer enclosure</p>
<ul>
<li>helps to keep the temperature of the whole print at controlled levels, if you use a heating element, thermocouple and <a href="https://en.wikipedia.org/wiki/PID_controller" rel="noreferrer">PID regulator</a>. This is one of the most direct uses of the enclosure, and one that can be achieved by almost no other means. One could sloppily say it does for the whole print what the heatbed does for the initial layers. Controlling the temperature can be beneficial for layer adhesion and can help against delamination problems, going as far as fixing cracks and complete delamination (thanks to @J. Roibal for bringing these cracks to my attention in the comments)</li>
<li>keeps <a href="http://hackaday.com/2016/02/01/3d-printing-fumes-new-science/" rel="noreferrer">dangerous fumes</a> contained. There is a scientific study on exactly that topic, published in <a href="http://www.sciencedirect.com/science/article/pii/S1352231013005086" rel="noreferrer">Atmospheric Environment 79, titled 'Ultrafine particle emissions from desktop 3D printers'</a>. You can embed a <a href="http://www.thingiverse.com/thing:339128" rel="noreferrer">filter with a fan</a> in your housing to scrub the air of the dangerous fumes created when melting certain plastics. It could either circulate the air inside the chamber or push the filtered air out of the housing. This is another use which cannot be achieved otherwise (as far as I know).</li>
<li>can keep humidity away from your printer. This is helpful for filaments that attract water (and don't print well in that state). Do the same for stored filament, too, adding some silica gel to regulate humidity (thanks to @Obmerk Kronen in the comments)</li>
<li>minimizes losses from your heatbed. This happens in at least two ways: the heated bed also heats its surroundings, i.e. the inside of the enclosure, and by raising the ambient temperature, the temperature difference and hence the heat loss is reduced. Wind, which introduces large fluctuations in the transferred (i.e. lost) heat, is also kept out. In that sense, it also</li>
<li>shuts out any wind, for print-temperature stability. Dust and particles that could be blown onto the print are also shut out (thanks for the addition of dust/particles: @Obmerk Kronen). This is a benefit that comes without having a heated chamber or filter.</li>
<li>helps to keep the printer clean between uses. Your axes will thank you for being free of dust.</li>
<li>reduces smell and noise. If you use the printer in your living area, that alone can be a great benefit.</li>
<li>makes sure that your printer is safe during storage; nothing will fall on it.</li>
<li>can look pretty nice and add to the style of your printer, <a href="http://www.instructables.com/id/3d-Printer-Enclosure-from-Upcycled-Furniture/" rel="noreferrer">even if self-made</a> ;-) </li>
</ul>
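<p>To put rough numbers on the heat-loss point above: convective loss scales roughly with the bed-to-ambient temperature difference (Newton's law of cooling). A quick sketch with illustrative, assumed temperatures:</p>

```python
# Heat loss from the bed scales roughly with the bed-to-ambient
# temperature difference (Newton's law of cooling).
bed_temp = 60.0      # deg C, typical PLA bed temperature
open_room = 20.0     # deg C, open-frame printer in a room (assumed)
enclosed = 40.0      # deg C, warmed enclosure air (assumed)

loss_open = bed_temp - open_room      # proportional to 40
loss_enclosed = bed_temp - enclosed   # proportional to 20
print(loss_enclosed / loss_open)      # -> 0.5: roughly half the loss
```

<p>The exact numbers depend on your printer and room, but the proportionality is the point: a warmer chamber directly cuts heatbed losses.</p>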
<p>There are obviously also downsides: the time and money needed to build it, the increased space taken up by the printer, and, if not designed well for the purpose (which it should be), increased difficulty in repairing and maintaining the printer itself (i.e. getting the printer out of the enclosure).</p>
|
<p>If you will be printing PLA, <strong>no</strong>, or at least not without active cooling of the chamber. PLA's glass transition temperature is sufficiently low that the ambient temperature matters a lot to whether layer N can cool enough before layer N+1 is laid, in order to have a firm surface to extrude against and bond to. This especially matters with overhangs and complex geometry. If you use a heated bed for adhesion, the chamber temperature will tend to reach nearly the bed temperature, keeping your print soft the entire time. In a worst case, it might even cause heat creep and jam the heatbreak. Even if you're not using a heated bed, waste heat from the hotend will warm the chamber somewhat; expect temperatures of at least 35-40 ˚C rather than a (preferred) ambient 19-22 ˚C.</p>
| 310
|
<p>Our dev shop currently uses Visual SourceSafe. We all know how that could end up (badly), so we're investigating other systems. First up is Perforce. Does anyone have experience with using it and its integration into Visual Studio (2003/2005/2008)? Is it as good as any other, or is it pretty solid with good features, comparatively?</p>
|
<p>I used Perforce at my last three jobs (at my current job I'm using Subversion, which I don't like nearly as much). I'm a big fan of Perforce, and coming from SourceSafe it will seem like Nirvana. Just getting atomic checkins will be a big boost for your company. Otherwise, Perforce is fast, it has good tools, and the workflow is simple for things like merges and integrations. I wholeheartedly recommend it. It may not be all new and flashy like the latest distributed VCSs, but honestly, I prefer the client/server model for its speed, especially if you're working with people in other countries who may have slow connections to you.</p>
<p>The Visual Studio integration is pretty good, but it has a few irritating issues. If you run another Perforce client at the same time (like P4V), it's very poor at staying in sync with the other client in terms of showing which files are currently checked in/out. You generally have to shut down Visual Studio and load the project again if you want it to sync correctly. However, the stale status doesn't actually prevent checkins/checkouts/updates from working correctly; it just means you can be fooled into thinking something is in a different state than it actually is while you're in Visual Studio. The Perforce clients will always show the correct status, as they sync continually with the database.</p>
<p>Also, on occasion you'll find you need to work "offline" (not connected to the Perforce database for some reason) and when you load the project again the next time, your Perforce bindings may be lost and you'll have to rebind each project individually. If you work with a solution that contains many projects this can be a big pain in the patoot. Same goes for when you first check out a solution, binding to Perforce is needed before the integration occurs.</p>
|
<p>I haven't used Perforce, but I have found moving to Team Foundation Server as one of the best options while working with Visual Studio.</p>
| 4,634
|
<p>Anyone doing any work using "offline" forms? We have an application that requires inputting data from outside our company. I was thinking about sending a form out via email, allowing the form to be filled out then sent back. Obviously a web application would be the best solution, but management doesn't seem ready to build the infrastructure and security to support that. I've read a little about PDF forms is that a good solution or are there other solutions?</p>
|
<p>Have you considered InfoPath? Forms can be created and distributed through email, and the data can then be collated automatically.</p>
<p>Also, consider using Google Spreadsheets with Google Forms. It's free and infrastructure is outsourced.</p>
<p>PDF forms can work as well.</p>
|
<p>Another possibility is to use <a href="http://www.microsoft.com/Sharepoint/default.mspx" rel="nofollow noreferrer">Microsoft SharePoint</a>. If your company uses Microsoft Office for the people filling out the forms you're referring to, you could deploy an Office-based solution and gather information with <a href="http://www.microsoft.com/sharepoint/prodinfo/what.mspx" rel="nofollow noreferrer">SharePoint Server</a>.</p>
<p><a href="http://www.microsoft.com/sharepoint/capabilities/bpf/overview.mspx" rel="nofollow noreferrer">Check this link out</a>.</p>
| 4,065
|
<p>I'm in the process of maintaining a Ruby on Rails app and am looking for an easy way to find the hostname or IP address of the box I'm on (since it's a VM and new instances may have different hostnames or IP addresses). Is there a quick and easy way to do this in Ruby on Rails?</p>
<p>Edit: The answer below is correct but the clarification Craig provided is useful (see also provided link in answer):</p>
<blockquote>
<p>The [below] code does NOT make a
connection or send any packets (to
64.233.187.99 which is google). Since UDP is a stateless protocol connect()
merely makes a system call which
figures out how to route the packets
based on the address and what
interface (and therefore IP address)
it should bind to. addr() returns an
array containing the family (AF_INET),
local port, and local address (which
is what we want) of the socket.</p>
</blockquote>
|
<p>From <a href="http://coderrr.wordpress.com/2008/05/28/get-your-local-ip-address/" rel="noreferrer">coderrr.wordpress.com</a>:</p>
<pre><code>require 'socket'
def local_ip
orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true # turn off reverse DNS resolution temporarily
UDPSocket.open do |s|
s.connect '64.233.187.99', 1
s.addr.last
end
ensure
Socket.do_not_reverse_lookup = orig
end
# irb:0> local_ip
# => "192.168.0.127"
</code></pre>
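<p>The same connect-without-sending trick works in any language with a sockets API. As a hedged illustration (not part of the original answer), a Python sketch of the identical approach; the address and port are arbitrary, since no packet is ever sent:</p>

```python
import socket

def local_ip():
    # connect() on a UDP socket transmits nothing; it only makes the
    # kernel pick a route, so getsockname() reports the local address
    # of the interface that route would use.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("64.233.187.99", 1))  # arbitrary external address
        return s.getsockname()[0]
    finally:
        s.close()
```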
|
<p>try: <a href="http://apidock.com/rails/v2.3.8/ActionController/Request/remote_ip" rel="nofollow noreferrer">Request.remote_ip</a></p>
<blockquote>
<p>remote_ip()</p>
<p>Determine originating IP address. REMOTE_ADDR is the standard but will
fail if the user is behind a proxy. HTTP_CLIENT_IP and/or
HTTP_X_FORWARDED_FOR are set by proxies so check for these if
REMOTE_ADDR is a proxy. HTTP_X_FORWARDED_FOR may be a comma-delimited
list in the case of multiple chained proxies; the last address which
is not trusted is the originating IP.</p>
</blockquote>
<p>Update:
Oops, sorry I misread the documentation.</p>
| 6,411
|
<p>From what I've seen, the <code>baseAddresses</code> element is ignored when hosting a WCF service in IIS. I understand that it's required when self-hosting, but is it harmful or even used when operating under IIS?</p>
<p>ex.</p>
<pre><code><system.serviceModel>
  <service blah blah blah>
    <host>
      <baseAddresses>
        <add baseAddress="http://localhost/blah" />
      </baseAddresses>
    </host>
  </service>
</system.serviceModel>
</code></pre>
<p>From what I've seen you can take a config file describing a service from one machine and use that on a completely different machine and it works fine. It looks as if IIS completely ignores this section.</p>
<p>Thanks,
kyle</p>
|
<p>As you have guessed, the baseAddresses element is completely ignored when hosting in IIS. The service's base address is determined by the web site & virtual directory into which your wcf service is placed.</p>
<p>Even when self-hosting, baseAddresses is not required. It is merely a convenience that avoids you having to enter a full address for each endpoint. If it is present, the endpoints can have relative addresses (relative to the base address, that is).</p>
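<p>For example, in a self-hosted scenario a relative endpoint address combines with the base address like this (service and contract names here are placeholders):</p>

```xml
<system.serviceModel>
  <service name="MyNamespace.MyService">
    <host>
      <baseAddresses>
        <add baseAddress="http://localhost:8080/blah" />
      </baseAddresses>
    </host>
    <!-- Relative address: resolves to http://localhost:8080/blah/soap.
         Under IIS the base address comes from the site/virtual directory
         instead, and this baseAddresses element is ignored. -->
    <endpoint address="soap"
              binding="basicHttpBinding"
              contract="MyNamespace.IMyService" />
  </service>
</system.serviceModel>
```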
|
<p>The base address is required for self-hosting; IIS/WAS hosts ignore it.</p>
| 8,013
|
<p>Each time a Python file containing a large quantity of static regular expressions is imported, CPU cycles are spent compiling the strings into their representative state machines in memory.</p>
<pre><code>a = re.compile("a.*b")
b = re.compile("c.*d")
...
</code></pre>
<p>Question: Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import?</p>
<p>Pickling the object simply does the following, causing compilation to happen anyway:</p>
<pre><code>>>> import pickle
>>> import re
>>> x = re.compile(".*")
>>> pickle.dumps(x)
"cre\n_compile\np0\n(S'.*'\np1\nI0\ntp2\nRp3\n."
</code></pre>
<p>And <code>re</code> objects are unmarshallable:</p>
<pre><code>>>> import marshal
>>> import re
>>> x = re.compile(".*")
>>> marshal.dumps(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: unmarshallable object
</code></pre>
|
<blockquote>
<p>Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import?</p>
</blockquote>
<p>Not easily. You'd have to write a custom serializer that hooks into the C <code>sre</code> implementation of the Python regex engine. Any performance benefits would be vastly outweighed by the time and effort required.</p>
<p>First, have you actually profiled the code? I doubt that compiling regexes is a significant part of the application's run-time. Remember that they are only compiled the first time the module is imported in the current execution -- thereafter, the module and its attributes are cached in memory.</p>
<p>If you have a program that basically spawns once, compiles a bunch of regexes, and then exits, you could try re-engineering it to perform multiple tests in one invocation. Then you could re-use the regexes, as above.</p>
<p>Finally, you could compile the regexes into C-based state machines and then link them in with an extension module. While this would likely be more difficult to maintain, it would eliminate regex compilation entirely from your application.</p>
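<p>To see why pickling doesn't help: a pickled compiled pattern stores only the source string and flags, so unpickling recompiles from scratch. A quick sketch:</p>

```python
import pickle
import re

pat = re.compile(r"a.*b")
data = pickle.dumps(pat)       # serializes just (pattern, flags)...
restored = pickle.loads(data)  # ...which re.compile()s again on load

print(restored.pattern)               # -> a.*b
print(bool(restored.match("axxxb")))  # -> True
```

<p>The round trip works, but it pays the full compilation cost at load time, which is exactly what the question hopes to avoid.</p>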
|
<p>The <a href="http://docs.python.org/lib/module-shelve.html" rel="nofollow noreferrer">shelve</a> module appears to work just fine:</p>
<pre><code>import re
import shelve
a_pattern = "a.*b"
b_pattern = "c.*d"
a = re.compile(a_pattern)
b = re.compile(b_pattern)
x = shelve.open('re_cache')
x[a_pattern] = a
x[b_pattern] = b
x.close()
# ...
x = shelve.open('re_cache')
a = x[a_pattern]
b = x[b_pattern]
x.close()
</code></pre>
<p>You can then make a nice wrapper class that automatically handles the caching for you so that it becomes transparent to the user... an exercise left to the reader.</p>
| 9,103
|
<p>I have a Wanhao Duplicator i3 v2 (A.K.A. Maker Select, Cocoon Create).</p>
<p>My extruder got clogged: I noticed that it wasn't extruding, so I stopped my print, removed the fan, heatsink and the extruder motor and I saw that the clog is at the heater block level.</p>
<p>This is what I have:</p>
<p><a href="https://i.stack.imgur.com/uDyF2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uDyF2.jpg" alt="Blocked extruder - no flash"></a></p>
<p><a href="https://i.stack.imgur.com/iL1mQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iL1mQ.jpg" alt="Blocked extruder - flash"></a></p>
<p>As you can see, there is some PLA clogged at the very beginning of the 1.75mm hole in the extruder block.</p>
<p>I can see there is something white inside (a stone? something that is <strong>not</strong> PLA?). I tried heating the extruder to 250°C and pushing with a hex tool, but the block did not move. </p>
<p>I solved a lot of clogged nozzle issues with the <a href="https://printrbot.zendesk.com/hc/en-us/articles/202100554-How-to-Unclog-the-Hotend-The-Cold-Pull" rel="nofollow noreferrer">cold pull method</a>, it always worked like a charm, this time I tried too but as a result the PLA filament broke and the "stone" is still there.</p>
<p>Does anyone know if there is a way to unclog the extruder in this situation?</p>
<p>Do I have to change it?</p>
|
<p>I'm sure this is not the best solution, and if you have some ethyl acetate you should try that before going "the hard way".</p>
<p>Ethyl acetate is a solvent for PLA, so if you soak the extruder in it, the PLA should dissolve and free the extruder.</p>
<p>That said, this is the "hard" solution that worked for me.</p>
<hr>
<p>You'll need:</p>
<ol>
<li>A drill</li>
<li>A 1mm drill bit</li>
<li>A 1.5mm drill bit</li>
</ol>
<p>Insert the 1mm drill bit into the drill. If your drill has a setting to reduce the drilling speed, take this to the minimum speed. If your drill doesn't have such options, you'll need to push the drill button very gently.</p>
<p>Put the drill bit on the PLA block (be careful not to touch the cooling block; you might ruin it). Start drilling at the minimum speed and push very gently until you get a hole all the way through the PLA (you'll know because you feel no resistance at all while pushing the drill).</p>
<p>Take the 1.5mm drill bit and repeat the same operation. At this point my PLA block literally exploded (now I have some broken PLA inside my room, don't know where) and the extruder was finally free.</p>
<p>I hope this helps someone. Please notice that you need to be very gentle in order to avoid breaking parts of your extruder, but if I managed to do it, you can do it too ;)</p>
<p>As previously said, if you have ethyl acetate, try soaking the extruder in it to dissolve the PLA before trying this. Use this solution only if all other options didn't help.</p>
|
<p>Steps:</p>
<ol>
<li>Take the tip off, assuming it has a tip.</li>
<li>Heat up the extruder.</li>
<li>Take a length of plastic and push it through the extruder.</li>
<li>If the tip is clogged, then take a micro hand drill to clear the blockage.</li>
</ol>
| 502
|
<p>I want to create two-piece labels for storage containers. The main piece would be the “badge”, which would have text cut out of it (e.g. “Paint”, “Electrical”, etc.). The second piece would be a positive of the text, which would be in a different colour and would fit inside the cut-out on the badge. Because of the tolerance of 3D printers, I need to make the insert slightly smaller than the cut-out. Initially, I thought I could just scale the insert, but that would affect the letter spacing. Then I thought it would work if I could somehow taper the letters so they are slightly smaller at the top than the bottom. So my question is, how do I do that? I did the original in OpenSCAD but I would try Fusion 360 if that’s a better solution. Any and all suggestions are welcome. Thanks.</p>
|
<p>In OpenSCAD, apply the <code>offset</code> transformation to inset the letter outlines before extruding them. However you may find it works better to fill the sunken letter shapes with nail polish then remove the overflow with acetone; see my question & answer <a href="https://3dprinting.stackexchange.com/a/10872/11157">https://3dprinting.stackexchange.com/a/10872/11157</a>.</p>
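<p>A minimal OpenSCAD sketch of the <code>offset</code> idea; the 0.15 mm inset is an assumed tolerance you would tune for your printer:</p>

```
linear_extrude(height = 2)
    offset(delta = -0.15)   // shrink each letter outline by 0.15 mm
        text("Paint", size = 20);
```

<p>Because <code>offset</code> insets each outline uniformly, the letter spacing is untouched; only the stroke widths shrink.</p>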
|
<p>You will need to run test prints to see how your printer behaves with your choice of filaments. Then you'll know how much narrower the letter-lines (not the letter dimensions) need to be to fit. </p>
<p>The problem is making the solid strokes of the letters narrower without "shrinking" the open parts, so that the insert still fits into the badge's recessed regions. You can't just reduce the dimensions of the letters themselves. </p>
<p>Other than what Trish's answer suggests, I'd recommend either using a very small nozzle diameter and setting a narrow linewidth to reduce the "over-dimensioning" effects. Even better would be to identify the letters separately from the badge but keep them as a single STL and use a two-color printer (twin extruders, etc) to make the badge in one run. </p>
| 1,555
|
<p>I need to flip an image so that a character faces in the right direction. This needs to be done "on the fly", as they say. </p>
<p>The issue I am having is that with Gif images, I seem to lose the transparency. (The background goes white)</p>
<p>Below is the code: (Alternatively someone could send me to a good example)</p>
<pre><code>$img = imagecreatefromgif("./unit.gif");
$size_x = imagesx($img);
$size_y = imagesy($img);
$temp = imagecreatetruecolor($size_x, $size_y);
imagecolortransparent($img, imagecolorallocate($img, 0, 0, 0));
imagealphablending($img, false);
imagesavealpha($img, true);
$x = imagecopyresampled($temp, $img, 0, 0, ($size_x-1), 0, $size_x, $size_y, 0-$size_x, $size_y);
if ($x) {
    $img = $temp;
} else {
    die("Unable to flip image");
}
header("Content-type: image/gif");
imagegif($img);
imagedestroy($img);
</code></pre>
|
<p>Shouldn't this:</p>
<pre><code>imagecolortransparent($img, imagecolorallocate($img, 0, 0, 0));
imagealphablending($img, false);
imagesavealpha($img, true);
</code></pre>
<p>...be this:</p>
<pre><code>imagecolortransparent($temp, imagecolorallocate($img, 0, 0, 0));
imagealphablending($temp, false);
imagesavealpha($temp, true);
</code></pre>
<p>Note you should be calling these functions for the $temp image you have created, not the source image.</p>
|
<p>If you can guarantee the presence of ImageMagick, you can use their <code>mogrify -flop</code> command. It preserves transparency.</p>
| 9,203
|
<p>I have been researching asynchronous messaging, and I like the way it elegantly deals with some problems within certain domains and how it makes domain concepts more explicit. But is it a viable pattern for general domain-driven development (at least in the service/application/controller layer), or is the design overhead such that it should be restricted to SOA-based scenarios, like remote services and distributed processing? </p>
|
<p>Great question :). The main problem with asynchronous messaging is that for folks using procedural or object-oriented languages, working in an asynchronous or event-based manner is often quite tricky, and the resulting code is hard to read and understand. Business logic is often way simpler if it's built in a roughly synchronous manner - invoking methods and getting results immediately, etc. :)</p>
<p>My rule of thumb is generally to use simpler synchronous programming models at the micro level for business logic, then use asynchrony and SEDA at the macro level. </p>
<p>For example submitting a purchase order might just write a message to a message queue; but the processing of the purchase order might require 10 different steps all being asynchronous and parallel in a high performance distributed system with many concurrent processes & threads processing individual steps in parallel. So the macro level wiring is based on a SEDA kind of approach - but at the micro level the code for the individual 10 steps could be written mostly in a synchronous programming style.</p>
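<p>A toy Python sketch of that macro/micro split (the queue, order format, and step names are invented for illustration): the submit side just enqueues, while the worker's steps stay plain synchronous code.</p>

```python
import queue
import threading

orders = queue.Queue()
results = []

def submit_order(order):
    # Macro level: submitting is an asynchronous enqueue, nothing more.
    orders.put(order)

def process_orders():
    # Micro level: each processing step is ordinary synchronous code.
    while True:
        order = orders.get()
        if order is None:        # sentinel: no more work
            break
        validated = f"validated:{order}"        # placeholder step 1
        results.append(f"shipped {validated}")  # placeholder step 2

worker = threading.Thread(target=process_orders)
worker.start()
submit_order("PO-123")
orders.put(None)
worker.join()
print(results)   # -> ['shipped validated:PO-123']
```

<p>In a real system the queue would be a message broker and the worker a pool of processes, but the division of labour is the same.</p>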
|
<p>I agree with @BradS too BTW</p>
<p>BTW <a href="http://activemq.apache.org/camel/hiding-middleware.html" rel="nofollow noreferrer">here's a way of hiding the middleware from your business logic</a> while still getting the benefits of loose coupling & SEDA - while being able to easily switch between a variety of different middleware technology - from in memory SEDA to JMS to AMQP to JavaSpaces to database, files or FTP etc</p>
| 9,849
|
<p>I would like to open a small video file and map every frame into memory (to apply some custom filter). I don't want to handle the video codec; I would rather let the library handle that for me.</p>
<p>I've tried to use DirectShow with the SampleGrabber filter (using this sample <a href="http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx</a>), but I only managed to grab some frames (not every frame!). I'm quite new to video software programming; maybe I'm not using the best library, or I'm doing it wrong. </p>
<p>I've pasted a part of my code (mainly a modified copy/paste from the MSDN example); unfortunately it doesn't grab the first 25 frames as expected...</p>
<pre><code>[...]
hr = pGrabber->SetOneShot(TRUE);
hr = pGrabber->SetBufferSamples(TRUE);

pControl->Run(); // Run the graph.
pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

// Find the required buffer size.
long cbBuffer = 0;
hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);

for( int i = 0 ; i < 25 ; ++i )
{
    pControl->Run(); // Run the graph.
    pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

    char *pBuffer = new char[cbBuffer];
    hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);

    AM_MEDIA_TYPE mt;
    hr = pGrabber->GetConnectedMediaType(&mt);
    VIDEOINFOHEADER *pVih;
    pVih = (VIDEOINFOHEADER*)mt.pbFormat;
    [...]
}
[...]
</code></pre>
<p>Is there somebody, with video software experience, who can advise me about code or other simpler library?</p>
<p>Thanks</p>
<p>Edit:
Msdn links seems not to work (<a href="http://stackoverflow.uservoice.com/pages/general/suggestions/19963" rel="noreferrer">see the bug</a>)</p>
|
<p>Currently these are the most popular video frameworks available on Win32 platforms:</p>
<ol>
<li><p>Video for Windows: old Windows framework dating from the age of Win95, but still widely used because it is very simple to use. Unfortunately it supports only AVI files for which the proper VFW codec has been installed.</p></li>
<li><p>DirectShow: standard WinXP framework, it can basically load all formats you can play with Windows Media Player. Rather difficult to use.</p></li>
<li><p><a href="http://ffmpeg.mplayerhq.hu/" rel="noreferrer">Ffmpeg</a>: more precisely libavcodec and libavformat, which come with the Ffmpeg open-source multimedia utility. It is extremely powerful and can read a lot of formats (almost everything you can play with <a href="http://www.videolan.org/vlc/" rel="noreferrer">VLC</a>) even if you don't have the codec installed on the system. It's quite complicated to use, but you can always get inspired by the code of ffplay that ships with it, or by other implementations in open-source software. Anyway, I think it's still much easier to use than DirectShow (and much faster). It needs to be compiled with MinGW on Windows, but all the steps are explained very well <a href="http://arrozcru.no-ip.org/ffmpeg/" rel="noreferrer">here</a> (at this moment the link is down; hopefully not dead).</p></li>
<li><p><a href="http://developer.apple.com/quicktime/download/" rel="noreferrer">QuickTime</a>: the Apple framework is not the best solution for the Windows platform, since it needs the QuickTime app to be installed along with the proper QuickTime codec for every format; it does not support many formats, but it's quite common in the professional field (so some codecs are actually QuickTime-only). Shouldn't be too difficult to implement.</p></li>
<li><p><a href="http://www.gstreamer.net/" rel="noreferrer">Gstreamer</a>: latest open source framework. I don't know much about it, I guess it wraps over some of the other systems (but I'm not sure).</p></li>
</ol>
<p>All of these frameworks have been implemented as backends in OpenCV's highgui, except for DirectShow. The default framework for Win32 OpenCV uses VFW (and is thus only able to open some AVI files); if you want to use the others you must download the CVS version instead of the official release and still do some hacking on the code, and it's not too complete anyway - for example, the FFMPEG backend doesn't allow seeking in the stream.
If you want to use QuickTime with OpenCV, <a href="http://path.berkeley.edu/~zuwhan/QTforOpenCVWin/" rel="noreferrer">this</a> can help you.</p>
|
<p>I would recommend FFMPEG or GStreamer. Try to stay away from OpenCV unless you plan to use functionality beyond just streaming video: the library is a beefy build, and it's a pain to install from source and configure the FFMPEG/GStreamer options.</p>
| 5,880
|
<p>I've tried using a throat with a PTFE tube, but encountered the problem described in this <a href="https://3dprinting.stackexchange.com/questions/904/teflon-tubing-in-nozzle-throat-coming-out">question</a>. It seems only a small amount of excess pressure in the extruder is enough to force out the inner tube. This makes me worry that with this particular part, I won't achieve a reliable configuration (and I'm experiencing some binding with the original plain steel throat, so a PTFE liner seems worth exploring).</p>
<p>I was wondering about the advisability of using a retainer to apply some pressure at the cold end - a nut with a washer soldered on maybe.</p>
<p>My goal is to prevent the teflon tube from rising up, so I can use this part and retain some resilience against excess extrusion pressure. I was assuming I had a slightly sub-standard throat part (in a pack of 6).</p>
<p>However, I now wonder if the problem was caused by too high a temperature (this is ABS filament) and maybe the teflon will be too soft to function as designed, so if I go back to PLA filament, maybe it is more likely to work without modification.</p>
<p><img src="https://i.stack.imgur.com/Kw6bM.jpg" width="200">
<a href="https://i.stack.imgur.com/Zj61nm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zj61nm.jpg" alt="Extruder"></a></p>
|
<p>In this case I would like to first recommend replacing your PTFE tube with a better quality product. Unfortunately, the quality:price ratio is as to be expected here. My reasoning:</p>
<p>PTFE has great thermal properties for a polymer, just like ABS. In fact, the <a href="https://en.wikipedia.org/wiki/Glass_transition" rel="nofollow noreferrer">glass transition</a> begins at roughly the same temperature for both materials: ABS starts transitioning at <a href="https://www.matbase.com/material-categories/natural-and-synthetic-polymers/commodity-polymers/material-properties-of-acrylonitrile-butadiene-styrene-general-purpose-gp-abs.html#properties" rel="nofollow noreferrer">about 105°C</a>, and PTFE at <a href="https://www.matbase.com/material-categories/natural-and-synthetic-polymers/engineering-polymers/material-properties-of-polytetrafluoroethylene-ptfe.html#properties" rel="nofollow noreferrer">about 127°C</a>.</p>
<p>However PTFE, traditionally, has a much higher <a href="https://en.wikipedia.org/wiki/Melting_point" rel="nofollow noreferrer">melting point</a> at about 327°C as opposed to the usual 125° we use in 3D printing.</p>
<p><strong>My Point</strong>: I think the hardware you currently have has low-quality PTFE.</p>
<p>PTFE can be recycled for re-use in other PTFE products. In recycled PTFE, you can lose a lot of the desired properties in the material (true for any material). This includes both the ideal "friction-less" and thermal resistance we need in 3D Printing.</p>
<p><strong>What I think happened:</strong> The higher print temperatures of ABS transitioned the PTFE into its glass-state. As the throat expands, the path of least resistance in the assembly is towards the extruder motor since the nozzle holds more pressure.</p>
<p><strong>I would not recommend "fixing" this problem with a retainer!</strong> By forcing the PTFE throat to stay in position, you could potentially force the PTFE to expand in other ways. Most likely resulting in constricting the filament, leading to grinding of filament on the drive gear and clogging of the nozzle. Worst case, you end up with gooey PTFE in your nozzle and/or around your retainer.</p>
|
<p>I see: you want to block the Teflon tube from riding up and out of the extruder. Yes, I see no reason why you couldn't fit a nut cap. Drill a hole in it and it should work. There is likely a special cap nut with a pre-existing hole in it already, but those search keywords seemed likely to give bad results on a work computer. </p>
<p>The only things I can see happening are that you will 1) possibly not have the clearance, or 2) have a larger thermal mass.</p>
<p><a href="https://i.stack.imgur.com/UG4iP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UG4iP.jpg" alt="enter image description here"></a></p>
| 471
|
<p>I want to use the <a href="http://msdn.microsoft.com/en-us/library/system.enterpriseservices.internal.publish.gacremove(VS.80).aspx" rel="nofollow noreferrer">Publish.GacRemove</a> function to remove an assembly from GAC. However, I don't understand what path I should pass as an argument.</p>
<p>Should it be a path to the original DLL (what if I removed it after installing it in the GAC?) or the path to the assembly in the GAC?</p>
<p><strong>UPDATE:</strong></p>
<p>I finally used <a href="http://blogs.msdn.com/junfeng/articles/229649.aspx" rel="nofollow noreferrer">these API wrappers</a>.</p>
|
<p>I am using <code>GacInstall</code> to publish my assemblies; however, once they are installed into the GAC, I sometimes delete my ‘temporary’ copy of the assemblies.</p>
<p>Then, if I ever want to uninstall the assemblies from the GAC, I no longer have the files at the original path. This causes a problem, since I cannot seem to get the <code>GacRemove</code> method to uninstall the assemblies unless I keep the original files.</p>
<p>Conclusion: Yes, you need to specify the path to the original DLL. (And try to not move/delete it later). If you delete it, try to copy the file from the GAC to your original path and you should be able to uninstall it using <code>GacRemove</code>.</p>
|
<p>I am not exactly sure, but I believe GacRemove should do the same thing as gacutil /u. So it should be the path of your DLL. However, it doesn't have to be the same DLL file; a copy of the original should suffice, since what counts is the unique identity of the DLL.</p>
| 6,772
|
<p>I've found SVN to be extremely useful for documentation, personal files, and other non-source-code uses.
What other practical uses have you found for version control systems in general?</p>
|
<p>I've seen version control being used for other non-source-code purposes, such as:</p>
<ol>
<li>Schema files - a set of XML schema files that represent a real world schema</li>
<li>Content files - content represented in a specific format; this is tied to a designer in VStudio, and using source control allows history, rollbacks, everything, with no database interaction</li>
</ol>
<p>In both cases we notice that these are basically verbose files, and the primary reasons to have them in source control, as opposed to "text records in a database", are: </p>
<ol>
<li>files that might need the ability to compare versions </li>
<li>history (because multiple users work on them) </li>
<li>the ability to roll back to an earlier version </li>
<li>labeling and releases by getting a specific label </li>
<li>if you use Team Foundation (TFS), the whole scrum template with work items etc. </li>
<li>no database involved, no extra development for all of the above</li>
</ol>
|
<p>Yes, I have a doc directory in git. It contains a todo list, a calendar and a few other documents.</p>
| 3,540
|
<p>In college I've had numerous design and <a href="http://en.wikipedia.org/wiki/Unified_Modeling_Language" rel="noreferrer">UML</a> oriented courses, and I recognize that UML can be used to benefit a software project, especially <a href="http://en.wikipedia.org/wiki/Use_case" rel="noreferrer">use-case</a> mapping, but is it really practical? I've done a few co-op work terms, and it appears that UML is not used heavily in the industry. Is it worth the time during a project to create UML diagrams? Also, I find that class diagrams are generally not useful, because it's just faster to look at the header file for a class. Specifically which diagrams are the most useful?</p>
<p><strong>Edit:</strong> My experience is limited to small, under 10 developer projects.</p>
<p><strong>Edit:</strong> Many good answers, and though not the most verbose, I believe the one selected is the most balanced.</p>
|
<p>In a sufficiently <strong>complex system</strong> there are some places where some <code>UML</code> is considered useful. </p>
<p>The useful diagrams for a system vary by applicability, but the most widely used ones are: </p>
<ul>
<li>Class Diagrams</li>
<li>State Diagrams</li>
<li>Activity Diagrams</li>
<li>Sequence Diagrams</li>
</ul>
<p>There are many enterprises who swear by them and many who outright reject them as an utter waste of time and effort. </p>
<p>It's best not to go overboard and think what's best for the project you are on and pick the stuff that is applicable and makes sense.</p>
|
<p>UML is just one method of communication between people.
A whiteboard is better.</p>
| 3,935
|
<p>I'd like to script, preferably in rake, the following actions into a single command:</p>
<ol>
<li>Get the version of my local git repository.</li>
<li>Git pull the latest code.</li>
<li>Git diff from the version I extracted in step #1 to what is now in my local repository.</li>
</ol>
<p>In other words, I want to get the latest code from the central repository and immediately generate a diff of what's changed since the last time I pulled.</p>
|
<p>You could do this fairly simply with refspecs.</p>
<pre><code>git pull origin
git diff @{1}..
</code></pre>
<p>That will give you a diff of the current branch as it existed before and after the pull. Note that if the pull doesn't actually update the current branch, the diff will give you the wrong results. Another option is to explicitly record the current version:</p>
<pre><code>current=`git rev-parse HEAD`
git pull origin
git diff $current..
</code></pre>
<p>I personally use an alias that simply shows me a log, in reverse order (i.e. oldest to newest), sans merges, of all the commits since my last pull. I run this every time my pull updates the branch:</p>
<pre><code>git config --global alias.lcrev 'log --reverse --no-merges --stat @{1}..
</code></pre>
|
<p>If you drop this into your bash profile you'll be able to run grin (git remote incoming) and grout (git remote outgoing) to see diffs of commits that are incoming and outgoing for origin master.</p>
<pre><code>function parse_git_branch {
git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1/'
}
function gd2 {
echo branch \($1\) has these commits and \($2\) does not
git log $2..$1 --no-merges --format='%h | Author:%an | Date:%ad | %s' --date=local
}
function grin {
git fetch origin master
gd2 FETCH_HEAD $(parse_git_branch)
}
function grout {
git fetch origin master
gd2 $(parse_git_branch) FETCH_HEAD
}
</code></pre>
| 8,610
|
<p>I'm having trouble setting up Vim (7.1.xxx) for editing Python files (*.py).
Indenting seems to be broken (4 spaces would be optimal).
I've followed some tutorials I found via Google, but still no effect :/
Please help.</p>
|
<p>I use this on my macbook:</p>
<pre><code>" configure expanding of tabs for various file types
au BufRead,BufNewFile *.py set expandtab
au BufRead,BufNewFile *.c set expandtab
au BufRead,BufNewFile *.h set expandtab
au BufRead,BufNewFile Makefile* set noexpandtab
" --------------------------------------------------------------------------------
" configure editor with tabs and nice stuff...
" --------------------------------------------------------------------------------
set expandtab " enter spaces when tab is pressed
set textwidth=120 " break lines when line length increases
set tabstop=4 " use 4 spaces to represent tab
set softtabstop=4
set shiftwidth=4 " number of spaces to use for auto indent
set autoindent " copy indent from current line when starting a new line
" make backspace more powerful
set backspace=indent,eol,start
set ruler " show line and column number
syntax on " syntax highlighting
set showcmd " show (partial) command in status line
</code></pre>
<p>(edited to only show stuff related to indent / tabs)</p>
|
<p>For more advanced Python editing, consider installing the <a href="http://eigenclass.org/hiki/simplefold" rel="nofollow noreferrer">simplefold</a> Vim plugin. It allows you to do advanced code folding using regular expressions. I use it to fold my class and method definitions for faster editing.</p>
| 9,081
|
<p>I am developing a (relatively small) website in ASP.Net 2.0. I am also using nAnt to perform some easy tweaking on my project before delivering executables. In its current state, the website is "precompiled" using </p>
<blockquote>
<p><code>aspnet_compiler.exe -nologo -v ${Appname} -u ${target}</code></p>
</blockquote>
<p>I have noticed that after the IIS pool is restarted (after a idle shutdown or a recycle), the application takes up to 20 seconds before it is back online (and Application_start is reached).</p>
<p>I don't have the same issue when I am debugging directly within Visual Studio (it takes 2 seconds to start) so I am wondering if the aspnet_compiler is really such a good idea.</p>
<p>I couldn't find much on MSDN. How do you compile your websites for production?</p>
|
<p>Make sure that:</p>
<ol>
<li>You are using a Web Application project rather than a Web Site project, this will result in a precompiled binary for your code behind</li>
<li>You have turned off debug code generation in the web.config file - I guess if this is different to when you used aspnet_compiler the code may be recompiled</li>
</ol>
<p>If you've tried those, you could maybe try running ngen over your assembly thus saving the JIT time?</p>
|
<p>@Simon:</p>
<ul>
<li>The project is a Web Application. Are web sites then slower to start up (I had no idea it made a difference, besides the different code organization)?</li>
<li>I checked, and while I edit the web.config after aspnet_compiler is called, I don't touch the debug value (I will, however, check that the website is not faster to start up if I don't touch the web.config, just to make sure)</li>
</ul>
<p>(And I will definitely have a look at ngen, I was not aware of that tool.)</p>
| 8,027
|
<p>I created a few mediawiki custom tags, using the guide found here</p>
<p><a href="http://www.mediawiki.org/wiki/Manual:Tag_extensions" rel="nofollow noreferrer">http://www.mediawiki.org/wiki/Manual:Tag_extensions</a></p>
<p>I will post my code below, but the problem is after it hits the first custom tag in the page, it calls it, and prints the response, but does not get anything that comes after it in the wikitext. It seems it just stops parsing the page.</p>
<p>Any Ideas?</p>
<pre><code>if ( defined( 'MW_SUPPORTS_PARSERFIRSTCALLINIT' ) ) {
$wgHooks['ParserFirstCallInit'][] = 'tagregister';
} else { // Otherwise do things the old fashioned way
$wgExtensionFunctions[] = 'tagregister';
}
function tagregister(){
global $wgParser;
$wgParser->setHook('tag1','tag1func');
$wgParser->setHook('tag2','tag2func');
return true;
}
function tag1func($input,$params)
{
return "It called me";
}
function tag2func($input,$params)
{
return "It called me -- 2";
}
</code></pre>
<p>Update: @George Mauer -- I have seen that as well, but this does not stop the page from rendering, just the MediaWiki engine from parsing the rest of the wikitext. It's as if hitting the custom function signals MediaWiki that processing is done. I am in the process of diving into the rabbit hole, but was hoping someone else had seen this behavior.</p>
|
<p>Easy!</p>
<pre><code><xsl:template match="/">
<xsl:apply-templates select="root/node">
<xsl:sort select="position()" data-type="number" order="descending"/>
</xsl:apply-templates>
</xsl:template>
<xsl:template match="node">
<xsl:value-of select="."/>
</xsl:template>
</code></pre>
|
<pre><code><xsl:template match="/">
<xsl:apply-templates select="root/node[3]"/>
<xsl:apply-templates select="root/node[2]"/>
<xsl:apply-templates select="root/node[1]"/>
</xsl:template>
<xsl:template match="node">
<xsl:value-of select="."/>
</xsl:template>
</code></pre>
| 7,247
|
<p>I've always wanted a way to make a socket connection to a server and allow the server to manipulate the page DOM. For example, this could be used in a stock quotes page, so the server can push new quotes as they become available. </p>
<p>I know this is a classic limitation (feature?) of HTTP's request/response protocol, but I think this could be implemented as a Firefox plugin (cross-browser compatibility is not important for my application). Java/Flash solutions are not acceptable, because (as far as I know) they live in a box and can't interact with the DOM. </p>
<p>Can anyone confirm whether this is within the ability of a Firefox plugin? Has someone already created this or something similar? </p>
|
<p>You may want to look at <a href="http://en.wikipedia.org/wiki/Comet_(programming)" rel="nofollow noreferrer">Comet</a> which is a fancy name for a long running HTTP connection where the server can push updates to the page.</p>
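<p>The long-polling flavor of Comet can be sketched roughly as below. This is purely illustrative Python (the names <code>push_quote</code> and <code>long_poll</code> are invented for the sketch); a real implementation would sit behind an HTTP endpoint, with the client re-issuing the request each time it returns:</p>

```python
import queue
import threading

# Server side: pending updates for one client; a push wakes any waiting poll.
updates = queue.Queue()

def push_quote(symbol, price):
    """Server pushes a new stock quote as soon as it becomes available."""
    updates.put((symbol, price))

def long_poll(timeout=5.0):
    """Client side: block until the server has an update, or time out."""
    try:
        return updates.get(timeout=timeout)
    except queue.Empty:
        return None  # the client would immediately re-issue the poll

# Simulate the server pushing a quote shortly after the client starts waiting.
threading.Timer(0.1, push_quote, args=("ACME", 42.5)).start()
print(long_poll())  # ('ACME', 42.5)
```

<p>The key point is that the "request" stays open until data exists, so the page receives updates without busy polling.</p>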
|
<blockquote>
<p>You may want to look at Comet</p>
</blockquote>
<p>a.k.a. <a href="http://ejohn.org/blog/streaming-http-server-push/" rel="nofollow noreferrer">server push</a>. This does not let the server "update" the client page directly, but all the new data is sent to the page through a single connection.</p>
<p>Of course, a Firefox <a href="http://developer.mozilla.org/en/docs/Extension" rel="nofollow noreferrer">extension</a> (as well as <a href="http://developer.mozilla.org/en/docs/Plugins" rel="nofollow noreferrer">plugins</a>, which are binary libraries that can do whatever any other application can do) can work with sockets too. See <a href="http://www.xulplanet.com/tutorials/mozsdk/serverpush.php" rel="nofollow noreferrer">1</a>, <a href="http://www.xulplanet.com/tutorials/mozsdk/serverpush.php" rel="nofollow noreferrer">2</a>.</p>
| 3,428
|
<p>We have an InnoDB database that is about 70 GB and we expect it to grow to several hundred GB in the next 2 to 3 years. About 60 % of the data belong to a single table. Currently the database is working quite well as we have a server with 64 GB of RAM, so almost the whole database fits into memory, but we’re concerned about the future when the amount of data will be considerably larger. Right now we’re considering some way of splitting up the tables (especially the one that accounts for the biggest part of the data) and I’m now wondering, what would be the best way to do it.</p>
<p>The options I’m currently aware of are</p>
<ul>
<li>Using MySQL Partitioning that comes with version 5.1 </li>
<li>Using some kind of third party library that encapsulates the partitioning of the data (like hibernate shards)</li>
<li>Implementing it ourselves inside our application</li>
</ul>
<p>Our application is built on J2EE and EJB 2.1 (hopefully we’re switching to EJB 3 some day).</p>
<p>What would you suggest?</p>
<p>EDIT (2011-02-11):<br>
Just an update: Currently the size of the database is 380 GB, the data size of our "big" table is 220 GB and the size of its index is 36 GB. So while the whole table does not fit in memory any more, the index does.<br>
The system is still performing fine (still on the same hardware) and we're still thinking about partitioning the data.</p>
<p>EDIT (2014-06-04):
One more update: The size of the whole database is 1.5 TB, the size of our "big" table is 1.1 TB. We upgraded our server to a 4 processor machine (Intel Xeon E7450) with 128 GB RAM.
The system is still performing fine.
What we're planning to do next is putting our big table on a separate database server (we've already done the necessary changes in our software) while simultaneously upgrading to new hardware with 256 GB RAM.</p>
<p>This setup is supposed to last for two years. Then we will either have to finally start implementing a sharding solution or just buy servers with 1 TB of RAM which should keep us going for some time. </p>
<p>EDIT (2016-01-18):</p>
<p>We have since put our big table in its own database on a separate server. Currently the size of this database is about 1.9 TB; the size of the other database (with all tables except for the "big" one) is 1.1 TB.</p>
<p>Current Hardware setup:</p>
<ul>
<li>HP ProLiant DL 580</li>
<li>4 x Intel(R) Xeon(R) CPU E7- 4830</li>
<li>256 GB RAM</li>
</ul>
<p>Performance is fine with this setup.</p>
|
<p>You will definitely start to run into issues on that 42 GB table once it no longer fits in memory. In fact, as soon as it does not fit in memory anymore, performance will degrade extremely quickly. One way to test is to put that table on another machine with less RAM and see how poor it performs.</p>
<blockquote>
<p>First of all, it doesn't matter as much splitting out tables unless you also move some of the tables to a separate physical volume.</p>
</blockquote>
<p>This is incorrect. Partitioning (either through the feature in MySQL 5.1, or the same thing using MERGE tables) can provide significant performance benefits even if the tables are on the same drive. </p>
<p>As an example, let's say that you are running SELECT queries on your big table using a date range. If the table is whole, the query will be forced to scan through the entire table (and at that size, even using indexes can be slow). The advantage of partitioning is that your queries will only run on the partitions where it is absolutely necessary. If each partition is 1 GB in size and your query only needs to access 5 partitions in order to fulfill itself, the combined 5 GB table is a lot easier for MySQL to deal with than a monster 42 GB version.</p>
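<p>The pruning idea can be sketched outside SQL. This is a toy Python model of the concept (not MySQL's implementation): rows are bucketed by month, and a date-range query only ever touches the buckets that overlap the range.</p>

```python
from collections import defaultdict
from datetime import date

# Toy "table" range-partitioned by month: (year, month) -> list of rows.
partitions = defaultdict(list)

def insert(row_date, payload):
    partitions[(row_date.year, row_date.month)].append((row_date, payload))

def select_range(start, end):
    """Scan only the partitions whose month overlaps [start, end]."""
    hits, scanned = [], 0
    for (year, month), rows in partitions.items():
        if (year, month) < (start.year, start.month) or \
           (year, month) > (end.year, end.month):
            continue  # pruned: this partition is never read
        scanned += 1
        hits.extend(p for d, p in rows if start <= d <= end)
    return hits, scanned

insert(date(2008, 1, 15), "a")
insert(date(2008, 6, 1), "b")
insert(date(2008, 6, 20), "c")
hits, scanned = select_range(date(2008, 6, 1), date(2008, 6, 30))
print(hits, scanned)  # ['b', 'c'] 1 -- the January partition was pruned
```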
<p>One thing you need to ask yourself is how you are querying the data. If there is a chance that your queries will only need to access certain chunks of data (i.e. a date range or ID range), partitioning of some kind will prove beneficial.</p>
<p>I've heard that there is still some bugginess with MySQL 5.1 partitioning, particularly related to MySQL choosing the correct key. MERGE tables can provide the same functionality, although they require slightly more overhead.</p>
<p>Hope that helps...good luck!</p>
|
<p>First of all, it doesn't matter as much splitting out tables unless you also move some of the tables to a separate physical volume. </p>
<p>Secondly, it's not necessarily the table with the largest physical size that you want to move. You may have a much smaller table that gets more activity, while your big table remains fairly constant or only appends data.</p>
<p>Whatever you do, don't implement it yourselves. Let the database system handle it.</p>
| 6,788
|
<p>Has anyone come up with a good way of performing full text searches (<code>FREETEXT() CONTAINS()</code>) for any number of arbitrary keywords using standard LinqToSql query syntax?</p>
<p>I'd obviously like to avoid having to use a Stored Proc or have to generate a Dynamic SQL calls.</p>
<p>Obviously I could just pump the search string in on a parameter to a SPROC that uses FREETEXT() or CONTAINS(), but I was hoping to be more creative with the search and build up queries like:</p>
<p>"pepperoni pizza" and burger, not "apple pie".</p>
<p>Crazy I know - but wouldn't it be neat to be able to do this directly from LinqToSql? Any tips on how to achieve this would be much appreciated.</p>
<p>Update: I think I may be on to something <a href="http://tomasp.net/blog/linq-expand-update.aspx" rel="nofollow noreferrer">here</a>...</p>
<p>Also: I rolled back the change made to my question title because it actually changed the meaning of what I was asking. I <em>know</em> that full text search is not supported in LinqToSql - I would have asked that question if I wanted to know that. Instead - I have updated my title to appease the edit-happy-trigger-fingered masses.</p>
|
<p>Unfortunately LINQ to SQL does not support Full Text Search. </p>
<p>There are a bunch of products out there that I think could: Lucene.NET, NHibernate Search comes to mind. LINQ for NHibernate combined with NHibernate Search would probably give that functionality, but both are still way deep in beta.</p>
|
<p>Unfortunately LINQ to SQL does not support Full Text Search. </p>
<p>There are a bunch of products out there that I think could: Lucene.NET, NHibernate Search comes to mind. LINQ for NHibernate combined with NHibernate Search would probably give that functionality, but both are still way deep in beta.</p>
| 9,326
|
<p>So, I'm having this problem where almost anything I print with a section(s) that is not directly connected to something below it breaks when I try to pull the small filaments meant to hold it up during printing off. For example, I 3D-printed a Rayquaza(<a href="https://www.myminifactory.com/object/rayquaza-23624" rel="noreferrer">this one</a>) from Pokemon for my little brother, and as I was carefully pulling the filament from under the mouth, the whole head just snapped off. Does someone have a recommendation as to a way to get the small filament off without breaking the object? Would a solution just be to print it bigger and see if it holds up better, or is there something else I can do? Thanks.</p>
|
<p>The small filaments you remove that hold the parts up are called supports. The one model I located on Thingiverse clearly requires a number of supports, as the model is not designed with FDM 3D printing in mind. It would be better printed with SLS, but that's not the focus of your question.</p>
<p>You don't specify how large you printed the model, but certainly a scaled-up version will be stronger at the weak points. You will want to use sharp non-shearing cutters to clear away as much of the supports as possible, without torquing on the model.</p>
<p>Another option which also reduces the forces on the model body is to use a soldering iron to smooth and clear/cut the supports. If you are able to use cutters and not damage the model, the soldering iron can remove and flatten the remnants of those supports.</p>
<p>Please note that if your careful work has resulted in a model that snaps to pieces, your little brother will soon destroy the successfully cleaned up model just as easily.</p>
<p>If you have skill with 3d modeling software (Meshmixer and Blender come to mind for such organic models), you can add insignificant items to the model to provide functional support. Would the Rayquaza look fiercer if you 3d printed a cage as an integrated part of the model, using the bars of the cage to provide support?</p>
<p>I successfully printed a model that was created by an artist unfamiliar with 3d printing restrictions. The support material was wash-away PVA. I provided the model to the "owner" who washed away the support material and snapped the legs in two. It's sometimes impossible to solve poor designs. You have a good chance if you build a cage for this one.</p>
|
<p>It depends how small you're printing. If it's the one I see on Thingiverse, then I don't see why it would break unless:</p>
<ol>
<li>it's super small</li>
<li>there are too many supports, or supports so strong they're hard to pull off</li>
<li>it's not calibrated for the filament, e.g. the temperature is too low or there is too much fan and the layers aren't properly bonding</li>
<li>there is too much moisture in the filament, making it weak</li>
<li>it needs more infill at the weak spots (e.g. if the neck is thin, an infill of 50-100 % is a good idea if you can control that in your slicer)</li>
</ol>
<p>I would start with the temps and adjusting settings. It's really hard to say without a reference photo. </p>
<p>However, I would try a stronger material for toys, such as PETG, either way. From experience, they'll just break easily anyway from the abuse kids put them through. PETG has good layer bonding and is easy to print, except it's terrible at bridging. Supports will most likely need to be clipped off with side cutters, but it's not a huge deal. </p>
| 387
|
<p>I'm looking for a way, specifically in PHP, to be guaranteed to always get a unique key.</p>
<p>I have done the following:</p>
<pre><code>strtolower(substr(crypt(time()), 0, 7));
</code></pre>
<p>But I have found that once in a while I end up with a duplicate key (rarely, but often enough).</p>
<p>I have also thought of doing:</p>
<pre><code>strtolower(substr(crypt(uniqid(rand(), true)), 0, 7));
</code></pre>
<p>But according to the PHP website, if uniqid() is called twice in the same microsecond, it could generate the same key. I'm thinking that with the addition of rand() it rarely would, but it's still possible.</p>
<p>After the lines mentioned above, I also remove characters such as L and O so it's less confusing for the user. This may be part of the cause of the duplicates, but it's still necessary.</p>
<p>One option I have a thought of is creating a website that will generate the key, storing it in a database, ensuring it's completely unique.</p>
<p>Any other thoughts? Are there any websites out there that already do this that have some kind of API or just return the key. I found <a href="http://userident.com" rel="noreferrer">http://userident.com</a> but I'm not sure if the keys will be completely unique.</p>
<p>This needs to run in the background without any user input.</p>
|
<p>There are only 3 ways to generate unique values, whether they be passwords, user IDs, etc.:</p>
<ol>
<li>Use an effective GUID generator - these are long and cannot be shrunk. If you only use part <strong>you FAIL</strong>. </li>
<li>At least part of the number is sequentially generated off of a single sequence. You can add fluff or encoding to make it look less sequential. Advantage is they start short - disadvantage is they require a single source. The work around for the single source limitation is to have numbered sources, so you include the [source #] + [seq #] and then each source can generate its own sequence. </li>
<li>Generate them via some other means and then check them against the single history of previously generated values.</li>
</ol>
<p>Any other method is not guaranteed. Keep in mind, fundamentally you are generating a binary number (it is a computer), but then you can encode it in Hexadecimal, Decimal, Base64, or a word list. Pick an encoding that fits your usage. Usually for user entered data you want some variation of Base32 (which you hinted at).</p>
<p><strong>Note about GUIDS</strong>: They gain their strength of uniqueness from their length and the method used to generate them. <em>Anything less than 128-bits is not secure.</em> Beyond random number generation there are characteristics that go into a GUID to make it more unique. Keep in mind they are only practically unique, not completely unique. It is possible, although practically impossible to have a duplicate. </p>
<p><strong>Updated Note about GUIDS</strong>: Since writing this I learned that many GUID generators use a cryptographically secure random number generator (difficult or impossible to predict the next number generated, and a not likely to repeat). There are actually 5 different <a href="http://en.wikipedia.org/wiki/Universally_Unique_Identifier#Definition" rel="noreferrer">UUID algorithms</a>. Algorithm 4 is what Microsoft currently uses for the Windows GUID generation API. A <a href="http://en.wikipedia.org/wiki/Globally_unique_identifier" rel="noreferrer">GUID</a> is Microsoft's implementation of the UUID standard.</p>
<p><strong>Update</strong>: If you want 7 to 16 characters then you need to use either method 2 or 3.</p>
<p><strong>Bottom line</strong>: Frankly there is no such thing as completely unique. Even if you went with a sequential generator you would eventually run out of storage using all the atoms in the universe, thus looping back on yourself and repeating. Your only hope would be the heat death of the universe before reaching that point.</p>
<p>Even the best random number generator has a possibility of repeating equal to the total size of the random number you are generating. Take a quarter for example. It is a completely random bit generator, and its odds of repeating are 1 in 2. </p>
<p>So it all comes down to your threshold of uniqueness. You can have 100% uniqueness in 8 digits for 1,099,511,627,776 numbers by using a sequence and then base32 encoding it. Any other method that does not involve checking against a list of past numbers only has odds equal to n/1,099,511,627,776 (where n=number of previous numbers generated) of not being unique.</p>
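<p>A sketch of method 2 in Python (illustrative only; in PHP the counter would live in a database or some other single source). A custom base-32 alphabet drops confusable characters such as <code>l</code> and <code>O</code> up front, instead of stripping them afterwards and risking collisions:</p>

```python
# Base-32-style alphabet with easily confused characters (0/O, 1/l/I) removed.
ALPHABET = "23456789abcdefghjkmnpqrstuvwxyz"  # 31 unambiguous symbols

_counter = 0  # in production this would be a DB sequence or similar

def next_key(min_len=7):
    """Encode the next sequence number; unique as long as the counter is."""
    global _counter
    _counter += 1
    n, digits = _counter, []
    while n:
        n, rem = divmod(n, len(ALPHABET))
        digits.append(ALPHABET[rem])
    key = "".join(reversed(digits))
    return key.rjust(min_len, ALPHABET[0])  # pad short keys to a fixed width

keys = [next_key() for _ in range(1000)]
print(len(set(keys)) == len(keys))  # True: no duplicates, by construction
```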
|
<p>I usually do a random substring (randomizing how many chars, between 8 and 32, or fewer for user convenience) or the MD5 of some value I have gotten in, or the time, or some combination. For more randomness I do MD5 of some value (say last name), concatenate that with the time, MD5 it again, then take a random substring. Yes, you <em>could</em> get equal passwords, but it's not very likely at all.</p>
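<p>That approach, sketched in Python for illustration (and, as the other answer stresses, hashing and truncating like this cannot <em>guarantee</em> uniqueness):</p>

```python
import hashlib
import time

def pseudo_key(seed, length=8):
    """Double-MD5 of a seed plus the current time, then take a substring."""
    first = hashlib.md5(seed.encode()).hexdigest()
    second = hashlib.md5((first + repr(time.time())).encode()).hexdigest()
    return second[:length]

print(len(pseudo_key("smith")))  # 8 -- collisions are unlikely but possible
```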
| 7,895
|
<p>Do you have a good experience with a control library? Something that is kind of robust, well documented, consistent (across different controls) and quite well integrated into the Visual Studio.</p>
|
<p>I'll second the vote for <a href="http://www.telerik.com" rel="nofollow noreferrer">Telerik</a>. Their controls for the most part "just work" and their support has been excellent. I primarily use their forums and I still receive a response within a day (unlike some other vendors who barely seem to notice that they've even got a forum).</p>
<p>It also feels like they've actually spent time trying out a lot of the ways customers will use their controls. The documentation and support reflect it. They aren't perfect, though. One issue that they had in the past, and that they've addressed in the latest releases (what they were calling their "Prometheus" controls, now just "Rad Controls for ASP.NET AJAX"), is the performance of the controls. In previous releases they were definitely a bit sluggish (I'm thinking specifically of their RadGrid and RadEditor). Now they're noticeably faster (esp. the RadEditor - it loads MUCH faster).</p>
<p>Overall I wouldn't think twice of recommending them.</p>
|
<p><a href="http://www.componentart.com" rel="nofollow noreferrer">ComponentArt</a> has some pretty cool controls. You might want to check out <a href="http://www.telerik.com" rel="nofollow noreferrer">Telerik</a> as well. Both companies offer pretty easy to use controls that look nice.</p>
| 5,356
|
<p>I want to load a desktop application, via reflection, as a Control inside another application.</p>
<p>The application I'm reflecting is a legacy one - I can't make changes to it.</p>
<p>I can dynamically access the Form, but can't load it as a Control.</p>
<p>In .Net Form expands on Control, and I can assign the reflected Form as a Control, but it throws a run-time exception.</p>
<p>Forms cannot be loaded as controls.</p>
<p>Is there any way to convert the form to a control? </p>
|
<p>Yes, this works just fine. I'm working on a .NET app right now that loads forms into a panel on a host form.</p>
<p>The relevant snippet:</p>
<pre><code>// setup the new form
form.TopLevel = false;
form.FormBorderStyle = FormBorderStyle.None;
form.Dock = DockStyle.Fill;
form.Show ( );
// add to the panel's list of child controls
panelFormHost.Controls.Add ( form );
</code></pre>
|
<p>You should be able to add the form to the controls collection of your parent form...</p>
<p>See here:
<a href="http://vbcity.com/forums/topic.asp?tid=30539" rel="nofollow noreferrer">http://vbcity.com/forums/topic.asp?tid=30539</a></p>
<p>If that fails, try using the adapter pattern to create a container with your legacy form inside it, then load it in an MDI maybe?</p>
| 7,445
|
<p>I can connect with the DataContext to the Oracle database however I get errors in running the query against the oracle database. I looked at the SQL generated and it is for MSSQL and not Oracle PSQL. </p>
<p>Does anybody know of a decent easy to use wrapper to use LINQ against an Oracle Database?</p>
|
<p>No, LINQ to SQL is very much MS SQL only - think of it as a client driver.</p>
<p><a href="http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1623471&SiteID=1" rel="noreferrer">Microsoft is/was helping Oracle and DataDirect develop providers for Oracle and other non-MS database servers.</a></p>
|
<p>Not an easy way, at least until a good provider is produced.</p>
<p>Really, MS should provide at least an OLEDB LINQ provider. After all, LINQ to SQL is basically an implementation of IQueryable with designer support.</p>
| 5,065
|
<p>I was reading Joel's book where he was suggesting as interview question:</p>
<blockquote>
<p>Write a program to reverse the "ON" bits in a given byte.</p>
</blockquote>
<p>I can only think of a solution using C.</p>
<p>I'm asking here so you can show me how to do it in a non-C way (if possible).</p>
|
<blockquote>
<p>What specifically does that question mean?</p>
</blockquote>
<p>Good question. If reversing the "ON" bits means reversing only the bits that are "ON", then you will always get 0, no matter what the input is. If it means reversing <em>all</em> the bits, i.e. changing all 1s to 0s and all 0s to 1s, which is how I initially read it, then that's just a bitwise NOT, or complement. C-based languages have a complement operator, <code>~</code>, that does this. For example:</p>
<pre><code>unsigned char b = 102; /* 0x66, 01100110 */
unsigned char reverse = ~b; /* 0x99, 10011001 */
</code></pre>
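For a non-C flavor, here is a Python sketch of the complement reading, plus the other common reading of the question (reversing the order of the bits):

```python
def complement_byte(b):
    """Flip every bit of an 8-bit value (C's ~ operator, masked to a byte)."""
    return ~b & 0xFF

def reverse_bits(b):
    """The other common reading: reverse the *order* of the 8 bits."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (b & 1)  # shift the low bit of b into out
        b >>= 1
    return out
```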
|
<p>Since the question asked for a non-C way, here's a Scheme implementation, cheerfully plagiarised from <a href="http://people.csail.mit.edu/jaffer/SLIB" rel="nofollow noreferrer">SLIB</a>:</p>
<pre><code>(define (bit-reverse k n)
(do ((m (if (negative? n) (lognot n) n) (arithmetic-shift m -1))
(k (+ -1 k) (+ -1 k))
(rvs 0 (logior (arithmetic-shift rvs 1) (logand 1 m))))
((negative? k) (if (negative? n) (lognot rvs) rvs))))
(define (reverse-bit-field n start end)
(define width (- end start))
(let ((mask (lognot (ash -1 width))))
(define zn (logand mask (arithmetic-shift n (- start))))
(logior (arithmetic-shift (bit-reverse width zn) start)
(logand (lognot (ash mask start)) n))))
</code></pre>
<p>Rewritten as C (for people unfamiliar with Scheme), it'd look something like this (with the understanding that in Scheme, numbers can be arbitrarily big):</p>
<pre><code>int
bit_reverse(int k, int n)
{
int m = n < 0 ? ~n : n;
int rvs = 0;
while (--k >= 0) {
rvs = (rvs << 1) | (m & 1);
m >>= 1;
}
return n < 0 ? ~rvs : rvs;
}
int
reverse_bit_field(int n, int start, int end)
{
int width = end - start;
int mask = ~(-1 << width);
int zn = mask & (n >> start);
return (bit_reverse(width, zn) << start) | (~(mask << start) & n);
}
</code></pre>
| 3,113
|
<p>I am printing Eryone 1.75 mm PET on my Kobra Max printer with 80°C bed and 230°C.</p>
<p>I am experiencing this:</p>
<p><a href="https://i.stack.imgur.com/RXv0n.jpg" rel="nofollow noreferrer" title="Possible stringing and over-extrusion"><img src="https://i.stack.imgur.com/RXv0n.jpg" alt="Possible stringing and over-extrusion" title="Possible stringing and over-extrusion" /></a></p>
<p>Is this "over-extrusion" and "stringing"?</p>
<p>And how do I avoid this?</p>
|
<p>PETG sticks to brass nozzles <em>really well</em>. Better than it sticks to itself, in many cases. When travelling over already-printed material, especially if the filament is not retracted (Cura calls this kind of travel "combing") or if the material it's travelling over is overextruded and bulging up, the nozzle will pick up (or dig up) some of that material and drag it around. There it accumulates until it is large and solid enough that a collision with another part of the print dislodges it, and it falls off. It may get picked up again by the hot nozzle later, and the process repeats.</p>
<p>There are various mitigations for this:</p>
<ul>
<li><p>Limit "combing". In Cura, set "Max Combing Distance Without Retract" to 1 mm and "Connect Infill Lines" on to get good behavior without introducing huge numbers of retractions. slic3r-based slicers have an option called something like "Only Retract When Crossing Perimeters" that you want to turn <strong>off</strong> to get the same effect.</p>
</li>
<li><p>Make sure you're not overextruding. Turn down flow slightly if needed.</p>
</li>
<li><p>Dry your filament. PETG is always wet unless you dried it in the past 24 hours yourself, and wet PETG does this a lot worse.</p>
</li>
<li><p>You may be able to select a nozzle that the material doesn't stick to. There are non-stick (likely PTFE) coated nozzles designed for this purpose (but they can't be used at high temperatures), and nickel plated seems to do better too. I'm not sure about other materials like stainless steel or (note: expensive) tungsten carbide but they may be options to improve this too.</p>
</li>
</ul>
|
<h1>Stringing</h1>
<p>PETG is prone to stringing, even with the best settings. Often I live with some stringing, as strings are easy to clean up, while other issues can be introduced by trying to get rid of it entirely.</p>
| 2,208
|
<p>I need to print some items that will go in my car, which sits in direct sun a good portion of the day and almost certainly exceeds 50 °C inside during the summer months (I've had PLA prints sag quite visibly under their own weight and sliding parts weld together, and PETG prints show slight warpage after a couple weeks in the car).</p>
<p>I have the ability now to print hotter than 250 °C, with Capricorn XS tubing installed in the heat break of my Ender 3. I'm examining nylon and TPU filaments for their higher temperature tolerance, but I don't see anything on any filament that clearly gives a glass transition temperature. Is it safe to assume this will be within the recommended bed temperature range, or is there another way to determine this (important, in my use case) value for a given filament?</p>
|
<p>The glass transition temperature is not always indicative of strength. Some polymers are solid well above that temperature, see <a href="https://omnexus.specialchem.com/polymer-properties/properties/glass-transition-temperature" rel="nofollow noreferrer">https://omnexus.specialchem.com/polymer-properties/properties/glass-transition-temperature</a> for all the polyethylene, for example.</p>
<p>What you need is the heat deflection temperature, which is very seldomly provided in the datasheets.</p>
<p>Luckily you can very roughly use the Tg for many polymers, but you should know that you are using a value which is only vaguely indicative of what you are looking for.</p>
|
<p>If there isn't a TDS on the manufacturer's website, you could email them and ask for it. If they won't give it to you, you could ask for the SDS to see if it has any additives. Polymaker has a TDS for every filament they make on their website, so I would recommend their filament for your purposes. I would also suggest you pay more attention to the heat deflection temperature, as it is probably more useful for functional parts.</p>
| 1,994
|
<p>When creating the <code>id</code> attributes for HTML elements, what rules are there for the value?</p>
|
<p>For <a href="http://www.w3.org/TR/html4/types.html#type-id" rel="noreferrer">HTML 4</a>, the answer is technically:</p>
<blockquote>
<p>ID and NAME tokens must begin with a letter ([A-Za-z]) and may be followed by any number of letters, digits ([0-9]), hyphens ("-"), underscores ("_"), colons (":"), and periods (".").</p>
</blockquote>
<p><a href="https://www.w3.org/TR/html5/dom.html#the-id-attribute" rel="noreferrer">HTML 5</a> is even more permissive, saying only that an id must contain at least one character and may not contain any space characters.</p>
<p>The id attribute is case sensitive in <a href="https://www.w3.org/TR/xhtml1/diffs.html#h-4.2" rel="noreferrer">XHTML</a>.</p>
<p>As a purely practical matter, you may want to avoid certain characters. Periods, colons and '#' have special meaning in CSS selectors, so you will have to escape those characters using a <a href="http://www.w3.org/TR/CSS2/syndata.html#value-def-identifier" rel="noreferrer">backslash in CSS</a> or a double backslash in a <a href="http://learn.jquery.com/using-jquery-core/faq/how-do-i-select-an-element-by-an-id-that-has-characters-used-in-css-notation/" rel="noreferrer">selector string passed to jQuery</a>. Think about how often you will have to escape a character in your stylesheets or code before you go crazy with periods and colons in ids.</p>
<p>For example, the HTML declaration <code><div id="first.name"></div></code> is valid. You can select that element in CSS as <code>#first\.name</code> and in jQuery like so: <code>$('#first\\.name').</code> But if you forget the backslash, <code>$('#first.name')</code>, you will have a perfectly valid selector looking for an element with id <code>first</code> and also having class <code>name</code>. This is a bug that is easy to overlook. You might be happier in the long run choosing the id <code>first-name</code> (a hyphen rather than a period), instead.</p>
<p>You can simplify your development tasks by strictly sticking to a naming convention. For example, if you limit yourself entirely to lower-case characters and always separate words with either hyphens or underscores (but not both, pick one and never use the other), then you have an easy-to-remember pattern. You will never wonder "was it <code>firstName</code> or <code>FirstName</code>?" because you will always know that you should type <code>first_name</code>. Prefer camel case? Then limit yourself to that, no hyphens or underscores, and always, consistently use either upper-case or lower-case for the first character, don't mix them.</p>
<hr>
<p>A now very obscure problem was that at least one browser, Netscape 6, <a href="https://developer.mozilla.org/en-US/docs/Case_Sensitivity_in_class_and_id_Names" rel="noreferrer">incorrectly treated id attribute values as case-sensitive</a>. That meant that if you had typed <code>id="firstName"</code> in your HTML (lower-case 'f') and <code>#FirstName { color: red }</code> in your CSS (upper-case 'F'), that buggy browser would have failed to set the element's color to red. At the time of this edit, April 2015, I hope you aren't being asked to support Netscape 6. Consider this a historical footnote.</p>
|
<p>letters → caps &amp; small (A-Z, a-z)</p>
<p>digits → 0-9</p>
<p>special characters → ':', '-', '_', '.'</p>
<p>Under HTML 4, the value must begin with a letter, which may be followed by any combination of letters, digits, and the special characters above.</p>
<p>Spaces are not allowed; an id value containing a space is not valid.</p>
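The HTML 4 rule is easy to check mechanically; here is a sketch in Python (the regex encodes only the stricter HTML 4 rule, not HTML 5's looser one):

```python
import re

# HTML 4: a leading letter, then any number of letters, digits,
# hyphens, underscores, colons, or periods.
HTML4_ID = re.compile(r"^[A-Za-z][A-Za-z0-9_:.\-]*$")

def is_valid_html4_id(value):
    return bool(HTML4_ID.match(value))
```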
| 9,602
|
<p>I've heard many programmers, particularly Delphi programmers scorn the use of 'with'. </p>
<p>I thought it made programs run faster (only one reference to parent object) and that it was easier to read the code if used sensibly (less than a dozen lines of code and no nesting).</p>
<p>Here's an example:</p>
<pre><code>procedure TBitmap32.FillRectS(const ARect: TRect; Value: TColor32);
begin
with ARect do FillRectS(Left, Top, Right, Bottom, Value);
end;
</code></pre>
<p>I like using <code>with</code>. What's wrong with me?</p>
|
<p>One annoyance with using with is that the debugger can't handle it. So it makes debugging more difficult.</p>
<p>A bigger problem is that it is less easy to read the code. Especially if the with statement is a bit longer.</p>
<pre><code>procedure TMyForm.ButtonClick(...)
begin
with OtherForm do begin
Left := 10;
Top := 20;
CallThisFunction;
end;
end;
</code></pre>
<p>Which Form's CallThisFunction will be called? Self (TMyForm) or OtherForm? You can't know without checking if OtherForm has a CallThisFunction method.</p>
<p>And the biggest problem is that you can make bugs easy without even knowing it. What if both TMyForm and OtherForm have a CallThisFunction, but it's private. You might expect/want the OtherForm.CallThisFunction to be called, but it really is not. The compiler would have warned you if you didn't use the with, but now it doesn't.</p>
<p>Using multiple objects in the with multiplies the problems. See <a href="http://blog.marcocantu.com/blog/with_harmful.html" rel="noreferrer">http://blog.marcocantu.com/blog/with_harmful.html</a></p>
|
<p>We've recently banned it in our Delphi coding standards.</p>
<p>The cons frequently outweighed the pros.</p>
<p>That is, bugs were being introduced because of its misuse. These didn't justify the savings in time to write or execute the code.</p>
<p>Yes, using with can lead to (mildly) faster code execution.</p>
<p>In the following, foo is only evaluated once:</p>
<pre><code>with foo do
begin
bar := 1;
bin := x;
box := 'abc';
end
</code></pre>
<p>But, here it is evaluated three times:</p>
<pre><code>foo.bar := 1;
foo.bin := x;
foo.box := 'abc';
</code></pre>
| 9,691
|
<p>What is the best way to create a lock on a file in Perl?</p>
<p>Is it best to flock on the file or to create a lock file to place a lock on and check for a lock on the lock file?</p>
|
<p>If you end up using flock, here's some code to do it:</p>
<pre><code>use Fcntl ':flock'; # Import LOCK_* constants
# We will use this file path in error messages and function calls.
# Don't type it out more than once in your code. Use a variable.
my $file = '/path/to/some/file';
# Open the file for appending. Note the file path is quoted
# in the error message. This helps debug situations where you
# have a stray space at the start or end of the path.
open(my $fh, '>>', $file) or die "Could not open '$file' - $!";
# Get exclusive lock (will block until it does)
flock($fh, LOCK_EX) or die "Could not lock '$file' - $!";
# Do something with the file here...
# Do NOT use flock() to unlock the file if you wrote to the
# file in the "do something" section above. This could create
# a race condition. The close() call below will unlock the
# file for you, but only after writing any buffered data.
# In a world of buffered i/o, some or all of your data may not
# be written until close() completes. Always, always, ALWAYS
# check the return value of close() if you wrote to the file!
close($fh) or die "Could not write '$file' - $!";
</code></pre>
<p>Some useful links:</p>
<ul>
<li><a href="http://www.perlmonks.org/?node_id=7058" rel="noreferrer">PerlMonks file locking tutorial</a> (somewhat old)</li>
<li><a href="http://perldoc.perl.org/functions/flock.html" rel="noreferrer"><code>flock()</code> documentation</a></li>
</ul>
<p>In response to your added question, I'd say either place the lock on the file or create a file that you call 'lock' whenever the file is locked and delete it when it is no longer locked (and then make sure your programs obey those semantics).</p>
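For comparison, the same pattern in Python's standard <code>fcntl</code> module (a Unix-only sketch; the lock is advisory, just as with Perl's flock, and the explicit flush avoids the same buffered-write race the comments above warn about):

```python
import fcntl

def append_locked(path, text):
    """Append text to path while holding an exclusive advisory lock."""
    with open(path, "a") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # blocks until the lock is granted
        try:
            fh.write(text)
            fh.flush()  # push buffered data out while we still hold the lock
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)
```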
|
<p>Use the <a href="http://perldoc.perl.org/functions/flock.html" rel="nofollow noreferrer">flock</a> Luke. </p>
<p><strong>Edit:</strong> <a href="http://www.perlmonks.org/?node_id=7058" rel="nofollow noreferrer">This</a> is a good explanation.</p>
| 5,530
|
<p>How do you generate C# classes for accessing WMI?</p>
|
<p>To generate strongly typed WMI classes, use the Management Strongly typed class generator (MgmtClassGen.exe). It's usually in C:\Program Files\Microsoft Visual Studio X\SDK\vX\Bin. The parameters are <a href="http://msdn.microsoft.com/en-us/library/2wkebaxa.aspx" rel="noreferrer">at MSDN</a> and they even have <a href="http://msdn.microsoft.com/en-us/library/ms186156.aspx" rel="noreferrer">a page</a> describing the code generated. If you have to do a lot of work with WMI, it's a lifesaver.</p>
|
<p>You can try WMI Code Creator; it generates VBScript, JScript, VB.NET, and C# code.
<a href="http://www.microsoft.com/downloads/details.aspx?familyid=2cc30a64-ea15-4661-8da4-55bbc145c30e&displaylang=en" rel="nofollow noreferrer">Download WMI Code Creator from Microsoft</a></p>
| 6,627
|
<p>I am using a Flashforge Creator Dual Extrusion 3D Printer with RepG (<a href="http://replicat.org/" rel="nofollow noreferrer">ReplicatorG</a>), and tonight a print completely detached from the build plate.</p>
<p>I reset everything and watched it closely. It is a simple abs print at 230/110 degrees.</p>
<p>A few seconds after start of print, I noticed that the print bed temp had dropped to 109... and kept dropping.</p>
<p>This is new behavior. I haven't started to troubleshoot yet. I am looking for suggestions on how to troubleshoot the issue.</p>
<p><strong>Edit 1:</strong></p>
<p>Tonight RepG would not connect to the printer. I reseated the USB cord
and now it does.</p>
<p>The test print is ABS at 230/110 from this open scad.</p>
<pre><code>difference()
{
cylinder(d = 10, h = 2);
cylinder(d = 9, h = 2);
}
</code></pre>
<p>Pᴀᴜʟsᴛᴇʀ2/Oscar
Once the print starts the print bed begins cooling. It is cooling before
any filament is laid down.</p>
<p><strong>Edit 2:</strong></p>
<p>Examination of the gcode shows the root cause. It does contain</p>
<pre><code>M104 S230 T1 (set extruder temperature)
</code></pre>
<p>It does not contain a </p>
<pre><code>M109 S110 (set build plate temperature)
</code></pre>
<p>When I manually add the M109 to the gcode file, the model prints normally, and the build plate temperature holds for the duration of the print. </p>
<p>The drop in temperature last week is due to the fact that I tend to preheat the printer before actually starting the print. Since there was no M109 the bed plate cooled to ambient temperature.</p>
<p>There isn't a hardware issue here. Most likely it is a Rep G configuration error of some sort.</p>
<p><strong>Edit 3:</strong></p>
<p>I resolved the issue by a fresh install of the latest Rep G. I rebuilt the profiles from scratch with G Code to set the correct bed temp. Thank you for your suggestions.</p>
|
<p>Check your slicer settings. Some slicers allow the temperature to change at a specified layer. You may have programmed the slicer to turn off the bed after the first (or maybe zeroth) layer.</p>
<p>You can also examine the G-code for M140, M144, and M190 commands.</p>
<p>Of course, the usual intermittent electric connections, power supply failures, etc. should be checked, but don't overlook the possibility that it is working exactly as you are asking it to work. </p>
|
<p>Preheating the bed is a commonly used approach to shorten the time to print. I'll explain: it does not shorten the physical print time itself, but before you start a print there is always something to do, like inserting the SD card or loading the model in the print server; while you do that, the bed is already heating, so you gain time.</p>
<p>When you load a model it apparently resets the temperatures, so preheating will only have an effect when you also actually set the bed temperature in the G-code file. This means that either your G-code file has some codes that influence the temperature, the connector is loose or loading a print over the program you use must reset the bed temperature, there aren't any other options.</p>
<p>To comment on your G-code, M109 is used to set the extruder temperature and wait till it is reached, M190 is used to set the bed temperature and wait till it is reached. Usually M190 is used to set the bed temperature before the first layer is being printed; the printer waits until the temperature is reached before the printer continues to the next command (usually the heating of the hotend). If you have selected a different temperature for the rest of the print you will find the M140 G-code command prior to printing the second layer.</p>
<p>I have tested on my Prusa i3 clone running a Marlin fork to print a model with all references to M190 and M140 removed. I pre-heated to 70 °C (for PETG) and loaded the model in OctoPrint print server and commenced the print. The printer retained the bed value and did not drop.</p>
<p>Basically, it is needed to take a closer look at your G-code file. The answer must be in there. If there is nothing to find in the G-code file and electronic connections are fine, the answer must be found in ReplicatorG, this might send reset signals prior to sending the print, but that sounds really strange.</p>
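For reference, a typical start-of-print temperature sequence looks like the following (using the question's ABS temperatures; the heat-without-waiting commands come first so both heaters warm in parallel, and the T1 parameter follows the question's dual-extruder G-code, though exact parameter support varies by firmware):</p>
<pre><code>M140 S110 (start heating the bed, do not wait)
M104 S230 T1 (start heating the extruder, do not wait)
M190 S110 (wait until the bed reaches 110 degrees)
M109 S230 T1 (wait until the extruder reaches 230 degrees)
</code></pre>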
| 973
|
<p>I have seen some lasers attached to the RepRap platform for cutting but most seem to be cutting paper, balsa wood, or merely etching. If I were wanting to build a platform for cutting wood, similar to the wood framed or boxed 3D printers on the market, what power laser would I need for that? I assume that a lower powered laser would have to travel slower but going too slow would add the possibility of catching the wood on fire (not good). </p>
|
<p>Please do not go down this road. First, not all lasers are equally absorbed by the material and the energy converted to heat to vaporize the material. The light not absorbed is reflected right back into your eyes. This is especially dangerous because it doesn't make you go blind instantly, fooling you into thinking there is no harm. You got the other part right, the lower the laser power, the less turned to heat, the longer it takes to cut, the longer you risk exposure to your eyes. That's right, a low powered laser is MORE dangerous than a big one. Next, the only way to properly cut is with air assist. This means a stream of air blows away the vaporized material so the laser can keep cutting deeper. This also prevents fires. The thing we haven't even touched is a proper safety enclosure, proper bed design to not reflect the laser beam back into the laser killing it and your eyes, and finally smoke/particle exhaust. </p>
<p>Simply put, these cheapo DIY lasers are dangerous, and are also illegal.</p>
|
<p>Strapping a laser to a cheap robot made with flammable plastic parts and no enclosure is stupid in the extreme. Don't do it. Seriously, just don't. <strong>RepRaps are not suitable for laser conversions.</strong> </p>
<p>You have to worry about reflected light damaging your eyes, and for the blue diode lasers currently growing in popularity, causing skin melanomas. You have to worry about mechanical vibration from jerky motion loosening fasteners. You have to worry about the workpiece catching on fire. You have to worry about the fumes being generated from etching and cutting. You have to worry about firmware freezes locking the laser on. It's dangerous to you, anybody in line of sight of the machine, and your property. </p>
<p>Many of the lasers currently being sold for RepRap conversions, particularly from Russia, violate US and EU safety regulations related to power and safety interlocks and should not be on the market at all. </p>
<p>The people posting videos of their RepRap laser conversions are almost always ignorant of proper laser safety requirements and regulations. It's downright scary how much blue flare you can see the camera picking up in a lot of these videos. Don't follow in those people's footsteps. </p>
<p>More directly answering the question details, most people doing light etching or paper cutting type work are using small 0.5-2W diode lasers. These are particularly dangerous because they have poor focus and because they don't have the power to cut cleanly and quickly. That means more fumes, more fire risk, and more damage to the workpiece from edge charring. </p>
<p>Slowing down the laser is not a good solution: dwell time is your enemy. You want a beam powerful enough to near-instantly vaporize the material, not slowly burn it away. </p>
<p>An entry-level proper lasercutter for cutting things like plywood and acrylic is more likely to use a 40w CO2 laser. It should have air assist and fume extraction, and a safety-interlocked enclosure that 100% reflects or absorbs the CO2 laser light wavelength. That basically means a dedicated machine.</p>
| 208
|
<p>I'm trying to alpha blend sprites and backgrounds with <a href="http://www.devkitpro.org/" rel="noreferrer">devkitPro</a> (including libnds, libarm, etc).</p>
<p>Does anyone know how to do this?</p>
|
<p>As a generic reference, I once wrote <a href="http://sylvainhb.blogspot.com/2006/09/alpha-blending.html" rel="nofollow noreferrer">a small blog entry</a> about this issue. Basically, you first have to define which layer is alpha-blended against which other layer(s). As far as I know,</p>
<ul>
<li>the source layer(s) must be over the destination layer(s) for blending to be displayed; that means the priority of source layers should be numerically lower than the priority of destination layers.</li>
<li>the source layer is what is going to be translucent, the destination is what is going to be seen through (and yes, I find this rather confusing).</li>
</ul>
<p>For the <a href="http://sylvainhb.blogspot.com/2011/09/oam-alpha.html" rel="nofollow noreferrer">sprites, specifically</a>, you then have 3 ways to achieve alpha-blending depending on what you need and what you're "ready to pay" for it:</p>
<ul>
<li>You can make all the sprites have some alpha-blending by turning on <code>BLEND_SRC_SPRITE</code> in <code>REG_BLDCNT[_SUB]</code> ... not that useful. </li>
<li>You can selectively turn on blending of some sprites by using <code>ATTR0_TYPE_BLENDED</code>. The blending level will be the same for all sprites (and layers)</li>
<li>bitmap-type sprites use direct colors (bypassing the palettes), so the <code>ATTR2_PALETTE()</code> field of GBA sprites is useless and has been recycled into <code>ATTR2_ALPHA</code>. </li>
</ul>
|
<p>It's been a <i>long</i> time since I've done any GBA programming, but as I recall, the DS supports most (if not all) of the stuff that GBA supports. <a href="http://www.coranac.com/tonc/text/gfx.htm" rel="nofollow noreferrer">This</a> link has a section on how to do alpha blending for GBA (section 13.2). I don't know if there's a DS-specific way of doing it, but this should work for you.</p>
| 2,880
|
<p>I apologize if this is in the wrong section, but it seems relevant.</p>
<p>I am an absolute noob at 3D printing, but recently my need for a specific, small part intersected with access to a 3D printer at work (Zortrax M200), so I figured now was a good opportunity to get into printing.</p>
<p>I am designing a clamp for smaller can capacitors that will hold them upright from a mounting surface. I would like the capacitors to snap into the clamp, but I am struggling to find information on how wide the opening to the clamp should be. It clearly needs to be smaller than the diameter of the clamp, but I'm not sure how much smaller it can be without snapping the clamp arms.</p>
<p>Is there a rule of thumb or general guideline for this type of geometry? I am using Z-ULTRAT, which is some proprietary filament made by Zortrax. It seems to be similar to ABS in terms of strength properties, but stiffer.</p>
<p>I don't have an image at the moment, but I can provide one tomorrow when I have access to the part files. The best analog I can think of is a LEGO figure's hands.</p>
<p>Thanks.</p>
<p>Edit: Per requests for additional information:</p>
<ul>
<li><p>I am not sure what is considered a "thin" wall; currently, the clamp walls are 2mm thick at the thinnest point (where the push-in opening is).</p></li>
<li><p>I would like the walls to be just flexible enough to allow the cap to be seated without breaking - it is more critical that the cap is not loose in the socket.</p></li>
<li><p>The clamp is not going to be subject to repeated stress cycles - by the time the cap needs replacing, it's likely I'll need to print a new clamp as well.</p></li>
<li><p>With my estimations for cooling, I do not expect the clamp to see temperatures above 100C.</p></li>
</ul>
<p>Edit 2:</p>
<p>Based on the link in the comments below, I came up with the following design:
<a href="https://i.stack.imgur.com/6vJ0A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6vJ0A.png" alt="Clamp"></a></p>
<p>The little nubbins in the back fit into the "neck" at the bottom of the capacitor to provide some positive lock. 220 degrees of surround with 4mm thick arms - I think this should work.</p>
|
<p>Many printers have a problem with inserting filament into the extruder. Cutting it at an angle is a huge help. It is also very possible that you have a cog or a bad extruder nozzle. </p>
<p>This link could be of some help: <a href="https://www.thingiverse.com/groups/prusa-i3/topic:7469" rel="nofollow noreferrer">Extruder is not feeding</a>.</p>
|
<p>My first step would be to cut the filament at a bit of an angle. This will help the filament enter the extruder correctly. </p>
<p>If you're still having trouble, you may have a clogged nozzle. This article by Lifewire -<a href="https://www.lifewire.com/3d-printer-extruder-nozzle-clogged-2289" rel="nofollow noreferrer">3D Printer Extruder Nozzle Clogged? Here Is How To Unclog It</a> has some good info.</p>
<p>I've also heard stories about the Bowden Tube (the PTFE tube that guides the filament from the extruder to the hotend) becoming warped or melted in cases where the printer's temperature control went awry. This is rare as PTFE doesn't melt until 320°C. </p>
| 571
|
<p>Let's say I have the following simple enum:</p>
<pre><code>enum Response
{
Yes = 1,
No = 2,
Maybe = 3
}
</code></pre>
<p>How can I bind this enum to a DropDownList control so that the descriptions are displayed in the list as well as retrieve the associated numeric value (1,2,3) once an option has been selected?</p>
|
<p>I probably wouldn't <strong>bind</strong> the data as it's an enum, and it won't change after compile time (unless I'm having one of those <em>stoopid</em> moments). </p>
<p>Better just to iterate through the enum: </p>
<pre><code>Dim itemValues As Array = System.Enum.GetValues(GetType(Response))
Dim itemNames As Array = System.Enum.GetNames(GetType(Response))
For i As Integer = 0 To itemNames.Length - 1
Dim item As New ListItem(itemNames(i), itemValues(i))
dropdownlist.Items.Add(item)
Next
</code></pre>
<p>Or the same in C# </p>
<pre><code>Array itemValues = System.Enum.GetValues(typeof(Response));
Array itemNames = System.Enum.GetNames(typeof(Response));
for (int i = 0; i < itemNames.Length; i++) {
    ListItem item = new ListItem((string)itemNames.GetValue(i),
        ((int)itemValues.GetValue(i)).ToString());
dropdownlist.Items.Add(item);
}
</code></pre>
|
<p>This is my solution for Order an Enum and DataBind(Text and Value)to Dropdown using LINQ</p>
<pre><code>var mylist = Enum.GetValues(typeof(MyEnum)).Cast<MyEnum>().ToList<MyEnum>().OrderBy(l => l.ToString());
foreach (MyEnum item in mylist)
ddlDivisao.Items.Add(new ListItem(item.ToString(), ((int)item).ToString()));
</code></pre>
| 8,728
|
<p>I am looking for some good links with best practices and sample code on creating <a href="http://en.wikipedia.org/wiki/Representational_State_Transfer" rel="noreferrer">REST</a>ful web services using .NET. </p>
<p>Also, any other input you might have regarding REST would be greatly appreciated.</p>
|
<p><a href="http://msdn.microsoft.com/en-us/magazine/cc748663.aspx?pr=blog" rel="noreferrer">ADO.NET Data Services</a> makes it really easy to build and <a href="http://msdn.microsoft.com/en-us/magazine/cc794279.aspx" rel="noreferrer">consume</a> RESTful web services in the .NET world, but understanding the concepts is nevertheless important. Compared to WCF (which added REST support later), ADO.NET Data Services was built primarily for REST.</p>
<p><a href="http://www.25hoursaday.com/weblog/2007/11/19/GuidelinesForBuildingRESTfulWebServices.aspx" rel="noreferrer">Guidelines for Building RESTful Web Services</a> has all the info on the resources you need.</p>
<p>This is another useful <a href="http://www.25hoursaday.com/weblog/2008/08/17/ExplainingRESTToDamienKatz.aspx" rel="noreferrer">blog entry</a>:</p>
<p>The uniform interface constraints describe how a service built for the Web can be a good participant in the Web architecture. These constraints are described briefly as follows :</p>
<p>1) Identification of resources: A resource is any information item that can be named and represented (e.g. a document, a stock price at a given point in time, the current weather in Las Vegas, etc). Resources in your service should be identified using URIs. </p>
<p>2) Manipulation of resources via representations: A representation is the physical representation of a resource and should correspond to a valid media type. Using standard media types as the data formats behind your service increases the reach of your service by making it accessible to a wide range of potential clients. Interaction with the resource should be based on retrieval and manipulation of the representation of the resource identified by its URI. </p>
<p>3) Self-descriptive messages: Following the principles of statelessness in your service's interactions, using standard media types and correctly indicating the cacheability of messages via HTTP method usage and control headers ensures that messages are self-descriptive. Self-descriptive messages make it possible for messages to be processed by intermediaries between the client and server without impacting either. </p>
<p>4) Hypermedia as the engine of application state: Application state should be expressed using URIs and hyperlinks to transition between states. This is probably the most controversial and least understood of the architectural constraints set forth in <a href="http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm" rel="noreferrer">Roy Fielding's dissertation</a>. In fact, Fielding's dissertation contains an explicit argument against using HTTP cookies for representing application state to hammer this point home, yet it is often ignored.</p>
|
<p>When I began developing REST web services I read <em>REST API Design Rulebook</em> by Mark Masse. Once you know the basics and the theory, you will be able to implement REST with WCF, HttpListener, or ServiceStack. All these frameworks are .NET and quite well documented. </p>
<p>I would recommend ServiceStack (http://www.servicestack.net/); there is enough information on the web to get started. </p>
<p>There is also the ASP.NET Web API; it is OK, but I don't use it.</p>
<p>In any case, there is no good REST framework today; you have to choose one that you find easy to use and then apply the theory that you learned from the book.</p>
| 8,537
|
<p>I'm developing a data access component that will be used in a website that contains a mix of classic ASP and ASP.NET pages, and need a good way to manage its configuration settings.</p>
<p>I'd like to use a custom <code>ConfigurationSection</code>, and for the ASP.NET pages this works great. But when the component is called via COM interop from a classic ASP page, the component isn't running in the context of an ASP.NET request and therefore has no knowledge of web.config.</p>
<p>Is there a way to tell the <code>ConfigurationManager</code> to just load the configuration from an arbitrary path (e.g. <code>..\web.config</code> if my assembly is in the <code>/bin</code> folder)? If there is then I'm thinking my component can fall back to that if the default <code>ConfigurationManager.GetSection</code> returns <code>null</code> for my custom section.</p>
<p>Any other approaches to this would be welcome!</p>
|
<p>Try this:</p>
<pre><code>System.Configuration.ConfigurationFileMap fileMap = new ConfigurationFileMap(strConfigPath); //Path to your config file
System.Configuration.Configuration configuration = System.Configuration.ConfigurationManager.OpenMappedMachineConfiguration(fileMap);
</code></pre>
|
<p>Use XML processing:</p>
<pre><code>var appPath = AppDomain.CurrentDomain.BaseDirectory;
var configPath = Path.Combine(appPath, baseFileName);
var root = XElement.Load(configPath);
// can call root.Elements(...)
</code></pre>
| 2,674
|
<p>I am designing a part with a material which can bear up to a certain stress. </p>
<p>To keep the stress within the design limits, I need to ensure a certain surface of solid material per layer, in other words a minimum amount of extruded filament per layer and a smoothly changing infill ratio so that the infill can transmit the load efficiently to the neighbouring layers.</p>
<p>The part has an irregular shape and I cannot simply increase the infill ratio for the whole object because the part is big and it would cost more time and filament. I also cannot build straight pillars of solid material inside the part because there are no regions which are suitable for a continuous pillar.</p>
<p>How can I calculate and apply a smoothly changing infill ratio or in general how to ensure that each layer is made of at least a certain amount of material?</p>
<p>I use Prusa Slicer for slicing and Fusion 360 for the design.</p>
<p>Related questions about variable infill, which however don't answer my question because I need to specify the amount of material and because I need a smoothly changing infill ratio:</p>
<p><a href="https://3dprinting.stackexchange.com/questions/6522/different-infill-in-the-same-part">Different infill in the same part</a></p>
<p><a href="https://3dprinting.stackexchange.com/questions/7025/slic3r-can-i-vary-the-infill-percentage-for-different-heights-of-my-model">slic3r: Can I vary the infill percentage for different heights of my model?</a> </p>
|
<p>I think you have a major XY problem. The amount of material per layer is not what determines the part strength. Unless additional material is placed in a manner that reinforces against the stress you're designing for, it's just wasted. Moreover, infill in particular is not terribly useful this way, as infill has to rest upon the support of existing infill below it. If you suddenly increase infill density at a particular layer, it will not provide any significant additional strength because the added lines will be unsupported and will not bond strongly to anything. Even the next layer above them, and the next after that for quite a few layers, will not bond well because the unsupported lines can just bend downward when the nozzle goes over them, rather than providing a surface for the newly deposited material to press firmly against and bond to.</p>
<p>Generally, infill is not your main source of part strength anyway. I would start out (especially if you can test; if this is a one-off thing, the material cost is not going to be an issue anyway, so just go with overkill) by increasing the number of wall lines (wall thickness). Walls generally provide the most strength, and the amount of material used is proportional to cross-sectional perimeter rather than area, so loosely speaking it grows linearly rather than quadratically. My usual default (and I believe Cura's) is 2 walls; I would expect 4-5 to be very strong, and as long as you keep at least 15-20 % infill, probably stronger than what you'd get from any increase in infill percentage.</p>
|
<p>Very interesting idea!
There may be a way to do this semi-automatically, but I believe it's not yet a part of any slicer.</p>
<p>My suggestion would be:</p>
<ul>
<li>Export file as .stl (Fusion 360)</li>
<li>Use a DLP slicer to create images of the layers (CHITUBox)</li>
<li>Calculate the area of each of the images (Matlab - I think?)</li>
<li>Calculate the required infill percentages for having the same mass on each layer (Excel)</li>
<li>Open the model in PrusaSlicer, create a single "height range modifier" (PrusaSlicer)</li>
<li>Save the project as .3mf (PrusaSlicer)</li>
<li>Extract the .3mf file (7 Zip)</li>
<li>Create the necessary "layer config ranges" texts for PrusaSlicer (Excel)</li>
<li>Inject that text into the extracted .3mf file (Notepad++)</li>
<li>Pack the .3mf file again (7 Zip)</li>
<li>Slice the resulting file (PrusaSlicer)</li>
</ul>
| 1,547
|
<p>In my WPF application, I have a number of databound TextBoxes. The <code>UpdateSourceTrigger</code> for these bindings is <code>LostFocus</code>. The object is saved using the File menu. The problem I have is that it is possible to enter a new value into a TextBox, select Save from the File menu, and never persist the new value (the one visible in the TextBox) because accessing the menu does not remove focus from the TextBox. How can I fix this? Is there some way to force all the controls in a page to databind?</p>
<p><em>@palehorse: Good point. Unfortunately, I need to use LostFocus as my UpdateSourceTrigger in order to support the type of validation I want.</em></p>
<p><em>@dmo: I had thought of that. It seems, however, like a really inelegant solution for a relatively simple problem. Also, it requires that there be some control on the page which is is always visible to receive the focus. My application is tabbed, however, so no such control readily presents itself.</em></p>
<p><em>@Nidonocu: The fact that using the menu did not move focus from the TextBox confused me as well. That is, however, the behavior I am seeing. The following simple example demonstrates my problem:</em></p>
<pre class="lang-xml prettyprint-override"><code><Window x:Class="WpfApplication2.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="Window1" Height="300" Width="300">
<Window.Resources>
<ObjectDataProvider x:Key="MyItemProvider" />
</Window.Resources>
<DockPanel LastChildFill="True">
<Menu DockPanel.Dock="Top">
<MenuItem Header="File">
<MenuItem Header="Save" Click="MenuItem_Click" />
</MenuItem>
</Menu>
<StackPanel DataContext="{Binding Source={StaticResource MyItemProvider}}">
<Label Content="Enter some text and then File > Save:" />
<TextBox Text="{Binding ValueA}" />
<TextBox Text="{Binding ValueB}" />
</StackPanel>
</DockPanel>
</Window>
</code></pre>
<pre><code>using System;
using System.Text;
using System.Windows;
using System.Windows.Data;
namespace WpfApplication2
{
public partial class Window1 : Window
{
public MyItem Item
{
get { return (FindResource("MyItemProvider") as ObjectDataProvider).ObjectInstance as MyItem; }
set { (FindResource("MyItemProvider") as ObjectDataProvider).ObjectInstance = value; }
}
public Window1()
{
InitializeComponent();
Item = new MyItem();
}
private void MenuItem_Click(object sender, RoutedEventArgs e)
{
MessageBox.Show(string.Format("At the time of saving, the values in the TextBoxes are:\n'{0}'\nand\n'{1}'", Item.ValueA, Item.ValueB));
}
}
public class MyItem
{
public string ValueA { get; set; }
public string ValueB { get; set; }
}
}
</code></pre>
|
<p>I found that removing the scope-dependent menu items from the menu's focus scope causes the textbox to lose focus correctly. I wouldn't apply this to ALL items in a menu, but it certainly works for a save or validate action.</p>
<pre><code><Menu FocusManager.IsFocusScope="False" >
</code></pre>
|
<p>The easiest way is to <em>set the focus somewhere</em>.<br>
You can set the focus back immediately, but setting the focus anywhere will trigger the LostFocus-Event on <strong>any type of control</strong> and make it update its stuff:</p>
<pre><code>IInputElement x = System.Windows.Input.Keyboard.FocusedElement;
DummyField.Focus();
x.Focus();
</code></pre>
<p>Another way would be to get the focused element, get the binding element from the focused element, and trigger the update manually. An example for TextBox and ComboBox (you would need to add any control type you need to support):</p>
<pre><code>TextBox t = Keyboard.FocusedElement as TextBox;
if ((t != null) && (t.GetBindingExpression(TextBox.TextProperty) != null))
t.GetBindingExpression(TextBox.TextProperty).UpdateSource();
ComboBox c = Keyboard.FocusedElement as ComboBox;
if ((c != null) && (c.GetBindingExpression(ComboBox.TextProperty) != null))
c.GetBindingExpression(ComboBox.TextProperty).UpdateSource();
</code></pre>
| 8,175
|
<p>Are asymmetrical cryptographic keys fundamentally inter-convertible between the major key container formats? For example, can I convert an X.509 key file into a PGP or OpenGPG key file? </p>
<p>And--assuming the answer is yes--is it "security neutral" to keep one key pair in whatever format and convert into whichever container file format is needed for the occasion?</p>
<p>I'm getting a little tired of maintaining so many key pairs for X.509, OpenGPG, and SSH, when they're all RSA at the heart.</p>
|
<p>Yes and no: yes, the RSA keys embedded into certificates and privkeys are just numbers. You can extract them from the certificate and use them to build keys in other formats. This is commonly done to convert between different certificate formats. </p>
<p>PGP has some support for X.509 for S/MIME, but no ability to use X.509 privkeys verbatim. </p>
<p>SSH has some beta support for directly using X.509 keys and certificates.</p>
|
<p>I'd also have a look at OpenSSL. It has so many different -in and -out options that it will likely be able to convert certs from one format to another.</p>
<p>Have a look at <a href="http://marc.info/?l=openssl-users&m=105162569405053&w=2" rel="nofollow noreferrer">http://marc.info/?l=openssl-users&m=105162569405053&w=2</a></p>
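<p>As a sketch of that kind of conversion with the <code>openssl x509</code> tool (the file names are placeholders, and the throwaway self-signed certificate is generated only so the commands have something to work on):</p>

```shell
# Generate a throwaway self-signed cert to demonstrate with
# (all file names here are illustrative placeholders):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/CN=demo" -keyout key.pem -out cert.pem 2>/dev/null

# Convert the certificate container: PEM -> DER and back again.
openssl x509 -in cert.pem -inform PEM -out cert.der -outform DER
openssl x509 -in cert.der -inform DER -out roundtrip.pem -outform PEM

# Pull the bare RSA public key out of the certificate:
openssl x509 -in cert.pem -pubkey -noout > pubkey.pem
```

<p>The RSA numbers survive each re-wrapping unchanged; only the container format differs.</p>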
| 8,160
|
<p>I've got a website that I've just uploaded onto the interwebs, and it's displaying differently using Firefox 3.0.1 on Ubuntu and WinXP.</p>
<p>Two things I've noticed on Ubuntu:</p>
<ol>
<li>The favicon is missing </li>
<li>The background color isn't displaying (it's set in the stylesheet)</li>
</ol>
<p>What have I done wrong? The CSS file is being fetched under Ubuntu, so why isn't it applying all of the stylesheet, just the bits it likes? And why isn't the favicon displaying? Are they the same problem?</p>
<p><hr>
The answer on the background color: invalid HTML. But I'd love for someone to explain why it works under Windows and not Ubuntu.</p>
<p>The answer on favicon: previously, there was no favicon. The browser cached the lack of favicon. <a href="http://pcsupport.about.com/od/maintenance/ht/clearcacheff.htm" rel="noreferrer">Clear the Firefox cache</a>, and all is well.</p>
|
<p>I would first suggesting getting you html and css code validated. If there are any errors in your markup, these can cause errors in the rendering.</p>
<ul>
<li><a href="http://jigsaw.w3.org/css-validator/" rel="noreferrer">CSS Validator</a></li>
<li><a href="http://validator.w3.org/" rel="noreferrer">HTML Validator</a></li>
</ul>
|
<p>I believe this is a font and browser/OS issue.</p>
<p>Firefox rendering depends on the OS: the fonts available on Linux differ from those on Windows, so the fonts you specified in your CSS may simply be missing. Try installing the msttcorefonts package (the Microsoft TrueType core fonts), which includes the common Windows fonts, so that Firefox can display the fonts you specified in the CSS.</p>
<p>You could also make sure you use fonts that are available on both platforms. Otherwise, I suggest rechecking and revalidating your code.</p>
<p>The other issue could be screen resolution. The page might look fine at a high resolution on Windows but not at a lower resolution on Ubuntu.</p>
| 2,801
|
<p>Ok, I asked a question earlier about Flex and ADO.NET Data Services but didn't get much response so I thought I'd rephrase. Does anyone have any experience building Adobe Flex applications with a .NET back-end? If so, what architecture did you use and what third-party tools if any did you employ. I've read a little about doing Flex remoting with WebORB but it seems more complicated than it should be, are web services an adequate alternative?</p>
|
<p>I've mainly used plain ASP.NET pages that return XML for situations that are mainly one-way (data from ASP.NET --> Flex/Flash) communication. The Flex side just uses a URLLoader to hit the ASP.NET page and loads the result as XML.</p>
<p>If the communication needs to be a little more two-sided (sending more than a couple parameters to ASP.NET lets say), I have used standard ASP.NET webservices. </p>
<p>I've never used WebOrb or Flex remoting because I've never really needed that type of interaction between the server and the SWF.</p>
<p>Hope that helps.</p>
|
| 6,688
|
<p>Our site is dealing with electronics and practical engineering, so we use formulas in questions and answers on occasion, most usually when we have to figure out electronic ratings. </p>
<p>When we talk about the <a href="https://3dprinting.stackexchange.com/questions/760/analytic-equations-to-make-algorithm-of-3d-printer">design</a> or <a href="https://3dprinting.stackexchange.com/questions/631/how-are-delta-movements-calculated">function</a> of printers more than "rule of thumb", we need complex math. When we estimate <a href="https://3dprinting.stackexchange.com/questions/4472/how-is-the-print-time-of-an-object-to-be-printed-estimated?s=4%7C17.5425">print times</a> formulas could be used to illustrate the calculations. When we want to estimate the <a href="https://3dprinting.stackexchange.com/questions/6557/is-the-110m-length-accurate-for-3-0mm-1k-spool">length of a spool of filament</a>, we need math. When we discuss <a href="https://3dprinting.stackexchange.com/a/5155/8884">movement paths</a>, we also come into need for 3D geometry which can be simplified by using non-carthesian coordinates or functions - which in turn need manual typing of greek letters or proper formulas to be displayed well. </p>
<p>Up to now I have been typing LaTeX formulas in online converters and copy-pasting the link to the image of the generated formula. This is quite labor intensive, especially since SE does support built-in MathJax.</p>
<p>A recent <a href="https://3dprinting.stackexchange.com/a/6540/5740">answer</a> showed LaTeX/MathJax code in the answer, but that did not render into an actual formula image as it is not enabled.</p>
<blockquote>
<ul>
<li>Supply voltage $ U_S = 12V \or 24V $</li>
<li>Logic Voltage $ U_L = 5V $</li>
<li>Sensor Voltage $ U_sens = U_L$</li>
<li>Temperature control (Hotend/Bed/Cooling fans) $ U_T = U_S$</li>
</ul>
</blockquote>
<p>Why can't we enter formulas directly in posts?</p>
|
<p>$$\text{3D Printing Stack Exchange} \subset \text{Stack Exchange sites that use MathJax}$$</p>
<p>There are <a href="http://data.stackexchange.com/3dprinting/query/879802/mathjax-inline" rel="nofollow noreferrer">~17 posts</a> that could use an edit. Most of those are prices that have been converted into MathJax. You can fix that by escaping the dollar sign:</p>
<pre><code>$ => \$
</code></pre>
<p>I'll work on those edits myself, but I'd love to get some help.</p>
|
<p>MathJax would be really helpful:</p>
<p>We recently discussed <a href="https://3dprinting.stackexchange.com/questions/6832/how-is-infill-generated-via-slicing/6836#6836">How is infill generated via slicing?</a> and the answer is linear algebra. Displaying linear algebra without formulas is a PITA.</p>
| 53
|
<p>What's the "right" way to do the following as a boolean expression?</p>
<pre><code>for i in `ls $1/resources`; do
if [ $i != "database.db" ]
then
if [ $i != "tiles" ]
then
if [ $i != "map.pdf" ]
then
if [ $i != "map.png" ]
then
svn export -q $1/resources/$i ../MyProject/Resources/$i
...
</code></pre>
|
<p>The other solutions have a couple of common mistakes:
<a href="http://www.pixelbeat.org/programming/shell_script_mistakes.html" rel="nofollow noreferrer">http://www.pixelbeat.org/programming/shell_script_mistakes.html</a></p>
<ol>
<li><p><code>for i in $(ls ...)</code> is redundant/problematic;
just do: <code>for i in "$1"/resources/*; do ...</code></p></li>

<li><p><code>[ $i != file1 -a $i != file2 ]</code> This actually has 2 problems.</p>

<p>a. The <code>$i</code> is not quoted, hence names with spaces will cause issues</p>

<p>b. <code>-a</code> is inefficient if <code>stat</code>ing files as it doesn't short circuit (I know the above is not <code>stat</code>ing files).</p></li>
</ol>
<p>So instead try:</p>
<pre class="lang-bsh prettyprint-override"><code>for i in "$1"/resources/*; do
    name=$(basename "$i")
    if [ "$name" != "database.db" ] &&
       [ "$name" != "tiles" ] &&
       [ "$name" != "map.pdf" ] &&
       [ "$name" != "map.png" ]; then
        svn export -q "$i" "../MyProject/Resources/$name"
    fi
done
</code></pre>
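<p>The chained tests can also be collapsed into a single <code>case</code> statement; here is a sketch under the same assumptions, where the function wrapper and the <code>echo</code> standing in for the real <code>svn export</code> call are illustrative:</p>

```shell
#!/bin/sh
# Sketch: skip a fixed set of names with one case pattern instead of
# four chained [ ... ] tests. Names and paths are illustrative.
export_resources() {
    for path in "$1"/resources/*; do
        name=$(basename "$path")
        case "$name" in
            database.db|tiles|map.pdf|map.png)
                ;;  # excluded: do nothing
            *)
                # a real script would run: svn export -q "$path" ...
                echo "$name"
                ;;
        esac
    done
}
```

<p>Because <code>case</code> matches the whole word with no word splitting, names containing spaces are handled safely without extra quoting inside the patterns.</p>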
|
<pre><code>for i in `ls $1/resources`; do
if [ $i != "database.db" ] && [ $i != "tiles" ] && [ $i != "map.pdf" ] && [ $i != "map.png" ]; then
svn export -q $1/resources/$i ../MyProject/Resources/$i
</code></pre>
| 7,123
|
<p>I am trying to add the line</p>
<blockquote>
<p>G4 P4000 G1 F4000 E-50</p>
</blockquote>
<p>to pause and retract the print in order for the printed line to dry (printing in mid air vertically). Does the above code work or should i skip lines or does it not matter at all?</p>
<blockquote>
<p>G4 P4000<br>
G1 F4000 E-50</p>
</blockquote>
<p>Would the following line do the exact same thing?</p>
<blockquote>
<p>G4 P5000 E-50</p>
</blockquote>
|
<p>Yes, G-code is read line by line. G-code is a numerical control programming language: it instructs the machine sequentially, line by line, to do a specific task. The printer executes the lines one by one until it reaches the end.</p>
<p>If you instruct the printer to wait (<code>G4</code> dwell), it will do the wait/dwell first and then execute the next command to retract the filament. As such, your examples will not work if you want to retract the filament during the pause; you have the order reversed if that is what you want to achieve.</p>
<p>To add a simple pause, e.g. for filament changing, you should instruct the head to go to a certain position, retract the filament, and only then insert the pause/dwell command. Give yourself enough time to insert and prime the nozzle, then go back to the last location to continue printing. </p>
<p>You could insert something like (e.g. in between layer change, before <code>G1 Zx.xx</code>):</p>
<pre><code>...
G1 X0 Y0 F2000 ; Relocate the print head
G1 F4000 E-50 ; Retract filament
G4 P40000 ; Wait for 40 seconds
G92 E50 ; The new filament should continue at this value
...
G1 Zx.xx
</code></pre>
<p>Depending on what happens after <code>G1 Zx.xx</code>, you may need to set the head back to the location prior to where it was before <code>G1 X0 Y0 F2000</code>.</p>
<p>Do note that there are pausing scripts/plugins available for e.g. Ultimaker Cura, and there is also a filament changing command <a href="https://reprap.org/wiki/G-code#M600:_Filament_change_pause" rel="nofollow noreferrer"><code>M600</code></a> that can be enabled for certain firmware (if this is your ultimate goal).</p>
<p>Using a post processing plugin of Ultimaker Cura, a pausing script looks like:</p>
<pre>
...
G0 X137.692 Y105
;TIME_ELAPSED:707.873599
;TYPE:CUSTOM;added code by post processing
;script: PauseAtHeight.py
;current z: 5
;current height: 5.0
M83
G1 F300 Z6
G1 F9000 X190 Y190
G1 F300 Z15
M104 S0; standby temperature
M0;Do the actual pause
M109 S200; resume temperature
G1 F300 Z6
G1 F9000 X133.423 Y105
G1 F9000
M82
G92 E911.50045
;LAYER:24
G0 X137.692 Y105 Z5
...
</pre>
<p>Note that <code>G0</code> and <code>G1</code> are <em>"move to"</em> location instructions (albeit through a different way, fast move and linear move respectively). If you look closely, you see that after the pause, the printer returns to the X-Y position where it left prior to the pause (<code>X137.692 Y105</code>).</p>
<hr>
<p><em>Side note:<br>
Some firmware flavors allow <a href="https://reprap.org/wiki/G-code#Buffering" rel="nofollow noreferrer">buffering</a>, but each statement is executed sequentially.</em></p>
|
<p>This needs to be on two lines like in your second example:</p>
<blockquote>
<p>G4 P4000<br>
G1 F4000 E-50</p>
</blockquote>
<p>See the RepRap Gcode Wiki for more info. <a href="https://reprap.org/wiki/G-code" rel="nofollow noreferrer">https://reprap.org/wiki/G-code</a></p>
<p>EDIT: if you actually want to retract and THEN pause, reverse it:</p>
<blockquote>
<p>G1 F4000 E-50<br>
G4 P4000<br></p>
</blockquote>
| 1,089
|
<p>Recently started using my kit Prusa I3 Mk3 and noticed that with large horizontal surfaces a wrinkling pattern is emerging.</p>
<p>As you can see in the image the wrinkles seem to run parallel, the surface is 10 mm above the build plate with all bridges fully supported.</p>
<p>Has anyone seen this before? All other areas seem to be doing well.</p>
<p>This print uses PLA filament @ 235 °C hotend temperature and clearly shows a wrinkled pattern on the top layer</p>
<p><a href="https://i.stack.imgur.com/a8luU.jpg" rel="nofollow noreferrer" title="Print showing wrinkled top layer print issue"><img src="https://i.stack.imgur.com/a8luU.jpg" alt="Print showing wrinkled top layer print issue" title="Print showing wrinkled top layer print issue"></a></p>
<hr>
<p>If it helps, I haven't changed the settings from the normal Prusa Slic3r 0.15 profile.</p>
<p><em>The problem reduced to an acceptable level by reducing the temperature to 205 °C but keeping the fan speed 100 %, I am printing PLA. I might reduce the fan speed if I feel troubled by the result.</em></p>
|
<p><em>Waves in printed surfaces with <a href="https://en.wikipedia.org/wiki/Fused_filament_fabrication" rel="nofollow noreferrer">FFF</a> are observed at either the bottom layer (common) or the top layer (less common).</em></p>
<h2>Waves in bottom layer</h2>
<p>Rippling/wave generation/wrinkling is a common problem for the first layer and has a direct relation to the nozzle-to-bed distance; too short a distance or over-extrusion can lead to this effect. However, this effect is less commonly observed in top layer finishes. Bottom layer waves are described in more detail in <a href="/a/7232">this answer</a>.</p>
<h2>Waves in top layer</h2>
<p>I have seen this defect before. It is caused by a <em>combination of incorrect <strong>hotend temperature</strong> and <strong>print cooling fan</strong> settings</em>. Please reduce the hotend temperature and reduce the fan cooling. The image below clearly shows the differences of such measures.</p>
<p><a href="https://i.stack.imgur.com/10XmG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/10XmG.jpg" alt="Solving waves in top layer of print" /></a></p>
|
<p>Might just be that you are bridging without collapsing but still have some visible sag. Things to try:</p>
<p>1) Increase the support/infill density. If Slic3r has a configurable setting like Cura does, you only need to increase the density for the last couple of millimetres prior to the top layer.</p>
<p>2) increase the top plate thickness (number of extruded layers). This often allows the first extruded layer to sag, but the final layer or two to be quite flat. </p>
| 1,236
|
<p>I want to be able to capture the exception that is thrown when a user requests a non-existent controller and re-direct it to a 404 page. How can I do this?</p>
<p>For example, the user requests <code>http://www.nosite.com/paeges/1</code> (<em>should be</em> <code>/pages/</code>). How do I make it so they get re-directed to the 404 rather than the exception screen?</p>
|
<p>Take a look at <a href="http://richarddingwall.name/2008/08/09/three-common-aspnet-mvc-url-routing-issues/" rel="noreferrer">this page</a> for routing your 404-errors to a specified page.</p>
|
<p>Found this on the same site - <a href="http://richarddingwall.name/2008/08/17/strategies-for-resource-based-404-errors-in-aspnet-mvc/" rel="nofollow noreferrer">Strategies for Resource based 404s</a></p>
| 4,028
|
<p>What database should a beginner use in the C language?</p>
<p>Can I use MySQL?</p>
<p>Thanks</p>
|
<p>You can use SQLite, MySQL, PostgreSQL, or any other DBMS with a C language binding.</p>
<ul>
<li><a href="http://www.sqlite.org/quickstart.html" rel="noreferrer">SQLite In 5 Minutes Or Less</a></li>
<li><a href="http://dev.mysql.com/doc/refman/5.1/en/c.html" rel="noreferrer">MySQL C API</a></li>
<li><a href="http://www.postgresql.org/docs/8.1/static/libpq.html" rel="noreferrer">PostgreSQL C API</a></li>
</ul>
|
<p>MySQL has C bindings, so you could use that; libmysql usually installs the necessary headers and library files. You might also experiment with something like <a href="http://www.sqlite.org/" rel="nofollow noreferrer">SQLite</a> if you just want to mess about with a DBMS in C.</p>
| 6,451
|
<p>I’ve a Flashforge Adventurer 3 which I’ve found to be a fantastic out of the box ready to go printer. I’ve clocked up 500 hours on it.</p>
<p>I’ve had issues where the nozzle was too close to the print bed, making it impossible for the extruder wheel to force filament down the bowden tube. With harder filaments that results in clicking as it’s cog is skipping. With softer, usually matte PLA it’s just wearing a groove and no longer pushing. I have to take the then baked filament and manually push it out of the nozzle, then bed recalibration.</p>
<p>I have a filament dryer, and use it every time I’m printing, as it’s a perfect dispenser with its roller bearings.</p>
<p>Recently I keep getting prints that start out perfect, but after about layer 10 the extrusion simply stops. The printer carries on as if all is OK, but there’s not even spaghetti.</p>
<p>What could this be? Do I just need a new nozzle? I don’t understand how printing plastic can deteriorate a metal nozzle. It’s not the same as the pitting you get in a soldering iron tip, surely…</p>
<p><a href="https://i.stack.imgur.com/IdDcQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IdDcQ.jpg" alt="Edited with photo of hobbed gear as @fred_dot_u suggested" /></a></p>
<p><a href="https://i.stack.imgur.com/EFz2Q.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EFz2Q.jpg" alt="Failed print. I’ve had several fail the same way. It’ll start out ok then the extrusion seems to fizzle out" /></a></p>
|
<p>Nozzles do degrade with use. That is why they're made to be easily replaceable.</p>
<p>Your problem may be a worn nozzle, incomplete cleaning, or something with the Bowden tube, etc., but an easy troubleshooting step is just to replace the nozzle.</p>
|
<p>I have had this happen at times. I finally got it to go away once I fixed my bed adhesion.</p>
<p>By chance is your print curling up near the edges? If it's popping up and exerting backpressure on the extruder, that can be enough to cause an internal jam and the extruder "clicking" (which is itself just a sign that the plastic isn't feeding).</p>
<p>There might be additional problems or other causes, but this was what fixed my issue.</p>
| 2,179
|
<p>Whenever I use a macro in Visual Studio I get an annoying tip balloon in the system tray and an accompanying "pop" sound. It says:</p>
<blockquote>
<p>Visual Studio .NET macros</p>
<p>To stop the macro from running, double-click the spinning cassette.<br>
Click here to not show this balloon again.</p>
</blockquote>
<p>I have trouble clicking the balloon because my macro runs so quickly.</p>
<p>Is this controllable by some dialog box option?</p>
<p>(I found someone else asking this question on <a href="http://www.tech-archive.net/Archive/VisualStudio/microsoft.public.vsnet.ide/2005-11/msg00267.html" rel="noreferrer">some other site</a> but it's not answered there. I give credit here because I've copied and pasted some pieces from there.)</p>
|
<p>This will disable the pop up:</p>
<p>For Visual Studio 2008 (note that VS2008 is version 9.0 in the registry):</p>
<pre>HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0<br>
DWORD DontShowMacrosBalloon=6</pre>
<p>For Visual Studio 2010 (the DWORD won't be there by default, use <code>New | DWORD value</code> to create it):</p>
<pre>HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0<br>
DWORD DontShowMacrosBalloon=6</pre>
<p>Delete the same key to re-enable it.</p>
|
<p>Okay, I found a way to make the balloon clickable, and clicking it does indeed stop it from popping up again. (On the other site I referenced in the original question, the asker claims that this is not the case. Though he was in VS2005 and I'm using VS2008.)</p>
<p>Anyway, I inserted a pause line in the macro so it would run for long enough for me to click the balloon:</p>
<pre><code>System.Threading.Thread.Sleep(2000)
</code></pre>
<p>It would still be nice to know if there's a dialog somewhere for turning this back on, in case I have a crazy change of heart.</p>
| 7,090
|
<p>It looks like we'll be adding <a href="http://en.wikipedia.org/wiki/Captcha" rel="noreferrer">CAPTCHA</a> support to Stack Overflow. This is necessary to prevent bots, spammers, and other malicious scripted activity. We only want human beings to post or edit things here!</p>
<p>We'll be using a JavaScript (jQuery) CAPTCHA as a first line of defense:</p>
<p><a href="http://docs.jquery.com/Tutorials:Safer_Contact_Forms_Without_CAPTCHAs" rel="noreferrer">http://docs.jquery.com/Tutorials:Safer_Contact_Forms_Without_CAPTCHAs</a></p>
<p>The advantage of this approach is that, <strong>for most people, the CAPTCHA won't ever be visible!</strong></p>
<p>However, for people with JavaScript disabled, we still need a fallback and this is where it gets tricky.</p>
<p>I have written a <a href="http://www.codeproject.com/KB/custom-controls/CaptchaControl.aspx" rel="noreferrer">traditional CAPTCHA control for ASP.NET</a> which we can re-use.</p>
<p><img src="https://i.stack.imgur.com/Puvbf.jpg" alt="CaptchaImage"></p>
<p>However, I'd prefer to go with something textual to avoid the overhead of creating all these images on the server with each request.</p>
<p>I've seen things like..</p>
<ul>
<li>ASCII text captcha: <code>\/\/(_)\/\/</code></li>
<li>math puzzles: what is 7 minus 3 times 2?</li>
<li>trivia questions: what tastes better, a toad or a popsicle?</li>
</ul>
<p>Maybe I'm just tilting at windmills here, but I'd like to have a less resource intensive, non-image based <code><noscript></code> compatible CAPTCHA if possible.</p>
<p>Ideas?</p>
|
<p><a href="http://gatekiller.co.uk/Post/JavaScript_Captcha" rel="noreferrer">A method that I have developed</a> and which seems to work perfectly (although I probably don't get as much comment spam as you), is to have a hidden field and fill it with a bogus value e.g.:</p>
<pre class="lang-html prettyprint-override"><code><input type="hidden" name="antispam" value="lalalala" />
</code></pre>
<p>I then have a piece of JavaScript which updates the value every second with the number of seconds the page has been loaded for:</p>
<pre class="lang-js prettyprint-override"><code>var antiSpam = function() {
if (document.getElementById("antiSpam")) {
a = document.getElementById("antiSpam");
if (isNaN(a.value) == true) {
a.value = 0;
} else {
a.value = parseInt(a.value) + 1;
}
}
setTimeout("antiSpam()", 1000);
}
antiSpam();
</code></pre>
<p>Then when the form is submitted, If the antispam value is still "lalalala", then I mark it as spam. If the antispam value is an integer, I check to see if it is above something like 10 (seconds). If it's below 10, I mark it as spam, if it's 10 or more, I let it through.</p>
<pre class="lang-asp prettyprint-override"><code>If AntiSpam = A Integer
If AntiSpam >= 10
Comment = Approved
Else
Comment = Spam
Else
Comment = Spam
</code></pre>
<p>The theory being that:</p>
<ul>
<li>A spam bot will not support JavaScript and will submit what it sees</li>
<li>If the bot does support JavaScript it will submit the form instantly</li>
<li>The commenter has at least read some of the page before posting</li>
</ul>
<p>The downside to this method is that it requires JavaScript; if you don't have JavaScript enabled, your comment will be marked as spam. However, I do review comments marked as spam, so this is not a problem.</p>
<p><strong>Response to comments</strong></p>
<p>@MrAnalogy: The server side approach sounds quite a good idea and is exactly the same as doing it in JavaScript. Good Call.</p>
<p>@AviD: I'm aware that this method is prone to direct attacks as I've mentioned on <a href="http://gatekiller.co.uk/Post/JavaScript_Captcha" rel="noreferrer">my blog</a>. However, it will defend against your average spam bot which blindly submits rubbish to any form it can find.</p>
|
<p>One option would be out-of-band communication; the server could send the user an instant message (or SMS message?) that he/she then has to type into the captcha field.</p>
<p>This imparts an "either/or" requirement on the user -- either you must enable JavaScript OR you must be logged on to your IM service of choice. While it maybe isn't as flexible as some of the other solutions above, it would work for the vast majority of users.</p>
<p>Those with edit privileges, feel free to add to the Pros/Cons rather than submitting a separate reply.</p>
<p>Pros: </p>
<ul>
<li>Accessible: Many IM clients support reading of incoming messages. Some web-based clients will work with screen readers.</li>
</ul>
<p>Cons:</p>
<ul>
<li>Javascript-disabled users are now dependent on up-time of yet another service, on top of OpenID.</li>
<li>Bots will cause additional server resource usage (sending the out-of-band communications) unless additional protections are implemented</li>
</ul>
| 2,991
|
<p>How is it possible to read/write to the Windows registry using Java?</p>
|
<p>I know this question is old, but it is the first Google search result for "java read/write to registry". Recently I found this amazing piece of code which:</p>
<ul>
<li>Can read/write to ANY part of the registry. </li>
<li>DOES NOT USE JNI.</li>
<li>DOES NOT USE ANY 3rd PARTY/EXTERNAL APPLICATIONS TO WORK.</li>
<li>DOES NOT USE THE WINDOWS API (directly)</li>
</ul>
<p>This is pure Java code.</p>
<p>It uses reflection to work, by actually accessing the private methods in the <code>java.util.prefs.Preferences</code> class. The internals of this class are complicated, but the class itself is very easy to use.</p>
<p>For example, the following code obtains the exact Windows distribution <strong>from the registry</strong>:</p>
<pre><code>String value = WinRegistry.readString (
WinRegistry.HKEY_LOCAL_MACHINE, //HKEY
"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", //Key
"ProductName"); //ValueName
System.out.println("Windows Distribution = " + value);
</code></pre>
<p>Here is the original class. Just copy-paste it and it should work (note that it reaches into private JDK internals via reflection, so it only works on older JVMs that still allow this):</p>
<pre><code>import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.ArrayList;
import java.util.List;
import java.util.prefs.Preferences;
public class WinRegistry {
public static final int HKEY_CURRENT_USER = 0x80000001;
public static final int HKEY_LOCAL_MACHINE = 0x80000002;
public static final int REG_SUCCESS = 0;
public static final int REG_NOTFOUND = 2;
public static final int REG_ACCESSDENIED = 5;
private static final int KEY_ALL_ACCESS = 0xf003f;
private static final int KEY_READ = 0x20019;
private static final Preferences userRoot = Preferences.userRoot();
private static final Preferences systemRoot = Preferences.systemRoot();
private static final Class<? extends Preferences> userClass = userRoot.getClass();
private static final Method regOpenKey;
private static final Method regCloseKey;
private static final Method regQueryValueEx;
private static final Method regEnumValue;
private static final Method regQueryInfoKey;
private static final Method regEnumKeyEx;
private static final Method regCreateKeyEx;
private static final Method regSetValueEx;
private static final Method regDeleteKey;
private static final Method regDeleteValue;
static {
try {
regOpenKey = userClass.getDeclaredMethod("WindowsRegOpenKey",
new Class[] { int.class, byte[].class, int.class });
regOpenKey.setAccessible(true);
regCloseKey = userClass.getDeclaredMethod("WindowsRegCloseKey",
new Class[] { int.class });
regCloseKey.setAccessible(true);
regQueryValueEx = userClass.getDeclaredMethod("WindowsRegQueryValueEx",
new Class[] { int.class, byte[].class });
regQueryValueEx.setAccessible(true);
regEnumValue = userClass.getDeclaredMethod("WindowsRegEnumValue",
new Class[] { int.class, int.class, int.class });
regEnumValue.setAccessible(true);
regQueryInfoKey = userClass.getDeclaredMethod("WindowsRegQueryInfoKey1",
new Class[] { int.class });
regQueryInfoKey.setAccessible(true);
regEnumKeyEx = userClass.getDeclaredMethod(
"WindowsRegEnumKeyEx", new Class[] { int.class, int.class,
int.class });
regEnumKeyEx.setAccessible(true);
regCreateKeyEx = userClass.getDeclaredMethod(
"WindowsRegCreateKeyEx", new Class[] { int.class,
byte[].class });
regCreateKeyEx.setAccessible(true);
regSetValueEx = userClass.getDeclaredMethod(
"WindowsRegSetValueEx", new Class[] { int.class,
byte[].class, byte[].class });
regSetValueEx.setAccessible(true);
regDeleteValue = userClass.getDeclaredMethod(
"WindowsRegDeleteValue", new Class[] { int.class,
byte[].class });
regDeleteValue.setAccessible(true);
regDeleteKey = userClass.getDeclaredMethod(
"WindowsRegDeleteKey", new Class[] { int.class,
byte[].class });
regDeleteKey.setAccessible(true);
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
private WinRegistry() { }
/**
* Read a value from key and value name
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @param valueName
* @return the value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static String readString(int hkey, String key, String valueName)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readString(systemRoot, hkey, key, valueName);
}
else if (hkey == HKEY_CURRENT_USER) {
return readString(userRoot, hkey, key, valueName);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Read value(s) and value name(s) form given key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @return the value name(s) plus the value(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static Map<String, String> readStringValues(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readStringValues(systemRoot, hkey, key);
}
else if (hkey == HKEY_CURRENT_USER) {
return readStringValues(userRoot, hkey, key);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Read the value name(s) from a given key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @return the value name(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static List<String> readStringSubKeys(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readStringSubKeys(systemRoot, hkey, key);
}
else if (hkey == HKEY_CURRENT_USER) {
return readStringSubKeys(userRoot, hkey, key);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Create a key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void createKey(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int [] ret;
if (hkey == HKEY_LOCAL_MACHINE) {
ret = createKey(systemRoot, hkey, key);
regCloseKey.invoke(systemRoot, new Object[] { new Integer(ret[0]) });
}
else if (hkey == HKEY_CURRENT_USER) {
ret = createKey(userRoot, hkey, key);
regCloseKey.invoke(userRoot, new Object[] { new Integer(ret[0]) });
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
if (ret[1] != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + ret[1] + " key=" + key);
}
}
/**
* Write a value in a given key/value name
* @param hkey
* @param key
* @param valueName
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void writeStringValue
(int hkey, String key, String valueName, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
writeStringValue(systemRoot, hkey, key, valueName, value);
}
else if (hkey == HKEY_CURRENT_USER) {
writeStringValue(userRoot, hkey, key, valueName, value);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Delete a given key
* @param hkey
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteKey(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int rc = -1;
if (hkey == HKEY_LOCAL_MACHINE) {
rc = deleteKey(systemRoot, hkey, key);
}
else if (hkey == HKEY_CURRENT_USER) {
rc = deleteKey(userRoot, hkey, key);
}
if (rc != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + rc + " key=" + key);
}
}
/**
* delete a value from a given key/value name
* @param hkey
* @param key
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteValue(int hkey, String key, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int rc = -1;
if (hkey == HKEY_LOCAL_MACHINE) {
rc = deleteValue(systemRoot, hkey, key, value);
}
else if (hkey == HKEY_CURRENT_USER) {
rc = deleteValue(userRoot, hkey, key, value);
}
if (rc != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + rc + " key=" + key + " value=" + value);
}
}
// =====================
private static int deleteValue
(Preferences root, int hkey, String key, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS) });
if (handles[1] != REG_SUCCESS) {
return handles[1]; // can be REG_NOTFOUND, REG_ACCESSDENIED
}
int rc =((Integer) regDeleteValue.invoke(root,
new Object[] {
new Integer(handles[0]), toCstr(value)
})).intValue();
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return rc;
}
private static int deleteKey(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int rc =((Integer) regDeleteKey.invoke(root,
new Object[] { new Integer(hkey), toCstr(key) })).intValue();
return rc; // can REG_NOTFOUND, REG_ACCESSDENIED, REG_SUCCESS
}
private static String readString(Preferences root, int hkey, String key, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ) });
if (handles[1] != REG_SUCCESS) {
return null;
}
byte[] valb = (byte[]) regQueryValueEx.invoke(root, new Object[] {
new Integer(handles[0]), toCstr(value) });
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return (valb != null ? new String(valb).trim() : null);
}
private static Map<String,String> readStringValues
(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
HashMap<String, String> results = new HashMap<String,String>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ) });
if (handles[1] != REG_SUCCESS) {
return null;
}
int[] info = (int[]) regQueryInfoKey.invoke(root,
new Object[] { new Integer(handles[0]) });
int count = info[0]; // count
int maxlen = info[3]; // value length max
for(int index=0; index<count; index++) {
byte[] name = (byte[]) regEnumValue.invoke(root, new Object[] {
new Integer
(handles[0]), new Integer(index), new Integer(maxlen + 1)});
String value = readString(hkey, key, new String(name));
results.put(new String(name).trim(), value);
}
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return results;
}
private static List<String> readStringSubKeys
(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
List<String> results = new ArrayList<String>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ)
});
if (handles[1] != REG_SUCCESS) {
return null;
}
int[] info = (int[]) regQueryInfoKey.invoke(root,
new Object[] { new Integer(handles[0]) });
int count = info[0]; // Fix: info[2] was being used here with wrong results. Suggested by davenpcj, confirmed by Petrucio
int maxlen = info[3]; // value length max
for(int index=0; index<count; index++) {
byte[] name = (byte[]) regEnumKeyEx.invoke(root, new Object[] {
new Integer
(handles[0]), new Integer(index), new Integer(maxlen + 1)
});
results.add(new String(name).trim());
}
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return results;
}
private static int [] createKey(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
return (int[]) regCreateKeyEx.invoke(root,
new Object[] { new Integer(hkey), toCstr(key) });
}
private static void writeStringValue
(Preferences root, int hkey, String key, String valueName, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS) });
regSetValueEx.invoke(root,
new Object[] {
new Integer(handles[0]), toCstr(valueName), toCstr(value)
});
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
}
// utility
private static byte[] toCstr(String str) {
byte[] result = new byte[str.length() + 1];
for (int i = 0; i < str.length(); i++) {
result[i] = (byte) str.charAt(i);
}
result[str.length()] = 0;
return result;
}
}
</code></pre>
<p>Original Author: Apache.</p>
<p>Library Source: <a href="https://github.com/apache/npanday/tree/trunk/components/dotnet-registry/src/main/java/npanday/registry" rel="noreferrer">https://github.com/apache/npanday/tree/trunk/components/dotnet-registry/src/main/java/npanday/registry</a></p>
|
<p>I prefer using the <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/preferences/overview.html" rel="nofollow">java.util.prefs.Preferences</a> class.</p>
<p>A simple example would be </p>
<pre><code>// Write operation
Preferences p = Preferences.userRoot();
p.put("key", "value");
// there are also various other methods, such as putByteArray(), putDouble(), etc.
p.flush(); // note: flush() throws BackingStoreException

// Read operation
String value = p.get("key", "default"); // the second argument is the fallback value
</code></pre>
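<p>For reference, here is a complete, compilable version of the above (the class name <code>PrefsDemo</code> and node name <code>"demo"</code> are arbitrary; using a scratch node means nothing permanent is left behind):</p>

```java
import java.util.prefs.BackingStoreException;
import java.util.prefs.Preferences;

public class PrefsDemo {
    public static void main(String[] args) throws BackingStoreException {
        // Use a child node rather than writing directly into the user root.
        Preferences p = Preferences.userRoot().node("demo");

        p.put("key", "value");                 // write
        p.flush();                             // persist (backed by the registry on Windows)

        System.out.println(p.get("key", "fallback")); // read with a fallback value

        p.removeNode();                        // clean up the scratch node
        Preferences.userRoot().flush();        // make the removal persistent
    }
}
```

<p>On Windows this ends up under <code>HKEY_CURRENT_USER\Software\JavaSoft\Prefs</code>, so it only reaches that part of the registry; for arbitrary keys you still need something like the WinRegistry class above.</p>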
| 8,766
|
<p>I have a custom-built Ajax [div]-based dynamic dropdown.</p>
<p>I have an [input] box which, <code>onkeyup</code>, runs an Ajax search that returns results in <code>div</code>s which are drawn back in using <code>innerHTML</code>. These <code>div</code>s all have highlights <code>onmouseover</code>, so a typical successful search yields the following structure (pardon the semi-code):</p>
<pre><code>[input]
  [div id=results] //this gets overwritten constantly by my AJAX function
[div id=result1 onmouseover=highlight onclick=input.value=result1]
[div id=result2 onmouseover=highlight onclick=input.value=result2]
  [div id=result3 onmouseover=highlight onclick=input.value=result3]
[/div]
</code></pre>
<p>It works.</p>
<p>However, I'm missing the important functions behind regular HTML elements. I can't keyboard down or up between "options".</p>
<p>I know javascript handles keyboard events but; I haven't been able to find a good guide. (Of course, the follow-up question will end up being: can I use <code><ENTER></code> to trigger that <code>onclick</code> event?)</p>
|
<p>What you need to do is attach event listeners to the <code>div</code> with <code>id="results"</code>. You can do this by adding <code>onkeyup</code>, <code>onkeydown</code>, etc. attributes to the <code>div</code> when you create it or you can attach these using JavaScript.</p>
<p>My recommendation would be that you use an AJAX library like <a href="http://developer.yahoo.com/yui/" rel="noreferrer">YUI</a>, <a href="http://jquery.com/" rel="noreferrer">jQuery</a>, <a href="http://www.prototypejs.org/" rel="noreferrer">Prototype</a>, etc. for two reasons:</p>
<ol>
<li>It sounds like you are trying to create an <a href="http://developer.yahoo.com/ypatterns/pattern.php?pattern=autocomplete" rel="noreferrer">Auto Complete</a> control which is something most AJAX libraries should provide. If you can use an existing component you'll save yourself a lot of time.</li>
<li>Even if you don't want to use the control provided by a library, all libraries provide event libraries that help to hide the differences between the event APIs provided by different browsers.</li>
</ol>
<p><a href="http://www.dustindiaz.com/yahoo-event-utility/" rel="noreferrer">Forget addEvent, use Yahoo!’s Event Utility</a> provides a good summary of what an event library should provide for you. I'm pretty sure that the event libraries provided by jQuery, Prototype, et. al. provide similar features. </p>
<p>If that article goes over your head have a look at <a href="http://developer.yahoo.com/yui/event/index.html" rel="noreferrer">this documentation</a> first and then re-read the original article (I found the article made much more sense after I'd used the event library).</p>
<p>A couple of other things:</p>
<ul>
<li>Using JavaScript gives you much more control than writing <code>onkeyup</code> etc. attributes into your HTML. Unless you want to do something <em>really simple</em> I would use JavaScript.</li>
<li>If you write your own code to handle keyboard events a <a href="http://www.cambiaresearch.com/c4/702b8cd1-e5b0-42e6-83ac-25f0306e3e25/Javascript-Char-Codes-Key-Codes.aspx" rel="noreferrer">good key code reference</a> is really handy.</li>
</ul>
|
<p>Off the top of my head, I would think that you'd need to maintain some form of a data structure in the JavaScript that reflects the items in the current dropdown list. You'd also need a reference to the currently active/selected item. </p>
<p>Each time <code>keyup</code> or <code>keydown</code> is fired, update the reference to the active/selected item in the data structure. To provide highlighting information on the UI, add or remove a class name that is styled via CSS based on if the item is active/selected or not.</p>
<p>Also, this isn't a biggy, but <code>innerHTML</code> is not really standard (look into <code>createTextNode()</code>, <code>createElement()</code>, and <code>appendChild()</code> for standard ways of creating data). You may also want to see about attaching event handlers in the JavaScript rather than doing so in an HTML attribute.</p>
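<p>The index bookkeeping that the keyboard handlers need can be kept separate from the DOM, which also makes it easy to test. A sketch (key codes 38/40/13 are up/down/Enter; the wiring to the actual result divs is left out):</p>

```javascript
// Pure state machine for the highlighted result; -1 means "nothing selected".
// The caller maps the returned index onto its result divs (applying the
// highlight class) and, on Enter, copies that div's text into the input box.
function nextIndex(current, keyCode, count) {
  if (count === 0) return -1;
  if (keyCode === 40) return Math.min(current + 1, count - 1); // down arrow
  if (keyCode === 38) return Math.max(current - 1, 0);         // up arrow
  return current; // any other key leaves the selection alone
}

function isSelectKey(keyCode) {
  return keyCode === 13; // Enter triggers the same action as onclick
}
```

<p>Hooking it up is then a matter of an <code>onkeydown</code> handler on the input that calls <code>nextIndex</code>, re-applies the highlight class to the selected div, and fires the div's click action when <code>isSelectKey</code> is true.</p>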
| 2,687
|
<p>How can I host a (<code>.NET, Java, VB6, MFC, etc.</code>) application in a <code>WPF window</code>?</p>
<p>I have a need to use <code>WPF windows</code> to wrap external applications and control the window size and location. </p>
<p>Does anyone have any ideas on how to accomplish this or a direction to research in?</p>
|
<p>Use a HwndHost to host the outside window in your application.</p>
|
<p>This <a href="http://www.ddj.com/windows/197003872?pgno=2" rel="nofollow noreferrer">article</a> explains how to use HwndHost along with a few other Win32 API calls to accomplish the task.</p>
| 9,358
|
<p>Just looking for a basic first-step solution here that keeps the honest people out.</p>
<p>Thanks,
Mike</p>
|
<p>Sure thing, the simplest way is this:</p>
<pre><code>select foo from bar where baz in (1,2,3)
</code></pre>
|
<p>Since <code>in</code> is a shortcut for a chain of <code>or</code> conditions, and I wasn't sure how to combine <code>in</code> with <code>and</code>, I did it this way:</p>
<pre><code> SELECT * FROM table
 WHERE column1='x' AND (column2='y' OR column2='z');
</code></pre>
<p>(The equivalent using <code>in</code> would be <code>WHERE column1='x' AND column2 IN ('y','z')</code>.)</p>
| 3,846
|
<p>I have a 3D printer which wasn't used for a long time. There is a nozzle, but I don't know its diameter. I used several sizes, from 0.2 to 0.8 mm, switching them depending on my needs, but I don't remember which one was used last. The nozzle is a little bit worn down, so the diameter marking on the side is not visible.</p>
<p>How to get the nozzle diameter, without taking it off the printer?</p>
<p>This is more of a theoretical question, because I can simply swap the nozzle, but still - eager to know.</p>
|
<p>The best method is to have a scale on your microscope that looks like a ruler drawn in the optical path. However, I usually visually compare an unknown nozzle with known nozzles under a microscope. If you don't have a microscope, you can get USB otoscope cameras (15 mm focal length, for looking in ears) for under $20.</p>
<p>You can get a fairly good idea of the size by telling the printer to extrude in air, then measuring the extrusion diameter with calipers. Of course, it's better if you can compare with a known nozzle. If you push the filament through by hand, the extrusion will be too thick. With the stepper motor extruding, I measure extrusion diameters between 0.3 and 0.5 mm on a 0.4 mm nozzle. Low-cost calipers are less than $20. The main difference with the low-cost calipers is that the slides are not smooth, reducing the accuracy in the 0.01 mm place.</p>
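<p>The "extrude in air" test is just a couple of commands from the printer's console (generic Marlin-style G-code; the temperature and lengths here are examples, so adjust for your filament):</p>

```gcode
M109 S200    ; heat the nozzle and wait (200 °C suits PLA)
G92 E0       ; zero the extruder position
G1 E50 F100  ; slowly push 50 mm of filament through in air
```

<p>Then measure the hanging strand a little below the nozzle, away from the blobby tip.</p>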
|
<p>The majority of nozzles have a .4 or a .3 on them, indicating how big they are in millimeters. If yours doesn't, take a pair of digital calipers (the sliding ones with prongs that stick out on the side), extrude some filament, and measure it. Make sure you measure a piece that was extruded after the end hit the ground, and pick one that isn't bending, so it's not stretched or compressed. (Your measurements won't be exact due to the plastic cooling off and shrinking.)</p>
| 1,944
|
<p>I'd like to indicate to the user of a web app that a long-running task is being performed. Once upon a time, this concept would have been communicated to the user by displaying an hourglass. Nowadays, it seems to be an animated spinning circle. (e.g., when you are loading a new tab in Firefox, or booting in Mac OS X. Coincidentally, the overflowing stack in the stackoverflow logo looks like one quarter of the circle).</p>
<p>Is there a simple way to create this effect using JavaScript (in particular, jQuery)? Ideally, I'd like to have one of these little spinners as elements in a table, to indicate to the user that the system is still active in processing a pending task (i.e., it hasn't forgotten or crashed). (Of course, I realize it's possible that the back-end has crashed and the front-end still shows an animated spinning thing; it's more for the psychological purpose of the user seeing activity.)</p>
<p>And what do you call that spinning thing, anyways?</p>
|
<p>Google <strong>Ajax activity indicator</strong> to find lots of images and image generators (the "spinning" image itself is an animated GIF). </p>
<p>Here is <a href="http://www.ajaxload.info/" rel="nofollow noreferrer">one link</a> to get you started.</p>
<p>With the image in hand, use jQuery to toggle the visibility of the image (or perhaps its parent DIV tag). See <a href="http://skfox.com/2008/04/28/jquery-example-ajax-activity-indicator/" rel="nofollow noreferrer">this link</a> for some more info.</p>
<p>rp</p>
|
<p>I assume you meant something to indicate background activity during an Ajax call.</p>
<p>I tend to have a CSS class which sets the background image to a little animated GIF, with appropriate padding and positioning (remember to turn background-repeat off), and then add and remove that class using a couple of JavaScript helpers called when the Ajax call is started, and when the response callback runs.</p>
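<p>The two helpers can be nothing more than className string manipulation (this sketch avoids any framework; "busy" is an arbitrary class name that your CSS would point at the spinner GIF):</p>

```javascript
// Add a CSS class to a className string if it isn't already present.
function addClass(className, cls) {
  var parts = className ? className.split(/\s+/) : [];
  if (parts.indexOf(cls) === -1) parts.push(cls);
  return parts.join(" ");
}

// Remove every occurrence of a CSS class from a className string.
function removeClass(className, cls) {
  var out = [];
  var parts = className ? className.split(/\s+/) : [];
  for (var i = 0; i < parts.length; i++) {
    if (parts[i] !== cls) out.push(parts[i]);
  }
  return out.join(" ");
}

// In the page you would use them as, e.g.:
//   el.className = addClass(el.className, "busy");    // when the Ajax call starts
//   el.className = removeClass(el.className, "busy"); // in the response callback
```

<p>Working on <code>el.className</code> directly keeps this compatible with older browsers that lack <code>classList</code>.</p>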
| 6,243
|
<p>I'm having some problems integrating MS MapPoint 2009 into my WinForms .Net 2.0 application in C#. I've added the ActiveX MapPoint control onto a form and have no problems getting it to display a maps and locations; my concern is the time it takes to load a map once it is created. </p>
<p>The tests on my development machine have shown the average load time to be between 3 and 5 seconds, during which the application is totally locked. While this isn't totally unacceptable, it's an awfully long time to lose control of the application. Also, because the GUI thread is locked, I cannot show a loading dialog or something to mask the load time. </p>
<p>The line that hangs is this: (where axMappointControl1 is the MapPoint control)</p>
<pre><code>axMappointControl1.NewMap(MapPoint.GeoMapRegion.geoMapNorthAmerica);
</code></pre>
<p>I've tried executing the NewMap method on another thread but the GUI thread still ends up being blocked.</p>
<p>My questions are: </p>
<ul>
<li>What can I do to speed up MapPoint when it loads?</li>
<li>Is there any way to load MapPoint so that it won't block the GUI thread?</li>
</ul>
<p>Any help is greatly appreciated.</p>
|
<p>According to <a href="http://www.mapforums.com/load-new-map-separate-thread-keep-ui-responsive-4605.html" rel="nofollow noreferrer">these</a> <a href="http://www.mapforums.com/activex-performance-net-3415.html" rel="nofollow noreferrer">threads</a> at mapforums.com, the slowness of ActiveX startup is well known and unavoidable (though the question of whether threading can help with GUI responsiveness is still open).</p>
<p>One thing suggested was to abandon the ActiveX version in favor of the MapPoint.Application object instead.</p>
<p>Hope that helps.</p>
|
<p>Yes, the Application version runs on its own thread, so this should be a quicker alternative; it is easier to do your own stuff whilst it is starting up. However, MapPoint 2010 tends to take a few seconds to start up when started by a user.</p>
<p>I would create a temporary GUI thread and use this to display a splash screen during start up and/or do any thread-safe initialisation that you need to do. </p>
<p>All calls to a MapPoint instance (or ActiveX control) must be from the same thread that create the MapPoint control or application object.</p>
| 8,453
|
<p>I've been really enjoying Python programming lately. I come from a background of a strong love for C-based coding, where everything is perhaps more complicated than it should be (but puts hair on your chest, at least). So switching from C to Python for more complex things that don't require tons of speed has been more of a boon than a bane in writing projects.</p>
<p>However, coming from this land of brackets and parentheses and structs as far as the naked eye can see, I come across a small problem: I find Python difficult to read.</p>
<p>For example, the following block of text is hard for me to decipher unless I stare at it (which I dislike doing):</p>
<pre><code>if foo:
bar = baz
    while bar is not biz:
bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
did_i_not_warn_you_biz()
my_father_is_avenged()
</code></pre>
<p>The problem occurs at the end of that if block: all the tabbing and then suddenly returning to a jarring block feels almost disturbing. As a solution, I've started coding my Python like this:</p>
<pre><code>if foo:
bar = baz
    while bar is not biz:
bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
#-- while --
#-- if --
did_i_not_warn_you_biz()
my_father_is_avenged()
</code></pre>
<p>And this, for some odd reason, makes me more able to read my own code. But I'm curious: has anyone else with my strange problem found easier ways to make their tabbed-out code more readable? I'd love to find out if there's a better way to do this before this becomes a huge habit for me.</p>
|
<p>Part of learning a new programming language is learning to read code in that language. A crutch like this may make it easier to read your own code, but it's going to impede the process of learning how to read anyone else's Python code. I really think you'd be better off getting rid of the end of block comments and getting used to normal Python.</p>
|
<p>I would look in to understanding more details about Python syntax. Often times if a piece of code looks odd, there usually is a better way to write it. For example, in the above example:</p>
<pre><code>bar = baz if foo else None
while bar is not biz:
bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
did_i_not_warn_you_biz()
my_father_is_avenged()
</code></pre>
<p>While it is a small change, it might help the readability. Also, in all honesty, I've never used a while loop, so there is a good chance you would end up with a nice concise list comprehension or for loop instead. ;)</p>
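<p>For instance, the search loop from the question can collapse into a single <code>next()</code> call (a sketch; <code>candidates</code> is an illustrative name introduced here, standing in for whatever sequence is actually being searched):</p>

```python
# Sketch: "keep looking until you find biz" as one expression.
# `candidates` and `biz` are illustrative placeholders.
candidates = ["foo", "baz", "biz"]
biz = "biz"
bar = next((c for c in candidates if c == biz), None)  # None if absent
```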
| 7,458
|
<p>I'm just wondering if it exists better solution for this.</p>
<pre><code>BitConverter.ToInt32(sample_guid.ToByteArray(), 0)
</code></pre>
|
<p>I don't think there's a better solution than this. </p>
|
<p>Dunno about a better solution, but I hope you don't intend to use that Int32 as a random (or unique) value. </p>
<p>You cannot depend on any sub part of a Guid to be unique. Guid is assumed to be unique, only in its entirety.</p>
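<p>To see concretely what that truncation produces, here is the same "first four bytes as a 32-bit int" extraction done in Python (purely an illustration, not the C# API; <code>uuid.UUID.bytes_le</code> uses the same mixed-endian byte layout as .NET's <code>Guid.ToByteArray</code>):</p>

```python
import uuid

g = uuid.UUID("12345678-1234-1234-1234-123456789abc")
# signed=True mirrors BitConverter.ToInt32, which returns a signed Int32
first32 = int.from_bytes(g.bytes_le[:4], "little", signed=True)
```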
| 6,749
|
<p>We were having a problem with our build server not checking out modifications from source control despite recognizing that there had been changes.</p>
<p>It was traced to the control folder (not sure what its real name is); the existing working builds were using _svn. Clearing the working folder forced a new complete checkout, and I noticed that the control folder is now .svn. It looks like originally our integration routines were checking out code using _svn but now it is using .svn.</p>
<p><em>The svn.exe being used during integration is from VisualSVN Server can I set this up to use _svn again?</em></p>
<p>How the original working copies came to use _svn I don't know! We only ever used VisualSVN Server and haven't changed this.</p>
<p>We had setup TortoiseSVN to use _svn following the recommendation that this works better for Visual Studio and have also installed TortoiseSVN on the build server in case it is ever needed. Could this be the cause?</p>
<p><em>Also is this really necessary? As MSBuild is Microsoft's is it recommended as it is for Visual Studio?</em></p>
|
<p>The business about _svn vs. .svn was an issue with Visual Studio web projects only (and I'm fairly sure it was fixed in VS2005 anyway), it's not a general "_svn works better with VS" thing.</p>
<p>It's also only a working-copy issue, not a repository issue - i.e. it doesn't matter if some users of SVN are using clients set up to do _svn and some are using .svn - the repository won't know or care (unless somehow you end up with a load of these _svn/.svn files actually checked into the repository, which would be confusing in the extreme).</p>
<p>Unless you have absolute concrete evidence that .svn is causing you problems, I would stick with that wherever you can.</p>
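<p>If you do need to force <code>_svn</code> working copies again, the mechanism in SVN 1.3+ command-line clients is the <code>SVN_ASP_DOT_NET_HACK</code> environment variable; its mere presence (any value) switches new checkouts to <code>_svn</code>. Later SVN releases dropped this again, so verify against your client's release notes before relying on it:</p>

```shell
# On a Windows build server this would be a system-wide
#   set SVN_ASP_DOT_NET_HACK=1
# (via System Properties, so the build-service process inherits it).
# POSIX form shown here:
export SVN_ASP_DOT_NET_HACK=1
```

<p>Note this only affects working copies created after the variable is set; existing <code>.svn</code> checkouts keep their folder name.</p>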
|
<p>As far as I know, _svn is needed because web applications have problems when one of their directories begins with a dot.</p>
| 3,320
|
<p>I have a Prusa i3 3D printer and am using Slic3r for slicing.</p>
<p>I've noticed that my extruder is underextruding. I've correctly set the nozzle diameter and the filament diameter and I've already tried to increment the extrusion multiplier (which is currently set to 1.1). As Slic3r suggests this is a maximum value, I'm wondering if there is something else wrong.</p>
<p>Does anybody have suggestions on which settings I can look at?</p>
<p><a href="https://i.stack.imgur.com/mYACg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mYACg.jpg" alt="enter image description here" /></a></p>
|
<p>I have a Prusa i3 pro B and was having a similar issue. It turns out the spring of the plastic extruder feeder can be too slack. I printed <a href="https://www.thingiverse.com/thing:1936289" rel="nofollow noreferrer">these</a> at 2 mm and put one under the spring, and it solved my issue.</p>
|
<p>It looks like the nozzle is too far from the bed, try leveling the bed again - that should solve the problem.</p>
| 1,710
|
<p>I had the BLTouch (3.1) working with stock firmware on an Ender 3 v2 but wanted to get a 5x5 mesh instead of the 3x3 so I installed the TH3D firmware. With the stock firmware my Z-offset was -2.95 mm.</p>
<p>After installing TH3D firmware, my Z-offset has to be in excess of -6 mm to reach the same nozzle-to-bed height. I've tried setting the Z-offset via LCD and through G-code, saving to EEPROM. I'm having 2 problems with this that I assume are connected:</p>
<ol>
<li>Z-offset seems to not be applied when setting Z axis position. When I autohome and then set Z axis to 0, even the extended probe from the BLTouch does not touch the bed (it's actually 1-2 mm away from touching). And this is with the Z-offset set to -6 mm.</li>
<li>When I tried running a print, which includes an ABL request (<code>G29</code>), the print head crashed into the bed immediately after levelling finished. My hunch is that when starting to print, the -6 mm Z-offset starts being correctly taken into account.</li>
</ol>
<p>Could this be the result of something I did in the Configuration.h when compiling the firmware?</p>
<p>I made the following changes:</p>
<p><a href="https://github.com/th3dstudio/UnifiedFirmware/blob/2.0.x/Board%20Configuration%20Files/CrealityV42X/Firmware/Marlin/Configuration.h" rel="nofollow noreferrer">CrealityV42X/Firmware/Marlin/Configuration.h</a></p>
<ul>
<li>Line 24: uncommented <code>#define ENDER3_V2_V422_BOARD</code></li>
<li>Line 43: uncommented <code>#define CUSTOM_PROBE</code></li>
<li>Line 66: changed <code>#define EZABL_POINTS</code> from 3 to 5</li>
<li>Line 124: changed <code>#define NOZZLE_TO_PROBE_OFFSET</code> from <code>{ 10, 10, 0}</code> to <code>{ -44, -6, 0 }</code></li>
<li>Line 233: uncommented <code>#define BLTOUCH</code></li>
<li>Line 235: uncommented <code>#define SERVO0_PIN PB0</code></li>
<li>Line 636: commented out <code>#define Z_MIN_PROBE_USES_Z_MIN_ENDSTOP_PIN</code></li>
<li>Line 636: added <code>#define USE_PROBE_FOR_Z_HOMING</code></li>
</ul>
<p>In <a href="https://github.com/th3dstudio/UnifiedFirmware/blob/2.0.x/Firmware/Marlin/Configuration_backend.h" rel="nofollow noreferrer">Configuration_backend.h</a> the only change I made was increasing <code>Z_PROBE_OFFSET_RANGE_MIN</code> from -5 to -20.</p>
|
<p>I'm pretty sure something is wrong with the firmware I downloaded. I flashed the 3/5/21 version of Marlin firmware from <a href="https://marlin.crc.id.au/" rel="nofollow noreferrer">https://marlin.crc.id.au/</a> and it is somehow working better with the LCD firmware (which I got from TH3D), the Z offset is working as it should, and the leveling process has more points and goes faster.</p>
|
<p>Do you still have this issue? My Ender with a BLTouch won't respect the Z-offset setting in <strong>Motion</strong> > <strong>Control</strong> > <strong>ZOffset</strong>. You can change it to -10 and it will still start printing about 1 cm above the bed.</p>
<p>All of this started to happen after I used <strong>Bed Leveling</strong> from the prepare menu. To work around the issue, I reset the settings with <strong>Initialize EEPROM</strong> in the control menu.</p>
<ol>
<li>Reset all settings with <strong>Initialize EEPROM</strong></li>
<li>Set the Z-offset (In my case -3.3)</li>
<li>Store settings in the control menu</li>
<li>Switch off the printer (settings won't take effect even if you select to load settings)</li>
</ol>
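<p>Steps 2-3 above can also be issued over G-code from a host (a sketch; <code>-3.3</code> is this answer's value, so substitute your own offset; <code>M851</code> and <code>M500</code> are standard Marlin codes, but verify them against your firmware build):</p>

```gcode
M851 Z-3.3 ; set the probe Z-offset (example value from this answer)
M500       ; store settings to EEPROM
```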
<p>Then start printing. This doesn't solve the issue, because it defeats the purpose of having a BLTouch in the first place, but at least I'm able to print again.</p>
<p>In a nutshell, avoiding auto bed leveling does the trick (or at least in my case)</p>
<p>Hope it helps.</p>
| 1,881
|
<p>I have been printing toy cars for about a month now, and my Ender 3 has stopped extruding plastic even when I insert filament; it has been about a day since it stopped working. Are there any tips for getting it working again?</p>
<p>I tried manual feeding which worked.</p>
|
<p>You need to figure out what is not working:</p>
<ul>
<li>Is the hotend getting hot? If not, melted filament won't come out.</li>
<li>Is the nozzle clogged? In your toolkit was a bit of thin wire for poking into the nozzle - try that and see what happens.<br> You may need to heat the hotend, extract the filament, wait for it to cool, remove the bowden tube and push the wire up from below, if the obstruction is too big to come through the 0.4mm nozzle.</li>
<li>Is the extruder pushing/feeding filament? Undo the bowden tube at the top, tell the control panel to extrude, and observe whether the filament moves. An Ender 3 V2 has the round handle on top; you should see it slowly revolving.<br>
If you can see the gears turning and the filament is not coming through, try snipping that piece off and inserting a fresh end. Also clean inside the pushing gears of the extruder; it could be simple plastic debris lying about.</li>
<li>Are you having reel problems? Can you tug on the filament and have the reel turn? If not, it might be binding on the roll, or knotted/tangled.</li>
</ul>
|
<p>My comment refuses to upload, so I think it’ll be OK if I just put it here:</p>
<p>Thanks, this fixed it! I had to replace my hot end, as it had broken for some reason, but now it is fully working again. Thank you so much!</p>
<p>Sorry for all the mistakes in formatting; I am new, I joined 3 days ago.</p>
| 2,033
|
<p>As some people probably know, this is the 3rd iteration of a SE site proposal whose scope covers 3D printers. The first 2 were <a href="http://area51.stackexchange.com/proposals/22246/personal-manufacturing">Personal Manufacturing</a>(before my time) and <a href="http://area51.stackexchange.com/proposals/41850/digital-fabrication">Digital Fabrication</a>(I tried and failed).</p>
<p>Now I am faced with a situation where I want to ask a question about DIY Polystyrene foam milling (I was researching a bit of <a href="https://en.wikipedia.org/wiki/Lost-foam_casting" rel="nofollow noreferrer">foam casting</a> for a micro-hydro turbine wheel), and I don't have a convenient SE site to ask it.</p>
<p>So my question is: is it possible to expand the scope of the 3DPrinting SE to a more general scope, in particular 3D milling, given that a LOT of the underlying tech IS shared?</p>
<p>I am asking this because a lot of the people involved in the first 2 failed proposals are participating here and may be able to help answer my questions (and further, despite the site's title, this <a href="https://3dprinting.meta.stackexchange.com/a/143/150">answer</a> advises a more inclusive approach).</p>
<p><em>Minor Addendum: I would love being able to ask questions related to even <a href="https://en.wikipedia.org/wiki/David_J._Gingery" rel="nofollow noreferrer">Dave Gingery's work</a> here, but that's a very long way off (It would've definitely been on-topic for Personal Manufacturing).</em></p>
|
<p>Please look into <a href="/questions/147/hobbyist-machine-questions-on-topic">this question</a>, which is a similar question as you ask. </p>
<p>I think the general consensus is that CNC and laser cutting machines fall in the same category as 3D printers with respect to control, axis, modeling, etc. as the difference may be the tool the positioning head is carrying. </p>
<p>My general concern is if you want to include this, does the name of the site cover the content? "Makers.SE" is suggested, but that would include even more "hobbyist machines" that at least fall very much out of this scope (like the mentioned vacuum forming).</p>
<p>When we keep the name 3DPrinting, it should be about 3D printing, if the scope is extended (which is perfectly defendable for CNC machines and laser cutters based on similarity), the name should be changed (if that is possible). </p>
<p>I'm not in favor to change the name as it might scare people away or may cause people to not find the site. At the moment we are already doing everything we can to get out of the Beta phase.</p>
| 61
|
<p>Normally stainless steel is magnetic, but whenever I order stainless steel nozzles from Amazon, they are not magnetic. This makes me think they could be brass coated in something like aluminum. However, there are many types of steel.</p>
<p>I've attached an image of someone who reviewed these nozzles. He says they are not stainless steel because they are not magnetic.</p>
<p><img src="https://i.stack.imgur.com/FYGZr.jpg" alt="enter image description here"></p>
<p>But whenever I order "stainless steel" nozzles, even from other sellers, they are not magnetic. I already returned one pack from another seller and just received a nonmagnetic one from a third seller.</p>
<p>So that's it? Amazon just sells junk now? Or is there a way I can easily tell whether these are something other than colored brass / good for abrasive or high temp printing. Here is the product that was reviewed:
AUSTOR 13 Pieces Stainless Steel 3D Printer Nozzles 0.2 mm, 0.4 mm, 0.6 mm, 0.8 mm, 1.0 mm Extruder Nozzle Print Head for E3D Makerbot <a href="https://rads.stackoverflow.com/amzn/click/com/B07CHZMGRH" rel="nofollow noreferrer" rel="nofollow noreferrer">https://www.amazon.com/dp/B07CHZMGRH/ref=cm_sw_r_cp_apa_i_XtrNDbRMVH8SW</a></p>
|
<p>Let's preface this by saying that there are a LOT of metal identification methods. For example, I found <a href="http://fac.ksu.edu.sa/sites/default/files/Metal%20Identification%20Ready%20_unprotected.pdf" rel="nofollow noreferrer">this guide</a> helpful, and I was at the scrapyard lately, where I was told that 90+% of the time, steel objects that are non-magnetic are the more valuable stainless steels. The kitchen sink I dropped off? Stainless, non-magnetic steel.</p>
<h2>Tempering/Annealing behavior</h2>
<p>The very fact that the nozzles change to a brassy color commonly called straw is proof that it is indeed steel: heating a piece of steel alters it and also alters the surface color, in a process called <a href="https://en.wikipedia.org/wiki/Tempering_(metallurgy)" rel="nofollow noreferrer">tempering</a>. The color is only at the surface, and the mild straw color would become orange-brown, purple, pale blue, teal and yellow if you were to heat it higher. Take a look at the tempering colors of steel here:
<a href="https://i.stack.imgur.com/hY7Pp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hY7Pp.jpg" alt="Steel temper colors" /></a></p>
<p>In contrast, brass acts differently when heated, and <a href="https://www.highettmetal.com.au/blog/brass/how-to-temper-brass" rel="nofollow noreferrer">tempering</a> is somewhat different. Subjecting a piece of brass to heat will not temper it but <a href="https://en.wikipedia.org/wiki/Annealing_(metallurgy)" rel="nofollow noreferrer">anneal</a> it, and the colors develop differently. Instead of turning straw and then blue, brass darkens, starting from its pale gold, passing through a dark "antique" look, then going green, teal, purple-blue and red before losing its color, as this piece of a polished brass plate shows:</p>
<p><a href="https://i.stack.imgur.com/zqtQT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zqtQT.jpg" alt="Brass, annealed" /></a></p>
<h2>Hardness/Chip</h2>
<p>Another test that would be easy to conduct is hardness. The base idea of hardness is: An item can scratch a piece of equal or lower hardness, but not of higher hardness. If you have a chisel handy, then you have a piece of steel at hand. Most chisels are rated as HRC 58-62 - which is the Rockwell hardness scale. Brass could be <a href="http://www.matweb.com/search/DataSheet.aspx?MatGUID=d3bd4617903543ada92f4c101c2a20e5&ckck=1" rel="nofollow noreferrer">all over the place</a>, depending on work hardening. But the identification is not by the hardness but by how the chisel - or better a graver - cuts.</p>
<p>We expect brass to give a cut with sawtooth edges, while stainless cuts smoothly and leaves sharp edges along the cut.</p>
<h2>Sparktest</h2>
<p>If you want to scrap one, get an angle grinder or another power tool and grind at the nozzle. <a href="https://youtu.be/D094eBa4S7c?t=150" rel="nofollow noreferrer">Steel sparks</a> red-orange to whitish, depending on the mix; carbide sparks are very short and orange. Stainless creates a HUGE shower of sparks, yellow-white and dense, with no bursts or branching. Copper, aluminium and brass <strong><a href="https://youtu.be/D094eBa4S7c?t=428" rel="nofollow noreferrer">do not spark</a></strong>. Titanium is very bright white. A spark test can <a href="https://www.youtube.com/watch?v=D094eBa4S7c" rel="nofollow noreferrer">tell you what kind of steel you have</a>.</p>
<h2>Drilltest</h2>
<p>Since we are already destroying a pair of nozzles, why not drill them? We should have done that before the heat treating and the grinding, but alas... Basically, we clamp the piece down and take an HSS drill to drill out the center.</p>
<p>Brass needs a different drill type but can be drilled and machined without coolant. Typical HSS drills from the home depot have a positive rake; brass wants neutral or negative rake to drill or machine smoothly. If the piece grabs and creates short spirals and small dusty flakes with an unmodified new drill (or under positive-rake machining), it drills like brass, as you see here in a <a href="https://youtu.be/pAngKHIZgyA" rel="nofollow noreferrer">Clickspring video on drilling brass</a>:</p>
<p><a href="https://i.stack.imgur.com/CgVeN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CgVeN.jpg" alt="HSS on Steel, ft. Clickspring" /></a></p>
<p>In contrast, stainless steel doesn't want to be machined <strong>without cooling</strong> at all; using high speed quickly creates smoke and nearly no chips. A moment later your tool starts to glow and gets a dull edge. If your drilling experiment turns a new drill blunt at high speeds or uncooled, you have stainless on your hands. To get chips, you need to work <em>slowly</em> and have some sort of cooling. It is still a painfully slow process that needs a lot of pressure, but it yields larger, nesting chips, as seen here in a <a href="https://www.youtube.com/watch?v=FMzIHl1HMXc" rel="nofollow noreferrer">Wayne Canning steel drilling tutorial</a>:</p>
<p><a href="https://i.stack.imgur.com/a5VG2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a5VG2.jpg" alt="Drilling HSS in Stainless Steel" /></a></p>
|
<p>Stainless steel is created by adding elements (usually Chromium, but also Nickel) to steel. These added elements form an oxide layer with the outside air protecting the steel from corroding. </p>
<p>Whether stainless steel is magnetic or not depends on the added elements and the micro structure of the steel; some are and some aren't magnetic.</p>
<p>From <a href="https://www.physlink.com/Education/askexperts/ae546.cfm" rel="nofollow noreferrer">physlink</a>:</p>
<blockquote>
<p>As for whether they (red. stainless steels) are magnetic, the answer is that it depends. There are several families of stainless steels with different physical properties. A basic stainless steel has a 'ferritic' structure and is magnetic. These are formed from the addition of chromium and can be hardened through the addition of carbon (making them 'martensitic') and are often used in cutlery. However, the most common stainless steels are 'austenitic' - these have a higher chromium content and nickel is also added. It is the nickel which modifies the physical structure of the steel and makes it non-magnetic.</p>
</blockquote>
<p>So the answer is that you cannot determine by testing for magnetic properties whether the nozzle is stainless steel or not. But if it is not magnetic, it can still be stainless steel. Note that discoloration is possible; this is the oxide layer.</p>
<p>To identify if the steel is stainless, you could without sacrificing the nozzle (according to <a href="https://www.hunker.com/13401485/how-to-tell-if-stainless-steel-is-real" rel="nofollow noreferrer">this reference</a>):</p>
<blockquote>
<p><strong>Step 1</strong><br>
Stick the magnet on the piece you are testing. If it holds firmly, the metal is possibly stainless steel. If not, it is (red. could be) another metal such as aluminum.</p>
<p><strong>Step 2</strong><br>
Pick a spot on the piece that you don't mind damaging a little.</p>
<p><strong>Step 3</strong><br>
Fill the eye dropper with <a href="https://en.wikipedia.org/wiki/Hydrochloric_acid" rel="nofollow noreferrer">muriatic acid</a>. Drop a small amount of the acid on the test spot. Wait half an hour.</p>
<p><strong>Step 4</strong><br>
Wipe the acid off the piece. Examine the test spot. If it is discolored, the piece is stainless steel.</p>
</blockquote>
<hr>
<p><sup>Note that the image you posted shows a Zinc plated steel screw, not a stainless steel screw.</sup></p>
| 1,480
|
<p>I'm trying to fully understand all of Haskell's concepts.</p>
<p>In what ways are algebraic data types similar to generic types, e.g., in C# and Java? And how are they different? What's so algebraic about them anyway? </p>
<p>I'm familiar with universal algebra and its rings and fields, but I only have a vague idea of how Haskell's types work.</p>
|
<p>Haskell's <em>algebraic data types</em> are named such since they correspond to an <em>initial algebra</em> in category theory, giving us some laws, some operations and some symbols to manipulate. We may even use algebraic notation for describing regular data structures, where:</p>
<ul>
<li><code>+</code> represents sum types (disjoint unions, e.g. <code>Either</code>).</li>
<li><code>•</code> represents product types (e.g. structs or tuples)</li>
<li><code>X</code> for the singleton type (e.g. <code>data X a = X a</code>)</li>
<li><code>1</code> for the unit type <code>()</code></li>
<li>and <em><code>μ</code></em> for the least fixed point (e.g. recursive types), usually implicit.</li>
</ul>
<p>with some additional notation:</p>
<ul>
<li><code>X²</code> for <code>X•X</code></li>
</ul>
<p>In fact, you might say (following Brent Yorgey) that a Haskell data type is regular if it can be expressed in terms of <code>1</code>, <code>X</code>, <code>+</code>, <code>•</code>, and a least fixed point.</p>
<p>With this notation, we can concisely describe many regular data structures:</p>
<ul>
<li><p>Units: <code>data () = ()</code></p>
<p><code>1</code></p>
</li>
<li><p>Options: <code>data Maybe a = Nothing | Just a</code></p>
<p><code>1 + X</code></p>
</li>
<li><p>Lists: <code>data [a] = [] | a : [a]</code></p>
<p><code>L = 1+X•L</code></p>
</li>
<li><p>Binary trees: <code>data BTree a = Empty | Node a (BTree a) (BTree a)</code></p>
<p><code>B = 1 + X•B²</code></p>
</li>
</ul>
<p>Other operations hold (taken from Brent Yorgey's paper, listed in the references):</p>
<ul>
<li><p>Expansion: unfolding the fix point can be helpful for thinking about lists. <code>L = 1 + X + X² + X³ + ...</code> (that is, lists are either empty, or they have one element, or two elements, or three, or ...)</p>
</li>
<li><p>Composition, <code>◦</code>: given types <code>F</code> and <code>G</code>, the composition <code>F ◦ G</code> is a type which builds “F-structures made out of G-structures” (e.g. <code>R = X • (L ◦ R)</code>, where <code>L</code> is lists, is a rose tree).</p>
</li>
<li><p>Differentiation: the derivative of a data type D (given as D′) is the type of D-structures with a single “hole”, that is, a distinguished location not containing any data. These amazingly satisfy the same rules as differentiation in calculus:</p>
<p><code>1′ = 0</code></p>
<p><code>X′ = 1</code></p>
<p><code>(F + G)′ = F' + G′</code></p>
<p><code>(F • G)′ = F • G′ + F′ • G</code></p>
<p><code>(F ◦ G)′ = (F′ ◦ G) • G′</code></p>
</li>
</ul>
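<p>As a worked example of the expansion rule above, repeatedly substituting the list equation into itself unfolds the fixed point:</p>

```
L = 1 + X•L
  = 1 + X•(1 + X•L)
  = 1 + X + X²•L
  = 1 + X + X² + X³ + ...
```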
<hr />
<p><em>References:</em></p>
<ul>
<li><a href="http://www.cis.upenn.edu/%7Ebyorgey/papers/species-pearl.pdf" rel="noreferrer">Species and Functors and Types</a>, Oh My!, Brent A. Yorgey, Haskell’10, September 30, 2010, Baltimore, Maryland, USA</li>
<li><a href="http://personal.cis.strath.ac.uk/%7Econor/Dissect.pdf" rel="noreferrer">Clowns to the left of me, jokers to the right (Dissecting Data Structures)</a>, Conor McBride POPL 2008</li>
</ul>
|
<p>For me, the concept of Haskell's algebraic data types always looked like polymorphism in OO-languages like C#.</p>
<p>Look at the example from <a href="http://en.wikipedia.org/wiki/Algebraic_data_types" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Algebraic_data_types</a>:</p>
<pre><code>data Tree = Empty
| Leaf Int
| Node Tree Tree
</code></pre>
<p>This could be implemented in C# as a TreeNode base class, with a derived Leaf class and a derived TreeNodeWithChildren class, and if you want even a derived EmptyNode class.</p>
<p>(OK I know, nobody would ever do that, but at least you could do it.)</p>
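<p>For concreteness, here is a minimal sketch of that encoding in Java (the question names Java alongside C#; the class names and the <code>leafCount</code> operation are illustrative, not from the answer):</p>

```java
// Sketch: the Haskell Tree type as a class-per-constructor hierarchy.
abstract class Tree {
    abstract int leafCount();  // one operation, dispatched per case
}

class Empty extends Tree {
    int leafCount() { return 0; }
}

class Leaf extends Tree {
    final int value;
    Leaf(int value) { this.value = value; }
    int leafCount() { return 1; }
}

class Node extends Tree {
    final Tree left, right;
    Node(Tree left, Tree right) { this.left = left; this.right = right; }
    int leafCount() { return left.leafCount() + right.leafCount(); }
}
```

<p>What Haskell's closed set of constructors buys you is exhaustiveness checking in pattern matches; nothing stops a fourth subclass being added to the Java hierarchy.</p>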
| 3,745
|
<p>Am just wondering if any conclusions can be drawn from this:</p>
<p><a href="https://i.stack.imgur.com/0RNE3.png" rel="nofollow noreferrer" title="Photo of poor adhesion"><img src="https://i.stack.imgur.com/0RNE3.png" alt="Photo of poor adhesion" title="Photo of poor adhesion"></a></p>
<p>Three corners are solid, but not the one in the centre of the plate.</p>
<p>The bed was levelled before printing (and checked afterwards also). Even though the photo may <em>appear</em> to show a slant or lower corner (where the print is coming off), there is not. The bed is level, relative to the extruder, at room temperature.</p>
<p>The temperature of the bed is about 70 °C. I get inconsistent readings (with laser thermometer) but to the finger it feels about the same everywhere.</p>
<p>It's a glass bed, presumably with some coating. Is it degraded? Local temperature variation? Any ideas anyone?</p>
|
<p>From here: <a href="https://io3dprint.com/review-anycubic-i3-mega-ultrabase/" rel="nofollow noreferrer">https://io3dprint.com/review-anycubic-i3-mega-ultrabase/</a></p>
<blockquote>
<p>Ultrabase Bed
The Anycubic i3 Mega Ultrabase is the latest version in the Anycubic i3 family. As hinted in the name, the main upgrade from the previous version is the Ultrabase bed. This is a textured coating on the Borosilicate glass bed that means you don’t need to apply any glue or tape to the bed to make your prints stick to it.</p>
<p>Ultrabase is similar to the popular BuildTak beds except unlike BuildTak it doesn’t wear off and the most significant benefit is that parts are exceptionally easy to remove once the bed has cooled.</p>
<p>The Ultrabase surface has a Moh’s hardness of over 7. This means you can safely use metal scrapers and blades to clean it without risk of it scratching!</p>
</blockquote>
<p>Perhaps it was just not cleaned sufficiently from a prior print.</p>
| 1,403
|
<p>There are numerous <a href="http://en.wikipedia.org/wiki/Agile_software_development" rel="nofollow noreferrer">Agile software development</a> methods. Which ones have you used in practice to deliver a successful project, and how did the method contribute to that success? </p>
|
<p>I've been involved with quite a few organisations which claimed to work in an 'agile' way, and their processes usually seemed to be based on XP (extreme programming), but none of them ever followed anywhere near all the practices.</p>
<p>That said, I can probably comment on a few of the XP practices</p>
<ul>
<li><p><strong>Unit testing</strong> seems to prove very useful if it's done from the start of a project, but it seems very difficult to come into an existing code-base and start trying to add unit tests. If you get the opportunity to start from scratch, test driven development is a real help.</p></li>
<li><p><strong>Continuous integration</strong> seems to be a really good thing (or rather, the lack of it is really bad). That said, the organisations I've seen have usually been so small as to make any other approach seem foolish.</p></li>
<li><p><strong>User story cards</strong> are nice in that it's great to have a physical object to throw around for prioritisation, but they're not nearly detailed enough unless your developer really knows the domain, or you've got an onsite customer (which I've never actually seen).</p></li>
<li><p><strong>Standup meetings</strong> tend to be really useful for new team members to get to know everyone, and what they work on. The old hands very quickly slack off, and just say things like 'I'm still working on X', which they've been doing for the past week - It takes a strong leader to force them to delve into details.</p></li>
<li><p><strong>Refactoring</strong> is now a really misused term, but when you've got sufficient unit tests, it's really useful to conceptually separate the activity of 'changing the design of the existing code without changing the functionality' from 'adding new functionality'.</p></li>
</ul>
|
<p>I've been working with a team using XP and Scrum practices sprinkled with some lean. It's been very productive.</p>
<p><strong>Daily Standup</strong>- helps us keep complete track of what and where everyone is working on.</p>
<p><strong>Pair Programming</strong>- has improved our code base and helped remove "silly" bugs being introduced into the system.</p>
<p><strong>iterative development</strong>- using 1 week iterations has helped us improve our velocity by setting more direct goals, which has also helped us size requirements</p>
<p><strong>TDD</strong>- has helped me change my way of programming, now I don't write any code that doesn't fix a broken test and I don't write any test that doesn't have a clearly defined requirement. We've also been using executable requirements which has really helped devs and BAs reach requirements understandings.</p>
<p><strong>kanban boards</strong>- show in real time where we are. We have one for the Milestone as well as the current iteration. At a glance you can see what is left to do and what's being done and what's done and accepted. If you don't report in your daily standup something pertaining to what's on the board you have explaining to do.</p>
<p><strong>co-located team</strong>- everyone is up to speed and on page with what everyone else is doing. communication is just-in-time, very productive, I don't miss my cube at all.</p>
| 2,829
|
<p>What is the best SVN Ignore Pattern should TortoiseSVN have for a VB.NET solution?</p>
|
<p>this is what I use for C# w/resharper, should work just the same with vb.net:</p>
<pre><code>build deploy */bin */bin/* obj *.dll *.pdb *.user *.suo _ReSharper* *.resharper* bin
</code></pre>
|
<p>I always add Thumbs.db in as well, because I hate having those files versioned. Probably more of an issue for web developers </p>
| 2,612
|
<p>I am trying to 3D print an iPhone case, however, I want it to print standing up. Is this possible to do?</p>
<p>I have a picture of the case in the link below.<a href="https://i.stack.imgur.com/dUYr0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dUYr0.png" alt="iphone case"></a></p>
|
<p>Yes this is possible. For an FDM/FFF printer, you'll need to print with supports. I might also recommend printing in PLA to minimize the chance of warping during the print (from experience).</p>
<p>It might also help to slow down the feedrate to ensure smooth surface finish and avoiding delamination on such small layers.</p>
<p>You'll probably see a decrease in the surface quality on the inside due to the printing of support scaffolding depending on the slicing engine you use.</p>
<h1>However</h1>
<p>You'll want to pay attention to the strength of the case. If you print the case upright then it will be more susceptible to breaking without post-processing.</p>
<h1>Conclusion</h1>
<p>Ultimately, if you can get away with it, it would be better to subtract the star instead of extruding it in the model. Then you can simply print the case back face down on the plate with much better results.</p>
|
<p>You have a few options. First, I would change the design: why not make it negative space instead? Else, as the other answer mentioned, glue the star on later.</p>
<p>Other options are, come up with a modular design so you do not print it as one piece. </p>
<p>From there you can do some more advanced things, such as printing it with the star up and using HIPS as a support material -- assuming you have a dual extruder</p>
<p><a href="http://reprap.org/wiki/HIPS" rel="nofollow">http://reprap.org/wiki/HIPS</a></p>
<p>You could print it with normal support material, with the star up, and just accept that the leftovers will scratch the phone, or spend a lot of time with some hand tools and sandpaper.</p>
| 348
|
<p>Do you know where I could find some useful third party (free) code snippets for VS 2008?</p>
|
<p><a href="http://gotcodesnippets.com/" rel="nofollow noreferrer">http://gotcodesnippets.com/</a><br>
<a href="http://www.codekeep.net/" rel="nofollow noreferrer">http://www.codekeep.net/</a> has a VS add-in for their snippets, too</p>
|
<p>bdukes site has more options, but here are the ones MSDN has published...</p>
<p><a href="http://msdn.microsoft.com/en-us/vstudio/aa718338.aspx" rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/vstudio/aa718338.aspx</a></p>
| 4,597
|
<p>I have PHP configured so that magic quotes are on and register globals are off.</p>
<p>I do my best to always call htmlentities() for anything I am outputting that is derived from user input.</p>
<p>I also occasionally search my database for common strings used in XSS attacks, such as...</p>
<pre><code><script
</code></pre>
<p>What else should I be doing, and how can I make sure that the things I am trying to do are <strong>always</strong> done?</p>
|
<p>Escaping input is not the best you can do for successful XSS prevention. Also output must be escaped. If you use Smarty template engine, you may use <code>|escape:'htmlall'</code> modifier to convert all sensitive characters to HTML entities (I use own <code>|e</code> modifier which is alias to the above).</p>
<p>My approach to input/output security is:</p>
<ul>
<li>store user input not modified (no HTML escaping on input, only DB-aware escaping done via PDO prepared statements)</li>
<li>escape on output, depending on what output format you use (e.g. HTML and JSON need different escaping rules)</li>
</ul>
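<p>As a minimal sketch of this escape-on-output rule (Python here purely for illustration; the same idea applies to PHP's htmlspecialchars() and json_encode()), note how the HTML and JSON contexts need different escapers:</p>

```python
import html
import json

user_input = '<script>alert("xss")</script>'

# HTML context: convert sensitive characters to entities
html_safe = html.escape(user_input, quote=True)

# JSON context: different rules, so a different escaper
json_safe = json.dumps(user_input)

print(html_safe)
print(json_safe)
```

<p>The stored value stays unmodified; only the output is escaped, per context.</p>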
|
<p>It is difficult to implement thorough SQL injection/XSS prevention on a site without causing false alarms. In a CMS the end user might want to use <code><script></code> or <code><object></code> that links to items from another site. </p>
<p>I recommend having all users install FireFox with NoScript ;-)</p>
| 9,682
|
<p>Note: I have extended my question as some of you mentioned that the question is not clear.</p>
<p>I am using a RAMPS 1.4 board with an Arduino mega 2560. I need to drive a stepper motor as an extruder using either E0 or E1. I am using Repetier-Firmware and can drive the extruder (stepper motor) using the E0 (RAMPS 1.4).
Now for my application, I need to make sure that the extruder is in home position before it starts to drive for the very first time. I am trying to use a switch to connect to the end stop and perform this homing operation. I can do this for X, Y, and Z axes. I was wondering how (h/w connections and firmware modification) can I do it for the extruder?</p>
|
<p><strong>The edited question</strong> appears to mention that <strong>the actual extruders of the print head need to home / limit themselves</strong>. The answer is that this is not required. When operating direct or Bowden driven extruder setups, you know (or can measure or find out experimentally) the distance that the filament has to travel from extruder entry to hot end (e.g. to load new filament). If it is already loaded, because you have printed before, you also know where the filament is (the filament could stop anywhere after printing; personally I retract the filament a few mm after a print). When a new print starts, you usually reverse the retraction at temperature and extrude some extra filament to prime the nozzle, to counteract filament that has oozed out. At that point the nozzle is primed, and the G-code G92 E0 is used to tell the extruder that this is the zero length position, sort of the home position of the filament. All this is usually done in the start code of your slicer, similar to how disabling bed and hot end temperature or the final retract is done in the end code of your slicer.</p>
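<p>As a hedged illustration of that slicer start code, a typical start script looks something like this (the feed rates and the 10 mm prime length are placeholder values, not universal settings):</p>
<pre><code>G28            ; home X, Y and Z against the end-stops
G1 Z15 F300    ; raise the nozzle
G92 E0         ; declare the current filament position as E zero
G1 E10 F200    ; prime the nozzle by extruding some filament
G92 E0         ; reset E to zero again so the print starts from 0
</code></pre>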
<p><strong>This answer below addresses the initial question</strong>, which was not quite clear. It was <strong>phrased as if the head containing the extruders needed to be homed correctly</strong>. The normal end-stops (which can be mechanical or optical switches) already ensure that the printer head (containing the extruder or extruders) is homed correctly (if correctly configured in your printer firmware). The home offsets you define in the firmware ensure that you start at the origin (0,0,0). </p>
<p>Your question does not state what firmware you use, but e.g. in Marlin firmware these settings are found in the <a href="https://github.com/MarlinFirmware/Marlin/blob/1.1.x/Marlin/Configuration.h" rel="nofollow noreferrer">firmware configuration file</a>.</p>
<p>In this file the following is defined:</p>
<pre><code>// Travel limits (mm) after homing, corresponding to endstop positions.
#define X_MIN_POS 0
#define Y_MIN_POS 0
</code></pre>
<p>These values must be changed according to the offset between your switch and the origin of the heat bed (e.g. Prusa style printers have the origin at the front left).</p>
<p>For my Prusa clone printer I have defined:</p>
<pre><code>// Travel limits (mm) after homing, corresponding to endstop positions.
#define X_MIN_POS -35
#define Y_MIN_POS -12
</code></pre>
<p>What this says is that the homing switch for the X axis is 35 mm left of the origin, etc.</p>
|
<p>I think the question has already been answered(if at all possible because the question is still not clear exactly what you mean), but let me try putting this a different way.</p>
<p>You have to ask yourself what is homing and why are you doing it. The purpose of homing your axes is to set their 0 position in space. This then means that any subsequent movements will be relative to these 0 positions and assuming the motors skip no steps should allow accurate positioning of the head inside the build volume throughout the print. In other words their relative position should match their absolute position. As has already been stated the extruder axis does not work like this.</p>
<p>Now I think some of the confusion from your question comes around what you mean by homing the extruder. The extruder position is defined by the previously mentioned X, Y and Z axis. The extruder axis however controls the position of the filament inside the hotend and therefore how much filament is extruded into the build volume. This axis is arbitrarily set to 0 using G92 at the beginning of the print because unlike the spacial axes the absolute position of this axis is not particularly important. Strictly speaking if you were to care about the extruder axis’ absolute position, home would probably be the start of the roll. This however is irrelevant. All that you care about during a print is that the extruder axis remains consistent throughout the print to allow accurate extrusion.</p>
<p>So typically at the beginning of a print you will prime the nozzle to ensure plastic is ready to be extruded as ooze from the previous print may create a delay between moving the extruder axis and plastic actually being extruded. Once the nozzle is primed however the axis position will just be set to 0 and then all future positions of that axis will be relative to where it started on the roll.</p>
<p>Hopefully that helps but if it doesn’t I think we still need more clarification of what you are actually trying to accomplish.</p>
| 876
|
<p>Just a random thought, and if I wasn't asking the question myself I'd probably down vote it, but... are screenshots the best way to show printer settings, or would a text version be better? </p>
<p>Ok, yes, a screenshot is probably easier to do and post, and will show a familiar UI and be easy on the eye and make it easy to notice errors/improvements. </p>
<p>However, having a (standardised) text format would make it <em>searchable</em>. After all, other sites always say, "please copy and paste the error rather than post a screenshot" - because the text in a screenshot is not searchable. </p>
<p>Yes, it could/would be a pain to enforce this rule (is it possible to easily export printer settings as a file?) and/or edit the text version of the settings into the post.</p>
<p>This is probably a daft "thought experiment" of an idea... but I thought I'd float it anyway. </p>
|
<p>Text would probably be best because it is searchable, so things can be found faster than by just looking at the long list of settings available in any slicer/printer/device with your own eyes.</p>
<p>However, as far as I'm aware, exporting settings in text format from a slicer/printer is still wishful thinking due to many factors.</p>
<p>I remember a long conversation on the Ultimaker forums a few months ago about this very proposal: being able to export settings in some form, to make it easier for people who are experimenting to follow the various iterations of settings more easily.</p>
<p>Some people asked for text format, csv, odt, etc., but the people of Ultimaker ultimately crushed the hopes of everyone by explaining, in technical terms that I did not completely understand, that it wasn't possible without rebuilding the whole thing from scratch.</p>
<p>Thus why we're stuck with screenshots for the foreseeable future.</p>
|
<p>The problem with settings is that there are so many settings; if OPs are asked to post certain settings, the OP or the person helping might miss some other parameters that would be visible in a screenshot. I was able to spot a few problems already using the screenshots.</p>
<p>Another thought: "What would you gain by searchable settings?". In case of error messages I am absolutely convinced that you should, but I do not see the advantage for searching/indexing. </p>
<p>What does being searchable actually add? Questions and answers are usually not so long that you cannot spot certain text when you are on a page. E.g. if I search for <code>combing</code>, you now get some hits; if everyone posts their settings as text you suddenly have many more results to scroll through...</p>
| 62
|
<p>Is there a <a href="http://en.wikipedia.org/wiki/Box_plot" rel="nofollow noreferrer">Box Plot</a> graph, or box and whisker graph available for Reporting Services 2005? From the looks of the documentation there doesn't seem to be one out of the box; so I am wondering if there is a third party that has the graph, or a way to build my own?</p>
|
<p>There definitely isn't a Box Plot built into SSRS 2005, though it's possible that 2008 has one. SSRS 2005 does have a robust extension model. If you can implement a chart in System.Drawing/GDI+, you can make it into a <a href="http://msdn.microsoft.com/en-us/magazine/cc188686.aspx" rel="nofollow noreferrer">custom report item</a> for SSRS.</p>
<p>There are a few third-party vendors with fairly feature-rich products, but the only one I've ever evaluated was <a href="http://www.dundas.com/Products/Chart/RS/index.aspx" rel="nofollow noreferrer">Dundas Chart</a>, which isn't cheap, but gives you about 100x more charting capability than SSRS 2005 built in (for SSRS 2008, Microsoft incorporated a great deal of Dundas's charting technology). I can't say from experience that I know Dundas Chart supports the Box Plot, but this <a href="http://support.dundas.com/forum/printable.aspx?m=3579" rel="nofollow noreferrer">support forum post</a> says so.</p>
|
<p><a href="http://sourceforge.net/project/showfiles.php?group_id=114675" rel="nofollow noreferrer">ZedGraph</a> is a good open source alternative.</p>
| 4,750
|
<p>I don't want the part cooling fan on during preheat, especially when I'm only heating the bed; it is just unnecessarily loud and serves no purpose at that time.</p>
<p>However when I set <code>PREHEAT_1_FAN_SPEED</code> to 0 it has no effect, the fan still spins at full speed as soon as I preheat either the hotend or the bed.</p>
<p>So why is this setting not working and how do I fix this?</p>
<p>I am using the latest version of <a href="https://github.com/MarlinFirmware/Marlin/tree/1.1.x/Marlin" rel="nofollow noreferrer">Marlin</a>.</p>
<p>The only one that is defined is the one that I am editing and it is the one that appears on line ~1260 of the stock configuration.h</p>
<pre><code>#define PREHEAT_1_FAN_SPEED 0 // Value from 0 to 255
</code></pre>
<p>And this is the grep result of searching the entire firmware folder:</p>
<p><a href="https://i.stack.imgur.com/dJh5p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dJh5p.png" alt="grep results"></a></p>
|
<p>Apparently, I forgot a critical step:</p>
<p>While I have completely reconfigured the LCD menus, setup custom 25-point mesh leveling, changed a bunch of other numerical values, and flashed the firmware dozens of times, certain values will never be updated unless you remember to <strong>initialize the EEPROM</strong> after the flash! </p>
<p>Honestly I kinda wish they would make it all or nothing but I guess there is limited space so particular things take priority.</p>
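<p>For reference, initializing the EEPROM after a flash can also be done from a serial console with two G-codes (assuming EEPROM support is enabled in the build):</p>
<pre><code>M502 ; load the defaults compiled into the firmware
M500 ; store those settings to EEPROM
</code></pre>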
|
<p><em>This answer is obsolete now that the OP has updated the question with additional information. It is left here as a possible solution for those who have tinkered with their firmware.</em></p>
<hr>
<p>Part cooling fan speed during preheat (from menu) is controlled by the setting </p>
<pre><code>#define PREHEAT_1_FAN_SPEED 0 // Value from 0 to 255
</code></pre>
<p>in <a href="https://github.com/MarlinFirmware/Marlin/blob/1.1.x/Marlin/Configuration.h" rel="nofollow noreferrer">Configuration.h</a> where a value of <code>0</code> implies no rotation, or 0 % and <code>255</code> implies 100 %.</p>
<p>If changing this value in your configuration does not result in a reduced fan speed, you could have this constant be defined somewhere else overriding this value.</p>
| 1,161
|
<p>How do I convert the RGB values of a pixel to a single monochrome value?</p>
|
<p>I found one possible solution in the <a href="http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html" rel="noreferrer">Color FAQ</a>. The <em>luminance component</em> Y (from the <em>CIE XYZ system</em>) captures what is most perceived by humans as color in one channel. So, use those coefficients:</p>
<pre><code>mono = (0.2125 * color.r) + (0.7154 * color.g) + (0.0721 * color.b);
</code></pre>
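<p>As a small illustrative sketch in Python, note that the coefficients sum to 1.0, so pure white maps back to full brightness:</p>

```python
def to_mono(r, g, b):
    # luminance weights quoted from the Color FAQ above
    return 0.2125 * r + 0.7154 * g + 0.0721 * b

white = to_mono(255, 255, 255)   # stays at (essentially) 255
green = to_mono(0, 255, 0)       # green carries most of the weight
red = to_mono(255, 0, 0)
```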
|
<p>The logic behind converting any RGB-based picture to monochrome is not a trivial linear transformation. In my opinion such a problem is better addressed by "Color Segmentation" techniques. You could achieve "Color segmentation" by k-means clustering.</p>
<p>See reference example from MathWorks site.</p>
<p><a href="https://www.mathworks.com/examples/image/mw/images-ex71219044-color-based-segmentation-using-k-means-clustering" rel="nofollow noreferrer">https://www.mathworks.com/examples/image/mw/images-ex71219044-color-based-segmentation-using-k-means-clustering</a></p>
<p><strong>Original picture in colours.</strong></p>
<p><a href="https://i.stack.imgur.com/EcHkr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EcHkr.png" alt="Picture with colours"></a></p>
<p><strong>After converting to monochrome using k-means clustering</strong>
<a href="https://i.stack.imgur.com/VaLea.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VaLea.png" alt="Picture after converting to monochrome using k-means clustering"></a></p>
<p><strong>How does this work?</strong></p>
<p>Collect all pixel values from the entire image. From an image which is W pixels wide and H pixels high, you will get W * H color values. Now, using the k-means algorithm, create 2 clusters (or bins) and throw the colours into the appropriate "bins". The 2 clusters represent your black and white shades. </p>
<p><strong>YouTube video demonstrating image segmentation using k-means</strong>
<a href="https://www.youtube.com/watch?v=yR7k19YBqiw" rel="nofollow noreferrer">https://www.youtube.com/watch?v=yR7k19YBqiw</a></p>
<p><strong>Challenges with this method</strong></p>
<p>The k-means clustering algorithm is susceptible to outliers. A few random pixels with a color whose RGB distance is far away from the rest of the crowd could easily skew the centroids to produce unexpected results.</p>
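<p>A minimal pure-Python sketch of the two-cluster idea on grayscale values (real implementations such as MATLAB's or scikit-learn's k-means are far more robust; this only illustrates the bin-assignment loop described above):</p>

```python
def two_means(values, iterations=20):
    # start the two centroids at the extremes of the data
    c0, c1 = float(min(values)), float(max(values))
    for _ in range(iterations):
        dark = [v for v in values if abs(v - c0) <= abs(v - c1)]
        light = [v for v in values if abs(v - c0) > abs(v - c1)]
        if dark:
            c0 = sum(dark) / len(dark)
        if light:
            c1 = sum(light) / len(light)
    return c0, c1

# threshold every pixel to the nearer centroid: black or white
pixels = [10, 12, 11, 200, 210, 205]
c0, c1 = two_means(pixels)
mono = [0 if abs(p - c0) <= abs(p - c1) else 255 for p in pixels]
```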
| 3,509
|
<p>Your backend could be PHP or Python, but you want to use the controls from the ASP.NET toolkit. Is there a successful way to do this?</p>
|
<p>I don't think that it is possible.</p>
<p>The ASP.NET AJAX Toolkit is based on ASP.NET technique (what a surprise) and needs ASP.NET.</p>
<p>It contains server-side controls, which are translated to HTML and Javascript by the ASP.NET engine. It is not working without the engine.</p>
<p>However you can check the code itself to see how it is working, what it generates on the browser side and get ideas and Javascript codes to build into your application or framework.</p>
<p><strong>Edit:</strong> I've just found an interesting project, which is in alpha stage, check <a href="http://www.codeplex.com/phpmsajax" rel="nofollow noreferrer">this</a> out.</p>
|
<p>I have found that much of the functionality in AJAX Control Toolkit can be accomplished via the javascript frameworks such as jQuery.</p>
| 7,717
|
<p>A sample perl script that connects to an oracle database, does a simple SELECT query, and spits the results to stdout in CSV format would be great. Python or any other language available in a typical unix distribution would be fine too. </p>
<p>Note that I'm starting from scratch with nothing but a username/password for a remote Oracle database. Is there more to this than just having the right oracle connection library?</p>
<p>If there's a way to do this directly in mathematica, that would be ideal (presumably it should be possible with J/Link (mathematica's java integration thingy)).</p>
|
<p>In perl you could do something like this, leaving out all the <code>my</code> local variable declarations and <code>... or die "failmessage"</code> error handling for brevity.</p>
<pre><code>use DBI;
use DBD::Oracle;
$dbh = DBI->connect( "dbi:Oracle:host=127.0.0.1;sid=XE", "username", "password" );
# some settings that you usually want for oracle 10
$dbh->{LongReadLen} = 65535;
$dbh->{PrintError} = 0;
$sth = $dbh->prepare("SELECT * FROM PEOPLE");
$sth->execute();
# one example for error handling just to show how it's done in principle
if ( $dbh->err() ) { die $dbh->errstr(); }
# you can also do other types of fetchrow, see perldoc DBI
while ( $arrayref = $sth->fetchrow_arrayref ) {
print join ";", @$arrayref;
print "\n";
}
$dbh->disconnect();
</code></pre>
<p>Two notes, because people asked in comments:</p>
<ul>
<li>sid=XE is the oracle service id, that is like the name of your database. If you install the free version of oracle, it defaults to "XE", but you can change it. </li>
<li>Installing DBD::Oracle needs the oracle client libraries on your system. Installing that will also set all the necessary environment variables.</li>
</ul>
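<p>One caveat about the <code>join ";"</code> above: it produces broken CSV if a field itself contains the delimiter. If you end up scripting this in a language with a CSV library, letting it handle the quoting is safer. A Python sketch of the output side only (the rows here are stand-in data, not a real Oracle fetch):</p>

```python
import csv
import io

# stand-in for rows fetched from the database cursor
rows = [("Smith, John", "NY"), ("O'Brien", "LA")]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(rows)  # fields containing the delimiter get quoted
csv_text = buf.getvalue()
print(csv_text)
```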
|
<p>Mathematica has a package "DatabaseLink" built in that should make this easy but you need to find a driver for Oracle. Installing the "oracle client libraries" should do that...</p>
| 9,116
|
<p>I am trying to print my first test, but my Creality Ender 3 starts printing near the front edge and within 10 seconds is printing in front of the tray (off onto my table). I really think it should be starting the print in the middle and then never going so close to the edge, let alone over it.</p>
<p>What can I do?</p>
|
<p>When a print is not printing on the build platform, you either:</p>
<ol>
<li><p>Have the incorrect settings in the slicer (e.g. Ultimaker Cura, <a href="https://3dprinting.stackexchange.com/a/6450/5740">a common mistake is that the "origin at center" option is active</a>), or</p></li>
<li><p>Have the center of the bed incorrectly stored in your firmware. (See: <a href="https://3dprinting.stackexchange.com/questions/6375/how-to-center-my-prints-on-the-build-platform">How to center my prints on the build platform? (Re-calibrate homing offset)</a> or <a href="https://3dprinting.stackexchange.com/questions/6399/recalibrating-home-position">Recalibrating Home-position</a>).</p></li>
</ol>
<p>Note that the simplest change (after you have verified the slicer settings and confirmed that it still does not print in the center) is adjusting the settings in the slicer (bed settings, or start G-code script); this way you do not need to compile and upload new firmware (e.g. if you are uncomfortable or inexperienced in doing so). However, fixing it in the firmware is the best solution. Why? If you change the offset in the slicer, you force the bed dimensions to a new position that <em>you</em> know the location of, while if it is changed in the firmware, the printer "<em>knows</em>" the actual size and the limits.</p>
<hr>
<p><em>To fully answer your question, we need a little more information what slicer you use and what the current settings are. From your question it sounds as if the offset is more than a few millimeters. Usually this hints to an incorrect slicer setting (frequently Ultimaker Cura).</em></p>
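<p>If compiling and uploading new firmware is not comfortable, many Marlin builds with EEPROM enabled also accept a home offset set at runtime via G-code; a hedged sketch (the X/Y values are placeholders, measure your own offsets):</p>
<pre><code>M206 X-35 Y-12 ; set the home offset (placeholder values)
M500           ; store the offset to EEPROM
</code></pre>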
|
<p>When a print is not printing on the build platform, you either:</p>
<ol>
<li><p>Have the incorrect settings in the slicer (e.g. Ultimaker Cura, <a href="https://3dprinting.stackexchange.com/a/6450/5740">a common mistake is that the "origin at center" option is active</a>), or</p></li>
<li><p>Have the center of the bed incorrectly stored in your firmware. (See: <a href="https://3dprinting.stackexchange.com/questions/6375/how-to-center-my-prints-on-the-build-platform">How to center my prints on the build platform? (Re-calibrate homing offset)</a> or <a href="https://3dprinting.stackexchange.com/questions/6399/recalibrating-home-position">Recalibrating Home-position</a>).</p></li>
</ol>
<p>Note that the simplest change (after you have verified the slicer settings and confirmed that it still does not print in the center) is adjusting the settings in the slicer (bed settings, or start G-code script); this way you do not need to compile and upload new firmware (e.g. if you are uncomfortable or inexperienced in doing so). However, fixing it in the firmware is the best solution. Why? If you change the offset in the slicer, you force the bed dimensions to a new position that <em>you</em> know the location of, while if it is changed in the firmware, the printer "<em>knows</em>" the actual size and the limits.</p>
<hr>
<p><em>To fully answer your question, we need a little more information what slicer you use and what the current settings are. From your question it sounds as if the offset is more than a few millimeters. Usually this hints to an incorrect slicer setting (frequently Ultimaker Cura).</em></p>
| 1,115
|
<p>I have a large GUI project that I'd like to port to Linux.
What is the most recommended framework to utilize for GUI programming in Linux? Are frameworks such as KDE / Gnome usable for this objective? Or is it better to use something more generic, other than X?</p>
<p>I feel like if I chose one of Gnome or KDE, I'm closing the market out for a chunk of the Linux market who have chosen one over the other. (Yes I know there is overlap) </p>
<p>Is there a better way? Or would I have to create 2 complete GUI apps to have near 100% coverage?</p>
<p>It's not necessary to have a cross-platform solution that will also work on Win32.</p>
|
<p>Your best bet may be to port it to a cross-platform widget library such as <a href="http://en.wikipedia.org/wiki/WxWidgets" rel="noreferrer">wxWidgets</a>, which would give you portability to any platform wxWidgets supports.</p>
<p>It's also important to make the distinction between Gnome libraries and GTK, and likewise KDE libraries and Qt. If you write the code to use GTK or Qt, it should work fine for users of any desktop environment, including less popular ones like XFCE. If you use other Gnome or KDE-specific libraries to do non-widget-related tasks, your app would be less portable between desktop environments.</p>
|
<p>Have you thought of using Mono? Programs like Paint.NET work great under Linux & Windows.</p>
| 5,633
|
<p>Starting from ASP.NET MVC Preview 3, HTML.Button ( and other related HTML controls) are no longer supported.</p>
<p>The question is, what is the equivalent for them? I've an app that was built using Preview 2, now I have to make it compatible with the latest CTP releases.</p>
|
<p>I figured it out. It goes something like this:</p>
<pre><code><form method="post" action="<%= Html.AttributeEncode(Url.Action("CastUpVote")) %>">
<input type="submit" value="<%=ViewData.Model.UpVotes%> up votes" />
</form>
</code></pre>
|
<p><code><asp:Button></code> is the ASP.NET equivalent to the HTML.Button. It will by default generate an <code><input type="button"></code>. (This is the System.Web.UI.WebControls.Button class)</p>
| 8,380
|
<p>If a user saves the password on the login form, FF3 is putting the saved password in the change password dialog on the profile page, even though it's <strong>not the same input name</strong> as the login. How can I prevent this?</p>
|
<p>Try using autocomplete="off" as an attribute of the text box. I've used it in the past to stop credit card details being stored by the browser, but I don't know if it works with passwords, e.g. <code>print('<input type="text" name="cc" autocomplete="off" />');</code></p>
|
<p>Some sites have 3 inputs for changing a password, one for re-entering the current password and two for entering the new password. If the re-entering input was first and got auto-filled, it wouldn't be a problem. </p>
| 9,892
|
<p>Here is my sample code:</p>
<pre><code>import sys
from xml.dom.minidom import *
def make_xml():
doc = Document()
node = doc.createElement('foo')
node.innerText = 'bar'
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
</code></pre>
<p>when I run the above code I get this:</p>
<pre><code><?xml version="1.0" ?>
<foo/>
</code></pre>
<p>I would like to get:</p>
<pre><code><?xml version="1.0" ?>
<foo>bar</foo>
</code></pre>
<p>I just guessed that there was an innerText property, it gives no compiler error, but does not seem to work... how do I go about creating a text node?</p>
|
<p>Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. "<code>node.noSuchAttr = 'bar'</code>" would also not give an error).</p>
<p>Unless you need a specific feature of <code>minidom</code>, I would look at <code>ElementTree</code>:</p>
<pre><code>import sys
from xml.etree.cElementTree import Element, ElementTree
def make_xml():
node = Element('foo')
node.text = 'bar'
doc = ElementTree(node)
return doc
if __name__ == '__main__':
make_xml().write(sys.stdout)
</code></pre>
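<p>If you do need to stay with <code>minidom</code>, there is no <code>innerText</code> attribute; the text has to be an explicit text node created with <code>createTextNode</code> and appended as a child. A minimal sketch:</p>

```python
import sys
from xml.dom.minidom import Document

def make_xml():
    doc = Document()
    node = doc.createElement('foo')
    # minidom has no innerText; append an explicit text node instead
    node.appendChild(doc.createTextNode('bar'))
    doc.appendChild(node)
    return doc

if __name__ == '__main__':
    make_xml().writexml(sys.stdout)
```

<p>This prints <code><?xml version="1.0" ?><foo>bar</foo></code>, which is the output you were after.</p>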
|
<p>Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. "<code>node.noSuchAttr = 'bar'</code>" would also not give an error).</p>
<p>Unless you need a specific feature of <code>minidom</code>, I would look at <code>ElementTree</code>:</p>
<pre><code>import sys
from xml.etree.cElementTree import Element, ElementTree
def make_xml():
node = Element('foo')
node.text = 'bar'
doc = ElementTree(node)
return doc
if __name__ == '__main__':
make_xml().write(sys.stdout)
</code></pre>
| 4,890
|
<p>I'm printing a cylindrical piece, but it breaks down partway through every time. I have tried twice, and it broke down in a different place each time. I'm using 1.75 mm PLA filament in my Anet A8, and Cura 2.6.2 to export to a <code>.gcode</code> file.</p>
<p>This is the original model: </p>
<p><a href="https://i.stack.imgur.com/mglT8.png" rel="nofollow noreferrer" title="Original model"><img src="https://i.stack.imgur.com/mglT8.png" alt="Original model" title="Original model"></a></p>
<p>And this is the result:</p>
<p><a href="https://i.stack.imgur.com/mtYyb.jpg" rel="nofollow noreferrer" title="Printed result"><img src="https://i.stack.imgur.com/mtYyb.jpg" alt="Printed result" title="Printed result"></a></p>
<p>Here is the <a href="https://mega.nz/#!1hkyhLBA!xmt7s38vrb7fEh9SkiSYWfgwLxg4WaYED1h6vzUpghg" rel="nofollow noreferrer">G-code file</a>.</p>
|
<p>The first thought that pops into my mind is insufficient cooling. Consider either slowing down the print speed or constructing a toss-away model nearby. I prefer the toss-away (duplicate model) method: it allows the material printed on the first column to cool more effectively while the material on the second column is being deposited.</p>
<p>If the problem persists, add a third column or slow the print speed by about 10 mm/s.</p>
|
<p>The first thought that pops into my mind is insufficient cooling. Consider either slowing down the print speed or constructing a toss-away model nearby. I prefer the toss-away (duplicate model) method: it allows the material printed on the first column to cool more effectively while the material on the second column is being deposited.</p>
<p>If the problem persists, add a third column or slow the print speed by about 10 mm/s.</p>
| 744
|
<p>Would it be possible to design something flexible enough, printable in PLA, that would work as a cloth? I did some research and found a company named Electroloom that tried this but didn't make it. I'm not looking for anything fine, just something that would work for wiping.</p>
|
<p>Maybe you would be better off with TPU or some other type of flexible material...</p>
<p>I have been able to print PLA and have it flex quite a bit, but that was an ~0.2mm single layer print, I guess maybe up to 0.3-0.4mm should still be a little bit flexible, but not much. Also since you want it to wipe things, maybe you should look if TPU even has all the properties required for that as Carl mentioned in the comments.</p>
|
<p>You cannot 3D print that, because there is no nozzle or method for producing filaments of the required size (a few microns). You might be able to spin molten PLA like cotton candy, and it might absorb, but I doubt it.</p>
<p><a href="https://phys.org/news/2010-05-cotton-candy-machine-tiny-nanofibers.html" rel="nofollow noreferrer">https://phys.org/news/2010-05-cotton-candy-machine-tiny-nanofibers.html</a> </p>
<p><a href="https://i.stack.imgur.com/VFHQd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VFHQd.jpg" alt="Rotary Jet-Spinning"></a></p>
| 1,626
|
<p>I've been searching (with little success) for a free/opensource session clustering and replication solution for asp.net. I've run across the usual suspects (indexus sharedcache, memcached), however, each has some limitations.</p>
<ul>
<li><strong>Indexus</strong> - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though.</li>
<li><strong>Memcached</strong> - Little replication/failover support without going to a db backend.</li>
<li><strong>Several SF.Net projects</strong> - All aborted in the early stages... nothing that appears to have any traction, and one which seems to have gone all commercial.</li>
<li><strong>Microsoft Velocity</strong> - Not OSS, but seems nice. Unfortunately, I didn't see where CTP1 supported failover, and there is no clear roadmap for this one. I fear that this one could fall off into the ether like many other MS dev projects.</li>
</ul>
<p>I am fairly used to the Java world where it is kind of taken for granted that many solutions to problems such as this will be available from the FOSS world.</p>
<p>Are there any suitable alternatives available in the .NET world?</p>
|
<p>As far as Velocity is concerned I have heard some great things about that project lately. It's still in the developing stages and probably not primetime ready yet. But I think the project has a solid footing and will become a strong mature product from Microsoft and not fall off into the ether like you predict.</p>
<p>Recently I've heard podcasts from <a href="http://www.hanselman.com/blog/HanselminutesPodcast116DistributedCachingWithMicrosoftsVelocity.aspx" rel="nofollow noreferrer">Scott Hanselman</a> and <a href="http://www.polymorphicpodcast.com/shows/scaletheweb/" rel="nofollow noreferrer">Polymorphic Podcast</a> regarding Velocity.</p>
|
<p>Although Velocity has made progress from CTP1 to CTP2, it still leaves much to be desired. It will be some time before they provide all the important features in a distributed cache and even longer before it is tested in the market. I wish them good luck.</p>
<p>In the meantime, NCache already provides all CTP2 & V1, and many more features. NCache is the first, the most mature, and the most feature-rich distributed cache in the .NET space. NCache is an enterprise level in-memory distributed cache for .NET and also provides a distributed ASP.NET Session State. Check it out at <a href="http://www.alachisoft.com/rp.php?dest=/ncache/index.html" rel="nofollow noreferrer">Distributed Cache</a>.</p>
<p>NCache Express is a totally free version of NCache. Check it out at <a href="http://www.alachisoft.com/rp.php?dest=/ncache/ncache_express.html" rel="nofollow noreferrer">Free Distributed Cache</a>.</p>
| 3,345
|
<p>I have a situation where I want to add hours to a date and have the new date wrap around the work-day. I cobbled up a function to determine this new date, but want to make sure that I'm not forgetting anything.</p>
<p>The hours to be added is called "delay". It could easily be a parameter to the function instead.</p>
<p>Please post any suggestions. [VB.NET Warning]</p>
<pre><code>Private Function GetDateRequired() As Date
''// A decimal representation of the current hour
Dim hours As Decimal = Decimal.Parse(Date.Now.Hour) + (Decimal.Parse(Date.Now.Minute) / 60.0)
Dim delay As Decimal = 3.0 ''// delay in hours
Dim endOfDay As Decimal = 12.0 + 5.0 ''// end of day, in hours
Dim startOfDay As Decimal = 8.0 ''// start of day, in hours
Dim newHour As Integer
Dim newMinute As Integer
Dim dateRequired As Date = Now
Dim delta As Decimal = hours + delay
''// Wrap around to the next day, if necessary
If delta > endOfDay Then
delta = delta - endOfDay
dateRequired = dateRequired.AddDays(1)
newHour = Integer.Parse(Decimal.Truncate(delta))
newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60))
newHour = startOfDay + newHour
Else
newHour = Integer.Parse(Decimal.Truncate(delta))
newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60))
End If
dateRequired = New Date(dateRequired.Year, dateRequired.Month, dateRequired.Day, newHour, newMinute, 0)
Return dateRequired
End Function
</code></pre>
<p><strong>Note</strong>: This will probably not work if the delay is more than 9 hours long. It should never change from 3, though.</p>
<p>EDIT:
The goal is to find the date and time you get as a result of adding several hours to the current time. This is used to determine a default due date for a submission. I want to add 3 hours to the current time to get the due-date time. However, I don't want due dates that go beyond 5pm on the current day. So I tried to split the hours between (today, up to 5pm) and (tomorrow, from 8am on), such that adding 3 hours to 4pm gives you 10am the next day, because 1 hour is added to the end of today and 2 hours are added to the beginning of tomorrow.</p>
|
<p>You should probably write some automated tests for each condition you can think of, and then just start brainstorming more, writing the tests as you think of them. This way, you can see for sure it will work, and will continue to work if you make further changes. Look up Test Driven Development if you like the results.</p>
|
<p>I've worked with the following formula (pseudocode) with some success:</p>
<pre><code>now <- number of minutes since the work day started
delay <- number of minutes in the delay
day <- length of a work day in minutes
x <- (now + delay) / day {integer division}
y <- (now + delay) % day {modulo remainder}
return startoftoday + x {in days} + y {in minutes}
</code></pre>
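<p>That pseudocode can be sketched as a runnable function (here in Python for brevity; the 8am-5pm work-day hours are assumed from the question, and weekends and a result landing exactly on closing time are not specially handled):</p>

```python
from datetime import datetime, timedelta

WORK_START = 8   # work day starts at 8 am (assumed from the question)
WORK_END = 17    # work day ends at 5 pm
DAY_MINUTES = (WORK_END - WORK_START) * 60

def date_required(current, delay_hours=3):
    # Minutes elapsed since today's work day started
    now = (current.hour - WORK_START) * 60 + current.minute
    # x days to roll forward, y minutes into that day's work hours
    extra_days, minutes = divmod(now + delay_hours * 60, DAY_MINUTES)
    start_of_day = current.replace(hour=WORK_START, minute=0,
                                   second=0, microsecond=0)
    return start_of_day + timedelta(days=extra_days, minutes=minutes)
```

<p>For example, 3 hours after 4:00pm comes out as 10:00am the next day, matching the behaviour described in the edit. The div/mod form also handles delays longer than one work day, which the original If/Else version does not.</p>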
| 2,723
|
<p>I have an FDM printer, I printed ABS for years.</p>
<p>Since SLA printers became really cheap lately and are able to print finer details, are there resins out there as strong as ABS/PLA?</p>
|
<p>I'm an owner of both an FDM printer and a resin one:</p>
<p>I've long searched for a resin capable of printing durable objects even in tiny details, but with poor luck. I've tried ABS-like resins, and they provide slightly better resistance than regular resin, but do not expect great improvements. I've tried the Siraya Tech Blue V2, which is for sure much more durable, but it would eventually fail on the smaller details...</p>
<p>In the end I think I'll try flexible resins like the Liquicreate flexible X or the Siraya Tech Tenacious, because those (similarly to TPU in FDM) can better absorb energy from impacts, and having a smaller Young's modulus can help when under load.</p>
|
<p>I'm an owner of both an FDM printer and a resin one:</p>
<p>I've long searched for a resin capable of printing durable objects even in tiny details, but with poor luck. I've tried ABS-like resins, and they provide slightly better resistance than regular resin, but do not expect great improvements. I've tried the Siraya Tech Blue V2, which is for sure much more durable, but it would eventually fail on the smaller details...</p>
<p>In the end I think I'll try flexible resins like the Liquicreate flexible X or the Siraya Tech Tenacious, because those (similarly to TPU in FDM) can better absorb energy from impacts, and having a smaller Young's modulus can help when under load.</p>
| 2,013
|
<p>We are bringing a new project in house, and whereas previously all our work was on SQL Server, the new product uses an Oracle back end.</p>
<p>Can anyone advise any crib sheets or the like that give an SQL Server person like me a rundown of the major differences? I'd like to be able to get up and running as soon as possible.</p>
|
<p>@hamishcmcn</p>
<p>Your assertion that '' == Null is simply not true. In the relational world Null should only ever be read to mean "I don't know". The only result you will get from Oracle (and most other decent databases) when you compare a value to Null is 'False'.</p>
<p>Off the top of my head the major differences between SQL Server and Oracle are:</p>
<ul>
<li>Learn to love transactions, they are your friend - auto commit is not.</li>
<li>Read consistency and the lack of blocking reads</li>
<li>SQL Server Database == Oracle Schema</li>
<li>PL/SQL is a lot more feature rich than T-SQL</li>
<li>Learn the difference between an instance and a database in Oracle</li>
<li>You can have more than one Oracle instance on a server</li>
<li>No pointy clicky wizards (unless you really, really want them)</li>
</ul>
<p>Everyone else, please help me out and add more.</p>
|
<p>If you need to you can create and drop temporary tables in procedures using the <a href="http://www.dbasupport.com/oracle/ora9i/execute_immediate.shtml" rel="nofollow noreferrer">Execute Immediate</a> command.</p>
| 6,048
|
<p>Say you have an application divided into 3-tiers: GUI, business logic, and data access. In your business logic layer you have described your business objects: getters, setters, accessors, and so on... you get the idea. The interface to the business logic layer guarantees safe usage of the business logic, so all the methods and accessors you call will validate input. </p>
<p>This is great when you first write the UI code, because you have a neatly defined interface that you can trust.</p>
<p>But here comes the tricky part: when you start writing the data access layer, the interface to the business logic does not accommodate your needs. You need more accessors and setters to reach fields which are (or used to be) hidden. Now you are forced to erode the interface of your business logic; it becomes possible to set fields from the UI layer, which the UI layer has no business setting.</p>
<p>Because of the changes needed for the data access layer, the interface to the business logic has eroded to the point where it is possible to even set the business logic with invalid data. Thus, the interface does not guarantee safe usage anymore.</p>
<p>I hope I explained the problem clearly enough. How do you prevent interface eroding, maintain information hiding and encapsulation, and yet still accommodate different interface needs among different layers? </p>
|
<p>If I understand the question correctly, you've created a domain model and you would like to write an object-relational mapper to map between records in your database and your domain objects. However, you're concerned about polluting your domain model with the 'plumbing' code that would be necessary to read and write to your object's fields.</p>
<p>Taking a step back, you essentially have two choices of where to put your data mapping code - within the domain class itself or in an external mapping class.</p>
<p>The first option is often called the Active Record pattern and has the advantage that each object knows how to persist itself and has sufficient access to its internal structure to allow it to perform the mapping without needing to expose non-business related fields.</p>
<p>E.g</p>
<pre><code>public class User
{
private string name;
private AccountStatus status;
private User()
{
}
public string Name
{
get { return name; }
set { name = value; }
}
public AccountStatus Status
{
get { return status; }
}
public void Activate()
{
status = AccountStatus.Active;
}
public void Suspend()
{
status = AccountStatus.Suspended;
}
public static User GetById(int id)
{
User fetchedUser = new User();
// Lots of database and error-checking code
// omitted for clarity
// ...
fetchedUser.name = (string) reader["Name"];
fetchedUser.status = (int)reader["statusCode"] == 0 ? AccountStatus.Suspended : AccountStatus.Active;
return fetchedUser;
}
public static void Save(User user)
{
// Code to save User's internal structure to database
// ...
}
}
</code></pre>
<p>In this example, we have an object that represents a User with a Name and an AccountStatus. We don't want to allow the Status to be set directly, perhaps because we want to check that the change is a valid status transition, so we don't have a setter. Fortunately, the mapping code in the GetById and Save static methods have full access to the object's name and status fields.</p>
<p>The second option is to have a second class that is responsible for the mapping. This has the advantage of separating out the different concerns of business logic and persistence, which can allow your design to be more testable and flexible. The challenge with this method is how to expose the name and status fields to the external class. Some options are:</p>
<ol>
<li>Use reflection (which has no qualms about digging deep into your object's private parts)</li>
<li>Provide specially-named, public setters (e.g. prefix them with the word 'Private') and hope no one uses them accidentally</li>
<li>If your language supports it, make the setters internal but grant your data mapper module access. E.g. use the InternalsVisibleToAttribute in .NET 2.0 onwards or friend functions in C++</li>
</ol>
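<p>The external-mapper idea can be illustrated with a small sketch (here in Python, where plain attribute access plays the role of reflection; all names are illustrative, not from any particular framework):</p>

```python
class User:
    """Domain object: status can only change via business methods."""
    def __init__(self, name):
        self._name = name
        self._status = "Suspended"

    @property
    def name(self):
        return self._name

    @property
    def status(self):
        # No public setter: transitions must go through activate()/suspend()
        return self._status

    def activate(self):
        self._status = "Active"

class UserMapper:
    """External mapper: reaches into the private fields so the domain
    class needs no persistence-only setters."""
    @staticmethod
    def from_row(row):
        user = User.__new__(User)  # bypass __init__, like reflective instantiation
        user._name = row["Name"]
        user._status = "Active" if row["statusCode"] else "Suspended"
        return user
```

<p>The domain class keeps its invariants (no status setter), while the mapper alone is allowed to populate the hidden state when rehydrating from the database.</p>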
<p>For more information, I'd recommend Martin Fowler's classic book 'Patterns of Enterprise Application Architecture'.</p>
<p>However, as a word of warning, before going down the path of writing your own mappers I'd strongly recommend looking at using a 3rd-party object relational mapper (ORM) tool such as nHibernate or Microsoft's Entity Framework. I've worked on four different projects where, for various reasons, we wrote our own mapper and it is very easy to waste a lot of time maintaining and extending the mapper instead of writing code that provides end user value. I've used nHibernate on one project so far and, although it has quite a steep learning curve initially, the investment you put in early on pays off considerably.</p>
|
<p>I always create a separate assembly that contains: </p>
<ul>
<li>A lot of small Interfaces (think ICreateRepository, IReadRepository, IReadListRepsitory.. the list goes on and most of them relies heavily on generics) </li>
<li>A lot of concrete Interfaces, like an IPersonRepository, that inherit from IReadRepository - you get the point.<br>
Anything you cannot describe with just the smaller interfaces, you put into the concrete interface.<br>
As long as you use the IPersonRepository to declare your object, you get a clean, consistent interface to work with. But the kicker is, you can also make a class that takes e.g. an ICreateRepository in its constructor, so the code ends up being very easy to do some really funky stuff with. There are also interfaces for the Services in the business tier here.</li>
<li>At last I stick all the domain objects into the extra assembly, just to make the code base itself a bit cleaner and more loosely coupled. These objects don't have any logic; they are just a common way to describe the data for all 3+ layers.</li>
</ul>
<p>By the way, why would you define methods in the business logic tier to accommodate the data tier?<br>
The data tier should have no reason to even know there is a business tier...</p>
| 3,065
|
<p>Which is better to use in PHP, a 2D array or a class? I've included an example of what I mean by this.</p>
<pre><code>// Using a class
class someClass
{
public $name;
public $height;
public $weight;
function __construct($name, $height, $weight)
{
$this->name = $name;
$this->height = $height;
$this->weight = $weight;
}
}
$classArray[1] = new someClass('Bob', 10, 20);
$classArray[2] = new someClass('Fred', 15, 10);
$classArray[3] = new someClass('Ned', 25, 30);
// Using a 2D array
$normalArray[1]['name'] = 'Bob';
$normalArray[1]['height'] = 10;
$normalArray[1]['weight'] = 20;
$normalArray[2]['name'] = 'Fred';
$normalArray[2]['height'] = 15;
$normalArray[2]['weight'] = 10;
$normalArray[3]['name'] = 'Ned';
$normalArray[3]['height'] = 25;
$normalArray[3]['weight'] = 30;
</code></pre>
<hr>
<p>Assuming that somebody doesn't come out and show that classes are too slow, it looks like class wins.</p>
<p>I've no idea which answer I should accept, so I've just upvoted all of them.</p>
<hr>
<p>And I have now written two near identical pages, one using the 2D array (written before this question was posted) and now one using a class and I must say that the class produces much nicer code. I have no idea how much overhead is going to be generated but I doubt it will rival the improvement to the code itself.</p>
<p>Thank you for helping to make me a better programmer.</p>
|
<p>The "class" that you've constructed above is what most people would use a <em>struct</em> for in other languages. I'm not sure what the performance implications are in PHP, though I suspect instantiating the objects is probably more costly here, if only by a little bit.</p>
<p>That being said, if the cost is relatively low, it IS a bit easier to manage the objects, in my opinion.</p>
<p>I'm only saying the following based on the title and your question, but:
Bear in mind that classes provide the advantage of methods and access control, as well. So if you wanted to ensure that people weren't changing weights to negative numbers, you could make the <code>weight</code> field private and provide some accessor methods, like <code>getWeight()</code> and <code>setWeight()</code>. Inside <code>setWeight()</code>, you could do some value checking, like so:</p>
<pre><code>public function setWeight($weight)
{
if($weight >= 0)
{
$this->weight = $weight;
}
else
{
// Handle this scenario however you like
}
}
</code></pre>
|
<p>If your code uses a lot of functions that operate on those attributes (name/height/weight), then using a class could be a good option.</p>
| 4,388
|
<p>I've been trying to use SQLite with the PDO wrapper in PHP with mixed success. I can read from the database fine, but none of my updates are being committed to the database when I view the page in the browser. Curiously, running the script from my shell does update the database. I suspected file permissions as the culprit, but even with the database providing full access (chmod 777) the problem persists. Should I try changing the file owner? If so, what to?</p>
<p>By the way, my machine is the standard Mac OS X Leopard install with PHP activated.</p>
<p>@Tom Martin</p>
<p>Thank you for your reply. I just ran your code and it looks like PHP runs as user _www. I then tried chowning the database to be owned by _www, but that didn't work either.</p>
<p>I should also note that PDO's errorInfo function doesn't indicate an error took place. Could this be a setting with PDO somehow opening the database for read-only? I've heard that SQLite performs write locks on the entire file. Is it possible that the database is locked by something else preventing the write?</p>
<p>I've decided to include the code in question. This is going to be more or less a port of <a href="https://stackoverflow.com/questions/6936/using-what-ive-learned-from-stackoverflow-html-scraper">Grant's script</a> to PHP. So far it's just the Questions section:</p>
<pre><code><?php
$db = new PDO('sqlite:test.db');
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://stackoverflow.com/users/658/kyle");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIE, "shhsecret=1293706652");
$page = curl_exec($ch);
preg_match('/summarycount">.*?([,\d]+)<\/div>.*?Reputation/s', $page, $rep);
$rep = preg_replace("/,/", "", $rep[1]);
preg_match('/iv class="summarycount".{10,60} (\d+)<\/d.{10,140}Badges/s', $page, $badge);
$badge = $badge[1];
$qreg = '/question-summary narrow.*?vote-count-post"><strong.*?>(-?\d*).*?\/questions\/(\d*).*?>(.*?)<\/a>/s';
preg_match_all($qreg, $page, $questions, PREG_SET_ORDER);
$areg = '/(answer-summary"><a href="\/questions\/(\d*).*?votes.*?>(-?\d+).*?href.*?>(.*?)<.a)/s';
preg_match_all($areg, $page, $answers, PREG_SET_ORDER);
echo "<h3>Questions:</h3>\n";
echo "<table cellpadding=\"3\">\n";
foreach ($questions as $q)
{
$query = 'SELECT count(id), votes FROM Questions WHERE id = '.$q[2].' AND type=0;';
$dbitem = $db->query($query)->fetch(PDO::FETCH_ASSOC);
if ($dbitem['count(id)'] > 0)
{
$lastQ = $q[1] - $dbitem['votes'];
if ($lastQ == 0)
{
$lastQ = "";
}
$query = "UPDATE Questions SET votes = '$q[1]' WHERE id = '$q[2]'";
$db->exec($query);
}
else
{
$query = "INSERT INTO Questions VALUES('$q[3]', '$q[1]', 0, '$q[2]')";
echo "$query\n";
$db->exec($query);
$lastQ = "(NEW)";
}
echo "<tr><td>$lastQ</td><td align=\"right\">$q[1]</td><td>$q[3]</td></tr>\n";
}
echo "</table>";
?>
</code></pre>
|
<p>Kyle, in order for PDO/SQLite to work you need write permission to the directory where your database resides.</p>
<p>Also, I see you perform multiple selects in a loop. This may be OK if you are building something small and not heavily loaded. Otherwise I'd suggest building a single query that returns multiple rows and processing them in a separate loop.</p>
|
<p>@Tom
Depends on how the hosting is set up. If the server runs PHP as an Apache module then it's likely that it is 'nobody' (usually whatever user Apache is set up as). But if PHP is set up as CGI (such as FastCGI) and the server runs suEXEC, then PHP runs as the same user who owns the files.</p>
<p>Either way, the folder that will contain the database must be writable by the script, either by being owned by the same user or by having write permission set for the PHP user.</p>
<p>@Michal
That aside, one could use beginTransaction(), perform all the actions needed, then commit() to actually commit them.</p>
| 8,620