Columns: qid (int64), question (string), date (string), metadata (list), response_j (string), response_k (string)
3,114,086
In the course of asking about catching a 'divide by 0' exception, I found that in C++ we can't do that: divide by 0 doesn't throw an std::exception. The hints I found said I have to check the value and throw the exception myself. I find that confusing, as I thought C++ adopted the exception idea in order to replace the good old C/UNIX 'report errors by return value' method. Here are my questions: * Q1: Why doesn't C++ throw an std::exception for divide by 0? Is there any reason behind that? * Q2: What error-handling scheme do C++ users normally use? Do they always throw an exception, with divide by 0 being the exception to that rule? * Q3: In general, OOP languages prefer (or even enforce) using exceptions. Is this correct?
2010/06/24
[ "https://Stackoverflow.com/questions/3114086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
C++ is implemented on many different platforms and is designed to support high-performance applications. Allowing undefined behaviour means that not all uses of division need to be burdened by extra checking and a possible exception throw inserted by the compiler. The compiler is free to emit the fastest machine-code translation of division regardless of its behaviour on divide by zero. As with any operation, it is the programmer's responsibility to ensure that the pre-conditions are met. In the case of division, a programmer may *know* that the divisor cannot be zero (or very small) and may just use an assert; in other cases they might need to validate the input and throw an application-specific exception if the conditions aren't met. C++ isn't (just) an OO language and doesn't (in most cases) enforce the use of exceptions. It provides them as a tool for use where appropriate. There are other languages which enforce the use of exceptions to a much greater degree.
C++ forgoes some good principles in places in order to maintain compatibility with C code. Java and similar languages have no such constraint, so they can do what they like. In C++, the idiom is to throw an exception when something goes wrong. But for something like divide by zero, you really should just check it yourself: it's not an exceptional circumstance, it's you failing to check.
3,114,086
In the course of asking about catching a 'divide by 0' exception, I found that in C++ we can't do that: divide by 0 doesn't throw an std::exception. The hints I found said I have to check the value and throw the exception myself. I find that confusing, as I thought C++ adopted the exception idea in order to replace the good old C/UNIX 'report errors by return value' method. Here are my questions: * Q1: Why doesn't C++ throw an std::exception for divide by 0? Is there any reason behind that? * Q2: What error-handling scheme do C++ users normally use? Do they always throw an exception, with divide by 0 being the exception to that rule? * Q3: In general, OOP languages prefer (or even enforce) using exceptions. Is this correct?
2010/06/24
[ "https://Stackoverflow.com/questions/3114086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
C++ is implemented on many different platforms and is designed to support high-performance applications. Allowing undefined behaviour means that not all uses of division need to be burdened by extra checking and a possible exception throw inserted by the compiler. The compiler is free to emit the fastest machine-code translation of division regardless of its behaviour on divide by zero. As with any operation, it is the programmer's responsibility to ensure that the pre-conditions are met. In the case of division, a programmer may *know* that the divisor cannot be zero (or very small) and may just use an assert; in other cases they might need to validate the input and throw an application-specific exception if the conditions aren't met. C++ isn't (just) an OO language and doesn't (in most cases) enforce the use of exceptions. It provides them as a tool for use where appropriate. There are other languages which enforce the use of exceptions to a much greater degree.
1. Because C++ is intended to be "close to the metal" and mostly tries to pass simple operations like division through to the hardware relatively directly (so if you can't depend on the hardware to enforce a constraint, C++ probably won't by default either). 2. I don't think there's a universal answer to that. Some C++ programmers write almost C-like code that almost never uses exception handling. Others use exception handling quite rigorously (and most are somewhere in between). 3. While there's almost certainly a correlation between OO and exception handling, I don't think it's really cause and effect. The factors that seem likely to me are: 1. OOP and exception handling tend to be most useful in similar (e.g., large) projects. 2. OOP and exception handling have both become more common over time. Older OO languages often lack exception handling. Newer languages often have exception handling, even if they're not OO at all.
3,114,086
In the course of asking about catching a 'divide by 0' exception, I found that in C++ we can't do that: divide by 0 doesn't throw an std::exception. The hints I found said I have to check the value and throw the exception myself. I find that confusing, as I thought C++ adopted the exception idea in order to replace the good old C/UNIX 'report errors by return value' method. Here are my questions: * Q1: Why doesn't C++ throw an std::exception for divide by 0? Is there any reason behind that? * Q2: What error-handling scheme do C++ users normally use? Do they always throw an exception, with divide by 0 being the exception to that rule? * Q3: In general, OOP languages prefer (or even enforce) using exceptions. Is this correct?
2010/06/24
[ "https://Stackoverflow.com/questions/3114086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
1) Throwing an exception is an expensive operation, and the C++ philosophy is that you don't pay for what you don't use. If you want exceptions, you throw them yourself (or use libraries that do). 2) There is no single rule for the divide-by-zero case; it depends on the situation. If you know the input can never be 0, don't check for it. If you are unsure, always check for it, and then either throw an exception or swallow the error quietly; that is up to you. 3) Exception throwing, especially combined with RAII, can make for truly elegant and beautiful code. It may not be acceptable in all situations, though: you may have 100% confidence in your inputs and want raw performance. If you are creating a DLL you do not really want to throw exceptions out of your API, but for a consistently statically linked library you would be well advised to.
1. Because C++ is intended to be "close to the metal" and mostly tries to pass simple operations like division through to the hardware relatively directly (so if you can't depend on the hardware to enforce a constraint, C++ probably won't by default either). 2. I don't think there's a universal answer to that. Some C++ programmers write almost C-like code that almost never uses exception handling. Others use exception handling quite rigorously (and most are somewhere in between). 3. While there's almost certainly a correlation between OO and exception handling, I don't think it's really cause and effect. The factors that seem likely to me are: 1. OOP and exception handling tend to be most useful in similar (e.g., large) projects. 2. OOP and exception handling have both become more common over time. Older OO languages often lack exception handling. Newer languages often have exception handling, even if they're not OO at all.
3,114,086
In the course of asking about catching a 'divide by 0' exception, I found that in C++ we can't do that: divide by 0 doesn't throw an std::exception. The hints I found said I have to check the value and throw the exception myself. I find that confusing, as I thought C++ adopted the exception idea in order to replace the good old C/UNIX 'report errors by return value' method. Here are my questions: * Q1: Why doesn't C++ throw an std::exception for divide by 0? Is there any reason behind that? * Q2: What error-handling scheme do C++ users normally use? Do they always throw an exception, with divide by 0 being the exception to that rule? * Q3: In general, OOP languages prefer (or even enforce) using exceptions. Is this correct?
2010/06/24
[ "https://Stackoverflow.com/questions/3114086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
C++ assumes you know what you're doing, doesn't make you pay for things you don't ask for, and makes no assumptions about the platforms it runs on. If you want to divide two numbers, it would be quite inefficient to mandate that the compiler check the denominator and throw before dividing. (We didn't ask it to do that.) So that option is out; we can't have this check on every division, and it's especially wasteful since most divisions are not by zero. So, how can we just divide by zero and find out if it worked? Because C++ cannot assume anything about its platform, it cannot assume there is a hardware-level way to check the result. That is to say, while many CPUs will jump to an interrupt of some sort when division by zero occurs, the C++ language cannot guarantee such a thing. The only option, then, is to leave the behavior undefined, and that's exactly what you get: undefined behavior. --- OOP languages might do one thing or another; it doesn't matter, since OOP isn't well defined and C++ isn't an OOP language anyway. In general, use the tool that's most appropriate. (Exceptions are for exceptional situations.)
Divide by zero is something you can test for before the line that does the calculation, which can also avoid wasted cycles if it's a complicated formula.
3,114,086
In the course of asking about catching a 'divide by 0' exception, I found that in C++ we can't do that: divide by 0 doesn't throw an std::exception. The hints I found said I have to check the value and throw the exception myself. I find that confusing, as I thought C++ adopted the exception idea in order to replace the good old C/UNIX 'report errors by return value' method. Here are my questions: * Q1: Why doesn't C++ throw an std::exception for divide by 0? Is there any reason behind that? * Q2: What error-handling scheme do C++ users normally use? Do they always throw an exception, with divide by 0 being the exception to that rule? * Q3: In general, OOP languages prefer (or even enforce) using exceptions. Is this correct?
2010/06/24
[ "https://Stackoverflow.com/questions/3114086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
C++ is implemented on many different platforms and is designed to support high-performance applications. Allowing undefined behaviour means that not all uses of division need to be burdened by extra checking and a possible exception throw inserted by the compiler. The compiler is free to emit the fastest machine-code translation of division regardless of its behaviour on divide by zero. As with any operation, it is the programmer's responsibility to ensure that the pre-conditions are met. In the case of division, a programmer may *know* that the divisor cannot be zero (or very small) and may just use an assert; in other cases they might need to validate the input and throw an application-specific exception if the conditions aren't met. C++ isn't (just) an OO language and doesn't (in most cases) enforce the use of exceptions. It provides them as a tool for use where appropriate. There are other languages which enforce the use of exceptions to a much greater degree.
About Q3: exceptions are something that should occur only exceptionally :) So avoiding them (which is possible with div-0) is always better. In addition to Emyr's point (who is right about avoiding wasted cycles in the calculation), you should consider that throwing an exception means a lot of "internal work", since the context changes (you may leave loops, functions, instance methods...) and the exception stack has to be prepared. So in general, exception handling is "the common method" for handling exceptional conditions. But it should not become a pattern for avoiding value checking: if (!string.IsNullOrEmpty(...)) ... is much better than try { xx = myMaybeNullString.Length; ... } catch { // error message - I could have checked it before :) }
3,114,086
In the course of asking about catching a 'divide by 0' exception, I found that in C++ we can't do that: divide by 0 doesn't throw an std::exception. The hints I found said I have to check the value and throw the exception myself. I find that confusing, as I thought C++ adopted the exception idea in order to replace the good old C/UNIX 'report errors by return value' method. Here are my questions: * Q1: Why doesn't C++ throw an std::exception for divide by 0? Is there any reason behind that? * Q2: What error-handling scheme do C++ users normally use? Do they always throw an exception, with divide by 0 being the exception to that rule? * Q3: In general, OOP languages prefer (or even enforce) using exceptions. Is this correct?
2010/06/24
[ "https://Stackoverflow.com/questions/3114086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
C++ is implemented on many different platforms and is designed to support high-performance applications. Allowing undefined behaviour means that not all uses of division need to be burdened by extra checking and a possible exception throw inserted by the compiler. The compiler is free to emit the fastest machine-code translation of division regardless of its behaviour on divide by zero. As with any operation, it is the programmer's responsibility to ensure that the pre-conditions are met. In the case of division, a programmer may *know* that the divisor cannot be zero (or very small) and may just use an assert; in other cases they might need to validate the input and throw an application-specific exception if the conditions aren't met. C++ isn't (just) an OO language and doesn't (in most cases) enforce the use of exceptions. It provides them as a tool for use where appropriate. There are other languages which enforce the use of exceptions to a much greater degree.
Divide by zero is something you can test for before the line that does the calculation, which can also avoid wasted cycles if it's a complicated formula.
3,114,086
In the course of asking about catching a 'divide by 0' exception, I found that in C++ we can't do that: divide by 0 doesn't throw an std::exception. The hints I found said I have to check the value and throw the exception myself. I find that confusing, as I thought C++ adopted the exception idea in order to replace the good old C/UNIX 'report errors by return value' method. Here are my questions: * Q1: Why doesn't C++ throw an std::exception for divide by 0? Is there any reason behind that? * Q2: What error-handling scheme do C++ users normally use? Do they always throw an exception, with divide by 0 being the exception to that rule? * Q3: In general, OOP languages prefer (or even enforce) using exceptions. Is this correct?
2010/06/24
[ "https://Stackoverflow.com/questions/3114086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
C++ assumes you know what you're doing, doesn't make you pay for things you don't ask for, and makes no assumptions about the platforms it runs on. If you want to divide two numbers, it would be quite inefficient to mandate that the compiler check the denominator and throw before dividing. (We didn't ask it to do that.) So that option is out; we can't have this check on every division, and it's especially wasteful since most divisions are not by zero. So, how can we just divide by zero and find out if it worked? Because C++ cannot assume anything about its platform, it cannot assume there is a hardware-level way to check the result. That is to say, while many CPUs will jump to an interrupt of some sort when division by zero occurs, the C++ language cannot guarantee such a thing. The only option, then, is to leave the behavior undefined, and that's exactly what you get: undefined behavior. --- OOP languages might do one thing or another; it doesn't matter, since OOP isn't well defined and C++ isn't an OOP language anyway. In general, use the tool that's most appropriate. (Exceptions are for exceptional situations.)
About Q3: exceptions are something that should occur only exceptionally :) So avoiding them (which is possible with div-0) is always better. In addition to Emyr's point (who is right about avoiding wasted cycles in the calculation), you should consider that throwing an exception means a lot of "internal work", since the context changes (you may leave loops, functions, instance methods...) and the exception stack has to be prepared. So in general, exception handling is "the common method" for handling exceptional conditions. But it should not become a pattern for avoiding value checking: if (!string.IsNullOrEmpty(...)) ... is much better than try { xx = myMaybeNullString.Length; ... } catch { // error message - I could have checked it before :) }
4,494,745
Is the PayPal API compatible with ASP.NET MVC? Does anyone know of any examples of how to implement it? Thank you.
2010/12/20
[ "https://Stackoverflow.com/questions/4494745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/359506/" ]
Yes, it is compatible. Have a look at the [MVC Storefront Starter Kit](http://www.asp.net/mvc/videos/mvc-1/aspnet-mvc-storefront/aspnet-mvc-storefront-part-1-architectural-discussion-and-overview) videos. [Episode 22](http://www.asp.net/mvc/videos/aspnet-mvc-storefront-part-22-restructuring-rerouting-and-paypal) is dedicated to PayPal.
Recently I implemented a PayPal 'Buy Now' button in an ASP.NET MVC Razor view. In the end, the button is just an HTML form that is posted to the PayPal website. However, it took me some time to find out which hidden form fields were required, and which optional fields I could also use to further configure the payment process. I have published my experiences on my blog: <http://buildingwebapps.blogspot.com/2012/01/single-item-paypal-buttons-and.html>. There you will also find the source code for an MVC Html helper method that makes rendering single-item PayPal buttons less work.
4,494,745
Is the PayPal API compatible with ASP.NET MVC? Does anyone know of any examples of how to implement it? Thank you.
2010/12/20
[ "https://Stackoverflow.com/questions/4494745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/359506/" ]
Yes, it is compatible. Have a look at the [MVC Storefront Starter Kit](http://www.asp.net/mvc/videos/mvc-1/aspnet-mvc-storefront/aspnet-mvc-storefront-part-1-architectural-discussion-and-overview) videos. [Episode 22](http://www.asp.net/mvc/videos/aspnet-mvc-storefront-part-22-restructuring-rerouting-and-paypal) is dedicated to PayPal.
Take a look at this site. <http://www.arunrana.net/2012/01/paypal-integration-in-mvc3-and-razor.html> It is specifically designed for MVC3 with Razor!
4,494,745
Is the PayPal API compatible with ASP.NET MVC? Does anyone know of any examples of how to implement it? Thank you.
2010/12/20
[ "https://Stackoverflow.com/questions/4494745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/359506/" ]
Yes, it is compatible. Have a look at the [MVC Storefront Starter Kit](http://www.asp.net/mvc/videos/mvc-1/aspnet-mvc-storefront/aspnet-mvc-storefront-part-1-architectural-discussion-and-overview) videos. [Episode 22](http://www.asp.net/mvc/videos/aspnet-mvc-storefront-part-22-restructuring-rerouting-and-paypal) is dedicated to PayPal.
> > **Source:** [Paypal add to cart multiple items form with discount](https://stackoverflow.com/questions/7840611/paypal-add-to-cart-multiple-items-form-with-discount/12936117#12936117) > > > I have successfully integrated the PayPal express checkout with shopping cart in to one of my Asp.net MVC projects with discount codes, shipping charges etc.
4,494,745
Is the PayPal API compatible with ASP.NET MVC? Does anyone know of any examples of how to implement it? Thank you.
2010/12/20
[ "https://Stackoverflow.com/questions/4494745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/359506/" ]
Recently I implemented a PayPal 'Buy Now' button in an ASP.NET MVC Razor view. In the end, the button is just an HTML form that is posted to the PayPal website. However, it took me some time to find out which hidden form fields were required, and which optional fields I could also use to further configure the payment process. I have published my experiences on my blog: <http://buildingwebapps.blogspot.com/2012/01/single-item-paypal-buttons-and.html>. There you will also find the source code for an MVC Html helper method that makes rendering single-item PayPal buttons less work.
Take a look at this site. <http://www.arunrana.net/2012/01/paypal-integration-in-mvc3-and-razor.html> It is specifically designed for MVC3 with Razor!
4,494,745
Is the PayPal API compatible with ASP.NET MVC? Does anyone know of any examples of how to implement it? Thank you.
2010/12/20
[ "https://Stackoverflow.com/questions/4494745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/359506/" ]
> > **Source:** [Paypal add to cart multiple items form with discount](https://stackoverflow.com/questions/7840611/paypal-add-to-cart-multiple-items-form-with-discount/12936117#12936117) > > > I have successfully integrated the PayPal express checkout with shopping cart in to one of my Asp.net MVC projects with discount codes, shipping charges etc.
Take a look at this site. <http://www.arunrana.net/2012/01/paypal-integration-in-mvc3-and-razor.html> It is specifically designed for MVC3 with Razor!
93,266
I am shooting with a Canon EOS Rebel XS and I am using Ilford Delta 400 Professional B&W film for the first time. Any recommendations on which ISO setting I should use? I have heard 320.
2017/10/07
[ "https://photo.stackexchange.com/questions/93266", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/68991/" ]
Each film has a personality that is further accentuated by the developer and by pushing or pulling. If this is your first go-around with Delta, I'd recommend shooting it at box speed (400), seeing if you like it, and then adjusting from there. The only caveat is if you're planning on developing in Rodinal. I've found that films lose a little effective speed in it, though I've also been doing stand development with Rodinal as opposed to normal development. If you're going down this road, then yes, exposing at 320 would be ideal. But if you're going to use DD-X or just about any other developer, go with the box speed and see how you like it.
For the first time, pick something tried and tested - D-76, ID-11 or the like. Rodinal is not a great idea for starting out with the tabular-grain films (perhaps later). Once you get the hang of it, try something fancier. [The Massive Dev Chart](http://www.digitaltruth.com/devchart.php) is a good place to start. The idea behind using a lower-than-nominal ISO (first overexposing and then underdeveloping a bit) is to get more shadow detail. Negative film (unlike a digital sensor) is highly resistant to overexposure: there is always a bit more detail in the highlights, while shadow detail gets lost easily. This is best reserved for more experienced darkroom technicians. Do the first couple of films the plain-vanilla way, and once you get the hang of it, try experimenting.
93,266
I am shooting with a Canon EOS Rebel XS and I am using Ilford Delta 400 Professional B&W film for the first time. Any recommendations on which ISO setting I should use? I have heard 320.
2017/10/07
[ "https://photo.stackexchange.com/questions/93266", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/68991/" ]
I'm curious what's motivating your question, since the film's name (Delta *400*) tells you exactly where to start. The camera is irrelevant as long as it can handle the range of film speeds you need. The developer, though, is not. Are your shooting conditions challenging for some reason (for example, low ambient light where flash is prohibited)? Do you need a particular kind of result (for example, very fine grain)? These factor into your developer choice which, in turn, *could* factor into the film speed you choose. For general-purpose shooting, I would recommend box speed (400) and a tried-and-true developer like D-76 or HC-110 (or equivalents). But read the fact sheet of whatever developer you're considering. Certain fine-grain developers (Perceptol) and compensating developers (Diafine) work best when shooting below and above box speed, respectively. But I wouldn't start with either of these.
For the first time, pick something tried and tested - D-76, ID-11 or the like. Rodinal is not a great idea for starting out with the tabular-grain films (perhaps later). Once you get the hang of it, try something fancier. [The Massive Dev Chart](http://www.digitaltruth.com/devchart.php) is a good place to start. The idea behind using a lower-than-nominal ISO (first overexposing and then underdeveloping a bit) is to get more shadow detail. Negative film (unlike a digital sensor) is highly resistant to overexposure: there is always a bit more detail in the highlights, while shadow detail gets lost easily. This is best reserved for more experienced darkroom technicians. Do the first couple of films the plain-vanilla way, and once you get the hang of it, try experimenting.
401,079
I have a stored procedure in Oracle that returns a cursor reference to a select statement. I want to be able to pass column names and a sort direction (example: 'CompanyName DESC') and have the results sorted accordingly, or pass a filter such as 'CompanyID > 400' and have it applied to the select statement. What is the best way to accomplish this? The table is in an old database and has 90 columns, and I don't want to write logic for every possible combination.
2008/12/30
[ "https://Stackoverflow.com/questions/401079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18893/" ]
See here: <http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1288401763279> and <http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6711305251199> I especially like this method: <http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1288401763279#4988761718663>
As you probably figured out, R. Bemrose's answer is the simpler to implement, using dynamic SQL. For that reason, it's probably used the most often. If you do this, be sure to do it the way he (or she) did, using a bind variable (e.g. USING p_sort) rather than just concatenating the string onto lv_cursor_txt. This method gives better performance and security than concatenating. The second method uses application contexts, which I haven't seen used much, but I suspect it would provide better query performance if you're calling the query a lot. Good luck.
401,079
I have a stored procedure in Oracle that returns a cursor reference to a select statement. I want to be able to pass column names and a sort direction (example: 'CompanyName DESC') and have the results sorted accordingly, or pass a filter such as 'CompanyID > 400' and have it applied to the select statement. What is the best way to accomplish this? The table is in an old database and has 90 columns, and I don't want to write logic for every possible combination.
2008/12/30
[ "https://Stackoverflow.com/questions/401079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18893/" ]
<http://forums.oracle.com/forums/thread.jspa?threadID=2177728&tstart=0> CASE WHEN :P4_SORT_ORDER = 1 THEN ROW_NUMBER() OVER (ORDER BY UPPER(ENAME)) or decode(:Sort_var, 1, ROW_NUMBER() OVER (ORDER BY UPPER(ENAME)), 2, ROW_NUMBER() OVER (ORDER BY UPPER(Address)))
As you probably figured out, R. Bemrose's answer is the simpler to implement, using dynamic SQL. For that reason, it's probably used the most often. If you do this, be sure to do it the way he (or she) did, using a bind variable (e.g. USING p_sort) rather than just concatenating the string onto lv_cursor_txt. This method gives better performance and security than concatenating. The second method uses application contexts, which I haven't seen used much, but I suspect it would provide better query performance if you're calling the query a lot. Good luck.
114,320
Hi all, I'm a newbie on bass and I'm learning tabs now. In this tab:

G ------------------------------------
D ------------------5-----------------
A --------2-5------------5----2-------
E ---3-----------------------------3--

I start with the first string at the 3rd fret; then, for the A: 2-5, do I hold down the A string at the 2nd fret and, while keeping it pressed, also press the A string at the 5th fret? Or do I let go of the string at the 2nd fret and press the 5th fret? Cheers!
2021/05/09
[ "https://music.stackexchange.com/questions/114320", "https://music.stackexchange.com", "https://music.stackexchange.com/users/77944/" ]
What you propose is possible, but it might be tricky. Precisely how tricky will depend on the choice of keys of the two harmonicas. It's easy to fall into the trap of thinking "if I want all the notes to play in any key I might as well just get one in C and one in C♯, just like a chromatic harmonica". Unfortunately, the situation is more complicated. It is true that a chromatic harmonica can play in any key, but not every key is created equal. The easiest key on a standard C-tuned chromatic is obviously C, and with the button pressed, C♯ is just as easy. But what about the other keys? Let's look at which notes are present in the C and C♯ scales. C major scale: C D E F G A B C. C♯ major scale: C♯ D♯ E♯ F♯ G♯ A♯ B♯ C♯. To make things easier to argue about, let's assume the convention B♯ = C and E♯ = F, yielding the following. C♯ major scale: C♯ D♯ F F♯ G♯ A♯ C C♯. Now, compare this to a G major scale. G major scale: G A B C D E F♯ G. All notes but the F♯ can be found in the C scale, but only the F♯ and C can be found in the C♯ scale. The implication is that each time you switch to the C♯ harp, you're most likely to play a single note and then switch back. This makes it hard not to make the playing choppy. Even though the keys share so many notes, many keys are easier than G major to play on a C chromatic, and D is even harder! (For somewhat technical reasons, keys with flats are much more convenient to play on a chromatic, but this answer is already long enough so I won't go into specifics.) So what to do? If you are not dead set on being able to play in any key, I would consider using two harmonicas in the keys of C and D. Take a look at the D major scale: D E F♯ G A B C♯ D. For songs in C or Am, you can stay on the C harmonica. For songs in D or Bm, you can stay on the D harmonica.
When playing songs in G or Em you might need to switch back and forth, but unlike with the C-C♯ setup you can often stay on one harmonica or the other for several notes, which makes it much easier to play smoothly. For playing campfire songs in the keys you mention, I think this is the best approach, and you should have an easier time learning to play it well. If you feel that you need one more key, you could go with harmonicas in C and A, which would make these two keys easy and the keys of G and D intermediate. In any case, it would still be much easier than playing any of these keys except C on the C-C♯ combination I outlined first. Finally, another consideration is that Asian tremolos are rather limited when it comes to chords: they basically go all in on the home chord. With the C-D setup I suggest, you would have the chords C major, D major, D minor and B minor. There are other harmonicas that would let you play more chords; while not harder to play, they are non-standard. You could, for instance, stack two so-called spiral-tuned diatonic harmonicas tuned to C and D on top of each other. This would give you: * The same easy scales as in the C-D setup outlined above (though they are not played with exactly the same patterns). * All the chords C, D, Dm, Em, F, F♯m, G, A, Am and Bm! * If you learn to bend notes, which is not too hard, this setup is fully chromatic, which means you can play all the sharps and flats. * The bending that is possible on a diatonic allows you to play some cool bluesy notes if you want. Spiral-tuned harmonicas are a bit harder to find, but they are at least available from Seydel (under the name Zirkular).
Since there's only one 'chromatic' note that needs the button pressed in - F♯ - to play every note in the key of G as opposed to key C, it's not an onerous task. When you need that F♯, just play the F♮ hole, and press in the button. Voila, the harp is in tune for key G. Even in key D, there's only one other note to change - C♯ - which is every C blown with the button in. Playing chords is a different matter, with really only a C chord available, which will be featured in key G, but not that often, and it may sound odd without the tonic and dominant chords playable - button or not.
49,254,211
I am using **BinarySecurityToken** for **OTA\_AirRulesRQ**, but I am getting **USG\_AUTHORIZATION\_FAILED**. I used the same token for **BargainFinderMaxRQ** and it worked. Is it some problem with the SOAP request I am sending, or is access to this method not authorized for my PCC? Also, I am able to hold a PNR and issue a ticket with the same credentials. Please suggest.
2018/03/13
[ "https://Stackoverflow.com/questions/49254211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9484788/" ]
You should contact the API helpdesk providing your credentials
Are you pointing it to the corresponding endpoint? E.g. are you taking credentials from production or testing and using them against the matching production or testing environment?
49,254,211
I am using **BinarySecurityToken** for **OTA\_AirRulesRQ**, but I am getting **USG\_AUTHORIZATION\_FAILED**. I used the same token for **BargainFinderMaxRQ** and it worked. Is it some problem with the SOAP request I am sending, or is access to this method not authorized for my PCC? Also, I am able to hold a PNR and issue a ticket with the same credentials. Please suggest.
2018/03/13
[ "https://Stackoverflow.com/questions/49254211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9484788/" ]
You should contact the API helpdesk providing your credentials
Can you post the message you are trying to send, including headers? You can block out PCC information, but it would help to see the message. It may be something simple you are overlooking, and at the very least I could duplicate the call on my test bench and try to reproduce the issue.
276,586
I am trying to migrate from a MySQL AWS RDS instance with a huge SSD and too much excess space down to a small one, and data migration is the only method. There are four tables in the range of 330GB-450GB, and executing mysqldump in a single thread, piped directly to the target RDS instance, is estimated by pv to take about 24 hours (copying at 5 mbps). I wrote a bash script that calls multiple mysqldump using ' & ' at the end and a calculated `--where` parameter, to simulate multithreading. This works and currently takes less than an hour with 28 threads. However, I am concerned about any potential loss of performance while querying in the future, since I'll not be inserting in the sequence of the auto\_increment id columns. Can someone confirm whether this would be the case, or whether I am being paranoid for no reason? What solution did you use for a single table that is in the 100s of GBs? Due to a particular reason, I want to avoid using AWS DMS and definitely don't want to use tools that haven't been maintained in a while.
2020/10/06
[ "https://dba.stackexchange.com/questions/276586", "https://dba.stackexchange.com", "https://dba.stackexchange.com/users/164093/" ]
Tables are by nature unsorted, so you will not have any performance loss on that side after inserting your data. However, since we don't know how much smaller your new instance is, we can't tell what impact that will have. Your index on that field will be sorted, so it will find the wanted rows quite fast, or at least faster than scanning the whole column.
Not at all, there is no performance issue in any of these cases.
276,586
I am trying to migrate from a MySQL AWS RDS instance with a huge SSD and too much excess space down to a small one, and data migration is the only method. There are four tables in the range of 330GB-450GB, and executing mysqldump in a single thread, piped directly to the target RDS instance, is estimated by pv to take about 24 hours (copying at 5 mbps). I wrote a bash script that calls multiple mysqldump using ' & ' at the end and a calculated `--where` parameter, to simulate multithreading. This works and currently takes less than an hour with 28 threads. However, I am concerned about any potential loss of performance while querying in the future, since I'll not be inserting in the sequence of the auto\_increment id columns. Can someone confirm whether this would be the case, or whether I am being paranoid for no reason? What solution did you use for a single table that is in the 100s of GBs? Due to a particular reason, I want to avoid using AWS DMS and definitely don't want to use tools that haven't been maintained in a while.
2020/10/06
[ "https://dba.stackexchange.com/questions/276586", "https://dba.stackexchange.com", "https://dba.stackexchange.com/users/164093/" ]
You are correct that it will cause fragmentation of the clustered index. However, if it is an auto-incrementing column, the data wasn't really sorted by anything meaningful. You went from an unsorted mess to a differently sorted unsorted mess.

Selecting/updating/reading a few rows at a time? Not a big deal - the B-tree will still know how to find the correct page without too much additional effort. You'll have issues if you're trying to break up large updates/deletes by using ranges of the auto-incrementing column, as the rows will be spread across pages.

If performance does become an issue, you can rebuild the index; newer versions of MySQL should be able to do so without taking the table offline.

As an aside - did you attempt sorting the data by the auto-incrementing column and then performing a bulk load?
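The parallel dump described in the question splits the auto_increment key space into one `--where` range per mysqldump worker. A hypothetical sketch of that range calculation (function name, table, and column are made up for illustration):

```python
# Hypothetical sketch: split an auto_increment key space [min_id, max_id]
# into N contiguous, non-overlapping ranges, one per mysqldump worker.
def id_ranges(min_id, max_id, workers):
    """Yield (lo, hi) inclusive ranges covering [min_id, max_id]."""
    total = max_id - min_id + 1
    base, extra = divmod(total, workers)
    lo = min_id
    for i in range(workers):
        size = base + (1 if i < extra else 0)  # spread the remainder
        hi = lo + size - 1
        yield lo, hi
        lo = hi + 1

# Each range becomes a --where clause for one mysqldump process, e.g.:
for lo, hi in id_ranges(1, 1000, 4):
    print(f"mysqldump ... --where='id BETWEEN {lo} AND {hi}' &")
```

Because the ranges are contiguous and disjoint, every row is dumped exactly once regardless of how many workers run in parallel.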
Not at all, there is no performance issue in any of these cases.
73,380
The late bishop John Shelby Spong, argued a few years back about how to understand the account of the magi of Matthew's Gospel (see Matthew 2:1-12). In his book, *Born of a Woman*, he writes how the universal assumption of people he knows, associated with New Testament study, is that the magi were not actual people. He states: "Matthew was clearly writing Christian midrash." (*Born of a Woman*, pages 89-90) John Dominic Crossan, in various places, also gives his view of the magi story by arguing that it’s a parable. What is the hermeneutical criteria for discerning between what is a fictional midrash like parable and what was meant to be literal history?
2022/01/08
[ "https://hermeneutics.stackexchange.com/questions/73380", "https://hermeneutics.stackexchange.com", "https://hermeneutics.stackexchange.com/users/44608/" ]
I personally know New Testament scholars who believe this account was not a parable--so at the very least, I can confirm that such people exist.

**Verifiability**

One of the most useful metrics for assessing the real-world authenticity of an account is context that provides verifiability. Consider the difference between starting an account "once upon a time, in a far away land" (wait...where? when?) vs. "in the fifteenth year of the reign of Tiberius Caesar...[in] the country about Jordan" (from Luke 3:1-3). The Gospel of Luke and the Book of Acts provide repeated examples of this phenomenon--Luke goes through city by city the places Jesus/Peter/Paul etc. visited, gives the names of individuals they interacted with, and regularly references official documents and proceedings (e.g. the Gallio trial). It is as if Luke is challenging his audience "go, fact check me, I've given you enough information to be able to do so". This is particularly noteworthy in the 16 different trials referenced in the book of Acts (see *Paul on Trial* by John Mauck for a survey) - these are events that a Roman official could readily verify.

We may not be able to check the Roman legal archives or send someone to speak with Zaccheus today, but Luke's method suggests that he expects his audience would be able to do so. Providing this much context is dangerous if the story is fictitious, but it's the first century version of a *works cited* page if the story is true.

**Application to the Magi**

Matthew has clearly set this account into the context of a real time and place--Judea during the latter part of the reign of Herod the Great (note that "Herod the king" is a reference to Herod the Great, not one of his sons--none of his sons received the title "king"--my work on the subject [here](https://youtube.com/playlist?list=PLGACqQS4ut5j6BkvLX7Ic07qKBJUh6CH-)).
The Magi were a real group of people (see [here](https://www.biblegateway.com/resources/encyclopedia-of-the-bible/Magi)) from a real place (Parthian empire). The account also includes other events that could be investigated, such as a family sojourn to Egypt, a return to Nazareth early in the reign of Archelaus, and the slaughter of children in Bethlehem. So Matthew has really over-specified the details if he's just telling a parable. Compare to Jesus' parables--with few exceptions, they provide no concrete markers in time or space--they are stories to illustrate a point, not historical events the reader can go investigate.

**Antithesis**

But if the story of the Magi really happened, why would Luke leave it out? Luke had very good reason to do so--not only does Luke have a rather low opinion of people associated with magic (see Acts 8:9-23), but he's writing to an educated Greco-Roman audience. The Romans were at the time in the midst of a series of wars with the Parthians (see [here](https://en.wikipedia.org/wiki/Roman%E2%80%93Parthian_Wars)). Luke's trying to present Jesus in a positive light, and it wouldn't help his case any to start out by saying "Jesus is so great, even your enemies the Parthians love Him!" Luke wisely skips this part of the story.

But aren't there scholars who think it's a parable? Sure, there are also scholars who believe 59 of the 66 books of the Bible are forgeries and virtually everything in the Gospels not doubly-attested is late, unreliable embellishment. Something as consequential as the Bible engenders all manner of reactions.

**Conclusion**

From the hermeneutic of verifiability, Matthew appears to be providing enough context that the readers could check his story, and is therefore unlikely to be inventing it from whole cloth. Matthew has nothing to gain by including these details if they are untrue or metaphorical.
If that is the "universal assumption of people he [Spong] knows", then Spong has a very limited set of friends - I also know many NT scholars who believe Matt 2 is history.

Biblical literature can be divided into only a few categories:

* actual history - this is the majority of the Bible
* poetry - the second largest part of the Bible, where people express their deepest feelings about God, fears, hopes and salvation, etc. This includes the Psalms, much of Isaiah, Jeremiah and some of Ezekiel.
* parables, which are scattered throughout the Bible and are easily recognized because they are almost always labeled as such
* apocalyptic literature, which includes most of the book of Revelation, most of Daniel, much of Zechariah and Joel, etc. This material is conspicuous for its extreme symbolism and fantastic images.

Matt 2 does not have any of the earmarks of a parable, it is not poetic and it is certainly not apocalyptic - it is a simple historical narrative. All the standard commentaries, such as Ellicott, Barnes, Pulpit, Cambridge, Gill, Poole, treat it as such.

Further, if the story of the magi were a parable, then what is the point of this teaching (made clear in Jesus' other parables)? If it is a parable then it did not happen, and so:

* Jesus' earthly parents did not go to Egypt
* They did not return from Egypt
* Herod did not kill the children
* Most of Matt 2 becomes fictitious

None of this is credible! That is the weakness of Spong's approach - he becomes the judge of what is real and what is fictitious in the Bible, so that he controls what to believe rather than accepting the Scripture as it is.
51,581
It was facing in the Y+ direction, so I turned it to the X+ direction so that it fits my scene, but when I render, it turns back around. It sounds stupid, but I am not joking! Here is the [File](http://download1649.mediafire.com/lyr91i9q1z8g/qa9p2puo20zc7vj/ALL+CARS.blend)
2016/04/28
[ "https://blender.stackexchange.com/questions/51581", "https://blender.stackexchange.com", "https://blender.stackexchange.com/users/24140/" ]
You have inserted keyframes; the viewport is displaying frame 2922 and you are rendering frame 0. You can change which frame you want to render in the render settings. To clear all keyframes, select all objects and press Alt+I.
Press NumPad 0 to enter camera view. Move and rotate the camera to change the rendered view. In the properties panel (shortcut N in the 3D window) you can lock the camera to the view, to adjust the shot.
79,256
The [Atomic Robo SRD](http://www.faterpg.com/wp-content/uploads/2016/04/AtomicSRD-CCBY.html) have recently become available, and one of the rules included in there is the *Mega-Stunts* one. The SRD contains a short discussion on how these special stunts interact with fate points, stating > > **Mega-Stunts, Stunt Slots, and Refresh** > > > These rules presume that a PC has a certain number of “stunt slots” instead of paying refresh for them. […] > In place of refresh, these rules assign a flat number of starting fate points to each PC. The specifics of this are up to your group. Possibilities include: > > > […] > > > * Make refresh inversely proportional to the number of stunt slots the PC has, from one to five. For example, if the PCs have five stunt slots, their effective refresh is 1. If they have two stunt slots, their effective refresh is 4. > > > There seems to be no equivalence to this rule in the Atomic Robo rule book. This looks like the rules are suggesting to use something they call “effective refresh”, which to me looks like the usual way of “paying refresh for stunts”, in place of … refresh, used to pay for stunts. Do I have a fundamental misconception either about how refresh and stunts interact in Fate Core, or about how this rules dial is suggested to work? Or is the phrasing in the SRD misleading for some reason?
2016/04/25
[ "https://rpg.stackexchange.com/questions/79256", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/5843/" ]
Unraveling this is made harder by the fact that “refresh” in Fate Core is used as

* a *parameter of the game* – which I will refer to as “refresh parameter” here,
* the *number of fate points a character starts a session with* – which Fate Core also uses the term “refresh rate” for, and
* the *mechanic of sacrificing refresh rate for stunt slots*, which shall be “paying refresh rate” here.

> These rules presume that a PC has a certain number of “stunt slots” *instead of paying refresh [rate]* for them.

This means that the number of stunt slots is an upper limit on the number of stunts any character can have, and it cannot increase beyond that. It takes out the “paying refresh” rule.

> In place of [a player-variable] *refresh [rate,]* these rules assign a flat number of starting fate points to each PC.

So, in a game using these ARRPG mechanics, *all PCs have the same refresh rate, which is equal to the refresh parameter.* The options then describe how to choose the number of stunt slots and the refresh parameter. The option you referred to,

> * Make *[the] refresh [parameter]* inversely proportional to the number of stunt slots the PC has, from one to five. For example, if the PCs have five stunt slots, [the game's refresh parameter, and thus each of their refresh rates] is 1. If they have two stunt slots, [the game's refresh parameter, and thus each of their refresh rates] is 4.

suggests that a good **game choice** is to have the *refresh parameter* and the *maximum number of stunts a PC can have* add up to 5. This is on a completely different level from the **paying refresh rule** in Fate Core, which says that the *refresh rate* and the *number of stunts beyond three that a PC has* should add up to the *refresh parameter* for every character.
Atomic Robo is a high-powered game, so its dials are set a good deal higher than in Core. It also breaks the connection between refresh and stunts. There is **no** interaction. Very simply:

1. You get as many Fate points at the beginning of a session as you have aspects. That is a maximum of five. It is a fixed number and has **nothing to do with stunts**.
2. You have five fixed stunt "slots". You cannot pay for more in any way.
3. Each stunt you take provides 1 "benefit". Megastunts often provide two or three benefits.
4. If a character has more than five benefits in total, the difference is given to the GM as additional Fate Points. So a character with four normal stunts and a megastunt with three benefits would provide the GM with extra Fate Points.

The SRD text generalizes the refresh rule, allowing you to set refresh any way that suits your game. ARRPG is a lovely book, but it does not explain very well and does not summarize. It really needs an "if you know Core, these are the differences" section.
87,449
I've rigged a model, and when I try to change the pose, the mesh deforms badly on the legs. The model has a mirror modifier (on X, with the Vertex Groups option checked) and the armature has X-Axis Mirror. Automatic weights was used, and after that I cleaned up the vertex groups a bit (mainly in the head and arms). But I cannot find what's happening with the legs. [![mesh in edit mode](https://i.stack.imgur.com/YSmdS.png)](https://i.stack.imgur.com/YSmdS.png) [![weight painting](https://i.stack.imgur.com/ozMcu.png)](https://i.stack.imgur.com/ozMcu.png)
2017/08/07
[ "https://blender.stackexchange.com/questions/87449", "https://blender.stackexchange.com", "https://blender.stackexchange.com/users/7804/" ]
This is a classic weight issue. The vertices that aren't moving correctly are either weighted to the wrong bone, aren't weighted to a bone at all, or aren't weighted *enough* to the right bone. Vertices follow the movement of the bone they're assigned to, but they can be assigned to only follow *a percentage* of a given bone's movement. For example, if you assign a vertex to a bone at 50% (or 0.5) and then rotate the bone 90 degrees, the vertex will only follow half of that movement... 45 degrees. Furthermore, you can assign a vertex to more than one bone. For example, you might assign a vertex on the knee to 50% on the lower leg and 50% on the upper leg so that smooths between both of them. It looks to me like you have a handful of vertices on the (left?) leg that are not assigned to the bone going back to the left. Consequently, the bone moves to the left, but the vertices don't follow. A similar thing is probably happening with the ones on the right. Note: "not assigned to" might also mean "assigned to a *different* bone." Interesting note: it's possible (depending on your Blender settings) to have a vertex with *more than 100%* total influence. Unless you have "[normalize all](https://docs.blender.org/manual/de/dev/sculpt_paint/painting/weight_paint/weight_tools.html#normalize-all)" turned on, all the weights will get added together and do weird things.
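The percentage-weighting described above can be illustrated with a tiny sketch (hypothetical and simplified: 1-D displacements stand in for real bone transforms, and the function name is made up):

```python
# Hypothetical sketch: a vertex follows each bone's movement in
# proportion to its assigned weight (simplified to 1-D displacements).
def deformed_position(rest_pos, influences):
    """influences: list of (weight, bone_displacement) pairs.

    With "normalize all" behaviour, weights are rescaled to sum to 1,
    so a vertex can never follow more than 100% of the total movement.
    """
    total = sum(w for w, _ in influences)
    if total == 0:
        return rest_pos  # unassigned vertices don't move at all
    return rest_pos + sum(w / total * d for w, d in influences)

# A knee vertex split 50/50 between upper and lower leg bones ends up
# halfway between the two bones' displacements:
print(deformed_position(0.0, [(0.5, 2.0), (0.5, 4.0)]))  # 3.0

# A vertex with no weights stays put, which is exactly the "leg moves
# but some vertices don't follow" symptom in the screenshots:
print(deformed_position(1.5, []))  # 1.5
```

The zero-weight branch is the failure mode described in the answer: vertices not assigned to the moving bone (or assigned with weight 0) simply stay at their rest position.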
Just realized that the body/neck/head had all their vertices on the right side, even though the mesh has a mirror modifier. I deleted half of the body/neck/head, deleted the vertex groups and the armature modifier, and used Automatic weights again, and now it works fine.
78,901
I'm looking for a word used in interaction design and probably in other fields as well. It's used for the following purpose:

> If a door has one handle which is a flat surface and another handle which is a bar, then the bar has a \_\_\_\_\_\_ for being pulled, and the flat surface has a \_\_\_\_\_\_ for being pushed.

What is this word? It would be used to describe whether or not an object complies with its "desire", to make the user experience as intuitive as possible.
2012/08/20
[ "https://english.stackexchange.com/questions/78901", "https://english.stackexchange.com", "https://english.stackexchange.com/users/22310/" ]
The usual word for this in the computer trade is an *[affordance](http://dictionary.reference.com/browse/affordance)*:- > > A visual clue to the function of an object. > > > See also [here](http://en.wikipedia.org/wiki/Affordance).
*Disposition* or *susceptibility* might be appropriate in a human context, but for door handles I’d suggest writing the sentence in a different way. For example,

> If a door has one handle which is a flat surface and another handle which is a bar, then the bar will normally suggest pulling and the flat surface will normally suggest pushing.

*Invite* and *prompt* are possible alternatives to *suggest*.
78,901
I'm looking for a word used in interaction design and probably in other fields as well. It's used for the following purpose:

> If a door has one handle which is a flat surface and another handle which is a bar, then the bar has a \_\_\_\_\_\_ for being pulled, and the flat surface has a \_\_\_\_\_\_ for being pushed.

What is this word? It would be used to describe whether or not an object complies with its "desire", to make the user experience as intuitive as possible.
2012/08/20
[ "https://english.stackexchange.com/questions/78901", "https://english.stackexchange.com", "https://english.stackexchange.com/users/22310/" ]
*Disposition* or *susceptibility* might be appropriate in a human context, but for door handles I’d suggest writing the sentence in a different way. For example,

> If a door has one handle which is a flat surface and another handle which is a bar, then the bar will normally suggest pulling and the flat surface will normally suggest pushing.

*Invite* and *prompt* are possible alternatives to *suggest*.
I'd use "[penchant](http://dictionary.reference.com/browse/penchant)" (noun. a strong inclination, taste, or liking for something: a penchant for outdoor sports.) or "[affinity](http://dictionary.reference.com/browse/affinity)" (noun 1. a natural liking for or attraction to a person, thing, idea, etc. 2. a person, thing, idea, etc., for which such a natural liking or attraction is felt.).
78,901
I'm looking for a word used in interaction design and probably in other fields as well. It's used for the following purpose:

> If a door has one handle which is a flat surface and another handle which is a bar, then the bar has a \_\_\_\_\_\_ for being pulled, and the flat surface has a \_\_\_\_\_\_ for being pushed.

What is this word? It would be used to describe whether or not an object complies with its "desire", to make the user experience as intuitive as possible.
2012/08/20
[ "https://english.stackexchange.com/questions/78901", "https://english.stackexchange.com", "https://english.stackexchange.com/users/22310/" ]
The usual word for this in the computer trade is an *[affordance](http://dictionary.reference.com/browse/affordance)*:- > > A visual clue to the function of an object. > > > See also [here](http://en.wikipedia.org/wiki/Affordance).
I'd use "[penchant](http://dictionary.reference.com/browse/penchant)" (noun. a strong inclination, taste, or liking for something: a penchant for outdoor sports.) or "[affinity](http://dictionary.reference.com/browse/affinity)" (noun 1. a natural liking for or attraction to a person, thing, idea, etc. 2. a person, thing, idea, etc., for which such a natural liking or attraction is felt.).
78,901
I'm looking for a word used in interaction design and probably in other fields as well. It's used for the following purpose:

> If a door has one handle which is a flat surface and another handle which is a bar, then the bar has a \_\_\_\_\_\_ for being pulled, and the flat surface has a \_\_\_\_\_\_ for being pushed.

What is this word? It would be used to describe whether or not an object complies with its "desire", to make the user experience as intuitive as possible.
2012/08/20
[ "https://english.stackexchange.com/questions/78901", "https://english.stackexchange.com", "https://english.stackexchange.com/users/22310/" ]
The usual word for this in the computer trade is an *[affordance](http://dictionary.reference.com/browse/affordance)*:- > > A visual clue to the function of an object. > > > See also [here](http://en.wikipedia.org/wiki/Affordance).
Other words that could be used here are **[proclivity](http://dictionary.reference.com/browse/proclivity)** and **[propensity](http://dictionary.reference.com/browse/propensity)**. Proclivity: > > natural or habitual inclination or > tendency; propensity; predisposition: a proclivity to meticulousness. > > > Propensity: > > a natural inclination or tendency: a propensity to drink too much. > > >
78,901
I'm looking for a word used in interaction design and probably in other fields as well. It's used for the following purpose:

> If a door has one handle which is a flat surface and another handle which is a bar, then the bar has a \_\_\_\_\_\_ for being pulled, and the flat surface has a \_\_\_\_\_\_ for being pushed.

What is this word? It would be used to describe whether or not an object complies with its "desire", to make the user experience as intuitive as possible.
2012/08/20
[ "https://english.stackexchange.com/questions/78901", "https://english.stackexchange.com", "https://english.stackexchange.com/users/22310/" ]
Other words that could be used here are **[proclivity](http://dictionary.reference.com/browse/proclivity)** and **[propensity](http://dictionary.reference.com/browse/propensity)**. Proclivity: > > natural or habitual inclination or > tendency; propensity; predisposition: a proclivity to meticulousness. > > > Propensity: > > a natural inclination or tendency: a propensity to drink too much. > > >
I'd use "[penchant](http://dictionary.reference.com/browse/penchant)" (noun. a strong inclination, taste, or liking for something: a penchant for outdoor sports.) or "[affinity](http://dictionary.reference.com/browse/affinity)" (noun 1. a natural liking for or attraction to a person, thing, idea, etc. 2. a person, thing, idea, etc., for which such a natural liking or attraction is felt.).
239,681
I know this is a quite odd question, and I don't really know how to provide more details to help you find a solution. All I can say is that, suddenly, every window that has the chance to maximize itself will start maximized. One of the many examples is Chrome's preferences window, which should be a fairly small window but opens at full screen (1920x1080). Does someone have a clue? Please help me find some more details to get a proper solution.
2011/01/30
[ "https://superuser.com/questions/239681", "https://superuser.com", "https://superuser.com/users/51621/" ]
Remapping the Video Stream to match the Audio Stream: Introduction
------------------------------------------------------------------

Many years ago, I made a rendered architectural fly-through animation (part of a university group project) where we used [Adobe Premiere](http://www.adobe.com/products/premiere/) and [**Adobe After Effects**](http://www.adobe.com/products/aftereffects/) to edit and manipulate rendered clips that we made in [3DS Max](http://usa.autodesk.com/3ds-max/) to fit the beats and changes of some [German trance music](http://www.der-dritte-raum.de/). It worked out very well. Another group used [**Sony Vegas**](http://www.sonycreativesoftware.com/vegaspro) to do something similar.

These programs are expensive, but they are industry-standard tools, and skills in using them are highly valued by employers in relevant fields. There are trial versions which will give you enough time to do what you want to do.

It would be much better to slow down and speed up parts of the video to fit the music, rather than the other way around, as changing the tempo of the music will sound totally wrong, whereas it will be unnoticeable visually, as long as it is only a *slight* variation.

Remapping the Video Stream to match the Audio Stream: Procedure
---------------------------------------------------------------

There are loads of tutorials on the internet. Here are a couple which I found from a quick search:

[Time remapping in Adobe After Effects](http://library.creativecow.net/articles/preston_bryan/time_remapping.php) (this is for an old version of After Effects, but I doubt that the technique would have changed much since, although there may be new tools now which make the process easier)

[Time syncing video to music in Sony Vegas](http://www.youtube.com/watch?v=az7PIkc8Xkw) (YouTube video)
Remapping the Audio Stream to match the Video Stream
----------------------------------------------------

For modifying the audio stream, there is a program called Audacity that works well. You should be able to download it and select the portion of the song you want to speed up or slow down to implement the changes. Remember to download the [LAME MP3 encoder](http://lame.sourceforge.net/) as well if you want to be able to export the file as an `.mp3`.
239,681
I know this is a quite odd question, and I don't really know how to provide more details to help you find a solution. All I can say is that, suddenly, every window that has the chance to maximize itself will start maximized. One of the many examples is Chrome's preferences window, which should be a fairly small window but opens at full screen (1920x1080). Does someone have a clue? Please help me find some more details to get a proper solution.
2011/01/30
[ "https://superuser.com/questions/239681", "https://superuser.com", "https://superuser.com/users/51621/" ]
Remapping the Video Stream to match the Audio Stream: Introduction
------------------------------------------------------------------

Many years ago, I made a rendered architectural fly-through animation (part of a university group project) where we used [Adobe Premiere](http://www.adobe.com/products/premiere/) and [**Adobe After Effects**](http://www.adobe.com/products/aftereffects/) to edit and manipulate rendered clips that we made in [3DS Max](http://usa.autodesk.com/3ds-max/) to fit the beats and changes of some [German trance music](http://www.der-dritte-raum.de/). It worked out very well. Another group used [**Sony Vegas**](http://www.sonycreativesoftware.com/vegaspro) to do something similar.

These programs are expensive, but they are industry-standard tools, and skills in using them are highly valued by employers in relevant fields. There are trial versions which will give you enough time to do what you want to do.

It would be much better to slow down and speed up parts of the video to fit the music, rather than the other way around, as changing the tempo of the music will sound totally wrong, whereas it will be unnoticeable visually, as long as it is only a *slight* variation.

Remapping the Video Stream to match the Audio Stream: Procedure
---------------------------------------------------------------

There are loads of tutorials on the internet. Here are a couple which I found from a quick search:

[Time remapping in Adobe After Effects](http://library.creativecow.net/articles/preston_bryan/time_remapping.php) (this is for an old version of After Effects, but I doubt that the technique would have changed much since, although there may be new tools now which make the process easier)

[Time syncing video to music in Sony Vegas](http://www.youtube.com/watch?v=az7PIkc8Xkw) (YouTube video)
A freeware solution is apparently Windows Movie Maker. Download and install [Custom Speed Effects for Movie Maker (XP & Vista)](http://movies.blainesville.com/2007/02/custom-speed-effects-for-movie-maker-xp.html). You will also find many other effects for WMM on that site which you can download free. After installing, you'll have new custom speed effects for:

* Slow Down in increments of 25%, 50%, 66%, x3, x4, x6, x8
* Speed Up in increments of 25%, 50%, 66%, x3, x4, x6, x8

A commercial solution is [Roxio Creator](http://www.roxio.com/enu/products/creator/suite/overview.html) ($79.99). One [source](http://forums.support.roxio.com/topic/51495-is-there-a-speed-upslow-down-video-filter/) says:

> Right click on the segment you want to speed up/slow down and select Trim from the drop-down list. There is an adjust speed option on the Trim window.
39,564
On my host, I currently have two WordPress applications, one phpBB forum, and one MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in phpBB and then access all the other applications with that username and password. Even if you don't know of a unified way, what other login integrations do you know of? Pros and cons of each?
2008/09/02
[ "https://Stackoverflow.com/questions/39564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
If you're integrating a bunch of different apps, and you really just want a bridge, I've had good success with the bridge from [Single-Signon.com](http://www.single-signon.com). You can see their supported apps here: <http://www.single-signon.com/en/applications.html>

I've also used a MediaWiki extension for phpBB integration: <http://www.mediawiki.org/wiki/Extension:PHPBB/Users_Integration>
You can write a custom login hook for MediaWiki. I've done it for LibraryThing so that login credentials from our main site are carried over to our MediaWiki installation. The authentication hook extends MediaWiki's AuthPlugin.

There are a couple of small issues:

1. MediaWiki usernames must start with an initial capital (so if you allow case-sensitive usernames, two users could end up with colliding wiki names)
2. Underscores in usernames are converted to spaces in MediaWiki

But if you can deal with those, then it is certainly possible to use your own user/password data with MediaWiki.

Advantages:

1. The user doesn't have to log in to each area separately. Once they log in to the main site they are logged into the wiki as well.
2. You know that usernames are the same across the systems and can leverage that in links, etc.
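The two naming quirks make it worth checking your existing user table for names that would collide once MediaWiki normalizes them. Here is a rough pre-flight check in Python — it only *approximates* MediaWiki's actual normalization rules (first letter capitalized, underscores to spaces), so treat it as a sketch, not a faithful reimplementation:

```python
def to_mediawiki_username(name: str) -> str:
    """Approximate MediaWiki's username normalization:
    underscores become spaces and the first letter is
    capitalized (the rest of the name is left untouched)."""
    name = name.replace("_", " ").strip()
    return name[:1].upper() + name[1:] if name else name

def find_collisions(names):
    """Group external usernames that would map to the same
    MediaWiki name, e.g. 'bob_smith' and 'Bob smith'."""
    seen = {}
    for n in names:
        seen.setdefault(to_mediawiki_username(n), []).append(n)
    return {wiki: orig for wiki, orig in seen.items() if len(orig) > 1}
```

Running `find_collisions` over the main site's user table before wiring up the auth hook surfaces colliding names early, while you can still rename accounts.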
39,564
On my host, I currently have two WordPress applications, one phpBB forum, and one MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in phpBB and then access all the other applications with that username and password. Even if you don't know of a unified way, what other login integrations do you know of? Pros and cons of each?
2008/09/02
[ "https://Stackoverflow.com/questions/39564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
One option is OpenID, which you can integrate into [phpBB](http://sourceforge.net/projects/phpbb-openid/), [WordPress](http://wordpress.org/extend/plugins/openid/), and [MediaWiki](http://www.mediawiki.org/wiki/Extension:OpenID). A second option is to set up an LDAP server, which you can also integrate into [phpBB](http://sourceforge.net/projects/ldapauthmod/), [WordPress](http://wordpress.org/extend/plugins/wpldap/), and [MediaWiki](http://www.mediawiki.org/wiki/Extension:LDAP_Authentication). If the sites are all on the same root domain, a third option is to modify the registration, login, and logout code so that these actions are replicated on every site at the same time. This gets messy, but it may be the easiest short-term solution if you're in a hurry. Once you track down the account code in each site, it's just a matter of copying and pasting and changing a few cookie parameters.
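For the third option, the reason a same-root-domain cookie can work is that every site can verify a shared, signed token without talking to the others' databases. A hypothetical sketch of such a token in Python — the secret and the token format are invented for illustration; none of the three apps actually use this scheme out of the box:

```python
import hmac
import hashlib

# Hypothetical secret shared by every site on the root domain.
SECRET = b"shared-secret-across-sites"

def make_token(username: str) -> str:
    """Sign the username so any site on the root domain can
    verify the cookie value locally."""
    sig = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}|{sig}"

def verify_token(token: str):
    """Return the username if the signature checks out, else None."""
    username, _, sig = token.partition("|")
    expected = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the signature via timing.
    return username if hmac.compare_digest(sig, expected) else None
```

Each site would set this as a cookie scoped to the root domain on login and clear it on logout; the messy part, as noted above, is patching each app's account code to do so.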
You can write a custom login hook for mediaWiki. I've done it for LibraryThing so that login credentials from our main site are carried over to our mediaWiki installation. The authentication hook extends mediaWiki's AuthPlugin. There are several small issues: 1. mediaWiki usernames must start with initial caps (so if you allow case sensitive user names it could be a problem if two users have colliding wiki names) 2. underscores in usernames are converted to spaces in mediaWiki But if you can deal with those then it is certainly possible to use your own user/password data with mediaWiki. Advantages: 1. The user doesn't have to login to each area separately. Once they login to the main site they are logged into the wiki also. 2. You know that usernames are the same across the systems and can leverage that in links, etc.
39,564
On my host, I currently have two WordPress applications, one phpBB forum, and one MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in phpBB and then access all the other applications with that username and password. Even if you don't know of a unified way, what other login integrations do you know of? Pros and cons of each?
2008/09/02
[ "https://Stackoverflow.com/questions/39564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
When you integrate the systems, just remember 2 things:

1. **Login to system**: check the username/password with both systems.
2. **Change of password**: update the password on both systems.
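Those two rules amount to a thin wrapper around both account stores. A hypothetical Python outline — the `check`/`set_password` methods stand in for whatever each application really exposes:

```python
class DualLogin:
    """Keeps two account systems in step: logins are checked
    against both, and password changes are written to both."""

    def __init__(self, system_a, system_b):
        # Each system is any object with check(user, pw) -> bool
        # and set_password(user, pw) methods.
        self.systems = (system_a, system_b)

    def login(self, user, pw):
        # Rule 1: the credentials must be valid in BOTH systems.
        return all(s.check(user, pw) for s in self.systems)

    def change_password(self, user, new_pw):
        # Rule 2: a password change must reach BOTH systems.
        for s in self.systems:
            s.set_password(user, new_pw)
```

In practice you would also have to decide what happens when one store accepts the new password and the other errors out mid-update — that failure mode is exactly why the bridge and OpenID/LDAP approaches mentioned elsewhere tend to age better.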
One option is OpenID, which you can integrate into [phpBB](http://sourceforge.net/projects/phpbb-openid/), [WordPress](http://wordpress.org/extend/plugins/openid/), and [MediaWiki](http://www.mediawiki.org/wiki/Extension:OpenID). A second option is to set up an LDAP server, which you can also integrate into [phpBB](http://sourceforge.net/projects/ldapauthmod/), [WordPress](http://wordpress.org/extend/plugins/wpldap/), and [MediaWiki](http://www.mediawiki.org/wiki/Extension:LDAP_Authentication). If the sites are all on the same root domain, a third option is to modify the registration, login, and logout code so that these actions are replicated on every site at the same time. This gets messy, but it may be the easiest short-term solution if you're in a hurry. Once you track down the account code in each site, it's just a matter of copying and pasting and changing a few cookie parameters.
39,564
On my host, I currently have two WordPress applications, one phpBB forum, and one MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in phpBB and then access all the other applications with that username and password. Even if you don't know of a unified way, what other login integrations do you know of? Pros and cons of each?
2008/09/02
[ "https://Stackoverflow.com/questions/39564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
One option is OpenID, which you can integrate into [phpBB](http://sourceforge.net/projects/phpbb-openid/), [WordPress](http://wordpress.org/extend/plugins/openid/), and [MediaWiki](http://www.mediawiki.org/wiki/Extension:OpenID). A second option is to set up an LDAP server, which you can also integrate into [phpBB](http://sourceforge.net/projects/ldapauthmod/), [WordPress](http://wordpress.org/extend/plugins/wpldap/), and [MediaWiki](http://www.mediawiki.org/wiki/Extension:LDAP_Authentication). If the sites are all on the same root domain, a third option is to modify the registration, login, and logout code so that these actions are replicated on every site at the same time. This gets messy, but it may be the easiest short-term solution if you're in a hurry. Once you track down the account code in each site, it's just a matter of copying and pasting and changing a few cookie parameters.
I personally think that integrating login systems is one of the hardest jobs (if not the hardest) when utilizing multiple prebuilt applications. As a fan of reuse and modularity, I find this disappointing.

If anyone knows of any easy ways to handle this problem between random app X and random app Y, I would love to know.
39,564
On my host, I currently have two WordPress applications, one phpBB forum, and one MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in phpBB and then access all the other applications with that username and password. Even if you don't know of a unified way, what other login integrations do you know of? Pros and cons of each?
2008/09/02
[ "https://Stackoverflow.com/questions/39564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
One option is OpenID, which you can integrate into [phpBB](http://sourceforge.net/projects/phpbb-openid/), [WordPress](http://wordpress.org/extend/plugins/openid/), and [MediaWiki](http://www.mediawiki.org/wiki/Extension:OpenID). A second option is to set up an LDAP server, which you can also integrate into [phpBB](http://sourceforge.net/projects/ldapauthmod/), [WordPress](http://wordpress.org/extend/plugins/wpldap/), and [MediaWiki](http://www.mediawiki.org/wiki/Extension:LDAP_Authentication). If the sites are all on the same root domain, a third option is to modify the registration, login, and logout code so that these actions are replicated on every site at the same time. This gets messy, but it may be the easiest short-term solution if you're in a hurry. Once you track down the account code in each site, it's just a matter of copying and pasting and changing a few cookie parameters.
I once did a phpBB/MediaWiki login integration from the phpBB end. [Check it out](https://damnian.svn.sourceforge.net/svnroot/damnian/phpBB3/MediaWiki/).
39,564
On my host, I currently have two WordPress applications, one phpBB forum, and one MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in phpBB and then access all the other applications with that username and password. Even if you don't know of a unified way, what other login integrations do you know of? Pros and cons of each?
2008/09/02
[ "https://Stackoverflow.com/questions/39564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
If you're integrating a bunch of different apps, and you really just want a bridge, I've had good success with the bridge from [Single-Signon.com](http://www.single-signon.com). You can see their supported apps here: <http://www.single-signon.com/en/applications.html>

I've also used a MediaWiki extension for phpBB integration: <http://www.mediawiki.org/wiki/Extension:PHPBB/Users_Integration>
I personally think that integrating login systems is one of the hardest jobs (if not the hardest) when utilizing multiple prebuilt applications. As a fan of reuse and modularity, I find this disappointing.

If anyone knows of any easy ways to handle this problem between random app X and random app Y, I would love to know.
39,564
On my host, I currently have two WordPress applications, one phpBB forum, and one MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in phpBB and then access all the other applications with that username and password. Even if you don't know of a unified way, what other login integrations do you know of? Pros and cons of each?
2008/09/02
[ "https://Stackoverflow.com/questions/39564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
When you integrate the systems, just remember 2 things:

1. **Login to system**: check the username/password with both systems.
2. **Change of password**: update the password on both systems.
I once did a phpBB/MediaWiki login integration from the phpBB end. [Check it out](https://damnian.svn.sourceforge.net/svnroot/damnian/phpBB3/MediaWiki/).
39,564
On my host, I currently have two WordPress applications, one phpBB forum, and one MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in phpBB and then access all the other applications with that username and password. Even if you don't know of a unified way, what other login integrations do you know of? Pros and cons of each?
2008/09/02
[ "https://Stackoverflow.com/questions/39564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
If you're integrating a bunch of different apps, and you really just want a bridge, I've had good success with the bridge from [Single-Signon.com](http://www.single-signon.com). You can see their supported apps here: <http://www.single-signon.com/en/applications.html>

I've also used a MediaWiki extension for phpBB integration: <http://www.mediawiki.org/wiki/Extension:PHPBB/Users_Integration>
Having tried to do this some years ago, I remember it not being very easy. The way I did it was to create a totally new table for user/pass and then replace these columns in the respective software with foreign keys to the new table. This required **a lot** of custom tweaking of core files in each application - mainly making sure all SQL requests for this data have the extra join needed for your new table. If I find the time I will maybe try to provide a step-by-step of the changes needed.

There are some **pretty big drawbacks** to this approach though. The main one being that from now on you're gonna have to hand-update any patches.

If you have no content or users yet, look at <http://bbpress.org/documentation/integration-with-wordpress/> which will make things a lot simpler for you.

I can't quite remember, but I believe a big problem I had was that MediaWiki requires usernames formatted in a certain way that conflicted with phpBB.

---

Of course, a totally different approach would be to mod each piece of software to use OpenID - I believe plugins/extensions are readily available for all the applications you mentioned.
83,639
I have heard that when a sailboat is sailing against the wind, it operates on the principle of 'lift'. I am unable to understand the explanation, based on Bernoulli principle, completely. My question is, when it says 'lift', it literally means that the boat is being 'lifted' out of the water? Like when we throw a stone across the lake and it skims and hops on the water?
2013/11/06
[ "https://physics.stackexchange.com/questions/83639", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/32322/" ]
An important point that is often overlooked is that sails do not just generate 'lift', but generate lift in a direction. The pressure differential across the sail's surface is typically much larger near the leading edge, which results in more forward force than you might expect. The two sails (with a sloop) also interact in a way that generates more forward force. You can see some simulations with different sail configurations, and comparison to the performance of a real sailboat at this site: <https://sites.google.com/site/sailcfd/>
Unfortunately the link is broken with no forward link. However, the effect is the same as with an airplane wing, where lift is generated on the upper, slightly rounded surface. This in effect "pulls" the aircraft up at a constant rate of ascent as long as the upper surface continues to receive the flow at a constant rate.

Now imagine the airplane wing sitting in the vertical position. The air flow will do the very same thing to the wing as it does to the sail. To make matters more complicated, airplane wings have "airflow disrupters" called ailerons, which disrupt the air flow over a certain section of the wing. If the flow remains constant, the disturbed air will cause an area of increased or decreased pressure, which causes certain areas of the wing to either slow down or speed up depending on the pressure flow.

The Wikipedia article on [wingsails](https://en.wikipedia.org/wiki/Wingsail) explains the use of a fixed wing/sail and the way airflow flowing around the wing creates the force needed to push the boat. These are essentially airplane wings that have been turned from a horizontal position to a vertical position. The effect is the same because a sailboat with a center keel assumes the characteristics of a plane: the lower centerboard or keel acts as a wing in the same way as the fixed wing above decks. This explains the relation between the Bernoulli principle and how a boat's sail works.

Aviators are trying to build a stunt plane with both horizontal and vertical wings for never-before-seen stunts. (See [this gizmag article](http://www.gizmag.com/vertical-winged-aircraft/21206/) for more information.)
83,639
I have heard that when a sailboat is sailing against the wind, it operates on the principle of 'lift'. I am unable to understand the explanation, based on Bernoulli principle, completely. My question is, when it says 'lift', it literally means that the boat is being 'lifted' out of the water? Like when we throw a stone across the lake and it skims and hops on the water?
2013/11/06
[ "https://physics.stackexchange.com/questions/83639", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/32322/" ]
An important point that is often overlooked is that sails do not just generate 'lift', but generate lift in a direction. The pressure differential across the sail's surface is typically much larger near the leading edge, which results in more forward force than you might expect. The two sails (with a sloop) also interact in a way that generates more forward force. You can see some simulations with different sail configurations, and comparison to the performance of a real sailboat at this site: <https://sites.google.com/site/sailcfd/>
Well, [according to this](https://physics.stackexchange.com/a/295/85020), the Bernoulli effect is NOT the cause of lift.

> Basically planes fly because they push enough air downwards and receive an upwards lift thanks to Newton's third law.

It is clear that blowing over a piece of paper pulls it up ([video](https://youtu.be/025lsG3VHKM?t=74)). Here: fast air => lower pressure => paper pulled up. Right. But that is NOT the case for sails and wings. The wind arriving at a sail/wing all travels at the SAME speed. The air above the wing / on the leeward side of a sail does not spontaneously accelerate. It gets sped up by a lower pressure in that zone. Thus, the "fast air" is not the cause but THE CONSEQUENCE.

What causes the low-pressure zone above a wing / on the leeward side of a sail? What is certain is that wings/sails deflect air. So for wings and sails:

Main cause: air deflection => change of pressure => changes in air speed.

That's it. Don't get confused by the fact that the Bernoulli equation can be effectively used to measure lift. It's perfectly valid to estimate the real cause (air deflected downwards) by a direct effect (change of pressure).
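The deflection argument above can be put into rough numbers with a momentum balance: the force on a wing or sail equals the rate at which momentum is given to the deflected air, F = ṁ · Δv. A back-of-the-envelope sketch in Python — the numbers are purely illustrative, not measurements of any real wing:

```python
def deflection_force(air_density, capture_area, airspeed, downwash):
    """Rough momentum-flux estimate of lift: the mass of air
    processed per second times the deflection velocity it is
    given.  F = (rho * A * v) * w, in newtons."""
    mass_flow = air_density * capture_area * airspeed  # kg/s
    return mass_flow * downwash

# Illustrative numbers only: 1.2 kg/m^3 air, a 10 m^2 stream
# tube, 50 m/s airspeed, 5 m/s of downwash.
force = deflection_force(1.2, 10.0, 50.0, 5.0)  # 3000 N
```

Even with made-up inputs, the scaling is the point: double the airspeed or the deflection and the force doubles, which is consistent with "deflecting air is the main cause" rather than a speed difference appearing on its own.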
83,639
I have heard that when a sailboat is sailing against the wind, it operates on the principle of 'lift'. I am unable to understand the explanation, based on Bernoulli principle, completely. My question is, when it says 'lift', it literally means that the boat is being 'lifted' out of the water? Like when we throw a stone across the lake and it skims and hops on the water?
2013/11/06
[ "https://physics.stackexchange.com/questions/83639", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/32322/" ]
Unfortunately the link is broken with no forward link. However, the effect is the same as with an airplane wing, where lift is generated on the upper, slightly rounded surface. This in effect "pulls" the aircraft up at a constant rate of ascent as long as the upper surface continues to receive the flow at a constant rate.

Now imagine the airplane wing sitting in the vertical position. The air flow will do the very same thing to the wing as it does to the sail. To make matters more complicated, airplane wings have "airflow disrupters" called ailerons, which disrupt the air flow over a certain section of the wing. If the flow remains constant, the disturbed air will cause an area of increased or decreased pressure, which causes certain areas of the wing to either slow down or speed up depending on the pressure flow.

The Wikipedia article on [wingsails](https://en.wikipedia.org/wiki/Wingsail) explains the use of a fixed wing/sail and the way airflow flowing around the wing creates the force needed to push the boat. These are essentially airplane wings that have been turned from a horizontal position to a vertical position. The effect is the same because a sailboat with a center keel assumes the characteristics of a plane: the lower centerboard or keel acts as a wing in the same way as the fixed wing above decks. This explains the relation between the Bernoulli principle and how a boat's sail works.

Aviators are trying to build a stunt plane with both horizontal and vertical wings for never-before-seen stunts. (See [this gizmag article](http://www.gizmag.com/vertical-winged-aircraft/21206/) for more information.)
Well, [according to this](https://physics.stackexchange.com/a/295/85020), the Bernoulli effect is NOT the cause of lift.

> Basically planes fly because they push enough air downwards and receive an upwards lift thanks to Newton's third law.

It is clear that blowing over a piece of paper pulls it up ([video](https://youtu.be/025lsG3VHKM?t=74)). Here: fast air => lower pressure => paper pulled up. Right. But that is NOT the case for sails and wings. The wind arriving at a sail/wing all travels at the SAME speed. The air above the wing / on the leeward side of a sail does not spontaneously accelerate. It gets sped up by a lower pressure in that zone. Thus, the "fast air" is not the cause but THE CONSEQUENCE.

What causes the low-pressure zone above a wing / on the leeward side of a sail? What is certain is that wings/sails deflect air. So for wings and sails:

Main cause: air deflection => change of pressure => changes in air speed.

That's it. Don't get confused by the fact that the Bernoulli equation can be effectively used to measure lift. It's perfectly valid to estimate the real cause (air deflected downwards) by a direct effect (change of pressure).
17,914,473
I need to rotate a Google Maps Marker. I'm using the JavaScript v3 API. What is the best way of doing this? I've had a look at some other similar questions but there seems to be no clear answer, or if there is it's for v2. I'm looking for a neat/tidy way to rotate the marker icon.

**Edit:** As this question has started to rise in views quite dramatically, I have decided to edit it to make the result of my question clearer.

You can't
=========
2013/07/29
[ "https://Stackoverflow.com/questions/17914473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2273902/" ]
A marker image can't be rotated. You will need to create new images for all of your possible rotations and replace the marker image as required with a rotated image file, using marker.setIcon().

<https://developers.google.com/maps/documentation/javascript/reference#Marker>
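One way to keep the precomputed-images approach manageable is to render the icon at fixed angle steps and snap each heading to the nearest step before calling `marker.setIcon()`. A sketch of the snapping logic in Python — the `marker_NNN.png` filename scheme is invented for illustration:

```python
def nearest_icon(heading_deg, step=15):
    """Snap a heading in degrees to the nearest pre-rendered
    rotation step and return the matching (hypothetical)
    icon filename, e.g. 38 degrees -> 'marker_045.png'."""
    snapped = round(heading_deg / step) * step % 360
    return f"marker_{snapped:03d}.png"
```

With a 15° step you only need 24 pre-rendered images, and the worst-case visual error is 7.5°, which is rarely noticeable at marker sizes. The same one-liner ports directly to the JavaScript side of a v3 map page.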
How about the chart API?

<https://chart.googleapis.com/chart?chst=d_map_spin&chld=0.25|38|FF0000|12|text-1|text-2>

It rotates a red spin marker by 38 degrees, scales it to 25%, and labels it with two lines of text.
461,691
For some reason I have two "elephant" icons from Evernote at the top right menu bar. Does anyone know if this is a bug in Evernote or how to remove the extra one? **Additional Information:** This problem is still seen after the following steps: 1. Completely wiping the drive on a Macbook Air 2. Installing Mountain Lion from scratch 3. Installing Evernote from the Mac App Store After installing from the Mac App Store two elephant icons will appear on the menubar.
2012/08/07
[ "https://superuser.com/questions/461691", "https://superuser.com", "https://superuser.com/users/18690/" ]
I had the same problem and resolved it by going to: System Preferences > Users & Groups > Login Items [for my user] Then I deleted the EvernoteHelper (select and press the "-" button). Finally, I unchecked all of the tickboxes under Evernote Helper in Evernote's settings itself, and then re-checked them. Rebooted and only one elephant. Happy days :)
Sounds weird. Have you tried to relaunch Finder? Apple->Force Quit->Finder and click relaunch.
461,691
For some reason I have two "elephant" icons from Evernote at the top right menu bar. Does anyone know if this is a bug in Evernote or how to remove the extra one? **Additional Information:** This problem is still seen after the following steps: 1. Completely wiping the drive on a Macbook Air 2. Installing Mountain Lion from scratch 3. Installing Evernote from the Mac App Store After installing from the Mac App Store two elephant icons will appear on the menubar.
2012/08/07
[ "https://superuser.com/questions/461691", "https://superuser.com", "https://superuser.com/users/18690/" ]
I had the same problem and resolved it by going to: System Preferences > Users & Groups > Login Items [for my user] Then I deleted the EvernoteHelper (select and press the "-" button). Finally, I unchecked all of the tickboxes under Evernote Helper in Evernote's settings itself, and then re-checked them. Rebooted and only one elephant. Happy days :)
You are likely running a normal install of Evernote and an App Store install of it. You need to start up Evernote (the application) and go to Preferences:

![Pic of selecting Evernote in the upper left hand corner](https://i.stack.imgur.com/1bg75.png)

Then, under General, uncheck "Start the Evernote Helper when I log in to my computer":

![Pic of selecting "Preferences"](https://i.stack.imgur.com/Y1dq4.png)
7,678,696
I've installed wkhtmltopdf on Mac OS X via homebrew and I've also tried compiling it (along with the patched version of Qt) by hand. In both cases, the PDFs it generates do not contain any selectable, copyable, or searchable text. Instead each page seems to be its own monolithic image. However, the [binary version](http://code.google.com/p/wkhtmltopdf/downloads/detail?name=wkhtmltopdf-0.9.9-OS-X.i368&can=2&q=) for Mac OS that's provided on the website *does* produce selectable text. But it's an older version (0.9.9) and does not support some of the newer features in 0.11 rc1 that I need. How do I get newer versions to produce PDFs with selectable and searchable text?
2011/10/06
[ "https://Stackoverflow.com/questions/7678696", "https://Stackoverflow.com", "https://Stackoverflow.com/users/52207/" ]
Currently there is [a bug in the queue](http://code.google.com/p/wkhtmltopdf/issues/detail?id=886) of that project for just this case. Unfortunately, the most recent binary for Mac OS X (0.11-rc1) still has this problem.

You can get an older (but newer than the original poster's) version (0.10) that does work [here](http://code.google.com/p/wkhtmltopdf/downloads/detail?name=wkhtmltopdf-0.11.0_rc1-static-i386.tar.bz2&can=2&q=).
[Ensure the body element has a defined, non-zero height.](http://pivotallabs.com/users/jpalermo/blog/articles/1985-standup-01-13-2012-will-the-real-13th-please-stand-up)
328,307
What would be the best way to connect two freestanding farm buildings onto the same network? The total cable length would be less than 1000 feet. Cat6 is listed as having a max length of about 330', which is too short. What other options are out there? There is no line of sight due to a slight hill between the two buildings, so a Wi-Fi boost would probably run into problems too. Edit: Realized I didn't say how much bandwidth I would need. The internet connection is 10/3, and internal transfers won't be huge. Anything >= 10 Mbps would be plenty.
2011/11/06
[ "https://serverfault.com/questions/328307", "https://serverfault.com", "https://serverfault.com/users/97385/" ]
In a word: fiber. Your solution could be as simple as two [media converters](http://rads.stackoverflow.com/amzn/click/B00009B5AA) and [1000 feet of multi-mode fiber](http://rads.stackoverflow.com/amzn/click/B0000C8Y58) with matching connectors, at a cost of under $500 total for actual networking components. You would need to plan the run carefully to prevent the fiber from being damaged during or after installation. Compared to copper, fiber is easier to break and requires very expensive tools to fix. Ordinarily, one would consult a professional installer.
You can try these wireless solutions:

* <http://www.trangobroadband.com/wireless-products/multipoint-broadband-access/M900S-900mhz.aspx> - up to 20-mile range, 18 dB fade margin, 3 Mbps usable subscriber throughput
* <http://www.industrialethernet.com/aw900x.html>

If you want to go for fiber, then 100Base-FX would be enough.
328,307
What would be the best way to connect two freestanding farm buildings onto the same network? The total cable length would be less than 1000 feet. Cat6 is listed as having a max length of about 330', which is too short. What other options are out there? There is no line of sight due to a slight hill between the two buildings, so a Wi-Fi boost would probably run into problems too. Edit: Realized I didn't say how much bandwidth I would need. The internet connection is 10/3, and internal transfers won't be huge. Anything >= 10 Mbps would be plenty.
2011/11/06
[ "https://serverfault.com/questions/328307", "https://serverfault.com", "https://serverfault.com/users/97385/" ]
In a word: fiber. Your solution could be as simple as two [media converters](http://rads.stackoverflow.com/amzn/click/B00009B5AA) and [1000 feet of multi-mode fiber](http://rads.stackoverflow.com/amzn/click/B0000C8Y58) with matching connectors, at a cost of under $500 total for actual networking components. You would need to plan the run carefully to prevent the fiber from being damaged during or after installation. Compared to copper, fiber is easier to break and requires very expensive tools to fix. Ordinarily, one would consult a professional installer.
If you want to learn more about fibre - how it works, the tools you need, and repair - then I found the FOA a great help. It's thorough and free!

<http://thefoa.org/>

<http://www.youtube.com/user/thefoainc>

Informative and useful... you'll soon be a great fibre provider. ;-) So you've got no excuse not to start laying.
328,307
What would be the best way to connect two freestanding farm buildings onto the same network? The total cable length would be less than 1000 feet. Cat6 is listed as having a max length of about 330', which is too short. What other options are out there? There is no line of sight due to a slight hill between the two buildings, so a Wi-Fi boost would probably run into problems too. Edit: Realized I didn't say how much bandwidth I would need. The internet connection is 10/3, and internal transfers won't be huge. Anything >= 10 Mbps would be plenty.
2011/11/06
[ "https://serverfault.com/questions/328307", "https://serverfault.com", "https://serverfault.com/users/97385/" ]
In a word: fiber. Your solution could be as simple as two [media converters](http://rads.stackoverflow.com/amzn/click/B00009B5AA) and [1000 feet of multi-mode fiber](http://rads.stackoverflow.com/amzn/click/B0000C8Y58) with matching connectors, at a cost of under $500 total for actual networking components. You would need to plan the run carefully to prevent the fiber from being damaged during or after installation. Compared to copper, fiber is easier to break and requires very expensive tools to fix. Ordinarily, one would consult a professional installer.
Depending on your switches, you might not require external media converters - just plug an appropriate GBIC/SFP into the switches and you are good to go.

I'd go for the fastest connection your switches are able to handle - there is little difference in price (as Gigabit is much more common these days than 100 Mbit), and it's never a good idea to save a few cents on cabling and then need to rip it out again when the requirements change in due time...

There are preconfigured fibre cables ready for installation in many lengths, fibre types, and connector styles, so there is no need for much onsite installation other than laying out the cables themselves.
328,307
What would be the best way to connect two freestanding farm buildings onto the same network? The total cable length would be less than 1000 feet. Cat6 is listed as having a max length of about 330', which is too short. What other options are out there? There is no line of sight due to a slight hill between the two buildings, so a Wi-Fi boost would probably run into problems too. Edit: Realized I didn't say how much bandwidth I would need. The internet connection is 10/3, and internal transfers won't be huge. Anything >= 10 Mbps would be plenty.
2011/11/06
[ "https://serverfault.com/questions/328307", "https://serverfault.com", "https://serverfault.com/users/97385/" ]
Depending on your switches you might not require external media converters - just plug an appropriate GBIC/SFP into each switch and you're good to go. I'd go for the fastest connection your switches can handle - there is little difference in price (Gigabit is much more common these days than 100 Mbit), and it's never a good idea to save a few cents on cabling only to rip it out again when the requirements change. Pre-terminated fibre cables are available in many lengths, fibre types and connectors, so there is little on-site installation needed beyond laying out the cable itself.
You can try these wireless solutions: * <http://www.trangobroadband.com/wireless-products/multipoint-broadband-access/M900S-900mhz.aspx> - Up to 20-mile range, 18 dB fade margin, 3 Mbps usable subscriber throughput * <http://www.industrialethernet.com/aw900x.html> If you want to go for fiber, then 100Base-FX would be enough.
328,307
What would be the best way to connect two freestanding farm buildings onto the same network? The total cable length would be less than 1000 feet. Cat6 is listed as having a max length of about 330', which is too short. What other options are out there? There is no line of sight due to a slight hill between the two buildings, so a Wi-Fi boost would probably run into problems too. Edit: Realized I didn't say how much bandwidth I would need. The internet connection is 10/3, and internal transfers won't be huge. Anything >= 10 Mbps would be plenty.
2011/11/06
[ "https://serverfault.com/questions/328307", "https://serverfault.com", "https://serverfault.com/users/97385/" ]
Depending on your switches you might not require external media converters - just plug an appropriate GBIC/SFP into each switch and you're good to go. I'd go for the fastest connection your switches can handle - there is little difference in price (Gigabit is much more common these days than 100 Mbit), and it's never a good idea to save a few cents on cabling only to rip it out again when the requirements change. Pre-terminated fibre cables are available in many lengths, fibre types and connectors, so there is little on-site installation needed beyond laying out the cable itself.
If you want to learn more about fibre - how it works, the tools you need, and how to repair it - then I found the FOA a great help. It's thorough and free! <http://thefoa.org/> <http://www.youtube.com/user/thefoainc> Informative and useful... you'll soon be a great fibre provider. ;-) So you've got no excuse not to start laying.
118,359
I have a SSD as my primary (C:) drive, mainly used for quickly loading games. It's pretty small (~30 GB) so I want to keep things that don't really need a speed boost off of it. I attempted installing the Visual Studio 2010 Express beta last night, and it claimed to require 2.1 GB of space so I changed the install directory to a secondary, non-SSD drive. After this, the installer said that it would use 1.8 GB on C: and ~200 MB on the secondary drive. While this token gesture of moving 1/10 of the app to the place I told it to is cute, I really want to install everything I can to the secondary drive. **Is there any way to install all of Visual Studio 2010 Express to a drive besides C:?**
2010/03/10
[ "https://superuser.com/questions/118359", "https://superuser.com", "https://superuser.com/users/13423/" ]
No, much of what VS installs (regardless of version) goes into subdirectories in your Windows folder: things such as the .NET frameworks, shared files, etc. So if you installed Windows to the C: drive, VS has to install much of its core there as well.
Kind of. The setup DVD contains a file Setup\baseline.dat. This is a large text file which stores information on where to install large chunks of the software. You need to edit the text file and change the lines which say > > DefaultPath=[ProgramFilesFolder]\VC\ > > > ... > > > DefaultPath=[ProgramFilesFolder]\Microsoft Visual Studio 10.0 > > > ... > > > DefaultPath=[ProgramFilesFolder]\Microsoft Visual Studio 10.0\Common7\IDE > > > to the following > > DefaultPath=D:\Applications\VS2010\VC\ > > > ... > > > DefaultPath=D:\Applications\VS2010\Microsoft Visual Studio 10.0 > > > ... > > > DefaultPath=D:\Applications\VS2010\Microsoft Visual Studio 10.0\Common7\IDE > > > That will get most of the stuff off C. This also works with Visual Studio 2005/2008 and the Express Editions. I've been using this trick for years and never encountered a problem. N.B.: Some parts of the installer also use locations such as *DefaultPath=[WindowsFolder]\assembly*. You can edit these in the same way to free up even more space, but I can't guarantee this won't break things. Obviously if you're installing from a DVD/ISO you need to copy the entire contents of the DVD to a folder before editing baseline.dat, otherwise it will be read-only.
118,359
I have a SSD as my primary (C:) drive, mainly used for quickly loading games. It's pretty small (~30 GB) so I want to keep things that don't really need a speed boost off of it. I attempted installing the Visual Studio 2010 Express beta last night, and it claimed to require 2.1 GB of space so I changed the install directory to a secondary, non-SSD drive. After this, the installer said that it would use 1.8 GB on C: and ~200 MB on the secondary drive. While this token gesture of moving 1/10 of the app to the place I told it to is cute, I really want to install everything I can to the secondary drive. **Is there any way to install all of Visual Studio 2010 Express to a drive besides C:?**
2010/03/10
[ "https://superuser.com/questions/118359", "https://superuser.com", "https://superuser.com/users/13423/" ]
No, much of what VS installs (regardless of version) goes into subdirectories in your Windows folder: things such as the .NET frameworks, shared files, etc. So if you installed Windows to the C: drive, VS has to install much of its core there as well.
I had a similar problem in Windows XP and found my own solution: 1) In c:\Program Files, manually create all the folders that the VS2010 installation would otherwise create for you. This includes at least these folders: > > i. c:\Program Files\Microsoft SDKs ii. c:\Program Files\Microsoft > Visual Studio iii. c:\Program Files\Reference Assemblies iv. > c:\Program Files\Microsoft Visual Studio 9.0 > > > Since these folders are now empty, you can actually mount a logical disk drive on each of them. This effectively increases the size of C:. 2) Install an additional hard disk and create an extended partition on it. Create 4-5 logical disk drives on that extended partition. Then in Windows Control Panel you can mount these logical disk drives onto the above folders. Now you should have enough disk space for your VS2010 installation. 3) It seems that we cannot use the above approach for the main program installation folder, i.e. c:\Program Files\Microsoft Visual Studio 10.0. But we can just tell the VS installer to use d:\Program Files. In addition, Sysinternals has a 'junction' utility allowing one to create symbolic links in Windows XP to link some folders, e.g. c:\Program Files\Microsoft SDKs to, say, d:\Program Files\Microsoft SDKs. This may be another solution in addition to mounting a logical disk to the folders.
118,359
I have a SSD as my primary (C:) drive, mainly used for quickly loading games. It's pretty small (~30 GB) so I want to keep things that don't really need a speed boost off of it. I attempted installing the Visual Studio 2010 Express beta last night, and it claimed to require 2.1 GB of space so I changed the install directory to a secondary, non-SSD drive. After this, the installer said that it would use 1.8 GB on C: and ~200 MB on the secondary drive. While this token gesture of moving 1/10 of the app to the place I told it to is cute, I really want to install everything I can to the secondary drive. **Is there any way to install all of Visual Studio 2010 Express to a drive besides C:?**
2010/03/10
[ "https://superuser.com/questions/118359", "https://superuser.com", "https://superuser.com/users/13423/" ]
No, much of what VS installs (regardless of version) goes into subdirectories in your Windows folder: things such as the .NET frameworks, shared files, etc. So if you installed Windows to the C: drive, VS has to install much of its core there as well.
There are two ways. The easiest is just to install to C and then move the big folders over to your D drive and set up an NTFS junction to link the old location (on C) to the new one (on D). If your SSD is so small that you cannot do that, then make the folders on your D drive first, then the junction from C to D and then install the program (pointing to the "folder" on C). The installer will probably complain that the folder you are trying to install to already exists but most will happily continue anyway. <http://support.microsoft.com/kb/205524>
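For reference, on Vista/7 the junction described above can be created with the built-in `mklink` command (on XP, the tools from the linked KB article or Sysinternals' `junction` do the same job). The paths below are hypothetical examples, assuming VS installed to its default location and D: as the secondary drive:

```
:: Run from an elevated command prompt, with Visual Studio closed.
move "C:\Program Files\Microsoft Visual Studio 10.0" "D:\VS2010"
:: /J creates an NTFS junction so the C: path transparently points at D:\VS2010
mklink /J "C:\Program Files\Microsoft Visual Studio 10.0" "D:\VS2010"
```

After this, anything that reads or writes the original C: path lands on D: instead, which is why installers and updates keep working.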
118,359
I have a SSD as my primary (C:) drive, mainly used for quickly loading games. It's pretty small (~30 GB) so I want to keep things that don't really need a speed boost off of it. I attempted installing the Visual Studio 2010 Express beta last night, and it claimed to require 2.1 GB of space so I changed the install directory to a secondary, non-SSD drive. After this, the installer said that it would use 1.8 GB on C: and ~200 MB on the secondary drive. While this token gesture of moving 1/10 of the app to the place I told it to is cute, I really want to install everything I can to the secondary drive. **Is there any way to install all of Visual Studio 2010 Express to a drive besides C:?**
2010/03/10
[ "https://superuser.com/questions/118359", "https://superuser.com", "https://superuser.com/users/13423/" ]
Kind of. The setup DVD contains a file Setup\baseline.dat. This is a large text file which stores information on where to install large chunks of the software. You need to edit the text file and change the lines which say > > DefaultPath=[ProgramFilesFolder]\VC\ > > > ... > > > DefaultPath=[ProgramFilesFolder]\Microsoft Visual Studio 10.0 > > > ... > > > DefaultPath=[ProgramFilesFolder]\Microsoft Visual Studio 10.0\Common7\IDE > > > to the following > > DefaultPath=D:\Applications\VS2010\VC\ > > > ... > > > DefaultPath=D:\Applications\VS2010\Microsoft Visual Studio 10.0 > > > ... > > > DefaultPath=D:\Applications\VS2010\Microsoft Visual Studio 10.0\Common7\IDE > > > That will get most of the stuff off C. This also works with Visual Studio 2005/2008 and the Express Editions. I've been using this trick for years and never encountered a problem. N.B.: Some parts of the installer also use locations such as *DefaultPath=[WindowsFolder]\assembly*. You can edit these in the same way to free up even more space, but I can't guarantee this won't break things. Obviously if you're installing from a DVD/ISO you need to copy the entire contents of the DVD to a folder before editing baseline.dat, otherwise it will be read-only.
I had a similar problem in Windows XP and found my own solution: 1) In c:\Program Files, manually create all the folders that the VS2010 installation would otherwise create for you. This includes at least these folders: > > i. c:\Program Files\Microsoft SDKs ii. c:\Program Files\Microsoft > Visual Studio iii. c:\Program Files\Reference Assemblies iv. > c:\Program Files\Microsoft Visual Studio 9.0 > > > Since these folders are now empty, you can actually mount a logical disk drive on each of them. This effectively increases the size of C:. 2) Install an additional hard disk and create an extended partition on it. Create 4-5 logical disk drives on that extended partition. Then in Windows Control Panel you can mount these logical disk drives onto the above folders. Now you should have enough disk space for your VS2010 installation. 3) It seems that we cannot use the above approach for the main program installation folder, i.e. c:\Program Files\Microsoft Visual Studio 10.0. But we can just tell the VS installer to use d:\Program Files. In addition, Sysinternals has a 'junction' utility allowing one to create symbolic links in Windows XP to link some folders, e.g. c:\Program Files\Microsoft SDKs to, say, d:\Program Files\Microsoft SDKs. This may be another solution in addition to mounting a logical disk to the folders.
118,359
I have a SSD as my primary (C:) drive, mainly used for quickly loading games. It's pretty small (~30 GB) so I want to keep things that don't really need a speed boost off of it. I attempted installing the Visual Studio 2010 Express beta last night, and it claimed to require 2.1 GB of space so I changed the install directory to a secondary, non-SSD drive. After this, the installer said that it would use 1.8 GB on C: and ~200 MB on the secondary drive. While this token gesture of moving 1/10 of the app to the place I told it to is cute, I really want to install everything I can to the secondary drive. **Is there any way to install all of Visual Studio 2010 Express to a drive besides C:?**
2010/03/10
[ "https://superuser.com/questions/118359", "https://superuser.com", "https://superuser.com/users/13423/" ]
Kind of. The setup DVD contains a file Setup\baseline.dat. This is a large text file which stores information on where to install large chunks of the software. You need to edit the text file and change the lines which say > > DefaultPath=[ProgramFilesFolder]\VC\ > > > ... > > > DefaultPath=[ProgramFilesFolder]\Microsoft Visual Studio 10.0 > > > ... > > > DefaultPath=[ProgramFilesFolder]\Microsoft Visual Studio 10.0\Common7\IDE > > > to the following > > DefaultPath=D:\Applications\VS2010\VC\ > > > ... > > > DefaultPath=D:\Applications\VS2010\Microsoft Visual Studio 10.0 > > > ... > > > DefaultPath=D:\Applications\VS2010\Microsoft Visual Studio 10.0\Common7\IDE > > > That will get most of the stuff off C. This also works with Visual Studio 2005/2008 and the Express Editions. I've been using this trick for years and never encountered a problem. N.B.: Some parts of the installer also use locations such as *DefaultPath=[WindowsFolder]\assembly*. You can edit these in the same way to free up even more space, but I can't guarantee this won't break things. Obviously if you're installing from a DVD/ISO you need to copy the entire contents of the DVD to a folder before editing baseline.dat, otherwise it will be read-only.
There are two ways. The easiest is just to install to C and then move the big folders over to your D drive and set up an NTFS junction to link the old location (on C) to the new one (on D). If your SSD is so small that you cannot do that, then make the folders on your D drive first, then the junction from C to D and then install the program (pointing to the "folder" on C). The installer will probably complain that the folder you are trying to install to already exists but most will happily continue anyway. <http://support.microsoft.com/kb/205524>
118,359
I have a SSD as my primary (C:) drive, mainly used for quickly loading games. It's pretty small (~30 GB) so I want to keep things that don't really need a speed boost off of it. I attempted installing the Visual Studio 2010 Express beta last night, and it claimed to require 2.1 GB of space so I changed the install directory to a secondary, non-SSD drive. After this, the installer said that it would use 1.8 GB on C: and ~200 MB on the secondary drive. While this token gesture of moving 1/10 of the app to the place I told it to is cute, I really want to install everything I can to the secondary drive. **Is there any way to install all of Visual Studio 2010 Express to a drive besides C:?**
2010/03/10
[ "https://superuser.com/questions/118359", "https://superuser.com", "https://superuser.com/users/13423/" ]
There are two ways. The easiest is just to install to C and then move the big folders over to your D drive and set up an NTFS junction to link the old location (on C) to the new one (on D). If your SSD is so small that you cannot do that, then make the folders on your D drive first, then the junction from C to D and then install the program (pointing to the "folder" on C). The installer will probably complain that the folder you are trying to install to already exists but most will happily continue anyway. <http://support.microsoft.com/kb/205524>
I had a similar problem in Windows XP and found my own solution: 1) In c:\Program Files, manually create all the folders that the VS2010 installation would otherwise create for you. This includes at least these folders: > > i. c:\Program Files\Microsoft SDKs ii. c:\Program Files\Microsoft > Visual Studio iii. c:\Program Files\Reference Assemblies iv. > c:\Program Files\Microsoft Visual Studio 9.0 > > > Since these folders are now empty, you can actually mount a logical disk drive on each of them. This effectively increases the size of C:. 2) Install an additional hard disk and create an extended partition on it. Create 4-5 logical disk drives on that extended partition. Then in Windows Control Panel you can mount these logical disk drives onto the above folders. Now you should have enough disk space for your VS2010 installation. 3) It seems that we cannot use the above approach for the main program installation folder, i.e. c:\Program Files\Microsoft Visual Studio 10.0. But we can just tell the VS installer to use d:\Program Files. In addition, Sysinternals has a 'junction' utility allowing one to create symbolic links in Windows XP to link some folders, e.g. c:\Program Files\Microsoft SDKs to, say, d:\Program Files\Microsoft SDKs. This may be another solution in addition to mounting a logical disk to the folders.
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
How about 'Garble'? According to M-W * Garble: a : to so alter or distort as to create a wrong impression or change the meaning * *garble* a story * the candidate complained that his views had been deliberately *garbled* by his opponent 'Misrepresent' also works though.
[Ventriloquize](http://www.dictionary.com/browse/ventriloquize): > > to speak or sound in the manner of a ventriloquist. > > >
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
You could use the term *co-opt*: > > **co-opt** *v. tr.* > > 3. To take or assume for one's own use; appropriate: *co-opted the criticism by embracing it.* > > > [From TFDO](http://www.thefreedictionary.com/co-opt)
[Ventriloquize](http://www.dictionary.com/browse/ventriloquize): > > to speak or sound in the manner of a ventriloquist. > > >
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
How about *pervert*? From Google search: > > [pervert](https://www.google.com/search?client=safari&rls=en&q=pervert+definition&ie=UTF-8&oe=UTF-8) *verb*: > alter (something) from its original course, meaning, or state to a > distortion or corruption of what was first intended > > > Your example: > > Mr ..... claimed he was speaking on behalf of the people who voted for > him but he was actually *perverting* their voice > > > It's hard to find a verb more derogatory than *pervert*, not to mention its other obvious connotations as a noun. Also, as suggested by @alwayslearning, how about *misappropriate*? From Dictionary.com: > > [*misappropriate*](http://www.dictionary.com/browse/misappropriate?s=t): > to put to a wrong use > > > Your example: > > Mr ..... claimed he was speaking on behalf of the people who voted for > him but he was actually *misappropriating* their voice. > > >
[Ventriloquize](http://www.dictionary.com/browse/ventriloquize): > > to speak or sound in the manner of a ventriloquist. > > >
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
The answer is in the title of your question [Misrepresent](http://www.merriam-webster.com/dictionary/misrepresent) > > to give a false or misleading representation of usually with an intent to deceive or be unfair > > > to serve badly or improperly as a representative of > > > For example: > > He claimed he was speaking on behalf of the people who voted for him, but he was misrepresenting them when he announced the new policy that would... > > >
[Ventriloquize](http://www.dictionary.com/browse/ventriloquize): > > to speak or sound in the manner of a ventriloquist. > > >
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
The most suitable word for my context was "Highjack", bringing a strong, explicit implication of stealing for their own purposes. The word was contributed in a comment that was subsequently removed. Thanks to that unknown contributor!
You could use the term *co-opt*: > > **co-opt** *v. tr.* > > 3. To take or assume for one's own use; appropriate: *co-opted the criticism by embracing it.* > > > [From TFDO](http://www.thefreedictionary.com/co-opt)
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
The most suitable word for my context was "Highjack", bringing a strong, explicit implication of stealing for their own purposes. The word was contributed in a comment that was subsequently removed. Thanks to that unknown contributor!
[Ventriloquize](http://www.dictionary.com/browse/ventriloquize): > > to speak or sound in the manner of a ventriloquist. > > >
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
You could use the term *co-opt*: > > **co-opt** *v. tr.* > > 3. To take or assume for one's own use; appropriate: *co-opted the criticism by embracing it.* > > > [From TFDO](http://www.thefreedictionary.com/co-opt)
How about 'Garble'? According to M-W * Garble: a : to so alter or distort as to create a wrong impression or change the meaning * *garble* a story * the candidate complained that his views had been deliberately *garbled* by his opponent 'Misrepresent' also works though.
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
You could use the term *co-opt*: > > **co-opt** *v. tr.* > > 3. To take or assume for one's own use; appropriate: *co-opted the criticism by embracing it.* > > > [From TFDO](http://www.thefreedictionary.com/co-opt)
How about *pervert*? From Google search: > > [pervert](https://www.google.com/search?client=safari&rls=en&q=pervert+definition&ie=UTF-8&oe=UTF-8) *verb*: > alter (something) from its original course, meaning, or state to a > distortion or corruption of what was first intended > > > Your example: > > Mr ..... claimed he was speaking on behalf of the people who voted for > him but he was actually *perverting* their voice > > > It's hard to find a verb more derogatory than *pervert*, not to mention its other obvious connotations as a noun. Also, as suggested by @alwayslearning, how about *misappropriate*? From Dictionary.com: > > [*misappropriate*](http://www.dictionary.com/browse/misappropriate?s=t): > to put to a wrong use > > > Your example: > > Mr ..... claimed he was speaking on behalf of the people who voted for > him but he was actually *misappropriating* their voice. > > >
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
The most suitable word for my context was "Highjack", bringing a strong, explicit implication of stealing for their own purposes. The word was contributed in a comment that was subsequently removed. Thanks to that unknown contributor!
How about 'Garble'? According to M-W * Garble: a : to so alter or distort as to create a wrong impression or change the meaning * *garble* a story * the candidate complained that his views had been deliberately *garbled* by his opponent 'Misrepresent' also works though.
359,874
Say a group of people vote for a leader in an election but the leader then goes on to "*speak for the people*" but actually he's speaking for himself and his cronies. I'd like it to be clearly in a derogatory sense. > > "Mr ..... claimed he was speaking on behalf of the people who voted for him but he was actually **[stealing]** their voice" > > > What word would best fit here?
2016/11/23
[ "https://english.stackexchange.com/questions/359874", "https://english.stackexchange.com", "https://english.stackexchange.com/users/167962/" ]
The most suitable word for my context was "Highjack", bringing a strong, explicit implication of stealing for their own purposes. The word was contributed in a comment that was subsequently removed. Thanks to that unknown contributor!
How about *pervert*? From Google search: > > [pervert](https://www.google.com/search?client=safari&rls=en&q=pervert+definition&ie=UTF-8&oe=UTF-8) *verb*: > alter (something) from its original course, meaning, or state to a > distortion or corruption of what was first intended > > > Your example: > > Mr ..... claimed he was speaking on behalf of the people who voted for > him but he was actually *perverting* their voice > > > It's hard to find a verb more derogatory than *pervert*, not to mention its other obvious connotations as a noun. Also, as suggested by @alwayslearning, how about *misappropriate*? From Dictionary.com: > > [*misappropriate*](http://www.dictionary.com/browse/misappropriate?s=t): > to put to a wrong use > > > Your example: > > Mr ..... claimed he was speaking on behalf of the people who voted for > him but he was actually *misappropriating* their voice. > > >
11,563,116
I'm working on an Intranet application that manipulates data from a SQL server. The application has to support multiple users playing with the same data source / same tables. The application uses a [jQuery datatable library](http://datatables.net/) fed by an Ajax call to a JSON result action in my MVC Controller and some Ajax links rendered in the table allowing CRUD operations in the form of a modal popup window. When the popup submits the CRUD operation, it closes and calls a refresh on the datatable. Everything is working wonderfully; however, I want that datatable to update if data is changed by another user (or another browser/tab) without having to refresh the table. One method would be to poll a hash from the server every X milliseconds and update if the hash changes, but I don't really like that option as I feel it could get out of hand pretty quickly with the number of tables to manage. I would like to know what you guys think of the polling method and if you have an alternative that would be more fluid, cleaner or simply a better practice for X reasons. Thanks a lot!
2012/07/19
[ "https://Stackoverflow.com/questions/11563116", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1338607/" ]
> > I would like to know what you guys think of the polling method and if > you have an alternative that would be more fluid, cleaner or simply a > better practice for X reasons. > > > The alternative to polling is pushing, which is far more efficient. Check out [`SignalR`](https://github.com/SignalR/SignalR/). And here's an [introductory blog post](http://www.hanselman.com/blog/AsynchronousScalableWebApplicationsWithRealtimePersistentLongrunningConnectionsWithSignalR.aspx).
Have you considered using <http://socket.io/> to create a websocket? This will allow the server to push data to the relevant clients whenever your table updates and will also allow users to push their changes to your server without refreshing (just like Ajax). Socket.io also has a great fallback mechanism so it's supported on all major browsers (yes, even IE6). <http://socket.io/#browser-support>
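Whichever transport you pick (polling or a push channel like socket.io/SignalR), the client-side logic is similar: only refresh the datatable when the server's data hash actually changes. A minimal, framework-free sketch of that change detection (the `fetchHash` function and `onChange` callback are hypothetical stand-ins for an Ajax call to a hash endpoint and a datatable reload):

```javascript
// Returns a poll function that invokes onChange only when the hash
// differs from the previously seen value. fetchHash is any async
// function returning the server's current data hash (hypothetical).
function makePoller(fetchHash, onChange) {
  let lastHash = null;
  return async function poll() {
    const hash = await fetchHash();
    if (lastHash !== null && hash !== lastHash) {
      onChange(hash); // e.g. trigger the datatable's Ajax reload here
    }
    lastHash = hash;
  };
}
```

In a browser you might wire this up with `setInterval(poll, 5000)` and a jQuery `$.getJSON` call as `fetchHash`; with a push transport, you would instead call `poll` whenever a server notification arrives, keeping the same change-detection logic.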
15,146
I've seen many statistics referenced along the lines of: > > Visitors decide whether to stay on or leave a website within [7, 2, 11, 0.8 seconds] of seeing it. > > > Typically the statistic is used when talking about the importance of appealing design, clear navigation, prominent display of relevant content, etc. Unfortunately the articles I've found aren't citing sources. Where's the actual research to back this up? How long are visitors actually taking to determine whether a site is relevant to them or not?
2011/12/18
[ "https://ux.stackexchange.com/questions/15146", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/10261/" ]
There's a few studies that show sub-conscious reactions to the aesthetics of a page are made within 50ms, and that these reactions then impinge on the user's sense of usability, satisfaction, and the credibility of the site. * [*Attention web designers: You have 50 milliseconds to make a good first impression!*](http://anaandjelic.typepad.com/files/attention-web-designers-2.pdf) (PDF), Lindgaard G., Fernandes G. J., Dudek C. & Brown, J. Tested subjects by displaying web pages (saved to disk) for 500ms or 50ms for "visual appeal." The report concludes that "...visual appeal can be assessed within 50 msec suggesting that web designers have about 50msec to make a good first impression." *Behaviour and Information Technology*, 25:115 - 126 (2006). 1. [*Web Users Judge Sites in the Blink of an Eye*](http://www.nature.com/news/2006/060109/full/060109-13.html), Hopkin, M. Nature first reported on the study online on Jan. 13, 2006. 2. [*First impressions count for web*](http://news.bbc.co.uk/1/hi/technology/4616700.stm), BBC News, Jan. 16, 2006 Early news report of the "blink" study after the Nature story. "My colleagues believed it would be impossible to really see anything in less than 500 milliseconds." -- Gitte Lindgaard et al * [Carleton University, HOT Lab](http://www.carleton.ca/hot/) Human Oriented Technology Lab at Carleton University in Ottawa, Ontario Canada researches "interactive technologies for human endeavors with an emphasis on human computer interaction and a user-centred design approach." Dr. Gitte Lindgaard and her colleagues work at the HOT Lab. * ["A second chance for emotion"](http://books.google.com.au/books?id=A2s963AzymYC&pg=PA12&lpg=PA12&dq=A+second+chance+for+emotion+Damasio), Damasio, A. R. In *Cognitive Neuroscience of Emotions* edited by R. D. Lane and L. Nadel (New York: Oxford University Press, 2000). * [Stanford Guidelines for Web Credibility](http://credibility.stanford.edu/guidelines/), Fogg, B.J. Persuasive Technology Lab. 
Stanford University, 2002 (revised November 2003). In two surveys of over 2600 people Fogg found that a "clean, professional look" was cited by 46.1% of participants when evaluating sites for web credibility. Information Design/Structure was cited 28.5% of the time, while Information Focus was cited 25.1% of the time. While the factors varied for different types of sites, disguised advertising and popup ads, stale content, broken or uncredible links, difficult navigation, typographic errors, popup ads, and slow or unavailable sites were found to harm credibility the most. * [*Blink: the power of thinking without thinking*](http://www.gladwell.com/blink/), Gladwell, M. A fascinating book about how accurate our first impressions can be, especially among those with a lifetime of experience (art appraisers for example). (New York: Little, Brown, and Co., 2005) * [*Response Time: Eight Seconds, Plus or Minus Two*](http://www.websiteoptimization.com/speed/1/), King, A. In [*Speed Up Your Site: Web Site Optimization*](http://www.speedupyoursite.com), Indianapolis: New Riders Publishing, 2003. The consensus among HCI researchers is to deliver useful content within 1 to 2 seconds (navigation bar, search form) and your entire page within 8 to 10 seconds (8.6 seconds was the figure most cited). This [article](http://www.websiteoptimization.com/speed/tweak/blink/) has further reading. * [*Flow in Web Design*](http://www.websiteoptimization.com/speed/2/), King, A. In [*Speed Up Your Site: Web Site Optimization*](http://www.speedupyoursite.com), Indianapolis: New Riders Publishing, 2003. Fast, well-designed sites can actually create a flow state in users. In fact, according to a recent study, over 47% of users have experienced flow on the Web. 
* *The Emotional Brain: The Mysterious Underpinnings of Emotional Life*, [Ledoux, J.](http://en.wikipedia.org/wiki/Joseph_E._LeDoux) 1996, Simon & Schuster, 1998 Touchstone edition: ISBN 0-684-83659-9 * [*Understanding web browsing behaviors through Weibull analysis of dwell time*](http://doi.acm.org/10.1145/1835449.1835513), Chao Liu, Ryen W. White, and Susan Dumais, 2010. In Proceedings of the 33rd international ACM SIGIR conference. [Jakob Nielsen provides a good summary of the study](http://www.nngroup.com/articles/how-long-do-users-stay-on-web-pages/), stressing the importance of surviving the first 10 seconds. * [*The role of visual complexity and prototypicality regarding first impression of websites: Working towards understanding aesthetic judgments*](http://dx.doi.org/10.1016/j.ijhcs.2012.06.003), Tuch A.N., Presslaber E, Stoecklin M, Opwis K, Bargas-Avila J; 2012. This paper experimentally investigates the role of visual complexity (VC) and prototypicality (PT) as design factors of websites, shaping users’ first impressions by means of two studies. Results reveal that VC and PT affect participants’ aesthetics ratings within the first 50 ms of exposure. In a second study presentation times were shortened to 17, 33 and 50ms. Results suggest that VC and PT affect aesthetic perception even within 17ms, though the effect of PT is less pronounced than the one of VC. With increasing presentation time the effect of PT becomes as influential as the VC effect.
[This study](http://dl.acm.org/citation.cfm?doid=1835449.1835513) covers dwell times in general, as well as for various categories (entertainment, financial, etc.) [Jakob Nielsen](http://www.useit.com/alertbox/page-abandonment-time.html) provides a good summary of the study, stressing the importance of surviving the first 10 seconds.
15,146
I've seen many statistics referenced along the lines of: > > Visitors decide whether to stay on or leave a website within [7, 2, 11, 0.8 seconds] of seeing it. > > > Typically the statistic is used when talking about the importance of appealing design, clear navigation, prominent display of relevant content, etc. Unfortunately the articles I've found aren't citing sources. Where's the actual research to back this up? What seems to be the actual amount of time visitors are taking to determine if a site is relevant to them or not?
2011/12/18
[ "https://ux.stackexchange.com/questions/15146", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/10261/" ]
[This study](http://dl.acm.org/citation.cfm?doid=1835449.1835513) covers dwell times in general, as well as for various categories (entertainment, financial, etc.) [Jakob Nielsen](http://www.useit.com/alertbox/page-abandonment-time.html) provides a good summary of the study, stressing the importance of surviving the first 10 seconds.
[This research by google](http://research.google.com/pubs/pub38315.html) says that even 17 ms is long enough to rate a site.
15,146
I've seen many statistics referenced along the lines of: > > Visitors decide whether to stay on or leave a website within [7, 2, 11, 0.8 seconds] of seeing it. > > > Typically the statistic is used when talking about the importance of appealing design, clear navigation, prominent display of relevant content, etc. Unfortunately the articles I've found aren't citing sources. Where's the actual research to back this up? What seems to be the actual amount of time visitors are taking to determine if a site is relevant to them or not?
2011/12/18
[ "https://ux.stackexchange.com/questions/15146", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/10261/" ]
There's a few studies that show sub-conscious reactions to the aesthetics of a page are made within 50ms, and that these reactions then impinge on the user's sense of usability, satisfaction, and the credibility of the site. * [*Attention web designers: You have 50 milliseconds to make a good first impression!*](http://anaandjelic.typepad.com/files/attention-web-designers-2.pdf) (PDF), Lindgaard G., Fernandes G. J., Dudek C. & Brown, J. Tested subjects by displaying web pages (saved to disk) for 500ms or 50ms for "visual appeal." The report concludes that "...visual appeal can be assessed within 50 msec suggesting that web designers have about 50msec to make a good first impression." *Behaviour and Information Technology*, 25:115 - 126 (2006). 1. [*Web Users Judge Sites in the Blink of an Eye*](http://www.nature.com/news/2006/060109/full/060109-13.html), Hopkin, M. Nature first reported on the study online on Jan. 13, 2006. 2. [*First impressions count for web*](http://news.bbc.co.uk/1/hi/technology/4616700.stm), BBC News, Jan. 16, 2006 Early news report of the "blink" study after the Nature story. "My colleagues believed it would be impossible to really see anything in less than 500 milliseconds." -- Gitte Lindgaard et al * [Carleton University, HOT Lab](http://www.carleton.ca/hot/) Human Oriented Technology Lab at Carleton University in Ottawa, Ontario Canada researches "interactive technologies for human endeavors with an emphasis on human computer interaction and a user-centred design approach." Dr. Gitte Lindgaard and her colleagues work at the HOT Lab. * ["A second chance for emotion"](http://books.google.com.au/books?id=A2s963AzymYC&pg=PA12&lpg=PA12&dq=A+second+chance+for+emotion+Damasio), Damasio, A. R. In *Cognitive Neuroscience of Emotions* edited by R. D. Lane and L. Nadel (New York: Oxford University Press, 2000). * [Stanford Guidelines for Web Credibility](http://credibility.stanford.edu/guidelines/), Fogg, B.J. Persuasive Technology Lab. 
Stanford University, 2002 (revised November 2003). In two surveys of over 2600 people Fogg found that a "clean, professional look" was cited by 46.1% of participants when evaluating sites for web credibility. Information Design/Structure was cited 28.5% of the time, while Information Focus was cited 25.1% of the time. While the factors varied for different types of sites, disguised advertising and popup ads, stale content, broken or uncredible links, difficult navigation, typographic errors, popup ads, and slow or unavailable sites were found to harm credibility the most. * [*Blink: the power of thinking without thinking*](http://www.gladwell.com/blink/), Gladwell, M. A fascinating book about how accurate our first impressions can be, especially among those with a lifetime of experience (art appraisers for example). (New York: Little, Brown, and Co., 2005) * [*Response Time: Eight Seconds, Plus or Minus Two*](http://www.websiteoptimization.com/speed/1/), King, A. In [*Speed Up Your Site: Web Site Optimization*](http://www.speedupyoursite.com), Indianapolis: New Riders Publishing, 2003. The consensus among HCI researchers is to deliver useful content within 1 to 2 seconds (navigation bar, search form) and your entire page within 8 to 10 seconds (8.6 seconds was the figure most cited). This [article](http://www.websiteoptimization.com/speed/tweak/blink/) has further reading. * [*Flow in Web Design*](http://www.websiteoptimization.com/speed/2/), King, A. In [*Speed Up Your Site: Web Site Optimization*](http://www.speedupyoursite.com), Indianapolis: New Riders Publishing, 2003. Fast, well-designed sites can actually create a flow state in users. In fact, according to a recent study, over 47% of users have experienced flow on the Web. 
* *The Emotional Brain: The Mysterious Underpinnings of Emotional Life*, [Ledoux, J.](http://en.wikipedia.org/wiki/Joseph_E._LeDoux) 1996, Simon & Schuster, 1998 Touchstone edition: ISBN 0-684-83659-9 * [*Understanding web browsing behaviors through Weibull analysis of dwell time*](http://doi.acm.org/10.1145/1835449.1835513), Chao Liu, Ryen W. White, and Susan Dumais, 2010. In Proceedings of the 33rd international ACM SIGIR conference. [Jakob Nielsen provides a good summary of the study](http://www.nngroup.com/articles/how-long-do-users-stay-on-web-pages/), stressing the importance of surviving the first 10 seconds. * [*The role of visual complexity and prototypicality regarding first impression of websites: Working towards understanding aesthetic judgments*](http://dx.doi.org/10.1016/j.ijhcs.2012.06.003), Tuch A.N., Presslaber E, Stoecklin M, Opwis K, Bargas-Avila J; 2012. This paper experimentally investigates the role of visual complexity (VC) and prototypicality (PT) as design factors of websites, shaping users’ first impressions by means of two studies. Results reveal that VC and PT affect participants’ aesthetics ratings within the first 50 ms of exposure. In a second study presentation times were shortened to 17, 33 and 50ms. Results suggest that VC and PT affect aesthetic perception even within 17ms, though the effect of PT is less pronounced than the one of VC. With increasing presentation time the effect of PT becomes as influential as the VC effect.
[This research by google](http://research.google.com/pubs/pub38315.html) says that even 17 ms is long enough to rate a site.
3,357,234
Does anyone know of any JQuery syntax validators or checkers? The braces are killing me.
2010/07/28
[ "https://Stackoverflow.com/questions/3357234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/173432/" ]
Check out JSLint for validating JS code: <http://www.jslint.com/> Warning: JSLint will hurt your feelings
It's JavaScript. Any IDE with a JavaScript syntax checker should do. NetBeans has one.
3,357,234
Does anyone know of any JQuery syntax validators or checkers? The braces are killing me.
2010/07/28
[ "https://Stackoverflow.com/questions/3357234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/173432/" ]
**JSLint** and a live demo site **JSFiddle** will give you the ability to make sure something is valid and runs properly. As said, most IDEs will validate your JavaScript. Eclipse, NetBeans, PHPEd and so on all have JavaScript checkers.
It's JavaScript. Any IDE with a JavaScript syntax checker should do. NetBeans has one.
3,357,234
Does anyone know of any JQuery syntax validators or checkers? The braces are killing me.
2010/07/28
[ "https://Stackoverflow.com/questions/3357234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/173432/" ]
Check out JSLint for validating JS code: <http://www.jslint.com/> Warning: JSLint will hurt your feelings
**JSLint** and a live demo site **JSFiddle** will give you the ability to make sure something is valid and runs properly. As said, most IDEs will validate your JavaScript. Eclipse, NetBeans, PHPEd and so on all have JavaScript checkers.
65,750,527
We all know that Spark uses RAM to store processed data, and both Spark and Hadoop use RAM for computation, which lets Spark access data at blazing speed. But if that is the one thing which makes a lot of difference (apart from Tungsten and Catalyst), why have we not just changed the storage routine in Hadoop (making it in-memory) instead of inventing a different tool (Apache Spark) altogether? Are there any other limitations that prevent Hadoop from implementing in-memory storage?
2021/01/16
[ "https://Stackoverflow.com/questions/65750527", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6950719/" ]
There are two main factors that determine the "choice" of using another platform altogether for faster computations (e.g. *Spark*) on top of Hadoop instead of reforming the way that the latter executes its applications. **1. Hadoop is more of an infrastructure than just a distributed computing library** And by no means do I imply that you can't use it as such to develop an application based on your needs by using the ways of the *MapReduce* paradigm. When we are talking about working in Hadoop, we are not just talking about a resource manager (YARN) or a distributed file system (HDFS) but we also have to include the *ecosystem* of products that are based or applicable to it (like [Flume](https://flume.apache.org/), [Pig](https://pig.apache.org/), [Hive](https://hive.apache.org/), and yes you guessed right Spark as well). Those modules act as extensions on top of Hadoop in order to make things easier and more flexible whenever the *Hadoop MapReduce* way of handling tasks and/or storing data on the disk gets troublesome. There's a big chance you actually used Spark to run an application using its beautiful and thorough libraries while retrieving your data from a directory in HDFS and you can get that Hadoop is just the base of the platform on which your application is running. Whatever you can put on top of it is your choice and preference exclusively based on your needs. **2. 
Main Memory is much more expensive and complicated** There's a big relief you can have when you are developing an application in Hadoop while knowing that all of the processed data will always be stored on the disk of your system/cluster, since you know that: a) you will easily be able to point out what's sticking out like a sore thumb by looking at the in-between and final process data by yourself and b) you can easily support applications that will probably need from 500GB to 10-20TB (if we are talking about a cluster, I guess), but it's an entirely different conversation if you can support heavy (and I mean *heavy*, like multiple GB of RAM) applications memory-wise. This has to do with the whole **scale-out** way of scaling resources in projects like Hadoop, where instead of building a few powerful nodes that can take huge chunks of data to process, it is preferred to just add more less-powerful nodes that are built with common hardware specifications in mind. This is also one of the reasons that Hadoop is in some ways still mistaken for a project that is centered around building small in-house data warehouses (but this is really a story for another time). 
--- However I'm kind of obliged to say at this point that Hadoop is slowly being dropped in usage due to the latest trends since: * projects like Spark become more independent and approachable/user-friendly in the use of more complex stuff like machine learning applications (you can read this small and neat article about it where some reality checks are being given over [here](https://hub.packtpub.com/why-is-hadoop-dying/)) * the infrastructure aspect of Hadoop is challenged by the use of Kubernetes containers instead of its YARN module, or Amazon's S3 which can actually replace HDFS altogether (but that doesn't mean that things are that bad about Hadoop just yet, you can take a taste of experimentation and the current state of things in this more broad and opinion-based article [here](https://medium.com/analytics-vidhya/is-hadoop-dying-or-re-inventing-3e4f144752d4)) In the end I believe that Hadoop will find its use for years onward, but everyone is also moving onward as well. The concepts of Hadoop are valuable to know and grasp, even if there might not even be any companies or enterprises that implement it, because you will never really know if it is going to be easier and more stable or not to develop something using Hadoop instead of a newer and slicker thing that everybody uses.
Of course there is always in-memory computation. You cannot add or reduce on disk. That aside: * Hadoop was created with the primary goal to perform data analysis from a disk in batch processing mode. Therefore, native Hadoop does not support real-time analytics and interactivity. Its advantage is that it can process larger amounts than Spark when push comes to shove, but working with it is cumbersome compared to Spark's APIs. * Spark was designed as a processing and analytics engine developed in Scala using as much in-memory processing as possible. Why? Because real-time analysis of information became the mode, along with the ability to do machine learning (which requires iterative processing) and interactive querying. * Spark relies/relied on Hadoop APIs and HDFS or cloud-equivalent data stores, and on YARN for fault tolerance if not using Spark Stand-Alone. Now with K8s and S3 et al it all becomes a bit blurred, but working with Spark is easier, though not for the faint-hearted. * Spark at the very least relies on HDFS for many aspects - e.g. fault tolerance, APIs to access HDFS. Simply, they do different things and continue to do so.
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Thought experiment: 1) Hidden from view there is some distribution of electric charges which you feel a force from and you measure the potential field they create. Tell me exactly the positions of all the charges. 2) Take some charges and arrange them. Tell me exactly the potential field they create. Only the second question has a unique answer. This is the non-uniqueness of [vector fields](http://en.wikipedia.org/wiki/Vector_potential). This situation may be in analogy with some non-deterministic algorithms you are considering. Further consider in [math limits](http://en.wikipedia.org/wiki/Limit_of_a_function) which do not exist because they have different answers depending on which direction you approach a discontinuity from.
I like the maze analogy. Let's think of the maze, for simplicity, as a binary tree, in which there is only one path out. Now you want to try a depth first search to find the correct way out of the maze. A non-deterministic computer would, at every branching point, duplicate/clone itself and run all further calculations in parallel. It is as if the person in the maze would duplicate/clone himself (like in the movie Prestige) at each branching point and send one copy of himself into the left sub-branch of the tree and the other copy of himself into the right sub-branch of the tree. The computers/persons who end up at a dead end die (terminate without an answer). Only one computer will survive (terminate with an answer), the one who gets out of the maze. The difference between backtracking and non-determinism is the following. In the case of backtracking there is only one computer alive at any given moment; he does the traditional maze solving trick, simply marking his path with a chalk, and when he gets to a dead end he simply backtracks to a branching point whose sub-branches he has not yet explored completely, just like in a depth first search. IN CONTRAST: a non-deterministic computer can clone himself at every branching point and check for the way out by running parallel searches in the sub-branches. So the backtracking algorithm simulates/emulates the cloning ability of the non-deterministic computer on a sequential/non-parallel/deterministic computer.
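The maze analogy above can be made concrete. Here is a minimal deterministic backtracking search over a binary-tree "maze" sketched in Python; the nested-dict maze representation is purely illustrative:

```python
def solve_maze(tree, path=()):
    """Depth-first backtracking through a binary-tree 'maze'.

    `tree` is a hypothetical nested dict: each node maps 'L'/'R' to a
    subtree, the string 'EXIT' marks the way out, and None is a dead end.
    A nondeterministic machine would explore both branches in parallel;
    this deterministic version tries one and backtracks on failure.
    """
    if tree == "EXIT":
        return list(path)          # survived: found the way out
    if tree is None:
        return None                # dead end: this "clone" dies
    for turn in ("L", "R"):        # choice point: try left, then right
        result = solve_maze(tree.get(turn), path + (turn,))
        if result is not None:
            return result          # propagate the surviving path
    return None                    # both branches failed: backtrack

maze = {"L": None, "R": {"L": {"L": None, "R": "EXIT"}, "R": None}}
print(solve_maze(maze))  # ['R', 'L', 'R']
```

Only one "computer" runs at a time: the `for` loop is the choice point, and returning `None` from a branch is the backtrack to the previous junction.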
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
It's not so much the case that backtracking makes an algorithm non-deterministic. Rather, you usually need backtracking to process a non-deterministic algorithm, since (by the definition of non-deterministic) you don't know which path to take at a particular time in your processing, but instead you must try several.
Thought experiment: 1) Hidden from view there is some distribution of electric charges which you feel a force from and you measure the potential field they create. Tell me exactly the positions of all the charges. 2) Take some charges and arrange them. Tell me exactly the potential field they create. Only the second question has a unique answer. This is the non-uniqueness of [vector fields](http://en.wikipedia.org/wiki/Vector_potential). This situation may be in analogy with some non-deterministic algorithms you are considering. Further consider in [math limits](http://en.wikipedia.org/wiki/Limit_of_a_function) which do not exist because they have different answers depending on which direction you approach a discontinuity from.
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
I'll just quote wikipedia: > > A nondeterministic programming language is a language which can specify, at certain points in the program (called "choice points"), various alternatives for program flow. Unlike an if-then statement, the method of choice between these alternatives is not directly specified by the programmer; the program must decide at runtime between the alternatives, via some general method applied to all choice points. A programmer specifies a limited number of alternatives, but the program must later choose between them. ("Choose" is, in fact, a typical name for the nondeterministic operator.) A hierarchy of choice points may be formed, with higher-level choices leading to branches that contain lower-level choices within them. > > > One method of choice is embodied in backtracking systems, in which some alternatives may "fail", causing the program to backtrack and try other alternatives. If all alternatives fail at a particular choice point, then an entire branch fails, and the program will backtrack further, to an older choice point. One complication is that, because any choice is tentative and may be remade, the system must be able to restore old program states by undoing side-effects caused by partially executing a branch that eventually failed. > > > Out of the [Nondeterministic Programming](http://en.wikipedia.org/wiki/Nondeterministic_programming) article.
I like the maze analogy. Let's think of the maze, for simplicity, as a binary tree, in which there is only one path out. Now you want to try a depth first search to find the correct way out of the maze. A non-deterministic computer would, at every branching point, duplicate/clone itself and run all further calculations in parallel. It is as if the person in the maze would duplicate/clone himself (like in the movie Prestige) at each branching point and send one copy of himself into the left sub-branch of the tree and the other copy of himself into the right sub-branch of the tree. The computers/persons who end up at a dead end die (terminate without an answer). Only one computer will survive (terminate with an answer), the one who gets out of the maze. The difference between backtracking and non-determinism is the following. In the case of backtracking there is only one computer alive at any given moment; he does the traditional maze solving trick, simply marking his path with a chalk, and when he gets to a dead end he simply backtracks to a branching point whose sub-branches he has not yet explored completely, just like in a depth first search. IN CONTRAST: a non-deterministic computer can clone himself at every branching point and check for the way out by running parallel searches in the sub-branches. So the backtracking algorithm simulates/emulates the cloning ability of the non-deterministic computer on a sequential/non-parallel/deterministic computer.
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
I wrote a maze runner that uses backtracking (of course), which I'll use as an example. You walk through the maze. When you reach a junction, you flip a coin to decide which route to follow. If you chose a dead end, trace back to the junction and take another route. If you tried them all, return to the previous junction. This algorithm is non-deterministic, not because of the backtracking, but because of the coin flipping. Now change the algorithm: when you reach a junction, always try the leftmost route you haven't tried yet first. If that leads to a dead end, return to the junction and again try the leftmost route you haven't tried yet. This algorithm is deterministic. There's no chance involved, it's predictable: you'll always follow the same route in the same maze.
If you allow backtracking, you allow infinite looping in your program, which makes it non-deterministic since the actual path taken may always include one more loop.
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Consider an algorithm for coloring a map of the world. No color can be used on adjacent countries. The algorithm arbitrarily starts at a country and colors it an arbitrary color. So it moves along, coloring countries, changing the color on each step until, "uh oh", two adjacent countries have the same color. Well, now we have to backtrack, and make a new color choice. Now we aren't making a choice as a nondeterministic algorithm would, that's not possible for our deterministic computers. Instead, we are simulating the nondeterministic algorithm with backtracking. A nondeterministic algorithm would have made the right choice for every country.
If you allow backtracking, you allow infinite looping in your program, which makes it non-deterministic since the actual path taken may always include one more loop.
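The map-coloring example above translates directly into a backtracking sketch. This Python version uses a hypothetical toy adjacency map, not real geography:

```python
def color_map(neighbors, colors, assignment=None):
    """Backtracking map coloring: simulate the 'right choice every time'
    of a nondeterministic machine by trying colors and undoing failures.

    `neighbors` maps each country to the set of adjacent countries.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(neighbors):
        return assignment                      # every country colored
    country = next(c for c in neighbors if c not in assignment)
    for color in colors:                       # choice point
        if all(assignment.get(n) != color for n in neighbors[country]):
            assignment[country] = color
            result = color_map(neighbors, colors, assignment)
            if result is not None:
                return result
            del assignment[country]            # "uh oh": backtrack
    return None                                # no color works here

toy_map = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(color_map(toy_map, ["red", "green", "blue"]) is not None)  # True
```

Each failed color choice is undone (`del assignment[country]`) before the next alternative is tried, which is exactly the state-restoring step that backtracking requires.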
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Consider an algorithm for coloring a map of the world. No color can be used on adjacent countries. The algorithm arbitrarily starts at a country and colors it an arbitrary color. So it moves along, coloring countries, changing the color on each step until, "uh oh", two adjacent countries have the same color. Well, now we have to backtrack, and make a new color choice. Now we aren't making a choice as a nondeterministic algorithm would, that's not possible for our deterministic computers. Instead, we are simulating the nondeterministic algorithm with backtracking. A nondeterministic algorithm would have made the right choice for every country.
Thought experiment: 1) Hidden from view there is some distribution of electric charges which you feel a force from and you measure the potential field they create. Tell me exactly the positions of all the charges. 2) Take some charges and arrange them. Tell me exactly the potential field they create. Only the second question has a unique answer. This is the non-uniqueness of [vector fields](http://en.wikipedia.org/wiki/Vector_potential). This situation may be in analogy with some non-deterministic algorithms you are considering. Further consider in [math limits](http://en.wikipedia.org/wiki/Limit_of_a_function) which do not exist because they have different answers depending on which direction you approach a discontinuity from.
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
It's not so much the case that backtracking makes an algorithm non-deterministic. Rather, you usually need backtracking to process a non-deterministic algorithm, since (by the definition of non-deterministic) you don't know which path to take at a particular time in your processing, but instead you must try several.
I'll just quote wikipedia: > > A nondeterministic programming language is a language which can specify, at certain points in the program (called "choice points"), various alternatives for program flow. Unlike an if-then statement, the method of choice between these alternatives is not directly specified by the programmer; the program must decide at runtime between the alternatives, via some general method applied to all choice points. A programmer specifies a limited number of alternatives, but the program must later choose between them. ("Choose" is, in fact, a typical name for the nondeterministic operator.) A hierarchy of choice points may be formed, with higher-level choices leading to branches that contain lower-level choices within them. > > > One method of choice is embodied in backtracking systems, in which some alternatives may "fail", causing the program to backtrack and try other alternatives. If all alternatives fail at a particular choice point, then an entire branch fails, and the program will backtrack further, to an older choice point. One complication is that, because any choice is tentative and may be remade, the system must be able to restore old program states by undoing side-effects caused by partially executing a branch that eventually failed. > > > Out of the [Nondeterministic Programming](http://en.wikipedia.org/wiki/Nondeterministic_programming) article.
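The "choice points" in the quote can be pictured deterministically as enumerating every combination of alternatives and keeping the first one where nothing fails. This is a toy sketch (the constraint and the alternatives are made up), not real nondeterminism, but trying combinations and "failing" on to the next one is backtracking in disguise:

```python
from itertools import product

# Each choice point offers several alternatives; a nondeterministic machine
# would pick the right one directly at each point. Deterministically, we
# enumerate combinations and discard the ones where some choice "fails".
def solve(choice_points, ok):
    for combo in product(*choice_points):
        if ok(combo):
            return combo  # first combination where no choice point failed
    return None  # the entire search space failed

# Toy constraint: choose x from {1,2,3} and y from {4,5,6} with x + y == 8.
print(solve([[1, 2, 3], [4, 5, 6]], lambda c: c[0] + c[1] == 8))  # → (2, 6)
```

Note the cost: with *k* choice points of *n* alternatives each, the deterministic version may inspect all n^k combinations before succeeding or giving up.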
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Consider an algorithm for coloring a map of the world. No color can be used on adjacent countries. The algorithm arbitrarily starts at a country and colors it an arbitrary color. So it moves along, coloring countries, changing the color on each step until, "uh oh", two adjacent countries have the same color. Well, now we have to backtrack, and make a new color choice. Now we aren't making a choice as a nondeterministic algorithm would, that's not possible for our deterministic computers. Instead, we are simulating the nondeterministic algorithm with backtracking. A nondeterministic algorithm would have made the right choice for every country.
The running time of backtracking on a deterministic computer is factorial, i.e. it is in O(n!). Where a non-deterministic computer could instantly guess correctly in each step, a deterministic computer has to try all possible combinations of choices. Since it is impossible to build a non-deterministic computer, what your professor probably meant is the following: A [provenly hard](http://en.wikipedia.org/wiki/NP-complete) problem in the complexity class NP (all problems that a non-deterministic computer can solve efficiently by always guessing correctly) cannot be solved more efficiently on real computers than by backtracking. The above statement is true, if the complexity classes P (all problems that a deterministic computer can solve efficiently) and NP are not the same. This is the famous P vs. NP problem. The Clay Mathematics Institute has offered a $1 Million prize for its solution, but the problem has resisted proof for many years. However, most researchers believe that P is not equal to NP. A simple way to sum it up would be: Most interesting problems a non-deterministic computer could solve efficiently by always guessing correctly, are so hard that a deterministic computer would probably have to try all possible combinations of choices, i.e. use backtracking.
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
It's not so much the case that backtracking makes an algorithm non-deterministic. Rather, you usually need backtracking to process a non-deterministic algorithm, since (by the definition of non-deterministic) you don't know which path to take at a particular time in your processing, but instead you must try several.
I wrote a maze runner that uses backtracking (of course), which I'll use as an example. You walk through the maze. When you reach a junction, you flip a coin to decide which route to follow. If you chose a dead end, trace back to the junction and take another route. If you tried them all, return to the previous junction. This algorithm is non-deterministic, non because of the backtracking, but because of the coin flipping. Now change the algorithm: when you reach a junction, always try the leftmost route you haven't tried yet first. If that leads to a dead end, return to the junction and again try the leftmost route you haven't tried yet. This algorithm is deterministic. There's no chance involved, it's predictable: you'll always follow the same route in the same maze.
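The deterministic "leftmost route first" rule from the answer can be sketched as a depth-first search over a grid. This is an illustrative sketch, not the answerer's maze runner; the grid layout, start, and goal are invented for the example:

```python
# Deterministic maze runner: at each junction, try neighbors in a fixed
# order (left, up, right, down) and backtrack from dead ends. There is no
# coin flip, so the same maze always yields the same route.
def solve_maze(maze, pos, goal, path=None, seen=None):
    path = [pos] if path is None else path
    seen = {pos} if seen is None else seen
    if pos == goal:
        return path
    r, c = pos
    for dr, dc in [(0, -1), (-1, 0), (0, 1), (1, 0)]:  # fixed, deterministic order
        nxt = (r + dr, c + dc)
        if (0 <= nxt[0] < len(maze) and 0 <= nxt[1] < len(maze[0])
                and maze[nxt[0]][nxt[1]] == 0 and nxt not in seen):
            seen.add(nxt)
            result = solve_maze(maze, nxt, goal, path + [nxt], seen)
            if result is not None:
                return result
    return None  # dead end: returning None is the backtracking step

grid = [[0, 0, 1],   # 0 = open cell, 1 = wall
        [1, 0, 1],
        [1, 0, 0]]
print(solve_maze(grid, (0, 0), (2, 2)))  # → [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```

Replacing the fixed neighbor order with `random.shuffle` would give the coin-flipping variant: same backtracking machinery, but a non-deterministic route.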
500,280
So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving too much explanation into why that is. I *think* I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?
2009/02/01
[ "https://Stackoverflow.com/questions/500280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Thought experiment: 1) Hidden from view there is some distribution of electric charges which you feel a force from, and you measure the potential field they create. Tell me exactly the positions of all the charges. 2) Take some charges and arrange them. Tell me exactly the potential field they create. Only the second question has a unique answer. This is the non-uniqueness of [vector fields](http://en.wikipedia.org/wiki/Vector_potential). This situation may be analogous to some of the non-deterministic algorithms you are considering. Further, consider [limits in math](http://en.wikipedia.org/wiki/Limit_of_a_function) that do not exist because they have different answers depending on the direction from which you approach a discontinuity.
If you allow backtracking, you allow infinite looping in your program, which makes it non-deterministic: the actual path taken may always include one more loop.
314,113
I try to find bare soil areas. Those which are bare over the whole year and those which are fresh harvest and remain bare over a certain time. I try to do this with sentinel-2 data. But the Problem is that the cloud cover does not allow a sufficient detection. What kind of Satellite Products and/or Indices can help me out with this problem. Can anyone give me a hint in which direction I have to look?
2019/03/01
[ "https://gis.stackexchange.com/questions/314113", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/109615/" ]
I would recommend investigating [Synthetic Aperture Radar (SAR)](https://en.wikipedia.org/wiki/Synthetic-aperture_radar) data and soil indices such as Normalized Radar Backscatter soil Moisture Index (NBMI). Radar has the benefit of being able to penetrate clouds, unlike spectral sensors. The following are some resources to get you started. [Shoshany, M., Svoray, T., Curran, P. J., Foody, G. M., & Perevolotsky, A. (2000). The relationship between ERS-2 SAR backscatter and soil moisture: generalization from a humid to semi-arid transect. International Journal of Remote Sensing, 21(11), 2337-2343.](https://pdfs.semanticscholar.org/3334/956ee622d27b15fa25bba08064d66c0a926e.pdf) [Moran, M. S., Peters-Lidard, C. D., Watts, J. M., & McElroy, S. (2004). Estimating soil moisture at the watershed scale with satellite-based radar and land surface models. Canadian journal of remote sensing, 30(5), 805-826.](http://www.tucson.ars.ag.gov/unit/publications/PDFfiles/1528.pdf) [Barrett, B., Whelan, P., & Dwyer, E. (2013). Detecting changes in surface soil moisture content using differential SAR interferometry. International journal of remote sensing, 34(20), 7091-7112.](http://eprints.gla.ac.uk/109103/1/109103.pdf)
Sentinel-2 is best for this, stick at it. You need to use multiple scenes and periods to get around the cloud cover. Landsat, Spot, Aster still all have clouds. Basically, clouds exist so you must overcome them.
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
Here's a concrete example from a few days ago. My Macbook was connected directly to the power supply without a UPS, with an external display attached. When I started the Macbook, both displays were blank; I could only hear the noise of my hard disks. The solution was to take my Mac off the direct supply - both displays came alive again. It seems the voltage was too high at that moment, which caused the problem. I now have my Macbook on a UPS - both displays work. Other problems I have had: 1. My external hard disk broke immediately when I connected it to the wall: it corrupts data, similarly to your case. 2. Three of my WiFi stations cannot get connections - they are all about one month old. 3. My SCX-4300 printer seems to be broken after two weeks of use. 4. A power adapter in my PC broke after a year of use. 5. Two of my UPSs are now broken after a year of use. 6. My black Macbook's hard disk needed replacing after one year. 7. One of my Al Macbooks sometimes does not start when it is not on a UPS. **Conclusion:** I recommend you get a UPS for your home equipment.
Not only is it useful and essential to protect your home computing equipment, but also if you have a home theater system. Basically, you'd rather the lower cost UPS buy it in the case of something happening.
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
Here's a concrete example from a few days ago. My Macbook was connected directly to the power supply without a UPS, with an external display attached. When I started the Macbook, both displays were blank; I could only hear the noise of my hard disks. The solution was to take my Mac off the direct supply - both displays came alive again. It seems the voltage was too high at that moment, which caused the problem. I now have my Macbook on a UPS - both displays work. Other problems I have had: 1. My external hard disk broke immediately when I connected it to the wall: it corrupts data, similarly to your case. 2. Three of my WiFi stations cannot get connections - they are all about one month old. 3. My SCX-4300 printer seems to be broken after two weeks of use. 4. A power adapter in my PC broke after a year of use. 5. Two of my UPSs are now broken after a year of use. 6. My black Macbook's hard disk needed replacing after one year. 7. One of my Al Macbooks sometimes does not start when it is not on a UPS. **Conclusion:** I recommend you get a UPS for your home equipment.
What I want is a pile of tiny UPS devices. They would only have to run my stuff for about 30 seconds as the bulk of power outages don't seem to even last that long. Heck, from the same logic, I'd LOVE to see someone start making supercapacitor based replacements for laptop batteries so I can avoid cooking the real thing while plugged in at work.
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
Yes! And not just for computers. My TiVo, cable modem, router, and Xbox are all on a UPS.
I don't currently use one, but have in the past. I'm considering doing so again, simply because despite living close to the same power station as you, Jon, we seem to get a surprisingly large number of power cuts of varying lengths. Frankly, it's starting to be too much trouble trying to get the network back up again after each one, particularly if I'm away on business. My kids' squid proxy depends on it :)
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
I didn't consider buying one, as in many years I haven't experienced any power outage. Until on day a thunder storm took out the PSU of my PC and the PSU of the console.
I have been using one for over 4 years now (batteries should be up for replacement soon, so knock on wood that there's no outage). It has saved my bacon at least once, during an unexpected power outage. I was able to save everything and shut down without a hitch. I should mention that I live next to a switching station in my area (which happens to power the rail yard for our light rail system on our side of town). The switching station had "caught fire" and as a result, power was out everywhere for 10 blocks in all directions, along with the rail yard next to it. Preceding the event was a brown-out that lasted for several minutes. Eventually the fire was subdued (a mineral oil tank was the cause of the blaze) and service was restored. My computer would have been reduced to so much expensive junk during that wonderful brownout. It would be one thing to be inconvenienced by losing some games or something, but it's another when you make your living with it. "Personal" UPS units lack many of the bells and whistles of the big ones, but they *get the job done*.
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
Short answer, yes. Long answer, read this: <http://www.codinghorror.com/blog/archives/000632.html>
I don't currently use one, but have in the past. I'm considering doing so again, simply because despite living close to the same power station as you, Jon, we seem to get a surprisingly large number of power cuts of varying lengths. Frankly, it's starting to be too much trouble trying to get the network back up again after each one, particularly if I'm away on business. My kids' squid proxy depends on it :)
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
Short answer, yes. Long answer, read this: <http://www.codinghorror.com/blog/archives/000632.html>
Surge protectors definitely don't protect you from everything, especially if your house wiring has problems. I've discovered that an ordinary surge protector plugged into an outlet with an open ground can actually make surges worse than equipment plugged straight into the outlet (the result was several fried xbox 360 power supplies).
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
Where I work we have had issues with sudden power cuts and because some of our developers run virtual machines off of external hard drives, a UPS is a must. I can highly recommend the [APC Back-UPS ES 700](http://www.apc.com/resource/include/techspec_index.cfm?base_sku=BE700-FR) (which I believe I found [via Scott Hanselman's blog](http://www.hanselman.com/blog/VistaBOOTMGRIsMissingAndTheImportanceOfTheUPS.aspx)). We have a few at work and I also have a couple at home. They come with 8 sockets total, with 4 being battery backed. It also comes with a socket to plug your phone line into and you can connect to it via USB, so it can detect for power failures and perform an automatic shut down. Although I must admit the software is ropey.
I used to run a media server without a UPS and was always a bit nervous of power failures. So I stumped up the cash, bought a cheap UPS, and installed software that would perform an auto shutdown when the power went out for more than a few minutes. My mind was put at rest. But, more importantly, all of the odd problems I'd get - random crashes and reboots - just stopped. Now that may be down to me having fairly cheap components in the server back then (I've done a number of upgrades / new servers since), but the cleaner, smoother power coming in to the server certainly helped IMO. I now have my 2 servers, 2 network switches, router PC and TV signal distribution amp all running from my much beefier UPS. The last time I tested it I got 40 minutes runtime before it got down to one light (but that was with one server). Anyway, the net effect of all that is that if the power goes down for 20-30 minutes my TV recordings aren't interrupted. The server performs TV/media duties. Unless of course the local TV transmitter also has no power (but I'd hope they'd have some kind of backup too). I keep meaning to find myself another cheap UPS to run my home cinema projector from. I'm not entirely sure how my £200 lamp will behave being suddenly shut off, with no cooling fan to bring the temperature down gradually. So, if a UPS were installed, I could just power the projector down and the UPS would effectively run the cooling fan until the temperature of the lamp dropped.
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
I didn't consider buying one, as in many years I haven't experienced any power outage. Until on day a thunder storm took out the PSU of my PC and the PSU of the console.
What I want is a pile of tiny UPS devices. They would only have to run my stuff for about 30 seconds as the bulk of power outages don't seem to even last that long. Heck, from the same logic, I'd LOVE to see someone start making supercapacitor based replacements for laptop batteries so I can avoid cooking the real thing while plugged in at work.
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
I've had a lot of problems in the past from power outages destroying equipment that was on surge protection. Now I consider UPS devices to be essential. In this day and age, there is no reason why I should have to suffer without power. All computers in my house are on UPS, as well as all network equipment. The lights and appliances may be off, but I'm still downloading from the Internet.
What I want is a pile of tiny UPS devices. They would only have to run my stuff for about 30 seconds as the bulk of power outages don't seem to even last that long. Heck, from the same logic, I'd LOVE to see someone start making supercapacitor based replacements for laptop batteries so I can avoid cooking the real thing while plugged in at work.
3,139
Over the years, I've had to throw away a quite a few bits of computing equipment (and the like): * Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc) * Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help) * One external hard disk still *claiming* to function, but corrupting data * One hard disk as part of a NAS raid array "going bad" (as far as the NAS was concerned) (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually *running* during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.
2009/05/02
[ "https://serverfault.com/questions/3139", "https://serverfault.com", "https://serverfault.com/users/173/" ]
You might want to distinguish between filtered power and uninterrupted power. Uninterrupted power is probably a good idea for things that you want to shutdown gracefully. Depending on your needs you might only need enough time for the shutdown to finish, resulting in a much cheaper UPS. Other devices may not need UPS at all, but only filtered power. Typically, it's the line spike that takes out power supplies and electronics, not the sudden absence of power -- it has to handle the switch being turned off in any event, which may be indistinguishable from pulling the power plug. In my server room (back in the day), I had all my servers and disk drives hooked up to a UPS -- and a mighty big one at that. Printers, terminals (which could be moved to the UPS in an emergency), and other stuff that didn't need to up for the system to shutdown nicely, were on filtered power. Generally, I'd buy high quality filtered power strips (rack-mounted) for this purpose. Cheap power filters probably aren't worth the price. I haven't priced UPSes recently. Depending on the price difference between the UPS and the high quality filtering power strip, you might want to get the UPS anyway. Just be sure that the UPS is always filtering the power and won't let the spike happen, then try to pick up before the equipment notices that the power is gone. If it's truly important you want an [inline UPS rather than a stand-by UPS](http://www.pcguide.com/ref/power/ext/ups/types.htm), but you'll pay extra for it.
I have been using one for over 4 years now (batteries should be up for replacement soon, so knock on wood that there's no outage). It has saved my bacon at least once, during an unexpected power outage. I was able to save everything and shut down without a hitch. I should mention that I live next to a switching station in my area (which happens to power the rail yard for our light rail system on our side of town). The switching station had "caught fire" and as a result, power was out everywhere for 10 blocks in all directions, along with the rail yard next to it. Preceding the event was a brown-out that lasted for several minutes. Eventually the fire was subdued (a mineral oil tank was the cause of the blaze) and service was restored. My computer would have been reduced to so much expensive junk during that wonderful brownout. It would be one thing to be inconvenienced by losing some games or something, but it's another when you make your living with it. "Personal" UPS units lack many of the bells and whistles of the big ones, but they *get the job done*.
73,736
An excerpt from [this](http://www.chem4kids.com/files/matter_gas.html) page: > > Gases can fill a container of any size or shape. It doesn't even matter how big the container is. The molecules still spread out to fill the whole space equally. That is one of their physical characteristics. > > > If a fixed quantity of gas is let out in a limited space, will it spread out equally and maintain a fixed gas distribution throughout the space? Or, does it go where the gravitational pull is maximum? And what factors affect the distribution of gas in a given area?
2013/08/10
[ "https://physics.stackexchange.com/questions/73736", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/28097/" ]
Of course gravity affects distribution. That is why we have 'layers' of atmosphere surrounding the earth. The lower layers are very dense, and the density decreases as we go outwards, to zero in space. When a limited amount of gas is released in a container, it will fill up the space *almost* equally; the lower regions will be slightly denser (depending on how much gas is present and how tall the container is). If the container's height were comparable to the radius of the earth, then because of the mere weight of the gases on top, we'd have more pressure at the bottom - which is literally what's happened in the atmosphere! In the case of small containers - yes, even a ship could be considered small - the effect would be negligible, but in theory, the lower portions WOULD be denser, even by a really tiny amount. Even if you were to do it in space, the container in which you performed the experiment has mass, and consequently its own gravitational field - and this would affect the distribution too. However, if you completely remove gravity, then - yes, gas spreads out exactly evenly everywhere. --- You've quoted from a kids' chemistry site. They obviously won't get into the depths of it - it's just basics, perhaps that's why they haven't mentioned it. See, they even mentioned that vapour and gases mean the same thing - although there is a difference. Vapour is definitely a gas and in the gaseous state, but not all gas is vapour!!! ;)
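The height dependence described above can be made quantitative. For an isothermal ideal gas of molar mass $M$ in a uniform gravitational field, balancing the pressure gradient against the weight of each gas layer gives the standard barometric formula (a textbook result, not from the original answer):

```latex
% Hydrostatic balance: dP = -\rho g\,dz, with \rho = PM/(RT) for an ideal gas
\frac{dP}{dz} = -\frac{M g}{R T}\,P
\quad\Longrightarrow\quad
P(z) = P_0\, e^{-M g z /(R T)}
```

For air at room temperature the scale height $RT/(Mg)$ is roughly 8 km, which is why the effect is utterly negligible over the height of any ordinary container but dominates the structure of the atmosphere.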
Well, we know from experience that gas will eventually cluster together and form gravitational wells. Just looking at the sky shows proof of this (in the form of stars)\*. Yet that site's description is still a common description of gases. How can this be? The difference stems from the model used. The site talks about statistical/classical thermodynamics: it describes an ideal gas, a gas assumed to consist of hard particles which do not influence each other in any way. So in this model there simply can't be any "gravitational" effect. Gravity is an effect where the molecules do act on each other, and as such the substance can no longer be described using the equations for an ideal gas. Now, to describe this (and other) effects there are a lot of other models. However, you should understand that for a lot of applications the classical thermodynamic laws are still valid (within error bounds). \*The exact process of star formation is non-trivial though; simple Newtonian mechanics/gravity is too limited to describe it. This is an active field of study.
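The contrast this answer draws can be seen directly in the equations of state: the ideal-gas law contains no interaction term at all, while a corrected model such as the van der Waals equation (a standard example, not mentioned in the original answer) explicitly adds one:

```latex
% Ideal gas: no interaction term, hence no clustering or gravity
P V = n R T
% van der Waals: a corrects for attraction, b for finite particle volume
\left(P + \frac{a n^2}{V^2}\right)\left(V - n b\right) = n R T
```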
895,432
I have a Dell Precision T3610 workstation that currently has two 3.5" SATA HDDs installed, and I'd like to add two more 2.5" disks in a hot-swap bay. I have two free SATA ports, but I'm struggling to find a way to provide power for the additional drives. The only spare power connector on the PSU is an 8-pin PCIe block designed to provide power for a second graphics card. The machine has a 685W power supply - my intuition is that if the PSU is designed to provide enough power for a second graphics card then it should probably be able to handle a couple of extra 2.5" HDDs (which should only draw about 3-4W apiece), but I admit that I have not done all of the math to confirm this. I've come across various SATA-to-PCIe power adaptors (e.g. [this](http://www.amazon.co.uk/Startech-Power-6-Pin-Express-Adapter/dp/B007Y91B80)), but so far I haven't found the converse (i.e. male PCIe to female SATA). I suspect that there is a good reason for this - my understanding is that [PCIe power connectors provide ground and +12V pins](http://www.playtool.com/pages/psuconnectors/connectors.html#pciexpress8), whereas [SATA also requires a +5V pin (and possibly +3.3V as well for old drives)](http://www.playtool.com/pages/psuconnectors/connectors.html#sata). Can anyone confirm whether my suspicions are correct, and it is impossible to power a SATA drive from a PCIe connector? Another possibility I'm considering is whether I can use a couple of [y-adaptors](http://www.amazon.co.uk/StarTech-com-Female-Latching-Splitter-Adapter/dp/B00BBDL17G/ref=sr_1_1?ie=UTF8&qid=1427647092&sr=8-1&keywords=sata+y-adaptor) to split the power supply to my two existing 3.5" HDDs in order to provide power for the two additional drives. Does this sound like a reasonable approach to try?
2015/03/29
[ "https://superuser.com/questions/895432", "https://superuser.com", "https://superuser.com/users/156433/" ]
The y-adapter is reasonable, and something you can calculate beforehand. The HDD manufacturer should provide a product specification listing the maximum amperage draw for the rails it uses ([example](http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-800026.pdf)). A power connector supplies a total of 4.5 A for each voltage, so knowing if you have enough power is something you can check. Usually 3.5" drives use the 12V rail or both the 12V and 5V rails, and 2.5" drives just the 5V rail. The 3.5" drives will draw more current, and some drives can draw more than 2.25 A, so it may be better to put them on different y-cables. Using a y-cable for a 3.5" and a 2.5" drive is probably nothing to worry about at all.
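The check described above is just a per-rail sum against a budget, which can be sketched in a few lines. The 4.5 A budget comes from the answer; the drive currents below are illustrative assumptions, so substitute the figures from your drives' spec sheets:

```python
# Sanity-check whether drives sharing one power connector (via y-cables)
# stay within a per-rail current budget. Drive figures are illustrative.

RAIL_BUDGET_A = 4.5  # assumed per-voltage budget for one connector chain

def within_budget(draws_a, budget_a=RAIL_BUDGET_A):
    """Return True if the summed current draw on one rail fits the budget."""
    return sum(draws_a) <= budget_a

# Example 12 V rail: one 3.5" drive (~2.0 A peak) plus one 2.5" drive
# (~0 A on 12 V - it draws from the 5 V rail) on the same y-cable.
twelve_volt_draws = [2.0, 0.0]
print(within_budget(twelve_volt_draws))  # True: the y-cable split looks safe
```

Repeat the same sum for the 5 V rail; it's the two-3.5"-drives-on-one-cable case (each potentially above 2.25 A) that can blow the budget.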
It doesn't exist, at least not as a simple cable. PCIe power connectors only provide 12V. Drives nearly always need 5V and sometimes also need 12V.
377,218
I've worked on a couple of projects where we spent a great deal of time discussing and writing elaborate coding standards covering everything from syntax layout to actual best practices. However, I have also found that these are rarely followed to the full extent. Many developers seem to hesitate to reject a code review based on coding standard violations alone. I.e. violations are committed to the repository on a regular basis. My questions are: Do you have coding standards? What do they cover? Are they followed by everyone? And what do you do (if anything) to make sure everybody is following the standards? I'm aware that there is a similar question [here](https://stackoverflow.com/questions/167575/should-a-project-manager-enforce-coding-standards), but my concern is not so much how you could do it, but how you are actually going about it and what are the perceived benefits?
2008/12/18
[ "https://Stackoverflow.com/questions/377218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/38206/" ]
I've worked in places with barely-followed coding practices, and others where they're close to being enforced - or at least easily checked. A few suggestions: * The most important thing is to get buy-in to the idea that consistency trumps your personal preferred style. There should be discussion of the coding standard both before and after it's instituted, but no-one should be allowed to just opt out of it. * Code reviews should be mandatory, with the checkin comment including the username of the reviewer. If you're using a suitably powerful SCM, consider not allowing checkins which don't have a valid reviewer name. * There should be a document which *everyone* knows about laying out the coding standards. With enough detail, you shouldn't get too much in the way of arguments. * Where possible, automate checking of the conventions (via Lint, CheckStyle, FXCop etc) so it's easy for both the committer and the reviewer to get a quick check of things like ordering import/using directives, whitespace etc. The benefits are: * Primarily consistency - if you make it so that anyone can feel "at home" in any part of the codebase at any time, it gives you more flexibility. * Spreading best practice - if you ban public fields, mutable structs etc then no-one can accidentally plant a time bomb in your code. (At least, not a time bomb that's covered by the standard. There's no coding standard for perfect code, of course :) EDIT: I should point out that coding standards are probably *most* important when working in large companies. I believe they help even in small companies, but there's probably less need of process around the standard at that point. It helps when all the developers know each other personally and are all co-located.
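The "automate checking of the conventions" point above can start very small, even before adopting Lint, CheckStyle, or FXCop. As a purely hypothetical sketch of the idea (not a replacement for those tools), a pre-commit script might flag the mechanical violations reviewers shouldn't waste time on:

```python
# Minimal sketch of an automated style check of the kind a pre-commit hook
# might run. Real projects should prefer established tools (Lint,
# CheckStyle, FXCop); this only illustrates the principle.

def check_style(source, max_len=100):
    """Return a list of (line_number, message) style violations."""
    violations = []
    for n, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            violations.append((n, f"line exceeds {max_len} characters"))
        if line != line.rstrip():
            violations.append((n, "trailing whitespace"))
    return violations

code = "int x = 1;   \n" + "y" * 120 + "\n"
for lineno, msg in check_style(code):
    print(f"{lineno}: {msg}")
```

Having the machine nag about whitespace and line length keeps human code review focused on design and correctness, which is where the consistency benefit actually pays off.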
I think the best way to look at coding standards is in terms of what you hope to achieve by applying them, and the damage that they can cause if misapplied. For example, I see the following as quite good: * Document and provide unit tests that illustrate all typical scenarios for usage of a given interface to a given routine or module. * Where possible use the following container classes, libraries, etc... * Use asserts to validate incoming parameters and results returned (C & C++) * Minimise the scope of all variables * Access object members through methods * Use new and delete over malloc and free * Use the prescribed naming conventions I don't think that enforcing style beyond this is a great idea, as different programmers are efficient using differing styles. Forcing programmers to change style can be counterproductive and lead to lost time and reduced quality. Standards should be kept short and easy to understand.
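The "use asserts to validate incoming parameters and results returned" guideline above can be shown in a few lines (here in Python rather than the C/C++ the list mentions; the function itself is a made-up example):

```python
# Illustrating the assert-preconditions-and-postconditions guideline.
# The function is a made-up example, not taken from the original answer.

def average(values):
    """Mean of a non-empty sequence of numbers."""
    assert len(values) > 0, "precondition: values must be non-empty"
    result = sum(values) / len(values)
    assert min(values) <= result <= max(values), "postcondition violated"
    return result

print(average([2, 4, 6]))  # 4.0
```

The asserts document the contract at the point of use and catch violations in debug builds, without the runtime cost of full validation in release builds (where such checks are typically compiled out).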