qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
1,312,323 | I shrank my Disk 0 and tried to expand my Disk 1, but the unallocated space doesn't show up in the extend option. How can I merge different disks?
[](https://i.stack.imgur.com/8sW17.png) | 2018/04/09 | [
"https://superuser.com/questions/1312323",
"https://superuser.com",
"https://superuser.com/users/884613/"
] | Technically, you *can* do what you want, using [Dynamic Disks](https://msdn.microsoft.com/en-us/library/windows/desktop/aa363785(v=vs.85).aspx#dynamic_disks). Dynamic Disks are much like LVM on Linux, except it’s a proprietary Microsoft technology.
Using Dynamic Disks will *seriously* impair your ability to use arbitrary software for backup and partition management. Converting a drive is also non-reversible (at least not officially) without removing *all* data from a drive.
***As such I must very much advise against using Dynamic Disks.***
If you’re sure you want to proceed, just right-click on the drive area (to the left) in Disk Management and select the conversion option. Convert both drives. You can then extend your volume `D:` with the free space on drive 0.
Again: Make very sure you want this. Going back is a major PITA. | You cannot merge two different disks. The only options you have here are to make a new partition for the unallocated space OR to expand your C:\ to use the whole space. If you wanted to combine both disks, you would have to wipe your whole machine and reinstall the disks in a `RAID 0` configuration |
1,683 | Can we ask programming questions about PyTorch, TensorFlow, or other deep learning frameworks? | 2020/07/08 | [
"https://ai.meta.stackexchange.com/questions/1683",
"https://ai.meta.stackexchange.com",
"https://ai.meta.stackexchange.com/users/30725/"
] | No.
General programming issues are off-topic here. For example, if you have an exception/bug/error in your source code or you don't know how to use a certain library/API, then that's off-topic. If you have this type of question, the most appropriate site is probably Stack Overflow (or Data Science SE).
However, if you want to understand how a certain concept/algorithm/model is implemented, then you can ask questions about that, because that's more of a conceptual question. [Here is an example of such a question](https://ai.stackexchange.com/q/20803/2444). (But please try to ask a specific and clear question that explains what you don't understand, to make the answerer's job easier.)
[Our on-topic page](https://ai.stackexchange.com/help/on-topic) actually states these things explicitly, so I suggest that you read or at least skim through our on-topic page again. | I'm personally in favor of this, but the overall consensus is that we should focus on theory, as opposed to implementation.
(We haven't historically had good responses to programming or implementation questions, so the argument for leaving those to Stack Overflow and other sites is strong.) |
362,614 | I'm having trouble using a piece of code that belongs to a GitHub repository. The problem does not seem to be caused by a bug, but rather by me not understanding something about the code (although it could well be a bug). I did raise an issue in the repository, but it seems not to be maintained anymore, as virtually all issues raised in the past few months have been left unanswered.
Is it appropriate to ask for help on Stack Overflow, at least so that I know whether I missed something or that it actually is a bug? | 2018/01/28 | [
"https://meta.stackoverflow.com/questions/362614",
"https://meta.stackoverflow.com",
"https://meta.stackoverflow.com/users/4386370/"
] | We don't really care if the code came from GitHub or any other source.
As long as your question fully describes the problem, demonstrates understanding and has all the information we need to understand it, then post it, regardless of its origin. | As per @Maroun's answer + comments, there is nothing wrong with asking a Question about an open source library, provided that the Question has enough detail to be answerable. An MCVE is advisable, but the point about licensing is a red herring. (An MCVE doesn't mean you need to copy the library into your question. An MCVE could say "download the library and compile against it" for example.)
But there are two other issues:
1. If the library is virtually unmaintained, this suggests that the community of people using it is small or not the "contributing" type1. That would suggest that your question is unlikely to get answers. Especially if the problems you are asking about are deep or obscure.
2. Assuming that you do decide that you have found a bug, where do you go from there? Submitting an issue is unlikely to get you anywhere. So do you clone the repo and fix the problem yourself? (Is that sustainable? Does it need to be sustainable?)
I would suggest a different course of action. Look for an alternative to the library.
---
1 - If this were not true, you would expect to see a forest of forks on GitHub, and over time one would emerge as the de facto replacement for the unmaintained original. |
8,281,714 | I am desperate right now and I really need some help. I want an image to slide in from the right edge of the screen.
Initially, the image exists outside the screen area. However, based on an event, I want it to slide in.
Does anyone know how this can be done? I read in an online tutorial (<http://developerlife.com/tutorials/?p=343>) that if "the animation effects extend beyond the screen region then they are clipped outside those boundaries".
Hence, according to this tutorial, this is not possible. However, remember the android 2.2 lock screens? The two images (for unlocking and putting it on silence) used to slide in from left and right side of the screen respectively.
I can make my image slide in from the left side of the screen but not from the right. Any ideas on how I can get this done?
If you want to see my code, I can put it up. | 2011/11/26 | [
"https://Stackoverflow.com/questions/8281714",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1034512/"
] | This is actually pretty simple. In your layout, position the ImageView where you want it at the end of the animation and set its visibility to either INVISIBLE or GONE, depending on your layout needs. Then, when the event occurs, start a TranslateAnimation with the starting x coordinate set using RELATIVE\_TO\_PARENT with a value of 1.0 (all the way to the right) and a destination x coordinate of 0.0 with type RELATIVE\_TO\_SELF, so that your image ends up in the position determined by the layout. Make sure to also turn the visibility back on as you start the animation. | You can use the following code:
PS. It's important that whatever ViewGroup your ImageView is nested under extends all the way to the right of the screen. Otherwise the ImageView will be clipped against its parent's bounds. | You can use the following code:
// Move the view horizontally from x = -970 (off screen) to x = 2000;
// the last two parameters are the vertical start and end offsets.
TranslateAnimation animation = new TranslateAnimation(-970.0f, 2000.0f, 0.0f, 0.0f);
imageView.startAnimation(animation); // call on the ImageView instance, not the class
The first two parameters give the horizontal start and end positions for the image. I am using a Nexus 9 tablet; here the image moves in from outside the screen to the right end. |
20,481,749 | What is the most convenient/fast way to implement a sorted set in redis where the values are objects, not just strings.
Should I just store object id's in the sorted set and then query every one of them individually by its key or is there a way that I can store them directly in the sorted set, i.e. must the value be a string? | 2013/12/09 | [
"https://Stackoverflow.com/questions/20481749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/120534/"
] | It depends on your needs. If you need to share this data with other zsets/structures and want to write the value only once for every change, you can put an id as the zset value and add a hash to store the object. However, it implies making additional queries when you read data from the zset (one zrange + n hgetall for n values in the zset), but writing and synchronising the value between many structures is cheap (only updating the hash corresponding to the value).
But if it is "self-contained", with no or few accesses outside the zset, you can serialize your object to a chosen format (JSON, MessagePack, Kryo...) and then store it as the value of your zset entry. This way, you will have better performance when you read from the zset (only 1 query with O(log(N)+M), which is actually pretty good, probably the best you can get), but you may have to duplicate the value in other zsets/structures if you need to read/write it elsewhere, which also implies maintaining synchronisation by hand on the value.
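For this self-contained option, here is a minimal sketch of the serialization step (the object, key name, and score are hypothetical; the `ZADD` call is shown only in a comment, as it would be issued with a client such as redis-py):

```python
import json

# Option 2: serialize the whole object and use the string as the zset member.
# sort_keys=True gives a canonical encoding, so the same object always maps
# to the same member string (zset members are compared as raw bytes).
user = {"id": 42, "name": "alice", "level": 7}   # hypothetical object
member = json.dumps(user, sort_keys=True)
score = 1386.0                                    # whatever you rank by

# With a client such as redis-py this would be stored with one call:
#   r.zadd("leaderboard", {member: score})
# and read back with one ZRANGE plus one json.loads per returned entry:
restored = json.loads(member)
assert restored == user
```

The canonical encoding matters because updating the object changes the member string, so a changed object must be removed and re-added rather than updated in place.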
Redis has good documentation on performance of each command, so check what queries you would write and calculate the total cost, so that you can make a good comparison of these two options.
Also, don't forget that redis comes with optimistic locking, so if you need pessimistic (because of contention for instance) you will have to do it by hand and/or using lua scripts. If you need a lot of sync, the first option seems better (less performance on read, but still good, less queries and complexity on writes), but if you have values that don't change a lot and memory space is not a problem, the second option will provide better performance on reads (you can duplicate the value in redis, synchronize the values periodically for instance). | Short answer: Yes, everything must be stored as a string
Longer answer: you can serialize your object into any text-based format of your choosing. Most people choose MsgPack or JSON because it is very compact and serializers are available in just about any language. |
171,871 | According to [this](http://www.wiremoons.com/posts/2014-12-09-Three-Letter-Word-Passwords/), a password such as *dinwryran* is secure against a brute-force attack. Is this true? If not, why? | 2017/10/22 | [
"https://security.stackexchange.com/questions/171871",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/161908/"
] | I would have to disagree with his claim that three 'three-letter' words are secure. In his article he says:
>
> So - 500 x 500 x 500 = 125,000,000 (one hundred and twenty five million) possibilities.
>
>
> Maybe that doesn’t sound like a lot - but if you could check 20 of them every second, 24 hours a day, you would need roughly 60 days to get through them all!
>
>
>
First of all, password-cracking tools and hardware combinations can test far more than 20 guesses per second, especially if the list of combinations is pre-generated. On top of that, if a hash of these passwords is retrieved, then an offline cracking attack can be performed, which means the attacker has all the time in the world; and even if it actually did take 60 days, to a motivated criminal that is nothing.
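To put numbers on this, a quick sanity check of the article's keyspace against the rates involved (the 10 billion guesses/second figure is illustrative of a single modern GPU against a fast unsalted hash, not a measured benchmark):

```python
# Sanity check of the article's keyspace and crack-time arithmetic.
keyspace = 500 ** 3  # 500 three-letter words, three words per password
assert keyspace == 125_000_000

slow_rate = 20             # guesses/second, the article's assumption
gpu_rate = 10_000_000_000  # guesses/second, illustrative offline GPU rate

days_slow = keyspace / slow_rate / 86_400   # ~72 days
seconds_gpu = keyspace / gpu_rate           # 0.0125 s

print(f"at 20 guesses/s: ~{days_slow:.0f} days")
print(f"at 10e9 guesses/s: {seconds_gpu} s")
```

At an offline rate the entire keyspace falls in a fraction of a second, which is the point of the rebuttal above.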
The simple fact is that 9 alphabetical characters is just too short in the present day.
Twelve characters makes it exponentially more difficult, but using only alphabetical characters isn't the best.
At the end of the day, I like to use a password manager and generate unique 20+ character passwords with alphanumeric, special characters and mixed case. | No, `dinwryran` is not secure. You can test it using any modern strength meter, such as [zxcvbn](https://apps.cygnius.net/passtest/) or the one built [using a neural network](https://github.com/cupslab/neural_network_cracking).
The bottom line is that passwords below 10 characters are rarely safe to use.
As @nd510 suggested: use a password manager and generate random alphanumeric strings. You can test the difference on the website I linked above. |
21,575 | **CONTEXT:** I'm working on a project consisting of making an underwater robot "fish" for an aquarium. It is mandatory to get at least 8 hours of autonomy. I'm responsible for the electrical part of the project, and during my whole bachelor's degree I never had any lectures on batteries, so I have no idea how to get what is required. But I managed to do some research to try to figure it out:
1. I took the power consumption of all the potential components necessary for the project (motors & pumps, ultrasound sensors, a microcontroller, and a set of RF receivers and transmitters) from the available specs and datasheets.
2. I got a battery with a 1200 mAh capacity. The battery is originally intended for quadcopters; I figured it would be good for the project since it's relatively the same thing (software and motors).
[](https://i.stack.imgur.com/pGfzA.png)
3. Thanks to some references (Battery University, Electropaedia), I estimated the battery autonomy. I used an ideal estimate [= battery capacity / sum of power consumptions] and assumed all the components worked 100% of the time.
The results I got are far below what is needed :(
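As a sketch, the ideal estimate above boils down to the following (the per-component currents are hypothetical placeholders, not the actual datasheet values):

```python
# Ideal runtime estimate: battery capacity divided by total current draw.
# The component currents below are hypothetical placeholders.
battery_mah = 1200.0

draw_ma = {
    "pump": 500.0,
    "motors": 300.0,
    "mcu_rf_sensors": 50.0,
}

total_ma = sum(draw_ma.values())    # 850 mA total draw
runtime_h = battery_mah / total_ma  # ~1.41 h, nowhere near 8 h

# Capacity that would be needed for 8 h at this draw:
required_mah = 8 * total_ma         # 6800 mAh
print(f"runtime ~ {runtime_h:.2f} h; need >= {required_mah:.0f} mAh for 8 h")
```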
**Observations :**
It seems that the pump is the only component with excessive consumption
**Difficulties :**
1. Since I have no experience in the field, I don't know if the battery I'm using is good.
2. The pump seems to be necessary and irreplaceable. I talked to the mechanical team and replacing the pump with something else will make the project unrealistic.
3. The 8 hours autonomy is a necessity to the project's success.
4. The underwater environment.
5. Dimensions (less than 8 inches long)
**QUESTIONS :** First, is the 8 hours autonomy doable? What should I do to get that? Change battery? Change pump? Use a totally different approach?
**LINKS** **:**
[PUMP](https://www.amazon.ca/gp/product/B07HQLVCRX/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1)
[BATTERY](https://www.amazon.ca/gp/product/B0795CJWPB/ref=ppx_yo_dt_b_asin_image_o00_s00?ie=UTF8&language=en_CA&psc=1)
*Thank you for your time* | 2020/12/30 | [
"https://robotics.stackexchange.com/questions/21575",
"https://robotics.stackexchange.com",
"https://robotics.stackexchange.com/users/27665/"
] | Lithium Polymer batteries are, in general, the best you're going to get as far as either energy per weight or energy per volume. Going to a different brand may gain you 10% more capacity per volume, but not much more (if you get to that point, and you're in the US, check out ThunderPower batteries -- I fly model airplanes in competition, and the folks I fly against who fly electric either use ThunderPower, or they're not seriously expecting to win).
There's not really a viable alternative -- diesel fuel has about 20 times the energy density of LiPo batteries, but you need a diesel engine (which doesn't exist in the form factor you need) or a diesel fuel cell (which even more so doesn't exist in the form factor you need). There are some experimental battery technologies that get a lot of breathless press from university public relations departments, but nothing has been commercialized.
Do all the bits of the fish robot have to work continuously? Can it swim more slowly? Does the pump have to run all the time? If you can conserve power by having it run intermittently, that'll help a lot.
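To see how much intermittent operation helps, here is a rough sketch (all numbers are illustrative, not the fish's actual loads):

```python
# Effect of duty-cycling the dominant load (numbers are illustrative).
battery_mah = 1200.0
pump_ma = 500.0   # dominant load
other_ma = 100.0  # everything else, assumed always on

def runtime_hours(duty: float) -> float:
    """Average-draw runtime with the pump on a fraction `duty` of the time."""
    avg_ma = pump_ma * duty + other_ma
    return battery_mah / avg_ma

print(f"pump always on: {runtime_hours(1.0):.1f} h")    # 2.0 h
print(f"pump at 10% duty: {runtime_hours(0.1):.1f} h")  # 8.0 h
```

With these illustrative numbers, dropping the pump to a 10% duty cycle quadruples the runtime, which is why reducing actuation is the first lever to pull.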
If you can't save enough by operating things intermittently, you need to go back to the mechanical engineers and show them what LiPo cells are capable of, and mention that conservation of energy is a thing. You'll be doing your part by making sure that no power is unnecessarily wasted -- but they have to make sure the goal can be accomplished with the available power. | In underwater autonomous vehicles, there are basically two solutions for increasing long-term autonomy.
The first, better power generation, (which has already been mentioned) won't really work in your case. See [here](https://www.mbari.org/technology/emerging-current-tools/power/) for a discussion page on larger fuel cells
The second, which is more common, is reducing the amount of actuation. One of the most popular underwater vehicle designs is the glider. The [Slocum glider](https://www.whoi.edu/what-we-do/explore/underwater-vehicles/auvs/slocum-glider/) only pumps water in and out and moves a tailfin (it also has a very low-power computer running).
My suggestion: the project in its current state is unfeasible. You need to redesign many components. Just running the motor for 8 hours is infeasible, not to mention the other components you want to run (computers and ultrasound sensors). |
21,575 | **CONTEXT:** I'm working on a project consisting of making an underwater robot "fish" for an aquarium. It is mandatory to get at least 8 hours of autonomy. I'm responsible for the electrical part of the project, and during my whole bachelor's degree I never had any lectures on batteries, so I have no idea how to get what is required. But I managed to do some research to try to figure it out:
1. I took the power consumption of all the potential components necessary for the project (motors & pumps, ultrasound sensors, a microcontroller, and a set of RF receivers and transmitters) from the available specs and datasheets.
2. I got a battery with a 1200 mAh capacity. The battery is originally intended for quadcopters; I figured it would be good for the project since it's relatively the same thing (software and motors).
[](https://i.stack.imgur.com/pGfzA.png)
3. Thanks to some references (Battery University, Electropaedia), I estimated the battery autonomy. I used an ideal estimate [= battery capacity / sum of power consumptions] and assumed all the components worked 100% of the time.
The results I got are far below what is needed :(
**Observations :**
It seems that the pump is the only component with excessive consumption
**Difficulties :**
1. Since I have no experience in the field, I don't know if the battery I'm using is good.
2. The pump seems to be necessary and irreplaceable. I talked to the mechanical team and replacing the pump with something else will make the project unrealistic.
3. The 8 hours autonomy is a necessity to the project's success.
4. The underwater environment.
5. Dimensions (less than 8 inches long)
**QUESTIONS :** First, is the 8 hours autonomy doable? What should I do to get that? Change battery? Change pump? Use a totally different approach?
**LINKS** **:**
[PUMP](https://www.amazon.ca/gp/product/B07HQLVCRX/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1)
[BATTERY](https://www.amazon.ca/gp/product/B0795CJWPB/ref=ppx_yo_dt_b_asin_image_o00_s00?ie=UTF8&language=en_CA&psc=1)
*Thank you for your time* | 2020/12/30 | [
"https://robotics.stackexchange.com/questions/21575",
"https://robotics.stackexchange.com",
"https://robotics.stackexchange.com/users/27665/"
] | Lithium Polymer batteries are, in general, the best you're going to get as far as either energy per weight or energy per volume. Going to a different brand may gain you 10% more capacity per volume, but not much more (if you get to that point, and you're in the US, check out ThunderPower batteries -- I fly model airplanes in competition, and the folks I fly against who fly electric either use ThunderPower, or they're not seriously expecting to win).
There's not really a viable alternative -- diesel fuel has about 20 times the energy density of LiPo batteries, but you need a diesel engine (which doesn't exist in the form factor you need) or a diesel fuel cell (which even more so doesn't exist in the form factor you need). There are some experimental battery technologies that get a lot of breathless press from university public relations departments, but nothing has been commercialized.
Do all the bits of the fish robot have to work continuously? Can it swim more slowly? Does the pump have to run all the time? If you can conserve power by having it run intermittently, that'll help a lot.
If you can't save enough by operating things intermittently, you need to go back to the mechanical engineers and show them what LiPo cells are capable of, and mention that conservation of energy is a thing. You'll be doing your part by making sure that no power is unnecessarily wasted -- but they have to make sure the goal can be accomplished with the available power. | Thank you for your answers and comments. We finally decided to replace the pump with a less energy-consuming system. We're testing the prototype in the next two weeks; we hope we get the estimated autonomy. |
21,945 | I am directly quoting a sentence from an ebook, so I want to add the page number to its citation. However, unlike the print version, which I do not have, this ebook (Amazon Kindle version) does not have page numbers but rather "positions".
Do I just pretend it's a page numbering and note in the bibliography that it is a Kindle version? Or is there another way?
In case this is of interest, I am using LaTeX with BibTeX. | 2014/06/04 | [
"https://academia.stackexchange.com/questions/21945",
"https://academia.stackexchange.com",
"https://academia.stackexchange.com/users/12560/"
] | You are not providing page numbers with citations for their own sake, but to help readers to locate the cited passage, e.g., if they want to verify it or see it in context. (Therefore page numbers are already diminished in their usefulness for regular books as soon as there are two editions with significantly different paging.)
The arguably easiest way to locate a verbatim quote in an e-book is to just feed a few words into a full-text search. Thus giving page numbers or similar information has no purpose anymore. However, you could ease finding the location of the quote in a classical book by giving edition-independent location information, such as chapter and section numbers.
Note that this is a "utilitarian" approach to citations. A relevant reader of your publication (e.g., a supervisor or reviewer) might have a "dogmatic" view on such things and thus require page numbers or similar for their own sake. | In my opinion, in this case, it may be a good idea to cite your reference by the section and part from which you are directly quoting. I mean, you may cite the ebook the same way you did before, but when you want the reader to know the exact part in which your quotation appears, you may refer to the section and part instead of page numbers, which do not exist. Another option is to mention the phrase "PDF file position: Page ???" in the place where the page number would normally go, and the reader will easily find the relevant part of the reference. |
21,945 | I am directly quoting a sentence from an ebook, so I want to add the page number to its citation. However, unlike the print version, which I do not have, this ebook (Amazon Kindle version) does not have page numbers but rather "positions".
Do I just pretend it's a page numbering and note in the bibliography that it is a Kindle version? Or is there another way?
In case this is of interest, I am using LaTeX with BibTeX. | 2014/06/04 | [
"https://academia.stackexchange.com/questions/21945",
"https://academia.stackexchange.com",
"https://academia.stackexchange.com/users/12560/"
] | You are not providing page numbers with citations for their own sake, but to help readers to locate the cited passage, e.g., if they want to verify it or see it in context. (Therefore page numbers are already diminished in their usefulness for regular books as soon as there are two editions with significantly different paging.)
The arguably easiest way to locate a verbatim quote in an e-book is to just feed a few words into a full-text search. Thus giving page numbers or similar information has no purpose anymore. However, you could ease finding the location of the quote in a classical book by giving edition-independent location information, such as chapter and section numbers.
Note that this is a "utilitarian" approach to citations. A relevant reader of your publication (e.g., a supervisor or reviewer) might have a "dogmatic" view on such things and thus require page numbers or similar for their own sake. | Since your quote is direct, I would not worry so much about it. Just cite the book and note that it's an `e-book`. You can add a chapter or section number if one exists. Anyway, it's a direct quote: people should trust you that it is there, and if needed, they can use full-text search.
**However, it is necessary to provide as precise version information of the file as possible!** |
21,945 | I am directly quoting a sentence from an ebook, so I want to add the page number to its citation. However, unlike the print version, which I do not have, this ebook (Amazon Kindle version) does not have page numbers but rather "positions".
Do I just pretend it's a page numbering and note in the bibliography that it is a Kindle version? Or is there another way?
In case this is of interest, I am using LaTeX with BibTeX. | 2014/06/04 | [
"https://academia.stackexchange.com/questions/21945",
"https://academia.stackexchange.com",
"https://academia.stackexchange.com/users/12560/"
] | Since your quote is direct, I would not worry so much about it. Just cite the book and note that it's an `e-book`. You can add a chapter or section number if one exists. Anyway, it's a direct quote: people should trust you that it is there, and if needed, they can use full-text search.
**However, it is necessary to provide as precise version information of the file as possible!** | In my opinion, in this case, it may be a good idea to cite your reference by the section and part from which you are directly quoting. I mean, you may cite the ebook the same way you did before, but when you want the reader to know the exact part in which your quotation appears, you may refer to the section and part instead of page numbers, which do not exist. Another option is to mention the phrase "PDF file position: Page ???" in the place where the page number would normally go, and the reader will easily find the relevant part of the reference. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | In many U.S. states, judges face "retention elections": if the voters vote "yes" the judge gets to serve another term, and if the voters vote "no", the judge's term expires at the end of the term and a new judge is appointed to fill the vacancy.
It isn't terribly unusual for a judge who faces a retention election (regardless of its outcome) to resign after the retention election is held, but prior to the end of their term, sometimes to seek another position or sometimes for another reason (e.g. a pending scandal), rendering the results of the retention election moot. | In 2008, Dmitry Medvedev was elected President of Russia.
One day after Dmitry Medvedev assumed the office of President, Vladimir Putin became the Prime Minister of Russia.
I would suggest that this later event rendered Medvedev's election largely irrelevant, but perhaps not completely irrelevant. It may be a matter of some opinion. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | I'd say the most obvious one is the elections in Catalonia about independence from Spain. They were rendered irrelevant because of the strong reaction from the central government of Spain. | In many U.S. states, judges face "retention elections": if the voters vote "yes" the judge gets to serve another term, and if the voters vote "no", the judge's term expires at the end of the term and a new judge is appointed to fill the vacancy.
It isn't terribly unusual for a judge who faces a retention election (regardless of its outcome) to resign after the retention election is held, but prior to the end of their term, sometimes to seek another position or sometimes for another reason (e.g. a pending scandal), rendering the results of the retention election moot. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | The [New Forest](https://en.wikipedia.org/wiki/1905_New_Forest_by-election) and [Barkston Ash](https://en.wikipedia.org/wiki/1905_Barkston_Ash_by-election) by-elections in 1905. Parliament was not in session at the time, and did not come into session before the 1906 general election at which the results were different, so the MPs who were elected in 1905 never took up their seats. | In 2008, Dmitry Medvedev was elected President of Russia.
One day after Dmitry Medvedev assumed the office of President, Vladimir Putin became the Prime Minister of Russia.
I would suggest that this later event rendered Medvedev's election largely irrelevant, but perhaps not completely irrelevant. It may be a matter of some opinion. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | The [New Forest](https://en.wikipedia.org/wiki/1905_New_Forest_by-election) and [Barkston Ash](https://en.wikipedia.org/wiki/1905_Barkston_Ash_by-election) by-elections in 1905. Parliament was not in session at the time, and did not come into session before the 1906 general election at which the results were different, so the MPs who were elected in 1905 never took up their seats. | In many U.S. states, judges face "retention elections": if the voters vote "yes" the judge gets to serve another term, and if the voters vote "no", the judge's term expires at the end of the term and a new judge is appointed to fill the vacancy.
It isn't terribly unusual for a judge who faces a retention election (regardless of its outcome) to resign after the retention election is held, but prior to the end of their term, sometimes to seek another position or sometimes for another reason (e.g. a pending scandal), rendering the results of the retention election moot. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | I'd say the most obvious one is the elections in Catalonia about independence from Spain. It was rendered irrelevant because of the strong reaction from the central government of Spain. | The [All-Russian Constituent Assembly](https://en.wikipedia.org/wiki/Russian_Constituent_Assembly) was supposed to be the democratically elected government of the Russian Republic, but it was immediately dispersed by the Bolshevik Communist Party, which had taken power by force in October 1917. The Bolsheviks themselves were also on the ballot and won seats, but only as a minority party. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | The [All-Russian Constituent Assembly](https://en.wikipedia.org/wiki/Russian_Constituent_Assembly) was supposed to be the democratically elected government of the Russian Republic, but it was immediately dispersed by the Bolshevik Communist Party, which had taken power by force in October 1917. The Bolsheviks themselves were also on the ballot and won seats, but only as a minority party. | In 2008, Dmitry Medvedev was elected President of Russia.
One day after Dmitry Medvedev assumed the office of President, Vladimir Putin became the Prime Minister of Russia.
I would suggest that later event rendered Medvedev's election largely irrelevant, but perhaps not completely irrelevant. It may be a matter of some opinion. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | I suggest the [Greek referendum](https://en.m.wikipedia.org/wiki/2015_Greek_bailout_referendum) in July 2015 that rejected the EU memorandum about their national debt.
The pressure exerted by the (mostly German) EU negotiators and the threat to block Greek banks induced Alexis Tsipras to accept a very similar, supposedly even harsher, memorandum a few days later. | The [All-Russian Constituent Assembly](https://en.wikipedia.org/wiki/Russian_Constituent_Assembly) was supposed to be the democratically elected government of the Russian Republic, but it was immediately dispersed by the Bolshevik Communist Party, which had taken power by force in October 1917. The Bolsheviks themselves were also on the ballot and won seats, but only as a minority party. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | I suggest the [Greek referendum](https://en.m.wikipedia.org/wiki/2015_Greek_bailout_referendum) in July 2015 that rejected the EU memorandum about their national debt.
The pressure exerted by the (mostly German) EU negotiators and the threat to block Greek banks induced Alexis Tsipras to accept a very similar, supposedly even harsher, memorandum a few days later. | In many U.S. states, judges face "retention elections": if the voters vote "yes" the judge gets to serve another term, and if the voters vote "no", the judge's term expires at the end of the term and a new judge is appointed to fill the vacancy.
It isn't terribly unusual for a judge who faces a retention election (regardless of its outcome) to resign after the retention election is held, but prior to the end of their term, sometimes to seek another position or sometimes for another reason (e.g. a pending scandal), rendering the results of the retention election moot. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | The [New Forest](https://en.wikipedia.org/wiki/1905_New_Forest_by-election) and [Barkston Ash](https://en.wikipedia.org/wiki/1905_Barkston_Ash_by-election) by-elections in 1905. Parliament was not in session at the time, and did not come into session before the 1906 general election at which the results were different, so the MPs who were elected in 1905 never took up their seats. | The [All-Russian Constituent Assembly](https://en.wikipedia.org/wiki/Russian_Constituent_Assembly) was supposed to be the democratically elected government of the Russian Republic, but it was immediately dispersed by the Bolshevik Communist Party, which had taken power by force in October 1917. The Bolsheviks themselves were also on the ballot and won seats, but only as a minority party. |
41,308 | Due to not reaching an agreement about Brexit, the United Kingdom is forced to hold European elections a few weeks from now, on May 23rd. The elected members (across the European Union) will be installed as the new European parliament on July 1st; however, if the United Kingdom leaves the European Union before that date, these elections will have been completely unnecessary:
>
> Government sources say if the Brexit process is completed before 30 June, UK MEPs will not take up their seats at all.
>
>
>
(source: [BBC](https://www.bbc.com/news/uk-politics-48188951))
Of course, the election will double as a poll, but (in case of a speedy Brexit) there will be no tangible effects. As far as I can tell, this is a rather unique situation, so I was wondering:
Have there been any elections before (preferably on national level) which were not declared invalid (e.g. by a court) yet rendered completely irrelevant by later events? | 2019/05/07 | [
"https://politics.stackexchange.com/questions/41308",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/10072/"
] | I suggest the [Greek referendum](https://en.m.wikipedia.org/wiki/2015_Greek_bailout_referendum) in July 2015 that rejected the EU memorandum about their national debt.
The pressure exerted by the (mostly German) EU negotiators and the threat to block Greek banks induced Alexis Tsipras to accept a very similar, supposedly even harsher, memorandum a few days later. | In 2008, Dmitry Medvedev was elected President of Russia.
One day after Dmitry Medvedev assumed the office of President, Vladimir Putin became the Prime Minister of Russia.
I would suggest that later event rendered Medvedev's election largely irrelevant, but perhaps not completely irrelevant. It may be a matter of some opinion. |
2,422,077 | I want to use my iPhone to alter my wireless router settings, and I don't want to go through 192.168.1.1 - are there any security restrictions or SDK limitations I should be aware of starting off?
--
t | 2010/03/11 | [
"https://Stackoverflow.com/questions/2422077",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291103/"
] | Aside from targeting specific devices and building an application for managing it, you should check out UPnP (Universal Plug and Play) (<http://www.gnucitizen.org/blog/hacking-with-upnp-universal-plug-and-play/>), which would let you address devices in a more uniform way. However, what you can achieve with UPnP is limited. | Wouldn't that depend mostly (well, entirely) on the interface that the wireless router provides to the management of its settings? |
11,946,039 | I am trying to create a settings file/configuration file. This would contain a list of key-value pairs. There are 10 scripts that would be using this configuration file, either taking input from it, sending output to it (key-values), or both.
I can do this by simply reading from and writing to the file, but I was thinking about a global hash in my settings file which could be accessed by all 10 scripts and which could retain the changes made by each script.
Right now, if I use:
require "setting.pl"
I am able to change the hash in my current script, but in the next script the changes are not visible.
Is there a way to do this? Any help is much appreciated. | 2012/08/14 | [
"https://Stackoverflow.com/questions/11946039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1279274/"
] | How about a [config file tied to a hash](http://p3rl.org/Config%3a%3aSimple#TIE-INTERFACE)? | I think you need some kind of database. You can either use mysql/sqlite/etc or create a distinct script which keeps your hash and provides read/write access to it with sockets. |
11,946,039 | I am trying to create a settings file/configuration file. This would contain a list of key-value pairs. There are 10 scripts that would be using this configuration file, either taking input from it, sending output to it (key-values), or both.
I can do this by simply reading from and writing to the file, but I was thinking about a global hash in my settings file which could be accessed by all 10 scripts and which could retain the changes made by each script.
Right now, if I use:
require "setting.pl"
I am able to change the hash in my current script, but in the next script the changes are not visible.
Is there a way to do this? Any help is much appreciated. | 2012/08/14 | [
"https://Stackoverflow.com/questions/11946039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1279274/"
] | How about a [config file tied to a hash](http://p3rl.org/Config%3a%3aSimple#TIE-INTERFACE)? | Check out this module, [AppConfig](http://p3rl.org/AppConfig). |
11,946,039 | I am trying to create a settings file/configuration file. This would contain a list of key-value pairs. There are 10 scripts that would be using this configuration file, either taking input from it, sending output to it (key-values), or both.
I can do this by simply reading from and writing to the file, but I was thinking about a global hash in my settings file which could be accessed by all 10 scripts and which could retain the changes made by each script.
Right now, if I use:
require "setting.pl"
I am able to change the hash in my current script, but in the next script the changes are not visible.
Is there a way to do this? Any help is much appreciated. | 2012/08/14 | [
"https://Stackoverflow.com/questions/11946039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1279274/"
] | Check out this module, [AppConfig](http://p3rl.org/AppConfig). | I think you need some kind of database. You can either use mysql/sqlite/etc or create a distinct script which keeps your hash and provides read/write access to it with sockets. |
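The answers above (a tied config file, a module like AppConfig, or a small database) all solve the same underlying problem: a hash loaded with `require` lives only in the process that loaded it, so changes must be written back to disk to be visible to the next script. A minimal sketch of that write-back idea in Python, using a JSON file as the shared key-value store (the filename and keys here are invented for the example):

```python
import json
import os

SETTINGS_FILE = "settings.json"  # arbitrary name chosen for this sketch

def load_settings():
    # Each script re-reads the file, so it sees changes made by earlier scripts.
    if os.path.exists(SETTINGS_FILE):
        with open(SETTINGS_FILE) as fh:
            return json.load(fh)
    return {}

def save_settings(settings):
    # Writing back to disk is what makes a change persist across processes.
    with open(SETTINGS_FILE, "w") as fh:
        json.dump(settings, fh, indent=2)

# One script updates a key...
cfg = load_settings()
cfg["threshold"] = 42
save_settings(cfg)

# ...and a later script (simulated here by a fresh read) picks the change up.
print(load_settings()["threshold"])
```

If several scripts can run concurrently, the tied-hash or database approaches have an edge, since a plain read-modify-write like this can lose updates without file locking.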
70,915 | Is the block % and block absorption of a shield included in the calculated Protection value for that shield? If not, is there an easy way to determine which shield will provide better average damage mitigation between one with higher armor and one with better blocking? | 2012/05/29 | [
"https://gaming.stackexchange.com/questions/70915",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/341/"
] | I do not believe that the 'Protection' value on a shield takes 'Block Chance' or 'Block Value' into consideration when comparing it to another item. I say this because a shield in comparison to a magic source on my Wizard rarely leaves more than ~0.2 difference in protection, yet my shield (Lidless Wall) has a 20% block chance for 2-3k or so.
With this in mind, when comparing shields to shields, take the 'Protection' value at face value but then do your own comparison for 'Block Chance' and 'Block Value'. | What I would do is calculate the damage reduction % and the % chance to block (assuming that the blocked damage is somewhat equal)
Add the two together and whichever has the higher total percentage I would use. (again taking into account the damage blocked) |
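The "do your own comparison" advice above can be made concrete by comparing expected damage taken per hit. The sketch below is a back-of-the-envelope model in Python; the armor damage-reduction formula and every number in it are illustrative assumptions, not verified game data.

```python
# Compare two hypothetical shields by expected damage taken per hit.
# Assumed model: armor reduces damage by armor / (armor + 50 * monster_level),
# and a block removes up to block_amount from the post-armor hit.

def expected_hit(incoming, armor, block_chance, block_amount, monster_level=63):
    reduction = armor / (armor + 50 * monster_level)
    after_armor = incoming * (1 - reduction)
    # On average, a fraction block_chance of hits lose the blocked amount.
    blocked = block_chance * min(block_amount, after_armor)
    return after_armor - blocked

high_armor = expected_hit(20000, armor=9000, block_chance=0.10, block_amount=2000)
high_block = expected_hit(20000, armor=7500, block_chance=0.25, block_amount=3000)
print(f"high-armor shield: {high_armor:.0f} expected damage per hit")
print(f"high-block shield: {high_block:.0f} expected damage per hit")
```

With these made-up numbers the high-armor shield wins, but flat block amounts matter more against many small hits than against one huge hit, so the result depends on what is attacking you.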
70,915 | Is the block % and block absorption of a shield included in the calculated Protection value for that shield? If not, is there an easy way to determine which shield will provide better average damage mitigation between one with higher armor and one with better blocking? | 2012/05/29 | [
"https://gaming.stackexchange.com/questions/70915",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/341/"
] | I do not believe that the 'Protection' value on a shield takes 'Block Chance' or 'Block Value' into consideration when comparing it to another item. I say this because a shield in comparison to a magic source on my Wizard rarely leaves more than ~0.2 difference in protection, yet my shield (Lidless Wall) has a 20% block chance for 2-3k or so.
With this in mind, when comparing shields to shields, take the 'Protection' value at face value but then do your own comparison for 'Block Chance' and 'Block Value'. | You may find an [EHP calculator](http://us.battle.net/d3/en/forum/topic/6933915251) that will do the comparison for you. For example, D3Up used to allow you to compare EHP for two items for a specific character. It will also tell you how much an increased block chance will help you under EHP Gains by Stat. Unfortunately, I don't know how he did his calculation. You could try looking it up in his github repositories: [V1](https://github.com/aaroncox/d3up); [V2](https://github.com/d3up). |
762 | I've been in many different-sized companies and in many different environments where we've used terms interchangeably; Testing Terms, Development Terms and Process types have been overloaded in one place or another. Test types from Acceptance and Feature to Unit Tests I've seen used in multiple ways in different companies, and I've stuck with a few that I probably should not have and others I've tried to change immediately when I heard them. It's tough, and at times the groups have resisted, and in some places I had to give up since it was an uphill fight. What I've done is try the following:
* Terminology Dictionaries, either documents or wikis - makes it easy to direct people as to what a term means, and with a wiki you can link in sources
* Group Think - get everyone together and try to hash out definitions - doesn't work well with big groups but often gets interesting conversations going
* Discussion Threads - if you have Discussion Forums in the company, it's an alternative to a wiki and gives everyone a chance to add their comments
* Talk to the originators of the term - try and understand the origination and see if its valid, or maybe change at the source
I've seen this come up in other QA Forums as well, where someone new encounters a term they don't understand and questions the wider community. Terms are subjective, and often localized as well, but I believe the strategies in how to deal with these are fairly common. | 2011/05/23 | [
"https://sqa.stackexchange.com/questions/762",
"https://sqa.stackexchange.com",
"https://sqa.stackexchange.com/users/18/"
] | There are a couple of aspects to this issue:
The same terms being used differently in different locations/workplaces. For that issue I'd recommend getting a clear definition of the terminology in use there, *even if it isn't correct*. Why? Terminology usage in an organization is part of that organization's culture, and changing the culture isn't always easy or even possible. You might need to start by adding clarifiers and letting those be accepted before you start chipping away at incorrect usage.
Using a given term to mean more than one thing is even more likely to be a cultural issue in my view - in my experience, if a term like user acceptance tests is used to describe more than one kind of test, chances are the organization *does not see a difference* between the two usages. If this is the case, you've got a much more difficult education hill to climb.
For sites like this, an accepted dictionary so that new testers (and experienced ones who've come through company cultures where the terminology is used in a non-standard way) is an invaluable resource.
No matter how the incorrect terminology gets there, it helps to start from the assumption that the person using it is misinformed rather than ignorant - tester tact is always a good thing. | If I know the term I'm using is ambiguous, I define it beforehand. |
762 | I've been in many different-sized companies and in many different environments where we've used terms interchangeably; Testing Terms, Development Terms and Process types have been overloaded in one place or another. Test types from Acceptance and Feature to Unit Tests I've seen used in multiple ways in different companies, and I've stuck with a few that I probably should not have and others I've tried to change immediately when I heard them. It's tough, and at times the groups have resisted, and in some places I had to give up since it was an uphill fight. What I've done is try the following:
* Terminology Dictionaries, either documents or wikis - makes it easy to direct people as to what a term means, and with a wiki you can link in sources
* Group Think - get everyone together and try to hash out definitions - doesn't work well with big groups but often gets interesting conversations going
* Discussion Threads - if you have Discussion Forums in the company, it's an alternative to a wiki and gives everyone a chance to add their comments
* Talk to the originators of the term - try and understand the origination and see if its valid, or maybe change at the source
I've seen this come up in other QA Forums as well, where someone new encounters a term they don't understand and questions the wider community. Terms are subjective, and often localized as well, but I believe the strategies in how to deal with these are fairly common. | 2011/05/23 | [
"https://sqa.stackexchange.com/questions/762",
"https://sqa.stackexchange.com",
"https://sqa.stackexchange.com/users/18/"
] | I maintain a Glossary of Testing Terms (based on this: <http://strazzere.blogspot.com/2010/04/glossary-of-testing-terms.html>) that we use internally. In addition, we maintain a Glossary of Business Terms containing terms and acronyms for the industry in which we live, as well as company-specific terms.
We have a periodic "Lunch and Learn" session where we discuss terms, and other topics of interest. | If I know the term I'm using is ambiguous, I define it beforehand. |
762 | I've been in many different-sized companies and in many different environments where we've used terms interchangeably; Testing Terms, Development Terms and Process types have been overloaded in one place or another. Test types from Acceptance and Feature to Unit Tests I've seen used in multiple ways in different companies, and I've stuck with a few that I probably should not have and others I've tried to change immediately when I heard them. It's tough, and at times the groups have resisted, and in some places I had to give up since it was an uphill fight. What I've done is try the following:
* Terminology Dictionaries, either documents or wikis - makes it easy to direct people as to what a term means, and with a wiki you can link in sources
* Group Think - get everyone together and try to hash out definitions - doesn't work well with big groups but often gets interesting conversations going
* Discussion Threads - if you have Discussion Forums in the company, it's an alternative to a wiki and gives everyone a chance to add their comments
* Talk to the originators of the term - try and understand the origination and see if its valid, or maybe change at the source
I've seen this come up in other QA Forums as well, where someone new encounters a term they don't understand and questions the wider community. Terms are subjective, and often localized as well, but I believe the strategies in how to deal with these are fairly common. | 2011/05/23 | [
"https://sqa.stackexchange.com/questions/762",
"https://sqa.stackexchange.com",
"https://sqa.stackexchange.com/users/18/"
] | If I know the term I'm using is ambiguous, I define it beforehand. | There should be a new hire training presenting the terminologies used in the organization with their precise meanings. Anyone who newly joins the company should go through it. It should be conducted by experts who are really well-versed with these terms. |
762 | I've been in many different-sized companies and in many different environments where we've used terms interchangeably; Testing Terms, Development Terms and Process types have been overloaded in one place or another. Test types from Acceptance and Feature to Unit Tests I've seen used in multiple ways in different companies, and I've stuck with a few that I probably should not have and others I've tried to change immediately when I heard them. It's tough, and at times the groups have resisted, and in some places I had to give up since it was an uphill fight. What I've done is try the following:
* Terminology Dictionaries, either documents or wikis - makes it easy to direct people as to what a term means, and with a wiki you can link in sources
* Group Think - get everyone together and try to hash out definitions - doesn't work well with big groups but often gets interesting conversations going
* Discussion Threads - if you have Discussion Forums in the company, it's an alternative to a wiki and gives everyone a chance to add their comments
* Talk to the originators of the term - try and understand the origination and see if its valid, or maybe change at the source
I've seen this come up in other QA Forums as well, where someone new encounters a term they don't understand and questions the wider community. Terms are subjective, and often localized as well, but I believe the strategies in how to deal with these are fairly common. | 2011/05/23 | [
"https://sqa.stackexchange.com/questions/762",
"https://sqa.stackexchange.com",
"https://sqa.stackexchange.com/users/18/"
] | I maintain a Glossary of Testing Terms (based on this: <http://strazzere.blogspot.com/2010/04/glossary-of-testing-terms.html>) that we use internally. In addition, we maintain a Glossary of Business Terms containing terms and acronyms for the industry in which we live, as well as company-specific terms.
We have a periodic "Lunch and Learn" session where we discuss terms, and other topics of interest. | There are a couple of aspects to this issue:
The same terms being used differently in different locations/workplaces. For that issue I'd recommend getting a clear definition of the terminology in use there, *even if it isn't correct*. Why? Terminology usage in an organization is part of that organization's culture, and changing the culture isn't always easy or even possible. You might need to start by adding clarifiers and letting those be accepted before you start chipping away at incorrect usage.
Using a given term to mean more than one thing is even more likely to be a cultural issue in my view - in my experience, if a term like user acceptance tests is used to describe more than one kind of test, chances are the organization *does not see a difference* between the two usages. If this is the case, you've got a much more difficult education hill to climb.
For sites like this, an accepted dictionary so that new testers (and experienced ones who've come through company cultures where the terminology is used in a non-standard way) is an invaluable resource.
No matter how the incorrect terminology gets there, it helps to start from the assumption that the person using it is misinformed rather than ignorant - tester tact is always a good thing. |
762 | I've been in many different-sized companies and in many different environments where we've used terms interchangeably; Testing Terms, Development Terms and Process types have been overloaded in one place or another. Test types from Acceptance and Feature to Unit Tests I've seen used in multiple ways in different companies, and I've stuck with a few that I probably should not have and others I've tried to change immediately when I heard them. It's tough, and at times the groups have resisted, and in some places I had to give up since it was an uphill fight. What I've done is try the following:
* Terminology Dictionaries, either documents or wikis - makes it easy to direct people as to what a term means, and with a wiki you can link in sources
* Group Think - get everyone together and try to hash out definitions - doesn't work well with big groups but often gets interesting conversations going
* Discussion Threads - if you have Discussion Forums in the company, it's an alternative to a wiki and gives everyone a chance to add their comments
* Talk to the originators of the term - try and understand the origination and see if its valid, or maybe change at the source
I've seen this come up in other QA Forums as well, where someone new encounters a term they don't understand and questions the wider community. Terms are subjective, and often localized as well, but I believe the strategies in how to deal with these are fairly common. | 2011/05/23 | [
"https://sqa.stackexchange.com/questions/762",
"https://sqa.stackexchange.com",
"https://sqa.stackexchange.com/users/18/"
] | There are a couple of aspects to this issue:
The same terms being used differently in different locations/workplaces. For that issue I'd recommend getting a clear definition of the terminology in use there, *even if it isn't correct*. Why? Terminology usage in an organization is part of that organization's culture, and changing the culture isn't always easy or even possible. You might need to start by adding clarifiers and letting those be accepted before you start chipping away at incorrect usage.
Using a given term to mean more than one thing is even more likely to be a cultural issue in my view - in my experience, if a term like user acceptance tests is used to describe more than one kind of test, chances are the organization *does not see a difference* between the two usages. If this is the case, you've got a much more difficult education hill to climb.
For sites like this, an accepted dictionary so that new testers (and experienced ones who've come through company cultures where the terminology is used in a non-standard way) is an invaluable resource.
No matter how the incorrect terminology gets there, it helps to start from the assumption that the person using it is misinformed rather than ignorant - tester tact is always a good thing. | There should be a new hire training presenting the terminologies used in the organization with their precise meanings. Anyone who newly joins the company should go through it. It should be conducted by experts who are really well-versed with these terms. |
762 | I've been in many different-sized companies and in many different environments where we've used terms interchangeably; Testing Terms, Development Terms and Process types have been overloaded in one place or another. I've seen test types from Acceptance and Feature to Unit Tests used in multiple ways in different companies, and I've stuck with a few that I probably should not have and others I've tried to change immediately when I heard them. It's tough and at times the groups have resisted, and in some places I had to give up since it was an uphill fight. What I've done is try the following:
* Terminology Dictionaries, either documents or wikis - makes it easy to direct people as to what a term means, and with a wiki you can link in sources
* Group Think - get everyone together and try to hash out definitions - doesn't work well with big groups but often gets interesting conversations going
* Discussion Threads - if you have Discussion Forums in the company, it's an alternative to a wiki and gives everyone a chance to add their comments
* Talk to the originators of the term - try and understand the origination and see if it's valid, or maybe change it at the source
I've seen this come up in other QA Forums as well, where someone new encounters a term they don't understand and questions the wider community. Terms are subjective, and often localized as well, but I believe the strategies in how to deal with these are fairly common. | 2011/05/23 | [
"https://sqa.stackexchange.com/questions/762",
"https://sqa.stackexchange.com",
"https://sqa.stackexchange.com/users/18/"
] | I maintain a Glossary of Testing Terms (based on this: <http://strazzere.blogspot.com/2010/04/glossary-of-testing-terms.html>) that we use internally. In addition, we maintain a Glossary of Business Terms containing terms and acronyms for the industry in which we live, as well as company-specific terms.
We have a periodic "Lunch and Learn" session where we discuss terms, and other topics of interest. | There should be a new hire training presenting the terminologies used in the organization with their precise meanings. Anyone who newly joins the company should go through it. It should be conducted by experts who are really well-versed with these terms. |
18,762 | I am currently reading “Fundamentals Of Vehicle Dynamics” by Thomas D. Gillespie.
[](https://i.stack.imgur.com/HgTFF.jpg)
The author states that tires generate the required frictional forces for movement by two methods:
1) Hysteresis
2) Adhesion
It’s a no-brainer to understand how adhesion can help generate friction. What I don’t understand, however, is how hysteresis can help produce the same?
What is the mechanism by which tires generate forces by stretching and un-stretching? How does this work? | 2018/01/07 | [
"https://engineering.stackexchange.com/questions/18762",
"https://engineering.stackexchange.com",
"https://engineering.stackexchange.com/users/10075/"
] | I'm guessing that you are confusing the terms *friction coupling* with *traction*.
If I pull a trailer over a flat, level surface, most of the resistance is due to the hysteresis in the contact patch. The adhesion prevents the tire from deforming and then rebounding and recovering its energy. Pulling a trailer across wet ice is easier than pulling it across sanded concrete. This is the coupling between deformation forces and the adhesion which fights against, say, lateral stretching and return of the squashed tire.
Basically, just realize that adhesion and deformation interact with each other and can't usefully be separated in the real world. They are strongly coupled.
The coupling also detracts from traction since some of the available adhesion is now expended on opposed forces caused by the deformation.
Here's a convenient PDF - see page# 3. [Rolling Resistance](http://www.mchenrysoftware.com/forum/UMRollingResistance.pdf) | Here is my simple answer of how tires generate traction [frictional] forces for movement by Method/Process #1 in the diagram, Hysteresis:
* Think of Hysteresis as the amount of 'deformation' a piece of rubber is capable of, rubber being an elastic/deformable material, and specifically tread rubber, when it comes in contact with the irregularities of a road surface, whether asphalt, new concrete, worn concrete.
High Hysteresis =
High Amount of Tread Rubber Deformation/Soft Rubber Compound [think racing tires];
High Amount of Traction;
but also a High Amount of Heat Generation [internal molecular friction, think of constantly bending an iron bar and the heat produced at the bend point];
and, a High Amount of Rolling Resistance [will use more fuel to overcome this traction process];
* Drive Tire Acceleration Hysteresis Traction - all drive tires exhibit a certain amount of 'slip' when the torque of the engine is applied to them under acceleration.
The tread rubber surface is deformed by the minute [extremely small] irregularities of the road surface. The rougher the surface, the more the tread rubber deformation, the more the traction, as the tread rubber 'sinks' into the deformation.
* Non-Drive/Trailer and Steer Tire Braking Traction - see above, but in reverse. When the brakes are applied, the deformation the tread rubber is undergoing creates a traction/braking force.
Trust this helps. |
18,762 | I am currently reading “Fundamentals Of Vehicle Dynamics” by Thomas D. Gillespie.
[](https://i.stack.imgur.com/HgTFF.jpg)
The author states that tires generate the required frictional forces for movement by two methods:
1) Hysteresis
2) Adhesion
It’s a no-brainer to understand how adhesion can help generate friction. What I don’t understand, however, is how hysteresis can help produce the same?
What is the mechanism by which tires generate forces by stretching and un-stretching? How does this work? | 2018/01/07 | [
"https://engineering.stackexchange.com/questions/18762",
"https://engineering.stackexchange.com",
"https://engineering.stackexchange.com/users/10075/"
] | I'm guessing that you are confusing the terms *friction coupling* with *traction*.
If I pull a trailer over a flat, level surface, most of the resistance is due to the hysteresis in the contact patch. The adhesion prevents the tire from deforming and then rebounding and recovering its energy. Pulling a trailer across wet ice is easier than pulling it across sanded concrete. This is the coupling between deformation forces and the adhesion which fights against, say, lateral stretching and return of the squashed tire.
Basically, just realize that adhesion and deformation interact with each other and can't usefully be separated in the real world. They are strongly coupled.
The coupling also detracts from traction since some of the available adhesion is now expended on opposed forces caused by the deformation.
Here's a convenient PDF - see page# 3. [Rolling Resistance](http://www.mchenrysoftware.com/forum/UMRollingResistance.pdf) | Hysteresis essentially implies the loss of energy in an activity of cyclic nature. Here, in the case of the motion of a tire, the portion of tread in contact with the road surface is under compression due to the load of the vehicle, and as it moves on it gets extensional deformation. The compressed tread rubber will try to come back to its original state due to the elastic part of the viscoelastic rubber. The viscous part gets converted to heat, thereby heating the tread compound. This heat generation is considered a consequence of some form of friction, named the hysteresis friction component, which is different from the normal frictional heat. Dr. B R Gupta, Retd. Prof. I I T Kharagpur, India |
18,762 | I am currently reading “Fundamentals Of Vehicle Dynamics” by Thomas D. Gillespie.
[](https://i.stack.imgur.com/HgTFF.jpg)
The author states that tires generate the required frictional forces for movement by two methods:
1) Hysteresis
2) Adhesion
It’s a no-brainer to understand how adhesion can help generate friction. What I don’t understand, however, is how hysteresis can help produce the same?
What is the mechanism by which tires generate forces by stretching and un-stretching? How does this work? | 2018/01/07 | [
"https://engineering.stackexchange.com/questions/18762",
"https://engineering.stackexchange.com",
"https://engineering.stackexchange.com/users/10075/"
] | When you compress or stretch a material, the work done is partly converted into elastic energy, which causes the body to return (approximately) to its initial length once you remove the load. On the other hand, some of the energy is dissipated into heat. This is the primary cause of hysteresis.
(Tyre-)rubber, due to its viscoelastic properties, is strongly affected by hysteresis, as shown by the following stress-strain diagram (force-extension diagram, to be precise)

*Fig 1 Source: <https://upload.wikimedia.org/wikipedia/commons/thumb/c/c6/Elastic_Hysteresis.svg/930px-Elastic_Hysteresis.svg.png>*
This means if you apply a load to rubber and release it afterwards, you will measure a different force for the same deformation during load and release.
If you now consider a tyre on a flat surface, the contact surface between the two materials forms a plane, meaning that in the front of the contact surface (in the direction in which the tyre is rolling) the rubber is in a loading phase (blue curve). In the back of the contact surface the rubber is unloading (red curve). This results in a non-symmetrical stress-distribution, as shown in the following figure. (Albeit for both of the bodies being cylinders, but the principle is the same)

*Fig 2 Source: <https://upload.wikimedia.org/wikipedia/commons/8/8c/Pressure_distribution_for_viscoelastic_rolling_cylinders.png>*
This results in a counterclockwise moment, with respect to the axis of rotation (clockwise), in other words: a retarding moment, or rolling resistance. | Here is my simple answer of how tires generate traction [frictional] forces for movement by Method/Process #1 in the diagram, Hysteresis:
* Think of Hysteresis as the amount of 'deformation' a piece of rubber is capable of, rubber being an elastic/deformable material, and specifically tread rubber, when it comes in contact with the irregularities of a road surface, whether asphalt, new concrete, worn concrete.
High Hysteresis =
High Amount of Tread Rubber Deformation/Soft Rubber Compound [think racing tires];
High Amount of Traction;
but also a High Amount of Heat Generation [internal molecular friction, think of constantly bending an iron bar and the heat produced at the bend point];
and, a High Amount of Rolling Resistance [will use more fuel to overcome this traction process];
* Drive Tire Acceleration Hysteresis Traction - all drive tires exhibit a certain amount of 'slip' when the torque of the engine is applied to them under acceleration.
The tread rubber surface is deformed by the minute [extremely small] irregularities of the road surface. The rougher the surface, the more the tread rubber deformation, the more the traction, as the tread rubber 'sinks' into the deformation.
* Non-Drive/Trailer and Steer Tire Braking Traction - see above, but in reverse. When the brakes are applied, the deformation the tread rubber is undergoing creates a traction/braking force.
Trust this helps. |
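As an aside on the hysteresis loop these answers describe, here is a minimal numerical sketch (not from the book; the moduli and strain range are invented, and real tread rubber is nonlinear): the loading branch is stiffer than the unloading branch, and the area enclosed between the two branches is the energy dissipated as heat in each load/unload cycle.

```python
# Toy hysteresis loop: linear loading/unloading branches with different
# effective moduli (hypothetical values). The enclosed area is the energy
# dissipated as heat per unit volume in one load/unload cycle.
N = 200
strain = [0.1 * i / (N - 1) for i in range(N)]   # dimensionless strain
E_LOAD, E_UNLOAD = 12e6, 9e6                     # assumed moduli, Pa

def branch_work(modulus):
    """Trapezoidal area under a linear stress branch sigma = E * eps (J/m^3)."""
    return sum(0.5 * modulus * (strain[i] + strain[i + 1])
               * (strain[i + 1] - strain[i])
               for i in range(N - 1))

work_in = branch_work(E_LOAD)      # work done deforming the tread
work_out = branch_work(E_UNLOAD)   # work recovered on release
dissipated = work_in - work_out    # loop area: lost as heat every cycle

print(f"work in:    {work_in:.0f} J/m^3")    # 60000
print(f"work out:   {work_out:.0f} J/m^3")   # 45000
print(f"dissipated: {dissipated:.0f} J/m^3") # 15000
```

With these made-up numbers, a quarter of the work put into each cycle is lost as heat, which is exactly the loss mechanism the "High Hysteresis = High Heat Generation + High Rolling Resistance" list is describing.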
18,762 | I am currently reading “Fundamentals Of Vehicle Dynamics” by Thomas D. Gillespie.
[](https://i.stack.imgur.com/HgTFF.jpg)
The author states that tires generate the required frictional forces for movement by two methods:
1) Hysteresis
2) Adhesion
It’s a no-brainer to understand how adhesion can help generate friction. What I don’t understand, however, is how hysteresis can help produce the same?
What is the mechanism by which tires generate forces by stretching and un-stretching? How does this work? | 2018/01/07 | [
"https://engineering.stackexchange.com/questions/18762",
"https://engineering.stackexchange.com",
"https://engineering.stackexchange.com/users/10075/"
] | When you compress or stretch a material, the work done is partly converted into elastic energy, which causes the body to return (approximately) to its initial length once you remove the load. On the other hand, some of the energy is dissipated into heat. This is the primary cause of hysteresis.
(Tyre-)rubber, due to its viscoelastic properties, is strongly affected by hysteresis, as shown by the following stress-strain diagram (force-extension diagram, to be precise)

*Fig 1 Source: <https://upload.wikimedia.org/wikipedia/commons/thumb/c/c6/Elastic_Hysteresis.svg/930px-Elastic_Hysteresis.svg.png>*
This means if you apply a load to rubber and release it afterwards, you will measure a different force for the same deformation during load and release.
If you now consider a tyre on a flat surface, the contact surface between the two materials forms a plane, meaning that in the front of the contact surface (in the direction in which the tyre is rolling) the rubber is in a loading phase (blue curve). In the back of the contact surface the rubber is unloading (red curve). This results in a non-symmetrical stress-distribution, as shown in the following figure. (Albeit for both of the bodies being cylinders, but the principle is the same)

*Fig 2 Source: <https://upload.wikimedia.org/wikipedia/commons/8/8c/Pressure_distribution_for_viscoelastic_rolling_cylinders.png>*
This results in a counterclockwise moment, with respect to the axis of rotation (clockwise), in other words: a retarding moment, or rolling resistance. | Hysteresis essentially implies the loss of energy in an activity of cyclic nature. Here, in the case of the motion of a tire, the portion of tread in contact with the road surface is under compression due to the load of the vehicle, and as it moves on it gets extensional deformation. The compressed tread rubber will try to come back to its original state due to the elastic part of the viscoelastic rubber. The viscous part gets converted to heat, thereby heating the tread compound. This heat generation is considered a consequence of some form of friction, named the hysteresis friction component, which is different from the normal frictional heat. Dr. B R Gupta, Retd. Prof. I I T Kharagpur, India |
18,762 | I am currently reading “Fundamentals Of Vehicle Dynamics” by Thomas D. Gillespie.
[](https://i.stack.imgur.com/HgTFF.jpg)
The author states that tires generate the required frictional forces for movement by two methods:
1) Hysteresis
2) Adhesion
It’s a no-brainer to understand how adhesion can help generate friction. What I don’t understand, however, is how hysteresis can help produce the same?
What is the mechanism by which tires generate forces by stretching and un-stretching? How does this work? | 2018/01/07 | [
"https://engineering.stackexchange.com/questions/18762",
"https://engineering.stackexchange.com",
"https://engineering.stackexchange.com/users/10075/"
] | Here is my simple answer of how tires generate traction [frictional] forces for movement by Method/Process #1 in the diagram, Hysteresis:
* Think of Hysteresis as the amount of 'deformation' a piece of rubber is capable of, rubber being an elastic/deformable material, and specifically tread rubber, when it comes in contact with the irregularities of a road surface, whether asphalt, new concrete, worn concrete.
High Hysteresis =
High Amount of Tread Rubber Deformation/Soft Rubber Compound [think racing tires];
High Amount of Traction;
but also a High Amount of Heat Generation [internal molecular friction, think of constantly bending an iron bar and the heat produced at the bend point];
and, a High Amount of Rolling Resistance [will use more fuel to overcome this traction process];
* Drive Tire Acceleration Hysteresis Traction - all drive tires exhibit a certain amount of 'slip' when the torque of the engine is applied to them under acceleration.
The tread rubber surface is deformed by the minute [extremely small] irregularities of the road surface. The rougher the surface, the more the tread rubber deformation, the more the traction, as the tread rubber 'sinks' into the deformation.
* Non-Drive/Trailer and Steer Tire Braking Traction - see above, but in reverse. When the brakes are applied, the deformation the tread rubber is undergoing creates a traction/braking force.
Trust this helps. | Hysteresis essentially implies the loss of energy in an activity of cyclic nature. Here, in the case of the motion of a tire, the portion of tread in contact with the road surface is under compression due to the load of the vehicle, and as it moves on it gets extensional deformation. The compressed tread rubber will try to come back to its original state due to the elastic part of the viscoelastic rubber. The viscous part gets converted to heat, thereby heating the tread compound. This heat generation is considered a consequence of some form of friction, named the hysteresis friction component, which is different from the normal frictional heat. Dr. B R Gupta, Retd. Prof. I I T Kharagpur, India |
57,778 | [](https://i.stack.imgur.com/7R2EO.png)
Hello. I love this composition and would like to do something similar (the sky over the houses that are kind of faded in, not the computer, etc).
I am unsure whether I should use different HSB values for the properties over the background layer (sky), with a certain opacity for each house, or if I should just use different color variations of a base color with a gradient at a certain transparency on top of everything else.
I was kind of hoping someone could point me in the right direction. | 2015/08/09 | [
"https://graphicdesign.stackexchange.com/questions/57778",
"https://graphicdesign.stackexchange.com",
"https://graphicdesign.stackexchange.com/users/48113/"
] | I think you are overthinking things. I don't *really* understand what you are asking... it seems to be "how do I color like this?" Which would appear to be a straightforward question with an equally straightforward answer - you pick the colors you want.
It's just a blue palette with three variations - shadows, mid-tones, and highlights.
[](https://i.stack.imgur.com/KnxsF.jpg)
There are some additional variations within that to accommodate some of the details within the image. However essentially the image is built using one palette, then areas are altered for a second palette to lighten them up, then a third palette for highlights.
You **can** use blending modes and transparency to do that if you want.
But you can just as easily use a secondary or tertiary swatch group and fill objects with standard colors - no blending or transparency. (image above is all just solid colors).
You can also use things like Illustrator's **[Recolor Artwork](https://graphicdesign.stackexchange.com/questions/19388/how-to-change-just-one-color-value-for-multiple-objects-in-illustrator/19424#19424), [[2](https://graphicdesign.stackexchange.com/questions/19388/how-to-change-just-one-color-value-for-multiple-objects-in-illustrator/19424#19424)](https://graphicdesign.stackexchange.com/questions/28528/illustrator-recolor-artwork-how-to-manually-choose-the-replacement-colours-fr)**
Which method *you* choose is entirely up to you. There's nothing inherently *wrong* with any of them. There may be some considerations though.
If you are working in CMYK mode, [blending and transparency can yield undesired results](https://graphicdesign.stackexchange.com/questions/31183/is-cmyk-mode-not-ideal-for-designs-with-blending-mode/31192#31192). So you may be better off using solid colors.
For me, I dislike using transparency and blending within Illustrator if it can be avoided. But that's merely *my* preference. | To achieve this with blending modes, here are the steps...
I'm working in RGB Mode
[](https://i.stack.imgur.com/msYpf.jpg)
Create a black rectangle over the Artboard
[](https://i.stack.imgur.com/6Lx4Y.jpg)
Select **Transparency**-> and change blending mode to **Saturation**
[](https://i.stack.imgur.com/K35z9.jpg)
Result of blending mode, you can lower the opacity for desire blending.
[](https://i.stack.imgur.com/rBwMV.jpg)
Double click the **Opacity Mask** icon to add opacity mask. This will activate Opacity Mask Editing mode.
[](https://i.stack.imgur.com/PShxJ.jpg)
Remove **Clip** check mark
[](https://i.stack.imgur.com/1KubG.jpg)
Now draw an area filled with black, where effect is not desired.
[](https://i.stack.imgur.com/C4Aho.jpg)
Click back and forth to edit Opacity Mask.
[](https://i.stack.imgur.com/OkONU.jpg)
Hopefully this helps.... |
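For readers who want to prototype the "shadows / mid-tones / highlights" palette idea outside Illustrator, here is a small sketch using Python's standard `colorsys` module. The base color and the brightness factors are made up for the example, not sampled from the artwork.

```python
import colorsys

def tonal_variants(r, g, b):
    """Build (shadow, midtone, highlight) variants of one RGB color by
    scaling its HSB brightness, keeping hue and saturation fixed."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    def variant(scale):
        return colorsys.hsv_to_rgb(h, s, min(v * scale, 1.0))
    # The scale factors are arbitrary choices; tweak to taste.
    return variant(0.55), variant(0.80), variant(1.0)

base_blue = (0.20, 0.35, 0.80)   # hypothetical base color, RGB in 0..1
shadow, midtone, highlight = tonal_variants(*base_blue)
for name, color in [("shadow", shadow), ("midtone", midtone),
                    ("highlight", highlight)]:
    print(name, tuple(round(c, 2) for c in color))
```

This mirrors the "solid swatch group" approach from the first answer: each variant is a plain fill color, with no transparency or blending involved.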
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
Each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to complete development and reach even a minimum deliverable or production-ready stage. We do split them into small tasks and progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. Those features are complex and generally consist of developing new or enhancing existing infrastructure, core/services, business logic, UI, Web, and then integration into monitoring, reporting, printing, marketing, etc. Before all of these are done the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements which are much smaller and can be production ready in a single sprint or two... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to be production ready than the others.
Question:
Any idea/advice on what could be a better approach if we would like to make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | What I think is happening is that you are calling the development effort "sprints", and then you have separate "sprints" for QA.
I think you are using a cascade model instead of an incremental model. I would try to dissect the tasks in the backlog to the minimum size and integrate QA in each sprint, so you don't wait until sprints 5 to 8 to test and then go back with the 2-3 weeks or sprints for stabilization.
At the end of each sprint, you should have working stories developed and tested; then, with a Configuration Management Strategy, you can deploy that new or reworked functionality to the staging environment, and after UAT you can deploy to the live system, so the system gets new functionality every sprint, not only at the end of the development and testing "sprints".
I think you are confusing the "sprint" and SCRUM life-cycle with the traditional life-cycle. | I think the answer here may be a good tool. I don't know what kind of solution you use for version control but I think moving to [Distributed Version Control System](http://en.wikipedia.org/wiki/Distributed_Version_Control_System) might help. The main change of the mindset with using DVCS is that you don't merge everything into one trunk, which you have to stabilize before pushing into production; instead you can add different features into one release based on a distributed code base.
Many Kanban teams face the same issue, although on a smaller scale, and DVCS proved to be a pretty good solution.
UPDATE (based on comment): With DVCS you change the mindset regarding working with trunks/branches. Read [Joel Spolsky's introduction to DVCS](http://joelonsoftware.com/items/2010/03/17.html). Also you may check [Joel's Mercurial tutorial](http://hginit.com/) to get familiar with DVCS. |
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
Each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to complete development and reach even a minimum deliverable or production-ready stage. We do split them into small tasks and progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. Those features are complex and generally consist of developing new or enhancing existing infrastructure, core/services, business logic, UI, Web, and then integration into monitoring, reporting, printing, marketing, etc. Before all of these are done the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements which are much smaller and can be production ready in a single sprint or two... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to be production ready than the others.
Question:
Any idea/advice on what could be a better approach if we would like to make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | What I think is happening is that you are calling the development effort "sprints", and then you have separate "sprints" for QA.
I think you are using a cascade model instead of an incremental model. I would try to dissect the tasks in the backlog to the minimum size and integrate QA in each sprint, so you don't wait until sprints 5 to 8 to test and then go back with the 2-3 weeks or sprints for stabilization.
At the end of each sprint, you should have working stories developed and tested; then, with a Configuration Management Strategy, you can deploy that new or reworked functionality to the staging environment, and after UAT you can deploy to the live system, so the system gets new functionality every sprint, not only at the end of the development and testing "sprints".
I think you are confusing the "sprint" and SCRUM life-cycle with the traditional life-cycle. | I think the problem is that the scope is not elaborated properly. There are 15 developers, 5 testers and no analysts? Who is working with the scope documentation/analysis?
You need someone in this "**system analysis**" role. You need this person to break the scope down into much smaller chunks than now.
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
Each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to complete development and reach even a minimum deliverable or production-ready stage. We do split them into small tasks and progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. Those features are complex and generally consist of developing new or enhancing existing infrastructure, core/services, business logic, UI, Web, and then integration into monitoring, reporting, printing, marketing, etc. Before all of these are done the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements which are much smaller and can be production ready in a single sprint or two... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to be production ready than the others.
Question:
Any idea/advice on what could be a better approach if we would like to make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | What I think is happening is that you are calling the development effort "sprints", and then you have separate "sprints" for QA.
I think you are using a cascade model instead of an incremental model. I would try to dissect the tasks in the backlog to the minimum size and integrate QA in each sprint, so you don't wait until sprints 5 to 8 to test and then go back with the 2-3 weeks or sprints for stabilization.
At the end of each sprint, you should have working stories developed and tested; then, with a Configuration Management Strategy, you can deploy that new or reworked functionality to the staging environment, and after UAT you can deploy to the live system, so the system gets new functionality every sprint, not only at the end of the development and testing "sprints".
I think you are confusing the "sprint" and SCRUM life-cycle with the traditional life-cycle. | I'd start by taking a look at the critical path of your delivery cycle.
Map it out with the different durations of each phase then see where you can condense or better sequence your activities. |
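To illustrate that suggestion, here is a minimal critical-path computation over a hypothetical set of release phases; the phase names, durations, and dependencies are invented for the example, not taken from the question.

```python
from functools import lru_cache

# Hypothetical phases: duration in weeks, plus the phases that must finish first.
PHASES = {
    "infra":      (4, ()),
    "core":       (5, ("infra",)),
    "ui":         (3, ("core",)),
    "reporting":  (2, ("core",)),
    "small_feat": (2, ()),
    "stabilize":  (3, ("ui", "reporting", "small_feat")),
}

@lru_cache(maxsize=None)
def earliest_finish(phase):
    """Longest start-to-finish path through `phase` (classic CPM forward pass)."""
    duration, deps = PHASES[phase]
    return duration + max((earliest_finish(d) for d in deps), default=0)

release_length = max(earliest_finish(p) for p in PHASES)
print("critical path length:", release_length, "weeks")  # prints 15
```

Here the infra -> core -> ui -> stabilize chain is the critical path; shortening or re-sequencing anything off that chain (e.g. the small features) does not shorten the release, which is exactly why mapping the path out first is worthwhile.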
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
Each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to complete development and reach even a minimum deliverable or production-ready stage. We do split them into small tasks and progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. Those features are complex and generally consist of developing new or enhancing existing infrastructure, core/services, business logic, UI, Web, and then integration into monitoring, reporting, printing, marketing, etc. Before all of these are done the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements which are much smaller and can be production ready in a single sprint or two... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to be production ready than the others.
Question:
Any idea/advice on what could be a better approach if we would like to make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | What I think is happening is that you are calling the development effort "sprints", and then you have separate "sprints" for QA.
I think you are using a cascade model instead of an incremental model. I would try to dissect the tasks in the backlog to the minimum size and integrate QA in each sprint, so you don't wait until sprints 5 to 8 to test and then go back with the 2-3 weeks or sprints for stabilization.
At the end of each sprint, you should have working stories developed and tested; then, with a Configuration Management Strategy, you can deploy that new or reworked functionality to the staging environment, and after UAT you can deploy to the live system, so the system gets new functionality every sprint, not only at the end of the development and testing "sprints".
I think you are confusing the "sprint" and SCRUM life-cycle with the traditional life-cycle. | I have to agree with a lot of the comments here. You seem to be iterating a waterfall structure and confusing it with Agile. There is no problem with an iterating waterfall process, and I think you can find solutions in the way you initiate each cycle. If you think of each cycle as a new project, you may notice you've missed a bit of the methodology. If you don't have Analysts, you need to get some as yegor256 said.
Also this is a great environment for you to conduct lessons learned sessions and implement change suggestions. |
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
In each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to reach a minimum deliverable or production-ready stage. We do split them into small tasks and make progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. These features are complex: they generally involve developing new or enhanced infrastructure, core services, business logic, UI, and Web, and then integrating with monitoring, reporting, printing, marketing, etc. Until all of that is done, the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements that can be production ready within one or two sprints... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to become production ready than the others.
Question:
Any ideas/advice on a better approach that would let us make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | What I think is happening is that you are calling the development effort "sprints" and then having separate "sprints" for QA.
I think you are using a cascade (waterfall) model instead of an incremental model. I would try to break the tasks in the backlog down to the minimum size and integrate QA into each sprint, so you don't wait until sprints 5 to 8 to test and then fall back on 2-3 weeks or sprints of stabilization.
At the end of each sprint, you should have working stories developed and tested. Then, with a configuration management strategy, you can deploy the new or reworked functionality to the staging environment and, after UAT, deploy to the live system, so the system gains new functionality every sprint instead of only at the end of the development and testing "sprints".
I think you are confusing the "sprint"/Scrum life-cycle with the traditional life-cycle. | I think pawelbrodzinski provided the best answer... but it seems you/your team are averse to code branches. Another approach would be to release the changes - working or not - with each sprint, making them reachable only by certain user roles or links (so the current user base cannot access them prior to completion). This might mean duplicate tables (some with a new structure to support the new functionality) and duplicate links (one to the old functionality and one to the new). Not a good solution, or one I would follow myself, but another approach |
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
In each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to reach a minimum deliverable or production-ready stage. We do split them into small tasks and make progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. These features are complex: they generally involve developing new or enhanced infrastructure, core services, business logic, UI, and Web, and then integrating with monitoring, reporting, printing, marketing, etc. Until all of that is done, the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements that can be production ready within one or two sprints... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to become production ready than the others.
Question:
Any ideas/advice on a better approach that would let us make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | I think the answer here may be a good tool. I don't know what kind of solution you use for version control, but I think moving to a [Distributed Version Control System](http://en.wikipedia.org/wiki/Distributed_Version_Control_System) might help. The main change of mindset with a DVCS is that you don't merge everything into one trunk, which you have to stabilize before pushing into production; instead you can assemble a release from different features based on a distributed code base.
Many Kanban teams face the same issue, although on a smaller scale, and DVCS has proved to be a pretty good solution.
UPDATE (based on comment): With DVCS you change the mindset regarding working with trunks/branches. Read [Joel Spolsky's introduction to DVCS](http://joelonsoftware.com/items/2010/03/17.html). Also you may check [Joel's Mercurial tutorial](http://hginit.com/) to get familiar with DVCS. | I think the problem is that the scope is not elaborated properly. There are 15 developers and 5 testers, and no analysts? Who is working on the scope documentation/analysis?
You need someone in this "**system analyst**" role. You need this person to break the scope down into much smaller chunks than it is now. |
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
In each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to reach a minimum deliverable or production-ready stage. We do split them into small tasks and make progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. These features are complex: they generally involve developing new or enhanced infrastructure, core services, business logic, UI, and Web, and then integrating with monitoring, reporting, printing, marketing, etc. Until all of that is done, the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements that can be production ready within one or two sprints... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to become production ready than the others.
Question:
Any ideas/advice on a better approach that would let us make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | I think the answer here may be a good tool. I don't know what kind of solution you use for version control, but I think moving to a [Distributed Version Control System](http://en.wikipedia.org/wiki/Distributed_Version_Control_System) might help. The main change of mindset with a DVCS is that you don't merge everything into one trunk, which you have to stabilize before pushing into production; instead you can assemble a release from different features based on a distributed code base.
Many Kanban teams face the same issue, although on a smaller scale, and DVCS has proved to be a pretty good solution.
UPDATE (based on comment): With DVCS you change the mindset regarding working with trunks/branches. Read [Joel Spolsky's introduction to DVCS](http://joelonsoftware.com/items/2010/03/17.html). Also you may check [Joel's Mercurial tutorial](http://hginit.com/) to get familiar with DVCS. | I'd start by taking a look at the critical path of your delivery cycle.
Map it out with the different durations of each phase then see where you can condense or better sequence your activities. |
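The critical-path idea in the answer above can be sketched numerically. The phase names, durations (in weeks), and dependencies below are hypothetical, chosen only to show the longest-path computation over a release pipeline:

```python
# Toy critical-path (longest-path) computation over a release pipeline.
# Each phase maps to (duration_in_weeks, list_of_prerequisite_phases);
# all names and numbers are illustrative, not from the question.
phases = {
    "dev sprints":   (18, []),
    "integration":   (3,  ["dev sprints"]),
    "stabilization": (6,  ["dev sprints"]),
    "UAT":           (2,  ["integration", "stabilization"]),
    "deploy":        (1,  ["UAT"]),
}

def earliest_finish(phase, memo):
    """Earliest finish = own duration + latest finish among prerequisites."""
    if phase not in memo:
        duration, preds = phases[phase]
        memo[phase] = duration + max(
            (earliest_finish(p, memo) for p in preds), default=0
        )
    return memo[phase]

memo = {}
release_length = max(earliest_finish(p, memo) for p in phases)
print(f"critical-path length: {release_length} weeks")  # -> 27 weeks here
```

Shortening the release then means attacking the phases on that longest chain first (here, the stabilization branch); compressing off-path phases changes nothing.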
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
In each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to reach a minimum deliverable or production-ready stage. We do split them into small tasks and make progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. These features are complex: they generally involve developing new or enhanced infrastructure, core services, business logic, UI, and Web, and then integrating with monitoring, reporting, printing, marketing, etc. Until all of that is done, the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements that can be production ready within one or two sprints... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to become production ready than the others.
Question:
Any ideas/advice on a better approach that would let us make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | I think the answer here may be a good tool. I don't know what kind of solution you use for version control, but I think moving to a [Distributed Version Control System](http://en.wikipedia.org/wiki/Distributed_Version_Control_System) might help. The main change of mindset with a DVCS is that you don't merge everything into one trunk, which you have to stabilize before pushing into production; instead you can assemble a release from different features based on a distributed code base.
Many Kanban teams face the same issue, although on a smaller scale, and DVCS has proved to be a pretty good solution.
UPDATE (based on comment): With DVCS you change the mindset regarding working with trunks/branches. Read [Joel Spolsky's introduction to DVCS](http://joelonsoftware.com/items/2010/03/17.html). Also you may check [Joel's Mercurial tutorial](http://hginit.com/) to get familiar with DVCS. | I have to agree with a lot of the comments here. You seem to be iterating a waterfall structure and confusing it with Agile. There is no problem with an iterating waterfall process, and I think you can find solutions in the way you initiate each cycle. If you think of each cycle as a new project, you may notice you've missed a bit of the methodology. If you don't have analysts, you need to get some, as yegor256 said.
Also, this is a great environment for you to conduct lessons-learned sessions and implement change suggestions. |
642 | Background:
We have a long-running product development project, which consists of different features.
The development team consists of 15 developers and 5 QA engineers.
We set up releases every 5-7 months.
In each release we either add new features or enhance existing features.
Each release consists of 6-8 development sprints (3 weeks each) and 2-3 pure stabilization/bug fix sprints.
Issue:
Some features take several months or more to reach a minimum deliverable or production-ready stage. We do split them into small tasks and make progress on them in each sprint, but a feature is ready for a first demo only after 2-4 sprints and ready for QA/production after 5-8 sprints. These features are complex: they generally involve developing new or enhanced infrastructure, core services, business logic, UI, and Web, and then integrating with monitoring, reporting, printing, marketing, etc. Until all of that is done, the feature cannot be considered done (production ready). On the other hand, in the same release we have small features or enhancements that can be production ready within one or two sprints... Those long-running features prevent us from releasing the product in smaller chunks, since they take many more sprints to become production ready than the others.
Question:
Any ideas/advice on a better approach that would let us make our releases shorter?
Thanks,
Pavel | 2011/02/24 | [
"https://pm.stackexchange.com/questions/642",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/463/"
] | I think the answer here may be a good tool. I don't know what kind of solution you use for version control, but I think moving to a [Distributed Version Control System](http://en.wikipedia.org/wiki/Distributed_Version_Control_System) might help. The main change of mindset with a DVCS is that you don't merge everything into one trunk, which you have to stabilize before pushing into production; instead you can assemble a release from different features based on a distributed code base.
Many Kanban teams face the same issue, although on a smaller scale, and DVCS has proved to be a pretty good solution.
UPDATE (based on comment): With DVCS you change the mindset regarding working with trunks/branches. Read [Joel Spolsky's introduction to DVCS](http://joelonsoftware.com/items/2010/03/17.html). Also you may check [Joel's Mercurial tutorial](http://hginit.com/) to get familiar with DVCS. | I think pawelbrodzinski provided the best answer... but it seems you/your team are averse to code branches. Another approach would be to release the changes - working or not - with each sprint, making them reachable only by certain user roles or links (so the current user base cannot access them prior to completion). This might mean duplicate tables (some with a new structure to support the new functionality) and duplicate links (one to the old functionality and one to the new). Not a good solution, or one I would follow myself, but another approach |
84,595 | SA = [Selective Availability](https://www.gps.gov/systems/gps/modernization/sa/) (gps.gov)
>
> SA-On and SA-Aware are widely used in **current air transport aircraft** (...)
>
>
> * GPS that behave ***as if* SA is still active** (SA-On)
> * GPS that behave as if SA has been deactivated (SA-Aware)
>
>
>
I was under the impression a software update would turn SA-On to SA-Aware. But that's not the case. Why is that? I'm asking about transport aircraft. I wasn't able to find an answer, and to add a complication, [ICAO says](https://www.icao.int/APAC/Meetings/2011_ADS_B_SITF10/IP10_AUS%20AI.%206%20-%20GPS%20Accuracy.pdf):
>
> GPS system with SA activated ± 100 Metres (no longer relevant because SA is deactivated)
>
>
>
If it's "no longer relevant", then why is there a difference in the preflight check rule *(shown below)?*
>
> [](https://i.stack.imgur.com/vH5FJ.png)
>
>
>
---
Top quote and slide: FAA Briefing ADS-B Rules and Airspace, Sep 2017, via [icao.int](https://www.icao.int/SAM/Documents/2017-ADSB/08%20FAA%20Briefing%20ADS_B%20Rules%20and%20Airspace%20(2).pdf) (PDF) | 2021/03/02 | [
"https://aviation.stackexchange.com/questions/84595",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/-1/"
] | As the linked ICAO article on GPS accuracy states, the issue is not just the navigation accuracy but the reported accuracy which is critical when used for ADS-B.
From the report:
>
> 3.2.1 For GPS receivers which are not SA aware, the accuracy and integrity REPORTED and then used in ADS-B messages is based on an
> ASSUMED value of UERE, corresponding to the period when SA was active.
> This value is grossly larger than the accuracy of the positional data
> delivered now that SA is inactive.
>
>
> 3.2.2 For SA off/ SA aware receivers, some report accuracy and integrity values based on the assumed UERE in the SA inactive
> environment and some determine the UERE from the GPS message contents.
> Thus SA aware systems report more realistic accuracy and integrity
> values.
>
>
>
When SA was on it was implemented by dithering the satellite's clock and ephemeris data to create a UERE of approximately 32 meters. So there was no attempt in the SA On receivers to compute the UERE as it was basically fixed by the implementation of SA. Remember this was first implemented circa 1990 and ADS-B efforts were in draft form when SA was terminated.
Also GPS standards weren't revised just because SA went away. The changes were primarily driven by SBAS and GBAS development efforts not the need for ADS-B. SBAS was the basis for the TSO-C145/C146. The air transport market wasn't very interested in SBAS, they tended to lean towards GBAS (if they cared at all.) The TSO-C196 (SA Aware) is very much like the C145/C146 units but without SBAS. (I'm not really sure of all the history of this TSO as it was after my time working these products.)
Product changes around TSO changes are a fact of life. Trying to upgrade a product from one standard to a new one just doesn't happen as it is by definition a major change and requires a new certification. And even with a new GPS built to a new TSO, it can't be swapped into the aircraft without an STC and that's a major expense.
So, could the original TSO-C129 units be upgraded to be SA Aware (without going to the new TSO?) Theoretically they could, but not economically. Computing the real-time UERE has to be done within the GPS chipset which would require a complete redesign of the custom ASIC at which point it just worth moving to the new standard.
As for the preflight authorization requirement for the SA Aware (C196) units; it's because they don't have the SBAS integrity channel. They rely entirely on an aircraft based integrity process. This can cause periodic cases where the integrity is reported as unacceptable.
---
Glossary:
* ICAO: International Civil Aviation Organization
* GPS: Global Positioning System
* ADS-B: Automatic Dependent Surveillance - Broadcast
* SA: [Selective Availability](https://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System#Selective_availability)
* UERE: [User equivalent range error](https://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System)
* SBAS: Satellite Based Augmentation System
* GBAS: Ground Based Augmentation System
* TSO: Technical Standard Order
* ASIC: Application-Specific Integrated Circuit | The requestor wants to understand the reasoning behind either an operator's or an avionics manufacturer's business decisions with regard to ADS-B compliance, but does not specify which GPS receiver, or provide any information on the aircraft or the overall architecture of the onboard avionics. Answering accurately with specifics is hard.
On older transport aircraft early GPS equipage was done a number of ways to support different systems. One of the first uses of GPS was in the navigation system as a sensor for Flight Management Computers. On a lot of old jets, the ILS receiver was removed and multimode receivers that served as both ILS receivers and GPS receivers were installed.
GPS data are used by Terrain Awareness systems, ADSB, satellite communication systems and communication management units, ACARS, electric flight bag systems and it's captured by data acquisition units and safety systems like TCAS etc.
The SA-aware requirement came out of the mandate to equip for ADS-B, which is essentially an ATC cooperative surveillance function. Those requirements are often met by supplying GPS data to an evolved Mode S transponder. Operational performance requirements have evolved and may still not be universal across the globe (which would matter to a regional operator).
Compliance with ADS-B specifications involves meeting accuracy requirements, system integrity requirements and availability requirements. ATC must have visibility to those metrics to know how to space the aircraft.
In early industry discussions, suppliers and operators recognized that ADSB implementation with a stand alone GPS receiver on an older jet, could result in ATC getting better aircraft position than the crew had for navigation.
The use of GPS for aircraft navigation has steadily evolved from basic area NAV (RNAV) to GPS precision approaches and Required Nav Performance (RNP) procedures which can only be flown in an aircraft capable of meeting the Actual Nav Performance requirements.
Currently, the only practical way to meet the highest ADS-B and navigation-system accuracy requirements is a Satellite-Based Augmentation System (SBAS), aka WAAS. Again, depending on the legacy units involved, an upgrade could require changes to receiver hardware, modified or additional antennas, and additional processing power.
Data requirements for all the different systems that use or capture GPS data are simply not all the same.
Some avionics manufacturers faced with investing significant sums in development and approval costs on a legacy product line with limited growth and marketing potential, found it logical from a business stand point, to abandon the evolution of their legacy product line and develop and deploy a new product with hardware support and processing power to meet all of the current requirements and have growth potential in the market. |
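The accuracy figures discussed in both answers (the ~32 m SA dither, the ±100 m SA-on bound) follow from a standard error-budget calculation: horizontal accuracy is roughly the user-equivalent range error (UERE) scaled by the horizontal dilution of precision (HDOP). The component values below are illustrative placeholders, not figures from any certified receiver:

```python
import math

def rss(*components_m):
    """Root-sum-square of independent range-error components, in metres."""
    return math.sqrt(sum(c * c for c in components_m))

# Hypothetical per-source range errors (metres): SA dither, ionosphere,
# ephemeris/clock, receiver noise.  With SA active the dither dominates.
uere_sa_on = rss(32.0, 4.0, 2.5, 1.0)
uere_sa_off = rss(4.0, 2.5, 1.0)

hdop = 1.5  # illustrative horizontal dilution of precision
print(f"SA on : UERE ~{uere_sa_on:.0f} m -> ~{uere_sa_on * hdop:.0f} m horizontal")
print(f"SA off: UERE ~{uere_sa_off:.0f} m -> ~{uere_sa_off * hdop:.0f} m horizontal")
```

An SA-On receiver that hard-codes the left-hand budget keeps reporting tens of metres of uncertainty in its ADS-B messages even though only the right-hand budget now applies, which is exactly the gap the SA-Aware behaviour closes.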
102,407 | I recently upgraded from a 350D to a 5D Mark II.
One thing I noticed is that I seem to be able to use slower shutter speeds with the bigger camera (on the order of decently sharp pictures at 1/20 s at 55 mm, where on the 350D I struggled at 1/30 s at 55 mm).
Could the weight of the setup have anything to do with this? (I'm using the 5D Mark II with the WFT-E4 wireless transmitter, which adds weight similar to a battery grip.) | 2018/10/28 | [
"https://photo.stackexchange.com/questions/102407",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59662/"
] | You are able to use slower shutter speeds because **you switched to a larger sensor.** A given amount of movement is relatively smaller compared with a larger sensor than a smaller one, proportional to the crop factor. The type or cause of the movement does not matter (angular, linear, rotational, whatever). The 1/20 sec vs 1/30 sec speeds you mention corresponds with switching from a 1.5-1.6 crop sensor to full frame.
**Weight does not seem to play a significant role in your case** because, if it did, you would be able to use shutter speeds slower than crop factor alone could account for. In principle, increased weight *could* stabilize against movement by providing resistance against external forces (inertia), but it can also worsen camera shake by requiring greater muscle engagement, which would increase essential tremor. | Besides the effect of the full-frame sensor (bigger photo cells), there is a physiological effect. Your mind commands your muscles to hold the camera more firmly because of the weight (greater for a full-frame camera). The same is true when you add accessories to the camera, like a battery grip or a heavier lens. But this effect has limits in terms of how much force, and for how long, a particular person can apply. |
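The crop-factor arithmetic in the first answer can be sanity-checked: the same angular shake over the same exposure produces blur that is a smaller fraction of a wider frame. The sensor widths below are the standard 36 mm full-frame and ~22.2 mm Canon APS-C values; the drift rate is an arbitrary placeholder, since only the ratio matters:

```python
FF_WIDTH_MM = 36.0     # full-frame sensor width
APSC_WIDTH_MM = 22.2   # Canon APS-C sensor width

def blur_fraction(shutter_s, sensor_width_mm, drift_mm_per_s=1.0):
    """Shake blur as a fraction of frame width, for an arbitrary fixed
    image-drift rate at the focal plane; only ratios are meaningful."""
    return drift_mm_per_s * shutter_s / sensor_width_mm

apsc = blur_fraction(1 / 30, APSC_WIDTH_MM)  # 350D at 1/30 s
ff = blur_fraction(1 / 20, FF_WIDTH_MM)      # 5D Mark II at 1/20 s
print(f"relative blur, APS-C@1/30 vs FF@1/20: {apsc / ff:.2f}")  # -> 1.08
```

The two cases differ by only ~8%, well within the precision of the "1/focal-length" rule of thumb, so sensor size alone accounts for the observation.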
102,407 | I recently upgraded from a 350D to a 5D Mark II.
One thing I noticed is that I seem to be able to use slower shutter speeds with the bigger camera (on the order of decently sharp pictures at 1/20 s at 55 mm, where on the 350D I struggled at 1/30 s at 55 mm).
Could the weight of the setup have anything to do with this? (I'm using the 5D Mark II with the WFT-E4 wireless transmitter, which adds weight similar to a battery grip.) | 2018/10/28 | [
"https://photo.stackexchange.com/questions/102407",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59662/"
] | You are able to use slower shutter speeds because **you switched to a larger sensor.** A given amount of movement is relatively smaller compared with a larger sensor than a smaller one, proportional to the crop factor. The type or cause of the movement does not matter (angular, linear, rotational, whatever). The 1/20 sec vs 1/30 sec speeds you mention corresponds with switching from a 1.5-1.6 crop sensor to full frame.
**Weight does not seem to play a significant role in your case** because, if it did, you would be able to use shutter speeds slower than crop factor alone could account for. In principle, increased weight *could* stabilize against movement by providing resistance against external forces (inertia), but it can also worsen camera shake by requiring greater muscle engagement, which would increase essential tremor. | The weight may have something to do with it. The shutter has to accelerate quite brutally at the top and gets stopped rapidly at the bottom. Both create impulses on the camera. Heavier cameras absorb the impulses better. Given that shutter mass likely does NOT increase in proportion to camera mass, it is one possible explanation. Never thought about it before, though. |
102,407 | I recently upgraded from a 350D to a 5D Mark II.
One thing I noticed is that I seem to be able to use slower shutter speeds with the bigger camera (on the order of decently sharp pictures at 1/20 s at 55 mm, where on the 350D I struggled at 1/30 s at 55 mm).
Could the weight of the setup have anything to do with this? (I'm using the 5D Mark II with the WFT-E4 wireless transmitter, which adds weight similar to a battery grip.) | 2018/10/28 | [
"https://photo.stackexchange.com/questions/102407",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59662/"
] | You are able to use slower shutter speeds because **you switched to a larger sensor.** A given amount of movement is relatively smaller compared with a larger sensor than a smaller one, proportional to the crop factor. The type or cause of the movement does not matter (angular, linear, rotational, whatever). The 1/20 sec vs 1/30 sec speeds you mention corresponds with switching from a 1.5-1.6 crop sensor to full frame.
**Weight does not seem to play a significant role in your case** because, if it did, you would be able to use shutter speeds slower than crop factor alone could account for. In principle, increased weight *could* stabilize against movement by providing resistance against external forces (inertia), but it can also worsen camera shake by requiring greater muscle engagement, which would increase essential tremor. | I see a lot of strange assumptions here. The main strange assumption appears to be that a larger sensor will help against shake. But that is only relevant if the *displacement* of the sensor is a significant factor for motion blur. This may be the case for macro or close-up photography. However, the camera is not held at its centre of gravity, and both hands will have significantly uncorrelated shake. That means that the resulting rotations become much more relevant once the object distance is not dwarfed by the focal length.
Now with a bigger camera you have several effects: overall mass will be larger, meaning that equal forces lead to smaller displacements, and overall size will be larger, meaning that an equal displacement difference between the hands leads to a smaller angle of rotation. Putting this together, rotational inertia will be larger. Rotational inertia, the ratio between torque and the resulting angular acceleration, grows with the square of the distance to the rotational centre and linearly with the mass (of course leverage also grows with the distance, but only linearly).
Modern cameras also have motion compensation, using lens and/or sensor movements. When smaller angles need to be compensated, this kind of compensation has more room to work before it has to give up. |
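The rotational-inertia point above can be put in rough numbers by modelling each camera as a uniform bar of width L rotated about its centre, for which I = mL²/12. The masses and widths below are hypothetical, roughly a bare entry-level body versus a gripped full-frame body:

```python
def bar_inertia(mass_kg, width_m):
    """Moment of inertia of a uniform bar about its centre: m * L^2 / 12."""
    return mass_kg * width_m ** 2 / 12

small = bar_inertia(0.5, 0.13)   # light APS-C body (hypothetical figures)
large = bar_inertia(1.3, 0.15)   # heavier FF body with grip/transmitter
print(f"inertia ratio: {large / small:.1f}x")  # -> 3.5x
# The same disturbing torque therefore produces roughly 3.5x less
# angular acceleration on the heavier, wider body.
```

This is only an order-of-magnitude sketch, but it shows why modest increases in mass and size compound into a noticeably steadier hold.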
102,407 | I recently upgraded from a 350D to a 5D Mark II.
One thing I noticed is that I seem to be able to use slower shutter speeds with the bigger camera (on the order of decently sharp pictures at 1/20 s at 55 mm, where on the 350D I struggled at 1/30 s at 55 mm).
Could the weight of the setup have anything to do with this? (I'm using the 5D Mark II with the WFT-E4 wireless transmitter, which adds weight similar to a battery grip.) | 2018/10/28 | [
"https://photo.stackexchange.com/questions/102407",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59662/"
] | The following presumes the assumption stated in the question: the same focal length of 55mm is being used for both cameras.
There are several factors that could be at work here:
* With a larger sensor and the same focal length, it takes larger camera movements to move a point on the image projected by the lens that corresponds to a point in the scene the same percentage of the sensor width and height. This means that if images from both cameras are viewed at the same size, the blur from the same amount of movement will look smaller in the image from the camera with the larger sensor.
* With a heavier camera body it takes more force to overcome inertia and move the camera the same amount of angular, rotational, or lateral distance. The EOS 5D Mark II body weighs 32 ounces and the WFT-E4 adds another 13 ounces to it, the EOS Rebel XS/350D weighs 17 ounces.
* If different 55mm lenses are used with each camera, a FF lens tends to be heavier than an "equivalent" APS-C lens. The Canon EF 24-70mm f/2.8 L weighs 32 ounces, the EF-S 17-55mm f/2.8 IS weighs 23 ounces. (The lighter EF-S lens includes IS. The heavier FF lens does not! Otherwise, the difference in weight would be even greater.)
* Add the cameras and lenses above together and the FF combo is 77 ounces (a whopping 4.75+ pounds!) compared to 40 ounces (2.5 pounds).
* On the other hand, if the camera is heavy and you hold it up for too long, your muscles could become fatigued and the heavier weight would eventually result in you being less stable as you hold the camera. If it is extremely heavy you may struggle to hold a very heavy camera/lens combination steady for any length of time.
* Since the pixel width of the APS-C 8 MP EOS Rebel XT/350D and the FF 21 MP EOS 5D Mark II are both 6.4µm, at 100% viewing (one image pixel per screen pixel) there should be no difference in the amount of blur caused by the same amount of camera movement when both are enlarged by the same amount. (Remember, when both are enlarged by the same amount the image from the FF camera covers over twice the total area compared to the APS-C camera).
* If you are using different 55mm lenses on each camera and one or both incorporates Image Stabilization, one lens may outperform the other in this regard. Given the same generation of technology, the more expensive FF lenses will usually give slightly better IS performance than their APS-C counterparts.
In all of these cases except one, these factors favor the larger, heavier camera with a larger sensor and a heavier, more expensive lens. In the case of the exception, there is no difference either way. | Besides the effect of the full-frame sensor (bigger photo cells), there is a physiological effect. Your mind commands your muscles to hold the camera more firmly because of the weight (greater for a full-frame camera). The same is true when you add accessories to the camera, like a battery grip or a heavier lens. But this effect has limits in terms of how much force, and for how long, a particular person can apply. |
102,407 | I recently upgraded from a 350D to a 5D Mark II.
One thing I noticed is that I seem to be able to use slower shutter speeds with the bigger camera (on the order of decently sharp pictures at 1/20 s at 55 mm, where on the 350D I struggled at 1/30 s at 55 mm).
Could the weight of the setup have anything to do with this? (I'm using the 5D Mark II with the WFT-E4 wireless transmitter, which adds weight similar to a battery grip.) | 2018/10/28 | [
"https://photo.stackexchange.com/questions/102407",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59662/"
] | Besides the effect of the full-frame sensor (bigger photo cells), there is a physiological effect. Your mind commands your muscles to grip the camera more firmly because of its weight (greater for a full-frame camera). The same is true when you add accessories to the camera, like a battery grip or a heavier lens. But this effect has its limits, in terms of both the force a particular person can apply and how long they can apply it. | I see a lot of strange assumptions here. The main strange assumption appears to be that a larger sensor will help against shake. But that is only relevant if the *displacement* of the sensor is a significant factor for motion blur. This may be the case for macro photography or closeup photography. However, the camera is not held at its center of gravity and both hands will have significantly uncorrelated shake. That means that the resulting rotations will become much more relevant once the object distance is not dwarfed by the focal length.
Now with a bigger camera you have several effects: overall mass will be larger, meaning that equal forces lead to smaller displacements. Overall size will be larger, meaning that equal displacement difference between the hands will lead to smaller angles of rotation. Putting this together, rotational inertia will be larger. Rotational inertia, the ratio between applied torque and the resulting angular acceleration, grows with the square of the distance to the rotational centre and with the weight (of course leverage also grows with the distance, but only linearly).
Now there also is motion compensation with modern cameras, using lens movements and/or sensor movements. When smaller angles need to get compensated, this kind of compensation has more room to go before it has to give up. |
102,407 | I recently upgraded from a 350D to a 5D Mark II.
One thing I noticed is that I seem to be able to use slower shutter speeds with the bigger camera (on the order of decently sharp pictures at 1/20 at 55mm, where on the 350D I struggled with 1/30 at 55mm).
Could the weight of the setup have anything to do with this? (I’m using the 5D Mark II with the WFT-E4 wireless transmitter, which adds similar weight to a battery grip.) | 2018/10/28 | [
"https://photo.stackexchange.com/questions/102407",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59662/"
] | The following presumes the assumption stated in the question: the same focal length of 55mm is being used for both cameras.
There are several factors that could be at work here:
* With a larger sensor and the same focal length, it takes larger camera movements to move a point on the image projected by the lens that corresponds to a point in the scene the same percentage of the sensor width and height. This means that if images from both cameras are viewed at the same size, the blur from the same amount of movement will look smaller in the image from the camera with the larger sensor.
* With a heavier camera body it takes more force to overcome inertia and move the camera the same amount of angular, rotational, or lateral distance. The EOS 5D Mark II body weighs 32 ounces and the WFT-E4 adds another 13 ounces to it; the EOS Rebel XT/350D weighs 17 ounces.
* If different 55mm lenses are used with each camera, a FF lens tends to be heavier than an "equivalent" APS-C lens. The Canon EF 24-70mm f/2.8 L weighs 32 ounces, the EF-S 17-55mm f/2.8 IS weighs 23 ounces. (The lighter EF-S lens includes IS. The heavier FF lens does not! Otherwise, the difference in weight would be even greater.)
* Add the cameras and lenses above together and the FF combo is 77 ounces (a whopping 4.75+ pounds!) compared to 40 ounces (2.5 pounds).
* On the other hand, if the camera is heavy and you hold it up for too long, your muscles could become fatigued and the heavier weight would eventually result in you being less stable as you hold the camera. If it is extremely heavy you may struggle to hold a very heavy camera/lens combination steady for any length of time.
* Since the pixel width of the APS-C 8 MP EOS Rebel XT/350D and the FF 21 MP EOS 5D Mark II are both 6.4µm, at 100% viewing (one image pixel per screen pixel) there should be no difference in the amount of blur caused by the same amount of camera movement when both are enlarged by the same amount. (Remember, when both are enlarged by the same amount the image from the FF camera covers over twice the total area compared to the APS-C camera).
* If you are using different 55mm lenses on each camera and one or both incorporates Image Stabilization, one lens may outperform the other in this regard. Given the same generation of technology, the more expensive FF lenses will usually give slightly better IS performance than their APS-C counterparts.
In all of these cases except one, these factors favor the larger, heavier camera with a larger sensor and a heavier, more expensive lens. In the case of the exception, there is no difference either way. | The weight may have something to do with it. The shutter has to accelerate quite brutally at the top, and gets rapidly stopped at the bottom. Both create impulses on the camera. Heavier cameras distribute the impulses better. Given that shutter mass likely does NOT increase in proportion to camera mass, it is one possible explanation. Never thought about it before, though. |
102,407 | I recently upgraded from a 350D to a 5D Mark II.
One thing I noticed is that I seem to be able to use slower shutter speeds with the bigger camera (on the order of decently sharp pictures at 1/20 at 55mm, where on the 350D I struggled with 1/30 at 55mm).
Could the weight of the setup have anything to do with this? (I’m using the 5D Mark II with the WFT-E4 wireless transmitter, which adds similar weight to a battery grip.) | 2018/10/28 | [
"https://photo.stackexchange.com/questions/102407",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59662/"
] | The weight may have something to do with it. The shutter has to accelerate quite brutally at the top, and gets rapidly stopped at the bottom. Both create impulses on the camera. Heavier cameras distribute the impulses better. Given that shutter mass likely does NOT increase in proportion to camera mass, it is one possible explanation. Never thought about it before, though. | I see a lot of strange assumptions here. The main strange assumption appears to be that a larger sensor will help against shake. But that is only relevant if the *displacement* of the sensor is a significant factor for motion blur. This may be the case for macro photography or closeup photography. However, the camera is not held at its center of gravity and both hands will have significantly uncorrelated shake. That means that the resulting rotations will become much more relevant once the object distance is not dwarfed by the focal length.
Now with a bigger camera you have several effects: overall mass will be larger, meaning that equal forces lead to smaller displacements. Overall size will be larger, meaning that equal displacement difference between the hands will lead to smaller angles of rotation. Putting this together, rotational inertia will be larger. Rotational inertia, the ratio between applied torque and the resulting angular acceleration, grows with the square of the distance to the rotational centre and with the weight (of course leverage also grows with the distance, but only linearly).
Now there also is motion compensation with modern cameras, using lens movements and/or sensor movements. When smaller angles need to get compensated, this kind of compensation has more room to go before it has to give up. |
102,407 | I recently upgraded from a 350D to a 5D Mark II.
One thing I noticed is that I seem to be able to use slower shutter speeds with the bigger camera (on the order of decently sharp pictures at 1/20 at 55mm, where on the 350D I struggled with 1/30 at 55mm).
Could the weight of the setup have anything to do with this? (I’m using the 5D Mark II with the WFT-E4 wireless transmitter, which adds similar weight to a battery grip.) | 2018/10/28 | [
"https://photo.stackexchange.com/questions/102407",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59662/"
] | The following presumes the assumption stated in the question: the same focal length of 55mm is being used for both cameras.
There are several factors that could be at work here:
* With a larger sensor and the same focal length, it takes larger camera movements to move a point on the image projected by the lens that corresponds to a point in the scene the same percentage of the sensor width and height. This means that if images from both cameras are viewed at the same size, the blur from the same amount of movement will look smaller in the image from the camera with the larger sensor.
* With a heavier camera body it takes more force to overcome inertia and move the camera the same amount of angular, rotational, or lateral distance. The EOS 5D Mark II body weighs 32 ounces and the WFT-E4 adds another 13 ounces to it; the EOS Rebel XT/350D weighs 17 ounces.
* If different 55mm lenses are used with each camera, a FF lens tends to be heavier than an "equivalent" APS-C lens. The Canon EF 24-70mm f/2.8 L weighs 32 ounces, the EF-S 17-55mm f/2.8 IS weighs 23 ounces. (The lighter EF-S lens includes IS. The heavier FF lens does not! Otherwise, the difference in weight would be even greater.)
* Add the cameras and lenses above together and the FF combo is 77 ounces (a whopping 4.75+ pounds!) compared to 40 ounces (2.5 pounds).
* On the other hand, if the camera is heavy and you hold it up for too long, your muscles could become fatigued and the heavier weight would eventually result in you being less stable as you hold the camera. If it is extremely heavy you may struggle to hold a very heavy camera/lens combination steady for any length of time.
* Since the pixel width of the APS-C 8 MP EOS Rebel XT/350D and the FF 21 MP EOS 5D Mark II are both 6.4µm, at 100% viewing (one image pixel per screen pixel) there should be no difference in the amount of blur caused by the same amount of camera movement when both are enlarged by the same amount. (Remember, when both are enlarged by the same amount the image from the FF camera covers over twice the total area compared to the APS-C camera).
* If you are using different 55mm lenses on each camera and one or both incorporates Image Stabilization, one lens may outperform the other in this regard. Given the same generation of technology, the more expensive FF lenses will usually give slightly better IS performance than their APS-C counterparts.
In all of these cases except one, these factors favor the larger, heavier camera with a larger sensor and a heavier, more expensive lens. In the case of the exception, there is no difference either way. | I see a lot of strange assumptions here. The main strange assumption appears to be that a larger sensor will help against shake. But that is only relevant if the *displacement* of the sensor is a significant factor for motion blur. This may be the case for macro photography or closeup photography. However, the camera is not held at its center of gravity and both hands will have significantly uncorrelated shake. That means that the resulting rotations will become much more relevant once the object distance is not dwarfed by the focal length.
Now with a bigger camera you have several effects: overall mass will be larger, meaning that equal forces lead to smaller displacements. Overall size will be larger, meaning that equal displacement difference between the hands will lead to smaller angles of rotation. Putting this together, rotational inertia will be larger. Rotational inertia, the ratio between applied torque and the resulting angular acceleration, grows with the square of the distance to the rotational centre and with the weight (of course leverage also grows with the distance, but only linearly).
Now there also is motion compensation with modern cameras, using lens movements and/or sensor movements. When smaller angles need to get compensated, this kind of compensation has more room to go before it has to give up. |
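The point-mass reasoning above can be sketched numerically. This is a toy model with made-up masses and grip radii, not measured camera data; it only illustrates why the same disturbance rotates a heavier, larger rig less:

```python
# Toy point-mass model of the argument above: for the same angular
# impulse from hand shake, a heavier and larger rig rotates less.
# All masses and grip radii are made-up illustrative numbers.

def angular_kick(impulse, mass, radius):
    """Angular velocity change (rad/s): dw = J / I, with the camera
    modelled as a point mass, so I = m * r**2."""
    return impulse / (mass * radius ** 2)

J = 1e-4  # same small angular impulse (N*m*s) for both rigs, assumed

small_rig = angular_kick(J, mass=0.5, radius=0.05)  # light compact body
large_rig = angular_kick(J, mass=1.2, radius=0.08)  # heavy FF body

print(small_rig, large_rig)
assert large_rig < small_rig  # the bigger rig is disturbed less
```

Because moment of inertia scales with mass times radius squared, even modest increases in size and weight cut the angular response noticeably.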
630,883 | Attached schematic for the e-Paper driver HAT.
The meaning of the yellow marked part is not clear to me.
[](https://i.stack.imgur.com/h9A7J.png)
Does this mean I can replace 105 (1 μF) with 4.7 μF and use either as available?
[The entire schematic can be found here.](https://drive.google.com/file/d/1hE1a7wRdHbJu7Z7ePIGmB6oZyJvBYokl/view?usp=sharing) | 2022/08/12 | [
"https://electronics.stackexchange.com/questions/630883",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/285450/"
] | >
> Does this mean I can replace 105 (1 μF) with 4.7 μF and use either as available?
>
>
>
In this *specific* case yes, though this isn't a standard marking as others have mentioned. It seems to be an artifact of the history of these documents, as I found out. It is NOT a "caps-in-parallel" mark, as JRE's answer suggests.
I've actually been working on miniaturizing this very board recently. Waveshare's documentation is rather confusing, so to figure out what I needed, I consulted:
* the actual breakout "hat" schematic, including [the history of that document](https://www.waveshare.com/wiki/File:E-Paper-Driver-HAT-Schematic.pdf) on their wiki
* various versions of eink manuals from their site, including the history of those documents on their wiki
* Adafruit breakout schematic & Open source design
* CrystalFontz breakout schematics
Comparing those, I found that Waveshare has recently bumped the caps up to 4.7µF, though historically they were 1µF. I'm unimpressed with their documentation skills.
Here is a snippet of the relevant part of the table that I extracted from these resources when I was trying to decipher this myself, showing marked values and voltage rating:
| name | eink v3 | eink v2 | newer hat | older hat | Adafruit | CrystalFontz | CF-rating |
| --- | --- | --- | --- | --- | --- | --- | --- |
| VSH | 4.7/25 | 105/50 | 105/50 | 105/50 | 1u/25 | 1u/25 | 10V~17V |
| PreVGH | 4.7/25 | 105/50 | 105/50 | 105/50 | 1u/25 | 1u/50 | ~22V |
| VSL | 4.7/25 | 105/50 | 105/50 | 105/50 | 1u/25 | 1u/25 | -17V~ -10V |
| PreVGL | 4.7/25 | 105/50 | 105/50 | 105/50 | 1u/25 | 1u/50 | ~ -20V |
As you can see, it looks like the newest hat is a mix of eink v3 and the "newer hat" version.
As for me, I ended up using 105/50 (1.0µF 50V rating) as that fit the rest of the project (and I had on hand) and it is working fine for me today. | >
> What does a capacitor marked "105 | 47 μF" mean?
>
>
>
**"105 | 47 μF"** - a 47 μF electrolytic capacitor with a maximum operating temperature of 105 °C.
**"105 / 50 V"** - a 1 μF 50 V ceramic capacitor. |
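For the ceramic reading, "105" follows the standard EIA three-digit code: two significant digits plus a power-of-ten multiplier, in picofarads. A small decoding sketch:

```python
def decode_eia_pf(code):
    """Decode a three-digit EIA capacitor marking to picofarads:
    two significant digits plus a power-of-ten multiplier,
    e.g. '105' -> 10 * 10**5 pF = 1,000,000 pF = 1 uF."""
    significant, multiplier = int(code[:2]), int(code[2])
    return significant * 10 ** multiplier

print(decode_eia_pf("105"))  # 1 uF, the marking discussed here
print(decode_eia_pf("475"))  # 4.7 uF, the newer Waveshare value
```

So a part marked "475" is the 4.7 µF value from the newer revisions, while "105" is the older 1 µF value.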
123,403 | I have used Ubuntu for many years, but recently I discovered a nice feature in Arch. It is common to display system information on headless servers at SSH login; on Ubuntu it's the landscape package.
I wondered if it's possible to create the same for the normal terminal in Ubuntu, like the terminal in Arch:

I think it might be useful to have this information displayed, at the time one starts the terminal.
Is it possible to create something like this for the terminal, and if so what would you suggest? I tried motd but these messages were not displayed.
Daniel | 2012/04/19 | [
"https://askubuntu.com/questions/123403",
"https://askubuntu.com",
"https://askubuntu.com/users/55655/"
] | The application that displays the system info in Arch is called Archey, and it can be installed on Ubuntu. This website has instructions on how to do so.
<http://www.linuxandlife.com/2012/02/how-to-install-screenfetch-and-archey.html> | You need to edit .bashrc
Here are some useful links in regards to what you can add:
* <http://mostlycli.blogspot.com/2010/03/my-bashrc-file-part-2-useful-system.html>
* <http://usalug.com/phpBB3/viewtopic.php?t=10456>
* <http://linux.derkeiler.com/Mailing-Lists/SuSE/2008-01/msg01947.html>
* <http://www.cyberciti.biz/faq/create-large-colorful-text-banner-on-screen/>
To find more do a Google search on cool .bashrc welcome screens |
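Archey itself is a small Python script, so a minimal banner in the same spirit can be written with the standard library alone and invoked from `.bashrc`. The field list below is just an illustration:

```python
#!/usr/bin/env python3
"""Minimal system-info banner in the spirit of Archey; stdlib only."""
import getpass
import platform
import socket

def banner():
    # Collect a few basic facts about the machine and format them
    # as "Label: value" lines, one per row.
    info = [
        ("User", getpass.getuser()),
        ("Hostname", socket.gethostname()),
        ("OS", f"{platform.system()} {platform.release()}"),
        ("Machine", platform.machine()),
        ("Python", platform.python_version()),
    ]
    return "\n".join(f"{label}: {value}" for label, value in info)

if __name__ == "__main__":
    print(banner())
```

Save it somewhere like `~/bin/banner.py` (the path is an assumption) and add a line such as `python3 ~/bin/banner.py` near the end of `~/.bashrc`.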
123,403 | I have used Ubuntu for many years, but recently I discovered a nice feature in Arch. It is common to display system information on headless servers at SSH login; on Ubuntu it's the landscape package.
I wondered if it's possible to create the same for the normal terminal in Ubuntu, like the terminal in Arch:

I think it might be useful to have this information displayed at the time one starts the terminal.
Is it possible to create something like this for the terminal, and if so what would you suggest? I tried motd but these messages were not displayed.
Daniel | 2012/04/19 | [
"https://askubuntu.com/questions/123403",
"https://askubuntu.com",
"https://askubuntu.com/users/55655/"
] | The application that displays the system info in Arch is called Archey, and it can be installed on Ubuntu. This website has instructions on how to do so.
<http://www.linuxandlife.com/2012/02/how-to-install-screenfetch-and-archey.html> | There's a program that does this called [linuxlogo](https://apps.ubuntu.com/cat/applications/linuxlogo); here's the homepage:
* <http://www.deater.net/weave/vmwprod/linux_logo/>
You can install it and then just run `linuxlogo` to get the default Debian-style banner; check the man page for all the configuration options to get what you need. |
155,039 | There is a tutorial on how to create a burndown chart for Scrum in the Google Docs application:
<http://www.scrumology.net/2011/05/03/how-to-create-a-burndown-chart-in-google-docs/>
The problem with it though is, it has only a place to update progress once per sprint but the burndown is supposed to be updated with daily progress, right? How can one modify this chart to be able to put daily progress on it?
I mean to be able to plot two lines (ideal and actual) with data such as (story points 255, velocity 24):
Actual Google Docs document (free to edit):
<https://docs.google.com/spreadsheet/ccc?key=0AuPWErnOiLTUdElJVzJZaE5EWEZ2S2xCelF6Z2lzaUE>

 | 2012/06/30 | [
"https://softwareengineering.stackexchange.com/questions/155039",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31300/"
] | The tutorial is really only showing you how to use a spreadsheet to draw a graph. As you say, in a Scrum project, you are probably more interested in the burndown *within* an iteration, on a daily basis. So simply take their approach and replace Iteration with Day. There's no magic about the name of the column in the spreadsheet. | The tutorial you are referencing is very basic and meant to be customized :)
When I coach Scrum teams, I generally (not always) advise within the Sprint to:
1. Burn down on task hours
2. Burn up on story points
That way you can see how you are burning down on hours but balance that with actually completing stories. |
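Using the numbers given in the question (255 story points, velocity 24), the ideal line for such a chart can be generated programmatically and pasted into the spreadsheet as a column. A sketch:

```python
def ideal_burndown(total_points, velocity):
    """Remaining story points at the start of each period until done."""
    remaining = [total_points]
    while remaining[-1] > 0:
        remaining.append(max(0, remaining[-1] - velocity))
    return remaining

# 255 points at 24 points per sprint burns down over 11 sprints:
print(ideal_burndown(255, 24))
# For a daily in-sprint line, pass task hours and hours-per-day instead.
```

Plot this column as the "ideal" series and record the actual remaining work next to it each day (or sprint).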
2,925,251 | Hi, I am a newbie to MySQL.
Here are my questions:
1. What is the use of Mysql Index key?
2. Does it make a difference in mysql queries with defining an Index key and without it?
3. Are all primary keys default Index key?
Thanks a million | 2010/05/27 | [
"https://Stackoverflow.com/questions/2925251",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/324968/"
] | 1- Defining an index on a column (or set of columns) makes searching on that column (or set) much faster, at the expense of additional disk space.
2- Yes, the difference is that queries using that column will be much faster.
3- Yes, as it's usual to search by the primary key, it makes sense for that column to always be indexed.
Read more on MySQL indexing [here](http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html). | An index is indeed an additional set of records. Nothing more.
Things that make indexes access faster are:
* Internally, there's a better chance that the engine keeps the index in its buffer than the whole table's rows
* The index is smaller so to parse it means reading less blocks of the hard drive
* The index is sorted already, so finding a given value is easy
* In case of being not null, it's even faster (for various reasons, but the most important thing to know is that the engine **doesn't store null values in indexes**)
Whether or not an index is useful is not easy to guess (obviously I'm not talking about the primary key) and should be investigated. Here are some drawbacks, cases where an index might slow down your operations:
* It will slow down inserts and updates on indexed fields
* It requires more maintenance: statistics have to be built for each index so the computing could take a significantly longer time if you add many indexes
* It might slow down the queries when the statistics are not up to date. This effect could be catastrophic because the engine would actually go "the wrong way"
* It might slow down when the queries are inadequate (anyway, indexes should be the exception rather than the rule: no index, unless certain queries urgently need one. I know usually every table has at least one index, but that should come only after investigation)
We could comment this last point a lot, but I think every case is special and many examples of this already exist in internet.
Now about your question 'Are all primary keys default Index key?', I must say that this is not quite how the optimizer works. When there are various indexes defined on a table, the most efficient index combination is chosen on the fly from live data and some static data (index statistics), in order to reach the best performance. There's no default index per se; every situation leads to a different result.
Regards. |
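The lookup-speed point is easy to demonstrate with SQLite from the Python standard library; the same principle applies to MySQL indexes, though this is an illustration rather than a MySQL benchmark:

```python
# Illustrating the index-lookup point with SQLite (Python stdlib);
# the same principle applies to MySQL indexes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10_000)],
)

QUERY = "SELECT * FROM users WHERE email = 'user42@example.com'"

def plan(query):
    # The last column of each EXPLAIN QUERY PLAN row is the detail string.
    return " ".join(row[-1] for row in conn.execute(
        "EXPLAIN QUERY PLAN " + query))

before = plan(QUERY)          # full table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(QUERY)           # index search

print(before)
print(after)
assert "SCAN" in before and "INDEX" in after
```

On MySQL the equivalent check is `EXPLAIN SELECT ...`, which shows whether a query uses a full scan (`type: ALL`) or an index.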
279,174 | I am trying my hardest to define a list of CodeAnalysisRules that should be omitted from the Code Analysis tools when MSBuild executes my TFSBuild.proj file.
But each time I test it, my list of Code Analysis Rules to exclude are ignored and Team Build just simply honors the Code Analysis Rules settings for each project.
Anyone have an example of a TFSBuild.proj file that shares one list of Code Analysis Rules exceptions for all projects that are built in Team Build? I am using Team System 2008.
Thanks for any assistance! | 2008/11/10 | [
"https://Stackoverflow.com/questions/279174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9162/"
] | We do this via the Code Analysis Check-in Policy. You can configure this via Team System. To implement, simply choose your rules and then Right Click Solution -> Replace Code Analysis Settings with Check-in Policy. | Well, it took me a while to find this but here is the best answer I could find
<http://bloggingabout.net/blogs/rick/archive/2007/09/04/howto-disable-specific-code-analysis-rules-for-a-team-build.aspx>
It works, and it does let me enforce a consistent set of rules across all of my solutions without having to visit each project's file and that is huge for me as we tweak our rulesets and don't wish to manually update each project each time
**Anyone else know of anything that works?** - this was a VS2005 solution/workaround so I was hoping that it would be easier in VS2008 Team System but so far haven't found anything else |
279,174 | I am trying my hardest to define a list of CodeAnalysisRules that should be omitted from the Code Analysis tools when MSBuild executes my TFSBuild.proj file.
But each time I test it, my list of Code Analysis Rules to exclude are ignored and Team Build just simply honors the Code Analysis Rules settings for each project.
Anyone have an example of a TFSBuild.proj file that shares one list of Code Analysis Rules exceptions for all projects that are built in Team Build? I am using Team System 2008.
Thanks for any assistance! | 2008/11/10 | [
"https://Stackoverflow.com/questions/279174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9162/"
] | I don't see much point in trying to override the csproj settings with a different set - surely you want to use the same CA settings (or disable CA entirely) whenever and wherever you build the code?
As foosnazzy says, there's (usually) no need to do this in MSBuild. You can use the CA check in policy to set up the rules you wish to apply. Then, right click your solution in the solution explorer and about half way down the context menu there is a submenu of options for applying the TFSProjects' CA policy to all Projects in the solution. You can overwrite the project settings with the server's, or merge them.
It may only copy the settings for the current configuration so you may need to do it twice if you want to apply the same CA settings to Debug and Release. (I vaguely remember this happening but we don't run CA on our release build so it's not something I've tried recently)
(This was available in VSTS2005, but it didn't work - the values were "merged" with those in the projects so that any existing CA rules would be increased in severity (to warning or error) but you couldn't disable CA rules (demote error -> warning -> disabled). In addition, every time you opened a solution the CA settings would "drift" so that you had to reapply them every few days to keep it working)
An alternative is to set up the CA rules you want in a single project, find the XML element that contains the list, and use a text editor (or a few lines of C# code) to do a global search & replace for the CA element in all csproj files in your project. That's how I did it until VSTS2008 came along - once you've sussed the technique it only takes a few seconds to migrate your settings through all the csprojs. The advantage of this is you can be more selective about which projects the CA rules are applied to. | Well, it took me a while to find this but here is the best answer I could find
<http://bloggingabout.net/blogs/rick/archive/2007/09/04/howto-disable-specific-code-analysis-rules-for-a-team-build.aspx>
It works, and it does let me enforce a consistent set of rules across all of my solutions without having to visit each project's file and that is huge for me as we tweak our rulesets and don't wish to manually update each project each time
**Anyone else know of anything that works?** - this was a VS2005 solution/workaround so I was hoping that it would be easier in VS2008 Team System but so far haven't found anything else |
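The "global search & replace across all csproj files" technique mentioned in the answers can be sketched in a few lines of Python instead of C#. The `<CodeAnalysisRules>` element name and the sample rule IDs are assumptions based on the answers here, not a verified Visual Studio schema:

```python
# Sketch: stamp one canonical <CodeAnalysisRules> element into every
# .csproj under a root directory, as described in the answers above.
import pathlib
import re

RULES = ("<CodeAnalysisRules>"
         "-Microsoft.Design#CA1002;-Microsoft.Naming#CA1709"
         "</CodeAnalysisRules>")
PATTERN = re.compile(r"<CodeAnalysisRules>.*?</CodeAnalysisRules>", re.DOTALL)

def apply_rules(root):
    """Replace existing CodeAnalysisRules elements in all csproj files."""
    for csproj in pathlib.Path(root).rglob("*.csproj"):
        text = csproj.read_text(encoding="utf-8")
        updated = PATTERN.sub(RULES, text)
        if updated != text:
            csproj.write_text(updated, encoding="utf-8")
```

Run it against a checked-out workspace before building; projects without an existing `<CodeAnalysisRules>` element are left untouched by this simple version.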
10,724,007 | >
> **Possible Duplicate:**
>
> [Unique IPs in a voting system](https://stackoverflow.com/questions/7775968/unique-ips-in-a-voting-system)
>
>
>
I am developing a small web app that needs an online voting system using PHP for my college event. I was obtaining the IP address and storing it in a database to prevent repeated voting from the same user. But then I remembered that my college uses a proxy server, so obtaining the IP address is useless.
I tried accessing and storing the MAC address of the client using JavaScript. I tried out a few examples that I found on the internet, but none of them worked.
Can you guys suggest an alternative way to get the results I want?
Any sort of help would be heartily appreciated.
Regards,
Aayush Shrestha
Nepal | 2012/05/23 | [
"https://Stackoverflow.com/questions/10724007",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850678/"
] | Have them enter their email addresses and send a link to their email that allows them to vote. The link should include some hash of their email and you would have to check if they have already voted or not.
I realize that a student could enter multiple email addresses and vote multiple times, but how many emails does a student really have? And how much time would they spend doing something like that?
I'm guessing you don't have access to any sort of authentication system through your college, so this is probably the best way.
**Edit:**
Another idea is to use Facebook authentication (see: <http://developers.facebook.com/docs/authentication/>). The downside is that you have to assume that all voters have facebook accounts. | The only 100% airtight way to do this would be to make users create accounts that require some personally unique information to prevent a user from making multiple accounts.
The closest you can come without a login system is the [Evercookie](http://samy.pl/evercookie/) library, which stores a UUID in about a dozen different places in the user's browser. It's very difficult to clear them all out (even using a privacy mode in the browser), so if you give an Evercookie to a user when they vote, you can probably spot someone who has voted before.
Note that this stops repeat voting on the machine level, not the user level (a machine may have multiple users, and a user may have multiple machines, which might enable repeat voters or block eligible voters).
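The emailed-link idea from the first answer can be sketched with an HMAC over the voter's email, so a link cannot be forged without the server-side secret. The secret and the in-memory store here are placeholders for illustration:

```python
# Sketch of the emailed voting link: sign the voter's email with a
# server-side secret and record used emails to block repeats.
import hashlib
import hmac

SECRET = b"change-me-server-side-secret"  # assumption: kept private

def voting_token(email):
    """HMAC-SHA256 token to embed in the emailed voting link."""
    return hmac.new(SECRET, email.lower().encode(), hashlib.sha256).hexdigest()

def verify(email, token):
    return hmac.compare_digest(voting_token(email), token)

already_voted = set()  # placeholder for a database table

def cast_vote(email, token):
    if not verify(email, token) or email.lower() in already_voted:
        return False
    already_voted.add(email.lower())
    return True

t = voting_token("student@example.edu")
assert cast_vote("student@example.edu", t)        # first vote counts
assert not cast_vote("student@example.edu", t)    # repeat rejected
assert not cast_vote("student@example.edu", "x")  # forged token rejected
```

In PHP the same idea maps to `hash_hmac('sha256', ...)` plus a votes table keyed by email; as noted above, a student with several email addresses can still vote more than once.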
10,724,007 | >
> **Possible Duplicate:**
>
> [Unique IPs in a voting system](https://stackoverflow.com/questions/7775968/unique-ips-in-a-voting-system)
>
>
>
I am developing a small web app that needs an online voting system using PHP for my college event. I was obtaining the IP address and storing it in a database to prevent repeated voting from the same user. But then I remembered that my college uses a proxy server, so obtaining the IP address is useless.
I tried accessing and storing the MAC address of the client using JavaScript. I tried out a few examples that I found on the internet, but none of them worked.
Can you guys suggest an alternative way to get the results I want?
Any sort of help would be heartily appreciated.
Regards,
Aayush Shrestha
Nepal | 2012/05/23 | [
"https://Stackoverflow.com/questions/10724007",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850678/"
] | Have them enter their email addresses and send a link to their email that allows them to vote. The link should include some hash of their email and you would have to check if they have already voted or not.
I realize that a student could enter multiple email addresses and vote multiple times, but how many emails does a student really have? And how much time would they spend doing something like that?
I'm guessing you don't have access to any sort of authentication system through your college, so this is probably the best way.
**Edit:**
Another idea is to use Facebook authentication (see: <http://developers.facebook.com/docs/authentication/>). The downside is that you have to assume that all voters have facebook accounts. | 1. First get the user to register an email address so they can use the voting system subsequently associated with that address.
2. Once you have an email address (that is validated with a activation link sent to that email), then you can gather voting related input from the user.
There is really no effective, platform-independent way of preventing repeated voting unless you enforce user certificates, etc. |
832,306 | I am working on an application for Windows Mobile 6 (or maybe 5) that plays YouTube videos. Well, it *should* play YouTube videos (and control/query the player about status changes, current frame/time, etc.)
After scouring the web for quite some time now (and a few trials), I still couldn't find a way to do this. The options I know of are:
* Use the YouTube player, embedded in HTML, controllable via JavaScript. However, I couldn't watch YT videos from IE Mobile, to begin with -- I get an error message saying something along the lines of "you need a browser with Flash Player 8 and JavaScript enabled".
* Host a Media Player control, but WMP refuses to play YT videos, including the Mobile format.
* Use DirectShow. I'm still looking into this one (I've never worked with COM, let alone DirectShow, before), but I am yet to find a solution that supports [YouTube's format(s)](http://en.wikipedia.org/wiki/YouTube#Format_and_quality_comparison_table)
I would rather write this application in C#, but C++ works, too.
Help me, O Wise Sages of StackOverflow! | 2009/05/06 | [
"https://Stackoverflow.com/questions/832306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/59595/"
] | [The CorePlayer](http://www.coreplayer.com/) includes a plugin for IE Mobile that allows playing YouTube videos. Another option is [TCPMP](http://forum.xda-developers.com/showthread.php?t=308837), which includes a plugin to play FLV videos on Windows Mobile; it is open source. | You might be able to use the New YouTube App for Windows Mobile that Google created either directly or indirectly.
[New YouTube App for Windows Mobile](http://googlemobile.blogspot.com/2009/03/new-youtube-app-for-windows-mobile-and.html)
[Watching Video on Windows Mobile](http://www.youtube.com/blog?entry=tWasC8HWSnI) |
832,306 | I am working on an application for Windows Mobile 6 (or maybe 5) that plays YouTube videos. Well, it *should* play YouTube videos (and control/query the player about status changes, current frame/time, etc.)
After scouring the web for quite some time now (and a few trials), I still couldn't find a way to do this. The options I know of are:
* Use the YouTube player, embedded in HTML, controllable via JavaScript. However, I couldn't watch YT videos from IE Mobile, to begin with -- I get an error message saying something along the lines of "you need a browser with Flash Player 8 and JavaScript enabled".
* Host a Media Player control, but WMP refuses to play YT videos, including the Mobile format.
* Use DirectShow. I'm still looking into this one (I've never worked with COM, let alone DirectShow, before), but I am yet to find a solution that supports [YouTube's format(s)](http://en.wikipedia.org/wiki/YouTube#Format_and_quality_comparison_table)
I would rather write this application in C#, but C++ works, too.
Help me, O Wise Sages of StackOverflow! | 2009/05/06 | [
"https://Stackoverflow.com/questions/832306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/59595/"
] | You can also grab YouTube videos as MP4, hopefully that expands your player options. You can look into DirectShow CF for playback functionality, or host some other player in your app that supports MP4 or FLV.
Trying to play it back through IE Mobile won't work, as the necessary version of the Flash plug-in (one with video playback support) isn't available (last time I checked).
**To get the MP4 file make a request to this URL:**
"<http://www.youtube.com/get_video?video_id=>" + videoID + "&t=" + token + "&fmt=18"
**To get the FLV use this:**
"<http://www.youtube.com/get_video?video_id=>" + videoID + "&t=" + token
**To get the Token call this:**
"<http://www.youtube.com/api2_rest?method=youtube.videos.get_video_token&video_id=>" + videoID
I wrote an app that would grab a playlist of YouTube videos and sync them up with my PocketPC, I used TCPMP with the Flash add-on to playback the video (externally from my app). Although MP4 also worked on the PPC, I stuck to FLVs because at the time some videos on YouTube were not available as MP4. I wouldn't be concerned about this now.
Sadly my PPC broke; now I'm doing something similar on my iPhone, but I had to switch completely to the MP4 format. VLC's FLV playback on the iPhone was too jerky for me. | [The CorePlayer](http://www.coreplayer.com/) includes a plugin for IE Mobile that allows playing YouTube videos. Another option is [TCPMP](http://forum.xda-developers.com/showthread.php?t=308837), which includes a plugin to play FLV videos on Windows Mobile; it is open source. |
832,306 | I am working on an application for Windows Mobile 6 (or maybe 5) that plays YouTube videos. Well, it *should* play YouTube videos (and control/query the player about status changes, current frame/time, etc.)
After scouring the web for quite some time now (and a few trials), I still couldn't find a way to do this. The options I know of are:
* Use the YouTube player, embedded in HTML, controllable via JavaScript. However, I couldn't watch YT videos from IE Mobile, to begin with -- I get an error message saying something along the lines of "you need a browser with Flash Player 8 and JavaScript enabled".
* Host a Media Player control, but WMP refuses to play YT videos, including the Mobile format.
* Use DirectShow. I'm still looking into this one (I've never worked with COM, let alone DirectShow, before), but I am yet to find a solution that supports [YouTube's format(s)](http://en.wikipedia.org/wiki/YouTube#Format_and_quality_comparison_table)
I would rather write this application in C#, but C++ works, too.
Help me, O Wise Sages of StackOverflow! | 2009/05/06 | [
"https://Stackoverflow.com/questions/832306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/59595/"
] | You can also grab YouTube videos as MP4, hopefully that expands your player options. You can look into DirectShow CF for playback functionality, or host some other player in your app that supports MP4 or FLV.
Trying to play it back through IE Mobile won't work, as the necessary version of the Flash plug-in (one with video playback support) isn't available (last time I checked).
**To get the MP4 file make a request to this URL:**
"<http://www.youtube.com/get_video?video_id=>" + videoID + "&t=" + token + "&fmt=18"
**To get the FLV use this:**
"<http://www.youtube.com/get_video?video_id=>" + videoID + "&t=" + token
**To get the Token call this:**
"<http://www.youtube.com/api2_rest?method=youtube.videos.get_video_token&video_id=>" + videoID
I wrote an app that would grab a playlist of YouTube videos and sync them up with my PocketPC, I used TCPMP with the Flash add-on to playback the video (externally from my app). Although MP4 also worked on the PPC, I stuck to FLVs because at the time some videos on YouTube were not available as MP4. I wouldn't be concerned about this now.
Sadly my PPC broke; now I'm doing something similar on my iPhone, but I had to switch completely to the MP4 format. VLC's FLV playback on the iPhone was too jerky for me. | You might be able to use the New YouTube App for Windows Mobile that Google created either directly or indirectly.
[New YouTube App for Windows Mobile](http://googlemobile.blogspot.com/2009/03/new-youtube-app-for-windows-mobile-and.html)
[Watching Video on Windows Mobile](http://www.youtube.com/blog?entry=tWasC8HWSnI) |
3,045 | I know that, as part of their training, all Jedi were required to build their own lightsaber.
Did Yoda teach Luke on Dagobah before heading to Bespin? | 2011/04/26 | [
"https://scifi.stackexchange.com/questions/3045",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/1614/"
] | In the book *Shadows of the Empire* Luke goes to Obi-wan's old place on Tatooine and finds his lightsaber plans. Luke uses these plans to build his own lightsaber. If you compare Obi-wan's lightsaber and the one Luke made, there are some pretty major design similarities.
When Luke was training under Yoda, he already had a lightsaber, so it's not unreasonable that Yoda skipped the 'how to make your lightsaber' lesson. | There is a deleted scene from The Return of the Jedi where Luke completes his lightsaber on Tatooine.
---
This was also addressed on the old "[Ask the Jedi Council](https://web.archive.org/web/20030201215736/http://www.starwars.com/community/askjc/jocasta/askjc20011022.html)" website.
>
> *While Force intuition did play a great role in the construction of
> Luke Skywalker's new blade, **all the necessary technical information
> was contained in the place where he built it: Obi-Wan Kenobi's hut on
> Tatooine.***
>
>
> [Ask the Jedi Council - Madame Jocasta Nu.](https://web.archive.org/web/20030201215736/http://www.starwars.com/community/askjc/jocasta/askjc20011022.html)
>
>
> |
3,045 | I know that, as part of their training, all Jedi were required to build their own lightsaber.
Did Yoda teach Luke on Dagobah before heading to Bespin? | 2011/04/26 | [
"https://scifi.stackexchange.com/questions/3045",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/1614/"
] | In the book *Shadows of the Empire* Luke goes to Obi-wan's old place on Tatooine and finds his lightsaber plans. Luke uses these plans to build his own lightsaber. If you compare Obi-wan's lightsaber and the one Luke made, there are some pretty major design similarities.
When Luke was training under Yoda, he already had a lightsaber, so it's not unreasonable that Yoda skipped the 'how to make your lightsaber' lesson. | Partly Obi-Wan and Yoda, but mainly the Force
---------------------------------------------
Luke had some basic background on lightsaber building from Obi-Wan and Yoda, but he was mainly guided by his intuition, which of course was a product of the Force:
>
> Luke’s teachers, Obi-Wan and Yoda, had told him only a little of the
> way a Jedi must use the Force to build a lightsaber.
>
>
> And yet he seemed to know exactly what he needed to know. Reaching out
> with his mind, he found the right pieces—some easily purchased, some
> much harder to acquire.
>
>
> After leaving Obi-Wan's hut, he knew he had at last collected
> everything he needed. While our other heroes bustled about preparing
> for the rescue, Luke retreated into the solitude of a desert cave and
> puzzled over the pieces... In the end it took not just physical tools
> but also the Force to put it all together and bring the crystal inside
> to life.
>
>
> *Beware the Power of the Dark Side!*
>
>
>
Of course, intuitive mechanical aptitude is one of the markers of a strong Jedi. Anakin was able to build a fully sentient droid from spare parts, with little formal knowledge of droid design. Ahsoka, during her time hiding from the Empire on Raada, not only fixed a heavily broken machine, but rebuilt it with many of the required parts missing, and then used those parts to help build a new lightsaber. Being almost supernaturally good at mechanical stuff is what Jedi *do*. Undoubtedly one still needs some background knowledge, but Luke had whatever (probably considerable) general mechanical knowledge he had acquired in his childhood, and an outline of the general theory of lightsaber design from Obi-Wan and Yoda.
He also had many of the parts needed for lightsaber construction, which he found in Obi-Wan's hut:
>
> There, in his master’s lonely hermit hut deep in the Juntland wastes,
> he found a few things Obi-Wan had left behind that were useful to
> him... including the missing parts he needed for the construction of his
> own lightsaber—the weapon of a true Jedi Knight.
>
>
> *Beware the Power of the Dark Side!*
>
>
>
It might have been obvious how many of the essential parts fit together from their shape—even more so, with the aid of the Force. |
3,045 | I know that, as part of their training, all Jedi were required to build their own lightsaber.
Did Yoda teach Luke on Dagobah before heading to Bespin? | 2011/04/26 | [
"https://scifi.stackexchange.com/questions/3045",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/1614/"
] | There is a deleted scene from The Return of the Jedi where Luke completes his lightsaber on Tatooine.
---
This was also addressed on the old "[Ask the Jedi Council](https://web.archive.org/web/20030201215736/http://www.starwars.com/community/askjc/jocasta/askjc20011022.html)" website.
>
> *While Force intuition did play a great role in the construction of
> Luke Skywalker's new blade, **all the necessary technical information
> was contained in the place where he built it: Obi-Wan Kenobi's hut on
> Tatooine.***
>
>
> [Ask the Jedi Council - Madame Jocasta Nu.](https://web.archive.org/web/20030201215736/http://www.starwars.com/community/askjc/jocasta/askjc20011022.html)
>
>
> | Partly Obi-Wan and Yoda, but mainly the Force
---------------------------------------------
Luke had some basic background on lightsaber building from Obi-Wan and Yoda, but he was mainly guided by his intuition, which of course was a product of the Force:
>
> Luke’s teachers, Obi-Wan and Yoda, had told him only a little of the
> way a Jedi must use the Force to build a lightsaber.
>
>
> And yet he seemed to know exactly what he needed to know. Reaching out
> with his mind, he found the right pieces—some easily purchased, some
> much harder to acquire.
>
>
> After leaving Obi-Wan's hut, he knew he had at last collected
> everything he needed. While our other heroes bustled about preparing
> for the rescue, Luke retreated into the solitude of a desert cave and
> puzzled over the pieces... In the end it took not just physical tools
> but also the Force to put it all together and bring the crystal inside
> to life.
>
>
> *Beware the Power of the Dark Side!*
>
>
>
Of course, intuitive mechanical aptitude is one of the markers of a strong Jedi. Anakin was able to build a fully sentient droid from spare parts, with little formal knowledge of droid design. Ahsoka, during her time hiding from the Empire on Raada, not only fixed a heavily broken machine, but rebuilt it with many of the required parts missing, and then used those parts to help build a new lightsaber. Being almost supernaturally good at mechanical stuff is what Jedi *do*. Undoubtedly one still needs some background knowledge, but Luke had whatever (probably considerable) general mechanical knowledge he had acquired in his childhood, and an outline of the general theory of lightsaber design from Obi-Wan and Yoda.
He also had many of the parts needed for lightsaber construction, which he found in Obi-Wan's hut:
>
> There, in his master’s lonely hermit hut deep in the Juntland wastes,
> he found a few things Obi-Wan had left behind that were useful to
> him... including the missing parts he needed for the construction of his
> own lightsaber—the weapon of a true Jedi Knight.
>
>
> *Beware the Power of the Dark Side!*
>
>
>
It might have been obvious how many of the essential parts fit together from their shape—even more so, with the aid of the Force. |
905 | In [How can I repair a split in a board?](https://woodworking.stackexchange.com/questions/889/how-can-i-repair-a-split-in-a-board), the accepted response suggests a "dutchman patch," but I'm getting mixed messages from various sources. Some show a dutchman as being the removal of damaged material and insertion of a patch of the same size. Others seem to specifically be a bowtie/butterfly shape reinforcing a crack in wood.
For instance, both of these images are being referred to as dutchman patches:

Source: <https://makezine.com/projects/dutchman-wood-repair/>

Source: <http://www.woodworkingtalk.com/f5/pre-dovetail-butterfly-dutchman-joints-question-36119/#post306942>
So which is it? Is there a difference between a dutchman patch and a butterfly patch? Is this just a case of the same phrase meaning two different things?
Is a dutchman a general term, and the butterfly specifically that *shape* of dutchman? | 2015/04/12 | [
"https://woodworking.stackexchange.com/questions/905",
"https://woodworking.stackexchange.com",
"https://woodworking.stackexchange.com/users/180/"
] | >
> So which is it? Is there a difference between a dutchman patch and a butterfly patch?
>
>
>
Yes and no. A Dutchman can be the shape of a butterfly (also called a bowtie) as well as many other shapes.
But a butterfly *in modern usage* typically means a wooden fixing to secure or stabilise a crack.
This is yet another example (of many!) of terminology being used loosely or irregularly in woodworking.
Note: Dutchman patches are usually thin, and can in fact be made from veneer; butterflies or bowties are typically thick because of their structural role. | A dutchman patch is basically using wood to fill a void larger than can be done with filler alone. A butterfly patch is typically used to prevent a check from getting larger, or to reinforce a joint. You could consider a butterfly patch to be a specific case of a more general dutchman patch. |
40,339 | Does anyone know a Mac app for converting a movie (.mov) to a series of images? I know QuickTime could do this in the olden days, but that feature has been removed. | 2012/02/14 | [
"https://apple.stackexchange.com/questions/40339",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/8275/"
] | Yup, you can still do it with QuickTime Player. Download it from [here](http://support.apple.com/kb/DL923). I believe you can also install it from the Snow Leopard install disc. I'm not sure about the Lion install download.
Open the movie and click File>Export:

Then you can select Movie to Image Sequence:

Set your destination folder and watch Quicktime Player do its magic.
 | There is an app called "movie2picture" in App Store. |
11,004,583 | I know this has been asked by a few people but I haven't seen the answer. I have a PHP upload form for a file upload in a div tag. Is it possible to submit the form and upload the file without a page refresh? I haven't found the plugins to work because I want to submit the form with a title and other data attached as well. Any suggestions? I looked at using an iframe, but I'm not sure it will work with an upload.php wrapped in a div tag? | 2012/06/12 | [
"https://Stackoverflow.com/questions/11004583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1366676/"
] | Here is the simplest solution you can use: [Image upload without page refresh](http://php-drops.blogspot.ca/2011/02/image-upload-without-page-refresh.html).
There is no obligation to use the JavaScript part, but it gives you the opportunity to show a notice that the transfer has completed correctly.
If you are open to jQuery and want a more open solution, I suggest you take a look at the [jQuery Form Plugin](http://jquery.malsup.com/form/), which can manage it all for you, with some extras that could be interesting. | Try using something like AIM (http://www.webtoolkit.info/). And you should really get a little more experience with HTML, PHP and JS. |
11,004,583 | I know this has been asked by a few people but I haven't seen the answer. I have a PHP upload form for a file upload in a div tag. Is it possible to submit the form and upload the file without a page refresh? I haven't found the plugins to work because I want to submit the form with a title and other data attached as well. Any suggestions? I looked at using an iframe, but I'm not sure it will work with an upload.php wrapped in a div tag? | 2012/06/12 | [
"https://Stackoverflow.com/questions/11004583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1366676/"
] | Here is the simplest solution you can use: [Image upload without page refresh](http://php-drops.blogspot.ca/2011/02/image-upload-without-page-refresh.html).
There is no obligation to use the JavaScript part, but it gives you the opportunity to show a notice that the transfer has completed correctly.
If you are open to jQuery and want a more open solution, I suggest you take a look at the [jQuery Form Plugin](http://jquery.malsup.com/form/), which can manage it all for you, with some extras that could be interesting. | My favourite tool for this is [Uploadify](http://www.uploadify.com/). It includes real-time progress indicators, drag-and-drop, etc.
There are other, older solutions as well, including [this one](http://roshanbh.com.np/2008/10/model-file-upload-overlay-box-jquery.html). That page has a simple demo that may provide a good example for how to do this on your own.
Honourable mention goes to [Dave Walsh's facebook-lightbox](http://davidwalsh.name/facebook-lightbox), into which you *might* be able to put a file upload form. Worth a try if you like the look. |
11,004,583 | I know this has been asked by a few people but I haven't seen the answer. I have a PHP upload form for a file upload in a div tag. Is it possible to submit the form and upload the file without a page refresh? I haven't found the plugins to work because I want to submit the form with a title and other data attached as well. Any suggestions? I looked at using an iframe, but I'm not sure it will work with an upload.php wrapped in a div tag? | 2012/06/12 | [
"https://Stackoverflow.com/questions/11004583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1366676/"
] | My favourite tool for this is [Uploadify](http://www.uploadify.com/). It includes real-time progress indicators, drag-and-drop, etc.
There are other, older solutions as well, including [this one](http://roshanbh.com.np/2008/10/model-file-upload-overlay-box-jquery.html). That page has a simple demo that may provide a good example for how to do this on your own.
Honourable mention goes to [Dave Walsh's facebook-lightbox](http://davidwalsh.name/facebook-lightbox), into which you *might* be able to put a file upload form. Worth a try if you like the look. | Try using something like AIM (http://www.webtoolkit.info/). And you should really get a little more experience with HTML, PHP and JS. |
232,662 | Suppose that I find an interesting website (having around 50 pages)
Suppose that I want to read it offline on my ebook device
How can I do it?
I can download the website through HTTrack and then
What tool can I use to "pack" all the downloaded pages into ONE ebook with a table of contents? | 2011/01/13 | [
"https://superuser.com/questions/232662",
"https://superuser.com",
"https://superuser.com/users/20173/"
] | The best piece of software to do this (I think) is [Sigil](http://code.google.com/p/sigil/). Cross platform and open source. | Depending on how nice the code of the website is (and thus whether any changes need to be made to the CSS / HTML - ePub only supports a limited subset) a simple conversion could work using [Calibre](http://calibre-ebook.com/ "Calibre"). A simpler interface than Sigil, it can sometimes have problems with malformed (X)HTML. |
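For a command-line route, Calibre also ships an `ebook-convert` tool that can pack a downloaded HTML tree into a single EPUB with a generated table of contents. The sketch below only assembles the command (file names are placeholders; `--max-levels` controls how deep linked pages are followed per Calibre's HTML-input options, to the best of my knowledge, so check `ebook-convert --help` on your version):

```python
def pack_site_command(index_html, out_epub, title, max_levels=2):
    """Assemble a calibre ebook-convert invocation; run it yourself with
    subprocess.run(cmd, check=True) if calibre is installed."""
    cmd = [
        "ebook-convert", index_html, out_epub,
        "--title", title,
        "--max-levels", str(max_levels),  # recurse into linked pages
    ]
    return cmd
```

For a 50-page site mirrored with HTTrack, pointing `index_html` at the mirror's entry page lets the converter follow the local links and build the book's table of contents from them.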
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | It is possible to embed "ideals" into our culture, religion and so on, and to somehow make people search for partners bearing those properties. For instance, the eternal bliss of the afterlife may be described as achievable only after successfully overcoming certain obstacles. Those obstacles may demand athletic performance, solving hard puzzles, or being a brave and die-hard soldier. The Vikings embedded those combat skills in their "Valhalla". The Mesoamericans (Maya, Aztec, Inca) describe the road to the afterlife as filled with sophisticated puzzles, both mental and physical (the Xibalba, "place of fear", of the Maya). You may add that the skills one develops in material life are reflected in the skills necessary to reach Valhalla, or whatever. Obviously, women will search for men with those skills in the hope of passing those good genes on to their children.
So yes, religion and culture may coax people to cooperate, at least to some degree. Needless to say, the time needed for humans to reproduce, cultural influences, and our curiosity for alternatives may put the brakes on such processes sooner or later. | Selective breeding of humans is possible, but there are obvious social dangers. Also, to really get the most out of it you have to be pretty brutal and cold-blooded about your culling process. And there is always the question of what traits you want to breed for. Science fiction horror stories abound with super soldiers and utterly faithful slaves. Usually the discussion involves the issue of fitness, and what you can breed into people versus what requires training. There are always trade-offs. Breeding for long life would be a disaster if severe population control were not imposed. With modern technology, strength and stamina, while attractive, are not as important as the ability to process computer data. In a world of nukes and smart bombs, do we need meaner, more lethal soldiers, or more clever and accommodating diplomats? European royalty attempted to preserve something called royal blood through a de-facto process of inbreeding, which resulted in stunted limbs, idiocy, and hemophilia. Their arrogant attempt to improve their genetic ability to rule resulted in exactly the opposite. A very good fictional example of a society that uses selective breeding is Hellstrom's Hive by Frank Herbert. Though his fictional Hive is horrifying to our eyes, many of the techniques described would make a great deal of sense if you planned on breeding humans. And remember, his Hive had existed for only about a hundred years. Imagine what could be done in the time frame of the Roman Empire.
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | Of course you could selectively breed humans for particular traits just the same as any other animal.
You do not need ***any*** of the scientific advances you list to do it though.
Deliberate & knowing selective breeding of crops, agricultural animals & hunting dogs predates the theory of evolution by a very ***very*** long time (many hundreds of years if not thousands).
So there really is no plausible reason your Romans couldn't do this, other than:
1. The rather obvious difficulties of selectively breeding an unwilling human population.
2. The problem of keeping a breeding program on track with a species whose overseers won't live long enough to see more than two or three generations reach maturity.
Regarding point 2, very few organisations (companies & the like) survive much past a couple of hundred years (& that's not long enough by a very long chalk to do this), though the Catholic Church, on the other hand, has survived the best part of 2000 years in more or less its current form.
You wouldn't need your subjects to be exclusive to an individual, or to limit their procreation to just those within the chosen breeding group, either; you could reward them for producing children with a desirable individual & still let them reproduce with or marry whoever else they want. All that's really important for a selective breeding project like this one is that you get the offspring you want, not that they shouldn't have other offspring, so with the right incentives point 1 can perhaps be overcome.
All you need then is an organisation with adequate resources, able to persist for the required length of time & remain on target without significant drift in its goals from those originally set, & it's all good.
Empires don't tend to persist for long enough to accomplish what you have in mind (the Roman empire only lasted around 500 years & the Ottoman empire only did a little better at around 600 years) so to achieve what you want you'd probably need a religion or some sort of religious order within one of the more persistent religions.
>
> It's only taken since the 1800s to produce [Belgian Blue cattle](https://www.youtube.com/watch?v=K_QNZoGJguM) by selective breeding.
>
>
>
So people who, without ever having done any resistance training, will still look like [Arnold](https://www.youtube.com/watch?v=fbg4-EKgb2I) did in 1974 (if that's all you're aiming for, of course) are plausible within just 500 or 600 years. | There is a highly controversial interpretation of history that says that this was done in the USA to black slaves, resulting in [superior athletic performance in their descendants](https://www.colorlines.com/articles/film-black-olympians-dominate-because-theyre-descendants-slaves). I won't touch on the relative merits of such a statement, other than to say that given what we have done to other species, it's plausible that it can be done to humans as well.
Even the chicken industry admits there's a difference between chickens raised for [eggs or meat](http://www.redteamfarm.com/blog/egg-laying-chickens-vs-meat-chickens) purposes; some of the more extreme examples of this start to show significant deviations in their physiology, but I have yet to come across a scientific paper that addresses and quantifies this deviation.
The [Merino sheep](https://www.businessinsider.com.au/why-sheep-cant-stop-growing-their-fur-2015-9?r=US&IR=T) have lost the gene to turn off their wool production. These are sheep that, if not regularly shorn, will die of heat exposure under an increasingly thick coat of wool. They were effectively bred into existence in Europe in the 13th or 14th century.
All the cows on Earth are allegedly descendants of a herd of aurochs that were domesticated roughly ten thousand years ago.
The point being, the only difference between chickens, sheep, cows and humans is the longevity of the human generational period. In other words, if you really wanted to selectively breed humans you could do so and 2000 years is ample time to do it in.
The only problem I foresee is that humans are notoriously uncooperative with such programs, and being the intelligent species we are (and you're going to want to breed for that as well) we'll find ways to both breed when we're not supposed to and not breed where we're expected to. In such a case, successfully breeding humans will involve about the same amount of management overhead as successfully herding cats. |
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | Of course you could selectively breed humans for particular traits just the same as any other animal.
You do not need ***any*** of the scientific advances you list to do it though.
Deliberate & knowing selective breeding of crops, agricultural animals & hunting dogs predates the theory of evolution by a very ***very*** long time (many hundreds of years if not thousands).
So there really is no plausible reason your Romans couldn't do this, other than:
1. The rather obvious difficulties of selectively breeding an unwilling human population.
2. The problem of keeping a breeding program on track with a species that the overseers of the project won't live long enough to see more than two or three generations of reach maturity.
Regarding point 2, very few organisations (companies & the like) survive much past a couple of hundred years (& that's not long enough by a very long chalk to do this), though the Catholic Church on the other hand has survived the best part of 2000 years in more or less its current form.
You wouldn't need your subjects to be exclusive to an individual, or to limit their procreation to just those within the chosen breeding group; you could reward them for producing children with a desirable individual & still let them reproduce with or marry whoever else they want. All that's really important for a selective breeding project like this one is that you get the offspring you want, not that they shouldn't have other offspring, so with the right incentives point 1 can perhaps be overcome.
All you need then is an organisation with adequate resources able to persist for the required length of time & remain on target without significant drift in its goals from those originally set & it's all good.
Empires don't tend to persist for long enough to accomplish what you have in mind (the Roman empire only lasted around 500 years & the Ottoman empire only did a little better at around 600 years) so to achieve what you want you'd probably need a religion or some sort of religious order within one of the more persistent religions.
>
> It's only taken since the 1800's to produce [Belgian Blue cattle](https://www.youtube.com/watch?v=K_QNZoGJguM) by selective breeding.
>
>
>
So people that, without ever having done any resistance training, will still look like [Arnold](https://www.youtube.com/watch?v=fbg4-EKgb2I) did in 1974 (if that's all you're aiming for, of course) are plausible with just 500 or 600 years. | Selective breeding of humans is possible but there are obvious social dangers. Also to really get the most out of it you have to be pretty brutal and cold blooded about your culling process. And there is always that question about what traits you want to breed for. Science Fiction horror stories abound with super soldiers and utterly faithful slaves. Usually the discussion involves the issue of fitness and what you can breed into people and what requires training. There are always trade-offs. Breeding for long life would be a disaster if severe population control were not imposed. With modern technology, strength and stamina, while attractive, are not as important as the ability to process computer data. In a world of nukes and smart bombs do we need meaner more lethal soldiers or more clever and accommodating diplomats? European royalty attempted to preserve something called royal blood through a de-facto process of inbreeding which resulted in stunted limbs, idiocy, and hemophilia. Their arrogant attempt to improve their genetic ability to rule resulted in exactly the opposite. A very good fictional example of a society that uses selective breeding is Hellstrom's Hive by Frank Herbert. Though his fictional Hive is horrifying to our eyes, many of the techniques described would make a great deal of sense if you plan on breeding humans. And remember his Hive had existed for only about a hundred years. Imagine what could be done in the time frame of the Roman empire.
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | Of course you could selectively breed humans for particular traits just the same as any other animal.
You do not need ***any*** of the scientific advances you list to do it though.
Deliberate & knowing selective breeding of crops, agricultural animals & hunting dogs predates the theory of evolution by a very ***very*** long time (many hundreds of years if not thousands).
So there really is no plausible reason your Romans couldn't do this, other than:
1. The rather obvious difficulties of selectively breeding an unwilling human population.
2. The problem of keeping a breeding program on track with a species that the overseers of the project won't live long enough to see more than two or three generations of reach maturity.
Regarding point 2, very few organisations (companies & the like) survive much past a couple of hundred years (& that's not long enough by a very long chalk to do this), though the Catholic Church on the other hand has survived the best part of 2000 years in more or less its current form.
You wouldn't need your subjects to be exclusive to an individual, or to limit their procreation to just those within the chosen breeding group; you could reward them for producing children with a desirable individual & still let them reproduce with or marry whoever else they want. All that's really important for a selective breeding project like this one is that you get the offspring you want, not that they shouldn't have other offspring, so with the right incentives point 1 can perhaps be overcome.
All you need then is an organisation with adequate resources able to persist for the required length of time & remain on target without significant drift in its goals from those originally set & it's all good.
Empires don't tend to persist for long enough to accomplish what you have in mind (the Roman empire only lasted around 500 years & the Ottoman empire only did a little better at around 600 years) so to achieve what you want you'd probably need a religion or some sort of religious order within one of the more persistent religions.
>
> It's only taken since the 1800's to produce [Belgian Blue cattle](https://www.youtube.com/watch?v=K_QNZoGJguM) by selective breeding.
>
>
>
So people that, without ever having done any resistance training, will still look like [Arnold](https://www.youtube.com/watch?v=fbg4-EKgb2I) did in 1974 (if that's all you're aiming for, of course) are plausible with just 500 or 600 years. | In the Robert A. Heinlein future history series the Howard Foundation bred humans for longevity. They listed young men and women who each had four living and healthy grandparents and offered them payment to marry persons on the list. And after a (scientifically surprisingly) few generations people were born who could live for a few hundred years.
I have read a suggestion that if men born to fathers twice as old as the average father were induced (say, by an organization paying them) to father sons when they were themselves twice as old as the average father, and if this continued for a few generations of old fathers, it would produce persons with an average lifespan twice that of normal humans.
In modern society people have a pool of millions of potential future spouses, though of course they can only choose from among the thousands of persons that they actually happen to meet.
But for hundreds of thousands of years young people in endogamous bands only had a pool of a few dozen potential future spouses to choose from, while young people in exogamous bands only had maybe half a dozen times that many people in neighboring bands to choose from.
So I suspect that it is psychologically quite acceptable for a young human to be limited to choosing a spouse from among a limited group of a few thousand members of a specific society, such as humans being bred for intelligence, or stupidity, or strength, or weakness, or longevity, or short lifespans, or whatever qualities. Possibly the persons who were taken out of the program because they didn't test with enough of the desired quality would be more disappointed, as well as persons in the program who might be in love with them and forbidden to be with them. |
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | You don't need knowledge of genetics or cells or anything similar: breeding to improve stock (in both plants and animals) was well-understood by that point, having been practiced for millennia, even if they didn't know why it worked.
So there's nothing in principle stopping it, but you run into several major practical problems.
**1. Generation time**
The average cow can be ready for breeding at 13 months old. A pig, 6 months. Fowl, 6 months. A horse, 18 months.
At best, you're looking at 13-14 years old for a human. But since humans take several more years to reach physical maturity, and if you're wanting to breed a "superior" human, you want your subjects to be old enough to see if they express those traits, so really you need to wait until they're 17-20 and in their physical prime before you can decide which members you want to breed for the next generation.
Unless you get ridiculously lucky and stumble on a beneficial mutation, you're looking at a century or more before you *might* start seeing some results, and even then it's going to take time for you to spread it around.
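The "century or more" arithmetic above can be put in rough numbers. This is purely an illustrative back-of-envelope sketch, not something from the answer itself: the 20-year generation length, the heritability of 0.5, and the one-standard-deviation selection differential are all assumed values, and the breeder's equation R = h² × S is the standard quantitative-genetics approximation for per-generation response to selection.

```python
# Back-of-envelope sketch (illustrative assumptions, not from the answer):
# how many human generations fit in a breeding program, and how far a
# trait's mean can shift under the classic breeder's equation R = h^2 * S.

def generations(total_years, years_per_generation=20):
    """Whole generations that fit in a program of the given length,
    assuming breeders wait until subjects reach their physical prime."""
    return total_years // years_per_generation

def cumulative_gain(n_generations, heritability=0.5, selection_diff=1.0):
    """Total shift in the trait mean, in phenotypic standard deviations,
    assuming the same per-generation response R = h^2 * S throughout."""
    return n_generations * heritability * selection_diff

n = generations(100)           # the "century or more" from the answer
print(n, cumulative_gain(n))   # only 5 generations -> ~2.5 SD shift at best

n = generations(2000)          # the roughly 2000-year span in the question
print(n, cumulative_gain(n))   # 100 generations, ample time in principle
```

Even with these generous assumptions, a single human century buys only about five selection rounds, which is why the answer's point about generation time bites so hard compared with pigs or fowl.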
**2. Control**
You want to control breeding of cattle, or dogs, or chickens, or horses, it's not that hard. It's fairly trivial to control breeding access, separating the herd so that you can breed those traits you want in at least semi-controlled conditions.
Humans, on the other hand...they tend to not take being locked up very well, and have the tools (intelligence, manipulative limbs, *language*) to make trying that sort of control a true pain in the ass. And don't underestimate the power of language. If you go into a pen to pull out the best hen you want to mate with a specific rooster, you don't have to worry about the hens conspiring to ambush you and take the keys.
**3. Empathy**
You want to breed cows, aside from a few animal-rights activists (which were presumably rare back in Roman times) no one is going to really care. You want to do that same sort of thing to people, that's a much harder row to hoe. People, for the most part, care for other people. And that means you're going to have opposition. | Selective breeding of humans is possible but there are obvious social dangers. Also to really get the most out of it you have to be pretty brutal and cold blooded about your culling process. And there is always that question about what traits you want to breed for. Science Fiction horror stories abound with super soldiers and utterly faithful slaves. Usually the discussion involves the issue of fitness and what you can breed into people and what requires training. There are always trade-offs. Breeding for long life would be a disaster if severe population control were not imposed. With modern technology, strength and stamina, while attractive, is not as important as the ability to process computer data. In a world of nukes and smart bombs do we need meaner more lethal soldiers or more clever and accommodating diplomats? European royalty attempted to preserve something called royal blood through a de-facto process of inbreeding which resulted in stunted limbs, idiocy, and hemophilia. Their arrogant attempt to improve their genetic ability to rule resulted in exactly the opposite. A very good fictional example of a society that uses selective breeding is Hellstrom's Hive by Frank Herbert. Though his fictional Hive is horrifying to our eyes, many of the techniques described would make a great deal of sense if you plan on breeding humans. And remember his Hive had existed for only about a hundred years. Imagine what could be done in the time frame of the Roman empire. |
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | There is a highly controversial interpretation of history that says that this was done in the USA to black slaves, resulting in [superior athletic performance in their descendants](https://www.colorlines.com/articles/film-black-olympians-dominate-because-theyre-descendants-slaves). I won't touch on the relative merits of such a statement, other than to say that given what we have done to other species, it's plausible that it can be done to humans as well.
Even the chicken industry admits there's a difference between chickens raised for [eggs or meat](http://www.redteamfarm.com/blog/egg-laying-chickens-vs-meat-chickens) purposes; some of the more extreme examples of this start to show significant deviations in their physiology, but I have yet to come across a scientific paper that addresses and quantifies this deviation.
The [Merino sheep](https://www.businessinsider.com.au/why-sheep-cant-stop-growing-their-fur-2015-9?r=US&IR=T) have lost the gene to turn off their wool production. These are sheep that, if not regularly shorn, will die of heat exposure from an increasingly thick coat of wool. These were effectively bred into existence in Europe in the 13th or 14th century.
All the cows on Earth are allegedly descendants of a herd of aurochs that were domesticated many tens of thousands of years ago.
The point being, the only difference between chickens, sheep, cows and humans is the longevity of the human generational period. In other words, if you really wanted to selectively breed humans you could do so and 2000 years is ample time to do it in.
The only problem I foresee is that humans are notoriously uncooperative with such programs, and being the intelligent species we are (and you're going to want to breed for that as well) we'll find ways to both breed when we're not supposed to and not breed where we're expected to. In such a case, successfully breeding humans will involve about the same amount of management overhead as successfully herding cats. | Selective breeding of humans is possible but there are obvious social dangers. Also to really get the most out of it you have to be pretty brutal and cold blooded about your culling process. And there is always that question about what traits you want to breed for. Science Fiction horror stories abound with super soldiers and utterly faithful slaves. Usually the discussion involves the issue of fitness and what you can breed into people and what requires training. There are always trade-offs. Breeding for long life would be a disaster if severe population control were not imposed. With modern technology, strength and stamina, while attractive, is not as important as the ability to process computer data. In a world of nukes and smart bombs do we need meaner more lethal soldiers or more clever and accommodating diplomats? European royalty attempted to preserve something called royal blood through a de-facto process of inbreeding which resulted in stunted limbs, idiocy, and hemophilia. Their arrogant attempt to improve their genetic ability to rule resulted in exactly the opposite. A very good fictional example of a society that uses selective breeding is Hellstrom's Hive by Frank Herbert. Though his fictional Hive is horrifying to our eyes, many of the techniques described would make a great deal of sense if you plan on breeding humans. And remember his Hive had existed for only about a hundred years. Imagine what could be done in the time frame of the Roman empire. |
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | Eugenics
========
**After all, we do it with animals, why wouldn't it work with people?**
The concept has been around since the late 1800s, and was popularly accepted as not all that bad an idea even by the high and mighty up until WWII.
>
> In 1883, Sir Francis Galton, a respected British scholar and cousin of Charles Darwin, first used the term eugenics, meaning “well-born.” Galton believed that the human race could help direct its future by selectively breeding individuals who have “desired” traits. This idea was based on Galton’s study of upper class Britain. Following these studies, Galton concluded that an elite position in society was due to a good genetic makeup. While Galton’s plans to improve the human race through selective breeding never came to fruition in Britain, they eventually took sinister turns in other countries. - [history of eugenics](http://knowgenetics.org/history-of-eugenics/)
>
>
>
It's a dirty word now after a certain chap by the name of Adolf Hitler put it into full production and people started to understand what it really meant.
It was mostly considered in racial and class terms. The underclass, the poor, and immigrants, were generally to be discouraged from reproducing in favour of those with money or acceptable racial traits. We now consider that approach to be abhorrent, but as soon as you start suggesting selective breeding of humans you have to understand that it's not a new idea and quite how dangerous the ground you're treading on can become. | You don't need knowledge of genetics or cells or anything similar: breeding to improve stock (in both plants and animals) was well-understood by that point, having been practiced for millennia, even if they didn't know why it worked.
So there's nothing in principle stopping it, but you run into several major practical problems.
**1. Generation time**
The average cow can be ready for breeding at 13 months old. A pig, 6 months. Fowl, 6 months. A horse, 18 months.
At best, you're looking at 13-14 years old for a human. But since humans take several more years to reach physical maturity, and if you're wanting to breed a "superior" human, you want your subjects to be old enough to see if they express those traits, so really you need to wait until they're 17-20 and in their physical prime before you can decide which members you want to breed for the next generation.
Unless you get ridiculously lucky and stumble on a beneficial mutation, you're looking at a century or more before you *might* start seeing some results, and even then it's going to take time for you to spread it around.
**2. Control**
You want to control breeding of cattle, or dogs, or chickens, or horses, it's not that hard. It's fairly trivial to control breeding access, separating the herd so that you can breed those traits you want in at least semi-controlled conditions.
Humans, on the other hand...they tend to not take being locked up very well, and have the tools (intelligence, manipulative limbs, *language*) to make trying that sort of control a true pain in the ass. And don't underestimate the power of language. If you go into a pen to pull out the best hen you want to mate with a specific rooster, you don't have to worry about the hens conspiring to ambush you and take the keys.
**3. Empathy**
You want to breed cows, aside from a few animal-rights activists (which were presumably rare back in Roman times) no one is going to really care. You want to do that same sort of thing to people, that's a much harder row to hoe. People, for the most part, care for other people. And that means you're going to have opposition. |
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | In the Robert A. Heinlein future history series the Howard Foundation bred humans for longevity. They listed young men and women who each had four living and healthy grandparents and offered them payment to marry persons on the list. And after a (scientifically surprisingly) few generations people were born who could live for a few hundred years.
I have read a suggestion that if men born to fathers twice as old as the average father were induced (say, by an organization paying them) to father sons when they were themselves twice as old as the average father, and if this continued for a few generations of old fathers, it would produce persons with an average lifespan twice that of normal humans.
In modern society people have a pool of millions of potential future spouses, though of course they can only choose from among the thousands of persons that they actually happen to meet.
But for hundreds of thousands of years young people in endogamous bands only had a pool of a few dozen potential future spouses to choose from, while young people in exogamous bands only had maybe half a dozen times that many people in neighboring bands to choose from.
So I suspect that it is psychologically quite acceptable for a young human to be limited to choosing a spouse from among a limited group of a few thousand members of a specific society, such as humans being bred for intelligence, or stupidity, or strength, or weakness, or longevity, or short lifespans, or whatever qualities. Possibly the persons who were taken out of the program because they didn't test with enough of the desired quality would be more disappointed, as well as persons in the program who might be in love with them and forbidden to be with them. | Selective breeding of humans is possible but there are obvious social dangers. Also to really get the most out of it you have to be pretty brutal and cold blooded about your culling process. And there is always that question about what traits you want to breed for. Science Fiction horror stories abound with super soldiers and utterly faithful slaves. Usually the discussion involves the issue of fitness and what you can breed into people and what requires training. There are always trade-offs. Breeding for long life would be a disaster if severe population control were not imposed. With modern technology, strength and stamina, while attractive, is not as important as the ability to process computer data. In a world of nukes and smart bombs do we need meaner more lethal soldiers or more clever and accommodating diplomats? European royalty attempted to preserve something called royal blood through a de-facto process of inbreeding which resulted in stunted limbs, idiocy, and hemophilia. Their arrogant attempt to improve their genetic ability to rule resulted in exactly the opposite. A very good fictional example of a society that uses selective breeding is Hellstrom's Hive by Frank Herbert. Though his fictional Hive is horrifying to our eyes, many of the techniques described would make a great deal of sense if you plan on breeding humans. And remember his Hive had existed for only about a hundred years. 
Imagine what could be done in the time frame of the Roman empire. |
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | There is a highly controversial interpretation of history that says that this was done in the USA to black slaves, resulting in [superior athletic performance in their descendants](https://www.colorlines.com/articles/film-black-olympians-dominate-because-theyre-descendants-slaves). I won't touch on the relative merits of such a statement, other than to say that given what we have done to other species, it's plausible that it can be done to humans as well.
Even the chicken industry admits there's a difference between chickens raised for [eggs or meat](http://www.redteamfarm.com/blog/egg-laying-chickens-vs-meat-chickens) purposes; some of the more extreme examples of this start to show significant deviations in their physiology, but I have yet to come across a scientific paper that addresses and quantifies this deviation.
The [Merino sheep](https://www.businessinsider.com.au/why-sheep-cant-stop-growing-their-fur-2015-9?r=US&IR=T) have lost the gene to turn off their wool production. These are sheep that, if not regularly shorn, will die of heat exposure from an increasingly thick coat of wool. These were effectively bred into existence in Europe in the 13th or 14th century.
All the cows on Earth are allegedly descendants of a herd of aurochs that were domesticated many tens of thousands of years ago.
The point being, the only difference between chickens, sheep, cows and humans is the longevity of the human generational period. In other words, if you really wanted to selectively breed humans you could do so and 2000 years is ample time to do it in.
The only problem I foresee is that humans are notoriously uncooperative with such programs, and being the intelligent species we are (and you're going to want to breed for that as well) we'll find ways to both breed when we're not supposed to and not breed where we're expected to. In such a case, successfully breeding humans will involve about the same amount of management overhead as successfully herding cats. | In the Robert A. Heinlein future history series the Howard Foundation bred humans for longevity. They listed young men and women who each had four living and healthy grandparents and offered them payment to marry persons on the list. And after a (scientifically surprisingly) few generations people were born who could live for a few hundred years.
I have read a suggestion that if an organization paid men born to fathers twice as old as the average father to themselves father sons when they were twice as old as the average father, and if this continued for a few generations of old fathers, it would produce persons with an average lifespan twice that of normal humans.
In modern society people have a pool of millions of potential future spouses, though of course they can only choose from among the thousands of persons that they actually happen to meet.
But for hundreds of thousands of years young people in endogamous bands only had a pool of few dozen potential future spouses to choose from, while young people in exogamous bands only that maybe half a dozen times that many people in neighboring bands to choose from.
So I suspect that it is psychologically quite acceptable for a young human to be limited to choosing a spouse from among a limited group of a few thousand members of a specific society, such as humans being bred for intelligence, or stupidity, or strength, or weakness, or longevity, or short lifespans, or whatever qualities. Possibly the persons who were taken out of the program because they didn't test with enough of the desired quality would be more disappointed, as well as persons in the program who might be in love with them and forbidden to be with them. |
137,371 | The idea is that the concepts of DNA, inheritance, and micro-organisms were discovered significantly earlier in history. Roughly 50 years after the founding of Alexandria as a city state in 332 B.C., Greek scholars ended up developing the first microscopes. During their studies of human anatomy, they made the startling discovery of cells.
They posited theories that something within these small parts of our body (DNA) strongly affected the traits passed on from parents to their children. This appeared to be true, given what they could observe in plants, domestic animals, and even humans.
Under orders from Lysimachus, they began the ambitious project of selectively breeding humans in an attempt to bring out these characteristics:
Greater strength, stamina, and physical coordination (agility/dexterity).
Assume their work would be continued by the Roman Empire, and later the Byzantine and Holy Roman Empires as time marched on. By 1806, when the Holy Roman Empire was dissolved at last, how much progress could feasibly have been made through a selective breeding program? | 2019/01/22 | [
"https://worldbuilding.stackexchange.com/questions/137371",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/60352/"
] | Eugenics
========
**After all, we do it with animals, why wouldn't it work with people?**
The concept has been around since the late 1800s, and was popularly accepted as not all that bad an idea even by the high and mighty up until WWII.
>
> In 1883, Sir Francis Galton, a respected British scholar and cousin of Charles Darwin, first used the term eugenics, meaning “well-born.” Galton believed that the human race could help direct its future by selectively breeding individuals who have “desired” traits. This idea was based on Galton’s study of upper class Britain. Following these studies, Galton concluded that an elite position in society was due to a good genetic makeup. While Galton’s plans to improve the human race through selective breeding never came to fruition in Britain, they eventually took sinister turns in other countries. - [history of eugenics](http://knowgenetics.org/history-of-eugenics/)
>
>
>
It's a dirty word now after a certain chap by the name of Adolf Hitler put it into full production and people started to understand what it really meant.
It was mostly considered in racial and class terms. The underclass, the poor, and immigrants were generally to be discouraged from reproducing in favour of those with money or acceptable racial traits. We now consider that approach to be abhorrent, but as soon as you start suggesting selective breeding of humans you have to understand that it's not a new idea, and quite how dangerous the ground you're treading on can become. | It is possible to embed "ideals" into our culture, religion and so on, and to somehow make people search for partners bearing those properties. For instance, the eternal bliss of the afterlife may be described as being achievable only after successfully challenging some obstacles. Those obstacles may demand athletic performance, solving hard puzzles, being a brave and die-hard soldier. The Vikings embedded those combat skills in their "Valhalla". The Mesoamericans (Maya, Aztec, Inca) describe the road to the afterlife as filled with sophisticated puzzles, both mental and physical (the Xibalba, "place of fear", of the Maya). You may add that the skills one develops in his material life are reflected in his/her skills necessary to reach Valhalla, or whatever. Obviously, women will search for men with those skills in the hope of passing those good genes on to their children.
So yes, religion and culture may coax people to cooperate, at least to some degree. Needless to say, the time needed for humans to reproduce, cultural influences and our curiosity for alternatives may put the brakes on such processes sooner or later.
4,959 | Right now I go with copying all email contacts in BCC and write my own email address in To column ... that's what I could come up after a lot of searching in Google.
But, it shows my name in recipient list to my recipient.
I don't want to display that either. | 2010/07/30 | [
"https://webapps.stackexchange.com/questions/4959",
"https://webapps.stackexchange.com",
"https://webapps.stackexchange.com/users/3320/"
] | If leaving the To: field blank doesn't work, you could add something such as:
>
> user@example.com
>
>
>
to the To: field.
<http://en.wikipedia.org/wiki/Example.com>
Hope this helps. | As long as GMail allows you to leave the "To:" field blank you should be able to send the mail just by filling in the "BCC:" list. |
4,959 | Right now I go with copying all email contacts in BCC and write my own email address in To column ... that's what I could come up after a lot of searching in Google.
But, it shows my name in recipient list to my recipient.
I don't want to display that either. | 2010/07/30 | [
"https://webapps.stackexchange.com/questions/4959",
"https://webapps.stackexchange.com",
"https://webapps.stackexchange.com/users/3320/"
] | If leaving the To: field blank doesn't work, you could add something such as:
>
> user@example.com
>
>
>
to the To: field.
<http://en.wikipedia.org/wiki/Example.com>
Hope this helps. | When sending an email in Google, the "CC" field stands for carbon copy, and "BCC" stands for blind carbon copy, meaning that the recipients are "blind" to the other email addresses.
You do not need to enter an email in the "To" or "CC" fields. When your message arrives, your recipients will be able to see that the email was sent to their inbox, in addition to "undisclosed recipients." They will not be able to view the other email addresses, nor will they be able to "reply all." |
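The mechanics behind this can be sketched with Python's standard library (the addresses below are placeholders, and the "undisclosed-recipients:;" value is just an empty RFC 5322 group used as a conventional stand-in): Bcc recipients are handed to the mail server as envelope recipients only and are never written into the message headers, which is why nobody on the list can see the others or "reply all" to them.

```python
from email.message import EmailMessage

# Build a message with no visible recipient list.
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "undisclosed-recipients:;"  # optional placeholder group, no real addresses
msg["Subject"] = "Announcement"
msg.set_content("Hello everyone")

# The real recipients: kept out of the headers entirely.
bcc = ["alice@example.com", "bob@example.com"]

# When sending, the Bcc addresses go only on the SMTP envelope, never
# into the headers, so the delivered message carries no trace of them:
#
#   import smtplib
#   with smtplib.SMTP("smtp.example.com") as smtp:
#       smtp.send_message(msg, to_addrs=bcc)

# The serialized message contains no Bcc addresses.
print(msg.as_string())
```

This is only an illustration of the envelope-versus-header distinction; Gmail's web interface does the same bookkeeping for you when you fill in the BCC field.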
4,959 | Right now I go with copying all email contacts in BCC and write my own email address in To column ... that's what I could come up after a lot of searching in Google.
But, it shows my name in recipient list to my recipient.
I don't want to display that either. | 2010/07/30 | [
"https://webapps.stackexchange.com/questions/4959",
"https://webapps.stackexchange.com",
"https://webapps.stackexchange.com/users/3320/"
] | As long as GMail allows you to leave the "To:" field blank you should be able to send the mail just by filling in the "BCC:" list. | When sending an email in Google, the "CC" field stands for carbon copy, and "BCC" stands for blind carbon copy, meaning that the recipients are "blind" to the other email addresses.
You do not need to enter an email in the "To" or "CC" fields. When your message arrives, your recipients will be able to see that the email was sent to their inbox, in addition to "undisclosed recipients." They will not be able to view the other email addresses, nor will they be able to "reply all." |
19,033 | How long should I expect it to take to enable FileVault 2 on a fresh installation of Lion? I'm using a [mid-2010 15" MacBook Pro](http://en.wikipedia.org/w/index.php?title=MacBook_Pro&oldid=441784309#Technical_specifications_2) with an i7 and a 5,400RPM 500GB hard drive with only 10GB used.
[John Siracusa's 19-page review for Ars Technica](http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.ars/13) had this to say:
>
> Encryption happens transparently in the background, which is a good thing because it takes a long time.
>
>
>
Hopefully someone can be a bit more precise than this. | 2011/07/28 | [
"https://apple.stackexchange.com/questions/19033",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1674/"
] | Your mileage may vary, but it took about an hour to do on my clean Lion install (less than 10GB IIRC), Vertex 2 SSD. It'll obviously take longer the more data you have and the slower your drive. | On an early 2016 MacBook that I got out of the box this week, and on which I already have 30GB in use, it said 30 minutes right at the start. It seems to be faster than that, actually, because I started answering this and now it's already down to 15 minutes.
Plus I can use it to do other things while it's encrypting.
[](https://i.stack.imgur.com/2SuC5.png) |
19,033 | How long should I expect it to take to enable FileVault 2 on a fresh installation of Lion? I'm using a [mid-2010 15" MacBook Pro](http://en.wikipedia.org/w/index.php?title=MacBook_Pro&oldid=441784309#Technical_specifications_2) with an i7 and a 5,400RPM 500GB hard drive with only 10GB used.
[John Siracusa's 19-page review for Ars Technica](http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.ars/13) had this to say:
>
> Encryption happens transparently in the background, which is a good thing because it takes a long time.
>
>
>
Hopefully someone can be a bit more precise than this. | 2011/07/28 | [
"https://apple.stackexchange.com/questions/19033",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1674/"
] | 2 consecutive vault activations on 2 new MBPs with 750GB drives. Each was a clean Lion install with nothing else on it.
Time to encrypt: 12 hours minimum each time. By far the longest running disk encryption on any platform I have ever used. Also, this is the only disk encryption I have used that allowed me to use the machine whilst it was grinding bits. I accept the trade-off. | My personal experience is that it's faster than Bitlocker on Windows, i.e. no more than 1 hour usually. Bitlocker on Windows usually takes several hours depending on drive size of course, but even on a small, fast Raptor drive it takes forever. Encryption on Lion is painless in comparison, especially since it will run in the background and even resume if you turn off the computer.
I don't have any benchmarks to back this up, but I'm sure Ars Technica or Anandtech will come out with some soon.
19,033 | How long should I expect it to take to enable FileVault 2 on a fresh installation of Lion? I'm using a [mid-2010 15" MacBook Pro](http://en.wikipedia.org/w/index.php?title=MacBook_Pro&oldid=441784309#Technical_specifications_2) with an i7 and a 5,400RPM 500GB hard drive with only 10GB used.
[John Siracusa's 19-page review for Ars Technica](http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.ars/13) had this to say:
>
> Encryption happens transparently in the background, which is a good thing because it takes a long time.
>
>
>
Hopefully someone can be a bit more precise than this. | 2011/07/28 | [
"https://apple.stackexchange.com/questions/19033",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1674/"
] | Your mileage may vary, but it took about an hour to do on my clean Lion install (less than 10GB IIRC), Vertex 2 SSD. It'll obviously take longer the more data you have and the slower your drive. | My personal experience is that it's faster than Bitlocker on Windows, i.e. no more than 1 hour usually. Bitlocker on Windows usually takes several hours depending on drive size of course, but even on a small, fast Raptor drive it takes forever. Encryption on Lion is painless in comparison, especially since it will run in the background and even resume if you turn off the computer.
I don't have any benchmarks to back this up, but I'm sure Ars Technica or Anandtech will come out with some soon.
19,033 | How long should I expect it to take to enable FileVault 2 on a fresh installation of Lion? I'm using a [mid-2010 15" MacBook Pro](http://en.wikipedia.org/w/index.php?title=MacBook_Pro&oldid=441784309#Technical_specifications_2) with an i7 and a 5,400RPM 500GB hard drive with only 10GB used.
[John Siracusa's 19-page review for Ars Technica](http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.ars/13) had this to say:
>
> Encryption happens transparently in the background, which is a good thing because it takes a long time.
>
>
>
Hopefully someone can be a bit more precise than this. | 2011/07/28 | [
"https://apple.stackexchange.com/questions/19033",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1674/"
] | Mine may be the "untypical" case. Upgraded my mid 2010 2.66 GHz i7 MBP 17" from Snow Leopard 10.6.8 (including the supplementary update) to Lion, then enabled File Vault on the 500GB internal HDD (14+GB free) and continued working. Took more than 16 hours -- continuous -- as I didn't turn off the MBP until it was done. | On an early 2016 MacBook that I got out of the box this week, and on which I already have 30GB in use, it said 30 minutes right at the start. It seems to be faster than that, actually, because I started answering this and now it's already down to 15 minutes.
Plus I can use it to do other things while it's encrypting.
[](https://i.stack.imgur.com/2SuC5.png) |
19,033 | How long should I expect it to take to enable FileVault 2 on a fresh installation of Lion? I'm using a [mid-2010 15" MacBook Pro](http://en.wikipedia.org/w/index.php?title=MacBook_Pro&oldid=441784309#Technical_specifications_2) with an i7 and a 5,400RPM 500GB hard drive with only 10GB used.
[John Siracusa's 19-page review for Ars Technica](http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.ars/13) had this to say:
>
> Encryption happens transparently in the background, which is a good thing because it takes a long time.
>
>
>
Hopefully someone can be a bit more precise than this. | 2011/07/28 | [
"https://apple.stackexchange.com/questions/19033",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1674/"
] | Mine may be the "untypical" case. Upgraded my mid 2010 2.66 GHz i7 MBP 17" from Snow Leopard 10.6.8 (including the supplementary update) to Lion, then enabled File Vault on the 500GB internal HDD (14+GB free) and continued working. Took more than 16 hours -- continuous -- as I didn't turn off the MBP until it was done. | My personal experience is that it's faster than Bitlocker on Windows, i.e. no more than 1 hour usually. Bitlocker on Windows usually takes several hours depending on drive size of course, but even on a small, fast Raptor drive it takes forever. Encryption on Lion is painless in comparison, especially since it will run in the background and even resume if you turn off the computer.
I don't have any benchmarks to back this up, but I'm sure Ars Technica or Anandtech will come out with some soon.
19,033 | How long should I expect it to take to enable FileVault 2 on a fresh installation of Lion? I'm using a [mid-2010 15" MacBook Pro](http://en.wikipedia.org/w/index.php?title=MacBook_Pro&oldid=441784309#Technical_specifications_2) with an i7 and a 5,400RPM 500GB hard drive with only 10GB used.
[John Siracusa's 19-page review for Ars Technica](http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.ars/13) had this to say:
>
> Encryption happens transparently in the background, which is a good thing because it takes a long time.
>
>
>
Hopefully someone can be a bit more precise than this. | 2011/07/28 | [
"https://apple.stackexchange.com/questions/19033",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1674/"
] | 2 consecutive vault activations on 2 new MBPs with 750GB drives. Each was a clean Lion install with nothing else on it.
Time to encrypt: 12 hours minimum each time. By far the longest running disk encryption on any platform I have ever used. Also, this is the only disk encryption I have used that allowed me to use the machine whilst it was grinding bits. I accept the trade-off. | On an early 2016 MacBook that I got out of the box this week, and on which I already have 30GB in use, it said 30 minutes right at the start. It seems to be faster than that, actually, because I started answering this and now it's already down to 15 minutes.
Plus I can use it to do other things while it's encrypting.
[](https://i.stack.imgur.com/2SuC5.png) |
90,284 | I don't feel like this belongs on SU, so I put it here.
I know that "OS X" is pronounced "oh-ess ten," but how should the common construction "OS X 10.9" be pronounced?
The primary possibility I can think of is:
* The X becomes silent: "oh-ess ten-point-nine"
However, this feels awkward when reading from paper and there are suddenly silent words in the middle of sentences. Because of this, I also see as a possibility:
* This construction is unpronounceable and should be read "oh-ess ten mountain lion"
Of course, my personal favorite (the least accurate) is:
* The 10 becomes silent: "oh-ess eks-point-nine" | 2012/11/06 | [
"https://english.stackexchange.com/questions/90284",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/17688/"
] | A few options come to mind:
>
> "he thought this... but after being criticized, he \_\_ that the critics were correct"
>
>
>
* realized
* agreed
* concurred
* admitted
* was persuaded | I think you are looking for something like:
gave in,
backed down,
capitulated,
caved in,
conceded,
resigned,
surrendered,
threw in the towel,
washed his hands of,
yielded |
90,284 | I don't feel like this belongs on SU, so I put it here.
I know that "OS X" is pronounced "oh-ess ten," but how should the common construction "OS X 10.9" be pronounced?
The primary possibility I can think of is:
* The X becomes silent: "oh-ess ten-point-nine"
However, this feels awkward when reading from paper and there are suddenly silent words in the middle of sentences. Because of this, I also see as a possibility:
* This construction is unpronounceable and should be read "oh-ess ten mountain lion"
Of course, my personal favorite (the least accurate) is:
* The 10 becomes silent: "oh-ess eks-point-nine" | 2012/11/06 | [
"https://english.stackexchange.com/questions/90284",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/17688/"
] | Regarding the final sentence of the question (as quoted below) related words among others include *[co-opt](http://en.wiktionary.org/wiki/co-opt#English)*,
*[assimilate](http://en.wiktionary.org/wiki/assimilate)*, *[hoodwink](http://en.wiktionary.org/wiki/hoodwink)*, *[cave](http://en.wiktionary.org/wiki/cave#Verb)*, and perhaps *[recant](http://en.wiktionary.org/wiki/recant)*.
>
> It's actually a negative effect, his way of thinking permanently changed due to criticism (and I'm trying to say his original opinion was better).
>
>
>
However, to fit those words into your sample sentence (“he thought this ... but after being criticized, he \_\_ that the critics were correct”) requires adjustments, perhaps as illustrated in the following samples.
>
> • He thought his views were sound but the critics soon co-opted him.
>
> • His views were sound, but he caved under withering criticism.
>
> • Under light but unrelenting peer pressure, he recanted and was assimilated.
>
> • Despite his precise and marvelous mind, the critics soon hoodwinked him.
>
>
> | I think you are looking for something like:
gave in,
backed down,
capitulated,
caved in,
conceded,
resigned,
surrendered,
threw in the towel,
washed his hands of,
yielded |
90,284 | I don't feel like this belongs on SU, so I put it here.
I know that "OS X" is pronounced "oh-ess ten," but how should the common construction "OS X 10.9" be pronounced?
The primary possibility I can think of is:
* The X becomes silent: "oh-ess ten-point-nine"
However, this feels awkward when reading from paper and there are suddenly silent words in the middle of sentences. Because of this, I also see as a possibility:
* This construction is unpronounceable and should be read "oh-ess ten mountain lion"
Of course, my personal favorite (the least accurate) is:
* The 10 becomes silent: "oh-ess eks-point-nine" | 2012/11/06 | [
"https://english.stackexchange.com/questions/90284",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/17688/"
] | A few options come to mind:
>
> "he thought this... but after being criticized, he \_\_ that the critics were correct"
>
>
>
* realized
* agreed
* concurred
* admitted
* was persuaded | * *capitulated*
* *acquiesced*
* *agreed begrudgingly*
* *resigned himself to the fact*
* *caved in under pressure* |