Dataset schema (per-column type and min/max value or string length):

| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 1 | 74.7M |
| question | string (length) | 12 | 33.8k |
| date | string (length) | 10 | 10 |
| metadata | list | | |
| response_j | string (length) | 0 | 115k |
| response_k | string (length) | 2 | 98.3k |
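The rows below follow this schema: a question paired with two candidate answers (`response_j` and `response_k`). As a minimal sketch of how such a row could be represented and sanity-checked in code (the class name, the `validate` checks, and the sample values are my own, inferred from the column statistics above, not part of any published loader):

```python
from dataclasses import dataclass


@dataclass
class PairedRow:
    """One row of the paired-response schema described above."""
    qid: int          # question id (int64)
    question: str     # question body, 12 to 33.8k chars
    date: str         # "YYYY/MM/DD", always 10 chars per the stats
    metadata: list    # list of source URLs for the question
    response_j: str   # first candidate answer, 0 to 115k chars
    response_k: str   # second candidate answer, 2 to 98.3k chars


def validate(row: PairedRow) -> bool:
    """Cheap structural checks implied by the column statistics."""
    return (
        isinstance(row.qid, int)
        and row.qid >= 1
        and len(row.question) >= 12
        and len(row.date) == 10
        and isinstance(row.metadata, list)
    )


# Hypothetical row built from the first record below.
sample = PairedRow(
    qid=24327,
    question="What are the details of this chidambara rahasyam?",
    date="2018/02/13",
    metadata=["https://hinduism.stackexchange.com/questions/24327"],
    response_j="...",
    response_k="...",
)
assert validate(sample)
```

This is only an illustration of the schema; a real loader would read the rows from whatever serialized form the dataset ships in.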
24,327
It is believed that the one who understands chidambara rahasyam attains jnanam and gets enlightened. There are many books and TV serials on this topic. It is related to Sri naTarAja swami temple in the town of Chidambaram. What are the details of this chidambara rahasyam?
2018/02/13
[ "https://hinduism.stackexchange.com/questions/24327", "https://hinduism.stackexchange.com", "https://hinduism.stackexchange.com/users/7853/" ]
Firstly, as S K mentions in the [other answer](https://hinduism.stackexchange.com/a/24328/4459), the Chidambara Rahasya is just a vast expanse of emptiness. But there is more to it than meets the eye.

### Background

Shiva is said to be present across the universe in 8 forms. Pushpadanta, in his famous Shiva Mahima Stotra (you can listen to a [rendition of that on YouTube at 18:55](https://www.youtube.com/watch?v=KlyyZAbv_U8)), enumerates them as:

* Bhava
* Sharva
* Rudra
* Pasupathi
* Ugra
* Mahadeva
* Bheema
* Eshana

(You can learn more about the Ashtamurthi here: [What are the eight forms (Ashtamurti) of Lord Shiva?](https://hinduism.stackexchange.com/questions/21725/what-are-the-eight-forms-ashtamurti-of-lord-shiva).)

Each of these eight forms of Shiva represents one of the forces of nature, and there is one temple dedicated to each of them. You can find the mapping of Ashtamurthi to form of nature to temple on the [Sanathan Dharma Project website](https://sites.google.com/site/sanatandharmaproject/the-8-forms-of-lord-shiva).

### [Chidambaram Temple](https://en.wikipedia.org/wiki/Chidambaram)

Now let's focus on just one of them, Bheema. From the above link:

> Bheema: Akaasha Linga, Chidambaram, Tamil Nadu. This Kshetra is on the banks of the Cauvery. We don’t see any Murthy in the temple Garbha Gruha. The puranas speak of this Kshetra very highly. No one can see the Lord’s Murthy, except the highest spiritual souls. There is a space in the Garbha Gruha and many Abharanas are decorated, and the devotees assume the Lord is seated there. A very beautiful Nataraja murthy is in the outer Garbha Gruha for worship and for the satisfaction of the devotees.

The Chidambaram temple represents the Akasha Murthi form of Shiva, called Bheema. He represents space and the outer cosmos.
The etymology of the word Chidambaram is:

> The name of the town of this shrine, Chidambaram, comes from the Tamil word Chitrambalam (also spelled Chithambalam), meaning "wisdom atmosphere". The roots are *citt* or *chitthu*, meaning "consciousness" or "wisdom", while *ampalam* means "atmosphere".

### Chidambara Rahasya

Chidambara Rahasya refers to the fact that there is no visible deity in the sanctum sanctorum; instead there is just an empty space. This symbolizes that the Shiva there represents space, or Akasha. The golden chains hanging from the top are shaped in the form of Bilva leaves (in order to decorate the invisible Shiva idol).

A more comprehensive account of the temple and the rahasya is given in the book [Temples of South India](https://books.google.com/books?id=c08qf7d2TZQC&lpg=PA197) by VVS Reddy:

> In the chit-sabha, to the right side of Nataraja, there is the proverbial secret of Chidambaram (Chidambara rahasya), where several strings of golden bilva leaves hang in front of a curtain, behind which is empty space, but it is said to be the akasa-linga, one of the panchabhuta (five) lingas. Akasa, in other words, is nothingness, or void, and thus the linga is said to be there but is not visible; actually nothing exists there, but people believe that there is actually the akasa-linga. This make-believe process is the essence of Chidambara-rahasya. Here it has been provided to worship the empty space itself as god.

The book [The Madras Presidency: With Mysore, Coorg and the Associated States](https://books.google.com/books?id=1h07AAAAIAAJ&lpg=PA249) by Edgar Thurston also provides an explanation of what the Chidambara Rahasya is:

> At Chidambaram the emblem of the god is the ether linga, which has no actual existence, but is represented by an empty space in the holy of holies called the akasa or ether linga, wherein lies the so-called Chidambara rahasya, or secret of worship at Chidambaram.
### Symbolism

As S K mentioned [earlier](https://hinduism.stackexchange.com/a/24328/4459), the priest will lift the curtain and show you the space behind it. The curtain itself symbolizes Maya. People need to lift the shroud of Maya and stare into that emptiness, where they can see Shiva in the form of Bheema. From the [temple website](http://www.chidambaramnataraja.org/):

> Since ancient times, it is believed that this is the place where Lord Shiva and Parvathi are present, but are invisible to the naked eyes of normal people. In the Chidambaram temple of Lord Nataraja, Chidambara Ragasiyam is hidden by a curtain (Maya). Darshan of Chidambara Ragasiyam is possible only when priests open the curtain (or Maya) for special poojas. People who are privileged to have a darshan of Chidambara Ragasiyam can merely see golden vilva leaves (Aegle marmelos), signifying the presence of Lord Shiva and Parvathi in front of them. It is also believed that devout saints can see the Gods in their physical form, but no such cases have been officially reported.

---

There is much more to the story of the temple, including that of Nataraja and his cosmic dance, but that is beyond the scope of this answer.
Chidambara Rahasyam is a secret that's not a secret. There is a chamber next to the main Nataraja idol that is normally covered by a curtain. When it comes time to show you the Chidambara Rahasyam, the priest draws back the curtain and shines his deepam (light) into the chamber, and you see a lingam that is "not there" (the akasha linga). The lingam is defined by gold chains that hang from the ceiling; they vary in length so that the ends of the chains outline the silhouette of a linga. So the rahasyam is a linga that you can "see", but that is really not there.
13,609
I know that this is probably off-topic, but I am posting it anyway. I am having difficulty reconciling myself to contributing content to a site that is now owned by a subsidiary of [Naspers](https://en.wikipedia.org/wiki/Naspers), a South African company with a racist past of supporting decades of apartheid and white supremacy. The company *refused* to comply with requests from South Africa’s Truth and Reconciliation Commission to detail its complicity. Instead, 127 individual employees told the commission that Naspers “had formed an integral part of the power structure which implemented and maintained apartheid”. As the company became more global, it decided to “apologize” for its role, but its apology has been [criticized](http://www.thejournalist.org.za/the-craft/whats-missing-naspers-late-half-apology-for-apartheid/) as insufficient.

I mentioned Naspers’ past in a comment to the CEO’s [blog post](https://stackoverflow.blog/2021/06/02/prosus-acquires-stack-overflow/?cb=1&_ga=2.18515705.1961585004.1622783814-482213257.1622783814), and it was removed. Twice. This is censorship. My comment was completely truthful, but inconvenient to the image of Stack Exchange. Well, in my opinion its image is now *awful*.

I am going to take a break while I consider whether I can be morally complicit in the new corporate regime. I am not interested in being encouraged to stay, but I will respectfully listen to opinions arguing why Naspers’ past should be irrelevant.
2021/06/06
[ "https://physics.meta.stackexchange.com/questions/13609", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/199630/" ]
Your time is valuable -- invaluable, actually, since time is the one thing we can't buy or create -- and you have the absolute right (and responsibility) to use it in pursuits you support. So to "answer" the "question" posed at the end of your post: the past, present, and future of any company or entity you give your time (and money, for that matter) to is as relevant or irrelevant as you want it to be.

---

The reality of the world is that, unfortunately, any company that has been around for any length of time will likely have been a party to something objectionable. This is true for all of the major US, European, Chinese, Russian, etc. companies for the past 100+ years (and in some cases longer). Some of those companies were active participants in the atrocities. Others went along, and still others might have found it offensive but didn't speak up or do anything about it. Some companies may have tried to atone for their past. Others may have tried to hide it. Still others may refuse to move on. Perhaps it is performative, and perhaps it is substantive. I can't speak to the issues at play in this particular instance because I don't know the background in enough detail. There may be other issues or concerns with Prosus, its subsidiaries, other holdings, etc.

All I can say is that we are volunteers here with our time and our expertise. Hopefully we're here because we find joy in sharing that with others. But StackExchange is a for-profit company, and our passions and time support their bottom line. If who owns that bottom line gives you pause or troubles you, then it's always worth asking yourself if you still enjoy the experience.

---

I won't attempt to change your mind. That's not my place. And I wholeheartedly support you exploring your social and moral responsibilities with respect to your time and expertise. Do some research, if you want to, into what commitments have been made, or not made, with regard to the issues you find important.

Arguably the most important benefit of a free marketplace is that companies provide what consumers demand. For a long time the primary demand has been low prices, but there is a growing demand for social responsibility in corporate behavior, whether in investing, purchasing, doing other business with, or, in our instance, volunteering for and participating in a company's community.

If you decide you no longer find joy in the experience with the site because of the issues you identify, then I encourage you to spend your time on things that do bring you joy. Nobody can replace your time. Your contributions here have been immense and you would definitely be missed. If you do decide to continue, then I hope it is as rewarding and fun for you as it has been. Okay, well, I actually hope it is even more rewarding and even more fun than it has been, because this is a great community with tremendous potential, and I'd like people to enjoy things more than they already do!
While I have some sympathy for your viewpoint, you cannot police the world. The issue surely is whether it's a racist company *now*, not how it behaved when most of white South Africa was complicit (by action or apathy) in the crime of apartheid.

Experience shows that even when a company's or organization's senior management wants to apologise for some past action (e.g., one controlled by departed managers), lawyers will advise them not to, simply so as not to accept potentially unlimited liability claims. What you are looking for probably won't happen for that simple reason.

Is any of that right or ideal? Of course not. It is, however, the fundamental nature of politics that we accept necessary compromise as progress rather than reject it, as rejection is not constructive. The hope would be (and this does happen) that over time the company can move to a point where its apologies are more complete. But note, and I have seen this many times in my own country, that for some embittered groups (whether right or wrong), no apology would ever be good enough.

South Africans have, in the main, had to accept that whatever happened in the past, some line has to be drawn under it. The same process happened in Northern Ireland (my late father's birthplace) and [in many other places](https://en.wikipedia.org/wiki/List_of_truth_and_reconciliation_commissions). That is necessary practical political reality.

It is a matter for your own conscience whether you feel your actions are appropriate. I do know that your absence here would be felt by a community that has no real power to address the grievance you feel or achieve the goal you seek. Your contributions are valuable to a community that seeks to learn physics, many of them born long after the misery of apartheid ended and subject to their own miseries in their own time.

> I mentioned Naspers’ past in a comment to the CEO’s blog post, and it was removed. Twice. This is censorship. My comment was completely truthful, but inconvenient to the image of Stack Exchange. Well, in my opinion its image is now awful.

This is hardly surprising, and it is, let's be honest, overly optimistic to expect a business to let you use their own private resources to attack them. That isn't constructive of them, but it's not as if we haven't had to deal with that before on other issues.

There are perhaps better ways (in the long run) to achieve your goals than complete withdrawal. Perhaps an email campaign from interested members directly to Naspers and the South African government would be better?

Good luck with your decision, and you have my respect for your contributions here on SE.
The company’s past is not irrelevant, but what the company does *now* is more relevant. Moreover, to quote [Desmond Tutu](https://www.brainyquote.com/quotes/desmond_tutu_454135):

> If you want peace, you don’t talk to your friends. You talk to your enemies.

Engagement always works better than annihilation. I suspect that continually posting respectful and well-researched questions on their website, and organizing others to do so, will in the long run have a greater impact than boycotting the company, though of course the run might be *very* long.
One thing to consider is that the actions of any individual, company, or nation that has existed for a significant amount of time (on the appropriate time-scale) can be called into question:

1. if we apply modern-day standards to actions committed sufficiently far in the past,
2. if we look into them in sufficient detail, or
3. if we apply the standards of our community to other communities.

I could give many examples of things once acceptable that are now generally considered *horrible/awful*, or of horrific sides of people or historical events that are treated as honorable... but this would necessarily generate lots of outrage and name-calling.

Let me also point out that boycotting is not necessarily the best strategy to help those in need. It may actually have the very opposite effect, and thus be just as immoral. In this case, where the crime is in the past, boycotting probably comes at the expense of overlooking human rights abuses elsewhere.
13,609
I know that this is probably off-topic but I am posting it anyway. I am having difficulty reconciling myself to contributing content to a site that is now owned by a subsidiary of [Naspers](https://en.wikipedia.org/wiki/Naspers), a South African company with a racist past of supporting decades of apartheid and white supremacy. The company *refused* to comply with requests from South Africa’s Truth and Reconcilation Commission to detail its complicity. Instead, 127 individual employees told the commission that Naspers “had formed an integral part of the power structure which implemented and maintained apartheid”. As the company became more global, it decided to “apologize” for its role, but its apology has been [criticized](http://www.thejournalist.org.za/the-craft/whats-missing-naspers-late-half-apology-for-apartheid/) as insufficient. I mentioned Naspers’ past in a comment to the CEO’s [blog post](https://stackoverflow.blog/2021/06/02/prosus-acquires-stack-overflow/?cb=1&_ga=2.18515705.1961585004.1622783814-482213257.1622783814), and it was removed. Twice. This is censorship. My comment was completely truthful, but inconvenient to the image of Stack Exchange. Well, in my opinion its image is now *awful*. I am going to take a break while I consider whether I can be morally complicit in the new corporate regime. I am not interested in being encouraged to stay, but I will respectfully listen to opinions arguing why Naspers’ past should be irrelevant.
2021/06/06
[ "https://physics.meta.stackexchange.com/questions/13609", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/199630/" ]
Your time is valuable -- invaluable, actually, since time is the one thing we can't buy or create -- and you have the absolute right (and responsibility) to use it in pursuits you support. So to "answer" your "question" posed at the end of your post -- the past, present, and future of any company or entity you provide your time (and money, for that matter) to is as relevant or irrelevant as you want it to be. --- The reality of the world is that, unfortunately, any company that has been around for any length of time will likely have been a party to something objectionable. This is true for all of the major US companies, European companies, Chinese companies, Russian companies, etc. etc., for the past 100+ years (and in some cases, longer). Some of those companies were active participants in the atrocities. Others went along, and still others might have found it offensive but didn't speak up or do anything about it. Some companies may have tried to atone for their past. Others may have tried to hide it. Still others may refuse to move on. Perhaps it is performative, and perhaps it is substantive. I can't speak to the issues at play in this particular instance because I don't know the background in enough detail. There may be other issues or concerns with Prosus, its subsidiaries, other holdings, etc.. All I can say is that we are volunteers here with our time and our expertise. Hopefully we're here because we find joy in sharing that with others. But, StackExchange is a for-profit company and our passions and time support their bottom line. If who owns that bottom line gives you pause or troubles, then it's always worth asking yourself if you still enjoy the experience. --- I won't attempt to change your mind. That's not my place. And I wholeheartedly support you exploring your social and moral responsibilities with respect to your time and expertise. 
Do some research, if you want to, into what commitments have been made, or not been made, with regard to the issues you find important. Arguably the most important benefit of a free marketplace is that companies provide what consumers demand. For a long time, the primary demand has been low prices. But there is a growing demand for social responsibility in corporate behavior, whether it is in investing, purchasing, doing other business with, or in our instance, volunteering for/participating in their community. If you decide you no longer find joy in the experience with the site because of the issues you identify, then I encourage you to use your time in things that do bring you joy. Nobody can replace your time. Your contributions here have been immense and you would definitely be missed. If you do decide to continue, then I hope it is as rewarding and fun for you as it has been. Okay, well I actually hope it is even more rewarding and even more fun than it has been because this is a great community with tremendous potential and I'd like people to enjoy things more than they already do!
One thing to consider is that the actions of any individual, company, or nation that has existed for a significant amount of time (on the appropriate time-scale) can be called into question: 1. if we apply modern-day standards to actions committed sufficiently far in the past 2. if we look into it in sufficient detail 3. if we apply the standards of our community to other communities I could give many examples of acceptable sides of things generally considered *horrible/awful*, or of horrific sides of people or historical events that are treated as honorable... but this would necessarily generate lots of outrage and name-calling. Let me also point out that boycotting is not necessarily the best strategy to help those in need - it may actually have the very opposite effect, and thus be just as immoral. In this case, when the crime is in the past, boycotting probably comes at the expense of overlooking human rights abuses elsewhere.
13,609
I know that this is probably off-topic but I am posting it anyway. I am having difficulty reconciling myself to contributing content to a site that is now owned by a subsidiary of [Naspers](https://en.wikipedia.org/wiki/Naspers), a South African company with a racist past of supporting decades of apartheid and white supremacy. The company *refused* to comply with requests from South Africa’s Truth and Reconciliation Commission to detail its complicity. Instead, 127 individual employees told the commission that Naspers “had formed an integral part of the power structure which implemented and maintained apartheid”. As the company became more global, it decided to “apologize” for its role, but its apology has been [criticized](http://www.thejournalist.org.za/the-craft/whats-missing-naspers-late-half-apology-for-apartheid/) as insufficient. I mentioned Naspers’ past in a comment to the CEO’s [blog post](https://stackoverflow.blog/2021/06/02/prosus-acquires-stack-overflow/?cb=1&_ga=2.18515705.1961585004.1622783814-482213257.1622783814), and it was removed. Twice. This is censorship. My comment was completely truthful, but inconvenient to the image of Stack Exchange. Well, in my opinion its image is now *awful*. I am going to take a break while I consider whether I can be morally complicit in the new corporate regime. I am not interested in being encouraged to stay, but I will respectfully listen to opinions arguing why Naspers’ past should be irrelevant.
2021/06/06
[ "https://physics.meta.stackexchange.com/questions/13609", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/199630/" ]
The company’s past is not irrelevant, but what the company does *now* is more relevant. Moreover, to quote [Desmond Tutu](https://www.brainyquote.com/quotes/desmond_tutu_454135): > > If you want peace, you don’t talk to your friends. You talk to your enemies. > > > Engagement always works better than annihilation. I suspect that continually posting respectful and well researched questions on their website, and organizing others to do so, will in the long run have a greater impact than boycotting the company, but of course the run might be *very* long.
One thing to consider is that the actions of any individual, company, or nation that has existed for a significant amount of time (on the appropriate time-scale) can be called into question: 1. if we apply modern-day standards to actions committed sufficiently far in the past 2. if we look into it in sufficient detail 3. if we apply the standards of our community to other communities I could give many examples of acceptable sides of things generally considered *horrible/awful*, or of horrific sides of people or historical events that are treated as honorable... but this would necessarily generate lots of outrage and name-calling. Let me also point out that boycotting is not necessarily the best strategy to help those in need - it may actually have the very opposite effect, and thus be just as immoral. In this case, when the crime is in the past, boycotting probably comes at the expense of overlooking human rights abuses elsewhere.
19,150
In the late Middle Ages, every knight and even his retainers and squires were armed with a spear or a lance. When heavy cavalry gave way to pike squares, we still see the lance in use among the demi-lancers. Somewhere along the way, though, they completely disappeared, surviving only in the Polish army, and appropriately enough were resurrected by them in the 18th century when the effectiveness of the Polish cavalry was shown. Since then, the lance stayed with cavalry until they became mechanized. A possible culprit is the [Reiter](http://en.wikipedia.org/wiki/Reiter) employing the [caracole](http://en.wikipedia.org/wiki/Caracole), which showed tactical supremacy over the lancer in many battles such as [Coutras](http://en.wikipedia.org/wiki/Battle_of_Coutras), but that doesn't explain why they suddenly became resurgent again in the 18th/19th century. What caused the lance's temporary disappearance, and why? If it indeed was firearms and caracole tactics, then what made the lance become popular again later on?
2015/01/28
[ "https://history.stackexchange.com/questions/19150", "https://history.stackexchange.com", "https://history.stackexchange.com/users/2423/" ]
Because lances were unwieldy but required significant training to be proficient in. Their usefulness was progressively declining against the increasingly attractive (and cost-effective) firearms. > > Because of the nature of the weapon, and the training required to produce a proficient lancer, it had generally fallen from use by the mid 17th century. > > > **- Haythornthwaite, Philip. Napoleonic Light Cavalry Tactics. Osprey Publishing, 2013.** > > > At 3-4 metres long it was too clumsy to handle in close engagements, ineffective against the longer massed infantry pike, and quite useless in sieges and against a musket at any distance. It was expensive and broke easily. It required long training and discipline, "a weapon of much trouble and charge (weight)", says a Spanish authority in 1597. > > > **Kunzle, David, ed. *From Criminal to Courtier: the Soldier in Netherlandish Art 1550-1672.* Vol. 10. Brill, 2002.** > > > --- Despite these drawbacks, lances did not vanish "completely" during this period. When the Spanish Armada set sail for England in 1588, Queen Elizabeth ordered an army to be assembled from the county militias and feudal levies. Lances featured prominently in the cavalry component of this force, with 2711 demi-lancers (31%) and 4388 light horse (50%) using the weapon in some capacity. > > [W]e know that the horse was divided into three types: **demi-lancers**, as described by Sir Roger Williams, petronels, which were a form of harquebusiers on horseback armed with small-calibre weapons, and light horse armed with a light **lance** and single pistol. > > > **- Tincey, John. *Ironsides: English Cavalry 1588-1688.* Vol. 44. Osprey Publishing, 2012.** > > > Use of lances declined in Western and Central Europe after the [30 Years War](http://en.wikipedia.org/wiki/Thirty_Years%27_War), when sword-centric [cuirassiers](http://en.wikipedia.org/wiki/Cuirassier) became the last hurrah of the heavy cavalry. 
Nonetheless, they did not vanish - in fact, one of the [most celebrated lancer charges](http://en.wikipedia.org/wiki/Battle_of_Kircholm) in history took place in 1605. As late as 1644 a regiment of Scottish lancers performed so well against Royalists in the [Battle of Marston Moor](http://en.wikipedia.org/wiki/Battle_of_Marston_Moor) that all new cavalry levies after 1650 were ordered to be lancers (up from 50/50). In other armies, lances experienced a revival even before Napoleon. The Prussian Army for example raised a lancer unit, the [*Bosniakenkorps*](http://en.wikipedia.org/wiki/Bosniak_Corps), in 1745. The Austrians under [Emperor Joseph II](http://en.wikipedia.org/wiki/Joseph_II,_Holy_Roman_Emperor) created lance cavalry after the Polish fashion.
The fall (and rise) of lances was tied to other developments regarding horse troops. It was the (original) "cavalry" that used pointed weapons such as lances, dating back to the Middle Ages. By about the 17th century, there was a new type of horse soldier, [dragoons](http://en.wikipedia.org/wiki/Dragoon), who were mounted infantry rather than cavalry. As such, they were "musketeers" on horses, in contrast to "lancers." As time passed, tacticians "rethought" the value of hand weapons and gave dragoons swords, which were easier to handle than lances, while (in some cases) replacing their heavy muskets with lighter ["carbines"](http://en.wikipedia.org/wiki/Carbine) for firing. Provided with inferior weapons and horses, dragoons were usually at a disadvantage against both infantry and cavalry in a "stand up" fight, but their combination of speed and firepower made them ideal for patrolling, scouting, seizing and holding key points, etc. In the 19th century, the introduction of rifles changed the equation further, by making guns longer-ranged and by adding the "repeating" feature. At this point, riflemen on horses were at a disadvantage against riflemen on foot, but the cavalry did have the advantage of getting to key points faster. Using this advantage, cavalry would (mostly) fight dismounted, with one-fourth of the men holding the horses of three others. In a war in which (cavalry) general Nathan Bedford Forrest described success as "getting there firstest with the mostest," the advantage of faster arrival often outweighed the disadvantage of a one-quarter reduction in manpower. Even so, traditional cavalry (with lances) was still good for some things, like attacking artillery positions and "running down" fleeing infantrymen from broken formations. These advantages were more apparent in the plains of Eastern Europe (and in Spanish possessions) than in the more broken ground of the rest of Western Europe. 
So many East European armies reserved some cavalry for such purposes, while other armies largely switched to infantry. In this regard, the (remaining) use of cavalry was kind of like the old rock-paper-scissors game.
19,150
In the late Middle Ages, every knight and even his retainers and squires were armed with a spear or a lance. When heavy cavalry gave way to pike squares, we still see the lance in use among the demi-lancers. Somewhere along the way, though, they completely disappeared, surviving only in the Polish army, and appropriately enough were resurrected by them in the 18th century when the effectiveness of the Polish cavalry was shown. Since then, the lance stayed with cavalry until they became mechanized. A possible culprit is the [Reiter](http://en.wikipedia.org/wiki/Reiter) employing the [caracole](http://en.wikipedia.org/wiki/Caracole), which showed tactical supremacy over the lancer in many battles such as [Coutras](http://en.wikipedia.org/wiki/Battle_of_Coutras), but that doesn't explain why they suddenly became resurgent again in the 18th/19th century. What caused the lance's temporary disappearance, and why? If it indeed was firearms and caracole tactics, then what made the lance become popular again later on?
2015/01/28
[ "https://history.stackexchange.com/questions/19150", "https://history.stackexchange.com", "https://history.stackexchange.com/users/2423/" ]
Because lances were unwieldy but required significant training to be proficient in. Their usefulness was progressively declining against the increasingly attractive (and cost-effective) firearms. > > Because of the nature of the weapon, and the training required to produce a proficient lancer, it had generally fallen from use by the mid 17th century. > > > **- Haythornthwaite, Philip. Napoleonic Light Cavalry Tactics. Osprey Publishing, 2013.** > > > At 3-4 metres long it was too clumsy to handle in close engagements, ineffective against the longer massed infantry pike, and quite useless in sieges and against a musket at any distance. It was expensive and broke easily. It required long training and discipline, "a weapon of much trouble and charge (weight)", says a Spanish authority in 1597. > > > **Kunzle, David, ed. *From Criminal to Courtier: the Soldier in Netherlandish Art 1550-1672.* Vol. 10. Brill, 2002.** > > > --- Despite these drawbacks, lances did not vanish "completely" during this period. When the Spanish Armada set sail for England in 1588, Queen Elizabeth ordered an army to be assembled from the county militias and feudal levies. Lances featured prominently in the cavalry component of this force, with 2711 demi-lancers (31%) and 4388 light horse (50%) using the weapon in some capacity. > > [W]e know that the horse was divided into three types: **demi-lancers**, as described by Sir Roger Williams, petronels, which were a form of harquebusiers on horseback armed with small-calibre weapons, and light horse armed with a light **lance** and single pistol. > > > **- Tincey, John. *Ironsides: English Cavalry 1588-1688.* Vol. 44. Osprey Publishing, 2012.** > > > Use of lances declined in Western and Central Europe after the [30 Years War](http://en.wikipedia.org/wiki/Thirty_Years%27_War), when sword-centric [cuirassiers](http://en.wikipedia.org/wiki/Cuirassier) became the last hurrah of the heavy cavalry. 
Nonetheless, they did not vanish - in fact, one of the [most celebrated lancer charges](http://en.wikipedia.org/wiki/Battle_of_Kircholm) in history took place in 1605. As late as 1644 a regiment of Scottish lancers performed so well against Royalists in the [Battle of Marston Moor](http://en.wikipedia.org/wiki/Battle_of_Marston_Moor) that all new cavalry levies after 1650 were ordered to be lancers (up from 50/50). In other armies, lances experienced a revival even before Napoleon. The Prussian Army for example raised a lancer unit, the [*Bosniakenkorps*](http://en.wikipedia.org/wiki/Bosniak_Corps), in 1745. The Austrians under [Emperor Joseph II](http://en.wikipedia.org/wiki/Joseph_II,_Holy_Roman_Emperor) created lance cavalry after the Polish fashion.
The coming-back part is, IMHO, well covered by [Tom Au](https://history.stackexchange.com/a/19155/12602). But the disappearance is due to the modernisation of armies in the 15th century as well as the appearance of firearms. In the earlier Middle Ages, the nobles were equipped with lances and mounted on horses. This led to the tactical use of heavy cavalry, which was, e.g., quite effective in the First Crusade. One should note that the effectiveness of heavy cavalry was only partly due to the effectiveness of the lance, and partly to a psychological effect. This is quite well illustrated in Braveheart (regardless of the inaccuracies the film may present). The problem was: this unit was highly effective, but cost a lot of money. Nevertheless, through the centuries, it was considered the main unit of a feudal army. During the Hundred Years' War, however, three factors came in: the armies [became more professional](http://www.deremilitari.org/REVIEWS/crecy_mohacs.htm), [firearms](http://brego-weard.com/lib/newosp/European%20Medieval%20Tactics%202%201260-1500.pdf) appeared, which countered cavalry units well (by scaring the horses), and it was shown that those expensive units could be countered (slaughtered?) by much less expensive units ([Crécy](https://en.wikipedia.org/wiki/Battle_of_Cr%C3%A9cy), [Agincourt](https://en.wikipedia.org/wiki/Battle_of_Agincourt)). Professional armies rarely had the means for heavy cavalry (due to its cost), and nobles slowly retiring from actual battlefield combat also gradually reduced their numbers. That, coupled with effective means to fight against the once number-one unit, led to a (relative) disappearance of mounted lances from the battlefields of Western Europe.
19,150
In the late Middle Ages, every knight and even his retainers and squires were armed with a spear or a lance. When heavy cavalry gave way to pike squares, we still see the lance in use among the demi-lancers. Somewhere along the way, though, they completely disappeared, surviving only in the Polish army, and appropriately enough were resurrected by them in the 18th century when the effectiveness of the Polish cavalry was shown. Since then, the lance stayed with cavalry until they became mechanized. A possible culprit is the [Reiter](http://en.wikipedia.org/wiki/Reiter) employing the [caracole](http://en.wikipedia.org/wiki/Caracole), which showed tactical supremacy over the lancer in many battles such as [Coutras](http://en.wikipedia.org/wiki/Battle_of_Coutras), but that doesn't explain why they suddenly became resurgent again in the 18th/19th century. What caused the lance's temporary disappearance, and why? If it indeed was firearms and caracole tactics, then what made the lance become popular again later on?
2015/01/28
[ "https://history.stackexchange.com/questions/19150", "https://history.stackexchange.com", "https://history.stackexchange.com/users/2423/" ]
The fall (and rise) of lances was tied to other developments regarding horse troops. It was the (original) "cavalry" that used pointed weapons such as lances, dating back to the Middle Ages. By about the 17th century, there was a new type of horse soldier, [dragoons](http://en.wikipedia.org/wiki/Dragoon), who were mounted infantry rather than cavalry. As such, they were "musketeers" on horses, in contrast to "lancers." As time passed, tacticians "rethought" the value of hand weapons and gave dragoons swords, which were easier to handle than lances, while (in some cases) replacing their heavy muskets with lighter ["carbines"](http://en.wikipedia.org/wiki/Carbine) for firing. Provided with inferior weapons and horses, dragoons were usually at a disadvantage against both infantry and cavalry in a "stand up" fight, but their combination of speed and firepower made them ideal for patrolling, scouting, seizing and holding key points, etc. In the 19th century, the introduction of rifles changed the equation further, by making guns longer-ranged and by adding the "repeating" feature. At this point, riflemen on horses were at a disadvantage against riflemen on foot, but the cavalry did have the advantage of getting to key points faster. Using this advantage, cavalry would (mostly) fight dismounted, with one-fourth of the men holding the horses of three others. In a war in which (cavalry) general Nathan Bedford Forrest described success as "getting there firstest with the mostest," the advantage of faster arrival often outweighed the disadvantage of a one-quarter reduction in manpower. Even so, traditional cavalry (with lances) was still good for some things, like attacking artillery positions and "running down" fleeing infantrymen from broken formations. These advantages were more apparent in the plains of Eastern Europe (and in Spanish possessions) than in the more broken ground of the rest of Western Europe. 
So many East European armies reserved some cavalry for such purposes, while other armies largely switched to infantry. In this regard, the (remaining) use of cavalry was kind of like the old rock-paper-scissors game.
The coming-back part is, IMHO, well covered by [Tom Au](https://history.stackexchange.com/a/19155/12602). But the disappearance is due to the modernisation of armies in the 15th century as well as the appearance of firearms. In the earlier Middle Ages, the nobles were equipped with lances and mounted on horses. This led to the tactical use of heavy cavalry, which was, e.g., quite effective in the First Crusade. One should note that the effectiveness of heavy cavalry was only partly due to the effectiveness of the lance, and partly to a psychological effect. This is quite well illustrated in Braveheart (regardless of the inaccuracies the film may present). The problem was: this unit was highly effective, but cost a lot of money. Nevertheless, through the centuries, it was considered the main unit of a feudal army. During the Hundred Years' War, however, three factors came in: the armies [became more professional](http://www.deremilitari.org/REVIEWS/crecy_mohacs.htm), [firearms](http://brego-weard.com/lib/newosp/European%20Medieval%20Tactics%202%201260-1500.pdf) appeared, which countered cavalry units well (by scaring the horses), and it was shown that those expensive units could be countered (slaughtered?) by much less expensive units ([Crécy](https://en.wikipedia.org/wiki/Battle_of_Cr%C3%A9cy), [Agincourt](https://en.wikipedia.org/wiki/Battle_of_Agincourt)). Professional armies rarely had the means for heavy cavalry (due to its cost), and nobles slowly retiring from actual battlefield combat also gradually reduced their numbers. That, coupled with effective means to fight against the once number-one unit, led to a (relative) disappearance of mounted lances from the battlefields of Western Europe.
531,901
I would like to protect a PDF document using Adobe Digital Editions. I believe it is currently being used to protect eBooks and prevent illegal circulation. Can anyone shed some light on this? Is it possible to do it using C# or something?
2009/02/10
[ "https://Stackoverflow.com/questions/531901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/46795/" ]
You may want to have a look at [Adobe Content Server](http://www.adobe.com/products/contentserver/) and the [Adobe Digital Publishing Technology Center](http://www.adobe.com/devnet/digitalpublishing/) websites for some direction.
Yes, you can use C#: [See SDK Info Doc](http://web.archive.org/web/20100604014756/http://www.adobe.com/devnet/digitalpublishing/pdfs/ADE_LauncherSDK_DevNet.pdf). You can use any server-side processing language.
3,198
I notice that there have been a bunch of edits in the queue recently to the effect of removing one curse word. I understand that profanity in general should be avoided, but is there a lower boundary on how much is worthy of an edit? I mean, if the bad word in question (damn) occurs once and is irrelevant to the meaning of the question, like in some of the proposed edits, should we be approving those edits? So far, I've been approving them, but they feel a bit frivolous, especially when the edit has a more forceful description to the tune of "no offensive language is welcome here"... I read this ([To summarily edit out offensive language?](https://music.meta.stackexchange.com/q/2358/45266)), but I'm asking more about the super-trivial edits for isolated, comparatively tame words. Especially when it really seems like no one could take offense to it. I meant edits like this: [![Proposed Edit](https://i.stack.imgur.com/T9OUP.png)](https://i.stack.imgur.com/T9OUP.png) Although this wasn't a great example of the "no offensive language is welcome here" type comment. For that, see this (a related question about a slightly more extreme case): [Edits: Foul Language in a song warranting removal of link?](https://music.meta.stackexchange.com/q/3199/45266)
2019/04/03
[ "https://music.meta.stackexchange.com/questions/3198", "https://music.meta.stackexchange.com", "https://music.meta.stackexchange.com/users/37992/" ]
I am not against editing out even soft profanities (unless such an edit causes harm to the post), but I myself am unlikely to initiate such an edit for something like a single occurrence of "damn." That said, I usually will accept such edit suggestions. I think that we strive for a certain tone of professionalism in the content here, and removing profanities always feels like a nudge in the right direction to me, and hence an improvement in the post. But, if such an edit actually changes the poster's meaning or otherwise harms the post, the edit should be rejected. This is the case [for the edit referenced in your other question](https://music.meta.stackexchange.com/questions/3199/edits-foul-language-in-a-song-warranting-removal-of-link). Trying to edit profanities from a title or a quotation, or removing a link to material germane to the question altogether would actively cause harm.
**Hi. This is Maika Sakuranomiya. I am answering your question since I am the one who made the suggested edit.** --- It appears that Stack Exchange is very strict about content when I read <https://music.stackexchange.com/conduct> and <https://music.stackexchange.com/help/behavior>. Because of this, I came up with the idea of editing other users' posts by removing foul content in order to keep the site clean and prevent the posts from being marked as offensive. I was also inspired by the moderators on Music Stack Exchange (Doktor Mayhem, Dom, and Matthew Read), as they seem to help users remember what is okay and not okay for the site. One example is that they remove posts that may be spam, offensive, or do not attempt to answer the question. On the other hand, I am in the top 3% this year with over 1K reputation. I came up with some special ideas such as removing unacceptable content, upvoting posts with a negative score, and giving upvotes to posts that were in the "first posts" list in my review queues. I had in mind that other users would see me as a hero when I did such things. --- **It seems as if I had gone too far. *The basic answer to your question is "not to curse," though.* I am very sorry if I have offended you. I will go over the guide again and improve my behavior from now on, and I will try my best not to go too far in the future. I have been suspended on Music Stack Exchange twice so far, and I really hope I won't get suspended again. Thank you for your post, as I appreciate it, and I will try to show my best behavior in the future.**
151,685
I am about to change jobs, moving to an automotive embedded software company doing ASIL D development. The company puts a strong emphasis on its SAFe framework. Now, given my limited experience, when working with "official" Scrum management or "official" Agile methodologies in development at a couple of companies (the last two big ones where I worked), I found that they introduced frustration and resignation in coworkers, as they lost initiative and product innovation while working for years on the same project. Other engineer friends had similar opinions and "they live with it". Whether this was normal or not, only once did I work with constant enthusiasm, and that fit how I think. We were following the Kanban method, with small tickets, implementing changes as required, which turned out to be something like Extreme Programming. No complex tools for tracking, no coworkers paid to do only that, meetings held only if there was a problem, everyone feeling useful, and delivery estimates checked through close contact with each other - and experience. We had relatively fast deliveries, enthusiasm and the like. But the company was also smaller. As I am about to decide on this new job, I wonder: do you have any suggestions about which questions I could ask their managers at the interview to check whether SAFe is compatible with how I work? I am passionate about technology, electronics and reliability applied through them, not about management that confines creativity and engineering knowledge. I really feel clueless about how to discover this in advance, instead of discovering it after I start to work. As you might imagine, I am not an expert in such management matters, but I think that if something works, it should go smoothly for an engineer without spending weeks studying such methodologies instead of working. So I am also searching for a way to look at these things with the right perspective.
2020/01/22
[ "https://workplace.stackexchange.com/questions/151685", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/52838/" ]
Off the point: my opinion on SAFe is that it is not "agile" but something that tries to reconcile management and product management with agile teams. On the point: the issue is that, as with any methodology, the company can say "we're doing X" or "we're working with Y", and when you join you realize that they don't understand either X or Y and are just using buzzwords (intentionally or not). Start by researching SAFe to better understand what problems it tries to solve and how it works. Then, when interviewing, don't ask about SAFe, but about how they effectively work, with specific questions: > > How much planning is done on the project? Is it long-term planning (>1y), > mid-term planning (3~6 months), or short-term/scrum planning (1~4w)? > > > How are deliveries done? Which system is used? What tools? How long > does it take for a finished task to be delivered? > > > With this you'll know two things: whether, if they use SAFe, you want to work with it; and whether the processes you seek are effectively in place.
SAFe is a framework, not a prescription. That said, there are some fundamental things which, if you're not doing them (correctly), mean you're not doing Agile. I understand your frustrations with an Agile process, especially one done badly, but if you work with it, a lot of the pain is removed (the "big bang" approach to delivery, weeks of forced "death march" overtime, the screaming and blamestorming when things go wrong, management hiding behind "you didn't tell us it would be late", etc.). It's not perfect, of course; it can be really frustrating to be pulled out of an intensive, head-down coding session for some seemingly trivial ceremony (an Agile term for a regular meeting). And sometimes the "open and transparent" ethos is used by management to gather info for bashing the team.... I understand where you're coming from, but I've also over the years worked with some really stubborn people who cannot, will not, start properly using source control, an IDE (integrated development environment), or some other tool or process (testing?) that we, as a profession, now consider de rigueur. Do you like the job? Is the money good? Other aspects? Take it or don't based on those.
151,685
I am about to change jobs, moving to an automotive embedded software company doing ASIL D development. The company puts a strong emphasis on its SAFe framework. Now, given my limited experience, when working with "official" Scrum management or "official" Agile methodologies in development at a couple of companies (the last two big ones where I worked), I found that they introduced frustration and resignation in coworkers, as they lost initiative and product innovation while working for years on the same project. Other engineer friends had similar opinions and "they live with it". Whether this was normal or not, only once did I work with constant enthusiasm, and that fit how I think. We were following the Kanban method, with small tickets, implementing changes as required, which turned out to be something like Extreme Programming. No complex tools for tracking, no coworkers paid to do only that, meetings held only if there was a problem, everyone feeling useful, and delivery estimates checked through close contact with each other - and experience. We had relatively fast deliveries, enthusiasm and the like. But the company was also smaller. As I am about to decide on this new job, I wonder: do you have any suggestions about which questions I could ask their managers at the interview to check whether SAFe is compatible with how I work? I am passionate about technology, electronics and reliability applied through them, not about management that confines creativity and engineering knowledge. I really feel clueless about how to discover this in advance, instead of discovering it after I start to work. As you might imagine, I am not an expert in such management matters, but I think that if something works, it should go smoothly for an engineer without spending weeks studying such methodologies instead of working. So I am also searching for a way to look at these things with the right perspective.
2020/01/22
[ "https://workplace.stackexchange.com/questions/151685", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/52838/" ]
Off topic: my opinion on SAFe is that it is not "agile" but something that tries to reconcile management and product management with agile teams. On topic: the issue is that, as with any methodology, the company can say "we're doing X" or "we're working with Y", and when you join you realize that they don't understand either X or Y and are just using buzzwords (intentionally or not). Start by researching SAFe to better understand what problems it tries to solve and how it works. Then, when interviewing, don't ask about SAFe, but about how they effectively work, with specific questions:

> How much planning is done on a project? Is it long-term planning (>1 year), mid-term planning (3–6 months), or short-term/sprint planning (1–4 weeks)?

> How are deliveries done? Which system is used? What tools? How long does it take for a finished task to be delivered?

With this you'll know two things: whether, if they do use SAFe, you want to work with it; and whether the processes you seek are effectively in place.
This has absolutely nothing to do with SAFe or Scrum or Kanban or Extreme Programming. To start with, lots of companies claim to be using one or more of these methods, but you see a couple of things. One is that they are frameworks, and there are multiple ways to work within the bounds of a framework. Another is that the intent of the framework isn't understood and the organization does it wrong. The mere fact that they claim to follow these methods, whether in person, in a job description, or elsewhere, doesn't really mean that much. You know how you have worked in the past and been effective. You can ask questions about how these organizations work - is work pushed or pulled, how much time is allocated to innovation or R&D work, what tools are used and how do they fit into the development process, how much time is spent in meetings on a regular basis, how often is work delivered and integrated, and so on. Just don't worry about what process models or methods or frameworks they claim to use. Focus on what's important to you and ask these types of questions of everyone, from the hiring manager to the leads to the individual contributors who interview you. Maybe even ask the same questions of multiple people, especially if they are on different teams, to get a feel for how different teams in the same organization may be structured or go about their daily work.
151,685
I am about to change jobs to an automotive embedded software company doing ASIL D development. The company highlights its use of the SAFe framework. Now, in my limited experience working with "official" Scrum management or "official" Agile methodologies in a couple of companies (the last two big ones where I worked), I found they introduced frustration and resignation in coworkers, who lost initiative and lost product innovation while working for years on the same project. Other engineer friends had similar opinions and "live with it". Whether or not this was normal, only once did I work with constant enthusiasm, and that fit how I think. We were following the Kanban method, with small tickets, implementing changes as required, which turned out to be something like Extreme Programming. No complex tracking tools, no coworkers paid to do only that, meetings held only if there was a problem, everyone feeling useful, and delivery estimates checked through close contact with each other - and experience. We had relatively fast deliveries, enthusiasm, and the like. But the company was also smaller. As I am about to decide on this new job, I wonder: do you have any suggestions about which questions I could ask their managers at the interview to check whether SAFe is compatible with how I work? I am passionate about technology, electronics, and reliability applied through them, not about management or confining creativity and engineering knowledge. I feel clueless about how to discover this in advance, instead of discovering it after I start work. As you might imagine, I am not an expert in such management matters, but I think that if something works, it should go smoothly for an engineer, without spending weeks studying methodologies instead of working. So I am also searching for a way to look at these things with the right perspective.
2020/01/22
[ "https://workplace.stackexchange.com/questions/151685", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/52838/" ]
Off topic: my opinion on SAFe is that it is not "agile" but something that tries to reconcile management and product management with agile teams. On topic: the issue is that, as with any methodology, the company can say "we're doing X" or "we're working with Y", and when you join you realize that they don't understand either X or Y and are just using buzzwords (intentionally or not). Start by researching SAFe to better understand what problems it tries to solve and how it works. Then, when interviewing, don't ask about SAFe, but about how they effectively work, with specific questions:

> How much planning is done on a project? Is it long-term planning (>1 year), mid-term planning (3–6 months), or short-term/sprint planning (1–4 weeks)?

> How are deliveries done? Which system is used? What tools? How long does it take for a finished task to be delivered?

With this you'll know two things: whether, if they do use SAFe, you want to work with it; and whether the processes you seek are effectively in place.
My company rolled out SAFe based on the success of a grassroots agile movement. The good thing about it was that it brought more of management into feeding us work in a more agile way, i.e. by maintaining a prioritized backlog and at least trying to do small features with more frequent feedback. The bad thing was that it had a tendency to centralize decision making and added a lot of bureaucracy and overhead. When I was a scrum master during the grassroots phase, the role mostly meant leading the meetings. Our scrum master now does administrative work at least half time, and often full time. Our day-to-day is mostly the same, but our month-by-month feels a lot less efficient. For example, we are asked to produce 10-week plans when they almost never last past 5 weeks. I think a lot of our day-to-day is helped by the fact that we did grassroots first, and we still insist on a lot of the autonomy that provided. Were I interviewing for a job at any "agile" company, whether SAFe or not, I would ask about their planning process from when someone has an idea to when it is deployed to production. That will give you an idea of how much teams are autonomously executing in small increments toward a well-communicated shared vision, versus being top-down managed.
57,341
I lubed the chain on my motorcycle a few days ago, but a dark substance keeps flinging all over the swingarm, and I'm wondering what it might be. If it's the chain lube flinging off, why does it look so dark? The lube I put on the chain after cleaning it is light brown, and I wiped off the excess after lubing, so why does it keep flinging? How likely is it that it's not the lube flinging but engine oil that's somehow leaking onto the chain? **Additional info:** The chain and sprocket were changed a couple of weeks ago, and everything in the front sprocket area was cleaned. The lube I use is a spray-can chain lube with a sticky, light-brown consistency.
2018/07/26
[ "https://mechanics.stackexchange.com/questions/57341", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/37249/" ]
Driving on a flat flexes the internal cord far more than it was designed to take, so the tire casing is compromised to an unknown extent. If you only bop around on surface streets at 40 MPH, you could probably stick with it, but I would not want to run it on a 20-mile freeway commute. A blowout at speed could damage your $$ wheel too. I would base my decision on this, and on whether a loved one drives that car. Your car may require that both tires on the same axle be replaced together, so I understand this can be costly. If so, keep the "good" old tire for the rainy day when this happens again. If you decide to run with it, keep an eye out for any sidewall distortion, bubbles, or pressure loss for the life of the tire.
I don't want to be sued. However, if that were my tire, I would drive on it. The sidewall is only abraded a little. The worst-case scenario is that it starts to leak some day (incredibly unlikely). Why did it go flat in the first place, though? It sounds like it has other problems.
45,162
I'm looking for the name of a TV episode about a soldier who is somehow the only one NOT frozen in time. He has a dilemma of saving everyone (I think) or saving his wife, who will have a car accident once time starts again. He ends up rigging a truck so that it will somehow save his wife, allowing him to "fix" whatever the problem is before time restarts. I remember that it was in black and white, and I **thought** it was the Twilight Zone, but I can't find it. Maybe The Outer Limits?
2015/12/22
[ "https://movies.stackexchange.com/questions/45162", "https://movies.stackexchange.com", "https://movies.stackexchange.com/users/28998/" ]
Sounds like an episode of **The Outer Limits** original 1965 run, ***The Premonition*** (Season 2 episode 16). Instead of a soldier, he's a Test Pilot, saving his daughter, not his wife (she's with him), and it's from a military truck, not a car. Semantics. [From Wiki](https://en.wikipedia.org/wiki/The_Premonition_(The_Outer_Limits)): > > Jim Darcy, the pilot of an X-15 rocket-powered research aircraft, and his wife, Linda, become trapped 10 seconds ahead of their time, enabling them to watch time unfold to catch up with them at the rate of about one second every 30 minutes. In the time left before returning to synch with normal time, they see that their daughter, Janie, is about to be hit by a rolling military truck whose parking brake had not been set. Jim and Linda's inability to move objects in the "real" world prevents them from resetting the truck's parking brake or pulling young Janie out of danger. Their problem is aggravated as they soon learn that at the moment when time "catches up" with them, they must assume the exact positions they had been in five hours earlier, when this whole thing started, or they could remain in that state forever. > > >
Echoing my answer [here](https://scifi.stackexchange.com/questions/73631/movie-about-a-man-and-a-woman-accelerated-in-time-the-rest-of-the-world-was-sus/73632#73632) on another stack, This is an episode of 'The Outer Limits' called ["The Premonition"](http://en.wikipedia.org/wiki/The_Premonition_%28The_Outer_Limits%29) from January 1965. **Per wikipedia** > > Jim Darcy, the pilot of an X-15 rocket-powered research aircraft, and > his wife, Linda, become trapped 10 seconds ahead of their time, > enabling them to watch time unfold to catch up with them at the rate > of about one second every 30 minutes. In the time left before > returning to synch with normal time, they see that their daughter, > Janie, is about to be hit by a rolling military truck whose parking > brake had not been set. **Jim and Linda's inability to move objects in > the "real" world prevents them from resetting the truck's parking > brake or pulling young Janie out of danger.** Their problem is > aggravated as they soon learn that at the moment when time "catches > up" with them, they must assume the exact positions they had been in > five hours earlier, when this whole thing started, or they could > remain in that state forever. > > > > > **Jim hits upon a way to save his daughter from death. He removes seatbelts from his wife's car and ties them to the back wheel of the menacing truck. He then ties the other end around the brake lever** so that the truck's brakes will engage the moment the time warp ends. (It was, by that time, moving at 10 mph. > > >
251,830
I'm a computer technician and I have to reinstall computers of all brands a lot. I'm looking for a way to make a ghost image, for example of XP, which I can install on all computers - just like doing a clean install. Can anyone think of something that works?
2011/03/01
[ "https://superuser.com/questions/251830", "https://superuser.com", "https://superuser.com/users/11495/" ]
Sadly, XP doesn't do that. Windows 7 is intended to work that way (one image, regardless of hardware), but with XP you need a separate image for each make/model. The closest you can get is to set up an unattended installation that you can fire & forget. It'll still take longer than just copying over an image, but at least you won't have to interact & answer the prompts.
Sounds like you want a custom automated install. Creating a ghost image of Windows and expecting it to work right out of the box on many different arrangements of hardware is a pipe dream of all windows administrators. Have a read through this example using *nLite*: <http://teamtutorials.com/windows-tutorials/how-to-make-a-custom-windows-install-w-nlite> There are other tools for doing similar things, and this is precisely what people like Dell and HP etc do when making a windows installation for distribution with a computer (packed full of rubbish AV software etc)
251,830
I'm a computer technician and I have to reinstall computers of all brands a lot. I'm looking for a way to make a ghost image, for example of XP, which I can install on all computers - just like doing a clean install. Can anyone think of something that works?
2011/03/01
[ "https://superuser.com/questions/251830", "https://superuser.com", "https://superuser.com/users/11495/" ]
Sadly, XP doesn't do that. Windows 7 is intended to work that way (one image, regardless of hardware), but with XP you need a separate image for each make/model. The closest you can get is to set up an unattended installation that you can fire & forget. It'll still take longer than just copying over an image, but at least you won't have to interact & answer the prompts.
[Clonezilla](http://clonezilla.org/) works for me and I bet it will work for you. From [wikipedia's page](https://secure.wikimedia.org/wikipedia/en/wiki/Clonezilla): > > 'Clonezilla Live' enables a user to clone a single computer's storage media, or a single partition on the media, to a separate medium device. The cloned data can be saved as an image-file or as a duplicated copy of the data. The data can be saved to locally attached storage device, a SSH server, Samba Server or a NFS file-share. The clone file can then be used to restore the original when needed. > > > The Clonezilla application can be run from a USB-flash-drive, a CD-ROM, or a DVD-ROM. Clonezilla requires no modification to the computer; the software runs in its own booted environment. > > > Try for yourself and good luck!
226,778
I have often heard the sentence > > Bread, you know it makes sandwiches > > > in various contexts - as a painting on a wall in a living room or in a motivational book. What is the meaning of it?
2015/02/10
[ "https://english.stackexchange.com/questions/226778", "https://english.stackexchange.com", "https://english.stackexchange.com/users/109210/" ]
The quote is > > Bread. You know it makes sandwiches. > > > from the 2006 book *[Whatever You Think, Think the Opposite](http://www.scribd.com/doc/6384790/Think-the-Opposite)*, a collection of business and creative advice (of mixed reviews) by former Saatchi and Saatchi creative director Paul Arden. You need context to make sense of it; it is intended to be an illustration of the preceding pages: > > Having too many ideas is not always a good thing. It's too easy to move on to the next one … and the next one to the next one… If you don't have many ideas, you have to make those you do have work for you. Bread. You know it makes sandwiches. > > > In other words, even if you cannot think of a thousand different uses for bread, you know of one, and can profit from that knowledge somehow.
"Bread" in this context means "money." The statement is usually made in the context of "self-help and actualization" and is saying that money is not a bad thing but rather something which one needs to flourish (eat/survive/grow). See, for instance, [this slide show](http://www.slideshare.net/ppferreira/think-the-opposite) (slides 55-60). To paraphrase those slides: Make your ideas work for you--**Bread. You know it makes sandwiches.** For a creative person, don't think about technique, think about money.
84,185
I installed Microsoft's [TrueType core fonts for the Web](https://en.wikipedia.org/wiki/Core_fonts_for_the_Web) on my Linux computer so that I could use popular fonts such as Arial, Courier New, Comic Sans MS, Times New Roman, etc. Can I use these fonts outside my computer, or does the [EULA](http://corefonts.sourceforge.net/eula.htm) only allow me to use them to display documents on my computer? Suppose I use a word processing software (e.g. LibreOffice Writer or OpenOffice Writer) to create a text document that uses the Times New Roman typeface. * Am I allowed to print out the document, and give the printed document to other people? * Am I allowed to export the document as a PNG image, and post it on a website (e.g. here on Stack Exchange)? * Am I allowed to export the document as a PDF file, and distribute it publicly through the web? Note that the PDF file would include an embedded subset of the Times New Roman font, which is probably not allowed by the fonts' EULA. This is what I see in the document properties when I open such a PDF file using a PDF viewer: [![List of embedded fonts in the PDF file, as shown in Evince](https://i.stack.imgur.com/Ryzz5.png)](https://i.stack.imgur.com/Ryzz5.png) I have read the fonts' [EULA](http://corefonts.sourceforge.net/eula.htm), but the usage restrictions are not clear to me. I want to be sure that I do not violate the EULA when using the fonts.
2022/09/09
[ "https://law.stackexchange.com/questions/84185", "https://law.stackexchange.com", "https://law.stackexchange.com/users/24922/" ]
I will assume that the EULA that you linked to is provably the EULA under which your copy of the fonts was licensed. The core paragraph which says what you can do is as follows: > > 1. GRANT OF LICENSE. This EULA grants you the following rights: > > > Installation and Use. You may install and use an unlimited number of > copies of the SOFTWARE PRODUCT. > > > Reproduction and Distribution. You may reproduce and distribute an > unlimited number of copies of the SOFTWARE PRODUCT; provided that each > copy shall be a true and complete copy, including all copyright and > trademark notices, and shall be accompanied by a copy of this EULA. > Copies of the SOFTWARE PRODUCT may not be distributed for profit > either on a standalone basis or included as part of your own product. > > > It explicitly says that you can install any number of copies, there is no restriction as to where – not just "your computer" – and you can use any number of copies. You simply have to include the whole package including notices. Other parts say that you can't sell or modify the fonts. There is no issue whatsoever with you using a font to make a physical printout (you can install and use: that is a type of using). Likewise a graphic image of a printout. However, it is not clear whether a non-graphic PDF file can always be distributed: that depends on the content of the file (how the creating software handles the font). If the PDF engine copies an incomplete portion of the software (font) and does not copy the trademark and license information (I would be surprised if it did), then that is not allowed, because any copy must be complete. You would have to research the technical details of how font data is embedded in the PDF file. Since the EULA is not crystal clear, MS further addresses this question [here](https://docs.microsoft.com/en-us/typography/fonts/font-faq), under Document Embedding. 
Their brief statement is that > > If an application follows the rules and restrictions defined in the > OpenType or TrueType specification, you can use it to embed Windows > supplied fonts in any document file it creates > > > followed by a more detailed analysis of when you can and cannot.
[Leon Laude's post here pretty much answers your question.](https://docs.microsoft.com/en-us/answers/questions/35519/the-legality-of-microsoft-fonts.html) > > The Microsoft Core Fonts are globally considered legal for > installation. Generally speaking, packages that are included in the > official Linux repositories are not encumbered by any copyright or > patent restrictions that would make it a crime to install. > > > Note however that the Msttcorefonts is provided in official Ubuntu > software sources, and is not free open source software. This package > installs proprietary fonts copyrighted by Microsoft, when you install > it you are asked to agree to Microsoft's terms of use for these fonts. > > > [You can also find a complete set of documentation on MS-related typography issues here.](https://docs.microsoft.com/en-us/typography/)
76,876
There have been countless images of female Iranian supporters dressed in party-hats and looking every bit as casually turned out as any Western women. In view of the recent severe, and brutal, actions inside Iran against women dressed in a way considered inappropriate, could this portend trouble for them when they return? Equally, at the World Cup match against England, the entire Iranian team stood grim-faced and silent during the playing of their country's anthem. It has been widely interpreted by the media as a protest against the regime that rules in Tehran.
2022/11/25
[ "https://politics.stackexchange.com/questions/76876", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/6837/" ]
Yes. One of the drawbacks of an authoritarian system of government is that what "the law" is is a bit more opaque. If the current President of Iran (or the current Supreme Leader) wants to punish the players, the players will be punished. Who is there to overrule a leader who is Supreme? You can argue that similar traits exist even in much more democratic forms of government; anyone from local prosecutors or police, up the executive chain all the way to the President, who has a beef with someone can do their best to disrupt that person's life using their power. The biggest differences, though, are that 1) people in the executive must generally still prove something to officials in a completely different branch of government before any actions can be taken, and 2) those same people are subject to the same set of laws, which usually include harsh punishments for abusing their power against ordinary citizens. Additionally, many officials in the executive branch are elected, so before very long they could be held to account for their actions by the people generally, even if the actions they take don't rise to the level of criminal. This system of checks and balances against abuses of power just doesn't exist in authoritarian governments. Some administrations have gone so far as to kidnap dissidents while they are abroad, and sometimes even to murder them outright. It is widely speculated that Russia used a chemical agent against defected spy [Sergei Skripal](https://en.wikipedia.org/wiki/Sergei_Skripal) and his 33-year-old daughter in order to silence him or exact revenge, and U.S.-Saudi relations are still being impacted by the murder of [Jamal Khashoggi](https://en.wikipedia.org/wiki/Jamal_Khashoggi) by members of the Saudi royal family (which just so happened to be audio-recorded by the Turkish government).
The balancing act authoritarians must perform when dispensing such justice is to find a way to keep the populace subdued enough out of fear and yet not angry enough to revolt, so disappearing the *entire* team is probably not something they can risk doing, certainly not in the wake of the recent protests in the country triggered by the death of [Mahsa Amini](https://en.wikipedia.org/wiki/Mahsa_Amini_protests) while in official custody. They may, however, choose to identify who amongst the players is the most vocal and attempt to make an example of that person or their family in order to silence the entire team (or in this case induce it to patriotically sing along), but the range of options the leaders of the country have, and can use against any single individual, is, in fact, supreme.
We have confirmation now. The rock climber who competed without a scarf about 6 weeks ago? Her family's house has been [demolished by the government](https://www.bbc.com/news/world-middle-east-63847173), which claims the demolition was unrelated. I would also add that the change in the soccer team's behavior, between their first appearance (speech, no anthem) and their subsequent avoidance of the subject and singing of the anthem again, seems to strongly point to threats of consequences made between those two events.
62,015,417
Compiling is the process of conversion from one level of abstraction to a lower level. Meanwhile, transpiling is conversion from one language to another at the same level of abstraction, like converting Java code to Kotlin/Python. That is my understanding of the two processes. Could someone please explain it in terms of Java code and JVM bytecode? And is my inference correct?
2020/05/26
[ "https://Stackoverflow.com/questions/62015417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7419457/" ]
Higher/Lower levels of abstraction from human language to machine language
==========================================================================

A compiler translates from a higher-level language to a lower-level language. By higher/lower we mean how far abstracted from machine language. So this would include Java language to bytecode: bytecode is closer to machine language, and further away from human language.

A transpiler converts between languages at comparable levels of abstraction. Converting from [EcmaScript](https://en.wikipedia.org/wiki/ECMAScript) 6 to EcmaScript 5 for compatibility with older web browsers would be one example. Converting from the Java language to [Kotlin](https://en.wikipedia.org/wiki/Kotlin_(programming_language)) would be another, or [Swift](https://en.wikipedia.org/wiki/Swift_(programming_language)) to Kotlin.

See Wikipedia: <https://en.wikipedia.org/wiki/Source-to-source_compiler>

Intermediate representation
---------------------------

In particular, [bytecode compiled from Java](https://en.wikipedia.org/wiki/Java_bytecode) language and [bitcode compiled via LLVM](https://en.wikipedia.org/wiki/LLVM#Intermediate_representation) (from Swift, [Rust](https://en.wikipedia.org/wiki/Rust_(programming_language)), etc.) are known as [Intermediate Representation (IR)](https://en.wikipedia.org/wiki/Intermediate_representation). An IR is designed for further processing, optimizing, and translating on its way to becoming [machine language](https://en.wikipedia.org/wiki/Machine_code).
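The source-to-bytecode step described above can be observed concretely in Python, whose interpreter also compiles source to bytecode for a virtual machine. This is only an analogy to `javac`/`javap` (it is not the JVM), but it makes the "lower level of abstraction" point tangible:

```python
import dis

# compile() turns source text (high abstraction) into a code object whose
# co_code attribute holds raw VM bytecode (lower abstraction) -- i.e. this
# is compilation, not transpilation.
code = compile("x = 2 + 3", "<example>", "exec")

# The bytecode itself is just bytes...
print(type(code.co_code))

# ...and dis renders it as VM instructions, much like `javap -c` does
# for a .class file.
dis.dis(code)
```

The disassembly shows instructions such as `LOAD_CONST` and `STORE_NAME`: names from the source survive, but the control structure is now expressed in stack-machine operations rather than expressions.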
It works as explained below:

> First, Java source code is converted into a bytecode file by the translator called the "compiler". The bytecode file gets a `.class` extension, and `javac` (the Java compiler) is the tool that compiles the `.java` file.
>
> Then, `java` is the tool used to invoke the Java interpreter, the JVM. Now the work of the JVM starts. When the JVM is invoked:
>
> 1. a subprogram in the JVM called the class loader (or system class loader) starts and loads the bytecode into memory (RAM).
> 2. another subprogram, the bytecode verifier, checks that the code does not violate the security rules; this is part of why Java programs are considered relatively secure.
> 3. the last subprogram, the execution engine, finally converts bytecode into machine code. The engine in use today is the JIT (Just-In-Time) compiler.

You can read about the same here: <https://www.quora.com/How-does-the-Java-interpreter-JVM-convert-bytecode-into-machine-code>
62,015,417
Compiling is the process of conversion from one level of abstraction to a lower level. Meanwhile, transpiling is conversion from one language to another at the same level of abstraction, like converting Java code to Kotlin/Python. That is my understanding of the two processes. Could someone please explain it in terms of Java code and JVM bytecode? And is my inference correct?
2020/05/26
[ "https://Stackoverflow.com/questions/62015417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7419457/" ]
> Would conversion from java code to jvm byte code considered compiling or transpiling?

It is compiling, according to the definitions that you give in your question. The bytecode instruction set is at a lower level of abstraction than Java source code.

---

Having said that, this distinction between compiling and transpiling is a bit nebulous, since there is no clear definition of a "level of abstraction". For example, one could argue that since C is sometimes called "high level assembly language", C++ is at a higher level of abstraction than C. So does that make C++ -> C conversion compilation? Or are they at the same level of abstraction, which makes C++ -> C conversion transpilation?

This is not totally academic, because the first implementation of the C++ language *did* translate C++ to C source code ... IIRC.
It works as explained below:

> First, Java source code is converted into a bytecode file by the translator called the "compiler". The bytecode file gets a `.class` extension, and `javac` (the Java compiler) is the tool that compiles the `.java` file.
>
> Then, `java` is the tool used to invoke the Java interpreter, the JVM. Now the work of the JVM starts. When the JVM is invoked:
>
> 1. a subprogram in the JVM called the class loader (or system class loader) starts and loads the bytecode into memory (RAM).
> 2. another subprogram, the bytecode verifier, checks that the code does not violate the security rules; this is part of why Java programs are considered relatively secure.
> 3. the last subprogram, the execution engine, finally converts bytecode into machine code. The engine in use today is the JIT (Just-In-Time) compiler.

You can read about the same here: <https://www.quora.com/How-does-the-Java-interpreter-JVM-convert-bytecode-into-machine-code>
62,015,417
Compiling is the process of conversion from one level of abstraction to a lower level. Meanwhile, transpiling is conversion from one language to another at the same level of abstraction, like converting Java code to Kotlin/Python. That is my understanding of the two processes. Could someone please explain it in terms of Java code and JVM bytecode? And is my inference correct?
2020/05/26
[ "https://Stackoverflow.com/questions/62015417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7419457/" ]
Higher/Lower levels of abstraction from human language to machine language
==========================================================================

A compiler translates from a higher-level language to a lower-level language. By higher/lower we mean how far abstracted from machine language. So this would include Java language to bytecode: bytecode is closer to machine language, and further away from human language.

A transpiler converts between languages at comparable levels of abstraction. Converting from [EcmaScript](https://en.wikipedia.org/wiki/ECMAScript) 6 to EcmaScript 5 for compatibility with older web browsers would be one example. Converting from the Java language to [Kotlin](https://en.wikipedia.org/wiki/Kotlin_(programming_language)) would be another, or [Swift](https://en.wikipedia.org/wiki/Swift_(programming_language)) to Kotlin.

See Wikipedia: <https://en.wikipedia.org/wiki/Source-to-source_compiler>

Intermediate representation
---------------------------

In particular, [bytecode compiled from Java](https://en.wikipedia.org/wiki/Java_bytecode) language and [bitcode compiled via LLVM](https://en.wikipedia.org/wiki/LLVM#Intermediate_representation) (from Swift, [Rust](https://en.wikipedia.org/wiki/Rust_(programming_language)), etc.) are known as [Intermediate Representation (IR)](https://en.wikipedia.org/wiki/Intermediate_representation). An IR is designed for further processing, optimizing, and translating on its way to becoming [machine language](https://en.wikipedia.org/wiki/Machine_code).
> Would conversion from java code to jvm byte code considered compiling or transpiling?

It is compiling, according to the definitions that you give in your question. The bytecode instruction set is at a lower level of abstraction than Java source code.

---

Having said that, this distinction between compiling and transpiling is a bit nebulous, since there is no clear definition of a "level of abstraction". For example, one could argue that since C is sometimes called "high level assembly language", C++ is at a higher level of abstraction than C. So does that make C++ -> C conversion compilation? Or are they at the same level of abstraction, which makes C++ -> C conversion transpilation?

This is not totally academic, because the first implementation of the C++ language *did* translate C++ to C source code ... IIRC.
67,924
At the end of Nov 2018 I took the Mini in for a tire rotation and told the garage I was taking a road trip, so they should check all fluid levels including the oil. When I picked it up they confirmed this was done. The oil change was due at the end of Dec.

On Jan 3 I checked the oil level and the dipstick was showing a very low level (540 miles driven). I opened the reservoir and it was empty! There were no visible oil leaks. Is it possible for the oil level to go down so much if they had really checked the oil levels at the shop? On Jan 3 they performed an engine oil system flush and a synthetic oil change.

In April the car started to run rough, with a cold start issue. The engine light came on and the car was towed to the garage. They said cylinders 1, 3 and 4 were misfiring, so they replaced all the coils and spark plugs. They also performed a throttle body/fuel injection service.

In early May the car was still having cold start issues and was towed back to the shop. This time all the injectors were replaced and the valves cleaned. I picked up the car on a Friday, and by Tuesday it was having the same cold start issues. This time they want to replace the high pressure fuel pump and purge valve.

Are all these issues related to the low oil level? What damage was caused to the engine? Is there something else going on with the engine? It's become a money pit!
2019/06/10
[ "https://mechanics.stackexchange.com/questions/67924", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/49603/" ]
This could be either a drivetrain problem or an engine problem.

One easy check for whether it's a transmission problem is to put it in neutral and see if the problem continues. If it does (stays relatively the same), then it's not a transmission problem (your transmission is disengaged). If it is a transmission problem, you'd start by checking fluid, and then try to narrow it down to the transmission or the differential (probably one of those two). I'm leaning toward it being a drivetrain problem, since that seems to match the symptoms, although the codes don't quite match. Since it doesn't change with how fast the car is moving, that limits it to the transmission.

If it's not a transmission problem, then it's an engine problem. The engine needs three things to run: fuel, air, and spark. So you'll have to start by narrowing it down to one of gas, airflow, or electrical.

* If you smell gas while the problem is happening, that probably means the problem has to do with gas not being burnt when it should be. So it's getting gas, but not burning it: electrical or airflow.
* If it's a gas problem, it will probably be related to how much gas you're giving it. You mentioned that it doesn't have to do with whether you're accelerating, so my guess is that it's not a gas problem.
* Electrical: your engine codes seem to suggest that it might be an electrical issue of some sort. The first code might've just come on because the engine was running really rough, and the second one would tell me that the car isn't burning all the gas it's getting. You'd want to check spark first, then move to the more complicated parts such as your O2 sensor(s), MAF/MAP sensor, knock sensor, etc.
* Airflow: if it were airflow, restarting the car wouldn't help.
Premium gas is overkill. I would consider a cleaner like Seafoam, which makes a fuel additive and a top-end cleaner sprayed into the intake; search YouTube for info on that. They even make crankcase cleaners.

In an '07, when was the last time you changed the transmission fluid? A large fleck of metal floating around your transmission would wreak havoc if the fluid hasn't ever been changed. If you have a dipstick: red fluid is okay, black fluid means change it immediately. Otherwise, make sure the engine oil is changed sooner than every 5,000 miles.
11,479,224
I am a complete newbie when it comes to Linux, I've touched some of it in OS classes in the 3rd semester, but that's about it. My OS interaction is limited to using WinAPI. I'm in the process of writing low-level systems for my game engine, that is **context creation, file handling, HIDs, threading, etc.** and I would like to be able to achieve the same functionality on Windows as well as on Linux. When looking for information about the Linux interface system all I can find is recommendations for libraries like Qt. It's a great library, I've used it on Windows, however I'm not as much interested in taking the path of least resistance as I am in the process of learning to work with Linux. It feels daunting since there isn't anything like Windows.h for Linux AFAIK. Could you guys try pointing me in the right direction? **What native (if such exist) libraries does Linux use for the window system (or just a way of creating an OpenGL context, but with Windows functionality like window focus, relative mouse coordinates, window minimalization), input from the keyboard/mouse/etc., file i/o and threading?** Doesn't have to be specific, it would just be nice to be able to type something into Google and get proper results.
2012/07/13
[ "https://Stackoverflow.com/questions/11479224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1137356/" ]
Graphics display on GNU/Linux systems is typically done with the [X Window System](https://en.wikipedia.org/wiki/X_Window_System), or X11 for short. But unlike Windows, X11 doesn't have a "built-in" set of UI controls like buttons and labels; it's a lower-level API that deals with just opening a window and drawing things into it.

To build a UI, you *could* use raw X11 and draw everything yourself, but most programs use a *toolkit*: a library that builds on top of X11 and implements common controls (such as buttons) and the event-handling infrastructure that you use to program with them. [GTK](http://www.gtk.org/) and [Qt](http://qt.nokia.com/) are two of the most common ones these days, but there are others, such as [Motif](https://en.wikipedia.org/wiki/Motif_%28widget_toolkit%29) (which is older and doesn't look as nice). Note that the [GNOME](http://www.gnome.org/) desktop environment is built using GTK, and [KDE](http://www.kde.org/) is built using Qt.

If you want to use OpenGL in X11, you use [GLX](https://en.wikipedia.org/wiki/GLX) to create a GL context and associate it with a window. It's similar in design to the [WGL](https://en.wikipedia.org/wiki/WGL_%28software%29) interface used to set up GL contexts in Windows, but different enough that code written for one can't be used for the other. For convenience and portability, applications often use a higher-level library such as [GLFW](http://www.glfw.org/) or [SFML](http://www.sfml-dev.org/) to handle GL context creation.

There's work being done on a new windowing framework called [Wayland](https://en.wikipedia.org/wiki/Wayland_%28display_server_protocol%29), which will (perhaps) eventually replace X11. Higher-level libraries (like GTK and Qt) will be ported to use Wayland as a backend (just like they can use Windows GDI and Apple Quartz as backends), so applications using those higher-level toolkits shouldn't be impacted much by the switch to Wayland.
The OpenGL website is a good resource <http://www.opengl.org/wiki/Getting_Started#Linux>
9,550,000
I'm new to Vala. I'm not familiar with GObject. As I understand it, GObject was spun off from the GLib project from GNOME; correct me if I'm wrong. I do like the syntax and implementation of Vala very much, yet it is not my intention to write desktop applications for GNOME. I also know (think I know) that Vala does not have a standard library other than GObject itself. So my question is: can Vala be used without GObject, and if it can, is it usable (are there optimal and maintained base libraries available for common things like type conversions, math, string manipulation, buffers, etc.)?
2012/03/03
[ "https://Stackoverflow.com/questions/9550000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/532430/" ]
There are some other Vala profiles, like [Dova](http://live.gnome.org/Dova) and Posix.
Here is another profile that you can use: [Aroop](https://github.com/kamanashisroy/aroop). (Note it is still under heavy development.) My hope is that it is a good fit if you need high performance. Please check the [features here](https://github.com/kamanashisroy/aroop/blob/master/talks/features.md).
9,550,000
I'm new to Vala. I'm not familiar with GObject. As I understand it, GObject was spun off from the GLib project from GNOME; correct me if I'm wrong. I do like the syntax and implementation of Vala very much, yet it is not my intention to write desktop applications for GNOME. I also know (think I know) that Vala does not have a standard library other than GObject itself. So my question is: can Vala be used without GObject, and if it can, is it usable (are there optimal and maintained base libraries available for common things like type conversions, math, string manipulation, buffers, etc.)?
2012/03/03
[ "https://Stackoverflow.com/questions/9550000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/532430/" ]
There are some other Vala profiles, like [Dova](http://live.gnome.org/Dova) and Posix.
**TLDR: I recommend using Vala with GLib/GObject, because it was designed on top of them.** While there may be alternative profiles for valac, they are either unfinished or deprecated.

The whole point of Vala is to reduce the amount of boilerplate required to write GLib and GTK+ applications in C. It also adds some nice improvements over C, like string and array being simple data types instead of error-prone pointers.

It mostly wraps the concepts present in GObject, like:

* classes
* properties
* inheritance
* delegates
* async methods
* reference counting (which is manual in C + GObject, and automatic aka ARC in Vala)
* type safety of objects
* generics
* probably much more ...

All of these concepts could be implemented without using GObject/GLib/GIO, but that would mean basically rewriting GObject/GLib/GIO, which doesn't make much sense.

If you don't want to write GUI applications, GLib can be used to write console applications as well; using GIO or GTK+ is optional in Vala, and applications work on a headless server too. I think there is even some effort in Qt to eventually switch to the GLib main loop, which would make interoperability of Qt and GLib much easier. A good example of a framework that uses GLib is GStreamer, which is used across different desktop environments.

In summary:

* GLib is a basic cross-platform application framework
* GObject is the object system used by the GLib ecosystem
* GIO is an I/O abstraction (network, filesystem, etc.) based on GLib + GObject
* GTK+ is a graphical UI toolkit based on GLib + GObject + GIO + others
* GNOME is a desktop environment based on all the "G" technologies
* Vala is a high-level programming language designed to reduce the boilerplate needed to use the "G" libraries from the C language

GTK+ originally came from GIMP and has since been split into the different "G" libraries that are the basis for GNOME today.
Vala also has very powerful binding mechanisms to make it easy to write so called "VAPI" files for any kind of C library out there. With the correct VAPI bindings you don't have to worry about manual memory management, pointers, zero termination of strings and arrays and some other tedious things that make writing correct C code so difficult.
9,550,000
I'm new to Vala. I'm not familiar with GObject. As I understand it, GObject was spun off from the GLib project from GNOME; correct me if I'm wrong. I do like the syntax and implementation of Vala very much, yet it is not my intention to write desktop applications for GNOME. I also know (think I know) that Vala does not have a standard library other than GObject itself. So my question is: can Vala be used without GObject, and if it can, is it usable (are there optimal and maintained base libraries available for common things like type conversions, math, string manipulation, buffers, etc.)?
2012/03/03
[ "https://Stackoverflow.com/questions/9550000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/532430/" ]
**TLDR: I recommend using Vala with GLib/GObject, because it was designed on top of them.** While there may be alternative profiles for valac, they are either unfinished or deprecated.

The whole point of Vala is to reduce the amount of boilerplate required to write GLib and GTK+ applications in C. It also adds some nice improvements over C, like string and array being simple data types instead of error-prone pointers.

It mostly wraps the concepts present in GObject, like:

* classes
* properties
* inheritance
* delegates
* async methods
* reference counting (which is manual in C + GObject, and automatic aka ARC in Vala)
* type safety of objects
* generics
* probably much more ...

All of these concepts could be implemented without using GObject/GLib/GIO, but that would mean basically rewriting GObject/GLib/GIO, which doesn't make much sense.

If you don't want to write GUI applications, GLib can be used to write console applications as well; using GIO or GTK+ is optional in Vala, and applications work on a headless server too. I think there is even some effort in Qt to eventually switch to the GLib main loop, which would make interoperability of Qt and GLib much easier. A good example of a framework that uses GLib is GStreamer, which is used across different desktop environments.

In summary:

* GLib is a basic cross-platform application framework
* GObject is the object system used by the GLib ecosystem
* GIO is an I/O abstraction (network, filesystem, etc.) based on GLib + GObject
* GTK+ is a graphical UI toolkit based on GLib + GObject + GIO + others
* GNOME is a desktop environment based on all the "G" technologies
* Vala is a high-level programming language designed to reduce the boilerplate needed to use the "G" libraries from the C language

GTK+ originally came from GIMP and has since been split into the different "G" libraries that are the basis for GNOME today.
Vala also has very powerful binding mechanisms to make it easy to write so called "VAPI" files for any kind of C library out there. With the correct VAPI bindings you don't have to worry about manual memory management, pointers, zero termination of strings and arrays and some other tedious things that make writing correct C code so difficult.
Here is another profile that you can use: [Aroop](https://github.com/kamanashisroy/aroop). (Note it is still under heavy development.) My hope is that it is a good fit if you need high performance. Please check the [features here](https://github.com/kamanashisroy/aroop/blob/master/talks/features.md).
46,202
I'm looking for a locally installable Apache webapp that will act as a frontend to local SVN repos for browsing and administration. Specifically I need:

* Access controls based on the SVN repo's authz and passwd
* Administrative functions, like adding users and changing access controls
* Statistics, to monitor the use of the server

Some nice-to-have features would be allowing trusted users to add post-commit hooks, and a log-oriented view (as an admin, I don't care so much about the code as about seeing the history).
2009/07/26
[ "https://serverfault.com/questions/46202", "https://serverfault.com", "https://serverfault.com/users/919/" ]
For the first point, and only partially: [WebSVN](http://www.websvn.info/) can understand the authz file and follow the access rules you define there.

For adding users, consider using LDAP as a more general authentication mechanism, and use it for authentication when accessing SVN. I'm using MS Active Directory and have HTTP-based auth for the SVN server [and WebSVN]; I have reasonable defaults for all authenticated users and give more rights only to those who need them, where they need them.
You really need to check out [Trac](http://trac.edgewall.org/). There are Ubuntu packages and it'll do what you need, plus more.
6,807
I am building a humanoid robot with DC-motor-actuated fingers. There are 16 brushed DC motors to be position controlled with the help of Hall effect sensors implanted at the joints of each finger. I need a ready-made driver board to control these 16 3 W, 12 V DC motors. Each motor is also equipped with an incremental encoder for speed control. Thank you.
2015/03/16
[ "https://robotics.stackexchange.com/questions/6807", "https://robotics.stackexchange.com", "https://robotics.stackexchange.com/users/7533/" ]
Given that the brushed DC motors you are using can output a maximum power of 3 watts and are rated for 12 volts maximum, the max current will be 0.25 amps, or 250 milliamps. So you need to find something that can handle those specs.

In the most straightforward approach you have two options: either a dedicated driver board (e.g. the [adafruit motor shield](http://www.adafruit.com/products/1438)), or a dedicated IC that serves the same function at a fraction of the cost. Given that you need to control 16 brushed DC motors, the IC route would probably be the most cost-effective.

Personally I have used the [SN754410NE Quad Half-H Driver](http://www.ti.com/lit/ds/symlink/sn754410.pdf). The interesting thing about this driver IC is that you can control 2 DC motors in either direction, with supply voltages of up to 36 volts and drive currents of up to 1 amp, which is suitable for your application.

If you're going to control brushless DC motors, then that's a different story.
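The 0.25 A figure above follows directly from the power law P = V · I; a quick sanity check (values taken from the question, the 16-motor total is illustrative worst-case arithmetic, not a claim about simultaneous stall behavior):

```python
# Rated values from the question's motors.
power_w = 3.0     # watts per motor
voltage_v = 12.0  # volts

# P = V * I  =>  I = P / V
current_a = power_w / voltage_v
print(f"per-motor current: {current_a} A")   # 0.25 A, i.e. 250 mA

# Upper bound if all 16 motors drew rated power at once (illustrative):
total_a = 16 * current_a
print(f"worst-case total:  {total_a} A")     # 4.0 A
```

This kind of back-of-the-envelope check is worth doing before picking a driver: the per-channel continuous rating must exceed the per-motor figure, and the supply must handle the plausible total.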
I've had good experience with Pololu motor drivers. They have a decently wide selection of brushed DC motor controllers [here](https://www.pololu.com/category/10/brushed-dc-motor-controllers), and in particular I think [this one](https://www.pololu.com/product/1110) would suffice according to your specs. It's one of their lightweight controllers, and it is still capable of handling up to 3 A peak current per motor channel, or 1 A continuously. That seems plenty more than you'd need, as your setup only demands 0.25 A per motor. A single controller handles two motor channels, so you'd need a few of them.
71,129
I'm curious if there is some historical precedent for [this](https://www.bbc.com/news/world-europe-60539303), or if this is a rather new level of friction that didn't even exist during the Cold War:

> Estonia, Latvia, Slovenia and Romania said on Saturday they were banning some flights from Russia.
>
> Russia earlier said it would close its airspace to flights from Bulgaria, Poland and the Czech Republic after they issued a ban on Russian jets.
>
> Meanwhile, Russian-owned planes can no longer enter UK airspace.
>
> Estonian Prime Minister Kaja Kallas urged other European Union countries to issue similar restrictions on Twitter, adding: "There is no place for planes of the aggressor state in democratic skies." [...]
>
> The restriction on Russian flights over large swathes of eastern Europe will require Russian airlines to take circuitous routes.

[![enter image description here](https://i.stack.imgur.com/if3Rp.png)](https://i.stack.imgur.com/if3Rp.png)

Yeah, I know most of those countries (highlighted on that map) were actually in the Soviet bloc during the Cold War, but did Western countries ever impose such restrictions on Soviet aircraft during the heightened moments of the Cold War, e.g. during the Soviet invasion of Czechoslovakia?

Premise update: a day after I wrote the above, [the whole of the EU](https://www.bbc.com/news/world-europe-60539303) closed its airspace to Russian carriers.
2022/02/26
[ "https://politics.stackexchange.com/questions/71129", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/18373/" ]
> anything comparable to the current ban on Russian civilian aircraft in various European countries ever imposed during the Cold War?

Straight off the top of my head: the Berlin Blockade (24 June 1948 – 12 May 1949), which gave rise to the Berlin Airlift. The airlift used the West Berlin air corridors, also known as the Berlin corridors and control zone: three regulated airways for civil and military air traffic of the Western Allies between West Berlin and West Germany, passing over East Germany's territory. From 1945 to 1991, everything outside that airspace was Soviet-controlled territory, and anything that flew outside of it would be treated as a threat and could be shot down. The Allies also imposed their own counter-blockade, restricting trade with East Germany and East Berlin.

The only real incident that happened due to this was on Tuesday 29 April 1952, when Soviet authorities claimed that a DC-4 airliner had strayed off the international air corridor:

> Two Russian MiG-15s opened fire, causing substantial damage to the DC-4 and wounding three passengers. The aircraft was hit by 89 shots. Engines no. 3 and 4 were shut down and the pilot carried out a safe emergency landing at Berlin-Tempelhof.

During the Berlin Airlift itself, aircraft routinely buzzed and harassed each other's airspace. I know it is not a no-fly zone per se, but it is an example of restricted airspace with the threat of lethal force in Europe during the Cold War...

[![enter image description here](https://i.stack.imgur.com/BeAxJ.png)](https://i.stack.imgur.com/BeAxJ.png)

There was of course Operation Deny Flight in 1993, the enforcement of a United Nations no-fly zone over Bosnia and Herzegovina, but that was obviously post-Cold War.
So, thanks to Stuart F for the tip/comment, it appears that Aeroflot was banned from the US twice during the Reagan administration:

* in Dec 1981, due to perceived Soviet support for repression in Poland,
* in 1983, due to the downing of KAL 007.

Strangely enough, it's easier to find statements/references about the first event: [State Department](https://history.state.gov/historicaldocuments/frus1981-88v03/d125); [Reagan Library](https://www.reaganlibrary.gov/archives/speech/statement-us-measures-taken-against-soviet-union-concerning-its-involvement-poland). But Wikipedia [mentions](https://en.wikipedia.org/wiki/Aeroflot#:%7E:text=Aeroflot%20service%20between%20the%20Soviet,Korean%20Air%20Lines%20Flight%20007) that the latter was the lengthier ban, lasting until 1990!

A [co-biography](https://books.google.com/books?id=Ru5CKP3pcXMC) of Reagan and Thatcher mentions both instances and suggests that in 1983 more countries besides the US might actually have banned Aeroflot in response, but doesn't specify which (or for how long). (The Polish airline LOT was [apparently](https://liveandletsfly.com/united-states-ban-aeroflot/) also banned by Reagan in 1981.)

According to this latter account, the 1983 US ban on Aeroflot was more extensive in that their US offices were also closed and they were forbidden from selling tickets, even through third parties in the US, e.g. for connecting flights. Likewise, baggage transfers between US companies and Aeroflot were banned. There's a [1983 WaPo article](https://www.washingtonpost.com/archive/politics/1983/09/09/us-closes-aeroflots-two-offices/dbe05c53-b957-42bc-ab75-a3db8544ecc5/) cited for the latter. It also mentions that Canada took the much more limited measure of suspending Aeroflot's landing rights for 60 days, in 1983.
Interestingly enough, the Washington Aeroflot office was also [bombed in 1983](https://www.washingtonpost.com/archive/local/1983/02/19/soviets-claim-us-connived-in-aeroflot-office-bombing/b0e75951-0f69-4fc3-acfe-b3433f2f0d1b/)... and "It was the fourth bombing in six years". Also, there had been no US flights to Moscow since 1978, when Pan Am had ended theirs! In fact, negotiations during the early Gorbachev era to restore Aeroflot's access to the US [broke down](https://www.latimes.com/archives/la-xpm-1985-10-22-mn-12323-story.html) because the Soviets refused to consider US requests for more market-based considerations. Pan Am had ended their flights to Moscow because the Soviets would only allow them if their citizens could pay in non-convertible roubles.
71,129
I'm curious if there is some historical precedent for [this](https://www.bbc.com/news/world-europe-60539303) or if this is a rather new level of friction that didn't even exist during the Cold War: > > Estonia, Latvia, Slovenia and Romania said on Saturday they were banning some flights from Russia. > > > Russia earlier said it would close its airspace to flights from Bulgaria, Poland and the Czech Republic after they issued a ban on Russian jets. > > > Meanwhile, Russian-owned planes can no longer enter UK airspace. > > > Estonian Prime Minister Kaja Kallas urged other European Union countries to issue similar restrictions on Twitter, adding: "There is no place for planes of the aggressor state in democratic skies." [...] > > > The restriction on Russian flights over large swathes of eastern Europe will require Russian airlines to take circuitous routes. > > > [![enter image description here](https://i.stack.imgur.com/if3Rp.png)](https://i.stack.imgur.com/if3Rp.png) Yeah, I know most of those countries (highlighted on that map) were actually in the Soviet bloc during the Cold War, but did Western countries ever impose such restrictions on Soviet aircraft during the heightened moments of the Cold War, e.g. during the Soviet invasion of Czechoslovakia? Premise update: a day after I wrote the above, [the whole of the EU](https://www.bbc.com/news/world-europe-60539303) closed its airspace to Russian carriers.
2022/02/26
[ "https://politics.stackexchange.com/questions/71129", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/18373/" ]
Why yes. Somewhat famously, Soviet airspace was closed off to a majority of capitalist nations during almost the entire Cold War period. Instead, flights from Europe to the Far East (read: Japan, South Korea, Taiwan) regularly had scheduled fuel stops in [Anchorage, Alaska](https://en.wikipedia.org/wiki/Ted_Stevens_Anchorage_International_Airport#History). The Soviets did not take incursions into their airspace lightly, as seen by the fate of [Korean Air Lines flight 007](https://en.wikipedia.org/wiki/Korean_Air_Lines_Flight_007), which was shot down in 1983 en route from New York via Anchorage to Seoul after it strayed into Soviet airspace following a navigational error. In the second half of the Cold War, the Soviet Union reached agreements with some western airlines, [allowing them to connect Japan to Europe with a stopover in Moscow](https://theaviationblogger.com/trans-siberian-air-corridor-soviet-union/). (JAL and Aeroflot had pioneered a cooperation in the late 1960s, although this relied on Soviet-built aircraft and a Soviet flight crew.) Not all airlines offered Europe–Moscow–Japan connections though; some continued to fly via Alaska. Finally, it bears mentioning that Finnair [claims to be the first European airline to connect western Europe and Japan directly in 1983](https://company.finnair.com/en/about/history) (followed by being the first airline to connect Europe to Beijing in 1988). The Japan flights required extra fuel tanks and took a polar route, again avoiding Soviet airspace. I don't know if the same was done for the Beijing flights, whether they avoided the USSR by going south, or whether by that time Finland had secured first-freedom rights.
> > anything comparable to the current ban on Russian civilian aircraft in various European countries ever imposed during the Cold War? > > > Straight off the top of my head was the Berlin Blockade (24 June 1948 – 12 May 1949), which gave rise to the Berlin Airlift, using the West Berlin air corridors, also known as the Berlin corridors and control zone, which were three regulated airways for civil and military air traffic of the Western Allies between West Berlin and West Germany passing over East Germany's territory. From 1945 to 1991, everything outside those corridors was Soviet-controlled airspace, and anything that flew outside of them would be treated as a threat and could be shot down. The Allies also imposed their own counter-blockade, restricting trade with East Germany and East Berlin. > > The only real incident that happened due to this was when Soviet authorities claimed that a DC-4 airliner had strayed off the international air corridor on Tuesday 29 April 1952. > > > > > Two Russian MiG-15s opened fire, causing substantial damage to the DC-4 and wounding three passengers. The aircraft was hit by 89 shots. Engines no. 3 and 4 were shut down and the pilot carried out a safe emergency landing at Berlin-Tempelhof. > > > During the Berlin Airlift itself, aircraft routinely buzzed and harassed each other's airspace. I know it is not a no-fly zone per se, but it was restricted airspace with the threat of lethal force in Europe during the Cold War... [![enter image description here](https://i.stack.imgur.com/BeAxJ.png)](https://i.stack.imgur.com/BeAxJ.png) There was of course Operation Deny Flight in 1993, the enforcement of a United Nations no-fly zone over Bosnia and Herzegovina, but that was obviously post-Cold War.
71,129
I'm curious if there is some historical precedent for [this](https://www.bbc.com/news/world-europe-60539303) or if this is a rather new level of friction that didn't even exist during the Cold War: > > Estonia, Latvia, Slovenia and Romania said on Saturday they were banning some flights from Russia. > > > Russia earlier said it would close its airspace to flights from Bulgaria, Poland and the Czech Republic after they issued a ban on Russian jets. > > > Meanwhile, Russian-owned planes can no longer enter UK airspace. > > > Estonian Prime Minister Kaja Kallas urged other European Union countries to issue similar restrictions on Twitter, adding: "There is no place for planes of the aggressor state in democratic skies." [...] > > > The restriction on Russian flights over large swathes of eastern Europe will require Russian airlines to take circuitous routes. > > > [![enter image description here](https://i.stack.imgur.com/if3Rp.png)](https://i.stack.imgur.com/if3Rp.png) Yeah, I know most of those countries (highlighted on that map) were actually in the Soviet bloc during the Cold War, but did Western countries ever impose such restrictions on Soviet aircraft during the heightened moments of the Cold War, e.g. during the Soviet invasion of Czechoslovakia? Premise update: a day after I wrote the above, [the whole of the EU](https://www.bbc.com/news/world-europe-60539303) closed its airspace to Russian carriers.
2022/02/26
[ "https://politics.stackexchange.com/questions/71129", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/18373/" ]
Why yes. Somewhat famously, Soviet airspace was closed off to a majority of capitalist nations during almost the entire Cold War period. Instead, flights from Europe to the Far East (read: Japan, South Korea, Taiwan) regularly had scheduled fuel stops in [Anchorage, Alaska](https://en.wikipedia.org/wiki/Ted_Stevens_Anchorage_International_Airport#History). The Soviets did not take incursions into their airspace lightly, as seen by the fate of [Korean Air Lines flight 007](https://en.wikipedia.org/wiki/Korean_Air_Lines_Flight_007), which was shot down in 1983 en route from New York via Anchorage to Seoul after it strayed into Soviet airspace following a navigational error. In the second half of the Cold War, the Soviet Union reached agreements with some western airlines, [allowing them to connect Japan to Europe with a stopover in Moscow](https://theaviationblogger.com/trans-siberian-air-corridor-soviet-union/). (JAL and Aeroflot had pioneered a cooperation in the late 1960s, although this relied on Soviet-built aircraft and a Soviet flight crew.) Not all airlines offered Europe–Moscow–Japan connections though; some continued to fly via Alaska. Finally, it bears mentioning that Finnair [claims to be the first European airline to connect western Europe and Japan directly in 1983](https://company.finnair.com/en/about/history) (followed by being the first airline to connect Europe to Beijing in 1988). The Japan flights required extra fuel tanks and took a polar route, again avoiding Soviet airspace. I don't know if the same was done for the Beijing flights, whether they avoided the USSR by going south, or whether by that time Finland had secured first-freedom rights.
So, thanks to Stuart F for the tip/comment, it appears that Aeroflot was banned from the US twice during the Reagan administration: * in Dec 1981, due to the perceived Soviet support for repression in Poland, * in 1983, due to the downing of KAL 007. Strangely enough, it's easier to find statements/references about the first event: [State Department](https://history.state.gov/historicaldocuments/frus1981-88v03/d125); [Reagan Library](https://www.reaganlibrary.gov/archives/speech/statement-us-measures-taken-against-soviet-union-concerning-its-involvement-poland). But Wikipedia [mentions](https://en.wikipedia.org/wiki/Aeroflot#:%7E:text=Aeroflot%20service%20between%20the%20Soviet,Korean%20Air%20Lines%20Flight%20007) the latter was the lengthier ban, lasting until 1990! A [co-biography](https://books.google.com/books?id=Ru5CKP3pcXMC) of Reagan and Thatcher mentions both instances and suggests that in 1983 more countries besides the US might actually have banned Aeroflot in response, but doesn't specify which (or for how long). (The Polish airline LOT was [apparently](https://liveandletsfly.com/united-states-ban-aeroflot/) also banned by Reagan in 1981.) According to this latter account, the 1983 US ban against Aeroflot was more extensive in that their US offices were also closed and they were forbidden from selling tickets, even through third parties in the US, e.g. for connecting flights. Likewise, baggage transfers between US companies and Aeroflot were banned. There's a [1983 WaPo article](https://www.washingtonpost.com/archive/politics/1983/09/09/us-closes-aeroflots-two-offices/dbe05c53-b957-42bc-ab75-a3db8544ecc5/) cited for the latter. It also mentions that Canada took the much more limited measure of suspending Aeroflot landing rights for 60 days, in 1983.
Interestingly enough, the Washington Aeroflot office was also [bombed in 1983](https://www.washingtonpost.com/archive/local/1983/02/19/soviets-claim-us-connived-in-aeroflot-office-bombing/b0e75951-0f69-4fc3-acfe-b3433f2f0d1b/)... and "It was the fourth bombing in six years". Also, there had been no US flights to Moscow since 1978, when Pan Am had ended theirs! In fact, negotiations during the early Gorbachev era to restore Aeroflot's access to the US [broke down](https://www.latimes.com/archives/la-xpm-1985-10-22-mn-12323-story.html) because the Soviets refused to consider US requests for more market-based considerations. Pan Am had ended their flights to Moscow because the Soviets would only allow them if their citizens could pay in non-convertible roubles.
53,310
I am reading Dune at the moment and can't work out why the Atreides line at the beginning of the book seemingly contains only three people: the Duke, his Bene Gesserit consort and his son (later it turns out to be four, including a daughter). No mention is made of other relatives, which would be expected in most House families (especially one as old as the Atreides is supposed to be). It may be answered that the Bene Gesserit kept down the numbers in order to control their genetic traits, but is the line really that open to obliteration? It seems strange that in the whole of Caladan there are no relatives to fight for / take up control of Arrakis after everyone thinks that these three are killed. Even Baron Harkonnen had several young relatives to choose from. Another related question: why do we hear nothing of Caladan after the Atreides leave? Why don't any of their subjects try to avenge the deaths of their Ducal line? Any help, either from within the Dune Universe or from a narratological viewpoint, greatly appreciated!
2014/04/05
[ "https://scifi.stackexchange.com/questions/53310", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/24700/" ]
It is established that the Atreides have been a very poor House for many generations, with its renaissance only really beginning under Duke Leto's grandfather. Leto's father was married to a Corrino princess and the family enjoyed much success under his rule. It is not uncommon, historically, for royal and ducal families to die out due to a lack of heirs, and the Atreides are treated as being "not very fecund," meaning that they have never had many children. The Corrinos, genetically very closely related to the Atreides, are in the same boat; Muad'Dib's claim on the throne was actually almost as great as Shaddam's even before he married Irulan. The Harkonnens, on the other hand, are portrayed as being a very large, fecund family; the Richese ducal family, the Harkonnens' closest allies, are portrayed as having more than ten children in just one generation. This is why the Harkonnens have survivors to go underground after the fall of the family, as seen in *Heretics of Dune*, when the Bene Gesserit discover a post-Jihad stronghold of the surviving Harkonnens on Giedi Prime. Also bear in mind that it is very common, as is seen with Irulan, for the daughters of noble Houses to marry into other noble Houses; after this, their children will be members of the other House, rather than the one they were born into. This leaves the Atreides with many relatives, but no other members of the House itself. Regarding the Caladanians: Duke Leto took most of his military with him to Arrakis to defend against the expected Harkonnen-Sardaukar attack. Therefore, there was very little military presence on Caladan to fight back when the Imperials and Harkonnens showed up. Also bear in mind that from the point of view of the Caladanians, the Harkonnen takeover was legal; they were not aware of Sardaukar involvement.
Their former House's hatred of the Harkonnens is unlikely to have rubbed off on the general populace, and they would not have had a very great capacity for resistance, being a simple people.
The impression I got was that aristocratic breeding was tightly controlled by the Bene Gesserit. And as for Caladan: what could they possibly do? If they tried to rebel against the new Dukes, they'd be stomped on by the Sardaukar. If they tried to go intergalactic, the Guild would refuse to transport them.
53,310
I am reading Dune at the moment and can't work out why the Atreides line at the beginning of the book seemingly contains only three people: the Duke, his Bene Gesserit consort and his son (later it turns out to be four, including a daughter). No mention is made of other relatives, which would be expected in most House families (especially one as old as the Atreides is supposed to be). It may be answered that the Bene Gesserit kept down the numbers in order to control their genetic traits, but is the line really that open to obliteration? It seems strange that in the whole of Caladan there are no relatives to fight for / take up control of Arrakis after everyone thinks that these three are killed. Even Baron Harkonnen had several young relatives to choose from. Another related question: why do we hear nothing of Caladan after the Atreides leave? Why don't any of their subjects try to avenge the deaths of their Ducal line? Any help, either from within the Dune Universe or from a narratological viewpoint, greatly appreciated!
2014/04/05
[ "https://scifi.stackexchange.com/questions/53310", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/24700/" ]
The impression I got was that aristocratic breeding was tightly controlled by the Bene Gesserit. And as for Caladan: what could they possibly do? If they tried to rebel against the new Dukes, they'd be stomped on by the Sardaukar. If they tried to go intergalactic, the Guild would refuse to transport them.
They know that the whole thing is a set-up and that the Harkonnens must have planned their deaths in detail. But being small and unimportant on Caladan, and having a big and violent personal enemy, they have no real future. It is inevitable that Caladan will fall to the Harkonnens some day. Then they are offered the most important place in the universe, and they know that this is the only chance for them (and perhaps for Caladan, too). I think everybody knows that their chance of surviving there is nearly zero. But they know that. Still, their only option is to enter the obvious trap that has been set up for them, proudly and knowing that it is a trap, hoping for a complete miracle.
53,310
I am reading Dune at the moment and can't work out why the Atreides line at the beginning of the book seemingly contains only three people: the Duke, his Bene Gesserit consort and his son (later it turns out to be four, including a daughter). No mention is made of other relatives, which would be expected in most House families (especially one as old as the Atreides is supposed to be). It may be answered that the Bene Gesserit kept down the numbers in order to control their genetic traits, but is the line really that open to obliteration? It seems strange that in the whole of Caladan there are no relatives to fight for / take up control of Arrakis after everyone thinks that these three are killed. Even Baron Harkonnen had several young relatives to choose from. Another related question: why do we hear nothing of Caladan after the Atreides leave? Why don't any of their subjects try to avenge the deaths of their Ducal line? Any help, either from within the Dune Universe or from a narratological viewpoint, greatly appreciated!
2014/04/05
[ "https://scifi.stackexchange.com/questions/53310", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/24700/" ]
It is established that the Atreides have been a very poor House for many generations, with its renaissance only really beginning under Duke Leto's grandfather. Leto's father was married to a Corrino princess and the family enjoyed much success under his rule. It is not uncommon, historically, for royal and ducal families to die out due to a lack of heirs, and the Atreides are treated as being "not very fecund," meaning that they have never had many children. The Corrinos, genetically very closely related to the Atreides, are in the same boat; Muad'Dib's claim on the throne was actually almost as great as Shaddam's even before he married Irulan. The Harkonnens, on the other hand, are portrayed as being a very large, fecund family; the Richese ducal family, the Harkonnens' closest allies, are portrayed as having more than ten children in just one generation. This is why the Harkonnens have survivors to go underground after the fall of the family, as seen in *Heretics of Dune*, when the Bene Gesserit discover a post-Jihad stronghold of the surviving Harkonnens on Giedi Prime. Also bear in mind that it is very common, as is seen with Irulan, for the daughters of noble Houses to marry into other noble Houses; after this, their children will be members of the other House, rather than the one they were born into. This leaves the Atreides with many relatives, but no other members of the House itself. Regarding the Caladanians: Duke Leto took most of his military with him to Arrakis to defend against the expected Harkonnen-Sardaukar attack. Therefore, there was very little military presence on Caladan to fight back when the Imperials and Harkonnens showed up. Also bear in mind that from the point of view of the Caladanians, the Harkonnen takeover was legal; they were not aware of Sardaukar involvement.
Their former House's hatred of the Harkonnens is unlikely to have rubbed off on the general populace, and they would not have had a very great capacity for resistance, being a simple people.
They know that the whole thing is a set-up and that the Harkonnens must have planned their deaths in detail. But being small and unimportant on Caladan, and having a big and violent personal enemy, they have no real future. It is inevitable that Caladan will fall to the Harkonnens some day. Then they are offered the most important place in the universe, and they know that this is the only chance for them (and perhaps for Caladan, too). I think everybody knows that their chance of surviving there is nearly zero. But they know that. Still, their only option is to enter the obvious trap that has been set up for them, proudly and knowing that it is a trap, hoping for a complete miracle.
107,974
Hey, I downloaded the new Kaspersky Internet Security 2010, and since then my internet has not been working. As soon as I disable Kaspersky my internet works. I have narrowed down what the problem within Kaspersky is, and I am currently running it with: Enabled - File Anti-Virus Disabled - Mail Anti-Virus Disabled - Web Anti-Virus Disabled - IM Anti-Virus Enabled - Application Control Enabled - Firewall Enabled - Protective Defence Enabled - Network Attack Blocker Disabled - Anti-Spam Disabled - Anti-Banner Disabled - Parental Control Any help would be appreciated, thanks.
2010/02/12
[ "https://superuser.com/questions/107974", "https://superuser.com", "https://superuser.com/users/22251/" ]
Do you have another anti-virus running as well? I made the mistake once of having both AVG and Avast running at the same time and couldn't get any browser to work. As soon as I disabled one of them it all worked again.
You need to check the Kaspersky firewall rules and see whether all connections are being restricted. See this for details: <http://support.kaspersky.com/kis2010/firewall?qid=208280574>
107,974
Hey, I downloaded the new Kaspersky Internet Security 2010, and since then my internet has not been working. As soon as I disable Kaspersky my internet works. I have narrowed down what the problem within Kaspersky is, and I am currently running it with: Enabled - File Anti-Virus Disabled - Mail Anti-Virus Disabled - Web Anti-Virus Disabled - IM Anti-Virus Enabled - Application Control Enabled - Firewall Enabled - Protective Defence Enabled - Network Attack Blocker Disabled - Anti-Spam Disabled - Anti-Banner Disabled - Parental Control Any help would be appreciated, thanks.
2010/02/12
[ "https://superuser.com/questions/107974", "https://superuser.com", "https://superuser.com/users/22251/" ]
Do you have another anti-virus running as well? I made the mistake once of having both AVG and Avast running at the same time and couldn't get any browser to work. As soon as I disabled one of them it all worked again.
Why not just use [Microsoft Security Essentials](http://www.microsoft.com/Security_Essentials/)? It's free and the best protection currently available.
116,730
A high-level Wizard cast a spell on an Efreet I had summoned from an Efreet Bottle. This spell restrained him and forced him to make CON saves at the end of each of his turns. I cast the spell [Gaseous Form](https://www.dndbeyond.com/spells/gaseous-form) on the Efreet. He didn’t save quickly enough and was turned to stone. In the end everything turned out great, but I’d like to know if turning into a gas would prevent petrification. I am playing D&D 5e. I don’t know what spell was cast, by the way; I thought it was all awesome and flavorful. Just trying to learn more about 5e and its many nuances.
2018/03/05
[ "https://rpg.stackexchange.com/questions/116730", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/42853/" ]
Yes, a gaseous creature can be petrified. ----------------------------------------- The Petrified condition states: > > A petrified creature is transformed, along with any nonmagical object it is wearing or carrying, into a solid inanimate substance (usually stone). *(PHB, 291)* > > > Petrification specifically turns the creature *into a solid*. If they are gaseous due to the effect of some spell, petrification makes them solid, so they are still affected.
I am going to assume that the enemy(?) Wizard used the *Flesh to Stone* spell or an equivalent effect on the Efreeti, because the description matches that: > > You attempt to turn one creature that you can see within range into stone. If the target's body is made of flesh, the creature must make a Constitution saving throw. On a failed save, it is restrained as its flesh begins to harden. > > > So, the Efreeti failed the save and became restrained (the somewhat relevant part: *"A restrained creature's speed becomes 0, and it can't benefit from any bonus to its speed."*). Then you (the OP's character) cast *Gaseous Form* on the Efreeti. The relevant parts of that spell's description: > > You transform a willing creature you touch, along with everything it's wearing and carrying, into a misty cloud for the duration. > > > The target [...] has advantage on [...] Constitution saving throws. [...] The target can't fall and remains hovering in the air even when stunned or otherwise incapacitated. > > > There's nothing which indicates the *Restrained* condition would end, so you have a misty cloud with speed 0, unable to do much. The petrification proceeds: > > A creature restrained by this spell must make another Constitution saving throw at the end of each of its turns. If it successfully saves against this spell three times, the spell ends. If it fails its saves three times, it is turned to stone and subjected to the petrified condition for the duration. The successes and failures don't need to be consecutive; keep track of both until the target collects three of a kind. > > > So these saving throws proceeded to their unfortunate (for the Efreeti) conclusion despite the CON save advantage; really unlucky (I'd verify that the DM remembered to use the advantage).
Finally, after 3-5 turns, petrification completes, and this part of it is relevant here: > > A petrified creature is transformed, along with any nonmagical object it is wearing or carrying, into a solid inanimate substance (usually stone). > > > So, yep, **the Efreeti would be subject to petrification**, while still also being subject to *Gaseous Form* for a few more rounds. --- Now things get a bit more unclear. Let's see [combining magical effects](https://www.dndbeyond.com/sources/basic-rules/spellcasting#CombiningMagicalEffects): > > The effects of different spells add together while the durations of those spells overlap. The effects of the same spell cast multiple times don't combine, however. Instead, the most potent effect--such as the highest bonus--from those castings applies while their durations overlap, or the most recent effect applies if the castings are equally potent and their durations overlap. > > > So, we have two different spells, and their effects should add together. But being both an inanimate solid substance and a misty cloud seems mutually exclusive. Or maybe they aren't. The DM has two options: * A simple solution is to adapt the above rule for different spells with contradictory effects, by taking the most potent (*Flesh to Stone* is a level 6 spell, vs. level 3 for *Gaseous Form* without upcasting) or the most recent (while *Flesh to Stone* was cast first, the petrified condition is more recent). So usually the *Gaseous Form* would be suppressed until it ends (assuming petrification became permanent). If the *Gaseous Form* would instead suppress the petrification (maybe because of upcasting), then the Efreeti would stay in the misty cloud form for the duration, with no effects from petrification, and then immediately become petrified when it ended (assuming petrification became permanent). So, nice and simple: either you're a misty cloud, or you're a statue, but not both, with no nasty corner cases or intermediate states.
* A DM might also decide that, sure, these effects can overlap. So, the misty cloud form of the Efreeti is suddenly inanimate, solid and quite heavy. Being petrified does not suddenly make the Efreeti an invalid target for *Gaseous Form*, either, since it is very much not incorporeal. At this point, the DM probably needs to decide what the misty cloud actually looks like - does it look like the original creature, just made of mist, or does it look like a small roundish cloud, or something else? In any case the petrified cloud will still hover, because being *"otherwise incapacitated"* does not end hovering, and this covers being petrified. Once the *Gaseous Form* ends, the statue falls down. It is an inanimate solid statue, but still, this is *magic*, and there doesn't seem to be anything which could prevent this, so it becomes a statue of the original non-cloud Efreeti. Still, the DM might also quite reasonably decide that being petrified exactly preserves the form at the moment of petrification, because it is specified to be solid and non-aging. In that case, if the petrified condition is removed (e.g. by the *Greater Restoration* spell), the original form should be restored, as this is what normally happens when a magical transformation suddenly loses its magic, as in an anti-magic field. If there were a risk of death or damage, it should be in the rules (save, DC, damage dice). **In any case, in the end the Efreeti will be petrified until restored.**
116,730
A high-level Wizard cast a spell on an Efreet I had summoned from an Efreet Bottle. This spell restrained him and forced him to make CON saves at the end of each of his turns. I cast the spell [Gaseous Form](https://www.dndbeyond.com/spells/gaseous-form) on the Efreet. He didn’t save quickly enough and was turned to stone. In the end everything turned out great, but I’d like to know if turning into a gas would prevent petrification. I am playing D&D 5e. I don’t know what spell was cast, by the way; I thought it was all awesome and flavorful. Just trying to learn more about 5e and its many nuances.
2018/03/05
[ "https://rpg.stackexchange.com/questions/116730", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/42853/" ]
No -- If the spell used was *[flesh to stone](https://www.dndbeyond.com/spells/flesh-to-stone)*, the spell says: > > If the target's body is made of flesh, the creature must make a Constitution saving throw. > > > Gas cannot be counted as "flesh".
I am going to assume that the enemy(?) Wizard used the *Flesh to Stone* spell or an equivalent effect on the Efreeti, because the description matches that: > > You attempt to turn one creature that you can see within range into stone. If the target's body is made of flesh, the creature must make a Constitution saving throw. On a failed save, it is restrained as its flesh begins to harden. > > > So, the Efreeti failed the save and became restrained (the somewhat relevant part: *"A restrained creature's speed becomes 0, and it can't benefit from any bonus to its speed."*). Then you (the OP's character) cast *Gaseous Form* on the Efreeti. The relevant parts of that spell's description: > > You transform a willing creature you touch, along with everything it's wearing and carrying, into a misty cloud for the duration. > > > The target [...] has advantage on [...] Constitution saving throws. [...] The target can't fall and remains hovering in the air even when stunned or otherwise incapacitated. > > > There's nothing which indicates the *Restrained* condition would end, so you have a misty cloud with speed 0, unable to do much. The petrification proceeds: > > A creature restrained by this spell must make another Constitution saving throw at the end of each of its turns. If it successfully saves against this spell three times, the spell ends. If it fails its saves three times, it is turned to stone and subjected to the petrified condition for the duration. The successes and failures don't need to be consecutive; keep track of both until the target collects three of a kind. > > > So these saving throws proceeded to their unfortunate (for the Efreeti) conclusion despite the CON save advantage; really unlucky (I'd verify that the DM remembered to use the advantage).
Finally, after 3-5 turns, petrification completes, and this part of it is relevant here: > > A petrified creature is transformed, along with any nonmagical object it is wearing or carrying, into a solid inanimate substance (usually stone). > > > So, yep, **the Efreeti would be subject to petrification**, while still also being subject to *Gaseous Form* for a few more rounds. --- Now things get a bit more unclear. Let's see [combining magical effects](https://www.dndbeyond.com/sources/basic-rules/spellcasting#CombiningMagicalEffects): > > The effects of different spells add together while the durations of those spells overlap. The effects of the same spell cast multiple times don't combine, however. Instead, the most potent effect--such as the highest bonus--from those castings applies while their durations overlap, or the most recent effect applies if the castings are equally potent and their durations overlap. > > > So, we have two different spells, and their effects should add together. But being both inanimate solid substance and a misty cloud seem mutually exclusive. Or maybe they aren't. DM has two options: * A simple solution is to adapt above rule for different spells with contradictory effects, by taking the most potent (*Flesh to Stone* is level 6 spell, vs level 3 for *Gaseous Form* without upcast) or the most recent (while *Flesh to Stone* was cast first, the petrified condition is more recent). So usually the *Gaseous Form* would be suppressed until it ends (assuming petrification became permanent). If the *Gaseous Form* would instead suppress the petrification (maybe because of upcasting), then the Efreeti would stay in the misty cloud form for the duration, with no effects from petrification, and then immediately become petrified when it ended (assuming petrification became permanent). So, nice and simple, either you're a misty cloud, or you're a statue, but not both, no nasty corner cases or intermediate states. 
* A DM might also decide, that sure, these effects can overlap. So, the misty cloud form of the Efreeti is suddenly inanimate, solid and quite heavy. Being petrified does not suddenly make the Efreeti an invalid target for *Gaseous Form*, either, since it is very much not incorporeal. At this point, the DM probably needs to decide what the misty cloud actually looks like - does it look like the original creature, just made of mist, or does it look like a small roundish cloud, or something else? In any case the petrified cloud will still hover, because being *"otherwise incapacitated"* does not end hovering, and this covers being petrified. Once the *Gaseous Form* ends, the statue falls down. It is an inanimate solid statue, but still, this is *magic*, and there doesn't seem to be anything which could prevent this, so it becomes statue of the original non-cloud Efreeti. Still, DM might also quite reasonably decide that being petrified exactly preserves the form at the moment of petrification, because it is specified to be solid and non-aging. In that case, if petrified condition is removed (eg. by *Greater Restoration* spell), the original form should be restored, as this is what normally happens when a magical transformation suddenly loses its magic, like in anti-magic field. If there was a risk of death or damage, it should be in the rules (save, DC, damage dice). **In any case, in the end the Efreeti will be petrified until restored.**
1,331,306
We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would all be well and good if it were anywhere close to accurate. I double- and triple-checked it. Tried multiple known good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. 
I just want IIS to be able to load static images, css, and js files. If anyone has some bright ideas I would be most appreciative!
2009/08/25
[ "https://Stackoverflow.com/questions/1331306", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10039/" ]
We have this same situation. As the above poster mentioned, IIS is trying to use the IUSER\_XXXX account to authenticate for anonymous access, but that account needs to exist on both machines. You can still get passthrough authentication working in separate domains. This works with a remote Windows file server due to the way NetLogon processes domain names -- I'm not sure if it will work with a Unix server. Following your example, you'd create a domain user in your local domain ('CORP\MediaUser') with the SAME logon name and password as the 'LIVE\MediaUser' account. Then, set up the virtual directory using the 'LIVE\MediaUser' credentials as you did before, but this time set up 'CORP\MediaUser' as the Anonymous User for that virtual directory. It should then work. This will also work with a local account ('MYMACHINE\MediaUser') as long as the logon name and password are the same as those of the remote account.
If my memory is correct, this depends on the user account used for authentication with IIS. If the virtual directory is set up for anonymous access, then depending on the version of IIS it will use a local machine account called IUSR\_MACHINENAME. [Here](http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/8feeaa51-c634-4de3-bfdc-e922d195a45e.mspx?mfr=true) is a TechNet article that explains how to change the user account used for anonymous authentication in IIS 6.0.
1,331,306
We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would all be well and good if it were anywhere close to accurate. I double- and triple-checked it. Tried multiple known good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. 
I just want IIS to be able to load static images, css, and js files. If anyone has some bright ideas I would be most appreciative!
2009/08/25
[ "https://Stackoverflow.com/questions/1331306", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10039/" ]
If my memory is correct, this depends on the user account used for authentication with IIS. If the virtual directory is set up for anonymous access, then depending on the version of IIS it will use a local machine account called IUSR\_MACHINENAME. [Here](http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/8feeaa51-c634-4de3-bfdc-e922d195a45e.mspx?mfr=true) is a TechNet article that explains how to change the user account used for anonymous authentication in IIS 6.0.
We had the same problem but could not get it corrected, even with the correct security permissions on the Security tab. It was working one day and just suddenly stopped. After a lot of head banging, I went to the "Connect As" button next to the virtual directory and, surprisingly, there were DIFFERENT credentials than on the Security tab. I used a known good service account and voilà, it worked instantly.
1,331,306
We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would all be well and good if it were anywhere close to accurate. I double- and triple-checked it. Tried multiple known good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. 
I just want IIS to be able to load static images, css, and js files. If anyone has some bright ideas I would be most appreciative!
2009/08/25
[ "https://Stackoverflow.com/questions/1331306", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10039/" ]
We have this same situation. As the above poster mentioned, IIS is trying to use the IUSER\_XXXX account to authenticate for anonymous access, but that account needs to exist on both machines. You can still get passthrough authentication working in separate domains. This works with a remote Windows file server due to the way NetLogon processes domain names -- I'm not sure if it will work with a Unix server. Following your example, you'd create a domain user in your local domain ('CORP\MediaUser') with the SAME logon name and password as the 'LIVE\MediaUser' account. Then, set up the virtual directory using the 'LIVE\MediaUser' credentials as you did before, but this time set up 'CORP\MediaUser' as the Anonymous User for that virtual directory. It should then work. This will also work with a local account ('MYMACHINE\MediaUser') as long as the logon name and password are the same as those of the remote account.
We had the same problem but could not get it corrected, even with the correct security permissions on the Security tab. It was working one day and just suddenly stopped. After a lot of head banging, I went to the "Connect As" button next to the virtual directory and, surprisingly, there were DIFFERENT credentials than on the Security tab. I used a known good service account and voilà, it worked instantly.
521,412
I have a relay rated 12V and 8A that was not working properly. I want to replace it with a 12V 16A relay. Is that possible? I am thinking the ampere rating is the switching capacity of the relay: if a relay can switch a 16A circuit with 12V, it can easily switch 8A. Am I right? This is the broken Finder relay, [![enter image description here](https://i.stack.imgur.com/xZ3hD.png)](https://i.stack.imgur.com/xZ3hD.png) and this is the Omron relay I want to use [![enter image description here](https://i.stack.imgur.com/P0ypI.png)](https://i.stack.imgur.com/P0ypI.png)
2020/09/14
[ "https://electronics.stackexchange.com/questions/521412", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/213297/" ]
> > If a relay can switch a 16A circuit with 12V, it can easily switch 8A. Am I right? > > > It's not that simple. @ocrdu has pointed out some important highlights - especially about driving the relay. But there are two important things that should be considered: * The load type the relay is switching (motor, resistive, or capacitive) * The inrush current rating of the contacts Please note that a big common problem seen on relays is **contact sticking**, which is mostly caused by arcing, especially on switch-on events (think of it as something like welding). [finder 40.52](https://www.finder-relais.net/en/finder-relays-series-40.pdf) is capable of switching a peak current of 15A. If the load draws a higher current, even for a short time, the contacts may get stuck. So, > > I have a relay with 12V and 8A, it was not working properly. > > > That may be the problem you are currently facing. [Omron G2R-1E-12VDC](https://www.infinity-electronic.hk/datasheet/9d-G2R-1-H-T130-DC12.pdf) has a max switching current rating of 16A. So, even though its nominal current rating is higher than the Finder's (16A nom/16A max against 8A nom/15A max), there is still a risk of running into a contact-sticking problem. So, * Identify the load type * Identify the peak switching current requirement * Identify the max power
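The two checks above (steady-state current against the nominal contact rating, inrush current against the peak rating) can be written as a one-line rule of thumb. A sketch with a hypothetical motor load; the 16A inrush figure is invented for illustration, and datasheet derating for load type still applies on top of this:

```python
def relay_ok(load_nominal_a, load_inrush_a, relay_nominal_a, relay_max_a):
    """Both the steady-state and the inrush current must stay within the contact ratings."""
    return load_nominal_a <= relay_nominal_a and load_inrush_a <= relay_max_a

# Hypothetical motor load: 6 A running, ~16 A inrush at switch-on.
print(relay_ok(6, 16, relay_nominal_a=8, relay_max_a=15))    # Finder 40.52 (8 A nom / 15 A peak) -> False
print(relay_ok(6, 16, relay_nominal_a=16, relay_max_a=16))   # Omron G2R (16 A nom / 16 A max)    -> True
```

In this invented case the Finder fails only on the inrush check, which matches the contact-sticking failure mode described above.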
Both are switched with 12V, both are specced for 230V~, and the replacement can handle a higher current than the broken one, so no problem there. You may want to look up the switching current for both; it could be the replacement's coil takes more current when switching.
433,957
A drive dropped out of my RAID 5 array yesterday. It looks like the reason was a bad controller, so I switched it out and attempted to re-add the drive, but mdadm claimed it couldn't do it. So I zeroed the superblock, added the drive normally, and left it to resync. When I came to check on the array this morning I was unable to mount it at all, and it's now showing as CLEAN FAULTY with two drives missing. The two missing drives are listed as spare and faulty spare. Is there anything I can do in this situation, or is the array gone? Update The disks appear to be fine - except maybe for enough bad data on one of the disks for mdadm to get annoyed and kick that disk from the array too. I was able to recreate the array by marking the disk as working and forcing the assembly, so I'm currently just making sure that all my backups are up to date. So I can probably change this question to: RAID 5 seems to be a problem with large disks (3x3TB). I'm considering changing to mirrored RAID-Z arrays, but is there anything else I should consider instead?
2012/10/02
[ "https://serverfault.com/questions/433957", "https://serverfault.com", "https://serverfault.com/users/120261/" ]
It is not recommended to use RAID-5 on 3TB consumer disks because it takes *ages* to resync (possibly well over 24h), and during that time it is likely that yet *another* drive fails; in that case all your data (or at least the part that hasn't yet been resynced) will be lost. RAID-Z has the small advantage that it resyncs (resilvers) only the part of the drive actually used by files, whereas a standard RAID implementation is filesystem-agnostic. Another advantage of ZFS is that reportedly (I've never tried it myself; see the [article on serverfocus.org](http://www.serverfocus.org/zfs-best-practices-guide#data-reconstruction)) you can specify the order in which files are resilvered; important files can get high priority, which means they are resilvered first. I suggest going with mirroring, which is faster and more robust against errors for such large drives.
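The resync-risk argument can be put in rough numbers. A back-of-the-envelope sketch assuming a typical consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits and ~100 MB/s sequential throughput; both are spec-sheet assumptions, not measurements from this array:

```python
DISK_TB = 3
URE_PER_BIT = 1e-14                      # typical consumer-drive spec
bits_read = 2 * DISK_TB * 1e12 * 8       # a 3-disk RAID-5 rebuild reads both surviving disks

p_clean = (1 - URE_PER_BIT) ** bits_read
print(f"chance of rebuilding with no URE: {p_clean:.0%}")   # ~62%

rebuild_hours = DISK_TB * 1e12 / 100e6 / 3600  # best case: 100 MB/s, idle array
print(f"best-case rebuild time: {rebuild_hours:.1f} h")     # ~8.3 h
```

The best-case figure assumes an idle array; under production load a rebuild easily stretches past 24h, and the sizeable chance of hitting a URE mid-rebuild is exactly why RAID-Z's used-blocks-only resilver helps.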
RAID-5 with just three disks is very risky because: * Once one drive goes pop, just one more drive failure will take out the array completely * The remaining two disks will be under increased load while one drive is out of the array, increasing the chance they could fail at that particular time * Consumer-grade SATA drives are even more likely to fail (they aren't designed for 24/7 RAID I/O); if you bought them all at the same time from the same batch, it is even more likely I used to deploy three-disk RAID-5 setups years ago but quickly moved on. RAID-5 has a higher write penalty as well, which is something to consider. It's hard to make a specific recommendation to improve your setup without knowing exactly what application you use your RAID array for, but you might want to look at getting an additional disk and using RAID-10 across all 4. I tend to steer more towards ZFS these days; RAID-Z is interesting as well.
54,505,944
python libraries that I want to install using pip by ignoring version errors. ![python libraries that I want to install using pip by ignoring version errors.](https://i.stack.imgur.com/62DT6.png)
2019/02/03
[ "https://Stackoverflow.com/questions/54505944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9077904/" ]
1. Check that your gwt.xml really inherits "com.google.gwt.user.User" 2. Check the Java SDK. Some GWT versions work only with Java 1.8 3. If you use IntelliJ IDEA, there was a bug which also causes this error: <https://youtrack.jetbrains.com/issue/IDEA-218731?_ga=2.120118728.1133270638.1599566675-1712334692.1566210515>
Use a 1.8.0 JRE to compile the GWT application. You can set this up in Eclipse as shown below. [enter image description here](https://i.stack.imgur.com/MJDPh.png) Then try to compile the application; you will get the following output in the console: Compiling module com.somshine.miniproject.MiniProject Compiling 5 permutations Compiling permutation 0... Compiling permutation 1... Compiling permutation 2... Compiling permutation 3... Compiling permutation 4... Compile of permutations succeeded Compilation succeeded -- 21.193s Linking into D:\Java\gwtProject\miniProject\war\miniproject Link succeeded Linking succeeded -- 0.426s
35,563
I have just discovered that you can either write a combined smart contract that includes both the token and the crowdsale, OR keep the token in a contract separate from the crowdsale contract. Can anyone tell me what the advantages of each of these methods are, please? Many thanks, Phil.
2018/01/09
[ "https://ethereum.stackexchange.com/questions/35563", "https://ethereum.stackexchange.com", "https://ethereum.stackexchange.com/users/28257/" ]
If your contract costs that much, then you must be doing a bunch of storage initialization. Storage, by far, costs the most to read/write. The irony of Solidity is that it costs less to redo things in memory, so if you have some variables, you can re-initialize and recalculate. It's probably better to do so in memory each time a contract method is called.
My approach is: 1. Deploy to a testnet like Ropsten and note the gas used 2. Go to <https://web3-tools.com/> and type in the gas to back-calculate the cost in USD As in the accepted answer, my basic contract used about 1,400,000 gas (compared to 21,000 for a plain transfer) Disclosure: I wrote web3-tools.com
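The back-calculation such tools perform is plain arithmetic: gas used × gas price (in gwei, i.e. 1e-9 ETH) × ETH price. A sketch with hypothetical prices (20 gwei, ETH at $2,000), reusing the 1,400,000 and 21,000 gas figures from the answer:

```python
def deploy_cost_usd(gas_used, gas_price_gwei, eth_usd):
    """Cost in USD: gas burned times gas price, converted from gwei to ETH."""
    return gas_used * gas_price_gwei * 1e-9 * eth_usd

# 1,400,000 gas for the contract vs. 21,000 for a plain transfer.
print(f"deploy:   ${deploy_cost_usd(1_400_000, 20, 2000):.2f}")  # $56.00
print(f"transfer: ${deploy_cost_usd(21_000, 20, 2000):.2f}")     # $0.84
```

Gas prices and the ETH/USD rate move constantly, so the dollar figures are only as current as the inputs.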
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
Reposted from ServerFault, "[Why add additional application pools in IIS?](https://serverfault.com/questions/2106/why-add-additional-application-pools-in-iis)" * App pools can run as different identities, so you can restrict permissions this way. * You can assign a different identity to each app pool so that when you run Task Manager, you know which w3wp.exe is which. * You can recycle/restart one app pool without affecting the sites that are running in different app pools. * If you have a website that has a memory leak or generally misbehaves, you can place it in its own app pool so it doesn't affect the other websites. * If you have a website that is very CPU-intensive (like resizing photos, for instance), you can place it in its own app pool and throttle its CPU utilization. * If you have multiple websites that each have their own SQL database, you can use Active Directory authentication instead of storing usernames/passwords in web.config.
The main benefit of creating different application pools is that you can give each pool its own credentials. Your 20 applications may communicate with 20 different databases that each need a different login. The best practice then is to run each application under a different service account. I wouldn't worry too much about performance. Most time will probably be spent inside each web application, no matter which process each application is in.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
Reposted from ServerFault, "[Why add additional application pools in IIS?](https://serverfault.com/questions/2106/why-add-additional-application-pools-in-iis)" * App pools can run as different identities, so you can restrict permissions this way. * You can assign a different identity to each app pool so that when you run Task Manager, you know which w3wp.exe is which. * You can recycle/restart one app pool without affecting the sites that are running in different app pools. * If you have a website that has a memory leak or generally misbehaves, you can place it in its own app pool so it doesn't affect the other websites. * If you have a website that is very CPU-intensive (like resizing photos, for instance), you can place it in its own app pool and throttle its CPU utilization. * If you have multiple websites that each have their own SQL database, you can use Active Directory authentication instead of storing usernames/passwords in web.config.
If your apps are stable and don't use much memory, then I would say that it's fine to put them in the same app pool. App Pools give you isolation between your applications.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
Yes, it is a good idea even for 20 applications. 1. Security. Different app pools run under different accounts. 2. Isolation. One crashing app won't take down the others. 3. Memory (if you are running 32-bit). Each app pool has its own address space, so together you can address much more memory than the roughly 2.7GB usable by a single process. 4. You may choose to periodically restart one not-so-well-behaved app without affecting other applications.
The main benefit of creating different application pools is that you can give each pool its own credentials. Your 20 applications may communicate with 20 different databases that each need a different login. The best practice then is to run each application under a different service account. I wouldn't worry too much about performance. Most time will probably be spent inside each web application, no matter which process each application is in.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
Yes, it is a good idea even for 20 applications. 1. Security. Different app pools run under different accounts. 2. Isolation. One crashing app won't take down the others. 3. Memory (if you are running 32-bit). Each app pool has its own address space, so together you can address much more memory than the roughly 2.7GB usable by a single process. 4. You may choose to periodically restart one not-so-well-behaved app without affecting other applications.
If your apps are stable and don't use much memory, then I would say that it's fine to put them in the same app pool. App Pools give you isolation between your applications.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
This really depends on the applications, your security model, and how much you trust the applications. Here are a few things that I always tell people to consider when working with application pools. * Use separate application pools if each application needs different access to system resources. (You can use multiple process accounts.) * If an application is a resource hog, mission critical, or an "unknown", it is best to put it into its own pool to isolate it from the rest of the system
We work as outside contractors for a large corporate client. We are not on-site, so we do not have connectivity to all of their systems. Sometimes during development it is necessary for me to debug an application directly on their development servers. To do that I must be able to attach to the w3wp process. When I attach and start debugging, the entire process stalls, which affects all of the applications in the same process/application pool. By creating a dedicated application pool and moving my development there, I can easily debug without making anyone's life miserable.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
I used to have 58 .NET websites and 17 old classic ASP websites on the same IIS 7.5 server, using a separate app pool for each site. I noticed that IIS compression started failing intermittently, corrupting the style sheets about 5% of the time. Looking at the Task Manager on the server, I could see that the server was approaching its 4GB RAM limit - each w3wp.exe process was taking up to 100 MB of memory depending on how much traffic the site was getting. I then moved all the websites into just 2 application pools (one for .NET 4 websites and one for the old classic ASP sites) and the total memory used dropped from 3.8GB to just under 2.8GB - saving over 1GB of memory on the server. After the change (and leaving the server running for a couple of hours to get back to normal traffic levels), the w3wp processes were using 300MB for all the .NET websites and 20MB for the classic ASP websites. I could re-enable IIS compression without a problem. Using separate app pools is a great idea for many of the reasons mentioned in the other posts above, but in my experience it also causes a much higher memory overhead if you are hosting a fair number of websites on the same server. I guess whether you want to use separate app pools is a trade-off between hardware restrictions and security. It's a good idea if you have the resources to do it.
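The memory figures quoted above imply a rough per-pool overhead. A quick back-of-the-envelope check (treating the whole 1GB saving as pool overhead, which is a simplification):

```python
total_before_mb, total_after_mb = 3800, 2800
pools_before, pools_after = 58 + 17, 2

saved_per_pool_mb = (total_before_mb - total_after_mb) / (pools_before - pools_after)
print(f"~{saved_per_pool_mb:.0f} MB of overhead per collapsed w3wp.exe process")  # ~14 MB
```

So roughly 14 MB per idle worker process, which is in line with the "up to 100 MB depending on traffic" observation for busier sites.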
The main benefit of creating different application pools is that you can give each pool its own credentials. Your 20 applications may communicate with 20 different databases that each need a different login. The best practice then is to run each application under a different service account. I wouldn't worry too much about performance. Most time will probably be spent inside each web application, no matter which process each application is in.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
I used to have 58 .NET websites and 17 old classic ASP websites on the same IIS 7.5 server, using a separate app pool for each site. I noticed that IIS compression started failing intermittently, corrupting the style sheets about 5% of the time. Looking at the Task Manager on the server, I could see that the server was approaching its 4GB RAM limit - each w3wp.exe process was taking up to 100 MB of memory depending on how much traffic the site was getting. I then moved all the websites into just 2 application pools (one for .NET 4 websites and one for the old classic ASP sites) and the total memory used dropped from 3.8GB to just under 2.8GB - saving over 1GB of memory on the server. After the change (and leaving the server running for a couple of hours to get back to normal traffic levels), the w3wp processes were using 300MB for all the .NET websites and 20MB for the classic ASP websites. I could re-enable IIS compression without a problem. Using separate app pools is a great idea for many of the reasons mentioned in the other posts above, but in my experience it also causes a much higher memory overhead if you are hosting a fair number of websites on the same server. I guess whether you want to use separate app pools is a trade-off between hardware restrictions and security. It's a good idea if you have the resources to do it.
Yes, it is a good idea even for 20 applications. 1. Security. Different app pools can run under different accounts. 2. Isolation. One crashing app won't take down other apps. 3. Memory (if you are running 32-bit). Each app pool has its own address space, so in total you can address much more memory than the roughly 2.7GB of usable space a single process gets. 4. You may choose to periodically restart one not-so-well-behaved app without affecting the other applications.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
The main benefit of creating different application pools is that you can give each pool its own credentials. Your 20 applications may communicate with 20 different databases that each need a different login. The best practice then is to run each application under a different service account. I wouldn't worry too much about performance. Most time will probably be spent inside each web application, regardless of which process each application runs in.
We work as outside contractors for a large corporate client. We are not on-site, so we do not have connectivity to all of their systems. Sometimes during development it is necessary for me to debug an application directly on their development servers. To do that I must be able to attach to the w3wp process. When I attach and start debugging, the entire process stalls, which affects all of the applications in the same process/application pool. By creating a dedicated application pool and moving my development there, I can easily debug without making anyone's life miserable.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
Reposted from ServerFault, "[Why add additional application pools in IIS?](https://serverfault.com/questions/2106/why-add-additional-application-pools-in-iis)" * App pools can run as different identities, so you can restrict permissions this way. * You can assign a different identity to each app pool so that when you run Task Manager, you know which w3wp.exe is which. * You can recycle/restart one app pool without affecting the sites that are running in different app pools. * If you have a website that has a memory leak or generally misbehaves, you can place it in its own app pool so it doesn't affect the other websites. * If you have a website that is very CPU-intensive (like resizing photos, for instance), you can place it in its own app pool and throttle its CPU utilization. * If you have multiple websites that each have their own SQL database, you can use Active Directory authentication instead of storing usernames/passwords in web.config.
Yes, it is a good idea even for 20 applications. 1. Security. Different app pools can run under different accounts. 2. Isolation. One crashing app won't take down other apps. 3. Memory (if you are running 32-bit). Each app pool has its own address space, so in total you can address much more memory than the roughly 2.7GB of usable space a single process gets. 4. You may choose to periodically restart one not-so-well-behaved app without affecting the other applications.
1,273,707
I've read recommendations that we should create separate application pools for each asp.net application on our Win2008 server. We have about 20 apps that would be on the same server. I know this would create 20 separate worker processes which seems very wasteful. Is it good practice to create separate application pools for each application?
2009/08/13
[ "https://Stackoverflow.com/questions/1273707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28943/" ]
Separating apps into pools is a good thing to do when there's a reason, and there are a number of good reasons in the other answers. There are, however, good reasons not to separate apps into different pools, too. Apps using the same access, .NET version, etc. will run more efficiently in a single pool and be more easily maintained. Most annoyingly, IIS will kill idle app pools, requiring the pool to be recreated on each use. If you isolate infrequently used apps, you'll impose an unnecessary startup cost on users. Combining these apps into a single pool makes for happier users when they don't pay the startup cost, happier servers when they don't give memory and CPU slices to multiple processes, and happier admins when they have to manage fewer app pools.
We work as outside contractors for a large corporate client. We are not on-site, so we do not have connectivity to all of their systems. Sometimes during development it is necessary for me to debug an application directly on their development servers. To do that I must be able to attach to the w3wp process. When I attach and start debugging, the entire process stalls, which affects all of the applications in the same process/application pool. By creating a dedicated application pool and moving my development there, I can easily debug without making anyone's life miserable.
1,030,692
Over at [SpokenWord.org](http://spokenword.org) we’re trying to figure out how to scrape YouTube pages (or pages with embedded YouTube players), then hack a video or ShockWave URL that we can include in the <enclosure> element of RSS feeds. We’ve been able to do this for programs in YouTube EDU such as [this page](http://www.youtube.com/watch?v=Y1XpTc1-lh0), which we convert to [this media-file URL](http://www.youtube.com/v/Y1XpTc1-lh0&f=user_uploads&app=youtube_gdata). The latter URL can be played by standard Flash players, so we can include it in RSS feeds. But this only works for certain special cases such as YouTube EDU, not for mainstream YouTube pages. We don't want to convert the YouTube files to video, etc. We don't have rights/permission to do so.
2009/06/23
[ "https://Stackoverflow.com/questions/1030692", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17307/" ]
Every video has its own data feed, you just need the video id and you can grab the info, including the url to the embeddable version (assuming it's able to be embedded). Here's what the urls would look like for each: <http://gdata.youtube.com/feeds/api/videos/VIDEO_ID> There may be an easier way to use the gdata api to get what you want, perhaps if you elaborated on what you are trying to do? Like where is the original source of the videos coming from - are you doing searches, or are you looking for specific youtube users? The api should be able to accommodate any of those scenarios.
Would it not be easier to simply find the related video on youtube itself via the [youtube API](http://code.google.com/apis/youtube/overview.html)?
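The per-video data feed described above can be assembled from nothing but the video id. A minimal Python sketch of the URL scheme (note: the gdata.youtube.com v2 API referenced in this 2009-era answer has since been retired, so this illustrates the URL format rather than a live endpoint):

```python
# Build the legacy YouTube GData per-video feed URL from a video id.
# The gdata.youtube.com endpoint has since been retired; this only
# illustrates the URL scheme discussed in the answer.

def gdata_video_feed_url(video_id: str) -> str:
    """Return the per-video GData feed URL for the given YouTube video id."""
    return f"http://gdata.youtube.com/feeds/api/videos/{video_id}"

# Video id taken from the question's example page.
url = gdata_video_feed_url("Y1XpTc1-lh0")
```

The feed returned by such a URL contained, among other things, the embeddable media URL when embedding was allowed.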
52,261
> > Consider a one-sample t-test regarding the population mean time to > complete a task with a one-sided alternative hypothesis of Ha: μ < 10 > minutes. A random sample of times to complete the task will be > obtained as part of a study. Which of the following values of the > sample mean times would result in a p-value of more than 0.5? > > > 1. 7 minutes 2. 9 minutes 3. 11 minutes 4. 13 minutes I chose 3 and 4 since their values will skew the data towards rejecting the alternative hypothesis. Is this correct? Also, would it be correct to say that "the p-value is the probability that the null hypothesis is true?" (as to interpret it in some vague sense?)
2013/03/14
[ "https://stats.stackexchange.com/questions/52261", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20820/" ]
> > "*will skew the data towards rejecting the alternative hypothesis*" > > > -- you *don't* 'reject the alternative'. You either reject the null or you fail to do so. > > would it be correct to say that "the p-value is the probability that the null hypothesis is true?" > > > No. It's *the probability that a sample result at least as extreme as the one observed will occur, given that $H\_0$ is true*.
3 and 4 are correct for the reasons you mentioned. However, your interpretation of the p-value is not correct. It is only very loosely linked to the probability of H0. See the Wikipedia article on p-value. > > In statistical hypothesis testing the p-value is the probability of > obtaining a test statistic at least as extreme as the one that was > actually observed, assuming that the null hypothesis is true. > > > And from Hubbard, R.; Lindsay, R. M. (article in citations on the wiki): > > Using a Bayesian significance test for a normal mean, James Berger and > Thomas Sellke (1987, pp. 112–113) showed that for p values of .05, > .01, and .001, respectively, the posterior probabilities of the null, > Pr(H0 | x), for n = 50 are .52, .22, and .034. For n = 100 the > corresponding figures are .60, .27, and .045. Clearly these > discrepancies between p and Pr(H0 | x) are pronounced, and cast > serious doubt on the use of p values as reasonable measures of > evidence. In fact, Berger and Sellke (1987) demonstrated that data > yielding a p value of .05 in testing a normal mean nevertheless > resulted in a posterior probability of the null hypothesis of at least > .30 for any objective (symmetric priors with equal prior weight given > to H0 and HA ) prior distribution. > > >
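The point about sample means above 10 minutes can be checked numerically. A stdlib-only sketch using a normal approximation to the test statistic's distribution (the sample standard deviation and sample size below are made-up illustrative values, since the question supplies neither):

```python
import math

def left_tailed_p(xbar, mu0=10.0, s=2.0, n=25):
    """Approximate p-value for Ha: mu < mu0 via a normal approximation.
    s and n are illustrative assumptions; the question does not give them."""
    t = (xbar - mu0) / (s / math.sqrt(n))
    # Standard normal CDF via erf; for the left-tailed test, p = P(T <= t).
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Sample means above mu0 give t > 0 and hence p > 0.5 (choices 3 and 4);
# means below mu0 give p < 0.5 (choices 1 and 2).
for xbar in (11.0, 13.0):
    assert left_tailed_p(xbar) > 0.5
for xbar in (7.0, 9.0):
    assert left_tailed_p(xbar) < 0.5
```

The key observation is independent of s and n: for any positive standard error, the p-value exceeds 0.5 exactly when the sample mean exceeds the hypothesized mean.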
62,753
I need a tool which allows me to upload an image with a device (like [this](http://images.anandtech.com/doci/9686/DSC_3927_678x452.jpg)) and then to upload my app screenshot onto the screen. What can I use? I don't need a tool like Placeit; I need to upload my own image and my own screenshot. And I also don't want to use Adobe Photoshop. Placeit.net allows me to put my screenshot into many preset device images; however, I want to use my very own device image and, on top of that, add an app screenshot. I can't do it in PS because it's quite time-consuming (my images are rotated, skewed, in perspective, etc.) and I need to do multiple images, so I'm looking for something a bit more automated.
2015/11/03
[ "https://graphicdesign.stackexchange.com/questions/62753", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/50806/" ]
I am very sorry to say it, but the best way of doing this is actually by using Photoshop. But the process is so simple that (with a little more dedication) you could even use Paint to do something like you described. You just need to find an image with a phone; it doesn't matter if it already has a screen. You will add your own image over the screen with a little bit of resizing and cropping. Depending on how realistic you want it to be, you will need to add reflections and shadows to it. > > An advantage of using Photoshop is that you will find a lot of free > mock-ups out there that will simplify your work. > > >
There are many tools you could use. I am not aware that such a service exists, but it might, as this is extremely simple to do. You don't even need to know much of anything, as you can find these ready-made for several pieces of software; even in Photoshop it's no more work than drag and drop once you have the template loaded. Software (other than Photoshop) that can do this includes: * ImageMagick * GIMP * Inkscape * Your browser * ... If I were *paid* to build such a service, I would turn to ImageMagick and have a passable service up and running within an hour. New information in comments --------------------------- There is no reason why you cannot automate Photoshop. All you need is to know where the corners of your screen are. Whether or not this will work for you is another thing. All you need to do is bring in a suitably sized smart object, then: * Hit `Ctrl`+`T` with the smart object selected * Hold `Ctrl` down and drag each corner into position. Now all you need to do is replace the image in the smart object for each image you want to display. This is easy to do. Whether you do this in Photoshop is up to you; you could equally easily do it in ImageMagick, as the complexity per image is the same. After the template is done, it's not much more work in either piece of software: 10 or 10,000 images, not much more work. The biggest issue is if you have one screenshot but 10,000 template images to process; then no method really helps. Manually doing the corner pin takes about 1 minute per picture, so 100 images would take a few hours max. *Automagically* detect corners ------------------------------ Is it possible to automagically (pun intended) detect corners? Yes it is, but only if you planned the shots very carefully. You could put a clearly visible cyan/magenta checkerboard on the screen before shooting. This would let you search the selected area for corners and detect them. 
Second, if it's always the same phone, you could detect the phone's features and compare them with something like FLANN to detect the position. But given the phones we have, and their lack of features, it's better to have a fixed image on the screen to detect. Look into using OpenCV. If you have a video ------------------- If you have a video then you should use corner pin with corner tracking. This is not very hard to set up in, say, After Effects or SynthEyes. What's important here is that the object's corners should be easily detectable and thus trackable. Again, plan before you shoot; showing a static image makes it MUCH easier.
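The ImageMagick route mentioned above amounts to one perspective ("corner pin") distort per screenshot. A sketch that only builds the command line, following ImageMagick's documented `+distort Perspective` control-point syntax (all filenames and corner coordinates below are made-up placeholders):

```python
# Build an ImageMagick command that corner-pins a flat screenshot onto the
# phone's screen in a photo. All filenames and coordinates are placeholders.

def corner_pin_cmd(screenshot, photo, out, src_corners, dst_corners):
    """src_corners/dst_corners: four (x, y) pairs, top-left then clockwise.
    Each control point is 'sx,sy dx,dy' (source maps to destination)."""
    pairs = " ".join(
        f"{sx},{sy} {dx},{dy}"
        for (sx, sy), (dx, dy) in zip(src_corners, dst_corners)
    )
    return (
        f"convert {photo} \\( {screenshot} -virtual-pixel transparent "
        f"+distort Perspective '{pairs}' \\) -layers merge +repage {out}"
    )

cmd = corner_pin_cmd(
    "shot.png", "phone.jpg", "result.jpg",
    [(0, 0), (640, 0), (640, 1136), (0, 1136)],        # screenshot corners
    [(210, 118), (455, 130), (440, 610), (195, 596)],  # screen corners in photo
)
```

Once the four screen corners of a given template photo are measured, the same command works for every screenshot, which is where the "10 or 10,000 images" scaling comes from.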
62,753
I need a tool which allows me to upload an image with a device (like [this](http://images.anandtech.com/doci/9686/DSC_3927_678x452.jpg)) and then to upload my app screenshot onto the screen. What can I use? I don't need a tool like Placeit; I need to upload my own image and my own screenshot. And I also don't want to use Adobe Photoshop. Placeit.net allows me to put my screenshot into many preset device images; however, I want to use my very own device image and, on top of that, add an app screenshot. I can't do it in PS because it's quite time-consuming (my images are rotated, skewed, in perspective, etc.) and I need to do multiple images, so I'm looking for something a bit more automated.
2015/11/03
[ "https://graphicdesign.stackexchange.com/questions/62753", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/50806/" ]
I am very sorry to say it, but the best way of doing this is actually by using Photoshop. But the process is so simple that (with a little more dedication) you could even use Paint to do something like you described. You just need to find an image with a phone; it doesn't matter if it already has a screen. You will add your own image over the screen with a little bit of resizing and cropping. Depending on how realistic you want it to be, you will need to add reflections and shadows to it. > > An advantage of using Photoshop is that you will find a lot of free > mock-ups out there that will simplify your work. > > >
If you check out some more advanced mockup templates for Photoshop you will learn that you can do all rotation, skewing etc. in layers or actions, which - after an initial setup - will allow a very quick replacement of your screenshot without much manual work. Another option might be <https://www.getscenery.com/> where you could add your own backgrounds but you will need to use the hand/device photo of the software.
303,784
From the Guardian article [Labour reshuffle a ‘move towards the voters’, says Wes Streeting](https://amp.theguardian.com/politics/2021/nov/30/keir-starmers-labour-reshuffle-a-move-towards-the-voters): > > Starmer's office is keen to cut back the number of political advisers on Labour's payroll significantly, with several shadow ministers being asked to *share or manage without.* > > >
2021/12/09
[ "https://ell.stackexchange.com/questions/303784", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/147790/" ]
Up to now, senior Opposition politicians have each had their own political adviser. This is expensive. Some shadow ministers are being asked to EITHER share political advisers with other shadow ministers instead of having one assigned exclusively, OR manage without a political adviser altogether
It means several shadow ministers would have to share or manage without political advisers.
271,468
The question mould is: > > How does X work? > I've read the [*Wikipedia article / original paper / documentation / other resource*] but I can't make sense of it. Please explain X to me in plain English. > > > where X is a concept, algorithm, pattern, etc. (Not a code dump.) I've come across a number questions fitting this mould. I never really know how to react. The result could perhaps be quite useful, but it doesn't quite seem to be a natural fit for Stack Overflow as a Q&A site. One reason is, it isn't clear what the appropriate scope of a good answer would be: * Explain which part? (Some of it? All of it?) * To what level of detail? The appropriate result is ill-defined and unbounded: could be anything between a one-liner and a 100-page treatise. Example\*: * [How does Dijkstra's self-stabilizing algorithm work?](https://stackoverflow.com/questions/6435046/how-does-dijkstras-self-stabilizing-algorithm-work) Questions: ---------- * What is the appropriate reaction to such a question? * When is such a question valid? \* Feel free to edit this if you want to add more examples.
2014/09/16
[ "https://meta.stackoverflow.com/questions/271468", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/119775/" ]
You can vote to close it as "unclear what you're asking" if the scope is ill-defined, or "too broad" if the question is decidedly so. Prompt the questioner to add specifics as appropriate. Such a question can be valid provided the asker has demonstrated research effort, the question is narrowed down enough and the subject matter is on-topic for our site.
The questioner should indicate which portions of the topic he already understands and which he does not. Otherwise it's not clear from which point the answer should start. I would refuse to answer "please explain X to me" type questions unless the questioner also mentions the specific points he/she doesn't understand. If this is clear, and you don't think the answer would be too long (too broad) or can be found elsewhere (not enough research), then why not write a great answer? In the specific linked case the specific question is "I still don't quite get the k-state solution." So one could maybe give an answer detailing the k-state solution (if it is not too long) and then adapt the question towards "How does one obtain the k-state solution in Dijkstra's self-stabilizing algorithm?" if one wanted to.
271,468
The question mould is: > > How does X work? > I've read the [*Wikipedia article / original paper / documentation / other resource*] but I can't make sense of it. Please explain X to me in plain English. > > > where X is a concept, algorithm, pattern, etc. (Not a code dump.) I've come across a number questions fitting this mould. I never really know how to react. The result could perhaps be quite useful, but it doesn't quite seem to be a natural fit for Stack Overflow as a Q&A site. One reason is, it isn't clear what the appropriate scope of a good answer would be: * Explain which part? (Some of it? All of it?) * To what level of detail? The appropriate result is ill-defined and unbounded: could be anything between a one-liner and a 100-page treatise. Example\*: * [How does Dijkstra's self-stabilizing algorithm work?](https://stackoverflow.com/questions/6435046/how-does-dijkstras-self-stabilizing-algorithm-work) Questions: ---------- * What is the appropriate reaction to such a question? * When is such a question valid? \* Feel free to edit this if you want to add more examples.
2014/09/16
[ "https://meta.stackoverflow.com/questions/271468", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/119775/" ]
You can vote to close it as "unclear what you're asking" if the scope is ill-defined, or "too broad" if the question is decidedly so. Prompt the questioner to add specifics as appropriate. Such a question can be valid provided the asker has demonstrated research effort, the question is narrowed down enough and the subject matter is on-topic for our site.
The part of the question that makes the biggest difference for me is whether the user is asking for help understanding a *topic* or a *specific resource*. I love questions that say "I have read through resources X, Y, and Z, but still don't understand this concept". Those usually get an upvote from me, provided they're formulated well. If the question is inverted and instead says, "I have read this resource, and it covers concepts X, Y, and Z. Can you explain the whole document to me?" I will usually flag those as too broad. You've specified in your question that X is a "concept, algorithm, pattern, etc", so that would fall into the first category I described, and I think those kinds of questions are great as long as they are specific and list at least a few resources that the OP has attempted to understand.
271,468
The question mould is: > > How does X work? > I've read the [*Wikipedia article / original paper / documentation / other resource*] but I can't make sense of it. Please explain X to me in plain English. > > > where X is a concept, algorithm, pattern, etc. (Not a code dump.) I've come across a number questions fitting this mould. I never really know how to react. The result could perhaps be quite useful, but it doesn't quite seem to be a natural fit for Stack Overflow as a Q&A site. One reason is, it isn't clear what the appropriate scope of a good answer would be: * Explain which part? (Some of it? All of it?) * To what level of detail? The appropriate result is ill-defined and unbounded: could be anything between a one-liner and a 100-page treatise. Example\*: * [How does Dijkstra's self-stabilizing algorithm work?](https://stackoverflow.com/questions/6435046/how-does-dijkstras-self-stabilizing-algorithm-work) Questions: ---------- * What is the appropriate reaction to such a question? * When is such a question valid? \* Feel free to edit this if you want to add more examples.
2014/09/16
[ "https://meta.stackoverflow.com/questions/271468", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/119775/" ]
You can vote to close it as "unclear what you're asking" if the scope is ill-defined, or "too broad" if the question is decidedly so. Prompt the questioner to add specifics as appropriate. Such a question can be valid provided the asker has demonstrated research effort, the question is narrowed down enough and the subject matter is on-topic for our site.
"**Explain X to me"** is no different from "**Why does this code work/does not work?**". Your reaction should depend on whether additional information is included in the question, i.e. whether any research was made prior to posting this question. I would argue that including links in the question constitutes research. You can google for your key phrase and include top 5 links, using 1min of your time. Let's consider the question you linked as an example: > > I have read his seminal paper, Self-stabilizing systems in spite of distributed control. However, I don't quite get how the self-stabilizing algorithm works. I am most interested in his, **'solution' of k-state machines**. The density of the paper is quite intense and I can't make much sense of it. How does this algorithm work in plain English? > > > I highlighted the part which suggests that OP has in fact done some research. However, I think it is not enough to make a good question. Whether such a question should be answered is up to the community. I would expect the OP to explain a bit more of what they understand, and what they don't, from the topic they want explained. My reasoning is simple - if they cannot provide a detailed explanation of what they don't understand, there is a high chance they will not be able to provide feedback on the answer (i.e. keep asking questions in comments, without ever getting to the point).
271,468
The question mould is: > > How does X work? > I've read the [*Wikipedia article / original paper / documentation / other resource*] but I can't make sense of it. Please explain X to me in plain English. > > > where X is a concept, algorithm, pattern, etc. (Not a code dump.) I've come across a number questions fitting this mould. I never really know how to react. The result could perhaps be quite useful, but it doesn't quite seem to be a natural fit for Stack Overflow as a Q&A site. One reason is, it isn't clear what the appropriate scope of a good answer would be: * Explain which part? (Some of it? All of it?) * To what level of detail? The appropriate result is ill-defined and unbounded: could be anything between a one-liner and a 100-page treatise. Example\*: * [How does Dijkstra's self-stabilizing algorithm work?](https://stackoverflow.com/questions/6435046/how-does-dijkstras-self-stabilizing-algorithm-work) Questions: ---------- * What is the appropriate reaction to such a question? * When is such a question valid? \* Feel free to edit this if you want to add more examples.
2014/09/16
[ "https://meta.stackoverflow.com/questions/271468", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/119775/" ]
The part of the question that makes the biggest difference for me is whether the user is asking for help understanding a *topic* or a *specific resource*. I love questions that say "I have read through resources X, Y, and Z, but still don't understand this concept". Those usually get an upvote from me, provided they're formulated well. If the question is inverted and instead says, "I have read this resource, and it covers concepts X, Y, and Z. Can you explain the whole document to me?" I will usually flag those as too broad. You've specified in your question that X is a "concept, algorithm, pattern, etc", so that would fall into the first category I described, and I think those kinds of questions are great as long as they are specific and list at least a few resources that the OP has attempted to understand.
The questioner should indicate which portions of the topic he already understands and which he does not. Otherwise it's not clear from which point the answer should start. I would refuse to answer "please explain X to me" type questions unless the questioner also mentions the specific points he/she doesn't understand. If this is clear, and you don't think the answer would be too long (too broad) or can be found elsewhere (not enough research), then why not write a great answer? In the specific linked case the specific question is "I still don't quite get the k-state solution." So one could maybe give an answer detailing the k-state solution (if it is not too long) and then adapt the question towards "How does one obtain the k-state solution in Dijkstra's self-stabilizing algorithm?" if one wanted to.
271,468
The question mould is: > > How does X work? > I've read the [*Wikipedia article / original paper / documentation / other resource*] but I can't make sense of it. Please explain X to me in plain English. > > > where X is a concept, algorithm, pattern, etc. (Not a code dump.) I've come across a number questions fitting this mould. I never really know how to react. The result could perhaps be quite useful, but it doesn't quite seem to be a natural fit for Stack Overflow as a Q&A site. One reason is, it isn't clear what the appropriate scope of a good answer would be: * Explain which part? (Some of it? All of it?) * To what level of detail? The appropriate result is ill-defined and unbounded: could be anything between a one-liner and a 100-page treatise. Example\*: * [How does Dijkstra's self-stabilizing algorithm work?](https://stackoverflow.com/questions/6435046/how-does-dijkstras-self-stabilizing-algorithm-work) Questions: ---------- * What is the appropriate reaction to such a question? * When is such a question valid? \* Feel free to edit this if you want to add more examples.
2014/09/16
[ "https://meta.stackoverflow.com/questions/271468", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/119775/" ]
The part of the question that makes the biggest difference for me is whether the user is asking for help understanding a *topic* or a *specific resource*. I love questions that say "I have read through resources X, Y, and Z, but still don't understand this concept". Those usually get an upvote from me, provided they're formulated well. If the question is inverted and instead says, "I have read this resource, and it covers concepts X, Y, and Z. Can you explain the whole document to me?" I will usually flag those as too broad. You've specified in your question that X is a "concept, algorithm, pattern, etc", so that would fall into the first category I described, and I think those kinds of questions are great as long as they are specific and list at least a few resources that the OP has attempted to understand.
"**Explain X to me"** is no different from "**Why does this code work/does not work?**". Your reaction should depend on whether additional information is included in the question, i.e. whether any research was made prior to posting this question. I would argue that including links in the question constitutes research. You can google for your key phrase and include top 5 links, using 1min of your time. Let's consider the question you linked as an example: > > I have read his seminal paper, Self-stabilizing systems in spite of distributed control. However, I don't quite get how the self-stabilizing algorithm works. I am most interested in his, **'solution' of k-state machines**. The density of the paper is quite intense and I can't make much sense of it. How does this algorithm work in plain English? > > > I highlighted the part which suggests that OP has in fact done some research. However, I think it is not enough to make a good question. Whether such a question should be answered is up to the community. I would expect the OP to explain a bit more of what they understand, and what they don't, from the topic they want explained. My reasoning is simple - if they cannot provide a detailed explanation of what they don't understand, there is a high chance they will not be able to provide feedback on the answer (i.e. keep asking questions in comments, without ever getting to the point).
17,375,755
I have been searching the net for a specific solution to this Magento Connect problem, but I can hardly find the right one. I have been repeatedly deleting files such as cache.cfg and connect.cfg, as other solutions suggest; that works temporarily, but accessing it again brings the problem back. It also prevents me from installing new modules on the site. This is what I see: Cannot unpack gzipped data in file contents: '/var/www/magento/downloader/../downloader/cache.cfg' What do you think really causes this?
2013/06/29
[ "https://Stackoverflow.com/questions/17375755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/985935/" ]
Try deleting the config.cfg file and check. Don't forget to take the backup of the same. Hope it works for you.
The proper file to delete in this case is cache.cfg. Of course you can delete connect.cfg as well. Both are regenerated when you enter Magento Connect.
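Since both files are regenerated on the next visit to Magento Connect, the temporary workaround described above can be scripted rather than done by hand each time. A hedged Python sketch (the downloader path below is the one from the error message; adjust it to your install):

```python
import os

# Downloader directory taken from the question's error message;
# change this to match your own Magento install.
DOWNLOADER_DIR = "/var/www/magento/downloader"

def clear_connect_cache(downloader_dir=DOWNLOADER_DIR):
    """Delete the Magento Connect cache files so they are regenerated.
    Returns the list of filenames that were actually removed."""
    removed = []
    for name in ("cache.cfg", "connect.cfg"):
        path = os.path.join(downloader_dir, name)
        if os.path.exists(path):
            os.remove(path)
            removed.append(name)
    return removed
```

This only automates the workaround; if the files keep getting corrupted, the underlying cause (e.g. disk space, permissions, or a broken gzip extension) still needs to be found.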
203,757
Most of us who are from an electronics background know that SRAM is faster than DRAM. But when it comes to comparing RAM with ROM, I am unsure. My question relates to microcontrollers: "If code is executing directly from RAM/ROM, which will perform better? 1) execution from RAM, or 2) execution from ROM, or 3) both will perform equally" Also consider the fact that ROMs are designed for high read speeds, whereas for RAM there is a trade-off: some read speed is given up for write capability.
2015/12/01
[ "https://electronics.stackexchange.com/questions/203757", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/93262/" ]
The datasheet should tell you how long each instruction takes, and what differences there are, if any, between executing from RAM or ROM. For microcontrollers that offer the option of executing from RAM, that is *probably* faster, which is likely the main point of using additional RAM space to execute code from. There may also be some fetch-overlap issues. In some cases it might be faster to execute from ROM because it is a separate memory, so RAM access can go on concurrently. Again, the only way to know for any particular micro is to **READ THE DATASHEET**.
It depends entirely on the memory and CPU architecture. As a rule of thumb, SRAM is faster than flash, particularly on higher-speed MCUs (>100 MHz). SRAM bit cells produce a (more or less) logic-level output, while flash memory has to go through a slower current sensing process. How much faster (if any) again depends on the architecture -- the word size of the memories, the number of wait states on each, the presence of caching, the size of the CPU instructions, etc. If you're running at a low enough frequency, you could have zero wait states on flash and RAM, so they might run at the same speed. The code also matters. If your code is strictly linear (no branching), the flash could prefetch instructions fast enough to keep the CPU saturated even at higher frequencies. As Olin said, a Harvard architecture CPU with separate program and data read paths could perform differently when code and data are in different memories. Metal ROMs (and other nonvolatile memories such as FRAM) have their own characteristics, and may or may not be as fast as SRAM. The ability to write doesn't necessarily make a difference; it's more about the characteristics of the bit cell output and sensing circuits. The datasheet will give you a rough idea of the speed difference, but the only way to know for sure is to profile your code.
15,708,011
I know that when an HTTP request is made, packets are sent from a seemingly random high-numbered port (e.g. 4575) on the client to port 80 on the server. Then the server sends the reply to the same high-numbered port, the router knows to route that to the client computer, and all is complete. My question is: **How is the return port (4575 in this example) determined?** Is it random? If so, within what range? Are there any constraints on it? What happens, for example, if two computers in a LAN send HTTP requests with the same source port to the same website? How does the router know which one to route to which computer? Or maybe this situation is rare enough that no one bothered to defend against it?
2013/03/29
[ "https://Stackoverflow.com/questions/15708011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/76701/" ]
The NAT is going to decide/determine the *outbound* port for a NATed connection/session via its own internal means. Meaning, it will vary according to the implementation of the NAT. Any responses will then come back to that same outbound port. As for your question: > > What happens, for example, if two computers in a LAN send HTTP > requests with the same source port to the same website? > > > It will assign a *different* outbound port for *each*. Thus, it can distinguish between the two in the responses it receives. A NAT creates/maintains a *mapping* of translated ports, assigning new outbound port numbers for new sessions. So even if there were two different "internal" sessions, from two different machines, on the *same* port number, they would map to two different port numbers on the *outgoing* side. Thus, when packets came back in on the respective ports, it would know how to translate them back to the correct address/port on the inside LAN. Diagram: ![enter image description here](https://i.stack.imgur.com/Nyaay.png)
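The mapping described above can be sketched in a few lines. This is a hypothetical toy model, not any real NAT implementation: it just shows how two inside hosts using the same source port receive distinct outside ports, and how replies map back. The addresses and port numbers are made up for illustration.

```python
import itertools

class Nat:
    """Toy NAT translation table: (inside_ip, inside_port) <-> outside_port."""

    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = itertools.count(first_port)  # hand out fresh outside ports
        self.out_map = {}   # (inside_ip, inside_port) -> outside_port
        self.in_map = {}    # outside_port -> (inside_ip, inside_port)

    def outbound(self, inside_ip, inside_port):
        # Reuse an existing mapping for the same session, otherwise create one.
        key = (inside_ip, inside_port)
        if key not in self.out_map:
            port = next(self.next_port)
            self.out_map[key] = port
            self.in_map[port] = key
        return (self.public_ip, self.out_map[key])

    def inbound(self, outside_port):
        # Translate a reply back to the inside address/port, if known.
        return self.in_map.get(outside_port)

nat = Nat("203.0.113.7")
# Two machines, same source port 4575 -> two distinct outside ports.
a = nat.outbound("192.168.1.10", 4575)
b = nat.outbound("192.168.1.11", 4575)
assert a != b
# A reply arriving on a's outside port goes back to the right inside host.
assert nat.inbound(a[1]) == ("192.168.1.10", 4575)
```

The key point the sketch makes concrete: the collision on the inside port number never reaches the outside network, because the NAT allocates the outside port itself.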
It depends on the NAT and on the protocol. For instance, I'm writing this message behind a full-cone NAT, and this particular NAT is configured (potentially hard-wired) to always map a private transport address UDP X:x to the public transport address UDP Y:x. It's quite easy to shed some light on this case with a STUN server (Google has some free STUN servers), a cheap NAT, 2 laptops, Wireshark, and a really light STUN client that uses a hard-coded port like 777. Only the first call will get through, and it will be mapped on the original port; the second one will be blocked. NATs are a hack; some of them are so bad that they actually rewrite the public transport address on the way back, not only in the header but even in the transported data, which is kinda crazy. The ICE protocol has to XOR the public address to bypass this issue.
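The XOR trick mentioned at the end is STUN's XOR-MAPPED-ADDRESS attribute (RFC 5389): the server XORs the reflexive port with the top 16 bits of the STUN magic cookie and the IPv4 address with the full cookie, so a meddling NAT doesn't recognize (and rewrite) its own public address inside the payload. A minimal sketch of the decode step, with a made-up example address:

```python
import ipaddress

MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389

def xor_decode(xport, xaddr):
    """Recover (port, IPv4 address) from their XOR-encoded wire forms."""
    port = xport ^ (MAGIC_COOKIE >> 16)
    addr = int(ipaddress.IPv4Address(xaddr)) ^ MAGIC_COOKIE
    return port, str(ipaddress.IPv4Address(addr))

# Round-trip check: XOR encoding is its own inverse.
port, addr = 777, "198.51.100.4"  # example values only
xport = port ^ (MAGIC_COOKIE >> 16)
xaddr = str(ipaddress.IPv4Address(int(ipaddress.IPv4Address(addr)) ^ MAGIC_COOKIE))
assert xor_decode(xport, xaddr) == (port, addr)
```

Since the NAT only pattern-matches the literal bytes of the public address, the XORed form passes through untouched and the client can recover the true mapped address.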
42,809
Let's say a developer fixed a bug or refactored a component in a way that can affect many places in the application. As a tester, how do you define the scope of the regression tests so you don't miss any bad side effect of those changes? Is it the developer's job to clearly define it? What if they can't?
2020/03/04
[ "https://sqa.stackexchange.com/questions/42809", "https://sqa.stackexchange.com", "https://sqa.stackexchange.com/users/43935/" ]
I believe it is important to look at test coverage as something that ### Needs to be adjusted as features grow and change (proactive) as *opposed to* ### Ensuring that things that happened in the past are prevented (or at least detected) by adding tests for them (reactive) The problem with the second approach (particularly common in traditional waterfall and command-and-control environments) is that it leads to an ever-growing and ever-slowing set of tests that, over time, give less value as more issues arise in using, managing, and maintaining them - which is done by costly humans. The number of features and bugs times the number of devices and versions is always more than can be tested anyway, so all testing already requires some form of scoping down. At worst you end up using KPIs for number of bugs and number of tests to measure quality effectiveness. 7000 UI tests should not be a point of pride in most organizations, but I have seen them be in some. Also, when discovering and fixing a bug, or indeed making any change in functionality, the order should be: * Will unit tests be enough to cover this new condition? * If not, will integrated tests cover it? * If not, will E2E non-UI tests cover it? * If not, will an automated UI test cover it? * If not, will manual testing be practical to cover it? For example, if the change is in a back-end routine that prepares data seen in the front end, you want to *avoid testing back-end code changes through front-end tests whenever possible*. Instead you use *unit* tests to test the differences. Often the hard part here is communicating this to the business. That takes care and tact and excellent communication skills, such as listening first.
As testers we won't set hard boundaries for testing; that's why organisations push for test automation, CI/CD, and so on. We expect to get (almost) 100% test or feature coverage for each new build. But if you are in an organisation that doesn't have test automation, then sometimes test completion deadlines can make you feel challenged. In such cases, a few of the things I follow are: 1. **Sanity testing** This decides the entry criteria. Here we check that basic things are working: you navigate to different pages and see that things look testable and functioning. 2. **Smoke testing:** Test the most critical features. 3. **Using critical thinking and common sense** If you are really time-limited, use critical thinking to choose the components that would be affected. For instance, if a header element is updated, then ideally you should check all the pages that use it. But for a time-limited task, think about pages that have many components and could develop alignment issues if any component changes size. 4. **Never stop testing** The final rule is to test more and keep testing even after you approve the candidate. Never stop testing.
42,809
Let's say a developer fixed a bug or refactored a component in a way that can affect many places in the application. As a tester, how do you define the scope of the regression tests so you don't miss any bad side effect of those changes? Is it the developer's job to clearly define it? What if they can't?
2020/03/04
[ "https://sqa.stackexchange.com/questions/42809", "https://sqa.stackexchange.com", "https://sqa.stackexchange.com/users/43935/" ]
As testers we won't set hard boundaries for testing; that's why organisations push for test automation, CI/CD, and so on. We expect to get (almost) 100% test or feature coverage for each new build. But if you are in an organisation that doesn't have test automation, then sometimes test completion deadlines can make you feel challenged. In such cases, a few of the things I follow are: 1. **Sanity testing** This decides the entry criteria. Here we check that basic things are working: you navigate to different pages and see that things look testable and functioning. 2. **Smoke testing:** Test the most critical features. 3. **Using critical thinking and common sense** If you are really time-limited, use critical thinking to choose the components that would be affected. For instance, if a header element is updated, then ideally you should check all the pages that use it. But for a time-limited task, think about pages that have many components and could develop alignment issues if any component changes size. 4. **Never stop testing** The final rule is to test more and keep testing even after you approve the candidate. Never stop testing.
Software testing companies use both test automation and manual functional testing to handle these kinds of situations and maximize feature coverage. However, if we don't have automated scripts, then for those cases we can perform smoke, sanity, and regression testing. Smoke testing: here we decide whether the QA build is stable and all critical areas of the application are working as expected. Sanity testing: here the QA team checks the basic functionality and workflow of the application and verifies that it is working fine. Moreover, test all the components that you think are impacted by the changes the developer made. Execute all the test cases for those areas if time permits. Do not stop testing the build until the code goes to the live environment, so you find all the issues introduced by the refactored code. The dev team can suggest the areas where code was refactored by providing a root cause analysis. The rest the QA team can handle by testing around those areas.
42,809
Let's say a developer fixed a bug or refactored a component in a way that can affect many places in the application. As a tester, how do you define the scope of the regression tests so you don't miss any bad side effect of those changes? Is it the developer's job to clearly define it? What if they can't?
2020/03/04
[ "https://sqa.stackexchange.com/questions/42809", "https://sqa.stackexchange.com", "https://sqa.stackexchange.com/users/43935/" ]
I believe it is important to look at test coverage as something that ### Needs to be adjusted as features grow and change (proactive) as *opposed to* ### Ensuring that things that happened in the past are prevented (or at least detected) by adding tests for them (reactive) The problem with the second approach (particularly common in traditional waterfall and command-and-control environments) is that it leads to an ever-growing and ever-slowing set of tests that, over time, give less value as more issues arise in using, managing, and maintaining them - which is done by costly humans. The number of features and bugs times the number of devices and versions is always more than can be tested anyway, so all testing already requires some form of scoping down. At worst you end up using KPIs for number of bugs and number of tests to measure quality effectiveness. 7000 UI tests should not be a point of pride in most organizations, but I have seen them be in some. Also, when discovering and fixing a bug, or indeed making any change in functionality, the order should be: * Will unit tests be enough to cover this new condition? * If not, will integrated tests cover it? * If not, will E2E non-UI tests cover it? * If not, will an automated UI test cover it? * If not, will manual testing be practical to cover it? For example, if the change is in a back-end routine that prepares data seen in the front end, you want to *avoid testing back-end code changes through front-end tests whenever possible*. Instead you use *unit* tests to test the differences. Often the hard part here is communicating this to the business. That takes care and tact and excellent communication skills, such as listening first.
Software testing companies use both test automation and manual functional testing to handle these kinds of situations and maximize feature coverage. However, if we don't have automated scripts, then for those cases we can perform smoke, sanity, and regression testing. Smoke testing: here we decide whether the QA build is stable and all critical areas of the application are working as expected. Sanity testing: here the QA team checks the basic functionality and workflow of the application and verifies that it is working fine. Moreover, test all the components that you think are impacted by the changes the developer made. Execute all the test cases for those areas if time permits. Do not stop testing the build until the code goes to the live environment, so you find all the issues introduced by the refactored code. The dev team can suggest the areas where code was refactored by providing a root cause analysis. The rest the QA team can handle by testing around those areas.
1,029,990
I have a 2015 Dell XPS (new style), and sometimes it exhibits the above behaviour, typically once a week or so. I've not noticed any common pattern that triggers it. A reboot always fixes it, but is seldom convenient! I've tried disabling/re-enabling the touchpad in Device Manager etc., to no avail. Any ideas?
2016/01/22
[ "https://superuser.com/questions/1029990", "https://superuser.com", "https://superuser.com/users/338607/" ]
Go to Control Panel and uninstall the driver for the touchpad. Go to Dell's site and download the driver for your model of laptop. You can then open the program for the touchpad and modify its settings. It could also be that another piece of software or a service is interfering. Try doing a clean boot of your PC - as seen here <https://support.microsoft.com/en-us/kb/929135>. Once you do that, start enabling different startup items until you find the culprit.
This no longer seems to be a problem (after being plagued by it for a couple of years), so I assume Dell or Microsoft finally fixed something. If you're having the issue make sure everything from Dell and Windows is up to date and if you're lucky it might fix things for you.
196,232
We're a medium-sized enterprise with many different IT administrators (e.g. domain admins, Azure admins, DB admins, ...). We're worried that hackers can easily breach an admin's laptop and through that steal data / do serious harm. We thought about using VDI / jump server so that an admin would connect from his personal laptop to that remote machine and administer the network through that. However, it doesn't seem to really solve the problem as if an attacker owns the admin's personal laptop, he can simply control the remote machine. We also read about [Privileged Access Workstations](https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-access/privileged-access-workstations) (PAW). While the approach seems safer, it's too cumbersome for our users and requires discipline. What's the best (practical) way to protect IT administrators?
2018/10/23
[ "https://security.stackexchange.com/questions/196232", "https://security.stackexchange.com", "https://security.stackexchange.com/users/189604/" ]
Using a VDI/Jump Server isn't such a terrible idea, it's a pretty common technique - if you set it up with some sort of multi-factor authentication (which isn't on the laptop) then at least a compromised laptop can't just connect into your management network.
I'd separate administrators' less-privileged operational accounts (for day-to-day duties) from more privileged accounts like domain/DB admin (which are used during changes in domain/database architecture - a much rarer task). Yes, it may require fine-tuning, custom roles, and so on, but in the process you'll better understand your RBAC-related landscape. For the most privileged accounts I'd recommend split passwords (i.e. half of the password stored in a vault) combined with 2FA.
196,232
We're a medium-sized enterprise with many different IT administrators (e.g. domain admins, Azure admins, DB admins, ...). We're worried that hackers can easily breach an admin's laptop and through that steal data / do serious harm. We thought about using VDI / jump server so that an admin would connect from his personal laptop to that remote machine and administer the network through that. However, it doesn't seem to really solve the problem as if an attacker owns the admin's personal laptop, he can simply control the remote machine. We also read about [Privileged Access Workstations](https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-access/privileged-access-workstations) (PAW). While the approach seems safer, it's too cumbersome for our users and requires discipline. What's the best (practical) way to protect IT administrators?
2018/10/23
[ "https://security.stackexchange.com/questions/196232", "https://security.stackexchange.com", "https://security.stackexchange.com/users/189604/" ]
I agree with and would like to expand on Odo's answer. I know that for our organization the idea of a dedicated PAW was great, but the practicality of implementing one based on actual STIG guidelines wasn't there. Especially for SMBs, having a dedicated access VLAN for that machine with only specific admin accounts is a little more than most can handle. We ended up implementing an environment akin to what Odo suggested. We have multiple administrators who perform various tasks across multiple sites, and so the threat vector for someone gaining admin access to our DCs was significant. I think you'll find, though, that if you really consider your roles, there are only a handful of admins who actually require full domain admin permissions on your DCs. We ended up creating (for lack of a better word) an elevation admin group that our everyday IT admins use on everyday workstations. These accounts have no administrative privileges on the servers, and have actually been denied logon privileges through GPOs. Due to restrictions in our environment we aren't able to use any mobile devices in our building, and don't have the budget for tokens, so MFA wasn't really an option. We ended up using Terminal Services GPOs (I think they're called RDS now, though) to allow or deny remote access to accounts depending on their privilege level. For example, the elevation admins aren't able to remotely log into any workstation, and domain admins aren't able to remotely log into any servers (I should also mention that any members of DA/EA are also denied logon on everyday workstations). This significantly helps prevent an attacker from stealing credentials that are used on a daily basis and wreaking havoc on your system. There are some pretty useful guidelines on this from a number of different sources; we generally use [STIG templates](https://iase.disa.mil/stigs/Pages/a-z.aspx) to create our baseline.
Some important ones to note which touch on the points above are Deny log on for highly privileged domain accounts ([V-63877](https://www.stigviewer.com/stig/windows_10/2015-11-30/finding/V-63877)) and Deny log on through RDS for highly privileged domain accounts ([V-63879](https://www.stigviewer.com/stig/windows_10/2015-11-30/finding/V-63879)). Your administrators would need to elevate in a UAC prompt using their privileged accounts, but it ensures that if their laptop or workstation is broken into, the attacker would need multiple sets of credentials to really accomplish anything.
I'd separate administrators' less-privileged operational accounts (for day-to-day duties) from more privileged accounts like domain/DB admin (which are used during changes in domain/database architecture - a much rarer task). Yes, it may require fine-tuning, custom roles, and so on, but in the process you'll better understand your RBAC-related landscape. For the most privileged accounts I'd recommend split passwords (i.e. half of the password stored in a vault) combined with 2FA.
22,617
I apologize for the lengthy question! The point is that I would like to ask for book recommendations for my thesis, which describes how to introduce changes in an existing software development process. For details, please read the lengthy part. I'm going to write my (MBA) thesis about how my team innovated our software development process by switching from TFVC to Git and introducing a more fine-grained work-tracking structure to enable more data-driven decisions. It is a subject of technology management science. What we did is process innovation. The main topic of my thesis is the process of introducing new procedures and methods into an already existing process. For example: What steps do you take? What are the parameters of your decision? How can POCs help? Where is the point of no return? Etc. Because the university where I study is more manufacturing-focused than software-development-focused, the literature I got via my teaching material is also rather manufacturing-focused. The pupil (myself) is also expected to display in his/her thesis an understanding of the available literature of the given science. What I have now is [An implementation model for reducing resistance to technological change](http://onlinelibrary.wiley.com/doi/10.1002/hfm.4530040107/full), written by Mica R. Endsley and published in 1994. It is an excellent start, but as I said, it is rather manufacturing-focused and was written before the dot-com crisis. Despite the fact that software development and software delivery took their methods (could I say their initial steps?) from manufacturing, we don't have the restrictions in software development that manufacturing has (or at least they are lightened to various extents). For example, in SD you can have a failed build or a dropped MVP at little cost, whereas the same dead end might cost significantly more in manufacturing.
My point is that what Endsley describes is a good start, but the change process in SD need not be as rigid as described. As a consequence, I have to make a fine distinction between manufacturing processes and software development processes and shed light on the fact that introducing change in a software development process has both similarities to and differences from what Endsley describes. To support the point above, I need literature from the software development area about the suggested way of making changes in an already existing process. Besides that, the thesis will have a conclusion presenting my own experiences. Unfortunately, I can't recall any books on this topic, because those I have read are about introducing Kanban, Scrum, and technology-related matters. Making changes in processes is driven by common sense in my world, but that is not enough for a thesis. You might say the PDCA cycle should work. It is not applicable here because switching from TFVC to Git is a big-bang change rather than an iterative enhancement. Many thanks for any help!
2017/10/17
[ "https://pm.stackexchange.com/questions/22617", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/17022/" ]
These are some that I have recently listened to. I have put links to the audiobooks. Play at 1.5-2x speed, which will help you get through them quicker for your MBA. Manufacturing, but applicable to software: [The Goal](https://www.audible.co.uk/pd/Business/The-Goal-Audiobook/B00IFGIUX0/ref=a_search_c4_1_2_srTtl?qid=1508311542&sr=1-2) How a disaster project is turned around by switching to an Agile methodology: [The Phoenix Project](https://www.audible.co.uk/pd/Business/The-Phoenix-Project-Audiobook/B00VB034GK/ref=a_search_c4_1_1_srTtl?qid=1508311700&sr=1-1) Similar to The Phoenix Project: [Rolling Rocks Downhill](https://www.audible.co.uk/pd/Non-fiction/Rolling-Rocks-Downhill-Audiobook/B01DYS9V22/ref=a_search_c4_1_1_srTtl?qid=1508311731&sr=1-1) Most of the books by Steve McConnell could be useful. [Rapid Development](https://www.amazon.co.uk/Rapid-Development-Taming-Software-Schedules/dp/1556159005/ref=sr_1_3?ie=UTF8&qid=1508311841&sr=8-3&keywords=steve%20mcconnell) is >20 years old now, but I still refer back to it even now. You can pick up secondhand copies from Amazon cheaply if your region has used copies available (for example in the UK for 0.01p + postage). The Phoenix Project and Rolling Rocks may be more applicable to you, as they start with a project that is in chaos (late, burned-out developers, etc.) and basically start adding Agile methodology while chopping out requirements/wishes, starting smaller releases, starting testing earlier, things like that. Good luck with your MBA.
[Commitment](https://rads.stackoverflow.com/amzn/click/9462410038) by Olav Maassen. > > Via Rose Randall, the main character of this unique graphic business > novel, the reader is introduced to the challenges a project manager > faces. Rose Randall is the archetypal reluctant project manager. > Following a painful project failure years ago, Rose's life is cast > into chaos when she is once again thrown into the role against her > wishes. Faced with a struggling project, help comes from an unexpected > source guiding Rose in the direction of Real Options. > > >
657
I usually freeze my leftovers. Then, when I reheat them, the flavors are diminished - less salt, less chili, less everything. Any ideas why?
2010/07/11
[ "https://cooking.stackexchange.com/questions/657", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/176/" ]
There are a variety of factors at work here: 1. Freezing foods "improperly" (i.e. not flash-frozen, not vacuum-sealed) causes ice crystals to form within the food, damaging the molecular structures. This is what causes many frozen leftovers to become "mushy" or change in texture. 2. Again due to the formation of ice and the movement of water when the food is reheated, tiny particles such as spices can be lost in steam and/or runoff water. 3. Extreme temperatures (both hot and cold) can denature enzymes in the food, changing their flavor, texture, etc. 4. As food sits, flavors in the food may blend together in different ways, causing the food to have less distinct flavors. 5. If your freezer isn't especially clean and your food not well-sealed, your food may be absorbing other odors which are again "masking" the original flavor of your food. Hope that gives you some ideas.
I asked this of a Swedish food guru and got the explanation that freezing and reheating often softens the food. The effect is that each piece of food has less time in the mouth before swallowing and therefore less time to be tasted. I'm not fully convinced, but it might be worth experimenting with.
714,958
I was going to ask this in Wilderness, but I'm pretty sure we'll need some math to find the real answer. They say you can't outrun a grizzly bear because [they run about 35 mph](https://www.treehugger.com/how-fast-can-a-bear-run-5113356#:%7E:text=Armed%20with%20astonishingly%20powerful%20forelegs,species%2C%20the%20American%20black%20bear.), while the average human runs [6-18 mph](https://www.thecoldwire.com/how-fast-does-an-average-person-run/). I'm assuming we are simple steaks running in a straight-line race. But seeing that the weight of a grizzly is between [700-1700 lb](https://www.nwf.org/Educational-Resources/Wildlife-Guide/Mammals/Grizzly-Bear) and the average weight of a human is [136-178 lb](https://www.livescience.com/36470-human-population-weight.html#:%7E:text=The%20average%20body%20mass%2C%20globally,178%20pounds%20(80.7%20kg).), couldn't we attempt to out-maneuver them with zigzags and/or an on-the-dime 180 turn? Or is the 20 mph speed difference too much, and our only hope is to lie down and hope it isn't our day? Could we identify the optimal pathing that uses the bear's disadvantages against it? Note - the references aren't anything special. I just wanted to give the math people some numbers to consider.
2022/06/21
[ "https://physics.stackexchange.com/questions/714958", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/71185/" ]
> > Could we identify the optimal pathing? > > > To what end? If you can't outrun it, you are never getting away. The bear also has a number of advantages to make up for the square-cube law between its mass and muscle cross-section: * It's *way* more fit than you are. * It has four legs to run with, which effectively halves the load compared to a human. * Four legs probably provide more maneuverability at speed than two, and definitely more stability in terrain like a forest. The bear isn't going to trip. And your ability to turn on a dime doesn't mean anything if your goal is to run away, since you don't want to be standing still. You could theoretically take advantage of this maneuverability by trying to bull-fight, but zigzagging is pointless because running away is something you do not want to do in bull-fighting. And good luck bull-fighting with no training against something that has other ways to attack than just charging and isn't attracted to waving objects.
You can identify optimal pathing; survival experts have done so. However, for your specific solution of zigzagging, a fundamental issue is that you have to run further than the bear. Bears have something like a 2-meter reach from side to side, so every time you zigzag, you have to travel about 2 meters further than they do. If you watch prey animals, they tend to run rather straight ahead until a moment when they think turning really gives them an advantage. Then they give that turn their all. Zigzagging is slow, but a single well-timed zig may be the key.
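The extra-distance penalty above is easy to quantify with the question's own numbers (bear ~35 mph, fast human ~18 mph). This is a crude back-of-the-envelope model, not real pursuit geometry: it treats each zig as ~2 m of pure extra path for the human, i.e. 2 m shaved off the lead, and the head-start of 50 m is an arbitrary example.

```python
MPH_TO_MS = 0.44704                # miles per hour -> meters per second
bear_v = 35 * MPH_TO_MS            # ~15.6 m/s
human_v = 18 * MPH_TO_MS           # ~8.0 m/s

def time_to_catch(head_start_m, extra_per_zig=2.0, zigs=0):
    """Seconds until the bear closes the gap, with each zig costing the
    human extra_per_zig meters of lead (a deliberately crude model)."""
    closing_speed = bear_v - human_v           # ~7.6 m/s in the bear's favor
    effective_gap = head_start_m - extra_per_zig * zigs
    return effective_gap / closing_speed

straight = time_to_catch(50)           # running straight: ~6.6 s of freedom
zigzag = time_to_catch(50, zigs=10)    # ten zigs only shorten it
assert zigzag < straight
```

Under these assumptions, zigzagging strictly reduces the time before the bear catches up, which is the point the answer makes: save the turn for the one moment it actually buys something.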
714,958
I was going to ask this in Wilderness, but I'm pretty sure we'll need some math to find the real answer. They say you can't outrun a grizzly bear because [they run about 35 mph](https://www.treehugger.com/how-fast-can-a-bear-run-5113356#:%7E:text=Armed%20with%20astonishingly%20powerful%20forelegs,species%2C%20the%20American%20black%20bear.), while the average human runs [6-18 mph](https://www.thecoldwire.com/how-fast-does-an-average-person-run/). I'm assuming we are simple steaks running in a straight-line race. But seeing that the weight of a grizzly is between [700-1700 lb](https://www.nwf.org/Educational-Resources/Wildlife-Guide/Mammals/Grizzly-Bear) and the average weight of a human is [136-178 lb](https://www.livescience.com/36470-human-population-weight.html#:%7E:text=The%20average%20body%20mass%2C%20globally,178%20pounds%20(80.7%20kg).), couldn't we attempt to out-maneuver them with zigzags and/or an on-the-dime 180 turn? Or is the 20 mph speed difference too much, and our only hope is to lie down and hope it isn't our day? Could we identify the optimal pathing that uses the bear's disadvantages against it? Note - the references aren't anything special. I just wanted to give the math people some numbers to consider.
2022/06/21
[ "https://physics.stackexchange.com/questions/714958", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/71185/" ]
Usain Bolt's mass is [94 kg](http://usainbolt.com/bio/), which is significantly larger than that of most human males. Does that mean you can outrun Usain Bolt (read: win the 200 m, since that involves running a bend) if you are good at zigzagging? No, of course not. That's because, biologically, running speed depends mostly on how much force you exert on the ground, not on your weight. That's why it's impossible to run in zero gravity, and why the bear weighing more does not matter: it's more than compensated for by the corresponding increase in muscular strength.
You can identify optimal pathing; survival experts have done so. However, with your specific solution of zigzagging, a fundamental issue arises: you have to run further than the bear. Bears have something like a 2-meter reach from side to side, so every time you zigzag you have to travel about 2 meters further than they do. If you watch prey animals, they tend to run more or less straight ahead until the moment they think turning really gives them an advantage; then they give that turn their all. Zigzagging is slow, but a single well-timed zig may be the key.
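The answer's geometric point can be put into rough numbers. A back-of-the-envelope sketch in Python, using the top speeds from the question and the 2-meter reach from the answer (the 10 m head start and the 10 m zigzag leg are illustrative assumptions):

```python
import math

# Figures taken from the question/answer; head start and leg length are assumed.
BEAR_MPH, HUMAN_MPH = 35, 18          # top speeds from the question
bear = BEAR_MPH * 0.44704             # mph -> m/s
human = HUMAN_MPH * 0.44704
reach = 2.0                           # bear's side-to-side reach (m)
head_start = 10.0                     # assumed initial gap (m)

# Straight line: the gap closes at the speed difference.
t_caught_straight = head_start / (bear - human)

# Zigzag: per 10 m of forward progress the human runs the hypotenuse,
# while the bear (with its 2 m reach) only needs the forward component.
forward = 10.0
leg = math.hypot(forward, reach)               # human's actual path per leg
effective_human = human * forward / leg        # human's forward speed
t_caught_zigzag = head_start / (bear - effective_human)

print(f"straight: caught in {t_caught_straight:.2f} s")
print(f"zigzag:   caught in {t_caught_zigzag:.2f} s")
```

On these assumptions the zigzag actually gets you caught sooner, which is the answer's point: constant zigzagging only adds distance, and a single well-timed turn is the better bet.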
39,794,207
How do I get the Gradle console to show when I run a Gradle task in Android Studio? When I execute the task, the run dialog pops up but the console does not. I want to be able to see the Gradle console so that I can see the output, but I don't want to permanently see the Gradle console (pinned mode), as 95% of the time I'd rather have the real estate for the editor.
2016/09/30
[ "https://Stackoverflow.com/questions/39794207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1146334/" ]
On your run window you might see this button: ![](https://i.stack.imgur.com/8oJXG.png) Just press it: ![](https://i.stack.imgur.com/lFrvm.png) And there you go
As of [Android Studio 3.1](https://developer.android.com/studio/releases#output-window), the separate Gradle console has been replaced by the Build window; see [Monitoring the Build Process](https://developer.android.com/studio/run#gradle-console). [![monitor the build process](https://i.stack.imgur.com/i003T.png)](https://i.stack.imgur.com/i003T.png) 1. Build tab 2. Sync tab 3. Restart 4. Toggle view - toggles between displaying task execution as a graphical tree and displaying more detailed text output from Gradle
279,991
I have a large power supply with multiple DC output modules. Two of the modules are connected, positive to positive, through a 1.1 Ω resistor (really, two 2.2 Ω resistors in parallel). When I measure the voltages without the resistor, I get the correct voltages (3.65 V and 5.35 V), but with the resistor I measure 5.9 V on the 5.35 V module and 2.8 V on the 3.65 V module. I am probing across the + and common terminals of each module. I was also curious to see whether the commons were connected; when measured, it reads 67.5 Ω. (However, inside the machine, when everything is hooked up, the commons become connected.) So I suppose my circuit would look a little something like this (outside of the machine)... ![schematic](https://i.stack.imgur.com/SgLPb.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fSgLPb.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) My question is: what is the exact purpose of this circuit? (My guess is to share the current for the loads.) Also note that the resistors are only connected to the two positive output terminals. And how should I properly measure the voltages?[![enter image description here](https://i.stack.imgur.com/mMCF2.jpg)](https://i.stack.imgur.com/mMCF2.jpg) (This goes to an ultrasound medical device; the computer reads the voltages correctly, but am I probing wrong? The red probe is on the 5.35 V module and the black on the 3.65 V module. The blue wires coming off the terminals go to the power resistors, located somewhere else.)
2017/01/12
[ "https://electronics.stackexchange.com/questions/279991", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/98200/" ]
Given your instructor's remark, I suspect that you were actually told about "debouncing", which is a very useful function. However, the circuit shown is a bad, bad example of how to do it. Let's think about a switch making contact. Inside, you have two pieces of metal making contact, and on a timescale of milliseconds the two pieces can actually bounce, perhaps repeatedly, before settling down. If this happens, the switch output will apparently show multiple activations where only one was intended. To combat this, a circuit like this can be used: ![schematic](https://i.stack.imgur.com/bzSBf.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fbzSBf.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) You've noticed that it has the same general outline as your circuit, but a few more resistors. Let's say that the switch is open, gets briefly closed, then opens, then closes for good (a single bounce). Current through R1 turns on Q1 and pulls its collector low. This also discharges C1. As a result, Q2 is turned off and Vout goes high. When the switch bounces, Q1 is briefly turned off and its collector rises. However, the capacitor charges up much more slowly than it discharges (for appropriate values of R2 and R3), so Q2 never turns back on. As a result, the output Vout has been "debounced", and R2/R3/C1 can be selected for any desired bounce time to be ignored. The original circuit is not very good, since the capacitor voltage swing is quite small, due to the clamping effect of the Q2 base-emitter junction.
(comments to answer) *The workings of the "circuit"* The voltage at the collector of the first BJT will be clamped at 0.7 V (approx.) by the base-emitter junction of the second BJT. BJT(2) is most likely in saturation (so only a small DC output, say 0.2 V). BJT(1) has no bias arrangement, so it is impossible to tell what Ue will actually do to BJT(1). (Is Ue a DC, AC, or mixed signal? No one knows.) **It's a terrible example to give to a class who are just learning to come to grips with the basics.** I'm not surprised you were confused. *What does the capacitor achieve in this circuit?* Very little, and I can't understand why anyone would put it across BJT(1). As the voltage across it is clamped (by BJT(2)), all it can do is act as a 'smoothing' capacitor rather than a coupling capacitor (collector to base). If Ue is taken to +0.7 V, then BJT(1) would be in saturation (Vce ≈ 0.2 V), causing the voltage across the capacitor to fall (discharging rapidly through BJT(1)). In this case Ua2 goes high because BJT(2) is turned off. If Ue is taken low (0 V), the voltage across the capacitor rises to 0.7 V with time constant RL·Cp, and Ua2 falls to Vsat (about 0.2 V).
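Both answers above come down to an RC time constant. As a rough sketch of how the component values set the ignored-bounce window in the debounce circuit (the 5 V supply and the 47 kΩ / 1 µF values are illustrative assumptions, not values from either schematic):

```python
import math

vcc = 5.0    # assumed supply voltage
v_on = 0.7   # approximate base-emitter turn-on voltage of Q2
r = 47e3     # assumed charging resistance (ohms)
c = 1e-6     # assumed capacitance (farads)

# Capacitor charging from 0 V toward vcc through r:
#   v(t) = vcc * (1 - exp(-t / (r*c)))
# Solve for the time at which v(t) reaches v_on:
t_ignore = -r * c * math.log(1 - v_on / vcc)
print(f"bounces shorter than ~{t_ignore * 1e3:.1f} ms are ignored")
```

Scaling either R or C scales the window linearly, so the values can be chosen for whatever bounce duration the switch datasheet suggests.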
279,991
I have a large power supply with multiple DC output modules. Two of the modules are connected, positive to positive, through a 1.1 Ω resistor (really, two 2.2 Ω resistors in parallel). When I measure the voltages without the resistor, I get the correct voltages (3.65 V and 5.35 V), but with the resistor I measure 5.9 V on the 5.35 V module and 2.8 V on the 3.65 V module. I am probing across the + and common terminals of each module. I was also curious to see whether the commons were connected; when measured, it reads 67.5 Ω. (However, inside the machine, when everything is hooked up, the commons become connected.) So I suppose my circuit would look a little something like this (outside of the machine)... ![schematic](https://i.stack.imgur.com/SgLPb.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fSgLPb.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) My question is: what is the exact purpose of this circuit? (My guess is to share the current for the loads.) Also note that the resistors are only connected to the two positive output terminals. And how should I properly measure the voltages?[![enter image description here](https://i.stack.imgur.com/mMCF2.jpg)](https://i.stack.imgur.com/mMCF2.jpg) (This goes to an ultrasound medical device; the computer reads the voltages correctly, but am I probing wrong? The red probe is on the 5.35 V module and the black on the 3.65 V module. The blue wires coming off the terminals go to the power resistors, located somewhere else.)
2017/01/12
[ "https://electronics.stackexchange.com/questions/279991", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/98200/" ]
Given your instructor's remark, I suspect that you were actually told about "debouncing", which is a very useful function. However, the circuit shown is a bad, bad example of how to do it. Let's think about a switch making contact. Inside, you have two pieces of metal making contact, and on a timescale of milliseconds the two pieces can actually bounce, perhaps repeatedly, before settling down. If this happens, the switch output will apparently show multiple activations where only one was intended. To combat this, a circuit like this can be used: ![schematic](https://i.stack.imgur.com/bzSBf.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fbzSBf.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) You've noticed that it has the same general outline as your circuit, but a few more resistors. Let's say that the switch is open, gets briefly closed, then opens, then closes for good (a single bounce). Current through R1 turns on Q1 and pulls its collector low. This also discharges C1. As a result, Q2 is turned off and Vout goes high. When the switch bounces, Q1 is briefly turned off and its collector rises. However, the capacitor charges up much more slowly than it discharges (for appropriate values of R2 and R3), so Q2 never turns back on. As a result, the output Vout has been "debounced", and R2/R3/C1 can be selected for any desired bounce time to be ignored. The original circuit is not very good, since the capacitor voltage swing is quite small, due to the clamping effect of the Q2 base-emitter junction.
This circuit converts a current at its input to a voltage at its output. It is typically used for digital signals, and with a base resistor at the first transistor. (In that case, if the base resistor is larger than RL, the output impedance is lower than the input impedance, i.e., the circuit amplifies the current.) Ue and Ua,2 are not actually interesting, because both transistors are either off or saturated. The capacitor, if it has been put there deliberately and does not just represent a parasitic capacitance, slows down switching on the second transistor. This might be useful if you want to balance the switching times (a saturated transistor is slower switching off than switching on). For example, such a capacitor is used in several Roland MIDI devices to ensure that the falling and rising edges of the MIDI signal are delayed by approximately the same amount: ![Roland MKS-30 MIDI I/O](https://i.stack.imgur.com/B7f1K.png)
279,991
I have a large power supply with multiple DC output modules. Two of the modules are connected, positive to positive, through a 1.1 Ω resistor (really, two 2.2 Ω resistors in parallel). When I measure the voltages without the resistor, I get the correct voltages (3.65 V and 5.35 V), but with the resistor I measure 5.9 V on the 5.35 V module and 2.8 V on the 3.65 V module. I am probing across the + and common terminals of each module. I was also curious to see whether the commons were connected; when measured, it reads 67.5 Ω. (However, inside the machine, when everything is hooked up, the commons become connected.) So I suppose my circuit would look a little something like this (outside of the machine)... ![schematic](https://i.stack.imgur.com/SgLPb.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fSgLPb.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) My question is: what is the exact purpose of this circuit? (My guess is to share the current for the loads.) Also note that the resistors are only connected to the two positive output terminals. And how should I properly measure the voltages?[![enter image description here](https://i.stack.imgur.com/mMCF2.jpg)](https://i.stack.imgur.com/mMCF2.jpg) (This goes to an ultrasound medical device; the computer reads the voltages correctly, but am I probing wrong? The red probe is on the 5.35 V module and the black on the 3.65 V module. The blue wires coming off the terminals go to the power resistors, located somewhere else.)
2017/01/12
[ "https://electronics.stackexchange.com/questions/279991", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/98200/" ]
(comments to answer) *The workings of the "circuit"* The voltage at the collector of the first BJT will be clamped at 0.7 V (approx.) by the base-emitter junction of the second BJT. BJT(2) is most likely in saturation (so only a small DC output, say 0.2 V). BJT(1) has no bias arrangement, so it is impossible to tell what Ue will actually do to BJT(1). (Is Ue a DC, AC, or mixed signal? No one knows.) **It's a terrible example to give to a class who are just learning to come to grips with the basics.** I'm not surprised you were confused. *What does the capacitor achieve in this circuit?* Very little, and I can't understand why anyone would put it across BJT(1). As the voltage across it is clamped (by BJT(2)), all it can do is act as a 'smoothing' capacitor rather than a coupling capacitor (collector to base). If Ue is taken to +0.7 V, then BJT(1) would be in saturation (Vce ≈ 0.2 V), causing the voltage across the capacitor to fall (discharging rapidly through BJT(1)). In this case Ua2 goes high because BJT(2) is turned off. If Ue is taken low (0 V), the voltage across the capacitor rises to 0.7 V with time constant RL·Cp, and Ua2 falls to Vsat (about 0.2 V).
This circuit converts a current at its input to a voltage at its output. It is typically used for digital signals, and with a base resistor at the first transistor. (In that case, if the base resistor is larger than RL, the output impedance is lower than the input impedance, i.e., the circuit amplifies the current.) Ue and Ua,2 are not actually interesting, because both transistors are either off or saturated. The capacitor, if it has been put there deliberately and does not just represent a parasitic capacitance, slows down switching on the second transistor. This might be useful if you want to balance the switching times (a saturated transistor is slower switching off than switching on). For example, such a capacitor is used in several Roland MIDI devices to ensure that the falling and rising edges of the MIDI signal are delayed by approximately the same amount: ![Roland MKS-30 MIDI I/O](https://i.stack.imgur.com/B7f1K.png)
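Back on the power-supply question itself, the effect of the share resistor can be sanity-checked with Ohm's law. A sketch using the set-points and resistor value from the question (it deliberately ignores how the modules' regulation loops react, so it is a rough estimate, not a prediction of the exact readings):

```python
v_high = 5.35       # higher module set-point (V), from the question
v_low = 3.65        # lower module set-point (V)
r_share = 2.2 / 2   # two 2.2-ohm resistors in parallel -> 1.1 ohm

# If both modules held their set-points, the resistor would carry a
# circulating current from the higher output into the lower one:
i_circ = (v_high - v_low) / r_share
p_r = i_circ ** 2 * r_share
print(f"~{i_circ:.2f} A through the resistor, dissipating ~{p_r:.1f} W")
```

Well over an amp fighting both regulators would be consistent with the skewed readings (5.9 V and 2.8 V) the poster sees; measuring each module with the resistor lifted, as was done, recovers the true set-points.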
13,976
People often say, including those with extensive knowledge in biology, that a certain species of animal will evolve in one way or another: 1. From changing environments. 2. From mutations. 3. Possibly even from genetic engineering by humans. My question lies in the fact that, aside from the latter option, why haven't any differences in animals' (except humans') makeup, morphology, intelligence, DNA, behavior, or habits appeared over thousands or possibly millions of years? A cockroach has the same behavior it had [more than 10 million years ago](http://en.wikipedia.org/wiki/Cretaceous), and there have been no advancements in the species in the slightest. It makes you question evolution: why don't other animals (like cockroaches) show any changes over 10+ million years, yet humans like you and me have, in a relatively similar period of time to the linked geological period above, evolved from spear-tossing hominids into someone brilliant enough to even ponder this question? If modern humans are the result of mutations in genes, why has no other species, over the course of hundreds of millions of years, been fit enough or advanced mentally as we have, even in the slightest?
2013/12/14
[ "https://biology.stackexchange.com/questions/13976", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/-1/" ]
> > How come most animals never seem to evolve over millennia? > > > The word "seem" in your question should not be disregarded. You seem to assume that cockroaches (or most animals, as you say) did not change much over the last tens or hundreds of thousands of years. But what do you know about that (no offence here)? Have you actually reviewed research that estimates the rate of evolution of different randomly chosen lineages over the past 500,000 years? I think you assume that other species evolved more slowly than humans rather than know it. And you will certainly attach much more importance to the evolution of the gene FoxP2 (involved in language) than to a gene giving cockroaches a better sense of smell. This is a biased view of what a rate of evolution is. It would be much wiser to consider a rate of evolution as something like the number of newly arising mutations that succeed in getting fixed in the population. See Haldane's rate of evolution and the darwin unit. Please don't make the mistake of thinking that being smart (or complex) is some kind of goal of evolution, and that those that are not smart (or complex) are "less evolved" or evolved more slowly. You also seem to want to point to the evolution of DNA and the evolution of habits. I guess you might be appreciative of the evolution of human knowledge and culture. But that obviously has nothing to do with genetic evolution; it is rather a matter of cognitive capacity. You cannot compare the change of culture and traditions in insects and humans, as insects mostly have no traditions. Now, it is obviously true that different lineages evolve at different rates. Many things influence these rates, such as population size, mutation rate, generation time, and selection pressure (which itself might depend on social structure or the rate of environmental change, for example).
Contrary to what one might expect, I would not think of Homo sapiens as a lineage with a rather slow evolutionary rate. Homo sapiens is quite a recent species, and speciation is often linked with phenotypic divergence, with niche competition and niche complementarity, and therefore with a high rate of evolution. In these terms, I would believe that humans are a lineage with a high evolutionary rate.
This is a tricky question. First, evolution tends to be slow, although there have been recent examples of very fast evolution as well. So for most evolutionary processes, we are not around long enough to see them happening or to see the outcome. It is therefore also hard to say that no evolution is happening - see your cockroach example. How do you know that these animals are the same as 10 million years ago? And even if they are, it can also mean that these animals fit their niche so well that there is not much pressure for further adaptation. This can change pretty fast, as an example from mites shows (here is a report in the [BBC](http://www.bbc.co.uk/news/science-environment-22039872), and this is the [original publication](http://onlinelibrary.wiley.com/doi/10.1111/ele.12107/abstract)). Another example of fast evolution (of bigger animals) is the cichlids in Lake Victoria, which developed anew after the last time the lake dried up completely, something like 12,000 years ago. After that, an estimated 300 endemic species developed (see [here](http://www.sciencemag.org/content/273/5278/1091.short)), a number later reduced by pollution and other environmental problems. The remaining species are evolving again to occupy the free niches (see [here](http://www.nature.com/news/2010/100707/full/466174a.html)). In the case of humans, we are pretty lucky that no other intelligent animal has come up so far. It would have fought for the same biological niche and living space, with one species eventually dying out. This has, for example, happened to all the other Homo species (habilis, erectus, neanderthalensis). As a species, we are quite young (around 200,000 years), so there is something going on. And there is genetic divergence between humans, but still not so much that we cannot interbreed anymore. And with 7 billion of us now present, it's not that easy for mutations to come through at our reproduction rate.
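For reference, the darwin unit mentioned in the first answer has a simple operational definition: one darwin is an e-fold (natural-log) change in a trait per million years. A minimal sketch (the trait values are made-up illustrative numbers, not real measurements):

```python
import math

def rate_in_darwins(x_start, x_end, million_years):
    """Darwin unit: natural-log change in a trait per million years."""
    return math.log(x_end / x_start) / million_years

# Hypothetical example: a tooth dimension going from 10 mm to 45 mm over 50 My.
r = rate_in_darwins(10.0, 45.0, 50.0)
print(f"{r:.4f} darwins")
```

By this measure, a lineage whose traits barely change scores near zero regardless of how "primitive" it looks, which is the answer's point about not conflating rate of evolution with complexity or smartness.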
13,976
People often say, including those with extensive knowledge in biology, that a certain species of animal will evolve in one way or another: 1. From changing environments. 2. From mutations. 3. Possibly even from genetic engineering by humans. My question lies in the fact that, aside from the latter option, why haven't any differences in animals' (except humans') makeup, morphology, intelligence, DNA, behavior, or habits appeared over thousands or possibly millions of years? A cockroach has the same behavior it had [more than 10 million years ago](http://en.wikipedia.org/wiki/Cretaceous), and there have been no advancements in the species in the slightest. It makes you question evolution: why don't other animals (like cockroaches) show any changes over 10+ million years, yet humans like you and me have, in a relatively similar period of time to the linked geological period above, evolved from spear-tossing hominids into someone brilliant enough to even ponder this question? If modern humans are the result of mutations in genes, why has no other species, over the course of hundreds of millions of years, been fit enough or advanced mentally as we have, even in the slightest?
2013/12/14
[ "https://biology.stackexchange.com/questions/13976", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/-1/" ]
This is a tricky question. First, evolution tends to be slow, although there have been recent examples of very fast evolution as well. So for most evolutionary processes, we are not around long enough to see them happening or to see the outcome. It is therefore also hard to say that no evolution is happening - see your cockroach example. How do you know that these animals are the same as 10 million years ago? And even if they are, it can also mean that these animals fit their niche so well that there is not much pressure for further adaptation. This can change pretty fast, as an example from mites shows (here is a report in the [BBC](http://www.bbc.co.uk/news/science-environment-22039872), and this is the [original publication](http://onlinelibrary.wiley.com/doi/10.1111/ele.12107/abstract)). Another example of fast evolution (of bigger animals) is the cichlids in Lake Victoria, which developed anew after the last time the lake dried up completely, something like 12,000 years ago. After that, an estimated 300 endemic species developed (see [here](http://www.sciencemag.org/content/273/5278/1091.short)), a number later reduced by pollution and other environmental problems. The remaining species are evolving again to occupy the free niches (see [here](http://www.nature.com/news/2010/100707/full/466174a.html)). In the case of humans, we are pretty lucky that no other intelligent animal has come up so far. It would have fought for the same biological niche and living space, with one species eventually dying out. This has, for example, happened to all the other Homo species (habilis, erectus, neanderthalensis). As a species, we are quite young (around 200,000 years), so there is something going on. And there is genetic divergence between humans, but still not so much that we cannot interbreed anymore. And with 7 billion of us now present, it's not that easy for mutations to come through at our reproduction rate.
In response to this part: > > If modern humans are the result of mutations in genes, how come no one species over the course of hundreds of millions of years has been fit enough, or advanced mentally like we have, or even in any slightest bit? > > > All animals *are* the result of evolution, which includes mutations. Now, what you should understand is that evolutionary changes have to be selected for, and they must also be immediately useful to the organism if they cost more. There is a long-term tendency in our lineage toward increased brain size: animals -> mammals -> primates -> humans. This long-term development need not have happened in the first place. In the Jurassic Period, the most successful group of animals was the dinosaurs, who in general had small brains. In addition, our brains require far more calories than the brain of, say, a chimpanzee. Even if you have a lineage that, over the generations, has a tendency toward larger brain size, it would also need to be able to hunt or forage more in compensation for the increased caloric needs. If it were not able to do so, a larger brain would actually be a profoundly negative characteristic, a useless drain of energy. In addition, the benefits of increased intelligence are highly circumstantial. Consider giving a cheetah all the brain power of a human. It might then understand that if it dug a hole and placed fake grass over it, it could catch an antelope for less effort than stalking and chasing one. Less effort means fewer calories expended, and ultimately most of an organism's fitness has to do with how much energy it expends in trying to procure energy (calories). But lacking opposable thumbs and hands with digits, it would be unlikely to accomplish such a thing. Also, such tasks are more efficient when done by a group, but many animals do not coordinate group-wise as extensively as humans do.
So what we have is at least three, but probably many more, things which all have to come together in the same species for it to rise to the top of the food chain like we did: * Dexterous limbs (i.e. opposable thumbs and separated digits) * Brains tripled in size (relative to other species of ape) * A behavior of extensive group coordination (tribes) As you can imagine, a species evolving our intelligence and using it to dominate the local ecosystem as extensively as we do is therefore a rarity.
13,976
People often say, including those with extensive knowledge in biology, that a certain species of animal will evolve in one way or another: 1. From changing environments. 2. From mutations. 3. Possibly even from genetic engineering by humans. My question lies in the fact that, aside from the latter option, why haven't any differences in animals' (except humans') makeup, morphology, intelligence, DNA, behavior, or habits appeared over thousands or possibly millions of years? A cockroach has the same behavior it had [more than 10 million years ago](http://en.wikipedia.org/wiki/Cretaceous), and there have been no advancements in the species in the slightest. It makes you question evolution: why don't other animals (like cockroaches) show any changes over 10+ million years, yet humans like you and me have, in a relatively similar period of time to the linked geological period above, evolved from spear-tossing hominids into someone brilliant enough to even ponder this question? If modern humans are the result of mutations in genes, why has no other species, over the course of hundreds of millions of years, been fit enough or advanced mentally as we have, even in the slightest?
2013/12/14
[ "https://biology.stackexchange.com/questions/13976", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/-1/" ]
This is a tricky question. First, evolution tends to be slow, although there have been recent examples of very fast evolution as well. So for most evolutionary processes, we are not around long enough to see them happening or to see the outcome. It is therefore also hard to say that no evolution is happening - see your cockroach example. How do you know that these animals are the same as 10 million years ago? And even if they are, it can also mean that these animals fit their niche so well that there is not much pressure for further adaptation. This can change pretty fast, as an example from mites shows (here is a report in the [BBC](http://www.bbc.co.uk/news/science-environment-22039872), and this is the [original publication](http://onlinelibrary.wiley.com/doi/10.1111/ele.12107/abstract)). Another example of fast evolution (of bigger animals) is the cichlids in Lake Victoria, which developed anew after the last time the lake dried up completely, something like 12,000 years ago. After that, an estimated 300 endemic species developed (see [here](http://www.sciencemag.org/content/273/5278/1091.short)), a number later reduced by pollution and other environmental problems. The remaining species are evolving again to occupy the free niches (see [here](http://www.nature.com/news/2010/100707/full/466174a.html)). In the case of humans, we are pretty lucky that no other intelligent animal has come up so far. It would have fought for the same biological niche and living space, with one species eventually dying out. This has, for example, happened to all the other Homo species (habilis, erectus, neanderthalensis). As a species, we are quite young (around 200,000 years), so there is something going on. And there is genetic divergence between humans, but still not so much that we cannot interbreed anymore. And with 7 billion of us now present, it's not that easy for mutations to come through at our reproduction rate.
Evolution is an ongoing process; it has no predetermined goal or direction; it never stops. Nothing ever stops, because everything is ever-moving, ever-changing. Is man more intelligent now than a few thousand years ago? Does mankind have a better understanding of the phenomenal and noumenal realms now than the people who composed the Upanishads, the Brahmanas, and the Vedas ~2,500 years ago, compositions preceded by hundreds if not thousands of years of oral transmission (myths) from generation to generation?
13,976
People often say, including those with extensive knowledge in biology, that a certain species of animal will evolve in one way or another: 1. From changing environments. 2. Mutations. 3. Possibly even genetic engineering from human animals. My question lies in the fact that, aside from the latter option, why haven't any differences in animals'(except humans) markup, morphology, intelligence, DNA, behavior, or any habits changed over thousands or (possibly millions) of years? A cockroach has had the same behavior it has today [more than 10 million years ago](http://en.wikipedia.org/wiki/Cretaceous), and there have been no advancements in the species in the slightest bit. It makes you question evolution, because why don't other animals (like cockroaches) have any changes over 10+ million years, yet humans, like me and you somewhat, have, in a relative period of time similar to the linked geological period above, evolved from spear tossing hominids into someone brilliant enough to even ponder this question. If modern humans are the result of mutations in genes, why has no one species over the course of hundreds of millions of years been fit enough, or advanced mentally as we have, or even in any slightest bit?
2013/12/14
[ "https://biology.stackexchange.com/questions/13976", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/-1/" ]
> How come most animals never seem to evolve over millennia?

The word "seem" in your question should not be disregarded. You seem to assume that cockroaches (or most animals, as you say) have not changed much over the last tens or hundreds of thousands of years. But what do you actually know about that (no offence meant)? Have you reviewed the research estimating the rates of evolution of randomly chosen lineages over the past 500,000 years? I suspect you assume that other species evolved more slowly than humans rather than know it.

You will also certainly put much more weight on the evolution of the gene FoxP2 (involved in language) than on a gene that gives cockroaches a better sense of smell. That is a biased view of what a rate of evolution is. It would be much wiser to think of a rate of evolution as something like the number of newly arising mutations that succeed in getting fixed in the population; see Haldane's rate of evolution and the darwin unit. Please don't make the mistake of thinking that being smart (or complex) is some kind of goal of evolution, or that lineages that are not smart (or complex) are "less evolved" or evolved more slowly.

You also seem to conflate the evolution of DNA with the evolution of habits. I suspect you are really thinking of the evolution of human knowledge and culture, but that has little to do with genetic evolution; it is a matter of cognitive capacity. You cannot compare the change of culture and traditions in insects and humans, as insects mostly have no traditions.

Now, it is obviously true that different lineages evolve at different rates. Many things influence these rates, such as population size, mutation rate, generation time, and selection pressure (which itself may depend on social structure or on the rate of environmental change, for example).

On those terms, I would actually expect Homo sapiens to have a rather fast evolutionary rate. Homo sapiens is quite a recent species, and speciation is often linked with phenotypic divergence, with niche competition and niche complementarity, and therefore with a high rate of evolution. On those terms, humans look like a lineage with a high evolutionary rate.
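The darwin unit mentioned above has a simple definition: proportional change in a trait per million years. A rough sketch, assuming made-up trait values and timespan purely for illustration:

```python
import math

def darwins(x_start, x_end, dt_myr):
    """Rate of morphological evolution in darwins:
    change in the natural log of a trait per million years."""
    return (math.log(x_end) - math.log(x_start)) / dt_myr

# Hypothetical example: a tooth dimension growing from 10 mm to 12 mm
# over half a million years.
rate = darwins(10.0, 12.0, 0.5)
print(f"{rate:.4f} darwins")  # ln(12/10) / 0.5 ≈ 0.3646
```

Measured this way, a "fast" lineage is simply one whose traits change a lot per unit time, whether the trait is brain volume or antenna length.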
> aside from the latter option, why haven't any differences in animals' (except humans) makeup, morphology, intelligence, DNA, behavior, or any habits changed over thousands or (possibly millions) of years?

What evidence is leading you to that conclusion? Take horses, for example (from the talkorigins article):

> The first equid was Hyracotherium, a small forest animal of the early Eocene. This little animal (10-20" at the shoulder) looked nothing at all like a horse. It had a "doggish" look with an arched back, short neck, short snout, short legs, and long tail. It browsed on fruit and fairly soft foliage, and probably scampered from thicket to thicket like a modern muntjac deer, only stupider, slower, and not as agile. This famous little equid was once known by the lovely name "Eohippus", meaning "dawn horse". Some Hyracotherium traits to notice:
>
> - Legs were flexible and rotatable, with all major bones present and unfused.
> - 4 toes on each front foot, 3 on hind feet. Vestiges of the 1st (and, behind, 2nd) toes still present.
> - Hyracotherium walked on pads; its feet were like a dog's padded feet, except with small "hoofies" on each toe instead of claws.
> - Small brain with especially small frontal lobes.
> - Low-crowned teeth with 3 incisors, 1 canine, 4 distinct premolars and 3 "grinding" molars in each side of each jaw (the "primitive mammalian formula" of teeth). The cusps of the molars were slightly connected in low crests. Typical teeth of an omnivorous browser.

So from that, you conclude that the DNA, morphology, and intelligence of horses haven't changed at all in 50 million years?
13,976
People often say, including those with extensive knowledge of biology, that a certain species of animal will evolve in one way or another:

1. From changing environments.
2. From mutations.
3. Possibly even from genetic engineering by humans.

My question is: aside from the last option, why haven't animals' (except humans') makeup, morphology, intelligence, DNA, behavior, or habits changed over thousands or possibly millions of years? A cockroach behaves the same today as it did [more than 10 million years ago](http://en.wikipedia.org/wiki/Cretaceous), and there have been no advancements in the species in the slightest. It makes you question evolution: why don't other animals (like cockroaches) show any changes over 10+ million years, yet humans, in a comparable span of time, evolved from spear-tossing hominids into someone brilliant enough to ponder this question? If modern humans are the result of mutations in genes, why has no other species over the course of hundreds of millions of years become as fit, or as mentally advanced, as we have, even in the slightest?
2013/12/14
[ "https://biology.stackexchange.com/questions/13976", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/-1/" ]
> How come most animals never seem to evolve over millennia?

The word "seem" in your question should not be disregarded. You seem to assume that cockroaches (or most animals, as you say) have not changed much over the last tens or hundreds of thousands of years. But what do you actually know about that (no offence meant)? Have you reviewed the research estimating the rates of evolution of randomly chosen lineages over the past 500,000 years? I suspect you assume that other species evolved more slowly than humans rather than know it.

You will also certainly put much more weight on the evolution of the gene FoxP2 (involved in language) than on a gene that gives cockroaches a better sense of smell. That is a biased view of what a rate of evolution is. It would be much wiser to think of a rate of evolution as something like the number of newly arising mutations that succeed in getting fixed in the population; see Haldane's rate of evolution and the darwin unit. Please don't make the mistake of thinking that being smart (or complex) is some kind of goal of evolution, or that lineages that are not smart (or complex) are "less evolved" or evolved more slowly.

You also seem to conflate the evolution of DNA with the evolution of habits. I suspect you are really thinking of the evolution of human knowledge and culture, but that has little to do with genetic evolution; it is a matter of cognitive capacity. You cannot compare the change of culture and traditions in insects and humans, as insects mostly have no traditions.

Now, it is obviously true that different lineages evolve at different rates. Many things influence these rates, such as population size, mutation rate, generation time, and selection pressure (which itself may depend on social structure or on the rate of environmental change, for example).

On those terms, I would actually expect Homo sapiens to have a rather fast evolutionary rate. Homo sapiens is quite a recent species, and speciation is often linked with phenotypic divergence, with niche competition and niche complementarity, and therefore with a high rate of evolution. On those terms, humans look like a lineage with a high evolutionary rate.
In response to this part:

> If modern humans are the result of mutations in genes, how come no one species over the course of hundreds of millions of years has been fit enough, or advanced mentally like we have, or even in any slightest bit?

All animals *are* the result of evolution, which includes mutations. What you should understand is that evolutionary changes have to be selected for, and must also be immediately useful to the organism if they cost more.

There is a long-term tendency in our lineage towards increased brain size: animals -> mammals -> primates -> humans. This long-term development need not have happened in the first place. In the Jurassic Period, the most successful group of animals was the dinosaurs, which in general had small brains.

In addition, our brains require far more calories than the brain of, say, a chimpanzee. Even if you have a lineage that, over the generations, has a tendency towards larger brain sizes, it would also need to be able to hunt or forage more to compensate for the increased caloric needs. If it were not able to do so, a larger brain would actually be a profoundly negative characteristic, a useless drain of energy.

The benefits of increased intelligence are also highly circumstantial. Suppose you gave a cheetah all the brain power of a human. It might then understand that if it dug a hole and placed fake grass over it, it could catch an antelope with less effort than stalking and chasing it. Less effort means fewer calories expended, and ultimately much of an organism's fitness comes down to how much energy it expends in trying to procure energy (calories). But lacking opposable thumbs and hands with digits, it would be unlikely to accomplish such a thing. Also, such tasks are more efficient when done by a group, but many animals do not coordinate in groups as extensively as humans do.

So we have at least three things, and probably many more, that all had to come together in the same species for it to rise to the top of the food chain like we did:

* Dexterous limbs (i.e. opposable thumbs and separated digits)
* Brains tripled in size (relative to other species of ape)
* Extensive group coordination (tribes)

As you can imagine, a species evolving our intelligence and using it to dominate the local ecosystem as extensively as we do is therefore a rarity.
13,976
People often say, including those with extensive knowledge of biology, that a certain species of animal will evolve in one way or another:

1. From changing environments.
2. From mutations.
3. Possibly even from genetic engineering by humans.

My question is: aside from the last option, why haven't animals' (except humans') makeup, morphology, intelligence, DNA, behavior, or habits changed over thousands or possibly millions of years? A cockroach behaves the same today as it did [more than 10 million years ago](http://en.wikipedia.org/wiki/Cretaceous), and there have been no advancements in the species in the slightest. It makes you question evolution: why don't other animals (like cockroaches) show any changes over 10+ million years, yet humans, in a comparable span of time, evolved from spear-tossing hominids into someone brilliant enough to ponder this question? If modern humans are the result of mutations in genes, why has no other species over the course of hundreds of millions of years become as fit, or as mentally advanced, as we have, even in the slightest?
2013/12/14
[ "https://biology.stackexchange.com/questions/13976", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/-1/" ]
> How come most animals never seem to evolve over millennia?

The word "seem" in your question should not be disregarded. You seem to assume that cockroaches (or most animals, as you say) have not changed much over the last tens or hundreds of thousands of years. But what do you actually know about that (no offence meant)? Have you reviewed the research estimating the rates of evolution of randomly chosen lineages over the past 500,000 years? I suspect you assume that other species evolved more slowly than humans rather than know it.

You will also certainly put much more weight on the evolution of the gene FoxP2 (involved in language) than on a gene that gives cockroaches a better sense of smell. That is a biased view of what a rate of evolution is. It would be much wiser to think of a rate of evolution as something like the number of newly arising mutations that succeed in getting fixed in the population; see Haldane's rate of evolution and the darwin unit. Please don't make the mistake of thinking that being smart (or complex) is some kind of goal of evolution, or that lineages that are not smart (or complex) are "less evolved" or evolved more slowly.

You also seem to conflate the evolution of DNA with the evolution of habits. I suspect you are really thinking of the evolution of human knowledge and culture, but that has little to do with genetic evolution; it is a matter of cognitive capacity. You cannot compare the change of culture and traditions in insects and humans, as insects mostly have no traditions.

Now, it is obviously true that different lineages evolve at different rates. Many things influence these rates, such as population size, mutation rate, generation time, and selection pressure (which itself may depend on social structure or on the rate of environmental change, for example).

On those terms, I would actually expect Homo sapiens to have a rather fast evolutionary rate. Homo sapiens is quite a recent species, and speciation is often linked with phenotypic divergence, with niche competition and niche complementarity, and therefore with a high rate of evolution. On those terms, humans look like a lineage with a high evolutionary rate.
Evolution is an ongoing process; it has no predetermined goal or direction, and it never stops. Nothing ever stops, because everything is ever-moving, ever-changing. Is man more intelligent now than a few thousand years ago? Does mankind have a better understanding of the phenomenal and noumenal realms now than the people who composed the Upanishads, the Brahmanas, and the Vedas ~2,500 years ago, compositions that were themselves preceded by hundreds if not thousands of years of oral transmission (myths) from generation to generation?
13,976
People often say, including those with extensive knowledge of biology, that a certain species of animal will evolve in one way or another:

1. From changing environments.
2. From mutations.
3. Possibly even from genetic engineering by humans.

My question is: aside from the last option, why haven't animals' (except humans') makeup, morphology, intelligence, DNA, behavior, or habits changed over thousands or possibly millions of years? A cockroach behaves the same today as it did [more than 10 million years ago](http://en.wikipedia.org/wiki/Cretaceous), and there have been no advancements in the species in the slightest. It makes you question evolution: why don't other animals (like cockroaches) show any changes over 10+ million years, yet humans, in a comparable span of time, evolved from spear-tossing hominids into someone brilliant enough to ponder this question? If modern humans are the result of mutations in genes, why has no other species over the course of hundreds of millions of years become as fit, or as mentally advanced, as we have, even in the slightest?
2013/12/14
[ "https://biology.stackexchange.com/questions/13976", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/-1/" ]
> aside from the latter option, why haven't any differences in animals' (except humans) makeup, morphology, intelligence, DNA, behavior, or any habits changed over thousands or (possibly millions) of years?

What evidence is leading you to that conclusion? Take horses, for example (from the talkorigins article):

> The first equid was Hyracotherium, a small forest animal of the early Eocene. This little animal (10-20" at the shoulder) looked nothing at all like a horse. It had a "doggish" look with an arched back, short neck, short snout, short legs, and long tail. It browsed on fruit and fairly soft foliage, and probably scampered from thicket to thicket like a modern muntjac deer, only stupider, slower, and not as agile. This famous little equid was once known by the lovely name "Eohippus", meaning "dawn horse". Some Hyracotherium traits to notice:
>
> - Legs were flexible and rotatable, with all major bones present and unfused.
> - 4 toes on each front foot, 3 on hind feet. Vestiges of the 1st (and, behind, 2nd) toes still present.
> - Hyracotherium walked on pads; its feet were like a dog's padded feet, except with small "hoofies" on each toe instead of claws.
> - Small brain with especially small frontal lobes.
> - Low-crowned teeth with 3 incisors, 1 canine, 4 distinct premolars and 3 "grinding" molars in each side of each jaw (the "primitive mammalian formula" of teeth). The cusps of the molars were slightly connected in low crests. Typical teeth of an omnivorous browser.

So from that, you conclude that the DNA, morphology, and intelligence of horses haven't changed at all in 50 million years?
In response to this part:

> If modern humans are the result of mutations in genes, how come no one species over the course of hundreds of millions of years has been fit enough, or advanced mentally like we have, or even in any slightest bit?

All animals *are* the result of evolution, which includes mutations. What you should understand is that evolutionary changes have to be selected for, and must also be immediately useful to the organism if they cost more.

There is a long-term tendency in our lineage towards increased brain size: animals -> mammals -> primates -> humans. This long-term development need not have happened in the first place. In the Jurassic Period, the most successful group of animals was the dinosaurs, which in general had small brains.

In addition, our brains require far more calories than the brain of, say, a chimpanzee. Even if you have a lineage that, over the generations, has a tendency towards larger brain sizes, it would also need to be able to hunt or forage more to compensate for the increased caloric needs. If it were not able to do so, a larger brain would actually be a profoundly negative characteristic, a useless drain of energy.

The benefits of increased intelligence are also highly circumstantial. Suppose you gave a cheetah all the brain power of a human. It might then understand that if it dug a hole and placed fake grass over it, it could catch an antelope with less effort than stalking and chasing it. Less effort means fewer calories expended, and ultimately much of an organism's fitness comes down to how much energy it expends in trying to procure energy (calories). But lacking opposable thumbs and hands with digits, it would be unlikely to accomplish such a thing. Also, such tasks are more efficient when done by a group, but many animals do not coordinate in groups as extensively as humans do.

So we have at least three things, and probably many more, that all had to come together in the same species for it to rise to the top of the food chain like we did:

* Dexterous limbs (i.e. opposable thumbs and separated digits)
* Brains tripled in size (relative to other species of ape)
* Extensive group coordination (tribes)

As you can imagine, a species evolving our intelligence and using it to dominate the local ecosystem as extensively as we do is therefore a rarity.
13,976
People often say, including those with extensive knowledge of biology, that a certain species of animal will evolve in one way or another:

1. From changing environments.
2. From mutations.
3. Possibly even from genetic engineering by humans.

My question is: aside from the last option, why haven't animals' (except humans') makeup, morphology, intelligence, DNA, behavior, or habits changed over thousands or possibly millions of years? A cockroach behaves the same today as it did [more than 10 million years ago](http://en.wikipedia.org/wiki/Cretaceous), and there have been no advancements in the species in the slightest. It makes you question evolution: why don't other animals (like cockroaches) show any changes over 10+ million years, yet humans, in a comparable span of time, evolved from spear-tossing hominids into someone brilliant enough to ponder this question? If modern humans are the result of mutations in genes, why has no other species over the course of hundreds of millions of years become as fit, or as mentally advanced, as we have, even in the slightest?
2013/12/14
[ "https://biology.stackexchange.com/questions/13976", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/-1/" ]
> aside from the latter option, why haven't any differences in animals' (except humans) makeup, morphology, intelligence, DNA, behavior, or any habits changed over thousands or (possibly millions) of years?

What evidence is leading you to that conclusion? Take horses, for example (from the talkorigins article):

> The first equid was Hyracotherium, a small forest animal of the early Eocene. This little animal (10-20" at the shoulder) looked nothing at all like a horse. It had a "doggish" look with an arched back, short neck, short snout, short legs, and long tail. It browsed on fruit and fairly soft foliage, and probably scampered from thicket to thicket like a modern muntjac deer, only stupider, slower, and not as agile. This famous little equid was once known by the lovely name "Eohippus", meaning "dawn horse". Some Hyracotherium traits to notice:
>
> - Legs were flexible and rotatable, with all major bones present and unfused.
> - 4 toes on each front foot, 3 on hind feet. Vestiges of the 1st (and, behind, 2nd) toes still present.
> - Hyracotherium walked on pads; its feet were like a dog's padded feet, except with small "hoofies" on each toe instead of claws.
> - Small brain with especially small frontal lobes.
> - Low-crowned teeth with 3 incisors, 1 canine, 4 distinct premolars and 3 "grinding" molars in each side of each jaw (the "primitive mammalian formula" of teeth). The cusps of the molars were slightly connected in low crests. Typical teeth of an omnivorous browser.

So from that, you conclude that the DNA, morphology, and intelligence of horses haven't changed at all in 50 million years?
Evolution is an ongoing process; it has no predetermined goal or direction, and it never stops. Nothing ever stops, because everything is ever-moving, ever-changing. Is man more intelligent now than a few thousand years ago? Does mankind have a better understanding of the phenomenal and noumenal realms now than the people who composed the Upanishads, the Brahmanas, and the Vedas ~2,500 years ago, compositions that were themselves preceded by hundreds if not thousands of years of oral transmission (myths) from generation to generation?
71,197
I am an ASP.NET web developer and small business owner. I maintain a network as a hobby and to keep myself sharp on server technology, as I interact with servers all the time as part of my job. I also like the flexibility of running my own Exchange server. I have an MSDN subscription, so that is the source of my licenses.

Currently I have 3 physical server computers. One is a Windows Server 2k3 ISA Server 2006 box; it does all my firewalling / routing. The other two physical servers are virtual machine hosts. I am in the process of switching over to Hyper-V from VMware Server 2.0. I am rebuilding all my servers from scratch in the Hyper-V environment, and I would like to take this opportunity to fix some of the flaws in my infrastructure. The rest of the servers I will be describing are virtual machines. This is my current setup:

1. Windows 2k3 Standard Exchange 2003 Server and Domain Controller (same VM). Does my DHCP and DNS. Also runs a RADIUS server, which I use to authenticate VPN users. Takes up to an hour to reboot!
2. Windows 2k3 Standard alternate Domain Controller that is also my Certificate Authority. Does DNS.
3. Windows 2k3 Standard Web / SQL Server 2005.
4. Windows 2k8 Standard Web / SQL Server 2008. I prefer to use this one to host the majority of my apps because I like IIS 7, but I keep server #3 as a staging environment for clients who have only IIS 6 and/or SQL Server 2005.

I have learned that it is generally not recommended to run Exchange on a Domain Controller. There are weird dependency issues, and I have to run a batch file to shut down Exchange services before rebooting, or the reboot will take up to an hour!

My virtual machine hosts are each dual-core 3 GHz, with 16 GB of RAM and 300 GB of RAID 1 storage. I'm very happy with the hardware. This network is mostly a hobby, but it supports a small number of clients who occasionally come in to check on the web apps I am building for them.

Where I am looking for help is in the number of VMs I should have and the roles each one should play. I am thinking this (all VMs are 64-bit unless otherwise specified):

1. Windows 2k8R2 Enterprise primary Active Directory server. Does DHCP, DNS, and RADIUS. Is 768 MB of RAM enough?
2. Windows 2k8R2 Enterprise backup Active Directory server. Also does DHCP, DNS, and RADIUS. I would like this to always be running and to handle DHCP, DNS, etc., if server #1 is down.
3. Windows 2k8R2 Enterprise Exchange 2k7 server. I figure it should be all by itself and not play any other roles.
4. Windows 2k8R2 Enterprise Web / SQL Server 2008 (just like before, but with an updated OS).
5. Windows 2k3 Standard Web / SQL Server 2005 / 32-bit (just like before).
6. Certificate Authority. Which server should do this? Should it be its own server?
7. (and beyond) Are there any popular services that I have not mentioned?

Each host would run 3-5 VMs, and I would make sure that #1 and #2 were not on the same host, in case a whole host goes down due to hardware failure.

Does this configuration cover all the bases? Have I distributed the roles appropriately? For example, is it a bad idea to have DHCP and DNS on the same box? I realize a lot of this hinges on what I plan to use my servers for. I basically want to run a simple Windows environment with all the most popular services, so I can get practice using them all. I think lots of people will consider my setup overkill, but I enjoy maintaining the environment as a hobby, and it gives me a lot of exposure to Windows infrastructure that I would not otherwise have. This knowledge comes in handy when I deploy my apps at client sites. I am happy to supply additional information. Thanks!
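The rule "make sure the two Active Directory VMs are never on the same host" is an anti-affinity constraint, and it generalizes if you add more hosts or more separated pairs. As a hypothetical sketch (the VM and host names are illustrative, not tied to any real inventory), a greedy placement in Python:

```python
def place_vms(vms, hosts, separate):
    """Greedy placement that keeps each anti-affinity pair
    (e.g. the two domain controllers) on different hosts."""
    pairs = [set(p) for p in separate]
    placement = {h: [] for h in hosts}
    for vm in vms:
        # Hosts already holding a VM this one must be separated from.
        banned = {h for h, placed in placement.items()
                  if any({vm, other} in pairs for other in placed)}
        candidates = [h for h in hosts if h not in banned]
        if not candidates:
            raise ValueError(f"no valid host left for {vm}")
        # Pick the least-loaded host that satisfies the constraint.
        target = min(candidates, key=lambda h: len(placement[h]))
        placement[target].append(vm)
    return placement

vms = ["DC1", "DC2", "Exchange", "Web2008", "Web2003", "CA"]
hosts = ["HostA", "HostB"]
layout = place_vms(vms, hosts, separate=[("DC1", "DC2")])
print(layout)  # DC1 and DC2 always land on different hosts
```

Balancing by load first and checking the constraint per VM keeps the two DCs apart while still spreading the remaining VMs evenly across the hosts.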
2009/10/04
[ "https://serverfault.com/questions/71197", "https://serverfault.com", "https://serverfault.com/users/8806/" ]
You're correct in wanting to separate Domain Controllers from everything else; this is a best practice for security, and, as you already discovered, Exchange doesn't play *so* well when running on a DC. A DC can safely run DNS (this is indeed recommended) and DHCP; I'd of course suggest splitting the two DC VMs onto different physical hosts, in order to achieve as much availability as possible.

You shouldn't need to run the Enterprise edition of Windows Server 2008 R2, as you're not doing any failover clustering and x64 editions aren't limited in usable RAM the way x86 editions used to be. I know you have "free" licenses, but it's better to use the Windows edition most suited to your environment, not the most powerful one available.

About virtualization: if your hosts are only going to run VMs (and they really should), I'd suggest running a "pure" hypervisor product instead of a full-blown OS; your best choice would be Hyper-V Server if you want to go with Microsoft, or VMware ESXi (the free edition) if you want to stay with VMware. I don't know if you have any experience with ESX/ESXi, but I can assure you it's a completely different thing from Workstation/Server; I personally think it's a lot better than Hyper-V, but that's mostly a matter of personal preference. In any case, I'd avoid running a full Windows Server 2008 R2 with the Hyper-V role enabled.

About the Certification Authority: this really should run on its own (virtual) server, which shouldn't do anything else. Since you're going virtual, this should be no particular problem, and it will save you some headaches.
That setup sounds about right. The CA can just go on a domain controller. You are correct in saying that the Exchange server should go by itself, for ease of administration. The only products you're missing are CRM, SharePoint, and OCS.
10,473
As was seen recently with [Hurricane Sandy](http://en.wikipedia.org/wiki/Hurricane_Sandy) (and the residual effects are still there), large portions of the East Coast of the US had major disruption to their air travel. There were also [similar effects in Europe in 2011](http://en.wikipedia.org/wiki/2011_eruption_of_Gr%C3%ADmsv%C3%B6tn#Effect_on_flights) due to the ash cloud from the volcano in Iceland.

For example, I had a flight booked this week from Pittsburgh to London, via Chicago. For several days Pittsburgh airport was closed and/or flights were (at least from my perspective) randomly cancelled due to aircraft being out of position, etc. Fortunately, in the end I was lucky and my flight took off on time. Not everyone was so fortunate.

I know from past experience that when a flight is cancelled in these kinds of circumstances (which often doesn't happen until less than 24 hours beforehand, as the airline can't predict exactly what will happen), people can get bumped onto flights several days later, as those are the first with spare seats.

What techniques exist, from the perspective of the traveller, for increasing the chances of getting to their destination promptly (either avoiding cancelled flights or getting rescheduled onto a flight shortly after)? Let's assume the traveller might be prepared to spend some additional money, but doesn't have the budget for a private jet or a cross-country taxi.
2012/11/02
[ "https://travel.stackexchange.com/questions/10473", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/2555/" ]
The number one strategy is to get out ahead of it. For example, a number of people I know were going to fly out of the East Coast on the Monday of this week. Some changed their flights (for free in some cases, at significant expense in others) to get out on Sunday, thus avoiding all the hassle. Earthquakes, volcanoes, and the like don't come with prior warning, but storms and strikes do.

Once you're in it, knowledge is key. If you can take a train or a bus to somewhere less affected, great. But when a freak blizzard hit Baton Rouge (it snows there roughly twice a century), the people sitting near me who planned to rent a car and drive somewhere they could fly from discovered they couldn't get to their destinations any faster that way. Take some time to find out about schedules and about conditions in the place you're planning to use as an alternate departure point. Use the internet. Phone someone outside the airport and get them to help with research.

If you're borderline on having status (with an airline, car rental company, hotel, or whatever), the way you get treated during "irregular operations" may make the difference for you. Benefits like "you can always buy a seat even if the flight is sold out" get less attention than upgrades, but every once in a while they're priceless.

Be in the habit of being prepared. Have enough with you to cope with a 12-hour delay, or even a 36-hour one! Know where to get information. Practice breathing for stress reduction. Be nice to the staff; it's as horrible for them as it is for you. Count your blessings. The more positively you act and feel, the more likely you are to get lucky, whether you believe it or not.
Get on any form of transportation out of the affected area. If you stay, you need to share the limited options with a lot of people; the more peripheral you can get, the better, and it doesn't matter in what direction.

I was once in a situation where the trapped travellers flocked together. I chose to jump on the first intercity bus out of the region; I didn't even check its destination. After jumping onto two more buses this way, I got to a city where it was much easier to get a flight home. So the main technique is to disperse instead of flocking together.
10,473
As was seen recently with [Hurricane Sandy](http://en.wikipedia.org/wiki/Hurricane_Sandy) (and the residual effects are still there), large portions of the East Coast of the US had major disruption to their air travel. There were also [similar effects in Europe in 2011](http://en.wikipedia.org/wiki/2011_eruption_of_Gr%C3%ADmsv%C3%B6tn#Effect_on_flights) due to the ash cloud from the volcano in Iceland.

For example, I had a flight booked this week from Pittsburgh to London, via Chicago. For several days Pittsburgh airport was closed and/or flights were (at least from my perspective) randomly cancelled due to aircraft being out of position, etc. Fortunately, in the end I was lucky and my flight took off on time. Not everyone was so fortunate.

I know from past experience that when a flight is cancelled in these kinds of circumstances (which often doesn't happen until less than 24 hours beforehand, as the airline can't predict exactly what will happen), people can get bumped onto flights several days later, as those are the first with spare seats.

What techniques exist, from the perspective of the traveller, for increasing the chances of getting to their destination promptly (either avoiding cancelled flights or getting rescheduled onto a flight shortly after)? Let's assume the traveller might be prepared to spend some additional money, but doesn't have the budget for a private jet or a cross-country taxi.
2012/11/02
[ "https://travel.stackexchange.com/questions/10473", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/2555/" ]
The number one strategy is to get out ahead of it. For example, a number of people I know were going to fly from the East Coast on the Monday of this week. Some changed their flights (for free in some cases, at significant expense in others) to get out on the Sunday, thus avoiding all the hassles. Earthquakes, volcanoes, and the like don't come with prior warning, but storms and strikes do.
Once you're in it, knowledge is key. If you can take a train or a bus to somewhere less affected, great. But when a freak blizzard hit Baton Rouge (it snows there roughly twice a century), the people sitting near me who planned to rent a car and drive somewhere they could fly from discovered they couldn't get to their destinations any faster that way. Take some time to find out about schedules and about conditions in the place you're planning to use as an alternate departure point. Use the internet, or phone someone outside the airport and get them to help with the research.
If you're borderline on having status (with an airline, car rental company, hotel or whatever), the way you get treated during "irregular operations" may make the difference for you. Benefits like "you can always buy a seat even if the flight is sold out" get less attention than upgrades, but every once in a while they're priceless.
Be in the habit of being prepared. Have stuff with you to cope with a 12-hour delay, or even a 36-hour one! Know where to get information. Practice breathing for stress reduction. Be nice to the staff - it's as horrible for them as it is for you.
Count your blessings. The more positively you act and feel, the more likely you are to get lucky, whether you believe it or not.
The first thing to decide is whether you actually need to travel at all. With a lot of disruption and cancellations, there's high demand for the limited transport available. You might be better off checking with your airline / travel insurance about covering hotels, finding a hotel before they all fill up, and settling down to wait it out. It won't work in all cases, but for some incidents it's much better to be sitting in a hotel working remotely than spending days queueing and waiting in airports and railway stations!
If you do need to travel, the thing to remember is that frequent traveller status matters. If there's only one seat left on a plane, the person with status is much more likely to get it than the person without. Secondly, if you have status you'll normally have access to priority phone lines, and you'll often be able to use priority check-in / priority assistance desks at the airport or station. These normally have shorter queues, and sometimes agents with more discretion.
Getting status will often mean a bit of planning in advance. You might find yourself needing to spend a little more on some trips during the year, or taking a slightly less convenient timing / routing to concentrate your travel with one alliance to hit the status threshold. That said, status has benefits in normal times too, not just during disruptions - you'll usually get at least a few out of priority check-in / lounge access / extra luggage / upgrades. So give some thought to concentrating your travel to earn status before you need it, so you've got it when disruption hits!
When you're trying to get a seat, it's worth being flexible. If your flight or train has been cancelled, see if they can put you on one to somewhere vaguely near your destination, or perhaps just to a big hub of theirs. Aim to get out of the disruption first, then worry about the last leg later!
Alternatively, see if you can get a seat to an airport/station part of the way, then look at a bus / taxi / hire car / train to get you the rest of the way. If you can get just clear of the disruption, you should have more luck getting a seat for the remainder of the journey. Make sure you have route maps available when asking for rerouting, so you can sanity-check options and suggest less obvious possibilities based on the routes available. Don't forget you may sometimes have to go backwards / the wrong way in order to go forwards!
26,961
In SharePoint 2007, when you were looking at a list there was a default drop-down box for changing the list view.
![SharePoint 2007 View Selection](https://i.stack.imgur.com/h4Osm.png)
Now that we have upgraded to SharePoint 2010, we have noticed that this drop-down box is hidden in the ribbon: you first have to click the "List" tab, and only then can you select a value from the drop-down box.
![SharePoint 2010 First Click to View Selection](https://i.stack.imgur.com/mjlIB.png)
![SharePoint 2010 Second Click to View Selection](https://i.stack.imgur.com/BLMnn.png)
Is there a way to add the drop-down box to the default view in SharePoint 2010?
2012/01/16
[ "https://sharepoint.stackexchange.com/questions/26961", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/6415/" ]
It is important that you get an understanding of how the SharePoint platform fits together. Have a look at the architecture and understand how the different components relate to each other. This is a link to an architectural overview: <http://msdn.microsoft.com/en-us/library/gg552610.aspx>
Set up a sandpit and play around with the environment. Gain an understanding of the different elements (lists, libraries, security, sites, central administration, services). This link has some ideas for courses/exams: <http://blogs.msdn.com/b/pandrew/archive/2008/05/01/getting-started-with-sharepoint-development.aspx>
These links have some more detailed learning material, such as videos and labs:
<http://msdn.microsoft.com/en-us/sharepoint/aa905692>
<http://www.learningsharepoint.com/2010/10/30/sharepoint-2010-development-tutorial-videos-2/>
<http://praveenbattula.blogspot.com/2010/05/free-sharepoint-2010-developer-tutorial.html>
The MSDN site is a great reference for SharePoint API information. Set up a development environment for SharePoint and run through some tutorials - you need a local instance of SharePoint installed on your development machine. Here is a tutorial link: <http://dotnetguts.blogspot.com/2008/06/sharepoint-development-tutorial.html>
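To get a first feel for what the platform exposes, here is a minimal sketch that reads items from a SharePoint 2010 list over the built-in ListData.svc REST endpoint. The site URL and list name are hypothetical placeholders, and the code assumes the server answers with the usual verbose JSON shape (`{"d": {"results": [...]}}`); only the URL-building helper runs without a real server:

```python
# Sketch: reading a SharePoint 2010 list via the ListData.svc REST
# endpoint. The site URL and list name used below are hypothetical
# placeholders -- substitute your own.
import json
import urllib.request


def listdata_url(site_url, list_name):
    """Build the ListData.svc endpoint URL for a given list."""
    return site_url.rstrip("/") + "/_vti_bin/ListData.svc/" + list_name


def fetch_items(site_url, list_name):
    """Return all items of a list as dicts, assuming the server replies
    with the verbose JSON shape {"d": {"results": [...]}}."""
    req = urllib.request.Request(
        listdata_url(site_url, list_name),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["d"]["results"]


# Example (requires a reachable SharePoint server):
#   for item in fetch_items("http://sharepoint.example.com/sites/sandbox", "Tasks"):
#       print(item.get("Title"))
```

On a real farm you would normally also need to authenticate (typically Windows/NTLM credentials), which is omitted here for brevity.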
I am blowing my own trumpet here, but I think the first thing to understand about SharePoint is that you can do a lot out of the box before you start developing. I have seen too many cases where people started developing from scratch when 90% of the expected features were already there. Sure, sometimes with SharePoint the issue is knowing where to find them... My personal choice is to follow live resources like blogs or Twitter, because web development is changing so fast these days. You'll find that the SharePoint online community is very active. You can start with Microsoft's own blog: <http://sharepoint.microsoft.com/blog>