19,876
This thought was inspired by [Serban Tanasa's question](https://worldbuilding.stackexchange.com/questions/19870/stealing-luck-for-fun-and-profit) about engineering one's luck. Luck is time-variant (it depends explicitly upon time), and a fifth- or higher-dimensional being "can" experience all alternate times at once. Remember when your physics teacher mentioned a particularly counter-intuitive fact about the quantum physics of the double slit: the present can affect the past. That is irrelevant here, just an [interesting fact](http://secondnexus.com/technology-and-innovation/physicists-demonstrate-how-time-can-seem-to-run-backward-and-the-future-can-affect-the-past/).

My question: since a higher-dimensional being is capable of experiencing and perceiving all times at once, from the Big Bang to the end of time (or perhaps time is looping), can it feel lucky? Or, to stretch our imagination further: since we have multiple spatial dimensions, maybe these beings see an area or a volume of time, and luck applies to them differently than it does to us.

Note: "five dimensions" here means 3 spatial dimensions (length, width and height) plus 2 timelines. Some may argue it should instead be 4 spatial dimensions plus 1 timeline; it doesn't matter, so long as the beings can affect the past, present and future simultaneously.

Suppose we see the being drop a vase, which accelerates towards the ground. From the being's perspective there are numerous copies of the same vase at different heights, and on some timelines the vase never existed or never broke. The being can also see the fragments reassemble into a vase while levitating off the ground. Imagine walking down a street where you see many copies of your younger and older selves, each doing different things, and in one of the timelines you are dead or never existed.
2015/06/30
[ "https://worldbuilding.stackexchange.com/questions/19876", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
From the clues in the backstory, it seems the *Gerontocracy of the Favored* is based on some sort of technological version of magic: altering probability. This would be very difficult to explain without a lot of handwavium (hands moving at almost *c*), and it also depends a bit on which version of the quantum universe you believe in. In the "Many Worlds" interpretation, every event is played out, so in "otherwhen" a version of you is not drinking the warm drink you have in your hand right now (or alternatively, you don't have the drink, but one of the infinite number of your counterparts in otherwhen does...). In other words, for every decision you make or don't make, every possible outcome happens.

The *Gerontocracy of the Favored* have therefore discovered a way to manipulate or otherwise access the "many worlds" and direct themselves along the pathways that mostly favour themselves. This will be difficult even with Clarke-tech-level magic, since not only do you have to anticipate and determine future outcomes for an almost infinite number of possibilities, you have to do so in such a way that the average outcome is more favourable to a group of individuals, which is exponentially more difficult.

If manipulating quantum reality is a bit too much (this would be pretty much a post-singularity landscape, and your hero would be wandering around essentially as an extra in *someone else's* dream), then you could simplify things a bit by postulating a cluster of AIs which take input from a vast number of sensors and, using game theory, probability tables and other statistical methods, advise the *Gerontocracy of the Favored* on which paths lead to the best possible outcomes. Once again, this runs into the problem of averaging outcomes over a group: outcome A may favour me over you, and outcome B may favour you over me, but outcome C, while optimal for neither of us, is better for both of us than the alternatives.
This is a variation on some forms of game theory. The problem with either form of "magic" is twofold.

The fundamental problem is the "[Local Knowledge Problem](https://fee.org/articles/hayek-the-knowledge-problem/)", postulated by the economist F.A. Hayek. Hayek observed that information is subtle and diffuse in any system (this applies to markets, climates, ecosystems and other complex adaptive systems), and local actors can observe and act upon this information far faster than any centralized system. By the time the information makes it up the chain, is observed, a decision is made, and the order to act is passed back down the chain, the conditions will have changed (either a little or a lot), leading to cumulative errors building up in a positive feedback loop. This is why market economies with free local actors will always outperform centralized command economies. Your *Gerontocracy of the Favored* might actually be a craptacular USSR writ on a global scale, and while the *Gerontocracy* is better off than ordinary people (much like the Soviet-era Nomenklatura), compared to us in their far distant past they are not well off at all.

The second and probably more immediate issue lies in game theory. So long as the *Gerontocracy of the Favored* can hang together around common goals and are willing to accept individually sub-optimal outcomes to preserve their overall ranking, they can stay ahead of everyone else. Human nature (and possibly post-human nature) being what it is, the various members of the *Gerontocracy of the Favored* will probably end up seeking ways to optimize their own individual outcomes, leading to covert and even overt efforts to oust other members and seize resources for their own use. By the time your story starts, there may be only one member of the *Gerontocracy of the Favored* left "standing", as it were: the sole and absolute ruler of Earth. As Dirty Harry would say in these circumstances: "Do you feel lucky?"
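The outcome-A/B/C compromise described above can be made concrete with a toy payoff table (all numbers are invented for illustration): each member would veto the outcome that favours the other, so a rule such as maximising the worst-off member's payoff selects the compromise outcome C.

```python
# Toy payoff table for the group-compromise problem; all numbers are invented.
# Each entry is (my payoff, your payoff).
payoffs = {
    "A": (10, 2),  # favours me over you
    "B": (2, 10),  # favours you over me
    "C": (7, 7),   # optimal for neither, but better for both than the rest
}

def maximin_choice(payoffs):
    """Pick the outcome that maximises the worst-off member's payoff."""
    return max(payoffs, key=lambda k: min(payoffs[k]))

print(maximin_choice(payoffs))  # C
```

This is only a sketch of why group-level luck optimisation settles on mutually tolerable rather than individually optimal outcomes; a real "advisory AI" would face the same averaging problem over vastly more members and outcomes.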
Using some handwaves, you could do this with advanced probability computing. For instance, if one of the immutable laws of the universe is that there is a finite amount of "luck" available at any time, and that luck can be redirected by doing the math, then you get luck "mining" simply by making that technology available only to the elite. Additional immutable laws could come into play by having luck attract luck (and vice versa), so that the lucky get luckier and the unlucky get unluckier.

For example, take two people playing roulette: one of the technocrats and one of the proles. Our prole has no access to this probability-calculating technology, whereas our technocrat does. The technocrat can use his computing advantage to calculate which numbers to play based on every single variable, from the way the operator rolls the ball to the temperature of the room. This gives him a significant advantage, and the handwaved laws above would then increase that advantage with each prior "lucky" moment. Combine this with the huge information advantage such connectivity could bring, and you have a powerful, "lucky", well-informed elite and an unlucky lower class at an information disadvantage.
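A minimal simulation of the "luck attracts luck" rule sketched above (the probabilities and the boost size are invented): the prole plays at fair single-number roulette odds, while the technocrat starts with a computed edge that compounds with every win.

```python
import random

def play_session(win_prob, rounds, boost=0.0, seed=0):
    """Simulate `rounds` bets; each win raises win_prob by `boost`
    (the handwaved 'luck attracts luck' law)."""
    rng = random.Random(seed)  # fixed seed for a reproducible illustration
    p, wins = win_prob, 0
    for _ in range(rounds):
        if rng.random() < p:
            wins += 1
            p = min(1.0, p + boost)  # the lucky get luckier
    return wins

prole = play_session(1 / 37, rounds=1000)                  # fair single-number odds
technocrat = play_session(0.10, rounds=1000, boost=0.005)  # computed edge that compounds
```

Even a modest initial edge snowballs under the compounding rule, which is the point of the worldbuilding conceit: the gap between the classes widens on its own.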
Try to get out of this dimension through meditation; then you will be a multidimensional being, and as such you can know and do whatever you wish in the multidimensional universe, or multiverse. Then you can see all possible lives and achieve your goal. Easy :) Mathematically, you would be living unlimited parallel lives, each with a different outcome or end, but you would know all of them in this one presence. So, as in "there can be only one": the one who is awake truly lives, while the others are like sleepers, unable to achieve what they wish to do. The name JHVH (Jehovah) means just that, wise brother.
The problem with "luck" is that it is a human concept applied to the quantum workings of the universe, and the universe itself does not care whether one macroscopic system is favoured by the vagaries of chance while another is harmed, also favoured, or sees no net benefit or loss. Put more simply: our universe simply doesn't care how many people get lucky or unlucky. This leaves us with two possibilities:

1. The universe described in the question is *different* from our own, and is quite probably (though not definitely) an anthropically-oriented simulation in which subjective luck is a limited resource (possibly Matrix-like, with sentient beings plugged in from outside the simulation, though it does not necessarily follow that they *must* be plug-ins rather than entirely simulated themselves); or
2. The creators of the technology that "transfers" "luck" (i.e. net benefit or harm to living beings) deliberately included an unnecessary component that harms non-target beings at the same time as it benefits target beings, balanced according to the net gain or loss conferred on targets and non-targets, as subjectively appraised by all beings within its area of effect.

In the case of option 1, this could be an experiment on the effects of an anthropically-oriented universe as opposed to our impersonal one. In the case of option 2, the creators of the luck-transference technology/magic must either have made the mistaken assumption that luck is a universally limited resource, rather than mere localised quantum variations that could be influenced without affecting other localities, or they must have set out, with malice aforethought, to ensure that the device they created had no *net* subjective benefit or harm for the beings in its area of influence. The reason for this may be simple malice, or there may be a more subtle, probably legally-oriented reason.
For example: the government of the time and place where the devices were created may have had some arcane legal requirement equating the imposition of health benefits with financial gain, and in order to avoid tax laws that would have seen the devices' owners taxed according to the net benefit bestowed, the owners chose to have them impose no *net* benefit, causing low-level harm to a large number of non-targets commensurate with the benefit gained by the small number of target individuals.

As to how it works, that depends on the two cases above.

In Case 1, the simulation/different universe: the luckiness or unluckiness of an individual is determined by the generation of a random number in the simulation engine. Luck transference *to* an individual or group may occur by having a subroutine pick the better values out of a buffer of truly random data that is otherwise streamed sequentially as randomness is required, leaving the less lucky values for everyone else.

In Case 2, our own universe or a reasonable facsimile: the world may be seeded with nanites that are able to change living beings and influence otherwise random outcomes, for better or worse, relative to the living beings among them. Due to the requirement that the entire system confer no net gain or loss as subjectively assessed by those beings, the nanites would be networked, forming a distributed intelligence, reading people's minds and imposing "luckiness" or "unluckiness" according to its target criteria, with the whole system designed to produce no *net* gain or loss except over short, non-reportable periods. How?
The nanites might read an individual's intent and assess the likelihood that their actions would lead to a subjectively positive or negative outcome, then if necessary alter those actions by something as simple as adding or subtracting a few nerve impulses here and there, so that the actions are unexpectedly successful (or not), or so that someone has (or doesn't have) a particular worthwhile thought or idea. Or they might correct a genetic or immunological deficiency (conferring disease immunity) or create one (leading to cancer). Either way, luck *transference* can probably be put down to someone, quite a while ago, making the informed decision to make a lot of people's lives more miserable than necessary.
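Case 1's mechanism, a subroutine skimming the best values out of a shared randomness buffer, can be sketched as follows. The convention that a higher value means a luckier draw is purely illustrative.

```python
import random

def allocate_luck(stream, n_favored):
    """Skim the n_favored 'best' draws from a randomness buffer for the favored,
    leaving the rest, in their original stream order, for everyone else.
    ('Higher value = luckier draw' is an illustrative convention.)"""
    ranked = sorted(range(len(stream)), key=lambda i: stream[i], reverse=True)
    favored_idx = set(ranked[:n_favored])
    favored = [stream[i] for i in ranked[:n_favored]]
    leftovers = [stream[i] for i in range(len(stream)) if i not in favored_idx]
    return favored, leftovers

rng = random.Random(42)                      # fixed seed for reproducibility
buffer = [rng.random() for _ in range(10)]   # the 'truly random' stream
lucky, everyone_else = allocate_luck(buffer, n_favored=3)
```

Note the zero-sum property the answer relies on: every lucky value handed to the favored is a value removed from the stream everyone else draws from, so the system as a whole confers no net gain.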
4,478,032
So I just finished working on a site on my computer, put it on a flash drive, and copied it into the public\_html folder on my server. When I go to <http://localhost/> I get my index page, but the CSS and all the images are gone. How can I fix this?
2010/12/18
[ "https://Stackoverflow.com/questions/4478032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/521436/" ]
Ensure that you have put the right path for your css and images.
Make sure that inside your HTML the path to your CSS is correct: `<link rel="stylesheet" type="text/css" href="mystyle.css">`. Your `href="mystyle.css"` should use a correct relative reference with `..` (dot-dot), e.g. `"../folderName"`, in case your CSS file is one or more directories away from your main folder.
4,478,032
So I just finished working on a site on my computer, and I put it on a flash drive and put it in my public\_html folder on my server. When I type in <http://localhost/> I get my index page but the css is gone and all the images are gone. How can I fix this?
2010/12/18
[ "https://Stackoverflow.com/questions/4478032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/521436/" ]
Ensure that you have put the right path for your css and images.
Sometimes all that you need to do is clear your browser's cache, which stores the page's current state and will sometimes ignore any new changes to the CSS, and sometimes even to images, until the cache is cleared.
4,478,032
So I just finished working on a site on my computer, and I put it on a flash drive and put it in my public\_html folder on my server. When I type in <http://localhost/> I get my index page but the css is gone and all the images are gone. How can I fix this?
2010/12/18
[ "https://Stackoverflow.com/questions/4478032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/521436/" ]
Sorry, this question is from years ago. For those interested, the problem was due to permissions on my server.
Ensure that you have put the right path for your css and images.
4,478,032
So I just finished working on a site on my computer, and I put it on a flash drive and put it in my public\_html folder on my server. When I type in <http://localhost/> I get my index page but the css is gone and all the images are gone. How can I fix this?
2010/12/18
[ "https://Stackoverflow.com/questions/4478032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/521436/" ]
Sorry, this question is from years ago. For those interested, the problem was due to permissions on my server.
Make sure that inside your HTML the path to your CSS is correct: `<link rel="stylesheet" type="text/css" href="mystyle.css">`. Your `href="mystyle.css"` should use a correct relative reference with `..` (dot-dot), e.g. `"../folderName"`, in case your CSS file is one or more directories away from your main folder.
4,478,032
So I just finished working on a site on my computer, and I put it on a flash drive and put it in my public\_html folder on my server. When I type in <http://localhost/> I get my index page but the css is gone and all the images are gone. How can I fix this?
2010/12/18
[ "https://Stackoverflow.com/questions/4478032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/521436/" ]
Sorry, this question is from years ago. For those interested, the problem was due to permissions on my server.
Sometimes all that you need to do is clear your browser's cache, which stores the page's current state and will sometimes ignore any new changes to the CSS, and sometimes even to images, until the cache is cleared.
20,827,199
I am currently working on creating a paint program using python and pygame. I am currently having trouble with creating the undo/redo function in the program. The way I was thinking of doing so would be to save the canvas image after each time the user releases the mouse, but I am not sure if the individual images would have to be saved in a temporary folder that is deleted after the program is closed. I have also read that this method can affect performance of the program so I am wondering if there are any other methods that will work more efficiently. Thank you.
2013/12/29
[ "https://Stackoverflow.com/questions/20827199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2950875/" ]
Writing a copy to file does sound a bit heavy-handed; does it need to be unlimited undo? I would suggest using something like Python's [collections.deque](http://docs.python.org/2/library/collections.html#collections.deque) as a circular buffer to save the last N modifications; this saves you having to worry about cleanup and disk storage. If taking full snapshots each time turns out to be too much performance-wise, you may need to look into limiting each saved region to a specific bounding box based on whatever the last action was that the user performed.
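A minimal sketch of the circular-buffer idea (plain values stand in for surface snapshots, and the `MAX_UNDO` depth is an illustrative assumption, not a pygame specific):

```python
from collections import deque

MAX_UNDO = 10  # illustrative cap on history depth

# With maxlen set, the oldest snapshot falls off automatically when full,
# so no manual cleanup or disk storage is needed.
history = deque(maxlen=MAX_UNDO)

def save_snapshot(canvas):
    # In pygame this would be something like canvas.copy() on mouse release;
    # here any value stands in for the surface.
    history.append(canvas)

def undo():
    # Return the most recent snapshot, or None if there is nothing to undo.
    return history.pop() if history else None
```

A usage sketch: call `save_snapshot` on every mouse release, and restore whatever `undo` returns when the user presses Ctrl+Z.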
My suggestion is to keep a buffer of the last operations that have been done. Each operation would consist of a sprite and the position where it is placed. You would draw the canvas, as well as all sprites from that buffer. When you have too many sprites in the buffer, you can blit the oldest onto the canvas, thus saving memory. The undo itself would be rather easy: just remove the last sprite that was added. A redo would be slightly more difficult, so instead of removing, I would keep a pointer to the last sprite that I will draw. Only if a new action is added do I remove all the sprites that have become "invisible".
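The pointer-based redo can be sketched as follows (plain values stand in for the sprite-plus-position records; the function names are illustrative, not pygame APIs):

```python
ops = []      # recorded operations, oldest first
visible = 0   # pointer: number of operations currently drawn

def do(op):
    """A new action discards any undone ("invisible") ops, then records itself."""
    global visible
    del ops[visible:]
    ops.append(op)
    visible += 1

def undo():
    global visible
    if visible:
        visible -= 1

def redo():
    global visible
    if visible < len(ops):
        visible += 1

def drawn():
    """What would be rendered on top of the canvas."""
    return ops[:visible]
```

Note the design choice: undo never deletes anything, it only moves the pointer, which is what makes redo cheap.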
304,827
I have a slime farm with an iron golem inside that attracts the slimes, but I noticed that it also attracts some other mobs if it isn't lit up properly. So I went AFK for a while with light level -7 in the slime farm, and a lot of mobs spawned! But when I went down I saw a spider that had killed the iron golem. So is there a way to spawn all mobs other than spiders?
2017/04/01
[ "https://gaming.stackexchange.com/questions/304827", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/183285/" ]
I think you have the wrong strategy here. Many (possibly all?) hostile mobs will attack Iron Golems. The spider may have been the mob to kill it, but all the mobs before the spider likely did damage to the golem, weakening it just enough to finally be killed by the spider. So, the real problem is not preventing spider spawns, but preventing the golem from dying. Unfortunately, it is not possible to permanently keep it alive. That being said, there are some tricks to keep the golem alive for as long as possible. * Frequently throw splash potions of instant health at the golem. Splash potions of instant health heal golems, but hurt hostile mobs. * Make an [iron golem farm](http://minecraft.gamepedia.com/Tutorials/Iron_golem_farming), and a water pathway that leads the iron golems to the mobs. This is much more difficult, but a more permanent solution.
If you put fences around the Iron Golem it will stop mobs from getting to it while still attracting the mobs to it.
1,893,248
I'm working on Markov Chains and I would like to know of efficient algorithms for constructing probabilistic transition matrices (of order n), given a text file as input. I am not after one algorithm, but I'd rather like to build a list of such algorithms. Papers on such algorithms are also more than welcome, as any tips on terminology, etc. Notice that this topic bears a strong resemblance with n-gram identification algorithms. Any help would be much appreciated.
2009/12/12
[ "https://Stackoverflow.com/questions/1893248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/50305/" ]
It sounds like there are two possible questions, you should clarify which one: 1. The 'text file' contains probability values and "n" and you build the matrix directly, but how to code it? This question is trivial, so let's disregard it 2. The 'text file' contains something like signal data and you want to model it as a Markov Chain. 'Markov Chain' generally refers to a first order stochastic process, so I'm not sure then what you mean by "order", probably the size of the matrix, but that is not typical terminology. Anyway, for 1st-order, n x n matrix, discrete time random process, you should look at Viterbi Algorithm: <http://en.wikipedia.org/wiki/Viterbi_algorithm>
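For interpretation 2, if the input is a token sequence, a minimal first-order estimate of the transition matrix is just normalized adjacent-pair counts. A sketch (whitespace tokenization is an assumption for the example; real text would need proper tokenization):

```python
from collections import defaultdict

def transition_matrix(tokens):
    """Estimate P(next | current) from adjacent-pair counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    # Normalize each row so the outgoing probabilities sum to 1.
    return {
        cur: {nxt: c / sum(row.values()) for nxt, c in row.items()}
        for cur, row in counts.items()
    }

tokens = "a b a b a c".split()
P = transition_matrix(tokens)
```

For order n, the same counting works with length-n tuples of previous tokens as row keys, which is where the resemblance to n-gram identification comes from.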
Whenever dealing with Markov Models, I tend to end up looking at [crm114 Discriminator](http://crm114.sourceforge.net/). One, he goes into great detail about what different models there actually are (Markov isn't always the best, depending on what the application is) and provides general links and lots of background information on how probabilistic models work. While crm114 is generally used as some sort of SPAM identification tool, it is actually a more generic probability engine that I have used in other applications.
36,232
My new Canon EF 75-300mm telephoto lens and the zoom ring is quite stiff. Is the lens faulty or is there a way to loosen it up?
2013/03/27
[ "https://photo.stackexchange.com/questions/36232", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/17780/" ]
I had a similar-sounding problem with an EF 75-300 I bought about a year ago. It felt almost as if the outer casing of the barrel was rubbing or catching in places against the inner one, making the movement feel a bit jerky. Over time, however, using the zoom ring seems to have become a more fluent action. The front focusing ring is soft and smooth, but it always was. I suggest, if the lens is still covered by warranty, that you contact your supplier and discuss your concerns. Regards, M
**Background:** This might seem like a non-answer, but I am recounting personal experience. Faced with a fogged lens due to continued shooting in extremely muggy Florida summer weather, I took it upon myself to fix it; as a mechanical engineer, I figured I could fix anything, especially as I was quite impatient. I disassembled the lens and could never put it back together. Relenting and buying a new lens in Naples, I was told by the store people very clearly that lasers are used to assemble lenses, so they told me never to try that again! After a while I looked, and the damaged lens's fogging had gone away by itself. Patience would have been the best fix in that case. Pay attention to what "mattdm" has suggested and simply call the store/dealer that sold you the lens, find out if you have warranty, and try to get it fixed. Another reason you may want to stay away from fixing things yourself: there wasn't a particular incident that caused your lens ring to tighten, so it might be a manufacturing defect from the get-go.
36,232
My new Canon EF 75-300mm telephoto lens and the zoom ring is quite stiff. Is the lens faulty or is there a way to loosen it up?
2013/03/27
[ "https://photo.stackexchange.com/questions/36232", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/17780/" ]
I had a similar-sounding problem with an EF 75-300 I bought about a year ago. It felt almost as if the outer casing of the barrel was rubbing or catching in places against the inner one, making the movement feel a bit jerky. Over time, however, using the zoom ring seems to have become a more fluent action. The front focusing ring is soft and smooth, but it always was. I suggest, if the lens is still covered by warranty, that you contact your supplier and discuss your concerns. Regards, M
I recently had the same problem, as the lens was stiff and felt like it was scratching. I took it back to the store where it was purchased (Currys) and they immediately sent it off to Canon to be repaired. Apparently it is an easy fix, but I didn't want to try myself in case it invalidated my warranty or screwed up the lens.
723,356
It's clear that one shouldn't use floating precision when working with, say, monetary amounts since the variation in precision leads to inaccuracies when doing calculations with that amount. That said, what are use cases when that is acceptable? And, what are the general principles one should have in mind when deciding?
2009/04/06
[ "https://Stackoverflow.com/questions/723356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/87408/" ]
Floating point numbers should be used for what they were designed for: computations where what you want is a fixed *precision*, and you only care that your answer is accurate to within a certain tolerance. If you need an exact answer in all cases, you're best using something else. Here are three domains where you might use floating point: 1. **Scientific Simulations** Science apps require a lot of number crunching, and often use sophisticated numerical methods to solve systems of differential equations. You're typically talking double-precision floating point here. 2. **Games** Think of games as a simulation where it's ok to cheat. If the physics is "good enough" to seem real then it's ok for games, and you can make up in user experience what you're missing in terms of accuracy. Games usually use single-precision floating point. 3. **Stats** Like science apps, statistical methods need a lot of floating point. A lot of the numerical methods are the same; the application domain is just different. You find a lot of statistics and monte carlo simulations in financial applications and in any field where you're analyzing a lot of survey data. Floating point isn't trivial, and for most business applications you really don't need to know all these subtleties. You're fine just knowing that you can't represent some decimal numbers exactly in floating point, and that you should be sure to use some decimal type for prices and things like that. 
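That "accurate to within a tolerance" mindset, plus the decimal-type advice for prices, in a short Python sketch (the `rel_tol` value is an arbitrary illustrative choice):

```python
import math
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so sums drift:
assert 0.1 + 0.2 != 0.3
# The right floating-point question is "close enough?", not "equal?":
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)

# For prices, a decimal type keeps every digit exact:
total = Decimal("0.10") + Decimal("0.20")
assert total == Decimal("0.30")
```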
If you really want to get into the details and understand all the tradeoffs and pitfalls, check out the classic [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.sun.com/source/806-3568/ncg_goldberg.html), or pick up a book on [Numerical Analysis](http://en.wikipedia.org/wiki/Numerical_analysis) or [Applied Numerical Linear Algebra](http://books.google.com/books?id=bj-Lu6zjWbEC&dq=applied+numerical+linear+algebra&printsec=frontcover&source=bl&ots=5F5fVNdfwX&sig=Z9FiPDHVocrwrJZNkoQ9rh8ow70&hl=en&ei=LXvaSc6bC42qtAPW5vDXBg&sa=X&oi=book_result&ct=result&resnum=8) if you're really adventurous.
I think you should ask the other way around: when should you *not* use floating point? For most numerical tasks, floating point is the preferred data type, as you can (almost) forget about overflow and other kinds of problems typically encountered with integer types. One way to look at the floating-point data type is that its precision is independent of the dynamic range: whether the number is very small or very big (within an acceptable range, of course), the number of meaningful digits is approximately the same. One drawback is that floating-point numbers have some surprising properties, like `x == x` can be False (if x is NaN), and they do not follow most mathematical rules (e.g. distributivity, that is `x*(y + z) != x*y + x*z`). Depending on the values of x, y, and z, this can matter.
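A small Python illustration of those surprises (associativity is used as the concrete broken identity, since exactly which operands break a given rule depends on rounding):

```python
import math

nan = float("nan")
assert nan != nan        # x == x is False when x is NaN...
assert math.isnan(nan)   # ...so test for NaN with math.isnan instead

# Familiar algebraic identities can fail under rounding; associativity, e.g.:
a = (0.1 + 0.2) + 0.3    # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)    # 0.6
assert a != b
# Distributivity x*(y + z) == x*y + x*z can fail the same way for some x, y, z.
```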
723,356
It's clear that one shouldn't use floating precision when working with, say, monetary amounts since the variation in precision leads to inaccuracies when doing calculations with that amount. That said, what are use cases when that is acceptable? And, what are the general principles one should have in mind when deciding?
2009/04/06
[ "https://Stackoverflow.com/questions/723356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/87408/" ]
Floating point numbers should be used for what they were designed for: computations where what you want is a fixed *precision*, and you only care that your answer is accurate to within a certain tolerance. If you need an exact answer in all cases, you're best using something else. Here are three domains where you might use floating point: 1. **Scientific Simulations** Science apps require a lot of number crunching, and often use sophisticated numerical methods to solve systems of differential equations. You're typically talking double-precision floating point here. 2. **Games** Think of games as a simulation where it's ok to cheat. If the physics is "good enough" to seem real then it's ok for games, and you can make up in user experience what you're missing in terms of accuracy. Games usually use single-precision floating point. 3. **Stats** Like science apps, statistical methods need a lot of floating point. A lot of the numerical methods are the same; the application domain is just different. You find a lot of statistics and monte carlo simulations in financial applications and in any field where you're analyzing a lot of survey data. Floating point isn't trivial, and for most business applications you really don't need to know all these subtleties. You're fine just knowing that you can't represent some decimal numbers exactly in floating point, and that you should be sure to use some decimal type for prices and things like that. 
If you really want to get into the details and understand all the tradeoffs and pitfalls, check out the classic [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.sun.com/source/806-3568/ncg_goldberg.html), or pick up a book on [Numerical Analysis](http://en.wikipedia.org/wiki/Numerical_analysis) or [Applied Numerical Linear Algebra](http://books.google.com/books?id=bj-Lu6zjWbEC&dq=applied+numerical+linear+algebra&printsec=frontcover&source=bl&ots=5F5fVNdfwX&sig=Z9FiPDHVocrwrJZNkoQ9rh8ow70&hl=en&ei=LXvaSc6bC42qtAPW5vDXBg&sa=X&oi=book_result&ct=result&resnum=8) if you're really adventurous.
It's appropriate to use floating-point types when dealing with scientific or statistical calculations. These will invariably only have, say, 3-8 significant digits of accuracy. As to whether to use single- or double-precision floating-point types, this depends on your need for accuracy and how many significant digits you need; typically, though, people just end up using doubles unless they have a good reason not to. For example, if you measure distance or weight or any physical quantity like that, the number you come up with isn't exact: it has a certain number of significant digits based on the accuracy of your instruments and your measurements. For calculations involving anything like this, floating-point numbers are appropriate. Also, if you're dealing with irrational numbers, floating-point types are appropriate (and really your only choice), e.g. in linear algebra, where you deal with square roots a lot. Money is different because you typically need to be exact and every digit is significant.
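The single-vs-double digit budget can be seen directly by round-tripping a value through 32-bit storage with Python's standard `struct` module:

```python
import struct

x = 0.1  # Python stores this as a 64-bit double

# Round-trip through a 32-bit float: pack as 'f', then unpack back to a double.
x32 = struct.unpack("f", struct.pack("f", x))[0]

assert x32 != x              # single precision lost digits in the round-trip
# A double carries roughly 15-16 significant decimal digits, a single only ~7,
# so the round-trip error shows up around the 1e-8 to 1e-9 scale here:
assert abs(x32 - x) < 1e-7
assert abs(x32 - x) > 1e-12
```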
723,356
It's clear that one shouldn't use floating precision when working with, say, monetary amounts since the variation in precision leads to inaccuracies when doing calculations with that amount. That said, what are use cases when that is acceptable? And, what are the general principles one should have in mind when deciding?
2009/04/06
[ "https://Stackoverflow.com/questions/723356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/87408/" ]
Most real-world quantities are inexact, and typically we know their numeric properties with a lot less precision than a typical floating-point value. In almost all cases, the C types float and double are good enough. It is necessary to know some of the pitfalls. For example, testing two floating-point numbers for equality is usually not what you want, since all it takes is a single bit of inaccuracy to make the comparison non-equal. tgamblin has provided some good references. The usual exception is money, which is calculated exactly according to certain conventions that don't translate well to binary representations. Part of this is the constants used: you'll never see a pi% interest rate, or a 22/7% interest rate, but you might well see a 3.14% interest rate. In other words, the numbers used are typically expressed in exact decimal fractions, not all of which are exact binary fractions. Further, the rounding in calculations is governed by conventions that also don't translate well into binary. This makes it extremely difficult to precisely duplicate financial calculations with standard floating point, and therefore people use other methods for them.
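The "rounding governed by conventions" point can be made concrete with Python's `decimal` module, which lets you name the convention explicitly (the half-up rule below is the one typically expected on invoices; an assumption for the example):

```python
from decimal import Decimal, ROUND_HALF_UP

# 2.675 is not an exact binary fraction, so binary rounding surprises:
# the stored double is slightly below 2.675, and round() goes down.
assert round(2.675, 2) == 2.67

# Decimal arithmetic with an explicit convention matches the ledger:
price = Decimal("2.675")
assert price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) == Decimal("2.68")
```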
From [Wikipedia](http://en.wikipedia.org/wiki/Floating_point): > > Floating-point arithmetic is at its > best when it is simply being used to > measure real-world quantities over a > wide range of scales (such as the > orbital period of Io or the mass of > the proton), and at its worst when it > is expected to model the interactions > of quantities expressed as decimal > strings that are expected to be exact. > > > Floating point is fast but inexact. If that is an acceptable trade off, use floating point.
723,356
It's clear that one shouldn't use floating precision when working with, say, monetary amounts since the variation in precision leads to inaccuracies when doing calculations with that amount. That said, what are use cases when that is acceptable? And, what are the general principles one should have in mind when deciding?
2009/04/06
[ "https://Stackoverflow.com/questions/723356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/87408/" ]
I'm guessing you mean "floating point" here. The answer is, basically, any time the quantities involved are approximate, measured, rather than precise; any time the quantities involved are larger than can be conveniently represented precisely on the underlying machine; any time the need for computational speed overwhelms exact precision; and any time the appropriate precision can be maintained without other complexities. For more details of this, you really need to read a numerical analysis book.
It's appropriate to use floating-point types when dealing with scientific or statistical calculations. These will invariably only have, say, 3-8 significant digits of accuracy. As to whether to use single- or double-precision floating-point types, this depends on your need for accuracy and how many significant digits you need; typically, though, people just end up using doubles unless they have a good reason not to. For example, if you measure distance or weight or any physical quantity like that, the number you come up with isn't exact: it has a certain number of significant digits based on the accuracy of your instruments and your measurements. For calculations involving anything like this, floating-point numbers are appropriate. Also, if you're dealing with irrational numbers, floating-point types are appropriate (and really your only choice), e.g. in linear algebra, where you deal with square roots a lot. Money is different because you typically need to be exact and every digit is significant.
723,356
It's clear that one shouldn't use floating precision when working with, say, monetary amounts since the variation in precision leads to inaccuracies when doing calculations with that amount. That said, what are use cases when that is acceptable? And, what are the general principles one should have in mind when deciding?
2009/04/06
[ "https://Stackoverflow.com/questions/723356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/87408/" ]
Floating point numbers should be used for what they were designed for: computations where what you want is a fixed *precision*, and you only care that your answer is accurate to within a certain tolerance. If you need an exact answer in all cases, you're best using something else. Here are three domains where you might use floating point: 1. **Scientific Simulations** Science apps require a lot of number crunching, and often use sophisticated numerical methods to solve systems of differential equations. You're typically talking double-precision floating point here. 2. **Games** Think of games as a simulation where it's ok to cheat. If the physics is "good enough" to seem real then it's ok for games, and you can make up in user experience what you're missing in terms of accuracy. Games usually use single-precision floating point. 3. **Stats** Like science apps, statistical methods need a lot of floating point. A lot of the numerical methods are the same; the application domain is just different. You find a lot of statistics and monte carlo simulations in financial applications and in any field where you're analyzing a lot of survey data. Floating point isn't trivial, and for most business applications you really don't need to know all these subtleties. You're fine just knowing that you can't represent some decimal numbers exactly in floating point, and that you should be sure to use some decimal type for prices and things like that. 
If you really want to get into the details and understand all the tradeoffs and pitfalls, check out the classic [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.sun.com/source/806-3568/ncg_goldberg.html), or pick up a book on [Numerical Analysis](http://en.wikipedia.org/wiki/Numerical_analysis) or [Applied Numerical Linear Algebra](http://books.google.com/books?id=bj-Lu6zjWbEC&dq=applied+numerical+linear+algebra&printsec=frontcover&source=bl&ots=5F5fVNdfwX&sig=Z9FiPDHVocrwrJZNkoQ9rh8ow70&hl=en&ei=LXvaSc6bC42qtAPW5vDXBg&sa=X&oi=book_result&ct=result&resnum=8) if you're really adventurous.
From [Wikipedia](http://en.wikipedia.org/wiki/Floating_point): > > Floating-point arithmetic is at its > best when it is simply being used to > measure real-world quantities over a > wide range of scales (such as the > orbital period of Io or the mass of > the proton), and at its worst when it > is expected to model the interactions > of quantities expressed as decimal > strings that are expected to be exact. > > > Floating point is fast but inexact. If that is an acceptable trade off, use floating point.
723,356
It's clear that one shouldn't use floating precision when working with, say, monetary amounts since the variation in precision leads to inaccuracies when doing calculations with that amount. That said, what are use cases when that is acceptable? And, what are the general principles one should have in mind when deciding?
2009/04/06
[ "https://Stackoverflow.com/questions/723356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/87408/" ]
I'm guessing you mean "floating point" here. The answer is, basically, any time the quantities involved are approximate, measured, rather than precise; any time the quantities involved are larger than can be conveniently represented precisely on the underlying machine; any time the need for computational speed overwhelms exact precision; and any time the appropriate precision can be maintained without other complexities. For more details of this, you really need to read a numerical analysis book.
I think you should ask the other way around: when should you *not* use floating point? For most numerical tasks, floating point is the preferred data type, as you can (almost) forget about overflow and other kinds of problems typically encountered with integer types. One way to look at the floating-point data type is that its precision is independent of the dynamic range: whether the number is very small or very big (within an acceptable range, of course), the number of meaningful digits is approximately the same. One drawback is that floating-point numbers have some surprising properties, like `x == x` can be False (if x is NaN), and they do not follow most mathematical rules (e.g. distributivity, that is `x*(y + z) != x*y + x*z`). Depending on the values of x, y, and z, this can matter.
58,091
[![Type of typography/design](https://i.stack.imgur.com/2HSdM.jpg)](https://i.stack.imgur.com/2HSdM.jpg) Does anyone know of a label that could be applied to the style of typography/design above?
2015/08/14
[ "https://graphicdesign.stackexchange.com/questions/58091", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/48392/" ]
This is drawing heavily from [The Memphis Group's](https://en.wikipedia.org/wiki/Memphis_Group) design motifs. They became shorthand for the 80s, due to the rather bizarre level of market acceptance they received during that period. [![enter image description here](https://i.stack.imgur.com/mgNwL.jpg)](https://i.stack.imgur.com/mgNwL.jpg) For something slightly less gaudy, also look at [Constructivism](https://en.wikipedia.org/wiki/Constructivism_(art)) and [De Stijl](https://en.wikipedia.org/wiki/De_Stijl).
I'd probably call it '80s revival' as the colours, while still gaudy, are slightly more muted than actual 1980s colours (for example, 'Miami Vice' titles or The Memphis Group's furniture, as @plainclothes says). If you want a shorter term, how about 'naff'!? (If you lived through the 80s it's quite depressing to see this sort of mish-mash come back on behance, etc.)
734,274
I have an Open Document Text file written with LibreOffice; when opened in Microsoft Word it appears very similar (which is normal). The only thing that is a little annoying is the list bullets. In LibreOffice I created the default list by simply pushing the list button, and the bullets appear as dots, with the second level appearing as an empty little circle. Then, when the file is opened in Microsoft Word (I think it is Office 2013), the first level appears as a circle with the number 10 inside and the second level as an arrow. The arrow isn't very bad, but the number is a little confusing because it gives the idea that it is a numbered list and something went wrong (because all items have the number 10). I used LibreOffice 4.1.3.2 to make the odt and opened it with Microsoft Word 2013. I have found this link, [Differences between the OpenDocument Text (.odt) format and the Word (.docx) format](http://office.microsoft.com/en-ca/word-help/differences-between-the-opendocument-text-odt-format-and-the-word-docx-format-HA010355788.aspx), which explains the differences and states for bullets: > > Default bullets in OpenOffice change appearance when .odt file is opened in Word 2013. > > > So this is something that is known to them. Still, I want to know if there is some way to get the same (or similar) bullets in both word processors. **Edit:** I have gotten access to Office 2013 and have seen that default list bullets are shown correctly, but not others. In the images below, the first list is a default one, the second is formatted with other bullets, the third uses images, and the last is a numbered list. This is a capture of the lists in LibreOffice: ![enter image description here](https://i.stack.imgur.com/xWAhQ.png) And here is the same file opened in Microsoft Word 2013: ![enter image description here](https://i.stack.imgur.com/afRmP.png)
2014/03/27
[ "https://superuser.com/questions/734274", "https://superuser.com", "https://superuser.com/users/110586/" ]
I'm afraid this is only half an answer... This question relates to setting the bullet point character in LibreOffice. [LibreOffice: using a dash as bullet automatically](https://superuser.com/questions/684885/libreoffice-using-a-dash-as-bullet-automatically?rq=1) I don't know what the character code is for the character Microsoft Office uses as the bullet, but (perhaps) if you worked *that* answer using the MS character it would get you what you want. Edit: Follow-on thought If you look at the .doc file using a hex editor you should be able to identify the bullet character by finding the surrounding text. And change them one by one or by mass-editing if you have a mind to. Tiresome to have to do this on each document, so I hope you are able to get a full solution.
What version of LibreOffice are you using? It could be a bug with 4.1 which would have been solved in 4.1.2. <http://ask.libreoffice.org/en/question/23556/problem-with-bullets-appearing-as-square-glyphs-when-imported-from-word-doc/>
734,274
I have an Open Document Text file written with LibreOffice; when opened in Microsoft Word it appears very similar (which is normal). The only thing that is a little annoying is the list bullets. In LibreOffice I created the default list by simply pushing the list button, and the bullets appear as dots, with the second level appearing as an empty little circle. Then, when the file is opened in Microsoft Word (I think it is Office 2013), the first level appears as a circle with the number 10 inside and the second level as an arrow. The arrow isn't very bad, but the number is a little confusing because it gives the idea that it is a numbered list and something went wrong (because all items have the number 10). I used LibreOffice 4.1.3.2 to make the odt and opened it with Microsoft Word 2013. I have found this link, [Differences between the OpenDocument Text (.odt) format and the Word (.docx) format](http://office.microsoft.com/en-ca/word-help/differences-between-the-opendocument-text-odt-format-and-the-word-docx-format-HA010355788.aspx), which explains the differences and states for bullets: > > Default bullets in OpenOffice change appearance when .odt file is opened in Word 2013. > > > So this is something that is known to them. Still, I want to know if there is some way to get the same (or similar) bullets in both word processors. **Edit:** I have gotten access to Office 2013 and have seen that default list bullets are shown correctly, but not others. In the images below, the first list is a default one, the second is formatted with other bullets, the third uses images, and the last is a numbered list. This is a capture of the lists in LibreOffice: ![enter image description here](https://i.stack.imgur.com/xWAhQ.png) And here is the same file opened in Microsoft Word 2013: ![enter image description here](https://i.stack.imgur.com/afRmP.png)
2014/03/27
[ "https://superuser.com/questions/734274", "https://superuser.com", "https://superuser.com/users/110586/" ]
I'm afraid this is only half an answer... This question relates to setting the bullet point character in LibreOffice. [LibreOffice: using a dash as bullet automatically](https://superuser.com/questions/684885/libreoffice-using-a-dash-as-bullet-automatically?rq=1) I don't know what the character code is for the character Microsoft Office uses as the bullet, but (perhaps) if you worked *that* answer using the MS character it would get you what you want. Edit: Follow-on thought If you look at the .doc file using a hex editor you should be able to identify the bullet character by finding the surrounding text. And change them one by one or by mass-editing if you have a mind to. Tiresome to have to do this on each document, so I hope you are able to get a full solution.
I suggest using SoftMaker FreeOffice instead of LibreOffice, because it has much better compatibility with Microsoft Office. It is a full-fledged office suite, available free of charge for Linux and Windows (you can get it at freeoffice.com). The included word processor, FreeOffice TextMaker, can open ODT faithfully, and if you create documents including bullets with FreeOffice TextMaker, save them as doc, and open them with Word 2013, the bullets will look like they should.
83,315
Yesterday I rebooted the web server machine, but I'm trying to figure out why the graph below shows that, prior to rebooting, the memory was almost full of cache with just a bit of active memory in use. Would there be any problem keeping it the way it was, or is rebooting every ~30 days what I'm supposed to do? Thanks <http://img513.imageshack.us/i/localhostlocaldomainmem.png/>
2009/11/10
[ "https://serverfault.com/questions/83315", "https://serverfault.com", "https://serverfault.com/users/-1/" ]
Check out [this blog post](http://egloo.wordpress.com/2008/10/29/linux-cached-memory/); it might shed some light on the issue.
Linux likes to use all otherwise-unused memory for disk cache. There's no performance downside, and there just might be a benefit because the disk won't need to be touched for some disk reads.
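You can watch this yourself: on Linux the split lives in `/proc/meminfo`, and "used" memory is really active memory plus reclaimable cached pages. A minimal C sketch (Linux-specific; the field names are the kernel's own) to pull those numbers:

```c
#include <stdio.h>
#include <string.h>

/* Return a field from /proc/meminfo in kB, or -1 if not found. */
long meminfo_kb(const char *key)
{
    char line[256];
    long val = -1;
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        if (!strncmp(line, key, strlen(key))) {
            sscanf(line + strlen(key), "%ld", &val);
            break;
        }
    }
    fclose(f);
    return val;
}
```

Compare `meminfo_kb("MemFree:")` with `meminfo_kb("Cached:")` on a long-running box: the cache typically dwarfs the free figure, and that is fine, because the kernel drops cached pages the moment applications ask for memory.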
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
To directly address your question of "What I Want to Know": I've found that calling the compiler directly via command line, becoming familiar with its options, and then writing your own Makefiles to do all of your builds has been extremely beneficial to me in learning the build process - which sounds like something that you want to learn. This basically separates the tool chain from the IDE and allows you to learn the tool chain more than the IDE. This is an on-going thing that I'm trying to improve on as well. I noticed that you've used arduino in the past, which is great because now I can recommend using avr-gcc as your compiler from now on. Give it a try, it's available on all platforms (Linux, WinAVR for windows, Mac) and the documentation on the avr-gcc tool chain and avrdude (programmer) is great, and there should be plenty of example Makefiles out there for you to learn from. A fair amount of this information is transferable to other hardware as well, for example arm-gcc.
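As a concrete starting point, a bare-bones hand-written avr-gcc Makefile might look something like this (a sketch only; the MCU, programmer type, port, and file names are assumptions — adjust them for your own board):

```make
# Minimal avr-gcc Makefile sketch -- not a drop-in, adjust to taste.
MCU    = atmega328p
F_CPU  = 16000000UL
CC     = avr-gcc
CFLAGS = -mmcu=$(MCU) -DF_CPU=$(F_CPU) -Os -Wall

# Convert the linked ELF into the Intel HEX image the programmer wants.
main.hex: main.elf
	avr-objcopy -O ihex -R .eeprom $< $@

main.elf: main.o
	$(CC) $(CFLAGS) -o $@ $^

main.o: main.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Program over the Arduino serial bootloader (port is an assumption).
flash: main.hex
	avrdude -p $(MCU) -c arduino -P /dev/ttyUSB0 -U flash:w:$<

clean:
	rm -f *.o *.elf *.hex
```

Running `make` and then `make flash` walks you through exactly the compile → link → objcopy → program pipeline the IDE was hiding.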
One thing you haven't mentioned is communications. It seems that one *hole* you could *plug* would be to learn the various standard communications protocols used in industry - things like: * [Profibus](http://en.wikipedia.org/wiki/Profibus) * [EIA-485](http://en.wikipedia.org/wiki/EIA-485) * [Modbus](http://en.wikipedia.org/wiki/Modbus) etc.
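As a small taste of what is inside these protocols, here is the CRC-16 used by Modbus RTU frames (a sketch of the checksum only; framing, addressing, and serial timing are all omitted):

```c
#include <stddef.h>
#include <stdint.h>

/* CRC-16 as used by Modbus RTU: init 0xFFFF, reflected polynomial
 * 0xA001, appended to the frame low byte first. */
uint16_t modbus_crc16(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : (crc >> 1);
    }
    return crc;
}
```

The standard check value for this CRC over the ASCII bytes `"123456789"` is `0x4B37`, which makes a handy self-test while you bring up a real link.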
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
The MicroC OS II book is probably something to invest in. You should also create projects to learn the various interfaces: i2c, spi, mdio, etc. In particular, how to bit bang each one. From time to time the hardware will support the bus (you need to learn that on a vendor-by-vendor basis), but often, for various reasons, you won't be able to use the i2c/spi hardware and will have to bit bang. The avr/arduino is fine; you should learn ARM, thumb and thumb2, the msp430, and some older (non-mips) pic. Look at the bootloader code for the arduino, and figure out how to make a loader program, erase the flash on it, and take over the board/chip. Get an lpc-based arm micro and, same deal, look at the serial port programming protocol. Get a sam7s or something with an arm7 that has a traditional jtag, and get an olimex wiggler or jtag-tiny (I recommend the latter). Get comfortable with openocd. ARM's swd is more painful than normal jtag but will prevail in this market (for cortex-m based products). In short, learn the various ways that vendors provide for in-circuit programming. You will brick boards from time to time and will want to be comfortable with unbricking them. Along these lines, write some code to parse intel hex, srec, and elf files; you may someday need to write a loader and need to support one or more of these popular formats. You mentioned tools. You can't go wrong learning gcc and binutils; learn how to cross compile, at least for the supported platforms (usually involves --target=msp430 --prefix=/something, for example). Supported platforms for the mainline gcc and binutils are a moving target, so avrgcc and mspgcc and the like are basically done for you. You need to learn to write linker scripts, and how to write your C code so that, for example, fixed tables show up in the rom, not the ram. 
Also get a feel for disassembling the binaries: you need to ensure that the tables are in the right place, that code is where you think it is, and that the vector tables and boot/startup code are where the processor needs them to be to boot. It also doesn't hurt to find out what the compiler optimizations do and what C code looks like when compiled to assembler/machine code. If possible, don't limit yourself to gcc/gnu. llvm is a strong player; it has the potential to pass gcc as a better tool. You may have already used sdcc. Try the eval versions of Keil, IAR, etc. You will quickly find that there is a lot of grey area in the C/C++ standards and each compiler interprets it differently, and there are also dramatic differences in the quality of the code produced from the same high-level source. If you stick with this profession, there will be times when you are forced to use a not-so-great compiler and have to work around its warts and weaknesses. In the desktop business you can often get away with refusing to use non-standards-compliant tools. In the microcontroller world, sometimes you get what you get and that is it. Sometimes you get vendors that modify/enhance the C language to match their hardware features or supposedly make your life easier (rabbit semi and xmos come to mind). (xmos is a very attractive platform for many reasons. I consider it advanced, but from the sounds of your experience you are likely ready; the tools are a free download with a really good simulator, and it is important to learn to study .vcd/waveforms of your code executing.) chibios is another one to look at. Creating successful bootloaders is an important skill. The bootloader, or at least the beginning part of it, wants to be rock solid; you don't want to deliver a product that is easily bricked. A simple boot with a way to re-load the application portion of the flash without compromising the entry part of the bootloader is key. 
The stellaris eval boards are loaded with peripherals. Although they provide libraries, it is worth learning them, especially since how they tell you it works and how it actually works differ, and you have to examine their code and other resources to find out. Being an avr fan, if it is still out there, I recommend getting an avr butterfly. Learn a little serial programming, solder on a connector, and reprogram it. There are a few peripherals on there to learn to program. Maybe get a formerly Dallas Semi, now Maxim I think, one-wire device, like their temperature sensors. Even more painful than i2c and mdio with their bidirectional data bus is this one-wire thing: it is one wire (and ground). Power, master to dependent and dependent to master, all on one wire. When I was where you are now, I found decoding infrared remote control protocols fun. The ir receiver modules are easy to come by; radio shack actually had a good one. It is basically the opposite of bit banging: you want to measure the time between state changes, and using that timing, detect and/or decode the protocol. A universal receiver is not necessary; one protocol at a time is fine. Likewise, try then bitbanging commands to an ir led, in particular if you bit bang the carrier frequency. Talking to an sd card via spi is probably a good idea as well. Definitely learn how to erase and program i2c and/or spi flash parts; you will come across these often for serial numbers, mac addresses, and the like. I recommend learning the basic ethernet protocols as well. Be able to parse and create arp and udp packets (from scratch) (and icmp/ping as well). It is pretty easy to create a udp stack if you cheat a little and don't follow the actual arp rules: if someone sends you something, send the response back to the sending mac/ip. Or perhaps go so far as to watch the arp packets from other folks go by and keep track of the mac/ip addresses around you. tcp takes a lot more work; doable, but better to just read about it first than to try to implement it. 
Good luck, and most important, have fun.
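On the "parse intel hex" point, the record checksum is a good first target: every record's bytes, checksum included, must sum to zero mod 256. A minimal verifier sketch (record parsing into address/data is left out):

```c
#include <string.h>

static int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

/* Verify one Intel HEX record line, e.g. ":00000001FF".
 * Returns 1 if the byte sum (including checksum) is 0 mod 256. */
int ihex_record_ok(const char *line)
{
    if (*line++ != ':')
        return 0;
    unsigned sum = 0;
    for (size_t i = 0; i + 1 < strlen(line); i += 2)
        sum += (unsigned)((hexval(line[i]) << 4) | hexval(line[i + 1]));
    return (sum & 0xFF) == 0;
}
```

Feeding it a known-good data record and the end-of-file record (`:00000001FF`) should pass; flip one digit and it should fail.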
So the question is "How to learn, when every toolchain is a black box?" I suggest finding an off-the-shelf, very old experimenter's debugging board with any common CPU. Something like a two-foot-wide contraption with a CPU, LEDs, switches, and an "Execute one single step" button. Manually create a 5-10 instruction loop program using binary machine-code instructions from the datasheet. Put it onto a huge pluggable ROM chip. Insert the ROM, hit power-on/reset, and debug it step by step.
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
Here's another idea. Implement your own background tasking system that allows you to create both timed tasks and demand tasks that run only when timed tasks are not running. It's not a true RTOS; it acts more like a cooperative scheduler. Convert a previous project to use the new tasking system. This kind of system worked really well on products we built on an 8051. It was originally written in assembly, but later on we converted it to C to help with porting it to other architectures. It was really slick: the heartbeat of this system was a 5 ms tick, and the timed tasks ran on 5 ms increments. We had a file that we used to name all our tasks (function pointers) with their time values, and those that were on demand. That file was then converted to either assembly or C, depending on how we implemented it, and compiled into the code. If you get it working pretty well, then you can tackle writing your own simple RTOS for something a little heftier.
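To make the shape of that concrete, here is a stripped-down sketch of such a task table and tick dispatcher (task names and periods are invented; the real product's generated table is not reproduced here):

```c
#include <stddef.h>
#include <stdint.h>

typedef void (*task_fn)(void);

typedef struct {
    task_fn  fn;
    uint32_t period;   /* run every N ticks (1 tick = 5 ms) */
} timed_task;

static int blink_runs, poll_runs;               /* stand-ins for real work */
static void blink_task(void) { blink_runs++; }
static void poll_task(void)  { poll_runs++;  }

/* The "file naming all our tasks", reduced to a static table. */
static const timed_task tasks[] = {
    { blink_task, 2 },   /* every 10 ms */
    { poll_task,  5 },   /* every 25 ms */
};

/* Called once per 5 ms heartbeat; demand tasks would run from the
 * main loop whenever this returns. */
static void tick(uint32_t now)
{
    for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
        if (now % tasks[i].period == 0)
            tasks[i].fn();
}
```

Driving `tick()` from a hardware timer interrupt flag (rather than from inside the ISR itself) keeps the scheduler cooperative and the interrupt handler short.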
[Realtime Mantra](http://www.eventhelix.com/realtimemantra/) contains several articles about embedded software development.
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
The MicroC OS II book is probably something to invest in. You should also create projects to learn the various interfaces: i2c, spi, mdio, etc. In particular, how to bit bang each one. From time to time the hardware will support the bus (you need to learn that on a vendor-by-vendor basis), but often, for various reasons, you won't be able to use the i2c/spi hardware and will have to bit bang. The avr/arduino is fine; you should learn ARM, thumb and thumb2, the msp430, and some older (non-mips) pic. Look at the bootloader code for the arduino, and figure out how to make a loader program, erase the flash on it, and take over the board/chip. Get an lpc-based arm micro and, same deal, look at the serial port programming protocol. Get a sam7s or something with an arm7 that has a traditional jtag, and get an olimex wiggler or jtag-tiny (I recommend the latter). Get comfortable with openocd. ARM's swd is more painful than normal jtag but will prevail in this market (for cortex-m based products). In short, learn the various ways that vendors provide for in-circuit programming. You will brick boards from time to time and will want to be comfortable with unbricking them. Along these lines, write some code to parse intel hex, srec, and elf files; you may someday need to write a loader and need to support one or more of these popular formats. You mentioned tools. You can't go wrong learning gcc and binutils; learn how to cross compile, at least for the supported platforms (usually involves --target=msp430 --prefix=/something, for example). Supported platforms for the mainline gcc and binutils are a moving target, so avrgcc and mspgcc and the like are basically done for you. You need to learn to write linker scripts, and how to write your C code so that, for example, fixed tables show up in the rom, not the ram. 
Also get a feel for disassembling the binaries: you need to ensure that the tables are in the right place, that code is where you think it is, and that the vector tables and boot/startup code are where the processor needs them to be to boot. It also doesn't hurt to find out what the compiler optimizations do and what C code looks like when compiled to assembler/machine code. If possible, don't limit yourself to gcc/gnu. llvm is a strong player; it has the potential to pass gcc as a better tool. You may have already used sdcc. Try the eval versions of Keil, IAR, etc. You will quickly find that there is a lot of grey area in the C/C++ standards and each compiler interprets it differently, and there are also dramatic differences in the quality of the code produced from the same high-level source. If you stick with this profession, there will be times when you are forced to use a not-so-great compiler and have to work around its warts and weaknesses. In the desktop business you can often get away with refusing to use non-standards-compliant tools. In the microcontroller world, sometimes you get what you get and that is it. Sometimes you get vendors that modify/enhance the C language to match their hardware features or supposedly make your life easier (rabbit semi and xmos come to mind). (xmos is a very attractive platform for many reasons. I consider it advanced, but from the sounds of your experience you are likely ready; the tools are a free download with a really good simulator, and it is important to learn to study .vcd/waveforms of your code executing.) chibios is another one to look at. Creating successful bootloaders is an important skill. The bootloader, or at least the beginning part of it, wants to be rock solid; you don't want to deliver a product that is easily bricked. A simple boot with a way to re-load the application portion of the flash without compromising the entry part of the bootloader is key. 
The stellaris eval boards are loaded with peripherals. Although they provide libraries, it is worth learning them, especially since how they tell you it works and how it actually works differ, and you have to examine their code and other resources to find out. Being an avr fan, if it is still out there, I recommend getting an avr butterfly. Learn a little serial programming, solder on a connector, and reprogram it. There are a few peripherals on there to learn to program. Maybe get a formerly Dallas Semi, now Maxim I think, one-wire device, like their temperature sensors. Even more painful than i2c and mdio with their bidirectional data bus is this one-wire thing: it is one wire (and ground). Power, master to dependent and dependent to master, all on one wire. When I was where you are now, I found decoding infrared remote control protocols fun. The ir receiver modules are easy to come by; radio shack actually had a good one. It is basically the opposite of bit banging: you want to measure the time between state changes, and using that timing, detect and/or decode the protocol. A universal receiver is not necessary; one protocol at a time is fine. Likewise, try then bitbanging commands to an ir led, in particular if you bit bang the carrier frequency. Talking to an sd card via spi is probably a good idea as well. Definitely learn how to erase and program i2c and/or spi flash parts; you will come across these often for serial numbers, mac addresses, and the like. I recommend learning the basic ethernet protocols as well. Be able to parse and create arp and udp packets (from scratch) (and icmp/ping as well). It is pretty easy to create a udp stack if you cheat a little and don't follow the actual arp rules: if someone sends you something, send the response back to the sending mac/ip. Or perhaps go so far as to watch the arp packets from other folks go by and keep track of the mac/ip addresses around you. tcp takes a lot more work; doable, but better to just read about it first than to try to implement it. 
Good luck, and most important, have fun.
[Realtime Mantra](http://www.eventhelix.com/realtimemantra/) contains several articles about embedded software development.
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
> So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes)

Porting a small operating system to a new device could help you to understand schedulers and RTOSs. [FreeRTOS](http://www.freertos.org/) is popular and well documented. [eCos](http://ecos.sourceware.org/) is another. Writing a bootloader is a good way to get to grips with a linker as you'll want to divide up memory and flash into regions. Another tip is to pick a completely new architecture or chip and build yourself a development board. Forcing yourself to start right from the beginning and look everything up in the datasheet is a good way to learn. Explore [Protothreads](http://www.sics.se/~adam/pt/). Try writing the same programs in both a threaded and state machine style. Once you're done with Protothreads, write a real thread scheduler.
To directly address your question of "What I Want to Know": I've found that calling the compiler directly via command line, becoming familiar with its options, and then writing your own Makefiles to do all of your builds has been extremely beneficial to me in learning the build process - which sounds like something that you want to learn. This basically separates the tool chain from the IDE and allows you to learn the tool chain more than the IDE. This is an on-going thing that I'm trying to improve on as well. I noticed that you've used arduino in the past, which is great because now I can recommend using avr-gcc as your compiler from now on. Give it a try, it's available on all platforms (Linux, WinAVR for windows, Mac) and the documentation on the avr-gcc tool chain and avrdude (programmer) is great, and there should be plenty of example Makefiles out there for you to learn from. A fair amount of this information is transferable to other hardware as well, for example arm-gcc.
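To sketch what that separation of tool chain from IDE looks like in practice, here is the general shape of such a Makefile. The MCU, file names, and programmer choice are placeholders for illustration, not anything from the original answer; recipe lines must be indented with tabs.

```makefile
# Minimal avr-gcc Makefile sketch -- MCU, files and programmer are examples.
MCU     = atmega328p
CC      = avr-gcc
CFLAGS  = -mmcu=$(MCU) -Os -Wall

main.hex: main.elf
	avr-objcopy -O ihex $< $@

main.elf: main.c
	$(CC) $(CFLAGS) -o $@ $<

flash: main.hex
	avrdude -p $(MCU) -c arduino -U flash:w:main.hex

clean:
	rm -f main.elf main.hex
```

Running `make` walks the dependency chain (`main.c` → `main.elf` → `main.hex`) by calling the compiler and objcopy directly, and `make flash` hands the hex file to avrdude, so every step of the build is visible instead of hidden behind an IDE button.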
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
> So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes)

Porting a small operating system to a new device could help you to understand schedulers and RTOSs. [FreeRTOS](http://www.freertos.org/) is popular and well documented. [eCos](http://ecos.sourceware.org/) is another. Writing a bootloader is a good way to get to grips with a linker as you'll want to divide up memory and flash into regions. Another tip is to pick a completely new architecture or chip and build yourself a development board. Forcing yourself to start right from the beginning and look everything up in the datasheet is a good way to learn. Explore [Protothreads](http://www.sics.se/~adam/pt/). Try writing the same programs in both a threaded and state machine style. Once you're done with Protothreads, write a real thread scheduler.
One thing you haven't mentioned is communications. It seems that one *hole* you could *plug* would be to learn the various standard communications protocols used in industry - things like: * [Profibus](http://en.wikipedia.org/wiki/Profibus) * [EIA-485](http://en.wikipedia.org/wiki/EIA-485) * [Modbus](http://en.wikipedia.org/wiki/Modbus) etc.
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
> So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes)

Porting a small operating system to a new device could help you to understand schedulers and RTOSs. [FreeRTOS](http://www.freertos.org/) is popular and well documented. [eCos](http://ecos.sourceware.org/) is another. Writing a bootloader is a good way to get to grips with a linker as you'll want to divide up memory and flash into regions. Another tip is to pick a completely new architecture or chip and build yourself a development board. Forcing yourself to start right from the beginning and look everything up in the datasheet is a good way to learn. Explore [Protothreads](http://www.sics.se/~adam/pt/). Try writing the same programs in both a threaded and state machine style. Once you're done with Protothreads, write a real thread scheduler.
The MicroC OS II book is probably something to invest in. You should also create projects to learn the various interfaces (I2C, SPI, MDIO, etc.), and in particular how to bit bang each one. From time to time the hardware will support the bus (you need to learn that on a vendor by vendor basis), but often, for various reasons, you won't be able to use the I2C/SPI hardware and will have to bit bang. The AVR/Arduino is fine; you should also learn ARM, Thumb and Thumb2, the msp430, and some older (non-MIPS) PIC. Look at the bootloader code for the Arduino, and figure out how to make a loader program, erase the flash on it, and take over the board/chip. Get an LPC based ARM micro and do the same: look at the serial port programming protocol. Get a SAM7S or something with an ARM7 that has a traditional JTAG, and get an Olimex Wiggler or a JTAG-Tiny (I recommend the latter). Get comfortable with OpenOCD. ARM's SWD is more painful than normal JTAG but in this market will prevail (for Cortex-M based products). In short, learn the various ways that vendors provide for in-circuit programming. You will brick boards from time to time and will want to be comfortable with unbricking them. Along these lines, write some code to parse Intel hex, srec, and ELF files; you may someday need to write a loader and need to support one or more of these popular formats. You mentioned tools. You can't go wrong learning gcc and binutils; learn how to cross compile, at least for the supported platforms (usually involves --target=msp430 --prefix=/something, for example). Supported platforms for the mainline gcc and binutils are a moving target, so avr-gcc and mspgcc and the like are basically done for you. You need to learn to write linker scripts, and how to write your C code so that, for example, fixed tables show up in the ROM, not the RAM. 
Also get a feel for disassembling the binaries: you need to ensure that the tables are in the right place, ensure that code is where you think it is, and that the vector tables and boot/startup code are where the processor needs them to be to boot. It also doesn't hurt to find out what the compiler optimizations do and what C code looks like when compiled to assembler/machine code. If possible, don't limit yourself to gcc/GNU. LLVM is a strong player; it has the potential to pass gcc by as a better tool. You may have already used SDCC. Try the eval versions of Keil, IAR, etc. You will quickly find that there is a lot of grey area in the C/C++ standards and each compiler interprets those differently; there are also dramatic differences in the quality of the code produced from the same high level source. If you stick with this profession there will be times when you are forced to use a not so great compiler and have to work around its warts and weaknesses. In the desktop business you can often get away with refusing to use non-standards-compliant tools. In the microcontroller world, sometimes you get what you get and that is it. Sometimes you get vendors that modify/enhance the C language to match their hardware features or supposedly make your life easier (Rabbit Semi and XMOS come to mind). (XMOS is a very attractive platform for many reasons; I consider it advanced, but from the sounds of your experience you are likely ready. The tools are a free download with a really good simulator, and it is important to learn to study the .vcd/waveforms of your code executing.) ChibiOS is another one to look at. Creating successful bootloaders is an important skill. The bootloader, or at least the beginning part of it, wants to be rock solid; you don't want to deliver a product that is easily bricked. A simple boot with a way to re-load the application portion of the flash without compromising the entry part of the bootloader is key. 
The Stellaris eval boards are loaded with peripherals. Although they provide libraries, it is worth learning the hardware yourself, especially since how they tell you it works and how it actually works differ, and you have to examine their code and other resources to find out. Being an AVR fan, I recommend getting an AVR Butterfly, if they are still out there. Learn a little serial programming, solder on a connector, and reprogram it. There are a few peripherals on there to learn to program. Maybe get a one-wire device from formerly Dallas Semi, now Maxim I think, like their temperature sensors. Even more painful than I2C and MDIO with their bidirectional data bus, this one-wire thing is one wire (and ground): power, master to dependent and dependent to master, all on one wire. When I was where you are now, I found decoding infrared remote control protocols fun. The IR receiver modules are easy to come by; Radio Shack actually had a good one. It is basically the opposite of bit banging: you want to measure the time between state changes and use that timing to detect and/or decode the protocol. A universal receiver is not necessary; one protocol at a time is fine. Likewise, it is worth being able to then bit bang commands to an IR LED, in particular if you bit bang the carrier frequency. Talking to an SD card via SPI is probably a good idea as well. Definitely learn how to erase and program I2C and/or SPI flash parts; you will come across these often for serial numbers and MAC addresses and the like. I recommend learning the basic Ethernet protocols as well. Be able to parse and create ARP and UDP packets (from scratch) (and ICMP/ping as well). It is pretty easy to create a UDP stack if you cheat a little and don't follow the actual ARP rules: if someone sends you something, send the response back to the sending MAC/IP. Or perhaps go so far as to watch the ARP packets from other folks go by and keep track of the MAC/IP addresses around you. TCP takes a lot more work; it is doable, but better to just read about it first than to try to implement it. 
Good luck, and most important, have fun.
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
To directly address your question of "What I Want to Know": I've found that calling the compiler directly via command line, becoming familiar with its options, and then writing your own Makefiles to do all of your builds has been extremely beneficial to me in learning the build process - which sounds like something that you want to learn. This basically separates the tool chain from the IDE and allows you to learn the tool chain more than the IDE. This is an on-going thing that I'm trying to improve on as well. I noticed that you've used arduino in the past, which is great because now I can recommend using avr-gcc as your compiler from now on. Give it a try, it's available on all platforms (Linux, WinAVR for windows, Mac) and the documentation on the avr-gcc tool chain and avrdude (programmer) is great, and there should be plenty of example Makefiles out there for you to learn from. A fair amount of this information is transferable to other hardware as well, for example arm-gcc.
[Realtime Mantra](http://www.eventhelix.com/realtimemantra/) contains several articles about embedded software development.
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
> So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes)

Porting a small operating system to a new device could help you to understand schedulers and RTOSs. [FreeRTOS](http://www.freertos.org/) is popular and well documented. [eCos](http://ecos.sourceware.org/) is another. Writing a bootloader is a good way to get to grips with a linker as you'll want to divide up memory and flash into regions. Another tip is to pick a completely new architecture or chip and build yourself a development board. Forcing yourself to start right from the beginning and look everything up in the datasheet is a good way to learn. Explore [Protothreads](http://www.sics.se/~adam/pt/). Try writing the same programs in both a threaded and state machine style. Once you're done with Protothreads, write a real thread scheduler.
How good of an understanding do you have of registers, operation and such on an 8-bit micro? It might be a good idea to do a little assembly. This has the benefit of teaching you exactly what is going on, which can help solve weird bugs with higher-level languages. AVRs have nice, simple assembler and registers, so it is a good platform to get your feet wet on, and there are some good tutorials for that platform out there. This will give you the bottom line of what the micro is doing. Then the next step of how the compiler and linker take C/Arduino to machine code will be easier to grasp.
16,222
I'm going to start by telling you what I know. Then I'm going to tell you that I want to get to this magical land of knowing everything about embedded systems development. Then I'm going to ask you what my next steps should be to get there. [This answer](https://electronics.stackexchange.com/questions/3343/how-to-become-an-embedded-software-developer/3361#3361) is rather informative, but I'm trying to get a little more detailed: **What I Know** Let's see, I'm fair with C and C++. Obviously, I want to get better with those languages but I think at this point the best way for me to improve is to just keep using them and continually try to improve my code as I write it. I don't think it would be very beneficial to dedicate any learning exercises to just learning C anymore. I'm fairly comfortable with designing simple circuits. I see a chip with an open collector output and I know I need a pull up etc. I'm fairly confident that given an IC and its datasheet, I can either figure out how to interface with it or at least ask the right questions to figure out how to interface it. I'm very good at math and logical thinking. There are few algorithms/design concepts that throw me for a loop. This is definitely my strongest area. **What I've Done** **Until now, all of my experience has been with 8-bit microcontrollers. My college course utilized a Motorola HC08, an Intel 8051, and a Zilog EZ8. My professor had built a little dev board for all three of them with a 7-seg display and some other stuff. We programmed them in assembly so I'm somewhat familiar with assembly and I took a basic computer architecture course so I have a decent idea about the hardware.** However, we worked on a windows environment and all 3 MCUs had their own IDE and tool chain already setup so I never really learned how my code went from assembly to running on the MCU. My next step was learning on my own. 
I got very familiar with the Arduino environment by interfacing with sensors/memories/displays that both had pre-written libraries and some that did not. Next I built my own Arduino board with an ICSP on it and connections to another MCU so that I could flash the bootloader onto bare ATmega328s. Again, however, the IDE and tool-chain was already setup and I never got an understanding of how I went from C/Arduino to code actually running on the MCU. **What I Want to Know** So from the previously linked answer, I think the most interesting/beneficial things for me would be the bullet about learning the tools (compiler and linker), and learning different styles of software architecture (going from interrupt based control loops to schedulers and RTOSes). That is my opinion on what would be most useful... but since I don't know it yet it's hard to say whether that's correct or not so feel free to suggest other bullet points from that answer if you think they would be better and please provide an explanation as to why you think that. --- So taking my current knowledge as a starting point and the above description of what I want to know as a goal, I'm looking for *very specific* suggestions on where to go next. I'm looking for exact project suggestions, websites/blog entries to read, chapters in books, etc. Also, are there any other holes in my knowledge base that you think I should fill before moving on to the above mentioned topics?
2011/06/30
[ "https://electronics.stackexchange.com/questions/16222", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4159/" ]
Here's another idea. Implement your own background tasking system that allows you to create both timed tasks and demand tasks that run only when timed tasks are not running. It's not a true RTOS, but acts more like a cooperative scheduler. Convert a previous project to use the new tasking system. This kind of system worked really well on a product we used to build on an 8051. It was originally written in assembly, but later on we converted it to C to help with porting it to other architectures. It was really slick: the heartbeat of the system was a 5 ms tick and the timed tasks ran in 5 ms increments. We had a file that we used to name all our tasks (function pointers) with their time values and those that were on demand. That file was then converted to either assembly or C depending on how we implemented it and compiled into the code. If you get it working pretty well, then you can tackle writing your own simple RTOS for something a little heftier.
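The task-table idea described above can be sketched in Python, purely to illustrate the structure (the original was in assembly/C, and all names here are invented):

```python
# Hypothetical sketch of a cooperative, tick-driven tasking system:
# timed tasks fire when their period (counted in 5 ms ticks) elapses;
# demand tasks run only on ticks where no timed task ran.

class CoopScheduler:
    def __init__(self, timed, demand):
        # timed: list of (period_in_ticks, task_fn); demand: list of task_fn
        self.timed = [[period, fn, 0] for period, fn in timed]
        self.demand = demand

    def tick(self, now):
        """Call this from the 5 ms heartbeat; 'now' is the tick count."""
        ran_timed = False
        for entry in self.timed:
            period, fn, last_run = entry
            if now - last_run >= period:
                fn()
                entry[2] = now          # record when this task last ran
                ran_timed = True
        if not ran_timed:               # idle tick: run the demand tasks
            for fn in self.demand:
                fn()

log = []
sched = CoopScheduler(timed=[(2, lambda: log.append("blink"))],
                      demand=[lambda: log.append("poll")])
for t in range(1, 5):                   # four simulated heartbeats
    sched.tick(t)
# ticks 2 and 4 fire the timed task; ticks 1 and 3 fall through to demand
```

On a real 8051 the table would instead be a const array of function pointers with period counters, walked from a flag set by the timer interrupt.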
One thing you haven't mentioned is communications. It seems that one *hole* you could *plug* would be to learn the various standard communications protocols used in industry - things like: * [Profibus](http://en.wikipedia.org/wiki/Profibus) * [EIA-485](http://en.wikipedia.org/wiki/EIA-485) * [Modbus](http://en.wikipedia.org/wiki/Modbus) etc.
34,788
I would like to be able to install 48GB of RAM in my server, but it seems Windows Server 2008 Standard limits me to 32GB. How do I install more RAM in my server?
2009/07/02
[ "https://serverfault.com/questions/34788", "https://serverfault.com", "https://serverfault.com/users/10628/" ]
You will have to upgrade to an Enterprise or Datacenter version of Windows. Additionally, if you want to use more than 2 GB of memory for any one process, you'll have to upgrade to the 64-bit version. [Available memory table from Microsoft](http://msdn.microsoft.com/en-us/library/aa366778(VS.85).aspx)
To go beyond 32 GB you'll need to upgrade to Windows 2008 Enterprise Edition.
642,154
Electronics newb, writing code on a Mac for ESP32 using Platformio. The ESP32 is connected to my Mac Studio via USB and a USB hub. All works well as long as I use one specific Micro USB cable which I have found via trial and error. The device is properly detected and upload works fine. However, now I’d like to move the prototype further away from my workstation. For this I require a longer USB cable. After some searching I realized that not all Micro USB cables support a data connection. Considering this, after just buying [my second, supposedly fully connected Micro USB cable](https://www.amazon.de/gp/product/B0722PX1PM/) — still no luck. Could the length be an issue? Do you have some advice on which cable to buy for this use case? Any advice (and maybe an explanation) on this issue would be greatly appreciated.
2022/11/12
[ "https://electronics.stackexchange.com/questions/642154", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/325987/" ]
Here's the appropriate schematic. [![enter image description here](https://i.stack.imgur.com/9ITML.png)](https://i.stack.imgur.com/9ITML.png)
Your circuit will work fine, and as it stands the two DC supplies will not interfere with each other. Of course, you must ensure that any power you derive from your additional supply does not overload the transformer. Any current you draw from your second supply is increasing the current through the transformer, over and above the amount that the first supply drew on its own. The biggest issue you face may have to do with how the systems you connect to those two sources interact. You now have two ground points, the original labelled '0V', and your new one with a ground symbol, bottom right. These are not the same, and by measuring the AC voltage between them, you will find a significant potential difference. Consequently, any signal derived from a circuit connected to your new second supply will be relative its own particular ground, and in relation to the other supply's ground, it will look like a mess of noise and AC. You cannot solve this problem by connecting the two grounds to each other, because that will cause diodes to be short-circuited, and smoke to happen. In other words, this arrangement will only be of any use to you if there will be absolutely no communication (or any kind of connection) between the circuitry on each DC supply. If all you want is to power a *completely independent* system from the second DC source, you are good to go. Otherwise you can expect serious complications.
642,154
Electronics newb, writing code on a Mac for ESP32 using Platformio. The ESP32 is connected to my Mac Studio via USB and a USB hub. All works well as long as I use one specific Micro USB cable which I have found via trial and error. The device is properly detected and upload works fine. However, now I’d like to move the prototype further away from my workstation. For this I require a longer USB cable. After some searching I realized that not all Micro USB cables support a data connection. Considering this, after just buying [my second, supposedly fully connected Micro USB cable](https://www.amazon.de/gp/product/B0722PX1PM/) — still no luck. Could the length be an issue? Do you have some advice on which cable to buy for this use case? Any advice (and maybe an explanation) on this issue would be greatly appreciated.
2022/11/12
[ "https://electronics.stackexchange.com/questions/642154", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/325987/" ]
Here's the appropriate schematic. [![enter image description here](https://i.stack.imgur.com/9ITML.png)](https://i.stack.imgur.com/9ITML.png)
You can double or triple the voltage by connecting another bridge rectifier. However, while voltage multipliers can increase the voltage, they only supply a lower current to the load. In this way, an additional 24 V power supply or a 36 V power supply can be easily made. **Voltage doubler circuit:** [![enter image description here](https://i.stack.imgur.com/C1w7L.jpg)](https://i.stack.imgur.com/C1w7L.jpg) [![enter image description here](https://i.stack.imgur.com/fcYW6.jpg)](https://i.stack.imgur.com/fcYW6.jpg) **Voltage tripler circuit:** [![enter image description here](https://i.stack.imgur.com/ReOsu.jpg)](https://i.stack.imgur.com/ReOsu.jpg) [![enter image description here](https://i.stack.imgur.com/9mIvh.jpg)](https://i.stack.imgur.com/9mIvh.jpg)
15,046,133
I really appreciate any answer to my question because I have been searching for this for about two weeks. My goal is to create directories using PHP and display them as virtual subdomains (the whole procedure should be done automatically). For example : **example.com/test/index.php** should be considered as : **test.example.com/index.php**
2013/02/23
[ "https://Stackoverflow.com/questions/15046133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2103289/" ]
The only way I was able to do such thing was using a wildcard subdomain. If your server supports that, it's just a matter of using a front controller to manage the requests.
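As a sketch of that wildcard approach (hypothetical Apache config; server names and paths are placeholders to adapt), the vhost accepts every subdomain and leaves routing to the front controller, which can then inspect `$_SERVER['HTTP_HOST']`:

```apacheconf
# All subdomains of example.com share one docroot; index.php acts as the
# front controller and maps the requested host to the matching directory.
<VirtualHost *:80>
    ServerName example.com
    ServerAlias *.example.com
    DocumentRoot /var/www/example
</VirtualHost>
```

A wildcard DNS record for `*.example.com` must also point at the server for new subdomains to resolve without further configuration.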
You cannot do this solely with PHP. You need dynamic shell scripting to create the DNS zone files for each subdomain. **EDIT:** Probably @Robyflc is right; you can base conditions on the host name in PHP. It is not clear from the question whether you want the subdomain or just some logic, like creating a URL user1.domain.com and then finding the folder depending on its value.
15,046,133
I really appreciate any answer to my question because I have been searching for this for about two weeks. My goal is to create directories using PHP and display them as virtual subdomains (the whole procedure should be done automatically). For example : **example.com/test/index.php** should be considered as : **test.example.com/index.php**
2013/02/23
[ "https://Stackoverflow.com/questions/15046133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2103289/" ]
You cannot do this solely with PHP. You need dynamic shell scripting to create the DNS zone files for each subdomain. **EDIT:** Probably @Robyflc is right; you can base conditions on the host name in PHP. It is not clear from the question whether you want the subdomain or just some logic, like creating a URL user1.domain.com and then finding the folder depending on its value.
To do that, I recommend you buy a VPS server; it's possible with URL rewriting.
15,046,133
I really appreciate any answer to my question because I have been searching for this for about two weeks. My goal is to create directories using PHP and display them as virtual subdomains (the whole procedure should be done automatically). For example : **example.com/test/index.php** should be considered as : **test.example.com/index.php**
2013/02/23
[ "https://Stackoverflow.com/questions/15046133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2103289/" ]
The only way I was able to do such thing was using a wildcard subdomain. If your server supports that, it's just a matter of using a front controller to manage the requests.
To do that, I recommend you buy a VPS server; it's possible with URL rewriting.
6,756
I have used Gurobi and CPLEX for solving large-scale LP problems with Pyomo. However, I do need to use an open-source solver. Any advice? glpk and cbc seem to be very slow in solving the problem (with 2e6 variables)
2021/08/15
[ "https://or.stackexchange.com/questions/6756", "https://or.stackexchange.com", "https://or.stackexchange.com/users/4775/" ]
There is a new open source solver that looks quite promising, HiGHS: <https://www.maths.ed.ac.uk/hall/HiGHS/> But as pointed out by others, for mixed-integer programming problems, at the moment, open-source solvers can't compete on performance and reliability with commercial solvers.
If by LP you mean linear programming (not mixed-integer linear programming), there are some open-source solvers like SoPlex and Clp which can be linked with Pyomo via the NEOS server, but I really do not know whether there is any way to connect those locally. If you meant mixed-integer linear programming, one of the best options is SCIP, but as far as I know it's not entirely free.
6,756
I have used Gurobi and CPLEX for solving large-scale LP problems with Pyomo. However, I do need to use an open-source solver. Any advice? glpk and cbc seem to be very slow in solving the problem (with 2e6 variables)
2021/08/15
[ "https://or.stackexchange.com/questions/6756", "https://or.stackexchange.com", "https://or.stackexchange.com/users/4775/" ]
There is a new open source solver that looks quite promising, HiGHS: <https://www.maths.ed.ac.uk/hall/HiGHS/> But as pointed out by others, for mixed-integer programming problems, at the moment, open-source solvers can't compete on performance and reliability with commercial solvers.
For large LPs you need an interior point solver. On top of what others have mentioned, you can use CLP's interior point method, or, interestingly, just plain old IPOPT can work perfectly fine since it will also apply an interior point algorithm.
140,255
I am trying to write a web service to spec and it requires a different response body depending on whether the method completes successfully or not. I have tried creating two different DataContract classes, but how can I return them and have them serialized correctly?
2008/09/26
[ "https://Stackoverflow.com/questions/140255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21784/" ]
The best way to indicate that your WCF web service has failed would be to throw a FaultException. There are settings in your service web.config files that allow the entire fault message to be passed to the client as part of the error. Another approach may be to inherit both of your results from the same base class or interface. The service would return an instance of the base type. You can then use the KnownType attribute to inform the client that multiple types may be returned. Come to think of it, it might be possible to use Object as the base type, but I haven't tried it. Failing either of those approaches, you can create a custom result object that contains both a result and error properties and your client can then decide which course of action to take. I had to use this approach for Silverlight 2 because Beta 2 does not yet fully support fault contracts. It's not pretty, I wouldn't normally recommend it, but if it's the only way that works or you feel it is the best approach for your situation... If you are having troubles with ADO.NET Data Services, I have less experience there. [Here's some information](http://bloggingabout.net/blogs/jpsmit/archive/2007/03/21/wcf-fault-contracts.aspx) on implementing FaultContracts
If you are using an XML-based binding, then I believe there is no way to do that. A simple solution in that case would be to just have part of the message flag whether there was a failure, and store the failure information somewhere if needed. For a JSON binding you may be able to use a method that returns an object, then return two different types of objects. If I remember correctly (which is rare), that is possible because the JavaScriptSerializer class uses reflection if the object is clean of serialization attributes.
140,255
I am trying to write a web service to spec and it requires a different response body depending on whether the method completes successfully or not. I have tried creating two different DataContract classes, but how can I return them and have them serialized correctly?
2008/09/26
[ "https://Stackoverflow.com/questions/140255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21784/" ]
The answer is **yes** but it is tricky and you lose strong typing on your interface. If you return a **Stream** then the data could be xml, text, or even a binary image. For DataContract classes, you'd then serialize the data using the **DataContractSerializer**. See the [BlogSvc](http://codeplex.com/blogsvc) and more specifically the [**RestAtomPubService.cs** WCF service](http://blogsvc.codeplex.com/SourceControl/changeset/22604#Main/Source/Services/AtomPub/RestAtomPubService.cs) for more details. Note, that source code will also show you how to accept different types of data into a WCF rest method which requires a content type mapper.
If you are using an XML-based binding, then I believe there is no way to do that. A simple solution in that case would be to just have part of the message flag whether there was a failure, and store the failure information somewhere if needed. For a JSON binding you may be able to use a method that returns an object, then return two different types of objects. If I remember correctly (which is rare), that is possible because the JavaScriptSerializer class uses reflection if the object is clean of serialization attributes.
154,347
Is there any difference between considered to be and considered as? For example: * Adam is considered as a good teacher. * Adam is considered to be a good teacher.
2014/02/27
[ "https://english.stackexchange.com/questions/154347", "https://english.stackexchange.com", "https://english.stackexchange.com/users/65308/" ]
"is considered to be" is [significantly more common](https://books.google.com/ngrams/graph?content=is+considered+as%2Cis+considered+to+be&case_insensitive=on&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cis%20considered%20as%3B%2Cc0%3B.t1%3B%2Cis%20considered%20to%20be%3B%2Cc0) and if you look at other uses of "is considered as" you notice a key difference between the two sentences: > > ... who **is considered as** a debtor... > > > ... the thing whose representation **is considered as** a part of the sphere... > > > These uses are either telling the reader that you should (a) consider two things as equals or (b) use a particular context in order to consider something. "is considered to be" is telling the reader how *others* consider a thing. In your example, this is much more likely to be the correct choice. > > Adam is considered to be a good teacher. — Adam is thought to be a good teacher. > > > Adam is considered as a good teacher. — We have treated Adam as if he is a good teacher. > > > The difference is subtle and not easy to explain.
In addition to Mr. Hen's correct statement: *Considered as* can have another meaning: to think about in terms of. "Adam is considered as a good teacher" can mean people decided to sit around and think about him as a good teacher. (This is subtly different from Mr. Hen's *treated as if he is a good teacher*.) Context, of course, makes this unlikely. In AmE, the more common constructions would be *considered to be*, or even *considered* (a to be deletion). > > Adam is considered a good teacher. > > > Also worth noting: Considered or considered to be, may be [left-handed compliments](http://dictionary.reference.com/browse/left-handed+compliment). It may carry the implication that, given the lousy performance of all of the other teachers, Adam is considered a good one (despite his otherwise glaring incompetence).
154,347
Is there any difference between considered to be and considered as? For example: * Adam is considered as a good teacher. * Adam is considered to be a good teacher.
2014/02/27
[ "https://english.stackexchange.com/questions/154347", "https://english.stackexchange.com", "https://english.stackexchange.com/users/65308/" ]
"is considered to be" is [significantly more common](https://books.google.com/ngrams/graph?content=is+considered+as%2Cis+considered+to+be&case_insensitive=on&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cis%20considered%20as%3B%2Cc0%3B.t1%3B%2Cis%20considered%20to%20be%3B%2Cc0) and if you look at other uses of "is considered as" you notice a key difference between the two sentences: > > ... who **is considered as** a debtor... > > > ... the thing whose representation **is considered as** a part of the sphere... > > > These uses are either telling the reader that you should (a) consider two things as equals or (b) use a particular context in order to consider something. "is considered to be" is telling the reader how *others* consider a thing. In your example, this is much more likely to be the correct choice. > > Adam is considered to be a good teacher. — Adam is thought to be a good teacher. > > > Adam is considered as a good teacher. — We have treated Adam as if he is a good teacher. > > > The difference is subtle and not easy to explain.
There is no such thing as "considered as" > > Adam is considered a good teacher. > > > <http://www.thefreedictionary.com/as> > > As is sometimes used superfluously to introduce the complements of verbs like consider, deem, and account, as in They considered it as one of the landmark decisions of the civil rights movement. The measure was deemed as unnecessary. This usage may have arisen by analogy to regard and esteem, with which as is standardly used in this way: We regarded her as the best writer among us. But the use of as with verbs like consider is not sufficiently well established to be acceptable in writing. > > >
511,938
I've got a silly situation. I have an Arduino board (2009). There's just one built-in LED and I can't do much with it beyond blinking. I have five LEDs (3 yellow, 2 green), breadboards, and jumper wires, but no resistors. (And there is a total lockdown here.) I wish to play with these LEDs but can't risk connecting them directly. I tried using them in series but that did not light them up. I know each LED needs a 330 ohm resistor but I don't have any. I even have a 2x 7-segment display too, with the same problem. Any way to use them? Thanks. Update: On suggestions, we have got two resistors from a faulty solder gun (blue-grey-orange-gold = 68k) and (brown-black-brown-gold = 100) and one 1N4000 diode.
2020/07/23
[ "https://electronics.stackexchange.com/questions/511938", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5728/" ]
I'm sorry for you, but no, there's no way of connecting LEDs to an Arduino without resistors without risking damage to your Arduino's ports or the whole chip itself. Even if you try to connect them in series of 2 (2.5 V per LED) or 3 LEDs (1.6 V per LED), it is not advisable. Don't you have a broken electronic device that you could scavenge for resistors? Even a burnt CFL or LED lamp can have some resistors that you could use. 330 ohm is just a **minimum** value, but since you want only an initial learning experience with Arduino, LED resistor values from 330 ohm to even 10k ohm will let the Arduino light an LED safely. CFL lamps have diodes; you could connect a series of 4 diodes to make a 2.4–2.6 V voltage dropper, and this series of diodes could permit using a lower-value resistor (CFL lamps usually have a low-value resistor, under 100 ohm). What are the LEDs' colours? Their voltage depends on colour.
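The suggested resistor range is easy to sanity-check with Ohm's law (a rough sketch; the 2.0 V forward drop is an assumed typical value for yellow/green LEDs, and real parts vary):

```python
def led_current_ma(vcc, vf, r_ohms):
    # Ohm's law across the series resistor: I = (Vcc - Vf) / R
    return (vcc - vf) / r_ohms * 1000  # result in mA

# Assuming a 5 V Arduino pin and a ~2.0 V LED forward drop:
bright = led_current_ma(5.0, 2.0, 330)     # ~9.1 mA, comfortably bright
dim = led_current_ma(5.0, 2.0, 10_000)     # ~0.3 mA, dim but usually visible
```

Either value stays well under the ATmega's per-pin current limit; the choice only trades brightness against parts on hand.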
Anything between about 250 ohms and 10K will work fine. Surely you can sacrifice something and pull resistors out of it and maybe extend the leads a bit.
511,938
I've got a silly situation. I have an Arduino board (2009). There's just one built-in LED and I can't do much with it beyond blinking. I have five LEDs (3 yellow, 2 green), breadboards, and jumper wires, but no resistors. (And there is a total lockdown here.) I wish to play with these LEDs but can't risk connecting them directly. I tried using them in series but that did not light them up. I know each LED needs a 330 ohm resistor but I don't have any. I even have a 2x 7-segment display too, with the same problem. Any way to use them? Thanks. Update: On suggestions, we have got two resistors from a faulty solder gun (blue-grey-orange-gold = 68k) and (brown-black-brown-gold = 100) and one 1N4000 diode.
2020/07/23
[ "https://electronics.stackexchange.com/questions/511938", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5728/" ]
I'm sorry for you, but no, there's no way of connecting LEDs to an Arduino without resistors without risking damage to your Arduino's ports or the whole chip itself. Even if you try to connect them in series of 2 (2.5 V per LED) or 3 LEDs (1.6 V per LED), it is not advisable. Don't you have a broken electronic device that you could scavenge for resistors? Even a burnt CFL or LED lamp can have some resistors that you could use. 330 ohm is just a **minimum** value, but since you want only an initial learning experience with Arduino, LED resistor values from 330 ohm to even 10k ohm will let the Arduino light an LED safely. CFL lamps have diodes; you could connect a series of 4 diodes to make a 2.4–2.6 V voltage dropper, and this series of diodes could permit using a lower-value resistor (CFL lamps usually have a low-value resistor, under 100 ohm). What are the LEDs' colours? Their voltage depends on colour.
If you have higher valued resistors, try those. I've had up to 20K series resistance with a red LED (high luminosity, admittedly) and it was still clearly visible in office/lab lighting. You can go for more intensity later, but this may be sufficiently visible for bench debug. How high you can go on resistance is probably different for each device, and I've tried this with exactly one sample, so YMMV.
511,938
I've got a silly situation. I have an Arduino board (2009). There's just one built-in LED and I can't do much with it beyond blinking. I have five LEDs (3 yellow, 2 green), breadboards, and jumper wires, but no resistors. (And there is a total lockdown here.) I wish to play with these LEDs but can't risk connecting them directly. I tried using them in series but that did not light them up. I know each LED needs a 330 ohm resistor but I don't have any. I even have a 2x 7-segment display too, with the same problem. Any way to use them? Thanks. Update: On suggestions, we have got two resistors from a faulty solder gun (blue-grey-orange-gold = 68k) and (brown-black-brown-gold = 100) and one 1N4000 diode.
2020/07/23
[ "https://electronics.stackexchange.com/questions/511938", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5728/" ]
I'm sorry for you, but no, there's no way of connecting LEDs to an Arduino without resistors without risking damage to your Arduino's ports or the whole chip itself. Even if you try to connect them in series of 2 (2.5 V per LED) or 3 LEDs (1.6 V per LED), it is not advisable. Don't you have a broken electronic device that you could scavenge for resistors? Even a burnt CFL or LED lamp can have some resistors that you could use. 330 ohm is just a **minimum** value, but since you want only an initial learning experience with Arduino, LED resistor values from 330 ohm to even 10k ohm will let the Arduino light an LED safely. CFL lamps have diodes; you could connect a series of 4 diodes to make a 2.4–2.6 V voltage dropper, and this series of diodes could permit using a lower-value resistor (CFL lamps usually have a low-value resistor, under 100 ohm). What are the LEDs' colours? Their voltage depends on colour.
With a 5 V VCC, if you don't want any resistors, you can connect 2 LEDs with a forward voltage drop of 2.7 V each in series. One of the LEDs won't be completely forward biased and will limit the current.
511,938
I've got a silly situation. I have an Arduino board (2009). There's just one built-in LED and I can't do much with it beyond blinking. I have five LEDs (3 yellow, 2 green), breadboards, and jumper wires, but no resistors. (And there is a total lockdown here.) I wish to play with these LEDs but can't risk connecting them directly. I tried using them in series but that did not light them up. I know each LED needs a 330 ohm resistor but I don't have any. I even have a 2x 7-segment display too, with the same problem. Any way to use them? Thanks. Update: On suggestions, we have got two resistors from a faulty solder gun (blue-grey-orange-gold = 68k) and (brown-black-brown-gold = 100) and one 1N4000 diode.
2020/07/23
[ "https://electronics.stackexchange.com/questions/511938", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5728/" ]
If you have higher valued resistors, try those. I've had up to 20K series resistance with a red LED (high luminosity, admittedly) and it was still clearly visible in office/lab lighting. You can go for more intensity later, but this may be sufficiently visible for bench debug. How high you can go on resistance is probably different for each device, and I've tried this with exactly one sample, so YMMV.
With a 5 V VCC, if you don't want any resistors, you can connect 2 LEDs with a forward voltage drop of 2.7 V each in series. One of the LEDs won't be completely forward biased and will limit the current.
38,843
Over the last few weeks, we have been experimenting with a Protractor + TypeScript + Cucumber framework for our new Angular 6 application. The main reason we thought our present Selenium + Cucumber + Java framework might not rise to the occasion was that Angular page elements and the Angular architecture in general might render it useless. Also, we'd like to see if there are better options that make e2e test case writing quicker and easier and make maintenance easy, as we have a large suite of test cases. I want to list out my observations regarding Protractor (the most popular tool for testing Angular apps) and wanted to ask if my understanding is correct, as I do not find a strong reason to transition to it. **1) Page elements:** I know that there are different ways of grabbing Angular elements in Protractor. See docs here: <https://www.protractortest.org/#/locators> but the only ones that give Protractor the edge are "by.model", "by.repeater", "by.binding", etc. However, this was the case with AngularJS; with Angular 2+ apps Protractor has lost support for these locators. Also, we have seen custom components in the DOM. For example, this demo site: <https://miherlosev.github.io/e2e_angular/> is made in Angular 4 and does not have any elements that can be identified with "by.model", "by.repeater", or "by.binding", rendering these utilities useless. So how does one grab Angular-specific page elements? If everybody is using CSS locators, then what advantage does Protractor give over my existing framework? **2) Waits:** I have seen automation testers still using sleeps and implicit/explicit waits in both Protractor and Selenium. I think one of the reasons is that many apps are hybrid in nature, i.e. Angular plus HTML/CSS UI, and therefore at times the automatic wait between page transitions with Protractor is not of huge advantage here. Besides, there are Java libraries that give Selenium the same capability. 
Like this lib: <https://github.com/paul-hammant/ngWebDriver>, which provides the waitForAngular() API that Protractor does. If nothing else, the async nature of JavaScript makes it hard to wrap helper functions, page objects and other utility code in promises; all of this contributes to the flaky nature of tests. **Setup/Scaffolding:** I agree a Protractor project with, say, TypeScript/JavaScript scores over Selenium + Java here, as all you need to do is configure config.ts to start off, whereas Selenium + Java requires a lot of setup. But in my case, I have a mature framework set up already in Java. I am not able to find a good reason as to what advantages Protractor has over my present framework. Would a Selenium + Java framework be unusable on an Angular 6 application? Please let me know if I am not understanding this right or missing an important aspect of Protractor.
2019/04/21
[ "https://sqa.stackexchange.com/questions/38843", "https://sqa.stackexchange.com", "https://sqa.stackexchange.com/users/38184/" ]
I totally had the same experience as you. For a new Angular project I looked into Protractor as the recommended way, but then just risked using Java + Selenium because, like you, I found that the Protractor features were either not usable or had been ported to Java. So far I have had no problems with element locators and I don't have to wait for anything (at least not more than in non-Angular projects). The only waits I usually need are for modals. In short: I see no reason not to use Java + Selenium for new Angular projects. There are other reasons to go for Protractor; for example, when the developers also write tests it may be better to have just a single language, and it integrates better into Node builds. But if you are feeling more comfortable with Java and it fits into your project, use it.
Selenium + Java will not work for Angular 2 and above applications. Apart from Thread.sleep and an explicit/implicit wait combination, there is no way to make the script wait until the page loads. Thread.sleep is not the right practice, and explicit/implicit waits are not trustworthy. ngWebDriver only works for AngularJS applications and not for the Angular framework (Angular 2 and above). Better to start the project with Protractor.
47,220,386
I'm having an issue with the mentioned error in several .net core applications. I'm using vs code version 1.18.0 but the error started to appear already in the previous version. The error appears in every .cs file for every datatype like string, int, void etc. and also for class imports. All the projects still compile and run properly. Also on another workstation I'm **not** having the issue in the same projects, so it seems to be a local omnisharp/ vs code or windows? problem. Has anyone had something like this and managed to fix it or any suggestions on what i could try? I've reinstalled vs code and omnisharp already, but I'm still having the problem. **example Error:** *Predefined type 'System.Object' is not defined or imported [GG]*
2017/11/10
[ "https://Stackoverflow.com/questions/47220386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8918840/" ]
I found a fix (or workaround) for my problem: ***short version***: I changed the OmniSharp MSBuild instance by **uninstalling Visual Studio 2017 Pro**. ***long version***: A few months ago I installed VS 2017 Pro to check out the features, used it for 2 weeks in trial mode and forgot about it for several months. Around one week ago I opened it (by accident :D) and got a notification that my trial period had expired; VS also locked itself. It didn't bother me, because I wasn't using it. In @VahidN's link I found out that OmniSharp uses "the most native" MSBuild instance that is installed, which in my case was the one from the locked VS 2017 Pro. Putting 1+1 together, I uninstalled VS and I'm good. I'll reinstall VS 2017 (Community) and post a comment if it still works fine. **EDIT:** I reinstalled VS 2017 and everything works fine, so the **actual solution is updating VS 2017**, which I couldn't do before because it was locked.
Thanks for sharing your fix. Unfortunately, that didn't work for me. What worked for me is to reinstall the latest OmniSharp. Copy-pasted from this [ticket](https://github.com/OmniSharp/omnisharp-vscode/issues/2295): > > The fix for this has been pushed into OmniSharp. You should be able to get the fix by setting the "omnisharp.path" option in VS Code to "latest". That will cause C# for VS Code to download the latest build OmniSharp at start up. > > >
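The quoted fix from the ticket can be applied in VS Code's user settings; a minimal `settings.json` fragment (VS Code's settings file allows comments, and `"latest"` is the exact value named in the linked ticket):

```json
{
    // Make the C# extension download the latest OmniSharp build at start-up
    "omnisharp.path": "latest"
}
```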
508,150
Taking a personality quiz on FiveThirtyEight, there were these graphs, consisting of a regular pentagon and an irregular pentagon inside with the points further from the center being lesser and points closer to the edge being greater, on top of regularly spaced bars: [![An example of this kind of graph with personality traits](https://i.stack.imgur.com/z4onp.png)](https://i.stack.imgur.com/z4onp.png) I've seen this type of graph in other places. For example, on the website for the video game ARMS: [![The same kind of graph, but with a hexagon instead](https://i.stack.imgur.com/EywUH.png)](https://i.stack.imgur.com/EywUH.png) I was trying to think of a word for this, and couldn't. **What are these called?**
2019/08/13
[ "https://english.stackexchange.com/questions/508150", "https://english.stackexchange.com", "https://english.stackexchange.com/users/332980/" ]
Polygon-circle graph ==================== This kind of graph is called a ‘polygon-circle graph’, according to [this Wikipedia article](https://en.wikipedia.org/wiki/Polygon-circle_graph).
It is just a plot of a point in five-dimensional space. The outer pentagon represents the first orthant in a five-dimensional space, or the convex hull of possible scores to the personality quiz.
508,150
Taking a personality quiz on FiveThirtyEight, there were these graphs, consisting of a regular pentagon and an irregular pentagon inside with the points further from the center being lesser and points closer to the edge being greater, on top of regularly spaced bars: [![An example of this kind of graph with personality traits](https://i.stack.imgur.com/z4onp.png)](https://i.stack.imgur.com/z4onp.png) I've seen this type of graph in other places. For example, on the website for the video game ARMS: [![The same kind of graph, but with a hexagon instead](https://i.stack.imgur.com/EywUH.png)](https://i.stack.imgur.com/EywUH.png) I was trying to think of a word for this, and couldn't. **What are these called?**
2019/08/13
[ "https://english.stackexchange.com/questions/508150", "https://english.stackexchange.com", "https://english.stackexchange.com/users/332980/" ]
In data visualization, it is called a > > [radar or spider chart](https://en.wikipedia.org/wiki/Radar_chart) > > > because it looks like either a radar screen with the values as you go around the circle, or it just sort of looks like a spider's web. It is essentially a bar chart or line chart of a small set on the same scale that, instead of being arranged linearly, is arranged circularly. The spider's web is like the y-axis on a linear chart, and the colored part is the value of each item. Placing radar charts side by side allows a comparison of many dimensions at once based on comparison of the shapes (a much more accessible comparison than [Chernoff faces](https://en.wikipedia.org/wiki/Chernoff_face)). It is similar to Nightingale's famous [rose chart](https://datavizcatalogue.com/methods/nightingale_rose_chart.html), but the latter implies a cyclical order to the categories whereas a radar chart involves unordered categories. [FiveThirtyEight has a good use of radar charts to compare candidates.](https://fivethirtyeight.com/features/the-5-key-constituencies-of-the-2020-democratic-primary/) (See the graphic just above 'Group 1'.)
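Since a radar chart is just a set of values placed at evenly spaced angles, the mapping from scores to polygon vertices is only a few lines of code. A minimal sketch (the function name, the top-first clockwise axis order, and the 0–1 score scale are illustrative choices, not from any particular charting library):

```python
import math

def radar_vertices(values, max_value=1.0):
    """Map category scores onto the (x, y) vertices of a radar-chart polygon.

    Axis k sits at an evenly spaced angle (starting at the top, going
    clockwise), and each score is plotted at a distance from the center
    proportional to its value relative to max_value.
    """
    n = len(values)
    verts = []
    for k, v in enumerate(values):
        theta = math.pi / 2 - 2 * math.pi * k / n  # top axis first, clockwise
        r = v / max_value
        verts.append((r * math.cos(theta), r * math.sin(theta)))
    return verts

# Five 0-1 personality scores -> an irregular pentagon inside the regular one
pentagon = radar_vertices([0.9, 0.4, 0.7, 0.2, 0.6])
```

Connecting the returned vertices in order (and closing the loop) draws the inner, irregular pentagon; the outer regular pentagon is the same computation with every value at `max_value`.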
Polygon-circle graph ==================== This kind of graph is called a ‘polygon-circle graph’, according to [this Wikipedia article](https://en.wikipedia.org/wiki/Polygon-circle_graph).
508,150
Taking a personality quiz on FiveThirtyEight, there were these graphs, consisting of a regular pentagon and an irregular pentagon inside with the points further from the center being lesser and points closer to the edge being greater, on top of regularly spaced bars: [![An example of this kind of graph with personality traits](https://i.stack.imgur.com/z4onp.png)](https://i.stack.imgur.com/z4onp.png) I've seen this type of graph in other places. For example, on the website for the video game ARMS: [![The same kind of graph, but with a hexagon instead](https://i.stack.imgur.com/EywUH.png)](https://i.stack.imgur.com/EywUH.png) I was trying to think of a word for this, and couldn't. **What are these called?**
2019/08/13
[ "https://english.stackexchange.com/questions/508150", "https://english.stackexchange.com", "https://english.stackexchange.com/users/332980/" ]
In data visualization, it is called a > > [radar or spider chart](https://en.wikipedia.org/wiki/Radar_chart) > > > because it looks like either a radar screen with the values as you go around the circle, or it just sort of looks like a spider's web. It is essentially a bar chart or line chart of a small set on the same scale that, instead of being arranged linearly, is arranged circularly. The spider's web is like the y-axis on a linear chart, and the colored part is the value of each item. Placing radar charts side by side allows a comparison of many dimensions at once based on comparison of the shapes (a much more accessible comparison than [Chernoff faces](https://en.wikipedia.org/wiki/Chernoff_face)). It is similar to Nightingale's famous [rose chart](https://datavizcatalogue.com/methods/nightingale_rose_chart.html), but the latter implies a cyclical order to the categories whereas a radar chart involves unordered categories. [FiveThirtyEight has a good use of radar charts to compare candidates.](https://fivethirtyeight.com/features/the-5-key-constituencies-of-the-2020-democratic-primary/) (See the graphic just above 'Group 1'.)
It is just a plot of a point in five-dimensional space. The outer pentagon represents the first orthant in a five-dimensional space, or the convex hull of possible scores to the personality quiz.
78
The [Einstein's puzzle](http://www.stanford.edu/~laurik/fsmbook/examples/Einstein%27sPuzzle.html) or [zebra puzzle](http://en.wikipedia.org/wiki/Zebra_Puzzle) is a well-known logic puzzle. Are there any very easy ways to solve it fast?
2014/05/15
[ "https://puzzling.stackexchange.com/questions/78", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/113/" ]
For small puzzles of this type, a grid is very useful. If there are three types of item, with five of each type, you can use a grid like this. Put an x in all the squares you know cannot hold true, and fill in the boxes you know are true. So if you are told 1 is not a, you x the upper-left box. If you are told B is 2, you fill the corresponding box, then x all the boxes in the same row and column. You can also copy x's from one grid to another when you fill a box. For four types, put types 1, 2, 3 across the top and 4, 3, 2 down the left, and you will have six grids. It becomes unwieldy after five types. ![like this](https://i.stack.imgur.com/Sg0fp.jpg) This is especially useful for seeing when you have eliminated all but one possibility. To capture the left/right information, one of your types of item can be the house number.
This answer is more about choosing the puzzle than general strategies. Once you have the grid, a lot of reduction tactics are the same as with Sudoku. Playing around with timed versions of the Einstein puzzle like [this](https://www.thinkpenguin.com/gnu-linux/einstein) can soon net you shortcuts for solving the grid. Granted, you get everything laid out and can start solving immediately, but our brain is more suited to certain types of cues. Some solved squares may help, but it all depends on their position and the other cues. The "X is between A and B" hints in particular can be very helpful, especially when they are chained or you can place them immediately. Since most hints are relative, you get several new anchor points to try out. Perhaps their higher apparent benefit is also due to the way we process information, as they usually clear the largest number of possibilities upon placement, and the same number (2) as is-before hints when uncertain, reducing the problem space. In general, if you can solve any of the near-central squares, that can reduce the positioning options of the triplets significantly. During play, I've noticed a particular pattern that I can solve fast (as opposed to in 15+ minutes). All my best-time (2 min) games started like this, but I don't know if the speed can be attributed to just the initial layout. It's when you start with a solved square one square from the border that can be extended with an in-between cue. It's better than border squares, since you can then also discard some guesses via is-a-neighbour cues, but it still gives only one possible orientation.
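As an aside, puzzles of this family are small enough that the same deductions can be checked by exhaustive search over permutations. A minimal sketch on a made-up 3-house mini-puzzle (the three clues are invented for illustration; they are not from the original Einstein puzzle):

```python
from itertools import permutations

HOUSES = (1, 2, 3)

def solve():
    """Brute-force a tiny zebra-style puzzle: try every color/pet assignment
    and keep only those that satisfy all three clues."""
    solutions = []
    for colors in permutations(("red", "green", "blue")):
        for pets in permutations(("cat", "dog", "fish")):
            color = dict(zip(HOUSES, colors))
            pet = dict(zip(HOUSES, pets))
            red = next(h for h in HOUSES if color[h] == "red")
            cat = next(h for h in HOUSES if pet[h] == "cat")
            # Clue 1: the cat lives in the red house
            if cat != red:
                continue
            # Clue 2: the green house is immediately to the right of the red one
            if red == 3 or color[red + 1] != "green":
                continue
            # Clue 3: the dog lives in house 1
            if pet[1] != "dog":
                continue
            solutions.append((colors, pets))
    return solutions

print(solve())  # exactly one assignment survives the three clues
```

The full 5×5 Einstein puzzle has 25 billion raw assignments, which is why real solvers (and the grid method above) propagate constraints instead of enumerating, but for small instances brute force is a handy correctness check.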
6,978,578
I have a requirement of recording video via web cam, on my webpage. What are the available plugins for the same. My website is developed using Ruby on Rails framework Regards, Pankaj
2011/08/08
[ "https://Stackoverflow.com/questions/6978578", "https://Stackoverflow.com", "https://Stackoverflow.com/users/314010/" ]
The first hit on searching "webcam plugin": <http://www.xarg.org/project/jquery-webcam-plugin/> As it uses JavaScript, it is easy to include in Rails. Many others appear in the results.
Another option is to use the Nimbb widget. There are a lot of tutorials showing how to embed it into a website.
6,978,578
I have a requirement of recording video via web cam, on my webpage. What are the available plugins for the same. My website is developed using Ruby on Rails framework Regards, Pankaj
2011/08/08
[ "https://Stackoverflow.com/questions/6978578", "https://Stackoverflow.com", "https://Stackoverflow.com/users/314010/" ]
If an HTML5 solution could be suitable for you, you can take a look at WebRTC (currently supported in Chrome, Firefox and Opera). You can find a good tutorial here: <http://www.html5rocks.com/en/tutorials/getusermedia/intro/>
The first hit on searching "webcam plugin": <http://www.xarg.org/project/jquery-webcam-plugin/> As it uses JavaScript, it is easy to include in Rails. Many others appear in the results.
6,978,578
I have a requirement of recording video via web cam, on my webpage. What are the available plugins for the same. My website is developed using Ruby on Rails framework Regards, Pankaj
2011/08/08
[ "https://Stackoverflow.com/questions/6978578", "https://Stackoverflow.com", "https://Stackoverflow.com/users/314010/" ]
If an HTML5 solution could be suitable for you, you can take a look at WebRTC (currently supported in Chrome, Firefox and Opera). You can find a good tutorial here: <http://www.html5rocks.com/en/tutorials/getusermedia/intro/>
Another option is to use the Nimbb widget. There are a lot of tutorials showing how to embed it into a website.
412,452
I am trying to align a Michelson interferometer using a 780 nm LED, but I am not getting any interference pattern on the CCD. I first align the interferometer with a laser to get equal path lengths, then switch to the LED. Please let me know what I am missing. Note that the coherence length of my LED is 9.5 µm, and for the reference arm I am using a manual stage on which 1 revolution is 250 µm.
2018/06/19
[ "https://physics.stackexchange.com/questions/412452", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/198659/" ]
Getting the two arms of your interferometer equal to within the coherence length is crucial. If you can obtain fringes using a laser, that is *not* enough to ensure that the path lengths are the same, because a laser typically has at least centimeters of coherence length, whereas the coherence length of your LED source is only 9.5 microns. If the resolution of your stage is really 250 microns per revolution, you will need to search for fringes using increments of about 3 degrees of revolution.
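The size of that search increment follows directly from the numbers in the question; a quick back-of-the-envelope check (assuming the usual factor of two for a Michelson: moving the mirror by d changes the optical path difference by 2d):

```python
coherence_length_um = 9.5   # coherence length of the LED source
pitch_um_per_rev = 250.0    # manual stage: 250 um of travel per revolution

# Moving the mirror by d changes the path difference by 2*d, so one
# coherence length of path difference corresponds to 9.5/2 um of travel.
max_step_um = coherence_length_um / 2
max_step_deg = 360.0 * max_step_um / pitch_um_per_rev

print(max_step_deg)  # 6.84 degrees per step at most; ~3 degrees leaves margin
```

Stepping at roughly half the maximum increment guarantees that successive scan positions overlap within the coherence envelope, so the fringe packet cannot be skipped over.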
(I would only want to comment but my reputation is not high enough, so I will write this as an answer.) 1) The first thing is the change in collimation and alignment when you swap between your laser and your LED. Usually you see clear fringe contrast across the entire beam spot when the beam is collimated (i.e. there is only a single k-vector along the arm). More generally, you see clear contrast across the beam spot when both the reference beam and the test beam have the same beam divergence (i.e. they have the same k-vector spread). 2) An LED is hard to collimate, because it is an extended light source. In fact, it is physically impossible to perfectly collimate an LED, because of the conservation of optical extent (known as etendue), analogous to the conservation of phase space in Hamiltonian mechanics. I'm worried that when you switch to the LED, your reference beam and probe beam may have different divergences. 3) If both of your arms are long and they are standing on a relatively thin optical breadboard (say 0.5 inch), I would be worried about vibration noise. Usually it's very low frequency (O(10 Hz)), but the amplitude can be a couple of fringes.
58,689,818
TIA. Is it possible to run Linux binaries like Chrome as unikernels, without building from source?
2019/11/04
[ "https://Stackoverflow.com/questions/58689818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12167785/" ]
OSv unikernel (<http://osv.io>) is the answer.
It is indeed possible to run arbitrary Linux ELFs as unikernels via tools such as <https://ops.city> and the Nanos unikernel <https://github.com/nanovms/nanos>. However, Chrome itself would not currently be supported, as it is a GUI program and, at least for Nanos, only server-side programs are supported.
217,861
I know pretty well that the plural for 'person' is 'people'. But my literature professor used once the word 'persons' because, he said, he was using the word the same as it will be used 'individuals'. Or at least I understood it that way. So, my question is: Can I use the word 'persons' in the next phrase? (I wrote the whole phrase in order for you to have the correct context.) > > I'm so into the idea of developing each skill alike, the idea of being > round **persons**. > > > Is it okay, or should I just use 'individuals'? Thank you! :)
2015/01/01
[ "https://english.stackexchange.com/questions/217861", "https://english.stackexchange.com", "https://english.stackexchange.com/users/103703/" ]
In general, the plural of "person" is "people". Exceptions include formal contexts such as law enforcement, and idiomatic phrases like "missing persons" or "persons unknown". A historical prescription is to use "people" for an unspecified number of people, and "persons" for a specific number of individuals: "many people" but "three persons". However, this rule is no longer followed, and "people" is used for both cases. In your example, I would use "people". (I would also suggest using "well-rounded" instead of "round", and using a more articulate phrase than "so into" to describe your interest.) Sources: * [Grammar Girl: People or Persons?](http://www.quickanddirtytips.com/education/grammar/people-or-persons) * [Daily Writing Tips: People versus Persons](http://www.dailywritingtips.com/people-versus-persons/) * [Grammarist: People vs. persons](http://grammarist.com/usage/people-persons/)
Your sentence doesn't quite make sense and I think you need to use "individuals" if you want to refer to more than one person. You could say: > > I'm so into the idea of developing each skill alike, the idea of being > a rounded person. > > > or > > I'm so into the idea of developing each skill alike, the idea of having > rounded individuals. > > > You can't really use "being" with "rounded individuals" as you can't "be" more than one person. Also, you don't say what audience this is meant for. "I'm so into" is a very "hip" way of saying "I am very enthusiastic about". For a more formal situation you probably should use the latter.
217,861
I know pretty well that the plural for 'person' is 'people'. But my literature professor used once the word 'persons' because, he said, he was using the word the same as it will be used 'individuals'. Or at least I understood it that way. So, my question is: Can I use the word 'persons' in the next phrase? (I wrote the whole phrase in order for you to have the correct context.) > > I'm so into the idea of developing each skill alike, the idea of being > round **persons**. > > > Is it okay, or should I just use 'individuals'? Thank you! :)
2015/01/01
[ "https://english.stackexchange.com/questions/217861", "https://english.stackexchange.com", "https://english.stackexchange.com/users/103703/" ]
Your sentence doesn't quite make sense and I think you need to use "individuals" if you want to refer to more than one person. You could say: > > I'm so into the idea of developing each skill alike, the idea of being > a rounded person. > > > or > > I'm so into the idea of developing each skill alike, the idea of having > rounded individuals. > > > You can't really use "being" with "rounded individuals" as you can't "be" more than one person. Also, you don't say what audience this is meant for. "I'm so into" is a very "hip" way of saying "I am very enthusiastic about". For a more formal situation you probably should use the latter.
I disagree with your premise. "Person" is a noun that identifies a single individual human being. "People", on the other hand, is a noun used for a collection consisting of "persons". "People" seems like a plural for "person" because in order to have a collection, one must necessarily have more than one. Because in many instances when one refers to multiple individuals it is not necessary to specify whether one is referring to the group or the individuals (three people standing on a street corner can be as accurately described as either "those three people over there" or as "those three persons over there"), "people" gets more usage than "persons", but people is not the plural of person. "Persons" is the word to use to specify more than one individual, when it is necessary or desirable to retain the reference to individuals.
217,861
I know pretty well that the plural for 'person' is 'people'. But my literature professor used once the word 'persons' because, he said, he was using the word the same as it will be used 'individuals'. Or at least I understood it that way. So, my question is: Can I use the word 'persons' in the next phrase? (I wrote the whole phrase in order for you to have the correct context.) > > I'm so into the idea of developing each skill alike, the idea of being > round **persons**. > > > Is it okay, or should I just use 'individuals'? Thank you! :)
2015/01/01
[ "https://english.stackexchange.com/questions/217861", "https://english.stackexchange.com", "https://english.stackexchange.com/users/103703/" ]
Your sentence doesn't quite make sense and I think you need to use "individuals" if you want to refer to more than one person. You could say: > > I'm so into the idea of developing each skill alike, the idea of being > a rounded person. > > > or > > I'm so into the idea of developing each skill alike, the idea of having > rounded individuals. > > > You can't really use "being" with "rounded individuals" as you can't "be" more than one person. Also, you don't say what audience this is meant for. "I'm so into" is a very "hip" way of saying "I am very enthusiastic about". For a more formal situation you probably should use the latter.
Your professor may be more or less correct about *persons*. To be upfront about it, I cringe every time I see the word, but in the hospitality industry they use *persons* specifically for individual members of a group. Ten people arriving is ten people who have never met, but ten persons is a group of ten. The reason is that a group of ten persons will be treated differently to ten people in terms of rooming, checking in etc. I suspect that this usage is what your professor was referring to.
217,861
I know pretty well that the plural for 'person' is 'people'. But my literature professor used once the word 'persons' because, he said, he was using the word the same as it will be used 'individuals'. Or at least I understood it that way. So, my question is: Can I use the word 'persons' in the next phrase? (I wrote the whole phrase in order for you to have the correct context.) > > I'm so into the idea of developing each skill alike, the idea of being > round **persons**. > > > Is it okay, or should I just use 'individuals'? Thank you! :)
2015/01/01
[ "https://english.stackexchange.com/questions/217861", "https://english.stackexchange.com", "https://english.stackexchange.com/users/103703/" ]
Your sentence doesn't quite make sense and I think you need to use "individuals" if you want to refer to more than one person. You could say: > > I'm so into the idea of developing each skill alike, the idea of being > a rounded person. > > > or > > I'm so into the idea of developing each skill alike, the idea of having > rounded individuals. > > > You can't really use "being" with "rounded individuals" as you can't "be" more than one person. Also, you don't say what audience this is meant for. "I'm so into" is a very "hip" way of saying "I am very enthusiastic about". For a more formal situation you probably should use the latter.
Use 'people' in your sentence, it doesn't otherwise make sense. 'Persons' (e.g. "persons wishing to remain aboard the train...,") has an officious tone, and is really only used in that manner.
217,861
I know pretty well that the plural for 'person' is 'people'. But my literature professor used once the word 'persons' because, he said, he was using the word the same as it will be used 'individuals'. Or at least I understood it that way. So, my question is: Can I use the word 'persons' in the next phrase? (I wrote the whole phrase in order for you to have the correct context.) > > I'm so into the idea of developing each skill alike, the idea of being > round **persons**. > > > Is it okay, or should I just use 'individuals'? Thank you! :)
2015/01/01
[ "https://english.stackexchange.com/questions/217861", "https://english.stackexchange.com", "https://english.stackexchange.com/users/103703/" ]
In general, the plural of "person" is "people". Exceptions include formal contexts such as law enforcement, and idiomatic phrases like "missing persons" or "persons unknown". A historical prescription is to use "people" for an unspecified number of people, and "persons" for a specific number of individuals: "many people" but "three persons". However, this rule is no longer followed, and "people" is used for both cases. In your example, I would use "people". (I would also suggest using "well-rounded" instead of "round", and using a more articulate phrase than "so into" to describe your interest.) Sources: * [Grammar Girl: People or Persons?](http://www.quickanddirtytips.com/education/grammar/people-or-persons) * [Daily Writing Tips: People versus Persons](http://www.dailywritingtips.com/people-versus-persons/) * [Grammarist: People vs. persons](http://grammarist.com/usage/people-persons/)
I disagree with your premise. "Person" is a noun that identifies a single individual human being. "People", on the other hand, is a noun used for a collection consisting of "persons". "People" seems like a plural for "person" because in order to have a collection, one must necessarily have more than one. Because in many instances when one refers to multiple individuals it is not necessary to specify whether one is referring to the group or the individuals (three people standing on a street corner can be as accurately described as either "those three people over there" or as "those three persons over there"), "people" gets more usage than "persons", but people is not the plural of person. "Persons" is the word to use to specify more than one individual, when it is necessary or desirable to retain the reference to individuals.
217,861
I know pretty well that the plural for 'person' is 'people'. But my literature professor used once the word 'persons' because, he said, he was using the word the same as it will be used 'individuals'. Or at least I understood it that way. So, my question is: Can I use the word 'persons' in the next phrase? (I wrote the whole phrase in order for you to have the correct context.) > > I'm so into the idea of developing each skill alike, the idea of being > round **persons**. > > > Is it okay, or should I just use 'individuals'? Thank you! :)
2015/01/01
[ "https://english.stackexchange.com/questions/217861", "https://english.stackexchange.com", "https://english.stackexchange.com/users/103703/" ]
In general, the plural of "person" is "people". Exceptions include formal contexts such as law enforcement, and idiomatic phrases like "missing persons" or "persons unknown". A historical prescription is to use "people" for an unspecified number of people, and "persons" for a specific number of individuals: "many people" but "three persons". However, this rule is no longer followed, and "people" is used for both cases. In your example, I would use "people". (I would also suggest using "well-rounded" instead of "round", and using a more articulate phrase than "so into" to describe your interest.) Sources: * [Grammar Girl: People or Persons?](http://www.quickanddirtytips.com/education/grammar/people-or-persons) * [Daily Writing Tips: People versus Persons](http://www.dailywritingtips.com/people-versus-persons/) * [Grammarist: People vs. persons](http://grammarist.com/usage/people-persons/)
Your professor may be more or less correct about *persons*. To be upfront about it, I cringe every time I see the word, but in the hospitality industry they use *persons* specifically for individual members of a group. Ten people arriving is ten people who have never met, but ten persons is a group of ten. The reason is that a group of ten persons will be treated differently to ten people in terms of rooming, checking in etc. I suspect that this usage is what your professor was referring to.
217,861
I know pretty well that the plural for 'person' is 'people'. But my literature professor used once the word 'persons' because, he said, he was using the word the same as it will be used 'individuals'. Or at least I understood it that way. So, my question is: Can I use the word 'persons' in the next phrase? (I wrote the whole phrase in order for you to have the correct context.) > > I'm so into the idea of developing each skill alike, the idea of being > round **persons**. > > > Is it okay, or should I just use 'individuals'? Thank you! :)
2015/01/01
[ "https://english.stackexchange.com/questions/217861", "https://english.stackexchange.com", "https://english.stackexchange.com/users/103703/" ]
In general, the plural of "person" is "people". Exceptions include formal contexts such as law enforcement, and idiomatic phrases like "missing persons" or "persons unknown". A historical prescription is to use "people" for an unspecified number of people, and "persons" for a specific number of individuals: "many people" but "three persons". However, this rule is no longer followed, and "people" is used for both cases. In your example, I would use "people". (I would also suggest using "well-rounded" instead of "round", and using a more articulate phrase than "so into" to describe your interest.) Sources: * [Grammar Girl: People or Persons?](http://www.quickanddirtytips.com/education/grammar/people-or-persons) * [Daily Writing Tips: People versus Persons](http://www.dailywritingtips.com/people-versus-persons/) * [Grammarist: People vs. persons](http://grammarist.com/usage/people-persons/)
Use 'people' in your sentence, it doesn't otherwise make sense. 'Persons' (e.g. "persons wishing to remain aboard the train...,") has an officious tone, and is really only used in that manner.
427,335
In my own research, I found some references as far back as 1905 but does anyone have the origin of the phrase "Let that sink in"?
2018/01/21
[ "https://english.stackexchange.com/questions/427335", "https://english.stackexchange.com", "https://english.stackexchange.com/users/277461/" ]
The relevant definition for *sink* in this sense in the OED is this: > > To penetrate *into* (†*to*, *unto*, *through*), enter or be impressed *in*, the mind, heart, etc. > > > Under this definition, the earliest entry is from a1300: > > Sua sar þin sakes to for-thingk > > þat soru thoru þin hert sink > > [Cursor mundi](https://quod.lib.umich.edu/c/cme/AJT8128.0001.001/1:4.1.171?rgn=div3;view=fulltext) > > > I found that [the Middle English Dictionary](https://quod.lib.umich.edu/cgi/m/mec/med-idx?type=byte&byte=181222814&egdisplay=open&egs=181264699) has a lot of examples for this sense, under 3.(c). As for the expression "let that sink in", a very similar expression can be found in this quote from 1385: > > Lat oure sorwe synken in thyn herte > > [Chaucer: The Knight's Tale](https://sites.fas.harvard.edu/~chaucer/teachslf/kt-par1.htm) > > > And in this quote from 1422: > > In-to thyn herte let my wordes synke > > [How to Learn to Die](https://books.google.com/books?id=tGQiRWIlgy0C&pg=PA201&lpg=PA201) > > > This same expression can be found much later, in this quote from 1798: > > It is concerning a truth, as our Lord faith, Luke ix. 44. "Let these thinks sink in your hearts:" so we say, let this truth sink in your hearts. > > [The Ruin of Rome](https://books.google.com/books?id=tcI0AAAAMAAJ&pg=PA442) > > > The expression "let [it] sink in your hearts" appears to be the origin of the shorter expression "let that sink in". The earliest I can find for "let that sink in" is from 1837: > > It appears, from the speech of the Leader of the Senate last night, that the only way in which the commission could supplement the work of the High Court would be in connexion with legislation which, while it might not infringe the provisions of the Constitution, might place one State at a disadvantage in relation to another. Let that sink in. > > [Parliamentary Debates](https://books.google.com/books?id=bCQEAAAAMAAJ&q="let+that+sink+in") > > > After that, the next example I found is from 1895: > > Both times I noticed his masterly use of the pause. It was as if he would say, 'There, let that sink in.' > > [A Memoir of George Higinbotham](https://books.google.com/books?id=8zlAAAAAYAAJ&pg=PA241) > > >
The idiomatic expression ***[sink in](http://www.dictionary.com/browse/sink--in)*** in the sense of *being understood* is quite old according to [The American Heritage Idioms Dictionary](http://www.dictionary.com/browse/sink--in) > > * Penetrate the mind, be absorbed, as in *The news of the crash didn't sink in right away*. ***[Late 1300s]*** > > > ***Let that sink in*** is a set phrase which derives from the above expression. Its earliest usages appear to be from the second half of the 19th century. I think the 1837 usage example present in [Google Books](https://books.google.com/ngrams/graph?content=let%20that%20sink%20in&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Clet%20that%20sink%20in%3B%2Cc0) is a false positive. [The Railway News](https://books.google.it/books?id=SgA1AQAAIAAJ&q=%22let%20that%20sink%22&dq=%22let%20that%20sink%22&hl=en&sa=X&ved=0ahUKEwjF27j3-enYAhVHxxQKHVYuBMMQ6AEINzAE) ..., Volume 44 - 1885 > > ***Just let that sink into your minds***. The Continental traffic via Dover and Calais and Folkestone and Boulogne has increased in much greater proportion during the existence of the Queenborough route than it did in the years before. > > >
427,335
In my own research, I found some references as far back as 1905 but does anyone have the origin of the phrase "Let that sink in"?
2018/01/21
[ "https://english.stackexchange.com/questions/427335", "https://english.stackexchange.com", "https://english.stackexchange.com/users/277461/" ]
The relevant definition for *sink* in this sense in the OED is this: > > To penetrate *into* (†*to*, *unto*, *through*), enter or be impressed *in*, the mind, heart, etc. > > > Under this definition, the earliest entry is from a1300: > > Sua sar þin sakes to for-thingk > > þat soru thoru þin hert sink > > [Cursor mundi](https://quod.lib.umich.edu/c/cme/AJT8128.0001.001/1:4.1.171?rgn=div3;view=fulltext) > > > I found that [the Middle English Dictionary](https://quod.lib.umich.edu/cgi/m/mec/med-idx?type=byte&byte=181222814&egdisplay=open&egs=181264699) has a lot of examples for this sense, under 3.(c). As for the expression "let that sink in", a very similar expression can be found in this quote from 1385: > > Lat oure sorwe synken in thyn herte > > [Chaucer: The Knight's Tale](https://sites.fas.harvard.edu/~chaucer/teachslf/kt-par1.htm) > > > And in this quote from 1422: > > In-to thyn herte let my wordes synke > > [How to Learn to Die](https://books.google.com/books?id=tGQiRWIlgy0C&pg=PA201&lpg=PA201) > > > This same expression can be found much later, in this quote from 1798: > > It is concerning a truth, as our Lord saith, Luke ix. 44. "Let these things sink in your hearts:" so we say, let this truth sink in your hearts. > > [The Ruin of Rome](https://books.google.com/books?id=tcI0AAAAMAAJ&pg=PA442) > > > The expression "let [it] sink in your hearts" appears to be the origin of the shorter expression "let that sink in". The earliest I can find for "let that sink in" is from 1837: > > It appears, from the speech of the Leader of the Senate last night, that the only way in which the commission could supplement the work of the High Court would be in connexion with legislation which, while it might not infringe the provisions of the Constitution, might place one State at a disadvantage in relation to another. Let that sink in.
> > [Parliamentary Debates](https://books.google.com/books?id=bCQEAAAAMAAJ&q="let+that+sink+in") > > > After that, the next example I found is from 1895: > > Both times I noticed his masterly use of the pause. It was as if he would say, 'There, let that sink in.' > > [A Memoir of George Higinbotham](https://books.google.com/books?id=8zlAAAAAYAAJ&pg=PA241) > > >
In 1534, William Tyndale used the words: > > Let these sayinges synke doune into youre eares. Luke 9:44. > > > The Authorised Version copied his wording in 1611: > > Let these sayings sink down into your ears. > > > [Textus Receptus - Tyndale and KJV](http://textusreceptusbibles.com/Interlinear/42009044) The word 'sink' translates the Greek word τιθεμι, *tithemi*, meaning 'to place, lay or set down' [(Strong 5087)](http://biblehub.com/greek/5087.htm). In 1175 the Wessex Gospels expresses the translation as: > > Asetteð þas spræce on eowren heorten > > > [Textus Receptus - Wessex Gospels](http://textusreceptusbibles.com/Interlinear/42009044)
278,999
I recently upgraded from GNOME2 to GNOME3, and it's (mostly) been a smooth transition. One issue that has been bugging me, however, is the keyboard shortcuts, specifically, mapping certain keyboard shortcuts doesn't seem to work. I have several keyboard shortcuts mapped, but they do not work. For example: * Lock Screen: Mod4+L * Home Folder: Mod4+E * Run: Mod4+R (Works) * Run Terminal: Mod4+Enter (Works) Why is it that some of these keyboard shortcuts work, but others don't? Any suggestions? Thanks!
2011/05/04
[ "https://superuser.com/questions/278999", "https://superuser.com", "https://superuser.com/users/9209/" ]
I just ran into the answer, actually. It's because the Windows key is mapped to show the Activities overview, and you need to disable that in order to get the shortcuts working. What worked for me was to go into Region and Language and, under "Alt/Win key behavior", select "Meta is mapped to Left Win". It seems that the key press is translated immediately, even when it is part of a key combination, so the left Win key is sent to GNOME Shell on its own, bypassing the combination.
Read this message: <http://mail.gnome.org/archives/gnome-shell-list/2011-May/msg00291.html> > > I got my shortcuts to work doing what is described there (mapping Left > Win key to Meta under 'Region and Languages'); I believe that I didn't > need that in GNOME 2. > > >
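For reference, custom shortcuts on newer GNOME releases can also be scripted: they live under the `org.gnome.settings-daemon.plugins.media-keys` gsettings schema. The sketch below is a dry run that only prints the `gsettings` commands it would execute; the binding name, command, and key are example values, and schema paths vary between GNOME releases (check yours with `gsettings list-recursively`).

```shell
#!/bin/sh
# Sketch: register one custom GNOME shortcut (Super+E -> nautilus) with
# gsettings. Dry run only: run() echoes each command instead of executing
# it, so nothing on the system is changed. Name/command/key are examples.
SCHEMA="org.gnome.settings-daemon.plugins.media-keys"
KEYPATH="/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/"

run() { echo "$@"; }  # change the body to "$@" to actually apply the settings

run gsettings set "$SCHEMA" custom-keybindings "['$KEYPATH']"
run gsettings set "$SCHEMA.custom-keybinding:$KEYPATH" name 'Home Folder'
run gsettings set "$SCHEMA.custom-keybinding:$KEYPATH" command 'nautilus'
run gsettings set "$SCHEMA.custom-keybinding:$KEYPATH" binding '<Super>e'
```

The `schema:path` form is needed because the custom-keybinding schema is relocatable; each `customN/` path holds one shortcut.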
562,258
In general in English, we don't ever apply the definite article to languages. We don't say "He speaks the Japanese" or "It was originally written in the French." But translated books are very often prefaced with a note phrased as *Translated from the Spanish* or *Translated from the Arabic*. Where does this odd form originate? What is the reason for this grammatical deviation?
2021/03/09
[ "https://english.stackexchange.com/questions/562258", "https://english.stackexchange.com", "https://english.stackexchange.com/users/19064/" ]
“the *adj*” is a reduced form that removes a noun (which is usually obvious from context) because the adjective is what really matters. In this case, “the Spanish” probably means “the Spanish version”, though there are several other words that would give the same overall meaning.
In English, the definite article "*the*" has often been used in an idiomatic way with the names of things that wouldn’t appear to need an article. Once, the use of "*the*" with a language was much more prevalent than it is today. Here are two old citations from the Oxford English Dictionary: > > *"[Let not your studying the French make you neglect the English](https://books.google.ch/books?hl=fr&id=E24EAAAAYAAJ&q=%22Let+not+your+studying+the+French+make+you+neglect+the+English%22#v=snippet&q=%22Let%20not%20your%20studying%20the%20French%20make%20you%20neglect%20the%20English%22&f=false)"* (1760). > > > *"Every advantage that … a complete knowledge of the Arabic could afford"* (1795). > > > The OED says people use "*the*" with languages in an [elliptical](https://www.thefreedictionary.com/elliptical) way – that is, they’re mentally deleting part of a longer phrase. Examples: "translated from the Spanish [version]" … or "from the [original] German" or "from the Japanese [language]." --- [According to an online article](http://masteringarticles.com/definite-article-languages/#:%7E:text=Rule%207.12%3A%20Use%20the%20definite,The%20English%20language%20is%20hard): > > Rule 7.12: Use the definite article when the word *language* immediately > follows the name of a language. > > > > > > > English is hard. > > > > > > The English language > > is hard. > > > > > > > > > > > > > Bill wants to learn Chinese. > > > > > > Bill wants to learn the Chinese language. > > > > > > > > >
562,258
In general in English, we don't ever apply the definite article to languages. We don't say "He speaks the Japanese" or "It was originally written in the French." But translated books are very often prefaced with a note phrased as *Translated from the Spanish* or *Translated from the Arabic*. Where does this odd form originate? What is the reason for this grammatical deviation?
2021/03/09
[ "https://english.stackexchange.com/questions/562258", "https://english.stackexchange.com", "https://english.stackexchange.com/users/19064/" ]
In English, the definite article "*the*" has often been used in an idiomatic way with the names of things that wouldn’t appear to need an article. Once, the use of "*the*" with a language was much more prevalent than it is today. Here are two old citations from the Oxford English Dictionary: > > *"[Let not your studying the French make you neglect the English](https://books.google.ch/books?hl=fr&id=E24EAAAAYAAJ&q=%22Let+not+your+studying+the+French+make+you+neglect+the+English%22#v=snippet&q=%22Let%20not%20your%20studying%20the%20French%20make%20you%20neglect%20the%20English%22&f=false)"* (1760). > > > *"Every advantage that … a complete knowledge of the Arabic could afford"* (1795). > > > The OED says people use "*the*" with languages in an [elliptical](https://www.thefreedictionary.com/elliptical) way – that is, they’re mentally deleting part of a longer phrase. Examples: "translated from the Spanish [version]" … or "from the [original] German" or "from the Japanese [language]." --- [According to an online article](http://masteringarticles.com/definite-article-languages/#:%7E:text=Rule%207.12%3A%20Use%20the%20definite,The%20English%20language%20is%20hard): > > Rule 7.12: Use the definite article when the word *language* immediately > follows the name of a language. > > > > > > > English is hard. > > > > > > The English language > > is hard. > > > > > > > > > > > > > Bill wants to learn Chinese. > > > > > > Bill wants to learn the Chinese language. > > > > > > > > >
> > Definition of Spanish 1: the Romance language of the largest part of > Spain and of the countries colonized by Spaniards > <https://www.merriam-webster.com/dictionary/Spanish> > > > We see from the above definition that "Spanish" means "**the** language of Spain etc." So, in this meaning, if you wrote "the Spanish" you would effectively be writing "**the the** language of Spain." So, if I say, "I speak Spanish" I mean "I speak **the** language"; if I said "I speak the Spanish" I would mean, "I speak **the the** language." In the case of a translated text, we are not translating the entire Spanish language; that would require us to translate a dictionary. We are instead referring to the original *text*. The phrase "Translated from the Spanish" is conventionally understood to mean, "Translated from the Spanish text."
562,258
In general in English, we don't ever apply the definite article to languages. We don't say "He speaks the Japanese" or "It was originally written in the French." But translated books are very often prefaced with a note phrased as *Translated from the Spanish* or *Translated from the Arabic*. Where does this odd form originate? What is the reason for this grammatical deviation?
2021/03/09
[ "https://english.stackexchange.com/questions/562258", "https://english.stackexchange.com", "https://english.stackexchange.com/users/19064/" ]
In English, the definite article "*the*" has often been used in an idiomatic way with the names of things that wouldn’t appear to need an article. Once, the use of "*the*" with a language was much more prevalent than it is today. Here are two old citations from the Oxford English Dictionary: > > *"[Let not your studying the French make you neglect the English](https://books.google.ch/books?hl=fr&id=E24EAAAAYAAJ&q=%22Let+not+your+studying+the+French+make+you+neglect+the+English%22#v=snippet&q=%22Let%20not%20your%20studying%20the%20French%20make%20you%20neglect%20the%20English%22&f=false)"* (1760). > > > *"Every advantage that … a complete knowledge of the Arabic could afford"* (1795). > > > The OED says people use "*the*" with languages in an [elliptical](https://www.thefreedictionary.com/elliptical) way – that is, they’re mentally deleting part of a longer phrase. Examples: "translated from the Spanish [version]" … or "from the [original] German" or "from the Japanese [language]." --- [According to an online article](http://masteringarticles.com/definite-article-languages/#:%7E:text=Rule%207.12%3A%20Use%20the%20definite,The%20English%20language%20is%20hard): > > Rule 7.12: Use the definite article when the word *language* immediately > follows the name of a language. > > > > > > > English is hard. > > > > > > The English language > > is hard. > > > > > > > > > > > > > Bill wants to learn Chinese. > > > > > > Bill wants to learn the Chinese language. > > > > > > > > >
It's similar to asking the question "What's the Spanish for -something-". For example "What's the Spanish for Supermarket?" In that case someone is asking for a specific Spanish word (the answer is 'supermercado'). In the case of "Translated from the Spanish" the writer is referring to a specific Spanish text. For example if the quote related to the windmills passage in Don Quixote the English might have the subscript "Translated from the Spanish" where "The Spanish" related to that passage in Don Quixote and not to, say, a guide to the Alhambra. When we say "Does he speak Spanish?" the question is about the subject's ability to speak (and understand) Spanish generally. This would include the ability to read Don Quixote, understand a sound guide to the Alhambra and to describe a fault with his car to a Spanish mechanic.
562,258
In general in English, we don't ever apply the definite article to languages. We don't say "He speaks the Japanese" or "It was originally written in the French." But translated books are very often prefaced with a note phrased as *Translated from the Spanish* or *Translated from the Arabic*. Where does this odd form originate? What is the reason for this grammatical deviation?
2021/03/09
[ "https://english.stackexchange.com/questions/562258", "https://english.stackexchange.com", "https://english.stackexchange.com/users/19064/" ]
“the *adj*” is a reduced form that removes a noun (which is usually obvious from context) because the adjective is what really matters. In this case, “the Spanish” probably means “the Spanish version”, though there are several other words that would give the same overall meaning.
> > Definition of Spanish 1: the Romance language of the largest part of > Spain and of the countries colonized by Spaniards > <https://www.merriam-webster.com/dictionary/Spanish> > > > We see from the above definition that "Spanish" means "**the** language of Spain etc." So, in this meaning, if you wrote "the Spanish" you would effectively be writing "**the the** language of Spain." So, if I say, "I speak Spanish" I mean "I speak **the** language"; if I said "I speak the Spanish" I would mean, "I speak **the the** language." In the case of a translated text, we are not translating the entire Spanish language; that would require us to translate a dictionary. We are instead referring to the original *text*. The phrase "Translated from the Spanish" is conventionally understood to mean, "Translated from the Spanish text."
562,258
In general in English, we don't ever apply the definite article to languages. We don't say "He speaks the Japanese" or "It was originally written in the French." But translated books are very often prefaced with a note phrased as *Translated from the Spanish* or *Translated from the Arabic*. Where does this odd form originate? What is the reason for this grammatical deviation?
2021/03/09
[ "https://english.stackexchange.com/questions/562258", "https://english.stackexchange.com", "https://english.stackexchange.com/users/19064/" ]
“the *adj*” is a reduced form that removes a noun (which is usually obvious from context) because the adjective is what really matters. In this case, “the Spanish” probably means “the Spanish version”, though there are several other words that would give the same overall meaning.
It's similar to asking the question "What's the Spanish for -something-". For example "What's the Spanish for Supermarket?" In that case someone is asking for a specific Spanish word (the answer is 'supermercado'). In the case of "Translated from the Spanish" the writer is referring to a specific Spanish text. For example if the quote related to the windmills passage in Don Quixote the English might have the subscript "Translated from the Spanish" where "The Spanish" related to that passage in Don Quixote and not to, say, a guide to the Alhambra. When we say "Does he speak Spanish?" the question is about the subject's ability to speak (and understand) Spanish generally. This would include the ability to read Don Quixote, understand a sound guide to the Alhambra and to describe a fault with his car to a Spanish mechanic.
562,258
In general in English, we don't ever apply the definite article to languages. We don't say "He speaks the Japanese" or "It was originally written in the French." But translated books are very often prefaced with a note phrased as *Translated from the Spanish* or *Translated from the Arabic*. Where does this odd form originate? What is the reason for this grammatical deviation?
2021/03/09
[ "https://english.stackexchange.com/questions/562258", "https://english.stackexchange.com", "https://english.stackexchange.com/users/19064/" ]
It's similar to asking the question "What's the Spanish for -something-". For example "What's the Spanish for Supermarket?" In that case someone is asking for a specific Spanish word (the answer is 'supermercado'). In the case of "Translated from the Spanish" the writer is referring to a specific Spanish text. For example if the quote related to the windmills passage in Don Quixote the English might have the subscript "Translated from the Spanish" where "The Spanish" related to that passage in Don Quixote and not to, say, a guide to the Alhambra. When we say "Does he speak Spanish?" the question is about the subject's ability to speak (and understand) Spanish generally. This would include the ability to read Don Quixote, understand a sound guide to the Alhambra and to describe a fault with his car to a Spanish mechanic.
> > Definition of Spanish 1: the Romance language of the largest part of > Spain and of the countries colonized by Spaniards > <https://www.merriam-webster.com/dictionary/Spanish> > > > We see from the above definition that "Spanish" means "**the** language of Spain etc." So, in this meaning, if you wrote "the Spanish" you would effectively be writing "**the the** language of Spain." So, if I say, "I speak Spanish" I mean "I speak **the** language"; if I said "I speak the Spanish" I would mean, "I speak **the the** language." In the case of a translated text, we are not translating the entire Spanish language; that would require us to translate a dictionary. We are instead referring to the original *text*. The phrase "Translated from the Spanish" is conventionally understood to mean, "Translated from the Spanish text."
142,735
Where can I get polygons (multipolygons) of all countries of the world, including the territorial waters of the countries? OpenStreetMap displays these areas, but I can't find an extract of this subset of the OSM data. * Is there a subset of OSM data available, just containing the territories (incl. water territories) of a country as multipolygons? * Or, can I find this set of polygons somewhere else?
2015/04/15
[ "https://gis.stackexchange.com/questions/142735", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/2920/" ]
The country boundaries including the territorial waters are available in the OSM data set. I found a site where these country shapes can be downloaded: <https://wambachers-osm.website/boundaries/>
Take a look at the Global Administrative Areas dataset at <http://www.gadm.org>, it contains at least the boundaries, though I am not so sure about the territorial waters part
153,896
I am travelling to Bangalore via Abu Dhabi from London. Can I get liquor from a duty-free shop in Heathrow? Will it be confiscated in Abu Dhabi?
2020/02/17
[ "https://travel.stackexchange.com/questions/153896", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/104120/" ]
There's no prohibition on transiting through the UAE with alcohol; non-Muslim visitors can even bring in 4 liters past customs. It's even sold on board your Etihad flight and in the duty-free shops in Abu Dhabi. You will have no problem, provided that the duty-free shop places your purchase in one of the plastic security bags (otherwise you would be limited to carrying 100 mL through security).
According to IATA <https://www.iatatravelcentre.com/AE-United-Arab-Emirates-customs-currency-airport-tax-regulations-details.htm#Import%20regulations>, free import of up to 4 liters of any kind of alcohol is permitted by Abu Dhabi for non-Muslim passengers only. As stated by @Michael Hampton, the bottle(s) will need to be carried in the duty-free bag in which they are sealed at the point of purchase.
97,764
tl;dr: trying to find propeller efficiency from thrust, velocity, and power, and see which propeller length-to-pitch ratio is efficient. Hello, I'm a high school student working on an extended essay for my physics class. What I've basically done is take a bunch of propellers (7x4E, 7x5E, 7x6E, and so on and so forth): propellers of lengths 7, 8, 9, and 10 inches and pitches 4, 5, 6, 7, and 8 (fixed-pitch propellers). I've taken these propellers and hooked them up to a motor and a weighing scale to try to find how much thrust they produce. Then, I put them onto a control line aircraft to see how fast the plane goes. Basically, I'm trying to find the efficiency of these propellers, and the data I have is the thrust produced by the propeller at different power levels and the velocity at different power levels. I'd really like to know if there's a relationship that I could employ to find the efficiency! It doesn't have to be complex! It just needs to work from a physics standpoint. My deadline's coming up soon, and any help would be greatly appreciated.
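Such a relationship does exist: propulsive efficiency is the useful power out (thrust times airspeed) divided by the power in, η = T·v / P. Note that if P is the *electrical* input power, the result lumps motor efficiency together with propeller efficiency. A minimal sketch, with function name and sample numbers purely illustrative rather than taken from the experiment:

```python
# Propulsive efficiency of a propeller: the ratio of useful power
# delivered to the airframe (thrust x airspeed) to the input power.
# eta = T * v / P, dimensionless; below 1 for any real propeller.

def propulsive_efficiency(thrust_n: float, velocity_ms: float, power_w: float) -> float:
    """Return eta = (T * v) / P for thrust in N, speed in m/s, power in W."""
    if power_w <= 0:
        raise ValueError("input power must be positive")
    return thrust_n * velocity_ms / power_w

# Illustration values only (not measured data): 4.5 N of thrust at
# 18 m/s on 120 W of electrical input.
eta = propulsive_efficiency(4.5, 18.0, 120.0)
print(f"efficiency = {eta:.3f}")  # 4.5 * 18 / 120 = 0.675
```

Computing η for each propeller at each power setting gives a directly comparable, dimensionless number for the different length-to-pitch ratios.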
2023/02/23
[ "https://aviation.stackexchange.com/questions/97764", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/67917/" ]
The FAA states that a [Ceiling](https://www.ecfr.gov/current/title-14/chapter-I/subchapter-A/part-1/section-1.1) "means the height above the earth's surface of the lowest layer of clouds or obscuring phenomena that is **reported** as “**broken**”, “overcast”, or “obscuration”, and not classified as “thin” or “partial”. A [broken level](https://w1.weather.gov/glossary/index.php?word=broken) is defined by the National Weather Service as "A layer of the atmosphere with **5/8 to 7/8** sky cover (cloud cover)." *(emphasis is mine)* This means that 4/8ths of the sky can be clear and the **ceiling** would be **reported** as **broken**. (see the image below of a ceiling) 14 CFR Part 91.155(c) states: > > (c) Except as provided in § 91.157, no person may operate an aircraft beneath **the ceiling** under VFR within the lateral boundaries of controlled airspace designated to the surface for an airport when the ceiling is less than 1,000 feet. > > > *(emphasis is mine)* In my opinion, this means that when the *official* **ceiling** is reported (in a METAR, for example) as "broken" (for example) this would apply to all airspace below the reported ceiling value within the entire "...lateral boundaries of the controlled airspace designated to the surface for an airport..." (ref: 14 CFR Part 91.155 (c)). Also, in my opinion, the official ceiling reported in a METAR, for example, is not a cloud or group of clouds directly above the aircraft measured and defined as a *ceiling* by the pilot. Instead, it is a defined and regulatory based atmospheric condition that, if it is reported as being below 1000 ft. AGL, renders the entire surface area of the controlled airspace below that reported ceiling as IMC. So, although **some** surface areas of controlled airspace surrounding an airport may be large, 14 CFR Part 91.155(c), and the definition of a ceiling, do not make any distinction or allowance for relief just because there is no cloud cover directly above the aircraft. 
*Image of a "Broken" ceiling (highlighting is mine).* [Source:](https://www.boldmethod.com/learn-to-fly/weather/how-cloud-ceilings-are-reported-for-pilots-metar/) [![enter image description here](https://i.stack.imgur.com/LwMgo.png)](https://i.stack.imgur.com/LwMgo.png)
The language talks about operating "beneath **the ceiling** ... when **the ceiling** is less than 1,000'." If you're not beneath that sub-1,000' ceiling, then this language doesn't apply to you. In your point 2, if there's no ceiling where you are, then you definitely aren't operating "beneath the ceiling." Your point 1 sounds like the cloud layer is sloping up as you get farther from the airport, and the pilot knows (how?) that where he is, there is now more than 1,000' between the ground and the clouds, while your point 3 has the ground sloping up so that the space between the ground and the clouds has become less than 1,000' where the pilot is operating. Those points would boil down to asking if the definition of ceiling, > > Ceiling means the height above the **earth's surface** of the lowest **layer of clouds** or obscuring phenomena that is reported as broken, overcast, or obscuration, and not classified as thin or partial. > > > is related to the (clouds & earth's surface *at the airport* itself), or the (clouds & earth's surface *where you're operating*). I'll leave that distinction for a separate answer.
43,290
Some time ago I found a good online map for Alpine hiking trails. It was very similar to Google Maps and had a green background. I forgot the address and can't find it on Google now. It wasn't myalps.net. Do you know this, or similar maps? EDIT ---- I've found this map in bookmarks on my old computer: <http://alpenkarte.eu/> Nevertheless, the maps recommended in the answers are very useful too. Thanks! P.S.: I've been searching Google for 2 hours and couldn't find ANY of them. It seems like Google isn't very good at indexing map websites other than their own...
2015/02/13
[ "https://travel.stackexchange.com/questions/43290", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/26838/" ]
Here you go: * Austria: <http://www.amap.at> * France: <http://www.geoportail.gouv.fr> * Germany, Bavaria: <http://geoportal.bayern.de/bayernatlas> (check box "Wanderwege") * Italy: <http://www.pcn.minambiente.it/viewer> * Switzerland: <http://map.schweizmobil.ch> (check boxes in "Wanderland") * World: <http://opentopomap.org>
The answers so far are already good, but I'd like to add a map for Switzerland: <https://map.geo.admin.ch> It's without doubt the best online map I've ever seen. It's amazing how detailed it is, and what kind of information you can show on the map on demand, e.g. geomagnetic fields, employment density, or 4G antenna locations, but also more useful things for hiking, such as slopes over 30°, ski and hiking routes, borreliose risk regions, or ibex populations. There are also tools available for planning, such as measurement tools, or elevation profiles, and obviously you can also export and import GPS tracks. All this is totally free!
43,290
Some time ago I found a good online map for Alpine hiking trails. It was very similar to Google Maps and had a green background. I forgot the address and can't find it on Google now. It wasn't myalps.net. Do you know this, or similar maps? EDIT ---- I've found this map in bookmarks on my old computer: <http://alpenkarte.eu/> Nevertheless, the maps recommended in the answers are very useful too. Thanks! P.S.: I've been searching Google for 2 hours and couldn't find ANY of them. It seems like Google isn't very good at indexing map websites other than their own...
2015/02/13
[ "https://travel.stackexchange.com/questions/43290", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/26838/" ]
Here you go: * Austria: <http://www.amap.at> * France: <http://www.geoportail.gouv.fr> * Germany, Bavaria: <http://geoportal.bayern.de/bayernatlas> (check box "Wanderwege") * Italy: <http://www.pcn.minambiente.it/viewer> * Switzerland: <http://map.schweizmobil.ch> (check boxes in "Wanderland") * World: <http://opentopomap.org>
The website <http://waymarkedtrails.org> shows signposted hiking trails. The data comes from OpenStreetMap.org. It shows the logo used on the signs and indicates the difficulty of the hike with different line styles. And it features the requested green background.
43,290
Some time ago I found a good online map for Alpine hiking trails. It was very similar to Google Maps and had a green background. I forgot the address and can't find it on Google now. It wasn't myalps.net. Do you know this, or similar maps? EDIT ---- I've found this map in bookmarks on my old computer: <http://alpenkarte.eu/> Nevertheless, the maps recommended in the answers are very useful too. Thanks! P.S.: I've been searching Google for 2 hours and couldn't find ANY of them. It seems like Google isn't very good at indexing map websites other than their own...
2015/02/13
[ "https://travel.stackexchange.com/questions/43290", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/26838/" ]
Here you go: * Austria: <http://www.amap.at> * France: <http://www.geoportail.gouv.fr> * Germany, Bavaria: <http://geoportal.bayern.de/bayernatlas> (check box "Wanderwege") * Italy: <http://www.pcn.minambiente.it/viewer> * Switzerland: <http://map.schweizmobil.ch> (check boxes in "Wanderland") * World: <http://opentopomap.org>
The best maps of the Austrian Alps are from Kompass: <http://www.kompass.de/touren-und-regionen/touren/> Switch the layer to Summer/Winter in the top right to see details like marked trails and contours (otherwise it shows OpenStreetMap with fewer details). There are even winter ski tours there.
43,290
Some time ago I found a good online map for Alpine hiking trails. It was very similar to Google Maps and had a green background. I forgot the address and can't find it on Google now. It wasn't myalps.net. Do you know this, or similar maps? EDIT ---- I've found this map in bookmarks on my old computer: <http://alpenkarte.eu/> Nevertheless, the maps recommended in the answers are very useful too. Thanks! P.S.: I've been searching Google for 2 hours and couldn't find ANY of them. It seems like Google isn't very good at indexing map websites other than their own...
2015/02/13
[ "https://travel.stackexchange.com/questions/43290", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/26838/" ]
Here you go: * Austria: <http://www.amap.at> * France: <http://www.geoportail.gouv.fr> * Germany, Bavaria: <http://geoportal.bayern.de/bayernatlas> (check box "Wanderwege") * Italy: <http://www.pcn.minambiente.it/viewer> * Switzerland: <http://map.schweizmobil.ch> (check boxes in "Wanderland") * World: <http://opentopomap.org>
For Austria: <http://bergfex.com> Trails (ski-tour, hiking, cycling) are created by users using their GPS and smartphones. Highly recommended.
43,290
Some time ago I found a good online map for Alpine hiking trails. It was very similar to Google Maps and had a green background. I forgot the address and can't find it on Google now. It wasn't myalps.net. Do you know this, or similar maps? EDIT ---- I've found this map in bookmarks on my old computer: <http://alpenkarte.eu/> Nevertheless, the maps recommended in the answers are very useful too. Thanks! P.S.: I've been searching Google for 2 hours and couldn't find ANY of them. It seems like Google isn't very good at indexing map websites other than their own...
2015/02/13
[ "https://travel.stackexchange.com/questions/43290", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/26838/" ]
The answers so far are already good, but I'd like to add a map for Switzerland: <https://map.geo.admin.ch> It's without doubt the best online map I've ever seen. It's amazing how detailed it is, and what kind of information you can show on the map on demand, e.g. geomagnetic fields, employment density, or 4G antenna locations, but also more useful things for hiking, such as slopes over 30°, ski and hiking routes, borreliose risk regions, or ibex populations. There are also tools available for planning, such as measurement tools, or elevation profiles, and obviously you can also export and import GPS tracks. All this is totally free!
The best maps of the Austrian Alps are from Kompass: <http://www.kompass.de/touren-und-regionen/touren/> Switch the layer to Summer/Winter in the top right to see details like marked trails and contours (otherwise it shows OpenStreetMap with fewer details). There are even winter ski tours there.
43,290
Some time ago I found a good online map for Alpine hiking trails. It was very similar to Google Maps and had a green background. I forgot the address and can't find it on Google now. It wasn't myalps.net. Do you know this, or similar maps? EDIT ---- I've found this map in bookmarks on my old computer: <http://alpenkarte.eu/> Nevertheless, the maps recommended in the answers are very useful too. Thanks! P.S.: I've been searching Google for 2 hours and couldn't find ANY of them. It seems like Google isn't very good at indexing map websites other than their own...
2015/02/13
[ "https://travel.stackexchange.com/questions/43290", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/26838/" ]
The answers so far are already good, but I'd like to add a map for Switzerland: <https://map.geo.admin.ch> It's without doubt the best online map I've ever seen. It's amazing how detailed it is, and what kind of information you can show on the map on demand, e.g. geomagnetic fields, employment density, or 4G antenna locations, but also more useful things for hiking, such as slopes over 30°, ski and hiking routes, borreliosis risk regions, or ibex populations. There are also tools available for planning, such as measurement tools or elevation profiles, and obviously you can also export and import GPS tracks. All this is totally free!
For Austria: <http://bergfex.com> Trails (ski touring, hiking, cycling) are created by users with their GPS devices and smartphones. Highly recommended.
43,290
Some time ago I found a good online map for Alpine hiking trails. It was very similar to Google Maps and had a green background. I forgot the address and can't find it on Google now. It wasn't myalps.net. Do you know this, or similar maps? EDIT ---- I've found this map in the bookmarks on my old computer: <http://alpenkarte.eu/> Nevertheless, the maps recommended in the answers are very useful too. Thanks! P.S.: I've been searching Google for 2 hours and couldn't find ANY of them. It seems like Google isn't very good at indexing map websites other than its own...
2015/02/13
[ "https://travel.stackexchange.com/questions/43290", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/26838/" ]
The website <http://waymarkedtrails.org> shows signposted hiking trails. The data comes from OpenStreetMap.org. It shows the logos used on the signs and indicates the difficulty of each hike with different line styles. And it features the requested green background.
The best maps for the Austrian Alps are Kompass: <http://www.kompass.de/touren-und-regionen/touren/> Switch the layer to Summer/Winter in the top right to see details like marked trails and contours (otherwise it shows OpenStreetMap with fewer details). There are even winter ski tours there.
43,290
Some time ago I found a good online map for Alpine hiking trails. It was very similar to Google Maps and had a green background. I forgot the address and can't find it on Google now. It wasn't myalps.net. Do you know this, or similar maps? EDIT ---- I've found this map in the bookmarks on my old computer: <http://alpenkarte.eu/> Nevertheless, the maps recommended in the answers are very useful too. Thanks! P.S.: I've been searching Google for 2 hours and couldn't find ANY of them. It seems like Google isn't very good at indexing map websites other than its own...
2015/02/13
[ "https://travel.stackexchange.com/questions/43290", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/26838/" ]
The website <http://waymarkedtrails.org> shows signposted hiking trails. The data comes from OpenStreetMap.org. It shows the logos used on the signs and indicates the difficulty of each hike with different line styles. And it features the requested green background.
For Austria: <http://bergfex.com> Trails (ski touring, hiking, cycling) are created by users with their GPS devices and smartphones. Highly recommended.
11,157,225
We have some Microfocus Cobol.Net applications. We would like to create a dependency map similar to what is available in NDepend. Does anyone know of a tool that is able to do this?
2012/06/22
[ "https://Stackoverflow.com/questions/11157225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/86611/" ]
You could embed them into resources fairly easily and extract them when you need them. See [this link](http://www.codeproject.com/Articles/13573/Extracting-Embedded-Images-From-An-Assembly) for more. Or you could encrypt them and decrypt them when you need them.
Linked resources are stored as files within the project; during compilation the resource data is taken from the files and placed into the manifest for the application. The application's resource file (.resx) stores only a relative path or link to the file on disk. With embedded resources, the resource data is stored directly in the .resx file in a text representation of the binary data. In either case, the resource data is compiled into the executable file. To change a resource from linked to embedded 1. With a project selected in Solution Explorer, on the Project menu click Properties. 2. Click the Resources tab. 3. On the Resource Designer toolbar, point to the resource view drop-down, click the arrow, and select the type of resource that you want to edit. 4. Select the resource that you wish to change. 5. In the Properties window, select the Persistence property and change it to Embedded in .resx.
102,593
I usually make fava beans from dry beans; I simmer them in plain water for hours. Right after they are cooked they are bright green and have a very fresh, delicious taste, but after letting them cool the colour changes dramatically to a darker grey, and as time goes on the taste changes for the worse. Canned beans usually don't have this issue (I guess they add something to them), so what is causing this change and how can I prevent it? Edit: I made an experiment by separating the beans and their water into three bowls: one was topped with oil, one was rapidly cooled and then refrigerated, and the last was the control, left to cool down slowly in the open air. The rapidly cooled one was the best in terms of colour, then the oil-covered one, and last was the control.
2019/09/28
[ "https://cooking.stackexchange.com/questions/102593", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/48070/" ]
I think the answer lies in how you're cooling them (hence my earlier comment). Many vegetables need to be 'shocked' immediately after cooking in order to retain a vibrant color. As shocking stops the cooking process, it also contributes to maintaining a good texture and flavor. Here is a very similar Q & A from [The Globe and Mail](https://www.theglobeandmail.com/life/food-and-wine/entertaining/how-do-i-keep-my-fava-and-green-beans-from-turning-grey/article584644/): > > The question: I love fava and green beans in the summertime and would love to serve them at dinner parties, but they always look dull and grey after I cook them. Whenever I have them in French restaurants, they're bright green and vibrant. My daughter insists the chefs are probably just using MSG. Is she right? > > > The answer: It's not likely MSG they're using - just a simple technique called a "big pot blanch and shock." The key is to cook your green vegetables as quickly as possible so the heat doesn't have time to release their pigment and then shock them in ice water as soon as they're done. Fill the biggest pot in your kitchen - I use an eight-litre stockpot - with cold water and bring it to the hottest boil your stove can muster. Add a cup of table salt for every four litres of water, then dump in only as many vegetables as you can add without stopping the boil. Cook them in batches if you must. When they're done, scoop them out and chill immediately in a big pot of ice water. And maybe wear some protective sunglasses. They're going to be that bright. > > > Since, at the end of your cooking, the beans have the nice color and flavor you like, I would suggest cooking as you normally do but use the shocking technique immediately after. If you normally keep the beans in the cooking liquid, I would still shock the beans and refrigerate them separately from the liquid. You can refrigerate the liquid and, when cold, add the beans back in.
Your observation is interesting in that fava beans contain high levels of oxidants. Persons with genetic susceptibility can get very sick from eating them. This illness is called [favism](https://www.sciencedirect.com/topics/medicine-and-dentistry/favism). <https://www.hematology.org/Thehematologist/Diffusion/8304.aspx> > > The problem is that the bean’s protein content can include as much as > 2 percent vicine and convicine, which are converted in the gut to > divicine and isouramil. These highly redox proteins are likely to > retard rotting of the bean, but produce reactive oxygen species (ROS) > including the superoxide anion and hydrogen peroxide, which rapidly > oxidize NADPH and glutathione. These molecules are normally detoxified > by catalase and glutathione peroxidase, in enzymatic reactions that > depend on NADPH. Because NADPH levels are very low in G6PD-deficient > red cells, these undergo severe oxidative damage. A characteristic > feature of favism is that intracellular and extracellular hemolysis > coexist. > > > Let us assume that the color and flavor change described is due to these oxidants acting on the bean. How to prevent it? I can think of 2 ways. **1: add antioxidants 2: prevent exposure to air.** The antioxidant that comes to mind would be lemon juice. The experiment is easy: cook a batch of fava beans, then treat half with lemon juice and leave the other half alone. Does lemon juice prevent the color change? Lemon juice works to prevent browning of sliced apples by this mechanism. The other method would be to exclude air. One could do this by tossing the freshly cooked beans with olive oil, which should produce an air barrier on each bean. Submerging the beans in oil would be a surer way to achieve the same purpose. A third method would be to contain them in an airtight bag or container and exclude the air before sealing. Again, testable. \*Note that submerging cooked beans in oil is not a way to keep them indefinitely. Different and possibly more dangerous types of spoilage can happen. Both of the above ideas are to keep your beans good for a couple of days at most. Both sound delicious to me.
102,593
I usually make fava beans from dry beans; I simmer them in plain water for hours. Right after they are cooked they are bright green and have a very fresh, delicious taste, but after letting them cool the colour changes dramatically to a darker grey, and as time goes on the taste changes for the worse. Canned beans usually don't have this issue (I guess they add something to them), so what is causing this change and how can I prevent it? Edit: I made an experiment by separating the beans and their water into three bowls: one was topped with oil, one was rapidly cooled and then refrigerated, and the last was the control, left to cool down slowly in the open air. The rapidly cooled one was the best in terms of colour, then the oil-covered one, and last was the control.
2019/09/28
[ "https://cooking.stackexchange.com/questions/102593", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/48070/" ]
Your observation is interesting in that fava beans contain high levels of oxidants. Persons with genetic susceptibility can get very sick from eating them. This illness is called [favism](https://www.sciencedirect.com/topics/medicine-and-dentistry/favism). <https://www.hematology.org/Thehematologist/Diffusion/8304.aspx> > > The problem is that the bean’s protein content can include as much as > 2 percent vicine and convicine, which are converted in the gut to > divicine and isouramil. These highly redox proteins are likely to > retard rotting of the bean, but produce reactive oxygen species (ROS) > including the superoxide anion and hydrogen peroxide, which rapidly > oxidize NADPH and glutathione. These molecules are normally detoxified > by catalase and glutathione peroxidase, in enzymatic reactions that > depend on NADPH. Because NADPH levels are very low in G6PD-deficient > red cells, these undergo severe oxidative damage. A characteristic > feature of favism is that intracellular and extracellular hemolysis > coexist. > > > Let us assume that the color and flavor change described is due to these oxidants acting on the bean. How to prevent it? I can think of 2 ways. **1: add antioxidants 2: prevent exposure to air.** The antioxidant that comes to mind would be lemon juice. The experiment is easy: cook a batch of fava beans, then treat half with lemon juice and leave the other half alone. Does lemon juice prevent the color change? Lemon juice works to prevent browning of sliced apples by this mechanism. The other method would be to exclude air. One could do this by tossing the freshly cooked beans with olive oil, which should produce an air barrier on each bean. Submerging the beans in oil would be a surer way to achieve the same purpose. A third method would be to contain them in an airtight bag or container and exclude the air before sealing. Again, testable. \*Note that submerging cooked beans in oil is not a way to keep them indefinitely. Different and possibly more dangerous types of spoilage can happen. Both of the above ideas are to keep your beans good for a couple of days at most. Both sound delicious to me.
Use lemon skin or a bit of lemon juice while cooking. This tip is well known in fava bean restaurants in the Middle East.
102,593
I usually make fava beans from dry beans; I simmer them in plain water for hours. Right after they are cooked they are bright green and have a very fresh, delicious taste, but after letting them cool the colour changes dramatically to a darker grey, and as time goes on the taste changes for the worse. Canned beans usually don't have this issue (I guess they add something to them), so what is causing this change and how can I prevent it? Edit: I made an experiment by separating the beans and their water into three bowls: one was topped with oil, one was rapidly cooled and then refrigerated, and the last was the control, left to cool down slowly in the open air. The rapidly cooled one was the best in terms of colour, then the oil-covered one, and last was the control.
2019/09/28
[ "https://cooking.stackexchange.com/questions/102593", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/48070/" ]
I think the answer lies in how you're cooling them (hence my earlier comment). Many vegetables need to be 'shocked' immediately after cooking in order to retain a vibrant color. As shocking stops the cooking process, it also contributes to maintaining a good texture and flavor. Here is a very similar Q & A from [The Globe and Mail](https://www.theglobeandmail.com/life/food-and-wine/entertaining/how-do-i-keep-my-fava-and-green-beans-from-turning-grey/article584644/): > > The question: I love fava and green beans in the summertime and would love to serve them at dinner parties, but they always look dull and grey after I cook them. Whenever I have them in French restaurants, they're bright green and vibrant. My daughter insists the chefs are probably just using MSG. Is she right? > > > The answer: It's not likely MSG they're using - just a simple technique called a "big pot blanch and shock." The key is to cook your green vegetables as quickly as possible so the heat doesn't have time to release their pigment and then shock them in ice water as soon as they're done. Fill the biggest pot in your kitchen - I use an eight-litre stockpot - with cold water and bring it to the hottest boil your stove can muster. Add a cup of table salt for every four litres of water, then dump in only as many vegetables as you can add without stopping the boil. Cook them in batches if you must. When they're done, scoop them out and chill immediately in a big pot of ice water. And maybe wear some protective sunglasses. They're going to be that bright. > > > Since, at the end of your cooking, the beans have the nice color and flavor you like, I would suggest cooking as you normally do but use the shocking technique immediately after. If you normally keep the beans in the cooking liquid, I would still shock the beans and refrigerate them separately from the liquid. You can refrigerate the liquid and, when cold, add the beans back in.
Use lemon skin or a bit of lemon juice while cooking. This tip is well known in fava bean restaurants in the Middle East.
194,279
There is a huge dungeon that had been unexplored until recently, when it was opened up to the public by the authorities. Adventurers are encouraged to form parties to ensure a higher chance of survival. There are 18 floors in total, and each floor consists of a network of tunnels and caves infested with dangerous monsters; every floor spawns a champion/boss acting as the ruler of that floor. Going deeper means that adventurers will encounter even tougher rulers and monster swarms/waves, but why don't all the monster rulers gang up against the adventurers, just like the monster swarms/waves do? P.S.: I've noticed that the monsters are not hostile to their own kind but will attack anything foreign on sight!
2021/01/18
[ "https://worldbuilding.stackexchange.com/questions/194279", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
> > the monsters are not hostile to their kind but will attack anything foreign on sight! > > > If anything foreign counts as hostile, then even the guy living upstairs is foreign. Acting cooperatively requires certain brain skills which not all beings have. Take a bee hive: if you are trying to move it so that it's not flooded by water, would the bees spare you their sting? No, not really. Like in the fable of the scorpion and the frog, it's their nature.
**The monsters are highly territorial and hate each other as much as the adventurers** You say the monsters won't attack each other, but it may be that the monster lords have merely partitioned up the dungeon to their liking and have a strict agreement for their own minions to not be on each other's turf. Being isolated from the rest of the world is the exact kind of situation that would produce a highly rigid social structure with little challenging the dominant powers, as there are no external forces to disrupt the politics of the dungeon. Like a city that has been divided up between various gangs but with no police. The monsters *would* fight each other if they were on the same floor, but they so religiously stick to their own territories to avoid fights with other monster lord factions that in practice they never see each other.
194,279
There is a huge dungeon that had been unexplored until recently, when it was opened up to the public by the authorities. Adventurers are encouraged to form parties to ensure a higher chance of survival. There are 18 floors in total, and each floor consists of a network of tunnels and caves infested with dangerous monsters; every floor spawns a champion/boss acting as the ruler of that floor. Going deeper means that adventurers will encounter even tougher rulers and monster swarms/waves, but why don't all the monster rulers gang up against the adventurers, just like the monster swarms/waves do? P.S.: I've noticed that the monsters are not hostile to their own kind but will attack anything foreign on sight!
2021/01/18
[ "https://worldbuilding.stackexchange.com/questions/194279", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
**The monsters are highly territorial and hate each other as much as the adventurers** You say the monsters won't attack each other, but it may be that the monster lords have merely partitioned up the dungeon to their liking and have a strict agreement for their own minions to not be on each other's turf. Being isolated from the rest of the world is the exact kind of situation that would produce a highly rigid social structure with little challenging the dominant powers, as there are no external forces to disrupt the politics of the dungeon. Like a city that has been divided up between various gangs but with no police. The monsters *would* fight each other if they were on the same floor, but they so religiously stick to their own territories to avoid fights with other monster lord factions that in practice they never see each other.
### Because then they'd have to share the tasty adventurers Have you ever had this experience: you're seated at a nice restaurant, a neighbouring table has its meal served up, and they call you over and ask for help eating it? No, me neither. Your monsters don't think that their dinner can be that much of a challenge. Plus, they'd like to get all those shiny things, and if they called for help, they'd need to share the loot.
194,279
There is a huge dungeon that had been unexplored until recently, when it was opened up to the public by the authorities. Adventurers are encouraged to form parties to ensure a higher chance of survival. There are 18 floors in total, and each floor consists of a network of tunnels and caves infested with dangerous monsters; every floor spawns a champion/boss acting as the ruler of that floor. Going deeper means that adventurers will encounter even tougher rulers and monster swarms/waves, but why don't all the monster rulers gang up against the adventurers, just like the monster swarms/waves do? P.S.: I've noticed that the monsters are not hostile to their own kind but will attack anything foreign on sight!
2021/01/18
[ "https://worldbuilding.stackexchange.com/questions/194279", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
**The monsters are highly territorial and hate each other as much as the adventurers** You say the monsters won't attack each other, but it may be that the monster lords have merely partitioned up the dungeon to their liking and have a strict agreement for their own minions to not be on each other's turf. Being isolated from the rest of the world is the exact kind of situation that would produce a highly rigid social structure with little challenging the dominant powers, as there are no external forces to disrupt the politics of the dungeon. Like a city that has been divided up between various gangs but with no police. The monsters *would* fight each other if they were on the same floor, but they so religiously stick to their own territories to avoid fights with other monster lord factions that in practice they never see each other.
There's no way the inhabitants of this lovely dungeon hold any hatred against their neighbours! But they can have a lot of motives to stay on their floor and not meet each other. Here's your plate of reasons you can sample as you wish! The dungeon finances are limited -------------------------------- The dungeon lord, who's at the bottom floor, may have limited resources to spend, and so many employees to take care of! Hence, in the contract he has signed with them, their protection task includes only one floor. Then why, as a truly evil employee, would you ever think of working more than what you're paid for? Also, since the dungeon lord is evil and very, very stingy, they want to "fire" the less efficient employees by putting them in the more "risk-inducing workplaces", which are obviously at the top level of the dungeon, where the "clients" are many and mean. The employees may be stupid, but they're not fools, and they understood what their boss did. Therefore, why should you pick up others' dangerous slack when you can have a peaceful nap just below instead? No, no. You're not paid enough innocent souls for that! They want to live in their favored environment ---------------------------------------------- Each floor has been carefully made to allow specific species to live in it. The 1st floor has woodland inhabitants, the 2nd has cavern creatures, and so on. The motive to go to another floor becomes basically void, since other floors offer nothing suitable to sustain their preferred lifestyle. While you don't hate your neighbours, you don't really care what happens to them, since you don't meet them on a regular basis. Here's a more flavorful bite of this sample with ice ogres and lava snakes: they won't even dare venture into each other's floors, since the first would melt in the hot, smoky fire caverns while the second would freeze to death in the cold, lifeless chasms. They don't want to step on their friends' traps ----------------------------------------------- Monsters are sneaky ones. And paranoid, on top of that. They have laid traps of their own design in everything that looks like a room, a door or a chest. And they have so many problems trusting each other that they won't tell anyone but their floor's comrades. And because of that, monsters from other floors can't simply rush in to help without falling into them. After all, how can you prove you *won't* betray us as soon as things go awry, in order to earn a getaway or a good place in the command? What tells us you won't change the traps so we fall in them, just to play a joke on us? And more importantly, aren't you actually working for the adventurers?! Pushed to the extreme, small groups of monsters may band together in some rooms, and they wouldn't know anything of the traps their neighbours on the same floor laid! They wouldn't help them, for fear of being tricked or simply because they might touch the wrong pressure plate! They follow strict caste rules ------------------------------ Monsters can be ugly and nasty and love to have some human 'hors d'oeuvre' at dinner, but they do have a strong sense of order in their life. And this reflects how the monster society is organized. And it is organized in classes, where the beggars live on the cold, dangerous top floor, while the royalty live at the cozy and warm bottom. No beggar would ever think of getting near nobles; that's an offence to honour, and a proof of disrespect against your leaders. At the same time, and more importantly, most nobles don't really care about what happens to the lower blood, except for the rare few who care deeply for their people, who look with sparkling eyes at them! Hope you enjoyed the samples! Know that you can easily mix the ingredients I gave you to cook whole new flavors!
194,279
There is a huge dungeon that had been unexplored until recently, when it was opened up to the public by the authorities. Adventurers are encouraged to form parties to ensure a higher chance of survival. There are 18 floors in total, and each floor consists of a network of tunnels and caves infested with dangerous monsters; every floor spawns a champion/boss acting as the ruler of that floor. Going deeper means that adventurers will encounter even tougher rulers and monster swarms/waves, but why don't all the monster rulers gang up against the adventurers, just like the monster swarms/waves do? P.S.: I've noticed that the monsters are not hostile to their own kind but will attack anything foreign on sight!
2021/01/18
[ "https://worldbuilding.stackexchange.com/questions/194279", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
> > the monsters are not hostile to their kind but will attack anything foreign on sight! > > > If anything foreign counts as hostile, then even the guy living upstairs is foreign. Acting cooperatively requires certain brain skills which not all beings have. Take a bee hive: if you are trying to move it so that it's not flooded by water, would the bees spare you their sting? No, not really. Like in the fable of the scorpion and the frog, it's their nature.
### Because then they'd have to share the tasty adventurers Have you ever had this experience: you're seated at a nice restaurant, a neighbouring table has its meal served up, and they call you over and ask for help eating it? No, me neither. Your monsters don't think that their dinner can be that much of a challenge. Plus, they'd like to get all those shiny things, and if they called for help, they'd need to share the loot.
194,279
There is a huge dungeon that had been unexplored until recently, when it was opened up to the public by the authorities. Adventurers are encouraged to form parties to ensure a higher chance of survival. There are 18 floors in total, and each floor consists of a network of tunnels and caves infested with dangerous monsters; every floor spawns a champion/boss acting as the ruler of that floor. Going deeper means that adventurers will encounter even tougher rulers and monster swarms/waves, but why don't all the monster rulers gang up against the adventurers, just like the monster swarms/waves do? P.S.: I've noticed that the monsters are not hostile to their own kind but will attack anything foreign on sight!
2021/01/18
[ "https://worldbuilding.stackexchange.com/questions/194279", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
> > the monsters are not hostile to their kind but will attack anything foreign on sight! > > > If anything foreign counts as hostile, then even the guy living upstairs is foreign. Acting cooperatively requires certain brain skills which not all beings have. Take a bee hive: if you are trying to move it so that it's not flooded by water, would the bees spare you their sting? No, not really. Like in the fable of the scorpion and the frog, it's their nature.
There's no way the inhabitants of this lovely dungeon hold any hatred against their neighbours! But they can have a lot of motives to stay on their floor and not meet each other. Here's your plate of reasons you can sample as you wish! The dungeon finances are limited -------------------------------- The dungeon lord, who's at the bottom floor, may have limited resources to spend, and so many employees to take care of! Hence, in the contract he has signed with them, their protection task includes only one floor. Then why, as a truly evil employee, would you ever think of working more than what you're paid for? Also, since the dungeon lord is evil and very, very stingy, they want to "fire" the less efficient employees by putting them in the more "risk-inducing workplaces", which are obviously at the top level of the dungeon, where the "clients" are many and mean. The employees may be stupid, but they're not fools, and they understood what their boss did. Therefore, why should you pick up others' dangerous slack when you can have a peaceful nap just below instead? No, no. You're not paid enough innocent souls for that! They want to live in their favored environment ---------------------------------------------- Each floor has been carefully made to allow specific species to live in it. The 1st floor has woodland inhabitants, the 2nd has cavern creatures, and so on. The motive to go to another floor becomes basically void, since other floors offer nothing suitable to sustain their preferred lifestyle. While you don't hate your neighbours, you don't really care what happens to them, since you don't meet them on a regular basis. Here's a more flavorful bite of this sample with ice ogres and lava snakes: they won't even dare venture into each other's floors, since the first would melt in the hot, smoky fire caverns while the second would freeze to death in the cold, lifeless chasms. They don't want to step on their friends' traps ----------------------------------------------- Monsters are sneaky ones. And paranoid, on top of that. They have laid traps of their own design in everything that looks like a room, a door or a chest. And they have so many problems trusting each other that they won't tell anyone but their floor's comrades. And because of that, monsters from other floors can't simply rush in to help without falling into them. After all, how can you prove you *won't* betray us as soon as things go awry, in order to earn a getaway or a good place in the command? What tells us you won't change the traps so we fall in them, just to play a joke on us? And more importantly, aren't you actually working for the adventurers?! Pushed to the extreme, small groups of monsters may band together in some rooms, and they wouldn't know anything of the traps their neighbours on the same floor laid! They wouldn't help them, for fear of being tricked or simply because they might touch the wrong pressure plate! They follow strict caste rules ------------------------------ Monsters can be ugly and nasty and love to have some human 'hors d'oeuvre' at dinner, but they do have a strong sense of order in their life. And this reflects how the monster society is organized. And it is organized in classes, where the beggars live on the cold, dangerous top floor, while the royalty live at the cozy and warm bottom. No beggar would ever think of getting near nobles; that's an offence to honour, and a proof of disrespect against your leaders. At the same time, and more importantly, most nobles don't really care about what happens to the lower blood, except for the rare few who care deeply for their people, who look with sparkling eyes at them! Hope you enjoyed the samples! Know that you can easily mix the ingredients I gave you to cook whole new flavors!
194,279
There is a huge dungeon that had been unexplored until recently, when the authorities opened it up to the public. Adventurers are encouraged to form parties to improve their chances of survival. There are 18 floors in total, and each floor consists of a network of tunnels and caves infested with dangerous monsters; every floor spawns a champion/boss acting as the ruler of that floor. Going deeper means that adventurers will encounter even tougher rulers and monster swarms/waves, but why don't all the monster rulers gang up together against the adventurers, just like the monster swarms/waves do? P.S: I've noticed that the monsters are not hostile to their own kind but will attack anything foreign on sight!
2021/01/18
[ "https://worldbuilding.stackexchange.com/questions/194279", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
There's no way the inhabitants of this lovely dungeon bear any hatred toward their neighbours! But they can have plenty of motives to stay on their own floor and never meet each other. Here's your plate of reasons; sample as you wish! The dungeon finances are limited -------------------------------- The dungeon lord, who lives on the bottom floor, may have limited resources to spend and so many employees to take care of! Hence, in the contract he signed with them, their protection duties cover only one floor. Then why, as a truly evil employee, would you ever think of working more than what you're paid for? Also, since the dungeon lord is evil and very, very stingy, he wants to "fire" the less efficient employees by assigning them to the more "risk-inducing workplaces", which are obviously the top levels of the dungeon, where the "clients" are many and mean. The employees may be stupid, but they're no fools, and they understood what their boss did. So why pick up others' dangerous slack when you can have a peaceful nap just below instead? No, no. You're not paid enough innocent souls for that! They want to live in their favored environment ---------------------------------------------- Each floor has been carefully made to suit a specific species. The 1st floor houses woodland inhabitants, the 2nd cavern creatures, and so on. The motive to visit another floor becomes basically void, since other floors offer nothing suitable to sustain their preferred lifestyle. While you don't hate your neighbours, you don't really care what happens to them, since you never meet them on a regular basis. Here's a more flavorful bite of this sample, with ice ogres and lava snakes: neither will even dare venture onto the other's floor, since the former would melt in the hot, smoky fire caverns while the latter would freeze to death in the cold, lifeless chasms.
They don't want to step on their friends' traps ----------------------------------------------- Monsters are sneaky ones. And paranoid, on top of that. They have laid traps of their own design in everything that looks like a room, a door or a chest. And they have so many problems trusting each other that they won't tell anyone but their own floor's comrades. Because of that, monsters from other floors can't simply rush in to help without falling into them. After all, how can you prove you *won't* betray us as soon as things go awry, in order to earn a getaway or a good place in the chain of command? What tells us you won't rearrange the traps so we fall into them, just to play a joke on us? And more importantly, aren't you actually working for the adventurers?! Pushed to the extreme, small groups of monsters may band together in certain rooms, knowing nothing of the traps their neighbours on the same floor have laid! They wouldn't help them, for fear of being tricked or simply of stepping on the wrong pressure plate! They follow strict caste rules ------------------------------ Monsters can be ugly and nasty and love to have some human 'hors d'oeuvre' at dinner, but they do have a strong sense of order in their lives. And this reflects how monster society is organized: in classes, where the beggars live on the cold, dangerous top floor, while the royalty live on the cozy, warm bottom one. No beggar would ever think of getting near the nobles; that would be an offence to honour, and proof of disrespect toward your leaders. At the same time, and more importantly, most nobles don't really care what happens to the lower blood, except for the rare few who care deeply for their people, who look at them with sparkling eyes! Hope you enjoyed the samples! Know that you can easily mix the ingredients I gave you to cook up whole new flavors!
### Because then they'd have to share the tasty adventurers Have you ever had this experience: you're seated at a nice restaurant, a neighbouring table has its meal served up, and they call you over and ask for help eating it? No, me neither. Your monsters don't think their dinner can be that much of a challenge. Plus, they'd like to keep all those shiny things, and if they called for help, they'd need to share the loot.
114,087
Out in the former state of Utah, near where the Old Salt Lake ruins are, a group of scholars and students from the New Jerusalem University are on an exploration mission. They have heard, from an unknown but (maybe) reliable source, that an old Fallout Shelter is out there. They search all day until they find it. They ask, or rather force, some slaves to break down the door, and they are able to get in. The Fallout Shelter, though littered with skeletons, is full of valuable knowledge from before WWIII. The paper they used in the Old Times was super durable and never rotted at all. Most of the notes they found were just grocery lists and the like, but they do find a blueprint that talks about some machine powered by that mystical force, the “electron”, if I remember correctly. There is just one problem. The apocalypse was 800 years ago, and since that time New English (the language they speak in the present) has become totally different from Modern English. So, what might be a plausible reason why the college kids could read Modern English?
2018/06/04
[ "https://worldbuilding.stackexchange.com/questions/114087", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/51013/" ]
In school, some of us learn Latin. We learn it because it is the root of many modern languages, so learning Latin helps us understand them. Classical Latin is 2000 years old. Also, the fact that they call their language New English suggests strong connections between New English and "modern" English (at least as similar as "modern" English is to Old English, though that's not very similar).
Basic electrical machinery - such as a dynamo, or an electric motor - could easily be described simply through images, schematic diagrams, and other visual means. The scholars would need to experiment with the proper materials\*, but the actual design would be easy to pick up on. Once they have a handle on what those basic machines do and how, that gives them a starting point to translate documentation related to those machines, which is a big leg up when working on other schematics. \*They might not have to guess if the materials are described *chemically*; I would expect chemical symbols for elements to be pretty stable (no pun intended). They're already short, unambiguous, and half of them are already Latin or Greek, so it's not like they're neologisms that will wear out.
114,087
Out in the former state of Utah, near where the Old Salt Lake ruins are, a group of scholars and students from the New Jerusalem University are on an exploration mission. They have heard, from an unknown but (maybe) reliable source, that an old Fallout Shelter is out there. They search all day until they find it. They ask, or rather force, some slaves to break down the door, and they are able to get in. The Fallout Shelter, though littered with skeletons, is full of valuable knowledge from before WWIII. The paper they used in the Old Times was super durable and never rotted at all. Most of the notes they found were just grocery lists and the like, but they do find a blueprint that talks about some machine powered by that mystical force, the “electron”, if I remember correctly. There is just one problem. The apocalypse was 800 years ago, and since that time New English (the language they speak in the present) has become totally different from Modern English. So, what might be a plausible reason why the college kids could read Modern English?
2018/06/04
[ "https://worldbuilding.stackexchange.com/questions/114087", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/51013/" ]
Languages tend to develop fastest in cities, where lots of people meet and develop new words and new grammar. Outside the cities, especially in isolated communities, language change is slower. There are parts of Sardinia that have an Italian dialect that would probably be intelligible to the Romans. If this is the case it may be that a slave can read what the intellectuals cannot. *Modern English is a foreign language to the intellectuals of New Jerusalem, but out in the wilds of Utah, among the backwoodsmen, you will still find families who speak a language that, while it isn't classical English, could be understood by a person from the times before the war.* *Among the slaves is LeVerl, and he grew up in an old isolated family (before his capture and enslavement), and while he has since learned to speak New, he still knows the language that his mother learned from her father. To the scholars’ surprise he can read the old texts.*
In school, some of us learn Latin. We learn it because it is the root of many modern languages, so learning Latin helps us understand them. Classical Latin is 2000 years old. Also, the fact that they call their language New English suggests strong connections between New English and "modern" English (at least as similar as "modern" English is to Old English, though that's not very similar).