Dataset schema:
- qid: int64 (1 – 74.7M)
- question: stringlengths (12 – 33.8k)
- date: stringlengths (10 – 10)
- metadata: list
- response_j: stringlengths (0 – 115k)
- response_k: stringlengths (2 – 98.3k)
38,212
> > But as for you, Bethlehem Ephrathah, Too little to be among the clans of Judah, From you One will go forth for Me to be ruler in Israel. His goings forth are from long ago, From the days of eternity (yom olam). > [Micah 5](https://parabible.com/Micah/5):2 NASB > > > What are the "days of eternity" (yom olam) in Micah asserting about the ruler?
2019/01/10
[ "https://hermeneutics.stackexchange.com/questions/38212", "https://hermeneutics.stackexchange.com", "https://hermeneutics.stackexchange.com/users/28028/" ]
To understand the verse in question it helps to understand the military context: > > ESV Micah 5: > > > 1a Now **muster your troops**, O **daughterb of troops**; > **siege** is laid against us; > **with a rod they strike the judge** of Israel > on the cheek. > 2c But you, O Bethlehem Ephrathah, > who are too little [insignificant] **to be among the clans [armies] of Judah**, > from you shall come forth for me > one who is **to be ruler in Israel**, > **whose coming forth** is from of old, > from ancient days. > > > Footnotes: > a 1 Ch 4:14 in Hebrew > b 1 That is, city > c 2 Ch 5:1 in Hebrew > > > So the prophet is saying that from the city of David, Bethlehem, the house of bread, which was nothing but a few women and children, the promised ruler of Israel would arise. But then he says "whose coming forth...", which is apparently taken by the ESV to refer to his birth in Bethlehem. However (and I'm no Hebrew guru), the word is plural and is rendered in other translations as "whose comings forth" (i.e., given the context, "sorties" or "military campaigns"). Now, if I'm correct about this, then it would be, I believe, [in a notional sense](http://www.21stcr.org/multimedia-2015/1_pdf/ds_john_and_jewish_preexistence.pdf) similar to this: > > [Rom 4:17 KJV] 17 (**As it is written, I have made** thee a father of many nations,) before him whom he believed, [even] God, who quickeneth the dead, **and calleth those things which be not as though they were**. > > > Most important, I believe, is the concern in the original question that the form of one usage of OLAM might tell us the meaning of a similar use. However, that isn't necessarily the case. Context is always the key factor. The NET Bible renders Micah 5:2 like this: > > NET Bible Micah 5:2 As for you, Bethlehem Ephrathah, seemingly insignificant among the clans of Judah--from you a king will emerge who will rule over Israel on my behalf, one whose origins **are in the distant past**. 
> > > That's about all I think we can load OLAM with in actual usage. And if his military campaigns are from OLAM, then we must not imagine that his first battle was in eternity past. Surely there was no war on day one! The point is that the exploits of the Messiah have been in the scriptures from long ago, and in God's mind longer than that. All the scriptures agree on that. Notice this similar verbiage from the mouth of Gideon: > > [Jdg 6:14-16 NLT] (14) Then the LORD turned to him and said, "Go with the strength you have, and **rescue Israel** from the Midianites. I am sending you!" (15) "But Lord," Gideon replied, **"how can I rescue Israel? My clan is the weakest in the whole tribe of Manasseh, and I am the least in my entire family!" (16) The LORD said to him, "I will be with you. And you will destroy the Midianites as if you were fighting against one man."** > > > I should also point out that interpreting Micah 5:2 as saying that Jesus IS the "Ancient of Days" clashes with Daniel, where the Messiah ascends and appears before God, who is referred to as "the Ancient of Days": > > [Dan 7:13-14 KJV] 13 I saw in the night visions, and, behold, [one] like the Son of man came with the clouds of heaven, and came to the Ancient of days, and they brought him near before him. 14 And there was given him dominion, and glory, and a kingdom, that all people, nations, and languages, should serve him: his dominion [is] an everlasting dominion, which shall not pass away, and his kingdom [that] which shall not be destroyed. > > >
The other answers seem overly complex for what should be a readily available solution/answer. **Is Micah 5:2 identifying the Messiah as "the Ancient of Days"?** No, for the reasons found in the context of the passage. > > But as for you, Bethlehem Ephrathah, Too little to be among the clans of Judah, From you One will go forth for Me to be ruler in Israel. His goings forth are from long ago, From the days of eternity (yom olam). Micah 5:2 NASB > > > The Jews misunderstood many things of the new age Jesus ushered in. But they had one thing sure in their hearts - > > Where is the One having been born King of the Jews?... > And having assembled all **the chief priests and scribes** of the people, he was inquiring of them **where the Christ was to be born**. 5 And they said to him, "In Bethlehem of Judea, for thus has it been written through the prophet: 6 'And you, Bethlehem, land of Judah, are by no means least among the rulers of Judah, for out of you will come forth One leading, who will shepherd My people Israel.'" Matt 2:2-6 > > > The Jews knew: * where the One was coming from. * the One would have a beginning, an 'origin', a birth - just like a normal person. * the One would be a descendant of David and Abraham - a human offspring. * they were expecting someone arranged by God, who would not *be* God! * the One was prophesied from the beginning - not *existing* from the beginning. * the origins were not of the birth, but of the plan, the promise, the prophecy of the birth - even Moses knew this (Gen 3:15). * he would have brothers, kinsmen (v. 3) (does God have brothers?) * he will arise and shepherd His flock in the strength of the LORD, in the majesty of the name **of the LORD his God** (v. 4). "Days of eternity" is simply a reference to the timeline of the plan God had laid out. The Jews'/Israelites' whole history was one of salvation - always looking forward to the big day when this special King would solve all their problems. 
Their wait would be a little longer - but at least he was here now; a shame they didn't believe him!
38,212
> > But as for you, Bethlehem Ephrathah, Too little to be among the clans of Judah, From you One will go forth for Me to be ruler in Israel. His goings forth are from long ago, From the days of eternity (yom olam). > [Micah 5](https://parabible.com/Micah/5):2 NASB > > > What are the "days of eternity" (yom olam) in Micah asserting about the ruler?
2019/01/10
[ "https://hermeneutics.stackexchange.com/questions/38212", "https://hermeneutics.stackexchange.com", "https://hermeneutics.stackexchange.com/users/28028/" ]
To understand the verse in question it helps to understand the military context: > > ESV Micah 5: > > > 1a Now **muster your troops**, O **daughterb of troops**; > **siege** is laid against us; > **with a rod they strike the judge** of Israel > on the cheek. > 2c But you, O Bethlehem Ephrathah, > who are too little [insignificant] **to be among the clans [armies] of Judah**, > from you shall come forth for me > one who is **to be ruler in Israel**, > **whose coming forth** is from of old, > from ancient days. > > > Footnotes: > a 1 Ch 4:14 in Hebrew > b 1 That is, city > c 2 Ch 5:1 in Hebrew > > > So the prophet is saying that from the city of David, Bethlehem, the house of bread, which was nothing but a few women and children, the promised ruler of Israel would arise. But then he says "whose coming forth...", which is apparently taken by the ESV to refer to his birth in Bethlehem. However (and I'm no Hebrew guru), the word is plural and is rendered in other translations as "whose comings forth" (i.e., given the context, "sorties" or "military campaigns"). Now, if I'm correct about this, then it would be, I believe, [in a notional sense](http://www.21stcr.org/multimedia-2015/1_pdf/ds_john_and_jewish_preexistence.pdf) similar to this: > > [Rom 4:17 KJV] 17 (**As it is written, I have made** thee a father of many nations,) before him whom he believed, [even] God, who quickeneth the dead, **and calleth those things which be not as though they were**. > > > Most important, I believe, is the concern in the original question that the form of one usage of OLAM might tell us the meaning of a similar use. However, that isn't necessarily the case. Context is always the key factor. The NET Bible renders Micah 5:2 like this: > > NET Bible Micah 5:2 As for you, Bethlehem Ephrathah, seemingly insignificant among the clans of Judah--from you a king will emerge who will rule over Israel on my behalf, one whose origins **are in the distant past**. 
> > > That's about all I think we can load OLAM with in actual usage. And if his military campaigns are from OLAM, then we must not imagine that his first battle was in eternity past. Surely there was no war on day one! The point is that the exploits of the Messiah have been in the scriptures from long ago, and in God's mind longer than that. All the scriptures agree on that. Notice this similar verbiage from the mouth of Gideon: > > [Jdg 6:14-16 NLT] (14) Then the LORD turned to him and said, "Go with the strength you have, and **rescue Israel** from the Midianites. I am sending you!" (15) "But Lord," Gideon replied, **"how can I rescue Israel? My clan is the weakest in the whole tribe of Manasseh, and I am the least in my entire family!" (16) The LORD said to him, "I will be with you. And you will destroy the Midianites as if you were fighting against one man."** > > > I should also point out that interpreting Micah 5:2 as saying that Jesus IS the "Ancient of Days" clashes with Daniel, where the Messiah ascends and appears before God, who is referred to as "the Ancient of Days": > > [Dan 7:13-14 KJV] 13 I saw in the night visions, and, behold, [one] like the Son of man came with the clouds of heaven, and came to the Ancient of days, and they brought him near before him. 14 And there was given him dominion, and glory, and a kingdom, that all people, nations, and languages, should serve him: his dominion [is] an everlasting dominion, which shall not pass away, and his kingdom [that] which shall not be destroyed. > > >
The Ancient of Days is a figure from the Book of Daniel. "As I looked, thrones were set in place, and the Ancient of Days took his seat. His clothing was as white as snow; the hair of his head was white like wool. His throne was flaming with fire, and its wheels were all ablaze." (Dan. 7:9) To answer the question, first we need to look at the dates of the two books. If the Book of Daniel was written first, then the answer could be yes. But if Micah was written first, then it is unlikely that this is what the prophet had in mind. Whether God had it in mind is beyond the scope of the question. I hold to the view of those scholars who [date the Book of Daniel](https://www.britannica.com/topic/The-Book-of-Daniel-Old-Testament) to the time of the Maccabean Revolt in the 2nd century BCE. But even if Daniel was written during the Babylonian Exile, Micah is earlier. > > The word of the Lord that came to Micah of Mo′resheth in the days of > Jotham, Ahaz, and Hezeki′ah, kings of Judah, which he saw concerning > Samar′ia and Jerusalem. (Micah 1:1) > > > By all accounts the above-named kings lived prior to the Babylonian Exile. So we may safely say that Micah's prophecy is earlier than Daniel's. Beyond that, we have the problem that Micah refers to "a ruler in Israel," while Daniel refers to a supernatural "son of man" coming with the clouds of heaven. Christians easily connect the two, but I have to insist that the prophets themselves probably did not. The question is about what *Micah* was thinking, not about what was in God's mind in the realm of eternity. **If the question were reversed we might get a different answer. In other words, Daniel could conceivably be referring to the ruler that Micah predicts. However, I think it is very unlikely that Micah, speaking several generations prior to Daniel, would refer to a person mentioned in Daniel's prophecy. Therefore, the answer must be 'No.'**
38,212
> > But as for you, Bethlehem Ephrathah, Too little to be among the clans of Judah, From you One will go forth for Me to be ruler in Israel. His goings forth are from long ago, From the days of eternity (yom olam). > [Micah 5](https://parabible.com/Micah/5):2 NASB > > > What are the "days of eternity" (yom olam) in Micah asserting about the ruler?
2019/01/10
[ "https://hermeneutics.stackexchange.com/questions/38212", "https://hermeneutics.stackexchange.com", "https://hermeneutics.stackexchange.com/users/28028/" ]
Autodidact asked: '***What are 'the days of eternity' (yom olam) in Micah*** [5:1 (BHS)] ***asserting about the ruler?***' --- **One** We first need to understand better the meaning of the term עלם/עולם (OLM/OULM [two variants commonly used in the TaNaKh]), translated '*eternity*' by the NASB along with a number of other translations. First of all, the basic meaning of עלם (OLM) is not 'to be eternal' but 'to be indistinct, indefinite' and, in reference to time, 'an unsighted time'. A semantically homologous term in Akkadian (ancient Babylonian) was DA'AMU, 'to become dark' (Chicago Assyrian Dictionary [= CAD] III:1). From this term, probably, was derived, through a number of linguistic steps, the English verb 'to dim' (referring to 'something hard to see'). Granted, **'eternity'** (NASB et al.) **could also, from a human viewpoint, be included in the well-established concept of 'indistinctness', because we humans cannot fully understand, or even imagine, what a time without a start and/or an end would be. Nevertheless, there are other situations of 'indistinctness' that are not necessarily linked with 'eternity'**. For example, we know from the Bible account that the earth surely had a start (ראשׁית) inside the creation time-frame (Genesis 1:1). Still, **Psalm 78:69 applies עלם (OLM) to the 'earth'**. The physical 'hills' on the earth also had a start, when God performed the separation between waters and soil (Genesis 1:9). Still, **Deuteronomy 33:15 applies עלם (OLM) to the 'hills'** (very interestingly, this passage has the same two sequential terms used in Micah 5:1 - BHS [קדם > עלם]). Again, **was an ancient Israelite slave able to serve his master 'eternally'? Exodus 21:6 says he may do so עלם (OLM)**. These examples should be sufficient to show that the best translation of עלם (OLM) is one that revolves around the concept of 'indefinite, indistinct time'. 
Granted, **sometimes עלם (OLM) is linked with 'eternity' (or the like), but other times not, as we have seen**. --- **Two** Returning to Micah 5:1 (BHS), **the Septuagint (LXX) translated the Hebrew term עלם (OLM) with αιωνος**, which, interestingly, has the same meaning as עלם (for one example, the αιωνος ['era', 'epoch'] mentioned in Matthew 24:3 & 28:20 had a start and, according to Jesus Christ, will also have an end). Probably a number of words derived from עלם (OLM) were used in the past, and we are still using some of these derivatives today. For example, Latin had (the '>' symbol indicates the passage of the term into other languages): - *olim*, 'at that time', 'long ago' > Anglo-Saxon *hwilum*, 'formerly, times ago' > Old English *whilom* > contemporary English *while* (as in expressions like 'a long while ago' or 'it takes a while to read'). * *velum*, 'a veil' (that is, 'something that hides') > English *veil*. English: - *gloom*, which retains all the letters of עלם (OLM) [according to John Parkhurst, 'A Hebrew and English Lexicon']. Icelandic: - *hilma*, 'to hide'. In view of the information presented above, **the 'ruler' cited by Micah had a start in time**. We may understand this on the basis of the MT verb used there, יצא ('to go out', 'to go forth', 'to spring up', et cetera), which necessarily implies **an action that starts at a given point in time**. So Micah's 'ruler' must possess a beginning. In this case, then, the binomial link between קדם and עלם points to a translation different from the concept of 'eternity'. In other words, **the origin of Micah's 'ruler' was 'lost in the mists of time', from the viewpoint of a common human**. These clues refer well, from the viewpoint of Christian Bible commentators, to the Messiah Jesus Christ. Thus, **translators are justified in translating with a derivative of 'to be eternal' only if the Bible context permits it**. 
--- **Three** As regards Mac's Musings' assertions about the claimed lack of 'precision' of the Hebrew language (regarding abstract concepts), I think Ruminator was right to doubt that. Mac's Musings said: "*Hebrew does not have any abstract nouns for a start. As stated above, Hebrew is excellent (and precise) for spiritual ideas and action but not abstract thought*." This seems a hasty conclusion, because to assert it we would need a corpus of Hebrew texts at least comparable in size to the corpus of ancient Greek texts. Unfortunately, the amount of Hebrew text at our disposal today is a tiny fraction of the huge amount of ancient Greek text. But even supposing the two corpora (ancient Hebrew vs. ancient Greek) were alike in size, we have to ask ourselves: what really is an abstract noun? And did the ancient Hebrew language possess abstract nouns? Cambridge Dictionary (online): "*A noun that refers to a thing that does not exist as a material object*". This being the case, we may easily test Mac's Musings' claim with the following pair of reference-book definitions of 'abstract noun': Collins Dictionary (online): "*A noun that refers to an abstract concept, as for example 'kindness'*". Just a moment. Ask ourselves: does Biblical Hebrew have a specific term for 'kindness'? Surely it does. It is חסד, and it appears in hundreds of occurrences in the TaNaKh. MacMillan Dictionary (online): "*A common noun that refers to a quality, idea, or feeling rather than to a person or a physical object. For example 'thought', 'problem', 'law', and 'opportunity' are all abstract nouns*." Oops! Sorry, but the TaNaKh does possess them all: 'thought' = חשׁב (as in Gen 6:5); 'problem' = חוד (as in Pro 1:6); 'law' = תורה (as in hundreds of occurrences in the TaNaKh; today the term 'Torah' is used worldwide); 'opportunity' = תאנה (as in Judges 14:4). 
So, without expanding this argument into other topics, like Hebrew subjective and non-subjective tenses or the 3D structure of prepositions, we may conclude that 'Biblical' Hebrew has abstract nouns, because that people (the ancient Israelites), like all people, needed in certain cases to think and to speak/write through abstractions. I hope this information helps.
The other answers seem overly complex for what should be a readily available solution/answer. **Is Micah 5:2 identifying the Messiah as "the Ancient of Days"?** No, for the reasons found in the context of the passage. > > But as for you, Bethlehem Ephrathah, Too little to be among the clans of Judah, From you One will go forth for Me to be ruler in Israel. His goings forth are from long ago, From the days of eternity (yom olam). Micah 5:2 NASB > > > The Jews misunderstood many things of the new age Jesus ushered in. But they had one thing sure in their hearts - > > Where is the One having been born King of the Jews?... > And having assembled all **the chief priests and scribes** of the people, he was inquiring of them **where the Christ was to be born**. 5 And they said to him, "In Bethlehem of Judea, for thus has it been written through the prophet: 6 'And you, Bethlehem, land of Judah, are by no means least among the rulers of Judah, for out of you will come forth One leading, who will shepherd My people Israel.'" Matt 2:2-6 > > > The Jews knew: * where the One was coming from. * the One would have a beginning, an 'origin', a birth - just like a normal person. * the One would be a descendant of David and Abraham - a human offspring. * they were expecting someone arranged by God, who would not *be* God! * the One was prophesied from the beginning - not *existing* from the beginning. * the origins were not of the birth, but of the plan, the promise, the prophecy of the birth - even Moses knew this (Gen 3:15). * he would have brothers, kinsmen (v. 3) (does God have brothers?) * he will arise and shepherd His flock in the strength of the LORD, in the majesty of the name **of the LORD his God** (v. 4). "Days of eternity" is simply a reference to the timeline of the plan God had laid out. The Jews'/Israelites' whole history was one of salvation - always looking forward to the big day when this special King would solve all their problems. 
Their wait would be a little longer - but at least he was here now; a shame they didn't believe him!
38,212
> > But as for you, Bethlehem Ephrathah, Too little to be among the clans of Judah, From you One will go forth for Me to be ruler in Israel. His goings forth are from long ago, From the days of eternity (yom olam). > [Micah 5](https://parabible.com/Micah/5):2 NASB > > > What are the "days of eternity" (yom olam) in Micah asserting about the ruler?
2019/01/10
[ "https://hermeneutics.stackexchange.com/questions/38212", "https://hermeneutics.stackexchange.com", "https://hermeneutics.stackexchange.com/users/28028/" ]
Autodidact asked: '***What are 'the days of eternity' (yom olam) in Micah*** [5:1 (BHS)] ***asserting about the ruler?***' --- **One** We first need to understand better the meaning of the term עלם/עולם (OLM/OULM [two variants commonly used in the TaNaKh]), translated '*eternity*' by the NASB along with a number of other translations. First of all, the basic meaning of עלם (OLM) is not 'to be eternal' but 'to be indistinct, indefinite' and, in reference to time, 'an unsighted time'. A semantically homologous term in Akkadian (ancient Babylonian) was DA'AMU, 'to become dark' (Chicago Assyrian Dictionary [= CAD] III:1). From this term, probably, was derived, through a number of linguistic steps, the English verb 'to dim' (referring to 'something hard to see'). Granted, **'eternity'** (NASB et al.) **could also, from a human viewpoint, be included in the well-established concept of 'indistinctness', because we humans cannot fully understand, or even imagine, what a time without a start and/or an end would be. Nevertheless, there are other situations of 'indistinctness' that are not necessarily linked with 'eternity'**. For example, we know from the Bible account that the earth surely had a start (ראשׁית) inside the creation time-frame (Genesis 1:1). Still, **Psalm 78:69 applies עלם (OLM) to the 'earth'**. The physical 'hills' on the earth also had a start, when God performed the separation between waters and soil (Genesis 1:9). Still, **Deuteronomy 33:15 applies עלם (OLM) to the 'hills'** (very interestingly, this passage has the same two sequential terms used in Micah 5:1 - BHS [קדם > עלם]). Again, **was an ancient Israelite slave able to serve his master 'eternally'? Exodus 21:6 says he may do so עלם (OLM)**. These examples should be sufficient to show that the best translation of עלם (OLM) is one that revolves around the concept of 'indefinite, indistinct time'. 
Granted, **sometimes עלם (OLM) is linked with 'eternity' (or the like), but other times not, as we have seen**. --- **Two** Returning to Micah 5:1 (BHS), **the Septuagint (LXX) translated the Hebrew term עלם (OLM) with αιωνος**, which, interestingly, has the same meaning as עלם (for one example, the αιωνος ['era', 'epoch'] mentioned in Matthew 24:3 & 28:20 had a start and, according to Jesus Christ, will also have an end). Probably a number of words derived from עלם (OLM) were used in the past, and we are still using some of these derivatives today. For example, Latin had (the '>' symbol indicates the passage of the term into other languages): - *olim*, 'at that time', 'long ago' > Anglo-Saxon *hwilum*, 'formerly, times ago' > Old English *whilom* > contemporary English *while* (as in expressions like 'a long while ago' or 'it takes a while to read'). * *velum*, 'a veil' (that is, 'something that hides') > English *veil*. English: - *gloom*, which retains all the letters of עלם (OLM) [according to John Parkhurst, 'A Hebrew and English Lexicon']. Icelandic: - *hilma*, 'to hide'. In view of the information presented above, **the 'ruler' cited by Micah had a start in time**. We may understand this on the basis of the MT verb used there, יצא ('to go out', 'to go forth', 'to spring up', et cetera), which necessarily implies **an action that starts at a given point in time**. So Micah's 'ruler' must possess a beginning. In this case, then, the binomial link between קדם and עלם points to a translation different from the concept of 'eternity'. In other words, **the origin of Micah's 'ruler' was 'lost in the mists of time', from the viewpoint of a common human**. These clues refer well, from the viewpoint of Christian Bible commentators, to the Messiah Jesus Christ. Thus, **translators are justified in translating with a derivative of 'to be eternal' only if the Bible context permits it**. 
--- **Three** As regards Mac's Musings' assertions about the claimed lack of 'precision' of the Hebrew language (regarding abstract concepts), I think Ruminator was right to doubt that. Mac's Musings said: "*Hebrew does not have any abstract nouns for a start. As stated above, Hebrew is excellent (and precise) for spiritual ideas and action but not abstract thought*." This seems a hasty conclusion, because to assert it we would need a corpus of Hebrew texts at least comparable in size to the corpus of ancient Greek texts. Unfortunately, the amount of Hebrew text at our disposal today is a tiny fraction of the huge amount of ancient Greek text. But even supposing the two corpora (ancient Hebrew vs. ancient Greek) were alike in size, we have to ask ourselves: what really is an abstract noun? And did the ancient Hebrew language possess abstract nouns? Cambridge Dictionary (online): "*A noun that refers to a thing that does not exist as a material object*". This being the case, we may easily test Mac's Musings' claim with the following pair of reference-book definitions of 'abstract noun': Collins Dictionary (online): "*A noun that refers to an abstract concept, as for example 'kindness'*". Just a moment. Ask ourselves: does Biblical Hebrew have a specific term for 'kindness'? Surely it does. It is חסד, and it appears in hundreds of occurrences in the TaNaKh. MacMillan Dictionary (online): "*A common noun that refers to a quality, idea, or feeling rather than to a person or a physical object. For example 'thought', 'problem', 'law', and 'opportunity' are all abstract nouns*." Oops! Sorry, but the TaNaKh does possess them all: 'thought' = חשׁב (as in Gen 6:5); 'problem' = חוד (as in Pro 1:6); 'law' = תורה (as in hundreds of occurrences in the TaNaKh; today the term 'Torah' is used worldwide); 'opportunity' = תאנה (as in Judges 14:4). 
So, without expanding this argument into other topics, like Hebrew subjective and non-subjective tenses or the 3D structure of prepositions, we may conclude that 'Biblical' Hebrew has abstract nouns, because that people (the ancient Israelites), like all people, needed in certain cases to think and to speak/write through abstractions. I hope this information helps.
The Ancient of Days is a figure from the Book of Daniel. "As I looked, thrones were set in place, and the Ancient of Days took his seat. His clothing was as white as snow; the hair of his head was white like wool. His throne was flaming with fire, and its wheels were all ablaze." (Dan. 7:9) To answer the question, first we need to look at the dates of the two books. If the Book of Daniel was written first, then the answer could be yes. But if Micah was written first, then it is unlikely that this is what the prophet had in mind. Whether God had it in mind is beyond the scope of the question. I hold to the view of those scholars who [date the Book of Daniel](https://www.britannica.com/topic/The-Book-of-Daniel-Old-Testament) to the time of the Maccabean Revolt in the 2nd century BCE. But even if Daniel was written during the Babylonian Exile, Micah is earlier. > > The word of the Lord that came to Micah of Mo′resheth in the days of > Jotham, Ahaz, and Hezeki′ah, kings of Judah, which he saw concerning > Samar′ia and Jerusalem. (Micah 1:1) > > > By all accounts the above-named kings lived prior to the Babylonian Exile. So we may safely say that Micah's prophecy is earlier than Daniel's. Beyond that, we have the problem that Micah refers to "a ruler in Israel," while Daniel refers to a supernatural "son of man" coming with the clouds of heaven. Christians easily connect the two, but I have to insist that the prophets themselves probably did not. The question is about what *Micah* was thinking, not about what was in God's mind in the realm of eternity. **If the question were reversed we might get a different answer. In other words, Daniel could conceivably be referring to the ruler that Micah predicts. However, I think it is very unlikely that Micah, speaking several generations prior to Daniel, would refer to a person mentioned in Daniel's prophecy. Therefore, the answer must be 'No.'**
64,131
I can understand that all animals would instinctively stay away from fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
Many things burn. Wood. Flesh. But in that flame could also be various toxic substances. Perhaps dragons do not fear flame so much as smoke. When dragons set whole towns on fire, they know that there may be alchemists or other industries that use dangerous materials. They learn quickly to stay away from smoke.
A lot of modern depictions of dragons make them basically a biological lighter/stovetop. The fire isn't inside them; they blow out a stream of fuel and ignite it as it comes out. So their ability to produce fire doesn't mean they are immune to it. While the face would likely be heat-resistant, and perhaps the scales would have some fire-resistant properties to protect them from the breath of other dragons, they are still flesh-and-blood creatures, and fire can still hurt them. Also, if you manage to ignite the fuel pouch, the results will probably not be pretty...
64,131
I can understand that all animals would instinctively stay away from fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
They are not really fire-proof ============================== Much as you can ruin a dishwasher with water, you can burn a dragon with fire - if you direct it at a weak spot. There's no reason why the underbelly of the dragon should be fireproof. Also, when it breathes fire, the fire doesn't stay close to the body for long. It's most likely ignited outside the body. Fire is much more dangerous when someone pushes it at you. [![fire spitting](https://i.stack.imgur.com/Cr8Cx.jpg)](https://i.stack.imgur.com/Cr8Cx.jpg) Would he run away from a fire? I'm quite sure he would.
Many things burn. Wood. Flesh. But in that flame could also be various toxic substances. Perhaps dragons do not fear flame so much as smoke. When dragons set whole towns on fire, they know that there may be alchemists or other industries that use dangerous materials. They learn quickly to stay away from smoke.
64,131
I can understand that all animals would instinctively stay away from a fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could help explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
As I recall it from the Old stories, dragons fight each other with their fire -- and have for eons before we came along. They are only relatively fire resistant, not fire proof. Usually dragon magic (and/or very spicy food) can be used to up the heat of their fire. The temptation to use Alchemical help for fire and flight have recently become a problem....
Much in the same way that the human oesophagus and stomach can withstand hydrochloric acid (used to digest food), a dragon's throat and mouth can withstand fire, but its skin cannot. A dragon doesn't fear its own fire that it's using as a weapon, but recognises the danger the fire of others represents.
64,131
I can understand that all animals would instinctively stay away from a fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could help explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
Many things burn. Wood. Flesh. But in that flame could also be various toxic substances. Perhaps dragons do not fear flame so much as smoke. When dragons set whole towns on fire, they know that there may be alchemists or other industries that use dangerous materials. They learn quickly to stay away from smoke.
**Molotov cocktails** If they exist in your would, dragons may not be intelligent enough to distinguish them from torches. Especially if molotov cocktails come with throwing handles like German WW1 grenades. Throwing one at a dragon may actually be an effective way to kill it, might burn through its wings and ground it, and generally be well worth avoiding for the dragon.
64,131
I can understand that all animals would instinctively stay away from a fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could help explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
Many things burn. Wood. Flesh. But in that flame could also be various toxic substances. Perhaps dragons do not fear flame so much as smoke. When dragons set whole towns on fire, they know that there may be alchemists or other industries that use dangerous materials. They learn quickly to stay away from smoke.
As an alternative to the answers above, it's learned behaviour. Young dragons, when they first learn to breathe fire, quickly figure out that they need to exhale very hard; otherwise the flame goes up their nostrils or down their throats, and hurts. The flaming torch triggers this behaviour and they instinctively shy away.
64,131
I can understand that all animals would instinctively stay away from a fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could help explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
Perhaps it is like this: a knight is skilled with a sword and may kill hundreds on the battlefield using this deadly weapon. However, even while wearing the best armor, the knight isn't standing still against an attack from another sword-wielding knight. He will move to avoid being hit by the opponent's weapon. The dragon's weapon is fire, which he may wield skillfully in battle, but he is not remaining still while others attempt to burn him.
As I recall it from the Old stories, dragons fight each other with their fire -- and have for eons before we came along. They are only relatively fire resistant, not fire proof. Usually dragon magic (and/or very spicy food) can be used to up the heat of their fire. The temptation to use Alchemical help for fire and flight have recently become a problem....
64,131
I can understand that all animals would instinctively stay away from a fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could help explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
**Fight fire with fire.** Remember, the fire-breathing dragon breathes fire for *some* reason. Even if the dragon doesn't realize that it breathes fire, the ability almost certainly evolved together with some particular set of behaviors. Most likely, this reason can be summed up as one or both of: * **Defense:** Warding off another * **Offense:** Attacking, to injure, drive off or kill another **If the dragon breathes fire in order to defend itself or someone it deems worth protecting** (mate, offspring, ...), then for other dragons to have a fear of the fire of another reduces the risk of greater injuries. [This is typical of aggressive behaviors](https://en.wikipedia.org/wiki/Dominance_%28ethology%29#Functions): they are *rituals* that have evolved to increase the chance of both individuals living another day. **If the dragon breathes fire in order to attack others,** then fire-breathing is a very aggressive or predatory behavior to which other dragons will very likely have evolved a response to either fight back, or flee. Fighting back increases the risk of injuries to all involved, and "fleeing" can easily be called "to be afraid" of whatever the individual flees in response to, even if there is no such intellectual response. When, presumably a human, carries fire, then the human takes the place of the other dragon. Unless the dragon's *default* response to *another fire-breathing dragon* is to fight back, even if the dragon can tell the difference between a human and a dragon, the dragon may well fall back to trying to increase the distance to the fear-invoking stimuli: the fire. In which case a human, anthropomorphizing, is likely to call it "afraid of fire". 
**Fear is simply an evolved response to situations that have turned out to be dangerous, for which evolutionary pressure ensures a particular response that increases the chance of the individual not being injured or killed.** Find a way to explain why a dragon would be afraid of another dragon's fire, and it's very likely that the same mechanism would apply in the case of a human with a torch. Or, failing that, a flamethrower.
They are not really fire-proof ============================== Much like ruining a dishwasher with water, you can burn a dragon with fire - if you direct it to a weak spot. There's no reason why the outer belly of the dragon should be fire-proof. Also, when it breathes fire, the fire doesn't stay close for long. It's most likely created outside the body. Fire is much more dangerous when someone pushes it at you. [![fire spitting](https://i.stack.imgur.com/Cr8Cx.jpg)](https://i.stack.imgur.com/Cr8Cx.jpg) Would he run away in a fire? Quite sure he would
64,131
I can understand that all animals would instinctively stay away from a fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could help explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
Perhaps it is like this: a knight is skilled with a sword and may kill hundreds on the battlefield using this deadly weapon. However, even while wearing the best armor, the knight isn't standing still against an attack from another sword-wielding knight. He will move to avoid being hit by the opponent's weapon. The dragon's weapon is fire, which he may wield skillfully in battle, but he is not remaining still while others attempt to burn him.
Fire is a waste product of fire-breathing dragons. Breathing fire, for a fire-breathing dragon, could be the equivalent of humans using human waste as a weapon. Consider it similar to a human mailing a turd to someone, or dropping one off on someone else's doorstep. This might not be so difficult psychologically to commit, but it is likely to be highly aversive if the same person becomes a victim of someone else doing it to them.
64,131
I can understand that all animals would instinctively stay away from a fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could help explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
He doesn't know he's breathing fire; he just knows that he's doing something in self-defense. And a torch is a heat source which might be threatening his or her little ones. And a burning torch sometimes makes weird cracking noises, and since dragons have very fine ears, the noises are disturbingly unpleasant. And the smell: dragons have very fine noses, and the burning guano in the straw really is disgusting to dragon noses. And you cannot eat it. The dragon once tried to eat a large torch and seriously burned his palate.
A lot of modern depictions of dragons make them basically a biological lighter/stove top. The fire isn't inside them, they blow out a stream of fuel and ignite it as it comes out. So their ability to produce fire doesn't mean they will be immune to it. While the face would likely be heat resistant, and perhaps the scales would have some fire resistant properties to protect them from the breath of other dragons, they are still flesh and blood creatures and fire can still hurt them. Also, if you manage to ignite the fuel pouch, the results will probably not be pretty...
64,131
I can understand that all animals would instinctively stay away from a fire; however, for a fire-breathing dragon to be warded off by torches seems puzzling to me. What could help explain such ironic behavior from a fire dragon?
2016/12/10
[ "https://worldbuilding.stackexchange.com/questions/64131", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/8400/" ]
They are not really fire-proof ============================== Much like ruining a dishwasher with water, you can burn a dragon with fire - if you direct it to a weak spot. There's no reason why the outer belly of the dragon should be fire proof. Also, when it breathes fire, the fire doesn't stay close too long. It's most likely created outside the body. Fire is much more dangerous when someone pushes it at you. [![fire spitting](https://i.stack.imgur.com/Cr8Cx.jpg)](https://i.stack.imgur.com/Cr8Cx.jpg) Would he run away in a fire? Quite sure he would
As an alternative to the answers above, it's learned behaviour. Young dragons, when they first learn to breathe fire, quickly figure out that they need to exhale very hard; otherwise the flame goes up their nostrils or down their throats, and hurts. The flaming torch triggers this behaviour and they instinctively shy away.
153,881
I am not able to understand the technical/logical reason behind the following scenario. Can you please help me explain the reason behind it? **Context:** A group of stocks makes up the index. It means the index is dependent on the stocks, right? The index future is dependent on the index. Also, index PUT & CALL options are dependent on the index. **Confusion/Question:** When we say there is short covering in CALL options, the index moves upwards. And when there is long unwinding in PUT options, the index moves downwards. **I've heard this, and also experienced this.** But how does index PUT and CALL option writing/unwriting affect the index (internally, stock prices)? The index and PUT/CALL options are separate entities. There can be a premium/discount compared to the index. But how can they affect the index price? I'm not able to understand the technical/logical reason behind it.
2022/11/28
[ "https://money.stackexchange.com/questions/153881", "https://money.stackexchange.com", "https://money.stackexchange.com/users/120327/" ]
Options do not directly affect the index. The index is just the weighted average of the stock prices within it. That is all. There *might* be some secondary effects as options on the underlying stock are *exercised*, but options are more often sold (or bought) to close rather than being exercised, since it's generally more profitable to do so. Exercising an option on a stock would create a marginal amount of buying/selling that could affect the index, but not dramatically. It would be no more of an effect than individual investors buying or selling stock.
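To make the first point concrete, here is a minimal Python sketch of a weighted index level; the tickers, prices, and weights are entirely made up. Only trades in the constituent stocks themselves enter the calculation, which is why option activity can't move the index directly:

```python
# Minimal sketch of a weighted index: the level is just a weighted
# average of constituent prices. Names, prices, and weights are made up.
prices = {"AAA": 150.0, "BBB": 80.0, "CCC": 40.0}
weights = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}  # weights sum to 1

def index_level(prices, weights, divisor=1.0):
    """Weighted average of constituent prices, scaled by a divisor."""
    return sum(weights[s] * prices[s] for s in prices) / divisor

base = index_level(prices, weights)   # 150*0.5 + 80*0.3 + 40*0.2 = 107.0
prices["AAA"] = 153.0                 # only a constituent trade moves it
print(index_level(prices, weights))   # 108.5
```

Nothing in the option market appears anywhere in this formula; options can only influence the level indirectly, through buying or selling of the constituents.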
Options can have a significant impact on the performance of an index. When options are used as part of a trading strategy, they can be used to hedge against market risk and to speculate on the direction of the index. If the options are correctly used, they can help to reduce volatility in the index and increase returns. However, if the options are misused, they can lead to losses and increased volatility. In addition, options can also create liquidity in the market. This can help to reduce the cost of trading, as well as increase the efficiency of the market.
153,881
I am not able to understand the technical/logical reason behind the following scenario. Can you please help me explain the reason behind it? **Context:** A group of stocks makes up the index. It means the index is dependent on the stocks, right? The index future is dependent on the index. Also, index PUT & CALL options are dependent on the index. **Confusion/Question:** When we say there is short covering in CALL options, the index moves upwards. And when there is long unwinding in PUT options, the index moves downwards. **I've heard this, and also experienced this.** But how does index PUT and CALL option writing/unwriting affect the index (internally, stock prices)? The index and PUT/CALL options are separate entities. There can be a premium/discount compared to the index. But how can they affect the index price? I'm not able to understand the technical/logical reason behind it.
2022/11/28
[ "https://money.stackexchange.com/questions/153881", "https://money.stackexchange.com", "https://money.stackexchange.com/users/120327/" ]
It has to do with [dealer hedging](https://www.investopedia.com/terms/d/deltahedging.asp). When a market maker sells call options, they're short delta/gamma. As the index rallies, delta gets shorter and they need to buy more futures to cover their delta, which drives the index up further. Conversely, if they sell put options and the index sells off, their delta gets longer and they need to sell futures to stay delta-neutral, leading to a vicious cycle. Similar price action can be observed when clients close out large long/short call/put positions or because of the monthly/quarterly expiry. Futures activity impacts cash equities since any differences are arbitraged away, i.e. futures vs ETF, ETF creation/redemption vs stocks. Here's a recent example ([Bloomberg](https://www.bloomberg.com/news/articles/2022-09-14/a-3-2-trillion-option-expiry-seen-worsening-post-cpi-stock-rout) [Archive](https://archive.ph/UCwTk)): > > A looming $3.2 trillion options expiry played a notable role in the Tuesday selloff. > > > As a hotter-than-expected inflation reading rocked Wall Street, a slew of bearish options that had become worthless during last week’s rally jumped back in the money, forcing market makers to sell underlying stocks to hedge their positions. > > >
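A rough numerical sketch of that hedging loop, using the textbook Black-Scholes call delta. The strike, maturity, rate, volatility, and position size below are made up purely for illustration, not taken from any real market:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_delta(S, K, T, r, sigma):
    """Black-Scholes delta of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(d1)

# Hypothetical dealer short 1000 index calls struck at 4000, 30 days out.
K, T, r, sigma, qty = 4000.0, 30 / 365, 0.03, 0.20, 1000

for S in (3950.0, 4000.0, 4050.0):
    hedge = qty * call_delta(S, K, T, r, sigma)  # futures the dealer must hold
    print(f"index {S:.0f}: hedge ~ {hedge:.0f} deltas")
```

The hedge requirement rises with the index, so a dealer who is short calls has to buy futures into a rally (and sell into a sell-off when short puts), which is the feedback mechanism described above.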
Options can have a significant impact on the performance of an index. When options are used as part of a trading strategy, they can be used to hedge against market risk and to speculate on the direction of the index. If the options are correctly used, they can help to reduce volatility in the index and increase returns. However, if the options are misused, they can lead to losses and increased volatility. In addition, options can also create liquidity in the market. This can help to reduce the cost of trading, as well as increase the efficiency of the market.
87,911
I recently replaced the vacuum instruments in my plane, a Piper PA-28-180, with a GI-275 Primary Flight Display (PFD). This plane is being used for training at my local flight club. Since the new system has been installed, the lead flight instructor has complained that the attitude indicator is reading 5 degrees too high. When in straight and level flight at a density altitude of around 10,000 feet, the attitude indicator is reading an attitude of 5 degrees nose up. It is my understanding that most airplanes require a positive angle of attack to generate lift. It seems to me that 5 degrees nose up would be about right for a 180 horsepower airplane with 2 people cruising at a density altitude of 10,000 feet. I am told that with the old vacuum systems pilots were instructed to move the position of the miniature airplane on the attitude indicator to be level with the horizon in level flight and that this is actually a fudge factor and not a true representation of attitude. With the PFD displays the FAA does not allow the pilot to move the position of the miniature airplane because they want the attitude indicator to read the correct attitude. Is my lead flight instructor correct that the attitude indicator should read level with the horizon in level flight?
2021/06/23
[ "https://aviation.stackexchange.com/questions/87911", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/30160/" ]
Essentially this question boils down to: what is the definition & reference for "0" pitch? "Level flight" would be a problematic answer, because as Ron Beyer notes, your deck angle for level flight varies with airspeed (among other things). With the old attitude indicators, one could set the airplane symbol to whatever you wanted, so setting it so that at "these" conditions "today", level flight matches up with 0 pitch displayed is viable. Maybe not wise, but that's its own discussion. (For instance, if you lose your airspeed indicator & have to use **known pitch & power settings**, you've just introduced a delta to every pitch setting that's published, by tweaking the attitude indicator like that.) With modern AHRS and INS/IRS/INU's, the answer to the question becomes simple... 0 pitch is whatever the airplane/software manufacturers say it is, and that's that... no adjustments available. That zero reference typically corresponds to a 0 AOA in level flight or level attitude on the ground or something like that. Thus, "level flight" generally corresponds to some amount of nose-up attitude. The 737 is about 3-4 degrees nose up, although the difference between clean & 200 knots vs clean & 320 knots is significant; then you start adding flaps or changing the gross weight & everything changes some more. I don't know if 5 degrees is right or not -- it certainly seems in the ballpark. The lead CFI may be right that it's off, but if he's claiming that it's off by the full 5 degrees I'd be doubtful. If the pitch shows zero when taxiing on a level surface, I'd tend to think that the system is okay. If it's showing +2 or +3 degrees at that point, it might be worth consulting whoever installed the new system & asking if that's right or if a sensor needs some adjustment.
The direction your thoughts have taken is absolutely correct. While we can all understand the simple expedient of aligning the airplane symbol with the horizon, the practice is incorrect when referring to actual pitch attitude. It's possible, but unlikely, that an uncomplicated design like a PA-28 would have a 0 deg pitch attitude in 'normal' straight and level flight at normal cruising speeds. 5 deg seems more likely correct, though it also seems to suggest a slower speed. Maybe the CFI is referring to the worst case to emphasize his objection. I remember transitioning from a 2 seat trainer to a high performance 19 seat turboprop and being embarrassed because I would dutifully put the nose of the airplane on the horizon of the ADI to level off and almost instantly develop a 2500 fpm descent as a precursor to the ensuing camel ride. The instructor had superannuated not a month earlier from flying the wide-body Airbus A300 and hadn't dealt with 250 hour trainee pilots before. Ironically, the A300 can be flown beautifully using simple pitch and thrust values. The only saving grace for me was that the series of trainees that followed were a shade worse, if anything. For me what followed was a day of sussing things out, and the fastest growing up I've ever done in a 24 hr period. To be fair to all of us trainees, a simple brief on 'pitch attitude flying' would have avoided the whole debacle and the waste of the first day's aircraft training (no 'luxury' of a simulator). From the question asked, it's clear that the issue has not died away for pilots who are essentially transitioning from mostly VFR to IFR style flights. For pilots of larger airplanes, knowing the pitch + thrust/torque/power for different phases of flight, and especially for straight and level, is probably the most basic, useful and unfailing tool given to them. It is the mainstay in tackling "flight with unreliable airspeed".
It is second nature for a heavy jet pilot to aim for somewhere around 7 deg pitch for level flight the moment the first stage of flaps/slats is extended. From the discussion and answers given so far, the record is far from clear and the following concepts need clarity: PITCH ATTITUDE: There should be no ambiguity in the understanding of pitch attitude - it is the angle made between the longitudinal axis and the local horizon of the Earth, the longitudinal axis being a straight line that runs in a fore-aft direction *and is the axis around which the airplane ROLLS.* LEVEL FLIGHT: refers to flight with no change in altitude, i.e. Vertical Speed (VS) = zero. Level flight can be performed at different speeds and during turns. Attitude and Heading Reference Unit (AHRU) boxes are installed to provide inputs for the PFD. Level attitude, i.e. Pitch = 0 and Bank = 0, for the PA-28 is defined in the diagram below: [![longitudinal and lateral levelling PA 28](https://i.stack.imgur.com/nLFHZ.jpg)](https://i.stack.imgur.com/nLFHZ.jpg) Once the AHRU is installed, connected and feeding the PFD, pitch and roll calibration offsets are set as required so as to display Pitch = 0 deg and Roll = 0 deg with the airplane level. These software-controlled offsets are available via a menu on the GI-275 interface. Given the above description and 'definitions', an important property of such systems is that they naturally align themselves to the Earth's local horizon; or, put another way, they exhibit the behavior of a 'bob' that hangs vertically along the line of gravitational force, and the local horizon is at 90 deg to this, which is what the PFD displays. This holds true whether the system contains mechanical gyros mounted in a gimbal or the ring laser type, and is basically governed by the properties of the inertial gyro and the rotation of the Earth around its geographical, or True, axis.
The upshot of all this is that the pitch attitude and AoA are governed by the laws of physics and cannot simply be set to match what is convenient to us, as doing so would render them erroneous under different circumstances. As for whether 5 deg is correct - you could check the calibration offsets as described above; this should not be difficult to ascertain from the shop that did the installation.
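The geometry behind why level flight shows a nose-up pitch can be sketched with the standard wings-level relation pitch attitude = flight path angle + angle of attack (theta = gamma + alpha). The numbers below are purely illustrative, not PA-28 performance data:

```python
def pitch_attitude(flight_path_angle_deg, angle_of_attack_deg):
    """theta = gamma + alpha (wings level): pitch is flight path plus AoA."""
    return flight_path_angle_deg + angle_of_attack_deg

# In level flight (gamma = 0), the displayed pitch equals the AoA, so a
# wing that needs 5 deg AoA at altitude shows 5 deg nose up on the PFD.
print(pitch_attitude(0.0, 5.0))   # 5.0 deg nose up, level flight
print(pitch_attitude(3.0, 4.0))   # 7.0 deg in a shallow climb
```

This is why a correctly calibrated attitude indicator is not expected to read zero in cruise: zero pitch in level flight would require zero AoA, which a conventional wing at cruise lift coefficients does not fly at.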
69,385
I'd really like to explore my fortresses in adventure mode, but I don't really like spending an hour to solve quests, gain followers, buy equipment and find the actual fortress. Is there some kind of shortcut to get me closer to what I want?
2012/05/23
[ "https://gaming.stackexchange.com/questions/69385", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/24900/" ]
The best shortcut is to prepare an armory for your adventurer at the entrance. Spend a few years making a suit of full masterwork adamantine gear, including adamantine underwear and mittens. Put all of it in lead/gold/platinum bins to prevent the items from scattering upon the fortress's death. After your adventurer makes it to the room, they will be nearly invincible (although some demons may still splatter you against a wall, and an adamantine chainshirt heated by dragonfire will burn your flesh the same as an iron chainshirt). Bludgeoning weapons and shield bashes do not benefit from adamantine's exceptional properties, heavier metals must be used instead for best results. Silver is the best metal for war hammers and maces sans moods (If you have a moody weaponsmith who likes war hammers, get him some platinum or lead, the resulting weapon will hit with the power of a thousand suns). This of course only works for adventurers which can equip dwarven-made armor. Dwarves and elves qualify. Most dwarven-made weapons should be good for any vanilla adventurer race, and having a sword that decapitates in one hit is always a plus; playing with a tiny race will obviously make regular weapons difficult. Actually finding the fortress can be quite difficult. Make note of where it is on the world map when you create it (you may also use reclaim to see it on the map), then try to match its position to what you see on the travel map in adventure mode. If that's not helping, try to start with a civ that is nearby the fortress site and use legends mode map viewer to figure out how the position of your starter town relates to the fortress's position.
The easiest way is probably to spawn an adventurer, make a note of where the spawn location was, and then build a Fortress nearby there. You will be able to spawn right near the fortress that way.
310,226
We currently have a heated internal debate as to whether the actual .NET assembly name should include the code's version number (e.g. CodeName02.exe or CompanyName.CodeName02.dll). Does anyone know of an authoritative source, like Microsoft, that provides guidance on this issue?
2008/11/21
[ "https://Stackoverflow.com/questions/310226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21410/" ]
I'm not aware of anything authoritative, but it would seem to me that using a consistent name would simplify everything from the process of installation scripts to documentation. Given that one can store the version as metadata on the file, I don't know why it would be needed in the filename. Why set yourself up for the hassle of having to account for differently-named files?
I think the main idea of putting a version number in the filename of a DLL is brought over from *DLL Hell*, where having multiple versions of the DLL, all with the same name caused problems (i.e. which actual version of a DLL do you have and does it have the required functions, etc). The .NET Framework handles dependencies completely different compared to the C/C++ DLL files that are more traditional, it is possible to have multiple versions of a library in the GAC, mainly because the GAC is a 'fake' folder that links to other files on the filesystem, in addition to being able to have the assemblies included with your executable install (same folder deploy, etc).
310,226
We currently have a heated internal debate as to whether the actual .NET assembly name should include the code's version number (e.g. CodeName02.exe or CompanyName.CodeName02.dll). Does anyone know of an authoritative source, like Microsoft, that provides guidance on this issue?
2008/11/21
[ "https://Stackoverflow.com/questions/310226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21410/" ]
Just look at the .NET Framework, or any other Microsoft product for that matter: putting a version number as part of the assembly name sounds like a bad idea. There is a place for this (and other information) in the assembly's metadata section (AssemblyInfo.cs). This information can be viewed in Windows Explorer (properties dialog, status bar, tooltip; they all show this information).
Microsoft used a suffix of 32 to denote 32-bit DLL versions so that those DLLs could coexist with the existing 16-bit DLL files.
310,226
We currently have a heated internal debate as to whether the actual .NET assembly name should include the code's version number (e.g. CodeName02.exe or CompanyName.CodeName02.dll). Does anyone know of an authoritative source, like Microsoft, that provides guidance on this issue?
2008/11/21
[ "https://Stackoverflow.com/questions/310226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21410/" ]
I'm not aware of anything authoritative, but it would seem to me that using a consistent name would simplify everything from the process of installation scripts to documentation. Given that one can store the version as metadata on the file, I don't know why it would be needed in the filename. Why set yourself up for the hassle of having to account for differently-named files?
Microsoft used a suffix of 32 to denote 32-bit DLL versions so that those DLLs could coexist with the existing 16-bit DLL files.
310,226
We currently have a heated internal debate as to whether the actual .NET assembly name should include the code's version number (e.g. CodeName02.exe or CompanyName.CodeName02.dll). Does anyone know of an authoritative source, like Microsoft, that provides guidance on this issue?
2008/11/21
[ "https://Stackoverflow.com/questions/310226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21410/" ]
I know DevExpress [website](http://devexpress.com) uses version indicators as part of their assembly names, such as XtraEditors8.2.dll. I guess the reason is that you want to be able to have multiple versions of the assembly located in the same directory. For example, we have about 15 smartclients that are distributed as part of the same shell/client. Each smartclient can have a different version of DevExpress controls, and therefore we need to be able to have XtraEditors7.1.dll and XtraEditors8.2.dll existing in the same directory. I would say that if you have common libraries that are dependencies of reusable modules and can exist in multiple versions (1.0, 1.1, 1.2, etc.), then it would be a valid argument that version numbers could be included in the name to avoid collisions, given that the common libs are not living in the GAC.
The version information can be contained in the AssemblyInfo file and can then be queried via reflection, etc. Some vendors include the version number in the name to make it easier to see at a glance what it is. The Microsoft DLL names don't contain a version number in the framework directory.
310,226
We currently have a heated internal debate as to whether the actual .NET assembly name should include the code's version number (e.g. CodeName02.exe or CompanyName.CodeName02.dll). Does anyone know of an authoritative source, like Microsoft, that provides guidance on this issue?
2008/11/21
[ "https://Stackoverflow.com/questions/310226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21410/" ]
Just look at the .NET Framework, or any other Microsoft product for that matter: putting a version number as part of the assembly name sounds like a bad idea. There is a place for this (and other information) in the assembly's metadata section (AssemblyInfo.cs). This information can be viewed in Windows Explorer (properties dialog, status bar, tooltip; they all show this information).
Since the version can be set as a property, isn't this semi-redundant? I'd also go out on a limb and suggest MS doesn't have a standard, given a quick look at their DLL names: user32.dll, tcpmon.dll, winsock.dll, etc.
310,226
We currently have a heated internal debate as to whether the actual .NET assembly name should include the code's version number (e.g. CodeName02.exe or CompanyName.CodeName02.dll). Does anyone know of an authoritative source, like Microsoft, that provides guidance on this issue?
2008/11/21
[ "https://Stackoverflow.com/questions/310226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21410/" ]
Just look at the .NET Framework, or any other Microsoft product for that matter. Putting a version number in the assembly name sounds like a bad idea. There is a place for this (and other information) in the assembly's metadata section (AssemblyInfo.cs). This information can be viewed in Windows Explorer (the properties dialog, status bar, and tooltip all show it).
The version information can be contained in the AssemblyInfo file and can then be queried via reflection. Some vendors include the version number in the name to make it easier to see at a glance what it is. The Microsoft DLL names in the framework directory don't contain a version number.
164,270
3.5e is a very exploitable game and I am vaguely aware of a number of ways to make it highly probable that my character will be the first to move in any given combat encounter. To name just a few examples: Pun-Pun has an arbitrarily high bonus to initiative, [Supreme Initiative](https://www.d20srd.org/srd/divine/divineAbilitiesFeats.htm#supremeInitiative) appears to do what it says on the tin, and [Celerity seems to stack in interesting ways](https://rpg.stackexchange.com/q/8965/53359). However, given that multiple tricks for moving first exist, it is clear that not every trick for moving first can guarantee that you always move first. In the interest of greater cheese, I want to know if there is any way to guarantee moving first. I do not require the user to always have this guarantee. For example, I am happy if it's only a once-per-day trick. However, when whatever cheese is used in order to get this guarantee, I want it to work regardless of both the opponent and the level of cheese used by said opponent. If the opponent can use some sort of Contingent Celerity abuse to move before I can, then my trick isn't good enough. Does any such trick for guaranteed first moves exist? Note: Given that I've already referenced Pun-Pun and Salient Divine Abilities, you may safely assume that any level of cheese is on the table. Furthermore, please do not forget that there exist methods to be immune to surprise rounds. For example, the Divine Oracle has the extraordinary ability Immune to Surprise.
2020/02/05
[ "https://rpg.stackexchange.com/questions/164270", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/53359/" ]
Sisyphean Food ============== **You can make it, but you won't be nourished** Performance of Creation from the UA College of Creation states: > > As an action, you can create one nonmagical item of your choice in an unoccupied space within 10 feet of you... > > > The created item disappears at the end of your next turn, unless you use your action to maintain it. Each time you use your action in this way, the item’s duration is extended to the end of your next turn, up to a maximum of 1 minute. If you maintain the item for the full minute, it continues to exist for a number of hours equal to your bard level. > > > Once the duration has concluded, the item will disappear. Which means whatever is in your stomach also disappears. You may feel full for a few hours, but that's about it. What about digestion and evacuation? ------------------------------------ This isn't really a part of D&D mechanics, so it'll be up to a DM to decide. If they feel that it's been inside you long enough to be processed and that resulting nutrients remain, which is doubtful as their source is gone as if it was never there, a DM could rule that the effects of eating remain. But that's really up to them.
The rules for this subclass have changed. Now it states the following: > > Also at 3rd level, as an action, you can channel the magic of the Song of Creation to create one nonmagical item of your choice in an unoccupied space within 10 feet of you. The item must appear on a surface or in a liquid that can support it. The gp value of the item can't be more than 20 times your bard level, and the item must be Medium or smaller. The item glimmers softly, and a creature can faintly hear music when touching it. The created item disappears after a number of hours equal to your proficiency bonus. For examples of items you can create, see the equipment chapter of the Player's Handbook. > > > > > Once you create an item with this feature, you can't do so again until you finish a long rest, unless you expend a spell slot of 2nd level or higher to use this feature again. You can have only one item created by this feature at a time; if you use this action and already have an item from this feature, the first one immediately vanishes. > > > > > The size of the item you can create with this feature increases by one size category when you reach 6th level (Large) and 14th level (Huge). > > > According to this, you can be nourished for the number of hours it lasts, but then you become hungry again. It could be used to satisfy yourself until you can find some real food, though.
12,391
I was just reading about various Altaic language grouping hypotheses on Wikipedia. According to the article, evidence for an Altaic language family that would include Turkic, Uralic, Mongolian, Tungusic, etc. has mostly been rejected by specialists in recent years, but the article fails to give details. Have more promising alternative hypotheses been proposed? Or is this just a matter of there being too little evidence to draw solid conclusions after so much time? Is there any ongoing work on the subject that I could read?
2015/05/18
[ "https://linguistics.stackexchange.com/questions/12391", "https://linguistics.stackexchange.com", "https://linguistics.stackexchange.com/users/9796/" ]
The alternative to the Altaic theory is that every language group included in there (that is Turkic, Mongolic, Tungusic, Japonic and Korean in its widest form, any theory that directly links Uralic with Altaic has been dead for a century now) constitutes an unrelated language family and any similarity (which *is* undeniably there) is due to borrowing and sprachbund effects. If Altaic languages are in fact related, they must have been separated from each other in an earlier time than, say, Indo-European. Historical linguistic tools are just not enough to prove any relationship beyond reasonable doubt past a couple of millennia. Evidence is just too weak to convince mainstream linguists. Add to that the fact that languages hypothesized to form the Altaic family are attested quite late compared to Indo-European and you're in a very difficult situation. To reiterate, the alternative (and more widely accepted) theory for Turkic is that it's just in its own language family and not related to anything else.
As a matter of fact, there still are a number of linguists believing that some or all of the families considered to belong to the putative Altaic stock are related one way or another. "Core Altaic" and "Extended Altaic" ----------------------------------- The traditional "core" members were **Turkic**, **Mongolic** and **Tungusic**, with ***Japonic*** and ***Koreic*** being added in more recent decades. As to Uralic, to my knowledge, none of the proponents of the Altaic hypothesis believe it belongs to the putative stock. "Ural(o)-Altaic" ---------------- The so-called **Ural-Altaic** hypothesis is now considered dead even by Altaicists, who consider Uralic either similar due to areal convergence, or related only very distantly, perhaps within the even more controversial *Nostratic* superstock. "Eurasiatic" & "Nostratic" -------------------------- Whether the families included in Altaic form a standalone taxon or not, proponents of the so-called "Nostratic" hypothesis, or a very similar "Eurasiatic" hypothesis, seem to believe that most or all of them belong to a larger stock together with at least **Indo-European**, **Uralic**, and depending on the version, variously also **Kartvelian**, **Dravidian**, **Eskimo-Aleutian**, **Chukotko-Kamchatkan**, **Yeniseian**, **Nivkh**, and even **Afro-Asiatic**, which is by some proponents considered its *sister* rather than mere *daughter*. "Trans-Eurasian" as the most recent work ---------------------------------------- I think it was Martine Robbeets, a firm believer in the unity of the various *Altaic* families, who first coined this less-worn name for the hypothetical stock. Hence, ***Trans-Eurasian*** is probably what you should try and look up if you want to find the most recent work on Altaic. Some of her papers can be found on-line, e.g. *[Swadesh 100 on Japanese, Korean and Altaic (2004)](http://www.orientalistik.uni-mainz.de/robbeets/2004_Swadesh_100.pdf)* [PDF]. An interesting discussion can be found in *[Transeurasian Verbal Morphology in a Comparative Perspective: Genealogy, Contact, Chance (2010)](https://books.google.cz/books?id=9zcxQqmkgE0C&hl=cs&source=gbs_navlinks_s)* [Google Books]. See also her bibliography *[here](http://www.orientalistik.uni-mainz.de/119.php)*. Summary and Where to Look Next ------------------------------ Contrary to what **@cyco130**'s answer suggests, there are widely accepted families that prove that historical linguistics can go, at least, twice as deep as just *"a couple of millennia"*, and the *temporal ceiling* itself is a matter of controversy, which might also be one of the reasons why the Altaic debate is far from settled now. On the other hand, **@cyco130** is also definitely right in that the evidence for *Altaic* is simply **not sufficient** and quite **imperfect** at the moment, and until the adherents make a major breakthrough (if they ever do), it is certainly safer and more correct not to *lump* the families together (unless you emphasize that you are using the label as a short-hand only). After all, **areal convergence** is just as interesting and worth investigating as **genetic relationships**. Now, to direct you to some further information, I have just come across [this blog article](https://robertlindsay.wordpress.com/2014/02/13/are-japanese-and-korean-altaic/) (now a dead link, but [available via Wayback](https://web.archive.org/web/20170529053239/https://robertlindsay.wordpress.com/2014/02/13/are-japanese-and-korean-altaic/)) that gives a nice summary. Some of the most recent critiques that can by no means be neglected have been written by **[Alexander Vovin](https://ehess.academia.edu/AlexanderVovin)** (an expert especially on Japonic and Koreic) and **[Stefan Georg](https://uni-bonn.academia.edu/StefanGeorg)** (an expert on Turkic, Tungusic and Mongolic, among other things). To be sure, some of the criticisms have been addressed or replied to, especially by G. S. Starostin & A. V. Dybo's *[In Defence of the Comparative Method, or The End of the Vovin Controversy](https://www.academia.edu/801734/In_Defense_Of_The_Comparative_Method_Or_The_End_Of_The_Vovin_Controversy)* (2009), or G. S. Starostin's *[Review of: Koreo-Japonica. A Re-Evaluation of a Common Genetic Origin. By Alexander Vovin.](https://www.academia.edu/5183539/Review_of_Koreo-Japonica._A_Re-Evaluation_of_a_Common_Genetic_Origin._By_Alexander_Vovin)* (2010), but it would appear **critical views are prevailing**.
10,688,362
Have searched far and wide, and can't yet seem to locate a connection string for the web.config file.
2012/05/21
[ "https://Stackoverflow.com/questions/10688362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1069845/" ]
You need an IBM DB2 driver: <http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.swg.im.dbclient.adonet.doc/doc/c0054118.html> Alternatively, you can use an OLE DB driver, which is not very fast. For the connection string, you can have a look at: <http://www.connectionstrings.com/ibm-db2>
What is the issue that you are facing? Are you not able to create a connection string for the given DB? Did you take a look at <http://www.connectionstrings.com/>?
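For reference, a minimal web.config fragment for the IBM DB2 .NET data provider looks roughly like the following. Server, port, database, and credentials are placeholders, and the exact keywords can vary by driver, so check connectionstrings.com for the variant that matches your installed provider:

```xml
<configuration>
  <connectionStrings>
    <!-- Placeholder values: adjust host, port, database and credentials. -->
    <add name="Db2Connection"
         connectionString="Server=myhost:50000;Database=SAMPLE;UID=myuser;PWD=mypassword;"
         providerName="IBM.Data.DB2" />
  </connectionStrings>
</configuration>
```

In code the entry is then read with `ConfigurationManager.ConnectionStrings["Db2Connection"]`.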
2,846,333
I just used Eclipse 3.5 to install the Google App Engine plug-in. The plug-in shows as installed in the update manager. However, I am not seeing the option to "New Web Application Project" (<http://code.google.com/appengine/docs/java/tools/eclipse.html>). I also don't see anything Google-related when I type Google into the search bar under Windows > Preferences. There were no errors at the time of installation, and I was asked if I wanted to restart Eclipse, clicked yes, and it restarted accordingly. Am I missing something?
2010/05/17
[ "https://Stackoverflow.com/questions/2846333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/301816/" ]
Solution: move the Eclipse folder to X:\eclipse. I also downloaded Eclipse 3.5 SR2 (Eclipse IDE for Java Developers), installed the GWT plugin, and restarted Eclipse as instructed, and the **File > New > Web Application Project** menu item did not exist in Eclipse. It was installed in p:\eclipse-java-galileo-SR2-win32\eclipse. I moved the folder to p:\eclipse, uninstalled and reinstalled the plugin, and now it works. Looks to me like a bug in the GWT plugin (Win XP).
I'm pretty sure you're right about the problem having to do with permissions writing to the Program Files folder. I'm on Windows 7 and had the same problem. I also installed Eclipse under the Program Files folder, and got around this issue by running Eclipse "as administrator" when installing the Google plugins. Worried that I would have similar problems installing other plugins, I moved my Eclipse installation to my C:\ directory and reinstalled the Google plugins without running as administrator. That installation went just fine. Someone else in [this thread](http://code.google.com/p/google-web-toolkit/issues/detail?id=4168) said he had that problem on one Windows 7 computer but not another. My guess is that he installed Eclipse in the Program Files directory on his problem computer.
2,846,333
I just used Eclipse 3.5 to install the Google App Engine plug-in. The plug-in shows as installed in the update manager. However, I am not seeing the option to "New Web Application Project" (<http://code.google.com/appengine/docs/java/tools/eclipse.html>). I also don't see anything Google-related when I type Google into the search bar under Windows > Preferences. There were no errors at the time of installation, and I was asked if I wanted to restart Eclipse, clicked yes, and it restarted accordingly. Am I missing something?
2010/05/17
[ "https://Stackoverflow.com/questions/2846333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/301816/" ]
Or easier: go to your Eclipse folder > right-click > uncheck the read-only box. Done.
I'm pretty sure you're right about the problem having to do with permissions writing to the Program Files folder. I'm on Windows 7 and had the same problem. I also installed Eclipse under the Program Files folder, and got around this issue by running Eclipse "as administrator" when installing the Google plugins. Worried that I would have similar problems installing other plugins, I moved my Eclipse installation to my C:\ directory and reinstalled the Google plugins without running as administrator. That installation went just fine. Someone else in [this thread](http://code.google.com/p/google-web-toolkit/issues/detail?id=4168) said he had that problem on one Windows 7 computer but not another. My guess is that he installed Eclipse in the Program Files directory on his problem computer.
2,846,333
I just used Eclipse 3.5 to install the Google App Engine plug-in. The plug-in shows as installed in the update manager. However, I am not seeing the option to "New Web Application Project" (<http://code.google.com/appengine/docs/java/tools/eclipse.html>). I also don't see anything Google-related when I type Google into the search bar under Windows > Preferences. There were no errors at the time of installation, and I was asked if I wanted to restart Eclipse, clicked yes, and it restarted accordingly. Am I missing something?
2010/05/17
[ "https://Stackoverflow.com/questions/2846333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/301816/" ]
Solution: move the Eclipse folder to X:\eclipse. I also downloaded Eclipse 3.5 SR2 (Eclipse IDE for Java Developers), installed the GWT plugin, and restarted Eclipse as instructed, and the **File > New > Web Application Project** menu item did not exist in Eclipse. It was installed in p:\eclipse-java-galileo-SR2-win32\eclipse. I moved the folder to p:\eclipse, uninstalled and reinstalled the plugin, and now it works. Looks to me like a bug in the GWT plugin (Win XP).
Or easier: go to your Eclipse folder > right-click > uncheck the read-only box. Done.
1,958,061
I have a txt log file. How can I make it look in the Eclipse editor just as if it were console output (e.g. links underlined)? It is a bit sad that plugins called "Log viewer" do not highlight links the way the Console view does. That's the worst thing about such plugins.
2009/12/24
[ "https://Stackoverflow.com/questions/1958061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/217067/" ]
As chburd mentions, **[NTail](http://www.certiv.net/products/ntail.html)** is a good candidate (as I said in "[Can eclipse monitor an arbitrary log file in the Console view?](https://stackoverflow.com/questions/1069245/can-eclipse-monitor-an-arbitrary-log-file-in-the-console-view/1069468#1069468)"). ![alt text](https://i.stack.imgur.com/IhVkR.jpg) You can also define your own Console Viewer (see [**this source**](http://kickjava.com/src/org/eclipse/ui/console/TextConsoleViewer.java.htm)) to provide your own hyperlinks.
I have installed [NTail](http://www.certiv.net/projects/ntail.html); I think it is what you are looking for.
1,958,061
I have a txt log file. How can I make it look in the Eclipse editor just as if it were console output (e.g. links underlined)? It is a bit sad that plugins called "Log viewer" do not highlight links the way the Console view does. That's the worst thing about such plugins.
2009/12/24
[ "https://Stackoverflow.com/questions/1958061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/217067/" ]
I have installed [NTail](http://www.certiv.net/projects/ntail.html); I think it is what you are looking for.
I use [LOG Viewer](https://github.com/anb0s/logviewer) for Eclipse, which works very well and allows customization.
1,958,061
I have a txt log file. How can I make it look in the Eclipse editor just as if it were console output (e.g. links underlined)? It is a bit sad that plugins called "Log viewer" do not highlight links the way the Console view does. That's the worst thing about such plugins.
2009/12/24
[ "https://Stackoverflow.com/questions/1958061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/217067/" ]
As chburd mentions, **[NTail](http://www.certiv.net/products/ntail.html)** is a good candidate (as I said in "[Can eclipse monitor an arbitrary log file in the Console view?](https://stackoverflow.com/questions/1069245/can-eclipse-monitor-an-arbitrary-log-file-in-the-console-view/1069468#1069468)"). ![alt text](https://i.stack.imgur.com/IhVkR.jpg) You can also define your own Console Viewer (see [**this source**](http://kickjava.com/src/org/eclipse/ui/console/TextConsoleViewer.java.htm)) to provide your own hyperlinks.
Aptana (www.aptana.com) has a really nice tail view feature for Eclipse. Highly recommend it!
1,958,061
I have a txt log file. How can I make it look in the Eclipse editor just as if it were console output (e.g. links underlined)? It is a bit sad that plugins called "Log viewer" do not highlight links the way the Console view does. That's the worst thing about such plugins.
2009/12/24
[ "https://Stackoverflow.com/questions/1958061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/217067/" ]
As chburd mentions, **[NTail](http://www.certiv.net/products/ntail.html)** is a good candidate (as I said in "[Can eclipse monitor an arbitrary log file in the Console view?](https://stackoverflow.com/questions/1069245/can-eclipse-monitor-an-arbitrary-log-file-in-the-console-view/1069468#1069468)"). ![alt text](https://i.stack.imgur.com/IhVkR.jpg) You can also define your own Console Viewer (see [**this source**](http://kickjava.com/src/org/eclipse/ui/console/TextConsoleViewer.java.htm)) to provide your own hyperlinks.
I use [LOG Viewer](https://github.com/anb0s/logviewer) for Eclipse, which works very well and allows customization.
1,958,061
I have a txt log file. How can I make it look in the Eclipse editor just as if it were console output (e.g. links underlined)? It is a bit sad that plugins called "Log viewer" do not highlight links the way the Console view does. That's the worst thing about such plugins.
2009/12/24
[ "https://Stackoverflow.com/questions/1958061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/217067/" ]
Aptana (www.aptana.com) has a really nice tail view feature for Eclipse. Highly recommend it!
I use [LOG Viewer](https://github.com/anb0s/logviewer) for Eclipse, which works very well and allows customization.
58,194
> > **Possible Duplicate:** > > [How do comments work?](https://meta.stackexchange.com/questions/19756/how-do-comments-work) > > > Most of the time I don't get a comment button on people's questions and answers. Is there a reason? Or am I doing something wrong?
2010/07/23
[ "https://meta.stackexchange.com/questions/58194", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/-1/" ]
You need 50 reputation points to comment on other people's posts. <https://stackoverflow.com/faq>
See the SO FAQ. You need a reputation of 50 or more to leave comments on others questions and answers. Keep plugging away, it will surely come.
58,194
> > **Possible Duplicate:** > > [How do comments work?](https://meta.stackexchange.com/questions/19756/how-do-comments-work) > > > Most of the time I don't get a comment button on people's questions and answers. Is there a reason? Or am I doing something wrong?
2010/07/23
[ "https://meta.stackexchange.com/questions/58194", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/-1/" ]
You need 50 reputation points to comment on other people's posts. <https://stackoverflow.com/faq>
See <https://stackoverflow.com/privileges/comment>: > > Please note that you can always comment on your own posts, and any part of your questions. However, commenting on other people's posts is a privilege. > > > That page indicates that you need 50 reputation points to be able to comment on other people's posts. The relevant part of the SO FAQ is at <https://stackoverflow.com/faq#reputation>.
57,409
It seems to me that "Cannot Recover" does everything that "Lost or Consumed" does and more. Why does this card have both icons? What does "Lost or Consumed" signify that "Cannot Recover" does not? > > If the performed action from a card contains a "Lost or Consumed" symbol in the lower right of the action field, the card is instead placed in a player’s lost pile. Lost cards can only be returned to a player’s hand during a scenario by using a special recover action. > > > And > > Certain abilities allow a player to recover discarded or lost ability cards. This means that the player can look > through his or her discard or lost pile (or discarded or lost cards in his or her active area), select up to a number > of cards specified in the ability, and immediately return them to his or her hand. Some cards, however, cannot > be recovered or refreshed once lost. This is denoted by the "Cannot Recover" symbol. This symbol applies to the card no > matter how the card was lost or consumed > > > It seems that the only difference is that the card ends up in the lost pile instead of the discard pile, but since it cannot be recovered anyway, why/when does this matter?
2022/05/11
[ "https://boardgames.stackexchange.com/questions/57409", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/2737/" ]
*I'm answering my own question, because no one else can read my mind, and without that the question is not really answerable. I would delete the question, but it already has an answer.* Your confusion stemmed from the fact that you were comparing the digital version's cards (which you own) against the physical game's rules (which you do not have), and because the icons are different in the digital version vs the physical one, you mixed up "Lost or Consumed" and "Cannot Recover": you thought they were the other way around. Once you realise that, everything falls into place. > > It seems to me that "Cannot Recover" does everything that "Lost or Consumed" does and more. > > > This is mostly true, but "Lost or Consumed" is much more frequent than "Cannot Recover". These cards go to the Lost pile, and you do not get them back during your short/long rest, but you can still recover them. > > Why does this card have both icons? What does "Lost or Consumed" signify that "Cannot Recover" does not? > > > If the card only had "Cannot Recover" and did not have "Lost or Consumed", it would go to the Discard pile, not to the Lost pile. And then you would be able to get it back during your rest, since resting is not the same as "recovering". This was not the intention for these cards, so they get both icons. In fact, for most (if not all) cards that have "Cannot Recover", the intention is that the ability is only used once per combat; that's why those cards *also* have the "Lost or Consumed" icon. It would have been possible to print just a single "Cannot Recover" icon on those cards and add a rule that such cards should also be ignored in the Discard pile during rest, but it seems that would only add confusion without providing any real benefit.
Some cards are lost or consumed when you play them. When you play a normal card, it goes into your discard pile and the next round you'll get that card again. Some cards have a one-time-use ability. After you play such a card and use the ability (top or bottom half of the card, in this case the bottom part) that has the icon, the card goes into your lost pile. The cards in your lost pile cannot reappear in your hand. When you take the resting action you can regain the cards from your discard pile to your draw pile. By resting you put a random card from your draw pile into your lost pile. [![enter image description here](https://i.stack.imgur.com/H9m1B.png)](https://i.stack.imgur.com/H9m1B.png) This card lets you regain a card from your lost pile and add it to your discard pile. To do this you have to sacrifice this card (place it in the lost pile) [![enter image description here](https://i.stack.imgur.com/VvJmy.png)](https://i.stack.imgur.com/VvJmy.png). When this card is in your lost pile you cannot recover it, independent of how it ended up in your lost pile [![enter image description here](https://i.stack.imgur.com/EWvKG.png)](https://i.stack.imgur.com/EWvKG.png). So the [![enter image description here](https://i.stack.imgur.com/VvJmy.png)](https://i.stack.imgur.com/VvJmy.png) icon means that when played you have to place this card in the lost pile, and the [![enter image description here](https://i.stack.imgur.com/EWvKG.png)](https://i.stack.imgur.com/EWvKG.png) icon means you cannot regain this card with a [![enter image description here](https://i.stack.imgur.com/NsjaN.png)](https://i.stack.imgur.com/NsjaN.png) ability, no matter how it ended up in the lost pile (resting, playing or other events).
4,513
After several years of agile and scrum hype around, I'd expect the first well-known reactions from "bullshit!" to "magical!" to be gone and I should be able to find some really elaborated articles that question the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
> > I should be able to find some really elaborated articles that question the highly praised benefits of those modern management techniques... > > > Unfortunately, Scrum has nothing to do with management techniques. It's a framework which tells you how to do software development. Regarding managers, it says that a *good* team doesn't need managers; it needs people who keep away those who want to disturb the team. I have no idea why people thought that managers would let themselves be put aside. > > Is there a conspiracy going on? > > > My version: the original idea of Scrum was great, but then came the consultants, who turned a good initiative into a business. You can attend courses and workshops for a lot of money, but they won't really help you with the challenges of the job. As long as there's money in this business, you won't read any elaborated articles about it. > > Is scrum really that great? > > > My main problem is that Scrum is taught as the ultimate solution for every possible problem in the software industry, but **there isn't a good way to scale it, use it for maintenance, or handle cases where teams change often**. It says that teams should be kept together almost forever, or that if you test well you won't need maintenance, etc. Tell this to your boss and there is a good chance that he'll finally have a good laugh. It is unrealistic. I can understand that an organization wants to change because it feels that things could go better, and thanks to the hype, they'll find Scrum. **Instead of trying out Scrum, they should find out what their real problem is.** Nevertheless, it is a good thing to have a look at successful companies like Fog Creek, GitHub, and 37signals and find out what their secret is (none of these companies do Scrum or follow any other Agile framework; they use certain techniques, but that is a different story).
There's a well-known Ken Schwaber quote: > > Scrum doesn't solve your problems; Scrum exposes your problems. > > > He's exactly right. You still have to find and fix the problems that are keeping you from delivering high quality software more quickly. I've been in this business for three decades, and have managed teams for almost two decades, on and off. I've gone in and fixed a couple dozen companies in the past six years, using a combination of Scrum and Kanban. Lean/Agile approaches to software development work very well, if you understand the underlying philosophy. If you don't, if you just proceed with a 'cargo cult' rote form of Scrum (or another Lean/Agile approach), you will not get the results you're looking for. If you want a specific criticism, then let's look at the most common failure mode for Lean/Agile? When an organization (and specifically its leadership) is unwilling to acknowledge and fix the problems that this approach exposes. In fact, this is the only failure mode... and it is a universal failure mode.
4,513
After several years of agile and scrum hype around, I'd expect the first well-known reactions from "bullshit!" to "magical!" to be gone and I should be able to find some really elaborated articles that question the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
> > I should be able to find some really elaborated articles that question the highly praised benefits of those modern management techniques... > > > Unfortunately, Scrum has nothing to do with management techniques. It's a framework which tells you how to do software development. Regarding managers, it says that a *good* team doesn't need managers; it needs people who keep away those who want to disturb the team. I have no idea why people thought that managers would let themselves be put aside. > > Is there a conspiracy going on? > > > My version: the original idea of Scrum was great, but then came the consultants, who turned a good initiative into a business. You can attend courses and workshops for a lot of money, but they won't really help you with the challenges of the job. As long as there's money in this business, you won't read any elaborated articles about it. > > Is scrum really that great? > > > My main problem is that Scrum is taught as the ultimate solution for every possible problem in the software industry, but **there isn't a good way to scale it, use it for maintenance, or handle cases where teams change often**. It says that teams should be kept together almost forever, or that if you test well you won't need maintenance, etc. Tell this to your boss and there is a good chance that he'll finally have a good laugh. It is unrealistic. I can understand that an organization wants to change because it feels that things could go better, and thanks to the hype, they'll find Scrum. **Instead of trying out Scrum, they should find out what their real problem is.** Nevertheless, it is a good thing to have a look at successful companies like Fog Creek, GitHub, and 37signals and find out what their secret is (none of these companies do Scrum or follow any other Agile framework; they use certain techniques, but that is a different story).
That's a very good question indeed, and no: Scrum is not that great if you take it as a bullet-proof, cheap, one-size-fits-all way to let your team/company magically solve their problems... it is great if you know when, why, and how to use it. That said, there are several critiques out there that might be interesting for you. Some of them are not inherently about Scrum, but they nevertheless question knowledge that, in the Scrum world, is taken for granted or "trusted" more as dogma than as verified experience. A good book on this is [The Leprechauns of Software Engineering](https://leanpub.com/leprechauns). But you can also have a look at: * [This video by M. Fowler and N. Ford, "Explaining Agile"](https://www.youtube.com/watch?v=GE6lbPLEAzc) * [Why Scrum Should Basically Just Die In A Fire](https://web.archive.org/web/20150206025047/http://gilesbowkett.blogspot.com/2014/09/why-scrum-should-basically-just-die-in.html) * [The Failure of Agile](http://blog.toolshed.com/2015/05/the-failure-of-agile.html) * [It's time for Scrum to evolve](http://agileconsulting.blogspot.be/2010/02/it-time-for-scrum-to-evolve.html) Hope this helps :)
4,513
After several years of agile and scrum hype, I'd expect the first well-known reactions, from "bullshit!" to "magical!", to be gone, and I'd expect to find some really elaborated articles questioning the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
No, scrum isn't that "great." It is just another framework. It may or may not work for you. But the reason I think it hasn't become a "Mac vs PC" war is that the agile community knows that. There are few people in the agile community who will say agile is the one and only way, and most of those don't tend to stick around long, as it pretty much flies in the face of the tenets of agile. Much of the agile community is in the camp of "give it a try," not "you must convert." A low barrier to entry and a generally welcoming community mean less conflict. Personally, I've always maintained it is just one of the many tools in my toolbox. I find I reach for it more and more, but it still sits in the same box as the PMBoK, conflict management training, Manager-Tools, customer service training, and other tools.
> > I should be able to find some really elaborated articles that question the highly praised benefits of those modern management techniques... > > > Unfortunately, Scrum has nothing to do with management techniques. It's a framework which tells you how to do software development. Regarding managers, it says that a *good* team doesn't need managers; it needs people who keep away those who want to disturb the team. I have no idea why people thought that managers would let themselves be put aside. > > Is there a conspiracy going on? > > > My version: the original idea of Scrum was great, but then came the consultants, who turned a good initiative into a business. You can attend courses and workshops for a lot of money, but they won't really help you with the challenges of the job. As long as there's money in this business, you won't read any elaborated article about it. > > Is scrum really that great? > > > My main problem is that Scrum is taught as the ultimate solution for every possible problem in the software industry, but **there isn't a good way to scale it, use it for maintenance, or handle cases when teams are changing often**. It says that teams should be kept together almost forever, or that if you test well you won't need maintenance, etc. Tell this to your boss and there is a good chance that he'll finally have a good laugh. It is unrealistic. I can understand that an organization wants to change because it feels that things could go better, and thanks to the hype, they'll find Scrum. **Instead of trying out Scrum, they should find out what their real problem is.** Nevertheless, it is a good idea to have a look at successful companies like Fog Creek, GitHub, and 37signals and find out what their secret is (none of these companies do Scrum or follow any other Agile framework; they use certain techniques, but that is a different story).
4,513
After several years of agile and scrum hype, I'd expect the first well-known reactions, from "bullshit!" to "magical!", to be gone, and I'd expect to find some really elaborated articles questioning the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
No, scrum isn't that "great." It is just another framework. It may or may not work for you. But the reason I think it hasn't become a "Mac vs PC" war is that the agile community knows that. There are few people in the agile community who will say agile is the one and only way, and most of those don't tend to stick around long, as it pretty much flies in the face of the tenets of agile. Much of the agile community is in the camp of "give it a try," not "you must convert." A low barrier to entry and a generally welcoming community mean less conflict. Personally, I've always maintained it is just one of the many tools in my toolbox. I find I reach for it more and more, but it still sits in the same box as the PMBoK, conflict management training, Manager-Tools, customer service training, and other tools.
There's a well-known Ken Schwaber quote: > > Scrum doesn't solve your problems; Scrum exposes your problems. > > > He's exactly right. You still have to find and fix the problems that are keeping you from delivering high-quality software more quickly. I've been in this business for three decades, and have managed teams for almost two decades, on and off. I've gone in and fixed a couple dozen companies in the past six years, using a combination of Scrum and Kanban. Lean/Agile approaches to software development work very well if you understand the underlying philosophy. If you don't, if you just proceed with a 'cargo cult' rote form of Scrum (or another Lean/Agile approach), you will not get the results you're looking for. If you want a specific criticism, consider the most common failure mode for Lean/Agile: an organization (and specifically its leadership) that is unwilling to acknowledge and fix the problems this approach exposes. In fact, this is the only failure mode... and it is a universal failure mode.
4,513
After several years of agile and scrum hype, I'd expect the first well-known reactions, from "bullshit!" to "magical!", to be gone, and I'd expect to find some really elaborated articles questioning the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
No, scrum isn't that "great." It is just another framework. It may or may not work for you. But the reason I think it hasn't become a "Mac vs PC" war is that the agile community knows that. There are few people in the agile community who will say agile is the one and only way, and most of those don't tend to stick around long, as it pretty much flies in the face of the tenets of agile. Much of the agile community is in the camp of "give it a try," not "you must convert." A low barrier to entry and a generally welcoming community mean less conflict. Personally, I've always maintained it is just one of the many tools in my toolbox. I find I reach for it more and more, but it still sits in the same box as the PMBoK, conflict management training, Manager-Tools, customer service training, and other tools.
That's a very good question indeed, and no: Scrum is not that great if you take it as a bullet-proof, cheap, one-size-fits-all way to let your team/company magically solve their problems... it is great if you know when, why, and how to use it. That said, there are several critiques out there that might be interesting for you. Some of them are not inherently about Scrum, but they nevertheless question knowledge that, in the Scrum world, is taken for granted or "trusted" more as dogma than as verified experience. A good book on this is [The Leprechauns of Software Engineering](https://leanpub.com/leprechauns). But you can also have a look at: * [This video by M. Fowler and N. Ford, "Explaining Agile"](https://www.youtube.com/watch?v=GE6lbPLEAzc) * [Why Scrum Should Basically Just Die In A Fire](https://web.archive.org/web/20150206025047/http://gilesbowkett.blogspot.com/2014/09/why-scrum-should-basically-just-die-in.html) * [The Failure of Agile](http://blog.toolshed.com/2015/05/the-failure-of-agile.html) * [It's time for Scrum to evolve](http://agileconsulting.blogspot.be/2010/02/it-time-for-scrum-to-evolve.html) Hope this helps :)
4,513
After several years of agile and scrum hype, I'd expect the first well-known reactions, from "bullshit!" to "magical!", to be gone, and I'd expect to find some really elaborated articles questioning the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
There's a well-known Ken Schwaber quote: > > Scrum doesn't solve your problems; Scrum exposes your problems. > > > He's exactly right. You still have to find and fix the problems that are keeping you from delivering high-quality software more quickly. I've been in this business for three decades, and have managed teams for almost two decades, on and off. I've gone in and fixed a couple dozen companies in the past six years, using a combination of Scrum and Kanban. Lean/Agile approaches to software development work very well if you understand the underlying philosophy. If you don't, if you just proceed with a 'cargo cult' rote form of Scrum (or another Lean/Agile approach), you will not get the results you're looking for. If you want a specific criticism, consider the most common failure mode for Lean/Agile: an organization (and specifically its leadership) that is unwilling to acknowledge and fix the problems this approach exposes. In fact, this is the only failure mode... and it is a universal failure mode.
That's a very good question indeed, and no: Scrum is not that great if you take it as a bullet-proof, cheap, one-size-fits-all way to let your team/company magically solve their problems... it is great if you know when, why, and how to use it. That said, there are several critiques out there that might be interesting for you. Some of them are not inherently about Scrum, but they nevertheless question knowledge that, in the Scrum world, is taken for granted or "trusted" more as dogma than as verified experience. A good book on this is [The Leprechauns of Software Engineering](https://leanpub.com/leprechauns). But you can also have a look at: * [This video by M. Fowler and N. Ford, "Explaining Agile"](https://www.youtube.com/watch?v=GE6lbPLEAzc) * [Why Scrum Should Basically Just Die In A Fire](https://web.archive.org/web/20150206025047/http://gilesbowkett.blogspot.com/2014/09/why-scrum-should-basically-just-die-in.html) * [The Failure of Agile](http://blog.toolshed.com/2015/05/the-failure-of-agile.html) * [It's time for Scrum to evolve](http://agileconsulting.blogspot.be/2010/02/it-time-for-scrum-to-evolve.html) Hope this helps :)
4,513
After several years of agile and scrum hype, I'd expect the first well-known reactions, from "bullshit!" to "magical!", to be gone, and I'd expect to find some really elaborated articles questioning the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
There are many well-articulated arguments against Scrum and some other answers have pointed those out, so I'll leave that part of the question be. As for Agile, I think the big thing you have to consider is that Agile is made up of value statements, which are inherently very difficult to argue over. Let's take the first value as an example: Individuals and Interactions over Processes and Tools This means that we value people and their conversations and collaboration with other people. When we use processes and tools, we try to make sure that the ones we select or build improve people's ability to work together. Now, you either agree with that or you don't. You can't really argue that forcing people to use a tool that causes them to have trouble communicating with their colleagues is better than one that streamlines communication. If your job simply requires a warm body that follows a set process and you haven't automated that function yet, then sure, Agile isn't for you, but we all know that - there's no reason to debate the point. Now, of course, anything can become dated. There was quite a bit of conversation a little while back about updating the 12 Agile Principles because some of it just doesn't make sense anymore. For example, referring to delivering software every couple of weeks to a couple of months as delivering quickly is very dated. Many companies deliver production software multiple times per day and people delivering every few months are well behind the curve. But that points us to the other reason you don't see many debates: Proponents of Agile are a lot faster at pointing out flaws and pushing to fix them than opponents are. Agile has changed a lot in the past 15 years and as long as it continues to, many of the arguments against it are left in the dust as people practicing it find the weak points and keep improving them.
That's a very good question indeed, and no: Scrum is not that great if you take it as a bullet-proof, cheap, one-size-fits-all way to let your team/company magically solve their problems... it is great if you know when, why, and how to use it. That said, there are several critiques out there that might be interesting for you. Some of them are not inherently about Scrum, but they nevertheless question knowledge that, in the Scrum world, is taken for granted or "trusted" more as dogma than as verified experience. A good book on this is [The Leprechauns of Software Engineering](https://leanpub.com/leprechauns). But you can also have a look at: * [This video by M. Fowler and N. Ford, "Explaining Agile"](https://www.youtube.com/watch?v=GE6lbPLEAzc) * [Why Scrum Should Basically Just Die In A Fire](https://web.archive.org/web/20150206025047/http://gilesbowkett.blogspot.com/2014/09/why-scrum-should-basically-just-die-in.html) * [The Failure of Agile](http://blog.toolshed.com/2015/05/the-failure-of-agile.html) * [It's time for Scrum to evolve](http://agileconsulting.blogspot.be/2010/02/it-time-for-scrum-to-evolve.html) Hope this helps :)
4,513
After several years of agile and scrum hype, I'd expect the first well-known reactions, from "bullshit!" to "magical!", to be gone, and I'd expect to find some really elaborated articles questioning the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
> > I should be able to find some really elaborated articles that question the highly praised benefits of those modern management techniques... > > > Unfortunately, Scrum has nothing to do with management techniques. It's a framework which tells you how to do software development. Regarding managers, it says that a *good* team doesn't need managers; it needs people who keep away those who want to disturb the team. I have no idea why people thought that managers would let themselves be put aside. > > Is there a conspiracy going on? > > > My version: the original idea of Scrum was great, but then came the consultants, who turned a good initiative into a business. You can attend courses and workshops for a lot of money, but they won't really help you with the challenges of the job. As long as there's money in this business, you won't read any elaborated article about it. > > Is scrum really that great? > > > My main problem is that Scrum is taught as the ultimate solution for every possible problem in the software industry, but **there isn't a good way to scale it, use it for maintenance, or handle cases when teams are changing often**. It says that teams should be kept together almost forever, or that if you test well you won't need maintenance, etc. Tell this to your boss and there is a good chance that he'll finally have a good laugh. It is unrealistic. I can understand that an organization wants to change because it feels that things could go better, and thanks to the hype, they'll find Scrum. **Instead of trying out Scrum, they should find out what their real problem is.** Nevertheless, it is a good idea to have a look at successful companies like Fog Creek, GitHub, and 37signals and find out what their secret is (none of these companies do Scrum or follow any other Agile framework; they use certain techniques, but that is a different story).
You might want to look at Bertrand Meyer's book [Agile!: The Good, the Hype and the Ugly](https://rads.stackoverflow.com/amzn/click/com/3319051547), which, as the title says, deals with the good and bad of various agile methods. I can't summarize the entire book in an answer (read it for yourself), but in very broad strokes: **The good**: * Refactoring * Short iterations * Short daily meetings * Focus on communication * Identifying and eliminating impediments and waste * Delivering working software * The Product Owner role **The bad**: * Deprecation of upfront tasks * User stories as a basis for requirements * Ignorance of dependencies * Rejection of traditional manager tasks * Rejection of upfront generalization * Coach as a separate role * Deprecation of documents
4,513
After several years of agile and scrum hype, I'd expect the first well-known reactions, from "bullshit!" to "magical!", to be gone, and I'd expect to find some really elaborated articles questioning the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
There are many well-articulated arguments against Scrum and some other answers have pointed those out, so I'll leave that part of the question be. As for Agile, I think the big thing you have to consider is that Agile is made up of value statements, which are inherently very difficult to argue over. Let's take the first value as an example: Individuals and Interactions over Processes and Tools This means that we value people and their conversations and collaboration with other people. When we use processes and tools, we try to make sure that the ones we select or build improve people's ability to work together. Now, you either agree with that or you don't. You can't really argue that forcing people to use a tool that causes them to have trouble communicating with their colleagues is better than one that streamlines communication. If your job simply requires a warm body that follows a set process and you haven't automated that function yet, then sure, Agile isn't for you, but we all know that - there's no reason to debate the point. Now, of course, anything can become dated. There was quite a bit of conversation a little while back about updating the 12 Agile Principles because some of it just doesn't make sense anymore. For example, referring to delivering software every couple of weeks to a couple of months as delivering quickly is very dated. Many companies deliver production software multiple times per day and people delivering every few months are well behind the curve. But that points us to the other reason you don't see many debates: Proponents of Agile are a lot faster at pointing out flaws and pushing to fix them than opponents are. Agile has changed a lot in the past 15 years and as long as it continues to, many of the arguments against it are left in the dust as people practicing it find the weak points and keep improving them.
You might want to look at Bertrand Meyer's book [Agile!: The Good, the Hype and the Ugly](https://rads.stackoverflow.com/amzn/click/com/3319051547), which, as the title says, deals with the good and bad of various agile methods. I can't summarize the entire book in an answer (read it for yourself), but in very broad strokes: **The good**: * Refactoring * Short iterations * Short daily meetings * Focus on communication * Identifying and eliminating impediments and waste * Delivering working software * The Product Owner role **The bad**: * Deprecation of upfront tasks * User stories as a basis for requirements * Ignorance of dependencies * Rejection of traditional manager tasks * Rejection of upfront generalization * Coach as a separate role * Deprecation of documents
4,513
After several years of agile and scrum hype, I'd expect the first well-known reactions, from "bullshit!" to "magical!", to be gone, and I'd expect to find some really elaborated articles questioning the highly praised benefits of those modern management techniques. But I don't. Aren't there any? Is scrum really *that* great? Is there a conspiracy going on?
2012/02/01
[ "https://pm.stackexchange.com/questions/4513", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/3109/" ]
There's a well-known Ken Schwaber quote: > > Scrum doesn't solve your problems; Scrum exposes your problems. > > > He's exactly right. You still have to find and fix the problems that are keeping you from delivering high-quality software more quickly. I've been in this business for three decades, and have managed teams for almost two decades, on and off. I've gone in and fixed a couple dozen companies in the past six years, using a combination of Scrum and Kanban. Lean/Agile approaches to software development work very well if you understand the underlying philosophy. If you don't, if you just proceed with a 'cargo cult' rote form of Scrum (or another Lean/Agile approach), you will not get the results you're looking for. If you want a specific criticism, consider the most common failure mode for Lean/Agile: an organization (and specifically its leadership) that is unwilling to acknowledge and fix the problems this approach exposes. In fact, this is the only failure mode... and it is a universal failure mode.
You might want to look at Bertrand Meyer's book [Agile!: The Good, the Hype and the Ugly](https://rads.stackoverflow.com/amzn/click/com/3319051547), which, as the title says, deals with the good and bad of various agile methods. I can't summarize the entire book in an answer (read it for yourself), but in very broad strokes: **The good**: * Refactoring * Short iterations * Short daily meetings * Focus on communication * Identifying and eliminating impediments and waste * Delivering working software * The Product Owner role **The bad**: * Deprecation of upfront tasks * User stories as a basis for requirements * Ignorance of dependencies * Rejection of traditional manager tasks * Rejection of upfront generalization * Coach as a separate role * Deprecation of documents
50,976,385
Very open architectural question. I have an offline Android app. In one of the actions the user can change a configuration; in my specific case it is the day of the forecast. The flow for that is this: * Activity on click event; * Preferences View Model; * Preferences Business; * And finally persisted on the persistence layer; The actual effect will happen in parallel (not important for my question). My questions are: Where is the best place to add the analytics tracking? What exactly should I be considering when positioning my analytics track events? Just in case, this is the app I'm talking about: <https://play.google.com/store/apps/details?id=pozzo.apps.travelweather> Thank you
2018/06/21
[ "https://Stackoverflow.com/questions/50976385", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1454719/" ]
Analytics is part of the domain layer, so it should ideally be kept there. Projects often have analytics in the view layer (ViewControllers, activities, fragments, or ViewModels). This leads to inconsistency: analytics calls end up fired from views, view models, controllers, etc. Therefore, it is best to keep analytics inside UseCase/interactor classes; these are often reusable, which makes logging easier with less duplication.
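As a sketch of this point: if the analytics call lives inside the use case, every entry point (activity click, widget, deep link) logs the same event with no duplication. All of the names below (`AnalyticsTracker`, `PreferencesRepository`, `SetForecastDayUseCase`, the event string) are hypothetical illustrations, not from any real analytics SDK:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical abstractions; real projects would wrap Firebase, Mixpanel, etc.
interface AnalyticsTracker {
    void track(String event);
}

interface PreferencesRepository {
    void saveForecastDay(int day);
}

// The use case owns both the domain action and its analytics call,
// so every caller logs consistently.
class SetForecastDayUseCase {
    private final PreferencesRepository repository;
    private final AnalyticsTracker analytics;

    SetForecastDayUseCase(PreferencesRepository repository, AnalyticsTracker analytics) {
        this.repository = repository;
        this.analytics = analytics;
    }

    void execute(int day) {
        repository.saveForecastDay(day);                  // persist the new configuration
        analytics.track("forecast_day_changed:" + day);   // single place the event is fired
    }
}

public class Demo {
    public static void main(String[] args) {
        List<String> events = new ArrayList<>();
        SetForecastDayUseCase useCase = new SetForecastDayUseCase(
                day -> { /* pretend to persist */ },
                events::add);
        useCase.execute(3);
        System.out.println(events.get(0)); // forecast_day_changed:3
    }
}
```

The view layer then only calls `useCase.execute(day)`; it never touches the tracker directly.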
In terms of clean architecture, analytics belongs to the business layer, so it should be implemented in an Interactor/UseCase. But I think it's not so bad to keep analytics in the view, because that's the simplest way.
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
You can probably work around this problem using some sort of laser (with a revised version of Morse code) to communicate between the spacecraft and a low Earth orbit station, and then use normal radio waves from the station to Earth (the ISS is still within the atmosphere, after all). So I think that once the system is set up, space exploration would not be much slower than it is today.
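A rough sketch of the "revised Morse over laser" idea: messages are encoded as short/long pulse symbols, which the laser then flashes toward the receiving station. The `MorseLink` class and its pulse notation (`.` short flash, `-` long flash, `/` word gap) are purely illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class MorseLink {
    private static final Map<Character, String> CODE = new HashMap<>();
    static {
        // International Morse code for A..Z
        String[] letters = {
            ".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..",
            ".---", "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.",
            "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."
        };
        for (int i = 0; i < 26; i++) {
            CODE.put((char) ('A' + i), letters[i]);
        }
    }

    // Encode a message as laser pulses: '.' = short flash, '-' = long flash,
    // a space separates letters, '/' separates words.
    public static String encode(String message) {
        StringBuilder out = new StringBuilder();
        for (char c : message.toUpperCase().toCharArray()) {
            if (c == ' ') {
                out.append("/ ");
            } else if (CODE.containsKey(c)) {
                out.append(CODE.get(c)).append(' ');
            }
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(encode("SOS")); // ... --- ...
    }
}
```

In-story, the "revised" part would be layering framing and error correction on top of this alphabet, since a laser link across vacuum gets pointing jitter rather than atmospheric static.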
Basically, in your world radio transmissions are only possible when traveling through an atmospheric medium, a lot like sound. Generally, I dislike this concept, since radio waves are a form of radiation which does not need a medium, which is why we can observe radio waves from far-off stars and other heavenly bodies. A better option would, in my opinion, be that the atmospheres of your planets have a characteristic that filters or blocks a lot of background radio interference that would otherwise make radio communication outside of the atmosphere very difficult, since the interference levels could limit broadcasting over radio frequencies to only short range. EDIT: Since the atmosphere, or the lack of one, does not permit radio communication, here is another reasonable approach that is relatively consistent with my previous entry: use either [geostationary satellites](https://en.wikipedia.org/wiki/Geostationary_orbit) or some way to stretch a cable out into space, and do the information transfer between the surface and nearby space there. From there, if only radio frequencies are impossible to communicate over, use optical light transmission. This can be done with super simple Morse code, or with the more complex [Li-Fi](https://en.wikipedia.org/wiki/Li-Fi). The latter can be used in combination with satellites or other heavenly bodies as extenders that work the same way as Wi-Fi extenders. This way, you can theoretically attain extreme bandwidth: "Researchers have reached data rates of over 224 Gbit/s". I think I may have solved the issue of getting Earth-Lunar internet as well.
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
Basically, in your world radio transmissions are only possible when traveling through an atmospheric medium, a lot like sound. Generally, I dislike this concept, since radio waves are a form of radiation which does not need a medium, which is why we can observe radio waves from far-off stars and other heavenly bodies. A better option would, in my opinion, be that the atmospheres of your planets have a characteristic that filters or blocks a lot of background radio interference that would otherwise make radio communication outside of the atmosphere very difficult, since the interference levels could limit broadcasting over radio frequencies to only short range. EDIT: Since the atmosphere, or the lack of one, does not permit radio communication, here is another reasonable approach that is relatively consistent with my previous entry: use either [geostationary satellites](https://en.wikipedia.org/wiki/Geostationary_orbit) or some way to stretch a cable out into space, and do the information transfer between the surface and nearby space there. From there, if only radio frequencies are impossible to communicate over, use optical light transmission. This can be done with super simple Morse code, or with the more complex [Li-Fi](https://en.wikipedia.org/wiki/Li-Fi). The latter can be used in combination with satellites or other heavenly bodies as extenders that work the same way as Wi-Fi extenders. This way, you can theoretically attain extreme bandwidth: "Researchers have reached data rates of over 224 Gbit/s". I think I may have solved the issue of getting Earth-Lunar internet as well.
If radio waves can't be used, it would delay space exploration, as there would initially be no easy means to communicate with spacecraft. However, given a few more decades of technological advancement, microwaves could be used instead, or visible light by way of lasers, unless these were also blocked. Ultimately, any electromagnetic wave might be used for communication (some more easily than others). If all electromagnetic waves required an atmosphere to propagate, then Earth would be a very different place, as it would receive no heat or light from the Sun.
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
There would never be life on planet Earth ========================================= Considering that radio is [electromagnetic radiation](https://en.wikipedia.org/wiki/Electromagnetic_radiation), and that all **light** is also electromagnetic radiation... [![enter image description here](https://i.stack.imgur.com/aBty5.png)](https://i.stack.imgur.com/aBty5.png) ...and that life on planet Earth is dependent on light from our nearby star, Sol... ...**there would never be life on planet Earth if you have made it so that light cannot propagate anywhere but within an atmosphere**. This — naturally — puts a big dent in everyone's plan to explore space. No really, you just reversed one of the most basic laws of physics — [Classical Electromagnetism](https://en.wikipedia.org/wiki/Classical_electromagnetism) as described by [Maxwell's Equations](https://en.wikipedia.org/wiki/Maxwell%27s_equations) — in such a way that you essentially broke the universe. There is no way we can make it so that radio exists in a medium but not in a vacuum.
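To make the argument concrete, here is a standard-physics sketch (not from the original answer) of why electromagnetic waves need no medium: Maxwell's equations in vacuum, with no charges or currents at all, still combine into a wave equation whose solutions travel at c.

```latex
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \,\frac{\partial \mathbf{E}}{\partial t}

% Taking the curl of Faraday's law and substituting the Ampere-Maxwell law:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \,\frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}
```

So radio (and sunlight) propagates in vacuum by construction; an atmosphere-only version of radio would require rewriting these equations, with the consequences the answer describes.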
Basically, in your world radio transmissions are only possible when traveling through an atmospheric medium. A lot like sound. Generally, I dislike this concept, since radio waves are a form of radiation which does not need a medium, which is why we can observe distant radio waves from far-off stars and other heavenly bodies. A better option would, in my opinion, be that the atmospheres of your planets have a characteristic that filters or blocks a lot of the background radio interference that would otherwise make radio communication outside the atmosphere very difficult, since the interference levels would limit radio broadcasting to short range. EDIT: Since the atmosphere, or the lack of one, does not permit radio communication, another reasonable approach that is relatively consistent with my previous suggestion: use either [geostationary satellites](https://en.wikipedia.org/wiki/Geostationary_orbit) or some way of stretching a cable out into space, and transfer information between the surface and nearby space over that link. From there, if only the radio frequencies are impossible to communicate over, use optical light transmission. This can be done in super simple Morse, or the more complex [Li-Fi](https://en.wikipedia.org/wiki/Li-Fi). The latter can be used in combination with satellites or other heavenly bodies as extenders that work the same way as Wi-Fi extenders. This way, you can theoretically attain extreme bandwidth: "Researchers have reached data rates of over 224 Gbit/s". I think I may have solved the issue of getting Earth-Lunar internet as well.
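The bandwidth figure quoted above can be put in perspective with a little arithmetic. A minimal sketch: the 224 Gbit/s Li-Fi rate comes from the answer, while the 1 TB payload size and the 9.6 kbit/s narrowband radio comparison rate are illustrative assumptions, not figures from the answer:

```python
# Rough arithmetic sketch: transfer time for a science-data download
# at the Li-Fi rate quoted in the answer vs. an assumed slow radio link.

def transfer_time_s(size_bytes: float, rate_bits_per_s: float) -> float:
    """Seconds needed to move `size_bytes` at `rate_bits_per_s`."""
    return size_bytes * 8 / rate_bits_per_s

terabyte = 1e12    # bytes (illustrative payload)
lifi_rate = 224e9  # 224 Gbit/s, figure quoted in the answer
radio_rate = 9.6e3 # 9.6 kbit/s, assumed narrowband radio data rate

print(f"Li-Fi: {transfer_time_s(terabyte, lifi_rate):.1f} s")
print(f"Radio: {transfer_time_s(terabyte, radio_rate) / 86400:.0f} days")
```

Under these assumptions the optical link moves the payload in well under a minute, while the narrowband radio link would take decades, which is why the answer treats the optical leg as the bandwidth workhorse.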
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
Basically, in your world radio transmissions are only possible when traveling through an atmospheric medium. A lot like sound. Generally, I dislike this concept, since radio waves are a form of radiation which does not need a medium, which is why we can observe distant radio waves from far-off stars and other heavenly bodies. A better option would, in my opinion, be that the atmospheres of your planets have a characteristic that filters or blocks a lot of the background radio interference that would otherwise make radio communication outside the atmosphere very difficult, since the interference levels would limit radio broadcasting to short range. EDIT: Since the atmosphere, or the lack of one, does not permit radio communication, another reasonable approach that is relatively consistent with my previous suggestion: use either [geostationary satellites](https://en.wikipedia.org/wiki/Geostationary_orbit) or some way of stretching a cable out into space, and transfer information between the surface and nearby space over that link. From there, if only the radio frequencies are impossible to communicate over, use optical light transmission. This can be done in super simple Morse, or the more complex [Li-Fi](https://en.wikipedia.org/wiki/Li-Fi). The latter can be used in combination with satellites or other heavenly bodies as extenders that work the same way as Wi-Fi extenders. This way, you can theoretically attain extreme bandwidth: "Researchers have reached data rates of over 224 Gbit/s". I think I may have solved the issue of getting Earth-Lunar internet as well.
Radio propagation does not need an atmosphere. It is the ionosphere which may reflect some radio waves back to Earth, so an Earth-based station can send signals at some frequencies across the globe without a satellite acting as a relay station. This Wikipedia article on the [ionosphere](https://en.wikipedia.org/wiki/Ionosphere#Radio_communication) explains the process. Now, the second part: change the atmosphere of the planet (or just say it is different, without specifying its composition) and say that the ionosphere is more reflective, and is so over a broader frequency range. Radio technology evolved well before space exploration and satellite launching became feasible, so signaling relies on the ionosphere rather than satellites. You can complicate things by saying this reflective layer works in both directions, so future satellites will have difficulty communicating with Earth-based stations.
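The "more reflective ionosphere" idea above can be quantified. At vertical incidence, a plasma layer reflects waves below its critical frequency, roughly f_c ≈ 8.98·√N_e Hz for electron density N_e in electrons per cubic metre (the standard plasma-frequency approximation). A minimal sketch, using illustrative daytime electron densities rather than values from the answer:

```python
import math

def critical_frequency_hz(electron_density_per_m3: float) -> float:
    """Vertical-incidence critical frequency of an ionospheric layer:
    waves below this frequency are reflected back toward the ground.
    f_c ~= 8.98 * sqrt(N_e), with N_e in electrons per cubic metre."""
    return 8.98 * math.sqrt(electron_density_per_m3)

# Illustrative daytime electron densities for two layers
for layer, n_e in [("E layer", 1e11), ("F2 layer", 1e12)]:
    f_c = critical_frequency_hz(n_e)
    print(f"{layer}: critical frequency ~ {f_c / 1e6:.1f} MHz")
```

Because the reflected band scales with the square root of electron density, making the fictional ionosphere ten times denser would widen the trapped frequency range only by a factor of about three, so a dramatically "sealed" sky implies a dramatically denser ionized layer.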
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
You can probably work around this problem using some sort of laser (with a revised version of Morse code) to communicate between the spacecraft and a low Earth orbit station, and then normal radio waves from the station to Earth (the ISS is still within the atmosphere, after all). So I think that once the system is set up, space exploration will not be much slower than today.
If radio waves can't be used, it would delay space exploration, as there would initially be no easy means to communicate with spacecraft. However, given a few more decades of technological advancement, microwaves could be used instead, or visible light by way of lasers, unless these were also blocked. Ultimately any electromagnetic wave might be used for communication (some more easily than others). If all electromagnetic waves required an atmosphere to propagate, the Earth would be a very different place, as it would receive no heat or light from the sun.
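One answer above suggests signaling between spacecraft and a relay station with "a revised version of the morse code" over a laser. A minimal sketch of the encoding step, mapping text to the dot/dash groups that would key the laser on and off (the table and separators follow standard International Morse conventions; letters only, for brevity):

```python
# Minimal sketch: encode a message as Morse dot/dash groups, the
# pattern that would key a laser on and off for an optical link.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def to_morse(message: str) -> str:
    """Letters become dot/dash groups separated by spaces;
    words are separated by ' / '."""
    words = message.upper().split()
    return ' / '.join(
        ' '.join(MORSE[ch] for ch in word if ch in MORSE)
        for word in words
    )

print(to_morse("abort reentry"))
```

A practical link would of course replace hand-keyed Morse with a framed digital protocol, but the on/off-keying idea is the same: the dot/dash string is just a schedule for switching the laser.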
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
There would never be life on planet Earth ========================================= Considering that radio is [electromagnetic radiation](https://en.wikipedia.org/wiki/Electromagnetic_radiation), and that all **light** is also electromagnetic radiation... [![enter image description here](https://i.stack.imgur.com/aBty5.png)](https://i.stack.imgur.com/aBty5.png) ...and that life on planet Earth is dependent on light from our nearby star Sol... ...**there would never be life on planet Earth if you have made it so that light cannot propagate anywhere but within an atmosphere**. This — naturally — puts a big dent in everyone's plan to explore space. No really, you just reversed one of the most basic laws of physics — [Classical Electromagnetism](https://en.wikipedia.org/wiki/Classical_electromagnetism) as described by [Maxwell's Equations](https://en.wikipedia.org/wiki/Maxwell%27s_equations) — in such a way that you essentially broke the universe. There is no way we can make it so that radio exists in a medium but not in a vacuum.
You can probably work around this problem using some sort of laser (with a revised version of Morse code) to communicate between the spacecraft and a low Earth orbit station, and then normal radio waves from the station to Earth (the ISS is still within the atmosphere, after all). So I think that once the system is set up, space exploration will not be much slower than today.
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
You can probably work around this problem using some sort of laser (with a revised version of Morse code) to communicate between the spacecraft and a low Earth orbit station, and then normal radio waves from the station to Earth (the ISS is still within the atmosphere, after all). So I think that once the system is set up, space exploration will not be much slower than today.
Radio propagation does not need an atmosphere. It is the ionosphere which may reflect some radio waves back to Earth, so an Earth-based station can send signals at some frequencies across the globe without a satellite acting as a relay station. This Wikipedia article on the [ionosphere](https://en.wikipedia.org/wiki/Ionosphere#Radio_communication) explains the process. Now, the second part: change the atmosphere of the planet (or just say it is different, without specifying its composition) and say that the ionosphere is more reflective, and is so over a broader frequency range. Radio technology evolved well before space exploration and satellite launching became feasible, so signaling relies on the ionosphere rather than satellites. You can complicate things by saying this reflective layer works in both directions, so future satellites will have difficulty communicating with Earth-based stations.
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
There would never be life on planet Earth ========================================= Considering that radio is [electromagnetic radiation](https://en.wikipedia.org/wiki/Electromagnetic_radiation), and that all **light** is also electromagnetic radiation... [![enter image description here](https://i.stack.imgur.com/aBty5.png)](https://i.stack.imgur.com/aBty5.png) ...and that life on planet Earth is dependent on light from our nearby star Sol... ...**there would never be life on planet Earth if you have made it so that light cannot propagate anywhere but within an atmosphere**. This — naturally — puts a big dent in everyone's plan to explore space. No really, you just reversed one of the most basic laws of physics — [Classical Electromagnetism](https://en.wikipedia.org/wiki/Classical_electromagnetism) as described by [Maxwell's Equations](https://en.wikipedia.org/wiki/Maxwell%27s_equations) — in such a way that you essentially broke the universe. There is no way we can make it so that radio exists in a medium but not in a vacuum.
If radio waves can't be used, it would delay space exploration, as there would initially be no easy means to communicate with spacecraft. However, given a few more decades of technological advancement, microwaves could be used instead, or visible light by way of lasers, unless these were also blocked. Ultimately any electromagnetic wave might be used for communication (some more easily than others). If all electromagnetic waves required an atmosphere to propagate, the Earth would be a very different place, as it would receive no heat or light from the sun.
93,495
The first thing that I can think of is that space exploration becomes incredibly slow, because sending everything up there that is not pre-programmed needs a human pilot, and there would be no way of getting the data but to bring back the probe altogether. But what else? This world is similar to the real world as far as astronomical bodies go; there's the moon, the sun, other planets, asteroids, etc. The major difference is that radio communications can only work in an atmosphere.
2017/09/28
[ "https://worldbuilding.stackexchange.com/questions/93495", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/41106/" ]
There would never be life on planet Earth ========================================= Considering that radio is [electromagnetic radiation](https://en.wikipedia.org/wiki/Electromagnetic_radiation), and that all **light** is also electromagnetic radiation... [![enter image description here](https://i.stack.imgur.com/aBty5.png)](https://i.stack.imgur.com/aBty5.png) ...and that life on planet Earth is dependent on light from our nearby star Sol... ...**there would never be life on planet Earth if you have made it so that light cannot propagate anywhere but within an atmosphere**. This — naturally — puts a big dent in everyone's plan to explore space. No really, you just reversed one of the most basic laws of physics — [Classical Electromagnetism](https://en.wikipedia.org/wiki/Classical_electromagnetism) as described by [Maxwell's Equations](https://en.wikipedia.org/wiki/Maxwell%27s_equations) — in such a way that you essentially broke the universe. There is no way we can make it so that radio exists in a medium but not in a vacuum.
Radio propagation does not need an atmosphere. It is the ionosphere which may reflect some radio waves back to Earth, so an Earth-based station can send signals at some frequencies across the globe without a satellite acting as a relay station. This Wikipedia article on the [ionosphere](https://en.wikipedia.org/wiki/Ionosphere#Radio_communication) explains the process. Now, the second part: change the atmosphere of the planet (or just say it is different, without specifying its composition) and say that the ionosphere is more reflective, and is so over a broader frequency range. Radio technology evolved well before space exploration and satellite launching became feasible, so signaling relies on the ionosphere rather than satellites. You can complicate things by saying this reflective layer works in both directions, so future satellites will have difficulty communicating with Earth-based stations.
303,047
The saying [plaster saint](http://www.oxforddictionaries.com/definition/english/plaster-saint) is used to refer to: > > * A person who makes a show of being without moral faults or human weakness, especially in a hypocritical way. (ODO) > > > The expression is generally used to state that *you are no plaster saint* as in: > > * *she is no plaster saint—she acknowledges her faults and is quick to ask forgiveness.* > > > Usage appears to be from the late 19th century according to [Ngram](https://books.google.com/ngrams/graph?content=plaster%20saint%2Cno%20plaster%20saint&year_start=1870&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cplaster%20saint%3B%2Cc0%3B.t1%3B%2Cno%20plaster%20saint%3B%2Cc0) and OED early usage examples are: > > * ***1890*** R. Kipling Barrack-room Ballads (1892) 8 Single men in barricks [sic] don't grow into plaster saints. > * ***1898*** G. B. Shaw Philanderer iv, in Plays Unpleasant 148 You fraud! You humbug! You miserable little plaster saint! > > > [![enter image description here](https://i.stack.imgur.com/Gyqil.jpg)](https://i.stack.imgur.com/Gyqil.jpg) A plaster saint. Questions: 1) I have always seen a plaster statue of a saint as an object of veneration and respect, so how did it come to represent an hypocritical attitude? What am I missing here? 2) The literal expression 'plaster saint' and its figurative usage appear to coincide in terms of period of origin (late 19th century). Was the expression imported from some 'catholic country' at that time?
2016/01/29
[ "https://english.stackexchange.com/questions/303047", "https://english.stackexchange.com", "https://english.stackexchange.com/users/-1/" ]
The interesting thing is [Merriam-Webster defines ***plaster saint*** simply as:](http://www.merriam-webster.com/dictionary/plaster%20saint) > > a person without human failings. > > > I sifted through Google Books, and this is the meaning you find in book after book after book. When it is explained why someone is not a plaster saint, the reason is that the person is less than saintly, misbehaves, has passions, struggles with temptation, very much unlike the other-worldly, beatific, ideal represented by a plaster saint, or the lifeless object itself. It’s not hard to imagine how *plaster saint* could come to mean hypocrite: real humans are flawed; if you look like a plaster saint you must be faking it. Sarcasm could have played a role here too. However Bernard Shaw’s quote is highly atypical. [Annie Edwards’ *A Plaster Saint* (1899)](https://www.google.pt/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=George%20Gervase,%20is%20the%20plaster%20saint&tbm=bks) is the only other instance I found of *plaster saint* used sarcastically in this way. *Plaster saint* in the Merriam-Webster sense appears in scores of books though. Merriam-Webster says the first known use is in 1890. So it’s likely Kipling’s *Tommy*, quoted by the OP, and first published that year under the title *The Queen’s Uniform* [(The Kipling Society)](http://www.kiplingsociety.co.uk/rg_tommy1.htm). Here Kipling has the proverbial British soldier Tommy Atkins criticise the British public, who sees the common soldier sometimes as a hero, sometimes as a ruffian (my emphasis throughout): > > […] Yes, makin’ mock o’ uniforms that guard you while you sleep > > Is cheaper than them uniforms, an’ they’re starvation cheap. > > An’ hustlin’ drunken soldiers when they’re goin’ large a bit > > Is five times better business than paradin’ in full kit. > > Then it’s Tommy this, an’ Tommy that, an’ Tommy, ’ow’s yer soul? 
> > But it’s “Thin red line of ’eroes” when the drums begin to roll > > The drums begin to roll, my boys, the drums begin to roll, > > O it's “Thin red line of ’eroes,” when the drums begin to roll. > > > > > We aren’t no thin red ’eroes, nor we aren’t no blackguards too, > > But single men in barricks, most remarkable like you; > > An’ if sometimes our conduck isn’t all your fancy paints, > > Why, single men in barricks don’t grow into ***plaster saints*** […] > > [Rudyard Kipling, *Tommy* aka *The Queen’s Uniform* (1890), (more info in Kipling Society)](http://www.kiplingsociety.co.uk/rg_tommy1.htm) and [full poem here](http://www.kiplingsociety.co.uk/bookmart_fra.htm) > > > The following give a more explicit description of what a plaster saint is not: > > Henry Morgan the Buccaneer was no “***plaster saint***”. His weaknesses, his follies, his errors are writ large on his record. He was rash, impulsive, reckless of speech, and oftentimes unscrupulous in action. He was a good hater and a firm friend. > > [The Transactions of the Honourable Society of Cymmrodorion, 1899, p. 41.](https://www.google.com/search?biw=1366&bih=625&tbs=sbd%3A1&tbm=bks&sxsrf=ALeKk013Cpz2da0_vXEpkm7TDiSVqT1GCg%3A1613907761764&ei=MUcyYJCKLozBUoXGg8AD&q=%22Henry%20Morgan%20the%20Buccaneer%20was%20no%20plaster%20saint%22%201899&oq=%22Henry%20Morgan%20the%20Buccaneer%20was%20no%20plaster%20saint%22%201899&gs_l=psy-ab.3...19561.24810.0.26441.11.8.3.0.0.0.124.846.2j6.8.0....0...1c.1.64.psy-ab..0.0.0....0.Pk-R29yiC20) > > > > > A study of his career will probably make us like him better, for we shall find that he was a man with very human virtues and failings, not a preposterous ***plaster saint***. > > [William Alfred Hirst, *Walks about London*, Henry Holt, 1900, p. 
80.](https://books.google.pt/books?id=twU3AQAAMAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiEjafit6TLAhWLthoKHQDNAr8Q6AEIOjAH) > > > Sometimes the *plaster saint* is implicitly presented as something good: > > “Look here, Elizabeth,” she said desperately, “have done with all this nonsense, for heaven's sake, and take your husband as you find him. He is no ***plaster saint***, but neither are you, or any of us for that matter.” > > [Kate Horn, *Ships of Desire*, Cassel and Company, 1909, p. 317.](https://books.google.pt/books?id=bEQgAAAAMAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiCiavAw6TLAhVL7RQKHfsxDCUQ6AEIMDAE) > > > Sometimes people cultivate a *plaster-saint* image of important persons: > > In short, she [Rosa Parks] is on her way to becoming the secular version of a ***plaster saint***. It is a fate that has already befallen Martin Luther King, who is so venerated it is politically incorrect even to acknowledge his human failings, like his womanising and his plagiarism. > > [“American trouble-makers,” *The Economist Year Book, 1992 in Review*, The Economist Books, 1993, p. 292.](https://books.google.pt/books?id=gRdXAAAAYAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwjn5uOV0KXLAhWK2xoKHS5uC-g4KBDoAQhGMAk) > > > > > the Trustees were aware of the existence of letters by Einstein, some of them since published, 15 others to be published later, that conflict with the “***plaster saint***” image they wished to preserve > > [John Stachel, *Einstein from B to Z*, Birkhäuser, 2002, p. 99.](https://www.google.pt/search?q=%22plaster%20saint%22&espv=2&biw=1366&bih=643&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2001%2Ccd_max%3A12%2F31%2F2010&tbm=bks#q=%22plaster%20saint%22&tbs=cdr:1,cd_min:1/1/2001,cd_max:12/31/2010&tbm=bks&start=20) > > > > > Several strategies combine to defuse the image of Lincoln as a ***plaster saint***. […] He may have contracted syphilis as a young man.
[…] Vidal’s very human Lincoln knows the art of the political deal. > > [Susan Baker, Curtis S. Gibson, *Gore Vidal: A Critical Companion*, Greenwood Press, 1997, p. 88.](https://books.google.pt/books?id=XWtMF1sl9twC&pg=PA88&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwj13d-3x6XLAhXLDxoKHcxxA_w4FBDoAQhCMAc#v=onepage&q=%22plaster%20saint%22&f=false) > > > Often we get the notion that a plaster saint is someone we wouldn’t like that much. In this case Ernie, a young girl, has a “mercurial temperament”, but is urged by her friends to beat another girl, this one perhaps a plaster saint by nature or conviction, at winning a prize for “the pupil whose general average in attendance, conduct, and scholarship should be the highest.” She says: > > ”All right,” promised Ernie, with a weary little sigh. “I don't mind the studying so much; but I must confess I'm tired of being a ***plaster saint!***” > > [Alice Calhoun Haines, *The Luck of the Dudley Grahams*, Henry Holt, 1907, p. 173. (Full book available here.)](http://library.si.edu/digital-library/book/luckofdudleygrah00hain) > > > And even real saints are no plaster saints > > Do not for one moment picture him [Saint John Bosco] as a little monster of perfection, with no personality, no reactions, anaemic as a ***plaster saint***. The retiring, timid, peaceable, passive one was not John, but his brother Joseph—an intelligent, hardworking boy, marked from the beginning with mark of those who will never go above or below the level of a decent obscurity. But John was a different matter […] > > [Henry Ghéon, *The Secret of Saint John Bosco*, Tradibooks, 1944, p. 21.](https://books.google.pt/books?id=MRJdAgAAQBAJ&pg=PA21&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiww6blsqXLAhWB7xQKHbUhB7gQ6AEIJjAC#v=onepage&q=%22plaster%20saint%22&f=false) > > > As to whether *plaster saint* was borrowed from another language I found no evidence of. 
The literal equivalents *plâtre saint* or *saint de plâtre* in French, *santo de yeso* in Spanish, and *santo de gesso* in Portuguese, mean the object only. [*Petit saint*](http://www.larousse.fr/dictionnaires/francais/saint/70548/locution?q=saint#166748) (*little saint*) is used ironically to refer to a person hypocritically affecting virtue, and has been in use since the early 1800s. *Ce n’est pas un (petit) saint* (*he/she is no (little) saint*) sounds very much like the English phrase and [has been around](https://books.google.com/ngrams/graph?content=pas%20un%20petit%20saint%2Bpas%20un%20saint&year_start=1800&year_end=2000&corpus=19&smoothing=3&share=&direct_url=t1%3B%2C%28pas%20un%20petit%20saint%20%2B%20pas%20un%20saint%29%3B%2Cc0) since the early 1800s too, but means they’re dishonest. My hunch, for what it’s worth, is that the phrase as used in the examples above is transparent and suggestive enough for English speakers to have coined it without outside help.
The point is that a plaster saint can't actually do anything, it's just an object that's supposed to remind one of the real person, but has no powers of its own. Saints are saints generally because they are supposed to have caused miracles to be performed. Plaster saints do nothing at all but sit and stare at you from their niches in churches. So *you are no plaster saint* means *you don't just look like a saint, you're the real thing*. It's a real compliment.
303,047
The saying [plaster saint](http://www.oxforddictionaries.com/definition/english/plaster-saint) is used to refer to: > > * A person who makes a show of being without moral faults or human weakness, especially in a hypocritical way. (ODO) > > > The expression is generally used to state that *you are no plaster saint* as in: > > * *she is no plaster saint—she acknowledges her faults and is quick to ask forgiveness.* > > > Usage appears to be from the late 19th century according to [Ngram](https://books.google.com/ngrams/graph?content=plaster%20saint%2Cno%20plaster%20saint&year_start=1870&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cplaster%20saint%3B%2Cc0%3B.t1%3B%2Cno%20plaster%20saint%3B%2Cc0) and OED early usage examples are: > > * ***1890*** R. Kipling Barrack-room Ballads (1892) 8 Single men in barricks [sic] don't grow into plaster saints. > * ***1898*** G. B. Shaw Philanderer iv, in Plays Unpleasant 148 You fraud! You humbug! You miserable little plaster saint! > > > [![enter image description here](https://i.stack.imgur.com/Gyqil.jpg)](https://i.stack.imgur.com/Gyqil.jpg) A plaster saint. Questions: 1) I have always seen a plaster statue of a saint as an object of veneration and respect, so how did it come to represent an hypocritical attitude? What am I missing here? 2) The literal expression 'plaster saint' and its figurative usage appear to coincide in terms of period of origin (late 19th century). Was the expression imported from some 'catholic country' at that time?
2016/01/29
[ "https://english.stackexchange.com/questions/303047", "https://english.stackexchange.com", "https://english.stackexchange.com/users/-1/" ]
The point is that a plaster saint can't actually do anything, it's just an object that's supposed to remind one of the real person, but has no powers of its own. Saints are saints generally because they are supposed to have caused miracles to be performed. Plaster saints do nothing at all but sit and stare at you from their niches in churches. So *you are no plaster saint* means *you don't just look like a saint, you're the real thing*. It's a real compliment.
For what it's worth, I always thought it was a comparison with the statues of saints carved in marble that you would find in a cathedral. The plaster saint is a cheap, breakable imitation of the real thing. Just speculation on my part.
303,047
The saying [plaster saint](http://www.oxforddictionaries.com/definition/english/plaster-saint) is used to refer to: > > * A person who makes a show of being without moral faults or human weakness, especially in a hypocritical way. (ODO) > > > The expression is generally used to state that *you are no plaster saint* as in: > > * *she is no plaster saint—she acknowledges her faults and is quick to ask forgiveness.* > > > Usage appears to be from the late 19th century according to [Ngram](https://books.google.com/ngrams/graph?content=plaster%20saint%2Cno%20plaster%20saint&year_start=1870&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cplaster%20saint%3B%2Cc0%3B.t1%3B%2Cno%20plaster%20saint%3B%2Cc0) and OED early usage examples are: > > * ***1890*** R. Kipling Barrack-room Ballads (1892) 8 Single men in barricks [sic] don't grow into plaster saints. > * ***1898*** G. B. Shaw Philanderer iv, in Plays Unpleasant 148 You fraud! You humbug! You miserable little plaster saint! > > > [![enter image description here](https://i.stack.imgur.com/Gyqil.jpg)](https://i.stack.imgur.com/Gyqil.jpg) A plaster saint. Questions: 1) I have always seen a plaster statue of a saint as an object of veneration and respect, so how did it come to represent an hypocritical attitude? What am I missing here? 2) The literal expression 'plaster saint' and its figurative usage appear to coincide in terms of period of origin (late 19th century). Was the expression imported from some 'catholic country' at that time?
2016/01/29
[ "https://english.stackexchange.com/questions/303047", "https://english.stackexchange.com", "https://english.stackexchange.com/users/-1/" ]
The point is that a plaster saint can't actually do anything, it's just an object that's supposed to remind one of the real person, but has no powers of its own. Saints are saints generally because they are supposed to have caused miracles to be performed. Plaster saints do nothing at all but sit and stare at you from their niches in churches. So *you are no plaster saint* means *you don't just look like a saint, you're the real thing*. It's a real compliment.
> > I have always seen a plaster statue of a saint as an object of veneration and respect, > > > Unfortunately, the rest of the UK had not had the same experience: Some background… up until the late 18th century, “Plaster saints” were associated with Catholics who were seen as dangerous, weird, and enemies of civilisation, as their allegiance was to the Pope and not the Crown. In 1828, Daniel O’Connell, an Irish Catholic, was elected to Parliament but refused to take his seat until the anti-Catholic oath was altered to his liking. From [The Encyclopaedia Britannica](https://www.britannica.com/event/Catholic-Emancipation) > > O’Connell’s ensuing triumphant election compelled the British prime minister, the Duke of Wellington, and Sir Robert Peel to carry the Emancipation Act of 1829 in Parliament. This act admitted Irish and English Roman Catholics to Parliament and to all but a handful of public offices. With the Universities Tests Act of 1871, which opened the universities to Roman Catholics, Catholic Emancipation in the United Kingdom was virtually complete. > > > You will see that at the time of Kipling, Roman Catholics had only recently been (almost\*) fully emancipated, and the cynical common soldiery had not quite agreed to this – they, and the majority of the UK population, saw the plaster saints that Catholics worshipped, and the Protestants did not, as tacky and idolatrous – a cheap commercial representation of someone who, in fact, should be “an object of veneration and respect.” Hence the derogatory use. The OED gives: > > **plaster saint** n. freq. derogatory **a person who makes a show of being without moral faults or human weakness, esp. in a hypocritical way.** > > > As has been shown, the first use must have been somewhat prior to Kipling’s 1890 use: *“Single men in barricks [sic] don't grow into plaster saints.”* and seems to indicate “replicas of truly holy saints.” This seems to be borne out by > > 1980 Chinweizu et al. in D. Walder Lit. in Mod.
World (1990) 286 Were our ancestors a parade of plaster saints who never, among themselves, struck a blow or hurt a fly? > > > 1995 Denver Post 15 Jan. e8/2 Clarke's book..presents her as a profoundly complex human being, infinitely more fascinating than any plaster saint or media-manufactured martyr. > > > There is also a play “[A Plaster Saint](https://babel.hathitrust.org/cgi/pt?id=osu.32435072953136&view=1up&seq=1)” by [Annie Edwards](https://peoplepill.com/people/annie-edwards/) and what I have read of the context seems to accord with the derogatory use. “[Historical Plays, Parts 1-7](https://books.google.co.uk/books?id=rqcxAQAAMAAJ&pg=PA264&dq=%22plaster%20saint%22&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwiPhqSUxvvuAhVQUhUIHe4bAGcQ6AEwCHoECGIQAg#v=onepage&q=%22plaster%20saint%22&f=false)" By Tom Taylor from 1877 makes mention of it. \*There are still one or two restrictions on the rights of Catholics in the UK.
303,047
The saying [plaster saint](http://www.oxforddictionaries.com/definition/english/plaster-saint) is used to refer to: > > * A person who makes a show of being without moral faults or human weakness, especially in a hypocritical way. (ODO) > > > The expression is generally used to state that *you are no plaster saint* as in: > > * *she is no plaster saint—she acknowledges her faults and is quick to ask forgiveness.* > > > Usage appears to be from the late 19th century according to [Ngram](https://books.google.com/ngrams/graph?content=plaster%20saint%2Cno%20plaster%20saint&year_start=1870&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cplaster%20saint%3B%2Cc0%3B.t1%3B%2Cno%20plaster%20saint%3B%2Cc0) and OED early usage examples are: > > * ***1890*** R. Kipling Barrack-room Ballads (1892) 8 Single men in barricks [sic] don't grow into plaster saints. > * ***1898*** G. B. Shaw Philanderer iv, in Plays Unpleasant 148 You fraud! You humbug! You miserable little plaster saint! > > > [![enter image description here](https://i.stack.imgur.com/Gyqil.jpg)](https://i.stack.imgur.com/Gyqil.jpg) A plaster saint. Questions: 1) I have always seen a plaster statue of a saint as an object of veneration and respect, so how did it come to represent an hypocritical attitude? What am I missing here? 2) The literal expression 'plaster saint' and its figurative usage appear to coincide in terms of period of origin (late 19th century). Was the expression imported from some 'catholic country' at that time?
2016/01/29
[ "https://english.stackexchange.com/questions/303047", "https://english.stackexchange.com", "https://english.stackexchange.com/users/-1/" ]
The interesting thing is [Merriam-Webster defines ***plaster saint*** simply as:](http://www.merriam-webster.com/dictionary/plaster%20saint) > > a person without human failings. > > > I sifted through Google Books, and this is the meaning you find in book after book after book. When it is explained why someone is not a plaster saint, the reason is that the person is less than saintly, misbehaves, has passions, struggles with temptation, very much unlike the other-worldly, beatific, ideal represented by a plaster saint, or the lifeless object itself. It’s not hard to imagine how *plaster saint* could come to mean hypocrite: real humans are flawed; if you look like a plaster saint you must be faking it. Sarcasm could have played a role here too. However Bernard Shaw’s quote is highly atypical. [Annie Edwards’ *A Plaster Saint* (1899)](https://www.google.pt/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=George%20Gervase,%20is%20the%20plaster%20saint&tbm=bks) is the only other instance I found of *plaster saint* used sarcastically in this way. *Plaster saint* in the Merriam-Webster sense appears in scores of books though. Merriam-Webster says the first known use is in 1890. So it’s likely Kipling’s *Tommy*, quoted by the OP, and first published that year under the title *The Queen’s Uniform* [(The Kipling Society)](http://www.kiplingsociety.co.uk/rg_tommy1.htm). Here Kipling has the proverbial British soldier Tommy Atkins criticise the British public, who sees the common soldier sometimes as a hero, sometimes as a ruffian (my emphasis throughout): > > […] Yes, makin’ mock o’ uniforms that guard you while you sleep > > Is cheaper than them uniforms, an’ they’re starvation cheap. > > An’ hustlin’ drunken soldiers when they’re goin’ large a bit > > Is five times better business than paradin’ in full kit. > > Then it’s Tommy this, an’ Tommy that, an’ Tommy, ’ow’s yer soul? 
> > But it’s “Thin red line of ’eroes” when the drums begin to roll > > The drums begin to roll, my boys, the drums begin to roll, > > O it's “Thin red line of ’eroes,” when the drums begin to roll. > > > > > We aren’t no thin red ’eroes, nor we aren’t no blackguards too, > > But single men in barricks, most remarkable like you; > > An’ if sometimes our conduck isn’t all your fancy paints, > > Why, single men in barricks don’t grow into ***plaster saints*** […] > > [Rudyard Kipling, *Tommy* aka *The Queen’s Uniform* (1890), (more info in Kipling Society)](http://www.kiplingsociety.co.uk/rg_tommy1.htm) and [full poem here](http://www.kiplingsociety.co.uk/bookmart_fra.htm) > > > The following give a more explicit description of what a plaster saint is not: > > Henry Morgan the Buccaneer was no “***plaster saint***”. His weaknesses, his follies, his errors are writ large on his record. He was rash, impulsive, reckless of speech, and oftentimes unscrupulous in action. He was a good hater and a firm friend. > > [The Transactions of the Honourable Society of Cymmrodorion, 1899, p. 41.](https://www.google.com/search?biw=1366&bih=625&tbs=sbd%3A1&tbm=bks&sxsrf=ALeKk013Cpz2da0_vXEpkm7TDiSVqT1GCg%3A1613907761764&ei=MUcyYJCKLozBUoXGg8AD&q=%22Henry%20Morgan%20the%20Buccaneer%20was%20no%20plaster%20saint%22%201899&oq=%22Henry%20Morgan%20the%20Buccaneer%20was%20no%20plaster%20saint%22%201899&gs_l=psy-ab.3...19561.24810.0.26441.11.8.3.0.0.0.124.846.2j6.8.0....0...1c.1.64.psy-ab..0.0.0....0.Pk-R29yiC20) > > > > > A study of his career will probably make us like him better, for we shall find that he was a man with very human virtues and failings, not a preposterous ***plaster saint***. > > [William Alfred Hirst, *Walks about London*, Henry Holt, 1900, p. 
80.](https://books.google.pt/books?id=twU3AQAAMAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiEjafit6TLAhWLthoKHQDNAr8Q6AEIOjAH) > > > Sometimes the *plaster saint* is implicitly presented as something good: > > “Look here, Elizabeth,” she said desperately, “have done with all this nonsense, for heaven's sake, and take your husband as you find him. He is no ***plaster saint***, but neither are you, or any of us for that matter.” > > [Kate Horn, *Ships of Desire*, Cassel and Company, 1909, p. 317.](https://books.google.pt/books?id=bEQgAAAAMAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiCiavAw6TLAhVL7RQKHfsxDCUQ6AEIMDAE) > > > Sometimes people cultivate a *plaster-saint* image of important persons: > > In short, she [Rosa Parks] is on her way to becoming the secular version of a ***plaster saint***. It is a fate that has already befallen Martin Luther King, who is so venerated it is politically incorrect even to acknowledge his human failings, like his womanising and his plagiarism. > > [“American trouble-makers,” *The Economist Year Book, 1992 in Review*, The Economist Books, 1993, p. 292.](https://books.google.pt/books?id=gRdXAAAAYAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwjn5uOV0KXLAhWK2xoKHS5uC-g4KBDoAQhGMAk) > > > > > the Trustees were aware of the existence of letters by Einstein, some of them since published, 15 others to be published later, that conflict with the “***plaster saint***” image they wished to preserve > > [John Stachel, *Einstein from B to Z*, Birkhäuser, 2002, p. 99.](https://www.google.pt/search?q=%22plaster%20saint%22&espv=2&biw=1366&bih=643&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2001%2Ccd_max%3A12%2F31%2F2010&tbm=bks#q=%22plaster%20saint%22&tbs=cdr:1,cd_min:1/1/2001,cd_max:12/31/2010&tbm=bks&start=20) > > > > > Several strategies combine to defuse the image of Lincoln as a ***plaster saint***. […] He may have contracted syphilis as a young man. 
[…] Vidal’s very human Lincoln knows the art of the political deal. > > [Susan Baker, Curtis S. Gibson, *Gore Vidal: A Critical Companion*, Greenwood Press, 1997, p. 88.](https://books.google.pt/books?id=XWtMF1sl9twC&pg=PA88&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwj13d-3x6XLAhXLDxoKHcxxA_w4FBDoAQhCMAc#v=onepage&q=%22plaster%20saint%22&f=false) > > > Often we get the notion that a plaster saint is someone we wouldn’t like that much. In this case Ernie, a young girl, has a “mercurial temperament”, but is urged by her friends to beat another girl, this one perhaps a plaster saint by nature or conviction, at winning a prize for “the pupil whose general average in attendance, conduct, and scholarship should be the highest.” She says: > > ”All right,” promised Ernie, with a weary little sigh. “I don't mind the studying so much; but I must confess I'm tired of being a ***plaster saint!***” > > [Alice Calhoun Haines, *The Luck of the Dudley Grahams*, Henry Holt, 1907, p. 173. (Full book available here.)](http://library.si.edu/digital-library/book/luckofdudleygrah00hain) > > > And even real saints are no plaster saints > > Do not for one moment picture him [Saint John Bosco] as a little monster of perfection, with no personality, no reactions, anaemic as a ***plaster saint***. The retiring, timid, peaceable, passive one was not John, but his brother Joseph—an intelligent, hardworking boy, marked from the beginning with mark of those who will never go above or below the level of a decent obscurity. But John was a different matter […] > > [Henry Ghéon, *The Secret of Saint John Bosco*, Tradibooks, 1944, p. 21.](https://books.google.pt/books?id=MRJdAgAAQBAJ&pg=PA21&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiww6blsqXLAhWB7xQKHbUhB7gQ6AEIJjAC#v=onepage&q=%22plaster%20saint%22&f=false) > > > As to whether *plaster saint* was borrowed from another language I found no evidence of. 
The literal equivalents *saint de plâtre* in French, *santo de yeso* in Spanish, and *santo de gesso* in Portuguese mean the object only. [*Petit saint*](http://www.larousse.fr/dictionnaires/francais/saint/70548/locution?q=saint#166748) (*little saint*) is used ironically to refer to a person hypocritically affecting virtue, and has been in use since the early 1800s. *Ce n’est pas un (petit) saint* (*he/she is no (little) saint*) sounds very much like the English phrase and [has been around](https://books.google.com/ngrams/graph?content=pas%20un%20petit%20saint%2Bpas%20un%20saint&year_start=1800&year_end=2000&corpus=19&smoothing=3&share=&direct_url=t1%3B%2C%28pas%20un%20petit%20saint%20%2B%20pas%20un%20saint%29%3B%2Cc0) since the early 1800s too, but means they’re dishonest. My hunch, for all it’s worth, is that the phrase as used in the examples above is transparent and suggestive enough for English speakers to have coined it without outside help.
For what it's worth, I always thought it was a comparison with the statues of saints carved in marble that you would find in a cathedral. The plaster saint is a cheap, breakable imitation of the real thing. Just speculation on my part.
303,047
The saying [plaster saint](http://www.oxforddictionaries.com/definition/english/plaster-saint) is used to refer to: > > * A person who makes a show of being without moral faults or human weakness, especially in a hypocritical way. (ODO) > > > The expression is generally used to state that *you are no plaster saint* as in: > > * *she is no plaster saint—she acknowledges her faults and is quick to ask forgiveness.* > > > Usage appears to be from the late 19th century according to [Ngram](https://books.google.com/ngrams/graph?content=plaster%20saint%2Cno%20plaster%20saint&year_start=1870&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cplaster%20saint%3B%2Cc0%3B.t1%3B%2Cno%20plaster%20saint%3B%2Cc0) and OED early usage examples are: > > * ***1890*** R. Kipling Barrack-room Ballads (1892) 8 Single men in barricks [sic] don't grow into plaster saints. > * ***1898*** G. B. Shaw Philanderer iv, in Plays Unpleasant 148 You fraud! You humbug! You miserable little plaster saint! > > > [![enter image description here](https://i.stack.imgur.com/Gyqil.jpg)](https://i.stack.imgur.com/Gyqil.jpg) A plaster saint. Questions: 1) I have always seen a plaster statue of a saint as an object of veneration and respect, so how did it come to represent an hypocritical attitude? What am I missing here? 2) The literal expression 'plaster saint' and its figurative usage appear to coincide in terms of period of origin (late 19th century). Was the expression imported from some 'catholic country' at that time?
2016/01/29
[ "https://english.stackexchange.com/questions/303047", "https://english.stackexchange.com", "https://english.stackexchange.com/users/-1/" ]
The interesting thing is [Merriam-Webster defines ***plaster saint*** simply as:](http://www.merriam-webster.com/dictionary/plaster%20saint) > > a person without human failings. > > > I sifted through Google Books, and this is the meaning you find in book after book after book. When it is explained why someone is not a plaster saint, the reason is that the person is less than saintly, misbehaves, has passions, struggles with temptation, very much unlike the other-worldly, beatific, ideal represented by a plaster saint, or the lifeless object itself. It’s not hard to imagine how *plaster saint* could come to mean hypocrite: real humans are flawed; if you look like a plaster saint you must be faking it. Sarcasm could have played a role here too. However Bernard Shaw’s quote is highly atypical. [Annie Edwards’ *A Plaster Saint* (1899)](https://www.google.pt/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=George%20Gervase,%20is%20the%20plaster%20saint&tbm=bks) is the only other instance I found of *plaster saint* used sarcastically in this way. *Plaster saint* in the Merriam-Webster sense appears in scores of books though. Merriam-Webster says the first known use is in 1890. So it’s likely Kipling’s *Tommy*, quoted by the OP, and first published that year under the title *The Queen’s Uniform* [(The Kipling Society)](http://www.kiplingsociety.co.uk/rg_tommy1.htm). Here Kipling has the proverbial British soldier Tommy Atkins criticise the British public, who sees the common soldier sometimes as a hero, sometimes as a ruffian (my emphasis throughout): > > […] Yes, makin’ mock o’ uniforms that guard you while you sleep > > Is cheaper than them uniforms, an’ they’re starvation cheap. > > An’ hustlin’ drunken soldiers when they’re goin’ large a bit > > Is five times better business than paradin’ in full kit. > > Then it’s Tommy this, an’ Tommy that, an’ Tommy, ’ow’s yer soul? 
> > But it’s “Thin red line of ’eroes” when the drums begin to roll > > The drums begin to roll, my boys, the drums begin to roll, > > O it's “Thin red line of ’eroes,” when the drums begin to roll. > > > > > We aren’t no thin red ’eroes, nor we aren’t no blackguards too, > > But single men in barricks, most remarkable like you; > > An’ if sometimes our conduck isn’t all your fancy paints, > > Why, single men in barricks don’t grow into ***plaster saints*** […] > > [Rudyard Kipling, *Tommy* aka *The Queen’s Uniform* (1890), (more info in Kipling Society)](http://www.kiplingsociety.co.uk/rg_tommy1.htm) and [full poem here](http://www.kiplingsociety.co.uk/bookmart_fra.htm) > > > The following give a more explicit description of what a plaster saint is not: > > Henry Morgan the Buccaneer was no “***plaster saint***”. His weaknesses, his follies, his errors are writ large on his record. He was rash, impulsive, reckless of speech, and oftentimes unscrupulous in action. He was a good hater and a firm friend. > > [The Transactions of the Honourable Society of Cymmrodorion, 1899, p. 41.](https://www.google.com/search?biw=1366&bih=625&tbs=sbd%3A1&tbm=bks&sxsrf=ALeKk013Cpz2da0_vXEpkm7TDiSVqT1GCg%3A1613907761764&ei=MUcyYJCKLozBUoXGg8AD&q=%22Henry%20Morgan%20the%20Buccaneer%20was%20no%20plaster%20saint%22%201899&oq=%22Henry%20Morgan%20the%20Buccaneer%20was%20no%20plaster%20saint%22%201899&gs_l=psy-ab.3...19561.24810.0.26441.11.8.3.0.0.0.124.846.2j6.8.0....0...1c.1.64.psy-ab..0.0.0....0.Pk-R29yiC20) > > > > > A study of his career will probably make us like him better, for we shall find that he was a man with very human virtues and failings, not a preposterous ***plaster saint***. > > [William Alfred Hirst, *Walks about London*, Henry Holt, 1900, p. 
80.](https://books.google.pt/books?id=twU3AQAAMAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiEjafit6TLAhWLthoKHQDNAr8Q6AEIOjAH) > > > Sometimes the *plaster saint* is implicitly presented as something good: > > “Look here, Elizabeth,” she said desperately, “have done with all this nonsense, for heaven's sake, and take your husband as you find him. He is no ***plaster saint***, but neither are you, or any of us for that matter.” > > [Kate Horn, *Ships of Desire*, Cassel and Company, 1909, p. 317.](https://books.google.pt/books?id=bEQgAAAAMAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiCiavAw6TLAhVL7RQKHfsxDCUQ6AEIMDAE) > > > Sometimes people cultivate a *plaster-saint* image of important persons: > > In short, she [Rosa Parks] is on her way to becoming the secular version of a ***plaster saint***. It is a fate that has already befallen Martin Luther King, who is so venerated it is politically incorrect even to acknowledge his human failings, like his womanising and his plagiarism. > > [“American trouble-makers,” *The Economist Year Book, 1992 in Review*, The Economist Books, 1993, p. 292.](https://books.google.pt/books?id=gRdXAAAAYAAJ&q=%22plaster%20saint%22&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwjn5uOV0KXLAhWK2xoKHS5uC-g4KBDoAQhGMAk) > > > > > the Trustees were aware of the existence of letters by Einstein, some of them since published, 15 others to be published later, that conflict with the “***plaster saint***” image they wished to preserve > > [John Stachel, *Einstein from B to Z*, Birkhäuser, 2002, p. 99.](https://www.google.pt/search?q=%22plaster%20saint%22&espv=2&biw=1366&bih=643&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2001%2Ccd_max%3A12%2F31%2F2010&tbm=bks#q=%22plaster%20saint%22&tbs=cdr:1,cd_min:1/1/2001,cd_max:12/31/2010&tbm=bks&start=20) > > > > > Several strategies combine to defuse the image of Lincoln as a ***plaster saint***. […] He may have contracted syphilis as a young man. 
[…] Vidal’s very human Lincoln knows the art of the political deal. > > [Susan Baker, Curtis S. Gibson, *Gore Vidal: A Critical Companion*, Greenwood Press, 1997, p. 88.](https://books.google.pt/books?id=XWtMF1sl9twC&pg=PA88&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwj13d-3x6XLAhXLDxoKHcxxA_w4FBDoAQhCMAc#v=onepage&q=%22plaster%20saint%22&f=false) > > > Often we get the notion that a plaster saint is someone we wouldn’t like that much. In this case Ernie, a young girl, has a “mercurial temperament”, but is urged by her friends to beat another girl, this one perhaps a plaster saint by nature or conviction, at winning a prize for “the pupil whose general average in attendance, conduct, and scholarship should be the highest.” She says: > > ”All right,” promised Ernie, with a weary little sigh. “I don't mind the studying so much; but I must confess I'm tired of being a ***plaster saint!***” > > [Alice Calhoun Haines, *The Luck of the Dudley Grahams*, Henry Holt, 1907, p. 173. (Full book available here.)](http://library.si.edu/digital-library/book/luckofdudleygrah00hain) > > > And even real saints are no plaster saints > > Do not for one moment picture him [Saint John Bosco] as a little monster of perfection, with no personality, no reactions, anaemic as a ***plaster saint***. The retiring, timid, peaceable, passive one was not John, but his brother Joseph—an intelligent, hardworking boy, marked from the beginning with mark of those who will never go above or below the level of a decent obscurity. But John was a different matter […] > > [Henry Ghéon, *The Secret of Saint John Bosco*, Tradibooks, 1944, p. 21.](https://books.google.pt/books?id=MRJdAgAAQBAJ&pg=PA21&dq=%22plaster%20saint%22&hl=en&sa=X&ved=0ahUKEwiww6blsqXLAhWB7xQKHbUhB7gQ6AEIJjAC#v=onepage&q=%22plaster%20saint%22&f=false) > > > As to whether *plaster saint* was borrowed from another language I found no evidence of. 
The literal equivalents *saint de plâtre* in French, *santo de yeso* in Spanish, and *santo de gesso* in Portuguese mean the object only. [*Petit saint*](http://www.larousse.fr/dictionnaires/francais/saint/70548/locution?q=saint#166748) (*little saint*) is used ironically to refer to a person hypocritically affecting virtue, and has been in use since the early 1800s. *Ce n’est pas un (petit) saint* (*he/she is no (little) saint*) sounds very much like the English phrase and [has been around](https://books.google.com/ngrams/graph?content=pas%20un%20petit%20saint%2Bpas%20un%20saint&year_start=1800&year_end=2000&corpus=19&smoothing=3&share=&direct_url=t1%3B%2C%28pas%20un%20petit%20saint%20%2B%20pas%20un%20saint%29%3B%2Cc0) since the early 1800s too, but means they’re dishonest. My hunch, for all it’s worth, is that the phrase as used in the examples above is transparent and suggestive enough for English speakers to have coined it without outside help.
> > I have always seen a plaster statue of a saint as an object of veneration and respect, > > > Unfortunately, the rest of the UK had not had the same experience: Some background… up until the late 19th century, “Plaster saints” were associated with Catholics who were seen as dangerous, weird, and enemies of civilisation, as their allegiance was to the Pope and not the Crown. In 1828, Daniel O’Connell, an Irish Catholic, was elected to Parliament but refused to take his seat until the anti-Catholic oath was altered to his liking. From [The Encyclopaedia Britannica](https://www.britannica.com/event/Catholic-Emancipation) > > O’Connell’s ensuing triumphant election compelled the British prime minister, the Duke of Wellington, and Sir Robert Peel to carry the Emancipation Act of 1829 in Parliament. This act admitted Irish and English Roman Catholics to Parliament and to all but a handful of public offices. With the Universities Tests Act of 1871, which opened the universities to Roman Catholics, Catholic Emancipation in the United Kingdom was virtually complete. > > > You will see that at the time of Kipling, Roman Catholics had only recently been (almost\*) fully emancipated, and the cynical common soldiery had not quite agreed to this – they, and the majority of the UK population, saw the plaster saints that Catholics worshipped, and Protestants did not, as tacky and idolatrous – a cheap commercial representation of someone who, in fact, should be “an object of veneration and respect.” Hence the derogatory use. The OED gives: > > **plaster saint** n. freq. derogatory **a person who makes a show of being without moral faults or human weakness, esp. in a hypocritical way.** > > > As has been shown, the first use must have been somewhat prior to Kipling’s 1890 use: *“Single men in barricks [sic] don't grow into plaster saints.”* and seems to indicate “replicas of truly holy saints.” This seems to be borne out by > > 1980 Chinweizu et al. in D. Walder Lit. in Mod. 
World (1990) 286 Were our ancestors a parade of plaster saints who never, among themselves, struck a blow or hurt a fly? > > > 1995 Denver Post 15 Jan. e8/2 Clarke's book..presents her as a profoundly complex human being, infinitely more fascinating than any plaster saint or media-manufactured martyr. > > > There is also a play “[A Plaster Saint](https://babel.hathitrust.org/cgi/pt?id=osu.32435072953136&view=1up&seq=1)” by [Annie Edwards](https://peoplepill.com/people/annie-edwards/) and what I have read of the context seems to accord with the derogatory use. “[Historical Plays, Parts 1-7](https://books.google.co.uk/books?id=rqcxAQAAMAAJ&pg=PA264&dq=%22plaster%20saint%22&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwiPhqSUxvvuAhVQUhUIHe4bAGcQ6AEwCHoECGIQAg#v=onepage&q=%22plaster%20saint%22&f=false)" By Tom Taylor from 1877 makes mention of it. \*There are still one or two restrictions on the rights of Catholics in the UK.
293,742
I am curious about a [question](https://stackoverflow.com/questions/30125788/is-there-a-name-for-aggregation-queries-that-record-0-values) I asked today and whether or not it was a question that should be asked. I have already searched the topic and found this [question](https://meta.stackexchange.com/questions/183177/question-closed-because-yes-no-answer) on MSE about Yes/No questions, but I don't feel it applies specifically to my problem. The question I asked, in short, was: > > Is there a special name for this type of query? > > > Why I think this is a good question: * The query in question is a very common one seen on SO, and if there is a name for it, the name may help in flagging duplicates and pointing users in the right direction. * The question is not specific to a single issue I am having, and therefore can be beneficial to many other SO users. * I have researched my question with no luck, but followed all the guidelines, such as proofreading my question and providing a specific example. Why I'm afraid this isn't okay: * The answer may simply be 'No.' * That could lead to a one-word answer, because it is too hard to explain 'why' no one came up with a name. From the MSO/MSE questions I've read, a question that can be answered so simply is not liked by some users because it has the 'give this answer to me' attitude. Should this question be closed? I'm taking a chance to see if there is an answer; if there is, it could be a very helpful question in the future. If there's not, the question was really just a dud. Should a question with a possibly anticlimactic answer like that be avoided on SO?
2015/05/08
[ "https://meta.stackoverflow.com/questions/293742", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/3131147/" ]
The question is off topic. It's not a *programming* problem. Having said that, were it to be asked somewhere it was on topic, my answer in the meta question you linked would apply exactly, specifically with respect to the second bullet. You aren't *actually* interested in a yes/no answer; you want to be asking a "what" question. You want to know, "What is this called?" and if the answer happens to be, "It has no name.", then so be it. I mean, if someone said, "Yes, there is a name for that.", clearly it wouldn't be what you're looking to hear. You would want to know *what that name is*, not just *that one exists*, in just the same way that when people ask, "Is it possible to do [...]?" they almost exclusively mean, "How do I do [...]?"
> > Is a question with 'no' as a possible answer a bad question? > > > No.
2,428,800
I have an app, say MyApp Free. I want to create MyApp Pro, which I can charge for and which has some additional functionality. The obvious way is to have a library that contains almost all my app code, then two Android app projects for the Free and Pro versions which reference that library. Suggestions?
2010/03/11
[ "https://Stackoverflow.com/questions/2428800", "https://Stackoverflow.com", "https://Stackoverflow.com/users/291910/" ]
Look on my [github](http://github.com/commonsguy) page for the CWAC series of projects -- they all create JAR files for reuse in other projects. In short, there's not much magical for simple JARs, other than putting the Android JAR in your build path so your code referencing Android APIs compiles. However: * It is difficult to share resources. I am working on a solution for that now. * You can have components (activities, services, etc.) in the JAR, but the apps themselves still have to list those components in those apps' manifests
For your particular situation, you could just write one project, then put a little static boolean in it that determines whether it's the free version or the paid version. I guess it doesn't really matter unless the added functionality involves a much bigger download.
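The static-flag idea is language-agnostic; here is a minimal sketch in Python rather than Java, just to show the shape (the names `IS_PRO` and `export_report` are invented for illustration and are not part of any Android API):

```python
# Build-time flag: the free build ships with IS_PRO = False;
# the paid build flips it to True before packaging.
IS_PRO = False

def export_report(rows):
    """A hypothetical Pro-only feature, gated on the flag."""
    if not IS_PRO:
        raise PermissionError("Exporting reports is a Pro feature")
    # Render the rows as simple CSV text.
    return "\n".join(",".join(str(cell) for cell in row) for row in rows)
```

The catch, as the remark about download size suggests, is that the free package still contains all of the paid code; the flag merely disables it.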
2,428,800
I have an app, say MyApp Free. I want to create MyApp Pro, which I can charge for and which has some additional functionality. The obvious way is to have a library that contains almost all my app code, then two Android app projects for the Free and Pro versions which reference that library. Suggestions?
2010/03/11
[ "https://Stackoverflow.com/questions/2428800", "https://Stackoverflow.com", "https://Stackoverflow.com/users/291910/" ]
Look on my [github](http://github.com/commonsguy) page for the CWAC series of projects -- they all create JAR files for reuse in other projects. In short, there's not much magical for simple JARs, other than putting the Android JAR in your build path so your code referencing Android APIs compiles. However: * It is difficult to share resources. I am working on a solution for that now. * You can have components (activities, services, etc.) in the JAR, but the apps themselves still have to list those components in those apps' manifests
Another option is keeping a separate branch in your code repository for the free version, that is what I do. You do have to change AndroidManifest.xml for the free version on the free branch. See [2-Version software: Best VCS approach?](https://stackoverflow.com/questions/2365542/2-version-software-best-vcs-approach)
112,564
I started a new job in data science a few months back. The problem I was assigned to was very challenging, but exciting. Our client was using a very simple baseline model to make predictions, and wanted to improve on it by employing more sophisticated machine learning methods, and that's where my team and I came in. The very simple baseline algorithm works pretty well. After much experimentation and research, I could only beat the baseline by around 20%, when our goal was to beat it by twice as much. At the start of the project, I was already very nervous about the target performance that we were aiming for. In research, we never made guarantees about how well the model will perform on the data (we could only hypothesize); it was a matter of implementing it and finding out. My boss seems extremely dissatisfied with these results, but I'm at a loss as to how to bring the error down further. I do acknowledge that I may be in need of further training, and that I still have so much to learn. I've also read articles like [this](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa) and [this](https://towardsdatascience.com/first-create-a-common-sense-baseline-e66dbf8a8a47) specifying that sometimes, simple baseline models are better than complex methods. However, I still can't help but feel like a failure for not being able to reach our target performance. Does this mean I am underperforming as a data scientist if I can't beat the baseline by a large margin? Or is this a usual experience (for those who've been in the field for quite some time)? How can I communicate that the target performance is unattainable?
2018/05/20
[ "https://workplace.stackexchange.com/questions/112564", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/87219/" ]
Please ask yourself and your senior data scientists if the target performance is reasonable or not. For example, it's not reasonable to expect 100% accuracy on the simple MNIST data set. Assume the target performance is reasonable and your boss wants that to happen. Generally, you have the following options: * Please **study** the failing cases. You shouldn't treat machine learning as a black box; you should take a look at the cases where the model fails. There might be a pattern that you can further extract. * Re-engineer your feature sets. Kaggle people have good examples, please take a look. Does your data set look good? * Check your overfitting. You need good cross-validation for bringing your error rates down. * Try several models. GBM, SVM, neural networks etc. Data scientists generally need to understand quite a number of mathematical models. Your experience is **not unusual**. Many people have had this experience; it's quite common. If you look at the Kaggle prize winners, they would spend months just on improving the error rate by something like 2%. The non-winners could spend even more time with no improvement at all. If your existing model is already good, it's not simple to improve it even further. For example, deep learning might not improve the performance much if the data is linearly separable. You're not alone.
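To make the cross-validation bullet concrete, here is a minimal pure-Python sketch (every function name and the toy data below are invented for illustration; in practice you would reach for a library such as scikit-learn) that scores a mean-predicting baseline against a slightly richer model on held-out folds, which is the honest way to compare them:

```python
import statistics

def kfold_indices(n, k):
    """Yield (train_idx, test_idx) lists for k-fold cross-validation."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        test_set = set(test)
        train = [j for j in range(n) if j not in test_set]
        yield train, test

def mse(predicted, actual):
    """Mean squared error between two equal-length sequences."""
    return statistics.mean((p - a) ** 2 for p, a in zip(predicted, actual))

def cv_score(xs, ys, fit, k=5):
    """Average held-out MSE; fit(train_xs, train_ys) must return a predict(x) function."""
    scores = []
    for train, test in kfold_indices(len(xs), k):
        predict = fit([xs[i] for i in train], [ys[i] for i in train])
        scores.append(mse([predict(xs[i]) for i in test], [ys[i] for i in test]))
    return statistics.mean(scores)

def fit_baseline(tx, ty):
    """Baseline: always predict the training mean."""
    m = statistics.mean(ty)
    return lambda x: m

def fit_linear(tx, ty):
    """Least-squares line y = a*x + b, standing in for the 'sophisticated' model."""
    mx, my = statistics.mean(tx), statistics.mean(ty)
    a = sum((x - mx) * (y - my) for x, y in zip(tx, ty)) / sum((x - mx) ** 2 for x in tx)
    b = my - a * mx
    return lambda x: a * x + b
```

If the richer model's held-out error is not clearly below the baseline's on the same folds, the honest conclusion is that the baseline stands, which is exactly the situation described in the question.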
I will reformulate the question into something more general: does an unmet expectation on a beginner's first task make them bad at their job? No, I would not draw such a conclusion from this observation. * It could be that the expectation was too high (in this case: the baseline model may already match the underlying model). * It could be that the problem is poorly understood. E.g., I am doing a very special application of data-based optimization, and it took me (physics PhD) and my very experienced colleague (mathematics PhD) several months to analyze the problem to a depth at which we could reasonably start. * It could be that the problem is plainly complex: new people who start with me/my colleague currently take several months until they understand the underlying logic/abstract goals of what we do.
33,701
For a child who goes to preschool, are there any scholarly recommendations on how much preschool is recommended for a child? I know that every child is different, and that disadvantaged children may benefit differently. Perhaps how much structured group educational instruction would be a better question? I.e., 4 hours a week? 10 hours? 15 hours?
2018/04/19
[ "https://parenting.stackexchange.com/questions/33701", "https://parenting.stackexchange.com", "https://parenting.stackexchange.com/users/13830/" ]
I disagree with purple rain's statement, even if it has some published backing. I've seen what keeping your kids home until the second they go into public school does. If you have not raised them to be considerate, they can go in being total d-bags, and having to adopt a behavioral pattern after one is established may be a far worse nightmare than an earlier introduction via a preschool. This depends on the structure of the preschool as well. Some are play-oriented and handle guidance well. Some might as well be a prison courtyard where anything goes. A lot of what preschools do is develop patterns of time. Typically a preschool is not an all-day program. At first it may be a couple of hours a day, a couple of days a week. It may have a nap time. After a couple of years, which is generous, your child may not lose their feces every time you walk out of a room. If nothing else, preschool may wean them out of separation anxiety in a way where you're not violating a truancy law if you cave in and take them home. Both my girls went to preschool. Both did exceptionally well there, and had plenty of kids to play with, activities that we didn't really have at home, and the freedom to play without us hovering over them. They got used to us not being there all day. And they got used to trusting that we would show up later to get them. They picked up the patterns of a routine different from the one they knew, at an age where they can be distracted or are in general more receptive to change. Our preschool was 2 years. Year one was 3 days a week. Year two was 4 days. Both years a "day" was defined as 3 1/2 hours: 8:30 to 12 noon. I happened to have a job that allowed me to adopt that schedule. I felt any more than this would have been too much preschool. If it helps to know, both of my girls were born at a time of year that makes them unable to register for kindergarten until they are 6.
Both of them took the early entrance exam, and both were accepted into kinder at 4 years old. Both turned 5 within a month or so of entrance so don't get too excited there. Point is, the entrance exam was not an aptitude test. It was a maturity evaluation and a general verification that they can walk in a single file line, take turns, raise hands, share, count to a certain degree, identify shapes, etc. Very basic things. All of which they learned at their preschools. I don't have a control study here. I don't have an identical family with identical situations choosing the non preschool path, so I can't say if preschool is the deciding factor in why my kids did so well, and are doing so well now. Could be, but could also just be coincidence. Their cousin... they can't afford preschool. That girl is NOT ready for public school. Just saying...
Scholarly opinion will definitely vary, but the following article citing a well-respected child psychologist makes a lot of sense to me: ### [Nurturing children: Why "early learning" doesn't help](https://www.imfcanada.org/archive/685/nurturing-children-why-early-learning-does-not-help) The basis of the article is that preschoolers aren't developmentally able to learn many things. The things they *do* need to learn (like socialization) are best learned in an environment of adult attachment, *not* an environment of peers.
2,803
I'm currently looking for a hosting company that can provide a very solid service with a 100% SLA. In the search both cloud hosting and managed dedicated hosting have come up. (I'd rather not manage the server myself as I'm still rather new to Linux.) I'm not sure if phrasing this as a "which is best" would make sense, but what advantages does cloud hosting have over dedicated server hosting? I need a reliable service above all else, and some elements of the application to be hosted will be relatively CPU intensive, although those spikes in CPU usage will be sporadic, so the hosting needs to be able to deal with that.
2010/09/06
[ "https://webmasters.stackexchange.com/questions/2803", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/1306/" ]
> > what advantages does cloud hosting have over dedicated server hosting? > > > There is no answer to that in the abstract; or rather, the general answer is too long to type up here. You need to start with a picture of which architecture you desire and which load you forecast, and then evaluate the hosting architecture on that basis. Just for a start, which programming language you're using matters a great deal, and you didn't say. **To give you a partial answer, in short form:** * Understand the [CAP theorem](http://en.wikipedia.org/wiki/CAP_theorem). Cloud hosting usually offers storage APIs that lean to the A-P side of CAP, such as Amazon SimpleDB and S3. * Cloud hosting implies that scaling out will not be a problem, i.e. you can spool up 100 new servers without prior warning, and you will get them. * Cloud hosting should have some network-centric and monitoring-centric add-ons that make managing a fleet of servers easier, e.g. HTTP load balancing, monitoring, and auto-scaling. **Please note that:** * If you're just using a few servers, then cloud computing isn't really that different from traditional VPS hosting. * If you use those highly scalable storage APIs (like SimpleDB), then you do of course gain a platform to handle lots of growth. On the flip side, you're also strongly locked in by the cloud computing vendor. > > I need a reliable service above all else > > > That IMHO points to either: * A fully managed VPS or dedicated server provider like Rackspace, Engine Yard, Joyent and others. **OR** * A 'full-stack' cloud computing provider like Google App Engine or Windows Azure (as opposed to Amazon EC2, which requires you to manage the operating system, backups, security patching etc. yourself). Either of the above would be good starting points -- but again, it comes down to the specifics of your architecture and your growth expectations.
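The auto-scaling add-on mentioned above ("you can spool up 100 new servers without prior warning") can be illustrated with a toy sizing rule. This is a hypothetical sketch of the decision logic only, not any provider's actual API; the function name, thresholds, and defaults are all my own:

```python
import math

def desired_servers(current, avg_cpu, target_cpu=0.6,
                    min_servers=1, max_servers=100):
    """Toy auto-scaling rule: size the fleet so the observed load would
    land near the target average CPU utilization.

    avg_cpu is the current fleet-wide average utilization (0.0-1.0).
    """
    total_load = current * avg_cpu              # work, in server-equivalents
    wanted = math.ceil(total_load / target_cpu)  # servers needed at target
    return max(min_servers, min(wanted, max_servers))
```

For example, a 10-server fleet running at 90% average CPU would be grown to 15 servers to bring utilization back toward 60%, while the same fleet idling at 30% would be shrunk to 5. Real auto-scalers add hysteresis and cooldown periods on top of a rule like this so the fleet doesn't thrash.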
Without an idea of the kind of traffic you'll be seeing or your plans for growth, I can't speak to whether you'll do better with a clustered/grid-computing option or a traditional dedicated server. However, as I've worked in the hosting industry for years, I can say that you will not find a reputable company with a 100% SLA - there is no such thing as guaranteed 100% uptime with any service, and anyone who promises as much is hiding something (perhaps something as simple as overcharging every month to allow for credit issuance in the event of downtime).
2,803
I'm currently looking for a hosting company that can provide a very solid service with a 100% SLA. In the search both cloud hosting and managed dedicated hosting have come up. (I'd rather not manage the server myself as I'm still rather new to Linux.) I'm not sure if phrasing this as a "which is best" would make sense, but what advantages does cloud hosting have over dedicated server hosting? I need a reliable service above all else, and some elements of the application to be hosted will be relatively CPU intensive, although those spikes in CPU usage will be sporadic, so the hosting needs to be able to deal with that.
2010/09/06
[ "https://webmasters.stackexchange.com/questions/2803", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/1306/" ]
> > what advantages does cloud hosting have over dedicated server hosting? > > > There is no answer to that in the abstract; or rather, the general answer is too long to type up here. You need to start with a picture of which architecture you desire and which load you forecast, and then evaluate the hosting architecture on that basis. Just for a start, which programming language you're using matters a great deal, and you didn't say. **To give you a partial answer, in short form:** * Understand the [CAP theorem](http://en.wikipedia.org/wiki/CAP_theorem). Cloud hosting usually offers storage APIs that lean to the A-P side of CAP, such as Amazon SimpleDB and S3. * Cloud hosting implies that scaling out will not be a problem, i.e. you can spool up 100 new servers without prior warning, and you will get them. * Cloud hosting should have some network-centric and monitoring-centric add-ons that make managing a fleet of servers easier, e.g. HTTP load balancing, monitoring, and auto-scaling. **Please note that:** * If you're just using a few servers, then cloud computing isn't really that different from traditional VPS hosting. * If you use those highly scalable storage APIs (like SimpleDB), then you do of course gain a platform to handle lots of growth. On the flip side, you're also strongly locked in by the cloud computing vendor. > > I need a reliable service above all else > > > That IMHO points to either: * A fully managed VPS or dedicated server provider like Rackspace, Engine Yard, Joyent and others. **OR** * A 'full-stack' cloud computing provider like Google App Engine or Windows Azure (as opposed to Amazon EC2, which requires you to manage the operating system, backups, security patching etc. yourself). Either of the above would be good starting points -- but again, it comes down to the specifics of your architecture and your growth expectations.
Cloud hosting has a lot of different meanings, but if you are talking about Platform as a Service (PaaS) or Infrastructure as a Service (IaaS) then the main benefits are usually the ability to scale out to multiple servers and pay hourly instead of monthly. I wrote a blog post about [VPS/VM vs Dedicated vs Cloud Servers: Hosting options and cost comparisons](http://codeblog.theg2.net/2010/10/vpsvm-vs-dedicated-vs-cloud-servers.html), and from your question it sounds like you would do just fine with a Virtual Private Server (VPS) or VM hosting provider. If uptime is your highest concern, then using a cloud hosting provider with multiple VMs behind a load balancer is your best bet for high availability. By using multiple servers you can take one down for maintenance/upgrades and not have any downtime.
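The "multiple VMs behind a load balancer" idea above can be sketched as a toy round-robin balancer that skips unhealthy backends; the class and the backend names are hypothetical, for illustration of the pattern only:

```python
class RoundRobinBalancer:
    """Toy illustration: with several backends behind a balancer, taking one
    down for maintenance doesn't interrupt service; traffic simply rotates
    through the remaining healthy VMs."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._i = 0

    def mark_down(self, backend):
        """Remove a backend from rotation, e.g. for an upgrade."""
        self.healthy.discard(backend)

    def mark_up(self, backend):
        """Return a backend to rotation once it passes health checks."""
        self.healthy.add(backend)

    def pick(self):
        """Return the next healthy backend, or None if all are down."""
        for _ in range(len(self.backends)):
            b = self.backends[self._i % len(self.backends)]
            self._i += 1
            if b in self.healthy:
                return b
        return None
```

This is exactly the zero-downtime maintenance story: `mark_down("vm2")`, patch and reboot vm2 while vm1 and vm3 keep serving, then `mark_up("vm2")`. Production balancers do the mark_down/mark_up automatically via periodic health checks.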
2,803
I'm currently looking for a hosting company that can provide a very solid service with a 100% SLA. In the search both cloud hosting and managed dedicated hosting have come up. (I'd rather not manage the server myself as I'm still rather new to Linux.) I'm not sure if phrasing this as a "which is best" would make sense, but what advantages does cloud hosting have over dedicated server hosting? I need a reliable service above all else, and some elements of the application to be hosted will be relatively CPU intensive, although those spikes in CPU usage will be sporadic, so the hosting needs to be able to deal with that.
2010/09/06
[ "https://webmasters.stackexchange.com/questions/2803", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/1306/" ]
> > what advantages does cloud hosting have over dedicated server hosting? > > > There is no answer to that in the abstract; or rather, the general answer is too long to type up here. You need to start with a picture of which architecture you desire and which load you forecast, and then evaluate the hosting architecture on that basis. Just for a start, which programming language you're using matters a great deal, and you didn't say. **To give you a partial answer, in short form:** * Understand the [CAP theorem](http://en.wikipedia.org/wiki/CAP_theorem). Cloud hosting usually offers storage APIs that lean to the A-P side of CAP, such as Amazon SimpleDB and S3. * Cloud hosting implies that scaling out will not be a problem, i.e. you can spool up 100 new servers without prior warning, and you will get them. * Cloud hosting should have some network-centric and monitoring-centric add-ons that make managing a fleet of servers easier, e.g. HTTP load balancing, monitoring, and auto-scaling. **Please note that:** * If you're just using a few servers, then cloud computing isn't really that different from traditional VPS hosting. * If you use those highly scalable storage APIs (like SimpleDB), then you do of course gain a platform to handle lots of growth. On the flip side, you're also strongly locked in by the cloud computing vendor. > > I need a reliable service above all else > > > That IMHO points to either: * A fully managed VPS or dedicated server provider like Rackspace, Engine Yard, Joyent and others. **OR** * A 'full-stack' cloud computing provider like Google App Engine or Windows Azure (as opposed to Amazon EC2, which requires you to manage the operating system, backups, security patching etc. yourself). Either of the above would be good starting points -- but again, it comes down to the specifics of your architecture and your growth expectations.
+1 for 100% SLA; every mission-critical application should reside at a host who offers this. In addition, I might add that every company has fine print within the 100% SLA. They might guarantee it on the uptime (ping) and the hardware, but the more interesting part is whether they can offer a 100% SLA for the application itself. If you would like a list of providers who can offer this sort of thing, I've worked with a few I can recommend.
2,803
I'm currently looking for a hosting company that can provide a very solid service with a 100% SLA. In the search both cloud hosting and managed dedicated hosting have come up. (I'd rather not manage the server myself as I'm still rather new to Linux.) I'm not sure if phrasing this as a "which is best" would make sense, but what advantages does cloud hosting have over dedicated server hosting? I need a reliable service above all else, and some elements of the application to be hosted will be relatively CPU intensive, although those spikes in CPU usage will be sporadic, so the hosting needs to be able to deal with that.
2010/09/06
[ "https://webmasters.stackexchange.com/questions/2803", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/1306/" ]
> > what advantages does cloud hosting have over dedicated server hosting? > > > There is no answer to that in the abstract; or rather, the general answer is too long to type up here. You need to start with a picture of which architecture you desire and which load you forecast, and then evaluate the hosting architecture on that basis. Just for a start, which programming language you're using matters a great deal, and you didn't say. **To give you a partial answer, in short form:** * Understand the [CAP theorem](http://en.wikipedia.org/wiki/CAP_theorem). Cloud hosting usually offers storage APIs that lean to the A-P side of CAP, such as Amazon SimpleDB and S3. * Cloud hosting implies that scaling out will not be a problem, i.e. you can spool up 100 new servers without prior warning, and you will get them. * Cloud hosting should have some network-centric and monitoring-centric add-ons that make managing a fleet of servers easier, e.g. HTTP load balancing, monitoring, and auto-scaling. **Please note that:** * If you're just using a few servers, then cloud computing isn't really that different from traditional VPS hosting. * If you use those highly scalable storage APIs (like SimpleDB), then you do of course gain a platform to handle lots of growth. On the flip side, you're also strongly locked in by the cloud computing vendor. > > I need a reliable service above all else > > > That IMHO points to either: * A fully managed VPS or dedicated server provider like Rackspace, Engine Yard, Joyent and others. **OR** * A 'full-stack' cloud computing provider like Google App Engine or Windows Azure (as opposed to Amazon EC2, which requires you to manage the operating system, backups, security patching etc. yourself). Either of the above would be good starting points -- but again, it comes down to the specifics of your architecture and your growth expectations.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
2,803
I'm currently looking for a hosting company that can provide a very solid service with a 100% SLA. In the search both cloud hosting and managed dedicated hosting have come up. (I'd rather not manage the server myself as I'm still rather new to Linux.) I'm not sure if phrasing this as a "which is best" would make sense, but what advantages does cloud hosting have over dedicated server hosting? I need a reliable service above all else, and some elements of the application to be hosted will be relatively CPU intensive, although those spikes in CPU usage will be sporadic, so the hosting needs to be able to deal with that.
2010/09/06
[ "https://webmasters.stackexchange.com/questions/2803", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/1306/" ]
Without an idea of the kind of traffic you'll be seeing or your plans for growth, I can't speak to whether you'll do better with a clustered/grid-computing option or a traditional dedicated server. However, as I've worked in the hosting industry for years, I can say that you will not find a reputable company with a 100% SLA - there is no such thing as guaranteed 100% uptime with any service, and anyone who promises as much is hiding something (perhaps something as simple as overcharging every month to allow for credit issuance in the event of downtime).
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
2,803
I'm currently looking for a hosting company that can provide a very solid service with a 100% SLA. In the search both cloud hosting and managed dedicated hosting have come up. (I'd rather not manage the server myself as I'm still rather new to Linux.) I'm not sure if phrasing this as a "which is best" would make sense, but what advantages does cloud hosting have over dedicated server hosting? I need a reliable service above all else, and some elements of the application to be hosted will be relatively CPU intensive, although those spikes in CPU usage will be sporadic, so the hosting needs to be able to deal with that.
2010/09/06
[ "https://webmasters.stackexchange.com/questions/2803", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/1306/" ]
Cloud hosting has a lot of different meanings, but if you are talking about Platform as a Service (PaaS) or Infrastructure as a Service (IaaS) then the main benefits are usually the ability to scale out to multiple servers and pay hourly instead of monthly. I wrote a blog post about [VPS/VM vs Dedicated vs Cloud Servers: Hosting options and cost comparisons](http://codeblog.theg2.net/2010/10/vpsvm-vs-dedicated-vs-cloud-servers.html), and from your question it sounds like you would do just fine with a Virtual Private Server (VPS) or VM hosting provider. If uptime is your highest concern, then using a cloud hosting provider with multiple VMs behind a load balancer is your best bet for high availability. By using multiple servers you can take one down for maintenance/upgrades and not have any downtime.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
2,803
I'm currently looking for a hosting company that can provide a very solid service with a 100% SLA. In the search both cloud hosting and managed dedicated hosting have come up. (I'd rather not manage the server myself as I'm still rather new to Linux.) I'm not sure if phrasing this as a "which is best" would make sense, but what advantages does cloud hosting have over dedicated server hosting? I need a reliable service above all else, and some elements of the application to be hosted will be relatively CPU intensive, although those spikes in CPU usage will be sporadic, so the hosting needs to be able to deal with that.
2010/09/06
[ "https://webmasters.stackexchange.com/questions/2803", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/1306/" ]
+1 for 100% SLA; every mission-critical application should reside at a host who offers this. In addition, I might add that every company has fine print within the 100% SLA. They might guarantee it on the uptime (ping) and the hardware, but the more interesting part is whether they can offer a 100% SLA for the application itself. If you would like a list of providers who can offer this sort of thing, I've worked with a few I can recommend.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
30,854,520
I have an app which acts as an "admin app" for one of my apps that is already in the App Store. I want this admin app to be distributed to someone I know. Sending updates of the admin app would be much easier if I could use the TestFlight program. Is it possible to upload the app to the App Store with a wildcard App ID? Or do I have to create an explicit App ID for this app as well (which will not be submitted to review and only used via TestFlight)?
2015/06/15
[ "https://Stackoverflow.com/questions/30854520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4635315/" ]
No, it's not possible: every app needs a unique bundle identifier and the related certificates and profiles. See the [App Store guidelines](https://developer.apple.com/app-store/review/guidelines/) for more info
No, you must have a full, unique App ID. Also, you could issue that app as a B2B (Business-to-Business) app in order to still use the App Store and have control over who can download it. Your user will have to register as a business buyer in order to download it. But this would keep you from having to constantly reissue it every time the provisioning profile expires.
76,547
I am writing an essay that needs a word describing an employee who is treated well by most of his/her colleagues. Is there any common phrase for that? Please give me some suggestions.
2015/12/21
[ "https://ell.stackexchange.com/questions/76547", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/27484/" ]
Some ideas: esteemed, respected, well-liked, well-favored. But "treated well" might be more accurate than any of these. Other terms that mean treated well but to an excessive degree are "pampered" or "fawned over".
Thank you all for your support <3. I have looked up all of your suggestions in Oxford Dictionary and decided to use "esteemed". It has the closest meaning to my context. Thanks again ^^
781
As you may have noticed, [the big news of the day](https://hinduism.meta.stackexchange.com/questions/780/please-welcome-your-new-pro-tem-moderators) is that your moderator team has changed. I thought I would take this as an opportunity to issue your community a call to action. The moderators on any Stack Exchange site have a very important and very difficult job. There are multiple reasons for this, some of which are more obvious than others. Most of our frequent users know that mods are the people who actually handle all the flags. The instant delete, migrate and close powers that come with the moderator diamond get a lot of attention, too. Parts of the job that don't involve special privileges don't get talked about as much, but are no less important. Mods spend a lot of time—on meta and elsewhere in the community—gently guiding discussion and helping to keep things on track as the community grows. Behind the scenes, they serve as the primary liaisons between you fine people reading this and those of us who work at Stack Exchange. In short, **they're here to help**. It can't all be up to them, though. There's only so much that even the best moderator team can do when a community doesn't have established guidelines. Over the past months and years, this meta site has seen its share of debates about deletions and content. In many cases, they've been about very specific cases and people. That is not necessarily invalid, but has also been less constructive than it could have been. Long ago, early Stack Exchange users realized that [it's basically never a good idea to "call out" specific users publicly](https://meta.stackexchange.com/a/76172/). Admittedly, moderators are a little bit of a special case. Meta is a correct place to "appeal" if you feel a moderator has made a mistake, but even then, it's important to [focus on *actions*, not *people*](https://meta.stackexchange.com/a/289913/) and to [be civil](https://meta.stackexchange.com/a/197072/). 
(For what it's worth, "focus on what was done rather than who did it" is also general advice that we give to moderators about how to do their jobs.) What might be more helpful now is looking more broadly at what you do and do not want to see on the site. Do you believe that all answers should require supporting sources, and that there should be a site policy for deleting answers without citations? Start a new meta post proposing that. Or maybe you think language questions should be considered off-topic? Ask a meta question suggesting that. Be specific. When possible, link to things you've actually seen on the site. For example, I myself got the idea to mention "language questions" just now because I thought [this meta question](https://hinduism.meta.stackexchange.com/questions/748/should-we-close-questions-related-to-sanskrit-language-and-grammar-as-off-topic) did a good job at what I'm suggesting while keeping the focus on content and site policy, not people. Please see this as **a call to action to start discussions** on what actions moderators should take when they see X or Y type of content on the site. Waiting until something happens and then saying "we should have had a policy about that, and if we did, the policy would have been the opposite of what happened"... well, that's too little, too late. Again, the mods and CMs are here to help, but we cannot dictate. The mods are volunteering their time, and Stack Exchange is providing the servers, but the community guidelines must come from the community itself.
2017/02/24
[ "https://hinduism.meta.stackexchange.com/questions/781", "https://hinduism.meta.stackexchange.com", "https://hinduism.meta.stackexchange.com/users/18/" ]
First of all, thanks for nicely explaining the job of moderators to the community. > > Do you believe that all answers should require supporting sources, and that there should be a site policy for deleting answers without citations? Start a new meta post proposing that. > > > Yes, it was very essential to make that clear. The discussion [Can we revisit the sources required rule?](https://hinduism.meta.stackexchange.com/q/786/277) has proved fruitful. Finally, we've posted [Official policy for deleting answers that don't cite sources](https://hinduism.meta.stackexchange.com/q/803/277)
> > A call to action: what do you want to see here? > > > Here's my list: 1. Undelete the answer mentioned in [this meta post](https://hinduism.meta.stackexchange.com/q/559/2995), or let a moderator explain properly, through an answer, why the answer was deleted. I think the mods, not random users, owe an explanation to the user. I've explained in [this answer](https://hinduism.meta.stackexchange.com/a/687/2995) why in this instance OP's answer needs to be undeleted. 2. I've published [a query](http://data.stackexchange.com/hinduism/query/114513/search-comments?SearchQuery=cite%20sources) that lists all answers that do not contain any references, and to be fair to other users whose answers got deleted, the old ones (after the [Back It Up! rule](https://hinduism.meta.stackexchange.com/a/2/2995) went into effect) needed to be taken down as well. However, whenever we tried to bring it to mods' notice ([once](http://chat.stackexchange.com/transcript/message/29928312#29928312), [twice](https://hinduism.meta.stackexchange.com/q/708/2995), [thrice](http://chat.stackexchange.com/transcript/message/34169600#34169600)), there's a lot of escapism going on. I was told to flag those 400+ answers myself, working with a limit of 20 flags a day over a period of 2-3 weeks, so mods can take action! Why do I have to flag when they already have the list? Beats me! So I think the above-stated behavior goes against what you stated in your post: > > In short, they're here to help. > > > 3. I put in a very simple request in [this](https://hinduism.meta.stackexchange.com/q/747/2995) meta post. Reminded the 3 mods to take action on it via [this](http://chat.stackexchange.com/transcript/message/34951596#34951596) comment. No response from any of the mods. --- Some other comments and questions I had in general: > > community guidelines must come from the community itself > > > What if mods do not uphold the very guidelines which are upvoted and accepted by the rest of the community? Who do you complain to?
It appears mods here do not follow half the things covered in this excellent post: [Guidelines for new users answering questions](https://hinduism.meta.stackexchange.com/q/125), yet we continue to point new users to that post. Also, are mods allowed to have their personal tastes and philosophies dominating their actions? > > focus on what was done rather than who did it > > > Well said! But it goes both ways. What if mods themselves don't like being called out on meta and start taking every complaint personally instead of objectively? The moment someone comes forward with a complaint here, the first thing I see is heavy downvoting. This is *not* how we should run meta. --- Pops: Maybe you need to stick around a little bit and coach the new mods before you disappear again... Thanks.
3,112,866
The scenario is simple to describe, but might have a complex answer: Imagine a case where you have one write-only MySQL database. Then you have about 5 or 6 read-only databases. The write database has a count for a particular inventory item. You have hundreds of thousands of users banging away at this particular inventory item, but only a limited quantity. For argument's sake, say 10 items. What's the best way to ensure that only 10 items get sold? If there is even a 200ms delta before the read-only slaves get updated, can't the count go stale, thus selling inventory you do not have? How would you solve/scale this problem?
2010/06/24
[ "https://Stackoverflow.com/questions/3112866", "https://Stackoverflow.com", "https://Stackoverflow.com/users/175836/" ]
The basic solution to concurrent users will probably cover this too. At some point in the "buy" transaction, you need to decrement the inventory (on the write-server). Through whatever method, enforce that inventory can't go below zero. If there's one item left, and two people trying to buy it, one will be out of luck. The replication latency is exactly the same thing. Two users see a product available, but by the time they try to buy it, it's gone. A good solution for that scenario covers both replication latency and a user simply snatching the last item out from under another user.
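The point above can be sketched in a few lines. This is a minimal illustration (using SQLite in place of MySQL, with made-up table and column names): the stock check and the decrement happen in a single `UPDATE` on the write database, so two racing buyers can never both take the last item.

```python
import sqlite3

# In-memory stand-in for the single write database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item_id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO inventory VALUES (1, 10)")

def try_buy(conn, item_id):
    """Return True if the purchase succeeded, False if sold out."""
    # Check and decrement in ONE statement: the WHERE clause makes it
    # impossible for stock to go below zero, regardless of races.
    cur = conn.execute(
        "UPDATE inventory SET stock = stock - 1 "
        "WHERE item_id = ? AND stock > 0",
        (item_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows updated means nothing left to sell

# 15 buyers race for 10 items: exactly 10 succeed, 5 are told "sold out".
results = [try_buy(conn, 1) for _ in range(15)]
print(sum(results))  # → 10
```

Stale reads on the replicas only affect what the user *sees*; the actual sale is decided by this conditional update on the master, so overselling cannot happen.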
It all depends on when, and for what window, you decide to lock the master table for the update. A. If you have to be 100% sure an item will only be offered for purchase when it is surely available, you will have to lock the item for the particular user as soon as you list it to him (which means you will temporarily decrement the inventory stock). B. If you are okay with showing the occasional "sorry, we just ran out of stock" message, you should lock the item just before you bill (well, you could do that after the transaction is complete, but at the cost of a very furious customer). I would choose approach A for locking, and maybe flag a "selling out soon" warning for items with very low stock left. (If it's a very frequent situation, you could probably also count the number of concurrent users hitting the item and give a more accurate warning.) From the business perspective, you wouldn't want to be so low on stock (lower than the number of concurrent buyers). This is inevitable of course at "christmas" times, when it's okay to be out of stock :)
28,973
I gave my old iPhone 3GS (iOS 5.0) to a friend, and after removing the SIM card, we noticed a peculiar behavior. My friend is able to send texts via iMessage from the old phone number. Likewise, I can receive texts from this number. Has anyone else noticed this? I am assuming the iMessage servers register phone numbers for compatible devices. But what happens if someone else claims this number?
2011/10/24
[ "https://apple.stackexchange.com/questions/28973", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/12675/" ]
iMessage can also register email addresses (like your Apple ID); that's how it works on the iPad and iPod Touch. So it's completely normal to be able to send iMessages with a SIM-free iPhone, as long as you are connected to a Wi-Fi network.
I was delighted to find that [the iPhone still works with data transmission on Wi-Fi without a SIM card](http://www.techyv.com/questions/send-messages-iphone-without-sim-card). All the apps still work and I can send and receive emails. The iPhone is certainly not a throwaway device and is still extremely useful as a mini-computer.
28,973
I gave my old iPhone 3GS (iOS 5.0) to a friend, and after removing the SIM card, we noticed a peculiar behavior. My friend is able to send texts via iMessage from the old phone number. Likewise, I can receive texts from this number. Has anyone else noticed this? I am assuming the iMessage servers register phone numbers for compatible devices. But what happens if someone else claims this number?
2011/10/24
[ "https://apple.stackexchange.com/questions/28973", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/12675/" ]
iMessage can also register email addresses (like your Apple ID); that's how it works on the iPad and iPod Touch. So it's completely normal to be able to send iMessages with a SIM-free iPhone, as long as you are connected to a Wi-Fi network.
Yes - and be careful, I used my SIM to activate a friend's iPhone and he is now receiving copies of my iMessages on his device. Looks like Apple hasn't solved this yet, and it's a major issue for people whose iPhones get stolen. Further reading: <http://arstechnica.com/apple/news/2011/12/stolen-iphone-your-imessages-may-still-be-going-to-the-wrong-place.ars>
591,693
In a domotics installation I will have several devices, Arduinos for example, that typically run on 5 V/1 A. These devices are distributed in the space, up to tens of meters apart. The obvious solution for continuously powering these devices is to use a 230 V AC / 5 V DC adapter before each device. However, I would like to reduce the number of elements. A single shared 5 V line doesn't seem a good idea: too much voltage drop due to wire resistance would make the devices unusable and the distribution inefficient. Thus, I'm thinking of these possibilities: 1. A common 12 V line, with DC-DC converters before each device. 2. Instead of distributing DC, use a common AC line, around 5.5 Vrms, that is converted to DC before each device using a diode bridge. However, a common-line voltage this close to the target is easily lost to wire resistance. 3. As a combination of the previous two, a common 12 Vrms AC line that is converted to DC and reduced to 5 V before each device. Does anyone know the usual solutions to this problem, and which factors should be taken into account?
2021/10/22
[ "https://electronics.stackexchange.com/questions/591693", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/114429/" ]
Due to voltage drop in the wires it is a good idea to use a voltage regulator in each device. I'm going to do something similar: I will distribute 5V, and have 3V3 local LDOs because everything runs at 3V3. If you absolutely need 5V you could distribute 6V and use local 5V LDOs. Distributing AC requires rectifiers and large capacitors in each module, which increases bulk and cost. If you use 5V, distributing 12V instead of 6V pretty much mandates switching converters, which increases cost and EMI. 12V would only be justified if you need high power for lighting or actuators, and in that case 24V is better anyway. If you have a large number of devices it becomes important to optimize power consumption. Besides using sleep modes, that means selecting boards and modules that have LDOs (not high-dropout regulators) and low idle power, which especially means avoiding LM317 variants with 10mA idle current. So basically, there are two scenarios: 1. You have low-power devices like Arduinos, which require a few tens of mA. Since most of the interesting peripherals for these either require 3V3 and don't work on 5V, or work on both 3V3 and 5V, the easiest solution is to distribute 5V and run the Arduinos on 3V3 with cheap local LDOs. 2. There are high-power devices using more than a couple hundred milliamps at 5V, like a large backlit display, some actuators, LED lights, etc. In this case the loads will decide the voltage and wire gauge, for example 24V for LED strips. 5V loads can be powered by local buck regulators; for example [these cheap ones](https://www.mouser.fr/c/power/dc-dc-converters/non-isolated-dc-dc-converters/?output%20voltage-channel%201=5%20V&sort=pricing) at 2-2.50€ each would do the trick nicely. If there is no load that requires a specific voltage (like a 24V LED strip) then a recycled 19V laptop power supply from the junk bin would be a good option too. Under no circumstances should you use the counterfeit "LM2596" modules.
In my opinion, the simplest way to solve your problem is to use a 12 V common line, and a linear regulator (such as the LM1117-5) before each device you need to power. So all you need is three components before each device (an LM1117-5 and two capacitors). If you want to power an Arduino, most of the boards have an internal linear regulator with a max current of ~500 mA, so for small currents you don't need any additional components. Another advantage of this solution is electrical safety, if you use a good 12 V power supply. The disadvantage of this method is the heating of the linear regulators at high current, but if you need more current you can use DC-DC converters with higher efficiency. The voltage drop on shared wires at high current can be very big, but with a linear regulator that's not a problem: the max dropout voltage is ~1.2 V, so you only need more than ~6.2 V at the end of your common line, meaning the drop along a 12 V line can reach as much as 5.8 V before regulation fails. So you can use very long lines and not-too-thick wires.
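The heating trade-off mentioned above is easy to quantify: a linear regulator burns the entire input-output voltage difference times the load current as heat. A quick sketch:

```python
# Linear regulator dissipation: P = (Vin - Vout) * I, all of it as heat.
def regulator_heat_w(v_in, v_out, i_a):
    return (v_in - v_out) * i_a

# Dropping 12 V to 5 V:
print(round(regulator_heat_w(12.0, 5.0, 0.05), 2))  # → 0.35 W at 50 mA: fine bare
print(round(regulator_heat_w(12.0, 5.0, 1.0), 2))   # → 7.0 W at 1 A: needs a big heatsink
```

At 1 A the regulator wastes more power than the 5 W the device consumes, which is exactly why a buck (DC-DC) converter becomes attractive for heavier loads.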
591,693
In a domotics installation I will have several devices, Arduinos for example, that typically run on 5 V/1 A. These devices are distributed in the space, up to tens of meters apart. The obvious solution for continuously powering these devices is to use a 230 V AC / 5 V DC adapter before each device. However, I would like to reduce the number of elements. A single shared 5 V line doesn't seem a good idea: too much voltage drop due to wire resistance would make the devices unusable and the distribution inefficient. Thus, I'm thinking of these possibilities: 1. A common 12 V line, with DC-DC converters before each device. 2. Instead of distributing DC, use a common AC line, around 5.5 Vrms, that is converted to DC before each device using a diode bridge. However, a common-line voltage this close to the target is easily lost to wire resistance. 3. As a combination of the previous two, a common 12 Vrms AC line that is converted to DC and reduced to 5 V before each device. Does anyone know the usual solutions to this problem, and which factors should be taken into account?
2021/10/22
[ "https://electronics.stackexchange.com/questions/591693", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/114429/" ]
Due to voltage drop in the wires it is a good idea to use a voltage regulator in each device. I'm going to do something similar: I will distribute 5V, and have 3V3 local LDOs because everything runs at 3V3. If you absolutely need 5V you could distribute 6V and use local 5V LDOs. Distributing AC requires rectifiers and large capacitors in each module, which increases bulk and cost. If you use 5V, distributing 12V instead of 6V pretty much mandates switching converters, which increases cost and EMI. 12V would only be justified if you need high power for lighting or actuators, and in that case 24V is better anyway. If you have a large number of devices it becomes important to optimize power consumption. Besides using sleep modes, that means selecting boards and modules that have LDOs (not high-dropout regulators) and low idle power, which especially means avoiding LM317 variants with 10mA idle current. So basically, there are two scenarios: 1. You have low-power devices like Arduinos, which require a few tens of mA. Since most of the interesting peripherals for these either require 3V3 and don't work on 5V, or work on both 3V3 and 5V, the easiest solution is to distribute 5V and run the Arduinos on 3V3 with cheap local LDOs. 2. There are high-power devices using more than a couple hundred milliamps at 5V, like a large backlit display, some actuators, LED lights, etc. In this case the loads will decide the voltage and wire gauge, for example 24V for LED strips. 5V loads can be powered by local buck regulators; for example [these cheap ones](https://www.mouser.fr/c/power/dc-dc-converters/non-isolated-dc-dc-converters/?output%20voltage-channel%201=5%20V&sort=pricing) at 2-2.50€ each would do the trick nicely. If there is no load that requires a specific voltage (like a 24V LED strip) then a recycled 19V laptop power supply from the junk bin would be a good option too. Under no circumstances should you use the counterfeit "LM2596" modules.
The definitive answer to this question depends on more parameters than what you specified. For example, if you can accommodate wiring your modules with 10AWG cable (5mm²), distributing 5V DC could be acceptable: for a 10m x2 (supply and ground) wire, at 1A, that will amount to 0.12V of drop, which probably won't be a problem. (Voltage drop can be easily calculated: there are even some online calculators for this, e.g. <https://bluerobotics.com/learn/voltage-drop-calculator>.) If you want to use Cat5 cable (practical because it is cheap, and you have multiple wires, so you can use the same cable for both power and data), which is typically 24 AWG, you will obviously have a problem: the drop will be 3.28V at 1A, so you'll only have 1.72V left at your device. But if you use 12V and DC-DC converters, at say 80% efficiency, that amounts to 520mA on the wire, and the drop will be 1.71V (power loss ~1W). This could be acceptable. If you can parallel multiple wires from the Cat5 bundle, it makes it even easier. Now, if you distribute 12V AC through AWG24 and use bridges and linear converters, you will need 1A on the cable, and the losses and drop will be greater on the cable, but you won't care because it will still be able to regulate 5V at your device side. But the total loss will be huge (7W, for a device that consumes 5W). So you will need a much bigger main supply, which may be a problem. You'll also probably need a heatsink at each module. So, here is the design procedure I would follow: * Start by deciding the kind of cable(s) you want to use for power distribution. * Decide on a few reasonable options for power distribution: 5V, 12V, 24V, DC-DC converters or linear regulators on the device side, etc... * Then for each option: + deduce the current on the line, then the voltage drop, and the power losses using some online calculator. + Depending on the number of modules, estimate the main supply needs. You'll see that it may seem much bigger than what you'd expect. 
This may have an impact on your decision. + Find the appropriate solution for the device-side supplies and the main supply. Keep in mind that there are integrated DC-DC solutions available, which is much easier, and probably cheaper, than making your own (e.g. <https://www.digikey.fr/product-detail/fr/cui-inc/P78E05-1000/102-5018-ND/9649654>). Those aren't more difficult to use than a linear converter. + It's a good idea to check the total cost of the solution: wires, DC-DC or linear regulators on the devices, main supply. + You'll also probably want to check what physical size each solution takes, both on the device side and the main supply side. At that point, you'll know what the best option is for you. Here is what you'll probably find out: * Distributing 5V is probably impossible, or makes the cabling too impractical. * 12V is probably a good choice, but using DC-DC converters. AC-DC and/or linear regulators at each module will make the solution bigger and more expensive: each device will be bigger, and the main supply will be much bigger too.
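The per-option arithmetic in this procedure is simple enough to script. Here is a rough sketch; the copper resistance values are generic AWG figures (assumptions, check the actual cable datasheet), and the current flows out on one conductor and back on the other, so the resistive length is twice the one-way run:

```python
# Approximate resistance of solid copper wire, ohms per metre.
R_PER_M = {10: 0.0033, 24: 0.0842}

def voltage_drop(awg, one_way_m, current_a):
    """Round-trip (supply + return) voltage drop over the run."""
    return R_PER_M[awg] * one_way_m * 2 * current_a

def power_loss(awg, one_way_m, current_a):
    """Power wasted in the cable itself."""
    return voltage_drop(awg, one_way_m, current_a) * current_a

# 10 m run at 1 A: thick 10 AWG vs thin 24 AWG (Cat5-class) wire.
print(round(voltage_drop(10, 10, 1.0), 2))  # small drop on 10 AWG
print(round(voltage_drop(24, 10, 1.0), 2))  # large drop on 24 AWG
```

The exact numbers depend on which resistance table and run length you assume, but the shape of the result is the same as in the answer: thin wire at full load current loses a large fraction of a 5 V budget, while a higher distribution voltage with local converters keeps both drop and loss manageable.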
591,693
In a domotics installation I will have several devices, Arduinos for example, that typically run on 5 V/1 A. These devices are distributed in the space, up to tens of meters apart. The obvious solution for continuously powering these devices is to use a 230 V AC / 5 V DC adapter before each device. However, I would like to reduce the number of elements. A single shared 5 V line doesn't seem a good idea: too much voltage drop due to wire resistance would make the devices unusable and the distribution inefficient. Thus, I'm thinking of these possibilities: 1. A common 12 V line, with DC-DC converters before each device. 2. Instead of distributing DC, use a common AC line, around 5.5 Vrms, that is converted to DC before each device using a diode bridge. However, a common-line voltage this close to the target is easily lost to wire resistance. 3. As a combination of the previous two, a common 12 Vrms AC line that is converted to DC and reduced to 5 V before each device. Does anyone know the usual solutions to this problem, and which factors should be taken into account?
2021/10/22
[ "https://electronics.stackexchange.com/questions/591693", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/114429/" ]
The definitive answer to this question depends on more parameters than what you specified. For example, if you can accommodate wiring your modules with 10AWG cable (5mm²), distributing 5V DC could be acceptable: for a 10m x2 (supply and ground) wire, at 1A, that will amount to 0.12V of drop, which probably won't be a problem. (Voltage drop can be easily calculated: there are even some online calculators for this, e.g. <https://bluerobotics.com/learn/voltage-drop-calculator>.) If you want to use Cat5 cable (practical because it is cheap, and you have multiple wires, so you can use the same cable for both power and data), which is typically 24 AWG, you will obviously have a problem: the drop will be 3.28V at 1A, so you'll only have 1.72V left at your device. But if you use 12V and DC-DC converters, at say 80% efficiency, that amounts to 520mA on the wire, and the drop will be 1.71V (power loss ~1W). This could be acceptable. If you can parallel multiple wires from the Cat5 bundle, it makes it even easier. Now, if you distribute 12V AC through AWG24 and use bridges and linear converters, you will need 1A on the cable, and the losses and drop will be greater on the cable, but you won't care because it will still be able to regulate 5V at your device side. But the total loss will be huge (7W, for a device that consumes 5W). So you will need a much bigger main supply, which may be a problem. You'll also probably need a heatsink at each module. So, here is the design procedure I would follow: * Start by deciding the kind of cable(s) you want to use for power distribution. * Decide on a few reasonable options for power distribution: 5V, 12V, 24V, DC-DC converters or linear regulators on the device side, etc... * Then for each option: + deduce the current on the line, then the voltage drop, and the power losses using some online calculator. + Depending on the number of modules, estimate the main supply needs. You'll see that it may seem much bigger than what you'd expect. 
This may have an impact on your decision. + Find the appropriate solution for the device-side supplies and the main supply. Keep in mind that there are integrated DC-DC solutions available, which is much easier, and probably cheaper, than making your own (e.g. <https://www.digikey.fr/product-detail/fr/cui-inc/P78E05-1000/102-5018-ND/9649654>). Those aren't more difficult to use than a linear converter. + It's a good idea to check the total cost of the solution: wires, DC-DC or linear regulators on the devices, main supply. + You'll also probably want to check what physical size each solution takes, both on the device side and the main supply side. At that point, you'll know what the best option is for you. Here is what you'll probably find out: * Distributing 5V is probably impossible, or makes the cabling too impractical. * 12V is probably a good choice, but using DC-DC converters. AC-DC and/or linear regulators at each module will make the solution bigger and more expensive: each device will be bigger, and the main supply will be much bigger too.
In my opinion, the simplest way to solve your problem is to use a 12 V common line, and a linear regulator (such as the LM1117-5) before each device you need to power. So all you need is three components before each device (an LM1117-5 and two capacitors). If you want to power an Arduino, most of the boards have an internal linear regulator with a max current of ~500 mA, so for small currents you don't need any additional components. Another advantage of this solution is electrical safety, if you use a good 12 V power supply. The disadvantage of this method is the heating of the linear regulators at high current, but if you need more current you can use DC-DC converters with higher efficiency. The voltage drop on shared wires at high current can be very big, but with a linear regulator that's not a problem: the max dropout voltage is ~1.2 V, so you only need more than ~6.2 V at the end of your common line, meaning the drop along a 12 V line can reach as much as 5.8 V before regulation fails. So you can use very long lines and not-too-thick wires.
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
The biggest thing I can think of is both an advantage and a disadvantage: **everything you put online under your real name will follow you**. This is good when you are posting good, constructive things. It's bad when you post a picture of you from *that* night or when you say something offensive or just plain stupid. I find that using my real name helps keep me in check -- I think more about what I say and how I say it. But it has on occasion been inconvenient when using my name invited personal attacks for various reasons. All in all, my approach is to use my real name when dealing with professional-ish stuff and to use a handle for personal interests and things I might not want to be as easily searchable.
Hmm, this question got me thinking here... I always use my invented name. Why? * Separation between personal and work. I find this very, very important! (I do have my real name on the internet, but ONLY on a social network (Hyves), and that account is also locked if you're not my friend.) (I am thinking about putting my real name on Stack Overflow.) * If you do become famous, you can enjoy all of those benefits/downsides even with an invented name; people will just know you by that name. * As snorfus pointed out, stupidity is something I am good at. :) Mostly because I only just began programming (3 years ago). * Isn't the internet all about privacy? :)
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
If you're publishing smart, objective, funny, creative, constructive, useful, interesting content on the internet under your real name, gosh, then you will have an **awesome, persistent CV in the cloud** as we'd say today (or **in the mainframe**, as this [funny chap](http://java.dzone.com/articles/new-old-thing) calls it). If you're using your real name for utter crap, stupidity, boring, wrong, destructive, racist, neanderthal, unfair, illegal stuff, well, then you will still have that **CV in the cloud**. Might not be as awesome, though. **EDIT**: See how I'm subtly advertising myself as being smart, objective, funny, creative... ;-)
I use my real name for any sites where content is publicly posted that I wouldn't mind a potential employer seeing (because they will), for example my Stack Overflow answers or my technical blog. I use a pseudonym for sites I'd prefer not to become part of my meta-resume. Not necessarily because I am embarrassed about anything I post under it; I just don't want it to float to the top of a Google search on my name and crowd out things that help my chances for a job. For example, I use a pseudonym on any political discussion sites, or my personal blog.
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
If you're publishing smart, objective, funny, creative, constructive, useful, interesting content on the internet under your real name, gosh, then you will have an **awesome, persistent CV in the cloud** as we'd say today (or **in the mainframe**, as this [funny chap](http://java.dzone.com/articles/new-old-thing) calls it). If you're using your real name for utter crap, stupidity, boring, wrong, destructive, racist, neanderthal, unfair, illegal stuff, well, then you will still have that **CV in the cloud**. Might not be as awesome, though. **EDIT**: See how I'm subtly advertising myself as being smart, objective, funny, creative... ;-)
> > If you feel like becoming involved in something untoward, it could be harder. > > > This one is a little more complex than what you feel about something - it's not whether you think it's untoward that's relevant, it's what everybody else thinks. For instance I might not think that attending a political rally in support of voting reform is untoward, but others may. On the other hand, many people might not think being actively involved in an evangelical church is untoward, but it may, if I knew nothing else about them, negatively impact my opinion. Despite all this, I think there's value in being honest, both online and offline, and I think knowing that what you say online can be tracked back to you offline can have a civilising influence.
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
[SnOrfus is right](https://softwareengineering.stackexchange.com/questions/9099/what-are-the-advantages-and-disadvantages-to-using-your-real-name-online/9116#9116), the **disadvantages outweigh the advantages**. When you post something online, it suffers from the following drawbacks: * Your online statements will probably **last forever**. If you formulate something undiplomatically, it will haunt you for the rest of your life. Or if you write something that is just plain wrong, the internet will help everybody remember this mistake, forever. This is especially a problem for younger people. * Your opinions are **available for the whole world to see**, even if you intended them only for a specific audience. For example, if you publish an article on this site in which you discuss a stupid habit of your manager, not necessarily to attack him, you don't want him to know it is your post. Or if you write an article on how much you hate programming language X, a future employer who is ready to offer you a programming job in that language might change his mind after reading it, even if you are really prepared to embrace the language in order to get the job. * There is often **too little context**, so it can **very easily be misinterpreted**. For example: the prehistory of a certain statement is often not available to the reader; irony is mistaken for serious-mindedness; a future employer will read all the mistakes you made when you were a mere beginner without realizing they were written 10 years ago and you've morphed into a different person in the meantime; etc. As a happy medium, I chose to use my first name and the initials of my family name. This way, my pseudonym sounds more personal while still preserving my privacy.
If you're using your real name for publishing smart, objective, funny, creative, constructive, useful, interesting content on the internet, gosh, then you will have an **awesome, persistent CV in the cloud** as we'd say today (or **in the mainframe**, as this [funny chap](http://java.dzone.com/articles/new-old-thing) calls it). If you're using your real name for utter crap, stupidity, boring, wrong, destructive, racist, neanderthal, unfair, illegal stuff, well, then you will still have that **CV in the cloud**. Might not be as awesome, though. **EDIT**: See how I'm subtly advertising myself as being smart, objective, funny, creative... ;-)
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
[SnOrfus is right](https://softwareengineering.stackexchange.com/questions/9099/what-are-the-advantages-and-disadvantages-to-using-your-real-name-online/9116#9116), the **disadvantages outweigh the advantages**. When you post something online, it suffers from the following drawbacks: * Your online statements will probably **last forever**. If you formulate something undiplomatically, it will haunt you for the rest of your life. Or if you write something that is just plain wrong, the internet will help everybody to remember this mistake, forever. This is especially a problem for younger people. * Your opinions are **available for the whole world to see**, even if you intended them only for a specific audience. For example, if you publish an article on this site in which you discuss a stupid habit of your manager, not necessarily to attack him, you don't want him to know it is your post. Or if you write an article on how much you hate programming language X, a future employer that is ready to offer you a programming job in that language might change his mind after reading it, even if you are really prepared to embrace the language in order to get the job. * There is often **too little context**, so it can **very easily be misinterpreted**. For example: the prehistory of a certain statement is often not available to the reader; irony can be mistaken for serious-mindedness; your future employer will read all the mistakes you've made when you were a mere beginner without realizing it was written 10 years ago and you've morphed into a different person in the meanwhile; etc... As a happy medium, I chose to use my first name and the initial of my family name. This way, my pseudonym sounds more personal while still preserving my privacy.
> > If you feel like becoming involved in something untoward, it could be harder. > > > This one is a little more complex than what you feel about something - it's not whether you think it's untoward that's relevant, it's what everybody else thinks. For instance I might not think that attending a political rally in support of voting reform is untoward, but others may. On the other hand, many people might not think being actively involved in an evangelical church is untoward, but it may, if I knew nothing else about them, negatively impact my opinion. Despite all this, I think there's value in being honest, both online and offline, and I think knowing that what you say online can be tracked back to you offline can have a civilising influence.
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
Two big down-sides for me: * The name I use to sign checks isn't unique. Even in the mid-90's, I was already getting email from people who'd seen my name on a newsgroup somewhere and assumed I was someone else. My name isn't even terribly common - but The Internet is a pretty big namespace... * It increases the temptation to self-promote. I've seen this a lot - folks go job hunting, change their online IDs to reflect the name they're putting on resumes, and their whole act changes. You might consider this a *good* thing, encouraging a professional attitude and such... But I have little desire to interact with people who are constantly in "interview-mode", and even less desire to spend time there myself. Your online identity is what you produce, not what you name it. Getting hung up on a name is as silly as getting hung up on an avatar photo... Which, incidentally, do not usually correspond to the "real names" they're attached to.
Another way to view this question is what name do you consider to be your identity. For example, my full legal name would be "John Brock King II" while most people call me JB, and there are several nicknames I have had over the years, some stemming from various interpretations of JB like James Brown, James Bond, Jim Bean, etc., while others have other stories behind their origin, like Boogus or Funkmeister. I choose to identify with my name as JB rather than John, as this is what I was called growing up, and my name is similar to my father's except I have a II at the end of mine. "John King" can be a rather common name, as there is a CNN correspondent with that name, among others. I'm often asked for my date of birth at the doctor's office and pharmacies because my name matches so many other people's. Even "JB King" can still match some stuff, like a shipwreck, so it isn't totally unique unto myself. Some people are known by part of their name and others like to create their own identities. After all, what are the odds Lady Gaga would actually change her last name to Gaga? I'm thinking slim to none, but maybe that's just me. I do consider J.B. King to be my real name, though for legal matters it isn't always adequate.
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
I do my professional things under my real name and always have even back to the days of newsgroups. I also use an email address that clearly is related to my business: gregcons.com is Gregory Consulting. My Twitter handle is gregcons because KateGregory was taken. I've been doing this since before there was a World Wide Web at all and recommend it. I definitely find significant advantages to having a consistent professional profile. You think before you write - it's going to stick around forever under your real name. People can check you out and confirm that you are capable. I find no shame in the possibility that someone will see I didn't know everything at some point in the past. If anything it shows I was always learning. That said, I do some personal things under a pseudonym. FlyerTalk for example. If I'm going to post about getting away with something, I'd rather the airlines weren't able to look up my record and investigate it. And on the stalker front, we often post rather specific travel plans on FT and I don't really want those associated with my real name. The creepiest correspondence I have ever received was a paper letter that came to my house from someone in a nearby jail who was reading my Visual C++ programming books and wanted to stop by and meet me when he got out. I realize that doesn't set a very high bar for creepy especially since he never wrote again. Perhaps that's why using my real name online doesn't worry me. For over twenty years it has brought me nothing but good.
I'm not sure what my real name is anymore. Seriously: there are people I know mainly in real life, who I rarely talk to online, who still know me as TRiG. (They'd know my real name too, usually.) And there are people I know mainly or exclusively online who'd still know my real name. The two identities blur together, especially since the nickname TRiG is in fact derived from my initials. You could find my online postings across a lot of forums very easily if you found a case-sensitive search engine.
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
Two big down-sides for me: * The name I use to sign checks isn't unique. Even in the mid-90's, I was already getting email from people who'd seen my name on a newsgroup somewhere and assumed I was someone else. My name isn't even terribly common - but The Internet is a pretty big namespace... * It increases the temptation to self-promote. I've seen this a lot - folks go job hunting, change their online IDs to reflect the name they're putting on resumes, and their whole act changes. You might consider this a *good* thing, encouraging a professional attitude and such... But I have little desire to interact with people who are constantly in "interview-mode", and even less desire to spend time there myself. Your online identity is what you produce, not what you name it. Getting hung up on a name is as silly as getting hung up on an avatar photo... Which, incidentally, do not usually correspond to the "real names" they're attached to.
My name is not unique, it's pretty much a 'John Smith' type name, so using that to search uniquely for me is pretty useless. However, my alias is unique (and I hope to keep it that way, 7 years and going strong), and given enough tech savvy, you can work out who I am anyway (not that that would get you very far, as my name is a dime a dozen). However, I like to think of my alias as a pointer to my real self. Before facebook, there was no pointer to this real me (apart from FB, no website holds my real name attached to my alias). I like to think that my alias is my name (and in several computer-related circles, it is) because it's unique in a world of johnsmithery.
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
Obviously putting stuff online under your real name can give you some sort of profile, and I've recently had people at work recognise my name on StackOverflow. I don't see this as particularly good or bad. A benefit of having high visibility online is that people can more easily contact or find out about me. Google my name and there are relevant results: I get calls from recruiters who find me on LinkedIn, and someone with a non-work-related opportunity actually googled me, found where I worked and then called reception to speak to me. Maybe some people would find that annoying, but it hasn't annoyed me yet - instead I've had a few good outcomes from it. I personally like the fact that someone can google me and see what I am up to - at least, the things that I put into the public sphere.
My name is not unique, it's pretty much a 'John Smith' type name, so using that to search uniquely for me is pretty useless. However, my alias is unique (and I hope to keep it that way, 7 years and going strong), and given enough tech savvy, you can work out who I am anyway (not that that would get you very far, as my name is a dime a dozen). However, I like to think of my alias as a pointer to my real self. Before facebook, there was no pointer to this real me (apart from FB, no website holds my real name attached to my alias). I like to think that my alias is my name (and in several computer-related circles, it is) because it's unique in a world of johnsmithery.
9,099
As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: * My interests online are almost exclusively professional and aboveboard. * It constructs a search-friendly public log of all of my work, everywhere. * If someone wants to contact me, there are many ways to do it. * My portfolio of work is all tied to me personally. Possible cons to full disclosure include: * If you feel like becoming involved in something untoward, it could be harder. * The psychopath who inherits your project can more easily find out where you live. * You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. * Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.
2010/10/03
[ "https://softwareengineering.stackexchange.com/questions/9099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ]
Obviously putting stuff online under your real name can give you some sort of profile, and I've recently had people at work recognise my name on StackOverflow. I don't see this as particularly good or bad. A benefit of having high visibility online is that people can more easily contact or find out about me. Google my name and there are relevant results: I get calls from recruiters who find me on LinkedIn, and someone with a non-work-related opportunity actually googled me, found where I worked and then called reception to speak to me. Maybe some people would find that annoying, but it hasn't annoyed me yet - instead I've had a few good outcomes from it. I personally like the fact that someone can google me and see what I am up to - at least, the things that I put into the public sphere.
One advantage to using my real name: A high school friend was able to get back in touch because I made enough apparently interesting posts on a language forum to put me in several of the top 10 google results for my name.
65,666
I am designing a PCB and was wondering if there is a reason not to attach ring terminals directly to it. The PCB could have a pad surrounding a hole, like I often see for ground connections. A small bolt could go through the PCB with two nuts: one holding it to the board, and another to hold the ring terminal down. I've tried those little green screw terminal things where you can stick a wire in the side and screw down the top, but they always seemed flimsy for the relatively big wire I am using. I've also tried terminal blocks where one side has a wire soldered to the PCB and the other side can be connected to with a ring or spade terminal, but that seems like extra work when the PCB could be designed to accept the screw directly. Just wanted to check if I was missing anything obvious here as to why not to do this.
2013/04/15
[ "https://electronics.stackexchange.com/questions/65666", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22610/" ]
Yes, this has been done on boards used in "heavy industry" situations where you need a high current rating without being restricted by the pin spacing (which determines the voltage rating) of connectors. There are a few considerations to doing this successfully: * to avoid the pad being wrenched off the board due to the torque forces when doing up the nut, use double-sided board with vias all the way round the pads. * make it a plated-through hole, clearance size for the thread you are using. * use wide track on both sides, both for the high currents and to stabilise the pad on the board for mechanical reasons, as far as you can go. * the ring terminal shouldn't be put directly onto the pad; use a plain washer in between to avoid transferring rotation to the pad, then a [split washer or a wave washer](http://en.wikipedia.org/wiki/Spring_washer#Spring_and_locking_washers) before the nut to keep the tension. * ensure sufficient space between terminals that the ring terminal does not hit the next one in any rotational position. An alternative which works better in most situations, because it only requires access from one side of the board, is quick-disconnect tabs, which are available as single, solder-in parts. [Article](http://www.aeroelectric.com/articles/faston3.pdf) discussing relative merits.
One problem is that you must have access to both sides of the PCB when connecting or disconnecting the wire. Also, you need to make sure the screw doesn't extend so far as to contact the chassis below the PCB. Better to use screw terminals, such as these: <http://www.digikey.com/product-search/en/connectors-interconnects/terminals-screw-connectors/1442846>
76,781
Every time I start Ubuntu I get a warning message that says my hard disk is failing. Big deal, it's just a warning; I'm a programmer, I ignore warnings (**kidding**). On a more serious note: I've already backed up all the data that I need, but I'm gonna continue to use this computer until it explodes, dag nabbit! So how do I tell Ubuntu that I don't care and make it stop showing me the warning?
2009/11/30
[ "https://superuser.com/questions/76781", "https://superuser.com", "https://superuser.com/users/2098/" ]
I'm not completely sure if you're experiencing the same kind of message that I did, but I was told that my *disk has many bad sectors*, so this is how I removed the warning message: 1. Open Disk Utility from **System** > **Administration** > **Disk Utility** (or maybe clicking the warning will open it?) 2. Choose the disk that is failing and click the **More Information** link. (The link is placed next to the red text that is showing you the warning.) 3. Check the **Don't warn me if the disk is failing** checkbox just above the attributes at the bottom of the window. I sincerely hope this will solve all of your problems.
Seems like an old post, but just to keep it up to date.... I had the same problem with my Ubuntu 12.04 LTS, and things seem to have changed a little. In Ubuntu 12.04 you still need to start Disk Utility, but then you should choose the drive that is experiencing errors and click the "SMART Data" link; there you will find the "Don't warn me if the disk is failing" checkbox. Hope this is going to be helpful for a few others out there with newer versions of Ubuntu.
9,414,078
I am developing for an Android tablet with a 10.1" screen at 800 x 1280 resolution. I read that the control bar (with the back, home, etc. buttons) is 80px high (not sure). So I told the app UI designer to send me the design at 720 x 1280, but then I noticed that there is also a bar at the top of the screen (with the app icon and name). I know how to make it disappear, but I would like to keep it. I don't know the height of that bar or what resolution to ask from the UI designer. Can you please tell me what resolution I need?
2012/02/23
[ "https://Stackoverflow.com/questions/9414078", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1216917/" ]
Found it:

* Screen height: 800px
* Status bar (bottom): 48px
* Action bar (top): 56px
* Middle: 696px
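Those figures are mutually consistent; as a quick sanity check (the pixel values are the ones reported above and are specific to this device and density):

```python
screen_height = 800  # px, total screen height on this tablet
action_bar = 56      # px, top bar with the app icon and name
status_bar = 48      # px, bottom system/control bar

# Height left over for the app's content, i.e. the height
# to request from the UI designer:
middle = screen_height - action_bar - status_bar
print(middle)  # 696
```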
You should not bind your UI to specific, fixed sizes on Android. If possible, try re-thinking the graphic elements in terms of stretchable nine-patches and Android XML resources such as gradients and shapes. In case you don't know them, here are some links: Nine-patches: <http://developer.android.com/guide/developing/tools/draw9patch.html> Drawable resources: <http://developer.android.com/guide/topics/resources/drawable-resource.html> And here's a link to a similar question with some values for the height of the status bar in Android: [Height of status bar in Android](https://stackoverflow.com/questions/3407256/height-of-status-bar-in-android)
137,076
Every time I try to install these 2 updates I get the same error, while I've successfully installed other updates. I've tried disabling NOD32 antivirus and the Spybot resident with no result. I've also tried downloading the updates and opening them with the Windows Update Standalone Installer, but it ends with an error (the Event Viewer says it is 0x800736B3). **Edit:** As suggested, I was trying to install the x86 version on an x64 system. My fault.
2010/05/03
[ "https://superuser.com/questions/137076", "https://superuser.com", "https://superuser.com/users/35914/" ]
> > I've also tried to download the updates and open with the Windows Update Standalone Installer but it comes up with a message which states "The update is not applicable to your computer". > > > Did you make sure to download the update for the correct architecture of Windows 7, i.e. x86 or x64? Also try downloading the update through Windows Update again.
In my case, I got error 800736B3 when trying to turn on some Windows features. It turned out to be a problem with pending updates (updates that are installed but waiting for a reboot to take effect). After the reboot the error was gone. It may not be the case for the OP's problem, but I just want to post it for the folks that are led here by Google :)
29,482
I have read that SHA-1 is a cryptographic hash function. On an exam, SHA-1 was given as a possible one-way encryption algorithm. Does SHA-1 require a key as input? Is a key required to qualify as "encryption"?
2013/01/22
[ "https://security.stackexchange.com/questions/29482", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2053/" ]
SHA-1 is a hash function. Hash functions are intended to perform a "one-way transformation"; the original message cannot be recovered from the digest, at all. Therefore, whether SHA-1 constitutes "one-way encryption" depends on the definition of that term *from your class*. It could have several possible logical definitions depending on semantics: * If "encryption" is intended to be synonymous with "obfuscation", and "one-way" means "irreversible by *any* means", then SHA-1, as a hash, would meet this (very loose) definition of the term. The implicitly required "key" could be taken to be the "salt" (strictly a feature of password-hashing schemes rather than of the hash function itself), which changes the produced hash in a deterministic but unpredictable way, and is therefore required to be correct in order to reproduce the same hash from the same message. However, technically the salt as used for hashes is not a secret, like a key normally is. * If "one-way encryption" == "keyed hash", then SHA-1, in its primitive form, does not meet the definition. However, SHA-1 can be used as the hash function of an HMAC, which is a "keyed hash" designed for message authentication (only the correct message, with the correct key, will produce the same HMAC). HMACs are used in a variety of security schemes, such as in authenticated cipher block modes or in zero-knowledge proofs. The SHA-1-based HMAC is, appropriately enough, named HMAC-SHA1. * If "encryption" is defined as "a key-based, reversible method of obfuscation", then "one-way encryption" == "trapdoor encryption" aka "asymmetric encryption", which SHA-1 in any form is not. Two keys are used, either of which when used in the encryption algorithm produces a transformation on the message that is irreversible without knowledge of the other key. RSA and elliptic-curve algorithms are examples, not SHA-1.
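The first two definitions above can be illustrated with Python's standard library; the message and keys below are arbitrary placeholders, not anything from the question:

```python
import hashlib
import hmac

message = b"attack at dawn"   # arbitrary example message
key = b"secret key"           # arbitrary example key

# Plain SHA-1: no key at all, and the digest cannot be run backwards.
digest = hashlib.sha1(message).hexdigest()

# HMAC-SHA1: the "keyed hash" case. Only the correct message *and*
# the correct key reproduce the same tag.
tag = hmac.new(key, message, hashlib.sha1).hexdigest()

# The same message under a different key gives a different tag.
other = hmac.new(b"other key", message, hashlib.sha1).hexdigest()
print(digest)
print(tag == other)  # False
```

Neither call is reversible; the asymmetric ("trapdoor") case in the third definition needs a different primitive entirely, such as RSA.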
Ciphers are bijective and hash functions are not. Though you can still build a cipher by XORing the input with a hash-derived keystream (encryption) and doing the same one more time for decryption.
29,482
I have read that SHA-1 is a cryptographic hash function. On an exam, SHA-1 was given as a possible one-way encryption algorithm. Does SHA-1 require a key as input? Is a key required to qualify as "encryption"?
2013/01/22
[ "https://security.stackexchange.com/questions/29482", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2053/" ]
This could easily be [googled](http://www.itl.nist.gov/fipspubs/fip180-1.htm) or [wikipedia'd](http://en.wikipedia.org/wiki/SHA-1), but here goes: SHA-1 is a cryptographic hash function, but it is not an encryption function. Anything you run the SHA-1 function on is transformed irreversibly. SHA-1 *could* be used with a key, but that would make it a Message Authentication Code (MAC, see [HMAC](http://en.wikipedia.org/wiki/HMAC)). I agree with your last sentence. For something to be encrypted, you'll need to have some key, or something that corresponds to one. Say you have a (rather lousy) encryption function flipping the bits of the input; your key is "flip each bit". Another function may be a [Feistel network](http://en.wikipedia.org/wiki/Feistel_network) using the round function F, and a key K = 281474976710656 as input to that function.
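A quick Python sketch of the contrast drawn here, using the "flip each bit" function from the answer as the reversible example (the message and key are arbitrary):

```python
import hashlib
import hmac

def flip_bits(data: bytes) -> bytes:
    # The answer's (rather lousy) cipher: the "key" is "flip each bit".
    # Applying it twice recovers the input, so it is reversible.
    return bytes(b ^ 0xFF for b in data)

msg = b"hello"
assert flip_bits(flip_bits(msg)) == msg  # round-trips: a cipher

# SHA-1 takes no key and cannot be inverted at all:
digest = hashlib.sha1(msg).hexdigest()

# Adding a key turns it into a MAC (HMAC-SHA1), not into encryption:
mac = hmac.new(b"some key", msg, hashlib.sha1).hexdigest()
```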
Ciphers are bijective and hash functions are not. Though you can still build a cipher by XORing the input with a hash-derived keystream (encryption) and doing the same one more time for decryption.
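A minimal sketch of that XOR construction in Python, generating a keystream by hashing a key, nonce, and counter (names and parameters here are illustrative only; SHA-1 appears because it is the subject of the question, not as a recommendation):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Produce `length` bytes by hashing key || nonce || counter blocks.
    out = b""
    counter = 0
    while len(out) < length:
        block = key + nonce + counter.to_bytes(8, "big")
        out += hashlib.sha1(block).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so encryption and decryption are the
    # same operation, exactly as described above.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"a hash can drive a stream cipher"
ciphertext = xor_cipher(b"key", b"nonce", plaintext)
recovered = xor_cipher(b"key", b"nonce", ciphertext)
assert recovered == plaintext
```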