qid int64 1 74.7M | question stringlengths 12 33.8k | date stringlengths 10 10 | metadata list | response_j stringlengths 0 115k | response_k stringlengths 2 98.3k |
|---|---|---|---|---|---|
1,785 | I have now built up a small portfolio of pencil/charcoal drawings. Most of the time they are kept in my folders, but sometimes they are viewed by friends and relatives, and on the odd occasion displayed at the local community hall.
I've started to notice finger marks and tiny smudges on the artwork. As none of the pieces are permanently displayed, I don't want to frame them.
What is the best way of preserving and protecting my pencil/charcoal drawings without having to put them behind glass?
One of my friends said that you could use hairspray. Is that the right way to go? | 2016/06/16 | [
"https://crafts.stackexchange.com/questions/1785",
"https://crafts.stackexchange.com",
"https://crafts.stackexchange.com/users/426/"
] | If the pieces are likely to be handled and/or displayed, then using fixative sprays is probably the best option. There are two purpose-made types:
* Workable
* Final
**Workable Fixative**
As the name suggests, this allows you to add additional layers to your work after the spray has been used.
Workable Fixative is a thin solution that sets up a new toothy (slightly rough) surface for more drawing. You can choose to spray the entire piece or isolated parts. To prevent other areas from getting sprayed, you can use paper, or frisket, to shield the areas that you don't want to spray. Once the fixative is dry you can continue working on the drawing.
The fact that it provides new tooth can be an advantage: heavily shaded or heavily worked areas can become smooth, making it harder to apply further layers of graphite/colour to the piece. So you can use a workable fixative every few layers to keep a fresh tooth.
**Final Fixative**
This provides a more durable surface than a Workable Fixative, but it should only be applied once you are certain that you don't want to make any further adjustments.
However, a Final Fixative can cause the piece to darken, so many artists do not apply this layer and simply stop at a Workable Fixative.
**Hairspray**
Hairspray has been mentioned as a cheaper alternative to fixative sprays. Below is a *summary from [Drawing for Dummies](http://www.dummies.com/how-to/music-creative-arts/Visual-Arts/Drawing.html)*:
> Hairspray does contain some of the materials of a fixative, but it only works in the short term and ultimately damages the drawing: the hairspray yellows over time and ruins it.
***Do not use hairspray.***
**Applying Fixative**
* Use proper ventilation. Fixatives smell and *are hazardous to your health*.
* Shake the can before spraying and test on a scrap piece of paper away from your drawing. The nozzle can clog and deposit 'lumps' on your drawing.
* Directions on the can generally say to spray 20-25cm (8-10 inches) from the drawing. However, I spray 30-40cm (12-15 inches) from the drawing. Make sure the piece is on a flat surface in a draft-free area.
* Lightly spray first coat horizontally. Let dry for 15 minutes. Make sure the layer is even and smooth.
* Lightly spray second coat vertically. Let dry for 15 minutes. Make sure the layer is even and smooth.
* Subsequent layers may be added in alternating directions if needed.
*The sprayed areas of your piece will become darker, and they won't lighten once the fixative has dried.* | Hairspray definitely works, but I'm not sure whether it will yellow over time. You can also buy artist-quality fixative spray, presumably tested for its pH and other qualities. |
1,785 | I have now built up a small portfolio of pencil/charcoal drawings. Most of the time they are kept in my folders, but sometimes they are viewed by friends and relatives, and on the odd occasion displayed at the local community hall.
I've started to notice finger marks and tiny smudges on the artwork. As none of the pieces are permanently displayed, I don't want to frame them.
What is the best way of preserving and protecting my pencil/charcoal drawings without having to put them behind glass?
One of my friends said that you could use hairspray. Is that the right way to go? | 2016/06/16 | [
"https://crafts.stackexchange.com/questions/1785",
"https://crafts.stackexchange.com",
"https://crafts.stackexchange.com/users/426/"
] | If the pieces are likely to be handled and/or displayed, then using fixative sprays is probably the best option. There are two purpose-made types:
* Workable
* Final
**Workable Fixative**
As the name suggests, this allows you to add additional layers to your work after the spray has been used.
Workable Fixative is a thin solution that sets up a new toothy (slightly rough) surface for more drawing. You can choose to spray the entire piece or isolated parts. To prevent other areas from getting sprayed, you can use paper, or frisket, to shield the areas that you don't want to spray. Once the fixative is dry you can continue working on the drawing.
The fact that it provides new tooth can be an advantage: heavily shaded or heavily worked areas can become smooth, making it harder to apply further layers of graphite/colour to the piece. So you can use a workable fixative every few layers to keep a fresh tooth.
**Final Fixative**
This provides a more durable surface than a Workable Fixative, but it should only be applied once you are certain that you don't want to make any further adjustments.
However, a Final Fixative can cause the piece to darken, so many artists do not apply this layer and simply stop at a Workable Fixative.
**Hairspray**
Hairspray has been mentioned as a cheaper alternative to fixative sprays. Below is a *summary from [Drawing for Dummies](http://www.dummies.com/how-to/music-creative-arts/Visual-Arts/Drawing.html)*:
> Hairspray does contain some of the materials of a fixative, but it only works in the short term and ultimately damages the drawing: the hairspray yellows over time and ruins it.
***Do not use hairspray.***
**Applying Fixative**
* Use proper ventilation. Fixatives smell and *are hazardous to your health*.
* Shake the can before spraying and test on a scrap piece of paper away from your drawing. The nozzle can clog and deposit 'lumps' on your drawing.
* Directions on the can generally say to spray 20-25cm (8-10 inches) from the drawing. However, I spray 30-40cm (12-15 inches) from the drawing. Make sure the piece is on a flat surface in a draft-free area.
* Lightly spray first coat horizontally. Let dry for 15 minutes. Make sure the layer is even and smooth.
* Lightly spray second coat vertically. Let dry for 15 minutes. Make sure the layer is even and smooth.
* Subsequent layers may be added in alternating directions if needed.
*The sprayed areas of your piece will become darker, and they won't lighten once the fixative has dried.* | Hairspray will work but will yellow over time. Get the Krylon fixative mentioned above. Any hobby store will carry it. You can also use clear varnish from the paint department at a big box store. |
1,785 | I have now built up a small portfolio of pencil/charcoal drawings. Most of the time they are kept in my folders, but sometimes they are viewed by friends and relatives, and on the odd occasion displayed at the local community hall.
I've started to notice finger marks and tiny smudges on the artwork. As none of the pieces are permanently displayed, I don't want to frame them.
What is the best way of preserving and protecting my pencil/charcoal drawings without having to put them behind glass?
One of my friends said that you could use hairspray. Is that the right way to go? | 2016/06/16 | [
"https://crafts.stackexchange.com/questions/1785",
"https://crafts.stackexchange.com",
"https://crafts.stackexchange.com/users/426/"
] | Using a fixative, as described in previous answers here, is a standard way of protecting one's own work. On an acquired piece, you're free to do whatever you want with your own property. However, it would likely hurt the market value or historical value of the piece. On a valuable or historical work, fixative might be viewed as akin to "vandalism", since it changes the nature, and likely the appearance, of the work from what the artist created.
Acquired works should be stored in acid-free, archival-quality flat files between layers of special paper, or else professionally framed behind glass. | You can try using clear plastic or saran-type wrap, or a clear window-winterizing plastic, using heat to shrink it. |
1,785 | I have now built up a small portfolio of pencil/charcoal drawings. Most of the time they are kept in my folders, but sometimes they are viewed by friends and relatives, and on the odd occasion displayed at the local community hall.
I've started to notice finger marks and tiny smudges on the artwork. As none of the pieces are permanently displayed, I don't want to frame them.
What is the best way of preserving and protecting my pencil/charcoal drawings without having to put them behind glass?
One of my friends said that you could use hairspray. Is that the right way to go? | 2016/06/16 | [
"https://crafts.stackexchange.com/questions/1785",
"https://crafts.stackexchange.com",
"https://crafts.stackexchange.com/users/426/"
] | Hairspray definitely works, but I'm not sure whether it will yellow over time. You can also buy artist-quality fixative spray, presumably tested for its pH and other qualities. | I remember when I took art lessons as a kid. We would spray our drawings with non-aerosol hairspray to keep them from getting smudged. |
1,785 | I have now built up a small portfolio of pencil/charcoal drawings. Most of the time they are kept in my folders, but sometimes they are viewed by friends and relatives, and on the odd occasion displayed at the local community hall.
I've started to notice finger marks and tiny smudges on the artwork. As none of the pieces are permanently displayed, I don't want to frame them.
What is the best way of preserving and protecting my pencil/charcoal drawings without having to put them behind glass?
One of my friends said that you could use hairspray. Is that the right way to go? | 2016/06/16 | [
"https://crafts.stackexchange.com/questions/1785",
"https://crafts.stackexchange.com",
"https://crafts.stackexchange.com/users/426/"
] | If the pieces are likely to be handled and/or displayed, then using fixative sprays is probably the best option. There are two purpose-made types:
* Workable
* Final
**Workable Fixative**
As the name suggests, this allows you to add additional layers to your work after the spray has been used.
Workable Fixative is a thin solution that sets up a new toothy (slightly rough) surface for more drawing. You can choose to spray the entire piece or isolated parts. To prevent other areas from getting sprayed, you can use paper, or frisket, to shield the areas that you don't want to spray. Once the fixative is dry you can continue working on the drawing.
The fact that it provides new tooth can be an advantage: heavily shaded or heavily worked areas can become smooth, making it harder to apply further layers of graphite/colour to the piece. So you can use a workable fixative every few layers to keep a fresh tooth.
**Final Fixative**
This provides a more durable surface than a Workable Fixative, but it should only be applied once you are certain that you don't want to make any further adjustments.
However, a Final Fixative can cause the piece to darken, so many artists do not apply this layer and simply stop at a Workable Fixative.
**Hairspray**
Hairspray has been mentioned as a cheaper alternative to fixative sprays. Below is a *summary from [Drawing for Dummies](http://www.dummies.com/how-to/music-creative-arts/Visual-Arts/Drawing.html)*:
> Hairspray does contain some of the materials of a fixative, but it only works in the short term and ultimately damages the drawing: the hairspray yellows over time and ruins it.
***Do not use hairspray.***
**Applying Fixative**
* Use proper ventilation. Fixatives smell and *are hazardous to your health*.
* Shake the can before spraying and test on a scrap piece of paper away from your drawing. The nozzle can clog and deposit 'lumps' on your drawing.
* Directions on the can generally say to spray 20-25cm (8-10 inches) from the drawing. However, I spray 30-40cm (12-15 inches) from the drawing. Make sure the piece is on a flat surface in a draft-free area.
* Lightly spray first coat horizontally. Let dry for 15 minutes. Make sure the layer is even and smooth.
* Lightly spray second coat vertically. Let dry for 15 minutes. Make sure the layer is even and smooth.
* Subsequent layers may be added in alternating directions if needed.
*The sprayed areas of your piece will become darker, and they won't lighten once the fixative has dried.* | 1. The best way would be to heat-laminate the work, which doesn't yellow and is cheap and easy to have done.
2. Framing it in a good double glass frame is another way to keep it safe.
3. You could protect the finished work with the thin laminating films sold for mobile-phone screens, which not only protect the work but also stay flexible, so it can be carried just like paper.
4. Simply covering it tightly with food-wrapping film will protect it too, but it needs another layer of protection over it, such as a glass frame, as the film itself is very soft. |
1,785 | I have now built up a small portfolio of pencil/charcoal drawings. Most of the time they are kept in my folders, but sometimes they are viewed by friends and relatives, and on the odd occasion displayed at the local community hall.
I've started to notice finger marks and tiny smudges on the artwork. As none of the pieces are permanently displayed, I don't want to frame them.
What is the best way of preserving and protecting my pencil/charcoal drawings without having to put them behind glass?
One of my friends said that you could use hairspray. Is that the right way to go? | 2016/06/16 | [
"https://crafts.stackexchange.com/questions/1785",
"https://crafts.stackexchange.com",
"https://crafts.stackexchange.com/users/426/"
] | Using a fixative, as described in previous answers here, is a standard way of protecting one's own work. On an acquired piece, you're free to do whatever you want with your own property. However, it would likely hurt the market value or historical value of the piece. On a valuable or historical work, fixative might be viewed as akin to "vandalism", since it changes the nature, and likely the appearance, of the work from what the artist created.
Acquired works should be stored in acid-free, archival-quality flat files between layers of special paper, or else professionally framed behind glass. | 1. The best way would be to heat-laminate the work, which doesn't yellow and is cheap and easy to have done.
2. Framing it in a good double glass frame is another way to keep it safe.
3. You could protect the finished work with the thin laminating films sold for mobile-phone screens, which not only protect the work but also stay flexible, so it can be carried just like paper.
4. Simply covering it tightly with food-wrapping film will protect it too, but it needs another layer of protection over it, such as a glass frame, as the film itself is very soft. |
290,554 | > I grant you with great pleasure the forgiveness you asked for.
Is the position of ***with great pleasure*** grammatically correct? | 2021/07/04 | [
"https://ell.stackexchange.com/questions/290554",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/134911/"
] | No, it isn't - you need to say who had arrived at home. (KB)
If it was *you* who had arrived at home, you would need to say when arriving home. (FF) | No. The correct structure for a "when" clause is [*when + **subject + verb-with-tense***]. This, however, is [*when + **verb-with-tense***], with no subject, so it is bad grammar. |
26,564,195 | **Latest Update:**
This got fixed in the new masonry version.
**Original Post:**
I have an AngularJS website with Bootstrap 3 styling, which works fine in Chrome, Safari and Firefox, but not in IE (and I thought those days would be over).
I use the [Masonry](http://masonry.desandro.com/) plugin to display some tiles. The first time I open the page, IE11 sticks the tiles together. I believe it is because of some problem with the padding in Bootstrap. When I try to debug the application, or just print variable contents with console.log, everything works fine. Also, after reloading the page everything is rendered as expected; it really happens only the first time the page is accessed.
I've noticed that Masonry's website and examples work in IE, so I'm trying to figure out what they do differently.
The above-mentioned problems also occur in IE10. I don't have any information about IE9, and we don't intend to support IE8 or earlier.
**Update:** I've noticed that the Masonry website doesn't use paddings (like Bootstrap does) but margins instead, and indeed when I remove paddings and add margins it works. However, the question remains: why doesn't it work with paddings?
**Update 2:** I have a working test which shows the error. It is quite extensive, and can be accessed here: <http://server.grman.at/ie11-intro.html>
It shows that the problem occurs in IE only if a script (probably the Masonry library) is pre-loaded on a previous page and then used afterwards.
Here is a screenshot of how it should look, from Chrome: *(screenshot not preserved)*
And here is a screenshot of how it looks for me in IE11: *(screenshot not preserved)*
**Last Update:** Yes, it's the Masonry script. I've created a second intro page, <http://server.grman.at/ie11-intro2.html>, which doesn't preload the Masonry script, and it works. Now the question is: **WHY?** | 2014/10/25 | [
"https://Stackoverflow.com/questions/26564195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/621366/"
] | After playing around with your demo a bit, I found that loading masonry.pkgd.min.js *before* Bootstrap and custom styles would resolve the issue for me. Something in Masonry's setup is breaking re-navigations in Internet Explorer - though I don't have specifics at this time.
Move the masonry script tag to the top of your document, above Bootstrap and your styles, and you should find that the issue is resolved. | The obvious and fast answer (as I'm not sure whether the error is fixable in the masonry script in the first place) is to remove the reference to the masonry script from any page that is not going to use it.
**Update:**
This got fixed in the newer masonry version. |
96,027 | > **Possible Duplicate:**
> [Too many SE sites causes confusion](https://meta.stackexchange.com/questions/70771/too-many-se-sites-causes-confusion)
First thing, I see that Stack Overflow has been remodeled into a number of other sites like Super User, Server Fault, and a lot, lot more. I think this Stack Overflow design is excellent. The UI is awesome. But do we actually need so many sites?
For example, Super User, Server Fault, Unix, and AskUbuntu are not much different in the purpose they serve. They might be targeted towards different audiences, but in reality there isn't a strict boundary between them. I think that with fewer sites we could have a better overall user experience, since a poster would find the answer on just one site and wouldn't need to think about which one is the best place to put the question. I feel that reducing the number of sites would have a positive impact on the quality of the content.
Second thing, I wanted to know if I can get the Stack Overflow software to host internally on my private LAN. | 2011/06/22 | [
"https://meta.stackexchange.com/questions/96027",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/164194/"
] | Short answer - yes, we **do** need more sites than just one to cover the IT profession (the fact that sites like Cooking and English Language don't belong on a programming site is hopefully obvious), as it is a very large area of professional expertise and specialisation. You mention the ease of being able to just post a question without worrying about where to post it - the flip side of that is that on a site that's too large and broadly defined, the people with the answers to your questions may never see your question, as it scrolls off the front page too quickly. Probably not the effect you were hoping for.
The precise number of sites we need is open for debate. I personally think there's a *little* too much fragmentation already (the classic example being "Unix and Linux" *and* "AskUbuntu", where the latter is not only arguably a subset of the former, but where questions on both those sites are arguably covered well enough by SO/SU/SF), but that's simply my opinion. Others disagree that there are too many sites, and yet others feel much like you.
And IIRC the Stack Overflow software isn't available for private use, though I think people have mentioned there are a few open source attempts out there which are similar. Perhaps you could look at some of these? | Regarding availability of the SE engine, the most recent info I know of is [this comment](https://meta.stackexchange.com/questions/79435/what-is-stack-overflows-business-model/79446#79446) by [Stack Exchange CFO](https://stackexchange.com/about/team) Michael Pryor:
> @popular Fixed link. It is offered. It is for internal private use only. It's also expensive. – Michael Pryor♦ May 25 at 15:26
There are also three existing questions on the topic. The oldest one erroneously says yes at the time of this writing.
* [Creating an internal Stack Exchange for proprietary questions?](https://meta.stackexchange.com/questions/55240/creating-an-internal-stack-exchange-for-proprietary-questions)
* [Is Stack Exchange / Stack Overflow available for private or internal use?](https://meta.stackexchange.com/questions/16054/is-the-stack-exchange-engine-available)
* [Are there going to be public details about the enterprise version of SE?](https://meta.stackexchange.com/questions/69362/are-there-going-to-be-public-details-about-the-enterprise-version-of-se/69436#69436) |
My Xcode project has a navigation controller, and the main view controller has a UIButton that pushes a new view. How do I assign my "SecondViewController" as the view controller class for the second view? | 2013/08/02 | [
"https://Stackoverflow.com/questions/18022021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2646711/"
] | In Interface Builder, open the inspector on the right, select the Identity Inspector (3rd option), and under Custom Class, enter the name of your class.
 | Go to interface builder, see the property inspector top right (i cant remember the rght name), select your view controller, 2nd tab i believe, there's you class name box. |
I was gifted a small pine tree of unknown species, around 30 cm tall. I don't know how to take care of it, so, without much imagination, I just moved it into a larger pot, added some soil, and watered it till I saw water in the plant saucer. I do not have a balcony or garden, so I placed it in front of the only window of my studio, which faces west.
Am I doing anything wrong? What should I do further? When should I water it?
[](https://i.stack.imgur.com/t1nSa.jpg) | 2019/12/17 | [
"https://gardening.stackexchange.com/questions/48804",
"https://gardening.stackexchange.com",
"https://gardening.stackexchange.com/users/27615/"
] | It is a pine (or more specifically, *Picea abies*, often known as spruce rather than pine, but a member of the Pine family), an old-fashioned Christmas tree in fact - it will be okay for the Christmas period if you keep it away from heat sources and make sure it's watered. Unfortunately, though, these trees do not make good houseplants - they need to be outdoors, in the cold, over winter. All you can do is keep it in the coolest, brightest spot in your home and maybe find someone to take it who can keep it outside later on. | How much root did it have on it? The rule of thumb is that the roots should be at least half the weight of the shoots. If there is less root then you need to reduce the shoots accordingly. With a little spruce like that, you could cut out alternate shoots close to the central trunk, but do not prune back the leader at the top. This is only necessary if the root was small and weak. If it is strong, then simply maintain water in the trough at the bottom, making sure that it is being taken up, i.e. the top shouldn't be bone dry. |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a 1st level slot, plus 1d8 for each spell level higher than 1st, to a maximum of 5d8....
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | The maximum is on *all* the extra radiant damage that Divine Smite adds to your normal weapon damage, necessarily including the first 2d8 (emphasis mine):
> […] you can expend one paladin spell slot to deal radiant damage to the target, **in addition** to the weapon's damage. **The extra damage is [a variable amount], to a maximum of 5d8.**
The 5d8 maximum is a limit on the extra radiant damage that Divine Smite is adding to the weapon's damage.
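To make this reading concrete, here is a minimal sketch in Python (purely illustrative; the function name is mine, and the undead/fiend bump follows the 2018 errata quoted in other answers on this question):

```python
def divine_smite_d8s(slot_level: int, vs_undead_or_fiend: bool = False) -> int:
    """Extra radiant d8s from Divine Smite for a given spell-slot level.

    2d8 for a 1st-level slot, +1d8 per slot level above 1st,
    capped at 5d8 (6d8 against undead/fiends, per the 2018 errata).
    """
    dice = min(1 + slot_level, 5)   # 1st -> 2d8, 2nd -> 3d8, ..., capped at 5d8
    if vs_undead_or_fiend:
        dice += 1                   # the cap effectively rises to 6d8
    return dice

assert divine_smite_d8s(1) == 2    # 1st-level slot: 2d8
assert divine_smite_d8s(4) == 5    # a 4th-level slot already hits the cap
assert divine_smite_d8s(6) == 5    # a 6th-level slot adds nothing beyond 5d8
assert divine_smite_d8s(4, vs_undead_or_fiend=True) == 6
```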
(For *Improved* Divine Smite, see the question [Improved Divine Smite Differentiation](https://rpg.stackexchange.com/questions/70028/improved-divine-smite-differentiation).) | A compelling argument can be made for either case. Using the total maximum as 5d8 best mirrors how other maximums are written throughout the book. Take, for example, the description of falling damage:
> At the end of a fall, a creature takes 1d6 bludgeoning damage for every 10 feet it fell, to a maximum of 20d6. The creature lands prone, unless it avoids taking damage from the fall.
*Player's Handbook*, 183
And the "At Higher Levels" of "Hail of Thorns":
> If you cast this spell using a spell slot of 2nd level or higher, the damage increases by 1d10 for each slot level above 1st (to a maximum of 6d10).
*Player's Handbook*, 249
Both of these examples use "Some amount of damage, up to some maximum amount of damage". If we assume the goal was to write all maximums in the same way, then it makes sense to assume that the *total maximum damage* is 5d8.
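If all three caps are read the same way, the arithmetic has the same shape. A small illustrative sketch (dice counts only; the helper and its name are mine, nothing official):

```python
def capped_dice(base: int, per_step: int, steps: int, cap: int) -> int:
    """The recurring '... to a maximum of N dice' pattern."""
    return min(base + per_step * steps, cap)

# Falling: 1d6 per 10 feet fallen, to a maximum of 20d6.
assert capped_dice(0, 1, 250 // 10, 20) == 20
# Hail of Thorns: 1d10, +1d10 per slot level above 1st, to a maximum of 6d10.
assert capped_dice(1, 1, 9 - 1, 6) == 6
# Divine Smite read the same way: 2d8, +1d8 per slot level above 1st, 5d8 total.
assert capped_dice(2, 1, 4 - 1, 5) == 5
```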
However, since it wasn't clear, I asked Jeremy Crawford on Twitter:
[Divine Smite: Total maximum is 5d8, or additional spell lvl damage is 5d8? (Can I use a 6th level spell to get 2d8 + 5d8?)](https://twitter.com/GamerJosh/status/568816437047001089)
I will update when (if?) I get a response. |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a 1st level slot, plus 1d8 for each spell level higher than 1st, to a maximum of 5d8....
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | The regular maximum for Divine Smite is 5d8, but increases to 6d8 against undead and fiends
===========================================================================================
As of the [2018 PHB errata](https://media.wizards.com/2018/dnd/downloads/PH-Errata.pdf), the description of the [Divine Smite](https://www.dndbeyond.com/classes/paladin#DivineSmite-264) feature says:
> Starting at 2nd level, when you hit a creature with a melee weapon attack, you can expend one spell slot to deal radiant damage to the target, in addition to the weapon’s damage. The extra damage is 2d8 for a 1st-level spell slot, plus 1d8 for each spell level higher than 1st, to **a maximum of 5d8**. The damage increases by 1d8 if the target is an undead or a fiend, to **a maximum of 6d8**.
As other answers have pointed out, 5d8 is the regular maximum for the total amount of "extra radiant damage" done by Divine Smite, regardless of the spell slot used. 5d8 is an overall limit for the feature - not just for the additional radiant damage done when using a higher-level spell slot.
The portion after the comma in the last sentence ("to a maximum of 6d8") was added in the 2018 PHB errata, and now makes it perfectly clear how the 1d8 extra damage against undead and fiends interacts with the 5d8 damage cap - the damage cap is increased by 1d8 as well.
Jeremy Crawford explains the reason for the change in the [November 27, 2018 episode of Dragon+](https://youtu.be/AaSdPL_pSgE?t=1432) (relevant segment starts around 23:52 into the episode). He describes it as a clarification to the feature that matches how many groups were interpreting it, not a change in how it works.
(As clarified in [my answer to the Improved Divine Smite Differentiation question](https://rpg.stackexchange.com/a/119874/33569), the Improved Divine Smite 11th-level feature adds an extra 1d8 radiant damage, not constrained by this 5d8 limit. In other words, Improved Divine Smite always adds an extra 1d8 radiant damage to qualifying attacks, separately from any spell slots expended to use the regular Divine Smite feature. Crawford explains in the Dragon+ episode that they removed the last sentence of Improved Divine Smite because it was causing people to mistakenly assume it *was* constrained by Divine Smite's limit.)
---
Pre-errata, [rules designer Jeremy Crawford answered this question (as asked by you!) in a January 2016 tweet](https://twitter.com/JeremyECrawford/status/687423172988284929):
> *Does the 5d8 damage cap on Divine Smite count the 2d8 base damage, JUST the extra 1d8 per slot > 1st, or all of it 2gether?*
>
> *Still unsure what the mac allowed damage of Divine Smite as a whole is. 5d8 using a 4th level slot? or 7d8 using a 6th?*
>
> **Divine Smite can deal a maximum of 5d8 radiant damage, or 6d8 if the target is an undead or a fiend.**
As clearly stated by Crawford, Divine Smite has a regular maximum of 5d8 radiant damage - but that maximum increases to 6d8 if the target is an undead or fiend (since Divine Smite automatically does an extra 1d8 radiant damage when used if the target is an undead or fiend). This was ambiguous before the errata, but is now explicitly stated in the feature description. | I've always seen it as being the additional damage beyond the 2d8 that is capped at 5d8. For example, a 20th level paladin has up to 5th level spell slots. As written, the damage progression is thus:
* 1st-level slot: 2d8
* 2nd-level slot: 2d8 + 1d8
* 3rd-level slot: 2d8 + 2d8
* 4th-level slot: 2d8 + 3d8
* 5th-level slot: 2d8 + 4d8
Why wouldn't they get to use their 5th-level slots? I've always ruled at my own table that the 5d8 limit was for the bonus damage, meaning you can spend up to a 6th-level slot on Smite, but nothing higher (for if you multiclass). |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a 1st level slot, plus 1d8 for each spell level higher than 1st, to a maximum of 5d8....
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | The regular maximum for Divine Smite is 5d8, but increases to 6d8 against undead and fiends
===========================================================================================
As of the [2018 PHB errata](https://media.wizards.com/2018/dnd/downloads/PH-Errata.pdf), the description of the [Divine Smite](https://www.dndbeyond.com/classes/paladin#DivineSmite-264) feature says:
> Starting at 2nd level, when you hit a creature with a melee weapon attack, you can expend one spell slot to deal radiant damage to the target, in addition to the weapon’s damage. The extra damage is 2d8 for a 1st-level spell slot, plus 1d8 for each spell level higher than 1st, to **a maximum of 5d8**. The damage increases by 1d8 if the target is an undead or a fiend, to **a maximum of 6d8**.
As other answers have pointed out, 5d8 is the regular maximum for the total amount of "extra radiant damage" done by Divine Smite, regardless of the spell slot used. 5d8 is an overall limit for the feature - not just for the additional radiant damage done when using a higher-level spell slot.
The portion after the comma in the last sentence ("to a maximum of 6d8") was added in the 2018 PHB errata, and now makes it perfectly clear how the 1d8 extra damage against undead and fiends interacts with the 5d8 damage cap - the damage cap is increased by 1d8 as well.
Jeremy Crawford explains the reason for the change in the [November 27, 2018 episode of Dragon+](https://youtu.be/AaSdPL_pSgE?t=1432) (relevant segment starts around 23:52 into the episode). He describes it as a clarification to the feature that matches how many groups were interpreting it, not a change in how it works.
(As clarified in [my answer to the Improved Divine Smite Differentiation question](https://rpg.stackexchange.com/a/119874/33569), the Improved Divine Smite 11th-level feature adds an extra 1d8 radiant damage, not constrained by this 5d8 limit. In other words, Improved Divine Smite always adds an extra 1d8 radiant damage to qualifying attacks, separately from any spell slots expended to use the regular Divine Smite feature. Crawford explains in the Dragon+ episode that they removed the last sentence of Improved Divine Smite because it was causing people to mistakenly assume it *was* constrained by Divine Smite's limit.)
---
Pre-errata, [rules designer Jeremy Crawford answered this question (as asked by you!) in a January 2016 tweet](https://twitter.com/JeremyECrawford/status/687423172988284929):
> *Does the 5d8 damage cap on Divine Smite count the 2d8 base damage, JUST the extra 1d8 per slot > 1st, or all of it 2gether?*
>
> *Still unsure what the mac allowed damage of Divine Smite as a whole is. 5d8 using a 4th level slot? or 7d8 using a 6th?*
>
> **Divine Smite can deal a maximum of 5d8 radiant damage, or 6d8 if the target is an undead or a fiend.**
As clearly stated by Crawford, Divine Smite has a regular maximum of 5d8 radiant damage - but that maximum increases to 6d8 if the target is an undead or fiend (since Divine Smite automatically does an extra 1d8 radiant damage when used if the target is an undead or fiend). This was ambiguous before the errata, but is now explicitly stated in the feature description. | I believe the way it works is that expending a 1st-lvl spell slot gives 2d8, a 2nd-lvl spell slot 3d8, a 3rd-lvl spell slot 4d8, and a 4th-lvl spell slot 5d8. A 5th-lvl spell slot would also give 5d8.
If you multiclass to get higher-lvl spell slots, they would still give only the max of 5d8. |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a 1st level slot, plus 1d8 for each spell level higher than 1st, to a maximum of 5d8....
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | The maximum is on *all* the extra radiant damage that Divine Smite adds to your normal weapon damage, necessarily including the first 2d8 (emphasis mine):
> […] you can expend one paladin spell slot to deal radiant damage to the target, **in addition** to the weapon's damage. **The extra damage is [a variable amount], to a maximum of 5d8.**
The 5d8 maximum is a limit on the extra radiant damage that Divine Smite is adding to the weapon's damage.
(For *Improved* Divine Smite, see the question [Improved Divine Smite Differentiation](https://rpg.stackexchange.com/questions/70028/improved-divine-smite-differentiation).) | I've always seen it as being the additional damage beyond the 2d8 that is capped at 5d8. For example, a 20th level paladin has up to 5th level spell slots. As written, the damage progression is thus:
* 1st-level slot: 2d8
* 2nd-level slot: 2d8 + 1d8
* 3rd-level slot: 2d8 + 2d8
* 4th-level slot: 2d8 + 3d8
* 5th-level slot: 2d8 + 4d8
Why wouldn't they get to use their 5th-level slots? I've always ruled at my own table that the 5d8 limit was for the bonus damage, meaning you can spend up to a 6th-level slot on Smite, but nothing higher (for if you multiclass). |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a 1st level slot, plus 1d8 for each spell level higher than 1st, to a maximum of 5d8....
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | I've assumed that 5d8 was the maximum for the whole Divine Smite, not the extra damage. Maybe this isn't really a good answer, but I can't see why it'd be the other way given that multiclassing is optional at the DM's discretion. If it *was* the other way then at most you could only get 5d8 out of it with 4th level spell slots, and to get any higher the DM would have to allow multiclassing, and not every DM will have it in their game. Furthermore, it seems highly impractical to require leveling **another class** high enough to grant the character 6th level spell slots.
Primary spellcasters give their full level towards the calculation, but secondary casters like Paladins only add half their level (rounded down) to that. So you only have 4th-level spell slots as a Paladin at 13th level and higher. For the spell slots you want here, the Paladin would have to multiclass into a primary spellcaster for 5 levels, to get the **one** 6th-level spell slot from there. That is all *if* multiclassing is allowed. And the end result is an 18th-level character.
Actually, let's reframe that whole thought. If you wanted maximum Divine Smite capability, you would try to multiclass very early on. The Paladin gets Divine Smite at 2nd level, where they get Spellcasting and 1st-level spell slots. You could accomplish this feat of Divine Smite by taking 10 levels in a primary spellcasting class, and you would still end up with a 6th-level spell slot. **That** would be the most efficient way of doing it, I suppose.
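For the arithmetic behind that route, here is a minimal sketch (assuming the PHB multiclassing rule that paladin levels count at half, rounded down, toward the shared slot table; the helpers are mine, purely illustrative):

```python
def multiclass_caster_level(full_caster_levels: int, half_caster_levels: int) -> int:
    # PHB multiclassing: full casters contribute every level;
    # half casters (paladin, ranger) contribute half their levels, rounded down.
    return full_caster_levels + half_caster_levels // 2

def max_slot_level(caster_level: int) -> int:
    # The shared multiclass slot table unlocks a new slot level
    # at every odd caster level, topping out at 9th-level slots.
    return min(9, (caster_level + 1) // 2)

# Paladin 2 / full caster 10 -> caster level 11 -> one 6th-level slot.
assert max_slot_level(multiclass_caster_level(10, 2)) == 6
```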
I hope that answers your question thoroughly enough. It would be preposterous to require an optional feature, and in a *highly* impractical manner at that, to get the maximum power of your Divine Smiting capabilities. (*I feel like I really drifted off topic with that explanation of why it doesn't work.*)
**So I hope we can assume that the total maximum is 5d8.** | I've always seen it as being the additional damage beyond the 2d8 that is capped at 5d8. For example, a 20th level paladin has up to 5th level spell slots. As written, the damage progression is thus:
* 1st-level slot: 2d8
* 2nd-level slot: 2d8 + 1d8
* 3rd-level slot: 2d8 + 2d8
* 4th-level slot: 2d8 + 3d8
* 5th-level slot: 2d8 + 4d8
Why wouldn't they get to use their 5th-level slots? I've always ruled at my own table that the 5d8 limit was for the bonus damage, meaning you can spend up to a 6th-level slot on Smite, but nothing higher (for if you multiclass). |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a 1st level slot, plus 1d8 for each spell level higher than 1st, to a maximum of 5d8....
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | The regular maximum for Divine Smite is 5d8, but increases to 6d8 against undead and fiends
===========================================================================================
As of the [2018 PHB errata](https://media.wizards.com/2018/dnd/downloads/PH-Errata.pdf), the description of the [Divine Smite](https://www.dndbeyond.com/classes/paladin#DivineSmite-264) feature says:
> Starting at 2nd level, when you hit a creature with a melee weapon attack, you can expend one spell slot to deal radiant damage to the target, in addition to the weapon’s damage. The extra damage is 2d8 for a 1st-level spell slot, plus 1d8 for each spell level higher than 1st, to **a maximum of 5d8**. The damage increases by 1d8 if the target is an undead or a fiend, to **a maximum of 6d8**.
As other answers have pointed out, 5d8 is the regular maximum for the total amount of "extra radiant damage" done by Divine Smite, regardless of the spell slot used. 5d8 is an overall limit for the feature - not just for the additional radiant damage done when using a higher-level spell slot.
The portion after the comma in the last sentence ("to a maximum of 6d8") was added in the 2018 PHB errata, and now makes it perfectly clear how the 1d8 extra damage against undead and fiends interacts with the 5d8 damage cap - the damage cap is increased by 1d8 as well.
Jeremy Crawford explains the reason for the change in the [November 27, 2018 episode of Dragon+](https://youtu.be/AaSdPL_pSgE?t=1432) (relevant segment starts around 23:52 into the episode). He describes it as a clarification to the feature that matches how many groups were interpreting it, not a change in how it works.
(As clarified in [my answer to the Improved Divine Smite Differentiation question](https://rpg.stackexchange.com/a/119874/33569), the Improved Divine Smite 11th-level feature adds an extra 1d8 radiant damage, not constrained by this 5d8 limit. In other words, Improved Divine Smite always adds an extra 1d8 radiant damage to qualifying attacks, separately from any spell slots expended to use the regular Divine Smite feature. Crawford explains in the Dragon+ episode that they removed the last sentence of Improved Divine Smite because it was causing people to mistakenly assume it *was* constrained by Divine Smite's limit.)
---
Pre-errata, [rules designer Jeremy Crawford answered this question (as asked by you!) in a January 2016 tweet](https://twitter.com/JeremyECrawford/status/687423172988284929):
> *Does the 5d8 damage cap on Divine Smite count the 2d8 base damage, JUST the extra 1d8 per slot > 1st, or all of it 2gether?*
>
> *Still unsure what the mac allowed damage of Divine Smite as a whole is. 5d8 using a 4th level slot? or 7d8 using a 6th?*
>
> **Divine Smite can deal a maximum of 5d8 radiant damage, or 6d8 if the target is an undead or a fiend.**
As clearly stated by Crawford, Divine Smite has a regular maximum of 5d8 radiant damage - but that maximum increases to 6d8 if the target is an undead or fiend (since Divine Smite automatically does an extra 1d8 radiant damage when used if the target is an undead or fiend). This was ambiguous before the errata, but is now explicitly stated in the feature description. | I've assumed that 5d8 was the maximum for the whole Divine Smite, not the extra damage. Maybe this isn't really a good answer, but I can't see why it'd be the other way given that multiclassing is optional at the DM's discretion. If it *was* the other way then at most you could only get 5d8 out of it with 4th level spell slots, and to get any higher the DM would have to allow multiclassing, and not every DM will have it in their game. Furthermore, it seems highly impractical to require leveling **another class** high enough to grant the character 6th level spell slots.
Primary spellcasters give their full level towards the calculation, but secondary casters like Paladins only add half their level (rounded down) to that. So you only have 4th-level spell slots as a Paladin at 13th level and higher. For the spell slots you want here, the Paladin would have to multiclass into a primary spellcaster for 5 levels, to get the **one** 6th-level spell slot from there. That is all *if* multiclassing is allowed. And the end result is an 18th-level character.
Actually, let's reframe that whole thought. If you wanted maximum Divine Smite capability, you would try to multiclass very early on. The Paladin gets Divine Smite at 2nd level, where they get Spellcasting and 1st-level spell slots. You could accomplish this feat of Divine Smite by taking 10 levels in a primary spellcasting class, and you would still end up with a 6th-level spell slot. **That** would be the most efficient way of doing it, I suppose.
I hope that answers your question thoroughly enough. It would be preposterous to require an optional feature, and in a *highly* impractical manner at that, to get the maximum power of your Divine Smiting capabilities. (*I feel like I really drifted off topic with that explanation of why it doesn't work.*)
**So I hope we can assume that the total maximum is 5d8.** |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a 1st level slot, plus 1d8 for each spell level higher than 1st, to a maximum of 5d8....
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | A compelling argument can be made for either case. Using the total maximum as 5d8 best mirrors how other maximums are written throughout the book. Take, for example, the description of falling damage:
> At the end of a fall, a creature takes 1d6 bludgeoning damage for every 10 feet it fell, to a maximum of 20d6. The creature lands prone, unless it avoids taking damage from the fall.
*Player's Handbook*, 183
And the "At Higher Levels" of "Hail of Thorns":
> If you cast this spell using a spell slot of 2nd level or higher, the damage increases by 1d10 for each slot level above 1st (to a maximum of 6d10).
*Player's Handbook*, 249
Both of these examples use "Some amount of damage, up to some maximum amount of damage". If we assume the goal was to write all maximums in the same way, then it makes sense to assume that the *total maximum damage* is 5d8.
However, since it wasn't clear, I asked Jeremy Crawford on Twitter:
[Divine Smite: Total maximum is 5d8, or additional spell lvl damage is 5d8? (Can I use a 6th level spell to get 2d8 + 5d8?)](https://twitter.com/GamerJosh/status/568816437047001089)
I will update when (if?) I get a response. | I've always seen it as being the additional damage beyond the 2d8 that is capped at 5d8. For example, a 20th level paladin has up to 5th level spell slots. As written, the damage progression is thus:
* 1st-level slot: 2d8
* 2nd-level slot: 2d8 + 1d8
* 3rd-level slot: 2d8 + 2d8
* 4th-level slot: 2d8 + 3d8
* 5th-level slot: 2d8 + 4d8
Why wouldn't they get to use their 5th-level slots? I've always ruled at my own table that the 5d8 limit was for the bonus damage, meaning you can spend up to a 6th-level slot on Smite, but nothing higher (for if you multiclass). |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a 1st level slot, plus 1d8 for each spell level higher than 1st, to a maximum of 5d8....
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | The regular maximum for Divine Smite is 5d8, but increases to 6d8 against undead and fiends
===========================================================================================
As of the [2018 PHB errata](https://media.wizards.com/2018/dnd/downloads/PH-Errata.pdf), the description of the [Divine Smite](https://www.dndbeyond.com/classes/paladin#DivineSmite-264) feature says:
> Starting at 2nd level, when you hit a creature with a melee weapon attack, you can expend one spell slot to deal radiant damage to the target, in addition to the weapon’s damage. The extra damage is 2d8 for a 1st-level spell slot, plus 1d8 for each spell level higher than 1st, to **a maximum of 5d8**. The damage increases by 1d8 if the target is an undead or a fiend, to **a maximum of 6d8**.
As other answers have pointed out, 5d8 is the regular maximum for the total amount of "extra radiant damage" done by Divine Smite, regardless of the spell slot used. 5d8 is an overall limit for the feature - not just for the additional radiant damage done when using a higher-level spell slot.
The portion after the comma in the last sentence ("to a maximum of 6d8") was added in the 2018 PHB errata, and now makes it perfectly clear how the 1d8 extra damage against undead and fiends interacts with the 5d8 damage cap - the damage cap is increased by 1d8 as well.
Jeremy Crawford explains the reason for the change in the [November 27, 2018 episode of Dragon+](https://youtu.be/AaSdPL_pSgE?t=1432) (relevant segment starts around 23:52 into the episode). He describes it as a clarification to the feature that matches how many groups were interpreting it, not a change in how it works.
(As clarified in [my answer to the Improved Divine Smite Differentiation question](https://rpg.stackexchange.com/a/119874/33569), the Improved Divine Smite 11th-level feature adds an extra 1d8 radiant damage, not constrained by this 5d8 limit. In other words, Improved Divine Smite always adds an extra 1d8 radiant damage to qualifying attacks, separately from any spell slots expended to use the regular Divine Smite feature. Crawford explains in the Dragon+ episode that they removed the last sentence of Improved Divine Smite because it was causing people to mistakenly assume it *was* constrained by Divine Smite's limit.)
---
Pre-errata, [rules designer Jeremy Crawford answered this question (as asked by you!) in a January 2016 tweet](https://twitter.com/JeremyECrawford/status/687423172988284929):
>
> *Does the 5d8 damage cap on Divine Smite count the 2d8 base damage, JUST the extra 1d8 per slot > 1st, or all of it 2gether?*
>
>
> *Still unsure what the max allowed damage of Divine Smite as a whole is. 5d8 using a 4th level slot? or 7d8 using a 6th?*
>
>
> **Divine Smite can deal a maximum of 5d8 radiant damage, or 6d8 if the target is an undead or a fiend.**
>
>
>
As clearly stated by Crawford, Divine Smite has a regular maximum of 5d8 radiant damage - but that maximum increases to 6d8 if the target is an undead or fiend (since Divine Smite automatically does an extra 1d8 radiant damage when used if the target is an undead or fiend). This was ambiguous before the errata, but is now explicitly stated in the feature description. | A compelling argument can be made for either case. Using the total maximum as 5d8 best mirrors how other maximums are written throughout the book. Take, for example, the description of falling damage:
>
> At the end of a fall, a creature takes 1d6 bludgeoning damage for every 10 feet it fell, to a maximum of 20d6. The creature lands prone, unless it avoids taking damage from the fall.
>
>
>
*Player's Handbook*, 183
And the "At Higher Levels" of "Hail of Thorns":
>
> If you cast this spell using a spell slot of 2nd level or higher, the damage increases by 1d10 for each slot level above 1st (to a maximum of 6d10).
>
>
>
*Player's Handbook*, 249
Both of these examples use "Some amount of damage, up to some maximum amount of damage". If we assume the goal was to write all maximums in the same way, then it makes sense to assume that the *total maximum damage* is 5d8.
However, since it wasn't clear, I asked Jeremy Crawford on Twitter:
[Divine Smite: Total maximum is 5d8, or additional spell lvl damage is 5d8? (Can I use a 6th level spell to get 2d8 + 5d8?)](https://twitter.com/GamerJosh/status/568816437047001089)
I will update when (if?) I get a response. |
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
>
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a
> 1st level slot, plus 1d8 for each spell level higher than 1st, to a
> maximum of 5d8....
>
>
>
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | I believe the way it works is that expending a 1st-level spell slot gives 2d8, a 2nd-level slot 3d8, a 3rd-level slot 4d8, and a 4th-level slot 5d8. A 5th-level spell slot would also give 5d8.
If you multiclass to get higher-level spell slots, they would still give only the max of 5d8. | I've always seen it as being the additional damage beyond the 2d8 that is capped at 5d8. For example, a 20th-level paladin has up to 5th-level spell slots. As written, the damage progression is thus:
1st-level slot - 2d8
2nd-level slot - 2d8 + 1d8
3rd-level slot - 2d8 + 2d8
4th-level slot - 2d8 + 3d8
5th-level slot - 2d8 + 4d8
Why wouldn't they get to use their 5th-level slots? I've always ruled at my own table that the 5d8 limit was for the bonus damage, meaning you can spend up to a 6th-level slot on Smite, but nothing higher (relevant if you multiclass). |
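Expressed as a formula (my own restatement of this house rule, not the official text), the dice for a slot of level $L$ would be:

$$\text{total dice} = \bigl(2 + \min(L - 1,\ 5)\bigr)\,\mathrm{d8} \;\le\; 7\mathrm{d8}$$

which is why a 6th-level slot is the natural ceiling under this reading.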
57,077 | This is a mechanics question pertaining to the maximum allowed damage for the Paladin's *Divine Smite* ability.
>
> **Divine Smite:** ...you can expend one [Spell Slot] to deal Radiant damage in addition to weapon damage. The extra damage is 2d8 for a
> 1st level slot, plus 1d8 for each spell level higher than 1st, to a
> maximum of 5d8....
>
>
>
**"...to a maximum of 5d8..."** is the part I'm having issues with figuring out how to rule.
Can the *extra* damage, beyond the 2d8 for filling a 1st level slot with the ability, be up to 5d8? Which would require a (Multiclass gained) spell slot of 6th level to add an extra 5d8 to the attack, making it 7d8 total (The default 2d8 + the maximum allowed extra of 5d8). *Or*, are the rules saying that the total allowed maximum damage can't exceed 5d8 total; which would only take a 4th level spell slot to add a max of 3d8 to the default 2d8?
2d8 + 3d8 = "to a maximum of 5d8" damage total?
2d8 + "to a maximum of 5d8" extra total = 7d8 ?
My guess is the first choice, since without Multiclassing a Paladin can never have more than 5th level spell slots and could never have the 6th level slot required to add 5d8 to the attack. My PC differs though. | 2015/02/20 | [
"https://rpg.stackexchange.com/questions/57077",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/21362/"
] | The maximum is on *all* the extra radiant damage that Divine Smite adds to your normal weapon damage, necessarily including the first 2d8 (emphasis mine):
>
> […] you can expend one paladin spell slot to deal radiant damage to the target, **in addition** to the weapon's damage. **The extra damage is [a variable amount], to a maximum of 5d8.**
>
>
>
The 5d8 maximum is a limit on the extra radiant damage that Divine Smite is adding to the weapon's damage.
(For *Improved* Divine Smite, see the question [Improved Divine Smite Differentiation](https://rpg.stackexchange.com/questions/70028/improved-divine-smite-differentiation).) | I've assumed that 5d8 was the maximum for the whole Divine Smite, not the extra damage. Maybe this isn't really a good answer, but I can't see why it'd be the other way given that multiclassing is optional at the DM's discretion. If it *was* the other way then at most you could only get 5d8 out of it with 4th level spell slots, and to get any higher the DM would have to allow multiclassing, and not every DM will have it in their game. Furthermore, it seems highly impractical to require leveling **another class** high enough to grant the character 6th level spell slots.
Primary spellcasters contribute their full level to the calculation, but secondary casters like Paladins only add half their level (rounded up, I believe). So you only have 4th-level spell slots as a Paladin at 13th level and higher. For the spell slots you want here, the Paladin would have to multiclass into a primary spellcasting class for 4 levels to get the **one** 6th-level spell slot from there. That is all *if* multiclassing is allowed. And the end result is a 17th-level character.
Actually, let's re-frame that whole thought. If you wanted to get maximum Divine Smite capability, you actually would try to multiclass very early on. The Paladin gets Divine Smite at 2nd level, where they get Spellcasting and 1st level Spell Slots. You could accomplish this feat of Divine Smite by taking 10 levels in a Primary spellcasting class, and then you'll still get 6th level spell slots. **That** would be the most efficient way of doing it, I suppose.
I hope that answers your question thoroughly enough. It would be preposterous to require an optional feature, used in a *highly* impractical manner at that, to get the maximum power of your Divine Smiting capabilities. (*I feel like I really drifted off topic with that explanation of why it doesn't work.*)
**So I hope we can assume that the total maximum is 5d8.** |
72,260 | I just installed a new Kohler toilet. It flushes fine but after the flush the level of water in the bowl slowly drains. Why would this be and is it anything to worry about? | 2015/08/22 | [
"https://diy.stackexchange.com/questions/72260",
"https://diy.stackexchange.com",
"https://diy.stackexchange.com/users/41742/"
] | Get the dimensions you need first, then judge by looks. My guess is that the roofing boards would be lower grade (i.e., more knotty) than floor boards. | I would expect floorboards to be planed to tight thickness and width tolerances and to be relatively smooth on one side. Roof boards can be sawn rougher with more tolerance for variation in thickness and width.
Frankly though, it could very well be that the only difference is where they were pulled from! |
1,816 | I was wondering if it is possible to link to a comment? For example, the second comment after this reply [Questions about geometric distribution](https://math.stackexchange.com/questions/26386/questions-about-geometric-distribution/26412#26412)
Thanks! | 2011/03/19 | [
"https://math.meta.stackexchange.com/questions/1816",
"https://math.meta.stackexchange.com",
"https://math.meta.stackexchange.com/users/1281/"
] | You can obtain links to comments by clicking on the time of the comment's posting, which appears right after the name of the user who posted it. (Looking at the [recent activity](https://meta.stackexchange.com/a/120688/152579) in the [thread](https://meta.stackexchange.com/q/5436/155585) that [Hendrik's answer](http://meta.math.stackexchange.com/a/1818/1424) links to, this appears to be a new feature.)
Another way is to find the comment in the activity tab of the user who posted it, and click the link there. | Have a look at this question on meta.SO: [How to link to a comment?](https://meta.stackexchange.com/q/5436/155585) I don't know, however, if the solutions provided in the answers still work. |
44,910 | Electric fans are often controlled by a knob which the user turns to power the fan on or off, and control the fan's speed.
On a lot of fans, the knob goes immediately from the off position to the highest speed, followed by the other speed settings in descending order.
For example on the picture below, you can see the knob going from the off position ('0') to the highest ('3'), and then the other two in descending speed order ('2', '1').

Why is that? | 2013/09/12 | [
"https://ux.stackexchange.com/questions/44910",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/6026/"
] | I found a thread discussing this particular question as well:
<http://boards.straightdope.com/sdmb/showthread.php?t=292238>
It looks like it is because of the rheostat and how it works. | My guess is that...
Imagine you have to operate the device in darkness (a no-light environment), or you are someone with visual difficulties; placing the off setting next to the highest speed allows the user to know they've switched off the device.
See it this way: you are at position 2 and your intention is to switch off the fan. With the knob in a "0,1,2,3" arrangement, you will not be able to tell immediately whether you are at the 1 position or the off position, as the fan is turning at a similar pace (slower, but not easy to identify).
Whereas, by arranging the settings as "1,2,3,0", you will be able to tell immediately if you are at position 3 (faster and louder) or at position 0 (significantly slower and quieter). Thus, users in dark places can "feel" the setting without looking at the knob.
Don't shoot me if I'm wrong, this is just my guess. Cheers. =) |
44,910 | Electric fans are often controlled by a knob which the user turns to power the fan on or off, and control the fan's speed.
On a lot of fans, the knob goes immediately from the off position to the highest speed, followed by the other speed settings in descending order.
For example on the picture below, you can see the knob going from the off position ('0') to the highest ('3'), and then the other two in descending speed order ('2', '1').

Why is that? | 2013/09/12 | [
"https://ux.stackexchange.com/questions/44910",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/6026/"
] | I'm so tempted to suggest that they did analytics and found that the fastest speed is the most popular, and the slow one the least. But that's nothing more than a guess.
Although I'm not a motor expert, from some quick research this is what I understood. I'm pretty sure this is correct, but to get a definitive answer, the question is probably more appropriate for the electronics forum.
I'm really simplifying the explanation, which anyhow requires some understanding of electronics.
Anyhow:
Motor Technology
================
Inductance
----------
Electric motors' key principle of operation is based on inductance. That is, a current flows through a coil to create a magnetic field, and magnets within this field start to spin, being attached to a central pole. (It is also possible to have the coil on the spinning component and the magnets fixed.)
As with other inductive setups, once the moving components start to spin they create what is known as *back EMF* - a force opposing the one that caused the movement. In electrical terms, the more back EMF, the more resistance the inductor presents in the circuit.
Single Coil Designs
-------------------
While the fan is at rest, there is little resistance in the circuit. Thus, the circuit draws a lot of current, which may 'fry' the fan - various components can overheat.
The sooner the fan starts to spin, the quicker the resistance rises and the current consumption drops.
So quick starting of the rotation is beneficial here.
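As a rough first-order model (the standard brushed-DC approximation, offered here only as an illustration), the winding current is set by the supply voltage minus the back EMF:

$$I = \frac{V - k_e\,\omega}{R}$$

where $k_e$ is the motor's EMF constant, $\omega$ the rotation speed, and $R$ the winding resistance. At $\omega = 0$ the stall current $V/R$ is the worst case, which is why getting the rotor up to speed quickly matters.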
Multi Coil Designs
------------------
Another type of design involves an even number of coils arranged in a circle around the spinning pole.
With this design, a different pair of coils is triggered in succession, depending on the angle of the spinning part. This requires a controller that can sense the angular position of the spinning component, which is done by sensing the resistance of each coil. But if the back EMF is too low (due to slow rotation) the controller can't sense the position of the fan, which yields jittery behaviour.
Here again, there's a need for a quick start.
As you can see, either design would benefit from a quick start of the fan. Thus, the fastest mode is the one next to the off position. | My guess is that...
Imagine you have to operate the device in darkness (a no-light environment), or you are someone with visual difficulties; placing the off setting next to the highest speed allows the user to know they've switched off the device.
See it this way: you are at position 2 and your intention is to switch off the fan. With the knob in a "0,1,2,3" arrangement, you will not be able to tell immediately whether you are at the 1 position or the off position, as the fan is turning at a similar pace (slower, but not easy to identify).
Whereas, by arranging the settings as "1,2,3,0", you will be able to tell immediately if you are at position 3 (faster and louder) or at position 0 (significantly slower and quieter). Thus, users in dark places can "feel" the setting without looking at the knob.
Don't shoot me if I'm wrong, this is just my guess. Cheers. =) |
44,910 | Electric fans are often controlled by a knob which the user turns to power the fan on or off, and control the fan's speed.
On a lot of fans, the knob goes immediately from the off position to the highest speed, followed by the other speed settings in descending order.
For example on the picture below, you can see the knob going from the off position ('0') to the highest ('3'), and then the other two in descending speed order ('2', '1').

Why is that? | 2013/09/12 | [
"https://ux.stackexchange.com/questions/44910",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/6026/"
] | I think it is to ensure the motor starts, because if low came first, low-end motors might not be able to start. | My guess is that...
Imagine you have to operate the device in darkness (a no-light environment), or you are someone with visual difficulties; placing the off setting next to the highest speed allows the user to know they've switched off the device.
See it this way: you are at position 2 and your intention is to switch off the fan. With the knob in a "0,1,2,3" arrangement, you will not be able to tell immediately whether you are at the 1 position or the off position, as the fan is turning at a similar pace (slower, but not easy to identify).
Whereas, by arranging the settings as "1,2,3,0", you will be able to tell immediately if you are at position 3 (faster and louder) or at position 0 (significantly slower and quieter). Thus, users in dark places can "feel" the setting without looking at the knob.
Don't shoot me if I'm wrong, this is just my guess. Cheers. =) |
44,910 | Electric fans are often controlled by a knob which the user turns to power the fan on or off, and control the fan's speed.
On a lot of fans, the knob goes immediately from the off position to the highest speed, followed by the other speed settings in descending order.
For example on the picture below, you can see the knob going from the off position ('0') to the highest ('3'), and then the other two in descending speed order ('2', '1').

Why is that? | 2013/09/12 | [
"https://ux.stackexchange.com/questions/44910",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/6026/"
] | 50+ years ago, when I made the mistake of taking an intro course in EE, we had an experiment with electrical motors. If I recall correctly (no mean feat), increasing the resistance decreases the motor's speed. My brother had a cheap box fan whose speeds ran the typical 0 - 3 - 2 - 1. I figured the simplest (cheapest) design was to turn the knob so resistance went from 0 to progressively higher ohms, so the speed knob read off-high-med-low. Like the man said, cars don't behave that way, so I figure it's just cheaper to do it that way.
But what do I know - I got a B in the course even though it was primarily solving ODEs, which I'd done several years earlier. Shoulda never listened to my roommate. | My guess is that...
Imagine you have to operate the device in darkness (a no-light environment), or you are someone with visual difficulties; placing the off setting next to the highest speed allows the user to know they've switched off the device.
See it this way: you are at position 2 and your intention is to switch off the fan. With the knob in a "0,1,2,3" arrangement, you will not be able to tell immediately whether you are at the 1 position or the off position, as the fan is turning at a similar pace (slower, but not easy to identify).
Whereas, by arranging the settings as "1,2,3,0", you will be able to tell immediately if you are at position 3 (faster and louder) or at position 0 (significantly slower and quieter). Thus, users in dark places can "feel" the setting without looking at the knob.
Don't shoot me if I'm wrong, this is just my guess. Cheers. =) |
27,728,938 | What is the difference between control flow and data flow in a SSIS package along with some examples please.
Thanks. | 2015/01/01 | [
"https://Stackoverflow.com/questions/27728938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4274563/"
] | In a Data Flow task, it is mandatory that data flows/is transferred from a source to a destination, whereas in a Control Flow task it is not. | **Control Flow:**
Control Flow is the part of a SQL Server Integration Services package where you handle the flow of operations, or tasks.
Let's say you are reading a text file from a folder by using a Data Flow Task. If the Data Flow Task completes successfully, then you want to run a File System Task to move the file from the source folder to an archive folder. If the Data Flow Task fails, then you want to send email to your users by using a Send Mail Task. Precedence Constraints are used to control the execution flow.
**Data Flow:**
Data Flow is the part of a SQL Server Integration Services package where data is extracted by using Data Flow Sources (OLE DB Source, Raw File Source, Flat File Source, Excel Source, etc.). After extraction, Data Flow Transformations such as Data Conversion, Derived Column, Lookup, Multicast, Merge, etc. are used to implement different pieces of business logic, and the data is finally written to Data Flow Destinations (OLE DB Destination, Flat File Destination, Excel Destination, DataReader Destination, ADO NET Destination, etc.).
Take a look at [This post](http://sqlage.blogspot.in/2014/03/ssis-what-is-difference-between-control.html) for more details. |
27,728,938 | What is the difference between control flow and data flow in a SSIS package along with some examples please.
Thanks. | 2015/01/01 | [
"https://Stackoverflow.com/questions/27728938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4274563/"
] | In a Data Flow task, it is mandatory that data flows/is transferred from a source to a destination, whereas in a Control Flow task it is not. | Click on the Control Flow tab and observe what items are available in the Toolbox.
Similarly, click on the Data Flow tab and observe what items are available. |
11,275,974 | I am creating a Spring MVC Hibernate application using MySQL. Where should I save the User Images: in the database or in some folder, like under `WEB-INF/` ? | 2012/06/30 | [
"https://Stackoverflow.com/questions/11275974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1493285/"
] | Certainly not inside WEB-INF. You might want to save them in the file system, but not in the webapp's directory. First of all, it could very well be nonexistent if the app is packaged as a war. And second, you would lose everything as soon as you redeploy the app. Desktop apps don't store their user data in their install directory, do they? It should be the same for webapps.
Now, since images are usually big and they're not searchable, you might want to store them on the file system, and only store their name, path, hash, and/or mime type in the database. But it depends on your application: whether the images need to be served/used by other applications, whether those apps have access to the database and/or the file system, etc. You decide. | That depends what you're trying to accomplish.
If these are static images and you have a fixed number of users, you can consider saving them under WEB-INF/.
However, most likely this is not your case: you have a varying number of users, and you have to store an image for each one of them.
Possible solutions:
A. For each user, store an image name, and have a convention of storing/loading from a well-known directory in your file system.
B. Store the image as a blob in your DB. Consider checking the [@Lob annotation](http://docs.jboss.org/hibernate/annotations/3.5/reference/en/html/entity.html) |
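A minimal sketch of option B with JPA/Hibernate (hypothetical entity and field names; only the `@Lob` annotation itself comes from the linked docs):

```java
import javax.persistence.*;

@Entity
public class UserImage {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // Mapped to a BLOB column by Hibernate; fine for small images,
    // but large binaries bloat the table and its backups.
    @Lob
    private byte[] data;

    // Kept alongside so a controller can set the Content-Type header.
    private String contentType;
}
```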
11,275,974 | I am creating a Spring MVC Hibernate application using MySQL. Where should I save the User Images: in the database or in some folder, like under `WEB-INF/` ? | 2012/06/30 | [
"https://Stackoverflow.com/questions/11275974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1493285/"
] | You can choose either:
1. **DataBase** - the positive point is that the image can be associated with records and will never be orphaned (depending on your model). For backups it is a little bit painful in situations where the database grows large.
2. **FileSystem** - easy backups: as these are physical files, an rsync process should be enough. Another positive point is that you reduce the IO load on the DB. However, it is quite hard to keep the files and the DB records in sync (you have things distributed), so you cannot be sure a file wasn't removed while records in the DB still refer to it.
***If the filesystem option is chosen, put it outside the application directory structure (prepare a dedicated place for the files)***. The application dir should not be modified, as that causes some pain when redeployment is done. You can use symbolic links though.
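A minimal sketch of that file-system route (hypothetical class, directory, and method names; the idea is just "write the file to a dedicated folder outside the webapp and keep only the name in the DB"):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.*;

public class ImageStore {

    // Dedicated directory outside the application's deploy folder.
    private final Path baseDir = Paths.get("/var/data/myapp/images");

    /** Saves the upload and returns the file name to store in the DB. */
    public String save(String fileName, InputStream upload) throws IOException {
        Files.createDirectories(baseDir);
        Path target = baseDir.resolve(fileName);
        Files.copy(upload, target, StandardCopyOption.REPLACE_EXISTING);
        return fileName;
    }
}
```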
With images, you probably want to generate thumbnails and so on; this would be cheaper using the FileSystem option. | That depends what you're trying to accomplish.
If these are static images and you have a fixed number of users, you can consider saving them under WEB-INF/.
However, most likely this is not your case: you have a varying number of users, and you have to store an image for each one of them.
Possible solutions:
A. For each user, store an image name, and have a convention of storing/loading from a well-known directory in your file system.
B. Store the image as a blob in your DB. Consider checking the [@Lob annotation](http://docs.jboss.org/hibernate/annotations/3.5/reference/en/html/entity.html) |
11,275,974 | I am creating a Spring MVC Hibernate application using MySQL. Where should I save the User Images: in the database or in some folder, like under `WEB-INF/` ? | 2012/06/30 | [
"https://Stackoverflow.com/questions/11275974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1493285/"
] | Certainly not inside WEB-INF. You might want to save them in the file system, but not in the webapp's directory. First of all, it could very well be nonexistent if the app is packaged as a war. And second, you would lose everything as soon as you redeploy the app. Desktop apps don't store their user data in their install directory, do they? It should be the same for webapps.
Now, since images are usually big and they're not searchable, you might want to store them on the file system, and only store their name, path, hash, and/or mime type in the database. But it depends on your application: whether the images need to be served/used by other applications, whether those apps have access to the database and/or the file system, etc. You decide. | You can choose either:
1. **DataBase** - the positive point is that the image can be associated with records and will never be orphaned (depending on your model). For backups it is a little bit painful in situations where the database grows large.
2. **FileSystem** - easy backups: as these are physical files, an rsync process should be enough. Another positive point is that you reduce the IO load on the DB. However, it is quite hard to keep the files and the DB records in sync (you have things distributed), so you cannot be sure a file wasn't removed while records in the DB still refer to it.
***If the filesystem option is chosen, put it outside the application directory structure (prepare a dedicated place for the files)***. The application dir should not be modified, as that causes some pain when redeployment is done. You can use symbolic links though.
With images, you probably want to generate thumbnails and so on; this would be cheaper using the FileSystem option. |
44,795 | I am using Finder list view in OSX Lion 10.7.3, and I want to size the window to the contents. I click the zoom (green) button which according to the web 'toggles the window between the “user state” and the “standard state”'.
However, I find that OSX miscalculates and makes the window about 2.5 lines too short to fit the whole file list, unless the list is very small in which case it goes no shorter than a minimum size.
How do I avoid this behaviour and size the window to the list correctly? | 2012/03/20 | [
"https://apple.stackexchange.com/questions/44795",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/11678/"
] | Unfortunately I think you may be out of luck on this one. I tried hiding the toolbar, path bar, status bar, holding down modifier keys — nothing seems to change this behavior. You could check out some [third-party utilities](http://www.macupdate.com/app/mac/30591/right-zoom) to change the zoom button's behavior, but I don't know of anything that will give you the exact behavior you want. | As far as I'm concerned, it's a bug - this is a feature that has worked the same way in every version of the Mac OS that I've ever used until Lion. I'm stunned that it still hasn't been fixed after three major 10.7 updates. It defies logic that "zoom" would mean anything but "only make the window large enough to show all of the window contents," as it does in Icon view. I'm filing a report right now:
<http://www.apple.com/feedback/macosx.html>
Come on, Apple, fix this. |
46,160 | After the defeat of Selim Bradley (aka Homunculus Pride), his true form is revealed to be a tiny baby in a fetal position.
This makes me wonder: is Selim Bradley's true form actually an aborted fetus, found later by Father and given new life as the Homunculus Pride?
This would explain a lot about his psychology (e.g., his unconditional obedience towards his Father). | 2018/03/20 | [
"https://anime.stackexchange.com/questions/46160",
"https://anime.stackexchange.com",
"https://anime.stackexchange.com/users/39697/"
] | I think that none of the original forms of the homunculi are given an explanation.
* Lust has the body of a woman and can change her fingers into needle-like things.
* Wrath is a mortal homunculus that has a human body.
* Sloth has that big giant kind of body.
* Envy has that green monster thing as his original form, which he disguises as a normal human body.
Only Gluttony can be given a reasonable explanation for his original body: he is a failed Gate of Truth and hence has a false gate in his stomach.
You can argue Envy has that green monster body since he is a composite of many human souls, but that is true for all the homunculi, so we can't take that as an explanation.
Also, I went through the WIKI and couldn't find anything on this.
I also think the fetus body remained in the end because Pride was removed (killed) from the body and only the fetus remained, symbolically telling us that the monster has been removed and only a pure form of the being (a baby) remains - even though the baby is a homunculus. | All of their natural forms represent the sin they embody. Pride is a child, arrogant as many children are; Envy is an ugly green monster representing jealousy; Sloth is big and powerful, held back by his own laziness; Lust is a sexual icon; Gluttony is fat; etc. |
267,316 | I've got a situation where I want to put a wire in a concrete wall (I have a channel), but the one coming in isn't long enough. I cannot replace the incoming cable, so I need to join it with another one inside the channel. Said channel will afterwards be filled with cement, so I won't be able to access the joint anymore without some serious destruction. What is the best way to join the cable so that I can sleep easily knowing that there will be no problems there? If it matters, the cable will not be carrying significant loads (less than 100W max).
**Added:** Clarification was asked in comments. Location in world: Latvia. We're a 220V country. The incoming cable in question is marked "2x1,0" which (if I understand correctly) means two wires, each 1mm2 cross-section. Copper. It's pretty thin. The cable with which I will join it will be similar.
It will feed one LED lighting fixture (up to 50W, but probably less) and a wifi router. The location makes it unlikely that heavy loads would be attached there, however, of course, I cannot predict the future. | 2023/02/20 | [
"https://diy.stackexchange.com/questions/267316",
"https://diy.stackexchange.com",
"https://diy.stackexchange.com/users/409/"
] | I doubt anyone here is familiar with code in Latvia.
That being said, the "correct" way to do this in most code-complying areas would be to leave the splice serviceable. One option in your case would be to carve a hole in your wall and install a box to splice the wires in, and leave the box cover accessible (do not cover it in cement). If you don't want a box cover there, then another option is to trace your cable back to a place where a box would be acceptable, and run new cable from that point.
Some jurisdictions allow the use of splice kits that are designed to be inaccessible after installation.
If you don't know what is acceptable in your jurisdiction and you can't learn yourself, then you will need to hire a professional to either do it for you or educate you. | "cable in a concrete wall" is not a good idea, and this is why. No maintainability. Better to put the wires through **conduit**. This is one place where steel conduit is not good, and plastic works better.
With conduit, you can pull the wires out readily and at will, and then pull in replacement wires. The conduit must be built "to be able to be pulled" e.g. you can't have a plumbing elbow in there, it needs to be broad "sweeps" of turns. In North America conduit is required to be built empty of wires, and then wires pulled in after completion. This "keeps you honest" lol. |
267,316 | I've got a situation where I want to put a wire in a concrete wall (I have a channel), but the one coming in isn't long enough. I cannot replace the incoming cable, so I need to join it with another one inside the channel. Said channel will afterwards be filled with cement, so I won't be able to access the joint anymore without some serious destruction. What is the best way to join the cable so that I can sleep easily knowing that there will be no problems there? If it matters, the cable will not be carrying significant loads (less than 100W max).
**Added:** Clarification was asked in comments. Location in world: Latvia. We're a 220V country. The incoming cable in question is marked "2x1,0" which (if I understand correctly) means two wires, each 1mm2 cross-section. Copper. It's pretty thin. The cable with which I will join it will be similar.
It will feed one LED lighting fixture (up to 50W, but probably less) and a wifi router. The location makes it unlikely that heavy loads would be attached there, however, of course, I cannot predict the future. | 2023/02/20 | [
"https://diy.stackexchange.com/questions/267316",
"https://diy.stackexchange.com",
"https://diy.stackexchange.com/users/409/"
] | If I interpret your description properly - your existing incoming wire ends somewhere within the existing wall - you pull that length of it out, cut it, and make the splice on the outside of the wall. Then you get whatever proper wire is needed and lay a single run of it through the wall (channel), so that any splices happen outside the wall. You would be wasting the existing length of incoming wire that runs into your channel, but you want one good continuous length of wire encased within the concrete wall, never to be touched again.
In the US there is UF-B wire, which has extra-thick insulation and is rated for direct burial in the earth as well as being ultraviolet resistant. But it is not approved for use in poured concrete. You would not want to run that, or any other non-conduit-protected wiring, within poured cement (or concrete), because as the cement cures it becomes very alkaline - around pH 13 - for a period of time, and that can harm the insulation of the wiring... I mean, you could do it and take your chances, but to do it properly and have no future problems is why you would use some sort of PVC or metal conduit that encases the wire, and have the concrete/cement poured over that. Then, after the wall has cured, you can always simply pull wiring through the conduit... You didn't elaborate, but I assume your channel is a straight run; if not, you want it to be, unless you have good reason for it not to be. | "cable in a concrete wall" is not a good idea, and this is why. No maintainability. Better to put the wires through **conduit**. This is one place where steel conduit is not good, and plastic works better.
With conduit, you can pull the wires out readily and at will, and then pull in replacement wires. The conduit must be built "to be able to be pulled" e.g. you can't have a plumbing elbow in there, it needs to be broad "sweeps" of turns. In North America conduit is required to be built empty of wires, and then wires pulled in after completion. This "keeps you honest" lol. |
438,812 | So I have a very nice 27" 5K iMac, late 2014 model.
It runs Big Sur, but it doesn't get the Monterey upgrade, because it is 1 generation too old.
[I believe I can get Monterey to work using OCLP](https://github.com/dortania/Opencore-Legacy-Patcher/releases) on this hardware, but I'd rather not mess with that, as it would require a full re-install, for which I have no time. (A lot of home-brew/custom software would need to be set up fresh.)
So I wonder if I can run Monterey in a VM instead, even though the underlying hardware is officially not supported.
(I've got 24 GB RAM to play with. Plenty for a VM.)
I can just give it a try of course, but Monterey is a big download and I have very limited bandwidth at the moment, which is also needed for other stuff during the day.
I'd rather not download the whole thing over several nights only to find that it doesn't work.
Can anyone confirm whether this would work or not? | 2022/03/23 | [
"https://apple.stackexchange.com/questions/438812",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/133918/"
] | I have macOS Monterey version 12.1 installed on a VMware Fusion Player virtual machine on an iMac (21.5-inch, Late 2013) host. The iMac is running macOS Catalina version 10.15.7 from a 5 Gb/s USB port. The USB drive is a 500 GB Samsung T7 SSD. The VMware Fusion Player is the free-for-personal-use version 12.1.2.
I created a Monterey installer drive by using a Big Sur virtual machine, following Apple's instructions: [How to create a bootable installer for macOS](https://support.apple.com/en-us/HT201372). Instead of using a USB flash drive, I created and used a second SATA drive for the installer.
The virtual machine has 2 processor cores and 4096 MB of memory out of the 4 processor cores and 16 GB of memory installed in the iMac. This is a custom virtual machine where macOS 11.0 was selected as the operating system. | Yes, that should be fully possible. Note that there will be some performance loss - depending a lot on what type of computing you want to do. For example it might not be a good experience for 3D gaming, but will probably work just fine for app development. |
438,812 | So I have a very nice 27" 5K iMac, late 2014 model.
It runs Big Sur, but it doesn't get the Monterey upgrade, because it is 1 generation too old.
[I believe I can get Monterey to work using OCLP](https://github.com/dortania/Opencore-Legacy-Patcher/releases) on this hardware, but I'd rather not mess with that, as it would require a full re-install, for which I have no time. (A lot of home-brew/custom software would need to be set up fresh.)
So I wonder if I can run Monterey in a VM instead, even though the underlying hardware is officially not supported.
(I've got 24 GB RAM to play with. Plenty for a VM.)
I can just give it a try of course, but Monterey is a big download and I have very limited bandwidth at the moment, which is also needed for other stuff during the day.
I'd rather not download the whole thing over several nights only to find that it doesn't work.
Can anyone confirm whether this would work or not? | 2022/03/23 | [
"https://apple.stackexchange.com/questions/438812",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/133918/"
] | I have macOS Monterey version 12.1 installed on a VMware Fusion Player virtual machine on an iMac (21.5-inch, Late 2013) host. The iMac is running macOS Catalina version 10.15.7 from a 5 Gb/s USB port. The USB drive is a 500 GB Samsung T7 SSD. The VMware Fusion Player is the free-for-personal-use version 12.1.2.
I created a Monterey installer drive by using a Big Sur virtual machine, following Apple's instructions: [How to create a bootable installer for macOS](https://support.apple.com/en-us/HT201372). Instead of using a USB flash drive, I created and used a second SATA drive for the installer.
The virtual machine has 2 processor cores and 4096 MB of memory out of the 4 processor cores and 16 GB of memory installed in the iMac. This is a custom virtual machine where macOS 11.0 was selected as the operating system. | Self-answer, as I have now had the opportunity to try.
I tried it on 2 different Macs (a late 2014 iMac and a 2013 MacBook Pro) with Parallels, VMware Fusion and VirtualBox, with the same disappointing results.
It won't work properly. In all cases the installer, or the first stage of the setup after the base install, will throw a kernel panic.
In Parallels and VirtualBox it would crash early, after only 2 or 3 click-through screens at the beginning of the installation.
VMware Fusion at least got to the part where it starts copying files to the hard disk, and in one case to the first screen after that, but it eventually crashed as well.
I tried each a couple of times. The exact moment of the crash varies a bit, but eventually it will crash.
With all 3 virtualization platforms I was able to make a virtual Big Sur machine, and that worked fine. But I couldn't upgrade these to Monterey either, and I got the same "unsupported" messages as on the real hardware.
My only option right now seems to be to try OCLP on a virtual machine (no idea if that would work) or to spend a lot of money on a new Mac (which isn't in the budget for this year). |
1,592,082 | I'm interested in making an Arduino based MIDI controller to talk to my computer. Looking at other examples of Arduino MIDI (for example, *[MIDI Output using an Arduino](http://itp.nyu.edu/physcomp/Labs/MIDIOutput)*), they all seem to wire up a dedicated 5 pin DIN. Which makes sense as this is the original cable to connect keyboards, expanders and sequencers together.
However, I want to send MIDI to my PC. A 5-pin DIN is just going to have to be plugged into a conversion box which connects to my PC via USB. And I already have a USB cable to connect my Arduino to my PC. So why can't I just use this?
I'm assuming what would stop me is that these conversion boxes all come with drivers which know how to handle the signal coming in over USB. Whereas, say, a virtual synthesizer on my computer wouldn't expect or know how to handle raw bytes coming in via the serial port. So is there a standard or free equivalent to these drivers that I could use for my own project? Or, if not, what would it take to write one? Where could I find out more about this? | 2009/10/20 | [
"https://Stackoverflow.com/questions/1592082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8482/"
] | I found it was easier to just embed a cheap ($6) MIDI-USB interface right into my Arduino projects. *[Quick and Dirty Arduino MIDI Over USB](http://shiftmore.blogspot.com/2010/01/quick-and-dirty-arduino-midi-over-usb.html)* explains how.
There are also some pictures of an old calculator I turned into an Arduino USB-MIDI controller in *[Calculator MIDI USB Controller](http://shiftmore.blogspot.com/2009/12/calculator-midi-usb-controller.html)*.
I hope that helps. | You may want to check Spikenzielabs' [Serial - MIDI Converter](http://www.spikenzielabs.com/SpikenzieLabs/Serial_MIDI.html). It looks like exactly what you're looking for: it converts incoming serial data to MIDI data. So on the Arduino side, just send serial data as usual, and receive MIDI data on the PC side. |
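To give a feel for what such a converter does on the PC side, here is a minimal sketch using the standard `javax.sound.midi` API (my own illustration, not the linked tool's actual code; the serial-port read is stubbed out, since that needs a third-party library such as RXTX):

```java
import javax.sound.midi.*;

public class SerialToMidiBridge {
    public static void main(String[] args) throws Exception {
        // Default MIDI receiver of the system (e.g. a software synth).
        Receiver receiver = MidiSystem.getReceiver();

        // In a real bridge these three bytes would be read from the
        // serial port; 0x90 is a note-on status byte on channel 0.
        int status = 0x90, note = 60, velocity = 100;

        ShortMessage msg = new ShortMessage();
        msg.setMessage(status, note, velocity);
        receiver.send(msg, -1); // -1 = no timestamp
        receiver.close();
    }
}
```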
1,592,082 | I'm interested in making an Arduino based MIDI controller to talk to my computer. Looking at other examples of Arduino MIDI (for example, *[MIDI Output using an Arduino](http://itp.nyu.edu/physcomp/Labs/MIDIOutput)*), they all seem to wire up a dedicated 5 pin DIN. Which makes sense as this is the original cable to connect keyboards, expanders and sequencers together.
However, I want to send MIDI to my PC. A 5-pin DIN is just going to have to be plugged into a conversion box which connects to my PC via USB. And I already have a USB cable to connect my Arduino to my PC. So why can't I just use this?
I'm assuming what would stop me is that these conversion boxes all come with drivers which know how to handle the signal coming in over USB. Whereas, say, a virtual synthesizer on my computer wouldn't expect or know how to handle raw bytes coming in via the serial port. So is there a standard or free equivalent to these drivers that I could use for my own project? Or, if not, what would it take to write one? Where could I find out more about this? | 2009/10/20 | [
"https://Stackoverflow.com/questions/1592082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8482/"
] | We've developed an OSHW Arduino Shield for that <http://openpipe.cc/products/midi-usb-shield/>
Source code and schematics available.
Hope it helps! | You may want to check Spikenzielabs' [Serial - MIDI Converter](http://www.spikenzielabs.com/SpikenzieLabs/Serial_MIDI.html). It looks like exactly what you're looking for: it converts incoming serial data to MIDI data. So on the Arduino side, just send serial data as usual, and receive MIDI data on the PC side. |
1,592,082 | I'm interested in making an Arduino based MIDI controller to talk to my computer. Looking at other examples of Arduino MIDI (for example, *[MIDI Output using an Arduino](http://itp.nyu.edu/physcomp/Labs/MIDIOutput)*), they all seem to wire up a dedicated 5 pin DIN. Which makes sense as this is the original cable to connect keyboards, expanders and sequencers together.
However, I want to send MIDI to my PC. A 5-pin DIN is just going to have to be plugged into a conversion box which connects to my PC via USB. And I already have a USB cable to connect my Arduino to my PC. So why can't I just use this?
I'm assuming what would stop me is that these conversion boxes all come with drivers which know how to handle the signal coming in over USB. Whereas, say, a virtual synthesizer on my computer wouldn't expect or know how to handle raw bytes coming in via the serial port. So is there a standard or free equivalent to these drivers that I could use for my own project? Or, if not, what would it take to write one? Where could I find out more about this? | 2009/10/20 | [
"https://Stackoverflow.com/questions/1592082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8482/"
] | I found it was easier to just embed a cheap ($6) MIDI-USB interface right into my Arduino projects. *[Quick and Dirty Arduino MIDI Over USB](http://shiftmore.blogspot.com/2010/01/quick-and-dirty-arduino-midi-over-usb.html)* explains how.
There are also some pictures of an old calculator I turned into an Arduino USB-MIDI controller in *[Calculator MIDI USB Controller](http://shiftmore.blogspot.com/2009/12/calculator-midi-usb-controller.html)*.
I hope that helps. | We've developed an OSHW Arduino Shield for that <http://openpipe.cc/products/midi-usb-shield/>
Source code and schematics available.
Hope it helps! |
1,592,082 | I'm interested in making an Arduino based MIDI controller to talk to my computer. Looking at other examples of Arduino MIDI (for example, *[MIDI Output using an Arduino](http://itp.nyu.edu/physcomp/Labs/MIDIOutput)*), they all seem to wire up a dedicated 5 pin DIN. Which makes sense as this is the original cable to connect keyboards, expanders and sequencers together.
However, I want to send MIDI to my PC. A 5-pin DIN is just going to have to be plugged into a conversion box which connects to my PC via USB. And I already have a USB cable to connect my Arduino to my PC. So why can't I just use this?
I'm assuming what would stop me is that these conversion boxes all come with drivers which know how to handle the signal coming in over USB. Whereas, say, a virtual synthesizer on my computer wouldn't expect or know how to handle raw bytes coming in via the serial port. So is there a standard or free equivalent to these drivers that I could use for my own project? Or, if not, what would it take to write one? Where could I find out more about this? | 2009/10/20 | [
"https://Stackoverflow.com/questions/1592082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8482/"
] | I found it was easier to just embed a cheap ($6) MIDI-USB interface right into my Arduino projects. *[Quick and Dirty Arduino MIDI Over USB](http://shiftmore.blogspot.com/2010/01/quick-and-dirty-arduino-midi-over-usb.html)* explains how.
There are also some pictures of an old calculator I turned into an Arduino USB-MIDI controller in *[Calculator MIDI USB Controller](http://shiftmore.blogspot.com/2009/12/calculator-midi-usb-controller.html)*.
I hope that helps. | We built a module to make your own MIDI device easily;
just have a look at [e-licktronic](http://www.e-licktronic.com).
We use [Hairless](http://projectgus.github.com/hairless-midiserial/) to convert serial to MIDI;
it's a very simple piece of software. |
1,592,082 | I'm interested in making an Arduino based MIDI controller to talk to my computer. Looking at other examples of Arduino MIDI (for example, *[MIDI Output using an Arduino](http://itp.nyu.edu/physcomp/Labs/MIDIOutput)*), they all seem to wire up a dedicated 5 pin DIN. Which makes sense as this is the original cable to connect keyboards, expanders and sequencers together.
However, I want to send MIDI to my PC. A 5-pin DIN is just going to have to be plugged into a conversion box which connects to my PC via USB. And I already have a USB cable to connect my Arduino to my PC. So why can't I just use this?
I'm assuming what would stop me is that these conversion boxes all come with drivers which know how to handle the signal coming in over USB. Whereas, say, a virtual synthesizer on my computer wouldn't expect or know how to handle raw bytes coming in via the serial port. So is there a standard or free equivalent to these drivers that I could use for my own project? Or, if not, what would it take to write one? Where could I find out more about this? | 2009/10/20 | [
"https://Stackoverflow.com/questions/1592082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8482/"
] | We've developed an OSHW Arduino Shield for that <http://openpipe.cc/products/midi-usb-shield/>
Source code and schematics available.
Hope it helps! | We built a module to make your own MIDI device easily;
just have a look at [e-licktronic](http://www.e-licktronic.com).
We use [Hairless](http://projectgus.github.com/hairless-midiserial/) to convert serial to MIDI;
it's a very simple piece of software. |
4,771 | This should be an easy one if you have a Bible handy:
>
> My name is Kether Torah,
>
> the Crown of the Torah.
>
>
> I am a pillar of righteousness,
>
> and a foundation of mercy.
>
>
> I am four, and three, and two, and one;
>
> and beside me there is none other.
>
>
> Those who hear me have life;
>
> they that disobey me shall surely die.
>
>
>
What am I? | 2014/11/15 | [
"https://puzzling.stackexchange.com/questions/4771",
"https://puzzling.stackexchange.com",
"https://puzzling.stackexchange.com/users/5474/"
] | >
> Headstone?
>
> Some Jewish people believe a good name is what is superior - "one crown of the Torah" - which a headstone has inscribed on it.
>
> a pillar of righteousness, as we honor those who fell before us, and of mercy on their deaths.
>
> an equestrian headstone has meanings depending on how many legs (4) it has touching the ground.
>
> there is none other that is the same, as each has different inscriptions; and no one is beside them, as they are deceased.
>
> those who keep them have life, as they're not planted yet or not sold.
>
> if you violate your "head" you shall surely die.
>
>
> | >
> 
>
> [Picture of the **Ten Commandments** with the words כתר תורה (*Kether Torah*) written prominently above.]
>
> The identification of the Ten Commandments with the **Crown of the Torah** comes from the ancient Jewish practice of [gematria](http://en.wikipedia.org/wiki/Gematria), whereby the numerical value of the words *Kether Torah* is equal to that of *Aseret Ha-Devarim* ("the Ten Commandments").
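>
> A quick tally of the letter values (my own arithmetic, worth double-checking): כתר = 20 + 400 + 200 = 620 and תורה = 400 + 6 + 200 + 5 = 611, for a total of 620 + 611 = 1231; עשרת = 70 + 300 + 200 + 400 = 970 and הדברים = 5 + 4 + 2 + 200 + 10 + 40 = 261, and 970 + 261 = 1231 as well.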
>
>
>
>
>
>
> |
218,339 | Imagine I have a generation ship that is heading to a nearby star system, say 10 light years away. The average lifespan for the crew is 150 years on Earth and is expected to increase by about 5 years with each generation. There is no cryonic suspension, due to a legal ban on any form of suspended animation for humans, and many policies are in place to ensure the population demography on board stays healthy across every generation, so we can now laser-focus on the economic and science aspects of this one-way trip.
I believe delta v is important when planning trips within the solar system since, as the name suggests, it is the change in velocity needed when jumping between orbits. What about jumping between star systems? Are we going to factor in delta v for all interstellar flights regardless? If so, why? | 2021/11/30 | [
"https://worldbuilding.stackexchange.com/questions/218339",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/8400/"
] | Physics is physics everywhere.
1. Your generation ship is coasting between its start and arrival points: this means that it will be slowed down or accelerated by the attraction of whichever body's Hill sphere it is passing through at any moment. This influences the delta v. See the plot of the heliocentric velocity of [Voyager 2](https://en.wikipedia.org/wiki/Voyager_2) for reference, and notice how it keeps decreasing, albeit slowly
[](https://i.stack.imgur.com/GpYHj.png)
2. Assuming your generation ship wants to land on a planet or orbit it, and not just smack into it at several tens of km/s, it will need to slow down during the approach. Once again, this is delta v. | For interstellar journeys at sublight speeds, the size of the DV is largely irrelevant in terms of the time it takes to complete the journey.
---------------------------------------------------------------------------------------------------------------------------------------------
Using your example and assuming an upper velocity (which is not mentioned in your question, btw) of 0.10c, the journey is going to take approx **100 years**, with the acceleration & deceleration phases added on.
From the perspective of the crew it doesn't really matter if the ship takes 3 years to reach its 'cruising' velocity or just 3 weeks. With the latter you end up with a journey of 100 years (plus 6 weeks); with the former it takes about 106 years. For all intents and purposes, putting the 'pedal to the metal' only shaves a measly ~6% off the total travel time - hardly a huge saving for a generation ship. And of course the saving only worsens as the distance increases: double the distance to 20 LY, and the high DV only saves you approx 3% on travel time over the lower DV. And on it goes.
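As a back-of-the-envelope check (linear speed ramps assumed; my own figures, not the asker's): with constant-acceleration ramps, each ramp effectively costs only about half its duration compared to an instant start, so

$$t_{\text{total}} \approx \frac{d}{v_c} + \frac{t_{\text{acc}} + t_{\text{dec}}}{2}, \qquad \frac{10\ \text{ly}}{0.1c} = 100\ \text{years}$$

Treating the ramps as fully lost time, as above, gives the slightly more pessimistic ~106-year figure; either way, the saving from harder acceleration is a few percent.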
Plus, a lower DV is an easier engineering challenge, which you want wherever possible. *Everything* on a generation ship has to be engineered to last, because if something breaks down and can't be repaired with what you have on board, you're screwed. |
218,339 | Imagine I have a generation ship that is heading to a nearby star system, say 10 light years away. The average lifespan for the crew is 150 years on Earth and is expected to increase by about 5 years with each generation. There is no cryonic suspension, due to a legal ban on any form of suspended animation for humans, and many policies are in place to ensure the population demography on board stays healthy across every generation, so we can now laser-focus on the economic and science aspects of this one-way trip.
I believe delta v is important when planning trips within the solar system since, as the name suggests, it is the change in velocity needed when jumping between orbits. What about jumping between star systems? Are we going to factor in delta v for all interstellar flights regardless? If so, why? | 2021/11/30 | [
"https://worldbuilding.stackexchange.com/questions/218339",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/8400/"
] | Physics is physics everywhere.
1. Your generation ship is coasting between its start and arrival points: this means that it will be slowed down or accelerated by the attraction of whichever body's Hill sphere it is passing through at any moment. This influences the delta v. See the plot of the heliocentric velocity of [Voyager 2](https://en.wikipedia.org/wiki/Voyager_2) for reference, and notice how it keeps decreasing, albeit slowly
[](https://i.stack.imgur.com/GpYHj.png)
2. Assuming your generation ship wants to land on a planet or orbit it, and not just smack into it at several tens of km/s, it will need to slow down during the approach. Once again, this is delta v. | Yes, it's relevant. For a few reasons:
1. If your delta v is 0, you're not going anywhere. Sure this is a degenerate case, but if it were truly irrelevant it wouldn't matter. More practically, your delta v is going to need to be at least your star's escape velocity (barring exotic scenarios such as an extremely close approach by your target star.)
2. Stars are not at rest with respect to one another. So at a minimum, you will require at least the delta v of the relative speeds of your origin and destination systems.
3. Even in generational ships, time to destination matters. Realistically speaking, half your delta v is basically your cruising speed (OK, this doesn't hold as you get up into really relativistic speeds, but in that case we're probably not talking generational ships). So delta v will directly relate to how long your journey will take; a rough budget sketch follows below. |
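To put rough numbers on the three points above (a sketch only; the escape-speed and stellar-motion values below are illustrative assumptions, not figures from the question):

```cpp
#include <cstdio>

int main() {
    const double c = 299792.458;     // speed of light, km/s
    double vEscape  = 42.1;          // km/s: roughly solar escape speed at 1 AU (assumed origin)
    double vStarRel = 20.0;          // km/s: assumed relative motion of the two stars
    double vCruise  = 0.10 * c;      // km/s: the 0.1c cruise speed discussed above
    // Accelerate to cruise speed, then shed all of it again at the destination.
    double budget = vEscape + vStarRel + 2.0 * vCruise;
    printf("delta-v budget ~ %.0f km/s (the cruise legs alone are %.0f km/s)\n",
           budget, 2.0 * vCruise);
}
```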
218,339 | Imagine I have a generation ship that is heading to a nearby star system say 10 light years away, the average lifespan for the crew is 150 years on Earth and is expected to increase by about 5 years with each generation. No cryonic suspension due to the ban on any form of suspended animation on human by law and many policies are in place to ensure the population demography on board is healthy across every generation, so we can now laser focus on the economic and science aspects of this one way trip.
I believe delta v is important when planning trips within the solar system since as the name suggests it is the changes in velocity when jumping between orbits, what about jumping between star systems? Are we going to factor delta v for all interstellar flights regardless? if so why? | 2021/11/30 | [
"https://worldbuilding.stackexchange.com/questions/218339",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/8400/"
] | Yes, it's relevant. For a few reasons:
1. If your delta v is 0, you're not going anywhere. Sure, this is a degenerate case, but if delta v were truly irrelevant, even that wouldn't matter. More practically, your delta v is going to need to be at least your star's escape velocity (barring exotic scenarios such as an extremely close approach by your target star).
2. Stars are not at rest with respect to one another. So at a minimum, you will require at least the delta v of the relative speeds of your origin and destination systems.
3. Even in generational ships, time to destination matters. Realistically speaking, half your delta v is basically your cruising speed (OK, this doesn't hold as you get up into really relativistic speeds, but in that case we're probably not talking generational ships). So delta v will directly relate to how long your journey will take. | For interstellar journeys at sub-light speeds, the size of the DV is largely irrelevant in terms of the time it takes to complete the journey.
---------------------------------------------------------------------------------------------------------------------------------------------
Using your example and assuming an upper velocity (which is not mentioned in your question, btw) of 0.1c, the journey is going to take approx **100 years** with the acceleration & deceleration phases added on.
From the perspective of the crew it doesn't really matter if the ship takes 3 years to reach its 'cruising' velocity or just 3 weeks. With the latter you end up with a journey of 100 years plus roughly 3 weeks. With the former it takes roughly 103 years (the ship is still covering distance while it ramps up, so each 3-year ramp only adds about 1.5 years over cruising the whole way). For all intents and purposes, putting the 'pedal to the metal' shaves a measly ~3% off the total travel time. Hardly a huge saving for a generation ship. And the saving only shrinks as the distance increases: double the distance the ship has to travel to 20 LY and the high-thrust option saves only about 1.5% on travel time over the low-thrust one. And on it goes.
Plus gentler acceleration is an easier engineering challenge, which you want wherever possible. *Everything* on a generation ship has to be engineered to last, because if something breaks down and can't be repaired with what you have on board, you're screwed. |
743,070 | Is there a good, concise guide to the various behaviors of the controls that inherit from System.Windows.Controls.Panel? Or just a WPF layout guide in general?
In particular, I'm interested in how the panels handle child objects that have width or height set to "Auto" and Horizontal or Vertical Alignment set to "Stretch". Some layouts cause the objects to fill the size of their container, others just cause them to size based on their own content. | 2009/04/13 | [
"https://Stackoverflow.com/questions/743070",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6054/"
] | See <http://msdn.microsoft.com/en-us/library/ms754152.aspx> for a start... it sounds like you may be particularly interested in the table in the "User Interface Panels" section. | I would recommend looking at the [MSDN entry about panels](http://msdn.microsoft.com/en-us/library/ms754152.aspx) and [this example in particular](http://msdn.microsoft.com/en-us/library/ms754152.aspx#Panels_custom_panel_elements) first. The methods MeasureOverride and ArrangeOverride handle all layout related logic. |
4,725 | Is there any significant performance trade-off between buttons wired with the internal vs external pull-up/down resistors? I see a lot of tutorials demonstrating the use of external resistors when wiring a simple button, like [this](http://www.arduino.cc/en/Tutorial/Pushbutton) and [this](http://arduino.cc/en/tutorial/button). But since the Arduino/AVR includes internal pull-up resistors, you can wire a button without any external resistor as demonstrated [here](http://arduino.cc/en/Tutorial/InputPullupSerial).
Is there any good reason to complicate your design with an extra resistor? | 2014/10/26 | [
"https://arduino.stackexchange.com/questions/4725",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/4478/"
] | If you are using the pin as an input in close proximity to the Arduino there is little reason to use an external pullup.
There are a number of factors to consider.
If the switch is some distance from the chip and/or in a noisy environment it will be more susceptible to interference as the internal pullups are high impedance (20kΩ to 150kΩ depending on model).
If the pin is used as both input & output the pullup can reduce the voltage swing.
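For the common pushbutton-close-to-the-board case, the internal pull-up needs no external parts at all; a minimal sketch (the pin number is arbitrary):

```cpp
// Button wired between pin 2 and GND; no external resistor needed.
const int BUTTON_PIN = 2;

void setup() {
    Serial.begin(9600);
    pinMode(BUTTON_PIN, INPUT_PULLUP);   // enable the internal pull-up
}

void loop() {
    // The pull-up holds the pin HIGH when idle; pressing the button pulls it LOW.
    if (digitalRead(BUTTON_PIN) == LOW) {
        Serial.println("button pressed");
    }
}
```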
In other applications e.g. Raspberry Pi the state of the pin between boot and software initialisation can be significant, but this should not be an issue with Arduino. | I think it depends on the application you're going for. A simple switch probably doesn't need a resistor, since it doesn't require the chip to provide any specific voltage or current.
If you're trying to run an LED or something that requires a specific amount of current or voltage, then you may be better off supplying the power connection & resistor rather than relying on the chip to provide the proper power to the device.
Another situation may be where you're approaching the maximum power output of the chip that you might change to external pull-up/down resistors. |
68,175,913 | Despite all the solutions I saw on the site, unfortunately my problem was not solved; I'm almost in despair.
[enter image description here](https://i.stack.imgur.com/M6MnO.png) | 2021/06/29 | [
"https://Stackoverflow.com/questions/68175913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15112278/"
] | I can see the word "مشمس" in your image, which means "Sunny", so you are not using an English regional format.
Your problem is probably because of your system's regional format.
Just set it to "English".
You can see details here: <https://stackoverflow.com/a/67554080> | The emulator is part of android-sdk-tools.
It helps you create virtual devices; you can download it later in Android Studio. |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | According to *Unearthed Arcana* (1985) for *Advanced Dungeons and Dragons*, “For druids of 16th level and above [… r]ather than spells, spell-like powers are acquired [including] the ability to alter his appearance at will. Appearance alteration is accomplished in 1 segment, with height and weight decrease/increase of 50% possible, apparent age from child to very old, and body and facial features of any human or humanoid sort. This alteration is non-magical, so it cannot be detected short of true seeing and the like” (17).
So in *Dungeons and Dragons 3.X* (and, subsequently, *Pathfinder*), this ability exists as a legacy of AD&D (and, probably, although I can't speak to it, 2nd Edition D&D). That answers why it exists *now*.
As to why it's existed for nearly 30 years, I vaguely remembered but couldn't find proof anywhere that Gygax believed Merlin was best represented by the druid class, and Merlin, according to sources as wide-ranging as Robert de Boron and Disney's *The Sword in the Stone*, just changes shape a lot.
Theoretically, it's the idea that a lot of folks are much more willing to take advice from strangers than they are from folks they know (c.f. consultants), and such an ability makes the druid the ultimate advisor. If the only thing that penetrates his actually-physically-altered disguise is an AD&D *true seeing* spell, the druid can totally walk into the king's court in disguise, spout some "ancient prophecy" that says that the king shouldn't raise taxes or go to war or whatever, and depart, Batman-style, when he's finished, with none at court knowing that it was really the king's druid who delivered the message.
If you don't like the at-will *alter self* superpower, it should be replaced with something of equal utility. I suggest at-will *locate object* or *misdirection* or *augury* 3/day ('cause at-will *augury* is stupid), with this last possibly serving the same function as *alter self* in the theory detailed above. | I have not heard of a historical/fictional druid that had a particular ability to change their face.
However, this seems like a fairly straightforward extension of the 3.5 Druid's earlier abilities - namely Wild Shape. A 13th-level Druid is an accomplished shapeshifter; they are used to changing their form and flesh into shapes other than their own.
This could even be tied to a 13th-level Druid suddenly having the realization that humanoids are not so different from animals at all.
I can imagine that the idea for the ability surfaced when someone asked the question: "Wait, my high-level Druid can change their face to resemble that of any kind of animal, but not to look like the face of another human?" |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | I think it was not uncommon for druids – and other mystical figures in folklore – to appear before others in magical disguise. Glamour, after all, is heavily associated with the fey, which are in turn tied to the natural world and the same sort of mythological background as druids. Further, as protectors of the natural world, druids have to assess potential “civilized” threats – appearing as a wanderer, sharing a campfire, gives you a good sense of whether the new people are going to be a problem.
Mostly, though, it just seems to play up the druids’ mystical angle. They *know things*, deep lore of the natural world but also about the comings and goings around them that they couldn’t possibly know. They have animals as eyes and ears but sometimes they need to see and hear for themselves. I think it is an appropriate, flavorful feature. And unlike most of the druid’s class features, I don’t think it’s overpowered.
Also, be careful about thinking too much about druids as being of or defending nature, because that was not really their historical or narrative role. The word “druid” itself means “oak-knower,” where oaks are symbols of all things ancient and deep. The druids were mystics, wise men, and priests. They drew their powers from the natural world, but much of their power was *knowledge* as much as it was magic (of course, at the time and in the myths, these were often the same thing). Any magic they had came from their knowledge of the nature of a world that had magic built into its very bones.
In role, they may have often been apart from society, but they were still very much human and very important to society. Their counsel was sought out in all things mystical and natural, which is to say everything that the common man did not understand. They were often highly political, kingmakers or rulers themselves as high priests. They were protectors of the old ways, which included human custom as much as it did the ancient natural world. | This is not going to be a very detailed answer, but here are ideas.
It could allow you to visit cities without revealing your identity (with some different clothing if needed). This could be used to get a sense of people's behavior, especially if you're a well known druid in the region, in a prince-disguised-as-beggar-trick way.
It could also serve as a defense mechanism pretty much anywhere, again possibly complemented by a change of clothes. It's hard to look for or follow someone who changes appearance.
In a sense, the above abilities are usually seen in fiction with a magic user (not just druids I think) turning into animals to observe discreetly or hide. Doing the same with humanoids seems just as valid.
And if you're the kind of druid actively protecting a natural location, you could turn into someone else to test the value of someone trespassing. The classic "will you help this old crone?" test. |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | I have not heard of a historical/fictional druid that had a particular ability to change their face.
However, this seems like a fairly straightforward extension of the 3.5 Druid's earlier abilities - namely Wild Shape. A 13th-level Druid is an accomplished shapeshifter; they are used to changing their form and flesh into shapes other than their own.
This could even be tied to a 13th-level Druid suddenly having the realization that humanoids are not so different from animals at all.
I can imagine that the idea for the ability surfaced when someone asked the question: "Wait, my high-level Druid can change their face to resemble that of any kind of animal, but not to look like the face of another human?" | I don't know much about real stories of historical druids, but I can clearly remember some figure a protector of nature that used to live among humans, conceiling his true aspect, to watch on them.
I can't really remember which book or movie featured this story but the trope of a disguised watcher of events is familiar to me, to the point of instantly recognizing it every time I look at the *A Thousand Faces* feature.
I don't think this kind of character is any good as a PC unless it's a story about this sort of things, but it might be awesome for some higher-level NPC (for it's quite useless by the level it comes into play). |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | According to *Unearthed Arcana* (1985) for *Advanced Dungeons and Dragons*, “For druids of 16th level and above [… r]ather than spells, spell-like powers are acquired [including] the ability to alter his appearance at will. Appearance alteration is accomplished in 1 segment, with height and weight decrease/increase of 50% possible, apparent age from child to very old, and body and facial features of any human or humanoid sort. This alteration is non-magical, so it cannot be detected short of true seeing and the like” (17).
So in *Dungeons and Dragons 3.X* (and, subsequently, *Pathfinder*), this ability exists as a legacy of AD&D (and, probably, although I can't speak to it, 2nd Edition D&D). That answers why it exists *now*.
As to why it's existed for nearly 30 years, I vaguely remembered but couldn't find proof anywhere that Gygax believed Merlin was best represented by the druid class, and Merlin, according to sources as wide-ranging as Robert de Boron and Disney's *The Sword in the Stone*, just changes shape a lot.
Theoretically, it's the idea that a lot of folks are much more willing to take advice from strangers than they are from folks they know (c.f. consultants), and such an ability makes the druid the ultimate advisor. If the only thing that penetrates his actually-physically-altered disguise is an AD&D *true seeing* spell, the druid can totally walk into the king's court in disguise, spout some "ancient prophecy" that says that the king shouldn't raise taxes or go to war or whatever, and depart, Batman-style, when he's finished, with none at court knowing that it was really the king's druid who delivered the message.
If you don't like the at-will *alter self* superpower, it should be replaced with something of equal utility. I suggest at-will *locate object* or *misdirection* or *augury* 3/day ('cause at-will *augury* is stupid), with this last possibly serving the same function as *alter self* in the theory detailed above. | This is not going to be a very detailed answer, but here are ideas.
It could allow you to visit cities without revealing your identity (with some different clothing if needed). This could be used to get a sense of people's behavior, especially if you're a well known druid in the region, in a prince-disguised-as-beggar-trick way.
It could also serve as a defense mechanism pretty much anywhere, again possibly complemented by a change of clothes. It's hard to look for or follow someone who changes appearance.
In a sense, the above abilities are usually seen in fiction with a magic user (not just druids I think) turning into animals to observe discreetly or hide. Doing the same with humanoids seems just as valid.
And if you're the kind of druid actively protecting a natural location, you could turn into someone else to test the value of someone trespassing. The classic "will you help this old crone?" test. |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | I have not heard of a historical/fictional druid that had a particular ability to change their face.
However, this seems like a fairly straightforward extension of the 3.5 Druid's earlier abilities - namely Wild Shape. A 13th-level Druid is an accomplished shapeshifter; they are used to changing their form and flesh into shapes other than their own.
This could even be tied to a 13th-level Druid suddenly having the realization that humanoids are not so different from animals at all.
I can imagine that the idea for the ability surfaced when someone asked the question: "Wait, my high-level Druid can change their face to resemble that of any kind of animal, but not to look like the face of another human?" | I think you are looking at the ability wrong. Think about nature. There are a ton of animals which use deception as a means of defense or to hunt.
Viceroy butterflies mimic Monarch butterflies because one tastes bad. Stick insects pretend to be sticks to keep predators away. Anglerfish dangle a little lure in front of their faces to attract prey.
If you stop thinking of A Thousand Faces as "Deception" and start thinking of it as "Camouflage" you might find the ability to be more helpful. |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | Nature
------
>
> What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural".
>
>
>
I think a big part of your confusion is that you're coming at nature from a very modern perspective. Woods are safe, bright places fit for hiking and camping. "All natural" foods are free from contamination and uncertainty.
But a lot of older stories paint the wilds in a very different light. Dark, unknown, uncertain, capricious, dangerous. If you venture too far from the safety of civilization, you have no idea what you'll encounter, or whether you'll be able to find your way home.
The wilds are filled with hidden dangers: Venomous snakes, hidden predators, dead-drops, poisonous plants, witches, giants, dragons, and other devious things that mean you harm.
Look at British folklore (where the word Druid originates), for example: Pretty much everything out there is [some kind of shapeshifter that means you harm](http://en.wikipedia.org/wiki/Shapeshifting#British_and_Irish).
Look also at the "nature spirits" in D&D and Pathfinder. The Fey have "connections to nature" and are associated with illusion, charms, and other forms of trickery.
Druids
------
>
> Pomponius Mela is the first author who says that the druids' instruction was secret, and was carried on in caves and forests.
>
>
>
[(source)](http://en.wikipedia.org/wiki/Druid#Societal_role_and_training)
>
> The Druids were initiates of a secret school that existed in their midst. This school, which closely resembled the Bacchic and Eleusinian Mysteries of Greece or the Egyptian rites of Isis and Osiris, is justly designated the Druidic Mysteries. There has been much speculation concerning the secret wisdom that the Druids claimed to possess. Their secret teachings were never written, but were communicated orally to specially prepared candidates. Robert Brown, 32°, is of the opinion that the British priests secured their information from Tyrian and Phœnician navigators who, thousands of years before the Christian Era, established colonies in Britain and Gaul while searching for tin.
>
>
>
[(source)](http://www.sacred-texts.com/eso/sta/sta04.htm)
In addition to the nature angle, historical druids are typically viewed as mysterious in popular culture: A secretive order, protecting its own mysteries. | I have not heard of a historical/fictional druid that had a particular ability to change their face.
However, this seems like a fairly straightforward extension of the 3.5 Druid's earlier abilities - namely Wild Shape. A 13th-level Druid is an accomplished shapeshifter; they are used to changing their form and flesh into shapes other than their own.
This could even be tied to a 13th-level Druid suddenly having the realization that humanoids are not so different from animals at all.
I can imagine that the idea for the ability surfaced when someone asked the question: "Wait, my high-level Druid can change their face to resemble that of any kind of animal, but not to look like the face of another human?" |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | I have not heard of a historical/fictional druid that had a particular ability to change their face.
However, this seems like a fairly straightforward extension of the 3.5 Druid's earlier abilities - namely Wild Shape. A 13th-level Druid is an accomplished shapeshifter; they are used to changing their form and flesh into shapes other than their own.
This could even be tied to a 13th-level Druid suddenly having the realization that humanoids are not so different from animals at all.
I can imagine that the idea for the ability surfaced when someone asked the question: "Wait, my high-level Druid can change their face to resemble that of any kind of animal, but not to look like the face of another human?" | This is not going to be a very detailed answer, but here are ideas.
It could allow you to visit cities without revealing your identity (with some different clothing if needed). This could be used to get a sense of people's behavior, especially if you're a well known druid in the region, in a prince-disguised-as-beggar-trick way.
It could also serve as a defense mechanism pretty much anywhere, again possibly complemented by a change of clothes. It's hard to look for or follow someone who changes appearance.
In a sense, the above abilities are usually seen in fiction with a magic user (not just druids I think) turning into animals to observe discreetly or hide. Doing the same with humanoids seems just as valid.
And if you're the kind of druid actively protecting a natural location, you could turn into someone else to test the value of someone trespassing. The classic "will you help this old crone?" test. |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | I think it was not uncommon for druids – and other mystical figures in folklore – to appear before others in magical disguise. Glamour, after all, is heavily associated with the fey, which are in turn tied to the natural world and the same sort of mythological background as druids. Further, as protectors of the natural world, druids have to assess potential “civilized” threats – appearing as a wanderer, sharing a campfire, gives you a good sense of whether the new people are going to be a problem.
Mostly, though, it just seems to play up the druids’ mystical angle. They *know things*, deep lore of the natural world but also about the comings and goings around them that they couldn’t possibly know. They have animals as eyes and ears but sometimes they need to see and hear for themselves. I think it is an appropriate, flavorful feature. And unlike most of the druid’s class features, I don’t think it’s overpowered.
Also, be careful about thinking too much about druids as being of or defending nature, because that was not really their historical or narrative role. The word “druid” itself means “oak-knower,” where oaks are symbols of all things ancient and deep. The druids were mystics, wise men, and priests. They drew their powers from the natural world, but much of their power was *knowledge* as much as it was magic (of course, at the time and in the myths, these were often the same thing). Any magic they had came from their knowledge of the nature of a world that had magic built into its very bones.
In role, they may have often been apart from society, but they were still very much human and very important to society. Their counsel was sought out in all things mystical and natural, which is to say everything that the common man did not understand. They were often highly political, kingmakers or rulers themselves as high priests. They were protectors of the old ways, which included human custom as much as it did the ancient natural world. | I think you are looking at the ability wrong. Think about nature. There are a ton of animals which use deception as a means of defense or to hunt.
Viceroy butterflies mimic Monarch butterflies because one tastes bad. Stick insects pretend to be sticks to keep predators away. Anglerfish dangle a little lure in front of their faces to attract prey.
If you stop thinking of A Thousand Faces as "Deception" and start thinking of it as "Camouflage" you might find the ability to be more helpful. |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | Nature
------
>
> What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural".
>
>
>
I think a big part of your confusion is that you're coming at nature from a very modern perspective. Woods are safe, bright places fit for hiking and camping. "All natural" foods are free from contamination and uncertainty.
But a lot of older stories paint the wilds in a very different light. Dark, unknown, uncertain, capricious, dangerous. If you venture too far from the safety of civilization, you have no idea what you'll encounter, or whether you'll be able to find your way home.
The wilds are filled with hidden dangers: Venomous snakes, hidden predators, dead-drops, poisonous plants, witches, giants, dragons, and other devious things that mean you harm.
Look at British folklore (where the word Druid originates), for example: Pretty much everything out there is [some kind of shapeshifter that means you harm](http://en.wikipedia.org/wiki/Shapeshifting#British_and_Irish).
Look also at the "nature spirits" in D&D and Pathfinder. The Fey have "connections to nature" and are associated with illusion, charms, and other forms of trickery.
Druids
------
>
> Pomponius Mela is the first author who says that the druids' instruction was secret, and was carried on in caves and forests.
>
>
>
[(source)](http://en.wikipedia.org/wiki/Druid#Societal_role_and_training)
>
> The Druids were initiates of a secret school that existed in their midst. This school, which closely resembled the Bacchic and Eleusinian Mysteries of Greece or the Egyptian rites of Isis and Osiris, is justly designated the Druidic Mysteries. There has been much speculation concerning the secret wisdom that the Druids claimed to possess. Their secret teachings were never written, but were communicated orally to specially prepared candidates. Robert Brown, 32°, is of the opinion that the British priests secured their information from Tyrian and Phœnician navigators who, thousands of years before the Christian Era, established colonies in Britain and Gaul while searching for tin.
>
>
>
[(source)](http://www.sacred-texts.com/eso/sta/sta04.htm)
In addition to the nature angle, historical druids are typically viewed as mysterious in popular culture: A secretive order, protecting its own mysteries. | I think you are looking at the ability wrong. Think about nature. There are a ton of animals which use deception as a means of defense or to hunt.
Viceroy butterflies mimic Monarch butterflies because one tastes bad. Stick insects pretend to be sticks to keep predators away. Anglerfish dangle a little lure in front of their faces to attract prey.
If you stop thinking of A Thousand Faces as "Deception" and start thinking of it as "Camouflage" you might find the ability to be more helpful. |
27,873 | I played a D&D 3.5 campaign years ago, and one PC was a druid. The player did a pretty awesome job as a spy thanks to the multiple divination spells, Wild Shape and especially A Thousand Faces:
>
> [**A Thousand Faces (Su)**](http://www.d20srd.org/srd/classes/druid.htm#aThousandFaces)
>
> At 13th level, a druid gains the ability to change her appearance at will, as if using the [*disguise self*](http://www.d20srd.org/srd/spells/disguiseSelf.htm) spell, but only while in her normal form. This affects the druid’s body but not her possessions. It is not an illusory effect, but a minor physical alteration of the druid’s appearance, within the limits described for the spell.
>
>
>
I'm now planning to run a Pathfinder campaign (albeit with a custom world) with a lot of druids and I just don't understand how Thousand Faces fits for a "protector of the wild, lover of Nature" kind of guy (and I've never seen this ability used since this spy-druid), so I'm considering dropping A Thousand Faces.
Did you ever read something justifying this ability, in D&D or Pathfinder material or anywhere else, even real stories on druids?
**EDIT :** What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural". | 2013/08/12 | [
"https://rpg.stackexchange.com/questions/27873",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/8150/"
] | Nature
------
>
> What bugs me is that I can't see proper use of this ability besides deception, and deception doesn't feel too "natural".
>
>
>
I think a big part of your confusion is that you're coming at nature from a very modern perspective. Woods are safe, bright places fit for hiking and camping. "All natural" foods are free from contamination and uncertainty.
But a lot of older stories paint the wilds in a very different light. Dark, unknown, uncertain, capricious, dangerous. If you venture too far from the safety of civilization, you have no idea what you'll encounter, or whether you'll be able to find your way home.
The wilds are filled with hidden dangers: Venomous snakes, hidden predators, dead-drops, poisonous plants, witches, giants, dragons, and other devious things that mean you harm.
Look at British folklore (where the word Druid originates), for example: Pretty much everything out there is [some kind of shapeshifter that means you harm](http://en.wikipedia.org/wiki/Shapeshifting#British_and_Irish).
Look also at the "nature spirits" in D&D and Pathfinder. The Fey have "connections to nature" and are associated with illusion, charms, and other forms of trickery.
Druids
------
>
> Pomponius Mela is the first author who says that the druids' instruction was secret, and was carried on in caves and forests.
>
>
>
[(source)](http://en.wikipedia.org/wiki/Druid#Societal_role_and_training)
>
> The Druids were initiates of a secret school that existed in their midst. This school, which closely resembled the Bacchic and Eleusinian Mysteries of Greece or the Egyptian rites of Isis and Osiris, is justly designated the Druidic Mysteries. There has been much speculation concerning the secret wisdom that the Druids claimed to possess. Their secret teachings were never written, but were communicated orally to specially prepared candidates. Robert Brown, 32°, is of the opinion that the British priests secured their information from Tyrian and Phœnician navigators who, thousands of years before the Christian Era, established colonies in Britain and Gaul while searching for tin.
>
>
>
[(source)](http://www.sacred-texts.com/eso/sta/sta04.htm)
In addition to the nature angle, historical druids are typically viewed as mysterious in popular culture: A secretive order, protecting its own mysteries. | I don't know much about real stories of historical druids, but I can clearly remember some figure, a protector of nature, who used to live among humans, concealing his true aspect to watch over them.
I can't really remember which book or movie featured this story, but the trope of a disguised watcher of events is familiar to me, to the point of instantly recognizing it every time I look at the *A Thousand Faces* feature.
I don't think this kind of character is any good as a PC unless it's a story about this sort of thing, but it might be awesome for some higher-level NPC (for it's quite useless by the level it comes into play). |
195,949 | I just installed the latest security update on Mac OS X (installed on 2-10-2010). On restart my Mac booted in Windows 7, which I had installed previously and was set not to boot by default.
I tried to restart holding the alt key, and selected the Mac OS X partition, but still the Windows 7 partition boots. It does not matter what partition I choose, Windows 7 always boots.
I took a look in the OS X partition and noticed that the admin home folder is empty, or at least Windows is not showing any files there. There is another user on OS X and I can see their files no problem.
This has me stumped; has anyone any suggestions for finding a solution? | 2010/10/05 | [
"https://superuser.com/questions/195949",
"https://superuser.com",
"https://superuser.com/users/15184/"
] | If you install Virtual PC on your laptop, you are more or less good to go. Just copy the two files across...
Be aware that the Virtual PC console has "gone", and has been replaced by an even naffer Windows Explorer window.
You can install Windows 7 Virtual PC on versions other than the ones which support the XP Mode extensions to Virtual PC (at least, I have installed it on Windows 7 Home Premium), but Microsoft doesn't make it easy. If you go to the [Virtual PC and XP Mode](http://www.microsoft.com/windows/virtual-pc/download.aspx) download page, you can enter a version that does allow one to download XP Mode, and download and install "Windows Virtual PC". | In order to run Windows 7's Virtual PC, you'll need Professional, Enterprise or Ultimate. Home Premium won't work.
[Virtual PC 2007 SP1](http://www.microsoft.com/downloads/en/details.aspx?FamilyId=28C97D22-6EB8-4A09-A7F7-F6C7A1F000B5&displaylang=en) should work on Windows 7, but it's not "officially" supported.
[VirtualBox](http://www.virtualbox.org/) is another option if this doesn't work for you, though I'm not familiar with the conversion process. |
7,724 | I'd like to run the new Unity interface from Ubuntu 10.10 inside of a VirtualBox VM (host is Ubuntu 10.04). Is that possible? Thanks! | 2010/10/16 | [
"https://askubuntu.com/questions/7724",
"https://askubuntu.com",
"https://askubuntu.com/users/4179/"
] | So you want to help test the Ubuntu distribution that is customised specifically for netbooks but don’t have a netbook to test it on? That’s not a problem. What you need is a virtual machine and an Ubuntu Netbook Remix (UNR) image.
**Step 1: Getting the image**
<http://www.ubuntu.com/netbook/get-ubuntu/download>
**Installing a Virtual Machine**
>
> sudo apt-get install virtualbox-ose
>
>
>
**Setting up the Virtual Machine**
Virtualbox -> New -> Next ->
Name: UbuntuNetbook
Operating System: Linux
Version: Ubuntu
-> Next -> Memory: Base memory size: 512 Mb
Note: Use the amount of RAM for the virtual machine that you can afford. Linux requires less memory to run than does Windows, but the amount of RAM that you dedicate to the virtual machine in this step will not be available to the Windows host. On my laptop, I have 3 Gb RAM, so I dedicate 1024 Mb (1 Gb) to the virtual machine in this step and leave 2 Gb for Windows. You should always leave at least 1 Gb RAM for Windows (or it will run painfully slowly). Linux is able to run with only 512 Mb in server mode or 1 Gb in desktop mode (perhaps even less).
-> Next -> Virtual Hard Disk ->
Boot Hard Disk (Primary Master): (ticked)
Create new hard disk: (ticked)
-> Next -> Next -> Hard disk storage type:Dynamically expanding storage: (ticked)
-> Next -> Virtual Disk Location and Size:
Once this is downloaded, you want to make sure your virtual machine image will boot into UNR when it first runs. To do this, select the "Settings" icon from the VirtualBox screen (first make sure you have selected your image in the left-hand column).
What you are presented with now is a list of options for your virtual machine image. The one we are interested in is CD/DVD-ROM. Select this option, then select the ISO and browse to where you downloaded the UNR iso image.
P.S.:
There are two VirtualBox editions. If you are interested in using VirtualBox -- either for private or business use -- you have the choice between two versions:
* The full VirtualBox package is available in binary (executable) form free of charge from the Downloads page. This version is free for personal use and evaluation under the terms of the VirtualBox Personal Use and Evaluation License.
Closed-source features
The following list shows the enterprise features that are only present in the closed-source edition. Note that this list may change over time as some of these features will eventually be made available with the open-source version as well.
1. Remote Display Protocol (RDP) Server
This component implements a complete RDP server on top of the virtual hardware and allows users to connect to a virtual machine remotely using any RDP compatible client.
2. USB support
VirtualBox implements a virtual USB controller and supports passing through USB 1.1 and USB 2.0 devices to virtual machines.
3. SB over RDP
This is a combination of the RDP server and USB support allowing users to make USB devices available to virtual machines running remotely.
* The VirtualBox Open Source Edition (OSE) is the one that has been released under the GPL and comes with complete source code. It is functionally equivalent to the full VirtualBox package, except for a few features that primarily target enterprise customers. This gives us a chance to generate revenue to fund further development of VirtualBox. The problem with this version:
Open-source features
The following list shows the features that are only present in the open-source edition. The licensing conditions of the necessary libraries prevent inclusion in the full-featured product.
1. Virtual Network Computing (VNC) Server
This component implements a complete VNC server on top of the virtual hardware and allows users to connect to a virtual machine remotely using any VNC client.
**Install VirtualBox (non-OSE)**
Follow these instructions:
<http://www.virtualbox.org/wiki/Linux_Downloads> | With VirtualBox 4.0 it is now possible to test Unity under Ubuntu *11.04*.
The 5-step howto is [here](http://www.webupd8.org/2010/12/how-to-test-ubuntu-1104-with-unity-in.html).
I did not try to run Unity under 10.10 in a VM, but if you still want to, you should have better luck with the latest VirtualBox release.
7,724 | I'd like to run the new Unity interface from Ubuntu 10.10 inside of a VirtualBox VM (host is Ubuntu 10.04). Is that possible? Thanks! | 2010/10/16 | [
"https://askubuntu.com/questions/7724",
"https://askubuntu.com",
"https://askubuntu.com/users/4179/"
] | So, to answer the title of this article:
"How to create a Ubuntu Netbook 10.10 Unity VM under VirtualBox"
You can't. The Unity interface can't run in a VirtualBox guest. You can, however, use the default gnome shell common to the regular Ubuntu distribution -- but that is not trying out UNR... | With VirtualBox 4.0 it is now possible to test Unity under Ubuntu *11.04*.
The 5-step howto is [here](http://www.webupd8.org/2010/12/how-to-test-ubuntu-1104-with-unity-in.html).
I did not try to run Unity under 10.10 in a VM, but if you still want to, you should have better luck with the latest VirtualBox release.
839,391 | I've tried to find a solution, but I haven't had much luck. The photo says it all:

I'm supposed to have 4 cores, but they're not showing up. What gives? msconfig advanced boot options also only show one processor. | 2014/11/13 | [
"https://superuser.com/questions/839391",
"https://superuser.com",
"https://superuser.com/users/224330/"
] | There are a number of things that can cause this, and here are several possible fixes.
---
ACPI bugs in your motherboard's BIOS can cause this problem. Ensure that you are running the latest BIOS/firmware for your motherboard.
On that note, did you upgrade Windows recently? If so, then you may need to update your chipset drivers.
---
In msconfig, *uncheck* the box to set the number of CPU cores. Then shut down the computer.

---
A third party utility called EasyBCD can reset the number of CPUs Windows thinks it has. Despite it apparently controlling exactly the same thing, this has been known to work when the msconfig setting above failed.
1. Download EasyBCD from a reliable web site (such as [its official site](https://neosmart.net/EasyBCD/)).
2. Click Advanced Settings, then go to the Developer tab.
3. Set the option [Limit Windows to xx CPUs](https://neosmart.net/wiki/easybcd/basics/advanced-settings/#Limit_Windows_to_xx_CPUs) to 0 (zero).
4. Shut down the computer, so that it powers off.
When you power on the computer again, Windows should figure out how many CPUs you have and fix itself.
---
Another possible fix is to *delete* your CPU from Device Manager, and then restart your computer. This causes Windows to redetect your CPU, hopefully correctly.
While you're in Device Manager, see if anything else is reporting a problem, and fix it. | Well, turns out it wasn't as bad of an issue as I thought. After I installed the chipset drivers for my motherboard and rebooted, the cores showed up.
Sometimes the most obvious answer is right in front of me. |
839,391 | I've tried to find a solution, but I haven't had much luck. The photo says it all:

I'm supposed to have 4 cores, but they're not showing up. What gives? msconfig advanced boot options also only show one processor. | 2014/11/13 | [
"https://superuser.com/questions/839391",
"https://superuser.com",
"https://superuser.com/users/224330/"
] | There are a number of things that can cause this, and here are several possible fixes.
---
ACPI bugs in your motherboard's BIOS can cause this problem. Ensure that you are running the latest BIOS/firmware for your motherboard.
On that note, did you upgrade Windows recently? If so, then you may need to update your chipset drivers.
---
In msconfig, *uncheck* the box to set the number of CPU cores. Then shut down the computer.

---
A third party utility called EasyBCD can reset the number of CPUs Windows thinks it has. Despite it apparently controlling exactly the same thing, this has been known to work when the msconfig setting above failed.
1. Download EasyBCD from a reliable web site (such as [its official site](https://neosmart.net/EasyBCD/)).
2. Click Advanced Settings, then go to the Developer tab.
3. Set the option [Limit Windows to xx CPUs](https://neosmart.net/wiki/easybcd/basics/advanced-settings/#Limit_Windows_to_xx_CPUs) to 0 (zero).
4. Shut down the computer, so that it powers off.
When you power on the computer again, Windows should figure out how many CPUs you have and fix itself.
---
Another possible fix is to *delete* your CPU from Device Manager, and then restart your computer. This causes Windows to redetect your CPU, hopefully correctly.
While you're in Device Manager, see if anything else is reporting a problem, and fix it. | This is for anyone having the problem like I was, try this.
I had my drivers installed and was still only showing 1 view.
Right click the graph, hover over change graph to, then select logical processors. |
839,391 | I've tried to find a solution, but I haven't had much luck. The photo says it all:

I'm supposed to have 4 cores, but they're not showing up. What gives? msconfig advanced boot options also only show one processor. | 2014/11/13 | [
"https://superuser.com/questions/839391",
"https://superuser.com",
"https://superuser.com/users/224330/"
] | Well, turns out it wasn't as bad of an issue as I thought. After I installed the chipset drivers for my motherboard and rebooted, the cores showed up.
Sometimes the most obvious answer is right in front of me. | This is for anyone having the same problem I was; try this.
I had my drivers installed and was still only seeing 1 view.
Right-click the graph, hover over "Change graph to", then select "Logical processors". |
1,282 | Let's say you want to temporarily exclude a **.config** file from being applied as a patch by Sitecore. Normally, you would either:
* Delete the file;
* Or rename it to have an extension different from **.config**.
Is there any other way of disabling a **.config** file? I don't want to change the file name or its contents (due to deployment process intricacies). | 2016/10/13 | [
"https://sitecore.stackexchange.com/questions/1282",
"https://sitecore.stackexchange.com",
"https://sitecore.stackexchange.com/users/104/"
] | You can make your configuration file hidden, which will cause Sitecore to NOT read that configuration file.
But I don't know how to set the hidden attribute automatically from the deployment process.
One option would be to manually set the file properties to hidden on the server/instance and then stop the existing file deployment to that folder. (But this will also introduce a problem in case you change any configuration in those files.) | Wrap the contents in comment tags. Or you could drop another patch file in below it to overwrite that patch. Maybe those are obvious, but I figured I would throw them out anyway. |
48,413,592 | I am using Debug.Assert in a .NET Core 2.0 C# console application and was surprised to find that it only silently shows "DEBUG ASSERTION FAILS" in the output window without breaking into the debugger or showing any message box at all.
How can I bring this common behavior back in .NET Core 2.0? | 2018/01/24 | [
"https://Stackoverflow.com/questions/48413592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1635450/"
] | Note that for an **infinite** rectangular "tube" you can reduce the problem to the 2D case: just project all points and edges onto the plane perpendicular to the tube.
Now you have to look for the intersection of polylines (polygons) with a rectangle (axis-aligned if you use points on the generatrices of the tube as base points and vectors) - this task is definitely simpler.
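For the convex case discussed next, one concrete way to run that 2D check is a separating-axis test. A minimal Python sketch (my illustration, not from the answer; polygons are assumed to be lists of (x, y) vertices in order):

```python
def sat_overlap(poly_a, poly_b):
    """Separating Axis Theorem test for two convex 2D polygons."""
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            # Each edge normal is a candidate separating axis.
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            ax, ay = -(y2 - y1), x2 - x1
            proj_a = [px * ax + py * ay for px, py in poly_a]
            proj_b = [px * ax + py * ay for px, py in poly_b]
            if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
                return False  # found a separating axis, so no overlap
    return True
```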
If your shapes are convex, the projected polygons are convex too and the SAT method (Separating Axis Theorem) is very good (of course this is not true for the links between shapes; they should be treated separately). | Continuing MBo's answer, you can find the useful outline of the B shapes by taking the (2D) convex hull of the vertices, which is a convex polygon.
<http://www.algorithmist.com/index.php/Monotone_Chain_Convex_Hull>
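As a rough illustration of that step (names are hypothetical, not from the answer), a Python version of the monotone chain hull:

```python
def convex_hull(points):
    """Monotone chain: returns the CCW convex hull of a set of (x, y) points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates
```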
Then apply the Sutherland-Hodgman clipping algorithm and see if a non-empty intersection remains.
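A compact sketch of that clipping step (illustration only; assumes the clip polygon is convex and both polygons are CCW lists of (x, y) vertices):

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` against convex CCW polygon `clip`."""
    def inside(p, a, b):  # left of (or on) the directed edge a -> b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
    def intersect(p, q, a, b):  # segment pq crossed with the infinite line ab
        den = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
        t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / den
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        polygon, output = output, []
        if not polygon:
            break  # fully clipped away: no intersection remains
        s = polygon[-1]
        for p in polygon:
            if inside(p, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, p, a, b))
                output.append(p)
            elif inside(s, a, b):
                output.append(intersect(s, p, a, b))
            s = p
    return output  # non-empty means the shapes overlap
```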
<https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm> |
52,870,205 | This is more a theory question than anything else.
I've been tasked with creating a website for a client that involves users looking up an item x. A JSON file that holds all the item data will be searched, and the resulting data about that item is presented.
I'm using React as my front end for this, so I'm guessing I need to use Express as my back end? What are all the packages needed to make this work, and if anyone knows of a guide or tutorial or even an npm package that gives this basic infrastructure it would be hugely appreciated.
I've managed to get this working all client-side, but I'm worried about efficiency: the data I'm currently using is only 19k lines long and is just a subset of the full data, so I don't think this is going to work when the subset JSON is replaced with the full thing.
I also have the data stored in an Excel sheet, if that would be better for the server side than JSON? | 2018/10/18 | [
"https://Stackoverflow.com/questions/52870205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4983217/"
] | First, you have to think about your project architecture. The architecture is mainly responsible for which technology you will use. From your description of the project, Node.js (Express) and MongoDB are a good fit for you.
If you want to work with Express and MongoDB, you need some npm packages such as express, morgan, body-parser, and mongoose, among others.
Node.js is also a suitable back-end technology for React.
You can follow [this](https://www.youtube.com/watch?v=WDrU305J1yw) tutorial to get the basic concepts.
Thank you :) | I think you need to store your data in a MongoDB database; for that you need some packages:
1- express for the server side
2- mongoose to deal with MongoDB
3- body-parser to handle POST requests from the user
Those are the basic packages you need. In case there will be authentication & authorization in your application, you will need more packages:
passport & passport-local & passport-local-mongoose and express-session. |
433,948 | Is saying a cache is a special kind of buffer correct? They both perform similar functions, but is there some underlying difference that I am missing? | 2012/06/07 | [
"https://superuser.com/questions/433948",
"https://superuser.com",
"https://superuser.com/users/89126/"
] | From Wikipedia's article on [data buffers](http://en.wikipedia.org/wiki/Data_buffer):
>
> a buffer is a region of a physical memory storage used to temporarily hold data while it is being moved from one place to another
>
>
>
A **buffer** ends up cycling through and holding every single piece of data that is transmitted from one storage location to another (like when using a circular buffer in audio processing). A buffer allows just that - a "buffer" of data before and after your current position in the data stream.
Indeed, there are some common aspects of a buffer and a cache. However, a cache in the conventional sense usually does *not* store all of the data when it's being moved from place to place (e.g. the CPU cache).
The purpose of a **[cache](http://en.wikipedia.org/wiki/Cache_%28computing%29)** is to store data in a transparent way, such that just enough data is cached so that the remaining data can be transferred without any performance penalty. In this context, the cache only "pre-fetches" a small amount of data (depending on the transfer rates, cache sizes, etc...).
The main difference is that a buffer will eventually have held all of the data. Conversely, a cache may have held all, some, or none of the data (depending on the design). However, a cache is accessed as if you were directly accessing the data in the first place - what exactly gets cached is transparent to the "user" of the cache.
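To make the contrast concrete, here's a tiny Python sketch (my illustration, not part of the original answer): a deque used as a FIFO buffer drains each item exactly once, while a small LRU cache transparently answers repeated lookups and falls back to the source on a miss:

```python
from collections import OrderedDict, deque

# Buffer: every chunk passes through once, in order, and is then gone.
buf = deque()
buf.append("chunk-1")            # producer side
buf.append("chunk-2")
first = buf.popleft()            # consumer side; one use, then discarded

# Cache: accessed *as if* it were the data source itself.
class LRUCache:
    def __init__(self, capacity, load):
        self.capacity = capacity
        self.load = load                    # fallback to the real source
        self.data = OrderedDict()
    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)      # mark as recently used
            return self.data[key]
        value = self.load(key)              # miss: go to the source
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
        return value

cache = LRUCache(2, load=lambda k: k.upper())  # hypothetical slow source
cache.get("a"); cache.get("a")  # the second call is served from the cache
```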
---
The difference is in the *interface*. When you're using a cache to access a data source, you use it as if the cache **is** the data source - you can access every part of the data source through the cache, and the cache will determine where the data comes from (the cache itself, or the source). The cache itself determines what parts of the data to preload (usually just the beginning, but sometimes all), while the [cache replacement algorithm](http://en.wikipedia.org/wiki/Cache_algorithms) in use determines what/when things are removed from the cache. The best example of this, aside from the [CPU cache](http://en.wikipedia.org/wiki/CPU_cache) itself, is the [prefetcher](http://en.wikipedia.org/wiki/Prefetcher)/[readahead](http://en.wikipedia.org/wiki/Readahead). Both load the parts of data they think you will use most into memory, and revert to the hard drive if something isn't cached.
Conversely, a buffer can't be used to instantaneously move your location in the data stream, unless the new part has already been moved to the buffer. To do so would require the buffer to relocate (given the new location exceeds the buffer length), effectively requiring you to "restart" the buffer from a new location. The best example of this is moving the slider in a Youtube video.
Another good example of a buffer is playing audio back in Winamp. Since audio files need to be decoded by the CPU, it takes some time between when the song is read in, to when the audio is processed, to when it's sent to your sound card. Winamp will buffer some of the audio data, so that there is enough audio data already processed to avoid any "lock-ups" (i.e. the CPU is always preparing the audio you'll hear in a few hundred milliseconds, it's never real-time; what you hear comes from the buffer, which is what the CPU prepared in the past). | It would be more accurate to say that a cache is a particular usage pattern of a buffer, that implies multiple uses of the same data. Most uses of "buffer" imply that the data will be drained or discarded after a single use (although this isn't necessarily the case), whereas "cache" implies that the data will be reused multiple times. Caching also often implies that the data is stored as it is also being simultaneously used, although this isn't necessarily the case (as in pre-fetching and the like), whereas buffering implies that the data is being stored up for later use.
There is certainly a large overlap in both implementation and usage, however. |
433,948 | Is saying a cache is a special kind of buffer correct? They both perform similar functions, but is there some underlying difference that I am missing? | 2012/06/07 | [
"https://superuser.com/questions/433948",
"https://superuser.com",
"https://superuser.com/users/89126/"
] | It would be more accurate to say that a cache is a particular usage pattern of a buffer, that implies multiple uses of the same data. Most uses of "buffer" imply that the data will be drained or discarded after a single use (although this isn't necessarily the case), whereas "cache" implies that the data will be reused multiple times. Caching also often implies that the data is stored as it is also being simultaneously used, although this isn't necessarily the case (as in pre-fetching and the like), whereas buffering implies that the data is being stored up for later use.
There is certainly a large overlap in both implementation and usage, however. | One important difference between cache and buffer is:
Buffer is a part of the primary memory. They are structures present and accessed from the primary memory (RAM).
On the other hand, cache is a separate physical memory in a computer's memory hierarchy.
A buffer is also sometimes called a buffer cache. This name stresses the fact that the use of a buffer is similar to that of a cache, i.e. to store data, while the difference lies in the context of its usage.
Buffers are used for temporarily storing data, while the data is moved from one object to another.
EX: when a video is moved from the Internet onto our PC for display, buffers are used to store the frames of the video which would be displayed next. (This increases the QoS, as the video would run smoothly after a successful buffering process.)
EX: another example is the scenario when we write data onto our files. The newly written data is not copied to the secondary memory instantaneously. The changes made are stored in the buffer and then according to the designed policy, the changes are reflected back to the file in the secondary memory (hard disk).
Caches on the other hand are used between the primary memory and processors, to bridge the gap between the speed of execution of RAM and the processor. Also the most frequently accessed data is stored in the cache to reduce the access to RAM. |
433,948 | Is saying a cache is a special kind of buffer correct? They both perform similar functions, but is there some underlying difference that I am missing? | 2012/06/07 | [
"https://superuser.com/questions/433948",
"https://superuser.com",
"https://superuser.com/users/89126/"
] | It would be more accurate to say that a cache is a particular usage pattern of a buffer, that implies multiple uses of the same data. Most uses of "buffer" imply that the data will be drained or discarded after a single use (although this isn't necessarily the case), whereas "cache" implies that the data will be reused multiple times. Caching also often implies that the data is stored as it is also being simultaneously used, although this isn't necessarily the case (as in pre-fetching and the like), whereas buffering implies that the data is being stored up for later use.
There is certainly a large overlap in both implementation and usage, however. | Common thing: both are intermediary data storage components (software or hardware) between computation and "main" storage.
To me the difference is the following:
Buffer:
* Handles **sequential** access to data (e.g. reading/writing data from a file or socket)
* **Enables** an interface between computation and main storage, **adapting** the different data transfer patterns of a data producer and a data consumer to each other. E.g. the computation writes small chunks of data, but the disk drive can accept only pieces of data of a specific size. So the buffer accumulates small pieces as input and regroups them into bigger pieces for output.
* So it is like an **Adapter** design pattern. It joins two interacting components that cannot interoperate directly.
* Examples: disk buffer, BufferedReader in the Java language, double buffering in computer graphics.
Cache:
* Handles **random** access to data (e.g. the CPU cache caches lines of memory that are not necessarily located sequentially).
* **Optimizes** access to the main storage, makes it faster. E.g. the CPU cache avoids accesses to memory, thus making CPU instructions faster.
* It is like a **Decorator** design pattern. It joins (often transparently) two interacting components that could in principle interoperate directly, but it makes the interaction faster.
* Examples: CPU cache, page cache, web proxy, browser cache. |
433,948 | Is saying a cache is a special kind of buffer correct? They both perform similar functions, but is there some underlying difference that I am missing? | 2012/06/07 | [
"https://superuser.com/questions/433948",
"https://superuser.com",
"https://superuser.com/users/89126/"
] | From Wikipedia's article on [data buffers](http://en.wikipedia.org/wiki/Data_buffer):
>
> a buffer is a region of a physical memory storage used to temporarily hold data while it is being moved from one place to another
>
>
>
A **buffer** ends up cycling through and holding every single piece of data that is transmitted from one storage location to another (like when using a circular buffer in audio processing). A buffer allows just that - a "buffer" of data before and after your current position in the data stream.
Indeed, there are some common aspects of a buffer and a cache. However, a cache in the conventional sense usually does *not* store all of the data when it's being moved from place to place (e.g. the CPU cache).
The purpose of a **[cache](http://en.wikipedia.org/wiki/Cache_%28computing%29)** is to store data in a transparent way, such that just enough data is cached so that the remaining data can be transferred without any performance penalty. In this context, the cache only "pre-fetches" a small amount of data (depending on the transfer rates, cache sizes, etc...).
The main difference is that a buffer will eventually have held all of the data. Conversely, a cache may have held all, some, or none of the data (depending on the design). However, a cache is accessed as if you were directly accessing the data in the first place - what exactly gets cached is transparent to the "user" of the cache.
---
The difference is in the *interface*. When you're using a cache to access a data source, you use it as if the cache **is** the data source - you can access every part of the data source through the cache, and the cache will determine where the data comes from (the cache itself, or the source). The cache itself determines what parts of the data to preload (usually just the beginning, but sometimes all), while the [cache replacement algorithm](http://en.wikipedia.org/wiki/Cache_algorithms) in use determines what/when things are removed from the cache. The best example of this, aside from the [CPU cache](http://en.wikipedia.org/wiki/CPU_cache) itself, is the [prefetcher](http://en.wikipedia.org/wiki/Prefetcher)/[readahead](http://en.wikipedia.org/wiki/Readahead). Both load the parts of data they think you will use most into memory, and revert to the hard drive if something isn't cached.
Conversely, a buffer can't be used to instantaneously move your location in the data stream, unless the new part has already been moved to the buffer. To do so would require the buffer to relocate (given the new location exceeds the buffer length), effectively requiring you to "restart" the buffer from a new location. The best example of this is moving the slider in a Youtube video.
Another good example of a buffer is playing audio back in Winamp. Since audio files need to be decoded by the CPU, it takes some time between when the song is read in, to when the audio is processed, to when it's sent to your sound card. Winamp will buffer some of the audio data, so that there is enough audio data already processed to avoid any "lock-ups" (i.e. the CPU is always preparing the audio you'll hear in a few hundred milliseconds, it's never real-time; what you hear comes from the buffer, which is what the CPU prepared in the past). | One important difference between cache and buffer is:
Buffer is a part of the primary memory. They are structures present and accessed from the primary memory (RAM).
On the other hand, cache is a separate physical memory in a computer's memory hierarchy.
A buffer is also sometimes called a buffer cache. This name stresses the fact that the use of a buffer is similar to that of a cache, i.e. to store data, while the difference lies in the context of its usage.
Buffers are used for temporarily storing data, while the data is moved from one object to another.
EX: when a video is moved from the Internet onto our PC for display, buffers are used to store the frames of the video which would be displayed next. (This increases the QoS, as the video would run smoothly after a successful buffering process.)
EX: another example is the scenario when we write data onto our files. The newly written data is not copied to the secondary memory instantaneously. The changes made are stored in the buffer and then according to the designed policy, the changes are reflected back to the file in the secondary memory (hard disk).
Caches on the other hand are used between the primary memory and processors, to bridge the gap between the speed of execution of RAM and the processor. Also the most frequently accessed data is stored in the cache to reduce the access to RAM. |
433,948 | Is saying a cache is a special kind of buffer correct? They both perform similar functions, but is there some underlying difference that I am missing? | 2012/06/07 | [
"https://superuser.com/questions/433948",
"https://superuser.com",
"https://superuser.com/users/89126/"
] | From Wikipedia's article on [data buffers](http://en.wikipedia.org/wiki/Data_buffer):
>
> a buffer is a region of a physical memory storage used to temporarily hold data while it is being moved from one place to another
>
>
>
A **buffer** ends up cycling through and holding every single piece of data that is transmitted from one storage location to another (like when using a circular buffer in audio processing). A buffer allows just that - a "buffer" of data before and after your current position in the data stream.
Indeed, there are some common aspects of a buffer and a cache. However, a cache in the conventional sense usually does *not* store all of the data when it's being moved from place to place (e.g. the CPU cache).
The purpose of a **[cache](http://en.wikipedia.org/wiki/Cache_%28computing%29)** is to store data in a transparent way, such that just enough data is cached so that the remaining data can be transferred without any performance penalty. In this context, the cache only "pre-fetches" a small amount of data (depending on the transfer rates, cache sizes, etc...).
The main difference is that a buffer will eventually have held all of the data. Conversely, a cache may have held all, some, or none of the data (depending on the design). However, a cache is accessed as if you were directly accessing the data in the first place - what exactly gets cached is transparent to the "user" of the cache.
---
The difference is in the *interface*. When you're using a cache to access a data source, you use it as if the cache **is** the data source - you can access every part of the data source through the cache, and the cache will determine where the data comes from (the cache itself, or the source). The cache itself determines what parts of the data to preload (usually just the beginning, but sometimes all), while the [cache replacement algorithm](http://en.wikipedia.org/wiki/Cache_algorithms) in use determines what/when things are removed from the cache. The best example of this, aside from the [CPU cache](http://en.wikipedia.org/wiki/CPU_cache) itself, is the [prefetcher](http://en.wikipedia.org/wiki/Prefetcher)/[readahead](http://en.wikipedia.org/wiki/Readahead). Both load the parts of data they think you will use most into memory, and revert to the hard drive if something isn't cached.
Conversely, a buffer can't be used to instantaneously move your location in the data stream, unless the new part has already been moved to the buffer. To do so would require the buffer to relocate (given the new location exceeds the buffer length), effectively requiring you to "restart" the buffer from a new location. The best example of this is moving the slider in a Youtube video.
Another good example of a buffer is playing audio back in Winamp. Since audio files need to be decoded by the CPU, it takes some time between when the song is read in, to when the audio is processed, to when it's sent to your sound card. Winamp will buffer some of the audio data, so that there is enough audio data already processed to avoid any "lock-ups" (i.e. the CPU is always preparing the audio you'll hear in a few hundred milliseconds, it's never real-time; what you hear comes from the buffer, which is what the CPU prepared in the past). | Common thing: both are intermediary data storage components (software or hardware) between computation and "main" storage.
To me the difference is the following:
Buffer:
* Handles **sequential** access to data (e.g. reading/writing data from a file or socket)
* **Enables** an interface between computation and main storage, **adapting** the different data transfer patterns of a data producer and a data consumer to each other. E.g. the computation writes small chunks of data, but the disk drive can accept only pieces of data of a specific size. So the buffer accumulates small pieces as input and regroups them into bigger pieces for output.
* So it is like an **Adapter** design pattern. It joins two interacting components that cannot interoperate directly.
* Examples: disk buffer, BufferedReader in the Java language, double buffering in computer graphics.
Cache:
* Handles **random** access to data (e.g. the CPU cache caches lines of memory that are not necessarily located sequentially).
* **Optimizes** access to the main storage, makes it faster. E.g. the CPU cache avoids accesses to memory, thus making CPU instructions faster.
* It is like a **Decorator** design pattern. It joins (often transparently) two interacting components that could in principle interoperate directly, but it makes the interaction faster.
* Examples: CPU cache, page cache, web proxy, browser cache. |
433,948 | Is saying a cache is a special kind of buffer correct? They both perform similar functions, but is there some underlying difference that I am missing? | 2012/06/07 | [
"https://superuser.com/questions/433948",
"https://superuser.com",
"https://superuser.com/users/89126/"
] | One important difference between cache and buffer is:
Buffer is a part of the primary memory. They are structures present and accessed from the primary memory (RAM).
On the other hand, cache is a separate physical memory in a computer's memory hierarchy.
A buffer is also sometimes called a buffer cache. This name stresses the fact that the use of a buffer is similar to that of a cache, i.e. to store data, while the difference lies in the context of its usage.
Buffers are used for temporarily storing data, while the data is moved from one object to another.
EX: when a video is moved from the Internet onto our PC for display, buffers are used to store the frames of the video which would be displayed next. (This increases the QoS, as the video would run smoothly after a successful buffering process.)
EX: another example is the scenario when we write data onto our files. The newly written data is not copied to the secondary memory instantaneously. The changes made are stored in the buffer and then according to the designed policy, the changes are reflected back to the file in the secondary memory (hard disk).
Caches on the other hand are used between the primary memory and processors, to bridge the gap between the speed of execution of RAM and the processor. Also the most frequently accessed data is stored in the cache to reduce the access to RAM. | Common thing: both are intermediary data storage components (software or hardware) between computation and "main" storage.
To me the difference is the following:
Buffer:
* Handles **sequential** access to data (e.g. reading/writing data from a file or socket)
* **Enables** an interface between computation and main storage, **adapting** the different data transfer patterns of a data producer and a data consumer to each other. E.g. the computation writes small chunks of data, but the disk drive can accept only pieces of data of a specific size. So the buffer accumulates small pieces as input and regroups them into bigger pieces for output.
* So it is like an **Adapter** design pattern. It joins two interacting components that cannot interoperate directly.
* Examples: disk buffer, BufferedReader in the Java language, double buffering in computer graphics.
Cache:
* Handles **random** access to data (e.g. the CPU cache caches lines of memory that are not necessarily located sequentially).
* **Optimizes** access to the main storage, makes it faster. E.g. the CPU cache avoids accesses to memory, thus making CPU instructions faster.
* It is like a **Decorator** design pattern. It joins (often transparently) two interacting components that could in principle interoperate directly, but it makes the interaction faster.
* Examples: CPU cache, page cache, web proxy, browser cache. |
10,596,793 | I created tens of **Label**s over a **Group**, added a right-click menu for every Label, and then attached an event **listener** to the menu. How can I get the exact Label which I right-clicked on via the menu item select listener (**ContextMenuEvent.MENU\_ITEM\_SELECT**)? Thanks very much | 2012/05/15 | [
"https://Stackoverflow.com/questions/10596793",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/176475/"
] | Produce XML output. Check the time taken and the file size.
Produce JSON output. Check the time taken and the file size.
Decide which is best. | Which is better depends on your case. For your case it could be JSON. It's light and easy to parse in JavaScript (if you'll use it with JS). |
10,596,793 | I created tens of **Label**s over a **Group**, added a right-click menu for every Label, and then attached an event **listener** to the menu. How can I get the exact Label which I right-clicked on via the menu item select listener (**ContextMenuEvent.MENU\_ITEM\_SELECT**)? Thanks very much | 2012/05/15 | [
"https://Stackoverflow.com/questions/10596793",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/176475/"
] | Produce XML output. Check the time taken and the file size.
Produce JSON output. Check the time taken and the file size.
Decide which is best. | JSON is good.
"the program will check often with the server". That is not required I believe.
Send an Asynchronous request using NSURLConnection and use NSNotificationCenter for notification services of iOS. So, when you get a response from the server, you can set a function to be called to use the data you need. [Check here for NSNotificationCenter usage](http://cocoawithlove.com/2008/06/five-approaches-to-listening-observing.html) |
41,276 | Beginners are taught principles in a classical style: control the center and develop pieces early. To my understanding, hypermodern openings try to control the center from a distance and wait for the opponent to occupy the center with pawns that become targets. Is this style of play of giving the opponent the early center not recommended to beginners for being too difficult or tricky at that level? | 2023/01/02 | [
"https://chess.stackexchange.com/questions/41276",
"https://chess.stackexchange.com",
"https://chess.stackexchange.com/users/28052/"
] | Hypermodern openings are typically not recommended to beginners.
1. They disregard some opening principles (prima facie) which beginners need to understand before "breaking" them. (That is "giving up" the center; in reality the plan often is to break the center with a pawn as soon as the army is developed. So even in hypermodern openings you eventually will put a pawn into the center.)
2. They **require a lot of chess understanding** to play well. Many of them can get you in troublesome (e.g. cramped) positions quickly if you do not play them well (i.e. strike with counterplay).
3. Due to the fact that the opponent gets the central control and a free hand, he has a lot of lines that promise an advantage. You'll **need to know a lot of theory**. In the time you need to really learn one of such openings (KID, Grünfeld,...) you could have improved your game much more meaningfully in other ways as a beginner.
4. Hypermodern openings **tend** to produce more closed positions (even though some lines don't, e.g. in the Grünfeld), see Brian Towers' answer.
Nevertheless, that doesn't mean they're never recommended or unplayable as a beginner. GM Daniel Naroditsky recommends the Grünfeld against d4 (and IM Levy Rozman says it's playable as a beginner if you are willing to learn a lot of theory). The Grünfeld has some advantages over other hypermodern openings w.r.t. beginners - it's very direct/tactical, positions can open quickly, the ideas are understandable - and you strike quickly in/at the center. I personally think that the Nimzo-/Queens-Indian isn't too bad for beginners, either. | The standard recommendation for beginners is to play openings which produce open positions favouring rapid piece development and tactical opportunities. The reason for this is that this gives the most practice in tactics and improving tactics is the fastest way to improve at this level.
Hypermodern openings produce closed positions where tactics play a much reduced role and the emphasis is on strategy and maneuvering. Beginners benefit and improve much less with this and so it is not recommended. |
14,750,035 | If possible, I'd like to run a find & remove query on non-indexed columns in the "background", without disturbing other tasks or exhausting memory to the detriment of others.
For indexing, there is a background flag. Can the same be appended for find/remove tasks?
Thanks for a tip | 2013/02/07 | [
"https://Stackoverflow.com/questions/14750035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1436033/"
This is not something you can use "background:true" for. Possibly the best way to handle this is to write a script that does this in the background. This script should run your operation in small batches with some delay in between. In pseudo code you would do (see the sketch after the list):
* find 10 docs you need to update
* update those 10 docs
* sleep
* goto first step.
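A rough Python sketch of that loop, assuming pymongo (the database, collection, query and delay are all placeholders to tune for your workload):

```python
import time
from pymongo import MongoClient

coll = MongoClient()["mydb"]["mycoll"]          # hypothetical names
query = {"status": "stale"}                     # non-indexed filter

while True:
    # Grab a small batch of matching _ids...
    batch = [d["_id"] for d in coll.find(query, {"_id": 1}).limit(10)]
    if not batch:
        break
    # ...remove them, then back off so other operations can run.
    coll.delete_many({"_id": {"$in": batch}})
    time.sleep(0.5)
```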
You will have to experiment with which value for sleep works. You do need to realize that all documents that you are updating need to be pulled into memory, so it will have at least some impact. | No, there is not a background:true flag for this operation. The remove will yield when page faults occur and allow other operations to execute. If you need to throttle this, then you can either remove in smaller batches or use a find/remove pattern which will lower the impact to other operations. |
25,975 | I will be hiking 4 days in early August between the High Sierra camps in Yosemite:
* Tuolumne Meadows Lodge → Glen Aulin
* Glen Aulin → May Lake
* May Lake → Sunrise
* Sunrise → Tuolumne Meadows Lodge
I do not need to carry food for breakfast and dinner because it will be provided.
I need a sleep sack but not a sleeping bag because the camp beds have blankets.
Any suggestions for the appropriate size pack?
If it helps, I tend toward the 'wear the same stuff except for socks and underwear' method of dressing each day when out and about like this. | 2021/03/10 | [
"https://outdoors.stackexchange.com/questions/25975",
"https://outdoors.stackexchange.com",
"https://outdoors.stackexchange.com/users/20884/"
] | I have a greater swiss mountain dog. I started by getting him used to camping with tarp in the backyard and sleeping in the van, we've also been doing a bit of winter camping in a friend's garden.
He doesn't like sleeping away from home, but then he also doesn't like being away from me. He spends his nights outdoors anyway, so temperature/fur is no problem. (His breed is typically a farmyard or livestock guard dog, so he's a barker - so rather no stealth camping. He'll growl or bark when animals move around close by, which is fine if it tells the wild boars to make a detour, but rather annoying when I wake up because some raccoons came along in the tree.)
Tarp + tethering from a tree works well. I haven't tried a tent. Put some thought in where to tie him to minimize the chances of getting entangled with the lines of tarp or tent.
Pad: a dog quickly recognizes that a pad is softer and warmer than the soil. If they need that, they'll use it (unless they decide it's in the category of things they're not supposed to be on). I'd put it to their use at home for a while.
My dog will scratch a pad like he does when making a cozy sleeping place with his mat or some leaves and soil outdoors. Foam pads will quickly be in shreds; I'm thinking of experimenting with the cloth of an old dog mat as a cover. At around 0°C, he uses a pad if available. If they don't use it, they don't need it (they may prefer to make their own camp from leaves).
Feeding: my dog needs substantially more feed when hiking the whole day, and also in cold weather (probably also because he's able to run more in cold/cool weather). I observe roughly a factor of 2 between a full hiking day in winter and hot weather in summer. (But then again, mountain dogs really don't eat much when lazy.)
I'd do a bunch of single-night out test hikes before starting on a whole week. | I have two small dogs (each about 25 pounds), and have taken them camping with me a few times. I ended up deciding that it was not really successful, that they weren't enjoying it and that I wasn't either, but YMMV.
Try setting up the tent in the back yard and getting the dogs used to it. That way they understand what it is, and you can see what issues there might be with pads, chewing, etc.
Re tying the German shepherd up, it seems straightforward to tie him to a tree. However, that's going to get awkward if you want to keep him tied and at the same time inside the tent.
I don't think a huge amount of extra food is necessary. Dogs are much more efficient walkers than humans, and in any case, exercise doesn't really consume as many calories as people imagine. In humans, hiking is about 0.4 kcal per mile per pound of body weight. Dogs are probably half that. |
4,414 | I answered [this question about the sun's circumference](https://physics.stackexchange.com/questions/68120/how-is-suns-equatorial-circumference-measured/68125#68125) with what I *thought* was a helpful answer - it seems that a couple of others found it useful (2 upvotes), but I am getting grief about the answer being "while strictly correct, but is enormously unsatisfying".
Why not then, if an answer is "enormously unsatisfying", post an alternative answer?
What have I done wrong this time? | 2013/06/17 | [
"https://physics.meta.stackexchange.com/questions/4414",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/-1/"
] | Nobody downvoted your answer. That indicates that nobody thought it was unhelpful; on the other hand, two people thought it *was* helpful. So honestly, your complaint is without merit. If people actually think something is wrong with your answer, they'll downvote it.
Also note that posting an unsatisfying (or unhelpful, wrong, etc.) answer is not technically *wrong*, in the sense that it's not against the rules here. So you haven't *done* anything wrong.
By the way, keep in mind that if e.g. I find another person's answer unsatisfying, inadequate, wrong, or so on, that has no bearing on whether I should post my own answer. The argument "don't criticize me if you can't do better" holds no water on this site. | I guess the unsatisfying bit is the inclusion of "Distance from Earth to the Sun" in the formula.
I agree that the question is a bit unclear in this aspect; I think it's trying to ask "how was the Sun's diameter *first* measured". From this point of view, talking about the E-S distance just raises another question "how was the E-S distance calculated?"
To make a rough analogy¹, this is similar to being asked "How can I calculate the circumference of the Earth", and answering "Multiply the diameter by $\pi$". Again, this is strictly correct, but not really satisfying.
1. I've tried to make the issue clearer in this analogy by exaggerating the situation. I've found that folks sometimes easily get offended by such exaggerated analogies, so I'm just mentioning that I don't mean to offend here :) |
4,414 | I answered [this question about the sun's circumference](https://physics.stackexchange.com/questions/68120/how-is-suns-equatorial-circumference-measured/68125#68125) with what I *thought* was a helpful answer - it seems that a couple of others found it useful (2 upvotes), but I am getting grief about the answer being "while strictly correct, but is enormously unsatisfying".
Why not then, if an answer is "enormously unsatisfying", post an alternative answer?
What have I done wrong this time? | 2013/06/17 | [
"https://physics.meta.stackexchange.com/questions/4414",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/-1/"
] | I can only repeat what I wrote before.
1. Your answer is strictly correct.
2. But you have assumed the hard part of the historical problem has already been solved and answered with a basic geometric identity so I found your answer to be uninteresting.
and emphasize that
* I did not vote your answer down.
* No one else had voted your answer down.
* A couple of people had found it useful enough to vote up.
Finally, I'm having a hard time understanding how you get from a +2/-0 result on an answer to understanding that you have "done [] something wrong" or "stuffed up". You are ahead with that answer and there is no reason to expect that you will ever be behind---the answer is correct as far as it goes. | I guess the unsatisfying bit is the inclusion of "Distance from Earth to the Sun" in the formula.
I agree that the question is a bit unclear in this aspect; I think it's trying to ask "how was the Sun's diameter *first* measured". From this point of view, talking about the E-S distance just raises another question "how was the E-S distance calculated?"
To make a rough analogy¹, this is similar to being asked "How can I calculate the circumference of the Earth", and answering "Multiply the diameter by $\pi$". Again, this is strictly correct, but not really satisfying.
1. I've tried to make the issue clearer in this analogy by exaggerating the situation. I've found that folks sometimes easily get offended by such exaggerated analogies, so I'm just mentioning that I don't mean to offend here :) |
4,414 | I answered [this question about the sun's circumference](https://physics.stackexchange.com/questions/68120/how-is-suns-equatorial-circumference-measured/68125#68125) with what I *thought* was a helpful answer - it seems that a couple of others found it useful (2 upvotes), but I am getting grief about the answer being "while strictly correct, but is enormously unsatisfying".
Why not then, if an answer is "enormously unsatisfying", post an alternative answer?
What have I done wrong this time? | 2013/06/17 | [
"https://physics.meta.stackexchange.com/questions/4414",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/-1/"
] | I confess to being one of those people. I agree completely that you didn't need to delete your answer. The pinhole camera suggestion meaningfully added to the discussion. The reason is that you can't just take a protractor and measure the angle subtended by the sun.
Now the criticism was that we don't have a measurement of the distance to the sun. I hope that you don't take this to mean that *you didn't contribute* to answering the question, because you did.
At the root of the quibbling, however, was the fact that someone asked a fairly low level question that has an extremely difficult answer. Why do you think no one else has added an answer after that discussion? Probably several people looked into it and quickly realized the problem is more difficult than they thought.
Using references on Earth, how do you establish an astronomical length scale? That question is **insanely** hard. But is that what was asked? To be nit-picky, yes, it was. It was a simple question that stumbled on a really advanced and difficult question. To answer that advanced question, one would still start out where you started out. | I guess the unsatisfying bit is the inclusion of "Distance from Earth to the Sun" in the formula.
I agree that the question is a bit unclear in this aspect; I think it's trying to ask "how was the Sun's diameter *first* measured". From this point of view, talking about the E-S distance just raises another question "how was the E-S distance calculated?"
To make a rough analogy¹, this is similar to being asked "How can I calculate the circumference of the Earth", and answering "Multiply the diameter by $\pi$". Again, this is strictly correct, but not really satisfying.
1. I've tried to make the issue clearer in this analogy by exaggerating the situation. I've found that folks sometimes easily get offended by such exaggerated analogies, so I'm just mentioning that I don't mean to offend here :) |
4,414 | I answered [this question about the sun's circumference](https://physics.stackexchange.com/questions/68120/how-is-suns-equatorial-circumference-measured/68125#68125) with what I *thought* was a helpful answer - it seems that a couple of others found it useful (2 upvotes), but I am getting grief about the answer being "while strictly correct, but is enormously unsatisfying".
Why not then, if an answer is "enormously unsatisfying", post an alternative answer?
What have I done wrong this time? | 2013/06/17 | [
"https://physics.meta.stackexchange.com/questions/4414",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/-1/"
] | I guess the unsatisfying bit is the inclusion of "Distance from Earth to the Sun" in the formula.
I agree that the question is a bit unclear in this aspect; I think it's trying to ask "how was the Sun's diameter *first* measured". From this point of view, talking about the E-S distance just raises another question "how was the E-S distance calculated?"
To make a rough analogy¹, this is similar to being asked "How can I calculate the circumference of the Earth", and answering "Multiply the diameter by $\pi$". Again, this is strictly correct, but not really satisfying.
1. I've tried to make the issue clearer in this analogy by exaggerating the situation. I've found that folks sometimes easily get offended by such exaggerated analogies, so I'm just mentioning that I don't mean to offend here :) | >
> What have I done wrong this time?
>
>
>
Initially being too sensitive to fair, constructive criticism of your answer, which replaced one measurement with another without explaining how the new measurement was to be carried out.
What did you do right?
Taking on board the criticism in the comments and improving the answer. |
4,414 | I answered [this question about the sun's circumference](https://physics.stackexchange.com/questions/68120/how-is-suns-equatorial-circumference-measured/68125#68125) with what I *thought* was a helpful answer - it seems that a couple of others found it useful (2 upvotes), but I am getting grief about the answer being "while strictly correct, but is enormously unsatisfying".
Why not then, if an answer is "enormously unsatisfying", post an alternative answer?
What have I done wrong this time? | 2013/06/17 | [
"https://physics.meta.stackexchange.com/questions/4414",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/-1/"
] | Nobody downvoted your answer. That indicates that nobody thought it was unhelpful; on the other hand, two people thought it *was* helpful. So honestly, your complaint is without merit. If people actually think something is wrong with your answer, they'll downvote it.
Also note that posting an unsatisfying (or unhelpful, wrong, etc.) answer is not technically *wrong*, in the sense that it's not against the rules here. So you haven't *done* anything wrong.
By the way, keep in mind that if e.g. I find another person's answer unsatisfying, inadequate, wrong, or so on, that has no bearing on whether I should post my own answer. The argument "don't criticize me if you can't do better" holds no water on this site. | >
> What have I done wrong this time?
>
>
>
Initially being too sensitive to fair, constructive criticism of your answer, which replaced one measurement with another without explaining how the new measurement was to be carried out.
What did you do right?
Taking on board the criticism in the comments and improving the answer. |
4,414 | I answered [this question about the sun's circumference](https://physics.stackexchange.com/questions/68120/how-is-suns-equatorial-circumference-measured/68125#68125) with what I *thought* was a helpful answer - it seems that a couple of others found it useful (2 upvotes), but I am getting grief about the answer being "while strictly correct, but is enormously unsatisfying".
Why not then, if an answer is "enormously unsatisfying", post an alternative answer?
What have I done wrong this time? | 2013/06/17 | [
"https://physics.meta.stackexchange.com/questions/4414",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/-1/"
] | I can only repeat what I wrote before.
1. Your answer is strictly correct.
2. But you have assumed the hard part of the historical problem has already been solved and answered with a basic geometric identity so I found your answer to be uninteresting.
and emphasize that
* I did not vote your answer down.
* No one else had voted your answer down.
* A couple of people had found it useful enough to vote up.
Finally, I'm having a hard time understanding how you get from a +2/-0 result on an answer to understanding that you have "done [] something wrong" or "stuffed up". You are ahead with that answer and there is no reason to expect that you will ever be behind---the answer is correct as far as it goes. | >
> What have I done wrong this time?
>
>
>
Initially being too sensitive to fair, constructive criticism of your answer, which replaced one measurement with another without explaining how the new measurement was to be carried out.
What did you do right?
Taking on board the criticism in the comments and improving the answer. |
4,414 | I answered [this question about the sun's circumference](https://physics.stackexchange.com/questions/68120/how-is-suns-equatorial-circumference-measured/68125#68125) with what I *thought* was a helpful answer - it seems that a couple of others found it useful (2 upvotes), but I am getting grief about the answer being "while strictly correct, but is enormously unsatisfying".
Why not then, if an answer is "enormously unsatisfying", post an alternative answer?
What have I done wrong this time? | 2013/06/17 | [
"https://physics.meta.stackexchange.com/questions/4414",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/-1/"
] | I confess to being one of those people. I agree completely that you didn't need to delete your answer. The pinhole camera suggestion meaningfully added to the discussion. The reason is that you can't just take a protractor and measure the angle subtended by the sun.
Now the criticism was that we don't have a measurement of the distance to the sun. I hope that you don't take this to mean that *you didn't contribute* to answering the question, because you did.
At the root of the quibbling, however, was the fact that someone asked a fairly low level question that has an extremely difficult answer. Why do you think no one else has added an answer after that discussion? Probably several people looked into it and quickly realized the problem is more difficult than they thought.
Using references on Earth, how do you establish an astronomical length scale? That question is **insanely** hard. But is that what was asked? To be nit-picky, yes, it was. It was a simple question that stumbled on a really advanced and difficult question. To answer that advanced question, one would still start out where you started out. | >
> What have I done wrong this time?
>
>
>
Initially being too sensitive to fair, constructive criticism of your answer, which replaced one measurement with another without explaining how the new measurement was to be carried out.
What did you do right?
Taking on board the criticism in the comments and improving the answer. |
74,873 | We know that if we sprinkle a handful of water over madhiy on a garment, it is sufficient according to the most authentic opinion. But I have a doubt. If stains or traces of madhiy still remain after sprinkling the water, do I have to sprinkle water again in that area? Or is sprinkling a handful of water once sufficient regardless of the stains or traces of madhiy that remain? And is the water that was splashed on madhiy pure? | 2022/02/07 | [
"https://islam.stackexchange.com/questions/74873",
"https://islam.stackexchange.com",
"https://islam.stackexchange.com/users/49043/"
] | Water is regarded in fiqh as having the attribute to be tahir: clean
Moreover it is regarded as
>
> tahir mutahhir
>
> طاهر مُطَهِّر
>
>
>
meaning it is clean طاهر (by itself) and cleaning مُطَهِّر (anything which comes in contact with it).
We could also say that madhiy (a spot) is usually less than the quantity of water (a handful) used for the act of cleaning; therefore the water used in such a case is still regarded as tahir in the majority view.
But we know from fiqh and from hadith sources that the only requirement for removing (or maybe it is better to say handling) madhiy is sprinkling water over it. This requirement was nowhere extended to removing the traces of madhiy (if madhiy is regarded as najasah then this would be a special case).
Therefore the above discussion can be closed by saying that handling madhiy is done and fulfilled by only sprinkling water over it; there's no necessity to remove it or its traces.
As to the topic of stains of najasa it was already discussed in the following posts:
* [Semen Stains Remaining On Clothes Even After thoroughly Washing The Garment](https://islam.stackexchange.com/questions/67552/semen-stains-remaining-on-clothes-even-after-thoroughly-washing-the-garment)
* [is pus and blood najis](https://islam.stackexchange.com/questions/50020/is-pus-and-blood-najis)
And what must be cleaned or removed, if najasah (color, odor and taste) is removed is explained in:
[Ritual impurity in pants](https://islam.stackexchange.com/questions/42575/ritual-impurity-in-pants) | Sheikh 'Assim in a [YouTube video](https://youtu.be/mwlEmkLK3rw?t=57) said:
>
> Sprinkling it does the job even if there are marks. Ignore it and pray with it.
>
>
>
This fatwa on [Islamweb](https://www.islamweb.net/en/fatwa/242992/sprinkling-water-over-mathy-for-purification) also says it is sufficient to sprinkle water over it according to some scholars.
>
> There is no doubt that "sprinkling water does not remove the impurity," as Ibn Qudaamah may Allaah have mercy upon him said, but some scholars are anyway of the view that it is sufficient to sprinkle water over it in order not to cause hardship and embarrassment, pursuant to the Ahaadeeth that provide that sprinkling water is enough, like the Hadeeth: “….what about that clothes it touches? and the Prophet sallallaahu `alayhi wa sallam ( may Allaah exalt his mention ) replied: It is enough for you to take a handful of water and splash it over the part of your clothes that you see it has touched.” [Abu Daawood] And the Hadeeth narrated by ’Ali may Allaah be pleased with him about Mathy which reads: "… and splash water over your genitals."
>
>
>
Hopefully this answers the question. And Allah knows the best. |
31,788 | In SharePoint Foundation 2010, can I add a new calculated value column where the calculation is based on when another column is modified? | 2012/03/16 | [
"https://sharepoint.stackexchange.com/questions/31788",
"https://sharepoint.stackexchange.com",
"https://sharepoint.stackexchange.com/users/817/"
] | Out of the box, SharePoint won't tell you when a specific column was modified. You only know when the item was last modified (Modified field).
To track changes on a specific column, you would need to duplicate it and run a workflow that checks for updates. Not a simple design.
Here is how it would work:
* add two more columns hidden from the users: CopyColumn and CopyModified
* on item change, the workflow compares Column with CopyColumn
* if the values are different it means that the user has modified the column. Calculate the time difference between Modified and CopyModified, then update CopyModified and CopyColumn for the next round
* if the values are the same, no action, just stop the workflow | Yes, you can see this link for possible formulas - <http://msdn.microsoft.com/en-us/library/bb862071.aspx> |
31,788 | In SharePoint Foundation 2010, can I add a new calculated value column where the calculation is based on when another column is modified? | 2012/03/16 | [
"https://sharepoint.stackexchange.com/questions/31788",
"https://sharepoint.stackexchange.com",
"https://sharepoint.stackexchange.com/users/817/"
] | Out of the box, SharePoint won't tell you when a specific column was modified. You only know when the item was last modified (Modified field).
To track changes on a specific column, you would need to duplicate it and run a workflow that checks for updates. Not a simple design.
Here is how it would work:
* add two more columns hidden from the users: CopyColumn and CopyModified
* on item change, the workflow compares Column with CopyColumn
* if the values are different it means that the user has modified the column. Calculate the time difference between Modified and CopyModified, then update CopyModified and CopyColumn for the next round
* if the values are the same, no action, just stop the workflow | Agree with Christophe. SharePoint has no column-level change tracking (just like there's no column-level security). A list item is the deepest object in SharePoint you can bind events etc. to. My current client wanted to be able to see (and track, for auditing purposes) these column changes as well.
We implemented an ItemEventReceiver that writes all fields for which the AfterProperties value of a column didn't match the BeforeProperties value of said column to a database.
Another option would be to have a list identical to the source list and copy the item to that list on every update (or just the date and the old and new value of the column in question).
And another option would be to add a column ColumnBModified to your list and setting that column to the current time when the BeforeProperties value of a column does not match the AfterProperties value.
All options require code (ItemEventReceiver) |
21,855,420 | I have a 3D application in which I render terrain as a heightmap. The further away the terrain is from the camera the lower the resolution it will be at. Is there any option other than creating a vertex buffer object / display list for every resolution or would there be a way to have one, at the highest resolution, and somehow tell the GPU to only use some of the data? | 2014/02/18 | [
"https://Stackoverflow.com/questions/21855420",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2733085/"
] | You need to implement a *Level of Detail* (LOD) algorithm. A comprehensive list of algorithms with colorful pictures can be found [here](http://vterrain.org/LOD/Papers/). I would suggest [geomipmapping](http://www.flipcode.com/archives/article_geomipmaps.pdf) as an easy starting point. There are also some interesting [implementations](http://vterrain.org/LOD/Implementations/) which you may be able to reuse or learn from.
In most simple implementations, there are typically several index buffers for different levels of detail and one or more vertex buffers for different parts of the terrain. The GPU cannot reduce the level of detail of geometry by itself, except via tessellation shaders, which have also been used for terrain rendering with vertex texturing (the terrain data is then in a texture instead of in a VBO, and the roughness is controlled by the tessellation level). | You have to create your VBO with local position coordinates. Send these coordinates to your shader and apply the transformation to world coordinates in the shader, with special attention to the resolution, which you can bind as a uniform to the shader.
And one last thing: **"Never touch an existing VBO!"** ;-) |
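*Illustration added, not from either answer:* the core of geomipmapping, a single full-resolution vertex grid shared by several index buffers that skip vertices at coarser levels, fits in a short sketch. This is plain Python with no GL calls; the grid size and level count are arbitrary.

```python
def lod_indices(grid_size, level):
    """Triangle indices for a grid_size x grid_size vertex grid,
    sampling every 2**level-th vertex: one index buffer per LOD.
    Every level indexes into the same full-resolution vertex buffer."""
    step = 1 << level
    indices = []
    for z in range(0, grid_size - step, step):
        for x in range(0, grid_size - step, step):
            a = z * grid_size + x            # top-left corner
            b = a + step                     # top-right corner
            c = (z + step) * grid_size + x   # bottom-left corner
            d = c + step                     # bottom-right corner
            indices += [a, c, b, b, c, d]    # two triangles per cell
    return indices

# One index buffer per detail level, all pointing into one VBO.
for lod in range(4):
    idx = lod_indices(65, lod)
    print("LOD %d: %d triangles" % (lod, len(idx) // 3))
```

A patch far from the camera is then drawn with a coarse index buffer and a near patch with LOD 0, without ever duplicating the vertex data.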
62,642,059 | Sorry, this may be a dumb question, but I am new to MIDI and not a musician - I am trying to figure out if I can use a MIDI controller for some other control application.
I know how to set up the MIDI system and receive MIDI events using AudioKit.midi in iOS.
I am trying to find out if I can determine the state of, let's say, a MIDI knob without it sending events. As soon as I start turning a knob I get events - so at the moment, in order to initialize the system, I have to turn every knob so it sends an event and the controller setting is reflected in my software. The controller I have has 16 knobs, and I can't help thinking I am missing something. This must be easier, somehow...
I could use relative knobs and keep the state in my code, but my controller wakes up with all dials in absolute mode - so I am thinking there has to be a way?
Any hint would be appreciated :-)
Thanks! | 2020/06/29 | [
"https://Stackoverflow.com/questions/62642059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10851991/"
] | You must be editing the wrong php.ini file, or you have Xdebug settings defined in a separate .ini file (one that is parsed in addition to the master file).
On Linux & Mac it's quite common to have different php.ini files for the CLI and the web server.
Check `phpinfo()` output, top table: it will show all config files used by that PHP installation. | I was also stuck on this problem for over an hour, because the error message is not very helpful and sometimes misleading.
Maybe it's obvious, but try the following if everything else fails:
* place a phpinfo() somewhere in your project to read out your currently applied php.ini configuration
* open the php.ini you see under "Loaded Configuration File" in your phpinfo
* edit your debug config
* restart your webserver (This seems to be required when you start Laravel via "php artisan serve", at least in my case)
* load your phpinfo again to confirm that your settings have been applied
* run your PhpStorm debug config validator again
Your webserver and PhpStorm should now pick up your new debug configuration. |
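*Added sketch, not from either answer:* the question above mentions keeping knob state in your own code as a workaround. Standard MIDI has no universal message for polling an absolute knob position, so caching the last seen value per controller is usually the practical answer. A minimal sketch using the Python `mido` library (chosen purely for illustration; the original setup uses AudioKit on iOS):

```python
import mido

# Last known value for each (channel, controller) pair; knobs we
# have never seen simply aren't in the dict yet.
knob_state = {}

with mido.open_input() as port:        # default MIDI input port
    for msg in port:                   # blocks, yielding messages
        if msg.type == "control_change":
            knob_state[(msg.channel, msg.control)] = msg.value
            print("CC %d on ch %d -> %d (known knobs: %d)" %
                  (msg.control, msg.channel, msg.value, len(knob_state)))
```

Some controllers can also dump their current state in response to a device-specific SysEx request; whether yours can is something only its manual will tell you.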
4,993,315 | Had this interesting question asked today, and the arguments varied from Proxy to Wrapper to Decorator.
Thoughts? | 2011/02/14 | [
"https://Stackoverflow.com/questions/4993315",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/276783/"
] | The short description of
* *Proxy*: "Provide a surrogate or placeholder for another object to control access to it".\*
* *Decorator*: "Attach additional responsibilities to an object dynamically."\*
* *Adapter* (Wrapper): "Convert the interface of a class into another interface clients expect."\*
Based on this, to me AOP looks like (a solution to the problem solved by) Decorator rather than Proxy - and definitely not Adapter.
\*From the GoF book. | >
> "the arguments varied from Proxy to Wrapper to Decorator."
>
>
>
Correct. That's why they give it a new name -- Aspect-Oriented Programming -- not just an OOP design pattern.
If it could be reduced to a single design pattern, it wouldn't last long in the marketplace of ideas.
The point is to take a viewpoint that's a bit broader. |
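*Example added for illustration, not from either answer:* the reason AOP reads like Decorator is that both attach a cross-cutting responsibility to existing behavior without editing it. A minimal Python sketch with made-up function names:

```python
import functools

def logged(func):
    """Decorator attaching a logging 'aspect' around any callable;
    the cross-cutting concern lives in one place, not in each callee."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("entering %s%r" % (func.__name__, args))
        result = func(*args, **kwargs)
        print("leaving %s -> %r" % (func.__name__, result))
        return result
    return wrapper

@logged
def transfer(amount):
    return "transferred %d" % amount

transfer(42)
# entering transfer(42,)
# leaving transfer -> 'transferred 42'
```

What a full AOP framework adds over this sketch is pointcut expressions for choosing *which* functions get wrapped, applied without touching their definitions at all.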
11,365 | This year seems to have a significant meaning in Super 8, as this is the year the humans made contact with the alien in the experiments. During the flashback sequence in the lab, the file folder containing the experimental data has the year prominently displayed in bold writing, so it doesn't seem like this is just a piece of trivia for the viewer.
It seems as though the placement of a prominent African-American scientist in a lead role in this important experiment, who is then stifled, shamed, and silenced, amidst an America steeped in racial inequality and a nascent civil rights movement, is more than a coincidence.
Is there any other evidence of inequality being a theme in the film, and does this relate to the "alien" aspect? | 2013/05/11 | [
"https://movies.stackexchange.com/questions/11365",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/417/"
] | The civil rights context in 1958 was marked by sharp division and the beginnings of greater activism. I summarize some of the important events of the preceding few years.
The modern era of the Civil Rights Movement in the United States began with the Supreme Court's 1954 decision in [Brown v. Board of Education](http://en.wikipedia.org/wiki/Brown_v._Board_of_Education), requiring desegregation in public schools. Most of the events in the following years resulted from attempts to achieve desegregation in other realms or to resist it.
In 1955, fourteen-year-old [Emmett Till](http://en.wikipedia.org/wiki/Emmett_Till) was lynched in Mississippi for whistling at a white woman. News coverage sparked national outrage, but two men accused of the crime were acquitted.
The [Montgomery Bus boycott](http://en.wikipedia.org/wiki/Montgomery_Bus_Boycott) of 1955–1956 began when Rosa Parks refused to yield her seat to a white passenger. The boycott lasted just over a year, until a court ordered the desegregation of the buses in Montgomery. The boycott was organized by the Montgomery Improvement Association, under the leadership of Martin Luther King, Jr.
In 1957, a crisis occurred when the governor of Arkansas called out the National Guard to prevent the admission of [nine black students](http://en.wikipedia.org/wiki/Little_Rock_Nine) to Little Rock Central High School. It was resolved when President Eisenhower took control of the National Guard in Arkansas, ordered them to return to barracks, and used troops from the 101st Airborne Division to protect the students. In 1958, at the end of the school year, Little Rock and other school systems in the South closed their schools completely rather than continue with integration.
The first "[sit-ins](http://en.wikipedia.org/wiki/Sit-in#Civil_Rights_Movement)" had occurred as early as 1939, but in 1958, their use expanded greatly. During the year, sit-ins at lunch counters in Kansas and Oklahoma led two the successful integration of two chains of drug stores.
Despite the presence of a sympathetic African-American scientist in the film, it has received little critical appreciation for how it addresses racial issues. Even a [comparatively favorable examination](http://sites.williams.edu/cthorne/tag/super-8/) finds that it essentially calls for a "separate but equal" solution to race issues; others accuse it of [avoiding the issue](http://www.bvblackspin.com/2011/06/09/abrams-super-8-and-black-science-fiction/) or of simply [being racist](http://sbpdl.net/2011/06/13/super-8-is-racist-the-three-black-characters-die/). | 1958 was heavily affected by the first artificial satellite launch of [Sputnik](http://en.wikipedia.org/wiki/Sputnik), which was launched October 1957: the satellite was visible around the world due to its low orbit and its radio emissions were easily detectable by any electronics hobbyist.
The Cold War was already underway, but Sputnik caused a huge psychological shift that some call the [Sputnik Crisis](http://en.wikipedia.org/wiki/Sputnik_crisis): fear that the Soviet Union was now superior rekindled much the same kind of fear and insecurity as Orson Welles's [1938 radio broadcast](http://en.wikipedia.org/wiki/The_War_of_the_Worlds_%28radio_drama%29) did. Grade-school curricula were significantly altered to emphasize science and math. The [Space Race](http://en.wikipedia.org/wiki/Space_Race) was immediately kicked off.
I am not aware of any significant African-American associations with 1958, as it precedes the [Martin Luther King protests](http://en.wikipedia.org/wiki/Martin_Luther_King) of 1962 and 1963, which largely heralded the civil rights movement. Perhaps the repression of the black scientist is symbolism for lost and wasted opportunities for improvement: the scientist loses out on contributing and recognition, and the world loses out by not sharing his gifts, racial prejudice being a deeply lose-lose situation. |
36,716,894 | I would like to execute an Http Request Sampler with each request defined in another request group (Simple Controller) but could not find an appropriate construct to achieve this.
More concrete description:
I'd like to execute LogRequest with each of Req1, Req2, Req3 ... and I don't want to duplicate the LogRequest.
Any idea on how to do this? | 2016/04/19 | [
"https://Stackoverflow.com/questions/36716894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/269905/"
] | **Using Samplers:**
A standalone Sampler will not automatically be executed after each sampler in the group/controller.
There seems to be a workaround using the Interleave Controller.
[](https://i.stack.imgur.com/aaKza.png)
As the picture shows, for each request in the Interleave Controller, the Log Request gets executed. The Loop Controller here is required to execute the Interleave Controller again and again so that all the samplers inside the controller are run.
**Using Post Processor:**
If I were you, I would try to use a Pre Processor / Post Processor. As you need to log something, you can use a Beanshell PostProcessor. It would simply be as shown below.
[](https://i.stack.imgur.com/xdcGY.png) | Depending on what you're trying to achieve it could be:
1. [Interleave Controller](http://jmeter.apache.org/usermanual/component_reference.html#Interleave_Controller)
2. [Module Controller](https://guide.blazemeter.com/hc/en-us/articles/207421345-Using-JMeter-Module-Controller) |
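*Added sketch, not from either answer:* the Beanshell PostProcessor in the screenshot could equally be a JSR223 PostProcessor. Assuming a Python-compatible engine such as Jython is on JMeter's classpath, its script body might look like this, where `prev` (the SampleResult of the sampler that just ran) and `log` are standard JMeter script bindings:

```python
# JSR223 PostProcessor body (script language: jython).
label = prev.getSampleLabel()      # name of the sampler that just ran
code = prev.getResponseCode()      # its HTTP response code
elapsed = prev.getTime()           # elapsed time in milliseconds
log.info("after %s: code=%s, time=%sms" % (label, code, elapsed))
```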
165,720 | I'm looking for an easy way to debug RESTful services. For example, most webapps can be debugged using your average web browser. Unfortunately that same browser won't allow me to test HTTP PUT, DELETE, and to a certain degree even HTTP POST.
I am not looking to automate tests. I'd like to run new services through a quick sanity check, ideally without having to write my own client. | 2008/10/03 | [
"https://Stackoverflow.com/questions/165720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14731/"
] | A tool I've found useful if you're running OS X Leopard:
[HTTP Client](http://ditchnet.org/httpclient/)
It's a very simple GUI program that allows you to craft http requests to a resource and view the response. | [RESTTest](https://addons.mozilla.org/en-US/firefox/addon/5946) for Firefox (an add-on). [Fiddler](http://www.fiddler2.com/fiddler2/) for IE. |
165,720 | I'm looking for an easy way to debug RESTful services. For example, most webapps can be debugged using your average web browser. Unfortunately that same browser won't allow me to test HTTP PUT, DELETE, and to a certain degree even HTTP POST.
I am not looking to automate tests. I'd like to run new services through a quick sanity check, ideally without having to write my own client. | 2008/10/03 | [
"https://Stackoverflow.com/questions/165720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14731/"
] | A tool I've found useful if you're running OS X Leopard:
[HTTP Client](http://ditchnet.org/httpclient/)
It's a very simple GUI program that allows you to craft http requests to a resource and view the response. | I use restclient, available from [Google Code](http://code.google.com/p/rest-client/). It's a simple Java Swing application which supports all HTTP methods, and allows you full control over the HTTP headers, conneg, etc. |
165,720 | I'm looking for an easy way to debug RESTful services. For example, most webapps can be debugged using your average web browser. Unfortunately that same browser won't allow me to test HTTP PUT, DELETE, and to a certain degree even HTTP POST.
I am not looking to automate tests. I'd like to run new services through a quick sanity check, ideally without having to write my own client. | 2008/10/03 | [
"https://Stackoverflow.com/questions/165720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14731/"
] | Use an existing 'REST client' tool that makes it easy to inspect the requests and responses, like [RESTClient](https://github.com/wiztools/rest-client). | [RESTTest](https://addons.mozilla.org/en-US/firefox/addon/5946) for Firefox (an add-on). [Fiddler](http://www.fiddler2.com/fiddler2/) for IE. |
165,720 | I'm looking for an easy way to debug RESTful services. For example, most webapps can be debugged using your average web browser. Unfortunately that same browser won't allow me to test HTTP PUT, DELETE, and to a certain degree even HTTP POST.
I am not looking to automate tests. I'd like to run new services through a quick sanity check, ideally without having to write my own client. | 2008/10/03 | [
"https://Stackoverflow.com/questions/165720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14731/"
] | Use an existing 'REST client' tool that makes it easy to inspect the requests and responses, like [RESTClient](https://github.com/wiztools/rest-client). | I've found RequestBin useful for debugging REST requests. Post to a unique URL and request data are updated/displayed. Can help in a pinch when other tools are not available.
<https://requestbin.com/> |
165,720 | I'm looking for an easy way to debug RESTful services. For example, most webapps can be debugged using your average web browser. Unfortunately that same browser won't allow me to test HTTP PUT, DELETE, and to a certain degree even HTTP POST.
I am not looking to automate tests. I'd like to run new services through a quick sanity check, ideally without having to write my own client. | 2008/10/03 | [
"https://Stackoverflow.com/questions/165720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14731/"
] | [RESTTest](https://addons.mozilla.org/en-US/firefox/addon/5946) for Firefox (an add-on). [Fiddler](http://www.fiddler2.com/fiddler2/) for IE. | You should check out the [poster](https://addons.mozilla.org/en-US/firefox/addon/poster/) extension for Firefox; it's simple and useful enough to use :) |
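*Added sketch, not from any answer above:* if you do end up scripting the sanity check rather than using a GUI tool, a few lines of Python with the `requests` library exercise every verb a browser won't send. The endpoint URL is a placeholder; substitute your own service.

```python
import requests

base = "http://localhost:8080/api/widgets"   # placeholder endpoint

# Hit each verb once and print status + body so you can eyeball it.
r = requests.post(base, json={"name": "test"})
print("POST  ", r.status_code, r.text[:80])

r = requests.get(base + "/1")
print("GET   ", r.status_code, r.text[:80])

r = requests.put(base + "/1", json={"name": "renamed"})
print("PUT   ", r.status_code, r.text[:80])

r = requests.delete(base + "/1")
print("DELETE", r.status_code)
```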