qid int64 1 74.7M | question stringlengths 12 33.8k | date stringlengths 10 10 | metadata list | response_j stringlengths 0 115k | response_k stringlengths 2 98.3k |
|---|---|---|---|---|---|
1,521 | What I mean is that is it more dangerous than other contact sports that aren't martial arts? Such as Football, Soccer, Basketball, etc...
And if yes, why? | 2012/10/16 | [
"https://martialarts.stackexchange.com/questions/1521",
"https://martialarts.stackexchange.com",
"https://martialarts.stackexchange.com/users/112/"
] | I'm not sure that your statement about the safety of boxing is generally accepted.
>
> ["There is absolutely no way you can make boxing safe," said Nelson Richards, MD, a delegate from the American Academy of Neurology who proposed the original resolution to ban the sport in 1983.](https://www.ama-assn.org/amednews/2002/07/08/hlsb0708.htm)
>
>
>
[The BBC reported](http://www.bbc.co.uk/health/physical_health/conditions/boxing.shtml)
>
> According to brain surgeons, over 80 per cent of professional boxers have serious brain scarring on MRI scans. The evidence for harm or cumulative brain damage to amateur boxers is less clear.
>
>
>
Boxing advocates point out that amateur boxing [has fewer injuries](http://www.boxinggyms.com/gladiator/newpaper/safe.htm) than soccer, gymnastics, etc. However, that source doesn't explain how "fewer injuries" is measured, nor whether long-term brain damage is counted. [There is some evidence that even amateur boxing can cause brain damage.](http://www.science20.com/news_articles/even_amateur_boxing_can_cause_brain_damage-89512)
Ultimately, you may want to look at a source which [compares injury rates](http://www.thecni.org/reviews/11-1-p35-gerberkozora.htm). A quick Google search suggests that [football and soccer](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2267335/) have the highest injury rate per hour of practice. But that doesn't address the severity or long-term consequences of those injuries. | At a good gym, meaning one with experienced coaches and decent equipment, boxing/kickboxing should not be that dangerous. First of all, you're probably not sparring right away, and once you are, it's in a controlled environment with mouthpieces, headgear, gloves, and shin pads (if kickboxing).
As pointed out in a previous answer, you are probably at an increased risk of facial bruises, bloody noses, etc., but not serious injury. However, if you are training for an MMA style of fighting which includes takedowns, your rate of injury is going to spike sharply.
Comparing it to other sports is tricky. Even at the high-school level, we certainly had a higher rate of general injuries in football, and they were typically more severe; broken bones were not uncommon. Basketball didn't have the same incidence of really violent injuries, but a much higher incidence of high ankle sprains and the like. Soccer seemed relatively safe, but I never played at a highly competitive level, and if you watch the Europeans play you'd think it was more dangerous than trying to snuff volcanoes with your bare hands, given how often they go down screaming in pain.
In short, I think there are too many variables to objectively answer your question, but the above has been my (anecdotal) experience. |
1,521 | What I mean is that is it more dangerous than other contact sports that aren't martial arts? Such as Football, Soccer, Basketball, etc...
And if yes, why? | 2012/10/16 | [
"https://martialarts.stackexchange.com/questions/1521",
"https://martialarts.stackexchange.com",
"https://martialarts.stackexchange.com/users/112/"
] | I'm not sure that your statement about the safety of boxing is generally accepted.
>
> ["There is absolutely no way you can make boxing safe," said Nelson Richards, MD, a delegate from the American Academy of Neurology who proposed the original resolution to ban the sport in 1983.](https://www.ama-assn.org/amednews/2002/07/08/hlsb0708.htm)
>
>
>
[The BBC reported](http://www.bbc.co.uk/health/physical_health/conditions/boxing.shtml)
>
> According to brain surgeons, over 80 per cent of professional boxers have serious brain scarring on MRI scans. The evidence for harm or cumulative brain damage to amateur boxers is less clear.
>
>
>
Boxing advocates point out that amateur boxing [has fewer injuries](http://www.boxinggyms.com/gladiator/newpaper/safe.htm) than soccer, gymnastics, etc. However, that source doesn't explain how "fewer injuries" is measured, nor whether long-term brain damage is counted. [There is some evidence that even amateur boxing can cause brain damage.](http://www.science20.com/news_articles/even_amateur_boxing_can_cause_brain_damage-89512)
Ultimately, you may want to look at a source which [compares injury rates](http://www.thecni.org/reviews/11-1-p35-gerberkozora.htm). A quick Google search suggests that [football and soccer](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2267335/) have the highest injury rate per hour of practice. But that doesn't address the severity or long-term consequences of those injuries. | The short answer is yes. The very point of the sport is to do damage to your opponent. That being said, the chances of you actually breaking something (apart from your nose) are pretty slim. The one particularly dangerous thing that can happen to you in boxing or kickboxing is a concussion, which can, and after a few years probably will, cause scarring of the brain and make you "punch drunk".
So yes, boxing and kickboxing are pretty dangerous, but serious injuries are mostly limited to your brain unless you're very unlucky.
1,521 | What I mean is that is it more dangerous than other contact sports that aren't martial arts? Such as Football, Soccer, Basketball, etc...
And if yes, why? | 2012/10/16 | [
"https://martialarts.stackexchange.com/questions/1521",
"https://martialarts.stackexchange.com",
"https://martialarts.stackexchange.com/users/112/"
] | I'm not sure that your statement about the safety of boxing is generally accepted.
>
> ["There is absolutely no way you can make boxing safe," said Nelson Richards, MD, a delegate from the American Academy of Neurology who proposed the original resolution to ban the sport in 1983.](https://www.ama-assn.org/amednews/2002/07/08/hlsb0708.htm)
>
>
>
[The BBC reported](http://www.bbc.co.uk/health/physical_health/conditions/boxing.shtml)
>
> According to brain surgeons, over 80 per cent of professional boxers have serious brain scarring on MRI scans. The evidence for harm or cumulative brain damage to amateur boxers is less clear.
>
>
>
Boxing advocates point out that amateur boxing [has fewer injuries](http://www.boxinggyms.com/gladiator/newpaper/safe.htm) than soccer, gymnastics, etc. However, that source doesn't explain how "fewer injuries" is measured, nor whether long-term brain damage is counted. [There is some evidence that even amateur boxing can cause brain damage.](http://www.science20.com/news_articles/even_amateur_boxing_can_cause_brain_damage-89512)
Ultimately, you may want to look at a source which [compares injury rates](http://www.thecni.org/reviews/11-1-p35-gerberkozora.htm). A quick Google search suggests that [football and soccer](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2267335/) have the highest injury rate per hour of practice. But that doesn't address the severity or long-term consequences of those injuries. | TL;DR: Yes, kickboxing, MMA, and boxing are extremely dangerous.
----------------------------------------------------------------
**The greatest risk in all combat sports in which blows to the head are allowed is traumatic brain injury. When it comes to traumatic brain injury, boxing is by far the most dangerous sport, but kickboxing and MMA aren't far behind.**
---
Brain Injury:
-------------
>
> Almost a century ago, a rare but serious form of dementia was linked to repetitive head injuries in boxing. The dementia was aptly named, “Boxer’s dementia.” Lately, this “punch drunk” dementia has been found to affect athletes in other sports, such as American football and soccer, where athletes' heads take repeated blows, so a broader term for this condition was needed.
>
>
> Chronic traumatic encephalopathy (CTE), is a related brain disorder that has been shown to affect other kinds of athletes, and more rarely, non-athletes who sustain head injuries...
>
>
> Its prevalence in boxers continues. One recent review study of athletes who were diagnosed with CTE found that of the 51 confirmed cases of CTE, 46 were in athletes – and of these, 39 were boxers. Five football players, a soccer player, and a wrestler made up the remainder of the athletes affected by chronic brain trauma.
>
> - [Athletes and Brain Trauma](http://www.thedoctorwillseeyounow.com/content/sports_medicine/art3599.html)
>
>
>
From an article on a 2014 [study](http://ajs.sagepub.com/content/early/2014/03/19/0363546514526151.full.pdf+html?sid=a2e7e3b5-e377-4840-9489-6e3903519bae), limited to kickboxers and MMA fighters:
>
> **The rate of serious head injuries among professional mixed martial arts competitors is potentially twice that of professional football players, according to U.S. researchers...**
>
>
> Yet [fighters'] risk of head injury hadn't been well studied, according to [Michael Hutchison, a researcher at the University of Toronto] and his coauthors. The highly physical nature of the contact sport - which some critics consider dangerous or violent - got the researchers wondering just how high a risk players run of getting knocked out repeatedly.
>
>
> The first event they looked for was knock-outs, in which players are literally knocked unconscious. The second, known as technical knockouts, occur when a referee or other authority judges that the player is too woozy to successfully defend him- or herself. Both kinds of knockout end the match.
>
>
> The researchers also used statistics to investigate which factors were associated with a player having a higher risk of a knockout or a technical knockout due to being struck multiple times.
>
>
> **They found that players suffered a knockout in 12.7 percent of matches, and that a technical knockout took place in about 19 percent, meaning that nearly one-third of matches ended as a result of some type of head trauma.**
>
>
> **These numbers mean that out of every 100 matches in which a mixed martial arts athlete could be knocked out, known as an athlete exposure, the injury would happen 6.4 times.**
>
>
> **The comparable concussion rates for boxing and kickboxing are, respectively, 4.9 and 1.9 per 100 exposures, the authors note.**
>
>
> Moreover, they observed that competitors often used the few seconds before the referee stepped in to repeatedly kick the downed opponent in the head.
>
>
> **If all knockouts and technical knockouts are counted as concussions, the rate among professional mixed martial arts athletes seen in the study was about 16 per 100 athlete exposures.**
>
>
> It's tempting to compare those statistics to rates of concussions in sports such as football, which has been found to have 8.08 concussions per 100 plays, and ice hockey, with 2.2 concussions per 100 athlete-encounters.
>
> - [Head injury risk is high in mixed martial arts: study](http://www.reuters.com/article/us-head-injury-idUSBREA311SB20140402)
>
>
>
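As a sanity check on the figures quoted above: the study reports rates per 100 matches (12.7 knockouts, about 19 technical knockouts) and also per 100 athlete exposures (6.4, and about 16 combined). Those line up if each match counts as two athlete exposures but at most one fighter gets (technically) knocked out; the halving step here is my assumption, not something the article states:

```python
# Figures quoted per 100 matches in the 2014 study
ko_per_100_matches = 12.7    # knockouts
tko_per_100_matches = 19.0   # technical knockouts

# Assumption: one match = two athlete exposures, and at most one
# of the two fighters suffers the knockout per match.
ko_per_100_exposures = ko_per_100_matches / 2
combined_per_100_exposures = (ko_per_100_matches + tko_per_100_matches) / 2

# Close to the study's reported 6.4 and ~16 per 100 athlete exposures
print(ko_per_100_exposures, combined_per_100_exposures)
```

The small residual gaps (6.35 vs. 6.4, 15.85 vs. 16) are presumably rounding in the original reporting.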
From an article about [an ongoing long-term study](http://www.ncbi.nlm.nih.gov/pubmed/25633832) of brain injuries in boxing and MMA, whose results to date were published in 2015:
>
> Is boxing the better sport or does mixed martial arts win that title? While spectators may debate for hours, the answer to that question when it focuses solely on the health of participants is simple: **Both are bad, research indicates, even if martial arts combatants have a slight advantage. The repeated head blows sustained by fighters during their battles link to slower cognitive processing speeds and smaller volumes of certain brain parts.**
>
>
> “**Repetitive head trauma may be a risk factor for Alzheimer’s disease and is considered the primary cause of chronic traumatic encephalopathy (CTE)**,” wrote the authors in their new study. **Alzheimer’s is a well-known form of dementia, while CTE is a progressive degenerative disease of the brain linked to memory loss, confusion, impaired judgment, impulse control problems, aggression, depression, and progressive dementia.**
>
>
> **To understand how these sports might affect fighters’ brains, researchers from Cleveland Clinic turned to the data collected by the Professional Fighters Brain Health Study (PFBHS). They identified 224 professional fighters: 131 mixed martial arts (MMA) fighters and 93 boxers. The PFBHS athletes were all between the ages of 18 and 44 and the average time these professionals had fought was about four years, with an average number of 10 total matches...** Next, the researchers matched these athletes with 22 same-aged people with a similar level of education but no history of head trauma.
>
>
> At the start of the study, all participants underwent an MRI scan to assess their brain volume and then they returned for a brain scan annually for four years after that. At each juncture, the researchers tested their verbal memory, processing speed, fine motor skills, and reaction times as a general assessment of brainpower. Next, the researchers calculated for each athlete a Fight Exposure Score, or FES, which combines duration and intensity of fight career...
>
>
> **Fighters with an FES score of four were found to be 8.8 percent slower in processing speed than those with an FES score of 0. Add to that, the higher the score, the smaller the brain volume, particularly in the thalamus and the caudate**... The researchers speculate the typical response to a punch — when a fighter’s head rotates slightly — might be the cause of volume loss in the thalamus and caudate.
>
>
> **More generally, smaller brain volumes plus higher Fight Exposure Scores were linked to slower brain processing speeds. In fact, the researchers estimated a 0.19 percent reduction in processing speed per fight and a 2.1 percent reduction for each increase in FES. Irrespective of age, boxers tended to fare worse than martial arts combatants.**
>
>
> “**Perhaps the most obvious explanation is that boxers get hit in the head more**,” the authors note. “**MMA fighters can utilize other combat skills such as wrestling and jiu jitsu to win their match by submission without causing a concussion.” In the end, boxers' brain structure volumes were smaller and they were mentally slower than the mixed martial arts fighters. Ever so slightly, then, MMA edges out boxing as the 'better' sport, at least in terms of a fighter's health.**
>
> - [Head Blows And Brain Injury: Boxing And Mixed Martial Arts Cause A Similar Loss Of Processing Speed In Fighters' Brains](http://www.medicaldaily.com/head-blows-and-brain-injury-boxing-and-mixed-martial-arts-cause-similar-loss-320000)
>
>
>
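The processing-speed numbers in that quote are roughly self-consistent: at the reported 2.1 percent reduction per unit increase in Fight Exposure Score, a fighter at FES 4 would be about 8.4 percent slower, in the same ballpark as the quoted 8.8 percent. A quick back-of-the-envelope check (the linear extrapolation is my simplification, not the study's model):

```python
reduction_per_fes_pct = 2.1   # % slower processing per FES point (quoted)
fes = 4
linear_estimate_pct = reduction_per_fes_pct * fes   # simple linear scaling
reported_pct = 8.8                                  # quoted for FES 4 vs FES 0

print(linear_estimate_pct, reported_pct)
```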
---
Why are boxing, kickboxing, and MMA more dangerous than non-combat sports?
--------------------------------------------------------------------------
### TL;DR: Because in team sports, blows to the head are an unfortunate but inevitable consequence of the game; in combat sports, blows to the head are one of the *goals* of the game.
### American Football, Basketball, and Soccer:
In American football, you're trying to get the ball into the end zone and stop the other team from doing likewise. In basketball, you're trying to throw the ball through the hoop and stop the other team from doing likewise. In soccer, you're trying to kick the ball into the goal and stop the other team from doing likewise. Someone might get hurt in any of these sports, but that's a side effect, not the primary objective.
### Boxing:
In boxing, you're trying to punch the other guy until he is unconscious or unable to fight; although body shots sometimes achieve this goal, the surest way to pull it off is to punch your opponent in the face and head as many times as possible, as hard as you can. In other words, you're basically doing everything in your power to give the guy a concussion. What is a concussion? Simple: a concussion is traumatic brain injury. Thus, when two boxers step in the ring, they are essentially trying their hardest to inflict traumatic brain injury on one another.
The fact that boxers are restricted to hitting each other from the belt up increases the rates of brain injury, because you are only allowed to hit opponents on the end of the body where the head happens to be.
Aside from brain trauma, the most common injuries in boxing are broken bones (usually in the head and face - noses, eye sockets, cheekbones, jaws - but sometimes elsewhere, especially the ribs and hands), eye damage, and swelling. These are relatively minor compared to brain injury, and easier to recover from.
### Kickboxing:
In kickboxing, as in boxing, the goal is to hit your opponent until he is unconscious or incapable of fighting; however, you're allowed to strike with more of your body (hands, feet, shins, knees, and elbows, as opposed to just your hands), and you're allowed to hit the opponent in more places on his body (basically, everything except the testicles, throat, and eyes, as opposed to only the face, sides of the head, and front and sides of the torso).
This simultaneously reduces the percentage of shots that will be delivered to the head (because more areas are fair game) and increases the kinds of shots that will be delivered to the head (because you're kicking AND punching, and because punches can strike with other parts of your hand, not just your knuckles). It also helps that you can block incoming attacks with your legs as well as your hands and arms.
All in all, kickboxing is slightly less likely to cause brain damage than boxing is, for all the reasons mentioned above, but the difference is negligible.
However, other injuries are far more common in kickboxing than in boxing, mainly broken bones in the arms, legs, feet, hands, ribs, and face. And obviously, there's plenty of bruising, and some damage to ligaments and tendons.
### MMA:
If kickboxing can be described (via a slight oversimplification) as "boxing plus kicking", then MMA might be described as "kickboxing plus grappling". All of the factors I mentioned in relation to kickboxing apply here, but there is the added component of the grappling: whereas boxers and kickboxers are limited to striking, MMA fighters have other options. They can grab, hold, throw, wrestle, etc.
Obviously, the grappling angle of MMA means that even less time is spent trading shots to the face and head than is the case in kickboxing; as a result, traumatic brain injury is probably slightly less common in MMA than kickboxing, and even less common than it is in boxing. Again, though, the difference is relatively small.
On the other hand, [some researchers](http://thesportspost.com/science-says-mma-is-the-most-dangerous-sport/) believe that MMA might be a bigger risk for brain injury, although their findings were based on reviewing fight tapes rather than examining fighters or their medical records. They speculate that one reason for MMA being more dangerous is related to the fact that MMA bouts often end with the dominant fighter delivering a flurry of head shots to his opponent, while the opponent is pinned on the ground and incapable of defending himself.
Non-brain-related injuries are probably more common in MMA than in kickboxing or boxing, and in addition to broken bones, bruising, and torn ligaments/tendons, dislocations are more common because of grappling and submission holds.
---
If you're still not convinced, you might want to look at the [Manuel Velazquez Boxing Fatality Collection](http://ejmas.com/jcs/velazquez/), which lists all the known, recorded cases in which boxers died due to injuries sustained in the ring. At present, the [raw data](https://docs.google.com/spreadsheets/d/1e7iWW3KLuVYfQjN_MOK3a9M42Ht_ACIZVa3Tf5n_Z5o/edit?usp=sharing) lists 2,045 such cases between September 1724 and December 2015; there are 1,324 boxing deaths listed between December 1915 and December 2015.
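To put the raw-data counts above in perspective, the 1,324 recorded deaths between December 1915 and December 2015 average out to roughly 13 per year, or about one a month (simple division over the quoted figures; the collection itself is almost certainly incomplete):

```python
deaths = 1324          # recorded boxing deaths, Dec 1915 - Dec 2015
years = 2015 - 1915    # a 100-year span
per_year = deaths / years
per_month = per_year / 12

print(per_year, per_month)
```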
There is no such resource for kickboxing or MMA, and I'm not aware of any similar resources for any team sports. |
1,521 | What I mean is that is it more dangerous than other contact sports that aren't martial arts? Such as Football, Soccer, Basketball, etc...
And if yes, why? | 2012/10/16 | [
"https://martialarts.stackexchange.com/questions/1521",
"https://martialarts.stackexchange.com",
"https://martialarts.stackexchange.com/users/112/"
] | At a good gym, meaning one with experienced coaches and decent equipment, boxing/kickboxing should not be that dangerous. First of all, you're probably not sparring right away, and once you are, it's in a controlled environment with mouthpieces, headgear, gloves, and shin pads (if kickboxing).
As pointed out in a previous answer, you are probably at an increased risk of facial bruises, bloody noses, etc., but not serious injury. However, if you are training for an MMA style of fighting which includes takedowns, your rate of injury is going to spike sharply.
Comparing it to other sports is tricky. Even at the high-school level, we certainly had a higher rate of general injuries in football, and they were typically more severe; broken bones were not uncommon. Basketball didn't have the same incidence of really violent injuries, but a much higher incidence of high ankle sprains and the like. Soccer seemed relatively safe, but I never played at a highly competitive level, and if you watch the Europeans play you'd think it was more dangerous than trying to snuff volcanoes with your bare hands, given how often they go down screaming in pain.
In short, I think there are too many variables to objectively answer your question, but the above has been my (anecdotal) experience. | The short answer is yes. The very point of the sport is to do damage to your opponent. That being said, the chances of you actually breaking something (apart from your nose) are pretty slim. The one particularly dangerous thing that can happen to you in boxing or kickboxing is a concussion, which can, and after a few years probably will, cause scarring of the brain and make you "punch drunk".
So yes, boxing and kickboxing are pretty dangerous, but serious injuries are mostly limited to your brain unless you're very unlucky.
1,521 | What I mean is that is it more dangerous than other contact sports that aren't martial arts? Such as Football, Soccer, Basketball, etc...
And if yes, why? | 2012/10/16 | [
"https://martialarts.stackexchange.com/questions/1521",
"https://martialarts.stackexchange.com",
"https://martialarts.stackexchange.com/users/112/"
] | At a good gym, meaning one with experienced coaches and decent equipment, boxing/kickboxing should not be that dangerous. First of all, you're probably not sparring right away, and once you are, it's in a controlled environment with mouthpieces, headgear, gloves, and shin pads (if kickboxing).
As pointed out in a previous answer, you are probably at an increased risk of facial bruises, bloody noses, etc., but not serious injury. However, if you are training for an MMA style of fighting which includes takedowns, your rate of injury is going to spike sharply.
Comparing it to other sports is tricky. Even at the high-school level, we certainly had a higher rate of general injuries in football, and they were typically more severe; broken bones were not uncommon. Basketball didn't have the same incidence of really violent injuries, but a much higher incidence of high ankle sprains and the like. Soccer seemed relatively safe, but I never played at a highly competitive level, and if you watch the Europeans play you'd think it was more dangerous than trying to snuff volcanoes with your bare hands, given how often they go down screaming in pain.
In short, I think there are too many variables to objectively answer your question, but the above has been my (anecdotal) experience. | TL;DR: Yes, kickboxing, MMA, and boxing are extremely dangerous.
----------------------------------------------------------------
**The greatest risk in all combat sports in which blows to the head are allowed is traumatic brain injury. When it comes to traumatic brain injury, boxing is by far the most dangerous sport, but kickboxing and MMA aren't far behind.**
---
Brain Injury:
-------------
>
> Almost a century ago, a rare but serious form of dementia was linked to repetitive head injuries in boxing. The dementia was aptly named, “Boxer’s dementia.” Lately, this “punch drunk” dementia has been found to affect athletes in other sports, such as American football and soccer, where athletes' heads take repeated blows, so a broader term for this condition was needed.
>
>
> Chronic traumatic encephalopathy (CTE), is a related brain disorder that has been shown to affect other kinds of athletes, and more rarely, non-athletes who sustain head injuries...
>
>
> Its prevalence in boxers continues. One recent review study of athletes who were diagnosed with CTE found that of the 51 confirmed cases of CTE, 46 were in athletes – and of these, 39 were boxers. Five football players, a soccer player, and a wrestler made up the remainder of the athletes affected by chronic brain trauma.
>
> - [Athletes and Brain Trauma](http://www.thedoctorwillseeyounow.com/content/sports_medicine/art3599.html)
>
>
>
From an article on a 2014 [study](http://ajs.sagepub.com/content/early/2014/03/19/0363546514526151.full.pdf+html?sid=a2e7e3b5-e377-4840-9489-6e3903519bae), limited to kickboxers and MMA fighters:
>
> **The rate of serious head injuries among professional mixed martial arts competitors is potentially twice that of professional football players, according to U.S. researchers...**
>
>
> Yet [fighters'] risk of head injury hadn't been well studied, according to [Michael Hutchison, a researcher at the University of Toronto] and his coauthors. The highly physical nature of the contact sport - which some critics consider dangerous or violent - got the researchers wondering just how high a risk players run of getting knocked out repeatedly.
>
>
> The first event they looked for was knock-outs, in which players are literally knocked unconscious. The second, known as technical knockouts, occur when a referee or other authority judges that the player is too woozy to successfully defend him- or herself. Both kinds of knockout end the match.
>
>
> The researchers also used statistics to investigate which factors were associated with a player having a higher risk of a knockout or a technical knockout due to being struck multiple times.
>
>
> **They found that players suffered a knockout in 12.7 percent of matches, and that a technical knockout took place in about 19 percent, meaning that nearly one-third of matches ended as a result of some type of head trauma.**
>
>
> **These numbers mean that out of every 100 matches in which a mixed martial arts athlete could be knocked out, known as an athlete exposure, the injury would happen 6.4 times.**
>
>
> **The comparable concussion rates for boxing and kickboxing are, respectively, 4.9 and 1.9 per 100 exposures, the authors note.**
>
>
> Moreover, they observed that competitors often used the few seconds before the referee stepped in to repeatedly kick the downed opponent in the head.
>
>
> **If all knockouts and technical knockouts are counted as concussions, the rate among professional mixed martial arts athletes seen in the study was about 16 per 100 athlete exposures.**
>
>
> It's tempting to compare those statistics to rates of concussions in sports such as football, which has been found to have 8.08 concussions per 100 plays, and ice hockey, with 2.2 concussions per 100 athlete-encounters.
>
> - [Head injury risk is high in mixed martial arts: study](http://www.reuters.com/article/us-head-injury-idUSBREA311SB20140402)
>
>
>
From an article about [an ongoing long-term study](http://www.ncbi.nlm.nih.gov/pubmed/25633832) of brain injuries in boxing and MMA, whose results to date were published in 2015:
>
> Is boxing the better sport or does mixed martial arts win that title? While spectators may debate for hours, the answer to that question when it focuses solely on the health of participants is simple: **Both are bad, research indicates, even if martial arts combatants have a slight advantage. The repeated head blows sustained by fighters during their battles link to slower cognitive processing speeds and smaller volumes of certain brain parts.**
>
>
> “**Repetitive head trauma may be a risk factor for Alzheimer’s disease and is considered the primary cause of chronic traumatic encephalopathy (CTE)**,” wrote the authors in their new study. **Alzheimer’s is a well-known form of dementia, while CTE is a progressive degenerative disease of the brain linked to memory loss, confusion, impaired judgment, impulse control problems, aggression, depression, and progressive dementia.**
>
>
> **To understand how these sports might affect fighters’ brains, researchers from Cleveland Clinic turned to the data collected by the Professional Fighters Brain Health Study (PFBHS). They identified 224 professional fighters: 131 mixed martial arts (MMA) fighters and 93 boxers. The PFBHS athletes were all between the ages of 18 and 44 and the average time these professionals had fought was about four years, with an average number of 10 total matches...** Next, the researchers matched these athletes with 22 same-aged people with a similar level of education but no history of head trauma.
>
>
> At the start of the study, all participants underwent an MRI scan to assess their brain volume and then they returned for a brain scan annually for four years after that. At each juncture, the researchers tested their verbal memory, processing speed, fine motor skills, and reaction times as a general assessment of brainpower. Next, the researchers calculated for each athlete a Fight Exposure Score, or FES, which combines duration and intensity of fight career...
>
>
> **Fighters with an FES score of four were found to be 8.8 percent slower in processing speed than those with an FES score of 0. Add to that, the higher the score, the smaller the brain volume, particularly in the thalamus and the caudate**... The researchers speculate the typical response to a punch — when a fighter’s head rotates slightly — might be the cause of volume loss in the thalamus and caudate.
>
>
> **More generally, smaller brain volumes plus higher Fight Exposure Scores were linked to slower brain processing speeds. In fact, the researchers estimated a 0.19 percent reduction in processing speed per fight and a 2.1 percent reduction for each increase in FES. Irrespective of age, boxers tended to fare worse than martial arts combatants.**
>
>
> “**Perhaps the most obvious explanation is that boxers get hit in the head more**,” the authors note. “**MMA fighters can utilize other combat skills such as wrestling and jiu jitsu to win their match by submission without causing a concussion.” In the end, boxers' brain structure volumes were smaller and they were mentally slower than the mixed martial arts fighters. Ever so slightly, then, MMA edges out boxing as the 'better' sport, at least in terms of a fighter's health.**
>
> - [Head Blows And Brain Injury: Boxing And Mixed Martial Arts Cause A Similar Loss Of Processing Speed In Fighters' Brains](http://www.medicaldaily.com/head-blows-and-brain-injury-boxing-and-mixed-martial-arts-cause-similar-loss-320000)
>
>
>
---
Why are boxing, kickboxing, and MMA more dangerous than non-combat sports?
--------------------------------------------------------------------------
### TL;DR: Because in team sports, blows to the head are an unfortunate but inevitable consequence of the game; in combat sports, blows to the head are one of the *goals* of the game.
### American Football, Basketball, and Soccer:
In American football, you're trying to get the ball into the end zone and stop the other team from doing likewise. In basketball, you're trying to throw the ball through the hoop and stop the other team from doing likewise. In soccer, you're trying to kick the ball into the goal and stop the other team from doing likewise. Someone might get hurt in any of these sports, but that's a side effect, not the primary objective.
### Boxing:
In boxing, you're trying to punch the other guy until he is unconscious or unable to fight; although body shots sometimes achieve this goal, the surest way to pull it off is to punch your opponent in the face and head as many times as possible, as hard as you can. In other words, you're basically doing everything in your power to give the guy a concussion. What is a concussion? Simple: a concussion is traumatic brain injury. Thus, when two boxers step in the ring, they are essentially trying their hardest to inflict traumatic brain injury on one another.
The fact that boxers are restricted to hitting each other from the belt up increases the rates of brain injury, because you are only allowed to hit opponents on the end of the body where the head happens to be.
Aside from brain trauma, the most common injuries in boxing are broken bones (usually in the head and face - noses, eye sockets, cheekbones, jaws - but sometimes elsewhere - especially the ribs and hands), eye damage, and swelling. These injuries are relatively minor compared to brain trauma, and much easier to recover from.
### Kickboxing:
In kickboxing, as in boxing, the goal is to hit your opponent until he is unconscious or incapable of fighting; however, you're allowed to strike with more of your body (hands, feet, shins, knees, and elbows, as opposed to just your hands), and you're allowed to hit the opponent in more places on his body (basically, everything except the testicles, throat, and eyes, as opposed to only the face, sides of the head, and front and sides of the torso).
This simultaneously reduces the percentage of shots that will be delivered to the head (because more areas are fair game) and increases the kinds of shots that will be delivered to the head (because you're kicking AND punching, and because punches can strike with other parts of your hand, not just your knuckles). It also helps that you can block incoming attacks with your legs as well as your hands and arms.
All in all, kickboxing is slightly less likely to cause brain damage than boxing is, for all the reasons mentioned above, but the difference is negligible.
However, other injuries are far more common in kickboxing than in boxing, mainly broken bones in the arms, legs, feet, hands, ribs, and face. And obviously, there's plenty of bruising, and some damage to ligaments and tendons.
### MMA:
If kickboxing can be described (via a slight oversimplification) as "boxing plus kicking", then MMA might be described as "kickboxing plus grappling". All of the factors I mentioned in relation to kickboxing apply here, but there is the added component of the grappling: whereas boxers and kickboxers are limited to striking, MMA fighters have other options. They can grab, hold, throw, wrestle, etc.
Obviously, the grappling angle of MMA means that even less time is spent trading shots to the face and head than is the case in kickboxing; as a result, traumatic brain injury is probably slightly less common in MMA than in kickboxing, and less common still when compared with boxing. Again, though, the differences are relatively small.
On the other hand, [some researchers](http://thesportspost.com/science-says-mma-is-the-most-dangerous-sport/) believe that MMA might be a bigger risk for brain injury, although their findings were based on reviewing fight tapes rather than examining fighters or their medical records. They speculate that one reason for MMA being more dangerous is related to the fact that MMA bouts often end with the dominant fighter delivering a flurry of head shots to his opponent, while the opponent is pinned on the ground and incapable of defending himself.
Non-brain-related injuries are probably more common in MMA than in kickboxing or boxing, and in addition to broken bones, bruising, and torn ligaments/tendons, dislocations are more common because of grappling and submission holds.
---
If you're still not convinced, you might want to look at the [Manuel Velazquez Boxing Fatality Collection](http://ejmas.com/jcs/velazquez/), which lists all the known, recorded cases in which boxers died due to injuries sustained in the ring. At present, the [raw data](https://docs.google.com/spreadsheets/d/1e7iWW3KLuVYfQjN_MOK3a9M42Ht_ACIZVa3Tf5n_Z5o/edit?usp=sharing) lists 2,045 such cases between September 1724 and December 2015; there are 1,324 boxing deaths listed between December 1915 and December 2015.
There is no such resource for kickboxing or MMA, and I'm not aware of any similar resources for any team sports. |
217,548 | Say I have this scenario. 3 Libraries - 1) Orders 2) Suppliers 3) Containers
In the Orders library I can choose "Supplier" and "Container", which are lookup fields looking into the 'Title' column of the "Suppliers" and "Containers" libraries, respectively.
Now Let's say in Order X, I choose supplier: SupplierA.
I want to only be able to choose containers of that supplier.
Possible starting points:
a) In the Containers library I have a lookup field "Supplier" looking into the same Suppliers library, so the logic would be: show all the containers which have the same RelatedSupplier.
or
b) The name 'SupplierA' always appears within the Title of the container, so the logic would be, show all the containers where the Title contains the name of the supplier.
What would be the best way to achieve this? Javascript would probably need to be used unless there is a SharePoint way that I don't know about. I would appreciate a starting point.
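Since the question asks for a starting point, here is a minimal sketch of option (a) as client-side JavaScript against the SharePoint REST API. The list name `Containers` and the lookup field name `Supplier` are assumptions taken from the description above; a lookup column normally exposes a companion integer field (`SupplierId` here) that can be filtered on directly.

```javascript
// Build the REST URL that returns only the containers whose "Supplier"
// lookup points at the chosen supplier. A lookup column named "Supplier"
// exposes a companion integer field "SupplierId" in the REST API, so we
// can filter on it without needing $expand.
function containersForSupplierUrl(webUrl, supplierId) {
  return webUrl +
    "/_api/web/lists/getbytitle('Containers')/items" +
    "?$select=Id,Title" +
    "&$filter=SupplierId eq " + supplierId;
}

// Usage sketch: fetch the filtered items and hand them to a callback
// that repopulates the Container dropdown (error handling omitted).
function loadContainers(webUrl, supplierId, onItems) {
  fetch(containersForSupplierUrl(webUrl, supplierId), {
    headers: { Accept: "application/json;odata=verbose" }
  })
    .then(function (r) { return r.json(); })
    .then(function (data) { onItems(data.d.results); });
}
```

Option (b) could instead use an OData substring filter such as `$filter=substringof('SupplierA',Title)`, but the explicit lookup in option (a) is more robust, since it does not depend on a naming convention in the Title column.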
I am working in a SharePoint Online environment. | 2017/06/08 | [
"https://sharepoint.stackexchange.com/questions/217548",
"https://sharepoint.stackexchange.com",
"https://sharepoint.stackexchange.com/users/57235/"
] | I'm trying to diagnose what is happening here.
As a possible workaround, can you run
npm outdated
and in your package.json file update the @microsoft/* references? They should all be 1.1.0, and probably include sp-build-web, sp-core-library, sp-module-interfaces, sp-webpart-base and sp-webpart-workbench.
OK - sorted out the problem with running the original yeoman generator with the latest bits. The problem is that there was a package published with a patch version change (1.0.0 -> 1.0.1) that should have been 1.1.0. The 1.0.1 package references 1.1.0 packages (which are part of the latest release), so we wind up with a mismatched collection of packages. We're working on getting this sorted out and we will republish our packages. People using the 1.0.x yeoman generator won't need to do anything other than recreate the solution (or reinstall the npm packages) if they hit this issue.
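For illustration, a hedged sketch of how those pinned references might look in package.json after the update. The split between dependencies and devDependencies, and the exact package list, will vary by project; the package names and the 1.1.0 version come from the answer above.

```json
{
  "dependencies": {
    "@microsoft/sp-core-library": "~1.1.0",
    "@microsoft/sp-webpart-base": "~1.1.0"
  },
  "devDependencies": {
    "@microsoft/sp-build-web": "~1.1.0",
    "@microsoft/sp-module-interfaces": "~1.1.0",
    "@microsoft/sp-webpart-workbench": "~1.1.0"
  }
}
```

After editing the file, delete node_modules and run `npm install` so all transitive packages resolve against the same release.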
Last Update - fix has been released. Offending packages have been removed, updated packages published. | Ran into this issue yesterday, but the issue seems resolved after reinstalling my packages today. I'm using the 1.0.0 packages. |
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | On one hand, I think you're within your rights. If he didn't give you all the information you needed to do the job successfully (ie. the other plugin), how could you be expected to?
On the other hand, is there likely to be more work from this source, or through his friends? If so then you might want to seriously consider whether within your rights is enough. Maybe doing him a favour will more than pay off in the end. | It kinda depends on the contract you had with him. If your agreement was that he would pay you by the hour then you would have more room to say you need to charge him more than if you made a bid on the project. If you made a bid on the project and he didn't provide you all the information (but it would still be true that you didn't research the setup properly) then you could potentially bill for the difference of what you did bid compared to the amount you would have bid if you had known all the details. Ethically I would say you have a responsibility to get the code working on his equipment/setup as that's what he was paying for. He wasn't paying for code he couldn't use. There are always times when things like this happen as it's easy to overlook things like this. |
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | There is a third option here where you could charge him for the fix (ie, finishing the job), but not charge him for the debug time which only occurred because you did not do your *BEST* possible job as a programmer.
Don't get me wrong; **you did what most developers would do** with a contract job. However, as developers we also know that minor differences between servers can be the difference between a working plugin and a worthless plugin. Had you created a mirror of the client's setup (as close as reasonably possible), this likely could have been avoided.
I would *ask* him for payment (keyword *'ask'*; do not REQUIRE payment) regarding the fix, but leave the debug time out of it. Make a point of bringing this to his attention; perhaps include the debug time on the invoice with a deduction. | It kinda depends on the contract you had with him. If your agreement was that he would pay you by the hour then you would have more room to say you need to charge him more than if you made a bid on the project. If you made a bid on the project and he didn't provide you all the information (but it would still be true that you didn't research the setup properly) then you could potentially bill for the difference of what you did bid compared to the amount you would have bid if you had known all the details. Ethically I would say you have a responsibility to get the code working on his equipment/setup as that's what he was paying for. He wasn't paying for code he couldn't use. There are always times when things like this happen as it's easy to overlook things like this. |
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | It kinda depends on the contract you had with him. If your agreement was that he would pay you by the hour then you would have more room to say you need to charge him more than if you made a bid on the project. If you made a bid on the project and he didn't provide you all the information (but it would still be true that you didn't research the setup properly) then you could potentially bill for the difference of what you did bid compared to the amount you would have bid if you had known all the details. Ethically I would say you have a responsibility to get the code working on his equipment/setup as that's what he was paying for. He wasn't paying for code he couldn't use. There are always times when things like this happen as it's easy to overlook things like this. | I could be convinced either way on this one; but if there's more work on the line, I'd probably just end up eating it.
These are the precious life lessons that no amount of school can teach you. You're lucky that this one only cost you a couple of hours of your time :) |
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | It kinda depends on the contract you had with him. If your agreement was that he would pay you by the hour then you would have more room to say you need to charge him more than if you made a bid on the project. If you made a bid on the project and he didn't provide you all the information (but it would still be true that you didn't research the setup properly) then you could potentially bill for the difference of what you did bid compared to the amount you would have bid if you had known all the details. Ethically I would say you have a responsibility to get the code working on his equipment/setup as that's what he was paying for. He wasn't paying for code he couldn't use. There are always times when things like this happen as it's easy to overlook things like this. | I can understand why you feel miffed at the situation. You had everything working at your end, and it fell over at the customer site, because of something at their end.
Now, from the customer's perspective, they paid you for a change, and you had failed to deliver that change until you did the extra debugging and patching...
Personally (and I have been in that kind of situation), I would not send in an additional invoice, and would take the loss on the chin. However, I would factor the deployment risk into future quotes for this customer (and similar work elsewhere). If the customer queries the new, higher pricing for future work, use this as an example of the timescale risks you are facing, and point out that you do have rent to pay and need to eat, but that you have already demonstrated that you stand by the quality of your work and will make sure that they are happy, even after they have paid you.
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | There is a third option here where you could charge him for the fix (ie, finishing the job), but not charge him for the debug time which only occurred because you did not do your *BEST* possible job as a programmer.
Don't get me wrong; **you did what most developers would do** with a contract job. However, as developers we also know that minor differences between servers can be the difference between a working plugin and a worthless plugin. Had you created a mirror of the client's setup (as close as reasonably possible), this likely could have been avoided.
I would *ask* him for payment (keyword *'ask'*; do not REQUIRE payment) regarding the fix, but leave the debug time out of it. Make a point of bringing this to his attention; perhaps include the debug time on the invoice with a deduction. | On one hand, I think you're within your rights. If he didn't give you all the information you needed to do the job successfully (ie. the other plugin), how could you be expected to?
On the other hand, is there likely to be more work from this source, or through his friends? If so then you might want to seriously consider whether within your rights is enough. Maybe doing him a favour will more than pay off in the end. |
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | On one hand, I think you're within your rights. If he didn't give you all the information you needed to do the job successfully (ie. the other plugin), how could you be expected to?
On the other hand, is there likely to be more work from this source, or through his friends? If so then you might want to seriously consider whether within your rights is enough. Maybe doing him a favour will more than pay off in the end. | I could be convinced either way on this one; but if there's more work on the line, I'd probably just end up eating it.
These are the precious life lessons that no amount of school can teach you. You're lucky that this one only cost you a couple of hours of your time :) |
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | On one hand, I think you're within your rights. If he didn't give you all the information you needed to do the job successfully (ie. the other plugin), how could you be expected to?
On the other hand, is there likely to be more work from this source, or through his friends? If so then you might want to seriously consider whether within your rights is enough. Maybe doing him a favour will more than pay off in the end. | I can understand why you feel miffed at the situation. You had everything working at your end, and it fell over at the customer site, because of something at their end.
Now, from the customer's perspective, they paid you for a change, and you had failed to deliver that change until you did the extra debugging and patching...
Personally (and I have been in that kind of situation), I would not send in an additional invoice, and would take the loss on the chin. However, I would factor the deployment risk into future quotes for this customer (and similar work elsewhere). If the customer queries the new, higher pricing for future work, use this as an example of the timescale risks you are facing, and point out that you do have rent to pay and need to eat, but that you have already demonstrated that you stand by the quality of your work and will make sure that they are happy, even after they have paid you.
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | There is a third option here where you could charge him for the fix (ie, finishing the job), but not charge him for the debug time which only occurred because you did not do your *BEST* possible job as a programmer.
Don't get me wrong; **you did what most developers would do** with a contract job. However, as developers we also know that minor differences between servers can be the difference between a working plugin and a worthless plugin. Had you created a mirror of the client's setup (as close as reasonably possible), this likely could have been avoided.
I would *ask* him for payment (keyword *'ask'*; do not REQUIRE payment) regarding the fix, but leave the debug time out of it. Make a point of bringing this to his attention; perhaps include the debug time on the invoice with a deduction. | I could be convinced either way on this one; but if there's more work on the line, I'd probably just end up eating it.
These are the precious life lessons that no amount of school can teach you. You're lucky that this one only cost you a couple of hours of your time :) |
65,184 | So, I was tasked by a client to help him convert his WP menu to a JavaScript dropdown. I did the work on my development server; he saw the change and I was paid. I delivered the code and he deployed it, but nothing changed on his server, so I had to spend hours debugging it there. It turned out that another of his plugins was not compatible with my change. That plugin is really custom, and I had to change my code to make sure it's compatible with that plugin.
My question is, **is it fair for me to charge him for the hours I spent debugging it AND actually fixing it?** Or **is it still my responsibility to make sure my code deploys properly?** | 2011/04/04 | [
"https://softwareengineering.stackexchange.com/questions/65184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4615/"
] | There is a third option here where you could charge him for the fix (ie, finishing the job), but not charge him for the debug time which only occurred because you did not do your *BEST* possible job as a programmer.
Don't get me wrong; **you did what most developers would do** with a contract job. However, as developers we also know that minor differences between servers can be the difference between a working plugin and a worthless plugin. Had you created a mirror of the client's setup (as close as reasonably possible), this likely could have been avoided.
I would *ask* him for payment (keyword *'ask'*; do not REQUIRE payment) regarding the fix, but leave the debug time out of it. Make a point of bringing this to his attention; perhaps include the debug time on the invoice with a deduction. | I can understand why you feel miffed at the situation. You had everything working at your end, and it fell over at the customer site, because of something at their end.
Now, from the customer's perspective, they paid you for a change, and you had failed to deliver that change until you did the extra debugging and patching...
Personally (and I have been in that kind of situation), I would not send in an additional invoice, and would take the loss on the chin. However, I would factor the deployment risk into future quotes for this customer (and similar work elsewhere). If the customer queries the new, higher pricing for future work, use this as an example of the timescale risks you are facing, and point out that you do have rent to pay and need to eat, but that you have already demonstrated that you stand by the quality of your work and will make sure that they are happy, even after they have paid you.
429,930 | Currently I am using [Recaps](http://www.gooli.org/blog/recaps/) for switching between keyboard layouts. But I am looking for a replacement, because it is a little buggy and not updated for years. Do you know any replacement? | 2012/05/29 | [
"https://superuser.com/questions/429930",
"https://superuser.com",
"https://superuser.com/users/101936/"
] | Punto Switcher can do this! <http://punto.yandex.ru/win/>
Basically it allows you to switch the keyboard layout automatically, based on what you are typing. But it can also switch keyboard layouts on Caps Lock or many other keys. If you don't like automatic switching you can turn it off in the settings. | I did it using the [PowerPro](http://powerpro.cresadu.com/) tool (since it is constantly loaded for other stuff anyway).
And now I achieve the language change by tapping Caps Lock, via a long press.
139,612 | I am currently updating my resume to reflect new responsibilities assumed in my current position. I am thinking of creating a new section on the resume to include publications, professional journal articles, and LinkedIn articles I have written in my profession of cybersecurity. **Several have been well received by my professional network**, and I am working with the editorial board of a professional organization to see if they are able to accept a publication for inclusion in their official magazine for members.
**For someone in a senior role, how worthwhile would these publications be considered?** Would future hiring managers see these as evidence of passion, commitment, and well-honed communication ability? | 2019/07/03 | [
"https://workplace.stackexchange.com/questions/139612",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/30062/"
] | It definitely won't hurt you (almost always) to put them in provided the opinion of the community/industry hasn't turned against you. If you have a large array of publications, consider selecting your top three and presenting them under "Selected Publications". Just like any kind of public available information, be aware that there is a wide gamut of opinions on a lot of topics, so anything more controversial will have a lower chance to aid you. | ### Placement is the key
It generally never hurts to include the information in your resume. However, **where**, **what**, and **how much** you choose to place in your resume is something you can put some thought into.
* Is including a select few of them going to influence the chances of you getting hired? If yes, sure display the section prominently, maybe even on the first page.
* Is it just an additional item showcasing one of your skill/achievement? You can put it in a later page.
To conclude, the inclusion may play a crucial role in one job and not make much of a difference in another. You'd be better off maintaining different versions of your resume and/or customize it based on the job you are applying for.
You should even consider handpicking which publications to list in your resume, based on the role, when applying for a particular job.
If you maintain a personal website, it would be a good idea to link to the articles from your website. You can always mention your website address in the resume, as it lets hiring personnel easily browse through any/all information about you. |
139,612 | I am currently updating my resume to reflect new responsibilities assumed in my current position. I am thinking of creating a new section on the resume to include publications, professional journal articles, and LinkedIn articles I have written in my profession of cybersecurity. **Several have been well received by my professional network**, and I am working with the editorial board of a professional organization to see if they are able to accept a publication for inclusion in their official magazine for members.
**For someone in a senior role, how worthwhile would these publications be considered?** Would future hiring managers see these as evidence of passion, commitment, and well-honed communication ability? | 2019/07/03 | [
"https://workplace.stackexchange.com/questions/139612",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/30062/"
] | It definitely won't hurt you (almost always) to put them in provided the opinion of the community/industry hasn't turned against you. If you have a large array of publications, consider selecting your top three and presenting them under "Selected Publications". Just like any kind of public available information, be aware that there is a wide gamut of opinions on a lot of topics, so anything more controversial will have a lower chance to aid you. | **Yes, your publications are great to include on your resume if you're eager to share them and believe they demonstrate your capabilities.**
Beyond your technical knowledge, your publications demonstrate:
* **Your ability to effectively document what you know** - you're able to write down your own knowledge in a way that is easy for others to understand and make use of
* **Your willingness to share knowledge** - you've made efforts to share what you know and you'll likely take time to share your knowledge with future colleagues
* **Your personal interest in your professional work** - you're not the kind of person who's in it just for a paycheck, you actually enjoy what you're doing
* **You're open to criticism and broad review of your work** - You posted your compositions for large audiences, including topical experts, to review and criticize
* **You can "get the job done"** - you took the articles to completion, having a real publication is a rare accomplishment |
63,421 | If we [become] Fully Deity in Christ, based on [Colossians 2:9-10] which states:
>
> [9] For in Christ all the **fullness of the Deity** lives in bodily form, [10] and in Christ **you have been brought to fullness**. He is the head over every power and authority.

* Are Fully-Deified souls in Christ "sinless" or "omnipotent" (or both "sinless" and "omnipotent")?
**What does "Fullness of the Deity" (πλήρωμα τῆς Θεότητος) mean [for humans] in the context of [Colossians 2]?** | 2021/07/13 | [
"https://hermeneutics.stackexchange.com/questions/63421",
"https://hermeneutics.stackexchange.com",
"https://hermeneutics.stackexchange.com/users/37964/"
] | NIV Colossians 2:
>
> 9 For in Christ all the fullness of the Deity lives in bodily form,
>
>
>
Deity
Θεότητος (Theotētos)
Noun - Genitive Feminine Singular
Strong's 2320: Deity, Godhead. From theos; divinity.
In other words, all the fullness of the Deity lives in bodily form in Christ. Here *Deity* applies only to Christ.
>
> 10a and in Christ you have been brought to fullness.
>
>
>
In other words, you have been brought to fullness in Christ. It does not say that you acquire Deity status. This meaning is confirmed in 10b
>
> He is the head over every power and authority.
>
>
>
Christ is the head, not us.
There cannot be two omnipotent beings in a logical universe. | **1 THESS 5:23** *Now may the God of peace Himself sanctify you completely; and may your whole spirit, soul, and body be preserved blameless at the coming of our Lord Jesus Christ.*
There is debate in theological circles over the makeup of ‘man’. Some say tripartite body, soul, spirit, some are dualistic, that is, body and soul are ‘one’.
It’s true that the soul and spirit are difficult to separate, in fact there is only one ‘thing’ that can separate them ..
**HEB 4:12** *For the word of God is living and powerful, and sharper than any two-edged sword, piercing even to the division of soul and spirit,*
But, that ‘one’ *thing* is *the* ‘thing’ that matters. (The word). It's in this understanding (of the makeup of man) that the answer to your question lies. If you **don't** ‘see’ the distinction between soul and spirit, you will need to re-interpret much of Paul's teachings..... including Colossians, from which you are quoting...
It’s the numerous times Paul says that if you are a believer, then you are ‘in him’ or ‘in Christ’.
**COL 2:10** *and you are complete in Him*
The ‘key’ to this is ...
**2 COR 5:17** *Therefore, if anyone is in Christ, he is a new creation; old things have passed away; behold, all things have become new.*
What is ‘new’? What ‘part’ of you is a ‘new creation’? Body? Soul? or Spirit? Or is this merely ‘figurative’? This decision is yours to take, but whatever you decide, it will influence your interpretation of all of Paul's teachings, every one of them. But, as I said, there is no agreement on this among theologians.
So when you asked, what does “*Fullness of the Deity*” mean in Colossians 2, the answers you get, and the answer you’ll accept depends on your viewpoint of ‘body, soul, spirit’. I say this (fullness of deity) is fulfilled ‘spiritually’, that is, via your recreated ‘spirit’. You (that is, your ‘spirit man’) has been reborn. New.
But you’ve already accepted another view so obviously won’t agree with mine. Nevertheless I post this for others to consider. |
86,306 | I know from basic physics lessons that a box painted black will absorb heat better than a box covered in tin foil. However a box covered in tin foil will lose heat slower than a black box.
So what is the best way to conserve the temperature of a box? (aiming for 0 degrees Celsius inside the box when it's -60 outside).
I mean would painting the outside of the box black, and having tin foil on the inside work? So the box can absorb heat better (black paint) and the tin foil making it harder for heat to escape? | 2013/11/12 | [
"https://physics.stackexchange.com/questions/86306",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4/"
] | If your box (at 0-20°C) starts out hotter than the environment (at -60°C) then your best strategy is to prevent any heat flowing out of the box into the environment i.e. insulate the box.
Using foil will reduce radiative energy transfer, however in most cases the cooling is dominated by convection rather than radiation and foil is a rather good conductor of heat. You can demonstrate this by wrapping yourself (at about 37°C) in foil and standing in a -60°C wind (though I wouldn't do this experiment for very long). Mind you, painting yourself black would also do little to keep you warm when you're standing in a -60°C wind.
However, suppose it's a clear winter day and the Sun is shining brightly. In that case painting the box black would help because it would increase the absorption of energy from the Sun. | A perfectly one way insulator would violate the law of conservation of energy. You could place it in a fluid filled box and let a temperature gradient develop. You could then use it to drive machinery. Bam! Energy for nothing. Therefore by the conservation of energy (and second law of thermodynamics: the entropy would decrease) such a one way insulator is impossible (although exceptions may exist if the insulator gets used up or something)
With respect to your question, covering the box with tin foil will prevent the energy from escaping better than the black one (the box's content will stay warmer for longer); however, it won't make the box any warmer.
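A rough order-of-magnitude check of the point that convection, not radiation, dominates at these temperatures (all numbers below are assumed typical values, not figures from the answers):

```latex
% Radiative loss per unit area (Stefan-Boltzmann), box surface at
% T_s = 273\,\mathrm{K}, surroundings at T_e = 213\,\mathrm{K}:
q_{\mathrm{rad}} = \varepsilon \sigma \left(T_s^4 - T_e^4\right)
  \approx \varepsilon \cdot 5.67\times10^{-8} \cdot \left(273^4 - 213^4\right)
  \approx \varepsilon \cdot 198\ \mathrm{W/m^2}
% Black paint (\varepsilon \approx 0.9):  q_{\mathrm{rad}} \approx 180\ \mathrm{W/m^2}
% Foil       (\varepsilon \approx 0.05): q_{\mathrm{rad}} \approx 10\ \mathrm{W/m^2}

% Convective loss with an assumed wind-driven coefficient
% h \approx 25\ \mathrm{W/(m^2\,K)}:
q_{\mathrm{conv}} = h \left(T_s - T_e\right) \approx 25 \cdot 60 = 1500\ \mathrm{W/m^2}
```

So even with black paint, radiative loss is roughly an order of magnitude smaller than convective loss in wind, which is why bulk insulation matters far more here than the surface finish.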
86,306 | I know from basic physics lessons that a box painted black will absorb heat better than a box covered in tin foil. However a box covered in tin foil will lose heat slower than a black box.
So what is the best way to conserve the temperature of a box? (aiming for 0 degrees Celsius inside the box when it's -60 outside).
I mean would painting the outside of the box black, and having tin foil on the inside work? So the box can absorb heat better (black paint) and the tin foil making it harder for heat to escape? | 2013/11/12 | [
"https://physics.stackexchange.com/questions/86306",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4/"
] | A perfectly one way insulator would violate the law of conservation of energy. You could place it in a fluid filled box and let a temperature gradient develop. You could then use it to drive machinery. Bam! Energy for nothing. Therefore by the conservation of energy (and second law of thermodynamics: the entropy would decrease) such a one way insulator is impossible (although exceptions may exist if the insulator gets used up or something)
With respect to your question, covering the box with tin foil will prevent the energy from escaping better than the black one (the box's content will stay warmer for longer); however, it won't make the box any warmer. | Make one box with the inside and outside as reflective as possible. Make a second box the same way, except bigger. Using magnets on all sides, 'levitate' box 1 inside box 2. Suck all the air out of box 2.
86,306 | I know from basic physics lessons that a box painted black will absorb heat better than a box covered in tin foil. However a box covered in tin foil will lose heat slower than a black box.
So what is the best way to conserve the temperature of a box? (aiming for 0 degrees Celsius inside the box when it's -60 outside).
I mean would painting the outside of the box black, and having tin foil on the inside work? So the box can absorb heat better (black paint) and the tin foil making it harder for heat to escape? | 2013/11/12 | [
"https://physics.stackexchange.com/questions/86306",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4/"
] | If your box (at 0-20°C) starts out hotter than the environment (at -60°C) then your best strategy is to prevent any heat flowing out of the box into the environment i.e. insulate the box.
Using foil will reduce radiative energy transfer, however in most cases the cooling is dominated by convection rather than radiation and foil is a rather good conductor of heat. You can demonstrate this by wrapping yourself (at about 37°C) in foil and standing in a -60°C wind (though I wouldn't do this experiment for very long). Mind you, painting yourself black would also do little to keep you warm when you're standing in a -60°C wind.
However, suppose it's a clear winter day and the Sun is shining brightly. In that case, painting the box black would help because it would increase the absorption of energy from the Sun. | Make one box with the inside and outside as reflective as possible. Make a second box the same way, except bigger. Using magnets on all sides, 'levitate' box 1 inside box 2. Suck all the air out of box 2.
22,829 | Are there any valid reasons to ***not*** rollover a former 401(k) when changing jobs?
I have several little ones that I never quite got around to rolling together - with an upcoming job change, should they all be combined into my new employer's retirement plan? If not, what else might be considered? | 2013/06/14 | [
"https://money.stackexchange.com/questions/22829",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/969/"
] | The biggest reason why one might want to leave 401k money invested in an ex-employer's plan is that the plan offers some *superior*
investment opportunities that are **not** available elsewhere,
e.g. some mutual funds that are not open to individual investors such as S&P index funds for institutional investors (these have expense ratios even
smaller than the already low expense ratios of good S&P index funds)
or "hot" funds that are (usually temporarily) closed to new investors,
etc. The biggest reason to roll over 401k money from an ex-employer's
plan to the 401k
plan of a new employer is essentially the same: the new employer's
plan offers *superior* investment opportunities
that are not available elsewhere. Of course, the new employer's
401k plan must accept
such roll overs. I do not believe that it is a *requirement* that a 401k plan
must accept rollovers, but rather an option that a plan can be set up to allow
for or not.
Another reason to roll over 401k money from one plan to another
(rather than into an IRA) is to keep it safe from creditors. If you are
sued and found liable for damages in a court proceeding, the plaintiff can come after IRA assets but not after 401k money. Also, you can take a loan from
the 401k money (subject to various rules about how much can be borrowed,
payment requirements etc) which you cannot from an IRA.
That being said, the benefits of keeping 401k money as 401k money must
be weighed against the usually higher administrative costs and usually
poorer and more limited choices of investment opportunities
available in most 401k plans as Muro has said already. | I've changed jobs several times and I chose to rollover my 401k from the previous employer into an IRA instead of the new employer's 401k plan. The biggest reason not to rollover the 401k into the new employer's 401k plan was due to the limited investments offered by 401k plans. I found it better to roll the 401k into an IRA where I can invest in any stock or fund. |
22,829 | Are there any valid reasons to ***not*** rollover a former 401(k) when changing jobs?
I have several little ones that I never quite got around to rolling together - with an upcoming job change, should they all be combined into my new employer's retirement plan? If not, what else might be considered? | 2013/06/14 | [
"https://money.stackexchange.com/questions/22829",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/969/"
] | I've changed jobs several times and I chose to rollover my 401k from the previous employer into an IRA instead of the new employer's 401k plan. The biggest reason not to rollover the 401k into the new employer's 401k plan was due to the limited investments offered by 401k plans. I found it better to roll the 401k into an IRA where I can invest in any stock or fund. | Another minor reason not to rollover would be to avoid the pro-rata taxes when doing a [backdoor Roth IRA contribution](http://en.wikipedia.org/wiki/Roth_IRA#Traditional_IRA_conversion_as_a_workaround_to_Roth_IRA_income_limits). |
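The pro-rata point can be made concrete with a small back-of-the-envelope calculation (not tax advice; the dollar amounts are made up). Under the pro-rata rule, a Roth conversion is taxed in proportion to the pre-tax share of *all* your traditional IRA balances, which is why a large rolled-over 401(k) sitting in an IRA can make a "backdoor" contribution mostly taxable:

```python
# Illustration only: taxable share of a Roth conversion under the pro-rata
# rule. pretax_ira_total is the combined pre-tax balance across all
# traditional IRAs; aftertax_basis is the non-deductible (already-taxed)
# basis being converted.
def taxable_portion(conversion, pretax_ira_total, aftertax_basis):
    total = pretax_ira_total + aftertax_basis
    return conversion * (pretax_ira_total / total)

# $6,000 non-deductible contribution converted with no other IRA money:
print(taxable_portion(6_000, 0, 6_000))        # → 0.0 (fully tax-free)
# Same contribution, but a $94,000 pre-tax rollover IRA also exists:
print(taxable_portion(6_000, 94_000, 6_000))   # → 5640.0 (94% taxable)
```

Keeping the old 401(k) out of an IRA keeps `pretax_ira_total` at zero, so the backdoor conversion stays tax-free.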
22,829 | Are there any valid reasons to ***not*** rollover a former 401(k) when changing jobs?
I have several little ones that I never quite got around to rolling together - with an upcoming job change, should they all be combined into my new employer's retirement plan? If not, what else might be considered? | 2013/06/14 | [
"https://money.stackexchange.com/questions/22829",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/969/"
] | The biggest reason why one might want to leave 401k money invested in an ex-employer's plan is that the plan offers some *superior*
investment opportunities that are **not** available elsewhere,
e.g. some mutual funds that are not open to individual investors such as S&P index funds for institutional investors (these have expense ratios even
smaller than the already low expense ratios of good S&P index funds)
or "hot" funds that are (usually temporarily) closed to new investors,
etc. The biggest reason to roll over 401k money from an ex-employer's
plan to the 401k
plan of a new employer is essentially the same: the new employer's
plan offers *superior* investment opportunities
that are not available elsewhere. Of course, the new employer's
401k plan must accept
such roll overs. I do not believe that it is a *requirement* that a 401k plan
must accept rollovers, but rather an option that a plan can be set up to allow
for or not.
Another reason to roll over 401k money from one plan to another
(rather than into an IRA) is to keep it safe from creditors. If you are
sued and found liable for damages in a court proceeding, the plaintiff can come after IRA assets but not after 401k money. Also, you can take a loan from
the 401k money (subject to various rules about how much can be borrowed,
payment requirements etc) which you cannot from an IRA.
That being said, the benefits of keeping 401k money as 401k money must
be weighed against the usually higher administrative costs and usually
poorer and more limited choices of investment opportunities
available in most 401k plans as Muro has said already. | Another minor reason not to rollover would be to avoid the pro-rata taxes when doing a [backdoor Roth IRA contribution](http://en.wikipedia.org/wiki/Roth_IRA#Traditional_IRA_conversion_as_a_workaround_to_Roth_IRA_income_limits). |
2,197,503 | Hi all,
How does NHibernate execute the queries? Does it manipulate the queries and use some query optimization techniques? And what is the query execution plan followed by NHibernate? | 2010/02/04 | [
"https://Stackoverflow.com/questions/2197503",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260892/"
] | >
> How does NHibernate execute the queries?
>
>
>
Not exactly sure about the question. But NH executes queries using normal ADO.NET with all the data passed as parameters.
>
> Does it manipulate the queries and use some query optimization techniques?
>
>
>
It generates queries that are as optimal as possible given the information provided to it.
It caches not only the queries, but also the data returned by them, if configured to do so.
>
> And what is the query execution plan followed by NHibernate?
>
>
>
NH takes into account that the execution plan should not be generated on the server if not required. So the execution plan will be the same for all queries of the same kind. | You can use a tool, such as [NHibernate Profiler](http://nhprof.com/) or SQL Server Profiler, to view the queries being executed. You may also want to research NHibernate's caching capabilities.
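The point about data being passed as parameters can be illustrated generically (this sketch uses Python's built-in `sqlite3`, not NHibernate itself, purely to show the mechanism): because the SQL text stays identical across calls and only the bound values change, the database can reuse one prepared statement/execution plan instead of parsing a new query per value.

```python
# Generic parameterized-query illustration with an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Same SQL string every time; only the bound parameter differs, so the
# engine sees one query "kind" rather than many distinct query strings.
names = [
    conn.execute("SELECT name FROM users WHERE id = ?", (uid,)).fetchone()[0]
    for uid in (1, 2)
]
print(names)  # → ['alice', 'bob']
```

This is the same reason an ORM that parameterizes everything tends to produce a stable, reusable set of server-side execution plans.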
8,196 | Bane saved Talia when she was a baby. He was pretty old maybe in his 20s or 30s. But then she grows up and wouldn't that make Bane 60? He looks young. | 2012/11/23 | [
"https://movies.stackexchange.com/questions/8196",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/3467/"
] | **Timeline Info from Script:**
When the Young Prisoner (Young Talia, though not introduced as such at the time) is introduced, the [Script](http://www.thesportshero.com/?p=4165) says she is 'about 10'.
>
> INSERT CUT: a child of about ten looks up towards the light
>
>
>
When Miranda Tate is introduced, the [Script](http://www.thesportshero.com/?p=4165) says she is in her 30s.
>
> Alfred talks to an elegant woman, thirties, Miranda Tate.
>
>
>
So the short answer is 20-29 years between Talia Al Ghul escaping the prison and the beginning of the events in The Dark Knight Rises.
**Batman Begins [Script](http://www.nolanfans.com/screenplays/) info:**
Bruce Wayne is age 8 in the first scene where he falls into the cave.
>
> This is Bruce Wayne, aged 8
>
>
>
Bruce Wayne is age 28 when he meets Ra's Al Ghul:
>
> BRUCE WAYNE aged 28
>
>
>
Bruce Wayne's 30th Birthday is at the end of Batman Begins:
>
> **EARLE:** Not yet. I checked the trust, and Bruce can’t assume control until his thirtieth birthday. (presses intercom) Jessica, where’s that coffee?
>
> **BOARD MEMBER:** But that’s in three months.
>
>
>
**Timeline Info from Movies:**
In **Batman Begins**
This was after his wife took his place (with Talia). It's also implied that he had discovered her fate at this point and gotten his revenge.[[source]](http://en.wikiquote.org/wiki/Batman_Begins)
>
> **Henri Ducard**: But I know the rage that drives you. That impossible anger strangling the grief, until the memory of your loved ones is just poison in your veins. And one day you catch yourself wishing the person you loved had never existed, so you'd be spared your pain. I wasn't always here in the mountains. Once I had a wife, my great love. She was taken from me. Like you, I was forced to learn that there are those without decency that must be fought without hesitation, without pity. Your anger gives you great power, but if you let it, it will destroy you, as it almost did me.
>
> **Bruce Wayne**: What stopped it?
>
> **Henri Ducard**: Vengeance.
>
>
>
Bruce Wayne spent 7 years traveling abroad according to Alfred in Batman Begins:[[source]](http://www.imdb.com/title/tt0372784/quotes?qt=qt0469952)
>
> **Bruce Wayne:** Have you told anyone I'm coming back?
>
> **Alfred Pennyworth:** I just couldn't figure the legal ramifications of bringing you back from the dead.
>
> **Bruce Wayne:** Dead?
>
> **Alfred Pennyworth:** You've been gone seven years.
>
>
>
In **The Dark Knight**,
>
> **The Joker:** Let's wind the clocks back a year. These cops and lawyers wouldn't dare cross any of you. I mean, what happened? Did... did your balls drop off?
>
>
>
I interpret this as one of two possibilities: 1) a year ago Batman didn't exist; or 2) a year ago Bruce Wayne was just starting as Batman and had not yet dealt a crippling blow to their organization.
In **The Dark Knight Rises**,
>
> **John Blake:** Those men locked up for eight years in Blackgate, and denied parole under the Dent Act, based on a lie?
>
>
>
So, **The Dark Knight Rises** takes place at least 9 years after **Batman Begins**.
**So, from all of this:**
Between 20 and 29 years passed from the time Talia escaped to the events at the beginning of The Dark Knight Rises.
Bane in the prison is at least in his late teens. All we see of him is a fairly young looking face.
So, we have a rough timeline. Assume year 0 is Bruce Wayne's birth. Years in *italics*, the specific year is unknown and provided as ranges which should contain **all** possible years, Years in **bold** are certain, and based on above sources. Any range (i.e. 0-9) indicates the exact point at which the event occurred is unknown, but could be anywhere in between, based on above sources.
**0**: Bruce Wayne Born.
*0-9*: Talia Al Ghul born.
*10-19*: Talia Al Ghul is about 10, escapes the prison, Bane is at least in his late teens, early 20s.
*10-28*: Talia Al Ghul finds her father, rescues Bane from prison. They are trained by the League of Shadows. Bane excommunicated.
**22**: Bruce Wayne leaves Gotham, travels the world living amongst criminals.
**28**: Bruce Wayne meets "Henri Ducard"(Ra's Al Ghul) and trains under him.
**29&30**: Bruce Wayne returns to Gotham, events of Batman Begins, Ra's Al Ghul's death.
*30-31*: Events of The Dark Knight
*38-39*: Events of The Dark Knight Rises. Bane is at least about the same age as Bruce Wayne.
So, based on that, there does seem to be an age problem. I don't think it was ever explicitly stated that Bane is younger than Bruce Wayne, just that Bruce Wayne was no longer at his top physical capability, having been a recluse for 8 years and using a cane.
---
*Timeline Info from Actors/Actresses age:*
This doesn't really give a definitive age for any of the characters, especially Bruce Wayne and Bane, but it's extra information I looked at before finding the above, so I included it here. Ages are rough, due to assuming the actor/actress had their birthday for 2011 at the time of filming. 2011 was chosen as an estimate of when filming took place, since the movie came out in Summer 2012, leaving time for post-production.
The actress who played Young Talia is [Joey King](http://www.imdb.com/name/nm1428821/), was 12 in 2011. The character appears to be anywhere from 8-13.
The actress who played Adult Talia/"Miranda Tate" is [Marion Cotillard](http://www.imdb.com/name/nm0182839/), was 36 in 2011. The character appears to be anywhere from late 20s to mid 30s.
The actor who played Bane is [Tom Hardy](http://www.imdb.com/name/nm0362766/), was 34 in 2011.
The actor who played Bruce Wayne/Batman is [Christian Bale](http://www.imdb.com/name/nm0000288/), was 37 in 2011. | I think it's implied that Bane is older than Tom Hardy at the time (at least). I just get that impression about his character, the way he acts, the older tone of his voice and also during the fight scene, he criticizes how Batman "Fights like a younger man". It's inferred he's older, 40, or even more than 50 depending upon how old he's supposed to be in the pit scene. |
124,729 | In Chopin's Marche funèbre, measure 19, in the right hand, is the A♭ (A flat) played *once* or *three times*?
Is the C played three times?
[](https://i.stack.imgur.com/HBeNJ.png)
Also adding Gymnopédie No. 1 by Satie:
[](https://i.stack.imgur.com/IHasm.png)
It seems that here, the F# is not played 4 times but only once. But it uses exactly the same notation as the Chopin Marche funèbre. | 2022/09/03 | [
"https://music.stackexchange.com/questions/124729",
"https://music.stackexchange.com",
"https://music.stackexchange.com/users/62695/"
] | The Ab is played three times, as it is notated. The C is played twice (also as it is notated). Are you perhaps confusing the *Slur* with a *Tie*? | Three times.
Ties connect just two notes. Ties could be notated like A below. Or, more likely, B - which *might* have been confusing.
But it wasn't written that way. The A♭ is played three times.
Not an enormous error in your playing to put right though!
[](https://i.stack.imgur.com/3kRhF.png) |
58,687,669 | I have an EKS cluster with a minimum of 3 and a maximum of 6 nodes, and I created an Auto Scaling group for this setup. How can I auto-scale the nodes when memory usage spikes up or down, given that the Auto Scaling group has no built-in metric for memory like it does for CPU?
Can somebody please suggest clear steps? I am new to this setup. | 2019/11/04 | [
"https://Stackoverflow.com/questions/58687669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11951718/"
] | Out of the box, ASG does not support scaling based on memory utilization.
You'll have to use a custom metric to do that.
[Here](https://medium.com/@lvthillo/aws-auto-scaling-based-on-memory-utilization-in-cloudformation-159676b6f4d6) is one way to do that.
Have you considered using CloudWatch alarms to monitor your nodes?
The scripts can collect memory parameters that can be used later.
See [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html) how to set it up. | You need to deploy the cluster autoscaler, which will increase or decrease the number of nodes for you.
See the [official docs](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler). |
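As a hedged sketch of the custom-metric route described above (all names and thresholds below are placeholder assumptions; `CWAgent`/`mem_used_percent` is the namespace/metric that the CloudWatch agent publishes by default), the two API payloads might look like:

```python
# Build the payloads for a memory-based step scaling setup. The "policy"
# dict is what you would pass to boto3.client("autoscaling")
# .put_scaling_policy(**policy); the "alarm" dict goes to
# boto3.client("cloudwatch").put_metric_alarm(**alarm), after appending
# the returned PolicyARN to alarm["AlarmActions"].
def memory_step_scaling_requests(asg_name, threshold=80.0):
    policy = {
        "AutoScalingGroupName": asg_name,
        "PolicyName": f"{asg_name}-mem-scale-out",
        "PolicyType": "StepScaling",
        "AdjustmentType": "ChangeInCapacity",
        # Add one node whenever the metric is at or above the threshold.
        "StepAdjustments": [
            {"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 1}
        ],
    }
    alarm = {
        "AlarmName": f"{asg_name}-memory-high",
        "Namespace": "CWAgent",            # default CloudWatch agent namespace
        "MetricName": "mem_used_percent",  # default agent memory metric
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        # Fill in with the PolicyARN returned by put_scaling_policy:
        "AlarmActions": [],
    }
    return policy, alarm
```

A mirror-image alarm (low free memory for scale-in) would be set up the same way with a negative `ScalingAdjustment`. Note this scales raw nodes; the cluster autoscaler approach in the other answer scales based on pending pods instead, which is usually preferable for Kubernetes workloads.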
58,687,669 | I have an EKS cluster with a minimum of 3 and a maximum of 6 nodes, and I created an Auto Scaling group for this setup. How can I auto-scale the nodes when memory usage spikes up or down, given that the Auto Scaling group has no built-in metric for memory like it does for CPU?
Can somebody please suggest clear steps? I am new to this setup. | 2019/11/04 | [
"https://Stackoverflow.com/questions/58687669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11951718/"
] | I think you need a Step Scaling Policy.
* Target Tracking Policy: can be created for metrics where the needed capacity is proportional to the metric, e.g. if average CPU utilization is near or above the target value, the ASG will add capacity, i.e. scale out.
See Considerations on <https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html>
* Step Scaling Policy: is user-controlled; you create your own CloudWatch alarms, decide the action, and set up an inversely proportional policy, e.g. if average free memory is low, you will want to scale out, and vice versa. | You need to deploy the cluster autoscaler, which will increase or decrease the number of nodes for you.
See the [official docs](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler). |
58,687,669 | I have an EKS cluster with a minimum of 3 and a maximum of 6 nodes, and I created an Auto Scaling group for this setup. How can I auto-scale the nodes when memory usage spikes up or down, given that the Auto Scaling group has no built-in metric for memory like it does for CPU?
Can somebody please suggest clear steps? I am new to this setup. | 2019/11/04 | [
"https://Stackoverflow.com/questions/58687669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11951718/"
] | Out of the box, ASG does not support scaling based on memory utilization.
You'll have to use a custom metric to do that.
[Here](https://medium.com/@lvthillo/aws-auto-scaling-based-on-memory-utilization-in-cloudformation-159676b6f4d6) is one way to do that.
Have you considered using CloudWatch alarms to monitor your nodes?
The scripts can collect memory parameters that can be used later.
See [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html) how to set it up. | I think you need a Step Scaling Policy.
* Target Tracking Policy: can be created for metrics where the needed capacity is proportional to the metric, e.g. if average CPU utilization is near or above the target value, the ASG will add capacity, i.e. scale out.
See Considerations on <https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html>
* Step Scaling Policy: is user-controlled; you create your own CloudWatch alarms, decide the action, and set up an inversely proportional policy, e.g. if average free memory is low, you will want to scale out, and vice versa. |
262,106 | >
> What is the sentence structure of "Divergent as the arguments are, It is my firm conviction that..."?
>
>
>
Is it a correct sentence? It seems like an inversion but I can't find this structure online. Thanks in advance | 2020/10/07 | [
"https://ell.stackexchange.com/questions/262106",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/104987/"
] | The technical term for loss of appetite is *anorexia*; however, if you use this in general conversation, people might assume you were referring to a specific condition, *anorexia nervosa*, which in fact does not always involve loss of appetite. | I would say I was **full**. **Full**, however, means lack of appetite due to a particular reason: that you have already eaten.
>
> Do you have an appetite?
>
>
>
>
> No I'm full.
>
>
>
*Full* works in many cases but if you had no appetite because you were sick, you would not say you were full. |
262,106 | >
> What is the sentence structure of "Divergent as the arguments are, It is my firm conviction that..."?
>
>
>
Is it a correct sentence? It seems like an inversion but I can't find this structure online. Thanks in advance | 2020/10/07 | [
"https://ell.stackexchange.com/questions/262106",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/104987/"
] | The medical term for reduced appetite is **[anorexia](https://en.wikipedia.org/wiki/Anorexia_(symptom))**. This is the normal medical term for reduced desire to eat for a variety of causes, e.g. illness such as common cold, hormone imbalance, influenza, fever, and others. However, this generic term for appetite loss should not be confused with **[anorexia nervosa](https://en.wikipedia.org/wiki/Anorexia_nervosa)**, which is a mental health disorder. | I would say I was **full**. **Full**, however, means lack of appetite due to a particular reason. That reason being that you have already eaten.
>
> Do you have an appetite?
>
>
>
>
> No I'm full.
>
>
>
*Full* works in many cases but if you had no appetite because you were sick, you would not say you were full. |
262,106 | >
> What is the sentence structure of "Divergent as the arguments are, It is my firm conviction that..."?
>
>
>
Is it a correct sentence? It seems like an inversion but I can't find this structure online. Thanks in advance | 2020/10/07 | [
"https://ell.stackexchange.com/questions/262106",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/104987/"
] | I would say I was **full**. **Full**, however, means lack of appetite due to a particular reason. That reason being that you have already eaten.
>
> Do you have an appetite?
>
>
>
>
> No I'm full.
>
>
>
*Full* works in many cases but if you had no appetite because you were sick, you would not say you were full. | Inappetent. For example: "It is early attended with high fever and marked general weakness and inappetence". |
262,106 | >
> What is the sentence structure of "Divergent as the arguments are, It is my firm conviction that..."?
>
>
>
Is it a correct sentence? It seems like an inversion but I can't find this structure online. Thanks in advance | 2020/10/07 | [
"https://ell.stackexchange.com/questions/262106",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/104987/"
] | The medical term for reduced appetite is **[anorexia](https://en.wikipedia.org/wiki/Anorexia_(symptom))**. This is the normal medical term for a reduced desire to eat from a variety of causes, e.g. illness such as the common cold, hormone imbalance, influenza, fever, and others. However, this generic term for appetite loss should not be confused with **[anorexia nervosa](https://en.wikipedia.org/wiki/Anorexia_nervosa)**, which is a mental health disorder. | The technical term for loss of appetite is *anorexia*; however, if you use this in general conversation, people might assume you were referring to a specific condition, *anorexia nervosa*, which in fact does not always involve loss of appetite.
262,106 | >
> What is the sentence structure of "Divergent as the arguments are, It is my firm conviction that..."?
>
>
>
Is it a correct sentence? It seems like an inversion but I can't find this structure online. Thanks in advance | 2020/10/07 | [
"https://ell.stackexchange.com/questions/262106",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/104987/"
] | The technical term for loss of appetite is *anorexia*; however, if you use this in general conversation, people might assume you were referring to a specific condition, *anorexia nervosa*, which in fact does not always involve loss of appetite. | Inappetent. For example: "It is early attended with high fever and marked general weakness and inappetence".
262,106 | >
> What is the sentence structure of "Divergent as the arguments are, It is my firm conviction that..."?
>
>
>
Is it a correct sentence? It seems like an inversion but I can't find this structure online. Thanks in advance | 2020/10/07 | [
"https://ell.stackexchange.com/questions/262106",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/104987/"
] | The medical term for reduced appetite is **[anorexia](https://en.wikipedia.org/wiki/Anorexia_(symptom))**. This is the normal medical term for reduced desire to eat for a variety of causes, e.g. illness such as common cold, hormone imbalance, influenza, fever, and others. However, this generic term for appetite loss should not be confused with **[anorexia nervosa](https://en.wikipedia.org/wiki/Anorexia_nervosa)**, which is a mental health disorder. | Inappetent. For example: "It is early attended with high fever and marked general weakness and inappetence". |
3,508,026 | Currently, my friend has a program that checks the users Windows CD-Key and then it goes through a one way encryption. He, then, adds that new generated number to the program for checking purposes and then he compiles it and then he sends it off to the client. Is there a better way to keep the program from being shared utilizing PHP somehow instead of his current method while not using a login system of any kind. | 2010/08/18 | [
"https://Stackoverflow.com/questions/3508026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170365/"
] | Fortunately, I've done extensive research in this area. A more affordable and, some say, safer option than Zend Guard is [SourceGuardian](http://www.sourceguardian.com). It allows binding to IP addresses, MAC addresses, domains, and time. They're also working on a version that will support a physical dongle attached to the computer. They also release often and have pretty good support.
Another affordable and secure option is [NuCoder](http://www.nucoder.com), they have similar options to SourceGuardian, but also allow the option to bind to a uniquely generated hardware id.
Both SourceGuardian and NuCoder are the best out there, in my opinion anyway, however NuCoder has fallen behind in supporting the latest PHP releases. Currently they support up to 5.2, while SourceGuardian supports the very latest, including 5.3.
Furthermore, since your code is converted to protected bytecode, you also gain speed benefits, as PHP doesn't need to take the extra step of converting your code into bytecode. However, as the previous commenter noted, this will require your users to install the necessary loaders; this usually entails a simple line addition to php.ini, and for PHP versions after 5.2.6 no additions are usually necessary. | In short, any program using a key can be forged one way or another, especially if the sources are available (which is the case with most PHP projects). You might want to look into [Zend Guard](http://www.zend.com/en/products/guard/) if you really want something professional. But most security systems are a pain for the clients, in my opinion.
A *good* system I came across once was a C-compiled library that had many redundant code checks (spaghetti-like calling trees) and would validate an encrypted serial number. Since the application was custom and did not have many releases, there was no "crack" available for it, and the client was in deep water when the reseller went into bankruptcy. Eventually, that code was cracked anyway.
In my opinion, the only true secure way would be to host your application and not releasing any of your source code, then have the client pay for a license and send him only an API key that he must send for each request. |
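The friend's scheme described in the question, hashing the Windows CD-key one way and baking the result into the compiled program for a license check, can be sketched roughly as follows. This is an illustrative sketch, not the actual implementation: the salt value and key format are assumptions.

```python
import hashlib

def license_token(cd_key: str, salt: str = "app-specific-salt") -> str:
    # One-way hash of the product key; distributing only this token
    # means the original key cannot be recovered from the binary.
    return hashlib.sha256((salt + cd_key).encode("utf-8")).hexdigest()

def is_licensed(cd_key: str, embedded_token: str) -> bool:
    # At runtime, re-hash the machine's key and compare with the
    # token compiled into this client's build.
    return license_token(cd_key) == embedded_token

# Build step: compute the token for this client's key and embed it.
token = license_token("ABCDE-12345")
```

As the answers above note, any such client-side check can be patched out; hosting the sensitive logic server-side behind an API key is the more robust route.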
276,348 | Whenever I want to view a Pokemon within my Pokedex and click on it the Pokedex rotates back to Charmander, which was my starter. Does anyone else have that glitch or knows a solution? Everything else seems to work properly
Update: this is how it looks like when I swipe
[](https://i.stack.imgur.com/Vg1Lg.jpg)
Update:
I have a Sony XperiaZ5 compact Android version 6.0.1
I tried restarting the app and the phone multiple times. I also reinstalled the app, which also didn't fix the Pokedex. | 2016/07/21 | [
"https://gaming.stackexchange.com/questions/276348",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/153358/"
] | Niantic released v 0.31.0 for android on 7/30/2016
Beforehand, the Pokedex glitched and scrolled just as you described. After I installed the update, I was able to load the Pokedex and view the Pokemon with no issues at all. | I had this issue and it was due to my touchscreen not registering that I had removed my finger.
A restart of the phone resolved the issue.
It might also be worth clearing your cache or re-installing the app. You won't lose anything doing this, because all of your data is stored on your account, not your phone. |
243,911 | I'm trying to recall a story I read in the early-to-mid 80s. It was a school library book and probably old at the time. I don't recall any details of the cover. I do not believe it is [Lost Race of Mars](https://scifi.stackexchange.com/questions/105552/kids-sf-chapterbook-about-mars-with-three-eyed-martian-animals-and-hidden-old).
The protagonist is a teenager on Mars. A new authority figure comes into his life, either a teacher or an administrator of some sort, someone who is not flexible or patient with the customs of the people on Mars. For example, teenagers routinely painted their helmets for individuality - I believe the protagonist painted his like a tiger - and the authority figure ordered them to remove the paint.
Protagonist has a "pet" of local Martian fauna. At one point in the story, protagonist discovers that their "pet" is in fact the immature version of the adult Martian lifeform, something that had not been understood before that.
The overall arc of the story is Protagonist's rebellion against the new Authority, which I believe succeeded in the end, perhaps helped by his new understanding of or with the Martian natives. | 2021/03/01 | [
"https://scifi.stackexchange.com/questions/243911",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/20490/"
] | Sounds like "**[Red Planet](https://www.isfdb.org/cgi-bin/title.cgi?1328)**" by **Robert Heinlein** (1949).
I read it a long time ago and have forgotten much of the plot, but I remember the issues between the young protagonist and the school director on Mars.
There's a scene involving his mask that matches your description:
>
> Although this face occupied the whole screen and was weirdly
> distorted, Jim had no trouble in placing it as a colonial's respirator
> mask. What startled him almost out of the personal unawareness with
> which he was accepting this shadow show was that **he recognized the
> mask. It was decorated with the very tiger stripes** that Smythe had
> painted out for a quarter credit; it was his own, as it used to be.
>
>
>
The protagonist also had a pet named Willis that is later shown to be a child-like stage of the three-armed/legged Martians. | Red Planet by Robert A. Heinlein
Plot matches on all points
The authority figure is the new headmaster |
36,079,661 | I want to execute a post-build script from TFS which copies a folder in my TFS to the Build drop location.
I have very little knowledge of how to do this.
Kindly provide the code.
I am using VS2015, tfs 2015.
i also have VS 2013, TFS 2013 | 2016/03/18 | [
"https://Stackoverflow.com/questions/36079661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5706191/"
] | TFS 2015 Build has an out of the box template 'Visual Studio' that already does this using the PublishBuildArtifacts task.
Look at leveraging this task in your build def in order to accomplish what you are looking for.
<https://msdn.microsoft.com/en-us/Library/vs/alm/Build/steps/utility/publish-build-artifacts>
<https://github.com/Microsoft/vso-agent-tasks/tree/master/Tasks/PublishBuildArtifacts> | In XAML build, you can check in your script, and specify a post-build script path in your XAML build definition.
[This](https://tfsbuildextensions.codeplex.com/SourceControl/latest#Scripts/GatherItemsForDrop.ps1) script gathers some of the typical binary types from the typical locations and copies them to the folder from which TFBuild copies and drops to your staging location. For more information about running a script in your XAML build process, see: <https://msdn.microsoft.com/library/dn376353%28v=vs.120%29.aspx>
In vNext build, you can simply add a [PublishBuildArtifacts](https://msdn.microsoft.com/en-us/Library/vs/alm/Build/steps/utility/publish-build-artifacts) task as Mr. Kraus mentions. For how to use this task, see: <http://www.codewrecks.com/blog/index.php/2015/06/30/manage-artifacts-with-tfs-build-vnext/> |
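The core of a "gather and copy to the drop" post-build step can be sketched generically. This is an illustrative sketch in Python rather than the actual GatherItemsForDrop.ps1 PowerShell script; the folder names are assumptions, and on a real TFS agent the drop path would come from a build variable such as the staging directory.

```python
import shutil
from pathlib import Path

def copy_to_drop(source: Path, drop: Path) -> Path:
    # Mirror the source folder into the drop location, preserving the
    # folder name so repeated builds land in a predictable subfolder.
    destination = drop / source.name
    shutil.copytree(source, destination, dirs_exist_ok=True)
    return destination
```

The same idea in a real post-build script is just a recursive copy from the build output into the staging/drop path that the build definition supplies.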
5,021,159 | It seems to be dead. Is it?
If it is, what should I use instead? | 2011/02/16 | [
"https://Stackoverflow.com/questions/5021159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/615030/"
] | [gtk2hs](http://www.haskell.org/haskellwiki/Gtk2Hs) is very much alive... I think it's too early to announce wxHaskell's demise yet. Hackage says the May version builds fine with GHC 7; there might be other reasons there hasn't been an update.
...unless, of course, you're looking for more haskelly approaches to GUI like [grapefruit](http://www.haskell.org/haskellwiki/Grapefruit), none of which are really ready for prime time, though, due to the general epicness of getting [FRP](http://en.wikipedia.org/wiki/Functional_reactive_programming) right. | wxHaskell is actively maintained for several years now. |
26,255 | Are there 2 letter ISO codes for the pinyin or hepburn transliterations? If not, are there non-ISO abbreviations in common use? Thanks. | 2017/10/27 | [
"https://linguistics.stackexchange.com/questions/26255",
"https://linguistics.stackexchange.com",
"https://linguistics.stackexchange.com/users/20318/"
] | ISO has codes for languages ([ISO 639](https://en.wikipedia.org/wiki/ISO_639-2)), and for scripts ([ISO 15924](https://en.wikipedia.org/wiki/ISO_15924)); but it has no codes for transliterations, as you can see by perusing ISO's standards on [Writing and Transliteration](https://www.iso.org/ics/01.140.10/x/). ISO adopts and standardises transliterations; but unlike languages and scripts, it has not catalogued them.
Using existing ISO codes, *zh-Latn* means "Chinese in Latin Script", and *zh-Latn-CN* means "Chinese in Latin Script, localised to China"; but that just implies "Pinyin" (and what country would we attribute Hepburn to?) It is not a solution.
I can't find evidence that there are standard codes for transliterations anywhere else either. A 2011 [IETF RFC draft for transliteration codes](https://datatracker.ietf.org/doc/html/draft-falk-transliteration-tags-01) went nowhere. | ISO has some standard romanization systems listed at <https://en.wikipedia.org/wiki/List_of_ISO_romanizations>. Pinyin is **ISO 7098**, but unfortunately no other systems of Chinese romanizations have been given ISO codes so you may not find this useful. |
6,900,056 | Is it possible to make iOS and Android apps compliant with [Section 508 of the U.S. Rehabilitation Act](http://www.section508.gov/index.cfm?fuseAction=1998Amend)? I have an upcoming meeting where this question will be raised. | 2011/08/01 | [
"https://Stackoverflow.com/questions/6900056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/359519/"
] | See here for Apple's docs on how to make apps fully accessible: [Accessibility Programming Guide for iOS](http://developer.apple.com/library/ios/#documentation/UserExperience/Conceptual/iPhoneAccessibility/Introduction/Introduction.html)
In particular:
>
> If you use only standard UIKit controls, you probably don’t have to do much additional work to make sure your application is accessible. In this case, your next step is to ensure that the default attribute information supplied by these controls makes sense in your application
>
>
> | Sure, you can use a similar feature to VoiceOver, vibrations, sounds, use the flash on the iPhone 4, etc. You can't use braille though. |
6,900,056 | Is it possible to make iOS and Android apps compliant with [Section 508 of the U.S. Rehabilitation Act](http://www.section508.gov/index.cfm?fuseAction=1998Amend)? I have an upcoming meeting where this question will be raised. | 2011/08/01 | [
"https://Stackoverflow.com/questions/6900056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/359519/"
] | I've done a couple of section 508 reviews but don't take what I say as the final word or a legal opinion.
Section 508 is usually used in government contracts and is part of the purchasing process. If your app is not completely 508 compliant, this won't mean you can't get the contract; it just means you may lose out if someone has an app that is more compliant than yours with the same general feature set and usability.
As far as 508 compliance on a mobile device the VPAT, which is the form you need to fill out does not specifically mention smart phones. Take a look at
<http://www.itic.org/policy/accessibility>
To view the current VPAT. If I had to fill out a VPAT I would focus on "Section 1194.21 Software Applications and Operating Systems" since you are writing an application for what is basically a computer with assistive technology on it.
I'm a totally blind iPhone user and from my personal experience with the accessibility of Apple's built in applications as well as many third party applications I would say creating an application that is 508 compliant or very close is doable.
Android is a different story. I don't have any firsthand experience with Android, but due to the different versions of Android, different hardware, and customizations from the device makers that may negatively impact accessibility, you can't guarantee your app will be accessible. The best you can do is try to find a handset with good accessibility, develop on that handset, and in the VPAT make it clear that you only tested with one specific hardware device, so your results will vary. With Apple it's safe to say that if an app is accessible on iOS 4.0 it will be accessible on an iPhone 3GS, iPhone 4, iPad, and iPod touch, since they control the operating system and hardware. My understanding is that Android's accessibility API is more limited than Apple's, so that is something else to take into account.
For an introduction to making iPhone apps accessible, other than Apple’s documentation, see [this](http://mattgemmell.com/2010/12/19/accessibility-for-iphone-and-ipad-apps)
For an introduction to general Android accessibility see [this](https://eyes-free.googlecode.com/svn/trunk/documentation/android_access/index.html). Pay attention to the choosing a phone section for more detail on the fragmentation issue I mentioned earlier.
For a developer introduction to writing accessible Android apps see [this](https://eyes-free.googlecode.com/svn/trunk/documentation/android_access/developers.html) | Sure, you can use a similar feature to VoiceOver, vibrations, sounds, use the flash on the iPhone 4, etc. You can't use braille though. |
6,900,056 | Is it possible to make iOS and Android apps compliant with [Section 508 of the U.S. Rehabilitation Act](http://www.section508.gov/index.cfm?fuseAction=1998Amend)? I have an upcoming meeting where this question will be raised. | 2011/08/01 | [
"https://Stackoverflow.com/questions/6900056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/359519/"
] | See here for Apple's docs on how to make apps fully accessible: [Accessibility Programming Guide for iOS](http://developer.apple.com/library/ios/#documentation/UserExperience/Conceptual/iPhoneAccessibility/Introduction/Introduction.html)
In particular:
>
> If you use only standard UIKit controls, you probably don’t have to do much additional work to make sure your application is accessible. In this case, your next step is to ensure that the default attribute information supplied by these controls makes sense in your application
>
>
> | I've done a couple of section 508 reviews but don't take what I say as the final word or a legal opinion.
Section 508 is usually used in government contracts and is part of the purchasing process. If your app is not completely 508 compliant, this won't mean you can't get the contract; it just means you may lose out if someone has an app that is more compliant than yours with the same general feature set and usability.
As far as 508 compliance on a mobile device the VPAT, which is the form you need to fill out does not specifically mention smart phones. Take a look at
<http://www.itic.org/policy/accessibility>
To view the current VPAT. If I had to fill out a VPAT I would focus on "Section 1194.21 Software Applications and Operating Systems" since you are writing an application for what is basically a computer with assistive technology on it.
I'm a totally blind iPhone user and from my personal experience with the accessibility of Apple's built in applications as well as many third party applications I would say creating an application that is 508 compliant or very close is doable.
Android is a different story. I don't have any firsthand experience with Android, but due to the different versions of Android, different hardware, and customizations from the device makers that may negatively impact accessibility, you can't guarantee your app will be accessible. The best you can do is try to find a handset with good accessibility, develop on that handset, and in the VPAT make it clear that you only tested with one specific hardware device, so your results will vary. With Apple it's safe to say that if an app is accessible on iOS 4.0 it will be accessible on an iPhone 3GS, iPhone 4, iPad, and iPod touch, since they control the operating system and hardware. My understanding is that Android's accessibility API is more limited than Apple's, so that is something else to take into account.
For an introduction to making iPhone apps accessible, other than Apple’s documentation, see [this](http://mattgemmell.com/2010/12/19/accessibility-for-iphone-and-ipad-apps)
For an introduction to general Android accessibility see [this](https://eyes-free.googlecode.com/svn/trunk/documentation/android_access/index.html). Pay attention to the choosing a phone section for more detail on the fragmentation issue I mentioned earlier.
For a developer introduction to writing accessible Android apps see [this](https://eyes-free.googlecode.com/svn/trunk/documentation/android_access/developers.html) |
1,297 | If I remember rightly the Buddha is quoted as saying something along the lines of:
>
> Do not believe anything I say until you can prove it by yourself
>
>
>
In what text(s) of the Buddhist cannon is this quoted? | 2014/06/22 | [
"https://buddhism.stackexchange.com/questions/1297",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/193/"
] | I've searched the Internet, and found a [website](http://www.fakebuddhaquotes.com/believe-nothing-no-matter-where-you-read-it/) claiming that the quotation in question is a bad translation of a fragment from [Kalama Sutta](http://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html) which in original goes:
>
> “Now, Kalamas, don’t go by reports, by legends, by traditions, by scripture, by logical conjecture, by inference, by analogies, by agreement through pondering views, by probability, or by the thought, ‘This contemplative is our teacher.’ When you know for yourselves that, ‘These qualities are skillful; these qualities are blameless; these qualities are praised by the wise; these qualities, when adopted & carried out, lead to welfare & to happiness’ — then you should enter & remain in them.
>
>
>
Buddha says that common sense and logical thinking is not enough to determine the truth. Only through his/her own experience or the experience of *the wise ones* one can be sure what to follow. | A text of the Buddhist Canon this quote is said to come from is the [Kalama Sutta](https://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html); which is often misquoted and misrepresented. The [Kalama Sutta](https://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html) basically recommends judging a path by its results, namely, does this method of practice lead to more wholesome states?
It isn't a "charter for free enquiry" as some claim, or an invitation to indulge personal views. |
1,297 | If I remember rightly the Buddha is quoted as saying something along the lines of:
>
> Do not believe anything I say until you can prove it by yourself
>
>
>
In what text(s) of the Buddhist cannon is this quoted? | 2014/06/22 | [
"https://buddhism.stackexchange.com/questions/1297",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/193/"
] | I've searched the Internet, and found a [website](http://www.fakebuddhaquotes.com/believe-nothing-no-matter-where-you-read-it/) claiming that the quotation in question is a bad translation of a fragment from [Kalama Sutta](http://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html) which in original goes:
>
> “Now, Kalamas, don’t go by reports, by legends, by traditions, by scripture, by logical conjecture, by inference, by analogies, by agreement through pondering views, by probability, or by the thought, ‘This contemplative is our teacher.’ When you know for yourselves that, ‘These qualities are skillful; these qualities are blameless; these qualities are praised by the wise; these qualities, when adopted & carried out, lead to welfare & to happiness’ — then you should enter & remain in them.
>
>
>
Buddha says that common sense and logical thinking is not enough to determine the truth. Only through his/her own experience or the experience of *the wise ones* one can be sure what to follow. | My teaching is not to believe rather to practice.
Buddhism is about reason, realisation and awakening. It has to be applicable and put to daily life living. |
1,297 | If I remember rightly the Buddha is quoted as saying something along the lines of:
>
> Do not believe anything I say until you can prove it by yourself
>
>
>
In what text(s) of the Buddhist cannon is this quoted? | 2014/06/22 | [
"https://buddhism.stackexchange.com/questions/1297",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/193/"
] | The quote comes from Kalama Sutra ([AN 3.65](http://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html)) and is often taken out of context, hence misunderstood.
People of Kalama found themselves bombarded by tens of spiritual teachers, each claiming authority and expertise in spiritual matters. These teachers' doctrines were rather different from each other, but each was presented as The truth. Each teacher seemed quite certain of himself and was able to articulate the teaching logically.
When Buddha on one of his tours around the country arrived at Kalama and presented *his* teaching, the citizens honestly told him, that what he posits as The truth, to them looks like yet another teaching. "Is there any way" they asked, "that we can figure out which of these teachings is real?"
And that's when Buddha gave his famous answer, the point of which:
**It is by its results that a teaching should be evaluated.**
* A teaching can be elaborate and logical, with precise definitions. According to some people's preconceptions, these are the marks of a true teaching.
* A teaching could be profound, deep and mysterious. Some people assume, if chasm is deep and they can't see the bottom, there must be something in there.
* A teaching could match student's view of the world, e.g. the scientific worldview, or a spiritual worldview, or both. Many people interpret Kalama Sutra this way, that they should not believe a teaching unless it matches "common sense". They don't seem to realize that what they assume as *common* sense is in fact the very tangle of preconceptions that holds them in Samsara.
* A teaching could go against the student's preconceptions and introduce a completely new theory of everything. Some students are very excited about such esoteric teachings, and their eyes glaze over teachers that, in their opinion, profane the Dharma by assuming it speaks about our everyday lives.
* A teacher can look confident and speak well, or be soft-spoken and funny, like Dalai Lama. Many people find it hard to relate to a teacher who mismatches their archetype of Sage or Wise Old Man.
According to Buddha, all these are secondary factors, that can't be used as identifying markers of Sat-Dharma (True/Eternal Law/Tradition). Instead, it is by the effects it brings out, both in student's psyche as well as in the world, that a teaching should be measured.
Sat-Dharma is famously good in the beginning, good in the middle and good in the end.
**Good in the beginning** means, even the outermost layer of Dharma, the one seen by non-Buddhists, has good influence on people. Even the laypeople who don't really practice, but are merely guided by basic Dharma principles, benefit from it. They find that Dharma not only happens to match their highest secular morality and wisdom, but that while secular morality is often too flexible, the compass of Dharma never wavers. When followed at large, True Dharma must lead to reduced suffering and increased harmony in the daily lives of common people.
**Good in the middle** means, when someone practices a slightly superficial version of Dharma, without fully understanding it yet, it greatly reduces amount of suffering one generates inside and around. Student learns to watch his mind and recognize its state, learns to stay mindful of the body and notice arising emotions, learns to not let harmful thoughts and emotions control him, learns to let go of attachments and preconceptions. This leads to increased quality of life, as the student can now stay cool through various life challenges.
**Good in the end** means, one eventually arrives at Liberating Realization, whereby one is no longer dominated by arbitrary formations, but can instead juggle formations freely.
While some kind of "Good in the end" is obviously the goal of all alternative teachings, it is "Good in the middle" and "Good in the beginning" that is a characteristic mark of True Dharma.
So when Gotama said, "do not believe anything I say until you can prove it by yourself" (or however you want to put it), this is what he meant. He did not mean we should reject a teaching unless it matches our preconceptions. He meant we should evaluate a teaching by its effect on our lives. The proof is in the pudding. | A text of the Buddhist Canon this quote is said to come from is the [Kalama Sutta](https://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html); which is often misquoted and misrepresented. The [Kalama Sutta](https://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html) basically recommends judging a path by its results, namely, does this method of practice lead to more wholesome states?
It isn't a "charter for free enquiry" as some claim, or an invitation to indulge personal views. |
1,297 | If I remember rightly the Buddha is quoted as saying something along the lines of:
>
> Do not believe anything I say until you can prove it by yourself
>
>
>
In what text(s) of the Buddhist cannon is this quoted? | 2014/06/22 | [
"https://buddhism.stackexchange.com/questions/1297",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/193/"
] | The quote comes from Kalama Sutra ([AN 3.65](http://www.accesstoinsight.org/tipitaka/an/an03/an03.065.than.html)) and is often taken out of context, hence misunderstood.
People of Kalama found themselves bombarded by tens of spiritual teachers, each claiming authority and expertise in spiritual matters. These teachers' doctrines were rather different from each other, but each was presented as The truth. Each teacher seemed quite certain of himself and was able to articulate the teaching logically.
When Buddha on one of his tours around the country arrived at Kalama and presented *his* teaching, the citizens honestly told him, that what he posits as The truth, to them looks like yet another teaching. "Is there any way" they asked, "that we can figure out which of these teachings is real?"
And that's when Buddha gave his famous answer, the point of which:
**It is by its results that a teaching should be evaluated.**
* A teaching can be elaborate and logical, with precise definitions. According to some people's preconceptions, these are the marks of a true teaching.
* A teaching could be profound, deep and mysterious. Some people assume, if chasm is deep and they can't see the bottom, there must be something in there.
* A teaching could match student's view of the world, e.g. the scientific worldview, or a spiritual worldview, or both. Many people interpret Kalama Sutra this way, that they should not believe a teaching unless it matches "common sense". They don't seem to realize that what they assume as *common* sense is in fact the very tangle of preconceptions that holds them in Samsara.
* A teaching could go against student's preconceptions and introduce a completely new theory of everything. Some students are very excited about such esoteric teachings and their eyes glaze over teachers that, in their opinion, profanate Dharma by assuming it speaks about our everyday lives.
* A teacher can look confident and speak well, or be soft-spoken and funny, like Dalai Lama. Many people find it hard to relate to a teacher who mismatches their archetype of Sage or Wise Old Man.
According to Buddha, all these are secondary factors, that can't be used as identifying markers of Sat-Dharma (True/Eternal Law/Tradition). Instead, it is by the effects it brings out, both in student's psyche as well as in the world, that a teaching should be measured.
Sat-Dharma is famously good in the beginning, good in the middle and good in the end.
**Good in the beginning** means, even the outermost layer of Dharma, the one seen by non-Buddhists, has good influence on people. Even the laypeople who don't really practice, but are merely guided by basic Dharma principles, benefit from it. They find that Dharma not only happens to match their highest secular moral and wisdom, but that while secular moral is often too flexible, the compass of Dharma never wavers. When followed at large, True Dharma must lead to reduced suffering and increased harmony in daily lives of common people.
**Good in the middle** means, when someone practices a slightly superficial version of Dharma, without fully understanding it yet, it greatly reduces amount of suffering one generates inside and around. Student learns to watch his mind and recognize its state, learns to stay mindful of the body and notice arising emotions, learns to not let harmful thoughts and emotions control him, learns to let go of attachments and preconceptions. This leads to increased quality of life, as the student can now stay cool through various life challenges.
**Good in the end** means, one eventually arrives at Liberating Realization, whereby one is no longer dominated by arbitrary formations, but can instead juggle formations freely.
While some kind of "Good in the end" is obviously the goal of all alternative teachings, it is "Good in the middle" and "Good in the beginning" that is a characteristic mark of True Dharma.
So when Gotama said, "do not believe anything I say until you can prove it by yourself" (or however you want to put it), this is what he meant. He did not mean we should reject a teaching unless it matches our preconceptions. He meant we should evaluate a teaching by its effect on our lives. The proof is in the pudding. | My teaching is not to believe rather to practice.
Buddhism is about reason, realisation and awakening. It has to be applicable and put to daily life living. |
30,786 | I am a sleep deprived mum of a 9 1/2 month old boy. He's otherwise a smiley, healthy and seems to be on the top percentile for both weight and height for his age. So he doesn't seem to be sleep deprived, grumpy and starving or obese.
He is also completely breastfed - fresh from the tap, not through choice but because he simply refuses the bottle (which is another issue) - but my plan is to wean him in 3-4 months. He also doesn't take the pacifier... he objects to anything artificial in his mouth, and we have tried many things (except for starving him until he takes the bottle) and now we have simply given up. He's eating solids alright and basically enjoys eating; he has also cut his day feeds drastically.
Nighttime however is becoming more and more of a nightmare. He was never a good sleeper and I made the rookie mistake of letting him sleep on the breast. I never managed the "put him down awake" thing that everyone else seems to be able to do. We also co-sleep so that I wouldn't die from all the multiple wakings.
Recently things have gone from bad to worse. He goes to sleep fairly easily but wakes up every hour until midnight or 1 am; then he wakes up every 2 hours until he wakes for the day between 0600 and 0630. He goes to bed between 1830 and 1930 depending on when he wakes up from his last nap, but typically he is out by 1900. Then the hourly waking starts.
I have searched the Internet and it seems a sleep association is to blame: as he falls asleep on the breast, he needs it to get back to sleep again. He can't connect the sleep cycles himself. This sounds like the most plausible reason, but what can I do about it save cry-it-out? I think he also has a second sleep association, which is being close to me. Sometimes (but rarely) he does get off the breast and wiggles himself to sleep, but usually he stays stuck against me.
The reason why I would like to leave CIO until there are no options left are because:
1. He wails and gets more upset the longer he cries. He doesn't calm down easily once he starts.
2. He started crawling, and a few days later cruising, and now he can go from furniture to furniture and climb down from the sofa. Things actually started to worsen a month ago when he started turning in his sleep, getting on all fours at night and crying, but he wasn't mobile until 2 weeks ago.
3. He is teething. His 3 teeth broke through in a week and one more on the way. I started giving him ibuprofen yesterday and he seemed to have slept better; will do it again tonight.
4. His separation anxiety seems to be getting worse. He was always clingy, but now he wails when I leave the room and crawls to chase after me. He also clutches me tightly after every feed.
So with so much going on, I really don't want to make it harder for him... but at the same time things are getting worse on the home front, as dear husband is blaming me for fostering his bad habits, inhibiting his ability to be independent, and basically being the reason why he can't soothe himself to sleep.
What can I do to improve the situation? Should I wait it out, cry it out? | 2017/07/02 | [
"https://parenting.stackexchange.com/questions/30786",
"https://parenting.stackexchange.com",
"https://parenting.stackexchange.com/users/28681/"
] | Well there are a number of things.
Separation anxiety getting worse is normal, developmentally appropriate & something you just have to pass through. The age where you might see more improvement with that is past 18 months. All children have it to some degree, and some have it much more than others. I saw no difference in the children that had me back at work at 6 weeks & the one I was home for, in that regard, despite being "used to" me leaving daily. It's a normal healthy part of early childhood. It's hard, but it will pass on its own.
Frequent waking can be a cycle, but it is also associated with oral ties (lip & tongue) as well as things like silent reflux. A child's sleep cycle isn't like an adult's; it's actually about 45 minutes, so when a child wakes hourly, chances are they have become fully alert after a single sleep cycle.
I noticed you said you permitted the baby to sleep on the breast. One way to help you is to work on trying to get baby to fall asleep at naps & at the first bed sleep, unlatched. I know it's work. I have been through it. If you are persistent, and just unlatch them while awake & shush, pat, walk, bounce, rock, etc, they will fall asleep. If you can consistently get the baby to finally accept unlatching before sleep, you do generally see them waking you less at night. All babies & children rouse. Adults do as well. It's normal. We roll over, change positions, etc. So you simply are aiming to get the baby to a place where he is more likely to get back to sleep without rousing you to do so. For me, I have always found getting them to unlatch before they doze off is most helpful there.
And if baby is going through milestones, developmental leaps, growth spurts & teeth, it's also just going to sometimes be stormy. It helps to remind yourself how fast the 9 months has gone. It will help you remember that this will pass faster than it seems to. It feels long in the moment, but then you can't believe it & they are talking & running around & you made it through after all.
So
The separation anxiety is normal stuff & expect that it will intensify going forward: <http://www.parenting.com/article/separation-anxiety-age-by-age>
If you have ever been told your baby has a tie or suspected it, you may want to have that relooked at. <http://www.drghaheri.com/blog/2014/2/20/a-babys-weight-gain-is-not-the-only-marker-of-successful-breastfeeding>
You might want to check into getting the "wonder weeks" app. It pretty accurately will tell you when to developmentally expect certain behaviors. It's not 100% but will give you a better idea of when your baby is most apt to being crabbier, wake more, etc. This blog sort of explains about the book & app. <http://www.weebeedreaming.com/my-blog/wonder-weeks-and-sleep>
And if you are interested, there is a sleep consultant that doesn't do CIO that I have heard rave reviews on & I know her through a mom group, and she has given me very sound input. I know there is a section on nap help on her site that is free. <http://childrenssleepconsultant.com/>
And mostly, hang in there. I found the last few months before they hit a year hard. There are a lot of things at play. It's a tough age. It's wonderful in a lot of ways too, but the developmental things happening, growth spurts, teeth, they combine to also challenge you at a time when you have hoped this whole thing was going to start getting easier. It will. It then just gets harder in new ways though. But sleep will get better & with more sleep, then all of life seems more manageable. And be nice to yourself whenever you can. Take long baths or just a 15 minute walk alone, or any other way you can squeeze in some breathing space. That too helps to make it more manageable & something I insist that I do for myself every single day. The only time I don't might be during illness. I will walk out even if the baby is screaming about it, because I always put them with someone loving & kind & just take that time to clear my head & have 15 minutes to be alone with my thoughts & listen to the wind. | At this stage, you may want to start by deliberately putting him down in his own room a few minutes early, and gradually increasing the time he is left alone there. Use a timer so you don't "give in" early. Once he has developed the trust that being on his own is not a permanent thing, he will start to fall asleep naturally (especially if you make it a point to tire him late in the day and do something relaxing just before bedtime).
Studies are fairly definitive that kids sleep better when they are not disturbed by the much noisier adults around them. Most "poor sleepers" in a cosleeping situation are actually being woken first by adult noises, then in turn they wake the parent. Take away the adult noises and suddenly they become good sleepers, to the relief of the sleep deprived mother.
(Edit) it is also important that he learns that he can't simply get attention every time by crying. You may end up buying a pair of earplugs and letting him cry it out, then going in to him after he stops crying. |
30,786 | I am a sleep-deprived mum of a 9 1/2 month old boy. He's otherwise smiley and healthy, and seems to be in the top percentile for both weight and height for his age. So he doesn't seem to be sleep deprived, grumpy, starving, or obese.
He is also completely breastfed, fresh from the tap, not by choice; he simply refuses the bottle (which is another issue). My plan is to wean him in 3-4 months. He also doesn't take the pacifier... he objects to anything artificial in his mouth, and we have tried many things (except for starving him until he takes the bottle) and have now simply given up. He's eating solids all right and basically enjoys eating; he has also cut his day feeds drastically.
Nighttime, however, is becoming more and more of a nightmare. He was never a good sleeper, and I made the rookie mistake of letting him sleep on the breast. I never managed the "put him down awake" thing that everyone else seems to be able to do. We also co-sleep so that I wouldn't die from all the multiple wakings.
Recently things have gone from bad to worse. He goes to sleep fairly easily but wakes up every hour until midnight or 1 am; then he wakes up every 2 hours until he wakes for the day between 0600 and 0630. He goes to bed between 1830 and 1930 depending on when he wakes up from his last nap, but typically he is out by 1900. Then the hourly waking starts.
I have searched the Internet and it seems a sleep association is to blame: as he falls asleep on the breast, he needs it to get back to sleep again. He can't connect the sleep cycles himself. This sounds like the most plausible reason, but what can I do about it save cry-it-out? I think he also has a second sleep association, which is being close to me. Sometimes (but rarely) he does get off the breast and wiggles himself to sleep, but usually he stays stuck against me.
The reason why I would like to leave CIO until there are no options left are because:
1. He wails and gets more upset the longer he cries. He doesn't calm down easily once he starts.
2. He started crawling, and a few days later cruising, and now he can go from furniture to furniture and climb down from the sofa. Things actually started to worsen a month ago when he started turning in his sleep, getting on all fours at night and crying, but he wasn't mobile until 2 weeks ago.
3. He is teething. His 3 teeth broke through in a week and one more on the way. I started giving him ibuprofen yesterday and he seemed to have slept better; will do it again tonight.
4. His separation anxiety seems to be getting worse. He was always clingy, but now he wails when I leave the room and crawls to chase after me. He also clutches me tightly after every feed.
So with so much going on, I really don't want to make it harder for him... but at the same time things are getting worse on the home front, as dear husband is blaming me for fostering his bad habits, inhibiting his ability to be independent, and basically being the reason why he can't soothe himself to sleep.
What can I do to improve the situation? Should I wait it out, cry it out? | 2017/07/02 | [
"https://parenting.stackexchange.com/questions/30786",
"https://parenting.stackexchange.com",
"https://parenting.stackexchange.com/users/28681/"
] | Well there are a number of things.
Separation anxiety getting worse is normal, developmentally appropriate & something you just have to pass through. The age where you might see more improvement with that is past 18 months. All children have it to some degree, and some have it much more than others. I saw no difference in the children that had me back at work at 6 weeks & the one I was home for, in that regard, despite being "used to" me leaving daily. It's a normal healthy part of early childhood. It's hard, but it will pass on its own.
Frequent waking can be a cycle, but it is also associated with oral ties (lip & tongue) as well as things like silent reflux. A child's sleep cycle isn't like an adult's; it's actually about 45 minutes, so when a child wakes hourly, chances are they have become fully alert after a single sleep cycle.
I noticed you said you permitted the baby to sleep on the breast. One way to help you is to work on trying to get baby to fall asleep at naps & at the first bed sleep, unlatched. I know it's work. I have been through it. If you are persistent, and just unlatch them while awake & shush, pat, walk, bounce, rock, etc, they will fall asleep. If you can consistently get the baby to finally accept unlatching before sleep, you do generally see them waking you less at night. All babies & children rouse. Adults do as well. It's normal. We roll over, change positions, etc. So you simply are aiming to get the baby to a place where he is more likely to get back to sleep without rousing you to do so. For me, I have always found getting them to unlatch before they doze off is most helpful there.
And if baby is going through milestones, developmental leaps, growth spurts & teeth, it's also just going to sometimes be stormy. It helps to remind yourself how fast the 9 months has gone. It will help you remember that this will pass faster than it seems to. It feels long in the moment, but then you can't believe it & they are talking & running around & you made it through after all.
So
The separation anxiety is normal stuff & expect that it will intensify going forward: <http://www.parenting.com/article/separation-anxiety-age-by-age>
If you have ever been told your baby has a tie or suspected it, you may want to have that relooked at. <http://www.drghaheri.com/blog/2014/2/20/a-babys-weight-gain-is-not-the-only-marker-of-successful-breastfeeding>
You might want to check into getting the "wonder weeks" app. It pretty accurately will tell you when to developmentally expect certain behaviors. It's not 100% but will give you a better idea of when your baby is most apt to being crabbier, wake more, etc. This blog sort of explains about the book & app. <http://www.weebeedreaming.com/my-blog/wonder-weeks-and-sleep>
And if you are interested, there is a sleep consultant that doesn't do CIO that I have heard rave reviews on & I know her through a mom group, and she has given me very sound input. I know there is a section on nap help on her site that is free. <http://childrenssleepconsultant.com/>
And mostly, hang in there. I found the last few months before they hit a year hard. There are a lot of things at play. It's a tough age. It's wonderful in a lot of ways too, but the developmental things happening, growth spurts, teeth, they combine to also challenge you at a time when you have hoped this whole thing was going to start getting easier. It will. It then just gets harder in new ways though. But sleep will get better & with more sleep, then all of life seems more manageable. And be nice to yourself whenever you can. Take long baths or just a 15 minute walk alone, or any other way you can squeeze in some breathing space. That too helps to make it more manageable & something I insist that I do for myself every single day. The only time I don't might be during illness. I will walk out even if the baby is screaming about it, because I always put them with someone loving & kind & just take that time to clear my head & have 15 minutes to be alone with my thoughts & listen to the wind. | This might be an odd perspective to hear, but my first thought when I hear about this situation is to worry about the baby's teeth.
I used to work in a pediatric dental office, and the most difficult situations we had to deal with were young kids who were frequent night-nursers. They would usually come in somewhere between 1 year and 18 months with their front baby teeth so rotten that they all had to be pulled out, which is a much bigger deal with a kid that young than it is for older kids for two reasons: 1) the older kids can usually deal with getting their teeth pulled in the dental office, while the babies have to have general anesthesia at the hospital in order to be still enough and not freaked out, and 2) the baby teeth hold the space for the adult teeth to grow in to, and without them (especially for a long period of time) the likelihood of needing braces goes way up. I would strongly recommend wiping baby's gums and new teeth with a soft damp washcloth right after his bed time feeding, and then not nursing until he wakes up in the morning unless you absolutely have to.
As far as whether it's ok to let him cry it out, I'll tell you what my pediatrician told us (which by the way was VERY hard for me to stick to- I wanted to go to my son so badly, and was sure there would be some kind of abandonment issue from me leaving him to cry): she said that once baby is over 10lbs, he should be just fine overnight without additional feeding, and that at 6 months baby had gotten enough of a sense of security from sleeping with mom that it was safe to let him cry it out. And she said "it will be much harder for you than it will be for him." She said he might cry the whole night the first night, but that the second night he'd probably only cry for an hour or two, and the third night maybe 15 minutes. And that was basically exactly how it went.
My experience? It was HORRIBLE for me. But it was great for my son's sleep cycle in the long run: he's basically been sleeping through the night ever since then (with the exception of bad dreams sometimes), AND he has had a perfectly regular nap schedule (2-5pm every day) until the last two months when he's started sometimes skipping naps - he is now 3. Also, no abandonment issues. He's perfectly confident being left with other family or friends or babysitters, or at daycares or schools, because he knows I'll always come back. I would say the only downside to CIO is that you have to endure listening to your child scream for a couple of nights. Of course, you don't want to do it TOO early, but I think you would be safe at this point :)
I forgot to add that the pediatrician recommended that if baby was screaming for too long (I don't remember the exact length of time) that it was fine to go into the room and remind the baby that I was there, talk calmly to him, and remind him it was time to sleep - but not to pick him up! Or if I absolutely had to pick him up, to put him back down in his own bed and go back to my room afterwards (but she strongly recommended against picking him up - it was more just a remind-baby-that-mom-is-still-there thing). |
4,424,827 | If I connect with Java to MySQL on my localhost server, I access instantaneously.
But if I connect outside of the localhost, from a network PC (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more). | 2010/12/12 | [
"https://Stackoverflow.com/questions/4424827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539941/"
] | Because your computer needs time to send packets to external servers and they need time to send packets back. It's called network latency, and is not an issue with Java specifically, but a general network issue. | Network latency plus connection creation time would be my guess. I don't know what else you have between the client machine and the MySQL server. |
4,424,827 | If I connect with Java to MySQL on my localhost server, I access instantaneously.
But if I connect outside of the localhost, from a network PC (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more). | 2010/12/12 | [
"https://Stackoverflow.com/questions/4424827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539941/"
] | Because your computer needs time to send packets to external servers and they need time to send packets back. It's called network latency, and is not an issue with Java specifically, but a general network issue. | It will always take longer to make a connection across the network than to make the same connection locally. However, assuming you have a fairly typical local network, 4-5 seconds sounds a bit extreme. My guess (and it is just a guess) would be that the majority of the extra time is being consumed by network name resolution (i.e. DNS and/or netbios).
I would suggest that you try the connection using a numeric IP address, rather than a name. |
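To see why the numeric-IP test above is informative, one can time the name-resolution step on its own; if it is slow for your server's hostname, every new connection pays that cost. A minimal sketch (the `resolve_time` helper and hostnames are illustrative, not from the original answer):

```python
import socket
import time

def resolve_time(host):
    """Measure how long name resolution alone takes for `host`."""
    t0 = time.monotonic()
    socket.gethostbyname(host)          # the same lookup a JDBC URL with a hostname triggers
    return time.monotonic() - t0

# "localhost" normally resolves instantly via the hosts file; a slow DNS
# server for your MySQL hostname would add the same delay to every new
# connection, which a numeric IP in the connection string avoids.
dt = resolve_time("localhost")
print(f"localhost resolved in {dt:.4f}s")
```

Comparing this number for `localhost` against your server's hostname separates DNS cost from raw network latency.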
4,424,827 | If I connect with Java to MySQL on my localhost server, I access instantaneously.
But if I connect outside of the localhost, from a network PC (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more). | 2010/12/12 | [
"https://Stackoverflow.com/questions/4424827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539941/"
] | Because your computer needs time to send packets to external servers and they need time to send packets back. It's called network latency, and is not an issue with Java specifically, but a general network issue. | 4 seconds on connecting could be a DNS problem and cannot be just a pure network latency.
Try starting MySQL server with "skip-name-resolve" parameter to skip resolving client's IP into hostname. Prior to that, make sure your grant tables are based on IPs and 'localhost' instead of symbolic names. |
4,424,827 | If I connect with Java to MySQL on my localhost server, I access instantaneously.
But if I connect outside of the localhost, from a network PC (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more). | 2010/12/12 | [
"https://Stackoverflow.com/questions/4424827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539941/"
] | The "why" is already been answered. It's just the network latency.
You're probably also interested in how to "fix" it. The answer is: use a [connection pool](http://en.wikipedia.org/wiki/Connection_pool). If you're running a Java webapplication, use the webserver-provided connection pooling facilities. To take Tomcat as an example, check [this manual](http://tomcat.apache.org/tomcat-6.0-doc/jndi-datasource-examples-howto.html). If you're running a Java desktop application, use a decent connection pool implementation like [c3p0](http://sourceforge.net/projects/c3p0/) (tutorial [here](http://www.mchange.com/projects/c3p0/index.html)). | Network latency plus connection creation time would be my guess. I don't know what else you have between the client machine and the MySQL server. |
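The idea behind a connection pool can be sketched with a toy implementation (plain Python and a stand-in `create` factory instead of real sockets; a real application would use a library such as the ones named above):

```python
import queue

class ConnectionPool:
    """Toy pool: hand out idle connections instead of creating new ones."""

    def __init__(self, create, size=4):
        self._create = create               # factory for new connections
        self._idle = queue.Queue(maxsize=size)

    def acquire(self):
        try:
            return self._idle.get_nowait()  # reuse an idle connection: no setup cost
        except queue.Empty:
            return self._create()           # pool empty: pay the setup cost once

    def release(self, conn):
        try:
            self._idle.put_nowait(conn)     # keep the connection warm for the next caller
        except queue.Full:
            pass                            # pool already full: discard

pool = ConnectionPool(create=object)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()   # same connection handed back, so the slow connect happens only once
```

The point is that the expensive step (TCP handshake, DNS, authentication) happens once per pooled connection rather than once per query.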
4,424,827 | If I connect with Java to MySQL on my localhost server, I access instantaneously.
But if I connect outside of the localhost, from a network PC (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more). | 2010/12/12 | [
"https://Stackoverflow.com/questions/4424827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539941/"
] | It will always take longer to make a connection across the network than to make the same connection locally. However, assuming you have a fairly typical local network, 4-5 seconds sounds a bit extreme. My guess (and it is just a guess) would be that the majority of the extra time is being consumed by network name resolution (i.e. DNS and/or netbios).
I would suggest that you try the connection using a numeric IP address, rather than a name. | Network latency plus connection creation time would be my guess. I don't know what else you have between the client machine and the MySQL server. |
4,424,827 | If I connect with Java to MySQL on my localhost server, I access instantaneously.
But if I connect outside of the localhost, from a network PC (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more). | 2010/12/12 | [
"https://Stackoverflow.com/questions/4424827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539941/"
] | The "why" is already been answered. It's just the network latency.
You're probably also interested in how to "fix" it. The answer is: use a [connection pool](http://en.wikipedia.org/wiki/Connection_pool). If you're running a Java webapplication, use the webserver-provided connection pooling facilities. To take Tomcat as an example, check [this manual](http://tomcat.apache.org/tomcat-6.0-doc/jndi-datasource-examples-howto.html). If you're running a Java desktop application, use a decent connection pool implementation like [c3p0](http://sourceforge.net/projects/c3p0/) (tutorial [here](http://www.mchange.com/projects/c3p0/index.html)). | It will always take longer to make a connection across the network than to make the same connection locally. However, assuming you have a fairly typical local network, 4-5 seconds sounds a bit extreme. My guess (and it is just a guess) would be that the majority of the extra time is being consumed by network name resolution (i.e. DNS and/or netbios).
I would suggest that you try the connection using a numeric IP address, rather than a name. |
4,424,827 | If I connect with Java to MySQL on my localhost server, I access instantaneously.
But if I connect outside of the localhost, from a network PC (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more). | 2010/12/12 | [
"https://Stackoverflow.com/questions/4424827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539941/"
] | The "why" is already been answered. It's just the network latency.
You're probably also interested in how to "fix" it. The answer is: use a [connection pool](http://en.wikipedia.org/wiki/Connection_pool). If you're running a Java webapplication, use the webserver-provided connection pooling facilities. To take Tomcat as an example, check [this manual](http://tomcat.apache.org/tomcat-6.0-doc/jndi-datasource-examples-howto.html). If you're running a Java desktop application, use a decent connection pool implementation like [c3p0](http://sourceforge.net/projects/c3p0/) (tutorial [here](http://www.mchange.com/projects/c3p0/index.html)). | 4 seconds on connecting could be a DNS problem and cannot be just a pure network latency.
Try starting MySQL server with "skip-name-resolve" parameter to skip resolving client's IP into hostname. Prior to that, make sure your grant tables are based on IPs and 'localhost' instead of symbolic names. |
4,424,827 | If I connect with Java to MySQL on my localhost server, I access instantaneously.
But if I connect outside of the localhost, from a network PC (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more). | 2010/12/12 | [
"https://Stackoverflow.com/questions/4424827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539941/"
] | It will always take longer to make a connection across the network than to make the same connection locally. However, assuming you have a fairly typical local network, 4-5 seconds sounds a bit extreme. My guess (and it is just a guess) would be that the majority of the extra time is being consumed by network name resolution (i.e. DNS and/or netbios).
I would suggest that you try the connection using a numeric IP address, rather than a name. | 4 seconds on connecting could be a DNS problem and cannot be just a pure network latency.
Try starting MySQL server with "skip-name-resolve" parameter to skip resolving client's IP into hostname. Prior to that, make sure your grant tables are based on IPs and 'localhost' instead of symbolic names. |
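For reference, the same effect can be achieved in the server's option file instead of a startup parameter; a minimal fragment (the file location varies by platform, e.g. `/etc/mysql/my.cnf` on many Linux systems is an assumption here):

```ini
[mysqld]
# Skip the reverse-DNS lookup of client IPs on each new connection.
# Grant tables must then be based on IP addresses / 'localhost',
# not hostnames, or those accounts will stop matching.
skip-name-resolve
```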
49,583 | I have a recurring problem. Every once in a while, with no pattern, the laptop freezes during boot. Sometimes at a black screen, sometimes at a black screen with a non-blinking cursor...
The solution is to power down the laptop, cross my fingers and boot again. Sometimes it takes four or five reboots, but in the end I always get the system up and running.
What bugs me is the fact that the boot is not 'stable' in a sense that apparently it doesn't always do exactly the same thing.
I'm still using 10.10. The question is whether there is anything that can be done to make the system stable. (Does 11.04 have the same issue?)
**Edit:** Today the same thing happened. First, a black screen with a non-blinking cursor. Second, a black screen. Third, the login screen. | 2017/07/02 | [
"https://askubuntu.com/questions/49583",
"https://askubuntu.com",
"https://askubuntu.com/users/20270/"
] | Moving to 11.04 solved this problem. | Radeon graphics card, I guess? :) Install kernel 3.0.0. That's what fixes the Radeon bug of freezing at a black/blank screen. Before the kernel upgrade my best record was 15 reboots in a row before it could sign in properly :)
3.0.0 is a release candidate, though, so you might (for some reason) not want to use it; in that case downgrade to 2.6.35, which didn't have the bug. |
505 | I think the name of the tag [abnormal-psychology](https://cogsci.stackexchange.com/questions/tagged/abnormal-psychology "show questions tagged 'abnormal-psychology'") can be offensive to at least some people. The word "abnormal" has a negative connotation. The word "disorder" also has some negative connotation, but since it is a scientific term, it would sound more neutral.
"https://cogsci.meta.stackexchange.com/questions/505",
"https://cogsci.meta.stackexchange.com",
"https://cogsci.meta.stackexchange.com/users/899/"
] | I wholeheartedly agree with your sentiment, and I believe many in the psychology community think so too, but that's the "official" name of the subdiscipline as well as the name that several well-known academic journals on the subject use.
I think that we'll eventually see this name fade away in favor of something more appropriate, but for the time being it would be confusing to re-appropriate a related term that, while perhaps more objective, doesn't reflect the existing body of work in this area. | I strongly disagree. "Abnormal" is the medical term, as in *abnormal anatomy*.
It's not *judgemental*. A doctor doesn't use terms to please or offend his patient, or to *judge*.
Medical and scientific terms are what they are, and this is not the place to change them. It's important to *use the proper terms* of the medical field and to stick to the scientific vocabulary, not "politically correct" terms, because otherwise everyone will soon use his own terms, find offense where a term is simply neutral and descriptive, and soften the actual meaning.
Suggestion:
Read Georges Canguilhem's book "**The Normal and the Pathological**"; it is the book that explains the meaning of "norm" in the medical field.
Knowing what "normal" means, you'll understand that "abnormal" does not mean the same thing as in everyday non-medical speech, and is not negative the way it is in everyday speech. Every doctor or psychologist should read it.
Everyone interested in this field needs to know what a "norm" and a non-norm are in medicine, because we have to know the meaning of a word before discussing it. Exactly like in philosophy.
Science is really like philosophy: it has its own terms, and they generally don't have the same meaning as in everyday speech.
If you were to use "abnormal" as an everyday word, I would agree it would be insulting. But here, you have to not confuse two words that don't mean the same thing. |
7,287 | In One Piece chapter 467, when Zoro is fighting the samurai Ryuma, at the end of their fight Zoro slashes him down and his wound bursts into flame. Is there any explanation for this? I don't remember Zoro doing something like this again later.
 | 2014/02/03 | [
"https://anime.stackexchange.com/questions/7287",
"https://anime.stackexchange.com",
"https://anime.stackexchange.com/users/2869/"
] | This is one of his Santoryu/Ittoryu techniques.
According to the [wiki](http://onepiece.wikia.com/wiki/Santoryu/Ittoryu): -
>
> Hiryu: Kaen (飛竜火焔 Hiryū: Kaen?, literally meaning "Flying Dragon:
> Blaze"): Using one sword wielded in his left hand with his right hand
> gripping his left wrist for support (or vice-versa), Zoro jumps high
> up into the air and slashes his opponent. After slashing them, Zoro's
> opponent then bursts into flames (in the anime, the color of the fire
> is blue instead) from where they were slashed. This was first seen
> being used against Ryuma. The animal or creature that usually
> accompanies Zoro in the background when performing powerful techniques
> is an occidental dragon. The dragon bares an uncanny resemblance to
> the dragon killed by Ryuma the King from Oda's earlier work, Monsters.
> The scene where Zoro slashes Ryuma with this technique also resembles
> the scene from Monsters, in which Ryuma slays the dragon.
>
>
>
Zoro has used this technique only once; there are many techniques he has used only once!
Plus, the names of his techniques are pretty complicated, so they're hard to remember :P
For further reference, you can check the [wiki](http://onepiece.wikia.com/wiki/Santoryu/Ittoryu) | It is the sword technique that causes the flame; although it is never properly explained how, it is assumed that this technique uses [Friction Burn](http://tvtropes.org/pmwiki/pmwiki.php/Main/FrictionBurn) to set the enemy ablaze. |
52,882,279 | I have a use case:
- There are 2 CMS banner components (C1 and C2), of which only one needs to be displayed based upon the customer's loyalty status.
So, say, if a customer is a gold member, component C1 should be displayed on the home page, while if the customer is a platinum member, component C2 should be displayed.
I am aware that this feature can be achieved with SmartEdit, but there are a few other rules which need to be triggered in both cases in terms of what data would be rendered in C1 and C2 based upon those business rules. Is there any documentation available that could help me create the rules and associate them with the CMS component, or could someone share any other approach to achieving the same? | 2018/10/18 | [
"https://Stackoverflow.com/questions/52882279",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/964819/"
] | Why not use CMS Restrictions? Evaluate whether the component should be displayed in a CMSRestrictionEvaluator, then populate the respective data in a controller/renderer. | Using the promotion engine is quite costly. It's not really performant, so you should not use it to achieve this kind of behaviour.
You should go with [Personalization (based on SmartEdit)](https://help.hybris.com/1808/hcd/bf181fa9fb4149f7902da9e072e0e6f1.html) |
4,700 | Why can't photons have a mass? Could you explain this to me in a short and mathematical way? | 2011/02/06 | [
"https://physics.stackexchange.com/questions/4700",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/58/"
] | There is nothing special about the photon having zero mass. Although zero is the smallest mass any particle can have, it is as good as any other value. In this sense, there is no **mathematical proof** that the photon **has to have** zero mass; this is a purely experimental fact. And, to the best of our knowledge, the photon mass is consistent with zero.
If you want to describe a theory with a zero mass vector in a manifestly relativistic way, you have to have gauge invariance. This is a mathematical fact. As is the fact that if you force this symmetry to be quantum mechanically exact, the mass will not receive quantum corrections (perturbatively, at least). Gauge theories can be shown to have all sorts of other nice features (like IR finiteness, if you sum enough virtual and real diagrams) and that makes us believe that at low energies they are the right theories.
But one would be inverting the logical order within physics if one said that the mass of the photon is zero because EM is described by a gauge theory. EM is described by a gauge theory because the photon has zero mass. There would be no problem with special relativity either. The fact that the maximal velocity is the same as the velocity of light in vacuum is, again, an experimental fact (equivalent to the one we are discussing here) but by no means necessitated by any mathematical theorem. | Put simply: mass terms for photons break gauge invariance. |
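The gauge-invariance point made in both answers can be shown with a short, standard calculation (a textbook sketch, not taken from either answer). Add a Proca mass term to the Maxwell Lagrangian:

```latex
\mathcal{L} \;=\; -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} \;+\; \tfrac{1}{2}m^{2}A_{\mu}A^{\mu},
\qquad F_{\mu\nu} \;=\; \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}.
```

Under a gauge transformation \(A_\mu \to A_\mu + \partial_\mu\lambda\), the field strength \(F_{\mu\nu}\) is unchanged, but the mass term becomes \(\tfrac{1}{2}m^{2}(A_\mu+\partial_\mu\lambda)(A^\mu+\partial^\mu\lambda)\), which differs from \(\tfrac{1}{2}m^{2}A_\mu A^\mu\) unless \(m = 0\). So any nonzero photon mass breaks gauge invariance, which is why a manifestly gauge-invariant theory forces \(m = 0\).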
33,078,165 | I can't find an installer for one JDK version. Can I just copy the JDK installation folder from another computer to mine without installing it? | 2015/10/12 | [
"https://Stackoverflow.com/questions/33078165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Yes, you can copy the installation directory; the only change you need to make is to update your JAVA\_HOME and PATH variables accordingly... | Are you saying you can't find JDK 1.5? How about this: <http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase5-419410.html>
You can also find all the old versions of Java here:
<http://www.oracle.com/technetwork/java/archive-139210.html> |
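The first answer's advice (copy the folder, then fix `JAVA_HOME` and `PATH`) can be sketched in a couple of shell commands. This is a hedged illustration: `/tmp/jdk-copy` is a made-up stand-in for the folder copied from the other computer, and the stub `java` script only simulates a real binary.

```shell
# Simulate the copied JDK folder with a stub java binary (illustration only).
mkdir -p /tmp/jdk-copy/bin
printf '#!/bin/sh\necho fake-java\n' > /tmp/jdk-copy/bin/java
chmod +x /tmp/jdk-copy/bin/java

# The actual step from the answer: point JAVA_HOME at the copied folder
# and prepend its bin directory to PATH.
export JAVA_HOME=/tmp/jdk-copy
export PATH="$JAVA_HOME/bin:$PATH"

command -v java   # now resolves to /tmp/jdk-copy/bin/java
```

On Windows (the question's Vista/7 setup), the equivalent is editing the `JAVA_HOME` and `Path` environment variables in System Properties.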
25,648 | Some early RISC CPUs had branch delay slots, the theory being that this would make the CPU both cheaper and faster; you could omit some interlock circuitry, and at the same time, in some cases, execute another instruction in what would otherwise have been a wasted cycle. According to <https://en.wikipedia.org/wiki/Delay_slot>
>
> Branch delay slots are found mainly in DSP architectures and older RISC architectures. MIPS, PA-RISC, ETRAX CRIS, SuperH, and SPARC are RISC architectures that each have a single branch delay slot; PowerPC, ARM, Alpha, and RISC-V do not have any. DSP architectures that each have a single branch delay slot include the VS DSP, μPD77230 and TMS320C3x. The SHARC DSP and MIPS-X use a double branch delay slot; such a processor will execute a pair of instructions following a branch instruction before the branch takes effect. The TMS320C4x uses a triple branch delay slot.
>
>
>
There is nowadays a consensus that it is best to omit such things from an architecture, on the grounds that while they may be helpful in early implementations, later implementations will prefer a larger number, and later still will have branch predictors which means there is no fixed number of delay slots, so it just becomes baggage.
At what point did pipelines become deep enough that, if you were going to design the architecture with delay slots at that time, the number of them to include would've been greater than one? For example, which MIPS CPU first found itself in the position of 'well, if we were going to have delay slots at all, the correct number would've been 2, so the architecturally specified 1 delay slot doesn't actually mean we can omit those interlocks'? | 2022/11/20 | [
"https://retrocomputing.stackexchange.com/questions/25648",
"https://retrocomputing.stackexchange.com",
"https://retrocomputing.stackexchange.com/users/4274/"
] | The 1-cycle branch delay slot of early RISC only really works with a slow 5-stage pipeline. To allow 1-cycle branches, the branch decision must fit instruction decode, branch target calculation, and the update of the cache fetch address all into the same cycle.
It works for relative branches and subroutine calls with simplified decoding; it doesn't quite work for conditional branches (flags may still be being updated, or the branch instruction needs to reach the execute stage to be decided).
With faster frequencies, processors have had to decouple instruction fetch (and branch prediction, branch target caches, ...) from execution. At that point, branch delay slots had become a burden, particularly with superscalar CPUs.
Faster frequencies have meant that instruction fetch latency becomes widely variable between cache hits and cache misses (nowadays it takes tens to hundreds of cycles to reach the L2 or L3 caches or DRAM). A decoupled pipeline is able to fetch instructions in advance of execution, to hide a bit of the memory latency, but the prefetch engine needs to predict the instruction flow. A branch delay slot offers nothing but more complexity.
For example, with MIPS, the R4000 had an 8-stage pipeline and a 3-cycle branch delay; already the 1-cycle delay slot wasn't quite sufficient. The delay slot saved a cycle when some useful instruction could be fit into it, which is not always the case. Only branch prediction can effectively reduce the average actual branch delay, more efficiently than any slot. | The PPUs in the CDC 6600 (a Seymour Cray design circa 1964 to 1969) were barrel processors and thus had a branch delay in cycles equal to the number of PPUs (8 or 10 IIRC). The delay slot execution cycles were taken up by the other PPUs. |
469,056 | I have a 1Mbps broadband internet connection. I am sharing this on my PC by using Windows Connection Sharing, so that my roommate can also access the internet. I want to set a speed limit of 500Kbps on both the PCs, so that each one gets his fair share.
I'm using Windows Vista, and my friend is using Windows 7.
Is this possible in Windows (or Linux)? Third-party freeware is fine. | 2012/09/01 | [
"https://superuser.com/questions/469056",
"https://superuser.com",
"https://superuser.com/users/155831/"
] | My recommendation is to use [NetLimiter](http://www.netlimiter.com/). I've used this in the past with great success.
However, this won't stop you or your roommate from simply removing the limit whenever you feel like it. | I know that a while back I found some proxy software for web dev that had that feature; here's a link to some: [proxy list](http://forums.whirlpool.net.au/archive/65793). It is very easy to do in Linux, or if you set up a full-blown proxy like Squid. Both of you could use the Squid proxy for antivirus scanning of incoming downloads as well as for bandwidth sharing. |
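For the Linux option mentioned in the question, the 500 Kbps cap itself can also be applied directly with `tc`, without a full proxy. A hedged sketch (requires root; `eth0` is an assumed interface name, not taken from the answers):

```shell
# Cap outgoing traffic on eth0 to roughly 500 kbit/s with a token-bucket filter.
tc qdisc add dev eth0 root tbf rate 500kbit burst 16kbit latency 50ms

# Inspect or remove the limit later:
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```

Note this shapes only egress on that one interface; true per-host fairness needs classful queuing (e.g. HTB with a class per IP).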
469,056 | I have a 1Mbps broadband internet connection. I am sharing this on my PC by using Windows Connection Sharing, so that my roommate can also access the internet. I want to set a speed limit of 500Kbps on both the PCs, so that each one gets his fair share.
I'm using Windows Vista, and my friend is using Windows 7.
Is this possible in Windows (or Linux)? Third-party freeware is fine. | 2012/09/01 | [
"https://superuser.com/questions/469056",
"https://superuser.com",
"https://superuser.com/users/155831/"
] | Use [NAT32](http://v2.nat32.com/index.html) to share the internet. It has a speed limiter too. | I know that a while back I found some proxy software for web dev that had that feature; here's a link to some: [proxy list](http://forums.whirlpool.net.au/archive/65793). It is very easy to do in Linux, or if you set up a full-blown proxy like Squid. Both of you could use the Squid proxy for antivirus scanning of incoming downloads as well as for bandwidth sharing. |
469,056 | I have a 1Mbps broadband internet connection. I am sharing this on my PC by using Windows Connection Sharing, so that my roommate can also access the internet. I want to set a speed limit of 500Kbps on both the PCs, so that each one gets his fair share.
I'm using Windows Vista, and my friend is using Windows 7.
Is this possible in Windows (or Linux)? Third-party freeware is fine. | 2012/09/01 | [
"https://superuser.com/questions/469056",
"https://superuser.com",
"https://superuser.com/users/155831/"
] | My recommendation is to use [NetLimiter](http://www.netlimiter.com/). I've used this in the past with great success.
However, this won't stop you or your roommate from simply removing the limit whenever you feel like it. | Use [NAT32](http://v2.nat32.com/index.html) to share the internet. It has a speed limiter too. |
2,821 | Recently I learned that in some Middle Eastern countries they add cardamom and cloves to their coffee to flavor it.
**Are there any other coffee flavorings found around the globe I might not have heard of?** | 2016/05/17 | [
"https://coffee.stackexchange.com/questions/2821",
"https://coffee.stackexchange.com",
"https://coffee.stackexchange.com/users/2493/"
] | I prefer my coffee flavored just with water. However, it is common for people to flavor their coffee with many other ingredients according to their personal preference. As personal preference is closely related to culture, **yes, you can list some location-based flavoring ingredients for coffee**.
**In Turkey**, generally, coffee does not have any added ingredients except sugar. In the western part, mastic is sometimes added for flavor; this is a tradition shared with the Greeks, I assume. In the southeastern part, cardamom is rarely added; this is a tradition shared with the Syrians, I assume. I have never heard of cloves around here.
---
**Sugar** and its close relatives: I think this is the most common one, independent from the geography.
**Milk** may be the second most common one. Mostly used in the Italian-influenced Western coffee.
**Chocolate** is only common in the Austrian-influenced coffee recipes, I assume.
*From now on, I think we may say not very common flavorings.*
**Cinnamon** is common both on Austrian-influenced coffees and old-style coffees.
**Mastic** is common around the Aegean Sea with the Turkish brewing method.
**Cardamom** is common around Syria with the Turkish brewing method.
**Cloves** are common around the Arabian Peninsula with the Saudi brewing method (say, a brewing method close to the Turkish one).
**Chicory** is common in Vietnam, South India and in New Orleans/Southern Louisiana of USA.
**Butter** is common in East Africa, Himalayas and very recently in North America under the fancy name of "bulletproof".
**Coconut** and **Marjoram** are recent encounters of mine in some old Turkish recipes, together with cardamom, especially when the beans are ground in a *dibek*, an ancient Turkish coffee mortar. | In Canada, and probably America too, pumpkin spice lattes have taken off, and some stores now sell pumpkin pie spice mix, which is a great flavouring for plain coffee. |
2,821 | Recently I learned that in some Middle Eastern countries they add cardamom and cloves to their coffee to flavor it.
**Are there any other coffee flavorings found around the globe I might not have heard of?** | 2016/05/17 | [
"https://coffee.stackexchange.com/questions/2821",
"https://coffee.stackexchange.com",
"https://coffee.stackexchange.com/users/2493/"
] | I prefer my coffee flavored just with water. However, it is common for people to flavor their coffee with many other ingredients according to their personal preference. As personal preference is closely related to culture, **yes, you can list some location-based flavoring ingredients for coffee**.
**In Turkey**, generally, coffee does not have any added ingredients except sugar. In the western part, mastic is sometimes added for flavor; this is a tradition shared with the Greeks, I assume. In the southeastern part, cardamom is rarely added; this is a tradition shared with the Syrians, I assume. I have never heard of cloves around here.
---
**Sugar** and its close relatives: I think this is the most common one, independent from the geography.
**Milk** may be the second most common one. Mostly used in the Italian-influenced Western coffee.
**Chocolate** is only common in the Austrian-influenced coffee recipes, I assume.
*From now on, I think we may say not very common flavorings.*
**Cinnamon** is common both on Austrian-influenced coffees and old-style coffees.
**Mastic** is common around the Aegean Sea with the Turkish brewing method.
**Cardamom** is common around Syria with the Turkish brewing method.
**Cloves** are common around the Arabian Peninsula with the Saudi brewing method (say, a brewing method close to the Turkish one).
**Chicory** is common in Vietnam, South India and in New Orleans/Southern Louisiana of USA.
**Butter** is common in East Africa, Himalayas and very recently in North America under the fancy name of "bulletproof".
**Coconut** and **Marjoram** are recent encounters of mine in some old Turkish recipes, together with cardamom, especially when the beans are ground in a *dibek*, an ancient Turkish coffee mortar. | [Liqueur coffees](https://en.wikipedia.org/wiki/Liqueur_coffee) are a whole category of coffees flavoured with alcohol. Irish coffee is probably the best known example.
Wikipedia has a [list of coffee drinks](https://en.wikipedia.org/wiki/List_of_coffee_drinks), but as you can see from other answers here (and indeed your own question) it's not complete. The cardamom-flavoured coffee I've had has been Saudi, and uses an unusually light roast, so the end result doesn't look or smell much like most coffee, despite the taste.
You could even class [affogato](https://en.wikipedia.org/wiki/Affogato) as a flavoured coffee, though it's served as a dessert. |
2,821 | Recently I learned that in some Middle Eastern countries they add cardamom and cloves to their coffee to flavor it.
**Are there any other coffee flavorings found around the globe I might not have heard of?** | 2016/05/17 | [
"https://coffee.stackexchange.com/questions/2821",
"https://coffee.stackexchange.com",
"https://coffee.stackexchange.com/users/2493/"
] | I prefer my coffee flavored just with water. However, it is common for people to flavor their coffee with many other ingredients according to their personal preference. As personal preference is closely related to culture, **yes, you can list some location-based flavoring ingredients for coffee**.
**In Turkey**, generally, coffee does not have any added ingredients except sugar. In the western part, mastic is sometimes added for flavor; this is a tradition shared with the Greeks, I assume. In the southeastern part, cardamom is rarely added; this is a tradition shared with the Syrians, I assume. I have never heard of cloves around here.
---
**Sugar** and its close relatives: I think this is the most common one, independent from the geography.
**Milk** may be the second most common one. Mostly used in the Italian-influenced Western coffee.
**Chocolate** is only common in the Austrian-influenced coffee recipes, I assume.
*From now on, I think we may say not very common flavorings.*
**Cinnamon** is common both on Austrian-influenced coffees and old-style coffees.
**Mastic** is common around the Aegean Sea with the Turkish brewing method.
**Cardamom** is common around Syria with the Turkish brewing method.
**Cloves** are common around the Arabian Peninsula with the Saudi brewing method (say, a brewing method close to the Turkish one).
**Chicory** is common in Vietnam, South India and in New Orleans/Southern Louisiana of USA.
**Butter** is common in East Africa, Himalayas and very recently in North America under the fancy name of "bulletproof".
**Coconut** and **Marjoram** are recent encounters of mine in some old Turkish recipes, together with cardamom, especially when the beans are ground in a *dibek*, an ancient Turkish coffee mortar. | I am not a big lover of flavorings in coffee, but there's one spice that works very nicely with a turka, called hawaij. It's a mixture of black pepper, cumin, cardamom and turmeric, commonly available in the Middle East. |
105,839 | I am photographing a scene with white and black elements in it. Starting at the f/22 stop, I widen the aperture one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since **halving the exposure time compensates for widening the aperture**. But when I select a white region and average its pixel raw counts for each image, there is **variability between the images** (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not knowingly changing anything else (illumination, camera position), and the camera is a scientific CMOS. What could be causing this variation: noise, or something more systematic? | 2019/03/11 | [
"https://photo.stackexchange.com/questions/105839",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59836/"
] | This is normal behavior, caused by:
1. Imperfections of the aperture. There are usually manufacturing variations, so the opening does not have exactly the nominal size. On a 50mm lens at f/4 you should have a 12.5mm opening, but it can be 12.4mm or 12.6mm.
2. Imperfections in shutter speed. The shutter is also a mechanical unit, and depending on factors such as temperature and how precise the blades and other internal elements are, the speed will not be 1/100s but can be 1/110s or 1/90s.
3. The same is true of the sensor itself (from an electronic point of view).

In the end, even two consecutive photos can have slightly different exposures.
And add in fluctuations of your illumination source... | The short answer is yes... they cancel. But there are some nuances.
Each time the diameter of a circle increases (or decreases) by a factor equal to the square root of 2 (approximately 1.4) the area of that circle is exactly doubled (or halved if decreased). The f-stop numbers are all based on powers of the square root of 2 (e.g. f/1 = √2^0; f/1.4 = √2^1; f/2 = √2^2; f/2.8 = √2^3; etc.)
Shutter exposures are more intuitive. 1/500th sec is obviously half as long as 1/250th sec, etc.
The nuances:
Cameras do a bit of rounding. E.g. if you have a 100mm lens it's probably not *precisely* 100mm (but it's probably not far off), and as you refocus, the lens may do a bit of focus breathing (for a good lens, that stays within 5% of the stated focal length ... but some lenses have rather strong focus-breathing issues ... e.g. 30%). When this happens, it means the f-stop isn't strictly accurate.
F-stops aren't strictly accurate as it is. But they are "close enough" that the margin of error won't impact the exposure in a noticeable way.
There are other issues. When you shoot heavily stopped down (e.g. f/22), all light comes from a very small area near the center of the lens axis and is distributed across the sensor more evenly. When you shoot wide-open, light comes from a wide range of angles. Areas of the sensor near the center can collect light from many angles, but areas of the sensor near an edge or corner are more limited on the number of paths light can take through the lens to reach that particular spot. This results in vignetting. So while you can take two photos using "equivalent exposures" (trading a stop of aperture for a stop of shutter duration), changes in vignetting patterns can cause pixels to have a different amount of collected light depending on the pixel you choose to inspect. |
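The √2 arithmetic in the answer above is easy to verify numerically. A minimal sketch (my own illustration, not from the answer): each full stop multiplies the f-number by √2, which halves the relative aperture area (∝ 1/N²), so doubling the shutter time at each step keeps area × time constant.

```python
import math

# Full stops: f-numbers are powers of sqrt(2) -> f/1, f/1.4, f/2, f/2.8, ...
stops = [math.sqrt(2) ** k for k in range(8)]

# Relative aperture area scales as 1/N^2; each stop down halves the light.
areas = [1 / n ** 2 for n in stops]
ratios = [areas[k] / areas[k + 1] for k in range(len(areas) - 1)]
print(ratios)  # every step halves the area (each ratio is 2, up to rounding)

# Trading one stop of aperture for one stop of shutter keeps exposure constant:
# exposure ~ area * time, with time doubling each time N grows by sqrt(2).
exposures = [a * (2 ** k) for k, a in enumerate(areas)]
print(exposures)  # all 1.0 (up to floating-point rounding)
```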
105,839 | I am photographing a scene with white and black elements in it. Starting at the f/22 stop, I widen the aperture one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since **halving the exposure time compensates for widening the aperture**. But when I select a white region and average its pixel raw counts for each image, there is **variability between the images** (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not knowingly changing anything else (illumination, camera position), and the camera is a scientific CMOS. What could be causing this variation: noise, or something more systematic? | 2019/03/11 | [
"https://photo.stackexchange.com/questions/105839",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59836/"
] | This is normal behavior, caused by:
1. Imperfections of the aperture. There are usually manufacturing variations, so the opening does not have exactly the nominal size. On a 50mm lens at f/4 you should have a 12.5mm opening, but it can be 12.4mm or 12.6mm.
2. Imperfections in shutter speed. The shutter is also a mechanical unit, and depending on factors such as temperature and how precise the blades and other internal elements are, the speed will not be 1/100s but can be 1/110s or 1/90s.
3. The same is true of the sensor itself (from an electronic point of view).

In the end, even two consecutive photos can have slightly different exposures.
And add in fluctuations of your illumination source... | In theory, yes — stops are interchangeable. In practice, they do not *perfectly* cancel to complete precision.
>
> the standard deviation of the raw counts is ~5% of the mean
>
>
>
In photographic terms, this is *basically nothing*. It is far below human perception, and even when the difference is noticeable, the generally-expected workflow involves working with each image individually, so the photographer can compensate either in the field or in post-production.
Cameras meant for photography are not measuring devices; using them as such is setting yourself up for disappointment. Making the devices much more precise would be a lot more expensive and provide no benefit for the target market. Even if you have a camera made for scientific purposes, these particular tolerances might not be within the relevant area of concern.
If you're trying to get perfection for something like a time-lapse or another series of photos, post-processing to even out the fluctuations is your best bet. |
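To back up the "basically nothing" claim with a number (my own back-of-envelope conversion, not from the answer): a 5% change in linear raw counts corresponds to log₂(1.05) of a stop.

```python
import math

# A 5% change in linear raw counts, expressed in stops (factors of 2):
delta_stops = math.log2(1.05)
print(round(delta_stops, 3))  # roughly 0.07 stops

# Typical cameras adjust exposure in 1/3-stop steps, so ~0.07 stops is
# well below the smallest normal adjustment.
print(delta_stops < 1 / 3)  # True
```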
105,839 | I am photographing a scene with white and black elements in it. Starting at the f/22 stop, I widen the aperture one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since **halving the exposure time compensates for widening the aperture**. But when I select a white region and average its pixel raw counts for each image, there is **variability between the images** (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not knowingly changing anything else (illumination, camera position), and the camera is a scientific CMOS. What could be causing this variation: noise, or something more systematic? | 2019/03/11 | [
"https://photo.stackexchange.com/questions/105839",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59836/"
] | This is normal behavior, caused by:
1. Imperfections of the aperture. There are usually manufacturing variations, so the opening does not have exactly the nominal size. On a 50mm lens at f/4 you should have a 12.5mm opening, but it can be 12.4mm or 12.6mm.
2. Imperfections in shutter speed. The shutter is also a mechanical unit, and depending on factors such as temperature and how precise the blades and other internal elements are, the speed will not be 1/100s but can be 1/110s or 1/90s.
3. The same is true of the sensor itself (from an electronic point of view).

In the end, even two consecutive photos can have slightly different exposures.
And add in fluctuations of your illumination source... | With regard to systematic problems: are you taking into account that, as you open up the aperture, depth of field decreases and thus the borders of out-of-focus scene parts blur? Also, with small apertures you might get some blurring due to diffraction.
If you have a mechanical shutter, you actually can get diffraction with *large* apertures from the resulting short shutter times when a significant amount of the exposure time is spent near at least one of the shutter curtains moving across. |
105,839 | I am photographing a scene with white and black elements in it. Starting at the f/22 stop, I widen the aperture one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since **halving the exposure time compensates for widening the aperture**. But when I select a white region and average its pixel raw counts for each image, there is **variability between the images** (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not knowingly changing anything else (illumination, camera position), and the camera is a scientific CMOS. What could be causing this variation: noise, or something more systematic? | 2019/03/11 | [
"https://photo.stackexchange.com/questions/105839",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59836/"
] | This is normal behavior, caused by:
1. Imperfections of the aperture. There are usually manufacturing variations, so the opening does not have exactly the nominal size. On a 50mm lens at f/4 you should have a 12.5mm opening, but it can be 12.4mm or 12.6mm.
2. Imperfections in shutter speed. The shutter is also a mechanical unit, and depending on factors such as temperature and how precise the blades and other internal elements are, the speed will not be 1/100s but can be 1/110s or 1/90s.
3. The same is true of the sensor itself (from an electronic point of view).

In the end, even two consecutive photos can have slightly different exposures.
And add in fluctuations of your illumination source... | I think it wasn't mentioned: with an increase in exposure time comes an increase in thermal dark shot noise. You can read more [here](https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=10773), for example
[](https://i.stack.imgur.com/AFRfb.gif) |
105,839 | I am photographing a scene with white and black elements in it. Starting at the f/22 stop, I widen the aperture one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since **halving the exposure time compensates for widening the aperture**. But when I select a white region and average its pixel raw counts for each image, there is **variability between the images** (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not knowingly changing anything else (illumination, camera position), and the camera is a scientific CMOS. What could be causing this variation: noise, or something more systematic? | 2019/03/11 | [
"https://photo.stackexchange.com/questions/105839",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59836/"
] | The short answer is yes... they cancel. But there are some nuances.
Each time the diameter of a circle increases (or decreases) by a factor equal to the square root of 2 (approximately 1.4) the area of that circle is exactly doubled (or halved if decreased). The f-stop numbers are all based on powers of the square root of 2 (e.g. f/1 = √2^0; f/1.4 = √2^1; f/2 = √2^2; f/2.8 = √2^3; etc.)
Shutter exposures are more intuitive. 1/500th sec is obviously half as long as 1/250th sec, etc.
The nuances:
Cameras do a bit of rounding. E.g. if you have a 100mm lens it's probably not *precisely* 100mm (but it's probably not far off), and as you refocus, the lens may do a bit of focus breathing (for a good lens, that stays within 5% of the stated focal length ... but some lenses have rather strong focus-breathing issues ... e.g. 30%). When this happens, it means the f-stop isn't strictly accurate.
F-stops aren't strictly accurate as it is. But they are "close enough" that the margin of error won't impact the exposure in a noticeable way.
There are other issues. When you shoot heavily stopped down (e.g. f/22), all light comes from a very small area near the center of the lens axis and is distributed across the sensor more evenly. When you shoot wide-open, light comes from a wide range of angles. Areas of the sensor near the center can collect light from many angles, but areas of the sensor near an edge or corner are more limited in the number of paths light can take through the lens to reach that particular spot. This results in vignetting. So while you can take two photos using "equivalent exposures" (trading a stop of aperture for a stop of shutter duration), changes in vignetting patterns can cause pixels to have a different amount of collected light depending on the pixel you choose to inspect. | With regard to systematic problems: are you taking into account that, as you open up the aperture, depth of field decreases and thus the borders of out-of-focus scene parts blur? Also, with small apertures you might get some blurring due to diffraction.
If you have a mechanical shutter, you actually can get diffraction with *large* apertures from the resulting short shutter times when a significant amount of the exposure time is spent near at least one of the shutter curtains moving across. |
105,839 | I am photographing a scene with white and black elements in it. Starting at the f/22 stop, I widen the aperture one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since **halving the exposure time compensates for widening the aperture**. But when I select a white region and average its pixel raw counts for each image, there is **variability between the images** (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not knowingly changing anything else (illumination, camera position), and the camera is a scientific CMOS. What could be causing this variation: noise, or something more systematic? | 2019/03/11 | [
"https://photo.stackexchange.com/questions/105839",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59836/"
] | The short answer is yes... they cancel. But there are some nuances.
Each time the diameter of a circle increases (or decreases) by a factor equal to the square root of 2 (approximately 1.4) the area of that circle is exactly doubled (or halved if decreased). The f-stop numbers are all based on powers of the square root of 2 (e.g. f/1 = √2^0; f/1.4 = √2^1; f/2 = √2^2; f/2.8 = √2^3; etc.)
Shutter exposures are more intuitive. 1/500th sec is obviously half as long as 1/250th sec, etc.
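The square-root-of-2 relationship above is easy to sanity-check numerically; a minimal Python sketch (the f/1 through f/5.6 range and the 1/500 s starting shutter time are arbitrary picks for illustration):

```python
import math

# Full stops are successive powers of sqrt(2): f/1, f/1.4, f/2, f/2.8, ...
f_numbers = [math.sqrt(2) ** n for n in range(6)]

# Light gathered per unit time is proportional to 1/N^2,
# so each full stop exactly halves it.
areas = [1 / n ** 2 for n in f_numbers]

# Doubling the shutter time per stop keeps area * time (the exposure) constant.
times = [(1 / 500) * 2 ** i for i in range(6)]
exposures = [a * t for a, t in zip(areas, times)]
```

All entries of `exposures` come out identical, which is the precise sense in which a stop of aperture and a stop of shutter duration cancel in theory.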
The nuances:
Cameras do a bit of rounding. E.g. if you have a 100mm lens it's probably not *precisely* 100mm (but it's probably not far off), and as you refocus, the lens may do a bit of focus breathing (a good lens stays within 5% of the stated focal length, but some lenses have rather strong focus-breathing issues, e.g. 30%). When this happens, it means the f-stop isn't strictly accurate.
F-stops aren't strictly accurate as it is. But they are "close enough" that the margin of error won't impact the exposure in a noticeable way.
There are other issues. When you shoot heavily stopped down (e.g. f/22), all light comes from a very small area near the center of the lens axis and is distributed across the sensor more evenly. When you shoot wide-open, light comes from a wide range of angles. Areas of the sensor near the center can collect light from many angles, but areas of the sensor near an edge or corner are more limited on the number of paths light can take through the lens to reach that particular spot. This results in vignetting. So while you can take two photos using "equivalent exposures" (trading a stop of aperture for a stop of shutter duration), changes in vignetting patterns can cause pixels to have a different amount of collected light depending on the pixel you choose to inspect. | I think it wasn't mentioned: with increase in exposure time comes increase in thermal Dark Shot noise. You can read more [here](https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=10773), for example
[](https://i.stack.imgur.com/AFRfb.gif) |
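The measurement procedure described in the question (average raw counts over a fixed region in each image, then compare the spread across the series to the mean) can be sketched with NumPy; the series length, region size, and ~5% drift below are synthetic stand-ins for illustration, not the real capture data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the exposure series: the region's true mean signal drifts
# slightly between "equivalent" exposures (vignetting, shutter tolerance,
# dark current), and each pixel adds Poisson shot noise on top.
true_means = 1000 * (1 + rng.normal(0, 0.05, size=8))
images = [rng.poisson(m, size=(64, 64)) for m in true_means]

# Per-image region average, then relative variability across the series.
region_means = np.array([img.mean() for img in images])
variability = region_means.std() / region_means.mean()
```

If `variability` sits near the injected drift rather than the much smaller shot-noise floor, the spread is systematic rather than pure noise, which is the distinction the question is asking about.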
105,839 | I am photographing a scene with white and black elements in it. Starting at the f/22 stop, I widen the aperture one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since **halving the exposure time compensates for widening the aperture**. But when I select a white region and average its pixel raw counts for each image, there is **variability between the images** (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not knowingly changing anything else (illumination, camera position), and the camera is a scientific CMOS. What could be causing this variation: noise, or something more systematic? | 2019/03/11 | [
"https://photo.stackexchange.com/questions/105839",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59836/"
] | In theory, yes — stops are interchangeable. In practice, they do not *perfectly* cancel to complete precision.
>
> the standard deviation of the raw counts is ~5% of the mean
>
>
>
In photographic terms, this is *basically nothing*. It is far below human perception, and even when the difference is noticeable, the generally-expected workflow involves working with each image individually, so the photographer can compensate either in the field or in post-production.
Cameras meant for photography are not measuring devices; using them as such is setting yourself up for disappointment. Making the devices much more precise would be a lot more expensive and provide no benefit for the target market. Even if you have a camera made for scientific purposes, these particular tolerances might not be within the relevant area of concern.
If you're trying to get perfection for something like a time-lapse or another series of photos, post-processing to even out the fluctuations is your best bet. | With regard to systematic problems: are you taking into account that opening up the aperture decreases depth of field, and thus the borders of out-of-focus scene parts blur? Also, with small apertures you might get some blurring due to diffraction.
If you have a mechanical shutter, you can actually get diffraction with *large* apertures as well, because the resulting short shutter times mean that a significant part of the exposure is spent while at least one of the shutter curtains is moving across. |
105,839 | I am photographing a scene with white and black elements in it. Starting at the f/22 stop, I widen the aperture one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since **halving the exposure time compensates for widening the aperture**. But when I select a white region and average its pixel raw counts for each image, there is **variability between the images** (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not knowingly changing anything else (illumination, camera position), and the camera is a scientific CMOS. What could be causing this variation: noise, or something more systematic? | 2019/03/11 | [
"https://photo.stackexchange.com/questions/105839",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/59836/"
] | In theory, yes — stops are interchangeable. In practice, they do not *perfectly* cancel to complete precision.
>
> the standard deviation of the raw counts is ~5% of the mean
>
>
>
In photographic terms, this is *basically nothing*. It is far below human perception, and even when the difference is noticeable, the generally-expected workflow involves working with each image individually, so the photographer can compensate either in the field or in post-production.
Cameras meant for photography are not measuring devices; using them as such is setting yourself up for disappointment. Making the devices much more precise would be a lot more expensive and provide no benefit for the target market. Even if you have a camera made for scientific purposes, these particular tolerances might not be within the relevant area of concern.
If you're trying to get perfection for something like a time-lapse or another series of photos, post-processing to even out the fluctuations is your best bet. | I think it wasn't mentioned: with increase in exposure time comes increase in thermal Dark Shot noise. You can read more [here](https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=10773), for example
[](https://i.stack.imgur.com/AFRfb.gif) |
399,498 | I happened to write "You are not going anywhere near us", and I meant "You should not leave us". Reading it back later I realised that what I wrote probably means the opposite...
Is there a way to use "not going anywhere" in the intended meaning? | 2017/07/11 | [
"https://english.stackexchange.com/questions/399498",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/246220/"
] | Some good examples already given in answers but I want to point out a couple of differences.
It is indeed the "near us" that did you in (as Yosef mentioned).
*"You're not going anywhere."*, is an idiomatic statement in English which can mean a couple of things:
1. **You are not going anywhere *without us*.** (as marcellothearcane pointed out)
2. **You are not going anywhere! OR You are not going anywhere young lady!** (the person is very upset/mad, probably the parent, and wants to protect their child or make sure they don't do something bad again so they are effectively "grounded". It's like saying, "You are grounded and not going anywhere!").
3. **You are not going anywhere.** (I'm not leaving you out of my sight - similar to #1 above but a subtle difference - because I care about you so much and you leaving would just tear me up inside. It's very much like saying, "No way, you are not going anywhere, because I would be so alone, or so worried, that I couldn't handle it"). | You can use a double negative to swap this round, which is *a syntactic construction in which two negative words are used in the same clause to express a single negation*1, for example:
>
> You are not going anywhere **without** us
>
>
>
This would have the required meaning of:
>
> You are going with us
>
>
>
Beware of double negatives though, as they can be confusing and potentially poor grammar - 'I haven't taken *nothing*', a double negative, literally means 'I have taken something'.
You may find [this](http://examples.yourdictionary.com/examples-of-double-negatives.html) link about double negatives
interesting.
1 [Dictionary.com](http://www.dictionary.com/browse/double-negative) |
259,039 | I am in need of a term for the feeling or condition of having to give up some life experience or opportunity due to a conflicting obligation, especially in cases of routine sacrifice of that which is considered "fulfilling".
I say "feeling or condition" because often it is sufficient to say one feels they are in a state of being subject to some condition, e.g. "I feel trapped".
I am interested in a term that has an appropriate connotation as the inverse of guilt. In other words, I am describing the contrasting negative feelings one might have (guilt vs \_\_\_) after having prioritized either self-interests or the interests of others, particularly when having a responsibility for the other party such as in parenthood.
**Update**: I appear to have actually asked for two things here. I was hoping for a means of describing the emotional condition of being bound to this scenario, but my contextual example asked for the resulting emotions that might follow, in which case *resentment* would have been a possible candidate. | 2015/07/13 | [
"https://english.stackexchange.com/questions/259039",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/128946/"
] | Sounds like you're balancing guilt vs *regret*.
Google's first definition of *regret* currently is:
>
> a feeling of sadness, repentance, or disappointment over something that has happened or been done.
>
>
> | I think you are referring to what is generally called [sense of responsibility](http://www.thefreedictionary.com/sense+of+responsibility):
>
> * an awareness of your obligations
>
>
>
The Free Dictionary |
259,039 | I am in need of a term for the feeling or condition of having to give up some life experience or opportunity due to a conflicting obligation, especially in cases of routine sacrifice of that which is considered "fulfilling".
I say "feeling or condition" because often it is sufficient to say one feels they are in a state of being subject to some condition, e.g. "I feel trapped".
I am interested in a term that has an appropriate connotation as the inverse of guilt. In other words, I am describing the contrasting negative feelings one might have (guilt vs \_\_\_) after having prioritized either self-interests or the interests of others, particularly when having a responsibility for the other party such as in parenthood.
**Update**: I appear to have actually asked for two things here. I was hoping for a means of describing the emotional condition of being bound to this scenario, but my contextual example asked for the resulting emotions that might follow, in which case *resentment* would have been a possible candidate. | 2015/07/13 | [
"https://english.stackexchange.com/questions/259039",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/128946/"
] | I believe you are looking for **self-abnegation** or **abnegation**.
>
> *self-abnegation*: the denial of one's own interests in favour of the interests of others [*Collins*](http://www.collinsdictionary.com/dictionary/english/self-abnegation)
>
>
>
Here are some more details and an example from *[vocabulary.com](http://www.vocabulary.com/dictionary/abnegation)*:
>
> The noun *abnegation* definitely has the sense of self-denial and self-sacrifice. So you wouldn't use abnegation to refer to the fact that you are giving up candy in order to eat more fruit. Instead, you would use the word if you were giving up desserts in order to donate to charity all the money you saved by not eating them for a month or two.
>
>
> | I think you are referring to what is generally called [sense of responsibility](http://www.thefreedictionary.com/sense+of+responsibility):
>
> * an awareness of your obligations
>
>
>
The Free Dictionary |
259,039 | I am in need of a term for the feeling or condition of having to give up some life experience or opportunity due to a conflicting obligation, especially in cases of routine sacrifice of that which is considered "fulfilling".
I say "feeling or condition" because often it is sufficient to say one feels they are in a state of being subject to some condition, e.g. "I feel trapped".
I am interested in a term that has an appropriate connotation as the inverse of guilt. In other words, I am describing the contrasting negative feelings one might have (guilt vs \_\_\_) after having prioritized either self-interests or the interests of others, particularly when having a responsibility for the other party such as in parenthood.
**Update**: I appear to have actually asked for two things here. I was hoping for a means of describing the emotional condition of being bound to this scenario, but my contextual example asked for the resulting emotions that might follow, in which case *resentment* would have been a possible candidate. | 2015/07/13 | [
"https://english.stackexchange.com/questions/259039",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/128946/"
] | I believe you are looking for **self-abnegation** or **abnegation**.
>
> *self-abnegation*: the denial of one's own interests in favour of the interests of others [*Collins*](http://www.collinsdictionary.com/dictionary/english/self-abnegation)
>
>
>
Here are some more details and an example from *[vocabulary.com](http://www.vocabulary.com/dictionary/abnegation)*:
>
> The noun *abnegation* definitely has the sense of self-denial and self-sacrifice. So you wouldn't use abnegation to refer to the fact that you are giving up candy in order to eat more fruit. Instead, you would use the word if you were giving up desserts in order to donate to charity all the money you saved by not eating them for a month or two.
>
>
> | Sounds like you're balancing guilt vs *regret*.
Google's first definition of *regret* currently is:
>
> a feeling of sadness, repentance, or disappointment over something that has happened or been done.
>
>
> |
55,717 | Could anyone tell where Ghost had been during Season 6 Episode 9 of Game of Thrones, 'Battle of the Bastards'? I am sure he would have done a lot of damage. | 2016/06/20 | [
"https://movies.stackexchange.com/questions/55717",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/36964/"
] | [According to Miguel Sapochnik, director of the episode](http://uk.businessinsider.com/game-of-thrones-director-why-ghost-wasnt-in-battle-of-the-bastards-2016-6):
>
> "[Ghost] was in there in spades originally, but it's also an
> incredibly time consuming and expensive character to bring to life,"
> the episode's director Miguel Sapochnik told Business Insider on
> Monday. "Ultimately **we had to choose between Wun-Wun and the direwolf,
> so the dog bit the dust**."
>
>
> | Where exactly he was is not known with certainty, as Ghost was last seen before Jon left Castle Black.
That being said, I am fairly certain the following statement is incorrect.
>
> I am sure he would have done a lot of damage.
>
>
>
Direwolves are not war mounts and are certainly not trained for open field battle. In almost all cases where we see a Direwolf fight successfully it is versus a single opponent or small group(s) of scattered opponents. Their main strength lies in speed, agility and overpowering single combatants, not in brute force. In fact, any time a direwolf goes up against a decently sized force (Grey Wind at the twins, Summer at the weirwood tree) they're killed fairly quickly.
Only Grey Wind has been used in battle, but again this fitted his strengths:
* Whispering woods: evening/night-time in a forest. The opponents were not an organised wall; most fighting would consist of individual combat. There was also no archery, which can take out even a Direwolf quite quickly. The ensuing slaughter of the Lannister camp (offscreen) is also at night time and with surprise (most soldiers were sleeping).
* Battle of Oxcross: Again at night-time. More importantly, Grey Wind was not used in direct combat but on a stealthy side mission to spook and thus cripple the opponent's cavalry.
Ghost is even more stealthy than Grey Wind (though this is clearer in the books than in the show), so it really makes no sense for him to be in this open field battle. In the initial man-to-man fight he might have been of some use but would have been very vulnerable to arrows (no armor); in the subsequent lock-in he would be speared very easily.
Thus the most likely, but again *unconfirmed*, assumption is that Ghost was with them. He just stayed behind in the camp with Melisandre, Sansa, Lady Mormont and the rest of the supply train; remember that a medieval army does not consist only of soldiers. |
409,454 | I have a BLE board with a 1/4 wavelength antenna and a feed that is about 12mm long. The wavelength of 2.45ghz is 122mm. I've seen various discussions saying anything over 1/10 the wavelength you probably need to impedance control the trace? Is this true? Can I get away with not using very tight impedance control on this feed line? | 2018/11/29 | [
"https://electronics.stackexchange.com/questions/409454",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/200385/"
] | You have to use the wavelength in the medium where the wave is traveling. On PCB's, waves travel more slowly than they do in free space. For an internal trace (stripline transmission line) the speed is around 47% of free space, so the wavelength is around 47% also (waves bunch up closer together when they slow down). So rather than 122mm, the wavelength will be around 57mm. This puts you a bit over the maximum length (which would be 5.7mm according to the rule of thumb).
However, you are probably not using an internal trace. For an external trace (microstrip) the propagation speed is slightly faster (around 56% of C) but the result is roughly the same.
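The numbers above can be reproduced directly; a small Python sketch (the 0.47 and 0.56 velocity factors are the approximate figures quoted in this answer, and the λ/10 rule of thumb comes from the question):

```python
C_MM_PER_NS = 299.792458  # speed of light in mm/ns, so wavelengths come out in mm

f_ghz = 2.45
free_space = C_MM_PER_NS / f_ghz  # ~122 mm in air

# Guided wavelength = free-space wavelength * velocity factor of the medium.
guided = {name: free_space * vf
          for name, vf in [("stripline", 0.47), ("microstrip", 0.56)]}

# Rule-of-thumb maximum trace length before impedance control matters.
limits = {name: wl / 10 for name, wl in guided.items()}
```

For stripline this gives roughly 57 mm and a 5.7 mm limit, so a 12 mm feed is well past the rule of thumb, which is why designing the trace for 50 Ohms is the safe choice.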
So I recommend that you simply design your feed trace to be around 50 Ohms. This is not difficult. You can find online tools to help you. | Yes. For a feed this short you do not need tight control. Even if you are off by a large amount you will still probably not see a loss greater than 1dB. I would recommend using a calculator (this KiCad one is good) to find the best trace width for your board, and just use that.
You can also ask the fab house to impedance match it but for this it isn't needed. |
424,940 | my transformer has a secondary winding of 0 - 12v - 24v
Is it just labelled differently and is it really a center tap where 12v would be the center, 0 would be -12v and 24 would be 12v?
Thanks!
[](https://i.stack.imgur.com/iwcO2.jpg) | 2019/02/28 | [
"https://electronics.stackexchange.com/questions/424940",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/174444/"
] | Not necessarily. If the transformer is rated to produce full output from either the 12V tap or the 24V tap, it could be that the winding from the 12 to 24V terminals is of a finer gauge, since the current that could be drawn from the 24V tap would be half what you could draw at 12V. You can determine this by comparing the resistance of the 0-12 and 12-24 windings.
Would that matter for using it as a center-tapped +/- 12V transformer? Not much, the terminal voltage at the high side would be fractionally lower under load, but the overall heating compared to drawing the full power at a single tap would actually be less. | I would guess that yes, it is just a regular center tap transformer. Assuming that's the case:
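The winding-gauge point follows from both taps sharing a single VA rating; a toy calculation in Python (the 48 VA figure is invented for illustration and is not from the question):

```python
# At a fixed VA rating, available current halves at the higher-voltage tap,
# which is why the 12-24 V section may be wound with thinner wire.
va_rating = 48  # hypothetical rating
currents = {tap_voltage: va_rating / tap_voltage for tap_voltage in (12, 24)}
```

Here `currents[12]` is twice `currents[24]`, matching the point that the upper half of the winding carries at most half the current.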
The outputs will be AC, so -12V doesn't really have any meaning. The 12V here will be 12V AC, and the 24 will be 24V AC.
If you make the center tap 0, then "LABEL 0" will be 12V, and "LABEL 24" will also be 12V (but they will be out of phase with each other).
Based on the labeling, I think it's fairly certain that this is a regular transformer, but I would do some continuity tests just to make sure. |
471,973 | I've been doing some reading about the people building an emulator for the Wii, and it seems that it is nothing more than a beefed up GameCube, or the Nintendo64, so what makes building emulators for these systems so hard?
On a slight bit of a side note the hardware for these systems is surprisingly low:
Wii:
729MHz PPC CPU
88MB Memory
243MHz GPU
N64:
93.75MHz CPU (64-bit)
4MB Memory | 2009/01/23 | [
"https://Stackoverflow.com/questions/471973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
] | The CPU architecture for game consoles is often somewhat exotic compared with your average desktop machine. *Emulation* means to perform in software everything that the original hardware did. That is, while the original console may have had dedicated graphics, audio, etc. chips as well as a CPU with a different instruction set, the emulator must perform all the functions of these parallel resources at speed.
Unless the console's GPU is old, it almost certainly must be emulated on the GPU of the host machine, as modern graphics cards, even cheap ones, have many times the throughput (for graphics workloads) of even the most expensive multicore CPUs. Compounding this difficulty is the fact that communication between CPU, GPU, any other onboard DSPs, and memory was probably highly optimized on the console to take advantage of the specifics of the hardware configuration, and therefore these resources must be rate-matched as well.
Compounding all these difficulties, usually little is known about the specifics of the console's hardware, as this is kept very much under wraps by design. Reverse engineering is getting less and less feasible for hobbyists to do.
To put things into perspective, an architectural simulator (a program which can run, for example, a PowerPC program on an x86 machine and collect all sorts of statistics about it) might run between 1000x and 100000x slower than real-time. An RTL simulation (a simulation of all the gates and flip-flops that make up a chip) of a modern CPU can usually only run between 10Hz and a few hundred Hz. Even very optimized emulation is likely to be between 10 and 100 times slower than native code, thus limiting what can be emulated convincingly today (particularly given the real-time interactivity implied by a game console emulator). | It is just because the game program is written for that particular hardware, so it can utilize all of the hardware's benefits. Even if you have a supercomputer, it cannot properly run a program that cannot communicate with the supercomputer's own hardware. The same applies if you run PC games on consoles such as the PS3/4 or Xbox One. The only emulators that work 99% of the time are SNES and PS1 emulators. |
471,973 | I've been doing some reading about the people building an emulator for the Wii, and it seems that it is nothing more than a beefed up GameCube, or the Nintendo64, so what makes building emulators for these systems so hard?
On a slight bit of a side note the hardware for these systems is surprisingly low:
Wii:
729MHz PPC CPU
88MB Memory
243MHz GPU
N64:
93.75MHz CPU (64-bit)
4MB Memory | 2009/01/23 | [
"https://Stackoverflow.com/questions/471973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
] | The CPU architecture for game consoles is often somewhat exotic compared with your average desktop machine. *Emulation* means to perform in software everything that the original hardware did. That is, while the original console may have had dedicated graphics, audio, etc. chips as well as a CPU with a different instruction set, the emulator must perform all the functions of these parallel resources at speed.
Unless the console's GPU is old, it almost certainly must be emulated on the GPU of the host machine, as modern graphics cards, even cheap ones, have many times the throughput (for graphics workloads) of even the most expensive multicore CPUs. Compounding this difficulty is the fact that communication between CPU, GPU, any other onboard DSPs, and memory was probably highly optimized on the console to take advantage of the specifics of the hardware configuration, and therefore these resources must be rate-matched as well.
Compounding all these difficulties, usually little is known about the specifics of the console's hardware, as this is kept very much under wraps by design. Reverse engineering is getting less and less feasible for hobbyists to do.
To put things into perspective, an architectural simulator (a program which can run, for example, a PowerPC program on an x86 machine and collect all sorts of statistics about it) might run between 1000x and 100000x slower than real-time. An RTL simulation (a simulation of all the gates and flip-flops that make up a chip) of a modern CPU can usually only run between 10Hz and a few hundred Hz. Even very optimized emulation is likely to be between 10 and 100 times slower than native code, thus limiting what can be emulated convincingly today (particularly given the real-time interactivity implied by a game console emulator). | There are a number of reasons why emulation is difficult.
1. Emulating a system requires a lot more power than the target system
Sometimes the host system needs an order of magnitude more power (speed) than the target system. This is easy to understand if you consider the host machine needs to do all the work of the original system *and then some more* work to manage all communication between components while also sharing the system resources with other applications. This is why it takes a 2GHz processor to faithfully emulate a SNES which runs at a measly 21MHz.
2. Sometimes the instruction sets and/or subsystem workings are unknown to people
Most hardware is essentially a black box and understanding how it works is figured out through reverse engineering which takes a lot of time and patience. Not to mention, companies try their best to make reverse engineering difficult and companies have gotten much better at this post Playstation 1 era.
3. Lack of people building emulators
Emulation is a pretty niche area that requires a lot of working knowledge in many domains. To be frank, there aren't a lot of people capable of emulating many of the modern systems. Emulating these systems takes a lot of time and effort and only the most dedicated will actually do it. |
471,973 | I've been doing some reading about the people building an emulator for the Wii, and it seems that it is nothing more than a beefed up GameCube, or the Nintendo64, so what makes building emulators for these systems so hard?
On a slight bit of a side note the hardware for these systems is surprisingly low:
Wii:
729MHz PPC CPU
88MB Memory
243MHz GPU
N64:
93.75MHz CPU (64-bit)
4MB Memory | 2009/01/23 | [
"https://Stackoverflow.com/questions/471973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
] | The CPU architecture for game consoles is often somewhat exotic compared with your average desktop machine. *Emulation* means to perform in software everything that the original hardware did. That is, while the original console may have had dedicated graphics, audio, etc. chips as well as a CPU with a different instruction set, the emulator must perform all the functions of these parallel resources at speed.
Unless the console's GPU is old, it almost certainly must be emulated on the GPU of the host machine, as modern graphics cards, even cheap ones, have many times the throughput (for graphics workloads) of even the most expensive multicore CPUs. Compounding this difficulty is the fact that communication between CPU, GPU, any other onboard DSPs, and memory was probably highly optimized on the console to take advantage of the specifics of the hardware configuration, and therefore these resources must be rate-matched as well.
Compounding all these difficulties, usually little is known about the specifics of the console's hardware, as this is kept very much under wraps by design. Reverse engineering is getting less and less feasible for hobbyists to do.
To put things into perspective, an architectural simulator (a program which can run, for example, a PowerPC program on an x86 machine and collect all sorts of statistics about it) might run between 1000x and 100000x slower than real-time. An RTL simulation (a simulation of all the gates and flip-flops that make up a chip) of a modern CPU can usually only run between 10Hz and a few hundred Hz. Even very optimized emulation is likely to be between 10 and 100 times slower than native code, thus limiting what can be emulated convincingly today (particularly given the real-time interactivity implied by a game console emulator). | Writing emulators is hard because you must exactly/completely/absolutely replicate said hardware behaviour, including its OS behaviour, in software.
Writing emulators for older consoles was in some cases harder than writing emulators for modern consoles, because a lot of modern consoles use some form of Linux or \*nix, so once the hardware is emulated, software is a matter of dumping the machine's BIOS and handing over control.
Older consoles did everything in hardware, which means that reverse engineering played a much bigger role. You needed very good low level hackers to help you document how the old console worked and what each magic number meant.
Today, there are fewer magic numbers and more standard GFX cards and CPUs, although modern hardware has a lot more instructions and shiny doodads to emulate. A lot of what the more modern consoles do is well documented, unlike older consoles. |
471,973 | I've been doing some reading about the people building an emulator for the Wii, and it seems that it is nothing more than a beefed up GameCube, or the Nintendo64, so what makes building emulators for these systems so hard?
On a slight bit of a side note the hardware for these systems is surprisingly low:
Wii:
729MHz PPC CPU
88MB Memory
243MHz GPU
N64:
93.75MHz CPU (64-bit)
4MB Memory | 2009/01/23 | [
"https://Stackoverflow.com/questions/471973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
] | There are a number of reasons why emulation is difficult.
1. Emulating a system requires a lot more power than the target system
Sometimes the host system needs an order of magnitude more power (speed) than the target system. This is easy to understand if you consider the host machine needs to do all the work of the original system *and then some more* work to manage all communication between components while also sharing the system resources with other applications. This is why it takes a 2GHz processor to faithfully emulate a SNES which runs at a measly 21MHz.
2. Sometimes the instruction sets and/or subsystem workings are unknown to people
Most hardware is essentially a black box and understanding how it works is figured out through reverse engineering which takes a lot of time and patience. Not to mention, companies try their best to make reverse engineering difficult and companies have gotten much better at this post Playstation 1 era.
3. Lack of people building emulators
Emulation is a pretty niche area that requires a lot of working knowledge in many domains. To be frank, there aren't a lot of people capable of emulating many of the modern systems. Emulating these systems takes a lot of time and effort and only the most dedicated will actually do it. | It is just because the game program is written for that particular hardware, so it can utilize all of the hardware's benefits. Even if you have a supercomputer, it cannot properly run a program that cannot communicate with the supercomputer's own hardware. The same applies if you run PC games on consoles such as the PS3/4 or Xbox One. The only emulators that work 99% of the time are SNES and PS1 emulators. |
471,973 | I've been doing some reading about the people building an emulator for the Wii, and given that it seems to be nothing more than a beefed-up GameCube (or, similarly, the Nintendo 64), what makes building emulators for these systems so hard?
On a slight side note, the hardware specs for these systems are surprisingly modest:
Wii:
729MHz PPC CPU
88MB Memory
243MHz GPU
N64:
93.75MHz CPU (64-bit)
4MB Memory | 2009/01/23 | [
"https://Stackoverflow.com/questions/471973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
] | Writing emulators is hard because you must exactly/completely/absolutely replicate said hardware behaviour, including its OS behaviour, in software.
Writing emulators for older consoles was in some cases harder than writing emulators for modern consoles, because a lot of modern consoles use some form of Linux or \*nix, so once the hardware is emulated, software is a matter of dumping the machine's BIOS and handing over control.
Older consoles did everything in hardware, which means that reverse engineering played a much bigger role. You needed very good low level hackers to help you document how the old console worked and what each magic number meant.
Today, there are fewer magic numbers and instead standard GFX cards and CPUs, although modern hardware has a lot more instructions and shiny doodads to emulate. A lot of what the more modern consoles do is well documented, unlike older consoles. | It is just because the game program is written for that particular hardware, so it can take advantage of all of that hardware's features. Even if you have a supercomputer, it cannot properly run a program that cannot communicate with the supercomputer's hardware. It is the same situation if you run PC games on consoles such as the PS3/PS4 or Xbox One. The only emulators that work 99% of the time are the SNES and PS1 emulators. |
471,973 | I've been doing some reading about the people building an emulator for the Wii, and given that it seems to be nothing more than a beefed-up GameCube (or, similarly, the Nintendo 64), what makes building emulators for these systems so hard?
On a slight side note, the hardware specs for these systems are surprisingly modest:
Wii:
729MHz PPC CPU
88MB Memory
243MHz GPU
N64:
93.75MHz CPU (64-bit)
4MB Memory | 2009/01/23 | [
"https://Stackoverflow.com/questions/471973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
] | There are a number of reasons why emulation is difficult.
1. Emulating a system requires a lot more power than the target system
Sometimes the host system needs an order of magnitude more power (speed) than the target system. This is easy to understand if you consider the host machine needs to do all the work of the original system *and then some more* work to manage all communication between components while also sharing the system resources with other applications. This is why it takes a 2GHz processor to faithfully emulate a SNES which runs at a measly 21MHz.
2. Sometimes the instruction sets and/or subsystem workings are unknown to people
Most hardware is essentially a black box, and understanding how it works is figured out through reverse engineering, which takes a lot of time and patience. Not to mention, companies try their best to make reverse engineering difficult, and they have gotten much better at this in the post-PlayStation 1 era.
3. Lack of people building emulators
Emulation is a pretty niche area that requires a lot of working knowledge in many domains. To be frank, there are not a lot of people capable of emulating many of the modern systems. Emulating these systems takes a lot of time and effort, and only the most dedicated will actually do it. | Writing emulators is hard because you must exactly/completely/absolutely replicate said hardware behaviour, including its OS behaviour, in software.
Writing emulators for older consoles was in some cases harder than writing emulators for modern consoles, because a lot of modern consoles use some form of Linux or \*nix, so once the hardware is emulated, software is a matter of dumping the machine's BIOS and handing over control.
Older consoles did everything in hardware, which means that reverse engineering played a much bigger role. You needed very good low level hackers to help you document how the old console worked and what each magic number meant.
Today, there are fewer magic numbers and instead standard GFX cards and CPUs, although modern hardware has a lot more instructions and shiny doodads to emulate. A lot of what the more modern consoles do is well documented, unlike older consoles. |
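To make the overhead point in the answers above concrete: a software emulator typically runs a fetch-decode-execute loop, spending many host instructions per guest instruction. Below is a minimal sketch for a hypothetical three-opcode toy CPU — the opcode names and structure are invented for illustration, not any real console's instruction set:

```python
def run(program, steps):
    """Interpret `steps` instructions of a toy CPU with one accumulator."""
    acc = 0  # single accumulator register
    pc = 0   # program counter
    for _ in range(steps):
        op, arg = program[pc]  # fetch + decode the guest instruction
        if op == "LOAD":       # acc = arg
            acc = arg
        elif op == "ADD":      # acc += arg
            acc += arg
        elif op == "JMP":      # jump to absolute address
            pc = arg
            continue
        pc += 1
    return acc

prog = [("LOAD", 1), ("ADD", 2), ("ADD", 3)]
print(run(prog, 3))  # prints 6
```

Even this trivial loop performs a list lookup, a tuple unpack, and several branch tests for every single guest instruction; a real emulator layers memory mapping, cycle-accurate timing, and GPU/audio synchronization on top, which is why the host needs an order of magnitude more power than the target.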
80,286 | In 5e, a player who is grappling has their movement speed halved. However, a player with two hands can grapple *two* opponents.
As the wording seems to indicate that the movement speed penalty applies due to the grappling status, does that mean that a player grappling two opponents is doubly penalized?
If so, how would it apply: is your speed reduced to 0 ([1 - 0.5] - 0.5), or a quarter of your regular speed ([1 / 2] / 2)? | 2016/05/17 | [
"https://rpg.stackexchange.com/questions/80286",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/25296/"
] | It's reduced to a quarter of your regular speed
===============================================
>
> ***Moving a Grappled Creature.*** When you move, you can drag or carry the grappled creature with you, but your speed is halved, unless the creature is two or more sizes smaller than you.
>
>
>
So when you want to start dragging one creature, that will halve your speed: if your speed was 40, it's now 20. If, in addition to dragging the first creature, you want to drag a second creature, the halving applies again, so your current speed of 20 is now 10. | It Does Not Stack
-----------------
It is very important to know exactly what the rules say here.
PHB, pg. 195
>
> **Moving a Grappled Creature.** When you move, you can drag or carry the grappled creature with you, but *your speed* is halved, unless the
> creature is two or more sizes smaller than you.
>
>
>
Note here that it says "your speed" and not "your remaining speed" -- your speed is always constant.
So if you are a human with a speed of 30 ft, *your speed* is halved to 15 feet. If you grapple a second creature, *your speed* is halved again to 15 feet -- but it doesn't matter at that point, since it was already halved the first time to the exact same value.
---
A related question which explains how the movement speed penalty works: [How Do Grapplers Stand If Prone?](https://rpg.stackexchange.com/questions/79392/how-do-grapplers-stand-if-prone) |
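As an aside, the two readings debated in the answers above can be checked with quick arithmetic, using the 40 ft example speed from the first answer (this is a sketch of both interpretations, not a ruling):

```python
base_speed = 40  # example speed used in the first answer

# Reading A ("It Does Not Stack"): "your speed is halved" always applies to
# your base speed, so grappling a second creature re-halves the base to the
# same value.
one_grapple = base_speed // 2        # 20 ft
two_grapples_flat = base_speed // 2  # still 20 ft

# Reading B (successive halving): each dragged creature halves the
# already-reduced speed.
two_grapples_stacked = base_speed // 2 // 2  # 10 ft

print(one_grapple, two_grapples_flat, two_grapples_stacked)  # prints 20 20 10
```

Neither reading reduces speed to 0; the disagreement is only between a flat half and a quarter.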
15,441 | There are recognized countries that are members of UN.
There are recognized countries that are not members of UN.
There are legal countries that are not widely recognized.
So, what is the relation between the "recognition" and "legality" of a country? | 2017/02/08 | [
"https://politics.stackexchange.com/questions/15441",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/12034/"
] | One definition of **"legality"** is basically whether a country declared its independence legally - maybe through a peace treaty, a referendum, etc.
It isn't that important in the foreign affairs aspect since diplomatic ties are based on "recognition". Ultimately, a country needs to be recognised to be able to gain membership in inter-governmental organisations or to travel freely to other nations.
---
**"[Diplomatic] Recognition"** as defined in Wikipedia:
>
> Diplomatic recognition in international law is a unilateral political act with domestic and international legal consequences, whereby **a state acknowledges an act or status of another state or government in control of a state** (may be also a recognized state).
>
>
>
The UN also describes this [on their website](http://www.un.org/en/sections/member-states/about-un-membership/index.html):
>
> **How does a new State or Government obtain recognition by the United Nations?**
>
>
> The recognition of a new State or Government is an act that only other States and Governments may grant or withhold. It generally **implies readiness to assume diplomatic relations**. The **United Nations is neither a State nor a Government, and therefore does not possess any authority to recognize either a State or a Government**. As an organization of independent States, it may admit a new State to its membership or accept the credentials of the representatives of a new Government.
>
>
>
This is more important in the diplomatic aspect as countries need to be recognised by a number of other countries in order to be a member of the United Nations.
An example would be Kosovo, which is recognised by 57% of UN member states but is still not a member of the UN.
---
In conclusion, "legality" basically means the legality of the existent of a state, but it doesn't really hold much meaning or value on the international stage while recognition means whether other countries accept you as a sovereign nation/ state or do they count you as territory of another country. | LEGALITY AND RECOGNITION ARE TWO COMPLETELY DIFFERENT THINGS
(Neither of them is a "necessary" or a "sufficient" condition for the other one)
Recognition is completely a political notion/act (as stated by the International Court of Justice in its 2010 Kosovo decision) and has nothing to do with legality.
So, even if no country recognizes a country, this has nothing to do with the legality of that country.
Then-President of the International Court of Justice (ICJ) Hisashi OWADA (2010): ["International law contains no "prohibition on declarations of independence."](http://www.bbc.com/news/world-europe-10730573)
The International Court of Justice (ICJ) (2010): "while the declaration may not have been illegal, the issue of RECOGNITION was a POLITICAL one".
(in [OSHISANYA 2016, An Almanac of Contemporary and Comparative Judicial Restatement](https://books.google.com.tr/books?id=xMvOBAAAQBAJ), p.64)
Hence, "recognition is a political action/matter, not a legal matter".
That is to say, “being recognized/not recognized does not affect the legality/illegality of a country”.
That said, as far as "legality" and "recognition" are concerned, the existence of either of them helps the existence of the other.
15,441 | There are recognized countries that are members of UN.
There are recognized countries that are not members of UN.
There are legal countries that are not widely recognized.
So, what is the relation between the "recognition" and "legality" of a country? | 2017/02/08 | [
"https://politics.stackexchange.com/questions/15441",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/12034/"
] | One definition of **"legality"** is basically whether a country declared its independence legally - maybe through a peace treaty, a referendum, etc.
It isn't that important in the foreign affairs aspect since diplomatic ties are based on "recognition". Ultimately, a country needs to be recognised to be able to gain membership in inter-governmental organisations or to travel freely to other nations.
---
**"[Diplomatic] Recognition"** as defined in Wikipedia:
>
> Diplomatic recognition in international law is a unilateral political act with domestic and international legal consequences, whereby **a state acknowledges an act or status of another state or government in control of a state** (may be also a recognized state).
>
>
>
The UN also describes this [on their website](http://www.un.org/en/sections/member-states/about-un-membership/index.html):
>
> **How does a new State or Government obtain recognition by the United Nations?**
>
>
> The recognition of a new State or Government is an act that only other States and Governments may grant or withhold. It generally **implies readiness to assume diplomatic relations**. The **United Nations is neither a State nor a Government, and therefore does not possess any authority to recognize either a State or a Government**. As an organization of independent States, it may admit a new State to its membership or accept the credentials of the representatives of a new Government.
>
>
>
This is more important in the diplomatic aspect as countries need to be recognised by a number of other countries in order to be a member of the United Nations.
An example would be Kosovo, which is recognised by 57% of UN member states but is still not a member of the UN.
---
In conclusion, "legality" basically means the legality of the existent of a state, but it doesn't really hold much meaning or value on the international stage while recognition means whether other countries accept you as a sovereign nation/ state or do they count you as territory of another country. | Your question includes three different concepts: UN membership, recognition, and legality. I will explain each one in turn as well as drawing distinctions or relationships between each thing.
United Nations Membership
-------------------------
The rules for joining the United Nations can be found [here](http://www.un.org/en/ga/about/ropga/adms.shtml). The most relevant aspects are:
* States must submit an application. The application affirms their willingness to adhere to the [Charter of the United Nations](http://www.un.org/en/charter-united-nations/).
* The Security Council will approve or disapprove of their application. There are no rules or guidance for this - it is strictly up to the Council.
* If the Council approves the application, it moves to the General Assembly for a vote.
Although there are no formal rules for when a state should be admitted or not, the Charter does say this:
>
> Membership in the United Nations is open to all other peace-loving states which accept the obligations contained in the present Charter and, in the judgment of the Organization, are able and willing to carry out these obligations. *Source: [Chapter II, Art. 4](http://www.un.org/en/charter-united-nations/)*
>
>
>
United Nations membership just means that a state has been approved to be a member of the United Nations. They have agreed to adhere to the UN charter and have the approval of the Security Council and General Assembly. There is no notion of "legality" or "recognition", just the idea of being a state.
What is a state? (Legality)
---------------------------
The generally-cited legal description of a state comes from the [Montevideo Convention](http://www.cfr.org/sovereignty/montevideo-convention-rights-duties-states/p15897).
Article I defines a state:
>
> The state as a person of international law should possess the following qualifications:
>
>
> a ) a permanent population;
>
>
> b ) a defined territory;
>
>
> c ) government; and
>
>
> d) capacity to enter into relations with the other states.
>
>
>
This is the closest you will come to a concept of "legality". The Convention expressly says that a state may be legal and not be recognized:
>
> The political existence of the state is independent of recognition by the other states (*Source: Article 3)*
>
>
>
Recognition
-----------
No single entity decides when a state is a state, and no single entity decides when a state is recognized. Rather, each state decides when to recognize a state. There is a distinct one-to-one relationship here: if something is a state, then every other state is required to recognize it (so there are no unrecognized states except in the short term), and everything recognized as a state is a state (so there are no incorrectly recognized states) (See this analysis from the [Yale Law Review](https://www.jstor.org/stable/792830?seq=1#page_scan_tab_contents)).
Effectively, this means that any legal state must be recognized by other nations, and also that all members of the United Nations are recognized states.
The Wikipedia article nicely sums up [diplomatic recognition](https://en.wikipedia.org/wiki/Diplomatic_recognition), which is mostly a matter of custom rather than formal law. Recognition is unilateral (Country A choosing to recognize Country B does not mean that Country B recognizes, or does not recognize, Country A).
De facto recognition occurs from extra-legal acts. For example, if the President of the United States meets with the heads of a country in an official capacity, that is effective recognition of that state. Recognition can also occur through legal channels. For example, when a United Nations member votes in favor of admitting a state to the United Nations, it is legally acknowledging that the state is (in fact) a state.
Quick Summation
---------------
UN membership is only open to states. UN membership requires a vote from the General Assembly, and voting to admit a state is legal recognition of that state. So all members of the United Nations are **both** legal and recognized (by the majority of UN member nations) states.
Legal states may theoretically be unrecognized, although every state has an obligation to recognize a legal state. In practice, the evidence that a state is a legal state is recognition, so (in practice) there are no unrecognized legal states.
Recognized states are always legal states. However, not all recognized states may be members of the United Nations. It's possible that a recognized state can choose not to accept the United Nations charter or seek membership, or that it will fail to be admitted. |
42,430 | My company recently announced they will be performing some layoffs over the next 12 weeks. My area of the company will be affected, so I have no choice but to assume I will be let go and am starting to look for a new job now.
I've never been through this kind of thing before. Not once in my life have I been in the position of maybe not having a job tomorrow. So, needless to say, I am a bit unfocused right now. That's a problem for me, because I take my work very seriously and even though it is likely I will be let go, or find another job before I have a chance to find out, it is extremely important to me that I remain focused and professional while I'm at work.
Does anyone have any tips/advice for how to accomplish this in the face of such uncertainty? I read [this Q & A](https://workplace.stackexchange.com/questions/19360/how-to-motivate-a-team-when-everyone-is-paranoid-about-layoffs), but the answers all just said to start looking for work. I already know to do that. I need to know what I can do to keep myself focused and my quality of work high. It's both a matter of self-preservation and of my own morals and values.
---
I think people have misunderstood my question to some degree. I am not actually very concerned with "saving my skin", so to speak. What I am concerned with are techniques I can use to maintain a high level of focus and professionalism. It's a matter of principle, not saving my job. | 2015/03/06 | [
"https://workplace.stackexchange.com/questions/42430",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/26114/"
] | **Focus on what you can control and worry about that**
It is hard to be positive with your job on the line. But it is important not to let yourself get down. If you focus on what you can't control you will get depressed. Despite your best efforts you may be laid off. Don't worry about that. There is nothing you can do. You can focus on keeping your performance up, that's it.
Focus on what you can control: Looking for new jobs, keeping your work at your current job up to snuff, being positive with/around your co-workers and bosses.
It takes true strength to be positive in times of trouble. Being positive and confident may be the difference you need (in your current job or the one you are looking for). | Well, you're doing the right thing by starting to look for a new job. Obviously, keeping this quiet will be important if you wish to keep your current job.
Fact is, you have one huge motivator for keeping busy with your job; your job depends on it.
If there are any responsibilities up for grabs, I suggest you grab them, or volunteer.
Usually, you can make your job fairly secure by making yourself part of a low [bus factor](http://en.wikipedia.org/wiki/Bus_factor): that is to say, by gaining expertise and knowledge that very few people have. It increases your value to the company, so letting you go would come at a higher price. |
42,430 | My company recently announced they will be performing some layoffs over the next 12 weeks. My area of the company will be affected, so I have no choice but to assume I will be let go and am starting to look for a new job now.
I've never been through this kind of thing before. Not once in my life have I been in the position of maybe not having a job tomorrow. So, needless to say, I am a bit unfocused right now. That's a problem for me, because I take my work very seriously and even though it is likely I will be let go, or find another job before I have a chance to find out, it is extremely important to me that I remain focused and professional while I'm at work.
Does anyone have any tips/advice for how to accomplish this in the face of such uncertainty? I read [this Q & A](https://workplace.stackexchange.com/questions/19360/how-to-motivate-a-team-when-everyone-is-paranoid-about-layoffs), but the answers all just said to start looking for work. I already know to do that. I need to know what I can do to keep myself focused and my quality of work high. It's both a matter of self-preservation and of my own morals and values.
---
I think people have misunderstood my question to some degree. I am not actually very concerned with "saving my skin", so to speak. What I am concerned with are techniques I can use to maintain a high level of focus and professionalism. It's a matter of principle, not saving my job. | 2015/03/06 | [
"https://workplace.stackexchange.com/questions/42430",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/26114/"
] | **Focus on what you can control and worry about that**
It is hard to be positive with your job on the line. But it is important not to let yourself get down. If you focus on what you can't control you will get depressed. Despite your best efforts you may be laid off. Don't worry about that. There is nothing you can do. You can focus on keeping your performance up, that's it.
Focus on what you can control: Looking for new jobs, keeping your work at your current job up to snuff, being positive with/around your co-workers and bosses.
It takes true strength to be positive in times of trouble. Being positive and confident may be the difference you need (in your current job or the one you are looking for). | This is far from a definitive answer, but here's what's working for me so far.
1. Establish Cognitive Dissonance
---------------------------------
From "9 to 5" convince yourself that you are safe and will not be let go. This allows you to focus on your work and get the job done. After you punch out, convince yourself that you *will* be let go, and spend your time accordingly.
2. Talk About It
----------------
but not with your co-workers. Your co-workers are as frightened and uncertain about the future as you are. DO NOT discuss this with them. Negative energy builds in abundance this way. No, you need to talk about this, but with someone who is minimally affected by the situation. Obviously, discuss this with your significant other, but someone even more objective is preferred. If you have a mentor, call them. It's likely they've been through this before and have a few words of wisdom.
3. Stay Positive
----------------
By any means necessary. Get an extra workout in. Go for a long walk in the woods. Whatever you do for you, do it. Now is the time to make a little extra time for you.
Remind yourself that this is not the end of the world. You're not the first person to ever face a layoff, and civilization is still here. Yup. I just checked. The whole wide world is still outside my window.
4. View the Situation as an Opportunity
---------------------------------------
Perhaps this is a good time to make a big bold career move. You've nothing to lose, so there is no sense of potentially lost security holding you back. Seize the opportunity to do something new or simply bigger than your current role. |
42,430 | My company recently announced they will be performing some layoffs over the next 12 weeks. My area of the company will be affected, so I have no choice but to assume I will be let go and am starting to look for a new job now.
I've never been through this kind of thing before. Not once in my life have I been in the position of maybe not having a job tomorrow. So, needless to say, I am a bit unfocused right now. That's a problem for me, because I take my work very seriously and even though it is likely I will be let go, or find another job before I have a chance to find out, it is extremely important to me that I remain focused and professional while I'm at work.
Does anyone have any tips/advice for how to accomplish this in the face of such uncertainty? I read [this Q & A](https://workplace.stackexchange.com/questions/19360/how-to-motivate-a-team-when-everyone-is-paranoid-about-layoffs), but the answers all just said to start looking for work. I already know to do that. I need to know what I can do to keep myself focused and my quality of work high. It's both a matter of self-preservation and of my own morals and values.
---
I think people have misunderstood my question to some degree. I am not actually very concerned with "saving my skin", so to speak. What I am concerned with are techniques I can use to maintain a high level of focus and professionalism. It's a matter of principle, not saving my job. | 2015/03/06 | [
"https://workplace.stackexchange.com/questions/42430",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/26114/"
] | This is far from a definitive answer, but here's what's working for me so far.
1. Establish Cognitive Dissonance
---------------------------------
From "9 to 5" convince yourself that you are safe and will not be let go. This allows you to focus on your work and get the job done. After you punch out, convince yourself that you *will* be let go, and spend your time accordingly.
2. Talk About It
----------------
but not with your co-workers. Your co-workers are as frightened and uncertain about the future as you are. DO NOT discuss this with them. Negative energy builds in abundance this way. No, you need to talk about this, but with someone who is minimally affected by the situation. Obviously, discuss this with your significant other, but someone even more objective is preferred. If you have a mentor, call them. It's likely they've been through this before and have a few words of wisdom.
3. Stay Positive
----------------
By any means necessary. Get an extra workout in. Go for a long walk in the woods. Whatever you do for you, do it. Now is the time to make a little extra time for you.
Remind yourself that this is not the end of the world. You're not the first person to ever face a layoff, and civilization is still here. Yup. I just checked. The whole wide world is still outside my window.
4. View the Situation as an Opportunity
---------------------------------------
Perhaps this is a good time to make a big, bold career move. You've nothing to lose, so there is no sense of potentially lost security holding you back. Seize the opportunity to do something new or simply bigger than your current role. | Another motivator: being laid off today doesn't mean no job tomorrow.
I don't know your company's situation, other than that if they are facing layoffs, it's probably bad. But bad is often a temporary thing.
Case in point: our company had to lay off a lot of people about two years ago because a company we were working with / competing with managed to trap our money in escrow (i.e., we earned the money, they just kept it from getting to us). When we finally got through that, we hired back most of the people we had laid off.
So even if you are laid off, that doesn't mean they won't hire you back later. Keep up the good work, and if things don't pan out elsewhere, you might be able to come back when things stabilize.
(Also, you want to have a good reputation as a professional when layoffs happen. Often another company will try to hire up a chunk of people who get laid off. For example, if I'm looking to spin up a new team for a new project at my company and the decent office next door is letting 8 developers go, I'm going to hire the whole lot, as they already know how to work together.) |