| question | answer | context |
|---|---|---|
Before their double-helix DNA model, Watson and Crick made a "failed" model. What did this model look like?
|
Apparently it was a triple helix, with the three sugar-phosphate backbones in the middle and the nitrogenous bases sticking outward.
_URL_0_
That is my Google-fu, however, not my expertise; I would not be a good person to describe what that would actually look like.
|
[
"Late in 1951, Francis Crick started working with James Watson at the Cavendish Laboratory within the University of Cambridge. In 1953, Watson and Crick suggested what is now accepted as the first correct double-helix model of DNA structure in the journal \"Nature\". Their double-helix, molecular model of DNA was then based on one X-ray diffraction image (labeled as \"Photo 51\") taken by Rosalind Franklin and Raymond Gosling in May 1952, and the information that the DNA bases are paired. On 28 February 1953 Crick interrupted patrons' lunchtime at The Eagle pub in Cambridge to announce that he and Watson had \"discovered the secret of life\".\n",
"Watson and Crick's model attracted great interest immediately upon its presentation. Arriving at their conclusion on February 21, 1953, Watson and Crick made their first announcement on February 28. In an influential presentation in 1957, Crick laid out the \"central dogma of molecular biology\", which foretold the relationship between DNA, RNA, and proteins, and articulated the \"sequence hypothesis.\" A critical confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 in the form of the Meselson–Stahl experiment. Work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, and Har Gobind Khorana and others deciphered the genetic code not long afterward (1966). These findings represent the birth of molecular biology.\n",
"The Meselson–Stahl experiment is an experiment by Matthew Meselson and Franklin Stahl in 1958 which supported Watson and Crick's hypothesis that DNA replication was semiconservative. In semiconservative replication, when the double stranded DNA helix is replicated, each of the two new double-stranded DNA helices consisted of one strand from the original helix and one newly synthesized. It has been called \"the most beautiful experiment in biology.\" Meselson and Stahl decided the best way to tag the parent DNA would be to change one of the atoms in the parent DNA molecule. Since nitrogen is found in the nitrogenous bases of each nucleotide, they decided to use an isotope of nitrogen to distinguish between parent and newly copied DNA. The isotope of nitrogen had an extra neutron in the nucleus, which made it heavier.\n",
"In 1953, based on X-ray diffraction images and the information that the bases were paired, James D. Watson along with Francis Crick co-discovered what is now widely accepted as the first accurate double-helix model of DNA structure.\n",
"The double-helix model of DNA structure was first published in the journal \"Nature\" by James Watson and Francis Crick in 1953, (X,Y,Z coordinates in 1954) based upon the crucial X-ray diffraction image of DNA labeled as \"Photo 51\", from Rosalind Franklin in 1952, followed by her more clarified DNA image with Raymond Gosling, Maurice Wilkins, Alexander Stokes, and Herbert Wilson, and base-pairing chemical and biochemical information by Erwin Chargaff. The prior model was triple-stranded DNA.\n",
"In April 1953, together with Sydney Brenner, Jack Dunitz, Leslie Orgel, and Beryl M. Oughton, Hodgkin was one of the first people to travel from Oxford to Cambridge to see the model of the double helix structure of DNA: constructed by Francis Crick and James Watson, it was based on data and technique acquired by Maurice Wilkins and Rosalind Franklin. According to the late Dr. Beryl Oughton (married name, Rimmer), they drove to Cambridge in two cars after Hodgkin announced that they were off to see the model of the structure of DNA.\n",
"Triple-stranded DNA structures were common hypotheses in the 1950s when scientists were struggling to discover DNA's true structural form. Watson and Crick (who later won the Nobel Prize for their double-helix model) originally considered a triple-helix model, as did Pauling and Corey, who published a proposal for their triple-helix model in 1953, as well as fellow scientist Fraser. However, Watson and Crick soon identified several problems with these models:\n"
] |
Is remembering a dream the same mechanism as remembering something in real life?
|
Memory isn't as perfect as we'd like to think it is to begin with. Then, on top of that, the altered state of consciousness the brain is in during sleep can (essentially) shut down parts of the brain, particularly the prefrontal cortex. Since memory requires many neurons firing in concert, having fewer neurons functional while sleeping likely causes the memory not to be encoded. Further, dreams are influenced by experiences so there's probably blurring between reality and dreams when it comes to forming memories.
tl;dr: Same mechanism, but fewer active neurons to encode memory.
|
[
"For some people, sensations from the previous night's dreams are sometimes spontaneously experienced in falling asleep. However they are usually too slight and fleeting to allow dream recall. At least 95% of all dreams are not remembered. Certain brain chemicals necessary for converting short-term memories into long-term ones are suppressed during REM sleep. Unless a dream is particularly vivid and if one wakes during or immediately after it, the content of the dream is not remembered. Recording or reconstructing dreams may one day assist with dream recall. Using technologies such as functional magnetic resonance imaging (fMRI) and electromyography (EMG), researchers have been able to record basic dream imagery, dream speech activity and dream motor behavior (such as walking and hand movements).\n",
"Dreams are also difficult to remember, with no more than 5% to 10% of dreams being remembered the following day. The parts of the dream that are retained the next day likely dissipate overnight. However, dreams are not all negative and can have much to say about daily life. Broader possibilities for dreams can be presented by stressing their social aspect. Through this method dreams have a different, but equally important hold on psychoanalysis.\n",
"Research has found that frequency of dream recall is associated with absorption and related personality traits, such as openness to experience and proneness to dissociation. A proposed explanation is the continuity model of human consciousness. This model proposes that people who are prone to vivid and unusual experiences during the day, such as fantasy and daydreaming, will tend to have vivid and memorable dream content, and hence will be more likely to remember their dreams.\n",
"Dreams are brief compared to the range and abundance of dream thoughts. Through condensation or compression, dream content can be presented in one dream. Oftentimes, people may recall having more than one dream in a night. Freud explained that the content of all dreams occurring on the same night represents part of the same whole. He believed that separate dreams have the same meaning. Often the first dream is more distorted and the latter is more distinct. Displacement of dream content occurs when manifest content does not resemble the actual meaning of the dream. Displacement comes through the influence of a censorship agent. Representation in dreams is the causal relation between two things. Freud argues that two persons or objects can be combined into a single representation in a dream (see Freud's dream of his uncle and Friend R).\n",
"Sleep and memory have been closely correlated for over a century. It seemed logical that the rehearsal of learned information during the day, such as in dreams, could be responsible for this consolidation. REM sleep was first studied in 1953. It was thought to be the sole contributor to memory due to its association with dreams. It has recently been suggested that if sleep and waking experience are found to be using the same neuronal content, it is reasonable to say that all sleep has a role in memory consolidation. This is supported by the rhythmic behavior of the brain. Harmonic oscillators have the capability to reproduce a perturbation that happened in previous cycles. It follows that when the brain is unperturbed, such as during sleep, it is in essence rehearsing the perturbations of the day. Recent studies have confirmed that off wave states, such as slow-wave sleep, play a part in consolidation as well as REM sleep. There have even been studies done implying that sleep can lead to insight or creativity. Jan Born, from the University of Lubeck, showed subjects a number series with a hidden rule. She allowed one group to sleep for three hours, while the other group stayed awake. The awake group showed no progress, while most of the group that was allowed to sleep was able to solve the rule. This is just one example of how rhythm could contribute to humans unique cognitive abilities.\n",
"Griffin has posited another, more important reason for why dreaming is in metaphor. Using an analogous experience as a means of completing an arousal enables the arousal associated with the instinctive urge to be discharged but, importantly, the instinctive urge itself in the context it was experienced can be remembered. This prevents memory stores from becoming either corrupt or incomplete. It also explains why it is important to forget dreams most of the time.\n",
"The recollection of dreams is extremely unreliable, though it is a skill that can be trained. Dreams can usually be recalled if a person is awakened while dreaming. Women tend to have more frequent dream recall than men. Dreams that are difficult to recall may be characterized by relatively little affect, and factors such as salience, arousal, and interference play a role in dream recall. Often, a dream may be recalled upon viewing or hearing a random trigger or stimulus. The \"salience hypothesis\" proposes that dream content that is salient, that is, novel, intense, or unusual, is more easily remembered. There is considerable evidence that vivid, intense, or unusual dream content is more frequently recalled. A dream journal can be used to assist dream recall, for personal interest or psychotherapy purposes.\n"
] |
If I shoot a car with an EMP gun, what would happen?
|
Devices like this exist and are being marketed to police departments around the world as a means of terminating dangerous car chases. I believe there is some safety cost/benefit calculation at work. The burning out of all electronics will effectively destroy/total the car. The driver may in fact lose control of the vehicle, but this is considered preferable to the alternative of allowing him to continue and put other people's lives at risk. Considerable shielding is also required on the police car to prevent the pulse from destroying the police vehicle's own electronics. There is the possibility of other nearby vehicles in the path of the pulse also being damaged (although the range of the pulse is only 20-30 feet, so it is not a major consideration). If you built one on your own, you might be able to get away with it; however, you might also end up disabling your own vehicle in the process and getting identified as the culprit.
|
[
"BULLET::::- In the 2008 series \"Knight Rider\" the co-protagonist—a Ford Shelby GT500KR named KITT which is capable of driving itself, talking, and firing all sorts of offensive and defensive weapons—has a small EMP device on board. The car is most often seen deploying this weapon to disable vehicles that it pursues. When the EMP is discharged, it is visualized by a distorted blue wave that expands outward from KITT in a circle. The effect is a total electrical shutdown of the target vehicle, which is depicted by the car radio shutting off if in use, the gauge clusters all falling to zero, and the vehicle occupants cellphones also becomes inoperable. The target vehicle then (usually) coasts to a stop. In one episode, a continuity error shows up in the fact that after their vehicle has been EMP bombed by KITT, a two-way walkie-talkie held by one of the goons still appears to work. KITT is not affected in any way by his own EMP weapon.\n",
"At a high voltage level an EMP can induce a spark, for example from an electrostatic discharge when fuelling a gasoline-engined vehicle. Such sparks have been known to cause fuel-air explosions and precautions must be taken to prevent them.\n",
"An EMP would probably not affect most cars, despite modern cars' heavy use of electronics, because cars' electronic circuits and cabling are likely too short to be affected. In addition, cars' metallic frames provide some protection. However, even a small percentage of cars breaking down due to an electronic malfunction would cause temporary traffic jams.\n",
"The risk of an EMP, either through solar or atmospheric activity or enemy attack, while not dismissed, was suggested to be overblown by the news media in a commentary in \"Physics Today\". Instead, the weapons from rogue states were still too small and uncoordinated to cause a massive EMP, underground infrastructure is sufficiently protected, and there will be enough warning time from continuous solar observatories like SOHO to protect surface transformers should a devastating solar storm be detected.\n",
"An indicator that is behind the ejector port does not rise enough to disrupt a shooter's sight picture, but enough to be easily seen or felt to alert a user that there is a round in the chamber to avoid negligent discharge of the gun.\n",
"BULLET::::- In \"Halo 3\" and \"\", one can create an EMP by briefly charging a Covenant plasma pistol, or deploying a \"power drainer\". In \"\", the power drainer (like all deployables) is removed, but an EMP can also be created by using manual detonation on UNSC grenade launchers, or by using the full duration of the \"armor lock\" ability. An EMP disables the shields of a character, or their vehicle.\n",
"BULLET::::- The \"Mario Kart\" series features EMP in the form of \"Lightning\" power-up that could inflict a massive electric shock on other players and causing their vehicles to slow down. In addition, affected players will also temporarily shrink into a diminutive size.\n"
] |
why, when there is silence, do we often hear a beep sound?
|
Yer not alone in askin', and kind strangers have explained that this is *tinnitus:*
1. [ELI5: what is the ringing noise we hear when there's silence? ](_URL_3_) ^(_ > 100 comments_)
1. [ELI5: Why do my ears ring in a quiet room? ](_URL_2_) ^(_12 comments_)
1. [ELI5: What is the beeping sound I hear sometimes when it's completely silent? ](_URL_1_) ^(_4 comments_)
1. [ELI5: What is happening when you randomly hear a weird ringing in one or both of your ears? ](_URL_0_) ^(_69 comments_)
1. [ELI5: Why do I sometimes suddenly hear a ringing in one of my ears? ](_URL_4_) ^(_86 comments_)
|
[
"Beeps are also used as a warning when a truck, lorry or bus is reversing. It can also be used to define the sound produced by a car horn. Colloquially, beep is also used to refer to the action of honking the car horn at someone, (e.g., \"Why did that guy beep at me?\"), and is more likely to be used with vehicles with higher-pitched horns. \"Honk\" is used if the sound is lower pitched (e.g. Volkswagen Beetles beep, but Oldsmobiles honk . On trains, beeps may be used for communications between members of staff.\n",
"A beep is a short, single tone, typically high-pitched, generally made by a computer or other machine. The term has its origin in onomatopoeia. The word \"beep-beep\" is recorded for the noise of a car horn in 1929, and the modern usage of \"beep\" for a high-pitched tone is attributed to Arthur C. Clarke in 1951.\n",
"\"Beep, beep\" is onomatopoeia representing a noise, generally of a pair of identical tones following one after the other, often generated by a machine or device such as a car horn. It is commonly associated with the Road Runner cartoon (meep, meep) in the Looney Tunes cartoons featuring the speedy-yet-flightless bird and his constant pursuer, Wile E. Coyote. \"Beep, Beep\" is the name of a 1952 Warner Bros. cartoon in the \"Merrie Melodies\" series.\n",
"It is unclear exactly why the moth emits this sound. One thought is that the squeak may be used to deter potential predators. Due to its unusual method of producing sound, the squeak created by \"Acherontia atropos\" is especially startling. Another hypothesis suggests that the squeak relates to the moth's honey bee hive raiding habits. The squeak produced from this moth mimics the piping noise produced from a honey bee hive's queen, a noise in which she utilizes to signal the worker bees to stop moving.\n",
"Brains are not adapted for dealing with the repetitive and persistent sound of back-up beepers, but more towards natural sounds that dissipate. The sound is perceived as irritating or painful, which breaks concentration.\n",
"In the United Kingdom, the Puffin crossings and their predecessor, the Pelican crossing, will make a fast beeping sound to indicate that it is safe to cross the road. The beeping sound is disabled during the night time so as not to disturb any nearby residents.\n",
"By the end of the 20th century the sound of chirping crickets came to represent quietude in literature, theatre and film. From this sentiment arose expressions equating \"crickets\" with silence altogether, particularly when a group of assembled people makes no noise. These expressions have grown from the more descriptive, \"so quiet that you can hear crickets,\" to simply saying , \"crickets\" as shorthand for \"complete silence.\"\n"
] |
how does the amazon go store figure out what you are purchasing exactly?
|
Holy crud, this is a neat idea. Here's some speculation, until we can get a concrete answer from Ol' Amazon themselves.
* since you need the app, and need to apparently launch it when walking in, that's probably how the store determines that you in particular are the person who just entered. Bluetooth might also be involved, as that's a short-range wireless technology that can provide a unique identifier and help it accurately ballpark who's where in the building.
* cameras in the store are connected to a computer system that can tell people apart (that'd be some machine-learning bit right there), and since it knows who just walked in the door, it can keep an eye on you as you move about the building.
* sensors on the shelves know when an object has been taken. If the system detects that a pudding cup got picked up, and knows from the cameras that you are standing right in front of the pudding, it assumes that you're the person who did so (a toy sketch of that attribution step follows below).
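None of this is confirmed by Amazon, but the attribution step in the last bullet can be sketched as a toy sensor-fusion rule: charge the shelf event to whichever camera-tracked shopper is standing closest. The names, positions, and the 1.5 m threshold below are invented for illustration only.

```python
# Hypothetical sketch only; not Amazon's actual design.
from dataclasses import dataclass
from math import hypot
from typing import List, Optional

@dataclass
class TrackedShopper:
    shopper_id: str   # identity established when the app was scanned at the door
    x: float          # estimated floor position from the camera system (metres)
    y: float

@dataclass
class ShelfEvent:
    item: str
    x: float          # known location of the shelf sensor that fired
    y: float

def attribute_pickup(event: ShelfEvent, shoppers: List[TrackedShopper],
                     max_distance: float = 1.5) -> Optional[str]:
    """Charge the pickup to the nearest tracked shopper, if anyone is close enough."""
    if not shoppers:
        return None
    nearest = min(shoppers, key=lambda s: hypot(s.x - event.x, s.y - event.y))
    if hypot(nearest.x - event.x, nearest.y - event.y) <= max_distance:
        return nearest.shopper_id
    return None  # nobody is near the shelf, so charge no one

# Example: two shoppers tracked by the cameras; one is standing at the pudding shelf.
shoppers = [TrackedShopper("alice", 2.0, 3.1), TrackedShopper("bob", 9.5, 0.4)]
print(attribute_pickup(ShelfEvent("pudding cup", 2.1, 3.0), shoppers))  # -> alice
```

A real system would have to fuse many more signals (weight sensors, multiple camera views, hand tracking), but the basic idea of nearest-shopper attribution is the same.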
|
[
"Amazon announced in June 2019, that Amazon shoppers will be able to pick up their purchases at designated counters inside more than 100 Rite Aid stores across the US. The new service is called Counter and launches in the US after finding success in the UK with the Next clothing chain and in Italy with Giunti Al Punto Librerie, Fermopoint and SisalPay stores.\n",
"On January 22, 2018, Amazon Go, a store that uses cameras and sensors to detect items that a shopper grabs off shelves and automatically charges a shopper's Amazon account, was opened to the general public in Seattle. Customers scan their Amazon Go app as they enter, and are required to have an Amazon Go app installed on their smartphone and a linked Amazon account to be able to enter. The technology is meant to eliminate the need for checkout lines. Amazon Go was initially opened for Amazon employees in December 2016. By the end of 2018, there will be 8 total Amazon Go stores located in Seattle, Chicago, San Francisco and New York. Amazon has plans to open as many as 3,000 Amazon Go locations across the United States by 2021.\n",
"On January 22, 2018, Amazon Go, a store that uses cameras and sensors to detect items that a shopper grabs off shelves and automatically charges a shopper's Amazon account, was opened to the general public in Seattle. Customers scan their Amazon Go app as they enter, and are required to have an Amazon Go app installed on their smartphone and a linked Amazon account to be able to enter. The technology is meant to eliminate the need for checkout lines. Amazon Go was initially opened for Amazon employees in December 2016. By the end of 2018, there will be 8 total Amazon Go stores located in Seattle, Chicago, San Francisco and New York. Amazon has plans to open as many as 3,000 Amazon Go locations across the United States by 2021.\n",
"Shoppers using the iPad could flip through the catalog and tap on \"hotspots\" for the products in which they are interested, linking to the merchant's Web site for purchase. After clicking on a product of interest, a pop-up will appear for the user to read more about the product, which includes price, description, images, and title. This is also the page where users could send information about the product to others via email. For those who didn’t want to purchase online, store locations can be found by loading the 'Find Nearby' option. Further exploration of the products was supported by features such as the ability to zoom in on products as well as being able to view tags to garner additional information. In addition, products could be marked as favorites, and then all of those that were previously marked could be viewed on the same page by clicking the Favorites button on the bottom of the screen. Users could also mark specific catalogs as Favorites in order to receive a notification when a new issue was available. If a user was looking for a specific product, there was a search function that showed all the products related to the keyword that the user types in.\n",
"Amazon Cash (in the United States and Canada) and Amazon Top Up (in the United Kingdom) are services allowing Amazon shoppers to add money to their Amazon account at a physical retail store. The service, launched in April 2017, allows users to add between $5 and $500 (£5 and £250) to their accounts by paying with cash at a participating retailer, who scans a barcode linked to a customer's Amazon account. Users can present the app on paper, on the Amazon app, or as a text message sent by the Amazon website. Participating retailers in the United States include 7-Eleven, CVS Pharmacy, and GameStop. In Canada, reloads can only be made at Canada Post post offices. In the United Kingdom, reloads can only be made at PayPoint locations.\n",
"In December 2016, Amazon announced a bricks and mortar store in Seattle under the name Amazon Go, which uses a variety of cameras and sensors in order to see what customers are putting into their shopping bags. The customers scan a QR code when they enter the store through a companion app, which is linked to their Amazon.com account. When the customer exits the store, the items in their bag are automatically charged to the account.\n",
"Amazon has diversified its acquisition portfolio into several market sectors, with its largest acquisition being the purchase of the grocery store chain Whole Foods Market for $13.7 billion on June 16, 2017.\n"
] |
how was the dnc primary "rigged"?
|
The DNC is supposed to be neutral. The e-mails released by WikiLeaks from the DNC showed that they were actively trying to help Hillary's nomination and hurt Bernie's. That was a violation of their charter. In addition, after the leaks and subsequent calls for her resignation, the head of the DNC, Debbie Wasserman Schultz, was immediately appointed as chair of one of Hillary's election committees. In short, it was not a fair primary for Bernie or his supporters.
|
[
"The Democratic National Committee (DNC) proposed a new schedule and a new rule set for the 2008 Presidential primary elections. Among the changes: the primary election cycle would start nearly a year earlier than in previous cycles, states from the West and the South would be included in the earlier part of the schedule, and candidates who run in primary elections not held in accordance with the DNC's proposed schedule (as the DNC does not have any direct control over each state's official election schedules) would be penalized by being stripped of delegates won in offending states. The New York Times called the move, \"the biggest shift in the way Democrats have nominated their presidential candidates in 30 years.\"\n",
"The Democratic National Committee (DNC) proposed a new schedule and a new rule set for the 2008 Presidential primary elections. Among the changes: the primary election cycle would start nearly a year earlier than in previous cycles, states from the West and the South would be included in the earlier part of the schedule, and candidates who run in primary elections not held in accordance with the DNC's proposed schedule (as the DNC does not have any direct control over each state's official election schedules) would be penalized by being stripped of delegates won in offending states. The \"New York Times\" called the move, \"the biggest shift in the way Democrats have nominated their presidential candidates in 30 years.\"\n",
"The 1974 Congressional midterm elections took place in the wake of the Watergate scandal and less than three months after Ford assumed office. The Democratic Party turned voter dissatisfaction into large gains in the House elections, taking 49 seats from the Republican Party, increasing their majority to 291 of the 435 seats. This was one more than the number needed (290) for a two-thirds majority, the number necessary to override a Presidential veto or to propose a constitutional amendment. Perhaps due in part to this fact, the 94th Congress overrode the highest percentage of vetoes since Andrew Johnson was President of the United States (1865–1869). Even Ford's former, reliably Republican House seat was won by a Democrat, Richard Vander Veen, who defeated Robert VanderLaan. In the Senate elections, the Democratic majority became 61 in the 100-seat body.\n",
"PollyVote predicted the outcome of the 2006 U.S. House of Representatives Elections, forecasting that the Republicans would lose 23 seats, and thus, their majority in the House. The Republicans lost 30 seats and the House majority in those elections.\n",
"Norpoth developed the Primary Model, a statistical model he uses to predict the results of United States presidential elections based on data going back to 1912. He has used the model to correctly predict the winner of all six presidential elections from 1996 to 2016, including the Donald Trump victory in the 2016 election. This model is based on two factors: whether the party that has been in power for a long time seems to be about to lose it, and whether a given candidate did better in the primaries than his or her opponent. In February 2015, he projected that Republicans had a 65 percent chance of winning the general election the following year. In 2016, this model gained significant media attention because it predicted that Donald Trump would win the general election. In response to critics who cite polls in which Clinton leads Trump by a significant margin, Norpoth has said that these polls do not take into account who will actually vote in November, writing, \"...nearly all of us say, oh yes, I'll vote, and then many will not follow through.\"\n",
"In the 1971 General Elections the PNM faced only limited opposition as the major opposition parties boycotted the election citing the use of voting machines. The PNM captured all 36 seats in the election, including eight that they carried unopposed. Additionally Williams split the post of Deputy Leader into three and appointed Kamaluddin Mohammed, Errol Mahabir and George Chambers to the position.\n",
"In the 1972 primary elections, McGovern named Hart his national campaign director. Along with Rick Stearns, an expert on the new system, they decided on a strategy to focus on the 28 states holding caucuses instead of primary elections. They felt the nature of the caucuses made them easier (and less costly) to win if they targeted their efforts. While their primary election strategy proved successful in winning the nomination, McGovern went on to lose the 1972 presidential election in one of the most lopsided elections in U.S. history.\n"
] |
why did slave owners/traders feel it was necessary to convert slaves to christianity? if slaves were considered nothing more than property, why was their salvation important?
|
All the answers here are correct for a certain historical period. However, it's important to remember that for the majority of the time the Atlantic slave trade was in operation, religious conversion was not a priority. There were a number of reasons for this:
1. In many colonies the average slave lived only 5-10 years, so conversion was deemed not worth the effort. This was especially true in the Caribbean. It was only when the mortality rate dropped and whites began to see established intergenerational slave communities that anyone thought it might be worth trying to make new converts.
2. In colonies with a higher proportion of slaves (e.g. Barbados, where whites numbered less than 10% of the total population) there was a constant fear of slave uprisings. The authorities wanted to restrict Christianity because they feared that some of the Bible's more humane messages might give their slaves some revolutionary ideas.
3. More generally, slave owners throughout the Americas were (kind of) concerned about the theological implications of making their slaves Christians. There are all kinds of warnings in the Bible and in Catholic and Anglican texts about enslaving co-religionists. Slave owners didn't think it would cause much trouble, but they were concerned that if they converted their human chattel there might be a chance that the authorities would then declare the enslavement of Christians unlawful. And that would be a very expensive mistake.
Now, in the British colonies in continental North America, the people who made religious decisions and the people who made economic decisions were one and the same. So there was no danger of the local plantation owner having his slaves preached at by the church deacon, because there was a good chance that they were the same man. Religion at the time was about hierarchy, but, contrary to the responses here, the best way to keep a slave population at the bottom of the social hierarchy is to never initiate them into it in the first place.
What ended up happening (again, in the 13 colonies - my knowledge of non-British slave systems is patchy) was that in the early-mid 18th century, the first in a series of religious revivals swept across the colonies. Now religion was rendered less hierarchical, and people started to think that anyone could talk to (a) God, and (b) other people about God. So now it's not only the local vicar who can convert heathens, it's any God-fearing Christian.
The situation as it subsequently developed was not therefore of the slave-owning class's making. Zealous individuals converted slaves of their own initiative and against the express wishes of the colonial elite. Once that damage was done, the slave owners just had to make the best of a bad situation by emphasising (as others here have pointed out) the hierarchical bits of Christianity. But it's wrong to say that the beneficiaries of the slave system actively converted anyone.
**TLDR: Slave owners never really converted anyone because slaves were easier to handle if they weren't Christian. It was only at the tail end of the Atlantic slave era that any widespread conversions started to happen.**
SOURCE: *Inhuman Bondage* by David Brion Davis.
|
[
"Slave-owners weren’t keen to have their slaves baptised as Christian converts could not be sold. Mostly freed slaves were therefore baptised and could then become members of the Dutch Reformed Church in South Africa (NGK). This led to the directors of SA Mission Society establishing their own congregation. It was called the SA Gesticht congregation of the SA Missionary Society. In 1820 Jacobus Henricus Beck became its first minister.\n",
"The Roman Empire extensively utilized chattel slavery for labor, private property that could be disposed at will, and slaves' status was specified in the Code of Justinian, but slaves' ethnicity or race was not specified. With the rise of Christianity, the status of slaves was not altered, but slaves were to be converted to Christianity. Christians were in theory banned from enslaving fellow Christians, but the practice persisted. With the rise of Islam, and the conquest of most of the Iberian peninsula in the eighth century, slavery declined in remaining Iberian Christian kingdoms. Muslims were resistant to conversion to Christianity, and they did not enslave fellow believers. Latin Christianity gradually diminished enslavement of fellow Christians. As Christian Spain sought to retake territory lost to Muslims, the reconquista had implications for their understanding of slavery. Conquered Muslims were enslaved with the justification conversion and acculturation, but Muslim captives were often offered back to their families and communities for cash payments (\"rescate\"). The thirteenth-century code of law, the \"Siete Partidas\" of Alfonso \"the learned\" (1252–1284) specified who could be enslaved: those who were captured in just war; offspring of an enslaved mother; those who voluntarily sold themselves into slavery, and specified slaves' good treatment by their masters. At the time it was generally domestic slavery and was a temporary condition of members of outgroups. As well as the formal parameters for slavery, the \"Siete Partidas\" also makes a value judgment, stating that it \"was the basest and most wretched condition into which anyone could fall because man, who is the most free noble of all God's creatures, becomes thereby in the power of another, who can do with him what he wishes as with any property, whether living or dead.\"\n",
"Manumission of a Muslim slave was encouraged as a way of expiating sins. Many early converts to Islam, such as Bilal ibn Rabah al-Habashi, were former slaves. In theory, slavery in Islamic law does not have a racial or color component, although this has not always been the case in practice. In 1990, the Cairo Declaration on Human Rights in Islam declared that \"no one has the right to enslave\" another human being. Many slaves were often imported from outside the Muslim world. Bernard Lewis maintains that though slaves often suffered on the way before reaching their destination, they received good treatment and some degree of acceptance as members of their owners' households.\n",
"BULLET::::- Sixth, some early Christians liberated their slaves, while some churches redeemed slaves using the congregation’s common means. Other Christians even sacrificially sold themselves into slavery to emancipate others.\n",
"Laws sometimes stated that conversion to Christianity, especially by Muslims, should result in the emancipation of the slave, but as such conversions often resulted in the freed slave returning to his home territory and reverting to his old religion, for example in the Crusader Kingdom of Jerusalem, which had such laws, provisions along these lines were often ignored and became less used.\n",
"In theory free-born Muslims could not be enslaved, and the only way that a non-Muslim could be enslaved was being captured in the course of holy war. (In early Islam, neither a Muslim nor a Christian or Jew could be enslaved.) Slavery was also perceived as a means of converting non-Muslims to Islam: A task of the masters was religious instruction. Conversion and assimilation into the society of the master didn't automatically lead to emancipation, though there was normally some guarantee of better treatment and was deemed a prerequisite for emancipation. The majority of Sunni authorities approved the manumission of all the \"People of the Book\". According to some jurists -especially among the Shi'a- only Muslim slaves should be liberated. In practice, traditional propagators of Islam in Africa often revealed a cautious attitude towards proselytizing because of its effect in reducing the potential reservoir of slaves.\n",
"Under Sharia (Islamic law), children of slaves or prisoners of war could become slaves but only non-Muslims. Manumission of a slave was encouraged as a way of expiating sins. Many early converts to Islam, such as Bilal ibn Rabah al-Habashi, were the poor and former slaves. In theory, slavery in Islamic law does not have a racial or color component, although this has not always been the case in practice.\n"
] |
why is there a difference in the way medication is administered? specifically, what is the difference between pills and injections?
|
Injections, if through an IV, go straight into the bloodstream. Pills have to be digested before entering the bloodstream, so generally less gets in (or if something is meant to work in the gastrointestinal tract it would be taken as a pill.)
Certain injections might only have a local effect (like a corticosteroid injection for a joint,) which would necessitate injection into a specific body part.
|
[
"A wide variety of drugs are injected, often opioids: these may include legally prescribed medicines and medication such as morphine, as well as stronger compounds often favored in recreational drug use, which are often illegal. Although there are various methods of taking drugs, injection is favoured by some people as the full effects of the drug are experienced very quickly, typically in five to ten seconds. It also bypasses first-pass metabolism in the liver, resulting in higher bioavailability and efficiency for many drugs (such as morphine or diacetylmorphine/heroin; roughly two-thirds of which is destroyed in the liver when consumed orally) than oral ingestion would. The effect is that the person gets a stronger (yet shorter-acting) effect from the same amount of the drug. Drug injection is therefore often related to substance dependence. \n",
"BULLET::::- intravenous injection (see also the article Drug injection): the user injects a solution of water and the drug into a vein, or less commonly, into the tissue. Drugs that are injected include morphine and heroin, less commonly other opioids. Stimulants like cocaine or methamphetamine may also be injected. In rare cases, users inject other drugs.\n",
"A wide variety of drugs are injected. Among the most popular in many countries are morphine, heroin, cocaine, amphetamine, and methamphetamine. Prescription drugs—including tablets, capsules, and even liquids and suppositories—are also occasionally injected. This applies particularly to prescription opioids, since some opioid addicts already inject heroin. Injecting preparations which were not intended for this purpose is particularly dangerous because of the presence of excipients (fillers), which can cause blood clots. Injecting codeine into the bloodstream directly is dangerous because it causes a rapid histamine release, which can lead to potentially fatal anaphylaxis and pulmonary edema. Dihydrocodeine, hydrocodone, nicocodeine, and other codeine-based products carry similar risks. Codeine may instead be injected by the intramuscular or subcutaneous route. The effect will not be instant, but the dangerous and unpleasant massive histamine release from the intravenous injection of codeine is avoided. To minimize the amount of undissolved material in fluids prepared for injection, a filter of cotton or synthetic fiber is typically used, such as a cotton-swab tip or a small piece of cigarette filter.\n",
"The characteristics of a medication's excipient play a fundamental role in creating a suitable environment for the correct absorption of a drug. This can mean that the same dose of a drug in different forms can have different bioequivalence, as they yield different plasma concentrations and therefore have different therapeutic effects. Dosage forms with modified release (such as delayed or extended release) allow this difference to be usefully applied.\n",
"The dosage form for a pharmaceutical contains the active pharmaceutical ingredient (API), which is the drug substance itself, and excipients, which are the ingredients of the tablet, or the liquid the API is suspended in, or other material that is pharmaceutically inert. Drugs are chosen primarily for their active ingredients.During formulation development, the excipients are chosen carefully so that the active ingredient can reach the target site in the body at the desired rate and extent. \n",
"Of all the ways to ingest drugs, injection carries the most risks by far as it bypasses the body's natural filtering mechanisms against viruses, bacteria, and foreign objects. There will always be much less risk of overdose, disease, infections, and health problems with alternatives to injecting, such as smoking, insufflation (snorting or nasal ingestion), or swallowing.\n",
"The combinations of drugs currently prescribed can be divided into two categories: non-artemesinin-based combinations and artemesinin based combinations. It is also important to distinguish \"fixed-dose\" combination therapies (in which two or more drugs are co-formulated into a single tablet) from combinations achieved by taking two separate antimalarials.\n"
] |
What happens to the blood in a uterus during missed periods?
|
Tl;dr the lining usually doesn't thicken in these cases
The causes of frequently irregular periods (oligomenorrhoea) or a complete lack of them (amenorrhoea) are almost always hormonal.
To put this into context with an example, breastfeeding results in high levels of the hormone prolactin, which then inhibits release of FSH and LH. These hormones drive oestrogen production, which is responsible for thickening the endometrium - the lining of the uterus. Without this, the uterus may never acquire a thick lining at all. Similarly excess stress releases hormones like cortisol which can also affect FSH and LH.
There are other, non-endocrine (non hormone related) causes but they're rare. Examples of such conditions include uterine agenesis (congenital - from birth) and endometrial fibrosis (acquired) but I'm not too familiar with those. In the case of the former, hopefully you can see that if the uterus does not form (agenesis) then its lining can't be thickened!
I don't know of the existence of a disease where the uterine lining remains but ovulation doesn't occur. Without progesterone formed by the corpus luteum post-ovulation, the lining would degrade anyway. Perhaps some sort of progesterone-producing tumour might do it but that would be mere speculation since progesterone has effects on FSH and LH anyway.
Anyway I've rambled on for far too long. Hope I helped!
|
[
"Couvelaire uterus is a phenomenon wherein the retroplacental blood may penetrate through the thickness of the wall of the uterus into the peritoneal cavity. This may occur after abruptio placentae. The hemorrhage that gets into the decidua basalis ultimately splits the decidua, and the haematoma may remain within the decidua or may extravasate into the myometrium (the muscular wall of the uterus). The myometrium becomes weakened and may rupture due to the increase in intrauterine pressure associated with uterine contractions. This may lead to a life-threatening obstetric emergency requiring urgent delivery of the fetus.\n",
"Normal menstrual bleeding in the ovulatory cycle is a result of a decline in progesterone due to the demise of the corpus luteum. It is thus a progesterone withdrawal bleeding. As there is no progesterone in the anovulatory cycle, bleeding is caused by the inability of estrogen — that needs to be present to stimulate the endometrium in the first place — to support a growing endometrium. Anovulatory bleeding is hence termed 'estrogen breakthrough bleeding.\n",
"Uterine rupture is a when the muscular wall of the uterus tears during pregnancy or childbirth. Symptoms while classically including increased pain, vaginal bleeding, or a change in contractions are not always present. Disability or death of the mother or baby may result.\n",
"Prior to and during delivery, bleeding can also occur from tears in the cervix, vagina, or perineum, sudden placental detachment (abruptio placenta) and placental attachment over the cervix (placenta previa), and uterine rupture.\n",
"In most cases, placental disease and abnormalities of the spiral arteries develop throughout the pregnancy and lead to necrosis, inflammation, vascular problems, and ultimately, abruption. Because of this, most abruptions are caused by bleeding from the arterial supply, not the venous supply. Production of thrombin via massive bleeding causes the uterus to contract and leads to DIC.\n",
"Occasionally, if a fallopian tube does not connect, the uterine horn will fill with blood each month, and a minor one-day surgery will be performed to remove it. Often, people who are born with this have trouble getting pregnant as both ovaries are functional and either may ovulate. The spare egg, that cannot travel the fallopian tube, is absorbed into the body.\n",
"BULLET::::- Mid-cycle or ovulatory bleeding is thought to result from the sudden drop in estrogen that occurs just before ovulation. This drop in hormones can trigger withdrawal bleeding in the same way that switching from active to placebo birth control pills does. The rise in hormones that occurs after ovulation prevents such mid-cycle spotting from becoming as heavy or long lasting as a typical menstruation. Spotting is more common in longer cycles.\n"
] |
Why do some cameras get really grainy when taking photos or videos in low-light/no-light?
|
Various kinds of noise. As the light goes down, you get less light (signal) but not less noise.
But what is the noise? One type of noise is read noise. Read noise is a constant caused by imperfections in the technology used to detect the light. For example, an amplifier might accidentally amplify stray currents and mix that in with the signal. This type of noise gets better with more advanced sensor technology.
Another type of noise is shot noise. Shot noise is caused by the fact that light is made up of photons. A bright light might shine billions of photons a second, but a very dim light might send only tens of photons a second. If you don't gather enough light to get lots of photons, you get noise.
The only way to reduce this is to capture more photons by increasing the size of the sensor/objective to gather more light or increasing the time of exposure.
Some cameras have larger sensors and larger apertures, meaning they gather more light.
There are other kinds of noise, but this shows that noise levels in one device vs. another are influenced by the quality of the sensor tech and the physical light-gathering ability of the optics.
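The shot-noise point can be made concrete with a quick simulation. This is a minimal sketch assuming an ideal sensor with no read noise: Poisson-distributed photon arrivals give a signal-to-noise ratio of N/sqrt(N) = sqrt(N), so frames captured with few photons look grainy.

```python
# Minimal shot-noise illustration (assumes an ideal sensor with no read noise).
import numpy as np

rng = np.random.default_rng(0)
for photons_per_pixel in (10, 1_000, 100_000):
    frame = rng.poisson(photons_per_pixel, size=100_000)   # one simulated exposure
    snr = frame.mean() / frame.std()
    print(f"{photons_per_pixel:>7} photons/pixel -> SNR ~ {snr:6.1f} "
          f"(sqrt(N) = {np.sqrt(photons_per_pixel):6.1f})")
```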
|
[
"Because the effect is caused by the relative motion between the camera, and the objects and scene, motion blur may be avoided by panning the camera to track those moving objects. In this case, even with long exposure times, the objects will appear sharper, and the background more blurred.\n",
"Some still camera manufacturers marketed their cameras as having digital image stabilization when they really only had a high-sensitivity mode that uses a short exposure time—producing pictures with less motion blur, but more noise. It reduces blur when photographing something that is moving, as well as from camera shake.\n",
"Although point and shoot cameras with affordable lenses have been used widely for candid photography, the resulting photographs can suffer from vignetting, distortion and over saturation of color. Due to short reaction times for the photographer, exposure or focus may be slightly off. Since flash cannot be used, pictures are often taken at low shutter speeds and show blurring from movement of the subject, or camera shaking. All these faults are usually considered acceptable because of the limitations of candid photography.\n",
"Shot noise, produced by spontaneous fluctuations in detected photocurrents, degrades darker areas of electronic images with random variations of pixel color and brightness. Film grain becomes obvious in areas of even and delicate tone. Grain and film sensitivity are linked, with more sensitive films having more obvious grain. Likewise, with digital cameras, images taken at higher sensitivity settings show more image noise than those taken at lower sensitivities.\n",
"Areas of a photo where information is lost due to extreme darkness are described as \"crushed blacks\". Digital capture tends to be more tolerant of underexposure, allowing better recovery of shadow detail, than same-ISO negative print film.\n",
"Modest image stabilization systems can degrade image quality if the photographer is intentionally panning (as the system tries to negate the panning motion), or if the camera is mounted on a very sturdy tripod (the system drifts around slowly due to spurious measurements over the course of a long exposure). Some more recent IS systems can automatically detect these situations and disable the IS along the panning axis, or disable it completely if the camera is on a tripod. Sweep panoramic photography certainly use panning system. So, modern image stabilization system is not use 2 axis anymore, but up to 5 axis: horizontal axis, vertical axis and rotation of 3 axis.\n",
"Some of these disadvantages can be viewed as advantages. For example, slow setup and composure time allow the photographer to better visualize the image before making an exposure. The shallow depth of field can be used to emphasize certain details and deemphasize others (in bokeh style, for example), especially combined with camera movements. The high cost of film and processing encourages careful planning. Because view cameras are rather difficult to set up and focus, the photographer must seek the best camera position, perspective, etc. before exposing. Beginning 35 mm photographers are even sometimes advised to use a tripod specifically because it slows down the picture-taking process.\n"
] |
the phrase 'have your cake and eat it, too.'
|
Once you eat the cake, it's gone. You don't have it anymore. You cannot have both.
|
[
"The phrase \"Let them eat cake\" is often attributed to Marie Antoinette, but there is no evidence she ever uttered it, and it is now generally regarded as a \"journalistic cliché\". It may have been a rumor started by angry French peasants as a form of libel. This phrase originally appeared in Book VI of the first part (finished in 1767, published in 1782) of Rousseau's putative autobiographical work, \"Les Confessions\": \"\"Enfin je me rappelai le pis-aller d'une grande princesse à qui l'on disait que les paysans n'avaient pas de pain, et qui répondit: Qu'ils mangent de la brioche\"\" (\"Finally I recalled the stopgap solution of a great princess who was told that the peasants had no bread, and who responded: 'Let them eat brioche). Apart from the fact that Rousseau ascribes these words to an unknown princess, vaguely referred to as a \"great princess\", some think that he invented it altogether as \"Confessions\" was largely inaccurate.\n",
"The phrase \"Let them eat cake\" is often attributed to Marie Antoinette, but there is no evidence that she ever uttered it, and it is now generally regarded as a journalistic cliché. This phrase originally appeared in Book VI of the first part of Jean-Jacques Rousseau's autobiographical work \"Les Confessions\", finished in 1767 and published in 1782: \"\"Enfin je me rappelai le pis-aller d'une grande princesse à qui l'on disait que les paysans n'avaient pas de pain, et qui répondit: Qu'ils mangent de la brioche\"\" (\"Finally I recalled the stop-gap solution of a great princess who was told that the peasants had no bread, and who responded: 'Let them eat brioche). Rousseau ascribes these words to a \"great princess\", but the purported writing date precedes Marie Antoinette's arrival in France. Some think that he invented it altogether.\n",
"An early recording of the phrase is in a letter on 14 March 1538 from Thomas, Duke of Norfolk, to Thomas Cromwell, as \"a man can not have his cake and eat his cake\". The phrase occurs with the clauses reversed in John Heywood's \"A dialogue Conteinyng the Nomber in Effect of All the Prouerbes in the Englishe Tongue\" from 1546, as \"wolde you bothe eate your cake, and have your cake?\". In John Davies's \"Scourge of Folly\" of 1611, the same order is used, as \"A man cannot eat his cake and haue it stil.\"\n",
"\"Let them eat cake\" is the traditional translation of the French phrase \"\"\"\", supposedly spoken by \"a great princess\" upon learning that the peasants had no bread. Since brioche was a luxury bread enriched with butter and eggs, the quotation would reflect the princess's disregard for the peasants, or her poor understanding of their situation.\n",
"The name of the cake is a pun, as \"fa\" means both \"prosperity\" and \"raised (leavened)\", so \"fa gao\" means both \"prosperity cake\" and \"raised (leavened) cake\". These cakes, when used to encourage prosperity in the new year, are often dyed bright colors.\n",
"BULLET::::- Cake — Rather than referring to the foodstuff, the name is meant to be \"like when something insidiously becomes a part of your life...[we] mean it more as something that cakes onto your shoe and is just sort of there until you get rid of it\".\n",
"You can't have your cake and eat it (too) is a popular English idiomatic proverb or figure of speech. The proverb literally means \"you cannot simultaneously retain your cake and eat it\". Once the cake is eaten, it is gone. It can be used to say that one cannot or should not have or want more than one deserves or is reasonable, or that one cannot try to have two incompatible things. The proverb's meaning is similar to the phrases \"you can't have it both ways\" and \"you can't have the best of both worlds\".\n"
] |
How do large batteries work (like the Tesla house unit)? and What are the barriers around efficient large scale energy storage?
|
The Tesla house battery is basically a lithium-ion battery, the same as the one in your phone, just much larger. Here's a pretty good [link](_URL_0_) to how they work.
Barriers are: the low energy density of this type of battery. I think the Tesla unit weighs about 100 kg and can store up to 13.5 kWh. That equates to roughly 4 kg of diesel.
Also, the stored charge degrades over long time spans due to leakage currents, and this gets worse at high temperatures.
With our current technologies, storing electrical energy is quite inefficient and expensive, and that is not likely to change all too fast; after all, the idea of lithium batteries dates back to roughly 1915.
However, a lot of money and man-hours are being invested into research, so we might see completely new technologies, or a battery operating on similar principles but with different materials.
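As a back-of-envelope check of the diesel comparison above (the diesel energy content and engine efficiency below are assumed figures, not from the post), the ~4 kg equivalence only comes out once engine losses are included:

```python
# Rough check of the battery-vs-diesel comparison. The diesel heating value and
# engine efficiency are assumptions for illustration, not figures from the post.
battery_capacity_kwh = 13.5      # Powerwall-class unit, as quoted above
battery_mass_kg = 100.0          # as quoted above

diesel_kwh_per_kg = 12.7         # assumed chemical energy content of diesel
engine_efficiency = 0.30         # assumed typical diesel-engine efficiency

print(f"Battery energy density:  {battery_capacity_kwh / battery_mass_kg:.3f} kWh/kg")
print(f"Raw chemical equivalent: {battery_capacity_kwh / diesel_kwh_per_kg:.1f} kg diesel")
print(f"After engine losses:     {battery_capacity_kwh / (diesel_kwh_per_kg * engine_efficiency):.1f} kg diesel")
```

Under these assumptions the battery stores the raw chemical energy of only about 1 kg of diesel, but matches the useful work of roughly 3.5 kg once the engine's conversion losses are counted, which is close to the ~4 kg quoted.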
|
[
"Contrary to electric vehicle applications, batteries for stationary storage do not suffer from mass or volume constraints. However, due to the large amounts of energy and power implied, the cost per power or energy unit is crucial. The relevant metrics to assess the interest of a technology for grid-scale storage is the $/Wh (or $/W) rather than the Wh/kg (or W/kg). The electrochemical grid storage was made possible thanks to the development of the electric vehicle, that induced a fast decrease in the production costs of batteries below $300/kWh. By optimizing the production chain, major industrials aim to reach $150/kWh by the end of 2020. These batteries rely on a Li-Ion technology, which is suited for mobile applications (high cost, high density). Technologies optimized for the grid should focus on low cost and low density.\n",
"Whereas usually a battery storage uses only revenue model, the provision of control energy, which is a very small market, this storage uses three revenue models. The storage has been installed next to a solar system. This way the solar system can be designed larger than the grid power actually permits in the first revenue model. The storage accepts a peak input of the solar system, thus avoiding the cost of a further grid expansion. The second model allows taking up peak input from the power grid and feeding it back to stabilize the grid when necessary. The third model is storing energy and feeding it into the grid at peak prices. The store received an award for top innovation.\n",
"The basis of the energy storage system of Tesla products are lithium-ion cells in the 18650 form factor. These cylindrical cells have a diameter of 18 mm and are 65 mm in length, a size used for the batteries of laptops. Cylindrical cells are generally less expensive (costing 190–200 dollars per kWh as of 2014) than large format cells whose active layers are stacked or folded (approximately 240–250 dollars per kWh).\n",
"A battery’s ability to store charge is dependent on its energy density and power density. It is important that charge can remain stored and that a maximum amount of charge can be stored within a battery. Cycling and volume expansion are also important considerations as well. While many other types of batteries exist, current battery technology is based on lithium-ion intercalation technology for its high power and energy densities, long cycle life and no memory effects. These characteristics have led lithium-ion batteries to be preferred over other battery types. To improve a battery technology, cycling ability and energy and power density must be maximized and volume expansion must be minimized.\n",
"Due to the very high cost of dedicated battery storage, use of electric vehicle batteries both while charging in vehicles (see smart grid), and in stationary grid energy storage arrays as an end-of-life re-use once they no longer hold enough charge for road use, has become the preferred method of load following over dedicated power plants. Such stationary arrays act as a true load following power plant, and their deployment can \"improve the affordability of purchasing such vehicles...Batteries that reach the end of their useful lifespan within the automotive industry can still be considered for other applications as between 70-80% of their original capacity still remains.\" Such batteries are also often repurposed in home arrays which primarily serve as backup, so can participate much more readily in grid stabilizing. The number of such batteries doing nothing is increasing rapidly, e.g. in Australia where Tesla Powerwall demand rose 30 times after major power outages.\n",
"in a household equipped with photovoltaics, energy storage is needed. Multiple manufacturers produce rechargeable battery systems for storing energy, generally to hold surplus energy from home solar/wind generation. Today, for home energy storage, Li-ion batteries are preferable to lead-acid ones given their similar cost but much better performance.\n",
"Most energy production or storage devices have a complex relationship between the power they produce, the load placed on them, and the efficiency of the delivery. A conventional battery, for instance, stores energy in chemical reactions in its electrolytes and plates. These reactions take time to occur, which limits the rate at which the power can be efficiently drawn from the cell. For this reason, large batteries used for power storage generally list two or more capacities, normally the \"2 hour\" and \"20 hour\" rates, with the 2 hour rate often being around 50% of the 20 hour rate.\n"
] |
why is testosterone legally prescribed for transgender but not bodybuilding/muscle gain?
|
Because the trans man has a recognized medical condition and the dude just trying to bulk up doesn't. And because the trans man is only being brought up to normal male levels of testosterone, which are relatively safe, rather than pushing levels dangerously high by adding more on top of typical male production.
|
[
"Transgender women, known as \"kathoeys\", have access to hormones through non-prescription sources. This kind of access is a result of the low availability and expense of transgender health care clinics. However, transgender men have difficulty gaining access to hormones such as testosterone in Thailand because it is not as readily available as hormones for kathoeys. As a result, just a third of all transmen surveyed are taking hormones to transition whereas almost three quarters of kathoeys surveyed are taking hormones.\n",
"Medications used in hormone therapy for transgender men include androgens and anabolic steroids like testosterone (by injection and other routes) to produce masculinization, suppress estrogen and progesterone levels, and prevent/reverse feminization; GnRH agonists and antagonists to suppress estrogen and progesterone levels; progestins like medroxyprogesterone acetate to suppress menses; and 5α-reductase inhibitors to prevent/reverse scalp hair loss.\n",
"Other effects that testosterone can have on transgender men can include an increase in their sex drive/libido. At times, this increase can be very sudden and dramatic. Like transgender women, some transgender men also experience changes in they way they experience arousal.\n",
"Some transgender women report a significant reduction in libido, depending on the dosage of antiandrogens. A small number of post-operative transgender women take low doses of testosterone to boost their libido. Many pre-operative transgender women wait until after reassignment surgery to begin an active sex life. Raising the dosage of estrogen or adding a progestogen raises the libido of some transgender women.\n",
"For transgender men, one of the most notable physical changes that many taking testosterone experience, in terms of sexuality and the sexual body, is the stimulation of clitorial tissue and the enlargement of the clitoris. This increase in size can range anywhere from just a slight increase to quadrupling in size. Other effects can include the female genitalia mucous membrane to thin and produce less lubrication. This can make sex with the female genitalia more painful and can, at times, result in bleeding.\n",
"To take advantage of its virilizing effects, testosterone is administered to transgender men as part of masculinizing hormone therapy, titrated to clinical effect with a \"target level\" of the average male's testosterone level.\n",
"In addition to its role as a natural hormone, testosterone is used as a medication, for instance in the treatment of low testosterone levels in men, transgender hormone therapy for transgender men, and breast cancer in women. Since testosterone levels decrease as men age, testosterone is sometimes used in older men to counteract this deficiency. It is also used illicitly to enhance physique and performance, for instance in athletes.\n"
] |
Are there other cultures that have a long tradition of personal names appropriated from languages other than the ones primarily spoken by that culture?
|
Late Ancient Hebrew did this a ton. Many names were Greek; variants of "Alexander" were especially popular. Other names were Aramaic, but the two languages are so similar that distinguishing them in names is often difficult. Yiddish does this too. Many of its names are Hebrew names or Hebrew words, and though some of them correspond to ones generally used in Europe, they come straight from the Hebrew rather than through Latin and/or Greek, so they're not immediately recognizable.
|
[
"However, in some areas of the world, many people are known by a single name, and so are said to be mononymous. Still other cultures lack the concept of specific, fixed names designating people, either individually or collectively. Certain isolated tribes, such as the Machiguenga of the Amazon, do not use personal names.\n",
"Human personal names are presented, used and categorised in many ways depending on the language and culture. In most cultures (Indonesia is one exception) it is customary for individuals to be given at least two names. In Western culture, the first name is given at birth or shortly thereafter and is referred to as the given name, the forename, the baptismal name (if given then), or simply the first name. In England prior to the Norman invasion of 1066, small communities of Celts, Anglo-Saxons and Scandinavians generally used single names: each person was identified by a single name as either a personal name or nickname. As the population increased, it gradually became necessary to identify people further – giving rise to names like John the butcher, Henry from Sutton, and Roger son of Richard … which naturally evolved into John Butcher, Henry Sutton, and Roger Richardson. We now know this additional name variously as the second name, last name, family name, surname or occasionally the byname, and this natural tendency was accelerated by the Norman tradition of using surnames that were fixed and hereditary within individual families. In combination these two names are now known as the personal name or, simply, the name. There are many exceptions to this general rule: Westerners often insert a third or more names between the given and surnames; Chinese and Hungarian names have the family name preceding the given name; females now often retain their maiden names (their family surname) or combine, using a hyphen, their maiden name and the surname of their husband; some East Slavic nations insert the patronym (a name derived from the given name of the father) between the given and the family name; in Iceland the given name is used with the patronym, or matronym (a name derived from the given name of the mother), and surnames are rarely used. Nicknames (sometimes called hypocoristic names) are informal names used mostly between friends.\n",
"Note: Many cultures have their own naming customs and systems (Chinese, Japanese, Korean, Arabic, Hungarian, Indian and others), some rather intricate. Minor changes or alterations, including reversing Eastern-style formats, do not in and of themselves qualify as stage names, and should not normally be included. For example, Björk, whose stage name appears to be an original creation, is part of her full Icelandic name, Björk Guðmundsdóttir. Her second name is a patronymic instead of a family name, following Icelandic naming conventions. \"Björk\" is not a stage name but how any Icelander would refer to her, casually or formally.\n",
"Language and personal names provide some difficulties. The former is an important indicator of culture but there is very little direct evidence for its use in specific circumstances during the period under consideration. Pictish, Middle Irish and Old Norse would certainly have been spoken and Woolf (2007) suggests that a significant degree of linguistic balkanisation took place. As a result, single individuals often appear in sources under a variety of different names.\n",
"In contemporary Western societies (except for Iceland, Hungary, and sometimes Flanders, depending on the occasion), the most common naming convention is that a person must have a given name, which is usually gender-specific, followed by the parents' family name. Some given names are bespoke, but most are repeated from earlier generations in the same culture. Many are drawn from mythology, some of which span multiple language areas. This has resulted in related names in different languages (e.g. George, Georg, Jorge), which might be translated or might be maintained as immutable proper nouns.\n",
"In the past, the names of people from other language areas were anglicised to a higher extent than today. This was the general rule for names of Latin or (classical) Greek origin. Today, the anglicised name forms are often retained for the more well-known persons, like Aristotle for Aristoteles, and Adrian (or later Hadrian) for Hadrianus. However, less well-known persons from antiquity are now often given their full original-language name (in the nominative case, regardless of its case in the English sentence).\n",
"From their earliest recorded history, the Chinese observed a number of naming taboos, avoiding the names of their elders, ancestors, and rulers out of respect and fear. As a result, the upper classes of traditional Chinese culture typically employed a variety of names over the course of their lives, and the emperors and sanctified deceased had still others.\n"
] |
Why didn't any Ottoman Sultans perform Hajj when they declared themselves Caliphs of Islam?
|
It's mostly a matter of logistics. A sultan traveling from Istanbul to Mecca would need a huge army for protection, and the round trip with a massive entourage would take months, even years, which would destabilize the government back home and strain any province the procession passed through.
|
[
"When the Ottomans conquered Mamluk territory in 1517, the role of the Ottoman sultan in the Hijaz was first and foremost to take care of the Holy Cities of Mecca and Medina, and provide safe passage for the many Muslims from various regions who travelled to Mecca in order to perform the Hajj. The Sultan was sometimes referred to as \"Servant of the Holy Places\" but since the Ottoman rulers could not claim lineage from the Prophet Muhammad, it was important to maintain an image of power and piety through construction projects, financial support and caretaking.\n",
"In their capacity as Caliphs, the Sultans of the Ottoman Empire would appoint an official known as the Sharif of Mecca. The role went to a member of the Hashemite family, but the Sultans typically promoted Hashemite inter-familial rivalries in their choice, preventing the building of a solid base of power in the Sharif.\n",
"There is no record of a ruling Sultan visiting Mecca during the Hajj but according to primary records, Ottoman princes and princesses were sent to make the pilgrimage or visit the Holy Cities during the year. The distance from the center of the empire in Istanbul, as well as the length and danger of the journey, was likely the main factor that prevented Sultans from travelling to the Hijaz.\n",
"The Ottoman sultan considered himself God's agent on Earth, the leader of a religious—not a national—state whose purpose was to defend and propagate Islam. Non-Muslims paid extra taxes and held an inferior status, but they could retain their old religion and a large measure of local autonomy. By converting to Islam, individuals among the conquered could elevate themselves to the privileged stratum of society. In the early years of the empire, all Ottoman high officials were the sultan's bondsmen the children of Christian subjects chosen in childhood for their promise, converted to Islam, and educated to serve. Some were selected from prisoners of war, others sent as gifts, and still others obtained through devshirme, the tribute of children levied in the Ottoman Empire's Balkan lands. Many of the best fighters in the sultan's elite guard, the janissaries, were conscripted as young boys from Christian Albanian families, and high-ranking Ottoman officials often had Albanian bodyguards.\n",
"Ottoman sultan Abdul Hamid II (1876–1909) launched his pan-Islamist program in a bid to protect the Ottoman Empire from Western attack and dismemberment, and to crush the Westernizing democratic opposition at home. He sent an emissary, Jamaluddin Afghani, to India in the late 19th century. The cause of the Ottoman monarch evoked religious passion and sympathy amongst Indian Muslims. Being a caliph, the Ottoman sultan was nominally the supreme religious and political leader of all Sunni Muslims across the world. However, this authority was never actually used.\n",
"In the last 19th century, Ottoman sultan Abdul Hamid II launched his pan-Islamist program in a bid to protect the Ottoman Empire from Western attack and dismemberment, and to crush the Westernizing democratic opposition at home. Being a caliph, the Ottoman sultan was nominally the supreme religious and political leader of all Sunni Muslims across the world. However, this authority was never actually used.\n",
"The Ottoman Dynasty embodied the Ottoman Caliphate since the fourteenth century, starting with the reign of Murad I. The Ottoman Dynasty kept the title Caliph, power over all Muslims, as Mehmed's cousin Abdülmecid II took the title. The Ottoman Dynasty left as a political-religious successor to Muhammad and a leader of the entire Muslim community without borders in a post Ottoman Empire. Abdülmecid II's title was challenged in 1916 by the leader of the Arab Revolt King Hussein bin Ali of Hejaz, who denounced Mehmet V, but his kingdom was defeated and annexed by Ibn Saud in 1925.\n"
] |
if the sun is on the other side of the earth at night, how does it stay so warm during the summer?
|
Okay, the main difference between summer and winter as far as heat goes is the angle at which sunlight hits the Earth. Because of the axial tilt, in summer the sun hits more directly (i.e., closer to straight up and down), which means a greater concentration of energy: the same heat energy falls on a smaller area. On top of that, the ground, oceans and atmosphere absorb a huge amount of heat during the day and hold onto it, and that stored heat is what keeps things warm at night.
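A rough one-line formula for the angle effect (here S stands for the solar flux arriving at the top of the atmosphere, about 1360 W/m², and θ for the sun's angle away from straight overhead, ignoring absorption in the atmosphere):

$$I_{\text{ground}} \approx S \cos\theta$$

At summer noon θ is small, so cos θ is close to 1 and each square metre catches nearly the full flux; in winter θ is large, the same sunlight gets smeared over a bigger area, and the heating per square metre can easily drop to half or less.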
|
[
"During winter in either hemisphere, the lower altitude of the Sun causes the sunlight to hit the Earth at an oblique angle. Thus a lower amount of solar radiation strikes the Earth per unit of surface area. Furthermore, the light must travel a longer distance through the atmosphere, allowing the atmosphere to dissipate more heat. Compared with these effects, the effect of the changes in the distance of the Earth from the Sun (due to the Earth's elliptical orbit) is negligible.\n",
"During May, June, and July, the Northern Hemisphere is exposed to more direct sunlight because the hemisphere faces the Sun. The same is true of the Southern Hemisphere in November, December, and January. It is Earth's axial tilt that causes the Sun to be higher in the sky during the summer months, which increases the solar flux. However, due to seasonal lag, June, July, and August are the warmest months in the Northern Hemisphere while December, January, and February are the warmest months in the Southern Hemisphere.\n",
"When the sun is in its northern declination northerly places will heat up and it will be cold towards the south. Then the northern air will expand in a southerly direction because of the heat due to the contraction of the southern air. Therefore most of the summer winds are merits and most of the winter winds are not.\n",
"BULLET::::- The distance from the Earth to the Sun varies. The Earth is closest to the Sun (at perihelion) in January, which is summer in the Southern Hemisphere. It is furthest away (at aphelion) in July, which is summer in the Northern Hemisphere, and only 93.55% of the solar radiation from the Sun falls on a given square area of land than at perihelion. Despite this, there are larger land masses in the Northern Hemisphere, which are easier to heat than the seas. Consequently, summers are warmer in the Northern Hemisphere than in the Southern Hemisphere under similar conditions.\n",
"BULLET::::- Seasons are not caused by the Earth being closer to the Sun in the summer than in the winter, but by the Earth's 23.4-degree axial tilt. Each Hemisphere is tilted towards the Sun in its respective summer (July in the Northern Hemisphere and January in the Southern Hemisphere), resulting in longer days and more direct sunlight, with the opposite being true in the winter.\n",
"Because of the increased distance at aphelion, only 93.55% of the solar radiation from the Sun falls on a given area of land as does at perihelion. However, this fluctuation does not account for the seasons, as it is summer in the northern hemisphere when it is winter in the southern hemisphere and \"vice versa.\" Instead, seasons result from the tilt of Earth's axis, which is 23.4 degrees away from perpendicular to the plane of Earth's orbit around the sun. Winter falls on the hemisphere where sunlight strikes least directly, and summer falls where sunlight strikes most directly, regardless of the Earth's distance from the Sun. In the northern hemisphere, summer occurs at the same time as aphelion. Despite this, there are larger land masses in the northern hemisphere, which are easier to heat than the seas. Consequently, summers are warmer in the northern hemisphere than in the southern hemisphere under similar conditions. Astronomers commonly express the timing of perihelion relative to the vernal equinox not in terms of days and hours, but rather as an angle of orbital displacement, the so-called longitude of the periapsis (also called longitude of the pericenter). For the orbit of the Earth, this is called the \"longitude of perihelion\", and in 2000 it was about 282.895°; by the year 2010, this had advanced by a small fraction of a degree to about 283.067°.\n",
"At and near the poles, the Sun never rises very high above the horizon, even in summer, which is one of reasons why these regions of the world are consistently cold in all seasons (others include the effect of albedo, the relative increased reflection of solar radiation of snow and ice). Even at the summer solstice, when the Sun reaches its highest point above the horizon at noon, it is still only 23.5° above the horizon at the poles. Additionally, as one approaches the poles the apparent path of the Sun through the sky each day diverges increasingly from the vertical. As summer approaches, the Sun rises and sets become more northerly in the north and more southerly in the south. At the poles, the path of the Sun is indeed a circle, which is roughly equidistant above the horizon for the entire duration of the daytime period on any given day. The circle gradually sinks below the horizon as winter approaches, and gradually rises above it as summer approaches. At the poles, apparent sunrise and sunset may last for several days.\n"
] |
how does the fourth amendment prevent government reach into government cell phones?
|
Your quote provides the answer.
The Constitution, including the Bill of Rights, defines what the government can and can't do. (It does not apply directly to private employers, of course.)
You don't automatically lose rights as a result of becoming a government employee; but you may waive those rights at times in exchange for something else, such as having a certain job.
|
[
"The bill then states: \"The Fourth Amendment to the Constitution shall not be construed to allow any agency of the United States Government to search the phone records of Americans without a warrant based on probable cause.\"\n",
"In November 2017, the United States Supreme Court ruled in \"Carpenter v. United States\" that the government violates the Fourth Amendment by accessing historical records containing the physical locations of cellphones without a search warrant.\n",
"In contrast, the government's argument focused on who is gathering the data. It argued that the government itself does not collect cell site location data. Rather, cell phone users generate this data in the course of doing business with their phone service providers. Several Supreme Court opinions have established that the Fourth Amendment does not protect so-called \"business records.\" Since the information is not constitutionally protected, the government argued that it does not need a warrant to compel phone companies to turn over the data to investigators.\n",
"Stewart's opinion in \"Katz v. United States\" established that the Fourth Amendment \"protects people, not places.\" Stewart wrote that the government's installation of a recording device in a public phone booth violated the reasonable expectation of privacy; the government was committing \"seizure\" of callers' words. \"Katz\" therefore extended the reach of the fourth amendment beyond just physical intrusions; it would also protect against the seizure of incorporeal words. In addition, the reach of the amendment now went as far as a person's reasonable privacy expectation; the reach of the amendment was no longer defined solely by property limits. The \"Katz\" case made government wiretapping by both state and federal authorities subject to the Fourth Amendment's warrant requirements.\n",
"BULLET::::- \"United States v. United States District Court for the Eastern District of Michigan\", Government officials must obtain a warrant before beginning electronic surveillance even if domestic security issues are involved. The \"inherent vagueness of the domestic security concept\" and the potential for abusing it to quell political dissent make the Fourth Amendment's protections especially important when the government engages in spying on its own citizens.\n",
"The U.S. government has aggressively sought to dismiss and challenge Fourth Amendment cases raised against it, and has granted retroactive immunity to ISPs and telecoms participating in domestic surveillance.\n",
"The Supreme Court has held that the Fourth Amendment does not apply to information that is voluntarily shared with third parties. In \"Smith\", the Court held individuals have no \"legitimate expectation of privacy\" regarding the telephone numbers they dial because they knowingly give that information to telephone companies when they dial a number. However, under \"Carpenter v. United States\" (2018), individuals do have a reasonable expectation of privacy regarding cell phone records that would reveal where that person had traveled over many months and so law enforcement must get a search warrant before obtaining such records. \n"
] |
why, in the event a hurricane or super storm heading for a vulnerable area, can't we launch and detonate explosives within the storm to disperse it?
|
I think you've been watching too much Sharknado. It doesn't work that way in real life.
Besides, hurricanes can be hundreds of miles across. There's no way enough explosives could be launched to affect that, especially without causing massive environmental damage.
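For a rough sense of the scale involved (order-of-magnitude figures; the hurricane number is the commonly cited estimate for the latent heat released by a mature storm in a day):

$$E_{\text{hurricane, one day}} \sim 5\times10^{19}\ \text{J} \qquad\text{vs.}\qquad E_{\text{1 megaton bomb}} \approx 4.2\times10^{15}\ \text{J}$$

By that estimate, a single day of an average hurricane releases energy on the order of ten thousand megaton-class bombs, so even the largest device ever detonated (about 50 megatons) would amount to a fraction of a percent of it.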
|
[
"Certain targets, such as bridges, historically could be attacked only by manually placed explosives. With the advent of precision-guided munitions, the destructive part of the raid may involve the SF unit controlling air strikes. Air strikes, however, are practical only when U.S. involvement is not hidden.\n",
"Attacks come from ambush for the element of surprise and attempt to immobilize a convoy of vehicles, then destroy its defenders, then destroy its contents, then escape before air or artillery support can arrive.\n",
"The usage of tornado emergencies to alert major population centers to the imminent threat of a catastrophic tornado impact has also led to the development of the flash flood emergency which is similarly employed when severe flash floods threaten populated areas.\n",
"Explosive-based area-denial weapons (mines) may be intentionally equipped with detonators which degrade over time, either exploding them or rendering them relatively harmless. Even in these cases, unexploded munitions often pose significant risk.\n",
"Bombing ranges pose several hazards, even when not in use or closed. Unexploded ordnance is often the biggest threat. Once a bombing range has been permanently closed, they are sometimes cleared of unexploded ordnance so that the land can be put to other use or to reduce the chance of accidental detonation causing harm to people near the range, trespassers or authorized personnel. Cleanup or complete cleanup may be put off indefinitely depending on the cost, the danger to personnel clearing the area, the land's potential use, the likelihood of an explosion being triggered and the probability of someone being around to trigger or be harmed by an explosion. \n",
"Alpha Force are in the Caribbean, diving, when a sudden oil spill draws them into a new mission. Having to watch out for assassins, sharks and the bends. All their skills - powerboating, scuba-diving and jetskiing - are needed when an underwater bomb explodes. An assassin's strike thickens the plot and worsens the situation.\n",
"The overall conclusion was that the best approach was to place the bomb somewhere that would redirect the explosion, then move away from where the blast was going to go. Attempting to fully contain an explosion would create deadly shrapnel that would kill anyone nearby. The team finished by blowing up the truck with of ANFO.\n"
] |
Why is the star in the "star and crescent" symbol of Ottoman Empire/Islam not exactly upright geometrically?
|
The design is specified by a 1930s law. The alignment of the star is such that one of the points of the star points directly left. So it's aligned "exactly" on a horizontal axis -- relative to the crescent -- rather than a vertical axis. See _URL_0_ and the sources cited therein.
|
[
"The star and crescent symbol became strongly associated with the Ottoman Empire in the 19th century, a symbol that had been used throughout the Middle East extending back to pre-Islamic times, especially in the Byzantine Empire and Crusader States which occupied the lands later assumed by the Ottoman Empire. By extension from the use in Ottoman lands, it became a symbol also for Islam as a whole, as well as representative of western Orientalism. \"Star and Crescent\" was used as a metaphor for the rule of the Islamic empires (Ottoman and Persian) in the late 19th century in British literature. This association was apparently strengthened by the increasingly ubiquitous fashion of using the star and crescent symbol in the ornamentation of Ottoman mosques and minarets. The \"Red Crescent\" emblem was adopted by volunteers of the International Committee of the Red Cross (ICRC) as early as 1877 during the Russo-Turkish War; it was officially adopted in 1929.\n",
"The star and crescent is an iconographic symbol used in various historical contexts but most well known as a symbol of the Ottoman Empire. It is often considered as a symbol of Islam by extension, however is denied as the religion bears no symbol. It develops in the iconography of the Hellenistic period (4th–1st centuries BCE) in the Kingdom of Pontus, the Bosporan Kingdom and notably the city of Byzantium by the 2nd century BCE. It is the conjoined representation of the crescent and a star, both of which constituent elements have a long prior history in the iconography of the Ancient Near East as representing either Sun and Moon or Moon and Morning Star (or their divine personifications). Coins with crescent and star symbols represented separately have a longer history, with possible ties to older Mesopotamian iconography. The star, or Sun, is often shown within the arc of the crescent (also called star in crescent, or star within crescent, for disambiguation of depictions of a star and a crescent side by side); In numismatics in particular, the term crescent and pellet is used in cases where the star is simplified to a single dot.\n",
"In the late 19th century, \"Star and Crescent\" came to be used as a metaphor for Ottoman rule in British literature. The increasingly ubiquitous fashion of using the star and crescent symbol in the ornamentation of Ottoman mosques and minarets led to a gradual association of the symbol with Islam in general in western Orientalism. The \"Red Crescent\" emblem was used by volunteers of the International Committee of the Red Cross (ICRC) as early as 1877 during the Russo-Turkish War; it was officially adopted in 1929.\n",
"The star and crescent is retained from the 19th-century Ottoman flag, and has acquired its status as de facto national emblem following the abolition of the Ottoman coat of arms in 1922. It was used on national identity cards by the 1930s (with the horns of the crescent facing left instead of the now more common orientation towards the right).\n",
"The adoption of star and crescent as the Ottoman state symbol started during the reign of Mustafa III (1757–1774) and its use became well-established during Abdul Hamid I (1774–1789) and Selim III (1789–1807) periods.\n",
"After the collapse of the Ottoman Empire in 1922, the star and crescent was used in several national flags adopted by its successor states. The star and crescent in the flag of the Kingdom of Libya (1951) was explicitly given an Islamic interpretation by associating it with \"the story of Hijra (migration) of our Prophet Mohammed\" By the 1950s, this symbolism was embraced by movements of Arab nationalism or Islamism, such as the proposed Arab Islamic Republic (1974) and the American Nation of Islam (1973).\n",
"By the mid 20th century, the star and crescent was used by a number successor states of the Ottoman Empire, including Algeria, Azerbaijan, Mauritania, Tunisia, Turkey, the Turkish Republic of Northern Cyprus and Libya. Because of its supposed \"Turkic\" associations, the symbol also came to be used in Central Asia, as in the flags of Turkmenistan and Uzbekistan.\n"
] |
what makes soda taste so bad when you leave it out for some time?
|
It's mostly that you're losing the carbonation. Dissolved CO2 forms a little carbonic acid, which gives soda its slight tartness and bite; once the gas escapes, that acidity goes with it, so the drink tastes flat and overly sweet, and the fizzy, stimulating feeling is gone. If soda were made without carbonation, I'm sure there would be far fewer soda drinkers in the world.
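The chemistry behind that bite, in one line (as CO2 escapes from an open glass, these equilibria shift back to the left and the mild acidity disappears):

$$\mathrm{CO_2(aq) + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}$$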
|
[
"A large number of soda pops are acidic as are many fruits, sauces and other foods. Drinking acidic drinks over a long period and continuous sipping may erode the tooth enamel. A 2007 study determined that some flavored sparkling waters are as erosive or more so than orange juice.\n",
"OK Soda had a more \"citric\" taste than traditional colas, almost like a fruit punch version of Coke's Fresca. It has been described as \"slightly spicy\" and likened to a combination of orange soda and flat Coca-Cola. It has also been compared to what is known as \"suicide\", \"swampwater\" or \"graveyard\", the resulting mixture of multiple soft drink flavors available at a particular convenience store or gas station's soft drink dispenser.\n",
"The drink is a particular phenomenon as its taste is quite different from the taste of its constituent liquids which are rather bitter. The chemical structures of both ingredients are of a similar molecular shape and attract each other, shielding the bitter taste.\n",
"In Serbia and other Eastern European countries, energy drinks based on guarana are marketed under this name, but without the same sweet flavor as the soda; they have a bitter taste and cardio-accelerating effect.\n",
"According to the Container Recycling Institute, sales of flavoured, non-carbonated drinks are expected to surpass soda sales by 2010. In response, Coca-Cola and Pepsi-Cola have introduced new carbonated drinks that are fortified with vitamins and minerals, Diet Coke Plus and Tava, marketed as \"sparkling beverages.\"\n",
"Most soft drinks contain high concentrations of simple carbohydrates: glucose, fructose, sucrose and other simple sugars. If oral bacteria ferment carbohydrates and produce acids that may dissolve tooth enamel and induce dental decay, then sweetened drinks may increase the risk of dental caries. The risk would be greater if the frequency of consumption is high.\n",
"\"\"Liquid candy,\" as soda is often called, is no longer just a \"fun\" moniker. In some cases, it's become a life or death situation. Here, too, Coca-Cola is not alone in overcoming this challenge. I don't believe it's coincidence that for both Pepsi and Dr. Pepper Snapple (DPS), sales volumes were also down.\n"
] |
why is it worthwhile to separate colors from whites in laundry?
|
In the past, you would often add bleach to whites to help clean them. However, bleach destroys colored dyes, so you needed to separate the loads first. The other reason is dye transfer: new or cheaply dyed colored items can bleed in the wash and leave your whites looking tinted or dingy.
|
[
"White fabrics acquire a slight color cast after use (usually grey or yellow). Since blue and yellow are complementary colors in the subtractive color model of color perception, adding a trace of blue color to the slightly off-white color of these fabrics makes them appear whiter. Laundry detergents may also use fluorescing agents to similar effect. Many white fabrics are blued during manufacturing. Bluing is not permanent and rinses out over time leaving dingy or yellowed whites. A commercial bluing product allows the consumer to add the bluing back into the fabric to restore whiteness.\n",
"White is the color most associated with cleanliness. Objects which are expected to be clean, such as refrigerators and dishes, toilets and sinks, bed linen and towels, are traditionally white. White was the traditional color of the coats of doctors, nurses, scientists and laboratory technicians, though nowadays a pale blue or green is often used. White is also the color most often worn by chefs, bakers, and butchers, and the color of the aprons of waiters in French restaurants.\n",
"The product is primarily used on white fabrics that have become dingy or have taken on a yellow color cast over time. When adding a small amount of the product to wash water, fabric items will actually be dyed slightly blue. However, because blue and yellow are complementary colors in the subtractive color model of color perception, adding a trace of blue color to yellowed fabrics visually cancels out the yellow color cast making the fabric appear very white.\n",
"Colour Catcher products are claimed to prevent colour runs in washing machine cycles and allow coloured and whites to be washed together without incurring color run accidents. It is sold in packets of 10-20 paper-like sheets that are intended to absorb the excess dyes released during the washing process by garments, before they have the time to transfer onto other clothes. There are several other products under the Colour Catcher name, including an oxi-action stain remover and a sheet that is claimed to restore and maintains clothes' whiteness.\n",
"The chemical formulae of alternative color dyes typically contain only tint and have no developer. This means that they will only create the bright color of the packet if they are applied to light blond hair. Darker hair (medium brown to black) would need to be bleached in order for these pigment applications to take to the hair desirably. Some types of fair hair may also take vivid colors more fully after bleaching. Gold, yellow and orange undertones in hair that has not been lightened enough can muddy the final hair color, especially with pink, blue and green dyes. Although some alternative colors are semi-permanent, such as blue and purple, it could take several months to fully wash the color from bleached or pre-lightened hair.\n",
"The college uses blue, green, purple, and black in its publications. Moreover, the interior design color palette of the college's main reception area uses those colors. With the exception of black, nurses commonly wear scrubs in those colors. Since 2010, there has been a growing trend for hospitals and health care organizations to assign scrub color codes to help identify healthcare professional by discipline or department. Color coded uniforms, however, have been widely criticized by healthcare workers for various reasons, one being that it cultivates a caste mentality in an environment that requires teamwork across all disciplines. In any event, the colors at the college do not represent a particular discipline or academic level.\n",
"The use of colors varies depending on styles of certain tribes. Generally they include shades of white, reds, browns, greens, and yellows. Blue does not appear to feature. Natural dyes produce variations in color, which are particularly obvious on older Bibibaffs.\n"
] |
why does peanut butter turn shiny after being spread?
|
The oil is more visible when the peanut butter is spread thin. Spreading also smooths the surface, and a smooth, oil-coated surface reflects light more like a mirror, which is what makes it look shiny.
|
[
"Peanut butter is a food paste or spread made from ground dry roasted peanuts. It often contains additional ingredients that modify the taste or texture, such as salt, sweeteners or emulsifiers. Peanut butter is served as a spread on bread, toast or crackers, and used to make sandwiches (notably the peanut butter and jelly sandwich). It is also used in a number of confections, such as peanut-flavored granola bars or croissants and other pastries. The United States is a leading exporter of peanut butter and itself consumes $800 million of peanut butter annually.\n",
"Peanut butter is a food paste or spread made from ground dry-roasted peanuts. It often contains additional ingredients that modify the taste or texture, such as salt, sweeteners, or emulsifiers. Peanut butter is popular in many countries. The United States is a leading exporter of peanut butter and itself consumes $800 million of peanut butter annually.\n",
"Peanut butter may be made from peanut paste mixed with a stabilizing agent, a sweetening agent, salt, and optionally, an emulsifying agent. In such formulas, peanut paste acts as the main ingredient in peanut butter, from 75% to as much as 99% of the recipe. Peanut butter is mainly known for being sold as a spread, and peanut paste is regularly sold to be used as an ingredient in cookies, cakes and a number of other retail food products.\n",
"Both crunchy/chunky and smooth peanut butter are sources of saturated (primarily palmitic acid, 21% of total fat) and monounsaturated fats, mainly oleic acid as 47% of total fat, and polyunsaturated fat (28% of total fat), primarily as linoleic acid).\n",
"Forms of peanut butter were already popular before Rosefield's innovation. The problem was that the oil separated from the peanut grit and did not keep. Rosefield's patented homogenization solution was to partially hydrogenate the peanut oil to make it more miscible with the peanuts. (In other words, he added vegetable shortening to his recipe.) This also made it possible to churn the peanut butter to a creamy consistency. His company promised a one-year shelf life for the product and claimed that it tasted better and was less sticky than previous formulas.\n",
"The two main types of peanut butter are crunchy (or chunky) and smooth (or creamy). In crunchy peanut butter, some coarsely-ground peanut fragments are included to give extra texture. The peanuts in smooth peanut butter are ground uniformly, creating a creamy texture.\n",
"Peanut butter is served as a spread on bread, toast, or crackers, and used to make sandwiches (notably the peanut butter and jelly sandwich). It is also used in a number of breakfast dishes and desserts, such as peanut-flavored granola, smoothies, crepes, cookies, brownies, or croissants. It is similar to other nut butters such as cashew butter and almond butter.\n"
] |
i saw a commercial for a car dealership offering you a car for $88 down and $88 per month even if you have bad or no credit. what's the catch? how can they do this?
|
You will be paying interest on that car for decades.
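A quick sketch of why a tiny payment can stretch out almost forever; the price and interest rate below are purely hypothetical, not taken from the ad:

```python
import math

def months_to_pay_off(principal, annual_rate, payment):
    """Months needed to clear a simple amortizing loan, or None if the
    payment doesn't even cover the monthly interest (balance never shrinks)."""
    r = annual_rate / 12.0                       # monthly interest rate
    if payment <= principal * r:
        return None                              # negative amortization
    # Standard amortization formula: n = -ln(1 - r*P/M) / ln(1 + r)
    return -math.log(1 - r * principal / payment) / math.log(1 + r)

# Hypothetical: an $8,000 car financed at 12% APR and paid at $88/month
n = months_to_pay_off(8000, 0.12, 88)
print(f"{n:.0f} months, about {n / 12:.0f} years")   # ~241 months, ~20 years

# At a slightly higher price or a typical subprime rate, $88 doesn't even
# cover the monthly interest, so the balance would never go down at all.
print(months_to_pay_off(9000, 0.15, 88))             # None
```

In practice, ads like this usually also rely on some combination of a very long term, a high rate for buyers with bad credit, or fine-print fees, but the arithmetic above is the core of the catch.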
|
[
"Depending on the type of car purchased and \"the difference in fuel economy between the purchased vehicle and the trade-in vehicle\", the amount of the credit given in the form of vouchers to eligible customers is either $3,500 or $4,500. New car dealers will be able to reduce the purchase price by the amount of the voucher for which that the customer is eligible.\n",
"Car brokers work with their own established network of new car dealerships. When a client requires a new car, the car broker will contact one or more dealers in their network and determine which one will provide the required car at the lowest price. Delivery and location parameters may also be considered. Some car brokers offer to deliver the car to the client's home or place of work \n",
"Typical offers from auto companies are \"Zero Percent APR financing available or $1,000 rebate\". The consumer who elects \"zero percent\" financing gives up a $1,000 rebate (reduction in car price). Effectively, he or she pays $1,000 to get the \"interest free\" loan. Since only auto makers can do this type of bundling, banks, credit unions and other competitors are left at a disadvantage. They must disclose true APR rates while the auto makers can claim no interest costs. In the process, the typical consumer is left with a complex finance problem. \"Zero percent\" financing can cost a lot less, or a lot more, than conventional financing with a non-auto maker institution.\n",
"Customers may also find that a dealer can get them better rates than they can with their local bank or credit union. However, manufacturers often offer a low interest rate OR a cash rebate, if the vehicle is not financed through the dealer. Depending upon the amount of the rebate, it is prudent for the consumer to check if applying a larger rebate results in a lower payment due to the fact that s/he is financing less of the purchase. For example, if a dealer has an interest rate offer of 7.9% financing OR a $2000.00 rebate and a consumer's lending source offers 8.25%, a consumer should compare at the credit union what payments and total interest paid would be, if the consumer financed $2000.00 less at the credit union. The dealer can have their lending institution check a consumer's credit. A consumer can also allow his or her lending source to do the same and compare the results. Most financing available at new car dealerships is offered by the financing arm of the vehicle manufacturer or a local bank.\n",
"A car dealer orders vehicles from the manufacturer for inventory and pays interest (called flooring or floorplanning). Dealer holdbacks are a system of payments made by the manufacturers to their dealers. The holdback payments assist the dealer's ability to stock their inventory of vehicles and improves the profitability of dealers. Typically the holdback amount is around 1% to 3% of the vehicles' manufacturer's suggested retail price (MSRP). Hold-back is usually not a negotiable part of the price a consumer would pay for the vehicle, but dealers will \"give up\" the dealer holdback to get rid of a car that has been sitting in its inventory for a long time, or if the additional sale will bring them up to the manufacturer's additional incentive payments for reaching unit bonus targets. The holdback was originally designed to help offset the cost the new car dealer has for paying interest on the money that is borrowed to keep the car in inventory, but is in effect lowering the dealer's gross profit, and thus the sales commissions paid to employees. The holdback allows dealerships to promote at- or near-invoice price sales and still achieve comfortable profits on such transactions.\n",
"New car owners receive 50% or 75% of the time-based fee and 75% of the km fee. They can book their car for free and at other times, the car must be available for roughly 50% of the weekdays and 50% of the weekends, or a penalty may be charged to the owner. Owners do not choose who can use their vehicle beyond selecting whether a driver is 21 or older, 25 or older etc. (as per the CTP greenslip requirements), though they can request that a driver is banned. The site manages toll fees and charges these back to the owner each month. Drivers are able to purchase fuel with a fuel card and this is charged back to the owner each month. Monthly administration costs are also charged each month (which include fully comprehensive car insurance, and roadside assist). All these costs are then deducted from any earnings each month, with any positive remaining balance being transferred to the car owner's bank account.\n",
"According to one survey, more than half of dealership customers would prefer to buy directly from the manufacturer, without any monetary incentives to do so. An analyst report of a direct sales model is estimated to cut the cost of a vehicle by 8.6%. This implies an even greater demand currently exists for a direct manufacturer sales model. However, state laws in the United States prohibit manufacturers from selling directly, and customers must buy new cars through a dealer.\n"
] |
Do dissolved solids (I.E. sugar in coffee) have the same volume as their constituents?
|
Generally, no. Archimedes does not apply when you're dissolving a solid.
Depending on the nature of coordination in solution, a dissolved solid can end up occupying more or less volume than the solid itself would. If everything behaved like an ideal solution this wouldn't be the case, but real mixtures are described by [partial molar properties](_URL_0_), which means quantities like volume stop being simply additive.
Changes in how water molecules order themselves around the solute can also change the local volume, in much the same way that ice takes up more volume than liquid water.
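In symbols (a standard thermodynamics relation; $n_i$ is the amount of component $i$ and $\bar{V}_i$ its partial molar volume):

$$V_{\text{solution}} = \sum_i n_i \bar{V}_i, \qquad \bar{V}_i = \left(\frac{\partial V}{\partial n_i}\right)_{T,P,\,n_{j\neq i}}$$

The partial molar volume $\bar{V}_i$ generally differs from the pure substance's molar volume, so the total is not just "volume of solvent plus volume of solute". The classic demonstration is mixing 50 mL of ethanol with 50 mL of water, which gives roughly 96-97 mL rather than 100.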
|
[
"When a sugar solution is measured by refractometer or density meter, the °Bx or °P value obtained by entry into the appropriate table only represents the amount of dry solids dissolved in the sample if the dry solids are exclusively sucrose. This is seldom the case. Grape juice (must), for example, contains little sucrose but does contain glucose, fructose, acids, and other substances. In such cases, the °Bx value clearly cannot be equated with the sucrose content, but it may represent a good approximation to the total sugar content. For example, an 11.0% by mass D-Glucose (\"grape sugar\") solution measured 10.9 °Bx using a hand held instrument. For these reasons, the sugar content of a solution obtained by use of refractometry with the ICUMSA table is often reported as \"Refractometric Dry Substance\" (RDS) which could be thought of as an equivalent sucrose content. Where it is desirable to know the actual dry solids content, empirical correction formulas can be developed based on calibrations with solutions similar to those being tested. For example, in sugar refining, dissolved solids can be accurately estimated from refractive index measurement corrected by an optical rotation (polarization) measurement.\n",
"BULLET::::- Liquid sugars are strong syrups consisting of 67% granulated sugar dissolved in water. They are used in the food processing of a wide range of products including beverages, hard candy, ice cream, and jams.\n",
"Scientists and the sugar industry use degrees Brix (symbol °Bx), introduced by Adolf Brix, as units of measurement of the mass ratio of dissolved substance to water in a liquid. A 25 °Bx sucrose solution has 25 grams of sucrose per 100 grams of liquid; or, to put it another way, 25 grams of sucrose sugar and 75 grams of water exist in the 100 grams of solution.\n",
"The remaining sugar is then dissolved to make a syrup (about 70 percent by weight solids), which is clarified by the addition of phosphoric acid and calcium hydroxide that combine to precipitate calcium phosphate. The calcium phosphate particles entrap some impurities and absorb others, and then float to the top of the tank, where they are skimmed off.\n",
"Danone Actimel plain 0% contains 3.3 g of sugar, original plain contains 10.5 g of sugar, multifruit contains 12.0 g of sugar for every serving (100 g). None of those concentrations is higher than the level defined as \"HIGH\" by the UK Food Standards Agency (described for concentrations of sugar above 15 g per 100 g). As a comparison, Coca-Cola and orange juices are also in the range of 10 g of sugar per 100 g, but with a serving size usually higher than 250 ml the total sugar quantity is much higher.\n",
"Sugar is the generic name for sweet-tasting, soluble carbohydrates, many of which are used in food. The various types of sugar are derived from different sources. Simple sugars are called monosaccharides and include glucose (also known as dextrose), fructose, and galactose. \"Table sugar\" or \"granulated sugar\" refers to sucrose, a disaccharide of glucose and fructose. In the body, sucrose is hydrolysed into fructose and glucose.\n",
"In cooking, a syrup or sirup (from ; \"sharāb\", beverage, wine and ) is a condiment that is a thick, viscous liquid consisting primarily of a solution of sugar in water, containing a large amount of dissolved sugars but showing little tendency to deposit crystals. Its consistency is similar to that of molasses. The viscosity arises from the multiple hydrogen bonds between the dissolved sugar, which has many hydroxyl (OH) groups. \n"
] |
When light is reflected off a surface, is that same photon being bounced back or is that photon absorbed and then another one emitted?
|
To the extent I understand it, photons don't have an "identity": there is no way to tell the two cases apart, so use whichever assumption works for the problem you are solving.
It is a very unsatisfying answer, but physics has a lot of that.
|
[
"Total external reflection is the situation where the light starts in air and vacuum (refractive index 1), and bounces off a material with index of refraction less than 1. For example, in X-rays, the refractive index is frequently slightly less than 1, and therefore total external reflection can happen at a glancing angle. It is called \"external\" because the light bounces off the exterior of the material. This makes it possible to focus X-rays.\n",
"The law of reflection states that for each incident ray the angle of incidence equals the angle of reflection, and the incident, normal, and reflected directions are coplanar. This behavior was first described by Hero of Alexandria (AD c. 10–70). It may be contrasted with diffuse reflection, in which light is scattered away from the surface in a range of directions rather than just one.\n",
"The interference phenomenon in optics occurs as a result of the wave propagation of light. When light of a given wavelength is reflected back upon itself by a mirror, standing waves are generated, much as the ripples resulting from a stone dropped into still water create standing waves when reflected back by a surface such as the wall of a pool. In the case of ordinary incoherent light, the standing waves are distinct only within a microscopically thin volume of space next to the reflecting surface.\n",
"Diffuse reflection is the reflection of light or other waves or particles from a surface such that a ray incident on the surface is scattered at many angles rather than at just one angle as in the case of specular reflection. An \"ideal\" diffuse reflecting surface is said to exhibit Lambertian reflection, meaning that there is equal luminance when viewed from all directions lying in the half-space adjacent to the surface.\n",
"Alternatively, it is also possible to use an oscillating reflecting surface to cause destructive interference with reflected light along a single optical path. This principle is the basis for a Michelson interferometer.\n",
"Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When IR light of these frequencies strikes an object, the energy is either reflected or transmitted.\n",
"Total internal reflection describes the fact that radiation (e.g. visible light) can, at certain angles, be totally reflected from an interface between two media of different indices of refraction (see Snell's law). Total internal reflection occurs when the first medium has a larger refractive index than the second medium, for example, light that starts in water and bounces off the water-to-air interface.\n"
] |
- if deadly viruses, like ebola, ultimately kill the host, how do they evolve, or persist to an epidemic level?
|
It isn't a good way to spread a virus strain.
That's precisely why these epidemic diseases kill thousands and then burn out. They massacre their food supply and host by accident and die with them.
The most successful viruses have no symptoms. They live in you and transfer among humanity without alarming our immune system or killing the host.
Ebola and others like it have accidentally jumped into humans from their preferred animal hosts, in which they trigger little to no immune response or symptoms.
|
[
"Generally, if a virus kills its host too quickly, the host will not have a chance to come in contact with other hosts and transmit the virus before dying. However, in serial passage, when a virus was being transmitted from host to host regardless of its virulence, such as Subbaro’s experiment, the viruses that grow the fastest (and are therefore the most virulent) are selected for.\n",
"Viruses can remain intact from apoptosis in particular in the latter stages of infection. They can be exported in the \"apoptotic bodies\" that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus.\n",
"Every lethal viral disease presents a paradox: killing its host is obviously of no benefit to the virus, so how and why did it evolve to do so? Today it is believed that most viruses are relatively benign in their natural hosts; some viral infection might even be beneficial to the host. The lethal viral diseases are believed to have resulted from an \"accidental\" jump of the virus from a species in which it is benign to a new one that is not accustomed to it (see zoonosis). For example, viruses that cause serious influenza in humans probably have pigs or birds as their natural host, and HIV is thought to derive from the benign non-human primate virus SIV.\n",
"The natural source of Ebola virus is probably bats. Marburg viruses are transmitted to humans by monkeys, and Lassa fever by rats (\"Mastomys natalensis\"). Zoonotic infections can be severe because humans often have no natural resistance to the infection and it is only when viruses become well-adapted to new host that their virulence decreases. Some zoonotic infections are often \"dead ends\", in that after the initial outbreak the rate of subsequent infections subsides because the viruses are not efficient at spreading from person to person.\n",
"Transmission of the ebolaviruses between natural reservoirs and humans is rare, and outbreaks of Ebola virus disease are often traceable to a single case where an individual has handled the carcass of a gorilla, chimpanzee or duiker. The virus then spreads person-to-person, especially within families, hospitals and during some mortuary rituals where contact among individuals becomes more likely.\n",
"A pandemic has broken out across Earth, and most of humanity has been killed by a virus. The virus began with patient zero, a woman who comes into contact with the three DNA strands necessary for this virus to come into existence. A soldier, Colonel Beckett, is sent back in time to kill her and prevent the virus from forming.\n",
"Viruses have been able to continue their infectious existence due to evolution. Their rapid mutation rates and natural selection has given viruses the advantage to continue to spread. One way that viruses have been able to spread is with the evolution of virus transmission. The virus can find a new host through:\n"
] |
why do tech manufacturers region lock their devices?
|
It's pretty simple: the highest price people are willing to pay for a device differs quite significantly depending on the region you sell it in. If you charge the same price all over the world, you won't sell much in some regions. If you charge different prices and don't region-lock, people will just buy from the cheapest region. The "solution" is a region lock.
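A toy calculation (numbers entirely made up) shows the incentive, assuming one rich region and one poorer one:

```python
# Hypothetical markets: same product, very different willingness to pay.
regions = {"rich":   {"buyers": 1_000, "max_price": 600},
           "poorer": {"buyers": 1_000, "max_price": 250}}

# One global price high enough for the rich market prices the poorer one out entirely.
global_price = 600
revenue_global = sum(r["buyers"] * global_price
                     for r in regions.values() if r["max_price"] >= global_price)

# Regional prices, enforced by a region lock, capture both markets.
revenue_regional = sum(r["buyers"] * r["max_price"] for r in regions.values())

print(revenue_global)    # 600000
print(revenue_regional)  # 850000 -- only holds if buyers can't import from the cheap region
```

Without the lock, everyone would just buy at the poorer region's price and the extra revenue evaporates, which is exactly the arbitrage the lock is meant to block.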
Tl;dr: it's because of money.
|
[
"Adapters (sometimes called \"dongles\") allow connecting a peripheral device with one plug to a different jack on the computer. They are often used to connect modern devices to a legacy port on an old system, or legacy devices to a modern port. Such adapters may be entirely passive, or contain active circuitry.\n",
"Secure access control such as for company entry and exit, home access, cars, and electronic devices was the first use of smart rings. Smart rings change the status quo for secure access control by increasing ease of use, decreasing physical security flaws such as by ease of losing the device, and by adding two-factor authentication mechanisms including biometrics and key code entry.\n",
"Handset manufacturers have economic incentives both to strengthen SIM lock security (which placates network providers and enables exclusivity deals) and to weaken it (broadening a handset's appeal to customers who are not interested in the service provider that offers it). Also, making it too difficult to unlock a handset might make it less appealing to network service providers who have a legal obligation to provide unlock codes for certain handsets or in certain countries.\n",
"The more that people know about lock technology, the better they are capable of understanding how and where certain weaknesses are present. This makes them well-equipped to participate in sportpicking endeavors and also helps them to simply be better consumers in the marketplace, making decisions based on sound fact and research.\n",
"Best Access products are sold primarily and directly to corporate and institutional end users without locksmith and wholesaler access to competitive distribution. Its products are typically marketed toward and installed into moderately sized or larger master key systems.\n",
"BULLET::::- Internal cooperation between departments of the company: product-security teams and corporate IT-security teams will have to work closely together in order to prevent the hackability of their devices. To do so, companies may create guidelines that minimize probabilities of bugs, and security gaps (software). Making modifying and patching systems easier can be another effect driven from that.\n",
"Electronic and mechanical locking devices (such as timers, drop meters, coin security products, smart cards and related equipment and technology, value transfer stations, access control units, and some appliances in kitchen) for electric equipment in consumer market and gaming industries. The company also provides security products such as luggage, furniture, laboratory equipment and commercial laundry.\n"
] |
What are some unsolved problems in Computer Science?
|
The biggest and probably the most famous problem is the P versus NP problem. It concerns decision problems (problems that can be answered with a "yes" or a "no"). There are two important classes of decision problems - P and NP. P problems are those which can be decided in polynomial time. NP problems are those whose solutions can be verified in polynomial time. It's simple to see that P is a subset of NP - if you can solve a problem in polynomial time, you can verify a solution in polynomial time just by solving it. The big question is whether P is a proper subset of NP - in other words, are there decision problems whose solutions can be verified in polynomial time, but cannot be solved in polynomial time?
Another famous open problem is integer factorization - can a semiprime (a product of two distinct primes) be factored efficiently (in polynomial time)? This is related to P versus NP, but it's usually posed as a function problem rather than a decision problem, so it's a bit different. It matters in practice because the RSA cryptosystem relies on semiprimes being difficult to factor: if an efficient factoring method were found, RSA would be easy to crack, which would be bad.
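A tiny sketch of that asymmetry (toy numbers, nowhere near real RSA key sizes): checking a claimed factorization is one multiplication, while finding it by brute force takes on the order of sqrt(n) divisions, which is exponential in the number of digits of n.

```python
import math
import time

p, q = 1_000_003, 1_000_033   # small primes picked purely for illustration
n = p * q

# Verification: trivially fast.
assert p * q == n

# Search: naive trial division, roughly sqrt(n) candidate divisors.
def trial_division(n):
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return None  # n is prime

start = time.perf_counter()
print(trial_division(n), f"found in {time.perf_counter() - start:.3f}s")
```

Doubling the number of digits of n squares the work for this naive search, which is why nobody factors 2048-bit moduli this way; whether any polynomial-time classical method exists is the open question.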
|
[
"This article is a list of unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known, or when experts in the field disagree about proposed solutions.\n",
"Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP, and P versus NP problem is one of the seven Millennium Prize Problems stated by the Clay Mathematics Institute in 2000. The Official Problem Description was given by Turing Award winner Stephen Cook.\n",
"Both problems were held to be of practical and theoretical importance long before the time of digital computers, but they are now generally considered the domain of computer science, as computers are most often used currently to tackle individual instances.\n",
"Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems.\n",
"In computer science, it is common to analyze the computational complexity of problems, including real life problems and games. It was proven that for the \"offline\" version of \"Tetris\" (the player knows the complete sequence of pieces that will be dropped, i.e. there is no hidden information) the following objectives are NP-complete:\n",
"Errors in computer programs are called \"bugs\". They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to \"hang\", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.\n",
"Overall it is clear to see that there are many medical problems that can arise from using computers and damaged eyesight, CTS and musculoskeletal problems are only the tip of the iceberg. But it is also important to note that changes are currently being made to ensure that all these problems are ameliorated to the best standard that employers and computer users currently have the technology to implement. By taking measures like ensuring our computer peripherals are situated to ensure maximum comfort while working and taking frequent breaks from computational work can go a long way to ensuring that many medical conditions arising from computers are avoided. These are small measures but they go a long way to ensuring that computer users maintain their health, As with many modern and marvellous technologies in the world today there is always a downside and the major downside of computers is the medical problems that can arise from their prolonged use. Thus it is the duty of computer users and employers everywhere to ensure that the downside is kept to a minimum.\n"
] |
why did saber-tooth cats have such big fangs?
|
I'm just guessing here, but maybe it preyed on larger animals. Those fangs would have sunk deep into flesh.
|
[
"The different groups of saber-toothed cats evolved their saber-toothed characteristics entirely independently. They are most known for having maxillary canines which extended down from the mouth even when the mouth was closed. Saber-toothed cats were generally more robust than today's cats and were quite bear-like in build. They were believed to be excellent hunters and hunted animals such as sloths, mammoths, and other large prey. Evidence from the numbers found at La Brea Tar Pits suggests that \"Smilodon\", like modern lions, was a social carnivore.\n",
"A saber-toothed cat (alternatively spelled sabre-toothed cat) is any member of various extinct groups of predatory mammals that were characterized by long, curved saber-shaped canine teeth. The large maxillary canine teeth extended from the mouth even when it was closed. The saber-toothed cats were found worldwide from the Eocene epoch to the end of the Pleistocene epoch (42 million years ago (mya) – 11,000 years ago), existing for about .\n",
"Many of the saber-toothed cats' food sources were large mammals such as elephants, rhinos, and other colossal herbivores of the era. The evolution of enlarged canines in Tertiary carnivores was a result of large mammals being the source of prey for saber-toothed cats. The development of the saber-toothed condition appears to represent a shift in function and killing behavior, rather than one in predator-prey relations. Many hypotheses exist concerning saber-tooth killing methods, some of which include attacking soft tissue such as the belly and throat, where biting deep was essential to generate killing blows. The elongated teeth also aided with strikes reaching major blood vessels in these large mammals. However, the precise functional advantage of the saber-toothed cat's bite, particularly in relation to prey size, is a mystery. A new point-to-point bite model is introduced in the article by Andersson et al., showing that for saber-tooth cats, the depth of the killing bite decreases dramatically with increasing prey size. The extended gape of saber-toothed cats results in a considerable increase in bite depth when biting into prey with a radius of less than 10 cm. For the saber-tooth, this size-reversed functional advantage suggests predation on species within a similar size range to those attacked by present-day carnivorans, rather than \"megaherbivores\" as previously believed.\n",
"It is now generally thought that \"Megantereon\", like other saber-toothed cats, used its long saber teeth to deliver a killing throat bite, severing most of the major nerves and blood vessels. While the teeth would still risk damage, the prey animal would be killed quickly enough that any struggles would be feeble at best.\n",
"Saber-tooths also coexisted in many places with conical-toothed cats. In Africa and Eurasia, sabertooth cats competed with several pantherines and cheetahs until the early or middle Pleistocene. \"Homotherium\" survived in northern Europe even until the late Pleistocene. In the Americas, they coexisted with the cougar, American lion, American cheetah, and jaguar until the late Pleistocene. Saber-toothed and conical-toothed cats competed with each other for food resources, until the last of the former became extinct. All recent felids have more or less conical-shaped upper canines.\n",
"Traditionally, saber-toothed cats have been artistically restored with external features similar to those of extant felids, by artists such as Charles R. Knight in collaboration with various paleontologists in the early 20th century. In 1969, paleontologist G. J. Miller instead proposed that \"Smilodon\" would have looked very different from a typical cat and similar to a bulldog, with a lower lip line (to allow its mouth to open wide without tearing the facial tissues), a more retracted nose and lower-placed ears. Paleoartist Mauricio Antón and coauthors disputed this in 1998 and maintained that the facial features of \"Smilodon\" were overall not very different from those of other cats. Antón noted that modern animals like the hippopotamus are able to achieve a wide gap without tearing tissue by the moderate folding of the orbicularis oris muscle, and such a muscle configuration exists in modern large felids. Antón stated that extant phylogenetic bracketing (where the features of the closest extant relatives of a fossil taxon are used as reference) is the most reliable way of restoring the life-appearance of prehistoric animals, and the cat-like \"Smilodon\" restorations by Knight are therefore still accurate.\n",
"The earliest felids are known from the Oligocene of Europe, such as \"Proailurus\", and the earliest one with saber-tooth features is the Miocene genus \"Pseudaelurus\". The skull and mandible morphology of the earliest saber-toothed cats was similar to that of the modern clouded leopards (\"Neofelis\"). The lineage further adapted to the precision killing of large animals by developing elongated canine teeth and wider gapes, in the process sacrificing high bite force. As their canines became longer, the bodies of the cats became more robust for immobilizing prey. In derived smilodontins and homotherins, the lumbar region of the spine and the tail became shortened, as did the hind limbs. Based on mitochondrial DNA sequences extracted from fossils, the lineages of \"Homotherium\" and \"Smilodon\" are estimated to have diverged about 18 Ma ago. The earliest species of \"Smilodon\" is \"S. gracilis\", which existed from 2.5 million to 500,000 years ago (early Blancan to Irvingtonian ages) and was the successor in North America of \"Megantereon\", from which it probably evolved. \"Megantereon\" itself had entered North America from Eurasia during the Pliocene, along with \"Homotherium\". \"S. gracilis\" reached the northern regions of South America in the Early Pleistocene as part of the Great American Interchange. The younger \"Smilodon\" species are probably derived from \"S. gracilis\". \"S. fatalis\" existed 1.6 million–10,000 years ago (late Irvingtonian to Rancholabrean ages), and replaced \"S. gracilis\" in North America. \"S. populator\" existed 1 million–10,000 years ago (Ensenadan to Lujanian ages); it occurred in the eastern parts of South America.\n"
] |
when people say how fast something in space is moving what reference point are they using?
|
It is usually going to be with reference to the body that exerts the dominant gravitational force in the region.
The speed of a probe sent to orbit Europa would first be expressed with reference to Earth, then the Sun, then Jupiter, then finally Europa. Possibly other planets or moons if a gravitational assist were involved.
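Mechanically, quoting a speed "relative to X" just means subtracting X's velocity vector before taking the magnitude. A minimal sketch with invented numbers (real values would come from an ephemeris):

```python
import numpy as np

# All vectors expressed in the same Sun-centred frame, km/s, purely illustrative.
v_probe   = np.array([32.0,  5.0,  0.0])
v_sun     = np.array([ 0.0,  0.0,  0.0])
v_jupiter = np.array([12.5,  3.0,  0.0])
v_europa  = np.array([12.5,  3.0, 13.7])   # Jupiter's motion plus Europa's orbit around it

for name, v_body in [("the Sun", v_sun), ("Jupiter", v_jupiter), ("Europa", v_europa)]:
    relative_speed = np.linalg.norm(v_probe - v_body)
    print(f"speed relative to {name}: {relative_speed:.1f} km/s")
```

The same probe gets three different "speeds" from the same data, which is the whole point: the number is meaningless without naming the reference body.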
|
[
"Alternatively, we could choose a frame of reference \"S′\" situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of . In order to catch up to the first car, it will take a time of , that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at .\n",
"In Einstein's theory of relativity, the path of an object moving relative to a particular frame of reference is defined by four coordinate functions \"x\"(\"τ\"), where μ is a spacetime index which takes the value 0 for the timelike component, and 1, 2, 3 for the spacelike coordinates. The zeroth component is defined as the time coordinate multiplied by \"c\",\n",
"In the study of 1-dimensional kinematics, position vs. time graphs (also called distance vs. time graphs, or p-t graphs) provide a useful means to describe motion. The specific features of the motion of objects are demonstrated by the shape and the slope of the lines. In the accompanying figure, the plotted object moves away from the origin at a uniform speed of 1.66 m/s for six seconds, halts for five seconds, then returns to the origin over a period of seven seconds at a non-constant speed.\n",
"It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one is able to convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you are able to deduct five minutes from the time displayed on your watch in order to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three).\n",
"The rate of change in the distance between two objects in a frame of reference with respect to which both are moving (their closing speed) may have a value in excess of \"c\". However, this does not represent the speed of any single object as measured in a single inertial frame.\n",
"It may be helpful to visualize this situation using spacetime diagrams. For a given observer, the \"t\"-axis is defined to be a point traced out in time by the origin of the spatial coordinate \"x\", and is drawn vertically. The \"x\"-axis is defined as the set of all points in space at the time \"t\" = 0, and is drawn horizontally. The statement that the speed of light is the same for all observers is represented by drawing a light ray as a 45° line, regardless of the speed of the source relative to the speed of the observer.\n",
"Since there is no absolute reference frame in relativity theory, a concept of 'moving' doesn't strictly exist, as everything may be moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be \"comoving\". Therefore, \"S\" and \"S\"′ are not \"comoving\".\n"
] |
how do countries pay for maternity leave?
|
In France it is paid by the social security system (the same one that covers healthcare, etc.), not the employer.
|
[
"Paid maternity leave is important for women to take time away from work to bond with a child without financial pressures. Of the 193 United Nations countries, only a handful do not have a paid-parental-leave policy: New Guinea, Suriname, the United States and a few South Pacific island nations. The international history dates back to the 1970s, with countries such as Iraq granting full pay for women. By the 1980s, Great Britain was at the point of giving women benefits but did not specify a pay rate. The history of pay rates is limited and not well-recorded, except by the OECD.\n",
"Most OECD countries provide payments replacing over 50 percent of previous earnings, with twelve countries offering average-wage mothers full compensation for the leave. Pay rates are lowest in Ireland and the United Kingdom, where only about one-third of gross average earnings are replaced by maternity benefits. Despite lengthy paid-leave entitlements, full-rate maternity leave in these countries is only nine weeks in Ireland and twelve in the UK.\n",
"In Denmark, a woman can receive 35 or 46 weeks of paid leave; the 35-week pay leave may be spread over 46 weeks. Women in Greece must be insured to receive maternity benefits, which include 56 days before giving birth and 63 days afterwards. To receive the benefits a woman must stop working for 56 days. If she does not take 56 days off, the woman must add the days after giving birth to be paid. In Switzerland, a woman is guaranteed up to 14 weeks (a minimum of 8 weeks) of paid leave after giving birth. She is paid 80 percent of her previous wage, with a daily maximum.\n",
"In the United Kingdom maternity-leave pay is known as Statutory Maternity Pay (SMP), which can cover up to 39 weeks of maternity leave. A woman can expect to earn 90 percent of her weekly earnings for the first six weeks of maternity leave; after that, the rate decreases.\n",
"In Australia, women can receive up to 18 weeks minimum-wage from the government; if an employer offers paid leave, a mother can receive that as well. In Canada, a woman can receive 17 weeks of maternity leave: two weeks before giving birth and 15 weeks afterwards. The two weeks before birth are unpaid. As of March 2019, the Canadian federal government announced new benefits that will add five additional weeks to the 35-week standard option and eight additional weeks to the 61-week extended option. In New Zealand, primary leave is 18 weeks of paid leave; special leave covers 10 days (spread out) for appointments or unpaid illness.\n",
"In most countries, the cost of maternity leave is shared by the government, employer, insurance agency and other social security programmes. In Singapore, for example, the employer bears the cost for 8 weeks and public funds for 8 weeks. In Australia and Canada, public funds bear the full cost. A social insurance scheme bears the cost in France. In Brazil, it shared by the employer, employee and the government.\n",
"Many countries have various legal regulations in place to protect pregnant women and their children. Maternity Protection Convention ensures that pregnant women are exempt from activities such as night shifts or carrying heavy stocks. Maternity leave typically provides paid leave from work during roughly the last trimester of pregnancy and for some time after birth. Notable extreme cases include Norway (8 months with full pay) and the United States (no paid leave at all except in some states). Moreover, many countries have laws against pregnancy discrimination.\n"
] |
What is the Eastern Front known as in Russia?
|
This can be a bit confusing, so I will use *italics for Latinized Russian* and **bold for English translations**
If you're asking about the Eastern Front of WWII, Russians don't usually name that single massive continuous front (the geographic area) separately; the conflict fought along it is known as Великая Отечественная война (*Velikaya Otechestvennaya Voyna*), the **Great Patriotic War**
The plain **Patriotic War** (*Otechestvennaya Voyna*) is the 1812 war against Napoleon's invasion. WWI is the one sometimes called Вторая Отечественная война (*Vtoraya Otechestvennaya Voyna*), the **Second Fatherland War**, which retroactively makes Napoleon's invasion the **First Fatherland War** even though it was the original **Patriotic War**
Confusingly for English speakers, the Soviet forces in the **Great Patriotic War** were organized into formations called фронт (*front* in Latinized Russian), which here means a Soviet military formation equivalent to an army group in most other militaries, not the geographic area you are asking about. You can see [the flag on the right in this video](_URL_0_) says "1 БЕЛОРУССКИЙ ФРОНТ" (*1st Belorussian Front* in Latinized Russian), which most accurately translates to **1st Belorussian Army Group** in American English
Because of the two meanings for "front", it would be confusing to read, "The **Eastern Front** had many *fronts*." The proper translation would be, "The **Great Patriotic War** involved many **army groups**, some of which were named *1st Belorussian Front* (**1st Belorussian Army Group**), the *2nd Belorussian Front* (**2nd Belorussian Army Group**), and *1st Ukrainian Front* (**1st Ukrainian Army Group**)."
|
[
"\"Eastern Front\" is a corps-level simulation of Operation Barbarossa, the German invasion of the Soviet Union in 1941. The player controls the Germans, in white, while the computer plays the Russians, in red. Units are represented as boxes for armored corps or cavalry, and crosses for infantry, an attempt to replicate conventional military symbols given the low resolution.\n",
"The 2nd Far Eastern Front () was a Front—a formation equivalent to a Western Army Group—of the Soviet Army. It was formed just prior to the Soviet invasion of Manchuria and was active from August 5, 1945, until October 1, 1945.\n",
"After its Civil War service, the Far Eastern Front was re-created on June 28, 1938 from the Special Red Banner Far Eastern Army within the Far East Military District. It included the 1st Red Banner Army and the 2nd Red Banner Army. In 1938 Front forces — seemingly the Soviet 32nd Rifle Division of 39th Rifle Corps — engaged the Japanese at the Battle of Lake Khasan. On the eve of the invasion of the Soviet Union by Germany, the Front comprised:\n",
"The Northern Front () was a front of the Red Army during the Russian Civil War which was formed on 15 September 1918 to fight the troops of the interventionists and White Guards in the Northwest, North and Northeast of the Soviet Republic. The Northern Front covered the area between Pskov and Vyatka. It bordered the Eastern Front of the Red Army along the Balakhna - Yarensk - Glazov - Cherdyn, Cherdyn line. The Front headquarters were located in Yaroslavl. \n",
"The Eastern Front or Eastern Theater of World War I (, , \"Vostochnıy front\") was a theater of operations that encompassed at its greatest extent the entire frontier between the Russian Empire and Romania on one side and the Austro-Hungarian Empire, Bulgaria, the Ottoman Empire and the German Empire on the other. It stretched from the Baltic Sea in the north to the Black Sea in the south, involved most of Eastern Europe and stretched deep into Central Europe as well. The term contrasts with \"Western Front\", which was being fought in Belgium and France.\n",
"The Eastern Front started in spring 1918 as a secret movement among army officers and right-wing socialist forces. In that front, they launched an attack in collaboration with the Czechoslovak Legions (then stranded in Siberia by the Bolshevik Government, who barred them from leaving Russia) and with the Japanese, who also intervened to help the Whites in the east. Admiral Alexander Kolchak headed the eastern White counter-revolutionary army and a provisional Russian government. Despite some significant success in 1919, the Whites were defeated being forced back to Far Eastern Russia, where they continued fighting until October 1922. When the Japanese withdrew, the Soviet army of the Far Eastern Republic retook the territory. The Civil War was officially declared over at this point, although Anatoly Pepelyayev still controlled the Ayano-Maysky District at that time. Pepelyayev's Yakut revolt, which concluded on 16 June 1923, represented the last military action in Russia by a White Army. It ended with the defeat of the final anti-communist enclave in the country, signalling the end of all military hostilities relating to the Russian Civil War.\n",
"The 2nd Belorussian Front (, alternative spellings are 2nd Byelorussian Front and 2nd Belarusian Front) (2BF) was a military formation, of Army group size, of the Soviet Army during the Second World War. Soviet army groups were known as Fronts.\n"
] |
How are new stars born following the death of old stars? Surely all the hydrogen has gone- or the previous star wouldn't have died?
|
Stars can only fuse hydrogen (and, in the later stages, other elements) in their cores, where the temperature is high enough to start fusion. The vast majority of the hydrogen (90% or so) is outside the core and gets blown away when the star is dying. This forms the material for the next generation of stars.
|
[
"According to theories of stellar formation, as in other stellar nurseries, the stars in Henize 206 were created after a dying star, or supernova, exploded, sending intense shockwaves through clouds of cosmic gas and dust. The gas and dust were subsequently compressed into large groups, then gravity further condensed them into massive objects, and stars were born. Eventually, some of the stars are expected to die in a fiery blast, triggering another cycle of stellar birth and death. This recycling of stellar dust and gas appears to occur throughout the Universe. Earth's own Sun is considered to have descended from multiple generations of stars, as evidenced by heavy elements found, in the Solar System, in concentrations too large for a first-time star.\n",
"A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Small, relatively cold, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years. A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the middle of its main sequence lifespan.\n",
"Stars less massive than about are convective throughout most of the star. These stars continue to fuse hydrogen in their cores until essentially the entire star has been converted to helium, and they do not develop into subgiants. Stars of this mass have main-sequence lifetimes many times longer than the current age of the Universe.\n",
"Most stars will eventually come to a point in their evolution when the outward radiation pressure from the nuclear fusions in its interior can no longer resist the ever-present gravitational forces. When this happens, the star collapses under its own weight and undergoes the process of stellar death. For most stars, this will result in the formation of a very dense and compact stellar remnant, also known as a compact star.\n",
"By (100 trillion) years from now, star formation will end, leaving all stellar objects in the form of degenerate remnants. If protons do not decay, stellar-mass objects will disappear more slowly, making this era last longer.\n",
"This system may belong to a stellar association called Cygnus OB3, which would mean that Cygnus X-1 is about five million years old and formed from a progenitor star that had more than . The majority of the star's mass was shed, most likely as a stellar wind. If this star had then exploded as a supernova, the resulting force would most likely have ejected the remnant from the system. Hence the star may have instead collapsed directly into a black hole.\n",
"The first massive stars died in supernova explosions which ejected heavier elements into the gas, that formed the next generations of stars. The element composition of a star is an indirect indication of the star's generation and its previous star generation.\n"
] |
When did "Right by conquest" stop being a thing?
|
Actually way later: up until WWII, right by conquest was recognized under international law. "War of aggression" as a crime was only codified in the Nuremberg Principles after WWII and adopted in a UN General Assembly resolution in 1974 (Resolution 3314).
The principle of right by conquest was first diminished by the Kellogg-Briand Pact (1928), which was, in a very basic summary, a group of countries promising not to declare war to resolve their differences. It didn't work; nations still went to war, they just stopped declaring it. But it was a first step towards establishing "war of aggression" as a crime under international law.
|
[
"The right of conquest is the right of a conqueror to territory taken by force of arms. It was traditionally a principle of international law that has gradually given way in modern times until its proscription after World War II when the crime of war of aggression was first codified in the Nuremberg Principles. In 1974 the United Nations General Assembly recommended a definition of the crime of aggression to the Security Council in the non-binding United Nations General Assembly Resolution 3314.\n",
"It became the law after the Conquest, according to Sir Edward Coke, that an estate greater than for a term of years could not be disposed of by will, unless in Kent, where the custom of gavelkind prevailed, and in some manors and boroughs (especially the City of London), where the pre-Conquest law was preserved by special indulgence. The reason why devise of land was not acknowledged by law was, no doubt, partly to discourage deathbed gifts in mortmain, a view supported by Glanvill, partly because the testator could not give the devisee that seisin which was the principal element in a feudal conveyance. By means of the doctrine to uses, however, the devise of land was secured by a circuitous method, generally by conveyance to feoffees to uses in the lifetime of the feoffor to such uses as he should appoint by his will. Up to comparatively recent times a will of lands still bore traces of its origin in the conveyance to uses \"inter vivos\". On the passing of the Statute of Uses lands again became non-devisable, with a saving in the statute for the validity of wills made before 1 May 1536. The inconvenience of this state of things soon began to be felt, and was probably aggravated by the large amount of land thrown into the market after the dissolution of the monasteries. As a remedy an Act was passed in 1540 (which came to be known as the Statute of Wills), and a further explanatory Act in 1542-1543.\n",
"Land and Liberty (, ) is an anarchist slogan. It was originally used as a name of the Russian revolutionary organization Zemlya i Volya in 1878, then by the revolutionary leaders of the Mexican Revolution; the revolution was fought over land rights, and the leaders such as Emiliano Zapata and Pancho Villa were fighting to give the land back to the natives from whom it was expropriated either by force or by some dubious manner. Without land, the peasants were at the mercy of landowners for subsistence.\n",
"The completion of colonial conquest of much of the world (see the Scramble for Africa), the devastation of World War I and World War II, and the alignment of both the United States and the Soviet Union with the principle of self-determination led to the abandonment of the right of conquest in formal international law. The 1928 Kellogg–Briand Pact, the post-1945 Nuremberg Trials, the UN Charter, and the UN role in decolonization saw the progressive dismantling of this principle. Simultaneously, the UN Charter's guarantee of the \"territorial integrity\" of member states effectively froze out claims against prior conquests from this process.\n",
"These historians claim instead that territorial conquest was justified from natural law — that which has no owner can be taken by the first taker. Michael Connor in his book \"The Invention of Terra Nullius\" takes an even more extreme view and argues that no one in the 19th century thought of Australia as being \"terra nullius\". He calls the concept a legal fiction, a straw man developed in the late 20th century:\n",
"The Declaration of Right was enacted in an Act of Parliament, the Bill of Rights 1689, which received the Royal Assent in December 1689. The Act asserted \"certain ancient rights and liberties\" by declaring that:\n",
"In the 18th century, during the Industrial Revolution, the moral philosopher and economist Adam Smith (1723–1790), in contrast to Locke, drew a distinction between the \"right to property\" as an acquired right, and natural rights. Smith confined natural rights to \"liberty and life\". Smith also drew attention to the relationship between employee and employer and identified that property and civil government were dependent upon each other, recognizing that \"the state of property must always vary with the form of government\". Smith further argued that civil government could not exist without property, as government's main function was to safeguard property ownership.\n"
] |
It is said that Benedict Arnold died wishing to wear his Continental Army uniform, expressing regret at his betrayal. This may be legend, but do we know how he really felt in his later years about what he did, or his attitude towards the United States?
|
Not to discourage further discussion, but see /u/uncovered-history's answer in [this post](_URL_0_). He also addresses the Continental Army uniform question a little further down the comment chain.
|
[
"Benedict Arnold (June 14, 1801) was an American military officer who served as a general during the American Revolutionary War, fighting for the American Continental Army before defecting to the British in 1780. George Washington had given him his fullest trust and placed him in command of the fortifications at West Point, New York. Arnold planned to surrender the fort to British forces, but the plot was discovered in September 1780 and he fled to the British. His name quickly became a byword in the United States for treason and betrayal because he led the British army in battle against the very men whom he had once commanded.\n",
"Arnold was in the West Indies when the Boston Massacre took place on March 5, 1770. He wrote that he was \"very much shocked\" and wondered \"good God, are the Americans all asleep and tamely giving up their liberties, or are they all turned philosophers, that they don't take immediate vengeance on such miscreants?\"\n",
"Arnold, tipped off about André's arrest by a member of his staff unaware of his commander's involvement, was able to escape to the British with his family. After holding some commands in the British Army, he emigrated to England at war's end, where he was buried two decades later. Paulding, Van Wart and Williams were recognized and compensated for their roles in the capture. The Continental Congress awarded them lifetime pensions and the Fidelity Medallion, generally considered the first U.S. military decoration; the state gave them farms confiscated from Loyalists. Two decades later, three counties in the new state of Ohio were named after them. Later the elementary school near the memorial would take Paulding's name as well.\n",
"To explain and justify his actions, Arnold wrote an open letter dated October 7, 1780 that was published on October 11 in New York by the \"Royal Gazette\". This letter to \"The Inhabitants of America\" outlined what Arnold saw as the corruption, lies, and tyranny of the Second Continental Congress and the Patriot leadership.\n",
"Arnold became a celebrated hero early in the Revolutionary War. Severely wounded in the 1777 Battles of Saratoga, his shattered left leg left him unable to ride a horse or walk without pain. In June 1778, he was made military governor of southeast Pennsylvania, stationed in Philadelphia. His taste for high living and use of soldiers for personal tasks made him unpopular. In April 1779, he married Peggy Shippen, the daughter of a prominent Tory. That same month, he began a treasonous correspondence with British General Henry Clinton. By the summer, he was informing Clinton of American troop locations and strengths, and negotiating a fee to switch sides.\n",
"Arnold was living in British-controlled New York when his letter was published and he had been given a commission as a British officer. The letter \"To the Inhabitants of America\" was the first in a series of letters directed at different groups in America. He followed it with \"A Proclamation to the Officers and Soldiers of the Continental Army\" dated October 20, 1780. These letters essentially echoed common Loyalist opinion.\n",
"Written in 1780, while secretary to the French Legation to the US Army: \"D'Complot du Benedict Arnold & Sir Henri Clinton contre Eunas` States du America General George Washington\" One of the first accounts of Arnold's treason, was not published until 1816.\n"
] |
What is the relationship between C-reactive proteins and inflammation with depression?
|
Some cytokines can cross/be actively transported across the blood brain barrier. There are also cytokine receptors that stimulate the vagus nerve, providing feedback to the brain. There was a study specifically investigating the use of an anti-inflammatory drug, infliximab, which antagonizes tumor necrosis factor alpha (TNF-alpha), in people with treatment resistant depression. What they found was that overall, infliximab was not more effective than placebo. However, in those patients with high levels of CRP at pre-treatment, infliximab was more effective than placebo, while in those patients with low CRP, infliximab was *less* effective than placebo. [Here's a picture of that.](_URL_1_) What's noteworthy is that infliximab is too big of a molecule to cross the blood brain barrier, so any direct effects it has happen in the body. [Here's the full text of the source article.](_URL_0_)
|
[
"Various review have found that general inflammation may play a role in depression. One meta analysis of cytokines in people with MDD found increased IL-6 and TNF-a levels relative to controls. The first theories came about when it was noticed that interferon therapy caused depression in a large number of people receiving it. Meta analysis on cytokine levels in people with MDD have demonstrated increased levels of IL-1, IL-6, C-reactive protein, but not IL-10. Increased numbers of T-Cells presenting activation markers, levels of neopterin, IFN gamma, sTNFR, and IL-2 receptors have been observed in depression. Various sources of inflammation in depressive illness have been hypothesized and include trauma, sleep problems, diet, smoking and obesity. Cytokines, by manipulating neurotransmitters, are involved in the generation of sickness behavior, which shares some overlap with the symptoms of depression. Neurotransmitters hypothesized to be affected include dopamine and serotonin, which are common targets for antidepressant drugs. Induction of indolamine-2,3 dioxygenease by cytokines has been proposed as a mechanism by which immune dysfunction causes depression. One review found normalization of cytokine levels after successful treatment of depression. A meta analysis published in 2014 found the use of anti-inflammatory drugs such as NSAIDs and investigational cytokine inhibitors reduced depressive symptoms.\n",
"Of the pathways linking the non-pathogenic stressors associated with depression to inflammation, inflammasome activation has been highlighted as one of the most promising. While major depression is associated with increased inflammasome activation in general, the NLRP3 inflammasome complex has received the most attention in relation to major depression due to both its role in triggering the release of interleukin-1β and interleukin-18 and its association with depression and depression-like symptoms in both humans and non-human animals.\n",
"Inflammation is also intimately linked with metabolic processes in humans. For example, low levels of Vitamin D have been associated with greater risk for depression. The role of metabolic biomarkers in depression is an active research area. Recent work has explored the potential relationship between plasma sterols and depressive symptom severity.\n",
"Compared to the link between external stressors and inflammation, the connection between peripheral inflammation and depression symptoms is better understood. This is due to cytokines being directly involved with inflammatory responses while also serving as a signal that can lead to changes in behavior.\n",
"One explanation that sees the connection between depression and inflammation as the result of adaptations is the Pathogen Host Defense Hypothesis (PATHOS-D), which proposes that depression is directly tied to immune responses. From this perspective, depression-like symptoms are thought to reduce energy consumption and reallocate resources so that one can mount a stronger immune defense, thereby reducing the organism's risk of death. In addition to this, both the reduction in activity and social withdrawal that often accompanies depression are also suggested to provide benefits by decrease one’s risk of encountering new pathogens or exposing kin or cooperative partners to one’s illness, although they are likely of secondary importance.\n",
"The role of inflammation and the immune system in depression has been extensively studied. The evidence supporting this link has been shown in numerous studies over the past ten years. Nationwide studies and meta-analyses of smaller cohort studies have uncovered a correlation between pre-existing inflammatory conditions such as type 1 diabetes, rheumatoid arthritis (RA), or hepatitis, and an increased risk of depression. Data also shows that using pro-inflammatory agents in the treatment of diseases like melanoma can lead to depression. Several meta-analytical studies have found increased levels of proinflammatory cytokines and chemokines in depressed patients. This link has led scientists to investigate the effects of antidepressants on the immune system.\n",
"In addition there is increasing evidence that inflammation can cause depression because of the increase of cytokines, setting the brain into a \"sickness mode\". Classical symptoms of being physically sick like lethargy show a large overlap in behaviors that characterize depression. Levels of cytokines tend to increase sharply during the depressive episodes of people with bipolar disorder and drop off during remission. Furthermore, it has been shown in clinical trials that anti-inflammatory medicines taken in addition to antidepressants not only significantly improves symptoms but also increases the proportion of subjects positively responding to treatment.\n"
] |
What Slows a Computer Down?
|
This is a complicated question to answer.
First and foremost -- Did you upgrade OS versions in the meantime or are you running the same OS and exact same software as before?
If you upgraded the OS, then that could be part of the problem. Newer versions of Windows and OSX often are designed around newer computers. Older machines just can't keep up -- even if the OS is marketed as capable of running on older hardware. Sometimes newer OS versions fix and address the sins of previous versions, so they can get faster than older versions, but more often than not newer OS's are more "bloated". (Bloat is a general euphemism for bigger code that does more, fancier graphics and effects, etc -- all of it taking up resources at runtime and on the disk).
It depends on the exact OS release, basically. But the overall trend is towards newer OS = more of a resource hog.
Secondly, if you upgraded the installed programs in the meantime (via updates, etc.) they can also be resource hogs for the same reason as the OS -- they get bloated over time as the programmers add features and subfeatures, and no one complains because the software is assumed to run on newer machines that "can handle it".
Newer software assumes you are running a newer machine, so it takes up more CPU and RAM. Programmers sometimes don't bother to optimize their code when it runs "fast enough" on a newer machine. Or they allocate more memory than they need or use algorithms that are hungrier for resources.
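As a toy illustration of how much the choice of algorithm alone can matter (nothing vendor-specific, just a sketch): deduplicating a list against another list is quadratic, doing it with a set is roughly linear, and on an old or slow machine that difference is exactly what "bloat" feels like.

```python
import time

data = list(range(20_000))

# "Hungry" version: membership test against a list is O(n), so the loop is O(n^2).
start = time.perf_counter()
seen = []
for x in data:
    if x not in seen:
        seen.append(x)
slow = time.perf_counter() - start

# Leaner version: set membership is roughly O(1), so the loop is roughly O(n).
start = time.perf_counter()
seen = set()
for x in data:
    if x not in seen:
        seen.add(x)
fast = time.perf_counter() - start

print(f"list: {slow:.2f}s   set: {fast:.4f}s")
```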
Add to that the trend towards slower interpreted languages for more and more software (such as embedding JavaScript code or other scripting languages in applications to form part of the application logic, etc.).
Another factor could be that your computer's hard disk is fragmented (usually an issue on Windows -- less of an issue on OSX).
Another factor could be that you have malware/adware or other background programs running that you accumulated over time as you installed more and more hardware and software. Some driver packages or other software you may install like to install all sorts of services and daemons, systray icons, toolbars you don't use, etc. My mouse for example came with an annoying systray icon utility that was absolutely useless but took up RAM and CPU occasionally for no reason.
Yet another factor is that if your computer is old, its cooling may be faulty. Your fans may be spinning slower and/or dust may have accumulated as a sort of 'blanket' on your motherboard/logic board. If your computer is running hotter, certain processors (Intel's, for example) will purposely slow themselves down so they don't heat up as much. To you, this will look like a performance hit.
It could be any or all of the above factors, basically.
But the computer itself, at least in theory, doesn't "age" the way a person does. If it's kept clean inside and the fans are running, it should run just as fast 10 years down the line as it did the day you bought it, assuming the hardware hasn't gone faulty (read errors on the disk in particular can delay things) and the cooling is working right.
|
[
"It was possible to increase the speed of the computer by using POKE 65495,0 which accelerates the ROM-resident BASIC interpreter, but temporarily disables correct functioning of the cassette/printer ports. Manufacturing variances mean that not all Dragons are able to function at this higher speed, and use of this POKE can cause some units to crash or be unstable, though with no permanent damage. POKE 65494,0 returns the speed to normal. POKE 65497,0 pushes the speed yet higher but the display is lost until a slower speed is restored.\n",
"Adrian Kingsley-Hughes, writing for ZDNet, believes that the slow-down over time is due to loading too much software, loading duplicate software, installing too much free/trial/beta software, using old, outdated or incorrect drivers, installing new drivers without uninstalling the old ones and may also be due to malware and spyware.\n",
"Many slowdowns are experienced with the software, usually resulting from the slow USB connection between the computer and calculator. Unexplained errors sometimes occur with the software, preventing users from transferring programs over. One solution is to use the TI SendTo sub-application, which is more stable than the Device Explorer.\n",
"BULLET::::- Measures against \"slowdown\" (1.4) : \"Icy Tower\" 1.4 estimates the possibility that the player's computer was artificially slowed down and records results of this estimation in replay files. A standalone program named SDbuster (Slowdown Buster) was also created in 2007 to help detect slowed down replays, which calculates the possibility of a given replay being slowed down based on previously remembered differences between replays recorded in normal and reduced speed.\n",
"BULLET::::- a traditional CPU cannot \"go faster\" than the expected worst-case performance of the slowest stage/instruction/component. When an asynchronous CPU completes an operation more quickly than anticipated, the next stage can immediately begin processing the results, rather than waiting for synchronization with a central clock. An operation might finish faster than normal because of attributes of the data being processed (e.g., multiplication can be very fast when multiplying by 0 or 1, even when running code produced by a naive compiler), or because of the presence of a higher voltage or bus speed setting, or a lower ambient temperature, than 'normal' or expected.\n",
"A computer may seem to hang when in fact it is simply processing very slowly. This can be caused by too many programs running at once, not enough memory (RAM), or memory fragmentation, slow hardware access (especially to remote devices), slow system APIs, etc. It can also be caused by hidden programs which were installed surreptitiously, such as spyware.\n",
"Despite the seemingly greater complexity of the second example, it may actually run faster on modern CPUs because they use an instruction pipeline. By nature, any jump in the code causes a pipeline stall, which is a detriment to performance.\n"
] |
Are any mammals as sexually dimorphic as humans?
|
Male gorillas are over twice the size of female gorillas, probably the largest sexual dimorphism among primates. The big [silverbacks](_URL_0_) you see in zoos are all males. Big differences like this are also seen in orangutans, mandrills, baboons, proboscis monkeys, hamadryas.
Sperm whale males weigh about 3 times as much as females. Pretty much all pinnipeds (seals, sea lions) show huge sexual dimorphism, with males being much larger than females.
As for features other than size, it's probably because you're not used to distinguishing between members of other species. Humans are very much attuned to detecting small differences in the facial features of other humans. And not just other humans, we are even more finely attuned to detecting these differences in our own ethnicity or geographical neighborhood. I'm guessing a farmer or herdsman is better able to tell the sex of a domestic animal at a glance than the average person, or a vet, or dog or cat breeder, for example. But sexual dimorphism is very very common among mammals.
|
[
"The reduced degree of sexual dimorphism is primarily visible in the reduction of the male canine tooth relative to other ape species (except gibbons). Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only ape in which the female is intermittently fertile year round, and in which no special signals of fertility are produced by the body (such as genital swelling during estrus). Nonetheless humans retain a degree of sexual dimorphism in the distribution of body hair and subcutaneous fat, and in the overall size, males being around 25% larger than females. These changes taken together have been interpreted as a result of an increased emphasis on pair bonding as a possible solution to the requirement for increased parental investment due to the prolonged infancy of offspring.\n",
"Modern humans do not display the same degree of sexual dimorphism as \"Australopithecus\" appears to have. In modern populations, males are on average a mere 15% larger than females, while in \"Australopithecus\", males could be up to 50% larger than females. New research suggests, however, that australopithecines exhibited a lesser degree of sexual dimorphism than these figures suggest, but the issue is not settled.\n",
"According to Scott D. Sampson, if ceratopsids were to have sexual dimorphism modern ecological analogues suggest it would be in their mating signals like horns and frills. No convincing evidence for sexual dimorphism in body size or mating signals is known in ceratopsids, although was present in the more primitive ceratopsian \"Protoceratops andrewsi\" whose sexes were distinguishable based on frill and nasal prominence size. This is consistent with other known tetrapod groups where midsized animals tended to exhibit markedly more sexual dimorphism than larger ones. However, if there were sexually dimorphic traits they may have been soft tissue variations like colorations or dewlaps that would not have been preserved as fossils.\n",
"Sexual dimorphisms in animals are often associated with sexual selection—the competition between individuals of one sex to mate with the opposite sex. Antlers in male deer, for example, are used in combat between males to win reproductive access to female deer. In many cases the male of a species is larger than the female. Mammal species with extreme sexual size dimorphism tend to have highly polygynous mating systems—presumably due to selection for success in competition with other males—such as the elephant seals. Other examples demonstrate that it is the preference of females that drive sexual dimorphism, such as in the case of the stalk-eyed fly.\n",
"According to Scott D. Sampson, if ceratopsids were to exhibit sexual dimorphism, modern ecological analogues suggest it would be found in display structures, such as horns and frills. No convincing evidence for sexual dimorphism in body size or mating signals is known in ceratopsids, although there is evidence that the more primitive ceratopsian \"Protoceratops andrewsi\" possessed sexes that were distinguishable based on frill and nasal prominence size. This is consistent with other known tetrapod groups where midsized animals tend to exhibit markedly more sexual dimorphism than larger ones. However, it has been proposed that these differences can be better explained by intraspecific and ontogenic variation rather than sexual dimorphism. In addition, many sexually dimorphic traits that may have existed in ceratopsians include soft tissue variations such as coloration or dewlaps, which would be unlikely to have been preserved in the fossil record.\n",
"The reduced degree of sexual dimorphism in humans is visible primarily in the reduction of the male canine tooth relative to other ape species (except gibbons) and reduced brow ridges and general robustness of males. Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only hominoids in which the female is fertile year round and in which no special signals of fertility are produced by the body (such as genital swelling or overt changes in proceptivity during estrus).\n",
"Regarding sexual dimorphism, humans fall into an intermediate group with moderate sex differences in body size but relatively large testes. This is a typical pattern of primates where several males and females live together in a group and the male faces an intermediate number of challenges from other males compared to exclusive polygyny and monogamy but frequent sperm competition.\n"
] |
how do jets that are taxiing stop and start moving without revving their engines up or down?
|
To get moving again, they DO spin up their engines.... modern high bypass turbofans have ridiculous thrust; just bumping them up a little from idle is enough to get an airliner moving again. To stop, they have brakes. These brakes are ridiculously powerful, more than enough to stop an airliner moving along a taxiway. Pilots are just careful to use _enough_ brakes to slow the aircraft; if they were to stomp or lean on the brakes hard enough, people and improperly secured baggage would fly around the cabin.
In fact, one of the standard certification tests for a new airliner is a takeoff abort test, or a takeoff "reject". (This has nothing to do with your question, but it's super cool.) If the plane hasn't reached the critical V1 takeoff speed by a certain point of the runway, they're supposed to abort the takeoff. This means slamming the brakes on and engaging the engine reverse thrust. But to certify, the brakes alone have to be enough to bring the craft to a halt. [Usually this will leave the brake discs red-hot and more often than not pop a few tires due to the heat. It's quite spectacular.](_URL_2_)
[Here's a 777 doing such a test. The brakes are literally on fire.](_URL_0_)
[787-8 rejected takeoff with some good explanation](_URL_1_)
|
[
"When taxiing, aircraft travel slowly. This ensures that they can be stopped quickly and do not risk wheel damage on larger aircraft if they accidentally turn off the paved surface. Taxi speeds are typically .\n",
"An airplane uses taxiways to taxi from one place on an airport to another; for example, when moving from a hangar to the runway. The term \"taxiing\" is not used for the accelerating run along a runway prior to takeoff, or the decelerating run immediately after landing.\n",
"If an engine fails during taxiing or takeoff, the thrust yawing moment will force the aircraft to one side on the runway. If the airspeed is not high enough and hence, the rudder-generated side force is not powerful enough, the aircraft will deviate from the runway centerline and may even veer off the runway. The airspeed at which the aircraft, after engine failure, deviates 9.1 m from the runway centerline, despite using maximum rudder but without the use of nose wheel steering, is the minimum control speed on the ground (V).\n",
"Although many aircraft are capable of moving themselves backwards on the ground using reverse thrust (a procedure referred to as a \"powerback),\" the resulting jet blast or prop wash may cause damage to the terminal building or equipment. Engines close to the ground may also blow sand and debris forward and then suck it into the engine, causing damage to the engine. A pushback is therefore the preferred method to move the aircraft away from the gate.\n",
"To disengage from the maneuver, the pilot releases elevator and lets the plane drop into a nose dive, allowing the plane to gain speed. Once the stall speed is passed, the pilot can pull back on the stick to return to normal flight. Therefore, the pilot must ensure that there is sufficient altitude to recover from the stall when performing and exiting the maneuver.\n",
"Busy airports typically construct high-speed or rapid-exit taxiways to allow aircraft to leave the runway at higher speeds. This allows the aircraft to vacate the runway quicker, permitting another to land or take off in a shorter interval of time. This is usually accomplished by making the exiting taxiway longer, thus giving the aircraft more space in which to slow down, before the taxiways' upcoming intersection with another (perpendicular) taxiway, another runway, or the ramp/tarmac.\n",
"For taxiing and during the beginning of the take-off, aircraft are steered by a combination of rudder input as well as turning the nosewheel or tailwheel. At slow speeds the nosewheel or tailwheel has the most control authority, but as the speed increases the aerodynamic effects of the rudder increases, thereby making the rudder more and more important for yaw control. In some aircraft (mainly small aircraft) both of these mechanisms are controlled by the rudder pedals so there is no difference to the pilot. In other aircraft there is a special tiller controlling the wheel steering and the pedals control the rudder, and a limited amount of wheel steering (usually 5 degrees of nosewheel steering). For these aircraft the pilots stop using the tiller after lining up with the runway prior to take-off, and begin using it after landing before turning off the runway, to prevent over correcting with the sensitive tiller at high speeds. The pedals may also be used for small corrections while taxing in a straight line, or leading in or out of a turn, before applying the tiller, to keep the turn smooth.\n"
] |
Can fish see color? And if not, why are they so colorful?
|
I know for a fact that at least some fish do. Some fish have a trade-off feature where they have a red belly which females find attractive, but which also makes them more visible to predators. Some marine animals also get their color from their diet, so maybe it has something to do with that?
|
[
"Mesopelagic fish are adapted to a low-light environment. Many fish are black or red, because these colors appear dark due to the limited light penetration at depth. Some fish have rows of photophores, small light-producing organs, on their underside to mimic the surrounding environment. Other fish have mirrored bodies which are angled to reflect the surrounding ocean low-light colors and protect the fish from being seen, while another adaptation is countershading where fish have light colors on the ventral side and dark colors on the dorsal side.\n",
"To confirm that the red color is indeed the sign stimulus, researchers allowed male fish to be exposed to objects that were not fish themselves but had a similar coloring pattern to the males during breeding season. The same innate behaviors were exhibited toward objects with a red underside. Yet, when the male fish were approached by a similar looking fish painted all white, no elicit behavior was observed, confirming the color as the sign stimulus.\n",
"Wild fish exhibit strong colours only when agitated. Breeders have been able to make this coloration permanent, and a wide variety of hues breed true. Colours available to the aquarist include red, orange, yellow, blue, steel blue, turquoise/green, black, pastel, white (\"opaque\" white, not to be confused with albino) and multi-coloured fish.\n",
"The fish is maroon, with blue spot that fades to bright red. The color pattern helps it blend in with its natural environment. It grows to up to 24 in (60 cm) long. Most adult have blue mouths, while the young have bright red eyes.\n",
"Several physical characteristics distinguish this species from others that live in the region. The lack of pigmentation causes this fish to look pink in color; its blood and internal organs are visible through the scales. Several rows of teeth are accommodated by a big mouth and thick lips. Another factor that helps accommodate the quantity of teeth is the fact that the lower jaws protrude further than the upper jaws. \"G. ankaranensis\" is not light sensitive because little to no sunlight reaches the waters in which this species lives. The lengths of these fish typically vary from about , and they move around slowly with their mouths closed.\n",
"The color is probably the most diagnostic feature of the fish, especially when alive or fresh from the water. The back and sides of the fish are bright yellow, with the lower sides and underside of head fading to white. Four bright-blue stripes run longitudinally on the side of the fish, with several faint greyish stripes on lowermost part of sides. Most fins are yellow.\n",
"Bony fishes living in shallow water generally have good color vision due to their living in a colorful environment. Thus, in shallow-water fishes, red, orange, and green fluorescence most likely serves as a means of communication with conspecifics, especially given the great phenotypic variance of the phenomenon.\n"
] |
How did the heavier metals on Earth end up in the Earth's crust and not all towards the Core?
|
Here's a [recent post where I answered a very similar question](_URL_0_). Basically it comes down to two things: solubility in different materials (silicates versus metals, which is why there are Uranium ores on the surface of Earth) and meteor bombardment during the early history of the solar system (which is why there's still some Gold, Platinum, Iridium, etc. in the crust).
|
[
"In early stages of Earth's formation about 4.6 billion years ago, melting would have caused denser substances to sink toward the center in a process called planetary differentiation (see also the iron catastrophe), while less-dense materials would have migrated to the crust. The core is thus believed to largely be composed of iron (80%), along with nickel and one or more light elements, whereas other dense elements, such as lead and uranium, either are too rare to be significant or tend to bind to lighter elements and thus remain in the crust (see felsic materials). Some have argued that the inner core may be in the form of a single iron crystal.\n",
"The proto-Earth grew by accretion until its interior was hot enough to melt the heavy, siderophile metals. Having higher densities than the silicates, these metals sank. This so-called \"iron catastrophe\" resulted in the separation of a primitive mantle and a (metallic) core only 10 million years after the Earth began to form, producing the layered structure of Earth and setting up the formation of Earth's magnetic field. J.A. Jacobs was the first to suggest that Earth's inner core—a solid center distinct from the liquid outer core—is freezing and growing out of the liquid outer core due to the gradual cooling of Earth's interior (about 100 degrees Celsius per billion years).\n",
"The Earth's crust is made of approximately 5% of heavy metals by weight, with iron comprising 95% of this quantity. Light metals (~20%) and nonmetals (~75%) make up the other 95% of the crust. Despite their overall scarcity, heavy metals can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes.\n",
"Concentrations of heavy metals below the crust are generally higher, with most being found in the largely iron-silicon-nickel core. Platinum, for example, comprises approximately 1 part per billion of the crust whereas its concentration in the core is thought to be nearly 6,000 times higher. Recent speculation suggests that uranium (and thorium) in the core may generate a substantial amount of the heat that drives plate tectonics and (ultimately) sustains the Earth's magnetic field.\n",
"The growth of the inner core may be expected to consume most of the outer core by some 3–4 billion years from now, resulting in a nearly solid core composed of iron and other heavy elements. The surviving liquid envelope will mainly consist of lighter elements that will undergo less mixing. Alternatively, if at some point plate tectonics comes to an end, the interior will cool less efficiently, which may end the growth of the inner core. In either case, this can result in the loss of the magnetic dynamo. Without a functioning dynamo, the magnetic field of the Earth will decay in a geologically short time period of roughly 10,000 years. The loss of the magnetosphere will cause an increase in erosion of light elements, particularly hydrogen, from the Earth's outer atmosphere into space, resulting in less favorable conditions for life.\n",
"The Earth's crust is made of approximately 25% of metals by weight, of which 80% are light metals such as sodium, magnesium, and aluminium. Nonmetals (~75%) make up the rest of the crust. Despite the overall scarcity of some heavier metals such as copper, they can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes.\n",
"On Earth, a large piece of molten iron is sufficiently denser than continental crust material to force its way down through the crust to the mantle. In the outer Solar System a similar process may take place but with lighter materials: they may be hydrocarbons such as methane, water as liquid or ice, or frozen carbon dioxide.\n"
] |
How did the Allies supply their armies in France in WWII in 1944 and 1945?
|
Logistics were always a key factor in the planning of Overlord. Prior experience showed that capturing ports was difficult as they were a natural focus for defensive efforts, and once captured extensive work would likely be needed to repair sabotage and demolitions carried out by the defenders. Supplies would therefore have to come over the beaches initially, assisted by the artificial Mulberry harbours, until sufficient ports could be taken and cleared. An initial plan was for US forces to have Cherbourg operating by D+11, with a push into Brittany to take Brest and construct a new facility in Quiberon Bay around D+54. (Figures from *Logistical Support of the Armies: May 1941 - September 1944*, Roland G. Ruppenthal).
As it was Cherbourg only fell at the end of June, and rather than three days it took three weeks for the port to be cleared; Col. Alvin G. Viney described the damage done to the port as "... a masterful job, beyond a doubt the most complete, intensive, and best-planned demolition in history." (*Cross-Channel Attack*, Gordon A. Harrison). The majority of supplies therefore came over the beaches until August when Cherbourg was fully operational, some minor Normandy ports were opened, and Operation Dragoon started to make southern French ports available. The beaches remained in use, though with less traffic as weather worsened, and as the Allies pushed east along the channel coast heavily fortified ports such as Le Havre and Rouen were besieged, captured and repaired.
After slow initial progress that lagged behind estimates, the breakout from Normandy happened far quicker than expected; by mid-September, about three months into the campaign, Allied forces were reaching objectives they were only planning to capture after a year. Antwerp was captured at the start of September with its docks intact but could not be utilised until the Scheldt estuary had been cleared, which only happened in November, Market Garden proving something of a distraction in the meantime. Ports in Brittany were scarcely used, with Brest heavily damaged and the planned facility in Quiberon Bay not built; by 1945 Antwerp and the Southern French ports were handling about half the supplies being landed, the rest coming into Cherbourg, Le Havre, Rouen and Ghent (Figures from *Logistical Support of the Armies: September 1944 - May 1945*, Roland G. Ruppenthal).
Of course the supplies had to get to the front line after being landed, and the unexpectedly rapid advance caused major logistical headaches. The French railway system had been heavily targeted by the Allied air forces in the run-up to Overlord to prevent German reinforcements being rapidly deployed, and though plans were in place to reconstruct it these could not keep up with the speed of advance. Improvisation was therefore required, primarily in the form of truck convoys; the most famous route for these was the Red Ball Express from Cherbourg, though others including the White Ball from Le Havre and the ABC from Antwerp were also established.
For further reading, Ruppenthal's *Logistical Support of the Armies* is available online ([Volume I](_URL_1_) and [Volume II](_URL_0_)), the planning and execution of Overlord being a major theme.
|
[
"Conducted strategic bombardment of Axis targets in Europe. Between 29 August 1944 and 2 October 1944 division aircraft dropped food to the French population in liberated areas. It also airdropped food, equipment, and supplies to Allied forces engaged in the airborne attack on the Netherlands (September 1944), as well as troops engaged in the assault across the Rhine River (March 1945). \n",
"The war in Europe involved aid to Britain, her allies, and the Soviet Union, with the U.S. supplying munitions until it could ready an invasion force. U.S. forces were first tested to a limited degree in the North African Campaign and then employed more significantly with British Forces in Italy in 1943–45, where U.S. forces, representing about a third of the Allied forces deployed, bogged down after Italy surrendered and the Germans took over. Finally the main invasion of France took place in June 1944, under General Dwight D. Eisenhower. Meanwhile, the U.S. Army Air Forces and the British Royal Air Force engaged in the area bombardment of German cities and systematically targeted German transportation links and synthetic oil plants, as it knocked out what was left of the Luftwaffe post Battle of Britain in 1944. Being invaded from all sides, it became clear that Germany would lose the war. Berlin fell to the Soviets in May 1945, and with Adolf Hitler dead, the Germans surrendered.\n",
"During the First World War, facing the increased use of mechanized warfare, the French armed forces needed to set up a new network for fuel supply. It was then composed of a service to stock and supply the fuel, and a transport service automobile to deliver it to the end users. At the same time, a wider service to provide petrol, oils and lubricants was created. After the war, from July 12 1920 the munitions service resumed the sourcing and stockpiling role, and the artillery the distribution role. Then on November 25 1940 - during the Vichy regime, these functions were combined into a single body: it received the name \"Military fuel service\" (SEA), which it still bears.\n",
"In summer 1941 the British appealed to Americans to conserve food to provide more to go to Britons fighting in the Second World War. The Office of Price Administration warned Americans of potential gasoline, steel, aluminum and electricity shortages. It believed that with factories converting to military production and consuming many critical supplies, rationing would become necessary if the country entered the war. It established a rationing system after the attack on Pearl Harbor. In June 1942 the Combined Food Board was set up to coordinate the worldwide supply of food to the Allies, with special attention to flows from the U.S. and Canada to Britain.\n",
"The Allied oil campaign of World War II pitted the RAF and the USAAF against facilities supplying Nazi Germany with petroleum, oil, and lubrication (POL) products. It formed part of the immense Allied strategic bombing effort during the war. The targets in Germany and in \"Axis Europe\" included refineries for natural oil, factories producing synthetic fuel, storage depots, and other POL-infrastructure resources.\n",
"In May 1945, by the end of the war in Europe, the Free French forces comprised 1,300,000 personnel, and included around forty divisions making it the fourth largest Allied army in Europe behind the Soviet Union, the US and Britain. The GPRF sent an expeditionary force to the Pacific to retake French Indochina from the Japanese, but Japan surrendered before they could arrive in theatre.\n",
"In September 1944, the group sent planes and pilots to England to provide cover for Operation Market-Garden, the allied airborne assault on the Netherlands and Germany. The P-38s of the group struck pillboxes and troops early in October to aid First Army's capture of Aachen, and afterward struck railroads, bridges, viaducts, and tunnels in that area.\n"
] |
Was there any study of economics pre-consumerism?
|
Yes.
Consumerism is generally linked to the rise of industrial production and wasn't a phenomenon (at least outside the upper class) until the late 19th century. Before then you had such figures as Adam Smith, David Hume, Ricardo, Marx, Quesnay, Colbert, etc., all writing on economics. Adam Smith is considered the defining founding father of modern economics, and industrial-era economics based many of its premises on the works of Smith and Ricardo.
|
[
"In the late 20th century, areas of study that produced change in economic thinking were: risk-based (rather than price-based models), imperfect economic actors, and treating economics as a biological science (based on evolutionary norms rather than abstract exchange).\n",
"Consumer economics concludes the family-unit economists were strongly influenced by the most recent \"consumer era\"; which was the \"Modern Consumer Movement\" of the 1970s. The connection between Consumer Economics and consumer-related politics has been overt, although the strength of the connection varies between Universities and individuals.\n",
"Traditionally, the subject matter taught in Consumer Education would be found under the label Home Economics. Beginning in the late 20th Century, however, with the rise of Consumerism, the need for an individual to manage a budget, make informed purchases, and save for the future have become paramount. The outcomes of consumer education include not only the improved understanding of consumer goods and services but also increased awareness of the consumer's rights in the consumer market and better capability to take actions to improve consumer well-being.\n",
"Her earlier work focused on the role of markets in economic development in Europe. Her paper \"The evolution of markets in early modern Europe, 1350–1800: a study of wheat prices\" uses data on European wheat prices to study trends in market development from the early medieval period to the industrial revolution, demonstrating that markets were as well-integrated across Europe in the early 16th century as they were in the late 19th century. Her book \"Markets and growth in early modern Europe\" builds on this research, examining several aspects of the relationship between market integration and economic development.\n",
"The impetus for the separation of marketing and economics was due, at least in part, to economic's focus on production as the creator of economic value and general failure to investigate distribution. In the late 19th century and early 20th century, as markets became more globalised, distribution began to assume increasing importance. Some economics professors began to run courses examining various aspects of the marketing system, including \"distributive and regulative systems.\" Other courses, such as the \"marketing of products\" and the \"marketing of farm-products\" followed. As the first decades of the 20th century progressed, books and articles concerning marketing topics began to emerge. In 1936, the publication of the new \"Journal of Marketing\" gave marketing academics a forum for exchanging ideas and research methods and also gave the discipline a real sense of its own distinct identity as a maturing academic discipline.\n",
"The origins of consumer capitalism are found in the development of American department stores from the mid 19th Century, notably the advertising and marketing innovations at Wanamaker's in Philadelphia. Author William Leach describes a deliberate, coordinated effort among American 'captains of industry' to detach consumer demand from 'needs' (which can be satisfied) to 'wants' (which may remain unsatisfied). This cultural shift represented by the department store is also explored in Émile Zola's 1883 novel \"Au Bonheur des Dames\", which describes the workings and the appeal of a fictionalized version of Le Bon Marché.\n",
"Consumerism is a social and economic order that encourages the acquisition of goods and services in ever-increasing amounts. With the industrial revolution, but particularly in the 20th century, mass production led to overproduction—the supply of goods would grow beyond consumer demand, and so manufacturers turned to planned obsolescence and advertising to manipulate consumer spending. In 1899, a book on consumerism published by Thorstein Veblen, called \"The Theory of the Leisure Class\", examined the widespread values and economic institutions emerging along with the widespread \"leisure time\" in the beginning of the 20th century. In it Veblen \"views the activities and spending habits of this leisure class in terms of conspicuous and vicarious consumption and waste. Both are related to the display of status and not to functionality or usefulness.\"\n"
] |
What properties of charcoal cause it to be so useful in absorbing toxic compounds?
|
Can anyone actually explain this, though? Yes, it becomes more porous; yes, it has active binding sites. But what is actually occurring here? Are particulates getting trapped? Are aldehyde/ketone groups protonating with particulates? Or what is the actual chemical mechanism of this?
|
[
"Activated charcoal is used to treat many types of oral poisonings such as phenobarbital and carbamazepine. It is not effective for a number of poisonings including: strong acids or bases, iron, lithium, arsenic, methanol, ethanol or ethylene glycol.\n",
"Activated carbon is used to treat poisonings and overdoses following oral ingestion. Tablets or capsules of activated carbon are used in many countries as an over-the-counter drug to treat diarrhea, indigestion, and flatulence. However, activated charcoal shows no effect of intestinal gas and diarrhea, and is, ordinarily, medically ineffective if poisoning resulted from ingestion of corrosive agents such as alkalis and strong acids, iron, boric acid, lithium, petroleum products, or alcohol. Activated carbon will not prevent these chemicals from being absorbed into the human body.\n",
"Cyanide compounds occur in small amounts in the natural environment and in cigarette smoke. They are also used in several industrial processes and as pesticides. Cyanides are released when synthetic fabrics or polyurethane burn, and may thus contribute to fire-related deaths. Arsine gas, formed when arsenic encounters an acid, is used as a pesticide and in the semiconductor industry; most exposures to it occur accidentally in the workplace.\n",
"In conjunction with magnesium and sometimes activated charcoal, tannic acid was once used as a treatment for many toxic substances, such as strychnine, mushroom, and ptomaine poisonings in the late 19th and early 20th centuries.\n",
"The primary risk associated with epoxy use is often related to the hardener component and not to the epoxy resin itself. Amine hardeners in particular are generally corrosive, but may also be classed as toxic or carcinogenic/mutagenic. Aromatic amines present a particular health hazard (most are known or suspected carcinogens), but their use is now restricted to specific industrial applications, and safer aliphatic or cycloaliphatic amines are commonly employed.\n",
"Active charcoal carbon filters are most effective at removing chlorine, particles such as sediment, volatile organic compounds (VOCs), taste and odor from water. They are not effective at removing minerals, salts, and dissolved inorganic substances.\n",
"Hexavalent chromium compounds (including chromium trioxide, chromic acids, chromates, chlorochromates) are toxic and carcinogenic. For this reason, chromic acid oxidation is not used on an industrial scale except in the aerospace industry.\n"
] |
Are neutrinos really faster than light?
|
Because photons are light. To pass from the source to the detector, the neutrinos travel straight through the Earth, and light won't do that.
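As a rough sanity check (not part of the original answer; it assumes the roughly 730 km CERN-to-Gran Sasso baseline and takes the 60.7 ns figure from the OPERA passages quoted below), the claimed early arrival was only a tiny fraction of the light travel time over that distance:

$$ t_{\text{light}} = \frac{L}{c} \approx \frac{7.3\times 10^{5}\,\text{m}}{3.0\times 10^{8}\,\text{m/s}} \approx 2.4\,\text{ms}, \qquad \frac{\Delta t}{t_{\text{light}}} \approx \frac{60.7\,\text{ns}}{2.4\,\text{ms}} \approx 2.5\times 10^{-5} \approx \frac{1}{40\,000} $$

This matches the "one part per 40,000" quoted in the reports below, and shows why a subtle timing error (ultimately traced to a loose fibre optic cable) was enough to account for the whole effect.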
|
[
"Neutrino speeds \"consistent\" with the speed of light are expected given the limited accuracy of experiments to date. Neutrinos have small but nonzero mass, and so special relativity predicts that they must propagate at speeds slower than light. Nonetheless, known neutrino production processes impart energies far higher than the neutrino mass scale, and so almost all neutrinos are ultrarelativistic, propagating at speeds very close to that of light.\n",
"BULLET::::- An international team of scientists at CERN records neutrino particles apparently traveling faster than the speed of light. If confirmed, the discovery would overturn Albert Einstein's 1905 special theory of relativity, which says that nothing can travel faster than light. (BBC) (ArXiv)\n",
"In September 2011, OPERA researchers observed muon neutrinos apparently traveling faster than the speed of light. In February and March 2012, OPERA researchers blamed this result on a loose fibre optic cable connecting a GPS receiver to an electronic card in a computer. On 16 March 2012, a report announced that an independent experiment in the same laboratory, also using the CNGS neutrino beam, but this time the ICARUS detector, found no discernible difference between the speed of a neutrino and the speed of light. In May 2012, the Gran Sasso experiments BOREXINO, ICARUS, LVD and OPERA all measured neutrino velocity with a short-pulsed beam, and obtained agreement with the speed of light, showing that the original OPERA result was mistaken. Finally in July 2012, the OPERA collaboration updated their results. After the instrumental effects mentioned above were taken into account, it was shown that the speed of neutrinos is consistent with the speed of light. This was confirmed by a new, improved set of measurements in May 2013.\n",
"In a analysis of their data, scientists of the OPERA collaboration reported evidence that neutrinos they produced at CERN in Geneva and recorded at the OPERA detector at Gran Sasso, Italy, had traveled faster than light. The neutrinos were calculated to have arrived approximately 60.7 nanoseconds (60.7 billionths of a second) sooner than light would have if traversing the same distance in a vacuum. After six months of cross checking, on , the researchers announced that neutrinos had been observed traveling at faster-than-light speed. Similar results were obtained using higher-energy (28 GeV) neutrinos, which were observed to check if neutrinos' velocity depended on their energy. The particles were measured arriving at the detector faster than light by approximately one part per 40,000, with a 0.2-in-a-million chance of the result being a false positive, \"assuming\" the error were entirely due to random effects (significance of six sigma). This measure included estimates for both errors in measuring and errors from the statistical procedure used. It was, however, a measure of precision, not accuracy, which could be influenced by elements such as incorrect computations or wrong readouts of instruments. For particle physics experiments involving collision data, the standard for a discovery announcement is a five-sigma error limit, looser than the observed six-sigma limit.\n",
"In the 2011 Faster-than-light neutrino anomaly, the OPERA collaboration published results which appeared to show that the speed of neutrinos is slightly faster than the speed of light. However, sources of errors were found and confirmed in 2012 by the OPERA collaboration, which fully explained the initial results. In their final publication, a neutrino speed consistent with the speed of light was stated. Also subsequent experiments found agreement with the speed of light, see measurements of neutrino speed.\n",
"BULLET::::- \"Faster Than the Speed of Light?\" (BBC 2, 2011). Marcus du Sautoy discusses the recent discovery, the faster-than-light neutrino anomaly, that neutrinos may travel faster than light. First broadcast on 19 October 2011.\n",
"In 2011, the OPERA experiment mistakenly observed neutrinos appearing to travel faster than light. Even before the mistake was discovered, the result was considered anomalous because speeds higher than that of light in a vacuum are generally thought to violate special relativity, a cornerstone of the modern understanding of physics for over a century.\n"
] |
why are there patterns and fractals in nature?
|
> Why are there patterns and fractals in nature?
Patterns and fractals are just the large scale result of simple repeating behaviors. Suppose you have a stem that will grow for a bit and then split, then those stems grow for a bit and split, etc. You end up with a branching pattern from simple base behaviors.
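Here is a minimal sketch (not from the original comment; the split angle, shrink factor, and recursion depth are arbitrary illustration values) of exactly that "grow a bit, then split" rule, showing how one simple repeated behavior produces a self-similar branching pattern:

```python
import math

def branch(x, y, angle_deg, length, depth, segments):
    """Grow one stem, then recursively split it into two shorter stems."""
    if depth == 0:
        return
    # End point of this stem.
    x2 = x + length * math.cos(math.radians(angle_deg))
    y2 = y + length * math.sin(math.radians(angle_deg))
    segments.append(((x, y), (x2, y2)))
    # Apply the exact same rule to two smaller copies of the stem.
    # This repetition is all it takes to get a fractal-like branching shape.
    branch(x2, y2, angle_deg - 25, length * 0.7, depth - 1, segments)
    branch(x2, y2, angle_deg + 25, length * 0.7, depth - 1, segments)

segments = []
branch(0.0, 0.0, 90.0, 10.0, depth=8, segments=segments)
print(f"{len(segments)} branch segments generated by one simple repeated rule")
```

Plotting those segments gives a tree-like figure in which every sub-branch is a smaller copy of the whole, which is the same self-similarity the passages below describe for ferns and tree shapes.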
> Is math based off of nature?
Sort of; in the most simplistic sense, it is a way to model reality. People start counting stones, and math adopts the behavior that things don't just spontaneously appear or vanish. If you pick up one rock and then pick up another rock, you will have "two" rocks. At this point of abstraction, the system takes off, behaving with internally consistent rules which yield results consistent with reality (in many cases).
So while the internally consistent rules can yield things which have no real counterpart (such as imaginary numbers), the application of those rules can allow the deduction of behaviors of the universe which are not immediately apparent via observation. This is again based on the basic observation that the universe behaves according to internally consistent rules and that the fundamental rules of mathematics are based on easily observed behaviors of the universe.
|
[
"Some mathematical rule-patterns can be visualised, and among these are those that explain patterns in nature including the mathematics of symmetry, waves, meanders, and fractals. Fractals are mathematical patterns that are scale invariant. This means that the shape of the pattern does not depend on how closely you look at it. Self-similarity is found in fractals. Examples of natural fractals are coast lines and tree shapes, which repeat their shape regardless of what magnification you view at. While self-similar patterns can appear indefinitely complex, the rules needed to describe or produce their formation can be simple (e.g. Lindenmayer systems describing tree shapes).\n",
"Fractal-like patterns occur widely in nature, in phenomena as diverse as clouds, river networks, geologic fault lines, mountains, coastlines, animal coloration, snow flakes, crystals, blood vessel branching, actin cytoskeleton, and ocean waves.\n",
"Fractal-like patterns work because the human visual system efficiently discriminates images which have different fractal dimension or other second-order statistics like Fourier spatial amplitude spectra; objects simply appear to pop out from the background. Timothy O'Neill helped the Marine Corps to develop first a digital pattern for vehicles, then fabric for uniforms, which had two colour schemes, one designed for woodland, one for desert.\n",
"Because fractals can generate the appearance of patterns in nature, they have a beauty and familiarity not typically seen with mathematically generated functions. Fractals have also found a place in computer-generated movie effects, where their ability to create complex curves with fractal symmetries results in more realistic virtual worlds.\n",
"Fractals are also found in human pursuits, such as music, painting, architecture, and stock market prices. Mandelbrot believed that fractals, far from being unnatural, were in many ways more intuitive and natural than the artificially smooth objects of traditional Euclidean geometry: Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line. —Mandelbrot, in his introduction to \"The Fractal Geometry of Nature\"\n",
"Fractal patterns have been reconstructed in physical 3-dimensional space and virtually, often called \"in silico\" modeling. Models of fractals are generally created using fractal-generating software that implements techniques such as those outlined above. As one illustration, trees, ferns, cells of the nervous system, blood and lung vasculature, and other branching patterns in nature can be modeled on a computer by using recursive algorithms and L-systems techniques. The recursive nature of some patterns is obvious in certain examples—a branch from a tree or a frond from a fern is a miniature replica of the whole: not identical, but similar in nature. Similarly, random fractals have been used to describe/create many highly irregular real-world objects. A limitation of modeling fractals is that resemblance of a fractal model to a natural phenomenon does not prove that the phenomenon being modeled is formed by a process similar to the modeling algorithms.\n",
"Wolfram briefly describes fractals as a form of geometric repetition, \"in which smaller and smaller copies of a pattern are successively nested inside each other, so that the same intricate shapes appear no matter how much you zoom in to the whole. Fern leaves and Romanesco broccoli are two examples from nature.\" He points out an unexpected conclusion:\n"
] |
What was President William McKinley's reasoning for his views on the issue of the annexation of the Philippines?
|
My understanding is he was sort of painted into a geopolitical corner. He hadn't really intended on taking the Philippines, but now that he had them he couldn't give them to anyone else (because they'd just use them as a base for competition in China), couldn't give them back to Spain (because we had just beat the pants off of them and it would seem like a really pussy thing to do), and couldn't give them independence (because he thought they were a bunch of ignorant savages who couldn't govern themselves). Plus at that time period pretty much any island in the Pacific was useful as a naval coaling station and storehouse for supplies, much less somewhere like the Philippines where there was the potential for a functional colony rather than just a lagoon and a beach to pile stuff on.
|
[
"A controversial aspect of McKinley's presidency is territorial expansion and the question of imperialism—with the exception of the Philippines, granted independence in 1946, the United States retains the territories taken under McKinley. The territorial expansion of 1898 is often seen by historians as the beginning of American empire. Morgan sees that historical discussion as a subset of the debate over the rise of America as a world power; he expects the debate over McKinley's actions to continue indefinitely without resolution, and notes that however one judges McKinley's actions in American expansion, one of his motivations was to change the lives of Filipinos and Cubans for the better.\n",
"A controversial aspect of McKinley's presidency is territorial expansion and the question of imperialism. The U.S. set Cuba free and granted independence to the Philippines in 1946. Puerto Rico remains in an ambiguous status. Hawaii is a state; Guam remains a territory. The territorial expansion of 1898 was the high water mark of American imperialism. Morgan sees that historical discussion as a subset of the debate over the rise of America as a world power; he expects the debate over McKinley's actions to continue indefinitely without resolution, and notes that however one judges McKinley's actions in American expansion, one of his motivations was to change the lives of Filipinos and Cubans for the better.\n",
"McKinley's cabinet agreed with him that Spain must leave Cuba and Puerto Rico, but they disagreed on the Philippines, with some wishing to annex the entire archipelago and some wishing only to retain a naval base in the area. Although public sentiment seemed to favor annexation of the Philippines, several prominent political leaders—including Democrats Bryan, and Cleveland, and the newly formed American Anti-Imperialist League—made their opposition known.\n",
"Rapid economic growth marked McKinley's presidency. He promoted the 1897 Dingley Tariff to protect manufacturers and factory workers from foreign competition and in 1900 secured the passage of the Gold Standard Act. McKinley hoped to persuade Spain to grant independence to rebellious Cuba without conflict, but when negotiation failed he led the nation into the Spanish-American War of 1898. The United States victory was quick and decisive. As part of the peace settlement, Spain turned over to the United States its main overseas colonies of Puerto Rico, Guam and the Philippines while Cuba was promised independence, but at that time remained under the control of the United States Army. The United States annexed the independent Republic of Hawaii in 1898 and it became a United States territory.\n",
"During the war, McKinley also pursued the annexation of the Republic of Hawaii. The new republic, dominated by business interests, had overthrown the Queen in 1893 when she rejected a limited role for herself. There was strong American support for annexation, and the need for Pacific bases in wartime became clear after the Battle of Manila. McKinley came to office as a supporter of annexation, and lobbied Congress to act, warning that to do nothing would invite a royalist counter-revolution or a Japanese takeover. Foreseeing difficulty in getting two-thirds of the Senate to approve a treaty of annexation, McKinley instead supported the effort of Democratic Representative Francis G. Newlands of Nevada to accomplish the result by joint resolution of both houses of Congress. The resulting Newlands Resolution passed both houses by wide margins, and McKinley signed it into law on July 8, 1898. McKinley biographer H. Wayne Morgan notes, \"McKinley was the guiding spirit behind the annexation of Hawaii, showing ... a firmness in pursuing it\"; the President told Cortelyou, \"We need Hawaii just as much and a good deal more than we did California. It is manifest destiny.\"\n",
"McKinley refused to recognize the native Filipino government of Emilio Aguinaldo, and relations between the United States and the Aguinaldo's supporters deteriorated after the conclusion of the Spanish–American War. McKinley believed that Aguinaldo represented just a small minority of the Filipino populace, and that benevolent American rule would lead to a peaceful occupation. In February 1899, Filipino and American forces clashed at the Battle of Manila, marking the start of the Philippine–American War. The fighting in the Philippines engendered increasingly vocal criticism from the domestic anti-imperialist movement, as did the continued deployment of volunteer regiments. Under General Elwell Stephen Otis, U.S. forces destroyed the rebel Filipino army, but Aguinaldo turned to guerrilla tactics. McKinley sent a commission led by William Howard Taft to establish a civilian government, and McKinley later appointed Taft as the civilian governor of the Philippines. The Filipino insurgency subsided with the capture of Aguinaldo in March 1901, and largely ended with the capture of Miguel Malvar in 1902. \n",
"During the campaign, McKinley and the Republicans criticized Bryan's adherence support of free silver, claimed credit for the nation's economic recovery from Panic of 1893, called for lower taxes, a larger merchant marine, and an interoceanic canal in Central America. In addition, McKinley argued that trusts were \"dangerous conspiracies against the public good and should be made the subject of prohibitory or penal legislation.\" Also, McKinley and the Republicans rejected both immediate independence for the Philippines and Bryan's idea of a protectorate for them, claiming that a Philippine protectorate would leave the U.S. responsible for the Philippines without the authority to meet its obligations.\n"
] |
Good books/movies/documentaries/websites/podcasts about Roman British history
|
British History Podcast.
|
[
"The History of Byzantium podcast by Robin Pierson is explicitly modelled after The History of Rome in style, length and quality; Pierson intended the podcast as a sequel to The History of Rome in order to complete the story. David Crowther of The History of England podcast has mentioned Duncan as an influence. as has Peter Adamson of the podcast: The History of Philosophy without any Gaps. Isaac Meyer of the History of Japan podcast has mentioned in a few episodes that The History of Rome podcast inspired the \"A day in the life of...\" episodes.\n",
"Mike Duncan began \"The History of Rome\" in 2007, after failing to find any good podcasts about ancient history. The project turned into an award-winning weekly podcast which aired for 179 episodes until 2012 and was downloaded more than 100 million times.\n",
"Allason-Jones has an extensive publication record on the material culture of Roman Britain and has been involved in the research of archaeological discoveries such as the Rudge Cup, the Corbridge Hoard, and Coventina's Well. She has appeared in several TV programmes on historical themes, including \"Time Team\" (1996-2000), \"Timewatch\" (2007), \"History Cold Case\" (2011) and \"Walking Through History\" (2014), as well as being the historical advisor on the 2011 film \"The Eagle\".\n",
"\"The History of Rome\" aired between 2007 and 2012 and covered Roman history from its legendary founding to the fall of the Western Roman Empire. \"The History of Rome\" won best educational podcast at the 2010 podcast awards, and was listed among the best podcasts of 2015 by Apple. \"\"The Storm Before the Storm\"\" entered the New York Times Bestseller list Hardcover Non-Fiction at the eighth place in November 2017.\n",
"It covers the history of England from the time of the Roman occupation until Queen Victoria's death, using a mixture of traditional history and mythology to explain the story of British history in a way accessible to younger readers.\n",
"Guy Martyn Thorold Huchet de la Bédoyère (born November 1957) is a British historian, who has published widely on Roman Britain and other subjects; and has appeared regularly on the Channel 4 archaeological television series \"Time Team\", starting in 1998. \n",
"More recently British dramatist Howard Brenton has written several histories. He gained notoriety for his play \"The Romans in Britain\", first staged at the National Theatre in October 1980, which drew parallels between the Roman invasion of Britain in 54BC and the contemporary British military presence in Northern Ireland. Its concerns with politics were, however, overshadowed by controversy surrounding a rape scene. Brenton also wrote \"Anne Boleyn\" a play on the life of Anne Boleyn, which premiered at Shakespeare's Globe in 2010. Anne Boleyn is portrayed as a significant force in the political and religious in-fighting at court and a furtherer of the cause of Protestantism in her enthusiasm for the Tyndale Bible.\n"
] |
why do the ends of escalators and moving walkways have the blue or green light that shines through the cracks?
|
I may be wrong, but I think it's a light from a sensor that stops the escalator, moving sidewalk, etc. when it detects that something is caught in the treads, e.g. a pant leg or a shoelace.
|
[
"Multi-coloured spherical lights in the trees were installed in 2005 by the Elephant Impacts project. The project has repainted and added feature lighting to a number of bridges and buildings in the area, including the adjoining railway bridges on Walworth Road and Newington Causeway, and to London College of Communication and the Metropolitan Tabernacle. Proposed feature lighting at Metro Central Heights was abandoned when residents feared it would cause light pollution.\n",
"When the Green Building was first opened, the isolated prominence of the building and its relative proximity to the Charles River basin increased wind speeds in the high open archway at its base, preventing people from entering or leaving the building through the hinged main doors on windy days, necessitating use of a tunnel connecting to the other buildings. Large wood panels were temporarily erected in the open concourse to block the wind, and revolving doors were later installed at the ground floor entries to amend this problem somewhat. Several windows cracked, and at least one large pane popped out on upper stories, at least in part due to the effects of wind, eventually requiring all the windows to be replaced. A few years later, a similar-appearing problem was repeated in Boston's John Hancock Tower located in Back Bay across the river, a 60-story skyscraper which happened to be designed by the same architectural firm.\n",
"The orange false walls at platform level were removed in 2012 as part of construction, but the orange tiles at the Lexington Avenue mezzanine, as well as on the corridors to platform level, were kept for the time being. In spring 2012, temporary blue walls separating most of the IND and BMT sides were erected for the duration of construction. Both sides had large white and grey panels on the track side, as well as \"temporary\" tiles that said \"Lex 63\" at regular intervals. This differed vastly from the small beige tiles that were on the IND side of the tracks from 1989 to 2013. New platform signs for the Second Avenue Subway were erected in December 2016.\n",
"One eastbound lane was closed near Cherry Street due to deterioration. The concrete parapet wall could no longer support the light standard in that location. The light standard was instead relocated into the rightmost lane, and the lane was closed. Two locations, at Fort York Boulevard and near Cherry Street were reinforced to prevent \"punch-throughs\" (holes) from happening on the road surface, potentially knocking a large piece of concrete to the ground below and causing a dangerous incident for the vehicles above. It was estimated in December 2012 by the City of Toronto Infrastructure Department that the Expressway has a backlog of $626 million in repairs. Starting in 2013, the City intends to carry out $505 million worth of repairs over nine years. Temporary wood bracing and decking are being added to the underside of the road deck to prevent punch-throughs, but only provide a short term fix and will require a long term solution to prevent future deck collapse.\n",
"The China Bar and Alexandra tunnels have warning lights that are activated by cyclists before they enter the tunnels. This was required because the tunnels are curved. It is expected that the Ferrabee tunnel will get the same warning lights as it too is curved.\n",
"The tunnel between the and stations, including the junction with the future Yellow Line, was built at the same time as the other Metro tunnels in downtown Washington in the early 1970s. During construction under 7th Street and U Street, where the cut-and-cover technique was used, both street traffic and pedestrian access on those streets was difficult. This led to the closure of the traditional retail businesses along the route.\n",
"Because the station is in a sunken corridor, stairways and elevators were installed at Cedar Avenue and 19th Avenue to reach the platform. This is unlike other Green Line stations, which do not feature vertical pedestrian movement. The station was designed with an island platform to minimize the number of stairs and elevators needed.\n"
] |
will we ever see the national debt start going down or will it keep raising forever?
|
We'll likely see the national debt fluctuate up and down as this century goes on. The American economy is pretty robust and very, very good at generating income. Without multi-trillion-dollar wars to fight, and [hopefully] with an upcoming rationalization of our economic, tax, and social policies, the debt will start to drift downwards.
However, it will almost certainly never go away.
This may sound wacky but - America's national debt is the chain that binds the rest of the world to America.
So long as the US continues to be THE place to invest money at a risk-free rate (i.e. US Treasuries), the entire world has a vested interest in the US continuing to operate productively. In other words, the rest of the world NEEDS the US to be successful, or their own economies will suffer. They need America to keep spending money, because America's economy is the beating heart that is pumping all the blood (re: dollars) through the rest of the world.
As an example, China's growth is impossible without billions of dollars of US money flowing into the country. That money is so critical that they loan that money back to us at pathetically small interest rates so we can keep buying.
The US is living in the best possible situation - we have close to unlimited funds... and the appetite to match.
|
[
"According to the Treasury, \"failing to increase the debt limit would . . . cause the government to default on its legal obligations – an unprecedented event in American history\". These legal obligations include paying Social Security and Medicare benefits, military salaries, interest on the debt, and many other items. Making the promised payments of the principal and interest of US treasury securities on time ensures that the nation does not default on its sovereign debt.\n",
"In a May 12, 2011 editorial in the\" Wall Street Journal\", Rivkin addressed the runaway national debt problem by calling on Congress to reclaim its responsibility for issuing new U.S. debt: \"Congress should promptly increase the debt ceiling, but with one key caveat: The increase can be used only for borrowing to service existing obligations\".\n",
"In December 2013, Lew said that the government might run out of cash to pay the country's bills by late February or early March 2014. That set up yet another showdown in Congress over raising or suspending the debt limit, a statutory limit on the total amount of United States borrowing, early in the year. \"The creditworthiness of the United States is an essential underpinning of our strength as a nation; it is not a bargaining chip to be used for partisan political ends,\" Mr. Lew said in the letter. \"Increasing the debt limit does not authorize new spending commitments. It simply allows the government to pay for expenditures Congress has already approved.\"\n",
"The US has had public debt since its inception. Debts incurred during the American Revolutionary War and under the Articles of Confederation led to the first yearly report on the amount of the debt ($75,463,476.52 on January 1, 1791). Every president since Harry Truman has added to the national debt. The debt ceiling has been raised 74 times since March 1962, including 18 times under Ronald Reagan, eight times under Bill Clinton, seven times under George W. Bush and three times () under Barack Obama.\n",
"The public debt reached a post-World War II low of 24.6% in 1974. In that year, the Congressional Budget and Impoundment Control Act of 1974 reformed the budget process to allow Congress to challenge the president's budget more easily, and, as a consequence, deficits became increasingly difficult to control. National debt held by the public increased from its postwar low of 24.6% of GDP in 1974 to 26.2% in 1980.\n",
"BULLET::::25. This is Adam Florzak of Illinois. The national debt is now growing so quickly it will have increased by over half- million dollars in just the time it takes to ask this question. Over the years, politicians have borrowed just under $2 trillion from the Social Security trust fund to cover these massive budget deficits, and now the retirements of our generation are at risk. What will you do as president to help repay this money and restore the trust?\n",
"If the debt ceiling is not raised and extraordinary measures are exhausted, the United States government is legally unable to borrow money to pay its financial obligations. At that point, it must cease making payments unless the treasury has cash on hand to cover them. In addition, the government would not have the resources to pay the interest on (and sometime redeem) government securities when due, which would be characterized as a default. A default may affect the United States' sovereign risk rating and the interest rate that it will be required to pay on future debt. The United States has never defaulted on its financial obligations, but the periodic crises relating to the debt ceiling has led to a rating downgrade by several rating agencies and a warning by others. The GAO estimated that the delay in raising the debt ceiling during the debt ceiling crisis of 2011 raised borrowing costs for the government by $1.3 billion in fiscal year 2011 and noted that the delay would also raise costs in later years. The Bipartisan Policy Center extended the GAO's estimates and found that the delay raised borrowing costs by $18.9 billion over ten years.\n"
] |
What are some examples of small disciplined forces defeating larger forces?
|
The winter war perhaps? _URL_0_
Little Finland beating off the might of Soviet Russia: 70k casualties against the Soviets' 323k.
|
[
"Regular forces, in turn, may act in order to invite such attacks by concentrations of enemy guerrillas, in order to bring an otherwise elusive enemy to battle, relying on its own superior training and firepower to win such battles. This was successfully practiced by the French during the First Indochina War at the Battle of Nà Sản, but a subsequent attempt to replicate this at Dien Bien Phu led to decisive defeat.\n",
"In warfare, the long-term objective is the defeat of the enemy. An effective tactical method is the demoralisation of the enemy by defeating their army and routing them from the battlefield. Once a force had become disorganized, losing its ability to fight, the victors can chase down the remnants and attempt to cause as many casualties or take as many prisoners as possible.\n",
"BULLET::::- Troops with exceptional morale or skill became skirmishers, and were deployed in a screen in front of the Army. Their main fighting tactics were of a guerrilla-warfare nature. Both mounted and on foot, the large swarm of skirmishers would hide from enemies if possible, pepper their formations with fire and deploy ambushes. Unable to retaliate on the scattered skirmishers, the morale and unit cohesion of the better trained and equipped émigré and monarchist armies was gradually worn down. The incessant harassing fire usually resulted in a section of the enemy line wavering, and then the 'regular' formations of the Revolutionary Army would be sent into the attack.\n",
"In military strategy and tactics, a recurring theme is that units are strengthened by proximity to supporting units. Nearby units can fire on an attacker's flank, lend indirect fire support such as artillery or maneuver to counterattack. \"Defeat in detail\" is the tactic of exploiting failures of an enemy force to co-ordinate and support the various smaller units that make up the force. An overwhelming attack on one defending subunit minimizes casualties on the attacking side and can be repeated a number of times against the defending subunits until all are eliminated.\n",
"Use of large irregular forces featured heavily in wars such as the American Revolution, the Irish War of Independence and Irish Civil War, the Franco-Prussian War, the Russian Civil War, the Second Boer War, Liberation war of Bangladesh, Vietnam War, and especially the Eastern Front of World War II where hundreds of thousands of partisans fought on both sides.\n",
"By cutting the enemy columns or units into smaller groups and then encircling them with light and mobile forces, such as ski-troops during winter, a smaller force can overwhelm a much larger force. If the encircled enemy unit was too strong, or if attacking it would have entailed an unacceptably high cost, e.g., because of a lack of heavy equipment, the \"motti\" was usually left to \"stew\" until it ran out of food, fuel, supplies, and ammunition and was weakened enough to be eliminated. Some of the larger mottis held out until the end of the war because they were resupplied by air. Being trapped, however, these units were not available for battle operations.\n",
"The various vyūhas (military formations) were studied by the Kauravas and Pandavas alike. Most of them can be beaten using a counter-measure targeted specifically against that formation. It is important to observe that in the form of battle described in the \"Mahabharata\", it was important to place powerful fighters in positions where they could inflict maximum damage to the opposing force, or defend their own side. As per this military strategy, a specific stationary object or a moving object or person could be captured, surrounded and fully secured during battle.\n"
] |
why some, but not all, acquisition prices are disclosed.
|
In the USA, if a publicly traded company is acquired, the purchase price has to be reported publicly in filings with the SEC. The acquisition of a private company doesn't have to be, though if the buyer is itself a public company the deal will often show up in its SEC reports, where it may be obfuscated. In the case of a large acquirer like Google or Cisco, they may buy so many companies that you won't be able to find the price of any individual one in their reports.
Whether or not to divulge purchase prices is usually dictated by the purchasing company, although the acquired company could potentially make it a condition of sale. I don't know what laws exist that cover acquisitions/mergers.
There are a variety of reasons to not want to divulge. But usually it seems to be avoidance of criticism.
|
[
"An acquisition/takeover is the purchase of one business or company by another company or other business entity. Specific acquisition targets can be identified through myriad avenues including market research, trade expos, sent up from internal business units, or supply chain analysis. Such purchase may be of 100%, or nearly 100%, of the assets or ownership equity of the acquired entity. Consolidation/amalgamation occurs when two companies combine to form a new enterprise altogether, and neither of the previous companies remains independently. Acquisitions are divided into \"private\" and \"public\" acquisitions, depending on whether the acquiree or merging company (also termed a \"target\") is or is not listed on a public stock market. Some public companies rely on acquisitions as an important value creation strategy. An additional dimension or categorization consists of whether an acquisition is \"friendly\" or \"hostile\".\n",
"To induce the shareholders of the target company to sell, the acquirer's offer price is usually at a premium over the current market price of the target company's shares. For example, if a target corporation's stock were trading at $10 per share, an acquirer might offer $11.50 per share to shareholders on the condition that 51% of shareholders agree. Cash or securities may be offered to the target company's shareholders, although a tender offer in which securities are offered as consideration is generally referred to as an \"exchange offer.\"\n",
"\"Acquisition\" usually refers to a purchase of a smaller firm by a larger one. Sometimes, however, a smaller firm will acquire management control of a larger and/or longer-established company and retain the name of the latter for the post-acquisition combined entity. This is known as a reverse takeover. Another type of acquisition is the reverse merger, a form of transaction that enables a private company to be publicly listed in a relatively short time frame. A reverse merger occurs when a privately held company (often one that has strong prospects and is eager to raise financing) buys a publicly listed shell company, usually one with no business and limited assets.\n",
"When a public offering trades below its offering price, the offering is said to have \"broke issue\" or \"broke syndicate bid\". This creates the perception of an unstable or undesirable offering, which can lead to further selling and hesitant buying of the shares. To manage this situation, the underwriters initially oversell (\"short\") the offering to clients by an additional 15% of the offering size (in this example, 1.15 million shares). When the offering is priced and those 1.15 million shares are \"effective\" (become eligible for public trading), the underwriters are able to support and stabilize the offering price bid (also known as the \"syndicate bid\") by buying back the extra 15% of shares (150,000 shares in this example) in the market at or below the offer price. The underwriters can do this without the market risk of being \"long\" this extra 15% of shares in their own account, as they are simply \"covering\" (closing out) their short position.\n",
"If the market price of the stock falls below the mini-tender price before the offer closes, the bidder can cancel the offer or reduce the offer price. While a price change allows investors to withdraw their shares, this process is not automatic. The \"onus is on the investor\", as they (and not the bidder or broker) are responsible for acquiring the revised offer information and withdrawing their shares by the deadline.\n",
"The price discovery process (also called price discovery mechanism) is the process of determining the price of an asset in the marketplace through the interactions of buyers and sellers. The futures and options market serve all important functions of price discovery. The individuals with \"better information and judgement\" participate in these markets to take advantage of such information. When some new information arrives, perhaps some good news about the economy, for instance, the actions of speculators quickly feed their information into the derivatives market causing changes in price of derivatives. These markets are usually the first ones to react as the transaction cost is much lower in these markets than in the spot market. Therefore these markets indicate what is likely to happen and thus assist in better price discovery.\n",
"BULLET::::- Acquisition: Acquisition means, directly or indirectly, acquiring or agreeing to acquire shares, voting rights or assets of any enterprise or control over management or assets of any enterprise.\n"
] |
How do you feel about John Brown? Terrorist or freedom fighter?
|
Technically he's both, but in my biased opinion he's a freedom fighter. He could have planned the rebellion a bit better, I believe, for instance by quietly asking a local slave in the dead of night what he thought the other slaves would actually do. Realistically, though, Brown was never going to achieve the full liberation of the slaves that he wanted. Regardless, he's an inspirational figure.
|
[
"Brown claims to be a Muslim and jihadi who believed his actions were \"just kills\", or justified shootings, of adult males in retaliation for actions by the U.S. government in Iraq, Syria and Afghanistan. As he stated to authorities: \"All those lives are taken every single day by America, by this government. So a life for a life.\"\n",
"\"The New York Times\" reported that Brown has \"earned a national reputation as a progressive leader whose top priority is improving relations and reducing distrust between the police department and the city’s minority residents.\" He has advocated reducing the use of force and discouraged chasing suspects in cars and even by foot, since such chases often lead to fatalities. According to published reporting, he also has a reputation as a \"tough boss\" and has fought with the local police union over his emphasis on less-confrontational strategies and his willingness to fire officers, often publicly. He has also sought to increase transparency by equipping officers with body cameras and sought to reform training on the use of lethal force. It has also been reported that some African American residents still feel they are subject to discrimination by the police.\n",
"Brown's actions as an abolitionist and the tactics he used still make him a controversial figure today. He is both memorialized as a heroic martyr and visionary, and vilified as a madman and a terrorist. Historian James Loewen surveyed American history textbooks and noted that historians considered Brown perfectly sane until about 1890, but generally portrayed him as insane from about 1890 until 1970, when new interpretations began to gain ground.\n",
"It was this moment that Brown pledged to destroy slavery. Du Bois describes Brown as a biblical character: fanatically devoted to his abolitionist cause but also a man of rigid social and moral rules. Du Bois simultaneously describes Brown as a revolutionary, prophet and martyr, and declares him to be \"a man whose leadership lay not in his office, wealth or influence, but in the white flame of his utter devotion to an ideal.\"\n",
"Brown supported President Barack Obama's decision to send 30,000 more troops to fight in Afghanistan. He cited Stanley McChrystal's recommendations as a reason for his support. He also advocates that suspected terrorists be tried in military tribunals and not civilian courts. He also supported the limited use of \"enhanced interrogation techniques\", including waterboarding against non-citizen terrorist suspects. He supports a two-state solution for the Israeli–Palestinian conflict in which Israel and a new, independent Palestinian state would co-exist side by side.\n",
"Brown maintains a passion for philanthropy and has been active in volunteering for children's programs such as Free Arts in New York. In 2013, Brown donated a signed copy of the \"Identity Thief\" soundtrack to the Suicide Prevention Hotline's auction in order to raise money for the organization.\n",
"Brown went to great lengths to empathise with those who lost family members in the Iraq and Afghanistan conflicts. He has often said \"War is tragic\", echoing Blair's quote, \"War is horrible\". Nonetheless, in November 2007 Brown was accused by some senior military figures of not adhering to the Military Covenant, a convention within British politics ensuring adequate safeguards, rewards and compensation for military personnel who risk their lives in obedience to orders derived from the policy of the elected government.\n"
] |
how does my computer know how much time is remaining for a program to be installed?
|
It's an estimate based on how much data is left to transfer and how fast the transfer is currently going: divide what's left by the current rate and you get a rough time remaining. Because the rate keeps changing, the estimate keeps changing too, which is why the number jumps around. A rough sketch of the arithmetic is below.
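Here's a minimal Python sketch of that idea, not how any particular installer actually does it (real installers usually smooth the rate with a moving average so the number jumps around less):

```python
def eta_seconds(bytes_done: int, bytes_total: int, elapsed_s: float) -> float:
    """Estimate time remaining by assuming the average rate so far continues."""
    if bytes_done == 0 or elapsed_s == 0:
        return float("inf")              # nothing copied yet, no basis for an estimate
    rate = bytes_done / elapsed_s        # average bytes per second so far
    return (bytes_total - bytes_done) / rate

# Example: 300 MB of a 1 GB install done after 60 seconds
print(eta_seconds(300_000_000, 1_000_000_000, 60))  # about 140 seconds left
```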
|
[
"Time Machine creates incremental backups of files that can be restored at a later date. It allows the user to restore the whole system or specific files from the Recovery HD or the macOS Install DVD. It works within Mail, iWork, iLife, and several other compatible programs, making it possible to restore individual objects (e.g. emails, photos, contacts, calendar events) without leaving the application. According to an Apple support statement:\n",
"BULLET::::- where the program runs for an extended time and consumes additional memory over time, such as background tasks on servers, but especially in embedded devices which may be left running for many years\n",
"Live software update requires two computer units. One executing the old code (\"working\"), the other with new software loaded, otherwise idle (\"spare\"). During the process called \"warming\" memory areas (e.g. dynamically allocated memory, with the exception of stack of procedures) are moved from the old to the new computer unit. That implies that handling of data structures must be compatible in the old and the new software versions. Copying data does not require any programming effort, as long as allocation of data is done using TNSDL language.\n",
"Example usage, when discussing processing time of a search tree node, for finding 10 x 10 Latin squares: \"A typical node of the search tree probably requires about 75 mems (memory accesses) for processing, to check validity. Therefore the total running time on a modern computer would be roughly the time needed to perform mems.\" (Donald Knuth, 2011, The Art of Computer Programming, Volume 4A, p. 6).\n",
"Application processing time is generally tightly controlled, since MIDI tasks are most often real-time tasks. In most cases, the latency comes directly from the thread latency which can be obtained on a given operating system, typically 1-2 ms max on Windows and Mac OS systems. Systems with real-time kernel can achieve much better results, down to 100 microseconds. This time can be considered as constant, whatever the communication channel (MIDI 1.0, USB, RTP-MIDI, etc...), since the processing threads are operating on a different level than the communication related threads/tasks.\n",
"For example, if the time slot is 100 milliseconds, and \"job1\" takes a total time of 250 ms to complete, the round-robin scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), \"job1\" will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.\n",
"Modern computer software is often tracked using two different software versioning schemes—internal version number that may be incremented many times in a single day, such as a revision control number, and a \"released version\" that typically changes far less often, such as \"semantic versioning\" or a project code name.\n"
] |
what does a company do with funds generated from selling stocks?
|
They go to various things, depending on the company and its business. The money does literally go into the company's bank accounts, minus fees paid to the investment bank doing the underwriting, etc. The company may use it to pay back debt, invest in expansion (factories, new stores, inventory), make acquisitions, pay bonuses to founders, and so on. (This applies to shares the company itself issues, such as in an IPO; when investors later trade existing shares on an exchange, the company doesn't receive any of that money.)
|
[
"In a primary market, companies, governments or public sector institutions can raise funds through bond issues and corporations can raise capital through the sale of new stock through an initial public offering (IPO). This is often done through an investment bank or finance syndicate of securities dealers. The process of selling new shares to investors is called underwriting. Dealers earn a commission that is built into the price of the security offering, though it can be found in the prospectus. \n",
"Financing a company through the sale of stock in a company is known as equity financing. Alternatively, debt financing (for example issuing bonds) can be done to avoid giving up shares of ownership of the company. Unofficial financing known as trade financing usually provides the major part of a company's working capital (day-to-day operational needs).\n",
"Investors' money is pooled together from the sale of a fixed number of shares which a trust issues when it launches. The board will typically delegate responsibility to a professional fund manager to invest in the stocks and shares of a wide range of companies (more than most people could practically invest in themselves). The investment trust often has no employees, only a board of directors comprising only non-executive directors. \n",
"When it comes to financing a purchase of stocks there are two ways: purchasing stock with money that is currently in the buyer's ownership, or by buying stock on margin. Buying stock on margin means buying stock with money borrowed against the value of stocks in the same account. These stocks, or collateral, guarantee that the buyer can repay the loan; otherwise, the stockbroker has the right to sell the stock (collateral) to repay the borrowed money. He can sell if the share price drops below the margin requirement, at least 50% of the value of the stocks in the account. Buying on margin works the same way as borrowing money to buy a car or a house, using a car or house as collateral. Moreover, borrowing is not free; the broker usually charges 8–10% interest.\n",
"There are various methods of buying and financing stocks, the most common being through a stockbroker. Brokerage firms, whether they are a full-service or discount broker, arrange the transfer of stock from a seller to a buyer. Most trades are actually done through brokers listed with a stock exchange.\n",
"A fund that owns stocks and a substantial amount of assets other than stocks is considered an asset allocation fund. These funds split investments between growth stocks, income stocks/bonds, and money market instruments or cash for stability. A fund that switches between asset classes based on predictions of future returns is called a tactical allocation fund. Other funds may maintain a more or less constant proportion of assets, due to the belief that such prediction is not reliable.\n",
"In an example transaction, a large institutional money manager with a position in a particular stock allows those securities to be borrowed by a financial intermediary, typically an investment bank, prime broker or other broker-dealer, acting on behalf of one or more clients. After borrowing the stock, the client - the short seller - could sell it short. Their objective is to buy the stock back at a lower price thereby creating a profit. By selling the borrowed stocks, the short seller generates cash that becomes collateral paid to the lender. The cash value of the collateral would be marked-to-market on a daily basis so that it exceeds the value of the loan by at least 2%. NB: 2% is the standard margin rate in the US, whereas 5% is more usual in Europe.\n"
] |
why old film clips, like ones of ww2, almost always seem sped up faster than 1x?
|
As you probably know, the speed at which motion picture film runs through the camera determines its frame rate, given in frames per second (fps). When the film is run through a projector (which you can think of as a backwards camera) at the same speed it was shot, the movement looks natural to us. If the camera was cranked more slowly or more quickly than the projector runs, however, the action plays out in fast or slow motion, respectively (the terms "undercranking" and "overcranking" are still used for these techniques, derived from the literal cranking mechanism used to run early cameras and projectors).
Obviously this enthralled audiences, and early camera operators took advantage of it at times, but the cliche of its ubiquity happened more by accident. In the early days of the medium, both cameras and projectors were usually operated at a lower speed than the 24fps that later became the industry standard (particularly with the advent of synchronized sound in the late 1920s). I've shown silent films while working as a projectionist, and they're often distributed with instructions to be run at 18fps so that movement shows up normally. If shown at 24fps instead (which has often been done, either because of insufficient equipment or human error), you would be seeing everything at roughly 1.3 times the speed of the actual motion, and footage shot closer to 16fps comes out at a full 1.5 times. Hence the cliche of old films running in fast motion; the quick sketch below shows the arithmetic.
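Just to make the numbers concrete, here's a tiny Python sketch (the specific frame rates are illustrative examples, not claims about any particular film):

```python
def apparent_speed(capture_fps: float, playback_fps: float) -> float:
    """How many times faster (or slower) motion appears when playback and capture rates differ."""
    return playback_fps / capture_fps

print(apparent_speed(18, 24))  # ~1.33: footage shot at 18fps looks about a third too fast at 24fps
print(apparent_speed(16, 24))  # 1.5:   footage shot at 16fps looks half again too fast
print(apparent_speed(48, 24))  # 0.5:   overcranked footage plays back in slow motion
```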
|
[
"So a film recorded at 12 frames per second will appear to move twice as fast. Shooting at camera speeds between 8 and 22 frames per second usually falls into the undercranked fast motion category, with images shot at slower speeds more closely falling into the realm of time-lapse, although these distinctions of terminology have not been entirely established in all movie production circles.\n",
"Typically this style is achieved when each film frame is captured at a rate much faster than it will be played back. When replayed at normal speed, time appears to be moving more slowly. A term for creating slow motion film is overcranking which refers to hand cranking an early camera at a faster rate than normal (i.e. faster than 24 frames per second). Slow motion can also be achieved by playing normally recorded footage at a slower speed. This technique is more often applied to video subjected to instant replay than to film. A third technique that is becoming common using current computer software post-processing (with programs like Twixtor) is to fabricate digitally interpolated frames to smoothly transition between the frames that were actually shot. Motion can be slowed further by combining techniques, interpolating between overcranked frames. The traditional method for achieving super-slow motion is through high-speed photography, a more sophisticated technique that uses specialized equipment to record fast phenomena, usually for scientific applications.\n",
"Originally moving picture film was shot and projected at various speeds using hand-cranked cameras and projectors; though 1000 frames per minute (16 frame/s) is generally cited as a standard silent speed, research indicates most films were shot between 16 frame/s and 23 frame/s and projected from 18 frame/s on up (often reels included instructions on how fast each scene should be shown). When sound film was introduced in the late 1920s, a constant speed was required for the sound head. 24 frames per second were chosen because it was the slowest (and thus cheapest) speed which allowed for sufficient sound quality. Improvements since the late 19th century include the mechanization of cameras – allowing them to record at a consistent speed, quiet camera design – allowing sound recorded on-set to be usable without requiring large \"blimps\" to encase the camera, the invention of more sophisticated filmstocks and lenses, allowing directors to film in increasingly dim conditions, and the development of synchronized sound, allowing sound to be recorded at exactly the same speed as its corresponding action. The soundtrack can be recorded separately from shooting the film, but for live-action pictures, many parts of the soundtrack are usually recorded simultaneously.\n",
"Stop motion should not be confused with the time-lapse technique in which still photographs of a live scene are taken at regular intervals and then combined to make a continuous film. Time lapse is a technique whereby the frequency at which film frames are captured is much lower than that used to view the sequence. When played at normal speed, time appears to be moving faster.\n",
"The last 110 film that Kodak produced was ISO 400 speed packed in a cartridge that senses as \"low\" speed. As shown in the photograph to the right, these cartridges can be modified by hand so that they signal the proper speed to the camera.\n",
"When a slower shutter speed is selected, a longer time passes from the moment the shutter opens till the moment it closes. More time is available for movement in the subject to be recorded by the camera as a blur.\n",
"BULLET::::- \" s and s\": This and slower speeds are useful for photographs other than panning shots where motion blur is employed for deliberate effect, or for taking sharp photographs of immobile subjects under bad lighting conditions with a tripod-supported camera.\n"
] |
why doesn't north america see protests similar in size to other continents and countries?
|
[We took part in the largest protest in human history](_URL_0_), in 1995 the Million Man March had between 400,000 and 837,000 people, in 1993 the March on Washington for Lesbian, Gay and Bi Equal Rights and Liberation had between 300,000 and 1,000,000 people, in 1992 the "Save our Cities! Save our Children!" protest had 150,000 people, in 1989 the March for Women's Lives had 500,000. The list goes on, back through history. What are you basing your question's premise on? A guess?
|
[
"Some potentially vulnerable states that have not yet seen such protests have taken a variety of preemptive measures to avoid such displays occurring in their own countries; some of these states and others have experienced political fallout as a result of their own governmental actions and reactions to events which their own citizens are seeing reported from abroad.\n",
"The protests have also spread outside of Canada. On December 27 an online source reported that there had been 30 Idle No More protests in the United States, and solidarity protests in Stockholm, Sweden, London, UK, Berlin, Germany, Auckland, New Zealand, and Cairo, Egypt. On December 30, approximately 100 people from Walpole Island marched to Algonac, Michigan. CBS reported that \"hundreds\" attended a flash mob at the Mall of America in Minneapolis, Minnesota. The \"Twin Cities Daily Planet\" called it a crowd of \"over a thousand\" and stated that it followed a similar protest a week earlier where Clyde Bellecourt had been arrested, as well as another flash mob at the Paul Bunyan Mall in Bemidji. On January 5, the International Bridge was closed again due to Mohawk protests from New York.\n",
"Within the United States, protests have been reported in many states: Michigan, Minnesota, Ohio, New York, Arizona, Colorado, Maine, New Mexico, Vermont, South Carolina, Washington State, Washington, D.C., Indiana and Texas.\n",
"The influence and growth of these protests have led to smaller demonstrations held in cities all over the world. Cities with Mexican sub-communities such as Barcelona, Buenos Aires, Madrid, Montreal, The Hague, and Frankfurt have held their own protests in support of crisis in Mexican cities. Protests were also coordinated in Washington, D.C., as U.S. policy supports and supplies Calderon's policies.\n",
"Smaller protests or \"Euromaidans\" have been held internationally, primarily among the larger Ukrainian diaspora populations in North America and Europe. The largest took place on 8 December in New York, with over 1,000 attending. Notably, in December 2013, Warsaw's Palace of Culture and Science, Buffalo Electric Vehicle Company Tower in Buffalo, Cira Centre in Philadelphia, the Tbilisi City Hall in Georgia, and Niagara Falls on the US/Canada border were illuminated in blue and yellow as a symbol of solidarity with Ukraine.\n",
"Starting with the February protests in Wisconsin a number of Arab Spring inspired movements have waxed and waned in both Americas, some being violent, others not. On 15 October, there were thousands of demonstrations throughout the two continents, some in countries such as Canada, which had not suffered such unrest before.\n",
"Protests took place all across the United States of America with CBS reporting that 150 U.S. cities had protests. According to the World Socialist Web Site, protests took place in 225 different communities.\n"
] |
What is the best way to determine if an exoplanet is suitable to sustain human life?
|
Part of the problem is we have a sample size of 1. Basically impossible to draw hard conclusions from.
One key metric I've seen talked about is the presence of free oxygen. Oxygen is very reactive, if it is present in large quantities in molecular form it seems reasonable to infer that a process like life is creating it (e.g. photosynthesis).
Liquid water also seems to be an important prerequisite, as life needs a solvent in which to mix all its magical molecules.
If we spotted a planet at the right temperature for liquid water that also had large amounts of molecular oxygen, people would get very, very excited. You can get a first rough guess at that temperature just from the star's properties and the planet's distance from it; a sketch of that calculation is below.
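Here's a minimal Python sketch of that rough temperature check, assuming a simple blackbody equilibrium and an Earth-like albedo of 0.3; it deliberately ignores any greenhouse effect, which is why Earth itself comes out at about 255 K even though its real surface average is closer to 288 K:

```python
def equilibrium_temp_k(t_star_k: float, r_star_m: float, orbit_m: float, albedo: float = 0.3) -> float:
    """Blackbody equilibrium temperature of a planet (no atmosphere or greenhouse warming)."""
    return t_star_k * (1 - albedo) ** 0.25 * (r_star_m / (2 * orbit_m)) ** 0.5

# Sanity check with the Sun (5778 K, radius 6.96e8 m) and Earth's orbit (1 AU = 1.496e11 m)
print(equilibrium_temp_k(5778, 6.96e8, 1.496e11))  # ~255 K
```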
|
[
"The discovery of exoplanets has intensified interest in the search for extraterrestrial life. There is special interest in planets that orbit in a star's habitable zone, where it is possible for liquid water, a prerequisite for life on Earth, to exist on the surface. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life.\n",
"On 13 May 2016, researchers at University of California, Los Angeles (UCLA) announced that they had found various scenarios that allow the exoplanet to be habitable. They tested several simulations based on Kepler-62f having an atmosphere that ranges in thickness from the same as Earth's all the way up to 12 times thicker than our planet's, various concentrations of carbon dioxide in its atmosphere, ranging from the same amount as is in the Earth's atmosphere up to 2,500 times that level and several different possible configurations for its orbital path.\n",
"Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zone of their star. Since 1992 over two thousand exoplanets have been discovered ( planets in planetary systems including multiple planetary systems as of ).\n",
"Present day searches for exoplanets are insensitive to exoplanets located at the distances from their host star comparable to the semi-major axes of the gas giants in the Solar System, greater than about 5 AU. Surveys using the radial velocity method require observing a star over at least one period of revolution, which is roughly 30 years for a planet at the distance of Saturn. Existing adaptive optics instruments become ineffective at small angular separations, limiting them to semi-major axes larger than about 30 astronomical units. The high contrast of the Gemini Planet Imager at small angular separations will allow it to detect gas giants with semi-major axes of 5–30 astronomical units.\n",
"The Habitable Exoplanet Imaging Mission (HabEx) is a space telescope concept that would be optimized to search for and image Earth-size habitable exoplanets in the habitable zones of their stars, where liquid water can exist. HabEx would aim to understand how common terrestrial worlds beyond the Solar System may be and the range of their characteristics. It would be an optical, UV and infrared telescope that would also use spectrographs to study planetary atmospheres and eclipse starlight with either an internal coronagraph or an external starshade.\n",
"The scientific search for extraterrestrial life is being carried out both directly and indirectly. , 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in our own solar system hold the potential for hosting primitive life such as microorganisms.\n",
"For a biosignature to be relevant in the context of scientific investigation, it must be detectable with the technology currently available. This seems to be an obvious statement, however there are many scenarios in which life may be present on a planet, yet remain undetectable because of human-caused limitations.\n"
] |
why do dogs like the smell of cheese so much?
|
Cheese, that is, REAL, unprocessed cheese (although some types of pasteurized cheese are included as well), is naturally very pungent. Cut up a bit of Brie or aged white cheddar and tell me this isn't so. If we humans think cheese is very pungent, imagine how much more strongly dogs can smell it. Dogs tend to have a far more sensitive sense of smell than we humans do, since before dogs were domesticated, their sense of smell was essential for hunting down their food.
Naturally, regardless of whether or not a dog would recognize that this powerful scent is coming from tasty food, dogs would be curious about the origin of the odor. Some dogs might not even need to witness a human or other animal eating the cheese to consider licking the strange, odorous object, if given the opportunity, to learn more about it. Once licking it, they may discover that it's tasty to them and consequently eat it.
To some dogs, it may be habitual as you say--like Pavlov's dog. Every time a dog smells this piquant scent, he tends to see a human eating the object the smell is originating from. Eating means food. Food is good to eat.
However, even if a dog recognizes the cheese as something made from milk and does not first see a human or another animal eating it, that does not mean it will brush the cheese off as "not food." The misconception that humans are the only mammals that will continue to consume milk into adulthood has absolutely no basis in fact.
Cliche as it may sound, put a bowl of cow's milk you bought from the supermarket in front of a cat who has neither consumed processed milk nor seen anyone else consume it and tell me she won't drink it. I'm not promising she won't get sick, but 9/10 times, she will drink it, anyway. (And yes, I have tried this numerous times before hearing you're not supposed to do that, but the cat never got sick. Lol)
And it's not just cats. Many animals will drink milk if put in front of them because animals know it is rich in fat. From an evolutionary standpoint, fats are a delicacy since they are rich in energy and have only recently become so readily available to us that we haven't been able to turn off that insatiable craving for them yet.
TL;DR
1. Cheese is pungent, dogs have a good sense of smell.
2. We're not the only ones who like milk. Milk is rich in fat, and fat is tasty because we need it.
|
[
"Dogs have around 1,700 taste buds compared to humans with around 9,000. The sweet taste buds in dogs respond to a chemical called furaneol which is found in many fruits and in tomatoes. It appears that dogs do like this flavor and it probably evolved because in a natural environment dogs frequently supplement their diet of small animals with whatever fruits happen to be available. Because of dogs' dislike of bitter tastes, various sprays, and gels have been designed to keep dogs from chewing on furniture or other objects. Dogs also have taste buds that are tuned for water, which is something they share with other carnivores but is not found in humans. This taste sense is found at the tip of the dog's tongue, which is the part of the tongue that he curls to lap water. This area responds to water at all times, but when the dog has eaten salty or sugary foods the sensitivity to the taste of water increases. It is proposed that this ability to taste water evolved as a way for the body to keep internal fluids in balance after the animal has eaten things that will either result in more urine being passed or will require more water to adequately process. It certainly appears that when these special water taste buds are active, dogs seem to get an extra pleasure out of drinking water, and will drink copious amounts of it.\n",
"Dogs, as with all mammals, have natural odors. Natural dog odor can be unpleasant to dog owners especially when dogs are kept inside the home, as some people are not used to being exposed to the natural odor of a non-human species living in proximity to them. Dogs may also develop unnatural odors as a result of skin disease or other disorders or may become contaminated with odors from other sources in their environment.\n",
"Flatulence can be a problem for some dogs, which may be diet-related or a sign of gastrointestinal disease. This, in fact, may be the most commonly noticed source of odor from dogs fed cereal-based dog foods.\n",
"Nutmeg is highly neurotoxic to dogs and causes seizures, tremors, and nervous system disorders which can be fatal. Nutmeg's rich, spicy scent is attractive to dogs which can result in a dog ingesting a lethal amount of this spice. Eggnog and other food preparations which contain nutmeg should not be given to dogs.\n",
"Even in cultures with long cheese traditions, consumers may perceive some cheeses that are especially pungent-smelling, or mold-bearing varieties such as Limburger or Roquefort, as unpalatable. Such cheeses are an acquired taste because they are processed using molds or microbiological cultures, allowing odor and flavor molecules to resemble those in rotten foods. One author stated: \"An aversion to the odor of decay has the obvious biological value of steering us away from possible food poisoning, so it is no wonder that an animal food that gives off whiffs of shoes and soil and the stable takes some getting used to.\"\n",
"All natural dog odors are most prominent near the ears and from the paw pads. Dogs naturally produce secretions, the function of which is to produce scents allowing for individual animal recognition by dogs and other species in the scent-marking of territory. \n",
"Strong cheeses are an acquired taste for Danes too. Elderly Danes who find the smell offensive might joke about \"Gamle Ole's\" smelling up a whole house, just by being in a sealed plastic container in the refrigerator. One might also refer to Gamle Ole's pungency when talking about things that are not quite right, i.e. \"they stink\". Here one might say that something stinks or smells of \"Gamle Ole\".\n"
] |
What does it mean when one has a 'teleological view of history'?
|
To build a bit on /u/Sherbert42's answer: teleology (in a historiographical sense) is a form of historical enquiry which attempts to construct a narrative view of history as a progressive march in one direction, towards an inevitable end point.
To give one particularly notable and illustrative example of teleological thinking: look at 'Whig history', a school of thought [described by Herbert Butterfield](_URL_0_) which argued that all history can be considered as an inexorable march towards enlightenment/liberalism.
The problem with the teleological approach is that it tends towards sophistry: to use the Whig history example again, the idea that British-style liberal enlightenment is the apex of human progress, and that the eventual convergence of all history on that point is an inevitability, is deeply problematic.
The idea that you can divine a perfect (or in any way satisfactory) linear narrative in history becomes ludicrous almost as soon as you start to interrogate it to any depth. The construction of these teleological narratives generally involves highly selective use of evidence, straw men and the complete dismissal of countervailing viewpoints or interpretations.
What always surprises me is that this prism for understanding history hasn't entirely gone out of fashion. Butterfield wrote *The Whig Interpretation of History* in 1931, about historians mostly of the 19th century, but Francis Fukuyama's 'End of History' theory in the 1990s owes a lot to these ideas: the idea that the fall of the Soviet Union represents the ultimate triumph of liberal democracy as "the final form of human government".
Edit: as someone else pointed out in the comments, I mangled my understanding (misread old notes from uni and clearly wasn't paying enough attention) of Butterfield's place in the Whig canon — as a critic and taxonomist, not a part of the canon. Duly corrected/now going to go hang my head in shame.
|
[
"Historist historiography rejects historical teleology and bases its explanations of historical phenomena on sympathy and understanding (see Hermeneutics) for the events, acting persons, and historical periods. The historist approach takes to its extreme limits the common observation that human institutions (language, Art, religion, law, State) are subject to perpetual change.\n",
"A teleological argument holds all things to be designed for, or directed toward, a specific final result. That specific result gives events and actions, even retrospectively, an inherent purpose. When applied to the historical process, an historical teleological argument posits the result as the inevitable trajectory of a specific set of events. These events lead \"inevitably,\" as Karl Marx or Friedrich Engels proposed, to a specific set of conditions or situations; the resolution of those lead to another, and so on. This goal-oriented, 'teleological' notion of the historical process as a whole is present in a variety of arguments about the past: the \"inevitability,\" for example, of the revolution of the proletariat and the \"Whiggish\" narrative of past as an inevitable progression towards ever greater liberty and enlightenment that culminated in modern forms of liberal democracy and constitutional monarchy.\n",
"Historically, teleology may be identified with the philosophical tradition of Aristotelianism. The rationale of teleology was explored by Immanuel Kant in his Critique of Judgement and, again, made central to speculative philosophy by Hegel and in the various neo-Hegelian schools proposing a history of our species some consider to be at variance with Darwin, as well as with the dialectical materialism of Karl Marx and Friedrich Engels, and with what is now called analytic philosophy the point of departure is not so much formal logic and scientific fact but 'identity'. (In Hegel's terminology: 'objective spirit'.)\n",
"A telos (from the Greek τέλος for \"end\", \"purpose\", or \"goal\") is an end or purpose, in a fairly constrained sense used by philosophers such as Aristotle. It is the root of the term \"teleology\", roughly the study of purposiveness, or the study of objects with a view to their aims, purposes, or intentions. Teleology figures centrally in Aristotle's biology and in his theory of causes. The notion that everything has a \"telos\" also gave rise to epistemology. It is also central to some philosophical theories of history, such as those of Hegel and Marx.\n",
"Teleology or finality is a reason or explanation for something as a function of its end, purpose, or goal. It is derived from two Greek words: telos (end, goal, purpose) and logos (reason, explanation). A purpose that is imposed by a human use, such as that of a fork, is called \"extrinsic\". Natural teleology, common in classical philosophy but controversial today, contends that natural entities also have \"intrinsic\" purposes, irrespective of human use or opinion. For instance, Aristotle claimed that an acorn's intrinsic \"telos\" is to become a fully grown oak tree.\n",
"Teleology, from Greek τέλος, \"telos\" \"end, purpose\" and -λογία, \"logia\", \"a branch of learning\", was coined by the philosopher Christian von Wolff in 1728. The concept derives from the ancient Greek philosophy of Aristotle, where the final cause (the purpose) of a thing is its function. However, Aristotle's biology does not envisage evolution by natural selection.\n",
"Walter Fisher’s Narrative Paradigm Theory posits that all meaningful communication is a form of storytelling or giving a report of events, and that human beings experience and comprehend life as a series of ongoing narratives, each with its own conflicts, characters, beginning, middle, and end. Fisher believes that all forms of communication that appeal to our reason are best viewed as stories shaped by history, culture, and character, and all forms of human communication are to be seen fundamentally as stories\n"
] |
if my bathroom scale shows different numbers around the house, which number should i trust?
|
Find something in your house that you know weighs a certain amount for sure. E.g. a new bag of rice or potatoes or whatever. Weigh that in different areas of your house and see where you get the most accurate reading. Just use your scale in that spot.
|
[
"Take as an example the calculation of body mass index (BMI). The BMI is the body weight in kilograms divided by the square of height in metres. A bathroom scale may have a resolution of one kilogram. We do not know intermediate values about 79.6 kg or 80.3 kg but information rounded to the nearest whole number. It is unlikely that when the scale reads 80 kg, someone really weighs exactly 80.0 kg. In normal rounding to the nearest value, the scales showing 80 kg indicates a weight between 79.5 kg and 80.5 kg. The relevant range is that of all real numbers that are greater than or equal to 79.5, while less than or equal to 80.5, or in other words the interval [79.5,80.5].\n",
"To identify a particular room within a building, the room number is simply appended to the building number, using a \"-\" (e.g. Room 26–100, a large first-floor auditorium in Building 26). The floor number is indicated in the usual way, by the leading digit(s) of the room number, with a leading digit \"0\" indicating a basement location and \"00\" for sub-basement.\n",
"In modern buildings, especially large ones, room or apartment numbers are usually tied to the floor numbers, so that one can figure out the latter from the former. Typically one uses the floor number with one or two extra digits appended to identify the room within the floor. For example, room 215 could be the 15th room of floor 2 (or 5th room of floor 21), but to avoid this confusion one dot is sometimes used to separate the floor from the room (2.15 refers to 2nd floor, 15th room and 21.5 refers to 21st floor, 5th room) or a leading zero is placed before a single-digit room number (i.e. the 5th room of floor 21 would be 2105). Letters may be used, instead of digits, to identify the room within the floor—such as 21E instead of 215. Often odd numbers are used for rooms on one side of a hallway, even numbers for rooms on the other side.\n",
"Room numbers may consist of three digits, but can be any number of digits. The room number is generally assigned with the first digit indicating the floor on which the room is located. For example, room 412 would be on the fourth floor of the building; room 540 would be on the fifth floor. Buildings that have more than nine floors will have four digits assigned to rooms beyond the ninth floor. For example, room 1412 would be on the 14th floor.\n",
"The standard door sizes in the US run along 2\" increments. Customary sizes have a height of 78\" (1981 mm) or 80\" (2032 mm) and a width of 18\" (472 mm), 24\" (610 mm), 26\" (660 mm), 28\" (711 mm), 30\" (762 mm) or 36\" (914 mm). Most residential passage (room to room) doors are 30\" x 80\" (762 mm x 2032 mm).\n",
"In Japan, the size of a room is often measured by the number of , about 1.653 square meters (for a standard Nagoya size tatami). Alternatively, in terms of traditional Japanese area units, room area (and especially house floor area) is measured in terms of \"tsubo,\" where one \"tsubo\" is the area of two tatami mats (a square); formally 1 \"ken\" by 1 \"ken\" or a 1.81818 meter square, about 3.306 square meters.\n",
"Consider a set of numbers ranging from 100 to 1,000, rated with six categories. Each category covers a subrange of 150 (1,000 – 100)/6. All numbers between 100 and 250 must be rated using the first category, 1 – \"very small\"; the second category, 2 – \"small\", must be assigned the numbers 250 through 400; and so on and so forth. With actual perceptual stimuli, however, the subranges cannot be assumed beforehand. For certain psychophysical dimensions, this scaling has often been assumed to be logarithmic. For other psychophysical dimensions, however, the scaling of subranges can be quasilogarithmic and for others, it is almost linear (e.g. for judgments of the sizes of squares, see Haubensak, 1982).\n"
] |
how family guy uses brand names so frequently, but no other cartoon can.
|
Any TV show can mention brand names.
They usually don't, because:
* it makes it harder to place ads for that product and its competitors
* it can date the show and make it less desirable in syndication
* viewers often have strong brand affinities, and might not relate to characters who use brands they do not like
*Family Guy* is basically making fun of this aversion.
|
[
"Without laws governing name usage, many American names pop up following the name's usage in movies, television, or in the media. Children may be named after their parents' favorite fictional characters.\n",
"Brand names and logos are regularly blurred or covered with tape or a \"MythBusters\" sticker. Brand names are shown when integral to a myth, such as in the Diet Coke and Mentos experiment or Pop Rocks in the very first pilot episode of \"MythBusters\".\n",
"By the 1960s, kids who appeared on the show each were given a nametag sticker in the shape of a bow tie modeled after Uncle Al's sartorial trademark. While the kids were told the name tag was a ticket to get in and a souvenir to take home, the primary reason for them was so that Lewis could refer to each child by name. Initially the tags were plain white, but later included the name of the show to one side, and WCPO's \"9\" logo to the other, with room in the middle for the child's name.\n",
"A model may also be referred to as a nameplate, specifically when referring to the product from the point of view of the manufacturer, especially a model over time. For example, the Chevrolet Suburban is the oldest automobile nameplate in continuous production, dating to 1934 (1935 model year), while the Chrysler New Yorker was (until its demise in 1996) the oldest North American car nameplate. \"Nameplate\" is also sometimes used more loosely, however, to refer to a brand or division of larger company (e.g., GMC), rather than a specific model.\n",
"A brand name that is well known to the majority of people or households is also called a \"household name\" and may be an indicator of brand success. Occasionally a brand can become so successful that the brand becomes synonymous with the category. For example, British people often talk about \"Hoovering the house\" when they actually mean \"vacuuming the house.\" (Hoover is a brand name). When this happens, the brand name is said to have \"gone \"generic\".\" Examples of brands becoming generic abound; Kleenex, Cellotape, Nescafe, Aspirin and Panadol. When a brand goes generic, it can present a marketing problem because when the consumer requests a named brand at the retail outlet, they may be supplied with a competing brand. For example, if a person enters a bar and requests \"a rum and Coke,\" the bartender may interpret that to mean a \"rum and cola-flavoured beverage,\" paving the way for the outlet to supply a cheaper alternative mixer. In such a scenario, Coca-Cola Ltd, who after investing in brand building for more than a century, is the ultimate loser because it does not get the sale.\n",
"These abbreviated names are so common in Japan that many companies initiate abbreviations of the names of their own products. For example, the animated series \"Pretty Cure\" marketed itself under the four-character abbreviated name \"purikyua\".\n",
"Pub names are used to identify and differentiate each pub. Modern names are sometimes a marketing ploy or attempt to create \"brand awareness\", frequently using a comic theme thought to be memorable, \"Slug and Lettuce\" for a pub chain being an example. Interesting origins are not confined to old or traditional names, however. Names and their origins can be broken up into a relatively small number of categories.\n"
] |
if passwords for websites are supposed to be encrypted or only known to the user, how come some websites can tell me i have entered a password i changed years ago?
|
They don't know the password, but they do know when you set it. You tell them the password when you set it, but they only store its "checksum", or "salted hash", which can be calculated from the original password plus some data that don't change during the lifetime of the account, and then they forget the original password itself.
They don't have to know your password to check something against it - they can hash whatever you type with the same salt and compare the result to the stored hash, including the hashes of your old passwords. They don't have to know the password to tell you when you set it, either - the timestamp stored alongside each hash is enough for that.
This is just an ELI5 explanation, as crypto is extremely complex, counterintuitive and hard to understand. A rough sketch of the idea is below.
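Here's a minimal Python sketch of the storing and checking steps; the field names and iteration count are just illustrative, not what any real website uses:

```python
import hashlib
import hmac
import os
import time

def set_password(password: str) -> dict:
    """Store only a random salt, a salted hash, and a timestamp -- never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "hash": digest, "changed_at": time.time()}

def check_password(record: dict, attempt: str) -> bool:
    """Re-run the same one-way function on the attempt and compare the results."""
    digest = hashlib.pbkdf2_hmac("sha256", attempt.encode(), record["salt"], 100_000)
    return hmac.compare_digest(digest, record["hash"])

old = set_password("hunter2")               # stored years ago
print(check_password(old, "hunter2"))       # True  -> "you've used this password before"
print(check_password(old, "new-password"))  # False
```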
|
[
"A simple protocol that does not rely on a human third party involves password changing. This works anywhere one has to type in new passwords the same twice before the password is changed. The first individual will type their secret in the first box, and the second person will type their secret in the second box, if the password is successfully changed then the secret is shared. However the computer is still a third party and must be trusted not to have a key logger.\n",
"Levels consist of finding either a password (known as a UN/PW by the community) or finding a URL to use for the next level. Passwords do not require the user to create an account, but instead will be given to the user once they have found the answer to the riddle. Each solution to each level is very unique, such as decoding ciphers, image editing, musical knowledge, and even remote viewing.\n",
"Software is available for popular hand-held computers that can store passwords for numerous accounts in encrypted form. Passwords can be encrypted by hand on paper and remember the encryption method and key. And another approach is to use a single password or slightly varying passwords for low-security accounts and select distinctly separate strong passwords for a smaller number of high-value applications such as for online banking.\n",
"Google Browser Sync required a Google account, in which the user's cookies, saved passwords, bookmarks, browsing history, tabs, and open windows could be stored. The data was optionally encrypted using an alphanumerical PIN, which theoretically prevented even Google from reading the data. Passwords and cookies were always encrypted and could only be accessed by the user.\n",
"The contents of a web page are arbitrary and controlled by the entity owning the domain named displayed in the address bar. If HTTPS is used, then encryption is used to secure against attackers with access to the network from changing the page contents en route. When presented with a password field on a web page, a user is supposed to look at the address bar to determine whether the domain name in the address bar is the correct place to send the password. For example, for Google's single sign-on system (used on e.g. youtube.com), the user should always check that the address bar says \"https://accounts.google.com\" before inputting their password.\n",
"Likewise, if a user has not cleared their web browser history and has confidential sites listed there, they may want to use a strong password or other authentication solution for their user account on their computer, password-protect the computer when not in use, or encrypt the storage medium on which the web browser stores its history information..\n",
"Every page with a password form gives the user the option of storing the password for later use. To speed up the use of the website, when a user re-visits these pages, the username and password fields will be already filled in.\n"
] |
if i kept switching out older body parts of mine with healthier ones as i grew up, could i live forever?
|
Your brain cells will eventually age and die. If you replace those you can technically live forever, but will you still really be yourself?
This concept has been debated since the Ancient Greeks _URL_0_
|
[
"His healing factor also dramatically affects his aging process, allowing him to live far beyond the normal lifespan of a human. Despite being born in the late 19th century, he has the appearance, conditioning, health, and vitality of a man in his physical prime. While seemingly ageless, it is unknown exactly how greatly his healing factor extends his life expectancy.\n",
"Due to advanced nanotechnology, microscopic nanomeds can be put into peoples bodies to manage cell repair and delay aging. This enables the extension of a human lifespan to several centuries. People also keep a young appearance, as they stop visibly aging at the age of 30 – 40. It is important to start the treatments as early in life as possible. People who received those treatments in early childhood or before birth (passed on from mother to child during pregnancy) achieve the best results. The older a person is when receiving anti-aging treatments for the first time, the larger the possibility that they won't work that well.\n",
"Young/prime adulthood can be considered the healthiest time of life and young adults are generally in good health, subject neither to disease nor the problems of senescence. Strength and physical performance reach their peak from 18–39 years of age. Flexibility may decrease with age throughout adulthood.\n",
"Older people (who are generally too frail to undergo bone marrow transplants), and people who are unable to find a good bone marrow match, undergoing immune suppression have five-year survival rates of up to 35%.\n",
"Older people with adult children appear to live longer. Why this is the case is unclear and may dependent in part on those who have children adopting a healthier lifestyle, support from children, or the circumstances that led to not having children.\n",
"When aging remains, there are many different methods. However, the research first needs to see if the remains are fused or unfused before applying the different aging techniques. Individuals going through growth will have different aging methods than those of adults. There are four different aging techniques for adult individuals. These include cranial sutures, degradation of the pubic symphysis, auricular surface, and the sternal rib end of the first and fourth rib. Younger individuals age is based on tooth eruption and fusion of bone at different rates. Once the individual is aged, then they can be placed in different age categories: young adult (20-34 years), middle adult (35-49 years), and old adult (50+ years). \n",
"The idea that the human body can be repaired in old age to a more youthful state has gathered significant commercial interest over the past few years, including by companies such as Human Longevity Inc, Google Calico, and Elysium Health. In addition to these larger companies, many startups are currently developing therapeutics to tackle the 'ageing problem' using therapy. In 2015 a new class of drugs senolytics was announced (currently in pre-clinical development) designed specifically to combat the underlying biological causes of frailty.\n"
] |
the observer effect, the measurement problem and the 'conscious observer' of quantum mechanics?
|
The "observer effect" and "measurement" problems are commonly misrepresented on the internet by people who are obsessed with new-age pseudoscience. It has nothing to do with conciousness or anything magical. To put it in ELI5 terms:
Imagine that we you are blindfolded and sitting in a chair. I have set up a machine that can always shoot an apple across the room and have it whiz by right in front of your face. You, being blindfolded, have to "detect" when the apple has passes by you by listening to a hair dryer that I have taped to your head. When the apple passes in front of the hair dryer, it changes the sound of the air being blown. The hairdryer will not change the flight of the apple in any way significant to our observations. To detect the apple, you have interacted with it, but not changed it. This is an observation made at our regular, real world scale.
Now imagine we repeat the experiment with a paper ball instead of an apple. In this case, we'll still have to interact with the paper ball to detect it, but since the paper ball is so light, it's *going* to affect the paper ball's trajectory. This is an observation made at a quantum scale scale.
On a quantum scale, you can't "see" an electron or any other quantum particle. You have to interact with them to detect them, and interacting with them changes them. that's the problem.
|
[
"An especially unusual version of the observer effect occurs in quantum mechanics, as best demonstrated by the double-slit experiment. Physicists have found that even passive observation of quantum phenomena (by changing the test apparatus and passively 'ruling out' all but one possibility), can actually change the measured result. A particularly famous example is the 1998 Weizmann experiment. Despite the \"observer\" in this experiment being an electronic detector—possibly due to the assumption that the word \"observer\" implies a person—its results have led to the popular belief that a conscious mind can directly affect reality. The need for the \"observer\" to be conscious is not supported by scientific research, and has been pointed out as a misconception rooted in a poor understanding of the quantum wave function and the quantum measurement process, apparently being the generation of information at its most basic level that produces the effect.\n",
"In the context of the so-called hidden-measurements interpretation of quantum mechanics, the observer-effect can be understood as an \"instrument effect\" which results from the combination of the following two aspects: (a) an invasiveness of the measurement process, intrinsically incorporated in its experimental protocol (which therefore cannot be eliminated); (b) the presence of a random mechanism (due to fluctuations in the experimental context) through which a specific measurement-interaction is each time actualized, in a non-predictable (non-controllable) way.\n",
"Despite the observer effect as popularized in the Hawthorne experiments being perhaps falsely identified (see above discussion), the popularity and plausibility of the observer effect in theory has led researchers to postulate that this effect could take place at a second level. Thus it has been proposed that there is a secondary observer effect when researchers working with secondary data such as survey data or various indicators may impact the results of their scientific research. Rather than having an effect on the subjects (as with the primary observer effect), the researchers likely have their own idiosyncrasies that influence how they handle the data and even what data they obtain from secondary sources. For one, the researchers may choose seemingly innocuous steps in their statistical analyses that end up causing significantly different results using the same data; e.g., weighting strategies, factor analytic techniques, or choice of estimation. In addition, researchers may use software packages that have different default settings that lead to small but significant fluctuations. Finally, the data that researchers use may not be identical, even though it seems so. For example, the OECD collects and distributes various socio-economic data; however, these data change over time such that a researcher who downloads the Australian GDP data for the year 2000 may have slightly different values than a researcher who downloads the same Australian GDP 2000 data a few years later. The idea of the secondary observer effect was floated by Nate Breznau in a thus far relatively obscure paper.\n",
"In physics, the observer effect is the theory that the mere observation of a phenomenon inevitably changes that phenomenon. This is often the result of instruments that, by necessity, alter the state of what they measure in some manner. A common example is checking the pressure in an automobile tire; this is difficult to do without letting out some of the air, thus changing the pressure. Similarly, it is not possible to see any object without light hitting the object, and causing it to reflect that light. While the effects of observation are often negligible, the object still experiences a change. This effect can be found in many domains of physics, but can usually be reduced to insignificance by using different instruments or observation techniques. \n",
"In quantum mechanics, \"observation\" is synonymous with quantum measurement and \"observer\" with a measurement apparatus and \"observable\" with what can be measured. Thus the quantum mechanical observer does not have to\n",
"A poll was conducted at a quantum mechanics conference in 2011 using 33 participants (including physicists, mathematicians, and philosophers). Researchers found that 6% of participants (2 of the 33) indicated that they believed the observer \"plays a distinguished physical role (e.g., wave-function collapse by consciousness)\". This poll also states that 55% (18 of the 33) indicated that they believed the observer \"plays a fundamental role in the application of the formalism but plays no distinguished physical role\". They also mention that \"Popular accounts have sometimes suggested that the Copenhagen interpretation attributes such a role to consciousness. In our view, this is to misunderstand the Copenhagen interpretation.\"\n",
"Physicists use the term \"observer\" as shorthand for a specific reference frame from which a set of objects or events is being measured. Speaking of an observer in special relativity is not specifically hypothesizing an individual person who is experiencing events, but rather it is a particular mathematical context which objects and events are to be evaluated from. The effects of special relativity occur whether or not there is a sentient being within the inertial reference frame to witness them.\n"
] |
why are dj's treated like artists, with a stage name and everything, and their own 'shows' people get tickets to, when they just play other people's copyrighted music?
|
"Art" is a form of expression.
Imagine that 100 people are going to be showing up to your house in an hour. How will you entertain them? Playing music is a good option. Do you have the right music to play? Would you just turn on the radio? Go to Pandora?
Radios have commercials. Songs don't always complement one another.
But let's say you don't want to risk having your party fail due to poor music selection... so you spend some time listening to songs, figuring out which ones complement one another, which flow together, which ones get the crowd pumped and excited, and which ones give them a short breather so they can get ready for the next song.
But crap... that takes a lot of effort. Sure, pressing "play" on a machine may end up with a similar result... but you don't want to use a machine for this. You want to learn the fine-motor skills and muscle memory required to fluidly operate your music gear covered in buttons and switches. Like an audible chef, you craft a meal of sounds and rhythms for the crowd's ears... you manage to completely hide and obscure the pattern of song selection from the crowd to the point that the entire experience feels like one long ride of enjoyment. All those songs made by all those other artists might as well be different brands of paint being combined onto the DJ's percussive canvas.
So, to answer your question, the reason that people pay to see these shows is because these DJs provide a service that **not everyone** can do. And, sure, while the entry barriers to becoming a "DJ" are not very high, some DJs are simply better than others and can provide better experiences than their competitors... so much so that fan bases develop and seek out opportunities to exchange their money (which plays no music) for temporary exposure to auditory stimuli that is otherwise unavailable.
At the end of the day, an experienced, talented DJ (just like any musician) can combine layers of sound in a way that taps them directly into the minds of their audience. That's pretty neat.
Imagine yourself on stage with some tables, wires, and buttons. Before you is a crowd of thousands. They are there because you have created something that meant something to them. You are there because you are an artist.
|
[
"I party with the promoters I play for. A lot of DJs don't like to do that; they play the party, go back to the hotel and then get ready to go home. Not me. I don't deny it! For me a DJ is someone who brings a vibe. If you don't party, then how do you bring that vibe?\n",
"Most of the DJs and live musicians who play at Chillits could be considered amateur, in the sense that they do not attempt to make a living from playing music but instead donate their time and services. Some are San Francisco Bay Area technology luminaries, such as Brian Behlendorf. Starting with the 2000 gathering, many of the sets have been recorded and made available for free online (see links below).\n",
"Uploaded was the part of the show, which gave unsigned artists the chance to get their music showcased on the radio. Bands could upload their music through the website and a band was featured each evening on the show.\n",
"This is a list of notable club DJs, professionals who perform, or are known to perform, at large nightclub venues or other dance events, or who have been pioneers in the development of the role of the club DJ. DJs play a mix of recorded music for an audience at a bar, nightclub, dance club or rave who dance to the music. The music is played through a sound reinforcement system.\n",
"Many ghost producers sign agreements that prevent them from working for anyone else or establishing themselves as a solo artist. Such non-disclosure agreements are often noted as predatory because ghost producers, especially teenage producers, do not have an understanding of the music industry. London producer Mat Zo has alleged that DJs who hire ghost producers \"have pretended to make their own music and [left] us actual producers to struggle\".\n",
"Club DJs, commonly referred as DJs in general, play music at musical events, such as parties at music venues or bars, music festivals, corporate and private events. Typically, club DJs mix music recordings from two or more sources using different mixing techniques in order to produce non-stopping flow of music. One key technique used for seamlessly transitioning from one song to another is beatmatching. A DJ who mostly plays and mixes one specific music genre is often given the title of that genre; for example, a DJ who plays hip hop music is called a hip hop DJ, a DJ who plays house music is a house DJ, a DJ who plays techno is called a techno DJ, and so on. The quality of a DJ performance (often called a DJ mix or DJ set) consists of two main features: technical skills, or how well can DJ operate the equipment and produce smooth transitions between two or more recordings and a playlist, or ability of a DJ to select most suitable recordings also known as \"reading the crowd\".\n",
"Their knack for fusing genres has garnered them a reputation for thrilling and involving live shows, of which Pringle explains: \"With dance music you can go and see a DJ who is in a booth, so you dance all night and just watch him there kind of doing nothing, which obviously can be amazing. But if you have a live performance where you are playing the same kind of music, the whole experience then changes into a rock show and so much more besides, so we’re trying to make people experience dance music in a new way.\" This reputation has led to the band being awarded the accolade of \"a definite band to watch during the summer festival season\" by numerous publications, including \"The Guardian\", Artrocker and Skiddle.\n"
] |
what exactly was dialup and why couldn't you use the phone at the same time?
|
The computer sent data through the phone line. The phone line transmits sound, so the computer was literally generating sounds that could be interpreted as data - high- and low-pitched squeals that represent the data you are sending or receiving. It was like a very rapid Morse code.
If you picked up the phone, you would be adding your own sounds on top of the computer's sounds. The computer at the other end wouldn't know that you picked up the phone; it would just assume that you were sending data too, and this would screw up all of the data that gets sent.
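As a rough illustration of the trick (frequency-shift keying: one tone per bit value), here is a minimal sketch - not a faithful model of any real modem standard, though the frequencies below are loosely inspired by the old Bell 103 scheme:

```python
import numpy as np

SAMPLE_RATE = 8000            # samples per second, typical for phone-line audio
BIT_DURATION = 0.01           # 10 ms per bit -> 100 bits/s (real modems were faster)
FREQ_0, FREQ_1 = 1070, 1270   # tone frequencies in Hz for a 0 bit and a 1 bit

def modulate(bits):
    """Turn a bit string into an audio waveform: each bit becomes a short tone."""
    t = np.arange(0, BIT_DURATION, 1 / SAMPLE_RATE)
    tones = [np.sin(2 * np.pi * (FREQ_1 if b == "1" else FREQ_0) * t) for b in bits]
    return np.concatenate(tones)

def demodulate(signal):
    """Recover bits by checking which tone dominates each bit-long chunk."""
    samples_per_bit = int(BIT_DURATION * SAMPLE_RATE)
    bits = ""
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(len(chunk), 1 / SAMPLE_RATE)
        peak = freqs[np.argmax(spectrum)]
        bits += "1" if abs(peak - FREQ_1) < abs(peak - FREQ_0) else "0"
    return bits

audio = modulate("1011001")
print(demodulate(audio))      # "1011001" - extra noise on the line (your voice) corrupts it
```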
|
[
"Dial-up access is a connection to the Internet through a phone line, creating a semi-permanent link to the Internet. Operating on a single channel, it monopolizes the phone line and is the slowest method of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas because it requires no infrastructure other than the already existing telephone network. Dial-up connections typically do not exceed a speed of 56 kbit/s, because they are primarily made via a 56k modem. Since the mid 2000s, this technology became obsolete in most developed countries.\n",
"Dial-up access is a connection to the Internet through a phone line, creating a semi-permanent link to the Internet. Operating on a single channel, it monopolizes the phone line and is the slowest method of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas because it requires no infrastructure other than the already existing telephone network. Dial-up connections typically do not exceed a speed of 56 kbit/s, because they are primarily made via a 56k modem.\n",
"A dialer (American English) or dialler (British English) is an electronic device that is connected to a telephone line to monitor the dialed numbers and alter them to seamlessly provide services that otherwise require lengthy National or International access codes to be dialed. A dialer automatically inserts and modifies the numbers depending on the time of day, country or area code dialed, allowing the user to subscribe to the service providers who offer the best rates. For example, a dialer could be programmed to use one service provider for international calls and another for cellular calls. This process is known as prefix insertion or least cost routing. A line powered dialer does not need any external power but instead takes the power it needs from the telephone line.\n",
"Another type of dialer is a computer program which creates a connection to the Internet or another computer network over the analog telephone or Integrated Services Digital Network (ISDN). Many operating systems already contain such a program for connections through the Point-to-Point Protocol (PPP), such as WvDial.\n",
"Introduced to the public in 1963 by AT&T, Touch-Tone dialing greatly shortened the time of initiating a telephone call. It also enabled direct signaling from a telephone across the long-distance network using audio-frequency tones, which was impossible with the rotary dials that generated digital direct current pulses that had to be decoded by the local central office.\n",
"Private or internal PBX or key phone systems also have their own dial tone, sometimes the same as the external PSTN one, and sometimes different so as to remind users to dial a prefix for, or select in another way, an outside line.\n",
"Western Electric's international company in Belgium, the BTMC (Bell Telephone Manufacturing Company) first introduced Dial Tone with the cutover of its 7A Rotary Automatic Machine Switching System at Darlington, England, on 10 October 1914. Dial Tone was an essential feature, because the 7A Rotary system was common control. When a calling subscriber lifted their telephone receiver, Line Relays associated with the line operated causing all free First Line Finders in the subscribers group to drive, hunting for the subscribers line. When the line was found, start relays caused free Second Line Finders, in a particular group, to drive, hunting for the successful First Line Finder. Each Second Line Finder was paired with First Group Selector and R3 Register Chooser sequence switch, so when a Second Line Finder had found the First Line Finder, the First Group Selector's R1 sequence switch advanced (rotated) from its home position, causing the R3 Register Chooser sequence switch to advance (rotate), looking for a free Register. When a free Register was seized its R4 sequence switch advanced and Dial Tone was returned to the calling subscriber. This whole process could take as long as four seconds, so if the calling subscriber dialed before receiving a dial tone, their call would fail. \n"
] |
Why don't modern cellphones create interferences near speakers any more?
|
So, for our friends that don't know, the buzzing is what you get when the phone's transmission bursts are picked up and effectively AM-demodulated by nearby audio circuitry.
The effect has been well known since the rollout of GSM in Europe began (see Stephen Temple's _"Inside the Mobile Revolution"_, Ch. 22). What's happening is that in TDMA, each transmitter gets a time slot in which to transmit, and then remains silent until the next slot. This pattern (transmit-silence-transmit) means the power amp delivers large amounts of energy within either the 850/900 or 1800/1900 MHz GSM bands in bursts that repeat at roughly 217 Hz IIRC - squarely in the audible range. The RF is rectified ("detected") by any transistor or diode structure it reaches, at multiple points of an amplifier simultaneously, including power regulator chips, batteries, and so on. It can occur even inside the handset itself. In GSM's 800-900 MHz range, any 80 mm-long copper trace works like a quarter-wave antenna, or stripline resonator.
You can see the spectrum of the burst [here](_URL_0_). The transmission power is near 2 Watts (yeah, GSM is power hungry). The resulting detection at an audio chip results in a voltage transient that looks like [this](_URL_2_); note the shift in both the supply and the ground. The output of the amplifier will eventually be clipped and filtered down to the audible range, but distortion can produce frequency components at any sum/difference of multiples of the original frequencies.
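To see where that ~217 Hz figure comes from, here is a quick back-of-the-envelope check using the standard GSM frame timing (the harmonics line at the end is just illustrative):

```python
# GSM timing: one TDMA frame = 8 timeslots, each timeslot = 156.25 bit periods,
# at a gross bit rate of 270.833 kbit/s (exactly 1625/6 kHz).
bit_rate = 1_625_000 / 6            # bits per second
slot = 156.25 / bit_rate            # ~576.9 microseconds per timeslot
frame = 8 * slot                    # ~4.615 ms per TDMA frame

burst_rate = 1 / frame              # a handset transmits one burst per frame
print(f"slot  = {slot * 1e6:.1f} us")
print(f"frame = {frame * 1e3:.3f} ms")
print(f"burst repetition rate = {burst_rate:.1f} Hz")   # ~216.7 Hz

# Anything that rectifies ("detects") these bursts produces an audio tone at
# this rate plus harmonics - all comfortably inside the hearing range.
print([round(n * burst_rate) for n in range(1, 5)])     # ~217, 433, 650, 867 Hz
```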
The reasons subsequent RANs (UTRAN, E-UTRAN) don't present this problem are:
* First and foremost, awareness of the problem. For example, back in 1990, when GSM was being rolled out across Europe, this interference even affected devices like hearing aids, and there was major cause for concern, which translated into safety requirements for the development of subsequent standards.
* TDMA was abandoned. Instead, CDMA was adopted for UTRAN (E-UTRAN later moved to OFDMA), where each channel uses the entire spectrum all the time, and multiplexing is achieved by multiplying each signal with a spreading code that is (nearly) orthogonal between every pair of transmitters; read more [here](_URL_1_)
* Power requirements for user equipment became more stringent. For example, one of the first prototype chips for E-UTRAN claimed power consumption below 100 mW during the demo; see [here](_URL_3_). _Don't take that at face value, the demo was a transmission of a few seconds. Still a remarkable difference w/ GSM_.
I'm not aware if audio components changed their design to avoid problems like this.
|
[
"The longer wavelengths have the advantage of diffracting more, and so line of sight is not as necessary to obtain a good signal. Because the frequencies that cell phones use are too high to reflect off the ionosphere as shortwave radio waves do, cell phone waves cannot travel via the ionosphere. (See Diffraction and Attenuation for more details).\n",
"Wireless speakers receive considerable criticism from high-end audiophiles because of the potential for RF interference with other signal sources, like cordless phones, as well as for the relatively low sound quality some models deliver. Despite the criticism, wireless speakers have gained popularity with consumers and a growing number of models are actively marketed. Specifically, small and portable wireless Bluetooth speaker models have become very popular with consumers.\n",
"Non-broadcast signals are also affected. Mobile phone signals are in the UHF band, ranging from 700 to over 2600 Megahertz, a range which makes them even more prone to weather-induced propagation changes. In urban (and to some extent suburban) areas with a high population density, this is partly offset by the use of smaller cells, which use lower effective radiated power and beam tilt to reduce interference, and therefore increase frequency reuse and user capacity. However, since this would not be very cost-effective in more rural areas, these cells are larger and so more likely to cause interference over longer distances when propagation conditions allow.\n",
"Interference tends to be more troublesome with older radio technologies such as analogue amplitude modulation, which have no way of distinguishing unwanted in-band signals from the intended signal, and the omnidirectional antennas used with broadcast systems.\n",
"There are many methods of attempting to reduce the risk of AS. Several devices attempt to remove potentially harmful sound signals by digital signal processing. None has yet been shown to be fully effective. Devices which solely limit noise levels to about 85 dB have been shown in field trials to be ineffective (data from these trials has not been released into the public domain). Limiting background noise and office stress may also reduce the chance of an Acoustic Shock. Proper use of the headset and preventing mobile phones from being used in call centers reduces the chance of feedback.\n",
"In uses where missed calls are allowable, selective calling can also hide the presence of interfering signals such as receiver-produced intermodulation. Receivers with poor specifications—such as scanners or low-cost mobile radios—cannot reject the unwanted signals on nearby channels in urban environments. The interference will still be present and will still degrade system performance but by using selective calling the user will not have to hear the noises produced by receiving the interference.\n",
"The electromagnetic (telecoil) mode is usually more effective than the acoustic method. This is mainly because the microphone is often automatically switched off when the hearing aid is operating in telecoil mode, so background noise is not amplified. Since there is an electronic connection to the phone, the sound is clearer and distortion is less likely. But in order for this to work, the phone has to be hearing-aid compatible. More technically, the phone's speaker has to have a voice coil that generates a relatively strong electromagnetic field. Speakers with strong voice coils are more expensive and require more energy than the tiny ones used in many modern telephones; phones with the small low-power speakers cannot couple electromagnetically with the telecoil in the hearing aid, so the hearing aid must then switch to acoustic mode. Also, many mobile phones emit high levels of electromagnetic noise that creates audible static in the hearing aid when the telecoil is used. A workaround that resolves this issue on many mobile phones is to plug a wired (not Bluetooth) headset into the mobile phone; with the headset placed near the hearing aid the phone can be held far enough away to attenuate the static. Another method is to use a \"neckloop\" (which is like a portable, around-the-neck induction loop), and plug the neckloop directly into the standard audio jack (headphones jack) of a smartphone (or laptop, or stereo, etc.). Then, with the hearing aids' telecoil turned on (usually a button to press), the sound will travel directly from the phone, through the neckloop and into the hearing aids' telecoils.\n"
] |
whats going on with the statehood movement in puerto rico as of now?
|
There was a non-binding referendum in 2012, but most people see it for the sham it was.
The pro-statehood ruling party rigged it: they first asked if people preferred the status quo, then asked the remaining people if they wanted statehood. If they had asked the questions in the other order, they would have gotten a different answer.
|
[
"The Independence Movement in Puerto Rico refers to initiatives by inhabitants throughout the history of Puerto Rico to obtain full political independence for the island territory, first from the Spanish Empire, from 1493 to 1898 and, since 1898, from the United States. A small variety of groups, movements, political parties, and organizations have worked for Puerto Rico's independence over the centuries. \n",
"Former chief of the Puerto Rico Supreme Court José Trías Monge insists that statehood was never intended for the island and that, unlike Alaska and Hawaii, which Congress deemed incorporated territories and slated for annexation to the Union from the start, Puerto Rico was kept \"unincorporated\" specifically to avoid offering it statehood. And Myriam Marquez has stated that Puerto Ricans \"fear that statehood would strip the people of their national identity, of their distinct culture and language\". Ayala and Bernabe add that the \"purpose of the inclusion of U.S. citizenship to Puerto Ricans in the Jones Act of 1917 was an attempt by Congress to block independence and perpetuate Puerto Rico in its colonial status\". Proponents of the citizenship clause in the Jones Act argue that \"the extension of citizenship did not constitute a promise of statehood but rather an attempt to exclude any consideration of independence\".\n",
"On July 23, 1967, a plebiscite was held to decide if the people of Puerto Rico desired to become an independent nation, a state of the United States of America, or continue the commonwealth relation established in 1952. The majority of Puerto Ricans opted for the Commonwealth option (see Puerto Rican status referendums). Disagreement within the then pro-statehood party headed by Miguel A. García Méndez led Ferré and others to found the New Progressive Party (a.k.a., PNP).\n",
"Even with the Puerto Ricans' vote for statehood, action by the United States Congress would be necessary to implement changes to the status of Puerto Rico under the Territorial Clause of the United States Constitution.\n",
"In 2018, Puerto Rican social activists began a renewed push for statehood status in the atmosphere of increased post-hurricane media attention. The effects that closer ties and direct governmental integration between the island and the mainland would bring, particularly in terms of crime and social stability in general, are uncertain. Past efforts in favor of statehood by political figures such as U.S. Presidents Ronald Reagan and George H. W. Bush have previously amounted to relatively little.\n",
"Statehood might be useful as a means of dealing with the financial crisis, since it would allow for bankruptcy and the relevant protection. The Puerto Rican status referendum, 2017 is due to be held on June 11, 2017. The two options at that time will be \"Statehood\" and \"Independence/Free Association\". This will be the first of the five referendums that will not offer the choice of retaining the current status as a Commonwealth.\n",
"Statehood might be useful as a means of dealing with the financial crisis, since it would allow for bankruptcy and the relevant protection. The Puerto Rican status referendum, 2017 is due to be held on June 11, 2017. The two options at that time will be \"Statehood\" and \"Independence/Free Association\". This will be the first of the five referendums that will not offer the choice of retaining the current status as a Commonwealth.\n"
] |
How do deaf people perceive heavy bass sounds?
|
I'll take a shot at this. I think many people will agree they can "feel" low-frequency (bass) sounds in their chest when loud enough. We can perceive this vibration in our bodies with other senses, probably [somatosensation](_URL_3_) (i.e. touch), perhaps with [proprioception](_URL_0_). I found [this paper](_URL_1_), which measured chest vibration due to jet engine sounds and found a resonance at 63-100 Hz, indicating sounds in this frequency range might possibly be felt in the chest. [This paper](_URL_2_) basically confirmed that by reporting that the perceptual rating of vibration in response to low frequency sound was better correlated with accelerometer measurements on the chest/abdomen compared to the head. This supports the "chest-thumping" idea of bass sounds.
As far as actual studies with deaf individuals, I could only find a paper briefly discussed in [this review](_URL_4_) but couldn't find a copy of the actual paper (Yamada et al., Jnl Low Freq Noise Vibn 2, 32). Anyway, supposedly the deaf subjects could perceive low-frequency sounds at levels only 40-50 dB above normal hearing subjects. For reference, we usually consider a deficit of > 90 dB to be "profound hearing loss". This indicates the deaf subjects were probably using another cue (e.g. vibratory) to perceive the sound.
It should be noted that deaf individuals can have some residual hearing that allows them to perceive very intense sounds. I have a friend with thresholds at something like 105 dB SPL, so he can hear something like a loud power tool. Of course, for these sounds there's the vibration sense as well, so the perception sort-of merges together. There's also the sense of pain, which kicks in around 130-140 dB SPL (think standing next to a jet engine).
edit: typos
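For readers unfamiliar with decibel figures like those above, here is a small sketch converting dB SPL to actual sound pressure (20 µPa is the standard reference; the example levels are roughly the ones mentioned above):

```python
import math

P_REF = 20e-6  # pascals; the standard 0 dB SPL reference pressure

def spl_to_pascals(level_db: float) -> float:
    """Sound pressure in pascals for a given level in dB SPL."""
    return P_REF * 10 ** (level_db / 20)

# Every extra 20 dB is a tenfold jump in pressure.
for label, level in [("0 dB SPL reference", 0),
                     ("residual-hearing threshold (~105 dB SPL)", 105),
                     ("onset of pain (~130 dB SPL)", 130)]:
    print(f"{label}: {spl_to_pascals(level):.4f} Pa")
```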
|
[
"The human ear can generally hear sounds with frequencies between 20 Hz and 20 kHz (the audio range). Sounds outside this range are considered infrasound (below 20 Hz) or ultrasound (above 20 kHz) Although hearing requires an intact and functioning auditory portion of the central nervous system as well as a working ear, human deafness (extreme insensitivity to sound) most commonly occurs because of abnormalities of the inner ear, rather than in the nerves or tracts of the central auditory system.\n",
"Sub-bass sounds are the deep, low- register pitched pitches approximately below 60 Hz (C in scientific pitch notation) and extending downward to include the lowest frequency humans can hear, assumed at about 20 Hz (E). In this range, human hearing is not very sensitive, so sounds in this range tend to be felt more than heard. Sound reinforcement systems and PA systems often use one or more subwoofer loudspeaker cabinets that are specifically designed for amplifying sounds in the sub-bass range. Sounds below sub-bass are called infrasound.\n",
"Infrasound is sound at frequencies lower than the low frequency end of human hearing threshold at 20 Hz. It is known, however, that humans can perceive sounds below this frequency at very high pressure levels. Infrasound can come from many natural as well as man-made sources, including weather patterns, topographic features, ocean wave activity, thunderstorms, geomagnetic storms, earthquakes, jet streams, mountain ranges, and rocket launchings. Infrasounds are also present in the vocalizations of some animals. Low frequency sounds can travel for long distances with very little attenuation and can be detected hundreds of miles away from their sources.\n",
"Infrasound, sometimes referred to as low-frequency sound, is sound that is lower in frequency than 20 Hz or cycles per second, the \"normal\" limit of human hearing. Hearing becomes gradually less sensitive as frequency decreases, so for humans to perceive infrasound, the sound pressure must be sufficiently high. The ear is the primary organ for sensing infrasound, but at higher intensities it is possible to feel infrasound vibrations in various parts of the body.\n",
"A human is capable of hearing (and usefully discerning) anything from a quiet murmur in a soundproofed room to the loudest heavy metal concert. Such a difference can exceed 100 dB which represents a factor of 100,000 in amplitude and a factor 10,000,000,000 in power. The dynamic range of human hearing is roughly 140 dB, varying with frequency, from the threshold of hearing (around −9 dB SPL at 3 kHz) to the threshold of pain (from 120–140 dB SPL). This wide dynamic range cannot be perceived all at once, however; the tensor tympani, stapedius muscle, and outer hair cells all act as mechanical dynamic range compressors to adjust the sensitivity of the ear to different ambient levels.\n",
"What sounds normal for someone with normal hearing may be too soft for someone with recruitment, and what is too loud for someone with normal hearing is also too loud for the patient with recruitment. In effect, the range of sound intensity that a patient with recruitment can tolerate is much narrower. Further adding to the difficulty, recruitment is observed in those frequencies that are most impaired—in the high frequencies, which also carry critical information for speech understanding.\n",
"Extremely high-power sound waves can disrupt or destroy the eardrums of a target and cause severe pain or disorientation. This is usually sufficient to incapacitate a person. Less powerful sound waves can cause humans to experience nausea or discomfort. The use of these frequencies to incapacitate persons has occurred both in anti-citizen special operation and crowd control settings.\n"
] |
Before the Augustus founded the empire, the Roman republic was plagued with civil wars. Why didn't the Parthians invade?
|
They did...it's all over the sources...
The Parthians crossed the Euphrates only twice. In 51 Cicero feared a Parthian invasion into Cilicia, but it did not materialize, and the brief Parthian campaign following Crassus' defeat fizzled out quickly. Plutarch claims that Pompey reached out to the Parthians for asylum, but he ended up going to Egypt instead and the Parthians were not active on the Roman frontier for most of the 40s. A Parthian campaign in 41, led by the younger Labienus, was initially successful, but they were disastrously defeated by Ventidius Bassus, losing the crown prince Pacorus. The Caesarians' success at Philippi allowed Antony to launch a large expedition into Armenia, which was not particularly successful but was not followed by a Parthian counterattack. Though wars were occasionally fought in Armenia, and the Romans successfully invaded Parthia a few times (under Trajan and Septimius Severus, for example), the Parthians did not again cross the Euphrates.
|
[
"Battles between the Parthian Empire and the Roman Republic began in 54 BC. This first incursion against Parthia was repulsed, notably at the Battle of Carrhae (53 BC). During the Roman Liberators' civil war of the 1st Century BC, the Parthians actively supported Brutus and Cassius, invading Syria, and gaining territories in the Levant. However, the conclusion of the second Roman civil war brought a revival of Roman strength in Western Asia.\n",
"Because of the threat the Dacians represented to the Roman Empire's eastward expansion, in the year 101 Emperor Trajan made the decision to begin a campaign against them. The first conflict began on March 25 and the Roman troops, consisting of four principal legions, the units X \"Gemina\", XI \"Claudia\", II \"Traiana Fortis\" and XXX \"Ulpia Victrix\", defeated the Dacians, and it thus ended in Roman victory.\n",
"After the Marian reforms and throughout the history of Rome's Late Republic, the legions played an important political role. By the 1st century BC, the threat of the legions under a demagogue was recognized. Governors were not allowed to leave their provinces with their legions. When Julius Caesar broke this rule, leaving his province of Gaul and crossing the Rubicon into Italy, he precipitated a constitutional crisis. This crisis and the civil wars which followed brought an end to the Republic and led to the foundation of the Empire under Augustus in 27 BC.\n",
"Julius Caesar had planned an invasion of Parthia, but he was assassinated before implementing it. In 40 BC, the Parthians were joined by Pompeian forces and briefly captured much of the Roman East, but were defeated in Antony's counter-attack.\n",
"Julius Caesar's planned invasion of the Parthian Empire was to begin in 44 BC; however, due to his assassination that same year, the invasion never took place. The campaign was to start with the pacification of Dacia, followed by an invasion of Parthia. Plutarch also recorded that once Parthia was subdued the army would continue to Scythia, then Germania and finally back to Rome. These grander plans are found only in Plutarch's \"Parallel Lives\", and their authenticity is questioned by most scholars.\n",
"Augustus (imperial rule: 31 BC to AD 14) controlled the Roman state following the civil wars that marked the end of the Republic (c. 510–44). He established a quasi-constitutional regime known as the Principate, commonly included as the first phase of the Empire. Roman actions in Africa throughout the period of civil war are harshly criticized by a modern Maghribi historian, Abdallah Laroui, who notes the cumulative lands lost by Berbers to Romans, and how the Romans had steadily steered events to their benefit.\n",
"Despite numerous defections to the Romans during the campaign, the Avar attack appears to have ended the Antean polity. They never appear in sources apart from the epithet \"Anticus\" in the imperial titulature in 612. Curta argues that the 602 attack on the Antes destroyed their political independence. However, the epithet \"Anticus\" is attested in imperial titulature until 612, thus Kardaras rather argues that they disappearance of the Antes relates to general collapse of the Scythian/ lower Danubian \"limes\" which they defended, at which time their hegemony on the lower Danube ended. Whatever the case, shortly after the collapse of the Danubian \"limes\" (more specifically, the tactical Roman withdrawal), the first evidence of Slavic settlement in north-eastern Bulgaria begin to appear.\n"
] |
Can bacteria feel pain?
|
Not in any sense of the term that would make sense from a human perspective, for sure: by definition, single-celled organisms don't have nerve cells, and what we call "pain" is entirely a nervous-system response to various stimuli.
|
[
"Though it has been argued that most invertebrates do not feel pain, there is some evidence that invertebrates, especially the decapod crustaceans (e.g. crabs and lobsters) and cephalopods (e.g. octopuses), exhibit behavioural and physiological reactions indicating they may have the capacity for this experience.\n",
"There is debate about whether invertebrates can experience pain. Some of the most compelling evidence for pain in invertebrates exists for crustaceans in terms of trade-offs between stimulus avoidance and other motivational requirements. Evidence of the ability for crabs to feel pain is supported by their possessing an opioid receptor system, showing learned avoidance to putatively painful stimuli, and responding appropriately to analagesics and anaesthetics. These all indicate it is likely that crabs can experience pain during declawing.\n",
"The presence of pain in an animal cannot be known for certain, but it can be inferred through physical and behavioral reactions. Specialists currently believe that all vertebrates can feel pain, and that certain invertebrates, like the octopus, may also. As for other animals, plants, or other entities, their ability to feel physical pain is at present a question beyond scientific reach, since no mechanism is known by which they could have such a feeling. In particular, there are no known nociceptors in groups such as plants, fungi, and most insects, except for instance in fruit flies.\n",
"Pain is an aversive sensation and feeling associated with actual, or potential, tissue damage. It is widely accepted by a broad spectrum of scientists and philosophers that non-human animals can perceive pain, including pain in amphibians.\n",
"If cephalopods feel pain, there are ethical and animal welfare implications including the consequences of exposure to pollutants, practices involving commercial, aquaculture and for cephalopods used in scientific research or which are eaten. Because of the possibility that cephalopods are capable of perceiving pain, it has been suggested that \"precautionary principles\" should be followed with respect to human interactions and consideration of these invertebrates.\n",
"Other societal implications of cephalopods being able to perceive pain include acute and chronic exposure to pollutants, aquaculture, removal from water for routine husbandry, pain during slaughter and during scientific research.\n",
"Advocates for Animals, a Scottish animal welfare group, stated in 2005 that \"scientific evidence ... strongly suggests that there is a potential for decapod crustaceans and cephalopods to experience pain and suffering\". This is primarily due to \"The likelihood that decapod crustaceans can feel pain [which] is supported by the fact that they have been shown to have opioid receptors and to respond to opioids (analgesics such as morphine) in a similar way to vertebrates.\" Similarities between decapod and vertebrate stress systems and behavioral responses to noxious stimuli were given as additional evidence for the capacity of decapods to experience pain.\n"
] |
What is Bell's Inequality and how does it work?
|
It is a response to the famous EPR (Einstein, Podolsky and Rosen) paper written in 1935, which described a thought experiment leading to what we today know as quantum entanglement - or, as Einstein later called it, "spooky action at a distance".
Entanglement can be briefly explained by two entangled electrons (A and B) being in a combined state where one has spin up and one has spin down (along some direction). We do not know which is which, but we do know that if electron A is measured to have spin up, electron B will for sure have spin down. An important thing to note is that, according to quantum mechanics, the outcome is not decided in advance - it is only "chosen" the moment the first electron is measured. The second electron will immediately obtain the opposite spin value to the first one.
If the electrons are separated by a large distance (say 1 light year), it could be interpreted that some signal is sent faster than light, since the second electron will choose its state right after the first electron is measured. This is called non-locality.
In this paper and after, many argued that quantum mechanics was indeed an incomplete theory. Some people suggested that there must be some *hidden variable* (this could be a number, a set of variables, whatever), that will solve all of the problems with non-locality, and maybe even remove the probabilistic nature of quantum mechanics. Once these variables are known, these weird effects disappear.
So now to Bell's theorem. In 1964, Bell published a paper called *On the Einstein Podolsky Rosen paradox* where he showed that if we assume some local hidden variable that takes away all the uncertainty in the experiment described in the EPR paper, then there is a measurable physical quantity that must satisfy a certain inequality - and that inequality is violated both by the predictions of quantum mechanics and, as later experiments showed, in the lab.
In other words, any local hidden-variable theory as described above is inconsistent with the predictions of quantum mechanics. This is the consequence of Bell's theorem.
So was Einstein wrong? Well, modern formulations of locality (the speed of light being upper limit) usually state that no *information* travels faster than the speed of light. And as far as we know, quantum entanglement cannot be used to send information faster than light because we cannot control the outcome of the experiment (electron A gets spin up or down).
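As a numerical illustration, here is a minimal sketch of the CHSH form of Bell's inequality (a later, commonly tested variant of the 1964 result), using the textbook singlet-state correlation E(a,b) = -cos(a-b). Local hidden-variable theories force |S| ≤ 2, while quantum mechanics reaches 2√2:

```python
import math

def E(a, b):
    """Correlation of spin measurements along angles a and b for two
    electrons in the entangled singlet state (standard QM prediction)."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination: any local hidden-variable theory keeps |S| <= 2."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Measurement angles that maximize the quantum-mechanical violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = chsh(a, a2, b, b2)
print(abs(S), 2 * math.sqrt(2))   # both ~2.828, i.e. |S| > 2: no local hidden variables
```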
|
[
"The intention of a Bell inequality is to serve as a test of local realism or local hidden variable theories as against quantum mechanics, applying Bell's theorem, which shows them to be incompatible. Not all the Bell's inequalities that appear in the literature are in fact fit for this purpose. The one discussed here holds only for a very limited class of local hidden variable theories and has never been used in practical experiments. It is, however, discussed by John Bell in his \"Bertlmann's socks\" paper (Bell, 1981), where it is referred to as the \"Wigner–d'Espagnat inequality\" (d'Espagnat, 1979; Wigner, 1970). It is also variously attributed to Bohm (1951?) and Belinfante (1973).\n",
"The Bell inequality is motivated by the absence of communication between the two measurement sites. In experiments, this is usually ensured simply by prohibiting \"any\" light-speed communication by separating the two sites and then ensuring that the measurement duration is shorter than the time it would take for any light-speed signal from one site to the other, or indeed, to the source. In one of Alain Aspect's experiments, inter-detector communication at light speed during the time between pair emission and detection was possible, but such communication between the time of fixing the detectors' settings and the time of detection was not. An experimental set-up without any such provision effectively becomes entirely \"local\", and therefore cannot rule out local realism. Additionally, the experiment design will ideally be such that the settings for each measurement are not determined by any earlier event, at both measurement stations.\n",
"BULLET::::2. Bell's inequality does not apply to some possible hidden variable theories. It only applies to a certain class of local hidden variable theories. In fact, it might have just missed the kind of hidden variable theories that Einstein is most interested in.\n",
"Bell's inequalities establish a theoretical curve of the number of correlations (++ or --) between the two detectors in relation to the relative angle of the detectors formula_9. The shape of the curve is characteristic of the violation of Bell's inequalities. The measures' matching the shape of the curve establishes, quantitatively and qualitatively, that Bell's inequalities have been violated.\n",
"\"Inequality by Design: Cracking the Bell Curve Myth\" (1996), is a well-known reply to \"The Bell Curve\" by Charles Murray and Richard Hernstein and attempts to show that the arguments in \"The Bell Curve\" are flawed.\n",
"Some people continue to believe that agreement with Bell's inequalities might yet be saved. They argue that in the future much more precise experiments could reveal that one of the known loopholes, for example the so-called \"fair sampling loophole\", had been biasing the interpretations. Most mainstream physicists are highly skeptical about all these \"loopholes\", admitting their existence but continuing to believe that Bell's inequalities must fail.\n",
"Inequality by Design: Cracking the Bell Curve Myth is a 1996 book by Claude S. Fischer, Michael Hout, Martín Sánchez Jankowski, Samuel R. Lucas, Ann Swidler, and Kim Voss. The book is a reply to \"The Bell Curve\" (1994) by Charles Murray and Richard Hernstein and attempts to show that the arguments in \"The Bell Curve\" are flawed, that the data used by Murray and Herrnstein do not support their conclusion and that alternative explanations (particularly the effects of social inequality) better explain differences in IQ scores than genetic explanations.\n"
] |
Can any experts comment on this article about the nuclear reactors in Japan please?
|
Seems like a fairly in-depth article. Not sure why people would call it anti-science as such; everything I read was more or less what I have read elsewhere.
What does worry me is the section about mobile generators being brought in to provide power for the cooling but the "plug not fitting". Now I'm no engineer, but surely with the level of expertise available on site, would it not be possible to make it fit? I.e. rip out whatever terminals are there and connect it up somehow?
Anyway all sounds fairly plausible and at least grounded in reality.
|
[
"One reactor model, the L-54, was purchased and installed by a number of United States universities and foreign research institutions, including Japan. The Japanese Atomic Research Institute renamed theirs Japan Research Reactor-1 (JRR-1) and the government of Japan issued a commemorative postage stamp noting the establishment of Japan's first nuclear reactor in 1957. The reactor was decommissioned in 1970 and is now maintained as a a museum exhibit with a Japanese-language website at Tokaimura, Japan\n",
"BULLET::::- The Atomic Energy Society of Japan (AESJ) 日本原子力学会 is a major academic organization in Japan focusing on all forms of nuclear power. The \"Journal of Nuclear Science and Technology\" is the academic journal run by the AESJ. It publishes English and Japanese articles, though most submissions are from Japanese research institutes, universities, and companies.\n",
"The Research Institute of Atomic Reactors () is an institute for nuclear reactor research in Dimitrovgrad in Ulyanovsk Oblast, Russia. The institute houses eight nuclear research reactors: SM, Arbus (ACT-1), MIR.M1, RBT-6, RBT-10 / 1, RBT-10 / 2, BOR-60 and VK-50.\n",
"The Nagasaki Atomic Bomb Museum covers the history of the bombing of Nagasaki, Japan. It portrays scenes of World War II, the dropping of the atomic bomb, the reconstruction of Nagasaki, and present day. Additionally, the museum exhibits the history of nuclear weapons development.\n",
"The reactor's primary purpose is for training students in the principles of reactor physics. The university also uses it as a source for neutrons for research in nuclear engineering, health science, chemistry, pharmacy, agriculture, biology, and nanotechnology.\n",
"Two government advisers have said that \"Japan's safety review of nuclear reactors after the Fukushima disaster is based on faulty criteria and many people involved have conflicts of interest\". Hiromitsu Ino, Professor Emeritus at the University of Tokyo, says\n",
"Two government advisers have said that \"Japan's safety review of nuclear reactors after the Fukushima disaster is based on faulty criteria and many people involved have conflicts of interest\". Hiromitsu Ino, Professor Emeritus at the University of Tokyo, says\n"
] |
what is being done in the world of science to offset the imposing "antibiotic apocalypse?"
|
[Here is an article you may want to read](_URL_0_) I'll post a little of the article so my comment doesn't get deleted.
Scientists have come across a potential game-changer in the fight against drug-resistant superbugs - a new class of antibiotic that is resistant to resistance. Not only does the new compound - which comes from soil bacteria - kill deadly superbugs like MRSA, but also - because of the way it destroys their cell wall - the pathogens will find it very difficult to mutate into resistant strains.
|
[
"Antibiotic resistance is another major concern, leading to the reemergence of diseases such as tuberculosis. The World Health Organization, for its World Health Day 2011 campaign, is calling for intensified global commitment to safeguard antibiotics and other antimicrobial medicines for future generations.\n",
"World Antibiotic Awareness Week has been held every November since 2015. For 2017, the Food and Agriculture Organization of the United Nations (FAO), the World Health Organization (WHO) and the World Organisation for Animal Health (OIE) are together calling for responsible use of antibiotics in humans and animals to reduce the emergence of antibiotic resistance.\n",
"Another reason behind the lack of new antibiotic production is the diminishing amount of return on investment for antibiotics and thus the lack resources put into research and development by private pharmaceutical companies. The World Health Organization has recognized the danger of antibiotic resistance bacteria and has created a list of \"priority pathogens\" that are of the utmost concern. In doing so the hope is to stimulate R&D that can create a new generation of antibiotics. In the United States, the Biomedical Advanced Research and Development Authority (BARDA) aims to support the work of the industry to produce new antibiotics.\n",
"In 2013 researchers from Aberdeen University announced that they were starting a hunt for undiscovered chemicals in organisms that have evolved in deep sea trenches, hoping to find \"the next generation\" of antibiotics, anticipating an \"antibiotic apocalypse\" with a dearth of new infection-fighting drugs. The EU-funded research will start in the Atacama Trench and then move on to search trenches off New Zealand and Antarctica.\n",
"The increasing interconnectedness of the world and the fact that new classes of antibiotics have not been developed and approved for more than 25 years highlight the extent to which antimicrobial resistance is a global health challenge. A global action plan to tackle the growing problem of resistance to antibiotics and other antimicrobial medicines was endorsed at the Sixty-eighth World Health Assembly in May 2015. One of the key objectives of the plan is to improve awareness and understanding of antimicrobial resistance through effective communication, education and training. This global action plan developed by the World Health Organization was created to combat the issue of antimicrobial resistance and was guided by the advice of countries and key stakeholders. The WHO's global action plan is composed of five key objectives that can be targeted through different means, and represents countries coming together to solve a major problem that can have future health consequences.\n",
"Often in the vanguard on intellectual thoughts about medicine, public health, human nature, and psychiatry, in 1955 Martí-Ibáñez wrote his concerns about the indiscriminate use of antibiotics, \"\"Antibiotic therapy, if indiscriminately used, may turn out to be a medicinal flood that temporarily cleans and heals, but ultimately destroys life itself.\"\", a prediction of the dire consequences that humans are just beginning to face today due to ill-advised uses of antibiotics in dairy and meat production as well as medical practices. In the 1930s he participated in the enactment of legislation liberating women and his views on human sexuality are quoted regularly.\n",
"There has not been a new drug approval from a new class of antibiotics since 1987. While six antibiotics have been approved over the last year, they are all adaptations of existing antibiotic classes. None of the recently approved novel antibiotics represent entirely new classes. Novel antibiotics are crucial as antibiotic resistance poses a global health risk. The World Health Organization, warning of a \"post-antibiotic era\" has stated that antimicrobial resistance (AMR) is a \"problem so serious that it threatens the achievements of modern medicine\".\n"
] |
What animal has the worst common cause of death?
|
Manatees getting hit by boat propellers and bleeding out. They usually get hit multiple times over their lives, and are eventually killed by a particularly brutal strike.
|
[
"In 1993, 25 schools throughout New England, United States participated in a roadkill study involving 1,923 animal deaths. By category, the fatalities were: 81% mammals, 15% bird, 3% reptiles and amphibians, 1% indiscernible. Extrapolating these data nationwide, Merritt Clifton (editor of \"Animal People Newspaper\") estimated that the following animals are being killed by motor vehicles in the United States annually: 41 million squirrels, 26 million cats, 22 million rats, 19 million opossums, 15 million raccoons, 6 million dogs, and 350,000 deer. This study may not have considered differences in observability between taxa (e.g. dead raccoons are easier to see than dead frogs), and has not been published in peer-reviewed scientific literature.\n",
"By the end of March, veterinary organizations reported more than 100 pet deaths amongst nearly 500 cases of kidney failure, and experts expected the death toll to number in the thousands, with one online database already self-reporting as many as 3,600 deaths as of 11 April. The U.S. Food and Drug Administration has received reports of approximately 8500 animal deaths, including at least 1950 cats and 2200 dogs who have died after eating contaminated food, but have only confirmed 14 cases, in part because there is no centralized government database of animal sickness or death in the United States as there are with humans (such as the Centers for Disease Control). For this reason, many sources speculate the full extent of the pet deaths and sicknesses caused by the contamination may never be known. In October, the results of the \"AAVLD survey of pet food-induced nephrotoxicity in North America, April to June 2007,\" were reported, indicating 347 of 486 cases voluntarily reported by 6 June 2007 had met the diagnostic criteria, with most of the cases reported from the United States, but also including cases of 20 dogs and 7 cats reported from Canada.The cases involved 235 cats and 112 dogs, with 61 percent of the cats and 74 percent of the dogs having died. Dr. Barbara Powers, AAVLD president and director of the Colorado State University Veterinary Diagnostic Laboratory, said the survey probably found only a percentage of the actual cases. She also said the mortality rate is not likely to be representative of all cases, because survey respondents had more information to submit for animals that had died. A number of dogs were also reported affected in Australia, with four in Melbourne and a few more in Sydney. No legal action or repercussions have as yet occurred regarding these cases. Dr. Powers elaborated further: “But there absolutely could be more deaths from the tainted pet food... This survey didn’t catch all the deaths that happened. In order to be counted in our survey, you had to meet certain criteria... If someone had a pet that died and they buried it in their back[yard], they weren’t eligible for our survey. We had to have confirmed exposure to the recalled pet food, proof of toxicity, and clinical signs of renal failure. So this is only a percentage of the deaths that are out there. There’s no way to guess how many pets were affected.”\n",
"By the end of March, veterinary organizations reported more than 100 pet deaths among nearly 500 cases of kidney failure, with one online database self-reporting as many as 3,600 deaths as of 11 April. The U.S. Food and Drug Administration has received reports of several thousand cats and dogs who have died after eating contaminated food, but have only confirmed 14 cases, in part because there is no centralized government database of animal sickness or death in the United States, as there are with humans (such as the Centers for Disease Control and Prevention). As a result, many sources speculate the actual number of affected pets may never be known, and experts are concerned that the actual death toll could potentially reach into the thousands.\n",
"Three cases of ABLV in humans have been confirmed, all of them fatal. The first occurred in November 1996, when an animal caregiver was scratched by a yellow-bellied sheath-tailed bat. Onset of a rabies-like illness occurred 4–5 weeks following the incident, with death 20 days later. ABLV was identified from brain tissue by polymerase chain reaction and immunohistochemistry.\n",
"One other genus in the Hexathelidae family has been reported to cause severe symptoms in humans. Severe bites have been attributed to members of the genus \"Macrothele\" in Taiwan, but no fatalities. In other mammals, such as rodents, for example, the effects of funnel web spider venom are much less severe.\n",
"BULLET::::- Initially, there were a number of animal deaths from disease, toxic exposure, maternal killings, and park vehicles. The United States Department of Agriculture investigation found no violations of the Animal Welfare Act for the 29 deaths that happened September 1997 – April 1998.\n",
"BULLET::::- Initially, there were a number of animal deaths from disease, toxic exposure, maternal killings, and park vehicles. The United States Department of Agriculture investigation found no violations of the Animal Welfare Act for the 29 deaths that happened September 1997 – April 1998.\n"
] |
the current global warming is very concerning, but there was global warming about 1000 years ago called the "medieval warm period" - how many other such warming periods have there been and why is the current one so different?
|
The Medieval Warm Period was not as extreme, and it came on much more gradually. The current warming has been rapid and sustained, and we have a clear cause for it: increased greenhouse gases in the atmosphere. We've increased the atmosphere's carbon dioxide content by more than a third, for example, which is a huge change on a planetary scale. The rise closely tracks the growth of human industry, and emissions even dipped briefly after the collapse of the Soviet Union and its industrial capacity.
We have reliable climate records dating back tens of thousands of years from ice cores and other proxies, and we are pretty sure this warming isn't like the others.
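To see why "more than a third" is plausible, here is a quick back-of-the-envelope check in Python. Both ppm values are assumptions of mine (a commonly cited pre-industrial baseline and a roughly late-2010s reading), not figures from the answer above, and the result shifts depending on which year you treat as "now".

```python
# Rough sanity check of the "increased by more than a third" figure.
# Assumed values: ~280 ppm pre-industrial baseline, ~410 ppm late-2010s.
pre_industrial_ppm = 280
modern_ppm = 410

increase = (modern_ppm - pre_industrial_ppm) / pre_industrial_ppm
print(f"CO2 increase since pre-industrial times: {increase:.0%}")  # ~46%
```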
|
[
"In a perspective commenting on MBH99, Wallace Smith Broecker argued that the Medieval Warm Period (MWP) was global. He attributed recent warming to a roughly 1500-year cycle which he suggested related to episodic changes in the Atlantic's conveyor circulation.\n",
"The Holocene climatic optimum (HCO) was a period of warming in which the global climate became warmer. However, the warming was probably not uniform across the world. This period of warmth ended about 5,500 years ago with the descent into the Neoglacial and concomitant Neopluvial. At that time, the climate was not unlike today's, but there was a slightly warmer period from the 10th–14th centuries known as the Medieval Warm Period. This was followed by the Little Ice Age, from the 13th or 14th century to the mid-19th century.\n",
"Notable periods of climate change in recorded history include the Medieval warm period and the little ice age. In the case of the Norse, the Medieval warm period was associated with the Norse age of exploration and Arctic colonization, and the later colder periods led to the decline of those colonies.\n",
"After this relatively short cool interlude the climate ameliorated again and reached between 800 and 1200 almost the values of the Roman Warm Period (used temperature proxies are sediments in the North Atlantic). This warming happened during the High Middle Ages wherefore this event is known as \"Medieval Global Warming\" or the Medieval Warm Period. This warmer climate peaked around 850 AD and 1050 AD, and raised the tree line in Scandinavia and in Russia by 100 to 140 meters; it enabled the Vikings to settle in Iceland and Greenland. During this period the Crusades took place and the Byzantine Empire was eventually pushed back by the rise of the Ottoman Empire.\n",
"The Medieval Warm Period (MWP) also known as the Medieval Climate Optimum, or Medieval Climatic Anomaly was a time of warm climate in the North Atlantic region that was likely related to other warming events in other regions during that time, including China and other areas, lasting from to . Other regions were colder, such as the tropical Pacific. Averaged global mean temperatures have been calculated to be similar to early-mid 20th century warming. Possible causes of the Medieval Warm Period include increased solar activity, decreased volcanic activity, and changes to ocean circulation.\n",
"A literature review by Willie Soon and Sallie Baliunas, published in the relatively obscure journal \"Climate Research\" on 31 January 2003, used data from previous papers to argue that the Medieval Warm Period had been warmer than the 20th century, and that recent warming was not unusual. In March they published an extended paper in \"Energy & Environment\", with additional authors. The Bush administration's Council on Environmental Quality chief of staff Philip Cooney inserted references to the papers in the draft first Environmental Protection Agency \"Report on the Environment\", and removed all references to reconstructions showing world temperatures rising over the last 1,000 years. In the Soon and Baliunas controversy, two scientists cited in the papers said that their work was misrepresented, and the \"Climate Research\" paper was criticised by many other scientists, including several of the journal's editors. On 8 July \"Eos\" featured a detailed rebuttal of both papers by 13 scientists including Mann and Jones, presenting strong evidence that Soon and Baliunas had used improper statistical methods. Responding to the controversy, the publisher of \"Climate Research\" upgraded Hans von Storch from editor to editor in chief, but von Storch decided that the Soon and Baliunas paper was seriously flawed and should not have been published as it was. He proposed a new editorial system, and though the publisher of \"Climate Research\" agreed that the paper should not have been published uncorrected, he rejected von Storch's proposals to improve the editorial process, and von Storch with three other board members resigned. Senator James M. Inhofe stated his belief that \"manmade global warming is the greatest hoax ever perpetrated on the American people\", and a hearing of the United States Senate Committee on Environment and Public Works which he convened on 29 July 2003 heard the news of the resignations.\n",
"The Medieval Warm Period from 950–1250 occurred mostly in the Northern Hemisphere, causing warmer summers in many areas; the high temperatures would only be surpassed by the global warming of the 20th/21st centuries. It has been hypothesized that the warmer temperatures allowed the Norse to colonize Greenland, due to ice-free waters. Outside of Europe there is evidence of warming conditions, including higher temperatures in China and major North American droughts which adversely affected numerous cultures.\n"
] |
what decides whether something will release alpha, beta, or gamma radiation?
|
The type of radiation is determined by the nucleus (or atom) that emits it. Gamma radiation is electromagnetic radiation, part of the same family as radio waves, microwaves and X-rays. Electromagnetic radiation in general is produced when a charge (an electron, say) is accelerated, decelerated, or moved in a circular path (which is also an acceleration, by the way), or when an electron jumps between shells in an atom (different energy states). Gamma rays specifically are emitted when an unstable nucleus drops from an excited state to a lower-energy state.
Alpha radiation consists of helium nuclei (two protons and two neutrons) ejected when a heavy, unstable nucleus undergoes radioactive decay. Beta radiation is similar in origin, except the emitted particles are electrons (or positrons), produced when a neutron in the nucleus converts into a proton (or vice versa).
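As a small illustration of the bookkeeping involved, here is a minimal Python sketch checking that an alpha decay conserves both charge (atomic number) and nucleon count (mass number), using the uranium-238 → thorium-234 example mentioned in the sources below. The nuclide table and helper function are illustrative names of my own.

```python
# (atomic number Z, mass number A) for the nuclides involved
nuclides = {
    "U-238":  (92, 238),
    "Th-234": (90, 234),
    "He-4":   (2, 4),   # the alpha particle
}

def is_valid_alpha_decay(parent, daughter):
    """Alpha decay must conserve charge and nucleon count: parent -> daughter + He-4."""
    zp, ap = nuclides[parent]
    zd, ad = nuclides[daughter]
    za, aa = nuclides["He-4"]
    return zp == zd + za and ap == ad + aa

print(is_valid_alpha_decay("U-238", "Th-234"))  # True: 92 = 90 + 2 and 238 = 234 + 4
```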
|
[
"Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or 'decays' into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of and a mass of . For example, uranium-238 decays to form thorium-234. Alpha particles have a charge , but as a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms – the charge is not usually shown.\n",
"As the atom came to be better understood, the nature of radioactivity became clearer. Some larger atomic nuclei are unstable, and so decay (release matter or energy) after a random interval. The three forms of radiation that Becquerel and the Curies discovered are also more fully understood. Alpha decay is when a nucleus releases an alpha particle, which is two protons and two neutrons, equivalent to a helium nucleus. Beta decay is the release of a beta particle, a high-energy electron. Gamma decay releases gamma rays, which unlike alpha and beta radiation are not matter but electromagnetic radiation of very high frequency, and therefore energy. This type of radiation is the most dangerous and most difficult to block. All three types of radiation occur naturally in certain elements.\n",
"In gamma decay, a nucleus decays from an excited state into a lower energy state, by emitting a gamma ray. The element is not changed to another element in the process (no nuclear transmutation is involved).\n",
"1. α (alpha) radiation—the emission of an alpha particle (which contains 2 protons and 2 neutrons) from an atomic nucleus. When this occurs, the atom's atomic mass will decrease by 4 units and atomic number will decrease by 2.\n",
"The best-known source of alpha particles is alpha decay of heavier ( 106 u atomic weight) atoms. When an atom emits an alpha particle in alpha decay, the atom's mass number decreases by four due to the loss of the four nucleons in the alpha particle. The atomic number of the atom goes down by exactly two, as a result of the loss of two protons – the atom becomes a new element. Examples of this sort of nuclear transmutation are when uranium becomes thorium, or radium becomes radon gas, due to alpha decay.\n",
"Gamma (γ) radiation consists of photons with a wavelength less than 3x10 meters (greater than 10 Hz and 41.4 keV). Gamma radiation emission is a nuclear process that occurs to rid an unstable nucleus of excess energy after most nuclear reactions. Both alpha and beta particles have an electric charge and mass, and thus are quite likely to interact with other atoms in their path. Gamma radiation, however, is composed of photons, which have neither mass nor electric charge and, as a result, penetrates much further through matter than either alpha or beta radiation.\n",
"In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., ) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left.\n"
] |
What would be the effects of a normal diet that was entirely liquids?
|
People survive quite well on liquid diets for long periods of time. Google might turn up more information if you search for [tube feeding](_URL_0_). For people who can't tolerate solid food for any reason, a tube can be placed through the nose or through an incision into the stomach, and fluid given through that tube can meet all nutritional needs.
The lack of solid matter may cause loose stools and discomfort, but that can usually be dealt with by making sure the feeding solution contains enough fiber.
|
[
"A liquid diet is a diet that mostly consists of liquids, or soft foods that melt at room temperature (such as ice cream). A liquid diet usually helps provide sufficient hydration, helps maintain electrolyte balance, and is often prescribed for people when solid food diets are not recommended, such as for people who suffer with gastrointestinal illness or damage, or before or after certain types of medical tests or surgeries involving the mouth or the digestive tract.\n",
"A clear liquid diet, sometimes called a \"surgical liquid diet\" because of its perioperative uses, consists of a diet containing exclusively transparent liquid foods that do not contain any solid particulates. This includes vegetable broth, bouillon (excepting any particulate dregs), clear fruit juices such as filtered apple juice, clear fruit ices or popsicles, clear gelatin desserts, and certain carbonated drinks such as ginger-ale and seltzer water. It excludes all drinks containing milk, but may accept tea or coffee. Typically, this diet contains about 500 calories per day, which is too little food energy for long-term use. \n",
"BULLET::::- Liquid diet: A diet in which only liquids are consumed. May be administered by clinicians for medical reasons, such as after a gastric bypass or to prevent death through starvation from a hunger strike.\n",
"Ruminant animals are those that have a rumen. A rumen is a multichambered stomach found almost exclusively among some artiodactyl mammals, such cattle, deer, and camels, enabling them to eat cellulose-enhanced tough plants and grains that monogastric (i.e., \"single-chambered stomached\") animals, such as humans, dogs, and cats, cannot digest.\n",
"A full or strained liquid diet consists of both clear and opaque liquid foods with a smooth consistency. People who follow this diet may also take liquid vitamin supplements. Some individuals who are told to follow a full-liquid diet are additionally permitted certain components of a mechanical soft diet, such as strained meats, sour cream, cottage cheese, ricotta, yogurt, mashed vegetables or fruits, etc.\n",
"Using fish oil as an example of a food or dietary supplement susceptible to rancidification over various periods of storage, two reviews found effects only on flavor and odor, with no evidence as of 2015 that rancidity causes harm if a spoiled product is consumed.\n",
"A liquid diet is not recommended outside of hospital or medical supervision. Negative side effects include fatigue, nausea, dizziness, hair loss and dry skin which are said to disappear when the person resumes eating.\n"
] |
How do we know which way up a planet is?
|
All planets in the Solar System orbit in almost the same plane, and their axes of rotation are roughly perpendicular to that plane (Uranus, which spins almost on its side, is the main exception). So you can define East as the direction in which the surface moves as the planet rotates, and North as 90° to the left of East.
We also have reference frames and coordinate systems: Cartesian or spherical, centered on the Sun or centered on a planet, inertial or rotating. See [this recent thread](_URL_0_).
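As a rough illustration of that convention, here is a minimal Python/NumPy sketch (the function name and toy numbers are my own, not from the thread linked above): given a planet's spin vector and a point on its surface, East is the direction the surface is moving and North completes a right-handed set. It breaks down exactly at the poles, where "East" is undefined.

```python
import numpy as np

def local_directions(omega, r):
    """Return unit vectors (up, east, north) at surface point `r` of a body
    spinning with angular-velocity vector `omega` (same inertial frame).
    East is the direction of surface motion due to rotation; North is 90
    degrees to its left."""
    up = r / np.linalg.norm(r)
    east = np.cross(omega, up)
    east /= np.linalg.norm(east)      # undefined at the poles (zero vector)
    north = np.cross(up, east)        # completes the right-handed set
    return up, east, north

# Toy check: a point on the equator of a body spinning about +z
up, east, north = local_directions(np.array([0.0, 0.0, 1.0]),
                                   np.array([6.4e6, 0.0, 0.0]))
print(east, north)  # east ~ [0, 1, 0], north ~ [0, 0, 1]
```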
|
[
"Still another method is to first determine the geographic center of the country and from there measure the shortest distance to every other point. All U.S. territory is spread across less than 180° of longitude, so from any spot in the U.S. it is more direct to reach the easternmost point, Point Udall, U.S. Virgin Islands, by traveling east than by traveling west. Likewise, there is not a single point in U.S. territory from which heading east is a shorter route to the westernmost point, Point Udall, Guam, than heading west would be, even accounting for circumpolar routes. The two different Point Udalls are named for two brothers from the Udall family of Arizona; Mo Udall (Guam) and Stewart Udall (Virgin Islands), sons of Chief Justice Levi Stewart Udall of the Arizona Supreme Court, both served as U.S. Congressman.\n",
"If a planet's orbit is nearly perpendicular to the line of vision (i.e. \"i\" close to 90°), a planet can be detected through the transit method. The inclination will then be known, and the inclination combined with \"M\" sin\"i\" from radial-velocity observations will give the planet's true mass.\n",
"Only the first planet is known transit the star; this means that the planet's orbit appear to cross in front of their star as viewed from the Earth's perspective. Its inclination relative to Earth's line of sight, or how far above or below the plane of sight it is, vary by less than one degree. This allows direct measurements of the planet's periods and relative diameters (compared to the host star) by monitoring the planet's transit of the star.\n",
"In geology, the structure of the interior of a planet is often illustrated using a diagram of a cross section of the planet that passes through the planet's center, as in the cross section of Earth at right.\n",
"BULLET::::- The \"inclination\" of a planet tells how far above or below an established reference plane its orbit lies. In the Solar System, the reference plane is the plane of Earth's orbit, called the ecliptic. For extrasolar planets, the plane, known as the \"sky plane\" or \"plane of the sky\", is the plane perpendicular to the observer's line of sight from Earth. The eight planets of the Solar System all lie very close to the ecliptic; comets and Kuiper belt objects like Pluto are at far more extreme angles to it. The points at which a planet crosses above and below its reference plane are called its ascending and descending nodes. The longitude of the ascending node is the angle between the reference plane's 0 longitude and the planet's ascending node. The argument of periapsis (or perihelion in the Solar System) is the angle between a planet's ascending node and its closest approach to its star.\n",
"A limitation of the radial velocity method used to discover the planet is that only a lower limit on the mass can be obtained. Further astrometric observations with the Hubble Space Telescope on the outer planet 55 Cancri d suggest that planet is inclined at 53° to the plane of the sky; but innermost b and e are inclined at 85°. Planet c's inclination is unknown.\n",
"In geography, the location of any point on the Earth can be identified using a \"geographic coordinate system\". This system specifies the latitude and longitude of any location in terms of angles subtended at the centre of the Earth, using the equator and (usually) the Greenwich meridian as references.\n"
] |
Is it true that a third of the knights in the battle of Agincourt were over 50?
|
Well, life expectancy is a very skewed statistic, because infant mortality deflates it substantially. The upper classes could expect to live to the beginning of what we'd call old age (their 60s; 70 and above was more of a gamble) if they weren't taken ill or killed in battle. It's possible (there would have been plenty of knights over the age of 50 able to take part), but it seems unlikely. Maybe your source meant that a third of the knights in England were over 50 at the time of Agincourt?
|
[
"This story does not stand up well under scrutiny. Walter would have been around seventy years old at the time of the battle of Tewkesbury in 1471, and rather old to be taking up the sword instead of his accustomed musical instrument. He is not listed among the knights created by Edward IV before or after the battle. Bluemantle Pursuivant reported in 1975 that \"I find no trace of Sir Walter in the official records of the College of Arms\", and that \"the arms in Burke's \"Commoners\" are wrong\". \n",
"In Book IV, there are only two knights that have ever successfully held against Lancelot: Sir Tristan and Gareth. This was always under conditions where one or both parties were unknown by the other, for these knights loved each other \"passingly well\". Gareth was knighted by Lancelot himself when he took upon him the adventure on behalf of Lynette. However, in Book VIII: \"The Death of Arthur\", the unarmed Gareth and his brother Gaheris are killed accidentally by Lancelot during the rescue of Guinevere. This leads to the final tragedy of Arthur's Round Table; Gawain refuses to allow King Arthur to accept Lancelot's sincere apology for the deaths of his two brothers. Lancelot genuinely mourns the death of Gareth, whom he loved closely like a son or younger brother. King Arthur is forced by Gawain and Mordred's insistence to go to war against Lancelot. Mordred's grief is largely faked, driven by his desire to become king. This leads to the splitting of the Round Table, Mordred's treachery in trying to seize Guinevere and the throne, Gawain's death from an old unhealed wound, and finally, Arthur and Mordred slaying each other at the Battle of Camlann. \n",
"The litany of injuries that Knights had suffered through his career began to catch up with him, and from 1979 to 1981, he played in only 26 out of a possible 66 games. Amid rumours of retirement, Knights rebounded to play impressive football in his final years. In 1983, he booted six goals in the Qualifying Final to guide Hawthorn to a thrilling four-point win against Fitzroy, and was again among the best players on the field as the Hawks crushed in the Grand Final. \n",
"The knights at the Round Table discuss the mysterious Sir Provençal the Gaul (Gaulois). He is in fact their own comrade Perceval the Welshman (Gallois), who is not capable of giving his own name without making a mistake.\n",
"In 1297 Marmaduke achieved some fame at the Battle of Stirling Bridge by a heroic escape. Over 100 English knights had been trapped, together with several thousand infantry, on the far side of the river, and were being slaughtered by the Scots. Thweng managed to fight his way back across the bridge and he thus became the only knight of all those on the far side of the river to survive the battle. Following the rout, Thweng with William FitzWarin were appointed castellans of Stirling Castle by the English leader John de Warenne, 6th Earl of Surrey. The castle was quickly starved into submission, and Thweng and FitzWarin were taken prisoner to Dumbarton Castle. He was summoned to Parliament in 1307, thus becoming Baron Thweng.\n",
"The brother of the two knights slain by Lancelot was Gawain. He had been a knight to King Arthur for years and was one of Arthur's most trusted allies. Gawain's anger for Lancelot was deep and insatiable. Nothing would end his anger except the death of Lancelot. Arthur, who now controlled many fewer knights than before, could not risk losing his greatest ally, so, Arthur and his remaining troops camped around the stronghold where Lancelot now lived in France. Guinevere had long since been returned to Camelot after Lancelot vowed to all that no affair had ever taken place. Guinevere was safe because of Lancelot's lie, but Gawain's anger was still demanded that Lancelot die. The siege lasted weeks. During the siege, Arthur received a note that said that Mordred had told the people that Arthur had died in battle and that Mordred was now King. The note also said that Mordred had vowed to take Guinevere as his wife.\n",
"The Knights became the first British team to reach the finals of the Continental Cup in January 2001, where they narrowly missed taking the title at their first attempt. Their run included a surprise 4-1 win over Anschutz stablemates the Munich Barons, and only a 1-0 loss to eventual champions Zurich Lions denied them further glory. Their silver medal was considered a major success for a British side. \n"
] |
why do fireworks look so bad on film/video, yet look good irl?
|
Fireworks can look great on video if you have a good enough camera. Cheap cameras, like the ones in our phones, can't handle low-light conditions very well and have a hard time focusing on the rapid flashes coming from fireworks. The camera keeps trying to autofocus but can't lock on, resulting in a blurry image.
|
[
"Novelty fireworks typically produce a much weaker explosion and sound. In some countries and areas where fireworks are illegal to use, they still allow these small, low grade fireworks to be used. A few examples include:\n",
"Availability and use of consumer fireworks are hotly debated topics. Critics and safety advocates point to the numerous injuries and accidental fires that are attributed to fireworks as justification for banning or at least severely restricting access to fireworks. Complaints about excessive noise created by fireworks and the large amounts of debris and fallout left over after shooting are also used to support this position. There are numerous incidents of consumer fireworks being used in a manner that is supposedly disrespectful of the communities and neighborhoods where the users live.\n",
"Fireworks was created specifically for web production. Since not every user may be in possession of a fast Internet connection, it is at the best interest of the web developers to optimize the size of their digital contents. In terms of image compression, Fireworks has a better compression rate than Photoshop with JPEG, PNG and GIF images.\n",
"A common misconception about professional fireworks displays is that skyrockets are used to propel the pyrotechnic effects into the air. In reality, skyrockets are more widely used as a consumer item. Professional fireworks displays utilize mortars to fire aerial shells into the air, not rockets.\n",
"Improper use of fireworks may be dangerous, both to the person operating them (risks of burns and wounds) and to bystanders; in addition, they may start fires after landing on flammable material. For this reason, the use of fireworks is generally legally restricted. Display fireworks are restricted by law for use by professionals; consumer items, available to the public, are smaller versions containing limited amounts of explosive material to reduce potential danger.\n",
"The pollution of fireworks on the environment is becoming more and more apparent. Fireworks cause the most serious pollution in the environment in the shortest time. Although fireworks are not one of the most common sources of pollution in the atmosphere, they are one of the major causes of air pollutants ozone, sulphur, dioxide and nitrogen oxides, as well as aerosols. Fireworks contain a mass of tiny metal particles. These metals are burned to produce color for fireworks: copper for blue, strontium or lithium for red, and barium compounds for bright green or white. When fireworks are set off in the air, a large number of incomplete decomposition or degradation of metal particles, dangerous toxins, harmful chemicals remain in the air for a long time, resulting in air pollution.\n",
"The \"Los Angeles Times\" said the film was \"as appetizing as a piece of stale pre-fab pizza... lengthy and boring... never were so many fireworks set off in such a dud of a movie.\". The \"Chicago Tribune\" called it a \"tedious and terrible mess... a disastrous dud.\"\n"
] |
how 2 wifi routers on the same channel can interoperate without completely jamming each other's signal?
|
They really can jam each other: if two transmitters send on the same channel at the same time, the packets collide and neither is readable. Wi-Fi doesn't prevent that outright; it manages it.
The access points can coexist because traffic is sent as short packet bursts and every radio uses listen-before-talk (CSMA/CA): it senses the channel, waits until it's quiet, and then waits a further random backoff before transmitting. Packets carry checksums and get acknowledged, so anything garbled by a collision is simply retransmitted. Each access point ignores frames addressed to the other network (different SSID/BSSID), but it can't transmit while they're on the air. With light traffic, each network may only need a small fraction of the airtime, so the sharing is barely noticeable; push heavy, sustained traffic on both and they end up splitting the channel's capacity between them.
The same problem exists within a single network: two clients of one access point can also collide, which is exactly why carrier sensing, random backoff, and retransmission are built into the standard. Collisions are reduced but never eliminated entirely, so congestion hurts performance without killing communication outright.
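For anyone who wants to poke at the idea, here is a toy Python simulation of random-backoff contention between two stations. It is emphatically not the real 802.11 algorithm (no carrier sensing, inter-frame spacing, ACK timing or hidden nodes), and the function name and parameters are made up for illustration, but it shows how short packets plus random backoff plus retransmission let two contenders share one channel.

```python
import random

def simulate(num_stations=2, packets_each=1000, cw_min=16, cw_max=1024):
    """Crude sketch of contention: each station with traffic picks a random
    backoff slot from its contention window; the lowest slot wins the channel,
    ties count as collisions and the colliders double their window."""
    sent, collisions = 0, 0
    cw = [cw_min] * num_stations
    remaining = [packets_each] * num_stations
    while any(remaining):
        backoffs = {i: random.randrange(cw[i])
                    for i in range(num_stations) if remaining[i]}
        winner_slot = min(backoffs.values())
        winners = [i for i, b in backoffs.items() if b == winner_slot]
        if len(winners) == 1:            # clean transmission, window resets
            i = winners[0]
            remaining[i] -= 1
            cw[i] = cw_min
            sent += 1
        else:                            # collision: back off more aggressively
            collisions += 1
            for i in winners:
                cw[i] = min(cw[i] * 2, cw_max)
    return sent, collisions

print(simulate())  # e.g. (2000, <a modest number of collisions>)
```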
|
[
"In the case of WiMAX, Uplink Collaborative MIMO is spatial multiplexing with two different devices, each with one antenna. These transmitting devices are collaborating in the sense that both devices must be synchronized in time and frequency so that the intentional overlapping occurs under controlled circumstances. The two streams of data will then interfere with each other. As long as the signal quality is sufficiently good and the receiver at the base station has at least two antennas, the two data streams can be separated again. This technique is sometimes also termed Virtual Spatial Multiplexing.\n",
"Every node in the network is made up by two or more radio interfaces, so that is possible receiving the signal and replaying it to one or more node on a different radio frequency, in order to decrease interferences and increase throughput.\n",
"A connection has two channels, one per direction. Each channel consists of two wires carrying strobe and data. The strobe line changes state whenever the data line starts a new bit with the same value as the previous bit. This scheme makes the links self-clocking, able to adapt automatically to different speeds.\n",
"The nature of a single-frequency network (SFN) is such that the transmitters in a network must broadcast the same signal at the same time. To achieve synchronization, the broadcaster must counter any differences in propagation time incurred by the different methods and distances involved in carrying the signal from the multiplexer to the different transmitters. This is done by applying a delay to the incoming signal at the transmitter based on a timestamp generated at the multiplexer, created taking into account the maximum likely propagation time, with a generous added margin for safety. Delays in the audio encoder and the receiver due to digital processing (e.g. deinterleaving) add to the overall delay perceived by the listener. The signal is delayed, usually by around 1 to 4 seconds and can be considerably longer for DAB+. This has disadvantages:\n",
"Crossband operation is sometimes used by amateur radio operators. Rather than taking it in turns to transmit on the same frequency, both operators can transmit at the same time but on different bands, each one listening to the frequency that the other is using to transmit. A variation on this procedure includes establishing contact on one frequency and then changing to a pair of other frequencies to exchange messages.\n",
"In a departure from both 10BASE-T and 100BASE-TX, 1000BASE-T and faster use all four cable pairs for simultaneous transmission in both directions through the use of telephone hybrid-like signal handling. For this reason, there are no dedicated transmit and receive pairs. From 1000BASE-T onwards the physical medium attachment (PMA) sublayer provides identification of each pair and usually continues to work even over cable where the pairs are unusually swapped or crossed.\n",
"BULLET::::- Routers: Routers and switches must be positioned such that no single piece of network hardware controls all network access to a given host. In particular, it is not uncommon to see multiple Internet uplinks all converge on a single edge router. In such a configuration, the loss of that single router disconnects the Internet uplink, despite the fact that multiple ISPs are otherwise in use.\n"
] |
how do you steer a gunship? or a clipper? or large sailship in general?
|
On most boats, the rudder is, in fact, the principal steering device.
The sails on sailing vessels generally do not "steer" the boat. However, they must be re-positioned when the boat changes direction or when the wind direction changes, to maximize the thrust provided by the sails, using the available wind.
For non-sail vessels, there is often still a rudder, although steerable propellers or thrusters are often also used. Google "Azipod" and start reading, for more info.
|
[
"A steerer can also use the steering oar to adjust the position of the boat by cranking. When a steerer cranks the steering oar, the stern of the boat moves either to the left or right, spinning the boat. This is typically executed to turn the boat around at practice or to ensure a boat is lined up straight and pointing directly down a racecourse.\n",
"There is also the barrel type rudder, where the ship's screw is enclosed and can be swiveled to steer the vessel. Designers claim that this type of rudder on a smaller vessel will answer the helm faster.\n",
"The ships had both bow and stern doors leading onto the main vehicle deck, making them roll-on/roll-off, combined with ramps that led to upper and lower vehicle decks. Thanks to their shallow draught, they could beach themselves and use the bow doors for speedy unloading of troops and equipment. The ships also had helicopter decks on both the upper vehicle deck and behind the superstructure.\n",
"BULLET::::- Halyards (sometimes haulyards), are used to raise sails and control luff tension. In large yachts the halyard returns to the deck but in small racing dinghies the head of the sail is attached by a short line to the head of the mast while the boat is lying on its gunwale.\n",
"It is steered with 2 quarter rudders, which are fixed to a set of heavy crossbeams in a way to enable a quick emergency release. The helmsmen stood on the outboard galleries. There is a cramped cabin for the captain below the poop deck. The vessel has 2 to 3 masts, both were tripod with the rear legs fixed to heavy tabernacles by means of a horizontal spar round which they can revolve. If the foreleg comes adrift from the hook that holds it in place, the mast can be lowered easily. The sails are tanja and made with \"karoro\" matting. With European influence in the latter centuries, western-styled sails can also be used. In the past, Makassarese sailor may sail them as far as New Guinea and Singapore.\n",
"It is possible to arrange a Highfield lever to work two backstays or shrouds by mounting the lever transversely so that it is thrown from side to side rather than fore and aft, tensioning the rigging on one side of the boat as it relaxes the other.\n",
"The ship's steering gear consists of a steering unit and twin semi-balanced underhung rudders. There is an emergency steering station in the superstructure in the event of damage to the bridge and they can also be operated by hand from the steering gear compartment. To improve the ship's performance in a seaway, they are fitted with a B+V Simplex Compact stabiliser system.\n"
] |
How definitive are the DNA results on the Richard III skeleton?
|
You didn't really elaborate on what you mean, but I'm guessing you want to know how confident we can be that the skeleton they've found is King Richard?
Here's an overview of the evidence:
DNA comparisons:
* Geneticists were able to extract and sequence mitochondrial DNA from the skeleton
* Mitochondrial DNA is passed down from mother to child unchanged except for the occasional mutation
* So, by comparing the skeleton's mitochondrial DNA to that of living people who descend from King Richard's mother along an unbroken female line, we can check whether the skeleton carries the mitochondrial haplogroup King Richard would be expected to have.
* Genealogists were able to track down two direct matrilineal descendants of Anne of York (Richard III's sister), both of whom provided DNA samples for mitochondrial DNA testing. One of the descendants wishes to remain anonymous; the second is a Canadian by the name of Michael Ibsen.
* Having two people means their samples can be compared against each other. A match between them makes us more confident that we are predicting King Richard's haplogroup correctly, because it largely rules out an anomaly (such as an unknown adoption) in either descendant's background.
* The two descendants do indeed match, and they belong to a subgroup of haplogroup J. Luckily it is fairly rare: somewhere between 1 and 2 percent of the population belongs to this particular subgroup. If the two living descendants had belonged to a very common haplogroup, it would have been far more likely that any match with the skeleton was purely coincidental. (A rough numerical sketch of this reasoning is given at the end of this answer.)
* Mitochondrial DNA comparison of the three people can be found [here](_URL_0_) -- it's a virtually perfect match.
So, that's the particulars of the DNA evidence that they have. However, there's additional evidence which makes them more sure that it's King Richard, and not some random haplogroup J guy:
* Records say he was buried at a church in Leicester, 100 miles north of London. Archaeologist Richard Buckley identified a possible location of the grave through map analysis. They looked where his analyses predicted that King Richard would be, and they found the skeleton.
* Radiocarbon dating estimates that the death occurred between 1455 and 1540 (Richard died in 1485)
* The skeleton they found appears to have died in battle, and there's no coffin or anything like that, consistent with an enemy burial.
* Various head injuries that the skeleton suffered are consistent with the way King Richard's death in battle was described
* The remains display signs of scoliosis, consistent with contemporary descriptions of Richard. Other features of the skeleton are also consistent with Richard, such as the age. He died at age 32 and the skeleton they found died "in his late 20s to late 30s"
The DNA evidence alone or the circumstantial evidence alone would not have been enough to make a strong conclusion, but looking at everything together is pretty convincing. The research team is not saying that they are 100% sure they have found King Richard, but rather that they:
> can now confirm that the body is that of Richard III "beyond a reasonable doubt"
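To make the "everything together" point concrete, here is a rough likelihood sketch in Python. Apart from the 1–2% haplogroup frequency mentioned above, every number below is an illustrative guess of mine, not a figure from the research team, and real forensic identification is far more careful than this.

```python
# Hypothetical back-of-the-envelope numbers to illustrate the reasoning.
p_match_if_not_richard = 0.015   # ~1-2% of people carry this mtDNA subgroup
p_match_if_richard = 1.0         # a true maternal relative is expected to match

likelihood_ratio = p_match_if_richard / p_match_if_not_richard
print(f"mtDNA match alone favours 'it is Richard' by a factor of ~{likelihood_ratio:.0f}")

# Illustrative prior odds from the circumstantial evidence (right location,
# right date range, battle wounds, scoliosis) -- a guess, not a measured value.
prior_odds = 20
posterior_odds = prior_odds * likelihood_ratio
print(f"Combined, the odds are roughly {posterior_odds:.0f} to 1")
```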
|
[
"In February 2016, French, Danish and Norwegian researchers opened the lead boxes in order to conduct DNA analysis of the remains. Radiocarbon dating of the remains showed that neither skeleton could be that of Richard I or Richard II. One skeleton dated from the third century BCE, the other from the eighth century AD, both long before the lifetimes of Richard I and Richard II.\n",
"On 4 February 2013, the University of Leicester confirmed that the skeleton was that of Richard III. The identification was based on mitochondrial DNA evidence, soil analysis, and dental tests, and physical characteristics of the skeleton consistent with contemporary accounts of Richard's appearance. Osteoarchaeologist Jo Appleby commented: \"The skeleton has a number of unusual features: its slender build, the scoliosis, and the battle-related trauma. All of these are highly consistent with the information that we have about Richard III in life and about the circumstances of his death.\"\n",
"Professor Michael Hicks, a Richard III specialist, has been particularly critical of the use of the mitochondrial DNA to argue that the body is Richard III's, stating that \"any male sharing a maternal ancestress in the direct female line could qualify\". He also criticises the rejection by the Leicester team of the Y chromosomal evidence, suggesting that it was not acceptable to the Leicester team to conclude that the skeleton was anyone other than Richard III. He argues that on the basis of the present scientific evidence \"identification with Richard III is more unlikely than likely\". However, Hicks himself draws attention to the contemporary view held by some that Richard III's grandfather, Richard, Earl of Cambridge, was the product of an illegitimate union between Cambridge's mother Isabella of Castile (a bastard daughter of Pedro the Cruel of Castile) and John Holland (brother in law of Henry IV of England), rather than Edmund of Langley, 1st Duke of York (Edward III's fourth son). If that was the case then the Y chromosome discrepancy with the Beaufort line would be explained but obviously still fail to prove the identity of the body. Hicks suggests alternative candidates descended from Richard III's maternal ancestress for the body (e.g. Thomas Percy, 1st Baron Egremont, and John de la Pole, 1st Earl of Lincoln) but does not provide evidence to support his suggestions. Philippa Langley refutes Hicks's argument on the grounds that he does not take into account all the evidence.\n",
"On 4 February 2013, the University of Leicester confirmed that the skeleton was beyond reasonable doubt that of King Richard III. This conclusion was based on mitochondrial DNA evidence, soil analysis, and dental tests (there were some molars missing as a result of caries), as well as physical characteristics of the skeleton which are highly consistent with contemporary accounts of Richard's appearance. The team announced that the \"arrowhead\" discovered with the body was a Roman-era nail, probably disturbed when the body was first interred. However, there were numerous perimortem wounds on the body, and part of the skull had been sliced off with a bladed weapon; this would have caused rapid death. The team concluded that it is unlikely that the king was wearing a helmet in his last moments. Soil taken from the remains was found to contain microscopic roundworm eggs. Several eggs were found in samples taken from the pelvis, where the king's intestines were, but not from the skull and only very small numbers were identified in soil surrounding the grave. The findings suggest that the higher concentration of eggs in the pelvic area probably arose from a roundworm infection the King suffered in his life, rather than from human waste dumped in the area at a later date, researchers said. The Mayor of Leicester announced that the king's skeleton would be re-interred at Leicester Cathedral in early 2014, but a judicial review of that decision delayed the reinterment for a year. A museum to Richard III was opened in July 2014 in the Victorian school buildings next to the Greyfriars grave site.\n",
"The age of the bones at death matched that of Richard when he was killed; they were dated to about the period of his death and were mostly consistent with physical descriptions of the king. Preliminary DNA analysis showed that mitochondrial DNA extracted from the bones matched that of two matrilineal descendants, one 17th-generation and the other 19th-generation, of Richard's sister Anne of York. Taking these findings into account along with other historical, scientific and archaeological evidence, the University of Leicester announced on 4 February 2013 that it had concluded beyond reasonable doubt that the skeleton was that of Richard III.\n",
"On 5 February 2013 Professor Caroline Wilkinson of the University of Dundee conducted a facial reconstruction of Richard III, commissioned by the Richard III Society, based on 3D mappings of his skull. The face is described as \"warm, young, earnest and rather serious\". On 11 February 2014 the University of Leicester announced the project to sequence the entire genome of Richard III and one of his living relatives, Michael Ibsen, whose mitochondrial DNA confirmed the identification of the excavated remains. Richard III thus became the first ancient person of known historical identity to have their genome sequenced.\n",
"On 12 September, it was announced that the skeleton discovered during the search might be that of Richard III. Several reasons were given: the body was of an adult male; it was buried beneath the choir of the church; and there was severe scoliosis of the spine, possibly making one shoulder higher than the other (to what extent depended on the severity of the condition). Additionally, there was an object that appeared to be an arrowhead embedded in the spine; and there were perimortem injuries to the skull. These included a relatively shallow orifice, which is most likely to have been caused by a rondel dagger, and a scooping depression to the skull, inflicted by a bladed weapon, most probably a sword. Additionally, the bottom of the skull presented a gaping hole, where a halberd had cut away and entered it. Forensic pathologist Dr Stuart Hamilton stated that this injury would have left the individual's brain visible, and most certainly would have been the cause of death. Dr Jo Appleby, the osteo-archaeologist who excavated the skeleton, concurred and described the latter as \"a mortal battlefield wound in the back of the skull\". The base of the skull also presented another fatal wound in which a bladed weapon had been thrust into it, leaving behind a jagged hole. Closer examination of the interior of the skull revealed a mark opposite this wound, showing that the blade penetrated to a depth of . In total, the skeleton presented ten wounds: four minor injuries on the top of the skull, one dagger blow on the cheekbone, one cut on the lower jaw, two fatal injuries on the base of the skull, one cut on a rib bone, and one final wound on the pelvis, most probably inflicted after death. It is generally accepted that postmortem, Richard's naked body was tied to the back of a horse, with his arms slung over one side and his legs and buttocks over the other. This presented a tempting target for onlookers, and the angle of the blow on the pelvis suggests that one of them stabbed Richard's right buttock with substantial force, as the cut extends from the back all the way to the front of the pelvic bone and was most probably an act of humiliation. It is also possible that Richard suffered other injuries which left no trace on the skeleton.\n"
] |
What was the anti masonic party and what happened to them?
|
They were a political party formed in the wake of public outcry over an incident in which some Masons in New York state were accused of kidnapping and possibly killing a fellow Mason (named William Morgan) who had published an exposé of the fraternity's initiation rituals.
This was in the 1820s.
The Anti-Masonic Party was the first third party in US history, and its presidential candidate even carried a state (Vermont) in the 1832 election.
However, after its presidential bid failed, the party began to unravel. With the damage to Freemasonry done, it proved too divided to last, and most of its members had been absorbed into the Whig Party by 1840.
Freemasonry would not fully recover until later in the century, during a period that led to what's now called the Golden Age of Fraternalism, which spawned countless fraternities modeled on Freemasonry and also saw Freemasonry itself return to, and in many ways surpass, its former strength.
That period lasted until the early part of the 20th century; membership then declined (especially during the Depression) and didn't rebound until after World War II.
|
[
"The Anti-Masonic Party, also known as the Anti-Masonic Movement, was the first third party in the United States. It strongly opposed Freemasonry as a single-issue party and later aspired to become a major party by expanding its platform to take positions on other issues. After emerging as a political force in the late 1820s, most of the Anti-Masonic Party's members joined the Whig Party in the 1830s and the party disappeared after 1838. \n",
"The Anti-Masonic Party was formed in Upstate New York in February 1828. Anti-Masons were opponents of Freemasonry, believing that it was a corrupt and elitist secret society which was ruling much of the country in defiance of republican principles. Many people regarded the Masonic organization and its adherents involved in government as corrupt.\n",
"After the negative views of Freemasonry among a large segment of the public began to wane in the mid 1830s, the Anti-Masonic Party had begun to disintegrate. Its leaders began to move one by one to the Whig party. Party leaders met in September 1837 in Washington, D.C., and agreed to maintain the party. The third Anti-Masonic Party National Convention was held in Philadelphia on November 13-14, 1838. By this time, the party had been almost entirely supplanted by the Whigs. The delegates unanimously voted to nominate William Henry Harrison for president (who the party had supported for president the previous election along with Francis Granger for Vice President) and Daniel Webster for Vice President. However, when the Whig National Convention nominated Harrison with John Tyler as his running mate, the Anti-Masonic Party did not make an alternate nomination and ceased to function and was fully absorbed into the Whigs by 1840.\n",
"Freemasonry in the United States faced political pressure following the 1826 kidnapping of William Morgan by Freemasons and his subsequent disappearance. Reports of the \"Morgan Affair\", together with opposition to Jacksonian democracy (Andrew Jackson was a prominent Mason), helped fuel an Anti-Masonic movement. The short-lived Anti-Masonic Party was formed, which fielded candidates for the presidential elections of 1828 and 1832.\n",
"The Anti-Masonic Party held a third national nominating convention at Temperance Hall in Philadelphia on November 13–14, 1838. By this time, the party had been almost entirely supplanted by the Whigs. The Anti-Masons unanimously nominated William Henry Harrison for president and Daniel Webster for vice president in the 1840 election. When the Whig National Convention nominated Harrison with John Tyler as his running mate, the Anti-Masonic Party did not make an alternate nomination and ceased to function, with most adherents being fully absorbed into the Whigs by 1840.\n",
"The Anti-Masonic movement gave rise to or expanded the use of many innovations which became accepted practice among other parties, including nominating conventions and party newspapers. In addition, the Anti-Masons aided in the rise of the Whig Party as the major alternative to the Democrats, with conventions, newspapers and Anti-Masonic positions on issues including internal improvements and tariffs being adopted by the Whigs.\n",
"After the negative views of Freemasonry among a large segment of the public began to wane in the mid 1830s, the Anti-Masonic Party began to disintegrate. Some of its members began moving to the Whig Party, which had a broader issue base than the Anti-Masons. The Whigs were also regarded as a better alternative to the Democrats.\n"
] |
Did the US ever try to convert Filipinos to Protestantism during their colonisation of the country?
|
After the US colonized the Philippines, the Catholic Church was disestablished and was no longer the official religion. When that happened, there was a large influx of Protestant missionaries of all denominations into the Philippines. Today, Protestants make up around 10% of the total population of the Philippines, about 9 million people. While Protestantism was introduced to the Philippines during the period of US colonialism, it wasn't necessarily due to a push from the US government; it was due more to missionaries acting opportunistically after the disestablishment of the Catholic Church.
|
[
"During the early part of the United States governance in the Philippines, there was a concerted effort to convert Filipinos into Protestants. As Filipinos began to migrate to the United States, Filipino Roman Catholics were often not embraced by their American Catholic brethren, nor were they sympathetic to a Filipino-ized Catholicism, in the early 20th century. This led to creation of ethnic-specific parishes; one such parish was St. Columban's Church in Los Angeles. In 1997, the Filipino oratory was dedicated at the Basilica of the National Shrine of the Immaculate Conception, owing to increased diversity within the congregations of American Catholic parishes. The first-ever American Church for Filipinos, San Lorenzo Ruiz Church in New York City, is named after the first saint from the Philippines, San Lorenzo Ruiz. This was officially designated as a church for Filipinos in July 2005, the first in the United States, and the second in the world, after a church in Rome.\n",
"The Catholic Filipinos make up the great majority (over 70%) of the Southern Philippine population. They are relatively newcomers to the area; the first wave of Christian migrants came in the seventeenth century when the Spaniards sought to populate Zamboanga, Jolo, Dapitan and other areas by encouraging people from Luzon and the Visayas to settle there. In the nineteenth century Spanish policy found considerable success in encouraging migrations to Iligan and Cotabato.\n",
"Protestantism arrived in the Philippines with the take-over of the islands by Americans at the turn of the 20th century. Nowadays, they comprise about 10%–15% of the population with an annual growth rate of 10% since 1910 and constitute the largest Christian grouping after Roman Catholicism. In 1898, Spain lost the Philippines to the United States. After a bitter fight for independence against its new occupiers, Filipinos surrendered and were again colonized. The arrival of Protestant American missionaries soon followed. Protestant church organizations established in the Philippines during the 20th century include the following:\n",
"In the aftermath of the Philippine Revolution against Spain and the Philippine–American War which immediately followed, the Protestant denomination, first introduced by the new American colonial masters and aided by the newly arrived American teachers, the Thomasites, was gaining a foothold among Filipinos because of the then strong anti-Spanish Friar sentiment existing at that time. Due to the then very small number of Catholic educational institutions in the country, the then American Archbishop of Manila Jeremiah James Harty, himself an alumnus of a De La Salle Christian Brothers school in St. Louis, Missouri, would appeal to the Superior-General of the Christian Brothers in 1905 for the establishment of a De La Salle school in the Philippines. While there was a growing pressure for a De La Salle school, Archbishop Harty's request was rejected, because of the Christian Brothers' lack of funds. Nonetheless, Harty continued to appeal to Pope Pius X for the establishment of additional Catholic schools in the country.\n",
"The coming of the Americans in the early 20th century when the Philippines was ceded by Spain to the United States through the 1898 Treaty of Paris brought with them the Protestant religion and Iloilo is one of the first places where they came and started a mission in the Philippines. During the American occupation, the Philippine islands were divided to different Protestant missions and Western Visayas came to the jurisdiction of the Baptists. Baptist missionaries came although other Protestant sects came also especially the Presbyterians and they established numerous institutions. The Presbyterians established the Iloilo Mission Hospital in 1901, the first Protestant and American founded hospital in the country while the Baptists established the Jaro Evangelical Church, the first Baptist church in the islands, and the Central Philippine University in 1905, which was founded by William Valentine through a grant given by the American industrialist, oil magnate and philanthropist John D. Rockefeller as the \"first university in the City of Jaro\" and also the first Baptist founded and second American university in Asia.\n",
"Prior when the Philippines was ceded to the United States administration by Spain through the Treaty of Paris (1898), the Americans brought their faith, the Protestantism. A comity agreement with Protestant American churches and sects was created to divide the Philippine islands for missionary works and to avoid future conflicts with different churches. Western Visayas came to the jurisdictions of the Baptists (Northern Baptist).\n",
"The United States colonization of the Philippine islands, with Iloilo as one of the firsts American colonial outposts in which they brought their faith the Protestantism, paved the way in founding of numerous institutions that mark Iloilo's significance and important contribution in the history of American colonial era in the country includes the John D. Rockefeller funded Central Philippine University, the first Baptist and second American and Protestant university in the Philippines and in Asia; Iloilo Mission Hospital, the first Protestant and American hospital in the Philippines; Jaro Evangelical Church, the first Baptist and second Protestant church in the Philippines; Jaro Adventist Center, the first organized Adventist church in the Philippines; and Convention of Philippine Baptist Churches, the first organized Baptist churches union in the Philippines.\n"
] |
what's the difference b/w high quality and low quality meats?
|
Meat from an animal that received high-quality feed is more chemically varied, and has more flavor. This is particularly noticeable in mild-tasting meat like chicken.
High-quality beef typically has more fat mixed throughout (an effect called "marbling") which creates a richer taste and more delicate texture.
|
[
"Consisting of low-quality rib meat, described as a \"tough, scraggy meat\", if not well cooked, In recent years their high fat content has made them unpopular in many Western countries, although they are widely used as döner meat in Europe. \n",
"Beef quality grades - A quality grade is a composite evaluation of factors that affect palatability of meat (tenderness, juiciness, and flavor). These factors include carcass maturity, firmness, texture, and color of lean, and the amount and distribution of marbling within the lean. Beef carcass quality grading is based on (1) degree of marbling and (2) degree of maturity.\n",
"Berkshire pork, prized for juiciness, flavour, and tenderness, is pink-hued and heavily marbled. Its high fat content makes it suitable for long cooking and high-temperature cooking. The meat also has a slightly higher pH, according to food science professor Kenneth Prusa of Iowa State University. Increased pH makes the meat darker, firmer, and more flavorful. High pH is a greater determinant than fat content in the meat's overall flavor characteristics. The Japanese have bred the Kurobuta branch of the Berkshire breed for increased fineness in the meat and better marbling. Pigs' fat stores many of the characteristics of the food that they eat. Berkshire pigs are usually free-ranging, often supplemented with a diet of corn, nuts, clover, apples, or milk.\n",
"Chuck short ribs tend to be meatier than the other two types of ribs, but they are also tougher due to the more extensive connective tissues (collagen and reticulin) in them. Plate short ribs tend to be fattier than the other two types.\n",
"Beef steak is graded for quality, with higher prices for higher quality. Generally, the higher the quality, the more tender the beef, the less time is needed for cooking, or the better the flavor. For example, beef tenderloin is the most tender and wagyu, such as Kobe beef from Japan, is known for its high quality and commands a high price. Steak can be cooked relatively quickly compared to other cuts of meat, particularly when cooked at very high temperatures, such as by broiling or grilling.\n",
"Grades are determined based on an animal's fat content and body condition. The most common grades, from best to worst, are \"breakers\" (fleshy, body condition 7 or above), \"boners\" (body condition 5 to 7), \"lean\", and \"light\" (thin, body condition 1 to 4). Carcasses rated as lean or light often are sold for less per pound, as less meat is produced from the carcass despite processing costs remaining similar to those of higher grade carcasses.\n",
"Like other types of pig fat, fatback may be rendered to make a high quality lard, and is one source of salt pork. Finely diced or coarsely ground fatback is an important ingredient in sausage making and in some meat dishes.\n"
] |
Why do we always put reactive materials in glass beakers/flasks/graduated cylinders etc.?
|
In a nutshell, glass is very stable, and will not react easily with most compounds.
The glass stays intact, the chemical stays the same, everyone is happy.
However, some reagents are better kept in plastic containers such as polyethylene, or even in quartz vessels, because glass is not a magically non-reactive substance either.
|
[
"Until 2010, no organic strong glass formers were known. Strong glass formers can be shaped in the same way as glass (silicon dioxide) can be. Vitrimers are the first such material discovered, which can behave like viscoelastic fluid at high temperatures. Unlike classical polymer melts, whose flow properties are largely dependent on friction between monomers, vitrimers become aviscoelastic fluid because of exchange reactions at high temperatures as well as monomer friction. These two processes have different activation energies, resulting in a wide range of viscosity variation. Moreover, because the exchange reactions follow Arrhenius' Law, the change of viscosity of vitrimers also follows an Arrhenius relationship with the increase of temperature, differing greatly from conventional organic polymers.\n",
"To make glass from materials with poor glass forming tendencies, novel techniques are used to increase cooling rate, or reduce crystal nucleation triggers. Examples of these techniques include aerodynamic levitation (cooling the melt whilst it floats on a gas stream), splat quenching (pressing the melt between two metal anvils) and roller quenching (pouring the melt through rollers).\n",
"Borosilicate glass, also known as pyrex, can be viewed as a silicate in which some [SiO] units are replaced by [BO] centers, together with additional cations to compensate for the difference in valence states of Si(IV) and B(III). Because this substitution leads to imperfections, the material is slow to crystallise and forms a glass with low coefficient of thermal expansion and is resistant to cracking when heated, unlike soda glass.\n",
"Borosilicate glass is made to withstand thermal shock better than most other glass through a combination of reduced expansion coefficient and greater strength, though fused quartz outperforms it in both these respects. Some glass-ceramic materials (mostly in the lithium aluminosilicate (LAS) system) include a controlled proportion of material with a negative expansion coefficient, so that the overall coefficient can be reduced to almost exactly zero over a reasonably wide range of temperatures.\n",
"Noncrystalline ceramics, being glass, tend to be formed from melts. The glass is shaped when either fully molten, by casting, or when in a state of toffee-like viscosity, by methods such as blowing into a mold. If later heat treatments cause this glass to become partly crystalline, the resulting material is known as a glass-ceramic, widely used as cook-tops and also as a glass composite material for nuclear waste disposal.\n",
"Nowadays, the two types of glass that are used mainly in the laboratory and in the Pasteur pipette are borosilicate glass and soda lime glass. Borosilicate glass is a widely used glass for laboratory apparatus, as it can withstand chemicals and temperatures used in most laboratories. Borosilicate glass is also more economical since the glass can be fabricated easily compared to other types. Soda lime glass, although not as chemically resistant as Borosilicate glass, are suitable as a material for inexpensive apparatus such as the Pasteur pipette.\n",
"Borosilicate glass is created by combining and melting boric oxide, silica sand, soda ash, and alumina. Since borosilicate glass melts at a higher temperature than ordinary silicate glass, some new techniques were required for industrial production. The manufacturing process depends on the product geometry and can be differentiated between different methods like floating, tube drawing, or moulding.\n"
] |
Why do the continents seem to migrate north, leaving a gap between antarctica and the rest of the world?
|
It's random, mostly.
Plate tectonics is driven by convection currents in the mantle under the crust. Most of the time, people only consider the major continents moving, but [the jigsaw puzzle is slightly more complicated](_URL_0_) than that. Numerous oceanic plates are jostling around too.
During Pangea, Antarctica was wedged between India, Australia, and Eastern Africa. This whole assembly was around [the same latitude as modern day Southern Africa](_URL_1_). You'll see North America and Eurasia are up in the northern hemisphere, which is a good chunk of the land on Earth.
As things started to break up and migrate, Antarctica happened to get shunted south. Australia kinda followed it (these are the two landmasses that have been isolated the longest), but everyone else just sort of drifted north. The Northern Hemisphere is a lot more crowded land-wise than the Southern, so it makes sense that the north is more packed.
This was a bit rambling, but I hope it covered your question. tl;dr It's luck of the geologic draw.
|
[
"Antarctica continued to become more isolated and finally developed a permanent ice cap. Mountain building in western North America continued, and the Alps started to rise in Europe as the African plate continued to push north into the Eurasian plate, isolating the remnants of Tethys Sea. A brief marine incursion marks the early Oligocene in Europe. There appears to have been a land bridge in the early Oligocene between North America and Europe since the faunas of the two regions are very similar. During the Oligocene, South America was finally detached from Antarctica and drifted north toward North America. It also allowed the Antarctic Circumpolar Current to flow, rapidly cooling the continent.\n",
"Due to plate tectonics, the Americas were gradually moving westward, causing the Atlantic Ocean to expand. The Western Interior Seaway divided North America into eastern and western halves; Appalachia and Laramidia. India maintained a northward course towards Asia. In the Southern Hemisphere, Australia and Antarctica seem to have remained connected and began to drift away from Africa and South America. Europe was an island chain. Populating some of these islands were endemic dwarf dinosaur species.\n",
"The continent of Antarctica is centered on the South Pole. Antarctica is surrounded on all sides by the Southern Ocean. As a result, high-speed winds circle around Antarctica, preventing warmer air from temperate zones from reaching the continent.\n",
"While Antarctica does have some small areas of tundra on the northern fringes, the vast majority of the continent is extremely cold and permanently frozen. Because it is climatically isolated from the rest of the Earth, the continent has extreme cold not seen anywhere else, and weather systems rarely penetrate into the continent.\n",
"Australia drifted away from Antarctica forming the Tasmanian Passage, and South America drifted away from Antarctica forming the Drake Passage. This caused the formation of the Antarctic Circumpolar Current, a current of cold water surrounding Antarctica. This current still exists today, and is a major reason for why Antarctica has such an exceptionally cold climate.\n",
"Millions of years ago, Antarctica was warmer and much wetter, and supported the Antarctic flora, including forests of podocarps and southern beech. Antarctica was also part of the ancient supercontinent of Gondwanaland, which gradually broke up by continental drift starting 110 million years ago. The separation of South America from Antarctica 30-35 million years ago allowed the Antarctic Circumpolar Current to form, which isolated Antarctica climatically and caused it to become much colder. The Antarctic flora subsequently died out in Antarctica, but is still an important component of the flora of southern Neotropic (South America) and Australasia, which were also former parts of Gondwana.\n",
"The tectonic evolution of the Transantarctic Mountains appears to have begun when Antarctica broke away from Australia during the late Cretaceous and is ongoing, creating along the way some of the longest mountain ranges (at 3500 kilometers) formed by rift flank uplift and associated continental rifting. The Transantarctic Mountains (TAM) separate East and West Antarctica. The rift system that formed them is caused by a reactivation of crust along the East Antarctic Craton. This rifting or seafloor spreading causes plate movement that results in a nearby convergent boundary which then forms the mountain range. The exact processes responsible for making the Transantarctic Mountains are still debated today. This results in a large variety of proposed theories that attempt to decipher the tectonic history of these mountains.\n"
] |
Would a nuclear bomb explode if you bombed it with another bomb?
|
No. There's a very critically timed combination of events that has to happen to get a nuclear detonation. The worst that would happen is that you detonate the conventional explosive charge around the fissile material and produce a 'dirty' bomb.
|
[
"Fusion-boosted fission bombs can also be made immune to neutron radiation from nearby nuclear explosions, which can cause other designs to predetonate, blowing themselves apart without achieving a high yield.\n",
"Although neutron bombs are commonly believed to \"leave the infrastructure intact\", with current designs that have explosive yields in the low kiloton range, detonation in (or above) a built-up area would still cause a sizable degree of building destruction, through blast and heat effects out to a moderate radius, albeit considerably less destruction, than when compared to a standard nuclear bomb of the \"exact\" same total energy release or \"yield\".\n",
"Some nuclear weapons are designed for special purposes; a neutron bomb is a thermonuclear weapon that yields a relatively small explosion but a relatively large amount of neutron radiation; such a device could theoretically be used to cause massive casualties while leaving infrastructure mostly intact and creating a minimal amount of fallout. The detonation of any nuclear weapon is accompanied by a blast of neutron radiation. Surrounding a nuclear weapon with suitable materials (such as cobalt or gold) creates a weapon known as a salted bomb. This device can produce exceptionally large quantities of long-lived radioactive contamination. It has been conjectured that such a device could serve as a \"doomsday weapon\" because such a large quantity of radioactivities with half-lives of decades, lifted into the stratosphere where winds would distribute it around the globe, would make all life on the planet extinct.\n",
"When a nuclear bomb is exploded near ground level, the dense atmosphere interacts with many of the subatomic particles being released. This normally takes place within a short distance, on the order of meters. This energy heats the air, promptly ionizing it to incandescence and causing a roughly spherical fireball to form within microseconds.\n",
"Some sources describe the bomb as a functional nuclear weapon, but others describe it as disabled. If the bomb had a plutonium nuclear core installed, it was a fully functional weapon. If the bomb had a dummy core installed, it was incapable of producing a nuclear explosion but could still produce a conventional explosion. The 12-foot (4 m) long Mark 15 bomb weighs and bears the serial number 47782. It contains of conventional high explosives and highly enriched uranium. The Air Force maintains that the bomb's nuclear capsule, used to initiate the nuclear reaction, was removed before its flight aboard B-47. As noted in the Atomic Energy Commission \"Form AL-569 Temporary Custodian Receipt (for maneuvers)\", signed by the aircraft commander, the bomb contained a simulated 150-pound cap made of lead. However, according to 1966 Congressional testimony by then Assistant Secretary of Defense W.J. Howard, the Tybee Island bomb was a \"complete weapon, a bomb with a nuclear capsule,\" and one of two weapons lost by that time that contained a plutonium trigger. Nevertheless, a study of the Strategic Air Command documents indicates that in February 1958, Alert Force test flights (with the older Mark 15 payloads) were not authorized to fly with nuclear capsules on board. Such approval was pending deployment of safer \"sealed-pit nuclear capsule\" weapons, which did not begin deployment until June 1958.\n",
"Though dangerous and frequently lethal to humans within the immediate area, the critical mass formed would not be capable of producing a massive nuclear explosion of the type that fission bombs are designed to produce. This is because all the design features needed to make a nuclear warhead cannot arise by chance.\n",
"There are two main considerations for the location of an explosion: height and surface composition. A nuclear weapon detonated in the air, called an air burst, produces less fallout than a comparable explosion near the ground. A nuclear explosion in which the fireball touches the ground pulls soil and other materials into the cloud and neutron activates it before it falls back to the ground. An air burst produces a relatively small amount of the highly radioactive heavy metal components of the device itself.\n"
] |
If gravity is a pulling force, why is there no equivalent repulsive/anti gravity force?
|
The short answer to your question: because there is no matter with negative mass.
All matter has positive energy (this statement is called the "weak energy condition") and creates positive curvature of spacetime (positive and negative are subject to sign convention). The effect of this "positive" curvature is that, as you move forward in time, it acts like an attractive force. You can imagine it like two people starting at the equator and walking toward the pole: they come closer to each other just as if some force were pulling them together, but in fact they are only changing one coordinate (in the real case it would be the time coordinate). (This was EDIT2.)
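For reference, the weak energy condition mentioned above has a standard textbook formulation (a general statement, not tied to any particular metric):

$$T_{\mu\nu}\,u^{\mu}u^{\nu} \ge 0 \quad \text{for every timelike vector } u^{\mu},$$

where $T_{\mu\nu}$ is the stress-energy tensor. Loosely, every observer measures a non-negative local energy density; "exotic matter" is anything that violates this inequality.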
In theory, you can invent a metric (the metric describes the curvature of spacetime) that has negative curvature in some places and positive curvature in others. Those metrics can have really cool properties; some are described as "wormholes", others as "warp bubbles". The problem with all of them is that they would require this matter with negative mass (also called exotic matter), and we have no evidence of such a thing.
EDIT1: Also, there are some issues with the mathematical structure of the equations that describe gravity (the Einstein equations)...
EDIT3: Google "energy condition" for more.
|
[
"This is because gravitation is an attractive force, but if there is an underdense region it apparently acts as a gravitational repeller, based on the concept that there may be less attraction in the direction of the underdensity, and the greater attraction due to the higher density in other directions acts to pull objects away from the underdensity; in other words, the apparent repulsion is not an active force, but due simply to the lack of a force counteracting the attraction.\n",
"Anti-gravity (also known as \"non-gravitational field\") is creating a place or object that is free from the force of gravity. It does not refer to the lack of weight under gravity experienced in free fall or orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift. Anti-gravity is a recurring concept in science fiction, particularly in the context of spacecraft propulsion. Examples are the gravity blocking substance \"Cavorite\" in H. G. Wells's \"The First Men in the Moon\" and the Spindizzy machines in James Blish's \"Cities in Flight\".\n",
"Gravitation acting alone does not produce a g-force, even though g-forces are expressed in multiples of the free-fall acceleration of standard gravity. Thus, the standard gravitational force at the Earth's surface produces g-force only indirectly, as a result of resistance to it by mechanical forces. It is these mechanical forces that actually produce the g-force on a mass. For example, a force of 1 g on an object sitting on the Earth's surface is caused by the mechanical force exerted in the upward direction by the ground, keeping the object from going into free fall. The upward contact force from the ground ensures that an object at rest on the Earth's surface is accelerating relative to the free-fall condition. (Freefall is the path that the object would follow when falling freely toward the Earth's center). Stress inside the object is ensured from the fact that the ground contact forces are transmitted only from the point of contact with the ground.\n",
"Levitation is accomplished by providing an upward force that counteracts the pull of gravity (in relation to gravity on earth), plus a smaller stabilizing force that pushes the object toward a home position whenever it is a small distance away from that home position. The force can be a fundamental force such as magnetic or electrostatic, or it can be a reactive force such as optical, buoyant, aerodynamic, or hydrodynamic.\n",
"It is perhaps an issue of gravitational pull that is one of the biggest hindrances to life in the Negative Zone. While all objects of reasonably sized mass (planets, moons, asteroids, etc.) obviously have their own gravitational pull, it is weak enough to be overcome with minimal effort. Most heroes with flight capabilities can escape a planet's gravitational field with ease, as can any machine with the capacity for flight. Because of this lowered gravity, it is believed that vegetation has difficulty seeding properly, giving life a tenuous foothold at best on any given planet.\n",
"The equivalence between gravitational and inertial effects does not constitute a complete theory of gravity. When it comes to explaining gravity near our own location on the Earth's surface, noting that our reference frame is not in free fall, so that fictitious forces are to be expected, provides a suitable explanation. But a freely falling reference frame on one side of the Earth cannot explain why the people on the opposite side of the Earth experience a gravitational pull in the opposite direction.\n",
"The force of gravity on Earth is the resultant (vector sum) of two forces: (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is the weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s at the Equator to about 9.832 m/s at the poles.\n"
] |
what is the difference between an originalist interpretation and a "living document" interpretation when it comes to the u.s. supreme court?
|
The idea is a debate about whether the founders wrote the thing to be specific, rigid, and amendable only through the amendment process...
or whether the founders wrote the thing with deliberately looser language to take shifting societal norms into account.
For example, the 8th amendment prohibits "cruel and unusual" punishments but neglects to define those terms. An originalist would argue that we need to research what "cruel and unusual" meant to the founders. A proponent of living document theory would argue that "cruel and unusual" is deliberately vague so that the boundaries of cruel and unusual can shift as society progresses.
|
[
"Originalism is a theory of \"interpretation\", not \"construction\". However, this distinction between \"interpretation\" and \"construction\" is controversial and is rejected by many nonoriginalists as artificial. As Scalia said, \"the Constitution, or any text, should be interpreted [n]either strictly [n]or sloppily; it should be interpreted reasonably\"; once originalism has told a Judge what the provision of the Constitution means, they are bound by that meaning—however the business of Judging is not simply to know what the text means (interpretation), but to take the law's necessarily general provisions and apply them to the specifics of a given case or controversy (construction). In many cases, the meaning might be so specific that no discretion is permissible, but in many cases, it is still before the Judge to say what a reasonable interpretation might be. A judge could, therefore, be both an originalist \"and\" a strict constructionist—but he is not one by virtue of being the other.\n",
"In the context of United States law, originalism is a concept regarding the interpretation of the Constitution that asserts that all statements in the constitution must be interpreted based on the original understanding of the authors or the people at the time it was ratified. This concept views the Constitution as stable from the time of enactment, and that the meaning of its contents can be changed only by the steps set out in Article Five. This notion stands in contrast to the concept of the Living Constitution, which asserts that the Constitution is intended to be interpreted based on the context of the current times, even if such interpretation is different from the original interpretations of the document.\n",
"BULLET::::- The Late Associate Justice Antonin Scalia and current Associate Justice Clarence Thomas are known as originalists; originalism is a family of similar theories that hold that the Constitution has a fixed meaning from an authority contemporaneous with the ratification (although opinion as to what that authority \"is\" varies; see discussion at originalism), and that it should be construed in light of that authority. Unless there is a historic and/or extremely pressing reason to interpret the Constitution differently, originalists vote as they think the Constitution as it was written in the late 18th Century would dictate.\n",
"Originalism is an approach to interpretation of a legal text in which controlling weight is given to the intent of the original authors (at least the intent as inferred by a modern judge). In contrast, a non-originalist looks at other cues to meaning, including the current meaning of the words, the pattern and trend of other judicial decisions, changing context and improved scientific understanding, observation of practical outcomes and \"what works,\" contemporary standards of justice, and \"stare decisis\". Both are directed at \"interpreting\" the text, not changing it—interpretation is the process of resolving ambiguity and choosing from among possible meanings, not changing the text.\n",
"BULLET::::- Originalism involves judges trying to apply the \"original\" meanings of different constitutional provisions. To determine the original meaning, a constitutional provision is interpreted in its \"original\" context, i.e. the historical, literary, and political context of the framers. From that interpretation, the underlying principle is derived which is then applied to the contemporary situation. Former Supreme Court justice Antonin Scalia believed that the text of the constitution should mean the same thing today as it did when it had been written. A report in the \"Washington Post\" suggested that originalism was the \"view that the Constitution should be interpreted in accordance with its original meaning — that is, the meaning it had at the time of its enactment.\" \"Meaning\" based on \"original\" principles.\n",
"This view does not take into account \"why\" the original constitution does not allow for judicial interpretation in any form. The Supreme Court's power for constitutional review, and by extension its interpretation, did not come about until \"Marbury v. Madison\" in 1803. The concept for a \"living constitution\" therefore relies on an argument regarding the writing of the constitution that had no validity when the constitution was written.\n",
"A more recent variant that emerged in the 1980s is \"originalism\", the assertion that the United States Constitution should be interpreted to the maximum extent possible in the light of what it meant when it was adopted. Originalism should not be confused with a similar conservative ideology, strict constructionism, which deals with the interpretation of the Constitution as written, but not necessarily within the context of the time when it was adopted. In modern times, the term originalism has been used by Supreme Court justice Antonin Scalia, former federal judge Robert Bork and some other conservative jurists to explain their beliefs.\n"
] |
Slavery in ancient Greece
|
It's a simplification, and simplifications like this are only going to make sense with respect to some benchmark; perhaps that's the context of your friend's view. But without context, there isn't really much to support her.
Estimates of the slave population in Classical-era Greek states are exactly that, estimates, but those estimates normally range between 60% and 80% of the total population. One census reported from the late 4th century BCE would put the figure at nearly 87%. Even if we're sceptical of that figure, it's still a *lot* of slaves.
Some did serve functions as valets, child-minders, scribes, and so on. These ones certainly fit your friend's model. But you don't have to look far to find slaves in manual labour. There were also public slaves, responsible for things like cleaning up obstructions and large messes in the streets: so far, not too bad. But an awful lot of farmwork was done by slaves, and it's much harder to believe that they led a happy fulfilling life.
And there were some really awful slave positions around: for example, in Athens the silver mines at Laureion were worked exclusively by slaves, precisely because conditions were so appalling that any worker would have a pretty short lifespan after going there. Tens of thousands of slaves worked the mines, because the mines were so lucrative for Athens, and because slave-owners could actually lease unwanted slaves to the mines for a steady income. In Sparta things were even worse in a way, though perhaps not as intensely awful as silver mining: every year the ephors would ritually declare war on their helots, there were occasional mass slaughters, and adolescents were trained to go stealing and killing among them. Slaves could also be recruited for warfare: both Athens and Sparta used slaves in this way (though their treatment of the slaves afterwards varied a lot: after the naval battle at Arginousai, Athens officially freed all the slaves who had fought in the battle; in Sparta, a group of troublesome helots who had served in battle were rounded up under the impression they were going to be freed, and then slaughtered).
Slaves had no rights and could be tortured, deprived, and killed without recourse (the only limit was on doing these things to *someone else's* slave). When testifying on a legal matter, slaves' testimony was only valid if extracted under torture. So sure, *some* slaves had cushy positions. But it's certainly not a lot that I'd choose.
|
[
"Records of slavery in Ancient Greece go as far back as Mycenaean Greece. The origins are not known, but it appears that slavery became an important part of the economy and society only after the establishment of cities. Slavery was common practice and an integral component of ancient Greece, as it was in other societies of the time, including ancient Israel. It is estimated that in Athens, the majority of citizens owned at least one slave. Most ancient writers considered slavery not only natural but necessary, but some isolated debate began to appear, notably in Socratic dialogues. The Stoics produced the first condemnation of slavery recorded in history.\n",
"Slavery was a common practice in ancient Greece, as in other societies of the time. Some Ancient Greek writers (including, most notably, Aristotle) considered slavery natural and even necessary. This paradigm was notably questioned in Socratic dialogues; the Stoics produced the first recorded condemnation of slavery.\n",
"Greek attitudes towards \"barbarians\" developed in parallel with the growth of chattel slavery - especially in Athens. Although the enslavement of Greeks for non-payment of debts continued in most Greek states, Athens banned this practice under Solon in the early 6th century BC. Under the Athenian democracy established ca. 508 BC, slavery came into use on a scale never before seen among the Greeks. Massive concentrations of slaves worked under especially brutal conditions in the silver mines at Laureion in south-eastern Attica after the discovery of a major vein of silver-bearing ore there in 483 BC, while the phenomenon of skilled slave craftsmen producing manufactured goods in small factories and workshops became increasingly common.\n",
"In 2011, Greek slavery remains the subject of historiographical debate, on two questions in particular: can it be said that ancient Greece was a \"slave society\", and did Greek slaves comprise a social class?\n",
"The study of slavery in Ancient Greece remains a complex subject, in part because of the many different levels of servility, from traditional chattel slave through various forms of serfdom, such as Helots, Penestai, and several other classes of non-citizen.\n",
"The study of slavery in Ancient Greece remains a complex subject, in part because of the many different levels of servility, from traditional chattel slave through various forms of serfdom, such as Helots, Penestai, and several other classes of non-citizen.\n",
"Slavery was known in almost every ancient civilization and society including Sumer, Ancient Egypt, Ancient China, the Akkadian Empire, Assyria, Ancient India, Ancient Greece, Carolingian Europe, the Roman Empire, the Hebrew kingdoms of the ancient Levant, and the pre-Columbian civilizations of the Americas. Such institutions included debt-slavery, punishment for crime, the enslavement of prisoners of war, child abandonment, and the birth of slave children to slaves.\n"
] |
The "Duel of Champions": how common was it? What was it's purpose?
|
Been asked before a few times. The term for this is [Single Combat](_URL_0_).
_URL_1_
_URL_2_
|
[
"Orazi e Curiazi (English title: \"Duel of Champions\") is a 1961 film about the Roman legend of the Horatii, triplet brothers from Rome who fought a duel against the Curiatii, triplet brothers from Alba Longa in order to determine the outcome of a war between their two nations.\n",
"A duel is an arranged engagement in combat between two people, with matched weapons, in accordance with agreed-upon rules. Duels in this form were chiefly practiced in early modern Europe with precedents in the medieval code of chivalry, and continued into the modern period (19th to early 20th centuries) especially among military officers.\n",
"BULLET::::- Duel: it is a competition between any two subjects. The magazine's team establishes five topics for each duel and compares the weak and the strong points, later deciding who wins, with the possibility of a tie. Some fights featuring famous characters or franchises were partially voted by the readers, like \"Harry Potter X The Lord of the Rings\", \"Gandalf X Professor Dumbledore\", \"Bill Gates X Carlos Slim\" and \"X-Men X The Avengers\"\n",
"Dueling became popular in the United States – the former United States Secretary of the Treasury Alexander Hamilton was killed in a duel against the sitting Vice President Aaron Burr in 1804. Between 1798 and the Civil War, the US Navy lost two-thirds as many officers to dueling as it did in combat at sea, including naval hero Stephen Decatur. Many of those killed or wounded were midshipmen or junior officers. Despite prominent deaths, dueling persisted because of contemporary ideals of chivalry, particularly in the South, and because of the threat of ridicule if a challenge was rejected.\n",
"The duel was based on a code of honor. Duels were fought not so much to kill the opponent as to gain \"satisfaction\", that is, to restore one's honor by demonstrating a willingness to risk one's life for it, and as such the tradition of dueling was originally reserved for the male members of nobility; however, in the modern era it extended to those of the upper classes generally. On occasion, duels with pistols or swords were fought between women.\n",
"Though it was never an organized sport, participants would sometimes schedule their fights (as one could schedule a duel), and victors were treated as local heroes. Gouging was essentially a type of duel to defend one's honor that was most common among the poor, and was especially common in southern states in the late eighteenth and early nineteenth centuries.\n",
"Southern duels persisted through the 1840s even after duelling in the United States was outlawed. Commonly held on sand bars in rivers where jurisdiction was unclear, they were rarely prosecuted. States such as South Carolina, Tennessee, Texas, Louisiana and others had their own duelling customs and traditions. Most duels occurred between the upper classes but teenage duels and those in the middle-classes also existed. Dueling was not at all undemocratic and it enabled lesser men to participate without any prejudice. There was also the promise of esteem and status and it also served as a form of scapegoating for unresolved personal problems.\n"
] |
What mechanisms are behind stereotypical accents in people with English as a second language?
|
**First the basics:**
Different languages have different phonetic systems (here, "phonetics" refers to the individual consonant and vowel sounds that form the phonemes which establish contrast for word differentiation: you know "bat" and "pat" are different words because [b] and [p] are "contrasting").
The primary factor that differentiates languages is the vowel inventory of that language (I say vowels are primary because they are "sonorant", or sound-creating, whereas consonants like "stops", "labials" and "fricatives" are the continuation or stoppage of sound). English has the basic vowels /a/, /e/, /i/, /o/, and /u/ (the orthography I'm using to describe the vowels is not standard IPA; I decided not to spend the extra time typing it all out). You then have diphthongs, which are the vowel sounds created by adjacent vowels. You then have, to a lesser degree, your allophonic diphthongs and vowels shaped by the surrounding consonants (an /a/ sound is going to sound slightly different next to a [b] as compared to an [sp]).
The native speaker perceives and produces language given the above criteria.
**Now to apply this to a second language learner:**
The native German speaker who is learning English has DIFFERENT vowels than you as a native English speaker (different consonants as well, but the vowels are easier to recognize). As a native English Speaker, you have your inventory of English Vowels. When you listen to the native German speaker who has acquired English as an adult (as opposed to the bilingual, young child between 4 to 7 who has the opportunity and the language acquisition mechanisms to acquire "native fluency"), you are listening to the cross-influence of the native German speaker's German vowels and his attempt, successfully or otherwise, to produce English vowels. The "accent" you hear is thus your ability to pick up the difference in vowel quality.
**What about the consonants?**
Consonants likewise have an impact on how the vowel of the speaker is formed. Vowels, as mentioned before, are sonorant, and the quality of the vowel is determined by multiple factors: the elevation of the tongue in the mouth, the "frontness" or "backness" of the tongue (is the tongue closer to the teeth, or further from them?), and the shape of the lips. The mouth is thus an acoustic chamber that changes the sound of the vowel based on its shape. Just as for vowels, different languages have different consonants. While the different consonant inventories are obviously a part of the equation, they have a comparably smaller impact, as most consonants in a language "stop" sound as opposed to creating it.
An oft-used example of how consonants impact the accent is when you compare most Asian languages to English. Japanese and Chinese are notorious for not having the [r] that's native to English. They instead have what is called a "flap/tap" (the same sound you hear in the middle of the word /button/; it only occurs word-medially, in the middle of a word). The Japanese or Chinese speaker in most instances does not have the muscle control and dexterity to move the tongue into such a position as to produce the [r], so they instead produce an approximation, the consonant [ɾ]. They approximate the sound by substituting a sound that is produced with the most similar tongue positioning. The end result is that "rat" sounds like "lat".
|
[
"Certain English accents feature variant pronunciations of these sounds. These include fronting, where they merge with /f/ and /v/ (found in Cockney and some other dialects); stopping, where they approach /t/ and /d/ (as in some Irish speech); alveolarisation, where they become (in some African varieties); and debuccalisation, where becomes before a vowel (found in some Scottish English).\n",
"Also, the contemporary situation is unstable with regard to the accentuation, because phoneticians have observed that the 4-accents speech has, in all likelihood, shown to be increasingly unstable, which resulted in proposals that a 3-accents norm be prescribed. This is particularly true for Croatian, where, contrary to all expectations, the influence of Chakavian and Kajkavian dialects on the standard language has been waxing, not waning, in the past 50–70 years.\n",
"Different accents within a given language may have their own characteristic basis of articulation, resulting in one accent being perceived as, e.g., more 'nasal', 'velarized' or 'guttural' than another. According to Cruttenden, \"The articulatory setting of a language or dialect may differ from GB [General British]. So some languages like Spanish may have a tendency to hold the tongue more forward in the mouth, while others like Russian may have a tendency to hold it further back in the mouth. Nasalization may be characteristic of many speakers of American English, while denasal voice ... is frequently said to occur in Liverpool\". A more detailed exposition can be read in Gili Gaya (1956). Non-native speakers typically find the basis of articulation one of the greatest challenges in acquiring a foreign language's pronunciation. Speaking with the basis of articulation of their own native language results in a foreign accent, even if the individual sounds of the target language are produced correctly.\n",
"It can be noted that use of language such as certain accents may result in an individual experiencing prejudice. For example, some accents hold more prestige than others depending on the cultural context. However, with so many dialects, it can be difficult to determine which is the most preferable. The best answer linguists can give, such as the authors of \"Do You Speak American?\", is that it depends on the location and the individual. Research has determined however that some sounds in languages may be determined to sound less pleasant naturally. Also, certain accents tend to carry more prestige in some societies over other accents. For example, in the United States speaking General American (i.e., an absence of a regional, ethnic, or working class accent) is widely preferred in many contexts such as television journalism. Also, in the United Kingdom, the Received Pronunciation is associated with being of higher class and thus more likeable. In addition to prestige, research has shown that certain accents may also be associated with less intelligence, and having poorer social skills. An example can be seen in the difference between Southerners and Northerners in the United States, where people from the North are typically perceived as being less likable in character, and Southerners are perceived as being less intelligent.\n",
"The accent of English English best known outside the United Kingdom is that of Received Pronunciation (RP), though it is used by only a small minority of speakers in England. Until recently, RP was widely considered to be more typical of educated speakers than other accents. It was referred to by some as the Queen's (or King's) English, an 'Oxford accent' or even 'BBC English' (because for many years of broadcasting it was rare to hear any other accent on the BBC). These terms, however, do not refer only to accent features but also to grammar and vocabulary, as explained in Received Pronunciation. Since the 1960s regional accents have become increasingly accepted in mainstream media, and are frequently heard on radio and television. The Oxford English Dictionary gives RP pronunciations for each word, as do most other English dictionaries published in Britain.\n",
"Certain accents are perceived to carry more prestige in a society than other accents. This is often due to their association with the elite part of society. For example, in the United Kingdom, Received Pronunciation of the English language is associated with the traditional upper class. The same can be said about the predominance of Southeastern Brazilian accents in the case of the Brazilian variant of the Portuguese language, especially considering the disparity of prestige between most \"caipira\"-influenced speech, associated with rural environment and lack of formal education, together with the Portuguese spoken in some other communities of lower socioeconomic strata such as \"favela\" dwellers, and other sociocultural variants such as middle and upper class \"paulistano\" (dialect spoken from Greater São Paulo to the East) and \"fluminense\" (dialect spoken in the state of Rio de Janeiro) to the other side, inside Southeastern Brazil itself. However, in linguistics, there is no differentiation among accents in regard to their prestige, aesthetics, or correctness. All languages and accents are linguistically equal.\n",
"Accents are the distinctive variations in the pronunciation of a language. They can be native or foreign, local or national and can provide information about a person’s geographical locality, socio-economic status and ethnicity. The perception of accents is normal within any given group of language users and involves the categorisation of speakers into social groups and entails judgments about the accented speaker, including their status and personality. Accents can significantly alter the perception of an individual or an entire group, which is an important fact considering that the frequency that people with different accents are encountering one another is increasing, partially due to inexpensive international travel and social media. As well as affecting judgments, accents also affect key cognitive processes (e.g., memory) that are involved in a myriad of daily activities. The development of accent perception occurs in early childhood. Consequently, from a young age accents influence our perception of other people, decisions we make about when and how to interact with others, and, in reciprocal fashion, how other people perceive us. A better understanding of the role accents play in our (often inaccurate) appraisal of individuals and groups, may facilitate greater acceptance of people different from ourselves and lessen discriminatory attitudes and behavior.\n"
] |
How are ions made artificially?
|
You usually just take regular atoms and rip off their electrons somehow (heat them up, subject them to strong electric fields, shoot them through stripper foils, or some combination of those).
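To get a feel for the energies involved (a rough back-of-the-envelope estimate, not tied to any particular ion source): stripping the electron from hydrogen takes about 13.6 eV. Setting the thermal energy $k_B T$ equal to that gives

$$T \approx \frac{13.6\ \text{eV}}{8.6\times10^{-5}\ \text{eV/K}} \approx 1.6\times10^{5}\ \text{K},$$

which is why "heating things up" enough to ionize them generally means making a plasma. In practice, substantial ionization shows up at somewhat lower temperatures, because the hottest particles in the thermal distribution do most of the work.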
|
[
"Ions can be non-chemically prepared using various ion sources, usually involving high voltage or temperature. These are used in a multitude of devices such as mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters, and ion engines.\n",
"Ions can be created in an inductively coupled plasma, which is a plasma source in which the energy is supplied by electrical currents which are produced by electromagnetic induction, that is, by time-varying magnetic fields.\n",
"Ion implantation is one of the methods used to transform physical properties of polymers and to improve their electrical, optical, and mechanical performance. Ion implantation is a technique by which the ions of a material are accelerated in an electrical field and impacted into a materials such that ion are inserted into this material. This technique has many important uses. One such example is the introduction of silver plasma into the biomedical titanium. This is important because Titanium-based implantable devices such as joint prostheses, fracture fixation devices and dental implants, are important to human lives and improvement of the life quality of patients. However, biomedical titanium is lack of Osseo integration and antibacterium ability. Plasma immersion ion implantation (PIII) is a physical technique which can enhance the multi-functionality, mechanical and chemical properties as well as biological activities of artificial implants and biomedical devices. ERDA can be used to study this phenomenon very effectively. Moreover, many scientists have measured the evolution of electrical conductivity, optical transparency, corrosion resistance, and wear resistance of different polymers after irradiation by electron or low-energy light ions or high-energy heavy ions.\n",
"Electrotyping has been used for the production of metal sculptures, where it is an alternative to the casting of molten metal. These sculptures are sometimes called \"galvanoplastic bronzes\", although the actual metal is usually copper. It was possible to apply essentially any patina to these sculptures; gilding was also readily accomplished in the same facilities as electrotyping by using electroplating. Electrotyping has been used to reproduce valuable objects such as ancient coins, and in some cases electrotype copies have proven more durable than fragile originals.\n",
"An ion source is a device that creates atomic and molecular ions. Ion sources are used to form ions for mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters and ion engines.\n",
"Ion implantation may be used to induce nano-dimensional particles in oxides such as sapphire and silica. The particles may be formed as a result of precipitation of the ion implanted species, they may be formed as a result of the production of an mixed oxide species that contains both the ion-implanted element and the oxide substrate, and they may be formed as a result of a reduction of the substrate, first reported by Hunt and Hampikian. Typical ion beam energies used to produce nanoparticles range from 50 to 150 keV, with ion fluences that range from 10 to 10 ions/cm. The table below summarizes some of the work that has been done in this field for a sapphire substrate. A wide variety of nanoparticles can be formed, with size ranges from 1 nm on up to 20 nm and with compositions that can contain the implanted species, combinations of the implanted ion and substrate, or that are comprised solely from the cation associated with the substrate.\n",
"Artificial or imitation mineral water cannot be made simply by dissolving all the mineral components in water to replicate the analysis of a natural water. If all the components were put together, many would be found to be insoluble, and others would form new chemical combinations, so that the result would differ widely from the mineral water imitated. The order in which salts are dissolved is important; dissolving some salts separately and combining the solutions can produce results impossible to obtain by dissolving everything together.\n"
] |
why can a laser be seen from miles away but a regular flashlight has such a limited range before the light fades?
|
A laser beam is very well collimated, meaning its rays stay nearly parallel, so its energy doesn't spread out much as it travels. A flashlight, on the other hand, isn't collimated well at all, so its energy spreads out very quickly as it travels and it gets dimmer much faster than a laser does.
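To make that concrete with rough, assumed numbers (a cheap laser pointer with about 1 milliradian of divergence versus a flashlight spreading over roughly a 20° cone), compare beam diameters after 1 km:

$$d_{\text{laser}} \approx \theta L \approx (10^{-3}\ \text{rad})(1000\ \text{m}) \approx 1\ \text{m}, \qquad d_{\text{flashlight}} \approx 2L\tan(10^{\circ}) \approx 350\ \text{m}.$$

The flashlight's light ends up spread over an area roughly $(350)^2 \approx 10^{5}$ times larger, so it looks correspondingly dimmer from far away.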
|
[
"BULLET::::- Diode lasers are used as a lightswitch in industry, with a laser beam and a receiver which will switch on or off when the beam is interrupted, and because a laser can keep the light intensity over larger distances than a normal light, and is more precise than a normal light it can be used for product detection in automated production.\n",
"Even the first laser was recognized as being potentially dangerous. Theodore Maiman characterized the first laser as having a power of one \"Gillette\" as it could burn through one Gillette razor blade. Today, it is accepted that even low-power lasers with only a few milliwatts of output power can be hazardous to human eyesight when the beam hits the eye directly or after reflection from a shiny surface. At wavelengths which the cornea and the lens can focus well, the coherence and low divergence of laser light means that it can be focused by the eye into an extremely small spot on the retina, resulting in localized burning and permanent damage in seconds or even less time.\n",
"Scientists and engineers from Picatinny Arsenal have demonstrated that an electric discharge can go through a laser beam. The laser beam is self-focusing due to the high laser intensity of 50 gigawatts, which changes the speed of light in air. The laser was reportedly successfully tested in January 2012.\n",
"Some of the laser light might reflect off leaves or branches which are closer than the object, giving an early return and a reading which is too low. Alternatively, over distances longer than 1200 ft (365 m), the target, if in proximity to the earth, may simply vanish into a mirage, caused by temperature gradients in the air in proximity to the heated surface bending the laser light. All these effects have to be taken into account.\n",
"To give another example, of a more powerful laser—the type that might be used in an outdoor laser show: a 6-watt green (532 nm) laser with a 1.1 milliradian beam divergence is an eye hazard to about , can cause flash blindness to about 8,200 feet (1.5 mi/2.5 km), causes veiling glare to about 36,800 feet (), and is a distraction to about 368,000 feet ().\n",
"The very small size of the arc makes it possible to focus the light from the lamp with moderate precision. For this reason, xenon arc lamps of smaller sizes, down to 10 watts, are used in optics and in precision illumination for microscopes and other instruments, although in modern times they are being displaced by single mode laser diodes and white light supercontinuum lasers which can produce a truly diffraction-limited spot. Larger lamps are employed in searchlights where narrow beams of light are generated, or in film production lighting where daylight simulation is required.\n",
"Studies have found that even low-power laser beams of not more than 5 mW can cause permanent retinal damage if gazed at for several seconds; however, the eye's blink reflex makes this highly unlikely. Such laser pointers have reportedly caused afterimages, flash blindness and glare, but not permanent damage, and are generally safe when used as intended.\n"
] |
why can most people jump higher off of one leg, when clearly there is more power in two legs?
|
Well, it's not all about raw power. The problem isn't being able to move upwards; you can climb stairs a lot higher than you can jump. The problem is accelerating quickly.
Look at it this way: stand perfectly still with your hands at your sides and jump.
You probably didn't get very high. When you jump off one leg, by contrast, neither your arms nor your free leg sits there as dead weight. Your body spends a great deal of energy thrusting them upward just before you leave the ground. There's a lot of mass in a leg, so the momentum from that leg plus your arms being thrust upwards helps accelerate the actual dead weight (the rest of the body).
Take for instance [this tornado kick](_URL_0_). The person in the gif appears to exert very little force on the ground as they lift off. This is because they slowly build up momentum leading up to the jump (by spinning) and then angle that momentum upwards to carry them off the mat.
Basically, the idea is that rather than pushing yourself up with two legs, you're pulling yourself up with the momentum you built up in your swinging arms and legs.
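As a rough back-of-the-envelope illustration of why that swing matters, here is a minimal Python sketch. The take-off velocity and the percentage boost from swinging the arms and free leg are assumed numbers picked purely for illustration; the point is only that jump height grows with the square of take-off velocity, so even a modest boost from limb momentum pays off noticeably.

```python
# Minimal sketch: how a small boost in take-off velocity (from swinging
# limbs) translates into jump height. All numbers are assumed/illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def rise_from_takeoff_velocity(v_mps):
    """Centre-of-mass rise for a given vertical take-off velocity,
    treating the body as a projectile with no air resistance."""
    return v_mps ** 2 / (2 * G)

v_no_swing = 2.6            # assumed take-off velocity with arms pinned, m/s
v_with_swing = 2.6 * 1.10   # assume swinging limbs add ~10% to take-off velocity

print(f"arms at sides: {rise_from_takeoff_velocity(v_no_swing):.2f} m")
print(f"with swing:    {rise_from_takeoff_velocity(v_with_swing):.2f} m")
```

Because height scales with the square of take-off velocity, the assumed 10% faster take-off gives roughly 21% more rise, broadly in line with the arm-swing improvements reported in jump studies.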
|
[
"\"\"I never assumed my handicap and if anything, as a kid not having a leg meant that my arms were much stronger,\"\" Pueta added. His right leg is stronger than a tree and he jumps all over the field–like a kangaroo–and will tackle everything that comes his way. His line-out jumping is also an asset to whatever team he plays in.\n",
"It is crucial for high jumpers to have strong lower bodies and cores, as the bar progressively gets higher, the strength of an athlete's legs (along with speed and technique) will help propel them over the bar. Squats, deadlifts, and core exercises will help a high jumper achieve these goals. It is important, however, for a high jumper to keep a slim figure as any unnecessary weight makes it difficult to jump higher.\n",
"This refers to the bend in the knees and height relative to a normal standing position. Low stances are very powerful and assist delivery of power through the body to either the arms or the legs. High stances are more mobile and allow one to reposition rapidly.\n",
"When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force.\n",
"A vertical jump or vertical leap is the act of raising one's center of mass higher in the vertical plane solely with the use of one's own muscles; it is a measure of how high an individual or athlete can elevate off the ground (jump) from a standstill.\n",
"Long legs increase the time and distance over which a jumping animal can push against the substrate, thus allowing more power and faster, farther jumps. Large leg muscles can generate greater force, resulting in improved jumping performance. In addition to elongated leg elements, many jumping animals have modified foot and ankle bones that are elongated and possess additional joints, effectively adding more segments to the limb and even more length.\n",
"An important component of maximizing height in a vertical jump is attributed to the use of counter-movements of the legs and arm swings prior to take off, as both of these actions have been shown to significantly increase the body's center of mass rise. The counter-movement of the legs, a quick bend of the knees which lowers the center of mass prior to springing upwards, has been shown to improve jump height by 12% compared to jumping without the counter-movement. This is attributed to the stretch shortening cycle of the leg muscles enabling the muscles to create more contractile energy. Furthermore, jump height can be increased another 10% by executing arm swings during the take off phase of the jump compared to if no arm swings are utilized. This involves lowering the arms distally and posteriorly during the leg counter-movements, and powerfully thrusting the arms up and over the head as the leg extension phase begins. As the arms complete the swinging movement they pull up on the lower body causing the lower musculature to contract more rapidly, hence aiding in greater jump height. Despite these increases due to technical adjustments, it appears as if optimizing both the force producing and elastic properties of the musculotendinous system in the lower limbs is largely determined by genetics and partially mutable through resistance exercise training.\n"
] |
How do flu shots work?
|
> How does one shot protect you from all variations of flu?
> Do they need to be topped up, as newer strains come into existence?
One shot contains vaccine for three (sometimes four) different strains of flu. The CDC and other public health agencies spend a LOT of time and effort every year trying to predict which strains are most likely to be a significant health threat that year, and to do so with enough lead time to get the vaccines produced and distributed. Because the circulating strains change, the vaccine is reformulated every year, which is why you need a new shot each flu season rather than a one-time dose.
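If it helps to see the idea concretely, here is a tiny toy sketch in Python (not a medical model) of why the yearly reformulation matters: the shot only covers the strains that were picked for it, so protection depends on how well that prediction matches what actually circulates. The strain names below are made up for the example.

```python
# Toy illustration only: which circulating strains this year's shot covers.
# Strain names are invented placeholders, not real vaccine compositions.

vaccine_strains = {"A/H1N1-x", "A/H3N2-y", "B/Victoria-z"}            # picked months in advance
circulating_strains = {"A/H3N2-y", "B/Victoria-z", "A/H3N2-drifted"}  # what actually spreads

covered = circulating_strains & vaccine_strains
missed = circulating_strains - vaccine_strains

print("covered by this year's shot:", sorted(covered))
print("missed (one reason next year's shot is reformulated):", sorted(missed))
```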
|
[
"Flu-flu arrows are often used for children's archery, and can be used to play flu-flu golf. Similar to Frisbee Golf, the player must go to where the arrow landed, pick it up, shoot it again, and repeat this process until he reaches a specified place.\n",
"The influenza vaccine comes in two forms, the inactivated form which is what is typically thought of as the \"flu shot\", and a live but attenuated (weakened) form that is sprayed into the nostrils. it is recommended to get the flu shot each year since it is remade each year to protect against the viruses that are most likely to cause disease that year. Unfortunately there are a vast array of strains of influenza, so a single vaccine can not prevent all of them. The shot prevents 3 or 4 different influenza viruses and it takes about 2 weeks after the injection for protection to develop. This protection lasts from several months to a year. \n",
"A flu-flu arrow is a type of arrow specifically designed to travel a short distance. Such arrows are particularly useful when shooting at aerial targets or for certain types of recreational archery where the arrow must not travel too far. One of the main uses of these arrows is that they do not get lost as easily if they miss the target.\n",
"A flu-flu is a design of fletching, normally made by using long sections of feathers; in most cases six or more sections are used, rather than the traditional three. Alternatively, two long feathers can be spiraled around the end of the arrow shaft. In either case, the excessive fletching serves to generate more drag and slow the arrow down rapidly after a short distance (about 30 m). Recreational flu-flus usually have rubber points to add weight and keep the flight slower.\n",
"Influenza vaccines, also known as flu shots or flu jabs, are vaccines that protect against infection by influenza viruses. A new version of the vaccine is developed twice a year, as the influenza virus rapidly changes. While their effectiveness varies from year to year, most provide modest to high protection against influenza. The United States Centers for Disease Control and Prevention (CDC) estimates that vaccination against influenza reduces sickness, medical visits, hospitalizations, and deaths. When an immunized worker does catch the flu, they are on average back at work a half day sooner. Vaccine effectiveness in those under two years old and over 65 years old remains unknown due to the low quality of the research. Vaccinating children may protect those around them.\n",
"A flu-flu is a form of fletching, normally made by using long sections of full length feathers taken from a turkey, in most cases six or more sections are used rather than the traditional three. Alternatively two long feathers can be spiraled around the end of the arrow shaft. The extra fletching generates more drag and slows the arrow down rapidly after a short distance, about or so.\n",
"Some varieties of flumes are used in measuring water flow of a larger channel. When used to measure the flow of water in open channels, a flume is defined as a specially shaped, fixed hydraulic structure that under free-flow conditions forces flow to accelerate in such a manner that the flow rate through the flume can be characterized by a level-to-flow relationship as applied to a single head (level) measurement within the flume. Acceleration is accomplished through a convergence of the sidewalls, a change in floor elevation, or a combination of the two.\n"
] |