Why didn't the overthrow and regicide of Charles I prompt massive retaliation from other monarchs the way Louis XVI's did?
For much of the English Civil War(s), Europe was still embroiled in the Thirty Years War, and in the concurrent Franco-Spanish War, which blurred with the Thirty Years War but lasted longer. For the Holy Roman Empire and Spain, the Thirty Years War was a far more pressing threat to their power than an English squabble. For France, the potential shifts in continental power were a much more important and interesting situation. The Thirty Years War ended around 1648, and Charles lost his head in 1649. France and Spain continued to be at war until 1659. So there wasn't a lot of money or interest in mounting another major invasion.

Charles, who could be very competent in other areas, had also made a bit of a mess of foreign policy prior to the civil wars. He would, of course, blame this on not being funded by Parliament. But he made a poorly advised attempt at war with France while, at the same time, being at war with Spain. He did come to peace with both and had a fragile alliance with Spain, but he hadn't done much to endear himself to those powers.

The very nature of the wars was also not well understood at the time or, honestly, since. As has already been discussed, it was not the overthrow of all vestiges of authority that the later stages of the French Revolution came to be (and remember, much of Europe didn't intervene in France until it reached that stage, or even later). The tensions between royal authority and parliamentary authority were absolutely at stake, but there was also a strong religious component to the wars, with the royalists being associated with Laudianism and the parliamentarians with puritanism. The reality was more complex, but Europe wasn't itching to get involved in another religious war at this point, either. And the English (or at least the nobility) seemed to have a nasty habit of rebelling against and killing their kings. It's funny to think of now, with the UK being one of the few monarchies left.
But Edward II, Richard II, Henry VI, and Richard III were all killed, and that's not counting Edward V. Only Richard III died in a proper battle. King John, Henry III, and Edward II all faced serious rebellions by barons concerning their rights and privileges, and the Parliamentary forces intentionally mimicked the stances of those fights. (I leave out the revolt against Richard II, the Wars of the Roses, and the rebellions under the Tudors because in many ways those were about succession and/or the fitness of the ruler, while the earlier conflicts were about the limits of royal power and the "traditional" rights of the barons.)

In some ways, this was framed as a very English conflict fighting over ancient grudges. And it was very intentionally presented this way by Parliament, even when they were going far beyond the traditional rights of the institution. It wasn't something obviously and markedly different the way the French Revolution became in its final stages. Now, with the benefit of hindsight, we see it as something distinct from all of those conflicts. They didn't necessarily, at the time. I know that is an absurd thing to claim, because a king was not only killed but not replaced. Oliver Cromwell was not a king, nor did he have any even plausible birth claim to the throne. That's a huge change from the past, and it can't be ignored. But from the outside, little had really changed radically under Cromwell, despite his title. Also, Spain had been fighting Protestant insurgents in the Netherlands for decades, so the existence of powerful Protestant uprisings wasn't a foreign concept.

Despite wars between nations throughout the century, Europe in the late 18th century was in many ways a more settled place than Europe in the mid-17th century. So the occurrence of such an uprising was more of a shock to the system. All that being said, it is not as though other kingdoms did *nothing*. France did take in Charles's family.
Although France eventually allied with the Cromwell government, they did so because they needed allies against Spain in their continuing war. Spain made an alliance with Charles II and gave him some money for troops. The Battle of the Dunes had English royalists on the Spanish side and Cromwell's forces on the French side. (The French won.) Charles II was just never given enough money to invade England, which would have been a massive undertaking. Neither France nor Spain condoned killing a sovereign monarch. They were just consumed with each other and not in a position to do all that much about it. Other powers of Europe were devastated by war and not as close to England's orbit. Long story short, there were plenty of wars going on on the continent to distract the major powers of Europe. And, at the same time, the English Civil War was not as monumental a shift in power as the French Revolution eventually became.
[ "Charged with undermining the First French Republic, Louis XVI was separated from his family and tried in December. He was found guilty by the Convention, led by the Jacobins who rejected the idea of keeping him as a hostage. On 15 January 1793, by a majority of one vote, that of Philippe Égalité, he was condemned to death by guillotine and executed on 21 January 1793.\n", "The Convention put the King on trial, and after two months, voted by a majority of a single vote, for his execution. On 21 January 1793, Louis XVI was guillotined on the \"Place de la Révolution\" (former \"Place Louis XV\", now Place de la Concorde. Following the King's execution, rebellions against the government broke out in many regions of the country, particularly Brittany and Angevin. The Minister of Defense of the Commune, Dumouriez, tried without success to persuade his soldiers to march on Paris to overthrow the new government, and ended up defecting to the Austrians. To deal with these new threats, real or imagined, on 6 April 1793 the Convention established a Committee of Public Safety, to hunt for enemies and watch over the actions of the government. New decrees were issued for the arrest of families of émigrés, aristocrats and refractory priests, and the immunity from arrest of members of the Convention was taken away. On 10 March the Convention created a revolutionary Tribunal, located in the Palace of Justice. Verdicts of the Tribunal could not be appealed, and sentences were to be carried out immediately. Marie Antoinette was beheaded on 16 October 1793, and Jean Sylvain Bailly, the first elected mayor of Paris, was executed on 12 November 1793. The property of the aristocracy and of the Church was confiscated and declared \"Biens nationaux\" (national property); the churches were closed.\n", "His attempts were ultimately in vain. The Bourbon monarchy in France was overthrown in 1792, followed by massacres of many Royalists in Paris. In January 1793, Louis XVI was executed. 
In October, Marie Antoinette met a similar fate. In 1795, their son, Louis XVII died in prison.\n", "Necker's dismissal provoked the storming of the Bastille on 14 July. At the insistence of Louis XVI and Marie Antoinette, Charles and his family left France three days later, on 17 July, along with several other courtiers, including the Duchess of Polignac, the queen's favourite. His flight has historically been largely attributed to personal fears for his own safety. However recent research indicates that the King approved in advance of his brother's departure, seeing it as a means of ensuring that one close relative would be free to act as a spokesman for the monarchy after Louis himself had been removed from Versailles to Paris.\n", "Louis XVI was guillotined in 1793. By 1800, the First Republic, at war with much of Europe, had adopted a weak form of government that was overthrown by General Napoleon Bonaparte, who later proclaimed himself Emperor of the French. When British, Russian, Prussian, and Austrian armies invaded France in 1814, Napoleon, whose empire had once extended all the way to Moscow abdicated. The dead king's brother, the Count of Provence, was declared King Louis XVIII. Under Louis XVIII, no major changes were made to the army, beyond the recreation of several regiments of the pre-revolutionary \"maison militaire du roi\". However, when Napoleon returned from exile in 1815, the army, for the most part, went over to his side, and Louis fled.\n", "On 21 January 1793, Louis XVI was guillotined, which caused political turmoil. From January to May, Marat fought bitterly with the Girondins, whom he believed to be covert enemies of republicanism. Marat’s hatred of the Girondins became increasingly heated which led him to call for the use of violent tactics against them. The Girondins fought back and demanded that Marat be tried before the Revolutionary Tribunal. After attempting to avoid arrest for several days Marat was finally imprisoned. 
On 24 April, he was brought before the Tribunal on the charges that he had printed in his paper statements calling for widespread murder as well as the suspension of the Convention. Marat decisively defended his actions, stating that he had no evil intentions directed against the Convention. Marat was acquitted of all charges to the riotous celebrations of his supporters.\n", "Following the abolition of the monarchy of France by the French National Convention, Louis XVI and his family were held in confinement. Louis XVI was found guilty by the Convention of treason against the state, and was executed on 21 January 1793. The Dauphin Louis–Charles was thereafter proclaimed \"Louis XVII of France\" by French royalists, but was kept confined and never reigned. He died of illness on 8 June 1795.\n" ]
how credits were added to film
Assuming you are talking about the older days before computers, credits were often printed onto a sheet which was attached to two rollers, kinda like a treadmill. Then they could just film it. They could also layer films over one another to superimpose credits on a live-action scene.
[ "Then, early in the 1930s, the more progressive motion picture studios started to change their approach in presenting their screen credits. The major studios took on the challenge of improving the way they introduced their movies. They made the decision to present a more complete list of credits to go with a higher quality of artwork to be used in their screen credits.\n", "Credits for motion pictures often include the name of any locales (i.e., cities, states, and countries if outside of the US) used to film scenes, as well as any organizations not related to the production (e.g., schools, government entities, military bases, etc.) that played a role in the filming.\n", "Film credits were almost universally unheard of in 1910 and 1911, because the film's actors were unidentified fans would often come up with their own names for prominent actors. Film studios were reluctant to credit the actors because other studios might hire or the actors could demand a higher wage. Due to public demand and interest, studios began adding credits to their film lists in later productions.\n", "Unlike today's independently produced movies where on-screen credits are given to any and all participants, the sparse credits of \"big-studio\" films of the post-war period were usually limited to famous actors, music composers and studio executives. Even so, Comstock's work for Warner Brothers was notable enough to garner credits for many of his movies.\n", "Up until the 1970s, closing credits for films usually listed only a reprise of the cast members with their roles identified, or even simply just said \"The End,\" requiring opening credits to normally contain the details. 
For instance, the title sequence of the 1968 film \"Oliver!\" runs for about three-and-a-half minutes, and while not listing the complete cast, does list nearly all of its technical credits at the beginning of the film, all set against a background of what appear to be, but in fact are not, authentic 19th-century engravings of typical London life. The only credit at film's end is a listing of most of the cast, including cast members not listed at the beginning. These are set against a replay of some of the \"'Consider Yourself\" sequence.\n", "The \"credits,\" or \"end credits,\" is a list that gives credit to the people involved in the production of a film. Films from before the 1970s usually start a film with credits, often ending with only a title card, saying \"The End\" or some equivalent, often an equivalent that depends on the language of the production. From then onward, a film's credits usually appear at the end of most films. However, films with credits that end a film often repeat some credits at or near the start of a film and therefore appear twice, such as that film's acting leads, while less frequently some appearing near or at the beginning only appear there, not at the end, which often happens to the director's credit. The credits appearing at or near the beginning of a film are usually called \"titles\" or \"beginning titles.\" A post-credits scene is a scene shown after the end of the credits. \"Ferris Bueller's Day Off\" has a post-credit scene in which Ferris tells the audience that the film is over and they should go home.\n", "In the creative arts, credits are an acknowledgement of those who participated in the production. They are often shown at the end of movies and on CD jackets. In film, video, television, theater, etc., \"credits\" means the list of actors and behind-the-scenes staff who contributed to the production.\n" ]
Sources on pre-modern/medieval arms race
You are entirely right that much internet information on weapons is fragmentary and contradictory. Part of this is because a lot of the information out there is by enthusiasts of different knowledge levels and there are a lot of old sources and bad scholarship mixed in with good sources and sound methods. But part of this is because the entire history of weapons and armour is a vast topic and any summary will be fragmentary and contradictory by necessity. Weapons and armour do not exist in a vacuum - they are not simply better or worse than each other, but exist within a tactical, technological and economic context. Weapons do not necessarily fall out of favor because they are inferior - often it is because the manner of war changes. The form of weapons is not just dictated by how efficiently they are shaped for attacking - it is also dictated by how weapons are produced, and the technology available to produce them. We cannot understand weapons and armour without understanding how these factors shaped them. And this is hard, because it requires us to study military history, the history of technology, art history and social/economic history. All of this is to say that the history of a few weapons or a specific type of armour in a single period is a complicated topic. The history of weapons and armour and the way they interacted throughout history is a massive topic, too big for a single scholar, since it requires too much background knowledge. This is really why historians specialize in general - acquiring in-depth knowledge of a period is itself a full time job - acquiring in-depth knowledge of thousands of years is not possible in a human lifetime. This is why my flair area covers one region and only 350 years. Zeroing in on the period that I know about, the later Middle Ages in Western Europe, all the factors that I mention mean that the development of weapons and armour is more complex than better weapons driving the creation of better armour. 
Plate armour was partly, perhaps, a response to crossbows and other weapons, but it was also the product of an increasingly sophisticated steel-making process in Medieval Europe - larger blooms from bloomeries could be turned into larger plates (allowing the forging of large iron plates like breastplates), while waterwheels powered the bellows of the bloomeries and blast furnaces, the drip hammers that pounded the blooms into sheets, and the polishing wheels that polished the finished armour. Similarly, the form of swords was dictated not just by their use in battle but also by the state of metallurgy - the all-steel one-piece sword blades of the late Middle Ages could be formed into shapes that would not have been possible in the early Middle Ages.

Similarly, we need to place armour and weapons in the context of how they were used - Italian knightly armour and weapons of the 15th century (armour that includes many overlapping and layered plates, a heavy lance, a lance rest mounted high on the breastplate) is well suited to heavy cavalry combat, but not well suited to fighting on foot. As soldiers change how they fight, their tools change to fit the task. Ultimately, full plate armour wasn't simply rendered obsolete by stronger and stronger guns; it stopped *making sense* on the late 16th century battlefield, for a number of reasons. I deal with this more in [this answer](_URL_0_). In general, the development of armour and weapons in the Middle Ages is not a two-sided arms race of more powerful weapons against stronger armour, but a multi-faceted story involving many causes.

So with that said, there are some books to recommend that deal with the development of weapons and armour - heavier on the armour than the weapons. *The Knight and the Blast Furnace* by Alan Williams is a history of plate armour in medieval and early modern Europe, as told through its metallurgy.
This book deals heavily with the technological developments that made plate armour possible, and looks at how it developed over time: partially in response to different weapons (mostly firearms), partially as a result of changing industrial processes and economic/social forces. *The Sword and the Crucible*, by the same author, deals with swords; I am currently reading it. Tobias Capwell's *Armour of the English Knight 1400-1450* is a hyper-focused text (the first of two volumes covering only 15th century England) that shows just how the form of armour is developed to suit the purposes it is used for - the way that people fight. I should point out that all the books mentioned here are massive - folio-sized. Both *The Sword and the Crucible* and *Armour of the English Knight* approach or exceed 400 pages. *The Knight and the Blast Furnace* is 900 pages long. And there is still so much to be said about these topics.
[ "The medieval heralds also devised arms for various knights and lords from history and literature. Notable examples include the toads attributed to Pharamond, the cross and martlets of Edward the Confessor, and the various arms attributed to the Nine Worthies and the Knights of the Round Table. These too are now regarded as a fanciful invention, rather than evidence of the antiquity of heraldry.\n", "The heraldry, however, reflects the modern competition teams, rather than necessarily historically correct heraldic device that may have been worn by combatants in the medieval period. The first tournament was held at the Khotyn Fortress in Ukraine in 2010. The combatants depict armoured fighters from the 12th - 15th century. A number of different forms of combat take place, including some involving individuals, 5 a side or 21 on each side. Over 200 armoured men at arms take place in the competition, and in addition to melee/hand-to-hand weapons, archery is also featured.\n", "The armorial was compiled before 1396 by one Claes Heinenzoon (or Heynen, fl. 1345−1414) who was a herald in the service of the Duke of Guelders and also the creator of the Beyeren Armorial. The book displays some 1,800 coats-of-arms from all over Europe, in color, and is one of the most important sources for medieval heraldry.\n", "Several armorials of the 15th and early 16th century depict the coat of arms of the grand masters. These include the \"Chronica\" by Ulrich Richenthal, an armorial of St. Gallen kept in Nuremberg, an armorial of southwest Germany kept in Leipzig and the Miltenberg armorial. 
Conspicuously absent from these lists are three grand masters, Gerhards von Malberg (1241-1244) and his successors Heinrich von Hohenlohe (1244-1249) and Gunther von Wüllersleben (1250-1252), so that pre-modern historiographical tradition has a list of 34 grand masters for the time before 1525 (as opposed to 37 in modern accounts).\n", "Attributed arms are Western European coats of arms given retrospectively to persons real or fictitious who died before the start of the age of heraldry in the latter half of the 12th century. Arms were assigned to the knights of the Round Table, and then to biblical figures, to Roman and Greek heroes, and to kings and popes who had not historically borne arms (Pastoreau 1997a, 258). Each author could attribute different arms for the same person, but the arms for major figures soon became fixed.\n", "Notable arms attributed to biblical figures include the arms of Jesus based on the instruments of the Passion, and the shield of the Trinity. Medieval literature attributed coats of arms to the Nine Worthies, including Alexander the Great, Julius Caesar, and King Arthur. Arms were given to many kings predating heraldry, including Edward the Confessor and William I of England. These attributed arms were sometimes used in practice as quarterings in the arms of their descendants.\n", "\"Siebmachers Wappenbuch\" of 1605 was an early instance of a printed armorial. Medieval armorials usually include a few hundred coats of arms, in the late medieval period sometimes up to some 2,000. In the early modern period, the larger armorials develop into encyclopedic projects, with the \"Armorial général de France\" (1696), commissioned by Louis XIV of France, listing more than 125,000 coats of arms. In the modern period, the tradition develops into projects of heraldic dictionaries edited in multiple volumes, such as the \"Dictionary of British Arms\" in four volumes (1926–2009), or \"J. 
Siebmacher's großes Wappenbuch\" in seven volumes (1854–1967).\n" ]
If light has properties of waves, would it be possible to phase-cancel two laser beams? If yes, what would happen? If no, why not?
Yes. This is called interference and is a property of all waves. The prime apparatus that demonstrates laser beam interference is a Michelson interferometer. Basically, a laser beam is split into two beams (so that they are coherent, because significant interference requires coherence) which travel along different paths and are then recombined using mirrors before hitting a camera or a screen. One of the paths is a different length, or passes through a different material, so that one of the beams acquires a phase lag. On the screen, you get a series of dark and light rings (called an interference pattern). The dark rings are where the two beams are out of phase and cancel each other (destructive interference). However, energy is not destroyed; rather, it is redirected to the areas with constructive interference (the bright rings). Another approach is the double-slit setup: you send a single laser beam through two slits, which turns it into two coherent beams. These beams interfere, producing a pattern of light and dark bars.
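The cancellation and energy redistribution described above follow from the standard two-beam interference formula, I = I1 + I2 + 2*sqrt(I1*I2)*cos(Δφ). Here is a minimal sketch in plain Python (the function name and sample values are my own, just for illustration):

```python
import math

def two_beam_intensity(i1, i2, phase_diff):
    """Time-averaged intensity of two coherent beams with a given
    phase difference (radians): I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi)."""
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(phase_diff)

# Equal beams cancel completely at dphi = pi (a dark fringe) and
# quadruple at dphi = 0 (a bright fringe).
print(two_beam_intensity(1.0, 1.0, math.pi))  # ~0 (dark fringe)
print(two_beam_intensity(1.0, 1.0, 0.0))      # 4.0 (bright fringe)

# Energy is conserved: averaged over all phase differences, the mean
# intensity is I1 + I2 -- light is redistributed, not destroyed.
samples = [two_beam_intensity(1.0, 1.0, 2 * math.pi * k / 1000)
           for k in range(1000)]
print(sum(samples) / len(samples))  # ~2.0
```

The averaging step is the point: the dark rings don't delete energy, they just move it into the bright rings.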
[ "It is possible to arrange multiple beams of laser light such that destructive quantum interference suppresses the vacuum fluctuations. Such a squeezed vacuum state involves negative energy. The repetitive waveform of light leads to alternating regions of positive and negative energy.\n", "For example, in the case of a hologram, illuminating the grating with just the reference beam causes the reconstruction of the original signal beam. When two coherent laser beams (usually obtained by splitting a laser beam by the use of a beamsplitter into two, and then suitably redirecting by mirrors) cross inside a photorefractive crystal, the resultant refractive index grating diffracts the laser beams. As a result, one beam gains energy and becomes more intense at the expense of light intensity reduction of the other. This phenomenon is an example of two-wave mixing. In this configuration, Bragg diffraction condition is automatically satisfied.\n", "A laser beam generally approximates much more closely to a monochromatic source, and it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors.\n", "For interference lithography to be successful, coherence requirements must be met. First, a spatially coherent light source must be used. This is effectively a point light source in combination with a collimating lens. A laser or synchrotron beam are also often used directly without additional collimation. The spatial coherence guarantees a uniform wavefront prior to beam splitting. Second, it is preferred to use a monochromatic or temporally coherent light source. This is readily achieved with a laser but broadband sources would require a filter. 
The monochromatic requirement can be lifted if a diffraction grating is used as a beam splitter, since different wavelengths would diffract into different angles but eventually recombine anyway. Even in this case, spatial coherence and normal incidence would still be required.\n", "If the gain (amplification) in the medium is larger than the resonator losses, then the power of the recirculating light can rise exponentially. But each stimulated emission event returns an atom from its excited state to the ground state, reducing the gain of the medium. With increasing beam power the net gain (gain minus loss) reduces to unity and the gain medium is said to be saturated. In a continuous wave (CW) laser, the balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity; this equilibrium determines the operating point of the laser. If the applied pump power is too small, the gain will never be sufficient to overcome the cavity losses, and laser light will not be produced. The minimum pump power needed to begin laser action is called the \"lasing threshold\". The gain medium will amplify any photons passing through it, regardless of direction; but only the photons in a spatial mode supported by the resonator will pass more than once through the medium and receive substantial amplification.\n", "If instead of oscillating independently, each mode operates with a fixed phase between it and the other modes, the laser output behaves quite differently. Instead of a random or constant output intensity, the modes of the laser will periodically all constructively interfere with one another, producing an intense burst or pulse of light. Such a laser is said to be 'mode-locked' or 'phase-locked'. These pulses occur separated in time by , where \"τ\" is the time taken for the light to make exactly one round trip of the laser cavity. 
This time corresponds to a frequency exactly equal to the mode spacing of the laser, .\n", "It is possible, using nonlinear optical processes, to exactly reverse the propagation direction and phase variation of a beam of light. The reversed beam is called a \"conjugate\" beam, and thus the technique is known as optical phase conjugation (also called \"time reversal\", \"wavefront reversal\" and is significantly different from \"retroreflection\").\n" ]
Do all terrestrial bodies which experience a planetary wobble and orbit a star have four seasons?
The Earth's wobble (precession of the equinoxes) doesn't cause the seasons. The seasons are due to the axial tilt and the Earth's orbit around the Sun. "Seasons" isn't an astronomical term. Any planet whose axis of rotation is tilted with respect to its orbital plane will have solstices and equinoxes. If you wanted to, you could define four seasons between those solstices and equinoxes. That's not quite the same thing as "the four seasons we experience", though, since the seasons (in terms of weather and biosphere) don't have to follow the equinoxes and solstices. Also, a planet with only a very slight axial tilt will have only very slight changes in insolation throughout the year.
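The last point can be made concrete with a toy model. This sketch assumes a circular orbit and a crude sinusoidal declination (the function name, latitude, and day-80 equinox offset are my own simplifications, not a real ephemeris); it just shows that the yearly swing in noon sun height scales with the tilt:

```python
import math

def noon_sun_elevation(latitude_deg, axial_tilt_deg, day_of_year):
    """Very rough model: solar declination oscillates between +/- the
    axial tilt over the year; noon elevation = 90 - |latitude - declination|.
    Assumes a circular orbit, with day ~80 as the March equinox."""
    decl = axial_tilt_deg * math.sin(2 * math.pi * (day_of_year - 80) / 365.25)
    return 90.0 - abs(latitude_deg - decl)

# At 45 degrees latitude with an Earth-like tilt (23.4 deg), the noon sun
# swings by ~47 degrees between solstices; with a 1-degree tilt the swing
# is only ~2 degrees -- hence only very slight changes in insolation.
for tilt in (23.4, 1.0):
    elevations = [noon_sun_elevation(45.0, tilt, d) for d in range(365)]
    print(tilt, round(max(elevations) - min(elevations), 1))
```

The swing is roughly twice the tilt, which is why a nearly upright planet gets a nearly constant year.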
[ "The planets' orbits are chaotic over longer timescales, in such a way that the whole Solar System possesses a Lyapunov time in the range of 2–230 million years. In all cases this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty (so, for example, the timing of winter and summer become uncertain), but in some cases the orbits themselves may change dramatically. Such chaos manifests most strongly as changes in eccentricity, with some planets' orbits becoming significantly more—or less—elliptical.\n", "The orbits of many of the minor bodies of the Solar System, such as comets, are often heavily perturbed, particularly by the gravitational fields of the gas giants. While many of these perturbations are periodic, others are not, and these in particular may represent aspects of chaotic motion. For example, in April 1996, Jupiter's gravitational influence caused the period of Comet Hale–Bopp's orbit to decrease from 4,206 to 2,380 years, a change that will not revert on any periodic basis.\n", "The more distant planets retrograde more frequently, as they do not move as much in their orbits while Earth completes an orbit itself. The center of the retrograde motion occurs when the body is exactly opposite the sun, and therefore high in the ecliptic at local midnight. The retrogradation of a hypothetical extremely distant (and nearly non-moving) planet would take place during a half-year, with the planet's apparent yearly motion being reduced to a parallax ellipse.\n", "A planetary object that orbits a star with high orbital eccentricity may spend only some of its year in the CHZ and experience a large variation in temperature and atmospheric pressure. This would result in dramatic seasonal phase shifts where liquid water may exist only intermittently. 
It is possible that subsurface habitats could be insulated from such changes and that extremophiles on or near the surface might survive through adaptions such as hibernation (cryptobiosis) and/or hyperthermostability. Tardigrades, for example, can survive in a dehydrated state temperatures between and . Life on a planetary object orbiting outside CHZ might hibernate on the cold side as the planet approaches the apastron where the planet is coolest and become active on approach to the periastron when the planet is sufficiently warm.\n", "In all cases this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty (so, for example, the timing of winter and summer become uncertain), but in some cases the orbits themselves may change dramatically. Such chaos manifests most strongly as changes in eccentricity, with some planets' orbits becoming significantly more—or less—elliptical.\n", "Like the Pythagoreans Hicetas and Ecphantus, Heraclides proposed that the apparent daily motion of the stars was created by the rotation of the Earth on its axis once a day. This view contradicted the accepted Aristotelian model of the universe, which said that the Earth was fixed and that the stars and planets in their respective spheres might also be fixed. Simplicius says that Heraclides proposed that the irregular movements of the planets can be explained if the Earth moves while the Sun stays still.\n", "Over time, the orbits of planets will decay due to gravitational radiation, or planets will be ejected from their local systems by gravitational perturbations caused by encounters with another stellar remnant.\n" ]
why do japanese swords only have one edge?
Japanese swordfighting styles are suited to slashing with a sharp, curved edge, to counter the style of armour (or lack of armour) popular at the time. European medieval-style swords are double-edged with a point, to penetrate the heavy plate armour that was used there: thick armour, but with many joints and separations that can be penetrated by a point and by heavier, "blunt" strikes.
[ "Over time, however, the curved single-edged sword became so dominant a style in Japan that \"tou\" and \"ken\" came to be used interchangeably to refer to swords in Japan and by others to refer to Japanese swords. For example, the Japanese typically refer to Japanese swords as 日本刀 \"nihontō\" (\"Japanese \"tou\"\" i.e. \"Japanese (single-edged) blade\"), while the character \"ken\" 剣 is used in such terms as kendo and kenjutsu. Modern formal usage often uses both characters in referring to a collection of swords, for example, in naming the Japanese Sword Museum 日本美術刀剣博物館.\n", "Unlike western knives, Japanese knives are often only single ground, meaning that they are sharpened so that only one side holds the cutting edge. As shown in the image, some Japanese knives are angled from both sides, while others are angled only from one side with the other side of the blade being flat. It was traditionally believed that a single-angled blade cuts better and makes cleaner cuts, though requiring more skill to use than a blade with a double-beveled edge. Generally, the right-hand side of the blade is angled, as most people use the knife with their right hand. Left-handed models are rare and must be specially ordered and custom made.\n", "Japanese swords were carried in several different ways, varying throughout Japanese history. The style most commonly seen in \"samurai\" movies is called \"buke-zukuri\", with the katana (and \"wakizashi\", if also present) carried edge up, with the sheath thrust through the \"obi\" (sash).\n", "The legitimate Japanese sword is made from Japanese steel \"Tamahagane\". The most common lamination method the Japanese sword blade is formed from is a combination of two different steels: a harder outer jacket of steel wrapped around a softer inner core of steel. This creates a blade which has a hard, razor sharp cutting edge with the ability to absorb shock in a way which reduces the possibility of the blade breaking when used in combat. 
The \"hadagane\", for the outer skin of the blade, is produced by heating a block of raw steel, which is then hammered out into a bar, and the flexible back portion. This is then cooled and broken up into smaller blocks which are checked for further impurities and then reassembled and reforged. During this process the billet of steel is heated and hammered, split and folded back upon itself many times and re-welded to create a complex structure of many thousands of layers. Each different steel is folded differently, in order to provide the necessary strength and flexibility to the different steels. The precise way in which the steel is folded, hammered and re-welded determines the distinctive grain pattern of the blade, the \"jihada\", (also called \"jigane\" when referring to the actual surface of the steel blade) a feature which is indicative of the period, place of manufacture and actual maker of the blade. The practice of folding also ensures a somewhat more homogeneous product, with the carbon in the steel being evenly distributed and the steel having no voids that could lead to fractures and failure of the blade in combat.\n", "A , literally translating into \"small or short \"tachi\" (sword)\", is one of the traditionally made Japanese swords (\"nihontō\") used by the samurai class of feudal Japan. Kodachi are from the early Kamakura period (1185–1333) and are in the shape of a tachi. Kodachi are mounted in tachi style but with a length of less than 60 cm. They are often confused with wakizashi, due to their length and handling techniques. However, their construction is what sets the two apart, as kodachi are a set length while wakizashi are forged to complement the wielder's height or the length of their katana. 
As a result, the kodachi was too short to be called a sword properly but was also too long to be considered a dagger, thus it is widely considered a primary short sword, unlike the tantō or the Wakizashi which would act as a secondary weapon that was used alongside a longer blade.\n", "Japanese swords were often forged with different profiles, different blade thicknesses, and varying amounts of grind. \"Wakizashi\", for instance, were not simply scaled-down versions of \"katana\"; they were often forged in \"hira-zukuri\" or other such forms which were very rare on other swords.\n", "The sword has long held a significance in Japanese culture from the reverence and care that the samurai placed in their weapons. The earliest swords in Japan were straight, based on early Chinese \"jian\". Curved blades became more common at the end of the 8th century, with the importation of the curved forging techniques of that time. The shape was more efficient when fighting from horseback. Japanese swordsmanship is primarily two-handed wherein the front hand pushes down and the back hand pulls up while delivering a basic vertical cut. The samurai often carried two swords, the longer \"katana\" and the shorter \"wakizashi\", and these were normally wielded individually, though use of both as a pair did occur.\n" ]
how does professional poker work?
I am not a professional poker player, but I do know three pros personally. They are not big-name pros, but they do earn a modest living (think mid-five figures) playing poker. One of them got his start by making the final table of a large multi-table tournament at a casino with a smallish ($200 or so) entry fee. He then ran that $25k up quickly and has settled into a routine playing $2-$5 hold'em, $2-$5 Pot Limit Omaha, and slightly larger games when they are available at his home casino. The other one saved his money working an 8-5 job until he had a decent amount of money to make a go at it (he said it was $10,000) and then ran that money up playing similar limits live and online. They also invest their poker profits in other players that they know are above average. You mention elimination in your question. I think you may be referring to tournament poker, which is only one of many, many forms of poker. Tournament poker is definitely one way to make money when playing poker professionally, but the pros that I am aware of make most of their money playing "cash" or "ring" games, where you sit at a table with set limits on starting cash (typically at least 100 times the small and big blinds) and play against other players. So a $2-$5 table might have a $200 minimum buy-in and a $500 maximum, though many casinos do raise the maximum to as much as twice that.
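The buy-in arithmetic in that last paragraph can be sketched in a few lines. This is an illustrative assumption, not a universal casino rule: the function name is made up, and the "minimum ≈ 100× small blind, maximum ≈ 100× big blind" convention is inferred from the $200/$500 figures quoted above for a $2/$5 table.

```python
# Hedged sketch of typical cash-game ("ring" game) buy-in limits.
# Assumption: min buy-in ~100x the small blind, max ~100x the big blind,
# matching the $200/$500 figures described above for a $2/$5 table.

def buyin_range(small_blind, big_blind, multiplier=100):
    """Return (min_buyin, max_buyin) for a cash-game table."""
    return multiplier * small_blind, multiplier * big_blind

min_buy, max_buy = buyin_range(2, 5)
print(min_buy, max_buy)  # 200 500
```

Actual limits vary by card room; as noted above, many casinos set the maximum higher (or cap it at "match the biggest stack").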
[ "Poker is a popular card game that combines elements of chance and strategy. There are various styles of poker, all of which share an objective of presenting the least probable or highest-scoring hand. A poker hand is usually a configuration of five cards depending on the variant, either held entirely by a player or drawn partly from a number of shared, community cards. Players bet on their hands in a number of rounds as cards are drawn, employing various mathematical and intuitive strategies in an attempt to better opponents.\n", "Poker is a family of gambling games in which players bet into a pool, called the pot, value of which changes as the game progresses that the value of the hand they carry will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence.\n", "Poker is a family of card games that combines gambling, strategy, and skill. All poker variants involve betting as an intrinsic part of play, and determine the winner of each hand according to the combinations of players' cards, at least some of which remain hidden until the end of the hand. Poker games vary in the number of cards dealt, the number of shared or \"community\" cards, the number of cards that remain hidden, and the betting procedures.\n", "A poker run is an organized event in which participants, usually using motorcycles, all-terrain vehicles, boats, snowmobiles, horses, on foot or other means of transportation, must visit five to seven checkpoints, drawing a playing card at each one. The object is to have the best poker hand at the end of the run. Having the best hand and winning is purely a matter of chance. 
The event has a time limit, however the individual participants are not timed.\n", "All casinos and most home games play poker by what are called table stakes rules, which state that each player starts each deal with a certain stake, and plays that deal with that stake. A player may not remove money from the table or add money from their pocket during the play of a hand. In essence, table stakes rules creates a maximum and a minimum buy-in amount for cash game poker as well as rules for adding and removing the stake from play. A player also may not take a portion of their money or stake off the table, unless they opt to leave the game and remove their entire stake from play. Players are not allowed to hide or misrepresent the amount of their stake from other players and must truthfully disclose the amount when asked.\n", "In the poker game of Texas hold 'em, a starting hand consists of two \"hole cards\", which belong solely to the player and remain hidden from the other players. Five community cards are also dealt into play. Betting begins before any of the community cards are exposed, and continues throughout the hand. The player's \"playing hand\", which will be compared against that of each competing player, is the best 5-card poker hand available from his two hole cards and the five community cards. Unless otherwise specified, here the term \"hand\" applies to the player's two hole cards, or \"starting hand\".\n", "The game of poker involves not only an understanding of probability, but also the competence of reading and analyzing the body language of the opponents. A key component of poker is to be able to \"cheat\" the opponents. To spot these cheats, players must have the ability to spot the individual \"ticks\" of their opponents. Players also have to look out for signs that an opponent is doing well.\n" ]
how do completely torn ligaments such as atfl heal?
It's fairly complicated, but the simple answer is that cells communicate. A cell can tell other cells where it is, what type of cell it is, and whether it's in some kind of "distress". There is also a three-step response when tissue tears, and the first step is basically inflammation. In this step, a ton of different cell types (tissue, stem, blood, immune, etc.) rush to the site of trauma, and they all have different jobs. This is where a lot of communicating occurs, and your body is essentially trying to figure out what happened and how it can best be fixed. Cells that make up what's left of the ligament will communicate and, with the help of other cell types, will slowly undergo mitosis and other cellular processes to repair the ligament.
[ "Ulnar collateral ligament reconstruction, also known as Tommy John surgery (TJS), is a surgical graft procedure where the ulnar collateral ligament in the medial elbow is replaced with either a tendon from elsewhere in the patient's body, or with one from a dead donor. The procedure is common among collegiate and professional athletes in several sports, particularly in baseball.\n", "The ligament is an uncalcified elastic structure comprised in its most minimal state of two layers: a lamellar layer and a fibrous layer. The lamellar layer consists entirely of organic material (a protein and collagen matrix), is generally brown in color, and is elastic in response to both compressional and tensional stresses. The fibrous layer is made of aragonite fibers and organic material, is lighter in color and often iridescent, and is elastic only under compressional stress. The protein responsible for the elasticity of the ligament is abductin, which has enormous elastic resiliency: this resiliency is what causes the valves of the bivalve mollusk to open when the adductor muscles relax.\n", "Ligaments attach bone to bone or bone to tendon, and are vital in stabilizing joints as well as supporting structures. They are made up of fibrous material that is generally quite strong. Due to their relatively poor blood supply, ligament injuries generally take a long time to heal.\n", "A slow and chronic deterioration of the ulnar collateral ligament can be due to repetitive stress acting on the ulna. At first, pain can be bearable and can worsen to an extent where it can terminate an athlete’s career. The repetitive stress placed on the ulna causes micro tears in the ligament resulting in the loss of structural integrity over time. The acute rupture is less common compared to the slow deterioration injury. The acute rupture occurs in collisions when the elbow is in flexion such as that in a wrestling match or a tackle in football. 
The ulnar collateral ligament distributes over fifty percent of the medial support of the elbow. This can result in an ulnar collateral ligament injury or a dislocated elbow causing severe damage to the elbow and the radioulnar joints.\n", "Ligaments attach bone to bone, and are vital in stabilizing joints as well as supporting structures. They are made up of fibrous material that is generally quite strong. Due to their relatively poor blood supply, ligament injuries generally take a long time to heal.\n", "Repair of a complete, full-thickness tear involves tissue suture. The method currently in favor is to place an anchor in the bone at the natural attachment site, with resuture of torn tendon to the anchor. If tissue quality is poor, mesh (collagen, Artelon, or other degradable material) may be used to reinforce the repair. Repair can be performed through an open incision, again requiring detachment of a portion of the deltoid, while a mini-open technique approaches the tear through a deltoid-splitting approach. The latter may cause less injury to muscle and produce better results. Contemporary techniques now use an all arthroscopic approach.\n", "The ligament is sometimes described as consisting of two marginal bands and a thinner intervening portion, the two bands being attached respectively to the apex and the base of the coracoid process, and joining together at the acromion.\n" ]
How do you continue studying history after graduation?
Keep up with the big journals, and go to some of the big conferences. You'll stay up-to-date with the most recent research, and get to keep interacting with people who hold a similar academic interest. Once you have your JD, there's always the option of (potentially) writing academically on classical law on the side. Additionally, if you have the option for electives, take some on ancient law if they're available to you. Hope this helps a little. Happy reading!
[ "Initially, graduate students usually rotate through the laboratories of several faculty researchers, after which the student commits to joining a particular laboratory for the remainder of his or her education. The remaining time is spent conducting original research under the direction of the principal investigator to complete and publish a dissertation. Unlike undergraduate and professional schools, there is no set time period for graduate education. Students graduate once a thesis project of significant scope to justify the writing of their dissertation has been completed, a point that is determined by the student's principal investigator as well as his or her faculty advisory committee. The average time to graduation can vary between institutions, but most programs average around 5–6 years.\n", "Courses required to be taken in order to graduate are four years of English, three years of mathematics, two years of United States history, one year of world history/geography; three years of science; one year of fine, practical and/or performing arts, 1/2 year of digital technology, 1/2 year of financial literacy, one year of a business, life science, or vocational course, two years of a World Language, and four years of physical education/health.\n", "Upon graduation, graduates will receive one of the following degrees: Master of Law (Politics & International Relations, Law & Society); Master of Economics (Economics & Management); Master of History (History & Archaeology); Master of Literature (Literature and Culture); or Master of Philosophy in China Studies (Philosophy & Religion)The first cohort of scholars have taken a range of paths. Roughly 30% continued to Ph.D. level studies at esteemed universities, while others are employed by Goldman Sachs, McKinsey & Company, Google, J.P. Morgan & Co., the Associated Press, the Boston Consulting Group, General Electric, HNA Group, NEO blockchain, Bank of Korea, the Chinese Ministry of Commerce, and more. 
All Yenching Scholars write a Master's thesis under the guidance of an adviser and defend it orally before an academic committee. In addition to a fully funded scholarship, scholars also receive a monthly stipend of $500 and round-trip airfare.\n", "To graduate, all students complete four years each of Upper School-level English, Mathematics, Science, and History and three years of a single foreign language. Students are also required to take Music and Arts Appreciation, Logic and Rhetoric, three years of Theology, one year of Philosophy and four semesters of Physical Education.\n", "Post graduation, students have pursued professional careers in law, medicine, nursing, pharmacy, business, and others. Graduates pursued their advanced education at some of the most recognized universities across the nation including Harvard, Columbia, Cornell, Princeton, Yale, New York University, Yeshiva University, SUNY colleges, CUNY colleges, etc. Most graduates spend a year studying in Israel.\n", "Courses required to be taken in order to graduate are 4 years of English, 3 years of mathematics, 2 years of United States history, 1 year of world history/geography; 3 years of science; 1 year of fine, practical and/or performing arts, 1 year of digital technology, 1 year of business, life science, vocational course, 2 years of a World Language, 4 years of physical education/health, and 1/2 year of career exploration or development. All students must pass the State High School Proficiency Assessment called HSPA to graduate.\n", "The first academic year began in 2010/2011 when 40 students enrolled in the first year of the study program of history. In October 2011, the University started its programs in psychology and sociology. First graduates of the undergraduate study program in history were promoted in 2014.\n" ]
Aside from the obvious (algebra, chess, etc.), how did Western science benefit from encounters with Islam and the Middle East during the Crusades?
In my understanding, that old idea of information being transmitted through the Christian East has been rather debunked. Most of the things that the Islamic world transmitted to the West came through Spain, not Syria and Palestine. The eastern contacts were more important for economic reasons, moving goods into the Mediterranean that originated in the Far East and Middle East.
[ "Medieval Islam's receptiveness to new ideas and heritages helped it make major advances in medicine during this time, adding to earlier medical ideas and techniques, expanding the development of the health sciences and corresponding institutions, and advancing medical knowledge in areas such as surgery and understanding of the human body, although many Western scholars have not fully acknowledged its influence (independent of Roman and Greek influence) on the development of medicine.\n", "During the Middle Ages, there was frequently an exchange of works between Byzantine and Islamic science. The Byzantine Empire initially provided the medieval Islamic world with Ancient and early Medieval Greek texts on astronomy, mathematics and philosophy for translation into Arabic as the Byzantine Empire was the leading center of scientific scholarship in the region at the beginning of the Middle Ages. Later as the Caliphate and other medieval Islamic cultures became the leading centers of scientific knowledge, Byzantine scientists such as Gregory Choniades, who had visited the famous Maragheh observatory, translated books on Islamic astronomy, mathematics and science into Medieval Greek, including for example the works of Ja'far ibn Muhammad Abu Ma'shar al-Balkhi, Ibn Yunus, Al-Khazini (who was of Byzantine Greek descent but raised in a Persian culture), Muhammad ibn Mūsā al-Khwārizmī and Nasīr al-Dīn al-Tūsī (such as the \"Zij-i Ilkhani\" and other Zij treatises) among others.\n", "During the Middle Ages, there was frequently an exchange of works between Byzantine and Islamic science. The Byzantine Empire initially provided the medieval Islamic world with Ancient and early Medieval Greek texts on astronomy, mathematics and philosophy for translation into Arabic as the Byzantine Empire was the leading center of scientific scholarship in the region at the beginning of the Middle Ages. 
Later as the Caliphate and other medieval Islamic cultures became the leading centers of scientific knowledge, Byzantine scientists such as Gregory Choniades, who had visited the famous Maragheh observatory, translated books on Islamic astronomy, mathematics and science into Medieval Greek, including for example the works of Ja'far ibn Muhammad Abu Ma'shar al-Balkhi, Ibn Yunus, Al-Khazini (who was of Byzantine Greek descent but raised in a Persian culture), Muhammad ibn Mūsā al-Khwārizmī and Nasīr al-Dīn al-Tūsī (such as the \"Zij-i Ilkhani\" and other Zij treatises) among others.\n", "The medieval Muslims took a keen interest in the study of astrology: partly because they considered the celestial bodies to be essential, partly because the dwellers of desert-regions often travelled at night, and relied upon knowledge of the constellations for guidance in their journeys. After the advent of Islam, the Muslims needed to determine the time of the prayers, the direction of the Kaaba, and the correct orientation of the mosque, all of which helped give a religious impetus to the study of astronomy and contributed towards the belief that the heavenly bodies were influential upon terrestrial affairs as well as the human condition. The science dealing with such influences was termed astrology (Arabic: علم النجوم \"Ilm an-Nujūm\"), a discipline contained within the field of astronomy (more broadly known as علم الفلك \"Ilm al-Falak\" 'the science of formation [of the heavens]'). The principles of these studies were rooted in Arabian, Persian, Babylonian, Hellenistic and Indian traditions and both were developed by the Arabs following their establishment of a magnificent observatory and library of astronomical and astrological texts at Baghdad in the 8th century.\n", "Some scholars, including Makdisi, have argued that early medieval universities were influenced by the madrasas in Al-Andalus, the Emirate of Sicily, and the Middle East during the Crusades. 
Norman Daniel, however, views this argument as overstated. Roy Lowe and Yoshihito Yasuhara have recently drawn on the well-documented influences of scholarship from the Islamic world on the universities of Western Europe to call for a reconsideration of the development of higher education, turning away from a concern with local institutional structures to a broader consideration within a global context.\n", "In the early centuries of Islam the most important points of contact between the Latin West and the Islamic world from an artistic point of view were Southern Italy and Sicily and the Iberian peninsula, which both held significant Muslim populations. Later the Italian maritime republics were important in trading artworks. In the Crusades Islamic art seems to have had relatively little influence even on the Crusader art of the Crusader kingdoms, though it may have stimulated the desire for Islamic imports among Crusaders returning to Europe.\n", "Islamic medicine preserved, systematized and developed the medical knowledge of classical antiquity, including the major traditions of Hippocrates, Galen and Dioscorides. During the post-classical era, Islamic medicine was the most advanced in the world, integrating concepts of ancient Greek, Roman and Persian medicine as well as the ancient Indian tradition of Ayurveda, while making numerous advances and innovations. Islamic medicine, along with knowledge of classical medicine, was later adopted in the medieval medicine of Western Europe, after European physicians became familiar with Islamic medical authors during the Renaissance of the 12th century.\n" ]
Are artificial food dyes different than dyes used in craft supplies?
Dyes that are approved for use in food or hygiene products have undergone testing to various degrees in order to ensure that they are non-toxic in the quantities you'd find in those products. Craft supplies have no such regulations in place, and there is no telling what materials are present in the dyes or pigments used. The best guarantee you can hope for is that they aren't toxic merely from being near them. For example, the glass containers you can buy dirt cheap at a craft store are often full of lead, and they will usually say that they are not meant for the storage of food or drinks. Unfortunately, we are a long way off from knowing the specific effect of food dyes on children with ADHD. The state of the field is that researchers are still trying to establish that there even is a reproducible link between food dyes and hyperactivity. If that research succeeds in nailing down a precise link, other scientists can begin work on figuring out exactly how that effect comes about.
[ "The primary source of dye, historically, has been nature, with the dyes being extracted from animals or plants. Since the mid-19th century, however, humans have produced artificial dyes to achieve a broader range of colors and to render the dyes more stable to washing and general use. Different classes of dyes are used for different types of fiber and at different stages of the textile production process, from loose fibers through yarn and cloth to complete garments.\n", "Many synthesized dyes were easier and less costly to produce and were superior in coloring properties when compared to naturally derived alternatives. Some synthetic food colorants are diazo dyes. Diazo dyes are prepared by coupling of a diazonium compound with a second aromatic hydrocarbons. The resulting compounds contain conjugated systems that efficiently absorb light in the visible parts of the spectrum, i.e. they are deeply colored. The attractiveness of the synthetic dyes is that their color, lipophilicity, and other attributes can be engineered by the design of the specific dyestuff. The color of the dyes can be controlled by selecting the number of azo-groups and various substituents. Yellow shades are often achieved by using acetoacetanilide. Red colors are often azo compounds. The pair indigo and indigo carmine exhibit the same blue color, but the former is soluble in lipids, and the latter is water-soluble because it has been fitted with sulfonate functional groups.\n", "One other class that describes the role of dyes, rather than their mode of use, is the food dye. Because food dyes are classed as food additives, they are manufactured to a higher standard than some industrial dyes. Food dyes can be direct, mordant and vat dyes, and their use is strictly controlled by legislation. Many are azo dyes, although anthraquinone and triphenylmethane compounds are used for colors such as green and blue. 
Some naturally occurring dyes are also used.\n", "In order to achieve dyes with sufficient order parameter, researchers have synthesized novel dyes which themselves are liquid crystalline in character to function as display ingredients. The challenges that researchers face are : (1) high purity (2) small quantities and (3) high efficiency. A few companies have overcome these challenges, such as Mitsui Toatsu in Japan and Merck in the U.K.\n", "The majority of natural dyes are derived from plant sources: roots, berries, bark, leaves, wood, fungi and lichens. Most dyes are synthetic, i.e., are man-made from petrochemicals. Other than pigmentation, they have a range of applications including organic dye lasers, optical media (CD-R) and camera sensors (color filter array).\n", "The United States Government and the European Union certify a small number of synthetic chemical colourings to be used in food. These are usually aromatic hydrocarbons, or azo dyes, made from petroleum. The most common ones are:\n", "The UK FSA commissioned a study of six food dyes (tartrazine, Allura red, Ponceau 4R, Quinoline Yellow, sunset yellow, carmoisine (dubbed the \"Southampton 6\")), and sodium benzoate (a preservative) on children in the general population, who consumed them in beverages. The study found \"a possible link between the consumption of these artificial colours and a sodium benzoate preservative and increased hyperactivity\" in the children; the advisory committee to the FSA that evaluated the study also determined that because of study limitations, the results could not be extrapolated to the general population, and further testing was recommended.\n" ]
How does the UV Catastrophe relate to the quantization of energy?
The basic problem can be thought of like this: in classical thermodynamics there is the [equipartition theorem](_URL_0_), which gives each mode the same (finite) average energy. The electromagnetic field has infinitely many modes, hence the problem. edit: corralled some runaway words
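To make that mode-counting argument concrete, here is a standard textbook sketch (not part of the original answer): equipartition hands every mode an average energy $k_B T$, so summing over infinitely many modes diverges, while Planck's quantized mean energy shuts off the high-frequency modes exponentially.

```latex
% Classical: equipartition gives each mode average energy k_B T, so the
% Rayleigh–Jeans spectral energy density grows without bound in frequency:
u_{\mathrm{RJ}}(\nu, T) = \frac{8\pi \nu^2}{c^3}\, k_B T,
\qquad
\int_0^\infty u_{\mathrm{RJ}}(\nu, T)\, d\nu = \infty .

% Quantized: if a mode's energy comes only in units of h\nu, its thermal
% mean energy becomes Planck's expression instead of the flat k_B T:
\langle E \rangle = \frac{h\nu}{e^{h\nu / k_B T} - 1}
\;\approx\;
\begin{cases}
k_B T, & h\nu \ll k_B T \quad \text{(recovers the classical limit)}\\[4pt]
h\nu\, e^{-h\nu / k_B T}, & h\nu \gg k_B T \quad \text{(exponentially suppressed)}
\end{cases}

% With the high-frequency modes suppressed, the integral over \nu converges
% to a finite total energy density (the Stefan–Boltzmann law).
```

The infinity of modes is still there; what quantization changes is that the high-frequency ones each carry exponentially little energy instead of a full $k_B T$.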
[ "The term \"ultraviolet catastrophe\" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The phrase refers to the fact that the Rayleigh–Jeans law accurately predicts experimental results at radiative frequencies below 10 GHz, but begins to diverge with empirical observations as these frequencies reach the ultraviolet region of the electromagnetic spectrum. Since the first appearance of the term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence.\n", "The ultraviolet catastrophe results from the equipartition theorem of classical statistical mechanics which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of formula_1.\n", "The colourful term \"ultraviolet catastrophe\" was given by Paul Ehrenfest in 1911 to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black-body radiation. But this had not been part of Planck's thinking, because he had not tried to apply the doctrine of equipartition: when he made his discovery in 1900, he had not noticed any sort of \"catastrophe\". It was first noted by Lord Rayleigh in 1900, and then in 1901 by Sir James Jeans; and later, in 1905, by Einstein when he wanted to support the idea that light propagates as discrete packets, later called 'photons', and by Rayleigh and by Jeans.\n", "At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. 
The energy required for this is always larger than about 10 electron volt (eV) corresponding with wavelengths smaller than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum with energies in the approximate ionization range, is sometimes called \"extreme UV.\" Ionizing UV is strongly filtered by the Earth's atmosphere).\n", "The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century/early 20th century classical physics that an ideal black body (also blackbody) at thermal equilibrium will emit radiation in all frequency ranges, emitting more energy as the frequency increases. By calculating the total amount of radiated energy (i.e., the sum of emissions in all frequency ranges), it can be shown that a blackbody is likely to release an arbitrarily high amount of energy. This would cause all matter to instantaneously radiate all of its energy until it is near absolute zero - indicating that a new model for the behaviour of blackbodies was needed.\n", "The UV Index is a number linearly related to the intensity of sunburn-producing UV radiation at a given point on the earth's surface. It cannot be simply related to the irradiance (measured in W/m) because the UV of greatest concern occupies a spectrum of wavelength from 295 to 325 nm, and shorter wavelengths have already been absorbed a great deal when they arrive at the earth's surface. Skin damage from sunburn, however, is related to wavelength, the shorter wavelengths being much more damaging. The UV power spectrum (expressed as watts per square metre per nanometre of wavelength) is therefore multiplied by a weighting curve known as the erythemal action spectrum, and the result integrated over the whole spectrum. 
This gave Canadian scientists a weighted figure (sometimes called Diffey-weighted UV irradiance, or DUV, or erythemal dose rate) typically around 250 mW/m² in midday summer sunlight. So, they arbitrarily divided by 25 mW/m² to generate a convenient index value, essentially a scale of 0 to 11+ (though ozone depletion is now resulting in higher values, as mentioned above).\n", "The name comes from the earliest example of such a divergence, the \"ultraviolet catastrophe\" first encountered in understanding blackbody radiation. According to classical physics at the end of the nineteenth century, the quantity of radiation in the form of light released at any specific wavelength should increase with decreasing wavelength—in particular, there should be considerably more ultraviolet light released from a blackbody radiator than infrared light. Measurements showed the opposite, with maximal energy released at intermediate wavelengths, suggesting a failure of classical mechanics. This problem eventually led to the development of quantum mechanics.\n" ]
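The Rayleigh–Jeans/Planck disagreement described in these passages is easy to check numerically. Below is a minimal Python sketch (illustrative, not from the source; the 5000 K temperature and the sample frequencies are arbitrary choices): at radio frequencies the classical and quantum formulas agree closely, while toward the ultraviolet the Rayleigh–Jeans value keeps growing as ν² and the Planck value collapses exponentially.

```python
import math

# Physical constants (SI units)
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def rayleigh_jeans(nu, T):
    """Classical spectral radiance: B(nu, T) = 2 nu^2 k T / c^2. Grows without bound as nu^2."""
    return 2.0 * nu**2 * k * T / c**2

def planck(nu, T):
    """Planck spectral radiance: B(nu, T) = (2 h nu^3 / c^2) / (exp(h*nu/kT) - 1)."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

T = 5000.0  # an arbitrary blackbody temperature, in kelvin
for nu in (1e9, 1e12, 1e15, 1e16):  # radio -> far infrared -> visible -> ultraviolet
    print(f"nu = {nu:.0e} Hz: Rayleigh-Jeans = {rayleigh_jeans(nu, T):.3e}, "
          f"Planck = {planck(nu, T):.3e}")
```

At 1 GHz the two formulas match to a few parts per million, consistent with the passage's note that the Rayleigh–Jeans law works below about 10 GHz; by 10¹⁶ Hz they differ by dozens of orders of magnitude. Integrating the Rayleigh–Jeans expression over all frequencies therefore gives an infinite total energy, which is exactly the "catastrophe"; Planck's exponential cutoff is what tames the integral.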
Is it possible to condition your own bladder to hold more liquid?
The feeling of needing to urinate stems from mechanoreceptors in the bladder. It's certainly possible to learn to develop a tolerance to the desire to urinate and, as such, increase the length of time between urinations. Drugs such as tolterodine and various bladder-training techniques have been shown to help increase the volume stored within the bladder, but those studies were conducted on patients with overactive bladders.
[ "Emptying the bladder is one of the defense mechanisms of this tortoise. This can leave the tortoise in a very vulnerable condition in dry areas, and it should not be alarmed, handled, or picked up in the wild unless in imminent danger. If it must be handled, and its bladder is emptied, then water should be provided to restore the fluid in its body.\n", "Wraparound bladders are favored by some divers because they make it easier to maintain upright attitude on the surface. However, some designs have a tendency to squeeze the diver's torso when inflated, and they are often bulky at the sides or front when fully inflated. Back inflation BCs are less bulky at the sides but may have a tendency to float the diver tilted forward on the surface depending on weight and buoyancy distribution, which presents a possible hazard in an emergency if the diver is unconscious or otherwise unable to keep his or her head above the water.\n", "BULLET::::- Redundant bladders may be inadvertently filled, either by unintended action of the diver, or by malfunction of the filling mechanism, and if the failure is not recognized and dealt with promptly, this may result in a runaway uncontrolled ascent, with associated risk of decompression sickness. There is a risk that the diver will not recognise which bladder is full and attempt to dump from the wrong one. The risk can be reduced by ensuring that the filling mechanisms are clearly distinguishable by both feel and position, and not connecting a low pressure supply hose to the reserve until needed, so it is impossible to add gas by accident.\n", "If an incontinence is due to overflow incontinence, in which the bladder never empties completely, or if the bladder cannot empty because of poor muscle tone, past surgery, or spinal cord injury, a catheter may be used to empty the bladder. A catheter is a tube that can be inserted through the urethra into the bladder to drain urine. 
Catheters may be used once in a while or on a constant basis, in which case the tube connects to a bag that is attached to the leg. If a long-term (or indwelling) catheter is used, urinary tract infections may occur.\n", "Dual bladder buoyancy compensators are considered both unnecessary and unsafe. Unnecessary in that there are alternative methods available to a correctly rigged diver to compensate for a defective BC, and unsafe in that there is no obvious way to tell which bladder is holding air, and a leak into the secondary bladder may go unnoticed until the buoyancy has increased to the extent that the diver is unable to stop the ascent, while struggling to empty the air from the wrong bladder. Monitoring the air content of two bladders is unnecessary additional task loading, which distracts attention from other matters.\n", "Means of holding the bladder walls apart to encourage drying between uses are available, such as a plastic frame that collapses to pass through the fill opening, but expands inside the bladder to hold the sides apart even near the corners.\n", "The swim bladder (or gas bladder) is an internal organ that contributes to the ability of a fish to control its buoyancy, and thus to stay at the current water depth, ascend, or descend without having to waste energy in swimming. The bladder is found only in the bony fishes. In the more primitive groups like some minnows, bichirs and lungfish, the bladder is open to the esophagus and doubles as a lung. It is often absent in fast swimming fishes such as the tuna and mackerel families. The condition of a bladder open to the esophagus is called physostome, the closed condition physoclist. In the latter, the gas content of the bladder is controlled through a rete mirabilis, a network of blood vessels effecting gas exchange between the bladder and the blood.\n" ]
Why were the Anglo-Saxons one of the only Germanic groups who didn’t assimilate into the cultures they conquered?
Who says that they didn't? Robin Fleming argues in *Britain After Rome* that the idea of the Anglo-Saxons as a purely Germanic culture is misguided and not supported by the evidence we have available through archaeology. She points to the blend of clothing and jewelry styles that emerged following "Anglo-Saxon" migration to Britain as evidence that these two cultures were assimilating into something different from either culture that came before. She views this process as a more or less peaceful one. While there was some endemic violence inherent to the time period, she does not see evidence for the mass violence that is often assumed to have accompanied the Germanic migration into Britain. However, Peter Heather offers another explanation that is worth mentioning. He posits that due to the fragmented and small-scale nature of migration into Britain, combined with a fluid cultural identity among the native British, there was little reason for the native British to hang onto their culture in certain parts of Britain, so the population assimilated into the new Germanic one. Also worth bearing in mind is that the label "Anglo-Saxon" as applied to the migrants themselves is misleading. While many of the Germanic people who came to England did come from Jutland or Saxony, others came from Norway, Frisia, Ireland (not even a Germanic people!), and Sweden. It is also worth noting that the process of assimilation was not as smooth in some of these other places as you might imagine. For example, Frankish law (or Salic law) maintained legal distinctions between Franks and Romans for centuries following the establishment of Frankish control over northern Gaul. Even though the populations "assimilated" in the end, we should not imagine that this process was quick, easy, or a foregone conclusion.
[ "The Franks and the Anglo-Saxons were unlike the other Germanic peoples in that they entered the Western Roman Empire as Pagans and were forcibly converted to Chalcedonian Christianity by their kings, Clovis I and Æthelberht of Kent (see also Christianity in Gaul and Christianisation of Anglo-Saxon England). The remaining tribes – the Vandals and the Ostrogoths – did not convert as a people nor did they maintain territorial cohesion. Having been militarily defeated by the armies of Emperor Justinian I, the remnants were dispersed to the fringes of the empire and became lost to history. The Vandalic War of 533–534 dispersed the defeated Vandals. Following their final defeat at the Battle of Mons Lactarius in 553, the Ostrogoths went back north and (re)settled in south Austria.\n", "The southward expansion of the East Germanic tribes pushed many other Germanic and Iranian peoples towards the Roman Empire, spawning the Marcomannic Wars in the 2nd century AD. Another East Germanic tribe were the Herules, who according to 6th century historian Jordanes were driven from modern-day Denmark by the Danes, who were an offshoot of the Swedes. The migration of the Herules is thought to have occurred around 250 AD. The Danes would eventually settle all of Denmark, with many its former inhabitants, including the Jutes and Angles, settling Britain, becoming known as the Anglo-Saxons. The Old English story Beowulf is a testimony to this connection. Meanwhile, Norway was inhabited by a large number of North Germanic tribes and divided into a score of petty kingdoms.\n", "The Anglo-Saxons were a mix of invaders, migrants and acculturated indigenous people. Even before the withdrawal of the Romans, there were Germanic people in Britain who had been stationed there as \"foederati\". The migration continued with the departure of the Roman army, when Anglo-Saxons were recruited to defend Britain; and also during the period of the Anglo-Saxon first rebellion of 442. 
They settled in small groups covering a handful of widely dispersed local communities, and brought from their homelands the traditions of their ancestors. There are references in Anglo-Saxon poetry, including Beowulf, that show some interaction between pagan and Christian practices and values. There is enough evidence from Gildas and elsewhere that it is safe to assume some continuing form of Christianity survived. The Anglo-Saxons took control of Sussex, Kent, East Anglia and part of Yorkshire; while the West Saxons founded a kingdom in Hampshire under Cerdic, around 520.\n", "The Anglo-Saxons' arrival is the most hotly disputed of events, and the extent to which they killed, displaced, or integrated with the existing society is still questioned. What is clear is that a separate Anglo-Saxon society, which would eventually become England with a more Germanic feel, was set up in the south east of the island. These new arrivals had not been conquered by the Romans but their society was perhaps similar to that of Britain. The main difference was their pagan religion, which the surviving northern areas of non-Saxon rule sought to convert to Christianity. During the 7th century these northern areas, particularly Northumbria, became important sites of learning, with monasteries acting like early schools and intellectuals such as Bede being influential. In the 9th century Alfred the Great worked to promote a literate, educated people and did much to promote the English language, even writing books himself. Alfred and his successors unified and brought stability to most of the south of Britain that would eventually become England.\n", "More broadly, early Medieval Germanic peoples were often assimilated into the \"walha\" substrate cultures of their subject populations. Thus, the Burgundians of Burgundy, the Vandals of Northern Africa, and the Visigoths of France and Iberia, lost some Germanic identity and became part of Romano-Germanic Europe. 
For the Germanic Visigoths in particular, they had intimate contact with Rome for two centuries before their domination of the Iberian Peninsula and were accordingly permeated by Roman culture. Likewise, the Franks of Western Francia form part of the ancestry of the French people.\n", "The various Germanic peoples of the Migrations period eventually spread out over a vast expanse stretching from contemporary European Russia to Iceland and from Norway to North Africa. The migrants had varying impacts in different regions. In many cases, the newcomers set themselves up as overlords of the pre-existing population. Over time, such groups underwent ethnogenesis, resulting in the creation of new cultural and ethnic identities (e.g., the Franks and Gallo-Romans becoming the French). Thus, many of the descendants of the ancient Germanic peoples do not speak Germanic languages, as they were to a greater or lesser degree assimilated into the cosmopolitan, literate culture of the Roman world. Even where the descendants of Germanic peoples maintained greater continuity with their common ancestors, significant cultural and linguistic differences arose over time, as is strikingly illustrated by the different identities of Christianized Saxon subjects of the Carolingian Empire and pagan Scandinavian Vikings.\n", "The demise of Vulgar Latin in the face of Anglo-Saxon settlement is very different from the fate of the language in other areas of Western Europe which were subject to Germanic migration, like France, Italy and Spain, where Latin and the Romance languages continued. The likely reason is that in Britain there was a greater collapse in Roman institutions and infrastructure, leading to a much greater reduction in the status and prestige of the indigenous Romanised culture; and so the indigenous people were more likely to abandon their languages, in favour of the higher-status language of the Anglo-Saxons.\n" ]
why do my muscles hurt after using them?
Lactic acid build-up within the muscle may cause pain during and shortly after exercise. The soreness that appears a day or two later, however, is thought to result from microtrauma to the muscle fibers. Muscle tightness may also cause pain in the muscle.
[ "As a result of this effect, not only is the soreness reduced, but other indicators of muscle damage, such as swelling, reduced strength and reduced range of motion, are also more quickly recovered from. The effect is mostly, but not wholly, specific to the exercised muscle: experiments have shown that some of the protective effect is also conferred on other muscles.\n", "The mechanism of delayed onset muscle soreness is not completely understood, but the pain is ultimately thought to be a result of microtrauma – mechanical damage at a very small scale – to the muscles being exercised.\n", "The pain is caused by the inadequate blood flow to the muscle tissue, the inflammation from the resulting cell damage, and the release of cell contents. Muscle spasms, caused by the lack of blood to the muscle tissue, are also painful.\n", "Soreness might conceivably serve as a warning to reduce muscle activity to prevent injury or further injury. With delayed onset muscle soreness (DOMS) caused by eccentric exercise (muscle lengthening), it was observed that light concentric exercise (muscle shortening) during DOMS can cause initially more pain but was followed by a temporary alleviation of soreness – with no adverse effects on muscle function or recovery being observed. Furthermore eccentric exercise during DOMS was found to not exacerbate muscle damage, nor did it have an adverse effect on recovery – considering this, soreness is not necessarily a warning sign to reduce the usage of the affected muscle. However it was observed that a second bout of eccentric exercise within one week of the initial exercise did lead to decreased muscle function immediately afterwards.\n", "In the long term, the loss of muscle function can have additional effects from disuse, including atrophy of the muscle. 
Immobility can lead to pressure sores, particularly in bony areas, requiring precautions such as extra cushioning and turning in bed every two hours (in the acute setting) to relieve pressure. In the long term, people in wheelchairs must shift periodically to relieve pressure. Another complication is pain, including nociceptive pain (indication of potential or actual tissue damage) and neuropathic pain, when nerves affected by damage convey erroneous pain signals in the absence of noxious stimuli. Spasticity, the uncontrollable tensing of muscles below the level of injury, occurs in 65–78% of chronic SCI. It results from lack of input from the brain that quells muscle responses to stretch reflexes. It can be treated with drugs and physical therapy. Spasticity increases the risk of contractures (shortening of muscles, tendons, or ligaments that result from lack of use of a limb); this problem can be prevented by moving the limb through its full range of motion multiple times a day. Another problem lack of mobility can cause is loss of bone density and changes in bone structure. Loss of bone density (bone demineralization), thought to be due to lack of input from weakened or paralysed muscles, can increase the risk of fractures. Conversely, a poorly understood phenomenon is the overgrowth of bone tissue in soft tissue areas, called heterotopic ossification. It occurs below the level of injury, possibly as a result of inflammation, and happens to a clinically significant extent in 27% of people.\n", "Physical exercise may cause pain both as an immediate effect that may result from stimulation of free nerve endings by low pH, as well as a delayed onset muscle soreness. The delayed soreness is fundamentally the result of ruptures within the muscle, although apparently not involving the rupture of whole muscle fibers.\n", "Owens states that chronic pain remains to be one of the most common among medical complaints. 
Delos Therapy focuses on the principle that with repetitive motion and wear and tear of muscle tissue, the muscles become tight and fibrotic, causing common symptoms of pain, stiffness, and weakness. This fibrosis is not visible on conventional imaging, such as, MRIs or X-rays; and the fibrosis is getting missed diagnostically by mainstream medicine. Conventional treatments for such tightness and pain include stretching, strengthening, and/or medication management with opioids. Although beneficial in some cases, Owens believes this is not a complete therapy and is largely focused on symptoms rather than the root cause.\n" ]
Why does my vision change when I focus intently on anything around me?
When you [stabilize an image on your retina](_URL_1_) for a long time, you adapt to portions of the image and stop noticing or seeing them. The auditory equivalent is when you do not notice the hum of a light or a fan until you pay attention to it again. Normally, your eyes make tiny movements many times a second, even when you are fixating on something, in order to provide some change in the sensory input to a portion of your retina. One such movement is called a [microsaccade](_URL_0_).
[ "Changes in spatial attention can occur with the eyes moving, overtly, or with the eyes remaining fixated, covertly. Within the human eye only a small part, the fovea, is able to bring objects into sharp focus. However, it is this high visual acuity that is needed to perform actions such as reading words or recognizing facial features, for example. Therefore, the eyes must continually move in order to direct the fovea to the desired goal. Prior to an overt eye movement, where the eyes move to a target location, covert attention shifts to this location. However, it is important to keep in mind that attention is also able to shift covertly to objects, locations, or even thoughts while the eyes remain fixated. For example, when a person is driving and keeping their eyes on the road, but then, even though their eyes do not move, their attention shifts from the road to thinking about what they need to get at the grocery store. The eyes may remain focused on the previous object attended to, yet attention has shifted.\n", "A similar effect is found when people track moving objects with their eyes. The changing retinal image is referenced with the muscle movements of the eye resulting in the same type of retinal/body-centered alignment. This is one more process that helps the brain properly encode the relationships needed to deal with our changing perception, and also serves as a verification that the proper physical movements are being made.\n", "The brain's ability to see three-dimensional objects depends on proper alignment of the eyes. When both eyes are properly aligned and aimed at the same target, the visual portion of the brain fuses the forms into a single image. When one eye turns inward, outward, upward, or downward, two different pictures are sent to the brain. This causes loss of depth perception and binocular vision. There have also been some reports of people that can \"control\" their afflicted eye. 
The term is from Greek \"exo\" meaning \"outward\" and \"trope\" meaning \"a turning\".\n", "By this intellectual operation, comprehending every effect in our sensory organs as having an external cause, the external world arises. With vision, finding the cause is essentially simplified due to light acting in straight lines. We are seldom conscious of the process that interprets the double sensation in both eyes as coming from one object; that turns the upside down impression, and that adds depth to make from the planimetrical data stereometrical perception with distance between objects.\n", "As the eye shifts its gaze from looking through the optical center of the corrective lens, the lens-induced astigmatism value increases. In a spherical lens, especially one with a strong correction whose base curve is not in the best spherical form, such increases can significantly impact the clarity of vision in the periphery.\n", "The visual system in the brain is too slow to process that information if the images are slipping across the retina at more than a few degrees per second. Thus, to be able to see while we are moving, the brain must compensate for the motion of the head by turning the eyes. Another specialisation of visual system in many vertebrate animals is the development of a small area of the retina with a very high visual acuity. This area is called the fovea, and covers about 2 degrees of visual angle in people. To get a clear view of the world, the brain must turn the eyes so that the image of the object of regard falls on the fovea. Eye movement is thus very important for visual perception, and any failure can lead to serious visual disabilities. To see a quick demonstration of this fact, try the following experiment: hold your hand up, about one foot (30 cm) in front of your nose. Keep your head still, and shake your hand from side to side, slowly at first, and then faster and faster. At first you will be able to see your fingers quite clearly. 
But as the frequency of shaking passes about 1 Hz, the fingers will become a blur. Now, keep your hand still, and shake your head (up and down or left and right). No matter how fast you shake your head, the image of your fingers remains clear. This demonstrates that the brain can move the eyes opposite to head motion much better than it can follow, or pursue, a hand movement. When your pursuit system fails to keep up with the moving hand, images slip on the retina and you see a blurred hand.\n", "For example, when looking out of the window at a moving train, the eyes can focus on a moving train for a short moment (by stabilizing it on the retina), until the train moves out of the field of vision. At this point, the eye is moved back to the point where it first saw the train (through a saccade).\n" ]
what is going to make future 5g internet, faster than current 4g networks?
Well, a mobile-network CEO or tech specialist recently described how the 5G network will work. It will be closer to Skynet: the network will be smarter, faster, more organized, and better equipped with newer-generation technology that enables speeds of up to a gigabit. There will be bigger, thicker cables running to every cell tower, so connectivity will be more widespread and more reliable. The network will be smarter in the sense that it can tell when your battery is low and will start pinging your phone less, and more organized since there will likely be a prioritization system that gives business lines higher priority than our own consumer lines. Generally, the network will be smarter and more efficient for every cent spent on it.
[ "5G succeeds 4G LTE wireless technology; developments have been focused on enabling low-latency communications, and promises of a minimum peak network speed of 20 gigabits per/second (20 times faster than the equivalent on 4G LTE networks), and uses within Internet of things and smart city technology.\n", "A new \"mobile broadband\" technology emerging in the United Kingdom is 4G which hopes to replace the old 3G technology currently in use and could see download speeds increased to 300Mbit/s. The company EE have been the first company to start developing a full scale 4G network throughout the United Kingdom. This was later followed by other telecommunications companies in the UK such as O2 (Telefónica) and Vodafone.\n", "By 2009, it had become clear that, at some point, 3G networks would be overwhelmed by the growth of bandwidth-intensive applications like streaming media. Consequently, the industry began looking to data-optimized 4th-generation technologies, with the promise of speed improvements up to 10-fold over existing 3G technologies. The first two commercially available technologies billed as 4G were the WiMAX standard (offered in the U.S. by Sprint) and the LTE standard, first offered in Scandinavia by TeliaSonera.\n", "Installation of a trans-Indian Ocean backbone cable in 2009 has, in theory, made Internet access much more readily available in Dar in particular and in East Africa in general. However, roll-out to end-users is slow, partly because of spotty telephone line coverage at the moment provided by the Tanzania Telecommunications Company Limited, partly due to the substantial prices and long contracts demanded for purchase of bandwidth for small ISPs. Mobile-telephone access to the Internet via 3G and 3.75G is still relatively expensive. 
4G is making its way through major cities and towns with plans to go countrywide in the advanced planning stages.\n", "3G networks have taken this approach to a higher level, using different underlying technology but the same principles. They routinely provide speeds over 300kbit/s. Due to the now increased internet speed, internet connection sharing via WLAN has become a workable reality. Devices which allow internet connection sharing or other types of routing on cellular networks are called also cellular routers.\n", "In November 2014, ViewQwest unveiled plans for a 2Gbit/s fibre broadband service for households in Singapore, offering the country's fastest internet connection in the market. In March 2015, the service was officially launched making it the world's fastest home broadband plan alongside Japan.\n", "As of 2015, the maximum plan for their connection is now at 1Gbit/s, while plans for lower speeds are scheduled for upgrades in the near future. As of 2017, they are aggressively increasing network presence in an attempt to improve internet speed and services, decried as one of the worst in Asia, apart from rivalry from other companies.\n" ]
How was life as a Carthaginian compared to life as a Roman?
So there's this incident where Claudius is headed to what is now England in a ship. He gets spotted by a Carthaginian ship, and it's one group of rowers against the other. Claudius argues the reason his rowers won (and escaped) was that they were free men, while the Carthaginian rowers were slaves. But that was much later, and we're talking about a very different Carthage than the one during the Punic Wars. Not only that, to believe the argument you have to trust the ancient sources, and the modern one (Graves, in this case). For your basic question, "Was Rome really militaristic," the answer can only be yes. Was Carthage a democracy? That's a modern question, which may not actually be relevant in ancient terms. They *did* have election of kings, but we would describe it as an oligarchy. Look up "Tribunal of 104" if you're interested. Your average Carthaginian citizen was more interested in trade than fighting, so they depended heavily on mercenaries from subjugated provinces for their military. The struggles for power would have been familiar to any Roman: political murders, bought offices, intrigue and deceit. Both systems thought of themselves as republics. One other problem: most of the writers we base our view on were actually foreigners, in many cases hostile foreigners. It's difficult, under such circumstances, to make real assertions. But some basic things are clear: Rome had a plunder economy, while Carthage was based slightly more on trade. Land power vs. sea power. Citizen military vs. mercenaries. All those are oversimplifications, but they have some truth to them. The modern concept of freedom can't be said to apply. I haven't researched the "rage on the TW forums," but if they're pro-Rome, they're probably misreading. Arguing that the Romans were a positive influence is another modern simplification. They made life suck for any non-Roman area (such is the nature of a plunder economy), and for the majority of Romans themselves. 
Their whole system was based on the idea that "We're going to kick your ass and take all your stuff."
[ "The Punic Wars with Carthage had a particularly marked effect on Roman viticulture. In addition to broadening the cultural horizons of the Roman citizenry, Carthaginians also introduced them to advanced viticultural techniques, in particular the work of Mago. When the libraries of Carthage were ransacked and burned, among the few Carthaginian works to survive were the 26 volumes of Mago's agricultural treatise, which was subsequently translated into Latin and Greek in 146 BC. Although his work did not survive to the modern era, it has been extensively quoted in the influential writings of Romans Pliny, Columella, Varro and Gargilius Martialis.\n", "The Carthaginian republic was one of the longest-lived and largest states in the ancient Mediterranean. Reports relay several wars with Syracuse and finally, Rome, which eventually resulted in the defeat and destruction of Carthage in the Third Punic War. The Carthaginians were Phoenician settlers originating in the Mediterranean coast of the Near East. They spoke Canaanite, a Semitic language, and followed a local variety of the ancient Canaanite religion.\n", "The Carthaginians were rivals to the Greeks and Romans. Carthage fought the Punic Wars, three wars with Rome: the First Punic War (264 to 241 BC), over Sicily; the Second Punic War (218 to 201 BC), in which Hannibal invaded Europe; and the Third Punic War (149 to 146 BC). Carthage lost the first two wars, and in the third it was destroyed, becoming the Roman province of Africa, with the Berber Kingdom of Numidia assisting Rome. The Roman province of Africa became a major agricultural supplier of wheat, olives, and olive oil to imperial Rome via exorbitant taxation. Two centuries later, Rome brought the Berber kingdoms of Numidia and Mauretania under its authority. In the 420's AD, Vandals invaded North Africa and Rome lost her territories. The Berber kingdoms subsequently regained their independence.\n", "\"Many of the Carthaginian institutions are excellent. 
The superiority of their constitution is proved by the fact that the common people remain loyal to the constitution; the Carthaginians have never had any rebellion worth speaking of, and have never been under the rule of a tyrant.\"\n", "While the Carthaginians had been busy at Geronium, Fabius had left Minucius in charge of the Roman army with instructions to follow the ‘Fabian Strategy’ and journeyed to Rome to observe religious duties. He possibly also had engaged in political bickering because of his unpopularity among the Roman citizens. Minucius, who had always advocated a more forward strategy against Hannibal, moved down from the hills after a few days and set up a new camp in the plain of Larinum to the north of Geronium. The Romans then began harassing the Carthaginian foragers from their new camp as Minucius sought to provoke Hannibal into battle. Hannibal in response moved near the Roman camp from Geronium with two thirds of his army, built a temporary camp, and occupied a hill overlooking the Roman camp with 2,000 Libyphoenician pikemen. The mobility of the Carthaginians was restricted at this time as their cavalry horses were being rested. This had also deprived Hannibal of his best weapon against the Romans, a fact which would come into play soon. Minucius promptly attacked with his light infantry, driving back the pikemen posted on the hill, and moved his camp to the top of the captured hill.\n", "The Carthaginian town came under Roman hegemony after the Punic Wars. In 46, the town was the first in Africa to ally itself with Julius Caesar during his civil war. The same year, the Battle of Ruspina was a victory for Pompey's ally T. Labienus.\n", "The Carthaginians were totally unprepared for Rome's actions. The garrisons of Lilybaeum, Drepana and Hamilcar’s army at Eryx held fast, but without supplies from Carthage they could not hold out indefinitely. 
Now that Rome had seized the initiative with a battle ready fleet blockading Carthaginian holdings in Sicily, without warships the unescorted Carthaginian supply ships would fall prey to the Romans. \n" ]
How did the United States of America arrive at its valuation of Greenland in 1946? Could the area have been worth the cost of purchase in terms of economic output, or was the value purely strategic?
Initially, America very much wanted it for strategic reasons. The GIUK Gap was hugely important, specifically for keeping tabs on the Soviet Union's submarine fleet. If you look at the terms of the [Montreux Convention](_URL_2_), it was impossible to "sneak" a submarine through Turkish waters. If you look at a map of the Baltic Sea, or more specifically the [waters around Denmark](_URL_0_), it's similarly unlikely that you could ever sneak a submarine past even a semi-aware detection network; Denmark, of course, was one of NATO's founding dozen. This meant that if the Soviets actually wanted to conduct any submarine operations with any degree of stealth, they needed to be based out of Murmansk on the Barents Sea or somewhere in East Asia (Vladivostok, Petropavlovsk on Kamchatka, Magadan, or Sovetskaya Gavan). Realistically, if the Soviets wanted their submarines to remain undetected, they could only use those five ports. And if you wanted to operate in the Atlantic, you weren't going to put your HQ on the northwestern coast of the Pacific; you were going to put it in Murmansk. This effectively meant that any ships the Soviets sent to the Atlantic had to pass through the GIUK Gap, where it was relatively easy to detect (and subsequently track or shadow) them if you had assets in the area beforehand. And it's kind of hard to be all sneaky and such when the USN is dropping [practice depth charges on you](_URL_4_). They couldn't do any meaningful damage, of course, but they carried an implicit threat: the USN was basically saying, "we could sink you at any time." As for the overall economic value of Greenland: that's up in the air. We [already see](_URL_1_) Greenland being exploited for hydrocarbons, and the USGS released a review of hydrocarbons in the region [here](_URL_3_). I believe they described it as "a genuinely stupid amount of dead plants buried in the sea floor." (Okay, that wasn't their exact phrasing.)
Theoretically, yes, Greenland would've paid for itself. Eventually. If the estimates actually pan out.
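To put the 1946 offer in rough present-day terms, here is a minimal sketch. The $100 million figure comes from the sources above, which also quote it as about $1.2 billion today; the implied ~12x price-level multiplier is an assumption derived from that commonly cited figure, not an official CPI series.

```python
# Sketch: scale the 1946 Greenland offer to today's dollars.
# The 12x multiplier is an assumption implied by the widely quoted
# "$100 million ($1.2 billion today)" figure, not an official CPI value.

OFFER_1946 = 100_000_000      # US offer to Denmark for Greenland, 1946
PRICE_LEVEL_MULTIPLIER = 12   # assumed ~$1.2B today / $100M then


def adjusted_value(nominal: int, multiplier: float) -> float:
    """Scale a historical dollar amount by a price-level multiplier."""
    return nominal * multiplier


print(f"${adjusted_value(OFFER_1946, PRICE_LEVEL_MULTIPLIER):,.0f}")
# prints "$1,200,000,000"
```

Against estimated offshore hydrocarbon reserves, that inflation-adjusted price is the benchmark the purchase would have had to beat to "pay for itself."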
[ "Following World War II, the United States developed a geopolitical interest in Greenland, and in 1946 the United States offered to buy Greenland from Denmark for $100,000,000, but Denmark refused to sell.\n", "Following World War II, the United States developed a geopolitical interest in Greenland, and in 1946 the United States offered to buy the island from Denmark for $100,000,000. Denmark refused to sell it. In the 21st century, the United States, according to WikiLeaks, remains highly interested in investing in the resource base of Greenland and in tapping hydrocarbons off the Greenlandic coast.\n", "During World War II, when Denmark was occupied by Nazi Germany, the United States briefly controlled Greenland for battlefields and protection. In 1946, the United States offered to buy Greenland from Denmark for $100 million ($1.2 billion today) but Denmark refused to sell it. Several politicians and others have in recent years argued that Greenland could hypothetically be in a better financial situation as a part of the United States; for instance mentioned by professor Gudmundur Alfredsson at University of Akureyri in 2014. One of the actual reasons behind U.S. interest in Greenland could be the vast natural resources of the island. According to WikiLeaks, the U.S. appears to be highly interested in investing in the resource base of the island and in tapping the vast expected hydrocarbons off the Greenlandic coast.\n", "The Louisiana Purchase gave the United States over a million square miles of previously French territory for the price of $15 million. The Purchase was ratified by the U.S. Senate on October 20, 1803, and the new land subsequently doubled the size of the United States and opened the door to a new period of westward expansion. In 1902, President Theodore Roosevelt signed a bill to subsidize the Louisiana Purchase Exposition, which would become known as the St. Louis World Fair of 1904. 
Of the $5 million paid to the fair by the government, $250,000 was in the form of commemorative gold dollar coins.\n", "1962 brought about the construction of , which was the largest commercial vessel built in the United States at the time, and became the first ship to transit the Northwest Passage to the Alaska North Slope oil fields. The Bainbridge was launched in that year, but not without accusations from the government that Bethlehem overcharged the Navy, as the costs increased from almost $ (equivalent to $ in today's dollars) in 1959 to a negotiated $ (equivalent to $ in today's dollars) three years later, down from an estimate of $ (equivalent to $ in today's dollars) before then, although there was a $ (equivalent to $ in today's dollars) discrepancy in the yard. After the end of the strike mentioned above, the yard was accused by the government of overcharging for the first nuclear frigate, and the Long Beach. The shipyard later made up for the losses of $ (equivalent to $ in today's dollars) by crediting on other contracts that were being offered.\n", "BULLET::::- Louisiana Purchase Bicentennial was commemorated with a 37-cent stamp issued on April 30, 2003, in New Orleans, Louisiana. The Purchase doubled the size of the United States, it became one of the largest countries in the world, and the most fertile lands of the continent were opened to American settlement. It is often called the greatest real estate deal in history, \"with a stroke of a pen\". The stamp was designed by Richard Sheaff and illustrated by Garin Baker. Sennett Security Products printed the stamp in gravure process in pressure-sensitive panes of twenty; 54 million were issued. An image of the stamp is available at Arago online at the link in the footnote.\n", "BULLET::::- The Louisiana Purchase of 1803, also known as the \"Great Land Acquisition\", is often seen as one of the most important events in American history after the Declaration of Independence. 
At the time it had a total cost of $15 million, and it was financed in three ways. First by a down payment of $3 million, in gold by the U.S. government, followed by two loans, one by the London-based Barings Bank, and one by the Amsterdam-based Hope Bank. The original receipt still exists and is currently property of the Dutch ING Group, which has its headquarters in Amsterdam.\n" ]
Has the physiological damage caused by trauma been the same throughout history?
No. Trauma has been present with humanity throughout our history. The difference is that nowadays we are allowed to actually speak about our traumas and work toward healing them, whereas in the past the attitude was mostly "why are you acting like this, stop it." Or, if trauma was mentioned, it was in vague terms that do not always translate well to modern ears. The deadly sin of sloth, for example, initially didn't really refer to simple laziness; it was meant for apathy and loss of interest in life, exactly the sort of symptoms commonly associated with depression (potentially due to former trauma).
[ "Historical trauma is described as collective emotional and psychological damage throughout a person's lifetime and across multiple generations. Examples of historical trauma can be seen through the Wounded Knee Massacre of 1890, where over 200 unarmed Lakota were killed, and the Dawes Allotment Act of 1887, when American Indians lost four-fifths of their land.\n", "Historical Trauma (HT), or Historical Trauma Response (HTR), can manifest itself in a variety of psychological ways. However, it is most commonly seen through high rates of substance abuse, alcoholism, depression, anxiety, suicide, domestic violence, and abuse within afflicted communities. The effects and manifestations of trauma are extremely important in understanding the present-day conditions of afflicted populations.\n", "Traumatic brain injuries vary in their mechanism of injury, producing a blunt or penetrating trauma resulting in a primary and secondary injury with excitotoxicity and relatively wide spread neuronal death. Due to the overwhelming number of traumatic brain injuries as a result of the War on Terror, tremendous amounts of research have been placed towards a better understanding of the pathophysiology of traumatic brain injuries as well as neuroprotective interventions and possible interventions prompting restorative neurogenesis. Hormonal interventions, such as progesterone, estrogen, and allopregnanolone have been examined heavily in recent decades as possible neuroprotective agents following traumatic brain injuries to reduce the inflammation response stunt neuronal death. In rodents, lacking the regenerative capacity for adult neurogenesis, the activation of stem cells following administration of α7 nicotinic acetylcholine receptor agonist, PNU-282987, has been identified in damaged retinas with follow-up work examining activation of neurogenesis in mammals after traumatic brain injury. 
Currently, there is no medical intervention that has passed phase-III clinical trials for use in the human population.\n", "Survivors of war trauma or childhood maltreatment are at increased risk for trauma-spectrum disorders such as post-traumatic stress disorder (PTSD). In addition, traumatic stress has been associated with alterations in the neuroendocrine and the immune system, enhancing the risk for physical diseases. Traumatic experiences might even affect psychological as well as biological parameters in the next generation, i.e. traumatic stress might have trans generational effects. So currently there is a new field trying to explain how epigenetic processes, which represent a pivotal biological mechanism for dynamic adaptation to environmental challenges, might contribute to the explanation of the long-lasting and intergenerational effects of trauma. In particular, epigenetic alterations in genes regulating the hypothalamus–pituitary–adrenal axis as well as the immune system have been observed in survivors of childhood and adult trauma.\n", "The trauma triad of death is a medical term describing the combination of hypothermia, acidosis and coagulopathy. This combination is commonly seen in patients who have sustained severe traumatic injuries and results in a significant rise in the mortality rate. Commonly, when someone presents with these signs, damage control surgery is employed to reverse the effects.\n", "Scholarship and data suggests that violence has declined. Since World War II, there has been a decline in battle deaths and since the Cold War, there has been a decline in conflict. Recently, scholars have started to question this long-held belief.\n", "Historical trauma, and its manifestations, are seen as an example of Transgenerational trauma (though the existence of transgenerational trauma itself is disputed). 
For example, a pattern of maternal abandonment of a child might be seen across three generations, or the actions of an abusive parent might be seen in continued abuse across generations. These manifestations can also stem from the trauma of events, such as the witnessing of war, genocide, or death. For these populations that have witnessed these mass level traumas (e.g., war, genocide, colonialism), several generations later these populations tend to have higher rates of disease.\n" ]
Why was the USS Indianapolis sailing without an escort when she was sunk?
It's normal for a cruiser to operate alone, without destroyer escort, in some circumstances. A heavy cruiser is an important asset, but it's not a capital ship whose loss will shift the balance of naval power. Destroyers were always in high demand for various roles in World War Two, and there were usually not enough to go around. A cruiser task force going into action might normally include some destroyers, but not each individual cruiser on a non-combat mission. Anti-submarine weaponry in World War Two was generally only effective *after* the submarine was detected. An escorting destroyer would not have been able to detect the submarine and prevent the Indianapolis's loss (high speed reduces the ability of surface ships to detect submarines), although it might have meant a ship was present to rescue survivors or counterattack the submarine. A heavy cruiser is much faster than a submarine (surfaced or submerged), and high speed combined with a zig-zag course (which the Indianapolis should have been following but wasn't) generally provides as much protection as possible against a submarine's first salvo of torpedoes. The main reason to avoid including destroyers as escorts for a heavy cruiser is range. Destroyers have a much shorter operating range than cruisers, especially at high speeds, and the voyage from Honolulu to the Marianas would have been too close for safety to the maximum operating range of a WWII destroyer. Destroyers accompanying major task forces had to periodically refuel from supply ships or larger warships, which is time-consuming and creates a moment of vulnerability.
[ "USS \"Indianapolis\" (CL/CA-35) was a heavy cruiser of the United States Navy. At 00:15 on 30 July 1945, she was struck on her starboard side by two Type 95 torpedoes, one in the bow and one amidships, from the Japanese submarine , captained by Commander Mochitsura Hashimoto, who initially thought he had spotted the . The explosions caused massive damage. \"Indianapolis\" took on a heavy list, (the ship had a great deal of added armament and gun firing directors added as the war went on and was top heavy) and settled by the bow. Twelve minutes later, she rolled completely over, then her stern rose into the air, and she plunged down. Some 300 of the 1,195 crewmen went down with the ship. With few lifeboats and many without lifejackets, the remainder of the crew was set adrift.\n", "The \"Indianapolis\" had been on a secret mission, and due to a communications error, had not been reported as overdue (or missing). An estimated 900 men survived the sinking, but spent days floating in life jackets trying to fight off sharks. While only 317 were rescued out of a crew of 1199 who were aboard the \"Indianapolis\", Claytor's actions were widely credited by survivors with preventing an even greater loss of life.\n", "In 1996, sixth-grade student Hunter Scott began his research on the sinking of \"Indianapolis\" for a class history project, an assignment which eventually led to a United States Congressional investigation. In October 2000, the United States Congress passed a resolution that Captain McVay's record should state that \"he is exonerated for the loss of \"Indianapolis\"\"; President Bill Clinton signed the resolution. The resolution noted that, although several hundred ships of the US Navy were lost in combat during World War II, McVay was the only captain to be court-martialed for the sinking of his ship. 
In July 2001, the United States Secretary of the Navy ordered McVay's official Navy record cleared of all wrongdoing.\n", "\"Indiana\" thereafter withdrew to escort the carrier task force overnight. While operating off the islands in the early hours of 1 February, \"Indiana\" collided with \"Washington\". The ships were blacked out to prevent Japanese observers from spotting them, and in the darkness, \"Indiana\" turned in front of \"Washington\". \"Indiana\" was badly damaged, with the starboard propeller shaft destroyed and significant damaged inflicted on the belt armor and torpedo defense system. The ship had some of armor plating torn from her hull, and \"Washington\" had a section of her bow ripped away and lodged into \"Indiana\"s side. The accident killed three men and injured another six aboard \"Indiana\", one of whom later died. A subsequent inquiry into the accident placed the blame on \"Indiana\", faulting her crew for failing to inform the other ships in the unit about her course changes.\n", "Shortly thereafter, the fires broke through the flight deck and heat and smoke made the ship's bridge unusable. At 10:46, Admiral Nagumo transferred his flag to the light cruiser . \"Akagi\" stopped dead in the water at 13:50 and her crew, except for Captain Taijiro Aoki and damage-control personnel, was evacuated. She continued to burn as her crew fought a losing battle against the spreading fires. The damage-control teams and Captain Aoki were evacuated from the still floating ship later that night.\n", "At 00:15 on 30 July, \"Indianapolis\" was struck on her starboard side by two Type 95 torpedoes, one in the bow and one amidships, from the Japanese submarine , captained by Commander Mochitsura Hashimoto, who initially thought he had spotted the . The explosions caused massive damage. 
\"Indianapolis\" took on a heavy list (the ship had had a great deal of armament and gun-firing directors added as the war went on, and was therefore top-heavy) and settled by the bow. Twelve minutes later, she rolled completely over, then her stern rose into the air, and she plunged down. Some 300 of the 1,195 crewmen aboard went down with the ship. With few lifeboats and many without life jackets, the remainder of the crew was set adrift.\n", "The wreck of \"Indianapolis\" is in the Philippine Sea. In July–August 2001, an expedition sought to find the wreckage through the use of side-scan sonar and underwater cameras mounted on a remotely operated vehicle. Four \"Indianapolis\" survivors accompanied the expedition, which was not successful. In June 2005, a second expedition was mounted to find the wreck. \"National Geographic\" covered the story and released it in July. Submersibles were launched to find any sign of wreckage. The only objects ever found, which have not been confirmed to have belonged to \"Indianapolis\", were numerous pieces of metal of varying size found in the area of the reported sinking position (this was included in the National Geographic program \"Finding of the USS \"Indianapolis\"\").\n" ]
What adaptations do humans have that allow them to remain balanced without a tail?
During the time our tails were disappearing (and they still are, if you look at our skeletons), we came to rely on a sense called equilibrioception, or balance. Equilibrioception makes use of a variety of sensory inputs to keep us from falling over while walking or standing:

1. Visual cues, like the horizon and the horizontal angle of local references (e.g. flat surfaces, the level of others' eyes).

2. The vestibular system: specialized, liquid-filled canals in our inner ears contain super-sensitive hairs that track the internal movement of the liquid when the head changes position. This gives us information on the angular and rotational movements of our head, much like a smartphone's gyroscope.

3. Proprioception: the body's own perception of where it is in space, made possible by special nerves located within the joints and muscles attached to our skeletal system. These nerves give us a general idea of the relative positions of our limbs and joints, and also provide some motion/acceleration information by sensing the physical effort currently being exerted by our muscles (e.g. during running, jumping, etc.).
[ "Manx (and other tail-suppressed breeds) do not exhibit problems with balance, Balance is controlled primarily by the inner ear. In cats, dogs and other large-bodied mammals, balance involves but is not dependent upon the tail (contrast rats, for whom the tail is a quite significant portion of their body mass).\n", "Animal tails are used in a variety of ways. They provide a source of locomotion for fish and some other forms of marine life. Many land animals use their tails to brush away flies and other biting insects. Some species, including cats and kangaroos, use their tails for balance; and some, such as New World monkeys and opossums, have what are known as prehensile tails, which are adapted to allow them to grasp tree branches.\n", "Their forelegs were shortened, but their hind legs were elongated. While this anatomy is reminiscent of small kangaroos and jerboas, suggesting a jumping locomotion, the structure of the tarsal bones hints at a specialization for terrestrial running. Perhaps these animals were capable of both modes of locomotion; running slowly in search for food, and jumping quickly to avoid threats. Additionally, the Messel specimens feature a surprisingly long tail, unique among modern placental mammals, formed by 40 vertebrae and probably used for balance.\n", "Evolution has provided the human body with two distinct features: the specialization of the upper limb for visually guided manipulation and the lower limb's development into a mechanism specifically adapted for efficient bipedal gait. While the capacity to walk upright is not unique to humans, other primates can only achieve this for short periods and at a great expenditure of energy. The human adaption to bipedalism is not limited to the leg, however, but has also affected the location of the body's center of gravity, the reorganisation of internal organs, and the form and biomechanism of the trunk. 
In humans, the double S-shaped vertebral column acts as a great shock-absorber which shifts the weight from the trunk over the load-bearing surface of the feet. The human legs are exceptionally long and powerful as a result of their exclusive specialization for support and locomotion — in orangutans the leg length is 111% of the trunk; in chimpanzees 128%, and in humans 171%. Many of the leg's muscles are also adapted to bipedalism, most substantially the gluteal muscles, the extensors of the knee joint, and the calf muscles.\n", "Humans, like most of the other apes, lack external tails, have several blood type systems, have opposable thumbs, and are sexually dimorphic. The comparatively minor anatomical differences between humans and chimpanzees are a result of human bipedalism. One difference is that humans have a far faster and more accurate throw than other animals. Humans are also among the best long-distance runners in the animal kingdom, but slower over short distances. Humans' thinner body hair and more productive sweat glands help avoid heat exhaustion while running for long distances.\n", "All adaptations have a downside: horse legs are great for running on grass, but they can't scratch their backs; mammals' hair helps temperature, but offers a niche for ectoparasites; the only flying penguins do is under water. Adaptations serving different functions may be mutually destructive. Compromise and makeshift occur widely, not perfection. Selection pressures pull in different directions, and the adaptation that results is some kind of compromise.\n", "Most bipedal animals move with their backs close to horizontal, using a long tail to balance the weight of their bodies. The primate version of bipedalism is unusual because the back is close to upright (completely upright in humans), and the tail may be absent entirely. Many primates can stand upright on their hind legs without any support. \n" ]
"Rubbing Alcohol" is the main ingredient in most skin care (and other) products but we've all been told to basically not use it for anything except an antiseptic. Why?
If I remember correctly, there are two big reasons for this. First, when the skin dries up, it flakes off and ends up in the pores, which causes more blemishes than you had when you originally started. Second, because alcohol dries the skin so much, your skin will try to regain a balance and overproduce oil, creating a long-term problem.
[ "All rubbing alcohols are unsafe for human consumption: isopropyl rubbing alcohols do not contain the ethyl alcohol of alcoholic beverages; ethyl rubbing alcohols are based on denatured alcohol, which is a combination of ethyl alcohol and one or more bitter poisons that make the substance toxic.\n", "Product labels for rubbing alcohol include a number of warnings about the chemical, including the flammability hazards and its intended use only as a topical antiseptic and not for internal wounds or consumption. It should be used in a well-ventilated area due to inhalation hazards. Poisoning can occur from ingestion, inhalation, absorption, or consumption of rubbing alcohol.\n", "Rubbing alcohol refers to either isopropyl alcohol (propan-2-ol) or ethanol based liquids, or the comparable British Pharmacopoeia defined surgical spirit, with isopropyl alcohol products being the most widely available. Rubbing alcohol is undrinkable even if it is ethanol based, due to the bitterants added.\n", "They also have many industrial and household uses. The term \"rubbing alcohol\" has become a general non-specific term for either isopropyl alcohol (isopropanol) or ethyl alcohol (ethanol) rubbing-alcohol products.\n", "Surfactants are commonly found in soaps and detergents. Solvents like alcohol are often used as antimicrobials. They are found in cosmetics, inks, and liquid dye lasers. They are used in the food industry, in processes such as the extraction of vegetable oil. \n", "The term \"rubbing alcohol\" came into prominence in North America in the mid-1920s. The \"original\" rubbing alcohol was literally used as a liniment for massage; hence the name. 
This original rubbing alcohol was rather different from today's precisely formulated surgical spirit; in some formulations it was perfumed and included different additives, notably a higher concentration of methyl salicylate.\n", "Alcohol-based hand rubs are extensively used in the hospital environment as an alternative to antiseptic soaps. Hand-rubs in the hospital environment have two applications: hygienic hand rubbing and surgical hand disinfection. Alcohol based hand rubs provide a better skin tolerance as compared to antiseptic soap.\n" ]
When did the US Government begin doubting that China/Taiwan would ever retake the Chinese mainland?
The US was never really under any illusions that the Nationalists would somehow turn things around in the Civil War after they retreated to Taiwan. Even before the end of WW2, multiple American observers and experts in China had reported that the Communists enjoyed much broader popularity than the Nationalists, and by 1949 the GMD was very obviously overwhelmed. Before the Korean War, the US was expecting an invasion from the mainland to finish things off, and the US government had diplomatically indicated that it wasn't going to do anything about it. Only when the Korean War broke out did the US send the 7th Fleet to the Taiwan Strait to prevent the invasion. It also began to provide the Taiwanese military with equipment and weapons. After the war the US certainly hoped that the CCP would crumble, but its support of Taiwan was based on denying the CCP territory, and especially on keeping China's UN Security Council seat out of the hands of the Communists. There was no real belief that Taiwan could attack the CCP.
[ "On 16 December 1978, U.S. President Jimmy Carter announced that the U.S. would sever its official relationship with the Republic of China as of 1 January 1979. It was the most serious challenge to the Taiwan government since it lost its seat at the United Nations to the People's Republic of China in 1971. President Chiang Ching-kuo immediately postponed all elections without a definite deadline for its restoration. Tangwai, which had won steadily expanding support, was strongly frustrated and disappointed about Chiang's decision since it suspended the only legitimate method they could use to express their opinions.\n", "Since the end of the Chinese civil war in 1949, the Republic of China was limited to Taiwan (taken from Japan in 1945, ceded by Qing China in 1895 - although renounced in 1952) and a few islands near Fujian, while the People's Republic of China controlled mainland China, and since 1950 also the island of Hainan. Both Chinese governments claimed sovereignty over all of China, and regard the other government as being in rebellion. Until 1971, the Republic of China was a permanent member of the UN Security Council with veto power. Since then, however, it was excluded in favor of the People's Republic of China, and since 1972, it was also excluded from all UN-subcommittees. Since the death of Chiang Kai-shek in 1975, Republic of China no longer aggressively asserts its exclusive mandate and most of the world's nations have since broken their official diplomatic ties with Republic of China (except for 21 nations including Holy See as of 2008). Nevertheless, most nations, as well as the People's Republic government, continue to maintain unofficial relations.\n", "The United States did not formally recognize the People's Republic of China (PRC) for 30 years after its founding. 
Instead, the US maintained diplomatic relations with the Republic of China government on Taiwan, recognizing it as the sole legitimate government of China.\n", "Moreover, Japan formally surrendered its claim to sovereignty over Taiwan on 28 April 1952, thus calling into serious doubt the authority of Japan to formally make such an assignment regarding the status of Taiwan over three months later on 5 August 1952. Indeed, British and American officials did not recognize any transfer of Taiwan's sovereignty to \"China\" in either of the post-war treaties.\n", "On the other hand, this meant China lost the opportunity to reunify Taiwan. Initially, the United States had abandoned the KMT and expected that Taiwan would fall to Beijing anyway, so the basic U.S. policy was to \"wait and see\" on the assumption that Taiwan's fall to Communist China was inevitable. However, the North Korean invasion of South Korea, in the context of the Cold War, meant U.S. President Truman intervened again and dispatched the Seventh Fleet to \"neutralize\" the Formosa (Taiwan) Strait.\n", "During the Pacific War, the United States and China were allies against Japan. In October 1945, a month after Japan's surrender, representatives of Chiang Kai-shek, on behalf of the Allied Powers, were sent to Formosa to accept the surrender of Japanese troops. However, during the period of the 1940s, there was no recognition by the United States Government that Taiwan had ever been incorporated into Chinese national territory. Chiang continued to remain suspicious of America's motives.\n", "BULLET::::- Taiwanese historian pointed out: After World War II ended, Republic of China officials went to Taiwan to accept the surrender of Japanese forces on behalf of the Allied Powers. Although they claimed that it was \"Taiwan Retrocession\", it was actually a provisional military occupation and was not a transfer of territories of Taiwan and Penghu. 
A transfer of territory requires a conclusion of an international treaty in order to be valid. But before the government of the Republic of China was able to conclude a treaty with Japan, it was overthrown by the Chinese Communist party and fled its territory. Consequently, that attributed to the controversy of the \"Undetermined Status of Taiwan\" and the controversy over \"Taiwan Retrocession\".\n" ]
How do physics and astronomy undergrad majors differ?
They are very similar, but you'd do better to get a degree in physics if you're really interested in high-level astronomy. Astronomy degrees can focus too much on material that may or may not be relevant to your interests. It's better to get a broad understanding of physics rather than an astronomy-centered understanding of it; your appreciation and understanding of astronomy will only benefit. If you plan on going to grad school, many people recommend a math degree with either a dual degree in physics or at least a minor. Almost everyone regrets not taking more math.
[ "Courses for physics majors are at a much higher level than the two cases discussed above. At the beginning of college, their courses differ little from the physics courses offered as general education for science majors. After the first year, physics majors move on to much deeper material. The first change is that class sizes shrink considerably, since upper-level students have split into different majors. As for course content, quantitative analysis becomes central, and there is usually a solid load of homework. Students' grades are largely determined by homework and exams; non-academic components such as participation and discussion carry little weight. Each year there are specific required courses, but students can usually adjust them to suit their own ability, and enthusiastic students can take graduate-level courses in their senior year. There are also purely lab courses, which teach students how to perform advanced experiments and write lab reports.\n", "Students may major in either natural sciences, music, visual arts, or humanities, though they study most subjects (those which are not related to their area of interest) in mixed classes. The science students choose one main subject, such as physics, chemistry, or biology, and they must also learn computer science and/or another subject.\n", "A standard undergraduate physics curriculum consists of classical mechanics, electricity and magnetism, non-relativistic quantum mechanics, optics, statistical mechanics and thermodynamics, and laboratory experience. Physics students also need training in mathematics (calculus, differential equations, linear algebra, complex analysis, etc.), and in computer science.\n", "There are two paths to earning a bachelor's degree (SB) in physics from MIT. 
The first, \"Course 8 Focused Option\", is for students intending to continue studying physics in graduate school. The track offers a rigorous education in various fields in fundamental physics including classical and quantum mechanics, statistical physics, general relativity, electrodynamics, and higher mathematics. \n", "United States undergraduate physics curriculum, since many students who plan to continue to graduate school apply during the first half of the fourth year. It consists of 100 five-option multiple-choice questions covering subject areas including classical mechanics, electromagnetism, wave phenomena and optics, thermal physics, relativity, atomic and nuclear physics, quantum mechanics, laboratory techniques, and mathematical methods. The table below indicates the relative weights, as asserted by ETS, and detailed contents of the major topics.\n", "Undergraduate physics curricula in American universities includes courses for students choosing an academic major in physics, as well as for students majoring in other disciplines for whom physics courses provide essential prerequisite skills and knowledge. The term \"physics major\" can refer to the academic major in physics or to a student or graduate who has chosen to major in physics.\n", "The School of Physics offers a bachelor's degree in both pure and Applied Physics plus both master's and doctoral degrees in several fields. These degrees are technically granted by the School's parent organization, the Georgia Tech College of Sciences, and often awarded in conjunction with other academic units within Georgia Tech. The graduate program was initiated under Joseph Howey's leadership and the undergraduate program grew in stature to become one of the larger departments in the United States. Howey remained at the helm of the School of Physics for 28 years.\n" ]
how do octopuses avoid giving themselves brain damage?
They don't really have a localized brain, like we do. Rather, it is spread throughout their body, with a lot of neural tissue in the tentacles.
[ "The octopus (along with cuttlefish) has the highest brain-to-body mass ratios of all invertebrates; it is also greater than that of many vertebrates. It has a highly complex nervous system, only part of which is localised in its brain, which is contained in a cartilaginous capsule. Two-thirds of an octopus's neurons are found in the nerve cords of its arms, which show a variety of complex reflex actions that persist even when they have no input from the brain. Unlike vertebrates, the complex motor skills of octopuses are not organised in their brain via an internal somatotopic map of its body, instead using a nonsomatotopic system unique to large-brained invertebrates.\n", "A benthic (bottom-dwelling) octopus typically moves among the rocks and feels through the crevices. The creature may make a jet-propelled pounce on prey and pull it towards the mouth with its arms, the suckers restraining it. Small prey may be completely trapped by the webbed structure. Octopuses usually inject crustaceans like crabs with a paralysing saliva then dismember them with their beaks. Octopuses feed on shelled molluscs either by forcing the valves apart, or by drilling a hole in the shell to inject a nerve toxin. It used to be thought that the hole was drilled by the radula, but it has now been shown that minute teeth at the tip of the salivary papilla are involved, and an enzyme in the toxic saliva is used to dissolve the calcium carbonate of the shell. It takes about three hours for \"O. vulgaris\" to create a hole. Once the shell is penetrated, the prey dies almost instantaneously, its muscles relax, and the soft tissues are easy for the octopus to remove. 
Crabs may also be treated in this way; tough-shelled species are more likely to be drilled, and soft-shelled crabs are torn apart.\n", "Primarily, the octopus situates itself in a shelter where a minimal amount of its body is presented to the external water, which would pose a problem for an organism that breathes solely through its skin. When it does move, most of the time it is along the ocean or sea floor, in which case the underside of the octopus is still obscured. This crawling increases metabolic demands greatly, requiring they increase their oxygen intake by roughly 2.4 times the amount required for a resting octopus. This increased demand is met by an increase in the stroke volume of the octopus' heart.\n", "A science-based report from the University of British Columbia to the Canadian Federal Government has been quoted as stating \"The cephalopods, including octopus and squid, have a remarkably well developed nervous system and may well be capable of experiencing pain and suffering.\"\n", "Avoidance learning in octopuses has been known since 1905. Noxious stimuli, for example electric shocks, have been used as \"negative reinforcers\" for training octpuses, squid and cuttlefish in discrimination studies and other learning paradigms. Repeated exposure to noxious stimuli can have long-term effects on behaviour. It has been shown that in octopuses, electric shocks can be used to develop a passive avoidance response leading to the cessation of attacking a red ball.\n", "Octopuses are highly intelligent, possibly more so than any other order of invertebrates. The level of their intelligence and learning capability are debated, but maze and problem-solving studies show they have both short- and long-term memory. Octopus have a highly complex nervous system, only part of which is localized in their brain. Two-thirds of an octopus' neurons are found in the nerve cords of their arms. 
Octopus arms show a variety of complex reflex actions that persist even when they have no input from the brain. Unlike vertebrates, the complex motor skills of octopuses are not organized in their brain using an internal somatotopic map of their body, instead using a non-somatotopic system unique to large-brained invertebrates. Some octopuses, such as the mimic octopus, move their arms in ways that emulate the shape and movements of other sea creatures.\n", "Octopus remained in prison until the Martian invasion decimated much of Chicago and allowed CyberFace to assert control. The police tried to get Octopus to help them bring down CyberFace but he escaped and gathered up the remains of his former pawn when his unstable body blew up. Octopus took this disembodied head and attached it to the body of BrainiApe.\n" ]
why did we go from round headphone wires to flat ones?
I'm not sure what you mean. All headphones I've bought in the past few years have round wires. Can you give an example of these "flat" wires?
[ "Owing to the fact that a round wire will create air gaps that are not electrically used, the fill factor is always smaller than one. In order to achieve higher fill factors, rectangular or flat wire can be used. This can be wound on flat or upright.\n", "Early speaker cable was typically stranded copper wire, insulated with cloth tape, waxed paper or rubber. For portable applications, common lampcord was used, twisted in pairs for mechanical reasons. Cables were often soldered in place at one end. Other terminations were binding posts, terminal strips, and spade lugs for crimp connections. Two-conductor ¼-inch tip-sleeve phone jacks came into use in the 1920s and '30s as convenient terminations.\n", "One of the reasons behind the adoption of that particular design was that it was cheap to make, with the flat pins being able to be easily stamped out of sheet brass, in contrast to round pins or thicker rectangular ones used in other countries. This was also a consideration when the Chinese authorities officially adopted the design in relatively recent times, despite the considerable inroads the British plug had made, because of its use in Hong Kong. The Chinese socket is normally mounted with the earth pin at the top. This is considered to offer some protection should a conductive object fall between the plug and the socket. The Chinese CPCS-CCC (Chinese 10 A/250 V) plugs and socket-outlets are almost identical, differing by only 1 mm longer pins and installed \"upside down\". Though AS 3112 plugs will physically connect, they may not be electrically compatible to the Chinese 220 V standards. Originally there was no convention as to the direction of the earth pin. Often it was facing upwards, as socket-outlets in China now do but it could also be downwards or horizontal, in either direction.\n", "A short-barrelled version of the phone plug was used for 20th century high-impedance mono headphones, and in particular those used in World War II aircraft. 
These have become rare. It is physically possible to use a normal plug in a short socket, but a short plug will neither lock into a normal socket nor complete the tip circuit.\n", "Historically, all wire was round. Advances in technology now allow the manufacture of jewelry wire with different cross-sectional shapes, including circular, square, and half-round. Half round wire is often wrapped around other pieces of wire to connect them. Square wire is used for its appearance: the corners of the square add interest to the finished jewelry. Square wire can be twisted to create interesting visual effects.\n", "First introduced in 1965, the Trimline included a lighted dial and was encased in a sleek, curved plastic housing that took up much less space than earlier Western Electric telephones. However, the glass-smooth and shallowly-curved plastic handset proved difficult to retain between cheek and shoulder for hands-free communication without slipping, and this problem was never corrected over the life of the model line. Cushioned clamp-on adaptors were manufactured and sold by third parties to make it easier to cradle the handset, but these add-ons would greatly compromise the aesthetic appearance of the telephone.\n", "Disadvantages of single wire operation such as crosstalk and hum from nearby AC power wires had already led to the use of twisted pairs and, for long distance telephones, four-wire circuits. Users at the beginning of the 20th century did not place long distance calls from their own telephones but made an appointment to use a special sound proofed long distance telephone booth furnished with the latest technology.\n" ]
who was the last living person to hold the position (however ceremonial) of a roman senator? When was the last time the Roman senate met?
The Roman Senate continued to meet throughout the first part of the sixth century, and even enjoyed a renaissance of sorts under the barbarian rulers beginning with Odoacer and especially under Theodoric the Great. The Gothic Wars, though, devastated Italy in the mid-sixth century, and Rome was no exception. In 536, it was recaptured by the Eastern Romans; a siege began shortly thereafter in March 537 and lasted for a year; it was sacked in 546 by Totila (during the siege of which a famine haunted the city); captured again in 550 by Totila; and finally captured for good by the Eastern Roman general Narses in 552. Nonetheless, the Roman Senate did survive all these events, albeit in diminished form. We know that the senators continued to meet in some fashion because they pleaded with Emperor Tiberius II Constantine for help against the Lombards in 578, sending an envoy to Constantinople with 3,000 pounds of gold. (The Emperor returned the gold, saying there were no troops to spare, and instead advised the Senate to spend it securing support from Lombard and Frankish rulers.) But by 593, Pope Gregory I wrote the following in his Homilies on Ezekiel (essentially a reflection on the dire state of the world): > Where is the senate? Where are the people? The senate is vanished, the people have perished... Rome is empty and yet Rome is burning. By 593, Rome of course was not empty, and was probably a city of some 30,000 - 50,000, making it still one of the largest cities in Europe at the time even though it was a shadow of its former self. So what he said should not be taken literally, but it's indicative of the decline of the Roman Senate. The last time the Senate is mentioned, though, is in 603 in the Gregorian Register, in which it is noted as having acclaimed new statues of Emperor Phocas and Empress Leontia. But it's referenced in the register as "by the whole clergy and the senate." 
Moreover, the Pope ordered the statues to be moved to the chapel of the Imperial Palace on the Palatine. So it seems clear that in whatever form the Senate existed by 603, it was no longer a significant body. After that time, there are no more references to the Senate, and we know that in 630 its meeting place (the Curia Julia) was converted to a church. I think it's worth quickly noting, though, that the Eastern Roman Senate continued to meet in Constantinople for hundreds of years afterward. It was mostly a ceremonial body during this time, although it did have some influence (in 1197, it exempted Constantinople, and thus its own members, from a special tax that the Emperor had specifically convened it to approve). As for the last person to hold the position, I have no idea. Boethius is probably the last well-known Roman Senator (don't take my word for that), but the Senate continued to function for decades after his death in 524. The last consul was Anicius Faustus Albinus Basilius in 541. After him, the title was added to the Imperial title until Emperor Leo VI got rid of it altogether in the late 9th century. Some sources: _URL_2_ _URL_0_ _URL_1_
[ "The senate as a body was formed of sitting senators, whose number was held at around 600 by the founder of the \"principate\", Augustus (sole rule 30 BC – AD 14) and his successors until 312. Senators' sons and further descendants technically retained equestrian rank unless and until they won a seat in the senate. But Talbert argues that Augustus established the existing senatorial elite as a separate and superior order \"(ordo senatorius)\" to the \"equites\" for the first time. The evidence for this includes:\n", "After the fall of the Roman Republic, the \"princeps senatus\" was the Roman Emperor, and during the period of the Principate, no other individual is believed to have held the office (see also: princeps). However, in the emperor's absence, it is possible that a Senator was granted the privilege of holding this role when the Senate met; the notoriously unreliable Historia Augusta claimed that during the Crisis of the Third Century, some others held the position; in particular, it stated that the future emperor Valerian held the office in 238, during the reigns of Maximinus Thrax and Gordian I, and he continued to hold it through to the reign of Decius. The same source also makes the same claim about Tacitus when the Senate acclaimed him emperor in AD 275.\n", "The princeps senatus (plural \"principes senatus\") was the first member by precedence of the Roman Senate. Although officially out of the \"cursus honorum\" and owning no \"imperium\", this office brought conferred prestige on the senator holding it.\n", "He remained an active senator until his death, even taking his seat in the \"Red Chamber\" a few hours before his death. He died of heart failure in the Château Laurier Hotel, a few hours after attending an afternoon senate session on June 11, 1991. At the time, he was the oldest serving senator, as he was appointed at a time when appointments to the senate were for life.\n", "Around 162 Septimius Severus sought a public career in Rome. 
At the recommendation of his relative Gaius Septimius Severus, Emperor Marcus Aurelius () granted him entry into the senatorial ranks. Membership in the senatorial order was a prerequisite to attain positions within the \"cursus honorum\" and to gain entry into the Roman Senate. Nevertheless, it appears that Severus' career during the 160s met with some difficulties. It is likely that he served as a \"vigintivir\" in Rome, overseeing road maintenance in or near the city, and he may have appeared in court as an advocate. At the time of Marcus Aurelius he was the State Attorney (\"Advocatus fisci\"). However, he omitted the military tribunate from the \"cursus honorum\" and had to delay his quaestorship until he had reached the required minimum age of 25. To make matters worse, the Antonine Plague swept through the capital in 166.\n", "The senate is said to have been created by Rome's first king, Romulus, initially consisting of 100 men. The descendants of those 100 men subsequently became the patrician class. Rome's fifth king, Lucius Tarquinius Priscus, chose a further 100 senators. They were chosen from the minor leading families, and were accordingly called the \"patres minorum gentium\".\n", "BULLET::::- Senators' sons followed a separate \"cursus honorum\" (career-path) to other \"equites\" before entering the senate: first an appointment as one of the \"vigintiviri\" (\"Committee of Twenty\", a body that included officials with a variety of minor administrative functions), or as an \"augur\" (priest), followed by at least a year in the military as \"tribunus militum laticlavius\" (deputy commander) of a legion. This post was normally held before the tribune had become a member of the senate.\n" ]
American and Russian submarines during WW2
During WWII the United States Navy used the Mark 14 torpedo, which had a speed of 46 knots. It is unclear what class of Soviet submarine was involved, but I can assure you it would have been hopelessly outmatched in terms of submerged speed: from what I found, the submerged top speeds of most submarines of that era topped out at around 10-14 knots. In fact, that torpedo would even be able to catch the fastest submarine ever built, the Soviet K-222, which had a top speed of 44.7 knots and wasn't commissioned until 1969.
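To put numbers on that gap, here is a back-of-the-envelope sketch using the speeds quoted above (the 1,000 m straight stern chase is an assumed example, not taken from any real engagement):

```python
KNOT_MS = 0.514444  # one knot in metres per second

def intercept_time_s(torpedo_kn: float, target_kn: float, range_m: float):
    """Seconds for a torpedo to close a straight stern chase.
    Returns None if the target is as fast as (or faster than) the torpedo."""
    closing_kn = torpedo_kn - target_kn
    if closing_kn <= 0:
        return None
    return range_m / (closing_kn * KNOT_MS)

# Mark 14 (46 kn) vs a typical WWII boat at ~14 kn submerged:
# about a minute per 1,000 m of starting range.
print(intercept_time_s(46, 14, 1000))

# Even the 1969 K-222 at 44.7 kn gets run down eventually,
# just roughly 25 times more slowly.
print(intercept_time_s(46, 44.7, 1000))
```

The point of the sketch is that what matters is the closing speed, so a 32-knot advantage over a WWII boat versus a 1.3-knot advantage over K-222 changes the chase time by a factor of about 25.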
[ "During the Cold War, the United States and the Soviet Union maintained large submarine fleets that engaged in cat-and-mouse games. This continues today, on a much-reduced scale. The Soviet Union suffered the loss of at least four submarines during this period: \"K-129\" was lost in 1968 (which the CIA attempted to retrieve from the ocean floor with the Howard Hughes-designed ship named Glomar Explorer), \"K-8\" in 1970, \"K -219\" in 1986 (subject of the film \"Hostile Waters\"), and \"Komsomolets\" (the only Mike class submarine) in 1989 (which held a depth record among the military submarines—1000 m, or 1300 m according to the article K-278). Many other Soviet subs, such as \"K-19\" (first Soviet nuclear submarine, and first Soviet sub at North Pole) were badly damaged by fire or radiation leaks. The United States lost two nuclear submarines during this time: USS \"Thresher\" and \"Scorpion\". The Thresher was lost due to equipment failure, and the exact cause of the loss of the Scorpion is not known.\n", "During the Cold War, the US and the Soviet Union maintained large submarine fleets that engaged in cat-and-mouse games. The Soviet Union lost at least four submarines during this period: was lost in 1968 (a part of which the CIA retrieved from the ocean floor with the Howard Hughes-designed ship \"Glomar Explorer\"), in 1970, in 1986, and in 1989 (which held a depth record among military submarines—). Many other Soviet subs, such as (the first Soviet nuclear submarine, and the first Soviet sub to reach the North Pole) were badly damaged by fire or radiation leaks. The US lost two nuclear submarines during this time: due to equipment failure during a test dive while at its operational limit, and due to unknown causes.\n", "Submarines of World War II represented a wide range of capabilities with many types of varying specifications produced by dozens of countries. 
The principal countries engaged in submarine warfare during the war were Germany, Italy, Japan, the United States, United Kingdom and the Soviet Union. The Italian and Soviet fleets were the largest. While the German and US fleets fought anti-shipping campaigns (in the Atlantic and Pacific respectively), the British and Japanese submarines were mostly engaged against enemy warships.\n", "During World War I, the Russian subs operated together with the British submarine flotilla in the Baltic against the German Navy. This all changed with the October Revolution and the Finnish Civil War.\n", "The American Holland-class submarines, also AG class or A class, were Holland 602 type submarines used by the Imperial Russian and Soviet Navies in the early 20th century. The small submarines participated in the World War I Baltic Sea and Black Sea theatres and a handful of them also saw action during World War II.\n", "Japan, the United States, Great Britain, The Netherlands, and Australia all employed anti-submarine forces in the Pacific Theater during World War II. Because the Japanese Navy tended to utilize its submarines against capital ships such as cruisers, battleships and aircraft carriers, U.S. and Allied anti-submarine efforts concentrated their work in support of fleet defense.\n", "As Cold War tensions increased, the United States Navy formed modernized hunter-killer groups in anticipation of potential use of Soviet submarines to intercept North American shipping to European NATO allies. As modern anti-submarine aircraft became too large to operate from escort carriers, s were reclassified as anti-submarine warfare carriers (CVS). Some second world war destroyers were reclassified as escort destroyers (DDE) with guns and torpedoes replaced by RUR-4 Weapon Alpha or hedgehog. Operational doctrine anticipated each CVS would be accompanied by eight DDEs. Four DDEs would provide a close screen for the CVS while the other four attacked submarines detected by aircraft. 
The cost of Vietnam War combat operations prevented replacement of these ASW ships when they reached the end of their design life. Newly operational SOSUS and shore-based Lockheed P-3 Orion maritime patrol aircraft assumed the mid-ocean ASW search and attack role of the disappearing CVS hunter-killer groups.\n" ]
How was construction in ancient cities prioritized?
Dur Sharrukin was a brand new capital city built by Sargon II from 716 BC to 706 BC. The outer walls measured 1.76 km x 1.635 km and had 157 towers and seven gates. There was a large barracks built in the southwest quarter of the city. The palace and three important temples were built on a terrace on the northern edge of the city, and there was a small working-class residential district near the ceremonial core. However, more than eighty percent of the land inside the city wall remained undeveloped. Sargon II soon died, and the Assyrian court, which had only recently moved into Dur Sharrukin, relocated to another capital city.
[ "The construction of cities was the end product of trends which began in the Neolithic Revolution. The growth of the city was partly planned and partly organic. Planning is evident in the walls, high temple district, main canal with harbor, and main street. The finer structure of residential and commercial spaces is the reaction of economic forces to the spatial limits imposed by the planned areas resulting in an irregular design with regular features. Because the Sumerians recorded real estate transactions it is possible to reconstruct much of the urban growth pattern, density, property value, and other metrics from cuneiform text sources.\n", "The pre-Classical and Classical periods saw a number of cities laid out according to fixed plans, though many tended to develop organically. Designed cities were characteristic of the Minoan, Mesopotamian, Harrapan, and Egyptian civilisations of the third millennium BC (see Urban planning in ancient Egypt). The first recorded description of urban planning appears in the Epic of Gilgamesh: \"Go up on to the wall of Uruk and walk around. Inspect the foundation platform and scrutinise the brickwork. Testify that its bricks are baked bricks, And that the Seven Counsellors must have laid its foundations. One square mile is city, one square mile is orchards, one square mile is claypits, as well as the open ground of Ishtar's temple.Three square miles and the open ground comprise Uruk. Look for the copper tablet-box, Undo its bronze lock, Open the door to its secret, Lift out the lapis lazuli tablet and read.\" \n", "Factors such as wealth and high population densities in cities forced the ancient Romans to discover new architectural solutions of their own. The use of vaults and arches, together with a sound knowledge of building materials, enabled them to achieve unprecedented successes in the construction of imposing infrastructure for public use. 
Examples include the aqueducts of Rome, the Baths of Diocletian and the Baths of Caracalla, the basilicas and Colosseum. These were reproduced at a smaller scale in most important towns and cities in the Empire. Some surviving structures are almost complete, such as the town walls of Lugo in Hispania Tarraconensis, now northern Spain. The administrative structure and wealth of the empire made possible very large projects even in locations remote from the main centers, as did the use of slave labor, both skilled and unskilled.\n", "Architecture developed significantly in the 2nd century BC with the arrival of the Romans, who called the Iberian Peninsula Hispania. Conquered settlements and villages were often modernised following Roman models, with the building of a forum, streets, theatres, temples, baths, aqueducts and other public buildings. An efficient array of roads and bridges was built to link the cities and other settlements.\n", "There is evidence of urban planning and designed communities dating back to the Mesopotamian, Indus Valley, Minoan, and Egyptian civilizations in the third millennium BCE. Archeologists studying the ruins of cities in these areas find paved streets that were laid out at right angles in a grid pattern. The idea of a planned out urban area evolved as different civilizations adopted it. Beginning in the 8th century BCE, Greek city states were primarily centered on orthogonal (or grid-like) plans. The ancient Romans, inspired by the Greeks, also used orthogonal plans for their cities. City planning in the Roman world was developed for military defense and public convenience. The spread of the Roman Empire subsequently spread the ideas of urban planning. As the Roman Empire declined, these ideas slowly disappeared. However, many cities in Europe still held onto the planned Roman city center. Cities in Europe from the 9th to 14th centuries, often grew organically and sometimes chaotically. 
But in the following centuries some newly created towns were built according to preconceived plans, and many others were enlarged with newly planned extensions. From the 15th century on, much more is recorded of urban design and the people that were involved. In this period, theoretical treatises on architecture and urban planning start to appear in which theoretical questions are addressed and designs of towns and cities are described and depicted. During the Enlightenment period, several European rulers ambitiously attempted to redesign capital cities. During the Second French Republic, Baron Georges-Eugène Haussmann, under the direction of Napoleon III, redesigned the city of Paris into a more modern capital, with long, straight, wide boulevards.\n", "Social elements such as wealth and high population densities in cities forced the ancient Romans to go discover new (architectural) solutions of their own. The use of vaults and arches together with a sound knowledge of building materials, for example, enabled them to achieve unprecedented successes in the construction of imposing structures for public use. Examples include the aqueducts of Rome, the Baths of Diocletian and the Baths of Caracalla, the basilicas and perhaps most famously of all, the Colosseum. They were reproduced at smaller scale in most important towns and cities in the Empire. Some surviving structures are almost complete, such as the town walls of Lugo in Hispania Tarraconensis, or northern Spain.\n", "Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city both followed from the development of agriculture, which enabled production of surplus food, and thus a social division of labour (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. 
A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal.\n" ]
why do most unknown calls i receive on my home phone end up having no one on the other end?
They're usually from call centres where they have a machine automatically call numbers on a pre-assigned list. Then, when a voice is detected on your end, someone is connected on their end to talk to you. It saves them time, even if it does result in people going "Huh. No one there" and hanging up before they get a chance.
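The machine in question is usually called a predictive dialer. A toy model (the numbers and the fixed over-dial ratio here are illustrative assumptions, not any real call centre's algorithm) shows why some answered calls end up as dead air:

```python
def dial_batch(free_agents: int, dial_ratio: int, answer_rate: float):
    """Toy predictive dialer: dial more lines than there are free agents,
    betting that most calls go unanswered. Any answered call beyond the
    free-agent pool has no one to connect to -- that callee hears silence."""
    dialed = free_agents * dial_ratio
    answered = round(dialed * answer_rate)
    connected = min(answered, free_agents)
    abandoned = answered - connected  # the "Huh. No one there" calls
    return connected, abandoned

# 5 free agents, the dialer places 30 calls, 1 in 5 people pick up:
print(dial_batch(free_agents=5, dial_ratio=6, answer_rate=0.2))  # (5, 1)
```

Real dialers adjust the over-dial ratio continuously from recent answer rates and call lengths; the trade-off is the same either way: dial too few and agents sit idle, dial too many and households get silent calls.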
[ "Many of these towns have in fact refused to merge, leaving callers with more digits to dial when making local calls. This is partially balanced by not having to dial an area code for the neighboring city.\n", "Because the SIT is well known in many countries, callers can understand that their call has failed, even though they do not understand the language of the recorded announcement (e.g., when calling internationally) instead of assuming the recording is voicemail or some other intended function.\n", "Accidental calls are often cited as being one of the more annoying consequences of cell phone usage. Given the haphazard nature of inadvertent dialing, most actual misconnections do not result from the selection of random numbers. Instead, pocket dialing frequently triggers the \"recently dialed\" and \"contact\" lists that are contained within modern cell phones. The caller is frequently unaware that the call has taken place, whereas the recipient of the call often hears background conversation and background noises such as the rustling of clothes. Due to the dialing of common numbers, the recipient is likely to know the caller, and may overhear conversations that the caller would not want them to hear.\n", "The phone rings again with no answer, and then the line is cut. No one can get a signal on their cell phones. The group decide to leave but when they run out to the car they see that it is missing. A van pulls up in the driveway, scaring the group back into the house. As everyone tries to get a signal on their phones again, all the power in the house goes out. Miriam finally gets a signal on her phone and dials 9-1-1, but the call drops out.\n", "Despite its common usage to address people who call with no one answering the phone, the \"here\" here is semantically contradictory to one's absence. Nevertheless, this is considered normal for most people as speakers have to project themselves as answering the phone when in fact they are not physically. 
\n", "Few families had telephones, relying instead on phone booths located about 100 feet apart. When a phone call would come, whoever was closest at the moment would answer, while the neighborhood children would run to see who the call was for, then pass the word to that person.\n", "In addition to the inconvenience and embarrassment that may result from an erroneously dialed number, the phenomenon can have other consequences including using up a phone user's airtime minutes. Accidental calls, if not hung up immediately, tie up the recipient's phone line. If this is a landline, the recipient may have difficulty in disconnecting the call in order to use the phone, as networks sometimes define a timeout period between the recipient hanging up and the call actually being cleared.\n" ]
Why is Zeta(0) equal to -1/2?
This is because the zeta function is not defined to be equal to that sum on the entire complex plane; it is only given by the sum you're looking at when Re(s) > 1. The Riemann zeta function is actually defined to be the analytic continuation of the function given by the sum in the half plane Re(s) > 1. There isn't one very easy proof of the fact that ζ(0) = -1/2 without first developing some tools about the zeta function. If you're willing to assume some equations, then [Wikipedia has a clean proof](_URL_0_).
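If you just want to see the continuation in action numerically, here's a minimal sketch (my own illustration, not from the proof linked above) using Hasse's globally convergent series, which agrees with the usual sum for Re(s) > 1 but is valid for every s ≠ 1:

```python
from math import comb

def zeta_hasse(s, terms=40):
    """Riemann zeta via Hasse's globally convergent series (valid for any s != 1)."""
    total = 0.0
    for n in range(terms):
        # inner sum is the n-th forward difference of (k+1)^(-s)
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (-s) for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta_hasse(0))   # -0.5
print(zeta_hasse(-1))  # -1/12, i.e. -0.08333...
```

At s = 0 the inner sum vanishes exactly for every n ≥ 1, so the whole series collapses to (1/2)/(1 − 2) = −1/2, even though the original sum 1 + 1 + 1 + ... obviously diverges there.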
[ "The Riemann zeta function ζ(\"s\") is a function whose argument \"s\" may be any complex number other than 1, and whose values are also complex. It has zeros at the negative even integers; that is, ζ(\"s\") = 0 when \"s\" is one of −2, −4, −6, ... These are called its \"trivial zeros\". However, the negative even integers are not the only values for which the zeta function is zero. The other ones are called \"non-trivial zeros\". The Riemann hypothesis is concerned with the locations of these non-trivial zeros, and states that:\n", "The Riemann zeta function is one of the most significant functions in mathematics because of its relationship to the distribution of the prime numbers. The zeta function is defined for any complex number with real part greater than 1 by the following formula:\n", "where ζ is the Riemann zeta function. Keeping Grandi's series in mind, this relation explains why ζ(0) = −⁄; see also 1 + 1 + 1 + 1 + · · ·. The relation also implies a much more important result. Since \"η\"(\"z\") and (1 − 2) are both analytic on the entire plane and the latter function's only zero is a simple zero at \"z\" = 1, it follows that ζ(\"z\") is meromorphic with only a simple pole at \"z\" = 1.\n", "Note that the ratio of the zeta functions is well defined, even for \"n\"  \"s\" − 1 because the series representation of the zeta function can be analytically continued. This does not change the fact that the moments are specified by the series itself, and are therefore undefined for large \"n\".\n", "Zeta (uppercase Ζ, lowercase ζ; , , classical or \"zē̂ta\"; \"zíta\") is the sixth letter of the Greek alphabet. In the system of Greek numerals, it has a value of 7. It was derived from the Phoenician letter zayin . Letters that arose from zeta include the Roman Z and Cyrillic З.\n", "In mathematics, the arithmetic zeta function is a zeta function associated with a scheme of finite type over integers. 
The arithmetic zeta function generalizes the Riemann zeta function and Dedekind zeta function to higher dimensions. The arithmetic zeta function is one of the most-fundamental objects of number theory.\n", "At rational arguments the Hurwitz zeta function may be expressed as a linear combination of Dirichlet L-functions and vice versa: The Hurwitz zeta function coincides with Riemann's zeta function ζ(\"s\") when \"q\" = 1, when \"q\" = 1/2 it is equal to (2−1)ζ(\"s\"), and if \"q\" = \"n\"/\"k\" with \"k\"  2, (\"n\",\"k\")  1 and 0  \"n\"  \"k\", then\n" ]
Why does the metal from meteorites have such a distinctive zig-zag pattern?
That pattern, called a [Widmanstatten pattern](_URL_1_) is due to the crystallization of iron and nickel minerals in the meteorite cooling very slowly. Here, 'very slowly' means a few hundred or thousand degrees C every *million years*. This slow cooling allows for large crystals of these minerals to form. They are actually interlaced crystals of two different alloys of iron and nickel. One type basically grows within the other type. [Here's an excellent review that explains the formation](_URL_0_). The patterns are visible when meteorites are cut, polished, and etched using nitric acid or ferric chloride. These chemicals dissolve different minerals at different rates so you can eat away at one of the alloys more than the other, giving contrast to the two regions.
[ "When an iron meteorite is forged into a tool or weapon, the Widmanstätten patterns remain, but become stretched and distorted. The patterns usually cannot be fully eliminated by blacksmithing, even through extensive working. When a knife or tool is forged from meteoric iron and then polished, the patterns appear in the surface of the metal, albeit distorted, but they tend to retain some of the original octahedral shape and the appearance of thin lamellae criss-crossing each other. Pattern-welded steels such as Damascus steel also bear patterns, but they are easily discernible from any Widmanstätten pattern.\n", "This type of rock formation and weathering process has happened in many other places locally and throughout the world, but what makes Meteora's appearance special is firstly, the uniformity of the sedimentary rock constituents deposited over millions of years leaving few signs of vertical layering, and secondly, the localised abrupt vertical weathering.\n", "The meteorite was formed from nebular dust and gas during the early formation of the Solar System. It is a \"stony\" meteorite, as opposed to an \"iron,\" or \"stony iron,\" the other two general classes of meteorite. Most Allende stones are covered, in part or in whole, by a black, shiny crust created as the stone descended at great speed through the atmosphere as it was falling towards the earth from space. This causes the exterior of the stone to become very hot, melting it, and forming a glassy \"fusion crust.\"\n", "As meteoroids are heated during atmospheric entry, their surfaces melt and experience ablation. They can be sculpted into various shapes during this process, sometimes resulting in shallow thumbprint-like indentations on their surfaces called regmaglypts. If the meteoroid maintains a fixed orientation for some time, without tumbling, it may develop a conical \"nose cone\" or \"heat shield\" shape. 
As it decelerates, eventually the molten surface layer solidifies into a thin fusion crust, which on most meteorites is black (on some achondrites, the fusion crust may be very light colored). On stony meteorites, the heat-affected zone is at most a few mm deep; in iron meteorites, which are more thermally conductive, the structure of the metal may be affected by heat up to below the surface. Reports vary; some meteorites are reported to be \"burning hot to the touch\" upon landing, while others are alleged to have been cold enough to condense water and form a frost.\n", "The crystalline patterns become visible when the meteorites are cut, polished, and acid etched, because taenite is more resistant to the acid. In the picture shown, the broad white bars are \"kamacite\" (dimensions in the mm-range), and the thin line-like ribbons are \"taenite\". The dark mottled areas are called \"plessite\".\n", "In 1808, he independently discovered some metallographic patterns, now called Widmanstätten patterns in iron meteorites, by flame-heating a slab of Hraschina meteorite. The different iron alloys of meteorites oxidized at different rates during heating, causing color and luster differences.\n", "Due to the heterogeneous structure of Seymchan, there are two types of specimens: with or without olivine crystals. It is worthy to note that the specimen pictured to the left shows an interesting, seldom seen feature of iron meteorites. The Widmanstätten pattern on the left hand side of the specimen is visibly bent. This is caused by the shearing of the meteorite as it broke up during atmospheric entry and serves as testimony of the violent experience a meteor is subject to as it falls through the atmosphere.\n" ]
How does parasitism between a host and a parasite of different domains work?
They aren't really tinkering with the code so much as producing an environment that induces the host to do something. It's more like using a bead to instigate a pearl to form than genetic engineering (though the analogy isn't perfect).
[ "Parasitism is a kind of symbiosis, a close and persistent long-term biological interaction between a parasite and its host. Unlike commensalism and mutualism, the parasitic relationship harms the host, either feeding on it or, as in the case of intestinal parasites, consuming some of its food. However, parasites are different from saprophytes. Because parasites interact with other species, they can readily act as vectors of pathogens, causing disease. Predation is by definition not a symbiosis, as the interaction is brief, but the entomologist E. O. Wilson has characterised parasites as \"predators that eat prey in units of less than one\".\n", "Parasites follow a wide variety of evolutionary strategies, placing their hosts in an equally wide range of relationships. Parasitism implies host–parasite coevolution, including the maintenance of gene polymorphisms in the host, where there is a trade-off between the advantage of resistance to a parasite and a cost such as disease caused by the gene.\n", "As it forces its way into the host cell, the parasite forms a parasitophorous vacuole (PV) membrane from the membrane of the host cell. The PV encapsulates the parasite, and is both resistant to the activity of the endolysosomal system, and can take control of the host's mitochondria and endoplasmic reticulum.\n", "In evolutionary biology, parasitism is a relationship between species, where one organism, the parasite, lives on or in another organism, the host, causing it some harm, and is adapted structurally to this way of life. The entomologist E. O. Wilson has characterised parasites as \"predators that eat prey in units of less than one\". Parasites include protozoans such as the agents of malaria, sleeping sickness, and amoebic dysentery; animals such as hookworms, lice, mosquitoes, and vampire bats; fungi such as honey fungus and the agents of ringworm; and plants such as mistletoe, dodder, and the broomrapes. 
There are six major parasitic strategies of exploitation of animal hosts, namely parasitic castration, directly transmitted parasitism (by contact), trophically transmitted parasitism (by being eaten), vector-transmitted parasitism, parasitoidism, and micropredation.\n", "Parasitism is a relationship between species, where one organism, the parasite, lives on or in another organism, the host, causing it some harm, and is adapted structurally to this way of life. The parasite either feeds on the host, or, in the case of intestinal parasites, consumes some of its food.\n", "A parasite may be passively transported into a nest by a group member or may actively search for the nest; once inside, parasite transmission can be vertical (from mother to daughter colony into the next generation) or horizontally (between/within colonies). In eusocial insects, the most frequent defence against parasite uptake into the nest is to prevent infection during and/or after foraging, and a wide range of active and prophylactic mechanisms have evolved to this end.\n", "In a parasitic relationship, the parasite benefits while the host is harmed. Parasitism takes many forms, from endoparasites that live within the host's body to ectoparasites and parasitic castrators that live on its surface and micropredators like mosquitoes that visit intermittently. Parasitism is an extremely successful mode of life; as many as half of all animals have at least one parasitic phase in their life cycles, and it is also frequent in plants and fungi. Moreover, almost all free-living animal species are hosts to parasites, often of more than one species.\n" ]
We all used to eats boogers: what effect could/did that have?
Unfortunately I can't find the study, but a while ago (2-3 years ago) I ran across an article suggesting that eating boogers acts as a pseudo-vaccination: whatever gets caught in the mucus is killed, and the inert form is then ingested, prompting the body to create antibodies. However, given the levels of pollution in most places, it's probably not too healthy.
[ "Boredom, stress, habit and addiction are all possible causes of cribbing and wind-sucking. It was proposed in a 2002 study that the link between intestinal conditions such as gastric inflammation or colic and abnormal oral behavior was attributable to environmental factors. There is evidence that stomach ulcers may be correlated to a horse becoming a cribber.\n", "Mucophagy has also been referred to as a \"tension phenomenon\" based on children's ability to function in their environment. The different degrees of effectively fitting in socially may indicate psychiatric disorders or developmental stress reactions. However, most parents view these habits as pathological issues. Moreover, Andrade and Srihari cited a study performed by Sidney Tarachow of the State University of New York which reported that people who ate their boogers found them \"tasty.\"\n", "BULLET::::- Purging: May use laxatives, diet pills, ipecac syrup, or water pills to flush food out of their system after eating or may engage in self-induced vomiting though this is a more common symptom of bulimia.\n", "When Boog becomes sick from eating too many candy bars, events quickly spiral out of control, as the two raid the town's grocery store. Elliot escapes before Boog is caught by a friend of Beth's, police officer Gordy. At the nature show, Elliot being chased by Shaw, sees Boog, which \"attacks\" him. This causes the whole audience to panic. Shaw attempts to shoot Boog and Elliot, but Beth sedates them both with a tranquilizer gun just before Shaw fires his gun. Shaw flees before Gordy can arrest him for shooting a gun in the town. 
The two troublemakers are banned from the town and into the Timberline National Forest, only three days before open season starts, but they are relocated above the waterfalls, where they will be safe from the hunters.\n", "Stefan Gates in his book \"Gastronaut\" discusses eating dried nasal mucus, and says that 44% of people he questioned said they had eaten their own dried nasal mucus in adulthood and said they liked it. As mucus filters airborne contaminants, eating it could be thought to be unhealthy; Gates comments that \"our body has been \"built\" to consume snot\", because the nasal mucus is normally swallowed after being moved inside by the motion of the cilia. Friedrich Bischinger, a lung specialist at Privatklinik Hochrum in Innsbruck, says that nose-picking and eating could actually be beneficial for the immune system.\n", "Stefan Gates in his book \"Gastronaut\" discusses eating dried nasal mucus, and says that 44% of people he questioned said they had eaten their own dried nasal mucus in adulthood and said they liked it. As mucus filters airborne contaminants, eating it could be thought to be unhealthy; Gates comments that \"our body has been \"built\" to consume snot\", because the nasal mucus is normally swallowed after being moved inside by the motion of the cilia. Friedrich Bischinger, a lung specialist at Privatklinik Hochrum in Innsbruck, says that nose-picking and eating could actually be beneficial for the immune system.\n", "Adult flukes are known to be quite harmless, as they do not attack on the host tissue. It is the immature flukes which are most damaging as they get attached to the intestinal wall, literally and actively sloughing off of the tissue. This necrosis is indicated by haemorrhage in faeces, which in turn is a sign of severe enteritis. Under such condition the animals become anorexic and lethargic. It is often accompanied by pronounced diarrhoea, dehydration, oedema, polydipsia, anaemia, listlessness and weight loss. 
In sheep profuse diarrhoea usually develops two to four weeks after initial infection. If infection is not properly attended death can ensue within 20 days, and in a farm mortality can be very high. In fact there are intermittent reports of mortality as high as 80% among sheep and cattle. Sometimes chronic form is also seen with severe emaciation, anaemia, rough coat, mucosal oedema, thickened duodenum and oedema in the sub maxillary space. The terminally sick animals lie prostrate on the ground, completely emaciated until they die. In buffalos, severe haemorrhage was found to be associated with liver cirrhosis and nodular hepatitis.\n" ]
when you jump into a cold lake (say 60°f or ~15°c) why does the water no longer feel cold after about 5 minutes?
Your body has mechanisms in place to warm you up should you be in a cold environment. Dilated blood vessels provide a flush of warmth, and shivering also produces warmth. By doing this, your body can increase its internal temperature and keep you a bit more comfortable. Stay in too long, though, and your vessels will end up constricting, because your body decides it's too cold and starts preserving heat for your vital organs. If you stay in that cold water for too long, it'll cool your blood and, with it, the rest of your body, and that's how hypothermia happens. Stay comfortable, but stay warm. Old people can die of hypothermia simply by falling onto a cold floor and being unable to get help getting up.
[ "Winter swimming can be dangerous to people who are not used to swimming in very cold water. After submersion in cold water the cold shock response will occur, causing an uncontrollable gasp for air. This is followed by hyperventilation, a longer period of more rapid breathing. The gasp for air can cause a person to ingest water, which leads to drowning. As blood in the limbs is cooled and returns to the heart, this can cause fibrillation and consequently cardiac arrest. The cold shock response and cardiac arrest are the most common causes of death related to cold water immersion.\n", "Heat transfers very well into water, and body heat is therefore lost extremely quickly in water compared to air, even in merely 'cool' swimming waters around 70F (~20C). A water temperature of can lead to death in as little as one hour, and water temperatures hovering at freezing can lead to death in as little as 15 minutes. This is because cold water can have other lethal effects on the body, so hypothermia is not usually a reason for drowning or the clinical cause of death for those who drown in cold water.\n", "BULLET::::- Water at near-freezing temperatures is less dense than slightly warmer water - maximum density of water is at about 4°C - so when near freezing, water may be slightly warmer at depth than at the surface.\n", "Cold shock response is the physiological response of organisms to sudden cold, especially cold water, and is a common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body, and for people with heart disease, this additional workload can cause the heart to go into arrest. 
A person who survives the initial minute after falling into cold water can survive for at least thirty minutes provided they do not drown. The ability to stay afloat declines substantially after about ten minutes as the chilled muscles lose strength and co-ordination.\n", "The unusual density curve and lower density of ice than of water is vital to life—if water were most dense at the freezing point, then in winter the very cold water at the surface of lakes and other water bodies would sink, the lake could freeze from the bottom up, and all life in them would be killed. Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer. The layer of ice that floats on top insulates the water below. Water at about 4 °C (39 °F) also sinks to the bottom, thus keeping the temperature of the water at the bottom constant (see diagram).\n", "Cold shock response is the physiological response of organisms to sudden cold, especially cold water, and is a common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body, and for people with heart disease, this additional workload can cause the heart to go into arrest. A person who survives the initial minute of trauma after falling into icy water can survive for at least thirty minutes provided they don't drown. However, the ability to perform useful work like staying afloat declines substantially after ten minutes as the body protectively cuts off blood flow to \"non-essential\" muscles.\n", "Care should be taken when winter swimming in swimming pools and seas near the polar regions. 
The chlorine added to water in swimming pools and the salt in seawater allow the water to remain liquid at sub-zero temperatures. Swimming in such water is significantly more challenging and dangerous. The experienced winter swimmer Lewis Gordon Pugh swam near the North Pole in water and suffered a frostbite injury in his fingers. It took him four months to regain sensation in his hands.\n" ]
what's the difference between nuclear and thermonuclear?
Usually, "nuclear" weapons are basic uranium- or plutonium-based single-stage bombs that function by triggering a fission reaction, which causes a highly radioactive material to "break" apart and release large amounts of energy. "Thermonuclear" is used to describe fusion devices, which usually fuse hydrogen into helium and release energy that way; a thermonuclear weapon uses an initial fission explosion to "kickstart" a second-stage fusion explosion. That's why hydrogen bombs are usually referred to as "thermonuclear bombs". TL;DR - Nuclear = Fission = Breaks apart atoms to generate energy - Thermonuclear = Fusion = Joins atoms of element A into element B to generate energy.
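For a sense of scale, here's a rough back-of-the-envelope comparison (my own sketch using textbook-ish round numbers, not figures from the answer above: ~200 MeV per U-235 fission, ~17.6 MeV per D+T fusion reaction) showing that fusion releases several times more energy per kilogram of fuel:

```python
MEV_TO_J = 1.602e-13   # joules per MeV
N_A = 6.022e23         # Avogadro's number
U = 1.661e-27          # atomic mass unit in kg

# Fission: ~200 MeV per U-235 nucleus; one mole of U-235 is 0.235 kg
fission_j_per_kg = (N_A / 0.235) * 200 * MEV_TO_J

# Fusion: ~17.6 MeV per D+T reaction; one D+T pair masses ~5 u
fusion_j_per_kg = (17.6 * MEV_TO_J) / (5 * U)

print(f"fission: {fission_j_per_kg:.2e} J/kg")  # ~8e13 J/kg
print(f"fusion:  {fusion_j_per_kg:.2e} J/kg")   # ~3.4e14 J/kg, roughly 4x more
```

That per-kilogram advantage, plus the fact that fusion fuel isn't fissile material in short supply, is a big part of why staged thermonuclear designs dominate high-yield weapons.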
[ "A thermonuclear weapon, or fusion weapon, is a second-generation nuclear weapon design. Its greater sophistication over pure fission weapons may afford it vastly greater destructive power than first-generation atomic bombs, a more compact size, a lower mass or a combination of these benefits. Characteristics of nuclear fusion reactions make possible the use of non-fissile depleted uranium as the weapon's main fuel, thus allowing more efficient use of scarce fissile material (U-235 and Pu-239).\n", "Nuclear thermal propulsion systems (NTR) are based on the heating power of a fission reactor, offering a more efficient propulsion system than one powered by chemical reactions. Current research focuses more on nuclear electric systems as the power source for providing thrust to propel spacecraft that are already in space. \n", "As thermonuclear weapons represent the most efficient design for weapon energy yield in weapons with yields above , virtually all the nuclear weapons of this size deployed by the five nuclear-weapon states under the Non-Proliferation Treaty today are thermonuclear weapons using the Teller–Ulam design.\n", "This list of nuclear power systems in space includes nuclear power systems that were flown to space, or launched in an attempt to reach space. Examples of nuclear power systems include radioisotope heater units (RHU), Radioisotope thermoelectric generators (RTG), thermionic converters, and fission reactors. Initial total spacecraft power is provided as electrical energy (We) or thermal energy (Wt), depending on the intended application.\n", "Compared to fission weapons, thermonuclear designs are exceedingly complex, and staged weapons in particular are so complex that only five countries (USA, Russia, France, UK, China) have created them in more than 70 years of research. The fuels for an H-bomb are also far more difficult to create. 
Several countries with long-standing nuclear weapons programs, such as India and Pakistan, are suspected of striving towards a hybrid or \"boosted\" design instead, which is easier. Since both fusion weapons and hybrid designs can at times be referred to as \"hydrogen bombs\", it cannot be said with certainty at present, what type of weapon North Korea may have been referring to in any given test. At present, analysts are skeptical of the 2016 test being a staged thermonuclear design, while noting that the most recent test, in 2017, was considerably more powerful. In 2018, North Korea had offered and was reportedly preparing for inspections at nuclear and missile sites.\n", "Of the four basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three.\n", "A thermonuclear weapon is a type of nuclear bomb that releases energy through the combination of fission and fusion of the light atomic nuclei of deuterium and tritium. With this type of bomb, a thermonuclear detonation is triggered by the detonation of a fission type nuclear bomb contained within a material containing high concentrations of deuterium and tritium. Weapon yield is typically increased with a tamper that increases the duration and intensity of the reaction through inertial confinement and neutron reflection. Nuclear fusion bombs can have arbitrarily high yields making them hundreds or thousands of times more powerful than nuclear fission.\n" ]
Do electric motors in cars have a limited lifespan?
Not forever. AC motors have bearings which will need periodic replacement, windings with relatively delicate insulation over many turns of thin wire, and a metal cage with a rotor. Stick all this in a bouncy box that accelerates and brakes constantly. Sure, most of these issues can be minimized with good engineering, but they are still issues. I've seen motors last many years, but a car is a pretty tough environment with pretty harsh and variable demands. Motors will fail. I'd say ten to fifteen years of trouble-free running will be a good and attainable result. This is just the motor I'm talking about; all the rest of the running gear (suspension, CV joints, etc.) will have the same issues as a car with an internal combustion engine. Source: electrician. Been working with AC motors and speed controllers for years.
[ "In very small vehicles, the power demand decreases, so human power can be employed to make a significant improvement in battery life. Two such commercially made vehicles are the Sinclair C5 and TWIKE.\n", "Its life cycle is usually far greater than a purely electronic UPS, up to 30 years or more. But they do require periodic downtime for mechanical maintenance, such as ball bearing replacement. In larger systems redundancy of the system ensures the availability of processes during this maintenance. Battery-based designs do not require downtime if the batteries can be hot-swapped, which is usually the case for larger units. Newer rotary units use technologies such as magnetic bearings and air-evacuated enclosures to increase standby efficiency and reduce maintenance to very low levels.\n", "Electric motors are more efficient than internal combustion engines in converting stored energy into driving a vehicle. However, they are not equally efficient at all speeds. To allow for this, some cars with dual electric motors have one electric motor with a gear optimised for city speeds and the second electric motor with a gear optimised for highway speeds. The electronics select the motor that has the best efficiency for the current speed and acceleration. Regenerative braking, which is most common in electric vehicles, can recover as much as one fifth of the energy normally lost during braking. Efficiency increases when renewable electricity is used\n", "The two electric motors will have a combined power output of and of torque. The car will have claimed acceleration figures of in a sub 4.0 seconds time and in 1.5 seconds, along with a top speed of . Maximum performance will be accessible regardless of battery charge. 
A prototype was tested at the Nurbürgring to ensure that the car delivers linear power despite hard usage.\n", "All-electric have lower maintenance costs as compared to internal combustion vehicles, since electronic systems break down much less often than the mechanical systems in conventional vehicles, and the fewer mechanical systems on board last longer due to the better use of the electric engine. Electric cars do not require oil changes and other routine maintenance checks.\n", "Tesla said in February 2009 that the ESS had expected life span of seven years/, and began selling pre-purchase battery replacements for about one third of the battery's price today, with the replacement to be delivered after seven years. Tesla says the ESS retains 70% capacity after five years and of driving, assuming driven each year. A July 2013 study found that after , Roadster batteries still had 80%–85% capacity and the only significant factor is mileage (not temperature).\n", "Thanks to the on-demand torque output of the electric motor, the EV1 could accelerate from in 6.3 seconds, and from in eight seconds. The car's top speed was electronically limited to . At the time of release, the lead-acid battery-equipped EV1 was the only electric car produced which met all of the United States Department of Energy's EV America performance goals.\n" ]
Were white British (and Dominion) troops' relationships with non-white troops from the colonies generally positive?
Quite a few from the West Indies served as aircrew _URL_0_ It should be noted that these aircrew didn't serve in segregated squadrons and would often be the only non-white person in their crew, let alone the squadron. For example: _URL_2_ Otherwise it should be noted that many colonial forces like the Indian Army, King's African Rifles and Gurkhas had been in existence for an extended period of time and had developed their own sets of loyalties and traditions. In this sense, their attitude towards white troops (and vice versa) would have been no different to business as usual. _URL_1_ Actually the only major full-scale mutiny by colonial troops was the Indian Rebellion of 1857, and even then substantial numbers of colonial troops remained loyal (mainly Sikhs).
[ "All World War I belligerents with colonial possessions went to great lengths to recruit soldiers from their colonies. Germany was the only one of the Central Powers with substantial overseas possessions; it used numerous non-white troops to defend her colonies. Regardless of German attitudes toward the indigenous inhabitants of German colonies, Germany's lack of control of the sea lanes would have made it nearly impossible for the German Army to bring any substantial number of colonial troops to European battlefields. Notwithstanding the exact circumstances, most Germans quickly came to view non-white Allied troops with disdain and were contemptuous of the Allies' willingness to use these troops in Europe. \n", "Though it was one of the few combatant territories not to raise fighting men through conscription, proportional to white population, Southern Rhodesia contributed more manpower to the British war effort than any other dominion or colony, and more than Britain itself. White troops numbered 5,716, about 40% of white men in the colony, with 1,720 of these serving as commissioned officers. The Rhodesia Native Regiment enlisted 2,507 black soldiers, about 30 black recruits scouted for the Rhodesia Regiment, and around 350 served in British and South African units. Over 800 Southern Rhodesians of all races lost their lives on operational service during the war, with many more seriously wounded.\n", "Soldiers of the East-India Company, British Raj and Princely States in the Indian subcontinent were crucial in securing and defending Hong Kong as a crown colony for Britain. Examples of troops from the Indian sub-continent include the 1st Travancore Nair Infantry, 59th Madras Native Infantry, 26th Bengal Native Infantry, 5th Light Infantry, 40th Pathans, 6th Rajputana Rifles, 11th Rajputs, 10th Jats, 72nd Punjabis, 12th Madras Native Infantry, 38th Madras Native Infantry, Indian Medical Service, Indian Hospital Corps, Royal Indian Army Service Corps, etc. 
Large contingents of troops from India were garrisoned in Hong Kong right from the start of British Hong Kong and until after World War II. Contributions by the Indian military services in Hong Kong suffer from the physical decay of battle-sites, destruction of documentary archives and sources of information, questionable historiography, conveniently lopsided narratives, unchallenged confabulation of urban myths and incomplete research within academic circles in Hong Kong, Britain and India. Despite high casualties among troops from the British Raj during the Battle of Hong Kong, their contributions are either minimised or ignored. The use of generic words such as \"Allied\", \"British\", \"Commonwealth\" fails to highlight that a significant number of soldiers who defended Hong Kong were from India. Commonwealth War Graves Commission (CWGC) Sai Wan War Cemetery references the graves of Indian troops as \"Commonwealth\" soldiers. War office records about the Battle of Hong Kong are yet to be fully released online. Transcripts of proceedings from war tribunals held in Hong Kong from 1946 to 1948 by British Military Courts remain mostly confined to archives and specialised museums.\n", "The British colonies deployed fewer numbers of black African troops. Thousands of white South Africans and Rhodesians saw service in the Middle East and the Mediterranean, while black soldiers generally were assigned to logistics formations. Some black regiments however did see combat, such as the King's African Rifles in the conquest of Madagascar from Vichy France in 1942, and the thousands of men of two West African divisions that fought with the British 14th Army against the Japanese in Burma. World War II was to have a profound effect on attitudes and developments in the African colonies. 
As various colonial reports note:\n", "Proportional to white population, Southern Rhodesia had contributed more personnel to the British armed forces in World War I than any of the Empire's dominions or colonies, and more than Britain itself. About 40% of white males in the colony, 5,716 men, put on uniform, with 1,720 doing so as commissioned officers. Black Southern Rhodesians were represented by the 2,507 soldiers who made up the Rhodesia Native Regiment, the roughly 350 who joined the British East Africa Transport Corps, British South Africa Police Mobile Column and South African Native Labour Corps, and the few dozen black scouts who served with the 1st and 2nd Rhodesia Regiments in South-West and East Africa. Southern Rhodesians killed in action or on operational duty numbered over 800, counting all races together—more than 700 of the colony's white servicemen died, while the Rhodesia Native Regiment's black soldiers suffered 146 fatalities.\n", "Carmichael is noted for recognising the value and usefulness of incorporating native Caribbean troops into the British Army. In 1797, he wrote that they were not only critical militarily, but their strength and stamina had been proven by their having to carry British soldiers through the heat and over the rocks at the Battle of Grenada. He campaigned for the right of slave soldiers to give evidence at Military tribunals. 
White and black soldiers alike were brutally flogged for violating military rules, but Carmichael found a more humane method to be equally as effective: During his eleven years as Lieutenant-Colonel of the 2nd West India Regiment, Carmichael instead demoted native offenders to a position resembling that of a common field slave – deprived of weapons and appointments and employed only on fatigue duties.\n", "Although they were in the Cape Colony at the time, no units from the Australian colonies were involved in the Black Week between 10–17 December, in which Britain suffered three successive defeats at the Battle of Stormberg, the Battle of Magersfontein, and the Battle of Colenso. The Boers knew that Empire forces would be sent to reinforce the British positions, and so sought to strike quickly against them.\n" ]
the science of coffee
Coffee is a solution - in the same way that mixing salt with water gives you salt water, coffee is bits of coffee mixed in with water. When you grind a coffee bean, there are some parts of the bean that can dissolve in hot water, and other parts that can't. What you're tasting, then, is the bits that *can* dissolve into water leaving the bits that can't dissolve behind in the filter.
[ "CofFEE states that it seeks to undertake and promote research into the goals of full employment, price stability and achieving an economy that delivers equitable outcomes for all. Its main focus is on macroeconomics, labour economics, regional development and monetary economics.\n", "\"Coffee: A Comprehensive Guide to the Bean, the Beverage, and the Industry\", senior editor and contributor, Rowman and Littlefield, 2013. The book won a prize from Gourmand Magazine as the best published on coffee in the U.S. in 2013. Named by Library Journal as one of the best reference works of 2013.\n", "Coffee is a brewed drink prepared from the roasted seeds of several species of an evergreen shrub of the genus \"Coffea\". The two most common sources of coffee beans are the highly regarded \"Coffea arabica\", and the \"robusta\" form of the hardier \"Coffea canephora\". Coffee plants are cultivated in more than 70 countries. Once ripe, coffee \"berries\" are picked, processed, and dried to yield the seeds inside. The seeds are then roasted to varying degrees, depending on the desired flavor, before being ground and brewed to create coffee.\n", "The Birth of Coffee is a transmedia project which includes a book of words and images, a photographic exhibit, and a website. It focuses on the people worldwide who grow and produce coffee. The project illustrates how coffee – combined with the volatile locations where it grows and labor-intensive growing processes – often shapes those people's lives.\n", "Research for their second project, The Birth of Coffee, began in 1996. The aim of this project is to help the average coffee-drinker to be aware of the difficult process that laborers must endure in order to grow and produce coffee. 
A book of their findings from this expedition, \"The Birth of Coffee\", was published by Random House in 2001.\n", "There are more than 1,000 chemical compounds in coffee, and their molecular and physiological effects are areas of active research in food chemistry. There are a large number of ways to organize coffee compounds. The major texts in the area variously sort by effects on flavor, physiology, pre- and post-roasting effects, growing and processing effects, botanical variety differences, country of origin differences, and many others. Interactions between compounds also is a frequent area of taxonomy, as are the major organic chemistry categories (Protein, carbohydrate, lipid, etc.) that are relevant to the field. In the field of aroma and flavor alone, Flament gives a list of 300 contributing chemicals in green beans, and over 850 after roasting. He lists 16 major categories to cover those compounds related to aroma and flavor.\n", "CoffeeScript is a programming language that transcompiles to JavaScript. It adds syntactic sugar inspired by Ruby, Python and Haskell in an effort to enhance JavaScript's brevity and readability. Specific additional features include list comprehension and pattern matching.\n" ]
Does anyone have examples of national anthems that were later abolished/replaced?
I believe the German National Anthem was replaced during the Nazi era. The Soviet anthem had its words replaced, but kept the rather stirring melody. South Africa replaced "Die Stem van Suid-Afrika", but kept a verse of it in the new anthem. Canada stopped using "God Save the Queen". Czechoslovakia's anthem was split (like everything else) right down the middle, though this is a weak example, as it was originally two songs that had been fused together (like everything else).
[ "Adoption of national anthems prior to the 1930s was mostly by newly formed or newly independent states, such as the First Portuguese Republic (\"A Portuguesa\", 1911), the Kingdom of Greece (\"Hymn to Liberty\", 1865), the First Philippine Republic (\"Marcha Nacional Filipina\", 1898), Lithuania (\"Tautiška giesmė\", 1919), Weimar Germany (\"Deutschlandlied\", 1922), Republic of Ireland (\"Amhrán na bhFiann\", 1926) or Greater Lebanon (\"Lebanese National Anthem\", 1927).\n", "The national anthem had two official versions. The original version which was in use from 1815 to 1898 was written to honor a king. The second version which was in use from 1898 to 1932 was rewritten and used to honor Queen Wilhelmina.\n", "Despite the belief that it was adopted as the national anthem in 1866, no such recognition has ever been officially accorded. A kind of official recognition came in 1893, when King Oscar II rose in honor when the song was played. In 2000 a Riksdag committee rejected as \"unnecessary\" a proposal to give the song official status. The committee concluded that the song has been established as the national anthem by the people, not by the political system, and that it is preferable to keep it that way.\n", "The last attempts to change the anthem were first during the administration of General Juan Velasco Alvarado who attempted to change the second and third stanzas. In similar form to previous attempts, it was imposed during official ceremonies and in schools and during the administration of General President Francisco Morales Bermudez the last stanza was sung instead of the first. 
But these attempts also had no success and the original anthem was once again sung when his successor Fernando Belaunde Terry became President in 1980.\n", "Due to the fact that the traditional vocal adaptation composed by Alberto Nepomuceno for Joaquim Osorio Duque Estrada's lyrics of the national anthem was made official in 1971, other vocal arrangements (as well as other instrumental arrangements departing from the one recognized in law) are unofficial. Because of that, for the remainder of the Military Regime era (that lasted until 1985), the playing of the anthem with any artistic arrangement that departed from the official orchestration and vocal adaptation was prohibited, and there was strict vigilance regarding the use of the National Symbols and the enforcement of this norm. Since the redemocratization of the country, far greater artistic liberty has been allowed regarding renderings of the national anthem. Singer Fafá de Belém's interpretation of the national anthem (initially criticized during the final days of the Military Regime, but now widely accepted), is an example of that. In any event, although the use of different artistic arrangements for the anthem is now permitted (and although the statutory norms that prohibited such arrangements are no longer enforced, on the grounds of constitutional freedom of expression), a rendering of the national anthem is only considered fully official when the statutory norms regarding the vocal adaptation and orchestration are followed. However, the traditional vocal adaptation composed by Alberto Nepomuceno was so well established by the time it became official that the interpretations of the national anthem that depart from the official orchestration or from the official vocal adaptation are few. 
Indeed, although other arrangements are now allowed, the traditional form tends to prevail, so that, with few exceptions, even celebrity singers tend to only lend their voices to the singing of the official vocal adaptation by Alberto Nepomuceno.\n", "Most of the best-known national anthems were written by little-known or unknown composers such as Claude Joseph Rouget de Lisle, composer of \"La Marseillaise\" and John Stafford Smith who wrote the tune for \"The Anacreontic Song\", which became the tune for the U.S. national anthem, \"The Star-Spangled Banner.\" The author of \"God Save the Queen\", one of the oldest and most well known anthems in the world, is unknown and disputed.\n", "Although \"God Save The Queen\" ceased to be played at official occasions, no replacement was adopted or used as a national anthem immediately after the declaration of a republic. It was only in 1974 that \"Rise, O Voices of Rhodesia\", sung to the tune of \"Ode to Joy\", was adopted as the national anthem, after unsuccessful attempts to find an original melody.\n" ]
During the Waco standoff in 1993, why did large segments of the American population rally around the leader of a doomsday cult who was sexually abusing young girls, rather than their own government?
You said during, so I'll try to keep the focus as contemporary as possible to the siege. Nevertheless, I only found one opinion poll taken before the lethal ending of the Waco siege, and it polled only Waco residents, a small and obviously unrepresentative sample of the US population. Further, in [this series of polls](_URL_0_), it's interesting to see how radically public opinion shifted against the government as the nineties progressed. To clarify the "large segment" who supported David Koresh and the Branch Davidians, I found three opinion polls taken in April 1993. 70% of people polled supported the government's actions at Waco, versus 27% who opposed them, according to [the ABC poll from 1993](_URL_0_). [A poll from the New York Times](_URL_3_) found 8 out of 10 Americans believed David Koresh was responsible for the deaths at Waco. And finally, a poll from the [Waco Tribune Herald (footnote 5)](_URL_1_) had only fifty percent of locals supporting government action against the Branch Davidians, though 82% supported the government's ending of the siege. As detailed in both the ABC poll and Gore Vidal's *Decline and Fall of the American Empire*, those who opposed the actions of the FBI and ATF were largely hostile to what they perceived as government overreach. They saw the Branch Davidians as harmless, "minding their own business," and not doing anything that should provoke the violent repression meted out by the federal besiegers. The fact that it was families pitted against heavily armed troopers with armored personnel carriers and tanks made for pretty poor optics regardless of whose side one took. The first, ostensible reason given by the Clinton Administration for infiltrating the Branch Davidian compound was to seize illegally held arms, a stick in the eye for Americans who hold the Second Amendment dear.
When the agents assigned to this mission were repulsed and the situation began to heat up, George Stephanopoulos, the White House Communications Director, changed the narrative to one of trying to save the children sequestered with their families. Against this claim, Pastor Robert McCurry argued that [this was an illegitimate use of federal force](_URL_2_), given that child protection falls under state jurisdiction. McCurry, well attuned to the limits of legitimate force, railed against what he saw as a monstrous attack by the government against its people. Remember, this was less than a year after the "[Ruby Ridge Massacre](_URL_4_)," a shootout between government agents and a family fleeing the law that left a US Marshal, a mother, and a son dead. That event garnered quite a bit of sympathy for the family caught in the crossfire and predisposed certain people against the kind of government "repression" that occurred at Waco. Both Ruby Ridge and Waco were precipitated by firearms charges and involved the ATF (Bureau of Alcohol, Tobacco, and Firearms). Given the Second Amendment and the mythology of private gun ownership enabling the American Revolution and protecting "Liberty," the federal government's perceived use of violence to monopolize its control of domestic firearms in these two instances rattled certain segments of the population.
[ "During recent years, the presence of gangs such as the Latin Kings, the Bloods and the Crips have been recorded at the event. Following the parade on June 11, 2000, a number of women were harassed, robbed and sexually assaulted by mobs of young men in and about Central Park. The attacks, which were videotaped by onlookers, led to the arrest and prosecution of many of those involved.\n", "In 1879, political gangs controlled the polling locations, shooting and wounding African American Ellicott City voters. The deputy sheriff declined to arrest the leaders for fear of his life and further outbreaks of violence.\n", "In the summer of 2000, two Mexican immigrants were lured into the basement of an abandoned building in Shirley, NY and beaten by two white supremacists. Then, in 2003, four teenagers were arrested for firebombing the home of a Mexican family. The group, Sachem Quality of Life Organization, was formed in response to the influx of illegal immigrants. The dispute was the subject of a 2004 documentary, \"Farmingville\".\n", "The area gained notoriety in April 1989 due to the Central Park jogger case. A white female jogger was badly beaten and raped at night in the North Woods, when 30-32 youths from East Harlem were known to have been roaming through the park, and accosting and sometimes assaulting eight other persons. According to \"The New York Times\", the attack was \"one of the most widely publicized crimes of the 1980s\". A group of four black and one Hispanic teenagers, who became known as the \"Central Park Five\", were convicted of this and another assault, and sentenced to years in prison. Their convictions were vacated after another man confessed to the crime in 2002, his DNA matched that found in semen at the scene, and the DA's office conducted an investigation of other elements of the evidence. 
\n", "The Good Ol' Boys Roundup was an annual whites only event run by agents of the Bureau of Alcohol, Tobacco and Firearms in southern Tennessee from 1980-1996. A senior manager at the Knoxville U.S. Attorney's Office warned personnel not to attend due to reports of \"heavy drinking, strippers, and persons engaging in extramarital affairs\". After allegations emerged that a \"Ku Klux Klan attitude\" pervaded the event a Senate Judiciary Committee was formed to investigate.\n", "A subsequent scare occurred in the United States in the early twentieth century, peaking in 1910, when Chicago's U.S. attorney announced (without giving details) that an international crime ring was abducting young girls in Europe, importing them, and forcing them to work in Chicago brothels. These claims, and the panic they inflamed, led to the passage of the United States White-Slave Traffic Act of 1910, generally known as the \"Mann Act\". It also banned the interstate transport of females for immoral purposes. Its primary intent was to address prostitution and immorality.\n", "In addition to speaking with law enforcement, the women of ASWPL talked to church groups all over the South about lynchings. They also created a network of women who were able to find out about lynchings before they happened and report the possible attacks to law enforcement, or even, in some cases, \"go themselves to stop the lynchings.\" Later, the ASWPL sought ways to work with local newspapers to publicize potential lynchings so that those involved could not keep their activities secret.\n" ]
How did calibers in uneven numbers come about? Like 152mm, 37mm and 76mm?
The general reason for these odd numbers is that they are conversions from pre-metric measurements: 37mm is about 1.5 inches, 76mm is about 3 inches, 88mm is about 3.5 inches, and 152mm is about 6 inches. Early tank and anti-tank guns are particularly prone to this because many were adapted from naval guns, like the [US WWII 76mm](_URL_0_). However, every country had its own specific reasons for keeping the old non-metric calibers on specific weapons, and I don't know enough to explain why the Soviets or Germans, for example, kept the odd calibers.
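The arithmetic behind this is easy to check. A minimal Python sketch (the caliber list is just the examples from above; 1 inch = 25.4 mm exactly) converts each metric caliber back to inches and rounds to the nearest half inch, showing that each one lands near a round imperial figure:

```python
# Converting "odd" metric calibers back to inches shows they sit
# close to round imperial numbers (to the nearest half inch).
MM_PER_INCH = 25.4  # exact by definition

for mm in (37, 76, 88, 152):
    inches = mm / MM_PER_INCH
    nearest_half = round(inches * 2) / 2
    print(f"{mm} mm = {inches:.2f} in  (~ {nearest_half} in)")
```

Running this prints 1.46, 2.99, 3.46, and 5.98 inches respectively, i.e. roughly 1.5, 3, 3.5, and 6 inches, matching the conversions cited in the answer.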
[ "Gun calibers have standardized around a few common sizes, especially in the larger range, mainly due to the uniformity required for efficient military logistics. Shells of 105 and 155 mm for artillery and 105mm and 120 mm for tank guns in NATO. Artillery shells of 122, 130 and 152 mm, and tank gun ammunition of 100, 115, or 125 mm caliber remain in use in Eastern Europe, Western Asia, Northern Africa, and Eastern Asia. Most common calibers have been in use for many years, since it is logistically complex to change the caliber of all guns and ammunition stores.\n", "Gunsmiths and armament companies also employed the -inch line (the \"decimal line\"), in part owing to the importance of the German and Russian arms industries. These are now given in terms of millimeters, but the seemingly arbitrary 7.62 mm caliber was originally understood as a 3-line caliber (as with the 1891 Mosin–Nagant rifle). The 12.7 mm caliber used by the M2 Browning machine gun was similarly a 5-line caliber.\n", "The following table lists some of the commonly used calibers where both metric and US customary are used as equivalents. Due to variations in naming conventions, and the whims of the cartridge manufacturers, bullet diameters can vary widely from the diameter implied by the name. For example, a difference of 0.045 in (1.15 mm) occurs between the smallest and largest of the several cartridges designated as \".38 caliber\". \n", "The 1960s ushered a new generation of assault rifles with the introduction of smaller calibers. U.S. military analysis of combat during the Second World War showed that a greater volume of fire at shorter ranges was more significant than long range accuracy. They decided that a smaller caliber would be more effective in most conditions, because the soldier could carry more ammunition. In 1963, United States adopted the M16 Rifle and the smaller 5.56×45mm cartridge to replace the M14 Rifle and larger 7.62×51mm. 
In 1980, NATO adopted the 5.56mm as the standard issue rifle cartridge.\n", "The 7\"/44 caliber gun Mark 1 (spoken \"seven-inch-forty-four--caliber\") and 7\"/45 caliber gun Mark 2 (spoken \"seven-inch-forty-five--caliber\") were used for the secondary batteries of the United States Navy's last generation of pre-dreadnought battleships, the and . The caliber was considered, at the time, to be the largest caliber weapon suitable as a rapid-fire secondary gun because its shells were the heaviest that one man could handle alone.\n", "The standard calibers used by the world's militaries tend to follow worldwide trends. These trends have significantly changed during the centuries of firearm design and re-design. Muskets were normally chambered for large calibers, such as .50 or .59, with the theory that these large bullets caused the most damage.\n", "Historically, ammunition rounds designed in the United States were denoted by their caliber in inches (e.g., .45 Colt and .270 Winchester.) Two developments changed this tradition: the large preponderance of different cartridges using an identical caliber and the international arms trade bringing metric calibers to the United States. The former led to bullet diameter (rather than caliber) often being used to describe rounds to differentiate otherwise similar rounds. A good example is the .308 Winchester, which fires the same .30-caliber projectile as the .30-06 Springfield and the .300 Savage. Occasionally, the caliber is just a number close to the diameter of the bullet, like the .220 Swift, .223 Remington and .222 Remington Magnum, all of which actually have .22 caliber or bullets.\n" ]
if the majority of people are right handed, why does the fork go on the left when setting a table?
Because you want to be manipulating the sharp, dangerous, pointy knife with your dominant hand. Which is why the dinner knife is on the right, and that leaves the fork to be on the left.
[ "The fork may be used in the American style (in the left hand while cutting and in the right hand to pick up food) or the European Continental style (fork always in the left hand). (See Fork etiquette) The napkin should be left on the seat of a chair only when leaving temporarily. Upon leaving the table at the end of a meal, the napkin is placed loosely on the table to the left of the plate.\n", "Forks are sometimes designated as right or left. Here, the \"handedness\" is from the point of view of an observer facing upstream. For instance, Steer Creek has a left tributary which is called Right Fork Steer Creek.\n", "In much of the world, pointing with the index finger is considered rude or disrespectful, especially pointing to a person. Pointing with the left hand is taboo in some cultures. Pointing with an open hand is considered more polite or respectful in some contexts. In Nicaragua, pointing is frequently done with the lips in a \"kiss shape\" directed towards the object of attention.\n", "The right hand rule is in widespread use in physics. A list of physical quantities whose directions are related by the right-hand rule is given below. (Some of these are related only indirectly to cross products, and use the second form.)\n", "Proper right and proper left are conceptual terms used to unambiguously convey relative direction when describing an image or other object. The \"proper right\" hand of a figure is the hand that would be regarded by that figure as its right hand. In a frontal representation, that appears on the left as the viewer sees it, creating the potential for ambiguity if the hand is just described as the \"right hand\".\n", "Pointing is a gesture specifying a direction from a person's body, usually indicating a location, person, event, thing or idea. It typically is formed by extending the arm, hand, and index finger, although it may be functionally similar to other hand gestures. 
Types of pointing may be subdivided according to the intention of the person, as well as by the linguistic function it serves. \n", "A left-handed individual may be known as a southpaw, particularly in a sports context. It is widely accepted that the term originated in the United States, in the game of baseball. Ballparks are often designed so that batters are facing east, so that the afternoon or evening sun does not shine in their eyes. This means that left-handed pitchers are throwing with their south-side arm. The \"Oxford English Dictionary\" lists a non-baseball citation for \"south paw\", meaning a punch with the left hand, as early as 1848, just three years after the first organized baseball game, with the note \"(orig. U.S., in Baseball).\" A left-handed advantage in sports can be significant and even decisive, but this advantage usually results from a left-handed competitor's unshared familiarity with opposite-handed opponents. Baseball is an exception since batters, pitchers, and fielders in certain scenarios are physically advantaged or disadvantaged by their handedness. Some baseball players like Christian Yelich of the Milwaukee Brewers bat left-handed and throw right-handed.\n" ]
why do some people leak pee when they sneeze?
The release of your bladder is controlled by muscles, and those muscles can, for a variety of reasons, be weakened. If they are weak, a sudden jolt like a sneeze can overwhelm them for a moment, releasing a small amount of pee.
[ "There is much debate about the true cause and mechanism of the sneezing fits brought about by the photic sneeze reflex. Sneezing occurs in response to irritation in the nasal cavity, which results in an afferent nerve fiber signal propagating through the ophthalmic and maxillary branches of the trigeminal nerve to the trigeminal nerve nuclei in the brainstem. The signal is interpreted in the trigeminal nerve nuclei, and an efferent nerve fiber signal goes to different parts of the body, such as mucous glands and the thoracic diaphragm, thus producing a sneeze. The most obvious difference between a normal sneeze and a photic sneeze is the stimulus: normal sneezes occur due to irritation in the nasal cavity, while the photic sneeze can result from a wide variety of stimuli. Some theories are below. There is also a genetic factor that increases the probability of photic sneeze reflex. The C allele on the rs10427255 SNP is particularly implicated in this although the mechanism is unknown by which this gene increases the probability of this response.\n", "Some people may sneeze during the initial phases of sexual arousal. Doctors suspect that the phenomenon might arise from a case of crossed wires in the autonomic nervous system, which regulates a number of functions in the body, including \"waking up\" the genitals during sexual arousal. The nose, like the genitals, contains erectile tissue. This phenomenon may prepare the vomeronasal organ for increased detection of pheromones.\n", "When sniffed, snuff often causes a sneeze, though this is often seen by snuff-takers as the sign of a beginner. This is not uncommon; however, the tendency to sneeze varies with the person and the particular snuff. Generally, drier snuffs are more likely to do this. For this reason, sellers of snuff often sell handkerchiefs. 
Slapstick comedy and cartoons have often made use of snuff's sneeze-inducing properties.\n", "Peeps (voiced by Richard McGonagle) is a giant floating eyeball who apparently runs the surveillance company that's named after him. Benson bought his products to keep Mordecai and Rigby from slacking off, but they manage to constantly evade him, causing Benson to accidentally summon him to the park to watch over everybody and as a result, he cannot leave until they die (due to the contract that Benson signed without even reading it). After spooking everyone with his gazes, Mordecai challenges him to a staring contest in which Peeps must leave if Mordecai wins, but if Peeps wins, he will harvest their eyes. However, Peeps cheats using numerous eyes but Rigby cheats back using a laser light that causes him to lose the staring contest, setting him on fire and crashing into the lake. He is blinded in the process, and is last seen taken to the hospital.\n", "In Japan, \"Tashiro\" is a slang word. Tashiro refers to acts of peeping and taking sneak shots. Origin of the term derives from Masashi Tashiro, a famous celebrity who was prosecuted for filming up a woman's skirt in addition to later being arrested for peeping through the bathroom window of a man's house.\n", "PEEP is a pressure that an exhalation has to bypass, in effect causing alveoli to remain open and not fully deflate. This mechanism for maintaining inflated alveoli helps increase partial pressure of oxygen in arterial blood, and an increase in PEEP increases the PaO.\n", "Sneezing typically occurs when foreign particles or sufficient external stimulants pass through the nasal hairs to reach the nasal mucosa. This triggers the release of histamines, which irritate the nerve cells in the nose, resulting in signals being sent to the brain to initiate the sneeze through the trigeminal nerve network. 
The brain then relates this initial signal, activates the pharyngeal and tracheal muscles and creates a large opening of the nasal and oral cavities, resulting in a powerful release of air and bioparticles. The powerful nature of a sneeze is attributed to its involvement of numerous organs of the upper body – it is a reflexive response involving the face, throat, and chest muscles.\n" ]
I'd want to understand how and why Scandinavia became Christianized.
I'll yield to better historians, but my understanding is that it had more to do with trade and politics than with natural spiritual inclinations. By the time it happened, Scandinavia had been in increasing contact with Christian Europe and needed commercial contacts. The era of plunder and conquest was ending as more of the Christian kingdoms became better defended against attack, so it became more politically expedient to join them than to beat them. This is not without precedent in history: both Christianity and Islam grew largely from pagan converts.
[ "The Christianization of Scandinavia, as well as other Nordic countries and the Baltic countries, took place between the 8th and the 12th centuries. The realms of Denmark, Norway and Sweden (Sweden is an 11th or 12th century merger of the former countries Götaland and Svealand) established their own Archdioceses, responsible directly to the Pope, in 1104, 1154 and 1164, respectively. The conversion to Christianity of the Scandinavian people required more time, since it took additional efforts to establish a network of churches. The Sami remained unconverted until the 18th century. Newer archaeological research suggests there were Christians in Götaland already during the 9th century, it is further believed Christianity came from the southwest and moved towards the north.\n", "The Christianization of Scandinavia started in the 8th century with the arrival of missionaries in Denmark and it was at least nominally complete by the 12th century, although the Samis remained unconverted until the 18th century. In fact, although the Scandinavians became nominally Christian, it would take considerably longer for actual Christian beliefs to establish themselves among the people. The old indigenous traditions that had provided security and structure since time immemorial were challenged by ideas that were unfamiliar, such as original sin, the Immaculate Conception, the Trinity and so forth. Archaeological excavations of burial sites on the island of Lovön near modern-day Stockholm have shown that the actual Christianization of the people was very slow and took at least 150–200 years, and this was a very central location in the Swedish kingdom. Thirteenth-century runic inscriptions from the bustling merchant town of Bergen in Norway show little Christian influence, and one of them appeals to a Valkyrie. 
At this time, enough knowledge of Norse mythology remained to be preserved in sources such as the Eddas in Iceland.\n", "The Nordic world first encountered Christianity through its settlements in the (already Christian) British Isles and through trade contacts with the eastern Christians in Novgorod and Byzantium. By the time Christianity arrived in Scandinavia it was already the accepted religion across most of Europe. It is not well understood how the Christian institutions converted these Scandinavian settlers, in part due to a lack of textual descriptions of this conversion process equivalent to Bede's description of the earlier Anglo-Saxon conversion. However, it appears that the Scandinavian migrants had converted to Christianity within the first few decades of their arrival. After Christian missionaries from the British Isles—including figures like St Willibrord, St Boniface, and Willehad—had travelled to parts of northern Europe in the eighth century, Charlemagne pushed for Christianisation in Denmark, with Ebbo of Rheims, Halitgar of Cambrai, and Willeric of Bremen proselytizing in the kingdom during the ninth century. The Danish king Harald Klak converted (826), likely to secure his political alliance with Louis the Pious against his rivals for the throne. The Danish monarchy reverted to Old Norse religion under Horik II (854 – c. 867).\n", "Christianity in Scandinavia came later than most parts of Europe. In Denmark Harald Bluetooth Christianized the country around 980. The process of Christianization began in Norway during the reigns of Olaf Tryggvason (reigned 995 AD–c.1000 AD) and Olaf II Haraldsson (reigned 1015 AD–1030 AD). Olaf and Olaf II had been baptized voluntarily outside of Norway. Olaf II managed to bring English clergy to his country. Norway's conversion from the Norse religion to Christianity was mostly the result of English missionaries. 
As a result of the adoption of Christianity by the monarchy and eventually the entirety of the country, traditional shamanistic practices were marginalized and eventually persecuted. Völvas, practitioners of seid, a Scandinavian pre-Christian tradition, were executed or exiled under newly Christianized governments in the eleventh and twelfth centuries.\n", "Scandinavian individuals came into contact with Christianity already before the fall of the Roman Empire, but historian Ian N. Wood writes that the \"Christianisation of Scandinavia took the Church into relatively unknown areas\". According to Alcuin, an Anglo-Saxon monk, Willibrord, who had proselytized among the Frisians, tried to convert Ongendus, King of the Danes, in the early , but failed. From the 820s, the Frankish monarchs tried to take advantage of internal strifes to increase their influence in Denmark. After being dethroned and exiled from Denmark, King Harald Klak sought refugee in the Carolingian Empire and agreed to be baptised in 826. Harald Klak returned to Denmark, accompanied by Ansgar, a Frankish monk from the Corbie Abbey. During the next two years, Ansgar carried out missionary activities in Denmark. He even bought young boys to teach them for missionary work. However, Harald Klak was again dethroned in 827, and Ansgar left Denmark. \n", "Anders Winroth, in his book \"The Conversion of Scandinavia\", tries to make sense of this problem by arguing that there was a \"long process of assimilation, in which the Scandinavians adopted, one by one and over time, individual Christian practices.\" Winroth certainly does not say that Olaf was not Christian, but he argues that we cannot think of any Scandinavians as quickly converting in a full way as portrayed in the later hagiographies or sagas. 
Olaf himself is portrayed in later sources as a saintly miracle-working figure to help support this quick view of conversion for Norway, although the historical Olaf did not act this way, as seen especially in the skaldic verses attributed to him.\n", "While some Swedish areas had Christian minorities in the 9th century, Sweden was, because of its geographical location in northernmost Europe, not Christianized until around AD 1000, around the same time as the other Nordic countries, when the Swedish King Olof was baptized. This left only a modest gap between the Christianization of Scandinavia and the Great Schism, however there are some Scandinavian/Swedish saints who are venerated eagerly by many Orthodox Christians, such as St. Olaf. However, Norse paganism and other pre-Christian religious systems survived in the territory of what is now Sweden later than that; for instance the important religious center known as the Temple at Uppsala at Gamla Uppsala was evidently still in use in the late 11th century, while there was little effort to introduce the Sami of Lapland to Christianity until considerably after that.\n" ]
I discovered this seemingly well-researched video on Christopher Columbus, and why he wasn't as bad as everyone thinks he was. How accurate is it?
I suspect everyone who watches this will see different things. I'll defer to experts on Columbus regarding the content in the first 2/3 but can point to some red flags in the last third related to how he talks about the people indigenous to North America that make me deeply suspicious of his work. When assessing the accuracy of someone's historical claims, it's helpful to start with how they frame issues. How he talks about "genocide" is an indicator that his work may not be accurate or trustworthy. His suggestion that it's a simple linguistic issue regarding intent, and not a complicated matter that speaks to power, colonization, and patterns, ignores volumes of writing, especially by Indigenous authors and historians. [Parenthetical note that Zimmerman wasn't found "innocent." The jury returned not guilty verdicts on all counts.] [This](_URL_0_) explores the different arguments about the use of the word, which, despite 6 minutes of earnest talking-into-the-camera by what appears to be a Columbus truther, cannot be simplified into a yes/no question. That said, the creator of the term "genocide" cited European interactions with North American Indigenous people as an example of the term. From the piece linked above: > Lemkin applied the term to a wide range of cases including many involving European colonial projects in Africa, New Zealand, Australia, and the Americas. A recent investigation of an unfinished manuscript for a global history of genocide Lemkin was writing in the late 1940s and early 1950s reveals an expansive view of what Lemkin termed a “Spanish colonial genocide.” He never began work on a projected chapter on “The Indians of North America,” though his notes indicate that he was researching Indian removal, treaties, the California gold rush, and the Plains wars. The second red flag is how he presents the words and images of Native Americans.
Saying it's "weird" to hate on Columbus immediately after showing images of Native Americans expressing their opinions about the man is troublesome. More to the point, I feel confident in concluding he did little or no research on the history of renaming the holiday, or if he did, elected to ignore what he found in order to advance his central claim. Given he establishes his ancestors didn't immigrate to America until the 20th century, he's clearly not speaking as an Indigenous person. (Which isn't required for writing about Native American history, but double-checking and researching statements when writing about historically marginalized groups is basic decency and good scholarship. And his statements wouldn't be less troublesome were he Indigenous, but a native identity would shed a different light on how he uses Native Americans' words.) Had he researched the movement, he would have easily discovered that the efforts to rename the holiday came from Indigenous people and that they explicitly picked the date as a way to draw attention to their [actions](_URL_1_). He also would have discovered there is an [International Day of the World’s Indigenous People](_URL_2_) on August 9th. In effect, the Indigenous activists working to rename the date are using Columbus as a proxy for the colonization of their ancestral lands by Europeans. None of the other "worse" men that he mentioned has a day that's recognized as a federal holiday. Finally, Columbus didn't "discover" America. Every time he repeats that, even when saying it's untrue, he's undercutting any historical bona fides he may have earned earlier in the video. And no. We don't need to talk about how "primitive or not primitive" Native Americans were. Note: I just watched about ten minutes of the video he cites as his source for "Native American Genocide" which contains not only terrible history practices but straight-up racism. Which doesn't bode well for the rest of the history in his video.
[ "The most iconic image of Columbus is a portrait by Sebastiano del Piombo, which has been reproduced in many textbooks. It agrees with descriptions of Columbus in that it shows a large man with auburn hair, but the painting dates from 1519 and cannot, therefore, have been painted from life. Furthermore, the inscription identifying the subject as Columbus was probably added later, and the face shown differs from other images, including that of the \"Virgin of the Navigators.\"\n", "There is no record of any such proceedings. Moreover, the \"Life of Columbus\" by his son, who surely possessed Columbus' journal. is strangely lacking in references to Juan de la Cosa by name. Even in the shipwrecking incident, the son reports only that\n", "The film was made by the largest Spanish studio CIFESA. The production was conceived as a response to the 1949 British film \"Christopher Columbus\". The British film had attempted a realist depiction of Columbus (portraying him as only partly successful, and his achievements being in spite of the Spanish monarchy). The Spanish response portrayed Columbus as a single-minded adventurer whose discovery led to the greater glory of the Spanish monarchy and the Catholic Church.\n", "Another doubt remains to be settled: can we be sure that all of the documents cited concern the Christopher Columbus who was later to become \"Cristóbal Colón\", admiral of the Ocean Sea in Spanish territory? The list of contemporary ambassadors and historians unanimous in the belief that Columbus was Genoese could suffice as proof, but there is something more: a document dated 22 September 1470 in which the criminal judge convicts Domenico Colombo. The conviction is tied to the debt of Domenico — together with his son Christopher (explicitly stated in the document) — toward a certain Girolamo del Porto. 
In the will dictated by Admiral Christopher Columbus in Valladolid before he died, the authentic and indisputable document which we have today, the dying navigator remembers this old debt, which had evidently not been paid. There is, in addition, the act drawn in Genoa on 25 August 1479 by a notary, Girolamo Ventimiglia. This act is known as the \"Assereto document\", after the scholar who found it in the State Archives in Genoa in 1904. It involves a lawsuit over a sugar transaction on the Atlantic island Madeira. In it, young Christopher swore that he was a 27-year-old Genoese citizen resident in Portugal and had been hired to represent the Genoese merchants in that transaction. Here was proof that he had relocated to Portugal. It is important to bear in mind that at the time when Assereto traced the document, it would have been impossible to make an acceptable facsimile. Nowadays, with modern chemical processes, a document can be \"manufactured\", made to look centuries old if need be, with such skill that it may be difficult to prove it is a fake. In 1960, this was still impossible.\n", "Journalist and media critic Norman Solomon reflects, in \"Columbus Day: A Clash of Myth and History\", that many people choose to hold on to the myths surrounding Columbus. He quotes from the logbook Columbus's initial description of the American Indians: \"They do not bear arms, and do not know them, for I showed them a sword, they took it by the edge and cut themselves out of ignorance... They would make fine servants... With 50 men we could subjugate them all and make them do whatever we want.\" Solomon states that the most important contemporary documentary evidence is the multi-volume \"History of the Indies\" by the Catholic priest Bartolomé de las Casas, who observed the region where Columbus was governor. 
In contrast to \"the myth,\" Solomon quotes Las Casas, who describes Spaniards driven by \"insatiable greed\"—\"killing, terrorizing, afflicting, and torturing the native peoples\" with \"the strangest and most varied new methods of cruelty\" and how systematic violence was aimed at preventing \"[American] Indians from daring to think of themselves as human beings.\" The Spaniards \"thought nothing of knifing [American] Indians by tens and twenties and of cutting slices off them to test the sharpness of their blades,\" wrote Las Casas. \"My eyes have seen these acts so foreign to human nature, and now I tremble as I write.\"\n", "Nonetheless, much of the testimony in the \"pleitos colombinos\", as well as part of the specialized historiography and investigators, does not agree that these things happened in this manner, nor is there any accusation against Pinzón in Columbus's Letter on the First Voyage, which Columbus wrote during his return.\n", "Columbus on Trial is a film directed by Lourdes Portillo in 1992. The 18-minute film, acted and co-written by the comic trio Culture Clash, acts out a simulated trial that Christopher Columbus is put in as they lived through the 500th anniversary of his discovery. The film commences by portraying a variety of journalists and reporters questioning Columbus's motives. They bombard him with all the questions that have arisen over the previous 500 years in regards to the controversy of whether he really did discover the New World for the betterment of the people. This meaning that he introduced European customs and beliefs as a way to improve the lives of the natives already residing in it, or whether he simply invaded these territories in order to impose his own culture and destroy theirs. In this film specifically, Portillo depicts Columbus as a man charged with slaughter against the natives living in the New World. With this film, Portillo supports the reconsideration of “official history”.\n" ]
topology/topological manifold
Topology: the study of which properties of a space/surface survive continuous deformation (i.e. no rips/tears and no gluing); stretching and bending are allowed, cutting and pasting are not. It also refers to the rules governing a specific topological space or manifold. Topological Manifold: a space that near every point looks like ordinary flat n-dimensional space; a circle is a one-dimensional example, and surfaces like the sphere and the torus are two-dimensional examples. (Extra structure such as a metric is an add-on, not part of the topological manifold itself.) For example, the surface of a doughnut and the surface of a coffee mug are topologically the same, since each has exactly one hole, and one can be continuously deformed into the other. A topologist would then look at what survives the transformation: distances between points and the size of a circle drawn on the surface can change freely, but properties like the number of holes and whether the space is connected cannot.
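As a concrete (and purely illustrative) sketch, not part of the original answer: one deformation-proof property, the number of holes, can be read off from the Euler characteristic χ = V − E + F of a triangulated surface. The triangulations below are standard textbook examples (a tetrahedron for the sphere, the minimal 7-vertex triangulation for the torus); the function names are my own.

```python
def euler_characteristic(v: int, e: int, f: int) -> int:
    """chi = V - E + F for a triangulated closed surface.

    This number is a topological invariant: continuously deforming the
    surface (no tearing, no gluing) can never change it, even though
    all distances and angles can.
    """
    return v - e + f


def genus(chi: int) -> int:
    """Number of holes of a closed orientable surface: chi = 2 - 2g."""
    return (2 - chi) // 2


# Sphere, triangulated as a tetrahedron: 4 vertices, 6 edges, 4 faces.
chi_sphere = euler_characteristic(4, 6, 4)   # -> 2, genus 0: no holes

# Torus (the doughnut/coffee-mug surface), minimal triangulation:
# 7 vertices, 21 edges, 14 faces.
chi_torus = euler_characteristic(7, 21, 14)  # -> 0, genus 1: one hole

print(chi_sphere, genus(chi_sphere))  # 2 0
print(chi_torus, genus(chi_torus))    # 0 1
```

Any other triangulation of the same surface, however fine, gives the same χ, which is exactly the sense in which the doughnut and the mug are "the same" to a topologist.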
[ "While topological spaces can be extremely varied and exotic, many areas of topology focus on the more familiar class of spaces known as manifolds. A \"manifold\" is a topological space that resembles Euclidean space near each point. More precisely, each point of an -dimensional manifold has a neighborhood that is homeomorphic to the Euclidean space of dimension . Lines and circles, but not figure eights, are one-dimensional manifolds. Two-dimensional manifolds are also called surfaces, although not all surfaces are manifolds. Examples include the plane, the sphere, and the torus, which can all be realized without self-intersection in three dimensions, and the Klein bottle and real projective plane, which cannot (that is, all their realizations are surfaces that are not manifolds).\n", "In mathematics, low-dimensional topology is the branch of topology that studies manifolds, or more generally topological spaces, of four or fewer dimensions. Representative topics are the structure theory of 3-manifolds and 4-manifolds, knot theory, and braid groups. This can be regarded as a part of geometric topology. It may also be used to refer to the study of topological spaces of dimension 1, though this is more typically considered part of continuum theory.\n", "Geometric topology is a branch of topology that primarily focuses on low-dimensional manifolds (i.e. spaces of dimensions 2, 3, and 4) and their interaction with geometry, but it also includes some higher-dimensional topology. Some examples of topics in geometric topology are orientability, handle decompositions, local flatness, crumpling and the planar and higher-dimensional Schönflies theorem.\n", "In topology, a branch of mathematics, a topological manifold is a topological space (which may also be a separated space) which locally resembles real \"n\"-dimensional space in a sense defined below. Topological manifolds form an important class of topological spaces with applications throughout mathematics. 
All manifolds are topological manifolds by definition, but many manifolds may be equipped with additional structure (e.g. differentiable manifolds are topological manifolds equipped with a differential structure). When the phrase \"topological manifold\" is used, it is usually done to emphasize that the manifold does not have any additional structure, or that only the \"underlying\" topological manifold is being considered. Every manifold has an \"underlying\" topological manifold, gotten by simply \"forgetting\" any additional structure the manifold has.\n", "In mathematics, differential topology is the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds.\n", "In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence. Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a central unifying notion and appear in virtually every branch of modern mathematics. The branch of mathematics that studies topological spaces in their own right is called point-set topology or general topology.\n", "BULLET::::- Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings. In practice, this often means dealing with large-scale properties of spaces, such as connectedness and compactness.\n" ]
When did the concept of "refugees" arise? It seems that in the past if your country was at war and you were a male of fighting age you would stay. When did men start leaving their country's conflicts? Is this a modern concept or are there examples of this happening throughout history?
This depends on your definition of 'refugees'. In 1951, a convention was held in Geneva to give an official definition to the term, and thenceforth it was possible to declare whether a person was a refugee or not. [source: [UNHCR official site](_URL_0_) ] However, before that there were already large population movements caused by war, famine and other forms of destruction which made peoples' homelands inhospitable to them. In China, one of the earliest records of such a large-scale emigration dates to the Spring and Autumn period, when the Yue 越 king Gou Jian 勾践 destroyed the Wu 吴 kingdom. Due to the demeaning treatment that he had suffered under the Wu king previously, Gou Jian was determined to eliminate Wu utterly. Therefore the Wu people were forced to cross the sea to the eastern islands, which are now modern-day Japan. Later contact between the Han dynasty and the Japanese islands records that the Wa 倭 people claimed direct descent from king Taibo 泰伯 of Wu, and often spoke with a Wu accent and adhered to Wu customs, further supporting the theory that they were former refugees of the Chinese Wu. [source: *the Book of Han* 汉书, *Discourse on Balance* 论衡]
[ "As the war ended, these people found themselves facing an uncertain future. Allied military and civilian authorities faced considerable challenges resettling them. Since the reasons for displacement varied considerably, the Supreme Headquarters Allied Expeditionary Force classified individuals into a number of categories: evacuees, war or political refugees, political prisoners, forced or voluntary workers, Organisation Todt workers, former forces under German command, deportees, intruded persons, extruded persons, civilian internees, ex-prisoners of war, and stateless persons.\n", "The first modern definition of international refugee status came about under the League of Nations in 1921 from the Commission for Refugees. Following World War II, and in response to the large numbers of people fleeing Eastern Europe, the UN 1951 Refugee Convention adopted (in Article 1.A.2) the following definition of \"refugee\" to apply to any person who: \"owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence as a result of such events, is unable or, owing to such fear, is unwilling to return to it.\"\n", "After the Second World War, some British soldiers are guarding a theatre in Germany containing various refugees and prisoners trying to work out what to do with them. However, the displaced people, after uniting against fascism for five years, begin to disintegrate into their own ancient feuds: Serb against Croat, Pole against Russian, resistance fighter against collaborator and everyone against the Jews. Two people, Jan and Lily, begin a romance and decide to wed. 
However, one of the refugees is diagnosed with bubonic plague.\n", "The word evacuation or \"evakuatsiia\" in 1941 was a somewhat new word that some described as \"terrible and unaccustomed\". For others, it was simply not used. \"Refugee\" or \"bezhenets\" was far too familiar given the country's history of war. During World War II refugee was replaced by evacuees.The shift in wording showed the government's resignation to the displacement of its citizens. The reasons for controlling the displaced population varied. Despite some preferring to consider themselves evacuees the term referred to different individuals. Some were of the “privileged elite\" class. Those who fell under this category were scientists, specialized workers, artists, writers and politicians. These elite individuals were evacuated to the rear of the country. The other portion of the evacuated were met with a suspicious eye. The evacuation process despite the Soviets best efforts, was far from organized. The state considered the majority of those heading east as suspicious. Since a large majority of the population were self evacuees they had not been assigned a location for displacement. Officials feared the disorder made it easy for deserters to flee. Evacuees who did not fall under the “privileged elite” title were are also suspected of potentially contaminating the rest of the population both epidemically and ideologically.\n", "BULLET::::- The term displaced person (DP) was first widely used during World War II and the resulting refugee outflows from Eastern Europe, when it was used to specifically refer to one removed from their native country as a refugee, prisoner or a slave laborer. Most of the victims of war, political refugees and DPs of the immediate post-Second World War period were Ukrainians, Poles, other Slavs, as well as citizens of the Baltic states – Lithuanians, Latvians, and Estonians, who refused to return to Soviet-dominated eastern Europe. A.J. 
Jaffe claimed that the term was originally coined by Eugene M. Kulischer. The meaning has significantly broadened in the past half-century.\n", "The term \"refugee\" sometime applies to people who might fit the definition outlined by the 1951 Convention, were it applied retroactively. There are many candidates. For example, after the Edict of Fontainebleau in 1685 outlawed Protestantism in France, hundreds of thousands of Huguenots fled to England, the Netherlands, Switzerland, South Africa, Germany and Prussia. The repeated waves of pogroms that swept Eastern Europe in the 19th and early 20th centuries prompted mass Jewish emigration (more than 2 million Russian Jews emigrated in the period 1881–1920). Beginning in the 19th century, Muslim people emigrated to Turkey from Europe. The Balkan Wars of 1912–1913 caused 800,000 people to leave their homes. Various groups of people were officially designated refugees beginning in World War I.\n", "The conflict and political instability during World War II led to massive numbers of refugees (see World War II evacuation and expulsion). In 1943, the Allies created the United Nations Relief and Rehabilitation Administration (UNRRA) to provide aid to areas liberated from Axis powers, including parts of Europe and China. By the end of the War, Europe had more than 40 million refugees. UNRRA was involved in returning over seven million refugees, then commonly referred to as displaced persons or DPs, to their country of origin and setting up displaced persons camps for one million refugees who refused to be repatriated. Even two years after the end of War, some 850,000 people still lived in DP camps across Western Europe. DP Camps in Europe Intro, from: \"DPs Europe's Displaced Persons, 1945–1951\" by Mark Wyman After the establishment of Israel in 1948, Israel accepted more than 650,000 refugees by 1950. By 1953, over 250,000 refugees were still in Europe, most of them old, infirm, crippled, or otherwise disabled.\n" ]
In the US Civil War, how realistic is the idea that if the south had won a decisive military victory like capturing Washington, Britain and France could have been tempted to intervene on the side of the south, possibly causing a peace settlement?
Well, the idea that the Confederacy could ever have successfully captured Washington by storm or siege is utterly unrealistic. By 1862 Washington was likely one of the most heavily defended cities in the world; by 1864 it was the most heavily fortified city in the world. Any attempt to take it by storm would have resulted in wholesale slaughter akin to Grant's Cold Harbor battle, but in reverse, and likely much more one-sided. Besieging it would have been impossible, since it could be resupplied by sea if absolutely necessary and the Confederates had no way of stopping such an avenue of resupply even assuming they could encircle the city. But I digress; your main point is about the idea of foreign intervention, assuming a hypothetical Southern victory on Northern soil. This would greatly depend on what year you're talking about. If 1862, it's possible: France did want to intervene on the side of the Confederates and intimated as much to the diplomats Davis sent. However, they were unwilling to act without a British declaration of war, and it is difficult to gauge how likely such an intervention ever was. Had the Confederates won at Antietam it is possible, but following that defeat they never had a realistic chance of achieving foreign support until victory for the Confederacy was certain, and by that point what need would they have for such support? You might be wondering why I don't include Gettysburg or anything following 1862 in my assessment, and that's rather simple. Had the Confederacy won at Gettysburg, consider the situation on July 4: yes, you've just won a "victory", but you've also lost 25,000 men, you're still outnumbered, and you can't possibly assail Northern cities. To the north you're blocked by a river and the difficulties of crossing such a river. To the south you have only Washington, and if you couldn't besiege it when you had 75,000 men you certainly can't now. You've lost a third of your army and remain deep in enemy territory; now what?
Wait and hope the Union, after two and a half years, just gives in? Lincoln wouldn't have, and he was still president no matter the public opinion. Oh, and about that: your victory is about to be tempered by the fact that on the same day you won, Vicksburg fell, some 35,000 Confederates surrendered, and the South's armies out in the West were in full retreat. So from your own perspective, or that of Union newspapers hysterical at the loss and at the mystique of Lee, the situation may not seem to change. But in the eyes of the world you won a major battle and then lost a major battle, and in doing so lost 60,000 men that you could not afford to lose. Public opinion in England was never high; why would it change now that you just went 1-1 in major battles? From their perspective, whatever you achieved was offset by what Grant had just done, and in a purely strategic sense, to the military minds of England, Grant's victory was far more significant. Everyone in the South knew it too, and Lee's loss, far from being the injury, was merely the salt in the wound. It's also uncertain whether immediate intervention by the British or French would have caused an immediate peace settlement. It takes time to mobilize your forces and get ready for a war; the U.S., even after a loss at Gettysburg, would have had a six-month window before having to worry about British or French troops. In that time the South's fortunes out west got worse, not better. Ultimately, even as early as 1862 such an idea was relatively unlikely, by 1863 it was far-fetched, and by 1864 it was bordering on delusional.
[ "There were secondary reasons as well. The Confederate invasion might be able to incite an uprising in Maryland, especially given that it was a slave-holding state and many of its citizens held a sympathetic stance toward the South. Some Confederate politicians, including Jefferson Davis, believed the prospect of foreign recognition for the Confederacy would be made stronger by a military victory on Northern soil, but there is no evidence that Lee thought the South should base its military plans on this possibility. Nevertheless, the news of the victory at Second Bull Run and the start of Lee's invasion caused considerable diplomatic activity between the Confederate States and France and the United Kingdom.\n", "In 1864, pressure mounted for both sides to seek a peace settlement to end the long and devastating Civil War. Several people had sought to broker a North–South peace treaty in 1864. Francis Preston Blair, a personal friend of both Abraham Lincoln and Jefferson Davis, had unsuccessfully encouraged Lincoln to make a diplomatic visit to Richmond. Blair had advocated to Lincoln that the war could be brought to a close by having the two opposing sections of the nation stand down in their conflict, and reunite on grounds of the Monroe Doctrine in attacking the French-installed Emperor Maximilian in Mexico. Lincoln asked Blair to wait until Savannah had been captured.\n", "BULLET::::- In the alternate history novella \"Bring the Jubilee\" by Ward Moore, a Confederate victory in the War of Southron Independence is generally disastrous for the United States. 
In domestic politics, it results in rampant political corruption and the replacement of the Democratic and Republican parties with the right-leaning Whigs and the left-leaning Populists; the former accepts the status quo, wishing to turn the United States into neo-colony for the world's great powers, while the latter wish to undo the harsher aspects of the US economy such as indentures and the clauses of the 1864 Treaty of Reading, the American-Confederate peace agreement. The Whig candidate for the 1940 Presidential Election is Thomas E. Dewey who defeats his Populist rival Jennings Lewis; however, due to political corruption, the presidency has diminished in power in comparison to the House Majority Speaker.\n", "The American army in the South would make a decisive comeback under Gen. Nathanael Greene. His professional army, in cooperation with partisans, over the next two years drove the British from the South and started the string of events that directly resulted in the decisive Franco-American victory at Yorktown. Elijah Clarke and other Georgia frontiersmen played significant roles in those battles and campaigns. The former Wilkes County militiamen who had served under John Dooly participated in the major victory at King's Mountain and played critical roles in the American success at the Battle of Cowpens. Emistisiguo's fate also became intertwined with the final days of the Revolution. He had led warriors in attacks on settlers in modern Kentucky and Tennessee who had come to the aid of the American cause at King's Mountain and in Wilkes County. On July 24, 1782, in his final act for his British patrons, he died in hand-to-hand combat with Gen. Anthony Wayne while leading a Creek war party and Loyalists in a desperate but successful effort to break through to the garrison at Savannah. 
The Creek headman thus joined John Dooly and so many other leaders of their conflicted and conflicting societies in failing to survive the war.(n57)\n", "Union Army General William Tecumseh Sherman's 'March to the Sea' in November and December 1864 destroyed the resources required for the South to make war. General Ulysses S. Grant and President Abraham Lincoln initially opposed the plan until Sherman convinced them of its necessity.\n", "To counter this strategy, Southern generals attempted to deprive the North of the use of these vital rivers. Lacking a powerful naval arm to challenge Federal forces afloat, the South resorted to guerrilla raids and cavalry forays against Union bases along the riverbanks and on supply ships which plied the rivers bringing Mr. Lincoln's soldiers food, clothing, ammunition, and the other necessities of war.\n", "A stepped-up war effort that year brought about some successes such as the burning of Washington, D.C., but the Duke of Wellington argued that an outright victory over the U.S. was impossible because the Americans controlled the western Great Lakes and had destroyed the power of Britain's Indian allies. A full-scale British invasion was defeated in upstate New York. Peace was agreed to at the end of 1814, but unaware of this, Andrew Jackson won a great victory over the British at the Battle of New Orleans in January 1815 (news took several weeks to cross the Atlantic before the advent of steam ships). The Treaty of Ghent subsequently ended the war with no territorial changes. It was the last war between Britain and the United States.\n" ]
Violations of the equivalence principle?
> Is this credible? Yes, quite. It's a nice paper. > What does it mean if the equivalence principle really is violated? Absolutely nothing. The equivalence principle rests on the principle of locality, and it holds whenever that principle is in effect. But the principle of locality is an approximation; it's violated by certain phenomena in both ordinary "first quantisation" mechanics and in "second quantisation" field theory. If locality doesn't hold, the equivalence principle doesn't either … which is less a *violation* of equivalence than it is a demonstration of the fact that equivalence depends on locality, which we knew already. No, the interesting thing about this paper isn't that equivalence is violated when locality is violated. The interesting thing is that it's possible to *restore equivalence* even without locality. As the gravitational field gets stronger — that is, as you get closer to the event horizon of a black hole *that's only present in the paper to be the source of a gravitational field of arbitrary strength so please let's not turn this into another godawful black hole party* — the apparent violation of equivalence vanishes. *That's* interesting, and serves as yet more evidence in favour of the notion that quantum field theory and general relativity already, separately, comprise a complete quantum theory of gravity; we just have to work out the details. Insultingly condescending summary: The violation of equivalence is expected. The *restoration* of equivalence in the strong-field limit isn't expected, and comes as a pleasant surprise.
[ "Equivalence allows for simplifying the constraint store by replacing some constraints with simpler ones; in particular, if the third constraint in an equivalence rule is codice_93, and the second constraint is entailed, the first constraint is removed from the constraint store. Inference allows for the addition of new constraints, which may lead to proving inconsistency of the constraint store, and may generally reduce the amount of search needed to establish its satisfiability.\n", "The above-mentioned variants of the equivalence principle aim to guarantee the transition of General Relativity to Special Relativity in a certain reference frame. However, only the particular \"weakest\" and \"weak\" equivalence principles are true. \n", "The equivalence principle is one of the corner-stones of gravitation theory. Different formulations of the equivalence principle are labeled \"weakest\", \"weak\", \"middle-strong\" and \"strong.\" All of these formulations are based on the empirical equality of inertial mass, gravitational active and passive charges.\n", "The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way, and that the effects of gravity are indistinguishable from certain aspects of acceleration and deceleration. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces (such as air resistance and electromagnetic effects) are negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are planned for more accurate experiments in space.\n", "The \"rule of equivalence\" is verified when the code behavior matches the original concept. 
This equivalence may break down in many cases. Integer overflow breaks the equivalence between the mathematical integer concept and the computerized approximation of the concept.\n", "The equivalence principle was properly introduced by Albert Einstein in 1907, when he observed that the acceleration of bodies towards the center of the Earth at a rate of 1\"\"g\"\" (\"g\" = 9.81 m/s being a standard reference of gravitational acceleration at the Earth's surface) is equivalent to the acceleration of an inertially moving body that would be observed on a rocket in free space being accelerated at a rate of 1\"g\". Einstein stated it thus:\n", "The optical equivalence theorem in quantum optics asserts an equivalence between the expectation value of an operator in Hilbert space and the expectation value of its associated function in the phase space formulation with respect to a quasiprobability distribution. The theorem was first reported by George Sudarshan in 1963 for normally ordered operators and generalized later that decade to any ordering. \n" ]
how come most 3d games render at 60 fps while it takes a few seconds to render a textureless cube in blender?
There are two ways to render graphics on the screen. [Rasterization](_URL_0_) - Which is used in video games. The very simple version of this is the world is made up of triangles and all you have to do is figure out if a triangle is visible and if a pixel is in the triangle or not. Many graphics cards have electronics designed to do this over and over again very quickly. Compare this with... [Ray Tracing](_URL_3_) - Which is used for static 3D rendering. This **calculates the path light travels for every pixel on your screen** back to the light source. The objects don't have to be triangles and are often expressed as mathematical solids. This gives you, for all intents and purposes, unlimited resolution and detail depending on how much time and CPU power you want to throw at it. Because ray tracers use complex math, the CPU brute-forces the tracing calculations. In fact, with Blender, when doing ray tracing, you don't use any of the 3D capability of your graphics card at all. Upshot: Rasterizer: "Hey 3D card, draw and fill the 532 triangles that make up this [icosahedron](_URL_2_) and texture it to make it look like a sphere." Ray Tracer: "Hey CPU, calculate how light will reflect on a [sphere](_URL_1_) of volume 4/3*πr^3." One takes much more time than the other, but it also makes it much more realistic at infinite scales.
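To make the "calculate the path light travels" step concrete, here is a minimal sketch of the core test a ray tracer runs for every pixel: does a ray fired from the camera hit a mathematically defined sphere? This is an illustrative example only, not Blender's actual code; the function name and the specific numbers are made up for the demo.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    i.e. the quadratic a*t^2 + b*t + c = 0 built below.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    # Vector from the sphere's center to the ray's origin
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0 else None
```

A ray tracer repeats a test like this (against every object in the scene, then recursively for reflections and shadows) for each of the millions of pixels in a frame, which is why it is so much slower than handing a pile of triangles to a rasterizing graphics card.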
[ "Processing of 3D graphics is computationally expensive, especially in real-time games, and poses multiple limits. Levels have to be processed at tremendous speeds, making it difficult to render vast skyscapes in real-time. Additionally, real-time graphics generally have depth buffers with limited bit-depth, which puts a limit on the amount of details that can be rendered at a distance.\n", "In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in realtime. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.\n", "Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as possible as the eye can process in a fraction of a second (a.k.a. \"in one frame\": In the case of a 30 frame-per-second animation, a frame encompasses one 30th of a second).\n", "The advantage of pre-rendering is the ability to use graphic models that are more complex and computationally intensive than those that can be rendered in real-time, due to the possibility of using multiple computers over extended periods of time to render the end results. For instance, a comparison could be drawn between rail-shooters \"Maximum Force\" (which used pre-rendered 3D levels but 2D sprites for enemies) and \"Virtua Cop\" (using 3D polygons); \"Maximum Force\" was more realistic looking due to the limitations of \"Virtua Cop's\" 3D engine, but \"Virtua Cop\" has actual depth (able to portray enemies close and far away, along with body-specific hits and multiple hits) compared to the limits of the 2D sprite enemies in \"Maximum Force\".\n", "Vaa3D is able to render 3D, 4D, and 5D data (X,Y,Z,Color,Time) quickly. 
The volume rendering is typically at the scale of a few gigabytes and can be extended to the scale of terabytes per image set. The visualization is made fast by using OpenGL directly.\n", "It was the first game to feature high resolution 3D texture mapping, a feature which was not seen on other platforms until the Dreamcast over three years later. \"Rave Racer\" ran at a resolution of 640x480 and 60 frames per second.\n", "When performing basic 3D-rendering with only texture mapping and no other advanced features, ViRGE's pixel throughput was somewhat faster than the best software-optimized (host-based CPU) 3D-rendering of the era, and with better (16bpp) color fidelity. But when additional rendering operations were added to the polygon load (such as perspective-correction, Z-depth fogging, and bilinear filtering), rendering throughput dropped to the speed of software-based rendering on an entry-level CPU. 3D-rendering on the high-end VRAM based ViRGE/VX (988) was even slower than the less expensive ViRGE/325, due to the VX's slower core and memory clock rates. The upgraded ViRGE/DX and ViRGE/GX models did improve 3D rendering performance, but by the time of their introduction they were still unable to distinguish the ViRGE family in an already crowded 3D market.\n" ]
how do lithium ion batteries work?
A lithium ion battery uses charged lithium particles (ions) to move electricity from one end of the battery to another. As energy leaves the battery, these lithium ions move from the negative side of the battery to the positive side, forming a conductive lithium layer that releases electricity. When all the ions are on the positive side of the battery, the battery is spent and no longer releases electricity. When the battery is put in a charger, the sides flip temporarily, and the addition of electrical energy to the lithium causes the ions to move back to the negative side of the battery, making the battery ready for use again. Because of these properties, lithium ion batteries are among the more common rechargeable batteries for home electronic use.
[ "Lithium-ion batteries store chemical energy in reactive chemicals at the anodes and cathodes of a cell. Typically, anodes and cathodes exchange lithium (Li+) ions through a fluid electrolyte that passes through a porous separator which prevents direct contact between the anode and cathode. Such contact would lead to an internal short circuit and a potentially hazardous uncontrolled reaction. Electric current is usually carried by conductive collectors at the anodes and cathodes to and from the negative and positive terminals of the cell (respectively).\n", "A lithium-ion battery or Li-ion battery (abbreviated as LIB) is a type of rechargeable battery. Lithium-ion batteries are commonly used for portable electronics and electric vehicles and are growing in popularity for military and aerospace applications. It was developed by John Goodenough, Rachid Yazami and Akira Yoshino in the 1980s, building on a concept proposed by M Stanley Whittingham in the 1970s.\n", "Today’s lithium ion batteries have high power density (fast discharge) and high energy density (hold a lot of charge). It can also develop dendrites, similar to splinters, that can short-circuit a battery and lead to a fire. Aluminum also transfers energy more efficiently. Inside a battery, atoms of the element — lithium or aluminum — give up some of their electrons, which flow through external wires to power a device. Because of their atomic structure, lithium ions can only provide one electron at a time; aluminum can give three at a time. Aluminum is also more abundant than lithium, lowering material costs.\n", "The thin film lithium ion battery can serve as a storage device for the energy collected from renewable sources with a variable generation rate, such as a solar cell or wind turbine. These batteries can be made to have a low self discharge rate, which means that these batteries can be stored for long periods of time without a major loss of the energy that was used to charge it. 
These fully charged batteries could then be used to power some or all of the other potential applications listed below, or provide more reliable power to an electric grid for general use.\n", "In general lithium ions move between the anode and the cathode across the electrolyte. Under discharge, electrons follow the external circuit to do electric work and the lithium ions migrate to the cathode. During charge the lithium metal plates onto the anode, freeing at the cathode. Both non-aqueous (with LiO or LiO as the discharge products) and aqueous (LiOH as the discharge product) Li-O batteries have been considered. The aqueous battery requires a protective layer on the negative electrode to keep the Li metal from reacting with water.\n", "In the batteries lithium ions move from the negative electrode to the positive electrode during discharge and back when charging. Li-ion batteries use an intercalated lithium compound as one electrode material, compared to the metallic lithium used in a non-rechargeable lithium battery. The batteries have a high energy density, no memory effect (other than LFP cells) and low self-discharge. They can however be a safety hazard since they contain a flammable electrolyte, and if damaged or incorrectly charged can lead to explosions and fires. Samsung were forced to recall Galaxy Note 7 handsets following lithium-ion fires, and there have been several incidents involving batteries on Boeing 787s.\n", "A lithium-ion flow battery is a flow battery that uses a form of lightweight lithium as its charge carrier. The flow battery stores energy separately from its system for discharging. The amount of energy it can store is determined by tank size; its power density is determined by the size of the reaction chamber.\n" ]
i turned on my old guitar amp with nothing plugged in and it started playing a radio station. how is this happening?
Radio waves are stupid easy to pick up on any basic consumer amplifier. I've picked up radio stations on PC speakers before. Somewhere along the line, the radio signal is inadvertently translated into an electrical signal that your system can amplify. You don't need a loose wire; just an unshielded system. Adding to that, I think the FCC mandates that consumer electronics must accept radio interference.
[ "\"Turning Up the Radio\" is a song by the American rock band Weezer from their studio album \"Death to False Metal\". Its genesis came about in 2008 when Weezer frontman Rivers Cuomo used YouTube to source ideas for creating a song using video submissions from other users of the platform.\n", "The players themselves were manufactured by CBS Electronics. According to the official Chrysler press release of September 12, 1955, \"Highway Hi-Fi plays through the speaker of the car radio and uses the radio's amplifier system. The turntable for playing records, built for Chrysler by CBS-Columbia, is located in a shock-proof case mounted just below the center of the instrument panel. A tone arm, including sapphire stylus and ceramic pick up, plus storage space for six long-play records make up the unit.\" A button controlled whether you listened to the radio or the records. A proprietary 0.25-mil (i.e., or a quarter of a \"thou\") stylus was used with an unusually high stylus pressure of to prevent skipping or skating despite normal car vibrations.\n", "The whole thing came through the famous \"listen mic\" on the SSL console. The SSL had put this massive compressor on it because the whole idea was to hang one mic in the middle of the studio and hear somebody talking on the other side. And it just so happened that we turned it on one day when Phil [Collins] was playing his drums. And then I had the idea of feeding that back into the console and putting the noise gate on, so when he stopped playing it sucked the big sound of the room into nothing.\n", "\"There Ain't Nothin' Wrong with the Radio\" is a moderate up-tempo novelty song. In it, the male narrator describes how old and run-down his car is but explains that he continues to drive it because \"there ain't nothin' wrong with the radio\" — specifically, \"there ain't a country station that [he] can't tune in\". 
The song features electric guitar and fiddle accompaniments.\n", "A radio pack is mainly used for musicians such as guitarists and singers for live performances. It is a small radio transmitter that is either placed in the strap or in the pocket. The receiver is connected to an amp or PA system and the user simply connects the transmitter into the instrument. This means that there is no wires in the way. By using a wireless system, musicians are free to move around the stage. This has meant that more elaborate stage shows are now possible, with musicians performing a long way from the amplifier or speakers. \n", "Sharon Aguilar reminisced, “Plugged [my guitar] right into the amp; no time for a pedalboard… no fancy in-ear monitors or anything like that… More energy… it ended up being probably my favorite live show that we’ve ever done.”\n", "\"There Ain't Nothin' Wrong with the Radio\" is a song co-written and recorded by American country music artist Aaron Tippin. It was released in February 1992 as the first single from his album \"Read Between the Lines\". The song is not only his first Number One hit on the country music charts but also his longest-lasting at three weeks.\n" ]
what happens with the intellectual properties of a company when they close down?
It gets sold off/liquidated, like physical assets (to take your example further, Disneylands, corporate headquarters, and other things like that).
[ "After closing a business may be dissolved and have its assets redistributed after filing articles of dissolution. A business that operates multiple locations may continue to operate, but close some of its locations that are under-performing, or in the case of a manufacturer, cease production of some of its products that are not selling well. Some failing companies are purchased by a new owner who may be able to run the company better, and some are merged with another company that will then take over its operations. Some businesses save themselves through bankruptcy or bankruptcy protection, thereby allowing themselves to restructure. There are several consequences towards the owners/shareholders, such as limited liability, the finance and the continuity (if a shareholder does not want to continue in the business).\n", "“has destroyed the unity that we commonly call property - has divided ownership into nominal ownership and the power formerly joined to it. Thereby the corporation has changed the nature of profit-seeking enterprise.”\n", "Events such as mergers, acquisitions, insolvency, or the commission of a crime will adversely affect the corporation in its current form. At the end of the corporate lifecycle, a corporation may be \"wound up\" and enter into bankruptcy liquidation. This often arises when the corporation is unable to discharge its debts in a timely manner. Comparatively, a merger or acquisition can often mean the altering or extinguishing of the corporation. In addition to the creation of the corporation, and its financing, these events serve as a transition phase into either dissolution, or some other material shift. \n", "Due to this act, many small companies on the edge of the market went to bankruptcy. Many other were sold according to decreasing profitability of the business. 
For example, GE was selling Lake, their consumer-credit division.\n", "Solutions to the diseconomies of scale for large firms may involve splitting the company into smaller organisations. This can either happen by default when the company is in financial difficulties, sells off its profitable divisions and shuts down the rest; or can happen proactively, if the management is willing.\n", "The disappearance of the corporation naturally puts an end to prosecution, this even in the case of merger or acquisition. The principle of responsibility for personal actions runs counter to the acquiring entity or person of responsibility for infractions committed by the acquired business (Crim. 14 octobre 2003).\n", "On March 19, 2012, the company suffered due to not having enough money, went broke and was ultimately put into administrative control. It is in the process of selling IP and assets, most notably the \"Hydrophobia\" series and the Hydroengine.3\n" ]
why do you see so many saabs and other out of production cars in movies?
Those brands are/were all owned by GM, and GM has made it a point of pride that their vehicles have been showcased in many motion pictures. What you're seeing is product placement coupled with favorable rental rates for fleet vehicles to achieve that effect.
[ "Many of the cars that appear in the 2015 scenes are either modified for the film or concept cars. Examples include Ford Probe, Saab EV-1, Citroën DS 21, Pontiac Banshee Concept, Pontiac Fiero and Volkswagen Beetle. Cars reused from other science fiction films include the \"Star Car\" from \"The Last Starfighter\" (1984) and a \"Spinner\" from \"Blade Runner\" (1982). Griff's car is a modified BMW 633 (which was notably never in the convertible form seen in the film).\n", "The film was announced in July 2007. Vin Diesel, Paul Walker, and the rest of the cast of the original film all reprised their roles. Filming began in 2008. The movie cars were built in Southern California's San Fernando Valley. Around 240 cars were built for the film. However, the replica vehicles do not match the specifications they were supposed to represent. For example, the replica version of \"F-Bomb\", a 1973 Chevrolet Camaro built by Tom Nelson of NRE and David Freiburger of \"Hot Rod\" magazine, included a 300 hp crate V8 engine with a 3-speed automatic transmission, whereas the actual car included a twin-turbo 1,500 hp engine and a 5-speed transmission.\n", "The cars used in racing scenes were provided by Andy Hillenburg, who purchased Rockingham Speedway months after its release and provided the stunt drivers, as many ARCA Re/Max Series drivers participated in the filming (ARCA Re/Max Series stickers can be found on the cars in the movie; Hillenberg trained stunt drivers, along with letting some film stars take turns driving). Some cars that can be seen in \"Ta Ra Rum Pum\" display high resemblance to cars specifically created for Will Ferrell's \"\" (except featuring modified sponsorship decals) as Hillenburg provided cars for that movie as well.\n", "They also built three cars with 'futuristic' appearance, based on Ford Zephyr running gear and aluminium gullwing door bodyshells, for Gerry Anderson and the 1969 film 'Doppelgänger'. 
These were re-painted and re-used for the much better known UFO TV series of 1970. The cars were infamously unfinished, underpowered and unreliable. Ed Straker’s car was later owned by DJ Dave Lee Travis, who hated it. Little survives of these cars, except for enough remains to build a modern replica.\n", "Several elements recur in the films series. As the Japanese carmaker Toyota was usually the main sponsor of the film series, most cars, including the villain's cars, the security cars, the police cars, the gang's car, civilian cars parked on the sidewalks, etc. was supplied by the company. Models include the Crown as taxis, Toyota Cressida as police cars, Hiace as security vans and money transports, and so on. There have been exceptions, notably \"Olsenbandens aller siste kupp\" (\"The Olsen-Gangs Very Last Coup\") from 1982, which was sponsored by Datsun. \n", "Depending on the source, either eleven or twelve cars were built by Cinema Vehicle Services for the film (not including CVS's creation of one additional Eleanor clone - with a Ford 428 - for producer Bruckheimer). Nine were shells, and three were built as fully functional vehicles. Seven were reported to have \"survived the filming [and] made it back to Cinema Vehicle Services\" according to research by Mustangandfords.com.\n", "CG supervisor Thanh John Nguyen states that they tried to duplicate the look of the cars in the book, which Executive Producer Ken Tsumura describes as bearing the look of the 1940s and 1950s; according to production designer Yarrow Cheney, the filmmakers also partnered with Volkswagen to design the red car that Ted drives, simplifying it a bit and rounding the edges. Cheney also said that prior to this they had based some of the models on Volkswagens due to their suitability.\n" ]
how does a bike stay up when at faster speeds but will fall over when not going fast enough?
The gyroscopic forces created by the wheel's mass cause the wheel to fix itself on a plane, in turn overcoming the forces of gravity and stopping it from just falling over.
[ "The rider applies torque to the handlebars in order to turn the front wheel and so to control lean and maintain balance. At high speeds, small steering angles quickly move the ground contact points laterally; at low speeds, larger steering angles are required to achieve the same results in the same amount of time. Because of this, it is usually easier to maintain balance at high speeds. As self-stability typically occurs at speeds above a certain threshold, going faster increases the chances that a bike is contributing to its own stability.\n", "Although longitudinally stable when stationary, bikes often have a high enough center of mass and a short enough wheelbase to lift a wheel off the ground under sufficient acceleration or deceleration. When braking, depending on the location of the combined center of mass of the bike and rider with respect to the point where the front wheel contacts the ground, bikes can either skid the front wheel or flip the bike and rider over the front wheel. A similar situation is possible while accelerating, but with respect to the rear wheel.\n", "At low forward speeds, the precession of the front wheel is too quick, contributing to an uncontrolled bike’s tendency to oversteer, start to lean the other way and eventually oscillate and fall over. At high forward speeds, the precession is usually too slow, contributing to an uncontrolled bike’s tendency to understeer and eventually fall over without ever having reached the upright position. This instability is very slow, on the order of seconds, and is easy for most riders to counteract. Thus a fast bike may feel stable even though it is actually not self-stable and would fall over if it were uncontrolled.\n", "For most bikes, depending on geometry and mass distribution, capsize is stable at low speeds, and becomes less stable as speed increases until it is no longer stable. 
However, on many bikes, tire interaction with the pavement is sufficient to prevent capsize from becoming unstable at high speeds.\n", "However, neither of these solutions are ideal as they hinder the suspension's ability to absorb small bumps or low-speed impacts while the bicycle is coasting (Note: \"low-speed\" does not refer to the velocity at which the vehicle is traveling, but the speed at which the suspension is compressed). In the case of excessive compression damping, this problem is known as overdamping.\n", "Bikes, as complex mechanisms, have a variety of modes: fundamental ways that they can move. These modes can be stable or unstable, depending on the bike parameters and its forward speed. In this context, \"stable\" means that an uncontrolled bike will continue rolling forward without falling over as long as forward speed is maintained. Conversely, \"unstable\" means that an uncontrolled bike will eventually fall over, even if forward speed is maintained. The modes can be differentiated by the speed at which they switch stability and the relative phases of leaning and steering as the bike experiences that mode. Any bike motion consists of a combination of various amounts of the possible modes, and there are three main modes that a bike can experience: capsize, weave, and wobble. A lesser known mode is rear wobble, and it is usually stable.\n", "A cyclist executing a basic track stand holds the bicycle's cranks in a horizontal position, with his or her dominant foot forward. Track stands executed on bicycles with a freewheel usually employ a small uphill section of ground. The uphill needs to be sufficient to allow the rider to create backward motion by relaxing pressure on the pedals, thus allowing the bike to roll backwards. Once the track stand is mastered, even a very tiny uphill section is sufficient: e.g. the camber of the road, a raised road marking, and so on. 
Where no such uphill exists, or even if the gradient is downhill, a track stand can be achieved on a freewheeling bicycle by using a brake to initiate a backwards movement. If a fixed-gear bicycle is being used, an uphill slope is not needed since the rider is able to simply back pedal to move backwards. In both cases forward motion is accomplished by pedalling forwards. The handlebars are held at approximately a 45 degree angle, converting the bike's forward and back motion into side-to-side motion beneath the rider's body. This allows the rider to keep the bike directly below their center of gravity.\n" ]
what is infrasound and how can it cause panic attacks in humans?
Human hearing works for tones in the frequency range of 20 Hz–20,000 Hz. Everything below that is called infrasound; everything above is called ultrasound. The resonant frequencies of a lot of human organs happen to be in the infrasound range, so a loud/powerful enough tone can cause them to vibrate strongly, resulting in discomfort and nausea.
[ "Cyclospora is a gastrointestinal pathogen that causes fever, diarrhea, vomiting, and severe weight loss. Outbreaks of the disease occurred in Chicago in 1989 and other areas in the United States. But investigation by the Center for Disease Control could not identify an infectious cause. The discovery of the cause was made by Mr. Ramachandran Rajah, the head of a medical clinic's laboratory in Kathmandu, Nepal. Mr. Rajah was trying to discover why local residents and visitors were becoming ill every summer. He identified an unusual looking organism in stool samples from patients who were sick. But when the clinic sent slides of the organism to the Center for Disease Control, it was identified as blue-green algae, which is harmless. Many pathologists had seen the same thing before, but dismissed it as irrelevant to the patient's disease. Later, the organism would be identified as a special kind of parasite, and treatment would be developed to help patients with the infection. In the United States, Cyclospora infection must be reported to the Center for Disease Control according to the CDC's Reportable Disease Chart\n", "Infrasound is sound at frequencies lower than the low frequency end of human hearing threshold at 20 Hz. It is known, however, that humans can perceive sounds below this frequency at very high pressure levels. Infrasound can come from many natural as well as man-made sources, including weather patterns, topographic features, ocean wave activity, thunderstorms, geomagnetic storms, earthquakes, jet streams, mountain ranges, and rocket launchings. Infrasounds are also present in the vocalizations of some animals. Low frequency sounds can travel for long distances with very little attenuation and can be detected hundreds of miles away from their sources.\n", "An ecchymosis is a subcutaneous spot of bleeding with diameter larger than . 
It is similar to (and sometimes indistinguishable from) a hematoma, commonly called a bruise, though the terms are not interchangeable in careful usage. Specifically, bruises are caused by trauma whereas ecchymoses, which are the same as the spots of purpura except larger, are not \"necessarily\" caused by trauma, often being caused by pathophysiologic cell function, and some diseases such as Marburg virus disease.\n", "Mobilida is a group of parasitic or symbiotic peritrich ciliates, comprising more than 280 species. Mobilids live on or within a wide variety of aquatic organisms, including fish, amphibians, molluscs, cnidarians, flatworms and other ciliates, attaching to their host organism by means of an aboral adhesive disk. Some mobilid species are pathogens of wild or farmed fish, causing severe and economically damaging diseases such as trichodinosis.\n", "Latrodectism is the illness caused by the bite of \"Latrodectus\" spiders (the black widow spider and related species). Pain, muscle rigidity, vomiting, and sweating are the symptoms of latrodectism. Contrary to popular conception, latrodectism is very rarely fatal for humans, though domestic cats have been known to die due to convulsions and paralysis.\n", "Hurler syndrome, also known as mucopolysaccharidosis Type IH (MPS-IH), Hurler's disease, and formerly gargoylism, is a genetic disorder that results in the buildup of large sugar molecules called glycosaminoglycans (AKA GAGs, or mucopolysaccharides) in lysosomes. The inability to break down these molecules results in a wide variety of symptoms caused by damage to several different organ systems, including but not limited to the nervous system, skeletal system, eyes, and heart.\n", "Surra (from the Marathi \"sūra\", meaning the sound of heavy breathing through nostrils, of imitative origin) is a disease of vertebrate animals. 
The disease is caused by protozoan trypanosomes, specifically \"Trypanosoma evansi\", of several species which infect the blood of the vertebrate host, causing fever, weakness, and lethargy which lead to weight loss and anemia. In some animals the disease is fatal unless treated.\n" ]
what would happen to a person when they cannot afford payments on a loan/debt?
They default on the loan, which means they can't pay it back, and they may have to declare bankruptcy. Their credit gets ruined, and for most loans the lender simply writes off the debt and takes the loss. This is the worst case. If you just miss a payment on a loan, you get a ding to your credit for non-payment; however, a single missed or late payment won't be too damaging and can potentially even be removed from your credit history in some circumstances. In general, creditors will try to work with people to set lower payments they can afford, because they would rather have a loan take longer to be repaid than take the loss.
[ "Unless he cancels the contract, or obtains an order compelling the creditor to accept his performance, it is not clear how the debtor can discharge his debt without having to wait until the period of prescription has run, or until performance has become impossible. Consignation (payment into court with notice to the creditor) appears to have fallen into desuetude, and is in any event impossible or impracticable in many cases (as in the case where perishables are to be delivered). Whether the debtor may sell the goods for the account of the creditor is also uncertain.\n", "Failure to make a payment on an unsecured debt may ultimately result in reporting the delinquent debt to a credit reporting agency or legal action. However, a nongovernmental unsecured creditor cannot seize any of your assets without a court judgment in the U.S.\n", "From the borrower's perspective, this means failure to make their regular payment for one or two payment periods or failure to pay taxes or insurance premiums for the loan collateral will lead to substantially higher interest for the entire remaining term of the loan.\n", "It is not a crime to fail to pay a debt. Except in certain bankruptcy situations, debtors can choose to pay debts in any priority they choose. But if one fails to pay a debt, they have broken a contract or agreement between them and a creditor. Generally, most oral and written agreements for the repayment of consumer debt - debts for personal, family or household purposes secured primarily by a person's residence - are enforceable.\n", "If the debt was secured by specific collateral, such as a car or home, the creditor may seek to repossess the collateral. 
In more serious circumstances, individuals and companies may go into bankruptcy.\n", "If you’ve borrowed money and your lender cancels/terminates or forgives the outstanding loan balance owed to you at a later stage, then you, as per the Internal Revenue Service will be liable to file an income tax return on the forgiven debt amount with respect to the corresponding situations. At the time of loan origination, the loan money that you received weren’t considered as an income because you made an agreement to pay back the borrowed sum to the lender, i.e., it was your obligation to repay the loan amount as per the signed contract’s terms and conditions. \n", "As noted by Sachs J in \"Coetzee\", the small debtor without means will no longer be faced with imprisonment, from which he can only be rescued by family or friends. Further, creditors will no longer be able to extend credit on the basis that the debt can be exacted through fear of imprisonment. Credit should be extended only to those who are creditworthy, and to those who provide proper security.\n" ]
how can a tiny amount of toxic chemical affect a person's entire body?
Let's do some math. Methanol is lethal at a dose of 1–2 mL/kg. For an 80 kg person, take the 2 mL/kg dose to be on the safe side (or the dangerous side, as it were), which gives a dose of 160 mL, or about 5.4 oz. This dose contains roughly 2.38 x 10^24 molecules of methanol. Even if you had millions of cells, that would still be on the order of 10^18 molecules per cell, or billions of billions. EDIT: There are roughly 3.72 trillion (human) cells in the body, so that bumps it down from billions of billions to "only" hundreds of billions of molecules per cell.
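The arithmetic above can be sketched in a few lines. The density and molar mass of methanol are standard textbook values I've filled in (they aren't stated in the original post), so treat the numbers as a back-of-envelope check, not a toxicology reference.

```python
# Back-of-envelope: molecules of methanol per cell at a lethal dose.
# Density and molar mass are assumed textbook values, not from the post.
AVOGADRO = 6.022e23          # molecules per mole
DENSITY_MEOH = 0.792         # g/mL, methanol at room temperature
MOLAR_MASS_MEOH = 32.04      # g/mol

dose_ml = 2 * 80             # 2 mL/kg for an 80 kg person = 160 mL
grams = dose_ml * DENSITY_MEOH
moles = grams / MOLAR_MASS_MEOH
molecules = moles * AVOGADRO          # ~2.4e24 molecules in the dose

cells = 3.72e12                       # estimated human cell count
per_cell = molecules / cells          # ~6e11 molecules per cell

print(f"{molecules:.2e} molecules total, {per_cell:.2e} per cell")
```

Running this gives about 2.4 × 10^24 molecules in total, or roughly 6 × 10^11 per cell — hundreds of billions of molecules hitting every single cell, which is why such a "tiny" volume affects the whole body.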
[ "Most toxic substances exert their toxicity through some interaction (e.g., covalent bonding, oxidation) with cellular macromolecules like proteins or DNA. This interaction leads to changes in the normal cellular biochemistry and physiology and downstream toxic effects.\n", "Most toxicants are known to affect only a fraction of exposed population. This is due to the differences in the genetic makeup of the organisms which affects toxicant metabolism and clearance from the body. Effect of developmental toxicants depends on the genetic makeup of the mother and fetus.\n", "The types of toxicities where substances may cause lethality to the entire body, lethality to specific organs, major/minor damage, or cause cancer. These are globally accepted definitions of what toxicity is. Anything falling outside of the definition cannot be classified as that type of toxicant.\n", "Toxicity of a substance can be affected by many different factors, such as the pathway of administration (whether the toxicant is applied to the skin, ingested, inhaled, injected), the time of exposure (a brief encounter or long term), the number of exposures (a single dose or multiple doses over time), the physical form of the toxicant (solid, liquid, gas), the genetic makeup of an individual, an individual's overall health, and many others. Several of the terms used to describe these factors have been included here.\n", "BULLET::::- Physical toxicants are substances that, due to their physical nature, interfere with biological processes. Examples include coal dust, asbestos fibers or finely divided silicon dioxide, all of which can ultimately be fatal if inhaled. Corrosive chemicals possess physical toxicity because they destroy tissues, but they're not directly poisonous unless they interfere directly with biological activity. Water can act as a physical toxicant if taken in extremely high doses because the concentration of vital ions decreases dramatically if there's too much water in the body. 
Asphyxiant gases can be considered physical toxicants because they act by displacing oxygen in the environment but they are inert, not chemically toxic gases.\n", "Toxicants that at low concentrations modify or inhibit some biological process by binding at a specific site or molecule have a specific acting mode of toxic action. However, at high enough concentrations, toxicants with specific acting modes of toxic actions can produce narcosis that may or may not be reversible. Nevertheless, the specific action of the toxicant is always shown first because it requires lower concentrations.\n", "Certain drugs are toxic in their own right in therapeutic doses because of their mechanism of action. Alkylating antineoplastic agents, for example, cause DNA damage, which is more harmful to cancer cells than regular cells. However, alkylation causes severe side-effects and is actually carcinogenic in its own right, with potential to lead to the development of secondary tumors. In a similar manner, arsenic-based medications like melarsoprol, used to treat trypanosomiasis, can cause arsenic poisoning.\n" ]
why is the rear wheel in buses and trucks close to the centre?
It's to increase maneuverability. The further back the rear wheels are, the longer the vehicle's wheelbase, and therefore the wider any turn it makes will be. If you move the rear wheels toward the front of the bus, the bus can swing into tighter turns. The tradeoff is that the back end of the bus can now swing out, which can't happen when the wheels are at the very back.
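The wheelbase-versus-turning-circle relationship can be illustrated with the simple "bicycle" steering model, where the turning radius is R = L / tan(δ) for wheelbase L and steering angle δ. The wheelbases and steering angle below are illustrative guesses, not real bus specifications.

```python
import math

def turning_radius(wheelbase_m: float, steer_angle_deg: float) -> float:
    """Approximate turning radius via the simple bicycle model:
    R = L / tan(delta). Illustrative only, not real vehicle data."""
    return wheelbase_m / math.tan(math.radians(steer_angle_deg))

# Same steering lock, two wheelbases: moving the rear axle forward
# (shortening the wheelbase) tightens the turn.
long_wb = turning_radius(7.5, 40)   # rear wheels far back
short_wb = turning_radius(5.5, 40)  # rear wheels moved toward the centre
print(f"long wheelbase: {long_wb:.1f} m, short wheelbase: {short_wb:.1f} m")
```

With the same steering angle, the shorter wheelbase turns in a noticeably smaller radius, which is exactly the tradeoff the answer describes.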
[ "They are also used in railways and low floor buses although, in the case of buses, the device is engineered in the opposite way to those fitted to off-road vehicles - the axle is below the center of the wheel. Thus, the inverted portal axle allows the floor of the bus to be lowered, easing access to the bus and increasing the available cabin height.\n", "In automotive design, an RR, or rear-engine, rear-wheel-drive layout places both the engine and drive wheels at the rear of the vehicle. In contrast to the RMR layout, the center of mass of the engine is between the rear axle and the rear bumper. Although very common in transit buses and coaches due to the elimination of the drive shaft with low-floor bus, this layout has become increasingly rare in passenger cars.\n", "The guide-wheel, which protrudes just ahead of the front wheels, is the most important part of the bus when travelling on the O-Bahn. It is connected directly to the steering mechanism, and steers the bus by running along the raised edge of the track. While it is not strictly necessary for drivers to hold the steering wheel when travelling on the O-Bahn because of the guide-wheel, safety procedures require the driver to be alert to their circumstances at all times. A rumble strip before stations is a reminder that they need to resume control. The guide-wheel is the most delicate part of the system and is designed to snap off upon sharp impact; before the O-Bahn was in place, a number of buses were fitted with guide-wheels for their ordinary routes to test their durability. Drivers were forced to be more cautious on their normal trips after numerous guide-wheel-to-curb impacts.\n", "In some buses the rearmost axle is connected to the steering, with the rear most set steering in the opposite direction to the front axle. 
This steering arrangement makes it possible for the longer triple axle buses to negotiate corners with greater ease than would otherwise be the case.\n", "From 1982 to 1999, the MallRide fleet used front-wheel-drive and right-hand-drive buses that were custom-designed and purpose-built. The right-hand-drive gives the operators better view of passengers entering and exiting the buses from the right-hand side and to watch out for the pedestrians. These buses can travel up to on the street and must deal with the 'wandering' pedestrians on the sidewalks who get too close to the buses.\n", "Thanks to the high floor in the rear of the vehicle, the engine and gearbox are placed in the centre for the third axle. It works with the SCR technology using AdBlue. All axles are from ZF. Due to the use of larger wheel size 295/70 tires the bus has an extended wheel arch. Behind the front axle there is a storage possibly for snow chains. Two illuminated steps provide access to the higher floor level at the rear. All seats are located on the raised floor and are equipped with safety belts, while those located on the side of the stairs have additional armrests and handles. Above the seats in the low floor part of the bus there are shelves for luggage on the ceiling. The higher door has wheelchair access. Optionally, in place of the wheelchair bay the space can be transformed for four additional passenger seats. The ventilation of the interior of the bus has side windows and two electric sunroofs. Instead of tilting windows there can be installed an air conditioning system. There are several versions of the cab: open (open door), semi-closed (with glass door) and closed (fully built cabin). Doors used in Solaris Urbino 15 LE all open to the outside.\n", "Gölsdorf axles work in this way. Two of the five axles cannot move sideways relative to the frame because their axle boxes fix them rigidly to the frame. 
The other axles, however, are fitted into their bearings and attached to their drives in such a way that they can be moved sideways during curve running, depending on the sideways forces acting on them. In addition the connecting and coupling rods, through which the steam pressure and linear forces from the steam pistons are translated into the rotation of the wheels via the crank pins, also have to be able to move sideways.\n" ]
how do first responders or hospital personnel know who to contact if you have had an accident and are unconscious?
I believe this is generally sorted out only after you get to the hospital.
[ "Situation awareness for first responders in medical situations also includes evaluating and understanding what happened to avoid injury of responders and also to provide information to other rescue agencies which may need to know what the situation is via radio prior to their arrival on the scene.\n", "When an unconscious person enters a hospital, the hospital utilizes a series of diagnostic steps to identify the cause of unconsciousness. According to Young, the following steps should be taken when dealing with a patient possibly in a coma:\n", "To illustrate, a call is received in the dispatch center about a possibly unconscious person. The call taker will immediately identify the call location, and will then ask further questions, in order to assess precipitating symptoms, specific location, and any special circumstances (no house number, a neighbor is calling, etc.). While this interview is occurring, the call taker will enter the command \"Bewusstlose Person\" (unconscious person) into the dispatch computer, resulting in an automatic suggestion to dispatch of a RTW (emergency ambulance) or NKTW and a NEF (emergency physician car). Upon entering the address of the patient, the computer will look for the emergency vehicles closest to this address. Now the dispatcher can send the whole package over the air and those two vehicles are alarmed, similar to Computer-assisted dispatch (CAD) in the United States. Whilst the vehicles are being alarmed by the dispatcher, the call taker may remain on the line with the caller, providing telephone advice or assistance until the EMS resources arrive on the scene. While still on the line with the caller the dispatcher is able to guide the ambulance to the scene and provide the crew with additional and more precise information about the incident. 
This, of course, happens without any notice of the caller and will help to get the EMS resources on scene more quickly.\n", "To illustrate, a call is received in the dispatch center about a possibly unconscious person. The dispatcher will immediately identify the call location, and will then ask further questions, in order to assess precipitating symptoms, specific location, and any special circumstances (no house number, a neighbor is calling, etc.). While this interview is occurring, the dispatcher will enter the command \"Bewußtlosigkeit\" (unconsciousness) into the dispatch computer, resulting in an automatic suggestion to dispatch of a \"Rettungswagen\" (emergency ambulance) and a \"Notarzteinsatzfahrzeug\" (Doctor's Car). Upon entering the address of the patient, the computer will look for the emergency vehicles closest to this address. Now the dispatcher can send the whole package over the air and those two vehicles are alarmed, similar to Computer-assisted dispatch (CAD) in the United States. After sending the alarm, the dispatcher may remain on the line with the caller, providing telephone advice or assistance until the EMS resources arrive on the scene.\n", "Certain emergency medical services urge patients to record information describing their medical conditions, medications and drug allergies, emergency contacts, as well as advance healthcare directives for when the patients are incapacitated or suffer from dementia or learning difficulties, and place the record as a special \"message in a bottle\" stored in (conventionally) a refrigerator, where paramedics can quickly locate it.\n", "In general, first responders are sent to immediately life-threatening situations such as cardiac arrest. Some ambulance services restrict the type of calls which responders can attend, either through blanket prohibition or by more detailed call screening by the emergency dispatch centre. 
This is because responders do not necessarily have the levels of training or equipment available to full-time staff, and may arrive on their own, increasing risks. Types of call which responders may not be asked to attend (or be stood down if already en route) include drugs related problems, domestic violence and abusive patients as well as dangerous scenes such as traffic collisions or building sites. In some areas, responders are also not dispatched to paediatric cases, although other areas have this as a main part of their role.\n", "Within hospitals, the EWS is used as part of a \"track-and-trigger\" system whereby an increasing score produces an escalated response varying from increasing the frequency of patient's observations (for a low score) up to urgent review by a rapid response or Medical Emergency Team (MET call). Concerns by nursing staff may also be used to trigger such call, as concerns may precede changes in vital signs.\n" ]
what ever happened to the sequestration?
The simple answer is that it happened and almost nothing has come of it. The people who said the sky would fall were wrong, and the people who said it would fix our deficit were wrong. That's it. Sequestration is a prime example of why you cannot just listen to the news and "experts" to determine what is going on. People lie and exaggerate to get more attention.
[ "The budget sequestration in 2013 refers to the automatic spending cuts to United States federal government spending in particular categories of outlays that were initially set to begin on January 1, 2013, as a fiscal policy as a result of Budget Control Act of 2011 (BCA), and were postponed by two months by the American Taxpayer Relief Act of 2012 until March 1 when this law went into effect.\n", "The term \"budget sequestration\" was first used to describe a section of Gramm-Rudman-Hollings Deficit Reduction Act of 1985 (GRHDRA). The hard caps were abandoned and replaced with a PAYGO system by the Budget Enforcement Act of 1990, which was in effect until 2002. Sequestration was later included as part of the Budget Control Act of 2011, which resolved the debt-ceiling crisis; the bill set up a Congressional debt-reduction committee and included the sequestration as a disincentive to be activated only if Congress did not pass deficit reduction legislation. However, the committee did not come to agreement on any plan, activating the sequestration plan. The sequestration was to come into force on January 1, 2013 and was considered part of the fiscal cliff, but the American Taxpayer Relief Act of 2012 delayed it until March 1 of that year.\n", "The sequestration became a major topic of the fiscal cliff debate. The debate's resolution, the American Taxpayer Relief Act of 2012 (ATRA), eliminated much of the tax side of the dispute but only delayed the budget sequestrations for two months, thus reducing the original $110 billion to be saved per fiscal year to $85 billion in 2013.\n", "In 2011, sequestration was used in the Budget Control Act of 2011 (Pub. L. 112-25) as a tool in federal budget control. This 2011 act authorized an increase in the debt ceiling in exchange for $2.4 trillion in deficit reduction over the following ten years. 
This total included $1.2 trillion in spending cuts identified specifically in the legislation, with an additional $1.2 trillion in cuts that were to be determined by a bipartisan group of Senators and Representatives known as the \"Super Committee\" or officially as the United States Congress Joint Select Committee on Deficit Reduction. The Super Committee failed to reach an agreement. In that event, a trigger mechanism in the bill was activated to implement across-the-board reductions in the rate of increase in spending known as \"sequestration\".\n", "Budget sequestration was first authorized by the Balanced Budget and Emergency Deficit Control Act of 1985 (BBEDCA, Title II of Pub. L. 99-177). This is known as the Gramm–Rudman–Hollings Act. They provided for automatic spending cuts (called \"sequesters\") if the deficit exceeded a set of fixed deficit targets. The process for determining the amount of the automatic cuts was found unconstitutional in the case of \"Bowsher v. Synar,\" and Congress enacted a reworked version of the law in 1987. Gramm-Rudman failed, however, to prevent large budget deficits. The Budget Enforcement Act of 1990 supplanted the fixed deficit targets.\n", "On February 22, 2013, shortly before the United States federal budget sequester took effect, \"The Washington Post\" published a column by Woodward in which he criticized the Obama administration for their statements in 2012 and 2013 that the sequester had been proposed by Republicans in Congress; Woodward said his research showed that the sequester proposal had originated with the White House. Press Secretary Jay Carney confirmed, \"The sequester was something that was discussed, and as has been reported, it was an idea that the White House put forward.\"\n", "Budget sequestration is a procedure in United States law that limits the size of the federal budget. 
Sequestration involves setting a hard cap on the amount of government spending within broadly defined categories; if Congress enacts annual appropriations legislation that exceeds these caps, an across-the-board spending cut is automatically imposed on these categories, affecting all departments and programs by an equal percentage. The amount exceeding the budget limit is held back by the Treasury and not transferred to the agencies specified in the appropriation bills. The word sequestration was derived from a legal term referring to the seizing of property by an agent of the court, to prevent destruction or harm, while any dispute over said property is resolved in court.\n" ]
What effect does machine generated wind (i.e. a fan) have on sound waves?
Sound travels at a speed relative to the medium it's in (the air). For example, in the cockpit of a fighter jet, no matter how fast you're going, sound will always travel away from you when you speak at about 340 m/s (depending on air temperature, pressure, etc.), because the air is moving with the jet. But the sound the jet's engines emit into the atmosphere can only travel at about 340 m/s relative to the stationary air the jet is flying through, so the plane zooms past the sound as soon as it's emitted. All that sound piles up and makes a shock wave and a sonic boom, but that's a different story. So when you have sound travelling through air that's being blown by your fan, it will travel a little bit faster or slower relative to the room (depending on which direction the fan is pointing) than if the fan were turned off. However, the speed of sound is huge compared to the speed of the air blown by your fan, so the difference is pretty much negligible.
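The "pretty much negligible" claim is easy to quantify: in the room frame, wind simply adds to or subtracts from the sound speed. The fan airflow figure below is an assumed value for a strong household fan, just to show the scale.

```python
# Back-of-envelope: how much does fan airflow shift sound speed in the
# room frame? The fan airflow value is an assumption, for illustration.
speed_of_sound = 340.0   # m/s in still air, roughly
fan_airflow = 5.0        # m/s, a strong household fan (assumed)

downwind = speed_of_sound + fan_airflow   # sound travelling with the flow
upwind = speed_of_sound - fan_airflow     # sound travelling against it
fractional_change = fan_airflow / speed_of_sound  # ~1.5%

print(f"downwind: {downwind} m/s, upwind: {upwind} m/s, "
      f"shift: {fractional_change:.1%}")
```

A shift of around 1.5% either way is far too small to hear as a change in speed, which matches the answer's conclusion (the audible effect of a fan is mostly its own noise, not any change in how sound propagates).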
[ "The wind machine (also called aeoliphone) is a friction idiophone, which is a class of instrument which produces sound through vibrations within the instrument itself. It is a specialist musical instrument used to produce the sound of wind in orchestral compositions and musical theater productions.\n", "The wind machine is played by rotating the crank handle, which is attached to the cylinder, to create friction between the wooden slats and the material covering that touches the cylinder but does not rotate as the crank handle is turned. This friction between the wood and the material covering creates the sound of rushing wind. The volume and pitch of the sound is controlled by the rate at which the crank is turned. The faster the handle is turned, the higher the resulting pitch and the louder the sound. The slower the handle is turned, the lower the pitch and the softer the volume. The sound of the wind machine can also be controlled by the tightness of the fabric covering the cylinder.\n", "The wind can also be created by using pressurized steam instead of air. The steam organ, or calliope, was invented in the United States in the 19th century. Calliopes usually have very loud and clean sound. Calliopes are used as outdoors instruments, and many have been built on wheeled platforms.\n", "The wind machine is constructed of a large cylinder made up of several wooden slats which measures approximately 75–80 centimeters in diameter. The cylinder body of the instrument rests upon a stand and is typically covered with silk, canvas, or other material which is in a fixed position. A crank handle, used by the player to rotate the cylinder and create the sound, is attached to the cylinder.\n", "Just like in the case of the standing-wave system, the machine \"spontaneously\" produces sound if the temperature \"T\" is high enough. 
The resulting pressure oscillations can be used in a variety of ways, such as in producing electricity, cooling, and heat pumping.\n", "In musical terms, they are often described as sounding hollow, and are therefore used as the basis for wind instrument sounds created using subtractive synthesis. Additionally, the distortion effect used on electric guitars clips the outermost regions of the waveform, causing it to increasingly resemble a square wave as more distortion is applied.\n", "Fans generate noise from the rapid flow of air around blades and obstacles causing vortexes, and from the motor. Fan noise has been found to be roughly proportional to the fifth power of fan speed; halving speed reduces noise by about 15 dB.\n" ]
why does 101 mean the basics of something rather than just 1 or 001?
In a college curriculum, courses are numbered to indicate both the level (100-level courses are more basic than 200-level courses) and, frequently but not always, the sequence of courses (you have to take Biology 101 before you can take Biology 102). A course numbered 101 is typically the first introductory-level course offered in a discipline. Why isn't it 100, though? First, there's the sequencing: if you have to take 101 before 102, it's easier to understand and remember that 10**1** is **1st** and 10**2** is **2nd** than it would be to remember that 100 is 1st and 101 is 2nd. Also, at some institutions the number 100 is reserved for remedial courses. So as a typical first-year student at a US university, you would be likely to take English 101 in the fall and English 102 in the spring ... but that assumes you have the basic reading and writing skills from high school. If testing shows that you aren't quite ready for college-level English courses, you might have to take an English 100 class to brush up those skills before moving on to 101 and 102. That's only at some institutions, though, and not as consistent as the 101, 102 style of numbering.
[ "BULLET::::- 101: (pronounced 'one o one') used to indicate basic knowledge; e.g., \"Didn't you learn to sweep the floor in housework 101?\" (from the numbering scheme of educational courses where 101 would be the first course in a sequence on the subject).\n", "It is variously pronounced \"one hundred and one\" / \"a hundred and one\", \"one hundred one\" / \"a hundred one\", and \"one oh one\". As an ordinal number, 101st (one hundred [and] first), rather than 101th, is the correct form.\n", "A similar concept can be seen in many of the \"[topic] For Dummies\" series of tutorials and also in many other introductory surveys entitled with the suffix \"101\" (based on academic numberings of entry-level courses).\n", "Instead of meaning 5:30, the \"half five\" expression is sometimes used to mean 4:30, or \"half-way to five\", especially for regions such as the American Midwest and other areas that have been particularly influenced by German culture. This meaning follows the pattern choices of many Germanic and Slavic languages, including Croatian, Dutch, Danish, Russian and Swedish, as well as Hungarian and Finnish.\n", "10101 DO 101 I = 1, 101 because the zero in column 6 is treated as if it were a space (!), while 101010DO101I=1.101 was instead 10101 DO101I = 1.101, the assignment of 1.101 to a variable called DO101I. Note the slight visual difference between a comma and a period.\n", "Thus, the quotient of 11011 divided by 101 is 101, as shown on the top line, while the remainder, shown on the bottom line, is 10. In decimal, this corresponds to the fact that 27 divided by 5 is 5, with a remainder of 2.\n", "The model name CDP-101 was chosen by Nobuyuki Idei, who headed Sony's Audio Division. \"101\" represents the number 5 in binary notation and was chosen because Idei considered the model to be of \"medium class\".\n" ]
I've read that in the 1800's, US military officers were drawn from the upper class. If so, was it possible that a lower class citizen could perform well in school, go to college and be commissioned regardless of his prior social status?
Yes, it was possible. The prototypical example would be Andrew Jackson, the son of recent Irish immigrants (his older siblings were actually born in Ireland; that's how recently his parents had immigrated before his birth). Jackson rose to the rank of Major General in the Army and, of course, eventually reached the highest office in the land in 1828 (inaugurated 1829).
[ "BULLET::::- Officers who had fought in the Army, Navy, or Marine Corps of the United States in the suppression of the Rebellion, or enlisted men who had so served and were subsequently commissioned in the regular forces of the United States, constituted the \"Original Companions of the First Class.\" The eldest direct male lineal descendants of deceased Original Companions or deceased eligible officers could be admitted as \"hereditary Companions of the First Class.\"\n", "Only a small proportion of officers were from the nobility; in 1809, only 140 officers were peers or peers' sons. A large proportion of officers came from the Militia, and a small number had been gentlemen volunteers, who trained and fought as private soldiers but messed with the officers and remained as such until vacancies (without purchase) for commissions became available.\n", "At least two of their colonels, Williams and Howard, were considered as the best officers of their grade in the army. Gunby, Hall, Smith, Stone, and Ramsey, and the lamented Ford, who dies gallantly at the head of his regiment, were equal to any others in the whole continental service.\"\n", "For officers, the fencible, like the militia regiments, presented both advantages and disadvantages. For many young men those formations formed a kind of stepping-stone to get into the regular army. Others, again, who passed too many years in them, gained no rank, spent their daily pay, and acquired little professional knowledge, beyond the parade and drill exercise; and when, at the end of six, eight, or ten years, they thought of looking out for some permanent means of subsistence, or some military commission that might secure them rank and a future provision, they found themselves to have no more seniority than the first day they entered the service.\n", "About ten percent of the officers had first served in the ranks before being commissioned. 
This was a substantially larger proportion than during the rest of the nineteenth century, when the officer corps obtained the character of a closed caste. The background of the rankers can be summarised into three different categories.\n", "Officers in the infantry and cavalry came from a fairly broad social spectrum, although socially dominated by the aristocracy and the gentry; albeit holding only about a quarter of all commissions in the eighteenth century, half of the colonels and generals belonged to these classes. Many sons of officers, including many Huguenots, also become officers; lacking the social status, economic security and connections enjoyed by the sons of the aristocracy and the gentry, their advancement was slower and needed the patronage of superiors from the social elite. Rankers rarely reached beyond subaltern ranks. The majority of the officers were competent professionals with long time in service, without private means, living on their pay. The lifestyle required of an officer with the King's commission meant, however, that the living costs often exceeded the income, with permanent money problems and indebtedness as a result.\n", "Army officers were divided between those whose education had ended at the Army Academy (a secondary school) and those who had advanced on to the prestigious Army War College. The latter group formed the elite of the officer corps, while officers of the former group were effectively barred by tradition from advancement to staff positions. A number of these lesser-privileged officers formed the army's contribution to the young, highly politicized group often referred to as the .\n" ]
Did the discovery of the Higgs particle actually prove or disprove either Super-symmetry or the Multiverse theories?
Nope. It confirmed a prediction of the [Standard Model of Particle Physics](_URL_0_). The Standard Model says nothing about supersymmetry (often abbreviated SUSY) or the multiverse. SUSY is a hypothetical extension to the Standard Model - an elegant one, but to date there is no experimental evidence for it. In fact, the Large Hadron Collider has substantially constrained the properties SUSY could have, though it has not disproved it. The multiverse is a prediction of some string theory models, which are currently so far beyond the reach of any imaginable experiment as to sit almost outside the bounds of science.
[ "The resulting electroweak theory and Standard Model have correctly predicted (among other discoveries) weak neutral currents, three bosons, the top and charm quarks, and with great precision, the mass and other properties of some of these. Many of those involved eventually won Nobel Prizes or other renowned awards. A 1974 paper in \"Reviews of Modern Physics\" commented that \"while no one doubted the [mathematical] correctness of these arguments, no one quite believed that nature was diabolically clever enough to take advantage of them\". By 1986 and again in the 1990s it became possible to write that understanding and proving the Higgs sector of the Standard Model was \"the central problem today in particle physics.\" \n", "In the 1960s, Higgs proposed that broken symmetry in electroweak theory could explain the origin of mass of elementary particles in general and of the W and Z bosons in particular. This so-called Higgs mechanism, which was proposed by several physicists besides Higgs at about the same time, predicts the existence of a new particle, the Higgs boson, the detection of which became one of the great goals of physics. On 4 July 2012, CERN announced the discovery of the boson at the Large Hadron Collider. The Higgs mechanism is generally accepted as an important ingredient in the Standard Model of particle physics, without which certain particles would have no mass.\n", "Although these ideas did not gain much initial support or attention, by 1972 they had been developed into a comprehensive theory and proved capable of giving \"sensible\" results that accurately described particles known at the time, and which, with exceptional accuracy, predicted several other particles discovered during the following years. During the 1970s these theories rapidly became the Standard Model of particle physics. 
There was not yet any direct evidence that the Higgs field existed, but even without proof of the field, the accuracy of its predictions led scientists to believe the theory might be true. By the 1980s the question of whether or not the Higgs field existed, and therefore whether or not the entire Standard Model was correct, had come to be regarded as one of the most important unanswered questions in particle physics.\n", "As part of another longstanding scientific dispute, Hawking had emphatically argued, and bet, that the Higgs boson would never be found. The particle was proposed to exist as part of the Higgs field theory by Peter Higgs in 1964. Hawking and Higgs engaged in a heated and public debate over the matter in 2002 and again in 2008, with Higgs criticising Hawking's work and complaining that Hawking's \"celebrity status gives him instant credibility that others do not have.\" The particle was discovered in July 2012 at CERN following construction of the Large Hadron Collider. Hawking quickly conceded that he had lost his bet and said that Higgs should win the Nobel Prize for Physics, which he did in 2013.\n", "In late 2012, \"Time\", Forbes, \"Slate\", \"NPR\", and others announced incorrectly that the existence of the Higgs boson had been confirmed. Numerous statements by the discoverers at CERN and other experts since July 2012 had reiterated that a particle was discovered but it was not yet confirmed to be a Higgs boson. It was only in March 2013 that it was announced officially. This was followed by the making of a documentary film about the hunt.\n", "This confirmed answer proved the existence of the hypothetical Higgs field—a field of immense significance that is hypothesised as the source of electroweak symmetry breaking and the means by which elementary particles acquire mass. Symmetry breaking is considered proven but confirming exactly \"how\" this occurs in nature is a major unanswered question in physics. 
Proof of the Higgs field (by observing the associated particle) validates the final unconfirmed part of the Standard Model as essentially correct, avoiding the need for alternative sources for the Higgs mechanism. Evidence of its properties is likely to greatly affect human understanding of the universe and open up \"new\" physics beyond current theories.\n", "The possibility of discovering a Higgs-like boson played a crucial role in the conceptual design of CMS, and served as a benchmark to test the performance of the experiment. In 1990 Virdee and a colleague, Christopher Seez, carried out the first detailed simulation studies of the most plausible way to detect the SM Higgs boson in the low-mass region in the environment of the LHC: via its decay into two photons. Understanding that dense scintillating crystals offer arguably the best possibility of achieving excellent energy resolution, Virdee made a compelling case for the use of lead tungstate scintillating crystals (PbWO4) for the electromagnetic calorimeter of CMS and then led the team that proved the viability of this technique, a technique that has played a crucial role in the discovery of the new heavy boson, in July 2012. Virdee was deeply involved in this search for the Higgs boson, especially via its two-photon decay mode.\n" ]
why are arabs and people from north africa (egypt, libya, morocco, tunisia, etc.) labeled as "white" or "caucasian" in the us census?
Race is a social construct. There was a time when Russians, the Irish, and Italians weren't considered white either.
[ "North African Arabs ( \"‘Arab Shamal Ifriqiya\") or Maghrebi Arabs ( \"al-‘Arab al-Maghariba\") are the inhabitants of the North African Maghreb region whose native language is a dialect of Arabic and identify as Arab. This ethnic identity is a product of the Arab conquest of North Africa during the Arab–Byzantine wars and the spread of Islam to Africa. The migration of Arab tribes to North Africa in the 11th century was a major factor in the linguistic and cultural Arabization of the Maghreb region, mainly Beni Hassan, Banu Hilal and Banu Sulaym. \n", "Non-Arab and non-Muslim Middle Eastern people, as well as South Asians of different ethnic/religious backgrounds (Hindus, Muslims and Sikhs) have been stereotyped as \"Arabs\" and racialized in a similar manner. The case of Balbir Singh Sodhi, a Sikh who was murdered at a Phoenix gas station by a white supremacist for \"looking like an Arab terrorist\" (because of the turban, a requirement of Sikhism), as well as that of Hindus being attacked for \"being Muslims\" have achieved prominence and criticism following the September 11 attacks.\n", "The 2020 United States Census might allow Middle Easterners and North Africans to write in their ethnicity/race instead of merely marking them as White. Right now, and in the past, Arabs have been marked in the U.S. Census as White. This began in the early twentieth century when Arabs coming to the United States successfully petitioned to be marked as White in order to avoid entry quotas and have a greater chance of achieving success and avoiding discrimination.\n", "There are prominent Arab non-Muslim minorities in the Arab world. These minorities include the Arab Christians in Lebanon, Syria, Palestine, Jordan, Egypt, Iraq and Kuwait, among other Arab countries. There are also sizable minorities of Arab Jews, Druze, and nonreligious. Most Arabs are Caucasian. 
Exceptions are Mauritanian, Sudanese, Eritrean, Somali, and Comoran Arabs.\n", "The characterization of Middle Eastern and North African Americans as white has been a matter of controversy. In the early 20th century, peoples of Arab descent were sometimes denied entry into the United States because they were characterized as nonwhite. In 1944, the law changed, and Middle Eastern and North African peoples were granted white status. The U.S. Census is currently revisiting the issue, and considering creating a separate racial category for Middle Eastern and North African Americans in the 2020 Census. \n", "It is a response from Berber activists to those Algerians and Moroccans who self-identify as \"Arab\" because of their Arabic tongue. North Africa was gradually Arabized with the spread of Islam in the 7th century AD, when the liturgical language Arabic was first brought to the Maghreb. However, the identity of northwestern Africa remained Berber for a long time thereafter. Additionally, even though the process of Arabization began with these early invasions, many large parts of North Africa were only recently Arabized like the Aurès (Awras) mountains in the 19th and 20th centuries. Although, the fertile plains of North Africa seem to have been (at least partly) Arabized in the 11th century with the emigration of the Banu Hilal tribes from Arabia. The mass education and promotion of Arabic language and culture through schools and mass media, during the 20th century, by the Arabist governments of North Africa, is regarded as the strongest Arabization process in North Africa ever.\n", "The majority of self-identifying Arab Americans are Eastern Rite Catholic or Orthodox Christian, according to the Arab American Institute. On the other hand, most American Muslims are black (African Americans or Sub-Saharan Africans) or of South Asian (Indian, Pakistani or Bangladeshi) origin.\n" ]
why do older tvs have a weird screen effect while being viewed on another camera?
Because the refresh rate of the TV screen interacts with the frame rate of the camera: the camera samples the screen's refresh cycle at a mismatched rate, which shows up as an "animation" of the refresh itself - the line moving up or down the screen. It's the same temporal aliasing that makes a car's wheels appear to spin in the wrong direction on film, or water droplets under a strobe light appear to travel up instead of down.
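As a rough illustration (a sketch, not from the original answer), the speed at which the band drifts across the screen is just the difference between the TV's refresh rate and the nearest multiple of the camera's frame rate:

```python
def beat_frequency(refresh_hz, camera_fps):
    """Apparent drift rate (Hz) of the refresh band when a screen
    refreshing at refresh_hz is filmed at camera_fps (temporal aliasing)."""
    # The camera "strobes" at camera_fps; the band appears to move at the
    # offset between the refresh rate and the closest multiple of that rate.
    k = round(refresh_hz / camera_fps)
    return abs(refresh_hz - k * camera_fps)

# A 60 Hz screen filmed at 25 fps: nearest multiple is 50 Hz,
# so the band drifts at about 10 Hz.
print(beat_frequency(60, 25))  # → 10
# Filmed at 30 fps, the rates line up exactly and the band stands still.
print(beat_frequency(60, 30))  # → 0
```

This is the same calculation that explains the wagon-wheel effect: when the beat frequency is small and negative relative to the true rotation, the wheel appears to spin slowly backwards.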
[ "Early analogue televisions varied in the displayed image because of manufacturing tolerance problems. There were also effects from the early design limitations of power supplies, whose DC voltage was not regulated as well as in later power supplies. This could cause the image size to change with normal variations in the AC line voltage, as well as a process called blooming, where the image size increased slightly when a brighter overall picture was displayed due to the increased electron beam current causing the CRT anode voltage to drop. Because of this, TV producers could not be certain where the visible edges of the image would be. In order to compensate, they defined three areas:\n", "A CRT computer monitor with a low refresh rate (<70Hz) or a CRT television can cause similar problems because the image has a visible flicker. Aging CRTs also often go slightly out of focus, and this can cause eye strain. LCDs do not go out of focus but are also susceptible to flicker if the backlight for the LCD uses PWM for dimming. This causes the backlight to turn on and off for shorter intervals as the display becomes dimmer, creating noticeable flickering which causes eye fatigue.\n", "Older televisions can display less of the space outside of the safe area than ones made more recently. Flat panel screens, plasma displays and liquid crystal display (LCD) screens generally can show most of the picture outside the safe areas.\n", "The image may seem garbled, poorly saturated, of poor contrast, blurry or too faint outside the stated viewing angle range, the exact mode of \"failure\" depends on the display type in question. For example, some projection screens reflect more light perpendicular to the screen and less light to the sides, making the screen appear much darker (and sometimes colors distorted) if the viewer is not in front of the screen. 
Many manufacturers of projection screens thus define the viewing angle as the angle at which the luminance of the image is exactly half of the maximum. With LCD screens, some manufacturers have opted to measure the contrast ratio, and report the viewing angle as the angle where the contrast ratio exceeds 5:1 or 10:1, giving minimally acceptable viewing conditions.\n", "Many televisions and monitors automatically degauss their picture tube when switched on, before an image is displayed. The high current surge that takes place during this automatic degauss is the cause of an audible \"thunk\" or loud hum, which can be heard (and felt) when televisions and CRT computer monitors are switched on. Visually, this causes the image to shake dramatically for a short period of time. A degauss option is also usually available for manual selection in the operations menu in such appliances.\n", "BULLET::::- As of 2012, most implementations of LCD backlighting use pulse-width modulation (PWM) to dim the display, which makes the screen flicker more acutely (this does not mean visibly) than a CRT monitor at 85 Hz refresh rate would (this is because the entire screen is strobing on and off rather than a CRT's phosphor sustained dot which continually scans across the display, leaving some part of the display always lit), causing severe eye-strain for some people. Unfortunately, many of these people don't know that their eye-strain is being caused by the invisible strobe effect of PWM. This problem is worse on many LED-backlit monitors, because the LEDs switch on and off faster than a CCFL lamp.\n", "The display system, while innovative with its projected information directly in the image, unfortunately, made the viewfinder darker due to the LCD overlay. The LCD elements were not lit, so it was impossible to see the LCD overlay—including focus areas—in the dark.\n" ]
why do some offices have a pc connected to a virtual machine?
For my firm, working from a virtual server means that updates are significantly easier. Having to update a program individually on 600+ computers would be an IT nightmare (for obvious security reasons, workers do not have the rights to install programs themselves). Working on a virtual machine allows IT to update much more quickly and smoothly.
[ "Some systems provide user interface remotely with the help of a serial (e.g. RS-232, USB, I²C, etc.) or network (e.g. Ethernet) connection. This approach gives several advantages: extends the capabilities of embedded system, avoids the cost of a display, simplifies BSP and allows one to build a rich user interface on the PC. A good example of this is the combination of an embedded web server running on an embedded device (such as an IP camera) or a network router. The user interface is displayed in a web browser on a PC connected to the device, therefore needing no software to be installed.\n", "A computer-on-module (COM) is a complete computer built on a single circuit board. They are often used as embedded systems due to their small physical size and low power consumption. Gumstix is one manufacturer of COMs.\n", "Since the advent and subsequent popularization of the personal computer, few genuine hardware terminals are used to interface with computers today. Using the monitor and keyboard, modern operating systems like Linux and the BSD derivatives feature virtual consoles, which are mostly independent from the hardware used.\n", "Some companies, such as TeleTypesetting Co. created software and hardware interfaces between personal computers like the Apple II and IBM PS/2 and phototypesetting machines which provided computers equipped with it the capability to connect to phototypesetting machines.\n", "Founded by Alex Vasilevsky, Virtual Computer is a venture-backed software company in the Boston area that produces desktop virtualization products, which combine centralized management with local execution on a hypervisor running on PCs. Virtual Computer has developed a type-1 hypervisor that runs directly on end-user PCs, delivering native PC performance and mobility. By running the workload on the PC, Virtual Computer enables companies to have centralized management without servers, storage, and networking required for server-hosted VDI. 
The technology supports shared image management, enabling an IT professional to manage thousands of desktops and laptops the same way that they would manage one.\n", "Virtual machine techniques enable several operating systems (\"guest\" operating systems) or other software to run on the same computer so that each thinks it has a whole computer to itself, and each of these simulated whole computers is called a \"virtual machine\". The operating system which really controls the computer is usually called a hypervisor. Two of the major components of the hypervisor are:\n", "COMs are complete embedded computers built on a single circuit board. The design is centered on a microprocessor with RAM, input/output controllers and all other features needed to be a functional computer on the one board. However, unlike a single-board computer, the COM usually lacks the standard connectors for any input/output peripherals to be attached directly to the board.\n" ]
how can a state be projected to be won by a candidate with only 1% of the polling reporting in?
Because projections aren't based only on the votes reported so far. Networks combine exit polls, pre-election polling, historical voting patterns, and the demographics of which precincts have reported. If a state has gone the same way by a wide margin for decades and the exit polls agree, it can be called with only 1% of the vote in; genuinely close states are left uncalled until far more of the vote has been counted.
[ "This article is a collection of statewide polls for the 2016 United States presidential election. The polls listed here provide early data on opinion polling between the Democratic candidate, the Republican candidate, the Libertarian candidate, and the Green candidate. Prior to the parties' conventions, presumptive candidates were included in the polls. Not all states will conduct polling for the election due to various factors. States that are considered swing states usually put out more polls as more attention is given to the results. For determining a statistical tie, the margin of error provided by the polling source is applied to the result for each candidate. \n", "BULLET::::- State winner-take-all laws encourage candidates to focus disproportionately on a limited set of swing states (and in the case of Maine and Nebraska, swing districts), as small changes in the popular vote in those areas produce large changes in the electoral college vote. For example, in the 2016 election, a shift of 2,736 votes (or less than 0.4% of all votes cast) toward Donald Trump in New Hampshire would have produced a 4 electoral vote gain for his campaign. A similar shift in any other state would have produced no change in the electoral vote, thus encouraging the campaign to focus on New Hampshire above other states. A study by FairVote reported that the 2004 candidates devoted three quarters of their peak season campaign resources to just five states, while the other 45 states received very little attention. The report also stated that 18 states received no candidate visits and no TV advertising. This means that swing state issues receive more attention, while issues important to other states are largely ignored.\n", "The following table records the official vote tallies for each state for those presidential candidates who were listed on ballots in enough states to have a theoretical chance for a majority in the Electoral College. 
State popular vote results are from the official Federal Election Commission report. The column labeled \"Margin\" shows Obama's margin of victory over McCain (the margin is negative for states and districts won by McCain).\n", "Another way to measure how much a state's results reflect the national average is how far the state deviates from the national results. The states with the least deviation from a two-party presidential vote from 1896 to 2012 include:\n", "The election was held using the absolute majority system, under which a candidate had to receive over 50% of the popular vote to be elected. If no candidate received over 50% of the vote, a joint session of the National Congress would vote on the two candidates that received the most votes.\n", "Candidate was determined by a combination of votes from an evaluation commission based on 4 debates, held in different region of the country (40%), votes from the party members (30%), and public opinion polls (30%).\n", "The required percentages to win the Presidential Election was reduced from 45 to 40 percent. The electoral law states that a participating candidate must obtain a relative majority of at least 40 percent of the vote to win a presidential election. However, a candidate may win by obtaining at least 35 percent of the vote, with at least a five percent margin over the second-place finisher. The law also established a second-round runoff election if none of the candidates won in the first round.\n" ]
why is democracy the go to political system despite its inherent instability with every election. what made it a better choice than something else?
The instability of every election is the strength of a democratic process. It means that, at least in theory, one person or group of people can't entrench themselves and turn the government into their own machine and the country into their own kingdom. The people get a mandatory opportunity to replace those in power every few years - again, in theory, meaning that if we are dissatisfied with them, we can replace them. Of course, over time, people who crave that kind of enduring power have created all sorts of ways to remain in power despite the rules: by coup, by election fraud, or simply through a two-party system where both parties talk very differently but act very similarly, and we get to pick a different figurehead every few years.
[ "More recently, democracy is criticised for not offering enough political stability. As governments are frequently elected on and off there tends to be frequent changes in the policies of democratic countries both domestically and internationally. Even if a political party maintains power, vociferous, headline grabbing protests and harsh criticism from the popular media are often enough to force sudden, unexpected political change. Frequent policy changes with regard to business and immigration are likely to deter investment and so hinder economic growth. For this reason, many people have put forward the idea that democracy is undesirable for a developing country in which economic growth and the reduction of poverty are top priorities.\n", "More recently, democracy is criticized for not offering enough political stability. As governments are frequently elected on and off there tend to be frequent changes in the policies of democratic countries both domestically and internationally. Even if a political party maintains power, vociferous, headline grabbing protests and harsh criticism from the mass media are often enough to force sudden, unexpected political change. Frequent policy changes with regard to business and immigration are likely to deter investment and so hinder economic growth. For this reason, many people have put forward the idea that democracy is undesirable for a developing country in which economic growth and the reduction of poverty are top priority. However, Anthony Downs argued that the political market works much the same way as the economic market, and that there could potentially be an equilibrium in the system because of democratic process. However, he eventually argued that imperfect knowledge in politicians and voters prevented the reaching of that equilibrium.\n", "Democracy is also criticised for frequent elections due to the instability of coalition governments. 
Coalitions are frequently formed \"after\" the elections in many countries (for example India) and the basis of alliance is predominantly to enable a viable majority, not an ideological concurrence.\n", "BULLET::::- Democratic decisions are not necessarily any better-made or more efficient, than bureaucratic or entrepreneurial ones; at most, democracy allows for errors to be corrected more easily, and permits bad managers to be ousted more easily, instead of bad managers becoming entrenched in positions of power.\n", "The positive changes of democracy to economic growth such as delegation of authority and regulations of social conflicts heavily outweigh the negative and restrictive effects, especially when compared to autocracy. One of the main reasons for this is that society, i.e. voters are able to support difficult trade offs and changes when there is no perceived alternative. This is primarily true in countries with a higher level of education. So it ties the development level of a country as one of the decisive factors to undergo positive democratic changes and reforms. Thus, countries that embark in democratization at higher levels of education are more likely than not to continue their development under democracy.\n", "Voters may not be educated enough to exercise their democratic rights prudently. Politicians may take advantage of voters' irrationality, and compete more in the field of public relations and tactics, than in ideology. While arguments against democracy are often taken by advocates of democracy as an attempt to maintain or revive traditional hierarchy and autocratic rule, many extensions have been made to develop the argument further. In Lipset's 1959 essay about the requirements for forming democracy, he found that almost all emerging democracies provided good education. 
However, education alone cannot sustain a democracy, though Caplan did note in 2005 that as people become educated, they think more like economists.\n", "Democracy is one of the propositions that many religious people are afraid of approaching. According to the philosopher of religions Abdolkarim Soroush we do not have one democracy from ancient Greece to today, but many of them, hence there is a plurality of democracies in the international community. Democracy prevailed in different eras depending on the conditions of the time. What alters the hue and color of democracy is a society’s specific characteristics and elements.\n" ]
How many calories does the human brain consume in a day?
Your brain runs around ~~10 Watts~~. 10 Watts = 10 Joules per sec. There are 4.184 Joules in a calorie (little c, not Calorie, which is 1000 calories). That's 2.39 calories per second. There are 86400 seconds in a day. 86400 x 2.39 = 206496 calories, or ~~**206.5 Calories per day.**~~ EDIT: I was going off of some research that I had done about a week ago, namely [#30 in this list](_URL_0_). Being lazy as I am, I didn't read into it and just remembered the 10 watts. As several people in the comments are saying, the power is closer to 20 watts. [WolframAlpha](_URL_1_) is one source of this other wattage. So double my original estimate = 413 Calories. **tl;dr: 413 Cal, not 206.5**
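The arithmetic above can be packaged as a small conversion helper (a sketch; the 10 W and 20 W estimates and the 4.184 J/cal conversion come from the answer itself):

```python
SECONDS_PER_DAY = 86_400
JOULES_PER_KCAL = 4_184  # 1 food Calorie (kcal) = 4184 joules

def brain_kcal_per_day(watts):
    """Convert a steady power draw in watts to food Calories per day."""
    joules_per_day = watts * SECONDS_PER_DAY
    return joules_per_day / JOULES_PER_KCAL

# The original 10 W estimate gives ~206.5 Cal/day;
# the revised 20 W estimate doubles it to ~413 Cal/day.
print(round(brain_kcal_per_day(10), 1))  # → 206.5
print(round(brain_kcal_per_day(20), 1))  # → 413.0
```

Since Calories per day scale linearly with watts, doubling the power estimate doubles the daily Calorie figure, which is exactly the correction made in the edit.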
[ "A number of theories in evolutionary psychology that are hinged on the assumption that sheer number of calories constitute the only important bottleneck in nutrition are challenged by research on hidden hunger, types of malnutrition in which deficits of specific essential micronutrients cause diseases or even death despite a suitable number of calories. Comparisons between species show that although human brains consume more nutrients than the brains of other species, human brains consuming roughly 20% of the body's total calory requirements in adult humans and 60% in young children, there are other organs in many other species that consume more calories than their human counterparts. This means that humans do not stand out in requirements for calories at any stage of life, though human brains stand out in requiring higher amounts of many different essential nutrients while other organs in other species may require higher amounts of two or three specific micronutrients. While studies of blood flow in the brain's dura mater in fossil humans show a negligible difference in oxygen and with it caloric requirements between Neanderthals and modern humans, the fact that some Neanderthal groups in Belgium lived exclusively of large land animal meat while other Neanderthal groups in Spain lived exclusively of plants that were present there at the time with a much narrower range of nutrients than the diets vegans eat today shows that although Neanderthals were capable of varying their diets, they could also survive off non-varied diets that would cause lethal deficits in modern humans. Since many micronutrient deficits in modern humans cause neurological symptoms, this is explainable as a result of less flexible synapses in Neanderthal brains requiring lower levels of many specific mincronutrients than the highly flexible synapses of modern humans. 
This contradicts the claim that human females were under unique selection pressure to evolve curvy shapes for fat storage for fetal brain development, as fat would only store calories and not micronutrients which could be stored without affecting body shape and nonhuman animals with other high fetal calory requirements do not have curvy females. The claim that human females evolved large breasts to feed infants needing many calories is also challenged, empirically citing the human example that while most asian women have small breasts, asian people do not have smaller brains than other people and that explaining it away as a \"trade-off\" would be a misuse of the term as the observation is one trait (brain size) being unaffected by a dramatic reduction of another trait (breast size) as opposed to the correct definition of \"trade-off\" which is an otherwise adaptive trait being reduced by a change of something else. The distribution of essential nutrients between different types of food varied dramatically between regions in the paleolithic before the exchange of breeding stock of domestic and feral plants and animal species over long distances at the dawn of agriculture and the many micronuitrients required in higher levels by sapiens than by archaics meant that in every region, one or more types of food became narrower bottlenecks for sapiens group size than for archaic group size though the specific bottleneck food varied from region to region. This contradicts evolutionary psychology's claim that sapiens evolved in larger group sizes and since many essential nutreients are in types of food that cannot be prevented from going rancid in short times while other essential nutrients degrade fast even if the food in which they are contained does not become rancid, trade over long distances could not address the problem. 
The discovery of stone tools further from the origin of the stone at sapiens sites than at archaic sites is therefore not explainable by trade between tribes, but can be explained by people moving further to eat the essential nutrients, and by the same movement patterns in other sapiens groups making them less capable of consistent territorial defense than archaics were, allowing sapiens to move farther and take their tools with them. It also challenges evolutionary psychology's claim of a universal exchange value of animal versus vegetable food that would have maintained a universal division of labour between hunting men and gathering women and/or a need for stable pair bonds in a context of paying for guard services with food within the tribe, as the most valuable essential micronutrient food would have been animal in some regions yet vegetable in other regions, and such differences were commonly found between different localities within the relatively large part of Africa in which Homo sapiens evolved as a number of interbreeding groups.\n", "The range within which the adult brain in all animals regardless of body size consumes energy as a percentage of the body's energy is roughly 2% to 8%. The only exceptions of animal brains using more than 10% (in terms of O2 intake) are a few primates (11–13%) and humans. However, research published in 1996 in the Journal of Experimental Biology by Göran Nilsson at Uppsala University found that mormyrinae brains utilize roughly 60% of their body O2 consumption. This is due to the combination of large brain size (3.1% of body mass compared to 2% in humans) and them being ectothermic.\n", "The total energy radiated in one day is about 8 MJ, or 2000 kcal (food calories). Basal metabolic rate for a 40-year-old male is about 35 kcal/(m²·h), which is equivalent to 1700 kcal per day, assuming the same 2 m² area. 
However, the mean metabolic rate of sedentary adults is about 50% to 70% greater than their basal rate.\n", "Because of the blood–brain barrier, getting nutrients to the human brain is especially dependent on molecules that can pass this barrier. The brain itself consumes about 18% of the basal metabolic rate: on a total intake of 1800 kcal/day, this equates to 324 kcal, or about 80 g of glucose. About 25% of total body glucose consumption occurs in the brain.\n", "Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The human brain is approximately 2% of the human body mass and uses 20–25% of the total energy expenditure. Therefore, mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Insufficient intake of selected vitamins, or certain metabolic disorders, affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect neurotransmission, synaptic plasticity, and cell survival.\n", "The brain consumes up to 20% of the energy used by the human body, more than any other organ. In humans, blood glucose is the primary source of energy for most cells and is critical for normal function in a number of tissues, including the brain. The human brain consumes approximately 60% of blood glucose in fasted, sedentary individuals. Brain metabolism normally relies upon blood glucose as an energy source, but during times of low glucose (such as fasting, endurance exercise, or limited carbohydrate intake), the brain uses ketone bodies for fuel with a smaller need for glucose. The brain can also utilize lactate during exercise. The brain stores glucose in the form of glycogen, albeit in significantly smaller amounts than that found in the liver or skeletal muscle. 
Long-chain fatty acids cannot cross the blood–brain barrier, but the liver can break these down to produce ketone bodies. However, short-chain fatty acids (e.g., butyric acid, propionic acid, and acetic acid) and the medium-chain fatty acids, octanoic acid and heptanoic acid, can cross the blood–brain barrier and be metabolized by brain cells.\n", "The energy input to the human body is in the form of food energy, usually quantified in kilocalories [kcal] or kiloJoules [kJ=kWs]. This can be related to a certain distance travelled and to body weight, giving units such as kJ/(km∙kg). The rate of food consumption, i.e. the amount consumed during a certain period of time, is the input power. This can be measured in kcal/day or in J/s = W (1000 kcal/d ~ 48.5 W).\n" ]
Can a computer simulation create itself inside itself?
A computer can emulate another computer (check [this](_URL_0_) out!). A computer can, in fact, emulate a simplified version of itself. The only problem is that as the load on the emulated computer (or CPU) approaches the maximum capacity of the *real* computer, the number of states that can be simulated shrinks toward one. Eventually, with the emulated computer under 100% load, the emulation will halt - unable to continue, because it would require more memory and processing power than the *real* computer can provide. /u/Begging4Bacon explained it very well: > The computer would have to be able to simulate itself in a random state. The random state would take up all the memory, leaving none for the hardware itself.
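To make the memory argument concrete, here's a toy sketch (my own illustration, not any real emulator): each nesting level of emulation needs some bookkeeping memory of its own, so the innermost simulated machine always has strictly less to work with than its host.

```python
def usable_cells(host_cells, emulator_overhead, depth):
    """Memory cells left for the innermost simulated machine when each
    nesting level spends `emulator_overhead` cells on its own bookkeeping.
    A machine simulating its FULL state at depth 1 would already need
    host_cells + emulator_overhead cells, which it doesn't have -- so each
    inner machine must be a smaller, simplified version of its host."""
    cells = host_cells
    for _ in range(depth):
        cells -= emulator_overhead
        if cells <= 0:
            return 0  # nesting bottoms out: nothing left to simulate with
    return cells

print(usable_cells(1024, 128, 1))  # 896 cells for the first inner machine
print(usable_cells(1024, 128, 8))  # 0 -- the recursion runs out of room
```

The exact numbers are arbitrary; the point is only that the available state shrinks monotonically with depth, which is why full-fidelity self-simulation is impossible.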
[ "Simulation is also used in computer games and animation and can be accelerated by using a physics engine, the technology used in many powerful computer graphics software programs, like 3ds Max, Maya, Lightwave, and many others to simulate physical characteristics. In computer animation, things like hair, cloth, liquid, fire, and particles can be easily modeled, while the human animator animates simpler objects. Computer-based dynamic animation was first used at a very simple level in the 1989 Pixar short film \"Knick Knack\" to move the fake snow in the snowglobe and pebbles in a fish tank.\n", "A discrete computer simulation, or simply a computer simulation, is a computer program that tries to reproduce, for pedagogical or scientific purposes, a natural phenomenon through the visualization of the different states that it can have. Each of these states is described by a set of variables that change in time due to the iteration of a given algorithm.\n", "In computer science, a simulation is a computation of the execution of some appropriately modelled state-transition system. Typically this process models the complete state of the system at individual points in a discrete linear time frame, computing each state sequentially from its predecessor. Models for computer programs or VLSI logic designs can be very easily simulated, as they often have an operational semantics which can be used directly for simulation.\n", "Advanced computer programs can simulate power system behavior , weather conditions, electronic circuits, chemical reactions, mechatronics , heat pumps, feedback control systems, atomic reactions, even complex biological processes. In theory, any phenomena that can be reduced to mathematical data and equations can be simulated on a computer. Simulation can be difficult because most natural phenomena are subject to an almost infinite number of influences. 
One of the tricks to developing useful simulations is to determine which are the most important factors that affect the goals of the simulation.\n", "Computer simulations are constructed to emulate a physical system. Because these are meant to replicate some aspect of a system in detail, they often do not yield an analytic solution. Therefore, methods such as discrete event simulation or finite element solvers are used. A computer model is used to make inferences about the system it replicates. For example, climate models are often used because experimentation on an earth sized object is impossible.\n", "Computer simulation is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, and computational physics, chemistry and biology; human systems in economics, psychology, and social science; and in the process of engineering and new technology, to gain insight into the operation of those systems, or to observe their behavior. The simultaneous visualization and simulation of a system is called visulation.\n", "Less theoretically, an interesting application of computer simulation is to simulate computers using computers. In computer architecture, a type of simulator, typically called an \"emulator\", is often used to execute a program that has to run on some inconvenient type of computer (for example, a newly designed computer that has not yet been built or an obsolete computer that is no longer available), or in a tightly controlled testing environment (see \"Computer architecture simulator\" and \"Platform virtualization\"). For example, simulators have been used to debug a microprogram or sometimes commercial application programs, before the program is downloaded to the target machine. 
Since the operation of the computer is simulated, all of the information about the computer's operation is directly available to the programmer, and the speed and execution of the simulation can be varied at will.\n" ]
Why does a curling rock turn the same direction the rock is spinning? Why does sweeping reduce the curl?
> Why does a curling rock turn the same direction the rock is spinning? I could not find any conclusive evidence either. However, I will point out that the Wikipedia article mentions several alternate explanations of increased friction as skating speed increases, including different skating techniques. I will also point out that the surface of a curling sheet is not smooth - it is roughened by the introduction of pebbles. This makes it behave very differently from smooth ice (in fact, you can't throw a stone very far on smooth ice, because the stone forms a nice seal on the ice, and the resulting suction drastically slows it down). The explanation I'm familiar with is that a quasi-liquid layer is mainly responsible for reducing friction, and this layer affects the front of the stone more than the back, because the contact pressure of a 42-pound stone increases the thickness of that layer. This makes any friction at the back dominate over that at the front. For a clockwise curl, the back of the stone is pushing _left_ on the ice, which makes the stone curl right. > Why does sweeping reduce the curl? The answer to this is more certain. The friction from the broom heats up and softens the pebbles on the ice, and this reduces the friction on the stone. This has the effect of extending the distance the stone travels, and it also reduces the curl, because there is less friction on the stone.
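Just to illustrate the friction-asymmetry idea (this is my own back-of-envelope model, not settled curling physics): keep only two contact points, front and back of the running band, and give the back a higher friction coefficient. With the stone moving along +x and +y as the stone's left, a clockwise spin (seen from above) carries the back contact leftward across the ice, so dominant back friction pushes the stone to the right:

```python
import math

def lateral_force(v, omega, r, mu_front, mu_back, N=1.0):
    """Net sideways friction on a spinning stone moving along +x.
    omega > 0 means clockwise seen from above, so the back contact slides
    toward +y (the stone's left) and the front contact toward -y.
    Returns the y-component of friction: positive = push to the left."""
    back = (v, omega * r)    # contact-point velocity relative to the ice
    front = (v, -omega * r)
    def f_y(vel, mu):
        speed = math.hypot(*vel)
        return -mu * N * vel[1] / speed  # kinetic friction opposes sliding
    return f_y(back, mu_back) + f_y(front, mu_front)

# equal friction front and back: the sideways components cancel, no curl
print(lateral_force(2.0, 1.5, 0.065, 0.02, 0.02))        # ~0
# more friction at the back (as argued above): net push to the right
print(lateral_force(2.0, 1.5, 0.065, 0.015, 0.025) < 0)  # True -> curls right
```

The coefficients and speeds are made up; the model only shows that *any* back-heavy friction asymmetry turns clockwise spin into a rightward curl, matching the sign argument in the answer.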
[ "The player can induce a curved path, described as \"curl\", by causing the stone to slowly turn as it slides. The path of the rock may be further influenced by two sweepers with brooms, who accompany it as it slides down the sheet and sweep the ice in front of the stone. \"Sweeping a rock\" decreases the friction, which makes the stone travel a straighter path (with less \"curl\") and a longer distance. A great deal of strategy and teamwork go into choosing the ideal path and placement of a stone for each situation, and the skills of the curlers determine the degree to which the stone will achieve the desired result. This gives curling its nickname of \"chess on ice\".\n", "- : When a rock's running surface travels over a foreign particle such as a hair, causing the rock to deviate from its expected path, usually by increasing friction and thereby the amount of curl\n", "One of the basic technical aspects of curling is knowing when to sweep. When the ice in front of the stone is swept, a stone will usually travel both farther and straighter, and in some situations one of those is not desirable. For example, a stone may be traveling too fast (said to have too much weight) but require sweeping to prevent curling into another stone. The team must decide which is better: getting by the other stone but traveling too far or hitting the stone.\n", "In a rocking, wagging or twisting coordinate the bond lengths within the groups involved do not change. The angles do. Rocking is distinguished from wagging by the fact that the atoms in the group stay in the same plane.\n", "Curling, which is the bending of the shoot or the rolling of the leaf, is a result of over-growth on one side of an organ. Often viral diseases cause such leaf distortions due to irregular growth of the lamina. Extreme reduction of the leaf lamina brings about the symptom known as the Shoe-string effect.\n", "Curling is a sheet metal forming process used to form the edges into a hollow ring. 
Curling can be performed to eliminate sharp edges and increase the moment of inertia near the curled end. Other parts are curled to perform their primary function, such as door hinges.\n", "Rocking stones (also known as logan stones or logans) are large stones that are so finely balanced that the application of just a small force causes them to rock. Typically, rocking stones are residual corestones formed initially by spheroidal weathering and have later been exposed by erosion or glacial erratics left by retreating glaciers. Natural rocking stones are found throughout the world. A few rocking stones might be man-made megaliths.\n" ]
Is damage of radiation linearly dependent of radiation exposure?
It's a difficult question to answer, but for the purposes of radiation protection, cancer risk is considered proportional to dose and is not time dependent, in accordance with the [Linear No-Threshold model](_URL_0_). Now that is certainly not true with biological damage in general. For instance, the spreading out of the UV dose will prevent you from getting sunburned. Radiation-based cancer treatments are usually spread out over the course of multiple days or weeks to allow for normal (non-tumor) tissue recovery. But for *cancer risk specifically*, yes, with caveats you can look up in the Wikipedia entry. Interestingly, some folks think that very small doses of radiation may actually REDUCE your cancer risk (hormesis). But there's not enough evidence to support that at this point, so we still use the linear no-threshold to predict excess cancer risk from radiation. And excessively high doses, well, that will kill you, so I guess at that point it's not really linear anymore...
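Since the linear no-threshold model really is just a straight line through the origin, the whole calculation fits in a couple of lines. This is my own illustration; the 5.5%-per-sievert slope is the widely quoted figure from the cited passages, and the output is a rough population-level regulatory estimate, not a personal prognosis:

```python
def lnt_excess_cancer_risk(dose_sv, slope_per_sv=0.055):
    """Excess lifetime cancer risk under the linear no-threshold model.
    LNT assumes risk depends only on total effective dose, not on how the
    dose is spread over time -- which is exactly the point in the answer."""
    return slope_per_sv * dose_sv

# 100 mSv in one shot or spread over a year: same predicted excess risk
print(lnt_excess_cancer_risk(0.1))  # ~0.0055, i.e. about 0.55%
```

Note that this linearity is only claimed for stochastic cancer risk; deterministic tissue damage (sunburn, acute radiation syndrome) is emphatically not linear in dose, as the answer says.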
[ "The most common impact is stochastic induction of cancer with a latent period of years or decades after exposure. For example, ionizing radiation is the sole cause of chronic myelogenous leukemia. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model posits that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this linear model is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Other stochastic effects of ionizing radiation are teratogenesis, cognitive decline, and heart disease.\n", "Exposure to ionizing radiation is known to increase the future incidence of cancer, particularly leukemia. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model posits that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If the linear model is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. \n", "The deterministic (acute tissue damage) effects that can lead to acute radiation syndrome only occur in the case of acute high doses (≳ 0.1 Gy) and high dose rates (≳ 0.1 Gy/h) and are conventionally not measured using the unit sievert, but use the unit gray (Gy).\n", "Its most common impact is the stochastic induction of cancer with a latent period of years or decades after exposure. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. 
The most widely accepted model posits that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this linear model is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Other stochastic effects of ionizing radiation are teratogenesis, cognitive decline, and heart disease.\n", "- The application of the linear no-threshold model to predict deaths from low levels of exposure to radiation was disputed in a BBC (British Broadcasting Corporation) \"Horizon\" documentary, broadcast on 13 July 2006. It offered statistical evidence to suggest that there is an exposure threshold of about 200 millisieverts, below which there is no increase in radiation-induced disease. Indeed, it went further, reporting research from Professor Ron Chesser of Texas Tech University, which suggests that low exposures to radiation can have a protective effect. The program interviewed scientists who believe that the increase in thyroid cancer in the immediate area of the explosion had been over-recorded, and predicted that the estimates for widespread deaths in the long term would be proved wrong. It noted the view of the World Health Organization scientist Dr Mike Rapacholi that, while most cancers can take decades to manifest, leukemia manifests within a decade or so: none of the previously expected peak of leukemia deaths has been found, and none is now expected. 
Identifying the need to balance the \"fear response\" in the public's reaction to radiation, the program quoted Dr Peter Boyle, director of the IARC: \"Tobacco smoking will cause several thousand times more cancers in the [European] population.\"\n", "According to the linear no-threshold model, any exposure to ionizing radiation, even at doses too low to produce any symptoms of radiation sickness, can induce cancer due to cellular and genetic damage. Under the assumption, survivors of acute radiation syndrome face an increased risk of developing cancer later in life. The probability of developing cancer is a linear function with respect to the effective radiation dose. In radiation-induced cancer, the speed at which the condition advances, the prognosis, the degree of pain, and every other feature of the disease are not believed to be functions of the radiation dosage.\n", "High radiation doses can cause DNA damage. If left unrepaired, this damage can create serious and even lethal chromosomal aberrations. Ionizing radiation can produce reactive oxygen species, which are very damaging to DNA.\n" ]
Flat or disc-shaped planet
The shape of the object doesn't matter for traveling through space - there are no aerodynamics in space because there's no air. However, a naturally formed planet can't be any shape other than (roughly) a sphere: above a certain size, a body's own gravity overwhelms the strength of its rock and pulls it into hydrostatic equilibrium.
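To see roughly where gravity takes over, you can compare the central pressure of a uniform self-gravitating sphere, P_c = (2/3)πGρ²R², with the strength of rock. This is my own back-of-envelope sketch with round numbers, good to an order of magnitude only; it gives a critical radius of a few hundred kilometers, which is about where real solar-system bodies stop being lumpy:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 3000.0    # rough rock density, kg/m^3
strength = 2e8  # rough compressive strength of rock, Pa (~200 MPa)

# Central pressure of a uniform sphere: P_c = (2/3) * pi * G * rho^2 * R^2.
# Setting P_c equal to the rock's strength and solving for R gives the
# radius above which gravity wins and squeezes the body toward a sphere.
R = math.sqrt(3 * strength / (2 * math.pi * G * rho**2))
print(R / 1000)  # a few hundred km
```

Below this size (small asteroids, comets) material strength wins and irregular shapes survive; well above it, only near-spheres are stable, which is why there are no naturally formed disc planets.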
[ "A disk-shaped planet similar to an Alderson disk (though far smaller) served as the home world of the fantasy \"Aysle\" setting (or \"cosm\") of West End Games' \"Torg\" roleplaying game. In contrast with the Alderson disk, the Aysle \"diskworld\" works according to fantasy physics, including a \"gravity plane\" that bisects the disk laterally, so that opposite sides \"fall\" towards the plane. The diskworld of Aysle had a bobbing Sun and multiple inner layers. Both sides of the disk were inhabited, as were the internal layers.\n", "Flatness refers to the shape of a liquid's free surface. On planet Earth, the flatness of a liquid is a function of the curvature of the Earth, and from trigonometry, can be found to deviate from true flatness by approximately 19.6 nanometers over an area of 1 square meter, a deviation which is dominated by the effects of surface tension. This calculation using the Earth's mean radius at sea level, however a liquid will be slightly flatter at the poles.\n", "A biconcave disc — also referred to as a discocyte — is a geometric shape resembling an oblate spheroid with two concavities on the top and on the bottom. It is meta-stable, and involves the continuous adjustment of the asymmetric transbilayer lipid distribution, which is correlated with ATP depletion.\n", "The shape, three-dimensional convex circles, derived from the dot pieces. Irwin wanted them to be circular in order to avoid the corners a square or rectangle would produce. But instead of being flat it needed to be convex to deemphasize the edge. The disc was painted where the disc bulges out (point closest to the viewer) the same color as the wall (point furthest away from the viewer) to give it a floating effect. But the combination of convexity and color made it so the viewer had a difficult time determining whether the disc was convex, concave, or flat. (Originally he painted the discs with dots, but the way they turned out was unsatisfactory. 
He resorted to a spray painting technique, which allowed for the effect that minimized the visibility of the edge.)\n", "Pleuronectiformes (flatfish) are an order of ray-finned fish. The most obvious characteristic of the modern flatfish is their asymmetry, with both eyes on the same side of the head in the adult fish. In some families the eyes are always on the right side of the body (dextral or right-eyed flatfish) and in others they are always on the left (sinistral or left-eyed flatfish). The primitive spiny turbots include equal numbers of right- and left-eyed individuals, and are generally less asymmetrical than the other families. Other distinguishing features of the order are the presence of protrusible eyes, another adaptation to living on the seabed (benthos), and the extension of the dorsal fin onto the head.\n", "Pebbles ranging in size from centimeters up to a meter in size are accreted at an enhanced rate in a protoplanetary disk. A protoplanetary disk is made up of a mix of gas and solids including dust, pebbles, planetesimals, and protoplanets. Gas in a protoplanetary disk is pressure supported and as a result orbits at a velocity slower than large objects. The gas affects the motions of the solids in varying ways depending on their size, with dust moving with the gas and the largest planetesimals orbiting largely unaffected by the gas. Pebbles are an intermediate case, aerodynamic drag causes them to settle toward the central plane of the disk and to orbit at a sub-keplerian velocity resulting in radial drift toward the central star. The pebbles frequently encounter planetesimals as a result of their lower velocities and inward drift. If their motions were unaffected by the gas only a small fraction, determined by gravitational focusing and the cross-section of the planetesimals, would be accreted by the planetesimals. 
The remainder would follow hyperbolic paths, accelerating toward the planetesimal on their approach and decelerating as they recede. However, the drag the pebbles experience grows as their velocities increase, slowing some enough that they become gravitationally bound to the planetesimal. These pebbles continue to lose energy as they orbit the planetesimal causing them to spiral toward and be accreted by the planetesimal.\n", "The medieval Indian texts called the Puranas describe the Earth as a flat-bottomed, circular disk with concentric oceans and continents. This general scheme is present not only in the Hindu cosmologies but also in Buddhist and Jain cosmologies of South Asia. However, some Puranas include other models. For example, the fifth canto of the \"Bhagavata Purana\", includes sections that describe the Earth both as flat and spherical.\n" ]
why more recently than ever do webpages refresh and not actually go back when you hit the back button?
It is an increasingly common bad practice to include a 0-second redirect in a site: as soon as you load a page, it redirects you to a slightly different page they actually want you to see. By itself, this isn't a bad thing - it can help site design. The problem is that if you hit the back button in most browsers, it will take you back to the page with the instant redirect instead of back PAST that page to the page you came from. It is bad design, and SHOULD be easy to avoid, but some web designers are idiots. Source: I'm a web developer.
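You can model why the instant redirect traps the back button with a toy history stack (purely illustrative; the real browser behaviors this mimics are a redirect via `location.href`, which pushes a history entry, versus `location.replace()`, which swaps the current one and is the usual fix):

```python
class History:
    """Tiny model of a browser history stack, not a real API."""
    def __init__(self, start):
        self.stack, self.pos = [start], 0
    def navigate(self, url):
        # like setting location.href: pushes a new entry
        self.stack = self.stack[: self.pos + 1] + [url]
        self.pos += 1
    def replace(self, url):
        # like location.replace(): swaps the current entry, no push
        self.stack[self.pos] = url
    def back(self):
        self.pos = max(self.pos - 1, 0)
        return self.stack[self.pos]

h = History("news.example")
h.navigate("shop.example")            # user clicks a link
h.navigate("shop.example/landing")    # instant redirect that PUSHES an entry
assert h.back() == "shop.example"     # back lands on the redirector -> loop

h2 = History("news.example")
h2.navigate("shop.example")
h2.replace("shop.example/landing")    # redirect that REPLACES its own entry
assert h2.back() == "news.example"    # back escapes the site
```

The domain names are made up. The takeaway matches the answer: the bug isn't in the browser, it's in a redirect that leaves itself sitting in the history stack.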
[ "- If a page redirects too quickly (less than 2-3 seconds), using the \"Back\" button on the next page may cause some browsers to move back to the redirecting page, whereupon the redirect will occur again. This is bad for usability, as this may cause a reader to be \"stuck\" on the last website.\n", "Many web design tutorials also point out that client-side redirecting tends to interfere with the normal functioning of a Web browser's \"back\" button. After being redirected, clicking the back button will cause the user to go back to the redirect page, which redirects them again. Some modern browsers seem to overcome this problem, however, including Safari, Mozilla Firefox and Opera.\n", "Normally JavaScript pushes the redirector site's URL to the browser's history. It can cause redirect loops when users hit the back button. With the following command you can prevent this type of behaviour.\n", "Sometimes a mistake can cause a page to end up redirecting back to itself, possibly via other pages, leading to an infinite sequence of redirects. Browsers should stop redirecting after a certain number of hops and display an error message.\n", "Also, the \"Online\" button is actually a toggle switch, such that if the printer is already online, pressing Online makes the printer go offline and can be used to stop a runaway print job. Pressing Shift-Reset will then reset the printer, clearing the remainder of the unwanted document from the printer's memory, so that it will not continue to print it when brought back on line. (Before resetting the printer, it is necessary to make the computer stop sending data for the print job to the printer, if it hasn't already finished sending that job, through the computer's software. 
Otherwise, when the printer is put back online, it will start receiving the job from somewhere in the middle, which will likely cause the same runaway problem to recur.)\n", "The Button experienced technical issues which caused it to reach zero despite users pressing it in time. This occurred multiple times and was attributed to database errors by reddit's administrators. The outages caused community discontentment and some speculation that the subreddit was being gamed by the administrators. Although The Button was revived within a day of the outages, the administrators of Reddit considered closing The Button experiment early.\n", "The reasoning behind this is due to the lack of time to maintain the site and that the site need a complete restart to gain its former glory. A task that unfortunately is big to undertake within a reasonable amount of time and effort. We all sincerely wish that we could revert time but unfortunately this is the position that we're at.\n" ]
why do people tie shoe laces and toss them over power lines?
Three reasons: 1. Gang members do it to mark their territory ([supposedly](_URL_0_)). 2. Bullies steal someone's sneakers and throw them over power lines to taunt their victim. 3. People take their old sneakers and throw them over power lines because they are inspired by (1.) or just think it's cool, funny, or exciting to do.
[ "With both ends tucked (slipped) it becomes a good way to tie shoelaces, whilst the non-slipped version is useful for shoelaces that are excessively short. It is appropriate for tying plastic garbage or trash bags, as the knot forms a handle when tied in two twisted edges of the bag.\n", "Self-tying shoes (also known as \"self-lacing\" or \"power laces\") are designed to automatically tighten once the user puts them on. Such shoes were initially depicted in the 1989 science fiction film \"Back to the Future II\".\n", "There are several ways to tie a shoelace knot; each starts with the tying of a half hitch, and requires attention or some habitual mechanism for arriving at a knot that is an elaboration of the reef (or square) knot rather than of the granny (or lubber's) knot. One approach is to start by taking, in each hand, the end of the lace that emerges from the uppermost eyelet on that hand's side of the shoe; then passing the \"dominant\" hand's end \"under\" the other end, from front toward back, and dropping each lace on the opposite side from where it started; and in the finishing step again grasping the lace on each side with the hand on that side (perhaps taking time to note that because each end crossed over the shoe before, the laces have switched hands—or vice versa, the hands have switched laces) and again passing the \"dominant\" hand's end \"under\" the other end, from front toward back.\n", "This is the process of running the shoelaces through the holes, eyelets, loops, or hooks to hold together the sides of the shoe with many common lacing methods. There are, in fact, almost two trillion ways to lace a shoe with six pairs of eyelets.\n", "Knitted lace with no bound-off edges is extremely elastic, deforming easily to fit whatever it is draped on. 
As a consequence, knitted lace garments must be blocked or \"dressed\" before use, and tend to stretch over time.\n", "A cable tie (also known as a hose tie, or zip tie, and by the brand names Ty-Rap) is a type of fastener, for holding items together, primarily electrical cables or wires. Because of their low cost and ease of use, cable ties are ubiquitous, finding use in a wide range of other applications. Stainless steel versions, either naked or coated with a rugged plastic, cater for exterior applications and hazardous environments.\n", "Shoe dangling, or shoe flinging, is the practice of throwing shoes whose shoelaces have been tied together so that they hang from overhead wires such as power lines or telephone cables. Once the shoes are tied together, the pair is then thrown at the wires as a sort of bolas.\n" ]
What is the difference between Enthalpy (H) and Heat (Q)?
Enthalpy is a Legendre transform of internal energy. It's a state function, so the change in enthalpy between two states doesn't depend on the path you take between those two states (in fancy words, it's an *exact differential*). Heat is not a state function; a system doesn't "have" a certain amount of heat in it (whereas a system *does* have a certain amount of enthalpy). Furthermore, the heat added to or removed from a system between some initial and final state **does** depend on the path you take between those states (it's an *inexact differential*). For a reversible process at constant pressure (like a lot of the things you do in chemistry), the differential enthalpy change during some process is the same as the heat transferred into or out of the system during that process. That's why in chemistry you often hear terms like "latent heat of fusion" and "enthalpy of fusion" used as if they're interchangeable, because they often *are* in chemistry. The first law of thermodynamics says: dU = ~~d~~Q + ~~d~~W, where the strikethroughs indicate inexact differentials. And the (pressure-volume) work can be written as ~~d~~W = - p dV. But maybe you want to think about energy as a function of pressure rather than volume. That way, if you can keep the pressure constant (like in a laboratory environment), it'll simplify things. If you replace the internal energy U with the enthalpy H = U + pV, you find that dH = dU + p dV + V dp. Now replacing dU with what we have above, dH = ~~d~~Q - p dV + p dV + V dp. The two terms in the middle cancel, and all that remains is dH = ~~d~~Q + V dp. At constant pressure (dp = 0), this says that dH = ~~d~~Q.
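A quick numeric sanity check of dH = ~~d~~Q at constant pressure (my own worked example, for a monatomic ideal gas): heat one mole by 10 K at constant p and compare ΔU + pΔV against the heat n·Cp·ΔT.

```python
R = 8.314          # gas constant, J/(mol K)
n, dT = 1.0, 10.0  # one mole heated by 10 K at constant pressure

Cv = 1.5 * R       # monatomic ideal gas heat capacities
Cp = Cv + R        # Mayer's relation: Cp - Cv = R

dU = n * Cv * dT   # change in internal energy
p_dV = n * R * dT  # the p*dV term, from pV = nRT at constant p
dH = dU + p_dV     # dH = dU + p dV + V dp, with dp = 0

Q = n * Cp * dT    # heat absorbed at constant pressure
assert abs(dH - Q) < 1e-9  # enthalpy change equals heat, as derived above
print(dH)          # ~207.85 J
```

The same numbers also show why heat is path-dependent: at constant volume the same ΔU would cost only n·Cv·ΔT ≈ 124.7 J of heat, even though dU is identical.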
[ "Since the 1920s, it has been recommended practice to use enthalpy to refer to the \"heat content at constant volume\", and to thermal energy when \"heat\" in the general sense is intended, while \"heat\" is reserved for the very specific context of the transfer of thermal energy between two systems.\n", "Enthalpy (H) is the transfer of energy in a reaction (for chemical reactions it is in the form of heat) and ΔH is the change in enthalpy. ΔH is a state function. Being a state function means that ΔH is independent of the processes between initial and final states. In other words, it does not matter what steps we take to get from initial reactants to final products—the ΔH will always be the same. ΔHrxn, or the change in enthalpy of a reaction, has the same value of ΔH as in a thermochemical equation, but is in units of kJ/mol being that it is the enthalpy change per moles of any particular substance in the equation. Values of ΔH are determined experimentally under standard conditions of 1atm and 25 °C (298.15K).\n", "The enthalpy of sublimation, or heat of sublimation, is the heat required to change one mole of a substance from solid state to gaseous state at a given combination of temperature and pressure, usually standard temperature and pressure (STP). The heat of sublimation is usually expressed in kJ/mol, although the less customary kJ/kg is also encountered.\n", "The total enthalpy, \"H\", of a system cannot be measured directly. The same situation exists in classical mechanics: only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, Δ\"H\". 
The Δ\"H\" is a positive change in endothermic reactions, and negative in heat-releasing exothermic processes.\n", "Enthalpy , a property of a thermodynamic system, is equal to the system's internal energy plus the product of its pressure and volume. In a system enclosed so as to prevent matter transfer, for processes at constant pressure, the heat absorbed or released equals the change in enthalpy.\n", "The \"Q\" temperature coefficient is a measure of the rate of change of a biological or chemical system as a consequence of increasing the temperature by 10 °C. There are many examples where the \"Q\" is used, one being the calculation of the nerve conduction velocity and another being calculating the contraction velocity of muscle fibres. It can also be applied to chemical reactions and many other systems.\n", "Heat is defined in physics as the transfer of thermal energy across a well-defined boundary around a thermodynamic system. The thermodynamic free energy is the amount of work that a thermodynamic system can perform. Enthalpy is a thermodynamic potential, designated by the letter \"H\", that is the sum of the internal energy of the system (U) plus the product of pressure (P) and volume (V). Joule is a unit to quantify energy, work, or the amount of heat.\n" ]
Would the water pressure be the same at 100 metres under the surface, even if the water was located in a water tank on land?
Yes. Given negligible differences in air pressure, the pressure at any point in a body of water depends on three things: the density of the liquid, the gravitational acceleration, and the depth below the surface. With the same g and density, the pressure at the same depth will be the same no matter where the water is kept.
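The rule above is just p = ρgh. A quick sketch (freshwater density assumed; seawater would be closer to 1025 kg/m³, so the two real-world cases would differ slightly):

```python
def gauge_pressure(depth_m, density=1000.0, g=9.81):
    """Hydrostatic gauge pressure in pascals: depends only on rho, g, depth."""
    return density * g * depth_m

ocean = gauge_pressure(100.0)  # 100 m down in a lake or sea
tank = gauge_pressure(100.0)   # 100 m down inside a tall tank on land
print(ocean, tank)  # identical: 981000.0 Pa, roughly 9.7 atmospheres
```

Note that the shape or total volume of the container never enters the formula, which is why the tank and the open water give the same answer.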
[ "The density of water causes ambient pressures that increase dramatically with depth. The atmospheric pressure at the surface is 14.7 pounds per square inch or around 100 kPa. A comparable hydrostatic pressure occurs at a depth of only ( for sea water). Thus, at about 10 m below the surface, the water exerts twice the pressure (2 atmospheres or 200 kPa) as air at surface level.\n", "It is worthwhile to note that pressures may often be expressed in units of distance such as feet when diving. For this, note that descending 33 ft in salt water or 33.9 ft in fresh water results in a change of 1 atm, so distance and pressure are used interchangeably in this context.\n", "Example: For a column of fresh water of 8.33 pounds per gallon (lb/U.S. gal) standing still hydrostatically in a 21,000 feet vertical cased wellbore from top to bottom (vertical hole), the pressure gradient would be\n", "Elevated water tank, also known as a water tower, will create a pressure at the ground-level outlet of 1 kPa per 10.2 cm or 1 psi per 2.31 feet of elevation. Thus a tank elevated to 20 metres creates about 200 kPa and a tank elevated to 70 feet creates about 30 psi of discharge pressure, sufficient for most domestic and industrial requirements.\n", "Since buoyant force points upwards, in the direction opposite to gravity, then pressure in the fluid increases downwards. Pressure in a static body of water increases proportionally to the depth below the surface of the water. The surfaces of constant pressure are planes parallel to the surface, which can be characterized as the plane of zero pressure.\n", "Ambient pressure is the pressure in the water around the diver (or the air, with caisson workers etc.). As a diver descends, the ambient pressure increases. At in salt water, it is twice the normal pressure than that at the surface. 
At 40 meters (a common recommended limit for recreational diving) it is 5 times the pressure than at sea level.\n", "In more detail, one can look at how the hydrostatic pressure varies through a static siphon, considering in turn the vertical tube from the top reservoir, the vertical tube from the bottom reservoir, and the horizontal tube connecting them (assuming a U-shape). At liquid level in the top reservoir, the liquid is under atmospheric pressure, and as one goes up the siphon, the hydrostatic pressure decreases (under vertical pressure variation), since the weight of atmospheric pressure pushing the water up is counterbalanced by the column of water in the siphon pushing down (until one reaches the maximal height of a barometer/siphon, at which point the liquid cannot be pushed higher) – the hydrostatic pressure at the top of the tube is then lower than atmospheric pressure by an amount proportional to the height of the tube. Doing the same analysis on the tube rising from the lower reservoir yields the pressure at the top of that (vertical) tube; this pressure is lower because the tube is longer (there is more water pushing down), and requires that the lower reservoir is lower than the upper reservoir, or more generally that the discharge outlet simply be lower than the surface of the upper reservoir. Considering now the horizontal tube connecting them, one sees that the pressure at the top of the tube from the top reservoir is higher (since less water is being lifted), while the pressure at the top of the tube from the bottom reservoir is lower (since more water is being lifted), and since liquids move from high pressure to low pressure, the liquid flows across the horizontal tube from the top basin to the bottom basin. Note that the liquid is under positive pressure (compression) throughout the tube, not tension.\n" ]
what's the possibility of puerto rico becoming the 51st state of the us?
This is a complex issue so I failed miserably at ELI5 so I did an ELI15 instead. I hope that you're OK with this. --- In order for Puerto Rico to become a state two things would need to happen. 1 - Puerto Ricans would need to reach a consensus. Presidential elections are periodic events where things remain pretty much the same. Because of this the best strategy for the losing party is to wait 4 years and try again. A change in status, however, is a life-changing and potentially irreversible event. And if a Puerto Rican group felt that an illegitimate status definition had been imposed on them, a cascade of events that would mirror the [assault on the house of representatives](_URL_0_) and the [Ponce massacre](_URL_1_) could unfold. And no American politician would want to risk that. And you have to keep in mind that in Puerto Rico the main political parties are tied to status definitions. (That's why Puerto Rican politicians make some noise on the subject every once in a while. They're just pandering to their voters.) On the island there is no Democratic or Republican party but a status quo political party (PPD), a statehood political party (PNP), and a minority independence political party (PIP). The status quo party and the statehood party have roughly the same number of supporters so they switch seats in the government once every few years and as a result the status of the island is in a permanent political stalemate. 2 - The US government would need to reach a consensus. The political status of Puerto Rico is a very risky political issue for any American politician. However, the US government knows about the island's political stalemate so they don't have to take any meaningful actions. But in the unlikely event that Puerto Ricans reached consensus every relevant politician would have to ask himself the two following questions: a - Would a Puerto Rican state add more votes to my political party or would it benefit the competition?
Keep in mind that in Puerto Rico there are no Republicans vs. Democrats although the local politicians do affiliate themselves loosely to some extent with those political parties. Republican politicians would reject a new Democratic state and vice versa. b - Would my constituents approve of adding Puerto Rico as a new state? Keep in mind that although many Americans consider Puerto Ricans to be marginalized and underrepresented American citizens others consider them to be a culturally incompatible burden for the American taxpayers. So, even if Puerto Rico could reach a consensus the issue could remain in a stalemate in the US government forever. There's a third option but it is very unlikely so I didn't mention it as a plausible path to statehood. The US could just force a status definition on the island like it has done many times before with many territories. But the political repercussions would be too great in this age of ubiquitous communications so unless a very dramatic series of events occurred this won't happen. tl;dr: It is very unlikely but it makes good news so you'll continue to hear about it regularly. // Disclaimer: I am a Puerto Rican
[ "If this status were granted, Puerto Rico would become the 51st state of the United States. The state would have due representation in the United States Congress with full voting rights; Puerto Rico would be represented in the Senate by two senators and the size of its delegation to the House of Representatives would be determined by its population (Connecticut, which has a similar population, currently has five representatives). Similarly, Puerto Rico would get a population-dependent number of electors to the electoral college for the Presidency (cf. Connecticut's current seven electors). Federal taxes would apply on the island. The apportionment of federal aid to the island would be handled as for other states (increased).\n", "Puerto Rico has been discussed as a potential 51st state of the United States. In a 2012 status referendum a majority of voters, 54%, expressed dissatisfaction with the current political relationship. In a separate question, 61% of voters supported statehood (excluding the 26% of voters who left this question blank). On December 11, 2012, Puerto Rico's legislature resolved to request that the President and the U.S. Congress act on the results, end the current form of territorial status and begin the process of admitting Puerto Rico to the Union as a state. On January 4, 2017, Puerto Rico's new representative to Congress pushed a bill that would ratify statehood by 2025.\n", "If the majority of Puerto Ricans were to choose this option - and only 33% voted for it in 2012 - and if it were granted by the US Congress, Puerto Rico would become a Free Associated State. 
This could give Puerto Rico a similar status to Micronesia, the Marshall Islands, and Palau, countries which currently have a Compact of Free Association with the United States.\n", "Representative Stephanie Murphy of Florida said about a 2018 bill to make Puerto Rico the 51st state, \"The hard truth is that Puerto Rico’s lack of political power allows Washington to treat Puerto Rico like an afterthought, as the federal government’s inadequate preparation for and response to Hurricane Maria made crystal clear.\" According to Governor of Puerto Rico Ricardo Rosselló, \"Because we don’t have political power, because we don’t have representatives, [no] senators, no vote for president, we are treated as an afterthought.\" Rosselló called Puerto Rico the \"oldest, most populous colony in the world\".\n", "If the majority of Puerto Ricans were to choose Free Associated State option—only 33% voted for it in 2012—and if it were granted by the U.S. Congress, Puerto Rico would become a Free Associated State—a virtually independent nation. It would have a political and economical treaty of association with the U.S. that would stipulate all delegated agreements. This could give Puerto Rico a similar status to Micronesia, the Marshall Islands, and Palau, countries which currently have a Compact of Free Association (agreement) with the United States.\n", "The constitution was approved overwhelmingly by nearly 82% of the voters in a popular referendum and ratified by the United States Congress with a few amendments. The United States maintains ultimate sovereignty over Puerto Rico while giving Puerto Ricans a high degree of autonomy. Under this Constitution, Puerto Rico officially identifies as the Commonwealth of Puerto Rico.\n", "If the majority of Puerto Ricans were to choose the Free Association option – and only 33% voted for it in 2012 – and if it were granted by the U.S. Congress, Puerto Rico would become a Free Associated State, a virtually independent nation.
It would have a political and economical treaty of association with the U.S. that would stipulate all delegated agreements. This could give Puerto Rico a similar status to Micronesia, the Marshall Islands, and Palau, countries which currently have a Compact of Free Association with the United States.\n" ]
would it be possible to replace all of your bones with some sort of metal replica?
Short answer: no. For a couple of reasons. Firstly, there are all the biological functions bone has in the body - as a store of calcium, a site for red and white blood cell production, a fat storage location, and probably more I'm forgetting. Secondly, there are the material reasons - bone is an amazing natural composite, a living material. It's constantly breaking itself down and rebuilding in response to your needs, which is what makes replacing parts with metal so complex. With bone, it's rare to have to deal with fatigue fractures from years of constant cyclic loading (your leg joints take multiples of your bodyweight in load each time you take a step), because your body rebuilds the bone constantly. Metal isn't doing that, so it can eventually crack. There's also the question of how it would all be held together - our body knows what to do with bone, how to attach soft tissues to it, how not to attack it with the immune system - with metal (or any other material, really) these things have to be taken into consideration.
[ "As an organic material, bone often does not survive in a way that is archaeologically recoverable. However, under the right conditions, bone tools do sometimes survive and many have been recovered from locations around the world representing time periods throughout history and prehistory. Also many examples have been collected ethnographically, and some traditional peoples, as well as experimental archaeologists, continue to use bone to make tools.\n", "Bone generally has the ability to regenerate completely but requires a very small fracture space or some sort of scaffold to do so. Bone grafts may be autologous (bone harvested from the patient’s own body, often from the iliac crest), allograft (cadaveric bone usually obtained from a bone bank), or synthetic (often made of hydroxyapatite or other naturally occurring and biocompatible substances) with similar mechanical properties to bone. Most bone grafts are expected to be reabsorbed and replaced as the natural bone heals over a few months’ time.\n", "Not all that remains is bone. There may be melted metal lumps from missed jewellery; casket furniture; dental fillings; and surgical implants, such as hip replacements. Breast implants do not have to be removed before cremation. Some medical devices such as pacemakers may need to be removed before cremation to avoid the risk of explosion. Large items such as titanium hip replacements (which tarnish but do not melt) or casket hinges are usually removed before processing, as they may damage the processor. (If they are missed at first, they must ultimately be removed before processing is complete, as items such as titanium joint replacements are far too durable to be ground.) Implants may be returned to the family, but are more commonly sold as ferrous/non-ferrous scrap metal. 
After the remains are processed, smaller bits of metal such as tooth fillings, and rings (commonly known as \"gleanings\") are sieved out and may be later interred in common, consecrated ground in a remote area of the cemetery. They may also be sold as precious metal scrap.\n", "Bone has a unique and well documented natural healing process that normally is sufficient to repair fractures and other common injuries. Misaligned breaks due to severe trauma, as well as treatments like tumor resections of bone cancer, are prone to improper healing if left to the natural process alone. Scaffolds composed of natural and artificial components are seeded with mesenchymal stem cells and placed in the defect. Within four weeks of placing the scaffold, newly formed bone begins to integrate with the old bone and within 32 weeks, full union is achieved. Further studies are necessary to fully characterize the use of cell-based therapeutics for treatment of bone fractures.\n", "Research on artificial bone materials has revealed that bioactive and resorbable silicate glasses (bioglass), glass-ceramics, and calcium phosphates exhibit mechanical properties that similar to human bone. Similar mechanical property does not assure biocompatibility. The body's biological response to those materials depends on many parameters including chemical composition, topography, porosity, and grain size. If the material is metal, there is a risk of corrosion and infection. If the material is ceramic, it is difficult to form the desired shape, and bone can't reabsorb or replace it due to its high crystallinity. Hydroxyapatite, on the other hand, has shown excellent properties in supporting the adhesion, differentiation, and proliferation of osteogenesis cell since it is both thermodynamically stable and bioactive. 
Artificial bones using hydroxyapatite combine with collagen tissue helps to form new bones in pores, and have a strong affinity to biological tissues while maintaining uniformity with adjacent bone tissue. Despite its excellent performance in interacting with bone tissue, hydroxyapatite has the same problem as ceramic in reabsorption due to its high crystallinity. Since hydroxyapatite is processed at a high temperature, it is unlikely that it will remain in a stable state.\n", "Casts of skeletons were also produced, to replace the original bones after taphonomic study, scientific documentation and excavation. In contrast to Pompeii, where casts resembling the body features of the victims were produced by filling the body imprints in the ash deposit with plaster, the shape of corpses at Herculaneum could not be preserved, due to the rapid vapourization and replacement of the flesh of the victims by the hot ash (ca. 500 °C). A cast of the skeletons unearthed within chamber 10 is on display at the Museum of Anthropology in Naples. The most significant and extensive study of a sample of the skeletal remains of the Herculaneum victims is that published by Luigi Capasso in 2001. This study which employed X rays has superseded the earlier work by Bisel \n", "Thorough study of the skeletal remains lasted for over three months. Owsley was eventually able to identify the victim as 18-year-old Steven Hicks, who had disappeared in 1978. The case was particularly difficult, because the victim's body had been cut, broken, and literally chopped into several pieces. Forensics require careful identification, measuring, and matching of various sizes of bone chips, which often calls for the use of scanning electron microscopes to accurately establish the composition of the most minute chip and fragment to confirm that it is actually bone and human remains.\n" ]
who's in charge of coming up with street names and is there any approval by a committee? also, if you were someone that decided street names how did you come up with them?
In private developments in Ohio, it's up to the discretion of the developer. For example, the development I live in AND the street I live ON are both named after the developer's grandson... who lived in the development. The development I used to live in was interesting because every street was named after a winner of the Kentucky Derby. I would guess that the county commissioners name the public streets. This is probably different in all fifty states and I'm sure it's different in other countries.
[ "One notable feature of the town is the naming of some of its streets, and also its occasionally idiosyncratic numbering system. Some streets which pass through the town may thus bear two names (in whichever language). For example, Jean Talon Street, a large East-West thoroughfare crossing Montreal for kilometers, goes a few hundred meters through TMR under the name of Dresden Avenue, only to recover its Montreal name on the other side of the town. This situation has been recently addressed by putting the two names on the street signs. On these few hundred meters, TMR uses a house civic numbering totally different from that of Montreal on either side. This sort of change in the numbering system also occurs on smaller streets shared by both Montreal and TMR (for example, Trenton, Lockhart and Brookfield avenues, where the TMR numbering system decreases from East to West, only to jump from 2 to 2400 on the few meters of the street that still belong to Montreal.\n", "Other streets, mainly \"-gil\", may be named after the street name it diverges from with a systematic number. There are three different types of numbering rules: basic numbering, serial numbering, and other numbering. The purpose of numbering streets is to make street names easier to predict position of it so address users find their destination streets or buildings easily on the maps or the streets. \n", "BULLET::::- Allow useful information to be deduced from the names based on regularities. For instance, in Manhattan, streets are consecutively numbered; with East-West streets called \"Streets\" and North-South streets called \"Avenues\".\n", "Street names reflected this bottom-up emergence, and street names were not coordinated from development to development, even for along the same north-south or east-west line. Some names were used more than once, by different streets across various Denver neighborhoods and surrounding towns. 
There was no universal system for the use of terms like \"street\", \"avenue\", etc. Later, these terms were defined such that \"street\" designated roads running north and south and aligned with the hundreds of the numbering system, with \"court\" for intermediate (non-hundreds) north-south roads and \"way\" for roads which start north-south but curve to intersect with another north-south road; \"avenue\", \"place\", and \"drive\" (respectively) are the corresponding terms for roads running east and west. Major arterials in both directions, however, are often called \"boulevards\", and \"road\" and \"parkway\" also make appearances.\n", "A street name can also include a direction (the cardinal points east, west, north, south, or the quadrants NW, NE, SW, SE) especially in cities with a grid-numbering system. Examples include \"E Roosevelt Boulevard\" and \"14th Street NW\". These directions are often (though not always) used to differentiate two sections of a street. Other qualifiers may be used for that purpose as well. Examples: upper/lower, old/new, or adding \"extension\".\n", "Street numbers can be written as orientation numbers (related to street) or descriptive numbers (unique within the town) or as a combination separated by a slash (descriptive/orientation). Descriptive numbers are also used within small villages that do not have named streets.\n", "Though the nomenclature may initially confuse new arrivals and visitors, most consider the grid system an aid to navigation. Some streets have names, such as State Street, which would otherwise be known as 100 East. Other streets have honorary names, such as the western portion of 300 South, named \"Adam Galvez Street\" (for a local Marine corporal killed in action) or others honoring Rosa Parks, Martin Luther King, Jr., César Chávez, and John Stockton. These honorary names appear only on street signs and cannot be used in postal addresses.\n" ]
what happens to our mind when you spin around?
Nothing happens to your brain itself. You have fluid in your inner ear that helps your brain recognize which way is up, which way is down, and how you're rotating. When you stop spinning the fluid keeps moving and has to re-equilibrate, and while this is happening your brain gets conflicting signals. The result is nausea, vertigo, and that weird effect where your eyes continue to drift in whatever direction you were spinning (nystagmus).
[ "The most common general symptom of having the spins is described by its name: the feeling that one has the uncontrollable sense of spinning, although one is not in motion, which is one of the main reasons an intoxicated person may vomit. The person has this feeling due to impairments in vision and equilibrioception. Diplopia (double vision) or polyplopia are common, as well as the symptoms of motion sickness and vertigo.\n", "The spins (as in having \"the spins\") is an adverse reaction of intoxication that causes a state of vertigo and nausea, causing one to feel as if \"spinning out of control\", especially when lying down. It is most commonly associated with drunkenness or mixing alcohol with other psychoactive drugs such as cannabis. This state is likely to cause vomiting, but having \"the spins\" is not life-threatening unless pulmonary aspiration occurs.\n", "Depending on the perception of the observer, the apparent direction of spin may change any number of times, a typical feature of so-called bistable percepts such as the Necker cube which may be perceived from time to time as seen from above or below. These alternations are spontaneous and may randomly occur without any change in the stimulus or intention by the observer. However some observers may have difficulty perceiving a change in motion at all.\n", "The spinning sensation experienced from BPPV is usually triggered by movement of the head, will have a sudden onset, and can last anywhere from a few seconds to several minutes. The most common movements people report triggering a spinning sensation are tilting their heads upwards in order to look at something, and rolling over in bed.\n", "When humans turn the head from left to right, the image projected on the retinas moves in the direction opposite to the head movement. 
Without the head turning, such an image displacement would appear as something moving; but when it is correlated with the turning of the head, no movement of the environment is seen. However, what if the image were to move in coordination with the head movement, but the extent of that movement were less (or more) than would be usual for the head movement in question? Would the anomaly be noticed?\n", "The movement of shaking or rotating the head left and right happens almost entirely at the joint between the atlas and the axis, the atlanto-axial joint. A small amount of rotation of the vertebral column itself contributes to the movement. This movement between the atlas and axis is often referred to as the \"no joint\", owing to its nature of being able to rotate the head in a side-to-side fashion.\n", "Balance in the body is monitored principally by two systems, both of which are affected by alcohol sending abnormal impulses to the brain, [which tells it] that the body is rotating, causing disorientation and making the eyes spin round to compensate.\n" ]
why is there two ways of writing "4" and "a" ?
The '4' and 'a' you see on your screen right now (unless you're using an odd font) are the standard print glyphs, which is why they appear that way in most typefaces unless the typeface is specifically mimicking handwriting. The "open 4" is used in handwriting because it's easier to distinguish from a 9. When writing quickly, those two digits can often look similar. The other way of writing 'a' exists mostly because it's easier and faster to write by hand.
[ "Linguistically, it has the alphabetical usage in texts for \"b\", \"a\", or syllabically for \"ba\", and also a replacement for \"\"b\"\", by \"\"p\"\". The a is replaceable in word formation by any of the 4 vowels: \"a, e, i,\" or \"u\".\n", "The pronunciation of the digits 3, 4, 5, and 9 differs from standard English – being pronounced \"tree\", \"fower\", \"fife\", and \"niner\". The digit 3 is specified as \"tree\" so that it is not pronounced \"sri\"; the long pronunciation of 4 (still found in some English dialects) keeps it somewhat distinct from \"for\"; 5 is pronounced with a second \"f\" because the normal pronunciation with a \"v\" is easily confused with \"fire\" (a command to shoot); and 9 has an extra syllable to keep it distinct from German \"nein\" 'no'.\n", "BULLET::::- The letter \"ç\" is sometimes written \"ch\" due to technical limitations because of its use in English sound and its analogy to the other digraphs \"xh\", \"sh\", and \"zh\". Usually it is written simply \"c\" or more rarely \"q\" with context resolving any ambiguities.\n", "Letters 16 and 17 form a two-letter word ending in \"P\". Since this has to be \"UP\", letter 16 is a \"U\", which can be filled into the appropriate clue answer in the list of clues. Likewise, a three-letter word starting with \"A\" could be \"and\", \"any\", \"all\", or even a proper name like \"Ann\". One might need more clue answers before daring to guess which it could be. \n", "C is the third letter in the English alphabet and a letter of the alphabets of many other writing systems which inherited it from the Latin alphabet. It is also the third letter of the ISO basic Latin alphabet. It is named \"cee\" (pronounced ) in English.\n", "The \"e\" preceding the \"r\" is kept in American inflected forms of nouns and verbs, for example, , which are respectively in British English. \"\" is an interesting example, since, according to the \"OED\", it is a \"\"word ... 
of 3 syllables (in careful pronunciation)\"\" (i.e., ), yet there is no vowel in the spelling corresponding to the second syllable (). The OED third edition (revised entry of June 2016) allows either two or three syllables. The three-syllable version is listed as only the American pronunciation of \"centering\" on the Oxford Dictionaries Online website. The \"e\" is dropped for other derivations, for example, \"central\", \"fibrous\", \"spectral\". However, the existence of related words without \"e\" before the \"r\" is not proof for the existence of an \"-re\" British spelling: for example, \"entry\" and \"entrance\" come from \"enter\", which has not been spelled \"entre\" for centuries.\n", "The form \"an\" is used before words starting with a vowel sound, regardless of whether the word begins with a vowel letter. This avoids the glottal stop (momentary silent pause) that would otherwise be required between \"a\" and a following vowel sound. Where the next word begins with a consonant sound, \"a\" is used. Examples: \"a box\"; \"an apple\"; \"an SSO\" (pronounced \"es-es-oh\"); \"a HEPA filter\" (HEPA is pronounced as a word rather than as letters); \"an hour\" (the \"h\" is silent); \"a one-armed bandit\" (pronounced \"won...\"); \"an heir\" (pronounced \"air\"); \"a unicorn\" (pronounced \"yoo-\"); \"an herb\" in American English (where the \"h\" is silent), but \"a herb\" in British English; \"a unionized worker\" but \"an unionized particle\".\n" ]
how are movies that were recorded in a lower definition able to be released in higher definitions?
They were recorded on film, which isn't limited to a fixed number of pixels, for the most part. Remember that they were projected on a huge screen. As long as you can get the original print, you can rescan it and re-release the movie in HD.
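The gap is easy to see in raw pixel counts. A quick sketch for illustration (the SD broadcast and film-scan resolutions below are typical figures, not tied to any specific release):

```python
# Typical pixel dimensions: an SD video master vs. the scan
# resolutions commonly used when remastering from the original print.
formats = {
    "SD (NTSC DVD)": (720, 480),
    "HD (1080p)": (1920, 1080),
    "4K scan of 35 mm print": (4096, 3112),
}

for name, (w, h) in formats.items():
    print(f"{name}: {w * h / 1e6:.1f} megapixels")

# A 4K scan captures dozens of times the pixel count of an SD master,
# which is why a film-sourced movie can keep being re-released at
# higher resolutions: the detail was on the negative all along.
```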
[ "Depending on the year and format in which a movie was filmed, the exposed image can vary greatly in size. Sizes range from as big as 24 mm × 36 mm for VistaVision/Technirama 8 perforation cameras (same as 35 mm still photo film) going down through 18 mm × 24 mm for Silent Films or Full Frame 4 perforations cameras to as small as 9 mm × 21 mm in Academy Sound Aperture cameras modified for the Techniscope 2 perforation format. Movies are also produced using other film gauges, including 70 mm films (22 mm × 48 mm) or the rarely used 55 mm and CINERAMA.\n", "BULLET::::- Length: Films may be shortened for television broadcasting or for use on airlines. DVD releases of films may also contain longer cuts. In a growing trend, more and more films are being released in an \"Unrated\" cut of the film. Prior to when TV airings of the film begins, a format screen appears reading, \"\"The following film has been modified from its original version. It has been formatted to fit this screen, to run in the time allotted and edited for content\"\" (see below). The end credits on TV airings of films sometimes speed up to make time for the next show or film to start, or to free up more airtime for advertisements, which has become an increasingly-common practice.\n", "DVD producers can also choose to show even wider ratios such as 1.85:1 and 2.39:1 within the 16:9 DVD frame by hard matting or adding black bars within the image itself. Some films which were made in a 1.85:1 aspect ratio, such as the U.S.-Italian co-production \"Man of La Mancha\" and Kenneth Branagh's \"Much Ado About Nothing\", fit quite comfortably onto a 1.7:1 HDTV screen and have been issued as an enhanced version on DVD without the black bars. 
Many digital video cameras have the capability to record in 16:9.\n", "A handful of theatrically released feature films, such as \"Timecode\" (2000), \"Russian Ark\" (2002), \"PVC-1\" (2007), and \"Victoria\" (2015) are filmed in one single take; others are composed entirely from a series of long takes, while many more may be well known for one or two specific long takes within otherwise more conventionally edited films. In 2012, the art collective The Hut Project produced \"The Look of Performance\", a digital film shot in a single 360° take lasting 3 hours, 33 minutes and 8 seconds. The film was shot at 50 frames per second, meaning the final exhibited work lasts 7 hours, 6 minutes and 17 seconds.\n", "In many cases, successful film releases have had items made in limited numbers. These \"limited editions\" usually contain the best DVD edition possible of a film with special items in a box set, sometimes containing items available only in the limited edition. Items marked thus are often (but not always) released for a shorter time and in lower quantity than common editions, often with a running number (e.g. \"13055 of 20000\") printed on the products to boost the rarity feel, as the company implies not to manufacture more. It is also common to have such items packaged with unique designs.\n", "However, films shot at aspect ratios of 2.20:1, 2.35:1, 2.39:1, 2.55:1, and especially 2.76:1 (\"Ben-Hur\" for example) might still be problematic when displayed on televisions of any type. But when the DVD is \"anamorphically enhanced for widescreen\", or the film is telecast on a high-definition channel seen on a widescreen TV, the black spaces are smaller, and the effect is still much like watching a film on a theatrical wide screen. 
Though 16:9 (and occasionally 16:10, mostly for computers and tablets) remain standard as of 2018, wider-screen consumer TVs in 21:9 have been released to the market by multiple brands.\n", "If footage with taller ratios were shot (digitally or on film), for example IMAX scenes for various films, then the screen real-estate is cropped in accordance to the deliverable ratio. This helps in preserving headroom and composition for the film beyond the theatrical release. \n" ]
why does the us armed forces have an army, navy, marines, and coast guard, instead of just an army and navy.
Back when they first started making militaries, there were two kinds of fighting people: * People who fought on land. When they got around to inventing the English Language, they called them *Armies*. * People who fought on water, from ships. We call them *Navies*. Now around the time of the American Revolutionary War (late 1770s), navies fought using wooden sailing ships with lots of cannons firing solid iron balls. These guns were pretty powerful, but they were pretty inaccurate unless you got really close, and unless you got really lucky and blew up the enemy's gunpowder magazine, they couldn't really do enough damage to sink another ship quickly. So if you were willing to sail through cannonball fire for a little bit, it was possible to get close enough to the enemy ship to board it with your own men. However, most sailors were good at sailing, and not as good at shooting or hand-to-hand fighting, so they decided to create groups of soldiers who specifically trained to fight from ships. These are called *Marines*. Some countries kept them as part of their armies or navies, but in the US, they put them specifically under the authority of the Department of the Navy, and slowly over time they gained more and various responsibilities- Presidential guard, Embassy guard, etc. Also, because they were on Navy ships all the time, they were usually the first ground troops to show up when the government was trying to flex its power overseas, so they also started doing amphibious warfare (invading beaches from ships), which is their primary mission today. Around the same time, many countries saw a need for a nautical police force. Using the Navy for that was in most cases overkill; you want to enforce tariffs and catch criminals, not blow merchant ships out of the water. So *Coast Guards* started becoming a thing, tasked with seagoing law enforcement and search and rescue. 
In the US, it was put under the authority of the Department of Transportation in peacetime, and the Department of the Navy in wartime (there was no Department of Defense then). More recently, it's been moved to the Department of Homeland Security. So, finally, years later, airplanes were invented. Immediately all the armed forces saw a use for them; mainly spotting for big guns at first (Navy battleships and Army artillery), but later they started carrying guns and bombs and torpedoes and missiles. In some countries, they managed to create a separate *Air Force* right away (the UK did this with the RAF), but in the US, the Army and Navy each handled their own aviation, arguing (accurately enough) that their needs were different. So it wasn't until after World War II that the Air Force was split off from the Army into a final branch. The Army still maintains its helicopters for transport and close air support, and the Navy argues that aircraft carrier operations are too different from the land flying that Air Force personnel are used to, so they each have their own aircraft. The Coast Guard needs aircraft for long-range search and rescue. And the Marines have their own aircraft because they're supposed to be a self-contained, independent, and fast-moving fighting force once the Navy drops them on the beach. As for why they haven't merged them back? Politics, money, and tradition. There have been several attempts: the Air Force argued in the '50s that nuclear weapons made all surface forces obsolete; the Army has tried to absorb the Marines several times; the Air Force has tried to take the Navy's planes; etc. But it seems unlikely in the near future.
[ "As of 2017, the U.S. Armed Forces consists of the Army, Marine Corps, Navy and Air Force, all under the command of the United States Department of Defense. There also is the United States Coast Guard, which is controlled by the Department of Homeland Security.\n", "Along with the U.S Coast Guard, the U.S Navy is also another branch if the Unites Stated Armed forces. Unlike the Coast Guard, the Navy is a projection of force in areas beyond the U.S shores. Their operations go beyond the shores; they provide aid to military out on the sea, carry troops to other countries, strategic plans for attacks and protect the sea lanes.\n", "Unlike its U.S. Army and U.S. Air Force counterparts, the Department of the Navy comprises two uniformed services: the United States Navy and the United States Marine Corps (sometimes collectively called the \"naval services\" or \"sea services\").\n", "There are two major naval forces that conduct such operations; the United States Coast Guard and the United States Navy. Although they both have very distinct jobs from one another, one of their major jobs is to be able to provide security operations.\n", "Five of the uniformed services make up the U.S. Armed Forces, four of which are within the U.S. Department of Defense. The Coast Guard is part of the Department of Homeland Security and has both military and law enforcement duties. Title 14 states that the Coast Guard is part of the armed forces at all times, making it the only branch of the military outside the Department of Defense. During a declared state of war, however, the President or Congress may direct that the Coast Guard operate as part of the Department of the Navy. The U.S. 
Public Health Service Commissioned Corps, along with the NOAA Commissioned Corps, operate under military rules with the exception of the applicability of the Uniform Code of Military Justice, to which they are subject only when militarized by executive order or while detailed to any component of the armed forces.\n", "Many states also maintain their own military forces. These forces are federally recognized, but are separate from the United States National Guard Bureau and are not meant to be federalized, but rather service the state exclusively, especially when the National Guard is deployed and unavailable. The Virginia Defense Force is the commonwealth's own all-volunteer, formal military organization that is the reserve to the Virginia National Guard.\n", "In the majority of countries, the marine force is an integral part of the navy. The United States Marine Corps is a separate armed service within the United States Department of the Navy, with its own leadership structure.\n" ]
What happens to bones when a big animal like a shark or an orca eats and swallows another whole animal?
Sharks have a digestive system that's designed primarily to digest meat and fat, so larger bones, chunks of sea turtle shells, things like that are usually vomited back out. And orcas have a three-chambered stomach, so the food takes more time to pass through, which gives the stomach acid more time to digest everything, including bone.
[ "Crocodilians are unable to chew and need to swallow food whole, so prey that is too large to swallow is torn into pieces. They may be unable to deal with a large animal with a thick hide, and may wait until it becomes putrid and comes apart more easily. To tear a chunk of tissue from a large carcass, a crocodilian spins its body continuously while holding on with its jaws, a manoeuvre known as the \"death roll\". During cooperative feeding, some individuals may hold on to the prey, while others perform the roll. The animals do not fight, and each retires with a piece of flesh and awaits its next feeding turn. Food is typically consumed by crocodilians with their heads above water. The food is held with the tips of the jaws, tossed towards the back of the mouth by an upward jerk of the head and then gulped down. Nile crocodiles may store carcasses underwater for later consumption.\n", "BULLET::::- Tyrannosaur teeth could crush bone, and therefore could extract as much food (bone marrow) as possible from carcass remnants, usually the least nutritious parts. Karen Chin and colleagues have found bone fragments in coprolites (fossilized feces) that they attribute to tyrannosaurs, but point out that a tyrannosaur's teeth were not well adapted to systematically chewing bone like hyenas do to extract marrow. Gregory Paul also wrote that a bone-crushing bite would also have been advantageous to a predator; providing the extreme bite force to kill prey and later consume it efficiently. Other paleontologists would also find similarities between \"Tyrannosaurus\" teeth and those of other predators. Farlow and Holtz pointed out that like \"Tyrannosaurus\", orcas and crocodiles also had broad-based teeth. Holtz noted the similarities between \"Tyrannosaurus\" teeth and those of hyaenids, but further addeds that all hyaenids are known to kill prey, with the largest (\"Crocuta crocuta\") obtaining most of its food through this means. 
He also notes that hyaenids mainly crunch bone with their molars and premolars. Holtz also pointed out that felids also developed thickened teeth as adaptations to resist contact with bone during prey capture or dispatch as well as during feeding.\n", "In the Dinosaur Park Formation, small theropods are rare due to the tendency of their thin-walled bones to be broken or poorly preserved. Small bones of small theropods that were preyed upon by larger ones may have been swallowed whole and digested. In this context, the discovery of a small theropod dinosaur with preserved tooth marks was especially valuable. Possible indeterminate avimimid and therizinosaurid remains are known from the formation.\n", "Another possible explanation for the small bones is that they were originally located in the throat and were pushed into the pharyngeal pouch during fossilization. If this was the case, \"Trimerorhachis\" may have eaten its young instead of brooding them. This type of cannibalism is widespread in living amphibians, and most likely occurred among some prehistoric amphibians as well.\n", "Vultures are scavengers, meaning they eat dead animals. They rarely attack healthy animals, but may kill the wounded or sick. When a carcass has too thick a hide for its beak to open, it waits for a larger scavenger to eat first. Vast numbers have been seen upon battlefields. They gorge themselves when prey is abundant, until their crops bulge, and sit, sleepy or half torpid, to digest their food. These birds do not carry food to their young in their talons but disgorge it from their crops. The mountain-dwelling bearded vulture is the only vertebrate to specialize in eating bones, and does carry bones to the nest for the young, and it hunts some live prey.\n", "When feeding on large carcasses, the shark employs a rolling motion of its jaw. The 48-52 teeth of the upper jaw are very thin and pointed, lacking serrations. 
These upper jaw teeth act as an anchor while the lower jaw proceeds to cut massive chunks out of their prey for a quick and easy kill.\n", "Predators including big cats, birds of prey, and ants share powerful jaws, sharp teeth, or claws which they use to seize and kill their prey. Some predators such as snakes and fish-eating birds like herons and cormorants swallow their prey whole; some snakes can unhinge their jaws to allow them to swallow large prey, while fish-eating birds have long spear-like beaks that they use to stab and grip fast-moving and slippery prey. Fish and other predators have developed the ability to crush or open the armoured shells of molluscs.\n" ]
how do hooves make animals better at climbing mountains?
Mountain goats have special hooves that are like a pirate's hook with soft padding to help them adjust their grip. They suck at long-distance running but are great at scaling vertical surfaces because of this.
[ "Many animals climb in other habitats, such as in rock piles or mountains, and in those habitats, many of the same principles apply due to inclines, narrow ledges, and balance issues. However, less research has been conducted on the specific demands of locomotion in these habitats.\n", "Perhaps the most exceptional of the animals that move on steep or even near vertical rock faces by careful balancing and leaping are the various types of mountain dwelling caprid such as the Barbary sheep, markhor, yak, ibex, tahr, rocky mountain goat, and chamois. Their adaptations may include a soft rubbery pad between their hooves for grip, hooves with sharp keratin rims for lodging in small footholds, and prominent dew claws. The snow leopard, being a predator of such mountain caprids, also has spectacular balance and leaping abilities; being able to leap up to ≈17m (~50 ft). Other balancers and leapers include the mountain zebra, mountain tapir, and hyraxes.\n", "The mountain goat's feet are well-suited for climbing steep, rocky slopes with pitches exceeding 60°, with inner pads that provide traction and cloven hooves that can spread apart. The tips of their feet have sharp dewclaws that keep them from slipping. They have powerful shoulder and neck muscles that help propel them up steep slopes.\n", "Their skill in foraging for food allows them to survive in steep mountain areas where they both graze and eat plants that many other cattle avoid. They can dig through the snow with their horns to find buried plants.\n", "Some animals are specialized for moving on non-horizontal surfaces. One common habitat for such climbing animals is in trees, for example the gibbon is specialized for arboreal movement, traveling rapidly by brachiation. Another case is animals like the snow leopard living on steep rock faces such as are found in mountains. Some light animals are able to climb up smooth sheer surfaces or hang upside down by adhesion using suckers. 
Many insects can do this, though much larger animals such as geckos can also perform similar feats.\n", "As a member of the ungulate group of mammals, the Himalayan tahr possesses an even number of toes. They have adapted the unique ability to grasp both smooth and rough surfaces that are typical of the mountainous terrain on which they reside. This useful characteristic also helps their mobility. The hooves of the tahr have a rubber-like core which allows for gripping smooth rocks while keratin at the rim of their hooves allow increased hoof durability, which is important for traversing the rocky ground. This adaptation allows for confident and swift maneuvering of the terrain.\n", "Hooves grow continuously. In nature, wild animals are capable of wearing down the hoof as it continuously grows, but captive domesticated species often must undergo specific hoof care for a healthy, functional hoof. Proper care improves biomechanical efficiency and prevents lameness. If not worn down enough by use, such as in the dairy industry, hooves may need to be trimmed by a farrier. However, too much wear can result in damage of the hooves, and for this reason, horseshoes and oxshoes are used by animals that routinely walk on hard surfaces and carry heavy weight. Within the equine world, the expression, \"no foot, no horse\" emphasizes the importance of hoof health. Lameness, behind infertility and mastitis, is the biggest cause of economic loss to a dairy farmer.\n" ]
Would phlegm be digested if swallowed?
The mucus is digested further down in the gut by enzymes in the small bowel and bacteria in the large bowel.
[ "Pharyngeal aspiration is often performed on mice and rats. Prior to introduction of the stubstance, the animal is anesthetized and its tongue extended, preventing the animal from swallowing the material and allowing it to be aspirated into the lungs over the course of at least two deep breaths. A liquid suspension of particles in saline solution is usually used, in a typical volume of 50 μL. Sometimes the substance is introduced into the larynx instead of the pharynx to avoid contamination from food particles and other contaminants present in the mouth.\n", "Globus pharyngis is the persistent sensation of having phlegm, a pill or some other sort of obstruction in the throat when there is none. Swallowing can be performed normally, so it is not a true case of dysphagia, but it can become quite irritating. One may also feel mild chest pain or even severe pain with a clicking sensation when swallowing.\n", "From chewing to defecation, alpha-synuclein deposits affect every level of gastrointestinal function. Almost all persons with DLB have upper gastrointestinal tract dysfunction (such as delayed gastric emptying) or lower gastrointestinal dysfunction (such as constipation and prolonged stool transit time). Persons with Lewy body dementias almost universally experience nausea, gastric retention, or abdominal distention from delayed gastric emptying. Constipation can present a decade before diagnosis. Difficulty swallowing is milder than in other synucleinopathies, and presents later in the course of the disease.\n", "Like the pharyngeal phase of swallowing, the esophageal phase of swallowing is under involuntary neuromuscular control. However, propagation of the food bolus is significantly slower than in the pharynx. The bolus enters the esophagus and is propelled downwards first by striated muscle (recurrent laryngeal, X) then by the smooth muscle (X) at a rate of 3–5 cm/s. 
The upper esophageal sphincter relaxes to let food pass, after which various striated constrictor muscles of the pharynx as well as peristalsis and relaxation of the lower esophageal sphincter sequentially push the bolus of food through the esophagus into the stomach.\n", "The salpingopharyngeus is known to raise the pharynx and larynx during deglutition (swallowing) and laterally draws the pharyngeal walls up. In addition, it opens the pharyngeal orifice of the pharyngotympanic tube during swallowing. This allows for the equalization of pressure between the auditory canal and the pharynx. As the salpingopharyngeus is used to open the eustachian tubes to equalize pressure in the middle ear, the muscle can easily be stimulated by swallowing.\n", "Certain parts of the respiratory tract, such as the oropharynx, are also subject to the abrasive swallowing of food. To prevent the destruction of the respiratory epithelium in these areas, it changes to stratified squamous epithelium, which is better suited to the constant sloughing and abrasion. The squamous layer of the oropharynx is continuous with the esophagus.\n", "In many protists, phagocytosis is used as a means of feeding, providing part or all of their nourishment. This is called phagotrophic nutrition, distinguished from osmotrophic nutrition which takes place by absorption.\n" ]
Have there been periods of greater or lesser volcanic activity on earth, and if so, what causes this variation?
In the past there was more volcanic activity, since the Earth was much hotter after accretion. The whole surface would have been a barely cooled crust covered in volcanic pustules. We have evidence of this heat from komatiites, which only form at higher temperatures than we see today (1). So the early Earth was more volcanically active, though its volcanoes were different from the ones we see today. Another thing that continues to influence volcanic activity is thought to be glacier cover: the increased pressure on a volcano from meters of overlying ice can suppress eruptions (2). References: 1. _URL_0_ 2. _URL_1_
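To get a feel for how much confining pressure a glacier adds on top of a volcano, you can use the standard overburden formula P = ρgh. A rough sketch with illustrative ice thicknesses (these numbers are for intuition only, not taken from the cited references):

```python
# Overburden pressure from a glacier: P = rho * g * h.
rho_ice = 917        # kg/m^3, typical density of glacial ice
g = 9.81             # m/s^2, gravitational acceleration

for h in (100, 500, 1000):          # ice thickness in meters
    p_mpa = rho_ice * g * h / 1e6   # convert Pa -> MPa
    print(f"{h} m of ice -> ~{p_mpa:.1f} MPa of confining pressure")
```

Even a few hundred meters of ice adds pressures on the order of megapascals, which is the scale at which magma degassing and eruption behavior can be affected.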
[ "The 1991 spike is understood to be due to the volcanic eruption of Mt. Pinatubo in June of that year. Volcanoes affect atmospheric methane emissions when they erupt, releasing ash and sulfur dioxide into the air. As a result, photochemistry of plants is affected and the removal of methane via the tropospheric hydroxyl radical is reduced. However, growth rates quickly fell due to lower temperatures and global reduction in rainfall.\n", "There is evidence of a decline in volcanic activity over the past few million years. This decline in volcanic activity can be grouped into two phases. From eight to four million years ago, volcanism rates were higher than they are at present. Magma production during this volcanic phase was most active from seven to five million years ago and was related to a period of rifting along the Pacific and North American Plate boundary. Between four and three million years ago in the Pliocene epoch, a pause in volcanic activity began to happen. The most recent magmatic phase ranging from two million years ago to present resulted from nearby areas of rifting during a period of compression between the Pacific and North American plates. Volcanism rates during this volcanic phase was most active from two to one million years ago with the construction of 25 volcanic zones then decreased one million years ago with the construction of 11 volcanic zones. To date, the most recent volcanic phase has produced of volcanic material whereas the first phase produced of volcanic material. Even though the rate in volcanism throughout the Northern Cordilleran Volcanic Province has changed considerably, there is no correlation between the rate in magma production and the number of active volcanoes during any interval of time. The present day volcanism rate for the Northern Cordilleran Volcanic Province is considerably lower than the Cascade Volcanic Arc and Hawaiian volcanism rates. 
However, geologists are aware the temporal volcanic patterns known for the Northern Cordilleran Volcanic Province should be looked at carefully because volcanics that pre-date the last glacial period have been eroded by glacial ice and many of the volcanics have not been directly dated or have not been dated in significant detail to identify more individual temporal patterns. Lava fountains can occur in the Northern Cordilleran Volcanic Province roughly every 100 years.\n", "The volcanic field has formed on top of older, Oligocene to Miocene age volcanic rocks and calderas, but its own activity commenced only about 6 million years ago. The reasons for volcanism there are not well known. The volcanic field has produced various types of basaltic magma and also trachyte; the most recent eruption was about 38,000 years ago and renewed activity is possible.\n", "Holocene volcanic activity includes the emission of tephra and lava flows with eruptions every 1,400 - 1,200 years since 3,800 years before present. The growth of a lava dome over the last 600 years was accompanied by phreatic activity. Tephrochronology suggests the occurrence of eruptions 1,350, 750 AD, 550 ± 100 BC, 1,850 BC, 3,050 BC, 5,550 BC, 7,050 BC and 7,550 BC.\n", "Over the past 360,000 years there have been two major cycles, each culminating with two caldera-forming eruptions. The cycles end when the magma evolves to a rhyolitic composition, causing the most explosive eruptions. In between the caldera-forming eruptions are a series of sub-cycles. Lava flows and small explosive eruptions build up cones, which are thought to impede the flow of magma to the surface. This allows the formation of large magma chambers, in which the magma can evolve to more silicic compositions. Once this happens, a large explosive eruption destroys the cone. 
The Kameni islands in the centre of the lagoon are the most recent example of a cone built by this volcano, with much of them hidden beneath the water.\n", "While the land's volcanic history dates back to before the Zealandia microcontinent rifted away from Gondwana 60–130 million years ago, activity continues today with minor eruptions occurring every few years. This recent activity is primarily due to the country's position on the boundary between the Indo-Australian and Pacific Plates, a part of the Pacific Ring of Fire, and particularly the subduction of the Pacific Plate under the Indo-Australian Plate.\n", "Highly active periods of volcanism in what are called large igneous provinces have produced huge oceanic plateaus and flood basalts in the past. These can comprise hundreds of large eruptions, producing millions of cubic kilometers of lava in total. No large eruptions of flood basalts have occurred in human history, the most recent having occurred over 10 million years ago. They are often associated with breakup of supercontinents such as Pangea in the geologic record, and may have contributed to a number of mass extinctions. Most large igneous provinces have either not been studied thoroughly enough to establish the size of their component eruptions, or are not preserved well enough to make this possible. Many of the eruptions listed above thus come from just two large igneous provinces: the Paraná and Etendeka traps and the Columbia River Basalt Group. The latter is the most recent large igneous province, and also one of the smallest. A list of large igneous provinces follows to provide some indication of how many large eruptions may be missing from the lists given here.\n" ]
Have there ever been memes similar to the modern memes?
"Kilroy was here" is the only thing I can remember. Basically, during WW2, American soldiers would leave these marks on beaches they stormed, places they visited, etc.
[ "The idea of memes, and the word itself, were originally speculated by Richard Dawkins in his book \"The Selfish Gene\" although similar, or analogous, concepts had been in currency for a while before its publishing. Richard Dawkins wrote a foreword to \"The Meme Machine\".\n", "Richard Dawkins, who originated the concept of memes, approvingly cites in the second edition of his book \"The Selfish Gene\" Henson's coining of the neologism \"memeoids\" to refer to \"victims who have been taken over by a meme to the extent that their own survival becomes inconsequential.\"\n", "In his 1976 book \"The Selfish Gene\", Richard Dawkins coined the term memes to describe informational units that can be transmitted culturally, analogous to genes. He later used this concept in the essay \"Viruses of the Mind\" to explain the persistence of religious ideas in human culture.\n", "Richard Dawkins coined the term \"meme\" in his 1976 book \"The Selfish Gene\". As conceived by Dawkins, a meme is a unit of cultural meaning, such as an idea or a value, that is passed from one generation to another. When asked to assess this comparison, Lauren Ancel Meyers, a biology professor at the University of Texas, states that \"memes spread through online social networks similarly to the way diseases do through offline populations\". This dispersion of cultural movements is shown through the spread of memes online, especially when seemingly innocuous or trivial trends spread and die in rapid fashion.\n", "Susan Blackmore (2002) re-stated the definition of meme as: whatever is copied from one person to another person, whether habits, skills, songs, stories, or any other kind of information. Further she said that memes, like genes, are replicators in the sense as defined by Dawkins.\n", "Richard Dawkins argues for the existence of a \"unit of cultural transmission\" called a meme. 
This concept of memes has become much more accepted as more extensive research has been done into cultural behaviors. Much as one can inherit genes from each parent, it is suggested that individuals acquire memes through imitating what they observe around them. The more relevant actions (actions that increase ones probability of survival), such as architecture and craftwork are more likely to become prevalent, enabling a culture to form. The idea of memes as following a form of Natural Selection was first presented by Daniel Dennett. It has also been argued by Dennett that memes are responsible for the entirety of human consciousness. He claims that everything that constitutes humanity, such as language and music is a result of memes and the unflinching hold they have on our thought processes.\n", "The term meme was coined in Richard Dawkins' 1976 book \"The Selfish Gene,\" but Dawkins later distanced himself from the resulting field of study. Analogous to a gene, the meme was conceived as a \"unit of culture\" (an idea, belief, pattern of behaviour, etc.) which is \"hosted\" in the minds of one or more individuals, and which can reproduce itself in the sense of jumping from the mind of one person to the mind of another. Thus what would otherwise be regarded as one individual influencing another to adopt a belief is seen as an idea-replicator reproducing itself in a new host. As with genetics, particularly under a Dawkinsian interpretation, a meme's success may be due to its contribution to the effectiveness of its host.\n" ]
nuclear arms race
They don't just stockpile the same type of nuke. Over time, other nations come up with newer and better delivery methods that can't be stopped or intercepted, or your older nukes become incapable of reaching or penetrating the target area, so countries build more nukes with better technology. Sometimes they upgrade previous systems; other times it's a completely new system. Since those weapons don't ever get used, they stockpile into the thousands.
[ "The nuclear arms race was an arms race competition for supremacy in nuclear warfare between the United States, the Soviet Union, and their respective allies during the Cold War. During this period, in addition to the American and Soviet nuclear stockpiles, other countries developed nuclear weapons, though none engaged in warhead production on nearly the same scale as the two superpowers.\n", "BULLET::::- Nuclear arms race and warfare is expanded by a nuclear mapmode, uranium as a resource, factories (centrifuges) to produce enriched fission material, ICBMs that can be launched from silos, nuclear submarines, and a red button.\n", "In nuclear strategy, a second-strike capability is a country's assured ability to respond to a nuclear attack with powerful nuclear retaliation against the attacker. To have such an ability (and to convince an opponent of its viability) is considered vital in nuclear deterrence, as otherwise the other side might attempt to try to win a nuclear war in one massive first strike against its opponent's own nuclear forces.\n", "The crucial goal in maintaining second-strike capabilities is preventing first-strike attacks from taking out a nation's nuclear arsenal. In this manner, a country can carry out nuclear retaliation even after absorbing a nuclear attack. 
The United States and other countries have diversified their nuclear arsenals through the nuclear triad in order to better ensure second-strike capability.\n", "In 2007, Rhodes published \"Arsenals of Folly: The Making of the Nuclear Arms Race\", a chronicle of the arms buildups during the Cold War, especially focusing on Mikhail Gorbachev and the Reagan administration.\n", "BULLET::::- Frank Barnaby and Geoffrey Thomas, eds, \"The nuclear arms race — control or catastrophe?: proceedings of the General Section of the British Association for the Advancement of Science 1981\" (London: Pinter, 1982).\n", "In nuclear strategy, first strike capability is a country's ability to defeat another nuclear power by destroying its arsenal to the point where the attacking country can survive the weakened retaliation.\n" ]
Can any scientists comment on the debate in /r/science regarding a new Alzheimer's vaccine?
Based on the abstract (for some reason my institution doesn't have access to the full paper) of the [Lancet Neurology paper](_URL_2_), I'm not too impressed. This was an extremely preliminary study establishing the side effect profile, as well as determining whether the vaccine actually induces an antibody response. According to the abstract, they didn't even look at efficacy in their outcomes. Their overall conclusion was that 1) this vaccine does induce some sort of immune response and 2) it doesn't seem to have serious adverse effects. They listed "Immune response, cognitive and functional assessments" as one of their secondary outcomes on their [_URL_1_ page](_URL_0_), but don't report on it in the abstract. As this was merely a phase I trial, the n is also very small (58 total). I think it's quite a number of steps from anything clinically useful. edited to add: the principle they're working with is inducing antibodies against Aβ-amyloid. I'm not convinced that this would actually be effective. Aβ-amyloid makes up the 'senile plaques' that are so characteristic, but I don't think it's very indicative of cognitive deficits. I think the tau proteins (which make up the neurofibrillary tangles) are much more prognostic. I'm also not convinced that an antibody response against these plaques would be helpful, since the antibodies would have to penetrate the blood-brain barrier to reach the amyloid. Once there, it's not like the antibodies magically make the plaques go away, since there would still need to be some mechanism of clearing them (possibly via microglia). So I guess my point is that this is an important step towards eventually, possibly, developing a vaccine or treatment, but there's so much that we don't know yet that it's hard to see it happening in the near future.
[ "BULLET::::- Alzheimer's Disease: West Virginia University's Rockefeller Neuroscience Institute has been chosen as the first site in the world to participate in phase II of a new clinical trial using ultrasound technology to help reverse the effects of Alzheimer's disease, and allow doctors access to parts of the brain affected by it.\n", "BULLET::::- Toronto Star, June 13, 2006 - New drug offers hope against Alzheimer's - AZD-103 found by U of T researchers Shown to reverse some damage from disease Part of a research study which is a collaboration between the University of Toronto and the Alzheimer Society of Ontario, funded by both federal and provincial research organizations.\n", "Shriver executive-produced \"The Alzheimer's Project\", a four-part documentary series that premiered on HBO in May 2009 and later earned two Emmy Awards. It was described by the \"Los Angeles Times\" as \"ambitious, disturbing, emotionally fraught and carefully optimistic\". The series took a close look at cutting-edge research being done in the country's leading Alzheimer's laboratories. The documentary also examined the effects of this disease on patients and families. One of the Emmy Award-winning films, \"Grandpa, Do you Know Who I Am?\" is based on Shriver's best-selling children's book dealing with Alzheimer's.\n", "The foundation is currently researching and testing on various possible cures of the Alzheimer's. Tests are being conducted on diabetes drugs, turmeric, testosterone and omega 3 (being tested to prevent the beta amyloid).\n", "BULLET::::- Alzheimer's Disease Neuroimaging Initiative (ADNI): The FNIH helps manage the Alzheimer's Disease Neuroimaging Initiative (ADNI), a public-private partnership that has profoundly influenced the understanding of Alzheimer's disease by identifying and validating biological markers that indicate its onset and progression. 
The study tracks volunteers at clinical sites with normal cognition, mild cognitive impairment and Alzheimer's disease to create a widely-available database of imaging, biochemical and genetic data, which can lay the groundwork for Alzheimer's discoveries. By standardizing technologies and protocols, the study has improved clinical trial design and the understanding of the disease and its progression. Furthermore, ADNI's open-access data policy continues to be a model of successful data sharing in a pre-competitive environment.\n", "24. Wang CY. “Peptide vaccine for prevention and immunotherapy of dementia of the Alzheimer’s type” US Patent 9,102,752 (2015), US Patent Application 14/824,075 (2015), and WO Patent Application PCT/US13/37865 (2013).\n", "van Dyck and the Yale ADRU have tested potential Alzheimer’s therapeutics for over 20 years. They have contributed to the successful development of memantine now in use for the treatment of mid-late stage Alzheimer’s Disease, and assess potential new therapeutic strategies, e.g. antibodies that reduce amyloid pathology such as Crenezumab, and based on the work of Dr. Stephen Strittmatter, an inhibitor of fyn.\n" ]
how are people/groups of people legally allowed to place "bounties" on others?
They're not "bounties" like in the Boba Fett sense. They're rewards to turn over information which would lead to arrest/prosecution. Boba Fett bounties ("Capture and/or kill this person") are illegal.
[ "The reasons may include neglect, but also tax evasion and/or avoiding arrest for a crime. The downside for the person is that he cannot get benefits such as (for a Dutch person) getting a passport, and for anyone still living in the country, being allowed to work and to send his children to school, getting social security, etc.\n", "A more recent trend is for sarcastic tips to be offered that are observations by the readers regarding other people's behaviour, such as a barmaid who suggests male public house customers who are \"trying to get into a barmaid's knickers\" should \"pull back your tenner just as she reaches to take it when paying for a round. It really turns us on\". In a similar vein, one reader suggested \"Old people – are you worried that people in a hurry might be able to get past you on the pavement? Why not try stumbling aimlessly from side to side? That should stop them\".\n", "The citizens occupy their time with many strange and outright bizarre hobbies, such as simping (recreational stupidity), bat gliding, sky surfing and peeping (spying on people at home and in public), which is illegal when done for voyeuristic purposes, but legal when done under the authority of a Judge. If any of these ever get out of hand, and there is no legal justification for banning, the Judges simply impose a heavy tax on them, restricting them to only the few very wealthy citizens.\n", "Occasionally, a Barast may voluntarily remove themselves from society in order to continue work that has been deemed not meaningful or to protest when work is purposefully stopped as dangerous. This individual would move away from the community to an isolated area, and would thereafter support themselves. 
They would not be able to generally ask for assistance from the greater community, and would also in some cases remove their Alibi Archive implant as protest (so their life after leaving the community would not be recorded) or have it removed involuntarily (to prevent any further recording regarding dangerous work). This isolation is voluntary, and they can have visitors or opt to return to the community at any time, although returning requires that they resume providing a meaningful contribution (and cease any dangerous work, if that was the reason for leaving).\n", "Bickers continues to be active in working on legislation to protect the human rights of vulnerable people and exploited workers, focusing on lobbying in the 2019 legislative session for protective legislation including: making it illegal for law enforcement to have sex with workers before arresting them; allowing workers to work together or share a space for safety reasons without being vulnerable to charges of trafficking or exploiting each other; and allowing full service sex workers to report assault without their jobs being used as evidence to prosecute them while their assaults are ignored.\n", "In the legal system of England and Wales, the victim surcharge is a penalty applied to people convicted of offences, in addition to a conditional discharge, a fine, or a community or custodial sentence, in order to provide compensation for the victims of crime.\n", "Lead the people with administrative injunctions and put them in their place with penal law, and they will avoid punishments but will be without a sense of shame. Lead them with excellence and put them in their place through roles and ritual practices, and in addition to developing a sense of shame, they will order themselves harmoniously.\n" ]
lots of people barely eat vegetables and never take multivitamins, and still seem to be in good health. how is the daily recommended amount of micronutrients calculated, and why do people seem just fine even if they don't get it?
Pretty much everyone in a developed country gets enough nutrients to stay reasonably healthy unless they are on some kind of super restrictive diet. The RDAs are *very* conservative, and you have to be *very* deficient for a long time before you'll start seeing serious health effects.
[ "According to the Harvard School of Public Health: \"...many people don’t eat the healthiest of diets. That’s why a multivitamin can help fill in the gaps, and may have added health benefits.\" The U.S. Office of Dietary Supplements, a branch of the National Institutes of Health, suggests that multivitamin supplements might be helpful for some people with specific health problems (for example, macular degeneration). However, the Office concluded that \"most research shows that healthy people who take an MVM [multivitamin] do not have a lower chance of diseases, such as cancer, heart disease, or diabetes. Based on current research, it's not possible to recommend for or against the use of MVMs to stay healthier longer.\"\n", "Creating an industry estimated to have a 2015 value of $37 billion, there are more than 50,000 dietary supplement products marketed just in the United States, where about 50% of the American adult population consumes dietary supplements. Multivitamins are the most commonly used product. For those who fail to consume a balanced diet, the United States National Institutes of Health states that certain supplements \"may have value.\"\n", "Treatment involves a diet which includes an adequate amount of riboflavin containing foods. Multi-vitamin and mineral dietary supplements often contain 100% of the Daily Value (1.3 mg) for riboflavin, and can be used by persons concerned about an inadequate diet. Over-the-counter dietary supplements are available in the United States with doses as high as 100 mg, but there is no evidence that these high doses have any additional benefit for healthy people.\n", "In the 1999–2000 National Health and Nutrition Examination Survey, 52% of adults in the United States reported taking at least one dietary supplement in the last month and 35% reported regular use of multivitamin-multimineral supplements. 
Women versus men, older adults versus younger adults, non-Hispanic whites versus non-Hispanic blacks, and those with higher education levels versus lower education levels (among other categories) were more likely to take multivitamins. Individuals who use dietary supplements (including multivitamins) generally report higher dietary nutrient intakes and healthier diets. Additionally, adults with a history of prostate and breast cancers were more likely to use dietary and multivitamin supplements.\n", "Some nutrients, such as calcium and magnesium, are rarely included at 100% of the recommended allowance because the pill would become too large. Most multivitamins come in capsule form; tablets, powders, liquids, and injectable formulations also exist. In the U.S., the FDA requires any product marketed as a \"multivitamin\" to contain at least three vitamins and minerals; furthermore, the dosages must be below a \"tolerable upper limit\", and a multivitamin may not include herbs, hormones, or drugs.\n", "According to the U.S. Department of Agriculture, the Dietary Reference Intakes, which is the \"highest level of daily nutrient intake that is likely to pose no risk of adverse health effects\" specify 10 mg/day for most people, corresponding to 10 L of fluoridated water with no risk. For infants and young children the values are smaller, ranging from 0.7 mg/d for infants to 2.2 mg/d. Water and food sources of fluoride include community water fluoridation, seafood, tea, and gelatin.\n", "Some vitamins in large doses have been linked to increased risk of cardiovascular disease, of cancer and of death. The scientific consensus view is that for normal individuals, a balanced diet contains all necessary vitamins and minerals, and that routine supplementation is not necessary absent specific diagnosed deficiencies.\n" ]
If gravitons exist, then is it possible for anti-gravitons to exist and would that mean that the matter they interact with will gain negative gravity?
The graviton would be its own antiparticle in the same way that the photon is also its own antiparticle. No negative gravity needed.
[ "If it exists, the graviton is expected to be massless because the gravitational force is very long range and appears to propagate at the speed of light. The graviton must be a spin-2 boson because the source of gravitation is the stress–energy tensor, a second-order tensor (compared with electromagnetism's spin-1 photon, the source of which is the four-current, a first-order tensor). Additionally, it can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field would couple to the stress–energy tensor in the same way that gravitational interactions do. This result suggests that, if a massless spin-2 particle is discovered, it must be the graviton.\n", "In gravity theories with extended supersymmetry (extended supergravities), a graviphoton is normally a superpartner of the graviton that behaves like a photon, and is prone to couple with gravitational strength, as was appreciated in the late 1970s. Unlike the graviton, however, it may provide a \"repulsive\" (as well as an attractive) force, and thus, in some technical sense, a type of anti-gravity. Under special circumstances, then, in several natural models, often descending from five-dimensional theories mentioned, it may actually cancel the gravitational attraction in the static limit. Joël Scherk investigated semirealistic aspects of this phenomenon, thereby opening up an ongoing search for physical manifestations of the mechanism.\n", "The graviton, listed separately above, is a hypothetical particle that has been included in some extensions to the standard model to mediate the gravitational force. It is in a peculiar category between known and hypothetical particles: As an unobserved particle that is not predicted by, nor required for the Standard Model, it belongs in the table of hypothetical particles, below. 
But gravitational force itself is a certainty, and expressing that known force in the framework of a quantum field theory requires a boson to mediate it.\n", "In the framework of quantum field theory, the graviton is the name given to a hypothetical elementary particle speculated to be the force carrier that mediates gravity. However the graviton is not yet proven to exist, and no scientific model yet exists that successfully reconciles general relativity, which describes gravity, and the Standard Model, which describes all other fundamental forces. Attempts, such as quantum gravity, have been made, but are not yet accepted.\n", "Most theories containing gravitons suffer from severe problems. Attempts to extend the Standard Model or other quantum field theories by adding gravitons run into serious theoretical difficulties at energies close to or above the Planck scale. This is because of infinities arising due to quantum effects; technically, gravitation is not renormalizable. Since classical general relativity and quantum mechanics seem to be incompatible at such energies, from a theoretical point of view, this situation is not tenable. One possible solution is to replace particles with strings. String theories are quantum theories of gravity in the sense that they reduce to classical general relativity plus field theory at low energies, but are fully quantum mechanical, contain a graviton, and are thought to be mathematically consistent.\n", "In addition to uncertainty regarding whether antimatter is gravitationally attracted or repulsed from other matter, it is also unknown whether the magnitude of the gravitational force is the same. Difficulties in creating quantum gravity theories have led to the idea that antimatter may react with a slightly different magnitude.\n", "In theories of quantum gravity, the graviton is the hypothetical quantum of gravity, an elementary particle that mediates the force of gravity. 
There is no complete quantum field theory of gravitons due to an outstanding mathematical problem with renormalization in general relativity. In string theory, believed to be a consistent theory of quantum gravity, the graviton is a massless state of a fundamental string.\n" ]
How does a globe storm glass barometer work?
It is open to the atmosphere! That glass pipe off to the side is open, and when the air pressure increases it pushes down on the fluid in the pipe. Through the wonderful nature of hydraulics, the water in the globe then rises.
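As a rough sketch of the hydraulics at work: for a water column, the level change follows the hydrostatic relation Δh = ΔP/(ρg). The little helper below is purely illustrative (the function name and the example pressure swing are my own, not from any source):

```python
# Level change of a water column for a given change in air pressure,
# from the hydrostatic relation: delta_h = delta_P / (rho * g).
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def column_height_change(delta_pressure_pa):
    """Return the water-level change (in meters) for a pressure change (in Pa)."""
    return delta_pressure_pa / (RHO_WATER * G)

# A typical weather swing of ~10 hPa (1000 Pa) moves the level by ~10 cm,
# which is why the spout on these barometers is visibly long.
print(round(column_height_change(1000.0), 3))  # → 0.102
```

So even everyday pressure changes produce an easily visible level shift, which is what makes the open spout work as a crude barometer.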
[ "A barometer is a scientific instrument that is used to measure air pressure in a certain environment. Pressure tendency can forecast short term changes in the weather. Many measurements of air pressure are used within surface weather analysis to help find surface troughs, pressure systems and frontal boundaries.\n", "The weather ball barometer consists of a glass container with a sealed body, half filled with water. A narrow spout connects to the body below the water level and rises above the water level. The narrow spout is open to the atmosphere. When the air pressure is lower than it was at the time the body was sealed, the water level in the spout will rise above the water level in the body; when the air pressure is higher, the water level in the spout will drop below the water level in the body. A variation of this type of barometer can be easily made at home.\n", "A watch glass is a circular concave piece of glass used in chemistry as a surface to evaporate a liquid, to hold solids while being weighed, for heating a small amount of substance and as a cover for a beaker. The latter use is generally applied to prevent dust or other particles entering the beaker; the watch glass does not completely seal the beaker, so gas exchanges still occur. When used as an evaporation surface, a watch glass allows closer observation of precipitates or crystallization, and can be placed on a surface of contrasting color to improve the visibility overall. Watch glasses are also sometimes used to cover a glass of whisky, to concentrate the aromas in the glass, and to prevent spills when the whisky is swirled. Watch glasses are named so because they are similar to the glass used for the front of old-fashioned pocket watches. 
In reference to this, large watch glasses are occasionally known as clock glasses.\n", "BULLET::::- Helmholtz (DUO) (2015) large glass spheres act as sound resonators for low frequency noises, and various sized flames and rotating mirrors are used to show the visualization of the vibrations. There is a relationship in this piece between the histories of acoustic psychology and the physics of sound, with influence from the manometric flame apparatus and Helmholtz resonators.\n", "An aneroid barometer is an instrument used for measuring pressure as a method that does not involve liquid. Invented in 1844 by French scientist Lucien Vidi, the aneroid barometer uses a small, flexible metal box called an aneroid cell (capsule), which is made from an alloy of beryllium and copper. The evacuated capsule (or usually several capsules, stacked to add up their movements) is prevented from collapsing by a strong spring. Small changes in external air pressure cause the cell to expand or contract. This expansion and contraction drives mechanical levers such that the tiny movements of the capsule are amplified and displayed on the face of the aneroid barometer. Many models include a manually set needle which is used to mark the current measurement so a change can be seen. This type of barometer is common in homes and in recreational boats. It is also used in meteorology, mostly in barographs and as a pressure instrument in radiosondes.\n", "Aneroid barometers have a mechanical adjustment that allows the equivalent sea level pressure to be read directly and without further adjustment if the instrument is not moved to a different altitude. Setting an aneroid barometer is similar to resetting an analog clock that is not at the correct time. Its dial is rotated so that the current atmospheric pressure from a known accurate and nearby barometer (such as the local weather station) is displayed. 
No calculation is needed, as the source barometer reading has already been converted to equivalent sea-level pressure, and this is transferred to the barometer being set—regardless of its altitude. Though somewhat rare, a few aneroid barometers intended for monitoring the weather are calibrated to manually adjust for altitude. In this case, knowing \"either\" the altitude or the current atmospheric pressure would be sufficient for future accurate readings.\n", "An Abney level and clinometer, is an instrument used in surveying which consists of a fixed sighting tube, a movable spirit level that is connected to a pointing arm, and a protractor scale. An internal mirror allows the user to see the bubble in the level while sighting a distant target. It can be used as a hand-held instrument or mounted on a Jacob's staff for more precise measurement, and it is small enough to carry in a coat pocket.\n" ]
What happens to a morbidly obese individual during hydrated starvation?
Similar - and I emphasize SIMILAR - things have been done, although I frankly don't know where to find the study. Googling might pull up something for you. They took a fat dude, kept him supplied with fluids and vitamins and the like, and successfully kept him alive and relatively well. I think he lost a totally insane amount of weight, too.
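A back-of-the-envelope way to see why this is even possible: body fat is commonly cited as storing roughly 7700 kcal per kilogram, so large fat reserves can cover energy needs for a very long time. The sketch below is illustrative arithmetic only (the figures and function are my own assumptions, not from the study mentioned above, and obviously not medical advice):

```python
# Rough estimate of how long body-fat stores alone could cover
# energy expenditure during a supervised fast (illustrative numbers).
KCAL_PER_KG_FAT = 7700.0  # commonly cited energy density of adipose tissue

def fasting_days(fat_kg, daily_kcal):
    """Days the given fat mass could supply at a given daily expenditure."""
    return fat_kg * KCAL_PER_KG_FAT / daily_kcal

# e.g. 100 kg of body fat at 2000 kcal/day is on the order of a year of reserves.
print(round(fasting_days(100.0, 2000.0)))  # → 385
```

The catch, as the starvation literature quoted below notes, is everything fat can't supply: protein, vitamins, and electrolytes, which is why any such fast needs supplementation and medical supervision.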
[ "Early symptoms include impulsivity, irritability, and hyperactivity. Atrophy (wasting away) of the stomach weakens the perception of hunger, since the perception is controlled by the volume of the stomach that is empty. Individuals experiencing starvation lose substantial fat (adipose tissue) and muscle mass as the body breaks down these tissues for energy. \"Catabolysis\" is the process of a body breaking down its own muscles and other tissues in order to keep vital systems such as the nervous system and heart muscle (myocardium) functioning. The energy deficiency inherent in starvation causes fatigue and renders the victim more apathetic over time. As the starving person becomes too weak to move or even eat, their interaction with the surrounding world diminishes. In females, menstruation ceases when the body fat percentage is too low to support a fetus.\n", "Victims of starvation are often too weak to sense thirst, and therefore become dehydrated. All movements become painful due to muscle atrophy and dry, cracked skin that is caused by severe dehydration. With a weakened body, diseases are commonplace. Fungi, for example, often grow under the esophagus, making swallowing painful. Vitamin deficiency is also a common result of starvation, often leading to anemia, beriberi, pellagra, and scurvy. These diseases collectively can also cause diarrhea, skin rashes, edema, and heart failure. Individuals are often irritable and lethargic as a result.\n", "Months of depletion are usually necessary to deplete body stores of most nutrients and a nutritional optic neuropathy may be present in a patient with or without obvious evidence of under-nutrition. An individual suffering from starvation could be easily recognized as a person who is undernourished due to their cachectic corporal appearance. However, a not so obvious individual may be the recipient of a gastric bypass surgery, a procedure that may lead to vitamin B12 deficiency from poor absorption. 
The optic neuropathy associated with pernicious anemia and vitamin B12 deficiency can be seen amongst individuals who obtain adequate caloric input from foods low in nutritional and micronutrient density (see Food desert).\n", "The underlying starvation, malnourishment, and usually dehydration, associated with emaciation, affect and are harmful to organ systems throughout the body. The emaciated individual experiences disturbances of the blood, circulatory, and urinary systems; these include hyponatremia and/or hypokalemia (low sodium and/or potassium in the blood, respectively), anemia (low hemoglobin), improper function of lymph (immune system-related white blood matter) and the lymphatic system, and pleurisy (fluid in the pleural cavity surrounding the lungs) and edema (swelling in general) caused by poor or improper function of the kidneys to eliminate wastes from the blood.\n", "Starvation causes the body to metabolize its own (purine-rich) tissues for energy. Thus, like a high purine diet, starvation increases the amount of purine converted to uric acid. A very low calorie diet without carbohydrate can induce extreme hyperuricemia; including some carbohydrate (and reducing the protein) reduces the level of hyperuricemia. Starvation also impairs the ability of the kidney to excrete uric acid, due to competition for transport between uric acid and ketones.\n", "When the human body is deprived of adequate nutrition, testosterone levels drop, while the adrenal glands continue to produce estrogens, thereby causing a hormonal imbalance. Gynecomastia can also occur once normal nutrition is restarted (this is known as refeeding gynecomastia).\n", "Continuous dehydration can cause many problems, but is most often associated with renal problems and neurological problems such as seizures. Excessive thirst, known as polydipsia, along with excessive urination, known as polyuria, may be an indication of diabetes mellitus or diabetes insipidus.\n" ]