| question (string, 3–301 chars) | answer (string, 9–7.04k chars) | context (list of 7 strings) |
|---|---|---|
how is digesting liquids possible?
|
Digesting is just breaking things down into small pieces, then breaking those down into tiny pieces so you can absorb them. You break proteins down into little amino acid blocks, break carbs down into simple sugars, and break fats down into little fatty acids. Your body absorbs those, along with much of the water you ingest. Everything being liquid just means the digestive juices can mix with the food more easily.
|
[
"Water and saliva enter through the rumen to form a liquid pool. Liquid will ultimately escape from the reticulorumen from absorption through the wall, or through passing through the reticulo-omosal orifice, as digesta does. However, since liquid cannot be trapped in the mat as digesta can, liquid passes through the rumen much more quickly than digesta does. Liquid often acts as a carrier for very small digesta particles, such that the dynamics of small particles is similar to that of liquid.\n",
"Digestion is the breakdown of large insoluble food molecules into small water-soluble food molecules so that they can be absorbed into the watery blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. In chemical digestion, enzymes break down food into the small molecules the body can use.\n",
"Digestion of food is achieved through a mixture of mechanical and chemical processes. Failure to produce or secrete chemicals known as digestive enzymes can lead to failure digesting or breaking down specific components of ingested food.\n",
"In most vertebrates, digestion is a multistage process in the digestive system, starting from ingestion of raw materials, most often other organisms. Ingestion usually involves some type of mechanical and chemical processing. Digestion is separated into four steps:\n",
"Digestate is the solid remnants of the original input material to the digesters that the microbes cannot use. It also consists of the mineralised remains of the dead bacteria from within the digesters. Digestate can come in three forms: fibrous, liquor, or a sludge-based combination of the two fractions. In two-stage systems, different forms of digestate come from different digestion tanks. In single-stage digestion systems, the two fractions will be combined and, if desired, separated by further processing.\n",
"In most vertebrates, digestion is a four-stage process involving the main structures of the digestive tract, starting with ingestion, placing food into the mouth, and concluding with the excretion of undigested material through the anus. From the mouth, the food moves to the stomach, where as bolus it is broken down chemically. It then moves to the intestine, where the process of breaking the food down into simple molecules continues and the results are absorbed as nutrients into the circulatory and lymphatic system.\n",
"Digestion begins in the mouth with the secretion of saliva and its digestive enzymes. Food is formed into a bolus by the mechanical mastication and swallowed into the esophagus from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric acid and pepsin which would damage the walls of the stomach and mucus is secreted for protection. In the stomach further release of enzymes break down the food further and this is combined with the churning action of the stomach. The partially digested food enters the duodenum as a thick semi-liquid chyme. In the small intestine, the larger part of digestion takes place and this is helped by the secretions of bile, pancreatic juice and intestinal juice. The intestinal walls are lined with villi, and their epithelial cells is covered with numerous microvilli to improve the absorption of nutrients by increasing the surface area of the intestine.\n"
] |
Why was human anatomy poorly drawn in ancient works of art?
|
Not a professional historian, but I have read much on this on my own. Really the question here is fundamentally about the purpose of representational art.
In many cultures, paintings were not meant to be interpreted at face value, but rather as symbols. Anatomy is rather irrelevant when a culture simply uses painting as a tool to narrate a story or glorify a particular religion.
I'll give one example. It's pretty clear the ancient Egyptians were perfectly capable of creating naturalistic depictions of human beings; just look at this [bust of Nefertiti](_URL_0_). But the relief sculptures and paintings that come to mind when one thinks of "Ancient Egyptian art" have nothing to do with naturalism and everything to do with symbolism. There is a complex language of artistic rules there that speaks to their mythological narratives, and it takes a basic understanding of their culture to interpret. European Romanesque and Gothic painting can be explained in a similar way. Outside the Western historical tradition, look up Mughal or Edo-period Japanese painting for further examples of stylistic symbolism over naturalism.
(Of course, Renaissance artists, Mannerists, Neo-Classicists, etc. were heavy on symbolism as well; symbolism and naturalism aren't mutually exclusive.)
So it's not so much a question of whether certain cultures *could* create art with naturalistic anatomy, but rather what purpose painting served in different cultures, and why.
|
[
"Anatomy has served the visual arts since Ancient Greek times, when the 5th century BC sculptor Polykleitos wrote his \"Canon\" on the ideal proportions of the male nude. In the Italian Renaissance, artists from Piero della Francesca (c. 1415–1492) onwards, including Leonardo da Vinci (1452–1519) and his collaborator Luca Pacioli (c. 1447–1517), learnt and wrote about the rules of art, including visual perspective and the proportions of the human body.\n",
"The study of the human body was not isolated to only medical doctors and students, as many artists reflected their expertise through masterful drawings and paintings. The detailed study of human and animal anatomy, as well as the dissection of corpses, was utilized by early Italian renaissance man Leonardo da Vinci in an effort to more accurately depict the human figure through his work. He studied the anatomy from an exterior perspective as an apprentice under Andrea del Verrocchio that started in 1466. During his apprenticeship, Leonardo mastered drawing detailed versions of anatomical structures such as muscles and tendons by 1472.\n",
"For centuries artists have used their knowledge gleaned from the study of anatomy and the use of cadavers to better present a more accurate and lively representation of the human body in their artwork and mostly in paintings. It is thought that Michelangelo and/or Raphael may have also conducted dissections.\n",
"The study of anatomy flourished in the 17th and 18th centuries. At the beginning of the 17th century, the use of dissecting human cadavers influenced anatomy, leading to a spike in the study of anatomy. The advent of the printing press facilitated the exchange of ideas. Because the study of anatomy concerned observation and drawings, the popularity of the anatomist was equal to the quality of his drawing talents, and one need not be an expert in Latin to take part. Many famous artists studied anatomy, attended dissections, and published drawings for money, from Michelangelo to Rembrandt. For the first time, prominent universities could teach something about anatomy through drawings, rather than relying on knowledge of Latin. Contrary to popular belief, the Church neither objected to nor obstructed anatomical research.\n",
"Since the time of the Leonardo da Vinci, and his depictions of the human form, there has been great advancements in the art of representing the human body. The art has evolved over time from illustration to digital imaging using the technological advancements of the digital age. Berengario da Carpi was the first known anatomist to include medical illustration within his textbooks. Gray's Anatomy, originally published in 1858, is one well-known human anatomy textbook that showcases a variety of anatomy depiction techniques.\n",
"The study and teaching of anatomy through the ages would not have been possible without sketches and detailed drawings of discoveries when working with human corpses. The artistic depiction of the placement of body parts plays a crucial role in studying anatomy and in assisting those working with the human body. These images serve as the only glance into the body that most will never witness in person.\n",
"During the Renaissance in Italy, around the 1450 to 1600, the rebirth of classical Greek and Roman characteristics in art led to the studies of the human anatomy. The practice of dissecting the human body was banned for many centuries due to the belief that body and soul were inseparable. It wasn’t until the election of Pope Boniface VIII that the practice of dissection was once again allowed for observation. Many painters and artists documented and even performed the dissections themselves by taking careful observations of the human body. Among them were Leonardo da Vinci and Andreas Vesalius, two of the most influential artists in anatomical illustrations. Leonardo da Vinci, in particular, was very detailed in his studies that he was known as the “artist-anatomist” for the creation of a new science and the creative depiction to anatomy. Leonardo’s anatomical studies contributed to artistic exploration of the movement of the muscles, joints and bones. His goal was to analyze and understand the instruments behind the postures and gestures in the human body.\n"
] |
what is technically the difference between a thread and an async operation?
|
A thread is when you ask the operating system to start running another part of the program at the same time. If there are more parts running than CPU cores, the operating system will switch between them. Threads have a bunch of features (like separate stacks) that make them relatively "expensive". A program shouldn't have thousands of threads because it will waste memory and time.
"Async tasks" depend on the programming language, but they're generally short units of background work that are too small to be worth a whole thread of their own. The program creates one thread (or a small pool), and that thread runs async tasks whenever they are ready to make progress. This means a new thread doesn't need to be created for every task.
"Async tasks" can also be things that don't use a thread at all, as long as the program can remember it's waiting for something to happen. For example, waiting for the user to type something could be an async task. The program won't use a thread to wait for the user to type something (because that's a waste of a thread) but it knows that when the user does type something, the task should be marked as completed.
"Async tasks" got a big boost in popularity some time ago because: people wanted to do more things asynchronously (in the background), people realised that having a thread for every single thing is not efficient, and because JavaScript basically forces you to use them so people got used to them.
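The split between "one OS thread per job" and "many tasks sharing one thread" can be sketched in a few lines of Python. This is a minimal illustration, not the one true definition: asyncio is just one language's take on async tasks, and the function names here are made up for the example.

```python
import asyncio
import threading
import time

def blocking_wait():
    # A real OS thread sits occupied for the whole sleep:
    # its stack and kernel bookkeeping exist the entire time.
    time.sleep(0.01)

async def async_wait(n):
    # An async task just tells the event loop "resume me later";
    # no extra OS thread is tied up while it waits.
    await asyncio.sleep(0.01)
    return n

# Thread version: one OS thread per job (expensive at large counts).
threads = [threading.Thread(target=blocking_wait) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Async version: five tasks share a single thread and its event loop.
async def main():
    return await asyncio.gather(*(async_wait(i) for i in range(5)))

results = asyncio.run(main())
print(results)  # [0, 1, 2, 3, 4]
```

Both versions finish in roughly one sleep interval, but the async version never needed more than one thread while waiting; that difference is exactly why servers juggling thousands of mostly idle connections prefer tasks over threads.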
|
[
"The AN thread is a particular type of fitting used to connect flexible hoses and rigid metal tubing that carry fluid. It is a US military-derived specification that dates back to World War II and stems from a joint standard agreed upon by the Army and Navy, hence AN.\n",
"Addresses in the thread are the addresses of machine language. This form is simple, but may have overheads because the thread consists only of machine addresses, so all further parameters must be loaded indirectly from memory. Some Forth systems produce direct-threaded code. On many machines direct-threading is faster than subroutine threading (see reference below).\n",
"A thread may simultaneously wait on multiple channels, synchronous or asynchronous, acting upon the first one available given a specified order of priority or optionally executing an alternate path if none is ready.\n",
"Thread safety is a computer programming concept applicable to multi-threaded code. Thread-safe code only manipulates shared data structures in a manner that ensures that all threads behave properly and fulfill their design specifications without unintended interaction. There are various strategies for making thread-safe data structures.\n",
"In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. The implementation of threads and processes differs between operating systems, but in most cases a thread is a component of a process. Multiple threads can exist within one process, executing concurrently and sharing resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time.\n",
"RT-Thread is an open source real-time operating system for embedded devices. It is distributed under the Apache 2.0+ licence. RT-Thread is developed by the RT-Thread Development Team based in China, after ten years' fully concentrated development. It is aimed to change the current situation in China that there is no well used open source real-time operating system in the microcontroller area.\n",
"The name ThreadX is derived from the fact that threads are used as the executable elements and the letter \"X\" represents context switching, i.e., it switches threads. ThreadX provides priority-based, preemptive scheduling, fast interrupt response, memory management, interthread communication, mutual exclusion, event notification, and thread synchronization features. Major distinguishing technology characteristics of ThreadX include preemption-threshold, priority inheritance, efficient timer management, picokernel design, event-chaining, fast software timers, and compact size. The minimal footprint of ThreadX on an ARM processor is on the order of 2KB.\n"
] |
What started the European explorers craze that got them discovering the new world and Asia?
|
Money, prestige, and the struggle for dominance over rival nations. The usual suspects.
It started with Portuguese attempts to corner the spice trade to and from the East Indies by getting ships around Africa (to circumvent the Ottoman control of the ancient overland routes), which was made more complicated by the fact that their instruments didn't work well as one approached the equator. In any event, since one voyage could at times result in a profit of 400% or more, this dangerous effort was more than worthwhile.
Also, it wasn't just Europeans. China sent out large fleets to explore Asia and East Africa as well. This stopped when a later emperor decided to end the expeditions. As a unified state, they could do that. If a European king chose to end such efforts, all that would mean is that one of the neighbours would eventually take up the challenge instead.
|
[
"With the Age of Discovery starting in the 15th century, Europeans explored the world by ocean, searching for particular trade goods, humans to enslave, and trading locations and ports. The most desired trading goods were gold, silver and spices. Columbus did not reach Asia but rather found what was to the Europeans a New World, the Americas. For the Catholic monarchies of Spain and Portugal, a division of influence became necessary to avoid conflict. This was resolved by Papal intervention in 1494 when the Treaty of Tordesillas purported to divide the world between the two powers. The Portuguese were to receive everything outside of Europe east of a line that ran 270 leagues west of the Cape Verde Islands, thought to include the continents of Africa and Asia, but none of the New World. The Spanish received everything west of this line, territory that was still almost completely unknown, and proved to be primarily the vast majority of the continents of the Americas and the Islands of the Pacific Ocean. This arrangement was somewhat subverted in 1500, when the Portuguese navigator Pedro Álvares Cabral arrived at a point on the eastern coast of South America, and realized that it was on the Portuguese side of the dividing line between the two empires. This would lead to the Portuguese colonization of what is now Brazil.\n",
"European overseas exploration led to the rise of global trade and the European colonial empires, with the contact between the \"Old World\" (Europe, Asia and Africa) and the \"New World\" (the Americas and Australia) producing the Columbian Exchange, a wide transfer of plants, animals, food, human populations (including slaves), communicable diseases and culture between the Eastern and Western Hemispheres. This represented one of the most significant global events concerning ecology, agriculture and culture in history. The Age of Discovery and later European exploration allowed the global mapping of the world, resulting in a new worldview and distant civilizations coming into contact, but also led to the propagation of diseases that decimated populations not previously in contact with Eurasia and Africa and to the enslavement, exploitation, military conquest and economic dominance by Europe and its colonies over native populations. It also allowed for the expansion of Christianity throughout the world: with the spread of missionary activity, it eventually became the world's largest religion.\n",
"Italian explorers and navigators from the dominant maritime republics played a key role in ushering the Age of Discovery and the European colonization of the Americas. The most notable among them were: Christopher Columbus, who led the first European expeditions to the Caribbean and Central and South America, and he is credited with discovering the New World and the opening of the Americas for conquest and settlement by Europeans; John Cabot, the first European to explore parts of the North American continent in 1497; Amerigo Vespucci, who first demonstrated in about 1502 that the New World was not Asia as initially conjectured, but a fourth continent previously unknown to people of the Old World (America is named after him); and Giovanni da Verrazzano, renowned as the first European to explore the Atlantic coast of North America between Florida and New Brunswick in 1524. Furthermore, the Papal States was involved in resolving disputes between competing colonial powers. The only attempt by an Italian state to colonise the Americas was taken into consideration by Ferdinando I de' Medici, Grand Duke of Tuscany, who organised an expedition in 1608 under the command of Robert Thornton to northern Brazil and the Amazon river; after Ferdinando's death the following year, nobody after him was interested in the establishment of an overseas colony. However, Italian nobleman Giovanni Paolo Lascaris, Grand Master of the Knights Hospitaller of Malta at that time part of Sicily, possessed some Caribbean islands that were colonized from 1651 to 1665.\n",
"From the early 15th century to the early 17th century the Age of Discovery had, through Spanish and Portuguese seafarers, opened up southern Africa, the Americas (New World), Asia and Oceania to European eyes: Bartholomew Dias had sailed around the Cape of southern Africa in search of a trade route to India; Christopher Columbus, on four journeys across the Atlantic, had prepared the way for European colonisation of the New World; Ferdinand Magellan had commanded the first expedition to sail across the Atlantic and Pacific oceans to complete the first circumnavigation of the Earth. Over this period colonial power shifted from the Portuguese and Spanish to the Dutch and then the British and French. The new era of scientific exploration began in the late 17th century as scientists, and in particular natural historians, established scientific societies that published their researches in specialist journals. The British Royal Society was founded in 1660 and encouraged the scientific rigour of empiricism with its principles of careful observation and deduction. Activities of early members of the Royal Society served as models for later maritime exploration. Hans Sloane (1650–1753) was elected a member in 1685 and travelled to Jamaica from 1687 to 1689 as physician to the Duke of Albemarle (1653–1688) who had been appointed Governor of Jamaica. In Jamaica Sloane collected numerous specimens which were carefully described and illustrated in a published account of his stay. Sloane bequeathed his vast collection of natural history 'curiosities' and library of over 50,000 bound volumes to the nation, prompting the establishment in 1753 of the British Museum. His travels also made him an extremely wealthy man as he patented a recipe that combined milk with the fruit of \"Theobroma cacao\" (cocoa) he saw growing in Jamaica, to produce milk chocolate. 
Books of distinguished social figures like the intellectual commentator Jean Jacques Rousseau, Director of the Paris Museum of Natural History Comte de Buffon, and scientist-travellers like Joseph Banks, and Charles Darwin, along with the romantic and often fanciful travelogues of intrepid explorers, increased the desire of European governments and the general public for accurate information about the newly discovered distant lands.\n",
"With the great Portuguese explorations which opened up new ocean routes, the spice trade no longer went through the Mediterranean. Moreover, the discovery of the Americas started a crisis of Mediterranean shipping. That was the beginning of the decline of both the Venetian and Ragusan republics.\n",
"Isabella and Ferdinand authorized the 1492 expedition of Christopher Columbus, who became the first known European to reach the New World since Leif Ericson. This and subsequent expeditions led to an influx of wealth into Spain, supplementing income from within Castile for the state that would prove to be a dominant power of Europe for the next two centuries.\n",
"The discoveries of Christopher Columbus electrified all of western Europe, especially maritime powers like England. King Henry VII commissioned John Cabot to lead a voyage to find a northern route to the Spice Islands of Asia; this began the search for the North West Passage. Cabot sailed in 1497 and reached Newfoundland. He led another voyage to the Americas the following year, but nothing was heard of him or his ships again.\n"
] |
investing: buying stocks, selling stocks. eh?
|
A stock is a piece of the company. If the company has issued 1,000 shares and you own 100 of them, you essentially own 10% of the company.
The main reason for buying stock is investment. If the company makes a profit, the company is worth more, so your share of it is worth more. The company may also pay dividends to shareholders, essentially sharing part of its earnings with its owners.
The price of a stock depends heavily on investor confidence. If you believe a company will do well, you are willing to buy its stock, which pushes the share price up. If the company is failing, investors sell off the stock, and the share price falls.
You can buy stocks through a stock broker, or sometimes directly from the company (if you are rich).
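The ownership and dividend arithmetic above is simple enough to spell out in a few lines. All the numbers here are made up purely for illustration:

```python
# Hypothetical figures, just to illustrate the arithmetic in the answer.
shares_outstanding = 1_000   # total shares the company has issued
shares_owned = 100           # shares you hold

ownership = shares_owned / shares_outstanding
print(f"You own {ownership:.0%} of the company")   # You own 10% of the company

# If the company pays out $50,000 in dividends, your cut is proportional:
total_dividend = 50_000
your_dividend = total_dividend * ownership
print(f"Your dividend: ${your_dividend:,.0f}")     # Your dividend: $5,000
```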
|
[
"In the book, Fisher says that because the stock market is a discounter of all widely known information, the only way to make, on average, winning market bets is knowing something most others don’t. The book claims investing should be treated as a science, not a craft, and details a methodology for testing beliefs and uncovering information not widely known or understood. The book’s scientific method consists of asking three questions:\n",
"Davis believes stocks represent ownership interests in real businesses and therefore devotes significant time and resources to rigorous fundamental analysis of companies, all while maintaining a strict valuation discipline based on a concept known as “owner earnings” (i.e., the normalized cash earnings power of a business).\n",
"Selling stock is procedurally similar to buying stock. Generally, the investor wants to buy low and sell high, if not in that order (short selling); although a number of reasons may induce an investor to sell at a loss, e.g., to avoid further loss.\n",
"Financial betting refers to the wagering on the price development of a financial instrument at some later date relative to the current price or level of the instrument, against odds offered by a bookmaker. Maximum potential pay-off of the wager is known when the bet is taken and as a corollary risk is known beforehand by being limited to the initial stake.\n",
"Speculation, in the narrow sense of financial speculation, involves the buying, holding, selling, and short-selling of stocks, bonds, commodities, currencies, collectibles, real estate, derivatives or any valuable financial instrument to profit from fluctuations in its price as opposed to buying it for use or for income via methods such as dividends or interest. Speculation or agiotage represents one of three market roles in western financial markets, distinct from hedging, long term investing and arbitrage. Speculators in an asset may have no intention to have long term exposure to that asset.\n",
"When it comes to financing a purchase of stocks there are two ways: purchasing stock with money that is currently in the buyer's ownership, or by buying stock on margin. Buying stock on margin means buying stock with money borrowed against the value of stocks in the same account. These stocks, or collateral, guarantee that the buyer can repay the loan; otherwise, the stockbroker has the right to sell the stock (collateral) to repay the borrowed money. He can sell if the share price drops below the margin requirement, at least 50% of the value of the stocks in the account. Buying on margin works the same way as borrowing money to buy a car or a house, using a car or house as collateral. Moreover, borrowing is not free; the broker usually charges 8–10% interest.\n",
"The odds are derived from a variety of factors through analysis of information. Certain markets are highly statistical, whereas other markets require more intuition and insight. An odds compiler may be required to monitor the financial position the bookmaker is in and adjust their position (and odds) accordingly. They may also be consulted as to whether to accept a bet or not, usually in the case where a very large bet is being placed, so as to not incur dangerously-high liabilities. Odds are usually not set completely independent from other bookmakers but are influenced by what others are quoting. This is particularly important when the overround is below 100% and hence arbitrage betting, where betters can make a profit regardless of the outcome, is possible (see mathematics of bookmaking). In this case, the bookmaker with the most aberrant odds would usually alter their odds closer to other bookmakers' prices. The odds are influenced by betting volume so that a selection receiving a high volume of liquidity may have the odds for it cut.\n"
] |
why does a yawn filter out any deep bass sounds?
|
The middle ear is normally sealed: air can't get in or out. However, there is a tube (the Eustachian tube) connecting it to the throat so that pressure can be equalised. When you yawn, these tubes open; the same thing happens when you swallow, which is why swallowing or yawning can fix your ears when they 'pop', like on a plane. It equalises the pressure. On top of that, a small muscle in the middle ear (the tensor tympani) contracts during a yawn, which damps the eardrum's response and muffles low-frequency sounds, so deep bass gets filtered out.
|
[
"During a yawn, the tensor tympani muscle in the middle ear contracts, creating a rumbling noise from within the head. Yawning is sometimes accompanied, in humans and other animals, by an instinctive act of stretching several parts of the body, including arms, neck, shoulders and back.\n",
"The most common and effective method of woodwind growling is to hum, sing, or even scream into the mouthpiece of the instrument. This method introduces interference within the instrument itself, breaking up the normal quality of sound waves produced. Furthermore, the vibration of the vocal note in the mouth and lips creates rustle noise in the instrument.\n",
"The bassoon embouchure is a very important aspect of producing a full, round, and dark bassoon tone. The bassoon embouchure is made by opening one's mouth, rolling lips inward to cover the teeth, and then dropping the jaw down as in a yawning motion (without actually yawning or opening the mouth). Both upper and lower teeth should be covered by the lips in order to protect the reed and control applied pressure. The reed is then placed in the mouth, with the lips and facial muscles maintaining an airtight seal around the reed. The upper lip will usually be farther forward on the reed than the lower lip, as in an \"overbite\" of the upper jaw. As with other orchestral double reeds, embouchure must be adjusted constantly for good intonation; this is achieved by adjusting the oral cavity with jaw movement. As with the oboe, it takes a great deal of experience to form an embouchure technique that will maintain sound quality and intonation at all pitches, volumes, and across various reeds.\n",
"The growl gives the performer's sound a dark, guttural, gritty timbre resulting largely from the rustle noise and desirable consonance and dissonance effects produced. The technique of simultaneous playing a note and singing into an instrument is also known as horn chords or multiphonics.\n",
"Some individuals can voluntarily produce this rumbling sound by contracting the tensor tympani muscle of the middle ear. The rumbling sound can also be heard when the neck or jaw muscles are highly tensed as when yawning deeply. This phenomenon has been known since (at least) 1884.\n",
"BULLET::::- \"Growling\" is a technique used whereby the saxophonist sings, hums, or growls, using the back of the throat while playing. This causes a modulation of the sound, and results in a gruffness or coarseness of the sound. It is rarely found in classical or band music, but is often utilized in jazz, blues, rock 'n' roll, and other popular genres. Some notable musicians who utilized this technique are Earl Bostic, Boots Randolph, Gato Barbieri, Ben Webster, Clarence Clemons, Nelson Rangell, David Sanborn, Greg Ham, Hank Carter, Bobby Keys, Keith Crossan, and King Curtis.\n",
"A woodwind growl can also be produced by allowing air to escape from around the corners of the mouth, causing a vibration in the lips and mouthpiece. Although this method does not set up patterns of interference, it does produce the characteristic rustle noise of the growl.\n"
] |
If fusion naturally occurs in stars, does fission occur naturally anywhere or only under manmade conditions?
|
A few nuclides, including some uranium isotopes, fission spontaneously, but this process is very rare. Most fission events are induced by neutrons, and sustaining a natural chain reaction requires a large assemblage of radioactive and fissile material and the right geological environment. This has happened naturally at least once in Earth's history. See these articles [[1]](_URL_0_), [[2]](_URL_1_); the second one is more technical.
|
[
"Fission occurs naturally because each event gives off more than one neutron capable of producing additional fission events. Fusion, at least in D-T fuel, gives off only a single neutron, and that neutron is not capable of producing more fusion events. When that neutron strikes fissile material in the blanket, one of two reactions may occur. In many cases, the kinetic energy of the neutron will cause one or two neutrons to be struck out of the nucleus without causing fission. These neutrons still have enough energy to cause other fission events. In other cases the neutron will be captured and cause fission, which will release two or three neutrons. This means that every fusion neutron in the fusion–fission design can result in anywhere between two and four neutrons in the fission fuel.\n",
"The most common fission process is binary fission, and it produces the fission products noted above, at 95±15 and 135±15 u. However, the binary process happens merely because it is the most probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, a process called ternary fission produces three positively charged fragments (plus neutrons) and the smallest of these may range from so small a charge and mass as a proton (Z=1), to as large a fragment as argon (Z=18). The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called \"long range alphas\" at ~ 16 MeV), plus helium-6 nuclei, and tritons (the nuclei of tritium). The ternary process is less common, but still ends up producing significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors.\n",
"Spontaneous fission gives much the same result as induced nuclear fission. However, like other forms of radioactive decay, it occurs due to quantum tunneling, without the atom having been struck by a neutron or other particle as in induced nuclear fission. Spontaneous fissions release neutrons as all fissions do, so if a critical mass is present, a spontaneous fission can initiate a self-sustaining chain reaction. Radioisotopes for which spontaneous fission is not negligible can be used as neutron sources. For example, californium-252 (half-life 2.645 years, SF branch ratio about 3.1 percent) can be used for this purpose. The neutrons released can be used to inspect airline luggage for hidden explosives, to gauge the moisture content of soil in highway and building construction, or to measure the moisture of materials stored in silos, for example.\n",
"A variety of nuclear fusion reactions take place in the cores of stars, that depend upon their mass and composition. When nuclei fuse, the mass of the fused product is less than the mass of the original parts. This lost mass is converted to electromagnetic energy, according to the mass–energy equivalence relationship \"E\" = \"mc²\".\n",
"Nuclear fusion produces energy by combining the very lightest elements into more tightly bound elements (such as hydrogen into helium), and nuclear fission produces energy by splitting the heaviest elements (such as uranium and plutonium) into more tightly bound elements (such as barium and krypton). Both processes produce energy, because middle-sized nuclei are the most tightly bound of all.\n",
"Fuels that produce energy by the process of nuclear fusion are currently not utilized by humans but are the main source of fuel for stars. Fusion fuels tend to be light elements such as hydrogen which will combine easily. Energy is required to start fusion by raising temperature so high all materials would turn into plasma, and allow nuclei to collide and stick together with each other before repelling due to electric charge. This process is called fusion and it can give out energy.\n",
"In nuclear fusion, two low mass nuclei come into very close contact with each other, so that the strong force fuses them. It requires a large amount of energy for the strong or nuclear forces to overcome the electrical repulsion between the nuclei in order to fuse them; therefore nuclear fusion can only take place at very high temperatures or high pressures. When nuclei fuse, a very large amount of energy is released and the combined nucleus assumes a lower energy level. The binding energy per nucleon increases with mass number up to nickel-62. Stars like the Sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. A frontier in current research at various institutions, for example the Joint European Torus (JET) and ITER, is the development of an economically viable method of using energy from a controlled fusion reaction.\n"
] |
What natural disaster significantly changed the course of history?
|
The [Lisbon earthquake](_URL_0_) of 1755 had a profound effect on enlightenment philosophy. All the churches were destroyed and a huge number of people killed on All Saints Day. This led an entire generation of influential philosophers, including Voltaire and Kant, to question the existence or benevolence of God.
|
[
"Throughout history, seismic events have at times caused submergence of human settlements. The remains of such catastrophes exist all over the world, and sites such as Alexandria and Port Royal now form important archaeological sites. As with shipwrecks, archaeological research can follow multiple themes, including evidence of the final catastrophe, the structures and landscape before the catastrophe and the culture and economy of which it formed a part. Unlike the wrecking of a ship, the destruction of a town by a seismic event can take place over many years and there may be evidence for several phases of damage, sometimes with rebuilding in between.\n",
"The Long Emergency: Surviving the Converging Catastrophes of the Twenty-first Century is a book by James Howard Kunstler (Grove/Atlantic, 2005) exploring the consequences of a world oil production peak, coinciding with the forces of climate change, resurgent diseases, water scarcity, global economic instability and warfare to cause major trouble for future generations.\n",
"Natural disasters have a long history in this geologically active part of the world. For example, two of the three moves of the capital of Guatemala have been due to volcanic mudflows in 1541 and earthquakes in 1773.\n",
"Barro and subsequent economists have provided historical evidence to support this claim. Using this evidence, Barro shows that rare disasters occur frequently and in large magnitude, in economies around the world from a period from the mid-19th century to the present day.\n",
"An example is the lake overflow that caused one of the worst landslide-related disasters in history on June 10, 1786. A landslide dam on Sichuan's Dadu River, created by an earthquake ten days earlier, burst and caused a flood that extended downstream and killed 100,000 people.\n",
"Different reasons are posited for the inability of Modern Times to continue beyond its fourteen-year life span. Vern Dyson wrote that “the economic panic of 1857 undermined the business enterprises of the village and the civil war completed the task of annihilation.” William Bailie also stated his opinion about the deleterious effects of the Panic of 1857 on Modern Times. Roger Wunderlich felt that Bailie exaggerated the impact of the 1857 crash on Modern Times and states that after the crash a slow but steady influx of settlers continued. Wunderlich further says that even before 1857, more of the skilled workers of Modern Times “were beginning to be disinclined to trade their skills at par without financial gain,” and so worked outside of Modern Times. Charles Codman, who was a resident of Modern Times, wrote many years later that the lack of a charismatic leader who was able to spread the ideas about Modern Times to a wider audience and the fact that a good deal of the settlers were not dedicated to the ideals of equitable commerce led to its inability to continue as a utopian village. Finally, Roger Wunderlich wrote that the Civil war presented the residents of Modern Times with a situation and choices that were contrary to their personal beliefs. Could one be a believer in individual sovereignty and join the army to fight or pay taxes for the war effort? In the end, out of the 168 men from Islip town who enlisted in the Union Army, 15 of them were from Modern Times. Modern Times had sent a higher proportion of its men to serve than did the Town of Islip as a whole.\n",
"\"But the rest of the world was also affected by this catastrophe. Earthquakes, volcanic eruptions, atomic fallout and dramatic changes in climate turned Earth into a living hell. Those who survived, slowly, from generation to generation, degenerated more and more. Finally evolution was thrown back into the stone age where the dawning of another mankind began…\"\n",
] |
how do photographers who print a lot of tourist photos make a profit if not everything they printed is sold? how does their business model work?
|
Selling 1 photo pays for a LOT of unsold ones, and people are more likely to buy something they can physically see in front of them than something on a screen that they then have to wait to have printed.
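The margin arithmetic behind "one sale pays for a lot of unsold prints" can be sketched with toy numbers (all figures below are made up for illustration, not from the source):

```python
# Toy break-even sketch for printing tourist photos speculatively.
# All prices and rates are hypothetical.

def profit(prints_made, sell_rate, print_cost, sale_price):
    """Net profit when only a fraction of the printed photos sell."""
    sold = prints_made * sell_rate
    return sold * sale_price - prints_made * print_cost

# Printing 100 photos at $0.50 each and selling just 20% at $10:
print(profit(100, 0.20, 0.50, 10.00))  # 20 * 10 - 100 * 0.5 = 150.0
```

With a markup that large, the sell-through rate can be quite low before the speculative prints stop paying for themselves.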
|
[
"Most photographers allow clients to purchase additional prints for themselves or their families. Many photographers now provide online sales either through galleries located on their own websites or through partnerships with other vendors. Those vendors typically host the images and provide the back end sales mechanism for the photographer; the photographer sets his or her own prices and the vendor takes a commission or charges a flat fee.\n",
"Professional photographers can sell their photos at their own price, higher than the printing cost imposed by Fotki. The difference in price minus 15% commission fee goes to the photographer. Earned money is remitted by PayPal, check or bank transfer. Photographers who offer their photos for sale can upload small size, low-res or watermarked images to avoid theft. When such photo is requested, the photographer has the option to securely upload the full size image for printing.\n",
"In return for a fee, subscription-based photo sharing sites offer their services without the distraction of advertisements or promotions for prints and gifts. They may also have other enhancements over free services, such as guarantees regarding the online availability of photos, more storage space, the ability for non-account holders to download full-size, original versions of photos, and tools for backing up photos. Some offer user photographs for sale, splitting the proceeds with the photographer, while others may use a disclaimer to reserve the right to use or sell the photos without giving the photographer royalties or notice.\n",
"Photos taken by a photographer while working on assignment are often work for hire belonging to the company or publication unless stipulated otherwise by contract. Professional portrait and wedding photographers often stipulate by contract that they retain the copyright of their photos, so that only they can sell further prints of the photographs to the consumer, rather than the customer reproducing the photos by other means. If the customer wishes to be able to reproduce the photos themselves, they may discuss an alternative contract with the photographer in advance before the pictures are taken, in which a larger up front fee may be paid in exchange for reprint rights passing to the customer.\n",
"A professional photographer may be an employee, for example of a newspaper, or may contract to cover a particular planned event such as a wedding or graduation, or to illustrate an advertisement. Others, like fine art photographers, are freelancers, first making an image and then licensing or making printed copies of it for sale or display. Some workers, such as crime scene photographers, estate agents, journalists and scientists, make photographs as part of other work. Photographers who produce moving rather than still pictures are often called cinematographers, videographers or camera operators, depending on the commercial context.\n",
"Many people take photographs for commercial purposes. Organizations with a budget and a need for photography have several options: they can employ a photographer directly, organize a public competition, or obtain rights to stock photographs. Photo stock can be procured through traditional stock giants, such as Getty Images or Corbis; smaller microstock agencies, such as Fotolia; or web marketplaces, such as Cutcaster.\n",
"The exclusive right of photographers to copy and use their products is protected by copyright. Countless industries purchase photographs for use in publications and on products. The photographs seen on magazine covers, in television advertising, on greeting cards or calendars, on websites, or on products and packages, have generally been purchased for this use, either directly from the photographer or through an agency that represents the photographer. A photographer uses a contract to sell the \"license\" or use of his or her photograph with exact controls regarding how often the photograph will be used, in what territory it will be used (for example U.S. or U.K. or other), and exactly for which products. This is usually referred to as usage fee and is used to distinguish from production fees (payment for the actual creation of a photograph or photographs). An additional contract and royalty would apply for each additional use of the photograph.\n"
] |
How did people precisely control the temperature of ovens for baking at specific temperatures?
|
A bake oven is a very big pile of masonry. Get all that thermal mass up to temperature and it will fluctuate fairly little, and occasional stoking with a little wood can keep it hot. This is why baking was commonly done either by bakers, all day and night long, or by housewives one day a week: once the oven was hot, you wanted to make full use of it. The alternative for housewives was a Dutch oven, a deep covered cast-iron pot used in the fireplace, which could have coals placed on its lid. But it couldn't do loaves of bread and pies as well as a bake oven.
For how to judge the oven temperature, here's what Lydia M. Child said in her *The American Frugal Housewife*:
> Heating ovens must be regulated by experience and observation. There is a difference in wood in giving out heat; there is a great difference in the construction of ovens; and when an oven is extremely cold, either on account of the weather, or want of use, it must be heated more. Economical people heat ovens with pine wood, fagots, brush, and such light stuff. If you have none but hard wood, you must remember that it makes very hot coals, and therefore less of it will answer. A smart fire for an hour and a half is a general rule for common sized family ovens, provided brown bread and beans are to be baked. An hour is long enough to heat an oven for flour bread. Pies bear about as much heat as flour bread: pumpkin pies will bear more. If you are afraid your oven is too hot, throw in a little flour, and shut it up for a minute. If it scorches black immediately, the heat is too furious; if it merely browns, it is right. Some people wet an old broom two or three times, and turn it round near the top of the oven till it dries; this prevents pies and cake from scorching on the top. When you go into a new house, heat your oven two or three times, to get it seasoned, before you use it. After the wood is burned, rake the coals over the bottom of the oven, and let them lie a few minutes.
|
[
"Common oven temperatures (such as terms: cool oven, very slow oven, slow oven, moderate oven, hot oven, fast oven, etc.) are set to control the effects of baking in an oven, for various lengths of time.\n",
"Before ovens had thermometers or thermostats, these standard words were used by cooks and cookbooks to describe how hot an oven should be to cook various items. Custards require a slow oven for example, bread a moderate oven, and pastries a very hot oven. Cooks estimated the temperature of an oven by counting the number of minutes it took to turn a piece of white paper golden brown, or counting the number of seconds one could hold one's hand in the oven. Another method was to put a layer of flour or a piece of white tissue paper on a pan in the oven for five minutes. The resulting colors range from delicate brown in a slow oven through golden brown in a moderate oven to dark brown in a hot oven.\n",
"Ovens also vary in the way that they are controlled. The simplest ovens (for example, the AGA cooker) may not have any controls at all; the ovens simply run continuously at various temperatures. More conventional ovens have a simple thermostat which turns the oven on and off and selects the temperature at which it will operate. Set to the highest setting, this may also enable the broiler element. A timer may allow the oven to be turned on and off automatically at pre-set times. More sophisticated ovens may have complex, computer-based controls allowing a wide variety of operating modes and special features including the use of a temperature probe to automatically shut the oven off when the food is completely cooked to the desired degree.\n",
"A convection oven allows a reduction in cooking temperature compared to a conventional oven. This comparison will vary, depending on factors including, for example, how much food is being cooked at once or if airflow is being restricted, for example by an oversized baking tray. This difference in cooking temperature is offset as the circulating air transfers heat more quickly than still air of the same temperature. In order to transfer the same amount of heat in the same time, the temperature must be lowered to reduce the rate of heat transfer in order to compensate.\n",
"Ovens usually can use a variety of methods to cook. The most common may be to heat the oven from below. This is commonly used for baking and roasting. The oven may also be able to heat from the top to provide broiling (US) or grilling (UK/Commonwealth). A fan-assisted oven that uses a small fan to circulate the air in the cooking chamber, can be used. Both are also known as convection ovens. An oven may also provide an integrated rotisserie.\n",
"A complete cycle involves heating the oven to the required temperature, maintaining that temperature for the proper time interval for that temperature, turning the machine off and cooling the articles in the closed oven till they reach room temperature. The standard settings for a hot air oven are:\n",
"Reach-in ovens are meant for different industrial applications that may need uniform temperature throughout. The ovens normally use horizontal re-circulating air to ensure the uniform temperature, and can use fans that circulate air, creating the airflow. Reach-in ovens can be used in numerous production and laboratory applications, including curing, drying, sterilizing, aging, and other process-critical applications.\n"
] |
PCRs (polymerase chain reactions)
|
So it's been a while since I've done it, but here goes: think of DNA like a zipper. Pull the 2 sides apart and then cut the 2 single strands into chunks. With spare zipper teeth (the bases A, C, T, and G) you can build 2 new strands, zip it back together, and now you have 2 full zippers.
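Because each cycle turns every zipper into two, the copy count compounds fast. A minimal sketch of the ideal-case arithmetic (real reactions are less than perfectly efficient, so actual yields fall short of this):

```python
# Ideal PCR: each thermal cycle doubles every template molecule.
def copies_after(cycles, start=1):
    """Copies after the given number of cycles, assuming perfect doubling."""
    return start * 2 ** cycles

print(copies_after(10))  # 1024 copies from a single starting molecule
print(copies_after(30))  # 1073741824 -- over a billion after 30 cycles
```

That exponential growth is why the technique can make a single target sequence easy to detect.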
|
[
"The polymerase chain reaction (PCR) is a biochemical technology in molecular biology to amplify a single or a few copies of a piece of DNA across several orders of magnitude, generating thousands to millions of copies of a particular DNA sequence.\n",
"The polymerase chain reaction (PCR) is a biochemistry and molecular biology technique for isolating and exponentially amplifying a fragment of DNA, via enzymatic replication, without using a living organism. It enables the detection of specific strands of DNA by making millions of copies of a target genetic sequence. The target sequence is essentially photocopied at an exponential rate, and simple visualisation techniques can make the millions of copies easy to see.\n",
"The polymerase chain reaction (PCR) is a fundamental molecular biology technique that enables the selective amplification of DNA sequences, which is useful for expanded use of rare samples e.g.: stem cells, biopsies, circulating tumor cells. The reaction involves thermal cycling of the DNA sequence and DNA polymerase through three different temperatures. Heating up and cooling down in conventional PCR devices are time-consuming and typical PCR reactions can take hours to complete. Other drawbacks of conventional PCR is the high consumption of expensive reagents, preference for amplifying short fragments, and the production of short chimeric molecules. PCR chips serve to miniaturize the reaction environment to achieve rapid heat transfer and fast mixing due to the larger surface-to-volume ratio and short diffusion distances. The advantages of PCR chips include shorter thermal-cycling time, more uniform temperature which enhances yield, and portability for point-of-care applications. Two challenges in microfluidic PCR chips are PCR inhibition and contamination due to the large surface-to-volume ratio increasing surface-reagent interactions. For example, silicon substrates have good thermal conductivity for rapid heating and cooling, but can poison the polymerase reaction. Silicon substrates are also opaque, prohibiting optical detection for qPCR, and electrically conductive, preventing electrophoretic transport through the channels. Meanwhile, glass is an ideal material for electrophoresis but also inhibits the reaction. Polymers, particularly PDMS, are optically transparent, not inhibitory, and can be used to coat an electrophoretic glass channel. Various other surface treatments also exist, including polyethylene glycol, bovine serum albumin, and silicon dioxide. There are stationary (chamber-based), dynamic (continuous flow-based), and microdroplet (digital PCR) chip architectures.\n",
"The polymerase chain reaction (PCR) is a method in molecular biology used to amplify a single copy or a few copies of specific pieces of DNA across several orders of magnitude, generating thousands to millions of copies of a target DNA sequence. In conventional PCR, the DNA polymerase is slightly active at room temperature, and to a lesser degree, even on ice. In some instances, when all the reaction components are put together, nonspecific primer annealing can occur at these low temperatures. This nonspecific annealed primer can then be extended by the DNA polymerase, generating nonspecific products and lowering product yields.\n",
"The polymerase chain reaction (PCR) is a scientific technique that is used to replicate a piece of a DNA molecule by several orders of magnitude. PCR implements a cycle of repeated heated and cooling known as thermal cycling along with the addition of DNA primers and DNA polymerases to selectively replicate the DNA fragment of interest. The technique was developed by Kary Mullis in 1983 while working for the Cetus Corporation. Mullis would go on to win the Nobel Prize in Chemistry in 1993 as a result of the impact that PCR had in many areas such as DNA cloning, DNA sequencing, and gene analysis.\n",
"BULLET::::- The polymerase chain reaction (PCR) is a technique widely used in molecular biology. It derives its name from one of its key components, a DNA polymerase used to amplify a piece of DNA by \"in vitro\" enzymatic DNA replication. As PCR progresses, the DNA generated is used as a template for replication. The polymerase chain reaction was invented in 1984 by Kary Mullis.\n",
"The polymerase chain reaction (PCR) is utilized in biochemistry and molecular biology for exponentially amplifying nucleic acids by making copies of a specific region of a nucleic acid target. When coupled with diagnostic probes, this technique allows one to detect a small collection of molecules under very dilute conditions. A limitation of PCR is that it only works with nucleic acid targets, and there are no known analogues of PCR for other target molecular candidates. \n"
] |
how do news agencies decide what is a national story?
|
Great question. The short version is that the news outlets have an editorial meeting every day or every shift to discuss ideas and assign reporting staff to the stories. Many stories come from press releases or public tips. Many more come from routine events, like a city council meeting, a parade, or a local business declaring its quarterly earnings.
In most places, the easier a story is to report, the more likely it is to be printed. Interstate closed for construction? Go talk to people for 20 minutes at a truck stop about how inconvenient it is. The story is done in an hour. Do another celebratory story when it re-opens. In both cases, the reporter is then quickly freed up to work on a more complicated, longer-term story that may take a couple of weeks to put together.
Mainstream news is relatively unlikely to report on politically divisive topics, like a right-to-life march or a union rally, because it might give ammo to those who shout "The Daily Planet is pinko commie!" Keep in mind that The Daily Planet probably has no newspaper competition and nothing to gain by being controversial.
How things generally get to "the wire": if something important is expected to happen, wire service staff reporters will already be there. If it is something more unexpected, like a mayor saying something nasty about Hillary, normally it will be reported by a local newspaper first, then sent to a regional or state wire editor, who may send it further to the national wire.
Some national news outlets (cable news in particular) have producers scour local news outlets for stories that never made it to the wire services but may still interest their audiences. This is why you'll often see stories on talk shows like Greta Van Susteren's or Rachel Maddow's that don't get mainstream coverage.
TL;DR: You have editors at every level who decide if a story is important enough to report, or to send on to a higher level editor for broader distribution.
|
[
"The major news agencies generally prepare hard news stories and feature articles that can be used by other news organizations with little or no modification, and then sell them to other news organizations. They provide these articles in bulk electronically through wire services (originally they used telegraphy; today they frequently use the Internet). Corporations, individuals, analysts and intelligence agencies may also subscribe.\n",
"The major news agencies generally prepare hard news stories and feature articles that can be used by other news organizations with little or no modification, and then sell them to other news organizations. They provide these articles in bulk electronically through wire services (originally they used telegraphy; today they frequently use the Internet). Corporations, individuals, analysts, and intelligence agencies may also subscribe.\n",
"News agencies can be corporations that sell news (e.g., Press Association, Thomson Reuters and United Press International). Other agencies work cooperatively with large media companies, generating their news centrally and sharing local news stories the major news agencies may choose to pick up and redistribute (i.e., Associated Press (AP), Agence France-Presse (AFP) or American Press Agency (APA)) and Indian Press Agency PTI.\n",
"News-oriented journalism is sometimes described as the \"first rough draft of history\" (attributed to Phil Graham), because journalists often record important events, producing news articles on short deadlines. While under pressure to be first with their stories, news media organizations usually edit and proofread their reports prior to publication, adhering to each organization's standards of accuracy, quality and style. Many news organizations claim proud traditions of holding government officials and institutions accountable to the public, while media critics have raised questions about holding the press itself accountable to the standards of professional journalism.\n",
"A news agency is an organization of journalists established to supply news reports to news organizations: newspapers, magazines, and radio and television broadcasters. Such an agency may also be referred to as a wire service, newswire or news service. The bulk of major news agency services contains foreign news.\n",
"In the traditional distribution model, the business, political campaign, or other entity releasing information to the media hires a publicity agency to write and distribute written information to the newswires. The newswire then disseminates the information as it is received or as investigated by a journalist.\n",
"News production starts with the reporters going out to their respective beat to gather stories and cover events and also the marketing department getting advertisement into the newspaper on daily basis. It starts with reporters getting their stories ready daily and sending their stories in electronically through their mails to the editor. Each reporter works with a particular desk in the newsroom, some of these desks are: Metro desk, Sport desk, Business desk, Political desk, Education desk and others. News gathering and dissemination is paramount to every newspaper as this is the responsibility of the newspaper house to the people and this can determine their level of advertiser’s patronage.\n"
] |
What would a tub full of viruses look like?
|
Hey there! I work with viruses, and this is what I can tell you:
When we infect cells, we use vials of virus stockseed, which, depending on which virus you're working with, is essentially a purified suspension of virus that's been frozen and stored until it's needed for use. It's not 100% pure virus, which would be fairly unstable and hard to store. These vials of stockseed pretty much look like whatever media they were grown in - they often appear clear (as opposed to murky), and are the color of the media we use.
On the other hand, if I were to purify out virus from this solution, which I've also done before, it kind of looks like an off-white colored, thick gloop at the bottom of the test tube.
That's just personal experience, though, and I imagine it differs with different viruses.
|
[
"Viruses display a wide diversity of shapes and sizes, called \"morphologies\". In general, viruses are much smaller than bacteria. Most viruses that have been studied have a diameter between 20 and 300 nanometres. Some filoviruses have a total length of up to 1400 nm; their diameters are only about 80 nm. Most viruses cannot be seen with an optical microscope, so scanning and transmission electron microscopes are used to visualise them. To increase the contrast between viruses and the background, electron-dense \"stains\" are used. These are solutions of salts of heavy metals, such as tungsten, that scatter the electrons from regions covered with the stain. When virions are coated with stain (positive staining), fine detail is obscured. Negative staining overcomes this problem by staining the background only.\n",
"The smallest viruses in terms of genome size are single-stranded DNA (ssDNA) viruses. Perhaps the most famous is the bacteriophage Phi-X174 with a genome size of 5386 nucleotides. However, some ssDNA viruses can be even smaller. For example, Porcine circovirus type 1 has a genome of only 1759 nucleotides and a capsid diameter of only 17 nm. As a whole, the viral family geminiviridae is only about 30 nm in length. However, the two capsids making up the virus are fused; divided, the capsids would be 15 nm in length. Other environmentally characterized ssDNA viruses such as CRESS DNA viruses as well as others can have genomes that are considerably less than 2,000 nucleotides.\n",
"Several other viruses have been described from fish and from a frog: Bluegill hepadnavirus (BGHB), African cichlid hepadnavirus (ACHBV) and Tibetan frog hepadnavirus. It seems likely that new genera in this family will need to be created.\n",
"Two yet-uncategorized circovirus-like viruses have been identified in fish— Barbel circovirus (BaCV) 1 and 2. Their genomes are similar in length and contain two major open reading frames similar to the capsid and replication associated protein genes found in other circoviruses.\n",
"Viruses in \"Aquamavirus\" are nonenveloped, with icosahedral, spherical, and round geometries, and T=pseudo3 symmetry. The diameter is around 30 nm. Genomes are linear and nonsegmented, around 6.7 kb in length.\n",
"Viruses are among the smallest infectious agents, and most of them can only be seen by electron microscopy. Most viruses cannot be seen by light microscopy (in other words, they are sub-microscopic); their sizes range from 20 to 300 nm. They are so small that it would take 30,000 to 750,000 of them, side by side, to stretch to one cm. By contrast bacterial sizes are typically around 1 micrometre (1000 nm) in diameter, and the cells of higher organisms a few tens of micrometres. Some viruses such as megaviruses and pandoraviruses are relatively large. At around 1 micrometer, these viruses, which infect amoebae, were discovered in 2003 and 2013. They are around a thousand times larger than influenza viruses and the discovery of these \"giant\" viruses astonished scientists.\n",
"Viruses consist of a genome and a capsid; and some viruses are enveloped. Most virus capsids measure between 20-500 nm in diameter. Because of their nanometer size dimensions, viruses have been considered as naturally occurring nanoparticles. Virus nanoparticles have been subject to the nanoscience and nanoengineering disciplines. Viruses can be regarded as prefabricated nanoparticles. Many different viruses have been studied for various applications in nanotechnology: for example, mammalian viruses are being developed as vectors for gene delivery, and bacteriophages and plant viruses have been used in drug delivery and imaging applications as well as in vaccines and immunotherapy intervention.\n"
] |
if websites like youtube can shorten their url for "sharing" purposes, why can't the url just naturally be shorter?
|
One reason is that long (and human-readable) URLs are part of search engine optimization (SEO): URLs containing keywords from the content of the page tend to rank better than random-looking ones. YouTube is not a very good example of this, though, because even its regular URLs are not human-readable (and being owned by Google is probably far more significant for its search ranking anyway ;)
Another reason is usability. It's much easier to remember (and tell someone about) the URL "_URL_0_" than "compa.ny/93HkdI1", and when you see such a URL you can more or less tell where the first one will lead before you click it, but not with the shortened one. You probably wouldn't open "_URL_1_" at work - but who knows whether "compa.ny/93HkdI1" is actually safe for work or not?
And last but not least, there's the risk of using a URL shortening service outside your control: what happens if, for example, _URL_2_ goes bankrupt? All the links using that service break, and there is no way to tell or recover the original content behind them.
In my opinion, shortened URLs are fine for Twitter and other services where available space is an issue, but usability and transparency for the user suffer quite a bit.
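To make the mechanism concrete, here is a minimal sketch of how a shortening service might work under the hood. This is an assumption about the general technique (an auto-incrementing ID encoded in base 62, plus a lookup table), not how any particular service like bit.ly actually implements it; the "compa.ny" domain is just the made-up example from above.

```python
import string

# 62 characters: 0-9, a-z, A-Z -> short codes like "93HkdI1"
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode(n: int) -> str:
    """Encode a non-negative integer ID as a base-62 string."""
    if n == 0:
        return ALPHABET[0]
    code = []
    while n > 0:
        n, rem = divmod(n, 62)
        code.append(ALPHABET[rem])
    return "".join(reversed(code))

def decode(code: str) -> int:
    """Reverse of encode: turn a base-62 code back into the integer ID."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n

# The service keeps a lookup table: integer ID -> long URL.
urls = {}

def shorten(long_url: str) -> str:
    new_id = len(urls)          # stands in for an auto-increment database ID
    urls[new_id] = long_url
    return "https://compa.ny/" + encode(new_id)
```

This is also why the third point above matters: the mapping from short code back to long URL lives only in that lookup table, so if the service disappears, the codes become meaningless.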
EDIT: some typos.
|
[
"There are several reasons to use URL shortening. Often regular unshortened links may be aesthetically unpleasing. Many web developers pass descriptive attributes in the URL to represent data hierarchies, command structures, transaction paths or session information. This can result in URLs that are hundreds of characters long and that contain complex character patterns. Such URLs are difficult to memorize, type-out or distribute. As a result, long URLs must be copied-and-pasted for reliability. Thus, short URLs may be more convenient for websites or hard copy publications (e.g. a printed magazine or a book), the latter often requiring that very long strings be broken into multiple lines (as is the case with some e-mail software or internet forums) or truncated.\n",
"Web applications often include lengthy descriptive attributes in their URLs which represent data hierarchies, command structures, transaction paths and session information. This practice results in a URL that is aesthetically unpleasant and difficult to remember, and which may not fit within the size limitations of microblogging sites. URL shortening services provide a solution to this problem by redirecting a user to a longer URL from a shorter one.\n",
"URL shortening is a technique on the World Wide Web in which a Uniform Resource Locator (URL) may be made substantially shorter and still direct to the required page. This is achieved by using a redirect which links to the web page that has a long URL. For example, the URL \"\" can be shortened to \"\", and the URL \"\" can be shortened to \"\". Often the redirect domain name is shorter than the original one. A friendly URL may be desired for messaging technologies that limit the number of characters in a message (for example SMS), for reducing the amount of typing required if the reader is copying a URL from a print source, for making it easier for a person to remember, or for the intention of a permalink. In November 2009, the shortened links of the URL shortening service Bitly were accessed 2.1 billion times.\n",
"A permanent URL is not necessarily a good thing. There are security implications, and obsolete short URLs remain in existence and may be circulated long after they cease to point to a relevant or even extant destination. Sometimes a short URL is useful simply to give someone over a telephone conversation for a one-off access or file download, and no longer needed within a couple of minutes.\n",
"The convenience offered by URL shortening also introduces potential problems, which have led to criticism of the use of these services. Short URLs, for example, will be subject to linkrot if the shortening service stops working; all URLs related to the service will become broken. It is a legitimate concern that many existing URL shortening services may not have a sustainable business model in the long term. In late 2009, the Internet Archive started the \"301 Works\" projects, together with twenty collaborating companies (initially), whose short URLs will be preserved by the project.\n",
"In web search and search engine optimization (SEO), URL canonicalization deals with web content that has more than one possible URL. Having multiple URLs for the same web content can cause problems for search engines - specifically in determining which URL should be shown in search results.\n",
"Short URLs, although making it easier to access what might otherwise be a very long URL or user-space on an ISP server, add an additional layer of complexity to the process of retrieving web pages. Every access requires more requests (at least one more DNS lookup, though it may be cached, and one more HTTP/HTTPS request), thereby increasing latency, the time taken to access the page, and also the risk of failure, since the shortening service may become unavailable. Another operational limitation of URL shortening services is that browsers do not resend POST bodies when a redirect is encountered. This can be overcome by making the service a reverse proxy, or by elaborate schemes involving cookies and buffered POST bodies, but such techniques present security and scaling challenges, and are therefore not used on extranets or Internet-scale services.\n"
] |
how come when having a sickness that requires medicine, i have to do it in a span of a week or a few days?
|
Accommodating the poor English: are you asking why you can't take the whole course of medication at once?
Let's use the antibiotic gentamicin as an example. The drug is not easily filtered out by the kidneys, and an overdose can stop the kidneys working altogether. That's kidney failure, and you could potentially die.
Doses are often set to what is safe per administration, which is why the full course gets spread out over days.
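A toy calculation shows why spreading doses out keeps levels safe. The numbers here are entirely made up for illustration (this is NOT real gentamicin pharmacology); the only assumption is the standard idea that a drug is eliminated exponentially with some half-life.

```python
import math

HALF_LIFE_H = 8.0                      # made-up half-life in hours
K = math.log(2) / HALF_LIFE_H          # elimination rate constant

def peak_after_daily_doses(dose: float, days: int) -> float:
    """Peak drug level just after the last of `days` daily doses,
    assuming instant absorption and exponential decay in between."""
    level = 0.0
    peak = 0.0
    for _ in range(days):
        level += dose                  # take today's dose
        peak = level                   # highest level is right after dosing
        level *= math.exp(-K * 24)     # decay over the next 24 hours
    return peak

total = 700.0                          # arbitrary total course (mg)
one_shot_peak = total                  # everything taken at once
split_peak = peak_after_daily_doses(total / 7, days=7)
print(one_shot_peak, round(split_peak, 1))   # 700.0 vs ~114.3
```

The same total amount taken all at once spikes to 700, while seven daily doses never peak much above 114, because most of each day's dose is cleared before the next one. Safe-per-dose limits plus clearance time are what stretch a course over days.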
|
[
"The current first-line treatment is fluconazole, 200 mg. on the first day, followed by daily dosing of 100 mg. for at least 21 days total. Treatment should continue for 14 days after relief of symptoms.\n",
"Treatment is typically with two doses of the medications mebendazole, pyrantel pamoate, or albendazole two weeks apart. Everyone who lives with or takes care of an infected person should be treated at the same time. Washing personal items in hot water after each dose of medication is recommended. Good handwashing, daily bathing in the morning, and daily changing of underwear can help prevent reinfection.\n",
"Treatment is by thiamine supplementation, either by mouth or by injection. With treatment symptoms generally resolve in a couple of weeks. The disease may be prevented at the population level through the fortification of food.\n",
"Prevention is by properly cooking food and hand washing before cooking. Other measures include improving access to sanitation such as ensuring use of functional and clean toilets and access to clean water. In areas of the world where the infections are common, often entire groups of people will be treated all at once and on a regular basis. Treatment is with three days of the medication: albendazole, mebendazole or ivermectin. People often become infected again after treatment.\n",
"Commonly, the symptoms may resolve without treatment in 2 to 4 weeks but specific medication may hasten the healing as long as the trigger is avoided. Also, the condition might become chronic if the allergen is not detected and avoided.\n",
"The standard treatment recommended by the WHO is with isoniazid and rifampicin for six months, as well as ethambutol and pyrazinamide for the first two months. If there is evidence of meningitis, then treatment is extended to twelve months. The U.S. guidelines recommend nine months' treatment. \"Common medication side effects a patient may have such as inflammation of the liver if a patient is taking pyrazinamide, rifampin, and isoniazid. A patient may also have drug resistance to medication, relapse, respiratory failure, and adult respiratory distress syndrome.\"\n",
"Outpatient treatment involves periodic visits to a psychiatrist for consultation in his or her office, or at a community-based outpatient clinic. Initial appointments, at which the psychiatrist conducts a psychiatric assessment or evaluation of the patient, are typically 45 to 75 minutes in length. Follow-up appointments are generally shorter in duration, i.e., 15 to 30 minutes, with a focus on making medication adjustments, reviewing potential medication interactions, considering the impact of other medical disorders on the patient's mental and emotional functioning, and counseling patients regarding changes they might make to facilitate healing and remission of symptoms (e.g., exercise, cognitive therapy techniques, sleep hygiene—to name just a few). The frequency with which a psychiatrist sees people in treatment varies widely, from once a week to twice a year, depending on the type, severity and stability of each person's condition, and depending on what the clinician and patient decide would be best.\n"
] |
Do we believe the figures for Ancient Battles?
|
This topic, depending on the battle, can be hotly debated among historians. Occasionally archaeological evidence can shed light on a battle, but often historians must defer to the written record. That said, most historians won't take the figures exactly at face value. If the army in question is Egyptian, there may be less evidence available. The point I am trying to make is that the written record should not be completely thrown out just because the numbers are inflated, and typically it isn't. There are certainly historians who do that, but for the most part the written record is trusted until something brings it into question.
With Caesar, there are often so many sources covering both sides that historians can usually discern a fairly reliable timeline. With figures, it is best not to throw out evidence simply because there may be bias. Every historian is biased in some way, and that is not necessarily a bad thing: paired with other accounts, it can still yield a balanced picture. Unfortunately, when (as in the case of Zosimus) a source is the only account of a battle, it can't be thrown out simply because the facts may be skewed by bias. If a source is to be doubted, there needs to be more than that. If there were, say, archaeological evidence, or other written evidence suggesting it could reasonably be questioned, then it could be doubted.
TL;DR The rule of thumb for most historians is "innocent until proven guilty".
Purely historically speaking, I think it is entirely likely that Zosimus had the numbers correct. The ancient Mediterranean world was quite populous. Rome at this time had a population of 1,000,000 and Alexandria of 500,000 (both roughly speaking). Palmyra was a major trade hub and absolutely could have afforded an army of that size. Armies of this size were not that uncommon, either. For much of Rome's history the army as a whole consisted of about 300,000 legionaries; by the late empire (when Zosimus is writing), it is believed to have tripled. This means the Romans could absolutely bring massive manpower to bear, and since Palmyra actually managed to defeat Rome once, it had to have a sizeable army. For more info on the Roman army at this time, check out Arther Ferrill's book "The Fall of the Roman Empire: The Military Explanation".
Hope this helps!
Edit: I have removed some information which Iphikrates indicated was actually false.
|
[
"There is a contrast between the mythical and historical events portrayed: depictions of Theseus' victory over the Amazonians and the Fall of Troy are juxtaposed sharply with the portrayal of the historic Battle of Oenoe (conjectured to have occurred in the pentecontaetia at Oenoe, Attica on the Thriasian Plain near Eleutherae), the first important Athenian victory over Sparta, and the Battle of Marathon. \n",
"Several ancient writers give figures for one or both of the armies, but, unfortunately, they are contradictory and, in some cases, unbelievable. Modern scholars' estimates have varied from 6,000 to 9,000 for the Boeotian force. For the Spartan side, most modern scholars favor Plutarch's figure of 10,000 in infantry and 1,000 cavalry.\n",
"While Greek sculptors traditionally illustrated military exploits through the use of mythological allegory, the Romans used a more documentary style. Roman reliefs of battle scenes, like those on the Column of Trajan, were created for the glorification of Roman might, but also provide first-hand representation of military costumes and military equipment. Trajan's column records the various Dacian wars conducted by Trajan in what is modern day Romania. It is the foremost example of Roman historical relief and one of the great artistic treasures of the ancient world. This unprecedented achievement, over 650 foot of spiraling length, presents not just realistically rendered individuals (over 2,500 of them), but landscapes, animals, ships, and other elements in a continuous visual history – in effect an ancient precursor of a documentary movie. It survived destruction when it was adapted as a base for Christian sculpture. During the Christian era after 300 AD, the decoration of door panels and sarcophagi continued but full-sized sculpture died out and did not appear to be an important element in early churches.\n",
"Battles BC is a 2009 documentary series looking at key battles in ancient history. The show was known for its very gritty nature, visual effects similar to the film \"300\" and its highly choreographed fight scenes with various weapons.\n",
"The battle is generally dated to 1274 BC in the Egyptian chronology, and is the earliest battle in recorded history for which details of tactics and formations are known. It is believed to have been the largest chariot battle ever fought, involving between 5,000 and 6,000 chariots in total.\n",
"Historian Adrian Goldsworthy notes that such tentative pre-battle maneuvering was typical of ancient armies as each side sought to gain maximum advantage before the encounter. During this period, some ancient writers paint a picture of meetings between opposing commanders for negotiation or general discussion, as with the famous pre-clash conversation between Hannibal and Scipio at Zama. But whatever the truth of these discussions, or the flowery speeches allegedly made, the only encounter that ultimately mattered was battle.\n",
"Warfare was a very popular subject in Ancient Greek art, represented in grand sculptural scenes on temples but also countless Greek vases. On the whole fictional and mythical battles were preferred as subjects to the many historical ones available. Along with scenes from Homer and the Gigantomachy, a battle between the race of Giants and the Olympian gods, the Amazonomachy was a popular choice.\n"
] |
relative to the size of time and space today, how big was the dot of condensed matter before the big bang.
|
First off, "before" the big bang is an absurd qualifier. Time, space, and possibly causality too, originated at the big bang. Asking what happened "before" that is like asking what's north of the North Pole - in other words, a question so absurd that it's [not even wrong](_URL_0_).
But, to answer your question, currently prevailing theories place all matter-energy in a singularity of infinite density. Its volume would be infinitesimal - which is to say, smaller than anything you care to think of.
|
[
"During the inflationary epoch about 10 of a second after the Big Bang, the universe suddenly expanded, and its volume increased by a factor of at least 10 (an expansion of distance by a factor of at least 10 in each of the three dimensions), equivalent to expanding an object 1 nanometer (10 m, about half the width of a molecule of DNA) in length to one approximately 10.6 light years (about 10 m or 62 trillion miles) long. A much slower and gradual expansion of space continued after this, until at around 9.8 billion years after the Big Bang (4 billion years ago) it began to gradually expand more quickly, and is still doing so.\n",
"BULLET::::- c. 0 seconds (13.799 ± 0.021 Gya): Planck Epoch begins: earliest meaningful time. The Big Bang occurs in which ordinary space and time develop out of a primeval state (possibly a virtual particle or false vacuum) described by a quantum theory of gravity or \"Theory of Everything\". All matter and energy of the entire visible universe is contained in an unimaginably hot, dense point (gravitational singularity), a billionth the size of a nuclear particle. This state has been described as a particle desert. Other than a few scant details, conjecture dominates discussion about the earliest moments of the universe's history since no effective means of testing this far back in space-time is presently available. WIMPS (weakly interacting massive particles) or dark matter and dark energy may have appeared and been the catalyst for the expansion of the singularity. The infant universe cools as it begins expanding outward. It is almost completely smooth, with quantum variations beginning to cause slight variations in density.\n",
"Such a number may be incomprehensibly huge. If the Big Bang is reckoned to have occurred 13.8 billion years ago, there have been \"only\" about 4.35 x 10 seconds since the birth of the universe. It is estimated that the Earth is made up of roughly 5.5 x 10 atoms; the number of atoms in the Milky Way Galaxy is approximately 5 x 10, and the number of atoms in the \"universe\" is estimated to be 3.5 x 10.\n",
"At this point of the very early universe, the metric that defines distance within space, suddenly and very rapidly changed in scale, leaving the early universe at least 10 times its previous volume (and possibly much more). This is equivalent to a linear increase of at least 10 times in every spatial dimension – equivalent to an object 1 nanometer (10 m, about half the width of a molecule of DNA) in length, expanding to one approximately 10.6 light years (about 62 trillion miles) long in a tiny fraction of a second. This change is known as inflation.\n",
"The early universe was dominated by radiation; in this case density fluctuations larger than the cosmic horizon grow proportional to the scale factor, as the gravitational potential fluctuations remain constant. Structures smaller than the horizon remained essentially frozen due to radiation domination impeding growth. As the universe expanded, the density of radiation drops faster than matter (due to redshifting of photon energy); this led to a crossover called matter-radiation equality at ~ 50,000 years after the Big Bang. After this all dark matter ripples could grow freely, forming seeds into which the baryons could later fall. The size of the universe at this epoch forms a turnover in the matter power spectrum which can be measured in large redshift surveys.\n",
"Other large numbers, as regards length and time, are found in astronomy and cosmology. For example, the current Big Bang model suggests that the universe is 13.8 billion years (4.355 × 10 seconds) old, and that the observable universe is 93 billion light years across (8.8 × 10 metres), and contains about 5 × 10 stars, organized into around 125 billion (1.25 × 10) galaxies, according to Hubble Space Telescope observations. There are about 10 atoms in the observable universe, by rough estimation.\n",
"Following theoretical developments of the Friedmann equations by Alexander Friedmann and Georges Lemaître in the 1920s, and the discovery of the expanding universe by Edwin Hubble in 1929, it was immediately clear that tracing this expansion backwards in time predicts that the universe had almost zero size at a finite time in the past. This concept, initially known as the \"Primeval Atom\" by Lemaitre, was later elaborated into the modern Big Bang theory. If the universe had expanded at a constant rate in the past, the age of the universe now (i.e. the time since the Big Bang) is simply the inverse of the Hubble constant, often known as the \"Hubble time\". For Big Bang models with zero cosmological constant and positive matter density, the actual age must be somewhat younger than this Hubble time; typically the age would be between 66% and 90% of the Hubble time, depending on the density of matter.\n"
] |
if our body focused on preventing telomere reduction, what changes might our bodies experience?
|
Cancer. Cells continuing to divide with no limit is what we call cancer.
If you instead mean "what would happen if our bodies focused on keeping cells alive and efficient for as long as possible before natural cell death?", that is a much more interesting question.
|
[
"Telomere shortening in humans can induce replicative senescence, which blocks cell division. This mechanism appears to prevent genomic instability and development of cancer in human aged cells by limiting the number of cell divisions. However, shortened telomeres impair immune function that might also increase cancer susceptibility. If telomeres become too short, they have the potential to unfold from their presumed closed structure. The cell may detect this uncapping as DNA damage and then either stop growing, enter cellular old age (senescence), or begin programmed cell self-destruction (apoptosis) depending on the cell's genetic background (p53 status). Uncapped telomeres also result in chromosomal fusions. Since this damage cannot be repaired in normal somatic cells, the cell may even go into apoptosis. Many aging-related diseases are linked to shortened telomeres. Organs deteriorate as more and more of their cells die off or enter cellular senescence.\n",
"It is becoming apparent that reversing shortening of telomeres through temporary activation of telomerase may be a potent means to slow aging. The reason that this would extend human life is because it would extend the Hayflick limit. Three routes have been proposed to reverse telomere shortening: drugs, gene therapy, or metabolic suppression, so-called, torpor/hibernation. So far these ideas have not been proven in humans, but it has been demonstrated that telomere shortening is reversed in hibernation and aging is slowed (Turbill, et al. 2012 & 2013) and that hibernation prolongs life-span (Lyman et al. 1981). It has also been demonstrated that telomere extension has successfully reversed some signs of aging in laboratory mice and the nematode worm species \"Caenorhabditis elegans\". It has been hypothesized that longer telomeres and especially telomerase activation might cause increased cancer (e.g. Weinstein and Ciszek, 2002). However, longer telomeres might also protect against cancer, because short telomeres are associated with cancer. It has also been suggested that longer telomeres might cause increased energy consumption.\n",
"The length of the telomere strand has senescent effects; telomere shortening activates extensive alterations in alternative RNA splicing that produce senescent toxins such as progerin, which degrades the tissue and makes it more prone to failure.\n",
"The lack of telomerase does not affect cell growth, until the telomeres are short enough to cause cells to “die or undergo growth arrest”. However, inhibiting telomerase alone is not enough to destroy large tumors. It must be combined with surgery, radiation, chemotherapy or immunotherapy.\n",
"If increased telomerase activity is associated with malignancy, then possible cancer treatments could involve inhibiting its catalytic component, hTERT, to reduce the enzyme’s activity and cause cell death. Since normal somatic cells do not express TERT, telomerase inhibition in cancer cells can cause senescence and apoptosis without affecting normal human cells. It has been found that dominant-negative mutants of hTERT could reduce telomerase activity within the cell. This led to apoptosis and cell death in cells with short telomere lengths, a promising result for cancer treatment. Although cells with long telomeres did not experience apoptosis, they developed mortal characteristics and underwent telomere shortening. Telomerase activity has also been found to be inhibited by phytochemicals such as isoprenoids, genistein, curcumin, etc. These chemicals play a role in inhibiting the mTOR pathway via down-regulation of phosphorylation. The mTOR pathway is very important in regulating protein synthesis and it interacts with telomerase to increase its expression. Several other chemicals have been found to inhibit telomerase activity and are currently being tested as potential clinical treatment options such as nucleoside analogues, retinoic acid derivatives, quinolone antibiotics, and catechin derivatives. There are also other molecular genetic-based methods of inhibiting telomerase, such as antisense therapy and RNA interference.\n",
"Enhanced telomerase activity can be an indicator of abnormal cells. Most normal tissues have inactivated or repressed telomerase activity, but it becomes activated in germ cells and most malignant tumors. Treatment of SPC-A1 cells with gambogic acid resulted in a significant decline in telomerase activity when treated for 48 or 72 hours (detecting 80.7% and 84.9% reduction in activity, respectively). When treated with gambogic acid for only 24 hours, the decrease was only 25.9% which led researchers to believe there are at least two mechanisms responsible for slowing cell growth.\n",
"The existence of a compensatory mechanism for telomere shortening was first found by Soviet biologist Alexey Olovnikov in 1973, who also suggested the telomere hypothesis of aging and the telomere's connections to cancer.\n"
] |
what is the difference in propaganda and fake news
|
Well, one thing to be cautious of is that 'fake news' is thrown around with relative abandon today, even against news that is not, in fact, fake *or* propaganda, merely news that clashes with the viewer's preferences.
So in some ways, it is simply a slur against the reporting organization.
In cases where 'fake news' is actually fake news, I would say it can in some cases be synonymous with propaganda. A more precise definition may (but won't necessarily) distinguish propaganda as news with bias, selective reporting, or other techniques meant to slant the opinions of people reading it, versus fake news as false information made up altogether. In other words, distorting the facts versus inventing your own.
|
[
"Propaganda is information that is not objective and is used primarily to influence an audience and further an agenda, often by presenting facts selectively to encourage a particular synthesis or perception, or using loaded language to produce an emotional rather than a rational response to the information that is presented. Propaganda is often associated with material prepared by governments, but activist groups, companies, religious organizations and the media can also produce propaganda. \n",
"The definition of 'fake news' above, could also be applied to the general category of 'Propaganda' when it is applied to the field of political reporting. Because a large part of political journalism involves \"analysis\", and not simple reporting of what is said, or presented, writers and journalists have the opportunity to present specific kinds of analysis which can favor one ideological, or political position over another; it can also be used to represent \"personalities\" in favorable/unfavorable ways. If the definition of propaganda includes misrepresentation of facts, and deliberate distortions of narrative, or applied \"emphasis\" not necessarily contained in the original, then Fake News falls squarely inside the parameters of Propaganda also. It could be argued that true \"objectivity\" is not really possible to produce, when it comes to presenting analysis of political activity, any individual observer and journalist is going to perceive what they experience through the lens of their own political bias, this of course is the case with entire organizations also.\n",
"Propaganda is information that is not impartial and used primarily to influence an audience and further an agenda, often by presenting facts selectively (perhaps lying by omission) to encourage a particular synthesis, or using loaded messages to produce an emotional rather than a rational response to the information presented. The term propaganda has acquired a strongly negative connotation by association with its most manipulative and jingoistic examples.\n",
"Thus, propaganda is a special form of communication, which is studied in communication research, and especially in media impact research, focussing on media manipulation. Propaganda is a particular type of communication characterized by distorting the representation of reality.\n",
"News propaganda is a type of propaganda covertly packaged as credible news, but without sufficient transparency concerning the news item's source and the motivation behind its release. Transparency of the source is one parameter critical to distinguish between news propaganda and traditional news press releases and video news releases.\n",
"Identifying propaganda has always been a problem. The main difficulties have involved differentiating propaganda from other types of persuasion, and avoiding a biased approach. Richard Alan Nelson provides a definition of the term: \"Propaganda is neutrally defined as a systematic form of purposeful persuasion that attempts to influence the emotions, attitudes, opinions, and actions of specified target audiences for ideological, political or commercial purposes through the controlled transmission of one-sided messages (which may or may not be factual) via mass and direct media channels.\" The definition focuses on the communicative process involved – or more precisely, on the purpose of the process, and allow \"propaganda\" to be considered objectively and then interpreted as positive or negative behavior depending on the perspective of the viewer or listener. \n",
"Propaganda in the United States is spread by both government and media entities. Propaganda is information, ideas, or rumors deliberately spread widely to influence opinions, usually to preserve the self-interest of a nation. It is used in advertising, radio, newspaper, posters, books, television and other media and may provide either factual or non-factual information to its audiences.\n"
] |
Did the Strategic Defense Initiative Real aka "Star Wars" really help Bankrupt the Soviet Union ?
|
No. This is a post-Cold War myth, written largely by people who would like to make SDI not appear to be the boondoggle it was, or to make it look like something that had a positive effect on diplomacy, rather than the more easily-documentable negative effect. Pavel Podvig has [written at length about this here](_URL_1_) and [here](_URL_0_).
Aside from the claim being asserted without evidence, I would just note that the fall of the USSR was clearly caused by many factors, most of them internal: Gorbachev's attempts at opening up the system very clearly and directly led to its instability and failure. While Soviet overexpenditure on arms (in general) no doubt did not help its overall economy, the idea that the collapse can be traced to that, much less to a specific US program to which the Soviets did not in fact respond, is facile.
|
[
"The SDI program also held important budget implications. In May 1993 Aspin announced \"the end of the Star Wars era,\" explaining that the collapse of the Soviet Union had determined the fate of SDI. He renamed the Strategic Defense Initiative Organization as the Ballistic Missile Defense Organization (BMDO) and established its priorities as theater and national missile defense and useful follow-on technologies. Aspin's assignment of responsibility for BMDO to the under secretary of defense (acquisition and technology) signified the downgrading of the program.\n",
"In 1983, American president Ronald Reagan proposed the Strategic Defense Initiative (SDI), a space-based system to protect the United States from attack by strategic nuclear missiles. The plan was ridiculed by some as unrealistic and expensive, and Dr Carol Rosin nicknamed the policy \"Star Wars\", after the popular science-fiction movie franchise. Astronomer Carl Sagan pointed out that in order to defeat SDI, the Soviet Union had only to build more missiles, allowing them to overcome the defence by sheer force of numbers. Proponents of SDI said the strategy of technology would hasten the Soviet Union's downfall. According to this doctrine, Communist leaders were forced to either shift large portions of their GDP to counter SDI, or else watch as their expensive nuclear stockpiles were rendered obsolete.\n",
"In the book, Corso claims the Strategic Defense Initiative (SDI), or \"Star Wars\", was meant to achieve the destructive capacity of electronic guidance systems in incoming enemy warheads, as well as the disabling of enemy spacecraft, including those of extraterrestrial origin.\n",
"In March 1983, Reagan introduced the Strategic Defense Initiative, a defense project that would have used ground- and space-based systems to protect the United States from attack by strategic nuclear ballistic missiles. Reagan believed that this defense shield could make nuclear war impossible. There was much disbelief surrounding the program's scientific feasibility, leading opponents to dub SDI \"Star Wars\" and argue that its technological objective was unattainable. The Soviets became concerned about the possible effects SDI would have; leader Yuri Andropov said it would put \"the entire world in jeopardy.\" For those reasons, David Gergen, former aide to President Reagan, believes that in retrospect, SDI hastened the end of the Cold War.\n",
"On 23 March 1983, President Ronald Reagan announced a new national missile defense program formally called the Strategic Defense Initiative but soon nicknamed \"Star Wars\" by detractors. President Reagan's stated goal was not just to protect the U.S. and its allies, but to also provide the completed system to the USSR, thus ending the threat of nuclear war for all parties. SDI was technically very ambitious and economically very expensive. It would have included many space-based laser battle stations and nuclear-pumped X-ray laser satellites designed to intercept hostile ICBMs in space, along with very sophisticated command and control systems. Unlike the previous Sentinel program, the goal was to totally defend against a robust, all out nuclear attack by the USSR.\n",
"When Ronald Reagan proposed the Strategic Defense Initiative (SDI), a system of lasers and missiles meant to intercept incoming ICBMs, the plan was quickly labeled \"Star Wars\", implying that it was science fiction and linking it to Reagan's acting career. According to Frances FitzGerald, Reagan was annoyed by this, but Assistant Secretary of Defense Richard Perle told colleagues that he \"thought the name was not so bad.\"; \"'Why not?' he said. 'It's a good movie. Besides, the good guys won.'\" This gained further resonance when Reagan described the Soviet Union as an \"evil empire\".\n",
"In March 1983, Reagan announced the Strategic Defense Initiative—a multibillion-dollar project to develop a comprehensive defense against attack by nuclear missiles, which was quickly dubbed the \"Star Wars\" program. Sagan spoke out against the project, arguing that it was technically impossible to develop a system with the level of perfection required, and far more expensive to build such a system than it would be for an enemy to defeat it through decoys and other means—and that its construction would seriously destabilize the \"nuclear balance\" between the United States and the Soviet Union, making further progress toward nuclear disarmament impossible.\n"
] |
In what ways did culture of the time influence Buddhist beliefs and practices?
|
Stephen Batchelor, a former monk in both the Tibetan and Zen traditions, wrote [Buddhism Without Beliefs](_URL_1_), an explicit attempt to separate the baby from the cultural bathwater in Buddhism. It's been ages since I read it, but if memory serves I believe Batchelor argues that Buddhism is a matter of practice and inquiry, not belief.
The history of Buddhist art tells you a lot about cultural accretion. Found [this](_URL_0_) from the Met. "In the earliest Buddhist art of India, the Buddha was not represented in human form. His presence was indicated instead by a sign, such as a pair of footprints, an empty seat, or an empty space beneath a parasol." Compare that to florid Tibetan iconography.
What's great about Buddhism is that it adapts so well to cultures it merges with, from spiritually athletic Zen to belief-based pure land to compassion-based Mahayana to insanely ritualistic Vajrayana. There are all these "skillful means" based on the varying needs of sentient beings. Why would you want to limit yourself to what the historical Buddha and his contemporaries did or believed?
Edit: You might be interested in the way Tibetan Buddhists conceptualize the various vehicles or "yanas" of Buddhism, from renunciation - the original vehicle - to great compassion to radical acceptance. There are scholarly explanations, but Dzongsar Khyentse Rinpoche wrote an excellent one that compares them to ways of being in a cinema. [Here](_URL_2_)
|
[
"Buddhism played an important role in the development of Japanese art between the 6th and the 16th centuries. Buddhist art and Buddhist religious thought came to Japan from China through Korea. Buddhist art was encouraged by Crown Prince Shōtoku in the Suiko period in the sixth century, and by Emperor Shōmu in the Nara period in the eighth century. In the early Heian period, Buddhist art and architecture greatly influenced the traditional Shinto arts, and Buddhist painting became fashionable among wealthy Japanese. The Kamakura period saw a flowering of Japanese Buddhist sculpture, whose origins are in the works of Heian period sculptor Jōchō. \n",
"The relationship between Buddhism and music is thought to be complicated since the association of music with earthly desires led early Buddhists to condemn the musical practice, and even observation of musical performance, for monks and nuns. However, in Pure Land Buddhism Buddhist paradises are represented as musical places in which Buddhist law takes the form of melodies. Most Buddhist practices also involve chant in some form, and some also make use of instrumental music and even dance. Music can act as an offering to the Buddha, as a means of memorizing Buddhist texts, and as a form of personal cultivation or meditation.\n",
"The influence of western psychology and philosophy on Japanese Buddhism was due to the persecution of Buddhism at the beginning of the Meiji Restoration, and the subsequent efforts to construct a New Buddhism (\"shin bukkyo\"), adapted to the modern times. It was this New Buddhism which has shaped the understanding of Zen in the west, especially through the writings of D.T. Suzuki and the Sanbo Kyodan, an exponent of the Meiji-era opening of Zen-training for lay-followers.\n",
"Another major cultural development of the era was the permanent establishment of Buddhism. Buddhism was introduced by Baekje in the sixth century but had a mixed reception until the Nara period, when it was heartily embraced by Emperor Shōmu. Shōmu and his Fujiwara consort were fervent Buddhists and actively promoted the spread of Buddhism, making it the \"guardian of the state\" and a way of strengthening Japanese institutions.\n",
"In the modern era, Buddhist meditation saw increasing popularity due to the influence of Buddhist modernism on Asian Buddhism, and western lay interest in Zen and the Vipassana movement. The spread of Buddhist meditation to the Western world paralleled the spread of Buddhism in the West. Buddhist meditation has also influenced Western Psychology, especially through the work of Jon Kabat-Zinn who founded the Mindfulness-Based Stress Reduction (MBSR) in 1979. The modernized concept of mindfulness (based on the Buddhist term \"sati\") and related meditative practices have in turn led to several mindfulness based therapies.\n",
"Buddhism, originating in India and having its source in the Hindu culture, developed an extensive system of meditation and physical cultivation similar to yoga to help the practitioner achieve enlightenment, awakening one to one's true self. When Buddhism was transmitted to China, some of those practices were assimilated and eventually modified by the indigenous culture. The resulting transformation was the start of the Chinese Buddhist qigong tradition. Chinese Buddhist practice reaches a climax with the emergence of Chán (禪) Buddhism in the 7th century AD. Meditative practice was emphasized and a series of qigong exercises known as the Yijin Jing (\"Muscle/Tendon Change Classic\") was attributed to Bodhidharma. The Chinese martial arts community eventually identify this Yijing Jing as one of the secret training methods in Shaolin martial arts.\n",
"The philosophies of Buddhism and Zen, and to a lesser extent Confucianism and Shinto, are attributed to the development of the samurai culture. According to Robert Sharf, \"The notion that Zen is somehow related to Japanese culture in general, and bushidō in particular, is familiar to Western students of Zen through the writings of D. T. Suzuki, no doubt the single most important figure in the spread of Zen in the West.\"\n"
] |
During World War II did merchant ships have insurance against being sunk by the enemy? Did the national governments offer compensation?
|
They were insured against loss. The precise details likely varied from ship to ship, and I'm not sure offhand where you would find the insured value of any given vessel. Lloyd's and the American Bureau of Shipping did issue annual registers of insured vessels, but you won't find most of that information online. Mystic Seaport has digitized many records from the 19th century, but that doesn't really help you here.
If you are willing to do some hard-copy searching, the Mariners' Museum in Newport News has a full run of both registers. Here's a link to their catalog for the years in question.
[_URL_0_](_URL_0_)
|
[
"In the second half of the 19th century, the number of claims greatly increased due to the number of passengers emigrating to North America and Australia. Shipowners became aware of their insurers' compensation limits, especially when it came to damages caused by ship collisions. While the UK Merchant Shipping Act 1854 had determined that, when evaluating insurance claims, the value of ships should be no less than £15 per ton, many ships had an actual lower market value and existing insurance policies did not cover this gap in liability. The compensation for collision damages also excluded a quarter of such damages. Existing hull insurance policies included damages to the insured ship and liability for the damages it had caused, while the maximum amount shipowners could recover after collisions was the ship's insured value, injured crew members might seek compensation from their employers. Later, the Fatal Accidents Act 1846 made it easier for passengers or their survivors to file claims. Also, injured crew members might seek compensation from their employers.\n",
"In 1698 a panel comprising the city's most eminent merchants was set up to settle the question of insurance. The panel's ruling was that the ship had indeed been lost and that its owners and insurers should receive their due compensation. The galley's complement of thirty-seven crew and three officers were declared dead and the insurance was paid out.\n",
"In May 1913, Hoult was one of a group of British ship-owners who established a mutual form of war insurance for ships. The Liverpool and London war Risks Association (Limited) covered the risks of British shipowners so long as Britain was neutral in any conflict, and the insured ships did not breach that neutrality.\n",
"Lloyd's losses from the earthquake and fires were substantial, even though the writing of insurance business overseas was viewed with some wariness at the time. While some insurance companies were denying claims for fire damage under their earthquake policies or \"vice versa\", one of Lloyd's leading underwriters, Cuthbert Heath, famously instructed his San Francisco agent to \"pay all of our policy-holders in full, irrespective of the terms of their policies\". The prompt and full payment of all claims helped to cement Lloyd's reputation for reliable claim payments and as an important trading partner for US brokers and policyholders. It was estimated that around 90 per cent of the damage to the city was caused by the resultant fires, and as such since 1906 fire following earthquake has generally been a specified insured peril under most policies. Heath is also credited for introducing the now widely used \"excess of loss\" reinsurance protection for insurers following the San Francisco disaster.\n",
"economic sanctions against Germany, meant British clubs could no longer offer neutral ships like US vessels insurance, which prompted foundation of the US P&I club in 1917. US ships remained a major part of the Club’s tonnage between the wars. In 1929 new members were Greek and Norwegian owners. Cargo, personal injury and sickness claims predominated. The civil war in Spain and Japan’s invasion of China brought many claims and rapidly special premiums were imposed. During the Second World War, ships were lost but the Club´s size increased slightly. The Club’s vital documents were filmed and safely stored in Exeter.\n",
"Commerce raiding by private vessels ended with the American Civil War, but Navy officers remained eligible for prize money a little while longer. The United States continued paying prizes to naval officers in the Spanish–American War, and only abjured the practice by statute during World War I. The U.S. prize courts adjudicated no cases resulting from its own takings in either World War I or World War II (although the Supreme Court did rule on a German prize—SS \"Appam\" in the case \"The Steamship Appam\"—that was brought to and held at Hampton Roads). Likewise Russia, Portugal, Germany, Japan, China, Romania, and France followed the United States in World War I, declaring they would no longer pay prize money to naval officers. On November 9, 1914, the British and French governments signed an agreement establishing government jurisdiction over prizes captured by either of them. The Russian government acceded to this agreement on March 5, 1915, and the Italian government followed suit on January 15, 1917.\n",
"During World War II, anchored DOSCO cargo ships, along with the loading pier at Wabana, were the target of Nazi U-boats on at least two occasions. During one attack on anchored ore carriers, a torpedo missed its target and struck the pier, making Bell Island one of the few places in North America to suffer a direct enemy attack (see Attacks on North America during World War II). The wrecks of the four cargo ships sunk during these two attacks are visible at low tide; a memorial on shore is dedicated to the 69 merchant sailors who lost their lives.\n"
] |
During WWII what was the average distance that tanks fought other tanks?
|
Coox and Naisawald's 1954 study *Survey of Allied Tank Casualties in World War II* gives several statistics that attempt to determine this.
> A study of 800 U.S., British, and Canadian tank casualties in Western Europe, the Mediterranean Theater, and North Africa disclosed that the average range at which tanks were immobilized by gunfire was under 800 yards. A sample of 100 tank casualties in North Africa showed an average range of 900 yards; 60 tank casualties in Sicily and Italy--350 yards; 650 tank casualties in Western Europe--over 800 yards. These figures are explicable by the fact that in the western desert of North Africa, where the terrain favored ranges to the limits of visibility, tank fighting often resembled naval battles which boiled down to "slug fests" where light vessels (=light tanks and armored cars) were involved. A figure of 900 yards represents the averaging out of engagements at 1500 to 2000 yards as well as those at hub-to-hub range, e.g., Knightsbridge; Rommel's brilliant tank traps allowed his antitank guns to effect kills at short range. Martel has explained the reasons for the Germans' electing to fight armor at longer ranges in the desert as follows:
> > The German armored forces often attacked British unarmored troops if they found them insufficiently protected by artillery and antitank guns, but they always avoided closing with our tanks in a running fight. When meeting British tanks in strength they preferred to take up a position which was well protected by artillery fire and with antitank guns on the flanks, and used the superior gunfire from stationary tanks to shoot at the British tanks at long range.
> It should be stressed that the data on range are almost always derived from "subjective" estimates given in after-action reports or "third-hand" summaries. The only exception is a portion of the British ETO sample, wherein operations research teams from the 21st Army Group actually examined tanks immobilized after the Rhine crossing. The over-all average of 800 yards range is also probably higher than the actual figure, if it were known, for a much larger sample, inasmuch as a further 75 tank casualties to gunfire were listed only as "close," "fairly close," "point-blank," "various," etc.
**TABLE VIII**
**AVERAGE RANGES AT WHICH TANKS WERE IMMOBILIZED**
**(Sampling)** [gunfire only]
Category|Sample|Range (yds)
:--|:--|:--
US: ETO-First Army|330|796.4
ETO-Third, Seventh, Ninth Armies|119|713.7
ITALY|3|758.9
US: Total|452|774.4
UK: ETO|190|886.3
ITALY|51|348.1
SICILY|6|300.0
AFRICA|96|890.1
UK: Total|343|797.1
CANADA: ETO|5|432.0
ETO: US, UK, CANADA|644|804.8
All Theaters: US, UK, CANADA|800|782.0
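As a quick sanity check (my own, not part of the original study), the "Total" rows in Table VIII are just sample-weighted averages of the rows they summarize, which is easy to verify:

```python
def weighted_avg(rows):
    """Sample-weighted average range; rows are (sample_size, avg_range_yds) pairs."""
    total = sum(n for n, _ in rows)
    return sum(n * r for n, r in rows) / total

# Rows from Table VIII above
us_rows = [(330, 796.4), (119, 713.7), (3, 758.9)]
uk_rows = [(190, 886.3), (51, 348.1), (6, 300.0), (96, 890.1)]

print(round(weighted_avg(us_rows), 1))  # 774.4, the US Total row
print(round(weighted_avg(uk_rows), 1))  # 797.1, the UK Total row
```

Adding in the Canadian sample (5 tanks at 432.0 yards) and averaging all 800 casualties the same way reproduces the 782.0-yard all-theaters figure.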
Hardison's *Data on World War II Tank Engagements: Involving the U.S. Third and Fourth Armored Divisions* also gives a figure that is about 800 to 900 yards on average.
**TABLE V**
**SUMMARY OF RANGES AT WHICH ALLIED AND ENEMY TANKS WERE DESTROYED IN VARIOUS AREAS OF NORTHWEST EUROPE**
Area|Number of Allied Casualties|Average Allied Casualty Range in Yards|Number of Enemy Casualties|Average Enemy Casualty Range in Yards
:--|:--|:--|:--|:--
Vicinity Stolberg|26|476||
Roer to Rhine|37|959|6|733
Belgian Bulge|60|1000|9|833
Vicinity Arracourt|20|1260|74|936
Sarre|37|1116|35|831
Relief of Bastogne|19|731|16|915
Totals|199|946|140|893
> It was shown in the referenced report that the distribution of combat ranges is approximately represented by a Pearson III distribution function of the form:
> F(R) = e^(−X)(X + 1), where X = 2R/R̄,
> R = range, R̄ = average range,
> F(R) = fraction of ranges greater than R.
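That survival function is straightforward to evaluate. Here's a minimal sketch of my own (not from either report), reading X as 2R/R̄ and using the 782-yard all-theaters average from Coox and Naisawald as an illustrative R̄:

```python
import math

def frac_beyond(r, r_bar):
    """Pearson Type III survival function: fraction of engagement
    ranges greater than r yards, given an average range r_bar."""
    x = 2.0 * r / r_bar
    return math.exp(-x) * (x + 1.0)

R_BAR = 782.0  # all-theaters average range in yards, per Table VIII
for r in (0, 400, 800, 1500, 2000):
    print(f"{r:4d} yd: {frac_beyond(r, R_BAR):.3f}")
```

At the average range itself (X = 2) the function gives e^(−2)·3 ≈ 0.406, i.e. roughly 40% of engagements fall beyond the mean, which is what you'd expect from a right-skewed range distribution.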
|
[
"America's first tank versus tank battle of World War II occurred when Type 95 light tanks of the IJA 4th Tank Regiment engaged a US Army tank platoon, consisting of five brand new M3 Stuart light tanks from \"B\" company, 192nd Tank Battalion, on 22 December 1941, north of Damortis during the retreat to the Bataan Peninsula in 1941. Both the M3 and Type 95 light tanks were armed with a 37 mm gun, but the M3 was better armored, with 32 mm (1¼ inches) thick turret sides, vs the Type 95's 12 mm thick armor; however, as the US Army's Ballistics Research Lab (BRL) found after conducting the first large study of tank vs tank warfare in 1945, the most important factor in a tank duel was which side spotted the enemy first, fired first, and hit first. In this first engagement the IJA reacted first, destroying the lead M3 as it tried to leave the road. The four remaining American tanks all suffered hits as they retreated.\n",
"BULLET::::7. Germany and the Second World War, p. 584, notes that Second Tank Army's strength in tanks and assault guns was 810 on 22 July 1944, and that this had dwindled to 263 armored fighting vehicles by 4 August 1944.\n",
"One of the greatest tank battles in history, the Battle of Golan Heights, took place during the Yom Kippur War. In the Golan Heights, the Syrians attacked two Israeli brigades and eleven artillery batteries with five divisions (the 7th, 9th and 5th, with the 1st and 3rd in reserve) and 188 batteries. They began their attack with an airstrike by about 100 aircraft and a 50-minute artillery barrage. The forward brigades of three divisions then penetrated the cease-fire lines and bypassed United Nations observer posts, followed by the main assault force, which was covered by mobile anti-aircraft batteries, bulldozers to penetrate anti-tank ditches, bridge-layers to overcome obstacles, and mine-clearance vehicles.\n",
"Eventually, a column of 120 Iraqi tanks coalesced and directly engaged with the 7th Armoured Division (United Kingdom). 300 prisoners were taken in a battle outside the city. This event was described as the largest British tank battle since the Second World War. On March 26th the Republican Guard forces grew frustrated by their inability to draw the British into a fight inside of Basra, and Ali sent out a column of Soviet-built T-55 tanks to attack the British . The T-55s were outranged by the 120-millimeter guns of the British Challenger tanks of the Royal Scots Dragoon Guards which resulted in the loss of 15 T-55s without a single loss to the British.\n",
"A total of 2,222 M26 Pershing tanks were produced, beginning in November 1944, only 20 of which saw combat in Europe during World War II. The tank was reclassified as a medium tank in May 1946, and while it didn't have time to make any real impact in the Second World War, it served with distinction in the Korean War alongside the M4A3E8 Sherman. In combat it was, unlike the M4 Sherman, fairly equal in firepower and protection to both the Tiger I and Panther tanks but was underpowered and mechanically unreliable.\n",
"The second half of World War II saw an increased reliance on general-purpose medium tanks, which became the bulk of the tank combat forces. Generally, these designs massed about 25–30 tonnes, were armed with cannons around 75 mm, and powered by engines in the 400 to 500 hp range. Notable examples include the Soviet T-34 (the most-produced tank to that time) and the US M4 Sherman.\n",
"Tanks were primarily used on the Western Front. The first offensive of the war in which tanks were used \"en masse\" was the battle of Cambrai in 1917; 476 tanks started the attack, and the German front collapsed. At midday the British had advanced five miles behind the German line. The battle of Amiens in 1918 saw the value of the tank being appreciated; 10 heavy and two light battalions of 414 tanks were included in the assault. 342 Mark Vs and 72 Whippets were backed up by a further 120 tanks designed to carry forward supplies for the armour and infantry. By the end of the first day of the attack, they had penetrated the German line by , 16,000 prisoners were taken. In September 1918, the British Army was the most mechanised army in the world. Some 22,000 men had served in the Tank Corps by the end of the war.\n"
] |
if the only way you can get an STD, STI, or HIV is by sleeping with someone who's infected, then how did STDs, STIs, and HIV come to exist in the first place?
|
The question you really want answered is where these infections came from in the first place.
First, you're making a pretty big assumption that the first humans were "clean", as it were. Life started simple and got more complex from there, so bacteria, viruses, and other microbes were around long before humans were. These microscopic organisms predate complicated creatures like animals, and some of them found their way inside animals, whether because they were eaten, because an animal cut itself on a rock, or something similar. Some of these microbes couldn't survive inside animals; others could.
So once a microbe managed to live inside something else, it just became a matter of getting from one animal to another. Bacteria that could live in bodily fluids had a huge advantage, because those fluids gave them an excellent way of passing between hosts. Those that couldn't were less likely to survive and spread.
So that's how you end up with STIs. Over millions of years, microbes that could be transmitted sexually found great success: if an animal survives long enough, it's more or less inevitable that it will mate at some point, so these microbes were the most likely to survive and spread. A pathogen that might originally have spread through other means gradually evolved to become better and better at persisting in a living host and spreading through sex. Not because there's anything special about sex itself, but because these infections live in the fluids and tissues most likely to be transferred between sexual partners. A microbe that lived only in your armpit hair would have a much harder time reaching new hosts.
|
[
"Sexually transmitted infections (STIs) are bacteria, viruses or parasites that are spread by sexual contact, especially vaginal, anal, or oral intercourse, or unprotected sex. Oral sex is less risky than vaginal or anal intercourse. Many times, STIs initially do not cause symptoms, increasing the risk of unknowingly passing the infection on to a sex partner or others.\n",
"HIV is spread by three main routes: sexual contact, significant exposure to infected body fluids or tissues, and from mother to child during pregnancy, delivery, or breastfeeding (known as vertical transmission). There is no risk of acquiring HIV if exposed to feces, nasal secretions, saliva, sputum, sweat, tears, urine, or vomit unless these are contaminated with blood. It is also possible to be co-infected by more than one strain of HIV—a condition known as HIV superinfection.\n",
"Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. Many times STIs initially do not cause symptoms. This results in a greater risk of passing the disease on to others. Symptoms and signs of disease may include vaginal discharge, penile discharge, ulcers on or around the genitals, and pelvic pain. STIs can be transmitted to an infant before or during childbirth and may result in poor outcomes for the baby. Some STIs may cause problems with the ability to get pregnant.\n",
"Many individuals are concerned about the risk of HIV/AIDS. Generally, a person must either have unprotected sexual intercourse (vaginal or anal), use an infected syringe or have the virus passed from mother to child to be infected. A person cannot be infected from casual contact, such as hugging; however, there is some risk if HIV-infected blood or genital secretions (semen or vaginal secretions) enter an open wound.\n",
"One cannot become infected with HIV through normal contact in social settings, schools, or in the workplace. One cannot be infected by shaking someone's hand, by hugging or \"dry\" kissing someone, by using the same toilet or drinking from the same glass as an HIV-infected person, or by being exposed to coughing or sneezing by an infected person. Saliva carries a negligible viral load, so even open-mouthed kissing is considered a low risk. However, if the infected partner or both of the performers have blood in their mouth due to cuts, open sores, or gum disease, the risk is higher. The Centers for Disease Control and Prevention (CDC) has only recorded one case of possible HIV transmission through kissing (involving an HIV-infected man with significant gum disease and a sexual partner also with significant gum disease), and the Terence Higgins Trust says that this is essentially a no-risk situation.\n",
"The most effective way to prevent sexual transmission of STIs is to avoid contact of body parts or fluids which can lead to transfer with an infected partner. Not all sexual activities involve contact: cybersex, phonesex or masturbation from a distance are methods of avoiding contact. Proper use of condoms reduces contact and risk. Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom.\n",
"Chlamydia, human papillomavirus (HPV), gonorrhea, herpes, hepatitis (multiple strains), and other sexually transmitted infections (STIs/STDs), can be transmitted through oral sex. Any sexual exchange of bodily fluids with a person infected with HIV, the virus that causes AIDS, poses a risk of infection. Risk of STI infection, however, is generally considered significantly lower for oral sex than for vaginal or anal sex, with HIV transmission considered the lowest risk with regard to oral sex.\n"
] |
why do bottles of antibiotics and vitamins smell bad?
|
Not all antibiotics have that rotten-egg smell, but those that do typically contain a sulfur compound, often in the form of hydrogen sulfide, which gives them that rancid odor.
|
[
"Reusable bottles can hold bacteria. Drinking from a reusable bottle can transfer bacteria from a person's mouth to the beverage it contains, which can contaminate both bottle and water. Contamination can cause bacterial or fungal growth in the liquid while it's stored. It is recommended that users clean reusable drinking bottles thoroughly before each use. Users should take care to wash the bottle cap as well after each use for proper sanitation.\n",
"Proper preservation of perfumes involves keeping them away from sources of heat and storing them where they will not be exposed to light. An opened bottle will keep its aroma intact for several years, as long as it is well stored. However, the presence of oxygen in the head space of the bottle and environmental factors will in the long run alter the smell of the fragrance.\n",
"Before PET bottles were recycled to make new bottles, they were often recycled for other uses, such as paintbrush bristles or polyester fiber. Today, many companies, such as Patagonia, make clothing out of old PET bottles. It was at first difficult to recycle post-consumer PET bottles into new bottles because there was not sufficient knowledge about the ways in which PET was possibly contaminated during first use or during recollection. Contamination can occur either when substances from the beverages themselves get absorbed into the container or when bottles are reused to store unsafe liquids such as cleaners or chemicals. However, bottle-to-bottle recycling became more and more common as the number of PET bottles that got produced increased.\n",
"Perfumes are best preserved when kept in light-tight aluminium bottles or in their original packaging when not in use, and refrigerated to relatively low temperatures: between 3–7 °C (37–45 °F). Although it is difficult to completely remove oxygen from the headspace of a stored flask of fragrance, opting for spray dispensers instead of rollers and \"open\" bottles will minimize oxygen exposure. Sprays also have the advantage of isolating fragrance inside a bottle and preventing it from mixing with dust, skin, and detritus, which would degrade and alter the quality of a perfume.\n",
"Some medications, such as antibiotics, taken by mouth or antiseptics or medicated shampoos used on the skin can produce odors that owners may find unpleasant. Likewise, some food ingredients, most noticeably fish meal or fish oil, can produce skin odor in dogs.\n",
"Dental disease or mouth ulcers can produce rotten smelling breath (halitosis). Dental calculus harbors numerous bacteria which produce odor and foul breath. Dental disease can also lead to excessive drooling, and the skin around the mouth can become infected, leading to more odor production. Dogs can also acquire foul smelling breath as a result of coprophagia, the practice of eating their own feces or the feces of other animals. Commercially prepared food additives can be purchased which, when added to a dog's food, impart a bitter flavor to their feces thereby reducing the tendency towards consuming their own feces.\n",
"When exposed to air, warmth and light (especially without antioxidants), the oil loses its taste and psychoactivity due to aging. Cannabinoid carboxylic acids (THCA, CBDA, and maybe others) have an antibiotic effect on gram-positive bacteria such as (penicillin-resistant) Staphylococcus aureus, but gram-negative bacteria such as Escherichia coli are unaffected.\n"
] |
how do we know counting rings in a tree is a definitive "1 year"?
|
In places with seasons, trees go through a predictable growth-dormant cycle that produces the distinctive ring pattern.
Since most of these seasonal trees go dormant regardless of the actual winter temperature that year (they track day length rather than reacting to unpredictable temperature swings), a ring is produced even if the year's weather was very unusual.
You get wide rings in years with optimal growing conditions and thin rings in drought years.
Rings are less pronounced and more difficult to count in trees that prefer more tropical climates, since they may grow all year instead of stopping entirely on a regular cycle.
|
[
"Dendrochronology or tree-ring dating is the scientific method of dating based on the analysis of patterns of tree rings, also known as growth rings. Dendrochronology can date the time at which tree rings were formed, in many types of wood, to the exact calendar year. \n",
"Currey originally estimated the tree was at least 4844 years old. A few years later, this was increased to 4862 by Donald Graybill of the University of Arizona's Laboratory of Tree-Ring Research. These ring counts were done on a trunk cross-section taken about 2.5 m (8 feet) above the original germination point of the tree, because the innermost rings were missing below that point. Adjusting Graybill's figure by adding the estimated number of years required to reach that height, plus a correction for the estimated number of missing rings (not uncommon in trees at the tree line), it is probable that the tree was at least 5000 years old when felled. That made it the oldest known unitary (i.e. non-clonal) organism at the time, exceeding even the Methuselah tree of the White Mountains' Schulman Grove, in California, though Methuselah was later redated to 4845 years old.\n",
"Rings can be counted from dead trees as well as stumps left behind from logging. A sample can be collected from a living tree using tools like the increment borer. The increment borer is a hollow steel tube used to extract a core sample from a tree’s trunk. The growth rings in a core sample are counted to determine the age of that tree. Ages of stand-replacing fires may be determined by determining the cohort-age of trees that established after a fire. For example, tree-ring dating of large stands will show the age of the forest, and may provide an estimate of when the last large disturbance event occurred.\n",
"Crossdating, the skill of finding matching ring-width patterns between tree-ring samples, is used to assign the precise calendar year to every ring. This is affected by the climate that the timber was in. It is also important to have enough rings to actually confirm a date. Once the rings are dated, the chronology is measured. The last step is to compare the rings with that of ring-width patterns in sampled timbers and a master dating chronology.\n",
"In its most elementary form the number includes a year and a serial number. The year has two digits for 1898 to 2000, and four digits beginning in 2001. The three ambiguous years (1898, 1899, and 1900) are distinguished by the size of the serial number. There are also some peculiarities in numbers beginning with a \"7\" because of an experiment applied between 1969 and 1972 which added a check digit. \n",
"To produce a curve that can be used to relate calendar years to radiocarbon years, a sequence of securely dated samples is needed which can be tested to determine their radiocarbon age. The study of tree rings led to the first such sequence: individual pieces of wood show characteristic sequences of rings that vary in thickness because of environmental factors such as the amount of rainfall in a given year. These factors affect all trees in an area, so examining tree-ring sequences from old wood allows the identification of overlapping sequences. In this way, an uninterrupted sequence of tree rings can be extended far into the past. The first such published sequence, based on bristlecone pine tree rings, was created by Wesley Ferguson. Hans Suess used this data to publish the first calibration curve for radiocarbon dating in 1967. The curve showed two types of variation from the straight line: a long term fluctuation with a period of about 9,000 years, and a shorter term variation, often referred to as \"wiggles\", with a period of decades. Suess said he drew the line showing the wiggles by \"cosmic \"schwung\"\", by which he meant that the variations were caused by extraterrestrial forces. It was unclear for some time whether the wiggles were real or not, but they are now well-established. These short term fluctuations in the calibration curve are now known as de Vries effects, after Hessel de Vries.\n",
"Direct reading of tree ring chronologies is a complex science, for several reasons. First, contrary to the single-ring-per-year paradigm, alternating poor and favorable conditions, such as mid-summer droughts, can result in several rings forming in a given year. In addition, particular tree-species may present \"missing rings\", and this influences the selection of trees for study of long time-spans. For instance, missing rings are rare in oak and elm trees.\n"
] |
Whale sounds/songs can reach up to 190dB. Is this not dangerous for humans taking a swim nearby?
|
Simply put, the gap between water and air is very hard for sound to cross, for a few reasons.
First, that 190 dB figure isn't directly comparable to sound levels in air: underwater levels are referenced to a pressure of 1 micropascal, while in-air levels use 20 micropascals, so the same decibel number corresponds to a far less intense sound underwater than it would in air.
Second, sound travels much faster in water than in air, and water is far denser, which gives the two media very different acoustic impedances. When a sound wave hits the surface from below, that mismatch causes nearly all of its energy to 'bounce' off the underside of the surface and reflect back down into the water rather than cross into the air.
Finally, the surface of water is hardly uniform. Unlike a smooth membrane, water is perfectly capable of sloshing and absorbing energy in the form of motion, so part of the wave's energy becomes surface movement rather than airborne sound.
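As a rough illustration of the second point (the numbers are standard textbook values for density and sound speed, not from the sources above), here is a short sketch computing how much sound energy actually crosses a flat water-air boundary at normal incidence:

```python
# Minimal sketch: intensity transmission across a water-air boundary.
# Density and sound-speed values are textbook approximations.
import math

def acoustic_impedance(density, sound_speed):
    """Characteristic acoustic impedance Z = rho * c (in rayl)."""
    return density * sound_speed

def intensity_transmission(z1, z2):
    """Fraction of incident sound intensity transmitted across a
    boundary between media with impedances z1 and z2, at normal
    incidence: T = 4*z1*z2 / (z1 + z2)^2."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

z_water = acoustic_impedance(1000.0, 1480.0)  # ~1.48e6 rayl
z_air = acoustic_impedance(1.2, 343.0)        # ~412 rayl

t = intensity_transmission(z_water, z_air)
loss_db = -10 * math.log10(t)

print(f"transmitted fraction: {t:.4%}")       # only about 0.1% gets through
print(f"transmission loss: {loss_db:.1f} dB") # roughly 30 dB lost at the surface
```

In other words, about 99.9% of the whale call's energy reflects back down at the surface, which is why a person standing on a boat hears almost nothing even when a swimmer below would be exposed to an intense sound field.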
|
[
"Estimates made by Cummings and Thompson (1971) suggest the source level of sounds made by blue whales are between 155 and 188 decibels when measured relative to a reference pressure of one micropascal at one metre. All blue whale groups make calls at a fundamental frequency between 10 and 40 Hz; the lowest frequency sound a human can typically perceive is 20 Hz. Blue whale calls last between ten and thirty seconds. Blue whales off the coast of Sri Lanka have been repeatedly recorded making \"songs\" of four notes, lasting about two minutes each, reminiscent of the well-known humpback whale songs. As this phenomenon has not been seen in any other populations, researchers believe it may be unique to the \"B. m. brevicauda\" (pygmy) subspecies. The loudest sustained noise from a blue whale was at 188 dB.\n",
"Estimates made by Cummings and Thompson (1971) and Richardson et al. (1995) suggest that source level of sounds made by blue whales are between 155 and 188 decibels when measured at a reference pressure of one micropascal at one metre. All blue whale groups make calls at a fundamental frequency of between 10 and 40 Hz, and the lowest frequency sound a human can typically perceive is 20 Hz. Blue whale calls last between ten and thirty seconds. Additionally blue whales off the coast of Sri Lanka have been recorded repeatedly making \"songs\" of four notes duration lasting about two minutes each, reminiscent of the well-known humpback whale songs.\n",
"\"Song of the Whale\" carries out most of its research under sail to reduce the impact on the whales and other marine mammals being researched. The focus is on their movement and behaviour. Noise suppression is particularly important when assessing populations of whales as the researchers can listen to their sounds up to 20 miles away using hydrophone arrays, not relying solely on surface sightings. Research undertaken includes work on the problems of whales becoming entangled in fishing gear or in collisions with ships.\n",
"High levels of underwater sound create a potential hazard to human divers. Guidelines for exposure of human divers to underwater sound are reported by the SOLMAR project of the NATO Undersea Research Centre. Human divers exposed to SPL above 154 dB re 1 μPa in the frequency range 0.6 to 2.5 kHz are reported to experience changes in their heart rate or breathing frequency. Diver aversion to low frequency sound is dependent upon sound pressure level and center frequency.\n",
"The U.S. Navy had scheduled 14 training exercises through January 2009 off the coast of Southern California involving the use of “mid-frequency active sonar” to detect enemy submarines. Environmentalists argued that the sonar's high decibel levels may have a deafening effect on whales. They said studies conducted around the world have shown the piercing underwater sounds cause whales to flee in panic or to dive too deeply. Whales have been found beached in Greece, the Canary Islands, and in the Bahamas after sonar was used in the area, and necropsies showed signs of internal bleeding near the ears. \n",
"In 2009, researchers found that blue whale song has been deepening in its tonal frequency since the 1960s. While noise pollution has increased ambient ocean noise by over 12 decibels since the mid-20th century, researcher Mark McDonald indicated that higher pitches would be expected if the whales were straining to be heard.\n",
"They regularly dive for about 5–15 minutes (maximum of 20 minutes) after four to seven blows. Bryde's whales are capable of reaching depths down to . When submerging, these whales do not display their flukes. Bryde's whales commonly swim at , but can reach . They sometimes generate short (0.4 seconds) powerful, low-frequency vocalizations that resemble a human moan.\n"
] |
what exactly is a galaxy?
|
A massive, gravitationally bound collection of stars (along with gas, dust, and dark matter), all orbiting a common center of mass; most large galaxies also have a supermassive black hole at that center.
Basically, from what we can tell, they form much the same way individual star systems form. A cloud of gas (mainly hydrogen) condenses due to gravity, the center becomes a star, and eddies in the cloud help condense other parts into planets. A galaxy forms like that, but on a scale trillions of times larger.
|
[
"A galaxy is a gravitationally bound system of stars, stellar remnants, interstellar gas, dust, and dark matter. The word galaxy is derived from the Greek \"\" (), literally \"milky\", a reference to the Milky Way. Galaxies range in size from dwarfs with just a few hundred million () stars to giants with one hundred trillion () stars, each orbiting its galaxy's center of mass.\n",
"A galaxy is a large gravitational aggregation of stars, dust, gas, and an unknown component termed dark matter. The Milky Way Galaxy is only one of billions of galaxies in the known universe. Galaxies are classified into spirals, ellipticals, irregular, and peculiar. Sizes can range from only a few thousand stars (dwarf irregulars) to 10 stars in giant ellipticals. Elliptical galaxies are spherical or elliptical in appearance. Spiral galaxies range from S0, the lenticular galaxies, to Sb, which have a bar across the nucleus, to Sc galaxies which have strong spiral arms. In total count, ellipticals amount to 13%, S0 to 22%, Sa, b, c galaxies to 61%, irregulars to 3.5% and peculiars to 0.9%.\n",
"The galaxy is the smallest spiral galaxy in the Local Group and it is believed to be a satellite of the Andromeda Galaxy due to their interactions, velocities, and proximity to one another in the night sky. It also has an H II nucleus.\n",
"Galaxy is a scientific workflow system. These systems provide a means to build multi-step computational analyses akin to a recipe. They typically provide a graphical user interface for specifying what data to operate on, what steps to take, and what order to do them in.\n",
"\"The Galaxy\" is our home galaxy, the Milky Way, though it is referred to exclusively as \"the Galaxy\" in the series. Apart from a very brief moment during the first radio series, when the main characters were transported outside the galactic plane into a battle with Haggunenons, and a moment when one of Arthur's careless remarks is sent inadvertently through a wormhole into \"a distant galaxy\", the Galaxy provides the setting for the entire series. It is home to thousands of sentient races, some of whom have achieved interstellar capability, creating a vast network of trade, military and political links. To the technologically advanced inhabitants of the Galaxy, a small, insignificant world such as Earth is considered invariably primitive and backward. The Galaxy appears, at least nominally, to be a single state, with a unified government \"run\" by an appointed President. Its immensely powerful and monumentally callous civil service is run out of the Megabrantis Cluster, mainly by the Vogons. One of the most painful things to hear is vogon poetry. Vogon poetry is the 3rd worst poetry in the galaxy.\n",
"Galaxy is a scientific workflow, data integration, and data and analysis persistence and publishing platform that aims to make computational biology accessible to research scientists that do not have computer programming or systems administration experience. Although it was initially developed for genomics research, it is largely domain agnostic and is now used as a general bioinformatics workflow management system.\n",
"The galaxy is situated alone in a volume of space about it. It is theorized that the galaxy cannibalized its nearest companions, hence, being a fossil group. The galaxy is a giant elliptical of type cD3 (E+3), one of the largest classes of galaxies.\n"
] |
what happens when a country 'condemns' something?
|
That's pretty much it - just expressing disapproval. A lot of times there's not a good/politically palatable solution to problems, so all a politician can do is talk about it.
|
[
"Pure political betrayal trauma can be caused by situations such as wrongful arrest and conviction by the legal system of a western democracy; or by discrimination, bullying or other serious mistreatment by a state institution or powerful figure within the state.\n",
"Nonviolence is the personal practice of being harmless to self and others under every condition. It comes from the belief that hurting people, animals or the environment is unnecessary to achieve an outcome and refers to a general philosophy of abstention from violence. This may be based on moral, religious or spiritual principles, or it may be for purely strategic or pragmatic reasons.\n",
"The term \"nonviolence\" is often linked with or used as a synonym for peace, and despite being frequently equated with passivity and pacifism, this is rejected by nonviolent advocates and activists. Nonviolence refers specifically to the absence of violence and is always the choice to do no harm or the least harm, and passivity is the choice to do nothing. Sometimes nonviolence is passive, and other times it isn't. For example, if a house is burning down with mice or insects in it, the most harmless appropriate action is to put the fire out, not to sit by and passively let the fire burn. There is at times confusion and contradiction written about nonviolence, harmlessness and passivity. A confused person may advocate nonviolence in a specific context while advocating violence in other contexts. For example, someone who passionately opposes abortion or meat eating may concurrently advocate violence to kill an abortionist or attack a slaughterhouse, which makes that person a violent person.\n",
"When everything you believed in for your entire life turns out to be a devastating lie, then will you direct your anger at the person who caused it all or forgive that person? This drama tells a story about people who are hell-bent on recovering what they think is rightfully theirs while on the opposite side, there are people who will fight to protect what belongs to them. In the beginning, the two sides hate each other's guts. They do despicable things that will make it impossible to ever make things right again. But there is still a sliver of hope. The pent-up anger within themselves softens and they begin a path of rehabilitation and redemption that will allow them to forgive their enemies and bury the hatchet. This drama is not a story about revenge and anguish but a story of rehabilitation and harmony. Viewers will empathize with their agony and pain while also rejoicing at their happiness. The takeaway from it all is that love is a powerful force that can conquer all.\n",
"\"Bad Faith\" takes place in a dystopian society. The government is a theocratic totalitarian regime run by Mother (Ma) Baxter. It is clearly stated at points that the nation used to be \"godless\", but now it has its faith again. The One Church is the national church, and everyone should belong. This is the ideology that she uses to maintain her power and the government itself.\n",
"Hate speech is a statement intended to demean and brutalize another, or the use of cruel and derogatory language on the basis of real or alleged membership in a social group. Hate speech is speech that attacks a person or a group on the basis of protected attributes such as race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity. \n",
"If \"hate speech\" is taken to mean ethnic agitation, it is prohibited in Finland and defined in the section 11 of the penal code, \"War crimes and crimes against humanity\", as published information or as an opinion or other statement that threatens or insults a group because of race, nationality, ethnicity, religion or conviction, sexual orientation, disability, or a comparable basis. Ethnic agitation is punishable with a fine or up to 2 years in prison, or 4 months to 4 years if aggravated (such as incitement to genocide).\n"
] |
as someone who doesnt follow sports and social trends or have a twitter or instagram, why are people burning their nike clothes?
|
It's an extension of the kneeling-during-the-national-anthem protest. Colin Kaepernick, the football player who started the kneeling protest, did an ad with Nike. Now people who disliked the protest are burning their Nike stuff to show they hate Nike now.
|
[
"Because the Nike+ web community profile can be linked to both Facebook and Twitter, users can now share their results and accomplishments with their friends. This has the ability to lead to a greater chance for positive results because interaction and motivation from friends has proven to benefit workout habits. \"A 2011 Pew Internet study found that 80% of Internet users look for health information online, 27% of U.S. Internet users had tracked health data online, and 18% had sought to locate others with similar health concerns via the Internet\". These statistics suggest that self-empowerment and action taking, in regards to health, is becoming a much more accepted behavior norm, instead of a small online community, like it has been in the past. When sharing workout results with friends on social media, one is much more aware of their personal well-being. Being constantly aware of your physical fitness and activity levels are very important for living a healthy life, which leads to the conclusion why many say that technologies like the FuelBand are necessary for being physically responsible of oneself. The FuelBand makes it much more simple to live a healthy and informed life and this can be related to maintaining personal health records. Personal health records are types of medical records that are edited, administrated, and owned by the patient, instead of the doctor or health care administrator. Personal health records are usually stored on online databases and they have proven to be \"a key step in empowering health self-management as we can have a more active role in understanding, accessing, maintaining, and sharing our personal health information, and in coordinating and participating in our own health care\". Studies have shown that PHR users are over 65% more likely to follow up on recommended care or to act on the change that they desire, which indicates the potentially beneficial influences in behaviors of PHRs. 
The FuelBand has the potential to help its users get in great physical shape and to be well informed of their health records and statistics. Because of this and the ability to be connected to an online community through social media, the FuelBand can be seen as an innovative technology that is representing the way that the health field is going in the future. Health practices are becoming more personalized and more power is being given to the individual and the FuelBand is an exact example of this new field of technology that is growing in size.\n",
"The involvement of Kaepernick with the advertisement, especially after the context of the controversial act of kneeling during the National Anthem in 2016, gave rise to a whole entire internet debate and social movement against Nike. Many individuals took to Twitter and other social media sites to revolt, adopting hashtags such as, #JustDont or #BoycottNike. Many prior fans of Nike have also showed signs of protest by explicitly demanding that others boycott or even go as far to burn Nike shoes or destroy various other merchandise. Nevertheless, many analysts suggested that the campaign was successful, as the target group of the advertisement has endorsed it.\n",
"Even the most important sportswear companies are involved in social media campaigns, first of all Nike, that give the possibility to their Nike+ app users to share on Instagram their fitness achievements and progress. In fact, the app has been downloaded over 17 millions of times from all over the world users.\n",
"Another trend that influences the way youth communicates is (through) the use of hashtags. With the introduction of social media platforms such as Twitter, Facebook and Instagram, the hashtag was created to easily organize and search for information. Hashtags can be used when people want to advocate for a movement, store content or tweets from a movement for future use, and allow other social media users to contribute to a discussion about a certain movement by using existing hashtags. Using hashtags as a way to advocate for something online makes it easier and more accessible for more people to acknowledge it around the world. As hashtags such as #tbt (\"throwback Thursday\") become a part of online communication, it influenced the way in which youth share and communicate in their daily lives. Because of these changes in linguistics and communication etiquette, researchers of media semiotics have found that this has altered youth's communications habits and more.\n",
"Nike has made great use of the Swoosh logo in athlete endorsements. The endorsements of Romanian tennis player Ilie Năstase and distance runner Steve Prefontaine kicked off Nike's brand sponsorships and today they endorse hundreds of athletes. Nike's endorsements of Michael Jordan, LeBron James and Kobe Bryant in basketball, Cristiano Ronaldo in football, Tiger Woods in golf, and Roger Federer and Rafael Nadal in tennis are among the 15 biggest athlete endorsement deals in sports history.\n",
"The hashtags have also been used for T-shirts and similar, with self-portraits of people wearing the clothes widely shared on social media. Several outlets reported that the clothes and images were widely mocked, and often Photoshopped sarcastically.\n",
"The Advertising Standards Authority (ASA) in the UK reached a similar decision in June 2012 in relation to material about Nike on Twitter. The ASA found that the content of certain tweets from two footballers had been \"agreed with the help of a member of the Nike marketing team.\" The tweets were not clearly identified as Nike marketing communications, and were therefore in breach of the ASA's code.\n"
] |
asian flush syndrome
|
Are you asking what causes it? It's caused by the buildup of a chemical called acetaldehyde, which is a natural byproduct of alcohol metabolism. It's genetic, and fairly common among people of Asian descent. A couple of genes are responsible. One produces an enzyme called alcohol dehydrogenase, which is what breaks down alcohol; people with a certain variant of this gene produce acetaldehyde more quickly. Another gene makes the enzyme that breaks down acetaldehyde itself, and people with a variant of that gene don't produce enough working enzyme, so the acetaldehyde accumulates.
|
[
"Alcohol flush reaction is a condition in which an individual's face or body experiences flushes or blotches as a result of an accumulation of acetaldehyde, a metabolic byproduct of the catabolic metabolism of alcohol. It is best known as a condition that is experienced by people of Asian descent. According to the analysis by HapMap Project, the rs671 allele of the ALDH2 gene responsible for the flush reaction is rare among Europeans and Africans, and it is very rare among Mexican-Americans. 30% to 50% of people of Chinese and Japanese ancestry have at least one ALDH*2 allele. The rs671 form of ALDH2, which accounts for most incidents of alcohol flush reaction worldwide, is native to East Asia and most common in southeastern China. It most likely originated among Han Chinese in central China, and it appears to have been positively selected in the past. Another analysis correlates the rise and spread of rice cultivation in Southern China with the spread of the allele. The reasons for this positive selection aren't known, but it's been hypothesized that elevated concentrations of acetaldehyde may have conferred protection against certain parasitic infections, such as \"Entamoeba histolytica\". The same SNP allele of ALDH2, also termed glu487lys, and the abnormal accumulation of acetaldehyde following the drinking of alcohol, is associated with the alcohol-induced respiratory reactions of rhinitis and asthma that occur in Eastern Asian populations.\n",
"Alcohol flush reaction is best known as a condition that is experienced by people of East Asian descent. According to the analysis by HapMap project, the rs671 (ALDH2*2) allele of the \"ALDH2\" responsible for the flush reaction is rare among Europeans and Sub-Saharan Africans. 30% to 50% of people of Chinese, Japanese, and Korean ancestry have at least one \"ALDH2*2\" allele. The rs671 form of ALDH2, which accounts for most incidents of alcohol flush reaction worldwide, is native to East Asia and most common in southeastern China. It most likely originated among Han Chinese in central China, Another analysis correlates the rise and spread of rice cultivation in Southern China with the spread of the allele. The reasons for this positive selection aren't known, but it's been hypothesized that elevated concentrations of acetaldehyde may have conferred protection against certain parasitic infections, such as \"Entamoeba histolytica\".\n",
"Niacin in cholesterol lowering doses (500–2000 mg per day) causes facial flushes by stimulating biosynthesis of prostaglandin D (PGD), especially in the skin. PGD dilates the blood vessels via activation of the prostaglandin D receptor subtype DP, increasing blood flow and thus leading to flushes. Laropiprant acts as a selective DP receptor antagonist to inhibit the vasodilation of prostaglandin D-induced activation of DP.\n",
"BULLET::::- Rosacea, also known as gin blossoms, is a chronic facial skin condition in which capillaries are excessively reactive, leading to redness from flushing or telangiectasia. Rosacea has been mistakenly attributed to alcoholism because of its similar appearance to the temporary flushing of the face that often accompanies the ingestion of alcohol.\n",
"Regular Asian customers often became subject to gratuitous suspicion and even outright discrimination due to the disruptive nature of the rampant purchases of luxury goods and other consumer goods made by \"Daigou\" hoarders and smugglers, who are mostly Asians. Asian-American sales associates at Macy's Herald Square sued Macy's for racial discrimination in September 2017, alleging that store managers instructed sales associates not to sell more than one unit to any single Asian customer, and that they were fired when they spoke up about the alleged discrimination.\n",
"The exact cause of rosacea is unknown. Triggers that cause episodes of flushing and blushing play a part in its development. Exposure to temperature extremes, strenuous exercise, heat from sunlight, severe sunburn, stress, anxiety, cold wind, and moving to a warm or hot environment from a cold one, such as heated shops and offices during the winter, can each cause the face to become flushed. Certain foods and drinks can also trigger flushing, such as alcohol, foods and beverages containing caffeine (especially hot tea and coffee), foods high in histamines, and spicy foods.\n",
"The Flushing Chinatown, in the Flushing area of the borough of Queens in New York City, is one of the largest and fastest growing ethnic Chinese enclaves outside Asia, as well as within New York City itself. Main Street and the area to its west, particularly along Roosevelt Avenue, have become the primary nexus of Flushing Chinatown. However, Flushing Chinatown continues to expand southeastward along Kissena Boulevard and northward beyond Northern Boulevard. In the 1970s, a Chinese community established a foothold in the neighborhood of Flushing, whose demographic constituency had been predominantly non-Hispanic white. Taiwanese began the surge of immigration. It originally started off as \"Little Taipei\" or \"Little Taiwan\" due to the large Taiwanese population. Due to the then dominance of working class Cantonese immigrants of Manhattan's Chinatown including its poor housing conditions, they could not relate to them and settled in Flushing.\n"
] |
why does white noise calm people down?
|
When it's quiet, your body reacts to every stray noise with a "what's that?" response that makes you perk up and become alert. By drowning out those sounds with white noise, that startle response fires far less often and your body has a chance to relax.
|
[
"The effects of white noise upon cognitive function are mixed. Recently, a small study found that white noise background stimulation improves cognitive functioning among secondary students with attention deficit hyperactivity disorder (ADHD), while decreasing performance of non-ADHD students. Other work indicates it is effective in improving the mood and performance of workers by masking background office noise, but decreases cognitive performance in complex card sorting tasks.\n",
"Grey noise is random white noise subjected to a psychoacoustic equal loudness curve (such as an inverted A-weighting curve) over a given range of frequencies, giving the listener the perception that it is equally loud at all frequencies. This is in contrast to standard white noise which has equal strength over a linear scale of frequencies but is not perceived as being equally loud due to biases in the human equal-loudness contour.\n",
"The term white noise—the 'sh' noise produced by a signal containing all audible frequencies of vibration—is sometimes used as a colloquialism to describe a backdrop of ambient sound, creating an indistinct commotion, seamless in such way no specific sounds composing it as a continuum can be isolated as a veritable instance of some defined familiar sound so that masks or obliterates underlying information. e.g. chatter from multiple conversations within the acoustics of a confined place. The information itself may have characteristics that achieve this effect without the need to introduce a masking layer. A common example of this usage is a politician including more information than needed to mask a point they don't want noticed.\n",
"White noise is commonly used in the production of electronic music, usually either directly or as an input for a filter to create other types of noise signal. It is used extensively in audio synthesis, typically to recreate percussive instruments such as cymbals or snare drums which have high noise content in their frequency domain. A simple example of white noise is a nonexistent radio station (static).\n",
"Noiseless is an image noise reduction application by Macphun Software. The application is designed to reduce the noise found in digital photographs. The noise is often a result of snapping pictures in low light situations.\n",
"White noise is a random signal (or process) with a flat power spectral density. In other words, the signal contains equal power within a fixed bandwidth at any center frequency. White noise is considered analogous to white light which contains all frequencies.\n",
"Though Blank Noise was founded in Bangalore, it has spread to other cities such as Mumbai, Delhi, Chennai, Calcutta, Chandigarh, Hyderabad, and Lucknow. It tackles the notion of shame and blame through campaigns such as \"I never ask for it\" (ask to be sexually harassed when on the streets). A major notion that it seeks to dispel is that women get harassed because of the clothing they wear. Through street actions and dialogue, Blank Noise hopes to achieve its aims of achieving a safe and free environment for women on the streets, and enable society to become more egalitarian towards women in general.\n"
] |
If coats are just good insulators, why can't we wear them in the summer to keep cool?
|
Something not mentioned yet: your body produces heat. If you insulated your body during the summer, you would quickly overheat, because the coat keeps that heat locked in while your body keeps producing more of it than you can comfortably stand.
In the winter, the cold and wind draw heat off the coat at roughly the rate your body produces it, so all is well. But if you layer up too much, the same effect kicks in and you become hot and sweaty because you are wearing too much, even in winter.
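The balance described above can be put into rough numbers. This is a back-of-the-envelope heat-balance sketch, not anything from the answer itself: the metabolic rate, skin area, and clothing insulation (clo) values are all assumed ballpark figures.

```python
# Rough heat-balance sketch; every number here is an assumed ballpark figure.
METABOLIC_HEAT_W = 100.0   # resting human heat output, watts
BODY_AREA_M2 = 1.8         # typical adult skin area
CLO_TO_SI = 0.155          # 1 clo of clothing insulation = 0.155 m^2*K/W

def comfortable_ambient_c(clo, skin_temp_c=33.0):
    """Ambient temperature at which this much clothing just balances body heat."""
    flux = METABOLIC_HEAT_W / BODY_AREA_M2   # W/m^2 leaving the skin
    delta_t = flux * clo * CLO_TO_SI         # temperature drop the insulation sustains
    return skin_temp_c - delta_t

print(round(comfortable_ambient_c(1.0), 1))  # light clothing: ~24.4 C
print(round(comfortable_ambient_c(3.0), 1))  # heavy winter coat: ~7.2 C
```

Under these assumptions a heavy coat only balances your heat output around 7 °C; in anything warmer, the heat your body produces can't escape fast enough, which is exactly the overheating described above.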
|
[
"One type of coating (low-e coatings) reduces the emission of radiant infrared energy, thus tending to keep heat on the side of the glass where it originated, while letting visible light pass. This results in glazing with better control of energy - heat originating from indoors in winter remains inside (the warm side), while heat during summer does not emit from the exterior, keeping it cooler inside.\n",
"Comforters are usually used in the winter season when it is very cold, although a variety of different thicknesses means that they can be used in other seasons as well, with lighter examples being used in warmer weather. Due to the thickness of a comforter or the amount of down/feathers or other filling it has, a person is insulated against cold.\n",
"In cold climates, mittens in which all the fingers are in the same compartment serve well to keep the hands warm, but this arrangement also confines finger movement and prevents the full range of hand function; gloves, with their separate fingers, do not have this drawback, but they do not keep the fingers as warm as mittens do. As such, with mittens and gloves, warmth versus dexterity is the trade-off. In a like fashion, warm coats are often bulky and hence they impede freedom of movement for the wearer. Thin coats, such as those worn by winter sports athletes, give the wearer more freedom of movement, but they are not as warm.\n",
" Wearing several winter clothing layers make it easier to regulate body temperature when cycling in the winter. Hats, gloves, socks, arm warmers, leg warmers, scarves, neck gaiters and lightweight packable jackets, can be adjusted, added, or removed to help regulate your body temperature and personal comfort. In below freezing temperatures extra clothing items can be added to help keep you warm these include thermal pants, winter shoe covers, and longer coats. \n",
"Another downside is its tendency to become very cold during winter. This can cause problems; due to this, many change their jewelry to others made of horn, bone, wood, plastics and glass during winter.\n",
"In recent years its age has made it a difficult building to keep warm in the wintertime. Even with the windows closed, City Hall has been drafty. The city installed Cellular Shades, and later window insulating panels, to keep it warm.\n",
"The evaporation of the moisture from the clothes will cool the indoor air and increase the humidity level, which may or may not be desirable. In cold, dry weather, moderate increases in humidity make most people feel more comfortable. In warm weather, increased humidity makes most people feel even hotter. Increased humidity can also increase growth of fungi, which can cause health problems.\n"
] |
Why is there a maximum speed for light? What is "braking" it?
|
The speed of light is set by two fundamental constants known as the permeability and permittivity of free space. A classical analogy for these constants would be "stiffness", i.e. empty space has a stiffness, and this determines the speed at which waves travel through it. Nothing is "braking" light: in a vacuum it always travels at exactly c, and the limit is a property of spacetime itself rather than of light.
There are plenty of near-light-speed particles that were accelerated by both extraterrestrial and terrestrial processes (cf. CERN!).
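The relationship between those two constants and the speed of light can be checked directly: c = 1/sqrt(μ₀ε₀). A minimal sketch, using standard CODATA values for the constants (not taken from the answer above):

```python
import math

MU_0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m (classical defined value)
EPSILON_0 = 8.8541878128e-12   # vacuum permittivity, F/m

# The wave speed implied by the two "stiffness" constants of free space
c = 1.0 / math.sqrt(MU_0 * EPSILON_0)
print(round(c))  # 299792458 m/s, the speed of light
```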
|
[
"Since kinetic energy increases quadratically with velocity (formula_1), an object moving at 10 m/s has 100 times as much energy as one of the same mass moving at 1 m/s, and consequently the theoretical braking distance, when braking at the traction limit, is 100 times as long. In practice, fast vehicles usually have significant air drag, and energy lost to air drag rises quickly with speed.\n",
"The need to brake is sometimes caused by unpredictable events. At higher speeds, there is less time to allow vehicles to slow down by coasting. Kinetic energy is higher, so more energy is lost in braking. At medium speeds, the driver has more time to choose whether to accelerate, coast or decelerate in order to maximize overall fuel efficiency.\n",
"A Slower Speed of Light is a freeware video game developed by MIT Game Lab that demonstrates the effects of special relativity by gradually slowing down the speed of light to a walking pace. The game runs on the Unity engine using the own open source OpenRelativity toolkit.\n",
"In Galilean relativity, it was considered \"obvious\" that we could add speeds without limit (\"w\" = \"u\" + \"v\"). This composition laws for speed was not challenged. However, Poincaré and Einstein did challenge it with special relativity, setting a maximum speed on movement, the speed of light. Formally, if \"v\" is a velocity, \"v + c = c\". The status of the speed of light in special relativity is a horizon, unreachable, impassable, invariant under changes of movement.\n",
"These considerations show that the speed of light as a limit is a consequence of the properties of spacetime, and not of the properties of objects such as technologically imperfect space ships. The prohibition of faster-than-light motion, therefore, has nothing in particular to do with electromagnetic waves or light, but comes as a consequence of the structure of spacetime.\n",
"Later, Mallett abandoned the idea of using slowed light to reduce the energy, writing that, \"For a time, I considered the possibility that slowing down light might increase the gravitational frame dragging effect of the ring laser ... Slow light, however, turned out to be helpful for my research.\"\n",
"In order to enable the testing of extreme braking situations, e.g. emergency braking from high speed, it is necessary for the lanes to be designed with a length of typically 150–250 m. Additionally acceleration lanes with the appropriate length are also required for reaching high speed, which enable even heavy-duty vehicles to reach 100 km/h. This way it becomes possible to test the braking systems of trucks and buses.\n"
] |
how can the quietest room in the world be -9 decibels?
|
Decibels are a logarithmic scale. 0 dB isn't "no sound"; it's just the lower limit of what a human can typically hear. So -9 dB isn't no sound at all, it's just quieter than the quietest sound a human can detect, by about the same factor as the difference between 0 and 10 decibels (roughly a factor of ten in intensity).
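Because the scale is logarithmic, converting decibels back to a linear intensity ratio is a one-liner. A quick sketch:

```python
def db_to_intensity(db):
    """Intensity relative to the 0 dB reference; every +10 dB is a factor of 10."""
    return 10 ** (db / 10)

print(db_to_intensity(10))            # 10.0  -- ten times the reference intensity
print(round(db_to_intensity(-9), 3))  # 0.126 -- about 8x below the reference
```

So -9 dB is a perfectly ordinary point on the scale, just one that happens to sit below what human ears can pick up.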
|
[
"BULLET::::- World's quietest room, located at Orfield Labs in Minneapolis, Minnesota. The Orfield Labs chamber was certified by the Guinness Book of World Records in 2005 as the quietest room on Earth.\n",
"The Murray Hill anechoic chamber, built in 1940, is the world's oldest wedge-based anechoic chamber. The interior room measures approximately high by wide by deep. The exterior concrete and brick walls are about thick to keep outside noise from entering the chamber. The chamber absorbs over 99.995% of the incident acoustic energy above 200 Hz. At one time the Murray Hill chamber was cited in the Guinness Book of World Records as the world's quietest room. It is possible to hear the sounds of skeletal joints and heart beats very prominently.\n",
"Wolfe's third album \"Raw Space\" was conceived at Bell Labs' Anechoic chamber, cited in the Guinness World Records as the quietest room in the world. The album features \"Little Moth\", a song written in tribute to singer songwriter Elliott Smith and described by Spindle Magazine as \"a tender homage with the intimate double vocals, distant mellotron and all round low-fi sound, very much in the spirit of Smith’s style and production\"\n",
"At low frequencies, most rooms have resonances at a series of frequencies where a room dimension corresponds to a multiple of half wavelengths. Sound travels at roughly 1 foot per millisecond (1100 ft/s), so a room long will have resonances from 25 Hz upwards. These resonant modes cause large peaks and dips in the sound level of a constant signal as the frequency of that signal varies from low to high.\n",
"A room tone is a recorded element of sound design, often employed in the movie industry, to impress the sonic ambience of a depicted environment. It is the sound of “silence” in a room, though never quite silent as each room tone is inflected by different characteristics such as sonic reflections bouncing off physical architecture, the absorptive presence of bodies, and other kinds of technology present (i.e., the subtle hum of air-handling systems and lights). \"Room Tone\" is run by a computer program that generates sounds drawn from a database of approximately 1,000 room tones through four separate channels. The program may layer, cut up, and combine these room tones, varying in duration and volume, to create a new, dynamic, and visceral room tone for any particular space. Constantly changing and evolving, the aural experience of \"Room Tone\" humorously plays with the oft-cited adage of John Cage: “There is no such thing as an empty space or an empty time. There is always something to see, something to hear. In fact, try as we may to make a silence, we cannot.”\n",
"The threshold of hearing is generally reported as the RMS sound pressure of 20 micropascals, i.e. 0 dB SPL, corresponding to a sound intensity of 0.98 pW/m at 1 atmosphere and 25 °C. It is approximately the quietest sound a young human with undamaged hearing can detect at 1,000 Hz. The threshold of hearing is frequency-dependent and it has been shown that the ear's sensitivity is best at frequencies between 2 kHz and 5 kHz, where the threshold reaches as low as −9 dB SPL.\n",
"Formal radio quiet zones exist around many observatories, including the Murchison Radio-astronomy Observatory in Australia, the National Radio Astronomy Observatory and the Sugar Grove Station in West Virginia, United States (the United States National Radio Quiet Zone), and the Itapetinga Radio Observatory in Brazil, as examples. The ITU has recommended designating two locations in space as radio quiet zones: the shielded zone on the Moon's far side, and the Sun-Earth Lagrangian point L.\n"
] |
does “burning in” brand new audio gears such as headphones and speakers actually work?
|
First: 'gear' in this context is uncountable. It's a mass noun, like 'news' and 'furniture', so there's no "gears".
Second: no. That idea comes from decades ago, when magnets were weaker and materials were worse. Even then, it had almost no impact on the sound; only hard-core audiophiles claimed to hear a difference in sound quality.
Modern sound equipment uses materials that do not change their physical characteristics over time. The diaphragm, if there is one, will not stretch.
|
[
"Initial sales were slow, because at the time electronics retailers provided low-cost lamp cords to consumers for free or at low prices and audiophiles didn't believe audio cables made a difference in the sound. Monster is credited with creating the market for high-end audio cables in the 1980s through Lee's \"marketing prowess\". He did demonstrations comparing the audio of standard cables to Monster cables for retailers and trained their salespeople to do the same for customers.\n",
"This model also suffers from a whine on the headphone and microphone jacks that are located on the left of the unit. This is because of shared space with the leftmost fan, and the spinning of said fan causes interference. There is no known fix than to otherwise use a USB, FireWire/1394 or PCMCIA-based audio device or card for sound output.\n",
"The exhaust directs the hot and noxious gases coming from the engine away from the user. A faulty exhaust increases noise, decreases engine power, can expose the user to unsafe levels of exhaust gases, and can increase the chance that the user could accidentally touch extremely hot metal. Most models feature a spark screen which is integrated into the muffler. The spark screen prevents sparks from being discharged from the exhaust and potentially igniting sawdust. The spark screen also reduces noise.\n",
"In his early days of selling high-end audio equipment, William E. Low discovered that the sound of an audio system was easily influenced by the quality of the cables connecting its various components. Hi-fi journalist, Richard Hardesty explained: \n",
"Many audiophiles believe it can take many hours before the maximum sonic benefits are heard in audio circuits that use Black Gates. This long settling-in procedure is often a controversial issue when auditioning such equipment, as the frequency response is said to tend to shift around greatly during this period, making the equipment sound different from one audition to another. Once completely 'burnt-in' however, the benefits are said to be heard clearly. This settling period or burn in period was most likely attributed to the aluminum layer completing its reaction to form a complete and stable oxide layer on its surface once current and voltage are applied to the capacitor in a circuit.\n",
"The Gamate's mono internal speaker is of poor quality, giving off sound that is quite distorted, particularly at low volumes. However, if a user plugs into the headphone jack, the sound is revealed to be programmed in stereo, and of a relatively high quality.\n",
"To reduce radio frequency interference (RFI) produced by the spark being radiated by the wires, which may cause malfunction of sensitive electronic systems in modern vehicles or interfere with the car radio, various means in the spark plug and associated lead have been used over time to reduce the nuisance:\n"
] |
Is there any credible evidence for any kind of Giant humans existing?
|
I wouldn't say it is propaganda (that's a strong word!), but these entities, common in many people's folklore, have no basis in fact. You can ask /r/Askanthropology about things like the "giganthropus" fossil evidence, but whatever that represents, it is hardly evidence of giants, and it would be a stretch beyond all credulity to conclude that there is some sort of primal memory of gigantic hominids from many hundreds of thousands of years ago - if they ever existed at all!
Giants are one of those species of supernatural beings that existed only in a remote past: people talked about encountering ghosts, fairies, the devil, or any number of other things, but they never told of having encountered a giant. Stories about giants were always about other people, living in the past, having dealings with them. It seems it was easy for people to imagine there was once a race of titans to explain enormous, seemingly unnatural things in the landscape. The name of the Giant's Causeway preserves the idea that one of these entities built a path to walk from Ireland to Scotland. Wade's Causeway is another reference to a giant, in this case to explain a Roman road in Yorkshire. The etiological role of giants was paramount, but the explanation of the landscape, megaliths, or extraordinary things in general could merge with stories about other supernatural beings. For example, the Devil's Dyke in Cambridgeshire shows tradition holding that Satan affected the landscape in a way normally reserved for giants. I wouldn't want to go so far as to say it was a simple process of 1. fantastic landscape element; 2. a fantastic and necessarily large entity needs to be responsible; therefore, 3. giants must have existed. That said, this sort of process serves as an underpinning to reinforce belief.
|
[
"Giant tortoises of the genera \"Geochelone\", \"Meiolania\", and others were relatively widely distributed around the world into prehistoric times, and are known to have existed in North and South America, Australia, and Africa. They became extinct at the same time as the appearance of man, and it is assumed humans hunted them for food. The only surviving giant tortoises are on the Seychelles and Galápagos Islands and can grow to over in length, and weigh about .\n",
"Evidence of Megafauna, including bones attributed to Diprotodon, Maesopus (a giant kangaroo) and Thylacoleo (a marsupial lion) were discovered in the 1890s in a swamp near Yankalilla and conjecture surrounds the possibility that the animals were hunted by the Ramindjerl people.\n",
"Certain extinct cephalopods rivalled or even exceeded the size of the largest living species (Carnall, 2017). In particular, the subclass Ammonoidea is known to have included a considerable number of species that may be considered \"giant\" (defined by Stevens, 1988 as those exceeding in shell diameter). The largest confirmed ammonite, a specimen of \"Parapuzosia seppenradensis\" discovered in a German quarry in 1895, measures in diameter (Kennedy & Kaplan, 1995:21), though its living chamber is largely missing. The diameter of the complete shell has been estimated at , assuming the living chamber took up one-fourth of the outer whorl (Landois, 1895:100). Teichert & Kummel (1960:6) suggested an even larger original shell diameter of around for this specimen, assuming the body chamber extended for three-fourths to one full whorl. In 1971 a portion of an ammonite possibly surpassing this specimen was reportedly found in a brickyard in Bottrop, western Germany (Beer, 2015). A specimen found by Jim Rockwood, from the Late Triassic near Williston Lake, British Columbia, was said to measure more than across, but was later determined to be a concretion ([Anonymous],; [Anonymous], 2008).\n",
"BULLET::::- Megalania – A giant goanna (lizard), generally believed to be extinct. However, there have been numerous reports and rumors of living Megalania in Australia, and occasionally New Guinea, but the only physical evidence that Megalania might still be alive today are plaster casts of possible Megalania footprints made in 1979.\n",
"BULLET::::- Also on the Semliki River, a specimen was reportedly killed in June 1954 by Mr. Hippel that measured . This giant is considered truly exceptional because it was verified that it was a female, making it almost a metre longer than any other known female and possibly the largest female crocodilian of any extant species.\n",
"There have been occasional claims that the woolly mammoth is not extinct, and that small, isolated herds might survive in the vast and sparsely inhabited tundra of the Northern Hemisphere. In the 19th century, several reports of \"large shaggy beasts\" were passed on to the Russian authorities by Siberian tribesmen, but no scientific proof ever surfaced. A French \"chargé d'affaires \"working in Vladivostok, M. Gallon, said in 1946 that in 1920, he had met a Russian fur-trapper who claimed to have seen living giant, furry \"elephants\" deep into the taiga. Due to the large area of Siberia, that woolly mammoths survived into more recent times cannot be completely ruled out, but all evidence indicates that they became extinct thousands of years ago. These natives likely had gained their knowledge of woolly mammoths from carcasses they encountered, and that this is the source for their legends of the animal.\n",
"Evidence of Megafauna, including bones attributed to Diprotodon, Maesopus – the giant kangaroo and Thylacoleo – a marsupial lion, were discovered in the 1890s. A Diprotodon leg bone was found in a swamp in the 1890s and conjecture surrounds the possibility that the animals were hunted by local aboriginal groups.\n"
] |
how come no one has registered trademark using internet memes? are there any policies related to that?
|
To register a phrase as a trademark, you have to prove that people recognize your company's products because you use the phrase. That's never going to be true for an internet meme, because by definition everyone uses it, so it can't identify any one company.
|
[
"Some countries have specific laws against cybersquatting beyond the normal rules of trademark law. The United States, for example, has the U.S. Anticybersquatting Consumer Protection Act (ACPA) of 1999. This expansion of the Lanham (Trademark) Act (15 U.S.C.) is intended to provide protection against cybersquatting for individuals as well as owners of distinctive trademarked names. However, some notable personalities, including rock star Bruce Springsteen and actor Kevin Spacey, failed to obtain control of their names on the internet.\n",
"United States trademark law is mainly governed by the Lanham Act. Common law trademark rights are acquired automatically when a business uses a name or logo in commerce, and are enforceable in state courts. Marks registered with the U.S. Patent and Trademark Office are given a higher degree of protection in federal courts than unregistered marks—both registered and unregistered trademarks are granted some degree of federal protection under the Lanham Act 43(a).\n",
"A Trademark in computer security is a contract between code that verifies security properties of an object and code that requires that an object have certain security properties. As such it is useful in ensuring secure information flow. In object-oriented languages, trademarking is analogous to signing of data but can often be implemented without cryptography.\n",
"In many countries (but not in countries like the United States, which recognizes common law trademark rights), a trademark which is \"not\" registered cannot be \"infringed\" as such, and the trademark owner cannot bring infringement proceedings. Instead, the owner may be able to commence proceedings under the common law for passing off or misrepresentation, or under legislation which prohibits unfair business practices. In some jurisdictions, infringement of trade dress may also be actionable.\n",
"Trademark infringement is a violation of the exclusive rights attached to a trademark without the authorization of the trademark owner or any licensees (provided that such authorization was within the scope of the licence). Infringement may occur when one party, the \"infringer\", uses a trademark which is identical or confusingly similar to a trademark owned by another party, in relation to products or services which are identical or similar to the products or services which the registration covers. An owner of a trademark may commence civil legal proceedings against a party which infringes its registered trademark. In the United States, the Trademark Counterfeiting Act of 1984 criminalized the intentional trade in counterfeit goods and services.\n",
"Most courts particularly frowned on cybersquatting, and found that it was itself a sufficiently commercial use (i.e., \"trafficking\" in trademarks) to reach into the area of trademark infringement. Most jurisdictions have since amended their trademark laws to address domain names specifically, and to provide explicit remedies against cybersquatters.\n",
"A non-conventional trademark, also known as a nontraditional trademark, is any new type of trademark which does not belong to a pre-existing, conventional category of trade mark, and which is often difficult to register, but which may nevertheless fulfill the essential trademark function of uniquely identifying the commercial origin of products or services.\n"
] |
"If there is no biological basis for race, how can forensic anthropologists distinguish the remains of a person of one race from those of another?"
|
The concept of race exists in biology, but there is only one human race. There are genetic differences between human populations based on geography, but they are gradual and increase slowly with distance; there is no abrupt change or non-overlap of genetic make-up, as would be required to define a distinct race. _URL_0_
|
[
"Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) to aid in the identification of the body, including in terms of race. In a 1992 article, anthropologist Norman Sauer noted that anthropologists had generally abandoned the concept of race as a valid representation of human biological diversity, except for forensic anthropologists. He asked, \"If races don't exist, why are forensic anthropologists so good at identifying them?\" He concluded:\n",
"Sesardic argues that when several traits are analyzed at the same time, forensic anthropologists can classify a person's race with an accuracy of close to 100% based on only skeletal remains. Sesardic's claim has been disputed by Massimo Pigliucci, who accused Sesardic of \"cherry pick[ing] the scientific evidence and reach[ing] conclusions that are contradicted by it.\" Specifically, Pigliucci argues that Sesardic misrepresented a paper by Ousley et al. (2009), and neglected to mention that they identified differentiation not just between individuals from different races, but also between individuals from different tribes, local environments, and time periods. This is discussed in a later section.\n",
"Identification of the ancestry of an individual is dependent upon knowledge of the frequency and distribution of phenotypic traits in a population. This does not necessitate the use of a racial classification scheme based on unrelated traits, although the race concept is widely used in medical and legal contexts in the United States. Some studies have reported that races can be identified with a high degree of accuracy using certain methods, such as that developed by Giles and Elliot. However, this method sometimes fails to be replicated in other times and places; for instance, when the method was re-tested to identify Native Americans, the average rate of accuracy dropped from 85% to 33%. Prior information about the individual (e.g. Census data) is also important in allowing the accurate identification of the individual's \"race\".\n",
"Neven Sesardic has argued that such arguments are unsupported by empirical evidence and politically motivated. Arguing that races are not completely discrete biologically is a straw man argument. He argues \"racial recognition is not actually based on a single trait (like skin color) but rather on a number of characteristics that are to a certain extent concordant and that jointly make the classification not only possible but fairly reliable as well\". Forensic anthropologists can classify a person's race with an accuracy close to 100% using only skeletal remains if they take into consideration several characteristics at the same time. A.W.F. Edwards has argued similarly regarding genetic differences in \"\".\n",
"This is similar to the conclusion reached by anthropologist Norman Sauer in a 1992 article on the ability of forensic anthropologists to assign \"race\" to a skeleton, based on craniofacial features and limb morphology. Sauer said, \"the successful assignment of race to a skeletal specimen is not a vindication of the race concept, but rather a prediction that an individual, while alive was assigned to a particular socially constructed 'racial' category. A specimen may display features that point to African ancestry. In this country that person is likely to have been labeled Black regardless of whether or not such a race actually exists in nature\".\n",
"Forensic anthropologists can determine aspects of geographic ancestry (i.e. Asian, African, or European) from skeletal remains with a high degree of accuracy by analyzing skeletal measurements. According to some studies, individual test methods such as mid-facial measurements and femur traits can identify the geographic ancestry and by extension the racial category to which an individual would have been assigned during their lifetime, with over 80% accuracy, and in combination can be even more accurate. However, the skeletons of people who have recent ancestry in different geographical regions can exhibit characteristics of more than one ancestral group and, hence, cannot be identified as belonging to any single ancestral group.\n",
"\"Race\" is still sometimes used within forensic anthropology (when analyzing skeletal remains), biomedical research, and race-based medicine. Brace has criticized this, the practice of forensic anthropologists for using the controversial concept \"race\" out of convention when they in fact should be talking about regional ancestry. He argues that while forensic anthropologists can determine that a skeletal remain comes from a person with ancestors in a specific region of Africa, categorizing that skeletal as being \"black\" is a socially constructed category that is only meaningful in the particular context of the United States, and which is not itself scientifically valid.\n"
] |
the cable companies arguement to "data cap" my monthly internet usage is to prevent congestion of the system during peak hours. can it really be congested?
|
Yes, it's true. Netflix's servers are sending you the video, but it still travels down your ISP's internet connection to get to you. There is a limited amount of bandwidth from your ISP out to the Internet for you and every other customer to share.
What's different with cable TV is that the TV signals come in via satellite and are then distributed to your home over the cables that the cable company has run. There's no bottleneck, because when you and 100,000 other customers are watching the Superbowl, exactly one signal comes into the cable company, which they then send out to 100,000 customers. It's a one-to-many connection, which is very efficient.
But the Internet isn't one-to-many. We think of it that way sometimes because of large sites like this one, but the Internet is really a shitton of private one-to-one connections.
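The sharing problem is easy to put in numbers. All figures here are made up purely for illustration (a hypothetical 1 Gbps uplink, hypothetical stream sizes and peak usage):

```python
# Hypothetical oversubscription sketch; every number is invented for illustration.
UPLINK_MBPS = 1000     # ISP's shared uplink: 1 Gbps
CUSTOMERS = 1000       # subscribers sharing that uplink
STREAM_MBPS = 5        # bandwidth of one HD video stream
PEAK_FRACTION = 0.3    # share of customers streaming at the same time

peak_demand = CUSTOMERS * PEAK_FRACTION * STREAM_MBPS  # Mbps needed at peak
print(peak_demand)                 # 1500.0
print(peak_demand > UPLINK_MBPS)   # True: peak demand exceeds the shared link
```

With one-to-one connections, each stream needs its own slice of the uplink, so the link congests even though no single customer is using very much.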
|
[
"Most \"Data Over Cable Service Interface Specification\" (DOCSIS) cable modems restrict upload and download rates, with customizable limits. These limits are set in configuration files which are downloaded to the modem using the Trivial File Transfer Protocol, when the modem first establishes a connection to the provider's equipment. Some users have attempted to override the bandwidth cap and gain access to the full bandwidth of the system, by uploading their own configuration file to the cable modem - a process called uncapping.\n",
"Unlike BT TV, On Demand's download content will contribute towards the user's broadband data limit. Downward said, \"To us any internet usage is still internet usage – however it's being delivered, if we did [differentiate between Anytime+ data and normal internet use] it could be confusing between what isn't and what is.\" Customers who sign up for on Demand and are on the Sky Broadband Everyday Lite package are warned about how much content (180 minutes) they can consume before they hit their cap and anyone with Anytime+ that does exceed the cap is reminded that the service is contributing to their data usage.\n",
"The network demands of users continues to grow and with it so do the pressures on networks. The way current technologies process information over the network is slow and consumes large amounts of energy. ISPs and engineers argue that these issues with the increased demand on the networks result in some necessary congestion, but the bottlenecks also occur because of the lack of technology to handle such huge data needs using minimal energy. There are attempts being made to increase the speed, amount of data, and reduce power consumption of the networks. For example, optical memory devices could be used in the future to send and receive light signals working much faster and more efficiently than electrical signals. Some researchers see optical memory as needed to reduce the demands on the network routers in data transmission, while others do not. The research will continue to explore possibilities for greater network bandwidth and data transfer. As data consumption needs increase, so will the need for better technology that facilitates the transfer and storage of that data.\n",
"Many cyber cafés have expanded as Local Service Providers (LSPs) as a way to make use of their idle (out of business hours) bandwidth. Because the root problem of scarce bandwidth remains, LSP subscribers continue to suffer from slow connections and inadequate bandwidth (96-128 kbit/s on average). A general complaint of customers and internet users is that such subscriptions are good for nothing except for surfing rich-text and images over the web. The younger internet users in the urban areas have started to familiarize themselves with other more data demanding internet applications and usage. But streaming applications fail to work over low bandwidth. Games, voice, video-conferencing and the like also suffer from latency issues. Further, these LSPs are known to forcefully cache web resources (transparent proxies) and to aggressively block traffic related to the following applications in order to save bandwidth: Windows update, TeamViewer and similar remote assistance applications, Torrent trackers and other P2P ports/patterns, voice/video applications which mostly make use of P2P architecture, online gaming and just about anything else except WWW. Some LSPs generally block all ports except HTTP/HTTPS. Bandwidth/latency benchmarking sites including SpeedTest.net are blocked to stop customers from complaining about their share of bandwidth.\n",
"Network congestion or Internet bottleneck generally occurs and is felt by users in homes and businesses. This is what is known as the last mile of transmission, which is when there is not enough bandwidth available for individual users to access the content they want. Everyone is attempting to use the bandwidth at the same time creating an Internet traffic jam.\n",
"Mobile Internet plans and add-ons contain limits on usage. Lower cost plans have a hard limit for data usage; customers will be billed for excess usage. Higher cost plans incorporate a soft limit; usage exceeding this limit may result in the customer's device being throttled to allow other customers fair access to the network. Throttling speeds are typically 256 kbit/s for downloads and 128 kbit/s for uploads. In what Freedom defines as \"extreme cases\", speeds will be slower than dial-up Internet access at 32 kbit/s for downloads and 16 kbit/s for uploads. When throttling does occur, Freedom will inform customers of the reduced speeds.\n",
"Despite the fact that Cyberia offers excellent Internet services, there was a misconception with the Fair Usage Policy which was applied to DSL users with free night traffic. This raised a huge feud among customers since they had now a limit on the amount of download they can do at night. However, this policy proved to be effective on the long term since this was made to set restrictions and to ensure that users who engage in substantial continuous download activity will not impair the performance of the network, thus decreasing the speed of the broadband service available to all other users.\n"
] |
bandwidth vs ping vs latency
|
Latency is the time, typically measured in milliseconds, between sending a request to another device and receiving its response.
Ping is the most common tool for measuring latency. It sends a small packet out and measures how long it takes to get the reply.
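To make the ping idea concrete, here's a minimal sketch in Python that measures a round-trip time over a localhost UDP socket. This is only an illustration, not the real `ping` utility, which uses ICMP echo packets and usually needs raw-socket privileges; the payload, timeout, and echo server here are example choices.

```python
# Ping-style round-trip timer over localhost UDP (illustrative only:
# real `ping` uses ICMP echo, which requires raw sockets/privileges).
import socket
import threading
import time

def echo_server(sock):
    data, addr = sock.recvfrom(64)
    sock.sendto(data, addr)          # echo the probe straight back

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # bind to any free port
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)               # don't hang forever if the echo is lost

start = time.perf_counter()
client.sendto(b"ping", server.getsockname())
reply, _ = client.recvfrom(64)       # block until the echo comes back
rtt_ms = (time.perf_counter() - start) * 1000

assert reply == b"ping"
print(f"round-trip time: {rtt_ms:.3f} ms")
```

On a loopback interface the reported time is a tiny fraction of a millisecond; across the real Internet the same measurement would include every router hop along the way.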
Bandwidth is how much data you can send or receive per unit of time. Think of it as the difference between a two-lane road through a residential neighborhood and a 16-lane super-highway - the width and speed limits of the two roads allow different amounts of traffic to pass in the same amount of time.
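The road analogy can be put into numbers with a toy model (illustrative figures, ignoring protocol overhead and congestion): delivering a payload takes roughly one latency period plus the payload size divided by the bandwidth. That's why a low-latency link wins for small requests while a high-bandwidth link wins for big downloads.

```python
# Toy model of why latency and bandwidth matter differently.
# All numbers are illustrative; real links add protocol overhead, etc.
def transfer_time(size_bits, bandwidth_bps, latency_s):
    """Time to deliver a payload: wait one latency, then stream the bits."""
    return latency_s + size_bits / bandwidth_bps

# A small request (1 kB) vs a large download (1 GB), in bits:
small, large = 8_000, 8_000_000_000

# Link A: low latency, modest bandwidth. Link B: high latency, huge bandwidth.
a = dict(bandwidth_bps=50e6, latency_s=0.010)   # 50 Mbit/s, 10 ms
b = dict(bandwidth_bps=1e9,  latency_s=0.200)   # 1 Gbit/s, 200 ms

# For the small request, latency dominates: link A wins.
assert transfer_time(small, **a) < transfer_time(small, **b)
# For the big download, bandwidth dominates: link B wins.
assert transfer_time(large, **a) > transfer_time(large, **b)
```

This is also why a satellite link can feel sluggish for web browsing (high latency) even when its raw bandwidth is large.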
|
[
"Latency (commonly referred to as \"ping time\") is the delay between requesting data and the receipt of a response, or in the case of one-way communication, between the actual moment of a signal's broadcast and the time it is received at its destination.\n",
"\"Ping\" refers to the network latency between a player's client and the game server as measured with the ping utility or equivalent. Ping is reported quantitatively as an average time in milliseconds (ms). The lower one's ping is, the lower the latency is and the less lag the player will experience. \"High ping\" and \"low ping\" are commonly used terms in online gaming, where \"high ping\" refers to a ping that causes a severe amount of lag; while any level of ping may cause lag, severe lag is usually caused by a ping of over 100 ms. This usage is a gaming cultural colloquialism and is not commonly found or used in professional computer networking circles. In games where timing is key, such as first-person shooter and real-time strategy games, a low ping is always desirable, as a low ping means smoother gameplay by allowing faster updates of game data between the players' clients and game server.\n",
"Gamers refer to latency using the term \"ping\", after a utility which measures round-trip network communication delays (by the use of ICMP packets). A player on a DSL connection with a 50-ms ping can react faster than a modem user with a 350-ms average latency. Other problems include packet loss and choke, which can prevent a player from \"registering\" their actions with a server. In first-person shooters, this problem appears when bullets hit the enemy without damage. The player's connection is not the only factor; some servers are slower than others.\n",
"Some factors that might affect ping include: communication protocol used, Internet throughput (connection speed), the quality of a user's Internet service provider and the configuration of firewalls. Ping is also affected by geographical location. For instance, if someone is in India, playing on a server located in the United States, the distance between the two is greater than it would be for players located within the US, and therefore it takes longer for data to be transmitted. However, the amount of packet-switching and network hardware in between the two computers is often more significant. For instance, wireless network interface cards must modulate digital signals into radio signals, which is often more costly than the time it takes an electrical signal to traverse a typical span of cable. As such, lower ping can result in faster internet download and upload rates.\n",
"The round-trip time or ping time is the time from the start of the transmission from the sending node until a response (for example an ACK packet or ping ICMP response) is received at the same node. It is affected by packet delivery time as well as the data processing delay, which depends on the load on the responding node. If the sent data packet as well as the response packet have the same length, the roundtrip time can be expressed as: \n",
"PingER uses the data to determine latency (round-trip_time), jitter (variability of round-trip_time), and loss (percentage of packets that never return). The results of the PingER Project, including source code, are made available to the public at no cost. This collection of data shows long term world-wide Internet performance trends, covering over 750 sites in over 165 countries. Researchers at the National University of Sciences and Technology, Pakistan, have been dealing with increasingly large amounts of PingER data by using a relational database. From a vantage point between Europe and Africa, researchers at the International Centre for Theoretical Physics (ICTP) in Italy used PingER to reveal the slow progress of improving Africa's connections to the rest of the world.\n",
"Because period is the inverse of frequency, lower tone frequencies can take longer to decode (depends on the decoder design). Receivers in a system using 67.0 Hz can take noticeably longer to decode than ones using 203.5 Hz, and they can take longer than one decoding 250.3 Hz. In some repeater systems, the time lag can be significant. The lower tone may cause one or two syllables to be clipped before the receiver audio is unmuted (is heard). This is because receivers are decoding in a chain. The repeater receiver must first sense the carrier signal on the input, then decode the CTCSS tone. When that occurs, the system transmitter turns on, encoding the CTCSS tone on its carrier signal (the output frequency). All radios in the system start decoding after they sense a carrier signal then recognize the tone on the carrier as valid. Any distortion on the encoded tone will also affect the decoding time.\n"
] |
Why does nature sometimes prefer right or left? Example: Lorentz force
|
I see where you are coming from here:
If you have a vertical wire, with a current going up, then the magnetic field wraps around the wire according to a right-hand rule - counterclockwise when viewed from above. If I look at this wire in a mirror, the magnetic field is going in the other direction - clockwise when viewed from above.
So it seems like there is a definite handedness and violation of parity.
However, this really is just a matter of convention. Consider a positively charged particle moving upward alongside the wire. Calculate the Lorentz force on the particle, and you will see that it is deflected towards the wire. The same thing happens in your mirror image, so the physics is the same in the mirror as in reality.
Everything that produces a magnetic field does so with a cross-product (right-handed) and everything that reacts to one also does so with a cross-product (right-handed). The implicit handedness cancels out. We could redefine the cross-product in a left-handed manner, and so long as everything was consistent everything will cancel out.
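You can check numerically that the two right-hand rules cancel. In this sketch (arbitrary example values for q, v, and B), the velocity reflects like an ordinary vector, while the magnetic field, being a pseudovector, picks up an extra sign under reflection; the resulting force then comes out as the exact mirror image of the original force.

```python
# Mirror symmetry of the Lorentz force F = q v x B (arbitrary example values).
def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def mirror(v):
    """Reflect a vector in the y-z plane (x -> -x)."""
    return (-v[0], v[1], v[2])

q = 1.0                  # charge (arbitrary units)
v = (0.3, 0.0, 1.0)      # velocity: mostly "up", drifting in x
B = (0.0, 0.5, 0.2)      # magnetic field (arbitrary)

F = tuple(q*c for c in cross(v, B))   # Lorentz force in the original world

# In the mirror: velocity is a true (polar) vector, so it reflects normally,
# but B is a pseudovector -- it picks up an extra overall sign.
v_m = mirror(v)
B_m = tuple(-c for c in mirror(B))
F_m = tuple(q*c for c in cross(v_m, B_m))

# The mirrored force is exactly the mirror image of the original force:
# the two right-hand rules cancel, so electromagnetism is parity-symmetric.
assert F_m == mirror(F)
```

Had we used a left-handed cross product everywhere instead, both forces would flip together and the physical prediction (deflection toward or away from the wire) would be unchanged.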
-------------------
Alternatively, if something were pushed parallel to a magnetic field, then we would see different physics in the mirror. The weak interaction does in fact do this.
If I spin align Cobalt-60 atoms with a magnetic field, the beta-decay electrons will preferentially be emitted in the opposite direction to the magnetic field (right-handed convention). So here our convention does matter and one would see physics behave differently in the mirror.
If I cleverly set up a lab with spin-polarized cobalt-60 decay happening with backwards text everywhere, and then showed you a video of a mirror image of the lab, you would be able to tell it was a mirror image purely by the physics of what happened. You can't do that with electromagnetic phenomena.
|
[
"Left-right asymmetry (LR asymmetry) refers to differences in structure (symmetry breaking) across the mediolateral (left and right) plane in animals. This plane is defined with respect to the anteroposterior and dorsoventral axes and is perpendicular to both. Because the left-right plane is not strictly an axis (as it is not established through a morphogen gradient), to create asymmetry, the left and right sides need to be patterned separately.\n",
"The figure to the right represents the three angular distances. The left one represents the angle at the observed pointing between the zenith direction and the solar direction. This is thus heavily dependent on the changing solar direction as the sun moves across the sky. The middle one represents the angle at the sun between the zenith direction and the pointing. Again this is heavily dependent on the changing pointing. This is symmetrical between the North and South hemispheres. The right one represents the angle at the zenith between the solar direction and the pointing. It thus rotates around the celestial sphere.\n",
"The term on the left is the rate of change of the charge density at a point. The term on the right is the divergence of the current density at the same point. The equation equates these two factors, which says that the only way for the charge density at a point to change is for a current of charge to flow into or out of the point. This statement is equivalent to a conservation of four-current.\n",
"The left side, formula_19, is the change in energy (proportional to mass). Although the first term does not have an immediately obvious physical interpretation, the second and third terms on the right side represent changes in energy due to rotation and electromagnetism. Analogously, the first law of thermodynamics is a statement of energy conservation, which contains on its right side the term formula_20.\n",
"The north–south component of the analemma results from the change in the Sun's declination (extreme changes in height above the horizon during the summer and winter) due to the tilt of Earth's axis of rotation. The east–west component results from the nonuniform rate of change of the Sun's right ascension, governed by combined effects of Earth's axial tilt and orbital eccentricity (earth's orbital speed changing along its orbit around the sun).\n",
"When the motion at the Equator is to the east, any deviation toward the north is brought back toward the Equator because the Coriolis force acts to the right of the direction of motion in the Northern Hemisphere, and any deviation to the south is brought back toward the Equator because the Coriolis force acts to the left of the direction of motion in the Southern Hemisphere. Note that for motion toward the west, the Coriolis force would not restore a northward or southward deviation back toward the Equator; thus, equatorial Kelvin waves are only possible for eastward motion (as noted above). Both atmospheric and oceanic equatorial Kelvin waves play an important role in the dynamics of El Nino-Southern Oscillation, by transmitting changes in conditions in the Western Pacific to the Eastern Pacific.\n",
"When viewed from a stationary point in space directly above the north pole, any land feature in the Northern Hemisphere turns anticlockwise—and, fixing our gaze on that location, any other location in that hemisphere rotates around it the same way. The traced ground path of a freely moving body in ballistic flight traveling from one point to another therefore bends the opposite way, clockwise, which is conventionally labeled as \"right,\" where it will be if the direction of motion is considered \"ahead,\" and \"down\" is defined naturally.\n"
] |
What was the Nazi opinion on the Chinese? What was the Japanese opinion on the Jews?
|
The Japanese actually have a unique history of opinion towards the Jews. While fighting the Russians in (iirc) the Russo-Japanese War at the turn of the century, they stumbled across a book called *The Protocols of the Elders of Zion*. This anti-semitic literature was filled with the usual claims: the Jews control everything, a massive conspiracy, lesser men, and the like. The Japanese high command reasoned: why would anyone want to make an enemy of a small, semi-integrated religious group that had miraculously survived two millennia of persecution, supposedly controlled everything, and held a large portion of European wealth in its hands?
Fast forward to the Second World War. The Japanese harboured several thousand Jews who had fled the Germans, and despite being allies, they refused time and time again to hand them over to Hitler. They even started building a facility to house 600,000 Jews escaping the Nazis. The thinking was: hey, if these people control everything, we might as well get on their good side.
Don't know anything about the Nazi opinion on the Chinese, though.
|
[
"At the time of World War II, both Nazi Germany and Japanese Empire started its long persecution against ethnic Chinese in each countries, as well as Japanese territorial control in mainland China. Anti-Chinese massacres like Nanking Massacre that would have been the remaining key reason for the issues remaining between China and Japan today.\n",
"The Nazis, who came to power in 1933, did not classify as racially inferior the Japanese, but because so much of the Chinese community had ties to leftist movements, they fell under increased official scrutiny regardless, and many left the country, either heading to Spain to fight in the Civil War that was raging there, or returning to China. As late as 1935, the Overseas Chinese Affairs Commission's statistics showed 1,800 Chinese still living in Germany; more than one thousand of these were students in Berlin, while another few hundred were seafarers based in Hamburg. However, this number shrank to 1,138 by 1939. In 1942, the 323 who still lived in Berlin were all arrested and sent to the Langer Morgen work camp. By the end of World War II, every Chinese restaurant in Hamburg had closed.\n",
"Due to Nazi Germany's recognition of Han Chinese and Japanese as \"Aryans of the East\" Adolf Hitler had allowed Han Chinese and Japanese soldiers to study in Nazi German military academies and serve in the Nazi German Wehrmacht as part of their combat training. Since 1926, Germany had supported the Republic of China militarily and industrially. Germany had also sent advisers such as Alexander von Falkenhausen and Hans von Seeckt to assist the Chinese, most notably in the Chinese Civil War and China's anti-communist campaigns. Max Bauer was sent to China and served as one of Chiang Kai-shek's advisers. Around this time, Hsiang-hsi Kung, the Republic of China Minister of Finance, visited Nazi Germany and was warmly welcomed by Adolf Hitler on June 13, 1937. During this meeting, \n",
"However, Japan refused to adopt an official policy against the Jews. On 31 December 1940, Japanese foreign minister Yōsuke Matsuoka told a group of Jewish businessmen: \"Nowhere have I promised that we would carry out Hitler's anti-Semitic policies in Japan. This is not simply my personal opinion, it is the opinion of Japan.\" Nonetheless, until 1945 the Holocaust was systematically concealed by the leadership in Tokyo.\n",
"The Chinese and Japanese were still subject to Germany's racial laws, however, which – with the exception of the 1935 Nuremberg Laws, which specifically mentioned Jews – generally applied to all \"non-Aryans\" although since Japanese and Chinese were given \"Honorary Aryan\" status these racial laws were applied to them in a more lenient manner as compared to other \"non-Aryans\" who were not granted \"Honorary Aryan\" status by Adolf Hitler. Hitler's government began enacting the laws after taking power in 1933, and the Japanese government initially protested several racial incidents involving Japanese or Japanese-Germans that year which were then resolved by the Nazi high command by treating their Japanese allies leniently in these disputes. Especially after the collapse of Sino-German cooperation and Chinese declaration of war on Germany, Chinese nationals faced prosecution in Germany. Influential Nazi anti-Semite Johann von Leers favored excluding Japanese from the laws due both to the alleged Japanese-Aryan racial link and to improve diplomatic relations with Japan. The Foreign Ministry agreed with von Leers and sought several times between 1934 and 1937 to change the laws, but other government agencies, including the Racial Policy Office, opposed the change.\n",
"The Japanese were subject to Germany's racial laws, but in a more lenient manner compared to \"non-Aryans\" who had not been granted \"Honorary Aryan\" status. Hitler's government began enacting the laws after taking power in 1933, and the Japanese government initially protested several racial incidents involving Japanese or Japanese-Germans that year which were then resolved by the Nazi high command by treating their Japanese allies leniently in these disputes. Especially after the collapse of Sino-German cooperation and Chinese declaration of war on Germany, Chinese nationals faced prosecution in Germany. Influential Nazi anti-Semite Johann von Leers favored excluding Japanese from the laws due both to the alleged Japanese-Aryan racial link and to improve diplomatic relations with Japan. The Foreign Ministry agreed with von Leers and sought several times between 1934 and 1937 to change the laws, but other government agencies, including the Racial Policy Office, opposed the change.\n",
"As World War II intensified, the Nazis stepped up pressure on Japan to hand over the Shanghai Jews. While the Nazis regarded their Japanese allies as \"Honorary Aryans\", they were determined that the Final Solution to the Jewish Question also be applied to the Jews in Shanghai. Warren Kozak describes the episode when the Japanese military governor of the city sent for the Jewish community leaders. The delegation included Amshinover rabbi Shimon Sholom Kalish. The Japanese governor was curious and asked \"Why do the Germans hate you so much?\"\n"
] |
why the winter war happened
|
Relations between the Soviet Union and Finland had been strained since Finland gained its independence from Russia around the end of WWI.
The Soviets believed Finland was weak and that they could easily seize a decent chunk of territory. Most of the rest of Europe was preoccupied with the war Germany had just started, so the Soviets calculated that nobody else would really do much to help Finland if they invaded.
|
[
"The timeline of the Winter War is a chronology of events leading up to, culminating in, and resulting from the Winter War. The war began when the Soviet Union attacked Finland on 30 November 1939 and it ended 13 March 1940.\n",
"The Winter War was a military conflict between the Soviet Union (USSR) and Finland. It began with a Soviet invasion of Finland on 30 November 1939, three months after the outbreak of World War II, and ended three and a half months later with the Moscow Peace Treaty on 13 March 1940.\n",
"The Winter War was a military conflict between the Soviet Union (USSR) and Finland. It began with a Soviet invasion of Finland on 30 November 1939, three months after the outbreak of World War II, and ended three and a half months later with the Moscow Peace Treaty on 13 March 1940. The League of Nations deemed the attack illegal and expelled the Soviet Union from the organisation.\n",
"The Winter War (, , ) was a war between the Soviet Union and Finland. It began with a Soviet offensive on 30 November 1939—three months after the start of World War II and the Soviet invasion of Poland, and ended on 13 March 1940 with the Moscow Peace Treaty. The League of Nations deemed the attack illegal and expelled the Soviet Union on 14 December 1939.\n",
"The Winter War was one of the first modern wars of the Red Army before Nazi Germany's Barbarossa in June 1941. In the Soviet Union, the Winter War was called the \"Soviet-Finnish War\", and later the term \"Border Skirmish\" was also used. In different periods the Soviet literature gave different answers for basic questions of the motive of the war, who started the war, whether it could have been avoided, and the result. The most important aspect was the motive.\n",
"The Winter War was fought in the four months following the Soviet Union's invasion of Finland on November 30, 1939. This took place three months after the German invasion of Poland that triggered the start of World War II in Europe. Sweden did not become actively involved in the conflict, but did indirectly support Finland. The Swedish Volunteer Corps provided 9,640 officers and men that saw action in some of the bloodiest parts of the war such as the Battle of Tali-Ihantala. The Swedish Voluntary Air Force also provided 25 aircraft that destroyed twelve Soviet aircraft while only losing six planes with only two to actual enemy action and four to accidents. Sweden also provided a big portion of the weapons and equipment used by the Finns throughout the war. \n",
"Russian Winter, General Winter, General Frost, or General Snow refers to the harsh winter climate of Russia as a contributing factor to the military failures of several invasions of Russia. A contributing factor that impairs military maneuvering is \"General Mud\" (\"\"rasputitsa\"\"), a phenomenon that occurs with autumnal rains and spring thaws in Russia, whereby transport over unimproved roads is made difficult by muddy conditions.\n"
] |
How does a vaccine with inactivated virus work?
|
Imagine you're playing capture the flag (CTF) and you're on the Red team, but you don't know what the enemy team looks like. It turns out they're Blue. You can learn this when they attack you, but that's bad, because they have guns and will kill your dudes while you gather this information and move to repel their attack.
A vaccine is like an external force (who has an interest in seeing the Red team win) capturing some members of the Blue team, secretly giving them guns that don't work, then dropping them directly into your base.
The Blue team comes in, you learn about them and destroy them, but they can't hurt your team in the process.
|
[
"For the inactivated vaccines, the virus is grown by injecting it, along with some antibiotics, into fertilized chicken eggs. About one to two eggs are needed to make each dose of vaccine. The virus replicates within the allantois of the embryo, which is the equivalent of the placenta in mammals. The fluid in this structure is removed and the virus purified from this fluid by methods such as filtration or centrifugation. The purified viruses are then inactivated (\"killed\") with a small amount of a disinfectant. The inactivated virus is treated with detergent to break up the virus into particles, and the broken capsule segments and released proteins are concentrated by centrifugation. The final preparation is suspended in sterile phosphate buffered saline ready for injection. This vaccine mainly contains the killed virus but might also contain tiny amounts of egg protein and the antibiotics, disinfectant and detergent used in the manufacturing process. In multi-dose versions of the vaccine, the preservative thimerosal is added to prevent growth of bacteria. In some versions of the vaccine used in Europe and Canada, such as \"Arepanrix\" and \"Fluad\", an adjuvant is also added, this contains a fish oil called squalene, vitamin E and an emulsifier called polysorbate 80.\n",
"Inactivated vaccines are further classified depending on the method used to inactivate the virus. \"Whole virus vaccines\" use the entire virus particle, fully destroyed using heat, chemicals, or radiation. \"Split virus vaccines\" are produced by using a detergent to disrupt the virus. \"Subunit vaccines\" are produced by purifying out the antigens that best stimulate the immune system to mount a response to the virus, while removing other components necessary for the virus to replicate or survive or that can cause adverse reactions.\n",
"An inactivated vaccine (or killed vaccine) is a vaccine consisting of virus particles, bacteria, or other pathogens that have been grown in culture and then lose disease producing capacity. In contrast, live vaccines use pathogens that are still alive (but are almost always attenuated, that is, weakened). Pathogens for inactivated vaccines are grown under controlled conditions and are killed as a means to reduce infectivity (virulence) and thus prevent infection from the vaccine. The virus is killed using a method such as heat or formaldehyde. \n",
"In an attenuated vaccine, live virus particles with very low virulence are administered. They will reproduce, but very slowly. Since they do reproduce and continue to present antigen beyond the initial vaccination, boosters are required less often. These vaccines are produced by growing the virus in tissue cultures that will select for less virulent strains, or by mutagenesis or targeted deletions in genes required for virulence. There is a small risk of reversion to virulence; this risk is smaller in vaccines with deletions. Attenuated vaccines also cannot be used by immunocompromised individuals.\n",
"A vaccine administration may be oral, by injection (intramuscular, intradermal, subcutaneous), by puncture, transdermal or intranasal. Several recent clinical trials have aimed to deliver the vaccines via mucosal surfaces to be up-taken by the common mucosal immunity system, thus avoiding the need for injections.\n",
"The live attenuated vaccine is based on a flu strain that does not cause disease, that replicates well at relatively cold temperatures (about 25°C, for incubation purposes), and replicates poorly at body temperature (which minimizes risk to humans). Genes that code for surface proteins (targeted antigens) are combined with this host using genetic reassortment from strains that are projected to be circulating widely in the coming months. The resulting viruses are then incubated in chicken eggs and chick kidney cells. To make the refrigerated version, the virus is purified in centrifuges through a sucrose gradient, then packaged with sucrose, phosphate, glutamate, arginine, and gelatin made from pigs that has been hydrolyzed with acid.\n",
"Vaccine shedding is a term used for the rare release of virus following administration of a live-virus vaccine. Shedding is a popular anti-vaccination trope, but, with the exception of the oral polio vaccine (OPV) in the 1950s, there have only been a few documented cases of vaccine-strain virus infecting contacts of a vaccinated person.\n"
] |
Western Intensification: Why are currents much stronger on the western than eastern side of ocean basins?
|
Ok, this is both a fundamental tenant of oceanography but also very difficult to explain in a short space. The classic paper is Henry Stommel's 1948 [*The Westward Intensification of Wind-Driven Ocean Currents* (PDF)](_URL_0_). In this paper Stommel works through a simple mathematical model of wind-driven circulation and demonstrates that because of boundary conditions and the rotation of the earth, the interior wind-driven flow can only be returned on the western margin of the basin and not eastern. The abstract attributes this to the "variation of the Coriolis parameter with latitude," - what we now call the beta-effect. The full argument requires an analysis of the vorticity conservation and demonstration that only a western-boundary current provides a consistent solution. I have also heard a hand-waving type of argument that the boundary currents have to exist in the west because Rossby waves preferentially propagate energy westward - that's true too. Sorry that this response is kinda jargony but it's a challenging thing to describe at the explain-like-im-5 level.
tldr; because we live on a rotating sphere - sorry if that's an unsatisfying explanation.
|
[
"Due to persistent winds from west to east on the poleward sides of the subtropical ridges located in the Atlantic and Pacific oceans, ocean currents are driven in a similar manner in both hemispheres. The currents in the Northern Hemisphere are weaker than those in the Southern Hemisphere due to the differences in strength between the westerlies of each hemisphere. The process of western intensification causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary of an ocean. These western ocean currents transport warm, tropical water polewards toward the polar regions. Ships crossing both oceans have taken advantage of the ocean currents for centuries.\n",
"Together with the trade winds, the westerlies enabled a round-trip trade route for sailing ships crossing the Atlantic and Pacific Oceans, as the westerlies lead to the development of strong ocean currents on the western sides of oceans in both hemispheres through the process of western intensification. These western ocean currents transport warm, sub tropical water polewards toward the polar regions. The westerlies can be particularly strong, especially in the southern hemisphere, where there is less land in the middle latitudes to cause the flow pattern to amplify, which slows the winds down. The strongest westerly winds in the middle latitudes are within a band known as the Roaring Forties, between 40 and 50 degrees latitude south of the equator. The Westerlies play an important role in carrying the warm, equatorial waters and winds to the western coasts of continents, especially in the southern hemisphere because of its vast oceanic expanse.\n",
"Western boundary currents are warm, deep, narrow, and fast flowing currents that form on the west side of ocean basins due to \"western intensification\". They carry warm water from the tropics poleward. Examples include the Gulf Stream, the Agulhas Current, and the Kuroshio.\n",
"Together with the trade winds, the westerlies enabled a round-trip trade route for sailing ships crossing the Atlantic and Pacific oceans, as the westerlies lead to the development of strong ocean currents in both hemispheres. The westerlies can be particularly strong, especially in the southern hemisphere, where there is less land in the middle latitudes to cause the flow pattern to amplify, which slows the winds down. The strongest westerly winds in the middle latitudes are called the Roaring Forties, between 40 and 50 degrees south latitude, within the Southern Hemisphere. The westerlies play an important role in carrying the warm, equatorial waters and winds to the western coasts of continents, especially in the southern hemisphere because of its vast oceanic expanse.\n",
"Western intensification is the intensification of the western arm of an oceanic current, particularly a large gyre in an ocean basin. The trade winds blow westward in the tropics, and the westerlies blow eastward at mid-latitudes. This wind pattern applies a stress to the subtropical ocean surface with negative curl in the northern hemisphere and a positive curl in the southern hemisphere. The resulting Sverdrup transport is equatorward in both cases. Because of conservation of mass and potential vorticity conservation, that transport is balanced by a narrow, intense poleward current, which flows along the western boundary of the ocean basin, allowing the vorticity introduced by coastal friction to balance the vorticity input of the wind. Western intensification also occurs in the polar gyres, where the sign of the wind stress curl and the direction of the resulting currents are reversed. It is because of western intensification that the currents on the western boundary of a basin (such as the Gulf Stream, a current on the western side of the Atlantic Ocean) are stronger than those on the eastern boundary (such as the California Current, on the eastern side of the Pacific Ocean). Western intensification was first explained by the American oceanographer Henry Stommel.\n",
"The westerlies are strongest in the winter hemisphere and times when the pressure is lower over the poles, while they are weakest in the summer hemisphere and when pressures are higher over the poles. The westerlies are particularly strong, especially in the Southern Hemisphere, in areas where land is absent, because land amplifies the flow pattern, making the current more north-south oriented, slowing the westerlies. The strongest westerly winds in the middle latitudes can come in the roaring forties, between 40 and 50 degrees latitude. The westerlies play an important role in carrying the warm, equatorial waters and winds to the western coasts of continents, especially in the southern hemisphere because of its vast oceanic expanse.\n",
"The trade winds blow westward in the tropics, and the westerlies blow eastward at mid-latitudes. This wind pattern applies a stress to the subtropical ocean surface with negative curl across the north Atlantic Ocean. The resulting Sverdrup transport is equatorward. Because of conservation of potential vorticity caused by the poleward-moving winds on the subtropical ridge's western periphery and the increased relative vorticity of northward moving water, transport is balanced by a narrow, accelerating poleward current, which flows along the western boundary of the ocean basin, outweighing the effects of friction with the western boundary current known as the Labrador current. The conservation of potential vorticity also causes bends along the Gulf Stream, which occasionally break off due to a shift in the Gulf Stream's position, forming separate warm and cold eddies. This overall process, known as western intensification, causes currents on the western boundary of an ocean basin, such as the Gulf Stream, to be stronger than those on the eastern boundary.\n"
] |
Does anybody know if there were any drugs developed or important medical discoveries made within the Soviet Union?
|
[Phage Therapy](_URL_1_) is what immediately springs to mind. While not technically "invented" in the USSR (British and French scientists independently discovered bacteriophages in the early 20th century), the Soviet Union was where the technique was refined, expanded, and put into broad use. Georgia, in particular, is and was the center for all this, and Georgia is currently the only country where phage therapy is a standard-of-care treatment. This can be attributed to the microbiologist George Eliava, who brought the technique over from his work at the Pasteur Institute, and physicians like Alexander Tsulukidze and Charles Mikeladze in Tbilisi, who ran some important early clinical trials showing phage therapy to be safe and effective.
There were Western scientists, particularly in France, working on phage therapy in the early 20th century as well. With the advent of sulfa drugs in the late 1930s, and then with the antibiotic revolution kicked off by penicillin in the 1940s, phage therapy was mostly abandoned in the West. The Soviets, on the other hand, found their access to Western-made antibiotics cut off at the end of WWII, and then had delays in regaining access, spotty supply chains & distribution, and problems starting their own domestic production. Homegrown phages helped fill in this "antibiotic gap" (along with some very shady propaganda about herbal treatments).
It wasn't until the past couple of decades -- spurred by concerns over drug-resistant pathogens -- that Western physicians and scientists started giving phage therapy a second look. The appropriately named [George Eliava Institute](_URL_3_) and the [Phage Therapy Center](_URL_2_) in Tbilisi are still key centers for research and treatment using phages.
If you want to know more, here are a couple of papers on the subject (both open-access, I think):
- [Phage Treatment of Human Infections](_URL_4_)
- [Bacteriophage Therapy](_URL_5_)
There is also a pretty good book on the subject written for a popular audience, Kuchment's [The Forgotten Cure: The Past and Future of Phage Therapy](_URL_0_).
|
[
"The drug is almost unknown in the western world and is neither used in medicine or studied scientifically to any great extent outside Russia and other countries in the former Soviet Union. It has however been added to the list of drugs under international control and is a scheduled substance in most countries, despite its multiple therapeutic applications and reported lack of significant abuse potential.\n",
"Throughout the post-communist period, the pharmaceutical industry has mainly been an obstacle to reform. Aiming to explore the vast market of the former USSR, they used the situation to make professionals and services totally dependent on their financial sustenance, turned the major attention to the availability of medicines rather than that of psycho-social rehabilitation services, and stimulated corruption within the mental health sector very much.\n",
"The first Anti-Drugs Independent Russian Agency was born on 24 September 2002 under the name \"The State Committee for Combat the Illicit Trafficking in Narcotic Drugs and Psychotropic Substances under the Ministry of Internal Affairs of the Russian Federation\" (UNON MVD).\n",
"Legislation against drugs first appeared in post-revolutionary Russia, in Article 104-d of the 1922 Penal Code of the RSFSR, criminalising drug production, trafficking, and possession with intent to traffic. The 1924 Soviet Constitution expanded this legislation to cover the whole Soviet Union. The 1926 Penal Code of the RSFSR suggested imprisonment or corrective labour for between one and three years as punishment for these offences, depending on the scale of the offence committed. It is noteworthy that drug possession without intention to traffic and the personal use of drugs warranted no penalties at this time.\n",
"Many natural compounds have led to the discovery of drugs used to treat human disease. Out of the 22,500 biologically active compounds that have been extracted from microbes, 45% are from Actinobacteria. In 1956, \"Streptomyces lavendulae\" was found to produce an antibiotic called Mitomycin C, which has been studied for its cytotoxic effects on cancer cells.\n",
"Dynamic evolution of drug discovery started in early 1960s after hepatitis A and B types were recognized. Numerous medications have been tested in hopes of positive results. Interferon was one of the first that demonstrated effectiveness. Subsequently three different variations of interferon alfa, beta and gamma were identified.\n",
"Due to the secrecy of the Soviet Union's government, very little information was available about the direction and progress of the Soviet chemical weapons until relatively recently. After the fall of the Soviet Union, Russian chemist Vil Mirzayanov published articles revealing illegal chemical weapons experimentation in Russia.\n"
] |
When did Europe begin its shift away from religion? Why?
|
During the Middle Ages, religion played a hugely important role in life, since the Church was one of the few institutions that spread across the variety of feudal boundaries in Europe. As Europe transitioned out of feudalism and towards states (led by monarchs), the Church in Rome lost power, but religion remained a tool for leaders to use. Indeed, the transition out of feudalism and into the modern era saw very bloody wars between Catholic and Protestant monarchs and lords. However, by the 18th and 19th centuries, states had moved on to other ways of motivating their people, such as nationalism and ethnic identity. Religion became less political - we don't analyze the Seven Years' War, Napoleonic Wars, etc. with the same religious focus as the Thirty Years' War, for example - but it still played a huge role in everyday life. In the late 19th and 20th centuries, science began to present an alternative to religion, and as education spread, religious superstitions became less important. Several people also claim that the World Wars disillusioned people from God, but I think that Europe would have become less religious with or without them. It is worth noting that "Europe" is a very broad term, and certain parts of Europe had very different societies from others. Religion played a huge political role in Ireland through the 20th century, for example, or the former Yugoslavia, or various other (mostly Eastern) European states.
|
[
"The modern age brought technological and organizational changes to Europe while the Islamic region continued the patterns of earlier centuries. The European powers, and especially Britain and France, globalized economically and colonized much of the region.\n",
"Following the religious wars of the 16th to 17th centuries, the Age of Enlightenment of the 18th century paved the way for a detachment of society and politics from religious questions. Inspired by the American Revolution, the French Revolution brought the idea of secularization and a laicist state granting freedom of religion to Europe. After the turmoils of the Napoleonic Wars, this development caught hold in other parts of Europe, utilizing the German mediatization and the separation of church and state in various European constitutions drawn up after the revolutions of 1848.\n",
"In Europe there has been a general move away from religious observance and belief in Christian teachings and a move towards secularism. The \"secularization of society\", attributed to the time of the Enlightenment and its following years, is largely responsible for the spread of secularism. For example, the Gallup International Millennium Survey showed that only about one sixth of Europeans attend regular religious services, less than half gave God \"high importance\", and only about 40% believe in a \"personal God\". Nevertheless, the large majority considered that they \"belong\" to a religious denomination. Numbers show that the \"de-Christianization\" of Europe has slowly begun to swing in the opposite direction. Renewal in certain quarters of the Anglican church, as well as in pockets of Protestantism on the continent attest to this initial reversal of the secularization of Europe, the continent in which Christianity originally took its strongest roots and world expansion.\n",
"Many major events caused Europe to change around the start of the 16th century, starting with the Fall of Constantinople in 1453, the fall of Muslim Spain and the discovery of the Americas in 1492, and Martin Luther's Protestant Reformation in 1517. In England the modern period is often dated to the start of the Tudor period with the victory of Henry VII over Richard III at the Battle of Bosworth in 1485. Early modern European history is usually seen to span from the start of the 15th century, through the Age of Enlightenment in the 17th and 18th centuries, until the beginning of the Industrial Revolution in the late 18th century.\n",
"In Europe, there has been a general move away from religious observance and belief in Christian teachings and a move towards secularism. The Enlightenment is largely responsible for the spread of secularism. Several scholars have argued for a link between the rise of secularism and Protestantism, attributing it to the wide-ranging freedom in the Protestant-majority countries. In North America, South America and Australia Christian religious observance is much higher than in Europe. United States remains particularly religious in comparison to other developed countries. South America, historically Roman Catholic, has experienced a large Evangelical and Pentecostal infusion in the 20th and 21st centuries.\n",
"According to Nolan (2006), Europe ceased being a \"res publica Christiana\" due to the 16th- and 17th-century wars of the Reformation and Counter-Reformation and became a \"state system\" with a sharp separation of church and state. The principle of \"cuius regio, eius religio\" (\"whose realm, his religion\"), first formulated at the Peace of Augsburg (1555), was confirmed at the Peace of Westphalia (1648), which gave secular states sovereignty over religions, and rejected any supranational religious authority.\n",
"By 1650, the religious map of Europe had been redrawn: Scandinavia, Iceland, north Germany, part of Switzerland, Netherlands and Britain were Protestant, while the rest of the West remained Catholic. A byproduct of the Reformation was increasingly literacy as Protestant powers pursued an aim of educating more people to be able to read the Bible.\n"
] |
How did Luxembourg survive?
|
> annexed by the Netherlands because of its political ties?
Actually, Luxembourg has been absorbed into other countries through history, and the current Luxembourg is nowhere near as large as the historical Duchy of Luxembourg. See [this map](_URL_0_).
It came into the possession of Philip the Good of Burgundy, along with other Low Countries states. They all came under Habsburg rule as the Burgundian line became extinct, and under Charles V was united in inheritance. When the northern provinces rebelled under Philip II, Luxembourg remained part of the Southern Netherlands.
However, as France and Spain continued their war after 1648, France gained the southern parts of Luxembourg.
The entire Low Countries were annexed by the revolutionary French, until Luxembourg was restored in 1815, minus eastern parts annexed by Prussia. Then it was forced to be part of the United Kingdom of the Netherlands until the Belgian revolt of 1830, whose settlement in 1839 once again split off parts of it into Belgium.
So, you need to better define what "survive" means.
|
[
"During World War II, from 1940 to 1944 under German occupation of Luxembourg, the Chamber was dissolved by the Nazis and the country annexed under the name \"Gau Moselland\". The Grand Ducal family and the Luxembourgish government went into exile (at first to the United Kingdom, then to Canada and the United States).\n",
"In 1867, Luxembourg's independence was confirmed, after a turbulent period which even included a brief time of civil unrest against plans to annex Luxembourg to Belgium, Germany, or France. The crisis of 1867 almost resulted in war between France and Prussia over the status of Luxembourg, which had become free of German control when the German Confederation was abolished at the end of the Seven Weeks War in 1866.\n",
"The German occupation of Luxembourg in World War I was the first of two military occupations of the Grand Duchy of Luxembourg by Germany in the 20th century. From August 1914 until the end of World War I on 11 November 1918, Luxembourg was under full occupation by the German Empire. The German government justified the occupation by citing the need to support their armies in neighbouring France, although many Luxembourgers, contemporary and present, have interpreted German actions otherwise.\n",
"Luxembourg remained more or less under French rule until the defeat of Napoleon in 1815. When the French departed, the Allies installed a provisional administration. Luxembourg initially came under the \"Generalgouvernement Mittelrhein\" in mid-1814, and then from June 1814 under the \"Generalgouvernement Nieder- und Mittelrhein\" (General Government Lower and Middle Rhine).\n",
"World War I affected Luxembourg at a time when the nation-building process was far from complete. The small grand duchy (about 260,000 inhabitants in 1914) opted for an ambiguous policy between 1914 and 1918. With the country occupied by German troops, the government, led by Paul Eyschen, chose to remain neutral. This strategy had been elaborated with the approval of Marie-Adélaïde, Grand Duchess of Luxembourg. Although continuity prevailed on the political level, the war caused social upheaval, which laid the foundation for the first trades union in Luxembourg.\n",
"In 1839, William I became a party to the Treaty of London by which the Grand-Duchy lost its western, francophone territories to the Belgian province of Luxembourg. Due to the country's population having been halved, with the loss of 160,000 inhabitants, the militia lost half its strength. Under the terms of the treaty, Luxembourg and the newly formed Duchy of Limburg, both members of the German Confederation, were together required to provide a federal contingent consisting of a light infantry battalion garrisoned in Echternach, a cavalry squadron in Diekirch, and an artillery detachment in Ettelbruck. In 1846, the cavalry and artillery units were disbanded and the Luxembourg contingent was separated from that of Limburg. The Luxembourg contingent now consisted of two light infantry battalions, one in Echternach and the second in Diekirch; two reserve companies; and a depot company.\n",
"Much of the Luxembourgish population joined the Belgian revolution against Dutch rule. Except for the fortress and its immediate vicinity, Luxembourg was considered a province of the new Belgian state from 1830 to 1839. By the Treaty of London in 1839, the status of the grand duchy became fully sovereign and in personal union to the king of the Netherlands. In turn, the predominantly Oil-speaking geographically larger western part of the duchy was ceded to Belgium as the province de Luxembourg.\n"
] |
why did we, as a species, develop a taste for art?
|
There will never be any one, completely satisfying answer to a question like this. But, as far as we can tell, most of the higher-level mental attributes of humans are simply byproducts of having large, advanced brains. That is to say, we *didn't* evolve to appreciate art; we appreciate art because our brains evolved to do a whole suite of complex things. One of the most obvious and important of these is communication, which humans can do in myriad complex ways.
|
[
"In her book \"Homo Aestheticus: Where Art Comes From and Why\" (first printed in 1992), Dissanayake argues that art was central to the emergence, adaptation and survival of the human species, that aesthetic ability is innate in every human being, and that art is a need as fundamental to our species as food, warmth or shelter.\n",
"Artists such as Albrecht Dürer and Leonardo da Vinci, often working with naturalists, were also interested in the bodies of animals and humans, studying physiology in detail and contributing to the growth of anatomical knowledge. The traditions of alchemy and natural magic, especially in the work of Paracelsus, also laid claim to knowledge of the living world. Alchemists subjected organic matter to chemical analysis and experimented liberally with both biological and mineral pharmacology. This was part of a larger transition in world views (the rise of the mechanical philosophy) that continued into the 17th century, as the traditional metaphor of \"nature as organism\" was replaced by the \"nature as machine\" metaphor.\n",
"Exoticism (from 'exotic') is a trend in European art and design, whereby artists became fascinated with ideas and styles from distant regions, and drew inspiration from them. This often involved surrounding foreign cultures with mystique and fantasy which owed more to European culture than to the exotic cultures themselves: this process of glamorisation and stereotyping is called 'exoticization'.\n",
"The influences of Exoticism can be seen through numerous genres of this period, notably in music, painting, and decorative art. In music, exoticism is a genre in which the rhythms, melodies, or instrumentation are designed to evoke the atmosphere of far-off lands or ancient times (e.g., Ravel's \"Daphnis et Chloé\" and \"Tzigane for Violin and Orchestra\", Debussy's \"Syrinx for Flute Solo\" or Rimsky-Korsakov's \"Capriccio espagnol\"). Like orientalist subjects in 19th-century painting, exoticism in the decorative arts and interior decoration was associated with fantasies of opulence.\n",
"Cultural aspects emerged, such as art of the Upper Paleolithic period, which included cave painting, sculpture such as the Venus figurines, carvings and engravings of bone and ivory. The most common subject matter was large animals that were hunted by the people of the time.\n",
"During the period before and after European exploration and conquest of the Americas, indigenous native cultures produced a wide variety of visual arts, including painting on textiles, hides, rock and cave surfaces, bodies especially faces, ceramics, architectural features including interior murals, wood panels, and other available surfaces. For many of these cultures, the visual arts went beyond physical appearance and served as active extensions of their owners and indices of the divine. Artisans of the Ancient Americas drew upon a wide range of materials (obsidian, gold, spondylus shells), creating objects that included the meanings held to be inherent to the materials. These cultures often derived value from the physical qualities, rather than the imagery, of artworks, prizing aural and tactile features, the quality of workmanship, and the rarity of materials. Various works of art have been discovered large distances from their location of production, indicating that many Pre-Columbian civilizations collected items from other cultures or previous cultures. Moreover,\n",
"Cognitive science has also considered aesthetics, with the advent of neuroesthetics, pioneered by Semir Zeki, which seeks to explain the prominence of great art as an embodiment of biological principles of the brain, namely that great works of art capture the essence of things just as vision and the brain capture the essentials of the world from the ever-changing stream of sensory input. \"See also\" Vogelkop bowerbird.\n"
] |
if our blood contains iron, why is it not orange or rust colored?
|
The hemoglobin in our blood is what makes it red in the first place - strictly speaking, the color comes from the porphyrin ring that holds the iron in each hemoglobin molecule, not from the iron itself (rust-colored iron oxide is a different compound entirely). There are other animals (horseshoe crabs are well known for this) that don't use iron-based hemoglobin to bind oxygen, and instead use copper-based pigments in their blood. As a result, their blood is blue.
|
[
"Historically, an association between the color of blood and rust occurs in the association of the planet Mars, with the Roman god of war, since the planet is an orange-red, which reminded the ancients of blood. Although the color of the planet is due to iron compounds in combination with oxygen in the Martian soil, it is a common misconception that the iron in hemoglobin and its oxides gives blood its red color. The color is actually due to the porphyrin moiety of hemoglobin to which the iron is bound, not the iron itself, although the ligation and redox state of the iron can influence the pi to pi* or n to pi* electronic transitions of the porphyrin and hence its optical characteristics.\n",
"Oxygenated blood is red due to the presence of oxygenated hemoglobin that contains iron molecules, with the iron components reflecting red light. Red meat gets its color from the iron found in the myoglobin and hemoglobin in the muscles and residual blood.\n",
"Grey iron metal and yellow sulfur are both chemical elements, and they can be mixed together in any ratio to form a yellow-grey mixture. No chemical process occurs, and the material can be identified as a mixture by the fact that the sulfur and the iron can be separated by a mechanical process, such as using a magnet to attract the iron away from the sulfur.\n",
"When iron metal is exposed to air and water, usually it turns into rust, a mixure of oxides and oxide-hydroxides. However, in some environments the metal forms a mixed iron(II) and iron(III) salt with hydroxide and other anions, called green rust.\n",
"Iron, usually as Fe is a common constituent of river waters at very low levels. Higher iron concentrations in acidic springs or an anoxic hyporheic zone may cause visible orange/brown staining or semi-gelatinous precipitates of dense orange iron bacterial floc carpeting the river bed. Such conditions are very deleterious to most organisms and can cause serious damage in a river system.\n",
"Iron is commonly used as a colorant in its red iron oxide form as (FeO). Red iron oxide is commonly used to produce earthy reds and browns. It is the metal responsible for making earthenwares red. Iron is also another tricky colorant because of its ability to yield different colors under different circumstances. At low percentages (.5-1%) and in the presence of potassium, iron will become light blue or light blue-green in reduction (as is seen in traditional celadons). In the presence of barium, iron may become yellow green. When used in combination with calcium, red iron oxide can become pale yellow or amber in oxidation or green in reduction. Common percentages for red iron oxide range from (4 up to 10%).\n",
"The iron from the red blood cells is either released by the red pulp macrophages or they are stored in the erythrocyte itself in the form of ferritin. Also, the erythrocyte can store larger amounts of iron in the form of hemosiderin (an insoluble complex of partially degraded ferritin), and large deposits of this can be seen in the red pulp macrophages. The red pulp macrophages also obtain iron by scavenging a complex of haemoglobin (released from erythrocytes destroyed intravscularly throughout the body) and haptoglobin via. endocytosis through CD163. The iron stored in the splenic macrophages are released depending on the requirements from the bone marrow.\n"
] |
why can extreme stress cause a psychotic episode?
|
Everyone will break; it's a matter of time, the level of perceived stress, and the bodily fatigue they are going through at the time.
My first 48 hour shift at the hospital did something similar to me.
After finally going home I fell asleep only to wake up in a cold shower, and having my parents (whom I lived with) tell me I had been walking around the house crying about the waffle I had just eaten because it had disappeared and I was hungry.
Was not my best moment and I only vaguely remember cooking the waffle too.
|
[
"Stress is known to contribute to and trigger psychotic states. A history of psychologically traumatic events, and the recent experience of a stressful event, can both contribute to the development of psychosis. Short-lived psychosis triggered by stress is known as brief reactive psychosis, and patients may spontaneously recover normal functioning within two weeks. In some rare cases, individuals may remain in a state of full-blown psychosis for many years, or perhaps have attenuated psychotic symptoms (such as low intensity hallucinations) present at most times.\n",
"The exact cause of brief psychotic disorder is not known. One theory suggests a genetic link, because the disorder is more common in people who have family members with mood disorders, such as depression or bipolar disorder. Another theory suggests that the disorder is caused by poor coping skills, as a defense against or escape from a particularly frightening or stressful situation. These factors may create a vulnerability to develop brief psychotic disorder. In most cases, the disorder is triggered by a major stress or traumatic event. \n",
"Acute stress disorder occurs in individuals without any other apparent psychiatric disorder, in response to exceptional physical or psychological stress. While severe, such reactions usually subside within hours or days. The stress may be an overwhelming traumatic experience (e.g. accident, battle, physical assault, rape) or unusually sudden change in social circumstances of the individual, such as multiple bereavement.\n",
"Acute stress disorder (abbreviated ASD, and not to be confused with autism spectrum disorder) is the result of a traumatic event in which the person experiences or witnesses an event that causes the victim/witness to experience extreme, disturbing, or unexpected fear, stress, or pain, and that involves or threatens serious injury, perceived serious injury, or death to themselves or someone else. A study of rescue personnel after exposure to a traumatic event showed no gender difference in acute stress reaction. Acute stress reaction is a variation of post-traumatic stress disorder (PTSD).\n",
"A major depressive episode is characterized by the presence of a severely depressed mood that persists for at least two weeks. Episodes may be isolated or recurrent and are categorized as mild (few symptoms in excess of minimum criteria), moderate, or severe (marked impact on social or occupational functioning). An episode with psychotic features—commonly referred to as \"psychotic depression\"—is automatically rated as severe. If the patient has had an episode of mania or markedly elevated mood, a diagnosis of bipolar disorder is made instead. Depression without mania is sometimes referred to as \"unipolar\" because the mood remains at one emotional state or \"pole\".\n",
"Psychotic symptoms tend to develop after an individual has already had several episodes of depression without psychosis. However, once psychotic symptoms have emerged, they tend to reappear with each future depressive episode. The prognosis for psychotic depression is not considered to be as poor as for schizoaffective disorders or primary psychotic disorders. Still, those who have experienced a depressive episode with psychotic features have an increased risk of relapse and suicide compared to those without psychotic features, and they tend to have more pronounced sleep abnormalities.\n",
"Traumatic stress is a common term for reactive anxiety and depression, although it is not a medical term and is not included in the \"Diagnostic and Statistical Manual of Mental Disorders\" (DSM). The experience of traumatic stress includes subtypes of anxiety, depression, and disturbance of conduct along with combinations of these symptoms, resulting from events that are less threatening and distressing than those that lead to post-traumatic stress disorder. The fifth edition of the DSM describes in a section titled \"Trauma and Stress-Related Disorders\" disinhibited social engagement disorder, reactive attachment disorder, acute stress disorder, adjustment disorder, and posttraumatic stress disorder.\n"
] |
What was so special about the Paris commune uprising that it seems to hold the imagination of communists greater than that of say the French revolution?
|
The Paris Commune was one of the first explicitly communist political actions. The 1848 revolutions happened before Marx had written the bulk of his work (and indeed, they both informed his writing). The French Revolution, much like the American revolution, is usually termed by communist academics as a 'bourgeois revolution,' a necessary stage in the historical development of capitalism, but not the proletarian revolution that communists support.
A bourgeois revolution means that it was essentially an anti-aristocratic revolution, but not an anti-class revolution. After the American and French revolutions, there were still rich and poor in America and France, but there were no longer nobles or arbitrary status determined by lineage. The Paris Commune, on the other hand, was an exercise in true egalitarianism.
Not sure what that other guy is talking about with Marx saying that the Commune "needed" a revolutionary terror; the Communards did kill quite a few members of the French military when they came to recapture the city, but the whole reason the Paris Commune came to exist in the first place was that everyone who would've been the target of a revolutionary terror had already fled the city in fear. Most members of the government left, as did anyone with the money or influence to get out. So really the Commune was created in a power vacuum, and there wouldn't have been anyone to commit a revolutionary terror against. The Communards were interested in other French cities joining them, but with the communications of the time, and with the Commune hemmed in on one side by the Prussian Army and on the other by the French Army, there wasn't really a good way to get any messages out; in any case, the situation in Paris was pretty unique. Imagine if the city government in your town just left tomorrow. The Commune was less an ideologically-motivated movement and more a natural reaction by the people of Paris, who suddenly needed to organize things on their own.
So in a nutshell, that's why communists are into the Paris Commune. It was a better example of functioning communism, though obviously over a much shorter timeframe, than the Soviet Union or whatever. They practiced proper worker democracy, had free education, and other things that communists like. It really is a very interesting and unique period of history, and I recommend anyone read more about it.
|
[
"The Commune resulted in part from growing discontent among the Paris workers. This discontent can be traced to the first worker uprisings, the Canut revolts, in Lyon and Paris in the 1830s (a \"canut\" was a Lyonnais silk worker, often working on Jacquard looms). Many Parisians, especially workers and the lower-middle classes, supported a democratic republic. A specific demand was that Paris should be self-governing with its own elected council, something enjoyed by smaller French towns but denied to Paris by a national government wary of the capital's unruly populace. They also wanted a more \"just\" way of managing the economy, if not necessarily socialist, summed up in the popular appeal for \"\"la république démocratique et sociale!\"\" (\"the democratic and social republic!\").\n",
"Meanwhile, the Paris Commune was proclaimed in March 1871. An uprising after the Franco-Prussian War, it was greatly influenced by anarchists and also had a great impact on anarchist history. Anarchists had a prominent role in the Commune, next to Blanquists and to a lesser extent Marxists. Radical socialist views, like Proudhonian federalism, were implemented to a small extent. Most importantly the workers proved they could run their own services and factories. After the defeat of the Commune, anarchists like Eugène Varlin, Louise Michel, and Élisée Reclus were shot or imprisoned. Socialist ideas were persecuted from France for a decade. Leading Internationalists who managed to survive the bloody suppression of the Commune fled to Switzerland where they formed the Anarchist St. Imier International.\n",
"The Paris Commune was a government that briefly ruled Paris from March 18 (more formally, from March 28) to May 28, 1871. The Commune was the result of an uprising in Paris after France was defeated in the Franco-Prussian War. Anarchists participated actively in the establishment of the Paris Commune. They included Louise Michel, the Reclus brothers, and Eugene Varlin (the latter murdered in the repression afterwards). As for the reforms initiated by the Commune, such as the re-opening of workplaces as co-operatives, anarchists can see their ideas of associated labour beginning to be realised...Moreover, the Commune's ideas on federation obviously reflected the influence of Proudhon on French radical ideas. Indeed, the Commune's vision of a communal France based on a federation of delegates bound by imperative mandates issued by their electors and subject to recall at any moment echoes Bakunin's and Proudhon's ideas (Proudhon, like Bakunin, had argued in favour of the \"implementation of the binding mandate\" in 1848...and for federation of communes). Thus both economically and politically the Paris Commune was heavily influenced by anarchist ideas.\". George Woodcock manifests that \"a notable contribution to the activities of the Commune and particularly to the organization of public services was made by members of various anarchist factions, including the mutualists Courbet, Longuet, and Vermorel, the libertarian collectivists Varlin, Malon, and Lefrangais, and the bakuninists Elie and Elisée Reclus and Louise Michel.\"\n",
"In the aftermath of the defeat of France in the Franco-Prussian War, revolution broke out in France, with revolutionary army members along with working-class revolutionaries founding the Paris Commune. The Paris Commune appealed both to the citizens of Paris regardless of class as well as to the working class who were a major base of support for the government by appealing to them via militant rhetoric. In spite of such militant rhetoric to appeal to the working class, the Commune also received substantial support from the middle-class bourgeoisie of Paris, including shopkeepers and merchants. In part due to its sizeable number neo-Proudhonians and neo-Jacobins in the Central Committee, it declared that the Commune was not opposed to private property, but it rather hoped to create the widest distribution of it. The political composition of the Commune included twenty-five neo-Jacobins, fifteen to twenty neo-Proudhonians and proto-syndicalists, nine or ten Blanquists, a variety of radical republicans and a few members of the First International influenced by Marx.\n",
"\"Paris Commune\" tells the story of the Parisian uprising of 1871, the first socialist rebellion in Europe. The piece was developed as a part of La Jolla Playhouse’s Page-to-Stage program in 2004, and further expanded in 2008 as a part of the Public Lab Series Workshop at the Public Theater. The piece is unique among The Civilians’ early repertoire in that it was not developed through first-person interviews with those directly affected by the topic of the play, but rather through extensive historical research into the actual Paris Commune that had its genesis in the 1871 rebellion. The play was written by Steve Cosson and all of the music was written or adapted by Michael Friedman.\n",
"The Paris Commune in France (1871) is hailed by both anarchists and Socialists as the first assumption of power by the working class, but controversy of the policies implemented in the Commune helped the split between the two groups.\n",
"In 1871, in the wake of the Franco-Prussian War an uprising in Paris established the Paris Commune. The Paris Commune was a government that briefly ruled Paris from 18 March (more formally, from 28 March) to 28 May 1871. The Commune was the result of an uprising in Paris after France was defeated in the Franco-Prussian War. Anarchists participated actively in the establishment of the Paris Commune. The 92 members of the \"Communal Council\" included a high proportion of skilled workers and several professionals. Many of them were political activists, ranging from reformist republicans, various types of socialists, to the Jacobins who tended to look back nostalgically to the Revolution of 1789. The \"reforms initiated by the Commune, such as the re-opening of workplaces as co-operatives, anarchists can see their ideas of associated labour beginning to be realised...Moreover, the Commune's ideas on federation obviously reflected the influence of Proudhon on French radical ideas. Indeed, the Commune's vision of a communal France based on a federation of delegates bound by imperative mandates issued by their electors and subject to recall at any moment echoes Bakunin's and Proudhon's ideas (Proudhon, like Bakunin, had argued in favour of the \"implementation of the binding mandate\" in 1848...and for federation of communes). George Woodcock manifests that \"a notable contribution to the activities of the Commune and particularly to the organisation of public services was made by members of various anarchist factions, including the mutualists Courbet, Longuet, and Vermorel, the libertarian collectivists Varlin, Malon, and Lefrangais, and the bakuninists Elie and Elisée Reclus and Louise Michel\".\n"
] |
Why do healthy young athletes die suddenly from cardiac arrest?
|
When young athletes die of cardiac arrest it is almost always due to a heart disease called hypertrophic cardiomyopathy. In fact, it is the leading cause of death of young athletes in America. This is a thickening and stiffening of the heart muscle that causes numerous problems during vigorous exercise, which can lead to cardiac arrest.
Testing is effective, but sudden cardiac death is so rare (roughly 1 in 200,000 young athletes per year) that it is not cost effective to screen all 15 million youth athletes in the USA alone.
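The cost-effectiveness point can be made concrete with some back-of-the-envelope arithmetic. The athlete count and death rate are from above; the per-test cost is a made-up illustrative figure, not a real screening price:

```python
# Rough screening arithmetic using the figures from the text above,
# plus an assumed (illustrative) cost per screening test.
athletes = 15_000_000        # youth athletes in the USA
death_rate = 1 / 200_000     # sudden cardiac deaths per athlete per year
cost_per_screen = 100        # assumed cost of one screening, in dollars

expected_deaths = athletes * death_rate          # ~75 deaths per year
total_cost = athletes * cost_per_screen          # $1.5 billion per screening round
cost_per_death_averted = total_cost / expected_deaths

print(round(expected_deaths), total_cost, round(cost_per_death_averted))
```

Even assuming a perfect test that catches every case, that works out to tens of millions of dollars per death averted, which is why universal screening is a hard sell.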
|
[
"Because several well-known and high-profile cases of athletes experiencing sudden unexpected death due to cardiac arrest, such as Reggie White and Marc-Vivien Foé, a growing movement is making an effort to have both professional and school-based athletes screened for cardiac and other related conditions, usually through a careful medical and health history, a good family history, a comprehensive physical examination including auscultation of heart and lung sounds and recording of vital signs such as heart rate and blood pressure, and increasingly, for better efforts at detection, such as an electrocardiogram.\n",
"It remains a difficult medical challenge to prevent the sudden cardiac death of athletes, typically defined as natural, unexpected death from cardiac arrest within one hour of the onset of collapse symptoms, excluding additional time on mechanical life support. (Wider definitions of sudden death are also in use, but not usually applied to the athletic situation.) Most causes relate to congenital or acquired cardiovascular disease with no symptoms noted before the fatal event. The prevalence of any single, associated condition is low, probably less than 0.3% of the population in the athletes' age group, and the sensitivity and specificity of common screening tests leave much to be desired. The single most important predictor is fainting or near-fainting during exercise, which should require detailed explanation and investigation. The victims include many well-known names, especially in professional soccer, and close relatives are often at risk for similar cardiac problems.\n",
"Athlete's heart is not dangerous for athletes (though if a nonathlete has symptoms of bradycardia, cardiomegaly, and cardiac hypertrophy, another illness may be present). Athlete's heart is not the cause of sudden cardiac death during or shortly after a workout, which mainly occurs due to hypertrophic cardiomyopathy, a genetic disorder.\n",
"Sudden cardiac death occurs in approximately one per 200,000 young athletes per year, usually triggered during competition or practice. The victim is usually male and associated with soccer, basketball, ice hockey, or American football, reflecting the large number of athletes participating in these sustained and strenuous sports. For a normally healthy age group, the risk appears to be particularly magnified in competitive basketball, with sudden cardiac death rates as high as one per 3,000 annually for male basketball players in NCAA Division I. This is still far below the rate for the general population, estimated as one per 1,300–1,600 and dominated by the elderly. However, a population as large as the United States will experience the sudden cardiac death of a competitive athlete at the average rate of one every three days, often with significant local media coverage heightening public attention.\n",
"Screening athletes for cardiac disease can be problematic because of low prevalence and inaccurate performance of various tests that have been used. Nevertheless, sudden death among seemingly healthy individuals attracts much public and legislator attention because of its visible and tragic nature.\n",
"The following is a list of association footballers who died while playing, either directly from injuries sustained during a game, or after being taken ill on the pitch. Following an increase in deaths, both during matches and training, the Federation of International Football Associations (FIFA) considered mandatory cardiac testing, already in place for years in some countries, such as Italy.\n",
"Sometimes sports injuries can be so severe as to result in actual death. Over the past year, 48 youths died from sports injuries. The leading causes of death in youth sports are sudden cardiac arrest, concussion, heat illness and external sickling. Cardiac-related deaths are usually due to an undiagnosed cardiovascular disorder. Trauma to the head, neck and spine can also be lethal. Among young American athletes, more than half of trauma-related deaths are to football players, with track and field, baseball, boxing and soccer also having relatively high fatality rates.\n"
] |
how do phones send texts?
|
In much the same way they send your voice to the tower when you talk on the phone. They have circuitry to create a signal, and they send one in the style the tower recognizes as a text message. The standards decided that certain bit patterns mean certain characters, so the phone sends a combination of addressing information and the text content as, essentially, a radio-wave pattern. Classic SMS actually rides on spare bandwidth in the control channel that phones already use for background chores like dialing and cell handoffs, which is why messages were limited to 160 characters.
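The "certain bit patterns mean certain characters" idea can be sketched directly. This uses plain 7-bit ASCII for illustration; real SMS uses the similar GSM 7-bit alphabet, packed tightly into the payload:

```python
# Encode a short message as bit patterns, 7 bits per character.
message = "Hi"
bits = "".join(f"{ord(ch):07b}" for ch in message)
print(bits)  # '10010001101001' -- 7 bits for 'H', then 7 bits for 'i'

# The classic 160-character limit falls out of the payload size:
payload_bytes = 140           # SMS user-data size in bytes
print(payload_bytes * 8 // 7) # 160 seven-bit characters fit
```

The tower just sees that bit stream (wrapped in addressing and error-correction data) modulated onto a radio wave.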
|
[
"Text messaging is most often used between private mobile phone users, as a substitute for voice calls in situations where voice communication is impossible or undesirable (e.g., during a school class or a work meeting). Texting is also used to communicate very brief messages, such as informing someone that you will be late or reminding a friend or colleague about a meeting. As with e-mail, informality and brevity have become an accepted part of text messaging. Some text messages such as SMS can also be used for the remote controlling of home appliances. It is widely used in domotics systems. Some amateurs have also built own systems to control (some of) their appliances via SMS. Other methods such as group messaging, which was patented in 2012 by the GM of Andrew Ferry, Devin Peterson, Justin Cowart, Ian Ainsworth, Patrick Messinger, Jacob Delk, Jack Grande, Austin Hughes, Brendan Blake, and Brooks Brasher are used to involve more than two people into a text messaging conversation. A Flash SMS is a type of text message that appears directly on the main screen without user interaction and is not automatically stored in the inbox. It can be useful in cases such as an emergency (e.g., fire alarm) or confidentiality (e.g., one-time password).\n",
"- Text messaging (texting) – this is a service on cell phones, allowing a user to type a short alphanumeric message and send it to another phone number, and the text is displayed on the recipient's phone screen. It is based on the Short Message Service (SMS) which transmits using spare bandwidth on the control radio channel used by cell phones to handle background functions like dialing and cell handoffs. Due to technical limitations of the channel, text messages are limited to 160 alphanumeric characters.\n",
"Text messages can be sent from a personal computer to mobile devices via an SMS gateway or Multimedia Messaging Service (MMS) gateway, using most popular email client programs, such as Outlook, Thunderbird, and so on. The messages must be sent in ASCII \"text-only\" mode. If they are sent in HTML mode, or using non-ASCII characters, they will most likely appear as nonsense on the recipient's mobile telephone.\n",
"As of 2017, text messages are used by youth and adults for personal, family, business and social purposes. Governmental and non-governmental organizations use text messaging for communication between colleagues. In the 2010s, the sending of short informal messages has become an accepted part of many cultures, as happened earlier with emailing. This makes texting a quick and easy way to communicate with friends, family and colleagues, including in contexts where a call would be impolite or inappropriate (e.g., calling very late at night or when one knows the other person is busy with family or work activities). Like e-mail and voicemail, and unlike calls (in which the caller hopes to speak directly with the recipient), texting does not require the caller and recipient to both be free at the same moment; this permits communication even between busy individuals. Text messages can also be used to interact with automated systems, for example, to order products or services from e-commerce websites, or to participate in online contests. Advertisers and service providers use direct text marketing to send messages to mobile users about promotions, payment due dates, and other notifications instead of using postal mail, email, or voicemail.\n",
"A message sent with an email client can be simultaneously addressed to multiple mobile telephones, whereas text messages sent in the usual manner between mobile telephones can only be sent to a single recipient.\n",
"In the cellular phone industry, mobile phones and their networks sometimes support concatenated short message service (or concatenated SMS) to overcome the limitation on the number of characters that can be sent in a single SMS text message transmission (which is usually 160). Using this method, long messages are split into smaller messages by the sending device and recombined at the receiving end. Each message is then billed separately. When the feature works properly, it is nearly transparent to the user, appearing as a single long text message. Previously, due to incompatibilities between providers and lack of support in some phone models, there was not widespread use of this feature. \n",
"The sending of unsolicited text messages, either in the form of SMS messages, push mail messages or any similar format designed for consumer portable devices (mobile phones, PDAs) also falls under the prohibition of Article 13.\n"
] |
how this battery train experiment works?
|
[This](_URL_1_) should help you out...people on a physics forum explaining it pretty simply
EDIT: Aww hell, I guess I'll copy/paste the answer here...
> If you run a current through a coil, it generates a magnetic field inside the coil [like this](_URL_0_)
> If the field lines are exactly parallel a bar magnet will feel no net force. However at the ends of the coil, where the field lines diverge, a bar magnet will be either pulled into the coil or pushed out of the coil depending on which way round you insert it.
> The trick in the video is that the magnets are made of a conducting material and they connect the battery terminals to the copper wire, so the battery, magnets and copper wire make a circuit that generates a magnetic field just in the vicinity of the battery. The geometry means the two magnets are automatically at the ends of the generated magnetic field, where the field is divergent, so a force is exerted on the magnets.
> The magnets have been carefully aligned so the force on both magnets points in the same direction, and the result is that the magnets and battery move. But as they move, the magnetic field moves with them and you get a constant motion.
> If you flipped round the two magnets at the ends of the battery the battery and magnets would move in the reverse direction. If you flipped only one magnet the two magnets would then be pulling/pushing in opposite directions and the battery wouldn't move.
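For scale, the field inside the coil section carrying current can be estimated with the standard long-solenoid formula B = mu0 * n * I. The winding density and current below are guesses for illustration, not measurements from the video:

```python
import math

# Field inside a long solenoid: B = mu0 * n * I.
mu0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
turns_per_metre = 400      # assumed winding density of the copper coil
current = 2.0              # assumed current driven by the AA cell, in amps

B = mu0 * turns_per_metre * current
print(f"{B * 1000:.2f} mT")  # ~1 mT in the uniform region between the magnets
```

A millitesla-scale field sounds small, but it's plenty to push a light neodymium magnet sitting right at the divergent end of the field, which is what keeps the "train" rolling.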
|
[
"Safety tests are performed daily to ensure that the MAPO system is working properly on each train. At the direction of the monorail station conducting the test, each train will intentionally overrun a hold point to verify that a red MAPO occurs and that the emergency brakes activate. Pilots perform tests in forward and reverse when bringing a train onto the system for the first time that day. The indications are called into Monorail Central with the emergency brake pressures.\n",
"In March 2013, JR Kyushu converted two-car 817-1000 series set VG114 to a battery electric multiple unit (BEMU) by adding lithium-ion storage batteries to enable the train to run on non-electrified lines. The train has a maximum speed of when running off the 20 kV AC overhead power supply of electrified lines, and when running on battery power over non-electrified lines. It can run a distance of up to on battery power. The train was tested on non-electrified lines from September 2013.\n",
"Many modern electric toy trains contain sophisticated electronics that emit digitized sound effects and allow the operator to safely and easily run multiple remote control trains on one loop of track. In recent years, many toy train operators will operate a train with a TV camera in the front of the engine and hooked up to a screen, such as computer monitor. This will show an image, similar to that of a real (smaller size) railroad.\n",
"The operation of battery trains that receive energy from batteries and an electric pick-up has been proposed for operation on unelectrified and electrified sections of the track. Adoption of these types of trains would reduce the need for full line electrification. \n",
"The test train ran through much of 1935 and 1936, and was tried on nearly all of the electrified tracks on the Metropolitan line and the District line. Once the concept had been proved to be reliable, the train was also used in passenger service. Besides the regenerative braking, the acceleration was found to be particularly smooth. When the decision was taken to proceed with the new system on the O and P stock, the test train was dismantled, and the equipment was fitted to three battery locomotives built by the Gloucester Railway Carriage and Wagon Company, which were part of a batch of nine vehicles supplied between 1936 and 1938. The equipment was particularly suitable for battery locomotives, as the lack of starting resistances reduced the amount of power wasted when starting and stopping frequently. At slow speeds, conventional control systems would often overheat, but the metadyne-equipped locomotives could pull trains weighing 100 tons for long distances at speeds as low as without problems. However, the complexity of the equipment, and the difficulty of maintaining the metadyne machine, resulted in the locomotives not being used sufficiently, and they were withdrawn for scrapping in 1977.\n",
"The choice of which of the tests to use is at the operator's discretion as there is merit in each test for given situations. Later model testers that are battery powered are limited to doing the \"screen test\". Older mains powered units can do all tests. The purpose of the high current test is to simulate a fault condition: if a live part contacts the earthed metalwork, the earth conductor should be able to carry sufficient current to blow the fuse and render the appliance safe, without the earth conductor itself burning out. On the other hand, some equipment (especially IT equipment) could be damaged by this test, as the earth connection is only for functional purposes and is not meant to be relied upon for safety.\n",
"When the trains brake, the 3-phase motors act as generators and return electricity to the system rather than converting power to heat, as on a friction brake system. The current that is produced is conducted back to the overhead lines. If there is another train in the same electrical section, this train will use as much of the generated energy as it can.\n"
] |
Can 'one' photon technically be divided into anything?
|
Read up on [spontaneous parametric down-conversion](_URL_0_): a single high-energy photon passing through a nonlinear crystal can be converted into a pair of lower-energy photons, whose combined energy equals that of the original photon.
_URL_1_
_URL_2_
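To put a number on what "dividing" a photon conserves, here is a minimal sketch of the energy bookkeeping in down-conversion. The 405 nm pump value is a common lab example used purely as an illustration, not something from the links above:

```python
# Photon energy is E = h*c/lambda, so splitting one pump photon into a
# "signal" and an "idler" photon conserves energy when:
#     1/lambda_pump = 1/lambda_signal + 1/lambda_idler
lam_pump = 405e-9       # pump wavelength in metres (illustrative)
lam_signal = 810e-9     # degenerate case: signal at twice the pump wavelength
lam_idler = 1 / (1 / lam_pump - 1 / lam_signal)

print(lam_idler)  # ~810e-9: the idler also comes out at 810 nm
```

The two daughter photons need not share a wavelength; any signal/idler pair satisfying that sum works, which is why down-converted pairs are so strongly correlated.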
|
[
"A photon is massless, has no electric charge, and is a stable particle. A photon has two possible polarization states. In the momentum representation of the photon, which is preferred in quantum field theory, a photon is described by its wave vector, which determines its wavelength \"λ\" and its direction of propagation. A photon's wave vector may not be zero and can be represented either as a spatial 3-vector or as a (relativistic) four-vector; in the latter case it belongs to the light cone (pictured). Different signs of the four-vector denote different circular polarizations, but in the 3-vector representation one should account for the polarization state separately; it actually is a spin quantum number. In both cases the space of possible wave vectors is three-dimensional.\n",
"can be described as having right or left circular polarization, or a superposition of the two. Equivalently, a photon can be described as having horizontal or vertical linear polarization, or a superposition of the two.\n",
"a photon can, within the bounds of the uncertainty principle, fluctuate into a virtual charged fermion–antifermion pair, to either of which the other photon can couple. This fermion pair can be leptons or quarks. Thus, two-photon physics experiments can be used as ways to study the photon structure, or, somewhat metaphorically, what is \"inside\" the photon.\n",
"In quantum theory, photons describe quantized electromagnetic radiation. Specifically, a photon is an elementary excitation of a normal mode of the electromagnetic field. Thus a single-photon state is the quantum state of a radiation mode that contains a single excitation.\n",
"Pairs of single photons can be generated in highly correlated states from using a single high-energy photon to create two lower-energy ones. One photon from the resulting pair may be detected to 'herald' the other (so its state is pretty well known prior to detection). The two photons need not generally be the same wavelength, but the total energy and resulting polarisation are defined by the generation process.\n",
"6) Since natural and obvious requirements have forced the conclusion that photon \"2\" simultaneously possesses incompatible properties, this means that, even if it is not possible to determine these properties simultaneously and with arbitrary precision, they are nevertheless possessed objectively by the system. But quantum mechanics denies this possibility and it is therefore an incomplete theory.\n",
"One can say that the photon is not a particle but as a mere quantum of energy that is usually exchanged in integer multiples of ħω, but not always, as it is the case in the above experiment. From this point of view, photons are quasiparticles, akin to phonons and plasmons, in a sense less \"real\" than electrons and protons. Before dismissing this view as unscientific, its worth recalling the words of Willis Lamb, who won a Nobel prize in the area of quantum electrodynamics:\n"
] |
what is wikileaks? what is the current situation with them?
|
Wikileaks is a website started by civil libertarians who intended to promote transparency by publishing leaks of sensitive information about governments and large corporations, given to them in secret by whistleblowers. They rose to fame in 2010 with the publication of a huge number of military records and diplomatic cables, given to them illegally by the whistleblower Chelsea Manning, who was a soldier and computer analyst in the US Army. Manning was sentenced to 35 years (after being threatened with a death sentence for “aiding the enemy”) and released after about seven years when President Obama commuted her sentence in the last months of his presidency. Chelsea Manning is a transgender woman and was subjected to humiliating treatment and solitary confinement inside the military prison where she served time. She attempted suicide at least once that we know of. The US government also put out a warrant for the arrest of one of the founders of Wikileaks, Julian Assange, an Australian citizen.
Julian is also wanted in Sweden on accusations of sexual assault, and the Swedish government has been attempting to extradite him for questioning regarding the allegations against him.
To avoid these attempts at arrest and extradition, Assange applied for asylum in Ecuador and was granted it, and when blocked from boarding a plane in London, he took literal refuge inside the Ecuadorean embassy, where he has lived for the last 6 years. If he attempts to leave the building, British police intend to arrest him immediately and extradite him to Sweden. He has said he would be willing to be extradited to Sweden to face the sexual assault allegations (which he denies) if given promises that he would not then be sent on to the US. The Swedish government has said that it wouldn't extradite him onward, and the British claim they would only extradite him to Sweden, not to America, but he claims not to trust their word on this.
Further complicating matters, in recent years Assange and Wikileaks have come to be seen as biased by many observers. They've been accused of softballing or ignoring leaks that are damaging to Russia while focusing on leaks that are damaging to the US. They are also seen as having favored Trump in the 2016 US presidential election and opposed Hillary Clinton. Some have gone as far as to say that Wikileaks is basically an arm of the FSB, the Russian security and intelligence agency often described as a rough Russian counterpart to agencies like the CIA.
And then to add to it, Assange seems to have overstayed his welcome in the Ecuadorean embassy, where, as I said, he has lived for 6 years straight now and not once left the building (London police are parked outside the building 24/7 waiting to arrest him should he ever leave). They’ve built him a small residence inside the embassy building, with a bed, bathroom, and small kitchen. And they claim he is obnoxious, doesn’t shower often, and intentionally inflames the situation diplomatically, causing trouble for the Ecuadorean government. They’ve reportedly cut off his access to the Internet so he’ll stop talking.
|
[
"WikiLeaks has drawn criticism for its alleged absence of whistleblowing on or criticism of Russia, and for criticising the Panama Papers' exposé of businesses and individuals with offshore bank accounts. The organization has additionally been criticised for inadequately curating its content and violating the personal privacy of individuals. WikiLeaks has, for instance, revealed Social Security numbers, medical information, credit card numbers and details of suicide attempts.\n",
"On 10 December 2016, \"The Washington Post\" reported that the Central Intelligence Agency concluded that Russia intelligence operatives provided materials to WikiLeaks in an effort to help Donald Trump's election bid. WikiLeaks has frequently been criticised for its alleged absence of whistleblowing on or criticism of Russia.\n",
"WikiLeaks () is an international non-profit organisation that publishes news leaks, and classified media provided by anonymous sources. Its website, initiated in 2006 in Iceland by the organisation Sunshine Press, claimed in 2016 to have released online 10 million documents in its first 10 years. Julian Assange, an Australian Internet activist, is generally described as its founder and director. Since September 2018, Kristinn Hrafnsson has served as its editor-in-chief.\n",
"In 2006 Wikileaks was created by Julien Assange and the goal for Wikileaks is to capture the truth by any means, which has put them in the International Spotlight over the years. Since the birth Wikileaks has released classified documents regarding the war effort in the middle east, document regarding the detainees in Guantanamo Bay, and releasing the emails from Democratic National Committee staffers.\n",
"WikiLeaks founder Julian Assange initially stuck to WikiLeaks policy of neither confirming or denying sources but in January 2017 said that their \"source is not the Russian government and it is not a state party\", and the Russian government said it had no involvement.\n",
"WikiLeaks has been accused of purposely targeting certain states and people, and presenting its disclosures in misleading and conspiratorial ways to harm those people. Writing in 2012, \"Foreign Policy\"'s Joshua Keating stated that \"nearly all its major operations have targeted the U.S. government or American corporations.\" In a 2017 speech addressing the Center for Strategic and International Studies, CIA Director Mike Pompeo referred to WikiLeaks as \"a non-state hostile intelligence service\" and described founder Julian Assange as a narcissist, fraud, and coward.\n",
"WikiLeaks redacted names and other identifying information from the documents before their release, while attempting to allow for connections between people to be drawn via unique identifiers generated by WikiLeaks. It also said that it would postpone releasing the source code for the cyber weapons, which is reportedly several hundred million lines long, \"until a consensus emerges on the technical and political nature of the C.I.A.'s program and how such 'weapons' should be analyzed, disarmed and published.\" WikiLeaks founder Julian Assange claimed this was only part of a larger series.\n"
] |
how are the us still allowed to use drone strikes when the civilian casualty rate is so high?
|
Who is going to stop us?
|
[
"In recent years, the U.S. has increased its use of drone strikes against targets in foreign countries and elsewhere as part of the War on Terror. In January 2014, it was estimated that 2,400 people have died from U.S. drone strikes in five years. In June 2015 the total death toll of U.S. drone strikes was estimated to exceed 6,000.\n",
"Drone strikes are part of a targeted killing campaign against jihadist militants; however, non-combatant civilians have also been killed in drone strikes. Determining precise counts of the total number killed, as well as the number of non-combatant civilians killed, is impossible; and tracking of strikes and estimates of casualties are compiled by a number of organizations, such as the \"Long War Journal\" (Pakistan and Yemen), the New America Foundation (Pakistan), and the London-based Bureau of Investigative Journalism (Yemen, Somalia, and Pakistan). The \"estimates of civilian casualties are hampered methodologically and practically\"; for example, \"estimates are largely compiled by interpreting news reports relying on anonymous officials or accounts from local media, whose credibility may vary.\"\n",
"A January 2011 report by \"Bloomberg\" stated that civilian casualties in the strikes had apparently decreased. According to the report, the U.S. government believed that 1,300 militants and only 30 civilians had been killed in drone strikes since mid-2008, with no civilians killed since August 2010.\n",
"The civilian casualty ratio for U.S. drone strikes in Pakistan is notoriously difficult to quantify. The U.S. itself puts the number of civilians killed from drone strikes in the last two years at no more than 20 to 30, a total that is far too low according to a spokesman for the NGO CIVIC. At the other extreme, Daniel L. Byman of the Brookings Institution suggests that drone strikes may kill \"10 or so civilians\" for every militant killed, which would represent a civilian to combatant casualty ratio of 10:1. Byman argues that civilian killings constitute a humanitarian tragedy and create dangerous political problems, including damage to the legitimacy of the Pakistani government and alienation of the Pakistani populace from America. An ongoing study by the New America Foundation finds non-militant casualty rates started high but have declined steeply over time, from about 60% (3 out of 5) in 2004–2007 to less than 2% (1 out of 50) in 2012. The study puts the overall non-militant casualty rate since 2004 at 15–16%, or a 1:5 ratio, out of a total of between 1,908 and 3,225 people killed in Pakistan by drone strikes since 2004.\n",
"It is difficult to reconcile these figures because the drone strikes are often in areas that are inaccessible to independent observers and the data includes reports by local officials and local media, neither of whom are reliable sources. Critics also fear that by making killing seem clean and safe, so-called surgical UAV strikes will allow the United States to remain in a perpetual state of war. However, others maintain that drones \"allow for a much closer review and much more selective targeting process than do other instruments of warfare\" and are subject to Congressional oversight. Like any military technology, armed UAVs will kill people, combatants and innocents alike, thus \"the main turning point concerns the question of whether we should go to war at all.\"\n",
"Collateral damage of civilians still takes place with drone combat, although some (like John O. Brennan) have argued that it greatly reduces the likelihood. Although drones enable advance tactical surveillance and up-to-the-minute data, flaws can become apparent. The U.S. drone program in Pakistan has killed several dozen civilians accidentally. An example is the operation in 2010 Feb near Khod, in Urozgan Province, Afghanistan. Over ten civilians in a three-vehicle convoy travelling from Daykundi Province were accidentally killed after a drone crew misidentified the civilians as hostile threats. A force of Bell OH-58 Kiowa helicopters, who were attempting to protect ground troops fighting several kilometers away, fired AGM-114 Hellfire missiles at the vehicles.\n",
"BULLET::::- The Obama administration releases a report claiming that between 2009 and the end of 2015 the Central Intelligence Agency and the United States armed forces have carried out a combined 473 airstrikes using unmanned aerial vehicles (or \"drones\") against terrorist targets in countries where it defines the United States as not being at war – not named in the report, but including Libya, Pakistan, Somalia, and Yemen – killing between 2,372 and 2,581 \"combatants\" and inadvertently killing between 64 and 116 civilians. The report excludes deaths in Afghanistan, Iraq, and Syria, which are countries in which the administration deems the United States to be at war. Critics of the report claim that it underestimates civilian casualties; the Long War Journal puts the civilian death toll at 212, the New America Foundation estimates it at 219, and the Bureau of Investigative Journalism claims it is at least 325.\n"
] |
Why dose the Ussr anthem mention Russia
|
Here are the lyrics for the Anthem of the SSSR that were used from 1944 to 1956 - the first stanza is the relevant one:
> Союз нерушимый республик свободных
> Сплотила навеки Великая Русь.
> Да здравствует созданный волей народов
> Единый, могучий Советский Союз!
Roughly this translates as follows (correct me if I'm wrong):
> An unbreakable union of free republics,
> Great Rus has welded together forever.
> Long live the united, mighty Soviet Union,
> created by the will of the peoples!
The trick here is that Rus does not necessarily refer to Russia alone, but can also refer to the whole empire, or to the descendants of the Kievan Rus.
There are a few *possible* reasons why this is worded this way, and the only two that I know of go back to two events: first, the events and thoughts surrounding the initial formation of the Soviet Union, and the events going on at the time the anthem was composed and adopted.
Regarding the beginning of the union: at the time of the Soviet Union's creation there was debate over whether the revolution was to be worldwide from the start, or whether it was to begin in one place and then spread. The outcome was that the Communist movement considered the success of the USSR and its beginnings in Russia/Ukraine to be significant, which may explain the inclusion of Rus in the anthem.
The second reason is that the anthem was adopted in 1943, toward the high point of the Great Patriotic War. This was a time during which the Soviet state adopted a lot of openly nationalistic policies and trappings in order to motivate citizens, appealing to whatever common identity might get them to fight - for the fatherland, the idea of communism, or anything else. It was a period of serious rapprochement with the Orthodox church, and we see lots of famous posters of "Mother Russia" (and massive statues built after the war). Keep in mind that in the 20s, Mother Russia posters had generally been propaganda for the White Russians (meaning anti-communists, not Belarusians).
The end result of this is that there were some "communist" reasons to lift up Russia as the mother of communism worldwide (at least of successful communism) and around the time the anthem was written, the traditional communist suppression of nationalism was dying away in the face of a need to motivate citizens to fight and contribute to the war effort.
|
[
"The \"State Anthem of the Russian Federation\" () is the name of the official national anthem of Russia. It uses the same melody as the \"State Anthem of the Soviet Union\", composed by Alexander Alexandrov, and new lyrics by Sergey Mikhalkov, who had collaborated with Gabriel El-Registan on the original anthem. From 1944, that earliest version replaced \"The Internationale\", as a new, more Soviet-centric, and Russia-centric Soviet anthem. The same melody, but without lyrics mentioning dead Stalin by name, was used after 1956. A second version of the lyrics was written by Mikhalkov in 1970 and adopted in 1977, placing less emphasis on World War II and more on the victory of communism.\n",
"The liberal political party Yabloko stated that the re-adoption of the Soviet anthem \"deepened the schism in Russian society\". The Soviet anthem was supported by the Communist Party and by Putin himself. The other national symbols used by Russia in 1990, the white-blue-red flag and the double-headed eagle coat of arms, were also given legal approval by Putin in December, thus ending the debate over the national symbols. After all of the symbols were adopted, Putin said on television that this move was needed to heal Russia's past and to fuse the period of the Soviet Union with Russia's history. He also stated that, while Russia's march towards democracy would not be stopped, the rejection of the Soviet era would have left the lives of their mothers and fathers bereft of meaning. It took some time for the Russian people to familiarize themselves with the anthem's lyrics; athletes were only able to hum along with the anthem during the medal ceremonies at the 2002 Winter Olympics.\n",
"State symbols of Russia include the Byzantine double-headed eagle, combined with St. George of Moscow in the Russian coat of arms; these symbols date from the time of the Grand Duchy of Moscow. The Russian flag appeared in the late Tsardom of Russia period and became widely used during the era of the Russian Empire. The current Russian national anthem shares its music with the Soviet Anthem, though not the lyrics (many Russians of older generations don't know the new lyrics and sing the old ones). The Russian imperial motto \"God is with us\" and the Soviet motto \"Proletarians of all countries, unite!\" are now obsolete and no new motto has been officially introduced to replace them. The Hammer and sickle and the full Soviet coat of arms are still widely seen in Russian cities as a part of old architectural decorations. Soviet Red Stars are also encountered, often on military equipment and war memorials. The Soviet Red Banner is still honored, especially the Banner of Victory of 1945.\n",
"The \"State Anthem of the Soviet Union\" () was the national anthem of the Soviet Union and the regional anthem of the Russian Soviet Federative Socialist Republic from 1944 to 1991 and 1990 respectively, replacing \"The Internationale\". Its lyrics were written by Sergey Mikhalkov (1913–2009) in collaboration with Gabriel El-Registan (1899–1945), and its music composed by Alexander Alexandrov (1883–1946). Although the Soviet Union was dissolved in December 1991, its melody is used since 2000 in the second version Russian national anthem, which has different lyrics.\n",
"The \"State Anthem of the Soviet Union\" (; ), was the official national anthem of the Union of Soviet Socialist Republics (USSR) and the state anthem of the Russian Soviet Federative Socialist Republic from 1944 to 1991, replacing \"The Internationale\". \n",
"The anthem was presented to the government of the USSR on May 1944, three months after the decree of the Presidium of the Supreme Soviet of the USSR on February 3, 1944, \"On the State Anthems of the Soviet Republics.\" \n",
"The Russian national anthem is set to the melody of the Soviet anthem (used since 1944). As a result, there have been several controversies related to its use. For instance, some—including cellist Mstislav Rostropovich—have vowed not to stand during the anthem. Russian cultural figures and government officials were also troubled by Putin's restoration of the Soviet anthem, even with different lyrics. A former adviser to both Yeltsin and Mikhail Gorbachev, the last President of the Soviet Union, stated that, when \"Stalin's hymn\" was used as the national anthem of the Soviet Union, horrific crimes took place.\n"
] |
Was the American Western Frontier as deadly as media portrays it? With gun battles, shootings, tons of diseases, etc.? If not, how did we get this impression?
|
Disease was certainly a problem, especially for Native Americans who didn't have the same heritable immunities as people of European descent. But the violence of the American West has been dramatized quite a lot.
You're statistically more likely to be shot in Chicago today than you were to get shot in a place like Abilene.
Now, there were certainly incidents of violence. In the [Coffeyville raid](_URL_0_), for example, the Dalton Gang attempted to rob two Kansas banks but were cut down by armed citizens. There was also the [Lincoln County War](_URL_1_), the basis for the (heavily dramatized) film *Young Guns* and one of several grazing disputes that turned violent during the period. Many people, especially ranchers and property owners, kept guns, although six-shooters were rarer than you might think; shotguns and rifles were more accurate and more practical for hunting and self-defense.
But all the same, dime novelists and cowboy films have made the "Wild" West out to be a good deal more violent than it really was. Shootouts make for good drama, but they were something of a rarity.
|
[
"The image of a Wild West filled with countless gunfights was a myth generated primarily by dime-novel authors in the late 19th century. An estimate of 20,000 men in the American West were killed by gunshot between 1866 and 1900, and over 21,586 total casualties during the American Indian Wars from 1850 to 1890. The most notable and well-known took place in the states/territories of Arizona, New Mexico, Kansas, Oklahoma, and Texas. Actual gunfights in the Old West were very rare, very few and far between, but when gunfights did occur, the cause for each varied. Some were simply the result of the heat of the moment, while others were longstanding feuds, or between bandits and lawmen. Lawless violence such as range wars like the Lincoln County War and clashes with Indians were also a cause. Some of these shootouts became famous, while others faded into history with only a few accounts surviving. To prevent gunfights from happening, many cities in the American frontier, such as Dodge City and Tombstone, put up a local ordinance to prohibit firearms in the area.\n",
"This is a list of Old West gunfights. Gunfights have left a lasting impression on American frontier history; many were retold and embellished by dime novels and magazines like Harper's Weekly during the late 19th century. The most notable shootouts took place in Arizona, California, New Mexico, Kansas, Oklahoma, and Texas. Some like the Gunfight at the O.K. Corral were the outcome of long-simmering feuds and rivalries but most were the result of a confrontation between outlaws and law enforcement.\n",
"Besides being set in Canadian Prairies, the stories often contrast the American frontier with the Canadian frontier in several ways. In films such as \"Pony Soldier\" and \"Saskatchewan\" the North-West Mounted Police display reason, compassion and a sense of fair play in their dealings with Aboriginal people (First Nations) as opposed to hotheaded American visitors (often criminals), lawmen or the American Army who seem to prefer extermination with violence.\n",
"The Red Sticks subsequently attacked other forts in the area, including Fort Sinquefield. Panic spread among settlers throughout the Southwestern frontier, and they demanded US government intervention. Federal forces were busy fighting the British and Northern Woodland tribes, led by the Shawnee chief Tecumseh in the Northwest. Affected states called up militias to deal with the threat.\n",
"North Texas was in chaos, with dissenting citizens at risk from military forces. A few weeks later, more suspected Unionist supporters were hanged without trial in several north Texas communities. Five were lynched in Decatur, under the supervision of Confederate Capt. John Hale. The Great Hanging at Gainesville is believed to have been the largest single incident of vigilante violence in U.S. history.\n",
"BULLET::::- The 1974 horror movie cult classic \"The Texas Chain Saw Massacre\" was filmed at various Central Texas locations with a majority of shooting at two houses across the road from each other on an old stretch of County Road 172 later diverted in the middle 1980s on what is known as Quick Hill – now the site of the La Frontera commercial development in Round Rock. Contrary to the movie's introduction, the movie is not based on a true story. Tours of local sites are still conducted by avid film buffs. In the early 1980s, the movie's dilapidated two-story house – abandoned long before the movie's filming and across the road from the movie's main Texas Chainsaw House built in 1910 and occupied before and after filming – was torched by local area high school students leaving a charred limestone skeleton of the mostly wooden frame. In 1998, the Texas Chainsaw House was disassembled and moved to Kingsland, Texas, where it was reassembled and fully restored and operates as a restaurant at The Antlers Hotel.\n",
"The events have since become a highly mythologized and symbolic story of the Wild West, and over the years variations of the storyline have come to include some of its most famous historical figures. In addition to being one of the most well-known range wars of the American frontier, its themes, especially class warfare, served as a basis for numerous popular novels, films, and television shows in the Western genre.\n"
] |
is it possible to move an object in circular motion using magnets?
|
Absolutely, this is how most electric motors work. They have coils of wire that act as electromagnets: running a current through a coil makes it attract or repel permanent magnets mounted on a part that spins freely (the rotor). (Unless I misunderstood your question!)
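To put a rough number on that interaction, the turning force on a flat current-carrying loop in a uniform magnetic field is τ = N·I·A·B·sin θ. Here's a minimal sketch in Python (the coil and field values are made-up example numbers, not from any particular motor):

```python
import math

def loop_torque(turns, current_a, area_m2, field_t, angle_rad):
    """Torque (N·m) on a flat coil of `turns` loops carrying `current_a` amps,
    with face area `area_m2` m^2, in a uniform field of `field_t` tesla,
    tilted `angle_rad` radians from alignment: tau = N * I * A * B * sin(theta)."""
    return turns * current_a * area_m2 * field_t * math.sin(angle_rad)

# Example: 100-turn coil, 2 A, 10 cm^2 area, 0.5 T field, at 90 degrees
tau = loop_torque(100, 2.0, 10e-4, 0.5, math.pi / 2)
print(tau)  # 0.1 N·m at maximum torque
```

Real motors use a commutator or electronic switching to flip the current direction just as the torque would drop to zero, so the rotor keeps spinning in one direction.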
|
[
"Relative motion between the magnetic/abrasive particle mixture and the workpiece is essential for material removal. There are several options for achieving the necessary motion. A common setup is the rotation of the magnetic pole tip. This is done by either rotating the entire permanent magnet setup or by rotating only the steel pole. Another method which is commonly utilized in internal finishing is the rotation of the workpiece, this is unfortunately limited to axial symmetric workpieces. In addition to rotational motion there is oscillatory and vibrational configurations that are applicable.\n",
"Note that this rotation is kinematic, rather than physical, because usually when a rigid object moves freely in space its rotation is independent of its translation. The exception would be if the object's rotation is physically constrained to align itself with the object's translation, as is the case with the cart of a roller coaster.\n",
"The seemingly mysterious ability of magnets to influence motion at a distance without any apparent energy source has long appealed to inventors. One of the earliest examples of a magnetic motor was proposed by Wilkins and has been widely copied since: it consists of a ramp with a magnet at the top, which pulled a metal ball up the ramp. Near the magnet was a small hole that was supposed to allow the ball to drop under the ramp and return to the bottom, where a flap allowed it to return to the top again. The device simply could not work. Faced with this problem, more modern versions typically use a series of ramps and magnets, positioned so the ball is to be handed off from one magnet to another as it moves. The problem remains the same.\n",
"Conventional magnetic materials, like the magnetic tape used in twistor, allowed the magnetic signal to be placed at any location and to move in any direction. Paul Charles Michaelis working with permalloy magnetic thin films discovered that it was possible to move magnetic signals in orthogonal directions within the film. This seminal work led to a patent application. The memory device and method of propagation were described in a paper presented at the 13th Annual Conference on Magnetism and Magnetic Materials, Boston, Massachusetts, 15 September 1967. The device used anisotropic thin magnetic films that required different magnetic pulse combinations for orthogonal propagation directions. The propagation velocity was also dependent on the hard and easy magnetic axes. This difference suggested that an isotropic magnetic medium would be desirable.\n",
"Some suggestion of movement could be achieved by alternating between pictures of different phases of a motion, but most magic lantern \"animations\" used two glass slides projected together - one with the stationary part of the picture and the other with the part that could be set in motion by hand or by a simple mechanism.\n",
"A moving magnet actuator is a type of electromagnetic linear actuator. It typically consists of an arrangement of a permanent magnet and coil, arranged so that currents in the coil generate a pair of equal and opposite forces between the coil and magnet. The main difference between this and a voice coil actuator is that in a moving magnet actuator, the magnet is intended to move and the coil to stay still, as opposed to vice versa.\n",
"Sakharov's concern about the electrodes led him to consider using magnetic confinement instead of electrostatic. In the case of a magnetic field, the particles will circle around the lines of force. As the particles are moving at high speed, their resulting paths look like a helix. If one arranges a magnetic field so lines of force are parallel and close together, the particles orbiting adjacent lines may collide, and fuse.\n"
] |
how do doctors perform 20+ hour surgeries? don't they get mentally and physically exhausted?
|
Most surgeries (when things go to plan) take around 30 min-2 hours. Some major surgeries e.g. a liver transplant might take ~6-8 hours.
20+ hour surgeries would be exceptional e.g. conjoined twin separations where you actually need multiple different teams e.g. plastic surgeons, neurosurgeons etc.
Usually surgery is a very controlled situation, so it would be theoretically possible to take a break. A short one might be reasonable (e.g. 20-30 min in a 6+ hour surgery), but you don't want to leave the patient open/unconscious for too long.
In most specialty surgeries you would have someone else who can take over for some of it.
|
[
"The surgery itself along with recovery time depends on the patient. Robotic surgery can take approximately 6-12 hours. A patient's time in the hospital can take 7–10 days if no complications present themselves. Depending on the type of surgery the abdominal incision for this surgery may be up to eight inches in length and is typically closed with staples on the outside and several layers of dissolvable stitches on the inside. After surgery, patients will have three drainage tubes place while tissues heal: one through the newly created stoma, one through another temporary opening in the abdominal wall into the pouch, and an SP tube (to drain non-specific post-surgical abdominal fluid). In the hospital, the SP tube and external staples will be removed, after several days. The remaining two tubes will each be connected to collection bags worn on each leg and the patient is usually sent home like this. After sufficient healing, and another doctor's visit, the tube will be removed from the stoma. The patient will now begin to catheterize the pouch every two hours. Since one other tube will still be in place, patients can still sleep through the night, since a larger collection bag is attached to that tube at night time. After approximately one month, patients will return to the hospital for a special x-ray. Dye will be instilled into the pouch to verify that there is no leakage of urine. If there is no leakage, this last tube will be removed. Emptying time now may be increased to 3 hours, however, now the patient will need to wake up during the night (every 3 hours) to empty the pouch. Over time, emptying time can possibly be increased up to 4–6 hours. Although to decrease the potential for infections and deterioration of the pouch it is best to continue to cath every 3-4 hours. The pouch will continue to expand and will reach its final size at approximately six months. The pouch will then hold up to 1,200 cubic centimeters (cc). 
Depending on your doctor's orders, each day, the pouch may need to be irrigated with 60 cc of sterile water in an effort to remove membrane mucus, salts, and bacteria. It can take 6-12 months for your body to adjust to the Indiana pouch.\n",
"In one 2007 prospective study, the mean time for procedure was 6.8 minutes (range = 5–18 minutes). for a trained physician to perform and can be performed in a physician's office. General anesthesia is not required. Despite this, some women have reported considerable pain during the procedure.\n",
"One of the available treatments is performed by a dermatologist, using a CO laser to vaporise the papules. This normally takes only a few minutes to perform. It is simple and does not normally require a hospital stay; discomfort should be minimal and the expected recovery time is one to two weeks. Another procedure involves electrosurgery performed with a hyfrecator and should take less than an hour to perform.\n",
"General anesthesia is required for many surgeries, but there may be lingering fatigue, sedation, and/or drowsiness after surgery has ended that lasts for hours to days. In outpatient settings wherein patients are discharged home after surgery, this sedation, fatigue and occasional dizziness is problematic. As of 2006, modafinil had been tested in one small (N=34) double-blind randomized controlled trial for this use.\n",
"The recovery after the surgery typically requires only mild pain relief. Post-operatively, a surgical dressing is required for several days and compression vest for a month following the procedure. A check-up appointment is carried out after a week for puncture of seroma. If the surgery has minimal complications, the patient can resume normal activities quickly, returning to work after 15 days and participating in any sporting activities after three months.\n",
"The procedure takes roughly 2 hours but can vary on physician and patient characteristics. Patients usually spend 1–3 days in the hospital before going home, and usually undergo a swallow study prior to resuming oral feeding. Patients may return to work and full activity immediately upon discharge from the hospital. Long-term patient satisfaction is similar following POEM compared to standard laparoscopic Heller myotomy.\n",
"After surgery, the patient will be transferred to a postanesthesia care unit so his or her vital signs can be closely monitored to detect anesthesia- or surgery-related complications. Pain medication may be administered if necessary. After patients are completely awake, they are moved to a hospital room to recover. Most individuals will be offered clear liquids the day after the surgery, then progress to a regular diet when the intestines start to function properly. Patients are recommended to sit up on the edge of the bed and walk short distances several times a day. Moving is mandatory and pain medication may be given if necessary. Full recovery from appendectomies takes about four to six weeks but can be prolonged to up to eight weeks if the appendix had ruptured.\n"
] |
in special relativity, how is it determined which reference point will have time slowed down?
|
> Person B gets in a super space ship that launches up and then accelerates to 0.75 times the speed of light and travels for 1 year, then turns around, comes back, and lands on Earth.
> Is time slower for one than the other?
Actually, once the traveler lands, he and the person who stayed on Earth will experience time at exactly the same rate. However, the traveler will have aged less than the earthling.
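The size of the effect comes from the Lorentz factor, γ = 1/√(1 − v²/c²). A quick Python sketch for the 0.75c trip described above (assuming, for illustration, that the two-year round trip is measured on the ship's clock):

```python
import math

def lorentz_gamma(v_frac_c):
    """Lorentz factor for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_frac_c ** 2)

gamma = lorentz_gamma(0.75)            # about 1.512 at 0.75c
ship_years = 2.0                       # 1 year out + 1 year back, on the ship's clock
earth_years = ship_years * gamma       # elapsed time for the person who stayed home

print(round(gamma, 3), round(earth_years, 2))  # 1.512 3.02
```

So the traveler's clock shows 2 years while roughly 3 years pass on Earth - both clocks tick normally again once they're reunited, but the accumulated difference remains. (This sketch ignores the acceleration phases; it's the turnaround that breaks the symmetry in the full twin-paradox treatment.)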
|
[
"Special relativity indicates that, for an observer in an inertial frame of reference, a clock that is moving relative to him will be measured to tick slower than a clock that is at rest in his frame of reference. This case is sometimes called special relativistic time dilation. The faster the relative velocity, the greater the time dilation between one another, with the rate of time reaching zero as one approaches the speed of light (299,792,458 m/s). This causes massless particles that travel at the speed of light to be unaffected by the passage of time.\n",
"In special relativity, time dilation is most simply described in circumstances where relative velocity is unchanging. Nevertheless, the Lorentz equations allow one to calculate proper time and movement in space for the simple case of a spaceship which is applied with a force per unit mass, relative to some reference object in uniform (i.e. constant velocity) motion, equal to \"g\" throughout the period of measurement.\n",
"Common sense would dictate that, if the passage of time has slowed for a moving object, said object would observe the external world's time to be correspondingly sped up. Counterintuitively, special relativity predicts the opposite. When two observers are in motion relative to each other, each will measure the other's clock slowing down, in concordance with them being moving relative to the observer's frame of reference.\n",
"Albert Einstein's special theory of relativity (and, by extension, the general theory) predicts time dilation that could be interpreted as time travel. The theory states that, relative to a stationary observer, time appears to pass more slowly for faster-moving bodies: for example, a moving clock will appear to run slow; as a clock approaches the speed of light its hands will appear to nearly stop moving. The effects of this sort of time dilation are discussed further in the popular \"twin paradox\". These results are experimentally observable and affect the operation of GPS satellites and other high-tech systems used in daily life.\n",
"A Minkowski spacetime isometry has the property that the interval between events is left invariant. For example, if everything were postponed by two hours, including the two events and the path you took to go from one to the other, then the time interval between the events recorded by a stop-watch you carried with you would be the same. Or if everything were shifted five kilometres to the west, or turned 60 degrees to the right, you would also see no change in the interval. It turns out that the proper length of an object is also unaffected by such a shift. A time or space reversal (a reflection) is also an isometry of this group.\n",
"There is a similar idea of time dilation occurrence in Einstein's theory of special relativity (which deals with neither gravity nor the idea of curved spacetime). Such time dilation appears in the Rindler coordinates, attached to a uniformly accelerating particle in a flat spacetime. Such a particle would observe time passing faster on the side it is accelerating towards and more slowly on the opposite side. From this apparent variance in time, Einstein inferred that change in velocity affects the relativity of simultaneity for the particle. Einstein's equivalence principle generalizes this analogy, stating that an accelerating reference frame is locally indistinguishable from an inertial reference frame with a gravity force acting upon it. In this way, the Gravity Probe A was a test of the equivalence principle, matching the observations in the inertial reference frame (of special relativity) of the Earth's surface affected by gravity, with the predictions of special relativity for the same frame treated as being accelerating upwards with respect to free fall reference, which can thought of being inertial and gravity-less.\n",
"The transformations are derived using just the principle of relativity and have a maximal speed of 1, which is quite unlike \"single postulate\" derivations of the Lorentz transformations in which you end up with a parameter that may be zero. So this is not the same as other \"single postulate\" derivations. However the relationship of taiji time \"w\" to standard time \"t\" must still be found, otherwise it would not be clear how an observer would measure taiji time. The taiji transformations are then combined with Maxwell's equations to show that the speed of light is independent of the observer and has the value 1 in taiji speed (i.e. it has the maximal speed). This can be thought of as saying: a time of 1 metre is the time it takes for light to travel 1 metre. Since we can measure the speed of light by experiment in m/s to get the value c, we can use this as a conversion factor. i.e. we have now found an operational definition of taiji time: w=ct.\n"
] |
what is asmr exactly and how is it supposedly pleasant to the ears?
|
This is the best answer I've ever found.
_URL_0_
Plus it comes from a great comic to read.
|
[
"In addition to the effectiveness of specific auditory stimuli, many subjects report that ASMR is triggered by the receipt of tender personal attention, often comprising combined physical touch and vocal expression, such as when having their hair cut, nails painted, ears cleaned, or back massaged, whilst the service provider speaks quietly to the recipient. Furthermore, many of those who have experienced ASMR during these and other comparable encounters with a service provider report that watching an \"ASMRtist\" simulate the provision of such personal attention, acting directly to the camera as if the viewer were the recipient of a simulated service, is sufficient to trigger it.\n",
"Many of those who experience ASMR report that non-vocal ambient noises performed through human action are also effective triggers of ASMR. Examples of such noises include fingers scratching or tapping a surface, brushing hair, hands rubbing together or manipulating fabric, the crushing of eggshells, the crinkling and crumpling of a flexible material such as paper, or writing. Many YouTube videos that are intended to trigger ASMR responses feature a single person performing these actions and the sounds that result.\n",
"Some people have sought to relate ASMR to misophonia, which literally means the 'hatred of sound', but manifests typically as 'automatic negative emotional reactions to particular sounds – the opposite of what can be observed in reactions to specific audio stimuli in ASMR'.\n",
"ASMR is usually precipitated by stimuli referred to as 'triggers'. ASMR triggers, which are most commonly auditory and visual, may be encountered through the interpersonal interactions of daily life. Additionally, ASMR is often triggered by exposure to specific audio and video. Such media may be specially made with the specific purpose of triggering ASMR or originally created for other purposes and later discovered to be effective as a trigger of the experience.\n",
"What is ASMR? The current foundation definition of Autonomous Sensory Meridian Response includes soothing, satisfying and comforting emotional and/or physical responses to audio and/or visual triggers. Anecdotal and scientific studies indicate most people are ASMR responsive with many users reporting sleep, physical and emotional relief.\n",
"Endaural phenomena are sounds that are heard without any external acoustic stimulation. Endaural means \"in the ear\". Phenomena include transient ringing in the ears (that sound like sine tones), white noise-like sounds, and subjective tinnitus. Endaural phenomena need to be distinguished from otoacoustic emissions, in which a person's ear emits sounds. The emitter typically cannot hear the sounds made by his or her ear. Endaural phenomena also need to be distinguished from auditory hallucinations, which are sometimes associated with psychosis.\n",
"Few legitimate studies have been done on ASMR, and even fewer have discussed the link between it and frisson specifically. At this time, much of the data on ASMR comes from primarily anecdotal sources. Although ASMR and frisson are \"interrelated in that they appear to arise through similar physiological mechanisms\", individuals who have experienced both describe them as qualitatively different, with different kinds of triggers. A 2018 fMRI study showed that the major brain regions already known to be activated in frisson are also activated in ASMR, and suggests that \"the similar pattern of activation of both ASMR and frisson could explain their subjective similarities, such as their short duration and tingling sensation\".\n"
] |
How quickly did prejudice towards Japanese-Americans by the general American population end after WW2?
|
There's a great book about this topic called "America's Geisha Ally: Reimagining the Japanese Enemy" by Naoko Shibusawa. It goes into great detail about the United States government entering into the "reverse course" following WWII. What this basically means is that during and directly after the war, the general consensus of the government and populace of the United States was that Japan was going to pay dearly for its aggressive war. However, because of the looming threat of communism in the far east, and the capitalist framework that Japan had in place (infrastructure, skilled and disciplined workforce, industry, etc.), these plans were scrapped in favor of a program that would promote Japanese economic strength and stability. The United States went to great lengths to basically 'retrain' its populace from seeing the Japanese as a hated natural enemy to seeing them as effeminate and weak and in need of pity. Great book if you get a chance to check it out!
|
[
"The record of the Japanese Americans serving in the 442nd and in the Military Intelligence Service (U.S. Pacific Theater forces in World War II) helped change the minds of anti-Japanese American critics in the U.S. and resulted in easing of restrictions and the eventual release of the 120,000-strong community well before the end of World War II.\n",
"By the mid-1900s, generations of Asian Americans had built enduring communities throughout the United States. However, Japan’s attack on the U.S. naval base at Pearl Harbor in 1941 revived existing hostility towards Japanese Americans. In response to public outcry against the attack and widespread fear of Japanese American disloyalty, President Roosevelt signed Executive Order 9066 which forcibly relocated over 120,000 Japanese Americans from their homes on the West Coast to one of ten Relocation Centers. The Minidoka National Historic Site is one of the places that interprets this largest forced relocation of American citizens.\n",
"After Japan's December 1941 attack on Pearl Harbor pulled the United States into World War II, Japanese Americans quickly became conflated with the enemy, in large part due to existing prejudices and competing business interests. Especially on the West Coast, where the mainland Japanese American population and the nativist groups who lobbied for their incarceration were concentrated, political leaders and well-connected citizens pushed for a solution to the \"Japanese problem.\" On February 19, 1942, President Franklin Roosevelt issued Executive Order 9066, authorizing military commanders to designate areas from which \"any or all persons may be excluded.\" Over the next few months some 112,000 to 120,000 West Coast Japanese were forcibly removed to inland concentration camps. Two-thirds of them were American citizens born in the United States. \n",
"The Japanese-American community in L.A. was greatly impacted since Japan's attack on Pearl Harbor pulled the U.S. into World War II, and America feared that the fifth column was widespread among the community. In response, President Franklin D. Roosevelt issued Executive Order 9066, authorizing military commanders to exclude \"any or all persons\" from certain areas in the name of national defense. The Western Defense Command began ordering Japanese Americans living on the West Coast to present themselves for \"evacuation\" from the newly created military zones. This included many Los Angeles families, of which 80,000 were relocated to the Japanese-American internment camps throughout the duration of the war.\n",
"The effort to rebuild for the Japanese Americans in America after the war was difficult because memories of imprisonment still surfaced. Many wanted justification for the harsh conditions they experienced during World War II.\n",
"After the attack by the Japanese Empire on Pearl Harbor on December 7, 1941, American attitudes towards people of Japanese ancestry indicated a strong sense of racism. This sentiment became further intensified by the media of the time, which played upon issues of racism on the West Coast, the social fear of the Japanese people, and citizen-influenced farming conflicts with the Japanese people. This, along with the attitude of the leaders of the Western Defense Command and the lack of perseverance by the Justice Department to protect the civil rights of Japanese Americans led to the successful relocation of both native and foreign born Japanese.\n",
"Japanese Americans and other Asians in the U.S. had suffered for decades from prejudice and racially-motivated fear. Laws preventing Asian Americans from owning land, voting, testifying against whites in court, and other racially discriminatory laws existed long before World War II. Additionally, the FBI, Office of Naval Intelligence and Military Intelligence Division had been conducting surveillance on Japanese American communities in Hawaii and the continental U.S. from the early 1930s. In early 1941, President Roosevelt secretly commissioned a study to assess the possibility that Japanese Americans would pose a threat to U.S. security. The report, submitted exactly one month before Pearl Harbor was bombed, found that, \"There will be no armed uprising of Japanese\" in the United States. \"For the most part,\" the Munson Report said, \"the local Japanese are loyal to the United States or, at worst, hope that by remaining quiet they can avoid concentration camps or irresponsible mobs.\" A second investigation started in 1940, written by Naval Intelligence officer Kenneth Ringle and submitted in January 1942, likewise found no evidence of fifth column activity and urged against mass incarceration. Both were ignored.\n"
] |
When and why did the US stop allowing (literal) boatloads of immigrants to just show up at a port and begin living in the US?
|
It wasn't just one single law but rather a series of laws. The first was the Page Act of 1875, which primarily targeted Asians, particularly Chinese people, who were immigrating to the western United States to work menial jobs like building railroads. Just like we see in the debates today about Hispanic people coming to the United States to work mostly low-wage jobs, there were concerns about taking jobs away from white Americans, as well as about diseases, immorality, and integration of the Chinese into American culture.
Another major law was the Immigration Act of 1924, which severely limited the number of people that could come from any one country to 2% of the number of people from that country that had already immigrated. This was similar to the Page Act in that it was designed to preserve a certain ethnic makeup of the country. But these laws continue to change over time and even now we see debates about how to "fix" them. The shift from almost entirely open borders to what we have now was very slow and incremental.
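The 2% rule is simple arithmetic. As a hypothetical illustration (the function name and input numbers are made up, not from the source), the per-country cap worked like this:

```python
def quota_1924(prior_immigrants_from_country):
    """Annual per-country cap under the Immigration Act of 1924:
    2% of the number already in the U.S. from that country."""
    return int(prior_immigrants_from_country * 0.02)

# Hypothetical example: a country with 100,000 prior immigrants
# would be capped at 2,000 admissions per year.
print(quota_1924(100000))
```

Because the quota scaled with the existing population of each national group, it froze the country's ethnic makeup roughly where it already stood, which was the point.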
|
[
"Changes in immigration laws in the United States in the 1920s greatly restricted the number of immigrants allowed to enter. The law limited the number of immigrants to about 160,000 per year in 1924. This led to a major reduction in the immigrant trade for the shipping lines, forcing them to cater to the tourist trade to survive. At the turn of 1927–28, \"Olympic\" was converted to carry tourist third cabin passengers as well as first, second and third class. Tourist third cabin was an attempt to attract travellers who desired comfort without the accompanying high ticket price. New public rooms were constructed for this class, although tourist third cabin and second class would merge to become 'tourist' by late 1931.\n",
"On March 2, 1819, the United States government passed the first legislation regulating the conditions of sea transportation for migrants. The legislation is known as the Steerage Act of 1819 and also as the Manifest of Immigrants Act, the latter name because one of its provisions was a requirement for ships to submit a manifest of immigrants on board.\n",
"In the United States, the Merchant Marine Act of 1920 (Jones Act) requires that all goods transported by water between U.S. ports be carried on U.S.-flag ships, constructed in the United States, owned by U.S. citizens, and crewed by U.S. citizens and U.S. permanent residents. The Passenger Vessel Services Act of 1886 states that no foreign vessels shall transport passengers between ports or places in the United States, either directly or by way of a foreign port.\n",
"As they had in the mid-19th century, immigrants helped revitalize downtown in the early 21st century. This time they were from Latin America, primarily Ecuador and Brazil, working as day labor at construction sites all over Fairfield County. Not all of them were in the U.S. legally, and this created tensions between their community and Mayor Mark Boughton, who unsuccessfully asked Governor Jodi Rell to have city police deputized as Bureau of Citizenship and Immigration Services (BCIS) agents so they could legally enforce federal immigration laws. The immigrant community was in turn angered by police efforts to close down their volleyball games and sting operations that turned some of them over to the BCIS. In 2005 they staged a mile-long march down Main Street in protest.\n",
"At the time there was a movement to restrict immigration into the US, as it was widely believed that “morons” or feebleminded individuals were the cause of many societal problems and that border authorities were weakening the nation by allowing far too many of these individuals to enter the country. Ellis Island on New York Harbor was the hub for arrival when the immigration station there was rebuilt in 1900. At first, emigrants were lined up and inspected primarily for medical ailments, initially based on the individual's overall appearance. More scrutizing inspections were conducted if an individual was flagged for some ailment. For the most part, until this time, any concern regarding mental disorders was limited to psychiatric illnesses. However, as Francis Galton’s ideas regarding eugenics. became more popular in the early 1900s, concerns for that feebleminded individuals were weakening society grew, and mental deficiency among emigrants also became a concern. Another important contributing factor was the development of intelligence tests during this time. Both Alfred Binet and his student Theodore Simon were leaders in this development. Henry Goddard, a prominent eugenicist of the time who had been using Binet and Simon’s scale to measure intelligence in adults, suggested that theses could be used to identify feebleminded or mentally deficient individuals at Ellis Island who posed a threat to the integrity of society. By 1910, the concerns for mentally defective people entering the United States had grown to such an extent that the officials at Ellis Island invited Henry Goddard to teach the physicians about intelligence testing. However, after spending an entire day there, Goddard had no recommendations for the physicians, as he was impressed by both the size of the problem as well as the physicians ability to detect defect given the number of individuals coming through the immigration station.\n",
"Also during this period, Canada became a port of entry for many Europeans seeking to gain entry into the U.S. Canadian transportation companies advertised Canadian ports as a hassle-free way to enter the U.S. especially as the U.S. began barring entry to certain ethnicities. The U.S. and Canada mitigated this situation in 1894 with the Canadian Agreement which allowed for U.S. immigration officials to inspect ships landing at Canadian ports for immigrants excluded from the U.S. If found, the transporting companies were responsible for shipping the persons back.\n",
"During the 1880s and 1890s, the U.S. began tightening its immigration laws by barring certain ethnic groups from entering, for example the Chinese. Transportation companies that brought these barred individuals to the U.S. would be responsible for their return to their country of origin. Transportation companies, however, got around this restriction by landing barred people at Canadian ports. The immigrants would then come into the United States through the Canada–United States border. During the mid-to-late nineteenth century, the Canadian immigration route was preferred for Scandinavians, Russians, and other northern Europeans immigrating to Michigan, Wisconsin, Illinois, or other states on the Upper Great Plains. By 1892, Canadian carriers were advertising in Europe that entry at Canadian ports was a hassle-free way to enter the U.S.\n"
] |
Does quantum mechanics apply to energy?
|
That's not really what quantum mechanics is about. Energy is conserved in quantum systems unless there is an external reason for it not to be.
|
[
"Quantum mechanics is the science of the very small. It explains the behavior of matter and its interactions with energy on the scale of atoms and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to two major revolutions in physics that created a shift in the original scientific paradigm: the \"theory of relativity\" and the development of \"quantum mechanics\". This article describes how physicists discovered the limitations of classical physics and developed the main concepts of the quantum theory that replaced it in the early decades of the 20th century. It describes these concepts in roughly the order in which they were first discovered. For a more complete history of the subject, see \"History of quantum mechanics\".\n",
"Quantum mechanics is a set of principles describing physical reality at the atomic level of matter (molecules and atoms) and the subatomic particles (electrons, protons, neutrons, and even smaller elementary particles such as quarks). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy as described in the wave–particle duality.\n",
"Quantum mechanics is the branch of physics treating atomic and subatomic systems and their interaction with radiation. It is based on the observation that all forms of energy are released in discrete units or bundles called \"quanta\". Remarkably, quantum theory typically permits only probable or statistical calculation of the observed features of subatomic particles, understood in terms of wave functions. The Schrödinger equation plays the role in quantum mechanics that Newton's laws and conservation of energy serve in classical mechanics—i.e., it predicts the future behavior of a dynamic system—and is a wave equation that is used to solve for wavefunctions.\n",
"In physics, quantum dynamics is the quantum version of classical dynamics. Quantum dynamics deals with the motions, and energy and momentum exchanges of systems whose behavior is governed by the laws of quantum mechanics. Quantum dynamics is relevant for burgeoning fields, such as quantum computing and atomic optics.\n",
"Quantum mechanics is the firm foundation on which all of physical science rests. It is the study of a system in terms of its most fundamental (and tiny) constituents such as electrons, neutrons, photons: particles that also act like waves—or is it waves having the properties of particles? On this atomic scale, which is governed by Planck's constant, the properties of a system are very different than that of bulk matter. This diverging behavior leads to emerging phenomena that cannot be explained or accounted for in classical terms.\n",
"Quantum mechanics (QM; also known as quantum physics, quantum theory, the wave mechanical model, or matrix mechanics), including quantum field theory, is a fundamental theory in physics which describes nature at the smallest scales of energy levels of atoms and subatomic particles.\n",
"All those quantum phenomena and many more are the basis of technologies that are surreptitiously invading our daily life. Quantum mechanics is what drives lasers or determines the bonding of a drug to a protein. It is the basis of light-matter interactions and spectroscopic techniques.\n"
] |
What terms did people use to describe rotation before clocks were common?
|
In Northern Europe, people would refer to the direction of the sun, indicating a direction was either "sunwise" or "against the sun." In the north, if one faces south to watch the path the sun takes, it moves in an arc from left to right: "sunwise," or in today's term "clockwise." Clocks moved in the direction of the sun because that was the preferred, "safe" direction. Moving against the sun, today's "counterclockwise," was regarded as going against the natural order of things. It was potentially dangerous in magical terms to do things, such as stirring food or walking around a church, in a direction that was "against the sun."
|
[
"Before clocks were commonplace, the terms \"sunwise\" and \"deasil\", \"deiseil\" and even \"deocil\" from the Scottish Gaelic language and from the same root as the Latin \"dexter\" (\"right\") were used for clockwise. \"Widdershins\" or \"withershins\" (from Middle Low German \"weddersinnes\", \"opposite course\") was used for counterclockwise.\n",
"Clocks traditionally follow this sense of rotation because of the clock's predecessor: the sundial. Clocks with hands were first built in the Northern Hemisphere (see \"Clock\"), and they were made to work like horizontal sundials. In order for such a sundial to work north of the equator during spring and summer, and north of the Tropic of Cancer the whole year, the noon-mark of the dial must be placed northward of the pole casting the shadow. Then, when the Sun moves in the sky (from east to south to west), the shadow, which is cast on the sundial in the opposite direction, moves with the same sense of rotation (from west to north to east). This is why hours must be drawn in horizontal sundials in that manner, and why modern clocks have their numbers set in the same way, and their hands moving accordingly. For a vertical sundial (such as those placed on the walls of buildings, the dial being \"below\" the post), the movement of the sun is from right to top to left, and, accordingly, the shadow moves from left to down to right, i.e., counterclockwise. This effect is caused by the plane of the dial having been rotated through the plane of the motion of the sun and thus the shadow is observed from the other side of the dial's plane and is observed as moving in the opposite direction. Some clocks were constructed to mimic this. The best-known surviving example is the astronomical clock in the Münster Cathedral, whose hands move counterclockwise.\n",
"The balance wheel appeared with the first mechanical clocks, in 14th century Europe, but it seems unknown exactly when or where it was first used. It is an improved version of the foliot, an early inertial timekeeper consisting of a straight bar pivoted in the center with weights on the ends, which oscillates back and forth. The foliot weights could be slid in or out on the bar, to adjust the rate of the clock. The first clocks in northern Europe used foliots, while those in southern Europe used balance wheels. As clocks were made smaller, first as bracket clocks and lantern clocks and then as the first large watches after 1500, balance wheels began to be used in place of foliots. Since more of its weight is located on the rim away from the axis, a balance wheel could have a larger moment of inertia than a foliot of the same size, and keep better time. The wheel shape also had less air resistance, and its geometry partly compensated for thermal expansion error due to temperature changes.\n",
"Before the late 14th century, a fixed hand (often a carving literally shaped like a hand) indicated the hour by pointing to numbers on a rotating dial; after this time, the current convention of a rotating hand on a fixed dial was adopted. Minute hands (so named because they indicated the small, or \"minute\", divisions of the hour) only came into regular use around 1690, after the invention of the pendulum and anchor escapement increased the precision of time-telling enough to justify it. In some precision clocks, a third hand, which rotated once a minute, was added in a separate subdial. This was called the \"second-minute\" hand (because it measured the \"secondary minute\" divisions of the hour), which was shortened to \"second\" hand. The convention of the hands moving clockwise evolved in imitation of the sundial. In the Northern hemisphere, where the clock face originated, the shadow of the gnomon on a horizontal sundial moves clockwise during the day. This was also why noon or 12 o'clock was conventionally located at the top of the dial.\n",
"Occasionally, clocks whose hands revolve counterclockwise are nowadays sold as a novelty. Historically, some Jewish clocks were built that way, for example in some synagogue towers in Europe such as the Jewish Town Hall in Prague, to accord with right-to-left reading in the Hebrew language. In 2014 under Bolivian president Evo Morales, the clock outside the Legislative Assembly in Plaza Murillo, La Paz, was shifted to counterclockwise motion to promote indigenous values.\n",
"Prior to the invention of accurate clocks, in the mid-17th Century, sundials were the only timepieces in common use, and were considered to tell the \"right\" time. The Equation of Time was not used. After the invention of good clocks, sundials were still considered to be correct, and clocks usually incorrect. The Equation of Time was used in the opposite direction from today, to apply a correction to the time shown by a clock to make it agree with sundial time. Some elaborate \"equation clocks\", such as one made by Joseph Williamson in 1720, incorporated mechanisms to do this correction automatically. (Williamson's clock may have been the first-ever device to use a differential gear.) Only after about 1800 was uncorrected clock time considered to be \"right\", and sundial time usually \"wrong\", so the Equation of Time became used as it is today.\n",
"The earliest definitely verified use of a differential was in a clock made by Joseph Williamson in 1720. It employed a differential to add the equation of time to local mean time, as determined by the clock mechanism, to produce solar time, which would have been the same as the reading of a sundial. During the 18th Century, sundials were considered to show the \"correct\" time, so an ordinary clock would frequently have to be readjusted, even if it worked perfectly, because of seasonal variations in the equation of time. Williamson's and other equation clocks showed sundial time without needing readjustment. Nowadays, we consider clocks to be \"correct\" and sundials usually incorrect, so many sundials carry instructions about how to use their readings to obtain clock time.\n"
] |
how do we know cold is the absence of heat and not the other way around?
|
Temperature is a measurement of energy, specifically kinetic energy on a molecular scale, with warmer things having more of this energy than colder things.
Because we warm something up by adding energy, we define warm/hot as the presence of this energy. Since there is nothing we can "add" to make an object colder, cold is inherently the absence of this energy, or in other words, the absence of heat.
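This "temperature is molecular kinetic energy" picture can be made concrete with the standard ideal-gas relation E = (3/2)·k_B·T (a textbook formula, not from the answer above): average energy scales directly with temperature and bottoms out at zero, which is why there is a coldest possible temperature but no hottest one.

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_kinetic_energy(temp_kelvin):
    """Average translational kinetic energy per molecule of an ideal gas."""
    return 1.5 * k_B * temp_kelvin

# Warmer gas -> more molecular kinetic energy; at absolute zero the
# average translational kinetic energy reaches its minimum of zero.
print(mean_kinetic_energy(300.0))  # room temperature, roughly 6.2e-21 J
print(mean_kinetic_energy(0.0))    # absolute zero
```

There is no negative amount of kinetic energy to add, which is the quantitative version of "cold is only the absence of heat."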
|
[
"Cold is the presence of low temperature, especially in the atmosphere. In common usage, cold is often a subjective perception. A lower bound to temperature is absolute zero, defined as 0.00K on the Kelvin scale, an absolute thermodynamic temperature scale. This corresponds to on the Celsius scale, on the Fahrenheit scale, and on the Rankine scale.\n",
"Other cultures have developed different naturalistic disease theories. One specific example lies in Latin cultures, which place \"hot\" or \"cold\" classifications on things like food, drink, and environmental conditions. They believe that the combination of hot and cold substances will cause an unbalanced system that leads to disease. Therefore, one is expected not to have a cold drink after taking a hot bath.\n",
"Apparent temperature is the temperature equivalent perceived by humans, caused by the combined effects of air temperature, relative humidity and wind speed. The measure is most commonly applied to the perceived outdoor temperature. However it also applies to indoor temperatures, especially saunas and when houses and workplaces are not sufficiently heated or cooled.\n",
"One of the first to discuss the possibility of an absolute minimal temperature was Robert Boyle. His 1665 \"New Experiments and Observations touching Cold\", articulated the dispute known as the \"primum frigidum\". The concept was well known among naturalists of the time. Some contended an absolute minimum temperature occurred within earth (as one of the four classical elements), others within water, others air, and some more recently within nitre. But all of them seemed to agree that, \"There is some body or other that is of its own nature supremely cold and by participation of which all other bodies obtain that quality.\"\n",
"Wagoner is credited with discovering new ways that humans perceive hot and cold in the skin senses. \"He isolated vasodilation and vasoconstriction as mechanisms that signal the brain that we are hot and cold.\" In addition, he discovered a key homeostasis feedback mechanism that helps humans maintain survival temperature.\n",
"Cold and heat adaptations in humans are a part of the broad adaptability of \"Homo sapiens\". Adaptations in humans can be physiological, genetic, or cultural, which allow people to live in a wide variety of climates. There has been a great deal of research done on developmental adjustment, acclimatization, and cultural practices, but less research on genetic adaptations to cold and heat temperatures.\n",
"It differs from other forms of electromagnetic radiation such as x-rays, gamma rays, microwaves, radio waves, and television rays that are not related to temperature. Scientists have found that all bodies at a temperature above absolute zero emit thermal radiation. People are constantly radiating their body heat, but at different rates. From these values, the rate of heat loss from a person is almost four times as large in the winter than in the summer, which explains the “chill” we feel in the winter even if the thermostat setting is kept the same.\n"
] |
what does being turing complete mean?
|
Turing described a minimal, hypothetical computer, known as a Turing Machine, which he used to mathematically prove results. It wasn't intended as a practical device; rather, it was made as simple as possible to make proofs easier. A computing device that is capable of doing everything a Turing Machine can do is Turing Complete. One way to show that a computer or programming environment is Turing Complete is to implement a Turing Machine emulator on it.
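As a sketch of that last point, here is a minimal Turing machine emulator in Python. The `flip` transition table is a made-up example machine (it flips every bit until it reaches a blank cell), not anything from the source:

```python
# Minimal Turing machine emulator. A transition table maps
# (state, symbol) -> (new_symbol, move, new_state). Any environment
# that can express this loop, plus an unbounded tape, is Turing complete.

def run(tape, rules, state="start", blank="_", steps=1000):
    cells = dict(enumerate(tape))  # sparse tape indexed by position
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, halt on the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011_", flip))  # -> 0100_
```

Anything that can host this loop (with arbitrarily large memory) can, in principle, compute whatever any other computer can, which is the practical meaning of Turing completeness.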
|
[
"Turing completeness is the ability for a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored.\n",
"Turing concludes by speculating about a time when machines will compete with humans on numerous intellectual tasks and suggests tasks that could be used to make that start. Turing then suggests that abstract tasks such as playing chess could be a good place to start another method which he puts as \"..it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English.\".\n",
"Some have taken Turing's question to have been \"Can a computer, communicating over a teleprinter, fool a person into believing it is human?\" but it seems clear that Turing was not talking about fooling people but about generating human cognitive capacity.\n",
"Turing's proof is a proof by Alan Turing, first published in January 1937 with the title On Computable Numbers, with an Application to the Entscheidungsproblem. It was the second proof of the assertion (Alonzo Church's proof was first) that some decision problems are \"undecidable\": there is no single algorithm that infallibly gives a correct \"yes\" or \"no\" answer to each instance of the problem. In his own words:\n",
"Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics that a universal Turing machine can, in principle, perform any calculation that any other programmable computer can. This says nothing about the effort needed to write the program, or the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.\n",
"BULLET::::5. \"Arguments from various disabilities\". These arguments all have the form \"a computer will never do \"X\"\". Turing offers a selection:Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.Turing notes that \"no support is usually offered for these statements,\" and that they depend on naive assumptions about how versatile machines may be in the future, or are \"disguised forms of the argument from consciousness.\" He chooses to answer a few of them:\n",
"\"Turing did not show that his machines can solve any problem that can be solved 'by instructions, explicitly stated rules, or procedures', nor did he prove that the universal Turing machine 'can compute any function that any computer, with any architecture, can compute'. He proved that his universal machine can compute any function that any Turing machine can compute; and he put forward, and advanced philosophical arguments in support of, the thesis here called Turing's thesis. But a thesis concerning the extent of effective methods—which is to say, concerning the extent of procedures of a certain sort that a human being unaided by machinery is capable of carrying out—carries no implication concerning the extent of the procedures that machines are capable of carrying out, even machines acting in accordance with 'explicitly stated rules.' For among a machine's repertoire of atomic operations there may be those that no human being unaided by machinery can perform.\"\n"
] |
If the Great Depression didn't truly end until the start of WWII, how come the US economy didn't dip in the post war years?
|
There was, in fact, a recession in 1945, in which GDP fell by 12.7%. By comparison, the 2007–2009 recession lowered GDP by 4.3%.
|
[
"By 1940 the Great Depression was finally over. A remarkable burst of economic activity and full employment came during the war years (1941–45). Fears of a postwar depression were widespread since the massive military spending was ending, the war plants were shutting down, and 12 million soldiers were coming home. Congress, fearful of a return to a state of depression, sought to establish preemptive safeguards against an economic downturn.\n",
"The slow recovery from the effects of the Great Depression began in the mid-1930s, decelerated at the end of the 1930s, and picked up speed with the start of World War II, so that by the early 1940s the country was for the most part out of the Depression. Excess commercial space began to be used, vacancy rates dropped, department store sales rose, hotel occupancy rates went up, and revenues increased.\n",
"After World War I, the United States experienced significant economic growth that was fueled by new technologies and improved production processes. Industrial production output increased 25% between the years 1927 and 1929. The speculative boom in the stock market resulted from the expanding economy and the market indices moved up nearly 400% from 1926 to 1929. In late October 1929, the decline emerged in the market and led to panic selling as more investors were unwilling to risk additional losses. The market sharply declined and was followed by the Great Depression.\n",
"The Great Depression seemed over in 1936, but a relapse in 1937–1938 produced continued long-term unemployment. Full employment was reached with the total mobilization of the United States economic, social and military resources in World War II. At that point, the main relief programs such as the WPA and the CCC were ended. Arthur Herman argues that Roosevelt restored prosperity after 1940 by cooperating closely with big business, although when asked \"Do you think the attitude of the Roosevelt administration toward business is delaying business recovery?\", the American people in 1939 responded \"yes\" by a margin of more than 2-to-1.\n",
"When President Franklin D. Roosevelt took office in 1933, America was in the depths of the Great Depression. The stock market crash of 1929 led the implosion and the downturn continued for over three years as thousands of banks and businesses failed and millions of people lost their life savings, farms, and homes. At the nadir, one-quarter of the U.S. workforce was unemployed and national output had fallen by one-third.\n",
"During the 1920s, the nation enjoyed widespread prosperity, albeit with a weakness in agriculture. A financial bubble was fueled by an inflated stock market, which later led to the Stock Market Crash on October 29, 1929. This, along with many other economic factors, triggered a worldwide depression known as the Great Depression. During this time, the United States experienced deflation as prices fell, unemployment soared from 3% in 1929 to 25% in 1933, farm prices fell by half, and manufacturing output plunged by one-third.\n",
"The United States went into WWII with an economy still not fully recovered from the Great Depression. Because wartime production needs mandated large budget deficits and an accommodating monetary policy, inflation and a runaway wage-price spiral was seen as likely. As a part of a team charged with keeping inflation from crippling the war effort, Galbraith served as a deputy head of the Office of Price Administration (OPA) during the Second World War in 1941–1943. The OPA directed the process of stabilization of prices and rents.\n"
] |
What "generation" star is our Sun?
|
The Sun is a Population I star, meaning it contains metals from previous generations of stars. By measuring the spectral characteristics of stars, we can observe the ratios of metals to hydrogen or helium, and from this, we can determine how many generations of "ancestors" the star has had.
> Stars may be classified by their heavy element abundance, which correlates with their age and the type of galaxy in which they are found.
> Population I stars include the sun and tend to be luminous, hot and young, concentrated in the disks of spiral galaxies. They are particularly found in the spiral arms. With the model of heavy element formation in supernovae, this suggests that the gas from which they formed had been seeded with the heavy elements formed from previous giant stars. About 2% of the total belong to Population I.
> Population II stars tend to be found in globular clusters and the nucleus of a galaxy. They tend to be older, less luminous and cooler than Population I stars. They have fewer heavy elements, either by being older or being in regions where no heavy-element producing predecessors would be found. Astronomers often describe this condition by saying that they are "metal poor", and the "metallicity" is used as an indication of age.
_URL_0_
Astronomers also theorize that there was a generation of very old stars with extremely low metallicity. These are Population III stars. Recently, some astronomers have found evidence there may be Population III stars in a very bright, distant galaxy.
As a side note, the sun will not go supernova. It will become a red giant, shedding its outer layers, and making it very easy to roast hot dogs on Earth.
edit: red giant
|
[
"The Sun is a population I star; it has a higher abundance of elements heavier than hydrogen and helium (\"metals\" in astronomical parlance) than the older population II stars. Elements heavier than hydrogen and helium were formed in the cores of ancient and exploding stars, so the first generation of stars had to die before the Universe could be enriched with these atoms. The oldest stars contain few metals, whereas stars born later have more. This high metallicity is thought to have been crucial to the Sun's development of a planetary system because the planets form from the accretion of \"metals\".\n",
"The Sun formed about 4.6 billion years ago from the collapse of part of a giant molecular cloud that consisted mostly of hydrogen and helium and that probably gave birth to many other stars. This age is estimated using computer models of stellar evolution and through nucleocosmochronology. The result is consistent with the radiometric date of the oldest Solar System material, at 4.567 billion years ago. Studies of ancient meteorites reveal traces of stable daughter nuclei of short-lived isotopes, such as iron-60, that form only in exploding, short-lived stars. This indicates that one or more supernovae must have occurred near the location where the Sun formed. A shock wave from a nearby supernova would have triggered the formation of the Sun by compressing the matter within the molecular cloud and causing certain regions to collapse under their own gravity. As one fragment of the cloud collapsed it also began to rotate because of conservation of angular momentum and heat up with the increasing pressure. Much of the mass became concentrated in the center, whereas the rest flattened out into a disk that would become the planets and other Solar System bodies. Gravity and pressure within the core of the cloud generated a lot of heat as it accreted more matter from the surrounding disk, eventually triggering nuclear fusion.\n",
"The primary star of the system (catalogued as Gliese 777A) is a yellow subgiant, a Sun-like star that is ceasing fusing hydrogen in its core. The star is much older than the Sun, about 6.7 billion years old. It is 4% less massive than the Sun. It is also rather metal-rich, having about 70% more \"metals\" (elements heavier than helium) than the Sun, which is typical for stars with extrasolar planets.\n",
"Barnard's Star is the second nearest star system to Earth. Given its age, at 7–12 billion years of age, Barnard's Star is considerably older than the Sun. It was long assumed to be quiescent in terms of stellar activity. However, in 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star.\n",
"The Sun is a Population I, or heavy-element-rich, star. The formation of the Sun may have been triggered by shockwaves from one or more nearby supernovae. This is suggested by a high abundance of heavy elements in the Solar System, such as gold and uranium, relative to the abundances of these elements in so-called Population II, heavy-element-poor, stars. The heavy elements could most plausibly have been produced by endothermic nuclear reactions during a supernova, or by transmutation through neutron absorption within a massive second-generation star.\n",
"The Sun is the Solar System's star and by far its most massive component. Its large mass (332,900 Earth masses), which comprises 99.86% of all the mass in the Solar System, produces temperatures and densities in its core high enough to sustain nuclear fusion of hydrogen into helium, making it a main-sequence star. This releases an enormous amount of energy, mostly radiated into space as electromagnetic radiation peaking in visible light.\n",
"At 7–12 billion years of age, Barnard's Star is considerably older than the Sun, which is 4.5 billion years old, and it might be among the oldest stars in the Milky Way galaxy. Barnard's Star has lost a great deal of rotational energy, and the periodic slight changes in its brightness indicate that it rotates once in 130 days (the Sun rotates in 25). Given its age, Barnard's Star was long assumed to be quiescent in terms of stellar activity. In 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star. Barnard's Star has the variable star designation V2500 Ophiuchi. In 2003, Barnard's Star presented the first detectable change in the radial velocity of a star caused by its motion. Further variability in the radial velocity of Barnard's Star was attributed to its stellar activity.\n"
] |
why do we have sports commentators on television that talk non stop during the games?
|
Probably a carryover from before people could watch games on television: the commentators would give a play-by-play to those listening on the radio, and now it's tradition. They're also supposed to be "analyzing" the game, and telling people about things they might have missed.
|
[
"In sports broadcasting, a sports commentator (also known as sports announcer, sportscaster or play-by-play announcer) gives a running commentary of a game or event in real time, usually during a live broadcast, traditionally delivered in the historical present tense. Radio was the first medium for sports broadcasts, and radio commentators must describe all aspects of the action to listeners who cannot see it for themselves. In the case of televised sports coverage, commentators are usually presented as a voiceover, with images of the contest shown on viewers' screens and sounds of the action and spectators heard in the background. Television commentators are rarely shown on screen during an event, though some networks choose to feature their announcers on camera either before or after the contest or briefly during breaks in the action.\n",
"In some countries, the two-person commentating team is not used as much as elsewhere. In Germany, most broadcasts of sports matches traditionally feature a single play-by-play announcer who also provides commentary, background information, and statistics. If the broadcast is on TV, the announcer will usually not comment on visually obvious things. A two-person commentating team is used more often for sports where understanding of events depends more on details and subtle visual cues that not everybody might instantly get or might need extra information in order to reasonably understand – for example in auto racing or winter sport. In those cases, a current or former athlete or coach is often used as co-commentator or \"Experte\" (expert).\n",
"No other United States broadcaster has ever purposely replicated the experiment, with football or any of the other major team professional sports; the networks have produced announcerless broadcasts but only as an alternate feed (with the main network always carrying announcers). ESPN has regularly included announcerless broadcasts as part of its Full Circle and Megacast multi-channel broadcasts, usually on ESPN Classic. In select versions of the MLB.tv app, a 'ballpark sound' option is available on most games with only natural ballpark audio. In 2013, Fox Sports Detroit Plus offered its viewers a \"Natural Sounds at Comerica Park\" channel in which they could watch occasional Tigers baseball games with just the ambient sound from games at the team's home stadium, with information about the game coming via increased graphics as it did in the Announcerless Game. It was, however, offered only on a premier channel for those who paid the highest rates; the regular channel included the team's announcing duo of Mario Impemba and Rod Allen. The Alliance of American Football regularly offers live announcerless streams of its games, billed as \"AAF Raw.\"\n",
"In sports, such as American football and baseball, simulcasts are when a single announcer broadcasts play-by-play coverage both over television and radio. The practice was common in the early years of television, but since the 1980s, most teams have used a separate team for television and for radio.\n",
"To replace the announcers, the network used more on-screen graphics than usual and asked the public address announcer at Miami's Orange Bowl to impart more information than he typically did. Efforts to use more sensitive microphones and pick up more sound from the field, however, did not succeed. While the experiment did increase the telecast's ratings, it was widely regarded as a failure since it did not provide sufficient context for viewers. No network broadcasting any major U.S. professional team sport has ever tried it again, except through alternate feeds of games offered with announcers.\n",
"BULLET::::- Before , NBC typically paired the top announcers for the respective World Series teams to alternate play-by-play during each game's telecast. For example, if the Yankees played the Dodgers in the World Series, Mel Allen (representing the Yankees) would call half the game and Vin Scully (representing the Dodgers) would call the other half of the game. But in 1966, NBC wanted their regular network announcer, Curt Gowdy, to call most of the play-by-play at the expense of the top local announcers. So instead of calling half of every World Series game on television (as Vin Scully had done in , , , , and ) they would only get to call half of all home games on TV, providing color commentary while Gowdy called play-by-play for the remaining half of each game. The visiting teams' announcers would participate in the NBC Radio broadcasts. In broadcasts of Series-clinching (or potentially Series-clinching) games on both media, NBC would send the announcer for whichever team was ahead in the game to that team's clubhouse in the ninth inning in order to help cover the trophy presentation and conduct postgame interviews.\n",
"As previously mentioned, before , NBC typically paired the top announcers for the respective World Series teams to alternate play-by-play during each game's telecast. For example, if the Yankees played the Dodgers in the World Series, Mel Allen (representing the Yankees) would call half the game and Vin Scully (representing the Dodgers) would call the other half of the game. However, in 1966, NBC wanted its regular network announcer, Curt Gowdy, to call most of the play-by-play at the expense of the top local announcers. So instead of calling half of every World Series game on television (as Vin Scully had done in , , , , and ) they would only get to call half of all home games on TV, providing color commentary while Gowdy called play-by-play for the remaining half of each game. The visiting teams' announcers would participate in the NBC Radio broadcasts. In broadcasts of Series-clinching (or potentially Series-clinching) games on both media, NBC would send the announcer for whichever team was ahead in the game to that team's clubhouse in the ninth inning in order to help cover the trophy presentation and conduct postgame interviews.\n"
] |
How do we perceive things through light bouncing off of the objects we see and entering our eyes?
|
The secret to the eye being able to resolve things is the fact that the pupil is really small. [Look at this image](_URL_0_) of how a pinhole camera works. Your eye works much the same way, with the pupil in place of the pinhole.
Imagine a red light up above you, and a blue light down below. The red light shines light in every direction. But only the one direction that passes through the narrow opening of your pupil will actually hit the back of your eye (retina). So, the red light entering into your eye will come from the top, be heading down, and thus will hit the bottom of your retina. The blue light will be the opposite, the only one which will pass through your pupil will be the one beam headed up, and thus will hit the top of your retina.
This is why your pupil has to be small. Imagine the pinhole camera, but with the pinhole replaced by a window. Now many light beams from the same object can pass through the window, headed in many directions, and no coherent image is formed on the wall.
This is why squinting helps you see better: you reduce the effective size of your pupil, as your eyelids block more of the light coming from different directions, giving a sharper image.
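The red-light/blue-light geometry above is just similar triangles, and can be written down directly. Here is a tiny Python sketch (the distances are made-up example values, with the pinhole at the origin):

```python
# Pinhole projection: a light source at height source_y, a distance
# source_dist in front of the pinhole, sends one ray through the hole
# onto a screen screen_dist behind it. Similar triangles give the image
# height; the minus sign is the inversion (top maps to bottom).

def pinhole_image_height(source_y, source_dist, screen_dist):
    return -source_y * screen_dist / source_dist

# A red light 2 m above the axis, 4 m in front of the pinhole,
# projected onto a "retina" 0.02 m behind it:
print(pinhole_image_height(2.0, 4.0, 0.02))   # -0.01: lands below the axis

# A blue light 2 m below the axis lands above it:
print(pinhole_image_height(-2.0, 4.0, 0.02))  # 0.01
```

The key property is that each source point maps to exactly one screen point, which is why a small aperture yields a sharp (but inverted) image.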
|
[
"Our minds and bodies are bombarded by relevant and irrelevant knowledge and experiences every day. We will tune into salient ones (crane the ears to more fully hear enjoyable music) and tune-out non-salient ones (cover our ears from jackhammer noise). There is difference between seeing something and looking at it. In seeing, the capacity of our retina to take in the light energy is engaged and the brain processes that information into an image. When one looks at an object, not only are visual perceptive capacities engaged, but other mental processes for evaluation and ordering of the object are activated (Skinner, 1974).\n",
"Rays of light travel in straight lines and change when they are reflected and partly absorbed by an object, retaining information about the color and brightness of the surface of that object. Lit objects reflect rays of light in all directions. A small enough opening in a screen only lets through rays that travel directly from different points in the scene on the other side, and these rays form an image of that scene when they are collected on a surface opposite from the opening. In simple terms, the way your retina sees a specific image through your eye is vertically switched to the object you see and how pieces in your brain are shown to switch that object right-side up to the way you see normally.\n",
"When someone sees an object, they know what the object is because they've seen it on a past occasion; this is recognition memory. Not only do abnormalities to the ventral (what) stream of the visual pathway affect our ability to recognize an object but also the way in which an object is presented to us.\n",
"The visual system in humans allows individuals to assimilate information from the environment. The act of seeing starts when the lens of the eye focuses an image of its surroundings onto a light-sensitive membrane in the back of the eye, called the retina. The retina converts patterns of light into neuronal signals. The lens of the eye focuses light on the photoreceptive cells of the retina, which detect the photons of light and respond by producing neural impulses. These signals are processed in a hierarchical fashion by different parts of the brain, from the retina to the lateral geniculate nucleus, to the primary and secondary visual cortex of the brain. Signals from the retina can also travel directly from the retina to the Superior colliculus.\n",
"Visual perception is initiated when objects in the world reflect light rays towards the eye. Most empirical theories of visual perception begin with the observation that stimulation of the retina is fundamentally ambiguous. In empirical accounts, the most commonly proposed mechanism for circumventing this ambiguity is \"unconscious inference,\" a term that dates back to Helmholtz.\n",
"In many ways, vision is the primary human sense. Light is taken in through each eye and focused in a way which sorts it on the retina according to direction of origin. A dense surface of photosensitive cells, including rods, cones, and intrinsically photosensitive retinal ganglion cells captures information about the intensity, color, and position of incoming light. Some processing of texture and movement occurs within the neurons on the retina before the information is sent to the brain. In total, about 15 differing types of information are then forwarded to the brain proper via the optic nerve.\n",
"Vision provides opportunity for the brain to perceive and respond to changes occurring around the body. Information, or stimuli, in the form of light enters the retina, where it excites a special type of neuron called a photoreceptor cell. A local graded potential begins in the photoreceptor, where it excites the cell enough for the impulse to be passed along through a track of neurons to the central nervous system. As the signal travels from photoreceptors to larger neurons, action potentials must be created for the signal to have enough strength to reach the CNS. If the stimulus does not warrant a strong enough response, it is said to not reach absolute threshold, and the body does not react. However, if the stimulus is strong enough to create an action potential in neurons away from the photoreceptor, the body will integrate the information and react appropriately. Visual information is processed in the occipital lobe of the CNS, specifically in the primary visual cortex.\n"
] |
What is e in regards to natural logarithms?
|
Oh this is such a fun question!
The number's importance starts with one key observation.
Let a > 0 be a real number; we can define the function f(x) = a^x
What is the derivative of this function?
If we look at the limit definition of the derivative we get
f'(x) = lim h->0 (a^(x+h) - a^x)/h = lim h->0 a^x (a^h - 1)/h = a^x lim h->0 (a^h - 1)/h
We can see that if the limit exists then the form of the derivative is
f'(x) = a^x g(a)
where g(a) is a function that depends only on a, which is a fixed number. We also notice that if the function is once differentiable (it is; that limit works out to log(a), the natural logarithm of a), then it is twice differentiable with derivative
f''(x) = a^x g(a)^2
and more generally
f^(n) (x) = a^x g(a)^n
We can then write a Taylor approximation to this function as
f(x) = 1 + g(a) x + g(a)^2 x^2 / 2 + ...
Now we note that the function has a very special property if g(a) = 1: it is its own derivative. Note also that f(1) = a^1 = a.
So setting g(a) = 1 and computing f(1) gives us the number a such that a^x is its own derivative (we've named this number e). The Taylor approximation here becomes
e = 1 + 1 + 1/2 + ... = sum_{n=0}^infinity 1/n! = lim n->infinity (1 + 1/n)^n (the last equality can be shown using the binomial theorem)
Tl;dr, e is the unique number that defines a function (e^x) which is its own derivative. (d/dx e^x = e^x)
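Both formulas for e, and the defining property, can be checked numerically. A small Python sketch (the series length and the step size h are arbitrary choices for the demonstration):

```python
import math

# Two routes to e from the derivation above: the factorial series
# and the (1 + 1/n)^n limit. Both converge to the same number.
series = sum(1 / math.factorial(n) for n in range(20))
limit = (1 + 1 / 1_000_000) ** 1_000_000

print(series)  # ~2.718281828..., accurate to many digits
print(limit)   # slightly low; this limit converges slowly

# Numerical check of the defining property, d/dx e^x = e^x, at x = 1:
h = 1e-6
derivative = (math.e ** (1 + h) - math.e) / h
print(abs(derivative - math.e) < 1e-4)  # True
```

The contrast between the two printed values is itself instructive: the factorial series gains roughly a digit per term, while the limit form needs n around a million to get six digits right.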
|
[
"The natural logarithm allows simple integration of functions of the form \"g\"(\"x\") = \"f\" '(\"x\")/\"f\"(\"x\"): an antiderivative of \"g\"(\"x\") is given by ln(|\"f\"(\"x\")|). This is the case because of the chain rule and the following fact:\n",
"The natural logarithm of \"x\" is the power to which \"e\" would have to be raised to equal \"x\". For example, ln(7.5) is 2.0149..., because . The natural log of \"e\" itself, ln(\"e\"), is 1, because , while the natural logarithm of 1, ln(1), is 0, since .\n",
"The natural logarithm can be defined for any positive real number \"a\" as the area under the curve from 1 to \"a\" (the area being taken as negative when \"a\" < 1). The simplicity of this definition, which is matched in many other formulas involving the natural logarithm, leads to the term \"natural\". The definition of the natural logarithm can be extended to give logarithm values for negative numbers and for all non-zero complex numbers, although this leads to a multi-valued function: see Complex logarithm.\n",
"The logarithm must be taken to base \"e\" since the two terms following the logarithm are themselves base-\"e\" logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats. Dividing the entire expression above by log 2 yields the divergence in bits.\n",
"Logarithms can be defined to any positive base other than 1, not only \"e\". However, logarithms in other bases differ only by a constant multiplier from the natural logarithm, and are usually defined in terms of the latter. For instance, the binary logarithm is the natural logarithm divided by ln(2), the natural logarithm of 2. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity. For example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and the sciences and are used in finance to solve problems involving compound interest.\n",
"The logarithm to base (that is ) is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number (that is ) as its base; its use is widespread in mathematics and physics, because of its simpler integral and derivative. The binary logarithm uses base (that is ) and is commonly used in computer science. Logarithms are examples of concave functions.\n",
"The natural logarithm of a number is its logarithm to the base of the mathematical constant \"e\", where \"e\" is an irrational and transcendental number approximately equal to . The natural logarithm of \"x\" is generally written as , , or sometimes, if the base \"e\" is implicit, simply . Parentheses are sometimes added for clarity, giving ln(\"x\"), log(\"x\") or log(\"x\"). This is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity.\n"
] |
How did most Medieval kings die?
|
Most medieval kings died of old age, illness, or some other "natural cause." If a king died from something more nefarious, it usually stands out in the historical record. Take the English monarchs, of which there have been about 50 if we count liberally between Alfred the Great and Charles I (by liberally I mean including people like Lady Jane Grey and Matilda).
Three were killed in battle or by wounds sustained in battle (Harold Godwinson, Richard the Lionheart, and Richard III).
One king (Edmund I) died in a brawl that he probably started.
Three were definitely murdered (Edward the Martyr, Edward II, and Richard II).
Two were probably murdered (William Rufus and Henry VI [who was already deposed]).
And two were beheaded (Lady Jane Grey and Charles I).
So, that's 11 deaths total that weren't natural causes, out of 50 people, and two of those are only suspicious deaths, not confirmed assassinations.
|
[
"The king was mortally wounded during the suppression of a revolt by Viscount Aimar V of Limoges in 1199, and died without legitimate heirs. The chronicler Roger of Howden claimed that later that same year,\n",
"In some cases, kings have personally murdered people. In 1568, King Eric XIV beat his secretary Martin Olai Helsingius to death with a stove poker. Martin had allegedly advised the king against pardoning his former secretary Jöran Persson, who was a very trusted advisor and friend of the king, but whom the Swedish people despised. Persson had been accused of causing the looting of Svartsjö Palace in 1567 and the Sture murders the same year. He received his verdict on 28 September 1568 and was punished \"as an honorless, faithless and perjurous traitor, rogue and villain\". Martins punishment was cruel. Both of his ears were nailed against the gallows, along with his patent of nobility. He was later hinged onto the gallows, but before his death he was brought down again. and subjected to breaking wheel torture at Brunkebergstorg. He was eventually beheaded and his body was nailed to a stake, where his body could be observed in public. This was a regular practice during the Middle Ages in Sweden, until it was outlawed in 1841.\n",
"Although in that period intentional regicide was an extremely rare occurrence, the situation changed dramatically with the Renaissance when the ideas of \"tyrannomachy\" (i.e. killing of a King when his rule becomes tyrannical) re-emerged and gained recognition. Several European monarchs and other leading figures were assassinated during religious wars or by religious opponents, for example Henry III and Henry IV of France, and the Protestant Dutch leader, William the Silent. There were also many unsuccessful assassination plots against rulers such as Elizabeth I of England by religious opponents. There were notable detractors, however; Abdülmecid of the Ottoman Empire refused to put to death plotters against his life during his reign.\n",
"The legendary kings Diarmait mac Cerbaill and Muirchertach mac Ercae each die a threefold death on Samhain, which involves wounding, burning and drowning, and of which they are forewarned. In the tale \"Togail Bruidne Dá Derga\" ('The Destruction of Dá Derga's Hostel'), king Conaire Mór also meets his death on Samhain after breaking his \"geasa\" (prohibitions or taboos). He is warned of his impending doom by three undead horsemen who are messengers of Donn, god of the dead. \"The Boyhood Deeds of Fionn\" tells how each Samhain the men of Ireland went to woo a beautiful maiden who lives in the fairy mound on Brí Eile (Croghan Hill). It says that each year someone would be killed \"to mark the occasion\", by persons unknown. Some academics suggest that these tales recall human sacrifice, and argue that several ancient Irish bog bodies (such as Old Croghan Man) appear to have been kings who were ritually killed, some of them around the time of Samhain.\n",
"The threefold death, which is suffered by kings, heroes, and gods, is a putatively Proto-Indo-European theme, reconstructed from medieval accounts of Celtic and Germanic mythology and archaeologically attested from ancient bodies such as Lindow Man.\n",
"Before the Tudor period, English kings had been murdered while imprisoned (for example Edward II or Edward V) or killed in battle by their subjects (for example Richard III), but none of these deaths are usually referred to as regicide. The word regicide seems to have come into popular use among foreign Catholics when Pope Sixtus V renewed the papal bull of excommunication against the \"crowned regicide\" Queen Elizabeth I, for—among other things—executing Mary, Queen of Scots, in 1587. Elizabeth had originally been excommunicated by Pope Pius V, in \"Regnans in Excelsis\", for converting England to Protestantism after the reign of Mary I of England. The defeat of the Spanish Armada and the \"Protestant Wind\" convinced most English people that God approved of Elizabeth's action.\n",
"Since 1371 or earlier murderers and rapists have been executed by decapitation on the medieval stone bridge. The most recent recorded execution took place in 1585. Till 1799 the bridge was decorated with the statues of two figures, recalling an oft-repeated legend.\n"
] |
Is the Mandate of Heaven directly responsible for the technological and philosophical advances in the early history of dynastic China?
|
I think you have elevated the mandate to a height that it doesn't deserve. Innovations in China did not depend at all on unification. Some examples:
The great advances in military tactics, poetry, and paper all occurred in the Six Dynasties period, which lies between the Han and the Sui/Tang. The next great innovation was the printing press, which saw its first major use in the Five Dynasties and Ten Kingdoms period between the Tang and the Song. There is no strong correlation between unification and innovation.
|
[
"The concept of the Mandate of Heaven was first used to support the rule of the kings of the Zhou dynasty (1046–256 BCE), and legitimize their overthrow of the earlier Shang dynasty (1600–1069 BCE). It was used throughout the history of China to legitimize the successful overthrow and installation of new emperors, including non-Han ethnic monarchs such as the Qing dynasty (1636–1912).\n",
"The political ideas current in China at that time involved the idea of the mandate of heaven. It resembled the theory of divine right in that it placed the ruler in a divine position, as the link between Heaven and Earth, but it differed from the divine right of kings in that it did not assume a permanent connection between a dynasty and the state. Inherent in the concept was that a ruler held the mandate of heaven only as long as he provided good government. If he did not, heaven would withdraw its mandate and whoever restored order would hold the new mandate. This is true theocracy; the power and wisdom to govern is granted by a higher power, not by human political schemes, and can be equally removed by heaven. This has similarities to the idea presented in the Judeo-Christian Bible from the time when Israel requests \"a king like the nations\" () through to Christ himself telling his contemporary leaders that they only had power because God gave it to them. The classic Biblical example comes in the story of King Nebuchadnezzar, who according to the Book of Daniel ruled the Babylonian empire because God ordained his power, but who later ate grass like a ox for seven years because he deified himself instead of acknowledging God. Nebuchadnezzar is restored when he again acknowledges God as the true sovereign.\n",
"Generally through Chinese history, it was historians of later kingdoms whose histories bestowed the Mandate of Heaven posthumously on preceding dynasties. This was typically done for the purpose of strengthening the present rulers' ties to the Mandate themselves. Song Dynasty historian Xue Juzheng did exactly this in his work \"History of the Five Dynasties\".\n",
"The Mandate of Heaven was the idea that the Emperor was favored by Heaven to rule over China. The Mandate of Heaven explanation was championed by the Chinese philosopher Mencius during the Warring States period.\n",
"The Mandate of Heaven would then transfer to those who would rule best. Chinese historians interpreted a successful revolt as evidence that the Mandate of Heaven had passed on. Throughout Chinese history, rebels who opposed the ruling dynasty made the claim that the Mandate of Heaven had passed, giving them the right to revolt. Ruling dynasties were often uncomfortable with this, and the writings of the Confucian philosopher Mencius (372–289 BC) were often suppressed for declaring that the people have the right to overthrow a ruler that did not provide for their needs.\n",
"In another transformation that \"mirrored the process of political centralization\" in Nurhaci's state, the traditional Jurchen belief in multiple heavens was replaced by one Heaven called \"Abka \"ama\"\" or \"Abka \"han\".\" This new shamanic Heaven became the object of a state cult similar to that of the Jurchen rulers' cult of Heaven in the Jin dynasty (1115–1234) and to Chinggis Khan's worship of Tengri in the thirteenth century. This state sacrifice became an early counterpart to the Chinese worship of Heaven. From as early as the 1590s, Nurhaci appealed to Heaven as, \"the arbiter of right and wrong.\" He worshipped Heaven at a shamanic shrine in 1593 before leaving for a campaign against the Yehe, a Jurchen tribe that belonged to the rival Hūlun confederacy. Qing annals also report that when Nurhaci announced his Seven Great Grievances against the Ming dynasty in April 1618, he conducted a shamanic ceremony during which he burned an oath to Heaven written on a piece of yellow paper. This ceremony was deliberately omitted from the later Chinese translation of this event by the Qing court.\n",
"Benevolence and the Mandate of Heaven: Transformation of pre-Qin Confucian Classics is a book by a Taiwanese historian Olga Gorodetskaya (Kuo Ching-yun), published in 2010 in Taipei. The book concerns itself with the Confucian philosophical concepts of Benevolence (Ren) and the Mandate of Heaven and their evolution during the period before the establishment of the empire by Qin dynasty. \n"
] |
Is there an increased risk of lung cancer by just being in a room that smells like cigarette smoke with no one actually smoking in it?
|
Yes, it appears so:
"
Researchers now know that residual tobacco smoke, dubbed thirdhand smoke, combines with indoor pollutants such as ozone and nitrous acid to create new compounds. Thirdhand smoke mixes and settles with dust, drifts down to carpeting and furniture surfaces, and makes its way deep into the porous material in paneling and drywall. It lingers in the hair, skin, clothing, and fingernails of smokers—so a mother who doesn't smoke in front of her kids, smokes outside, then comes inside and holds the baby is exposing that child to thirdhand smoke. The new compounds are difficult to clean up, have a long life of their own, and many may be carcinogenic.
"
\- [_URL_0_](_URL_1_)
|
[
"Results from epidemiological studies indicate that the risk of lung cancer increases with exposure to residential radon. A well-known example of source of error is smoking, the main risk factor for lung cancer. In the West, tobacco smoke is estimated to cause about 90% of all lung cancers. \n",
"The risk of lung cancer caused by smoking is much higher than the risk of lung cancer caused by indoor radon. Radiation from radon has been attributed to increase of lung cancer among smokers too. It is generally believed that exposure to radon and cigarette smoking are synergistic; that is, that the combined effect exceeds the sum of their independent effects. This is because the daughters of radon often become attached to smoke and dust particles, and are then able to lodge in the lungs.\n",
"According to the EPA, the risk of lung cancer for smokers is significant due to synergistic effects of radon and smoking. For this population about 62 people in a total of 1,000 will die of lung cancer compared to 7 people in a total of 1,000 for people who have never smoked. It cannot be excluded that the risk of non-smokers should be primarily explained by a combination effect of radon and passive smoking (see below).\n",
"In 2011, a large Danish epidemiological study found an increased risk of lung cancer for patients who lived in areas with high nitrogen oxide concentrations. In this study, the association was higher for non-smokers than smokers. An additional Danish study, also in 2011, likewise noted evidence of possible associations between air pollution and other forms of cancer, including cervical cancer and brain cancer.\n",
"Current research shows that tobacco smokers who are exposed to residential radon are twice as likely to develop lung cancer as non-smokers. As well, the risk of developing lung cancer from asbestos exposure is twice as likely for smokers than for non-smokers.\n",
"Lung cancer risk is highly affected by smoking with up to 90% of cases being caused by tobacco smoking. Risk of developing lung cancer increases with number of years smoking and number of cigarettes smoked per day. Smoking can be linked to all subtypes of lung cancer. Small Cell Lung Carcinoma (SCLC) is the most closely associated with almost 100% of cases occurring in smokers. This form of cancer has been identified with autocrine growth loops, proto-oncogene activation and inhibition of tumour suppressor genes. SCLC may originate from neuroendocrine cells located in the bronchus called Feyrter cells.\n",
"Research has generated evidence that second-hand smoke causes the same problems as direct smoking, including lung cancer, cardiovascular disease, and lung ailments such as emphysema, bronchitis, and asthma. Specifically, meta-analyses show that lifelong non-smokers with partners who smoke in the home have a 20–30% greater risk of lung cancer than non-smokers who live with non-smokers. Non-smokers exposed to cigarette smoke in the workplace have an increased lung cancer risk of 16–19%.\n"
] |
how did children from completely different parts of the world come up with the exact-same schoolyard games?
|
I think you are underestimating both how long these games have been around and the extent to which families move around.
|
[
"Children's games during the Middle Ages and earlier are something of a mystery, since the rules of the games were passed from generation to generation orally. In rare cases, the games survived long enough to be recorded in later centuries. Pieter Bruegel's painting \"Children's Games\" (1560) depicts many games popular with Flemish children of the time. Girls can be shown playing games that involve balls, musical instruments, and games that involved carrying other children or performing other challenges. Hide-and-seek, leapfrog and tag are some perennially popular games that seem to have survived over the centuries in Europe and North America.\n",
"Many children's playground and street songs are connected to particular games. These include clapping games, like \"Miss Susie', played in America; \"A sailor went to sea\" from Britain; and \"Mpeewa\", played in parts of Africa. Many traditional Māori children's games, some of them with educational applications—such as hand movement, stick and string games—were accompanied by particular songs. In the Congo, the traditional game \"A Wa Nsabwee\" is played by two children synchronising hand and other movements while singing. Skipping games like Double Dutch have been seen as important in the formation of hip hop and rap music.\n",
"Traditional children's games are defined, \"as those that are played informally with minimal equipment, that children learn by example from other children, and that can be played without reference to written rules. These games are usually played by children between the ages of 7 and 12, with some latitude on both ends of the age range.\" \"Children's traditional games (also called folk games) are those that are passed from child to child, generation to generation, informally by word of mouth,\" and most children's games include at least two of the following six features in different proportion: physical skill, strategy, chance, repetition of patterns, creativity, and vertigo.\n",
"The origin of this elementary school game being played in American classrooms goes back to at least the 1950s, perhaps earlier. A game called seven-up is mentioned in the Ansonia (Ohio) \"Mirror\" newspaper issue of May 13, 1882.\n",
"Ken games underwent a transformation from drinking games played by adults into children's games. Several Japanese writers made note of the observation that children were playing a game once associated with brothels. The author of Asukawa, an essay in Bunka 7, admonishes children for playing hand games. He had this to say: \"In former days children used to play red-shell-horse-riding or they fought with the shell of mussels. Of today's children nobody knows these games. The games which children play when they come together are \"The old man goes to the mountain to cut wood, while the old woman goes to the river to wash\" like in former times, but now they play also mushi-ken, fox ken, and original ken. How funny!\"\n",
"Several games were derived from familiar, commercially available children's or fairground games, including steady hand testers, mazes, and sliding puzzles; games in some zones sometimes appeared in other zones with some cosmetic changes and some variations to previous incarnations, with some game designs tending to become more elaborate in later series. A small number of games differed from the traditional style of those that were featured; while they fell under one of the four categories available, they did not comply to the traditional style for the games on the show:\n",
"Children have always played games. It is accepted that as well as being entertaining, playing games helps children's development. One of the most famous visual accounts of children's games is a painting by Pieter Bruegel the Elder called \"Children's Games\", painted in 1560. It depicts children playing a range of games that presumably were typical of the time. Many of these games, such as marbles, hide-and-seek, blowing soap bubbles and piggyback riding continue to be played.\n"
] |
Why are there only 4 "letters" of DNA?
|
The interesting thing is that adenine, thymine (uracil for RNA), cytosine, and guanine are NOT the only nucleotide bases present in nature. There are also xanthine, hypoxanthine, and inosine.
Also, the traditional Watson-Crick bases are NOT the ONLY bases that can be present in RNA. For example, there is an enzyme called adenosine deaminase acting on RNA (ADAR for short) that deaminates adenosine to form inosine (a non-Watson-Crick nucleoside), and failure to deaminate a specific nucleotide in a specific gene has been linked to ALS (Lou Gehrig's disease). In short, the "four nitrogenous bases rule" is an oversimplification, since other nitrogenous bases can be present, mainly in RNA structures.
As to whether a similar structure could exist that is double-helical and self-replicating (DNA actually uses protein enzymes to replicate and is generally considered to have very little catalytic activity; RNA is the one that is self-replicating and has catalytic activity), it is entirely possible that very similar nitrogenous bases could be arranged on a phosphate backbone to form another organism's "basis of life." The problem is that the structure of such a molecule would be hard to determine unless we were exposed to it and able to work out its structure. In the scenario in which we discover a new "basis of life," the structure of the molecule does not necessarily matter as long as the organism finds a way to deal with that structure and replicate. The reason we care that DNA is helical is that the enzymes that bind DNA and unzip it to make more DNA (the enzymes that make DNA are called polymerases, whereas the enzymes that unzip it are called helicases) are specifically tuned to work with that helical structure.
edit: mindule pointed out my error in saying that DNA contains inosine; it is RNA that contains inosine. RNA shows more variation in the types of nitrogenous bases present, but in general DNA does contain only 4 nitrogenous bases. I do think that jurble and smashy_smashy made some excellent points as to how this isn't necessarily a limitation, since it leaves a lot of room for genetic variation.
|
[
"The possible letters are \"A\", \"C\", \"G\", and \"T\", representing the four nucleotide bases of a DNA strand — adenine, cytosine, guanine, thymine — covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.\n",
"The remaining seventeen letters are consonants [pronounced-with]: \"b, g, d, z, th, k, l, m, n, x, p, r, s, t, ph, kh, ps\". They are called consonants because they do not have a sound on their own, but they form a complete sound when arranged with vowels.\n",
"The shapes of the letters are as follows. Because they are not supported by computer fonts, Canadian syllabics have been substituted where these have approximately the same shape (though they tend to have deeper curves and shorter lines than the shorthand letters); where a symbol is not available, a description is given. \"L\" and \"r\" are written upwards; all other vertical and diagonal letters are written downwards.\n",
"Seven letters (, , , , , , ) do not connect to a following letter, unlike the rest of the letters of the alphabet. The seven letters have the same form in isolated and initial position and a second form in medial and final position. For example, when the letter is at the beginning of a word such as (\"here\"), the same form is used as in an isolated . In the case of (\"today\"), the letter takes the final form and the letter takes the isolated form, but they are in the middle of the word, and also has its isolated form, but it occurs at the end of the word.\n",
"The letters are not made in order of appearance in the name (B, L, A, C, K, P, O, O, L) but by their shape; \"square\" letters, (B, E, F, H, J, K, L, M, N, P, R, T, W, X, Y and Z), are made first, as they will not lose their shape, while \"triangle\" (A and V) and \"round\" (C, D, G, O, Q, S, U) letters are made last to prevent them from losing their shape, as the toffee is still reasonably soft at this point. For example, the letters that make up \"BLACKPOOL ROCK\" may be made in this order: B, P, R, K(×2), L(×2), A, C(×2) and O(×3). The individual letters are placed between blocks or sticks at this point, to prevent them from losing shape and going flat. The letters are then placed in their correct spelling order with a \"strip\" of white, aerated toffee between each letter to make it readable.\n",
"After all this, there were only 17 letters that were different in shape. One letter-shape represented 5 phonemes (\"b t th n\" and sometimes \"y\"), one represented 3 phonemes (\"j ħ kh\"), and 5 each represented 2 phonemes. Compare the Hebrew alphabet, as in the table at .\n",
"Since many letters are distinguished from others solely by a dot above or below the main portion of the character, the transliterations of these letters frequently use the same letter or number with an apostrophe added before or after (e.g. \"'3\" is used to represent ).\n"
] |
If you were placed in a room with 30% oxygen and 70% helium, would you be able to breathe normally?
|
Yes; you can breathe this just fine. Deep-sea divers use heliox to avoid nitrogen narcosis. You would talk funny, but you could breathe it just fine, at least short term.
This assumes that you're breathing this at a normal pressure. If you were actually diving or in a compression chamber, you have to consider the partial pressure of the gases.
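To illustrate that partial-pressure point, here is a rough sketch of Dalton's law (my own illustration; the function names and the "roughly 1 atm per 10 m of seawater" figure are common approximations, not from the answer above): each gas contributes its fraction of the mix times the ambient pressure.

```python
# Dalton's law sketch: partial pressure = gas fraction x ambient pressure.
# The 1 atm per 10 m of seawater figure is a standard rough approximation.

def ambient_pressure_atm(depth_m: float) -> float:
    # 1 atm of air at the surface plus roughly 1 atm per 10 m of seawater.
    return 1.0 + depth_m / 10.0

def o2_partial_pressure(fraction_o2: float, depth_m: float) -> float:
    return fraction_o2 * ambient_pressure_atm(depth_m)

# A 30% O2 / 70% He mix:
print(o2_partial_pressure(0.30, 0))   # 0.3 atm O2 at the surface -- fine
print(o2_partial_pressure(0.30, 40))  # 1.5 atm O2 at 40 m -- above the
                                      # ~1.4 atm working limit divers observe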
I'm not sure about the long-term effects of the elevated O2 percentage; someone else will need to address that.
|
[
"Inhaling helium can be dangerous if done to excess, since helium is a simple asphyxiant and so displaces oxygen needed for normal respiration. Fatalities have been recorded, including a youth who suffocated in Vancouver in 2003 and two adults who suffocated in South Florida in 2006. In 1998, an Australian girl (her age is not known) from Victoria fell unconscious and temporarily turned blue after inhaling the entire contents of a party balloon.\n",
"Helium and nitrogen are non toxic and can be breathed with no ill effects over short or long term when oxygen levels are sufficient, and present no health risk to third parties except asphyxiation. The danger lies in that they are undetectable by human senses, and the first warning of their presence in asphyxiant concentrations may be loss of consciousness. Lower concentrations may cause confusion and weakness. Use of a suicide bag in a well ventilated room using either of these gases is unlikely to pose a hazard for other people, and there is no fire hazard. \n",
"BULLET::::- 1919: Professor Elihu Thomson speculates that helium could be used instead of nitrogen to reduce the breathing resistance at great depth. Heliox was used with air tables resulting in a high incidence of decompression sickness, so the use of helium was discontinued.\n",
"A breathing gas mixture of oxygen, helium and hydrogen was developed for use at extreme depths to reduce the effects of high pressure on the central nervous system. Between 1978 and 1984, a team of divers from Duke University in North Carolina conducted the \"Atlantis\" series of on-shore-hyperbaric-chamber-deep-scientific-test-dives. In 1981, during an extreme depth test dive to 686 metres (2251 ft) they breathed the conventional mixture of oxygen and helium with difficulty and suffered trembling and memory lapses.\n",
"The main reason for adding helium to the breathing mix is to reduce the proportions of nitrogen and oxygen below those of air, to allow the gas mix to be breathed safely on deep dives. A lower proportion of nitrogen is required to reduce nitrogen narcosis and other physiological effects of the gas at depth. Helium has very little narcotic effect. A lower proportion of oxygen reduces the risk of oxygen toxicity on deep dives.\n",
"Fraction of inspired oxygen (\"Fi\"O) is the fraction of oxygen in the volume being measured. Medical patients experiencing difficulty breathing are provided with oxygen-enriched air, which means a higher-than-atmospheric \"Fi\"O. Natural air includes 21% oxygen, which is equivalent to \"Fi\"O of 0.21. Oxygen-enriched air has a higher \"Fi\"O than 0.21; up to 1.00 which means 100% oxygen. \"Fi\"O is typically maintained below 0.5 even with mechanical ventilation, to avoid oxygen toxicity.\n",
"BULLET::::- 1924: The US Navy begins examining helium's potential usage and by the mid-1920s lab animals were exposed to experimental chamber dives using heliox. Soon, human subjects breathing heliox 20/80 (20% oxygen, 80% helium) had been successfully decompressed from deep dives.\n"
] |
What parts of WW2 fighter aircraft were armored?
|
For the most part there was very little armor on fighter aircraft of WWII. The Japanese Ki-43-II had only a single 13 mm steel plate behind the pilot. Even planes renowned for their ruggedness, such as the P-47, had only minor armor: the P-47D had a 10 mm plate behind the pilot and a small plate in front of the pilot under the canopy. For fighters, armor played only a small role in the durability of the plane, since fighters had to be light and manoeuvrable to fit their role effectively. The Bf 109 came with an additional armor plate behind the headrest, but this was often removed by pilots, who considered increased visibility more valuable.
The only other protection resembling armor on fighters was strengthened glass, which could withstand a few hits from low-calibre rounds. This would only be found right in front of the pilot, with few exceptions. The skin on the wings and fuselage was not thickened, because the disadvantage of the extra mass far outweighed any small increase in protection gained this way.
If you would like to know more about planes with heavier armor, we would need to look at ground-attack aircraft such as the legendary Il-2 and the Hs 129. A modern example of such a plane is the A-10 Warthog, currently in service with the US military.
I would suggest you watch [this](_URL_0_) video by youtuber Bismark for a good overview of this topic.
|
[
"The Spitfire, from about mid-1940, had 73 pounds (33 kg) of armoured steel plating in the form of head (of 6.5 mm thickness) and back protection on the seat bulkhead (4.5 mm), and covering the forward face of the glycol header tank. The Hurricane had a similar armour layout to the Spitfire, and was the toughest and most durable of the three. Serviceability rates of Hawker's fighter were always higher than the complex and advanced Spitfire.\n",
"Besides firepower the B-25's had some pretty good armor. The B-25's were equipped with ¼ inch (6 mm) armor along the bombardier compartment, the pilot compartment and the upper turret. Then the rest of the plane had 3/8 inch (10 mm) armor around most of the hull, but an interesting note is that there is ½ inch armor behind the tail gunner that was used to protect the tail gunner from the upper turret gunner. (Heavenly Body)\n",
"World War II fighters also increasingly featured monocoque construction, which improved their aerodynamic efficiency while adding structural strength. Laminar flow wings, which improved high speed performance, also came into use on fighters such as the P-51, while the Messerschmitt Me 262 and the Messerschmitt Me 163 featured swept wings that dramatically reduced drag at high subsonic speeds.\n",
"The main fuel tanks of the Spitfire, which were mounted in the fuselage forward of the cockpit, were better protected than that of the Hurricane; the lower tank was self-sealing and a panel of 3 mm thick aluminium, sufficient to deflect small calibre bullets, was wrapped externally over the top tanks. Internally they were coated with layers of \"Linatex\" and the cockpit bulkhead was fireproofed with a thick panel of asbestos .\n",
"A major heavy fighter design was the Messerschmitt Bf 110, a German fighter that, prior to the war, was considered by the German Luftwaffe more important than their single-engine fighters. Many of the best pilots were assigned to Bf 110 wings, specifically designated as \"Zerstörergeschwader\" (\"destroyer squadron\") wings. While lighter fighters were intended for defense, the destroyers were intended for offensive missions: to escort bombers on missions at long range, then use its superior speed to outrun defending fighters that would be capable of outmaneuvering it.\n",
"The open-cockpit design combat aircraft of World War I had narrow fuselages, which often were not tall enough to block visibility to the rear, especially with seating positions that generally elevated the pilot's head well above the cockpit's edges. As planes became larger, heavier and faster, designs had to be made stronger, which often meant a taller rear fuselage, but designers tried to maintain the narrow fuselage for visibility.\n",
"The favourable power-to-weight ratio of the rotaries was their greatest advantage. While larger, heavier aircraft relied almost exclusively on conventional in-line engines, many fighter aircraft designers preferred rotaries right up to the end of the war.\n"
] |
Why is a lump of coal black, but a diamond is clear?
|
Coal is a sedimentary rock composed of many minerals, while diamond is a single mineral made of a carbon-based crystal structure. If impurities are present in that structure, the clarity of diamond (or any mineral) can be altered; that is why we see varying colors of diamonds.
A better question would be why graphite (also a carbon-based structure) is gray/black while diamond is clear. Both are allotropes of carbon, with different crystal structures that propagate light differently, which changes their appearance.
Fun fact: diamond is not stable at our surface temperature and pressure (think the ground surface and areas we inhabit) and will eventually break down into graphite, though this could take billions of years.
|
[
"The researchers discovered that all blue diamonds show red and green peaks in their phosphorescence spectrum, due to the presence of nitrogen and boron in the stones. The intensity and rate of decay of the spectrum varies from diamond to diamond. \n",
"BULLET::::- Type Ia diamonds make up about 95% of all natural diamonds. The nitrogen impurities, up to 0.3% (3000 ppm), are clustered within the carbon lattice, and are relatively widespread. The absorption spectrum of the nitrogen clusters can cause the diamond to absorb blue light, making it appear pale yellow or almost colorless. Most Ia diamonds are a mixture of IaA and IaB material; these diamonds belong to the \"Cape series\", named after the diamond-rich region formerly known as Cape Province in South Africa, whose deposits are largely Type Ia. Type Ia diamonds often show sharp absorption bands with the main band at 415.5 nm (N3) and weaker lines at 478 nm (N2), 465 nm, 452 nm, 435 nm, and 423 nm (the \"Cape lines\"), caused by the N2 and N3 nitrogen centers. They also show blue fluorescence to long-wave ultraviolet radiation due to the N3 nitrogen centers (the N3 centers do not impair visible color, but are always accompanied by the N2 centers which do). Brown, green, or yellow diamonds show a band in the green at 504 nm (H3 center), sometimes accompanied by two additional weak bands at 537 nm and 495 nm (H4 center, a large complex presumably involving 4 substitutional nitrogen atoms and 2 lattice vacancies).\n",
"Type II diamonds have almost no nitrogen impurities in them. Their coloration is due to structural anomalies caused by Plastic Deformation during crystal growth. The intense pressure changes the lattice structure of diamonds and leads to the formation of pink, red, and brown diamonds. Only one of the 66 largest diamonds in the world is pink. When Ben Affleck gave Jennifer Lopez a pink diamond solitaire engagement ring, traffic to Web sites that sold pink diamonds increased by around 300 to 400 percent.\n",
"Because the arrangement of atoms in diamond is extremely rigid, few types of impurity can contaminate it (two exceptions being boron and nitrogen). Small numbers of defects or impurities (about one per million of lattice atoms) color diamond blue (boron), yellow (nitrogen), brown (defects), green (radiation exposure), purple, pink, orange or red. Diamond also has relatively high optical dispersion (ability to disperse light of different colors).\n",
"Nitrogen is by far the most common impurity found in gem diamonds and is responsible for the yellow and brown color in diamonds. Boron is responsible for the blue color. Color in diamond has two additional sources: irradiation (usually by alpha particles), that causes the color in green diamonds, and plastic deformation of the diamond crystal lattice. Plastic deformation is the cause of color in some brown and perhaps pink and red diamonds. In order of increasing rarity, yellow diamond is followed by brown, colorless, then by blue, green, black, pink, orange, purple, and red. \"Black\", or Carbonado, diamonds are not truly black, but rather contain numerous dark inclusions that give the gems their dark appearance. Colored diamonds contain impurities or structural defects that cause the coloration, while pure or nearly pure diamonds are transparent and colorless. Most diamond impurities replace a carbon atom in the crystal lattice, known as a carbon flaw. The most common impurity, nitrogen, causes a slight to intense yellow coloration depending upon the type and concentration of nitrogen present. The Gemological Institute of America (GIA) classifies low saturation yellow and brown diamonds as diamonds in the \"normal color range\", and applies a grading scale from \"D\" (colorless) to \"Z\" (light yellow). Diamonds of a different color, such as blue, are called \"fancy colored\" diamonds and fall under a different grading scale.\n",
"Carbonado, commonly known as the \"black diamond\", is the toughest form of natural diamond. It is an impure form of polycrystalline diamond consisting of diamond, graphite, and amorphous carbon. It is found primarily in alluvial deposits in the Central African Republic and in Brazil. Its natural colour is black or dark grey, and it is more porous than other diamonds.\n",
"Diamonds are made from carbon, usually graphite. Nevertheless, while a diamond is being formed, it may not totally crystallize, leading to the presence of small dots of black carbon. These black spots have been classified to be those of graphite, pyrrhotite and pentlandite. These surface flaws resemble a small black dot and may affect the clarity of the stone depending on the size of the imperfection. The occurrence of this kind of flaw is less common in diamonds compared to pinpoint inclusions. Carbons are usually seen in white or blue-white stones. Carbons are not commonly found in diamonds of poorer colors. Within the trade, these are called \"carbon spots\" and may be cleavage cracks which have developed through uneven heating or a blow. Cleavage cracks often appear to be dark or black in normal lighting conditions because of light reflection.\n"
] |
credit score lookup. why does it impede your credit score? seems like a basic, no hassle thing to find like checking your bank account.
|
It's like asking "am I cool?" Asking the question automatically makes you less cool.
More seriously: checking your own score is a "soft inquiry" and does not affect it at all. What can lower your score is a "hard inquiry", which happens when a lender pulls your report because you applied for credit. Many credit applications in a short window can signal financial distress, so scoring models dock a few points for them.
|
[
"Credit scores are compiled from information sources relating to credit, such as number of credit accounts held, balances on each account, dates of collection activity, and so on. Credit scores do not measure any financial or personal activity that is not related to credit, and identity fraud that does not involve credit will not appear on your credit report or affect your credit score. Credit scores and the credit scoring system are also very predictable—there are specific steps you follow to improve your credit score, dispute errors in credit reports, etc.\n",
"It is very difficult for a consumer to know in advance whether they have a high enough credit score to be accepted for credit with a given lender. This situation is due to the complexity and structure of credit scoring, which differs from one lender to another.\n",
"Lenders need not reveal their credit score head, nor need they reveal the minimum credit score required for the applicant to be accepted. Owing only to this lack of information to the consumer, it is impossible for him or her to know in advance if they will pass a lender's credit scoring requirements. However, it may still be useful for consumers to gauge their chances of being successful with their credit or loan applications by checking their credit score prior to applying.\n",
"Credit scores are designed to measure the risk of default by taking into account various factors in a person's financial history. Although the exact formulas for calculating credit scores are secret, FICO has disclosed the following components:\n",
"In the United States, a credit score is a number based on a statistical analysis of a person's credit files, that in theory represents the creditworthiness of that person, which is the likelihood that people will pay their bills. A credit score is primarily based on credit report information, typically from one of the three major credit bureaus: Experian, TransUnion, and Equifax. Income and employment history (or lack thereof) are not considered by the major credit bureaus when calculating credit scores.\n",
"A typical mistaken belief about credit scoring is that the only trait that matters is whether you have actually made payments on time as well as satisfied your monetary obligations in a prompt way. While payment background is essential, however it still just composes just over one-third of the credit rating score. Furthermore, the repayment background is only shown in your credit history.\n",
"Several factors affect individual's credit scores. One factor is the amount an individual borrowed as compared to the amount of credit available to the individual. As an individual borrows, or leverages, more money, the individual's credit score decreases.\n"
] |
How , and how often, did American GIs clean their rifles in WWII?
|
You might be best served by posting this in r/guns.
|
[
"Before and during World War II, stored rifles were reconditioned for use as reserve, training and Lend-Lease weapons; these rifles are identified by having refinished metal (sandblasted and Parkerized) and sometimes replacement wood (often birch). Some of these rifles were reconditioned with new bolts manufactured by the United Shoe Machinery Company and stamped USMC leading to the mistaken impression these were United States Marine Corps rifles. Many were bought by the United Kingdom through the British Purchasing Commission for use by the Home Guard; 615,000 arrived in Britain in the summer of 1940, followed by a further 119,000 in 1941. These were prominently marked with a red paint stripe around the stock to avoid confusion with the earlier P14 that used the British .303 round. Others were supplied to the Nationalist Chinese forces, to indigenous forces in the China-Burma-India theatre, to Filipino soldiers under the Philippine Army and Constabulary units and the local guerrilla forces and to the Free French Army, which can occasionally be seen in wartime photographs. The M1917 was also issued to the Local Defence Force of the Irish Army during World War II, these were part-time soldiers akin to the British Home Guard. In an ironic reversal of names, in Irish service the M1917 was often referred to as the \"Springfield\"; presumably since an \"Enfield\" rifle was assumed to be the standard Irish MkIII Short Magazine Lee–Enfield, while \"Springfield\" was known to be an American military arsenal.\n",
"During World War II, Camillus shipped more than 13 million knives of various styles to the Allied troops. In 1942, U.S. Marine Corps officers Colonel John M. Davis and Major Howard E. America working in conjunction with cutlery technicians at Camillus developed the KA-BAR Fighting Utility Knife. After extensive trials, the KA-BAR prototype was recommended for adoption, and Camillus was awarded the first contract to produce the KA-BAR for the Marine Corps. Camillus made more KA-BARs than any other knife manufacturer producing the model during World War II. During the war, Camillus also made the M3 fighting knives, the M4 bayonets and many other utility knives for U.S. forces, including machetes, multi-blade utility knives, TL-29 Signal Corps pocket knives for signalmen, electrician's mates, and linesmen, and combination knife/marlinspike pocket knives for use by the U.S. Navy in cutting and splicing lines.\n",
"P-38s are no longer used for individual rations by the United States Armed Forces, as canned C-rations were replaced by soft-pack MREs in the 1980s. They are, however, included with United States military \"Tray Rations\" (canned bulk meals). They are also still seen in disaster recovery efforts and have been handed out alongside canned food by rescue organizations, both in America and abroad in Afghanistan. The original US-contract P-38 can openers were manufactured by J. W. Speaker Corp. (stamped \"US Speaker\") and by Washburn Corp. (marked \"US Androck\"), they were later made by Mallin Hardware (now defunct) of Shelby, Ohio and were variously stamped \"US Mallin Shelby O.\" or \"U.S. Shelby Co.\"\n",
"Ongoing military utilization began during World War II, and includes covering the muzzles of rifle barrels to prevent fouling, the waterproofing of firing assemblies in underwater demolitions, and storage of corrosive materials and garrotes by paramilitary agencies.\n",
"There was a contract from the government to Remington for 10,000 .22 target rifles in 1940. During World War II, 513T rifle were used by the Army for training purposes. This included issue to DCM affiliated clubs for training juniors, and to ROTC units. Those rifles that were purchased by the Army were stamped \"U.S. PROPERTY\" on the barrel and the receiver.\n",
"The United States Marine Corps purchased 373 Model 70 rifles in May, 1942. Although the Marine Corps officially used only the M1 Garand and the M1903 Springfield as sniper rifles during the Second World War, \"many Winchester Model 70s showed up at training camps and in actual field use during the Pacific campaign.\" These rifles had 24-inch shorter barrels chambered for .30-06 Springfield. They were serial numbered in the 41000 to 50000 range and were fitted with leaf sights and checkered stocks with steel butt plates, one-inch sling swivels, and leather slings. It has been reported that some of these rifles were equipped with 8X Unertl telescopic sights for limited unofficial use as sniper weapons on Guadalcanal and during the Korean War. Many of the surviving rifles, after reconditioning with heavier Douglas barrels and new stocks between 1956 and 1963 at the Marine Corps match rebuild shop in Albany, Georgia, were fitted with 8× Unertl sights from M1903A1 sniper rifles. The reconditioned rifles were used in competitive shooting matches; and the United States Army purchased approximately 200 new Model 70 National Match Rifles with medium heavy barrels for match use between 1954 and 1957. Many of the reconditioned Marine Corps match rifles were used by Marine Corps snipers during the early years of the Vietnam war with M72 match ammunition loaded with 173-grain boat-tailed bullets. A smaller number of the Army's Model 70 rifles also saw combat use by Army snipers; and some were equipped with silencers for covert operations in Southeast Asia. These Model 70 rifles never achieved the status of a standard military weapon; but were used until replaced by the Remington Model 700 series bolt-action rifles which became the basis for the M40 series sniper rifle.\n",
"During the Second World War, the British government also contracted with Canadian and US manufacturers (notably Long Branch and Savage) to produce the No. 4 Mk I* rifle. US-manufactured rifles supplied under the Lend Lease program were marked US PROPERTY on the left side of the receiver. Canada's Small Arms Limited at Long Branch made over 900,000. Many of these equipped the Canadian Army and many were supplied to the UK and New Zealand. Over a million No. 4 rifles were built by Stevens-Savage in the United States for the UK between 1941 and 1944 and all were originally marked \"U.S. PROPERTY\". Canada and the United States manufactured both the No. 4 MK. I and the simplified No. 4 MK. I*. The UK and Canada converted about 26,000 No. 4 rifles to sniper equipment.\n"
] |
how are car batteries able to be charged up with a jump start, if car batteries use chemicals for energy?
|
The jump start doesn't charge the battery; it just supplies enough current from the donor car's battery to turn the starter motor and get the engine running.
Once the engine is running, the alternator recharges the battery.
|
[
"Car batteries are most likely to explode when a short-circuit generates very large currents. Such batteries produce hydrogen, which is very explosive, when they are overcharged (because of electrolysis of the water in the electrolyte). During normal use, the amount of overcharging is usually very small and generates little hydrogen, which dissipates quickly. However, when \"jump starting\" a car, the high current can cause the rapid release of large volumes of hydrogen, which can be ignited explosively by a nearby spark, e.g. when disconnecting a jumper cable.\n",
"Batteries convert chemical energy directly to electrical energy. In many cases, the electrical energy released is the difference in the cohesive or bond energies of the metals, oxides, or molecules undergoing the electrochemical reaction. For instance, energy can be stored in Zn or Li, which are high-energy metals because they are not stabilized by d-electron bonding, unlike transition metals. Batteries are designed such that the energetically favorable redox reaction can occur only if electrons move through the external part of the circuit.\n",
"Several companies have begun making devices that charge batteries based on human motions. One example, made by Tremont Electric, consists of a magnet held between two springs that can charge a battery as the device is moved up and down, such as when walking. Such products have not yet achieved significant commercial success.\n",
"Operation of a lead-acid battery may, in case of overcharge, produce flammable hydrogen gas by electrolysis of water inside the battery. Jump start procedures are usually found in the vehicle owner's manual. The recommended sequence of connections is intended to reduce the chance of accidentally shorting the good battery or igniting hydrogen gas. Owner's manuals will show the preferred locations for connection of jumper cables; for example, some vehicles have the battery mounted under a seat, or may have a jumper terminal in the engine compartment.\n",
"The primary fire-danger with lead-acid batteries occurs during over-charging when hydrogen gas is produced. This danger is easily controlled by limiting the available charge voltage, and ensuring ventilation is present during charging to vent any excess hydrogen gas. A secondary danger exists when broken plates inside the battery short out the battery, or reconnect inside the battery causing an internal spark, igniting the hydrogen and oxygen generated inside the battery during very fast discharge. \n",
"BULLET::::- Battery Charge Mode: when activating, whether the vehicle is in motion or at a standstill, the engine will generate electricity to be fed into the battery pack, forcing the vehicle to operate in hybrid mode. For example, if the engine is idling and the vehicle is not moving, selecting this mode will replenish a low energy level within the battery pack back up to 80% fully charged in approximately 40 minutes.\n",
"The electrical energy produced by a discharging lead–acid battery can be attributed to the energy released when the strong chemical bonds of water (HO) molecules are formed from H ions of the acid and O ions of PbO. Conversely, during charging the battery acts as a water-splitting device, and in the charged state the chemical energy of the battery is stored in the potential difference between the pure lead at the negative side and the PbO2 on the positive side, plus the Sulphuric Acid in aqueous condition.\n"
] |
Are there any organisms that live their entire existence in the air?
|
There are living bacteria in clouds. I don't think any are native to clouds, but I wouldn't be surprised if a fair number spend an entire generation (the duration of a bacterium's "life" is a bit hard to define) suspended in the air.
|
[
"Besides providing locomotion opportunities for winged animals and a conduit for the dispersal of pollen grains, spores and seeds, the atmosphere can be considered to be a habitat in its own right. There are metabolically active microbes present that actively reproduce and spend their whole existence airborne, with hundreds of thousands of individual organisms estimated to be present in a cubic meter of air. The airborne microbial community may be as diverse as that found in soil or other terrestrial environments, however these organisms are not evenly distributed, their densities varying spatially with altitude and environmental conditions. Aerobiology has been little studied, but there is evidence of nitrogen fixation in clouds, and less clear evidence of carbon cycling, both facilitated by microbial activity.\n",
"Smaller creatures, radial in design, seem to fill the ecological niches filled by insects on Earth. There are also mentions of airborne lifeforms, similar in appearance to jellyfish, that float through the atmosphere, supported by gas-filled sacs, that descend from above to leech blood from larger, moribund lifeforms. The gas is mostly hydrogen, and will explode if exposed to fire. \n",
"Smaller creatures, radial in design, seem to fill the ecological niches filled by insects on Earth. There are also mentions of airborne lifeforms, similar in appearance to jellyfish, that float through the atmosphere, supported by gas-filled sacs, that descend from above to leech blood from larger, moribund lifeforms. The gas is mostly hydrogen, and will explode if exposed to fire.\n",
"The aeroplankton comprises numerous microbes, including viruses, about 1000 different species of bacteria, around 40,000 varieties of fungi, and hundreds of species of protists, algae, mosses and liverworts that live some part of their life cycle as aeroplankton, often as spores, pollen, and wind-scattered seeds. \n",
"A large number of insects live either part or the whole of their lives underwater. In many of the more primitive orders of insect, the immature stages are spent in an aquatic environment. Some groups of insects, like certain water beetles, have aquatic adults as well.\n",
"An aerobic organism or aerobe is an organism that can survive and grow in an oxygenated environment. In contrast, an anaerobic organism (anaerobe) is any organism that does not require oxygen for growth. Some anaerobes react negatively or even die if oxygen is present.\n",
"At some 1.3 million described species, insects account for more than two-thirds of all known organisms, date back some 400 million years, and have many kinds of interactions with humans and other forms of life on earth.\n"
] |
lightspeed
|
The absolute speed limit of the universe: light in a vacuum travels at exactly 299,792,458 meters per second. It is not merely accurately measured, it is exact by definition — since 1983 the meter has been defined as the distance light travels in 1/299,792,458 of a second.
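Since the value is exact, light travel times are a simple division. A quick sketch (the distances below are rough, rounded figures, not part of the original answer):

```python
C = 299_792_458  # speed of light in a vacuum, m/s (exact by definition)

def travel_time(distance_m: float) -> float:
    """Seconds for light to cover the given distance."""
    return distance_m / C

EARTH_MOON = 3.844e8   # average Earth-Moon distance, meters (approx.)
EARTH_SUN = 1.496e11   # average Earth-Sun distance, meters (approx.)

print(f"To the Moon: {travel_time(EARTH_MOON):.2f} s")       # about 1.3 s
print(f"From the Sun: {travel_time(EARTH_SUN) / 60:.1f} min")  # about 8.3 min
```

This is why we say the Moon is about a light-second away and the Sun about eight light-minutes.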
|
[
"Litespeed uses 6/4 titanium, which is an alloy of titanium with 6 percent aluminum and 4 percent vanadium, instead of the more-common 3/2.5 titanium. It is more difficult to work with, but has a better strength to weight ratio than other available alloys. It was initially not available as tubes, so Litespeed bought thin plates and cold-rolled and welded their own tubes.\n",
"BULLET::::- \"Lightspeed\": High-fantasy space opera, inspired by Star Trek and Star Wars, as well as a dozen other sci-fi settings, with two support books. (The company and the product have the same name, as Lightspeed is the only product Lightspeed makes.)\n",
"\"Lightspeed\" was founded and run as a science fiction magazine by publisher Sean Wallace of Prime Books with John Joseph Adams as editor. Wallace also published \"Lightspeed\"'s sister companion \"Fantasy Magazine\"; Adams came on as editor of \"Fantasy Magazine\" with the March 2011 issue. \"Lightspeed\" became an SFWA-qualifying market in July of 2011.\n",
"Lightspeed is an American online fantasy and science fiction magazine edited and published by John Joseph Adams. The first issue was published in June 2010 and it has maintained a regular monthly schedule since. The magazine currently publishes four original stories and four reprints in every issue, in addition to interviews with the authors and other nonfiction. All of the content published in each issue is available for purchase as an ebook and for free on the magazine's website. \"Lightspeed\" also makes selected stories available as a free podcast, produced by Audie Award-winning editor Stefan Rudnicki.\n",
"Lightspeed is a video game developed and released by MicroProse in 1990. It features a space flight simulator game and action game elements with an emphasis on strategy and exploration. The box describes the title as an \"Interstellar Action and Adventure\" game. The game features space exploration, trade, combat and diplomacy in the same vein as 4X s such as \"Master of Orion\". \"Lightspeed\", unlike the popular series of turn-based strategy games, plays out in real-time.\n",
"\"Lightspeed\" is a song by Dev from her first studio album, \"The Night the Sun Came Up\". It was released with a music video on November 22, 2011 as a promotional single. It's primarily a futuristic hip-hop song with the presence of panflute sound, articulating basses and a coughing effect after the chorus.\n",
"Lightspeed is a point-of-sale and e-commerce software provider based in Montreal, Quebec, Canada. Lightspeed provides small and medium sized retail and restaurant businesses with point of sale solutions.\n"
] |
Where does (more) space come from?
|
It's more like the space we already have is stretching than new space being added from somewhere else. Expansion means the distances between far-apart objects grow over time; it doesn't need a reservoir of "new" space to draw from.
|
[
"BULLET::::- \"Ākāśa\" (Space) – Space is a substance that accommodates the living souls, the matter, the principle of motion, the principle of rest and time. It is all-pervading, infinite and made of infinite space-points.\n",
"Space is any conducive area that an artist provides for a particular purpose. Space includes the background, foreground and middle ground, and refers to the distances or area(s) around, between, and within things. There are two kinds of space: negative space and positive space. Negative space is the area in between, around, through or within an object. Positive spaces are the areas that are occupied by an object and/or form.\n",
"BULLET::::- Ākāśa (Space) – Space is a substance that accommodates souls, matter, the principle of motion, the principle of rest, and time. It is all-pervading, infinite and made of infinite space-points.\n",
"Space provides room to all other substances of the universe. The characteristic of space is to give room to or accommodate the other substances. The special feature of space is that it is not restricted to the universe like other substances but extends beyond the universe to the non-universe. \n",
"Golas posits there is no such thing as empty space. What is called space is actually a vital \"substance\" generated by living beings who are highly intelligent, identical and equal. They are quite tangible yet they are invisible to our senses and our instruments except in their effect on energy/matter. Golas suggests these beings generate a strong expansive force which pressurizes space thereby creating gravity since the pressure of space pushes us toward the Earth. Therefore, space is far denser and more forceful than the thinly spread arrays of atomic particles and photons we perceive as materials and energies; space is substantial and energy/matter is ghostly. This model offers a solution to the problem of “The Missing Matter”. \n",
"Space is the boundless three-dimensional extent in which objects and events have relative position and direction. Physical space is often conceived in three linear dimensions, although modern physicists usually consider it, with time, to be part of a boundless four-dimensional continuum known as spacetime. The concept of space is considered to be of fundamental importance to an understanding of the physical universe. However, disagreement continues between philosophers over whether it is itself an entity, a relationship between entities, or part of a conceptual framework.\n",
"In short, \"space\" is the social space in which we live and create relationships with other people, societies and surroundings. Space is an outcome of the hard and continuous work of building up and maintaining collectives by bringing different things into alignments. All kinds of different spaces can and therefore do exist which may or may not relate to each other. Thus, through space, we can understand more about social action.\n"
] |
I mostly know the "Magic Bullet" theory of the JFK assassination from the Seinfeld parody. What is the conspiracy theorists' claim, and why is it wrong?
|
The magic bullet theory is, indeed, the claim that the trajectory of the bullet was impossible, requiring several turns in mid air, and thus that there was a second gunman.
In reality the magic bullet theory is based on some false premises, and itself ignores some of the critical evidence. As in, if the magic bullet was fired by a second gunman we are short one bullet hole in the car...
Note, the 'magic bullet' was the 2nd (of 3) Oswald fired. The 1st missed cleanly. The 3rd was the head shot that killed Kennedy.
Also, given the range and circumstances, this did not require nor did Oswald demonstrate unusual marksmanship.
Some issues-
Kennedy was shot from behind, so why did his head jerk *backwards*? This is a subtle effect of physics; Penn and Teller (among others) recreated it exactly with a melon. Not an issue.
Bullet trajectory- the Magic Bullet Theory assumes several things that were not, in fact, correct.
The car they were in was not a normal model. It was specifically modified for parades, to show off the important passenger.
As a consequence-
A) Kennedy and Connally were not aligned front to back. Connally was seated closer to the center of the car.
B) Kennedy and Connally were not at the same height. Kennedy's seat was higher, so the crowds could get a better look.
C) Kennedy was not sitting upright at the time. He had leaned forward to talk to Connally ('was that a gunshot?').
D) Connally was not facing forward at the time; he had twisted around to listen to Kennedy.
As a consequence of all this, there was indeed a straight line through Kennedy and Connally, and thus no need for mid-air turns and no actual 'magic bullet'.
Why was the bullet not deformed? Until it ended up in Connally's wrist, it passed entirely through soft tissue. And the bullet was, in fact, deformed.
Kennedy's was the most investigated murder in world history, and the committee doing the investigation was *extremely* thorough. There was no conspiracy and no second gunman. There was just a lone man with an opportunity.
|
[
"According to author John C. McAdams, \"[t]he greatest and grandest of all conspiracy theories is the Kennedy assassination conspiracy theory.\" Others have often referred to it as \"the mother of all conspiracies\". The number of books written about the assassination of Kennedy has been estimated to be between 1,000 and 2,000. According to Vincent Bugliosi, 95% of those books are \"pro-conspiracy and anti-Warren Commission\".\n",
"Belzer believes there was a conspiracy to assassinate President John F. Kennedy and has written four books discussing conspiracy theories: \"UFOs, JFK, and Elvis: Conspiracies You Don’t Have to Be Crazy to Believe\"; \"Dead Wrong: Straight Facts on the Country’s Most Controversial Cover-Ups\"; \"Hit List: An In-Depth Investigation into the Mysterious Deaths of Witnesses to the JFK Assassination\"; and \"Someone Is Hiding Something: What Happened to Malaysia Airlines Flight 370?\" \"Dead Wrong\" and \"Hit List\" were written with journalist David Wayne and reached \"The New York Times\" Best Seller list. \"Someone Is Hiding Something\" was also written with David Wayne as well as radio talk show host George Noory. Belzer's long-time character, John Munch, is also a believer in conspiracy theories, including the JFK assassination.\n",
"Today, there are many conspiracy theories concerning the assassination of John F. Kennedy in 1963. Vincent Bugliosi estimates that over 1,000 books have been written about the Kennedy assassination, at least ninety percent of which are works supporting the view that there was a conspiracy. As a result of this, the Kennedy assassination has been described as \"the mother of all conspiracies\". The countless individuals and organizations that have been accused of involvement in the Kennedy assassination include the CIA, the Mafia, sitting Vice President Lyndon B. Johnson, Cuban Prime Minister Fidel Castro, the KGB, or even some combination thereof. It is also frequently asserted that the United States federal government intentionally covered up crucial information in the aftermath of the assassination to prevent the conspiracy from being discovered.\n",
"Flemming was inspired to make a film about a contemporary assassination that grabbed the public attention after wondering what would happen if a Kennedy-style assassination happened during modern times. Through his research on the Kennedy assassination, he became convinced that there was no conspiracy. Flemming himself has no animosity toward Bill Gates, and used many Microsoft products during the making of \"Nothing So Strange\".\n",
"Most pro- and anti-conspiracy theorists believe that the single-bullet theory is essential to the Warren Commission's conclusion about how Lee Harvey Oswald acted alone. The reason for this is timing: if, as the Warren Commission found, President Kennedy was wounded some time between frames 210 and 225 of the Zapruder film, and Governor Connally was wounded in the back/chest no later than frame 240, there would not have been enough time between the wounding of the two men for Oswald to have fired two shots from his bolt-action rifle. FBI marksmen, who test-fired the rifle for the Warren Commission, concluded that the \"minimum time for getting off two successive well-aimed shots on the rifle is approximately 2 and a quarter seconds\" or 41 to 42 Zapruder frames.\n",
"BULLET::::- \"Who Shot JFK? : A Guide to the Major Conspiracy Theories\" (1993) by Bob Callahan and Mark Zingarelli explores some of the more obscure theories regarding JFK's murder, such as \"The Coca-Cola Theory\". According to this theory, suggested by the editor of an organic gardening magazine, Oswald killed JFK due to mental impairment stemming from an addiction to refined sugar, as evidenced by his need for his favorite beverage immediately after the assassination. .\n",
"Other conspiracy theorists have tried to connect the shooting to references in popular culture. Prison Planet, a website owned by British conspiracy theorist Paul Joseph Watson, mentioned that Newtown-based author Suzanne Collins wrote \"The Hunger Games\" books, in which 22 children are \"ritualistically\" killed, while 20 children were killed in the shooting. Others pointed out that \"Sandy Hook\" can be seen on a map in the 2012 Batman film \"The Dark Knight Rises\" despite Sandy Hook also being the name of the New Jersey peninsula just south of New York Harbor. This is what some conspiracy theorists refer to as predictive programming.\n"
] |
renouncing citizenship
|
1. It sometimes has benefits. Some countries don't allow for dual citizenship, so you must renounce your old one to get a new one. In some cases having a foreign citizenship can bar you from certain jobs, especially dealing with secret government information.
2. Usually yes. Most countries won't allow you to become a stateless person.
3. You lose all the privileges associated with being a citizen. The legal system treats you differently, you no longer have free access to your former country, etc.
4. Benefits are things like avoiding a certain cost of citizenship (like mandatory military service in some countries) or being granted another citizenship that conflicts with your old one.
5. You can, but you would have to file as a non-citizen resident. You would require a visa and other immigration documents to be able to work and live in the US.
6. You know, maybe you could, but probably not. The Constitution requires the President be a "natural born citizen"; whether that still applies to someone who renounced their citizenship would be a tough case for the courts. Congress would probably also seek to bar any non-citizen from being elected, by a Constitutional amendment if necessary.
|
[
"San Francisco attorney Wayne M. Collins helped many people who had renounced citizenship under the provisions of the 1944 Act to have the government's recognition of their renunciations reversed. On Independence Day in 1967, the Department of Justice promulgated regulations which would make it unnecessary for renunciants to resort to the courts; they could instead fill out a standard form to request an administrative determination of the validity of their earlier renunciations. However, not all renunciants sought to regain their citizenship; Joseph Kurihara, for example, chose instead to accept repatriation to Japan, and lived out the rest of his life there.\n",
"The Expatriation Act of 1868 was an act of the 40th United States Congress regarding the right to renounce one's citizenship. It states that \"the right of expatriation is a natural and inherent right of all people\" and \"that any declaration, instruction, opinion, order, or decision of any officers of this government which restricts, impairs, or questions the right of expatriation, is hereby declared inconsistent with the fundamental principles of this government\". Its intent was to counter other countries' claims that U.S. citizens owed them allegiance; it was an explicit rejection of the feudal common law principle of perpetual allegiance.\n",
"Legally, renunciation arises in nationality law with the renunciation of citizenship, a formal process by which the renouncer ceases to hold citizenship with a specific country. A person can also renounce property, as when a person submits a disclaimer of interest in property that has been left to them in a will.\n",
"Renunciation is the voluntary act of relinquishing one's citizenship or nationality. It is the opposite of naturalization–whereby a person voluntarily acquires a citizenship, and is distinct from denaturalization–where the loss of citizenship is forced by a state.\n",
"Relinquishment of United States nationality is the process under federal law by which a U.S. citizen or national voluntarily and intentionally gives up that status and becomes an alien with respect to the United States. In U.S. law, renunciation of United States citizenship is a legal term encompassing two specific procedures for giving up U.S. citizenship by swearing an oath of renunciation before a designated U.S. government official, but there are five other acts by which an American may give up U.S. citizenship as well, and \"relinquishment of citizenship\" rather than \"renunciation of citizenship\" is the term which encompasses all seven acts. explicitly lists all seven potentially expatriating acts by which a U.S. citizen can relinquish U.S. citizenship: naturalization in a foreign country; taking an oath of allegiance to a foreign country; serving in a foreign military; serving in a foreign government; renouncing his or her citizenship in a U.S. embassy or consulate in foreign territory; renouncing his or her citizenship while in U.S. territory; and committing treason, rebellion, or similar crimes, such as seditious conspiracy. Relinquishment is distinct from denaturalization, which in U.S. law refers solely to cancellation of illegally procured naturalization.\n",
"In U.S. law, \"relinquishment\" and \"renunciation\" are terms used in Subchapter III, Part 3 of the Immigration and Nationality Act of 1952 (). The term \"expatriation\" was used in the initial version of that act (, 268) up until the Immigration and Nationality Act Amendments of 1986, when it was replaced by \"relinquishment\". The State Department continues to use both the terms \"expatriation\" and \"relinquishment\", and refers to the acts listed in as \"potentially expatriating acts\". \"Renunciation\" specifically describes two of those acts (swearing an oath of renunciation before a U.S. diplomatic officer outside of the United States, or before a U.S. government official designated by the Attorney-General inside the United States during a state of war), while \"relinquishment\" may refer to any of those acts or to any of those acts besides swearing an oath of renunciation. In contrast, \"denaturalization\" is distinct from expatriation: that term is used solely in Subchapter III, Part 2 of the 1952 INA () to refer to court proceedings for cancellation of fraudulently procured naturalization.\n",
"Additionally, the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 included a provision, the Reed Amendment (), to bar entry to any individual \"who officially renounces United States citizenship and who is determined by the Attorney General to have renounced United States citizenship for the purpose of avoiding taxation by the United States\". However, former IRS lawyers, as well as the Department of Homeland Security, have indicated that the provision is unenforceable because there is no authority for the IRS to share tax return information to enforce it. DHS stated that they can only enforce the Reed Amendment when former U.S. citizens \"affirmatively admit to renouncing their U.S. citizenship for the purpose of avoiding U.S. taxation\", and between 2002 and 2015 they denied entry to only two former U.S. citizens on the basis of the amendment.\n"
] |
i was recently diagnosed with coeliac (gluten allergy) and of course need to change my diet. how does this come about when for the last 30 years or so i was fine?
|
Celiac is an autoimmune disorder. It is not an allergy.
You absorb gluten as well as other nutrients through the villi in your intestines. Because your immune system thinks that gluten is a foreign invader, it will try to destroy it. In the process, it will actually destroy your villi, meaning eventually you not only stop digesting gluten but all other nutrients as well... and that can kill you, puts you at increased risk of intestinal lymphoma, and other fun stuff.
As for what triggers Celiac, it can be any number of things. [Stressful life events may be a cause](_URL_0_). It could just be bad luck!
In some ways, you are very lucky to be diagnosed now. 10 years ago, there was basically nothing in the way of gluten-free options. Today GF is everywhere and you should have no trouble adjusting to a GF lifestyle.
If you want advice, recipes, or support, I am happy to share with you. Just send me a PM :)
|
[
"Non-coeliac gluten sensitivity (NCGS) is described as a condition of multiple symptoms that improves when switching to a gluten-free diet, after coeliac disease and wheat allergy are excluded. People with NCGS may develop gastrointestinal symptoms, which resemble those of irritable bowel syndrome (IBS) or a variety of nongastrointestinal symptoms.\n",
"After exclusion of coeliac disease and wheat allergy, the subsequent step for diagnosis and treatment of NCGS is to start a strict gluten-free diet to assess if symptoms improve or resolve completely. This may occur within days to weeks of starting a GFD, but improvement may also be due to a non-specific, placebo response. Recommendations may resemble those for coeliac disease, for the diet to be strict and maintained, with no transgression. The degree of gluten cross contamination tolerated by people with NCGS is not clear but there is some evidence that they can present with symptoms even after consumption of small amounts. It is not yet known whether NCGS is a permanent or a transient condition. A trial of gluten reintroduction to observe any reaction after 1–2 years of strict gluten-free diet might be performed.\n",
"While coeliac disease is caused by a reaction to wheat proteins, it is not the same as a wheat allergy. Other diseases triggered by eating gluten are non-coeliac gluten sensitivity, (estimated to affect 0.5% to 13% of the general population), gluten ataxia and dermatitis herpetiformis.\n",
"Non-celiac gluten sensitivity (NCGS) is described as a condition of multiple symptoms that improves when switching to a gluten-free diet, after celiac disease and wheat allergy are excluded. Recognized since 2010, it is included among gluten-related disorders. Its pathogenesis is not yet well understood, but the activation of the innate immune system, the direct negative effects of gluten and probably other wheat components, are implicated.\n",
"Coeliac disease is caused by a reaction to gluten, a group of various proteins found in wheat and in other grains such as barley and rye. Moderate quantities of oats, free of contamination with other gluten-containing grains, are usually tolerated. The occurrence of problems may depend on the variety of oat. It occurs in people who are genetically predisposed. Upon exposure to gluten, an abnormal immune response may lead to the production of several different autoantibodies that can affect a number of different organs. In the small bowel, this causes an inflammatory reaction and may produce shortening of the villi lining the small intestine (villous atrophy). This affects the absorption of nutrients, frequently leading to anaemia.\n",
"BULLET::::- Coeliac disease. Non-gastrointestinal symptoms of coeliac disease may include disorders of fertility, such as delayed menarche, amenorrea, infertility or early menopause; and pregnancy complications, such as intrauterine growth restriction (IUGR), small for gestational age (SGA) babies, recurrent abortions, preterm deliveries or low birth weight (LBW) babies. Nevertheless, gluten-free diet reduces the risk. Some authors suggest that physicians should investigate the presence of undiagnosed coeliac disease in women with unexplained infertility, recurrent miscarriage or IUGR.\n",
"Non-celiac gluten sensitivity (NCGS), or gluten sensitivity (GS), is a syndrome in which people develop a variety of intestinal and/or extraintestinal symptoms that improve when gluten is removed from the diet, after coeliac disease and wheat allergy are excluded. NCGS, which is possibly immune-mediated, now appears to be more common than coeliac disease, with a prevalence estimated to be 6–10 times higher.\n"
] |
What would the world be like if the Planck Constant were large enough to experience "quantum weirdness" at a macroscopic scale?
|
Two points:
It may sound pedantic, but you cannot meaningfully imagine changing fundamental dimensionful constants like hbar or c. Their numeric value is meaningless in itself, being just a property of your system of units. All speeds in the universe are proportional to c, and all quantities with units of angular momentum are proportional to hbar. So "changing" either of those two has no effect - not even their value in SI units changes, as the units get rescaled accordingly.
What the title of the game really stands for is that they're imagining stretching your units so as to make c have a small numeric value. Essentially you're not making c small, you're just making the player much faster than a human would be, so increasing the ratio (player walking speed)/c, which is a dimensionless ratio and therefore meaningful. Likewise, in your hypothetical simulation you'd stretch units so as to make hbar's value in those units large - you'd be shrinking down the user, not making hbar big.
Sorry for this correction, but it's actually a subtle and very misunderstood point.
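To make the point concrete, here is a minimal sketch: rescaling the unit system changes the numeric values of both the walking speed and c, but leaves the dimensionless ratio untouched. The walking speed and the rescale factor are illustrative assumptions, not measured values.

```python
# Rescaling units changes the numeric value of c, but not the
# dimensionless ratio (walking speed)/c that the physics depends on.
C_SI = 299_792_458.0   # speed of light, metres per second
WALK_SI = 1.4          # typical walking speed, m/s (assumed)

def ratio(speed, c):
    """Dimensionless ratio speed/c - what relativistic effects depend on."""
    return speed / c

# Same quantities in a rescaled unit system where the length unit is
# 10 m instead of 1 m: every speed's numeric value shrinks by 10x.
scale = 10.0
c_rescaled = C_SI / scale
walk_rescaled = WALK_SI / scale

# The ratio is unchanged by the choice of units.
assert abs(ratio(WALK_SI, C_SI) - ratio(walk_rescaled, c_rescaled)) < 1e-18
print(ratio(WALK_SI, C_SI))
```

Only by changing that ratio itself (a faster player, or a "smaller" user relative to hbar) do you get different physics.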
Now, for the second part: quantum systems can always be simulated to arbitrary accuracy by a classical computer, but the computational costs grow impractically large with the size of the system, to the point that systems with what we'd call a small number of components are absolutely outside the possibilities of any plausibly-sized classical computer. So... no, unless you want to omit essential parts of quantum mechanics from your simulation, or you're content with simulating a small system.
Quantum computers wouldn't in principle have this problem.
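A back-of-the-envelope sketch of why the cost blows up: storing the full state vector of n qubits takes 2**n complex amplitudes. Assuming 16 bytes per amplitude (two 8-byte floats), the memory doubles with every added component.

```python
# Memory needed to hold the full state vector of n qubits on a
# classical machine: 2**n complex amplitudes at 16 bytes each.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
# 30 qubits already need 2**34 bytes (16 GiB); 50 qubits need 16 PiB -
# far beyond any plausibly-sized classical computer.
```

This exponential scaling is exactly what a quantum computer sidesteps: the quantum hardware itself holds the state.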
|
[
"Macroscopic quantum phenomena refer to processes showing quantum behavior at the macroscopic scale, rather than at the atomic scale where quantum effects are prevalent. The best-known examples of macroscopic quantum phenomena are superfluidity and superconductivity; other examples include the quantum Hall effect. Since 2000 there has been extensive experimental work on quantum gases, particularly Bose–Einstein Condensates.\n",
"The generalized Schrödinger equation, under certain conditions, can apply to macroscopic scales. This leads to the proposal that quantum-like phenomena need not to be only at quantum scales. In a recent paper, Turner and Nottale proposed new ways to explore the origins of macroscopic quantum coherence in high-temperature superconductivity.\n",
"The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant (the quantum of action) is very small. One can regard the Planck constant to be only relevant to the microscopic scale instead of the macroscopic scale in our everyday experience.\n",
"The term \"Planck scale\" refers to the magnitudes of space, time, energy and other units, below which (or beyond which) the predictions of the Standard Model, quantum field theory and general relativity are no longer reconcilable, and quantum effects of gravity are expected to dominate. This region may be characterized by energies around (the Planck energy), time intervals around (the Planck time) and lengths around (the Planck length). At the Planck scale, current models are not expected to be a useful guide to the cosmos, and physicists have no scientific model to suggest how the physical universe behaves. The best known example is represented by the conditions in the first 10 seconds of our universe after the Big Bang, approximately 13.8 billion years ago.\n",
"Macroscopic scale quantum coherence leads to novel phenomena, the so-called macroscopic quantum phenomena. For instance, the laser, superconductivity and superfluidity are examples of highly coherent quantum systems whose effects are evident at the macroscopic scale. The macroscopic quantum coherence (Off-Diagonal Long-Range Order, ODLRO) for superfluidity, and laser light, is related to first-order (1-body) coherence/ODLRO, while superconductivity is related to second-order coherence/ODLRO. (For fermions, such as electrons, only even orders of coherence/ODLRO are possible.) For bosons, a Bose–Einstein condensate is an example of a system exhibiting macroscopic quantum coherence through a multiple occupied single-particle state. \n",
"Quantum phenomena are generally classified as macroscopic when the quantum states are occupied by a large number of particles (typically Avogadro's number) or the quantum states involved are macroscopic in size (up to km size in superconducting wires).\n",
"Macroscopic quantum phenomena can emerge from coherent states of superfluids and superconductors. Quantum states of motion have been directly observed in a macroscopic mechanical resonator (see quantum machine).\n"
] |
Does the sun get uniformly dense as we get closer to the core?
|
The main constraint is that the Sun has to achieve balance between outward pressure and inward-pointing gravity (hydrostatic equilibrium). If you go through the math, you get that it will be most dense at the center, with density falling smoothly toward the surface rather than being uniform. If you want an image, NASA has a nice one [here](_URL_0_). Note that a real model involves more than what I just said above, but that's sort of the first step in the whole calculation.
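That "first step" can be sketched numerically. Below is a toy integration of hydrostatic equilibrium, dP/dr = -G m(r) ρ / r², with an assumed polytropic equation of state P = K ρ² and illustrative constants - not a real solar model - just to show that the density comes out highest at the center and decreases monotonically outward.

```python
import math

# Toy stellar structure: integrate hydrostatic equilibrium outward from
# the center with an assumed polytropic equation of state P = K * rho**2.
G = 6.674e-11    # gravitational constant, SI
K = 5.0e3        # polytropic constant (arbitrary toy value)
rho_c = 1.5e5    # central density, kg/m^3 (roughly solar-core scale)

def density_profile(dr=1e5, steps=2000):
    rho, m, r = rho_c, 0.0, dr
    P = K * rho ** 2                  # central pressure from the EOS
    profile = [rho_c]
    for _ in range(steps):
        m += 4 * math.pi * r ** 2 * rho * dr   # enclosed mass
        P += -G * m * rho / r ** 2 * dr        # hydrostatic equilibrium
        if P <= 0:                             # reached the surface
            break
        rho = (P / K) ** 0.5                   # invert the EOS
        r += dr
        profile.append(rho)
    return profile

prof = density_profile()
# Density is highest at the center and falls monotonically outward.
assert all(a >= b for a, b in zip(prof, prof[1:]))
```

The real calculation layers on an actual equation of state, energy generation, and energy transport, but the monotonic center-to-surface density drop comes out of this balance alone.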
|
[
"The core of the Sun extends from the center to about 20–25% of the solar radius. It has a density of up to (about 150 times the density of water) and a temperature of close to 15.7 million kelvins (K). By contrast, the Sun's surface temperature is approximately 5,800 K. Recent analysis of SOHO mission data favors a faster rotation rate in the core than in the radiative zone above. Through most of the Sun's life, energy has been produced by nuclear fusion in the core region through a series of nuclear reactions called the p–p (proton–proton) chain; this process converts hydrogen into helium. Only 0.8% of the energy generated in the Sun comes from another sequence of fusion reactions called the CNO cycle, though this proportion is expected to increase as the Sun becomes older.\n",
"The density of the inner core is believed to vary smoothly from about 13.0 kg/L (= g/cm = t/m) at the center to about 12.8 kg/L at the surface. As it happens with other material properties, the density drops suddenly at that surface: the liquid just above the inner core is believed to be significantly less dense, at about 12.1 kg/L. For comparison, the average density in the upper 100 km of the Earth is about 3.4 kg/L.\n",
"The core inside 0.20 of the solar radius contains 34% of the Sun's mass, but only 3.4% of the Sun's volume. Inside the 0.24 solar radius is the core which generates 99% of the fusion power of the Sun. There are two distinct reactions in which four hydrogen nuclei may eventually result in one helium nucleus: the proton-proton chain reaction – which is responsible for most of the Sun's released energy – and the CNO cycle.\n",
"The core is the only region in the Sun that produces an appreciable amount of thermal energy through fusion; 99% of the power is generated within 24% of the Sun's radius, and by 30% of the radius, fusion has stopped nearly entirely. The remainder of the Sun is heated by this energy as it is transferred outwards through many successive layers, finally to the solar photosphere where it escapes into space through radiation (photons) or advection (massive particles).\n",
"The core of the system is formed by a close spectroscopic binary with an angular separation of 3.892 mas, a semimajor axis of , an orbital period of 19.4 days, and an eccentricity of 0.4. The larger member of this pair has 114% of the mass of the Sun, while its companion has 88% of the Sun's mass. Orbiting the pair at an angular separation of 1.422 arcseconds over a period of 164 years, the tertiary component has 52% of the Sun's mass.\n",
"The primary is a massive star, having 9.5 times the mass of the Sun and an age of only 22 million years old. It has about 8.4 times the girth of the Sun. The averaged quadratic field strength of the surface magnetic field is . It is radiating 8,877 times the luminosity of the Sun from its photosphere at an effective temperature of 23,809 K. The estimated rotational velocity of the primary at the equator is ; about 10% of its break-up velocity. However, seismic models suggest the core region is rotating much more rapidly with a rotational velocity of up to , and thus the star is undergoing differential rotation. \n",
"solar radius. It is the hottest part of the Sun and of the Solar System. It has a density of 150 g/cm (150 times the density of liquid water) at the center, and a temperature of 15 million kelvins (15 million degrees Celsius, 27 million degrees Fahrenheit). The core is made of hot, dense plasma (ions and electrons), at a pressure estimated at 265 billion bar (3.84 trillion psi or 26.5 petapascals (PPa)) at the center. Due to fusion, the composition of the solar plasma drops from 68–70% hydrogen by mass at the outer core, to 33% hydrogen at the core/Sun center.\n"
] |
Does a massless particle traveling through a medium experience the passage of time?
|
Photons don't exist outside of time! The popular claim that they do comes from the fact that they don't have a rest frame, which is more an artifact of the way we define reference frames against c than a statement about photons. It's sort of a vacuous statement anyway: one moment is like any other for a photon, a fluctuation of the electric and magnetic fields that continues from when it is emitted to when it strikes something and is absorbed.
|
[
"Aspects of modern physics, such as the hypothetical tachyon particle and certain time-independent aspects of quantum mechanics, may allow particles or information to travel backward in time. Logical objections to macroscopic time travel may not necessarily prevent retrocausality at other scales of interaction. Even if such effects are possible, however, they may not be capable of producing effects different from those that would have resulted from normal causal relationships.\n",
"According to Einstein's general theory of relativity, a freely moving particle traces a history in spacetime that maximises its proper time. This phenomenon is also referred to as the principle of maximal aging, and was described by Taylor and Wheeler as:\n",
"This may occur in one of two ways. In an accelerating frame of reference, the virtual particles may appear to be actual to the accelerating observer; this is known as the Unruh effect. In short, the vacuum of a stationary frame appears, to the accelerated observer, to be a warm gas of actual particles in thermodynamic equilibrium.\n",
"If a particle is moving at a constant velocity in a non-expanding universe free of gravitational fields, any event that occurs in that Universe will eventually be observable by the particle, because the forward light cones from these events intersect the particle's world line. On the other hand, if the particle is accelerating, in some situations light cones from some events never intersect the particle's world line. Under these conditions, an apparent horizon is present in the particle's (accelerating) reference frame, representing a boundary beyond which events are unobservable.\n",
"The only seeming complication is that the orbiting objects are in accelerated motion. An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found which is momentarily comoving with the particle. This frame, the \"momentarily comoving reference frame\" (MCRF), enables application of special relativity to the analysis of accelerated particles. If an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation.\n",
"depicts the thought motion history of a single such particle, which thus moves in and out of the subsystem s three times, each of which results in a transit time, namely the time spent in the subsystem between entrance and exit. The sum of these transit times is the sojourn time of s for that particular particle. If the motions of the particles are looked upon as realizations of one and the same stochastic process it is meaningful to speak of the mean value of this sojourn time. That is, the mean sojourn time of a subsystem is the total time a particle is expected to spend in the subsystem s before leaving the system S for good.\n",
"For example, this occurs with a uniformly accelerated particle. A spacetime diagram of this situation is shown in the figure to the right. As the particle accelerates, it approaches, but never reaches, the speed of light with respect to its original reference frame. On the spacetime diagram, its path is a hyperbola, which asymptotically approaches a 45-degree line (the path of a light ray). An event whose light cone's edge is this asymptote or is farther away than this asymptote can never be observed by the accelerating particle. In the particle's reference frame, there appears to be a boundary behind it from which no signals can escape (an apparent horizon).\n"
] |
Are there other historical instances of the "model minority" phenomenon?
|
Oddly enough, Armenians in the Ottoman Empire were considered the model Christian minority prior to the emergence of the Armenian Question in the late 19th century. They even earned the epithet of *millet-i sadıka*, "the loyal millet"; millet here is an Ottoman term used to denote an ethnic and religious community. The reason for this, as the epithet implies, was the perceived loyalty of Armenians compared to the other Christian minorities.
In the European part of the Ottoman Empire, a rising wave of nationalism was felt relatively shortly after the French Revolution, with the First Serbian Uprising happening in 1804, and the Greeks were the first among the Empire's Christian subjects to gain their own independent state, in 1832. Although the Greek kingdom was restricted to the Peloponnese for most of the 19th century, it opened a Pandora's box which the Ottomans never managed to shut.
For Armenians, however, things progressed far more slowly. The "Armenian Question" didn't become a widely discussed issue until it was mentioned in the Treaty of Berlin in 1878. Even this was the result of external powers' pressure upon the Ottoman government to improve the situation of the Christian subjects in the eastern provinces, and not of an Armenian rebellion. An Armenian bishop at the time even stated that this was the first time the Armenians had made a political, rather than merely religious, appearance. The reasons for the relatively late development of this national identity are complex, but I think it is fair to say that the Armenians, living in the eastern provinces, were somewhat isolated from the events unfolding in the Balkans at the turn of the 19th century.
Source:
Masayuki Ueno (2013). "For the Fatherland and the State": Armenians Negotiate the Tanzimat Reforms.
|
[
"The concept of \"model minority\" is heavily associated with U.S. culture and is not extensively used outside the U.S., though many European countries have concepts of classism that stereotype ethnic groups in a similar manner to model minority.\n",
"Some have described the creation of the model minority theory as partially a response to the emergence of the Civil Rights Movement, when African Americans fought for equal rights and the discontinuation of racial segregation in the United States. In a backlash to the movement, white America presented and used Asian Americans to argue that African Americans could raise up their communities by focusing on education and accepting and conforming to racial segregation and the institutional racism and discrimination of the time period, as Asian Americans have arguably done.\n",
"A model minority is a demographic group (whether based on ethnicity, race or religion) whose members are perceived to achieve a higher degree of socioeconomic success than the population average. This success is typically measured relatively by income, education, low criminality and high family/marital stability.\n",
"Generalized statistics are often cited to back up model minority status such as high educational achievement and a high representation in white-collar professions. A common misconception is that the affected communities usually hold pride in their labeling as the model minority. The model minority stereotype is considered detrimental to relevant minority communities because it is used to justify the exclusion of minorities in the distribution of assistance programs, both public and private, as well as to understate or slight the achievements of individuals within that minority. Furthermore, the idea of the model minority pits minority groups against each other by implying that non-model groups are at fault for falling short of the model minority level of achievement and assimilation. The concept has also been criticized by outlets such as NPR for potentially homogenizing the experiences of Asian Americans on one side and Hispanics & African Americans on the other, despite the different groups experiencing racism in different ways. The model minority stereotype, and the perpetuation of the belief that any minority has the capability to rise economically without assistance, also completely ignores the very different history of Asian Americans and African Americans, and sometimes Hispanics, in the U.S. Beginning with the legalized and widespread slavery of Africans that were kidnapped from Africa, then continuing with Black Codes, Jim Crow, and the prison–industrial complex.\n",
"BULLET::::- \"Political Accommodation and the Ideology of the “Model Minority”: Building a Bridge to White Minority Rule in the 21st Century\", 7 SOUTHERN CALIFORNIA INTERDISCIPLINARY LAW JOURNAL 1 (1998).\n",
"The model minority theory disregards the fact that Asian Americans at the time were also marginalized and racially segregated in America thus they also represented lower economic levels and faced many social issues just as other racial and ethnic minorities. The possible reasons as to why Asian Americans were used by White America as this image of a model minority are that they were viewed as having not been as much of a \"threat\" to White America due to less of a history of political activism in fighting racism (until after the Civil Rights Movement, see Asian American movement), their smaller population, the success of their numerous businesses (nearly all of which were small businesses) in their segregated communities, and the fact that during the time period Chinese, Japanese and Filipino Americans' educational attainment level was meeting the national average equaling Whites in terms of education.\n",
"Some sociologists have criticised the concept of \"minority/majority\", arguing this language excludes or neglects changing or unstable cultural identities, as well as cultural affiliations across national boundaries. As such, the term historically excluded groups (HEGs) is often similarly used to highlight the role of historical oppression and domination, and how this results in the underrepresentation of particular groups in various areas of social life.\n"
] |
During the fall of Germany, was there a population flight from places likely to be taken by the Red Army to places likely to be taken by the Anglo-Americans?
|
After the death of Hitler, the new Reichspräsident, Karl Dönitz, actively moved soldiers from the Eastern Front to the Western Front so that they could surrender to the Western Allies. Civilians, however, were largely left to fend for themselves. Dönitz kept the war going with holding actions to allow as many soldiers as possible to flee west, and so civilians kept dying as the Soviets bombarded cities, towns, etc.
A book on this very topic is "The End" by Ian Kershaw, which covers the final days of the Third Reich.
|
[
"The first exodus of German civilians from the eastern territories was composed of both spontaneous flight and organised evacuation, starting in mid-1944 and continuing until early 1945. Conditions turned chaotic during the winter, when kilometres-long queues of refugees pushed their carts through the snow trying to stay ahead of the advancing Red Army.\n",
"The evacuation started late; the Red Army approached much faster than expected and cut off the territorial connection with other German-held territories by January 26, 1945. Many refugees perished due to Soviet low-flying strafing attacks on the civilians columns, or the extreme cold. However, many managed to flee by land or sea into those parts of Germany captured by the British and Americans. Among the latter were the pastors A. Keleris, J. Pauperas, M. Preikšaitis, O. Stanaitis, A. Trakis, and J. Urdse, who gathered those from the Lithuanian parishes and reorganised the Lithuanian church in the western zones of Allied-occupied Germany.\n",
"As the Red Army advanced towards Germany at the end of World War II, a considerable exodus of German refugees began from the areas near the front lines. Many Germans fled their areas of residence under vague and haphazardly implemented evacuation orders of the Nazi German government in 1943, 1944, and in early 1945, or based on their own decisions to leave in 1945 to 1948. Others remained and were later forced to leave by local authorities. Census figures in 1950 place the total number of ethnic Germans still living in Central and Eastern Europe at approximately 2.6 million, about 12 percent of the pre-war total.\n",
"Fleeing before the advancing Red Army, large numbers of the inhabitants of the German provinces of East Prussia, Silesia, and Pomerania died during the evacuations, some from cold and starvation, some during combat operations. A significant percentage of this death toll, however, occurred when evacuation columns encountered units of the Red Army. Civilians were run over by tanks, shot, or otherwise murdered. Women and young girls were raped and left to die.\n",
"The first mass movement of German civilians in the eastern territories was composed of both spontaneous flight and organized evacuation, starting in the summer of 1944 and continuing through the early spring of 1945. Conditions turned chaotic in the winter, when miles-long queues of refugees pushed their carts through the snow trying to stay ahead of the Red Army. From the Baltic coast, thousands were evacuated by ship in Operation Hannibal. Since February 11, refugees were shipped not only to German ports, but also to German occupied Denmark, based on an order issued by Hitler on 4 February. Of 1,180 ships participating in the evacuation, 135 were lost due to bombs, mines, and torpedoes, an estimated 20,000 died. Between 23 January 1945 and the end of the war, 2,204,477 people, 1,335,585 of them civilians, were transported via the Baltic Sea, up to 250,000 of them to occupied Denmark.\n",
"Gauleiter Erich Koch delayed the evacuation of the German civilian population until the Eastern Front approached the East Prussian border in 1944. The population had been systematically misinformed by \"Endsieg\" Nazi propaganda about the real state of military affairs. As a result, many civilians fleeing westward were overtaken by retreating Wehrmacht units and the rapidly advancing Red Army.\n",
"When the Red Army invaded Germany in 1944, many German civilians suffered from reprisals by Red Army soldiers (see Soviet war crimes). After the war, following the Yalta conference agreements between the Allies, the German populations of East Prussia and Silesia were displaced to the west of the Oder–Neisse line, in what became one of the largest forced migrations of people in world history.\n"
] |
how does self-disappearing ink work?
|
Disappearing ink usually contains a dye (commonly thymolphthalein) that is colorless on its own but turns blue in a basic solution such as sodium hydroxide. Carbon dioxide in the air reacts with the base, converting it to sodium carbonate; once the pH drops below about 10.5, the dye reverts to colorless and the writing "disappears." Sometimes the ink is photosensitive instead, so it fades on exposure to light.
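The fading described above can be sketched as a toy simulation. This is only an illustration of the threshold behavior, not real acid-base chemistry: the starting pH and the per-minute pH drop are arbitrary assumed constants; only the ~10.5 transition point for thymolphthalein comes from the source.

```python
def ink_color(ph):
    """Thymolphthalein is blue above roughly pH 10.5, colorless below."""
    return "blue" if ph > 10.5 else "colorless"

def simulate_fade(start_ph=12.0, ph_drop_per_minute=0.2, minutes=20):
    """Record (minute, pH, color) as CO2 slowly neutralizes the base.

    The linear pH drop is a stand-in for the real reaction kinetics.
    """
    timeline = []
    ph = start_ph
    for minute in range(minutes + 1):
        timeline.append((minute, round(ph, 2), ink_color(ph)))
        ph -= ph_drop_per_minute  # CO2 converts NaOH toward Na2CO3
    return timeline

for minute, ph, color in simulate_fade():
    print(f"t={minute:2d} min  pH={ph:5.2f}  ink is {color}")
```

With these assumed numbers the ink starts blue and turns colorless between minutes 7 and 8, once the pH crosses the 10.5 threshold.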
|
[
"Inks that are visible for a period of time without the intention of being made visible again are called disappearing inks. Disappearing inks typically rely on the chemical reaction between thymolphthalein and a basic substance such as sodium hydroxide. Thymolphthalein, which is normally colorless, turns blue in solution with the base. As the base reacts with carbon dioxide (always present in the air), the pH drops below 10.5 and the color disappears. Pens are now also available that can be erased simply by swiping a special pen over the original text. Disappearing inks have been used in gag squirtguns, for limited-time secret messages, for security reasons on non-reusable passes, for fraudulent purposes, and for dress-making and other crafts where measurement marks are required to disappear.\n",
"Chemical ink erasers break down royal blue ink by disrupting the geometry of the dye molecules in ink so that light is no longer filtered. The molecules are disrupted by sulfite or hydroxide ions binding to the central carbon atoms of the dye. The ink is not destroyed by the erasing process, but is made invisible. It can be transformed back into a visible work with aldehydes.\n",
"When used to remove ink from writings, the writing may appear in reverse on the surface of the blotting paper, a phenomenon which has been used as a plot device in a number of detective stories, such as in the Sherlock Holmes story The Adventure of the Missing Three-Quarter.\n",
"An ink eraser is an instrument used to remove ink from a writing surface, more difficult than removing pencil markings. Older types are a metal scraper, which scrapes the ink off the surface, and an eraser similar to a rubber pencil eraser, but with additional abrasives, such as sand, incorporated. Fibreglass erasers also work by abrasion. These erasers physically remove the ink from the paper. There is some unavoidable damage with most types of paper and ink, where the paper absorbs some ink, but not all.\n",
"Blotting paper, sometimes called bibulous paper, is a highly absorbent type of paper or other material. It is used to absorb an excess of liquid substances (such as ink or oil) from the surface of writing paper or objects. Blotting paper referred to as bibulous paper is mainly used in microscopy to remove excess liquids from the slide before viewing. Blotting paper has also been sold as a cosmetic to aid in the removal of skin oils and makeup.\n",
"Ink tags are a form of retail loss prevention most commonly used by clothing retailers. Special equipment is required to remove the tags from the clothing. When the tags are forcibly removed, one or more glass vials containing permanent ink will break, causing it to spill over the clothing, effectively destroying it. Ink tags fall into the loss prevention category called benefit denial. As the name suggests, an ink tag denies the shoplifter any benefit for his or her efforts. Despite this, shoplifters have found ways around them. Ink tags are most effective if used together with another anti-shoplifting system so that the shoplifter can not use the product or remove the ink tag.\n",
"Spots of ink are dropped onto a piece of paper and the paper is folded in half, so that the ink will smudge and form a mirror reflection in the two halves. The piece of paper is then unfolded so that the ink can dry, after which someone can guess the resemblance of the print to other objects. The inkblots tend to resemble images because of apophenia, the human tendency to see patterns in nature.\n"
] |
competitive eating. how can people eat so much in one sitting? what happens to their stomachs and bodies after eating so much? and why does it seem that so many competitive eaters are very skinny?
|
They are able to eat so much because they prepare. They stretch their stomachs, they practice techniques for speed, etc.
After a competition, it's not unlike how you feel after Thanksgiving. Full, sluggish, tired, maybe even a little nauseous. Just to a greater degree. Most of these people don't vomit after competition. Other than that, you recover pretty easily within a day.
It isn't necessarily that most competitive eaters are skinny so much as that the successful ones tend to be. This is for several reasons. First, your stomach is (supposedly) better able to expand when you don't have a ton of fat packed around it. Second, fit people burn more calories, so if you do a lot of competitions it benefits your health to stay in shape. Third, competitions are exhausting. It may seem like just aggressive eating, but it's tiring, and if you aren't in shape it's hard to keep up that level of activity for 10-12 minutes non-stop.
Source: Former low level competitive eater.
|
[
"The argument that competitive eating can cause weight gain, which may lead to obesity and elevated cholesterol and blood pressure, is common. The potential damage that competitive eating can cause to the human digestive system was the subject of a 2007 study by the University of Pennsylvania School of Medicine. The study observed professional eater Tim Janus, who ate 36 hot dogs in 10 minutes before doctors intervened. It was concluded that through training, Janus' stomach failed to have normal muscle contractions called peristalsis, a function which transfers food from the stomach down the digestive tract.\n",
"Many professional competitive eaters undergo rigorous personal training in order to increase their stomach capacity and eating speed with various foods. Stomach elasticity is usually considered the key to eating success, and competitors commonly train by drinking large amounts of water over a short time to stretch out the stomach. Others combine the consumption of water with large quantities of low calorie foods such as vegetables or salads. Some eaters chew large amounts of gum in order to build jaw strength.\n",
"BULLET::::- Peter the Slow Eater – a man who, as the title suggests, takes his time eating meals much to the frustration of his family, especially his kids whom he will not allow to leave the table \"until everyone has finished eating\". Another scenario has him with two mates in the pub (as a slow drinker) insisting on buying a round when his pint is untouched, and letting everyone else get served before him, much to the frustration of his drinking buddies (who discreetly drink his pint and order two pints for themselves without looking and by the time he gets back to the table they have gone).\n",
"The Great American Eat Off - a show that pits two average eaters against each other to see who can eat the fastest, while raising awareness for charity and then brings in a professional competitive eater to beat the winning time, raises the stakes for competitive eaters by incorporating various challenges and obstacles that would interfere with their speed. Obstacles may include eating with two spoons, eating with no hands, or Interval Eating where the competitive eat is permitted to eat for a limited time and then must rest for a specific time (i.e., eat 20 seconds, rest 40 seconds, eat 20 seconds, rest 40 seconds etc.) until they have completed the designated food. Interval Eating was created by Gail Kasper.\n",
"Competitive eating, or speed eating, is an activity in which participants compete against each other to eat large quantities of food, usually in a short time period. Contests are typically eight to ten minutes long, although some competitions can last up to thirty minutes, with the person consuming the most food being declared the winner. Competitive eating is most popular in the United States, Canada and Japan, where organized professional eating contests often offer prizes, including cash.\n",
"Children who are externally motivated to eat are at a higher risk for obesity. In one study, two groups of children were told to focus on different prompts to eat: either external cues, such as the amount of food on their plate, or internal cues like hunger and satiety. The children who relied on internal cues were more likely to eat when they were hungry and stop when they were full. In contrast, the children who responded to external cues were more likely to ignore or overlook internal cues that indicated that they were full. Children who grow accustomed to relying on external hunger cues and thus eating more than their bodies need because are more likely to gain excess weight.\n",
"The process of digestion is initiated soon after a person has consumed his meals. The gastric juices and enzymes responsible for digestion are stimulated in the mean time. However, if a person walks after eating his dinner, the process of gastric emptying of the meal is accelerated leading to better digestion. This is turn, prevents various stomach complications such as acidity or indigestion that people usually complain after having their meals.\n"
] |